# NewJob - MLX Fine-tuned Vision Language Model ⚡️

**REAL MLX FINE-TUNED WEIGHTS INCLUDED** - This model contains actual fine-tuned adapter weights!
## Model Details

- **Base Model:** `mlx-community/SmolVLM-256M-Instruct-bf16`
- **Training Platform:** VisualAI (MLX-optimized for Apple Silicon)
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation)
- **GPU Type:** MLX (Apple Silicon)
- **Training Job ID:** 1
- **Created:** 2025-06-03 06:51:02
- **Real Weights:** ✅ YES - contains actual fine-tuned MLX adapter weights
- **Adapter Weights:** ✅ Found
## Training Data
This model was fine-tuned on visual brake component data with 3 training examples.
## Usage with REAL Fine-tuned Weights

### Installation

```bash
pip install mlx-vlm
```
### Loading the Fine-tuned Model

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config
from PIL import Image

model_path = "truworthai/Combined-mlx"

try:
    model, processor = load(model_path)
    print("✅ Loaded FINE-TUNED MLX model with learned weights!")
    config = load_config(model_path)
except Exception as e:
    # Fall back to the base model if the fine-tuned checkpoint fails to load.
    print(f"⚠️ Loading fine-tuned model failed, falling back to base: {e}")
    model, processor = load("mlx-community/SmolVLM-256M-Instruct-bf16")
    config = load_config("mlx-community/SmolVLM-256M-Instruct-bf16")
```
### Inference with the Fine-tuned Model

```python
image = Image.open("brake_component.jpg")
question = "What is the OEM part number of this brake component?"

formatted_prompt = apply_chat_template(processor, config, question, num_images=1)

response = generate(
    model,
    processor,
    formatted_prompt,
    [image],
    verbose=False,
    max_tokens=100,
    temp=0.3,
)

print(f"Fine-tuned model response: {response}")
```
## Model Files (REAL WEIGHTS)

This repository contains ACTUAL fine-tuned model weights:

### Core Model Files

- `config.json`: Model configuration
- `model.safetensors` or `model.npz`: Base model weights (if included)
- `adapters.safetensors` or `adapters.npz`: **FINE-TUNED LoRA ADAPTER WEIGHTS** ⚡️
- `adapter_config.json`: Adapter configuration
- `tokenizer.json`: Tokenizer configuration
- `preprocessor_config.json`: Image preprocessing config
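To sanity-check that `adapters.safetensors` really contains fine-tuned tensors, you can list its tensor names by parsing the safetensors header with the standard library alone — no ML dependencies needed. This is a minimal sketch: the demo at the bottom writes a tiny synthetic file so it runs anywhere; in practice, point `list_safetensors_keys` at the repo's actual `adapters.safetensors`.

```python
import json
import struct

def list_safetensors_keys(path):
    """List tensor names in a .safetensors file by reading only its header."""
    with open(path, "rb") as f:
        # The file starts with a little-endian uint64: the JSON header length.
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional non-tensor entry in the header.
    return [name for name in header if name != "__metadata__"]

# Demo with a tiny synthetic file (one hypothetical rank-1 LoRA matrix).
entry = {"lora_A.weight": {"dtype": "F32", "shape": [1, 2], "data_offsets": [0, 8]}}
blob = json.dumps(entry).encode("utf-8")
with open("demo.safetensors", "wb") as f:
    f.write(struct.pack("<Q", len(blob)) + blob + b"\x00" * 8)

print(list_safetensors_keys("demo.safetensors"))  # -> ['lora_A.weight']
```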
### Training Artifacts

- `training_args.json`: Training hyperparameters used
- `trainer_state.json`: Training state and metrics
- `mlx_model_info.json`: Training metadata and learned mappings
- `training_images/`: Reference images from training data (if included)
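The exact schema of these JSON artifacts is not documented here, so a safe way to explore them is to dump whatever top-level keys are present. A small stdlib-only sketch (file names taken from the list above; missing files are simply reported):

```python
import json
from pathlib import Path

def show_artifact(path):
    """Print the top-level keys/values of a training-artifact JSON file."""
    p = Path(path)
    if not p.exists():
        print(f"{path}: not included in this repo")
        return
    for key, value in json.loads(p.read_text()).items():
        print(f"{key}: {value}")

for name in ("training_args.json", "trainer_state.json", "mlx_model_info.json"):
    show_artifact(name)
```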
### Documentation

- `README.md`: This documentation
## ⚡️ Performance Features

- ✅ **Real MLX Weights:** Contains actual fine-tuned adapter weights, not just metadata
- ✅ **Apple Silicon Optimized:** Native MLX format for M1/M2/M3 chips
- ✅ **LoRA Adapters:** Efficient fine-tuning with low memory usage
- ✅ **Domain-Specific:** Trained specifically on brake components
- ✅ **Visual Learning:** Learned patterns from visual training data
## Training Statistics
- Training Examples: 3
- Learned Visual Patterns: 2
- Fine-tuning Epochs: 3
- Domain Keywords: 59
## ⚠️ Important Notes

- **REAL WEIGHTS:** This model contains actual fine-tuned MLX weights, not just metadata
- **MLX Required:** Use the `mlx-vlm` library for loading and inference
- **Apple Silicon:** Optimized for M1/M2/M3 Mac devices
- **Adapter Architecture:** Uses LoRA for efficient fine-tuning
- **Domain-Specific:** Best performance on brake component images
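The LoRA mechanism mentioned above can be illustrated numerically: instead of updating a full d×k weight matrix W, training learns two small matrices A (r×k) and B (d×r), and the effective weight at inference is W + (α/r)·B·A. A toy sketch with plain Python lists — the shapes, α, and r here are made up for illustration and are unrelated to this model's actual adapter configuration:

```python
def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_merge(W, A, B, alpha, r):
    """Merge LoRA adapters into the base weight: W + (alpha / r) * B @ A."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

# Toy 2x2 base weight with rank-1 adapters (r = 1, alpha = 1).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]           # r x k = 1 x 2
B = [[0.5], [0.25]]        # d x r = 2 x 1
print(lora_merge(W, A, B, alpha=1, r=1))  # -> [[1.5, 1.0], [0.25, 1.5]]
```

Because only A and B (2·r·d values per layer instead of d·k) are trained and stored, the adapter file stays small — which is why `adapters.safetensors` can hold the entire fine-tune.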
## Comparison

| Feature | This Model | Base Model |
|---|---|---|
| Fine-tuned Weights | ✅ YES | ❌ No |
| Brake Component Knowledge | ✅ Specialized | ❌ General |
| Domain-Specific Responses | ✅ Trained | ❌ Generic |
| Visual Pattern Learning | ✅ 2 patterns | ❌ Base only |
## Support
For questions about this model or the VisualAI platform, please refer to the training logs or contact support.
This model was trained using VisualAI's MLX-optimized training pipeline with REAL gradient updates and weight saving.