---
library_name: mlx-vlm
tags:
- mlx
- vision-language-model
- fine-tuned
- brake-components
- visual-ai
- lora-adapters
base_model: mlx-community/SmolVLM-256M-Instruct-bf16
---
# NewJob - MLX Fine-tuned Vision Language Model ⚡️
🔥 **REAL MLX FINE-TUNED WEIGHTS INCLUDED** - This model contains actual fine-tuned adapter weights!
## Model Details
- **Base Model**: `mlx-community/SmolVLM-256M-Instruct-bf16`
- **Training Platform**: VisualAI (MLX-optimized for Apple Silicon)
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **GPU Type**: MLX (Apple Silicon)
- **Training Job ID**: 1
- **Created**: 2025-06-03 06:51:02.458447
- **Real Weights**: ✅ YES - Contains actual fine-tuned MLX adapter weights
- **Adapter Weights**: ✅ Found
## Training Data
This model was fine-tuned on visual brake component data with 3 training examples.
## 🛠️ Usage with REAL Fine-tuned Weights
### Installation
```bash
pip install mlx-vlm
```
### Loading the Fine-tuned Model
```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config
from PIL import Image
import json
# Load the FINE-TUNED MLX model (not base model!)
model_path = "truworthai/Combined-mlx" # This repo contains the fine-tuned weights
try:
    # Load the fine-tuned model with adapters
    model, processor = load(model_path)
    print("✅ Loaded FINE-TUNED MLX model with learned weights!")

    # Load training configuration
    config = load_config(model_path)
except Exception as e:
    print(f"⚠️ Loading fine-tuned model failed, falling back to base: {e}")
    # Fallback to base model
    model, processor = load("mlx-community/SmolVLM-256M-Instruct-bf16")
    config = load_config("mlx-community/SmolVLM-256M-Instruct-bf16")
```
### Inference with Fine-tuned Model
```python
# Load your brake component image
image = Image.open("brake_component.jpg")
# Ask brake-specific questions
question = "What is the OEM part number of this brake component?"
# Format the prompt
formatted_prompt = apply_chat_template(processor, config, question, num_images=1)
# Generate response using fine-tuned weights
response = generate(
    model,
    processor,
    formatted_prompt,
    [image],
    verbose=False,
    max_tokens=100,
    temp=0.3
)
print(f"Fine-tuned model response: {response}")
```
## Model Files (REAL WEIGHTS)
This repository contains **ACTUAL fine-tuned model weights**:
### Core Model Files
- `config.json`: Model configuration
- `model.safetensors` or `model.npz`: Base model weights (if included)
- `adapters.safetensors` or `adapters.npz`: **FINE-TUNED LoRA ADAPTER WEIGHTS** ⚡️
- `adapter_config.json`: Adapter configuration
- `tokenizer.json`: Tokenizer configuration
- `preprocessor_config.json`: Image preprocessing config
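As a quick sanity check after downloading, you can inspect `adapter_config.json` to see how the LoRA adapters were configured. The keys and values below are only illustrative assumptions -- this repo's actual config file may use different field names:

```python
import json

# Hypothetical adapter_config.json contents; the real file in this repo
# may use different keys. Shown only to illustrate what a LoRA adapter
# config typically records (rank, scale, number of adapted layers).
sample_config = """
{
  "fine_tune_type": "lora",
  "lora_layers": 8,
  "lora_parameters": {"rank": 8, "scale": 16.0, "dropout": 0.0}
}
"""

config = json.loads(sample_config)
lora = config["lora_parameters"]
print(f"Method: {config['fine_tune_type']}, layers adapted: {config['lora_layers']}")
print(f"LoRA rank: {lora['rank']}, scale: {lora['scale']}")
```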
### Training Artifacts
- `training_args.json`: Training hyperparameters used
- `trainer_state.json`: Training state and metrics
- `mlx_model_info.json`: Training metadata and learned mappings
- `training_images/`: Reference images from training data (if included)
### Documentation
- `README.md`: This documentation
## ⚡️ Performance Features
- ✅ **Real MLX Weights**: Contains actual fine-tuned adapter weights, not just metadata
- ✅ **Apple Silicon Optimized**: Native MLX format for M1/M2/M3 chips
- ✅ **LoRA Adapters**: Efficient fine-tuning with low memory usage
- ✅ **Domain-Specific**: Trained specifically on brake components
- ✅ **Visual Learning**: Learned patterns from visual training data
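LoRA's memory savings come from training two small low-rank matrices per layer instead of the full weight matrix. A rough back-of-the-envelope sketch -- the dimensions below are invented for illustration, not SmolVLM's actual layer sizes:

```python
# Illustrative parameter-count comparison: LoRA vs. full fine-tuning.
# The dimensions are made up for this sketch, not the real architecture.
d_in, d_out = 768, 768   # hypothetical projection layer shape
rank = 8                 # LoRA rank r

full_params = d_in * d_out            # updating W directly
lora_params = rank * (d_in + d_out)   # A (r x d_in) + B (d_out x r)

print(f"Full fine-tune params per layer: {full_params:,}")
print(f"LoRA params per layer (r={rank}): {lora_params:,}")
print(f"Reduction: {full_params // lora_params}x")
```

With these toy numbers LoRA trains roughly 48x fewer parameters per adapted layer, which is what keeps memory usage low on Apple Silicon.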
## Training Statistics
- **Training Examples**: 3
- **Learned Visual Patterns**: 2
- **Fine-tuning Epochs**: 3
- **Domain Keywords**: 59
## ⚠️ Important Notes
- **REAL WEIGHTS**: This model contains actual fine-tuned MLX weights, not just metadata
- **MLX Required**: Use `mlx-vlm` library for loading and inference
- **Apple Silicon**: Optimized for M1/M2/M3 Mac devices
- **Adapter Architecture**: Uses LoRA for efficient fine-tuning
- **Domain-Specific**: Best performance on brake component images
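At inference time, LoRA adapters contribute an additive low-rank update to each adapted base weight: `W_eff = W + (alpha / r) * B @ A`. A toy pure-Python sketch of that arithmetic (all numbers are invented for illustration):

```python
# Toy LoRA merge: W_eff = W + (alpha / r) * B @ A.
# Values are invented for illustration; real adapters are much larger.
alpha, r = 2.0, 1

W = [[1.0, 0.0],
     [0.0, 1.0]]          # base weight (2x2 identity for the sketch)
A = [[0.5, 0.5]]          # shape (r, d_in)  = (1, 2), learned
B = [[1.0],
     [0.0]]               # shape (d_out, r) = (2, 1), learned

def matmul(X, Y):
    """Plain-Python matrix multiply."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

scale = alpha / r
delta = matmul(B, A)                      # low-rank update, shape (2, 2)
W_eff = [[w + scale * d for w, d in zip(w_row, d_row)]
         for w_row, d_row in zip(W, delta)]
print(W_eff)  # [[2.0, 1.0], [0.0, 1.0]]
```

Merged checkpoints bake this sum into the base weights; keeping `adapters.safetensors` separate lets the small adapter file be shipped and swapped independently of the base model.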
## Comparison
| Feature | This Model | Base Model |
|---------|------------|------------|
| Fine-tuned Weights | ✅ YES | ❌ No |
| Brake Component Knowledge | ✅ Specialized | ❌ General |
| Domain-Specific Responses | ✅ Trained | ❌ Generic |
| Visual Pattern Learning | ✅ 2 patterns | ❌ Base only |
## Support
For questions about this model or the VisualAI platform, please refer to the training logs or contact support.
---
*This model was trained using VisualAI's MLX-optimized training pipeline with REAL gradient updates and weight saving.*