---
library_name: mlx-vlm
tags:
- mlx
- vision-language-model
- fine-tuned
- brake-components
- visual-ai
- lora-adapters
base_model: mlx-community/SmolVLM-256M-Instruct-bf16
---

# NewJob - MLX Fine-tuned Vision Language Model ⚡️

🔥 **Real MLX fine-tuned weights included** - this repository ships the actual fine-tuned adapter weights, not just metadata.

## 🚀 Model Details

- **Base Model**: `mlx-community/SmolVLM-256M-Instruct-bf16`
- **Training Platform**: VisualAI (MLX-optimized for Apple Silicon)
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Hardware**: Apple Silicon via MLX
- **Training Job ID**: 1
- **Created**: 2025-06-03 06:51:02
- **Adapter Weights**: ✅ Included

## 📊 Training Data

This model was fine-tuned on visual brake component data with 3 training examples.

## 🛠️ Usage

### Installation

```bash
pip install mlx-vlm
```

### Loading the Fine-tuned Model

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config
from PIL import Image

# Load the fine-tuned MLX model (not the base model)
model_path = "truworthai/Combined-mlx"  # this repo contains the fine-tuned weights

try:
    # Load the fine-tuned model with adapters
    model, processor = load(model_path)
    print("✅ Loaded fine-tuned MLX model with learned weights!")

    # Load the model configuration
    config = load_config(model_path)
except Exception as e:
    print(f"⚠️ Loading fine-tuned model failed, falling back to base: {e}")
    # Fall back to the base model
    model, processor = load("mlx-community/SmolVLM-256M-Instruct-bf16")
    config = load_config("mlx-community/SmolVLM-256M-Instruct-bf16")
```

### Inference with the Fine-tuned Model

```python
# Load your brake component image
image = Image.open("brake_component.jpg")

# Ask a brake-specific question
question = "What is the OEM part number of this brake component?"
# Format the prompt
formatted_prompt = apply_chat_template(processor, config, question, num_images=1)

# Generate a response using the fine-tuned weights
response = generate(
    model,
    processor,
    formatted_prompt,
    [image],
    verbose=False,
    max_tokens=100,
    temp=0.3,
)

print(f"Fine-tuned model response: {response}")
```

## 📁 Model Files

This repository contains the actual fine-tuned model weights:

### Core Model Files

- `config.json`: Model configuration
- `model.safetensors` or `model.npz`: Base model weights (if included)
- `adapters.safetensors` or `adapters.npz`: **Fine-tuned LoRA adapter weights** ⚡️
- `adapter_config.json`: Adapter configuration
- `tokenizer.json`: Tokenizer configuration
- `preprocessor_config.json`: Image preprocessing configuration

### Training Artifacts

- `training_args.json`: Training hyperparameters
- `trainer_state.json`: Training state and metrics
- `mlx_model_info.json`: Training metadata and learned mappings
- `training_images/`: Reference images from the training data (if included)

### Documentation

- `README.md`: This documentation

## ⚡️ Performance Features

- ✅ **Real MLX Weights**: Contains the fine-tuned adapter weights, not just metadata
- ✅ **Apple Silicon Optimized**: Native MLX format for M1/M2/M3 chips
- ✅ **LoRA Adapters**: Efficient fine-tuning with low memory usage
- ✅ **Domain-Specific**: Trained specifically on brake components
- ✅ **Visual Learning**: Learned patterns from visual training data

## 🔍 Training Statistics

- **Training Examples**: 3
- **Learned Visual Patterns**: 2
- **Fine-tuning Epochs**: 3
- **Domain Keywords**: 59

## ⚠️ Important Notes

- **Real Weights**: This model contains fine-tuned MLX weights, not just metadata
- **MLX Required**: Use the `mlx-vlm` library for loading and inference
- **Apple Silicon**: Optimized for M1/M2/M3 Mac devices
- **Adapter Architecture**: Uses LoRA for efficient fine-tuning
- **Domain-Specific**: Best performance on brake component images

## 🆚 Comparison

| Feature | This Model | Base Model |
|---------|------------|------------|
| Fine-tuned Weights | ✅ Yes | ❌ No |
| Brake Component Knowledge | ✅ Specialized | ❌ General |
| Domain-Specific Responses | ✅ Trained | ❌ Generic |
| Visual Pattern Learning | ✅ 2 patterns | ❌ Base only |

## 📞 Support

For questions about this model or the VisualAI platform, refer to the training logs or contact support.

---

*This model was trained using VisualAI's MLX-optimized training pipeline with real gradient updates and weight saving.*
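## 🔎 Verifying the Adapter Weights

If you want to confirm that the adapter file really contains learned tensors, you can inspect it without MLX or Apple Silicon: the safetensors container format stores an 8-byte little-endian header length followed by a JSON header describing every tensor. The sketch below uses only the Python standard library; the `adapters.safetensors` path is an assumption - point it at wherever you downloaded this repository.

```python
import json
import struct


def list_safetensors_tensors(path):
    """Read a safetensors header (8-byte LE length + JSON) and return
    {tensor_name: shape} without loading any weight data."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional non-tensor entry in the header
    return {name: meta["shape"]
            for name, meta in header.items()
            if name != "__metadata__"}


# Example (adjust the path to your local copy of this repo):
# for name, shape in list_safetensors_tensors("adapters.safetensors").items():
#     print(name, shape)
```

A non-empty result listing `lora`-style tensor names is a quick sanity check that the adapters were actually saved.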