# mistral-small-3.1-24b-harmoni-sft

**Version:** v1.0.0
## Model Description
This model is a fine-tuned version of `mistralai/Mistral-Small-3.1-24B-Instruct-2503` for manufacturing domain applications.
**Training Method:** SFT (Supervised Fine-Tuning)

**Training Pipeline:**
- SFT Phase: QLoRA fine-tuning on domain-specific instruction data (31K examples)
- Merge: LoRA adapter merged with base model
## Training Configuration
- PEFT Strategy: QLoRA (4-bit quantized base model with trainable low-rank adapters)
- Dataset: Harmoni Manufacturing SFT Dataset (31,119 examples)
- Hardware: 8x NVIDIA H200 GPUs (DDP)
- Context Length: 128K tokens
- Training Framework: HuggingFace Transformers + PEFT
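Conceptually, the merge step of the pipeline above folds the learned low-rank update back into the frozen base weights: for each adapted layer, W' = W + (α/r)·B·A, after which the adapter is no longer needed at inference time (PEFT's `merge_and_unload()` does this per target layer). A tiny NumPy illustration of that arithmetic; the dimensions, rank, and scaling values below are illustrative, not this model's actual configuration:

```python
import numpy as np

# Toy illustration of the LoRA merge arithmetic: the adapter stores two small
# matrices A (r x d_in) and B (d_out x r); merging adds their scaled product
# to the frozen base weight, after which the adapter can be discarded.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 8, 2, 4         # rank r << d; alpha is the LoRA scale

W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in))         # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection (init to 0)
B[:, 0] = 1.0                              # pretend SFT updated B

W_merged = W + (alpha / r) * (B @ A)       # the merge step, per adapted layer

x = rng.standard_normal(d_in)
# Applying base + adapter separately equals applying the merged weight.
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))
y_merged = W_merged @ x
assert np.allclose(y_adapter, y_merged)
```

Because the merge is exact (plain matrix addition), the published checkpoint behaves identically to base-plus-adapter while loading like any ordinary model.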
## Usage
### Using Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ssoni-harmoni/mistral-small-3.1-24b-harmoni-sft"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful manufacturing assistant."},
    {"role": "user", "content": "What are the key steps in CNC machining?"},
]

# add_generation_prompt=True appends the assistant turn marker so the model
# generates a reply instead of continuing the user message.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
### Using vLLM (Recommended for Quantized Models)
```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="ssoni-harmoni/mistral-small-3.1-24b-harmoni-sft",
    quantization="compressed-tensors",
    tensor_parallel_size=1,
)

sampling_params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=512)
outputs = llm.generate("What are the key steps in CNC machining?", sampling_params)
print(outputs[0].outputs[0].text)
```
## Model Details
- Base Model: mistralai/Mistral-Small-3.1-24B-Instruct-2503
- Organization: ssoni-harmoni
- Training Date: 2026-02-09
- Model Type: Causal Language Model
## Version History
| Version | Date | Changes |
|---|---|---|
| v1.0.0 | 2026-02-09 | Initial release |
## License
This model inherits the Apache 2.0 license from the base model.
## Citation
```bibtex
@misc{harmoni-manufacturing-model,
  title={Harmoni Manufacturing Domain Fine-Tuned Model},
  author={Harmoni ML Team},
  year={2026},
  publisher={HuggingFace},
  url={https://huggingface.co/ssoni-harmoni/mistral-small-3.1-24b-harmoni-sft}
}
```
## Contact
For questions or issues, please contact the Harmoni ML team.