# Avara X1 Mini
Avara X1 Mini is a lightweight AI model developed by Omnionix. Based on the Qwen2.5 architecture, this model is fine-tuned to balance technical reasoning with a grounded and supportive personality.
Join the Community: Omnionix Discord
## Technical Specifications
| Feature | Details |
|---|---|
| Developer | Omnionix |
| Architecture | Qwen2.5-1.5B |
| Format | ChatML |
| Identity | Native Omnionix system logic |
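The table lists ChatML as the prompt format. As a rough illustration of what that format looks like on the wire, here is a minimal sketch; the `to_chatml` helper is hypothetical and for illustration only — in practice `tokenizer.apply_chat_template()` builds this string for you.

```python
def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    ]
    # Open an assistant turn to cue the model to reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are Avara, an AI assistant created by Omnionix."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

Each message is wrapped in `<|im_start|>` / `<|im_end|>` markers, and generation stops when the model emits the closing `<|im_end|>` token.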
## Training Methodology
Avara X1 Mini was fine-tuned using the Unsloth library on a high-density dataset blend designed for maximum reasoning performance in a small footprint:
- Code: The Stack (BigCode) for professional-grade programming logic.
- Mathematics: Focused math/competition datasets for step-by-step problem solving.
- Logic: Open-Platypus for enhanced deductive reasoning and instruction following.
The LoRA adapter and a Q4_K_M GGUF quantization are also available: huggingface.co/Omnionix12345/avara-x1-mini-Q4_K_M-GGUF
## Implementation
To run Avara locally with the Transformers library, the following chat script supports natural back-and-forth dialogue by managing conversation history automatically.
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Omnionix12345/avara-x1-mini",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Seed the conversation with the model's native system identity.
messages = [
    {"role": "system", "content": "You are Avara, an AI assistant created by Omnionix."}
]

print("\n--- Avara X1 Mini is Online ---")

while True:
    user_input = input("You: ")
    if user_input.lower() in ["exit", "quit"]:
        break

    messages.append({"role": "user", "content": user_input})

    outputs = pipe(
        messages,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.7,
    )

    # The pipeline returns the full conversation; the last message is the reply.
    assistant_response = outputs[0]["generated_text"][-1]["content"]
    print(f"\nAvara: {assistant_response}\n")

    # Keep the reply in history so the next turn has full context.
    messages.append({"role": "assistant", "content": assistant_response})
```
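Because the loop above appends every turn to `messages`, a long session can eventually exceed the model's context window. One simple mitigation is to cap the history before each generation call; the `trim_history` helper and its cutoff policy below are an assumption for illustration, not part of the model card.

```python
def trim_history(messages, max_turns=8):
    """Keep the system message plus the most recent max_turns messages.

    Assumes messages[0] is the system prompt, as in the chat loop above.
    This trimming policy is illustrative, not an Avara requirement.
    """
    system, rest = messages[0], messages[1:]
    return [system] + rest[-max_turns:]

# Example: a long transcript gets capped before the next pipe() call.
history = [{"role": "system", "content": "You are Avara."}]
for i in range(10):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_history(history, max_turns=6)
```

Calling `pipe(trim_history(messages))` instead of `pipe(messages)` keeps prompts bounded while preserving the system identity and recent context.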