# SocraticLM-Qwen-LoRA
LoRA-Fine-Tuned Qwen2.5-7B-Instruct for Pedagogy-Optimized Teaching Conversations
This repository contains a LoRA adapter for Qwen2.5-7B-Instruct, fine-tuned on 160k curated pedagogical dialogues
designed to elicit step-by-step Socratic teaching, self-explanation, and guided reasoning.
The base model remains unchanged:
[Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
This repository stores only the LoRA weights (`adapter_model.safetensors`).
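
Since only the adapter lives here, you can inspect its LoRA configuration (rank, alpha, target modules) before downloading any model weights. A minimal sketch using `peft`'s `PeftConfig`; the exact fields printed depend on how the adapter was trained:

```python
from peft import PeftConfig

# Only adapter_config.json is fetched from the Hub; no weights are downloaded
config = PeftConfig.from_pretrained("Aditya-m04/SocraticLM-Qwen-Lora")
print(config.base_model_name_or_path)  # expected: Qwen/Qwen2.5-7B-Instruct
print(config.r, config.lora_alpha, config.target_modules)  # LoRA rank, alpha, adapted modules
```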
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "Qwen/Qwen2.5-7B-Instruct"
adapter = "Aditya-m04/SocraticLM-Qwen-Lora"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA adapter

# Qwen2.5-Instruct is a chat model, so format the prompt with its chat template
messages = [{"role": "user", "content": "Explain the Pythagorean theorem to a 10-year-old."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```
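
If you prefer a standalone checkpoint with no `peft` dependency at inference time, the adapter can be folded into the base weights. A minimal sketch continuing from the snippet above; `merge_and_unload` is standard PEFT API for LoRA adapters, and the output directory name is just an example:

```python
# Fold the LoRA deltas into the base weights and drop the PEFT wrappers
merged = model.merge_and_unload()

# Save a self-contained model (directory name is an example)
merged.save_pretrained("socraticlm-qwen-merged")
tokenizer.save_pretrained("socraticlm-qwen-merged")
```

The merged model then loads directly with `AutoModelForCausalLM.from_pretrained`, at the cost of storing a full 7B checkpoint instead of the small adapter file.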