# 🧘‍♂️ Marcus Aurelius – Stoic Tier-1 Support LoRA (Mistral-7B Base)
This repository contains a LoRA adapter that fine-tunes Mistral-7B (base) into a calm, rhythmic, Meditations-inspired customer support agent who answers modern software complaints like a Roman emperor practicing Stoicism.
If you type a customer complaint, the model responds with:
- short, measured sentences
- Stoic reframing
- a universal principle about the nature of things
- and a decisive action step at the end
- all without corporate phrases ("we apologize…")
This LoRA is lightweight (~53 MB) and designed for local inference, demos, Discord posts, and educational examples of persona fine-tuning.
## ✨ What This Model Does
- Emulates the tone and cadence of Marcus Aurelius' Meditations
- Handles customer complaints with philosophical calm
- Uses simple timeless vocabulary
- Avoids corporate tone
- Produces 3β5 sentence structured answers
- Ends every reply with a clear action step ("Reinstall the app", "Send me the logs", etc.)
This model is ideal for:
- CLI chat assistants
- Fun Discord demos
- Persona generation examples
- Educational fine-tuning pipelines
- Local inference (CPU/GPU)
## 🧩 Base Model
This LoRA must be applied to the standard base model:
```yaml
mistralai/Mistral-7B-v0.1
```
Do NOT pair it with Mistral-7B-Instruct or Mistral-7B-v0.3; parameter alignment will be incorrect and responses will degrade.
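If you want to sanity-check the pairing before loading any weights, the expected base checkpoint is recorded in `adapter_config.json` and exposed by PEFT. A quick check along these lines:

```python
from peft import PeftConfig

# Read adapter_config.json from the Hub and inspect the expected base model
config = PeftConfig.from_pretrained("Reg1/marcus-aurelius-mistral7b-stoic-t1-support-lora")
print(config.base_model_name_or_path)  # should print "mistralai/Mistral-7B-v0.1"
```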
## 📦 Files Included
- `adapter_model.safetensors` – LoRA weights
- `adapter_config.json` – LoRA configuration
- `LICENSE` – Apache-2.0
- `.gitattributes`
- `README.md` (this file)
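If you want the files on disk (for example, to inspect the adapter before loading it), `huggingface_hub.snapshot_download` will fetch the repo; this is a generic sketch and is not required for the PEFT usage below, which downloads on demand:

```python
from huggingface_hub import snapshot_download

# Download the adapter files into the local HF cache and return the local path
local_dir = snapshot_download(repo_id="Reg1/marcus-aurelius-mistral7b-stoic-t1-support-lora")
print(local_dir)
```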
## 🚀 Usage (Transformers + PEFT)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "mistralai/Mistral-7B-v0.1"
LORA = "Reg1/marcus-aurelius-mistral7b-stoic-t1-support-lora"

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE,
    device_map="auto",
    torch_dtype="auto",  # load in the checkpoint's native precision instead of fp32
)

# Load the LoRA adapter
model = PeftModel.from_pretrained(model, LORA)

# Merge weights for faster inference (optional)
model = model.merge_and_unload()


def ask_marcus(issue):
    system = (
        "You are Marcus Aurelius, Roman emperor and Stoic philosopher, "
        "acting as a modern customer support agent."
    )
    prompt = f"{system}\n\nCustomer Issue: {issue}\n\nAgent:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=200,
        do_sample=True,  # required for temperature/top_p to take effect
        temperature=0.7,
        top_p=0.9,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))


ask_marcus("Your update wiped out all my settings.")
```
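If your GPU cannot hold Mistral-7B in half precision, the base model can be loaded in 4-bit before attaching the adapter. This is a sketch that assumes `bitsandbytes` is installed (it is not required by the adapter itself); skip `merge_and_unload()` when the base is quantized and keep the adapter attached instead:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    device_map="auto",
    quantization_config=bnb_config,
)

# Keep the adapter unmerged on a quantized base
model = PeftModel.from_pretrained(model, "Reg1/marcus-aurelius-mistral7b-stoic-t1-support-lora")
```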
## 📥 Usage (Training Box CLI)
If you're using the Training Box fine-tuning framework:
```bash
python -m trainingbox chat \
  --base mistralai/Mistral-7B-v0.1 \
  --adapter Reg1/marcus-aurelius-mistral7b-stoic-t1-support-lora
```
## 💬 Example Outputs
Customer Issue:
"Your app crashed before my presentation. I looked unprepared."
Marcus Aurelius:
The moment has passed, and you cannot summon it back.
It is the nature of all things to break, and the mind suffers only when it insists they should not.
Send me the crash report, and I will help you restore what can be restored.
Customer Issue:
"Your update erased all my presets."
Marcus Aurelius:
What is lost is not the cause of your distress; it is your belief that it should have remained.
All tools made by human hands are fragile, and none promise permanence.
Recreate the presets you need, and send me the logs so I may guide you further.
## 🧠 Training Details
- Hardware: NVIDIA RTX 4090
- Dataset: ~150 handcrafted examples of Stoic customer support
- Method: LoRA fine-tuning (PEFT)
- Learning goal: tone consistency + structural reliability
- Training runtime: ~1 hour
- Optimized for: persona fidelity over raw task performance
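The exact hyperparameters are not published in this card. The sketch below shows a typical PEFT `LoraConfig` for a persona fine-tune at this scale; the rank, alpha, dropout, and target modules are illustrative assumptions, not the settings actually used for this adapter:

```python
from peft import LoraConfig, TaskType

# Illustrative LoRA settings for a Mistral-7B persona fine-tune;
# these values are assumptions, not the published training recipe.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```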
## 📜 License
This LoRA is released under the Apache 2.0 license.
You are free to use it for commercial and non-commercial applications.
## 🙏 Acknowledgements
Built using:
- Mistral-7B (Base)
- Hugging Face PEFT and Transformers
- The Training Box fine-tuning framework