The Sanitarium Council 3B
THIS MODEL IS FOR ENTERTAINMENT AND EDUCATIONAL PURPOSES ONLY.
DO NOT use this model for actual medical advice, diagnosis, or treatment. The medical practices described by this model are from the 19th century and include bloodletting, homeopathy, unproven herbal remedies, and other practices that are dangerous, debunked, and potentially fatal by modern medical standards.
If you have a medical concern, consult a licensed modern healthcare professional.
What is this?
The Sanitarium Council 3B is a LoRA fine-tune of Meta's Llama-3.2-3B-Instruct that roleplays as a panel of 19th-century American medical "experts." It simulates the voices and beliefs of historical medical movements including Thomsonian herbalism, homeopathy, hydropathy (water cure), dietary moralism, and conventional Civil War-era medicine.
Think of it as a time machine to the worst doctor's office imaginable. You ask a medical question; a council of long-dead quacks argues about whether you need more leeches or a bread-only diet.
This is a parody. It is meant to be funny, absurd, and a window into how far medicine has come. Nothing this model says should be followed, trusted, or taken seriously.
Disclaimer
- This model generates text based on outdated, dangerous, and scientifically discredited 19th-century medical beliefs.
- Outputs may include references to bloodletting, purging, mercury-based treatments, starvation diets, and other harmful practices.
- The creators of this model do not endorse any of the medical advice generated. None of it is real medical advice.
- This model is a comedy / historical curiosity project. Treat its outputs the same way you'd treat a fortune cookie -- for amusement only.
- The model may produce inaccurate, offensive, or nonsensical content. This is expected behavior for a parody of 19th-century medicine.
- By using this model, you acknowledge that you understand it is fictional entertainment and will not apply any of its outputs to real health decisions.
The "Experts"
The council draws from training data based on real historical texts by:
| Expert | Movement | Source Text |
|---|---|---|
| Sylvester Graham | Dietary Moralism | A Treatise on Bread, and Bread-making (1837) |
| Dr. John Harvey Kellogg | Hygienism | Ladies' Guide in Health and Disease |
| Samuel Hahnemann | Homeopathy | Organon of Medicine |
| Mary Gove Nichols | Hydropathy / Water Cure | Water Cure writings |
| Samuel Thomson | Herbalism | New Guide to Health |
| Civil War-era physicians | Conventional Medicine | Period medical advisers and surgical texts |
How to Use
Requirements
- A CUDA-capable GPU (tested on GTX 1080)
- Python 3.10+
- Python packages: `transformers`, `peft`, `torch`, `bitsandbytes`
Quick Start
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-3.2-3B-Instruct"
ADAPTER = "thisdudeabides/The-Sanitarium-Council-3B"

# Load the base model in 4-bit
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=bnb_config,
    device_map="auto",
)

# Load the LoRA adapter
model = PeftModel.from_pretrained(model, ADAPTER)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

# Format the prompt (Alpaca style)
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
You are a 19th-century medical expert. Provide advice based on your era's medical knowledge.
### Input:
I have a headache. What should I do?
### Response:
"""

inputs = tokenizer(alpaca_prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        temperature=0.7,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response.split("### Response:")[-1].strip())
```
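If you would rather not carry the PEFT dependency at inference time, the adapter can be merged into a full-precision copy of the base model. This is a sketch, not part of the released workflow: the output directory name is made up, and merging requires loading the base model unquantized, which needs more memory than the 4-bit path above.

```python
# Optional: merge the LoRA adapter into an fp16 copy of the base model and save
# it as a standalone checkpoint. The output directory name is hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B-Instruct",
    torch_dtype=torch.float16,
)
merged = PeftModel.from_pretrained(base, "thisdudeabides/The-Sanitarium-Council-3B")
merged = merged.merge_and_unload()  # fold the adapter weights into the base layers
merged.save_pretrained("sanitarium-council-3b-merged")
AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct").save_pretrained(
    "sanitarium-council-3b-merged"
)
```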
Training Details
| Parameter | Value |
|---|---|
| Base model | meta-llama/Llama-3.2-3B-Instruct |
| Method | LoRA (PEFT) + SFT (TRL) |
| LoRA rank | 16 |
| LoRA alpha | 16 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Quantization | 4-bit NF4 (bitsandbytes) |
| Training precision | FP16 |
| Batch size | 2 (with 4 gradient accumulation steps) |
| Learning rate | 2e-4 (linear decay) |
| Training steps | 120 |
| Max sequence length | 2048 |
| Final training loss | 2.54 |
| Hardware | NVIDIA GTX 1080 |
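For reference, here is a minimal sketch of a PEFT/TRL configuration consistent with the table above. The hyperparameters come from the table; the dropout value, dataset, output directory, and exact TRL argument names are assumptions, since the TRL version is not pinned in this card.

```python
# Sketch of a LoRA + SFT setup matching the table above; not the exact training
# script. Dataset, dropout, and output_dir are hypothetical, and TRL argument
# names can differ between versions.
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

lora_config = LoraConfig(
    r=16,               # LoRA rank (from the table)
    lora_alpha=16,      # LoRA alpha (from the table)
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)

sft_config = SFTConfig(
    per_device_train_batch_size=2,   # batch size 2
    gradient_accumulation_steps=4,   # effective batch size 8
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    max_steps=120,
    max_seq_length=2048,
    fp16=True,
    output_dir="sanitarium-council-3b",  # hypothetical
)

# trainer = SFTTrainer(
#     model=model,               # the 4-bit base model, loaded as in Quick Start
#     args=sft_config,
#     train_dataset=dataset,     # hypothetical Alpaca-formatted dataset
#     peft_config=lora_config,
# )
# trainer.train()
```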
Training Data
The model was fine-tuned on ~9 MB of instruction-output pairs generated from public domain 19th-century medical texts. The data was formatted in Alpaca style and augmented with synthetic dialogue examples to reinforce the historical personas.
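As a rough illustration only (the dataset itself is not published with this card), a single Alpaca-style record might look like the following; the field names follow the standard Alpaca layout and the content below is invented.

```python
# Hypothetical Alpaca-style training record. Field names follow the standard
# instruction / input / output layout; the values here are invented examples.
example_record = {
    "instruction": "You are a 19th-century medical expert. Provide advice based on your era's medical knowledge.",
    "input": "I have a headache. What should I do?",
    "output": "A headache betrays an excess of stimulation in the blood; abstain from all condiments and take cold water freely.",
}
```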
Limitations
- This is a 3B parameter model with a small LoRA adapter. It will sometimes break character, hallucinate, or produce generic responses.
- The model was trained for only 120 steps. It captures the general tone but is not a deeply faithful reproduction of any individual author.
- It may produce content that is offensive or disturbing by modern standards (e.g., period-typical views on gender, race, or disability). This reflects the historical source material, not the views of the creators.
- The model requires access to the Llama 3.2 base model, which is gated on Hugging Face. You must accept Meta's license agreement to use it.
Final Reminder
This is a joke. This is not medicine. Do not eat only bread. Do not apply leeches. See a real doctor. Preferably one from this century.
Framework Versions
- PEFT 0.18.0
- TRL (SFTTrainer)
- Transformers
- bitsandbytes (4-bit quantization)