GGUF version of microsoft/OptiMind-SFT in f16 precision, converted with llama.cpp.

Downloads last month: 48

GGUF
Model size: 21B params
Architecture: gpt-oss
Precision: 16-bit (f16)
Model tree for jameshhugg/OptiMind-SFT-gguf
Quantized versions of this model: 6
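A minimal way to try this model is with llama.cpp's `llama-cli`, which can fetch a GGUF directly from the Hugging Face Hub via the `-hf` flag. This is a sketch, not an official usage snippet from the model author; the prompt and token count are arbitrary, and it assumes you have a recent llama.cpp build on your PATH:

```shell
# Sketch: run the f16 GGUF with llama.cpp (assumes llama.cpp is installed).
# -hf downloads the GGUF from the Hugging Face repo named on this card.
llama-cli -hf jameshhugg/OptiMind-SFT-gguf \
  -p "Write a haiku about optimization." \
  -n 128
```

Keep in mind that a 21B-parameter model in f16 needs on the order of 42 GB of memory to load; the quantized versions listed in the model tree are lighter-weight alternatives for constrained hardware.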