Coma 3B

Coma is based on Qwen 2.5 3B, fine-tuned with GRPO on Meta's NaturalReasoning dataset.

GGUF

Quantized versions are available in GGUF format at theprint/Coma-3B-GGUF.

Testing

The following system prompt was used when testing the model:

Between the tags <think> and </think>: You will first think through your answer carefully step by step, including any potential risks or pitfalls, the context of the user's request, and how best to present this. These are notes for yourself, so be detailed and honest in your assessment.

When you are done with the analysis, you must use your own guidelines from the previous section to construct the final response. The user will only see this section.

In summary, respond using this pattern:
<think>
  First, ...
</think>
  [your response]
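Because only the text after </think> is meant for the user, downstream code typically strips the scratchpad before display. A minimal sketch (not part of the model card; it assumes the model follows the pattern above):

```python
import re

def visible_response(raw_output: str) -> str:
    """Remove the <think>...</think> scratchpad, keeping only the user-facing text."""
    return re.sub(r"<think>.*?</think>", "", raw_output, flags=re.DOTALL).strip()
```

For example, `visible_response("<think>\nFirst, plan.\n</think>\nThe answer is 42.")` returns just the final answer.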

Testing was done at temperature=1.0, top_k=45 and top_p=0.95.
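For reference, a minimal NumPy sketch of what these settings do to a logit vector: temperature rescales the logits, top-k keeps only the 45 highest-scoring tokens, and top-p keeps the smallest set of those whose probability mass reaches 0.95. This is an illustration of the sampling scheme, not the sampler actually used in testing:

```python
import numpy as np

def sample_logits(logits, temperature=1.0, top_k=45, top_p=0.95, rng=None):
    """Filter logits with top-k then nucleus (top-p) sampling and draw one token id."""
    logits = np.asarray(logits, dtype=np.float64) / temperature
    # Top-k: discard everything below the k-th highest logit.
    if top_k < logits.size:
        kth = np.sort(logits)[-top_k]
        logits = np.where(logits < kth, -np.inf, logits)
    # Softmax over the surviving tokens.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Top-p: keep the smallest prefix (by descending probability) with mass >= top_p.
    order = np.argsort(probs)[::-1]
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), top_p)) + 1
    mask = np.zeros_like(probs)
    mask[order[:cutoff]] = probs[order[:cutoff]]
    mask /= mask.sum()
    if rng is None:
        rng = np.random.default_rng()
    return int(rng.choice(len(probs), p=mask))
```

With a strongly peaked distribution (one token holding more than 95% of the mass), the nucleus collapses to that single token, so the draw is deterministic.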

  • Developed by: theprint
  • License: apache-2.0
  • Finetuned from model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit

This Qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

  • Format: Safetensors
  • Model size: 3B params
  • Tensor type: BF16
