---
base_model: theprint/MathTutor-7B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
datasets:
- theprint/CoT-Explaining-Math
- facebook/natural_reasoning
---
# Match Coma 7B

This is theprint/MathTutor-7B further fine-tuned on the facebook/natural_reasoning dataset using GRPO. It is an experimental model and is likely to hallucinate.
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model:** theprint/MathTutor-7B
This qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
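
## Usage

A minimal inference sketch with the `transformers` library. The repo id below is a placeholder (the base model); substitute this model's actual Hugging Face repo id. It assumes the tokenizer ships a chat template, as is typical for qwen2-based models.

```python
# Minimal usage sketch; the model id below is an assumption, replace it with this repo's id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "theprint/MathTutor-7B"  # placeholder: swap in this fine-tuned model's repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat-formatted prompt and generate a step-by-step answer.
messages = [{"role": "user", "content": "Explain step by step: what is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```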
