This fine-tuned model is inspired by Nathan Lambert's talk "Traits of Next Generation Reasoning Models". It introduces a structured multi-phase reasoning cycle for large language models (LLMs).

The fine-tuned model goes beyond simple question-answer pairs by adding explicit reasoning phases:

  • Planning – The model outlines a step-by-step plan before attempting a solution.
  • Answering – The model provides its initial solution.
  • Double-Checking – The model revisits its answer, verifying correctness and coherence.
  • Confidence – The model assigns a confidence score or justification for its final response.

This structure encourages models to reason more transparently, self-correct, and calibrate their confidence.
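The four phases above suggest a structured model output that can be parsed programmatically. The card does not document the exact output format this fine-tune emits, so the heading convention (`## Phase`) below is an assumption for illustration only:

```python
import re

# The four reasoning phases described in the card.
PHASES = ["Planning", "Answering", "Double-Checking", "Confidence"]

def parse_phases(response: str) -> dict:
    """Split a model response into named reasoning phases.

    Assumes each phase appears under a '## <Phase>' heading; the
    actual format of this model's output may differ.
    """
    pattern = (
        r"## (" + "|".join(re.escape(p) for p in PHASES) + r")\n"
        r"(.*?)(?=\n## |\Z)"
    )
    return {
        name: body.strip()
        for name, body in re.findall(pattern, response, flags=re.DOTALL)
    }

# Hypothetical model output following the assumed heading format.
sample = (
    "## Planning\n1. Recall the formula.\n"
    "## Answering\nThe answer is 42.\n"
    "## Double-Checking\nVerified against the definition.\n"
    "## Confidence\nHigh (0.9)."
)
print(parse_phases(sample)["Confidence"])  # High (0.9).
```

Splitting the response this way lets downstream code act on the phases separately, e.g. surfacing only the Answering phase to users while logging the Confidence phase for calibration analysis.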

Uploaded model

  • Developed by: EpistemeAI
  • License: apache-2.0
  • Finetuned from model: unsloth/gpt-oss-20b-unsloth-bnb-4bit

This gpt_oss model was trained 2x faster with Unsloth and Hugging Face's TRL library.
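One plausible way to query the model is through the standard Hugging Face `transformers` chat interface. The system prompt below is a hypothetical phrasing of the phase instructions, not a documented template from this card:

```python
# Hypothetical system prompt; the card does not specify an exact template.
SYSTEM_PROMPT = (
    "Work through the problem in four phases, each under its own heading: "
    "Planning, Answering, Double-Checking, Confidence."
)

def build_messages(question: str) -> list[dict]:
    """Wrap a user question in the phase-structured chat format."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]

# Usage with the transformers pipeline (requires `transformers` installed
# and enough memory for the 20B weights):
#   from transformers import pipeline
#   pipe = pipeline("text-generation", model="EpistemeAI/gpt-oss-deepplan")
#   print(pipe(build_messages("What is 17 * 24?"), max_new_tokens=512))
```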


Model tree for EpistemeAI/gpt-oss-deepplan

  • Base model: openai/gpt-oss-20b