Genuine Phi-4

This is a Phi-4 (14B) model, fine-tuned for more engaging conversation. The goal is to limit sycophancy and encourage the model to (gently) push back and call out bad ideas.

Intended Use

Brainstorming, idea development, general conversation
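A minimal inference sketch with the Hugging Face transformers pipeline is shown below. The prompt and generation settings are illustrative assumptions, not tuned recommendations for this model.

```python
# Minimal chat-style generation with transformers; settings are illustrative.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="theprint/Genuine-Phi4",
    torch_dtype=torch.bfloat16,  # the card lists BF16 tensors
    device_map="auto",
)

messages = [
    {"role": "user", "content": "I want to rewrite our whole backend in a weekend. Thoughts?"},
]

out = pipe(messages, max_new_tokens=256)
# With chat-format input, generated_text holds the full conversation;
# the last entry is the assistant's reply.
print(out[0]["generated_text"][-1]["content"])
```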

Uploaded fine-tuned model

  • Developed by: theprint
  • License: apache-2.0
  • Fine-tuned from model: unsloth/phi-4-unsloth-bnb-4bit

This Phi-4 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
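A rough sketch of that Unsloth + TRL fine-tuning flow follows. The dataset, LoRA settings, and hyperparameters are illustrative assumptions, not the exact recipe used for this model.

```python
# Sketch of QLoRA-style fine-tuning with Unsloth and TRL's SFTTrainer.
# Dataset path, LoRA config, and training args are placeholder assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/phi-4-unsloth-bnb-4bit",  # 4-bit base listed on this card
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset of formatted conversation text.
dataset = load_dataset("json", data_files="conversations.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
    ),
)
trainer.train()
```

After training, the merged model can be saved and uploaded to the Hub in Safetensors format, which matches the artifact described below.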

Format: Safetensors · Model size: 15B params · Tensor type: BF16

Model tree for theprint/Genuine-Phi4

  • Base model: microsoft/phi-4
  • Fine-tuned: this model
  • Quantizations: 3 models

Dataset used to train theprint/Genuine-Phi4