Why does the EXAONE-4.0 chat_template strip <think> with [:9] instead of [:7]?
1 · #13 opened 4 months ago by kihunKim
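A plausible answer to the slicing question above, sketched as a hypothetical: the bare `<think>` tag is 7 characters, but if the template emits the tag followed by two newlines (an assumption about the EXAONE-4.0 template, not confirmed from its source), the full prefix to strip is 9 characters, which would explain `[:9]`.

```python
# Hypothetical sketch: why a chat template might slice 9 characters
# rather than 7 when removing a leading <think> marker.

tag = "<think>"
assert len(tag) == 7          # the bare tag alone is 7 characters

# Assumption (not verified against the EXAONE-4.0 template source):
# the template writes the tag plus two newlines before the reasoning text.
prefix = "<think>\n\n"
assert len(prefix) == 9       # tag + two newlines = 9 characters

content = prefix + "reasoning goes here"
stripped_9 = content[9:]      # cleanly removes the tag and both newlines
stripped_7 = content[7:]      # would leave "\n\n" at the start

print(repr(stripped_9))       # 'reasoning goes here'
print(repr(stripped_7))       # '\n\nreasoning goes here'
```

Under this assumption, slicing 7 characters would leave stray leading newlines in the message content, so 9 is the length of the whole emitted prefix, not an off-by-two bug.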
Question regarding the reasoning feature
1 · #12 opened 5 months ago by wowohuhud
Are there any plans for 8B or 14B models like Qwen3?
#11 opened 6 months ago by lesj0610
llama.cpp thinking mode support
👍 4 · 1 · #10 opened 7 months ago by bpool
AWQ or GPTQ quantization?
2 · #8 opened 8 months ago by lesj0610
Really appreciate the work you put into this. 🤍
❤️ 🚀 8 · #7 opened 8 months ago by deep-div
This model is not much better than Qwen3 32B for writing code
👍 3 · 9 · #4 opened 8 months ago by xldistance