# Osmosis-Apply-1.7B-GGUF

Osmosis-Apply-1.7B is a specialized language model, fine-tuned from Qwen3-1.7B, designed to perform code merges similar to the "apply" feature of modern AI code editors. Given an original code snippet and an edit snippet, the model applies the edit to the original code, producing the updated snippet.
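As an illustrative sketch, driving an apply-style model typically means packing the original code and the edit snippet into a single prompt. The `<code>`/`<edit>` delimiters below are assumptions for illustration; the exact prompt format Osmosis-Apply-1.7B was trained on is not documented here:

```python
def build_apply_prompt(original_code: str, edit_snippet: str) -> str:
    """Assemble a prompt asking the model to merge an edit into the original code.

    The <code>/<edit> delimiters are illustrative assumptions, not the
    model's documented prompt format.
    """
    return (
        "Apply the following edit to the original code and return the full "
        "updated code.\n\n"
        f"<code>\n{original_code}\n</code>\n\n"
        f"<edit>\n{edit_snippet}\n</edit>"
    )


original = "def add(a, b):\n    return a + b\n"
edit = "def add(a, b):\n    # ... existing code ...\n    return a + b\n"
prompt = build_apply_prompt(original, edit)
```

A prompt built this way can then be passed to any GGUF runtime (e.g. llama.cpp or llama-cpp-python) loading one of the files below.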

## Model files

| File | Size | Format |
|---|---|---|
| Osmosis-Apply-1.7B.BF16.gguf | 3.45 GB | BF16 |
| Osmosis-Apply-1.7B.F16.gguf | 3.45 GB | F16 |
| Osmosis-Apply-1.7B.F32.gguf | 6.89 GB | F32 |
| Osmosis-Apply-1.7B.Q2_K.gguf | 778 MB | Q2_K |
| Osmosis-Apply-1.7B.Q3_K_L.gguf | 1 GB | Q3_K_L |
| Osmosis-Apply-1.7B.Q3_K_M.gguf | 940 MB | Q3_K_M |
| Osmosis-Apply-1.7B.Q3_K_S.gguf | 867 MB | Q3_K_S |
| Osmosis-Apply-1.7B.Q4_K_M.gguf | 1.11 GB | Q4_K_M |
| Osmosis-Apply-1.7B.Q4_K_S.gguf | 1.06 GB | Q4_K_S |
| Osmosis-Apply-1.7B.Q5_K_M.gguf | 1.26 GB | Q5_K_M |
| Osmosis-Apply-1.7B.Q5_K_S.gguf | 1.23 GB | Q5_K_S |
| Osmosis-Apply-1.7B.Q6_K.gguf | 1.42 GB | Q6_K |
| Osmosis-Apply-1.7B.Q8_0.gguf | 1.83 GB | Q8_0 |
| .gitattributes | 2.38 kB | - |
| README.md | 428 Bytes | - |
| config.json | 29 Bytes | - |

## Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


## Model details

- Model size: 2B params
- Architecture: qwen3
- Available precisions: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit, and 32-bit

This repository, prithivMLmods/Osmosis-Apply-1.7B-GGUF, provides quantized versions of Osmosis-Apply-1.7B.