Active filters: quantllm

codewithdark/Llama-3.2-3B-4bit • 3B • Updated • 16
codewithdark/Llama-3.2-3B-GGUF-4bit • 3B • Updated • 10
codewithdark/Llama-3.2-3B-4bit-mlx • Text Generation • 3B • Updated • 51
QuantLLM/Llama-3.2-3B-4bit-mlx • Text Generation • 3B • Updated • 22
QuantLLM/Llama-3.2-3B-2bit-mlx • Text Generation • 3B • Updated • 21
QuantLLM/Llama-3.2-3B-8bit-mlx • Text Generation • 3B • Updated • 69
QuantLLM/Llama-3.2-3B-5bit-mlx • Text Generation • 3B • Updated • 83
QuantLLM/Llama-3.2-3B-5bit-gguf • 3B • Updated • 7
QuantLLM/Llama-3.2-3B-2bit-gguf • 3B • Updated • 9
QuantLLM/functiongemma-270m-it-8bit-gguf • 0.3B • Updated • 24 • 1
QuantLLM/functiongemma-270m-it-4bit-gguf • 0.3B • Updated • 22
QuantLLM/functiongemma-270m-it-4bit-mlx • Text Generation • 0.3B • Updated • 60