Llama 3+
A collection of Meta's open Llama models (4 items).
Pure .gguf Q4_0 and Q8_0 quantizations of Llama 3 8B Instruct, ready to be consumed by llama3.java.
In the wild, Q8_0 quantizations are usually pure, but Q4_0 quantizations rarely are: for example, the output.weight tensor is often quantized with Q6_K instead of Q4_0.
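A purity check can be sketched as follows, assuming we already have each tensor's quantization type (e.g. extracted with the gguf Python package or llama.cpp's gguf_dump.py); the function name and the sample tensor map are illustrative, not from any real model file:

```python
def impure_tensors(tensor_types: dict, expected: str = "Q4_0") -> dict:
    """Return tensors whose type is neither `expected` nor F32.

    1-D tensors such as norms are stored in F32 even in a pure
    quantization, so F32 does not count as an impurity here.
    """
    return {name: qtype for name, qtype in tensor_types.items()
            if qtype not in (expected, "F32")}

# A typical non-pure Q4_0 model: output.weight was upgraded to Q6_K.
mixed = {
    "token_embd.weight": "Q4_0",
    "blk.0.attn_norm.weight": "F32",
    "blk.0.attn_q.weight": "Q4_0",
    "output.weight": "Q6_K",
}
```

With the sample map above, `impure_tensors(mixed)` flags only `output.weight`, which is exactly the mixed-precision case described here.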
A pure Q4_0 quantization can be generated from a high-precision (F32, F16, BFLOAT16) .gguf source with the quantize utility from llama.cpp (renamed llama-quantize in newer builds) as follows:
./quantize --pure ./Meta-Llama-3-8B-Instruct-F32.gguf ./Meta-Llama-3-8B-Instruct-Q4_0.gguf Q4_0
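To make the format concrete, here is a minimal sketch of ggml's Q4_0 block layout that the command above produces: each block covers 32 weights and stores one fp16 scale d plus 32 packed 4-bit quants, with each weight recovered as w = d * (q - 8). This is an illustration of the format, not code from llama.cpp or llama3.java:

```python
import struct

QK4_0 = 32                      # weights per Q4_0 block
BLOCK_BYTES = 2 + QK4_0 // 2    # fp16 scale + 16 bytes of packed nibbles = 18

def quantize_q4_0_block(weights) -> bytes:
    """Quantize 32 floats into one 18-byte Q4_0 block (round-to-nearest)."""
    assert len(weights) == QK4_0
    amax = max(weights, key=abs)            # value with the largest magnitude
    d = amax / -8 if amax else 0.0          # scale so amax maps to quant 0
    inv = 1.0 / d if d else 0.0
    q = [max(0, min(15, int(w * inv + 8.5))) for w in weights]
    # low nibbles hold elements 0..15, high nibbles hold elements 16..31
    packed = bytes((q[i] & 0x0F) | (q[i + 16] << 4) for i in range(QK4_0 // 2))
    return struct.pack("<e", d) + packed

def dequantize_q4_0_block(block: bytes) -> list:
    """Dequantize one 18-byte Q4_0 block back into 32 floats: w = d * (q - 8)."""
    d = struct.unpack("<e", block[:2])[0]   # fp16 scale
    qs = block[2:]
    lo = [d * ((b & 0x0F) - 8) for b in qs]
    hi = [d * ((b >> 4) - 8) for b in qs]
    return lo + hi
```

At 18 bytes per 32 weights, Q4_0 uses 4.5 bits per weight, which is why a pure Q4_0 file is noticeably smaller than one whose output.weight stays in Q6_K.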