These are GGUF quantizations of the model LFM2-VL-3B.
Usage Notes:
- Download the latest llama.cpp to use these quantizations.
- Use the best quality quantization your hardware can run.
- For the mmproj file, the F32 version is recommended for best results (F32 > BF16 > F16).
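Once downloaded, the main model and the mmproj file are passed to llama.cpp's multimodal CLI together. A minimal sketch, assuming a recent llama.cpp build and a local test image; the GGUF file names below are illustrative placeholders:

```shell
# Load the main model together with its F32 mmproj projector
# (file names here are placeholders for the files you downloaded).
./llama-mtmd-cli \
  -m LFM2-VL-3B-Q8_0.gguf \
  --mmproj mmproj-LFM2-VL-3B-F32.gguf \
  --image photo.jpg \
  -p "Describe this image."
```

Pick the `-m` quantization to match your memory budget; the `--mmproj` projector stays the same across quantization levels.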
Model tree for noctrex/LFM2-VL-3B-GGUF
- Base model: LiquidAI/LFM2-VL-3B