---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- liquid
- lfm2
- edge
base_model: LiquidAI/LFM2-2.6B-Transcript
---

<div align="center">
<img
  src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/2b08LKpev0DNEk6DlnWkY.png"
  alt="Liquid AI"
  style="width: 100%; max-width: 100%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
/>
<div style="display: flex; justify-content: center; gap: 0.5em; margin-bottom: 1em;">
  <a href="https://playground.liquid.ai/"><strong>Try LFM</strong></a> •
  <a href="https://docs.liquid.ai/lfm"><strong>Documentation</strong></a> •
  <a href="https://leap.liquid.ai/"><strong>LEAP</strong></a>
</div>
</div>

# LFM2-2.6B-Transcript-GGUF

Based on [LFM2-2.6B](https://huggingface.co/LiquidAI/LFM2-2.6B), LFM2-2.6B-Transcript is designed for **private, on-device meeting summarization**. We partnered with AMD to deliver cloud-level summary quality while running entirely locally, so your meeting data never leaves your device.

**Highlights**:
- Cloud-level summary quality that approaches much larger models
- Under 3 GB of RAM usage, even for long meetings
- Summaries in seconds, not minutes
- Runs fully locally across CPU, GPU, and NPU

You can find more information about this model [here](https://huggingface.co/LiquidAI/LFM2-2.6B-Transcript).

## 🏃 How to run

Example usage with [llama.cpp](https://github.com/ggml-org/llama.cpp):

```bash
llama-cli -hf LiquidAI/LFM2-2.6B-Transcript-GGUF
```
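
The same GGUF weights can also be driven from Python via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The snippet below is a minimal sketch rather than an official recipe: the quantization filename pattern, context size, sampling settings, and the placeholder transcript are assumptions to adjust to the files published in this repository and to your hardware.

```python
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Download a quantized GGUF from this repo and load it locally.
# The filename glob is an assumption; pick the quant that fits your device.
llm = Llama.from_pretrained(
    repo_id="LiquidAI/LFM2-2.6B-Transcript-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=32768,  # room for a long meeting transcript; lower it to save RAM
)

# Placeholder transcript; replace with the real meeting text.
transcript = (
    "[00:00] Alice: Let's review the Q3 roadmap.\n"
    "[00:02] Bob: The mobile release slipped by two weeks.\n"
)

# llama-cpp-python applies the chat template embedded in the GGUF metadata, if present.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": f"Summarize the following meeting transcript:\n\n{transcript}"}
    ],
    max_tokens=512,
    temperature=0.1,
)

print(response["choices"][0]["message"]["content"])
```

Smaller quantizations reduce the memory footprint at some cost in summary quality; whichever file you pick, inference stays fully on-device.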
## 📬 Contact

If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).