Instructions to use oxide-lab/LTX-Video-0.9.5-diffusers with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use oxide-lab/LTX-Video-0.9.5-diffusers with Diffusers:
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained("oxide-lab/LTX-Video-0.9.5-diffusers", dtype=torch.bfloat16, device_map="cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
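LTX-Video is a text-to-video model, so the generic snippet above mainly demonstrates loading. A minimal video-generation sketch, assuming this checkpoint loads with the dedicated LTXPipeline class available in recent Diffusers releases (the resolution and frame-count values here are illustrative):

import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Load the dedicated LTX-Video pipeline in bfloat16 to reduce VRAM use
pipe = LTXPipeline.from_pretrained(
    "oxide-lab/LTX-Video-0.9.5-diffusers", torch_dtype=torch.bfloat16
)
pipe.to("cuda")  # switch to "mps" for apple devices

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
# Video pipelines return frames rather than images
video = pipe(prompt=prompt, width=768, height=512, num_frames=97).frames[0]
export_to_video(video, "output.mp4", fps=24)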
- llama-cpp-python
How to use oxide-lab/LTX-Video-0.9.5-diffusers with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="oxide-lab/LTX-Video-0.9.5-diffusers",
    filename="text_encoder_gguf/t5-v1_1-xxl-encoder-Q5_K_M.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
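The call returns an OpenAI-style completion dict; to print only the generated text rather than the whole response object, index into it (the standard llama-cpp-python response shape):

# Extract only the generated text from the completion dict
print(output["choices"][0]["text"])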
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use oxide-lab/LTX-Video-0.9.5-diffusers with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf oxide-lab/LTX-Video-0.9.5-diffusers:Q5_K_M

# Run inference directly in the terminal:
llama-cli -hf oxide-lab/LTX-Video-0.9.5-diffusers:Q5_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf oxide-lab/LTX-Video-0.9.5-diffusers:Q5_K_M

# Run inference directly in the terminal:
llama-cli -hf oxide-lab/LTX-Video-0.9.5-diffusers:Q5_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf oxide-lab/LTX-Video-0.9.5-diffusers:Q5_K_M

# Run inference directly in the terminal:
./llama-cli -hf oxide-lab/LTX-Video-0.9.5-diffusers:Q5_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf oxide-lab/LTX-Video-0.9.5-diffusers:Q5_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf oxide-lab/LTX-Video-0.9.5-diffusers:Q5_K_M
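Once llama-server is running, it exposes an OpenAI-compatible HTTP API on port 8080 by default. A minimal Python sketch for querying it, assuming the requests package is installed:

import requests

# Query the local llama-server OpenAI-compatible endpoint (default port 8080)
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={"messages": [{"role": "user", "content": "Once upon a time,"}]},
)
print(resp.json()["choices"][0]["message"]["content"])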
- LM Studio
- Jan
- Ollama
How to use oxide-lab/LTX-Video-0.9.5-diffusers with Ollama:
ollama run hf.co/oxide-lab/LTX-Video-0.9.5-diffusers:Q5_K_M
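Ollama also serves an OpenAI-compatible API on port 11434, so the pulled model can be queried programmatically. A minimal sketch, assuming the requests package; the model name must match the hf.co reference used above:

import requests

# Query Ollama's OpenAI-compatible endpoint for the pulled model
resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "hf.co/oxide-lab/LTX-Video-0.9.5-diffusers:Q5_K_M",
        "messages": [{"role": "user", "content": "Once upon a time,"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])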
- Unsloth Studio
How to use oxide-lab/LTX-Video-0.9.5-diffusers with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for oxide-lab/LTX-Video-0.9.5-diffusers to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for oxide-lab/LTX-Video-0.9.5-diffusers to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for oxide-lab/LTX-Video-0.9.5-diffusers to start chatting
- Docker Model Runner
How to use oxide-lab/LTX-Video-0.9.5-diffusers with Docker Model Runner:
docker model run hf.co/oxide-lab/LTX-Video-0.9.5-diffusers:Q5_K_M
- Lemonade
How to use oxide-lab/LTX-Video-0.9.5-diffusers with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull oxide-lab/LTX-Video-0.9.5-diffusers:Q5_K_M
Run and chat with the model
lemonade run user.LTX-Video-0.9.5-diffusers-Q5_K_M
List all available models
lemonade list
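Lemonade also runs an OpenAI-compatible server. A minimal Python sketch, assuming the default base URL http://localhost:8000/api/v1 (check the Lemonade docs for your install) and the requests package; the model name matches the one used with lemonade run above:

import requests

# Query Lemonade's OpenAI-compatible endpoint (default port assumed to be 8000)
resp = requests.post(
    "http://localhost:8000/api/v1/chat/completions",
    json={
        "model": "user.LTX-Video-0.9.5-diffusers-Q5_K_M",
        "messages": [{"role": "user", "content": "Once upon a time,"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])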
LTX-Video in Rust (Candle)
This repository provides a high-performance, native Rust implementation of LTX-Video using the Candle ML framework.
Features
- Native Rust: No Python dependency required for inference.
- Performance: Optimized for NVIDIA GPUs with Flash Attention v2 and cuDNN.
- Memory Efficient: Supports GGUF quantization for the T5-XXL text encoder and VAE tiling/slicing for generating HD videos on consumer GPUs.
- Flexible: Easy-to-use CLI for video generation and a library for custom integration.
Quick Start
Installation
Ensure you have Rust and the CUDA Toolkit installed, then:
git clone https://github.com/FerrisMind/candle-video
cd candle-video
cargo build --release --features flash-attn,cudnn
Video Generation
cargo run --example ltx-video --release -- \
--local-weights ./models/ltx-video \
--prompt "A serene mountain lake at sunset, photorealistic, 4k" \
--width 768 --height 512 --num-frames 97 \
--steps 30
Performance & Memory
| Resolution | Frames | VRAM (BF16) | VRAM (VAE Tiling) |
|---|---|---|---|
| 512x768 | 97 | ~8-13 GB | ~8-9 GB |
Note: Using the GGUF T5 encoder saves an additional ~8-12 GB of VRAM.
Credits
- Original Model: Lightricks/LTX-Video
- Framework: HuggingFace Candle
- T5 v1.1 XXL GGUF and Safetensors: city96/LTX-Video-gguf (also the source of the GGUF support patterns)
For more details, visit the main GitHub Repository.
Model tree for oxide-lab/LTX-Video-0.9.5-diffusers
Base model: Lightricks/LTX-Video-0.9.5