# Zen Musician

A music and audio generation model for creating soundtracks and compositions.
## Overview
Zen Musician is built on the Zen MoDE (Mixture of Distilled Experts) architecture with 1B parameters. It is developed by Hanzo AI and the Zoo Labs Foundation.
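The MoDE internals are not documented in this card. As a rough, hypothetical sketch of the mixture-of-experts routing that such architectures build on, the PyTorch layer below gates each token to its top-k expert MLPs. Every name and dimension here (`MoELayer`, `num_experts`, `top_k`, `d_model`) is illustrative, and the distillation aspect of MoDE is not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical mixture-of-experts layer; NOT the actual Zen MoDE implementation.
class MoELayer(nn.Module):
    def __init__(self, d_model=1024, d_ff=4096, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, num_experts)  # router producing expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        scores = self.gate(x)                           # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # route each token to its top-k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

layer = MoELayer()
print(layer(torch.randn(4, 1024)).shape)  # torch.Size([4, 1024])
```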
## Quick Start
```python
import librosa
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor

model_id = "zenlm/zen-musician"

# Load the processor and model in fp16 with automatic device placement
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Load audio at the 16 kHz sampling rate the processor expects
audio, sr = librosa.load("audio.wav", sr=16000)

inputs = processor(audio, sampling_rate=sr, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```
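`generate()` also accepts the standard transformers sampling arguments. The values below are illustrative defaults, not settings documented for zen-musician:

```python
# Sampled generation; these hyperparameters are an assumption, not documented behavior.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
```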
## Model Details
| Attribute | Value |
|---|---|
| Parameters | 1B |
| Architecture | Zen MoDE |
| Context | 30s audio |
| License | Apache 2.0 |
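Because the context window covers 30 seconds of audio, longer recordings presumably need to be split before processing. A minimal sketch, reusing the `processor` and `model` from the Quick Start and assuming simple non-overlapping windows (the chunking strategy is not documented for this model):

```python
import librosa

SR = 16000
WINDOW_S = 30  # matches the model's 30 s audio context

audio, _ = librosa.load("long_audio.wav", sr=SR)

# Split into consecutive 30-second windows and process each independently
chunks = [audio[i : i + WINDOW_S * SR] for i in range(0, len(audio), WINDOW_S * SR)]
for chunk in chunks:
    inputs = processor(chunk, sampling_rate=SR, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs)
    print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```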
## License
Apache 2.0