Zen Musician

Music and audio generation model for creating soundtracks and compositions.

Overview

Built on Zen MoDE (Mixture of Distilled Experts) architecture with 1B parameters.
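The internal routing of Zen MoDE is not documented here; as a point of reference, the sketch below shows generic top-k mixture-of-experts routing, the family of techniques the name suggests. All names, dimensions, and the use of plain linear experts are illustrative assumptions, not the actual Zen MoDE implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_forward(x, expert_ws, router_w, top_k=2):
    # Generic top-k MoE sketch (NOT the actual Zen MoDE code):
    # x: (batch, dim); expert_ws: list of (dim, dim) expert weights;
    # router_w: (dim, n_experts). Each row of x is routed to its
    # top_k experts, whose outputs are mixed by softmax weights.
    scores = x @ router_w                          # (batch, n_experts)
    idx = np.argsort(scores, axis=-1)[:, -top_k:]  # top_k expert ids per row
    out = np.zeros_like(x)
    for b in range(x.shape[0]):
        w = softmax(scores[b, idx[b]])             # mixing weights
        for k, e in enumerate(idx[b]):
            out[b] += w[k] * (x[b] @ expert_ws[e])
    return out

rng = np.random.default_rng(0)
dim, n_experts = 8, 4
experts = [rng.normal(size=(dim, dim)) for _ in range(n_experts)]
router = rng.normal(size=(dim, n_experts))
y = moe_forward(rng.normal(size=(3, dim)), experts, router)
print(y.shape)  # (3, 8)
```

Only the selected experts run per token, which is how MoE-style models keep inference cost below that of a dense model with the same total parameter count.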

Developed by Hanzo AI and the Zoo Labs Foundation.

Quick Start

from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
import torch

model_id = "zenlm/zen-musician"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Load audio at the 16 kHz sampling rate the processor expects
import librosa
audio, sr = librosa.load("audio.wav", sr=16000)

inputs = processor(audio, sampling_rate=sr, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])

Model Details

Attribute      Value
Parameters     1B
Architecture   Zen MoDE (Mixture of Distilled Experts)
Context        30 s audio
License        Apache 2.0
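Because the context covers 30 s of audio, longer recordings must be processed in windows. A minimal sketch of fixed-size windowing (the 30 s limit comes from the table above; the chunking helper itself is an assumption, not part of the model's API):

```python
import numpy as np

def chunk_audio(audio, sr=16000, window_s=30):
    # Split a 1-D audio array into consecutive windows of at most
    # window_s seconds, matching the model's 30 s context.
    step = window_s * sr
    return [audio[i:i + step] for i in range(0, len(audio), step)]

# 75 s of silence at 16 kHz -> three windows: 30 s, 30 s, 15 s
chunks = chunk_audio(np.zeros(75 * 16000))
print([len(c) / 16000 for c in chunks])  # [30.0, 30.0, 15.0]
```

Each window can then be passed through the processor and model.generate call from the Quick Start, with the per-window outputs concatenated afterwards.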

License

Apache 2.0

