abjadsr-he-pretrain
Fine-tuned from ivrit-ai/whisper-large-v3-turbo on Hebrew speech. Given audio, it outputs word-aligned hebrew_word=ascii_ipa pairs instead of a plain transcript.
Stage 1 of 2: pretrained on SASpeech (larger, noisier). For the final model, use abjadsr-he-finetune.
Training
- Dataset: SASpeech (~13 300 utterances, ~30h Hebrew)
- Checkpoint: step 21 000 (best by dev token accuracy)
- Dev token accuracy: 98.0%
- Dev loss: 0.096
- Base model: ivrit-ai/whisper-large-v3-turbo
- Learning rate: 1e-5, warmup 500 steps
- Batch size: 2 × 2 grad-accum steps (effective 4)
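For reference, the settings above collected into a plain Python dict (a sketch; the key names are illustrative and not taken from the actual training script):

```python
# Hyperparameters from the list above, gathered for easy reuse.
# Key names are illustrative, not from the original training script.
train_config = {
    "base_model": "ivrit-ai/whisper-large-v3-turbo",
    "learning_rate": 1e-5,
    "warmup_steps": 500,
    "per_device_batch_size": 2,
    "gradient_accumulation_steps": 2,
}

# Effective batch size = per-device batch x grad-accum steps
effective_batch = (train_config["per_device_batch_size"]
                   * train_config["gradient_accumulation_steps"])  # 4
```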
Output format
Each word is output as hebrew_word=ascii_ipa, space-separated:
החליט=hexl'it יוזם=j'uzem לנצל=lenats'el
IPA special characters are mapped to ASCII: ʃ→S, ʒ→Z, dʒ→dZ, tʃ→tS, ʔ→q, ˈ→', ʁ→r, χ→x, ɡ→g.
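The mapping above can be expressed as a small helper for converting standard IPA into the model's ASCII scheme (a sketch; the function name is mine, and the IPA-side characters are as described in the mapping):

```python
# IPA -> ASCII mapping as described above. Multi-character affricates
# come first so they are replaced before their single-character parts.
IPA_TO_ASCII = {
    "dʒ": "dZ", "tʃ": "tS",
    "ʃ": "S", "ʒ": "Z", "ʔ": "q", "ˈ": "'", "ʁ": "r", "χ": "x", "ɡ": "g",
}

def ipa_to_ascii(s: str) -> str:
    """Convert an IPA string to the model's ASCII notation."""
    for ipa, ascii_ in IPA_TO_ASCII.items():
        s = s.replace(ipa, ascii_)
    return s
```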
Usage
```python
import torch
import soundfile as sf
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model_id = "malper/abjadsr-he-pretrain"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)
model.eval()

# Load audio (must be 16 kHz mono float32)
audio, sr = sf.read("audio.wav", dtype="float32", always_2d=False)
# Resample if needed: torchaudio.functional.resample(torch.from_numpy(audio), sr, 16000).numpy()

inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
forced_ids = processor.get_decoder_prompt_ids(language="he", task="transcribe")

with torch.no_grad():
    generated = model.generate(
        inputs.input_features,
        forced_decoder_ids=forced_ids,
        max_new_tokens=444,
    )

output = processor.batch_decode(generated, skip_special_tokens=True)[0].strip()
print(output)
# e.g. "החליט=hexl'it יוזם=j'uzem"

# Parse into (hebrew_word, ascii_ipa) pairs
pairs = [tuple(token.split("=", 1)) for token in output.split() if "=" in token]
# [("החליט", "hexl'it"), ("יוזם", "j'uzem")]
```
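The 16 kHz requirement can also be met without torchaudio; a minimal resampling sketch using scipy (an assumption, not part of this model card; the function name is mine):

```python
# Resample arbitrary-rate audio to the 16 kHz mono float32 the model expects.
# Uses scipy's polyphase resampler; torchaudio works equally well.
from math import gcd

import numpy as np
from scipy.signal import resample_poly

def to_16k(audio: np.ndarray, sr: int) -> np.ndarray:
    """Return `audio` resampled from `sr` to 16 kHz as float32."""
    if sr == 16000:
        return audio.astype(np.float32)
    g = gcd(sr, 16000)
    return resample_poly(audio, 16000 // g, sr // g).astype(np.float32)
```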