---
license: apache-2.0
language:
  - pt
base_model: openai/whisper-tiny
tags:
  - automatic-speech-recognition
  - whisper
  - portuguese
  - speech
  - audio
  - synthetic-data
  - asr
  - hf-asr-leaderboard
datasets:
  - mozilla-foundation/common_voice_17_0
  - yuriyvnv/synthetic_transcript_pt
model-index:
  - name: whisper-tiny-high-mixed-pt
    results:
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: Common Voice 17.0 (Portuguese)
          type: mozilla-foundation/common_voice_17_0
          config: pt
          split: test
        metrics:
          - type: wer
            value: 29.33
            name: Test WER
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: Multilingual LibriSpeech (Portuguese)
          type: facebook/multilingual_librispeech
          config: portuguese
          split: test
        metrics:
          - type: wer
            value: 44.18
            name: Test WER (MLS)
pipeline_tag: automatic-speech-recognition
library_name: transformers
---

# Whisper-Tiny Portuguese - High-Quality Filtered Synthetic Data (Best Tiny Configuration)

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) for Portuguese automatic speech recognition (ASR). It was trained on Common Voice 17.0 Portuguese combined with WAVe-filtered high-quality synthetic speech data using a strict threshold (q ≥ 0.8).

## Purpose

This model represents the best configuration for Whisper-Tiny Portuguese, achieving a 1.39 percentage point improvement over the CV-only baseline. However, the paper emphasizes that this gain is modest:

"The Portuguese Whisper-Tiny model achieves its lowest test WER of 29.33% using the high-quality filtered subset, an improvement of just 1.39 percentage points over the Common Voice baseline of 30.72%. This modest gain offers limited justification for the additional data filtering and preprocessing overhead."

This model demonstrates that while high-quality filtering provides the best results for Tiny, the improvement is marginal compared to the dramatic gains seen with Large-v3 models (a 32.6% relative WER reduction).

## Model Details

| Property | Value |
|----------|-------|
| Base Model | [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) |
| Language | Portuguese (pt) |
| Task | Automatic Speech Recognition (transcribe) |
| Parameters | 39M |
| Training Data | Common Voice 17.0 + High-Quality Synthetic (q ≥ 0.8) |
| Total Training Samples | 29,178 |
| Sampling Rate | 16 kHz |

## Evaluation Results

### This Model (whisper-tiny-high-mixed-pt)

| Metric | Value |
|--------|-------|
| Validation Loss | 0.4481 |
| Validation WER | 26.74% |
| Test WER (Common Voice) | 29.33% |
| Test WER (MLS) | 44.18% |
| Best Checkpoint | Step 350 |
| Max Training Steps | 575 |

### Comparison with Other Training Configurations (Whisper-Tiny Portuguese)

| Training Data | Max Steps | Val Loss | Val WER | Test WER (CV) | Test WER (MLS) |
|---------------|-----------|----------|---------|---------------|----------------|
| Common Voice Only | 430 | 0.4463 | 27.05% | 30.72% | 45.83% |
| **High-Quality (q ≥ 0.8) + CV** | 575 | 0.4481 | 26.74% | **29.33%** | **44.18%** |
| Mid-High (q ≥ 0.5) + CV | 805 | 0.4550 | 26.95% | 30.11% | 47.25% |
| All Synthetic + CV | 860 | 0.4517 | 28.06% | 29.84% | 46.54% |
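
To spot-check numbers like these, a minimal evaluation sketch using the `evaluate` library is shown below. The text normalization behind the reported WERs isn't documented here and the dataset is gated, so exact reproduction on a small subset shouldn't be expected:

```python
# Minimal WER check on a subset of the Common Voice 17.0 Portuguese test split.
# Requires `pip install evaluate jiwer` and access to the gated dataset.
import evaluate
from datasets import Audio, load_dataset
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-tiny-high-mixed-pt",
)

ds = load_dataset(
    "mozilla-foundation/common_voice_17_0", "pt", split="test", streaming=True
)
ds = ds.cast_column("audio", Audio(sampling_rate=16000))

wer = evaluate.load("wer")
refs, hyps = [], []
for sample in ds.take(32):  # small sample; the full split was used for the reported 29.33%
    refs.append(sample["sentence"])
    hyps.append(
        transcriber({"array": sample["audio"]["array"], "sampling_rate": 16000})["text"]
    )

print(f"WER: {100 * wer.compute(references=refs, predictions=hyps):.2f}%")
```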

### Key Performance Highlights

- **Best Tiny configuration**: lowest Test WER (29.33%) and MLS WER (44.18%)
- **Modest improvement**: only 1.39 percentage points better than the baseline in-domain
- **Best cross-domain**: 44.18% MLS WER, the best among Tiny configurations
- **Quality threshold matters**: strict q ≥ 0.8 filtering gives the best results for Tiny

## Tiny vs Large: Synthetic Data Impact

The contrast with Large-v3 models illustrates the architectural capacity limitation:

| Model | High-Quality Synthetic Impact | Test WER | vs Baseline |
|-------|-------------------------------|----------|-------------|
| Whisper-Tiny | Best config, but marginal | 29.33% | +1.39 pp (4.5% relative) |
| Whisper-Large-v3 | Dramatic improvement | 7.94% | +3.84 pp (32.6% relative) |

For Large-v3, high-quality synthetic data reduces WER by 32.6% relative. For Tiny, the same approach yields only a 4.5% relative improvement.
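
The relative figures follow directly from the absolute WERs (the Large-v3 baseline of 11.78% is implied by the +3.84 pp entry above):

```python
# Relative improvement = (baseline WER - new WER) / baseline WER
tiny = (30.72 - 29.33) / 30.72    # ≈ 0.045 → 4.5% relative
large = (11.78 - 7.94) / 11.78    # ≈ 0.326 → 32.6% relative
print(f"Tiny: {tiny:.1%}, Large-v3: {large:.1%}")
```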

## Training Data

### Dataset Composition

| Source | Samples | Description |
|--------|---------|-------------|
| Common Voice 17.0 Portuguese | 21,866 | Real speech from Mozilla's crowdsourced dataset |
| Synthetic Transcript PT (q ≥ 0.8) | 7,312 | Strictly WAVe-filtered TTS audio (high quality only) |
| **Total** | **29,178** | |

### WAVe Quality Distribution (Portuguese Synthetic Data)

| Quality Level | Samples | Percentage | Used in This Model |
|---------------|---------|------------|--------------------|
| High (q ≥ 0.8) | 7,312 | 33.3% | Yes |
| Medium (0.5 ≤ q < 0.8) | 11,869 | 54.0% | No |
| Low (q < 0.5) | 2,787 | 12.7% | No |

This strict threshold retains only the top 33.3% of synthetic samples, which proves optimal for Tiny models that cannot handle noisier data.
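
Applying the threshold itself is a one-line filter. The sketch below assumes the synthetic dataset exposes a per-sample WAVe score in a column called `wave_score` (a hypothetical name; the actual schema isn't documented here):

```python
from datasets import concatenate_datasets, load_dataset

synthetic = load_dataset("yuriyvnv/synthetic_transcript_pt", split="train")
# Keep only strictly filtered, high-quality samples (q >= 0.8).
# "wave_score" is a hypothetical column name.
high_quality = synthetic.filter(lambda ex: ex["wave_score"] >= 0.8)

cv = load_dataset("mozilla-foundation/common_voice_17_0", "pt", split="train")
# Mixed training set: real speech + strictly filtered synthetic speech.
# (Aligning the two schemas to identical columns is omitted for brevity.)
train_set = concatenate_datasets([cv, high_quality])
```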

## Training Procedure

### Hyperparameters

| Parameter | Value |
|-----------|-------|
| Learning Rate | 5e-5 |
| Batch Size (Global) | 256 |
| Warmup Steps | 200 |
| Max Epochs | 5 |
| Precision | BF16 |
| Optimizer | AdamW (fused) |
| Eval Steps | 50 |
| Metric for Best Model | eval_loss |
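
For reference, a `Seq2SeqTrainingArguments` configuration mirroring this table might look as follows. This is an illustrative sketch, not the exact training script; in particular, how the global batch size of 256 was split across devices and gradient accumulation is not documented here:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-high-mixed-pt",
    learning_rate=5e-5,
    per_device_train_batch_size=256,  # global batch size; device/accumulation split unknown
    warmup_steps=200,
    num_train_epochs=5,
    bf16=True,
    optim="adamw_torch_fused",
    eval_strategy="steps",
    eval_steps=50,
    save_steps=50,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    predict_with_generate=True,
)
```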

### Training Infrastructure

- **GPU**: NVIDIA H200 (141 GB VRAM)
- **Operating System**: Ubuntu 22.04
- **Framework**: Hugging Face Transformers

## Usage

### Transcription Pipeline

```python
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-tiny-high-mixed-pt",
    device="cuda",
)

result = transcriber("path/to/portuguese_audio.wav")
print(result["text"])
```
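
Whisper operates on 30-second windows; for longer recordings, the pipeline can chunk the input and stitch the results automatically:

```python
transcriber = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-tiny-high-mixed-pt",
    device="cuda",
    chunk_length_s=30,  # enable chunked long-form transcription
)
```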

### Direct Model Usage

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
import librosa

processor = WhisperProcessor.from_pretrained("yuriyvnv/whisper-tiny-high-mixed-pt")
model = WhisperForConditionalGeneration.from_pretrained("yuriyvnv/whisper-tiny-high-mixed-pt")
model.to("cuda")

audio, sr = librosa.load("path/to/portuguese_audio.wav", sr=16000)
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features.to("cuda")

predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)
```

### Specifying Language

To pin decoding to Portuguese transcription, set the generation config once:

```python
model.generation_config.language = "pt"
model.generation_config.task = "transcribe"
```
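
When using the pipeline instead, the same settings can be passed per call through `generate_kwargs`:

```python
result = transcriber(
    "path/to/portuguese_audio.wav",
    generate_kwargs={"language": "pt", "task": "transcribe"},
)
```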

## When to Use This Model

This model is ideal when:

- **Best Tiny accuracy needed**: 29.33% WER, the best among Tiny configurations
- **Resource-constrained deployment**: 39M parameters suit edge devices
- **Cross-domain robustness for Tiny**: best MLS performance (44.18%)
- **Quality-filtered augmentation available**: you have WAVe-scored synthetic data

If these conditions do not apply, consider the Common Voice-only baseline (nearly identical accuracy with a simpler data pipeline) or a larger Whisper variant (substantially lower WER).

## Research Implications

This model demonstrates an important finding:

**High-quality filtering is necessary but not sufficient for smaller models.**

For Tiny models:

- Strict filtering (q ≥ 0.8) is the only configuration that improves both in-domain and cross-domain WER
- Mid-high filtering (q ≥ 0.5) barely helps in-domain (30.11% vs 30.72%) and hurts cross-domain (47.25% vs 45.83% on MLS)
- Unfiltered synthetic data yields worse results than strict filtering
- The improvement is marginal regardless of filtering quality

**Recommendation**: for resource-constrained deployments, the baseline CV-only model may be more practical, since the marginal 1.39 percentage point gain does not justify the additional data filtering and preprocessing overhead.

## Limitations

- **Lower accuracy than larger models**: 29.33% WER vs 7.94% for Large-v3
- **Marginal improvement over baseline**: only 1.39 percentage points
- **Limited capacity**: cannot fully leverage synthetic data benefits
- **Domain specificity**: optimized for general Portuguese
- **Dialect coverage**: performance may vary across regional variants of Portuguese

## Citation

This model is part of research on WAVe (Word-Aligned Verification) for synthetic speech quality assessment. While the WAVe methodology paper is currently under review, please cite our previous work that motivated this research:

```bibtex
@article{perezhohin2024enhancing,
  title={Enhancing Automatic Speech Recognition: Effects of Semantic Audio Filtering on Models Performance},
  author={Perezhohin, Yuriy and Santos, Tiago and Costa, Victor and Peres, Fernando and Castelli, Mauro},
  journal={IEEE Access},
  year={2024},
  publisher={IEEE}
}
```


## License

Apache 2.0