Instructions to use MU-NLPC/whisper-tiny-audio-captioning with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers
- Notebooks
  - Google Colab
  - Kaggle

How to use MU-NLPC/whisper-tiny-audio-captioning with Transformers:

```python
# Load model directly
from transformers import AutoProcessor, WhisperForAudioCaptioning

processor = AutoProcessor.from_pretrained("MU-NLPC/whisper-tiny-audio-captioning")
model = WhisperForAudioCaptioning.from_pretrained("MU-NLPC/whisper-tiny-audio-captioning")
```
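The loading snippet covers initialization only. Below is a minimal end-to-end sketch of producing a caption. It assumes `WhisperForAudioCaptioning` is importable as shown in the snippet (it is a custom class distributed with the model, not part of stock `transformers`), that the checkpoint expects 16 kHz mono audio like other Whisper models, and that it follows the standard `generate`/`decode` interface; the helper name `caption_audio` and the placeholder waveform are illustrative, not part of the model card.

```python
import numpy as np

CHECKPOINT = "MU-NLPC/whisper-tiny-audio-captioning"

def caption_audio(waveform: np.ndarray, sampling_rate: int = 16_000) -> str:
    """Generate a caption for a mono float32 waveform.

    Assumption: the custom WhisperForAudioCaptioning class is available in the
    environment (e.g. installed alongside the model); the call pattern below
    mirrors the standard Whisper generation interface.
    """
    # Imported lazily so this module loads even without transformers installed.
    from transformers import AutoProcessor, WhisperForAudioCaptioning

    processor = AutoProcessor.from_pretrained(CHECKPOINT)
    model = WhisperForAudioCaptioning.from_pretrained(CHECKPOINT)

    # Convert raw audio to log-mel input features expected by Whisper.
    features = processor(
        waveform, sampling_rate=sampling_rate, return_tensors="pt"
    ).input_features
    output_ids = model.generate(input_features=features, max_length=100)
    return processor.decode(output_ids[0], skip_special_tokens=True)

# One second of silence as a placeholder; real use would load a 16 kHz audio file.
dummy = np.zeros(16_000, dtype=np.float32)
# caption = caption_audio(dummy)  # downloads the checkpoint on first call
```

In real use, replace the placeholder with audio loaded and resampled to 16 kHz (for example via librosa or torchaudio).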
Update README.md
README.md CHANGED:

```diff
@@ -47,7 +47,7 @@ A transformer encoder-decoder model for automatic audio captioning. As opposed t
 - **Parent Model:** openai/whisper-tiny
 - **Resources for more information:**
 - [GitHub Repo](https://github.com/prompteus/audio-captioning)
-- [Technical Report](
+- [Technical Report](https://arxiv.org/abs/2305.09690)


 ## Usage
```