# MAAP LAB - Music AI & Audio Research
“Music AI Assemble People, MAAP!”
Focus areas:
Audio Generation · Music Tagging · Voice Conversion · Transformers · Diffusion
## Mission
Advance the foundations of Music AI through practical research, then share our results openly with the community.
## Open Science
We aim to publish at top venues (e.g., ICASSP, ISMIR, AAAI) and release code, models, and datasets whenever possible.
## Latest News
- NeurIPS Workshop 2025:
  - Accepted: AIBA: Attention-based Instrument Band Alignment for Text-to-Audio Diffusion
- ICASSP 2026 submission:
  - Jamendo-QA: A Large-Scale Music Question Answering Dataset (preprint on arXiv)
- GPU resources via university support: NVIDIA A100, A6000, RTX 4090
## Our Activities
### Project 1 - Music Tagging (Completed)
- Built a tagging and augmentation pipeline using CLAP, beam search, and Stable Audio (see the sketch after this list)
- Focus: dataset augmentation/creation for future work
- Targets: short-term word generation → long-term sentence generation with LLMs
- Outcome: 2 NeurIPS Workshop submissions (1 accepted, 1 rejected and resubmitted to ICASSP)
  - AIBA: accepted
  - Jamendo-QA: preprint on arXiv
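To make the tagging step concrete, here is a minimal zero-shot music-tagging sketch using the public CLAP checkpoint available through Hugging Face `transformers`. It illustrates the general technique only, not the lab's actual pipeline; the checkpoint name, candidate tag list, and audio file are assumptions, and the beam-search captioning and Stable Audio augmentation stages are not shown.

```python
# Minimal zero-shot music-tagging sketch with CLAP (illustration only; not the
# lab's actual pipeline). Checkpoint, tag list, and file name are assumptions.
import librosa
import torch
from transformers import ClapModel, ClapProcessor

CANDIDATE_TAGS = ["rock", "jazz", "classical", "electronic",
                  "acoustic guitar", "piano", "female vocals"]

model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

# CLAP's audio encoder expects 48 kHz mono waveforms.
waveform, _ = librosa.load("example_track.wav", sr=48_000, mono=True)

inputs = processor(text=CANDIDATE_TAGS, audios=waveform,
                   sampling_rate=48_000, return_tensors="pt", padding=True)

with torch.no_grad():
    # Contrastive audio-text similarities, shape (1, num_tags).
    logits = model(**inputs).logits_per_audio

# Rank candidate tags by similarity; softmax is only for readable scores.
scores = logits.softmax(dim=-1).squeeze(0)
for tag, score in sorted(zip(CANDIDATE_TAGS, scores.tolist()),
                         key=lambda pair: -pair[1]):
    print(f"{tag:>16s}  {score:.3f}")
```

In a pipeline like the one described above, the top-ranked tags could then seed LLM-based sentence generation or condition audio augmentation, but those stages are project-specific and omitted here.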
### Project 2 - Self-Supervised Learning (Ongoing)
### Project 3 - Music Generation with Reinforcement Learning (Ongoing)
### Project 4 - Pitch Estimation (Ongoing)
### Project 5 - Extending the QA Dataset (Ongoing)
## Publications & Submissions
- AIBA: Attention-based Instrument Band Alignment for Text-to-Audio Diffusion · Accepted at NeurIPS Workshop 2025
- Jamendo-QA: A Large-Scale Music Question Answering Dataset · Submitted to ICASSP 2026 · Preprint on arXiv
## Get Involved
Interested in collaborating on Music AI? We welcome discussions on datasets, evaluation, and model design.
Contact: [email protected]
© 2025 MAAP LAB • Built with ❤️ for music & AI.