---
license: apache-2.0
task_categories:
- reinforcement-learning
language:
- en
---
# [NeurIPS 2025] Enhancing the Outcome Reward-based RL Training of MLLMs with Self-Consistency Sampling
**A simple, general sampling method for RLVR on multiple-choice datasets that tackles the unfaithful reasoning phenomenon!**
# SCS Resources
[**📖 Paper**](https://arxiv.org/abs/2511.10648) | [**🤗 Dataset**](https://huggingface.co/datasets/GenuineWWD/SCS_data) | [**💻 Code**](https://github.com/GenuineWWD/SCS)
## 🔔News
- **🔥[2025-11-9] Released the evaluation code! 🚀**
- **🔥[2025-10-13] Released the dataset and the code! 🚀**
- **🔥[2025-9-17] Our SCS paper is accepted at NeurIPS 2025! 🚀**
## To-do
- [x] Release the evaluation code
## 📖 Introduction
**Self‑Consistency Sampling (SCS)** improves outcome‑reward reinforcement learning for multimodal large language models (MLLMs). In multiple‑choice reasoning tasks, models often get the correct answer through faulty reasoning and receive unmerited rewards. SCS mitigates this by introducing visual perturbations and repeated resampling of reasoning trajectories, rewarding only consistent reasoning paths. Integrated into methods like RLOO, GRPO, and REINFORCE++, SCS boosts accuracy by up to **7.7%** on six multimodal benchmarks with minimal extra cost, and generalizes across models including **Qwen2.5‑VL** and **InternVL3**.
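To make the idea concrete, below is a minimal, hypothetical sketch of a consistency-gated outcome reward, not the authors' implementation (see the code repo for that). The helper names (`sample_answer`, `perturb_image`) and the consistency threshold are illustrative assumptions.

```python
# Hypothetical sketch of an SCS-style reward gate (not the official implementation).
# `sample_answer(question, image)` is an assumed callable returning the model's
# final multiple-choice answer for a (possibly perturbed) input.
from collections import Counter
from typing import Callable, List


def perturb_image(image: bytes, seed: int) -> bytes:
    """Placeholder visual perturbation; a real setup might crop, mask, or add noise."""
    return image  # identity here, only to keep the sketch runnable


def self_consistency_reward(
    sample_answer: Callable[[str, bytes], str],
    question: str,
    image: bytes,
    gold: str,
    first_answer: str,
    num_resamples: int = 4,
    threshold: float = 0.75,
) -> float:
    """Outcome reward granted only when the correct answer is reproduced consistently."""
    if first_answer != gold:
        return 0.0  # wrong outcome: no reward, as in standard outcome-reward RL
    # Resample reasoning trajectories on visually perturbed inputs.
    answers: List[str] = [
        sample_answer(question, perturb_image(image, seed=i))
        for i in range(num_resamples)
    ]
    # Reward only consistent reasoning; an unreproducible correct answer is
    # treated as likely unfaithful reasoning and gets no reward.
    consistency = Counter(answers)[gold] / num_resamples
    return 1.0 if consistency >= threshold else 0.0
```

The resulting scalar can replace the plain outcome reward inside RLOO, GRPO, or REINFORCE++ without changing the rest of the training loop.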

## Training
Please refer to the [code repo](https://github.com/GenuineWWD/SCS) for more details.
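As a quick start, the dataset itself can be pulled from the Hub before launching training. A minimal loading sketch follows; the split and field names are assumptions, so check the repo for the exact data format.

```python
# Minimal sketch for inspecting the SCS dataset from the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("GenuineWWD/SCS_data")
print(ds)                              # list available splits
first_split = next(iter(ds.keys()))
print(ds[first_split][0])              # peek at one example's fields
```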
## Evaluation
Please refer to the [code repo](https://github.com/GenuineWWD/SCS) for more details.
## Contact
- Jiahao Wang: [email protected]
- Weiye Xu: [email protected]
## Citation
**BibTeX:**
```bibtex
@article{wang2025enhancing,
title={Enhancing the Outcome Reward-based RL Training of MLLMs with Self-Consistency Sampling},
author={Wang, Jiahao and Xu, Weiye and Yang, Aijun and Zhou, Wengang and Lu, Lewei and Li, Houqiang and Wang, Xiaohua and Zhu, Jinguo},
journal={arXiv preprint arXiv:2511.10648},
year={2025}
}
```