---
configs:
- config_name: ParamBench
data_files:
- path: ParamBench*
split: test
language:
- hi
tags:
- benchmark
---
# Dataset Card for ParamBench
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Subject Distribution](#subject-distribution)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact](#social-impact)
- [Evaluation Guidelines](#evaluation-guidelines)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
- [License](#license)
- [Acknowledgments](#acknowledgments)
## Dataset Description
- **Homepage:** [ParamBench GitHub Repository](https://github.com/bharatgenai/ParamBench)
- **Repository:** [https://github.com/bharatgenai/ParamBench](https://github.com/bharatgenai/ParamBench)
- **Paper:** [ParamBench: A Graduate-Level Benchmark for Evaluating LLM Understanding on Indic Subjects](https://arxiv.org/abs/2508.16185)
### Dataset Summary
ParamBench is a comprehensive graduate-level benchmark designed to evaluate Large Language Models (LLMs) on their understanding of Indic subjects. The dataset contains **17,275 multiple-choice questions** in **Hindi** across **21 diverse subjects** from Indian competitive examinations.
This benchmark addresses a critical gap in evaluating LLMs on culturally and linguistically diverse content, specifically focusing on India-specific knowledge domains that are underrepresented in existing benchmarks.
### Supported Tasks
This dataset supports the following tasks:
- `multiple-choice-qa`: The dataset can be used to evaluate language models on multiple-choice question answering in Hindi
- `cultural-knowledge-evaluation`: Assessing LLM understanding of India-specific cultural and academic content
- `subject-wise-evaluation`: Fine-grained analysis of model performance across 21 different subjects
- `question-type-evaluation`: Detailed analysis of model performance across different question types (Normal MCQ, Assertion and Reason, Blank-filling, etc.)
### Languages
The dataset is in **Hindi** (hi).
## Dataset Structure
### Data Instances
An example from the dataset:
```json
{
"unique_question_id": "5d210d8db510451d6bf01b493a0f4430",
"subject": "Anthropology",
"exam_name": "Question Papers of NET Dec. 2012 Anthropology Paper III hindi",
"paper_number": "Question Papers of NET Dec. 2012 Anthropology Paper III hindi",
"question_number": 1,
"question_text": "भारतीय मध्य पाषाणकाल निम्नलिखित में से किस स्थान पर सर्वोत्तम प्रदर्शित है ?",
"option_a": "गिद्दालूर",
"option_b": "नेवासा",
"option_c": "टेरी समूह",
"option_d": "बागोर",
"correct_answer": "D",
"question_type": "Normal MCQ"
}
```
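In English, this question asks: "At which of the following sites is the Indian Mesolithic best represented?" The options are Giddalur, Nevasa, the Teri group, and Bagor (the correct answer, D).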
### Data Fields
- `unique_question_id` (string): Unique identifier for each question
- `subject` (string): One of 21 subject categories
- `exam_name` (string): Name of the source examination
- `paper_number` (string): Paper/section identifier
- `question_number` (int): Question number in the original exam
- `question_text` (string): The question text in Hindi
- `option_a` (string): First option
- `option_b` (string): Second option
- `option_c` (string): Third option
- `option_d` (string): Fourth option
- `correct_answer` (string): Correct option (A, B, C, or D)
- `question_type` (string): Type of question (Normal MCQ, Assertion and Reason, etc.)
### Data Splits
The dataset contains a single `test` split with 17,275 questions.
| Split | Number of Questions |
|-------|-------------------|
| test | 17,275 |
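A minimal loading sketch using the Hugging Face `datasets` library; the repository id `bharatgenai/ParamBench` is an assumption based on this card's links and may need adjusting:

```python
from datasets import load_dataset

# Repository id assumed from this card's links; adjust if hosted elsewhere.
ds = load_dataset("bharatgenai/ParamBench", split="test")

print(len(ds))                  # expected: 17275
print(ds[0]["question_text"])   # Hindi question text
print(ds[0]["correct_answer"])  # "A", "B", "C", or "D"
```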
## Subject Distribution
The 21 subjects covered in ParamBench (sorted by number of questions):
| Subject | Number of Questions | Percentage |
|---------|-------------------|------------|
| Education | 1,199 | 6.94% |
| Sociology | 1,191 | 6.89% |
| Anthropology | 1,139 | 6.60% |
| Psychology | 1,102 | 6.38% |
| Archaeology | 1,076 | 6.23% |
| History | 996 | 5.77% |
| Comparative Study of Religions | 954 | 5.52% |
| Law | 951 | 5.51% |
| Indian Culture | 927 | 5.37% |
| Economics | 919 | 5.32% |
| Current Affairs | 833 | 4.82% |
| Philosophy | 817 | 4.73% |
| Political Science | 774 | 4.48% |
| Drama and Theatre | 649 | 3.76% |
| Sanskrit | 639 | 3.70% |
| Karnataka Music | 617 | 3.57% |
| Tribal and Regional Language | 611 | 3.54% |
| Percussion Instruments | 596 | 3.45% |
| Defence and Strategic Studies | 521 | 3.02% |
| Music | 433 | 2.51% |
| Yoga | 331 | 1.92% |
| **Total** | **17,275** | **100%** |
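The counts above can be recomputed from the data itself; a short sketch using pandas, reusing `ds` from the loading example above:

```python
import pandas as pd

df = ds.to_pandas()  # `ds` from the loading sketch above
counts = df["subject"].value_counts()
print(pd.DataFrame({
    "questions": counts,
    "percent": (counts / len(df) * 100).round(2),
}))
```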
## Considerations for Using the Data
### Social Impact
This dataset aims to:
- Promote development of culturally-aware AI systems
- Reduce bias in LLMs towards Western-centric knowledge
- Support research in multilingual and multicultural AI
- Enhance LLM capabilities for Indian languages and contexts
### Evaluation Guidelines
When evaluating models on ParamBench, follow these guidelines (a code sketch implementing them follows the list):
1. Use greedy decoding (temperature=0) for consistent results
2. Evaluate responses based on exact match with correct options (A, B, C, or D)
3. Consider subject-wise performance for detailed analysis
4. Report both overall accuracy and per-subject breakdowns
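The sketch below is one possible harness, not the authors' official evaluation script. The prompt template and the `generate` callable (any function mapping a prompt string to model text, run with greedy decoding) are assumptions:

```python
import re
from collections import defaultdict

def build_prompt(q):
    # Hypothetical prompt template; the paper's exact template may differ.
    return (
        f"{q['question_text']}\n"
        f"A. {q['option_a']}\n"
        f"B. {q['option_b']}\n"
        f"C. {q['option_c']}\n"
        f"D. {q['option_d']}\n"
        "Answer with a single letter (A, B, C, or D):"
    )

def extract_choice(text):
    # Take the first standalone A/B/C/D in the model's reply.
    m = re.search(r"\b([ABCD])\b", text or "")
    return m.group(1) if m else None

def evaluate(ds, generate):
    """`generate`: prompt string -> model text; use temperature=0 (guideline 1)."""
    per_subject = defaultdict(lambda: [0, 0])  # subject -> [correct, total]
    for q in ds:
        pred = extract_choice(generate(build_prompt(q)))
        stats = per_subject[q["subject"]]
        stats[1] += 1
        stats[0] += int(pred == q["correct_answer"])  # exact match (guideline 2)
    total = sum(t for _, t in per_subject.values())
    overall = sum(c for c, _ in per_subject.values()) / max(1, total)
    per_subject_acc = {s: c / t for s, (c, t) in per_subject.items()}
    return overall, per_subject_acc  # report both (guidelines 3 and 4)
```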
## Additional Information
### Dataset Curators
Key contributors include:
- [Ayush Maheshwari](https://huggingface.co/acomquest)
- Kaushal Sharma
- [Vivek Patel](https://bento.me/vivek-patel)
- Aditya Maheshwari
We thank all data annotators involved in the dataset curation process.
### Citation Information
If you use ParamBench in your research, please cite:
```bibtex
@article{parambench2025,
  title={ParamBench: A Graduate-Level Benchmark for Evaluating LLM Understanding on Indic Subjects},
  author={[Author Names]},
  journal={arXiv preprint arXiv:2508.16185},
  year={2025},
  url={https://arxiv.org/abs/2508.16185}
}
```
### License
This dataset is released for **non-commercial research and evaluation purposes**.
### Acknowledgments
We thank all the contributors who helped create this benchmark.
---
**Note**: This dataset is part of our ongoing effort to make AI systems more inclusive and culturally aware. We encourage researchers to use this benchmark to evaluate and improve their models' understanding of Indic content.