Modalities: Text
Formats: parquet
Languages: Hindi
ArXiv: 2508.16185
Libraries: Datasets, pandas
vjdevane committed 29355c2 (verified) · 1 Parent(s): 4a1b82d

Upload 2 files

Files changed (2):
  1. ParamBench.parquet +3 -0
  2. README.md +125 -0
ParamBench.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:b354c063220b0154200270919e50f412ae9c334a226e288f442e8f041e7e3e5c
size 4747403
README.md ADDED

# ParamBench: A Graduate-Level Benchmark for Evaluating LLM Understanding on Indic Subjects

<div align="center">

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![arXiv](https://img.shields.io/badge/arXiv-2508.16185-f9f107.svg)](https://arxiv.org/pdf/2508.16185)
</div>

## 📋 Overview

ParamBench is a comprehensive graduate-level benchmark in Hindi designed to evaluate Large Language Models (LLMs) on their understanding of Indic subjects. The benchmark contains **17,275 multiple-choice questions** across **21 subjects**, covering a wide range of topics from Indian competitive examinations.

This benchmark is specifically designed to:
- Assess LLM performance on culturally and linguistically diverse content
- Evaluate understanding of India-specific knowledge domains
- Support the development of more culturally aware AI systems

## 🎯 Key Features

- **17,275 Questions**: Extensive collection of graduate-level MCQs in Hindi
- **21 Subjects**: Comprehensive coverage of diverse academic domains
- **Standardized Format**: Consistent question structure for reliable evaluation
- **Automated Evaluation**: Scripts for benchmarking and analysis
- **Detailed Metrics**: Subject-wise and question-type-wise performance analysis

## 📊 Dataset Structure

### Question Format
Each question in the dataset includes the following fields (a minimal loading sketch follows the list):
- `unique_question_id`: Unique identifier for each question
- `question_text`: The question text
- `option_a`, `option_b`, `option_c`, `option_d`: Four multiple-choice options
- `correct_answer`: The correct option (A, B, C, or D)
- `subject`: Subject category
- `exam_name`: Source examination
- `paper_number`: Paper/section identifier
- `question_type`: Type of question (MCQ, fill-in-the-blank, assertion/reasoning, etc.)
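
A quick way to inspect these fields is to read the `ParamBench.parquet` file from this repository with pandas. The snippet below is a minimal sketch; the column names are taken from the list above, so adjust them if the actual schema differs.

```python
import pandas as pd

# Load the parquet file shipped in this repository (adjust the path if needed).
df = pd.read_parquet("ParamBench.parquet")

# Basic sanity checks: size, columns, and subject coverage.
print(len(df), "questions")
print(df.columns.tolist())
print(df["subject"].value_counts().head(10))

# Peek at one question with its options and the gold answer.
row = df.iloc[0]
print(row["question_text"])
for opt in ["option_a", "option_b", "option_c", "option_d"]:
    print(opt, "->", row[opt])
print("correct:", row["correct_answer"])
```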

### Subject Distribution
The benchmark covers 21 subjects including but not limited to:
- Music
- History
- Drama and Theatre
- Economics
- Anthropology
- Current Affairs
- Indian Culture
- And more...

<img width="682" height="395" alt="image" src="https://github.com/user-attachments/assets/65a7350f-26c1-46de-9c3a-d2828296ddca" />

## 🏗️ Repository Structure

```
ParamBench/
├── data/
│   └── full-data.csv      # Main dataset file
├── checkpoints/           # Model evaluation checkpoints
├── results/               # Analysis results and visualizations
├── benchmark_script.py    # Main benchmarking script
├── analysis_models.py     # Analysis and visualization script
├── requirements.txt       # Python dependencies
└── README.md              # This file
```

## 🚀 Quick Start

### Requirements

```bash
pip install -r requirements.txt
```

### Basic Requirements
- Python 3.8+
- PyTorch 2.0+
- Transformers 4.45+
- Pandas
- NumPy
- Plotly (for visualization)
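
The dataset can also be loaded directly from the Hugging Face Hub with the `datasets` library. A minimal sketch, assuming the dataset is published under the repo id `vjdevane/ParamBench` with a single `train` split (adjust both if they differ):

```python
from datasets import load_dataset

# Repo id and split name are assumptions based on this dataset page; adjust as needed.
ds = load_dataset("vjdevane/ParamBench", split="train")

print(ds)                      # features and number of rows
print(ds[0]["question_text"])  # first question (in Hindi)
```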

### Running Benchmarks

1. **Clone the repository**
```bash
git clone https://github.com/yourusername/ParamBench.git
cd ParamBench
```

2. **Run the benchmark**
```bash
python benchmark_script.py
```

### Configuration Options

The benchmark script supports various configuration options:

```python
# In benchmark_script.py
group_to_run = "small"  # Options: "small", "medium", "large", or "all"
batch_size = 16         # Adjust based on GPU memory
```
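
For orientation, the sketch below shows the general shape of such an MCQ evaluation loop: format each question as a Hindi multiple-choice prompt, ask a model for an option letter, and score it against `correct_answer`. This is an illustrative stand-in, not the actual `benchmark_script.py`; the `ask_model` stub represents whatever LLM call you plug in.

```python
import pandas as pd

OPTIONS = ["A", "B", "C", "D"]

def build_prompt(row) -> str:
    """Format one ParamBench row as a multiple-choice prompt."""
    return (
        f"{row['question_text']}\n"
        f"A. {row['option_a']}\nB. {row['option_b']}\n"
        f"C. {row['option_c']}\nD. {row['option_d']}\n"
        "उत्तर (A/B/C/D):"
    )

def ask_model(prompt: str) -> str:
    """Placeholder for a real model call; replace with your own inference code."""
    return "A"

def evaluate(df: pd.DataFrame) -> float:
    """Fraction of questions where the predicted letter matches correct_answer."""
    correct = 0
    for _, row in df.iterrows():
        pred = ask_model(build_prompt(row)).strip().upper()[:1]
        if pred == str(row["correct_answer"]).strip().upper():
            correct += 1
    return correct / len(df)

if __name__ == "__main__":
    df = pd.read_parquet("ParamBench.parquet")
    print(f"Accuracy: {evaluate(df):.3f}")
```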

## 📊 Running Analysis

After running benchmarks, generate comprehensive analysis reports:

```bash
python analysis_models.py
```

This will generate:
- Model performance summary CSV
- Subject-wise accuracy charts
- Question type analysis
- Combined report with all metrics (see the aggregation sketch below)
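
As an illustration of how the subject-wise and question-type-wise numbers can be aggregated, the sketch below uses pandas on a hypothetical per-question results file; the `results/results.csv` path and its `model`, `subject`, `question_type`, and `is_correct` columns are stand-ins for whatever the benchmark run actually writes.

```python
import pandas as pd

# Hypothetical per-question results: one row per (model, question).
results = pd.read_csv("results/results.csv")

# Subject-wise accuracy per model (rows: models, columns: subjects).
by_subject = (
    results.groupby(["model", "subject"])["is_correct"]
    .mean()
    .unstack("subject")
)
print(by_subject.round(3))

# Question-type-wise accuracy per model.
by_type = results.groupby(["model", "question_type"])["is_correct"].mean()
print(by_type.round(3))
```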

## 🔗 Links

- [Paper](https://arxiv.org/abs/2508.16185)

---