Datasets: Update README.md
First update on the readme

README.md CHANGED
@@ -6,5 +6,21 @@ task_categories:
  language:
  - fa
  size_categories:
- -
- ---
+ - 10K<n<100K
+ ---
+ # 📘 PerCoR: Persian Commonsense Reasoning (Multiple-Choice Sentence Completion)
+
+ **PerCoR** is a large-scale Persian benchmark for **commonsense reasoning** in a **4-choice sentence-completion** format.
+ It contains **~106K** examples from **40+** Persian websites across news, culture, lifestyle, tech, religion, travel, and more.
+ Each instance provides a **prefix** (context) and **four candidate completions**: one correct and three distractors.
+
+ ---
+
+ ## 📦 What's inside
+
+ - 🧮 **Total size:** ~106K multiple-choice instances
+ - 📊 **Splits:** `train` 86,217 • `validation` 10,000 • `test` 10,000
+ - 🧱 **Format:** single passage/prefix + 4 completions (A–D / 0–3) with one correct answer
+ - 🧠 **Human accuracy:** ~89% on a random subset
+
+ > 💡 *The dataset is designed to be difficult for LLMs while remaining answerable by humans; no LLM text is used to generate distractors (reducing generation-style biases).*
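
For orientation, a minimal sketch of loading the dataset with the 🤗 `datasets` library is shown below. The repository ID and column names are assumptions for illustration only (they are not stated in this commit), so check the dataset card and files for the actual identifiers.

```python
from datasets import load_dataset

# Hypothetical repository ID, replace with the actual PerCoR path on the Hub.
dataset = load_dataset("your-org/PerCoR")

# Expect three splits: train (~86K), validation (10K), test (10K).
print(dataset)

# Assumed schema: a context prefix, four candidate completions, and the gold answer.
example = dataset["train"][0]
prefix = example.get("prefix")    # assumed field name for the context
choices = example.get("choices")  # assumed field name for the 4 completions
answer = example.get("answer")    # assumed field name for the correct option
print(prefix)
print(choices, "->", answer)
```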