## Dataset Description
SCORE-Bench is a curated collection of 224 diverse, real-world documents manually annotated by experts. It is designed to benchmark document parsing systems against true production-grade challenges. Unlike traditional academic datasets often composed of clean, digital-native PDFs, this benchmark specifically targets the complexity found in actual enterprise workflows.
Note on replication: This dataset is a standalone benchmark released after the publication of the original SCORE framework paper. It is not the exact dataset used in the paper's experiments. Researchers should view this as a new, more challenging evaluation set using the same methodology.
The dataset allows researchers and developers to move beyond "clean" evaluation to test how systems handle the irregularities of the real world. It includes:
* **Complex Layouts:** Financial reports with deeply nested tables, technical manuals with multi-column dense text, and articles where whitespace (rather than lines) defines structure.
* **Visual Noise & Variety:** Scanned forms with skew, photocopied documents with artifacts, and forms containing mixed printed and handwritten text.
* **Semantic Ambiguity:** Documents selected to break brittle systems, requiring parsers to distinguish between varying structural interpretations (e.g., identifying a two-column article versus a list of key-value pairs).
Every document in SCORE-Bench has been manually annotated by domain experts, not algorithmically generated from metadata.
## Dataset Coverage
**Distribution of document layout characteristics**
Most documents exhibit more than one of these characteristics, so the counts below sum to more than the 224 documents in the dataset:
| Document characteristic | Count |
| :---- |:------|
| Scanned documents | 54 |
| Documents with noise and visual degradation | 39 |
| Multi-column layout | 98 |
| Flowing text blocks | 143 |
| Complex layout | 127 |
| Simple tables | 40 |
| Complex tables with merged cells | 48 |
| Embedded images or plots | 81 |
| Forms | 54 |
| Handwriting mixed with printed text | 33 |
| Layout with complex visual branding | 114 |
**Document content types**
The dataset captures the heterogeneity of real-world unstructured data not only across verticals, but also across document types. It includes operational and regulatory content (government reports, financial statements, legal agreements, insurance forms, and technical manuals) alongside lower-frequency but operationally critical artifacts such as patent documents, research papers, curriculum vitae, marketing collateral, schematics, and more.
This breadth ensures that evaluation covers content representative of real-world enterprise workflows: complex unstructured documents spanning both common and occasional niche types. By incorporating this long tail of document types, the dataset reflects the diversity encountered in actual organizational settings, providing a realistic benchmark for document parsing.
## Annotation Format
The dataset uses the following types of ground truth data:
1. **Text Content Ground Truth**: Content is structured with markers for different document elements, enabling evaluation against a clean concatenated text representation (CCT).
```
--------------------------------------------------- Unstructured Plain Text Format 1
--------------------------------------------------- Unstructured Title Begin
DOCUMENT TITLE
--------------------------------------------------- Unstructured Title End
--------------------------------------------------- Unstructured NarrativeText Begin
Document content...
--------------------------------------------------- Unstructured NarrativeText End
```
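A minimal sketch of reading this marker format back into (element type, text) pairs. It assumes markers always take the shape shown above (a run of dashes, the word `Unstructured`, an element type, and `Begin` or `End`); header lines like `Unstructured Plain Text Format 1` are skipped because they lack a `Begin`/`End` token:

```python
import re

# Marker lines look like: "----- Unstructured <ElementType> Begin|End"
MARKER = re.compile(r"^-{3,}\s+Unstructured\s+(\w+)\s+(Begin|End)\s*$")

def parse_cct(raw: str):
    """Split a CCT ground-truth file into (element_type, text) pairs."""
    elements, current_type, buffer = [], None, []
    for line in raw.splitlines():
        m = MARKER.match(line)
        if m:
            etype, edge = m.groups()
            if edge == "Begin":
                current_type, buffer = etype, []
            else:
                # End marker closes the element currently being collected
                elements.append((current_type, "\n".join(buffer).strip()))
                current_type, buffer = None, []
        elif current_type is not None:
            buffer.append(line)
    return elements
```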
2. **Table Ground Truth**: Tables are represented as JSON with cell coordinates and content, serving as the ground truth for our format-agnostic table evaluation.
```json
[
{
"type": "Table",
"text": [
{
"id": "cell-id",
"x": 0,
"y": 0,
"w": 1,
"h": 1,
"content": "Cell content"
},
...
]
}
]
```
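Assuming `x`/`y` are zero-based column/row indices and `w`/`h` are column/row spans (as the example above suggests), a table with merged cells can be expanded into a dense grid for comparison:

```python
def cells_to_grid(table):
    """Expand cell records (x, y, w, h, content) into a dense 2-D grid;
    merged cells repeat their content in every slot they span."""
    cells = table["text"]
    n_rows = max(c["y"] + c["h"] for c in cells)
    n_cols = max(c["x"] + c["w"] for c in cells)
    grid = [["" for _ in range(n_cols)] for _ in range(n_rows)]
    for c in cells:
        for dy in range(c["h"]):
            for dx in range(c["w"]):
                grid[c["y"] + dy][c["x"] + dx] = c["content"]
    return grid
```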
## Intended Usage
This dataset is designed to serve as a standardized benchmark for evaluating modern document parsing systems. Its composition specifically addresses the limitations of traditional metrics when applied to generative models. The intended use cases include:
* **Fair Benchmarking of Generative Systems**: The dataset intentionally contains layouts with multiple valid structural interpretations. The annotations are constructed to allow the SCORE system to evaluate based on semantics, ensuring that Vision Language Models (VLMs) are not penalized for legitimate interpretive flexibility (e.g., distinct but semantically equivalent readings of a complex page).
* **Format-Agnostic Comparison**: The ground truth allows for the comparison of outputs across varying representational formats (e.g., HTML, JSON, flattened text) by validating semantic equivalence rather than rigid string-level or tree-level matching.
* **Granular Error Analysis:** The variety of noise and document types enables the SCORE framework to identify specific system behaviors, such as distinguishing between content hallucinations (spurious tokens) and content omissions.
* **Complex Table Evaluation:** The data includes tables with ambiguous structures, merged cells, and irregular layouts to test extraction capabilities. This supports evaluation that separates content accuracy from index/spatial accuracy.
* **Structural Hierarchy Assessment:** The documents are selected to challenge a system's ability to maintain consistent, semantically coherent hierarchies (e.g., mapping headers and list items correctly) across long or complex pages.
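As a rough illustration of the hallucination-versus-omission distinction, token-level overlap can be computed with multisets. This is a simplified sketch, not the SCORE framework's actual implementation:

```python
from collections import Counter

def token_diagnostics(predicted: str, reference: str):
    """Order-insensitive token overlap: the fraction of reference tokens
    the parser recovered (omissions lower it), and the fraction of
    predicted tokens that are spurious (hallucinations raise it)."""
    pred, ref = Counter(predicted.split()), Counter(reference.split())
    found = sum((pred & ref).values())  # multiset intersection
    pct_found = found / max(sum(ref.values()), 1)
    pct_added = 1 - found / max(sum(pred.values()), 1)
    return pct_found, pct_added
```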
## Evaluations
Measured on Nov 24, 2025.
**Content Fidelity Metrics**
| | cct | adjusted\_cct | percent\_tokens\_found | percent\_tokens\_added |
| :---- |:----------| :---- |:-----------------------|:-----------------------|
| Snowflake Layout Mode | 0.782 | 0.792 | 0.823 | 0.102 |
| Snowflake OCR Mode | 0.705 | 0.705 | 0.900 | 0.048 |
| Databricks AI Parse Document | 0.795 | 0.809 | 0.840 | 0.053 |
| LlamaParse High Resolution OCR | 0.761 | 0.776 | 0.826 | 0.055 |
| LlamaParse VLM | 0.827 | 0.835 | 0.890 | 0.069 |
| Reducto Agentic | 0.811 | 0.812 | 0.937 | 0.124 |
| Unstructured High-Res Refined with GPT-5-mini | 0.855 | 0.857 | 0.909 | 0.069 |
| Unstructured High-Res Refined with Claude Sonnet 4 | 0.862 | 0.863 | 0.911 | 0.057 |
| Docling Default | 0.702 | 0.716 | 0.720 | 0.135 |
| Unstructured VLM Partitioner GPT-5-mini | 0.885 | 0.883 | 0.924 | 0.036 |
| Unstructured VLM Partitioner Claude Sonnet 4 | 0.857 | 0.864 | 0.914 | 0.043 |
| Unstructured OSS | 0.707 | 0.715 | 0.876 | 0.119 |
| NVIDIA Nemotron-Parse-v1.1 | 0.625 | 0.648 | 0.737 | 0.070 |
| Docling Granite VLM | 0.587 | 0.625 | 0.644 | 0.163 |
**Table Extraction Metrics**
| | detection\_f | cell\_level\_index\_acc | cell\_content\_acc | shifted\_cell\_content\_acc | page\_teds\_corrected | table\_teds | table\_teds\_corrected |
| :---- | :---- |:------------------------|:-------------------| :---- | :---- | :---- | :---- |
| Snowflake Layout Mode | 0.841 | 0.583 | 0.556 | 0.589 | 0.57 | 0.589 | 0.55 |
| Snowflake OCR Mode | 0.545 | N/A | N/A | N/A | N/A | N/A | N/A |
| Databricks AI Parse Document | 0.826 | 0.623 | 0.615 | 0.653 | 0.663 | 0.657 | 0.631 |
| LlamaParse High Resolution OCR | 0.704 | 0.409 | 0.361 | 0.422 | 0.49 | 0.452 | 0.42 |
| LlamaParse VLM | 0.802 | 0.578 | 0.522 | 0.564 | 0.64 | 0.599 | 0.567 |
| Reducto Agentic | 0.854 | 0.706 | 0.708 | 0.742 | 0.772 | 0.775 | 0.75 |
| Unstructured High-Res Refined with GPT-5-mini | 0.85 | 0.774 | 0.76 | 0.782 | 0.778 | 0.796 | 0.776 |
| Unstructured High-Res Refined with Claude Sonnet 4 | 0.855 | 0.776 | 0.773 | 0.813 | 0.782 | 0.803 | 0.779 |
| Docling Default | 0.815 | 0.659 | 0.606 | 0.628 | 0.679 | 0.67 | 0.65 |
| Unstructured VLM Partitioner GPT-5-mini | 0.837 | 0.734 | 0.69 | 0.731 | 0.757 | 0.743 | 0.722 |
| Unstructured VLM Partitioner Claude Sonnet 4 | 0.855 | 0.656 | 0.65 | 0.683 | 0.714 | 0.708 | 0.678 |
| Unstructured OSS | 0.839 | 0.498 | 0.426 | 0.475 | 0.47 | 0.492 | 0.449 |
| NVIDIA Nemotron-Parse-v1.1 | 0.715 | 0.651 | 0.559 | 0.589 | 0.583 | 0.613 | 0.567 |
| Docling Granite VLM | 0.725 | 0.716 | 0.657 | 0.694 | 0.673 | 0.72 | 0.687 |
**Structural Understanding Metrics**
| pipeline | element\_alignment |
|:-------------------------------------------------------------------| ----- |
| Snowflake Layout Mode | 0.608 |
| Snowflake OCR Mode | N/A |
| Databricks AI Parse Document | 0.417 |
| LlamaParse High Resolution OCR | 0.277 |
| LlamaParse VLM | 0.266 |
| Reducto Agentic | 0.595 |
| Unstructured High-Res Refined with GPT-5-mini | 0.58 |
| Unstructured High-Res Refined with Claude Sonnet 4 | 0.58 |
| Docling Default | 0.599 |
| Unstructured OSS | 0.534 |
| NVIDIA Nemotron-Parse-v1.1 | 0.339 |
| Docling Granite VLM | 0.558 |
| Unstructured VLM Partitioner Claude Sonnet 4 | 0.598 |
| Unstructured VLM Partitioner GPT-5-mini | 0.575 |
## Dataset Creation Date
Nov 24, 2025
## SCORE-Bench – Licensing & Attribution
This repository contains:
- Third-party PDF documents used as part of the SCORE-Bench document parsing benchmark
- Unstructured-authored annotations and metadata
The following summarizes the licensing and attribution requirements for both.
---
### License for Unstructured-authored annotations and metadata
Except where otherwise noted, **Unstructured-authored annotation files, labels, and metadata** in this repository are licensed under:
**Creative Commons Attribution 4.0 International (CC BY 4.0)**
This license allows reuse, modification, and redistribution including commercial use, provided appropriate credit is given.
A suggested attribution format is:
> “SCORE-Bench annotations © Unstructured Technologies, licensed under CC BY 4.0.”
If you publish work based on this dataset, you may also wish to cite the SCORE framework / SCORE-Bench paper.
> **Important:** The CC BY 4.0 license above applies **only** to Unstructured-created content (annotations, metadata, and documentation). The third-party PDFs listed below remain under their original licenses and all disclaimers and non-endorsement statements contained within the original documents continue to apply.
---
### Third-party PDFs
For each work we list:
- **Files** – filenames used in this dataset
- **Citation** – the original work to credit and copyright notice, if applicable
- **License** – the license under which the work is shared
- **Source** – a DOI or canonical URL
Where works list many authors, we abbreviate the author list with “et al.”; see the linked source for the full details.
---
#### 1. FAO, Rikolto and RUAF – *Urban and peri-urban agriculture sourcebook – From production to food systems*
**Files:**
- `cb9722en_p35-36-p001.pdf`
- `cb9722en_p35-36-p002.pdf`
**Citation:**
FAO, Rikolto and RUAF. 2022. *Urban and peri-urban agriculture sourcebook – From production to food systems*. Rome, FAO and Rikolto.
© FAO, 2022
**License:**
Creative Commons Attribution-NonCommercial-ShareAlike 3.0 IGO (CC BY-NC-SA 3.0 IGO) –
**Source:**
---
#### 2. World Health Organization – *Global strategy on digital health 2020–2025*
**Files:**
- `gs4dhdStrategicObjectives-p008.pdf`
- `gs4dhdStrategicObjectives-p009.pdf`
**Citation:**
World Health Organization. 2021. *Global strategy on digital health 2020–2025*. Geneva: World Health Organization.
© World Health Organization 2021
**License:**
Creative Commons Attribution-NonCommercial-ShareAlike 3.0 IGO (CC BY-NC-SA 3.0 IGO) –
**Source:**
---
#### 3. Park et al. – *Korean Power System Challenges and Opportunities, Priorities for Swift and Successful Clean Energy Deployment at Scale*
**Files:**
- `korean_power_system_challenges-p001.pdf`
- `korean_power_system_challenges-p003.pdf`
**Citation:**
Park, W. Y, Khanna, N., Kim, J. H., et al. (2023) *Korean Power System Challenges and Opportunities, Priorities for Swift and Successful Clean Energy Deployment at Scale*.
Copyright Notice: This manuscript has been authored by authors at Lawrence Berkeley National Laboratory under Contract No. DE-AC02-05CH11231 with the U.S. Department of Energy. The U.S. Government retains, and the publisher, by accepting the article for publication, acknowledges, that the U.S. Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for U.S. Government purposes.
**License:**
Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) –
**Source:**
---
#### 4. Razzhigaev et al. – *Kandinsky: an Improved Text-to-Image Synthesis with Image Prior and Latent Diffusion*
**Files:**
- `2310.03502text_to_image_synthesis1-7-p005.pdf`
- `2310.03502text_to_image_synthesis1-7-p006.pdf`
**Citation:**
Razzhigaev, A., Shakhmatov, A., Maltseva, A., et al. 2023. *Kandinsky: an Improved Text-to-Image Synthesis with Image Prior and Latent Diffusion*.
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) –
**Source:**
---
#### 5. Katsouris – *Optimal Estimation Methodologies for Panel Data Regression Models*
**Files:**
- `OptimalEstimationMethodologies-for-PanelDataRegressionModels-pg9-12-p002.pdf`
- `OptimalEstimationMethodologies-for-PanelDataRegressionModels-pg9-12-p003.pdf`
**Citation:**
Katsouris, C. 2023. *Optimal Estimation Methodologies for Panel Data Regression Models*.
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) –
**Source:**
---
#### 6. Singh et al. – *The Role of Colour in Influencing Consumer Buying Behaviour: An Empirical Study*
**File:**
- `661_Singh_p9-9.pdf`
**Citation:**
Singh, P. K., Kumari, A., Agrawal, S., et al. (2023). *The Role of Colour in Influencing Consumer Buying Behaviour: An Empirical Study*.
© 2023 The Author(s). Published by Vilnius Gediminas Technical University
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) –
**Source:**
---
#### 7. Degerman – *Brexit anxiety: a case study in the medicalization of dissent*
**Files:**
- `ijerph-19-00825-p008.pdf`
- `ijerph-19-00825-p020.pdf`
**Citation:**
Degerman, D. (2018). *Brexit anxiety: a case study in the medicalization of dissent*. Critical Review of International Social and Political Philosophy, 22(7), 823–840.
© 2018 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) –
**Source:**
---
#### 8. Zhang & Ilavsky – *Bridging length scales in hard materials with ultra-small angle X-ray scattering – a critical review*
**Files:**
- `Zhand-Ilavsky-p004.pdf`
- `Zhand-Ilavsky-p012.pdf`
**Citation:**
Zhang, F., & Ilavsky, J. (2024). *Bridging length scales in hard materials with ultra-small angle X-ray scattering – a critical review*. IUCrJ, 11, 675–694.
© International Union of Crystallography
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) –
**Source:**
---
#### 9. O’Hara et al. – *Regional-scale patterns of deep seafloor biodiversity for conservation assessment*
**Files:**
- `O27Hara_DeepSeaFloorBio-p001.pdf`
- `O27Hara_DeepSeaFloorBio-p002.pdf`
**Citation:**
O'Hara, T. D., Williams, A., Althaus, F., et al. (2020). *Regional-scale patterns of deep seafloor biodiversity for conservation assessment*. Diversity and Distributions, 26, 479–494.
© 2020 The Authors. Diversity and Distributions Published by John Wiley & Sons Ltd.
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) –
**Source:**
---
#### 10. Raimondi et al. – *Rainwater Harvesting and Treatment: State of the Art and Perspectives*
**File:**
- `water-15-0151828729_p3-3.pdf`
**Citation:**
Raimondi, A., Quinn, R., Abhijith, G. R., et al. (2023). *Rainwater Harvesting and Treatment: State of the Art and Perspectives*. Water, 15(8), 1518.
© 2023 by the authors.
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) –
**Source:**
---
#### 11. Hunt et al. – *Artificial Intelligence, Big Data, and mHealth: The Frontiers of the Prevention of Violence Against Children*
**Files:**
- `frai_03_543305_p1-2-p001.pdf`
- `frai_03_543305_p1-2-p002.pdf`
**Citation:**
Hunt, X., Tomlinson, M., Sikander, S., Skeen, S., Marlow, M., du Toit, S., & Eisner, M. (2020). *Artificial Intelligence, Big Data, and mHealth: The Frontiers of the Prevention of Violence Against Children*. Frontiers in Artificial Intelligence, 3, 543305.
Copyright 2020 the authors.
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) –
**Source:**
---
#### 12. World Intellectual Property Organization – *WIPO Financial Report*
**Files:**
- `wipo-2022-financial-report-p24-p30-p001.pdf`
- `wipo-2022-financial-report-p24-p30-p005.pdf`
**Citation:**
World Intellectual Property Organization (WIPO). *WIPO Financial Report*.
© WIPO, 2021
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) –
**Source:**
---
### Usage reminder
- Unstructured-authored annotations and metadata: **CC BY 4.0**
- Third-party PDFs: **original licenses as listed per file above**
Any reuse of this repository must respect both the Unstructured license and relevant third-party licenses, along with all terms set forth in the original documents, including disclaimers and non-endorsement statements.
## References
**Primary Citation**
* **Title:** SCORE: A Semantic Evaluation Framework for Generative Document Parsing
* **Authors:** Renyu Li, Antonio Jimeno Yepes, Yao You, Kamil Pluciński, Maximilian Operlejn, and Crag Wolfe
* **Organization:** Unstructured Technologies
* **Abstract:** This work introduces the framework used to evaluate this benchmark, detailing the methodology for Adjusted Edit Distance, token-level diagnostics, and format-agnostic table evaluation.
## Evaluation Code
[https://github.com/Unstructured-IO/unstructured-eval-metrics](https://github.com/Unstructured-IO/unstructured-eval-metrics)