---
tags:
- ocr
- document-processing
- hunyuan-ocr
- multilingual
- markdown
- uv-script
- generated
---
|
|
|
|
|
# Document OCR using HunyuanOCR

This dataset contains OCR results for images from [NationalLibraryOfScotland/Scottish-School-Exam-Papers](https://huggingface.co/datasets/NationalLibraryOfScotland/Scottish-School-Exam-Papers), produced with [HunyuanOCR](https://huggingface.co/tencent/HunyuanOCR), a lightweight 1B-parameter vision-language model (VLM) from Tencent.
|
|
|
|
|
## Processing Details |
|
|
|
|
|
- **Source Dataset**: [NationalLibraryOfScotland/Scottish-School-Exam-Papers](https://huggingface.co/datasets/NationalLibraryOfScotland/Scottish-School-Exam-Papers) |
|
|
- **Model**: [tencent/HunyuanOCR](https://huggingface.co/tencent/HunyuanOCR) |
|
|
- **Number of Samples**: 100 |
|
|
- **Processing Time**: 9.8 min |
|
|
- **Processing Date**: 2025-11-25 16:15 UTC |
|
|
|
|
|
### Configuration

- **Image Column**: `image`
- **Output Column**: `markdown`
- **Dataset Split**: `train`
- **Batch Size**: 1
- **Prompt Mode**: `parse-document`
- **Prompt Language**: English
- **Max Model Length**: 16,384 tokens
- **Max Output Tokens**: 16,384
- **GPU Memory Utilization**: 80.0%
|
|
|
|
|
## Model Information

HunyuanOCR is a lightweight 1B VLM that excels at:

- **Document Parsing** - Full markdown extraction with reading order
- **Table Extraction** - HTML format tables
- **Formula Recognition** - LaTeX format formulas
- **Chart Parsing** - Mermaid/Markdown format
- **Text Spotting** - Detection with coordinates
- **Information Extraction** - Key-value, fields, subtitles
- **Translation** - Multilingual photo translation
|
|
|
|
|
## Prompt Modes Available

- `parse-document` - Full document parsing (default)
- `parse-formula` - LaTeX formula extraction
- `parse-table` - HTML table extraction
- `parse-chart` - Chart/flowchart parsing
- `spot` - Text detection with coordinates
- `extract-key` - Extract specific key value
- `extract-fields` - Extract multiple fields as JSON
- `extract-subtitles` - Subtitle extraction
- `translate` - Document translation
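
To produce a different kind of output, rerun the script with another `--prompt-mode` value. As a sketch, the command below reuses the exact flags from the Reproduction section further down and only switches the mode to `parse-table`; `<output-dataset>` is a placeholder, and some modes (for example `extract-key`) may require additional options not shown here.

```bash
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/hunyuan-ocr.py \
    NationalLibraryOfScotland/Scottish-School-Exam-Papers \
    <output-dataset> \
    --image-column image \
    --batch-size 1 \
    --prompt-mode parse-table \
    --max-model-len 16384 \
    --max-tokens 16384 \
    --gpu-memory-utilization 0.8
```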
|
|
|
|
|
## Dataset Structure

The dataset contains all original columns plus:

- `markdown`: The extracted text in markdown format
- `inference_info`: JSON list tracking all OCR models applied to this dataset
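
As a rough illustration of that structure (an assumption based only on the fields read in the Usage example below; the script may record additional metadata per entry), a decoded `inference_info` value looks something like this:

```python
import json

# Hypothetical example value: only "column_name" and "model_id" are shown
# because those are the fields used in the Usage example below; any other
# fields the script records are not documented here.
raw = '[{"column_name": "markdown", "model_id": "tencent/HunyuanOCR"}]'

inference_info = json.loads(raw)
print(inference_info[0]["model_id"])  # -> tencent/HunyuanOCR
```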
|
|
|
|
|
## Usage

```python
from datasets import load_dataset
import json

# Load the dataset (replace <output-dataset> with this dataset's repo id)
dataset = load_dataset("<output-dataset>", split="train")

# Access the markdown text
for example in dataset:
    print(example["markdown"])
    break

# View all OCR models applied to this dataset
inference_info = json.loads(dataset[0]["inference_info"])
for info in inference_info:
    print(f"Column: {info['column_name']} - Model: {info['model_id']}")
```
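
As a follow-up sketch (assuming the same `<output-dataset>` placeholder and that each row's `markdown` field is a string, possibly empty), the extracted text can be written out to individual files:

```python
from pathlib import Path

from datasets import load_dataset

dataset = load_dataset("<output-dataset>", split="train")

# Write each sample's extracted markdown to its own file
out_dir = Path("ocr_markdown")
out_dir.mkdir(exist_ok=True)
for i, example in enumerate(dataset):
    text = example["markdown"] or ""  # guard against missing/empty results
    (out_dir / f"sample_{i:04d}.md").write_text(text, encoding="utf-8")
```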
|
|
|
|
|
## Reproduction |
|
|
|
|
|
This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) HunyuanOCR script: |
|
|
|
|
|
```bash |
|
|
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/hunyuan-ocr.py \ |
|
|
NationalLibraryOfScotland/Scottish-School-Exam-Papers \ |
|
|
<output-dataset> \ |
|
|
--image-column image \ |
|
|
--batch-size 1 \ |
|
|
--prompt-mode parse-document \ |
|
|
--max-model-len 16384 \ |
|
|
--max-tokens 16384 \ |
|
|
--gpu-memory-utilization 0.8 |
|
|
``` |
|
|
|
|
|
Generated with [UV Scripts](https://huggingface.co/uv-scripts)
|
|
|