Update README.md

README.md

annotations_creators:
- no-annotation
multilinguality:
- monolingual
license: apache-2.0
---

# BERnaT: Basque Encoders for Representing Natural Textual Diversity

Submitted to LREC 2026

## Abstract

Language models depend on massive text corpora that are often filtered for quality, a process that can unintentionally
exclude non-standard linguistic varieties, reduce model robustness and reinforce representational biases. In this
paper, we argue that language models should aim to capture the full spectrum of language variation (dialectal,
historical, informal, etc.) rather than relying solely on standardized text. Focusing on Basque, a morphologically rich
and low-resource language, we construct new corpora combining standard, social media, and historical sources, and
pre-train the BERnaT family of encoder-only models in three configurations: standard, diverse, and combined. We
further propose an evaluation framework that separates Natural Language Understanding (NLU) tasks into standard
and diverse subsets to assess linguistic generalization. Results show that models trained on both standard and
diverse data consistently outperform those trained on standard corpora, improving performance across all task types
without compromising standard benchmark accuracy. These findings highlight the importance of linguistic diversity in
building inclusive, generalizable language models.
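
The corpora described above are distributed through the Hugging Face Hub. As a minimal sketch only (the repository ID, configuration name, and `text` column below are placeholders, not identifiers confirmed by this card), a split could be loaded with the `datasets` library roughly like this:

```python
from datasets import load_dataset

# Placeholder repository ID and configuration name: substitute the actual
# identifiers published for the BERnaT corpora (standard, diverse, or combined).
corpus = load_dataset("your-org/bernat-corpus", "diverse", split="train")

# Peek at the first few documents; the "text" column name is also an assumption.
for example in corpus.select(range(3)):
    print(example["text"][:200])
```
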
## Acknowledgments
This work has been partially supported by the Basque Government (Research group funding IT1570-22 and IKER-GAITU project), the Spanish Ministry for Digital Transformation and Civil Service, and the EU-funded NextGenerationEU Recovery, Transformation and Resilience Plan (ILENIA project, 2022/TL22/00215335; and ALIA project). The project also received funding from the European Union’s Horizon Europe research and innovation program under Grant Agreement No 101135724, Topic HORIZON-CL4-2023-HUMAN-01-21, and from DeepKnowledge (PID2021-127777OB-C21), funded by MCIN/AEI/10.13039/501100011033 and FEDER. Jaione Bengoetxea, Julen Etxaniz and Ekhi Azurmendi hold PhD grants from the Basque Government (PRE_2024_1_0028, PRE_2024_2_0028 and PRE_2024_1_0035, respectively). Maite Heredia and Mikel Zubillaga hold PhD grants from the University of the Basque Country UPV/EHU (PIF23/218 and PIF24/04, respectively). The models were trained on the Leonardo supercomputer at CINECA under the EuroHPC Joint Undertaking, project EHPC-EXT-2024E01-042.

## Citation

To cite our work, please use:

```bibtex
@misc{azurmendi2025bernatbasqueencodersrepresenting,
      title={BERnaT: Basque Encoders for Representing Natural Textual Diversity},
      author={Ekhi Azurmendi and Joseba Fernandez de Landa and Jaione Bengoetxea and Maite Heredia and Julen Etxaniz and Mikel Zubillaga and Ander Soraluze and Aitor Soroa},
      year={2025},
      eprint={2512.03903},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2512.03903},
}
```