---
language:
- eu
configs:
- config_name: BSMauthor
data_files:
- split: train
path: BSMauthor/train.jsonl.gz
- split: validation
path: BSMauthor/validation.jsonl.gz
- split: test
path: BSMauthor/test.jsonl.gz
- config_name: BSMtime
data_files:
- split: train
path: BSMtime/train.jsonl.gz
- split: validation
path: BSMtime/validation.jsonl.gz
- split: test
path: BSMtime/test.jsonl.gz
- config_name: EKC
data_files:
- split: train
path: EKC/train.jsonl.gz
- split: validation
path: EKC/validation.jsonl.gz
- split: test
path: EKC/test.jsonl.gz
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
annotations_creators:
- no-annotation
multilinguality:
- monolingual
license: cc-by-sa-4.0
---
BERnaT: Basque Encoders for Representing Natural Textual Diversity
Submitted to LREC 2026
Abstract
Language models depend on massive text corpora that are often filtered for quality, a process that can unintentionally exclude non-standard linguistic varieties, reduce model robustness, and reinforce representational biases. In this paper, we argue that language models should aim to capture the full spectrum of language variation (dialectal, historical, informal, etc.) rather than relying solely on standardized text. Focusing on Basque, a morphologically rich and low-resource language, we construct new corpora combining standard, social media, and historical sources, and pre-train the BERnaT family of encoder-only models in three configurations: standard, diverse, and combined. We further propose an evaluation framework that separates Natural Language Understanding (NLU) tasks into standard and diverse subsets to assess linguistic generalization. Results show that models trained on both standard and diverse data consistently outperform those trained on standard corpora, improving performance across all task types without compromising standard benchmark accuracy. These findings highlight the importance of linguistic diversity in building inclusive, generalizable language models.
Diverse Corpora
We create Diverse Corpora by combining text from two linguistically rich and varied sources: social media and historical texts. Social media captures informal, spontaneous, and up-to-date language use, including dialectal variation, slang, and code-switching. Historical texts, in turn, provide access to pre-standard Basque, which is highly rich in dialectal diversity. Diverse Corpora is the largest and most diverse non-standard Basque corpus constructed to date; its overall size is half that of EusCrawl-v1.1.
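The corpora are exposed through the three configurations declared in the metadata header above (BSMauthor, BSMtime, and EKC), each with train, validation, and test splits. A minimal loading sketch with the Hugging Face `datasets` library follows; the repository ID is a placeholder for this dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder: replace with this dataset's actual Hub repository ID.
REPO_ID = "ORG/DATASET"

# Each configuration exposes the train/validation/test splits
# declared in the YAML header above.
for config in ("BSMauthor", "BSMtime", "EKC"):
    ds = load_dataset(REPO_ID, config)
    print(config, {split: ds[split].num_rows for split in ds})
```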
BSM:
The Basque Social Media corpus (BSM) provides a valuable source of personal, spontaneous, and varied written content. It includes standard language as well as a wide range of dialectal, slang, informal, and code-switched data. It contains approximately 11 million posts produced by more than 13,000 Basque-speaking users, amounting to around 188 million words. We perform a data augmentation step that effectively doubles the dataset size by reordering the existing material in two distinct ways. Specifically, we divide BSM into two complementary subsets containing the same textual content but organized differently:
i) BSMtime, where posts are ordered chronologically by publication time, independent of author identity (11M posts).
ii) BSMauthor, where posts are grouped by author, forming 13K complete individual documents representing their timelines.
This dual organization enables the analysis of both diachronic language variation and author-specific linguistic behavior, providing a flexible resource for a range of sociolinguistic and computational studies.
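Since the two subsets contain the same posts under different orderings, either view can in principle be derived from the other. A sketch of the regrouping step, assuming each record carries author, timestamp, and text fields (the names `author_id`, `created_at`, and `text` are illustrative assumptions, not documented fields):

```python
from collections import defaultdict

def group_by_author(posts):
    """Rebuild BSMauthor-style documents from BSMtime-ordered posts.

    The field names `author_id`, `created_at`, and `text` are assumed
    for illustration; check the actual schema before use.
    """
    timelines = defaultdict(list)
    for post in posts:  # posts assumed to arrive in chronological order
        timelines[post["author_id"]].append(post)
    # Concatenate each author's posts into a single timeline document,
    # preserving chronological order within the author.
    return {
        author: "\n".join(
            p["text"] for p in sorted(timeline, key=lambda p: p["created_at"])
        )
        for author, timeline in timelines.items()
    }
```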
EKC:
The Corpus of Basque Classical Writers (Euskal Klasikoen Corpusa, EKC) aims to serve as the repository for nearly all classical texts up to the 20th century. It contains 338 documents or books ranging from the 16th century to 1975, comprising a total of 21 million words. This corpus covers various literary genres, including poetry, narrative, theater, essays, and religious texts. The texts originate from different dialects, as the standard Basque language was not established until the late 20th century. Due to the historical span and dialectal variety represented in the collection, the corpus is expected to be highly diverse.
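Splits of this size can be streamed rather than downloaded in full. A sketch that streams the EKC training split and tallies a rough whitespace word count (again with a placeholder repository ID and an assumed `text` field):

```python
from datasets import load_dataset

# Stream the split without materializing it on disk; "ORG/DATASET" is a placeholder.
ekc_train = load_dataset("ORG/DATASET", "EKC", split="train", streaming=True)

n_docs, n_words = 0, 0
for record in ekc_train:
    n_docs += 1
    n_words += len(record["text"].split())  # "text" field name is assumed

print(f"{n_docs} documents, ~{n_words} whitespace-delimited words")
```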
Licensing
CC-BY-SA 4.0.
Acknowledgments
This work has been partially supported by the Basque Government (Research group funding IT1570-22 and IKER-GAITU project), the Spanish Ministry for Digital Transformation and Civil Service, and the EU-funded NextGenerationEU Recovery, Transformation and Resilience Plan (ILENIA project, 2022/TL22/00215335; and ALIA project). The project also received funding from the European Union’s Horizon Europe research and innovation program under Grant Agreement No 101135724, Topic HORIZON-CL4-2023-HUMAN-01-21, and from DeepKnowledge (PID2021-127777OB-C21), funded by MCIN/AEI/10.13039/501100011033 and FEDER. Jaione Bengoetxea, Julen Etxaniz and Ekhi Azurmendi hold a PhD grant from the Basque Government (PRE_2024_1_0028, PRE_2024_2_0028 and PRE_2024_1_0035, respectively). Maite Heredia and Mikel Zubillaga hold a PhD grant from the University of the Basque Country UPV/EHU (PIF23/218 and PIF24/04, respectively). The models were trained on the Leonardo supercomputer at CINECA under the EuroHPC Joint Undertaking, project EHPC-EXT-2024E01-042.
Citation:
To cite our work, please use:
@misc{azurmendi2025bernatbasqueencodersrepresenting,
  title={BERnaT: Basque Encoders for Representing Natural Textual Diversity},
  author={Ekhi Azurmendi and Joseba Fernandez de Landa and Jaione Bengoetxea and Maite Heredia and Julen Etxaniz and Mikel Zubillaga and Ander Soraluze and Aitor Soroa},
  year={2025},
  eprint={2512.03903},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.03903},
}