BERnaT: Basque Encoders for Representing Natural Textual Diversity
Dec 3, 2025
Ekhi Azurmendi
Joseba Fernandez de Landa
Jaione Bengoetxea
Maite Heredia
Julen Etxaniz
Mikel Zubillaga
Ander Soraluze
Aitor Soroa

Abstract
Language models depend on massive text corpora that are often filtered for quality, a process that can unintentionally exclude non-standard linguistic varieties, reduce model robustness and reinforce representational biases. In this paper, we argue that language models should aim to capture the full spectrum of language variation (dialectal, historical, informal, etc.) rather than relying solely on standardized text. Focusing on Basque, a morphologically rich and low-resource language, we construct new corpora combining standard, social media, and historical sources, and pre-train the BERnaT family of encoder-only models in three configurations: standard, diverse, and combined. We further propose an evaluation framework that separates Natural Language Understanding (NLU) tasks into standard and diverse subsets to assess linguistic generalization. Results show that models trained on both standard and diverse data consistently outperform those trained on standard corpora, improving performance across all task types without compromising standard benchmark accuracy. These findings highlight the importance of linguistic diversity in building inclusive, generalizable language models.
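The evaluation framework described in the abstract scores each model separately on standard and diverse NLU subsets. The snippet below is a minimal illustration of that kind of per-subset scoring, with hypothetical record fields (`subset`, `gold`, `pred`) and placeholder predictions of our own; it is not the paper's actual evaluation code.

```python
from collections import defaultdict

# Illustrative evaluation records: each NLU example is tagged with the
# subset it belongs to ("standard" or "diverse"). Labels and predictions
# below are placeholders, not data from the paper.
results = [
    {"subset": "standard", "gold": "positive", "pred": "positive"},
    {"subset": "standard", "gold": "negative", "pred": "positive"},
    {"subset": "diverse",  "gold": "positive", "pred": "positive"},
    {"subset": "diverse",  "gold": "negative", "pred": "negative"},
]

def accuracy_by_subset(records):
    """Compute accuracy separately for each linguistic subset."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["subset"]] += 1
        correct[r["subset"]] += int(r["gold"] == r["pred"])
    return {subset: correct[subset] / total[subset] for subset in total}

print(accuracy_by_subset(results))
# e.g. {'standard': 0.5, 'diverse': 1.0}
```

Reporting the two accuracies side by side makes it possible to check whether gains on diverse text come at the cost of standard benchmark performance, which is the comparison the paper reports.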
Type
Publication
arXiv
Natural Language Processing
Large Language Models
Deep Learning
Evaluation
Multilinguality
Basque
Linguistic Diversity