BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data

Oct 11, 2025
Jaap Jumelet, Abdellah Fourtassi, Akari Haga, Bastian Bunzeck, Bhargav Shandilya, Diana Galvan-Sosa, Faiz Ghifari Haznitrama, Francesca Padovani, Francois Meyer, Hai Hu, Julen Etxaniz, et al.
Abstract
We present BabyBabelLM, a multilingual collection of datasets modeling the language a person observes from birth until they acquire a native language. We curate developmentally plausible pretraining data aiming to cover the equivalent of 100M English words of content in each of 45 languages. We compile evaluation suites and train baseline models in each language. BabyBabelLM aims to facilitate multilingual pretraining and cognitive modeling.
Type
Publication
EACL 2026