Title: Evaluation Benchmarks for Spanish Sentence Representations
Authors: Vladimir Araujo, Andres Carvallo, Souvik Kundu, José Cañete, Marcelo Mendoza, Robert E. Mercer, Felipe Bravo-Marquez, Marie-Francine Moens, Alvaro Soto
Publication date: 2022
Abstract: Due to the success of pre-trained language models, versions for languages other than English have been released in recent years. This fact implies the need for resources to evaluate these models. In the case of Spanish, there are few ways to systematically assess the models' quality. In this paper, we narrow the gap by building two evaluation benchmarks. Inspired by previous work (Conneau and Kiela, 2018; Chen et al., 2019), we introduce Spanish SentEval and Spanish DiscoEval, which aim to assess the capabilities of stand-alone and discourse-aware sentence representations, respectively. Our benchmarks include a considerable number of pre-existing and newly constructed datasets that address different tasks from various domains. In addition, we evaluate and analyze the most recent pre-trained Spanish language models to expose their capabilities and limitations. For example, we find that on discourse evaluation tasks, mBERT, a language model trained on multiple languages, usually provides a richer latent representation than models trained only on Spanish documents. We hope our contribution will motivate a fairer, more comparable, and less cumbersome way to evaluate future Spanish language models.
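Benchmarks of this family follow the SentEval protocol: an evaluation engine repeatedly calls a user-supplied batcher that encodes sentences into fixed-size vectors, then fits lightweight probing classifiers on those vectors for each task. The sketch below illustrates that loop under the assumption that the benchmark exposes the original SentEval engine interface; the encoder (BETO), data path, and task identifiers are placeholders, not details taken from the paper.

```python
# Minimal sketch of driving a SentEval-style benchmark: the engine calls a
# user-supplied batcher() that maps each batch of sentences to fixed-size
# vectors, then trains lightweight probing classifiers on top of them.
# The model name, data path, and task names below are illustrative assumptions.
import torch
import senteval  # original SentEval engine (github.com/facebookresearch/SentEval)
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "dccuchile/bert-base-spanish-wwm-cased"  # BETO, as an example encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME).eval()

def prepare(params, samples):
    # No task-specific setup (e.g., vocabulary building) is needed for a
    # pre-trained subword model.
    return

def batcher(params, batch):
    # SentEval hands over sentences as lists of tokens; rejoin them for the
    # subword tokenizer, substituting "." for empty sentences.
    sentences = [" ".join(tokens) if tokens else "." for tokens in batch]
    inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state      # (batch, seq_len, dim)
    # Mean-pool over non-padding positions to get one vector per sentence.
    mask = inputs["attention_mask"].unsqueeze(-1)       # (batch, seq_len, 1)
    embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    return embeddings.numpy()                           # SentEval expects a numpy array

params = {"task_path": "data/", "usepytorch": True, "kfold": 5}  # assumed settings
engine = senteval.engine.SE(params, batcher, prepare)
results = engine.eval(["TASK_A", "TASK_B"])  # hypothetical task identifiers
print(results)
```

Mean pooling over the last hidden layer is only one of several pooling choices; probing scores can vary with this decision, so it is worth treating it as a tunable part of the setup.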
Pages: 6024–6034
Conference: International Conference on Language Resources and Evaluation (LREC 2022)
Publisher: European Language Resources Association (ELRA)