Title Adapting Bias Evaluation to Domain Contexts using Generative Models
Authors Tamara Quiroga, Felipe Bravo-Marquez, Valentín Barriere
Publication date 2025
Abstract Numerous datasets have been proposed to evaluate social bias in Natural Language Processing (NLP) systems. However, assessing bias within specific application domains remains challenging, as existing approaches often face limitations in scalability and fidelity across domains. In this work, we introduce a domain-adaptive framework that utilizes prompting with Large Language Models (LLMs) to automatically transform template-based bias datasets into domain-specific variants. We apply our method to two widely used benchmarks -- Equity Evaluation Corpus (EEC) and Identity Phrase Templates Test Set (IPTTS) -- adapting them to the Twitter and Wikipedia Talk data. Our results show that the adapted datasets yield bias estimates more closely aligned with real-world data. These findings highlight the potential of LLM-based prompting to enhance the realism and contextual relevance of bias evaluation in NLP systems.
Pages 28043-28054
Conference name Conference on Empirical Methods in Natural Language Processing (EMNLP)
Publisher Association for Computational Linguistics
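As a rough illustration of the prompting idea the abstract describes, the sketch below rewrites one EEC-style template instance into a Twitter-style sentence with an LLM. The prompt wording, the model choice, and the use of the OpenAI client are assumptions for illustration, not details taken from the paper.

    # Illustrative sketch only: prompt text and model are assumed, not the authors' pipeline.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    # An EEC-style template instantiated with an identity term and an emotion word.
    sentence = "{person} feels {emotion}.".format(person="My sister", emotion="devastated")

    # Ask the LLM to restate the templated sentence in the style of the target
    # domain (here, Twitter) while keeping the identity term and emotion intact.
    prompt = (
        "Rewrite the following sentence so it reads like a real tweet. "
        "Keep the person mentioned and the emotion expressed unchanged.\n\n"
        f"Sentence: {sentence}\nTweet:"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; the paper's model may differ
        messages=[{"role": "user", "content": prompt}],
    )
    domain_adapted_sentence = response.choices[0].message.content.strip()

In this kind of setup, the same prompt would be applied to every template instance in EEC or IPTTS, yielding a domain-adapted copy of the benchmark for the target corpus (e.g., Twitter or Wikipedia Talk).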