Publications

Title Can Large Language Models Compete with Specialized Models in Lexical Semantic Change Detection?
Authors Frank D. Zamora-Reina, Felipe Bravo-Marquez, Dominik Schlechtweg, Nikolay Arefyev
Publication date 2025
Abstract In this paper, we present a comprehensive comparison between specialized Lexical Semantic Change Detection (LSCD) models and Large Language Models (LLMs) for the LSCD task. In addition to comparing models, we investigate the role of automatic prompt selection in improving LLM performance. We evaluate three approaches: Average Pairwise Distance (APD), Word-in-Context (WiC), and Word Sense Induction (WSI). Using Spearman correlation as the evaluation metric, we assess the performance of Mixtral, Llama 3.1, Llama 3.3, and specialized LSCD models on English and Spanish datasets. Our results show that, by combining prompt optimization with LLMs, we achieve state-of-the-art performance on the English dataset and outperform specialized LSCD models at the annotation level on the same dataset. For Spanish, specialized models outperform LLMs across all three approaches (WiC, APD, and WSI), indicating that specialized LSCD models are still more effective for semantic change detection in Spanish.
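
The abstract evaluates systems by Spearman correlation between predicted and gold graded change scores, and the APD approach scores a target word by the average pairwise cosine distance between its contextual embeddings in two time periods. The sketch below illustrates only that scoring and evaluation step, under stated assumptions: the target words, gold scores, and random placeholder embeddings are hypothetical stand-ins for real encoder output and annotated data, not the paper's actual models, prompts, or datasets.

```python
# Minimal sketch of APD (Average Pairwise Distance) scoring and Spearman-based
# evaluation, as named in the abstract. All inputs below are hypothetical
# placeholders; they do not reproduce the paper's setup.
import numpy as np
from scipy.stats import spearmanr


def apd_score(embs_t1: np.ndarray, embs_t2: np.ndarray) -> float:
    """Average pairwise cosine distance between usages of a target word
    drawn from two time periods (rows are contextual embeddings)."""
    a = embs_t1 / np.linalg.norm(embs_t1, axis=1, keepdims=True)
    b = embs_t2 / np.linalg.norm(embs_t2, axis=1, keepdims=True)
    cos_sim = a @ b.T                      # all cross-period similarities
    return float(np.mean(1.0 - cos_sim))   # higher distance = more change


# Toy example: random vectors stand in for encoder output.
rng = np.random.default_rng(0)
targets = ["plane", "tip", "head"]                   # hypothetical target words
gold = {"plane": 0.88, "tip": 0.62, "head": 0.10}    # hypothetical gold change scores

predicted = []
for w in targets:
    embs_old = rng.normal(size=(20, 768))   # usages from the earlier corpus
    embs_new = rng.normal(size=(25, 768))   # usages from the later corpus
    predicted.append(apd_score(embs_old, embs_new))

# Systems are ranked by Spearman correlation against the gold scores.
rho, p_value = spearmanr(predicted, [gold[w] for w in targets])
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```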
Pages 4201-4208
Conference name European Conference on Artificial Intelligence
Publisher IOS Press (Amsterdam, The Netherlands)