Title Large Language Models in Crisis Informatics for Zero and Few-Shot Classification
Authors Cinthia Sánchez, Andrés Abeliuk, Bárbara Poblete
Publication date 2025
Abstract This article explores the use of pre-trained Large Language
Models (LLMs) for crisis classification, addressing the field's dependence
on labeled data. We present a methodology that enhances open LLMs
through fine-tuning, creating zero-shot and few-shot classifiers that
approach traditional supervised models in classifying crisis-related
messages. A comparative study evaluates crisis classification tasks using
general domain pre-trained LLMs, crisis-specific LLMs, and traditional
supervised learning methods, establishing a benchmark in the field. Our
task-specific fine-tuned Llama model achieved a 69% macro F1 score in
classifying humanitarian information, a remarkable 26% improvement compared
to the Llama baseline, even with limited training data. Moreover, it
outperformed ChatGPT4 by 3% in macro F1. The score rose further to 71%
macro F1 when fine-tuning Llama with multitask data. For the binary
classification of messages as related vs. not related to crises, we observed
that pre-trained LLMs, such as Llama 2 and ChatGPT4, performed well without
fine-tuning, achieving an 87% macro F1 score with ChatGPT4. This research
expands our knowledge of how to harness LLMs for crisis
classification, a significant opportunity for crisis scenarios that
lack labeled data. The findings emphasize the potential of LLMs in crisis
informatics to address cold start challenges, especially critical in the
initial phases of a disaster, while also showcasing their capacity to attain
high accuracy even with limited training data.
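
As a rough illustration of the zero-shot setup the abstract describes (not the authors' released code), the sketch below prompts an instruction-tuned Llama checkpoint to label crisis messages and scores the predictions with macro F1. The checkpoint name, label set, and prompt template are illustrative assumptions, not taken from the paper.

# Minimal zero-shot crisis classifier; the checkpoint, labels, and prompt
# wording are hypothetical stand-ins, not the paper's configuration.
from transformers import pipeline
from sklearn.metrics import f1_score

LABELS = ["affected_individuals", "infrastructure_damage",
          "donations_volunteering", "not_humanitarian"]

# Placeholder open checkpoint; the paper fine-tunes open Llama models.
generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

def zero_shot_label(message):
    # Ask the model to pick exactly one category for the message.
    prompt = ("Classify the following disaster-related message into exactly "
              f"one of these categories: {', '.join(LABELS)}.\n"
              f"Message: {message}\nCategory:")
    out = generator(prompt, max_new_tokens=10,
                    return_full_text=False)[0]["generated_text"].lower()
    # Map free-form output back onto the label set; fall back to the
    # catch-all label when nothing matches.
    return next((label for label in LABELS if label in out), LABELS[-1])

# Macro F1 averages per-class F1 scores, so rare humanitarian classes
# weigh as much as frequent ones; this is the metric quoted above.
y_true = ["infrastructure_damage", "not_humanitarian"]
y_pred = [zero_shot_label(m) for m in
          ["The bridge on Route 5 collapsed after the quake.",
           "Lovely weather in the city today."]]
print(f1_score(y_true, y_pred, labels=LABELS, average="macro"))

A few-shot variant would simply prepend a handful of labeled example messages to the prompt before the target message.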
Pages Article 45
Volume 19
Journal name ACM Transactions on the Web
Publisher ACM Press (New York, NY, USA)