Details
| Original language | English |
| --- | --- |
| Title of host publication | WWW '24 |
| Subtitle of host publication | Proceedings of the ACM Web Conference 2024 |
| Pages | 4534-4543 |
| Number of pages | 10 |
| ISBN (electronic) | 9798400701719 |
| Publication status | Published - 13 May 2024 |
| Event | 33rd ACM Web Conference, WWW 2024 - Singapore, Singapore. Duration: 13 May 2024 → 17 May 2024 |
Abstract
Recent studies have highlighted the vital role of microblogging platforms, such as Twitter, in crisis situations. Various machine-learning approaches have been proposed to identify and prioritize crucial information from different humanitarian categories for preparation and rescue purposes. In the crisis domain, the explanation of models' output decisions is gaining significant research momentum. Some previous works focused on human annotation of rationales to train and extract supporting evidence for model interpretability. However, such annotations are usually expensive, require much effort, and are not always available in the real-time situation of a new crisis event. In this paper, we investigate recent advances in large language models (LLMs) as data annotators for informal tweet text. We perform a detailed qualitative and quantitative evaluation of ChatGPT rationale annotations in a few-shot setup. ChatGPT annotations are quite close to human annotations but less precise. Further, we propose an active learning-based interpretable classification model trained from a small set of annotated data. Our experiments show that (a) ChatGPT has the potential to extract rationales for crisis tweet classification tasks, although its performance is slightly below that of a model trained on human-annotated rationale data (~3-6%), and (b) an active learning setup can help reduce the burden of manual annotation and maintain a trade-off between performance and data size.
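For readers unfamiliar with the active learning setup mentioned in the abstract, the following is a minimal sketch of generic pool-based active learning with uncertainty sampling, assuming scikit-learn. It is not the authors' implementation; the tweets, labels, stand-in oracle, and query budget below are invented placeholders for illustration only.

```python
# A minimal sketch of pool-based active learning with uncertainty
# sampling. NOT the paper's code: the tweets, labels, stand-in oracle,
# and query budget are all illustrative assumptions.
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Small labeled seed set and unlabeled pool (placeholder crisis tweets).
seed_texts = ["bridge collapsed, people trapped", "lovely weather today",
              "urgent: need water and medicine", "watching a movie tonight"]
seed_labels = [1, 0, 1, 0]  # 1 = crisis-relevant, 0 = not relevant
pool_texts = ["flood waters rising near the school",
              "great concert last night",
              "rescue teams searching the rubble",
              "new phone unboxing video"]

# Fit one shared vocabulary, then split into labeled and pool matrices.
vec = TfidfVectorizer()
X_all = vec.fit_transform(seed_texts + pool_texts)
X_lab, X_pool = X_all[:len(seed_texts)], X_all[len(seed_texts):]
y_lab = list(seed_labels)
pool_idx = list(range(len(pool_texts)))

for _ in range(2):  # annotation budget: query the oracle twice
    clf = LogisticRegression().fit(X_lab, y_lab)
    probs = clf.predict_proba(X_pool[pool_idx])[:, 1]
    pick = int(np.argmin(np.abs(probs - 0.5)))  # most uncertain tweet
    q = pool_idx.pop(pick)
    # Stand-in oracle; in practice a human (or an LLM) labels the tweet.
    y_new = 1 if any(w in pool_texts[q] for w in ("flood", "rescue")) else 0
    X_lab = vstack([X_lab, X_pool[q]])  # move the queried tweet into the
    y_lab.append(y_new)                 # labeled set; retrain next round

print(f"Labeled set grew from {len(seed_labels)} to {len(y_lab)} examples.")
```

The loop queries the example whose predicted probability is closest to 0.5 (the model's most uncertain case), which is the classic margin-based selection heuristic; the paper's actual model, features, and selection strategy may differ.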
Keywords
- active learning, crisis events, interpretability, large language model, semi-supervised learning, twitter
ASJC Scopus subject areas
- Computer Science (all)
- Computer Networks and Communications
- Software
Cite this
WWW '24: Proceedings of the ACM Web Conference 2024. 2024. p. 4534-4543.
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Human vs ChatGPT
T2 - 33rd ACM Web Conference, WWW 2024
AU - Nguyen, Thi Huyen
AU - Rudra, Koustav
N1 - Publisher Copyright: © 2024 ACM.
PY - 2024/5/13
Y1 - 2024/5/13
N2 - Recent studies have highlighted the vital role of microblogging platforms, such as Twitter, in crisis situations. Various machine-learning approaches have been proposed to identify and prioritize crucial information from different humanitarian categories for preparation and rescue purposes. In the crisis domain, the explanation of models' output decisions is gaining significant research momentum. Some previous works focused on human annotation of rationales to train and extract supporting evidence for model interpretability. However, such annotations are usually expensive, require much effort, and are not always available in the real-time situation of a new crisis event. In this paper, we investigate recent advances in large language models (LLMs) as data annotators for informal tweet text. We perform a detailed qualitative and quantitative evaluation of ChatGPT rationale annotations in a few-shot setup. ChatGPT annotations are quite close to human annotations but less precise. Further, we propose an active learning-based interpretable classification model trained from a small set of annotated data. Our experiments show that (a) ChatGPT has the potential to extract rationales for crisis tweet classification tasks, although its performance is slightly below that of a model trained on human-annotated rationale data (~3-6%), and (b) an active learning setup can help reduce the burden of manual annotation and maintain a trade-off between performance and data size.
AB - Recent studies have highlighted the vital role of microblogging platforms, such as Twitter, in crisis situations. Various machine-learning approaches have been proposed to identify and prioritize crucial information from different humanitarian categories for preparation and rescue purposes. In the crisis domain, the explanation of models' output decisions is gaining significant research momentum. Some previous works focused on human annotation of rationales to train and extract supporting evidence for model interpretability. However, such annotations are usually expensive, require much effort, and are not always available in the real-time situation of a new crisis event. In this paper, we investigate recent advances in large language models (LLMs) as data annotators for informal tweet text. We perform a detailed qualitative and quantitative evaluation of ChatGPT rationale annotations in a few-shot setup. ChatGPT annotations are quite close to human annotations but less precise. Further, we propose an active learning-based interpretable classification model trained from a small set of annotated data. Our experiments show that (a) ChatGPT has the potential to extract rationales for crisis tweet classification tasks, although its performance is slightly below that of a model trained on human-annotated rationale data (~3-6%), and (b) an active learning setup can help reduce the burden of manual annotation and maintain a trade-off between performance and data size.
KW - active learning
KW - crisis events
KW - interpretability
KW - large language model
KW - semi-supervised learning
KW - twitter
UR - http://www.scopus.com/inward/record.url?scp=85194082785&partnerID=8YFLogxK
U2 - 10.1145/3589334.3648141
DO - 10.1145/3589334.3648141
M3 - Conference contribution
AN - SCOPUS:85194082785
SP - 4534
EP - 4543
BT - WWW '24
Y2 - 13 May 2024 through 17 May 2024
ER -