Human vs ChatGPT: Effect of Data Annotation in Interpretable Crisis-Related Microblog Classification

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Thi Huyen Nguyen
  • Koustav Rudra

Organizational units

External organizations

  • Indian Institute of Technology Kharagpur (IITKGP)

Details

Original language: English
Title of host publication: WWW '24
Subtitle: Proceedings of the ACM Web Conference 2024
Pages: 4534-4543
Number of pages: 10
ISBN (electronic): 9798400701719
Publication status: Published - 13 May 2024
Event: 33rd ACM Web Conference, WWW 2024 - Singapore, Singapore
Duration: 13 May 2024 - 17 May 2024

Abstract

Recent studies have highlighted the vital role of microblogging platforms, such as Twitter, in crisis situations. Various machine-learning approaches have been proposed to identify and prioritize crucial information from different humanitarian categories for preparation and rescue purposes. In the crisis domain, the explanation of models' output decisions is gaining significant research momentum. Some previous works relied on human-annotated rationales to train models and extract supporting evidence for interpretability. However, such annotations are usually expensive, require much effort, and are not always available in real-time situations of a new crisis event. In this paper, we investigate recent advances in large language models (LLMs) as data annotators for informal tweet text. We perform a detailed qualitative and quantitative evaluation of ChatGPT rationale annotations in a few-shot setup. ChatGPT annotations are quite close to human annotations but less precise. Further, we propose an active learning-based interpretable classification model trained on a small set of annotated data. Our experiments show that (a) ChatGPT has the potential to extract rationales for crisis tweet classification tasks, although performance is slightly below that of a model trained on human-annotated rationale data (~3-6%), and (b) an active learning setup can help reduce the burden of manual annotation while maintaining a trade-off between performance and data size.
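The active-learning idea in the abstract can be illustrated with a minimal uncertainty-sampling sketch: a classifier scores unlabeled tweets, and the ones it is least confident about are sent for (human or LLM) annotation. Everything below is a hypothetical illustration, not the authors' implementation; `toy_classifier` is a crude keyword heuristic standing in for the actual interpretable model.

```python
# Minimal sketch of an uncertainty-based active learning selection step,
# assuming a binary "crisis-relevant" classifier that outputs a probability.

def toy_classifier(tweet: str) -> float:
    """Return P(crisis-relevant) from a crude keyword heuristic (illustrative only)."""
    crisis_words = {"flood", "earthquake", "rescue", "trapped", "donate"}
    hits = sum(w in tweet.lower() for w in crisis_words)
    # No keyword hits -> 0.5, i.e. maximally uncertain; each hit adds confidence.
    return min(0.5 + 0.2 * hits, 1.0) if hits else 0.5

def select_for_annotation(pool: list[str], budget: int) -> list[str]:
    """Pick the `budget` tweets the model is least sure about (scores closest to 0.5)."""
    return sorted(pool, key=lambda t: abs(toy_classifier(t) - 0.5))[:budget]
```

In each round, the selected tweets would be annotated (with labels and rationales), added to the training set, and the model retrained, which is how a small annotation budget can be spent where it helps most.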

ASJC Scopus subject areas

Cite this

Human vs ChatGPT: Effect of Data Annotation in Interpretable Crisis-Related Microblog Classification. / Nguyen, Thi Huyen; Rudra, Koustav.
WWW '24: Proceedings of the ACM Web Conference 2024. 2024. pp. 4534-4543.


Nguyen, TH & Rudra, K 2024, Human vs ChatGPT: Effect of Data Annotation in Interpretable Crisis-Related Microblog Classification. in WWW '24: Proceedings of the ACM Web Conference 2024. pp. 4534-4543, 33rd ACM Web Conference, WWW 2024, Singapore, Singapore, 13 May 2024. https://doi.org/10.1145/3589334.3648141
Nguyen, T. H., & Rudra, K. (2024). Human vs ChatGPT: Effect of Data Annotation in Interpretable Crisis-Related Microblog Classification. In WWW '24: Proceedings of the ACM Web Conference 2024 (pp. 4534-4543). https://doi.org/10.1145/3589334.3648141
Nguyen TH, Rudra K. Human vs ChatGPT: Effect of Data Annotation in Interpretable Crisis-Related Microblog Classification. In: WWW '24: Proceedings of the ACM Web Conference 2024. 2024. p. 4534-4543. doi: 10.1145/3589334.3648141
Nguyen, Thi Huyen ; Rudra, Koustav. / Human vs ChatGPT : Effect of Data Annotation in Interpretable Crisis-Related Microblog Classification. WWW '24: Proceedings of the ACM Web Conference 2024. 2024. pp. 4534-4543
Download (BibTeX)
@inproceedings{98e161e419734f879ccbcba1c6f529ed,
title = "Human vs ChatGPT: Effect of Data Annotation in Interpretable Crisis-Related Microblog Classification",
abstract = "Recent studies have highlighted the vital role of microblogging platforms, such as Twitter, in crisis situations. Various machine-learning approaches have been proposed to identify and prioritize crucial information from different humanitarian categories for preparation and rescue purposes. In the crisis domain, the explanation of models' output decisions is gaining significant research momentum. Some previous works relied on human-annotated rationales to train models and extract supporting evidence for interpretability. However, such annotations are usually expensive, require much effort, and are not always available in real-time situations of a new crisis event. In this paper, we investigate recent advances in large language models (LLMs) as data annotators for informal tweet text. We perform a detailed qualitative and quantitative evaluation of ChatGPT rationale annotations in a few-shot setup. ChatGPT annotations are quite close to human annotations but less precise. Further, we propose an active learning-based interpretable classification model trained on a small set of annotated data. Our experiments show that (a) ChatGPT has the potential to extract rationales for crisis tweet classification tasks, although performance is slightly below that of a model trained on human-annotated rationale data ({$\sim$}3-6{\%}), and (b) an active learning setup can help reduce the burden of manual annotation while maintaining a trade-off between performance and data size.",
keywords = "active learning, crisis events, interpretability, large language model, semi-supervised learning, twitter",
author = "Nguyen, {Thi Huyen} and Koustav Rudra",
note = "Publisher Copyright: {\textcopyright} 2024 ACM.; 33rd ACM Web Conference, WWW 2024 ; Conference date: 13-05-2024 Through 17-05-2024",
year = "2024",
month = may,
day = "13",
doi = "10.1145/3589334.3648141",
language = "English",
pages = "4534--4543",
booktitle = "WWW '24: Proceedings of the ACM Web Conference 2024",

}

Download (RIS)

TY - GEN

T1 - Human vs ChatGPT: Effect of Data Annotation in Interpretable Crisis-Related Microblog Classification

T2 - 33rd ACM Web Conference, WWW 2024

AU - Nguyen, Thi Huyen

AU - Rudra, Koustav

N1 - Publisher Copyright: © 2024 ACM.

PY - 2024/5/13

Y1 - 2024/5/13

N2 - Recent studies have highlighted the vital role of microblogging platforms, such as Twitter, in crisis situations. Various machine-learning approaches have been proposed to identify and prioritize crucial information from different humanitarian categories for preparation and rescue purposes. In the crisis domain, the explanation of models' output decisions is gaining significant research momentum. Some previous works relied on human-annotated rationales to train models and extract supporting evidence for interpretability. However, such annotations are usually expensive, require much effort, and are not always available in real-time situations of a new crisis event. In this paper, we investigate recent advances in large language models (LLMs) as data annotators for informal tweet text. We perform a detailed qualitative and quantitative evaluation of ChatGPT rationale annotations in a few-shot setup. ChatGPT annotations are quite close to human annotations but less precise. Further, we propose an active learning-based interpretable classification model trained on a small set of annotated data. Our experiments show that (a) ChatGPT has the potential to extract rationales for crisis tweet classification tasks, although performance is slightly below that of a model trained on human-annotated rationale data (~3-6%), and (b) an active learning setup can help reduce the burden of manual annotation while maintaining a trade-off between performance and data size.

AB - Recent studies have highlighted the vital role of microblogging platforms, such as Twitter, in crisis situations. Various machine-learning approaches have been proposed to identify and prioritize crucial information from different humanitarian categories for preparation and rescue purposes. In the crisis domain, the explanation of models' output decisions is gaining significant research momentum. Some previous works relied on human-annotated rationales to train models and extract supporting evidence for interpretability. However, such annotations are usually expensive, require much effort, and are not always available in real-time situations of a new crisis event. In this paper, we investigate recent advances in large language models (LLMs) as data annotators for informal tweet text. We perform a detailed qualitative and quantitative evaluation of ChatGPT rationale annotations in a few-shot setup. ChatGPT annotations are quite close to human annotations but less precise. Further, we propose an active learning-based interpretable classification model trained on a small set of annotated data. Our experiments show that (a) ChatGPT has the potential to extract rationales for crisis tweet classification tasks, although performance is slightly below that of a model trained on human-annotated rationale data (~3-6%), and (b) an active learning setup can help reduce the burden of manual annotation while maintaining a trade-off between performance and data size.

KW - active learning

KW - crisis events

KW - interpretability

KW - large language model

KW - semi-supervised learning

KW - twitter

UR - http://www.scopus.com/inward/record.url?scp=85194082785&partnerID=8YFLogxK

U2 - 10.1145/3589334.3648141

DO - 10.1145/3589334.3648141

M3 - Conference contribution

AN - SCOPUS:85194082785

SP - 4534

EP - 4543

BT - WWW '24: Proceedings of the ACM Web Conference 2024

Y2 - 13 May 2024 through 17 May 2024

ER -