Details
Original language | English |
---|---|
Title of host publication | CIKM 2024 |
Subtitle of host publication | Proceedings of the 33rd ACM International Conference on Information and Knowledge Management |
Pages | 2189-2199 |
Number of pages | 11 |
ISBN (electronic) | 9798400704369 |
Publication status | Published - 21 Oct 2024 |
Event | 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024 - Boise, United States. Duration: 21 Oct 2024 → 25 Oct 2024 |
Abstract
The increasing proliferation of misinformation and its alarming impact have motivated both industry and academia to develop approaches for misinformation detection and fact checking. Recent advances in large language models (LLMs) have shown remarkable performance in various tasks, but their potential for misinformation detection remains relatively underexplored. Most existing state-of-the-art approaches either do not consider evidence and focus solely on claim-related features, or assume that the evidence is provided. Few approaches consider evidence retrieval as part of misinformation detection, and those that do rely on fine-tuning models. In this paper, we investigate the potential of LLMs for misinformation detection in a zero-shot setting. We incorporate an evidence retrieval component, as it is crucial to gather pertinent information from various sources to assess the veracity of claims. To this end, we propose a novel re-ranking approach for multimodal evidence retrieval using both LLMs and large vision-language models (LVLMs). The retrieved evidence samples (images and texts) serve as the input for an LVLM-based approach for multimodal fact verification (LVLM4FV). To enable a fair evaluation, we address the issue of incomplete ground truth in an existing evidence retrieval dataset by annotating a more complete set of evidence samples for both image and text retrieval. Our experimental results on two datasets demonstrate the superiority of the proposed approach in both evidence retrieval and fact verification tasks, as well as better generalization capability.
Keywords
- multimodal misinformation detection
- news analytics
- social media
ASJC Scopus subject areas
- Business, Management and Accounting (all)
- General Business, Management and Accounting
- Decision Sciences (all)
- General Decision Sciences
Cite this
Tahmasebi, S., Müller-Budack, E., & Ewerth, R. (2024). Multimodal Misinformation Detection using Large Vision-Language Models. In CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (pp. 2189-2199).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Multimodal Misinformation Detection using Large Vision-Language Models
AU - Tahmasebi, Sahar
AU - Müller-Budack, Eric
AU - Ewerth, Ralph
N1 - Publisher Copyright: © 2024 Owner/Author.
PY - 2024/10/21
Y1 - 2024/10/21
N2 - The increasing proliferation of misinformation and its alarming impact have motivated both industry and academia to develop approaches for misinformation detection and fact checking. Recent advances in large language models (LLMs) have shown remarkable performance in various tasks, but their potential for misinformation detection remains relatively underexplored. Most existing state-of-the-art approaches either do not consider evidence and focus solely on claim-related features, or assume that the evidence is provided. Few approaches consider evidence retrieval as part of misinformation detection, and those that do rely on fine-tuning models. In this paper, we investigate the potential of LLMs for misinformation detection in a zero-shot setting. We incorporate an evidence retrieval component, as it is crucial to gather pertinent information from various sources to assess the veracity of claims. To this end, we propose a novel re-ranking approach for multimodal evidence retrieval using both LLMs and large vision-language models (LVLMs). The retrieved evidence samples (images and texts) serve as the input for an LVLM-based approach for multimodal fact verification (LVLM4FV). To enable a fair evaluation, we address the issue of incomplete ground truth in an existing evidence retrieval dataset by annotating a more complete set of evidence samples for both image and text retrieval. Our experimental results on two datasets demonstrate the superiority of the proposed approach in both evidence retrieval and fact verification tasks, as well as better generalization capability.
AB - The increasing proliferation of misinformation and its alarming impact have motivated both industry and academia to develop approaches for misinformation detection and fact checking. Recent advances in large language models (LLMs) have shown remarkable performance in various tasks, but their potential for misinformation detection remains relatively underexplored. Most existing state-of-the-art approaches either do not consider evidence and focus solely on claim-related features, or assume that the evidence is provided. Few approaches consider evidence retrieval as part of misinformation detection, and those that do rely on fine-tuning models. In this paper, we investigate the potential of LLMs for misinformation detection in a zero-shot setting. We incorporate an evidence retrieval component, as it is crucial to gather pertinent information from various sources to assess the veracity of claims. To this end, we propose a novel re-ranking approach for multimodal evidence retrieval using both LLMs and large vision-language models (LVLMs). The retrieved evidence samples (images and texts) serve as the input for an LVLM-based approach for multimodal fact verification (LVLM4FV). To enable a fair evaluation, we address the issue of incomplete ground truth in an existing evidence retrieval dataset by annotating a more complete set of evidence samples for both image and text retrieval. Our experimental results on two datasets demonstrate the superiority of the proposed approach in both evidence retrieval and fact verification tasks, as well as better generalization capability.
KW - multimodal misinformation detection
KW - news analytics
KW - social media
UR - http://www.scopus.com/inward/record.url?scp=85210041544&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2407.14321
DO - 10.48550/arXiv.2407.14321
M3 - Conference contribution
AN - SCOPUS:85210041544
SP - 2189
EP - 2199
BT - CIKM 2024
T2 - 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024
Y2 - 21 October 2024 through 25 October 2024
ER -