Details
| Original language | English |
|---|---|
| Pages (from-to) | 111-125 |
| Number of pages | 15 |
| Journal | Int. J. Multim. Inf. Retr. |
| Volume | 10 |
| Issue number | 2 |
| Early online date | 28 Apr 2021 |
| Publication status | Published - Jun 2021 |
Abstract
The World Wide Web has become a popular source for gathering information and news. Multimodal information, e.g., text supplemented with photographs, is typically used to convey news more effectively or to attract attention. Photographs can be decorative or depict additional details, but they might also contain misleading information. Quantifying the cross-modal consistency of entity representations can assist human assessors in evaluating the overall multimodal message. In some cases, such measures might give hints to detect fake news, an increasingly important topic in today’s society. In this paper, we present a multimodal approach to quantify the entity coherence between image and text in real-world news. Named entity linking is applied to extract persons, locations, and events from news texts. Several measures are suggested to calculate the cross-modal similarity of the entities in text and photograph by exploiting state-of-the-art computer vision approaches. In contrast to previous work, our system automatically acquires example data from the Web and is applicable to real-world news. Moreover, an approach that quantifies contextual image-text relations is introduced. The feasibility is demonstrated on two datasets that cover different languages, topics, and domains.
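The core idea described in the abstract, comparing an entity mentioned in the text against reference examples gathered from the Web, can be illustrated with a minimal sketch. This is not the authors' code: the embeddings here are toy vectors standing in for real visual features (e.g., face or scene descriptors), and taking the maximum similarity over reference images is one plausible aggregation strategy.

```python
# Illustrative sketch: score how consistently a single named entity
# (extracted from the news text) appears in the news photograph.
# Assumes embeddings were already computed for the photo and for
# several Web-crawled reference images of the entity.
from math import sqrt

def cosine(a, b):
    """Cosine similarity of two equal-length float vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def entity_consistency(photo_emb, reference_embs):
    """Maximum similarity of the news photo to any reference image:
    one strong match is taken as evidence the entity is depicted."""
    return max(cosine(photo_emb, r) for r in reference_embs)

# Toy vectors standing in for real computer-vision features
photo = [0.9, 0.1, 0.0]
refs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
score = entity_consistency(photo, refs)
```

A full system would repeat this per extracted person, location, or event and combine the scores into a document-level consistency measure.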
Keywords
- Cross-modal consistency
- Image repurposing detection
- Image-text relations
- News analytics
ASJC Scopus subject areas
- Computer Science(all)
- Information Systems
- Social Sciences(all)
- Library and Information Sciences
- Engineering(all)
- Media Technology
Cite this
In: Int. J. Multim. Inf. Retr., Vol. 10, No. 2, 06.2021, p. 111-125.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Multimodal news analytics using measures of cross-modal entity and context consistency
AU - Müller-Budack, Eric
AU - Theiner, Jonas
AU - Diering, Sebastian
AU - Idahl, Maximilian
AU - Hakimov, Sherzod
AU - Ewerth, Ralph
N1 - Funding Information: This work has partially received funding from the European Union’s Horizon research and innovation programme 2020 under the Marie Skłodowska-Curie Grant Agreement No 812997, and the German Research Foundation (DFG: Deutsche Forschungsgemeinschaft, project number: 388420599). We are very grateful to Avishek Anand (L3S Research Center, Leibniz University Hannover) for his valuable comments that improved the quality of the paper.
PY - 2021/6
Y1 - 2021/6
N2 - The World Wide Web has become a popular source for gathering information and news. Multimodal information, e.g., text supplemented with photographs, is typically used to convey news more effectively or to attract attention. Photographs can be decorative or depict additional details, but they might also contain misleading information. Quantifying the cross-modal consistency of entity representations can assist human assessors in evaluating the overall multimodal message. In some cases, such measures might give hints to detect fake news, an increasingly important topic in today’s society. In this paper, we present a multimodal approach to quantify the entity coherence between image and text in real-world news. Named entity linking is applied to extract persons, locations, and events from news texts. Several measures are suggested to calculate the cross-modal similarity of the entities in text and photograph by exploiting state-of-the-art computer vision approaches. In contrast to previous work, our system automatically acquires example data from the Web and is applicable to real-world news. Moreover, an approach that quantifies contextual image-text relations is introduced. The feasibility is demonstrated on two datasets that cover different languages, topics, and domains.
AB - The World Wide Web has become a popular source for gathering information and news. Multimodal information, e.g., text supplemented with photographs, is typically used to convey news more effectively or to attract attention. Photographs can be decorative or depict additional details, but they might also contain misleading information. Quantifying the cross-modal consistency of entity representations can assist human assessors in evaluating the overall multimodal message. In some cases, such measures might give hints to detect fake news, an increasingly important topic in today’s society. In this paper, we present a multimodal approach to quantify the entity coherence between image and text in real-world news. Named entity linking is applied to extract persons, locations, and events from news texts. Several measures are suggested to calculate the cross-modal similarity of the entities in text and photograph by exploiting state-of-the-art computer vision approaches. In contrast to previous work, our system automatically acquires example data from the Web and is applicable to real-world news. Moreover, an approach that quantifies contextual image-text relations is introduced. The feasibility is demonstrated on two datasets that cover different languages, topics, and domains.
KW - Cross-modal consistency
KW - Image repurposing detection
KW - Image-text relations
KW - News analytics
UR - http://www.scopus.com/inward/record.url?scp=85105420523&partnerID=8YFLogxK
U2 - 10.1007/s13735-021-00207-4
DO - 10.1007/s13735-021-00207-4
M3 - Article
VL - 10
SP - 111
EP - 125
JO - Int. J. Multim. Inf. Retr.
JF - Int. J. Multim. Inf. Retr.
IS - 2
ER -