Details
Original language | English |
---|---|
Pages (from-to) | 31-45 |
Number of pages | 15 |
Journal | International Journal of Multimedia Information Retrieval |
Volume | 9 |
Issue number | 1 |
Early online date | 22 Jan 2020 |
Publication status | Published - March 2020 |
Abstract
The complementary and mutually reinforcing nature of visual and textual information for conveying a message is widely recognized, for example, in entertainment, news, advertising, science, and education. While the complex interplay of image and text in forming semantic meaning has been studied thoroughly in linguistics and communication science for several decades, computer vision and multimedia research have remained more or less on the surface of the problem. An exception is previous work that introduced the two metrics Cross-Modal Mutual Information and Semantic Correlation to model complex image-text relations. In this paper, we motivate the need for an additional metric called Status to cover complex image-text relations more completely. This set of metrics enables us to derive a novel categorization of eight semantic image-text classes based on three dimensions. In addition, we demonstrate how to automatically gather and augment a dataset for these classes from the Web. Furthermore, we present a deep learning system to automatically predict each of the three metrics, as well as a system that directly predicts the eight image-text classes. Experimental results show the feasibility of the approach, with the predict-all approach outperforming the cascaded combination of the metric classifiers.
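To make the categorization concrete, the sketch below maps the three metrics to one of 2 × 2 × 2 = 8 class indices. It is a minimal illustration under assumptions not stated in the abstract: each metric is binarized with a hypothetical threshold, and the names `MetricScores`, `image_text_class`, and the numeric class indices are invented here; the paper's actual metric scales and class definitions may differ.

```python
# Illustrative sketch only: derive one of 2**3 = 8 image-text class indices
# from three binarized metrics. Metric names follow the abstract; thresholds,
# value ranges, and the index scheme are assumptions for illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class MetricScores:
    cmi: float                   # Cross-Modal Mutual Information (assumed in [0, 1])
    semantic_correlation: float  # Semantic Correlation (assumed in [-1, 1])
    status: float                # Status (assumed sign encodes which modality dominates)


def image_text_class(scores: MetricScores,
                     thresholds: tuple = (0.5, 0.0, 0.0)) -> int:
    """Binarize each metric and pack the three bits into a class index 0..7."""
    bits = (
        scores.cmi >= thresholds[0],
        scores.semantic_correlation >= thresholds[1],
        scores.status >= thresholds[2],
    )
    return sum(bit << i for i, bit in enumerate(bits))


if __name__ == "__main__":
    example = MetricScores(cmi=0.8, semantic_correlation=0.3, status=-0.2)
    print(image_text_class(example))  # prints 3: CMI and SC bits set, Status bit not
```

Read this way, the cascaded approach would obtain each bit from a separate metric classifier and combine them as above, whereas the predict-all system outputs the class directly; the abstract reports that the latter performs better.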
ASJC Scopus subject areas
- Computer Science (all)
- Information Systems
- Engineering (all)
- Media Technology
- Social Sciences (all)
- Library and Information Sciences
Cite
Otto, C., Springstein, M., Anand, A., & Ewerth, R. (2020). Characterization and classification of semantic image-text relations. In: International Journal of Multimedia Information Retrieval, Vol. 9, No. 1, 03.2020, pp. 31-45.
Publication: Contribution to journal › Article › Research › Peer review
RIS
TY - JOUR
T1 - Characterization and classification of semantic image-text relations
AU - Otto, Christian
AU - Springstein, Matthias
AU - Anand, Avishek
AU - Ewerth, Ralph
N1 - Funding Information: Open Access funding provided by Projekt DEAL. Part of this work is financially supported by the Leibniz Association, Germany (Leibniz Competition 2018, funding line “Collaborative Excellence”, Project SALIENT [K68/2017]).
PY - 2020/3
Y1 - 2020/3
AB - The complementary and mutually reinforcing nature of visual and textual information for conveying a message is widely recognized, for example, in entertainment, news, advertising, science, and education. While the complex interplay of image and text in forming semantic meaning has been studied thoroughly in linguistics and communication science for several decades, computer vision and multimedia research have remained more or less on the surface of the problem. An exception is previous work that introduced the two metrics Cross-Modal Mutual Information and Semantic Correlation to model complex image-text relations. In this paper, we motivate the need for an additional metric called Status to cover complex image-text relations more completely. This set of metrics enables us to derive a novel categorization of eight semantic image-text classes based on three dimensions. In addition, we demonstrate how to automatically gather and augment a dataset for these classes from the Web. Furthermore, we present a deep learning system to automatically predict each of the three metrics, as well as a system that directly predicts the eight image-text classes. Experimental results show the feasibility of the approach, with the predict-all approach outperforming the cascaded combination of the metric classifiers.
KW - Data augmentation
KW - Image-text class
KW - Multimodality
KW - Semantic gap
UR - http://www.scopus.com/inward/record.url?scp=85078351928&partnerID=8YFLogxK
U2 - 10.1007/s13735-019-00187-6
DO - 10.1007/s13735-019-00187-6
M3 - Article
AN - SCOPUS:85078351928
VL - 9
SP - 31
EP - 45
JO - International Journal of Multimedia Information Retrieval
JF - International Journal of Multimedia Information Retrieval
SN - 2192-6611
IS - 1
ER -