Characterization and classification of semantic image-text relations

Publication: Contribution to journal › Article › Research › Peer-reviewed

Authors

  • Christian Otto
  • Matthias Springstein
  • Avishek Anand
  • Ralph Ewerth

External organizations

  • Technische Informationsbibliothek (TIB) Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek

Details

Original language: English
Pages (from-to): 31-45
Number of pages: 15
Journal: International Journal of Multimedia Information Retrieval
Volume: 9
Issue number: 1
Early online date: 22 Jan 2020
Publication status: Published - March 2020

Abstract

The beneficial, complementary nature of visual and textual information for conveying a message is widely known, for example, in entertainment, news, advertisements, science, or education. While the complex interplay of image and text in forming semantic meaning has been studied thoroughly in linguistics and communication sciences for several decades, computer vision and multimedia research have more or less remained at the surface of the problem. An exception is previous work that introduced the two metrics Cross-Modal Mutual Information and Semantic Correlation to model complex image-text relations. In this paper, we motivate the necessity of an additional metric called Status to cover complex image-text relations more completely. This set of metrics enables us to derive a novel categorization of eight semantic image-text classes along three dimensions. In addition, we demonstrate how to automatically gather and augment a dataset for these classes from the Web. Further, we present a deep learning system to automatically predict each of the three metrics, as well as a system to directly predict the eight image-text classes. Experimental results show the feasibility of the approach, whereby the predict-all approach outperforms the cascaded approach of the metric classifiers.
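The relation between the three metrics (Cross-Modal Mutual Information, Semantic Correlation, Status) and the eight image-text classes can be illustrated with a small sketch. This is hypothetical code, not from the paper: it simply assumes each metric is binarized, so the three dimensions span 2^3 = 8 combinations; the paper defines the actual classes over its own value ranges for each metric.

```python
from itertools import product

# The three dimensions named in the abstract.
METRICS = ("CMI", "SC", "Status")

def enumerate_classes(levels=(0, 1)):
    """Enumerate all combinations of (binarized) metric values.

    With two levels per metric, the three dimensions yield
    2 ** 3 = 8 combinations, matching the count of eight
    semantic image-text classes described in the abstract.
    """
    return [dict(zip(METRICS, combo)) for combo in product(levels, repeat=len(METRICS))]

classes = enumerate_classes()
print(len(classes))  # 8
```

The point of the sketch is only the combinatorics: three independent dimensions with two levels each are sufficient to index eight distinct classes.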

Cite

Characterization and classification of semantic image-text relations. / Otto, Christian; Springstein, Matthias; Anand, Avishek et al.
In: International Journal of Multimedia Information Retrieval, Vol. 9, No. 1, 03.2020, p. 31-45.


Otto, C, Springstein, M, Anand, A & Ewerth, R 2020, 'Characterization and classification of semantic image-text relations', International Journal of Multimedia Information Retrieval, vol. 9, no. 1, pp. 31-45. https://doi.org/10.1007/s13735-019-00187-6
Otto, C., Springstein, M., Anand, A., & Ewerth, R. (2020). Characterization and classification of semantic image-text relations. International Journal of Multimedia Information Retrieval, 9(1), 31-45. https://doi.org/10.1007/s13735-019-00187-6
Otto C, Springstein M, Anand A, Ewerth R. Characterization and classification of semantic image-text relations. International Journal of Multimedia Information Retrieval. 2020 Mar;9(1):31-45. Epub 2020 Jan 22. doi: 10.1007/s13735-019-00187-6
Otto, Christian ; Springstein, Matthias ; Anand, Avishek et al. / Characterization and classification of semantic image-text relations. In: International Journal of Multimedia Information Retrieval. 2020 ; Vol. 9, No. 1. pp. 31-45.
BibTeX
@article{cc603d4ec452451db3f63607fad06eea,
title = "Characterization and classification of semantic image-text relations",
abstract = "The beneficial, complementary nature of visual and textual information for conveying a message is widely known, for example, in entertainment, news, advertisements, science, or education. While the complex interplay of image and text in forming semantic meaning has been studied thoroughly in linguistics and communication sciences for several decades, computer vision and multimedia research have more or less remained at the surface of the problem. An exception is previous work that introduced the two metrics Cross-Modal Mutual Information and Semantic Correlation to model complex image-text relations. In this paper, we motivate the necessity of an additional metric called Status to cover complex image-text relations more completely. This set of metrics enables us to derive a novel categorization of eight semantic image-text classes along three dimensions. In addition, we demonstrate how to automatically gather and augment a dataset for these classes from the Web. Further, we present a deep learning system to automatically predict each of the three metrics, as well as a system to directly predict the eight image-text classes. Experimental results show the feasibility of the approach, whereby the predict-all approach outperforms the cascaded approach of the metric classifiers.",
keywords = "Data augmentation, Image-text class, Multimodality, Semantic gap",
author = "Christian Otto and Matthias Springstein and Avishek Anand and Ralph Ewerth",
note = "Funding Information: Open Access funding provided by Projekt DEAL. Part of this work is financially supported by the Leibniz Association, Germany (Leibniz Competition 2018, funding line “Collaborative Excellence”, Project SALIENT [K68/2017]). ",
year = "2020",
month = mar,
doi = "10.1007/s13735-019-00187-6",
language = "English",
journal = "International Journal of Multimedia Information Retrieval",
volume = "9",
pages = "31--45",
number = "1",

}

RIS

TY - JOUR

T1 - Characterization and classification of semantic image-text relations

AU - Otto, Christian

AU - Springstein, Matthias

AU - Anand, Avishek

AU - Ewerth, Ralph

N1 - Funding Information: Open Access funding provided by Projekt DEAL. Part of this work is financially supported by the Leibniz Association, Germany (Leibniz Competition 2018, funding line “Collaborative Excellence”, Project SALIENT [K68/2017]).

PY - 2020/3

Y1 - 2020/3

N2 - The beneficial, complementary nature of visual and textual information for conveying a message is widely known, for example, in entertainment, news, advertisements, science, or education. While the complex interplay of image and text in forming semantic meaning has been studied thoroughly in linguistics and communication sciences for several decades, computer vision and multimedia research have more or less remained at the surface of the problem. An exception is previous work that introduced the two metrics Cross-Modal Mutual Information and Semantic Correlation to model complex image-text relations. In this paper, we motivate the necessity of an additional metric called Status to cover complex image-text relations more completely. This set of metrics enables us to derive a novel categorization of eight semantic image-text classes along three dimensions. In addition, we demonstrate how to automatically gather and augment a dataset for these classes from the Web. Further, we present a deep learning system to automatically predict each of the three metrics, as well as a system to directly predict the eight image-text classes. Experimental results show the feasibility of the approach, whereby the predict-all approach outperforms the cascaded approach of the metric classifiers.

AB - The beneficial, complementary nature of visual and textual information for conveying a message is widely known, for example, in entertainment, news, advertisements, science, or education. While the complex interplay of image and text in forming semantic meaning has been studied thoroughly in linguistics and communication sciences for several decades, computer vision and multimedia research have more or less remained at the surface of the problem. An exception is previous work that introduced the two metrics Cross-Modal Mutual Information and Semantic Correlation to model complex image-text relations. In this paper, we motivate the necessity of an additional metric called Status to cover complex image-text relations more completely. This set of metrics enables us to derive a novel categorization of eight semantic image-text classes along three dimensions. In addition, we demonstrate how to automatically gather and augment a dataset for these classes from the Web. Further, we present a deep learning system to automatically predict each of the three metrics, as well as a system to directly predict the eight image-text classes. Experimental results show the feasibility of the approach, whereby the predict-all approach outperforms the cascaded approach of the metric classifiers.

KW - Data augmentation

KW - Image-text class

KW - Multimodality

KW - Semantic gap

UR - http://www.scopus.com/inward/record.url?scp=85078351928&partnerID=8YFLogxK

U2 - 10.1007/s13735-019-00187-6

DO - 10.1007/s13735-019-00187-6

M3 - Article

AN - SCOPUS:85078351928

VL - 9

SP - 31

EP - 45

JO - International Journal of Multimedia Information Retrieval

JF - International Journal of Multimedia Information Retrieval

SN - 2192-6611

IS - 1

ER -