MLM: A Benchmark Dataset for Multitask Learning with Multiple Languages and Modalities

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Jason Armitage
  • Endri Kacupaj
  • Golsa Tahmasebzadeh
  • Swati
  • Maria Maleshkova
  • Ralph Ewerth
  • Jens Lehmann

External Research Organisations

  • University of Bonn
  • German National Library of Science and Technology (TIB)
  • Jožef Stefan Institute (JSI)

Details

Original language: English
Title of host publication: CIKM 2020
Subtitle of host publication: Proceedings of the 29th ACM International Conference on Information and Knowledge Management
Publisher: Association for Computing Machinery (ACM)
Pages: 2967-2974
Number of pages: 8
ISBN (electronic): 9781450368599
Publication status: Published - 19 Oct 2020
Externally published: Yes
Event: 29th ACM International Conference on Information and Knowledge Management, CIKM 2020 - Virtual, Online, Ireland
Duration: 19 Oct 2020 - 23 Oct 2020

Abstract

In this paper, we introduce the MLM (Multiple Languages and Modalities) dataset - a new resource to train and evaluate multitask systems on samples in multiple modalities and three languages. The generation process and inclusion of semantic data provide a resource that further tests the ability for multitask systems to learn relationships between entities. The dataset is designed for researchers and developers who build applications that perform multiple tasks on data encountered on the web and in digital archives. A second version of MLM provides a geo-representative subset of the data with weighted samples for countries of the European Union. We demonstrate the value of the resource in developing novel applications in the digital humanities with a motivating use case and specify a benchmark set of tasks to retrieve modalities and locate entities in the dataset. Evaluation of baseline multitask and single task systems on the full and geo-representative versions of MLM demonstrate the challenges of generalising on diverse data. In addition to the digital humanities, we expect the resource to contribute to research in multimodal representation learning, location estimation, and scene understanding.

Keywords

    machine learning, multilingual data, multimodal data, multitask learning

Cite this

MLM: A Benchmark Dataset for Multitask Learning with Multiple Languages and Modalities. / Armitage, Jason; Kacupaj, Endri; Tahmasebzadeh, Golsa et al.
CIKM 2020: Proceedings of the 29th ACM International Conference on Information and Knowledge Management. Association for Computing Machinery (ACM), 2020. p. 2967-2974.


Armitage, J, Kacupaj, E, Tahmasebzadeh, G, Swati, Maleshkova, M, Ewerth, R & Lehmann, J 2020, MLM: A Benchmark Dataset for Multitask Learning with Multiple Languages and Modalities. in CIKM 2020: Proceedings of the 29th ACM International Conference on Information and Knowledge Management. Association for Computing Machinery (ACM), pp. 2967-2974, 29th ACM International Conference on Information and Knowledge Management, CIKM 2020, Virtual, Online, Ireland, 19 Oct 2020. https://doi.org/10.48550/arXiv.2008.06376, https://doi.org/10.1145/3340531.3412783
Armitage, J., Kacupaj, E., Tahmasebzadeh, G., Swati, Maleshkova, M., Ewerth, R., & Lehmann, J. (2020). MLM: A Benchmark Dataset for Multitask Learning with Multiple Languages and Modalities. In CIKM 2020: Proceedings of the 29th ACM International Conference on Information and Knowledge Management (pp. 2967-2974). Association for Computing Machinery (ACM). https://doi.org/10.48550/arXiv.2008.06376, https://doi.org/10.1145/3340531.3412783
Armitage J, Kacupaj E, Tahmasebzadeh G, Swati, Maleshkova M, Ewerth R et al. MLM: A Benchmark Dataset for Multitask Learning with Multiple Languages and Modalities. In CIKM 2020: Proceedings of the 29th ACM International Conference on Information and Knowledge Management. Association for Computing Machinery (ACM). 2020. p. 2967-2974 doi: 10.48550/arXiv.2008.06376, 10.1145/3340531.3412783
Armitage, Jason ; Kacupaj, Endri ; Tahmasebzadeh, Golsa et al. / MLM: A Benchmark Dataset for Multitask Learning with Multiple Languages and Modalities. CIKM 2020: Proceedings of the 29th ACM International Conference on Information and Knowledge Management. Association for Computing Machinery (ACM), 2020. pp. 2967-2974
Download (BibTeX)
@inproceedings{ff0d5a229c004bcfb053a06eceae2d4a,
title = "MLM: A Benchmark Dataset for Multitask Learning with Multiple Languages and Modalities",
abstract = "In this paper, we introduce the MLM (Multiple Languages and Modalities) dataset - a new resource to train and evaluate multitask systems on samples in multiple modalities and three languages. The generation process and inclusion of semantic data provide a resource that further tests the ability for multitask systems to learn relationships between entities. The dataset is designed for researchers and developers who build applications that perform multiple tasks on data encountered on the web and in digital archives. A second version of MLM provides a geo-representative subset of the data with weighted samples for countries of the European Union. We demonstrate the value of the resource in developing novel applications in the digital humanities with a motivating use case and specify a benchmark set of tasks to retrieve modalities and locate entities in the dataset. Evaluation of baseline multitask and single task systems on the full and geo-representative versions of MLM demonstrate the challenges of generalising on diverse data. In addition to the digital humanities, we expect the resource to contribute to research in multimodal representation learning, location estimation, and scene understanding.",
keywords = "machine learning, multilingual data, multimodal data, multitask learning",
author = "Jason Armitage and Endri Kacupaj and Golsa Tahmasebzadeh and Swati and Maria Maleshkova and Ralph Ewerth and Jens Lehmann",
note = "Funding information: MLM is supported by a team of researchers from the University of Bonn, the Leibniz Information Center for Science and Technology, and Jo{\v z}ef Stefan Institute. The resource is already in use for individual projects and as a contribution to the project deliverables of the Marie Sk{\l}odowska-Curie CLEOPATRA Innovative Training Network. In addition to the steps above that make the resource available to the wider community, usage of MLM will be promoted to the network of researchers in this project. Awareness among researchers and practitioners in digital humanities will be promoted by demonstrations and presentations at domain-related events. The range of modalities and languages present in the dataset also extends its application to research on multimodal representation learning, multilingual machine learning, information retrieval, location estimation, and the Semantic Web. MLM will be supported and maintained for three years in the first instance. A second release of the dataset is already scheduled and the generation process outlined above is designed to enable rapid scaling. The project leading to this publication has received funding from the European Union{\textquoteright}s Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska-Curie grant agreement No 812997.; 29th ACM International Conference on Information and Knowledge Management, CIKM 2020 ; Conference date: 19-10-2020 Through 23-10-2020",
year = "2020",
month = oct,
day = "19",
doi = "10.48550/arXiv.2008.06376",
language = "English",
pages = "2967--2974",
booktitle = "CIKM 2020",
publisher = "Association for Computing Machinery (ACM)",
address = "United States",

}

Download (RIS)

TY - GEN

T1 - MLM: A Benchmark Dataset for Multitask Learning with Multiple Languages and Modalities

T2 - 29th ACM International Conference on Information and Knowledge Management, CIKM 2020

AU - Armitage, Jason

AU - Kacupaj, Endri

AU - Tahmasebzadeh, Golsa

AU - Swati,

AU - Maleshkova, Maria

AU - Ewerth, Ralph

AU - Lehmann, Jens

N1 - Funding information: MLM is supported by a team of researchers from the University of Bonn, the Leibniz Information Center for Science and Technology, and Jožef Stefan Institute. The resource is already in use for individual projects and as a contribution to the project deliverables of the Marie Skłodowska-Curie CLEOPATRA Innovative Training Network. In addition to the steps above that make the resource available to the wider community, usage of MLM will be promoted to the network of researchers in this project. Awareness among researchers and practitioners in digital humanities will be promoted by demonstrations and presentations at domain-related events. The range of modalities and languages present in the dataset also extends its application to research on multimodal representation learning, multilingual machine learning, information retrieval, location estimation, and the Semantic Web. MLM will be supported and maintained for three years in the first instance. A second release of the dataset is already scheduled and the generation process outlined above is designed to enable rapid scaling. The project leading to this publication has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 812997.

PY - 2020/10/19

Y1 - 2020/10/19

N2 - In this paper, we introduce the MLM (Multiple Languages and Modalities) dataset - a new resource to train and evaluate multitask systems on samples in multiple modalities and three languages. The generation process and inclusion of semantic data provide a resource that further tests the ability for multitask systems to learn relationships between entities. The dataset is designed for researchers and developers who build applications that perform multiple tasks on data encountered on the web and in digital archives. A second version of MLM provides a geo-representative subset of the data with weighted samples for countries of the European Union. We demonstrate the value of the resource in developing novel applications in the digital humanities with a motivating use case and specify a benchmark set of tasks to retrieve modalities and locate entities in the dataset. Evaluation of baseline multitask and single task systems on the full and geo-representative versions of MLM demonstrate the challenges of generalising on diverse data. In addition to the digital humanities, we expect the resource to contribute to research in multimodal representation learning, location estimation, and scene understanding.

AB - In this paper, we introduce the MLM (Multiple Languages and Modalities) dataset - a new resource to train and evaluate multitask systems on samples in multiple modalities and three languages. The generation process and inclusion of semantic data provide a resource that further tests the ability for multitask systems to learn relationships between entities. The dataset is designed for researchers and developers who build applications that perform multiple tasks on data encountered on the web and in digital archives. A second version of MLM provides a geo-representative subset of the data with weighted samples for countries of the European Union. We demonstrate the value of the resource in developing novel applications in the digital humanities with a motivating use case and specify a benchmark set of tasks to retrieve modalities and locate entities in the dataset. Evaluation of baseline multitask and single task systems on the full and geo-representative versions of MLM demonstrate the challenges of generalising on diverse data. In addition to the digital humanities, we expect the resource to contribute to research in multimodal representation learning, location estimation, and scene understanding.

KW - machine learning

KW - multilingual data

KW - multimodal data

KW - multitask learning

UR - http://www.scopus.com/inward/record.url?scp=85095865453&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2008.06376

DO - 10.48550/arXiv.2008.06376

M3 - Conference contribution

AN - SCOPUS:85095865453

SP - 2967

EP - 2974

BT - CIKM 2020

PB - Association for Computing Machinery (ACM)

Y2 - 19 October 2020 through 23 October 2020

ER -