Back to the Roots: Predicting the Source Domain of Metaphors using Contrastive Learning

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review


Details

Original language: English
Title of host publication: Proceedings of the 2022 Workshop on Figurative Language Processing
Pages: 137-142
Number of pages: 6
ISBN (electronic): 9781959429111
Publication status: Published - 2022
Event: 3rd Workshop on Figurative Language Processing, FigLang 2022 - Abu Dhabi, United Arab Emirates
Duration: 8 Dec 2022 - 8 Dec 2022

Abstract

Metaphors frame a given target domain using concepts from another, usually more concrete, source domain. Previous research in NLP has focused on the identification of metaphors and the interpretation of their meaning. In contrast, this paper studies to what extent the source domain can be predicted computationally from a metaphorical text. Given a dataset with metaphorical texts from a finite set of source domains, we propose a contrastive learning approach that ranks source domains by their likelihood of being referred to in a metaphorical text. In experiments, it achieves reasonable performance even for rare source domains, clearly outperforming a classification baseline.
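To illustrate the setup described in the abstract, below is a minimal sketch (not the authors' released code) of contrastive source-domain ranking in PyTorch: a shared encoder embeds both the metaphorical text and each candidate source-domain label, an InfoNCE-style objective pulls a text toward its gold domain and away from the others, and inference ranks all domains by cosine similarity. The model name, domain list, and example sentences are illustrative assumptions, not taken from the paper.

# Minimal sketch of contrastive source-domain ranking (assumptions labeled below).
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-uncased"   # assumption: any pretrained encoder could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)

SOURCE_DOMAINS = ["war", "journey", "plants", "light"]   # illustrative subset only

def embed(texts):
    """Mean-pooled, L2-normalized sentence embeddings from the shared encoder."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state           # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)          # (B, T, 1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)         # (B, H)
    return F.normalize(pooled, dim=-1)

def contrastive_loss(text_emb, domain_emb, gold_idx, temperature=0.07):
    """InfoNCE-style loss: the gold domain is the positive, all others are negatives."""
    logits = text_emb @ domain_emb.T / temperature        # (B, |domains|)
    return F.cross_entropy(logits, gold_idx)

def rank_domains(text):
    """Rank all candidate source domains for one metaphorical text by cosine similarity."""
    with torch.no_grad():
        sims = (embed([text]) @ embed(SOURCE_DOMAINS).T).squeeze(0)
    order = sims.argsort(descending=True)
    return [(SOURCE_DOMAINS[i], sims[i].item()) for i in order]

if __name__ == "__main__":
    # One made-up loss computation (no optimizer step shown), then ranking an unseen metaphor.
    texts = ["She attacked every weak point in my argument."]
    gold = torch.tensor([SOURCE_DOMAINS.index("war")])
    loss = contrastive_loss(embed(texts), embed(SOURCE_DOMAINS), gold)
    print("training loss:", loss.item())
    print(rank_domains("Our relationship has hit a dead end."))

At inference time, ranking rather than classifying lets rare source domains still surface near the top of the list, which matches the motivation stated in the abstract; the training loop, data handling, and evaluation are omitted here.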

Cite this

Back to the Roots: Predicting the Source Domain of Metaphors using Contrastive Learning. / Sengupta, Meghdut; Alshomary, Milad; Wachsmuth, Henning.
Proceedings of the 2022 Workshop on Figurative Language Processing. 2022. p. 137-142.


Sengupta, M, Alshomary, M & Wachsmuth, H 2022, Back to the Roots: Predicting the Source Domain of Metaphors using Contrastive Learning. in Proceedings of the 2022 Workshop on Figurative Language Processing. pp. 137-142, 3rd Workshop on Figurative Language Processing, FigLang 2022, Abu Dhabi, United Arab Emirates, 8 Dec 2022.
Sengupta, M., Alshomary, M., & Wachsmuth, H. (2022). Back to the Roots: Predicting the Source Domain of Metaphors using Contrastive Learning. In Proceedings of the 2022 Workshop on Figurative Language Processing (pp. 137-142).
Sengupta M, Alshomary M, Wachsmuth H. Back to the Roots: Predicting the Source Domain of Metaphors using Contrastive Learning. In Proceedings of the 2022 Workshop on Figurative Language Processing. 2022. p. 137-142
Sengupta, Meghdut ; Alshomary, Milad ; Wachsmuth, Henning. / Back to the Roots : Predicting the Source Domain of Metaphors using Contrastive Learning. Proceedings of the 2022 Workshop on Figurative Language Processing. 2022. pp. 137-142
BibTeX
@inproceedings{30802e138a7d4644bae32d1153f14104,
title = "Back to the Roots: Predicting the Source Domain of Metaphors using Contrastive Learning",
abstract = "Metaphors frame a given target domain using concepts from another, usually more concrete, source domain. Previous research in NLP has focused on the identification of metaphors and the interpretation of their meaning. In contrast, this paper studies to what extent the source domain can be predicted computationally from a metaphorical text. Given a dataset with metaphorical texts from a finite set of source domains, we propose a contrastive learning approach that ranks source domains by their likelihood of being referred to in a metaphorical text. In experiments, it achieves reasonable performance even for rare source domains, clearly outperforming a classification baseline.",
author = "Meghdut Sengupta and Milad Alshomary and Henning Wachsmuth",
note = "Publisher Copyright: {\textcopyright} 2022 Association for Computational Linguistics.; 3rd Workshop on Figurative Language Processing, FigLang 2022 ; Conference date: 08-12-2022 Through 08-12-2022",
year = "2022",
language = "English",
pages = "137--142",
booktitle = "Proceedings of the 2022 Workshop on Figurative Language Processing",

}

RIS

TY - GEN

T1 - Back to the Roots: Predicting the Source Domain of Metaphors using Contrastive Learning

T2 - 3rd Workshop on Figurative Language Processing, FigLang 2022

AU - Sengupta, Meghdut

AU - Alshomary, Milad

AU - Wachsmuth, Henning

N1 - Publisher Copyright: © 2022 Association for Computational Linguistics.

PY - 2022

Y1 - 2022

N2 - Metaphors frame a given target domain using concepts from another, usually more concrete, source domain. Previous research in NLP has focused on the identification of metaphors and the interpretation of their meaning. In contrast, this paper studies to what extent the source domain can be predicted computationally from a metaphorical text. Given a dataset with metaphorical texts from a finite set of source domains, we propose a contrastive learning approach that ranks source domains by their likelihood of being referred to in a metaphorical text. In experiments, it achieves reasonable performance even for rare source domains, clearly outperforming a classification baseline.

AB - Metaphors frame a given target domain using concepts from another, usually more concrete, source domain. Previous research in NLP has focused on the identification of metaphors and the interpretation of their meaning. In contrast, this paper studies to what extent the source domain can be predicted computationally from a metaphorical text. Given a dataset with metaphorical texts from a finite set of source domains, we propose a contrastive learning approach that ranks source domains by their likelihood of being referred to in a metaphorical text. In experiments, it achieves reasonable performance even for rare source domains, clearly outperforming a classification baseline.

UR - http://www.scopus.com/inward/record.url?scp=85153300238&partnerID=8YFLogxK

M3 - Conference contribution

SP - 137

EP - 142

BT - Proceedings of the 2022 Workshop on Figurative Language Processing

Y2 - 8 December 2022 through 8 December 2022

ER -
