Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Jan Heinrich Reimer
  • Thi Kim Hanh Luu
  • Max Henze
  • Yamen Ajjour

External organisations

  • Martin-Luther-Universität Halle-Wittenberg

Details

Original language: English
Title of host publication: 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings
Editors: Khalid Al-Khatib, Yufang Hou, Manfred Stede
Pages: 175-183
Number of pages: 9
ISBN (electronic): 9781954085923
Publication status: Published - 1 Nov 2021
Externally published: Yes

Abstract

We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches for argument key point matching. For key point matching, the task is to decide whether a short key point matches the content of an argument with the same topic and stance towards the topic. We approach this task in two ways: First, we develop a simple rule-based baseline matcher by computing token overlap after removing stop words, stemming, and adding synonyms/antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as regression classifiers for only a single epoch. We manually examine errors of our proposed matcher models and find that long arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision score of 0.913, the best score for strict labels of all participating teams.
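
The abstract describes two matching approaches. As a rough illustration of the first, the rule-based baseline (token overlap after stop-word removal, stemming, and synonym/antonym expansion), the following Python sketch uses NLTK; the Jaccard-style scoring function and the exact expansion order are assumptions for illustration, not the authors' code:

# Hedged sketch of a token-overlap matcher in the spirit of the paper's
# rule-based baseline. Requires nltk.download("punkt"), nltk.download("stopwords"),
# and nltk.download("wordnet") to have been run once.
from nltk.corpus import stopwords, wordnet
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

STOP_WORDS = set(stopwords.words("english"))
STEMMER = PorterStemmer()

def expanded_tokens(text):
    """Lower-cased content tokens, expanded with WordNet synonyms/antonyms, then stemmed."""
    tokens = {t.lower() for t in word_tokenize(text)
              if t.isalpha() and t.lower() not in STOP_WORDS}
    expanded = set(tokens)
    for token in tokens:
        for synset in wordnet.synsets(token):
            for lemma in synset.lemmas():
                expanded.add(lemma.name().lower())
                for antonym in lemma.antonyms():
                    expanded.add(antonym.name().lower())
    return {STEMMER.stem(t) for t in expanded}

def overlap_score(argument, key_point):
    """Jaccard overlap between expanded token sets; higher means a likelier match."""
    arg, kp = expanded_tokens(argument), expanded_tokens(key_point)
    return len(arg & kp) / len(arg | kp) if arg | kp else 0.0

The second approach, fine-tuning a pretrained encoder as a regression classifier, could look roughly like this with the Hugging Face transformers library; only the model choice (RoBERTa-Base), the regression head, and the single training epoch come from the abstract, everything else here is an assumption:

# Hedged sketch: RoBERTa-Base with a one-dimensional regression head.
# With num_labels=1, transformers treats the task as regression (MSE loss).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=1)

def match_score(argument, key_point):
    """Score an (argument, key point) pair, encoded together as one sequence pair."""
    inputs = tokenizer(argument, key_point, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).logits.squeeze().item()

Training for the single epoch mentioned in the abstract would then be a standard transformers.Trainer run with num_train_epochs=1 over argument/key-point pairs labeled with match scores; ranking predicted scores per key point is what the reported mean average precision of 0.913 evaluates.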

Cite

Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders. / Reimer, Jan Heinrich; Luu, Thi Kim Hanh; Henze, Max et al.
8th Workshop on Argument Mining, ArgMining 2021 - Proceedings. Eds. / Khalid Al-Khatib; Yufang Hou; Manfred Stede. 2021. pp. 175-183.


Reimer, JH, Luu, TKH, Henze, M & Ajjour, Y 2021, Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders. in K Al-Khatib, Y Hou & M Stede (eds), 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings. pp. 175-183. https://doi.org/10.18653/v1/2021.argmining-1.18
Reimer, J. H., Luu, T. K. H., Henze, M., & Ajjour, Y. (2021). Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders. In K. Al-Khatib, Y. Hou, & M. Stede (Eds.), 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings (pp. 175-183). https://doi.org/10.18653/v1/2021.argmining-1.18
Reimer JH, Luu TKH, Henze M, Ajjour Y. Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders. In Al-Khatib K, Hou Y, Stede M, editors, 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings. 2021. p. 175-183. doi: 10.18653/v1/2021.argmining-1.18
Reimer, Jan Heinrich ; Luu, Thi Kim Hanh ; Henze, Max et al. / Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders. 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings. Eds. / Khalid Al-Khatib ; Yufang Hou ; Manfred Stede. 2021. pp. 175-183.
Download (BibTeX)
@inproceedings{46551af58ada41c09bf9e82fae13071d,
title = "Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders",
abstract = "We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches for argument key point matching. For key point matching the task is to decide if a short key point matches the content of an argument with the same topic and stance towards the topic. We approach this task in two ways: First, we develop a simple rule-based baseline matcher by computing token overlap after removing stop words, stemming, and adding synonyms/antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as a regression classifier for only a single epoch. We manually examine errors of our proposed matcher models and find that long arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision score of 0.913, the best score for strict labels of all participating teams.",
author = "Reimer, {Jan Heinrich} and Luu, {Thi Kim Hanh} and Max Henze and Yamen Ajjour",
note = "Publisher Copyright: {\textcopyright} 2021 Association for Computational Linguistics.",
year = "2021",
month = nov,
day = "1",
doi = "10.18653/v1/2021.argmining-1.18",
language = "English",
pages = "175--183",
editor = "Khalid Al-Khatib and Yufang Hou and Manfred Stede",
booktitle = "8th Workshop on Argument Mining, ArgMining 2021 - Proceedings",
}

Download (RIS)

TY - GEN
T1 - Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders
AU - Reimer, Jan Heinrich
AU - Luu, Thi Kim Hanh
AU - Henze, Max
AU - Ajjour, Yamen
N1 - Publisher Copyright: © 2021 Association for Computational Linguistics.
PY - 2021/11/1
Y1 - 2021/11/1
N2 - We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches for argument key point matching. For key point matching the task is to decide if a short key point matches the content of an argument with the same topic and stance towards the topic. We approach this task in two ways: First, we develop a simple rule-based baseline matcher by computing token overlap after removing stop words, stemming, and adding synonyms/antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as a regression classifier for only a single epoch. We manually examine errors of our proposed matcher models and find that long arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision score of 0.913, the best score for strict labels of all participating teams.
AB - We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches for argument key point matching. For key point matching the task is to decide if a short key point matches the content of an argument with the same topic and stance towards the topic. We approach this task in two ways: First, we develop a simple rule-based baseline matcher by computing token overlap after removing stop words, stemming, and adding synonyms/antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as a regression classifier for only a single epoch. We manually examine errors of our proposed matcher models and find that long arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision score of 0.913, the best score for strict labels of all participating teams.
UR - http://www.scopus.com/inward/record.url?scp=85118303881&partnerID=8YFLogxK
U2 - 10.18653/v1/2021.argmining-1.18
DO - 10.18653/v1/2021.argmining-1.18
M3 - Conference contribution
SP - 175
EP - 183
BT - 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings
A2 - Al-Khatib, Khalid
A2 - Hou, Yufang
A2 - Stede, Manfred
ER -
