Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Jan Heinrich Reimer
  • Thi Kim Hanh Luu
  • Max Henze
  • Yamen Ajjour

External Research Organisations

  • Martin Luther University Halle-Wittenberg

Details

Original language: English
Title of host publication: 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings
Editors: Khalid Al-Khatib, Yufang Hou, Manfred Stede
Pages: 175-183
Number of pages: 9
ISBN (electronic): 9781954085923
Publication status: Published - 1 Nov 2021
Externally published: Yes

Abstract

We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches for argument key point matching. In key point matching, the task is to decide whether a short key point matches the content of an argument that has the same topic and stance towards that topic. We approach this task in two ways: First, we develop a simple rule-based baseline matcher that computes token overlap after removing stop words, stemming, and adding synonyms/antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as regression classifiers for only a single epoch. We manually examine errors of our proposed matcher models and find that long arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision score of 0.913, the best score for strict labels among all participating teams.
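To make the rule-based baseline concrete, the following is a minimal sketch of a token-overlap matcher along the lines described above. It is an illustration rather than the authors' implementation: the function names and the WordNet-based synonym/antonym expansion are assumptions, and NLTK's punkt, stopwords, and wordnet resources must be downloaded beforehand.

from nltk.corpus import stopwords, wordnet
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def related_tokens(token):
    # Token plus its WordNet synonyms and antonyms (hypothetical expansion).
    related = {token}
    for synset in wordnet.synsets(token):
        for lemma in synset.lemmas():
            related.add(lemma.name().lower())
            related.update(ant.name().lower() for ant in lemma.antonyms())
    return related

def token_set(text):
    # Lowercase, keep alphabetic tokens, drop stop words, expand, then stem.
    tokens = [t for t in word_tokenize(text.lower())
              if t.isalpha() and t not in stop_words]
    expanded = set()
    for token in tokens:
        expanded.update(stemmer.stem(t) for t in related_tokens(token))
    return expanded

def match_score(argument, key_point):
    # Fraction of the key point's token set covered by the argument.
    arg, kp = token_set(argument), token_set(key_point)
    return len(arg & kp) / len(kp) if kp else 0.0

The transformer-based matcher can likewise be sketched as a pretrained encoder with a single-output regression head scoring argument/key-point pairs. The snippet below uses the Hugging Face transformers library as an assumed toolchain; everything beyond the single training epoch and the RoBERTa-Base checkpoint mentioned above is an assumption.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# num_labels=1 makes transformers treat the head as a regressor (MSE loss
# when float labels are passed), matching the regression framing above.
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=1)

# Each argument/key-point pair is encoded jointly as one sequence pair.
inputs = tokenizer("Cannabis should be legalized because ...",
                   "Legalization reduces crime",
                   truncation=True, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze()  # higher = better match

For training, TrainingArguments(num_train_epochs=1) with the standard Trainer would reflect the single epoch of fine-tuning the abstract mentions.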

Cite this

Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders. / Reimer, Jan Heinrich; Luu, Thi Kim Hanh; Henze, Max et al.
8th Workshop on Argument Mining, ArgMining 2021 - Proceedings. ed. / Khalid Al-Khatib; Yufang Hou; Manfred Stede. 2021. p. 175-183.

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Reimer, JH, Luu, TKH, Henze, M & Ajjour, Y 2021, Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders. in K Al-Khatib, Y Hou & M Stede (eds), 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings. pp. 175-183. https://doi.org/10.18653/v1/2021.argmining-1.18
Reimer, J. H., Luu, T. K. H., Henze, M., & Ajjour, Y. (2021). Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders. In K. Al-Khatib, Y. Hou, & M. Stede (Eds.), 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings (pp. 175-183) https://doi.org/10.18653/v1/2021.argmining-1.18
Reimer JH, Luu TKH, Henze M, Ajjour Y. Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders. In Al-Khatib K, Hou Y, Stede M, editors, 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings. 2021. p. 175-183. doi: 10.18653/v1/2021.argmining-1.18
Reimer, Jan Heinrich ; Luu, Thi Kim Hanh ; Henze, Max et al. / Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders. 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings. editor / Khalid Al-Khatib ; Yufang Hou ; Manfred Stede. 2021. pp. 175-183
BibTeX
@inproceedings{46551af58ada41c09bf9e82fae13071d,
title = "Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders",
abstract = "We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches for argument key point matching. For key point matching the task is to decide if a short key point matches the content of an argument with the same topic and stance towards the topic. We approach this task in two ways: First, we develop a simple rule-based baseline matcher by computing token overlap after removing stop words, stemming, and adding synonyms/antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as a regression classifier for only a single epoch. We manually examine errors of our proposed matcher models and find that long arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision score of 0.913, the best score for strict labels of all participating teams.",
author = "Reimer, {Jan Heinrich} and Luu, {Thi Kim Hanh} and Max Henze and Yamen Ajjour",
note = "Publisher Copyright: {\textcopyright} 2021 Association for Computational Linguistics.",
year = "2021",
month = nov,
day = "1",
doi = "10.18653/v1/2021.argmining-1.18",
language = "English",
pages = "175--183",
editor = "Khalid Al-Khatib and Yufang Hou and Manfred Stede",
booktitle = "8th Workshop on Argument Mining, ArgMining 2021 - Proceedings",

}

RIS

TY - GEN

T1 - Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders

AU - Reimer, Jan Heinrich

AU - Luu, Thi Kim Hanh

AU - Henze, Max

AU - Ajjour, Yamen

N1 - Publisher Copyright: © 2021 Association for Computational Linguistics.

PY - 2021/11/1

Y1 - 2021/11/1

N2 - We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches for argument key point matching. For key point matching the task is to decide if a short key point matches the content of an argument with the same topic and stance towards the topic. We approach this task in two ways: First, we develop a simple rule-based baseline matcher by computing token overlap after removing stop words, stemming, and adding synonyms/antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as a regression classifier for only a single epoch. We manually examine errors of our proposed matcher models and find that long arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision score of 0.913, the best score for strict labels of all participating teams.

AB - We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches for argument key point matching. For key point matching the task is to decide if a short key point matches the content of an argument with the same topic and stance towards the topic. We approach this task in two ways: First, we develop a simple rule-based baseline matcher by computing token overlap after removing stop words, stemming, and adding synonyms/antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as a regression classifier for only a single epoch. We manually examine errors of our proposed matcher models and find that long arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision score of 0.913, the best score for strict labels of all participating teams.

UR - http://www.scopus.com/inward/record.url?scp=85118303881&partnerID=8YFLogxK

U2 - 10.18653/v1/2021.argmining-1.18

DO - 10.18653/v1/2021.argmining-1.18

M3 - Conference contribution

SP - 175

EP - 183

BT - 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings

A2 - Al-Khatib, Khalid

A2 - Hou, Yufang

A2 - Stede, Manfred

ER -
