Details
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings |
| Editors | Khalid Al-Khatib, Yufang Hou, Manfred Stede |
| Pages | 175-183 |
| Number of pages | 9 |
| ISBN (electronic) | 9781954085923 |
| Publication status | Published - 1 Nov 2021 |
| Externally published | Yes |
Abstract
We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches for argument key point matching. For key point matching, the task is to decide if a short key point matches the content of an argument with the same topic and stance towards the topic. We approach this task in two ways: First, we develop a simple rule-based baseline matcher by computing token overlap after removing stop words, stemming, and adding synonyms/antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as regression classifiers for only a single epoch. We manually examine errors of our proposed matcher models and find that long arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision score of 0.913, the best score for strict labels of all participating teams.
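The abstract only outlines the two matching approaches. The following is a minimal, hypothetical Python sketch of the rule-based baseline it describes (token overlap after stop-word removal, stemming, and synonym/antonym expansion); the function names, the WordNet-based expansion, and the scoring formula are assumptions for illustration, not the authors' actual code.

```python
# Hypothetical sketch of the rule-based baseline matcher described in the abstract.
# Assumes NLTK; the exact preprocessing and scoring details are not specified in the abstract.
import nltk
from nltk.corpus import stopwords, wordnet
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

for resource in ("punkt", "stopwords", "wordnet"):
    nltk.download(resource, quiet=True)  # required NLTK data

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def expand(token: str) -> set[str]:
    """Return the token plus its WordNet synonyms and antonyms, stemmed."""
    variants = {token}
    for synset in wordnet.synsets(token):
        for lemma in synset.lemmas():
            variants.add(lemma.name().lower())
            variants.update(a.name().lower() for a in lemma.antonyms())
    return {stemmer.stem(v) for v in variants}

def preprocess(text: str) -> set[str]:
    """Tokenize, drop stop words, stem, and expand with synonyms/antonyms."""
    tokens = [t.lower() for t in word_tokenize(text) if t.isalpha()]
    expanded: set[str] = set()
    for t in tokens:
        if t not in stop_words:
            expanded |= expand(t)
    return expanded

def match_score(argument: str, key_point: str) -> float:
    """Token overlap: fraction of expanded key-point tokens found in the argument."""
    arg_tokens, kp_tokens = preprocess(argument), preprocess(key_point)
    if not kp_tokens:
        return 0.0
    return len(arg_tokens & kp_tokens) / len(kp_tokens)
```

For the second approach, a hedged sketch of what fine-tuning RoBERTa-Base as a single-epoch regression model could look like with the Hugging Face Transformers `Trainer`; the dataset column names ("argument", "key_point", "label"), batch size, learning rate, and sequence length are assumptions, with only the single training epoch taken from the abstract.

```python
# Sketch: fine-tune roberta-base with a single regression output (num_labels=1)
# for one epoch on argument/key point pairs. Dataset loading is assumed, not shown.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=1)  # single output -> regression (MSE loss)

def encode(batch):
    # Encode argument/key point pairs jointly in the standard sentence-pair format.
    return tokenizer(batch["argument"], batch["key_point"],
                     truncation=True, padding="max_length", max_length=256)

args = TrainingArguments(
    output_dir="kpm-roberta",
    num_train_epochs=1,              # fine-tuned for only a single epoch
    per_device_train_batch_size=16,  # assumed hyperparameter
    learning_rate=2e-5,              # assumed hyperparameter
)

# train_dataset / eval_dataset are assumed to be Hugging Face Datasets with a
# float "label" column (the match score):
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_dataset.map(encode, batched=True),
#                   eval_dataset=eval_dataset.map(encode, batched=True))
# trainer.train()
```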
Cite this
Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders. / Reimer, Jan Heinrich; Luu, Thi Kim Hanh; Henze, Max; Ajjour, Yamen. 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings. ed. / Khalid Al-Khatib; Yufang Hou; Manfred Stede. 2021. p. 175-183.
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders
AU - Reimer, Jan Heinrich
AU - Luu, Thi Kim Hanh
AU - Henze, Max
AU - Ajjour, Yamen
N1 - Publisher Copyright: © 2021 Association for Computational Linguistics.
PY - 2021/11/1
Y1 - 2021/11/1
N2 - We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches for argument key point matching. For key point matching the task is to decide if a short key point matches the content of an argument with the same topic and stance towards the topic. We approach this task in two ways: First, we develop a simple rule-based baseline matcher by computing token overlap after removing stop words, stemming, and adding synonyms/antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as a regression classifier for only a single epoch. We manually examine errors of our proposed matcher models and find that long arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision score of 0.913, the best score for strict labels of all participating teams.
AB - We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches for argument key point matching. For key point matching the task is to decide if a short key point matches the content of an argument with the same topic and stance towards the topic. We approach this task in two ways: First, we develop a simple rule-based baseline matcher by computing token overlap after removing stop words, stemming, and adding synonyms/antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as a regression classifier for only a single epoch. We manually examine errors of our proposed matcher models and find that long arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision score of 0.913, the best score for strict labels of all participating teams.
UR - http://www.scopus.com/inward/record.url?scp=85118303881&partnerID=8YFLogxK
U2 - 10.18653/v1/2021.argmining-1.18
DO - 10.18653/v1/2021.argmining-1.18
M3 - Conference contribution
SP - 175
EP - 183
BT - 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings
A2 - Al-Khatib, Khalid
A2 - Hou, Yufang
A2 - Stede, Manfred
ER -