Details
Original language | English |
---|---|
Title of host publication | 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings |
Editors | Khalid Al-Khatib, Yufang Hou, Manfred Stede |
Pages | 175-183 |
Number of pages | 9 |
ISBN (electronic) | 9781954085923 |
Publication status | Published - 1 Nov 2021 |
Externally published | Yes |
Abstract
We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches for argument key point matching. For key point matching the task is to decide if a short key point matches the content of an argument with the same topic and stance towards the topic. We approach this task in two ways: First, we develop a simple rule-based baseline matcher by computing token overlap after removing stop words, stemming, and adding synonyms/antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as a regression classifier for only a single epoch. We manually examine errors of our proposed matcher models and find that long arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision score of 0.913, the best score for strict labels of all participating teams.
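To make the rule-based baseline from the abstract more concrete, here is a minimal sketch of a token-overlap matcher under the steps the abstract names (stop-word removal, stemming, synonym/antonym expansion). It is an illustrative reconstruction, not the authors' implementation: the use of NLTK/WordNet, the normalization by key-point length, the threshold, and all function names (`preprocess`, `expand`, `overlap_score`) are assumptions added for the example.

```python
# Hypothetical sketch of the rule-based token-overlap baseline described in the
# abstract. NLTK/WordNet, the normalization, and the threshold are assumptions,
# not the authors' code. Requires the NLTK stopwords, wordnet, and punkt data.
from nltk.corpus import stopwords, wordnet
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

STOP_WORDS = set(stopwords.words("english"))
STEMMER = PorterStemmer()


def expand(token: str) -> set:
    """Return the token plus its WordNet synonyms/antonyms, all stemmed."""
    variants = {token}
    for synset in wordnet.synsets(token):
        for lemma in synset.lemmas():
            variants.add(lemma.name().lower())
            variants.update(a.name().lower() for a in lemma.antonyms())
    return {STEMMER.stem(v) for v in variants}


def preprocess(text: str) -> set:
    """Lower-case, tokenize, drop stop words, then stem and expand each token."""
    tokens = [
        t for t in word_tokenize(text.lower())
        if t.isalpha() and t not in STOP_WORDS
    ]
    expanded = set()
    for t in tokens:
        expanded |= expand(t)
    return expanded


def overlap_score(argument: str, key_point: str) -> float:
    """Token overlap between argument and key point, normalized by key-point size."""
    arg_tokens, kp_tokens = preprocess(argument), preprocess(key_point)
    if not kp_tokens:
        return 0.0
    return len(arg_tokens & kp_tokens) / len(kp_tokens)


if __name__ == "__main__":
    # Illustrative pair; a score above a chosen threshold counts as a match.
    score = overlap_score(
        "School uniforms suppress the individuality of students.",
        "School uniforms limit self-expression.",
    )
    print(score, score >= 0.4)
```

A score above the chosen threshold would count as a key point match; the fine-tuned BERT/RoBERTa regression models that produced the reported 0.913 mAP are described in the full paper (see the DOI in the record below).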
Cite this
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
8th Workshop on Argument Mining, ArgMining 2021 - Proceedings. ed. / Khalid Al-Khatib; Yufang Hou; Manfred Stede. 2021. p. 175-183.
Publication: Chapter in book/report/anthology/conference proceedings › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders
AU - Reimer, Jan Heinrich
AU - Luu, Thi Kim Hanh
AU - Henze, Max
AU - Ajjour, Yamen
N1 - Publisher Copyright: © 2021 Association for Computational Linguistics.
PY - 2021/11/1
Y1 - 2021/11/1
N2 - We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches for argument key point matching. For key point matching the task is to decide if a short key point matches the content of an argument with the same topic and stance towards the topic. We approach this task in two ways: First, we develop a simple rule-based baseline matcher by computing token overlap after removing stop words, stemming, and adding synonyms/antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as a regression classifier for only a single epoch. We manually examine errors of our proposed matcher models and find that long arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision score of 0.913, the best score for strict labels of all participating teams.
AB - We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches for argument key point matching. For key point matching the task is to decide if a short key point matches the content of an argument with the same topic and stance towards the topic. We approach this task in two ways: First, we develop a simple rule-based baseline matcher by computing token overlap after removing stop words, stemming, and adding synonyms/antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as a regression classifier for only a single epoch. We manually examine errors of our proposed matcher models and find that long arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision score of 0.913, the best score for strict labels of all participating teams.
UR - http://www.scopus.com/inward/record.url?scp=85118303881&partnerID=8YFLogxK
U2 - 10.18653/v1/2021.argmining-1.18
DO - 10.18653/v1/2021.argmining-1.18
M3 - Conference contribution
SP - 175
EP - 183
BT - 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings
A2 - Al-Khatib, Khalid
A2 - Hou, Yufang
A2 - Stede, Manfred
ER -