Modeling Appropriate Language in Argumentation

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer review

Authors

  • Timon Ziegenbein
  • Shahbaz Syed
  • Felix Lange
  • Martin Potthast
  • Henning Wachsmuth

External organisations

  • Universität Leipzig
  • Universität Paderborn
  • Zentrum für skalierbare Datenanalyse und Künstliche Intelligenz Dresden/Leipzig (ScaDS.AI)

Details

Original language: English
Title of host publication: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics
Pages: 4344-4363
Number of pages: 20
ISBN (electronic): 9781959429722
Publication status: Published - July 2023
Event: 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023 - Toronto, Canada
Duration: 9 July 2023 - 14 July 2023

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
Volume: 1
ISSN (Print): 0736-587X

Abstract

Online discussion moderators must make ad-hoc decisions about whether the contributions of discussion participants are appropriate or should be removed to maintain civility. Existing research on offensive language and the resulting tools cover only one aspect among many involved in such decisions. The question of what is considered appropriate in a controversial discussion has not yet been systematically addressed. In this paper, we operationalize appropriate language in argumentation for the first time. In particular, we model appropriateness through the absence of flaws, grounded in research on argument quality assessment, especially in aspects from rhetoric. From these, we derive a new taxonomy of 14 dimensions that determine inappropriate language in online discussions. Building on three argument quality corpora, we then create a corpus of 2191 arguments annotated for the 14 dimensions. Empirical analyses support that the taxonomy covers the concept of appropriateness comprehensively, showing several plausible correlations with argument quality dimensions. Moreover, results of baseline approaches to assessing appropriateness suggest that all dimensions can be modeled computationally on the corpus.
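As a rough illustration of the computational modeling mentioned in the abstract, the sketch below frames appropriateness assessment as multi-label classification over the 14 inappropriateness dimensions. It is a minimal sketch under stated assumptions only: the encoder choice, decision threshold, and example input are illustrative and do not reproduce the authors' baselines or the corpus labels.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumptions: encoder and threshold are illustrative choices, not the paper's setup.
MODEL_NAME = "bert-base-uncased"
NUM_DIMENSIONS = 14  # size of the taxonomy described in the abstract

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=NUM_DIMENSIONS,
    problem_type="multi_label_classification",  # an argument may exhibit several flaws at once
)

def predict_flaw_dimensions(argument: str, threshold: float = 0.5) -> list[int]:
    """Return the indices of inappropriateness dimensions predicted for one argument."""
    inputs = tokenizer(argument, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probabilities = torch.sigmoid(logits).squeeze(0)  # independent probability per dimension
    return [i for i, p in enumerate(probabilities.tolist()) if p >= threshold]

# Untrained weights give arbitrary predictions; fine-tuning on the 2,191 annotated
# arguments would be required before the output is meaningful.
print(predict_flaw_dimensions("You clearly have no idea what you are talking about."))

The multi-label framing (an independent sigmoid per dimension rather than a single softmax) reflects that an inappropriate argument can violate several dimensions of the taxonomy simultaneously.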


Cite

Modeling Appropriate Language in Argumentation. / Ziegenbein, Timon; Syed, Shahbaz; Lange, Felix et al.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. 2023. pp. 4344-4363 (Proceedings of the Annual Meeting of the Association for Computational Linguistics; Vol. 1).


Ziegenbein, T, Syed, S, Lange, F, Potthast, M & Wachsmuth, H 2023, Modeling Appropriate Language in Argumentation. in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Proceedings of the Annual Meeting of the Association for Computational Linguistics, vol. 1, pp. 4344-4363, 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023, Toronto, Canada, 9 July 2023. https://doi.org/10.18653/v1/2023.acl-long.238
Ziegenbein, T., Syed, S., Lange, F., Potthast, M., & Wachsmuth, H. (2023). Modeling Appropriate Language in Argumentation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (pp. 4344-4363). (Proceedings of the Annual Meeting of the Association for Computational Linguistics; Vol. 1). https://doi.org/10.18653/v1/2023.acl-long.238
Ziegenbein T, Syed S, Lange F, Potthast M, Wachsmuth H. Modeling Appropriate Language in Argumentation. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. 2023. p. 4344-4363. (Proceedings of the Annual Meeting of the Association for Computational Linguistics). doi: 10.18653/v1/2023.acl-long.238
Ziegenbein, Timon ; Syed, Shahbaz ; Lange, Felix et al. / Modeling Appropriate Language in Argumentation. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. 2023. pp. 4344-4363 (Proceedings of the Annual Meeting of the Association for Computational Linguistics).
BibTeX
@inproceedings{f8b5afda6d8f4819a97b7129a8d9cb7f,
title = "Modeling Appropriate Language in Argumentation",
abstract = "Online discussion moderators must make ad-hoc decisions about whether the contributions of discussion participants are appropriate or should be removed to maintain civility. Existing research on offensive language and the resulting tools cover only one aspect among many involved in such decisions. The question of what is considered appropriate in a controversial discussion has not yet been systematically addressed. In this paper, we operationalize appropriate language in argumentation for the first time. In particular, we model appropriateness through the absence of flaws, grounded in research on argument quality assessment, especially in aspects from rhetoric. From these, we derive a new taxonomy of 14 dimensions that determine inappropriate language in online discussions. Building on three argument quality corpora, we then create a corpus of 2191 arguments annotated for the 14 dimensions. Empirical analyses support that the taxonomy covers the concept of appropriateness comprehensively, showing several plausible correlations with argument quality dimensions. Moreover, results of baseline approaches to assessing appropriateness suggest that all dimensions can be modeled computationally on the corpus.",
author = "Timon Ziegenbein and Shahbaz Syed and Felix Lange and Martin Potthast and Henning Wachsmuth",
note = "Funding Information: This project has been partially funded by the German Research Foundation (DFG) within the project OASiS, project number 455913891, as part of the Priority Program “Robust Argumentation Machines (RATIO)” (SPP-1999). We would like to thank the participants of our study and the anonymous reviewers for the feedback and their time.; 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023, ACL 2023 ; Conference date: 09-07-2023 Through 14-07-2023",
year = "2023",
month = jul,
doi = "10.18653/v1/2023.acl-long.238",
language = "English",
series = "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
pages = "4344--4363",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics",

}

RIS

TY  - GEN
T1  - Modeling Appropriate Language in Argumentation
AU  - Ziegenbein, Timon
AU  - Syed, Shahbaz
AU  - Lange, Felix
AU  - Potthast, Martin
AU  - Wachsmuth, Henning
N1  - Funding Information: This project has been partially funded by the German Research Foundation (DFG) within the project OASiS, project number 455913891, as part of the Priority Program “Robust Argumentation Machines (RATIO)” (SPP-1999). We would like to thank the participants of our study and the anonymous reviewers for the feedback and their time.
PY  - 2023/7
Y1  - 2023/7
N2  - Online discussion moderators must make ad-hoc decisions about whether the contributions of discussion participants are appropriate or should be removed to maintain civility. Existing research on offensive language and the resulting tools cover only one aspect among many involved in such decisions. The question of what is considered appropriate in a controversial discussion has not yet been systematically addressed. In this paper, we operationalize appropriate language in argumentation for the first time. In particular, we model appropriateness through the absence of flaws, grounded in research on argument quality assessment, especially in aspects from rhetoric. From these, we derive a new taxonomy of 14 dimensions that determine inappropriate language in online discussions. Building on three argument quality corpora, we then create a corpus of 2191 arguments annotated for the 14 dimensions. Empirical analyses support that the taxonomy covers the concept of appropriateness comprehensively, showing several plausible correlations with argument quality dimensions. Moreover, results of baseline approaches to assessing appropriateness suggest that all dimensions can be modeled computationally on the corpus.
AB  - Online discussion moderators must make ad-hoc decisions about whether the contributions of discussion participants are appropriate or should be removed to maintain civility. Existing research on offensive language and the resulting tools cover only one aspect among many involved in such decisions. The question of what is considered appropriate in a controversial discussion has not yet been systematically addressed. In this paper, we operationalize appropriate language in argumentation for the first time. In particular, we model appropriateness through the absence of flaws, grounded in research on argument quality assessment, especially in aspects from rhetoric. From these, we derive a new taxonomy of 14 dimensions that determine inappropriate language in online discussions. Building on three argument quality corpora, we then create a corpus of 2191 arguments annotated for the 14 dimensions. Empirical analyses support that the taxonomy covers the concept of appropriateness comprehensively, showing several plausible correlations with argument quality dimensions. Moreover, results of baseline approaches to assessing appropriateness suggest that all dimensions can be modeled computationally on the corpus.
UR  - http://www.scopus.com/inward/record.url?scp=85174374271&partnerID=8YFLogxK
U2  - 10.18653/v1/2023.acl-long.238
DO  - 10.18653/v1/2023.acl-long.238
M3  - Conference contribution
T3  - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP  - 4344
EP  - 4363
BT  - Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics
T2  - 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023
Y2  - 9 July 2023 through 14 July 2023
ER  -
