Toxicity, Morality, and Speech Act Guided Stance Detection

Publication: Contribution to book/report/collected volume/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

Apoorva Upadhyaya, Marco Fisichella, Wolfgang Nejdl

Organisational units


Details

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics
Subtitle: EMNLP 2023
Pages: 4464-4478
Number of pages: 15
ISBN (electronic): 9798891760615
Publication status: Published - 2023
Event: 2023 Findings of the Association for Computational Linguistics: EMNLP 2023 - Singapore, Singapore
Duration: 6 Dec 2023 - 10 Dec 2023

Publication series

Name: Findings of the Association for Computational Linguistics: EMNLP 2023

Abstract

In this work, we focus on determining the public attitude toward various social issues discussed on social media platforms. Platforms such as Twitter, however, are often used to spread misinformation and fake news through polarizing views. Existing literature suggests that the high levels of toxicity prevalent in Twitter conversations often spread negativity and delay the resolution of issues. Further, the moral values embedded in a tweet and the speech acts specifying its intention correlate with the public opinions expressed on various topics. Previous work, which mainly focuses on stance detection, either ignores the speech-act, toxicity, and moral features of tweets that can collectively help capture public opinion, or lacks an efficient architecture that can detect attitudes across targets. We therefore address the main task of stance detection by exploiting toxicity, morality, and speech-act classification as auxiliary tasks. We propose a multi-task model, TWISTED, that first extracts the valence, arousal, and dominance aspects hidden in a tweet and injects this emotional sense into the embedded text, followed by an efficient attention framework that detects the tweet's stance using the shared toxicity, morality, and speech-act features. Extensive experiments on four benchmark stance detection datasets (SemEval-2016, P-Stance, COVID19-Stance, and ClimateChange) spanning different domains demonstrate the effectiveness and generalizability of our approach.
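
The abstract describes the architecture in prose only; no code accompanies this record. Purely to illustrate the kind of multi-task setup it outlines (a shared encoder, valence/arousal/dominance scores injected into the token representations, an attention layer, and separate heads for the main stance task and the auxiliary toxicity, morality, and speech-act tasks), the following is a minimal PyTorch sketch. Every module choice, dimension, and label count below is an assumption made for illustration, not the authors' TWISTED implementation.

# Minimal sketch (not the authors' code) of a multi-task stance model in the
# spirit of the abstract: shared encoder, VAD (valence/arousal/dominance)
# injection, attention over the enriched sequence, and one head per task.
# All names, sizes, and label counts are illustrative assumptions.
import torch
import torch.nn as nn


class MultiTaskStanceModel(nn.Module):
    def __init__(self, hidden_dim=768, num_stance=3, num_toxicity=2,
                 num_morality=5, num_speech_act=4):
        super().__init__()
        # Stand-in encoder; in practice a pretrained transformer (e.g. BERT)
        # would supply token embeddings of size hidden_dim.
        self.encoder = nn.LSTM(hidden_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        # Project 3-dimensional VAD scores into the embedding space and add
        # them to every token representation ("injecting the emotional sense").
        self.vad_proj = nn.Linear(3, hidden_dim)
        # Shared attention over the VAD-enriched token sequence.
        self.attention = nn.MultiheadAttention(hidden_dim, num_heads=8,
                                               batch_first=True)
        # One classification head per task on the pooled representation.
        self.stance_head = nn.Linear(hidden_dim, num_stance)
        self.toxicity_head = nn.Linear(hidden_dim, num_toxicity)
        self.morality_head = nn.Linear(hidden_dim, num_morality)
        self.speech_act_head = nn.Linear(hidden_dim, num_speech_act)

    def forward(self, token_embeddings, vad_scores):
        # token_embeddings: (batch, seq_len, hidden_dim)
        # vad_scores:       (batch, 3), one VAD triple per tweet
        encoded, _ = self.encoder(token_embeddings)
        enriched = encoded + self.vad_proj(vad_scores).unsqueeze(1)
        attended, _ = self.attention(enriched, enriched, enriched)
        pooled = attended.mean(dim=1)  # simple mean pooling over tokens
        return {
            "stance": self.stance_head(pooled),
            "toxicity": self.toxicity_head(pooled),
            "morality": self.morality_head(pooled),
            "speech_act": self.speech_act_head(pooled),
        }


if __name__ == "__main__":
    # Forward pass with random stand-ins for token embeddings and VAD scores.
    model = MultiTaskStanceModel()
    outputs = model(torch.randn(2, 32, 768), torch.rand(2, 3))
    print({task: tuple(logits.shape) for task, logits in outputs.items()})

In joint training, the auxiliary losses would typically be added to the main stance loss with weighting factors, so that the shared encoder and attention layers are shaped by all four signals.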

ASJC Scopus subject areas

Cite

Toxicity, Morality, and Speech Act Guided Stance Detection. / Upadhyaya, Apoorva; Fisichella, Marco; Nejdl, Wolfgang.
Findings of the Association for Computational Linguistics: EMNLP 2023. 2023. pp. 4464-4478 (Findings of the Association for Computational Linguistics: EMNLP 2023).


Upadhyaya, A, Fisichella, M & Nejdl, W 2023, Toxicity, Morality, and Speech Act Guided Stance Detection. in Findings of the Association for Computational Linguistics: EMNLP 2023. Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 4464-4478, 2023 Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, Singapore, 6 Dec. 2023. https://doi.org/10.18653/v1/2023.findings-emnlp.295
Upadhyaya, A., Fisichella, M., & Nejdl, W. (2023). Toxicity, Morality, and Speech Act Guided Stance Detection. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 4464-4478). (Findings of the Association for Computational Linguistics: EMNLP 2023). https://doi.org/10.18653/v1/2023.findings-emnlp.295
Upadhyaya A, Fisichella M, Nejdl W. Toxicity, Morality, and Speech Act Guided Stance Detection. In Findings of the Association for Computational Linguistics: EMNLP 2023. 2023. pp. 4464-4478. (Findings of the Association for Computational Linguistics: EMNLP 2023). doi: 10.18653/v1/2023.findings-emnlp.295
Upadhyaya, Apoorva ; Fisichella, Marco ; Nejdl, Wolfgang. / Toxicity, Morality, and Speech Act Guided Stance Detection. Findings of the Association for Computational Linguistics: EMNLP 2023. 2023. pp. 4464-4478 (Findings of the Association for Computational Linguistics: EMNLP 2023).
BibTeX
@inproceedings{f8fbc4597f9645159b810cba1f569ace,
title = "Toxicity, Morality, and Speech Act Guided Stance Detection",
abstract = "In this work, we focus on the task of determining the public attitude toward various social issues discussed on social media platforms. Platforms such as Twitter, however, are often used to spread misinformation, fake news through polarizing views. Existing literature suggests that higher levels of toxicity prevalent in Twitter conversations often spread negativity and delay addressing issues. Further, the embedded moral values and speech acts specifying the intention of the tweet correlate with public opinions expressed on various topics. However, previous works, which mainly focus on stance detection, either ignore the speech act, toxic, and moral features of these tweets that can collectively help capture public opinion or lack an efficient architecture that can detect the attitudes across targets. Therefore, in our work, we focus on the main task of stance detection by exploiting the toxicity, morality, and speech act as auxiliary tasks. We propose a multitasking model TWISTED that initially extracts the valence, arousal, and dominance aspects hidden in the tweets and injects the emotional sense into the embedded text followed by an efficient attention framework to correctly detect the tweet's stance by using the shared features of toxicity, morality, and speech acts present in the tweet. Extensive experiments conducted on 4 benchmark stance detection datasets (SemEval-2016, P-Stance, COVID19-Stance, and ClimateChange) comprising different domains demonstrate the effectiveness and generalizability of our approach.",
author = "Apoorva Upadhyaya and Marco Fisichella and Wolfgang Nejdl",
note = "Funding Information: This work was partly funded by the SoMeCliCS project under the Volkswagen Stiftung and Nieders{\"a}chsisches Ministerium f{\"u}r Wissenschaft und Kultur. ; 2023 Findings of the Association for Computational Linguistics: EMNLP 2023 ; Conference date: 06-12-2023 Through 10-12-2023",
year = "2023",
doi = "10.18653/v1/2023.findings-emnlp.295",
language = "English",
series = "Findings of the Association for Computational Linguistics: EMNLP 2023",
pages = "4464--4478",
booktitle = "Findings of the Association for Computational Linguistics",

}

RIS

TY - GEN

T1 - Toxicity, Morality, and Speech Act Guided Stance Detection

AU - Upadhyaya, Apoorva

AU - Fisichella, Marco

AU - Nejdl, Wolfgang

N1 - Funding Information: This work was partly funded by the SoMeCliCS project under the Volkswagen Stiftung and Niedersächsisches Ministerium für Wissenschaft und Kultur.

PY - 2023

Y1 - 2023

N2 - In this work, we focus on the task of determining the public attitude toward various social issues discussed on social media platforms. Platforms such as Twitter, however, are often used to spread misinformation, fake news through polarizing views. Existing literature suggests that higher levels of toxicity prevalent in Twitter conversations often spread negativity and delay addressing issues. Further, the embedded moral values and speech acts specifying the intention of the tweet correlate with public opinions expressed on various topics. However, previous works, which mainly focus on stance detection, either ignore the speech act, toxic, and moral features of these tweets that can collectively help capture public opinion or lack an efficient architecture that can detect the attitudes across targets. Therefore, in our work, we focus on the main task of stance detection by exploiting the toxicity, morality, and speech act as auxiliary tasks. We propose a multitasking model TWISTED that initially extracts the valence, arousal, and dominance aspects hidden in the tweets and injects the emotional sense into the embedded text followed by an efficient attention framework to correctly detect the tweet's stance by using the shared features of toxicity, morality, and speech acts present in the tweet. Extensive experiments conducted on 4 benchmark stance detection datasets (SemEval-2016, P-Stance, COVID19-Stance, and ClimateChange) comprising different domains demonstrate the effectiveness and generalizability of our approach.

AB - In this work, we focus on the task of determining the public attitude toward various social issues discussed on social media platforms. Platforms such as Twitter, however, are often used to spread misinformation, fake news through polarizing views. Existing literature suggests that higher levels of toxicity prevalent in Twitter conversations often spread negativity and delay addressing issues. Further, the embedded moral values and speech acts specifying the intention of the tweet correlate with public opinions expressed on various topics. However, previous works, which mainly focus on stance detection, either ignore the speech act, toxic, and moral features of these tweets that can collectively help capture public opinion or lack an efficient architecture that can detect the attitudes across targets. Therefore, in our work, we focus on the main task of stance detection by exploiting the toxicity, morality, and speech act as auxiliary tasks. We propose a multitasking model TWISTED that initially extracts the valence, arousal, and dominance aspects hidden in the tweets and injects the emotional sense into the embedded text followed by an efficient attention framework to correctly detect the tweet's stance by using the shared features of toxicity, morality, and speech acts present in the tweet. Extensive experiments conducted on 4 benchmark stance detection datasets (SemEval-2016, P-Stance, COVID19-Stance, and ClimateChange) comprising different domains demonstrate the effectiveness and generalizability of our approach.

UR - http://www.scopus.com/inward/record.url?scp=85183298280&partnerID=8YFLogxK

U2 - 10.18653/v1/2023.findings-emnlp.295

DO - 10.18653/v1/2023.findings-emnlp.295

M3 - Conference contribution

AN - SCOPUS:85183298280

T3 - Findings of the Association for Computational Linguistics: EMNLP 2023

SP - 4464

EP - 4478

BT - Findings of the Association for Computational Linguistics

T2 - 2023 Findings of the Association for Computational Linguistics: EMNLP 2023

Y2 - 6 December 2023 through 10 December 2023

ER -
