Toxicity, Morality, and Speech Act Guided Stance Detection

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

Apoorva Upadhyaya, Marco Fisichella, Wolfgang Nejdl

Details

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics
Subtitle of host publication: EMNLP 2023
Pages: 4464-4478
Number of pages: 15
ISBN (electronic): 9798891760615
Publication status: Published - 2023
Event: 2023 Findings of the Association for Computational Linguistics: EMNLP 2023 - Singapore, Singapore
Duration: 6 Dec 2023 - 10 Dec 2023

Publication series

Name: Findings of the Association for Computational Linguistics: EMNLP 2023

Abstract

In this work, we focus on determining the public attitude toward various social issues discussed on social media platforms. Platforms such as Twitter, however, are often used to spread misinformation and fake news through polarizing views. Existing literature suggests that higher levels of toxicity in Twitter conversations spread negativity and delay the resolution of issues. Further, the moral values embedded in a tweet and the speech acts specifying its intention correlate with the public opinions expressed on various topics. However, previous works, which mainly focus on stance detection, either ignore the speech-act, toxicity, and morality features of these tweets, which can collectively help capture public opinion, or lack an efficient architecture that can detect attitudes across targets. Therefore, we focus on the main task of stance detection while exploiting toxicity, morality, and speech-act prediction as auxiliary tasks. We propose TWISTED, a multitasking model that first extracts the valence, arousal, and dominance aspects hidden in tweets and injects this emotional sense into the embedded text, followed by an efficient attention framework that detects a tweet's stance using the shared features of toxicity, morality, and speech acts. Extensive experiments on four benchmark stance detection datasets (SemEval-2016, P-Stance, COVID19-Stance, and ClimateChange) spanning different domains demonstrate the effectiveness and generalizability of our approach.
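The abstract describes a multitask setup: a shared text encoder enriched with valence-arousal-dominance (VAD) features, with a stance head trained jointly alongside auxiliary heads for toxicity, morality, and speech acts. The PyTorch sketch below illustrates only that general pattern; the layer sizes, label counts, and loss weights are illustrative assumptions, not the authors' released TWISTED implementation.

# Hypothetical sketch of a multitask stance model along the lines described above.
# All sizes, label counts, and loss weights are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskStanceModel(nn.Module):
    def __init__(self, vocab_size=30522, embed_dim=256, vad_dim=3,
                 n_stance=3, n_toxicity=2, n_morality=5, n_speech_act=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Fuse token embeddings with per-token VAD (valence/arousal/dominance) scores.
        self.affect_proj = nn.Linear(embed_dim + vad_dim, embed_dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True),
            num_layers=2)
        # One classification head per task on top of the shared representation.
        self.stance_head = nn.Linear(embed_dim, n_stance)
        self.toxicity_head = nn.Linear(embed_dim, n_toxicity)
        self.morality_head = nn.Linear(embed_dim, n_morality)
        self.speech_act_head = nn.Linear(embed_dim, n_speech_act)

    def forward(self, token_ids, vad):
        # token_ids: (batch, seq_len); vad: (batch, seq_len, 3)
        x = self.embed(token_ids)
        x = self.affect_proj(torch.cat([x, vad], dim=-1))
        h = self.encoder(x).mean(dim=1)  # mean-pool into a shared tweet vector
        return {
            "stance": self.stance_head(h),
            "toxicity": self.toxicity_head(h),
            "morality": self.morality_head(h),
            "speech_act": self.speech_act_head(h),
        }

# Toy forward/backward pass: stance is the main objective; the auxiliary
# losses are added with a smaller (assumed) weight.
model = MultiTaskStanceModel()
tokens = torch.randint(0, 30522, (8, 32))
vad = torch.rand(8, 32, 3)
out = model(tokens, vad)
loss = (F.cross_entropy(out["stance"], torch.randint(0, 3, (8,)))
        + 0.3 * F.cross_entropy(out["toxicity"], torch.randint(0, 2, (8,)))
        + 0.3 * F.cross_entropy(out["morality"], torch.randint(0, 5, (8,)))
        + 0.3 * F.cross_entropy(out["speech_act"], torch.randint(0, 4, (8,))))
loss.backward()

The shared encoder is where the auxiliary signals can shape the stance representation during joint training; the attention framework the paper refers to would take the place of the simple mean pooling used here.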

Cite this

Toxicity, Morality, and Speech Act Guided Stance Detection. / Upadhyaya, Apoorva; Fisichella, Marco; Nejdl, Wolfgang.
Findings of the Association for Computational Linguistics: EMNLP 2023. 2023. p. 4464-4478 (Findings of the Association for Computational Linguistics: EMNLP 2023).

Upadhyaya, A, Fisichella, M & Nejdl, W 2023, Toxicity, Morality, and Speech Act Guided Stance Detection. in Findings of the Association for Computational Linguistics: EMNLP 2023. Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 4464-4478, 2023 Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, Singapore, 6 Dec 2023. https://doi.org/10.18653/v1/2023.findings-emnlp.295
Upadhyaya, A., Fisichella, M., & Nejdl, W. (2023). Toxicity, Morality, and Speech Act Guided Stance Detection. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 4464-4478). (Findings of the Association for Computational Linguistics: EMNLP 2023). https://doi.org/10.18653/v1/2023.findings-emnlp.295
Upadhyaya A, Fisichella M, Nejdl W. Toxicity, Morality, and Speech Act Guided Stance Detection. In Findings of the Association for Computational Linguistics: EMNLP 2023. 2023. p. 4464-4478. (Findings of the Association for Computational Linguistics: EMNLP 2023). doi: 10.18653/v1/2023.findings-emnlp.295
Upadhyaya, Apoorva; Fisichella, Marco; Nejdl, Wolfgang. / Toxicity, Morality, and Speech Act Guided Stance Detection. Findings of the Association for Computational Linguistics: EMNLP 2023. 2023. pp. 4464-4478 (Findings of the Association for Computational Linguistics: EMNLP 2023).
Download (BibTeX)
@inproceedings{f8fbc4597f9645159b810cba1f569ace,
title = "Toxicity, Morality, and Speech Act Guided Stance Detection",
abstract = "In this work, we focus on the task of determining the public attitude toward various social issues discussed on social media platforms. Platforms such as Twitter, however, are often used to spread misinformation, fake news through polarizing views. Existing literature suggests that higher levels of toxicity prevalent in Twitter conversations often spread negativity and delay addressing issues. Further, the embedded moral values and speech acts specifying the intention of the tweet correlate with public opinions expressed on various topics. However, previous works, which mainly focus on stance detection, either ignore the speech act, toxic, and moral features of these tweets that can collectively help capture public opinion or lack an efficient architecture that can detect the attitudes across targets. Therefore, in our work, we focus on the main task of stance detection by exploiting the toxicity, morality, and speech act as auxiliary tasks. We propose a multitasking model TWISTED that initially extracts the valence, arousal, and dominance aspects hidden in the tweets and injects the emotional sense into the embedded text followed by an efficient attention framework to correctly detect the tweet's stance by using the shared features of toxicity, morality, and speech acts present in the tweet. Extensive experiments conducted on 4 benchmark stance detection datasets (SemEval-2016, P-Stance, COVID19-Stance, and ClimateChange) comprising different domains demonstrate the effectiveness and generalizability of our approach.",
author = "Apoorva Upadhyaya and Marco Fisichella and Wolfgang Nejdl",
note = "Funding Information: This work was partly funded by the SoMeCliCS project under the Volkswagen Stiftung and Nieders{\"a}chsisches Ministerium f{\"u}r Wissenschaft und Kultur. ; 2023 Findings of the Association for Computational Linguistics: EMNLP 2023 ; Conference date: 06-12-2023 Through 10-12-2023",
year = "2023",
doi = "10.18653/v1/2023.findings-emnlp.295",
language = "English",
series = "Findings of the Association for Computational Linguistics: EMNLP 2023",
pages = "4464--4478",
booktitle = "Findings of the Association for Computational Linguistics",

}

Download (RIS)

TY - GEN

T1 - Toxicity, Morality, and Speech Act Guided Stance Detection

AU - Upadhyaya, Apoorva

AU - Fisichella, Marco

AU - Nejdl, Wolfgang

N1 - Funding Information: This work was partly funded by the SoMeCliCS project under the Volkswagen Stiftung and Niedersächsisches Ministerium für Wissenschaft und Kultur.

PY - 2023

Y1 - 2023

N2 - In this work, we focus on the task of determining the public attitude toward various social issues discussed on social media platforms. Platforms such as Twitter, however, are often used to spread misinformation, fake news through polarizing views. Existing literature suggests that higher levels of toxicity prevalent in Twitter conversations often spread negativity and delay addressing issues. Further, the embedded moral values and speech acts specifying the intention of the tweet correlate with public opinions expressed on various topics. However, previous works, which mainly focus on stance detection, either ignore the speech act, toxic, and moral features of these tweets that can collectively help capture public opinion or lack an efficient architecture that can detect the attitudes across targets. Therefore, in our work, we focus on the main task of stance detection by exploiting the toxicity, morality, and speech act as auxiliary tasks. We propose a multitasking model TWISTED that initially extracts the valence, arousal, and dominance aspects hidden in the tweets and injects the emotional sense into the embedded text followed by an efficient attention framework to correctly detect the tweet's stance by using the shared features of toxicity, morality, and speech acts present in the tweet. Extensive experiments conducted on 4 benchmark stance detection datasets (SemEval-2016, P-Stance, COVID19-Stance, and ClimateChange) comprising different domains demonstrate the effectiveness and generalizability of our approach.

AB - In this work, we focus on the task of determining the public attitude toward various social issues discussed on social media platforms. Platforms such as Twitter, however, are often used to spread misinformation, fake news through polarizing views. Existing literature suggests that higher levels of toxicity prevalent in Twitter conversations often spread negativity and delay addressing issues. Further, the embedded moral values and speech acts specifying the intention of the tweet correlate with public opinions expressed on various topics. However, previous works, which mainly focus on stance detection, either ignore the speech act, toxic, and moral features of these tweets that can collectively help capture public opinion or lack an efficient architecture that can detect the attitudes across targets. Therefore, in our work, we focus on the main task of stance detection by exploiting the toxicity, morality, and speech act as auxiliary tasks. We propose a multitasking model TWISTED that initially extracts the valence, arousal, and dominance aspects hidden in the tweets and injects the emotional sense into the embedded text followed by an efficient attention framework to correctly detect the tweet's stance by using the shared features of toxicity, morality, and speech acts present in the tweet. Extensive experiments conducted on 4 benchmark stance detection datasets (SemEval-2016, P-Stance, COVID19-Stance, and ClimateChange) comprising different domains demonstrate the effectiveness and generalizability of our approach.

UR - http://www.scopus.com/inward/record.url?scp=85183298280&partnerID=8YFLogxK

U2 - 10.18653/v1/2023.findings-emnlp.295

DO - 10.18653/v1/2023.findings-emnlp.295

M3 - Conference contribution

AN - SCOPUS:85183298280

T3 - Findings of the Association for Computational Linguistics: EMNLP 2023

SP - 4464

EP - 4478

BT - Findings of the Association for Computational Linguistics

T2 - 2023 Findings of the Association for Computational Linguistics: EMNLP 2023

Y2 - 6 December 2023 through 10 December 2023

ER -
