Disentangling Dialect from Social Bias via Multitask Learning to Improve Fairness

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

External organisations

  • Universität Paderborn

Details

Original language: English
Title of anthology: Findings of the Association for Computational Linguistics ACL 2024
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Pages: 9294-9313
Number of pages: 20
ISBN (electronic): 9798891760998
Publication status: Published - Aug. 2024
Event: Findings of the Association for Computational Linguistics ACL 2024 - Bangkok, Thailand
Duration: 11 Aug. 2024 - 16 Aug. 2024
https://2024.aclweb.org/

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
ISSN (print): 0736-587X

Abstract

Dialects introduce syntactic and lexical variations in language that occur in regional or social groups. Most NLP methods are not sensitive to such variations. This may lead to unfair behavior of the methods, conveying negative bias towards dialect speakers. While previous work has studied dialect-related fairness for aspects like hate speech, other aspects of biased language, such as lewdness, remain fully unexplored. To fill this gap, we investigate performance disparities between dialects in the detection of five aspects of biased language and how to mitigate them. To alleviate bias, we present a multitask learning approach that models dialect language as an auxiliary task to incorporate syntactic and lexical variations. In our experiments with African-American English dialect, we provide empirical evidence that complementing common learning approaches with dialect modeling improves their fairness. Furthermore, the results suggest that multitask learning achieves state-of-the-art performance and helps to detect properties of biased language more reliably.
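The core idea in the abstract — a shared encoder whose representation feeds both the main bias-detection task and an auxiliary dialect-identification task, trained with a combined loss — can be sketched roughly as follows. This is a toy NumPy illustration only: the layer sizes, label counts, loss-weighting factor `alpha`, and all parameter values are assumptions for the sketch, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch of 3 "texts", each already mapped to a 4-dim input vector.
X = rng.normal(size=(3, 4))

# Shared encoder: a single linear layer with tanh, reused by both tasks.
W_shared = rng.normal(size=(4, 8))

def encode(x):
    return np.tanh(x @ W_shared)

# Task-specific heads: main task (5 bias-language aspects, as in the paper)
# and auxiliary task (binary dialect identification) — sizes are assumed.
W_bias = rng.normal(size=(8, 5))
W_dialect = rng.normal(size=(8, 2))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

# Forward pass: one shared representation, two predictions.
H = encode(X)
p_bias = softmax(H @ W_bias)
p_dialect = softmax(H @ W_dialect)

# Multitask objective: main loss plus weighted auxiliary dialect loss,
# so gradients from dialect modeling also shape the shared encoder.
y_bias = np.array([0, 3, 1])
y_dialect = np.array([1, 0, 1])
alpha = 0.5  # assumed auxiliary weight; the paper's scheme may differ
loss = cross_entropy(p_bias, y_bias) + alpha * cross_entropy(p_dialect, y_dialect)
```

The design point the sketch highlights is that only the heads are task-specific; because both losses backpropagate through `W_shared`, the dialect signal can push the shared representation to encode the syntactic and lexical variation the main task would otherwise ignore.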

ASJC Scopus subject areas

Cite

Disentangling Dialect from Social Bias via Multitask Learning to Improve Fairness. / Spliethöver, Maximilian; Menon, Sai Nikhil; Wachsmuth, Henning.
Findings of the Association for Computational Linguistics ACL 2024. Ed. / Lun-Wei Ku; Andre Martins; Vivek Srikumar. 2024. pp. 9294-9313 (Proceedings of the Annual Meeting of the Association for Computational Linguistics).


Spliethöver, M, Menon, SN & Wachsmuth, H 2024, Disentangling Dialect from Social Bias via Multitask Learning to Improve Fairness. in L-W Ku, A Martins & V Srikumar (eds), Findings of the Association for Computational Linguistics ACL 2024. Proceedings of the Annual Meeting of the Association for Computational Linguistics, pp. 9294-9313, Findings of the Association for Computational Linguistics ACL 2024, Bangkok, Thailand, 11 Aug. 2024. https://doi.org/10.18653/v1/2024.findings-acl.553
Spliethöver, M., Menon, S. N., & Wachsmuth, H. (2024). Disentangling Dialect from Social Bias via Multitask Learning to Improve Fairness. In L.-W. Ku, A. Martins, & V. Srikumar (Eds.), Findings of the Association for Computational Linguistics ACL 2024 (pp. 9294-9313). (Proceedings of the Annual Meeting of the Association for Computational Linguistics). https://doi.org/10.18653/v1/2024.findings-acl.553
Spliethöver M, Menon SN, Wachsmuth H. Disentangling Dialect from Social Bias via Multitask Learning to Improve Fairness. In Ku LW, Martins A, Srikumar V, editors, Findings of the Association for Computational Linguistics ACL 2024. 2024. p. 9294-9313. (Proceedings of the Annual Meeting of the Association for Computational Linguistics). doi: 10.18653/v1/2024.findings-acl.553
Spliethöver, Maximilian ; Menon, Sai Nikhil ; Wachsmuth, Henning. / Disentangling Dialect from Social Bias via Multitask Learning to Improve Fairness. Findings of the Association for Computational Linguistics ACL 2024. Ed. / Lun-Wei Ku ; Andre Martins ; Vivek Srikumar. 2024. pp. 9294-9313 (Proceedings of the Annual Meeting of the Association for Computational Linguistics).
BibTeX
@inproceedings{bb8f3577a9b44dbeb32dfae4170e3da0,
title = "Disentangling Dialect from Social Bias via Multitask Learning to Improve Fairness",
abstract = "Dialects introduce syntactic and lexical variations in language that occur in regional or social groups. Most NLP methods are not sensitive to such variations. This may lead to unfair behavior of the methods, conveying negative bias towards dialect speakers. While previous work has studied dialect-related fairness for aspects like hate speech, other aspects of biased language, such as lewdness, remain fully unexplored. To fill this gap, we investigate performance disparities between dialects in the detection of five aspects of biased language and how to mitigate them. To alleviate bias, we present a multitask learning approach that models dialect language as an auxiliary task to incorporate syntactic and lexical variations. In our experiments with African-American English dialect, we provide empirical evidence that complementing common learning approaches with dialect modeling improves their fairness. Furthermore, the results suggest that multitask learning achieves state-of-the-art performance and helps to detect properties of biased language more reliably.",
author = "Maximilian Splieth{\"o}ver and Menon, {Sai Nikhil} and Henning Wachsmuth",
note = "Publisher Copyright: {\textcopyright} 2024 Association for Computational Linguistics.; Findings of the Association for Computational Linguistics ACL 2024 ; Conference date: 11-08-2024 Through 16-08-2024",
year = "2024",
month = aug,
doi = "10.18653/v1/2024.findings-acl.553",
language = "English",
series = "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
pages = "9294--9313",
editor = "Lun-Wei Ku and Andre Martins and Vivek Srikumar",
booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
url = "https://2024.aclweb.org/",

}

RIS

TY - GEN

T1 - Disentangling Dialect from Social Bias via Multitask Learning to Improve Fairness

AU - Spliethöver, Maximilian

AU - Menon, Sai Nikhil

AU - Wachsmuth, Henning

N1 - Publisher Copyright: © 2024 Association for Computational Linguistics.

PY - 2024/8

Y1 - 2024/8

N2 - Dialects introduce syntactic and lexical variations in language that occur in regional or social groups. Most NLP methods are not sensitive to such variations. This may lead to unfair behavior of the methods, conveying negative bias towards dialect speakers. While previous work has studied dialect-related fairness for aspects like hate speech, other aspects of biased language, such as lewdness, remain fully unexplored. To fill this gap, we investigate performance disparities between dialects in the detection of five aspects of biased language and how to mitigate them. To alleviate bias, we present a multitask learning approach that models dialect language as an auxiliary task to incorporate syntactic and lexical variations. In our experiments with African-American English dialect, we provide empirical evidence that complementing common learning approaches with dialect modeling improves their fairness. Furthermore, the results suggest that multitask learning achieves state-of-the-art performance and helps to detect properties of biased language more reliably.

AB - Dialects introduce syntactic and lexical variations in language that occur in regional or social groups. Most NLP methods are not sensitive to such variations. This may lead to unfair behavior of the methods, conveying negative bias towards dialect speakers. While previous work has studied dialect-related fairness for aspects like hate speech, other aspects of biased language, such as lewdness, remain fully unexplored. To fill this gap, we investigate performance disparities between dialects in the detection of five aspects of biased language and how to mitigate them. To alleviate bias, we present a multitask learning approach that models dialect language as an auxiliary task to incorporate syntactic and lexical variations. In our experiments with African-American English dialect, we provide empirical evidence that complementing common learning approaches with dialect modeling improves their fairness. Furthermore, the results suggest that multitask learning achieves state-of-the-art performance and helps to detect properties of biased language more reliably.

UR - http://www.scopus.com/inward/record.url?scp=85205286976&partnerID=8YFLogxK

U2 - 10.18653/v1/2024.findings-acl.553

DO - 10.18653/v1/2024.findings-acl.553

M3 - Conference contribution

T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics

SP - 9294

EP - 9313

BT - Findings of the Association for Computational Linguistics ACL 2024

A2 - Ku, Lun-Wei

A2 - Martins, Andre

A2 - Srikumar, Vivek

T2 - Findings of the Association for Computational Linguistics ACL 2024

Y2 - 11 August 2024 through 16 August 2024

ER -

By the same authors