Details
Original language | English |
---|---|
Title of host publication | FAccT '23 |
Subtitle | Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency |
Publisher | Association for Computing Machinery (ACM) |
Pages | 89-100 |
Number of pages | 12 |
ISBN (electronic) | 9781450372527 |
Publication status | Published - 12 June 2023 |
Event | 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023 - Chicago, United States. Duration: 12 June 2023 → 15 June 2023 |
Publication series
Name | ACM International Conference Proceeding Series |
---|
Abstract
AI-driven decision-making can lead to discrimination against certain individuals or social groups based on protected characteristics/attributes such as race, gender, or age. The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models. Still, thus far, the vast majority of the proposed methods assess fairness based on a single protected attribute, e.g., only gender or race. In reality, though, human identities are multi-dimensional, and discrimination can occur based on more than one protected characteristic, leading to the so-called "multi-dimensional discrimination" or "multi-dimensional fairness" problem. While well elaborated in the legal literature, the multi-dimensionality of discrimination is less explored in the machine learning community. Recent approaches in this direction mainly follow the so-called intersectional fairness definition from the legal domain, whereas other notions like additive and sequential discrimination are less studied or not considered thus far. In this work, we overview the different definitions of multi-dimensional discrimination/fairness in the legal domain as well as how they have been transferred/operationalized (if at all) in the fairness-aware machine learning domain. By juxtaposing these two domains, we draw the connections, identify the limitations, and point out open research directions.
ASJC Scopus subject areas
- Computer Science (all)
- Human-Computer Interaction
- Computer Science (all)
- Computer Networks and Communications
- Computer Science (all)
- Computer Vision and Pattern Recognition
- Computer Science (all)
- Software
Cite
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Multi-dimensional Discrimination in Law and Machine Learning. In FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery (ACM), 2023, pp. 89-100 (ACM International Conference Proceeding Series).
Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed
TY - GEN
T1 - Multi-dimensional Discrimination in Law and Machine Learning
T2 - 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023
AU - Roy, Arjun
AU - Horstmann, Jan
AU - Ntoutsi, Eirini
N1 - Funding Information: This research work is funded by the Volkswagen Foundation under the call "Artificial Intelligence and the Society of the Future", project "Bias and Discrimination in Big Data and Algorithmic Processing - BIAS", and by the European Union under the Horizon Europe MAMMOth project, Grant Agreement ID: 101070285. We especially thank our colleagues in the projects for invaluable discussions and helpful suggestions.
PY - 2023/6/12
Y1 - 2023/6/12
N2 - AI-driven decision-making can lead to discrimination against certain individuals or social groups based on protected characteristics/attributes such as race, gender, or age. The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models. Still, thus far, the vast majority of the proposed methods assess fairness based on a single protected attribute, e.g., only gender or race. In reality, though, human identities are multi-dimensional, and discrimination can occur based on more than one protected characteristic, leading to the so-called "multi-dimensional discrimination" or "multi-dimensional fairness" problem. While well elaborated in the legal literature, the multi-dimensionality of discrimination is less explored in the machine learning community. Recent approaches in this direction mainly follow the so-called intersectional fairness definition from the legal domain, whereas other notions like additive and sequential discrimination are less studied or not considered thus far. In this work, we overview the different definitions of multi-dimensional discrimination/fairness in the legal domain as well as how they have been transferred/operationalized (if at all) in the fairness-aware machine learning domain. By juxtaposing these two domains, we draw the connections, identify the limitations, and point out open research directions.
AB - AI-driven decision-making can lead to discrimination against certain individuals or social groups based on protected characteristics/attributes such as race, gender, or age. The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models. Still, thus far, the vast majority of the proposed methods assess fairness based on a single protected attribute, e.g., only gender or race. In reality, though, human identities are multi-dimensional, and discrimination can occur based on more than one protected characteristic, leading to the so-called "multi-dimensional discrimination" or "multi-dimensional fairness" problem. While well elaborated in the legal literature, the multi-dimensionality of discrimination is less explored in the machine learning community. Recent approaches in this direction mainly follow the so-called intersectional fairness definition from the legal domain, whereas other notions like additive and sequential discrimination are less studied or not considered thus far. In this work, we overview the different definitions of multi-dimensional discrimination/fairness in the legal domain as well as how they have been transferred/operationalized (if at all) in the fairness-aware machine learning domain. By juxtaposing these two domains, we draw the connections, identify the limitations, and point out open research directions.
KW - additive fairness
KW - intersectional fairness
KW - multi-discrimination
KW - multi-fairness
KW - sequential fairness
UR - http://www.scopus.com/inward/record.url?scp=85163682034&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2302.05995
DO - 10.48550/arXiv.2302.05995
M3 - Conference contribution
AN - SCOPUS:85163682034
T3 - ACM International Conference Proceeding Series
SP - 89
EP - 100
BT - FAccT '23
PB - Association for Computing Machinery (ACM)
Y2 - 12 June 2023 through 15 June 2023
ER -