Details
| Original language | English |
|---|---|
| Title of host publication | FAccT '23 |
| Subtitle of host publication | Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency |
| Publisher | Association for Computing Machinery (ACM) |
| Pages | 89-100 |
| Number of pages | 12 |
| ISBN (electronic) | 9781450372527 |
| Publication status | Published - 12 Jun 2023 |
| Event | 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023 - Chicago, United States |
| Event duration | 12 Jun 2023 → 15 Jun 2023 |
Publication series
| Name | ACM International Conference Proceeding Series |
|---|---|
Abstract
AI-driven decision-making can lead to discrimination against certain individuals or social groups based on protected characteristics/attributes such as race, gender, or age. The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models. Still, thus far, the vast majority of the proposed methods assess fairness based on a single protected attribute, e.g., only gender or race. In reality, though, human identities are multi-dimensional, and discrimination can occur based on more than one protected characteristic, leading to the so-called "multi-dimensional discrimination" or "multi-dimensional fairness" problem. While well elaborated in the legal literature, the multi-dimensionality of discrimination is less explored in the machine learning community. Recent approaches in this direction mainly follow the so-called intersectional fairness definition from the legal domain, whereas other notions, like additive and sequential discrimination, are less studied or not considered thus far. In this work, we overview the different definitions of multi-dimensional discrimination/fairness in the legal domain, as well as how (if at all) they have been transferred/operationalized in the fairness-aware machine learning domain. By juxtaposing these two domains, we draw the connections, identify the limitations, and point out open research directions.
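To illustrate the distinction the abstract draws between single-attribute and intersectional fairness checks, here is a minimal sketch (not taken from the paper; the toy data and the choice of demographic parity as the metric are assumptions for illustration). It shows how per-attribute positive-prediction rates can mask a disparity that only appears when attribute combinations are examined jointly:

```python
# Hedged sketch: contrasting a single-attribute fairness check with an
# intersectional one, using demographic parity (positive-prediction rate
# per group) as the metric. Toy data, purely illustrative.
from itertools import product

# Toy records: (gender, race, model_decision)
records = [
    ("F", "A", 1), ("F", "A", 0),
    ("F", "B", 0), ("F", "B", 0),
    ("M", "A", 1), ("M", "A", 1),
    ("M", "B", 1), ("M", "B", 0),
]

def positive_rate(rows):
    """Fraction of rows with a positive model decision."""
    return sum(d for *_, d in rows) / len(rows)

# Single-attribute view: one protected attribute at a time.
for idx, name in [(0, "gender"), (1, "race")]:
    for value in sorted({r[idx] for r in records}):
        group = [r for r in records if r[idx] == value]
        print(f"{name}={value}: rate={positive_rate(group):.2f}")

# Intersectional view: every combination of attribute values, which can
# expose disparities that the marginal (single-attribute) views average away.
for g, r in product("FM", "AB"):
    group = [x for x in records if x[0] == g and x[1] == r]
    print(f"gender={g}, race={r}: rate={positive_rate(group):.2f}")
```

In this toy data, the intersectional group (F, B) receives no positive decisions at all, a disparity that is softened in each of the two marginal views — the kind of effect the intersectional fairness notion is meant to capture.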
Keywords
- additive fairness, intersectional fairness, multi-discrimination, multi-fairness, sequential fairness
ASJC Scopus subject areas
- Computer Science (all)
- Human-Computer Interaction
- Computer Networks and Communications
- Computer Vision and Pattern Recognition
- Software
Cite this
Roy, A., Horstmann, J., & Ntoutsi, E. (2023). Multi-dimensional Discrimination in Law and Machine Learning. In FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 89-100). Association for Computing Machinery (ACM). (ACM International Conference Proceeding Series).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Multi-dimensional Discrimination in Law and Machine Learning
T2 - 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023
AU - Roy, Arjun
AU - Horstmann, Jan
AU - Ntoutsi, Eirini
N1 - Funding Information: This research work is funded by the Volkswagen Foundation under the call "Artificial Intelligence and the Society of the Future" (project "Bias and Discrimination in Big Data and Algorithmic Processing - BIAS") and by the European Union under the Horizon Europe MAMMOth project, Grant Agreement ID: 101070285. We especially thank our colleagues in the projects for invaluable discussions and helpful suggestions.
PY - 2023/6/12
Y1 - 2023/6/12
N2 - AI-driven decision-making can lead to discrimination against certain individuals or social groups based on protected characteristics/attributes such as race, gender, or age. The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models. Still, thus far, the vast majority of the proposed methods assess fairness based on a single protected attribute, e.g., only gender or race. In reality, though, human identities are multi-dimensional, and discrimination can occur based on more than one protected characteristic, leading to the so-called "multi-dimensional discrimination" or "multi-dimensional fairness" problem. While well elaborated in the legal literature, the multi-dimensionality of discrimination is less explored in the machine learning community. Recent approaches in this direction mainly follow the so-called intersectional fairness definition from the legal domain, whereas other notions, like additive and sequential discrimination, are less studied or not considered thus far. In this work, we overview the different definitions of multi-dimensional discrimination/fairness in the legal domain, as well as how (if at all) they have been transferred/operationalized in the fairness-aware machine learning domain. By juxtaposing these two domains, we draw the connections, identify the limitations, and point out open research directions.
AB - AI-driven decision-making can lead to discrimination against certain individuals or social groups based on protected characteristics/attributes such as race, gender, or age. The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models. Still, thus far, the vast majority of the proposed methods assess fairness based on a single protected attribute, e.g., only gender or race. In reality, though, human identities are multi-dimensional, and discrimination can occur based on more than one protected characteristic, leading to the so-called "multi-dimensional discrimination" or "multi-dimensional fairness" problem. While well elaborated in the legal literature, the multi-dimensionality of discrimination is less explored in the machine learning community. Recent approaches in this direction mainly follow the so-called intersectional fairness definition from the legal domain, whereas other notions, like additive and sequential discrimination, are less studied or not considered thus far. In this work, we overview the different definitions of multi-dimensional discrimination/fairness in the legal domain, as well as how (if at all) they have been transferred/operationalized in the fairness-aware machine learning domain. By juxtaposing these two domains, we draw the connections, identify the limitations, and point out open research directions.
KW - additive fairness
KW - intersectional fairness
KW - multi-discrimination
KW - multi-fairness
KW - sequential fairness
UR - http://www.scopus.com/inward/record.url?scp=85163682034&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2302.05995
DO - 10.48550/arXiv.2302.05995
M3 - Conference contribution
AN - SCOPUS:85163682034
T3 - ACM International Conference Proceeding Series
SP - 89
EP - 100
BT - FAccT '23
PB - Association for Computing Machinery (ACM)
Y2 - 12 June 2023 through 15 June 2023
ER -