Parity-based cumulative fairness-aware boosting

Publication: Contribution to journal › Article › Research › Peer review

Authors

  • Vasileios Iosifidis
  • Arjun Roy
  • Eirini Ntoutsi

Organisational units

External organisations

  • Freie Universität Berlin (FU Berlin)

Details

Original language: English
Pages (from-to): 2737-2770
Number of pages: 34
Journal: Knowledge and information systems
Volume: 64
Issue number: 10
Early online date: 27 Jul 2022
Publication status: Published - Oct 2022

Abstract

Data-driven AI systems can lead to discrimination on the basis of protected attributes like gender or race. One cause for this is the societal biases encoded in the training data (e.g., the under-representation of females in the tech workforce), which are aggravated in the presence of unbalanced class distributions (e.g., when “hired” is the minority class in a hiring application). State-of-the-art fairness-aware machine learning approaches focus on preserving overall classification accuracy while mitigating discrimination. In the presence of class imbalance, such methods may further aggravate discrimination by denying an already under-represented group (e.g., females) the fundamental right to equal social privileges (e.g., equal access to employment). To this end, we propose AdaFair, a fairness-aware boosting ensemble that changes the data distribution in each round, taking into account not only the class errors but also the fairness-related performance of the model, defined cumulatively over the partial ensemble. In addition to this in-training boosting of the group discriminated against in each round, AdaFair directly tackles class imbalance in the post-training phase by optimizing the number of ensemble learners for balanced error performance. AdaFair accommodates different parity-based fairness notions and effectively mitigates discriminatory outcomes.
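The abstract describes two mechanisms: per-round reweighting driven by the cumulative fairness of the partial ensemble, and post-training selection of the ensemble size that minimizes balanced error. The Python sketch below is a minimal illustration of that structure, assuming binary labels y in {0, 1} and a binary protected attribute s; the fairness cost, the exact weight update, and all names (cumulative_fairness_costs, adafair_fit, select_theta) are assumptions for illustration, not the authors' reference implementation.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cumulative_fairness_costs(y, s, partial_pred):
    # Hypothetical equalized-odds-style cost: per class label, boost the
    # misclassified members of whichever group the partial ensemble treats worse.
    u = np.zeros(len(y))
    for label in (0, 1):
        grp = (s == 1) & (y == label)
        non = (s == 0) & (y == label)
        if grp.sum() == 0 or non.sum() == 0:
            continue
        gap = (partial_pred[non] == label).mean() - (partial_pred[grp] == label).mean()
        worse = grp if gap > 0 else non
        u[worse & (partial_pred != y)] = abs(gap)
    return u

def adafair_fit(X, y, s, T=50):
    # AdaBoost-style loop; a fairness term augments the usual weight update.
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    F = np.zeros(n)  # signed margin of the partial ensemble built so far
    for _ in range(T):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = h.predict(X)
        err = w[pred != y].sum() / w.sum()
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-10))
        learners.append(h)
        alphas.append(alpha)
        F += alpha * (2 * pred - 1)  # "cumulative": fairness is judged on the partial ensemble
        u = cumulative_fairness_costs(y, s, (F > 0).astype(int))
        w *= np.exp(alpha * (pred != y)) * (1.0 + u)  # class error plus fairness cost
        w /= w.sum()
    return learners, alphas

def select_theta(X, y, learners, alphas):
    # Post-training step from the abstract: keep the ensemble prefix with the
    # lowest balanced error (mean of the per-class error rates).
    F = np.zeros(len(y))
    best_t, best_ber = 1, 1.0
    for t, (h, a) in enumerate(zip(learners, alphas), start=1):
        F += a * (2 * h.predict(X) - 1)
        pred = (F > 0).astype(int)
        ber = np.mean([(pred[y == c] != c).mean() for c in (0, 1)])
        if ber < best_ber:
            best_ber, best_t = ber, t
    return best_t

Under this sketch, one would call adafair_fit on the training data and then select_theta to truncate the ensemble to its best-balanced prefix; the paper itself defines the cumulative fairness costs for the parity notions listed in the keywords (statistical parity, equal opportunity, disparate mistreatment).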

ASJC Scopus subject areas

Cite

Parity-based cumulative fairness-aware boosting. / Iosifidis, Vasileios; Roy, Arjun; Ntoutsi, Eirini.
In: Knowledge and information systems, Vol. 64, No. 10, 10.2022, p. 2737-2770.

Publication: Contribution to journal › Article › Research › Peer review

Iosifidis V, Roy A, Ntoutsi E. Parity-based cumulative fairness-aware boosting. Knowledge and information systems. 2022 Oct;64(10):2737-2770. Epub 2022 Jul 27. doi: 10.48550/arXiv.2201.01148, 10.1007/s10115-022-01723-3
Iosifidis, Vasileios ; Roy, Arjun ; Ntoutsi, Eirini. / Parity-based cumulative fairness-aware boosting. In: Knowledge and information systems. 2022 ; Vol. 64, No. 10. pp. 2737-2770.
BibTeX
@article{ffa700c5510742f8ad9ea2b322e7eb67,
title = "Parity-based cumulative fairness-aware boosting",
abstract = "Data-driven AI systems can lead to discrimination on the basis of protected attributes like gender or race. One cause for this is the societal biases encoded in the training data (e.g., the under-representation of females in the tech workforce), which are aggravated in the presence of unbalanced class distributions (e.g., when “hired” is the minority class in a hiring application). State-of-the-art fairness-aware machine learning approaches focus on preserving overall classification accuracy while mitigating discrimination. In the presence of class imbalance, such methods may further aggravate discrimination by denying an already under-represented group (e.g., females) the fundamental right to equal social privileges (e.g., equal access to employment). To this end, we propose AdaFair, a fairness-aware boosting ensemble that changes the data distribution in each round, taking into account not only the class errors but also the fairness-related performance of the model, defined cumulatively over the partial ensemble. In addition to this in-training boosting of the group discriminated against in each round, AdaFair directly tackles class imbalance in the post-training phase by optimizing the number of ensemble learners for balanced error performance. AdaFair accommodates different parity-based fairness notions and effectively mitigates discriminatory outcomes.",
keywords = "Boosting, Class-imbalance, Disparate mistreatment, Ensemble learning, Equal opportunity, Fairness-aware classification, Statistical parity",
author = "Vasileios Iosifidis and Arjun Roy and Eirini Ntoutsi",
note = "Funding Information: The work is supported by the Volkswagen Foundation project BIAS (“Bias and Discrimination in Big Data and Algorithmic Processing. Philosophical Assessments, Legal Dimensions, and Technical Solutions”) within the initiative “AI and the Society of the Future”. ",
year = "2022",
month = oct,
doi = "10.1007/s10115-022-01723-3",
language = "English",
volume = "64",
pages = "2737--2770",
journal = "Knowledge and information systems",
issn = "0219-1377",
publisher = "Springer London",
number = "10",

}

RIS

TY - JOUR

T1 - Parity-based cumulative fairness-aware boosting

AU - Iosifidis, Vasileios

AU - Roy, Arjun

AU - Ntoutsi, Eirini

N1 - Funding Information: The work is supported by the Volkswagen Foundation project BIAS (“Bias and Discrimination in Big Data and Algorithmic Processing. Philosophical Assessments, Legal Dimensions, and Technical Solutions”) within the initiative “AI and the Society of the Future”.

PY - 2022/10

Y1 - 2022/10

N2 - Data-driven AI systems can lead to discrimination on the basis of protected attributes like gender or race. One cause for this is the societal biases encoded in the training data (e.g., the under-representation of females in the tech workforce), which are aggravated in the presence of unbalanced class distributions (e.g., when “hired” is the minority class in a hiring application). State-of-the-art fairness-aware machine learning approaches focus on preserving overall classification accuracy while mitigating discrimination. In the presence of class imbalance, such methods may further aggravate discrimination by denying an already under-represented group (e.g., females) the fundamental right to equal social privileges (e.g., equal access to employment). To this end, we propose AdaFair, a fairness-aware boosting ensemble that changes the data distribution in each round, taking into account not only the class errors but also the fairness-related performance of the model, defined cumulatively over the partial ensemble. In addition to this in-training boosting of the group discriminated against in each round, AdaFair directly tackles class imbalance in the post-training phase by optimizing the number of ensemble learners for balanced error performance. AdaFair accommodates different parity-based fairness notions and effectively mitigates discriminatory outcomes.

AB - Data-driven AI systems can lead to discrimination on the basis of protected attributes like gender or race. One cause for this is the societal biases encoded in the training data (e.g., the under-representation of females in the tech workforce), which are aggravated in the presence of unbalanced class distributions (e.g., when “hired” is the minority class in a hiring application). State-of-the-art fairness-aware machine learning approaches focus on preserving overall classification accuracy while mitigating discrimination. In the presence of class imbalance, such methods may further aggravate discrimination by denying an already under-represented group (e.g., females) the fundamental right to equal social privileges (e.g., equal access to employment). To this end, we propose AdaFair, a fairness-aware boosting ensemble that changes the data distribution in each round, taking into account not only the class errors but also the fairness-related performance of the model, defined cumulatively over the partial ensemble. In addition to this in-training boosting of the group discriminated against in each round, AdaFair directly tackles class imbalance in the post-training phase by optimizing the number of ensemble learners for balanced error performance. AdaFair accommodates different parity-based fairness notions and effectively mitigates discriminatory outcomes.

KW - Boosting

KW - Class-imbalance

KW - Disparate mistreatment

KW - Ensemble learning

KW - Equal opportunity

KW - Fairness-aware classification

KW - Statistical parity

UR - http://www.scopus.com/inward/record.url?scp=85137649535&partnerID=8YFLogxK

U2 - 10.1007/s10115-022-01723-3

DO - 10.1007/s10115-022-01723-3

M3 - Article

AN - SCOPUS:85137649535

VL - 64

SP - 2737

EP - 2770

JO - Knowledge and information systems

JF - Knowledge and information systems

SN - 0219-1377

IS - 10

ER -