AdaFair: Cumulative Fairness Adaptive Boosting

Publication: Chapter in book/report/conference proceeding › Conference contribution › Research › Peer-reviewed

Authors

  • Vasileios Iosifidis
  • Eirini Ntoutsi


Details

Original language: English
Title of host publication: CIKM '19
Subtitle of host publication: Proceedings of the 28th ACM International Conference on Information and Knowledge Management
Pages: 781-790
Number of pages: 10
ISBN (electronic): 9781450369763
Publication status: Published - 3 Nov 2019
Event: 28th ACM International Conference on Information and Knowledge Management, CIKM 2019 - Beijing, China
Duration: 3 Nov 2019 - 7 Nov 2019

Abstract

The widespread use of ML-based decision making in domains with high societal impact, such as recidivism, job hiring, and loan credit, has raised many concerns regarding potential discrimination. In particular, it has been observed that ML algorithms can make different decisions based on sensitive attributes such as gender or race and can therefore lead to discrimination. Although several fairness-aware ML approaches have been proposed, their focus has largely been on preserving overall classification accuracy while improving fairness in predictions for both protected and non-protected groups (defined based on the sensitive attribute(s)). Overall accuracy, however, is not a good indicator of performance under class imbalance, as it is biased towards the majority class. As we show in our experiments, many fairness-related datasets suffer from class imbalance, and therefore tackling fairness also requires tackling the imbalance problem. To this end, we propose AdaFair, a fairness-aware classifier based on AdaBoost that further updates the instance weights in each boosting round using a cumulative notion of fairness based on all current ensemble members, while explicitly tackling class imbalance by optimizing the number of ensemble members for balanced classification error. Our experiments show that our approach achieves parity in true positive and true negative rates for both protected and non-protected groups, while significantly outperforming existing fairness-aware methods by up to 25% in terms of balanced error.
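The abstract describes the mechanism only at a high level. The following is a minimal, hypothetical Python sketch of a cumulative-fairness boosting loop in that spirit; the decision-stump base learner, the specific fairness cost u (a TPR/TNR gap between groups), and the exact weight-update form are illustrative assumptions, not the paper's published formulation.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def balanced_error(y_true, y_pred):
    """Mean of the per-class error rates; unaffected by class imbalance."""
    pos, neg = y_true == 1, y_true == -1
    err_pos = np.mean(y_pred[pos] != 1) if pos.any() else 0.0
    err_neg = np.mean(y_pred[neg] != -1) if neg.any() else 0.0
    return 0.5 * (err_pos + err_neg)

def fairness_gaps(y_true, y_pred, protected):
    """Signed TPR/TNR differences between non-protected and protected groups."""
    def rate(mask):
        return np.mean(y_pred[mask] == y_true[mask]) if mask.any() else 0.0
    tpr_gap = rate((y_true == 1) & ~protected) - rate((y_true == 1) & protected)
    tnr_gap = rate((y_true == -1) & ~protected) - rate((y_true == -1) & protected)
    return tpr_gap, tnr_gap

def adafair_sketch(X, y, protected, n_rounds=50, eps=1e-10):
    """y in {-1, +1}; protected is a boolean mask over the instances."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    margin = np.zeros(n)           # cumulative weighted ensemble score
    best_T, best_ber = 1, np.inf   # ensemble size minimizing balanced error

    for t in range(n_rounds):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = h.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)
        alpha = 0.5 * np.log((1.0 - err + eps) / (err + eps))
        learners.append(h)
        alphas.append(alpha)

        # Cumulative fairness: judge the ensemble built so far, not just h.
        margin += alpha * pred
        ens_pred = np.where(margin >= 0, 1, -1)
        tpr_gap, tnr_gap = fairness_gaps(y, ens_pred, protected)

        # Fairness cost u: extra weight for misclassified members of
        # whichever group is currently worse off (illustrative choice).
        u = np.zeros(n)
        grp_pos = protected if tpr_gap > 0 else ~protected
        u[(y == 1) & grp_pos & (ens_pred != y)] = abs(tpr_gap)
        grp_neg = protected if tnr_gap > 0 else ~protected
        u[(y == -1) & grp_neg & (ens_pred != y)] = abs(tnr_gap)

        # AdaBoost-style reweighting, scaled up by the fairness cost.
        w = w * np.exp(alpha * (pred != y)) * (1.0 + u)
        w /= w.sum()

        # Track the ensemble size with the lowest balanced error so far.
        ber = balanced_error(y, ens_pred)
        if ber < best_ber:
            best_ber, best_T = ber, t + 1

    return learners[:best_T], alphas[:best_T]

The two departures from vanilla AdaBoost mirror the abstract: the fairness signal is computed from the cumulative ensemble prediction rather than from the current weak learner alone, and the final ensemble size is chosen to minimize balanced error rather than being fixed in advance.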

Cite

AdaFair: Cumulative Fairness Adaptive Boosting. / Iosifidis, Vasileios; Ntoutsi, Eirini.
CIKM '19: Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 2019. pp. 781-790.

Iosifidis, V & Ntoutsi, E 2019, AdaFair: Cumulative Fairness Adaptive Boosting. in CIKM '19: Proceedings of the 28th ACM International Conference on Information and Knowledge Management. pp. 781-790, 28th ACM International Conference on Information and Knowledge Management, CIKM 2019, Beijing, China, 3 Nov 2019. https://doi.org/10.48550/arXiv.1909.08982, https://doi.org/10.1145/3357384.3357974
Iosifidis, V., & Ntoutsi, E. (2019). AdaFair: Cumulative Fairness Adaptive Boosting. In CIKM '19: Proceedings of the 28th ACM International Conference on Information and Knowledge Management (pp. 781-790). https://doi.org/10.48550/arXiv.1909.08982, https://doi.org/10.1145/3357384.3357974
Iosifidis V, Ntoutsi E. AdaFair: Cumulative Fairness Adaptive Boosting. In: CIKM '19: Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 2019. pp. 781-790. doi: 10.48550/arXiv.1909.08982, 10.1145/3357384.3357974
Iosifidis, Vasileios; Ntoutsi, Eirini. / AdaFair: Cumulative Fairness Adaptive Boosting. CIKM '19: Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 2019. pp. 781-790.
BibTeX
@inproceedings{f590e62610f14df997904cb0668d06a0,
title = "AdaFair: Cumulative Fairness Adaptive Boosting",
abstract = "The widespread use of ML-based decision making in domains with high societal impact, such as recidivism, job hiring, and loan credit, has raised many concerns regarding potential discrimination. In particular, it has been observed that ML algorithms can make different decisions based on sensitive attributes such as gender or race and can therefore lead to discrimination. Although several fairness-aware ML approaches have been proposed, their focus has largely been on preserving overall classification accuracy while improving fairness in predictions for both protected and non-protected groups (defined based on the sensitive attribute(s)). Overall accuracy, however, is not a good indicator of performance under class imbalance, as it is biased towards the majority class. As we show in our experiments, many fairness-related datasets suffer from class imbalance, and therefore tackling fairness also requires tackling the imbalance problem. To this end, we propose AdaFair, a fairness-aware classifier based on AdaBoost that further updates the instance weights in each boosting round using a cumulative notion of fairness based on all current ensemble members, while explicitly tackling class imbalance by optimizing the number of ensemble members for balanced classification error. Our experiments show that our approach achieves parity in true positive and true negative rates for both protected and non-protected groups, while significantly outperforming existing fairness-aware methods by up to 25% in terms of balanced error.",
keywords = "Boosting, Class imbalance, Fairness-aware classification",
author = "Vasileios Iosifidis and Eirini Ntoutsi",
note = "Funding Information: The work was funded by the German Research Foundation (DFG) project OSCAR (Opinion Stream Classification with Ensembles and Active leaRners) and inspired by the Volkswagen Foundation project BIAS ({"}Bias and Discrimination in Big Data and Algorithmic Processing. Philosophical Assessments, Legal Dimensions, and Technical Solutions{"}) within the initiative {"}AI and the Society of the Future{"}; the last author is a Project Investigator for both of them.; 28th ACM International Conference on Information and Knowledge Management, CIKM 2019 ; Conference date: 03-11-2019 Through 07-11-2019",
year = "2019",
month = nov,
day = "3",
doi = "10.48550/arXiv.1909.08982",
language = "English",
pages = "781--790",
booktitle = "CIKM '19",

}

RIS

TY - GEN

T1 - AdaFair: Cumulative Fairness Adaptive Boosting

T2 - 28th ACM International Conference on Information and Knowledge Management, CIKM 2019

AU - Iosifidis, Vasileios

AU - Ntoutsi, Eirini

N1 - Funding Information: The work was funded by the German Research Foundation (DFG) project OSCAR (Opinion Stream Classification with Ensembles and Active leaRners) and inspired by the Volkswagen Foundation project BIAS ("Bias and Discrimination in Big Data and Algorithmic Processing. Philosophical Assessments, Legal Dimensions, and Technical Solutions") within the initiative "AI and the Society of the Future"; the last author is a Project Investigator for both of them.

PY - 2019/11/3

Y1 - 2019/11/3

N2 - The widespread use of ML-based decision making in domains with high societal impact, such as recidivism, job hiring, and loan credit, has raised many concerns regarding potential discrimination. In particular, it has been observed that ML algorithms can make different decisions based on sensitive attributes such as gender or race and can therefore lead to discrimination. Although several fairness-aware ML approaches have been proposed, their focus has largely been on preserving overall classification accuracy while improving fairness in predictions for both protected and non-protected groups (defined based on the sensitive attribute(s)). Overall accuracy, however, is not a good indicator of performance under class imbalance, as it is biased towards the majority class. As we show in our experiments, many fairness-related datasets suffer from class imbalance, and therefore tackling fairness also requires tackling the imbalance problem. To this end, we propose AdaFair, a fairness-aware classifier based on AdaBoost that further updates the instance weights in each boosting round using a cumulative notion of fairness based on all current ensemble members, while explicitly tackling class imbalance by optimizing the number of ensemble members for balanced classification error. Our experiments show that our approach achieves parity in true positive and true negative rates for both protected and non-protected groups, while significantly outperforming existing fairness-aware methods by up to 25% in terms of balanced error.

AB - The widespread use of ML-based decision making in domains with high societal impact, such as recidivism, job hiring, and loan credit, has raised many concerns regarding potential discrimination. In particular, it has been observed that ML algorithms can make different decisions based on sensitive attributes such as gender or race and can therefore lead to discrimination. Although several fairness-aware ML approaches have been proposed, their focus has largely been on preserving overall classification accuracy while improving fairness in predictions for both protected and non-protected groups (defined based on the sensitive attribute(s)). Overall accuracy, however, is not a good indicator of performance under class imbalance, as it is biased towards the majority class. As we show in our experiments, many fairness-related datasets suffer from class imbalance, and therefore tackling fairness also requires tackling the imbalance problem. To this end, we propose AdaFair, a fairness-aware classifier based on AdaBoost that further updates the instance weights in each boosting round using a cumulative notion of fairness based on all current ensemble members, while explicitly tackling class imbalance by optimizing the number of ensemble members for balanced classification error. Our experiments show that our approach achieves parity in true positive and true negative rates for both protected and non-protected groups, while significantly outperforming existing fairness-aware methods by up to 25% in terms of balanced error.

KW - Boosting

KW - Class imbalance

KW - Fairness-aware classification

UR - http://www.scopus.com/inward/record.url?scp=85075460237&partnerID=8YFLogxK

U2 - 10.48550/arXiv.1909.08982

DO - 10.48550/arXiv.1909.08982

M3 - Conference contribution

AN - SCOPUS:85075460237

SP - 781

EP - 790

BT - CIKM '19

Y2 - 3 November 2019 through 7 November 2019

ER -