FAE: A Fairness-Aware Ensemble Framework

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Vasileios Iosifidis
  • Besnik Fetahu
  • Eirini Ntoutsi

Details

Original language: English
Title of host publication: 2019 IEEE International Conference on Big Data (Big Data)
Editors: Chaitanya Baru, Jun Huan, Latifur Khan, Xiaohua Tony Hu, Ronay Ak, Yuanyuan Tian, Roger Barga, Carlo Zaniolo, Kisung Lee, Yanfang Fanny Ye
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1375-1380
Number of pages: 6
ISBN (electronic): 9781728108582
ISBN (print): 9781728108599
Publication status: Published - 2020
Event: 2019 IEEE International Conference on Big Data, Big Data 2019 - Los Angeles, United States
Duration: 9 Dec 2019 - 12 Dec 2019

Abstract

Automated decision making based on big data and machine learning (ML) algorithms can result in discriminatory decisions against certain protected groups defined upon personal data such as gender, race, or sexual orientation. Algorithms designed to discover patterns in big data might not only pick up societal biases encoded in the training data but, even worse, reinforce those biases, resulting in more severe discrimination. The majority of fairness-aware machine learning approaches proposed thus far focus solely on the pre-, in-, or post-processing steps of the machine learning process, that is, on the input data, the learning algorithms, or the derived models, respectively. However, the fairness problem cannot be isolated to a single step of the ML process. Rather, discrimination is often a result of complex interactions between big data and algorithms, and therefore a more holistic approach is required. The proposed FAE (Fairness-Aware Ensemble) framework combines fairness-related interventions at both the pre- and post-processing steps of the data analysis process. In the pre-processing step, we tackle the problems of under-representation of the protected group (group imbalance) and of class imbalance by generating balanced training samples. In the post-processing step, we tackle the problem of class overlap by shifting the decision boundary in the direction of fairness.
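The full algorithm is given in the paper itself; purely as a minimal sketch of the two interventions the abstract describes, the Python fragment below (hypothetical names such as balanced_bag and shift_boundary; scikit-learn-style base learners, a binary protected attribute s, and binary labels y are all assumptions of this sketch, not the authors' code) draws each ensemble member's training bag with equal counts from the four group-class partitions, then lowers the protected group's decision threshold on a validation set until the true-positive-rate gap between groups falls below a tolerance.

# Minimal sketch, NOT the authors' reference implementation: function names,
# sampling details, and the fairness criterion are illustrative assumptions.
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def balanced_bag(X, y, s, rng):
    """Pre-processing: draw one training bag with equal counts from each
    (group, class) partition, countering group and class imbalance.
    Assumes every partition is non-empty."""
    parts = [np.where((s == g) & (y == c))[0] for g in (0, 1) for c in (0, 1)]
    n = min(len(p) for p in parts)  # size of the smallest partition
    idx = np.concatenate([rng.choice(p, size=n, replace=True) for p in parts])
    return X[idx], y[idx]

def train_ensemble(X, y, s, n_estimators=10, seed=0):
    """Fit each ensemble member on its own balanced bag."""
    rng = np.random.default_rng(seed)
    base = DecisionTreeClassifier(max_depth=5)
    return [clone(base).fit(*balanced_bag(X, y, s, rng))
            for _ in range(n_estimators)]

def shift_boundary(models, X_val, y_val, s_val, tol=0.01):
    """Post-processing: lower the positive-class threshold for the protected
    group (s == 1) until its true-positive rate is within `tol` of the
    non-protected group's -- one way to operationalize 'shifting the
    decision boundary in the direction of fairness'."""
    scores = np.mean([m.predict_proba(X_val)[:, 1] for m in models], axis=0)

    def tpr(group, theta):
        pos = (s_val == group) & (y_val == 1)
        return (scores[pos] >= theta).mean() if pos.any() else 0.0

    theta = 0.5
    while theta > 0.0 and tpr(1, theta) + tol < tpr(0, 0.5):
        theta -= 0.01
    return theta  # protected-group threshold; non-protected group keeps 0.5

Prediction would then compare the averaged ensemble score against the group-specific threshold; whether the boundary shift is applied per group or globally, and which fairness measure drives it, are details fixed by the paper rather than by this sketch.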

Keywords

    class imbalance, class overlap, ensemble learning, fairness-aware classification, group imbalance

Cite this

FAE: A Fairness-Aware Ensemble Framework. / Iosifidis, Vasileios; Fetahu, Besnik; Ntoutsi, Eirini.
2019 IEEE International Conference on Big Data (Big Data). ed. / Chaitanya Baru; Jun Huan; Latifur Khan; Xiaohua Tony Hu; Ronay Ak; Yuanyuan Tian; Roger Barga; Carlo Zaniolo; Kisung Lee; Yanfang Fanny Ye. Institute of Electrical and Electronics Engineers Inc., 2020. p. 1375-1380, Article 9006487.

Iosifidis, V, Fetahu, B & Ntoutsi, E 2020, FAE: A Fairness-Aware Ensemble Framework. in C Baru, J Huan, L Khan, XT Hu, R Ak, Y Tian, R Barga, C Zaniolo, K Lee & YF Ye (eds), 2019 IEEE International Conference on Big Data (Big Data), Article 9006487, Institute of Electrical and Electronics Engineers Inc., pp. 1375-1380, 2019 IEEE International Conference on Big Data, Big Data 2019, Los Angeles, United States, 9 Dec 2019. https://doi.org/10.1109/BigData47090.2019.9006487
Iosifidis, V., Fetahu, B., & Ntoutsi, E. (2020). FAE: A Fairness-Aware Ensemble Framework. In C. Baru, J. Huan, L. Khan, X. T. Hu, R. Ak, Y. Tian, R. Barga, C. Zaniolo, K. Lee, & Y. F. Ye (Eds.), 2019 IEEE International Conference on Big Data (Big Data) (pp. 1375-1380). Article 9006487. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/BigData47090.2019.9006487
Iosifidis V, Fetahu B, Ntoutsi E. FAE: A Fairness-Aware Ensemble Framework. In Baru C, Huan J, Khan L, Hu XT, Ak R, Tian Y, Barga R, Zaniolo C, Lee K, Ye YF, editors, 2019 IEEE International Conference on Big Data (Big Data). Institute of Electrical and Electronics Engineers Inc. 2020. p. 1375-1380. Article 9006487. doi: 10.1109/BigData47090.2019.9006487
Iosifidis, Vasileios; Fetahu, Besnik; Ntoutsi, Eirini. / FAE: A Fairness-Aware Ensemble Framework. 2019 IEEE International Conference on Big Data (Big Data). editor / Chaitanya Baru; Jun Huan; Latifur Khan; Xiaohua Tony Hu; Ronay Ak; Yuanyuan Tian; Roger Barga; Carlo Zaniolo; Kisung Lee; Yanfang Fanny Ye. Institute of Electrical and Electronics Engineers Inc., 2020. pp. 1375-1380
BibTeX
@inproceedings{704e74923b204ae2bcce6cc652ddcbce,
title = "FAE: A Fairness-Aware Ensemble Framework",
abstract = "Automated decision making based on big data and machine learning (ML) algorithms can result in discriminatory decisions against certain protected groups defined upon personal data like gender, race, sexual orientation etc. Such algorithms designed to discover patterns in big data might not only pick up any encoded societal biases in the training data, but even worse, they might reinforce such biases resulting in more severe discrimination. The majority of thus far proposed fairness-aware machine learning approaches focus solely on the pre-, in-or post-processing steps of the machine learning process, that is, input data, learning algorithms or derived models, respectively. However, the fairness problem cannot be isolated to a single step of the ML process. Rather, discrimination is often a result of complex interactions between big data and algorithms, and therefore, a more holistic approach is required.The proposed FAE (Fairness-Aware Ensemble) framework combines fairness-related interventions at both pre-and post-processing steps of the data analysis process. In the pre-processing step, we tackle the problems of under-representation of the protected group (group imbalance) and of class-imbalance by generating balanced training samples. In the post-processing step, we tackle the problem of class overlapping by shifting the decision boundary in the direction of fairness.",
keywords = "class imbalance, class overlap, ensemble learning, fairness-aware classification, group imbalance",
author = "Vasileios Iosifidis and Besnik Fetahu and Eirini Ntoutsi",
note = "Funding information: This work is part of a project that has received funding from the European Unions Horizon 2020, under the Innovative Training Networks (ITN-ETN) programme Marie Skodowska-Curie grant (NoBIAS-Artificial Intelligence without Bias) agreement no. 860630. The work is also inspired by the Volkswagen Foundation project BIAS (”Bias and Discrimination in Big Data and Algorithmic Processing. Philosophical Assessments, Legal Dimensions, and Technical Solutions”) within the initiative ”AI and the Society of the Future”; the last author is a Project Investigator for both of them.; 2019 IEEE International Conference on Big Data, Big Data 2019 ; Conference date: 09-12-2019 Through 12-12-2019",
year = "2020",
doi = "10.1109/BigData47090.2019.9006487",
language = "English",
isbn = "9781728108599",
pages = "1375--1380",
editor = "Chaitanya Baru and Jun Huan and Latifur Khan and Hu, {Xiaohua Tony} and Ronay Ak and Yuanyuan Tian and Roger Barga and Carlo Zaniolo and Kisung Lee and Ye, {Yanfang Fanny}",
booktitle = "2019 IEEE International Conference on Big Data (Big Data)",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
address = "United States",

}

RIS

TY - GEN

T1 - FAE: A Fairness-Aware Ensemble Framework

T2 - 2019 IEEE International Conference on Big Data, Big Data 2019

AU - Iosifidis, Vasileios

AU - Fetahu, Besnik

AU - Ntoutsi, Eirini

N1 - Funding information: This work is part of a project that has received funding from the European Union's Horizon 2020 programme, under the Marie Skłodowska-Curie Innovative Training Networks (ITN-ETN) grant agreement no. 860630 (NoBIAS - Artificial Intelligence without Bias). The work is also inspired by the Volkswagen Foundation project BIAS ("Bias and Discrimination in Big Data and Algorithmic Processing. Philosophical Assessments, Legal Dimensions, and Technical Solutions") within the initiative "AI and the Society of the Future"; the last author is a Project Investigator for both of them.

PY - 2020

Y1 - 2020

N2 - Automated decision making based on big data and machine learning (ML) algorithms can result in discriminatory decisions against certain protected groups defined upon personal data such as gender, race, or sexual orientation. Algorithms designed to discover patterns in big data might not only pick up societal biases encoded in the training data but, even worse, reinforce those biases, resulting in more severe discrimination. The majority of fairness-aware machine learning approaches proposed thus far focus solely on the pre-, in-, or post-processing steps of the machine learning process, that is, on the input data, the learning algorithms, or the derived models, respectively. However, the fairness problem cannot be isolated to a single step of the ML process. Rather, discrimination is often a result of complex interactions between big data and algorithms, and therefore a more holistic approach is required. The proposed FAE (Fairness-Aware Ensemble) framework combines fairness-related interventions at both the pre- and post-processing steps of the data analysis process. In the pre-processing step, we tackle the problems of under-representation of the protected group (group imbalance) and of class imbalance by generating balanced training samples. In the post-processing step, we tackle the problem of class overlap by shifting the decision boundary in the direction of fairness.

AB - Automated decision making based on big data and machine learning (ML) algorithms can result in discriminatory decisions against certain protected groups defined upon personal data such as gender, race, or sexual orientation. Algorithms designed to discover patterns in big data might not only pick up societal biases encoded in the training data but, even worse, reinforce those biases, resulting in more severe discrimination. The majority of fairness-aware machine learning approaches proposed thus far focus solely on the pre-, in-, or post-processing steps of the machine learning process, that is, on the input data, the learning algorithms, or the derived models, respectively. However, the fairness problem cannot be isolated to a single step of the ML process. Rather, discrimination is often a result of complex interactions between big data and algorithms, and therefore a more holistic approach is required. The proposed FAE (Fairness-Aware Ensemble) framework combines fairness-related interventions at both the pre- and post-processing steps of the data analysis process. In the pre-processing step, we tackle the problems of under-representation of the protected group (group imbalance) and of class imbalance by generating balanced training samples. In the post-processing step, we tackle the problem of class overlap by shifting the decision boundary in the direction of fairness.

KW - class imbalance

KW - class overlap

KW - ensemble learning

KW - fairness-aware classification

KW - group imbalance

UR - http://www.scopus.com/inward/record.url?scp=85081292429&partnerID=8YFLogxK

U2 - 10.1109/BigData47090.2019.9006487

DO - 10.1109/BigData47090.2019.9006487

M3 - Conference contribution

AN - SCOPUS:85081292429

SN - 9781728108599

SP - 1375

EP - 1380

BT - 2019 IEEE International Conference on Big Data (Big Data)

A2 - Baru, Chaitanya

A2 - Huan, Jun

A2 - Khan, Latifur

A2 - Hu, Xiaohua Tony

A2 - Ak, Ronay

A2 - Tian, Yuanyuan

A2 - Barga, Roger

A2 - Zaniolo, Carlo

A2 - Lee, Kisung

A2 - Ye, Yanfang Fanny

PB - Institute of Electrical and Electronics Engineers Inc.

Y2 - 9 December 2019 through 12 December 2019

ER -