Details
Original language | English |
---|---|
Title of host publication | Discovery Science - 23rd International Conference, DS 2020, Proceedings |
Subtitle of host publication | 23rd International Conference |
Editors | Annalisa Appice, Grigorios Tsoumakas, Yannis Manolopoulos, Stan Matwin |
Place of Publication | Cham |
Pages | 581-595 |
Number of pages | 15 |
Edition | 1 |
ISBN (electronic) | 9783030615277 |
Publication status | Published - 15 Oct 2020 |
Publication series
Name | Lecture Notes in Computer Science |
---|---|
ISSN (Print) | 0302-9743 |
ISSN (electronic) | 1611-3349 |
Abstract
In this paper, we propose FairNN, a neural network that performs joint feature representation and classification for fairness-aware learning. Our approach optimizes a multi-objective loss function which (a) learns a fair representation by suppressing protected attributes, (b) maintains the information content by minimizing the reconstruction loss, and (c) solves the classification task in a fair manner by minimizing the classification error while respecting an equalized-odds-based fairness regularizer. Our experiments on a variety of datasets demonstrate that such a joint approach is superior to a separate treatment of unfairness in representation learning or supervised learning. Additionally, our regularizers can be adaptively weighted to balance the different components of the loss function, thus allowing for a very general framework for conjoint fair representation learning and decision making.
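The loss structure described in the abstract can be illustrated with a small sketch. The following PyTorch code is a minimal illustration, not the authors' implementation: it combines a reconstruction term, a classification term, and a soft equalized-odds gap penalty under tunable weights (the names `FairAutoEncoderClassifier`, `alpha`, `beta`, `gamma` are illustrative). The representation-level suppression of protected attributes from component (a) is omitted here, since its exact form is specified in the paper.

```python
# Minimal sketch: jointly trained autoencoder + classifier with an
# equalized-odds-style regularizer. All names are illustrative.
import torch
import torch.nn as nn


class FairAutoEncoderClassifier(nn.Module):
    def __init__(self, in_dim: int, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))
        self.classifier = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                        nn.Linear(32, 1))

    def forward(self, x):
        z = self.encoder(x)                       # shared representation
        x_hat = self.decoder(z)                   # reconstruction branch
        y_logit = self.classifier(z).squeeze(-1)  # prediction branch
        return x_hat, y_logit


def equalized_odds_penalty(y_prob, y_true, s):
    """Squared gap in mean predicted probability between protected groups,
    computed separately for y=0 and y=1 (a soft surrogate for equalized odds)."""
    penalty = y_prob.new_zeros(())
    for y_val in (0.0, 1.0):
        g0 = y_prob[(y_true == y_val) & (s == 0)]
        g1 = y_prob[(y_true == y_val) & (s == 1)]
        if len(g0) > 0 and len(g1) > 0:
            penalty = penalty + (g0.mean() - g1.mean()) ** 2
    return penalty


def joint_loss(model, x, y, s, alpha=1.0, beta=1.0, gamma=1.0):
    """Weighted multi-objective loss: reconstruction + classification + fairness.
    alpha/beta/gamma are the (possibly adaptively tuned) component weights."""
    x_hat, y_logit = model(x)
    rec = nn.functional.mse_loss(x_hat, x)
    clf = nn.functional.binary_cross_entropy_with_logits(y_logit, y)
    fair = equalized_odds_penalty(torch.sigmoid(y_logit), y, s)
    return alpha * rec + beta * clf + gamma * fair


# Toy usage with random data (x: features, y: binary label, s: protected attribute).
if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(128, 20)
    y = torch.randint(0, 2, (128,)).float()
    s = torch.randint(0, 2, (128,))
    model = FairAutoEncoderClassifier(in_dim=20)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(5):
        opt.zero_grad()
        loss = joint_loss(model, x, y, s)
        loss.backward()
        opt.step()
```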
Keywords
- cs.LG
- stat.ML
- Auto-encoders
- Neural networks
- Bias
- Fairness
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
- General Computer Science
Cite this
Hu, Tongxin; Iosifidis, Vasileios; Liao, Wentong; Zhang, Hang; Ying Yang, Michael; Ntoutsi, Eirini; Rosenhahn, Bodo. FairNN - Conjoint Learning of Fair Representations for Fair Decisions. Discovery Science - 23rd International Conference, DS 2020, Proceedings. ed. / Annalisa Appice; Grigorios Tsoumakas; Yannis Manolopoulos; Stan Matwin. 1. ed. Cham, 2020. p. 581-595 (Lecture Notes in Computer Science). https://doi.org/10.1007/978-3-030-61527-7_38
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research
TY - GEN
T1 - FairNN
T2 - Conjoint Learning of Fair Representations for Fair Decisions
AU - Hu, Tongxin
AU - Iosifidis, Vasileios
AU - Liao, Wentong
AU - Zhang, Hang
AU - Ying Yang, Michael
AU - Ntoutsi, Eirini
AU - Rosenhahn, Bodo
N1 - Funding information: The work is supported by BIAS (Bias and Discrimination in Big Data and Algorithmic Processing. Philosophical Assessments, Legal Dimensions, and Technical Solutions), a project funded by the Volkswagen Foundation within the initiative "AI and the Society of the Future", for which the last authors are Principal Investigators.
PY - 2020/10/15
Y1 - 2020/10/15
N2 - In this paper, we propose FairNN a neural network that performs joint feature representation and classification for fairness-aware learning. Our approach optimizes a multi-objective loss function which (a) learns a fair representation by suppressing protected attributes (b) maintains the information content by minimizing the reconstruction loss and (c) allows for solving a classification task in a fair manner by minimizing the classification error and respecting the equalized odds-based fairness regularizer. Our experiments on a variety of datasets demonstrate that such a joint approach is superior to separate treatment of unfairness in representation learning or supervised learning. Additionally, our regularizers can be adaptively weighted to balance the different components of the loss function, thus allowing for a very general framework for conjoint fair representation learning and decision making.
AB - In this paper, we propose FairNN a neural network that performs joint feature representation and classification for fairness-aware learning. Our approach optimizes a multi-objective loss function which (a) learns a fair representation by suppressing protected attributes (b) maintains the information content by minimizing the reconstruction loss and (c) allows for solving a classification task in a fair manner by minimizing the classification error and respecting the equalized odds-based fairness regularizer. Our experiments on a variety of datasets demonstrate that such a joint approach is superior to separate treatment of unfairness in representation learning or supervised learning. Additionally, our regularizers can be adaptively weighted to balance the different components of the loss function, thus allowing for a very general framework for conjoint fair representation learning and decision making.
KW - cs.LG
KW - stat.ML
KW - Auto-encoders
KW - Neural networks
KW - Bias
KW - Fairness
UR - http://www.scopus.com/inward/record.url?scp=85094128085&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-61527-7_38
DO - 10.1007/978-3-030-61527-7_38
M3 - Conference contribution
SN - 9783030615260
T3 - Lecture Notes in Computer Science
SP - 581
EP - 595
BT - Discovery Science - 23rd International Conference, DS 2020, Proceedings
A2 - Appice, Annalisa
A2 - Tsoumakas, Grigorios
A2 - Manolopoulos, Yannis
A2 - Matwin, Stan
CY - Cham
ER -