Details
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022 |
| Publisher | Association for Computing Machinery (ACM) |
| Pages | 2263-2279 |
| Number of pages | 17 |
| ISBN (electronic) | 9781450393522 |
| Publication status | Published - 21 Jun 2022 |
| Event | 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022 - Virtual, Online, South Korea. Duration: 21 Jun 2022 → 24 Jun 2022 |
Publication series

| Name | ACM International Conference Proceeding Series |
| --- | --- |
Abstract
Metrics commonly used to assess group fairness in ranking require the knowledge of group membership labels (e.g., whether a job applicant is male or female). Obtaining accurate group membership labels, however, may be costly, operationally difficult, or even infeasible. Where it is not possible to obtain these labels, one common solution is to use proxy labels in their place, which are typically predicted by machine learning models. Proxy labels are susceptible to systematic biases, and using them for fairness estimation can thus lead to unreliable assessments. We investigate the problem of measuring group fairness in ranking for a suite of divergence-based metrics in the presence of proxy labels. We show that under certain assumptions, fairness of a ranking can reliably be measured from the proxy labels. We formalize two assumptions and provide a theoretical analysis for each showing how the true metric values can be derived from the estimates based on proxy labels. We prove that without such assumptions fairness assessment based on proxy labels is impossible. Through extensive experiments on both synthetic and real datasets, we demonstrate the effectiveness of our proposed methods for recovering reliable fairness assessments in rankings.
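For orientation only, the correction idea in the abstract can be sketched in a few lines. The Python snippet below is a hypothetical illustration, not the authors' method or code: it uses one simple divergence-based metric (the average KL divergence between each top-k prefix's group share and the overall share, in the spirit of the metric family the abstract refers to) and assumes that the proxy classifier's true- and false-positive rates (`tpr`, `fpr` below) are known and do not depend on rank position. Under that assumed error model, an observed proportion q = p·tpr + (1-p)·fpr can be inverted to recover the true share p = (q - fpr) / (tpr - fpr). All names, the metric choice, and the error model are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch (not the authors' code): a divergence-based prefix
# metric for group fairness in ranking, measured (a) from true labels,
# (b) naively from noisy proxy labels, and (c) from proxy labels after a
# confusion-matrix-style correction with known, rank-independent error rates.
import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def prefix_divergence(labels, prefixes):
    """Average KL between each top-k group share and the overall share.

    Zero when every prefix mirrors the overall group proportion; larger
    values mean the group is over- or under-represented near the top.
    """
    overall = labels.mean()
    return float(np.mean([kl_bernoulli(labels[:k].mean(), overall)
                          for k in prefixes]))

def debias(q, tpr, fpr):
    """Invert q = p*tpr + (1-p)*fpr to recover the true share p."""
    return np.clip((q - fpr) / (tpr - fpr), 0.0, 1.0)

def corrected_prefix_divergence(proxy, prefixes, tpr, fpr):
    """Same metric, but de-biasing every observed proportion first."""
    overall = debias(proxy.mean(), tpr, fpr)
    return float(np.mean([kl_bernoulli(debias(proxy[:k].mean(), tpr, fpr), overall)
                          for k in prefixes]))

rng = np.random.default_rng(0)
n = 5000
# An unfair synthetic ranking: the protected group's share falls from
# 0.5 at the top to 0.2 at the bottom.
p_at_rank = 0.5 - 0.3 * np.arange(n) / n
true = rng.binomial(1, p_at_rank)

# Noisy proxy labels with known, rank-independent error rates.
tpr, fpr = 0.85, 0.10
proxy = np.where(true == 1, rng.binomial(1, tpr, n), rng.binomial(1, fpr, n))

prefixes = range(100, n + 1, 100)
print("true labels:           ", prefix_divergence(true, prefixes))
print("naive on proxy labels: ", prefix_divergence(proxy, prefixes))
print("corrected from proxies:", corrected_prefix_divergence(proxy, prefixes, tpr, fpr))
```

On this synthetic ranking the naive proxy-based estimate understates the unfairness, since the label noise compresses every prefix's group share toward the base rate, while the corrected estimate comes close to the value computed from the true labels, consistent with the abstract's claim that reliable measurement is possible once such assumptions hold.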
ASJC Scopus subject areas

- Computer Science (all)
  - Software
- Computer Science (all)
  - Human-Computer Interaction
- Computer Science (all)
  - Computer Vision and Pattern Recognition
- Computer Science (all)
  - Computer Networks and Communications
Cite this
Standard
Ghazimatin, A, Kleindessner, M, Russell, C, Abedjan, Z & Golebiowski, J 2022, Measuring Fairness of Rankings under Noisy Sensitive Information. in Proceedings of 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022. Association for Computing Machinery (ACM), pp. 2263-2279 (ACM International Conference Proceeding Series). https://doi.org/10.1145/3531146.3534641
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review

RIS
TY - GEN
T1 - Measuring Fairness of Rankings under Noisy Sensitive Information
AU - Ghazimatin, Azin
AU - Kleindessner, Matthäus
AU - Russell, Chris
AU - Abedjan, Ziawasch
AU - Golebiowski, Jacek
PY - 2022/6/21
Y1 - 2022/6/21
N2 - Metrics commonly used to assess group fairness in ranking require the knowledge of group membership labels (e.g., whether a job applicant is male or female). Obtaining accurate group membership labels, however, may be costly, operationally difficult, or even infeasible. Where it is not possible to obtain these labels, one common solution is to use proxy labels in their place, which are typically predicted by machine learning models. Proxy labels are susceptible to systematic biases, and using them for fairness estimation can thus lead to unreliable assessments. We investigate the problem of measuring group fairness in ranking for a suite of divergence-based metrics in the presence of proxy labels. We show that under certain assumptions, fairness of a ranking can reliably be measured from the proxy labels. We formalize two assumptions and provide a theoretical analysis for each showing how the true metric values can be derived from the estimates based on proxy labels. We prove that without such assumptions fairness assessment based on proxy labels is impossible. Through extensive experiments on both synthetic and real datasets, we demonstrate the effectiveness of our proposed methods for recovering reliable fairness assessments in rankings.
AB - Metrics commonly used to assess group fairness in ranking require the knowledge of group membership labels (e.g., whether a job applicant is male or female). Obtaining accurate group membership labels, however, may be costly, operationally difficult, or even infeasible. Where it is not possible to obtain these labels, one common solution is to use proxy labels in their place, which are typically predicted by machine learning models. Proxy labels are susceptible to systematic biases, and using them for fairness estimation can thus lead to unreliable assessments. We investigate the problem of measuring group fairness in ranking for a suite of divergence-based metrics in the presence of proxy labels. We show that under certain assumptions, fairness of a ranking can reliably be measured from the proxy labels. We formalize two assumptions and provide a theoretical analysis for each showing how the true metric values can be derived from the estimates based on proxy labels. We prove that without such assumptions fairness assessment based on proxy labels is impossible. Through extensive experiments on both synthetic and real datasets, we demonstrate the effectiveness of our proposed methods for recovering reliable fairness assessments in rankings.
UR - http://www.scopus.com/inward/record.url?scp=85132995702&partnerID=8YFLogxK
U2 - 10.1145/3531146.3534641
DO - 10.1145/3531146.3534641
M3 - Conference contribution
AN - SCOPUS:85132995702
T3 - ACM International Conference Proceeding Series
SP - 2263
EP - 2279
BT - Proceedings of 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022
PB - Association for Computing Machinery (ACM)
T2 - 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022
Y2 - 21 June 2022 through 24 June 2022
ER -