Measuring Fairness of Rankings under Noisy Sensitive Information

Publication: Contribution to book/report/anthology/conference proceedings › Paper in conference proceedings › Research › Peer reviewed

Authors

  • Azin Ghazimatin
  • Matthaus Kleindessner
  • Chris Russell
  • Ziawasch Abedjan
  • Jacek Golebiowski

External organizations

  • Spotify
  • Amazon.com, Inc.

Details

Original language: English
Title of host publication: Proceedings of 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022
Publisher: Association for Computing Machinery (ACM)
Pages: 2263-2279
Number of pages: 17
ISBN (electronic): 9781450393522
Publication status: Published - 21 June 2022
Event: 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022 - Virtual, Online, South Korea
Duration: 21 June 2022 - 24 June 2022

Publication series

Name: ACM International Conference Proceeding Series

Abstract

Metrics commonly used to assess group fairness in ranking require the knowledge of group membership labels (e.g., whether a job applicant is male or female). Obtaining accurate group membership labels, however, may be costly, operationally difficult, or even infeasible. Where it is not possible to obtain these labels, one common solution is to use proxy labels in their place, which are typically predicted by machine learning models. Proxy labels are susceptible to systematic biases, and using them for fairness estimation can thus lead to unreliable assessments. We investigate the problem of measuring group fairness in ranking for a suite of divergence-based metrics in the presence of proxy labels. We show that under certain assumptions, fairness of a ranking can reliably be measured from the proxy labels. We formalize two assumptions and provide a theoretical analysis for each showing how the true metric values can be derived from the estimates based on proxy labels. We prove that without such assumptions fairness assessment based on proxy labels is impossible. Through extensive experiments on both synthetic and real datasets, we demonstrate the effectiveness of our proposed methods for recovering reliable fairness assessments in rankings.
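For illustration only, a minimal sketch of the kind of divergence-based group fairness measure the abstract refers to, and of how noisy proxy labels can distort it. This is not the paper's actual metric or correction method; `ranking_fairness`, `kl_divergence`, and the example labels are hypothetical. The measure compares the group distribution in the top-k of a ranking against the overall group distribution via KL divergence.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two discrete distributions (smoothed to avoid log 0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def ranking_fairness(labels, k):
    """Divergence-based unfairness of a ranking: KL between the group
    distribution in the top-k positions and the overall group distribution.
    0 means the top-k mirrors the population; larger values mean more skew."""
    labels = np.asarray(labels)
    groups = np.unique(labels)
    top = np.array([(labels[:k] == g).mean() for g in groups])
    overall = np.array([(labels == g).mean() for g in groups])
    return kl_divergence(top, overall)

# Hypothetical ranking of 10 items; group labels listed in rank order.
true_labels  = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
# Proxy labels from a noisy classifier: two items are mislabeled.
proxy_labels = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]

print(ranking_fairness(true_labels, k=5))   # unfairness under true labels
print(ranking_fairness(proxy_labels, k=5))  # biased estimate from proxies
```

In this toy example the proxy labels understate the skew of the top-5, illustrating the paper's point that proxy-based fairness estimates can be unreliable without further assumptions.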


Cite

Measuring Fairness of Rankings under Noisy Sensitive Information. / Ghazimatin, Azin; Kleindessner, Matthaus; Russell, Chris et al.
Proceedings of 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022. Association for Computing Machinery (ACM), 2022. pp. 2263-2279 (ACM International Conference Proceeding Series).


Ghazimatin, A, Kleindessner, M, Russell, C, Abedjan, Z & Golebiowski, J 2022, Measuring Fairness of Rankings under Noisy Sensitive Information. in Proceedings of 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022. ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), pp. 2263-2279, 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022, Virtual, Online, South Korea, 21 June 2022. https://doi.org/10.1145/3531146.3534641
Ghazimatin, A., Kleindessner, M., Russell, C., Abedjan, Z., & Golebiowski, J. (2022). Measuring Fairness of Rankings under Noisy Sensitive Information. In Proceedings of 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022 (pp. 2263-2279). (ACM International Conference Proceeding Series). Association for Computing Machinery (ACM). https://doi.org/10.1145/3531146.3534641
Ghazimatin A, Kleindessner M, Russell C, Abedjan Z, Golebiowski J. Measuring Fairness of Rankings under Noisy Sensitive Information. In Proceedings of 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022. Association for Computing Machinery (ACM). 2022. pp. 2263-2279. (ACM International Conference Proceeding Series). doi: 10.1145/3531146.3534641
Ghazimatin, Azin ; Kleindessner, Matthaus ; Russell, Chris et al. / Measuring Fairness of Rankings under Noisy Sensitive Information. Proceedings of 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022. Association for Computing Machinery (ACM), 2022. pp. 2263-2279 (ACM International Conference Proceeding Series).
BibTeX
@inproceedings{3b865c0fadb24896acc0a8e509062004,
title = "Measuring Fairness of Rankings under Noisy Sensitive Information",
abstract = "Metrics commonly used to assess group fairness in ranking require the knowledge of group membership labels (e.g., whether a job applicant is male or female). Obtaining accurate group membership labels, however, may be costly, operationally difficult, or even infeasible. Where it is not possible to obtain these labels, one common solution is to use proxy labels in their place, which are typically predicted by machine learning models. Proxy labels are susceptible to systematic biases, and using them for fairness estimation can thus lead to unreliable assessments. We investigate the problem of measuring group fairness in ranking for a suite of divergence-based metrics in the presence of proxy labels. We show that under certain assumptions, fairness of a ranking can reliably be measured from the proxy labels. We formalize two assumptions and provide a theoretical analysis for each showing how the true metric values can be derived from the estimates based on proxy labels. We prove that without such assumptions fairness assessment based on proxy labels is impossible. Through extensive experiments on both synthetic and real datasets, we demonstrate the effectiveness of our proposed methods for recovering reliable fairness assessments in rankings.",
author = "Azin Ghazimatin and Matthaus Kleindessner and Chris Russell and Ziawasch Abedjan and Jacek Golebiowski",
year = "2022",
month = jun,
day = "21",
doi = "10.1145/3531146.3534641",
language = "English",
series = "ACM International Conference Proceeding Series",
publisher = "Association for Computing Machinery (ACM)",
pages = "2263--2279",
booktitle = "Proceedings of 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022",
address = "United States",
note = "5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022 ; Conference date: 21-06-2022 Through 24-06-2022",

}

RIS

TY - GEN

T1 - Measuring Fairness of Rankings under Noisy Sensitive Information

AU - Ghazimatin, Azin

AU - Kleindessner, Matthaus

AU - Russell, Chris

AU - Abedjan, Ziawasch

AU - Golebiowski, Jacek

PY - 2022/6/21

Y1 - 2022/6/21

N2 - Metrics commonly used to assess group fairness in ranking require the knowledge of group membership labels (e.g., whether a job applicant is male or female). Obtaining accurate group membership labels, however, may be costly, operationally difficult, or even infeasible. Where it is not possible to obtain these labels, one common solution is to use proxy labels in their place, which are typically predicted by machine learning models. Proxy labels are susceptible to systematic biases, and using them for fairness estimation can thus lead to unreliable assessments. We investigate the problem of measuring group fairness in ranking for a suite of divergence-based metrics in the presence of proxy labels. We show that under certain assumptions, fairness of a ranking can reliably be measured from the proxy labels. We formalize two assumptions and provide a theoretical analysis for each showing how the true metric values can be derived from the estimates based on proxy labels. We prove that without such assumptions fairness assessment based on proxy labels is impossible. Through extensive experiments on both synthetic and real datasets, we demonstrate the effectiveness of our proposed methods for recovering reliable fairness assessments in rankings.

AB - Metrics commonly used to assess group fairness in ranking require the knowledge of group membership labels (e.g., whether a job applicant is male or female). Obtaining accurate group membership labels, however, may be costly, operationally difficult, or even infeasible. Where it is not possible to obtain these labels, one common solution is to use proxy labels in their place, which are typically predicted by machine learning models. Proxy labels are susceptible to systematic biases, and using them for fairness estimation can thus lead to unreliable assessments. We investigate the problem of measuring group fairness in ranking for a suite of divergence-based metrics in the presence of proxy labels. We show that under certain assumptions, fairness of a ranking can reliably be measured from the proxy labels. We formalize two assumptions and provide a theoretical analysis for each showing how the true metric values can be derived from the estimates based on proxy labels. We prove that without such assumptions fairness assessment based on proxy labels is impossible. Through extensive experiments on both synthetic and real datasets, we demonstrate the effectiveness of our proposed methods for recovering reliable fairness assessments in rankings.

UR - http://www.scopus.com/inward/record.url?scp=85132995702&partnerID=8YFLogxK

U2 - 10.1145/3531146.3534641

DO - 10.1145/3531146.3534641

M3 - Conference contribution

AN - SCOPUS:85132995702

T3 - ACM International Conference Proceeding Series

SP - 2263

EP - 2279

BT - Proceedings of 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022

PB - Association for Computing Machinery (ACM)

T2 - 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022

Y2 - 21 June 2022 through 24 June 2022

ER -