Leveraging Ontologies to Document Bias in Data

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Mayra Russo
  • Maria Esther Vidal

External organisations

  • Technische Informationsbibliothek (TIB) Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek

Details

Original language: English
Title of host publication: Proceedings of the 2nd Workshop on Fairness and Bias in AI
Number of pages: 16
ISSN (electronic): 1613-0073
Publication status: Published - 29 Oct 2024
Event: 2nd Workshop on Fairness and Bias in AI, AEQUITAS 2024 - Santiago de Compostela, Spain
Duration: 20 Oct 2024 to 20 Oct 2024

Publication series

Name: CEUR Workshop Proceedings
Volume: 3808
ISSN: 1613-0073

Abstract

Machine Learning (ML) systems are capable of reproducing and often amplifying undesired biases. This puts emphasis on the importance of operating under practices that enable the study and understanding of the intrinsic characteristics of ML pipelines, prompting the emergence of documentation frameworks with the idea that “any remedy for bias starts with awareness of its existence”. However, a resource that can formally describe these pipelines in terms of the biases detected is still missing. To fill this gap, we present the Doc-BiasO ontology, a resource that aims to create an integrated vocabulary of biases defined in the fair-ML literature and their measures, as well as to incorporate relevant terminology and the relationships between them. Observing ontology engineering best practices, we re-use existing vocabulary on machine learning and AI to foster knowledge sharing and interoperability between the actors concerned with its research, development, and regulation, among others. Overall, our main objective is to contribute towards clarifying existing terminology on bias research as it rapidly expands to all areas of AI, and to improve the interpretation of bias in data and its downstream impact.
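To illustrate the kind of structure an ontology like Doc-BiasO provides, the sketch below models a tiny bias vocabulary as subject-predicate-object triples, the same shape an OWL/RDF ontology takes. Note that all class, property, and term names here are hypothetical examples for illustration only, not the actual Doc-BiasO schema.

```python
# Illustrative only: a minimal triple store linking bias types to the
# measures that quantify them. Term names are invented, not Doc-BiasO's.
triples = [
    ("SamplingBias",      "subClassOf", "Bias"),
    ("MeasurementBias",   "subClassOf", "Bias"),
    ("StatisticalParity", "subClassOf", "FairnessMeasure"),
    ("StatisticalParity", "quantifies", "SamplingBias"),
]

def related(term, predicate, graph):
    """Return every object linked to `term` via `predicate`."""
    return [o for s, p, o in graph if s == term and p == predicate]

def subclasses(term, graph):
    """Return direct subclasses of `term` (inverse subClassOf lookup)."""
    return [s for s, p, o in graph if p == "subClassOf" and o == term]

print(subclasses("Bias", triples))                       # kinds of bias
print(related("StatisticalParity", "quantifies", triples))  # what the measure quantifies
```

Even this toy graph shows the benefit the abstract describes: once biases, measures, and their relationships share one vocabulary, simple queries can answer questions such as "which measures quantify which biases".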

Cite this

Leveraging Ontologies to Document Bias in Data. / Russo, Mayra; Vidal, Maria Esther.
Proceedings of the 2nd Workshop on Fairness and Bias in AI. 2024. (CEUR Workshop Proceedings; Vol. 3808).

Russo, M & Vidal, ME 2024, Leveraging Ontologies to Document Bias in Data. in Proceedings of the 2nd Workshop on Fairness and Bias in AI. CEUR Workshop Proceedings, vol. 3808, 2nd Workshop on Fairness and Bias in AI, AEQUITAS 2024, Santiago de Compostela, Spain, 20 Oct. 2024. <https://ceur-ws.org/Vol-3808/paper5.pdf>
Russo, M., & Vidal, M. E. (2024). Leveraging Ontologies to Document Bias in Data. In Proceedings of the 2nd Workshop on Fairness and Bias in AI (CEUR Workshop Proceedings; Vol. 3808). https://ceur-ws.org/Vol-3808/paper5.pdf
Russo M, Vidal ME. Leveraging Ontologies to Document Bias in Data. In Proceedings of the 2nd Workshop on Fairness and Bias in AI. 2024. (CEUR Workshop Proceedings).
Russo, Mayra ; Vidal, Maria Esther. / Leveraging Ontologies to Document Bias in Data. Proceedings of the 2nd Workshop on Fairness and Bias in AI. 2024. (CEUR Workshop Proceedings).
BibTeX
@inproceedings{1c1a7badb7ab4dbdab50e0f1ec4ff504,
title = "Leveraging Ontologies to Document Bias in Data",
abstract = "Machine Learning (ML) systems are capable of reproducing and often amplifying undesired biases. This puts emphasis on the importance of operating under practices that enable the study and understanding of the intrinsic characteristics of ML pipelines, prompting the emergence of documentation frameworks with the idea that “any remedy for bias starts with awareness of its existence”. However, a resource that can formally describe these pipelines in terms of biases detected is still amiss. To fill this gap, we present the Doc-BiasO ontology, a resource that aims to create an integrated vocabulary of biases defined in the fair-ML literature and their measures, as well as to incorporate relevant terminology and the relationships between them. Overseeing ontology engineering best practices, we re-use existing vocabulary on machine learning and AI, to foster knowledge sharing and interoperability between the actors concerned with its research, development, regulation, among others. Overall, our main objective is to contribute towards clarifying existing terminology on bias research as it rapidly expands to all areas of AI and to improve the interpretation of bias in data and downstream impact.",
keywords = "Bias, Machine Learning, Ontology, Trustworthy AI",
author = "Mayra Russo and Vidal, {Maria Esther}",
note = "Publisher Copyright: {\textcopyright} 2024 Copyright for this paper by its authors.; 2nd Workshop on Fairness and Bias in AI, AEQUITAS 2024 ; Conference date: 20-10-2024 Through 20-10-2024",
year = "2024",
month = oct,
day = "29",
language = "English",
series = "CEUR Workshop Proceedings",
booktitle = "Proceedings of the 2nd Workshop on Fairness and Bias in AI",

}

RIS

TY - GEN

T1 - Leveraging Ontologies to Document Bias in Data

AU - Russo, Mayra

AU - Vidal, Maria Esther

N1 - Publisher Copyright: © 2024 Copyright for this paper by its authors.

PY - 2024/10/29

Y1 - 2024/10/29

N2 - Machine Learning (ML) systems are capable of reproducing and often amplifying undesired biases. This puts emphasis on the importance of operating under practices that enable the study and understanding of the intrinsic characteristics of ML pipelines, prompting the emergence of documentation frameworks with the idea that “any remedy for bias starts with awareness of its existence”. However, a resource that can formally describe these pipelines in terms of biases detected is still amiss. To fill this gap, we present the Doc-BiasO ontology, a resource that aims to create an integrated vocabulary of biases defined in the fair-ML literature and their measures, as well as to incorporate relevant terminology and the relationships between them. Overseeing ontology engineering best practices, we re-use existing vocabulary on machine learning and AI, to foster knowledge sharing and interoperability between the actors concerned with its research, development, regulation, among others. Overall, our main objective is to contribute towards clarifying existing terminology on bias research as it rapidly expands to all areas of AI and to improve the interpretation of bias in data and downstream impact.

AB - Machine Learning (ML) systems are capable of reproducing and often amplifying undesired biases. This puts emphasis on the importance of operating under practices that enable the study and understanding of the intrinsic characteristics of ML pipelines, prompting the emergence of documentation frameworks with the idea that “any remedy for bias starts with awareness of its existence”. However, a resource that can formally describe these pipelines in terms of biases detected is still amiss. To fill this gap, we present the Doc-BiasO ontology, a resource that aims to create an integrated vocabulary of biases defined in the fair-ML literature and their measures, as well as to incorporate relevant terminology and the relationships between them. Overseeing ontology engineering best practices, we re-use existing vocabulary on machine learning and AI, to foster knowledge sharing and interoperability between the actors concerned with its research, development, regulation, among others. Overall, our main objective is to contribute towards clarifying existing terminology on bias research as it rapidly expands to all areas of AI and to improve the interpretation of bias in data and downstream impact.

KW - Bias

KW - Machine Learning

KW - Ontology

KW - Trustworthy AI

UR - http://www.scopus.com/inward/record.url?scp=85210024328&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85210024328

T3 - CEUR Workshop Proceedings

BT - Proceedings of the 2nd Workshop on Fairness and Bias in AI

T2 - 2nd Workshop on Fairness and Bias in AI, AEQUITAS 2024

Y2 - 20 October 2024 through 20 October 2024

ER -