Details
Original language | English
---|---
Title of host publication | Proceedings of the 2nd Workshop on Fairness and Bias in AI
Number of pages | 16
ISSN (electronic) | 1613-0073
Publication status | Published - 29 Oct 2024
Event | 2nd Workshop on Fairness and Bias in AI, AEQUITAS 2024 - Santiago de Compostela, Spain
Duration | 20 Oct 2024 → 20 Oct 2024
Publication series
Name | CEUR Workshop Proceedings |
---|---|
Volume | 3808 |
ISSN (Print) | 1613-0073 |
Abstract
Machine Learning (ML) systems can reproduce and often amplify undesired biases. This underscores the importance of practices that enable the study and understanding of the intrinsic characteristics of ML pipelines, and has prompted the emergence of documentation frameworks built on the idea that “any remedy for bias starts with awareness of its existence”. However, a resource that can formally describe these pipelines in terms of the biases detected is still missing. To fill this gap, we present the Doc-BiasO ontology, a resource that aims to create an integrated vocabulary of the biases defined in the fair-ML literature and their measures, and to incorporate relevant terminology and the relationships between these terms. Following ontology engineering best practices, we reuse existing vocabularies on machine learning and AI to foster knowledge sharing and interoperability among the actors concerned with its research, development, and regulation, among others. Overall, our main objective is to help clarify existing terminology on bias research as it rapidly expands to all areas of AI, and to improve the interpretation of bias in data and its downstream impact.
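The abstract describes an integrated vocabulary that links bias terms from the fair-ML literature to the measures that quantify them. As a rough, hypothetical sketch of that structure (the class names, terms, and relations below are illustrative assumptions, not the actual Doc-BiasO schema), such a vocabulary could be modeled as:

```python
from dataclasses import dataclass

# Hypothetical, simplified model of the kind of vocabulary the paper
# describes; the real Doc-BiasO classes and relations may differ.

@dataclass(frozen=True)
class Bias:
    name: str        # label of the bias term
    definition: str  # definition drawn from the fair-ML literature

@dataclass(frozen=True)
class BiasMeasure:
    name: str                   # label of the measure
    measures: tuple[str, ...]   # keys of the Bias terms it quantifies

# Illustrative entries only, not the ontology's actual content.
biases = {
    "sampling_bias": Bias(
        "sampling bias",
        "data collected in a way that under-represents parts of the population",
    ),
}

bias_measures = {
    "statistical_parity": BiasMeasure(
        "statistical parity difference", measures=("sampling_bias",)
    ),
}

def measures_for(bias_key: str) -> list[str]:
    """Return the names of all measures linked to a given bias term."""
    return [m.name for m in bias_measures.values() if bias_key in m.measures]

print(measures_for("sampling_bias"))
```

The actual resource is an ontology (the paper reuses existing machine learning and AI vocabularies), so terms and relations would live as formally defined classes and properties rather than in-memory dictionaries; the sketch only mirrors the bias-to-measure linkage the abstract describes.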
Keywords
- Bias, Machine Learning, Ontology, Trustworthy AI
ASJC Scopus subject areas
- Computer Science(all)
- General Computer Science
Cite this
Russo, M., & Vidal, M. E. (2024). Leveraging Ontologies to Document Bias in Data. In Proceedings of the 2nd Workshop on Fairness and Bias in AI. (CEUR Workshop Proceedings; Vol. 3808).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Leveraging Ontologies to Document Bias in Data
AU - Russo, Mayra
AU - Vidal, Maria Esther
N1 - Publisher Copyright: © 2024 Copyright for this paper by its authors.
PY - 2024/10/29
Y1 - 2024/10/29
N2 - Machine Learning (ML) systems can reproduce and often amplify undesired biases. This underscores the importance of practices that enable the study and understanding of the intrinsic characteristics of ML pipelines, and has prompted the emergence of documentation frameworks built on the idea that “any remedy for bias starts with awareness of its existence”. However, a resource that can formally describe these pipelines in terms of the biases detected is still missing. To fill this gap, we present the Doc-BiasO ontology, a resource that aims to create an integrated vocabulary of the biases defined in the fair-ML literature and their measures, and to incorporate relevant terminology and the relationships between these terms. Following ontology engineering best practices, we reuse existing vocabularies on machine learning and AI to foster knowledge sharing and interoperability among the actors concerned with its research, development, and regulation, among others. Overall, our main objective is to help clarify existing terminology on bias research as it rapidly expands to all areas of AI, and to improve the interpretation of bias in data and its downstream impact.
AB - Machine Learning (ML) systems can reproduce and often amplify undesired biases. This underscores the importance of practices that enable the study and understanding of the intrinsic characteristics of ML pipelines, and has prompted the emergence of documentation frameworks built on the idea that “any remedy for bias starts with awareness of its existence”. However, a resource that can formally describe these pipelines in terms of the biases detected is still missing. To fill this gap, we present the Doc-BiasO ontology, a resource that aims to create an integrated vocabulary of the biases defined in the fair-ML literature and their measures, and to incorporate relevant terminology and the relationships between these terms. Following ontology engineering best practices, we reuse existing vocabularies on machine learning and AI to foster knowledge sharing and interoperability among the actors concerned with its research, development, and regulation, among others. Overall, our main objective is to help clarify existing terminology on bias research as it rapidly expands to all areas of AI, and to improve the interpretation of bias in data and its downstream impact.
KW - Bias
KW - Machine Learning
KW - Ontology
KW - Trustworthy AI
UR - http://www.scopus.com/inward/record.url?scp=85210024328&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85210024328
T3 - CEUR Workshop Proceedings
BT - Proceedings of the 2nd Workshop on Fairness and Bias in AI
T2 - 2nd Workshop on Fairness and Bias in AI, AEQUITAS 2024
Y2 - 20 October 2024 through 20 October 2024
ER -