Details
Original language | English |
---|---|
Title of host publication | Proceedings of the 2nd Workshop on Fairness and Bias in AI |
Number of pages | 16 |
ISSN (electronic) | 1613-0073 |
Publication status | Published - 29 Oct 2024 |
Event | 2nd Workshop on Fairness and Bias in AI, AEQUITAS 2024 - Santiago de Compostela, Spain. Duration: 20 Oct 2024 → 20 Oct 2024 |
Publication series
Name | CEUR Workshop Proceedings |
---|---|
Volume | 3808 |
ISSN (Print) | 1613-0073 |
Abstract
Machine Learning (ML) systems can reproduce and often amplify undesired biases. This underscores the importance of operating under practices that enable the study and understanding of the intrinsic characteristics of ML pipelines, prompting the emergence of documentation frameworks built on the idea that “any remedy for bias starts with awareness of its existence”. However, a resource that can formally describe these pipelines in terms of the biases detected is still missing. To fill this gap, we present the Doc-BiasO ontology, a resource that aims to create an integrated vocabulary of the biases defined in the fair-ML literature and their measures, and to incorporate relevant terminology and the relationships between these terms. Following ontology engineering best practices, we reuse existing vocabularies on machine learning and AI to foster knowledge sharing and interoperability among the actors concerned with their research, development, and regulation, among others. Overall, our main objective is to help clarify existing terminology in bias research as it rapidly expands to all areas of AI, and to improve the interpretation of bias in data and its downstream impact.
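To make the idea of formally documenting detected biases concrete, the following is a minimal sketch of the kind of dataset–bias–measure links such an ontology could express, here as plain subject–predicate–object triples in Python. All term names (`dbo:SamplingBias`, `dbo:exhibitsBias`, `dbo:quantifiedBy`, etc.) are illustrative assumptions, not the actual Doc-BiasO vocabulary.

```python
# Hypothetical bias-documentation triples in the spirit of Doc-BiasO.
# Every "dbo:" / "ex:" term below is an illustrative assumption.
triples = [
    ("ex:census_sample", "rdf:type",         "dbo:Dataset"),
    ("ex:bias1",         "rdf:type",         "dbo:SamplingBias"),
    ("ex:census_sample", "dbo:exhibitsBias", "ex:bias1"),
    ("ex:bias1",         "dbo:quantifiedBy", "ex:measure1"),
    ("ex:measure1",      "dbo:hasValue",     "0.12"),
]

def biases_of(dataset):
    """Return the bias nodes linked to a dataset via dbo:exhibitsBias."""
    return [o for s, p, o in triples if s == dataset and p == "dbo:exhibitsBias"]

print(biases_of("ex:census_sample"))  # ['ex:bias1']
```

In an actual RDF setting these triples would live in a graph store and be queried with SPARQL; the point of the sketch is only the shape of the links between a dataset, a documented bias, and its quantifying measure.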
ASJC Scopus subject areas
- Computer Science (all)
- General Computer Science
Cite
Proceedings of the 2nd Workshop on Fairness and Bias in AI. 2024. (CEUR Workshop Proceedings; Band 3808).
Publication: Contribution to book/report/collection/conference proceedings › Conference contribution › Research › Peer review
TY - GEN
T1 - Leveraging Ontologies to Document Bias in Data
AU - Russo, Mayra
AU - Vidal, Maria Esther
N1 - Publisher Copyright: © 2024 Copyright for this paper by its authors.
PY - 2024/10/29
Y1 - 2024/10/29
AB - Machine Learning (ML) systems can reproduce and often amplify undesired biases. This underscores the importance of operating under practices that enable the study and understanding of the intrinsic characteristics of ML pipelines, prompting the emergence of documentation frameworks built on the idea that “any remedy for bias starts with awareness of its existence”. However, a resource that can formally describe these pipelines in terms of the biases detected is still missing. To fill this gap, we present the Doc-BiasO ontology, a resource that aims to create an integrated vocabulary of the biases defined in the fair-ML literature and their measures, and to incorporate relevant terminology and the relationships between these terms. Following ontology engineering best practices, we reuse existing vocabularies on machine learning and AI to foster knowledge sharing and interoperability among the actors concerned with their research, development, and regulation, among others. Overall, our main objective is to help clarify existing terminology in bias research as it rapidly expands to all areas of AI, and to improve the interpretation of bias in data and its downstream impact.
KW - Bias
KW - Machine Learning
KW - Ontology
KW - Trustworthy AI
UR - http://www.scopus.com/inward/record.url?scp=85210024328&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85210024328
T3 - CEUR Workshop Proceedings
BT - Proceedings of the 2nd Workshop on Fairness and Bias in AI
T2 - 2nd Workshop on Fairness and Bias in AI, AEQUITAS 2024
Y2 - 20 October 2024 through 20 October 2024
ER -