Details
Original language | English |
---|---|
Title of host publication | SEMPDS 2023 |
Subtitle of host publication | Posters and Demos at SEMANTiCS 2023 |
Number of pages | 5 |
Publication status | Published - 2023 |
Event | 19th International Conference on Semantic Systems, SEMPDS 2023 - Leipzig, Germany Duration: 20 Sept 2023 → 22 Sept 2023 |
Publication series
Name | CEUR Workshop Proceedings |
---|---|
Publisher | CEUR Workshop Proceedings |
Volume | 3526 |
ISSN (Print) | 1613-0073 |
Abstract
Artificial Intelligence (AI) plays a critical role in data-driven decision-making frameworks. However, the lack of transparency in some machine learning (ML) models hampers their trustworthiness, especially in domains like healthcare. This demonstration aims to showcase the potential of Semantic Web technologies in enhancing the interpretability of AI. By incorporating an interpretability layer, ML models can become more reliable, providing decision-makers with deeper insights into the model’s decision-making process. InterpretME effectively documents the execution of an ML pipeline using factual statements within the InterpretME knowledge graph (KG). Consequently, crucial metadata such as hyperparameters, decision trees, and local ML interpretations are presented in both human- and machine-readable formats, facilitating symbolic reasoning on a model’s outcomes. Following the Linked Data principles, InterpretME establishes connections between entities in the InterpretME KG and their counterparts in existing KGs, thus enhancing the contextual information of the InterpretME KG entities.
Cite this
- Standard
- RIS
SEMPDS 2023: Posters and Demos at SEMANTiCS 2023. 2023. (CEUR Workshop Proceedings; Vol. 3526).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Enhancing Interpretability of Machine Learning Models over Knowledge Graphs
AU - Chudasama, Yashrajsinh
AU - Purohit, Disha
AU - Rohde, Philipp D.
AU - Vidal, Maria Esther
N1 - Funding Information: †All authors contributed equally. This work has been supported by "Leibniz Best Minds: Programme for Women Professors", project TrustKG-Transforming Data in Trustable Insights with grant P99/2020 and Federal Ministry for Economic Affairs and Energy of Germany (BMWK) in the project CoyPu (project number 01MK21007[A-L]). yashrajsinh.chudasama@tib.eu (Y. Chudasama); disha.purohit@tib.eu (D. Purohit); philipp.rohde@tib.eu (P. D. Rohde); maria.vidal@tib.eu (M. Vidal). ORCID: 0000-0003-3422-366X (Y. Chudasama); 0000-0002-1442-335X (D. Purohit); 0000-0002-9835-4354 (P. D. Rohde); 0000-0003-1160-8727 (M. Vidal)
PY - 2023
Y1 - 2023
N2 - Artificial Intelligence (AI) plays a critical role in data-driven decision-making frameworks. However, the lack of transparency in some machine learning (ML) models hampers their trustworthiness, especially in domains like healthcare. This demonstration aims to showcase the potential of Semantic Web technologies in enhancing the interpretability of AI. By incorporating an interpretability layer, ML models can become more reliable, providing decision-makers with deeper insights into the model’s decision-making process. InterpretME effectively documents the execution of an ML pipeline using factual statements within the InterpretME knowledge graph (KG). Consequently, crucial metadata such as hyperparameters, decision trees, and local ML interpretations are presented in both human- and machine-readable formats, facilitating symbolic reasoning on a model’s outcomes. Following the Linked Data principles, InterpretME establishes connections between entities in the InterpretME KG and their counterparts in existing KGs, thus, enhancing contextual information of the InterpretME KG entities.
AB - Artificial Intelligence (AI) plays a critical role in data-driven decision-making frameworks. However, the lack of transparency in some machine learning (ML) models hampers their trustworthiness, especially in domains like healthcare. This demonstration aims to showcase the potential of Semantic Web technologies in enhancing the interpretability of AI. By incorporating an interpretability layer, ML models can become more reliable, providing decision-makers with deeper insights into the model’s decision-making process. InterpretME effectively documents the execution of an ML pipeline using factual statements within the InterpretME knowledge graph (KG). Consequently, crucial metadata such as hyperparameters, decision trees, and local ML interpretations are presented in both human- and machine-readable formats, facilitating symbolic reasoning on a model’s outcomes. Following the Linked Data principles, InterpretME establishes connections between entities in the InterpretME KG and their counterparts in existing KGs, thus, enhancing contextual information of the InterpretME KG entities.
UR - http://www.scopus.com/inward/record.url?scp=85176547279&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85176547279
T3 - CEUR Workshop Proceedings
BT - SEMPDS 2023
T2 - 19th International Conference on Semantic Systems, SEMPDS 2023
Y2 - 20 September 2023 through 22 September 2023
ER -