Enhancing Interpretability of Machine Learning Models over Knowledge Graphs

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Yashrajsinh Chudasama
  • Disha Purohit
  • Philipp D. Rohde
  • Maria Esther Vidal

External Research Organisations

  • German National Library of Science and Technology (TIB)

Details

Original language: English
Title of host publication: SEMPDS 2023
Subtitle of host publication: Posters and Demos at SEMANTiCS 2023
Number of pages: 5
Publication status: Published - 2023
Event: 19th International Conference on Semantic Systems, SEMPDS 2023 - Leipzig, Germany
Duration: 20 Sept 2023 - 22 Sept 2023

Publication series

Name: CEUR Workshop Proceedings
Publisher: CEUR Workshop Proceedings
Volume: 3526
ISSN (Print): 1613-0073

Abstract

Artificial Intelligence (AI) plays a critical role in data-driven decision-making frameworks. However, the lack of transparency in some machine learning (ML) models hampers their trustworthiness, especially in domains like healthcare. This demonstration aims to showcase the potential of Semantic Web technologies in enhancing the interpretability of AI. By incorporating an interpretability layer, ML models can become more reliable, providing decision-makers with deeper insights into the model’s decision-making process. InterpretME effectively documents the execution of an ML pipeline using factual statements within the InterpretME knowledge graph (KG). Consequently, crucial metadata such as hyperparameters, decision trees, and local ML interpretations are presented in both human- and machine-readable formats, facilitating symbolic reasoning on a model’s outcomes. Following the Linked Data principles, InterpretME establishes connections between entities in the InterpretME KG and their counterparts in existing KGs, thus, enhancing contextual information of the InterpretME KG entities.
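The abstract describes InterpretME documenting an ML pipeline run as factual statements in a knowledge graph. As a minimal illustrative sketch only, the snippet below serializes a model run's hyperparameters as RDF-style N-Triples; the namespace and predicate names are hypothetical assumptions, not InterpretME's actual vocabulary.

```python
# Hedged sketch: recording an ML run's hyperparameters as RDF-style
# factual statements (N-Triples). The namespace below is a made-up
# placeholder, NOT InterpretME's real ontology.
EX = "http://example.org/interpretme/"  # hypothetical namespace

def run_to_ntriples(run_id, hyperparameters):
    """Emit one N-Triples line per hyperparameter of a model run."""
    subject = f"<{EX}run/{run_id}>"
    triples = [f'{subject} <{EX}hasRunId> "{run_id}" .']
    for name, value in hyperparameters.items():
        # Each hyperparameter becomes one factual statement about the run.
        triples.append(f'{subject} <{EX}hyperparameter/{name}> "{value}" .')
    return "\n".join(triples)

print(run_to_ntriples("run42", {"max_depth": 4, "criterion": "gini"}))
```

Statements of this shape are machine-readable, so a reasoner or SPARQL engine can later query and compare runs, which is the kind of symbolic reasoning over model outcomes the abstract alludes to.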

Cite this

Enhancing Interpretability of Machine Learning Models over Knowledge Graphs. / Chudasama, Yashrajsinh; Purohit, Disha; Rohde, Philipp D. et al.
SEMPDS 2023: Posters and Demos at SEMANTiCS 2023. 2023. (CEUR Workshop Proceedings; Vol. 3526).


Chudasama, Y, Purohit, D, Rohde, PD & Vidal, ME 2023, Enhancing Interpretability of Machine Learning Models over Knowledge Graphs. in SEMPDS 2023: Posters and Demos at SEMANTiCS 2023. CEUR Workshop Proceedings, vol. 3526, 19th International Conference on Semantic Systems, SEMPDS 2023, Leipzig, Germany, 20 Sept 2023. <https://ceur-ws.org/Vol-3526/paper-05.pdf>
Chudasama, Y., Purohit, D., Rohde, P. D., & Vidal, M. E. (2023). Enhancing Interpretability of Machine Learning Models over Knowledge Graphs. In SEMPDS 2023: Posters and Demos at SEMANTiCS 2023 (CEUR Workshop Proceedings; Vol. 3526). https://ceur-ws.org/Vol-3526/paper-05.pdf
Chudasama Y, Purohit D, Rohde PD, Vidal ME. Enhancing Interpretability of Machine Learning Models over Knowledge Graphs. In SEMPDS 2023: Posters and Demos at SEMANTiCS 2023. 2023. (CEUR Workshop Proceedings).
Chudasama, Yashrajsinh ; Purohit, Disha ; Rohde, Philipp D. et al. / Enhancing Interpretability of Machine Learning Models over Knowledge Graphs. SEMPDS 2023: Posters and Demos at SEMANTiCS 2023. 2023. (CEUR Workshop Proceedings).
Download (BibTeX)
@inproceedings{ed09f064fd59418984040c1798c4759f,
title = "Enhancing Interpretability of Machine Learning Models over Knowledge Graphs",
abstract = "Artificial Intelligence (AI) plays a critical role in data-driven decision-making frameworks. However, the lack of transparency in some machine learning (ML) models hampers their trustworthiness, especially in domains like healthcare. This demonstration aims to showcase the potential of Semantic Web technologies in enhancing the interpretability of AI. By incorporating an interpretability layer, ML models can become more reliable, providing decision-makers with deeper insights into the model{\textquoteright}s decision-making process. InterpretME effectively documents the execution of an ML pipeline using factual statements within the InterpretME knowledge graph (KG). Consequently, crucial metadata such as hyperparameters, decision trees, and local ML interpretations are presented in both human- and machine-readable formats, facilitating symbolic reasoning on a model{\textquoteright}s outcomes. Following the Linked Data principles, InterpretME establishes connections between entities in the InterpretME KG and their counterparts in existing KGs, thus, enhancing contextual information of the InterpretME KG entities.",
author = "Yashrajsinh Chudasama and Disha Purohit and Rohde, {Philipp D.} and Vidal, {Maria Esther}",
note = "Funding Information: †All authors contributed equally. This work has been supported by {"}Leibniz Best Minds: Programme for Women Professors{"}, project TrustKG-Transforming Data in Trustable Insights with grant P99/2020 and Federal Ministry for Economic Affairs and Energy of Germany (BMWK) in the project CoyPu (project number 01MK21007[A-L]). $ yashrajsinh.chudasama@tib.eu (Y. Chudasama); disha.purohit@tib.eu (D. Purohit); philipp.rohde@tib.eu (P. D. Rohde); maria.vidal@tib.eu (M. Vidal) 0000-0003-3422-366X (Y. Chudasama); 0000-0002-1442-335X (D. Purohit); 0000-0002-9835-4354 (P. D. Rohde); 0000-0003-1160-8727 (M. Vidal) ; 19th International Conference on Semantic Systems, SEMPDS 2023 ; Conference date: 20-09-2023 Through 22-09-2023",
year = "2023",
language = "English",
series = "CEUR Workshop Proceedings",
volume = "3526",
publisher = "CEUR Workshop Proceedings",
url = "https://ceur-ws.org/Vol-3526/paper-05.pdf",
booktitle = "SEMPDS 2023",

}

Download (RIS)

TY - GEN

T1 - Enhancing Interpretability of Machine Learning Models over Knowledge Graphs

AU - Chudasama, Yashrajsinh

AU - Purohit, Disha

AU - Rohde, Philipp D.

AU - Vidal, Maria Esther

N1 - Funding Information: †All authors contributed equally. This work has been supported by "Leibniz Best Minds: Programme for Women Professors", project TrustKG-Transforming Data in Trustable Insights with grant P99/2020 and Federal Ministry for Economic Affairs and Energy of Germany (BMWK) in the project CoyPu (project number 01MK21007[A-L]). $ yashrajsinh.chudasama@tib.eu (Y. Chudasama); disha.purohit@tib.eu (D. Purohit); philipp.rohde@tib.eu (P. D. Rohde); maria.vidal@tib.eu (M. Vidal) 0000-0003-3422-366X (Y. Chudasama); 0000-0002-1442-335X (D. Purohit); 0000-0002-9835-4354 (P. D. Rohde); 0000-0003-1160-8727 (M. Vidal)

PY - 2023

Y1 - 2023

N2 - Artificial Intelligence (AI) plays a critical role in data-driven decision-making frameworks. However, the lack of transparency in some machine learning (ML) models hampers their trustworthiness, especially in domains like healthcare. This demonstration aims to showcase the potential of Semantic Web technologies in enhancing the interpretability of AI. By incorporating an interpretability layer, ML models can become more reliable, providing decision-makers with deeper insights into the model’s decision-making process. InterpretME effectively documents the execution of an ML pipeline using factual statements within the InterpretME knowledge graph (KG). Consequently, crucial metadata such as hyperparameters, decision trees, and local ML interpretations are presented in both human- and machine-readable formats, facilitating symbolic reasoning on a model’s outcomes. Following the Linked Data principles, InterpretME establishes connections between entities in the InterpretME KG and their counterparts in existing KGs, thus, enhancing contextual information of the InterpretME KG entities.

AB - Artificial Intelligence (AI) plays a critical role in data-driven decision-making frameworks. However, the lack of transparency in some machine learning (ML) models hampers their trustworthiness, especially in domains like healthcare. This demonstration aims to showcase the potential of Semantic Web technologies in enhancing the interpretability of AI. By incorporating an interpretability layer, ML models can become more reliable, providing decision-makers with deeper insights into the model’s decision-making process. InterpretME effectively documents the execution of an ML pipeline using factual statements within the InterpretME knowledge graph (KG). Consequently, crucial metadata such as hyperparameters, decision trees, and local ML interpretations are presented in both human- and machine-readable formats, facilitating symbolic reasoning on a model’s outcomes. Following the Linked Data principles, InterpretME establishes connections between entities in the InterpretME KG and their counterparts in existing KGs, thus, enhancing contextual information of the InterpretME KG entities.

UR - http://www.scopus.com/inward/record.url?scp=85176547279&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85176547279

T3 - CEUR Workshop Proceedings

BT - SEMPDS 2023

T2 - 19th International Conference on Semantic Systems, SEMPDS 2023

Y2 - 20 September 2023 through 22 September 2023

ER -