Enhancing Interpretability of Machine Learning Models over Knowledge Graphs

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Yashrajsinh Chudasama
  • Disha Purohit
  • Philipp D. Rohde
  • Maria Esther Vidal

External Organizations

  • Technische Informationsbibliothek (TIB) Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek

Details

Original language: English
Title of host publication: SEMPDS 2023
Subtitle: Posters and Demos at SEMANTiCS 2023
Number of pages: 5
Publication status: Published - 2023
Event: 19th International Conference on Semantic Systems, SEMPDS 2023 - Leipzig, Germany
Duration: 20 Sept 2023 - 22 Sept 2023

Publication series

Name: CEUR Workshop Proceedings
Publisher: CEUR Workshop Proceedings
Volume: 3526
ISSN (Print): 1613-0073

Abstract

Artificial Intelligence (AI) plays a critical role in data-driven decision-making frameworks. However, the lack of transparency in some machine learning (ML) models hampers their trustworthiness, especially in domains like healthcare. This demonstration showcases the potential of Semantic Web technologies to enhance the interpretability of AI. By incorporating an interpretability layer, ML models can become more reliable, providing decision-makers with deeper insights into the model's decision-making process. InterpretME documents the execution of an ML pipeline as factual statements within the InterpretME knowledge graph (KG). Consequently, crucial metadata such as hyperparameters, decision trees, and local ML interpretations are presented in both human- and machine-readable formats, facilitating symbolic reasoning over a model's outcomes. Following the Linked Data principles, InterpretME establishes connections between entities in the InterpretME KG and their counterparts in existing KGs, thus enhancing the contextual information of the InterpretME KG entities.
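The abstract describes two ideas that can be illustrated concretely: recording ML-run metadata (e.g., hyperparameters) as factual statements in a KG, and linking a local entity to its counterpart in an existing KG per the Linked Data principles. The sketch below is illustrative only, not the actual InterpretME code; the namespace, property names, and entity URIs are hypothetical assumptions.

```python
# Illustrative sketch (NOT the InterpretME implementation): represent
# ML-run metadata and a cross-KG link as RDF-style triples, serialized
# as N-Triples. All URIs below are invented for demonstration.

EX = "http://example.org/interpretme/"            # hypothetical namespace
OWL_SAMEAS = "http://www.w3.org/2002/07/owl#sameAs"

run = EX + "run/1"
triples = [
    # Hyperparameters of one pipeline execution, recorded as facts.
    (run, EX + "hyperparameter/max_depth", "4"),
    (run, EX + "hyperparameter/min_samples_split", "2"),
    # Linked Data principle: connect a local entity to its counterpart
    # in an existing KG (target URI is a made-up example).
    (EX + "entity/Patient42", OWL_SAMEAS,
     "http://example.org/external-kg/Patient42"),
]

def to_ntriples(ts):
    """Serialize (subject, predicate, object) tuples as N-Triples lines.
    Objects starting with 'http' are treated as URIs, others as literals
    (a naive heuristic sufficient for this sketch)."""
    lines = []
    for s, p, o in ts:
        obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

print(to_ntriples(triples))
```

Publishing such statements in a standard RDF syntax is what makes the metadata machine-readable and amenable to symbolic reasoning, as the abstract claims.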

Cite

Enhancing Interpretability of Machine Learning Models over Knowledge Graphs. / Chudasama, Yashrajsinh; Purohit, Disha; Rohde, Philipp D. et al.
SEMPDS 2023: Posters and Demos at SEMANTiCS 2023. 2023. (CEUR Workshop Proceedings; Vol. 3526).


Chudasama, Y, Purohit, D, Rohde, PD & Vidal, ME 2023, Enhancing Interpretability of Machine Learning Models over Knowledge Graphs. in SEMPDS 2023: Posters and Demos at SEMANTiCS 2023. CEUR Workshop Proceedings, Vol. 3526, 19th International Conference on Semantic Systems, SEMPDS 2023, Leipzig, Germany, 20 Sept 2023. <https://ceur-ws.org/Vol-3526/paper-05.pdf>
Chudasama, Y., Purohit, D., Rohde, P. D., & Vidal, M. E. (2023). Enhancing Interpretability of Machine Learning Models over Knowledge Graphs. In SEMPDS 2023: Posters and Demos at SEMANTiCS 2023 (CEUR Workshop Proceedings; Vol. 3526). https://ceur-ws.org/Vol-3526/paper-05.pdf
Chudasama Y, Purohit D, Rohde PD, Vidal ME. Enhancing Interpretability of Machine Learning Models over Knowledge Graphs. in SEMPDS 2023: Posters and Demos at SEMANTiCS 2023. 2023. (CEUR Workshop Proceedings).
Chudasama, Yashrajsinh ; Purohit, Disha ; Rohde, Philipp D. et al. / Enhancing Interpretability of Machine Learning Models over Knowledge Graphs. SEMPDS 2023: Posters and Demos at SEMANTiCS 2023. 2023. (CEUR Workshop Proceedings).
BibTeX
@inproceedings{ed09f064fd59418984040c1798c4759f,
title = "Enhancing Interpretability of Machine Learning Models over Knowledge Graphs",
abstract = "Artificial Intelligence (AI) plays a critical role in data-driven decision-making frameworks. However, the lack of transparency in some machine learning (ML) models hampers their trustworthiness, especially in domains like healthcare. This demonstration aims to showcase the potential of Semantic Web technologies in enhancing the interpretability of AI. By incorporating an interpretability layer, ML models can become more reliable, providing decision-makers with deeper insights into the model{\textquoteright}s decision-making process. InterpretME effectively documents the execution of an ML pipeline using factual statements within the InterpretME knowledge graph (KG). Consequently, crucial metadata such as hyperparameters, decision trees, and local ML interpretations are presented in both human- and machine-readable formats, facilitating symbolic reasoning on a model{\textquoteright}s outcomes. Following the Linked Data principles, InterpretME establishes connections between entities in the InterpretME KG and their counterparts in existing KGs, thus, enhancing contextual information of the InterpretME KG entities.",
author = "Yashrajsinh Chudasama and Disha Purohit and Rohde, {Philipp D.} and Vidal, {Maria Esther}",
note = "Funding Information: †All authors contributed equally. This work has been supported by {"}Leibniz Best Minds: Programme for Women Professors{"}, project TrustKG-Transforming Data in Trustable Insights with grant P99/2020 and Federal Ministry for Economic Affairs and Energy of Germany (BMWK) in the project CoyPu (project number 01MK21007[A-L]). $ yashrajsinh.chudasama@tib.eu (Y. Chudasama); disha.purohit@tib.eu (D. Purohit); philipp.rohde@tib.eu (P. D. Rohde); maria.vidal@tib.eu (M. Vidal) 0000-0003-3422-366X (Y. Chudasama); 0000-0002-1442-335X (D. Purohit); 0000-0002-9835-4354 (P. D. Rohde); 0000-0003-1160-8727 (M. Vidal) ; 19th International Conference on Semantic Systems, SEMPDS 2023 ; Conference date: 20-09-2023 Through 22-09-2023",
year = "2023",
language = "English",
series = "CEUR Workshop Proceedings",
publisher = "CEUR Workshop Proceedings",
booktitle = "SEMPDS 2023",

}

RIS

TY - GEN

T1 - Enhancing Interpretability of Machine Learning Models over Knowledge Graphs

AU - Chudasama, Yashrajsinh

AU - Purohit, Disha

AU - Rohde, Philipp D.

AU - Vidal, Maria Esther

N1 - Funding Information: †All authors contributed equally. This work has been supported by "Leibniz Best Minds: Programme for Women Professors", project TrustKG-Transforming Data in Trustable Insights with grant P99/2020 and Federal Ministry for Economic Affairs and Energy of Germany (BMWK) in the project CoyPu (project number 01MK21007[A-L]). $ yashrajsinh.chudasama@tib.eu (Y. Chudasama); disha.purohit@tib.eu (D. Purohit); philipp.rohde@tib.eu (P. D. Rohde); maria.vidal@tib.eu (M. Vidal) 0000-0003-3422-366X (Y. Chudasama); 0000-0002-1442-335X (D. Purohit); 0000-0002-9835-4354 (P. D. Rohde); 0000-0003-1160-8727 (M. Vidal)

PY - 2023

Y1 - 2023

N2 - Artificial Intelligence (AI) plays a critical role in data-driven decision-making frameworks. However, the lack of transparency in some machine learning (ML) models hampers their trustworthiness, especially in domains like healthcare. This demonstration aims to showcase the potential of Semantic Web technologies in enhancing the interpretability of AI. By incorporating an interpretability layer, ML models can become more reliable, providing decision-makers with deeper insights into the model’s decision-making process. InterpretME effectively documents the execution of an ML pipeline using factual statements within the InterpretME knowledge graph (KG). Consequently, crucial metadata such as hyperparameters, decision trees, and local ML interpretations are presented in both human- and machine-readable formats, facilitating symbolic reasoning on a model’s outcomes. Following the Linked Data principles, InterpretME establishes connections between entities in the InterpretME KG and their counterparts in existing KGs, thus, enhancing contextual information of the InterpretME KG entities.

AB - Artificial Intelligence (AI) plays a critical role in data-driven decision-making frameworks. However, the lack of transparency in some machine learning (ML) models hampers their trustworthiness, especially in domains like healthcare. This demonstration aims to showcase the potential of Semantic Web technologies in enhancing the interpretability of AI. By incorporating an interpretability layer, ML models can become more reliable, providing decision-makers with deeper insights into the model’s decision-making process. InterpretME effectively documents the execution of an ML pipeline using factual statements within the InterpretME knowledge graph (KG). Consequently, crucial metadata such as hyperparameters, decision trees, and local ML interpretations are presented in both human- and machine-readable formats, facilitating symbolic reasoning on a model’s outcomes. Following the Linked Data principles, InterpretME establishes connections between entities in the InterpretME KG and their counterparts in existing KGs, thus, enhancing contextual information of the InterpretME KG entities.

UR - http://www.scopus.com/inward/record.url?scp=85176547279&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85176547279

T3 - CEUR Workshop Proceedings

BT - SEMPDS 2023

T2 - 19th International Conference on Semantic Systems, SEMPDS 2023

Y2 - 20 September 2023 through 22 September 2023

ER -