VISE: Validated and Invalidated Symbolic Explanations for Knowledge Graph Integrity

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Disha Purohit
  • Yashrajsinh Chudasama
  • Maria Torrente
  • Maria Esther Vidal

External organisations

  • Technische Informationsbibliothek (TIB) Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek
  • Hospital Universitario Puerta de Hierro de Majadahonda

Details

Original language: English
Title of host publication: Explainable Artificial Intelligence for the Medical Domain 2024
Subtitle: Proceedings of the First Workshop on Explainable Artificial Intelligence for the Medical Domain (EXPLIMED 2024)
Number of pages: 23
Publication status: Published - 14 Nov 2024
Event: 1st Workshop on Explainable Artificial Intelligence for the Medical Domain, EXPLIMED 2024 - Santiago de Compostela, Spain
Duration: 20 Oct 2024 - 20 Oct 2024

Publication series

Name: CEUR workshop proceedings
Publisher: CEUR-WS
Volume: 3831
ISSN (Print): 1613-0073

Abstract

Knowledge graphs (KGs) are naturally capable of capturing the convergence of data and knowledge, thereby making them highly expressive frameworks for describing and integrating heterogeneous data in a coherent and interconnected manner. However, based on the Open World Assumption (OWA), the absence of information within KGs does not indicate falsity or non-existence; it merely reflects incompleteness. The process of inductive learning over KGs involves predicting new relationships based on existing factual statements in the KG, utilizing either numerical or symbolic learning models. Recently, Knowledge Graph Embedding (KGE) and symbolic learning have received considerable attention in various downstream tasks, including Link Prediction (LP). LP techniques employ latent vector representations of entities and their relationships in KGs to infer missing links. Furthermore, as the quantity of data generated by KGs continues to increase, the necessity for additional quality assessment and validation efforts becomes more apparent. Nevertheless, state-of-the-art KG completion approaches fail to consider the quality constraints while generating predictions, resulting in the completion of KGs with erroneous relationships. The generation of accurate data and insights is of vital importance in the context of healthcare decision-making, including the processes of diagnosis, the formulation of treatment strategies, and the implementation of preventive actions. We propose a hybrid approach, VISE, which adopts the integration of symbolic learning, constraint validation, and numerical learning techniques. VISE leverages KGE to capture implicit knowledge and represent negation in KGs, thereby enhancing the predictive performance of numerical models. Our experimental results demonstrate the effectiveness of this hybrid strategy, which combines the strengths of symbolic, numerical, and constraint validation paradigms. VISE implementation is publicly accessible on GitHub (https://github.com/SDM-TIB/VISE).
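
The abstract outlines VISE as a hybrid pipeline: candidate links are produced by symbolic and numerical (KGE) learners and then checked against integrity constraints before being accepted into the KG. The authoritative implementation is the GitHub repository linked above; purely as an illustration of the constraint-validation step, the Python sketch below tentatively adds each predicted triple to the KG and keeps it only if the enriched graph still conforms to a SHACL shape. All namespaces, predicates, entities, and the shape itself are hypothetical, and rdflib/pySHACL stand in for whatever validation engine VISE actually uses.

# Minimal sketch (not the authors' implementation) of validating predicted links
# against SHACL constraints. Requires: pip install rdflib pyshacl
from rdflib import Graph, Namespace, RDF
from pyshacl import validate

EX = Namespace("http://example.org/")  # hypothetical namespace

# Hypothetical base KG: one patient with a recorded diagnosis.
kg = Graph()
kg.add((EX.patient1, RDF.type, EX.Patient))
kg.add((EX.patient1, EX.hasDiagnosis, EX.LungCancer))

# Hypothetical SHACL shape: any resource that receives a treatment
# must also have at least one diagnosis.
shapes = Graph().parse(format="turtle", data="""
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .

ex:TreatedPatientShape a sh:NodeShape ;
    sh:targetSubjectsOf ex:hasTreatment ;
    sh:property [
        sh:path ex:hasDiagnosis ;
        sh:minCount 1 ;
    ] .
""")

# Candidate links, e.g. produced by a KGE link-prediction model.
candidates = [
    (EX.patient1, EX.hasTreatment, EX.Chemotherapy),   # subject has a diagnosis
    (EX.patient2, EX.hasTreatment, EX.Immunotherapy),  # subject has none -> violation
]

validated, invalidated = [], []
for triple in candidates:
    enriched = Graph()
    enriched += kg            # copy the base KG
    enriched.add(triple)      # tentatively add the predicted link
    conforms, _, _ = validate(enriched, shacl_graph=shapes)
    (validated if conforms else invalidated).append(triple)

print("validated:  ", validated)
print("invalidated:", invalidated)

In this toy run the first candidate lands in the validated set and the second in the invalidated set. In the paper's framing, such invalidated statements are presumably not simply discarded: the abstract describes representing negation in the KG as a way to improve the predictive performance of the numerical models.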

Cite

VISE: Validated and Invalidated Symbolic Explanations for Knowledge Graph Integrity. / Purohit, Disha; Chudasama, Yashrajsinh; Torrente, Maria et al.
Explainable Artificial Intelligence for the Medical Domain 2024: Proceedings of the First Workshop on Explainable Artificial Intelligence for the Medical Domain (EXPLIMED 2024). 2024. (CEUR workshop proceedings; Vol. 3831).


Purohit, D, Chudasama, Y, Torrente, M & Vidal, ME 2024, VISE: Validated and Invalidated Symbolic Explanations for Knowledge Graph Integrity. in Explainable Artificial Intelligence for the Medical Domain 2024: Proceedings of the First Workshop on Explainable Artificial Intelligence for the Medical Domain (EXPLIMED 2024). CEUR workshop proceedings, vol. 3831, 1st Workshop on Explainable Artificial Intelligence for the Medical Domain, EXPLIMED 2024, Santiago de Compostela, Spain, 20 Oct. 2024. <https://ceur-ws.org/Vol-3831/paper5.pdf>
Purohit, D., Chudasama, Y., Torrente, M., & Vidal, M. E. (2024). VISE: Validated and Invalidated Symbolic Explanations for Knowledge Graph Integrity. In Explainable Artificial Intelligence for the Medical Domain 2024: Proceedings of the First Workshop on Explainable Artificial Intelligence for the Medical Domain (EXPLIMED 2024) (CEUR workshop proceedings; Vol. 3831). https://ceur-ws.org/Vol-3831/paper5.pdf
Purohit D, Chudasama Y, Torrente M, Vidal ME. VISE: Validated and Invalidated Symbolic Explanations for Knowledge Graph Integrity. in Explainable Artificial Intelligence for the Medical Domain 2024: Proceedings of the First Workshop on Explainable Artificial Intelligence for the Medical Domain (EXPLIMED 2024). 2024. (CEUR workshop proceedings).
Purohit, Disha ; Chudasama, Yashrajsinh ; Torrente, Maria et al. / VISE : Validated and Invalidated Symbolic Explanations for Knowledge Graph Integrity. Explainable Artificial Intelligence for the Medical Domain 2024: Proceedings of the First Workshop on Explainable Artificial Intelligence for the Medical Domain (EXPLIMED 2024). 2024. (CEUR workshop proceedings).
BibTeX
@inproceedings{5d6715d02e0544d2829c7c7ca317e2b8,
title = "VISE: Validated and Invalidated Symbolic Explanations for Knowledge Graph Integrity",
abstract = "Knowledge graphs (KGs) are naturally capable of capturing the convergence of data and knowledge, thereby making them highly expressive frameworks for describing and integrating heterogeneous data in a coherent and interconnected manner. However, based on the Open World Assumption (OWA), the absence of information within KGs does not indicate falsity or non-existence; it merely reflects incompleteness. The process of inductive learning over KGs involves predicting new relationships based on existing factual statements in the KG, utilizing either numerical or symbolic learning models. Recently, Knowledge Graph Embedding (KGE) and symbolic learning have received considerable attention in various downstream tasks, including Link Prediction (LP). LP techniques employ latent vector representations of entities and their relationships in KGs to infer missing links. Furthermore, as the quantity of data generated by KGs continues to increase, the necessity for additional quality assessment and validation efforts becomes more apparent. Nevertheless, state-of-the-art KG completion approaches fail to consider the quality constraints while generating predictions, resulting in the completion of KGs with erroneous relationships. The generation of accurate data and insights is of vital importance in the context of healthcare decision-making, including the processes of diagnosis, the formulation of treatment strategies, and the implementation of preventive actions. We propose a hybrid approach, VISE, which adopts the integration of symbolic learning, constraint validation, and numerical learning techniques. VISE leverages KGE to capture implicit knowledge and represent negation in KGs, thereby enhancing the predictive performance of numerical models. Our experimental results demonstrate the effectiveness of this hybrid strategy, which combines the strengths of symbolic, numerical, and constraint validation paradigms. VISE implementation is publicly accessible on GitHub (https://github.com/SDM-TIB/VISE).",
keywords = "Explainability, Knowledge Graphs, Numerical Learning, SHACL Constraints, Symbolic Learning",
author = "Disha Purohit and Yashrajsinh Chudasama and Maria Torrente and Vidal, {Maria Esther}",
note = "Publisher Copyright: {\textcopyright} 2024 Copyright for this paper by its authors.; 1st Workshop on Explainable Artificial Intelligence for the Medical Domain, EXPLIMED 2024 ; Conference date: 20-10-2024 Through 20-10-2024",
year = "2024",
month = nov,
day = "14",
language = "English",
series = "CEUR workshop proceedings",
publisher = "CEUR-WS",
booktitle = "Explainable Artificial Intelligence for the Medical Domain 2024",

}

RIS

TY - GEN

T1 - VISE

T2 - 1st Workshop on Explainable Artificial Intelligence for the Medical Domain, EXPLIMED 2024

AU - Purohit, Disha

AU - Chudasama, Yashrajsinh

AU - Torrente, Maria

AU - Vidal, Maria Esther

N1 - Publisher Copyright: © 2024 Copyright for this paper by its authors.

PY - 2024/11/14

Y1 - 2024/11/14

N2 - Knowledge graphs (KGs) are naturally capable of capturing the convergence of data and knowledge, thereby making them highly expressive frameworks for describing and integrating heterogeneous data in a coherent and interconnected manner. However, based on the Open World Assumption (OWA), the absence of information within KGs does not indicate falsity or non-existence; it merely reflects incompleteness. The process of inductive learning over KGs involves predicting new relationships based on existing factual statements in the KG, utilizing either numerical or symbolic learning models. Recently, Knowledge Graph Embedding (KGE) and symbolic learning have received considerable attention in various downstream tasks, including Link Prediction (LP). LP techniques employ latent vector representations of entities and their relationships in KGs to infer missing links. Furthermore, as the quantity of data generated by KGs continues to increase, the necessity for additional quality assessment and validation efforts becomes more apparent. Nevertheless, state-of-the-art KG completion approaches fail to consider the quality constraints while generating predictions, resulting in the completion of KGs with erroneous relationships. The generation of accurate data and insights is of vital importance in the context of healthcare decision-making, including the processes of diagnosis, the formulation of treatment strategies, and the implementation of preventive actions. We propose a hybrid approach, VISE, which adopts the integration of symbolic learning, constraint validation, and numerical learning techniques. VISE leverages KGE to capture implicit knowledge and represent negation in KGs, thereby enhancing the predictive performance of numerical models. Our experimental results demonstrate the effectiveness of this hybrid strategy, which combines the strengths of symbolic, numerical, and constraint validation paradigms. VISE implementation is publicly accessible on GitHub (https://github.com/SDM-TIB/VISE).

AB - Knowledge graphs (KGs) are naturally capable of capturing the convergence of data and knowledge, thereby making them highly expressive frameworks for describing and integrating heterogeneous data in a coherent and interconnected manner. However, based on the Open World Assumption (OWA), the absence of information within KGs does not indicate falsity or non-existence; it merely reflects incompleteness. The process of inductive learning over KGs involves predicting new relationships based on existing factual statements in the KG, utilizing either numerical or symbolic learning models. Recently, Knowledge Graph Embedding (KGE) and symbolic learning have received considerable attention in various downstream tasks, including Link Prediction (LP). LP techniques employ latent vector representations of entities and their relationships in KGs to infer missing links. Furthermore, as the quantity of data generated by KGs continues to increase, the necessity for additional quality assessment and validation efforts becomes more apparent. Nevertheless, state-of-the-art KG completion approaches fail to consider the quality constraints while generating predictions, resulting in the completion of KGs with erroneous relationships. The generation of accurate data and insights is of vital importance in the context of healthcare decision-making, including the processes of diagnosis, the formulation of treatment strategies, and the implementation of preventive actions. We propose a hybrid approach, VISE, which adopts the integration of symbolic learning, constraint validation, and numerical learning techniques. VISE leverages KGE to capture implicit knowledge and represent negation in KGs, thereby enhancing the predictive performance of numerical models. Our experimental results demonstrate the effectiveness of this hybrid strategy, which combines the strengths of symbolic, numerical, and constraint validation paradigms. VISE implementation is publicly accessible on GitHub (https://github.com/SDM-TIB/VISE).

KW - Explainability

KW - Knowledge Graphs

KW - Numerical Learning

KW - SHACL Constraints

KW - Symbolic Learning

UR - http://www.scopus.com/inward/record.url?scp=85210892776&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85210892776

T3 - CEUR workshop proceedings

BT - Explainable Artificial Intelligence for the Medical Domain 2024

Y2 - 20 October 2024 through 20 October 2024

ER -