
Towards Interpretable Hybrid AI: Integrating Knowledge Graphs and Symbolic Reasoning in Medicine

Research output: Contribution to journal › Article › Research › Peer review

Authors

  • Yashrajsinh Chudasama
  • Hao Huang
  • Disha Purohit
  • Maria Esther Vidal

External Research Organisations

  • German National Library of Science and Technology (TIB)

Details

Original language: English
Pages (from-to): 39489-39509
Number of pages: 22
Journal: IEEE ACCESS
Volume: 13
Publication status: Published - 13 Jan 2025

Abstract

Knowledge Graphs (KGs) are data structures that enable the integration of heterogeneous data sources and support both knowledge representation and formal reasoning. In this paper, we introduce TrustKG, a KG-based framework designed to enhance the interpretability and reliability of hybrid AI systems in healthcare. Positioned within the context of lung cancer, TrustKG supports link prediction, which uncovers hidden relationships within medical data, and counterfactual prediction, which explores alternative scenarios to understand causal factors. These tasks are addressed through two specialized hybrid AI systems, VISE and HealthCareAI, which combine symbolic reasoning with inductive learning over KGs to provide interpretable AI solutions for clinical decision-making. Leveraging KGs to represent biomedical properties and relationships, and augmenting them with learned patterns through symbolic reasoning, our hybrid approach produces models that are both accurate and transparent. This interpretability is particularly important in medical applications, where trust and reliability in AI-driven predictions are paramount. Our empirical analysis demonstrates the effectiveness of VISE and HealthCareAI in improving the predictive accuracy and clarity of model outputs. By addressing challenges in link prediction - such as discovering previously unknown connections between medical entities - and in counterfactual prediction, TrustKG, with VISE and HealthCareAI, underscores the potential of integrating KGs with symbolic AI to create trustworthy, interpretable AI systems in healthcare. This paper contributes to the advancement of semantic AI, offering a pathway for robust and reliable AI solutions in clinical settings.

Keywords

    Counterfactual Prediction, Inductive Learning, Knowledge Graphs, Link Prediction, Symbolic Learning


Cite this

Towards Interpretable Hybrid AI: Integrating Knowledge Graphs and Symbolic Reasoning in Medicine. / Chudasama, Yashrajsinh; Huang, Hao; Purohit, Disha et al.
In: IEEE ACCESS, Vol. 13, 13.01.2025, pp. 39489-39509.


Chudasama Y, Huang H, Purohit D, Vidal ME. Towards Interpretable Hybrid AI: Integrating Knowledge Graphs and Symbolic Reasoning in Medicine. IEEE ACCESS. 2025 Jan 13;13:39489-39509. doi: 10.1109/ACCESS.2025.3529133
Chudasama, Yashrajsinh ; Huang, Hao ; Purohit, Disha et al. / Towards Interpretable Hybrid AI: Integrating Knowledge Graphs and Symbolic Reasoning in Medicine. In: IEEE ACCESS. 2025 ; Vol. 13. pp. 39489-39509.
@article{0b1b2ddb70354582bf48b6f63f371606,
title = "Towards Interpretable Hybrid AI: Integrating Knowledge Graphs and Symbolic Reasoning in Medicine",
abstract = "Knowledge Graphs (KGs) are data structures that enable the integration of heterogeneous data sources and support both knowledge representation and formal reasoning. In this paper, we introduce TrustKG, a KG-based framework designed to enhance the interpretability and reliability of hybrid AI systems in healthcare. Positioned within the context of lung cancer, TrustKG supports link prediction, which uncovers hidden relationships within medical data, and counterfactual prediction, which explores alternative scenarios to understand causal factors. These tasks are addressed through two specialized hybrid AI systems, VISE and HealthCareAI, which combine symbolic reasoning with inductive learning over KGs to provide interpretable AI solutions for clinical decision-making. Leveraging KGs to represent biomedical properties and relationships, and augmenting them with learned patterns through symbolic reasoning, our hybrid approach produces models that are both accurate and transparent. This interpretability is particularly important in medical applications, where trust and reliability in AI-driven predictions are paramount. Our empirical analysis demonstrates the effectiveness of VISE and HealthCareAI in improving the predictive accuracy and clarity of model outputs. By addressing challenges in link prediction - such as discovering previously unknown connections between medical entities - and in counterfactual prediction, TrustKG, with VISE and HealthCareAI, underscores the potential of integrating KGs with symbolic AI to create trustworthy, interpretable AI systems in healthcare. This paper contributes to the advancement of semantic AI, offering a pathway for robust and reliable AI solutions in clinical settings.",
keywords = "Counterfactual Prediction, Inductive Learning, Knowledge Graphs, Link Prediction, Symbolic Learning",
author = "Chudasama, Yashrajsinh and Huang, Hao and Purohit, Disha and Vidal, {Maria Esther}",
note = "Publisher Copyright: {\textcopyright} 2025 IEEE.",
year = "2025",
month = jan,
day = "13",
doi = "10.1109/ACCESS.2025.3529133",
language = "English",
volume = "13",
pages = "39489--39509",
journal = "IEEE ACCESS",
issn = "2169-3536",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}


TY - JOUR

T1 - Towards Interpretable Hybrid AI

T2 - Integrating Knowledge Graphs and Symbolic Reasoning in Medicine

AU - Chudasama, Yashrajsinh

AU - Huang, Hao

AU - Purohit, Disha

AU - Vidal, Maria Esther

N1 - Publisher Copyright: © 2025 IEEE.

PY - 2025/1/13

Y1 - 2025/1/13

N2 - Knowledge Graphs (KGs) are data structures that enable the integration of heterogeneous data sources and support both knowledge representation and formal reasoning. In this paper, we introduce TrustKG, a KG-based framework designed to enhance the interpretability and reliability of hybrid AI systems in healthcare. Positioned within the context of lung cancer, TrustKG supports link prediction, which uncovers hidden relationships within medical data, and counterfactual prediction, which explores alternative scenarios to understand causal factors. These tasks are addressed through two specialized hybrid AI systems, VISE and HealthCareAI, which combine symbolic reasoning with inductive learning over KGs to provide interpretable AI solutions for clinical decision-making. Leveraging KGs to represent biomedical properties and relationships, and augmenting them with learned patterns through symbolic reasoning, our hybrid approach produces models that are both accurate and transparent. This interpretability is particularly important in medical applications, where trust and reliability in AI-driven predictions are paramount. Our empirical analysis demonstrates the effectiveness of VISE and HealthCareAI in improving the predictive accuracy and clarity of model outputs. By addressing challenges in link prediction - such as discovering previously unknown connections between medical entities - and in counterfactual prediction, TrustKG, with VISE and HealthCareAI, underscores the potential of integrating KGs with symbolic AI to create trustworthy, interpretable AI systems in healthcare. This paper contributes to the advancement of semantic AI, offering a pathway for robust and reliable AI solutions in clinical settings.

AB - Knowledge Graphs (KGs) are data structures that enable the integration of heterogeneous data sources and support both knowledge representation and formal reasoning. In this paper, we introduce TrustKG, a KG-based framework designed to enhance the interpretability and reliability of hybrid AI systems in healthcare. Positioned within the context of lung cancer, TrustKG supports link prediction, which uncovers hidden relationships within medical data, and counterfactual prediction, which explores alternative scenarios to understand causal factors. These tasks are addressed through two specialized hybrid AI systems, VISE and HealthCareAI, which combine symbolic reasoning with inductive learning over KGs to provide interpretable AI solutions for clinical decision-making. Leveraging KGs to represent biomedical properties and relationships, and augmenting them with learned patterns through symbolic reasoning, our hybrid approach produces models that are both accurate and transparent. This interpretability is particularly important in medical applications, where trust and reliability in AI-driven predictions are paramount. Our empirical analysis demonstrates the effectiveness of VISE and HealthCareAI in improving the predictive accuracy and clarity of model outputs. By addressing challenges in link prediction - such as discovering previously unknown connections between medical entities - and in counterfactual prediction, TrustKG, with VISE and HealthCareAI, underscores the potential of integrating KGs with symbolic AI to create trustworthy, interpretable AI systems in healthcare. This paper contributes to the advancement of semantic AI, offering a pathway for robust and reliable AI solutions in clinical settings.

KW - Counterfactual Prediction

KW - Inductive Learning

KW - Knowledge Graphs

KW - Link Prediction

KW - Symbolic Learning

UR - http://www.scopus.com/inward/record.url?scp=85215256509&partnerID=8YFLogxK

U2 - 10.1109/ACCESS.2025.3529133

DO - 10.1109/ACCESS.2025.3529133

M3 - Article

AN - SCOPUS:85215256509

VL - 13

SP - 39489

EP - 39509

JO - IEEE ACCESS

JF - IEEE ACCESS

SN - 2169-3536

ER -