Self-Explaining Neural Networks for Respiratory Sound Classification with Scale-free Interpretability

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Zhao Ren
  • Thanh Tam Nguyen
  • Mohammad Mehdi Zahed
  • Wolfgang Nejdl

Research Organisations

External Research Organisations

  • Griffith University Queensland

Details

Original language: English
Title of host publication: 2023 International Joint Conference on Neural Networks (IJCNN)
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (electronic): 9781665488679
ISBN (print): 978-1-6654-8868-6
Publication status: Published - 2023
Event: International Joint Conference on Neural Networks, IJCNN 2023 - Gold Coast, Australia
Duration: 18 Jun 2023 - 23 Jun 2023

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
Volume: 2023-June
ISSN (print): 2161-4393
ISSN (electronic): 2161-4407

Abstract

Analysis of respiratory sounds is an area where deep neural networks (DNNs) may benefit clinicians and patients for diagnostic purposes due to their classification power. However, explaining the predictions made by DNNs remains a challenge. Currently, most explanation methods focus on post-hoc explanations, where a separate explanatory model is used to explain a trained DNN. Due to the complex nature of the respiratory sound classification pipeline, which involves signal processing such as frequency analysis and wavelet analysis, post-hoc methods cannot uncover the underlying inference process of DNNs, highlighting the importance of designing DNNs with intrinsic interpretability. In this paper, we propose a self-explaining DNN for respiratory sound classification based on prototype learning. Our model explains its behavior by generating sample prototypes and attaching these prototypes to a layer inside the neural network. Furthermore, we design a scale-free interpretability mechanism, in which the model reaches its final decision by dissecting the input and looking for similarities between several parts of the input and the prototypes. Experimental findings on the largest public respiratory sound database demonstrate that our method achieves comparable, and sometimes better, performance than its non-interpretable counterparts while offering state-of-the-art interpretability. The code will be released upon acceptance.
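
As a rough illustration of the mechanism sketched in the abstract, the following Python/PyTorch snippet shows a generic prototype-similarity head in the spirit of prototype learning: patch embeddings of a respiratory-sound spectrogram are compared against learned prototype vectors, and the per-prototype best matches feed a linear classifier. This is a minimal sketch, not the authors' released implementation; all names and sizes (PrototypeHead, embed_dim, n_prototypes, n_classes) are illustrative assumptions.

# Minimal sketch (not the authors' released code) of a prototype-similarity head:
# patch embeddings are compared with learned prototypes, and the per-prototype
# best matches drive the class decision. All sizes and names are assumptions.
import torch
import torch.nn as nn

class PrototypeHead(nn.Module):
    def __init__(self, embed_dim: int = 128, n_prototypes: int = 10, n_classes: int = 4):
        super().__init__()
        # Learnable prototypes living in the same space as the patch embeddings.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, embed_dim))
        # Linear readout from prototype similarities to class logits.
        self.classifier = nn.Linear(n_prototypes, n_classes)

    def forward(self, patches: torch.Tensor):
        # patches: (batch, n_patches, embed_dim), e.g. time-frequency parts of
        # one recording produced by an upstream encoder.
        protos = self.prototypes.unsqueeze(0).expand(patches.size(0), -1, -1)
        dists = torch.cdist(patches, protos) ** 2         # (batch, n_patches, n_prototypes)
        sims = torch.log((dists + 1.0) / (dists + 1e-4))  # distance -> similarity
        # Keep, per prototype, the best-matching patch regardless of where in the
        # input it occurs; these scores serve as human-readable evidence.
        best_sims, _ = sims.max(dim=1)                    # (batch, n_prototypes)
        return self.classifier(best_sims), best_sims

# Usage: 8 recordings, 20 patches each, 128-dimensional embeddings.
logits, evidence = PrototypeHead()(torch.randn(8, 20, 128))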

Keywords

    prototype learning, respiratory sound classification, scale-free interpretability, self-explaining neural networks

ASJC Scopus subject areas

Cite this

Self-Explaining Neural Networks for Respiratory Sound Classification with Scale-free Interpretability. / Ren, Zhao; Nguyen, Thanh Tam; Zahed, Mohammad Mehdi et al.
2023 International Joint Conference on Neural Networks (IJCNN). Institute of Electrical and Electronics Engineers Inc., 2023. (Proceedings of the International Joint Conference on Neural Networks; Vol. 2023-June).

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Ren, Z, Nguyen, TT, Zahed, MM & Nejdl, W 2023, Self-Explaining Neural Networks for Respiratory Sound Classification with Scale-free Interpretability. in 2023 International Joint Conference on Neural Networks (IJCNN). Proceedings of the International Joint Conference on Neural Networks, vol. 2023-June, Institute of Electrical and Electronics Engineers Inc., International Joint Conference on Neural Networks, IJCNN 2023, Gold Coast, Australia, 18 Jun 2023. https://doi.org/10.1109/IJCNN54540.2023.10191600
Ren, Z., Nguyen, T. T., Zahed, M. M., & Nejdl, W. (2023). Self-Explaining Neural Networks for Respiratory Sound Classification with Scale-free Interpretability. In 2023 International Joint Conference on Neural Networks (IJCNN) (Proceedings of the International Joint Conference on Neural Networks; Vol. 2023-June). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IJCNN54540.2023.10191600
Ren Z, Nguyen TT, Zahed MM, Nejdl W. Self-Explaining Neural Networks for Respiratory Sound Classification with Scale-free Interpretability. In 2023 International Joint Conference on Neural Networks (IJCNN). Institute of Electrical and Electronics Engineers Inc. 2023. (Proceedings of the International Joint Conference on Neural Networks). doi: 10.1109/IJCNN54540.2023.10191600
Ren, Zhao ; Nguyen, Thanh Tam ; Zahed, Mohammad Mehdi et al. / Self-Explaining Neural Networks for Respiratory Sound Classification with Scale-free Interpretability. 2023 International Joint Conference on Neural Networks (IJCNN). Institute of Electrical and Electronics Engineers Inc., 2023. (Proceedings of the International Joint Conference on Neural Networks).
BibTeX
@inproceedings{9f9c6bfbd1ba4adb9107f7eabfdb60c6,
title = "Self-Explaining Neural Networks for Respiratory Sound Classification with Scale-free Interpretability",
abstract = "Analysis of respiratory sounds is an area where deep neural networks (DNNs) may benefit clinicians and patients for diagnostic purposes due to their classification power. However, explaining the predictions made by DNNs remains a challenge. Currently, most explanation methods focus on post-hoc explanations, where a separate explanatory model is used to explain a trained DNN. Due to the complex nature of the respiratory sound classification pipeline, which involves signal processing such as frequency analysis and wavelet analysis, post-hoc methods cannot uncover the underlying inference process of DNNs, highlighting the importance of designing DNNs with intrinsic interpretability. In this paper, we propose a self-explaining DNN for respiratory sound classification based on prototype learning. Our model explains its behavior by generating sample prototypes and attaching these prototypes to a layer inside the neural network. Furthermore, we design a scale-free interpretability mechanism, in which the model reaches its final decision by dissecting the input and looking for similarities between several parts of the input and the prototypes. Experimental findings on the largest public respiratory sound database demonstrate that our method achieves comparable, and sometimes better, performance than its non-interpretable counterparts while offering state-of-the-art interpretability. The code will be released upon acceptance.",
keywords = "prototype learning, respiratory sound classification, scale-free interpretability, self-explaining neural networks",
author = "Zhao Ren and Nguyen, {Thanh Tam} and Zahed, {Mohammad Mehdi} and Wolfgang Nejdl",
note = "Funding Information: Acknowledgment. This research was funded by the Federal Ministry of Education and Research (BMBF), Germany, under the project LeibnizKILabor (No. 01DD20003), and the research project “IIP-Ecosphere” granted by the German Federal Ministry for Economics and Climate Action (BMWK) (No. 01MK20006A).; International Joint Conference on Neural Networks, IJCNN 2023 ; Conference date: 18-06-2023 Through 23-06-2023",
year = "2023",
doi = "10.1109/IJCNN54540.2023.10191600",
language = "English",
isbn = "978-1-6654-8868-6",
series = "Proceedings of the International Joint Conference on Neural Networks",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
booktitle = "2023 International Joint Conference on Neural Networks (IJCNN)",
address = "United States",

}

RIS

TY - GEN

T1 - Self-Explaining Neural Networks for Respiratory Sound Classification with Scale-free Interpretability

AU - Ren, Zhao

AU - Nguyen, Thanh Tam

AU - Zahed, Mohammad Mehdi

AU - Nejdl, Wolfgang

N1 - Funding Information: Acknowledgment. This research was funded by the Federal Ministry of Education and Research (BMBF), Germany, under the project LeibnizKILabor (No. 01DD20003), and the research project “IIP-Ecosphere” granted by the German Federal Ministry for Economics and Climate Action (BMWK) (No. 01MK20006A).

PY - 2023

Y1 - 2023

N2 - Analysis of respiratory sounds is an area where deep neural networks (DNNs) may benefit clinicians and patients for diagnostic purposes due to their classification power. However, explaining the predictions made by DNNs remains a challenge. Currently, most explanation methods focus on post-hoc explanations, where a separate explanatory model is used to explain a trained DNN. Due to the complex nature of the respiratory sound classification pipeline, which involves signal processing such as frequency analysis and wavelet analysis, post-hoc methods cannot uncover the underlying inference process of DNNs, highlighting the importance of designing DNNs with intrinsic interpretability. In this paper, we propose a self-explaining DNN for respiratory sound classification based on prototype learning. Our model explains its behavior by generating sample prototypes and attaching these prototypes to a layer inside the neural network. Furthermore, we design a scale-free interpretability mechanism, in which the model reaches its final decision by dissecting the input and looking for similarities between several parts of the input and the prototypes. Experimental findings on the largest public respiratory sound database demonstrate that our method achieves comparable, and sometimes better, performance than its non-interpretable counterparts while offering state-of-the-art interpretability. The code will be released upon acceptance.

AB - Analysis of respiratory sounds is an area where deep neural networks (DNNs) may benefit clinicians and patients for diagnostic purposes due to their classification power. However, explaining the predictions made by DNNs remains a challenge. Currently, most explanation methods focus on post-hoc explanations, where a separate explanatory model is used to explain a trained DNN. Due to the complex nature of the respiratory sound classification pipeline, which involves signal processing such as frequency analysis and wavelet analysis, post-hoc methods cannot uncover the underlying inference process of DNNs, highlighting the importance of designing DNNs with intrinsic interpretability. In this paper, we propose a self-explaining DNN for respiratory sound classification based on prototype learning. Our model explains its behavior by generating sample prototypes and attaching these prototypes to a layer inside the neural network. Furthermore, we design a scale-free interpretability mechanism, in which the model reaches its final decision by dissecting the input and looking for similarities between several parts of the input and the prototypes. Experimental findings on the largest public respiratory sound database demonstrate that our method achieves comparable, and sometimes better, performance than its non-interpretable counterparts while offering state-of-the-art interpretability. The code will be released upon acceptance.

KW - prototype learning

KW - respiratory sound classification

KW - scale-free interpretability

KW - self-explaining neural networks

UR - http://www.scopus.com/inward/record.url?scp=85169566636&partnerID=8YFLogxK

U2 - 10.1109/IJCNN54540.2023.10191600

DO - 10.1109/IJCNN54540.2023.10191600

M3 - Conference contribution

AN - SCOPUS:85169566636

SN - 978-1-6654-8868-6

T3 - Proceedings of the International Joint Conference on Neural Networks

BT - 2023 International Joint Conference on Neural Networks (IJCNN)

PB - Institute of Electrical and Electronics Engineers Inc.

T2 - International Joint Conference on Neural Networks, IJCNN 2023

Y2 - 18 June 2023 through 23 June 2023

ER -
