Prototype Learning for Interpretable Respiratory Sound Analysis

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Zhao Ren
  • Thanh Tam Nguyen
  • Wolfgang Nejdl

Research Organisations

External Research Organisations

  • Griffith University

Details

Original language: English
Title of host publication: 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing
Subtitle of host publication: ICASSP 2022 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 9087-9091
Number of pages: 5
ISBN (electronic): 9781665405409
Publication status: Published - May 2022
Event: 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Virtual, Online, Singapore
Duration: 23 May 2022 - 27 May 2022

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2022-May
ISSN (Print): 1520-6149

Abstract

Remote screening of respiratory diseases has been widely studied as a non-invasive tool for early diagnosis, especially during the pandemic. The respiratory sound classification task has been tackled with numerous deep neural network (DNN) models owing to their superior performance. However, in the high-stakes medical domain, where decisions can have significant consequences, it is desirable to develop interpretable models that provide understandable reasons for physicians and patients. To address this issue, we propose a prototype learning framework that jointly generates exemplar samples for explanation and integrates these samples into a layer of DNNs. The experimental results indicate that our method outperforms state-of-the-art approaches on the largest public respiratory sound database.
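
As an illustration of the idea described in the abstract, below is a minimal sketch of a prototype layer in PyTorch. It is not the authors' implementation: the number of prototypes, the embedding size, and the log-ratio similarity transform (borrowed from common ProtoPNet-style prototype layers) are all assumptions made here for illustration.

# Minimal sketch (not the authors' code) of a prototype layer: learnable
# prototype vectors live inside the network, and class logits are computed
# from the similarity between an input embedding and each prototype.
import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    def __init__(self, num_prototypes: int, embed_dim: int, num_classes: int):
        super().__init__()
        # Prototypes are trained jointly with the backbone; after training they
        # can be mapped back to real sound examples to serve as explanations.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, embed_dim))
        self.classifier = nn.Linear(num_prototypes, num_classes)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # Squared Euclidean distance from each embedding to each prototype.
        dists = torch.cdist(embeddings, self.prototypes).pow(2)
        # Convert distances to bounded similarities (an assumed, common choice).
        sims = torch.log((dists + 1.0) / (dists + 1e-6))
        return self.classifier(sims)

# Usage: a CNN backbone would map a respiratory-sound spectrogram to a 128-d
# embedding; here a random tensor stands in for that embedding.
layer = PrototypeLayer(num_prototypes=10, embed_dim=128, num_classes=4)
logits = layer(torch.randn(8, 128))   # shape: (batch, num_classes)

In the paper's framework, the prototypes correspond to exemplar respiratory sound samples generated during training, which is what makes the resulting explanations understandable to physicians and patients.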

Keywords

    interpretable machine learning, prototype-based explanation, respiratory sound classification

Cite this

Prototype Learning for Interpretable Respiratory Sound Analysis. / Ren, Zhao; Nguyen, Thanh Tam; Nejdl, Wolfgang.
2022 IEEE International Conference on Acoustics, Speech, and Signal Processing: ICASSP 2022 - Proceedings. Institute of Electrical and Electronics Engineers Inc., 2022. p. 9087-9091 (ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings; Vol. 2022-May).

Ren, Z, Nguyen, TT & Nejdl, W 2022, Prototype Learning for Interpretable Respiratory Sound Analysis. in 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing: ICASSP 2022 - Proceedings. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, vol. 2022-May, Institute of Electrical and Electronics Engineers Inc., pp. 9087-9091, 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022, Virtual, Online, Singapore, 23 May 2022. https://doi.org/10.48550/arXiv.2110.03536, https://doi.org/10.1109/ICASSP43922.2022.9747014
Ren, Z., Nguyen, T. T., & Nejdl, W. (2022). Prototype Learning for Interpretable Respiratory Sound Analysis. In 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing: ICASSP 2022 - Proceedings (pp. 9087-9091). (ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings; Vol. 2022-May). Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.48550/arXiv.2110.03536, https://doi.org/10.1109/ICASSP43922.2022.9747014
Ren Z, Nguyen TT, Nejdl W. Prototype Learning for Interpretable Respiratory Sound Analysis. In 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing: ICASSP 2022 - Proceedings. Institute of Electrical and Electronics Engineers Inc. 2022. p. 9087-9091. (ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings). doi: 10.48550/arXiv.2110.03536, 10.1109/ICASSP43922.2022.9747014
Ren, Zhao ; Nguyen, Thanh Tam ; Nejdl, Wolfgang. / Prototype Learning for Interpretable Respiratory Sound Analysis. 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing: ICASSP 2022 - Proceedings. Institute of Electrical and Electronics Engineers Inc., 2022. pp. 9087-9091 (ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings).
BibTeX
@inproceedings{4ccfd687ff0a4e08a4a64eeb1903a47a,
title = "Prototype Learning for Interpretable Respiratory Sound Analysis",
abstract = "Remote screening of respiratory diseases has been widely studied as a non-invasive and early instrument for diagnosis purposes, especially in the pandemic. The respiratory sound classification task has been realized with numerous deep neural network (DNN) models due to their superior performance. However, in the high-stake medical domain where decisions can have significant consequences, it is desirable to develop interpretable models; thus, providing understandable reasons for physicians and patients. To address the issue, we propose a prototype learning framework, that jointly generates exemplar samples for explanation and integrates these samples into a layer of DNNs. The experimental results indicate that our method outperforms the state-of-the-art approaches on the largest public respiratory sound database.",
keywords = "interpretable machine learning, prototype-based explanation, respiratory sound classification",
author = "Zhao Ren and Nguyen, {Thanh Tam} and Wolfgang Nejdl",
note = "Funding Information: Acknowledgments. This research was funded by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor with grant No. 01DD20003. ; 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 ; Conference date: 23-05-2022 Through 27-05-2022",
year = "2022",
month = may,
doi = "https://doi.org/10.48550/arXiv.2110.03536",
language = "English",
series = "ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "9087--9091",
booktitle = "2022 IEEE International Conference on Acoustics, Speech, and Signal Processing",
address = "United States",

}

RIS

TY - GEN

T1 - Prototype Learning for Interpretable Respiratory Sound Analysis

AU - Ren, Zhao

AU - Nguyen, Thanh Tam

AU - Nejdl, Wolfgang

N1 - Funding Information: This research was funded by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor with grant No. 01DD20003.

PY - 2022/5

Y1 - 2022/5

N2 - Remote screening of respiratory diseases has been widely studied as a non-invasive tool for early diagnosis, especially during the pandemic. The respiratory sound classification task has been tackled with numerous deep neural network (DNN) models owing to their superior performance. However, in the high-stakes medical domain, where decisions can have significant consequences, it is desirable to develop interpretable models that provide understandable reasons for physicians and patients. To address this issue, we propose a prototype learning framework that jointly generates exemplar samples for explanation and integrates these samples into a layer of DNNs. The experimental results indicate that our method outperforms state-of-the-art approaches on the largest public respiratory sound database.

AB - Remote screening of respiratory diseases has been widely studied as a non-invasive tool for early diagnosis, especially during the pandemic. The respiratory sound classification task has been tackled with numerous deep neural network (DNN) models owing to their superior performance. However, in the high-stakes medical domain, where decisions can have significant consequences, it is desirable to develop interpretable models that provide understandable reasons for physicians and patients. To address this issue, we propose a prototype learning framework that jointly generates exemplar samples for explanation and integrates these samples into a layer of DNNs. The experimental results indicate that our method outperforms state-of-the-art approaches on the largest public respiratory sound database.

KW - interpretable machine learning

KW - prototype-based explanation

KW - respiratory sound classification

UR - http://www.scopus.com/inward/record.url?scp=85128155466&partnerID=8YFLogxK

U2 - 10.1109/ICASSP43922.2022.9747014

DO - 10.1109/ICASSP43922.2022.9747014

M3 - Conference contribution

AN - SCOPUS:85128155466

T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings

SP - 9087

EP - 9091

BT - 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing

PB - Institute of Electrical and Electronics Engineers Inc.

T2 - 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022

Y2 - 23 May 2022 through 27 May 2022

ER -
