The Acoustic Dissection of Cough: Diving Into Machine Listening-based COVID-19 Analysis and Detection

Publication: Contribution to journal › Article › Research › Peer-reviewed

Authors

  • Zhao Ren
  • Yi Chang
  • Katrin D. Bartl-Pokorny
  • Florian B. Pokorny
  • Björn W. Schuller

Organisational units

External organisations

  • Universität Augsburg
  • Imperial College London
  • Medical University of Graz

Details

Original language: English
Journal: Journal of Voice
Early online date: 15 June 2022
Publication status: Published electronically (e-pub) - 15 June 2022

Abstract

Objectives: The coronavirus disease 2019 (COVID-19) has caused a worldwide crisis. Considerable effort has been devoted to preventing and controlling the transmission of COVID-19, from early screening to vaccination and treatment. Recently, many automatic disease recognition applications based on machine listening techniques have emerged, which could make it fast and cheap to detect COVID-19 from recordings of cough, a key symptom of COVID-19. To date, knowledge of the acoustic characteristics of COVID-19 cough sounds is limited, yet such knowledge is essential for building effective and robust machine learning models. The present study aims to explore acoustic features for distinguishing COVID-19 positive individuals from COVID-19 negative ones based on their cough sounds. Methods: Applying conventional inferential statistics, we analyze the acoustic correlates of COVID-19 cough sounds based on the COMPARE feature set, i.e., a standardized set of 6,373 acoustic higher-level features. Furthermore, we train automatic COVID-19 detection models with machine learning methods and explore the latent features by evaluating the contribution of all features to the COVID-19 status predictions. Results: The experimental results demonstrate that a set of acoustic parameters of cough sounds, e.g., statistical functionals of the root mean square energy and Mel-frequency cepstral coefficients, carries essential acoustic information, in terms of effect sizes, for differentiating between COVID-19 positive and COVID-19 negative cough samples. Our general automatic COVID-19 detection model performs significantly above chance level, i.e., at an unweighted average recall (UAR) of 0.632, on a data set of 1,411 cough samples (COVID-19 positive/negative: 210/1,201). Conclusions: Based on the analysis of acoustic correlates on the COMPARE feature set and the feature analysis in the effective COVID-19 detection approach, we find that several acoustic features that show larger effects in conventional group difference testing are also weighted more highly in the machine learning models.
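To make the pipeline described in the abstract concrete, the sketch below (not the authors' exact implementation) shows how cough recordings could be represented by the 6,373 COMPARE functionals via the openSMILE Python package, screened per feature with Cohen's d as an effect-size measure, and classified with a linear support vector machine evaluated by unweighted average recall (UAR). The file lists, train/test split, and classifier settings are illustrative assumptions, not details taken from the paper.

# A minimal sketch, assuming the openSMILE Python package ('opensmile') and scikit-learn;
# the file paths below are hypothetical placeholders.
import numpy as np
import opensmile
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Hypothetical recording lists, sized after the data set in the abstract (210/1,201)
covid_files = [f"data/covid_pos_{i:04d}.wav" for i in range(210)]
non_covid_files = [f"data/covid_neg_{i:04d}.wav" for i in range(1201)]

# COMPARE 2016 functionals: one 6,373-dimensional feature vector per recording
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,
    feature_level=opensmile.FeatureLevel.Functionals,
)

def extract(paths):
    return np.vstack([smile.process_file(p).to_numpy().ravel() for p in paths])

X = extract(covid_files + non_covid_files)
y = np.array([1] * len(covid_files) + [0] * len(non_covid_files))

# Per-feature Cohen's d as a simple effect-size screen for group differences
pos, neg = X[y == 1], X[y == 0]
pooled_sd = np.sqrt(((len(pos) - 1) * pos.var(0, ddof=1) +
                     (len(neg) - 1) * neg.var(0, ddof=1)) / (len(pos) + len(neg) - 2))
cohens_d = (pos.mean(0) - neg.mean(0)) / pooled_sd

# Linear SVM evaluated with UAR, i.e., recall macro-averaged over both classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(),
                    LinearSVC(class_weight="balanced", max_iter=10000))
clf.fit(X_tr, y_tr)
uar = recall_score(y_te, clf.predict(X_te), average="macro")
print(f"UAR: {uar:.3f}")

# Feature contributions in the linear model: magnitude of the learned weights
weights = np.abs(clf.named_steps["linearsvc"].coef_).ravel()
print("Top-weighted feature indices:", np.argsort(weights)[-10:][::-1])

UAR, i.e., recall averaged equally over the COVID-19 positive and negative classes, is the metric reported in the abstract (0.632) and is insensitive to the strong class imbalance; comparing the per-feature effect sizes with the model weights mirrors the study's comparison of group difference testing and machine learning feature contributions.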

ASJC Scopus subject areas

Sustainable Development Goals

Cite

The Acoustic Dissection of Cough: Diving Into Machine Listening-based COVID-19 Analysis and Detection. / Ren, Zhao; Chang, Yi; Bartl-Pokorny, Katrin D. et al.
In: Journal of Voice, 15.06.2022.

Publication: Contribution to journal › Article › Research › Peer-reviewed

Ren Z, Chang Y, Bartl-Pokorny KD, Pokorny FB, Schuller BW. The Acoustic Dissection of Cough: Diving Into Machine Listening-based COVID-19 Analysis and Detection. Journal of Voice. 2022 Jun 15. Epub 2022 Jun 15. doi: 10.1016/j.jvoice.2022.06.011, 10.15488/15816
BibTeX
@article{69f5d11040fd48a0b6e553f71fc7b489,
title = "The Acoustic Dissection of Cough: Diving Into Machine Listening-based COVID-19 Analysis and Detection",
abstract = "Objectives: The coronavirus disease 2019 (COVID-19) has caused a crisis worldwide. Amounts of efforts have been made to prevent and control COVID-19′s transmission, from early screenings to vaccinations and treatments. Recently, due to the spring up of many automatic disease recognition applications based on machine listening techniques, it would be fast and cheap to detect COVID-19 from recordings of cough, a key symptom of COVID-19. To date, knowledge of the acoustic characteristics of COVID-19 cough sounds is limited but would be essential for structuring effective and robust machine learning models. The present study aims to explore acoustic features for distinguishing COVID-19 positive individuals from COVID-19 negative ones based on their cough sounds. Methods: By applying conventional inferential statistics, we analyze the acoustic correlates of COVID-19 cough sounds based on the COMPARE feature set, i.e., a standardized set of 6,373 acoustic higher-level features. Furthermore, we train automatic COVID-19 detection models with machine learning methods and explore the latent features by evaluating the contribution of all features to the COVID-19 status predictions. Results: The experimental results demonstrate that a set of acoustic parameters of cough sounds, e.g., statistical functionals of the root mean square energy and Mel-frequency cepstral coefficients, bear essential acoustic information in terms of effect sizes for the differentiation between COVID-19 positive and COVID-19 negative cough samples. Our general automatic COVID-19 detection model performs significantly above chance level, i.e., at an unweighted average recall (UAR) of 0.632, on a data set consisting of 1,411 cough samples (COVID-19 positive/negative: 210/1,201). Conclusions: Based on the acoustic correlates analysis on the COMPARE feature set and the feature analysis in the effective COVID-19 detection approach, we find that several acoustic features that show higher effects in conventional group difference testing are also higher weighted in the machine learning models.",
keywords = "Acoustics, Automatic disease detection, Computational paralinguistics, Cough, COVID-19",
author = "Zhao Ren and Yi Chang and Bartl-Pokorny, {Katrin D.} and Pokorny, {Florian B.} and Schuller, {Bj{\"o}rn W.}",
note = "Publisher Copyright: {\textcopyright} 2022 The Authors ",
year = "2022",
month = jun,
day = "15",
doi = "10.1016/j.jvoice.2022.06.011",
language = "English",
journal = "Journal of voice",
issn = "0892-1997",
publisher = "Mosby Inc.",

}

RIS

TY - JOUR

T1 - The Acoustic Dissection of Cough

T2 - Diving Into Machine Listening-based COVID-19 Analysis and Detection

AU - Ren, Zhao

AU - Chang, Yi

AU - Bartl-Pokorny, Katrin D.

AU - Pokorny, Florian B.

AU - Schuller, Björn W.

N1 - Publisher Copyright: © 2022 The Authors

PY - 2022/6/15

Y1 - 2022/6/15

N2 - Objectives: The coronavirus disease 2019 (COVID-19) has caused a worldwide crisis. Considerable effort has been devoted to preventing and controlling the transmission of COVID-19, from early screening to vaccination and treatment. Recently, many automatic disease recognition applications based on machine listening techniques have emerged, which could make it fast and cheap to detect COVID-19 from recordings of cough, a key symptom of COVID-19. To date, knowledge of the acoustic characteristics of COVID-19 cough sounds is limited, yet such knowledge is essential for building effective and robust machine learning models. The present study aims to explore acoustic features for distinguishing COVID-19 positive individuals from COVID-19 negative ones based on their cough sounds. Methods: Applying conventional inferential statistics, we analyze the acoustic correlates of COVID-19 cough sounds based on the COMPARE feature set, i.e., a standardized set of 6,373 acoustic higher-level features. Furthermore, we train automatic COVID-19 detection models with machine learning methods and explore the latent features by evaluating the contribution of all features to the COVID-19 status predictions. Results: The experimental results demonstrate that a set of acoustic parameters of cough sounds, e.g., statistical functionals of the root mean square energy and Mel-frequency cepstral coefficients, carries essential acoustic information, in terms of effect sizes, for differentiating between COVID-19 positive and COVID-19 negative cough samples. Our general automatic COVID-19 detection model performs significantly above chance level, i.e., at an unweighted average recall (UAR) of 0.632, on a data set of 1,411 cough samples (COVID-19 positive/negative: 210/1,201). Conclusions: Based on the analysis of acoustic correlates on the COMPARE feature set and the feature analysis in the effective COVID-19 detection approach, we find that several acoustic features that show larger effects in conventional group difference testing are also weighted more highly in the machine learning models.

AB - Objectives: The coronavirus disease 2019 (COVID-19) has caused a worldwide crisis. Considerable effort has been devoted to preventing and controlling the transmission of COVID-19, from early screening to vaccination and treatment. Recently, many automatic disease recognition applications based on machine listening techniques have emerged, which could make it fast and cheap to detect COVID-19 from recordings of cough, a key symptom of COVID-19. To date, knowledge of the acoustic characteristics of COVID-19 cough sounds is limited, yet such knowledge is essential for building effective and robust machine learning models. The present study aims to explore acoustic features for distinguishing COVID-19 positive individuals from COVID-19 negative ones based on their cough sounds. Methods: Applying conventional inferential statistics, we analyze the acoustic correlates of COVID-19 cough sounds based on the COMPARE feature set, i.e., a standardized set of 6,373 acoustic higher-level features. Furthermore, we train automatic COVID-19 detection models with machine learning methods and explore the latent features by evaluating the contribution of all features to the COVID-19 status predictions. Results: The experimental results demonstrate that a set of acoustic parameters of cough sounds, e.g., statistical functionals of the root mean square energy and Mel-frequency cepstral coefficients, carries essential acoustic information, in terms of effect sizes, for differentiating between COVID-19 positive and COVID-19 negative cough samples. Our general automatic COVID-19 detection model performs significantly above chance level, i.e., at an unweighted average recall (UAR) of 0.632, on a data set of 1,411 cough samples (COVID-19 positive/negative: 210/1,201). Conclusions: Based on the analysis of acoustic correlates on the COMPARE feature set and the feature analysis in the effective COVID-19 detection approach, we find that several acoustic features that show larger effects in conventional group difference testing are also weighted more highly in the machine learning models.

KW - Acoustics

KW - Automatic disease detection

KW - Computational paralinguistics

KW - Cough

KW - COVID-19

UR - http://www.scopus.com/inward/record.url?scp=85134314251&partnerID=8YFLogxK

U2 - 10.1016/j.jvoice.2022.06.011

DO - 10.1016/j.jvoice.2022.06.011

M3 - Article

C2 - 35835648

AN - SCOPUS:85134314251

JO - Journal of Voice

JF - Journal of Voice

SN - 0892-1997

ER -