Details
Original language | English |
---|---|
Journal | Journal of voice |
Early online date | 15 Jun 2022 |
Publication status | E-pub ahead of print - 15 Jun 2022 |
Abstract
Objectives: The coronavirus disease 2019 (COVID-19) has caused a worldwide crisis. Considerable efforts have been made to prevent and control its transmission, from early screening to vaccination and treatment. With the recent emergence of automatic disease recognition applications based on machine listening techniques, detecting COVID-19 from recordings of cough, a key symptom of the disease, could become fast and inexpensive. To date, knowledge of the acoustic characteristics of COVID-19 cough sounds is limited, yet such knowledge is essential for building effective and robust machine learning models. The present study aims to explore acoustic features that distinguish COVID-19 positive individuals from COVID-19 negative ones based on their cough sounds.

Methods: Applying conventional inferential statistics, we analyze the acoustic correlates of COVID-19 cough sounds based on the COMPARE feature set, i.e., a standardized set of 6,373 acoustic higher-level features. Furthermore, we train automatic COVID-19 detection models with machine learning methods and explore the latent features by evaluating the contribution of all features to the COVID-19 status predictions.

Results: The experimental results demonstrate that a set of acoustic parameters of cough sounds, e.g., statistical functionals of the root mean square energy and Mel-frequency cepstral coefficients, carries essential acoustic information, in terms of effect sizes, for differentiating between COVID-19 positive and COVID-19 negative cough samples. Our general automatic COVID-19 detection model performs significantly above chance level, i.e., at an unweighted average recall (UAR) of 0.632, on a data set consisting of 1,411 cough samples (COVID-19 positive/negative: 210/1,201).

Conclusions: Based on the analysis of acoustic correlates within the COMPARE feature set and the feature analysis of the effective COVID-19 detection approach, we find that several acoustic features showing larger effects in conventional group difference testing are also weighted more heavily in the machine learning models.
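The 6,373-dimensional COMPARE (ComParE) feature set referred to above is the standardized set of acoustic functionals distributed with the openSMILE toolkit, and UAR is simply the recall averaged over the classes. The Python sketch below illustrates, under these assumptions, how such a pipeline could be assembled from off-the-shelf components; the input variables `wav_paths` and `labels`, the Mann-Whitney U/Cohen's d combination, the linear SVM, and the train/test split are illustrative choices and not taken from the paper.

```python
# Illustrative sketch only -- not the authors' implementation.
# Assumed (hypothetical) inputs: `wav_paths`, a list of cough WAV files,
# and `labels`, a 0/1 array (1 = COVID-19 positive, 0 = negative).
import numpy as np
import opensmile
from scipy import stats
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# The standardized set of 6,373 higher-level (functional) acoustic features.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,
    feature_level=opensmile.FeatureLevel.Functionals,
)

def extract_features(wav_paths):
    """Return an (n_samples, 6373) feature matrix for a list of WAV files."""
    return np.vstack([smile.process_file(p).to_numpy() for p in wav_paths])

def cohens_d(pos, neg):
    """Effect size of one feature between positive and negative groups."""
    pooled_sd = np.sqrt((pos.var(ddof=1) + neg.var(ddof=1)) / 2.0)
    return (pos.mean() - neg.mean()) / pooled_sd if pooled_sd > 0 else 0.0

X = extract_features(wav_paths)   # hypothetical list of cough recordings
y = np.asarray(labels)            # hypothetical binary COVID-19 labels

# Conventional group-difference testing per feature: Mann-Whitney U test
# p-values plus Cohen's d as an effect-size estimate.
effects = np.array([cohens_d(X[y == 1, j], X[y == 0, j]) for j in range(X.shape[1])])
p_values = np.array(
    [stats.mannwhitneyu(X[y == 1, j], X[y == 0, j]).pvalue for j in range(X.shape[1])]
)
print("Indices of the ten largest effect sizes:", np.argsort(-np.abs(effects))[:10])

# Simple linear classifier; the absolute values of its coefficients serve as
# per-feature weights that can be compared against the effect sizes above.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)
clf = make_pipeline(StandardScaler(), LinearSVC(C=1e-3, max_iter=10000))
clf.fit(X_tr, y_tr)
weights = np.abs(clf.named_steps["linearsvc"].coef_).ravel()

# UAR = recall averaged over classes (macro recall); chance level is 0.5 for
# this binary task regardless of the class imbalance.
uar = recall_score(y_te, clf.predict(X_te), average="macro")
print(f"UAR: {uar:.3f}")
```

Because the negative class outnumbers the positive class by roughly 6:1 in the described data set, UAR rather than plain accuracy is the meaningful headline metric: a classifier that always predicts "negative" would reach about 85% accuracy but only a UAR of 0.5.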
Keywords
- Acoustics
- Automatic disease detection
- Computational paralinguistics
- Cough
- COVID-19
ASJC Scopus subject areas
- Medicine (all)
- Otorhinolaryngology
- Health Professions (all)
- Speech and Hearing
- Nursing (all)
- LPN and LVN
Cite this
In: Journal of voice, 15.06.2022.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - The Acoustic Dissection of Cough
T2 - Diving Into Machine Listening-based COVID-19 Analysis and Detection
AU - Ren, Zhao
AU - Chang, Yi
AU - Bartl-Pokorny, Katrin D.
AU - Pokorny, Florian B.
AU - Schuller, Björn W.
N1 - Publisher Copyright: © 2022 The Authors
PY - 2022/6/15
Y1 - 2022/6/15
AB - Objectives: The coronavirus disease 2019 (COVID-19) has caused a worldwide crisis. Considerable efforts have been made to prevent and control its transmission, from early screening to vaccination and treatment. With the recent emergence of automatic disease recognition applications based on machine listening techniques, detecting COVID-19 from recordings of cough, a key symptom of the disease, could become fast and inexpensive. To date, knowledge of the acoustic characteristics of COVID-19 cough sounds is limited, yet such knowledge is essential for building effective and robust machine learning models. The present study aims to explore acoustic features that distinguish COVID-19 positive individuals from COVID-19 negative ones based on their cough sounds. Methods: Applying conventional inferential statistics, we analyze the acoustic correlates of COVID-19 cough sounds based on the COMPARE feature set, i.e., a standardized set of 6,373 acoustic higher-level features. Furthermore, we train automatic COVID-19 detection models with machine learning methods and explore the latent features by evaluating the contribution of all features to the COVID-19 status predictions. Results: The experimental results demonstrate that a set of acoustic parameters of cough sounds, e.g., statistical functionals of the root mean square energy and Mel-frequency cepstral coefficients, carries essential acoustic information, in terms of effect sizes, for differentiating between COVID-19 positive and COVID-19 negative cough samples. Our general automatic COVID-19 detection model performs significantly above chance level, i.e., at an unweighted average recall (UAR) of 0.632, on a data set consisting of 1,411 cough samples (COVID-19 positive/negative: 210/1,201). Conclusions: Based on the analysis of acoustic correlates within the COMPARE feature set and the feature analysis of the effective COVID-19 detection approach, we find that several acoustic features showing larger effects in conventional group difference testing are also weighted more heavily in the machine learning models.
KW - Acoustics
KW - Automatic disease detection
KW - Computational paralinguistics
KW - Cough
KW - COVID-19
UR - http://www.scopus.com/inward/record.url?scp=85134314251&partnerID=8YFLogxK
U2 - 10.1016/j.jvoice.2022.06.011
DO - 10.1016/j.jvoice.2022.06.011
M3 - Article
C2 - 35835648
AN - SCOPUS:85134314251
JO - Journal of voice
JF - Journal of voice
SN - 0892-1997
ER -