Details
Original language | English |
---|---|
Pages (from - to) | 223-226 |
Number of pages | 4 |
Journal | Current Directions in Biomedical Engineering |
Volume | 5 |
Issue number | 1 |
Publication status | Published - 1 Sept. 2019 |
Abstract
In this work, we discuss epistemic uncertainty estimation obtained by Bayesian inference in diagnostic classifiers and show that the prediction uncertainty correlates strongly with the goodness of prediction. We train the ResNet-18 image classifier on a dataset of 84,484 optical coherence tomography scans showing four different retinal conditions. Dropout is added before every building block of ResNet, creating an approximation to a Bayesian classifier. Monte Carlo sampling is applied with dropout at test time for uncertainty estimation. In the Monte Carlo experiments, multiple forward passes are performed to obtain a distribution of the class labels. The variance and the entropy of this distribution are used as metrics for uncertainty. Our results show a strong correlation of ρ = 0.99 between prediction uncertainty and prediction error. The mean uncertainty of incorrectly diagnosed cases was significantly higher than that of correctly diagnosed cases. Modeling the prediction uncertainty in computer-aided diagnosis with deep learning yields more reliable results and is therefore expected to increase patient safety. This will help to transfer such systems into clinical routine and to increase the acceptance of machine learning in diagnosis from the standpoint of physicians and patients.
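The abstract describes the method only at a high level. The following PyTorch sketch illustrates that general Monte Carlo dropout recipe under stated assumptions; it is not the authors' code. It assumes a torchvision ResNet-18, inserts dropout before each residual building block as the abstract describes, and computes predictive entropy and per-class variance from multiple stochastic forward passes. The dropout rate, number of passes, and input tensor shape are illustrative choices.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

NUM_CLASSES = 4   # four retinal conditions, as in the abstract
T = 25            # number of stochastic forward passes (illustrative choice)
DROPOUT_P = 0.2   # dropout probability (illustrative choice)


def add_dropout_before_blocks(model: torch.nn.Module, p: float = DROPOUT_P) -> None:
    """Insert a Dropout2d layer before every residual building block
    (layer1-layer4 of torchvision's ResNet), loosely following the abstract."""
    for name in ("layer1", "layer2", "layer3", "layer4"):
        layer = getattr(model, name)
        wrapped = [torch.nn.Sequential(torch.nn.Dropout2d(p), block) for block in layer]
        setattr(model, name, torch.nn.Sequential(*wrapped))


def enable_mc_dropout(model: torch.nn.Module) -> None:
    """Put the model in eval mode but keep dropout layers stochastic."""
    model.eval()
    for module in model.modules():
        if isinstance(module, (torch.nn.Dropout, torch.nn.Dropout2d)):
            module.train()


@torch.no_grad()
def mc_dropout_predict(model: torch.nn.Module, x: torch.Tensor, t: int = T):
    """Run t stochastic forward passes and return mean class probabilities,
    predictive entropy, and per-class variance as uncertainty measures."""
    enable_mc_dropout(model)
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(t)])  # (t, B, C)
    mean_probs = probs.mean(dim=0)                                       # (B, C)
    entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=1)   # (B,)
    variance = probs.var(dim=0)                                          # (B, C)
    return mean_probs, entropy, variance


if __name__ == "__main__":
    model = resnet18(num_classes=NUM_CLASSES)
    add_dropout_before_blocks(model)
    # Placeholder input; real OCT B-scans would be preprocessed to this shape.
    x = torch.randn(1, 3, 224, 224)
    mean_probs, entropy, variance = mc_dropout_predict(model, x)
    print(mean_probs.argmax(dim=1), entropy)
```

High entropy or variance flags a prediction as uncertain, which is the signal the paper relates to prediction error.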
ASJC Scopus subject areas
- Engineering (all)
- Biomedical Engineering
Cite this
In: Current Directions in Biomedical Engineering, Vol. 5, No. 1, 01.09.2019, p. 223-226.
Publication: Contribution to journal › Article › Research › Peer-review
TY - JOUR
T1 - Quantifying the uncertainty of deep learning-based computer-aided diagnosis for patient safety
AU - Laves, Max Heinrich
AU - Ihler, Sontje
AU - Ortmaier, Tobias
AU - Kahrs, Lüder A.
PY - 2019/9/1
Y1 - 2019/9/1
N2 - In this work, we discuss epistemic uncertainty estimation obtained by Bayesian inference in diagnostic classifiers and show that the prediction uncertainty correlates strongly with the goodness of prediction. We train the ResNet-18 image classifier on a dataset of 84,484 optical coherence tomography scans showing four different retinal conditions. Dropout is added before every building block of ResNet, creating an approximation to a Bayesian classifier. Monte Carlo sampling is applied with dropout at test time for uncertainty estimation. In the Monte Carlo experiments, multiple forward passes are performed to obtain a distribution of the class labels. The variance and the entropy of this distribution are used as metrics for uncertainty. Our results show a strong correlation of ρ = 0.99 between prediction uncertainty and prediction error. The mean uncertainty of incorrectly diagnosed cases was significantly higher than that of correctly diagnosed cases. Modeling the prediction uncertainty in computer-aided diagnosis with deep learning yields more reliable results and is therefore expected to increase patient safety. This will help to transfer such systems into clinical routine and to increase the acceptance of machine learning in diagnosis from the standpoint of physicians and patients.
KW - Bayesian approximation
KW - machine learning
KW - optical coherence tomography
KW - retina
UR - http://www.scopus.com/inward/record.url?scp=85072636393&partnerID=8YFLogxK
U2 - 10.1515/cdbme-2019-0057
DO - 10.1515/cdbme-2019-0057
M3 - Article
AN - SCOPUS:85072636393
VL - 5
SP - 223
EP - 226
JO - Current Directions in Biomedical Engineering
JF - Current Directions in Biomedical Engineering
IS - 1
ER -