Uncertainty Quantification in Computer-Aided Diagnosis: Make Your Model say "I don't know" for Ambiguous Cases

Publication: Working paper/Preprint › Preprint

Authors

  • Max-Heinrich Laves
  • Sontje Ihler
  • Tobias Ortmaier


Details

Original language: English
Number of pages: 4
Publication status: Published electronically (E-pub) - 2019

Abstract

We evaluate two different methods for the integration of prediction uncertainty into diagnostic image classifiers to increase patient safety in deep learning. In the first method, Monte Carlo sampling is applied with dropout at test time to get a posterior distribution of the class labels (Bayesian ResNet). The second method extends ResNet to a probabilistic approach by predicting the parameters of the posterior distribution and sampling the final result from it (Variational ResNet). The variance of the posterior is used as a metric for uncertainty. Both methods are trained on a data set of optical coherence tomography scans showing four different retinal conditions. Our results show that cases in which the classifier predicts incorrectly correlate with a higher uncertainty. The mean uncertainty of incorrectly diagnosed cases was between 4.6 and 8.1 times higher than the mean uncertainty of correctly diagnosed cases. Modeling the prediction uncertainty in computer-aided diagnosis with deep learning yields more reliable results and is anticipated to increase patient safety.
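The Monte Carlo dropout procedure from the abstract can be sketched in a few lines: dropout is kept active at test time, the input is passed through the network T times, and the variance of the sampled class probabilities serves as the uncertainty score. The single linear layer, dropout rate, and sample count below are illustrative assumptions for a toy example, not the authors' ResNet setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mc_dropout_predict(x, W, b, T=100, p_drop=0.5):
    """Run T stochastic forward passes with dropout active at test time.

    Returns the mean class probabilities and the per-class variance of
    the sampled posterior, whose magnitude is used as the uncertainty
    metric.
    """
    probs = []
    for _ in range(T):
        mask = rng.random(x.shape) >= p_drop   # fresh dropout mask per pass
        h = (x * mask) / (1.0 - p_drop)        # inverted-dropout scaling
        probs.append(softmax(h @ W + b))
    probs = np.stack(probs)                    # shape (T, n_classes)
    return probs.mean(axis=0), probs.var(axis=0)

# Toy stand-in for a classifier over four retinal conditions.
x = rng.normal(size=8)
W = rng.normal(size=(8, 4))
b = np.zeros(4)
mean_p, var_p = mc_dropout_predict(x, W, b)
uncertainty = var_p.sum()                      # scalar uncertainty score
```

In practice a case whose `uncertainty` exceeds a chosen threshold would be flagged for human review, which is the "I don't know" behavior the title refers to.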

Cite

Uncertainty Quantification in Computer-Aided Diagnosis: Make Your Model say "I don't know" for Ambiguous Cases. / Laves, Max-Heinrich; Ihler, Sontje; Ortmaier, Tobias.
2019.


Laves MH, Ihler S, Ortmaier T. Uncertainty Quantification in Computer-Aided Diagnosis: Make Your Model say "I don't know" for Ambiguous Cases. 2019. Epub 2019. doi: 10.48550/arXiv.1908.00792
@techreport{6d8774a6c0f549f0abb11c64f655b91a,
title = "Uncertainty Quantification in Computer-Aided Diagnosis: Make Your Model say {"}I don't know{"} for Ambiguous Cases",
abstract = "We evaluate two different methods for the integration of prediction uncertainty into diagnostic image classifiers to increase patient safety in deep learning. In the first method, Monte Carlo sampling is applied with dropout at test time to get a posterior distribution of the class labels (Bayesian ResNet). The second method extends ResNet to a probabilistic approach by predicting the parameters of the posterior distribution and sampling the final result from it (Variational ResNet). The variance of the posterior is used as a metric for uncertainty. Both methods are trained on a data set of optical coherence tomography scans showing four different retinal conditions. Our results show that cases in which the classifier predicts incorrectly correlate with a higher uncertainty. The mean uncertainty of incorrectly diagnosed cases was between 4.6 and 8.1 times higher than the mean uncertainty of correctly diagnosed cases. Modeling the prediction uncertainty in computer-aided diagnosis with deep learning yields more reliable results and is anticipated to increase patient safety.",
author = "Max-Heinrich Laves and Sontje Ihler and Tobias Ortmaier",
year = "2019",
doi = "10.48550/arXiv.1908.00792",
language = "English",
type = "WorkingPaper",

}


TY - UNPB

T1 - Uncertainty Quantification in Computer-Aided Diagnosis

T2 - Make Your Model say "I don't know" for Ambiguous Cases

AU - Laves, Max-Heinrich

AU - Ihler, Sontje

AU - Ortmaier, Tobias

PY - 2019

Y1 - 2019

N2 - We evaluate two different methods for the integration of prediction uncertainty into diagnostic image classifiers to increase patient safety in deep learning. In the first method, Monte Carlo sampling is applied with dropout at test time to get a posterior distribution of the class labels (Bayesian ResNet). The second method extends ResNet to a probabilistic approach by predicting the parameters of the posterior distribution and sampling the final result from it (Variational ResNet). The variance of the posterior is used as a metric for uncertainty. Both methods are trained on a data set of optical coherence tomography scans showing four different retinal conditions. Our results show that cases in which the classifier predicts incorrectly correlate with a higher uncertainty. The mean uncertainty of incorrectly diagnosed cases was between 4.6 and 8.1 times higher than the mean uncertainty of correctly diagnosed cases. Modeling the prediction uncertainty in computer-aided diagnosis with deep learning yields more reliable results and is anticipated to increase patient safety.

AB - We evaluate two different methods for the integration of prediction uncertainty into diagnostic image classifiers to increase patient safety in deep learning. In the first method, Monte Carlo sampling is applied with dropout at test time to get a posterior distribution of the class labels (Bayesian ResNet). The second method extends ResNet to a probabilistic approach by predicting the parameters of the posterior distribution and sampling the final result from it (Variational ResNet). The variance of the posterior is used as a metric for uncertainty. Both methods are trained on a data set of optical coherence tomography scans showing four different retinal conditions. Our results show that cases in which the classifier predicts incorrectly correlate with a higher uncertainty. The mean uncertainty of incorrectly diagnosed cases was between 4.6 and 8.1 times higher than the mean uncertainty of correctly diagnosed cases. Modeling the prediction uncertainty in computer-aided diagnosis with deep learning yields more reliable results and is anticipated to increase patient safety.

U2 - 10.48550/arXiv.1908.00792

DO - 10.48550/arXiv.1908.00792

M3 - Preprint

BT - Uncertainty Quantification in Computer-Aided Diagnosis

ER -