Well-Calibrated Predictive Uncertainty in Medical Imaging with Bayesian Deep Learning

Publication: Qualification/degree thesis › Dissertation

Authors

  • Max-Heinrich Viktor Laves

Organizational units

Details

Translated title: Kalibrierte prädiktive Unsicherheit in der medizinischen Bildgebung mit Bayesian Deep Learning
Original language: English
Qualification: Doktor der Ingenieurwissenschaften (doctoral degree in engineering)
Awarding institution: Gottfried Wilhelm Leibniz Universität Hannover
Supervised by
  • Tobias Ortmaier, Supervisor
Date of award: 13 Dec 2021
Place of publication: Hannover
Publication status: Published - 2021

Abstract

The use of medical imaging has revolutionized modern medicine over the last century. It has helped provide insight into human anatomy and physiology. Many diseases and pathologies can only be diagnosed with the use of imaging techniques. Due to increasing availability and falling costs, the number of medical imaging examinations continues to grow, resulting in a vast amount of data that must be assessed by medical experts. Computers can be used to assist in and automate the process of medical image analysis. Recent advances in deep learning allow this to be done with reasonable accuracy and on a large scale. The biggest disadvantage of these methods in practice is their black-box nature. Although they achieve the highest accuracy, their acceptance in clinical practice may be limited by their lack of interpretability and transparency. These concerns are reinforced by the core problem that this dissertation addresses: the overconfidence of deep models in incorrect predictions. How do we know if we do not know? This thesis deals with Bayesian methods for estimating predictive uncertainty in medical imaging with deep learning. We show that the uncertainty from variational Bayesian inference is miscalibrated and does not represent the predictive error well. To quantify miscalibration, we propose the uncertainty calibration error, which alleviates the disadvantages of existing calibration metrics. Moreover, we introduce logit scaling for deep Bayesian Monte Carlo methods to calibrate uncertainty after training. Calibrated deep Bayesian models better detect false predictions and out-of-distribution data. Bayesian uncertainty is further leveraged to reduce the economic burden of the large-scale data labeling needed to train deep models. We propose BatchPL, a sample acquisition scheme that selects highly informative samples for pseudo-labeling in self- and unsupervised learning scenarios. The approach achieves state-of-the-art performance on both medical and non-medical classification data sets. Many medical imaging problems go beyond classification. Therefore, we extend estimation and calibration of predictive uncertainty to deep regression (sigma scaling) and evaluate it on different medical imaging regression tasks. To mitigate the problem of hallucinations in deep generative models, we provide a Bayesian approach to the deep image prior (MCDIP), which is not affected by hallucinations because the model only ever has access to a single image.
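As a rough illustration of the abstract's central ideas, the sketch below shows how a binned uncertainty calibration error and post-hoc logit (temperature) scaling for Monte Carlo methods might be computed. The bin count, the normalization of uncertainty to [0, 1], and all function and variable names are illustrative assumptions of this sketch, not code from the thesis; the exact formulations are given in the dissertation (doi: 10.15488/11588).

import numpy as np

def uncertainty_calibration_error(uncertainty, is_error, n_bins=10):
    # Binned uncertainty calibration error (UCE), sketched by analogy to
    # the expected calibration error: per bin, compare the empirical
    # error rate against the mean predictive uncertainty, weighted by
    # the fraction of samples in the bin.
    # Assumes `uncertainty` is normalized to [0, 1] (e.g., scaled
    # predictive entropy) and `is_error` is a 0/1 misclassification
    # indicator (both are assumptions of this sketch).
    uncertainty = np.asarray(uncertainty, dtype=float)
    is_error = np.asarray(is_error, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_idx = np.digitize(uncertainty, edges[1:-1])  # bin index 0 .. n_bins-1
    n, uce = len(uncertainty), 0.0
    for b in range(n_bins):
        in_bin = bin_idx == b
        if not in_bin.any():
            continue  # empty bins contribute nothing
        err_rate = is_error[in_bin].mean()     # how often the model is wrong
        mean_unc = uncertainty[in_bin].mean()  # how uncertain it claims to be
        uce += (in_bin.sum() / n) * abs(err_rate - mean_unc)
    return uce

def logit_scaled_probs(mc_logits, temperature):
    # Post-hoc logit (temperature) scaling for Monte Carlo methods:
    # divide the logits of every stochastic forward pass by a scalar
    # temperature fitted on a held-out set (e.g., by minimizing the
    # negative log-likelihood), then average the softmax outputs over
    # the stochastic samples.
    z = np.asarray(mc_logits, dtype=float) / temperature  # (samples, N, classes)
    z -= z.max(axis=-1, keepdims=True)                    # numerically stable softmax
    p = np.exp(z)
    p /= p.sum(axis=-1, keepdims=True)
    return p.mean(axis=0)  # mean predictive distribution over MC samples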
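For the regression side, the sigma scaling named in the abstract admits a compact sketch: assuming a Gaussian predictive distribution per sample, a single scalar that rescales the predicted standard deviations can be fitted in closed form on a held-out calibration set. All names and shapes below are illustrative assumptions of this sketch, not the thesis' code.

import numpy as np

def fit_sigma_scale(mu, sigma, y):
    # Closed-form sigma scaling (sketch, assuming a Gaussian predictive
    # distribution per sample). With sigma_cal = s * sigma, the Gaussian
    # negative log-likelihood on a calibration set,
    #   N*log(s) + sum((y - mu)**2 / (2 * s**2 * sigma**2)) + const,
    # is minimized by s**2 = mean((y - mu)**2 / sigma**2).
    mu, sigma, y = (np.asarray(a, dtype=float) for a in (mu, sigma, y))
    return np.sqrt(np.mean((y - mu) ** 2 / sigma ** 2))

# Fit on a held-out calibration split, then rescale test-time uncertainty:
# s = fit_sigma_scale(mu_cal, sigma_cal, y_cal)
# sigma_test_calibrated = s * sigma_test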

Cite

Well-Calibrated Predictive Uncertainty in Medical Imaging with Bayesian Deep Learning. / Laves, Max-Heinrich Viktor.
Hannover, 2021. 145 p.

Laves, M-HV 2021, 'Well-Calibrated Predictive Uncertainty in Medical Imaging with Bayesian Deep Learning', Doktor der Ingenieurwissenschaften, Gottfried Wilhelm Leibniz Universität Hannover, Hannover. https://doi.org/10.15488/11588
Laves, M.-H. V. (2021). Well-Calibrated Predictive Uncertainty in Medical Imaging with Bayesian Deep Learning. [Dissertation, Gottfried Wilhelm Leibniz Universität Hannover]. https://doi.org/10.15488/11588
BibTeX
@phdthesis{52a4a2936c534537a8da5e3642761c76,
title = "Well-Calibrated Predictive Uncertainty in Medical Imaging with Bayesian Deep Learning",
abstract = "The use of medical imaging has revolutionized modern medicine over the last century. It has helped provide insight into human anatomy and physiology. Many diseases and pathologies can only be diagnosed with the use of imaging techniques. Due to increasing availability and the reduction of costs, the number of medical imaging examinations is continuously growing, resulting in a huge amount of data that has to be assessed by medical experts. Computers can be used to assist in and automate the process of medical image analysis. Recent advances in deep learning allow this to be done with reasonable accuracy and on a large scale. The biggest disadvantage of these methods in practice is their black-box nature. Although they achieve the highest accuracy, their acceptance in clinical practice may be limited by their lack of interpretability and transparency. These concerns are reinforced by the core problem that this dissertation addresses: the overconfidence of deep models in incorrect predictions. How do we know if we do not know? This thesis deals with Bayesian methods for estimation of predictive uncertainty in medical imaging with deep learning. We show that the uncertainty from variational Bayesian inference is miscalibrated and does not represent the predictive error well. To quantify miscalibration, we propose the uncertainty calibration error, which alleviates disadvantages of existing calibration metrics. Moreover, we introduce logit scaling for deep Bayesian Monte Carlo methods to calibrate uncertainty after training. Calibrated deep Bayesian models better detect false predictions and out-of-distribution data. Bayesian uncertainty is further leveraged to reduce the economic burden of large data labeling, which is needed to train deep models. We propose BatchPL, a sample acquisition scheme that selects highly informative samples for pseudo-labeling in self- and unsupervised learning scenarios. The approach achieves state-of-the-art performance on both medical and non-medical classification data sets. Many medical imaging problems exceed classification. Therefore, we extended estimation and calibration of predictive uncertainty to deep regression (sigma scaling) and evaluated it on different medical imaging regression tasks. To mitigate the problem of hallucinations in deep generative models, we provide a Bayesian approach to deep image prior (MCDIP), which is not affected by hallucinations as the model only ever has access to one single image.",
author = "Laves, {Max-Heinrich Viktor}",
note = "Doctoral thesis",
year = "2021",
doi = "10.15488/11588",
language = "English",
school = "Leibniz University Hannover",

}

RIS

TY - THES
T1 - Well-Calibrated Predictive Uncertainty in Medical Imaging with Bayesian Deep Learning
AU - Laves, Max-Heinrich Viktor
N1 - Doctoral thesis
PY - 2021
Y1 - 2021
U2 - 10.15488/11588
DO - 10.15488/11588
M3 - Doctoral thesis
CY - Hannover
ER -