Details
| Translated title (German) | Kalibrierte prädiktive Unsicherheit in der medizinischen Bildgebung mit Bayesian Deep Learning |
| --- | --- |
| Original language | English |
| Qualification | Doctor of Engineering (Dr.-Ing.) |
| Awarding institution | |
| Supervised by | |
| Date of award | 13 Dec 2021 |
| Place of publication | Hannover |
| Publication status | Published - 2021 |
Abstract

The use of medical imaging has revolutionized modern medicine over the last century. It has helped provide insight into human anatomy and physiology. Many diseases and pathologies can only be diagnosed with the use of imaging techniques. Due to increasing availability and the reduction of costs, the number of medical imaging examinations is continuously growing, resulting in a huge amount of data that has to be assessed by medical experts. Computers can be used to assist in and automate the process of medical image analysis. Recent advances in deep learning allow this to be done with reasonable accuracy and on a large scale. The biggest disadvantage of these methods in practice is their black-box nature. Although they achieve the highest accuracy, their acceptance in clinical practice may be limited by their lack of interpretability and transparency. These concerns are reinforced by the core problem that this dissertation addresses: the overconfidence of deep models in incorrect predictions. How do we know if we do not know?

This thesis deals with Bayesian methods for estimation of predictive uncertainty in medical imaging with deep learning. We show that the uncertainty from variational Bayesian inference is miscalibrated and does not represent the predictive error well. To quantify miscalibration, we propose the uncertainty calibration error, which alleviates disadvantages of existing calibration metrics. Moreover, we introduce logit scaling for deep Bayesian Monte Carlo methods to calibrate uncertainty after training. Calibrated deep Bayesian models better detect false predictions and out-of-distribution data.

Bayesian uncertainty is further leveraged to reduce the economic burden of large data labeling, which is needed to train deep models. We propose BatchPL, a sample acquisition scheme that selects highly informative samples for pseudo-labeling in self- and unsupervised learning scenarios. The approach achieves state-of-the-art performance on both medical and non-medical classification data sets.

Many medical imaging problems exceed classification. Therefore, we extended estimation and calibration of predictive uncertainty to deep regression (sigma scaling) and evaluated it on different medical imaging regression tasks. To mitigate the problem of hallucinations in deep generative models, we provide a Bayesian approach to deep image prior (MCDIP), which is not affected by hallucinations as the model only ever has access to one single image.
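The abstract only names the calibration techniques, so for orientation here is a minimal sketch of what post-hoc logit (temperature) scaling over Monte Carlo samples can look like. Everything in it, the function names `scaled_confidence` and `fit_temperature`, the `(T, N, C)` tensor layout, and the use of LBFGS on the validation NLL, is an illustrative assumption, not code from the thesis.

```python
import torch

# Minimal sketch (not the thesis code) of post-hoc logit scaling for a
# Bayesian Monte Carlo classifier. Assumed layout: `mc_logits` has shape
# (T, N, C) -- T stochastic forward passes (e.g. MC dropout) over N held-out
# validation examples with C classes; `labels` has shape (N,).

def scaled_confidence(mc_logits: torch.Tensor, temperature: torch.Tensor) -> torch.Tensor:
    """Average the softmax of temperature-scaled logits over the T MC samples."""
    return torch.softmax(mc_logits / temperature, dim=-1).mean(dim=0)

def fit_temperature(mc_logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Fit a single scalar temperature by minimizing validation NLL after training."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

    def closure():
        optimizer.zero_grad()
        probs = scaled_confidence(mc_logits, log_t.exp())
        loss = torch.nn.functional.nll_loss(probs.clamp_min(1e-12).log(), labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().detach()
```

Because dividing logits by a positive scalar never changes the arg max, a calibration step of this shape leaves accuracy untouched and only adjusts how confident the averaged predictive distribution is.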
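For the regression case, the idea behind rescaling a predicted variance by a single factor fitted on validation data can be sketched as follows. The closed form shown is the standard NLL-optimal rescaling for a Gaussian likelihood; the names `fit_variance_scale`, `mu`, `var`, and `y` are illustrative assumptions rather than identifiers from the thesis.

```python
import torch

# Minimal sketch (not the thesis code) of variance rescaling for regression
# uncertainty. A network predicts a mean `mu` and a variance `var` per sample;
# a single factor s^2, fitted on validation data, rescales `var`. For a
# Gaussian likelihood N(y | mu, s^2 * var), setting the derivative of the
# negative log-likelihood w.r.t. s^2 to zero gives the closed form
#   s^2 = (1/N) * sum_i (y_i - mu_i)^2 / var_i

def fit_variance_scale(mu: torch.Tensor, var: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Return the NLL-optimal scalar s^2 for N(y | mu, s^2 * var)."""
    return ((y - mu) ** 2 / var).mean()

# Hypothetical usage: fit on validation data, apply to new predictions.
# s2 = fit_variance_scale(mu_val, var_val, y_val)
# calibrated_var = s2 * var_test
```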
Cite
Laves, M-HV 2021, Well-Calibrated Predictive Uncertainty in Medical Imaging with Bayesian Deep Learning. Hannover, 2021. 145 p.
Publication: Thesis › Doctoral thesis
TY - BOOK
T1 - Well-Calibrated Predictive Uncertainty in Medical Imaging with Bayesian Deep Learning
AU - Laves, Max-Heinrich Viktor
N1 - Doctoral thesis
PY - 2021
Y1 - 2021
AB - The use of medical imaging has revolutionized modern medicine over the last century. It has helped provide insight into human anatomy and physiology. Many diseases and pathologies can only be diagnosed with the use of imaging techniques. Due to increasing availability and the reduction of costs, the number of medical imaging examinations is continuously growing, resulting in a huge amount of data that has to be assessed by medical experts. Computers can be used to assist in and automate the process of medical image analysis. Recent advances in deep learning allow this to be done with reasonable accuracy and on a large scale. The biggest disadvantage of these methods in practice is their black-box nature. Although they achieve the highest accuracy, their acceptance in clinical practice may be limited by their lack of interpretability and transparency. These concerns are reinforced by the core problem that this dissertation addresses: the overconfidence of deep models in incorrect predictions. How do we know if we do not know? This thesis deals with Bayesian methods for estimation of predictive uncertainty in medical imaging with deep learning. We show that the uncertainty from variational Bayesian inference is miscalibrated and does not represent the predictive error well. To quantify miscalibration, we propose the uncertainty calibration error, which alleviates disadvantages of existing calibration metrics. Moreover, we introduce logit scaling for deep Bayesian Monte Carlo methods to calibrate uncertainty after training. Calibrated deep Bayesian models better detect false predictions and out-of-distribution data. Bayesian uncertainty is further leveraged to reduce the economic burden of large data labeling, which is needed to train deep models. We propose BatchPL, a sample acquisition scheme that selects highly informative samples for pseudo-labeling in self- and unsupervised learning scenarios. The approach achieves state-of-the-art performance on both medical and non-medical classification data sets. Many medical imaging problems exceed classification. Therefore, we extended estimation and calibration of predictive uncertainty to deep regression (sigma scaling) and evaluated it on different medical imaging regression tasks. To mitigate the problem of hallucinations in deep generative models, we provide a Bayesian approach to deep image prior (MCDIP), which is not affected by hallucinations as the model only ever has access to one single image.
U2 - 10.15488/11588
DO - 10.15488/11588
M3 - Doctoral thesis
CY - Hannover
ER -