Uncertainty Estimation in Medical Image Denoising with Bayesian Deep Image Prior

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research

Authors

  • Max-Heinrich Laves
  • Malte Tölle
  • Tobias Ortmaier


Details

Original language: English
Title of host publication: Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Graphs in Biomedical Image Analysis
Editors: Carole H. Sudre, Hamid Fehri, Tal Arbel, Christian F. Baumgartner, Adrian Dalca, Ryutaro Tanno, Koen Van Leemput, William M. Wells, Aristeidis Sotiras, Bartlomiej Papiez, Enzo Ferrante, Sarah Parisot
Pages: 81-96
Number of pages: 16
ISBN (electronic): 978-3-030-60365-6
Publication status: Published - 5 Oct 2020
Event: Second International Workshop, UNSURE 2020 and Third International Workshop, GRAIL 2020 - Lima, Peru
Duration: 8 Oct 2020 → …

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12443 LNCS
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Abstract

Uncertainty quantification in inverse medical imaging tasks with deep learning has received little attention. However, deep models trained on large data sets tend to hallucinate and create artifacts in the reconstructed output that are not anatomically present. We use a randomly initialized convolutional network as parameterization of the reconstructed image and perform gradient descent to match the observation, which is known as deep image prior. In this case, the reconstruction does not suffer from hallucinations as no prior training is performed. We extend this to a Bayesian approach with Monte Carlo dropout to quantify both aleatoric and epistemic uncertainty. The presented method is evaluated on the task of denoising different medical imaging modalities. The experimental results show that our approach yields well-calibrated uncertainty. That is, the predictive uncertainty correlates with the predictive error. This allows for reliable uncertainty estimates and can tackle the problem of hallucinations and artifacts in inverse medical imaging tasks.
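The abstract's recipe can be illustrated with a short, hedged sketch. The snippet below is not the authors' implementation: it simulates the T stochastic forward passes that Monte Carlo dropout would produce from a fitted deep-image-prior network (here replaced by placeholder samples around a noisy observation) and then applies the standard decomposition of predictive variance into epistemic and aleatoric parts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy denoising setup: a clean 1-D "image" and its noisy observation.
clean = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
noisy = clean + rng.normal(0.0, 0.3, size=clean.shape)

# Stand-in for T stochastic forward passes of a dropout network that was
# fitted to `noisy` (deep image prior). In the real method, each pass t
# yields a denoised mean mu_t and a per-pixel aleatoric variance sigma2_t
# predicted by the network; both are simulated placeholders here.
T = 50
mu = np.stack([noisy + rng.normal(0.0, 0.05, noisy.shape) for _ in range(T)])
sigma2 = np.full((T, noisy.size), 0.09)

# Standard MC-dropout decomposition of the predictive variance:
mean_pred = mu.mean(axis=0)       # posterior predictive mean (the estimate)
epistemic = mu.var(axis=0)        # spread across dropout samples
aleatoric = sigma2.mean(axis=0)   # average predicted observation noise
predictive = epistemic + aleatoric
```

Well-calibrated uncertainty, as claimed in the abstract, would mean `predictive` tracks the squared error `(mean_pred - clean) ** 2` on average.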


Cite

Uncertainty Estimation in Medical Image Denoising with Bayesian Deep Image Prior. / Laves, Max-Heinrich; Tölle, Malte; Ortmaier, Tobias.
Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Graphs in Biomedical Image Analysis. ed. / Carole H. Sudre; Hamid Fehri; Tal Arbel; Christian F. Baumgartner; Adrian Dalca; Ryutaro Tanno; Koen Van Leemput; William M. Wells; Aristeidis Sotiras; Bartlomiej Papiez; Enzo Ferrante; Sarah Parisot. 2020. pp. 81-96 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 12443 LNCS).


Laves, M-H, Tölle, M & Ortmaier, T 2020, Uncertainty Estimation in Medical Image Denoising with Bayesian Deep Image Prior. in CH Sudre, H Fehri, T Arbel, CF Baumgartner, A Dalca, R Tanno, K Van Leemput, WM Wells, A Sotiras, B Papiez, E Ferrante & S Parisot (Hrsg.), Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Graphs in Biomedical Image Analysis. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Bd. 12443 LNCS, S. 81-96, Second International Workshop, UNSURE 2020 and Third International Workshop, GRAIL 2020, Lima, Peru, 8 Okt. 2020. https://doi.org/10.1007/978-3-030-60365-6_9
Laves, M.-H., Tölle, M., & Ortmaier, T. (2020). Uncertainty Estimation in Medical Image Denoising with Bayesian Deep Image Prior. In C. H. Sudre, H. Fehri, T. Arbel, C. F. Baumgartner, A. Dalca, R. Tanno, K. Van Leemput, W. M. Wells, A. Sotiras, B. Papiez, E. Ferrante, & S. Parisot (Hrsg.), Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Graphs in Biomedical Image Analysis (S. 81-96). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Band 12443 LNCS). https://doi.org/10.1007/978-3-030-60365-6_9
Laves MH, Tölle M, Ortmaier T. Uncertainty Estimation in Medical Image Denoising with Bayesian Deep Image Prior. in Sudre CH, Fehri H, Arbel T, Baumgartner CF, Dalca A, Tanno R, Van Leemput K, Wells WM, Sotiras A, Papiez B, Ferrante E, Parisot S, Hrsg., Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Graphs in Biomedical Image Analysis. 2020. S. 81-96. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). doi: 10.1007/978-3-030-60365-6_9
Laves, Max-Heinrich ; Tölle, Malte ; Ortmaier, Tobias. / Uncertainty Estimation in Medical Image Denoising with Bayesian Deep Image Prior. Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Graphs in Biomedical Image Analysis. Hrsg. / Carole H. Sudre ; Hamid Fehri ; Tal Arbel ; Christian F. Baumgartner ; Adrian Dalca ; Ryutaro Tanno ; Koen Van Leemput ; William M. Wells ; Aristeidis Sotiras ; Bartlomiej Papiez ; Enzo Ferrante ; Sarah Parisot. 2020. S. 81-96 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
BibTeX
@inproceedings{cd0bb91cdeef4f53b4658adb13fd3274,
title = "Uncertainty Estimation in Medical Image Denoising with Bayesian Deep Image Prior",
abstract = "Uncertainty quantification in inverse medical imaging tasks with deep learning has received little attention. However, deep models trained on large data sets tend to hallucinate and create artifacts in the reconstructed output that are not anatomically present. We use a randomly initialized convolutional network as parameterization of the reconstructed image and perform gradient descent to match the observation, which is known as deep image prior. In this case, the reconstruction does not suffer from hallucinations as no prior training is performed. We extend this to a Bayesian approach with Monte Carlo dropout to quantify both aleatoric and epistemic uncertainty. The presented method is evaluated on the task of denoising different medical imaging modalities. The experimental results show that our approach yields well-calibrated uncertainty. That is, the predictive uncertainty correlates with the predictive error. This allows for reliable uncertainty estimates and can tackle the problem of hallucinations and artifacts in inverse medical imaging tasks.",
keywords = "eess.IV, cs.CV, Deep learning, Hallucination, Variational inference",
author = "Max-Heinrich Laves and Malte T{\"o}lle and Tobias Ortmaier",
year = "2020",
month = oct,
day = "5",
doi = "10.1007/978-3-030-60365-6_9",
language = "English",
isbn = "978-3-030-60364-9",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
pages = "81--96",
editor = "Sudre, {Carole H.} and Hamid Fehri and Tal Arbel and Baumgartner, {Christian F.} and Adrian Dalca and Ryutaro Tanno and {Van Leemput}, Koen and Wells, {William M.} and Aristeidis Sotiras and Bartlomiej Papiez and Enzo Ferrante and Sarah Parisot",
booktitle = "Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Graphs in Biomedical Image Analysis",
note = "Second International Workshop, UNSURE 2020 and Third International Workshop, GRAIL 2020 ; Conference date: 08-10-2020",

}

RIS

TY - GEN

T1 - Uncertainty Estimation in Medical Image Denoising with Bayesian Deep Image Prior

AU - Laves, Max-Heinrich

AU - Tölle, Malte

AU - Ortmaier, Tobias

PY - 2020/10/5

Y1 - 2020/10/5

N2 - Uncertainty quantification in inverse medical imaging tasks with deep learning has received little attention. However, deep models trained on large data sets tend to hallucinate and create artifacts in the reconstructed output that are not anatomically present. We use a randomly initialized convolutional network as parameterization of the reconstructed image and perform gradient descent to match the observation, which is known as deep image prior. In this case, the reconstruction does not suffer from hallucinations as no prior training is performed. We extend this to a Bayesian approach with Monte Carlo dropout to quantify both aleatoric and epistemic uncertainty. The presented method is evaluated on the task of denoising different medical imaging modalities. The experimental results show that our approach yields well-calibrated uncertainty. That is, the predictive uncertainty correlates with the predictive error. This allows for reliable uncertainty estimates and can tackle the problem of hallucinations and artifacts in inverse medical imaging tasks.

AB - Uncertainty quantification in inverse medical imaging tasks with deep learning has received little attention. However, deep models trained on large data sets tend to hallucinate and create artifacts in the reconstructed output that are not anatomically present. We use a randomly initialized convolutional network as parameterization of the reconstructed image and perform gradient descent to match the observation, which is known as deep image prior. In this case, the reconstruction does not suffer from hallucinations as no prior training is performed. We extend this to a Bayesian approach with Monte Carlo dropout to quantify both aleatoric and epistemic uncertainty. The presented method is evaluated on the task of denoising different medical imaging modalities. The experimental results show that our approach yields well-calibrated uncertainty. That is, the predictive uncertainty correlates with the predictive error. This allows for reliable uncertainty estimates and can tackle the problem of hallucinations and artifacts in inverse medical imaging tasks.

KW - eess.IV

KW - cs.CV

KW - Deep learning

KW - Hallucination

KW - Variational inference

UR - http://www.scopus.com/inward/record.url?scp=85093089252&partnerID=8YFLogxK

U2 - 10.1007/978-3-030-60365-6_9

DO - 10.1007/978-3-030-60365-6_9

M3 - Conference contribution

SN - 978-3-030-60364-9

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 81

EP - 96

BT - Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Graphs in Biomedical Image Analysis

A2 - Sudre, Carole H.

A2 - Fehri, Hamid

A2 - Arbel, Tal

A2 - Baumgartner, Christian F.

A2 - Dalca, Adrian

A2 - Tanno, Ryutaro

A2 - Van Leemput, Koen

A2 - Wells, William M.

A2 - Sotiras, Aristeidis

A2 - Papiez, Bartlomiej

A2 - Ferrante, Enzo

A2 - Parisot, Sarah

T2 - Second International Workshop, UNSURE 2020 and Third International Workshop, GRAIL 2020

Y2 - 8 October 2020

ER -