Deep-learning-based 2.5D flow field estimation for maximum intensity projections of 4D optical coherence tomography

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Max Heinrich Laves
  • Sontje Ihler
  • Lüder A. Kahrs
  • Tobias Ortmaier

Details

Original language: English
Title of host publication: Medical Imaging 2019
Subtitle: Image-Guided Procedures, Robotic Interventions, and Modeling
Editors: Baowei Fei, Cristian A. Linte
Publisher: SPIE
Number of pages: 7
ISBN (electronic): 9781510625495
Publication status: Published - 8 Mar 2019
Event: Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling - San Diego, United States
Duration: 17 Feb 2019 - 19 Feb 2019

Publication series

Name: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume: 10951
ISSN (print): 1605-7422

Abstract

In microsurgery, lasers have emerged as precise tools for bone ablation. A remaining challenge is the automatic control of laser bone ablation with 4D optical coherence tomography (OCT). As a high-resolution imaging modality, OCT provides volumetric images of tissue and yields information on bone position and orientation (pose) as well as thickness. However, existing approaches to OCT-based laser ablation control rely on external tracking systems or invasively ablated artificial landmarks to track the pose of the OCT probe relative to the tissue. This can be superseded by estimating the scene flow caused by relative movement between the OCT-based laser ablation system and the patient. This paper therefore deals with 2.5D scene flow estimation of volumetric OCT images for application in laser ablation. We present a semi-supervised tracking scheme based on convolutional neural networks for subsequent 3D OCT volumes and apply it to a realistic semi-synthetic data set of ex vivo human temporal bone specimens. The scene flow is estimated in a two-stage approach. In the first stage, 2D lateral scene flow is computed on census-transformed en-face argument-of-maximum intensity projections. Subsequently, the projections are warped by the predicted lateral flow and 1D depth flow is estimated. The neural network is trained semi-supervised by combining the error with respect to ground truth and the reconstruction error of the warped images with assumptions of spatial flow smoothness. Quantitative evaluation reveals a mean endpoint error of (4.7 ± 3.5) voxels, or (27.5 ± 20.5) μm, for scene flow estimation caused by simulated relative movement between the OCT probe and the bone. Scene flow estimation for 4D OCT enables markerless tracking of mastoid bone structures for image guidance in general and for automated laser ablation control.
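
The two-stage approach lends itself to a compact illustration. The following Python/PyTorch sketch shows one possible reading of the building blocks the abstract names: the en-face maximum and argument-of-maximum projections, the census transform, warping by a predicted lateral flow, and the combined semi-supervised loss. The flow networks themselves, the census window size, and the loss weights are not specified in this record, so everything below is an assumption for illustration, not the authors' implementation.

# Minimal sketch of the 2.5D pipeline building blocks (assumptions noted inline).
import torch
import torch.nn.functional as F

def enface_projections(volume):
    # Reduce a 3D OCT volume (depth D, height H, width W) along depth to the
    # maximum intensity projection and the argument-of-maximum depth map.
    mip, arg_depth = volume.max(dim=0)
    return mip, arg_depth.float()

def census_transform(img, window=7):  # window size is an assumption
    # Encode each pixel by sign comparisons against its neighbourhood,
    # making the later matching robust to intensity changes between volumes.
    pad = window // 2
    patches = F.unfold(img[None, None], kernel_size=window, padding=pad)
    center = img.reshape(1, 1, -1)
    return (patches > center).float().reshape(window * window, *img.shape)

def warp_by_flow(img, flow):
    # Bilinearly warp an (H, W) image by a (2, H, W) lateral flow field, as
    # needed for the reconstruction loss and before 1D depth-flow estimation.
    H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    gx = 2.0 * (xs + flow[0]) / (W - 1) - 1.0  # normalise to [-1, 1]
    gy = 2.0 * (ys + flow[1]) / (H - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)[None]
    return F.grid_sample(img[None, None], grid, align_corners=True)[0, 0]

def smoothness(flow):
    # Spatial flow-smoothness assumption (first-order differences).
    return (flow[:, :, 1:] - flow[:, :, :-1]).abs().mean() + \
           (flow[:, 1:, :] - flow[:, :-1, :]).abs().mean()

def semi_supervised_loss(flow_pred, flow_gt, mip_ref, mip_mov,
                         w_photo=1.0, w_smooth=0.1):  # weights are assumptions
    # Supervised mean endpoint error to ground truth, plus unsupervised
    # photometric reconstruction error of the warped projection, plus smoothness.
    epe = (flow_pred - flow_gt).pow(2).sum(dim=0).sqrt().mean()
    photo = (warp_by_flow(mip_mov, flow_pred) - mip_ref).abs().mean()
    return epe + w_photo * photo + w_smooth * smoothness(flow_pred)

In this reading, stage one would feed census-transformed projections of two subsequent volumes to a 2D flow network trained with such a loss; stage two would warp the argument-of-maximum depth maps by the predicted lateral flow and estimate the remaining 1D depth flow.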

Cite

Deep-learning-based 2.5D flow field estimation for maximum intensity projections of 4D optical coherence tomography. / Laves, Max Heinrich; Ihler, Sontje; Kahrs, Lüder A. et al.
Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling. ed. / Baowei Fei; Cristian A. Linte. SPIE, 2019. 109510R (Progress in Biomedical Optics and Imaging - Proceedings of SPIE; Vol. 10951).

Laves, MH, Ihler, S, Kahrs, LA & Ortmaier, T 2019, Deep-learning-based 2.5D flow field estimation for maximum intensity projections of 4D optical coherence tomography. in B Fei & CA Linte (eds), Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling., 109510R, Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 10951, SPIE, Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling, San Diego, United States, 17 Feb. 2019. https://doi.org/10.1117/12.2512952
Laves, M. H., Ihler, S., Kahrs, L. A., & Ortmaier, T. (2019). Deep-learning-based 2.5D flow field estimation for maximum intensity projections of 4D optical coherence tomography. In B. Fei, & C. A. Linte (Eds.), Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling Article 109510R (Progress in Biomedical Optics and Imaging - Proceedings of SPIE; Vol. 10951). SPIE. https://doi.org/10.1117/12.2512952
Laves MH, Ihler S, Kahrs LA, Ortmaier T. Deep-learning-based 2.5D flow field estimation for maximum intensity projections of 4D optical coherence tomography. In Fei B, Linte CA, editors, Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling. SPIE. 2019. 109510R. (Progress in Biomedical Optics and Imaging - Proceedings of SPIE). doi: 10.1117/12.2512952
Laves, Max Heinrich ; Ihler, Sontje ; Kahrs, Lüder A. et al. / Deep-learning-based 2.5D flow field estimation for maximum intensity projections of 4D optical coherence tomography. Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling. ed. / Baowei Fei ; Cristian A. Linte. SPIE, 2019. (Progress in Biomedical Optics and Imaging - Proceedings of SPIE).
@inproceedings{7db0024f75d64f05ae2e076cbf02b018,
title = "Deep-learning-based 2.5D flow field estimation for maximum intensity projections of 4D optical coherence tomography",
keywords = "Cochlear implantation, Laser control, Microsurgery, Optical flow, Scene flow, Tracking",
author = "Laves, {Max Heinrich} and Sontje Ihler and Kahrs, {L{\"u}der A.} and Tobias Ortmaier",
note = "Funding information: This research has received funding from the European Union as being part of the EFRE OPhonLas project.; Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling ; Conference date: 17-02-2019 Through 19-02-2019",
year = "2019",
month = mar,
day = "8",
doi = "10.1117/12.2512952",
language = "English",
series = "Progress in Biomedical Optics and Imaging - Proceedings of SPIE",
publisher = "SPIE",
editor = "Baowei Fei and Linte, {Cristian A.}",
booktitle = "Medical Imaging 2019",
address = "United States",

}

TY - GEN

T1 - Deep-learning-based 2.5D flow field estimation for maximum intensity projections of 4D optical coherence tomography

AU - Laves, Max Heinrich

AU - Ihler, Sontje

AU - Kahrs, Lüder A.

AU - Ortmaier, Tobias

N1 - Funding information: This research has received funding from the European Union as being part of the EFRE OPhonLas project.

PY - 2019/3/8

Y1 - 2019/3/8

KW - Cochlear implantation

KW - Laser control

KW - Microsurgery

KW - Optical flow

KW - Scene flow

KW - Tracking

UR - http://www.scopus.com/inward/record.url?scp=85068910692&partnerID=8YFLogxK

U2 - 10.1117/12.2512952

DO - 10.1117/12.2512952

M3 - Conference contribution

AN - SCOPUS:85068910692

T3 - Progress in Biomedical Optics and Imaging - Proceedings of SPIE

BT - Medical Imaging 2019

A2 - Fei, Baowei

A2 - Linte, Cristian A.

PB - SPIE

T2 - Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling

Y2 - 17 February 2019 through 19 February 2019

ER -
