Details
Original language | English |
---|---|
Title of host publication | Medical Imaging 2019 |
Subtitle of host publication | Image-Guided Procedures, Robotic Interventions, and Modeling |
Editors | Baowei Fei, Cristian A. Linte |
Publisher | SPIE |
Number of pages | 7 |
ISBN (electronic) | 9781510625495 |
Publication status | Published - 8 Mar 2019 |
Event | Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling - San Diego, United States, 17 Feb 2019 → 19 Feb 2019 |
Publication series
Name | Progress in Biomedical Optics and Imaging - Proceedings of SPIE |
---|---|
Volume | 10951 |
ISSN (Print) | 1605-7422 |
Abstract
In microsurgery, lasers have emerged as precise tools for bone ablation. A remaining challenge is automatic control of laser bone ablation with 4D optical coherence tomography (OCT). As a high-resolution imaging modality, OCT provides volumetric images of tissue and reveals bone position and orientation (pose) as well as thickness. However, existing approaches for OCT-based laser ablation control rely on external tracking systems or invasively ablated artificial landmarks to track the pose of the OCT probe relative to the tissue. This can be superseded by estimating the scene flow caused by relative movement between the OCT-based laser ablation system and the patient. This paper therefore addresses 2.5D scene flow estimation of volumetric OCT images for application in laser ablation. We present a semi-supervised convolutional neural network based tracking scheme for consecutive 3D OCT volumes and apply it to a realistic semi-synthetic data set of ex vivo human temporal bone specimens. The scene flow is estimated in a two-stage approach. In the first stage, 2D lateral scene flow is computed on census-transformed en-face argument-of-maximum intensity projections. Subsequently, the projections are warped by the predicted lateral flow and 1D depth flow is estimated. The neural network is trained semi-supervised by combining the error with respect to ground truth and the reconstruction error of warped images with a spatial flow smoothness assumption. Quantitative evaluation reveals a mean endpoint error of (4.7 ± 3.5) voxels, or (27.5 ± 20.5) μm, for scene flow caused by simulated relative movement between the OCT probe and the bone. Scene flow estimation for 4D OCT enables markerless tracking of mastoid bone structures for image guidance in general and for automated laser ablation control.
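The first stage described in the abstract — computing flow on census-transformed en-face argument-of-maximum intensity projections — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the volume shape, the 3×3 census window, and the function names are assumptions for illustration only:

```python
import numpy as np

def enface_projections(volume):
    """Collapse a 3D OCT volume (depth, height, width) along depth into
    two en-face images: the maximum intensity projection and the
    argument-of-maximum projection (a depth map of the brightest voxel)."""
    mip = volume.max(axis=0)       # maximum intensity projection
    amip = volume.argmax(axis=0)   # argument-of-maximum projection
    return mip, amip

def census_transform(img, radius=1):
    """3x3 census transform: encode each pixel as a bit pattern of
    comparisons against its neighbors, which is robust to intensity
    changes and thus suited to speckle-laden OCT projections."""
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    bits = np.zeros((h, w), dtype=np.uint16)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue  # skip the center pixel
            neighbor = padded[radius + dy : radius + dy + h,
                              radius + dx : radius + dx + w]
            bits = (bits << 1) | (neighbor > img).astype(np.uint16)
    return bits

# Toy volume standing in for an OCT acquisition (shape is an assumption).
rng = np.random.default_rng(0)
vol = rng.random((64, 32, 32)).astype(np.float32)
mip, amip = enface_projections(vol)
census = census_transform(amip.astype(np.float32))
```

In a pipeline like the one outlined above, a 2D flow network would then operate on such census images of consecutive volumes, and the predicted lateral flow would warp the argument-of-maximum projections before the residual 1D depth flow is estimated.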
Keywords
- Cochlear implantation, Laser control, Microsurgery, Optical flow, Scene flow, Tracking
ASJC Scopus subject areas
- Materials Science(all)
- Electronic, Optical and Magnetic Materials
- Materials Science(all)
- Biomaterials
- Physics and Astronomy(all)
- Atomic and Molecular Physics, and Optics
- Medicine(all)
- Radiology, Nuclear Medicine and Imaging
Cite this
Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling. ed. / Baowei Fei; Cristian A. Linte. SPIE, 2019. 109510R (Progress in Biomedical Optics and Imaging - Proceedings of SPIE; Vol. 10951).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Deep-learning-based 2.5D flow field estimation for maximum intensity projections of 4D optical coherence tomography
AU - Laves, Max Heinrich
AU - Ihler, Sontje
AU - Kahrs, Lüder A.
AU - Ortmaier, Tobias
N1 - Funding information: This research has received funding from the European Union as being part of the EFRE OPhonLas project.
PY - 2019/3/8
Y1 - 2019/3/8
AB - In microsurgery, lasers have emerged as precise tools for bone ablation. A remaining challenge is automatic control of laser bone ablation with 4D optical coherence tomography (OCT). As a high-resolution imaging modality, OCT provides volumetric images of tissue and reveals bone position and orientation (pose) as well as thickness. However, existing approaches for OCT-based laser ablation control rely on external tracking systems or invasively ablated artificial landmarks to track the pose of the OCT probe relative to the tissue. This can be superseded by estimating the scene flow caused by relative movement between the OCT-based laser ablation system and the patient. This paper therefore addresses 2.5D scene flow estimation of volumetric OCT images for application in laser ablation. We present a semi-supervised convolutional neural network based tracking scheme for consecutive 3D OCT volumes and apply it to a realistic semi-synthetic data set of ex vivo human temporal bone specimens. The scene flow is estimated in a two-stage approach. In the first stage, 2D lateral scene flow is computed on census-transformed en-face argument-of-maximum intensity projections. Subsequently, the projections are warped by the predicted lateral flow and 1D depth flow is estimated. The neural network is trained semi-supervised by combining the error with respect to ground truth and the reconstruction error of warped images with a spatial flow smoothness assumption. Quantitative evaluation reveals a mean endpoint error of (4.7 ± 3.5) voxels, or (27.5 ± 20.5) μm, for scene flow caused by simulated relative movement between the OCT probe and the bone. Scene flow estimation for 4D OCT enables markerless tracking of mastoid bone structures for image guidance in general and for automated laser ablation control.
KW - Cochlear implantation
KW - Laser control
KW - Microsurgery
KW - Optical flow
KW - Scene flow
KW - Tracking
UR - http://www.scopus.com/inward/record.url?scp=85068910692&partnerID=8YFLogxK
U2 - 10.1117/12.2512952
DO - 10.1117/12.2512952
M3 - Conference contribution
AN - SCOPUS:85068910692
T3 - Progress in Biomedical Optics and Imaging - Proceedings of SPIE
BT - Medical Imaging 2019
A2 - Fei, Baowei
A2 - Linte, Cristian A.
PB - SPIE
T2 - Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling
Y2 - 17 February 2019 through 19 February 2019
ER -