Details
Original language | English |
---|---|
Title of host publication | Medical Imaging 2019 |
Subtitle | Image-Guided Procedures, Robotic Interventions, and Modeling |
Editors | Baowei Fei, Cristian A. Linte |
Publisher | SPIE |
Number of pages | 7 |
ISBN (electronic) | 9781510625495 |
Publication status | Published - 8 Mar 2019 |
Event | Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling - San Diego, United States. Duration: 17 Feb 2019 → 19 Feb 2019 |
Publication series
Name | Progress in Biomedical Optics and Imaging - Proceedings of SPIE |
---|---|
Volume | 10951 |
ISSN (Print) | 1605-7422 |
Abstract
In microsurgery, lasers have emerged as precise tools for bone ablation. A remaining challenge is the automatic control of laser bone ablation with 4D optical coherence tomography (OCT). OCT, as a high-resolution imaging modality, provides volumetric images of tissue and yields information on bone position and orientation (pose) as well as thickness. However, existing approaches for OCT-based laser ablation control rely on external tracking systems or invasively ablated artificial landmarks to track the pose of the OCT probe relative to the tissue. This can be superseded by estimating the scene flow caused by relative movement between the OCT-based laser ablation system and the patient. This paper therefore deals with 2.5D scene flow estimation of volumetric OCT images for application in laser ablation. We present a semi-supervised convolutional neural network based tracking scheme for subsequent 3D OCT volumes and apply it to a realistic semi-synthetic data set of ex vivo human temporal bone specimens. The scene flow is estimated in a two-stage approach. In the first stage, 2D lateral scene flow is computed on census-transformed en-face argument-of-maximum intensity projections. Subsequently, the projections are warped by the predicted lateral flow and 1D depth flow is estimated. The neural network is trained semi-supervised by combining the error with respect to ground truth and the reconstruction error of warped images with assumptions of spatial flow smoothness. Quantitative evaluation reveals a mean endpoint error of (4.7 ± 3.5) voxels, or (27.5 ± 20.5) μm, for scene flow estimation caused by simulated relative movement between the OCT probe and the bone. The scene flow estimation for 4D OCT enables its use for markerless tracking of mastoid bone structures for image guidance in general and for automated laser ablation control.
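The abstract describes the two-stage flow estimation and the semi-supervised loss only in prose. The sketch below, assuming PyTorch, illustrates how such a pipeline could be assembled from en-face projections, a simplified census descriptor, lateral warping, and a combined loss. The networks `lateral_net` and `depth_net`, the census window size, and the loss weights are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch (assumed PyTorch) of a two-stage 2.5D flow pipeline as
# outlined in the abstract. All names and hyperparameters are hypothetical.
import torch
import torch.nn.functional as F


def enface_projections(volume):
    """Reduce a 3D OCT volume (D, H, W) to 2.5D en-face images: the maximum
    intensity projection along depth and the argument-of-maximum (depth
    index) projection, both of shape (H, W)."""
    mip, argmax_depth = volume.max(dim=0)
    return mip, argmax_depth.float()


def census_transform(img, window=7):
    """Simplified census-style descriptor: compare each pixel with its
    neighbourhood and return the mean of the binary comparisons."""
    pad = window // 2
    patches = F.unfold(img[None, None], window, padding=pad)       # (1, w*w, H*W)
    center = img.reshape(1, 1, -1)
    return (patches > center).float().mean(dim=1).reshape(img.shape)


def warp(img, flow):
    """Backward-warp a 2D image (H, W) with a lateral flow field (2, H, W)."""
    h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid_x = 2.0 * (xs + flow[0]) / (w - 1) - 1.0
    grid_y = 2.0 * (ys + flow[1]) / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)[None]
    return F.grid_sample(img[None, None], grid, align_corners=True)[0, 0]


def estimate_scene_flow(vol_t0, vol_t1, lateral_net, depth_net):
    """Two-stage 2.5D scene flow between consecutive OCT volumes."""
    mip0, z0 = enface_projections(vol_t0)
    mip1, z1 = enface_projections(vol_t1)

    # Stage 1: 2D lateral flow on census-transformed en-face projections.
    c0, c1 = census_transform(mip0), census_transform(mip1)
    lateral_flow = lateral_net(torch.stack((c0, c1))[None])[0]     # (2, H, W)

    # Stage 2: warp by the predicted lateral flow, then estimate 1D depth flow.
    z0_warped = warp(z0, lateral_flow)
    depth_flow = depth_net(torch.stack((z0_warped, z1))[None])[0]  # (1, H, W)
    return lateral_flow, depth_flow, (mip0, mip1)


def semi_supervised_loss(flow_pred, flow_gt, img0, img1, w_rec=1.0, w_smooth=0.1):
    """Combine supervised endpoint error, photometric reconstruction error of
    the warped image, and a spatial smoothness prior (illustrative weights,
    shown here for the 2D lateral stage)."""
    epe = torch.norm(flow_pred - flow_gt, dim=0).mean()
    rec = (warp(img0, flow_pred) - img1).abs().mean()
    smooth = (flow_pred[..., :, 1:] - flow_pred[..., :, :-1]).abs().mean() + \
             (flow_pred[..., 1:, :] - flow_pred[..., :-1, :]).abs().mean()
    return epe + w_rec * rec + w_smooth * smooth
```

As a side note, the reported (4.7 ± 3.5) voxels and (27.5 ± 20.5) μm are consistent with a lateral scale of roughly 5.9 μm per voxel (27.5/4.7 ≈ 20.5/3.5 ≈ 5.9), assuming a uniform conversion.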
ASJC Scopus subject areas
- Materials Science (all)
- Electronic, Optical and Magnetic Materials
- Materials Science (all)
- Biomaterials
- Physics and Astronomy (all)
- Atomic and Molecular Physics, and Optics
- Medicine (all)
- Radiology, Nuclear Medicine and Imaging
Cite
Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling. Ed. / Baowei Fei; Cristian A. Linte. SPIE, 2019. 109510R (Progress in Biomedical Optics and Imaging - Proceedings of SPIE; Vol. 10951).
Publication: Contribution to book/report/anthology/conference proceedings › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Deep-learning-based 2.5D flow field estimation for maximum intensity projections of 4D optical coherence tomography
AU - Laves, Max Heinrich
AU - Ihler, Sontje
AU - Kahrs, Lüder A.
AU - Ortmaier, Tobias
N1 - Funding information: This research has received funding from the European Union as part of the EFRE OPhonLas project.
PY - 2019/3/8
Y1 - 2019/3/8
N2 - In microsurgery, lasers have emerged as precise tools for bone ablation. A challenge is automatic control of laser bone ablation with 4D optical coherence tomography (OCT). OCT as high resolution imaging modality provides volumetric images of tissue and foresees information of bone position and orientation (pose) as well as thickness. However, existing approaches for OCT based laser ablation control rely on external tracking systems or invasively ablated artificial landmarks for tracking the pose of the OCT probe relative to the tissue. This can be superseded by estimating the scene flow caused by relative movement between OCT-based laser ablation system and patient. Therefore, this paper deals with 2.5D scene flow estimation of volumetric OCT images for application in laser ablation. We present a semi-supervised convolutional neural network based tracking scheme for subsequent 3D OCT volumes and apply it to a realistic semi-synthetic data set of ex vivo human temporal bone specimen. The scene flow is estimated in a two-stage approach. In the first stage, 2D lateral scene flow is computed on census-transformed en-face arguments-of-maximum intensity projections. Subsequent to this, the projections are warped by predicted lateral flow and 1D depth flow is estimated. The neural network is trained semi-supervised by combining error to ground truth and the reconstruction error of warped images with assumptions of spatial flow smoothness. Quantitative evaluation reveals a mean endpoint error of (4.7 ± 3.5) voxel or (27.5 ± 20.5) μm for scene flow estimation caused by simulated relative movement between the OCT probe and bone. The scene flow estimation for 4D OCT enables its use for markerless tracking of mastoid bone structures for image guidance in general, and automated laser ablation control.
AB - In microsurgery, lasers have emerged as precise tools for bone ablation. A challenge is automatic control of laser bone ablation with 4D optical coherence tomography (OCT). OCT as high resolution imaging modality provides volumetric images of tissue and foresees information of bone position and orientation (pose) as well as thickness. However, existing approaches for OCT based laser ablation control rely on external tracking systems or invasively ablated artificial landmarks for tracking the pose of the OCT probe relative to the tissue. This can be superseded by estimating the scene flow caused by relative movement between OCT-based laser ablation system and patient. Therefore, this paper deals with 2.5D scene flow estimation of volumetric OCT images for application in laser ablation. We present a semi-supervised convolutional neural network based tracking scheme for subsequent 3D OCT volumes and apply it to a realistic semi-synthetic data set of ex vivo human temporal bone specimen. The scene flow is estimated in a two-stage approach. In the first stage, 2D lateral scene flow is computed on census-transformed en-face arguments-of-maximum intensity projections. Subsequent to this, the projections are warped by predicted lateral flow and 1D depth flow is estimated. The neural network is trained semi-supervised by combining error to ground truth and the reconstruction error of warped images with assumptions of spatial flow smoothness. Quantitative evaluation reveals a mean endpoint error of (4.7 ± 3.5) voxel or (27.5 ± 20.5) μm for scene flow estimation caused by simulated relative movement between the OCT probe and bone. The scene flow estimation for 4D OCT enables its use for markerless tracking of mastoid bone structures for image guidance in general, and automated laser ablation control.
KW - Cochlear implantation
KW - Laser control
KW - Microsurgery
KW - Optical flow
KW - Scene flow
KW - Tracking
UR - http://www.scopus.com/inward/record.url?scp=85068910692&partnerID=8YFLogxK
U2 - 10.1117/12.2512952
DO - 10.1117/12.2512952
M3 - Conference contribution
AN - SCOPUS:85068910692
T3 - Progress in Biomedical Optics and Imaging - Proceedings of SPIE
BT - Medical Imaging 2019
A2 - Fei, Baowei
A2 - Linte, Cristian A.
PB - SPIE
T2 - Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling
Y2 - 17 February 2019 through 19 February 2019
ER -