Medical instrument detection with synthetically generated data

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review


Details

Original language: English
Title of host publication: Medical Imaging 2024: Imaging Informatics for Healthcare, Research, and Applications
Editors: Hiroyuki Yoshida, Shandong Wu
Place of publication: San Diego
Publisher: SPIE
Number of pages: 9
Volume: 12931
ISBN (electronic): 9781510671676
ISBN (print): 9781510671669
Publication status: Published - 2 Apr 2024

Abstract

The persistent need for more qualified personnel in operating theatres exacerbates the remaining staff’s workload. This increased burden can result in substantial complications during surgical procedures. To address this issue, this research project works on a comprehensive operating theatre system. The system offers real-time monitoring of all surgical instruments in the operating theatre, aiming to alleviate the problem. The foundation of this endeavor involves a neural network trained to classify and identify eight distinct instruments belonging to four distinct surgical instrument groups. A novel aspect of this study lies in the approach taken to select and generate the training and validation data sets. The data sets used in this study consist of synthetically generated image data rather than real image data. Additionally, three virtual scenes were designed to serve as the background for a generation algorithm. This algorithm randomly positions the instruments within these scenes, producing annotated rendered RGB images of the generated scenes. To assess the efficacy of this approach, a separate real data set was also created for testing the neural network. Surprisingly, it was discovered that neural networks trained solely on synthetic data performed well when applied to real data. This research paper shows that it is possible to train neural networks with purely synthetically generated data and use them to recognize surgical instruments in real images.
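The abstract describes a generation algorithm that randomly positions instruments within virtual scenes and emits annotated rendered RGB images. The authors' code is not part of this record; a minimal illustrative sketch of the annotation side of such a pipeline (all class names, dimensions, and parameters here are hypothetical, and the rendering step is omitted) might look like:

```python
import random

# Hypothetical class list: eight instruments from four surgical groups,
# as stated in the abstract. The actual instrument names are not given.
INSTRUMENT_CLASSES = [
    "scalpel", "scissors_straight", "scissors_curved", "forceps_anatomic",
    "forceps_surgical", "needle_holder", "clamp", "retractor",
]

def place_instruments(scene_w, scene_h, n_instruments, rng=random):
    """Randomly position instrument bounding boxes inside a virtual scene
    and return YOLO-format annotation lines:
    '<class_id> <x_center> <y_center> <width> <height>', all normalized
    to [0, 1] by the scene dimensions."""
    lines = []
    for _ in range(n_instruments):
        cls = rng.randrange(len(INSTRUMENT_CLASSES))
        # Sample a plausible instrument footprint in pixels, then a centre
        # position that keeps the whole box inside the scene.
        w = rng.uniform(0.05, 0.25) * scene_w
        h = rng.uniform(0.05, 0.25) * scene_h
        x = rng.uniform(w / 2, scene_w - w / 2)
        y = rng.uniform(h / 2, scene_h - h / 2)
        lines.append(
            f"{cls} {x / scene_w:.6f} {y / scene_h:.6f} "
            f"{w / scene_w:.6f} {h / scene_h:.6f}"
        )
    return lines

# One synthetic scene at an assumed 1920x1080 render resolution.
labels = place_instruments(1920, 1080, n_instruments=4)
for line in labels:
    print(line)
```

In a full pipeline these label lines would accompany the rendered RGB image of the same scene, giving the annotated training pairs the abstract refers to.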

Keywords

    YOLOv8, surgical instrument detection, synthetic data, object detection, multi-label classification, deep learning

Cite this

Medical instrument detection with synthetically generated data. / Wiese, Leon Vincent; Hinz, Lennart; Reithmeier, Eduard.
Medical Imaging 2024: Imaging Informatics for Healthcare, Research, and Applications. ed. / Hiroyuki Yoshida; Shandong Wu. Vol. 12931. San Diego: SPIE, 2024. 12931.


Wiese, LV, Hinz, L & Reithmeier, E 2024, Medical instrument detection with synthetically generated data. in H Yoshida & S Wu (eds), Medical Imaging 2024: Imaging Informatics for Healthcare, Research, and Applications. vol. 12931, 12931, SPIE, San Diego. https://doi.org/10.1117/12.3005798
Wiese, L. V., Hinz, L., & Reithmeier, E. (2024). Medical instrument detection with synthetically generated data. In H. Yoshida, & S. Wu (Eds.), Medical Imaging 2024: Imaging Informatics for Healthcare, Research, and Applications (Vol. 12931). Article 12931. SPIE. https://doi.org/10.1117/12.3005798
Wiese LV, Hinz L, Reithmeier E. Medical instrument detection with synthetically generated data. In Yoshida H, Wu S, editors, Medical Imaging 2024: Imaging Informatics for Healthcare, Research, and Applications. Vol. 12931. San Diego: SPIE. 2024. 12931. doi: 10.1117/12.3005798
Wiese, Leon Vincent ; Hinz, Lennart ; Reithmeier, Eduard. / Medical instrument detection with synthetically generated data. Medical Imaging 2024: Imaging Informatics for Healthcare, Research, and Applications. editor / Hiroyuki Yoshida ; Shandong Wu. Vol. 12931. San Diego : SPIE, 2024.
BibTeX
@inproceedings{1e80f97a03414c2794ea4f01065a2d30,
title = "Medical instrument detection with synthetically generated data",
abstract = "The persistent need for more qualified personnel in operating theatres exacerbates the remaining staff{\textquoteright}s workload. This increased burden can result in substantial complications during surgical procedures. To address this issue, this research project works on a comprehensive operating theatre system. The system offers real-time monitoring of all surgical instruments in the operating theatre, aiming to alleviate the problem. The foundation of this endeavor involves a neural network trained to classify and identify eight distinct instruments belonging to four distinct surgical instrument groups. A novel aspect of this study lies in the approach taken to select and generate the training and validation data sets. The data sets used in this study consist of synthetically generated image data rather than real image data. Additionally, three virtual scenes were designed to serve as the background for a generation algorithm. This algorithm randomly positions the instruments within these scenes, producing annotated rendered RGB images of the generated scenes. To assess the efficacy of this approach, a separate real data set was also created for testing the neural network. Surprisingly, it was discovered that neural networks trained solely on synthetic data performed well when applied to real data. This research paper shows that it is possible to train neural networks with purely synthetically generated data and use them to recognize surgical instruments in real images.",
keywords = "YOLOv8, surgical instrument detection, synthetic data, object detection, multi-label classification, deep learning",
author = "Wiese, {Leon Vincent} and Lennart Hinz and Eduard Reithmeier",
year = "2024",
month = apr,
day = "2",
doi = "10.1117/12.3005798",
language = "English",
isbn = "9781510671669",
volume = "12931",
editor = "Yoshida, Hiroyuki and Wu, Shandong",
booktitle = "Medical Imaging 2024: Imaging Informatics for Healthcare, Research, and Applications",
publisher = "SPIE",
address = "United States",

}

RIS

TY - GEN

T1 - Medical instrument detection with synthetically generated data

AU - Wiese, Leon Vincent

AU - Hinz, Lennart

AU - Reithmeier, Eduard

PY - 2024/4/2

Y1 - 2024/4/2

N2 - The persistent need for more qualified personnel in operating theatres exacerbates the remaining staff’s workload. This increased burden can result in substantial complications during surgical procedures. To address this issue, this research project works on a comprehensive operating theatre system. The system offers real-time monitoring of all surgical instruments in the operating theatre, aiming to alleviate the problem. The foundation of this endeavor involves a neural network trained to classify and identify eight distinct instruments belonging to four distinct surgical instrument groups. A novel aspect of this study lies in the approach taken to select and generate the training and validation data sets. The data sets used in this study consist of synthetically generated image data rather than real image data. Additionally, three virtual scenes were designed to serve as the background for a generation algorithm. This algorithm randomly positions the instruments within these scenes, producing annotated rendered RGB images of the generated scenes. To assess the efficacy of this approach, a separate real data set was also created for testing the neural network. Surprisingly, it was discovered that neural networks trained solely on synthetic data performed well when applied to real data. This research paper shows that it is possible to train neural networks with purely synthetically generated data and use them to recognize surgical instruments in real images.

AB - The persistent need for more qualified personnel in operating theatres exacerbates the remaining staff’s workload. This increased burden can result in substantial complications during surgical procedures. To address this issue, this research project works on a comprehensive operating theatre system. The system offers real-time monitoring of all surgical instruments in the operating theatre, aiming to alleviate the problem. The foundation of this endeavor involves a neural network trained to classify and identify eight distinct instruments belonging to four distinct surgical instrument groups. A novel aspect of this study lies in the approach taken to select and generate the training and validation data sets. The data sets used in this study consist of synthetically generated image data rather than real image data. Additionally, three virtual scenes were designed to serve as the background for a generation algorithm. This algorithm randomly positions the instruments within these scenes, producing annotated rendered RGB images of the generated scenes. To assess the efficacy of this approach, a separate real data set was also created for testing the neural network. Surprisingly, it was discovered that neural networks trained solely on synthetic data performed well when applied to real data. This research paper shows that it is possible to train neural networks with purely synthetically generated data and use them to recognize surgical instruments in real images.

KW - YOLOv8

KW - surgical instrument detection

KW - synthetic data

KW - object detection

KW - multi-label classification

KW - deep learning

U2 - 10.1117/12.3005798

DO - 10.1117/12.3005798

M3 - Conference contribution

SN - 9781510671669

VL - 12931

BT - Medical Imaging 2024: Imaging Informatics for Healthcare, Research, and Applications

A2 - Yoshida, Hiroyuki

A2 - Wu, Shandong

PB - SPIE

CY - San Diego

ER -
