
Segment Any Object Model (SAOM): Real-To-Simulation Fine-Tuning Strategy For Multi-Class Multi-Instance Segmentation

Publication: Contribution in book/report/anthology/conference proceedings › Conference paper › Research › Peer review

Authorship

  • Mariia Khan
  • Yue Qiu
  • Yuren Cong
  • Bodo Rosenhahn
  • Jumana Abu-Khalaf
  • David Suter

External organisations

  • Edith Cowan University
  • AIST

Details

Original language: English
Title of host publication: 2024 IEEE International Conference on Image Processing (ICIP)
Pages: 582-588
Number of pages: 7
ISBN (electronic): 979-8-3503-4939-9
Publication status: Published - 27 Oct. 2024
Event: 31st IEEE International Conference on Image Processing, ICIP 2024 - Abu Dhabi, United Arab Emirates
Duration: 27 Oct. 2024 → 30 Oct. 2024

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
ISSN (print): 1522-4880
ISSN (electronic): 2381-8549

Abstract

Multi-class multi-instance segmentation is the task of identifying masks for multiple object classes and multiple instances of the same class within an image. The foundational Segment Anything Model (SAM) is designed for promptable multi-class multi-instance segmentation but tends to output part or sub-part masks in the “everything” mode for various real-world applications. Whole object segmentation masks play a crucial role for indoor scene understanding, especially in robotics applications. We propose a new domain invariant Real-to-Simulation (Real-Sim) fine-tuning strategy for SAM. We use object images and ground truth data collected from Ai2Thor simulator during fine-tuning (real-to-sim). To allow our Segment Any Object Model (SAOM) to work in the “everything” mode, we propose the novel nearest neighbour assignment method, updating point embeddings for each ground-truth mask. SAOM is evaluated on our own dataset collected from Ai2Thor simulator. SAOM significantly improves on SAM, with a 28% increase in mIoU and a 25% increase in mAcc for 54 frequently-seen indoor object classes. Moreover, our Real-to-Simulation fine-tuning strategy demonstrates promising generalization performance in real environments without being trained on the real-world data (sim-to-real). The dataset and the code are available here.
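The nearest neighbour assignment mentioned in the abstract can be sketched roughly as follows: each ground-truth mask is matched to the closest point prompt from the "everything"-mode grid, so that every whole-object mask receives an updated point embedding. This is a minimal illustration under assumed details, not the authors' implementation; the grid layout, the use of mask centroids, and the matching criterion are all assumptions made for the example.

```python
import numpy as np

def mask_centers(masks):
    """Compute the (row, col) centroid of each boolean ground-truth mask."""
    centers = []
    for m in masks:
        ys, xs = np.nonzero(m)
        centers.append((ys.mean(), xs.mean()))
    return np.array(centers)

def nearest_neighbour_assignment(grid_points, masks):
    """For each ground-truth mask, find the index of the nearest grid point prompt."""
    centers = mask_centers(masks)
    # pairwise Euclidean distances, shape (num_masks, num_points)
    d = np.linalg.norm(centers[:, None, :] - grid_points[None, :, :], axis=-1)
    return d.argmin(axis=1)

# toy example: a 4x4 prompt grid on a 64x64 image and two square objects
coords = np.linspace(8, 56, 4)
grid = np.array([(y, x) for y in coords for x in coords], dtype=float)

m1 = np.zeros((64, 64), dtype=bool); m1[4:12, 4:12] = True     # top-left object
m2 = np.zeros((64, 64), dtype=bool); m2[50:62, 50:62] = True   # bottom-right object
assignment = nearest_neighbour_assignment(grid, [m1, m2])
print(assignment)  # one grid-point index per mask
```

In this toy setup the top-left mask is assigned the grid point at (8, 8) and the bottom-right mask the point at (56, 56); in the paper's setting the assignment would instead drive which point embeddings are updated for each ground-truth mask during fine-tuning.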


Cite

Segment Any Object Model (SAOM): Real-To-Simulation Fine-Tuning Strategy For Multi-Class Multi-Instance Segmentation. / Khan, Mariia; Qiu, Yue; Cong, Yuren et al.
2024 IEEE International Conference on Image Processing (ICIP). 2024. pp. 582-588 (Proceedings - International Conference on Image Processing, ICIP).


Khan, M, Qiu, Y, Cong, Y, Rosenhahn, B, Abu-Khalaf, J & Suter, D 2024, Segment Any Object Model (SAOM): Real-To-Simulation Fine-Tuning Strategy For Multi-Class Multi-Instance Segmentation. in 2024 IEEE International Conference on Image Processing (ICIP). Proceedings - International Conference on Image Processing, ICIP, pp. 582-588, 31st IEEE International Conference on Image Processing, ICIP 2024, Abu Dhabi, United Arab Emirates, 27 Oct. 2024. https://doi.org/10.1109/ICIP51287.2024.10647744, https://doi.org/10.48550/arXiv.2403.10780
Khan, M., Qiu, Y., Cong, Y., Rosenhahn, B., Abu-Khalaf, J., & Suter, D. (2024). Segment Any Object Model (SAOM): Real-To-Simulation Fine-Tuning Strategy For Multi-Class Multi-Instance Segmentation. In 2024 IEEE International Conference on Image Processing (ICIP) (pp. 582-588). (Proceedings - International Conference on Image Processing, ICIP). https://doi.org/10.1109/ICIP51287.2024.10647744, https://doi.org/10.48550/arXiv.2403.10780
Khan M, Qiu Y, Cong Y, Rosenhahn B, Abu-Khalaf J, Suter D. Segment Any Object Model (SAOM): Real-To-Simulation Fine-Tuning Strategy For Multi-Class Multi-Instance Segmentation. In 2024 IEEE International Conference on Image Processing (ICIP). 2024. p. 582-588. (Proceedings - International Conference on Image Processing, ICIP). doi: 10.1109/ICIP51287.2024.10647744, 10.48550/arXiv.2403.10780
Khan, Mariia; Qiu, Yue; Cong, Yuren et al. / Segment Any Object Model (SAOM): Real-To-Simulation Fine-Tuning Strategy For Multi-Class Multi-Instance Segmentation. 2024 IEEE International Conference on Image Processing (ICIP). 2024. pp. 582-588 (Proceedings - International Conference on Image Processing, ICIP).
BibTeX
@inproceedings{235b9da1c6824bf8a925912b06845c53,
title = "Segment Any Object Model (SAOM): Real-To-Simulation Fine-Tuning Strategy For Multi-Class Multi-Instance Segmentation",
abstract = "Multi-class multi-instance segmentation is the task of identifying masks for multiple object classes and multiple instances of the same class within an image. The foundational Segment Anything Model (SAM) is designed for promptable multi-class multi-instance segmentation but tends to output part or sub-part masks in the “everything” mode for various real-world applications. Whole object segmentation masks play a crucial role for indoor scene understanding, especially in robotics applications. We propose a new domain invariant Real-to-Simulation (Real-Sim) fine-tuning strategy for SAM. We use object images and ground truth data collected from Ai2Thor simulator during fine-tuning (real-to-sim). To allow our Segment Any Object Model (SAOM) to work in the “everything” mode, we propose the novel nearest neighbour assignment method, updating point embeddings for each ground-truth mask. SAOM is evaluated on our own dataset collected from Ai2Thor simulator. SAOM significantly improves on SAM, with a 28% increase in mIoU and a 25% increase in mAcc for 54 frequently-seen indoor object classes. Moreover, our Real-to-Simulation fine-tuning strategy demonstrates promising generalization performance in real environments without being trained on the real-world data (sim-to-real). The dataset and the code are available here.",
keywords = "Indoor Scene Understanding, Segment Anything Model, Semantic Segmentation",
author = "Mariia Khan and Yue Qiu and Yuren Cong and Bodo Rosenhahn and Jumana Abu-Khalaf and David Suter",
note = "Publisher Copyright: {\textcopyright} 2024 IEEE.; 31st IEEE International Conference on Image Processing, ICIP 2024 ; Conference date: 27-10-2024 Through 30-10-2024",
year = "2024",
month = oct,
day = "27",
doi = "10.1109/ICIP51287.2024.10647744",
language = "English",
isbn = "979-8-3503-4940-5",
series = "Proceedings - International Conference on Image Processing, ICIP",
pages = "582--588",
booktitle = "2024 IEEE International Conference on Image Processing (ICIP)",

}

RIS

TY - GEN

T1 - Segment Any Object Model (SAOM)

T2 - 31st IEEE International Conference on Image Processing, ICIP 2024

AU - Khan, Mariia

AU - Qiu, Yue

AU - Cong, Yuren

AU - Rosenhahn, Bodo

AU - Abu-Khalaf, Jumana

AU - Suter, David

N1 - Publisher Copyright: © 2024 IEEE.

PY - 2024/10/27

Y1 - 2024/10/27

N2 - Multi-class multi-instance segmentation is the task of identifying masks for multiple object classes and multiple instances of the same class within an image. The foundational Segment Anything Model (SAM) is designed for promptable multi-class multi-instance segmentation but tends to output part or sub-part masks in the “everything” mode for various real-world applications. Whole object segmentation masks play a crucial role for indoor scene understanding, especially in robotics applications. We propose a new domain invariant Real-to-Simulation (Real-Sim) fine-tuning strategy for SAM. We use object images and ground truth data collected from Ai2Thor simulator during fine-tuning (real-to-sim). To allow our Segment Any Object Model (SAOM) to work in the “everything” mode, we propose the novel nearest neighbour assignment method, updating point embeddings for each ground-truth mask. SAOM is evaluated on our own dataset collected from Ai2Thor simulator. SAOM significantly improves on SAM, with a 28% increase in mIoU and a 25% increase in mAcc for 54 frequently-seen indoor object classes. Moreover, our Real-to-Simulation fine-tuning strategy demonstrates promising generalization performance in real environments without being trained on the real-world data (sim-to-real). The dataset and the code are available here.

AB - Multi-class multi-instance segmentation is the task of identifying masks for multiple object classes and multiple instances of the same class within an image. The foundational Segment Anything Model (SAM) is designed for promptable multi-class multi-instance segmentation but tends to output part or sub-part masks in the “everything” mode for various real-world applications. Whole object segmentation masks play a crucial role for indoor scene understanding, especially in robotics applications. We propose a new domain invariant Real-to-Simulation (Real-Sim) fine-tuning strategy for SAM. We use object images and ground truth data collected from Ai2Thor simulator during fine-tuning (real-to-sim). To allow our Segment Any Object Model (SAOM) to work in the “everything” mode, we propose the novel nearest neighbour assignment method, updating point embeddings for each ground-truth mask. SAOM is evaluated on our own dataset collected from Ai2Thor simulator. SAOM significantly improves on SAM, with a 28% increase in mIoU and a 25% increase in mAcc for 54 frequently-seen indoor object classes. Moreover, our Real-to-Simulation fine-tuning strategy demonstrates promising generalization performance in real environments without being trained on the real-world data (sim-to-real). The dataset and the code are available here.

KW - Indoor Scene Understanding

KW - Segment Anything Model

KW - Semantic Segmentation

UR - http://www.scopus.com/inward/record.url?scp=85216442500&partnerID=8YFLogxK

U2 - 10.1109/ICIP51287.2024.10647744

DO - 10.1109/ICIP51287.2024.10647744

M3 - Conference contribution

AN - SCOPUS:85216442500

SN - 979-8-3503-4940-5

T3 - Proceedings - International Conference on Image Processing, ICIP

SP - 582

EP - 588

BT - 2024 IEEE International Conference on Image Processing (ICIP)

Y2 - 27 October 2024 through 30 October 2024

ER -
