Details
Original language | English |
---|---|
Title of host publication | 2024 IEEE International Conference on Image Processing (ICIP) |
Pages | 582-588 |
Number of pages | 7 |
ISBN (electronic) | 979-8-3503-4939-9 |
Publication status | Published - 27 Oct 2024 |
Event | 31st IEEE International Conference on Image Processing, ICIP 2024 - Abu Dhabi, United Arab Emirates. Duration: 27 Oct 2024 → 30 Oct 2024 |
Publication series
Name | Proceedings - International Conference on Image Processing, ICIP |
---|---|
ISSN (Print) | 1522-4880 |
ISSN (electronic) | 2381-8549 |
Abstract
Multi-class multi-instance segmentation is the task of identifying masks for multiple object classes and multiple instances of the same class within an image. The foundational Segment Anything Model (SAM) is designed for promptable multi-class multi-instance segmentation but tends to output part or sub-part masks in the “everything” mode for various real-world applications. Whole-object segmentation masks play a crucial role in indoor scene understanding, especially in robotics applications. We propose a new domain-invariant Real-to-Simulation (Real-Sim) fine-tuning strategy for SAM. We use object images and ground-truth data collected from the Ai2Thor simulator during fine-tuning (real-to-sim). To allow our Segment Any Object Model (SAOM) to work in the “everything” mode, we propose a novel nearest-neighbour assignment method that updates the point embeddings for each ground-truth mask. SAOM is evaluated on our own dataset collected from the Ai2Thor simulator. SAOM significantly improves on SAM, with a 28% increase in mIoU and a 25% increase in mAcc for 54 frequently seen indoor object classes. Moreover, our Real-to-Simulation fine-tuning strategy demonstrates promising generalization performance in real environments without being trained on real-world data (sim-to-real). The dataset and the code are available here.
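The nearest-neighbour assignment idea in the abstract can be pictured with a small sketch. This is a minimal, hypothetical illustration rather than the authors' implementation: it assumes SAM-style “everything”-mode point prompts laid out on a regular grid and pairs each ground-truth mask with the grid point nearest its centroid, i.e. the point whose embedding would then be updated for that mask. All names (`nearest_point_assignment`, `grid_points`, `gt_masks`) are invented for the example.

```python
import numpy as np

def nearest_point_assignment(grid_points: np.ndarray,
                             gt_masks: np.ndarray) -> dict[int, int]:
    """Toy sketch (hypothetical names) of a nearest-neighbour assignment
    between SAM-style grid point prompts and ground-truth masks.

    grid_points: (P, 2) array of (x, y) prompt coordinates.
    gt_masks:    (M, H, W) boolean array, one whole-object mask per slice.
    Returns {mask_index: grid_point_index}.
    """
    assignment = {}
    for m, mask in enumerate(gt_masks):
        ys, xs = np.nonzero(mask)                    # pixels of this GT mask
        centroid = np.array([xs.mean(), ys.mean()])  # mask centre (x, y)
        d = np.linalg.norm(grid_points - centroid, axis=1)
        assignment[m] = int(d.argmin())              # closest grid prompt
    return assignment

# Usage: a 4x4 prompt grid on a 64x64 image, one square GT mask.
side = np.linspace(8, 56, 4)
grid = np.array([(x, y) for y in side for x in side])
mask = np.zeros((1, 64, 64), dtype=bool)
mask[0, 10:20, 10:20] = True                         # object near the top-left
print(nearest_point_assignment(grid, mask))          # -> {0: 0}
```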
ASJC Scopus subject areas
- Computer Science (all): Software
- Computer Science (all): Computer Vision and Pattern Recognition
- Computer Science (all): Signal Processing
Cite
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Khan, M., Qiu, Y., Cong, Y., Rosenhahn, B., Abu-Khalaf, J., & Suter, D. (2024). Segment Any Object Model (SAOM). In 2024 IEEE International Conference on Image Processing (ICIP) (pp. 582-588). (Proceedings - International Conference on Image Processing, ICIP).
Publication: Chapter in book/report/conference proceeding › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Segment Any Object Model (SAOM)
T2 - 31st IEEE International Conference on Image Processing, ICIP 2024
AU - Khan, Mariia
AU - Qiu, Yue
AU - Cong, Yuren
AU - Rosenhahn, Bodo
AU - Abu-Khalaf, Jumana
AU - Suter, David
N1 - Publisher Copyright: © 2024 IEEE.
PY - 2024/10/27
Y1 - 2024/10/27
N2 - Multi-class multi-instance segmentation is the task of identifying masks for multiple object classes and multiple instances of the same class within an image. The foundational Segment Anything Model (SAM) is designed for promptable multi-class multi-instance segmentation but tends to output part or sub-part masks in the “everything” mode for various real-world applications. Whole-object segmentation masks play a crucial role in indoor scene understanding, especially in robotics applications. We propose a new domain-invariant Real-to-Simulation (Real-Sim) fine-tuning strategy for SAM. We use object images and ground-truth data collected from the Ai2Thor simulator during fine-tuning (real-to-sim). To allow our Segment Any Object Model (SAOM) to work in the “everything” mode, we propose a novel nearest-neighbour assignment method that updates the point embeddings for each ground-truth mask. SAOM is evaluated on our own dataset collected from the Ai2Thor simulator. SAOM significantly improves on SAM, with a 28% increase in mIoU and a 25% increase in mAcc for 54 frequently seen indoor object classes. Moreover, our Real-to-Simulation fine-tuning strategy demonstrates promising generalization performance in real environments without being trained on real-world data (sim-to-real). The dataset and the code are available here.
AB - Multi-class multi-instance segmentation is the task of identifying masks for multiple object classes and multiple instances of the same class within an image. The foundational Segment Anything Model (SAM) is designed for promptable multi-class multi-instance segmentation but tends to output part or sub-part masks in the “everything” mode for various real-world applications. Whole-object segmentation masks play a crucial role in indoor scene understanding, especially in robotics applications. We propose a new domain-invariant Real-to-Simulation (Real-Sim) fine-tuning strategy for SAM. We use object images and ground-truth data collected from the Ai2Thor simulator during fine-tuning (real-to-sim). To allow our Segment Any Object Model (SAOM) to work in the “everything” mode, we propose a novel nearest-neighbour assignment method that updates the point embeddings for each ground-truth mask. SAOM is evaluated on our own dataset collected from the Ai2Thor simulator. SAOM significantly improves on SAM, with a 28% increase in mIoU and a 25% increase in mAcc for 54 frequently seen indoor object classes. Moreover, our Real-to-Simulation fine-tuning strategy demonstrates promising generalization performance in real environments without being trained on real-world data (sim-to-real). The dataset and the code are available here.
KW - Indoor Scene Understanding
KW - Segment Anything Model
KW - Semantic Segmentation
UR - http://www.scopus.com/inward/record.url?scp=85216442500&partnerID=8YFLogxK
U2 - 10.1109/ICIP51287.2024.10647744
DO - 10.1109/ICIP51287.2024.10647744
M3 - Conference contribution
AN - SCOPUS:85216442500
SN - 979-8-3503-4940-5
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 582
EP - 588
BT - 2024 IEEE International Conference on Image Processing (ICIP)
Y2 - 27 October 2024 through 30 October 2024
ER -