Details
| Original language | English |
| --- | --- |
| Article number | 8353729 |
| Pages (from-to) | 1441-1454 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Volume | 41 |
| Issue number | 6 |
| Publication status | Published - 1 Jun 2019 |
Abstract
A wide variety of computer vision applications rely on superpixel or supervoxel algorithms as a preprocessing step, which underlines the importance these approaches have gained in recent years. However, most methods lack temporal consistency, i.e., they fail to produce temporally stable superpixels. In this paper, we present an approach to generate temporally consistent superpixels for video content. Our method is formulated as a contour-evolving expectation-maximization framework, which utilizes an efficient label propagation scheme to encourage the preservation of superpixel shapes and their relative positioning over time. By explicitly detecting the occlusion of superpixels and the disocclusion of new image regions, our framework terminates superpixels whose corresponding image regions become hidden and creates new superpixels for regions that newly appear. Additionally, the occluded parts of superpixels are incorporated into the further optimization, which increases the compliance of the superpixel flow with the optical flow present in the scene. Using established benchmark suites, we show that our approach produces highly competitive results in comparison to state-of-the-art streaming-capable supervoxel and superpixel algorithms for video content. This is further demonstrated by comparing the streaming-capable approaches as a basis for interactive video segmentation, where the proposed approach yields the lowest overall misclassification rate.
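The abstract describes the method only at a high level: labels are propagated along the optical flow, an expectation-maximization loop alternately updates superpixel parameters and evolves superpixel contours, and superpixels are terminated or created as image regions become occluded or disoccluded. As a rough illustration of that control flow, here is a minimal, hypothetical NumPy sketch; it is not the authors' implementation. All names (`propagate_labels`, `em_refine`, `update_superpixels`), the color-only cost, and the simplistic occlusion/disocclusion handling are assumptions made for this example.

```python
# Hypothetical sketch of the control flow described in the abstract:
# propagate labels along the optical flow, alternate E/M steps to evolve
# superpixel contours, terminate occluded superpixels, and create new
# ones for disoccluded regions. Not the paper's actual algorithm.
import numpy as np

def propagate_labels(prev_labels, flow):
    """Warp the previous label map along dense forward optical flow
    (nearest-neighbor splatting); -1 marks unassigned pixels."""
    h, w = prev_labels.shape
    warped = -np.ones((h, w), dtype=prev_labels.dtype)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    warped[yt, xt] = prev_labels
    return warped

def em_refine(frame, labels, n_iters=3):
    """Toy EM on a float H x W x 3 frame: the M-step re-estimates a mean
    color per superpixel; the E-step lets pixels adopt a 4-neighbor's
    label when that superpixel's mean color fits better (a crude
    stand-in for contour evolution). Expects all labels >= 0."""
    for _ in range(n_iters):
        ids = np.unique(labels)
        means = np.zeros((ids.max() + 1, frame.shape[2]))
        for l in ids:                                       # M-step
            means[l] = frame[labels == l].mean(axis=0)
        cost = np.linalg.norm(frame - means[labels], axis=2)
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):   # E-step
            nb = np.roll(labels, (dy, dx), axis=(0, 1))     # wraps at borders
            nb_cost = np.linalg.norm(frame - means[nb], axis=2)
            take = nb_cost < cost
            labels = np.where(take, nb, labels)
            cost = np.where(take, nb_cost, cost)
    return labels

def update_superpixels(frame, prev_labels, flow, min_size=20):
    """Process one new frame given the previous frame's label map."""
    labels = propagate_labels(prev_labels, flow)
    # Occlusion: propagated superpixels that shrank below min_size are
    # terminated; their pixels rejoin the unassigned (-1) pool.
    ids, counts = np.unique(labels[labels >= 0], return_counts=True)
    for l, c in zip(ids, counts):
        if c < min_size:
            labels[labels == l] = -1
    # Disocclusion: unassigned pixels get a fresh label. (A real method
    # would segment this region into several new superpixels.)
    labels[labels == -1] = labels.max() + 1
    return em_refine(frame, labels)
```

Under these assumptions, frames would be processed in a streaming fashion, calling `labels = update_superpixels(frame, labels, flow)` once per frame. The paper's actual framework additionally keeps the occluded parts of superpixels in the optimization and enforces spatial coherence, both of which this sketch omits.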
Keywords
- oversegmentation
- superpixels
- supervoxels
- video segmentation
ASJC Scopus subject areas
- Software (Computer Science)
- Computer Vision and Pattern Recognition (Computer Science)
- Computational Theory and Mathematics (Computer Science)
- Artificial Intelligence (Computer Science)
- Applied Mathematics (Mathematics)
Cite this
Standard
In: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 41, No. 6, 8353729, 01.06.2019, p. 1441-1454.
Research output: Contribution to journal › Article › Research › peer-review
BibTeX

@article{Reso2019Occlusion,
  title   = {Occlusion-Aware Method for Temporally Consistent Superpixels},
  author  = {Reso, Matthias and Jachalsky, J{\"o}rn and Rosenhahn, Bodo and Ostermann, J{\"o}rn},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year    = {2019},
  month   = jun,
  volume  = {41},
  number  = {6},
  pages   = {1441--1454},
  doi     = {10.1109/tpami.2018.2832628},
  issn    = {0162-8828},
}

RIS
TY - JOUR
T1 - Occlusion-Aware Method for Temporally Consistent Superpixels
AU - Reso, Matthias
AU - Jachalsky, Jörn
AU - Rosenhahn, Bodo
AU - Ostermann, Jörn
PY - 2019/6/1
Y1 - 2019/6/1
N2 - A wide variety of computer vision applications rely on superpixel or supervoxel algorithms as a preprocessing step, which underlines the importance these approaches have gained in recent years. However, most methods lack temporal consistency, i.e., they fail to produce temporally stable superpixels. In this paper, we present an approach to generate temporally consistent superpixels for video content. Our method is formulated as a contour-evolving expectation-maximization framework, which utilizes an efficient label propagation scheme to encourage the preservation of superpixel shapes and their relative positioning over time. By explicitly detecting the occlusion of superpixels and the disocclusion of new image regions, our framework terminates superpixels whose corresponding image regions become hidden and creates new superpixels for regions that newly appear. Additionally, the occluded parts of superpixels are incorporated into the further optimization, which increases the compliance of the superpixel flow with the optical flow present in the scene. Using established benchmark suites, we show that our approach produces highly competitive results in comparison to state-of-the-art streaming-capable supervoxel and superpixel algorithms for video content. This is further demonstrated by comparing the streaming-capable approaches as a basis for interactive video segmentation, where the proposed approach yields the lowest overall misclassification rate.
AB - A wide variety of computer vision applications rely on superpixel or supervoxel algorithms as a preprocessing step, which underlines the importance these approaches have gained in recent years. However, most methods lack temporal consistency, i.e., they fail to produce temporally stable superpixels. In this paper, we present an approach to generate temporally consistent superpixels for video content. Our method is formulated as a contour-evolving expectation-maximization framework, which utilizes an efficient label propagation scheme to encourage the preservation of superpixel shapes and their relative positioning over time. By explicitly detecting the occlusion of superpixels and the disocclusion of new image regions, our framework terminates superpixels whose corresponding image regions become hidden and creates new superpixels for regions that newly appear. Additionally, the occluded parts of superpixels are incorporated into the further optimization, which increases the compliance of the superpixel flow with the optical flow present in the scene. Using established benchmark suites, we show that our approach produces highly competitive results in comparison to state-of-the-art streaming-capable supervoxel and superpixel algorithms for video content. This is further demonstrated by comparing the streaming-capable approaches as a basis for interactive video segmentation, where the proposed approach yields the lowest overall misclassification rate.
KW - oversegmentation
KW - superpixels
KW - supervoxels
KW - Video segmentation
UR - http://www.scopus.com/inward/record.url?scp=85046370510&partnerID=8YFLogxK
U2 - 10.1109/tpami.2018.2832628
DO - 10.1109/tpami.2018.2832628
M3 - Article
AN - SCOPUS:85046370510
VL - 41
SP - 1441
EP - 1454
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
SN - 0162-8828
IS - 6
M1 - 8353729
ER -