Details
Original language | English |
---|---|
Article number | 8353729 |
Pages (from - to) | 1441-1454 |
Number of pages | 14 |
Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
Volume | 41 |
Issue number | 6 |
Publication status | Published - 1 June 2019 |
Abstract
A wide variety of computer vision applications rely on superpixel or supervoxel algorithms as a preprocessing step, which underlines the importance these approaches have gained in recent years. However, most methods lack temporal consistency or fail to produce temporally stable superpixels. In this paper, we present an approach to generate temporally consistent superpixels for video content. Our method is formulated as a contour-evolving expectation-maximization framework that utilizes an efficient label propagation scheme to encourage the preservation of superpixel shapes and their relative positioning over time. By explicitly detecting the occlusion of superpixels and the disocclusion of new image regions, our framework can terminate superpixels whose corresponding image regions become hidden and create superpixels for regions that newly appear. Additionally, the occluded parts of superpixels are incorporated into the subsequent optimization, which increases the agreement of the superpixel flow with the optical flow present in the scene. Using established benchmark suites, we show that our approach produces highly competitive results compared to state-of-the-art streaming-capable supervoxel and superpixel algorithms for video content. This is further demonstrated by comparing the streaming-capable approaches as a basis for interactive video segmentation, where the proposed approach yields the lowest overall misclassification rate.
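To make the pipeline described in the abstract concrete, below is a minimal sketch, not the paper's implementation: it substitutes a simple SLIC-style expectation-maximization clustering in joint (color, position) space for the contour-evolving model, and a single global flow vector for dense optical flow. The function names (`em_superpixels`, `propagate`) and all parameters are hypothetical, and the occlusion/disocclusion handling described above is omitted.

```python
# Minimal sketch, NOT the paper's method: SLIC-like EM clustering per frame
# plus label propagation between frames via a (stand-in) mean optical flow.
import numpy as np

def em_superpixels(frame, centers, n_iters=5, compactness=10.0):
    """One EM pass per frame: assign each pixel to its nearest center in
    joint (color, position) space (E-step), then move every center to the
    mean of its assigned pixels (M-step)."""
    h, w, _ = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(np.float64)
    feats = np.concatenate(
        [frame.reshape(-1, 3), compactness * pos / max(h, w)], axis=1)
    for _ in range(n_iters):
        # E-step: hard assignment via squared distance to every center.
        dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # M-step: recompute each center from its member pixels.
        for k in range(len(centers)):
            members = feats[labels == k]
            if len(members) > 0:
                centers[k] = members.mean(axis=0)
    return labels.reshape(h, w), centers

def propagate(centers, mean_flow_yx, h, w, compactness=10.0):
    """Shift the spatial part of each center along a mean optical flow so
    labels carry over to the next frame (one global flow vector stands in
    for the dense, per-superpixel flow a real system would estimate)."""
    moved = centers.copy()
    moved[:, 3:] += compactness * np.asarray(mean_flow_yx, float) / max(h, w)
    return moved

# Toy usage: frame 1 is frame 0 shifted two pixels to the right, so the
# propagated labels should track that motion.
rng = np.random.default_rng(0)
f0 = rng.random((32, 32, 3))
f1 = np.roll(f0, shift=2, axis=1)
k = 16
init = np.concatenate([rng.random((k, 3)), 10.0 * rng.random((k, 2))], axis=1)
labels0, centers = em_superpixels(f0, init)
labels1, _ = em_superpixels(f1, propagate(centers, (0.0, 2.0), 32, 32))
```

The paper's actual framework additionally evolves superpixel contours and terminates or spawns labels when occlusions and disocclusions are detected; the sketch only illustrates the EM alternation and the propagation step that keep labels temporally consistent.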
ASJC Scopus subject areas
- Computer Science (all) › Software
- Computer Science (all) › Computer Vision and Pattern Recognition
- Computer Science (all) › Computational Theory and Mathematics
- Computer Science (all) › Artificial Intelligence
- Mathematics (all) › Applied Mathematics
Cite
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Reso, M., Jachalsky, J., Rosenhahn, B., & Ostermann, J. (2019). Occlusion-Aware Method for Temporally Consistent Superpixels. In: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 41, No. 6, 8353729, 01.06.2019, pp. 1441-1454.
Publication: Contribution to journal › Article › Research › Peer review
TY - JOUR
T1 - Occlusion-Aware Method for Temporally Consistent Superpixels
AU - Reso, Matthias
AU - Jachalsky, Jörn
AU - Rosenhahn, Bodo
AU - Ostermann, Jörn
PY - 2019/6/1
Y1 - 2019/6/1
N2 - A wide variety of computer vision applications rely on superpixel or supervoxel algorithms as a preprocessing step, which underlines the importance these approaches have gained in recent years. However, most methods lack temporal consistency or fail to produce temporally stable superpixels. In this paper, we present an approach to generate temporally consistent superpixels for video content. Our method is formulated as a contour-evolving expectation-maximization framework that utilizes an efficient label propagation scheme to encourage the preservation of superpixel shapes and their relative positioning over time. By explicitly detecting the occlusion of superpixels and the disocclusion of new image regions, our framework can terminate superpixels whose corresponding image regions become hidden and create superpixels for regions that newly appear. Additionally, the occluded parts of superpixels are incorporated into the subsequent optimization, which increases the agreement of the superpixel flow with the optical flow present in the scene. Using established benchmark suites, we show that our approach produces highly competitive results compared to state-of-the-art streaming-capable supervoxel and superpixel algorithms for video content. This is further demonstrated by comparing the streaming-capable approaches as a basis for interactive video segmentation, where the proposed approach yields the lowest overall misclassification rate.
AB - A wide variety of computer vision applications rely on superpixel or supervoxel algorithms as a preprocessing step, which underlines the importance these approaches have gained in recent years. However, most methods lack temporal consistency or fail to produce temporally stable superpixels. In this paper, we present an approach to generate temporally consistent superpixels for video content. Our method is formulated as a contour-evolving expectation-maximization framework that utilizes an efficient label propagation scheme to encourage the preservation of superpixel shapes and their relative positioning over time. By explicitly detecting the occlusion of superpixels and the disocclusion of new image regions, our framework can terminate superpixels whose corresponding image regions become hidden and create superpixels for regions that newly appear. Additionally, the occluded parts of superpixels are incorporated into the subsequent optimization, which increases the agreement of the superpixel flow with the optical flow present in the scene. Using established benchmark suites, we show that our approach produces highly competitive results compared to state-of-the-art streaming-capable supervoxel and superpixel algorithms for video content. This is further demonstrated by comparing the streaming-capable approaches as a basis for interactive video segmentation, where the proposed approach yields the lowest overall misclassification rate.
KW - oversegmentation
KW - superpixels
KW - supervoxels
KW - Video segmentation
UR - http://www.scopus.com/inward/record.url?scp=85046370510&partnerID=8YFLogxK
U2 - 10.1109/tpami.2018.2832628
DO - 10.1109/tpami.2018.2832628
M3 - Article
AN - SCOPUS:85046370510
VL - 41
SP - 1441
EP - 1454
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
SN - 0162-8828
IS - 6
M1 - 8353729
ER -