Occlusion-Aware Method for Temporally Consistent Superpixels

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Matthias Reso
  • Jörn Jachalsky
  • Bodo Rosenhahn
  • Jörn Ostermann

External Research Organisations

  • Technicolor Research & Innovation

Details

Original language: English
Article number: 8353729
Pages (from-to): 1441-1454
Number of pages: 14
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 41
Issue number: 6
Publication status: Published - 1 Jun 2019

Abstract

A wide variety of computer vision applications rely on superpixel or supervoxel algorithms as a preprocessing step. This underlines the overall importance that these approaches have gained in recent years. However, most methods show a lack of temporal consistency or fail in producing temporally stable superpixels. In this paper, we present an approach to generate temporally consistent superpixels for video content. Our method is formulated as a contour-evolving expectation-maximization framework, which utilizes an efficient label propagation scheme to encourage the preservation of superpixel shapes and their relative positioning over time. By explicitly detecting the occlusion of superpixels and the disocclusion of new image regions, our framework is able to terminate and create superpixels whose corresponding image region becomes hidden or newly appears. Additionally, the occluded parts of superpixels are incorporated in the further optimization. This increases the compliance of the superpixel flow with the optical flow present in the scene. Using established benchmark suites, we show that our approach produces highly competitive results in comparison to state-of-the-art streaming-capable supervoxel and superpixel algorithms for video content. This is further shown by comparing the streaming-capable approaches as basis for the task of interactive video segmentation where the proposed approach provides the lowest overall misclassification rate.
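The label propagation with occlusion and disocclusion handling described above can be illustrated with a deliberately simplified sketch. This is not the paper's contour-evolving EM algorithm; it is a toy example (with a hypothetical `propagate_labels` helper and integer per-pixel flow vectors) showing only the bookkeeping idea: labels follow the flow, uncovered pixels seed a new superpixel, and superpixels that lose all their pixels are terminated.

```python
def propagate_labels(labels, flow, new_label):
    """Toy sketch of flow-based label propagation with occlusion handling.

    labels: 2-D list of superpixel ids for frame t.
    flow:   2-D list of integer (dy, dx) motion vectors per pixel.
    Pixels in frame t+1 that receive no propagated label are treated as
    disoccluded and seeded with `new_label`; superpixels that no longer
    cover any pixel are reported as terminated.
    """
    h, w = len(labels), len(labels[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y][x]
            ty, tx = y + dy, x + dx
            if 0 <= ty < h and 0 <= tx < w:
                out[ty][tx] = labels[y][x]      # label follows the flow
    for row in out:                             # disocclusion: uncovered
        for x in range(w):                      # pixels form a new superpixel
            if row[x] is None:
                row[x] = new_label
    survivors = {v for row in out for v in row}
    terminated = sorted({v for row in labels for v in row} - survivors)
    return out, terminated
```

For example, under a uniform one-pixel rightward flow, the leftmost column is disoccluded and receives a fresh label, while a superpixel pushed entirely out of the frame is terminated.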

Keywords

    oversegmentation, superpixels, supervoxels, Video segmentation

Cite this

Occlusion-Aware Method for Temporally Consistent Superpixels. / Reso, Matthias; Jachalsky, Jörn; Rosenhahn, Bodo et al.
In: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 41, No. 6, 8353729, 01.06.2019, p. 1441-1454.

Research output: Contribution to journal › Article › Research › peer review

BibTeX
@article{b3144bc0ea35454b8ceff42aada03464,
title = "Occlusion-Aware Method for Temporally Consistent Superpixels",
abstract = "A wide variety of computer vision applications rely on superpixel or supervoxel algorithms as a preprocessing step. This underlines the overall importance that these approaches have gained in recent years. However, most methods show a lack of temporal consistency or fail in producing temporally stable superpixels. In this paper, we present an approach to generate temporally consistent superpixels for video content. Our method is formulated as a contour-evolving expectation-maximization framework, which utilizes an efficient label propagation scheme to encourage the preservation of superpixel shapes and their relative positioning over time. By explicitly detecting the occlusion of superpixels and the disocclusion of new image regions, our framework is able to terminate and create superpixels whose corresponding image region becomes hidden or newly appears. Additionally, the occluded parts of superpixels are incorporated in the further optimization. This increases the compliance of the superpixel flow with the optical flow present in the scene. Using established benchmark suites, we show that our approach produces highly competitive results in comparison to state-of-the-art streaming-capable supervoxel and superpixel algorithms for video content. This is further shown by comparing the streaming-capable approaches as basis for the task of interactive video segmentation where the proposed approach provides the lowest overall misclassification rate.",
keywords = "oversegmentation, superpixels, supervoxels, Video segmentation",
author = "Matthias Reso and J{\"o}rn Jachalsky and Bodo Rosenhahn and J{\"o}rn Ostermann",
year = "2019",
month = jun,
day = "1",
doi = "10.1109/tpami.2018.2832628",
language = "English",
volume = "41",
pages = "1441--1454",
journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
issn = "0162-8828",
publisher = "IEEE Computer Society",
number = "6",

}

RIS

TY - JOUR

T1 - Occlusion-Aware Method for Temporally Consistent Superpixels

AU - Reso, Matthias

AU - Jachalsky, Jörn

AU - Rosenhahn, Bodo

AU - Ostermann, Jörn

PY - 2019/6/1

Y1 - 2019/6/1

N2 - A wide variety of computer vision applications rely on superpixel or supervoxel algorithms as a preprocessing step. This underlines the overall importance that these approaches have gained in recent years. However, most methods show a lack of temporal consistency or fail in producing temporally stable superpixels. In this paper, we present an approach to generate temporally consistent superpixels for video content. Our method is formulated as a contour-evolving expectation-maximization framework, which utilizes an efficient label propagation scheme to encourage the preservation of superpixel shapes and their relative positioning over time. By explicitly detecting the occlusion of superpixels and the disocclusion of new image regions, our framework is able to terminate and create superpixels whose corresponding image region becomes hidden or newly appears. Additionally, the occluded parts of superpixels are incorporated in the further optimization. This increases the compliance of the superpixel flow with the optical flow present in the scene. Using established benchmark suites, we show that our approach produces highly competitive results in comparison to state-of-the-art streaming-capable supervoxel and superpixel algorithms for video content. This is further shown by comparing the streaming-capable approaches as basis for the task of interactive video segmentation where the proposed approach provides the lowest overall misclassification rate.

AB - A wide variety of computer vision applications rely on superpixel or supervoxel algorithms as a preprocessing step. This underlines the overall importance that these approaches have gained in recent years. However, most methods show a lack of temporal consistency or fail in producing temporally stable superpixels. In this paper, we present an approach to generate temporally consistent superpixels for video content. Our method is formulated as a contour-evolving expectation-maximization framework, which utilizes an efficient label propagation scheme to encourage the preservation of superpixel shapes and their relative positioning over time. By explicitly detecting the occlusion of superpixels and the disocclusion of new image regions, our framework is able to terminate and create superpixels whose corresponding image region becomes hidden or newly appears. Additionally, the occluded parts of superpixels are incorporated in the further optimization. This increases the compliance of the superpixel flow with the optical flow present in the scene. Using established benchmark suites, we show that our approach produces highly competitive results in comparison to state-of-the-art streaming-capable supervoxel and superpixel algorithms for video content. This is further shown by comparing the streaming-capable approaches as basis for the task of interactive video segmentation where the proposed approach provides the lowest overall misclassification rate.

KW - oversegmentation

KW - superpixels

KW - supervoxels

KW - Video segmentation

UR - http://www.scopus.com/inward/record.url?scp=85046370510&partnerID=8YFLogxK

U2 - 10.1109/tpami.2018.2832628

DO - 10.1109/tpami.2018.2832628

M3 - Article

AN - SCOPUS:85046370510

VL - 41

SP - 1441

EP - 1454

JO - IEEE Transactions on Pattern Analysis and Machine Intelligence

JF - IEEE Transactions on Pattern Analysis and Machine Intelligence

SN - 0162-8828

IS - 6

M1 - 8353729

ER -
