Occlusion handling for the integration of virtual objects into video

Publication: Contribution to book/report/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

Kai Cordes, Björn Scheuermann, Bodo Rosenhahn, Jörn Ostermann


Details

Original language: English
Title of host publication: VISAPP 2012
Subtitle: Proceedings of the International Conference on Computer Vision Theory and Applications
Pages: 173-180
Number of pages: 8
Publication status: Published - 2012
Event: International Conference on Computer Vision Theory and Applications, VISAPP 2012 - Rome, Italy
Duration: 24 Feb 2012 - 26 Feb 2012

Publication series

Name: VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications
Volume: 2

Abstract

This paper demonstrates how to effectively exploit occlusion and reappearance information of feature points in structure and motion recovery from video. Due to temporary occlusion with foreground objects, feature tracks discontinue. If these features reappear after their occlusion, they are connected to the correct previously discontinued trajectory during sequential camera and scene estimation. The combination of optical flow for features in consecutive frames and SIFT matching for the wide baseline feature connection provides accurate and stable feature tracking. The knowledge of occluded parts of a connected feature track is used to feed a segmentation algorithm which crops the foreground image regions automatically. The resulting segmentation provides an important step in scene understanding which eases integration of virtual objects into video significantly. The presented approach enables the automatic occlusion of integrated virtual objects with foreground regions of the video. Demonstrations show very realistic results in augmented reality.
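
The tracking stage described above - optical flow for features in consecutive frames, plus SIFT matching to reconnect a track once its feature point reappears after occlusion - can be sketched roughly as follows. This is a minimal illustration under assumed tooling (OpenCV with SIFT support), not the authors' implementation; the helper names track_frame_pair, reconnect_tracks, and the lost_descriptors store are hypothetical.

import cv2

# Assumed setup: OpenCV (cv2) built with SIFT; all names below are illustrative.
sift = cv2.SIFT_create()
lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def track_frame_pair(prev_gray, curr_gray, prev_pts):
    # Short baseline: propagate feature points between consecutive frames with
    # pyramidal Lucas-Kanade optical flow. prev_pts is a float32 array of shape Nx1x2.
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                      prev_pts, None, **lk_params)
    alive = status.ravel() == 1   # False entries mark tracks that discontinue here
    return curr_pts, alive

def reconnect_tracks(lost_descriptors, curr_gray, ratio=0.75):
    # Wide baseline: match the stored SIFT descriptor of each discontinued track
    # against the current frame. A confident match (Lowe ratio test) links the new
    # observation to the previously discontinued trajectory, so the occluded span
    # of that track becomes known.
    keypoints, descriptors = sift.detectAndCompute(curr_gray, None)
    if descriptors is None:
        return {}
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    reconnected = {}
    for track_id, desc in lost_descriptors.items():      # desc: float32 vector (128,)
        knn = matcher.knnMatch(desc.reshape(1, -1), descriptors, k=2)[0]
        if len(knn) == 2 and knn[0].distance < ratio * knn[1].distance:
            reconnected[track_id] = keypoints[knn[0].trainIdx].pt   # reappeared position
    return reconnected

In the paper's pipeline, the frames spanned by a reconnected track's gap indicate where that scene point was hidden by a foreground object; this information seeds the segmentation that crops the foreground regions and, in turn, masks the rendered virtual object for correct occlusion.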

ASJC Scopus subject areas

Cite

Occlusion handling for the integration of virtual objects into video. / Cordes, Kai; Scheuermann, Björn; Rosenhahn, Bodo et al.
VISAPP 2012: Proceedings of the International Conference on Computer Vision Theory and Applications. 2012. pp. 173-180 (VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications; Vol. 2).


Cordes, K, Scheuermann, B, Rosenhahn, B & Ostermann, J 2012, Occlusion handling for the integration of virtual objects into video. in VISAPP 2012: Proceedings of the International Conference on Computer Vision Theory and Applications. VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications, Vol. 2, pp. 173-180, International Conference on Computer Vision Theory and Applications, VISAPP 2012, Rome, Italy, 24 Feb 2012.
Cordes, K., Scheuermann, B., Rosenhahn, B., & Ostermann, J. (2012). Occlusion handling for the integration of virtual objects into video. In VISAPP 2012: Proceedings of the International Conference on Computer Vision Theory and Applications (pp. 173-180). (VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications; Vol. 2).
Cordes K, Scheuermann B, Rosenhahn B, Ostermann J. Occlusion handling for the integration of virtual objects into video. In VISAPP 2012: Proceedings of the International Conference on Computer Vision Theory and Applications. 2012. p. 173-180. (VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications).
Cordes, Kai ; Scheuermann, Björn ; Rosenhahn, Bodo et al. / Occlusion handling for the integration of virtual objects into video. VISAPP 2012: Proceedings of the International Conference on Computer Vision Theory and Applications. 2012. pp. 173-180 (VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications).
BibTeX
@inproceedings{cdb361c0b3564f38907914afddbc182b,
title = "Occlusion handling for the integration of virtual objects into video",
abstract = "This paper demonstrates how to effectively exploit occlusion and reappearance information of feature points in structure and motion recovery from video. Due to temporary occlusion with foreground objects, feature tracks discontinue. If these features reappear after their occlusion, they are connected to the correct previously discontinued trajectory during sequential camera and scene estimation. The combination of optical flow for features in consecutive frames and SIFT matching for the wide baseline feature connection provides accurate and stable feature tracking. The knowledge of occluded parts of a connected feature track is used to feed a segmentation algorithm which crops the foreground image regions automatically. The resulting segmentation provides an important step in scene understanding which eases integration of virtual objects into video significantly. The presented approach enables the automatic occlusion of integrated virtual objects with foreground regions of the video. Demonstrations show very realistic results in augmented reality.",
keywords = "Augmented reality, Feature tracking, Foreground segmentation, Structure and motion recovery",
author = "Kai Cordes and Bj{\"o}rn Scheuermann and Bodo Rosenhahn and J{\"o}rn Ostermann",
year = "2012",
language = "English",
isbn = "9789898565044",
series = "VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications",
pages = "173--180",
booktitle = "VISAPP 2012",
note = "International Conference on Computer Vision Theory and Applications, VISAPP 2012 ; Conference date: 24-02-2012 Through 26-02-2012",

}

RIS

TY - GEN
T1 - Occlusion handling for the integration of virtual objects into video
AU - Cordes, Kai
AU - Scheuermann, Björn
AU - Rosenhahn, Bodo
AU - Ostermann, Jörn
PY - 2012
Y1 - 2012
N2 - This paper demonstrates how to effectively exploit occlusion and reappearance information of feature points in structure and motion recovery from video. Due to temporary occlusion with foreground objects, feature tracks discontinue. If these features reappear after their occlusion, they are connected to the correct previously discontinued trajectory during sequential camera and scene estimation. The combination of optical flow for features in consecutive frames and SIFT matching for the wide baseline feature connection provides accurate and stable feature tracking. The knowledge of occluded parts of a connected feature track is used to feed a segmentation algorithm which crops the foreground image regions automatically. The resulting segmentation provides an important step in scene understanding which eases integration of virtual objects into video significantly. The presented approach enables the automatic occlusion of integrated virtual objects with foreground regions of the video. Demonstrations show very realistic results in augmented reality.
AB - This paper demonstrates how to effectively exploit occlusion and reappearance information of feature points in structure and motion recovery from video. Due to temporary occlusion with foreground objects, feature tracks discontinue. If these features reappear after their occlusion, they are connected to the correct previously discontinued trajectory during sequential camera and scene estimation. The combination of optical flow for features in consecutive frames and SIFT matching for the wide baseline feature connection provides accurate and stable feature tracking. The knowledge of occluded parts of a connected feature track is used to feed a segmentation algorithm which crops the foreground image regions automatically. The resulting segmentation provides an important step in scene understanding which eases integration of virtual objects into video significantly. The presented approach enables the automatic occlusion of integrated virtual objects with foreground regions of the video. Demonstrations show very realistic results in augmented reality.
KW - Augmented reality
KW - Feature tracking
KW - Foreground segmentation
KW - Structure and motion recovery
UR - http://www.scopus.com/inward/record.url?scp=84862120073&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84862120073
SN - 9789898565044
T3 - VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications
SP - 173
EP - 180
BT - VISAPP 2012
T2 - International Conference on Computer Vision Theory and Applications, VISAPP 2012
Y2 - 24 February 2012 through 26 February 2012
ER -
