Occlusion handling for the integration of virtual objects into video

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

Kai Cordes, Björn Scheuermann, Bodo Rosenhahn, Jörn Ostermann

Research Organisations

Details

Original language: English
Title of host publication: VISAPP 2012
Subtitle of host publication: Proceedings of the International Conference on Computer Vision Theory and Applications
Pages: 173-180
Number of pages: 8
Publication status: Published - 2012
Event: International Conference on Computer Vision Theory and Applications, VISAPP 2012 - Rome, Italy
Duration: 24 Feb 2012 - 26 Feb 2012

Publication series

Name: VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications
Volume: 2

Abstract

This paper demonstrates how to effectively exploit occlusion and reappearance information of feature points in structure and motion recovery from video. Due to temporary occlusion with foreground objects, feature tracks discontinue. If these features reappear after their occlusion, they are connected to the correct previously discontinued trajectory during sequential camera and scene estimation. The combination of optical flow for features in consecutive frames and SIFT matching for the wide baseline feature connection provides accurate and stable feature tracking. The knowledge of occluded parts of a connected feature track is used to feed a segmentation algorithm which crops the foreground image regions automatically. The resulting segmentation provides an important step in scene understanding which eases integration of virtual objects into video significantly. The presented approach enables the automatic occlusion of integrated virtual objects with foreground regions of the video. Demonstrations show very realistic results in augmented reality.
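
The abstract outlines a tracking pipeline: optical flow links features between consecutive frames, while SIFT matching reconnects trajectories that were interrupted by occlusion. The sketch below illustrates that idea with standard OpenCV calls; it is not the authors' implementation, and the ratio-test threshold, descriptor patch size, and helper names (track_frame, describe, reconnect) are illustrative assumptions.

# Illustrative sketch (not the paper's code): pyramidal Lucas-Kanade optical flow
# tracks features between consecutive frames; SIFT descriptors of tracks lost to
# occlusion are matched against newly detected features so that a reappearing
# feature can be linked back to its old trajectory. Thresholds are assumptions.
import cv2
import numpy as np

LOWE_RATIO = 0.7                      # assumed ratio-test threshold
sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def track_frame(prev_gray, gray, prev_pts):
    """Track points (float32, shape Nx1x2) from prev_gray to gray with LK flow."""
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
    return nxt, status.ravel().astype(bool)   # False entries = discontinued tracks

def describe(gray, pts, patch_size=16.0):
    """Compute SIFT descriptors at given (x, y) locations."""
    kps = [cv2.KeyPoint(float(x), float(y), patch_size)
           for x, y in np.asarray(pts).reshape(-1, 2)]
    _, desc = sift.compute(gray, kps)
    return desc

def reconnect(lost_desc, gray, new_pts):
    """Match descriptors of occluded (lost) tracks against freshly detected
    features; a passed Lowe ratio test links a new point to its old trajectory."""
    new_desc = describe(gray, new_pts)
    if lost_desc is None or new_desc is None or len(new_desc) < 2:
        return {}
    links = {}
    for i, m in enumerate(matcher.knnMatch(lost_desc, new_desc, k=2)):
        if len(m) == 2 and m[0].distance < LOWE_RATIO * m[1].distance:
            links[i] = m[0].trainIdx          # lost track i resumes at new point
    return links

The frames in which a reconnected track has no measurements indicate where the occluder covers the scene; as described in the abstract, this occlusion information is what drives the automatic foreground segmentation.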

Keywords

    Augmented reality, Feature tracking, Foreground segmentation, Structure and motion recovery

Cite this

Occlusion handling for the integration of virtual objects into video. / Cordes, Kai; Scheuermann, Björn; Rosenhahn, Bodo et al.
VISAPP 2012: Proceedings of the International Conference on Computer Vision Theory and Applications. 2012. p. 173-180 (VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications; Vol. 2).

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Cordes, K, Scheuermann, B, Rosenhahn, B & Ostermann, J 2012, Occlusion handling for the integration of virtual objects into video. in VISAPP 2012: Proceedings of the International Conference on Computer Vision Theory and Applications. VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications, vol. 2, pp. 173-180, International Conference on Computer Vision Theory and Applications, VISAPP 2012, Rome, Italy, 24 Feb 2012.
Cordes, K., Scheuermann, B., Rosenhahn, B., & Ostermann, J. (2012). Occlusion handling for the integration of virtual objects into video. In VISAPP 2012: Proceedings of the International Conference on Computer Vision Theory and Applications (pp. 173-180). (VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications; Vol. 2).
Cordes K, Scheuermann B, Rosenhahn B, Ostermann J. Occlusion handling for the integration of virtual objects into video. In VISAPP 2012: Proceedings of the International Conference on Computer Vision Theory and Applications. 2012. p. 173-180. (VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications).
Cordes, Kai ; Scheuermann, Björn ; Rosenhahn, Bodo et al. / Occlusion handling for the integration of virtual objects into video. VISAPP 2012: Proceedings of the International Conference on Computer Vision Theory and Applications. 2012. pp. 173-180 (VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications).
BibTeX
@inproceedings{cdb361c0b3564f38907914afddbc182b,
title = "Occlusion handling for the integration of virtual objects into video",
abstract = "This paper demonstrates how to effectively exploit occlusion and reappearance information of feature points in structure and motion recovery from video. Due to temporary occlusion with foreground objects, feature tracks discontinue. If these features reappear after their occlusion, they are connected to the correct previously discontinued trajectory during sequential camera and scene estimation. The combination of optical flow for features in consecutive frames and SIFT matching for the wide baseline feature connection provides accurate and stable feature tracking. The knowledge of occluded parts of a connected feature track is used to feed a segmentation algorithm which crops the foreground image regions automatically. The resulting segmentation provides an important step in scene understanding which eases integration of virtual objects into video significantly. The presented approach enables the automatic occlusion of integrated virtual objects with foreground regions of the video. Demonstrations show very realistic results in augmented reality.",
keywords = "Augmented reality, Feature tracking, Foreground segmentation, Structure and motion recovery",
author = "Kai Cordes and Bj{\"o}rn Scheuermann and Bodo Rosenhahn and J{\"o}rn Ostermann",
year = "2012",
language = "English",
isbn = "9789898565044",
series = "VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications",
pages = "173--180",
booktitle = "VISAPP 2012",
note = "International Conference on Computer Vision Theory and Applications, VISAPP 2012 ; Conference date: 24-02-2012 Through 26-02-2012",

}

RIS

TY - GEN

T1 - Occlusion handling for the integration of virtual objects into video

AU - Cordes, Kai

AU - Scheuermann, Björn

AU - Rosenhahn, Bodo

AU - Ostermann, Jörn

PY - 2012

Y1 - 2012

N2 - This paper demonstrates how to effectively exploit occlusion and reappearance information of feature points in structure and motion recovery from video. Due to temporary occlusion with foreground objects, feature tracks discontinue. If these features reappear after their occlusion, they are connected to the correct previously discontinued trajectory during sequential camera and scene estimation. The combination of optical flow for features in consecutive frames and SIFT matching for the wide baseline feature connection provides accurate and stable feature tracking. The knowledge of occluded parts of a connected feature track is used to feed a segmentation algorithm which crops the foreground image regions automatically. The resulting segmentation provides an important step in scene understanding which eases integration of virtual objects into video significantly. The presented approach enables the automatic occlusion of integrated virtual objects with foreground regions of the video. Demonstrations show very realistic results in augmented reality.

AB - This paper demonstrates how to effectively exploit occlusion and reappearance information of feature points in structure and motion recovery from video. Due to temporary occlusion with foreground objects, feature tracks discontinue. If these features reappear after their occlusion, they are connected to the correct previously discontinued trajectory during sequential camera and scene estimation. The combination of optical flow for features in consecutive frames and SIFT matching for the wide baseline feature connection provides accurate and stable feature tracking. The knowledge of occluded parts of a connected feature track is used to feed a segmentation algorithm which crops the foreground image regions automatically. The resulting segmentation provides an important step in scene understanding which eases integration of virtual objects into video significantly. The presented approach enables the automatic occlusion of integrated virtual objects with foreground regions of the video. Demonstrations show very realistic results in augmented reality.

KW - Augmented reality

KW - Feature tracking

KW - Foreground segmentation

KW - Structure and motion recovery

UR - http://www.scopus.com/inward/record.url?scp=84862120073&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:84862120073

SN - 9789898565044

T3 - VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications

SP - 173

EP - 180

BT - VISAPP 2012

T2 - International Conference on Computer Vision Theory and Applications, VISAPP 2012

Y2 - 24 February 2012 through 26 February 2012

ER -
