Details
| Original language | English |
| --- | --- |
| Title of host publication | VISAPP 2012 |
| Subtitle of host publication | Proceedings of the International Conference on Computer Vision Theory and Applications |
| Pages | 173-180 |
| Number of pages | 8 |
| Publication status | Published - 2012 |
| Event | International Conference on Computer Vision Theory and Applications, VISAPP 2012, Rome, Italy; duration: 24 Feb 2012 → 26 Feb 2012 |
Publication series
| Name | VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications |
| --- | --- |
| Volume | 2 |
Abstract
This paper demonstrates how to effectively exploit occlusion and reappearance information of feature points in structure and motion recovery from video. When features are temporarily occluded by foreground objects, their tracks are discontinued. If these features reappear after occlusion, they are reconnected to the correct previously discontinued trajectory during sequential camera and scene estimation. Combining optical flow for features in consecutive frames with SIFT matching for the wide-baseline feature connection yields accurate and stable feature tracking. The occluded parts of a connected feature track feed a segmentation algorithm that crops the foreground image regions automatically. The resulting segmentation is an important step toward scene understanding and significantly eases the integration of virtual objects into video. The presented approach enables integrated virtual objects to be automatically occluded by foreground regions of the video. Demonstrations show very realistic results in augmented reality.
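The track-reconnection step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each discontinued track stores a SIFT-like descriptor (here a toy 4-D vector) and reconnects a reappearing feature only when its nearest descriptor passes Lowe's ratio test; the function and variable names are hypothetical.

```python
import numpy as np

def match_to_discontinued_tracks(new_desc, track_descs, ratio=0.8):
    """Match a reappearing feature descriptor to the closest discontinued
    track, accepting only if the nearest neighbor passes Lowe's ratio test."""
    # Euclidean distance from the new descriptor to every stored track descriptor
    dists = np.linalg.norm(track_descs - new_desc, axis=1)
    order = np.argsort(dists)
    best, second = order[0], order[1]
    # Ratio test: the best match must be clearly better than the runner-up
    if dists[best] < ratio * dists[second]:
        return int(best)   # index of the reconnected track
    return None            # ambiguous match: start a new track instead

# Toy example: three discontinued tracks with 4-D descriptors
tracks = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0]])
reappeared = np.array([0.95, 0.05, 0.0, 0.0])
idx = match_to_discontinued_tracks(reappeared, tracks)  # reconnects to track 0
```

An ambiguous descriptor (roughly equidistant from two tracks) fails the ratio test and returns `None`, which mirrors why a conservative threshold is needed before appending a reappearing feature to an existing trajectory.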
Keywords
- Augmented reality, Feature tracking, Foreground segmentation, Structure and motion recovery
ASJC Scopus subject areas
- Computer Science(all)
- Computer Graphics and Computer-Aided Design
- Computer Vision and Pattern Recognition
Cite this
Cordes, K., Scheuermann, B., Rosenhahn, B., & Ostermann, J. (2012). Occlusion handling for the integration of virtual objects into video. In VISAPP 2012: Proceedings of the International Conference on Computer Vision Theory and Applications (pp. 173-180). (VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications; Vol. 2).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Occlusion handling for the integration of virtual objects into video
AU - Cordes, Kai
AU - Scheuermann, Björn
AU - Rosenhahn, Bodo
AU - Ostermann, Jörn
PY - 2012
Y1 - 2012
N2 - This paper demonstrates how to effectively exploit occlusion and reappearance information of feature points in structure and motion recovery from video. When features are temporarily occluded by foreground objects, their tracks are discontinued. If these features reappear after occlusion, they are reconnected to the correct previously discontinued trajectory during sequential camera and scene estimation. Combining optical flow for features in consecutive frames with SIFT matching for the wide-baseline feature connection yields accurate and stable feature tracking. The occluded parts of a connected feature track feed a segmentation algorithm that crops the foreground image regions automatically. The resulting segmentation is an important step toward scene understanding and significantly eases the integration of virtual objects into video. The presented approach enables integrated virtual objects to be automatically occluded by foreground regions of the video. Demonstrations show very realistic results in augmented reality.
KW - Augmented reality
KW - Feature tracking
KW - Foreground segmentation
KW - Structure and motion recovery
UR - http://www.scopus.com/inward/record.url?scp=84862120073&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84862120073
SN - 9789898565044
T3 - VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications
SP - 173
EP - 180
BT - VISAPP 2012
T2 - International Conference on Computer Vision Theory and Applications, VISAPP 2012
Y2 - 24 February 2012 through 26 February 2012
ER -