Details
Original language | English |
---|---|
Title of host publication | VISAPP 2012 |
Subtitle | Proceedings of the International Conference on Computer Vision Theory and Applications |
Pages | 173-180 |
Number of pages | 8 |
Publication status | Published - 2012 |
Event | International Conference on Computer Vision Theory and Applications, VISAPP 2012 - Rome, Italy Duration: 24 Feb 2012 → 26 Feb 2012 |
Publication series
Name | VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications |
---|---|
Volume | 2 |
Abstract
This paper demonstrates how to effectively exploit occlusion and reappearance information of feature points in structure and motion recovery from video. Due to temporary occlusion with foreground objects, feature tracks discontinue. If these features reappear after their occlusion, they are connected to the correct previously discontinued trajectory during sequential camera and scene estimation. The combination of optical flow for features in consecutive frames and SIFT matching for the wide baseline feature connection provides accurate and stable feature tracking. The knowledge of occluded parts of a connected feature track is used to feed a segmentation algorithm which crops the foreground image regions automatically. The resulting segmentation provides an important step in scene understanding which eases integration of virtual objects into video significantly. The presented approach enables the automatic occlusion of integrated virtual objects with foreground regions of the video. Demonstrations show very realistic results in augmented reality.
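The reconnection of reappearing features described above can be sketched as a descriptor match between newly detected features and the last descriptors of discontinued tracks. The snippet below is an illustrative nearest-neighbour match with Lowe's ratio test (as used in SIFT matching), not the authors' implementation; all function and variable names are hypothetical:

```python
import numpy as np

def reconnect(discontinued, new_feats, ratio=0.8):
    """Assign newly appearing features to discontinued trajectories.

    discontinued: dict mapping track_id -> last descriptor (1-D array)
    new_feats:    list of (feature_id, descriptor) pairs
    Returns a dict feature_id -> track_id; unmatched features would
    start new tracks (omitted here for brevity).
    """
    assignments = {}
    ids = list(discontinued)
    if not ids:
        return assignments
    descs = np.stack([discontinued[i] for i in ids])
    for fid, d in new_feats:
        # Euclidean distance to every discontinued track's descriptor
        dists = np.linalg.norm(descs - d, axis=1)
        order = np.argsort(dists)
        best = order[0]
        # Lowe's ratio test: accept only clearly unambiguous matches
        if len(order) < 2 or dists[best] < ratio * dists[order[1]]:
            assignments[fid] = ids[best]
    return assignments
```

In a full pipeline, accepted matches extend the old trajectory, so the occluded frames in between are known and can seed the foreground segmentation.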
ASJC Scopus subject areas
- Computer Science (all)
- Computer Graphics and Computer-Aided Design
- Computer Science (all)
- Computer Vision and Pattern Recognition
Cite this
VISAPP 2012: Proceedings of the International Conference on Computer Vision Theory and Applications. 2012. pp. 173-180 (VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications; Vol. 2).
Publication: Chapter in book/report/conference proceeding › Conference contribution › Research › Peer review
TY - GEN
T1 - Occlusion handling for the integration of virtual objects into video
AU - Cordes, Kai
AU - Scheuermann, Björn
AU - Rosenhahn, Bodo
AU - Ostermann, Jörn
PY - 2012
Y1 - 2012
N2 - This paper demonstrates how to effectively exploit occlusion and reappearance information of feature points in structure and motion recovery from video. Due to temporary occlusion with foreground objects, feature tracks discontinue. If these features reappear after their occlusion, they are connected to the correct previously discontinued trajectory during sequential camera and scene estimation. The combination of optical flow for features in consecutive frames and SIFT matching for the wide baseline feature connection provides accurate and stable feature tracking. The knowledge of occluded parts of a connected feature track is used to feed a segmentation algorithm which crops the foreground image regions automatically. The resulting segmentation provides an important step in scene understanding which eases integration of virtual objects into video significantly. The presented approach enables the automatic occlusion of integrated virtual objects with foreground regions of the video. Demonstrations show very realistic results in augmented reality.
AB - This paper demonstrates how to effectively exploit occlusion and reappearance information of feature points in structure and motion recovery from video. Due to temporary occlusion with foreground objects, feature tracks discontinue. If these features reappear after their occlusion, they are connected to the correct previously discontinued trajectory during sequential camera and scene estimation. The combination of optical flow for features in consecutive frames and SIFT matching for the wide baseline feature connection provides accurate and stable feature tracking. The knowledge of occluded parts of a connected feature track is used to feed a segmentation algorithm which crops the foreground image regions automatically. The resulting segmentation provides an important step in scene understanding which eases integration of virtual objects into video significantly. The presented approach enables the automatic occlusion of integrated virtual objects with foreground regions of the video. Demonstrations show very realistic results in augmented reality.
KW - Augmented reality
KW - Feature tracking
KW - Foreground segmentation
KW - Structure and motion recovery
UR - http://www.scopus.com/inward/record.url?scp=84862120073&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84862120073
SN - 9789898565044
T3 - VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications
SP - 173
EP - 180
BT - VISAPP 2012
T2 - International Conference on Computer Vision Theory and Applications, VISAPP 2012
Y2 - 24 February 2012 through 26 February 2012
ER -