Details
Original language | English |
---|---|
Title of host publication | 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009 |
Publisher | IEEE Computer Society |
Pages | 224-231 |
Number of pages | 8 |
ISBN (Print) | 9781424439935 |
Publication status | Published - 2009 |
Event | 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Miami, FL, United States; Duration: 20 June 2009 → 25 June 2009 |
Publication series
Name | 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009 |
---|---|
Volume | 2009 IEEE |
Abstract
In this work we present an approach for markerless motion capture (MoCap) of articulated objects recorded with multiple unsynchronized moving cameras. Instead of relying on fixed (and expensive) hardware-synchronized cameras, this approach allows us to track people with off-the-shelf handheld video cameras. To prepare a sequence for motion capture, we first reconstruct the static background and the position of each camera using Structure-from-Motion (SfM). The cameras are then registered to each other using the reconstructed static background geometry. Camera synchronization is achieved via the audio streams recorded by the cameras in parallel. Finally, a markerless MoCap approach is applied to recover the positions and joint configurations of the subjects. Feature tracks and dense background geometry are further used to stabilize the MoCap. The experiments show examples with highly challenging indoor and outdoor scenes.
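The audio-based synchronization step in the abstract can be illustrated by cross-correlating two cameras' audio tracks and reading the time offset off the correlation peak. The sketch below is a minimal illustration under that assumption; the function name and setup are hypothetical, not the authors' implementation:

```python
import numpy as np

def estimate_audio_offset(audio_a, audio_b, sample_rate):
    """Estimate how many seconds stream b lags behind stream a by
    cross-correlating the two audio signals (hypothetical sketch,
    not the paper's implementation)."""
    # Zero-mean the signals so silence does not bias the correlation.
    a = audio_a - np.mean(audio_a)
    b = audio_b - np.mean(audio_b)
    # Correlation value at every possible relative shift of the two streams.
    corr = np.correlate(a, b, mode="full")
    # Output index i corresponds to a lag of (len(b) - 1 - i) samples of b behind a.
    lag_samples = (len(audio_b) - 1) - int(np.argmax(corr))
    return lag_samples / sample_rate  # positive: b lags a
```

A coarse per-pair offset like this would still have to be refined (and any clock drift between camcorders handled separately); that is beyond this sketch.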
ASJC Scopus subject areas
- Computer Science (all)
- Computer Vision and Pattern Recognition
- Engineering (all)
- Biomedical Engineering
Cite
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009. IEEE Computer Society, 2009. pp. 224-231 5206859 (2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009; Vol. 2009 IEEE).
Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer review
TY - GEN
T1 - Markerless motion capture with unsynchronized moving cameras
AU - Hasler, Nils
AU - Rosenhahn, Bodo
AU - Thormählen, Thorsten
AU - Wand, Michael
AU - Gall, Juergen
AU - Seidel, Hans Peter
PY - 2009
Y1 - 2009
N2 - In this work we present an approach for markerless motion capture (MoCap) of articulated objects recorded with multiple unsynchronized moving cameras. Instead of relying on fixed (and expensive) hardware-synchronized cameras, this approach allows us to track people with off-the-shelf handheld video cameras. To prepare a sequence for motion capture, we first reconstruct the static background and the position of each camera using Structure-from-Motion (SfM). The cameras are then registered to each other using the reconstructed static background geometry. Camera synchronization is achieved via the audio streams recorded by the cameras in parallel. Finally, a markerless MoCap approach is applied to recover the positions and joint configurations of the subjects. Feature tracks and dense background geometry are further used to stabilize the MoCap. The experiments show examples with highly challenging indoor and outdoor scenes.
AB - In this work we present an approach for markerless motion capture (MoCap) of articulated objects recorded with multiple unsynchronized moving cameras. Instead of relying on fixed (and expensive) hardware-synchronized cameras, this approach allows us to track people with off-the-shelf handheld video cameras. To prepare a sequence for motion capture, we first reconstruct the static background and the position of each camera using Structure-from-Motion (SfM). The cameras are then registered to each other using the reconstructed static background geometry. Camera synchronization is achieved via the audio streams recorded by the cameras in parallel. Finally, a markerless MoCap approach is applied to recover the positions and joint configurations of the subjects. Feature tracks and dense background geometry are further used to stabilize the MoCap. The experiments show examples with highly challenging indoor and outdoor scenes.
UR - http://www.scopus.com/inward/record.url?scp=70450162951&partnerID=8YFLogxK
U2 - 10.1109/CVPRW.2009.5206859
DO - 10.1109/CVPRW.2009.5206859
M3 - Conference contribution
AN - SCOPUS:70450162951
SN - 9781424439935
T3 - 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009
SP - 224
EP - 231
BT - 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009
PB - IEEE Computer Society
T2 - 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Y2 - 20 June 2009 through 25 June 2009
ER -