Details
| Original language | English |
|---|---|
| Article number | 7393844 |
| Pages (from-to) | 1533-1547 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Volume | 38 |
| Issue number | 8 |
| Publication status | Published - 1 Aug 2016 |
Abstract
In this work, we present an approach to fuse video with sparse orientation data obtained from inertial sensors to improve and stabilize full-body human motion capture. Even though video data is a strong cue for motion analysis, tracking artifacts occur frequently due to ambiguities in the images, rapid motions, occlusions or noise. As a complementary data source, inertial sensors allow for accurate estimation of limb orientations even under fast motions. However, accurate position information cannot be obtained in continuous operation. Therefore, we propose a hybrid tracker that combines video with a small number of inertial units to compensate for the drawbacks of each sensor type: on the one hand, we obtain drift-free and accurate position information from video data and, on the other hand, we obtain accurate limb orientations and good performance under fast motions from inertial sensors. In several experiments we demonstrate the increased performance and stability of our human motion tracker.
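To make the fusion idea concrete, the sketch below shows one generic way a per-frame hybrid cost could combine a video-based joint-position term with an IMU-based limb-orientation term. This is an illustration only, assuming a simple least-squares formulation; the function names, weights, and residual forms are hypothetical and are not the energy actually optimized in the paper.

```python
# Minimal, hypothetical sketch of a hybrid video + IMU tracking cost.
# Assumptions (not from the paper): a least-squares objective, equal-length
# lists of limb rotations and IMU rotations, and 3D joint estimates from video.
import numpy as np
from scipy.spatial.transform import Rotation as R


def position_residual(joint_positions_3d, video_joint_estimates):
    """Squared 3D distance between tracked joints and video-derived joint
    positions (video supplies drift-free global position information)."""
    diff = joint_positions_3d - video_joint_estimates
    return float(np.sum(diff ** 2))


def orientation_residual(limb_rotations, imu_rotations):
    """Sum of squared geodesic angles between tracked limb orientations and
    the orientations reported by the sparse set of IMUs."""
    total = 0.0
    for r_limb, r_imu in zip(limb_rotations, imu_rotations):
        # The rotation angle of the relative rotation is the orientation error.
        rel = r_limb.inv() * r_imu
        total += float(np.linalg.norm(rel.as_rotvec()) ** 2)
    return total


def hybrid_cost(joint_positions_3d, video_joint_estimates,
                limb_rotations, imu_rotations,
                w_video=1.0, w_imu=1.0):
    """Weighted combination of both terms; minimizing this over the pose
    parameters is the generic 'hybrid tracking' idea."""
    return (w_video * position_residual(joint_positions_3d, video_joint_estimates)
            + w_imu * orientation_residual(limb_rotations, imu_rotations))


if __name__ == "__main__":
    # Toy usage with a single joint and a single limb IMU.
    pos_track = np.array([[0.0, 1.0, 0.0]])
    pos_video = np.array([[0.05, 0.98, 0.0]])
    rot_track = [R.from_euler("z", 10, degrees=True)]
    rot_imu = [R.from_euler("z", 12, degrees=True)]
    print(hybrid_cost(pos_track, pos_video, rot_track, rot_imu))
```

The split mirrors the complementarity described in the abstract: the video term anchors global position (no drift), while the IMU term constrains limb orientation and remains reliable under fast motion.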
Keywords
- animation
- Human pose estimation
- IMU
- inertial sensors
- motion capture
- multisensor fusion
ASJC Scopus subject areas
- Computer Science(all)
- Software
- Computer Vision and Pattern Recognition
- Computational Theory and Mathematics
- Artificial Intelligence
- Mathematics(all)
- Applied Mathematics
Cite this
Von Marcard, T., Pons-Moll, G., & Rosenhahn, B. (2016). Human Pose Estimation from Video and IMUs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(8), 1533-1547. Article 7393844. https://doi.org/10.1109/tpami.2016.2522398
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Human Pose Estimation from Video and IMUs
AU - Von Marcard, Timo
AU - Pons-Moll, Gerard
AU - Rosenhahn, Bodo
PY - 2016/8/1
Y1 - 2016/8/1
N2 - In this work, we present an approach to fuse video with sparse orientation data obtained from inertial sensors to improve and stabilize full-body human motion capture. Even though video data is a strong cue for motion analysis, tracking artifacts occur frequently due to ambiguities in the images, rapid motions, occlusions or noise. As a complementary data source, inertial sensors allow for accurate estimation of limb orientations even under fast motions. However, accurate position information cannot be obtained in continuous operation. Therefore, we propose a hybrid tracker that combines video with a small number of inertial units to compensate for the drawbacks of each sensor type: on the one hand, we obtain drift-free and accurate position information from video data and, on the other hand, we obtain accurate limb orientations and good performance under fast motions from inertial sensors. In several experiments we demonstrate the increased performance and stability of our human motion tracker.
AB - In this work, we present an approach to fuse video with sparse orientation data obtained from inertial sensors to improve and stabilize full-body human motion capture. Even though video data is a strong cue for motion analysis, tracking artifacts occur frequently due to ambiguities in the images, rapid motions, occlusions or noise. As a complementary data source, inertial sensors allow for accurate estimation of limb orientations even under fast motions. However, accurate position information cannot be obtained in continuous operation. Therefore, we propose a hybrid tracker that combines video with a small number of inertial units to compensate for the drawbacks of each sensor type: on the one hand, we obtain drift-free and accurate position information from video data and, on the other hand, we obtain accurate limb orientations and good performance under fast motions from inertial sensors. In several experiments we demonstrate the increased performance and stability of our human motion tracker.
KW - animation
KW - Human pose estimation
KW - IMU
KW - inertial sensors
KW - motion capture
KW - multisensor fusion
UR - http://www.scopus.com/inward/record.url?scp=84978427623&partnerID=8YFLogxK
U2 - 10.1109/tpami.2016.2522398
DO - 10.1109/tpami.2016.2522398
M3 - Article
C2 - 26829774
AN - SCOPUS:84978427623
VL - 38
SP - 1533
EP - 1547
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
SN - 0162-8828
IS - 8
M1 - 7393844
ER -