Data-driven manifolds for outdoor motion capture

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Gerard Pons-Moll
  • Laura Leal-Taixé
  • Juergen Gall
  • Bodo Rosenhahn

Research Organisations

External Research Organisations

  • ETH Zurich
  • Max Planck Institute for Intelligent Systems

Details

Original language: English
Title of host publication: Outdoor and Large-Scale Real-World Scene Analysis - 15th International Workshop on Theoretical Foundations of Computer Vision, Revised Selected Papers
Pages: 305-328
Number of pages: 24
Publication status: Published - 2012
Event: 15th International Workshop on Theoretical Foundations of Computer Vision - Dagstuhl Castle, Germany
Duration: 26 Jun 2011 → 1 Jul 2011

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 7474 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Abstract

Human motion capturing (HMC) from multiview image sequences is an extremely difficult problem due to depth and orientation ambiguities and the high dimensionality of the state space. In this paper, we introduce a novel hybrid HMC system that combines video input with sparse inertial sensor input. Employing an annealing particle-based optimization scheme, our idea is to use orientation cues derived from the inertial input to sample particles from the manifold of valid poses. Then, visual cues derived from the video input are used to weight these particles and to iteratively derive the final pose. As our main contribution, we propose an efficient sampling procedure where the particles are derived analytically using inverse kinematics on the orientation cues. Additionally, we introduce a novel sensor noise model to account for uncertainties based on the von Mises-Fisher distribution. Doing so, orientation constraints are naturally fulfilled and the number of needed particles can be kept very small. More generally, our method can be used to sample poses that fulfill arbitrary orientation or positional kinematic constraints. In the experiments, we show that our system can track even highly dynamic motions in an outdoor environment with changing illumination, background clutter, and shadows.
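
The abstract describes the method only at a high level. As an illustration of the two ingredients it names, von Mises-Fisher sensor noise and particle weighting by visual cues, the following Python sketch may be helpful; it is a hypothetical, minimal illustration and not the authors' implementation. sample_vmf draws unit vectors on the sphere using the standard closed-form construction for the von Mises-Fisher distribution, while annealed_step stands in for one annealing layer; ik_solve and visual_likelihood are placeholder callables for the paper's analytic inverse kinematics and image likelihood, which are not specified here.

import numpy as np

def sample_vmf(mu, kappa, n, rng):
    """Draw n unit vectors on S^2 from a von Mises-Fisher distribution
    centred at mu with concentration kappa (larger kappa = tighter)."""
    mu = mu / np.linalg.norm(mu)
    # Closed-form inverse-CDF sampling of the cosine w = <x, mu> for d = 3.
    u = rng.uniform(size=n)
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    # Uniform directions in the tangent plane.
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    v = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    x = np.concatenate([np.sqrt(np.clip(1.0 - w**2, 0.0, None))[:, None] * v,
                        w[:, None]], axis=1)
    # Rotate the samples so the pole (0, 0, 1) maps onto mu (Rodrigues' formula).
    e3 = np.array([0.0, 0.0, 1.0])
    axis = np.cross(e3, mu)
    s, c = np.linalg.norm(axis), float(np.dot(e3, mu))
    if s < 1e-12:                      # mu (anti-)parallel to the pole
        return x if c > 0 else -x
    axis /= s
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + s * K + (1.0 - c) * (K @ K)
    return x @ R.T

def annealed_step(imu_dirs, kappa, n_particles, ik_solve, visual_likelihood, rng):
    """One annealing layer of the hybrid tracker sketched in the abstract:
    orientation particles are drawn around the inertial readings with
    von Mises-Fisher noise, mapped to full poses by inverse kinematics,
    and re-weighted by the image likelihood.  ik_solve and
    visual_likelihood are hypothetical placeholders."""
    poses, weights = [], []
    for _ in range(n_particles):
        # Perturb each sensor direction with von Mises-Fisher noise.
        dirs = [sample_vmf(d, kappa, 1, rng)[0] for d in imu_dirs]
        pose = ik_solve(dirs)                      # analytic IK on orientation cues
        poses.append(pose)
        weights.append(visual_likelihood(pose))    # e.g. silhouette/edge agreement
    weights = np.asarray(weights, dtype=float)
    return poses, weights / weights.sum()

# Small demonstration of the sampler alone:
# rng = np.random.default_rng(0)
# dirs = sample_vmf(np.array([0.0, 0.0, 1.0]), kappa=50.0, n=100, rng=rng)

Because every particle is generated directly around the sensor readings and converted into a full pose by inverse kinematics, the orientation constraints are satisfied by construction, which is why, as the abstract notes, only a small number of particles is needed.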

Cite this

Data-driven manifolds for outdoor motion capture. / Pons-Moll, Gerard; Leal-Taixé, Laura; Gall, Juergen et al.
Outdoor and Large-Scale Real-World Scene Analysis - 15th International Workshop on Theoretical Foundations of Computer Vision, Revised Selected Papers. 2012. p. 305-328 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 7474 LNCS).

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Pons-Moll, G, Leal-Taixé, L, Gall, J & Rosenhahn, B 2012, Data-driven manifolds for outdoor motion capture. in Outdoor and Large-Scale Real-World Scene Analysis - 15th International Workshop on Theoretical Foundations of Computer Vision, Revised Selected Papers. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 7474 LNCS, pp. 305-328, 15th International Workshop on Theoretical Foundations of Computer Vision, Dagstuhl Castle, Germany, 26 Jun 2011. https://doi.org/10.1007/978-3-642-34091-8_14
Pons-Moll, G., Leal-Taixé, L., Gall, J., & Rosenhahn, B. (2012). Data-driven manifolds for outdoor motion capture. In Outdoor and Large-Scale Real-World Scene Analysis - 15th International Workshop on Theoretical Foundations of Computer Vision, Revised Selected Papers (pp. 305-328). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 7474 LNCS). https://doi.org/10.1007/978-3-642-34091-8_14
Pons-Moll G, Leal-Taixé L, Gall J, Rosenhahn B. Data-driven manifolds for outdoor motion capture. In Outdoor and Large-Scale Real-World Scene Analysis - 15th International Workshop on Theoretical Foundations of Computer Vision, Revised Selected Papers. 2012. p. 305-328. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). doi: 10.1007/978-3-642-34091-8_14
Pons-Moll, Gerard ; Leal-Taixé, Laura ; Gall, Juergen et al. / Data-driven manifolds for outdoor motion capture. Outdoor and Large-Scale Real-World Scene Analysis - 15th International Workshop on Theoretical Foundations of Computer Vision, Revised Selected Papers. 2012. pp. 305-328 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
BibTeX
@inproceedings{2163d8850f8f447cb66cd503c72caaac,
title = "Data-driven manifolds for outdoor motion capture",
abstract = "Human motion capturing (HMC) from multiview image sequences is an extremely difficult problem due to depth and orientation ambiguities and the high dimensionality of the state space. In this paper, we introduce a novel hybrid HMC system that combines video input with sparse inertial sensor input. Employing an annealing particle-based optimization scheme, our idea is to use orientation cues derived from the inertial input to sample particles from the manifold of valid poses. Then, visual cues derived from the video input are used to weight these particles and to iteratively derive the final pose. As our main contribution, we propose an efficient sampling procedure where the particles are derived analytically using inverse kinematics on the orientation cues. Additionally, we introduce a novel sensor noise model to account for uncertainties based on the von Mises-Fisher distribution. Doing so, orientation constraints are naturally fulfilled and the number of needed particles can be kept very small. More generally, our method can be used to sample poses that fulfill arbitrary orientation or positional kinematic constraints. In the experiments, we show that our system can track even highly dynamic motions in an outdoor environment with changing illumination, background clutter, and shadows.",
author = "Gerard Pons-Moll and Laura Leal-Taix{\'e} and Juergen Gall and Bodo Rosenhahn",
year = "2012",
doi = "10.1007/978-3-642-34091-8_14",
language = "English",
isbn = "9783642340901",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
pages = "305--328",
booktitle = "Outdoor and Large-Scale Real-World Scene Analysis - 15th International Workshop on Theoretical Foundations of Computer Vision, Revised Selected Papers",
note = "15th International Workshop on Theoretical Foundations of Computer Vision ; Conference date: 26-06-2011 Through 01-07-2011",

}

RIS

TY - GEN

T1 - Data-driven manifolds for outdoor motion capture

AU - Pons-Moll, Gerard

AU - Leal-Taixé, Laura

AU - Gall, Juergen

AU - Rosenhahn, Bodo

PY - 2012

Y1 - 2012

N2 - Human motion capturing (HMC) from multiview image sequences is an extremely difficult problem due to depth and orientation ambiguities and the high dimensionality of the state space. In this paper, we introduce a novel hybrid HMC system that combines video input with sparse inertial sensor input. Employing an annealing particle-based optimization scheme, our idea is to use orientation cues derived from the inertial input to sample particles from the manifold of valid poses. Then, visual cues derived from the video input are used to weight these particles and to iteratively derive the final pose. As our main contribution, we propose an efficient sampling procedure where the particles are derived analytically using inverse kinematics on the orientation cues. Additionally, we introduce a novel sensor noise model to account for uncertainties based on the von Mises-Fisher distribution. Doing so, orientation constraints are naturally fulfilled and the number of needed particles can be kept very small. More generally, our method can be used to sample poses that fulfill arbitrary orientation or positional kinematic constraints. In the experiments, we show that our system can track even highly dynamic motions in an outdoor environment with changing illumination, background clutter, and shadows.

AB - Human motion capturing (HMC) from multiview image sequences is an extremely difficult problem due to depth and orientation ambiguities and the high dimensionality of the state space. In this paper, we introduce a novel hybrid HMC system that combines video input with sparse inertial sensor input. Employing an annealing particle-based optimization scheme, our idea is to use orientation cues derived from the inertial input to sample particles from the manifold of valid poses. Then, visual cues derived from the video input are used to weight these particles and to iteratively derive the final pose. As our main contribution, we propose an efficient sampling procedure where the particles are derived analytically using inverse kinematics on the orientation cues. Additionally, we introduce a novel sensor noise model to account for uncertainties based on the von Mises-Fisher distribution. Doing so, orientation constraints are naturally fulfilled and the number of needed particles can be kept very small. More generally, our method can be used to sample poses that fulfill arbitrary orientation or positional kinematic constraints. In the experiments, we show that our system can track even highly dynamic motions in an outdoor environment with changing illumination, background clutter, and shadows.

UR - http://www.scopus.com/inward/record.url?scp=84867888650&partnerID=8YFLogxK

U2 - 10.1007/978-3-642-34091-8_14

DO - 10.1007/978-3-642-34091-8_14

M3 - Conference contribution

AN - SCOPUS:84867888650

SN - 9783642340901

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 305

EP - 328

BT - Outdoor and Large-Scale Real-World Scene Analysis - 15th International Workshop on Theoretical Foundations of Computer Vision, Revised Selected Papers

T2 - 15th International Workshop on Theoretical Foundations of Computer Vision

Y2 - 26 June 2011 through 1 July 2011

ER -
