Anthropometric Modeling and Automatic Pose Reconstruction (Anthro-MAPR)
Small Business Information
625 Mount Auburn Street, Cambridge, MA, 02138
ABSTRACT: While significant advances have been made in technologies for automatically acquiring, tracking, and even identifying dismounts in video, automatic characterization of human behavior remains an open problem that hinders our intelligence dominance against asymmetric threats. To enable the development of viewpoint- and anthropometric-invariant behavioral modeling algorithms, a technique is required for directly recovering an individual's anthropometric parameters from video data. This task comprises two important research problems: parametric modeling of the human body, and recovery of these parameters from video of a human subject. We propose a system for video-based Anthropometric Modeling and Automatic Pose Reconstruction (Anthro-MAPR). Anthro-MAPR reconstructs a high-fidelity, parameterized anthropometric model of an individual from video imagery; the system achieves this using novel detectors capable of simultaneously detecting human subjects and estimating their anthropometric parameters in complex environments. By building on mature multi-camera registration and person-tracking software, Anthro-MAPR will work for both single- and multi-camera scenarios, and it assumes no restrictions on the complexity of the background, the presence of other moving objects in the field of view, or the type of lighting conditions.

BENEFIT: The technologies developed under this effort would enable direct computation of biometrics (e.g., weight, height, limb dimensions, gait analysis) as well as measures of stress (i.e., is the individual performing an unusual or difficult motion?) that could not easily be derived from a 2D analysis alone. Just as importantly, this technology would significantly boost the development of robust camera-based behavioral analysis techniques critical to the prediction and interruption of hostile actions by asymmetric forces both at home and in theatre.
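To make the first research problem concrete, a parametric body model can be as simple as a set of segment lengths keyed off a single stature parameter. The sketch below is purely illustrative and is not the Anthro-MAPR model itself: the `AnthroModel` class and the ratio table are assumptions for this example, with segment proportions taken from approximate textbook (Drillis and Contini style) anthropometric ratios.

```python
from dataclasses import dataclass

# Approximate textbook segment-length proportions as fractions of stature.
# Illustrative values only; a real system would estimate these per subject.
SEGMENT_RATIOS = {
    "upper_arm": 0.186,
    "forearm":   0.146,
    "thigh":     0.245,
    "shank":     0.246,
}

@dataclass
class AnthroModel:
    """Minimal parameterized body model driven by stature (meters)."""
    stature: float

    def segment_length(self, name: str) -> float:
        # Scale the canonical proportion by the subject's stature.
        return SEGMENT_RATIOS[name] * self.stature

subject = AnthroModel(stature=1.75)
print(subject.segment_length("thigh"))  # ≈ 0.429 m for a 1.75 m subject
```

A video-based recovery stage would then fit parameters such as `stature` (and, in a richer model, per-segment deviations from the canonical ratios) to observed 2D joint trajectories.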