
STTR Phase I: Dynamic Robust Hand Model for Gesture Intent Recognition

Award Information
Agency: National Science Foundation
Branch: N/A
Contract: 1549864
Agency Tracking Number: 1549864
Amount: $225,000.00
Phase: Phase I
Program: STTR
Solicitation Topic Code: IT
Solicitation Number: N/A
Solicitation Year: 2015
Award Year: 2016
Award Start Date (Proposal Award Date): 2016-01-01
Award End Date (Contract End Date): 2016-12-31
Small Business Information
10570 Whitney Way
Cupertino, CA 95014
United States
DUNS: 078658977
HUBZone Owned: No
Woman Owned: No
Socially and Economically Disadvantaged: No
Principal Investigator
 Raja Jasti
 (408) 489-8622
Business Contact
 Raja Jasti
Phone: (408) 489-8622
Research Institution
 Purdue University
 Karthik Ramani
Young Hall 155 S. Grant Street
West Lafayette, IN 47907
United States

 Nonprofit College or University

The broader impact/commercial potential of this project stems from addressing the important hand-gesture input challenges of the VR and AR industries, which are expected to grow to $150B by 2020. Piper Jaffray identifies VR as the next mega trend, estimates the VR market to be worth more than $60B by 2025, and highlights new market opportunities for peripheral devices that bring hands and feet into VR. This technology, if successful in mitigating the high technical risks, represents a major leap in the state of the art in 3D hand models for gesture recognition and has the potential to become the industry standard for AR, VR, and 3D applications. Our company will commercialize the project by licensing this technology as a hand model SDK to AR/VR and 3D camera device makers and application developers, bringing highly interactive VR/AR and 3D gesture applications to gaming, entertainment, education, healthcare, design, architecture, and manufacturing.

This Small Business Technology Transfer Research (STTR) Phase I project develops a breakthrough innovation in 3D hand gesture intent recognition that works robustly across different 3D cameras, orientations, positions, and occlusions. It addresses a key challenge in gesture recognition while enabling natural spatial interactions for Virtual and Augmented Reality (VR/AR) and many other applications enabled by 3D depth cameras. It solves the following key challenges faced by existing academic and commercial hand models, and involves very high technical risk: 1) robustness under heavy occlusions; 2) invariance to viewpoint changes; 3) low computational training and tracking complexity; 4) discrimination among frequent gesture/micro-gesture sequences. We will tackle these by developing a novel dynamic, robust hand-tracking model inspired by a machine learning technique that is not commonly used by the computer vision community.
We will achieve this by pursuing the following objectives: 1) hand pose hypothesis generation using trained classifiers; 2) hand model fitting using joint matrix factorization and completion; 3) a user study and evaluation of the hand model.
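The abstract does not detail the "matrix factorization and completion" step in objective 2. As a rough illustration of the general idea only, the sketch below fills in unobserved entries of a measurement matrix (e.g., entries hidden by occlusion) using a low-rank approximation via iterated truncated SVD, a simple hard-impute scheme. The matrix layout, rank, and algorithm here are illustrative assumptions, not the project's actual method.

```python
import numpy as np

def complete_matrix(M, mask, rank=1, n_iters=500):
    """Low-rank matrix completion by iterated truncated SVD ("hard-impute"):
    keep observed entries, re-impute the rest from the current rank-`rank`
    approximation. `mask` is True where an entry of M is observed."""
    X = np.where(mask, M, 0.0)                    # missing entries start at 0
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # best rank-`rank` approximation
        X = np.where(mask, M, L)                  # re-impute only the missing entries
    return X

# Toy example: a rank-1 matrix (standing in for hypothetical joint
# measurements) with two entries hidden, as if occluded.
M_true = np.array([[1.0, 2.0, 4.0],
                   [2.0, 4.0, 8.0],
                   [3.0, 6.0, 12.0]])
mask = np.array([[True, True, True],
                 [True, True, False],
                 [False, True, True]])
M_hat = complete_matrix(M_true, mask, rank=1)
print(np.round(M_hat, 3))  # hidden entries re-estimated from the rank-1 structure
```

In a real hand-tracking setting the observed matrix would be far larger and noisier, and the factorization would be coupled with the pose hypotheses from objective 1; this sketch only shows why a low-rank prior lets consistent structure fill in what a single camera view cannot see.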

* Information listed above is at the time of submission. *
