
Advanced Robot Training Interface

Award Information
Agency: Department of Defense
Branch: Office of the Secretary of Defense
Contract: W911QX-15-C-0027
Agency Tracking Number: O2-1522
Amount: $999,999.00
Phase: Phase II
Program: SBIR
Solicitation Topic Code: OSD13-HS1
Solicitation Number: 2013.3
Timeline
Solicitation Year: 2013
Award Year: 2015
Award Start Date (Proposal Award Date): 2015-09-30
Award End Date (Contract End Date): 2017-09-29
Small Business Information
One Mifflin Place, Suite 400, Cambridge, MA, 02138
DUNS: 000000000
HUBZone Owned: N
Woman Owned: N
Socially and Economically Disadvantaged: N
Principal Investigator
Name: James English
Phone: (888) 547-4100
Email: jde@energid.com
Business Contact
Name: James Bacon
Phone: (888) 547-4100
Email: jab@energid.com
Research Institution
N/A
Abstract
It is not likely that an autonomous system can be programmed, in the near future, with all the information it requires to perform every mission or every variation of every contingency. One way to address this issue is to provide a user-friendly means of teaching or programming a robot to add to its autonomous capability. This proposal focuses on the physical aspects of robot performance: manipulation, mobility, and the combination of the two, whole-body manipulation [1]. Optimally, one would teach a robot as one would teach a human to perform a physical task. Since this would be an initial entry into intuitive, user-friendly methods of teaching a robot to perform complex physical acts, it is advisable to start with a virtual representation, or simulation, that lays the foundation for the interface and programming syntax describing the physical actions of the robot [2].

Programming a robot to use a new tool provides a good example of where teaching will prove beneficial. There are issues with teaching the proper grasp of the tool, as well as the proper pose that would allow the robot to use the tool effectively and to apply forces that will not damage the object(s) the tool is acting upon, the tool, or the robot. Thus, before the issues of the human interface are addressed, the basic issues of control for a dynamic multi-body system must be solved in a manner that can be represented in a form programmable into a robot [2,3].

Focus should be placed first on the development of a software architecture that encompasses and integrates task and motion planning for whole-body manipulation. The software should make it possible to program a robot to perform physical tasks through a virtual environment capable of modeling dynamics and physical contact/interaction, without requiring a user such as a Soldier to write a C-level program. However, the programming constructs or subroutines created by this architecture to operate the robot must be accessible and usable from common programming languages such as C/C++ and Python, as open-architecture libraries and subroutines. The newly learned behavior must have a structure that the robot can incorporate into its existing control programming and implement autonomously. It is expected that highly trained and educated personnel may be needed to implement the virtual-environment model of the robot, including descriptions of its physical configuration, actuators, and sensors. The virtual model should be of a fidelity that allows control for a dynamic multi-body system to be developed. Once the detailed model is implemented in the virtual environment, however, the software should provide an interface that allows a user such as a Soldier to interact with and program the robot through the virtual environment to perform a physical task such as grasping, or repetitive tasks such as sweeping for mines and IEDs and trenching for wires.

A secondary focus should be to develop an open software architecture that can be built upon and evolve over time, allowing DoD, universities, and private industry to collaborate as a software development community on this problem. Open-architecture efforts such as ROS should serve as an example. There should be interfaces for well-established speech-recognition and vision libraries such as OpenCV. It is expected that the architecture will allow future advancements to be added; for example, there are different methodologies for teaching a robot how to recognize a physical object [4]. The open software architecture should have the proper interfaces to allow object- and feature-recognition packages/algorithms to be added and updated.
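To make the notion of a taught behavior that is "accessible and usable in common programming languages" more concrete, the following is a minimal illustrative sketch in Python. It is not part of the awarded work; the class names (TaughtBehavior, Waypoint), the replay routine, and all numeric values are hypothetical, and it only shows how a behavior captured in a virtual environment might be stored as plain data and re-invoked from the robot's existing control code.

# Hypothetical sketch only: a behavior taught in simulation, stored as
# plain data, and replayed through whatever low-level command interface
# the robot already exposes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Waypoint:
    """One demonstrated arm configuration plus a gripper command."""
    joint_angles: List[float]   # radians, one per joint
    gripper_closed: bool        # True = grasp, False = release
    max_force: float = 10.0     # N, capped so the tool or object is not damaged

@dataclass
class TaughtBehavior:
    """A behavior recorded in the virtual environment, e.g. 'grasp_tool'."""
    name: str
    waypoints: List[Waypoint] = field(default_factory=list)

    def record(self, joint_angles, gripper_closed, max_force=10.0):
        """Append one demonstrated step to the behavior."""
        self.waypoints.append(Waypoint(list(joint_angles), gripper_closed, max_force))

def replay(behavior: TaughtBehavior, send_command) -> None:
    """Replay a taught behavior through a caller-supplied command function,
    i.e. the robot's existing control programming."""
    for wp in behavior.waypoints:
        send_command(wp.joint_angles, wp.gripper_closed, wp.max_force)

if __name__ == "__main__":
    # A user would normally create this by demonstration in the virtual
    # environment; here it is filled in by hand purely for illustration.
    grasp = TaughtBehavior("grasp_tool")
    grasp.record([0.0, 0.5, -0.3, 1.2, 0.0, 0.1], gripper_closed=False)
    grasp.record([0.1, 0.6, -0.2, 1.1, 0.0, 0.1], gripper_closed=True, max_force=5.0)

    # The existing control stack supplies the actual command transport.
    replay(grasp, lambda q, g, f: print(f"move to {q}, grip={g}, force<={f} N"))

Because the behavior is stored as plain data rather than compiled logic, the same record could in principle be exposed through C/C++ bindings or loaded by a ROS-style node, in keeping with the open-architecture goal described above.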

* Information listed above is at the time of submission. *
