Description:
OBJECTIVE: Develop interface(s) that allow users to "teach" or program robotic manipulation and mobility through virtual simulation and through real-world demonstration, which the robot can then apply autonomously in various situations.

DESCRIPTION: It is unlikely in the near future that an autonomous system can be programmed with all the information it requires to perform every mission or handle every variation of every contingency. One way to address this issue is to provide user-friendly means to teach or program a robot in order to extend its autonomous capability. This proposal focuses on the physical aspects of robot performance: manipulation, mobility, and their combination, whole-body manipulation.[1] Optimally, one would teach a robot as one would teach a human to perform a physical task. Since this would be an initial entry into intuitive, user-friendly methods for teaching a robot to perform complex physical acts, it is advisable to start with a virtual representation or simulation that lays the foundation for the interface and programming syntax that would describe the physical actions of the robot.[2] Programming a robot to use a new tool provides a good example where "teaching" will prove beneficial. There are issues with teaching proper grasp of the tool and the proper pose that allows the robot to use the tool effectively, while applying forces that would not damage the object(s) the tool acts upon, the tool, or the robot. Thus, before the issues of the human interface are addressed, the basic issues of control for a dynamic multi-body system must be solved in a manner that can be represented in a form programmable into a robot.[2,3] Focus should be placed first on the development of a software architecture that encompasses and integrates task and motion planning for whole-body manipulation.
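As a concrete illustration of the grasp-teaching issues described above, a taught grasp might be represented as a tool pose plus a force ceiling learned in simulation. The following is a minimal sketch only; the Grasp class and all field names are hypothetical and not part of any specified system.

```python
# Hypothetical sketch: representing a taught grasp as a pose plus a force
# limit, so that constraints learned in simulation can be checked on the
# real robot. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Grasp:
    tool: str
    pose: tuple          # (x, y, z, roll, pitch, yaw) of the gripper, SI units
    max_force_n: float   # force ceiling that avoids damaging tool or object

    def within_limits(self, commanded_force_n: float) -> bool:
        """Reject commands that would exceed the taught force ceiling."""
        return 0.0 <= commanded_force_n <= self.max_force_n

# Usage: a taught hammer grasp with a 40 N ceiling
hammer = Grasp("hammer", (0.1, 0.0, 0.25, 0.0, 1.57, 0.0), max_force_n=40.0)
assert hammer.within_limits(25.0)       # safe commanded force
assert not hammer.within_limits(60.0)   # would exceed the taught ceiling
```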
The software should enable a user to program a robot to perform physical tasks through a virtual environment capable of modeling dynamics and physical contact/interaction, without requiring a user such as a Soldier to write a "C-level" program. However, the programming constructs or "subroutines" created by this architecture to operate the robot must be accessible and usable from common programming languages such as C/C++ and Python as open-architecture libraries and subroutines. The newly learned behavior must have a structure that the robot can incorporate into its existing control programming and implement autonomously. Implementing the virtual-environment model of the robot, which would include descriptions of its physical configuration, actuators, and sensors, is expected to require highly trained and educated personnel. The virtual model should be of sufficient fidelity to allow control of a dynamic multi-body system to be developed. Once the detailed model is implemented in the virtual environment, however, the software should provide an interface that allows a user such as a Soldier to interact with and program the robot through the virtual environment to perform a physical task such as grasping, or repetitive tasks such as sweeping for mines and IEDs and trenching for wires. A secondary focus should be to develop an open software architecture that can be built upon and evolved over time, allowing DoD, universities, and private industry to collaborate as a software development community on this problem. Open-architecture efforts such as ROS should serve as an example. There should be interfaces to well-established speech recognition and vision libraries such as OpenCV. It is expected that the architecture will allow future advancements to be added; for example, there are different methodologies for "teaching" a robot how to recognize a physical object.
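The requirement that taught behaviors be exposed as ordinary library constructs, callable from common languages, could be sketched as follows. This is a minimal illustration in Python under assumed conventions; TaughtBehavior, Waypoint, and execute are hypothetical names, not a specified API.

```python
# Hypothetical sketch: a behavior "taught" in the virtual environment is
# exported as a library object that existing control code can invoke.
# All names are illustrative assumptions, not part of any existing API.
from dataclasses import dataclass
from typing import List

@dataclass
class Waypoint:
    joint_angles: List[float]  # one target angle per joint, in radians

class TaughtBehavior:
    """A recorded sequence of demonstrated poses, replayable autonomously."""
    def __init__(self, name: str):
        self.name = name
        self.waypoints: List[Waypoint] = []

    def record(self, joint_angles: List[float]) -> None:
        """Capture one demonstrated pose (e.g., from the virtual environment)."""
        self.waypoints.append(Waypoint(list(joint_angles)))

    def execute(self, send_command) -> int:
        """Replay the demonstration through a robot command callback;
        returns the number of commands issued."""
        for wp in self.waypoints:
            send_command(wp.joint_angles)
        return len(self.waypoints)

# Usage: a "sweep" behavior taught via two demonstrated poses,
# then replayed through a stand-in command callback.
sweep = TaughtBehavior("sweep")
sweep.record([0.0, 0.5, -0.3])
sweep.record([0.2, 0.4, -0.1])
sent = []
steps = sweep.execute(sent.append)  # steps == 2
```

A comparable C/C++ binding would expose the same record/execute calls as a shared library, which is what allows the generated constructs to be reused from existing control code.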
The open software architecture should provide the proper interfaces to allow object- and feature-recognition packages/algorithms to be added and updated.

PHASE I: The deliverables should include a final Phase I report that describes a feasibility concept encompassing the architecture, algorithms, and hardware needed to implement graphical and visual input for scripting an act of whole-body manipulation that can be implemented as part of a library describing autonomous motion for a robot. Requirements for the graphical input or interface for physics-based virtual robot models must also be documented. The feasibility concept should include a description of the software interface architecture for future expandability.

PHASE II: Phase II shall produce and deliver a prototype system for teaching a robot to perform mobility, manipulation, and whole-body manipulation. The Phase II system shall be demonstrated operating as a simulation of an actual robotic platform. The control constructs generated by the prototype teaching software will be used to program tasks in which the robot must interact with objects and the environment using whole-body manipulation. The user should not need to write a text-based program; however, it should be demonstrated that the generated robot control constructs may be used as libraries in "C/C++" code. It must also be demonstrated that the learned behavior can be implemented in a manner that allows the robot to use it in conjunction with its previous programming. The prototype system should include:
- A documented open-architecture framework and algorithms that allow additions and modifications to autonomous manipulation, mobility, and whole-body manipulation control.
- Prototype simulation software to augment the learning process, with open-architecture interfaces for programming a robot capable of whole-body manipulation.
- Demonstrations of learned/modified autonomous whole-body manipulation behavior.
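The pluggable recognition-package interface called for above might, under one set of assumptions, look like the following minimal Python sketch; RecognizerPlugin, register, and recognize_with are hypothetical names used only for illustration.

```python
# Hypothetical sketch: an open plugin interface that lets recognition
# packages be added or updated without modifying the core architecture.
# All names are illustrative assumptions.
from abc import ABC, abstractmethod

class RecognizerPlugin(ABC):
    name: str  # unique key under which the package is registered

    @abstractmethod
    def recognize(self, image) -> list:
        """Return labels of objects found in the image."""

_REGISTRY = {}

def register(plugin: RecognizerPlugin) -> None:
    # Registering under an existing name replaces (updates) that package.
    _REGISTRY[plugin.name] = plugin

def recognize_with(name: str, image) -> list:
    return _REGISTRY[name].recognize(image)

# Usage: a stub package registered and invoked through the open interface.
class StubEdgeRecognizer(RecognizerPlugin):
    name = "edges"
    def recognize(self, image) -> list:
        return ["tool_handle"] if image else []

register(StubEdgeRecognizer())
labels = recognize_with("edges", image=[[0, 1], [1, 0]])  # ["tool_handle"]
```

The registry indirection is the point: a newer recognition methodology can be dropped in under the same name, leaving calling code unchanged.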
The virtual environment software and interface hardware developed should be compatible with RS JPO interoperability standards. The architecture and algorithms for robot control will be fully documented. The source code will be made available to DoD employees, will follow common standards of open source programming, and will be compatible with ROS.

PHASE III: DUAL USE COMMERCIALIZATION: Research could be transitioned to DoD and first-responder efforts, or to university research programs. Robots in industry may be programmed for adaptability to changes in the fabrication process. If the software is adopted as a standard, it may enable the development of a descriptive verbal language for instructing robots. Better operator interfaces should reduce the skill level required to program robots, enhancing the adaptability of autonomous robot operation. This is especially true for advanced robot control concepts.