
Advanced Autonomy and Operator Interfaces for Complex Robotic Systems


OBJECTIVE: Develop autonomous capability for robots with human-like dexterity to perform complex tasks for medical applications.

DESCRIPTION: Current low-dimensional robots are directed by human operators using operator control units (OCUs), such as hand controllers, that send a continuous stream of commands to the end-effector to follow a desired trajectory. This "teleoperation" control strategy places a significant cognitive load on the operator, which grows exponentially as the number of degrees of freedom (DOF) of the robot increases (Lane et al., 2001). For example, a humanoid robot such as the Battlefield Extraction and Assist Robot (BEAR) has over 20 degrees of freedom (Atwood and Klein, 2007), whereas the manipulator on a Talon or PackBot robot has three DOF (Yamauchi, 2004). NASA's Robonaut is another example of a complex robotic system that is difficult to teleoperate (Ambrose et al., 2000). Difficulties encountered in coordinating operations with the remotely operated vehicles (ROVs) used to cap the Gulf oil leak also highlight the need for a better control strategy than simply using joysticks (Schneider, 2010). An additional consideration is that as operator workload increases, control of the robot slows considerably, which may jeopardize mission performance. Current operator interfaces are simply not suitable for directing higher-dimensional robots with human levels of capability. To reduce operator workload and speed up robot performance, the level of interaction between operator and robot must be elevated from teleoperation to at least semi-autonomous control. At this level, the operator may direct the robot through gestures or voice commands. The cadre of advanced interfaces may also include non-invasive brain-machine interfaces, such as electroencephalography (EEG), that can decipher user intent from "brain waves" (Bradberry et al., 2009).
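The scaling problem with per-joint teleoperation can be illustrated with a minimal sketch. The joint names, toy trajectory, and 50 Hz command rate below are illustrative assumptions, not details from the topic description; the point is only that a streaming OCU demands a value for every DOF on every control tick, so the operator's decision load scales directly with joint count.

```python
import math

# Illustrative sketch of the "teleoperation" strategy: the OCU streams a
# setpoint for every joint on every control tick. The joint names and the
# 50 Hz (20 ms) rate are hypothetical assumptions for illustration.

def stream_joint_setpoints(joint_names, trajectory, n_ticks, dt=0.02):
    """Yield (time, {joint: setpoint}) commands the OCU must send each tick.

    With 3 joints an operator can plausibly keep up; with 20+ joints
    (e.g., a humanoid like BEAR) the per-tick decision load is overwhelming.
    """
    for k in range(n_ticks):
        t = k * dt
        yield t, {name: trajectory(name, t) for name in joint_names}

# Toy trajectory: small sinusoidal motion on every joint.
traj = lambda name, t: 0.1 * math.sin(t)

manipulator = list(stream_joint_setpoints(["shoulder", "elbow", "wrist"], traj, 50))
humanoid = list(stream_joint_setpoints([f"joint_{i}" for i in range(20)], traj, 50))
print(len(manipulator[0][1]), "vs", len(humanoid[0][1]), "values per 20 ms tick")
```

One second of motion already requires 50 ticks of full joint-space input; the humanoid case multiplies every tick's burden by the DOF count, which is the cognitive-load growth the solicitation describes.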
This level of command requires development of novel operator interfaces coupled with significantly enhanced autonomy embedded within the robot itself. In particular, an advanced semi-autonomous, goal-directed task planning and execution system is needed for effective operation of high-DOF robots. While academic research on humanoid robots has produced motion planners capable of handling many degrees of freedom and complex situations, these planners require too much computation time and are vulnerable to the disturbances that occur in unstructured environments. For example, waypoint-based task planners have been developed for unmanned aerial vehicles (UAVs), but these place significant limitations on maneuver detail even with relatively few degrees of freedom (Bowen and MacKenzie, 2003). New approaches are needed that can quickly generate, execute, and modify plans for high-DOF robots in unstructured environments (Murphy, 2004). Users should be able to specify goals at a high level, such as reaching a desired goal state. For example, the operator should be able to simply point to objects at their current location and tell the robot to stack them at a new location, also indicated by pointing. Ideally, with this approach, no explicit OCU is required at all: the operator directs the robot through gestures and speech. This type of operator control interface represents a revolutionary advance over the current situation, in which one or more operators must devote their full attention (and both hands) to a briefcase-sized OCU. Freeing operators from such large and attention-intensive OCUs would make robots more accessible to soldiers in the field and to other personnel not trained to use OCUs. The autonomy system should automatically exploit spatial and temporal goal flexibility to enhance robustness and energy efficiency.
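The goal-directed interaction described above (point at objects, tell the robot to stack them at an indicated spot) can be contrasted with joint-level streaming in a minimal sketch. The `Goal` structure, object names, and primitive action vocabulary below are hypothetical placeholders, not part of the solicitation; the point is that the operator supplies only a goal state, and the robot's planner expands it into primitives.

```python
from dataclasses import dataclass

# Hedged sketch of goal-level command: the operator specifies *what* (these
# objects, stacked at that location), and the onboard planner decides *how*.
# All names here are illustrative assumptions.

@dataclass(frozen=True)
class Goal:
    objects: tuple      # objects the operator pointed at
    destination: tuple  # (x, y) location the operator pointed to

def plan_stack(goal: Goal):
    """Expand one high-level stacking goal into a primitive action sequence."""
    plan = []
    for height, obj in enumerate(goal.objects):
        plan.append(("pick", obj))
        plan.append(("place", obj, goal.destination, height))
    return plan

goal = Goal(objects=("crate_a", "crate_b"), destination=(4.0, 2.5))
steps = plan_stack(goal)
for step in steps:
    print(step)
```

The operator's entire input is the one `Goal` value, a single gesture-and-speech interaction, while the per-joint trajectories for each `pick`/`place` primitive are the robot's responsibility; this is the shift from teleoperation to semi-autonomous, goal-directed control the topic calls for.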
The task execution system should incorporate the necessary sensing and state estimation capabilities. These might include local terrain-sensing vision or LIDAR systems to support locomotion, and stereo or time-of-flight vision and touch sensors to support manipulation.

PHASE I: Conduct a survey of robots with human-like mobility and dexterity being developed for medical applications. Develop operator interface design concepts that translate functional robot requirements into implementable task strategies. Develop an animated computer model of the robot and a software emulation of the operator interface(s), along with medically relevant operational tasks. The prototype model should be capable of demonstrating a humanoid robot performing high-level tasks with minimal user-operator intervention. Develop a complete plan for a Phase II proof-of-concept demonstration and model validation of the new operator interface and autonomous control.

PHASE II: Build and demonstrate a working prototype Joint Architecture for Unmanned Systems (JAUS)-compliant robot control unit that implements execution of high-level tasks on two systems: (1) the CAD software model developed in Phase I, and (2) a humanoid robot hardware prototype of the proposer's choice. JAUS standards and documentation can be found at References 11-13. The prototype control unit should be capable of demonstrating key elements of Army medical robotic tasks such as casualty extraction, e.g., lifting a payload and placing it in a desired location with minimal user-operator intervention. At least semi-autonomous operation must be demonstrated, with maximum autonomy as the goal.
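The Phase II demonstration task (lift a payload and place it at a desired location with minimal operator intervention) can be sketched as a small sense-act executive. The state names, sensor flags, and primitive actions below are hypothetical placeholders; a real system would derive the observations from the LIDAR/stereo/touch sensing described above and issue primitives through a JAUS-compliant interface rather than the toy simulation used here to make the sketch runnable.

```python
# Hedged sketch: a semi-autonomous executive for a lift-and-place task.
# Every name here is an illustrative assumption, not a specified interface.

def run_executive(perceive, act, goal_pose, max_steps=100):
    """Drive APPROACH -> GRASP -> CARRY -> PLACE using the sensed state."""
    state = "APPROACH"
    for _ in range(max_steps):
        obs = perceive()  # current state estimate from onboard sensing
        if state == "APPROACH":
            if obs["at_payload"]:
                state = "GRASP"
            else:
                act("move_to_payload")
        elif state == "GRASP":
            if obs["holding"]:
                state = "CARRY"
            else:
                act("close_gripper")
        elif state == "CARRY":
            if obs["at_goal"]:
                state = "PLACE"
            else:
                act("move_to", goal_pose)
        elif state == "PLACE":
            act("open_gripper")  # release the payload at the goal
            return "DONE"
    return state                 # timed out before completing the task

# Tiny simulated world so the sketch runs end to end.
world = {"at_payload": False, "holding": False, "at_goal": False}

def perceive():
    return dict(world)

def act(name, *args):
    if name == "move_to_payload":
        world["at_payload"] = True
    elif name == "close_gripper":
        world["holding"] = True
    elif name == "move_to":
        world["at_goal"] = True
    elif name == "open_gripper":
        world["holding"] = False

result = run_executive(perceive, act, goal_pose=(4.0, 2.5))
print(result)  # -> DONE
```

Because each transition is gated on perception rather than on operator commands, the operator's role reduces to supplying `goal_pose`; the same closed loop also lets the executive re-issue primitives when a disturbance (here, a flag that fails to flip) undoes progress, which is the robustness the topic asks for in unstructured environments.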
PHASE III DUAL USE APPLICATIONS: Produce a ruggedized, deployable prototype control unit ready for demonstration and limited operational testing on a humanoid robot used in military and civilian emergency response scenarios, such as recovering injured personnel in mine, construction site, and nuclear power plant accidents; chemical spills; firefighting; terrorist and hostage situations; and police response to situations involving armed suspects. Commercialization of this technology could potentially save many lives among military and civilian emergency medical personnel, as well as among the targeted casualties and injured persons.

REFERENCES:
1. Lane JC, Carignan C, Akin D. "Advanced Operator Interface Design for Complex Space Telerobots." Autonomous Robots 11, Kluwer Academic Publishers, pp. 49-58, 2001.
2. Atwood T, Klein J. "A life-size humanoid robot that searches for and rescues people." Botmag, April 25, 2007.
3. Yamauchi B. "PackBot: A Versatile Platform for Military Robotics." In Proceedings of SPIE Vol. 5422: Unmanned Ground Vehicle Technology VI, Orlando, FL, April 2004.
4. Ambrose R, Aldridge H, Askew RS, Burridge RR, Bluethmann W, Diftler M, Lovchik C, Magruder D, Rehnmark F. "Robonaut: NASA's Space Humanoid." Humanoid Robotics, July/Aug. 2000, pp. 57-63.
5. Bradberry TJ, Gentili RJ, Contreras-Vidal JL. "Decoding three-dimensional hand kinematics from electroencephalographic signals." Conf Proc IEEE Eng Med Biol Soc. 2009:5010-3.
6. Bowen DG, MacKenzie SC. "Unmanned Vehicles: Technological Drivers and Constraints." Defence R&D Canada, Sep 2003.
7. Murphy RR. "Human-Robot Interaction in Rescue Robotics." IEEE Trans. Systems, Man, and Cybernetics, vol. 34, no. 2, 2004, pp. 138-153.
8. USAMRMC TATRC Autonomous Combat Casualty Care Programs.
9. Department of the Army Pamphlet 525-7-15, "US Army Concept Capability Plan for Army Aviation Operations 2015-2024," page 43, paragraph 2-11b(5)(e).
10. Schneider D. "The Gulf Spill's Lessons for Robotics." IEEE Spectrum, 47(8): pp. 9-12, August 2010.
11. Joint Architecture for Unmanned Systems (JAUS) Documentation:
12. JAUS Transport Specification:
13. Society of Automotive Engineers (SAE) Standard AS5669, JAUS/SDP Transport Specification: