
Advanced Human-Robot Interaction to Create Human-Robot Teams

Description:

OBJECTIVE: Develop human-robot interaction technology to better integrate robotic mule platforms such as the Legged Squad Support System (LS3) or the Squad Mission Support System (SMSS) with human operators by improving the mode of interaction, the model of the human operator's intent, and the ability of the platform to communicate its own intent to the human operator.

DESCRIPTION: Recent robotic advancements such as the LS3 and the SMSS are poised to change the way the dismounted squad manages its physical burden. However, due to unreliable or unnatural interactions with robotic systems, there is a risk that robotic mule platforms will simply replace the soldiers' physical burden with the mental workload of operating and monitoring a robot. For the vision of a robotic mule to be realized, there must be a dramatic shift both in the way the operator interacts with the platform and in how aware the platform is of the humans it supports. The imagined use case is long periods of monotonous following behavior punctuated by intense bursts of activity in which the operator does not have time to scroll through user menus to tell the robot what to do. In such situations the robot should intuit the operator's intent: keep out of the way, head to an objective, shut down, etc.

To address these challenges, this topic focuses on three primary objectives:
1) a natural mode of interaction, e.g., voice commands through radio, direct voice commands, gesture recognition, or passive visual following;
2) an ability for the robot to intuit higher-level operator intent, using features such as the high-level mission plan, situational stress, and previous command history;
3) the ability of the robot to communicate its intent to the human operator in a heads-up, hands-free, non-intrusive way.
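As a minimal illustrative sketch of objective 2 (not part of the topic requirements), intent inference from operator features could be framed as a simple Bayesian update. The intent labels, feature names, and probabilities below are invented assumptions for illustration only:

```python
# Hypothetical sketch: inferring operator intent from observable features
# such as situational stress and recent command history. All intents,
# features, and likelihood values are illustrative assumptions.
INTENTS = ["follow", "hold_position", "go_to_objective", "shut_down"]

# Assumed likelihoods P(feature | intent) for two toy binary features.
LIKELIHOOD = {
    "follow":          {"stress_high": 0.2, "recent_cmd_stop": 0.1},
    "hold_position":   {"stress_high": 0.6, "recent_cmd_stop": 0.7},
    "go_to_objective": {"stress_high": 0.5, "recent_cmd_stop": 0.2},
    "shut_down":       {"stress_high": 0.1, "recent_cmd_stop": 0.4},
}

def infer_intent(observed_features, prior=None):
    """Naive Bayes update over intents given observed binary features."""
    prior = prior or {i: 1.0 / len(INTENTS) for i in INTENTS}
    posterior = {}
    for intent in INTENTS:
        p = prior[intent]
        for feat in observed_features:
            # Unmodeled features are treated as uninformative (p = 0.5).
            p *= LIKELIHOOD[intent].get(feat, 0.5)
        posterior[intent] = p
    total = sum(posterior.values())
    return {i: p / total for i, p in posterior.items()}

belief = infer_intent(["stress_high", "recent_cmd_stop"])
best = max(belief, key=belief.get)
```

A production system would, per the topic, also condition on the high-level mission plan; hidden Markov models (see the Rabiner tutorial in the references) are a natural extension for sequential command history.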
This SBIR assumes an adjustable autonomy capability already exists on the robot, scaling from full obstacle avoidance and GPS waypoint navigation down through teleoperation and remote control. This SBIR will leverage that adjustable autonomy and develop a model of the operator's intent that can influence the level of autonomy at which the robot performs in a given situation and adjust the amount of interaction the platform demands from the human operator. This topic is not focused on developing new human-robot interaction modalities. Joysticks, speech recognition, machine vision, haptic devices, etc., have been studied extensively, and these existing modalities are believed to be sufficient to achieve the goals of this SBIR when adapted and incorporated into a natural Soldier Machine Interface (SMI). Separately, these modalities have been demonstrated, but an SMI that naturally integrates the different techniques has not been shown. Unless revolutionary and quantifiable improvements in efficacy or performance are reasonably expected, this SBIR will not fund the development of new interaction modalities. It is also expected that only a portion of the candidate interaction modalities will prove successful enough to incorporate into the robotic mule platform; a sufficient number of initial approaches should therefore be pursued to improve the chance of final success.

PHASE I: Currently, the SMI for robotics is some sort of ruggedized laptop. Not only does this add weight to the dismounted soldier, it requires them to operate the robot with menu screens and joysticks. In Phase I we seek a system architecture that will allow the soldier to operate the robot in a heads-up, hands-free way. During Phase I, the prototype SMI may be simulated in a Robot Operating System (ROS) based environment rather than on a physical robot. Define sensor, processor, and software requirements.
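The coupling between the intent model and the assumed adjustable-autonomy stack could be sketched as a simple policy that raises autonomy (and so lowers interaction demand) when intent confidence is high. The autonomy levels mirror the range stated in the topic; the thresholds and the `operator_busy` signal are invented assumptions:

```python
# Illustrative sketch only: the topic assumes an adjustable-autonomy stack
# already exists on the robot. The thresholds and the operator_busy input
# below are hypothetical.
from enum import IntEnum

class AutonomyLevel(IntEnum):
    TELEOPERATION = 0        # full operator control
    REMOTE_CONTROL = 1
    WAYPOINT_NAV = 2         # GPS waypoint navigation
    FULL_OBSTACLE_AVOID = 3  # full obstacle avoidance

def select_autonomy(intent_confidence: float, operator_busy: bool) -> AutonomyLevel:
    """Raise autonomy when the intent model is confident and the operator
    is busy, so the platform demands less interaction from the soldier."""
    if operator_busy and intent_confidence >= 0.8:
        return AutonomyLevel.FULL_OBSTACLE_AVOID
    if intent_confidence >= 0.5:
        return AutonomyLevel.WAYPOINT_NAV
    if intent_confidence >= 0.2:
        return AutonomyLevel.REMOTE_CONTROL
    return AutonomyLevel.TELEOPERATION
```

The design point is that the autonomy level is an output of the operator model, not a menu setting the soldier must manage during a burst of activity.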
Propose metrics for quantifying the impact and reliability of this SMI methodology. Develop a minimal lexicon for interaction with the robot (no more than a dozen commands such as start, stop, follow closer, follow side-by-side, turn, slow down, etc.). This lexicon will serve as a baseline for the robot, and it will be interpreted for the robot by the SMI; i.e., the communication may be verbal or non-verbal, but the SMI will translate it into a standard JAUS communication protocol whether the SMI resides on the robot, on the soldier, or both. In any case, the modality for robot communication with the operator must be heads-up, hands-free, and lightweight.

Deliverables: A Phase I report documenting the work above on the concept of operations (CONOPS), the design of the SMI and its reliability metrics, requirements analysis, interaction modalities, the mule robot lexicon, and a design path to Robotic Systems Joint Project Office (RS-JPO) Interoperability Profile (IOP) compliance.

PHASE II: Integrate a fully functional SMI onto an operationally relevant platform such as the LS3 or a surrogate platform (e.g., an all-terrain vehicle, ATV). Demonstrate the effectiveness of this novel SMI using the metrics defined in Phase I. The SMI should be robust to ambient noise, sunlight, occlusions between the robot and the leader (including the leader moving out of the camera frame), and multiple soldiers (i.e., nine squad members) in the environment. The objective is MIL-STD-810 compliance.

Deliverables: Phase II shall deliver a RELIABLE heads-up, hands-free device for controlling the robot. The spirit of this topic is to integrate the robot into the squad to the fullest extent possible and keep soldiers' focus on their primary mission. A final report shall detail the progress made during the course of the research. A final demonstration will show the SMI fully integrated with the mule robot or a mule robot surrogate.
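The Phase I minimal lexicon and its translation by the SMI could be sketched as a lookup from recognized tokens to platform commands. The command names follow the examples given in the topic; the message names and parameters below are placeholders, not actual JAUS message IDs (a real SMI would emit standard JAUS messages per the RS-JPO IOP):

```python
# Hypothetical sketch of the Phase I minimal lexicon (about a dozen
# commands). Message names and parameters are placeholders, not real
# JAUS message identifiers.
LEXICON = {
    "start":           ("SetTravelSpeed",   {"speed_mps": 1.5}),
    "stop":            ("SetTravelSpeed",   {"speed_mps": 0.0}),
    "follow_closer":   ("SetFollowOffset",  {"range_m": 2.0}),
    "follow_side":     ("SetFollowOffset",  {"bearing_deg": 90.0}),
    "turn_left":       ("SetHeadingDelta",  {"delta_deg": -45.0}),
    "turn_right":      ("SetHeadingDelta",  {"delta_deg": 45.0}),
    "slow_down":       ("ScaleTravelSpeed", {"factor": 0.5}),
    "go_to_objective": ("SetWaypoint",      {"use_mission_plan": True}),
    "shut_down":       ("Shutdown",         {}),
}

def translate(utterance: str):
    """Map a recognized lexicon token (whether it arrived as speech,
    gesture, or another modality resolved by the SMI front end) to a
    platform command tuple."""
    token = utterance.strip().lower().replace(" ", "_")
    if token not in LEXICON:
        raise ValueError(f"unknown command: {utterance!r}")
    return LEXICON[token]
```

Keeping the lexicon this small is what makes a heads-up, hands-free front end tractable: recognition only has to discriminate among a dozen tokens, and every modality funnels into the same command set.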
PHASE III: Work to have the proposed system become part of a program of record such as the Squad Multipurpose Equipment Transport (SMET). It is imagined that this human interface technology could also be applied to industrial or home robotics.

REFERENCES:
1. "Speech Recognition Through the Decades: How We Ended Up With Siri," PCWorld. http://www.pcworld.com/article/243060/speech_recognition_through_the_decades_how_we_ended_up_with_siri.html
2. Rabiner, L.R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE.
3. Bishop, C.M. (2006). Pattern Recognition and Machine Learning.
4. Monnier, C.S., et al. (2012). A monocular leader-follower system for small mobile robots. SPIE Vol. 8387.
5. Big Picture: Pictorial Report No. 29, original use of Army mules. http://www.youtube.com/watch?v=neg9V5IxPJ4 (start the video at 17:48)
6. Saiki, L.Y.M., et al. (2012). How do people walk side-by-side? Using a computational model of human behavior for a social robot. In Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2012), Boston, USA.
7. Doisy, G. (2012). Sensorless collision detection and control by physical interaction for wheeled mobile robots. In Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2012), Boston, USA.
8. April Tags. http://april.eecs.umich.edu/wiki/index.php/April_Tags
9. Interoperability Profile. http://www.rsjpo.army.mil/
10. Fowler, M. The New Methodology. http://martinfowler.com/articles/newMethodology.html#xp