
Naturalistic Operator Interface for Immersive Environments

Description:

OBJECTIVE: Develop and demonstrate effective teleoperator interface methods for supervisory control of a network of assets in a fully-immersive, synthetically-augmented environment.

DESCRIPTION: Surveillance and intelligence gathering can be likened to finding needles in haystacks: it may take days or weeks to gather enough evidence in an operational environment to take decisive action. If non-cooperative subjects know they are under surveillance, enough evidence may never be aggregated; this is particularly true in urban environments, where it is difficult to observe a scene without being noticed. To enable intelligence, surveillance, and reconnaissance (ISR) operations, smaller and stealthier sensors are currently being developed that will allow the operator to gather critical information remotely. This physical detachment from the combat arena requires revolutionary interface technologies to provide the operator with a sense of presence in the remote environment. Obtaining a sense of environmental presence is critical to the operator's decision-making ability, situation awareness, and workload. Research suggests that a fully-immersive, synthetically-augmented display system is ideal for establishing an authentic sense of environmental presence in contemporary sensor operations. An example of such a system is the Supervisory Control and Cognition Branch's (RHCI's) Immersive Collaborative Environment (ICEbox), which employs several large, high-resolution displays positioned to completely enclose the operator. In contrast to traditional sensor display systems, which consist primarily of an assemblage of several computer monitors, the fully-immersive character of the ICEbox, together with its ability to exhibit synthetically-augmented overlays, allows the operator to establish a perceptual sense of presence in the remote environment. It is believed that by developing this sense of environmental presence through an expansive field of view and data presentation, the operator will experience an improvement in both decision-making and situation awareness, and a decrease in workload.

Despite these identified benefits of remote interaction with a network of collected assets, such a system has yet to come to fruition because the state of the art in fully-immersive, synthetically-augmented display technology lacks an adequate means of human-machine interaction. The most significant challenge faced by researchers and designers is developing an innovative and effective human-machine interface that does not rely on the traditional keyboard and mouse, which are impractical in this setting. Since humans routinely exercise several sensory avenues simultaneously during information exchange, an intuitive human-machine interface would ideally permit the use of multiple modes of input concurrently. However, recent efforts at incorporating alternative modes of input (such as speech recognition, touch, full-body gesture recognition, and eye-tracking) have commonly focused on a single modality to the exclusion of others. Additionally, inaccuracies in the recognition of user input, and in the interpretation of user intent, continue to arise with each individual modality. In the few efforts that have attempted to fuse simultaneous input from multiple modalities, these problems are compounded by the complexity of the integration; one common fusion approach is sketched below.
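To make the multimodal fusion problem concrete, the following is a minimal sketch of time-windowed late fusion, in the spirit of classic "put that there" interfaces: a deictic speech command is resolved against the gesture stream and gated on a joint confidence score. The event types, thresholds, and the independence assumption behind the joint score are illustrative assumptions, not part of any specific toolkit or a required approach.

```python
# Minimal sketch: time-windowed fusion of speech and pointing-gesture events.
# SpeechEvent, GestureEvent, and fuse_command are hypothetical names; real
# recognizers would stream events with their own schemas and scores.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechEvent:
    t: float           # seconds since session start
    command: str       # e.g. "slew camera there"
    confidence: float  # recognizer score in [0, 1]

@dataclass
class GestureEvent:
    t: float
    target: tuple      # (x, y) pointing location in display coordinates
    confidence: float

FUSION_WINDOW = 1.5    # seconds: max gap between modalities to treat as one act
MIN_JOINT_CONF = 0.5   # reject fused commands below this combined score

def fuse_command(speech: SpeechEvent, gestures: list[GestureEvent]) -> Optional[dict]:
    """Resolve a deictic utterance ("there") against the gesture stream.

    Picks the gesture closest in time to the utterance, within the fusion
    window, and gates the result on a joint confidence score. Returns None
    when no gesture is close enough or the joint score is too low.
    """
    candidates = [g for g in gestures if abs(g.t - speech.t) <= FUSION_WINDOW]
    if not candidates:
        return None  # a unimodal fallback would be handled elsewhere
    best = min(candidates, key=lambda g: abs(g.t - speech.t))
    joint_conf = speech.confidence * best.confidence  # naive independence assumption
    if joint_conf < MIN_JOINT_CONF:
        return None  # ambiguous input: prompt the operator to confirm instead
    return {"command": speech.command, "target": best.target, "confidence": joint_conf}

# Example: "slew camera there" spoken 0.4 s after a pointing gesture.
if __name__ == "__main__":
    gestures = [GestureEvent(t=10.2, target=(1140, 620), confidence=0.9)]
    speech = SpeechEvent(t=10.6, command="slew camera there", confidence=0.8)
    print(fuse_command(speech, gestures))
    # {'command': 'slew camera there', 'target': (1140, 620), 'confidence': 0.72}
```

Even this toy version exhibits the compounding the topic describes: per-modality recognition errors multiply through the joint score, and the window and threshold parameters trade missed fusions against false ones.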
As a result, fully-immersive, synthetically-augmented display technology has been prevented from progressing at a more accelerated rate. If fully-immersive, synthetically-augmented display systems are to mature rapidly, a more robust and naturalistic means of human-machine interaction (HMI) must be developed. Creating HMI that realizes this "telepresence" and mission effectiveness simultaneously requires developing novel interface concepts, which in turn will distill requirements for sensor and information network responsiveness and control. Actualizing this capability through a human-centric approach should improve operator performance at both the individual and team levels.

PHASE I: For remote sensor network management, develop a framework that supports intuitive human-machine communication in a fully-immersive, synthetically-augmented environment. Demonstrate aspects of the constituent technology and illustrate how it will be incorporated to provide enhanced benefits in Phase II. Develop an experimental plan to establish improvements in usability in Phase II.

PHASE II: Develop and demonstrate a prototype system to be employed in a representative application-domain simulation. Evaluate the human-machine exchanges to illustrate payoffs in interaction speed, error reduction, workload, training time reduction, and/or interaction flexibility (one way to compute such measures is sketched below).

PHASE III: MILITARY APPLICATION: Successful maturation of intuitive human-machine interaction technology employed in a fully-immersive, synthetically-augmented environment would enhance a variety of complex military and commercial monitoring, planning, and control domains. COMMERCIAL APPLICATION: Remote sensor operations and RPA control are immediate application areas, but utility would also extend to domains such as virtual learning environments, advanced business teleconferencing, and enhanced medical imaging systems.
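As one illustration of the Phase II evaluation above, the following is a minimal sketch of scoring per-trial logs for interaction speed, errors, and workload. The Trial fields, the baseline/prototype labels, and the use of raw (unweighted) NASA-TLX as the workload measure are illustrative assumptions; a real experimental plan would specify its own instruments and statistics.

```python
# Minimal sketch: summarizing usability measures from hypothetical trial logs.
# Raw NASA-TLX here is the unweighted mean of the six 0-100 subscale ratings.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trial:
    interface: str      # "keyboard_mouse" baseline or "multimodal" prototype
    task_time_s: float  # interaction speed
    errors: int         # recognition / interpretation errors observed
    tlx: dict           # six raw NASA-TLX subscale ratings, each 0-100

def raw_tlx(tlx: dict) -> float:
    """Raw (unweighted) NASA-TLX: mean of the six subscale ratings."""
    return mean(tlx.values())

def summarize(trials: list[Trial], interface: str) -> dict:
    subset = [t for t in trials if t.interface == interface]
    return {
        "mean_time_s": mean(t.task_time_s for t in subset),
        "mean_errors": mean(t.errors for t in subset),
        "mean_workload": mean(raw_tlx(t.tlx) for t in subset),
    }

if __name__ == "__main__":
    subscales = ["mental", "physical", "temporal",
                 "performance", "effort", "frustration"]
    trials = [
        Trial("keyboard_mouse", 48.0, 3, dict.fromkeys(subscales, 65)),
        Trial("multimodal",     31.5, 1, dict.fromkeys(subscales, 40)),
    ]
    for iface in ("keyboard_mouse", "multimodal"):
        print(iface, summarize(trials, iface))
```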