Description:
TECHNOLOGY AREA(S): Air Platform, Human Systems
OBJECTIVE: To support Warfighter (soldier, airman, commander, etc.) situational awareness and decisive action through the development of presentation techniques capable of providing a 360 degree field of view with enhanced symbology in a 120 degree field-of-view immersive virtual reality display.
DESCRIPTION:

Background: Advancements in computer processing power and memory, 3D and virtual reality displays, and sensor technology have reached a point that allows for substantial augmentation of the human warfighter. The warfighter of tomorrow will have extra-human capabilities that will allow him or her to perceive, evaluate, plan, and respond in ways far beyond those of either humans or machines acting in isolation. Current limitations of the human body (e.g., limited sensory range and capability; limited retrospective, prospective, and declarative memory; and limited computation) can be overcome using augmentation. Humans use a stereo-optical visual system limited to roughly a 120 degree field of view, yet they must operate in a 360 degree world. Thus, two thirds of the environment is unavailable to the human at any one time; the human must turn his or her head or body to see anything in the remaining 240 degrees. Numerous studies of altered or inverted visual fields strongly suggest that the human visual system is limited primarily by the sensors (eyes) rather than by the computational mechanism (brain). Indeed, while theories vary, the brain appears quite capable of acclimating to non-standard visual fields, and humans can perform at the same proficiency using these altered fields. A period of retraining/acclimation is usually required to transition to the new field, and a period of acclimation is usually required to return to the normal field of view (negative aftereffects). Some research suggests that humans can rapidly switch between different fields of view.

Solution: This task seeks to develop visual compression techniques that will display a live, real-time 360 degree visual image in a 120 degree field of view using a virtual reality headset. Rather than compressing the entire field uniformly, an algorithm should be developed such that there is no resolution or distance distortion for images in the fovea (approximately +/- 5 degrees off the center line of sight), with the peripheral portion of the display compressed to present the remaining 350 degrees. The exact method of compression (e.g., linear, logarithmic) should be explored to determine the 'best fit' for human performance. Best fit is defined by factors such as task performance; situation awareness of objects and the movement of objects in the environment; length of acclimation and aftereffect periods; ability to switch between the normal field of view and the 360 degree field of view; and subject comfort and endurance.

Current Limitations: Many game development and 3D computer modeling and animation software packages offer ways to increase the field of view. However, these packages do not allow for variations in the compression pattern (e.g., keeping the resolution and distance of objects in the foveal view at those of the normal field of view). No rigorous experimentation and research into the effectiveness of 360 degree field-of-view presentations has been conducted comparable to the work on distorted fields of view such as inverted or prism displays.
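To make the intended remapping concrete, the following is a minimal sketch of the piecewise angular compression described above, assuming a one-dimensional (azimuthal) remap from display eccentricity to world eccentricity. The function name world_angle, the constants, and the shape parameter k are illustrative only; the topic does not prescribe a formulation, and the 'log' variant shown is just one plausible non-linear family.

```python
import math

FOVEA_DEG = 5.0        # half-angle of the 1:1 foveal region
DISPLAY_HALF = 60.0    # half-angle of the 120-degree display
WORLD_HALF = 180.0     # half-angle of the 360-degree world

def world_angle(display_deg: float, mode: str = "linear", k: float = 3.0) -> float:
    """Map a display eccentricity (degrees off boresight) to the world
    eccentricity it should show: identity inside the fovea, compressed
    outside it so the 120-degree display covers the full 360 degrees."""
    sign = math.copysign(1.0, display_deg)
    d = abs(display_deg)
    if d <= FOVEA_DEG:
        return display_deg                    # 1:1, no distortion in the fovea
    # normalized position within the peripheral band of the display, 0..1
    t = (d - FOVEA_DEG) / (DISPLAY_HALF - FOVEA_DEG)
    if mode == "linear":
        w = t                                 # constant compression ratio
    elif mode == "log":
        # exponential world growth per display degree: gentle near the
        # fovea, aggressive at the display edge (one plausible family)
        w = (math.exp(k * t) - 1.0) / (math.exp(k) - 1.0)
    else:
        raise ValueError(mode)
    return sign * (FOVEA_DEG + w * (WORLD_HALF - FOVEA_DEG))
```

Under either mapping, the central +/- 5 degrees is shown 1-to-1 while the outer 55 degrees of display on each side carries the remaining 175 degrees of world azimuth on that side.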
PHASE I: Develop a functional proof-of-concept system capable of providing a 360 degree field of view with enhanced symbology in a 120 degree field-of-view immersive virtual reality display to the user. The user's foveal vision (+/- 5 degrees) should have a resolution and apparent distance that is 1-to-1 with the normal field of view. At least two variations of compression fall-off from the foveal region (linear, logarithmic) should be produced. The user should be able to move around within a virtual 3D environment and react to (e.g., identify, turn toward, target) artificial objects in the environment. A protocol for acclimation to the 360 degree view and re-acclimation to the normal field of view should be proposed and demonstrated on at least one individual.
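As one way to visualize how the two Phase I fall-off variants would drive actual image formation, the sketch below resamples an equirectangular 360 degree panorama into a 120 degree display image using the world_angle function from the earlier sketch. It is azimuth-only and nearest-neighbor, purely to illustrate the data flow; a real headset implementation would remap both axes per-eye, most likely in a GPU shader.

```python
import numpy as np

def render_display(pano: np.ndarray, out_width: int = 1200,
                   mode: str = "linear") -> np.ndarray:
    """Resample an equirectangular 360-degree panorama (H x W x 3) into a
    120-degree display image using world_angle(). Elevation handling is
    omitted for brevity; each output column samples one world azimuth."""
    h, w, _ = pano.shape
    out = np.empty((h, out_width, 3), dtype=pano.dtype)
    for col in range(out_width):
        # display azimuth for this column, spanning -60..+60 degrees
        disp = -DISPLAY_HALF + (col + 0.5) * (2 * DISPLAY_HALF / out_width)
        world = world_angle(disp, mode=mode)          # -180..+180 degrees
        src = int((world + 180.0) / 360.0 * (w - 1))  # nearest-neighbor lookup
        out[:, col, :] = pano[:, src, :]
    return out
```

Rendering the same panorama once with mode="linear" and once with mode="log" gives a direct side-by-side view of how each fall-off allocates display pixels to the periphery.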
PHASE II: Evaluate variations of the proof-of-concept system (including acclimation and re-acclimation protocols) to establish the best concept. Enhance the best concept to demonstrate more realistic warfighter scenarios for ground and air. Examine candidate sensor solutions and make recommendations for best uses of the concept. Evaluation criteria include (but are not limited to) the following: time to detect threats (single and multiple) – faster = better; accuracy of threat classification – greater = better; negative physiological effects (e.g., spatial disorientation, headache, fatigue, vertigo, dizziness, difficulty assessing distances) – fewer = better; ability to navigate, move, and avoid obstacles through complex environments – easier + faster = better; and ease of transition between 360-degree mode and normal mode (i.e., acclimation) – easier + faster = better.
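These criteria lend themselves to a simple per-trial record. The sketch below shows one hypothetical way to log and aggregate them when comparing the linear and logarithmic variants; the field names and the SSQ-style symptom score are assumptions, not requirements of the topic.

```python
from dataclasses import dataclass

@dataclass
class TrialResult:
    """One evaluation trial under a given compression variant.
    Field names are illustrative; the topic prescribes no schema."""
    variant: str                  # "linear" or "log"
    detect_time_s: float          # time to detect threat (faster = better)
    classification_correct: bool  # threat classified correctly (greater = better)
    sickness_score: int           # e.g., SSQ-style symptom score (fewer = better)
    course_time_s: float          # obstacle-course completion (faster = better)
    transition_time_s: float      # 360-mode <-> normal-mode switch (faster = better)

def summarize(trials: list[TrialResult]) -> dict[str, float]:
    """Aggregate per-variant means for comparison; assumes at least one trial."""
    n = len(trials)
    return {
        "mean_detect_time_s": sum(t.detect_time_s for t in trials) / n,
        "classification_accuracy": sum(t.classification_correct for t in trials) / n,
        "mean_sickness_score": sum(t.sickness_score for t in trials) / n,
        "mean_course_time_s": sum(t.course_time_s for t in trials) / n,
        "mean_transition_time_s": sum(t.transition_time_s for t in trials) / n,
    }
```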
PHASE III: Develop combined VR display and sensor system prototypes and demonstrate the combined system in a real-world setting (e.g., outdoors; navigating indoor office hallways and spaces; operating vehicles such as aircraft or ground vehicles) where the system presents real-time data while the user carries out a number of activities. The system will also be well suited for transition to a variety of commercial applications, including monitoring systems for police, border patrol, and private security, and the entertainment industry, specifically motion capture for the production of video games and movies. Evaluation criteria are the same as for Phase II, but also include (but are not limited to) the following: system weight – less = better; wearer comfort – more = better; visual resolution – greater = better; system robustness – greater = better; and all-weather capability – greater = better.
REFERENCES:
1: Welch, Robert B. Perceptual Modification: Adapting to Altered Sensory Environments. Elsevier, 2013.
2: Martin, T. A., et al. "Throwing while looking through prisms. II. Specificity and storage of multiple gaze-throw calibrations." Brain 119.4 (1996): 1199-1212.
3: Fernández-Ruiz, Juan, and Rosalinda Díaz. "Prism adaptation and aftereffect: specifying the properties of a procedural memory system." Learning & Memory 6.1 (1999): 47-53.
4: Kennedy, Robert S., and Kay M. Stanney. "Postural instability induced by virtual reality exposure: Development of a certification protocol." International Journal of Human-Computer Interaction 8.1 (1996): 25-47.
5: Clower, Dottie M., and Driss Boussaoud. "Selective use of perceptual recalibration versus visuomotor skill acquisition." Journal of Neurophysiology 84.5 (2000): 2703-2708.
6: Lin, J. J.-W., et al. "Effects of field of view on presence, enjoyment, memory, and simulator sickness in a virtual environment." Proceedings of IEEE Virtual Reality 2002. IEEE, 2002.
KEYWORDS: Image Processing, Three-dimensional Imagery, Photogrammetry, Human Systems, Virtual Reality, Field Of View, Augmented Reality, Situation Awareness, Sensor, Computer Vision