Multi-Modal Visualizations for Virtual Environment Training
Small Business Information
897 Kensington Gardens Court, Oviedo, FL, 32765
Abstract

The long-term objectives of the proposed research are to develop effective multi-modal visualization design guidelines from the perspective of human information processing, problem solving, and visual display theories, and to empirically test and validate the determined guidelines using efficient sequential experimentation techniques (Han, Williges, & Williges, 1997; Williges, Williges, & Han, 1992). It is proposed that such theories and approaches can be used to build a theoretical framework of principle-driven visualization design guidelines based on the inherent visual, aural, and haptic primitives associated with each modality's respective goal-relevant domain/task attributes. Such a classification may assist visualization designers in successfully transforming information into the appropriate perceptual form for users' application domain and task goals. Furthermore, training tools employing these principle-driven visualization guidelines may help reduce the time and cost associated with military training system development and deployment by identifying a set of common design guidelines that may be applied intelligently and consistently across multiple training system domains (e.g., COVE and LCAC). Empirical testing and validation of the proposed design guidelines will be conducted in the multimodal VE testbeds at the University of Central Florida's (UCF) Immersion Center and in conjunction with the Naval Air Warfare Center Training Systems Division (NAWCTSD) in Orlando, Florida. Development of an interactive visualization design tool to aid designers in assessing and modifying multimodal human-computer systems to ensure they meet necessary usability requirements offers wide utility to the HCI field in general and to military system design in particular. The feasibility of developing such a tool will be explored in Phase 3, should the initial phases be successful. Software development packages will also be explored in Phase 3 for prototyping such a tool.
Current and future collaborations with the UCF Immersion Center will provide a means for implementing the proposed intelligent advising tools and for exploring the possibility of developing commercial applications with such tools. Furthermore, the efforts of this continued research may transition directly to training systems already in procurement. For instance, the Conning Officer Virtual Environment (COVE) training system, which could be utilized as a future testing/implementation platform, is focused on training future Conning Officers to perform underway replenishment (UNREP) tasks, as well as basic shiphandling commands and maneuvers. This research may also transition to future VE training systems, such as those aimed at improving a shiphandler's ability to command the Landing Craft Air Cushion Vehicle (LCAC), which involves some shiphandling tasks similar to the UNREP task. The implication is that goal-relevant domain attributes identified for the UNREP task may be generalizable to tasks performed in the LCAC. Consequently, the identified visualization techniques could be employed for similar tasks performed in diverse training systems, facilitating design simplification and procedural consistency across military training systems.
* Information listed above is current as of the time of submission.