SFMS Using Multisensory Ontologies
Agency / Branch:
DOD / NAVY
Navy warfighters are inundated with information, yet they often lack the contextual knowledge that helps them quickly understand, or "visualize," a situation in order to make rapid yet effective decisions. We propose to develop a Spatial Framework Mapping System (SFMS) based on an innovative multisensory ontology architecture. We define a multisensory ontology as a semantic model that relates visual artifacts not only to each other, as in the Spatial Data Artifact Taxonomy, but also to other sensory artifacts, such as video, voice, photos, and touch-points, creating an "immersive knowledge environment" for Navy decision-makers. Our goal is to improve decision-makers' understanding of information, and the overall quality of their decisions, by engaging critical human senses more directly in situational understanding. As a foundational component of SFMS, we also propose to develop a Spatial Data Artifacts Library to store and access sparkline-type artifacts, along with authoring and administration tools that support technical staff in mapping artifacts to complex datasets, configuring artifacts for use in new applications, and managing the evolution of the SFMS software and data. Finally, we propose to develop metrics that measure how effectively SFMS improves information understanding and decision quality for users.
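The multisensory-ontology concept above can be illustrated as a small semantic graph in which artifacts of different sensory modalities (visual, video, voice, photo, touch) are nodes linked by named relations. The sketch below is purely illustrative; the class names, relation labels, and sample artifacts are assumptions, not part of the proposed SFMS design.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: artifacts of different sensory modalities
# linked by named semantic relations. Names are hypothetical.

@dataclass(frozen=True)
class Artifact:
    artifact_id: str
    modality: str   # e.g. "visual", "video", "voice", "photo", "touch"
    label: str

@dataclass
class MultisensoryOntology:
    artifacts: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)  # (source, relation, target)

    def add(self, a: Artifact) -> None:
        self.artifacts[a.artifact_id] = a

    def relate(self, src: str, relation: str, dst: str) -> None:
        self.relations.append((src, relation, dst))

    def related(self, artifact_id: str) -> list:
        # Return (relation, artifact) pairs linked to artifact_id,
        # regardless of modality or direction.
        out = []
        for s, r, d in self.relations:
            if s == artifact_id:
                out.append((r, self.artifacts[d]))
            elif d == artifact_id:
                out.append((r, self.artifacts[s]))
        return out

ont = MultisensoryOntology()
ont.add(Artifact("map01", "visual", "harbor chart sparkline"))
ont.add(Artifact("vid07", "video", "drone pass over harbor"))
ont.add(Artifact("voc03", "voice", "watch-officer report"))
ont.relate("map01", "depicts_same_area_as", "vid07")
ont.relate("voc03", "describes", "map01")

# Querying one visual artifact surfaces related artifacts from
# other modalities, the core idea behind the immersive environment.
modalities = sorted({a.modality for _, a in ont.related("map01")})
print(modalities)  # ['video', 'voice']
```

A production system would presumably build on a standard ontology stack (e.g. RDF/OWL) rather than an in-memory graph, but the cross-modal linking pattern is the same.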
Small Business Information at Submission:
Modus Operandi, Inc.
709 South Harbor City Blvd., Suite 400, Melbourne, FL 32901
Number of Employees: