Common Readiness Assessment and Performance Tracking, and Warehousing System for Day-to-Day LVC Training and Operations


OBJECTIVE: Develop on-demand methods for measuring, assessing, formatting, predicting, and tracking readiness, performance, and proficiency data from live aircraft, instrumented ranges, and distributed mission operations simulation environments.

DESCRIPTION: This effort will develop on-demand and predictive methods for assessing combat human performance and readiness within and across training and operational LVC contexts. While a number of commercial formats for human and system performance data are currently available, the integration, routine assessment, prediction, readiness gap analysis, and resource allocation needed to improve readiness and proficiency across these contexts have not yet been accomplished in a way that permits routine, on-demand tagging, tracking, warehousing, and reporting on a consistent basis. Existing standards and capabilities are in operation today, but they are not directly compatible with each other, and it is currently impossible to share data from different environments and systems in order to assess human performance within and across them. The simulation community generally follows the Distributed Interactive Simulation (DIS) protocol or the High Level Architecture (HLA); live ranges use the Test and Training Enabling Architecture (TENA); and exercises use systems such as the Nellis Air Combat Training System (NACTS). Each of these can provide data individually, but there is no integrated or systematic approach to accumulating the data, linking it to performance and readiness models and standards, assessing levels of understanding and knowledge from these data, or using the data to identify what is known and what still needs to be known so that subsequent training and rehearsal events can be identified, scheduled, and executed.
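One way the cross-environment incompatibility described above could be bridged is by mapping source-specific events into a common performance record. The following Python sketch is purely illustrative: the record fields, the `normalize_dis` mapping, and the input dictionary keys are assumptions for demonstration, not fields drawn from the actual DIS, HLA, TENA, or NACTS specifications.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical common performance record; field names are illustrative
# and do not come from any existing DIS/HLA/TENA standard.
@dataclass
class PerformanceRecord:
    source: str          # originating environment, e.g. "DIS", "TENA", "NACTS"
    entity_id: str       # normalized platform or trainee identifier
    timestamp: datetime  # normalized to UTC
    measure: str         # named performance measure
    value: float         # normalized 0.0-1.0 score

def normalize_dis(event: dict) -> PerformanceRecord:
    """Map a simplified, hypothetical DIS-derived event to the common schema."""
    return PerformanceRecord(
        source="DIS",
        entity_id=f"{event['site']}:{event['application']}:{event['entity']}",
        timestamp=datetime.fromtimestamp(event["time_s"], tz=timezone.utc),
        measure=event["measure"],
        value=float(event["score"]),
    )

rec = normalize_dis({"site": 1, "application": 2, "entity": 42,
                     "time_s": 1_700_000_000,
                     "measure": "intercept_timeliness", "score": 0.85})
print(rec.entity_id)  # 1:2:42
```

A per-source normalizer of this kind (one each for DIS, TENA, NACTS, etc.) would let downstream warehousing, tagging, and reporting operate on a single schema regardless of origin.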
Moreover, no capability exists today to routinely assess and track readiness, or to predict future readiness, future proficiency fall-offs, and sequences of initial and refresher training events based on performance at any given time. With the merger of live, virtual, and constructive (LVC) systems, these incompatibilities represent a significant shortfall in our capacity to assess the payoff of integrated and joint training and exercise concepts such as LVC as readiness solutions. Further, we cannot demonstrate the longitudinal impact of these concepts because we cannot routinely evaluate performance and readiness across training, exercise, and test and evaluation contexts over time. To address this shortfall, this effort will develop and demonstrate a method for capturing, storing, modeling, and routinely reporting performance and readiness that is usable within and across live and virtual environments. A major part of this effort will examine existing data formats and structures within and across LVC environments and will develop and demonstrate capabilities to extract common data compatible across environments and systems. Representative data from the variety of potential sources will be provided at effort start-up. The successful effort will develop and demonstrate a comprehensive system that supports the collection, measurement, prediction, tracking, warehousing, and modeling of human performance data from a variety of sources in a common format, permitting the sharing of common data for routine performance evaluation and management. Developing a common methodology and system for combat readiness assessment, tracking, prediction, and reporting represents a unique and critical capability for the development, analysis, and, most importantly, usability of performance indicators from different environments and systems, optimizing LVC for future readiness and combat proficiency.
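To make the idea of predicting proficiency fall-offs and scheduling refresher events concrete, the sketch below uses a simple exponential skill-decay model. This is an illustrative assumption only, not a validated readiness model or any method prescribed by this topic; the half-life parameter and threshold are hypothetical.

```python
import math

def predicted_proficiency(p0: float, days_since_training: float,
                          half_life_days: float = 60.0) -> float:
    """Predict proficiency after a training gap, assuming exponential decay
    with a notional 60-day half-life (illustrative, not validated)."""
    return p0 * 0.5 ** (days_since_training / half_life_days)

def days_until_refresher(p0: float, threshold: float,
                         half_life_days: float = 60.0) -> float:
    """Days until predicted proficiency falls to the readiness threshold,
    i.e. when refresher training should be scheduled."""
    if p0 <= threshold:
        return 0.0  # already at or below the floor: refresh immediately
    return half_life_days * math.log2(p0 / threshold)

# A trainee measured at 0.9 proficiency against a 0.7 readiness floor:
print(round(days_until_refresher(0.9, 0.7), 1))  # 21.8
```

In a fielded system the decay curve would be fit per task and per trainee from the warehoused performance records, rather than assumed; the point here is only that a common, time-stamped data store makes such scheduling computations routine.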
PHASE I: Phase I will identify and integrate exemplar metrics from different data formats and from live ranges, actual aircraft, and simulation-based systems into a proof-of-concept demonstration case, and will provide recommendations for portrayals of data, metrics, and models for predicting acquisition and mastery of combat knowledge in and across possible learning environments.

PHASE II: Phase II will extend and elaborate the Phase I proof of concept to demonstrate common interfaces, extraction tools, data tagging, visualization, and reporting methods for the variety of data sources and formats identified in Phase I. The Phase II capability will also provide a demonstrable capability for data warehousing to permit routine measurement and tracking of performance "objects" across the relevant environments.

PHASE III: Common approaches for data integration and consolidation are a key need across military and civilian contexts. Existing systems are proprietary, limited in data availability and sharing, and context specific. Example use cases include emergency operations centers and air and space operations centers.

REFERENCES:
1. Defense Modeling and Simulation Office homepage:
2. Dwyer, D.J., Fowlkes, J.E., Oser, R.L., Salas, E., & Lane, N.E. (1997). Team performance measurement in distributed environments: The TARGETS methodology. In M.T. Brannick, E. Salas, & C. Prince (Eds.), Team performance assessment and measurement: Theory, methods, and applications. Mahwah, NJ: Lawrence Erlbaum Associates.
3. Serfaty, D., MacMillan, J., Entin, E.B., & Entin, E.E. (1997). The decision-making expertise of battle commanders. In C.E. Zsambok & G. Klein (Eds.), Naturalistic decision making. Mahwah, NJ: Lawrence Erlbaum Associates.