
Benchmarking Simulations for Missile Defense System Analysis


OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Trusted AI and Autonomy; Advanced Computing and Software


The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.


OBJECTIVE: Develop methods to evaluate how closely model performance matches benchmark data, and methods to calibrate models so that their performance better agrees with the benchmarks.


DESCRIPTION: The Missile Defense Agency (MDA) has high-fidelity digital simulation models of the Missile Defense System (MDS) that produce very accurate results. This fidelity is achieved by modeling the elements of the MDS with a high degree of realism, including physics-level modeling and wrapped tactical code. That accuracy, however, comes at high computational expense: analysts would like to run far more simulation trials, across different types of analysis, than can actually be executed.


Lower-fidelity models of the MDS run much faster but sacrifice some of the fidelity and realism of the higher-fidelity models. Given benchmark data from physical tests or from high-fidelity models, this topic seeks both metrics and measures of model performance relative to the benchmark data, and methods to automate model calibration so that performance better aligns with the benchmarks. Standardized approaches to comparing model performance against benchmark data would increase analysts' confidence in, and ability to use, faster-running models for some analysis tasks. Tuning models so that their outputs better match benchmark data would make them useful for more analysis use cases. The ability to use lower-fidelity models with confidence for more MDS analysis use cases would enable more studies to be completed and more rapidly advance the state of the MDS.
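As an illustrative sketch only (the topic does not prescribe specific metrics), comparison measures between model outputs and benchmark data might include paired error statistics and a distributional statistic. The function names and sample data below are hypothetical:

```python
# Hypothetical benchmark-comparison metrics; data values are illustrative only.
import math

def rmse(model, benchmark):
    """Root-mean-square error between paired model and benchmark outputs."""
    return math.sqrt(sum((m - b) ** 2 for m, b in zip(model, benchmark)) / len(model))

def mean_bias(model, benchmark):
    """Average signed offset: positive means the model over-predicts."""
    return sum(m - b for m, b in zip(model, benchmark)) / len(model)

def ks_statistic(model, benchmark):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    xs = sorted(set(model) | set(benchmark))
    def ecdf(sample, x):
        return sum(1 for s in sample if s <= x) / len(sample)
    return max(abs(ecdf(model, x) - ecdf(benchmark, x)) for x in xs)

if __name__ == "__main__":
    benchmark = [1.0, 1.2, 0.9, 1.1, 1.05]   # e.g., high-fidelity results
    model     = [1.1, 1.3, 0.95, 1.2, 1.1]   # e.g., low-fidelity results
    print(rmse(model, benchmark), mean_bias(model, benchmark),
          ks_statistic(model, benchmark))
```

Paired metrics such as RMSE quantify trial-by-trial agreement, while distributional metrics such as the KS statistic capture agreement in aggregate behavior even when trials are not directly paired.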

Model tuning is an optimization problem: distance measures between model outputs and benchmark data are minimized by adjusting the available model parameters. Care must be taken in this tuning process to avoid overfitting and to produce tuned models that remain robust across a variety of MDS scenarios.
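A minimal sketch of this tuning loop, assuming a toy two-parameter low-fidelity model, a stand-in benchmark, and simple random-search minimization (none of which are MDA-specified), with held-out scenarios used as the overfitting guard:

```python
# Toy calibration sketch: minimize a distance measure (RMSE) between a
# low-fidelity model and benchmark data, then check held-out scenarios.
# All models, parameters, and data here are hypothetical stand-ins.
import math
import random

def low_fidelity_model(x, gain, offset):
    """Toy fast-running model with two tunable parameters."""
    return gain * x + offset

def benchmark_model(x):
    """Stand-in for expensive high-fidelity or physical-test data."""
    return 1.7 * x + 0.3

def rmse(params, scenarios):
    """Distance measure between tuned model outputs and benchmark data."""
    gain, offset = params
    return math.sqrt(sum((low_fidelity_model(x, gain, offset) - benchmark_model(x)) ** 2
                         for x in scenarios) / len(scenarios))

def calibrate(train, iterations=2000, seed=0):
    """Random-search minimization of the train-set distance measure."""
    rng = random.Random(seed)
    best_params, best_err = (1.0, 0.0), rmse((1.0, 0.0), train)
    for _ in range(iterations):
        cand = (rng.uniform(0.5, 3.0), rng.uniform(-1.0, 1.0))
        err = rmse(cand, train)
        if err < best_err:
            best_params, best_err = cand, err
    return best_params, best_err

if __name__ == "__main__":
    scenarios = [0.5 * i for i in range(20)]
    train, holdout = scenarios[::2], scenarios[1::2]  # disjoint scenario sets
    params, train_err = calibrate(train)
    # Overfitting guard: error on held-out scenarios should stay comparable.
    print(params, train_err, rmse(params, holdout))
```

In practice the random search would be replaced by a more capable optimizer, but the structure is the same: a distance measure, a parameter search, and a held-out scenario set to verify the tuned model generalizes.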


PHASE I: Research, design, and develop metrics and measures to compare model performance against benchmark data. Research and create proof-of-concept optimization methods that automate the tuning of models to bring their performance into line with benchmark data.


PHASE II: Expand the benchmarking methodology and tuning algorithms to create a full prototype capability. Work with project sponsors to perform a benchmarking study using this new technology with MDS data and models.


PHASE III DUAL USE APPLICATIONS: Scale up the prototype capability, utilizing the new hardware and/or software technologies developed in Phase II, into a mature, fieldable capability. Work with missile defense integrators to integrate the technology into a missile defense system-level testbed for regular analyst use.




REFERENCES:
  1. "A multi-fidelity surrogate-model-assisted evolutionary algorithm for computationally expensive optimization problems." Journal of Computational Science, January 2016.
  2. Palizhati, A.; Torrisi, S.B. "Agents for sequential learning using multiple-fidelity data."


KEYWORDS: benchmarking; missile defense; modeling and simulation; model tuning
