Verification, Validation, Assurance, and Trust of Machine Learning Models and Data

Award Information
Agency: Department of Defense
Branch: Army
Contract: W15QKN-22-P-0057
Agency Tracking Number: A22B-T002-0068
Amount: $173,000.00
Phase: Phase I
Program: STTR
Solicitation Topic Code: A22B-T002
Solicitation Number: 22.B
Timeline
Solicitation Year: 2022
Award Year: 2022
Award Start Date (Proposal Award Date): 2022-09-26
Award End Date (Contract End Date): 2023-03-31
Small Business Information
OptTek Systems, Inc.
2241 17th Street
Boulder, CO 80302-1111
United States
DUNS: 128005423
HUBZone Owned: No
Woman Owned: No
Socially and Economically Disadvantaged: No
Principal Investigator
Name: Shane Hall
Phone: (303) 447-3255
Email: hall@opttek.com
Business Contact
Name: Benjamin Thengvall
Phone: (303) 447-3255
Email: thengvall@opttek.com
Research Institution
University of Alabama in Huntsville
Contact: Mikel Petty
301 Sparkman Drive
Huntsville, AL 35899
United States
Phone: (256) 824-6140
Type: Nonprofit College or University
Abstract

As the use of machine learning (ML) models proliferates in commercial and defense applications, the United States Army (Army) faces significant challenges in evaluating the effectiveness, robustness, and safety of ML models embedded in armament systems. For the Combat Capabilities Development Command (CCDC) Armaments Center, the decision to enable automated choices in these systems, which encompass lethality capabilities, requires very high confidence that any resultant behaviors will fall within intended operational and mission bounds. Ensuring reliable and safe behavior requires both that accurate and comprehensive test data are used in the creation and training of these ML systems and that the ML models are robust, accurate, and appropriately behaviorally bounded when applied to real data in practice.

ML models come in many forms, and the technologies used to create them are rapidly evolving. The Army needs 1) a process and framework to assess and measure the quality of training data and to identify shortcomings that may lead to poorly trained ML models, and 2) a process and tools for ML model exploration that can establish confidence in model behavior within defined data boundaries and can also identify unintended or poor behavior in ML models where it exists. These processes and tools must be robust and flexible enough to handle various forms of ML models and data.

OptTek Systems, Inc. (OptTek) and its research partner, the University of Alabama in Huntsville (UAH), propose to combine proven approaches and technologies into a set of processes and tools that can be applied to this problem, supported by an experienced team with the background and expertise to achieve these goals. In this Phase I project, the OptTek team (OptTek and UAH) will explore a process for ML model data set validation and ML model behavioral evaluation that will help determine readiness for more formal operational test and evaluation.

* Information listed above is at the time of submission. *