Explainable and Transparent Machine Learning for Autonomous Decision-Making (EXTRA)

Award Information
Agency: Department of Defense
Branch: Air Force
Contract: FA8750-22-C-1004
Agency Tracking Number: F2D-3274
Amount: $999,996.00
Phase: Phase II
Program: SBIR
Solicitation Topic Code: AF212-D004
Solicitation Number: 21.2
Timeline
Solicitation Year: 2021
Award Year: 2022
Award Start Date (Proposal Award Date): 2022-01-12
Award End Date (Contract End Date): 2023-07-10
Small Business Information
20271 Goldenrod Lane Suite 2066
Germantown, MD 20876-1111
United States
DUNS: 967349668
HUBZone Owned: No
Woman Owned: Yes
Socially and Economically Disadvantaged: Yes
Principal Investigator
Genshe Chen
Phone: (301) 515-7261
Email: gchen@intfusiontech.com
Business Contact
Yingli Wu
Phone: (949) 596-0057
Email: yingliwu@intfusiontech.com
Research Institution
N/A
Abstract

This effort aims to develop interpretable and reliable machine learning methods that address the challenge of deriving explanations of autonomous decision-making behavior. In particular, it focuses on the challenges inherent in interactions between humans and intelligent machines, where transparency and trust are essential to successful human-machine teaming. Built on deep reinforcement learning, the effort will address the fact that autonomous decision-making agents affect future states through their current actions, as well as the challenge of reasoning over the long-term human-machine collaborative objectives of the underlying mission. The resulting explainable system enables a better understanding of learning outcomes and can also guide the development of more effective machine learning algorithms. The developed systems can be applied to military scenarios by providing human-interpretable behavior explanations in human-in-the-loop decision processes. They may also be deployed in commercial applications such as autonomous driving or energy management systems, where high-stakes decisions require transparency and traceability. The proposed explainable machine learning techniques can likewise be implemented in heavily regulated areas such as healthcare and financial systems, where stringent interpretability and accountability are required. The objective of this effort is to conduct a feasibility study and validate prototype concepts for future development and integration.

* Information listed above is at the time of submission. *
