
Artificial Intelligence Tools for Autonomous Counter-Countermeasures


OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Trusted AI and Autonomy

 

OBJECTIVE: Develop novel methods to identify and mitigate vulnerabilities in autonomy agents designed to carry out Department of Defense (DoD) missions and develop software tools to automatically assess and identify vulnerabilities prior to deployment in government systems.

 

DESCRIPTION: Rapid advancements in Artificial Intelligence (AI) have resulted in increased development of autonomous agents that perform complex tasks previously requiring human operators. In the academic domain, AI agents have been used to defeat world-class experts in games such as Go and Shogi and, more recently, in multiplayer games such as Quake III, StarCraft II, and Dota 2. The DoD has rapidly adapted these technologies for a variety of tasks, including mission planning, air combat operations, and missile defense. As with any rapidly advancing technology, identifying its weaknesses and vulnerabilities is as important as advancing the technology itself, and such efforts exploit the fragility of the AI models that often underpin these autonomy solutions. However, these efforts typically focus only on perturbations in the input data received by an AI model, not on the autonomy system as a whole.
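The input-perturbation fragility described above can be illustrated with a minimal sketch of a gradient-sign attack (in the style of the fast gradient sign method) against a toy logistic-regression classifier. The model, weights, and inputs here are hypothetical stand-ins chosen for illustration only, not representative of any fielded autonomy system.

```python
import numpy as np

def sigmoid(z):
    """Logistic function mapping a score to a probability."""
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1 under a linear model."""
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. the input.

    For logistic loss, d(loss)/dx = (p - y) * w, so the attack
    reduces to a signed step along the weight vector.
    """
    p = predict(w, b, x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Toy model and input (hypothetical values): the clean input is
# classified correctly, but a small signed perturbation flips the
# model's decision without grossly altering the input.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # true class y = 1
y = 1.0

p_clean = predict(w, b, x)
x_adv = fgsm_perturb(w, b, x, y, eps=0.9)
p_adv = predict(w, b, x_adv)

print(p_clean > 0.5)  # True: clean input classified correctly
print(p_adv < 0.5)    # True: perturbed input flips the decision
```

As the topic notes, attacks of this kind target only the model's input interface; the effort solicited here considers the broader attack surface of the autonomy development, integration, and deployment pipeline.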

 

In this effort, the Navy intends to analyze the entire autonomy development, integration, and deployment process in order to develop methods for identifying strategies to counter opponent autonomous systems, as well as red-teaming methods to mitigate the effectiveness of counter-autonomy techniques that adversaries may develop.

 

Work produced in Phase II may become classified. Note: The prospective contractor(s) must be U.S. owned and operated with no foreign influence as defined by 32 C.F.R. § 2004.20 et seq., National Industrial Security Program Executive Agent and Operating Manual, unless acceptable mitigating procedures can be and have been implemented and approved by the Defense Counterintelligence and Security Agency (DCSA), formerly the Defense Security Service (DSS). The selected contractor must be able to acquire and maintain a Secret-level facility clearance and personnel security clearances. This will allow contractor personnel to perform on advanced phases of this project, as set forth by DCSA and NAVAIR, in order to gain access to classified information pertaining to the national defense of the United States and its allies; this is an inherent requirement. The selected company will be required to safeguard classified material during the advanced phases of this contract in accordance with (IAW) the National Industrial Security Program Operating Manual (NISPOM), which can be found at Title 32, Part 2004.20 of the Code of Federal Regulations.

 

PHASE I: Analyze existing autonomy approaches for relevant air combat missions. Identify potential attack surfaces where counter autonomy could be employed to defeat the autonomy, and determine the risk potential of these vulnerabilities. Using information from all missions and analyses, develop a counter-autonomy ontology and suggested approaches. The Phase I effort will include prototype plans to be developed under Phase II.

 

PHASE II: Extend the research toward Government-provided reference scenarios. Develop and refine prototype algorithms for identifying high-risk vulnerabilities in autonomy agents. Demonstrate the ability to identify multiple types of vulnerabilities in deployable agents. Ensure that the developed prototype can be integrated with Navy systems in Phase III.

 

Work in Phase II may become classified. Please see note in Description section.

 

PHASE III DUAL USE APPLICATIONS: Develop an operational capability for use in the Navy DevSecOps airworthiness framework, including user and design documentation.

 

This research on decomposing the Autonomy and AI software supply chain aims to identify vulnerabilities from a security perspective, offering significant dual-use potential for both the DoD and private sectors. Industries such as telecommunications, transportation, and critical infrastructure can leverage these insights for enhanced cybersecurity measures. The findings will inform improved software development practices, aiding tech companies in creating more secure AI systems. Additionally, sectors handling sensitive data, like finance and healthcare, can benefit from advanced risk management strategies. While the research has broad commercial applications, particularly in AI safety and ethics, the dissemination of sensitive findings will be carefully managed to maintain a balance between public sector innovation and national security. This approach ensures the strategic advantages of the research are preserved while supporting technological advancement in various industries.

 

REFERENCES:

  1. Bookman, L., Clymer, D., Sierchio, J., & Gerken, M. (2022, June). Autonomous system identification and exploitation through observation of behavior. In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications IV (Vol. 12113, pp. 76-85). SPIE. https://doi.org/10.1117/12.2618929
  2. Gupta, A., & Krishnamurthy, V. (2022). Principal–Agent Problem as a Principled Approach to Electronic Counter-Countermeasures in Radar. IEEE Transactions on Aerospace and Electronic Systems, 58(4), 3223-3235. https://doi.org/10.1109/TAES.2022.3147739
  3. Maybury, M., & Carlini, J. (2020). Counter autonomy: Executive Summary. Defense Science Board, Washington, D.C. https://apps.dtic.mil/sti/citations/AD1112065
  4. National Industrial Security Program Executive Agent and Operating Manual (NISP), 32 C.F.R. § 2004.20 et seq. (1993). https://www.ecfr.gov/current/title-32/subtitle-B/chapter-XX/part-2004

 

KEYWORDS: Artificial Intelligence; AI; Machine Learning; Counter-countermeasures; Autonomy; Reinforcement Learning
