Securing AI/ML Models Against Adversarial Threats for Advanced Command and Control (AC2) Missions

Description:

OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Integrated Sensing and Cyber; Trusted AI and Autonomy

 

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.

 

OBJECTIVE: The objective of this SBIR Phase II is to examine and develop effective methods for safeguarding AI/ML models from malicious threats. Specifically, the prototype should identify vulnerabilities in AI/ML models, such as adversarial examples, data poisoning, and model extraction attacks, and propose innovative defense mechanisms that can mitigate the impact of these attacks. The research will also investigate the trade-off between the effectiveness of defense mechanisms and the computational resources required for their implementation (see the sketch following the list below). Ultimately, the goal is to improve the security and resilience of AI/ML models, thereby increasing their reliability and trustworthiness for real-world applications. There is an immediate demand for this capability across strategic, operational, and technical guidance and policies mandated by the Secretary of the USAF, as follows:

• Operational Imperatives
  o II - Achieving Operationally Optimized Advanced Battle Management Systems (ABMS) / Air Force Joint All-Domain Command & Control (AF JADC2)
  o IV - Achieving Moving Target Engagement at Scale in a Challenging Operational Environment
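To make the effectiveness-versus-compute trade-off concrete, the following is a minimal sketch of adversarial training, one widely studied defense; it is offered as an illustration only, not a method prescribed by this topic. It assumes a PyTorch classifier with inputs scaled to [0, 1], and the function name and epsilon value are illustrative. The extra forward/backward pass used to craft each perturbed batch is exactly the kind of computational cost an offeror would quantify.

```python
# Illustrative sketch only: one adversarial-training step, a common defense
# against adversarial examples. Assumes a PyTorch classifier and inputs in
# [0, 1]; `epsilon` is an arbitrary perturbation budget. The extra
# forward/backward pass roughly doubles per-step compute, illustrating the
# defense-effectiveness versus resource trade-off named above.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    # Extra pass: craft perturbed inputs with a one-step gradient attack.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Standard pass: update the model on the perturbed batch instead.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```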

 

DESCRIPTION: The significance of artificial intelligence (AI) and machine learning (ML) has grown across military applications. However, the susceptibility of AI/ML models to adversarial attacks has raised concerns about the security and reliability of these models in real-world command and control (C2) applications. Adversarial attacks involve deliberate attempts to manipulate or deceive an AI/ML model by introducing carefully crafted inputs that cause the model to misclassify or produce incorrect outputs [1]. Such attacks can have severe consequences in safety-critical applications like autonomous agent route planning or medical diagnosis, and they can also result in privacy violations and data breaches. The most prevalent form of adversarial attack is the generation of adversarial examples: inputs slightly altered from legitimate inputs that nonetheless cause the model to produce incorrect outputs. Adversarial examples can be created using various techniques, such as gradient-based methods or evolutionary algorithms, and they can be challenging to detect and defend against. Other types of adversarial attacks include data poisoning, where an attacker injects malicious data into the training dataset to bias the model toward a specific outcome, and model extraction, where an attacker attempts to steal the model's architecture or parameters to replicate or enhance the model. Consequently, the development of effective techniques to secure AI/ML models against adversarial attacks has become imperative for operational performance within the USAF. Therefore, this topic seeks innovative prototypes that engage and deter cyber threats against AI/ML models, which will be incorporated into the Air Force's core operational mission.
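As a concrete illustration of the gradient-based methods mentioned above, below is a minimal sketch of a fast-gradient-sign-style attack. It assumes a PyTorch classifier with image inputs scaled to [0, 1]; the function name, model, and epsilon budget are illustrative assumptions, not part of this topic.

```python
# Minimal sketch of a gradient-based adversarial example (FGSM-style).
# Assumes a PyTorch classifier and inputs scaled to [0, 1]; `epsilon`
# is an illustrative perturbation budget.
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model: torch.nn.Module,
                             x: torch.Tensor,
                             y: torch.Tensor,
                             epsilon: float = 0.03) -> torch.Tensor:
    """Return a slightly perturbed copy of x intended to be misclassified."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        # Step each input feature by epsilon in the direction that
        # increases the model's loss, then clip back to the valid range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```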

 

PHASE I: As this is a Direct-to-Phase-II (D2P2) topic, no Phase I awards will be made as a result of this topic. To qualify for this D2P2 topic, the Government expects the Offeror to demonstrate feasibility by means of a prior “Phase I-type” effort that does not constitute work undertaken as part of a prior SBIR/STTR funding agreement. At a minimum, the scope of the Phase I-type feasibility study should include research on identifying vulnerabilities in AI/ML models, such as adversarial examples, data poisoning, and model extraction attacks, and on innovative defense mechanisms that can mitigate the impact of these attacks, as the basis for qualification for this Phase II proposal.

 

PHASE II: Proposals should include development, installation, integration, demonstration and/or test and evaluation of the proposed solution prototype system. This demonstration should evaluate the proposed solution against the proposed objectives; describe how the solution will fulfill the AF’s requirements; identify the technology’s transition path; specify the technology’s integration; and describe the technology’s sustainability. Phase II awards are intended to provide a path to commercialization, not the final step for the proposed solution.

 

PHASE III DUAL USE APPLICATIONS: Phase III efforts will focus on transitioning the developed technology to a working commercial or warfighter solution. If a viable business model for the developed solution is demonstrated, the offeror or identified transition partners would be in a position to supply future processes to the Air Force and other DoD components as this new technology is adopted.

 

REFERENCES:

  1. Ibitoye, Olakunle, et al. "The Threat of Adversarial Attacks on Machine Learning in Network Security--A Survey." arXiv preprint arXiv:1911.02621 (2019).
  2. Song, Liwei, Reza Shokri, and Prateek Mittal. "Privacy Risks of Securing Machine Learning Models Against Adversarial Examples." Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019.
  3. Apruzzese, Giovanni, et al. "Addressing Adversarial Attacks Against Security Systems Based on Machine Learning." 2019 11th International Conference on Cyber Conflict (CyCon), Vol. 900, IEEE, 2019.

 

KEYWORDS: adversarial threats; data poisoning; model extraction attacks; adversarial attacks
