Description:
TECH FOCUS AREAS: Cybersecurity; Artificial Intelligence/Machine Learning
TECHNOLOGY AREAS: Information Systems
OBJECTIVE: The objective of this topic is to explore the design of tools, techniques and methods necessary to support the development of trusted AI systems at speed and scale within modern development pipelines.
DESCRIPTION: A key element of trust in AI is our ability to impart and assess the classical security attributes of developed AI systems. Confidentiality, integrity, and availability remain key concerns as we move toward increased use of AI, especially as such systems are granted the levels of autonomy needed for the speed and scale of action that future missions require. This need for assurance in our AI-based software must be weighed against the benefits (and risks) of agile, rapid development pipelines, such as continuous integration / continuous delivery (CI/CD) and integrated Development-Security-Operations (DevSecOps) approaches to development. Such pipelines often rely on externally supplied security measures, which may contain systemic failures and lack engineering rigor. Reconciling these competing concerns requires the development of comprehensive security engineering approaches that effectively reduce the introduction and exploitation of vulnerabilities in modern AI and AI-enabled developments.
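As a concrete illustration of building such assurance into the pipeline itself, the sketch below shows a minimal artifact-integrity gate that could run as a CI/CD stage before a trained model is promoted. The file paths, manifest layout, and promotion workflow are illustrative assumptions, not a prescribed design.

```python
# Minimal sketch of a DevSecOps pipeline gate: verify that a trained model
# artifact matches the hash recorded in an approved manifest before it is
# promoted to the next stage. Paths and manifest layout are illustrative.
import hashlib
import json
import sys
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def gate(model_path: str, manifest_path: str) -> int:
    """Return 0 (pass) if the artifact hash matches the manifest, else 1 (fail)."""
    manifest = json.loads(Path(manifest_path).read_text())
    expected = manifest.get(Path(model_path).name)
    actual = sha256_of(Path(model_path))
    if expected != actual:
        print(f"FAIL: {model_path} hash {actual} not in approved manifest")
        return 1
    print(f"PASS: {model_path} matches approved manifest entry")
    return 0

if __name__ == "__main__":
    # e.g., python model_gate.py model.onnx approved_models.json
    sys.exit(gate(sys.argv[1], sys.argv[2]))
```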
This is a fundamental requirement of trusted AI, as weaknesses in these classical security attributes underlie the effectiveness of numerous AI/ML attack paths identified in the literature (e.g., by the Berryville Institute of Machine Learning). Technologies developed under this topic should focus on enabling trust through the systematic identification and mitigation of vulnerabilities that might arise in AI system design, development, and test. The key aspects addressed by this topic follow the security engineering approach outlined by NIST in SP 800-160 as applied to machine learning and AI-based systems:
1. Technologies and methods that address the “problem-definition” context of AI system security. This would include the ability to model, quantify, and analyze threats and risks to AI systems; define and analyze AI-driven security requirements; and inform the efficient application of solution-focused technologies and processes (a notional risk-modeling sketch follows this list).
2. Technologies and methods focused on the “solution-definition” context of secure AI systems. This could include capabilities focused on the secure development of AI systems, the application/specialization of formal and semi-formal analysis in the design and architecture of AI systems, and the static and dynamic analysis of implementation artifacts (code and associated models/data) for vulnerabilities (an artifact-scanning sketch follows this list).
3. Technologies and methods that enable the “trustworthiness-assessment” context of AI systems, focused on testing and evaluating systems against the stated solution- and problem-context goals and approaches (did I build the right system, and did I build it right?). This could include risk-driven test and evaluation of AI capabilities at the unit, component, and sub-system level; capabilities to perform vulnerability assessment on AI-based systems and data; and the means to measure the resulting AI system’s security properties (a robustness-evaluation sketch follows this list).
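For the problem-definition context (item 1), the following is a minimal, notional sketch of how threats and risks to an AI system might be modeled and ranked. The attack-path categories echo the BIML architectural risk analysis cited in the references; the scoring scales, assets, and numeric values are illustrative assumptions only.

```python
# Illustrative threat/risk model for an AI system: each entry pairs an
# attack path (categories follow the BIML architectural risk analysis)
# with likelihood and impact scores so risks can be ranked and traced
# to derived security requirements. Scales and entries are notional.
from dataclasses import dataclass

@dataclass
class AIRisk:
    attack_path: str      # e.g., a BIML-style attack category
    asset: str            # training data, model parameters, inference API, ...
    likelihood: int       # 1 (rare) .. 5 (almost certain)
    impact: int           # 1 (negligible) .. 5 (mission failure)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

risks = [
    AIRisk("data poisoning", "training data pipeline", likelihood=3, impact=5),
    AIRisk("adversarial examples", "deployed inference API", likelihood=4, impact=4),
    AIRisk("model extraction", "deployed inference API", likelihood=2, impact=3),
]

# Rank risks to prioritize derived security requirements and mitigations.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.attack_path:<22} -> {r.asset}")
```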
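For the solution-definition context (item 2), one narrow example of static analysis of implementation artifacts is scanning a serialized (pickled) model for opcodes that execute code at load time, the general approach taken by tools such as picklescan and fickling. The sketch below assumes a pickle-format artifact and a hypothetical file path; it is a minimal illustration, not a substitute for a full analysis capability.

```python
# Minimal static check of a pickled model artifact: flag opcodes that can
# cause arbitrary code execution at load time. Real pipelines would prefer
# non-executable formats (e.g., ONNX, safetensors) where possible.
import pickletools
import sys

SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan(path: str) -> list:
    """Return a list of findings describing potentially unsafe opcodes."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS:
                findings.append(f"offset {pos}: {opcode.name} {arg!r}")
    return findings

if __name__ == "__main__":
    hits = scan(sys.argv[1])  # e.g., python scan_pickle.py model.pkl
    if hits:
        print("Potentially unsafe pickle opcodes found:")
        print("\n".join(hits))
        sys.exit(1)
    print("No load-time code-execution opcodes found.")
```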
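For the trustworthiness-assessment context (item 3), the sketch below illustrates a unit-level, risk-driven robustness test: a toy NumPy classifier is evaluated under an FGSM-style input perturbation and compared against a notional accuracy threshold. The model, data, perturbation budget, and threshold are all illustrative stand-ins for a program's real components and derived requirements.

```python
# Notional unit-level robustness test: measure how a toy classifier's
# accuracy degrades under an FGSM-style perturbation, then compare against
# a risk-derived threshold. All values here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data (stand-in for mission data).
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a minimal logistic regression by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

def accuracy(X_eval):
    return np.mean((sigmoid(X_eval @ w + b) > 0.5) == y)

# FGSM-style perturbation: step each input in the direction that increases
# the loss; for logistic regression the input gradient is (p - y) * w.
eps = 0.5
p = sigmoid(X @ w + b)
X_adv = X + eps * np.sign(np.outer(p - y, w))

clean_acc, adv_acc = accuracy(X), accuracy(X_adv)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")

# Notional risk-derived acceptance criterion for this perturbation budget.
REQUIRED = 0.60
print("PASS" if adv_acc >= REQUIRED else "FAIL")
```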
Approaches sought as part of this effort may address one or more of these contexts, either as part of an integrated process or as a standalone technology focused on a particular aspect of security engineering for AI-based systems. Candidates will be evaluated based upon their novelty, uniqueness, and specificity to AI-based systems and the unique challenges faced when designing, developing, and fielding assured AI technologies within a mission context. There is no requirement for use of Government materials, equipment, data, or facilities.
PHASE I: Phase I should completely document 1) the AI-driven security requirements the proposed solution addresses; 2) the approach to model, quantify, and analyze the representation, effectiveness, and efficiency of the proposed security engineering solution; and 3) the feasibility of developing or simulating a prototype architecture.
PHASE II: Develop, install, integrate and demonstrate a prototype system determined to be the most feasible solution during the Phase I feasibility study. This demonstration should focus specifically on:
1. Evaluating the proposed solution against the objectives and measurable key results as defined in the Phase I feasibility study.
2. Describing in detail how the solution can be scaled to be adopted widely (i.e., how it can be modified for scale).
3. A clear transition path for the proposed solution that takes into account input from all affected stakeholders including but not limited to: end users, engineering, sustainment, contracting, finance, legal, and cyber security.
4. Specific details about how the solution can integrate with other current and potential future solutions.
5. How the solution can be sustained (i.e., supportability).
6. Clear identification of other specific DoD or government customers who want to use the solution.
PHASE III DUAL USE APPLICATIONS: The need for AI/ML systems that are robust and well designed, minimize vulnerabilities, and exhibit resilience against external attack is shared by government and industry. Developments under this topic will support AI/ML assurance requirements in any agile development (e.g., DevSecOps) pipeline, and outcomes are likely to be incorporated into software development practices that serve both commercial and government needs. The contractor will pursue commercialization of the various technologies developed in Phase II for transitioning expanded mission capability to a broad range of potential government and civilian users and alternate mission applications. Direct access to end users and government customers will be provided, with opportunities to receive Phase III awards for providing the government additional research and development, or for direct procurement of products and services developed in coordination with the program.
PROPOSAL PREPARATION AND EVALUATION: Please follow the Air Force-specific Direct to Phase II instructions under the Department of Defense 21.2 SBIR Broad Agency Announcement when preparing proposals. Proposals under this topic will have a maximum value of $1,500,000 in SBIR funding and a maximum performance period of 18 months, comprising 15 months of technical performance and three months for reporting.
After proposal receipt, an initial evaluation will be conducted IAW the criteria in the DoD 21.2 SBIR BAA, Sections 6.0 and 7.4. Based on the results of that evaluation, selectable companies will be provided an opportunity to participate in the Air Force Trusted AI Pitch Day, tentatively scheduled for 26-30 July 2021 (possibly virtual). Companies’ pitches will be evaluated using the initial proposal evaluation criteria. Selectees will be notified after the event via email. Companies must participate in the pitch event to be considered for award.
REFERENCES:
1. McGraw, G., et al.: An Architectural Risk Analysis of Machine Learning Systems: Toward More Secure Machine Learning. Berryville Institute of Machine Learning (BIML), 2019.
2. McGraw, G., et al.: Security Engineering for Machine Learning. IEEE Computer, vol. 52, no. 8, pp. 54-57, 2019.
3. Ross, R., McEvilley, M., and Carrier Oren, J.: NIST SP 800-160, Systems Security Engineering: Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems. November 2016.