Verification of Intelligent Autonomous Systems Containing Artificial Intelligence Components

Description:

OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Trusted AI and Autonomy

OBJECTIVE: Develop a user-friendly toolbox of methods for verifying neural network and computer vision elements, both at the individual component property level and at the system level for closed-loop intelligent autonomous systems. Methods should be automated and scalable to the greatest extent possible and should assist the user in applying them to complex autonomous systems in challenging naval environments.

DESCRIPTION: Neural networks and related computer vision methods are being considered for a wide range of future autonomous naval systems, including roles with some degree of safety, time, or mission criticality. A safety-critical example is the use of computer vision as the perceptual mechanism for aerial probe-and-drogue refueling: a tanker aircraft extends a fuel hose with a stabilizing drogue that trails behind the aircraft, and an intelligent autonomous air system must maneuver in close proximity to the tanker to safely attach a refueling probe to a coupler mechanism on the end of the hose, then maintain a safe relative position and state while making the transfer. A mission-critical example is an autonomous undersea system searching an area using sidescan sonar imagery and a color camera, applying neural-network-based methods to identify the ocean bottom type, detect and classify objects of interest on the sea bottom, and determine their pose. A time-critical example is autonomous navigation and semantic mapping of dynamic coastal areas within a specific time window, using low-cost, expendable autonomous systems without GPS. In this case, neural network and computer vision methods could be used in perception and reasoning, and in developing effective policies for discovering and identifying mission-relevant and area-access-relevant relationships between geometry, mapping, objects, and entities.
The last five years have seen significant advances in methods for verifying the correctness of neural networks and computer vision more broadly, at both the individual component and system levels (e.g., Marabou, Verisig, Fast and Complete, the ETH Robustness Analyzer for Neural Networks (ERAN), Neurify, runtime shielding, and other reachability-based methods). These methods provide alternatives such as: (1) transforming or approximating the artificial intelligence (AI) element into a symbolic abstraction that lends itself better to verification, guaranteeing properties or proving the absence of adversarial examples; (2) making verification part of the learning process, with the goal of directing the learning toward a set of desired properties that can be quantified during learning; or (3) ensuring the desired global properties at the system level, such as with runtime shielding, reward shaping, watchdog monitoring, or barrier certificates, to protect against outputs that would lead the system into an unsafe or otherwise undesirable state. The methods individually face different challenges in terms of: (1) their ability to express in their models, or scale to, realistically sized and scoped naval problems; (2) the extent to which they can currently be automated sufficiently for routine use in practice (i.e., without requiring the user to hand-tailor proofs for each new application); and (3) the degree to which they might overprescribe or constrain the design or implementation in overly conservative ways. Thus, there is a strong need for user-friendly software toolboxes that make these methods more broadly accessible to the wide range of practicing engineers and computer scientists working on different naval intelligent autonomous systems problems.
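As a concrete illustration of the first family of approaches (abstracting the network into a form amenable to verification), interval bound propagation pushes an input box through the network with sound interval arithmetic, yielding guaranteed output bounds that can certify a property over the entire box. The sketch below is a minimal, hypothetical example: the toy 2-2-1 ReLU network and the output threshold are chosen purely for illustration and are not drawn from any of the cited tools.

```python
import numpy as np

def ibp_forward(layers, lo, hi):
    """Propagate the input box [lo, hi] through a feed-forward ReLU network.

    layers: list of (W, b) weight/bias pairs, with a ReLU after every
    layer except the last. Returns sound elementwise output bounds.
    """
    for i, (W, b) in enumerate(layers):
        # Split weights by sign so each bound uses the correct interval end.
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        if i < len(layers) - 1:  # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
        lo, hi = new_lo, new_hi
    return lo, hi

# Toy 2-2-1 network: certify that the output never exceeds 2.0
# for any input in the box [0, 1] x [0, 1].
layers = [
    (np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([0.0, -0.25])),
    (np.array([[1.0, 1.0]]), np.array([0.0])),
]
lo, hi = ibp_forward(layers, np.array([0.0, 0.0]), np.array([1.0, 1.0]))
print(hi[0] <= 2.0)  # prints True: property certified over the whole box
```

Because the bounds are an over-approximation, a failed check does not prove a violation; practical tools such as those cited above tighten the abstraction (e.g., with linear relaxations or branch-and-bound) before concluding a property is false.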
PHASE I: Determine a planned set of methods and their functionalities for the toolbox, and develop an initial version with a limited set of methods offering sufficient functionality to demonstrate feasibility and allow limited experimentation and demonstration. Experiments with methods may use low-fidelity simulation elements to show their value on particular use cases. Simulation may include limited-complexity environmental, vehicle, sensor, and communications models, depending on what would be most suitable for examining the particular approach. Develop metrics to evaluate the system in Phase II.

PHASE II: Further refine the toolbox design and develop a version with a broad set of methods that extends to a greater range of autonomous control algorithms, missions, environmental situations, and system types in a more complex, dynamic, and unstructured environment. Experiment on naval use cases with a medium-fidelity simulation and sufficient autonomy components to conduct and report on experiments and comparisons with benchmarks. If feasible, experiments may also be conducted with inexpensive unmanned vehicles or other hardware. Experiments should include a focus on determining the sensitivity of the tool to a variety of factors. Revise the evaluation metrics as necessary.

PHASE III DUAL USE APPLICATIONS: Develop a more user-friendly version of the toolbox with expanded functionality and sufficient support to be usable by a broad range of engineers and computer scientists in support of areas such as Unmanned Air Systems (UAS), Autonomous Undersea Vehicles (AUV), Unmanned Sea Surface Vehicles (USSV), autonomous cars, and ground and industrial robotics.

REFERENCES:
1. Ivanov, R., Weimer, J., Alur, R., Pappas, G. J., & Lee, I. (2019, April). Verisig: Verifying safety properties of hybrid systems with neural network controllers. In Proceedings of the 22nd ACM International Conference on Hybrid Systems: Computation and Control (pp. 169-178).
2. Katz, G., Huang, D. A., Ibeling, D., Julian, K., Lazarus, C., Lim, R., & Barrett, C. (2019, July). The Marabou framework for verification and analysis of deep neural networks. In International Conference on Computer Aided Verification (pp. 443-452). Springer, Cham.
3. Singh, G., Ganvir, R., Püschel, M., & Vechev, M. (2019). Beyond the single neuron convex barrier for neural network certification. Advances in Neural Information Processing Systems, 32.
4. Xu, K., Zhang, H., Wang, S., Wang, Y., Jana, S., Lin, X., & Hsieh, C. J. (2020). Fast and complete: Enabling complete neural network verification with rapid and massively parallel incomplete verifiers. arXiv preprint arXiv:2011.13824.
5. Wang, S., Pei, K., Whitehouse, J., Yang, J., & Jana, S. (2018). Efficient formal safety analysis of neural networks. Advances in Neural Information Processing Systems, 31.
6. Alshiekh, M., Bloem, R., Ehlers, R., Könighofer, B., Niekum, S., & Topcu, U. (2018, April). Safe reinforcement learning via shielding. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 32, No. 1).
7. Huang, X., Kroening, D., Ruan, W., Sharp, J., Sun, Y., Thamo, E., ... & Yi, X. (2020). A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Computer Science Review, 37, 100270.
8. Zhang, J., & Li, J. (2020). Testing and verification of neural-network-based safety-critical control software: A systematic literature review. Information and Software Technology, 123, 106296.
9. Tran, H. D., Yang, X., Manzanas Lopez, D., Musau, P., Nguyen, L. V., Xiang, W., ... & Johnson, T. T. (2020, July). NNV: The neural network verification tool for deep neural networks and learning-enabled cyber-physical systems. In International Conference on Computer Aided Verification (pp. 3-17). Springer, Cham.
10. Bak, S., Tran, H. D., Hobbs, K., & Johnson, T. T. (2020, July). Improved geometric path enumeration for verifying ReLU neural networks. In International Conference on Computer Aided Verification (pp. 66-96). Springer, Cham.
11. Bastani, O., Pu, Y., & Solar-Lezama, A. (2018). Verifiable reinforcement learning via policy extraction. Advances in Neural Information Processing Systems, 31.
12. Herbert, S. L., Chen, M., Han, S., Bansal, S., Fisac, J. F., & Tomlin, C. J. (2017, December). FaSTrack: A modular framework for fast and guaranteed safe motion planning. In 2017 IEEE 56th Annual Conference on Decision and Control (CDC) (pp. 1517-1522). IEEE.
13. Naik, N., & Nuzzo, P. (2020, December). Robustness contracts for scalable verification of neural network-enabled cyber-physical systems. In 2020 18th ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE) (pp. 1-12). IEEE.
14. Pacheck, A., Moarref, S., & Kress-Gazit, H. (2020, May). Finding missing skills for high-level behaviors. In 2020 IEEE International Conference on Robotics and Automation (ICRA) (pp. 10335-10341). IEEE.

KEYWORDS: Neural Networks, Computer Vision, Verification, Autonomous Control, Intelligent Autonomy, Verification and Validation