Description:
OBJECTIVE: Develop an innovative verification tool to assess the robustness of run time safety systems bounding autonomous and learning algorithms for operation in untrained/unknown environments.

DESCRIPTION: It is understood that autonomous unmanned air, ground, or sea vehicles can encounter a near-infinite decision space that is difficult to capture completely, even in extensive simulation. The response of these vehicles to untrained environments can have unintended consequences that adversely affect safety. This is of particular concern when these vehicles are considered for collaborative manned/unmanned teaming missions. For such systems, a run time verification engine may be developed to ensure the safety of human life by constraining the output of the autonomous algorithm to guarantee actions are correct, interpretable, and recoverable. The autonomous algorithm combined with the failsafe mechanism is intended to improve the robustness of autonomous systems to unknown environments and unexpected events. However, if such a failsafe/recovery mechanism existed, what evaluation systems are available to test its viability and robustness? The intent of this solicitation is to develop a verification method to examine the robustness of a run time safety algorithm. The first objective is to examine the techniques presented in [1][2] and apply them to an autonomous unmanned vehicle model that includes a learning trajectory generation algorithm. The techniques in [1] present a method to protect the behavior of an adaptive/learning function. The techniques in [2][3] present methods to analyze the robustness of the implemented safety algorithm. Because autonomous systems depend on historical state data, current simulation environments require extensive run times to reach a potential unintended operating region. Additionally, as a greater quantity of information is fused and used by the autonomous algorithm to make decisions, gradual, unintended data streams may induce state conditions that cause an unsafe or unpredictable response. A key capability must be to rapidly re-stimulate the system to an untrained, unintended, or erroneous operating state in order to assess the robustness of the run time safety algorithm. The verification algorithm must implement:
- A method to introduce specific logical or run time operating states that induce an algorithm failure.
- A mechanism for recording system states and initializing the system to specific states.
- An interface control description that emulates real-world sensor outputs to be provided to the system under test.
- The generation of a robustness measure around an operating region [2][3].

PHASE I: The objective of the Phase I effort is to examine the robustness techniques presented in [2][3] and apply them to a run time protected [1] autonomous unmanned vehicle model to ensure the safety of human life when cooperating with manned vehicles. It is expected that, at proposal, the offeror will have a Matlab (or similar) model of a representative autonomous unmanned vehicle, including a learning algorithm that decides new trajectories based on initial goals and changing environmental conditions. The Phase I effort shall include the development of a notional verification tool, applying the techniques in [1][2], that includes a run time verification engine allowing the learning system to traverse only within a pre-defined safety boundary and that analyzes the robustness of the algorithm response.
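For illustration only, a minimal Python sketch of such a run time safety boundary and robustness margin is given below, assuming a simple altitude/speed envelope. The class names, numeric limits, and margin definition are assumptions chosen for this sketch and are not drawn from [1][2][3] or any required design.

```python
# Notional sketch only (not the method of [1]-[3]): a run time safety filter
# that clamps a learned command into a pre-defined envelope and reports a
# signed robustness margin with respect to that envelope.
from dataclasses import dataclass

@dataclass
class SafetyBoundary:
    """Assumed pre-defined operating envelope for an unmanned air vehicle."""
    min_altitude_m: float = 50.0
    max_altitude_m: float = 1500.0
    max_speed_mps: float = 60.0

class RunTimeSafetyFilter:
    def __init__(self, boundary: SafetyBoundary):
        self.boundary = boundary

    def filter(self, altitude_cmd_m: float, speed_cmd_mps: float):
        b = self.boundary
        # Constrain the learning algorithm's output to the safe envelope.
        safe_altitude = min(max(altitude_cmd_m, b.min_altitude_m), b.max_altitude_m)
        safe_speed = min(max(speed_cmd_mps, 0.0), b.max_speed_mps)
        # Signed margin: positive if the original command was inside the
        # envelope, negative if the failsafe had to intervene.
        margin = min(
            altitude_cmd_m - b.min_altitude_m,
            b.max_altitude_m - altitude_cmd_m,
            b.max_speed_mps - speed_cmd_mps,
        )
        return safe_altitude, safe_speed, margin

# Example: a learned trajectory point that violates the altitude floor.
filt = RunTimeSafetyFilter(SafetyBoundary())
print(filt.filter(altitude_cmd_m=20.0, speed_cmd_mps=45.0))  # (50.0, 45.0, -30.0)
```

In this sketch a negative margin indicates that the learned command left the pre-defined safety boundary and the run time engine had to override it; the verification tool would search for and quantify such excursions.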
The Phase I deliverables should include:
- A proof-of-concept Matlab (or similar) implementation of a run time verification robustness analysis tool, demonstrating the feasibility and scalability of the approach.
- A Phase I final report that includes a full conceptual system design for the modeling and analysis tool and an implementation plan for the follow-on tool development.

PHASE II: The Phase II effort shall implement and extend the verification method. The tool will enable modeling of learning-based unmanned systems and will generate test cases based upon user-specified failure modes. These test cases will then be run automatically against the autonomy using the previously implemented communication interface and initial condition injection (a notional sketch of such a harness is shown after PHASE III below). The company must provide a robustness quantification metric that is updated as the model is subjected to new and unexpected behaviors, according to the given requirements and safety specifications. The Phase II prototype software should show proof of concept by applying the tool to a particular, relevant DoD use case. Of particular interest are use cases that can be generalized across domains (e.g., underwater vehicles) or that include heterogeneous autonomous systems (e.g., air/ground/sea coordination).

PHASE III: DUAL USE COMMERCIALIZATION: Improved trust in robust autonomous systems will enable the use of complex learning algorithms in safety-critical applications. The capability can be used in future government programs, including applications in the Departments of Defense, Transportation, and Energy that require robustness guarantees for adaptive/learning autonomous vehicles operating in congested and safety-critical environments. The capability can also be used in future commercial programs, enabling the certification and transition of applications such as self-driving cars and autonomous cargo aircraft.
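As a notional illustration of the Phase II capability (failure-mode-driven test case generation, initialization to recorded states, an emulated sensor interface, and an updated robustness metric), a minimal Python sketch is given below. The vehicle model, failure modes, and worst-case-margin metric are hypothetical assumptions for illustration only and do not represent a required architecture or interface.

```python
# Notional sketch only: a hypothetical Phase II-style harness illustrating
# failure-mode-driven test cases, initialization to recorded states, emulated
# sensor outputs, and a worst-case robustness metric. All names are assumed.
import random

class VehicleModel:
    """Stand-in for the offeror's autonomous vehicle and learning algorithm."""
    def __init__(self):
        self.state = {"altitude_m": 500.0, "speed_mps": 30.0}

    def set_state(self, state):
        # Initialize the system under test to a specific recorded state.
        self.state = dict(state)

    def step(self, sensors):
        # One decision cycle of the autonomy driven by emulated sensor data.
        self.state["altitude_m"] += sensors.get("altitude_error_m", 0.0)
        return self.state

def emulate_sensors(failure_mode, rng):
    """Emulated real-world sensor outputs for a user-specified failure mode."""
    if failure_mode == "gradual_altimeter_drift":
        return {"altitude_error_m": rng.uniform(-5.0, -1.0)}
    if failure_mode == "sensor_dropout":
        return {}  # no update reaches the autonomy this cycle
    return {"altitude_error_m": rng.gauss(0.0, 0.5)}

def run_campaign(failure_modes, steps=200, floor_m=50.0, seed=0):
    """Drive the model toward off-nominal states and track the worst margin."""
    rng = random.Random(seed)
    worst_margin = float("inf")
    for mode in failure_modes:
        vehicle = VehicleModel()
        vehicle.set_state({"altitude_m": 500.0, "speed_mps": 30.0})
        for _ in range(steps):
            state = vehicle.step(emulate_sensors(mode, rng))
            worst_margin = min(worst_margin, state["altitude_m"] - floor_m)
    return worst_margin  # negative means an unsafe state was reached

print(run_campaign(["gradual_altimeter_drift", "sensor_dropout"]))
```

In a delivered tool, the stand-in model would be replaced by the offeror's protected autonomy model, and the worst-case margin would be replaced by the robustness quantification metric derived from the given requirements and safety specifications.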