
TRUST: Testing Robustness of UAS Safety Technology

Award Information
Agency: Department of Defense
Branch: Office of the Secretary of Defense
Contract: FA8650-15-C-2629
Agency Tracking Number: O2-1550
Amount: $981,324.00
Phase: Phase II
Program: SBIR
Solicitation Topic Code: OSD13-HS2
Solicitation Number: 2013.3
Solicitation Year: 2013
Award Year: 2015
Award Start Date (Proposal Award Date): 2015-08-13
Award End Date (Contract End Date): 2017-11-14
Small Business Information
3600 Green Court, Suite 600, Ann Arbor, MI, 48105
DUNS: 000000000
HUBZone Owned: N
Woman Owned: N
Socially and Economically Disadvantaged: N
Principal Investigator
 John Sauter
 (734) 887-7642
Business Contact
 Andrew Dallas
Phone: (734) 887-7603
Research Institution

Abstract
It is understood that autonomous unmanned air, ground, or sea vehicles can incur a near-infinite decision space that is difficult to capture completely even in extensive simulation. The response of these vehicles to untrained environments can have unintended consequences that adversely affect safety. This is of particular concern when these vehicles are considered for collaborative manned/unmanned teaming missions. For such systems, a run-time verification engine may be developed to ensure the safety of human life by constraining the output of the autonomous algorithm so that actions are guaranteed to be correct, interpretable, and recoverable. The autonomous algorithm, combined with this failsafe mechanism, is intended to improve the robustness of autonomous systems to unknown environments and unexpected events. However, if such a failsafe/recovery mechanism existed, what evaluation systems are available to test its viability and robustness?

The intent of this solicitation is to develop a verification method to examine the robustness of a run-time safety algorithm. The first objective is to examine the techniques presented in [1][2] and apply them to an autonomous unmanned vehicle model that includes a learning trajectory-generation algorithm. The techniques in [1] present a method to protect the behavior of an adaptive/learning function. The techniques in [2][3] present methods to analyze the robustness of the implemented safety algorithm. Because autonomous systems depend on historical state data, current simulation environments require extensive run times to reach a potential unintended operating region. Additionally, as a greater quantity of information is fused and used by the autonomous algorithm to make decisions, gradual, unintended data streams may induce state conditions that cause an unsafe or unpredictable response.
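The run-time verification engine described above can be sketched as a safety filter that intercepts each command from the learning algorithm and clamps any value outside a verified safe envelope. This is a minimal illustration only; the class names, fields, and envelope bounds below are assumptions for the sketch and are not drawn from the award.

```python
from dataclasses import dataclass

@dataclass
class Command:
    """A candidate control output from the autonomous algorithm (hypothetical fields)."""
    altitude_m: float
    airspeed_mps: float

@dataclass
class SafeEnvelope:
    """Verified operating bounds; the numbers are illustrative assumptions."""
    min_alt_m: float = 100.0
    max_alt_m: float = 5000.0
    max_speed_mps: float = 60.0

def runtime_verify(cmd: Command, env: SafeEnvelope) -> Command:
    """Constrain the learning algorithm's output so the executed action
    stays inside the safe envelope: out-of-bounds values are clamped to
    the nearest safe value rather than passed through."""
    alt = min(max(cmd.altitude_m, env.min_alt_m), env.max_alt_m)
    spd = min(cmd.airspeed_mps, env.max_speed_mps)
    return Command(altitude_m=alt, airspeed_mps=spd)

# A command that exceeds the envelope is clamped before execution.
unsafe = Command(altitude_m=6000.0, airspeed_mps=75.0)
safe = runtime_verify(unsafe, SafeEnvelope())
```

The learning controller remains free to propose any action; only the filter's output reaches the vehicle, which is what makes its behavior interpretable and recoverable even when the learned policy enters an untrained region.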
A key capability must be to rapidly re-stimulate the system to an untrained, unintended, or erroneous operating state in order to assess the robustness of the run-time safety algorithm. The verification algorithm must implement:
- A method to introduce specific logical or run-time operating states that induce an algorithm failure.
- A mechanism for recording and initializing systems to specific states.
- An interface control description that emulates real-world sensor outputs to be provided to the system under test.
- The generation of a robustness measure around an operating region [2][3].
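The second capability listed, recording and re-initializing the system to a specific state so a failure region can be re-stimulated without a long simulation run, might look like the following snapshot/restore harness. All names here are hypothetical; the state dictionary stands in for whatever the real simulation environment exposes.

```python
import copy

class StateRecorder:
    """Records simulation states so the system under test can be
    re-initialized directly to an untrained or erroneous operating
    state, avoiding extensive run times to reach it again."""

    def __init__(self):
        self._snapshots = {}

    def record(self, label, state):
        # Deep-copy so later simulation steps cannot mutate the snapshot.
        self._snapshots[label] = copy.deepcopy(state)

    def restore(self, label):
        # Return an independent copy so each trial starts from a clean state.
        return copy.deepcopy(self._snapshots[label])

# Example: capture the state just before an observed failure, then
# restart repeated robustness trials from that exact point.
recorder = StateRecorder()
sim_state = {"position": [120.0, 45.0], "fused_tracks": [1, 2, 3]}
recorder.record("pre_failure", sim_state)
sim_state["fused_tracks"].append(4)            # simulation continues...
trial_state = recorder.restore("pre_failure")  # back to the recorded state
```

Sampling many perturbed restarts around such a recorded state is one plausible way to generate the robustness measure around an operating region that the last bullet calls for.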

* Information listed above is at the time of submission. *
