DoD 2014.1 SBIR Solicitation
NOTE: The Solicitations and topics listed on this site are copies from the various SBIR agency solicitations and are not necessarily the latest and most up-to-date. For this reason, you should use the agency link listed below which will take you directly to the appropriate agency server where you can read the official version of this solicitation and download the appropriate forms and rules.
The official link for this solicitation is: http://www.acq.osd.mil/osbp/sbir/solicitations/index.shtml
Application Due Date:
Available Funding Topics
- OSD14.1-AU1: Biometrics for Human-machine Team Feedback in Autonomous Systems
- OSD14.1-AU2: Evaluating the Performance and Progress of Learning-enabled Systems
- OSD14.1-AU3: Evaluating Mixed Human/Robot Team Performance
- OSD14.1-AU4: Safety Testing for Autonomous Systems in Simulation
- OSD14.1-AU5: Distributed Visual Surveillance for Unmanned Ground Vehicles
- OSD14.1-IA1: Obfuscation to Thwart Un-Trusted Hardware
- OSD14.1-IA2: Detecting Malicious Circuits in IP-Core
Biometrics for Human-machine Team Feedback in Autonomous Systems
This topic is supported under the National Robotics Initiative (NRI). OBJECTIVE: Develop and use biometrics that provide feedback about the status of the human-machine team in autonomous systems. DESCRIPTION: Intense workload and short deadlines place a great deal of stress on warfighters applying computer systems to complete their mission. Biometric techniques show promise for detecting variations in human workload, stress, fatigue, and engagement when these systems are in the testing and evaluation stages of development (Bonner & Wilson, 2002; Murai, Okazaki, & Hayashi, 2004; Hockey, Gaillard, & Burov, 2004). Health monitoring systems could use the biometric data collected to make informed decisions about the human operator's condition (Carter, Cheuvront, & Sawka, 2004). Having detected these factors, the software could provide a human impairment profile to better address the human's interaction with the proposed autonomous system. The new sensors must minimize interference with the warfighter's ability to complete the testing sessions or mission; for example, the sensors cannot require excessive apparatus or a lengthy calibration training period. Both psychophysiology and affective computing have explored many avenues of research, including speech, facial expressions, gestures, central nervous system responses, and autonomic nervous system responses (Zeng et al., 2009; Calvo & D'Mello, 2010). Among these, autonomic nervous system (ANS) responses such as cardiorespiratory and electrodermal responses hold a great deal of promise in physiological computing, since they can be measured more cheaply, quickly, and unobtrusively than central nervous system responses. PHASE I: Identify or design sensors that can unobtrusively monitor human operators for human state assessment with a quantifiable impact on task performance. 
Design a sensor system and provide proof-of-concept supporting data on the ability of said design to accurately assess the cognitive state of engineers during test activities. PHASE II: Prototype the designed sensor system. Demonstrate that sensor information improves human operator cognitive state assessment and can lead to improved performance and productivity during test engineering activities. Develop a prototype mobile application to facilitate cognitive state assessment in operational environments. PHASE III: Fully develop cognitive state assessment systems, which have numerous applications relevant to the Department of Defense, especially where fatigue or information overload is responsible for elevated error rates. Industry applications include operation and safety in areas such as transportation, energy, and medicine.
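The ANS-based state assessment described above could be sketched, at its simplest, as a rule over a heart-rate-variability metric. This is an illustrative assumption, not the solicited system: the RMSSD metric, the inter-beat-interval data, and the threshold below are all hypothetical choices for demonstration.

```python
# Illustrative sketch: estimating operator workload from an autonomic
# nervous system signal, here inter-beat intervals (IBIs) in milliseconds.
# The RMSSD metric and the 20 ms threshold are illustrative assumptions.
import math

def rmssd(ibis_ms):
    """Root mean square of successive differences between inter-beat intervals."""
    diffs = [b - a for a, b in zip(ibis_ms, ibis_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def workload_state(ibis_ms, low_hrv_threshold_ms=20.0):
    """Lower heart-rate variability is commonly associated with higher workload."""
    return "high workload" if rmssd(ibis_ms) < low_hrv_threshold_ms else "normal"

relaxed = [850, 910, 870, 930, 880, 920]   # variable IBIs -> higher RMSSD
stressed = [720, 722, 719, 721, 720, 723]  # flat IBIs -> low RMSSD
```

A fielded system would fuse several ANS channels (cardiorespiratory, electrodermal) and calibrate thresholds per operator, but the same detect-then-report structure applies.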
Evaluating the Performance and Progress of Learning-enabled Systems
This topic is supported under the National Robotics Initiative (NRI). OBJECTIVE: Develop a methodology to evaluate and measure the performance and progress of learning-enabled systems. DESCRIPTION: A long-term goal of machine learning is to develop systems that learn complex behaviors with minimal human oversight. However, future systems that incorporate learning strategies will not necessarily have a fixed software state that can be evaluated by the testing community. In some cases, most of the training occurs in the development process using large databases of training examples. Testing may involve a series of challenge scenarios, similar to the DARPA autonomous mobility challenges, designed to examine the performance of the system under test in relevant conditions. Design of the scenarios and performance metrics are open research questions. As autonomous systems are used in increasingly complex scenarios, supervised training during the development phase, by itself, may not be sufficient to allow the system to learn the appropriate behavior. Learning from demonstration uses examples, often supplied by the end user of the system, to train the system. Examples include flying in specific environments, bipedal locomotion on different terrain surfaces, and throwing objects of different sizes or densities. In this case, the tester needs to stand in for the end user and "train" the system before testing it. Test procedures need to evaluate not only the performance of the system in various scenarios, but also the amount of time it takes to train the system and the required level of expertise for the "expert" trainer. Finally, some applications include continuously adapting models that adjust over time to compensate for changes in the environment or mission. Current research is exploring the use of on-line learning in areas such as terrain-adaptive mobility and perception. 
This case presents a particularly challenging evaluation problem: performance in a given scenario is not static; it may improve over time. In this solicitation we seek a methodology that answers the following three questions: a) What is an appropriate testing methodology for learning-enabled systems? This includes testing procedures that apply to systems with supervised learning components, as well as user-trained or continuously adapting systems. b) Are there general testing principles that can be applied to learning-enabled systems regardless of the specific application? c) Can we predict the evolution of a learning-enabled system over time? For adaptive systems, can we predict how much time is required to adapt to a new environment? What are the potential impacts on military autonomous systems? PHASE I: The first phase consists of initial methodology development, metrics, and a set of use cases to evaluate and measure the performance of learning-enabled systems. This methodology must address supervised, re-trained, and continuously adaptive systems. Documentation of the methodology and use cases is required in the final report. PHASE II: Prototype the methodology by using it to examine test cases for each type of learning-enabled system in simulated test environments. The prototypes should address the three questions stated above. Deliverables shall include the prototype system and a final report, which shall contain documentation of all activities in the project, a user's guide, and technical specifications for the prototype system. PHASE III: Fully developed systems that evaluate the performance of learning-enabled systems in either real or simulated scenarios. Potential commercial applications include systems to assess the performance of autonomous driving systems, logistics systems, and autonomous UAV applications such as power line inspection, in which the UAV must adapt its flight parameters to changing wind characteristics. 
Deliverables shall include the methodology, test case scenarios and some general principles that the test and evaluation community can use to develop test procedures for specific systems.
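One concrete metric for question (c) above is time-to-adapt: how many trials an adapting system needs before its performance in a new environment reaches a target level. The sketch below is a hypothetical illustration; the per-trial scores, window size, and threshold are assumptions, not prescribed values.

```python
# Hypothetical metric for evaluating adaptation speed: the first trial at
# which a trailing moving average of performance reaches a target threshold.
# Scores, window size, and threshold are illustrative assumptions.

def trials_to_adapt(scores, threshold=0.9, window=3):
    """Return the 1-based trial index at which the trailing moving average
    of per-trial performance first reaches `threshold`, or None if never."""
    for i in range(window, len(scores) + 1):
        if sum(scores[i - window:i]) / window >= threshold:
            return i
    return None

# Simulated per-trial success rates as a system adapts to a new environment:
scores = [0.40, 0.55, 0.70, 0.85, 0.92, 0.95, 0.96]
```

Running the same metric across repeated deployments in varied environments would give the test community a distribution of adaptation times rather than a single pass/fail verdict.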
Evaluating Mixed Human/Robot Team Performance
This topic is supported under the National Robotics Initiative (NRI). OBJECTIVE: Develop a methodology to evaluate mixed human/robot team performance. DESCRIPTION: Introducing robotic assets to a military or civilian unit should increase the level of performance of the team. We evaluate human teams by scoring their performance on specific tasks; the score can be a single score for the team or an aggregate of individual member scores. Likewise, we need to extend this idea to tasks performed by mixed teams. The conceptual team can be a single human acting as "robot operator or handler" and a single robot. However, of equal importance is the team of multiple humans with one or more robots. Evaluation of the human component performance, the robot component performance, and the mixed team performance is critical both in T & E settings, where meeting performance thresholds will be key, and in research environments, where identification of weaknesses will assist in advancing the technology. Unfortunately, evaluating mixed human/robot team performance is much more complex than evaluating a human-only team. In addition, the mission space of a mixed human/robot team may differ from that of a human-only team: in some ways the mission space may be more limited (e.g., terrain may limit robot mobility); in other ways, it may be expanded (e.g., through additional sensor capabilities). There is a wide range of scenarios in which human-robot teams may increase overall mission performance. Examples of possible scenarios include some combat operations (both mounted and dismounted) such as reconnaissance; installation personnel transport; construction; road clearance; logistics; and medical evacuation. These scenarios will require certain tasks to be performed by the human-robot team members, in various environments, to certain expected levels of performance. 
In this solicitation, we are looking for a methodology and/or algorithm that can answer the following three questions: a) What are appropriate lists of tasks, and in what environments, for a mixed human/robot team? This would include defining different kinds of military or civilian human/robot teams, with differing capabilities and expected performance characteristics. b) Are there techniques/methodologies that can evaluate the performance of the robotic asset(s) within the team mission? Are there techniques/methodologies that can evaluate the combined performance of robots and humans in the team mission? c) How does performance on scenario-based task/environment/capability combinations relate to additional, new combinations (can we use the techniques/methodologies to establish performance envelopes across a range of scenarios and unmanned systems)? PHASE I: Determine the feasibility of developing a methodology and/or algorithm to evaluate and measure mixed human/robot team performance. From the scenarios above (or others you identify) where human-robot teams are likely to increase mission effectiveness, choose two to three scenarios to use as developmental scenarios. Within the chosen scenarios: 1) identify potential military human/robot team characteristics, 2) identify possible tasks that the robot/human/team will perform (both scenario/mission specific and general/universal), 3) define environment characteristics that will impact robot/human/team performance, and 4) identify potential and appropriate robot/human/team performance measurement metrics. From this large matrix space (scenario/team/task/environment/metrics), identify one or more methodologies and/or algorithms for assessing mixed human/robot team performance. Documentation of methodology tradeoffs and projected methodology strengths and weaknesses shall be required in the final report. PHASE II: Define in detail and prototype the methodology(ies) and/or algorithm(s). 
Test the methodology(ies) and/or algorithm(s) in real or simulated scenarios, with particular attention to validated (or defensibly appropriate) models of robot and human performance. Demonstrate the feasibility of the answers to the above three questions. Define methods to validate the solution set. Show applicability across a range of scenarios (i.e., not just the scenarios chosen in Phase I). In addition, the final product of Phase II should be able to evaluate and compare team performance between a human-only team and a mixed human/robot team. Deliverables shall include the prototype methodology(ies) and/or algorithm(s) and a final report, which shall contain documentation of all activities in the project and detailed instructions for using the prototype approach. PHASE III: The final product is expected to be fully developed systems that can evaluate the performance of any combination of human-only teams and mixed human/robot teams. Potential military applications include use by the Test and Evaluation community in determining the performance envelope of candidate systems, or by the research community in describing performance in R & D. Potential commercial applications include assessing performance during robotic system development across a range of applications, or use by commercial enterprises interested in developing and assessing autonomous driving.
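The comparison the description calls for, a single team score built as an aggregate of per-task scores, can be sketched very simply. Everything below is hypothetical: the task names, weights, and scores are stand-ins, and a real methodology would have to justify both the metric per task and the aggregation scheme.

```python
# Illustrative sketch only: comparing a human-only team with a mixed
# human/robot team by a weighted aggregate of per-task scores (0..1 each).
# Task names, weights, and scores are hypothetical stand-ins.

def team_score(task_scores, weights):
    """Weighted average of per-task performance scores."""
    total_w = sum(weights[t] for t in task_scores)
    return sum(task_scores[t] * weights[t] for t in task_scores) / total_w

weights = {"reconnaissance": 3.0, "route_clearance": 2.0, "logistics": 1.0}
human_only = {"reconnaissance": 0.70, "route_clearance": 0.60, "logistics": 0.80}
mixed_team = {"reconnaissance": 0.85, "route_clearance": 0.75, "logistics": 0.78}
```

Note that even this toy example shows the structure the topic asks for: the same scoring function applies to a human-only team and a mixed team, so the two are directly comparable on a common scale.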
Safety Testing for Autonomous Systems in Simulation
This topic is supported under the National Robotics Initiative (NRI). OBJECTIVE: The Army is interested in adding autonomy to its vehicle convoys, but how can we certify that these autonomous algorithms are safe? Currently, live testing of full vehicle systems is the only acceptable method, but even after hundreds of hours of successful live testing, a single hidden failure point in the algorithms would disprove the hypothesis that the proposed autonomous system is safe. Furthermore, live testing can be cost prohibitive, and is (not surprisingly) far from exhaustive. Instead, we seek to develop a safety testing environment (STE) that will exercise our current autonomy algorithms with software/hardware in the loop, in parallel with live testing that will validate the STE. DESCRIPTION: Recent advancements in sensor simulation tools have improved our ability to model radar, lidar, camera, and GPS with software/hardware in the loop. Our ability to model the physics of heavy trucks is quite mature as well. To address the challenge of developing the STE, we will provide our autonomy algorithms as Government Furnished Equipment (GFE). The focus of this topic is: 1) to build an environment that mirrors actual test data to provide a departure point for Monte Carlo simulations, 2) to research the failure modes of autonomy algorithms within the capabilities of current sensor models, and 3) to simulate the corner cases that would exercise these failure modes. This topic is not focused on improving physics-based simulation of heavy trucks or building better sensor models. Nor do we seek to develop new algorithms for autonomous behavior; rather, we aim to leverage existing GFE autonomy algorithms to study the open research question of how we can test these algorithms in simulation and certify that they are safe to the fullest extent possible within current simulation environments. PHASE I: In Phase I we seek a system architecture for the Safety Testing Environment (STE). 
This prototype STE may be outlined with cursory autonomy algorithms rather than with the GFE algorithms. Define sensor models and processor and software requirements. Propose metrics for highlighting the impact and reliability of the STE. Provide a detailed concept of operations (CONOPS) and overview (OV) graphics. PHASE II: Integrate GFE algorithms into a fully functional STE of an operationally relevant scenario such as collision mitigation braking, adaptive cruise control, or lane departure. We desire a model of an M915 or Marine Corps AMK23 Cargo Truck for the STE. Demonstrate the effectiveness of this STE within the metrics defined in Phase I. The STE should be able to simulate ambient noise, sunlight, and occlusions between the following and leading vehicles, and fully simulate radar, lidar, camera, and GPS. The objective is a full military environment. PHASE III: Work to have the proposed system become a part of the AMAS program.
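The Monte Carlo idea in the description can be illustrated with a deliberately simplified model: sample scenario parameters around recorded test conditions and flag runs where the system fails its safety criterion. The braking model, parameter ranges, and thresholds below are illustrative assumptions, not AMAS data or GFE behavior.

```python
# Minimal, hypothetical sketch of Monte Carlo corner-case search: randomly
# sample convoy-scenario parameters and flag runs in which a simplified
# braking model fails to stop in time. All numbers are illustrative.
import random

def stops_in_time(speed_mps, latency_s, decel_mps2, gap_m):
    """True if the truck stops before closing the gap to the lead vehicle."""
    reaction_dist = speed_mps * latency_s
    braking_dist = speed_mps ** 2 / (2.0 * decel_mps2)
    return reaction_dist + braking_dist < gap_m

def find_failures(n_runs, seed=0):
    """Sample scenarios and collect the parameter sets that fail."""
    rng = random.Random(seed)
    failures = []
    for _ in range(n_runs):
        params = dict(
            speed_mps=rng.uniform(10.0, 25.0),   # convoy speed
            latency_s=rng.uniform(0.1, 1.0),     # sensing + actuation delay
            decel_mps2=rng.uniform(3.0, 7.0),    # braking capability
            gap_m=rng.uniform(20.0, 80.0),       # gap to the lead vehicle
        )
        if not stops_in_time(**params):
            failures.append(params)
    return failures
```

In the real STE the inner check would be the GFE autonomy stack driving simulated sensors and truck physics; the value of the Monte Carlo wrapper is that the collected failure set points directly at the corner cases worth reproducing in live testing.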
Distributed Visual Surveillance for Unmanned Ground Vehicles
This topic is supported under the National Robotics Initiative (NRI). OBJECTIVE: Develop a system to identify, classify, and analyze visual data from unmanned ground vehicles and stationary visual surveillance sources to enable real-time on-board decisions and system-wide planning regarding route, speed, and tasks. DESCRIPTION: Distributed visual surveillance has a major role in the future of unmanned ground vehicles (UGVs). Distributed visual surveillance refers to the use of cameras networked over a wide area to continually collect and process sensor data to recognize and classify objects in the environment. Analyzed data will inform unmanned decision-making and fleet management to optimize a transportation system. Sensors and camera systems mounted on UGVs will augment stationary surveillance hardware. Areas of interest for this research are data fusion, cooperative multi-sensor tracking methods, and distributed processing. Also of interest are the reconciliation, classification, and prioritization of data; storage and accurate retrieval of archival references; and the selection of an appropriate action/response to the data. Although there are many potential sensors that can be used in distributed surveillance, this topic focuses on visual (and perhaps infrared) imaging sensors, whose cost, reliability, and availability make transition to the field or commercialization much more likely. Communications bandwidth is and will remain a limited resource. Even with video compression technologies, there is insufficient bandwidth to upload all video and high-resolution still images from all network nodes. Artifacts due to heavy video compression would degrade most analysis applications, and viewing all the data would overwhelm analysts. Local processing is therefore preferable to central processing to extract actionable information from the sensor data and to plan UGV position adjustments. 
An individual node can determine whether or not there has been a significant change in the situation that would warrant transmitting a package of sensor-level data. The scenario to be addressed in this topic is a small fleet of 10-15 UGVs deployed at a CONUS installation to safely transport personnel, on demand, from various points around one building, along a one-third-mile route sharing a pedestrian sidewalk, across an uncontrolled four-lane roadway, and through a busy parking lot, to another building on the installation. Vehicles will operate at speeds from 3 mph (in mixed pedestrian traffic) up to potentially 25 mph, which is the limit for Neighborhood Electric Vehicles. Vehicles must recognize and respond appropriately to pedestrians, unconnected vehicles, and other environmental objects. Approximately 12 networked cameras fixed along the route and around the test site will provide visual coverage of the area. Sensors will have a priori visual background data, and UGV location will be known (landmarks, GPS positioning, etc.), enabling temporal differencing or background subtraction to locate objects. Capabilities desired for the UGV include obstacle detection and obstacle avoidance (ODOA), correct positioning and speed regulation with respect to moving and stationary objects, coordinated and optimized system-wide responses across the fleet, data collection and/or communications, and extracting actionable information from the sensor stream. Information of interest includes detection and behavior analysis of humans and vehicles, analysis of traffic patterns, and identification of suspicious activities or behaviors. The intended platform is an electric vehicle with size on the order of 500-600 kg (roughly golf-cart sized). The platform is expected to manage its own energy usage and recharge itself wirelessly, so energy-efficient algorithms are of interest. UGV platform and payload development, including sensors and communications, are outside the scope of this topic. 
PHASE I: The first phase consists of scenario/capability selection, initial system design, researching sensor options, investigating signal and video processing algorithms, and showing feasibility on sample data. Documentation of design tradeoffs and projected system performance shall be required in the final report. PHASE II: The second phase consists of a final design and full implementation of the system, including sensors and UGV software. At the end of the contract, a database of behavioral characteristics shall be available, enabling both improved M & S and T & E; improved autonomous local maneuvering shall be demonstrated in a realistic outdoor environment. Deliverables shall include the prototype system and a final report, which shall contain documentation of all activities in the project, a user's guide, and technical specifications for the prototype system. PHASE III: The end state of this research is to further develop the prototype system and potentially transition it to the field or for use on military installations and bases. Potential military applications include monitoring highways, overpasses, intersections, buildings, and security checkpoints. Potential commercial applications include monitoring high-profile events, border security, and commercial and residential surveillance. The most likely path for transition of the SBIR from research to operational capability is through collaboration with robotics companies from industry or with the Robotic Systems Joint Project Office (RS JPO).
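The background-subtraction step the description mentions for fixed cameras can be sketched with a toy example. The 2-D integer grids below are stand-ins for grayscale images, and the threshold is an illustrative assumption; real imagery would need lighting compensation, noise filtering, and blob grouping on top of this.

```python
# Toy sketch of background subtraction for a fixed camera with an a priori
# background image: per-pixel differencing flags candidate object locations.
# The "images" are small integer grids and the threshold is illustrative.

def detect_changes(background, frame, threshold=30):
    """Return (row, col) pixels whose intensity differs from the background
    by more than `threshold` -- candidate object locations."""
    return [
        (r, c)
        for r, row in enumerate(frame)
        for c, value in enumerate(row)
        if abs(value - background[r][c]) > threshold
    ]

background = [
    [10, 10, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
frame = [
    [10, 10, 10, 10],
    [10, 200, 210, 10],   # a bright object entering the scene
    [10, 10, 10, 10],
]
```

This kind of cheap local computation is exactly what makes node-side processing attractive under the bandwidth constraints described above: a node transmits only when the changed-pixel set indicates a significant event.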
Obfuscation to Thwart Un-Trusted Hardware
OBJECTIVE: Develop innovative methods for mutating or obfuscating the processes of network security appliances or tactical communication systems: make the path of processes and data through hardware non-deterministic, thereby thwarting supply chain attacks that rely on the deterministic nature of computing to exfiltrate data and compromise operations; and mask the data and processes such that information exfiltrated from compromised hardware is not useful to an adversary. DESCRIPTION: With more and more of the hardware that the U.S. Army relies on for critical communications and security being manufactured in whole or in part in countries not sympathetic to the goals of this Nation, supply chain tampering is of growing concern. Tampering with components as they are produced can have catastrophic effects. From a security perspective, the possibility of supply chain attacks undermines the trust that can be placed in a system. Supply chain attacks can involve the insertion of hardware modules or embedded code into hardware devices. These insertions can exfiltrate data or allow backdoor access into systems by the parties responsible for their insertion. Detecting these insertions is costly and difficult, especially with many components coming from many places, any of which could carry such an insertion. These inserted modules rely on the user being unaware of their presence and performing tasks in a predictable manner. This predictability is very important to the developers of supply chain attacks. In the case of network security appliances, the hardware's intended use is known at the time of manufacture, and its use can easily be predicted. In many cases the behavior of the software is very well known, and its path through the hardware can easily be predicted. This can give the adversary easy access to usernames, passwords, and data that should be encrypted. 
It can also provide the adversary with the means to stealthily bypass the security features of the system. If network security appliance or tactical communication system processes and data can be masked or modified in such a way that exfiltrated information is no longer useful, or is even harmful, to the adversary, trust in the system will be restored. If the processes can be rerouted through the hardware such that their path is unpredictable, these malicious insertions will no longer be able to reliably exfiltrate useful data or attack processes. Restoring trust through the software architecture is a novel idea. It will lead to a more secure computing environment, because we will be able to place more trust in the systems. It will also avoid the cost of constructing and operating new, trusted computer-component manufacturing facilities, or of embedding inspectors at factories around the world. PHASE I: Define a software architecture, compatible with network appliance and/or tactical communication hardware, that would enable security applications or tactical communication systems to operate in a trusted manner on hardware assumed to be untrusted. Describe and develop creative methods, techniques, and tools that would allow for the implementation of such an architecture. PHASE II: Develop, implement, and validate a prototype system that utilizes the architecture, tools, and methods from Phase I. The prototypes should be sufficiently detailed to evaluate scalability, usability, and resistance to malicious attack. Efficiency should also be explored, although it is less critical than overall scalability. PHASE III DUAL USE APPLICATIONS: The increasingly global market for computer hardware will continue to put the production of hardware in places not sympathetic to the United States military or commercial sector. 
This application will have a broad market in the commercial sector as well, where the protection of intellectual property is becoming increasingly difficult.
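One masking idea consistent with the description can be illustrated with XOR secret sharing: sensitive data is split into random shares intended to be handled on separate, untrusted hardware paths, so any single exfiltrated share is indistinguishable from random noise. This is a hypothetical illustration of the masking goal only; a real architecture would also randomize process routing, as the topic requires.

```python
# Hypothetical illustration of data masking against exfiltration: split
# sensitive data into n XOR shares; each share alone is uniformly random,
# and all n must be combined to recover the data.
import functools
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_shares(data: bytes, n: int = 3):
    """Split `data` into n shares whose XOR reconstructs it."""
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    # Final share cancels the random shares, so the XOR of all n yields data.
    shares.append(functools.reduce(xor_bytes, shares, data))
    return shares

def combine_shares(shares):
    """Recombine all shares to recover the original data."""
    return functools.reduce(xor_bytes, shares)
```

An adversary's implant on one hardware path would capture only one share, which carries no information about the protected data without the others.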
Detecting Malicious Circuits in IP-Core
OBJECTIVE: Develop technologies and tools for detecting potentially malicious/backdoor logic in hardware IP-cores, toward reducing supply chain vulnerability in embedded computing and system-on-chip environments. DESCRIPTION: This topic solicits the development of technologies and tools that analyze the gate-level netlists of hardware IP-cores to identify potentially malicious wires and logic related to hardware backdoors. Compromise at the hardware level is very powerful, difficult to detect, and generally not addressable by the software running on it. The solicited tool can be used to screen, detect, and disqualify components/IP-cores that contain backdoor circuitry. Tactical computing devices often rely on system-on-chip embedded computing hardware, commonly found in embedded computing devices such as mobile computing and networking appliances, as the underlying processing infrastructure. Modern, large, and complex embedded and system-on-chip (VLSI/FPGA) designs often integrate a large number of pre-designed components acquired from third parties. These IP-core components are generally delivered as gate-level netlists. Currently, there is no practical way to ensure that these third-party components (IP-cores) do not contain any backdoor or malicious circuitry that could stealthily compromise the design (system) after deployment. Malicious circuitry embedded within the hardware is generally very hard to detect and defeat. State-of-the-art methodology for verifying VLSI designs includes running unit tests on the individual components as well as performing comprehensive regression tests on the full-chip (VLSI) design. However, these tests can only address functionality described in the specifications. They rarely uncover stealthy, out-of-specification malicious logic, which can be triggered (activated) only by hidden, rare, and very specific conditions. A new approach is needed to uncover these elusive circuits. 
If successful, the tools developed in this SBIR can be used to screen third-party IP-cores to ensure that they do not contain any backdoor/malicious logic. This prevents compromised IP-cores from being integrated into the design and enhances the security of the system. PHASE I: Investigate and develop creative methods and techniques for reliably discovering malicious/backdoor logic in hardware IP-cores, normally delivered in the form of gate-level netlists. Develop a proof-of-concept prototype and identify the metrics that determine the prototype's efficacy. PHASE II: Develop and enhance the prototype into a fully functioning tool. Demonstrate and evaluate the capability of the tool on an actual (real-world scale) set of benign IP-cores and IP-cores with malicious circuits/backdoors. PHASE III DUAL USE APPLICATIONS: Inclusion of third-party IP-cores is a common practice in system-on-chip design and development in the private sector and in the military industry. SOC hardware has been the backbone of embedded and mobile computing devices in the commercial sector as well as in military use. System-on-chip (SOC) hardware (semiconductors) is widely used in commercial applications such as network appliances and mobile computing. Security and financial motives for the insertion of malicious circuits exist in these applications. Commercial chip providers/manufacturers have an interest in ensuring that their products are free of malicious circuits. If successful, the tool developed within this SBIR should find a market in the commercial sector as well as the military sector.
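One screening heuristic in the spirit of this topic is to stimulate a gate-level netlist and flag nets that almost never activate, since the description notes that malicious logic is typically gated on hidden, rare conditions. The toy netlist format, the exhaustive stimulation, and the 5% threshold below are illustrative assumptions, not a prescribed tool design.

```python
# Toy sketch of rare-activation screening on a gate-level netlist: nets
# that assert in very few input patterns are candidate backdoor triggers.
# The netlist encoding and the activation threshold are illustrative.
import itertools

def evaluate(netlist, inputs):
    """Evaluate gates in (topological) insertion order.
    netlist: {net: (op, operand_nets)}; inputs: {net: 0 or 1}."""
    values = dict(inputs)
    for net, (op, args) in netlist.items():
        vals = [values[a] for a in args]
        if op == "AND":
            values[net] = int(all(vals))
        elif op == "OR":
            values[net] = int(any(vals))
        elif op == "NOT":
            values[net] = 1 - vals[0]
    return values

def rare_nets(netlist, input_names, threshold=0.05):
    """Exhaustively stimulate the inputs (random sampling would replace
    this for large designs) and return rarely-asserted nets."""
    counts = {net: 0 for net in netlist}
    total = 0
    for bits in itertools.product([0, 1], repeat=len(input_names)):
        values = evaluate(netlist, dict(zip(input_names, bits)))
        for net in netlist:
            counts[net] += values[net]
        total += 1
    return [net for net in netlist if counts[net] / total < threshold]

# 6-input design: "trigger" asserts only when all six inputs are 1 (1/64).
inputs = ["a", "b", "c", "d", "e", "f"]
netlist = {
    "t1": ("AND", ["a", "b", "c"]),
    "t2": ("AND", ["d", "e", "f"]),
    "trigger": ("AND", ["t1", "t2"]),   # rare: 1 of 64 input patterns
    "normal": ("OR", ["a", "b"]),       # common: 48 of 64 patterns
}
```

A production tool would work at real-world scale with sampled rather than exhaustive stimulation and would combine activation rates with structural analysis, but the core intuition, that backdoor triggers look like nearly-dead logic under normal stimulus, is the same.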