DEPARTMENT OF HOMELAND SECURITY (DHS) SMALL BUSINESS INNOVATION RESEARCH (SBIR) PROGRAM FY23
NOTE: The Solicitations and topics listed on this site are copies from the various SBIR agency solicitations and are not necessarily the latest and most up-to-date. For this reason, you should use the agency link listed below which will take you directly to the appropriate agency server where you can read the official version of this solicitation and download the appropriate forms and rules.
The official link for this solicitation is: https://oip.dhs.gov/sbir/public
Available Funding Topics
Develop a hardware-assisted real-time accurate detector of cyber-attacks on networked and edge electronic devices.
An increasing number of network-connected devices and systems in modern-day life are vulnerable to attack. Beyond traditional computing systems and cloud services, modern Internet-of-Things (IoT) and cyber-physical systems can experience numerous cyber-attacks, such as ransomware, spyware, spoofing, botnets, keyloggers, denial of service (DoS), and distributed denial of service (DDoS), each of which is becoming more prevalent in number as well as more challenging to thwart. There is an ongoing need for effective solutions to identify, report, and protect against cyber-threats. Current protection techniques are limited in detection efficacy (~70%) and face scalability issues. Most techniques are primarily based upon static, software-focused solutions such as code analysis and signature (template) matching. These techniques have proven to be limited in detection efficacy so far, as reflected by the increasing number of threats and compromised systems. This topic seeks solutions that analyze hardware-generated data to enable real-time, precise detection (>95%) and proactive protection against cyber-threats. The end state of this effort is a device-embedded solution that supports highly accurate, real-time (within fractions of a second) detection of critical cyber-threats, such as crypto-ransomware and DDoS attacks, on networked and edge electronic devices, such as computers, servers, cyber-physical systems, and IoT devices, with minimal performance overhead, while offering multi-layer and distributed defense, monitoring anomalous behaviors against zero-day attacks, and engaging automatic protection without human intervention.
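As one hedged illustration of what analyzing hardware-generated data could look like, the sketch below trains an anomaly detector on hardware performance counter (HPC) samples collected during benign operation and flags deviations at run time. The feature names, the sampling helper, and the scikit-learn-based detector are assumptions chosen for illustration, not a prescribed architecture.

```python
# Minimal sketch: anomaly detection over hardware performance counter (HPC)
# samples, as one illustration of analyzing hardware-generated data.
# The feature names and the read_hpc_sample() source are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

FEATURES = ["llc_misses", "branch_mispredicts", "dtlb_misses", "instructions"]

def read_hpc_sample():
    """Placeholder for one per-interval HPC reading (e.g., from perf counters)."""
    return np.random.rand(len(FEATURES))  # replace with real counter values

# Fit on counter profiles collected during known-benign operation.
benign_profiles = np.array([read_hpc_sample() for _ in range(5000)])
detector = IsolationForest(contamination=0.01, random_state=0).fit(benign_profiles)

# At run time, score each new sampling interval; a prediction of -1 flags an
# anomaly (e.g., the sustained encryption burst typical of crypto-ransomware).
sample = read_hpc_sample().reshape(1, -1)
if detector.predict(sample)[0] == -1:
    print("Anomalous hardware behavior detected; trigger protection action.")
```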
Develop software to correlate items on a shipping manifest with images obtained from the screening of air cargo skids.
Air cargo screening is performed by Transportation Security Administration (TSA) regulated entities. As private enterprises, these entities need to screen air cargo for aviation security threats in an effective manner, but also need to do so in economically viable ways. Screening air cargo through whole-skid X-ray Computed Tomography (CT) Explosive Detection Systems (EDS) imaging is a challenging enterprise. Air cargo skids have an approximate maximum size of 48” x 65” x 48”, which is much larger than checked baggage items, and contain far more diverse contents, ranging from fresh produce to medicines to dense electronics and heavy machine parts. Screeners are frequently called upon to break down skids and screen items individually (through X-ray, explosive trace detection, and/or physical search) when X-ray images are not definitive enough to determine that no threats are contained within. This leads to increased staffing costs and decreased throughput. X-ray systems used to screen air cargo skids can distinguish organic, inorganic, and metallic items for the screeners. Air cargo manifests are a source of information about what the skids contain and could be used to inform screeners of what to expect in an X-ray image. Air cargo manifest information could also be used to alert a screener when items are present that are sufficiently dense, cluttered, or known to scatter X-rays (such as books or pallets of water) and are therefore likely to present difficulties in X-ray screening. Software that decreases the number of skids that must be broken down for individual examination would increase efficiency and save costs for the regulated entity while ensuring improved screening and threat interdiction. The proposed solution should present high-level analytic conclusions to a screener based on the air cargo manifest. It should also allow screeners to call up the air cargo manifest and work in concert with existing X-ray and future CT-based (EDS) air cargo skid screening systems. It is envisioned that this solution would be standalone, with a connection (e.g., an Ethernet or USB interface to a standard PC) to the screening system, and not integrated into existing devices.
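As a hedged illustration of the manifest-analysis idea, the sketch below flags manifest line items whose descriptions match commodity categories commonly considered difficult to screen and produces advisory notes for the screener. The manifest fields, keyword list, and advisory text are hypothetical placeholders, not a definitive design.

```python
# Illustrative sketch: flag air cargo manifest line items that are likely to
# complicate X-ray screening, so the screener knows what to expect before
# imaging. The manifest fields and keyword list are hypothetical examples.
DIFFICULT_CATEGORIES = {
    "books": "dense organic stack; expect dark, cluttered image regions",
    "bottled water": "bulk liquid; known to scatter/attenuate X-rays",
    "machine parts": "dense metal; may shield surrounding items",
    "fresh produce": "organic clutter; variable density",
}

def analyze_manifest(manifest_items):
    """Return advisory notes for a list of {'description', 'quantity'} items."""
    advisories = []
    for item in manifest_items:
        desc = item["description"].lower()
        for keyword, note in DIFFICULT_CATEGORIES.items():
            if keyword in desc:
                advisories.append(f"{item['quantity']} x {item['description']}: {note}")
    return advisories

# Example skid manifest
skid = [
    {"description": "Bottled Water, 24-pack", "quantity": 40},
    {"description": "Laptop computers", "quantity": 12},
]
for line in analyze_manifest(skid):
    print("ADVISORY:", line)
```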
Develop an interoperable digital software badge capability that can securely and efficiently prove a first responder's identity and qualifications onsite in a disaster response operating environment.
Many first responder organizations at various levels, including government, local and state, and non-profit agencies, each have different methods of identifying first responders on scene during an incident. The lack of an interoperable and standardized credentialing solution for first responders results in challenges with communication and coordinated access to information, such as the coordination of personnel and of assistance for residents and victims who may need transportation, medical assistance, food and shelter, etc. Current emergency response involves first responders arriving in person at the scene and communicating via land mobile radio and networked digital applications. Current credentialing solutions, such as plastic identity badges like Personal Identity Verification (PIV) and Personal Identity Verification-Interoperable (PIV-I) cards, are costly (approximately $132 each) and generally not integrated with field applications and platforms. Moreover, PIV-based badge solutions are not easily extended to support additional attributes or to integrate with resource management and logistics applications in a dynamic environment. Paper printed credentials that are simple to manufacture (such as printed vaccination cards) are easily counterfeited and are not strongly verifiable. Other approaches are more resistant to counterfeiting but use proprietary encodings that in turn are not universally readable. These solutions cannot continue to be effectively and safely utilized: many incidents are dangerous to operate in or have legal protections (crime scenes), and unapproved personnel may interfere with or thwart responders' actions in furtherance of their own agenda or plan (criminal acts/terrorism). A new capability is required to make large-scale incidents and events safer for the public and responders by ensuring only authorized personnel are allowed to work inside the emergency area. A more flexible suite of credentials and universal verification is needed for the response community to respond to incidents securely and efficiently. New international standards, including International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) 18013-5 and the ISO/IEC 23220 series, are being adopted by some state and Federal government organizations in the U.S., by the private sector, and internationally for credentialing citizens. Credentialing encompasses proof of identity, including verification and validation of name, age, home and work addresses, employment, etc., online and offline, without needing to connect back to the issuing organization. The format (mdoc) is extensible to other types of credentials, including first responders. In 2022, several large phone manufacturers (Google, Samsung, Apple) and other emerging technology companies began rolling out digital wallets that can hold U.S. state-issued driver's licenses and identification cards. An additional standard, Decentralized Identifiers (DIDs) v1.0, is emerging as an alternative for verifiable digital identity credentialing. The proposed solution should adhere to these defined standards and should include the following requirements:
• Credentials must include:
o Name
o Title
o Organization
o Jurisdiction
o Qualifications
Qualifications should include credentials that prove the individual has an array of skills that have been verified against the Federal Emergency Management Agency (FEMA) National Incident Management System (NIMS) guidelines.
• Credentialing information should be able to be shared and communicated online and offline to other first responders prior to allowing access to the site or venue.
• Ability to be tracked and monitored dynamically over a wide range of emergency operational situations and via a wide range of network conditions, including high latency, degraded network bandwidth and broadcast ability, and no network availability.
• Ability to send verified identification information in a secure packet to the specific authorized receiver collecting the credentials. Verification should occur in real time, with a validation or authorization process backed by cryptographic hardware.
• Should not require specialized hardware to issue, hold, or verify credentials. The solution should work with hardware that first responders already have available (smartphone, laptop, smartwatch, etc.) that includes a trusted execution environment.
• The digital identity credential information should be sent and received in a standardized, interoperable format that is easily accessed and understood by authorized users of the system and that does not require proprietary software protocols to be issued, held, or verified (a minimal offline-verification sketch follows this list).
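The following is a minimal sketch of offline credential issuance and verification under stated assumptions: the credential is a plain JSON payload signed with an Ed25519 key via the Python cryptography library, and the organization and qualification codes are placeholders. A fielded solution would instead use ISO/IEC 18013-5 mdoc or DID/verifiable-credential encodings and hardware-backed key storage, as required above.

```python
# Minimal sketch of issuing and offline-verifying a responder credential.
# Payload fields mirror the list above; the qualification codes are placeholders.
# A fielded solution would use ISO/IEC 18013-5 mdoc or DID/VC encodings and a
# hardware-backed key store, not plain JSON with a file-held key.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuing organization signs the credential once, at enrollment time.
issuer_key = Ed25519PrivateKey.generate()
credential = {
    "name": "Jane Doe",
    "title": "Paramedic",
    "organization": "Example County EMS",                # hypothetical issuer
    "jurisdiction": "Example County, ST",
    "qualifications": ["NIMS ICS-100", "NIMS ICS-200"],  # placeholder codes
}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# On scene, a verifier that already holds the issuer's public key can check the
# credential with no network connection back to the issuing organization.
issuer_public = issuer_key.public_key()
try:
    issuer_public.verify(signature, payload)
    print("Credential verified offline:", credential["name"], credential["qualifications"])
except InvalidSignature:
    print("Credential rejected: signature does not match issuer key.")
```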
Develop an integrated Alarm Resolution (AR) sensor suite with smart algorithms to enable high collection efficiency of explosive samples and signatures and enhance AR detectors' performance, while simultaneously reducing user information overload.
Explosive threats come in all shapes, sizes, concealments, and nuanced formulations. Due to these complexities, DHS Components employ a variety of detectors, from Explosives Trace Detectors (ETD) based on both Ion Mobility Spectrometry and Mass Spectrometry, vapor detectors, bulk resolution detection (Infrared (IR) and Raman based), and through-barrier detection (Spatially Offset Raman Spectroscopy), to colorimetric kits. However, having a multitude of AR tools can lead to information overload for end users.
Information overload is especially pronounced in crowded, high-throughput environments such as aviation security checkpoints. As a baseline, a Transportation Security Officer (TSO) currently has to follow a multi-step decision tree to screen and resolve an alarm on a suspicious object. As part of this, the TSO has to characterize the shapes, sizes, and colors of objects and mentally categorize objects according to their utility and composition. From these initial screenings, they determine the best tool(s) at their disposal to detect and identify explosives, whether it is an ETD, colorimetric, IR, Raman, or vapor detector, or any combination thereof. They then determine how best to sample and collect explosives signatures, whether through swabbing residues on frequently touched surfaces or aiming a laser to excite unknown substances and collect their signatures.
One way to reduce information overload is through rigorous training. TSOs are trained on how to execute the decision tree, and with practice of sampling techniques they develop muscle memory (e.g., with the use of a Pressure Sensitive Wand). However, such training can only alleviate information overload incrementally. Thus, resolving one alarmed object after another, day after day, TSOs may experience a high level of information overload that accumulates over time and leads to user fatigue.
In response to this topic, S&T seeks a proposed solution to develop a new capability that comprehensively alleviates the aforementioned information overload on users. This capability would characterize the shapes, sizes, and colors of objects and, from this initial characterization, categorize objects according to their utility and composition. From these initial screenings, the capability would inform a user of the best tool(s) to detect and identify explosives. The proposed capability should consist of three sub-components, all integrated within a desktop-size box:
1) New sensors that can scan the object and categorize material compositions according to their physical characteristics (e.g., electrical conductivity, magnetic properties);
2) A machine learning algorithm that takes the sensor output, analyzes it, and suggests the best AR detector to sample or collect signatures of unknown substances; and
3) An actuation stage that, upon user confirmation, actuates the selected AR detector to detect and identify explosives.
This is, in essence, a decision analytics tool specifically applied to enhanced Alarm Resolution. Performance parameters for this proposed capability include the Probability of Categorizing (PC) objects correctly according to their utility and function (Phases I and II) and the Probability of Recommending (PR) the right AR tool(s) (Phase III). For comparison, PC and PR will be collected from a well-informed TSO who employs the AR decision tree across five different bins of characteristics and functions and four different AR tools.
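As a hedged sketch of the decision-analytics sub-component, the example below trains a small classifier that maps coarse physical characteristics from the categorization sensors to a recommended AR tool. The feature set, training rows, and tool labels are illustrative assumptions, not the actual TSA decision tree or the five bins and four tools referenced above.

```python
# Minimal sketch of the decision-analytics component: a classifier maps coarse
# physical characteristics from the categorization sensors to a recommended AR
# detector. Feature values, category bins, and tool labels are illustrative
# placeholders, not the actual TSA alarm-resolution decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Features per object: [electrical_conductivity, magnetic_response,
#                       optical_opacity, estimated_density]
training_features = np.array([
    [0.9, 0.8, 1.0, 7.8],   # dense metallic item
    [0.0, 0.0, 0.2, 1.0],   # clear liquid container
    [0.1, 0.0, 0.9, 1.4],   # dense organic block (e.g., stacked paper)
    [0.3, 0.1, 0.6, 2.0],   # mixed electronics
])
recommended_tool = ["ETD_swab", "Raman", "IR", "ETD_swab"]  # placeholder labels

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(training_features, recommended_tool)

# For a newly alarmed object, the sensor suite produces a feature vector and
# the model suggests which AR tool to actuate, pending user confirmation.
new_object = np.array([[0.05, 0.0, 0.85, 1.3]])
print("Suggested AR tool:", model.predict(new_object)[0])
```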
Develop testing methods and processes to ensure conformance and interoperability of Mission Critical Services server-to-server communications.
Mission critical services were implemented in the Third Generation Partnership Project (3GPP) standards to provide public safety users with highly resilient and high-performance network capabilities, such as quality of service, priority and preemption.
Cellular network providers are starting to provide Mission Critical Services (MCS or MCX) to their customers in the form of Mission Critical Push To Talk (MCPTT), Mission Critical Video (MCVideo), and Mission Critical Data (MCData). However, communications and interoperability among users on different carriers, or on different vendor-furnished applications, can be limited. MCS consists of both endpoint applications on user equipment and the servers that provision and manage those services. Though MCS servers exist within a cellular network provider's core network, other devices can also act as an MCS server; an example is a portable dispatcher unit that a customer may use.
Since these services are relatively new, the interfaces between the servers and their implementations are not mature and are likely to be non-interoperable and non-conformant to the standards. Furthermore, proprietary technology exists that could hamper communications and interoperability. (Non-interoperable solutions will be a barrier to interoperability between public safety users using different services.) The National Institute of Standards and Technology's (NIST) Public Safety Communications Research Division (PSCR) has recently funded grants to perform MCS device and application conformance testing. Currently, there is no test equipment that can perform server conformance test cases, and there is no unified process and methodology to conduct interoperability testing for MCS server-to-server communications.
The 3GPP has created server-to-server conformance test cases for MCPTT in document TS 36.579-3, but it has no plans to create conformance test tools (e.g., TTCN-3 test suites) for these test cases. There are 3GPP work plans to greatly expand the scope of the document to include server aspects for MCVideo and MCData, in addition to expanding the number of test cases for MCPTT. This effort will build upon the NIST PSCR effort to create test tools that enable conformance and interoperability testing between MCS servers of different cellular network providers. The solution will develop test equipment for server conformance testing and a process and methodology for interoperability testing to ensure that first responders maintain communications during critical incidents and planned events per the standard(s) [see references]. This is especially needed when users are trying to communicate across multiple cellular network providers or solution providers.
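As a hedged sketch of how an individual server conformance check might be organized, the example below parses a captured SIP response from the server under test and reports missing mandatory headers. The header list and pass criteria are illustrative assumptions and do not reproduce actual TS 36.579-3 test cases.

```python
# Minimal sketch of how a server-to-server conformance check might be organized:
# parse a SIP response captured from the MCS server under test and assert that
# mandatory fields are present and well formed. Header names shown are common
# SIP headers; the pass criteria are placeholders, not actual TS 36.579-3 steps.
REQUIRED_HEADERS = ["Via", "From", "To", "Call-ID", "CSeq", "Contact"]

def parse_sip_response(raw: str) -> dict:
    """Split a raw SIP message into a status line and a header dictionary."""
    lines = raw.strip().splitlines()
    status_line = lines[0]
    headers = {}
    for line in lines[1:]:
        if not line.strip():
            break  # end of headers
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return {"status": status_line, "headers": headers}

def check_conformance(raw_response: str) -> list:
    """Return a list of conformance findings (empty list means pass)."""
    msg = parse_sip_response(raw_response)
    findings = []
    if not msg["status"].startswith("SIP/2.0 200"):
        findings.append(f"Unexpected status line: {msg['status']}")
    for header in REQUIRED_HEADERS:
        if header not in msg["headers"]:
            findings.append(f"Missing mandatory header: {header}")
    return findings

# Example: a captured response from the server under test would be fed in here.
sample = (
    "SIP/2.0 200 OK\r\n"
    "Via: SIP/2.0/TCP mcptt-server.example\r\n"
    "From: <sip:mcptt-a@example>\r\n"
    "To: <sip:mcptt-b@example>\r\n"
    "Call-ID: 1234\r\n"
    "CSeq: 1 INVITE\r\n\r\n"
)
print(check_conformance(sample))  # reports the missing Contact header
```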
Develop a technique for generating reduced-order models of the protect surface of a cyber-physical-human critical infrastructure system enabling a comprehensive vulnerability assessment against catastrophic, model-based destabilization attacks.
Today's critical infrastructures are complex, dynamic, cyber-physical-human systems. These systems can hide intricate sensitivities to small perturbations that can result in catastrophic, destabilizing behaviors, such as cascading failures in a power grid or a stock market flash crash. Adversaries with enough information about the system can exploit these sensitivities and design provably stealthy attacks to trigger them, so detecting the presence of these sensitivities, or intrinsic vulnerabilities, is the first step toward protecting our critical infrastructure systems from this next generation of sophisticated model-based attacks.
Many such model-based attacks on critical infrastructure systems are beginning to emerge [1]. For example, machine learning can be used to design catastrophic attacks on a number of systems, such as chemical processing plants, power generation or distribution systems, heating, ventilation, and air conditioning (HVAC) systems, water treatment or distribution systems, or nuclear power facilities. A general model may be known from basic physics, while specific parameters for a particular target facility are learned through stealthy observation of that system's behavior. Nevertheless, even though details about designing and executing such attacks are increasingly available in the academic literature, little work has been done to develop techniques that systematically detect and protect against them. Guaranteed robustness analyses [2] provide one approach to securing these systems, but the nonlinear and often hybrid nature of these systems, and their sheer complexity, make performing such computations extremely difficult at scale.
The protect surface of a critical infrastructure is a model that represents the system variables hypothesized as being potentially exposed to possible attackers (or other unexpected perturbations), as well as their causal relationships to each other; dynamical structure functions have been used to build such models for linear time-invariant systems. When building these models, choosing which variables are "exposed", and which variables are suppressed as part of the causal interaction between exposed variables, allows modelers to distinguish insider attacks, where many more system variables may be exposed, from other attacks where fewer variables may be exposed. Nevertheless, the number of exposed variables and the complexity of the often nonlinear dynamics can make these models unwieldy and impractical to develop for real critical infrastructure systems.
New research is needed to develop methodologies for reduced-order modeling of the protect surfaces of critical infrastructure systems, such as power systems, chemical and other manufacturing facilities, municipal and regional water systems, nuclear reactors, emergency services, transportation networks, pipelines, commercial and government facilities, financial systems, dams, communication networks, or food production and agricultural systems. These reduced-order models should preserve critical properties of the full system, such as stability and sensitivity to perturbations of the exposed variables, while significantly reducing the complexity of the model. The reduced-order model should exhibit the same vulnerability properties as the full model, so that a comprehensive vulnerability analysis conducted on the reduced model will reveal the vulnerabilities and exploitation potential of the actual system. More information on approaches for developing such reduced-order models may be found in [3] and related works. A proposed solution should provide an approach to building reduced models of a system that maintain the vulnerability properties of that system. It should provide an exemplar of the process and a tool(s) to assist others in developing such models.
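As a hedged illustration of the reduction step only, the sketch below applies standard balanced truncation to a synthetic, stable linear state-space model using the python-control package, keeping the states that carry the dominant input-output sensitivity. This assumes a linear setting and is not the dynamical-structure-function protect-surface construction discussed above.

```python
# Minimal sketch of reduced-order modeling under an assumed linear setting:
# standard balanced truncation of a synthetic stable LTI system, which keeps
# stability and the dominant input-output sensitivity while shrinking the state
# dimension. This illustrates the reduction step only, not the protect-surface
# construction itself.
import numpy as np
import control  # python-control package

rng = np.random.default_rng(0)
n = 20                                     # full-order state dimension
A = rng.standard_normal((n, n))
A = A - (np.max(np.real(np.linalg.eigvals(A))) + 1.0) * np.eye(n)  # shift to stable
B = rng.standard_normal((n, 2))            # 2 "exposed" input channels
C = rng.standard_normal((2, n))            # 2 monitored output channels
full_sys = control.ss(A, B, C, np.zeros((2, 2)))

# Hankel singular values indicate how many states carry the dominant
# input-output behavior; truncate the rest.
hsv = control.hsvd(full_sys)
order = int(np.sum(hsv > 1e-2 * hsv[0]))   # keep states above a relative threshold
reduced_sys = control.balred(full_sys, order, method="truncate")

print(f"Full order: {n}, reduced order: {order}")
print("Reduced model remains stable:",
      np.all(np.real(np.linalg.eigvals(reduced_sys.A)) < 0))
```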
Classification software that derives theoretically calculated signatures/spectra of unknown, not yet created, toxic compounds.
The government seeks innovative methods to create theoretical spectroscopic signatures of potentially toxic chemical compounds for use in detection systems. Compounds of interest include chemical warfare agents (CWAs), toxic industrial compounds (TICs), pharmaceutical-based agents (PBAs), and non-traditional agents (NTAs). Compounds of interest could be naturally occurring or synthetic. Novel classification, identification, and quantification methods can provide enormous savings in cost and timelines for fielding new detector systems and can improve the reliability and performance of both current and future systems. These enhancements will ultimately result in increased safety for the public and Department of Homeland Security operational units when encountering novel agents.
Detection systems that rely on target materials’ spectroscopic signatures have been limited to the detection, and possible quantification, of known compounds whose signatures have been measured experimentally. This project will introduce the ability to expand libraries of spectroscopic signatures beyond that limited set by (1) the automated generation of molecular structures, (2) theoretical prediction of their spectroscopic signatures, and (3) predictions of their toxicity metrics. This will dramatically expand the range of potentially toxic materials that may be detected, even with existing detection systems. Present technologies for spectrum prediction include the use of molecular dynamics to simulate single molecules and clusters of molecules, and density functional theory (DFT); some employ machine learning algorithms. However, these techniques still lack sufficient accuracy to fill the needs of the Department of Homeland Security.
The project entails developing theoretical spectra of toxic compounds, such as CWAs, TICs, PBAs, NTAs, and similar compounds. The work could proceed from low-molecular-weight to higher-molecular-weight compounds. Algorithms for classification may focus on a chosen spectroscopic technology and should provide tools to enable theoretically based identification. This effort is meant to develop algorithms; the choice of platform (e.g., cloud or edge computing) is up to the performer. Estimation of toxicity metrics of chemicals in the above-listed classes, including as-yet unknown threat agents, can be based on immediately dangerous to life or health (IDLH) metrics following NIOSH/OSHA standards. Finally, data formats must be non-proprietary. Standard data formatting will enable efficient data processing and reachback analysis.
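As a hedged sketch of step (1), the example below enumerates candidate structures by substituting groups on a benign benzene scaffold and computes basic descriptors with RDKit; the scaffold and substituents are arbitrary, benign placeholders, and the spectrum and toxicity prediction steps appear only as comments.

```python
# Minimal sketch of step (1): automated generation of candidate molecular
# structures by enumerating substituents on a benign scaffold SMILES, plus
# basic descriptors via RDKit. Spectrum and toxicity prediction (steps 2 and 3)
# are represented only by placeholder comments; real predictions would come
# from DFT/ML models trained for the chosen spectroscopic technology.
from rdkit import Chem
from rdkit.Chem import Descriptors

SCAFFOLD = "c1ccccc1{R}"                   # benzene ring with one variable site
SUBSTITUENTS = ["O", "N", "C(=O)O", "Cl"]  # arbitrary, benign substituents

candidates = []
for r in SUBSTITUENTS:
    smiles = SCAFFOLD.replace("{R}", r)
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        continue  # skip chemically invalid strings
    candidates.append({
        "smiles": Chem.MolToSmiles(mol),           # canonical form
        "mol_weight": Descriptors.MolWt(mol),
        "logp": Descriptors.MolLogP(mol),
    })
    # Placeholder for step (2): predict the IR/Raman/MS signature of `mol`.
    # Placeholder for step (3): estimate IDLH-style toxicity metrics for `mol`.

for c in candidates:
    print(f"{c['smiles']}: MW={c['mol_weight']:.1f}, logP={c['logp']:.2f}")
```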