
Signal Processing for Layered Sensing


Asymmetric threats, including chemical and biological agents, improvised dissemination devices, and vehicle- and personnel-borne improvised explosive devices, pose a persistent challenge to U.S. military operations. Various sensor and surveillance systems can warn of the presence of such threats on a point-by-point basis; however, assimilating these data into a common operational picture for unit commanders remains an intractable challenge. Even if all such systems were integrated onto a common networking and communications framework, so that early warning data from any individual sensor were available in real time, the capacity to process and analyze those data into situational understanding for leaders would remain extremely limited. The current state of the art in chemical and biological sensing is limited to correlating alarm information from networked sensor systems, as opposed to true multimodal fusion of quantized native sensor signals. Enabling technology in information theory and signal processing continues to advance and now offers a reasonably mature approach to consuming disparate, seemingly unrelated data sources and developing improved situational understanding from them. While the integration and correlation of information has advanced in recent years, due in part to more accessible networking and communications architectures, few examples of multimodal data fusion using data from more than one sensor have been reduced to practice. Network bandwidth continues to constrain the content available to such systems; however, distributed processing architectures allow quantized native sensor signals to be fused at outlying nodes, mitigating to some extent the size of the data stream that must be fused at the central signal processing node. 
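The edge-fusion pattern described above can be sketched in a few lines: outlying nodes quantize their raw samples to a small fixed bit depth before transmission, and the central node dequantizes and combines the streams. This is an illustrative sketch only; the bit depth, signal range, and weighted-average combiner are assumptions for the example, not requirements of the topic.

```python
def quantize(samples, n_bits=8, lo=0.0, hi=1.0):
    """Quantize raw sensor samples to n_bits levels at an outlying node.

    The 8-bit depth and [0, 1] signal range are assumed for illustration;
    quantization shrinks the stream sent to the central processing node.
    """
    levels = (1 << n_bits) - 1
    return [round((min(max(x, lo), hi) - lo) / (hi - lo) * levels)
            for x in samples]

def dequantize(codes, n_bits=8, lo=0.0, hi=1.0):
    """Recover approximate sample values from quantized codes."""
    levels = (1 << n_bits) - 1
    return [lo + c / levels * (hi - lo) for c in codes]

def fuse(node_codes, weights):
    """Weighted-average fusion of dequantized node streams at the central node.

    A simple weighted mean stands in for whatever fusion rule the
    system study would actually select.
    """
    total_w = sum(weights)
    streams = [dequantize(c) for c in node_codes]
    return [sum(w * s[i] for w, s in zip(weights, streams)) / total_w
            for i in range(len(streams[0]))]
```

For example, two nodes reporting quantized streams around 0.2 and 0.6 fuse (with equal weights) to a stream near 0.4, while only integer codes cross the network.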
This effort is expected to demonstrate the power of multimodal signal processing of raw sensor data as the first tier of a layered sensing architecture, providing a depth of information and detection confidence that exceeds the baseline simple-correlation approach. The system should demonstrate the management of uncertainty and error (e.g., position, navigation, and timing errors and spurious alarm responses) at various levels in the architecture, and should present the fault analysis and multimodal fusion product, along with a representation of the overall confidence in the final product, in a fashion that is intuitive to non-technical operational decision-makers. Such a demonstration would serve as a validation case for the implementation of disparate sensing architectures for threat detection and awareness; to date, significant skepticism remains about whether data fusion architectures provide operational benefit.

PHASE I: Execute a comprehensive system study and define best-practice models and theory for the correlation and fusion of weighted signal outputs from multiple sensor modalities (including but not limited to electro-optical/infrared imagery, acoustic, seismic, magnetic, passive infrared motion sensors, chemical and biological agent detectors, radiological detectors, explosives sensors, ground surveillance radar, airborne imagers and LIDAR systems, unmanned aerial vehicle imagery, and aerostat imagery) in the presence of bias and other errors, including position and time errors and faulty sensor operation. Assess the value of follow-on tasking of reconnaissance and surveillance assets. Accommodate real-time threat-environment intelligence and meteorological data, and apply decision logic that accounts for realistic operational conditions. 
Provide a system-level capability that manages false alarms while maintaining sufficient network-wide sensitivity to the threat condition.

PHASE II: Fabricate, integrate, test, and optimize the performance of the signal processing system defined in the Phase I feasibility study.

PHASE III: A Phase III follow-on effort would effect a system demonstration that could be integrated into a wargame environment or table-top exercise, enabling capability and doctrine developers to assess the value of an autonomous layered sensing architecture for battlefield situational understanding. The demonstration would have immediate value for defining future warfighter capability requirements while simultaneously maturing the technology readiness level of the multimodal data fusion environment.

PHASE III DUAL USE APPLICATIONS: An integrated multimodal data fusion environment would have significant market potential in industrial process control, chemical transfer line integrity in engineering plants, and medical diagnostics. Environmental, law enforcement, security, and incident/disaster response applications would also lend themselves to multimodal data fusion systems and techniques.
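The Phase I requirement to fuse weighted alarm outputs while trading network-wide sensitivity against false alarms has a well-known baseline in log-likelihood-ratio (LLR) fusion of binary sensor reports. The sketch below is illustrative only: the per-modality detection and false-alarm probabilities and the threshold value are invented for the example and would come from sensor characterization in practice.

```python
import math

def log_likelihood_ratio(alarm, p_detect, p_false_alarm):
    """LLR contribution of one sensor's binary alarm report.

    p_detect and p_false_alarm are that modality's assumed detection
    and false-alarm probabilities; a reliable sensor (high p_detect,
    low p_false_alarm) carries more weight in the fused decision.
    """
    if alarm:
        return math.log(p_detect / p_false_alarm)
    return math.log((1 - p_detect) / (1 - p_false_alarm))

def fuse_alarms(reports, threshold=0.0):
    """Sum per-sensor LLRs and compare against a decision threshold.

    reports: iterable of (alarm, p_detect, p_false_alarm) tuples, one
    per sensor. Raising the threshold suppresses false alarms at the
    cost of network-wide sensitivity; lowering it does the reverse.
    """
    llr = sum(log_likelihood_ratio(a, pd, pfa) for a, pd, pfa in reports)
    return llr, llr > threshold
```

For instance, simultaneous alarms from a chemical detector and an acoustic sensor drive the summed LLR strongly positive, while silence from both drives it negative; the threshold is the single knob that sets the false-alarm/sensitivity operating point.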