Adaptive multi-sensor wide area situational awareness system

Description:

OBJECTIVE: Develop machine learning technology that can significantly improve warfighter wide-area situational awareness based on multiple sensors.

DESCRIPTION: Layered sensing enables situational awareness (SA) about an area of interest (AOI) by providing multiple high-resolution views of the area. SA in a wide area of operations is particularly challenging because sensor resources must be stretched to satisfy a large number of warfighter requests. Typical wide-area sensor layering consists of a patchwork of high- and low-resolution sensor views. For example, SAR/GMTI radars may first provide low-revisit data over large areas and then high-resolution, high-revisit data over pre-defined areas where an activity of interest is suspected. EOIR and WAMI sensors can scan a large area of interest at low resolution and then provide high-resolution images over small areas of interest. Depending on resource availability and time constraints, multiple sensors can be layered over the same activity, or each sensor can cover a different activity. AF and Army users then face the challenge of fusing this disparate data to uncover activities of interest.

The objective of this topic is to develop machine learning technologies that address two challenges in wide-area sensor layering: (1) improve the analyst's ability to detect activities of interest in wide-area layered sensor data, and (2) deliver the sensor data to AF and Army users in time to make a decision over a combination of satellite, airborne, and mobile low-bandwidth networks. Machine learning technologies can improve the performance of activity detection methods by taking advantage of the training data that arises during operations. The activity detection system has an opportunity to learn from the detections provided by the users and from ad hoc multi-sensor co-collects provided by the layered sensors.
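The ad hoc multi-sensor co-collects mentioned above can serve as free training pairs when detections from different sensors are matched up. A minimal sketch, assuming each detection carries a timestamp and a rough geolocation (the `Detection` record and the matching thresholds below are illustrative assumptions, not part of the topic):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str      # e.g. "GMTI", "WAMI", "EOIR"
    time: float      # seconds since start of collection
    lat: float       # degrees
    lon: float       # degrees

def pair_co_collects(low_res, high_res, max_dt=30.0, max_deg=0.01):
    """Pair each low-resolution detection with high-resolution
    co-collects of the same activity, matched by time proximity and
    rough geolocation. Thresholds are illustrative placeholders."""
    pairs = []
    for lo in low_res:
        for hi in high_res:
            if (abs(lo.time - hi.time) <= max_dt
                    and abs(lo.lat - hi.lat) <= max_deg
                    and abs(lo.lon - hi.lon) <= max_deg):
                pairs.append((lo, hi))
    return pairs
```

Each resulting pair associates an always-available low-resolution view with a privileged high-resolution view of the same activity, which is exactly the kind of training example the next paragraph describes.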
Traditional machine learning uses labeled examples, generated by analysts from the collected data, to train the learning algorithms; the training process produces a decision rule that can be applied to detect activities in future data. In a typical scenario, however, more data is available: for instance, each low-resolution view may have corresponding high-resolution views, and analysts may have enhanced parts of the data by describing the situation in detail. Such additional data from the high-resolution sensors or the analyst may not be available in the operational environment, where activities of interest must be detected in real time. Advanced machine learning technology that can use this additional information during training is desired. Furthermore, the machine learning technology should take into account the specifics of operational sensor data: high variability of observed activities and sensor observations, internal structure within the classes of interest, the presence of a large number of clutter classes, a limited number of training samples, and the need to integrate machine learning into the human analytical process. Such an adaptive activity detection system can significantly increase the probability of detecting activities of interest and reduce the false alarm rate.

The goal of the adaptive network management system is to ensure that the right information flows to the right users in time to provide situational awareness. The system will monitor network performance under varying sensor output and user requests, learn to predict future bottlenecks, and develop proactive network management and prioritization policies.

PHASE I: Develop machine learning technologies that can operate on training data containing additional information not available during the test stage, and that consider the complex structure of classes of interest and the large number of clutter classes.
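One established way to exploit training-only information of this kind is the learning-using-privileged-information / distillation family of methods: a teacher model is trained with the privileged high-resolution features, and a deployable student model is trained to mimic the teacher's soft outputs using only the features available at test time. A minimal NumPy sketch on synthetic data (the feature model, hyperparameters, and thresholds are illustrative assumptions, not part of the topic):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.5, steps=500):
    """Plain gradient-descent logistic regression; y may be soft targets."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Synthetic layered-sensing stand-in: x_low is the low-resolution
# measurement available at test time; x_high is the privileged
# high-resolution co-collect available only during training.
n = 400
activity = rng.integers(0, 2, n)               # 1 = activity of interest
x_high = activity + 0.3 * rng.normal(size=n)   # informative but privileged
x_low = activity + 1.0 * rng.normal(size=n)    # noisier, always available
ones = np.ones(n)

X_full = np.column_stack([ones, x_low, x_high])
X_low = np.column_stack([ones, x_low])

# Teacher uses the privileged features; the student mimics the
# teacher's soft outputs using only deployable low-resolution features.
w_teacher = fit_logistic(X_full, activity.astype(float))
soft = sigmoid(X_full @ w_teacher)
w_student = fit_logistic(X_low, soft)

pred = (sigmoid(X_low @ w_student) > 0.5).astype(int)
acc = (pred == activity).mean()
```

The student can be deployed without the high-resolution sensor feed; the sketch makes no claim about how much the soft targets help over direct training, which depends on the data.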
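The proactive network-management idea, monitoring performance and predicting bottlenecks before they occur, can be sketched with a toy trend-based monitor (window size, prediction horizon, and utilization threshold are illustrative assumptions):

```python
from collections import deque

class BottleneckPredictor:
    """Toy proactive monitor: keeps a sliding window of observed link
    utilization samples and flags a likely future bottleneck when a
    least-squares trend projects utilization past a threshold."""

    def __init__(self, window=8, horizon=3, threshold=0.9):
        self.samples = deque(maxlen=window)
        self.horizon = horizon      # how many samples ahead to project
        self.threshold = threshold  # utilization fraction deemed congested

    def observe(self, utilization):
        self.samples.append(utilization)

    def bottleneck_ahead(self):
        n = len(self.samples)
        if n < 2:
            return False
        # Least-squares slope of utilization vs. sample index.
        xs = range(n)
        mx = sum(xs) / n
        my = sum(self.samples) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, self.samples))
        var = sum((x - mx) ** 2 for x in xs)
        slope = cov / var
        projected = self.samples[-1] + slope * self.horizon
        return projected >= self.threshold
```

A real system would learn such policies from network telemetry rather than extrapolate a single link's trend, but the structure, observe, predict, then act before saturation, is the same.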
Apply the machine learning technology to detection of activities in sensor data and to management of sensor networks.

PHASE II: Apply the developed technology to forensic datasets from radar, EOIR, and WAMI sensors and to track data/activities. Demonstrate detection and false alarm performance. Demonstrate benefit to the analyst.

PHASE III: Integrate with ground stations.

REFERENCES:
1. M. Bryant, P. Johnson, B. M. Kent, M. Nowak, S. Rogers (2008), "LAYERED SENSING, Its Definition, Attributes, and Guiding Principles for AFRL Strategic Technology Development", Sensors Directorate, Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio.
2. D. Deptula (2012), "New ISR Concepts for the 21st Century", remarks by David A. Deptula, Lt Gen USAF (Ret), CEO, Mav6, LLC, Mid East ISR Symposium, Abu Dhabi, UAE, 5 Feb 2012. http://edgefighter.com/2012/02/12/new-isr-concepts-for-the-21st-century/
3. M.S. Cromer, W.G. McDonough, J.A. Conway (2009), "Leading the Way in Geospatial Intelligence", Military Intelligence Professional Bulletin. http://findarticles.com/p/articles/mi_m0IBS/is_3_35/ai_n57942959/