TECHNOLOGY AREA(S): Electronics
OBJECTIVE: Develop spatiotemporal processing and exploitation for full motion Electro-Optic Infrared (EO/IR) sensor for on-the-move real-time detection of in-road and road-side explosive hazard and threat indicators for route clearance application.
DESCRIPTION: Traditionally, EO/IR sensor processing and exploitation of full-motion video has approached the automated target detection problem as a cascade of image processing tasks: detection of locations/regions of interest (ROIs) followed by tracking of these ROIs over a sequence of images to build confidence before a decision. While such an approach is reasonable for sensors operating at low frame rates (such as hyperspectral sensors), there is an opportunity and a need for more integrated spatial and temporal exploitation of data from EO/IR sensors that readily provide full-motion video at 30 frames per second and higher. There is rich target-specific (structural and spectral) information in the temporal evolution of the signature in full-motion video captured over multiple frames from a gradually changing perspective. The traditional approach, centered on spatial image exploitation and temporal tracking of detections, cannot fully exploit these spatiotemporal characteristics of the threat signature. Image processing and machine vision approaches such as super-resolution imaging and structure from motion have sought to exploit this temporal content in full-motion video to tease out additional information and improve the quality/content of the image frame. However, such pre-processing steps are generally computationally expensive and still require traditional image-based detection methods for automated exploitation. Highly varying imaging conditions, an ever-changing clutter environment and uncertain threat scenarios further limit the suitability of such approaches for challenging on-the-move real-time detection of in-road and road-side explosive hazard and threat indicators for route clearance applications in both rural and urban scenarios on improved or unimproved roads.
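The detect-then-track cascade described above can be sketched in a few lines. This is a minimal illustration only, not a fielded route clearance algorithm: the intensity threshold, association gate and persistence criterion are assumed values, and connected-component labeling stands in for a real per-frame ROI detector.

```python
import numpy as np
from scipy import ndimage

DETECT_THRESH = 0.5   # per-frame intensity threshold (assumed value)
GATE_RADIUS = 3.0     # max centroid distance for frame-to-frame association (assumed)
CONFIRM_HITS = 3      # frames of persistence before a detection is declared (assumed)

def detect_rois(frame):
    """Per-frame spatial detection: return centroids of above-threshold blobs."""
    labels, n = ndimage.label(frame > DETECT_THRESH)
    return [np.array(c) for c in ndimage.center_of_mass(frame, labels, range(1, n + 1))]

def detect_then_track(frames):
    """Associate per-frame ROIs across frames and accumulate confidence."""
    tracks = []  # each track: {"pos": last centroid, "hits": frames observed}
    for frame in frames:
        for roi in detect_rois(frame):
            # greedy nearest-gate association; unmatched ROIs start new tracks
            for tr in tracks:
                if np.linalg.norm(tr["pos"] - roi) < GATE_RADIUS:
                    tr["pos"], tr["hits"] = roi, tr["hits"] + 1
                    break
            else:
                tracks.append({"pos": roi, "hits": 1})
    # a decision is made only after confidence builds over multiple frames
    return [tr for tr in tracks if tr["hits"] >= CONFIRM_HITS]
```

A persistent blob is confirmed after a few frames, while single-frame clutter is rejected; the sketch makes explicit why the cascade cannot use signature evolution across frames, only detection recurrence.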
On-the-move real-time detection will require tools, techniques and a video processing architecture to identify and efficiently capture robust spatiotemporal features and feature-flow characteristics that may facilitate reliable detection of threats and threat signatures. Further, these threat signatures may occur at different (and often a priori unknown) spatial and temporal scales. While physics-based features and feature-flow characteristics are particularly interesting for gaining insight into and evaluating a technique, they are often hard to come by for unstructured tasks. More recent advances in learning algorithms, flux-tensor processing and deep-learning networks may provide an opportunity to investigate the viability and suitability of such spatiotemporal detection and exploitation for the route clearance application. While on-the-move detection of in-road and road-side threats from ground-based and low-flying airborne EO/IR sensors is of primary interest, where applicable, person, object and vehicle detection and tracking, and human activity detection and characterization, will also be of interest from the perspective of threat indicators.
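As one concrete example of an integrated spatiotemporal feature, the trace of the flux tensor aggregates squared spatiotemporal intensity derivatives over a local window and responds to moving structure without explicit tracking. The sketch below is a minimal NumPy/SciPy illustration under simplifying assumptions (grayscale video as a (T, H, W) array, finite-difference derivatives, a small box averaging window); it is not a complete detection pipeline.

```python
import numpy as np
from scipy import ndimage

def flux_tensor_trace(video, win=3):
    """Trace of the flux tensor for a (T, H, W) grayscale video stack.

    Accumulates (d2I/dxdt)^2 + (d2I/dydt)^2 + (d2I/dt2)^2 over a local
    spatiotemporal box window; 'win' is an assumed window size.
    """
    It = np.gradient(video, axis=0)    # temporal derivative dI/dt
    Ixt = np.gradient(It, axis=2)      # d2I/(dx dt)
    Iyt = np.gradient(It, axis=1)      # d2I/(dy dt)
    Itt = np.gradient(It, axis=0)      # d2I/dt2
    trace = Ixt**2 + Iyt**2 + Itt**2
    # average over a local (win x win x win) spatiotemporal neighborhood
    return ndimage.uniform_filter(trace, size=win)
```

On a synthetic stack containing a moving square and a static square, the trace is large only around the moving structure, illustrating a spatiotemporal feature map computed directly from the video volume rather than from per-frame detections.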
PHASE I: The Phase I goal under this effort is to evaluate the current state of the art, identify processing tools/algorithms, develop a design of the exploitation architecture/pipeline and scope the processing hardware that will allow real-time on-the-move integrated spatiotemporal processing of full-motion video data from EO/IR sensors for detection of in-road and road-side explosive hazard and threat indicators for the route clearance application. A representative set of ground-truthed data for in-road and road-side threats from ground-based EO/IR sensors will be provided to evaluate the feasibility of critical technologies/algorithms. The Phase I final report must summarize the current state of the art in spatiotemporal processing of full-motion video and provide details of the technical approach/algorithms, the conceptual processing architecture/pipeline, the rationale for the selected processing/exploitation architecture, system-level capabilities and limitations, and critical technology/performance risks for the proposed processing and exploitation approach.
PHASE II: The Phase II goal under this effort is to implement and evaluate the viability, utility and expected performance of spatiotemporal features, processing and exploitation techniques for real-time on-the-move detection of in-road and road-side explosive hazard and threat indicators for the route clearance application. The proposed algorithms are expected to be operated and demonstrated in real time (at a specified frame rate that the provider may identify based on processing/computation needs) and on the move (at a suitably useful rate of advance), running on processing hardware that will be installed and integrated on a ground vehicle for a representative mission scenario in a test environment. The Phase II final report will include the detailed system (software and hardware) design, hardware-software interfaces, system capabilities and limitations, a detailed summary of testing and results, lessons learned and critical technology/performance risks.
PHASE III: The Phase III goal is to develop an end-to-end demonstration prototype (including a suitable sensor, processing hardware, detection software and user interface) for on-the-move real-time detection of in-road and road-side explosive hazard and threat indicators for the route clearance application. The sensor system may be mounted on a ground vehicle or an airborne platform and operated and demonstrated in a relevant, variable environment (including mission-relevant variability such as terrain, time of day and climate conditions). The sensor system technology developed under this effort will have high potential for other commercial applications in law enforcement, border security and surveillance, autonomous robotics and self-driving cars.
REFERENCES:
1: K. K. Green, C. Geyer, C. Burnette, S. Agarwal, B. Swett, C. Phan and D. Deterline, "Near real-time, on-the-move software PED using VPEF," in SPIE DSS, Baltimore, MD, 2015.
2: C. Burnette, M. Schneider, S. Agarwal, D. Deterline, C. Geyer, C. Phan, R. M. Lydic, K. Green and B. Swett, "Near real-time, on-the-move multi-sensor integration and computing framework," in SPIE DSS, Baltimore, MD, 2015.
3: B. Ling, S. Agarwal, S. Olivera, Z. Vasilkoski, C. Phan and C. Geyer, "Real-Time Buried Threat Detection and Cueing Capability in VPEF Environment," in SPIE DSS, Baltimore, MD, 2015.
KEYWORDS: Spatiotemporal Processing, Full-motion Video Exploitation, Automated Target Detection, Deep-learning Networks, Feature-flow, Route Clearance, Improvised Explosive Devices
Dr. Sanjeev Agarwal