
Synthetic Data for Computer Vision in Remote Sensing

Description:

TECHNOLOGY AREA(S): Information Systems

OBJECTIVE: Develop novel techniques to identify and locate uncommon targets in overhead and aerial imagery, specifically by leveraging synthetic data to augment data classes for which minimal labeled data samples exist. Initial focus will be on synthetic aperture radar (SAR) and panchromatic electro-optical (EO) imagery, with a subsequent extension to multi-spectral imagery (MSI).

DESCRIPTION: The National Geospatial-Intelligence Agency (NGA) produces timely, accurate, and actionable geospatial intelligence (GEOINT) to support U.S. national security. To continue to exploit the growing volume of imagery data and the increasing complexity of monitored objects quickly and efficiently, NGA must continue to improve its automated and semi-automated methods. Recent advances in computer vision will enable intelligence analysts to identify rare objects within large data volumes by leveraging AI tools. However, these approaches are data driven, which presents a challenge when few measured training samples exist. One approach to this problem is to leverage synthetic data generated for the sensor modality of interest (e.g., Xpatch, DIRSIG, Blender, IRMA). However, when training a network on synthetic data, one must address the network's tendency to learn features distinctive of the synthetic rendering rather than the unique phenomenology of the object of interest. Architectures that generate realistic data, as well as classification networks that are robust to the synthetic-measured gap, are of interest.
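One illustrative way to quantify the synthetic-measured gap described above is a domain discriminator probe: if a simple classifier can easily separate synthetic features from measured features, a downstream network trained on the mixture can shortcut-learn synthetic artifacts instead of target phenomenology. The sketch below is not part of the topic requirements; it assumes chip features have already been extracted into fixed-length vectors, and uses a hand-rolled logistic regression so it has no dependencies beyond NumPy.

```python
import numpy as np

def domain_gap_score(synthetic_feats, measured_feats, epochs=200, lr=0.1):
    """Train a logistic-regression domain discriminator and return its
    training accuracy. Accuracy near 0.5 means the two domains are hard
    to separate (small gap); near 1.0 means a large, easily learned gap.

    synthetic_feats, measured_feats: arrays of shape (n_chips, n_features).
    """
    X = np.vstack([synthetic_feats, measured_feats])
    y = np.concatenate([np.zeros(len(synthetic_feats)),
                        np.ones(len(measured_feats))])
    # Standardize features so plain gradient descent behaves.
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient of log-loss
        b -= lr * np.mean(p - y)
    preds = (X @ w + b) > 0.0
    return float(np.mean(preds == y))
```

In practice the probe would be run on features from the penultimate layer of the classification network; a score well above chance signals that domain-adaptation or refinement techniques are needed before the synthetic chips can safely augment training.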

PHASE I: Design and demonstrate methods that use synthetic SAR or EO data to augment a single few-shot class, using government-furnished SAR or EO image chips. Phase I will deliver a proof-of-concept algorithm suite, all data collected or curated, and thorough documentation of the experiments and results in a final report, with the goal of supporting a strong Phase II proposal.
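The Phase I data flow of augmenting a single rare class can be sketched minimally as follows. This is an assumed illustration, not the required design: the function name and the fixed synthetic-to-measured ratio are hypothetical, and a real pipeline would also handle sensor-specific normalization and label metadata.

```python
import numpy as np

def augment_few_shot_class(measured_chips, synthetic_chips, ratio=4):
    """Combine a handful of measured image chips with up to `ratio`
    synthetic chips per measured chip for one rare target class.

    Returns the stacked chips plus a provenance mask (True = measured),
    so downstream training can weight, balance, or audit the two
    sources, e.g. to monitor whether the network keys on synthetic
    artifacts rather than target phenomenology.
    """
    n_synth = min(len(synthetic_chips), ratio * len(measured_chips))
    chips = np.concatenate([measured_chips, synthetic_chips[:n_synth]])
    is_measured = np.concatenate([
        np.ones(len(measured_chips), dtype=bool),
        np.zeros(n_synth, dtype=bool),
    ])
    return chips, is_measured
```

Keeping the provenance mask alongside the chips is a small design choice that makes later experiments (e.g., reweighting measured samples, or running a domain-gap probe) straightforward without re-curating the data.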

PHASE II: Develop enhancements to address deficiencies identified in Phase I. Extend the Phase I capabilities by scaling the synthetic augmentation to multiple few-shot targets. This phase will also extend classification to detection of objects of interest within a larger scene. Deliver updates to the proof-of-concept algorithm suite and technical reports. Phase II will result in a prototype end-to-end implementation of the Phase I few-shot detection system, extended to process EO, SAR, and MSI imagery, and a comprehensive final report.
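One straightforward way a chip-level classifier extends to detection within a larger scene, as Phase II requires, is exhaustive sliding-window scoring. The sketch below is an assumed illustration only: the mean-brightness `score_fn` used in the usage example is a stand-in for a trained few-shot classifier, and production systems would add non-maximum suppression and multi-scale search.

```python
import numpy as np

def sliding_window_detect(scene, chip_size, stride, score_fn, threshold):
    """Slide a chip-sized window across a 2-D scene, score each window
    with `score_fn` (chip -> confidence), and return (row, col, score)
    tuples for windows whose score exceeds `threshold`.

    Non-maximum suppression is deliberately omitted to keep the
    sketch short; overlapping detections are all returned.
    """
    detections = []
    h, w = scene.shape
    for r in range(0, h - chip_size + 1, stride):
        for c in range(0, w - chip_size + 1, stride):
            chip = scene[r:r + chip_size, c:c + chip_size]
            score = score_fn(chip)
            if score > threshold:
                detections.append((r, c, score))
    return detections
```

Usage with a toy scene: planting an 8x8 bright target at row 20, column 40 of a dark 64x64 scene and scoring windows by mean brightness recovers the target location.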

PHASE III: Deep learning, specifically generative adversarial networks (GANs), has recently enabled the rapid generation of convincing synthetic images and the refinement of simulated images to appear photorealistic. Applying these generative techniques to train computer vision classification and detection models in data-starved scenarios would have widespread applications across the government and commercial sectors.

REFERENCES: 

1: Howe J, Pula K, and Reite A. "Conditional Generative Adversarial Networks for Data Augmentation and Adaptation in Remotely Sensed Imagery." arXiv:1908.13809

KEYWORDS: Computer Vision, Synthetic Data, Generative Adversarial Networks, Machine Learning, Deep Learning, Few Shot Learning, Image Processing 
