Low-Shot Detection in Remote Sensing Imagery

Description:

TECHNOLOGY AREA(S): Info Systems, Sensors, Electronics, Battlespace 

OBJECTIVE: Develop novel techniques to identify and locate uncommon targets in overhead and aerial imagery, specifically when few prior examples are available. Initial focus will be on panchromatic electro-optical (EO) imagery with a subsequent extension to multi-spectral imagery (MSI) and/or synthetic aperture radar (SAR) imagery. 

DESCRIPTION: The National Geospatial-Intelligence Agency (NGA) produces timely, accurate and actionable geospatial intelligence (GEOINT) to support U.S. national security. To remain effective in today's environment of simultaneous rapid growth in available imagery data and in the number and complexity of intelligence targets, NGA must continue to improve its automated and semi-automated methods. Of particular concern is furthering NGA's ability to identify and locate uncommon intelligence targets in large search regions. Recent advances in computer vision due to deep learning have dramatically improved the state of the art in techniques such as object detection and semantic segmentation, to include scenarios where little data is available for training (i.e., low-shot). While these approaches have shown encouraging results on NGA problems, little research has been performed that is specific to the remote sensing domain, a deficiency this effort aims to address. It is the intention of the government to provide labeled data (pending availability and releasability) to assist with algorithm development and capability demonstration; however, the performer is encouraged to identify and use imagery from other sources as well. Baseline performance will be established by comparison against a state-of-the-art detection network (e.g., Feature Pyramid Networks, YOLO9000) pre-trained on ImageNet or COCO. Performance evaluation will be conducted using metrics derived from precision-recall curves. 
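
For illustration only, the sketch below shows one common way such precision-recall metrics are derived for a detector: candidate detections are matched to ground-truth boxes by intersection-over-union (IoU), and average precision is taken as the area under the resulting precision-recall curve. The box format, the 0.5 IoU threshold, and the function names are assumptions made for this sketch, not requirements of the topic.

import numpy as np

def iou(box_a, box_b):
    # Boxes as [x1, y1, x2, y2] in pixel coordinates.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(detections, ground_truth, iou_thresh=0.5):
    # detections: list of (confidence_score, box); ground_truth: list of boxes.
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    matched = [False] * len(ground_truth)
    tp, fp = [], []
    for _, box in detections:
        overlaps = [iou(box, gt) for gt in ground_truth]
        best = int(np.argmax(overlaps)) if overlaps else -1
        if best >= 0 and overlaps[best] >= iou_thresh and not matched[best]:
            matched[best] = True          # true positive: first match to this truth box
            tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)    # false positive: low IoU or duplicate match
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    recall = tp / max(len(ground_truth), 1)
    precision = tp / np.maximum(tp + fp, 1e-9)
    # Average precision as the area under the precision-recall curve.
    recall = np.concatenate(([0.0], recall))
    precision = np.concatenate(([1.0], precision))
    return float(np.sum((recall[1:] - recall[:-1]) * precision[1:]))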

PHASE I: Address deep learning low-shot detection in panchromatic EO with consideration for extension to MSI and/or SAR in Phase II. Phase I will result in a proof-of-concept algorithm suite for low-shot detection and thorough documentation of the conducted experiments and results in a final report. 
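
As a non-binding illustration of one possible Phase I starting point, the following sketch fine-tunes a COCO-pretrained Feature Pyramid Network detector (here, torchvision's Faster R-CNN implementation) on a small set of labeled panchromatic chips. The class count, data loader, hyperparameters, and the use of torchvision itself are assumptions for the sketch, not requirements of the topic.

import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_low_shot_detector(num_classes):
    # Start from a detector pre-trained on COCO, then replace the box predictor
    # head so only the new, rarely observed target classes must be learned.
    # (torchvision >= 0.13 is assumed for the weights="DEFAULT" argument.)
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def fine_tune(model, few_shot_loader, epochs=20, lr=5e-4):
    # few_shot_loader yields (list_of_image_tensors, list_of_target_dicts);
    # each target dict holds "boxes" and "labels" for one image chip. Single-band
    # panchromatic chips can be replicated to three channels to match the
    # pre-trained backbone's expected input.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9, weight_decay=1e-4)
    for _ in range(epochs):
        for images, targets in few_shot_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss_dict = model(images, targets)   # dict of detection losses
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model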

PHASE II: Extend Phase I capabilities to MSI and/or SAR, to include researching methodologies to simultaneously exploit multiple modalities and/or developing zero-shot capabilities (i.e., no prior examples available). Develop enhancements to address deficiencies identified in Phase I or when processing MSI and/or SAR data. Deliver updates to the proof-of-concept algorithm suite and technical reports. Phase II will result in a prototype end-to-end implementation of the Phase I low-shot detection system extended to process MSI and/or SAR imagery and a comprehensive final report. 
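
As one hedged illustration of a possible Phase II multi-modal direction (not a prescribed approach), the sketch below performs simple early fusion of co-registered modalities by stacking their channels and learning a 1x1 projection back to three channels, so a conventional pre-trained detector backbone can be reused unchanged. The modality set, channel counts, and the fusion scheme are assumptions.

import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    # Stack co-registered modality tensors along the channel axis and project
    # to three channels so a conventional pre-trained backbone can be reused.
    def __init__(self, in_channels):
        super().__init__()
        self.project = nn.Conv2d(in_channels, 3, kernel_size=1)

    def forward(self, modalities):
        # modalities: list of tensors shaped (B, C_i, H, W), spatially co-registered.
        fused = torch.cat(modalities, dim=1)
        return self.project(fused)

# Example: 1-band pan EO + 4 MSI bands + 1 SAR amplitude channel -> 3 channels.
fusion = EarlyFusion(in_channels=6)
pan = torch.rand(2, 1, 256, 256)
msi = torch.rand(2, 4, 256, 256)
sar = torch.rand(2, 1, 256, 256)
pseudo_rgb = fusion([pan, msi, sar])   # (2, 3, 256, 256), fed to the detector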

PHASE III: Technology enabling the automated search for uncommon objects in overhead imagery would be widely applicable across the government and commercial sectors. Military applications include national security, targeting, and intelligence. Commercially, it would apply to urban planning, geology, anthropology, economics, search and rescue, and other domains that benefit from identifying objects in overhead imagery. 

REFERENCES: 

1: Hariharan B. and Girshick R. Low-shot Visual Object Recognition by Shrinking and Hallucinating Features. arXiv preprint arXiv:1606.02819v2. 2016 Nov 30.

2: Wang Y. and Hebert M. Learning from Small Sample Sets by Combining Unsupervised Meta-Training with CNNs. In Advances in Neural Information Processing Systems 2016 (pp. 244-252).

3: Lin T. et al. Feature Pyramid Networks for Object Detection. arXiv preprint arXiv:1612.03144. 2016 Dec 9.

4: Xie M., Jean N., Burke M., Lobell D., Ermon S. Transfer Learning from Deep Features for Remote Sensing and Poverty Mapping. arXiv preprint arXiv:1510.00098. 2015 Oct 1.

5: Shrivastava A. et al. Learning from Simulated and Unsupervised Images through Adversarial Training. arXiv preprint arXiv:1612.07828. 2016 Dec 22.

6: Bousmalis K, Silberman N, Dohan D, Erhan D, Krishnan D. Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks. arXiv preprint arXiv:1612.05424. 2016 Dec 16.

7: Redmon J, Farhadi A. YOLO9000: Better, Faster, Stronger. arXiv preprint arXiv:1612.08242. 2016 Dec 25.

8: Lin TY, et al. Microsoft COCO: Common Objects in Context. In European Conference on Computer Vision 2014 Sept 6 (pp. 740-755). Springer International Publishing.

9: Deng J, et al. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on 2009 Jun 20 (pp. 248-255). IEEE.

KEYWORDS: Computer Vision, Machine Learning, Deep Learning, Detection, Segmentation, Low Shot Learning, Image Processing 

CONTACT(S): 

Chris Algire 

(571) 558-1835 

Christopher.V.Algire@nga.mil 
