Blending Ground View and Overhead Models

TECHNOLOGY AREA(S): Info Systems, Human Systems 

OBJECTIVE: Develop techniques that use information from deep models trained on ground-view images to learn representations of similar objects for overhead models. Demonstrate the ability to teach a neural network an object class for overhead imagery from ground-view examples. 

DESCRIPTION: The current approach to automated classification and detection of objects in imagery is to create a large labeled dataset on which models can be trained. However, creating training datasets is expensive, time-consuming, and imprecise, and it is not feasible to create a large labeled dataset for every class of object that an automated system needs to detect. This is particularly true in the overhead geospatial domain, where the quantity and quality of labeled data are low. There are, however, large, high-quality labeled datasets for ground-view images, such as ImageNet and MS COCO. NGA seeks research advances that leverage the representations of objects in these ground-view models to train classifiers and detectors for overhead imagery. The essential problem facing overhead geospatial detection systems is a lack of training data; the geospatial community should be able to leverage the innovation of the ground-based vision community to speed the development of overhead models. NGA would like an algorithm capable of leveraging images of an object taken from the ground to train a model to detect that object in overhead imagery. This algorithm should take less time and require less data to train than a comparable algorithm trained on a large overhead corpus. While the Government may furnish data for this project, proposals should be written against publicly available datasets accessible on the internet.
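
As a non-authoritative illustration of the kind of approach sought, the sketch below fine-tunes an ImageNet-pretrained (ground-view) backbone on a small corpus of overhead image chips, training only a new classifier head. It assumes a PyTorch/torchvision (>= 0.13) environment; the dataset path and class count are hypothetical placeholders, not Government-furnished materials.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets
    from torchvision.models import resnet50, ResNet50_Weights

    # Ground-view representation: a backbone pretrained on ImageNet.
    weights = ResNet50_Weights.IMAGENET1K_V2
    model = resnet50(weights=weights)

    # Freeze the pretrained representation; only the new head will train.
    for p in model.parameters():
        p.requires_grad = False

    num_overhead_classes = 5  # hypothetical
    model.fc = nn.Linear(model.fc.in_features, num_overhead_classes)

    # Small overhead corpus laid out as root/<class_name>/<chip>.png
    # ("overhead_chips/train" is a hypothetical path).
    train_set = datasets.ImageFolder("overhead_chips/train",
                                     transform=weights.transforms())
    loader = DataLoader(train_set, batch_size=32, shuffle=True)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for epoch in range(5):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

Freezing the backbone preserves the object representation learned from ground-view data, so only a small number of overhead examples are needed to fit the new head; whether such features actually bridge the ground-to-overhead viewpoint gap is the core research question of this topic.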

PHASE I: Research techniques to accomplish cross-view transfer learning of novel object classes. Demonstrate an initial instance of a successful learning procedure, such as the sketch below.
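
One possible Phase I demonstration, sketched below under the same assumptions (PyTorch/torchvision, hypothetical paths and class names), is a prototype-based test: embed a handful of ground-view exemplars per class with a frozen ImageNet backbone, then label overhead chips by nearest class prototype in feature space.

    import torch
    import torch.nn as nn
    from torchvision import datasets
    from torchvision.models import resnet50, ResNet50_Weights

    weights = ResNet50_Weights.IMAGENET1K_V2
    model = resnet50(weights=weights)
    model.fc = nn.Identity()  # expose 2048-d features instead of class logits
    model.eval()

    @torch.no_grad()
    def class_prototypes(folder):
        # Mean L2-normalized feature per class; folder uses ImageFolder layout.
        ds = datasets.ImageFolder(folder, transform=weights.transforms())
        feats = [[] for _ in ds.classes]
        for image, label in ds:
            f = model(image.unsqueeze(0)).squeeze(0)
            feats[label].append(f / f.norm())
        protos = torch.stack([torch.stack(fs).mean(0) for fs in feats])
        return ds.classes, protos / protos.norm(dim=1, keepdim=True)

    # A handful of ground-view exemplars per novel class (hypothetical path).
    classes, prototypes = class_prototypes("ground_view_exemplars")

    @torch.no_grad()
    def label_overhead_chip(chip_tensor):
        # chip_tensor: a single preprocessed overhead chip, shape [3, 224, 224].
        f = model(chip_tensor.unsqueeze(0)).squeeze(0)
        similarities = prototypes @ (f / f.norm())  # cosine similarity
        return classes[int(similarities.argmax())]

If nearest-prototype accuracy on overhead chips is above chance, the ground-view representation is carrying cross-view signal worth refining in Phase II.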

PHASE II: Develop further refined techniques that require less data and achieve higher performance metrics. Create a software package that implements the basic research techniques discovered through this work.

PHASE III: These developments would be applicable to a variety of commercial applications that seek innovative and efficient ways to leverage overhead data for automated discovery of objects. Examples include financial-market analysts interested in geospatial indicators, security providers with novel signatures in need of discovery, and agricultural modelers that need to identify data trends.

REFERENCES: 

1: DigitalGlobe, CosmiQ Works, NVIDIA, and Amazon Web Services team up to launch SpaceNet open data initiative. http://investor.digitalglobe.com/phoenix.zhtml?c=70788&p=irol-newsArticle&ID=2197375

2: Hani Altwaijry, Eduard Trulls, James Hays, Pascal Fua, and Serge Belongie. Learning to match aerial images with deep attentive architectures. June 2016.

3: Tianqi Chen, Ian J. Goodfellow, and Jonathon Shlens. Net2Net: Accelerating learning via knowledge transfer. CoRR, abs/1511.05641, 2015.

4: Tsung-Yi Lin, Yin Cui, Serge Belongie, and James Hays. Learning deep representations for ground-to-aerial geolocalization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.

5: Nam N. Vo and James Hays. Localizing and orienting street views using overhead imagery. CoRR, abs/1608.00161, 2016.

KEYWORDS: Computer Vision; Overhead Imagery; Transfer Learning; Machine Learning; Cross-view Representation 

CONTACT(S): 

Samuel Dooley 

(571) 557-7312 

Samuel.W.Dooley@nga.mil 
