
Miniaturization of Neural Networks for Geospatial Data



OBJECTIVE: Apply and develop techniques for miniaturizing neural networks designed specifically for geospatial data. Identify and quantify the accuracy loss, speed improvements, and size savings of any models that are studied.

DESCRIPTION: As neural networks and deep learning continue to show improvements on standard benchmark datasets, these models will see increasing use in the geospatial community. Within the neural network literature, there is an area of study that focuses on the miniaturization of these models. The goal of this process is to approximate a given high-performing, complex model with another model that is smaller and runs faster, without a significant decrease in accuracy. Much of this work is motivated by the goal of embedding deep models on mobile systems, and much of the most advanced technology on mobile phones today is a result of these advances. NGA seeks the application and development of these miniaturization techniques to overhead imagery models. There have been recent advances in geospatial benchmarking with the creation of labeled overhead imagery corpora such as Cars Overhead With Context and SpaceNet. Models that have performed well on these datasets are based on existing neural networks, modified and trained on these corpora. A natural parallel to these advances is to ask whether these high-performing models can be mobilized in a way that considers the geospatial aspects of their application. If methods are applied and developed in Phase I that effectively shrink a neural network model and speed up inference without degrading accuracy on standard geospatial datasets, NGA would like to understand the effects of those efficiencies on a cloud computing infrastructure that supports automated tipping to imagery analysts. The research from this SBIR would be influential in determining NGA's future architecture for serving analysts with the outputs of neural networks in a fast and scalable environment.
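To make the miniaturization goal concrete, the sketch below illustrates two of the techniques from the cited literature: magnitude-based weight pruning and linear 8-bit quantization, in the spirit of reference 1. This is a minimal NumPy illustration on a single random weight matrix, not a full pipeline; real deep-compression workflows retrain between pruning steps and use learned codebooks rather than a single per-tensor scale.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (pruning, ref. 1)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def quantize_int8(weights):
    """Linearly quantize a tensor to int8 plus one floating-point scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

# Illustrative use on a random 64x64 "layer": prune 90%, quantize survivors.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
w_pruned = magnitude_prune(w, sparsity=0.9)
q, scale = quantize_int8(w_pruned)
w_restored = q.astype(np.float64) * scale  # dequantized approximation
```

Stored as int8 values plus a scale, the surviving weights take roughly a quarter of the memory of 32-bit floats, and the zeroed entries can be stored in a sparse format; the dequantization error per weight is bounded by half the quantization step.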

PHASE I: Research and identify various methods for taking existing neural networks and miniaturizing them for standard overhead imagery datasets. Report on the trade-offs of the techniques researched and developed, comparing each original network with its miniaturized counterpart.
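One candidate method from the cited literature is knowledge distillation (reference 2), in which a small student network is trained to match a large teacher's temperature-softened outputs. The sketch below is a minimal NumPy rendering of the distillation loss only; an actual Phase I effort would use it to train a student model on overhead imagery, and the temperature and blending values shown are illustrative defaults, not prescribed settings.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T produces softer distributions."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hinton-style distillation objective (ref. 2): cross-entropy on the
    hard labels blended with cross-entropy against the teacher's softened
    outputs, the latter scaled by T**2 to keep gradient magnitudes comparable."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft_ce = -np.sum(p_teacher * np.log(p_student + 1e-12), axis=-1)
    hard_probs = softmax(student_logits)
    hard_ce = -np.log(hard_probs[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * hard_ce + (1.0 - alpha) * T**2 * soft_ce))
```

When the student's logits equal the teacher's and only the soft term is used at T=1, the loss reduces to the entropy of the teacher's output distribution, which is its minimum for that term.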

PHASE II: Develop a method to scale these techniques to a broader class of neural models for geospatial data. Demonstrate the benefits of miniaturized geospatial networks in distributed systems. 

PHASE III: These methods would be useful to a broad range of commercial and civil applications that require efficient, high-performing inference models able to run in compute-constrained environments; examples include overhead imagery analytics platforms, law enforcement, and earth systems monitoring.


REFERENCES:

1: Song Han, Huizi Mao, and William J. Dally. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. CoRR, abs/1510.00149, 2015.

2: G. Hinton, O. Vinyals, and J. Dean. Distilling the Knowledge in a Neural Network. ArXiv e-prints, March 2015.

3: Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. CoRR, abs/1704.04861, 2017.

4: Forrest N. Iandola, Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. CoRR, abs/1602.07360, 2016.

KEYWORDS: Computer Vision; Overhead Imagery; Mobile Networks; Machine Learning; Optimization 


Samuel Dooley 

(571) 557-7312 
