Orbital Insight Synthetic Data for Computer Vision in Remote Sensing
Phone: (540) 358-5487
Phone: (415) 264-1938
As the Intelligence Community’s experts in geospatial analytics, the National Geospatial-Intelligence Agency (NGA) has a long-standing interest in research and development for analyzing overhead imagery. With the commercialization of space, the need to analyze ever-greater volumes of imagery far more quickly continues to grow. Fortunately, recent advances in computer vision (CV) have made this possible.

The field of computer vision has been revolutionized by deep learning in recent years. Deep learning models have achieved state-of-the-art performance across the full range of computer vision tasks, including object detection, semantic segmentation, and image generation. They work by training large parameterized networks, such as convolutional neural networks (CNNs), on datasets that demonstrate how a task should be performed. But these advances come at a significant cost: a critical need for the labeled data required to train these models. At present, state-of-the-art algorithms cannot reach high performance from unlabeled data alone; they require large amounts of painstakingly labeled data, creating a labeled-data dependency problem. In the best case, this problem can be solved by manually labeling data, though that is time-consuming, expensive, and error-prone. In the worst case, which is the situation with uncommon objects, dataset curation is even harder because there is not enough available data to label.

Orbital Insight and DADoES propose to collaborate with NGA to solve the labeled-data dependency through the use of synthetic data, with the ultimate goal of creating an object detection model that works on a customer-specified uncommon object in aerial and overhead imagery. As part of this effort, we propose to methodically incorporate synthetic data into our computer vision model development pipeline.
First, we will create a 3D model of the object of interest designated by the government sponsor. Second, we will customize and calibrate a set of rendering and physics-based simulation tools to accurately represent the model as it would appear to the sensor. Third, we will procedurally generate synthetic images with controlled variations, enabling systematic experimentation with synthetic datasets. Fourth, we will apply domain adaptation to the synthetic images to further increase the realism of the scenes. Finally, performance results from models trained on different combinations of real and synthetic data will tell us which specific synthetic dataset parameters were useful. Using our proposed methods, the next generation of computer vision models will be more accurate, reliable, and consistent than ever before. If synthetic data can perform nearly as well as real data, it will make models for uncommon objects cheaper to build and reduce the costs of algorithm development by 75%.
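To make the "controlled variations" step concrete, the procedural generation stage can be sketched as enumerating a factorial grid of rendering parameters, so that each synthetic dataset varies factors systematically. The parameter names and value ranges below are illustrative assumptions for overhead imagery, not part of the proposal's actual tooling:

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical rendering parameters for an overhead-imagery simulator.
# The field names and value ranges are assumptions for illustration only.
@dataclass(frozen=True)
class RenderConfig:
    sun_elevation_deg: float   # lighting variation
    off_nadir_deg: float       # sensor viewing angle
    gsd_m: float               # ground sample distance (sensor resolution)
    background: str            # scene context around the object

def controlled_variations():
    """Enumerate a full factorial grid of rendering parameters so that
    downstream experiments can isolate which factors help training."""
    sun_angles = [15.0, 45.0, 75.0]
    off_nadir = [0.0, 20.0, 40.0]
    gsd = [0.3, 0.5, 1.0]
    backgrounds = ["desert", "urban", "forest"]
    return [RenderConfig(s, o, g, b)
            for s, o, g, b in product(sun_angles, off_nadir, gsd, backgrounds)]

configs = controlled_variations()
print(len(configs))  # 3 * 3 * 3 * 3 = 81 controlled parameter combinations
```

Each resulting configuration would drive one rendering pass; training detection models on subsets of this grid (and on mixes with real data) is what lets the experiments attribute performance differences to specific synthetic dataset parameters.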
* Information listed above is at the time of submission. *