
Video to Feature Data Association and Geolocation

Description:

TECHNOLOGY AREA(S): Info Systems 

OBJECTIVE: Using video data (potentially 360-degree video) from sensors on ground vehicles with approximate geolocation information, develop algorithms to use foundational features (such as building footprints, corners of road intersections, and other labeled objects) to establish associations between segments in the observed imagery and object features, and thereby greatly improve the time-varying geolocation accuracy of the sensor and to expand the number and quality of the associations. 

DESCRIPTION: Navigation systems use GPS and other approaches to obtain approximate geolocation, which can be used to retrieve foundation data from maps, such as exist in OpenStreetMap or Google Maps, among many other public-domain sources. However, GPS and other radio sources can drop out, or be denied, resulting in inaccurate geolocation and potentially causing navigation errors. Finding associations between observed objects in the environment and features that have been pre-collected and stored in maps or geographic information systems can be used to refine estimates of geolocation, maintain tracks and “propriocentricity,” and assist in fine-precision geolocation. Depending on the richness of the foundation features, it might be possible to navigate precisely and at high speed, provided the location services are supplemented with obstacle avoidance.

This topic is concerned with finding associations between objects observed in imagery, collected as a time sequence, and foundation data that is practically and reasonably available. That foundation data may include building footprints, descriptions of the buildings or shops, 3-D descriptions of the buildings, labeled features such as road signs, fire hydrants, and road layouts, and other information that a human would normally use to navigate the environment, whether by consulting a map or from memory. It is assumed that the foundation data is accurate and that points within the map are accurately geolocated in advance. An automated system that can efficiently find such associations would be able to greatly improve its geolocation accuracy, using calibrated camera parameters and the associations to points in the map data. This would facilitate navigation at higher speeds in more complex environments. 
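As a minimal illustration of the association step described above, the sketch below matches object detections (projected to approximate ground coordinates using the sensor's rough GPS fix) against geolocated map features by gated nearest-neighbor matching. The function name, coordinates, and the 5-meter gate are illustrative assumptions, not part of the topic; a real system would use a richer feature descriptor and a more robust assignment method.

```python
import numpy as np

def associate(observed_xy, map_xy, gate=5.0):
    """Return (obs_index, map_index) pairs whose separation is within
    `gate` meters; each map feature is used at most once."""
    pairs = []
    used = set()
    for i, obs in enumerate(observed_xy):
        d = np.linalg.norm(map_xy - obs, axis=1)  # distance to every map feature
        j = int(np.argmin(d))                     # closest map feature
        if d[j] <= gate and j not in used:
            pairs.append((i, j))
            used.add(j)
    return pairs

# Three map features; the third observation has no plausible match.
map_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
obs_xy = np.array([[0.5, -0.3], [9.6, 0.4], [50.0, 50.0]])
print(associate(obs_xy, map_xy))  # -> [(0, 0), (1, 1)]
```

The gate rejects spurious associations when the approximate geolocation is poor, which matters once the refined position estimate is fed back to search for further associations.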
While lidar sensors or radio frequency sensors might be able to further improve the ability of the system to find associations, this topic is concerned with finding associations using EO video cameras. Further, while cross-correlations with “street view” data might also assist in finding associations, the feature data is assumed to consist of typical map data, which can include points, vectors, and shapes, and will not include “street view” images. 
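Once image segments from the EO camera have been associated with geolocated map points, the sensor's position can be refined from the measured bearings to those points. The sketch below solves the 2-D case in the least-squares sense: each bearing theta_i to a landmark (x_i, y_i) constrains the position to a line, sin(theta_i)*x - cos(theta_i)*y = sin(theta_i)*x_i - cos(theta_i)*y_i. This is a hypothetical simplification (bearings only, no camera calibration model); names are illustrative.

```python
import numpy as np

def refine_position(landmarks, bearings):
    """Least-squares 2-D position from bearings (radians, measured at the
    unknown position) to geolocated landmarks, shape (N, 2)."""
    s, c = np.sin(bearings), np.cos(bearings)
    A = np.column_stack([s, -c])                # one line constraint per landmark
    b = s * landmarks[:, 0] - c * landmarks[:, 1]
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Synthetic check: compute exact bearings from a known position to three
# map features, then recover that position.
true = np.array([3.0, 4.0])
lms = np.array([[10.0, 4.0], [3.0, 12.0], [-5.0, -2.0]])
bearings = np.arctan2(lms[:, 1] - true[1], lms[:, 0] - true[0])
print(refine_position(lms, bearings))  # -> close to [3. 4.]
```

With noisy bearings the same least-squares formulation averages the error across landmarks, so geolocation accuracy improves as more associations are found, which is the feedback loop the topic describes.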

PHASE I: Identify and collect sample data sets for proof-of-concept and future testing purposes, and demonstrate viability of approaches to automated extraction of points and segments for finding associations, and to improving geolocation accuracy when associations can be found. 

PHASE II: Develop algorithms and software to access feature foundation data, establish associations between observed data and features, improve geolocation accuracy, and extend and improve associations based on those improvements, and test and evaluate the prototype system. 

PHASE III: Apart from military applications that assist Special Forces, peacekeeping forces, and others navigating unfamiliar territory, vendors of navigation systems for vehicles, smart phones, and mobile devices for vehicular and personal navigation will benefit from efficient, lightweight software capable of providing high-accuracy geolocation even when GPS sources are inadequate. 

REFERENCES: 

1: H. Eugster and S. Nebiker. UAV-Based Augmented Monitoring--Real-Time Georeferencing and Integration of Video Imagery with Virtual Globe. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B1. Beijing 2008

2: Keith Yu Kit Leung, Christopher M. Clark, Jan P. Huissoon, "Localization in urban environments by matching ground level video images with an aerial image," IEEE International Conference on Robotics and Automation, 2008

KEYWORDS: Computer Vision, Autonomous Vehicle Navigation, Dead-reckoning, Spatial Data 

CONTACT(S): 

Steve Smith 

(571) 558-2944 

Stephen.J.Smith@nga.mil 
