
Topographic Feature Extraction from Ground-Based Still and Video Imagery



OBJECTIVE: Fully automate the process of extracting topographic features, such as ridge lines, peaks, skylines, and river banks, from imagery taken at or near ground level that shows landscapes, hills, mountain ranges, and/or other topographic features.

DESCRIPTION: NGA has an interest in knowing the earth, whether from satellite imagery or from imagery readily available from cameras and video systems collecting at or close to the earth's surface. With the increasing resolution of consumer-grade cameras and video systems, it becomes possible to extract relevant topographic features with high accuracy from a wide variety of imagery sources. Today, analysts use manual and semi-automated methods to demarcate ridge lines and other features in imagery in order to collect topographic information. A fully automated approach that requires no human intervention, seed points, or manual "clicks" is highly desired. Much research has been done on automating or providing semi-automated capabilities of this type (see references); this topic seeks a transitionable, fully automated system for use in government and commercial applications.

A fully automated system would first examine whether an image contains relevant topographic features, such as a skyline or distant ridge lines. It would examine video data to determine whether extraction can be improved by processing spatio-temporal blocks of data rather than single frames. If the imagery is suitable, it would produce digital representations of topographic features, without any human intervention, labeling each feature according to its type. At a minimum, automated extraction of the skyline under varying meteorological conditions, as well as of distant ridge lines in the foreground (with higher mountains or peaks behind the ridge line) over varying land cover types, is required. Other topographic features can add information about the topography and provide more precise geolocation of features; offerors may therefore propose additional feature types whose extraction they intend to fully automate.
Various confounding factors should be considered, including partial obscuration of features; intervening structures such as buildings and towers; clouds and low-lying fog; conditions that vary within a video segment; and range estimation, so that near features, where perspective causes the occlusion line to diverge from the actual topographic ridge, can be discarded. Proposed evaluation criteria, and methods for obtaining ground-truthed imagery for both development and testing, should be specified in the proposal.
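As an illustration of the kind of fully automatic extraction described above, one common formulation treats the skyline as a single row index per image column and finds the minimum-cost left-to-right path through the image's vertical gradient with dynamic programming, so that strong sky/terrain edges are favored and abrupt jumps between neighboring columns are penalized. The sketch below is a minimal NumPy illustration under those assumptions, not a reference implementation; the function name `extract_skyline` and the smoothness weight are choices made for this example.

```python
import numpy as np

def extract_skyline(gray, smoothness=2.0):
    """Estimate a skyline as one row index per column via dynamic programming.

    gray: 2-D float array (H x W) holding a grayscale image.
    The path cost rewards rows with a strong vertical intensity gradient
    (a candidate sky/terrain boundary) and penalizes large row jumps
    between adjacent columns, weighted by `smoothness`.
    """
    grad = np.abs(np.diff(gray, axis=0))   # vertical gradient, (H-1) x W
    h, w = grad.shape
    cost = -grad                           # strong edges -> low cost
    acc = np.empty_like(cost)              # accumulated path cost
    acc[:, 0] = cost[:, 0]
    back = np.zeros((h, w), dtype=int)     # best predecessor row per cell
    rows = np.arange(h)
    for c in range(1, w):
        # trans[r, p]: cost of reaching row r in column c from row p in c-1
        trans = acc[:, c - 1][None, :] + smoothness * np.abs(rows[:, None] - rows[None, :])
        back[:, c] = np.argmin(trans, axis=1)
        acc[:, c] = cost[:, c] + trans[rows, back[:, c]]
    # backtrack the minimum-cost path from the cheapest endpoint
    skyline = np.empty(w, dtype=int)
    skyline[-1] = int(np.argmin(acc[:, -1]))
    for c in range(w - 1, 0, -1):
        skyline[c - 1] = back[skyline[c], c]
    return skyline

# Toy example: bright "sky" above row 25, dark "terrain" below it.
img = np.ones((40, 30))
img[25:, :] = 0.0
print(extract_skyline(img))   # the boundary row for every column
```

In a deployed system this step would run on a smoothed grayscale frame and would be preceded by the screening check described above, rejecting images in which no coherent boundary path exists.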

PHASE I: Identify features to be extracted, identify example data sets with ground-truth information, and design an approach to automated feature extraction, with proof-of-concept demonstrations.

PHASE II: Obtain development data and, separately, test data; develop algorithmic methods that fully automate the extraction of topographic features, including skylines and ridge lines; and evaluate performance under a variety of operating conditions. Generate and/or collect a broad body of data for training and testing, expanding the set of target types and objects and including clutter data, and develop performance figures under varying operating conditions and sensor types, taking possible confounding factors into account.
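The solicitation leaves evaluation criteria to the offeror, but a natural per-image measure for skyline and ridge-line extraction compares the predicted feature against a ground-truthed one column by column. The helper below, including its name and tolerance parameter, is a hypothetical sketch of such a metric:

```python
import numpy as np

def skyline_error(pred, truth, tol=5):
    """Per-column vertical deviation between predicted and ground-truth skylines.

    pred, truth: sequences of row indices, one per image column.
    Returns (mean absolute error in pixels, fraction of columns within tol pixels).
    """
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    dev = np.abs(pred - truth)
    return float(dev.mean()), float((dev <= tol).mean())
```

Aggregating both numbers over a held-out test set, stratified by operating condition (weather, land cover, sensor type), would yield the kind of performance figures Phase II calls for.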

PHASE III: NGA has an immediate need for automated capabilities as described, which can be expanded to include other feature types. Dual-use applications serve cartographers, surveyors, land-use developers, and others who can make use of these capabilities. Developers might consider licensing the software, but might also consider providing services against large databases of imagery supplied by commercial companies.


1: "Horizon Lines in the Wild", Scott Workman, Menghua Zhai, Nathan Jacobs

2: "Joint Semantic Segmentation and Depth Estimation with Deep Convolutional Networks", A. Mousavian, H. Pirsiavash

3: "Transient Attributes for High-Level Understanding and Editing of Outdoor Scenes", P.-Y. Laffont, Z. Ren, X. Tao, C. Qian, J. Hays

4: "Learning Deep Features for Scene Recognition Using Places Database", B. Zhou, A. Lapedriza, J. Xiao, A. Torralba

5: "Recognizing Landmarks in Large-Scale Social Image Collections", D.J. Crandall, Y. Li, S. Lee, D.P. Huttenlocher

6: "Predicting Good Features for Image Geo-Localization Using Per-Bundle VLAD", Hyo Jin Kim, Enrique Dunn, Jan-Michael Frahm, The IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1170-1178

KEYWORDS: Image Processing, Algorithm, Infrared, IR, Sensors 


Duncan McCarthy 

(571) 557-6240 
