
Image data compilation based on accurate registration of sequential frames from a drone

Description:

TECHNOLOGY AREA(S): Electronics 

OBJECTIVE: Provide algorithms capable of compiling information from multiple frames acquired from a moving unmanned aerial vehicle. The algorithm will consolidate video data from an unmanned air vehicle into data vectors that represent ground locations from multiple angles of observation. 

DESCRIPTION: U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate is supporting protection of combat vehicles through detection of obstacles that threaten maneuverability in battlespace environments. We are developing sensors mounted on unmanned air vehicles to detect and localize landmines, natural or manmade topography, and surface obstacles that limit maneuverability or threaten the combat vehicle. To aid in this goal we will collect data over an environment from multiple look angles to improve our knowledge of the objects or conditions at ground locations. We believe that having multiple samples at a ground location will produce better features for subsequent algorithms. A necessary precondition of this process is to sort data from pixelated images of the video into vectors of data associated with particular ground locations, thus capturing a location’s data from multiple positions. We seek assistance in this area from qualified companies who can implement algorithms that will identify terrain features from multiple video frames collected by an airborne imaging system. Algorithms should process this data to determine 3D point clouds and subsequently assign to these points data associated with that location from multiple cameras and look angles. Topography and surface objects in the scene should be accounted for in the algorithms to appropriately register data to particular ground locations. The means of achieving this objective may include, but are not limited to, structure from motion, image transforms and photogrammetry. It is desirable that 3D information about the environment be obtained as an intermediate product. A potential benefit of this level of processing is the ability for the algorithm to discriminate above ground clutter from surface level terrain or obstacles. Likewise, information about the relative attitude of the air platform would be beneficial. 
Contractor data may be used to develop the algorithms, but as these algorithms mature, Government-provided data will be utilized to assess performance on data collected at Government test sites. The algorithm's objective will be to accurately register image data to ground locations from multiple imaging systems mounted to the same platform. This algorithm will preferentially operate using image data alone. Inertial Measurement Unit (IMU), Global Positioning System (GPS), and height data may be brought to bear if significant improvements to output quality are achievable; however, preference is given to methods that operate in GPS-denied environments. Subsequent detection processing of the assembled feature vectors should be considered in the context of improving resolution and registration accuracy, but this solicitation does not encompass advanced automatic target detection development. 
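The core data product described above — a per-ground-location vector of image samples gathered from multiple frames and look angles — can be illustrated with a minimal sketch. This is not a prescribed implementation; it assumes an idealized pinhole camera model with known intrinsics and poses (in practice these would come from the structure-from-motion or photogrammetry step), and the function names (`project`, `gather_observations`) are illustrative only:

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D world points X (N,3) into pixel coordinates for a
    pinhole camera with intrinsics K, rotation R, and translation t."""
    Xc = (R @ X.T + t.reshape(3, 1)).T        # world -> camera frame
    uv = (K @ (Xc / Xc[:, 2:3]).T).T          # perspective divide, apply intrinsics
    return uv[:, :2]

def gather_observations(frames, cameras, ground_pts):
    """For each ground point, collect an intensity sample from every frame
    in which it projects inside the image bounds. Returns one observation
    vector per ground point (one entry per contributing view)."""
    obs = [[] for _ in ground_pts]
    for img, (K, R, t) in zip(frames, cameras):
        uv = project(K, R, t, ground_pts)
        h, w = img.shape
        for i, (u, v) in enumerate(uv):
            ui, vi = int(round(u)), int(round(v))
            if 0 <= ui < w and 0 <= vi < h:
                obs[i].append(img[vi, ui])    # nearest-pixel sample
    return obs

# Toy demo: one 8x8 "terrain image" observed by two identical nadir cameras.
K = np.array([[100.0, 0, 4], [0, 100.0, 4], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
img = np.arange(64, dtype=float).reshape(8, 8)
pts = np.array([[0.0, 0.0, 10.0]])            # ground point 10 m along the optical axis
obs = gather_observations([img, img], [(K, R, t), (K, R, t)], pts)
# obs[0] now holds two intensity samples of the same ground location
```

A fielded version would replace the nearest-pixel lookup with sub-pixel interpolation and account for occlusion by terrain and surface objects, as the description requires.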

PHASE I: This effort should identify algorithms capable of registering image data to ground locations. Preliminary testing of contractor or modeled data should be performed to determine the ground sampling density achievable as a function of standoff distance, magnification and pixel size. The impact of optical distortions, frame rate, range of collection and nadir vs. slanting look angles should be characterized to guide future data collection activities. The final report will include the expected performance as a function of system parameters and sufficient information to determine the necessary conditions for sensors and platforms to achieve accurate image registration to ground locations. 
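The ground sampling density trade study called for above reduces, for an idealized nadir-looking pinhole camera, to a simple relation: the ground footprint of one pixel is the detector pixel pitch scaled by the ratio of standoff distance to focal length. A sketch, with example parameter values that are purely illustrative:

```python
def ground_sample_distance(pixel_pitch_m, focal_length_m, distance_m):
    """Ground footprint of a single pixel for a nadir view (pinhole model):
    GSD = pixel pitch * standoff distance / focal length.
    Off-nadir (slant) look angles enlarge this footprint, roughly by
    1/cos(theta) cross-track, so this is a best-case figure."""
    return pixel_pitch_m * distance_m / focal_length_m

# Illustrative values: 3.45 um pixels, 25 mm focal length, 100 m standoff
gsd = ground_sample_distance(3.45e-6, 25e-3, 100.0)   # ~1.4 cm per pixel
```

This kind of closed-form estimate gives the expected-performance-vs-system-parameter curves the Phase I report asks for, before any distortion or frame-rate effects are layered on.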

PHASE II: This effort will implement the algorithm as software to produce 3D point clouds and associated image intensity data from Government data. Data assessment methods will be developed to determine the accuracy and stability of the algorithm for various controlled data collections as well as field conditions without fiducial targets. The algorithm will show a path to continuous operation at realistic frame rates. The algorithm will be implementable on processing hardware scaled for size, weight, and power appropriate for an unmanned aerial vehicle. The algorithm should be demonstrated on such a processor or demonstrated to specify the processing and computation needs required. Resolution is desired on the order of 10 cm for select regions of interest. Thus, the holistic algorithm may require lightweight pre-screener processing to limit processing to regions requiring improved resolution. The Phase II final report will include detailed system (software and hardware) design, system capability and limitations, a detailed summary of testing and results, lessons learned, and critical technology and performance risks. 
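The lightweight pre-screener mentioned above can be as simple as a tile-level saliency test that flags only the regions worth full-resolution multi-view processing. A minimal sketch, assuming intensity variance as a stand-in for whatever cheap saliency score a contractor might actually choose (the threshold and tile size here are arbitrary):

```python
import numpy as np

def prescreen_tiles(img, tile=4, var_thresh=10.0):
    """Return (row, col) origins of image tiles whose intensity variance
    exceeds a threshold; only these regions of interest would receive
    full-resolution multi-view processing downstream."""
    h, w = img.shape
    rois = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            if img[r:r+tile, c:c+tile].var() > var_thresh:
                rois.append((r, c))
    return rois

# Flat background with one high-contrast patch: only that tile is flagged.
img = np.zeros((8, 8))
img[4:8, 0:4] = np.array([[0, 50], [50, 0]]).repeat(2, 0).repeat(2, 1)
rois = prescreen_tiles(img)   # a single region of interest at (4, 0)
```

Gating the 10 cm resolution goal on such a screen keeps the compute budget compatible with the size, weight, and power limits of a UAV processor.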

PHASE III: The Phase III goal is to develop and implement accurate image registration algorithms on processors for UAVs. This may be offered as a complete product for commercial sale, or as an algorithmic add-on that utilizes Government or commercial sensors and platforms. This phase will improve the accuracy of the methods and produce consistent feature vectors of image data associated with locations in the scene. 

REFERENCES: 

1: Irschara, Arnold, Christopher Zach, Jan-Michael Frahm, and Horst Bischof. "From structure-from-motion point clouds to fast location recognition." In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 2599-2606. IEEE, 2009.

2: Lingua, Andrea, Davide Marenchino, and Francesco Nex. "Performance analysis of the SIFT operator for automatic feature extraction and matching in photogrammetric applications." Sensors 9, no. 5 (2009): 3745-3766.

3: Poelman, Conrad J., and Takeo Kanade. "A paraperspective factorization method for shape and motion recovery." IEEE Transactions on Pattern Analysis and Machine Intelligence 19, no. 3 (1997): 206-218.

4: Zhang, Junzhe, and G. Okin. "Quantifying vegetation distribution and structure using high resolution drone-based structure-from-motion photogrammetry." In AGU Fall Meeting Abstracts. 2017.

KEYWORDS: Structure From Motion, Image Registration, Optical Flow, Unmanned Aerial Vehicle 
