Description:

OBJECTIVE: To automatically compute precise geo-location uncertainty estimates for aerial imaging systems. Uncertainty estimates for systems that use computer vision-based algorithms to generate geo-location estimates are of particular interest.

DESCRIPTION: With the continuing increase in the quantity and capability of systems that capture aerial imagery, a large amount of video and motion imagery is being collected to provide warfighters with increased situational awareness. However, while imagery by itself is helpful, the context of the imagery is often as important as the imagery itself. For example, a simple video of a person walking on a road does not give the warfighter much information. On the other hand, seeing a video of a person walking down a road while knowing which road the person is on, in which direction they are walking, what other people and objects are nearby, and what traffic typically exists near that location gives the warfighter a tremendous amount of information. It is therefore important that the geo-location of all collected imagery be accurately estimated. If perfect geo-location estimates were achievable, simply returning the geo-location of the imagery would suffice. However, because no geo-registration solution will be perfect, it is essential that the geo-location estimation system also return the uncertainty in its estimates. Current systems that utilize the Global Positioning System (GPS), inertial measurement units (IMUs), and digital terrain elevation data (DTED) can return fairly accurate uncertainty estimates because the error sources of the system inputs are well characterized. However, when computer vision-based algorithms are used to compute the geo-location of an object (e.g., when GPS is unavailable), accurate uncertainty estimates are not straightforward to compute.
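The contrast drawn above can be made concrete: with well-characterized Gaussian error sources, output uncertainty follows from first-order (linear) covariance propagation, Cov(y) ≈ J Cov(x) Jᵀ, where J is the Jacobian of the geo-location function. A minimal sketch, with sensor variances and geometry that are assumed purely for illustration:

```python
import numpy as np

# Illustrative only: assumed input covariance for platform position (m^2)
# and sensor pointing angle (rad^2), not values from any specific system.
cov_in = np.diag([3.0**2, 3.0**2, 0.001**2])  # [east, north, pointing]

# Assumed sensitivity (Jacobian) of the ground-point estimate to the inputs:
# 1 m of platform error maps to ~1 m of ground error; at a 2000 m slant
# range, 1 rad of pointing error maps to ~2000 m of ground error.
J = np.array([[1.0, 0.0, 2000.0],
              [0.0, 1.0, 2000.0]])

cov_out = J @ cov_in @ J.T          # propagated 2x2 ground-point covariance
sigmas = np.sqrt(np.diag(cov_out))  # 1-sigma east/north uncertainties (m)
print(f"1-sigma ground uncertainty (m): {sigmas}")
```

No comparably simple propagation exists for vision-based outputs, whose error distributions are non-Gaussian and not characterized in advance.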
For example, registration of the captured imagery to pre-existing reference imagery, feature tracking to enable vision-aided navigation in GPS-denied environments, and frame-to-frame image registration all produce outputs with non-Gaussian and currently unknown uncertainties. This is compounded by the problem of outliers in both the inputs and the outputs. While many algorithms have been developed to minimize the probability of outlier outputs, that probability remains non-zero and is not estimated by current approaches. The goal of this project is to develop the mathematical basis and algorithms for accurately estimating uncertainty when computer vision algorithms are used in a geo-location process. Both registration and vision-aided navigation applications are of particular interest. Delivery of MATLAB or similar algorithm code is required. Many current algorithms rely on the Kalman Filter; however, the covariance estimates of the Kalman Filter are known to be under-estimates for non-linear systems (the inconsistency problem in simultaneous localization and mapping, SLAM). In addition, the assumption of a Gaussian distribution may itself be inappropriate for the outlier results common in visual processing algorithms. While the field of robust statistics helps minimize the impact of outliers, it does not address the generation of statistically valid uncertainty estimates. Addressing these two issues will be the primary focus of this project.

Commercialization Potential: Accurate characterization of geo-location uncertainty will enable deployment of computer vision-based algorithms within the military and in civilian applications that require known uncertainties (e.g., civil surveying).

PHASE I: Develop a mathematical basis and algorithmic prototype for automatically computing uncertainty estimates of monocular visual SLAM, image-to-image registration, and/or image-to-map algorithms.
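Statistical consistency of a filter's reported covariance is commonly evaluated with the normalized estimation error squared (NEES): for a consistent one-dimensional estimator, NEES should average about 1. The Kalman under-estimation described above can be demonstrated with a minimal Monte Carlo sketch; the scalar measurement model h(x) = x + 0.5x², the noise levels, and the prior are all assumed for illustration only:

```python
import numpy as np

# Hypothetical demonstration of EKF overconfidence on a nonlinear measurement.
# Consistency check: NEES = (x_true - x_hat)^2 / P_reported, averaged over
# many trials; a consistent 1-D filter averages ~1, an overconfident one >> 1.
rng = np.random.default_rng(0)
P0, R, trials = 1.0, 0.01, 5000   # assumed prior and measurement variances
nees = []
for _ in range(trials):
    x = rng.normal(0.0, np.sqrt(P0))                    # true state, from prior
    z = x + 0.5 * x**2 + rng.normal(0.0, np.sqrt(R))    # nonlinear measurement
    # Standard EKF update, linearized at the prior mean 0: H = dh/dx|_0 = 1.
    H = 1.0
    S = H * P0 * H + R
    K = P0 * H / S
    mu_post = K * (z - 0.0)                             # h(0) = 0
    P_post = (1.0 - K * H) * P0
    nees.append((x - mu_post) ** 2 / P_post)

print(f"mean NEES: {np.mean(nees):.1f} (a consistent filter would give ~1)")
```

Because the linearization ignores the quadratic term, the mean NEES comes out far above 1: the reported covariance badly under-states the true error, which is the inconsistency this project seeks to remedy.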
Compare the uncertainty estimates achieved against current state-of-the-art algorithms (e.g., Kalman Filtering, particle filters) using real imagery data. Evaluate the statistical consistency of the uncertainty estimates.

PHASE II: Expand the mathematical basis developed in Phase I to additional geo-location algorithms. Demonstrate statistically valid uncertainty estimates on several different algorithms used for geo-locating objects in aerial imagery, including monocular visual SLAM, image registration procedures, and cross-modality image registration. In addition, transition and demonstrate the Phase I algorithms in a real-time environment (i.e., with processing and timing requirements similar to those of the geo-location algorithms themselves).

PHASE III: Test the algorithms developed in Phases I and II in operationally relevant environments. Verify the efficacy of the developed algorithms and increase their robustness to enable deployment in operational systems.

REFERENCES:
1. M. Li and A. Mourikis, "Improving the Accuracy of EKF-Based Visual-Inertial Odometry," in Proceedings of the 2012 IEEE International Conference on Robotics and Automation (ICRA).
2. C. Taylor, "Improved Fusion of Visual Measurements Through Explicit Modeling of Outliers," in Proceedings of the 2012 IEEE/ION Position, Location, and Navigation Symposium (PLANS).
3. G. Huang, A. Mourikis, and S. Roumeliotis, "Observability-based Rules for Designing Consistent EKF SLAM Estimators," The International Journal of Robotics Research, vol. 29, no. 5, 2010.