OBJECTIVE: Develop an API and User Interface for testing and evaluating the performance of perception systems for autonomous vehicles.

DESCRIPTION: Perception systems for autonomous vehicles are often tuned to a specific platform and will therefore degrade in performance if transferred to a different platform. For example, terrain costs are computed according to a fixed mapping from perceived terrain to robot capabilities, so upgrades or damage to the underlying platform can severely affect performance. In addition, comparing perception systems on a single platform or across platforms is hampered by the lack of plug-and-play capability in most perception systems and by the absence of tools for statistically meaningful evaluation. The DARPA Learning Applied to Ground Robots (LAGR) program addressed this shortcoming in part by providing a single platform with a default control system and planning module to which a perception system could be added by plugging in a flash disk. The system was not transferable, however, to other robot platforms, and there existed no common set of tools for evaluating the data collected by the robot. It is desirable to be able to evaluate the relative performance of perception systems before the question of their effectiveness arises in the field. To this end, an Application Programming Interface (API) and User Interface are sought that will enable testing of the cross-platform generalizability of perception systems. The API should provide an interfacing standard that allows perception systems to act as plug-ins to the autonomy software, communicating with the standardized planning system and controls. The User Interface will provide an easy and intuitive way to conduct tests, visualize results, and perform comparative analysis.

PHASE I: Develop an API that enables simple integration of any perception system following the API's interfacing guidelines onto a test autonomous platform.
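To make the interfacing standard concrete, the sketch below shows one possible shape for such a plug-in contract: a perception system implements a fixed interface that converts raw sensor data into a platform-neutral cost map consumed by the standardized planner. All names here (PerceptionPlugin, CostMap, ThresholdRangePlugin) are illustrative assumptions, not part of any existing LAGR or DARPA API.

```python
# Hypothetical sketch of a perception plug-in interfacing standard.
# Names and types are assumptions for illustration only.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class CostMap:
    """Grid of traversal costs handed to the standardized planner."""
    width: int
    height: int
    costs: List[float]  # row-major, one cost per cell (0 = free, 1 = hazard)


class PerceptionPlugin(ABC):
    """Contract any perception system implements to plug into the autonomy stack."""

    @abstractmethod
    def process(self, sensor_frame: dict) -> CostMap:
        """Convert one frame of raw sensor data into platform-neutral costs."""


class ThresholdRangePlugin(PerceptionPlugin):
    """Toy perception system: flag any range reading below a clearance as a hazard."""

    def __init__(self, clearance_m: float = 1.0):
        self.clearance_m = clearance_m

    def process(self, sensor_frame: dict) -> CostMap:
        ranges = sensor_frame["ranges"]  # one range reading (metres) per cell
        costs = [1.0 if r < self.clearance_m else 0.0 for r in ranges]
        return CostMap(width=len(ranges), height=1, costs=costs)
```

Because the plug-in emits only a cost map, the planner and controls never need to know which sensors or algorithms produced it, which is what allows perception systems to be swapped across platforms.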
Demonstrate it on a COTS robot with a simple perception system performing obstacle detection and obstacle avoidance (ODOA), interfacing with the robot's provided planning and control system. Produce a design for a User Interface that enables visualization and analysis of the perception system's performance. Develop the metrics to be used for comparison of systems; these should include time-to-goal, number of hazards encountered, number of human interventions required, and power consumption. The Phase I deliverables should include a final report documenting the API in the form of a software manual as well as describing the design of the User Interface.

PHASE II: Implement the User Interface software designed in Phase I to allow convenient measurement and subsequent analysis of perception systems' performance, including cross-comparison of perception systems. Use the API and User Interface to demonstrate comparison of two perception algorithms across all metrics developed in Phase I on the COTS robot chosen in Phase I. Because the purpose of the API and User Interface is the evaluation and comparison of perception systems across platforms, extend this demonstration by applying the API to a second COTS platform and repeating the comparison of the two perception algorithms on the new platform.

PHASE III: Bring the User Interface to a commercial level. Transition the work of Phase II to DoD test centers and DoD development efforts that need to assess and select the most capable perception systems available. Understanding the limitations of existing perception systems is essential for deploying them in the appropriate environments and for the appropriate tasks. Potential commercial applications of this technology include autonomous system development and evaluation by major UGV providers in agriculture and mining.
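The comparison metrics named in Phase I (time-to-goal, hazards encountered, human interventions, power consumption) lend themselves to a simple aggregation over repeated trials. The sketch below is one hypothetical way such run logs might be summarized and compared; the field and function names are assumptions, not a specified format.

```python
# Hypothetical sketch of aggregating the Phase I comparison metrics.
# RunLog fields and function names are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean
from typing import Dict, List


@dataclass
class RunLog:
    """Metrics recorded for one trial of one perception system on one platform."""
    time_to_goal_s: float       # time-to-goal
    hazards_encountered: int    # number of hazards encountered
    interventions: int          # number of human interventions required
    energy_used_wh: float       # power consumption integrated over the run


def summarize(runs: List[RunLog]) -> Dict[str, float]:
    """Average each metric over repeated trials of one perception system."""
    return {
        "time_to_goal_s": mean(r.time_to_goal_s for r in runs),
        "hazards_encountered": mean(r.hazards_encountered for r in runs),
        "interventions": mean(r.interventions for r in runs),
        "energy_used_wh": mean(r.energy_used_wh for r in runs),
    }


def compare(a: List[RunLog], b: List[RunLog]) -> Dict[str, float]:
    """Metric-by-metric difference (system A minus system B); negative favors A."""
    sa, sb = summarize(a), summarize(b)
    return {k: sa[k] - sb[k] for k in sa}
```

Averaging per metric is only a starting point; a statistically meaningful evaluation of the kind the topic calls for would also report variance and confidence intervals across trials.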