Distributed Visual Surveillance for Unmanned Ground Vehicles


This topic is supported under the National Robotics Initiative (NRI).

OBJECTIVE: Develop a system to identify, classify, and analyze visual data from unmanned ground vehicles and stationary visual surveillance sources to enable real-time on-board decisions and system-wide planning regarding route, speed, and tasks.

DESCRIPTION: Distributed visual surveillance has a major role in the future of Unmanned Ground Vehicles (UGVs). Distributed visual surveillance refers to the use of cameras networked over a wide area to continually collect and process sensor data in order to recognize and classify objects in the environment. The analyzed data will inform unmanned decision-making and fleet management to optimize a transportation system. Sensors and camera systems mounted on UGVs will augment stationary surveillance hardware. Areas of interest for this research include data fusion, cooperative multi-sensor tracking methods, and distributed processing. Also of interest are the reconciliation, classification, and prioritization of data; the storage and accurate retrieval of archival references; and the selection of an appropriate action or response to the data. Although many sensor types could be used in distributed surveillance, this topic focuses on visual (and possibly infrared) imaging sensors, whose cost, reliability, and availability make transition to the field or commercialization much more likely. Communications bandwidth is, and will remain, a limited resource. Even with video compression technologies, there is insufficient bandwidth to upload all video and high-resolution still images from all network nodes; artifacts from heavy video compression would degrade most analysis applications, and viewing all the data would overwhelm analysts. Local processing is therefore preferable to central processing for extracting actionable information from the sensor data and for planning UGV position adjustments.
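As one minimal sketch of the node-level triage described above, a camera node could compare consecutive frames and transmit sensor-level data only when the scene has changed significantly. The function below is illustrative only; the `pixel_thresh` and `area_thresh` values are assumed tuning parameters, not values drawn from this topic.

```python
import numpy as np

def frame_changed(prev, curr, pixel_thresh=25, area_thresh=0.02):
    """Decide whether the scene changed enough to warrant transmitting
    a package of sensor-level data. Thresholds are illustrative:
    pixel_thresh is in grayscale levels, area_thresh is the fraction
    of pixels that must differ."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    changed_fraction = np.mean(diff > pixel_thresh)
    return bool(changed_fraction > area_thresh)

# Synthetic frames: a static background, then a bright object entering.
bg = np.full((120, 160), 80, dtype=np.uint8)
obj = bg.copy()
obj[40:80, 60:100] = 200  # new object covers ~8% of the frame

print(frame_changed(bg, bg.copy()))  # -> False (no change, stay silent)
print(frame_changed(bg, obj))        # -> True  (transmit sensor data)
```

In a deployed system the decision would also weigh bandwidth budget and event priority, but the basic pattern — compute locally, transmit only on significant change — is what keeps the network load tractable.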
An individual node can determine whether there has been a significant change in the situation that would warrant transmitting a package of sensor-level data.

The scenario to be addressed in this topic is a small fleet of 10-15 UGVs deployed at a CONUS installation to safely transport personnel, on demand, from one building to another on the installation: along roughly one-third of a mile of shared pedestrian sidewalk, across an uncontrolled four-lane roadway, and through a busy parking lot. Vehicles will operate at speeds from 3 mph (in mixed pedestrian traffic) up to potentially 25 mph, the limit for Neighborhood Electric Vehicles. Vehicles must recognize and respond appropriately to pedestrians, unconnected vehicles, and other environmental objects. Approximately 12 networked cameras fixed along the route and around the test site will provide visual coverage of the area. Sensors will have a priori visual background data, and UGV locations will be known (via landmarks, GPS positioning, etc.), enabling temporal differencing or background subtraction to locate objects. Desired UGV capabilities include obstacle detection and obstacle avoidance (ODOA); correct positioning and speed regulation with respect to moving and stationary objects; coordinated, optimized system-wide responses across the fleet; data collection and/or communications; and extraction of actionable information from the sensor stream. Information of interest includes detection and behavior analysis of humans and vehicles, analysis of traffic patterns, and identification of suspicious activities or behaviors. The intended platform is an electric vehicle with a mass on the order of 500-600 kg (roughly golf-cart sized). The platform is expected to manage its own energy usage and recharge itself wirelessly, so energy-efficient algorithms are of interest. UGV platform and payload development, including sensors and communications, are outside the scope of this topic.
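Because each camera has a priori background data, object localization can start from simple background subtraction. The sketch below assumes grayscale frames and a hypothetical intensity threshold; a fielded system would add morphological cleanup, connected-component labeling, and lighting compensation.

```python
import numpy as np

def locate_object(background, frame, thresh=30):
    """Background subtraction against an a priori background image.
    Returns the binary foreground mask and the bounding box
    (row_start, row_stop, col_start, col_stop) of foreground pixels,
    or None if nothing exceeds the (assumed) threshold."""
    mask = np.abs(frame.astype(np.int16) - background.astype(np.int16)) > thresh
    if not mask.any():
        return mask, None
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    box = (int(rows[0]), int(rows[-1]) + 1, int(cols[0]), int(cols[-1]) + 1)
    return mask, box

# A priori background, then a frame with a simulated pedestrian.
background = np.full((100, 100), 50, dtype=np.uint8)
frame = background.copy()
frame[20:40, 30:60] = 180  # bright region where the pedestrian appears

mask, box = locate_object(background, frame)
print(box)  # -> (20, 40, 30, 60)
```

The resulting bounding box is the kind of compact, actionable datum a node could transmit in place of raw video, consistent with the bandwidth constraints discussed above.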
PHASE I: The first phase consists of scenario/capability selection, initial system design, research into sensor options, investigation of signal- and video-processing algorithms, and a feasibility demonstration on sample data. Documentation of design tradeoffs and projected system performance shall be included in the final report.

PHASE II: The second phase consists of a final design and full implementation of the system, including sensors and UGV software. At the end of the contract, a database of behavioral characteristics enabling improved modeling and simulation (M&S) and test and evaluation (T&E) will be available, and improved autonomous local maneuvering shall be demonstrated in a realistic outdoor environment. Deliverables shall include the prototype system and a final report containing documentation of all project activities, a user's guide, and technical specifications for the prototype system.

PHASE III: The end state of this research is further development of the prototype system and potential transition to the field or to use on military installations and bases. Potential military applications include monitoring highways, overpasses, intersections, buildings, and security checkpoints. Potential commercial applications include monitoring high-profile events, border security, and commercial and residential surveillance. The most likely path for transitioning this SBIR from research to operational capability is collaboration with robotics companies from industry or with the Robotic Systems Joint Project Office (RS JPO).