TECHNOLOGY AREA(S): Ground/Sea Vehicles
The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), which controls the export and import of defense-related material and services. Offerors must disclose any proposed use of foreign nationals, their country of origin, and what tasks each would accomplish in the statement of work in accordance with section 5.4.c.(8) of the solicitation.
OBJECTIVE: Develop and demonstrate a system that purely uses deep learning and inexpensive commercial-off-the-shelf (COTS) sensors to incrementally learn and perform robotic following behaviors with large vehicles.
DESCRIPTION: Army supply convoys currently face numerous threats, such as Improvised Explosive Devices (IEDs), while completing their missions. The current method of addressing these threats is to add armor, which increases the weight and reduces the mobility of the vehicle. Another method is to use robotics and autonomy to remove Soldiers from the vehicle. Developing autonomous ground vehicles is a very difficult challenge due to the numerous situations that a vehicle may encounter. To handle these situations using traditional methods, each scenario must be accounted for and explicitly programmed into the system. Given the large number of potential scenarios, programming the system to handle them is very time consuming and costly, and the performance of these robotic systems is limited to the scenarios that have been explicitly programmed. A potential way to program a system to handle the various scenarios more rapidly, and to reduce development costs, is to use a lifelong deep learning approach. Deep learning uses neural networks to allow computers to automatically build models and learn from sensor data, human interaction, and databases.
Deep learning has been shown to be an effective means of pattern recognition in other fields and shows potential for ground vehicle robotics. Recently, a deep learning system was demonstrated that enabled automated highway driving using inexpensive COTS sensors. By collecting human driving data and running it through learning algorithms, the system incrementally achieved large improvements in driving performance in short time frames. Convolutional Neural Networks (CNNs) have also been applied as classifiers for determining autonomous vehicle traversability over off-road and on-road terrains. In addition, a CNN has been trained to map raw pixel data from a single camera directly to steering commands, which allowed a system to learn to steer on both local roads and highways, with and without lane markings, using minimal training data from humans.
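For illustration only, the pixel-to-steering idea can be sketched in a few lines. The toy network below (one hand-rolled convolution layer, a linear readout, and a tanh squashing the output into a bounded steering command) is an assumed stand-in, not the architecture from any of the cited work, and its random parameters are untrained:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def steering_command(image, kernels, weights, bias):
    """Map raw pixels to one steering value in [-1, 1]
    (-1 = full left, +1 = full right)."""
    features = np.concatenate(
        [relu(conv2d(image, k)).ravel() for k in kernels])
    return float(np.tanh(features @ weights + bias))

# Toy forward pass with random parameters: the output is arbitrary
# (the network is untrained) but always a bounded steering value.
rng = np.random.default_rng(0)
image = rng.random((16, 16))                     # stand-in for a camera frame
kernels = [rng.standard_normal((3, 3)) * 0.1 for _ in range(4)]
weights = rng.standard_normal(4 * 14 * 14) * 0.01  # 4 feature maps of 14x14
angle = steering_command(image, kernels, weights, bias=0.0)
assert -1.0 <= angle <= 1.0
```

In end-to-end training, the weights and kernels would be fit by regressing the network's output against a human driver's recorded steering, which is what allows the system to learn lane keeping without hand-coded rules.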
In order to overcome the challenges of programming robots to handle the countless variables encountered in ground mobility, proposals are sought to develop and demonstrate an inexpensive system that relies solely on deep learning and inexpensive COTS sensors, limited to passive cameras and radar, to enable a large vehicle to robotically follow another large vehicle in a convoy. This research differs from prior deep learning driving efforts in that deep learning will be used to train a vehicle to follow another, rather than to drive fully on its own. The ultimate vision of this project is to take a large vehicle equipped with sensors and equipment, have a driver follow a lead vehicle (that is not equipped with sensors) along arbitrary routes, process the data with learning algorithms, and then have the system perform the steering, throttle, and brake control to follow the lead vehicle on subsequent runs. The system is not expected to perform well initially, but it should improve incrementally with each run as it learns from the additional data collected. The system should also be capable of sharing its knowledge with other robotic follower vehicles. The environment for this topic will be limited to daytime operations on improved roads (paved or unpaved) and will include typical on-road static and dynamic obstacles such as other vehicles, construction barrels, and pedestrians. Following distances will range from 10 meters to 150 meters. The scenarios will start simple, with speeds below 45 km/h on good roads with gentle curves and static obstacles, and then increase in complexity as the system improves and safety permits. Later scenarios might include lower-quality roads, higher speeds (up to 90 km/h), sharper turns, and additional obstacles (both static and dynamic). Costs of the prototype system may be higher, but the cost target for a production system is less than $25,000. Both online and offline learning techniques are acceptable.
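The collect-drive-retrain loop described above can be illustrated with a deliberately simplified stand-in: a linear model fit by gradient descent to synthetic "human driver" data. The model, features, and data generator are all assumptions for illustration; the point is only the incremental structure, in which each run adds data, retraining improves the follower, and the learned weights can be copied to another follower vehicle:

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = rng.standard_normal(8)          # hidden "ideal driver" mapping

def collect_run(n=200):
    """Stand-in for one manned data-collection run: sensor features
    paired with the human driver's steering on that run."""
    x = rng.standard_normal((n, 8))
    y = x @ true_w + 0.05 * rng.standard_normal(n)
    return x, y

def update(w, x, y, lr=0.05, epochs=20):
    """Offline gradient-descent update on the newly collected run."""
    for _ in range(epochs):
        grad = x.T @ (x @ w - y) / len(y)
        w = w - lr * grad
    return w

w = np.zeros(8)                          # follower starts untrained
errors = []
for run in range(5):                     # each run adds data, then retrains
    x, y = collect_run()
    w = update(w, x, y)
    x_test, y_test = collect_run()       # held-out data from a fresh run
    errors.append(float(np.mean((x_test @ w - y_test) ** 2)))

assert errors[-1] < errors[0]            # performance improves across runs
# Sharing knowledge with another follower is just copying the weights:
other_follower_w = w.copy()
```

Evaluating each retrained model on data from a fresh run, rather than the run it was trained on, is the same discipline the topic asks for at vehicle scale: improvement must show up on conditions not seen during training.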
The testing should show that the system does not overfit to specific training sets and can perform in environments and conditions different from those used in training. The system should also be capable of operating in GPS-denied and communications-denied environments.
PHASE I: Develop a concept design for a system using lifelong deep learning and inexpensive COTS sensors to perform robotic following with large vehicles. The deliverables shall be a concept design report and performance analysis report. The concept design should include a description of the system architecture, algorithms, sensors, and computing requirements. The performance analysis should show the effectiveness of the algorithms in tests conducted in simulation using collected real-world data sets.
PHASE II: Using the Phase I concept design, the contractor shall develop, integrate, and demonstrate a prototype system that can incrementally learn robotic following behaviors on a large vehicle, using deep learning algorithms and inexpensive COTS sensors. The system deliverables shall include: design documentation, interface control documents (ICDs), software, and hardware. The integration and demonstration shall be performed using a large vehicle (provided by the government) that is already equipped with drive-by-wire capability. The environment and operating conditions for the final demonstration should be on improved roads, during the day, and at speeds ranging from 45 km/h to 90 km/h.
PHASE III DUAL USE APPLICATIONS: A potential military application of the deep learning system is integration into the Autonomous Ground Resupply (AGR) program, which will then transition into the Leader Follower Program of Record. The system could potentially expand to full autonomy and transition into the Autonomous Convoy Operations Program of Record. A potential commercial application of the system is enabling platooning within the trucking industry. There are also potential agricultural applications where tasks currently require more than one piece of equipment, each with its own operator.
REFERENCES:
- A. Vance, "The First Person to Hack the iPhone Built a Self-Driving Car. In His Garage," 16 December 2015. [Online]. Available: http://www.bloomberg.com/features/2015-george-hotz-self-driving-car/?cmpid=twtr1.
- D. Ciresan, U. Meier and J. Schmidhuber, "Multi-column Deep Neural Networks for Image Classification," IDSIA-04-12, Manno, Switzerland, February 2012.
- "2014 Autonomous Mobility Applique System - Capabilities Advancement Demonstration (AMAS CAD)," RDECOM TARDEC, 2014. [Online]. Available: https://www.youtube.com/watch?v=HseUNLP6q24.
- S. Hatfield, "Army Robotics Modernization," 25 August 2015. [Online]. Available: http://www.ndia.org/Divisions/Divisions/Robotics/Documents/Hatfield.pdf.
- D. Erhan, A. Courville, Y. Bengio and P. Vincent, "Why Does Unsupervised Pre-Training Help Deep Learning?," Universite de Montreal, Montreal, 2010.
- R. Hadsell, P. Sermanet, J. Ben, A. Erkan, M. Scoffier, K. Kavukcuoglu, U. Muller and Y. LeCun, "Learning Long-Range Vision for Autonomous Off-Road Driving," J. Field Robotics, vol. 26, pp. 120-144, 2009.
- L. Linhui, W. Mengmeng, D. Xinli, L. Jing and Z. Yunpeng, "Convolutional Neural Network Applied to Traversability Analysis of Vehicles," Advances in Mechanical Engineering, 2013.
- M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao and K. Zieba, "End to End Learning for Self-Driving Cars," NVIDIA Corporation, 2016.
KEYWORDS: Autonomy, Robotics, Leader Follower, Unmanned, Ground Vehicle, Vehicle Control, Deep Learning, Machine Learning, Artificial Intelligence