
Development of an Onboard Video Processing Platform for Small Unmanned Aerial Systems (SUAS)


OBJECTIVE: Develop an onboard video processing platform for SUAS that achieves a significant decrease in SWAP requirements without compromising performance or ease of programmability.

DESCRIPTION: Recent improvements in imagery devices, both infrared (IR) and visible (EO) cameras, have led to a significant increase in the number of pixels that can be captured per second in a SWAP-constrained environment. For example, it is now possible to capture over 70 megapixels per second (MP/s) with a camera that fits in a 6-foot-wingspan UAS. These dramatic improvements in captured MP/s, however, introduce a new problem: how to utilize the pixels that these cameras generate. Traditionally, a small UAS would capture television-type video (approximately 9 MP/s), compress it, and transmit it to the ground for exploitation. With the increase in pixels captured per second, however, compression becomes far more difficult within the SWAP constraints of a SUAS. One alternative is to detect the "interesting" portions of the captured imagery and transmit only that information; this approach, however, requires significant exploitation of the imagery to occur onboard the SUAS. In either case (increased compression or onboard exploitation), significant increases in the computational capability of onboard processors are required in a SWAP-constrained environment.

In addition, this computation cannot come at the expense of ease of programmability. Because these high-pixel-count imagery sensors are relatively new, the exploitation algorithms for their imagery are still being developed. Creating a specialized computational platform, such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC), for performing exploitation onboard a SUAS is therefore not practical.
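A back-of-the-envelope calculation makes the scale of the problem concrete. The pixel rates below come from the topic text; the 8-bit sample depth and the 5 Mbit/s downlink are illustrative assumptions, not program requirements:

```python
# Data-rate estimate for the sensor classes described above. Pixel rates
# (9 and 70 MP/s) are from the topic text; bit depth and downlink bandwidth
# are assumed, illustrative values.

BITS_PER_PIXEL = 8  # assumed 8-bit monochrome samples


def raw_rate_mbps(megapixels_per_second: float) -> float:
    """Uncompressed data rate in megabits per second."""
    return megapixels_per_second * BITS_PER_PIXEL


def required_compression(megapixels_per_second: float,
                         downlink_mbps: float) -> float:
    """Compression ratio needed to fit the raw stream into the downlink."""
    return raw_rate_mbps(megapixels_per_second) / downlink_mbps


# Traditional TV-type video vs. a new high-pixel-count sensor,
# against an assumed 5 Mbit/s SUAS downlink:
print(raw_rate_mbps(9))                 # 72.0  Mbit/s raw
print(raw_rate_mbps(70))                # 560.0 Mbit/s raw
print(required_compression(9, 5.0))     # ~14x  -- feasible with standard codecs
print(required_compression(70, 5.0))    # ~112x -- hence onboard processing
```

The order-of-magnitude jump in required compression ratio is why either much stronger onboard compression or onboard exploitation (transmitting only regions of interest) becomes necessary.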
Instead, the desired computational platform must meet stringent SWAP constraints while remaining easy to develop for (e.g., like the Intel x86 platforms) and to re-program. Of particular interest is the recent development of high-performance processors for the cellular-phone industry. Cell phones can now compress HD video in real time, perform other processing on collected imagery, and perform significant 3-D graphics computations, and these computationally intensive applications run within an extremely limited SWAP envelope. This topic therefore seeks to solve the current SUAS imagery processing problem by leveraging commercial off-the-shelf (COTS) processors for onboard video processing. Current SUAS typically use PC104-type computation platforms to interface with onboard cameras, but the SWAP requirements of these platforms limit the ability of SUAS to collect and transmit video data. The advertised SWAP capabilities of cell-phone processors, by contrast, are almost an order of magnitude better than those of current PC104-based platforms. This project should therefore create a platform, built around a cell-phone processor, for capturing, processing, and compressing video onboard a SUAS at a greatly reduced SWAP cost.

COMMERCIALIZATION POTENTIAL: Numerous defense and civilian applications could benefit from a processor of this type. Defense applications include all micro, mini, and small UAS currently used for collecting EO/IR imagery. Civilian applications include many embedded aerial and ground robotics applications where video processing is required to complete tasks.

PHASE I: Survey, compare, and purchase available mobile video processors. Work with the government to choose a set of image processing algorithms as a benchmark for the candidate processors.
Evaluate each processor on video processing performance, ease of programmability, ability to interface with other units (including cameras, inertial measurement units (IMUs), and wireless communication devices), and SWAP requirements.

PHASE II: Develop a computational platform capable of executing a large set of video processing algorithms with minimal algorithm development time and within the SWAP constraints of a SUAS. Develop a software library of multiple video processing algorithms that execute efficiently on the designed platform. Source code delivery to the government is required. The computational unit will be demonstrated in a SUAS during government-run flight tests.

PHASE III: Refine the computational unit for integration with a specific targeted SUAS platform. Testing of the unit's performance in a variety of operating conditions will be required.

REFERENCES:
1. J. Meehan, S. Busch, J. Noel, and F. Noraz, "Multimedia IP architecture trends in the mobile multimedia consumer devices," Signal Processing: Image Communication, 2010.
2. C. Lee, E. Kim, and H. Kim, "The AM-Bench: An Android Multimedia Benchmark Suite."
3. K.T.T. Cheng and Y.C. Wang, "Using mobile GPU for general-purpose computing -- a case study of face recognition on smartphones," 2011 International Symposium on VLSI Design, Automation, and Test.
4. K. Pulli, A. Baksheev, K. Kornyakov, and V. Eruhimov, "Realtime Computer Vision with OpenCV," Communications of the ACM, June 2012.
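As an illustrative sketch of the "transmit only the interesting portions" exploitation approach described in this topic, the routine below finds the bounding box of pixels that changed between two frames. It is a minimal stand-in for the exploitation algorithms to be developed; the function name, frame representation, and threshold are all hypothetical choices for this sketch:

```python
# Minimal frame-differencing sketch of onboard "interesting region" detection.
# Frames are grayscale images stored as lists of rows of pixel values; the
# threshold is an assumed, illustrative value. Real exploitation algorithms
# would be far richer (e.g., OpenCV background subtraction, per reference 4).

def changed_bbox(prev, curr, threshold=20):
    """Return (row0, col0, row1, col1), the bounding box of pixels whose
    value changed by more than `threshold`, or None if nothing changed."""
    box = None
    for r, (p_row, c_row) in enumerate(zip(prev, curr)):
        for c, (p, q) in enumerate(zip(p_row, c_row)):
            if abs(p - q) > threshold:
                if box is None:
                    box = [r, c, r, c]
                else:
                    box[0] = min(box[0], r)
                    box[1] = min(box[1], c)
                    box[2] = max(box[2], r)
                    box[3] = max(box[3], c)
    return tuple(box) if box else None


# Two tiny 4x4 "frames": a bright object appears in the lower-right corner.
prev = [[0] * 4 for _ in range(4)]
curr = [[0] * 4 for _ in range(4)]
curr[2][2] = curr[2][3] = curr[3][2] = 200

print(changed_bbox(prev, curr))  # (2, 2, 3, 3)
```

Only the cropped region (rows row0..row1, columns col0..col1 of the current frame) would then need to be compressed and transmitted, which is the mechanism by which onboard exploitation reduces downlink load.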