Vision Processor for Helmet System (VPHS)


OBJECTIVE: A vision processor for helmet systems (VPHS) is required to enable the design and fabrication of a digital binocular helmet-mounted display (HMD) with all-source image fusion and two video outputs.

DESCRIPTION: Advances in digital sensors and digital displays now require the development of improved digital processing capacity that can be integrated within helmet space and mass limitations. The total head-borne weight for helmet systems must be less than about 5 lbs, including the shell, life support, and any embedded electronics components (e.g. the HMD system). The weight budget allocated to the helmet-mounted components of the HMD system is about 2 lbs, including sensors, processors, microdisplays, optics, batteries, and cables. In addition, the total power dissipation of the in-helmet components, which is dominated by the in-helmet processor and sensors, must be less than about 10 W to avoid the need for active in-helmet cooling. Prior approaches to the in-helmet processor required for digital all-source imaging have yet to meet these mass and power requirements. However, efforts to date have been based on microelectronics fabrication technology that is no longer state-of-the-art: e.g. 90-nm design-rule application-specific integrated circuit (ASIC) or fifth-generation field-programmable gate array (FPGA) devices. For example, the vision processor ASIC developed under the DARPA Multispectral Adaptive Networked Tactical Imaging System (MANTIS) program (2003-2010) was originally conceived to fuse inputs from five helmet-mounted electro-optical sensors operating in the visible-near infrared (VNIR x 2), short-wave infrared (SWIR x 2), and long-wave infrared (LWIR) bands and to generate two synchronized SXGA video outputs at 60 Hz to a pair of microdisplays.
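As a rough sanity check on the video data rates such a design implies, the sketch below computes the uncompressed per-eye output bandwidth for an SXGA-class (~1.3 Mpx) 60 Hz display and for a denser 5 Mpx 96 Hz display; the 8-bit output depth is an assumption consistent with the display formats discussed in this topic.

```python
# Back-of-envelope check of HMD video output data rates.
# Resolutions, refresh rates, and bit depths follow the figures in this
# topic description; the calculation itself is illustrative only.

SXGA_PX = 1280 * 1024  # SXGA frame: ~1.31 Mpx

def link_rate_gbps(mpx_per_frame, bits_per_px, hz):
    """Uncompressed bandwidth of one video output, in Gbit/s."""
    return mpx_per_frame * 1e6 * bits_per_px * hz / 1e9

# SXGA-class output: 1.3 Mpx/frame, 8 b/px, 60 Hz (per eye)
threshold = link_rate_gbps(1.3, 8, 60)
# Denser output: 5 Mpx/frame, 8 b/px, 96 Hz (per eye)
objective = link_rate_gbps(5.0, 8, 96)

print(f"SXGA-class per-eye rate: {threshold:.3f} Gbps")  # 0.624 Gbps
print(f"5 Mpx per-eye rate: {objective:.2f} Gbps")       # 3.84 Gbps

# A "less than 1-frame latency" requirement sets the processing deadline:
print(f"latency budget @60 Hz: {1000/60:.1f} ms, @96 Hz: {1000/96:.1f} ms")
```

A binocular system carries two such streams, so the aggregate display-side bandwidth is double the per-eye figure.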
The final MANTIS program demonstration was a handheld system (no helmet-mounted components) with a processor that ingested three sensors (one each VNIR, SWIR, LWIR) and generated just one video output at 30 Hz. Binocular systems needed by pilots require threshold (objective) performance comprising two synchronized video outputs, each at 60 Hz x 1.3 Mpx/frame x 8 b/px = 0.624 Gbps (96 Hz x 5 Mpx x 8 b = 3.84 Gbps), and must be capable of ingesting matching-resolution video (in Mbps) from multiple sources (on-helmet or on-aircraft) comprising various mixtures of live sensor video, synthetic imagery, and overlay symbology. Similarly, FPGA approaches have not yet demonstrated the needed processing capacity in a sufficiently small form factor, but improved, sixth-generation components have now appeared. Hybrid processor architectures, e.g. an array of processing elements interspersed with memory routers on a single silicon chip, may also provide space and weight advantages over the more traditional ASIC and FPGA designs. The microelectronics design and fabrication industry serving companies that address applications such as VPHS has advanced beyond the 90-nm design rule to 45 nm, and will move to even smaller nodes (e.g. 22 nm). The performance metric for VPHS design pathways may be expressed as floating-point giga-operations per second per Watt of power per gram of weight (GFLOPS/W/g). This metric increases as the silicon design node decreases. The 45-nm (22-nm) design node should be about 4X (16X) better in terms of GFLOPS/W/g than the 90-nm design node used in prior efforts and should enable the performance needed for digital HMD system designs.

PHASE I: Develop a plan to design and fabricate a VPHS.
Perform an analysis of the proposed VPHS approach that demonstrates the capability to process the outputs of two 5-Mpx 14-b 96-Hz sensors through a representative set of imaging and display algorithms to drive two 5-Mpx 8-b 96-Hz microdisplays with less than 1 frame of latency. Estimate the power, weight, and size of the processor.

PHASE II: Design a prototype VPHS. Perform a simulation of the proposed VPHS design that demonstrates the threshold (objective) capability to process the outputs of two 1.3-Mpx 14-b 60-Hz (5-Mpx 14-b 96-Hz) sensors through a representative set of imaging and display algorithms to drive two synchronized 1.3-Mpx 8-b 60-Hz (5-Mpx 8-b 96-Hz) microdisplays with less than 1 frame of latency. Estimate the power, weight, and size of the processor for both the threshold and objective performance levels.

PHASE III: Fabricate a prototype VPHS and demonstrate the threshold (objective) capacity to process two 1.3-Mpx 14-b 60-Hz (5-Mpx 14-b 96-Hz) VNIR sensor outputs through a representative set of imaging and display algorithms to drive two 1.3-Mpx 8-b 60-Hz (5-Mpx 8-b 96-Hz) flat-panel displays with less than 1 frame of latency.

REFERENCES:
1. Peter Burt, "On Combining Color and Contrast-selective Methods for Fusion," IDGA Image Fusion Conference, Institute for Defense and Government Advancement, 2004.
2. Acadia II System-on-a-Chip.
3. Rockwell Collins MicroCore technology, included in Steven E. Koenck and David W. Jensen, "High dynamic range sensor system and method," United States Patent Application 20090268192.
4. Xilinx Virtex-X field programmable gate arrays (FPGA).
5. Coherent Logix HyperX processors.
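The design-node scaling estimate in the description (about 4X at 45 nm and about 16X at 22 nm, relative to the 90-nm baseline) can be sketched as a simple transistor-density argument. Treating GFLOPS/W/g as proportional to the square of the node ratio is an assumption for illustration; real gains also depend on voltage and frequency scaling.

```python
# Illustrative check of the GFLOPS/W/g scaling claim: assume the figure of
# merit scales as (baseline_node / new_node)^2, i.e. with transistor
# density. This is a simplification, not a process-accurate model.

def relative_merit(baseline_nm, node_nm):
    """Approximate GFLOPS/W/g improvement vs. a baseline design node."""
    return (baseline_nm / node_nm) ** 2

for node in (90, 45, 22):
    print(f"{node} nm: ~{relative_merit(90, node):.1f}x vs 90 nm")
# 45 nm -> ~4.0x; 22 nm -> ~16.7x (consistent with the topic's "about 16X")
```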