
Three-Dimensional Imagery Presentation from Multiple Aircraft Sensors

Description:

TECHNOLOGY AREA(S): Human Systems

The technology within this topic is restricted under the International Traffic in Arms Regulations (ITAR), which control the export and import of defense-related material and services. Offerors must disclose any proposed use of foreign nationals, their country of origin, and what tasks each would accomplish in the statement of work in accordance with section 5.4.c.(8) of the solicitation.

OBJECTIVE: To support future Mission Commander situational awareness and decision-making through the development of presentation techniques capable of fusing visual information collected from multiple distributed sensors into a single, cohesive, three-dimensional display.

DESCRIPTION:

Background: As Army aviation implements increasingly advanced levels of automation, human operators will transition from their current role of actively piloting vehicles to serving instead as Mission Commanders (MCs) supervising highly intelligent autonomous systems. In these manned-unmanned teaming (MUM-T) operations, the MC’s primary function will be decision-making: delegating tasks to be executed by the autonomous system(s) under their command. The MC’s many tactical decisions include finding optimal routes for aircraft to fly, defining ground regions to be surveyed for intelligence gathering, processing intelligence to identify threats, and determining how to prosecute identified threats. Executing these functions effectively will require the MC to synthesize information collected from a variety of sensors distributed across the battlefield in order to maintain high-level situational awareness (SA). With existing technologies this task requires high levels of visual attention, frequent task-switching to perceive information from multiple displays, and heavy use of working memory. These cognitive demands significantly degrade the MC’s decision-making ability.

Solution: This topic seeks to develop visual information presentation techniques capable of fusing information from multiple sensors to generate a single three-dimensional visual representation of the battlefield using existing techniques such as photogrammetry. Each individual sensor will provide full-motion video as well as accompanying metadata describing the geospatial position and orientation of the sensor. A centralized processor will analyze the imagery from each sensor to render the scene in three dimensions and synthesize the information collected from many distributed sensors into a single common database. The MC can then manipulate the display to view the environment from any vantage point without the risk of entering dangerous airspace or the expense of time and fuel to reposition an aircraft. The three-dimensional world model will also support the display of additional information using synthetic visualization methods, such as flight paths, line-of-sight calculations, and weapon effectiveness ranges. The resulting three-dimensional database will be presented to the MC on existing display hardware, such as a panel-mounted display or a stereoscopic head-mounted display (HMD) such as the Microsoft HoloLens. An HMD will allow the imagery to be presented as a virtual hologram, letting the MC see a virtual representation of the entire battlespace floating inside the cockpit. Combining this holographic presentation with gesture input devices such as the Leap Motion will provide a natural means of interacting with the imagery. Ultimately, this world model can serve as the virtual “sand table” on which all relevant battlespace information is presented to the MC in a single interface.

Current Limitations: Commercial software packages such as Pix4Dmapper provide similar functionality. However, existing products are intended for agricultural and geological surveying, applications in which spatial accuracy is of primary importance and processing latency is of little concern. Existing processing methods also have little to no tolerance for moving objects. These limitations greatly restrict the ability of existing technologies to support military operations in rapidly evolving, dynamic urban environments, where tolerance for latency is limited and the accurate display of moving targets is critical.
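As a rough illustration of the first fusion step implied above, the following minimal sketch (Python/NumPy; the sensor coordinates, scene origin, and function names are illustrative assumptions by the editor, not requirements of this topic) converts each sensor's reported geodetic position from its metadata stream into a shared local East-North-Up (ENU) frame, the kind of common coordinate system in which a single combined three-dimensional database could be built.

```python
# Minimal sketch: place every sensor's reported geodetic position (latitude,
# longitude, altitude from its metadata stream) into one shared local
# East-North-Up (ENU) frame so imagery from distributed sensors can be fused
# into a single common 3-D scene. WGS-84 constants are standard; the sensor
# positions and reference point below are illustrative only.
import numpy as np

A = 6378137.0                    # WGS-84 semi-major axis [m]
E2 = 6.69437999014e-3            # WGS-84 first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)       # prime vertical radius
    x = (n + alt_m) * np.cos(lat) * np.cos(lon)
    y = (n + alt_m) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * np.sin(lat)
    return np.array([x, y, z])

def ecef_to_enu(ecef, ref_lat_deg, ref_lon_deg, ref_alt_m):
    lat, lon = np.radians(ref_lat_deg), np.radians(ref_lon_deg)
    ref = geodetic_to_ecef(ref_lat_deg, ref_lon_deg, ref_alt_m)
    r = np.array([[-np.sin(lon),                np.cos(lon),               0.0],
                  [-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)],
                  [ np.cos(lat) * np.cos(lon),  np.cos(lat) * np.sin(lon), np.sin(lat)]])
    return r @ (ecef - ref)

# Example: two airborne sensors and a common scene origin (illustrative values).
origin = (35.0500, -118.1500, 600.0)                   # lat, lon, alt of scene origin
sensors = {"sensor_1": (35.0612, -118.1498, 1500.0),
           "sensor_2": (35.0488, -118.1327, 1350.0)}
for name, (lat, lon, alt) in sensors.items():
    enu = ecef_to_enu(geodetic_to_ecef(lat, lon, alt), *origin)
    print(name, np.round(enu, 1))                      # east, north, up in metres
```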

PHASE I: Develop a functional proof-of-concept system capable of integrating imagery and metadata from at least two distributed sensors imaging a scene with a single moving target. Processing time will introduce a latency of no more than 60 seconds between the collection of the original sensor imagery and the final presentation of the three-dimensional moving image to the user.
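To make the Phase I geometry concrete, the sketch below (Python/NumPy; the pinhole-camera model, sensor poses, and pixel values are illustrative assumptions rather than topic requirements) back-projects the target's pixel location from two sensors into world-frame rays and triangulates the single moving target's three-dimensional position as the midpoint between the rays' closest points.

```python
# Minimal sketch: triangulate one moving target's 3-D position at a single
# time step from two distributed sensors, assuming synchronized frames, known
# sensor poses, and an ideal pinhole camera. All values are illustrative.
import numpy as np

def look_at(cam_pos, aim, up=np.array([0.0, 0.0, 1.0])):
    """World->camera rotation for a sensor at cam_pos boresighted on aim.
    Camera axes: +x right, +y down, +z forward (standard vision convention)."""
    fwd = aim - cam_pos
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right = right / np.linalg.norm(right)
    down = np.cross(fwd, right)
    return np.vstack([right, down, fwd])

def pixel_to_ray(uv, R_wc, f, c):
    """Back-project a pixel to a unit direction in the world frame."""
    d_cam = np.array([(uv[0] - c[0]) / f, (uv[1] - c[1]) / f, 1.0])
    d = R_wc.T @ d_cam
    return d / np.linalg.norm(d)

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint between the closest points of two non-parallel rays."""
    b, w0 = d1 @ d2, o1 - o2
    s = (b * (d2 @ w0) - d1 @ w0) / (1.0 - b * b)
    t = ((d2 @ w0) - b * (d1 @ w0)) / (1.0 - b * b)
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Synthetic check: two sensors observe a target at (120, 80, 0) m, local ENU.
f, c = 1000.0, (960.0, 540.0)                 # focal length [px], principal point
target = np.array([120.0, 80.0, 0.0])
rays = []
for pos in (np.array([0.0, 0.0, 300.0]), np.array([400.0, -50.0, 250.0])):
    R = look_at(pos, np.array([100.0, 100.0, 0.0]))        # boresight aim point
    p = R @ (target - pos)                                 # target in camera frame
    uv = (f * p[0] / p[2] + c[0], f * p[1] / p[2] + c[1])  # simulated pixel measurement
    rays.append((pos, pixel_to_ray(uv, R, f, c)))
(o1, d1), (o2, d2) = rays
print(triangulate_midpoint(o1, d1, o2, d2))                # ~[120.  80.   0.]
```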

PHASE II: Expand the proof-of-concept system to integrate information from at least three distributed sensors imaging a scene with up to ten moving targets. The resulting imagery will support presentation through both traditional flat-panel displays and holographic displays using a selectively transparent stereoscopic HMD. The system must be compliant with relevant DOD/industry standards (STANAG, MISB, FACE, etc.). Processing time will introduce a latency of less than 30 seconds.
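One of the cited standards, MISB ST 0601, carries per-frame sensor metadata as KLV (key-length-value) triplets alongside the video stream. The sketch below is a deliberately simplified reading of that format: it assumes the outer 16-byte universal key and set length have already been stripped, that tags fit in one byte, and that item lengths use the short BER form. The tag numbers and value scalings reflect our understanding of ST 0601 and should be verified against the published standard before use.

```python
# Minimal sketch: pull sensor position and time out of a MISB ST 0601-style
# KLV local set. Simplified and illustrative; verify tag numbers and scalings
# against the published standard.
import struct

SENSOR_TAGS = {
    2:  ("unix_time_us",   lambda b: struct.unpack(">Q", b)[0]),
    13: ("sensor_lat_deg", lambda b: struct.unpack(">i", b)[0] * 90.0 / (2**31 - 1)),
    14: ("sensor_lon_deg", lambda b: struct.unpack(">i", b)[0] * 180.0 / (2**31 - 1)),
    15: ("sensor_alt_m",   lambda b: struct.unpack(">H", b)[0] * 19900.0 / 65535.0 - 900.0),
}

def parse_local_set(payload: bytes) -> dict:
    """Walk tag-length-value triplets and decode the items of interest."""
    out, i = {}, 0
    while i < len(payload):
        tag, length = payload[i], payload[i + 1]      # 1-byte tag, short BER length
        value = payload[i + 2:i + 2 + length]
        if tag in SENSOR_TAGS and len(value) == length:
            name, decode = SENSOR_TAGS[tag]
            out[name] = decode(value)
        i += 2 + length
    return out

# Example payload: timestamp plus sensor latitude/longitude/altitude (synthetic).
payload = (bytes([2, 8]) + struct.pack(">Q", 1700000000000000)
           + bytes([13, 4]) + struct.pack(">i", int(35.05 / 90.0 * (2**31 - 1)))
           + bytes([14, 4]) + struct.pack(">i", int(-118.15 / 180.0 * (2**31 - 1)))
           + bytes([15, 2]) + struct.pack(">H", int((1500.0 + 900.0) / 19900.0 * 65535)))
print(parse_local_set(payload))
```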

PHASE III DUAL USE APPLICATIONS: The system will provide a means of augmenting the collected imagery with synthetic supplemental geo-referenced information, such as flight paths, line of sight calculations, and weapon effectiveness ranges. Processing time will introduce a latency of less than 15 seconds. Transition the system to the SUMIT (Synergistic Unmanned-Manned Intelligent Teaming) 6.3 research program, which is investigating human-machine interface and cognitive decision-aiding concepts for future MCs involved in MUM-T. The system will also be well-suited for transition to a variety of commercial applications, including: monitoring systems for police, border patrol, and private security; and the entertainment industry, specifically motion capture for the production of video games and movies.
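As an example of the synthetic geo-referenced overlays mentioned above, the sketch below (Python/NumPy; the terrain grid, cell spacing, and observer/target points are illustrative placeholders) performs a simple line-of-sight check by sampling a sight line against a gridded height model. A fielded system would draw on surveyed elevation data and the fused three-dimensional scene itself.

```python
# Minimal sketch: line-of-sight check against a gridded terrain model, the
# kind of geo-referenced overlay (visibility, flight paths, weapon ranges)
# the fused 3-D scene is intended to carry. All values are illustrative.
import numpy as np

def has_line_of_sight(observer, target, terrain, cell_m, samples=200):
    """True if the straight segment observer->target clears the terrain.
    observer/target: (east, north, up) in metres; terrain: 2-D height grid
    indexed [north_idx, east_idx] with uniform spacing cell_m."""
    obs, tgt = np.asarray(observer, float), np.asarray(target, float)
    for t in np.linspace(0.0, 1.0, samples):
        p = obs + t * (tgt - obs)
        col = int(round(p[0] / cell_m))           # east  -> column index
        row = int(round(p[1] / cell_m))           # north -> row index
        if not (0 <= row < terrain.shape[0] and 0 <= col < terrain.shape[1]):
            continue                              # outside the grid: assume clear
        if terrain[row, col] > p[2]:              # terrain rises above the sight line
            return False
    return True

# Synthetic terrain: flat ground with a 60 m ridge running north-south.
terrain = np.zeros((100, 100))                    # 100 x 100 cells, 10 m spacing
terrain[:, 48:52] = 60.0
print(has_line_of_sight((100, 500, 20), (900, 500, 20), terrain, cell_m=10))   # False: ridge blocks the view
print(has_line_of_sight((100, 500, 150), (900, 500, 20), terrain, cell_m=10))  # True: sight line clears the ridge
```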

REFERENCES:

    • A. Blake, A. Zisserman, "Visual Reconstruction," MIT Press, 1987.
    • I. Colomina, P. Molina, "Unmanned aerial systems for photogrammetry and remote sensing: A review," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 92, pp. 79-97, June 2014.
    • F. Devernay, O. Faugeras, "Computing differential properties of 3-D shapes from stereoscopic images without 3-D models," 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 208-213, 1994.
    • Q. Huynh-Thu, P. Le Callet, M. Barkowsky, "Video quality assessment: From 2D to 3D - Challenges and future trends," 17th IEEE International Conference on Image Processing (ICIP), pp. 4025-4028, 2010.
    • H. Shum, S. B. Kang, "Review of image-based rendering techniques," Proceedings of SPIE 4067, Visual Communications and Image Processing, 2000.
    • TRADOC Pamphlet 525-7-15, The United States Army Concept Capability Plan for Army Aviation Operations 2015-2024 (http://www.tradoc.army.mil/tpubs/pams/p525-7-15.pdf).

KEYWORDS: image processing, three-dimensional imagery, photogrammetry, human systems, Mission Commander, situation awareness, sensor, computer vision

  • TPOC-1: Grant Taylor
  • Phone: 650-604-1747
  • Email: grant.s.taylor.civ@mail.mil
  • TPOC-2: Christoph Borel-Donohue
  • Phone: 301-394-4143
  • Email: christoph.c.boreldonohue.civ@mail.mil