Description:
TECHNOLOGY AREA(S): Air Platform, Information Systems, Sensors, Electronics
OBJECTIVE: The objective of this topic is to develop an innovative method for real-time or near-real-time processing of high-resolution, Red-Green-Blue (RGB), still-frame images and/or streamed Full Motion Video (FMV) received from an in-flight tactical Group 1 Unmanned Aerial System (UAS). The automated workflow should take input from the imagery/video stream, generate a 3D scene model, annotate and integrate the model with platform telemetry or data from other airborne sensors (tagging, tracking, and locating (TTL); signals intelligence (SIGINT); electronic warfare (EW); etc.), and present the result to the sensor and/or UAS operator.
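The following is a minimal Python sketch of that workflow, assuming hypothetical frame-source, telemetry, and display interfaces; none of the names or structures below are prescribed by this topic.

    # Hypothetical near-real-time workflow sketch: frames arrive from the UAS
    # datalink, feed an incremental 3D reconstruction, get annotated with
    # platform telemetry, and are pushed to the operator display.
    from dataclasses import dataclass, field

    @dataclass
    class Telemetry:
        lat: float
        lon: float
        alt_m: float
        heading_deg: float

    @dataclass
    class SceneModel:
        frames_used: int = 0
        annotations: list = field(default_factory=list)

        def update(self, frame) -> None:
            # Placeholder for incremental structure-from-motion refinement.
            self.frames_used += 1

        def annotate(self, telemetry: Telemetry, label: str) -> None:
            self.annotations.append((telemetry.lat, telemetry.lon, label))

    def process_stream(frames, telemetry_feed, display):
        """Consume synchronized frame/telemetry pairs and keep the model current."""
        model = SceneModel()
        for frame, telem in zip(frames, telemetry_feed):
            model.update(frame)                      # refine geometry/texture
            model.annotate(telem, "platform position")
            display(model)                           # present to sensor/UAS operator
        return model

    if __name__ == "__main__":
        fake_frames = [object()] * 5
        fake_telem = [Telemetry(0.0, 0.0, 150.0, 90.0)] * 5
        process_stream(fake_frames, fake_telem, lambda m: print(m.frames_used, "frames"))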
DESCRIPTION: This topic seeks innovative proposals for a near-real-time method for downloading and processing multiple, high-resolution, RGB still-frame images (or segments of streaming video) from an in-flight Puma Unmanned Aerial Vehicle (UAV), automatically generating and annotating an accurate, textured 3D scene model from the data, fusing the scene model with real-time sensor data, and presenting the results to the sensor and/or UAS operator. This topic does not seek to develop a new airborne intelligence, surveillance, or reconnaissance sensor; rather, emphasis is placed on leveraging the air vehicle's existing imaging system, mobility, and on-board sensors along with state-of-the-art imagery processing capabilities to produce a near-real-time, augmented-reality 3D model of an objective area. Proposed solutions may assume that the UAV is in orbit around the objective area. Models should be continuously updated and refined as more data becomes available. Models should be saved in Ground Control Station non-volatile storage for post-mission forensic analysis, and models already in storage should be accessible to the system for reloading, reuse, and refinement. Systems must support Special Operations Forces (SOF) missions including, but not limited to, Operational Preparation of the Environment; Advance Force Operations; Intelligence, Surveillance, and Reconnaissance (ISR) Operations; and Force Protection and Overwatch. Proposals are expected to address the positive influence the proposed solution would exert on ISR UAS and sensor-employment concepts of operations and on the SOF mission set.
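As one illustration of the kind of photogrammetric processing envisioned here, the sketch below batches a folder of downloaded frames through the open-source COLMAP structure-from-motion pipeline to produce a sparse scene reconstruction; the directory layout and the choice of COLMAP are assumptions of the sketch, not requirements of this topic.

    # Sketch: run a set of downloaded Puma RGB frames through COLMAP to produce
    # a sparse 3D reconstruction that could later be textured and refined.
    # Paths and the use of COLMAP are illustrative assumptions only.
    import subprocess
    from pathlib import Path

    def reconstruct(image_dir: str, work_dir: str) -> Path:
        work = Path(work_dir)
        work.mkdir(parents=True, exist_ok=True)
        db = work / "frames.db"
        sparse = work / "sparse"
        sparse.mkdir(exist_ok=True)

        # 1. Detect and describe features in every frame.
        subprocess.run(["colmap", "feature_extractor",
                        "--database_path", str(db),
                        "--image_path", image_dir], check=True)
        # 2. Match features across frames (an orbiting UAV gives strong overlap).
        subprocess.run(["colmap", "exhaustive_matcher",
                        "--database_path", str(db)], check=True)
        # 3. Incrementally triangulate cameras and points into a sparse model.
        subprocess.run(["colmap", "mapper",
                        "--database_path", str(db),
                        "--image_path", image_dir,
                        "--output_path", str(sparse)], check=True)
        return sparse

    if __name__ == "__main__":
        print(reconstruct("frames/", "model_work/"))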
PHASE I: Conduct a feasibility study and initial system design to assess what is in the art of the possible that satisfies the requirements specified in the paragraph above entitled "Description." As part of this feasibility study, proposers shall address all viable overall system design options and meet or exceed the following objective (O) and threshold (T) performance parameter specifications (see the illustrative sketch following this phase description):
1. Minimum number of still images necessary: O = 60, T = 120.
2. Minimum length of FMV frame sequence necessary: O = 4 min, T = 8 min.
3. Level of Detail (LOD) (greatest detail around the center of the field of view, or "objective area"): O = 4 (or continuous), T = 2.
4. Model resolution (at the greatest level of detail): O <= 0.50 m, T = 0.75 m.
5. Objective area: O = 200 m x 200 m, T = 100 m x 100 m.
6. Computational latency (time from first image or video frame until an initial, specification-compliant model is available): O = 8 min, T = 16 min.
7. Weight (total net increase of UAS transport weight): O < 2 kg, T < 3 kg.
8. Set-up time, net increase (the amount of time added to the UAS GCS initial set-up): O = T <= 3 min.
The objective of this USSOCOM Phase I SBIR effort is to conduct and document the results of a thorough feasibility study of what is in the art of the possible within the given trade space. The study should investigate all known options that meet or exceed the minimum performance parameters specified in this topic, address the risks and potential payoffs of the innovative technology options investigated, recommend the option that best achieves the objective of this technology pursuit, and provide an initial, system-level design. The funds obligated on the resulting Phase I SBIR contracts are to be used for the sole purpose of conducting this study, using scientific experiments and laboratory studies as necessary. Operational prototypes will not be developed with USSOCOM SBIR funds during Phase I feasibility studies. An operational prototype delivered at the end of a Phase I feasibility study, even if developed with non-SBIR funds, will not be considered in deciding whether a firm will be selected for Phase II.
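Purely as an illustration, the objective/threshold pairs above can be captured in a small data structure so that a candidate design's measured values can be scored against them; the field names and example values below are assumptions of this sketch, not part of the topic.

    # Sketch: encode a few of the Phase I objective (O) / threshold (T)
    # parameters and report whether a measured design value meets them.
    from dataclasses import dataclass

    @dataclass
    class Spec:
        name: str
        objective: float
        threshold: float
        lower_is_better: bool = True  # e.g., latency, weight, resolution

        def grade(self, measured: float) -> str:
            meets_o = measured <= self.objective if self.lower_is_better else measured >= self.objective
            meets_t = measured <= self.threshold if self.lower_is_better else measured >= self.threshold
            return "objective" if meets_o else "threshold" if meets_t else "non-compliant"

    SPECS = [
        Spec("model resolution (m)", 0.50, 0.75),
        Spec("computational latency (min)", 8, 16),
        Spec("added transport weight (kg)", 2, 3),
        Spec("still images required", 60, 120),
    ]

    if __name__ == "__main__":
        for spec, value in zip(SPECS, [0.6, 7, 2.5, 90]):  # example measurements
            print(spec.name, "->", spec.grade(value))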
PHASE II: Develop, install, and demonstrate a prototype system determined, during the Phase I feasibility study, to be the most feasible solution to meet the stated Government requirements. Phase II will include additional requirements specifying the ability to use the presented model to perform analytics such as mensuration, line-of-sight analysis, integration with data from specific third-party sensors, and others.
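A minimal line-of-sight sketch over a gridded height model follows, assuming the Phase II scene can be sampled as a 2-D elevation array; this grid approximation stands in for a full test against the textured 3D mesh.

    # Sketch: test line of sight between two points over a regular elevation grid
    # by stepping along the ray and comparing terrain height against the sight line.
    import numpy as np

    def line_of_sight(height: np.ndarray, a, b, eye_m: float = 1.8, steps: int = 200) -> bool:
        """a, b are (row, col) grid indices; heights in metres; True if b is visible from a."""
        (r0, c0), (r1, c1) = a, b
        z0 = height[r0, c0] + eye_m
        z1 = height[r1, c1] + eye_m
        for t in np.linspace(0.0, 1.0, steps)[1:-1]:
            r = int(round(r0 + t * (r1 - r0)))
            c = int(round(c0 + t * (c1 - c0)))
            sight_z = z0 + t * (z1 - z0)          # height of the sight line here
            if height[r, c] > sight_z:            # terrain blocks the ray
                return False
        return True

    if __name__ == "__main__":
        dem = np.zeros((100, 100))
        dem[50, 40:60] = 30.0                          # a wall across the middle
        print(line_of_sight(dem, (50, 10), (50, 90)))  # False: blocked
        print(line_of_sight(dem, (10, 10), (10, 90)))  # True: clear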
PHASE III: This system could be used in a broad range of military and non-military applications where it is desirable to construct 3D models from sparse data sets of still-frame images or short video clips.
REFERENCES:
1: "Level of Detail", 26 June 2017, at 11:18;https://en.wikipedia.org/wiki/Level_of_detail
2: "Texture Mapping", 30 June 2017, at 22:52;https://en.wikipedia.org/wiki/Texture_mapping
3: "AeroVironment RQ-20 Puma", 25 April 2017, at 00:30; https://en.wikipedia.org/wiki/AeroVironment_RQ-20_Puma
4: "United States Special Operations Command", 30 June 2017, at 22:20; https://en.wikipedia.org/wiki/United_States_Special_Operations_Command
5: "Joint Publication 3-05 Special Operations", 16 July 2014; http://www.dtic.mil/doctrine/new_pubs/jp3_05.pdf
6: "Special Forces, Primary Missions", 21 March 2016;http://www.goarmy.com/special-forces/primary-missions.html
KEYWORDS: UAS, UAV, Puma, 3D, Image Processing, Video Processing, Special Operations