
Artificial Intelligence Tool for Background Database Generation


TECH FOCUS AREAS: Artificial Intelligence/Machine Learning; General Warfighting Requirements (GWR)

TECHNOLOGY AREAS: Sensors; Space Platform; Air Platform; Battlespace

OBJECTIVE: Scene generation tools are used to create synthetic image data representative of what a sensor on a weapon system would measure. Creating synthetic data is limited by a lack of available models or of measured databases capturing the required radiometric characteristics of the scene. The objective of this topic is to develop a robust capability to approximate backgrounds at the resolution required in closed-loop simulations, using a combination of geospatial information, the required time of day and year, measured databases, and trained artificial intelligence algorithms for image feature identification and construction.

DESCRIPTION: Scene generation tools provide in-band models of sensor output and input to simulators, allowing for the research and development of new weapon systems. These tools drive scene projectors during hardware-in-the-loop testing and provide synthetic output for software-in-the-loop testing and for algorithm development for new sensor concepts. Scientists developing new munition seeker concepts, and those responsible for executing test programs, are limited to a small subset of geographic locations and environmental conditions. One example is the limited ability to capture scene changes due to weather, time of year, and time of day. Another issue is the ability to create data at the resolution of a weapon seeker that is rapidly approaching the ground; the databases currently used are at a fixed resolution, which may be significantly coarser than the seeker resolution. The intent of this topic is to establish an automated process that can generate an approximation of scene background data using topographical maps, transportation maps, maps of watershed features, knowledge of area vegetation, geographic features, weather, and timeframe.
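The resolution mismatch described above can be made concrete with the small-angle relation GSD = altitude × IFOV: the ground sample distance a single seeker pixel subtends shrinks as the weapon descends, eventually dropping below any fixed database resolution. The sketch below is purely illustrative; the 0.1 mrad IFOV, the altitudes, and the 0.5 m database resolution are assumed values, not figures from this topic.

```python
import math

def required_gsd(altitude_m: float, ifov_mrad: float) -> float:
    """Ground sample distance (m) subtended by one detector pixel at nadir.

    Uses the small-angle approximation GSD = altitude * IFOV. The IFOV
    value passed by callers below is a notional assumption.
    """
    return altitude_m * ifov_mrad * 1e-3

# Notional seeker with a 0.1 mrad instantaneous field of view, compared
# against an assumed fixed 0.5 m background database resolution.
DATABASE_GSD_M = 0.5
for alt in (10_000, 1_000, 100):
    gsd = required_gsd(alt, 0.1)
    status = "database suffices" if gsd >= DATABASE_GSD_M else "database too coarse"
    print(f"altitude {alt:>6} m -> required GSD {gsd:6.3f} m ({status})")
```

At 10 km the assumed database is finer than the seeker needs, but by 100 m altitude the seeker pixel covers 1 cm of ground while the database still delivers 0.5 m cells, which is the gap the topic asks an AI-based synthesis capability to fill.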
All available data, including public-domain data such as Google Earth, should be considered. Real data from high-resolution assets should be used as part of the scene construction process, where appropriate, to establish realism and to train the system in the sense of deep learning. The goal is to create a capability that is global and multi-spectral, representing a user-specified sensor waveband. The capability must be repeatable, so that test results can be duplicated to the extent possible. While the capability might be used in part as a preprocessor, the final stage must operate as a plug-in to standard scene generation tools, such as FLITES, in order to integrate targets, people, moving vehicles, and other objects into the scene and perform the final radiometric discretization and image modeling.

PHASE I: Perform a preliminary demonstration creating background data from commonly available resources and knowledge of a given geographic region. The demonstration should establish the feasibility of a range of resolutions characteristic of a sensor moving from high altitude to the ground. Narrowing the scope to an IR band is acceptable as a proof of concept. A plan for Phase II development, and the role of artificial intelligence in that process, shall be established. Limitations of the planned capability shall be documented.

PHASE II: Finalize the design of a demonstration prototype. Develop, integrate, and train the solution prototype. Establish and document relevant use cases. Plan and coordinate one or more demonstrations to support a proof-of-concept determination. Perform experiments and analyze results to establish the adequacy of the solution approach and minimize transition risk. Contact potential customers and transition partners to support Phase III activities. Provide regular communication to the government sponsor to ensure understanding and risk mitigation. Deliver a prototype to AFRL/RWWG compatible with the FLITES simulation tool.
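The "final radiometric discretization" for a user-specified waveband typically amounts to integrating spectral radiance over the sensor's band. The sketch below shows the idea for an ideal blackbody background pixel using Planck's law; the 3-5 µm MWIR band and 300 K temperature are illustrative assumptions, not requirements of this topic, and a real tool would fold in emissivity, atmosphere, and the sensor's spectral response.

```python
import numpy as np

# CODATA physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_spectral_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance L(lambda, T) in W / (m^2 * sr * m)."""
    return (2.0 * H * C**2 / wavelength_m**5
            / np.expm1(H * C / (wavelength_m * KB * temp_k)))

def in_band_radiance(temp_k, band_um=(3.0, 5.0), n=2000):
    """Integrate Planck radiance over a waveband via the trapezoidal rule.

    The 3-5 um MWIR default band is an assumption for illustration; the
    topic asks for a user-specified waveband.
    """
    lam = np.linspace(band_um[0], band_um[1], n) * 1e-6  # um -> m
    vals = planck_spectral_radiance(lam, temp_k)
    return float(np.sum((vals[1:] + vals[:-1]) * 0.5 * np.diff(lam)))

# A 300 K background pixel in the assumed MWIR band:
print(f"in-band radiance: {in_band_radiance(300.0):.3f} W / (m^2 sr)")
```

Hotter scene elements integrate to higher in-band radiance, which is what lets a synthesized background be rendered consistently with the targets and plumes a tool such as FLITES injects on top of it.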
PHASE III DUAL USE APPLICATIONS: Add additional classified data sources and work with multiple end users to provide additional specific capabilities required.

NOTE: The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual-use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the proposed tasks intended for accomplishment by the FN(s) in accordance with section 5.4.c.(8) of the Announcement and within the AF Component-specific instructions. Offerors are advised that foreign nationals proposed to perform on this topic may be restricted due to the technical data under US export control laws. Please direct questions to the Air Force SBIR/STTR Contracting Officer, Ms. Kris Croake.

REFERENCES:
1. Crow, D., Coker, C., & Keen, W. (2006), "Fast Line-of-Sight Imagery for Target and Exhaust-plume Signatures (FLITES) scene generation program," Proc. SPIE, art. no. 62080J, doi:10.1117/12.669306
2. Savage, J., Coker, C., Thai, B., Aboutalib, O., & Pau, J. (2007), "Irma 5.2 multi-sensor signature prediction model," Proc. SPIE 6965, doi:10.1117/12.778000
3. Wilcoxen, B. A., & Heckathorn, H. M. (1996), "Synthetic scene generation model (SSGM R7.0)," Proc. SPIE 2742, Targets and Backgrounds: Characterization and Representation II, doi:10.1117/12.243028
4. Devaranjan, J., Kar, A., & Fidler, S., "Meta-Sim2: Unsupervised Learning of Scene Structure for Synthetic Data Generation," arXiv