Description:
OUSD (R&E) MODERNIZATION PRIORITY: Artificial Intelligence/Machine Learning
TECHNOLOGY AREA(S): Sensors
The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.
OBJECTIVE: Develop a methodology for producing synthetic multi-spectral satellite sensor data for the purpose of training low-shot machine learning models.
DESCRIPTION: DTRA requires the ability to quickly and efficiently identify objects of interest related to defeating improvised threat networks and to understand the characteristics and conditions of specific operational environments. In certain cases, objects of interest may be rare, necessitating a novel method of performing accurate object detection. For certain objects, panchromatic or 3-band imagery may be insufficient for accurate identification; additional bands in the near- and mid-infrared spectrum may be needed to take advantage of spectral characteristics beyond the visible range. Currently, many generative modeling techniques are applied solely to the visible wavelengths. A need exists to fully explore the application of generative modeling techniques to multispectral imagery datasets.
Various studies have shown that Generative Adversarial Networks (GANs) can successfully augment standard RGB datasets. However, less research has focused on how GANs could reproduce multispectral imagery (MSI) in the near- and mid-IR range. GAN models are typically difficult to train, and the additional complexities of multispectral imagery, including higher radiometric and spectral resolution, make this a challenging task. The thrust of this effort is to create a GAN for multispectral data that can augment current training sets while achieving the same robustness, stability, accuracy, and correlation to the original bands. The multispectral GAN model would need to be tested in various terrain and seasonal environments and shown to retain spectral and radiometric characteristics while achieving good visual quality. The final model would need to be adaptable to accept imagery in various formats, with varying resolutions and bands. Various quantitative metrics should be identified and explained.
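For illustration only, the following is a minimal sketch, in Python with PyTorch, of how a conventional DCGAN-style generator and discriminator could be adapted from 3-band RGB imagery to an arbitrary number of spectral bands. The band count, chip size, and layer widths are assumptions made for the sketch and are not requirements of this topic.

# Minimal sketch (not a prescribed architecture): a DCGAN-style generator and
# discriminator extended from 3 RGB channels to N_BANDS spectral channels.
# Band count, chip size (64 x 64), and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

N_BANDS = 8        # e.g., visible + near-IR + mid-IR bands (assumed)
LATENT_DIM = 128   # size of the generator's noise vector (assumed)

class MSIGenerator(nn.Module):
    """Maps a latent vector to an N_BANDS x 64 x 64 synthetic image chip."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            # The final layer emits all spectral bands jointly so that
            # inter-band correlations can be learned rather than modeled per band.
            nn.ConvTranspose2d(32, N_BANDS, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), LATENT_DIM, 1, 1))

class MSIDiscriminator(nn.Module):
    """Scores N_BANDS x 64 x 64 chips as real or synthetic."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(N_BANDS, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 1, 8, 1, 0),  # single real/fake logit per chip
        )

    def forward(self, x):
        return self.net(x).view(-1)

Generating all bands jointly from a single latent vector is one way to preserve the inter-band correlation that this topic identifies as a property to be retained; it is a design choice shown here for illustration, not a mandated approach.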
PHASE I: The performer shall conduct a proof-of-concept study to identify the processes and algorithms most successful at generating realistic, useful synthetic multi-band imagery in the near- and mid-IR wavelengths. The end report and demonstration shall provide quantitative metrics that help determine the feasibility of continuing to a Phase II effort.
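As one hedged illustration of the kind of quantitative metrics a Phase I report might include (the topic does not prescribe specific metrics), the Python sketch below computes per-band RMSE and the mean spectral angle between a real and a synthetic multispectral chip, two measures commonly used to characterize radiometric and spectral fidelity.

# Illustrative fidelity metrics for synthetic multispectral chips (assumed choices):
# per-band RMSE (radiometric fidelity) and the spectral angle mapper, SAM
# (preservation of per-pixel spectral signatures across bands).
import numpy as np

def per_band_rmse(real: np.ndarray, synth: np.ndarray) -> np.ndarray:
    """RMSE per spectral band for arrays shaped (bands, rows, cols)."""
    return np.sqrt(((real - synth) ** 2).mean(axis=(1, 2)))

def mean_spectral_angle(real: np.ndarray, synth: np.ndarray) -> float:
    """Mean spectral angle (radians) between per-pixel spectra of two chips."""
    r = real.reshape(real.shape[0], -1)   # (bands, pixels)
    s = synth.reshape(synth.shape[0], -1)
    cos = (r * s).sum(axis=0) / (
        np.linalg.norm(r, axis=0) * np.linalg.norm(s, axis=0) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)).mean())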
PHASE II: The performer shall mature the algorithms to improve the accuracy, robustness, and stability of the synthetic multispectral imagery generation. The algorithms shall be applied in multiple terrain environments with various objects of interest and differing imagery sources. The performer shall design, develop, and deliver a prototype, to include software code. The Phase II deliverables are (1) a report detailing the finalized approaches and analysis of performance, (2) a proof-of-concept demonstration, and (3) software code.
PHASE III DUAL USE APPLICATIONS: Finalize and commercialize the software for use by government and commercial customers (e.g., government agencies, satellite imagery companies). Although additional funding may be provided through DoD sources, the awardee should look to other public or private sector funding sources for assistance with transition and commercialization.
REFERENCES:
1. Abady, L., Barni, M., Garzelli, A., and Tondi, B. "GAN Generation of Synthetic Multispectral Satellite Images." Remote Sensing (2020).
2. Ma, J., Yu, W., Liang, P., Li, C., and Jiang, J. "FusionGAN: A Generative Adversarial Network for Infrared and Visible Image Fusion." Information Fusion, vol. 48, 2019, pp. 11-26, ISSN 1566-2535.
3. Kerdegari, H., Razaak, M., Argyriou, V., and Remagnino, P. "Semi-supervised GAN for Classification of Multispectral Imagery Acquired by UAVs." (2019).
4. Gong, M., Niu, X., Zhang, P., and Li, Z. "Generative Adversarial Networks for Change Detection in Multispectral Imagery." IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 12, pp. 2310-2314, Dec. 2017, doi: 10.1109/LGRS.2017.2762694.
5. Mohandoss, T., Kulkarni, A., Northrup, D., Mwebaze, E., and Alemohammad, H. "Generating Synthetic Multispectral Satellite Imagery from Sentinel-2." arXiv preprint arXiv:2012.03108.
6. Perez, A., et al. "Semi-supervised Multitask Learning on Multispectral Satellite Images Using Wasserstein Generative Adversarial Networks (GANs) for Predicting Poverty." arXiv preprint arXiv:1902.11110 (2019).
KEYWORDS: GAN; multispectral; synthetic data