
Deep Generative Modeling of Infrared Datasets for Aided Target Recognition


TECHNOLOGY AREA(S): Electronics 

OBJECTIVE: Develop and demonstrate techniques and algorithms for Deep Generative Modeling for the creation of Infrared (IR) datasets to facilitate Machine Learning and Aided Target Recognition (AiTR). 

DESCRIPTION: Applications of Deep Learning and Machine Learning to imagery and video have advanced dramatically over the last decade. However, these achievements have been based almost entirely on visible-band imagery and video. The data requirements of these algorithms are enormous, and developers have been able to rely on masses of readily available visible-band data. Militarily significant IR data does not currently exist in the quantities and varieties necessary to fully leverage the advantages of Deep Learning. What is needed is a set of techniques and algorithms that can artificially generate militarily significant (as in specific localities and target types) IR video and imagery in their entirety, and that can augment existing IR data with novel prescribed objects and targets. Generative Adversarial Networks have recently shown success and promise; however, their image constructions are mostly intended for visual effect, and much higher fidelity is essential for training AiTRs. Artificial IR modeling systems also exist at the Night Vision and Electronic Sensors Directorate (NVESD). This effort aims to overcome the data limitations listed above and to enhance the realism of current NVESD modeling systems. The goal is to support an effective, fieldable IR AiTR system, enhancing vehicle threat detection and avoidance. This effort directly supports the Army Modernization Priority: Next Generation Combat Vehicle (NGCV), benefiting the automation associated with the NGCV through improved algorithm performance. It will enable NGCV sensors to rapidly determine external threats and to alleviate operator fatigue through automation of surveillance and navigational functions.
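
As an illustration of the class of techniques involved, the following is a minimal sketch, in Python with PyTorch, of a DCGAN-style generator and discriminator for single-channel IR imagery. The 64x64 chip size, layer widths, latent dimension, and training step are assumptions for illustration, not a prescribed architecture.

    # Minimal sketch of a DCGAN-style generator/discriminator for
    # single-channel (IR) imagery; sizes and names are illustrative.
    import torch
    import torch.nn as nn

    LATENT_DIM = 100  # size of the generator's noise vector (assumption)

    class Generator(nn.Module):
        """Maps a latent noise vector to a 1-channel 64x64 synthetic IR chip."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(LATENT_DIM, 256, 4, 1, 0, bias=False),  # 4x4
                nn.BatchNorm2d(256), nn.ReLU(True),
                nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),         # 8x8
                nn.BatchNorm2d(128), nn.ReLU(True),
                nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),          # 16x16
                nn.BatchNorm2d(64), nn.ReLU(True),
                nn.ConvTranspose2d(64, 32, 4, 2, 1, bias=False),           # 32x32
                nn.BatchNorm2d(32), nn.ReLU(True),
                nn.ConvTranspose2d(32, 1, 4, 2, 1, bias=False),            # 64x64
                nn.Tanh(),  # output in [-1, 1]; rescale to sensor units downstream
            )

        def forward(self, z):
            return self.net(z)

    class Discriminator(nn.Module):
        """Scores a 1-channel 64x64 chip as real (1) or synthetic (0)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, True),
                nn.Conv2d(32, 64, 4, 2, 1, bias=False),
                nn.BatchNorm2d(64), nn.LeakyReLU(0.2, True),
                nn.Conv2d(64, 128, 4, 2, 1, bias=False),
                nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
                nn.Conv2d(128, 1, 8, 1, 0, bias=False),  # 8x8 -> 1x1 logit
            )

        def forward(self, x):
            return self.net(x).view(-1)

    def train_step(gen, disc, real, opt_g, opt_d,
                   loss_fn=nn.BCEWithLogitsLoss()):
        """One adversarial update on a batch of real IR chips."""
        b = real.size(0)
        z = torch.randn(b, LATENT_DIM, 1, 1)
        fake = gen(z)
        # Discriminator: push real toward 1, synthetic toward 0.
        opt_d.zero_grad()
        d_loss = (loss_fn(disc(real), torch.ones(b)) +
                  loss_fn(disc(fake.detach()), torch.zeros(b)))
        d_loss.backward()
        opt_d.step()
        # Generator: fool the discriminator into scoring fakes as real.
        opt_g.zero_grad()
        g_loss = loss_fn(disc(fake), torch.ones(b))
        g_loss.backward()
        opt_g.step()
        return d_loss.item(), g_loss.item()

Video synthesis and data augmentation with prescribed targets would build on the same adversarial framework, conditioned on target type, pose, and background.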

PHASE I: Show proof of concept for Deep Generative Modeling algorithms for IR imagery and video synthesis. Show proof of concept for algorithms that greatly improve the realism of synthetic imagery. Integrate the algorithms into a comprehensive algorithm suite. Test the algorithms against existing NVESD modeling methodologies. Demonstrate the feasibility of the techniques in creating IR video sequences. Distribute demonstration code to the Government for independent verification. Successful testing at the end of Phase I must show a level of algorithmic achievement such that Phase II development would demand few fundamental breakthroughs and would be a natural continuation and development of Phase I activity.
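
The realism requirement implies quantitative image-quality scoring in addition to visual inspection. Below is a hedged Python sketch of one way to score synthetic IR chips against measured imagery, using single-scale SSIM from scikit-image as a simplified stand-in for the multi-scale structural similarity index discussed in the references; the chip shapes and pairing scheme are assumptions for illustration.

    # Score realism of synthetic IR chips against paired measured chips
    # via mean SSIM (single-scale stand-in for MS-SSIM).
    import numpy as np
    from skimage.metrics import structural_similarity

    def mean_ssim(synthetic: np.ndarray, measured: np.ndarray) -> float:
        """Average SSIM between paired stacks of (N, H, W) IR image chips."""
        scores = [
            structural_similarity(s, m, data_range=m.max() - m.min())
            for s, m in zip(synthetic, measured)
        ]
        return float(np.mean(scores))

    # Random stand-in data for illustration; real use would pair NVESD model
    # output with measured sensor imagery of the same scene.
    rng = np.random.default_rng(0)
    synth = rng.random((8, 64, 64)).astype(np.float32)
    meas = rng.random((8, 64, 64)).astype(np.float32)
    print(f"mean SSIM: {mean_ssim(synth, meas):.3f}")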

PHASE II: Complete primary algorithmic development. Complete implementation of the algorithms. Test the completed algorithms on government-controlled data. The system must achieve a 25% improvement in classification rate and false alarm rate over AiTR algorithms trained on real imagery alone (using the government baseline AiTR algorithm). The principal deliverables are the algorithms; documented algorithms will be fully delivered to the government in order to demonstrate and further test system capability. Successful testing at the end of Phase II must show a level of algorithmic achievement such that Phase III algorithmic development would demand no major breakthroughs and would be a natural continuation and development of Phase II activity.
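
The following is a minimal sketch of how the Phase II acceptance check might be computed, assuming "25% improvement" means a 25% relative gain in classification rate together with a 25% relative reduction in false alarm rate; the actual interpretation, baseline AiTR, and test data are defined by the government.

    # Hedged sketch of the Phase II acceptance check under the relative-
    # improvement interpretation described above.
    def meets_phase2_threshold(base_pc: float, base_far: float,
                               aug_pc: float, aug_far: float,
                               required: float = 0.25) -> bool:
        """Compare an AiTR trained on real + synthetic data to the baseline.

        base_pc / aug_pc  : probability of correct classification (higher is better)
        base_far / aug_far: false alarm rate (lower is better)
        """
        pc_gain = (aug_pc - base_pc) / base_pc      # relative improvement
        far_drop = (base_far - aug_far) / base_far  # relative reduction
        return pc_gain >= required and far_drop >= required

    # Illustrative numbers only: baseline 60% Pc / 0.10 FAR vs. 78% / 0.07.
    print(meets_phase2_threshold(0.60, 0.10, 0.78, 0.07))  # True: +30% Pc, -30% FAR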

PHASE III: Complete final algorithmic development. Complete the final software system implementation of the algorithms. Test the completed algorithms on government-controlled data. The system must achieve a 25% improvement in classification rate and false alarm rate over algorithms trained on real imagery alone (using the government baseline AiTR algorithm). Documented algorithms (along with system software) will be fully delivered to the government in order to demonstrate and further test system capability. Applications of the system will include the NVESD Multi-Function Display Program, vehicle navigation packages, and AiTR systems. Civilian applications include night surveillance, crowd monitoring, navigation aids, and devices requiring rapid adaptation to new environments.

REFERENCES: 

1: Steven A. Israel, J.H. Goldstein, Jeffrey S. Klein, James Talamonti, Franklin Tanner, Shane Zabel, Philip A. Sallee, and Lisa McCoy, "Generative Adversarial Networks for Classification," 2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR).

2: Dimitris Kastaniotis, Ioanna Ntinou, Dimitrios Tsourounis, George Economou, and Spiros Fotopoulos, "Attention-Aware Generative Adversarial Networks (ATA-GANs)," 2018 IEEE 13th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP).

3: Parimala Kancharla and Sumohana S. Channappayya, "Improving the Visual Quality of Generative Adversarial Network (GAN)-Generated Images Using the Multi-Scale Structural Similarity Index," 2018 25th IEEE International Conference on Image Processing (ICIP).

KEYWORDS: Deep Learning, Generative Adversarial Networks, Aided Target Recognition, Neural Networks, Infrared Video 
