
EO/IR-Specific State of the Art Machine Learning



OBJECTIVE: Develop EO/IR-specific, state-of-the-art machine learning method(s) for improving the utility of ISR sensor products, providing enhanced interpretability and extended range performance relative to visual image assessments. 

DESCRIPTION: The Sensors Directorate of the Air Force Research Laboratory (AFRL/RY) and the AF Life Cycle Management Center have been partnering on sensor technology research and development for ISR applications covering a range of passive and active EO/IR sensor concepts. Relevant research has the potential to support the DoD in manned and unmanned airframes. For this topic, the research should focus on the capability of performing National Image Interpretability Rating Scale (NIIRS) 5 or better level tasks on NIIRS 4 imagery, where the acquired images are degraded due to low signal-to-noise ratio, atmospheric conditions, etc. These tasks are to be performed on passive, single-band imagery. The rapid expansion of research in state-of-the-art machine learning, artificial intelligence, and deep learning opens the possibility of improved image interpretability at a given imaging range, as well as the potential for further extending the range performance of EO/IR sensor systems. One major challenge is acquiring or accurately modeling datasets for training and learning. Acquired datasets would have to be labeled after collection to aid with training and learning. The offeror will be responsible for collecting training and learning data; no government facilities, equipment, etc. will be provided. An additional, but related, challenge is that training data may only be collected over a pristine or limited set of conditions. It is important to understand how training datasets and machine learning transfer to other data collection ranges, environmental conditions, and even target variations. This area of research is known as transfer learning. Performance metrics will focus on accomplishing NIIRS 5 or better tasks on NIIRS 4 imagery, with 75% accuracy as a threshold and 100% accuracy as an objective. 
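The performance metric above can be made concrete with a small scorer. The following is an illustrative sketch only: the function and variable names are hypothetical, and the 75%/100% figures are the threshold and objective values stated in the topic.

```python
# Hypothetical scorer for the topic's performance metric: the fraction
# of NIIRS-5-or-better interpretation tasks answered correctly on
# NIIRS 4 imagery, compared against the stated threshold and objective.
THRESHOLD = 0.75  # 75% task accuracy (threshold)
OBJECTIVE = 1.00  # 100% task accuracy (objective)

def task_accuracy(predictions, truths):
    """Fraction of interpretation tasks answered correctly."""
    correct = sum(p == t for p, t in zip(predictions, truths))
    return correct / len(truths)

def meets_threshold(accuracy):
    return accuracy >= THRESHOLD

def meets_objective(accuracy):
    return accuracy >= OBJECTIVE
```

For example, three correct answers on four tasks yields 0.75, which meets the threshold but not the objective.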

PHASE I: Research will focus on: 1) identifying and securing suitable datasets and/or modeling tools for providing data to train state-of-the-art machine learning methods; 2) baselining a set of machine learning tools, including those methods required for feature extraction (including deep learning approaches) and transfer learning; and 3) providing an initial performance assessment and recommending next steps in refining state-of-the-art machine learning tools. 
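The feature-extraction and transfer-learning baseline in item 2) can be sketched as follows. This is a toy illustration under stated assumptions, not a proposed implementation: the "pretrained" feature extractor is stubbed with a fixed random projection standing in for a frozen CNN backbone, and the imagery, labels, and dimensions are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained feature extractor (e.g., a deep
# CNN backbone); here just a fixed random projection for illustration.
W_frozen = rng.normal(size=(64, 16))

def extract_features(images):
    # Frozen: W_frozen is never updated during transfer learning.
    return np.tanh(images @ W_frozen)

def train_head(feats, labels, lr=0.5, epochs=200):
    """Trainable classifier head (logistic regression) fit on a small
    labeled target-domain set -- the transfer-learning step."""
    w = np.zeros(feats.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w)))
        w -= lr * feats.T @ (p - labels) / len(labels)
    return w

# Toy "imagery": two classes separated in a 64-dimensional pixel space.
x0 = rng.normal(-1.0, 1.0, size=(50, 64))
x1 = rng.normal(+1.0, 1.0, size=(50, 64))
images = np.vstack([x0, x1])
labels = np.concatenate([np.zeros(50), np.ones(50)])

feats = extract_features(images)      # features from the frozen backbone
w = train_head(feats, labels)         # only the small head is trained
preds = (feats @ w > 0).astype(float)
accuracy = (preds == labels).mean()
```

The design choice illustrated is the one the topic's transfer-learning challenge turns on: the expensive representation is learned once (or modeled), and only a lightweight head is retrained as collection ranges, environmental conditions, or targets vary.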

PHASE II: Research will focus on refining machine learning tools based upon Phase I recommendations. Additionally, the contractor will secure, generate, or collect more relevant training data. The contractor will perform a final assessment of the machine learning tools, including assessing potential performance gains over visual image analyses and testing limitations of transfer learning methods. The contractor will deliver all developed tools, algorithms, and data to the government. The contractor will initiate discussions with sensor system developers, exploitation processing developers, and other avenues for transition of machine learning techniques. 
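Testing the limitations of transfer learning, as called for above, amounts to breaking out task accuracy by collection condition and flagging conditions that fall below the threshold. A minimal sketch, assuming illustrative condition names and a hypothetical record format:

```python
# Hypothetical Phase II check: per-condition accuracy to probe where
# transfer learning breaks down. Condition names and record layout are
# illustrative, not specified by the topic.
from collections import defaultdict

THRESHOLD = 0.75  # the topic's threshold accuracy

def accuracy_by_condition(records):
    """records: iterable of (condition_name, task_correct: bool)."""
    tally = defaultdict(lambda: [0, 0])  # condition -> [correct, total]
    for cond, ok in records:
        tally[cond][0] += int(ok)
        tally[cond][1] += 1
    return {c: k / n for c, (k, n) in tally.items()}

def transfer_gaps(records, threshold=THRESHOLD):
    """Conditions where accuracy falls below the threshold, i.e.,
    where the trained tools fail to transfer."""
    return sorted(c for c, a in accuracy_by_condition(records).items()
                  if a < threshold)
```

For example, records collected under "clear" and "haze" conditions would yield a per-condition accuracy table, with any below-threshold condition reported as a transfer gap.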

PHASE III: This phase will match the Phase II machine learning tools with appropriate applications and pursue systems developers to refine and transition the tools for the specific system(s). The primary candidates include both existing operational and planned future DoD reconnaissance imaging systems, as well as commercial remote sensing systems for civil applications, such as mining and crop/forest health. The focus will be on refining tools that can be applied to detect, recognize, identify, and recommend actions in remote sensing performed by EO/IR sensors. 


REFERENCES: 1. Zhang, L., Zhang, L., & Du, B. (2016). Deep learning for remote sensing data: A technical tutorial on the state of the art. IEEE Geoscience and Remote Sensing Magazine, 4(2), 22-40.; 2. Paxman, R. G., Rogne, T. J., Sickmiller, B. A., LeMaster, D. A., Miller, J. J., & Vollweiler, C. G. (2016). Spatial stabilization of deep-turbulence-induced anisoplanatic blur. Optics Express, 24(25), 29109-29125.; 3. Ball, J. E., Anderson, D. T., & Chan, C. S. (2017). Comprehensive survey of deep learning in remote sensing: theories, tools, and challenges for the community. Journal of Applied Remote Sensing, 11(4), 042609.; 4. Marcum, R. A., Davis, C. H., Scott, G. J., & Nivin, T. W. (2017). Rapid broad area search and detection of Chinese surface-to-air missile sites using deep convolutional neural networks. Journal of Applied Remote Sensing, 11(4), 042614.

KEYWORDS: Remote Sensing; High-resolution Imaging; Multispectral; Hyperspectral; Deep Learning; Convolutional Neural Networks; Object Detection; Target Recognition; Imaging Through Turbulent Media; Image Reconstruction-restoration; Big Data; Computer 
