Transfer Learning and Deep Transfer Learning for Military Applications



OBJECTIVE: Develop militarily relevant machine learning algorithms, to include deep learning, that transfer knowledge obtained from one labeled dataset (source) to an unlabeled dataset (target) of a possibly different domain (e.g., EO <> SAR, satellite <> airborne <>).

DESCRIPTION: This research will enable us to build Aided Target Recognition (AiTR) and other algorithms for environments and targets where we currently lack data or lack labeled data. The concept is to 'transfer' learned classifiers from one domain or set of targets in order to classify different targets, or the same targets in different domains; for example, learning how to classify vehicles in radar data by leveraging what we know about classifying vehicles in EO data. Statistical learning theory has produced powerful methods for learning feature mappings that maximize classification accuracy while minimizing the divergence between the source and target distributions. In transfer learning we use the terms 'source' and 'target' because there are really two classification problems. One uses the source data: data that is labeled, well understood, and has well-characterized classification performance. The other is the 'target' problem, where there is little data and/or little labeled data, and the idea is to learn and leverage as much as possible from the source to classify objects in the target domain. A recent example is learning how to classify positive and negative lung cancer samples by leveraging knowledge of how to classify positive and negative breast cancer samples. One approach to such a problem is to regularize standard discriminant analysis and other manifold embedding techniques with a divergence penalty. Doing so allows us to transfer knowledge from the source domain to the target and achieve improved classification in the new domain. One class of methods finds feature embeddings or mappings that preserve manifold structure while separating the different targets and minimizing the divergence between the target and source data. This assumes an isomorphism between the source and target classes.
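The divergence-penalized discriminant analysis described above can be illustrated with a toy sketch. This is not part of the solicitation: the function names are invented for illustration, and a linear-kernel divergence term is assumed for simplicity. The projection W maximizes source between-class scatter while the penalty pulls the projected source and target means together.

```python
import numpy as np
from scipy.linalg import eigh

def mmd2_linear(Xs, Xt):
    """Squared MMD under a linear kernel: ||mean(Xs) - mean(Xt)||^2."""
    d = np.asarray(Xs, float).mean(axis=0) - np.asarray(Xt, float).mean(axis=0)
    return float(d @ d)

def transfer_projection(Xs, ys, Xt, dim=1, lam=1.0, eps=1e-6):
    """Illustrative linear embedding W: maximize source between-class
    scatter (Sb) while penalizing within-class scatter (Sw) plus a
    divergence term that pulls projected source and target means together.
    Solves the generalized eigenproblem Sb w = mu (Sw + lam*M + eps*I) w."""
    Xs, Xt, ys = np.asarray(Xs, float), np.asarray(Xt, float), np.asarray(ys)
    mu = Xs.mean(axis=0)
    p = Xs.shape[1]
    Sb, Sw = np.zeros((p, p)), np.zeros((p, p))
    for c in np.unique(ys):
        Xc = Xs[ys == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)   # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)                # within-class scatter
    d = mu - Xt.mean(axis=0)
    M = np.outer(d, d)  # w'Mw is the linear-kernel MMD of the projected data
    _, evecs = eigh(Sb, Sw + lam * M + eps * np.eye(p))
    return evecs[:, -dim:]  # eigenvectors with the largest eigenvalues
```

Larger `lam` trades source-domain class separation for lower source/target divergence in the embedded space; the real methods cited below use richer divergences (e.g., Bregman divergence in Si et al.) in place of the linear-kernel term.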
Other approaches try to find representations that are robust across the source and target data. Our goal is to extend those ideas to a deep learning framework. Current approaches in deep learning do not leverage the approach described above; instead they use a simpler method called fine-tuning, which does not possess a firm theoretical underpinning. Rather than employing general features from a large general dataset (like ImageNet) with fine-tuning, we plan to explicitly consider the divergence between source and target distributions when learning a classifier within a deep learning framework. Specifically, we focus on applying classification knowledge learned in one source setting (a labeled dataset in one modality or one set of object classes) to a new target setting (unlabeled data in a new modality or new object classes). This particular transfer problem, called transductive transfer learning, applies to several relevant scenarios, such as i) transferring knowledge from simulated to measured data, ii) transferring from one domain, such as EO, to another, such as SAR, or iii) transferring knowledge to new imaging conditions or measurement devices, to name a few examples. The machine learning and statistical learning fields have made significant progress in this research area (see Pan & Yang, 2010 for a comprehensive review). Meanwhile, the area of deep learning has been advancing rapidly with relatively few methods dedicated to transfer (Ganin et al., 2016). The goal of this SBIR is to extend the theory of transfer learning to a deep learning framework in ways that go beyond the typical deep learning transfer approach of taking robust features from a large general dataset and fine-tuning on the new dataset. Instead, proposed methods should develop deep learning transfer approaches grounded in statistical learning theory.
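In a deep framework, explicitly considering the source/target divergence typically means adding a divergence term, computed on intermediate network features, to the classification loss. One common choice is the squared maximum mean discrepancy (MMD); the sketch below assumes an RBF kernel and is only an illustration of the statistic, not the specific adversarial criterion of Ganin et al. (2016).

```python
import numpy as np

def mmd2_rbf(X, Y, sigma=1.0):
    """Biased estimator of squared MMD under an RBF (Gaussian) kernel:
    MMD^2 = mean k(x,x') + mean k(y,y') - 2 * mean k(x,y).
    In a deep transfer setting, X and Y would be batches of source and
    target features from an intermediate layer, and this value would be
    added (weighted) to the classification loss during training."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    def gram(A, B):
        # Pairwise squared distances via broadcasting, then the RBF kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return float(gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean())
```

A training loop would then minimize something like `classification_loss + lam * mmd2_rbf(feat_source, feat_target)`, pushing the network toward features that are both discriminative on labeled source data and statistically indistinguishable across domains.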

PHASE I: Design and develop a proof-of-concept deep transfer learning framework. This phase should focus on theoretical development with experiments to verify the theory and performance on synthetic and measured datasets. Benchmark against existing approaches in Deep Learning and transfer learning. The research should be documented in a final report and implemented in a proof-of-concept software deliverable. Government materials, equipment, data, or facilities will not be provided in Phase I. 

PHASE II: Mature the algorithm for use in the real world, where training data may be sparse, noisy, or imbalanced. Characterize algorithm performance, training time, and testing time as functions of data quality and availability. Develop benchmarks for transfer across a variety of domains and datasets. The research should be documented in a final report and implemented in a proof-of-concept software deliverable. Government data may be provided in Phase II if necessary.

PHASE III: Transition the algorithm to one or more AF weapon systems. This will include a strategy for supporting the requisite knowledge representation approach for both source and target data in an operational setting and will specifically include addressing the dynamic nature of source/target data evolution over time. The research should be documented in a final report and implemented in a proof-of-concept software deliverable. 


REFERENCES:
1. S. J. Pan and Q. Yang. "A Survey on Transfer Learning." IEEE Transactions on Knowledge and Data Engineering, 22.10 (2010): 1345-1359.
2. Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. S. Lempitsky. "Domain-Adversarial Training of Neural Networks." Journal of Machine Learning Research, 17 (2016): 1-35.
3. S. Si, D. Tao, and B. Geng. "Bregman Divergence-Based Regularization for Transfer Subspace Learning." IEEE Transactions on Knowledge and Data Engineering, 22.7 (2010): 929-942.
4. O. Mendoza-Schrock, M. M. Rizki, and V. J. Velten. "Manifold Transfer Subspace Learning (MTSL) for Applications in Aided Target Recognition." International Journal of Monitoring and Surveillance Technologies Research (IJMSTR), 5.3 (2017): 15-3

KEYWORDS: Statistical Learning Theory, Transfer Learning, Deep Learning, Transductive Transfer Learning, Source-Target Divergence, Discriminant Analysis, Manifold Embedding, Classification, Identification 
