Information Theory Models for Multi-Sensor Design of Signature Exploitation Systems



OBJECTIVE: Develop theoretical models that quantify and characterize the individual information contributions from multiple sensor modalities. Address diverse sensing modalities involving texture, color, materials, and geometry within the fusion problem.

DESCRIPTION: Trends toward increased complexity and higher bandwidth in the automatic target recognition (ATR) problem result in high-dimensional target signature models. A wide variety of nuisance and environmental effects can make reliable performance in the field challenging, and the two issues can combine to impose unrealizable algorithm training requirements. The introduction of multiple sensing modalities is often proposed as a means of addressing both dimensionality and reliability. Sensor modes such as radar, electro-optic (EO), infrared, and ladar each excite a unique combination of target attributes such as texture, color, materials, and geometry.

Theoretical models are needed to quantify and characterize the independent and/or dependent information contributions arising from the various sensing modalities within the fusion context. The Mutual Information (MI) measure can characterize the degree of statistical independence among sources of information, and the Feature Mutual Information (FMI) metric extends MI to image features. Entropy and MI are analytically connected to the probability of error and to the Neyman-Pearson criterion, allowing the rate of noise infiltration to be related to the rate of degradation in system performance. Greater traceability of independent sources of information across sensor types could afford more principled methods for designing joint target feature sets.

Learning joint feature sets from quantified information contributions (in bits) from each sensor type will lead to a more performance-based fusion design. Real-world constraints, however, limit the number of samples available for learning joint feature sets, so information-based learning methods for joint feature design must provide optimal information extraction conditioned on the number of training samples.
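The role of MI sketched above can be made concrete with a small example. The snippet below (a minimal illustration, not the solicited method; the simulated "geometry" and "texture" features are invented for the example) estimates MI in bits between two sensor-feature streams with a histogram plug-in estimator:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of I(X;Y) in bits for two 1-D feature streams."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x (column vector)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y (row vector)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
geom = rng.normal(size=5000)              # stand-in geometry-driven feature (e.g., ladar)
texture = geom + rng.normal(size=5000)    # texture-like feature sharing information with it
independent = rng.normal(size=5000)       # feature from a statistically independent modality

print(mutual_information(geom, texture))      # well above zero: shared information
print(mutual_information(geom, independent))  # near zero, up to finite-sample bias
```

Two caveats tie back to the text: the analytic link to the probability of error is Fano's inequality, H(Y|X) <= H_b(Pe) + Pe*log2(M-1) for an M-class label Y; and plug-in MI estimates are biased upward when samples are few, which is exactly the finite-sample concern raised in the Paninski reference below.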
The incremental training of maximum-information joint feature sets should afford minimal information loss while quantifying system performance in the finite-sample regime. The incremental learning of optimal joint features can be further constrained by several properties favorable to the ATR fusion problem. Invariance of the learned joint features to selected nuisance conditions, such as target pose angle or target registration, is of interest. Likewise, constraining learned joint feature sets to be sparse and statistically independent affords advantages in design and implementation. A unified, information-based approach that embraces all of the above is desired to address the joint feature fusion problem for application to Air Force sensing areas.
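One hedged sketch of such incremental learning, assuming a simple max-relevance/min-redundancy greedy criterion (an illustrative stand-in, not the solicited method; the four toy features are invented for the example):

```python
import numpy as np

def mi_bits(x, y, bins=12):
    # Plug-in MI estimate in bits; biased upward when samples are few (finite-sample regime).
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

def greedy_joint_features(features, labels, k=2):
    """Incrementally build a joint feature set: at each step add the candidate
    maximizing relevance I(f; label) minus average redundancy with the features
    already selected (a max-relevance/min-redundancy heuristic)."""
    selected, remaining = [], list(range(features.shape[1]))
    for _ in range(k):
        def score(j):
            relevance = mi_bits(features[:, j], labels)
            redundancy = (np.mean([mi_bits(features[:, j], features[:, s]) for s in selected])
                          if selected else 0.0)
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy two-class problem: two near-duplicate features, one complementary feature, one noise.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=4000).astype(float)
f0 = labels + 0.8 * rng.normal(size=4000)   # informative ("radar-like")
f1 = f0 + 0.1 * rng.normal(size=4000)       # redundant near-copy of f0
f2 = labels + 0.8 * rng.normal(size=4000)   # informative and complementary ("EO-like")
f3 = rng.normal(size=4000)                  # uninformative noise
X = np.column_stack([f0, f1, f2, f3])

sel = greedy_joint_features(X, labels, k=2)
print(sel)  # the near-duplicate pair (0, 1) should not both be chosen
```

The redundancy term is what steers the selection away from the near-copy feature toward the complementary modality, which is the intuition behind designing joint feature sets from quantified per-sensor information contributions.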

PHASE I: Develop theoretical models that quantify/characterize the individual information contributions from multiple sensor modalities within the ATR fusion problem. Develop information-based feature extraction methods that constrain joint feature solutions based on sparsity and invariance to selected nuisance parameters. Benchmark training complexity versus target uncertainty due to nuisance issues.

PHASE II: Demonstrate the information fusion methods developed in Phase I using designed target experiments with controlled measurement multi-sensor data sets. Perform design trade studies on multiple sensor modalities and joint feature designs. Develop system metrics for benchmarking system classifier/feature complexity and information gain. Establish and test hypotheses relating multi-sensor design, target phenomenology, and information gain.

PHASE III DUAL USE APPLICATIONS: Apply the methods and metrics demonstrated in Phase II to sensor/feature design within a transitional program. Perform an analysis of viable alternatives for the subject application program using operational sensor data. Evaluate algorithm complexity and information gain.


REFERENCES:

    • J. Malas and J. Cortese, “The Radar Information Channel and System Uncertainty,” Proceedings of the 2010 IEEE Radar Conference, Washington, DC, 2010.


    • L. Paninski, “Estimating Entropy on m Bins Given Fewer Than m Samples,” IEEE Transactions on Information Theory, Vol. 50, No. 9, Sept. 2004.


    • L. G. Valiant, “The Hippocampus as a Stable Memory Allocator for Cortex,” Neural Computation, Vol. 24, No. 11, pp. 2873-2899, Nov. 2012.


    • V. Velten, “Geometric Invariance for Synthetic Aperture Radar (SAR) Sensors,” Algorithms for Synthetic Aperture Radar Imagery V, E. Zelnio, ed., Vol. 3370, SPIE Proceedings, Orlando, FL, April 1998.


    • S. Gupta, K. P. Ramesh, and E. Blasch, “Mutual Information Metric Evaluation for PET/MRI Image Fusion,” Proceedings of the IEEE National Aerospace Conference, pp. 305-311, 16-18 July 2008.

KEYWORDS: information theory, fusion, joint, features, multi-sensor, invariance, automatic target recognition, ATR

  • TPOC-1: Albert Tao
  • Phone: 937-528-8215
  • Email: