
Image Feature Extraction for Improved EW Classification


OBJECTIVE: Develop an innovative data sharing and data fusion approach for improving situational awareness by combining features extracted from both the onboard imaging sensors (i.e., Photonics Mast and AN/BVS-1) and EW sensors (i.e., AN/BLQ-10) to create better separation in the decision space, resulting in improved automatic classification and reduced operator workload. This requires research and development of advanced image processing techniques for automatic feature extraction, multi-modal sensor data fusion algorithms, and a synchronized data sharing infrastructure to effectively integrate the sensors.

DESCRIPTION: The problem of classifying an RF emitter as a threat, although not trivial, is fairly well understood, and traditional EW systems and approaches continue to drive current Navy EW operations. With the proliferation of inexpensive commercial radars and communication devices, however, the number and type of RF emitters in littoral environments has grown rapidly, driving up system acquisition costs and drastically increasing operator workload. To contend with the growing number of RF contacts that must be accurately classified and managed as friend, foe, or neutral in a rapidly changing electromagnetic environment (EME), a more automated approach is needed: one that fuses imagery with EW emitter parameters to increase automation and the probability of correct classification. The Navy seeks advanced multi-modal data fusion algorithms and image processing techniques for improved automatic emitter classification and identification. Currently, imaging systems and EW systems aboard U.S. submarines are "stove-piped".
Therefore, in order to exploit the information contained in both, the Navy seeks the development of: 1) an efficient data/time synchronization and sharing mechanism that can associate and integrate image features with features extracted from EW sensor information; 2) image processing algorithms, focused on the region of interest, that support the integration of visual features extracted from real-time (or near real-time) imagery into an enhanced RF emitter feature vector; and 3) Bayesian data fusion/classification algorithms that operate on the resulting multi-dimensional vector space. This implies the ability to maintain synchronization with the frame rate of the critical video distribution subsystem (30 Hz) and to access (and also maintain synchronization with) contact information that contains pulse descriptor words (PDWs) and external features extracted from received communication signals in a dense-target littoral environment.

PHASE I: Research and develop an overall concept, and provide a detailed description of the synchronization mechanisms that allow the sensors to share information synchronously. Provide a detailed explanation of the features to be extracted (traditional or non-traditional) from both the received imagery and RF signals that maximize separation in the decision space, and of the Bayesian techniques used to classify the target in the resulting non-linear decision space. Provide an estimate of the improvement in probability of correct classification (Pcc) and demonstrate the improvement with a simple demonstration based on simulated data.

PHASE II: Extend the proof-of-concept development from Phase I to demonstrate the effectiveness of the approach using a real-world scenario in a laboratory environment. Evaluate performance using government-provided data and develop specifications for transition to system insertion.
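The sought processing chain (frame/PDW time synchronization, fusion of image and RF features into an enhanced vector, and Bayesian classification in the fused space) can be sketched as follows. This is a minimal illustration with simulated data; every function name, feature choice, and distribution here is an assumption for demonstration purposes, not an actual Navy interface or dataset.

```python
import numpy as np

FRAME_RATE_HZ = 30.0  # critical video distribution subsystem frame rate

def associate_pdws_to_frames(frame_times, pdw_times):
    """Return, for each PDW timestamp, the index of the nearest video frame."""
    idx = np.searchsorted(frame_times, pdw_times)
    idx = np.clip(idx, 1, len(frame_times) - 1)
    left, right = frame_times[idx - 1], frame_times[idx]
    return np.where(pdw_times - left < right - pdw_times, idx - 1, idx)

def fuse_features(rf_feat, img_feat):
    """Concatenate RF features (e.g., PRI, pulse width) with image features."""
    return np.concatenate([rf_feat, img_feat], axis=-1)

class GaussianNB:
    """Minimal Gaussian naive Bayes classifier over the fused vector space."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.log_prior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # Log-likelihood of each sample under each class-conditional Gaussian.
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=-1)
        return self.classes[np.argmax(ll + self.log_prior, axis=1)]

# Simulated data: RF features alone overlap between two emitter classes,
# while an added image feature separates them, improving Pcc after fusion.
rng = np.random.default_rng(0)
n = 500
rf0, rf1 = rng.normal(0.0, 1.0, (n, 2)), rng.normal(0.5, 1.0, (n, 2))
im0, im1 = rng.normal(0.0, 0.2, (n, 1)), rng.normal(2.0, 0.2, (n, 1))
X_rf = np.vstack([rf0, rf1])
X_fused = np.vstack([fuse_features(rf0, im0), fuse_features(rf1, im1)])
y = np.array([0] * n + [1] * n)

pcc_rf = np.mean(GaussianNB().fit(X_rf, y).predict(X_rf) == y)
pcc_fused = np.mean(GaussianNB().fit(X_fused, y).predict(X_fused) == y)
```

Simple concatenation is the least sophisticated fusion scheme; a real system would weight or learn the joint representation, but even this sketch shows how an added image-derived dimension can open up separation that the RF features alone do not provide.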
PHASE III: Transition the system into a production Navy system such as the Virginia-class Photonics Mast.

PRIVATE SECTOR COMMERCIAL POTENTIAL/DUAL-USE APPLICATIONS: Multi-modal data fusion algorithms are applicable to the telecommunications industry as well as to industries requiring surveys, searches, or mapping, and even to search and rescue operations. Multi-modal data fusion algorithms are sensor independent: they apply wherever disparate information can be vectorized and weighted so as to create a vector space in which real-time sensor data can be more accurately interpreted (classified). For example, in the telecommunications industry, wireless network planning is highly dependent on the very dynamic electromagnetic environment (EME). Currently, test vans with collectors roam urban areas to attempt to characterize the EME by detecting potential co-channel interference or areas of obscuration. This data is then manually processed and assessed to determine where to place new cell towers and repeaters. An algorithm capable of fusing the collected information with other sensor types (imagery, terrain maps, meteorological information, GPS, etc.) and adaptable to the dynamic urban environment would be very useful to this industry: it could 1) reduce the search area of the van; 2) automate the classification of co-channel interference (TV station, other cell tower, communications transmitter); and 3) learn, via the incorporation of new training sets, to adapt to changes (frequency allocations, new communications infrastructure, etc.). Fusing imagery with RF signal information can also assist in search and rescue operations. Many ships, planes, and even hikers carry rescue transponders or emergency beacons that radiate upon activation.
Ships and individuals who are lost or in trouble at sea or in wilderness areas, as well as plane crashes in rough terrain, all present situations where the ability to correlate and fuse real-time imagery with RF emissions can shorten the time to rescue.

REFERENCES:
[1] R. Wiley, The Analysis of Radar Signals, 2nd ed. London, U.K.: Artech House, 1993.
[2] C. L. Nikias and A. P. Petropulu, Higher-Order Spectral Analysis: A Nonlinear Signal Processing Framework. Englewood Cliffs, NJ: Prentice Hall, 1993.
[3] B. Zadrozny, "Learning and Evaluating Classifiers under Sample Selection Bias," Proceedings of the 21st International Conference on Machine Learning, 2004.
[4] A. Cichocki and S. Amari, Adaptive Blind Signal and Image Processing: Learning Algorithms and Applications. New York, NY: Wiley, 2002.