
Collaborative Airborne Sensor Fusion via Maximizing Information under Constraints

Description:

OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Trusted AI and Autonomy

 

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.

 

OBJECTIVE: Collaborative Automatic Target Acquisition (ATA) in munitions is a burgeoning research field with a unique set of challenges. DoD guidance on the use of machine learning/artificial intelligence for safety-of-life applications necessitates that munitions employing ATA be highly confident and correct in their target classifications. Some viewing angles and perspectives provide better target discrimination than others, depending on the target, the ATA algorithm being used, the type of sensor, and the observations already made by the munitions. The objective of this topic is to investigate and demonstrate algorithms that can determine the next measurement, or next “best look,” on a set of targets to maximize correct identification/classification by the munitions, while minimizing the total number of measurements/observations and the collaborative communication required to achieve that goal.
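
One way to formalize “best look” selection is greedy maximization of expected information gain: choose the candidate look whose observation is expected to shrink the entropy of the target-class posterior the most. The following Python sketch illustrates the idea on toy inputs; the three target classes, the “nose-on”/“broadside” look geometries, and their confusion-style sensor models are hypothetical placeholders, not values specified by this topic.

import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_info_gain(prior, likelihood):
    """Expected entropy reduction from one look.

    prior:      (K,) class probabilities before the look.
    likelihood: (M, K) P(observation m | class k) for this look geometry.
    """
    gain = 0.0
    for m in range(likelihood.shape[0]):
        p_obs = likelihood[m] @ prior              # P(observation m)
        if p_obs == 0:
            continue
        posterior = likelihood[m] * prior / p_obs  # Bayes update
        gain += p_obs * (entropy(prior) - entropy(posterior))
    return gain

# Hypothetical example: 3 target classes and 2 candidate looks with
# different discriminative power; the next "best look" is the one with
# the larger expected gain.
prior = np.array([0.5, 0.3, 0.2])
looks = {
    "nose-on":   np.array([[0.7, 0.2, 0.1],
                           [0.2, 0.6, 0.2],
                           [0.1, 0.2, 0.7]]),
    "broadside": np.array([[0.9, 0.05, 0.05],
                           [0.05, 0.9, 0.05],
                           [0.05, 0.05, 0.9]]),
}
best = max(looks, key=lambda name: expected_info_gain(prior, looks[name]))
print("next best look:", best)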

 

DESCRIPTION: There are two primary concepts of operation supported by this topic: standalone munitions and swarming munitions. In the case of swarming munitions, each munition would have a different viewing angle/attitude on the target, or “look.” The “best look” algorithm developed under this topic would determine the next optimal “look” and task the sensor on a munition to gather that observational data. Once the most informative data to collect has been determined, communication bandwidth is conserved by communicating only observations that are both independent of previous observations and drawn from optimal sensor/viewing-perspective combinations.
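
One possible bandwidth-conserving rule, sketched below under assumed conditions: each munition holds the swarm's fused class belief and transmits a local observation only if that observation would shift the shared posterior by more than a threshold. The KL-divergence test, the 0.05 threshold, and the example likelihoods are illustrative assumptions rather than requirements of this topic.

import numpy as np

def kl(p, q):
    """KL divergence between discrete distributions p and q."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def should_transmit(shared_posterior, local_likelihood, threshold=0.05):
    """Gate communication on how much a new observation would move the
    swarm's fused belief; redundant looks stay local."""
    updated = shared_posterior * local_likelihood
    updated = updated / updated.sum()
    return kl(updated, shared_posterior) > threshold

shared = np.array([0.6, 0.3, 0.1])           # belief already shared
flat_look = np.full(3, 1 / 3)                # uninformative observation
sharp_look = np.array([0.1, 0.1, 0.8])       # strongly favors class 3
print(should_transmit(shared, flat_look))    # False: conserve bandwidth
print(should_transmit(shared, sharp_look))   # True: worth transmitting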

 

This mathematical determination of observation independence and optimal “look” can also be applied to the case of a single munition. A single munition could fuse the predictions from multiple types of onboard sensors to correctly identify a target, and knowing which sensor is providing the best observations at any one time would increase the accuracy of the sensor fusion ATA algorithm. Additionally, the mathematical determination of any given observation’s independence could be used to avoid feeding fusion algorithms multiple iterations of dependent observations, which would falsely increase the influence of one “look” on the outcome of a fusion algorithm’s target identification.
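
The sketch below shows one way a fusion algorithm could discount dependent observations, assuming each look arrives as a per-class likelihood vector together with a dependence weight in [0, 1]. Down-weighting a correlated look by a scalar is an illustrative simplification; more rigorous treatments (e.g., covariance intersection or explicit joint observation models) would be expected in a real effort.

import numpy as np

def fuse(prior, looks, dependence):
    """Sequential Bayesian fusion in log space with discounted updates.

    prior:      (K,) class probabilities.
    looks:      list of (K,) likelihood vectors P(obs | class).
    dependence: weights in [0, 1]; 1 = independent of earlier looks,
                0 = fully redundant (contributes nothing).
    """
    log_belief = np.log(prior)
    for lik, w in zip(looks, dependence):
        log_belief += w * np.log(lik)        # discounted Bayes update
    belief = np.exp(log_belief - log_belief.max())
    return belief / belief.sum()

prior = np.full(3, 1 / 3)
eo_look = np.array([0.8, 0.1, 0.1])          # electro-optical sensor
ir_look = np.array([0.7, 0.2, 0.1])          # infrared sensor
repeat = np.array([0.8, 0.1, 0.1])           # the same EO look, re-observed
# Weighting the repeated look at 0 keeps it from double-counting:
print(fuse(prior, [eo_look, ir_look, repeat], dependence=[1.0, 1.0, 0.0]))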

 

Challenges such as enemy anti-air weapons, communication/processing constraints, battery limitations, maneuverability, obscuration, and a multitude of deception methods all impede target identification, and could be included as constraints on a “best look” algorithm spawning from this research. These constraints will be introduced iteratively into the “best look” algorithms. An objective will be to modify the “best look” algorithms to provide sensor tasking on munitions that increases survivability by avoiding adversarial air defenses and minimizing battery usage.
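
One way such constraints might enter the tasking decision, sketched under assumed cost models: penalize each candidate look's expected information gain by its estimated energy use and threat exposure, then task the highest-scoring look. The penalty weights and the numeric costs below are hypothetical.

def constrained_score(info_gain, energy_cost, threat_exposure,
                      w_energy=0.1, w_threat=1.0):
    """Net utility of a candidate look under battery and survivability
    penalties; the look maximizing this score would be tasked next."""
    return info_gain - w_energy * energy_cost - w_threat * threat_exposure

# Hypothetical candidates: a revealing but exposed broadside pass versus
# a safer, less informative stand-off look.
candidates = {
    "broadside": constrained_score(info_gain=0.9, energy_cost=2.0,
                                   threat_exposure=0.5),
    "stand-off": constrained_score(info_gain=0.4, energy_cost=0.5,
                                   threat_exposure=0.05),
}
print(max(candidates, key=candidates.get))   # here the safer look wins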

 

PHASE I: For an Offeror to demonstrate that their technology is at an appropriate level for a D2P2 award, the Offeror should have experience developing autonomy algorithms for applications similar to the topic above. Similar applications may include swarming for search and rescue, ISR, or other kinds of drone teaming. The Offeror should also have experience simulating autonomy algorithms with tools such as AirSim, CODE, or Golden Horde Colosseum. Offerors should be capable of simulating the performance of multiple sensors and multiple Automatic Target Recognition (ATR) algorithms while factoring in sensor degradation and object obscuration.

 

PHASE II: Given a variety of target types, a set of targets in the environment, and a set of distributed seeker sensors and their associated ATR algorithms, a prototype deliverable should be able to simulate and demonstrate the concept of operations of maximal information measurement fusion, and provide statistics concerning the number of looks required as well as other statistics such as the percentage of correct target classifications as a function of the number of observations. The algorithm should be capable of performing under additional constraints. The simulation should allow the number and type of targets, the layout of the targets, and the obscuration of the targets to be varied. Simulation data may need to be generated as part of the effort to provide quantitative statistics of the “best look” algorithm’s performance under a variety of conditions. Both the single-munition and swarming concepts of operation should be demonstrated and evaluated.
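
As one illustration of how the correct-classification-versus-observations statistic might be collected, the sketch below runs Monte Carlo trials of sequential Bayesian classification under an assumed three-class sensor confusion model; the model, trial count, and look budget are hypothetical simulation inputs.

import numpy as np

rng = np.random.default_rng(0)
LIKELIHOOD = np.array([[0.7, 0.2, 0.1],      # rows: P(observation | class)
                       [0.2, 0.6, 0.2],
                       [0.1, 0.2, 0.7]])

def run_trial(true_class, n_obs):
    """Simulate n_obs looks at one target and return whether the final
    maximum-a-posteriori class matches the truth."""
    belief = np.full(3, 1 / 3)
    for _ in range(n_obs):
        obs = rng.choice(3, p=LIKELIHOOD[:, true_class])  # simulated look
        belief *= LIKELIHOOD[obs]                         # Bayes update
        belief /= belief.sum()
    return belief.argmax() == true_class

for n in range(1, 6):
    acc = np.mean([run_trial(rng.integers(3), n) for _ in range(500)])
    print(f"{n} looks: {acc:.1%} correct")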

 

After functional demonstration of the “best look” algorithm in a simulation environment, constraints will be added to more accurately reflect the operational environment. These constraints may include, but are not limited to, adversarial air defenses against munitions, communication/processing constraints, battery limitations, maneuverability, and obscuration of targets of interest to the ATR algorithm.

Size, weight, and power (SWAP) efficiency metrics will also be used to judge the performance of the “best look” algorithm. Offerors should expect their algorithm to be implementable on a System on Module (SoM) embedded computer running alongside an ATR algorithm. The training of any machine learning models is not SWAP constrained, but the trained model is.

 

PHASE III DUAL USE APPLICATIONS: Other potential military applications of this technology in Phase III include fusing automatic targeting information across other distributed airborne platforms, such as ISR assets. A Phase III effort could also be applied commercially to autonomous aircraft and automobiles, and the sensor-input independence research could be applied to a number of commercial fields dealing with real-time statistical analysis.

 

REFERENCES:

  1. K. Saleh, S. Szénási and Z. Vámossy, "Occlusion Handling in Generic Object Detection: A Review," 2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI), Herl'any, Slovakia, 2021, pp. 000477-000484, doi: 10.1109/SAMI50585.2021.9378657.
  2. J. He, S. Yan, J. Hu, and Y. Wang, "Learning-based airborne sensor task assignment in unknown dynamic environments," Engineering Applications of Artificial Intelligence, vol. 111, May 2022, doi: 10.1016/j.engappai.2022.104747.
  3. A. O. Hero and D. Cochran, "Sensor Management: Past, Present, and Future," IEEE Sensors Journal, vol. 11, no. 12, pp. 3064-3075, Dec. 2011, doi: 10.1109/JSEN.2011.2167964.
  4. DoD Directive 3000.09, "Autonomy in Weapon Systems," January 25, 2023.

 

KEYWORDS: Networked Collaborative Autonomy; Automated Target Acquisition; Automatic Target Recognition; Digital Engineering; Modeling Simulation and Analysis; Sensor Fusion; Loitering Munitions; Distributed Sensing.
