
Autonomous Optical Sensors

Description:

OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Integrated Sensing and Cyber; Trusted AI and Autonomy; Integrated Network Systems-of-Systems

OBJECTIVE: This project aims to develop a portable optical sensor that can capture high-quality, real-time imagery data during missile tests. The sensor will be positioned near a missile launcher during the launch or near the target to analyze the terminal phase of the flight. The missile tests will occur in remote locations where proper test infrastructure is unavailable. The Autonomous Optical Sensor (AOS) system will incorporate several high-speed imaging cameras with advanced artificial intelligence (AI) and machine learning capabilities. These features will enable the sensor to calibrate and manage itself and to position itself accurately. The system will be designed to operate autonomously for extended periods on either a battery or a renewable energy source. The sensor will wirelessly receive setup and calibration data from a centralized command and control center. The command center will provide guidance or cueing data for the AOS to initiate its track of a System Under Test (SUT). The AOS system's cutting-edge technology will make it possible to collect accurate and reliable data, even in the most challenging test conditions.
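
As a purely illustrative sketch, the setup and cueing data exchanged with the command and control center could be represented as a structured message such as the one below; the field names, units, and validity check are assumptions for discussion, not a defined interface.

# Hypothetical sketch of a cueing message the AOS might receive from the
# command and control center. All fields and units are assumptions.
from dataclasses import dataclass

@dataclass
class CueMessage:
    sut_id: str            # identifier of the System Under Test
    timestamp_utc: float   # UNIX epoch seconds when the cue was generated
    latitude_deg: float    # predicted SUT latitude (WGS-84 degrees)
    longitude_deg: float   # predicted SUT longitude (WGS-84 degrees)
    altitude_m: float      # predicted SUT altitude above the ellipsoid (meters)
    valid_for_s: float     # how long the cue remains usable (seconds)

def is_cue_stale(cue: CueMessage, now_utc: float) -> bool:
    """Return True if the cue has aged beyond its validity window."""
    return (now_utc - cue.timestamp_utc) > cue.valid_for_s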

DESCRIPTION: The sensor is designed to operate with minimal or no intervention from an operator. Once deployed, it will capture imagery data of a System Under Test (SUT) using advanced geospatial and optical sensor auto-calibration technologies. The sensor will be equipped with organic computing, distributed networking, and power systems to manage its positioning and the collection, processing, and transport of real-time imaging data. This eliminates the need to transport raw data to a centralized location for processing and analysis. Furthermore, setup and calibration effort will be minimized because the sensor will self-align and self-calibrate before test operations. The results of the computing work done at the edge, such as real-time imagery, sensor calibration updates, or other actionable information, will be transmitted to the main data center for review and analysis after the test.
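
The edge-processing concept described above can be summarized in the following minimal sketch, in which raw frames are captured and reduced on board and only the resulting products are forwarded to the data center; every function here is a placeholder for the awardee's actual capture, processing, and transport mechanisms.

# Minimal sketch of the edge-processing loop implied by the description:
# capture frames locally, derive actionable products on board, and forward
# only the reduced results. All functions below are placeholders.
import time

def capture_frame():
    """Placeholder for a high-speed camera read; returns a frame record."""
    return {"t": time.time(), "pixels": None}

def process_at_edge(frame):
    """Placeholder for on-board detection, track extraction, and calibration."""
    return {"t": frame["t"], "track_state": None, "calibration_update": None}

def transmit_to_data_center(product):
    """Placeholder for the wireless link back to the main data center."""
    print(f"queued product for transmission: {product['t']:.3f}")

def edge_loop(duration_s=1.0, frame_rate_hz=10):
    """Run the capture -> process -> transmit cycle for a fixed duration."""
    t_end = time.time() + duration_s
    while time.time() < t_end:
        frame = capture_frame()
        product = process_at_edge(frame)   # raw pixels never leave the sensor
        transmit_to_data_center(product)
        time.sleep(1.0 / frame_rate_hz)

if __name__ == "__main__":
    edge_loop()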

PHASE I: In Phase I of this project, the goal is to research and define an integrated AOS configuration that includes various types of optical sensors, such as visible and electro-optical/infrared, as well as data processing, networking, and power systems. Additionally, an analysis will be conducted to determine how the system will be managed by an AI framework that employs specialized algorithms and techniques. These algorithms will facilitate positioning, calibration, real-time management, and control of the overall design. Moreover, the awardee will define the control method, including the feasibility of the sensor learning different support configurations through adaptive learning. A process for training the algorithms to adapt to changing conditions or new datasets will also need to be designed. By the end of Phase I, the awardee will have defined the optimal configuration of the AOS and the AI framework necessary to satisfy AOS requirements.
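
One possible, non-prescriptive illustration of adapting the calibration algorithms to new observations is a simple running estimate of pointing bias that is refined each time a reference measurement becomes available; the exponential-moving-average update and the parameter names below are assumptions for illustration only.

# Illustrative-only sketch of adaptive calibration: blend each new pointing
# residual (measured against a surveyed reference) into a running bias
# estimate. The update rule is an assumption, not a prescribed method.
class AdaptiveBiasEstimator:
    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        self.bias_az_deg = 0.0   # current azimuth bias estimate
        self.bias_el_deg = 0.0   # current elevation bias estimate

    def update(self, residual_az_deg, residual_el_deg):
        """Blend a new pointing residual into the running bias estimate."""
        a = self.learning_rate
        self.bias_az_deg = (1 - a) * self.bias_az_deg + a * residual_az_deg
        self.bias_el_deg = (1 - a) * self.bias_el_deg + a * residual_el_deg
        return self.bias_az_deg, self.bias_el_deg

# Example: three successive residuals observed against a surveyed landmark.
est = AdaptiveBiasEstimator(learning_rate=0.2)
for res in [(0.30, -0.10), (0.28, -0.12), (0.31, -0.09)]:
    print(est.update(*res))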

PHASE II: In Phase II of the project, the awardee will create a prototype of the AOS based on the analysis conducted during Phase I. However, integrating AI-enabled or cognitive capabilities into existing operations can be challenging. Adapting the AOS to current test and evaluation (T&E) infrastructures may require refining the integrated system design (AI software/hardware) to achieve optimal performance, accuracy, and reliability. It is expected that the AI will need to be iteratively refined and optimized based on the Phase I designs. Functional testing in an operational context is a crucial part of system development and will facilitate the AI-optimization process for this type of system, since it involves an ongoing learning approach to development. The prototype should be able to achieve self-localization and alignment, obtain cueing or positioning data on an SUT from an external sensor, and maintain track of the SUT. Both self-localization and alignment are critical for AI-enabled systems to understand and navigate their environment effectively. By accurately determining their position and aligning their measurements and actions with a common reference frame, these systems can interact with other devices, objects, or entities and perform tasks such as mapping, object recognition, navigation, or coordination.
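
To illustrate the self-localization and cueing step, the sketch below converts a cued SUT position in geodetic coordinates into azimuth and elevation pointing angles from the sensor's own surveyed position, using standard WGS-84 ECEF/ENU relations; the function interface and the example values are assumptions for illustration, not part of the requirement.

# Hedged sketch: turn a cue (predicted SUT position in geodetic coordinates)
# into azimuth/elevation pointing angles once the AOS has self-localized.
import math

_A = 6378137.0                    # WGS-84 semi-major axis (m)
_F = 1.0 / 298.257223563          # WGS-84 flattening
_E2 = _F * (2.0 - _F)             # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert WGS-84 geodetic coordinates to Earth-centered Earth-fixed."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = _A / math.sqrt(1.0 - _E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - _E2) + alt_m) * math.sin(lat)
    return x, y, z

def pointing_angles(sensor_lla, target_lla):
    """Return (azimuth_deg, elevation_deg) from the sensor toward the target."""
    sx, sy, sz = geodetic_to_ecef(*sensor_lla)
    tx, ty, tz = geodetic_to_ecef(*target_lla)
    dx, dy, dz = tx - sx, ty - sy, tz - sz
    lat, lon = math.radians(sensor_lla[0]), math.radians(sensor_lla[1])
    # Rotate the ECEF offset into the sensor's local east/north/up frame.
    east = -math.sin(lon) * dx + math.cos(lon) * dy
    north = (-math.sin(lat) * math.cos(lon) * dx
             - math.sin(lat) * math.sin(lon) * dy
             + math.cos(lat) * dz)
    up = (math.cos(lat) * math.cos(lon) * dx
          + math.cos(lat) * math.sin(lon) * dy
          + math.sin(lat) * dz)
    az = math.degrees(math.atan2(east, north)) % 360.0
    el = math.degrees(math.atan2(up, math.hypot(east, north)))
    return az, el

# Example: sensor on the ground, cue placed roughly 10 km east at 3 km altitude.
print(pointing_angles((35.0, -117.0, 700.0), (35.0, -116.89, 3000.0)))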

PHASE III DUAL USE APPLICATIONS:

Primary commercial dual-use potential is tied to collecting real-time imagery supporting air traffic management (ATM) at airports or surveillance of defined sensitive areas.

  1. Monitoring and managing air traffic flow: Help track flights in real time using radar data or other surveillance systems, primarily to identify incursions by small unmanned aircraft systems (UAS).
  2. Assisting in airspace coordination: Provide information about airspace restrictions, temporary flight restrictions (TFRs), and other limitations in the defined sensitive areas. This can help ensure aircraft stay within designated airspace and avoid potential conflicts.
  3. Alerting operators of potential safety or security concerns: Notify operators of any unusual behavior, deviations from flight plans, or potential security threats. This can help maintain the safety and security of the defined sensitive areas.

REFERENCES:

  1. Trajectory Analysis and Optimization Software (TAOS) by Sandia National Laboratories: Describes a tool that provides three-degree-of-freedom or six-degree-of-freedom trajectory simulation, with possible application to sensor placement and calibration. [https://www.sandia.gov/taos/]
  2. Reinforcement Learning Applications in Unmanned Vehicle Control: A Comprehensive Overview by Kiumarsi, B. et al. (2019): This paper addresses research in reinforcement learning techniques in control systems, providing insights into their potential applications and challenges. [https://www.researchgate.net/publication/361572362_Reinforcement_Learning_Applications_in_Unmanned_Vehicle_Control_A_Comprehensive_Overview]
  3. How to train your robot with deep reinforcement learning: lessons we have learned by Levine, S. et al. (2021): This research paper delves into applying deep learning algorithms for control tasks, showcasing their capabilities and discussing their limitations. [https://journals.sagepub.com/doi/epub/10.1177/0278364920987859]
  4. Model Predictive Control with Artificial Neural Networks by Scokaert, P. O., et al. (2005): This paper investigates the integration of artificial neural networks with model predictive control techniques, presenting a novel approach for control system design. [https://link.springer.com/chapter/10.1007/978-3-642-04170-9_2]

KEYWORDS: Artificial Intelligence; Adaptive Learning; Autonomous Control; Self-Alignment and Localization; Intelligent Instrumentation
