
Signature Detection and Training via Application of Digital Product-Insertion Technologies


OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Trusted AI and Autonomy

 

OBJECTIVE: Investigate and demonstrate a proof of concept exploring the applicability of emerging digital product-insertion technologies to (1) the detection of signatures of interest and of changes in those signatures and (2) the 3-D rendering of real-world objects and settings in synthetic VR/AR/XR training environments for operators and inspectors.

 

DESCRIPTION: Advances in commercial technologies and AI have produced unique capabilities with direct application to DoD and national security requirements. For example, digital product insertion and related AI capabilities for seamlessly emplacing 2-D and 3-D products in media have advanced significantly in recent years. These advances can support requirements such as rendering 3-D environments in near-real-time and developing models and signature-identification capabilities that require limited or no training data. Areas of interest for Over-the-Horizon Arms Control and potential applications include:

• Identification of novel signatures of interest within the nuclear pathway

• Rapid generation of AR/VR/XR-enabled synthetic training environments from images, videos, CAD/CAM drawings

• Detecting evidence of alterations, including image and video authentication and DeepFake detection

• Enhancing capabilities of DTRA inspection teams and counterproliferation practitioners via VR/AR/XR technologies, including the real-time insertion of threat objects

 

PHASE I: Design and execute a technical feasibility study to examine the application of novel artificial intelligence digital product-insertion capabilities in three priority areas:

 

Priority Area 1: Detection

1. Identification of abnormal seismic signatures in video footage (a minimal detection sketch follows this list).

2. Identification of other insights from video footage, such as power fluctuations and equipment operating status.
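
As a hedged illustration of Priority Area 1, the sketch below flags frames whose scene-wide motion energy departs from a clip's baseline, a crude proxy for vibration-induced (e.g., seismic or machinery) signatures in fixed-camera footage. The OpenCV optical-flow approach, the Farneback parameters, and the 3-sigma threshold are illustrative assumptions, not a prescribed method.

```python
# Illustrative sketch only: flag abnormal vibration-like signatures in video
# by tracking frame-to-frame motion energy with dense optical flow (OpenCV).
# Parameters and the 3-sigma threshold are assumptions, not a validated design.
import cv2
import numpy as np

def motion_energy_series(video_path: str) -> np.ndarray:
    """Return the mean optical-flow magnitude for each consecutive frame pair."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError(f"cannot read {video_path}")
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    energies = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        energies.append(np.linalg.norm(flow, axis=2).mean())
        prev = gray
    cap.release()
    return np.asarray(energies)

def flag_anomalies(energies: np.ndarray, sigmas: float = 3.0) -> np.ndarray:
    """Indices of frames whose motion energy exceeds mean + sigmas * std."""
    threshold = energies.mean() + sigmas * energies.std()
    return np.flatnonzero(energies > threshold)
```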

 

Priority Area 2: Training

1. 3-D rendering of objects and settings into synthetic VR/AR/XR training environments (a minimal asset-packaging sketch follows).
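
Among possible ways to feed real-world objects into such environments, the minimal sketch below assumes scanned or CAD-derived meshes and uses the open-source trimesh library to normalize and export them as binary glTF (.glb), a format Unity, Unreal, and WebXR toolchains commonly ingest; the file paths and normalization choices are illustrative assumptions.

```python
# Illustrative sketch only: package a captured 3-D asset as binary glTF so a
# VR/AR/XR engine can ingest it. Library choice and paths are assumptions.
import trimesh

def package_for_xr(mesh_path: str, out_path: str = "asset.glb") -> str:
    """Load a scanned/CAD mesh, normalize its pose and scale, export glTF."""
    mesh = trimesh.load(mesh_path, force="mesh")
    # Center the asset and scale it to roughly unit size so it drops cleanly
    # into an arbitrary training scene.
    mesh.apply_translation(-mesh.centroid)
    mesh.apply_scale(1.0 / max(mesh.extents))
    mesh.export(out_path)  # the .glb extension selects binary glTF
    return out_path
```

Exporting to a neutral interchange format keeps the pipeline engine-agnostic, deferring engine-specific placement to Unity or Unreal import scripts.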

 

Priority Area 3: Image/Video Interpretation

1. Detection of indicators of image or video alteration.

2. Identification of environmental change (e.g., equipment layout) in images and security camera footage.

3. Identification, extraction, and 3-D rendering of unfamiliar objects.

4. 2-D to 3-D rendering into CAD or comparable formats.

5. Reduction or elimination of the need for training data to classify images/video or objects within them (a minimal zero-shot sketch follows this list).
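
Item 5's goal of classifying imagery without task-specific training data can be illustrated with zero-shot classification using a pretrained vision-language model. The sketch below uses the openly available CLIP model via the Hugging Face transformers library; the model choice and the example labels are assumptions for illustration only.

```python
# Illustrative sketch only: zero-shot image classification with CLIP, i.e.,
# scoring an image against free-text labels with no task-specific training.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def zero_shot_classify(image_path: str, labels: list[str]) -> dict[str, float]:
    """Return a probability per candidate label, without any fine-tuning."""
    image = Image.open(image_path)
    inputs = processor(text=labels, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image
    probs = logits.softmax(dim=1).squeeze(0)
    return dict(zip(labels, probs.tolist()))

# Hypothetical usage: rank a frame against operator-supplied descriptions.
# zero_shot_classify("frame.png",
#                    ["industrial machinery hall", "empty warehouse", "truck bay"])
```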

 

PHASE II: Design and execute a test plan to validate the application of novel artificial intelligence digital product-placement capabilities against one (threshold) or two (objective) priority areas determined to be feasible by the Phase I feasibility study. Tests will be conducted in laboratory (threshold) or field (objective) environments and will emphasize potential applications to nuclear-pathway signature detection and the nuclear treaty verification space, such as remote monitoring, exercises, and training. Test plans will document the methodology to be employed and adhere to best practices in experimental design.

 

The Phase II effort will address research questions that include:

• The potential to transform a single 2-D image of an object into a 3-D object and place it into an interactive VR/AR/XR scenario (a minimal single-image depth sketch follows this list)

• The potential for a non-expert user with minimal knowledge of video editing or VR/AR/XR to insert objects into scenes after a short training course

• An assessment of the additional value that can be extracted from still and motion video with limited to no training data and/or model iteration required

• The potential to integrate digital product placement and similar capabilities with Unreal Engine, Unity, or comparable engines
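
For the first research question, single-image 2-D-to-3-D lifting is often prototyped with monocular depth estimation followed by back-projection into a point cloud, which can then be meshed and imported into an engine. The sketch below assumes the Hugging Face transformers depth-estimation pipeline with the Intel/dpt-large model and a notional pinhole focal length; all of these are illustrative assumptions rather than a recommended design.

```python
# Illustrative sketch only: lift a single 2-D image toward 3-D via monocular
# depth estimation, a first step before meshing and XR placement. The model
# name and the camera intrinsics below are assumptions.
import numpy as np
from PIL import Image
from transformers import pipeline

depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

def image_to_point_cloud(image_path: str, focal_px: float = 500.0) -> np.ndarray:
    """Return an (N, 3) point cloud from one image via a pinhole camera model."""
    depth = np.asarray(depth_estimator(Image.open(image_path))["depth"],
                       dtype=np.float32)
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project each pixel through the assumed pinhole camera.
    x = (u - w / 2) * depth / focal_px
    y = (v - h / 2) * depth / focal_px
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```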

 

PHASE III DUAL USE APPLICATIONS: Phase III will consist of a demonstration of fully capable, packaged artificial intelligence capabilities that address specific end-user requirements associated with Priority Areas 1-3.

 

Phase III for feasible Priority Area 1 use-cases will demonstrate a repeatable and accurate means of extracting established signatures from motion video (e.g., security camera footage). Data ingest and processing pipelines will be automated to the greatest extent feasible and will leverage low-code/no-code interfaces where possible to support users with varying levels of technical expertise (a minimal pipeline sketch follows).
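
As a hedged sketch of such a pipeline, the fragment below walks a directory of clips, applies the motion-energy functions from the Priority Area 1 sketch above (an assumption: those functions are in scope), and writes a CSV report that an analyst or a low-code front end could consume; the directory layout and file naming are illustrative.

```python
# Illustrative sketch only: automated ingest-and-score loop over a folder of
# security-camera clips, reusing motion_energy_series/flag_anomalies from the
# earlier sketch (an assumption) and logging results to CSV.
import csv
from pathlib import Path

def run_pipeline(video_dir: str, report_path: str = "anomalies.csv") -> None:
    with open(report_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["video", "anomalous_frames"])
        for clip in sorted(Path(video_dir).glob("*.mp4")):
            energies = motion_energy_series(str(clip))
            frames = flag_anomalies(energies)
            writer.writerow([clip.name, ";".join(map(str, frames))])
```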

 

Phase III for feasible Priority Area 2 use-cases will demonstrate effective integration of VR/AR/XR and digital product-insertion technologies. These integrations will validate enhancements to user experience and training quality. Technology integrations should also demonstrate a reduction in the time required for setting and scenario development in support of training, planning, and operational execution.

 

Phase III for feasible Priority Area 3 use-cases will demonstrate an ability to detect previously imperceptible signatures in still and/or motion imagery (a minimal alteration-analysis sketch follows). Demonstrations should also document, building on the Phase II experiments, where savings were achieved (e.g., in the required volume of training data) during model development.
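
One classical, hedged example of surfacing subtle alteration signatures in still imagery is Error Level Analysis (ELA), which re-saves a JPEG and amplifies the per-pixel difference so regions with an inconsistent compression history stand out. The sketch below uses Pillow; the quality setting and amplification factor are illustrative assumptions, and ELA is a triage aid rather than proof of tampering.

```python
# Illustrative sketch only: Error Level Analysis (ELA), a classical screen for
# localized JPEG re-compression artifacts that can indicate image alteration.
# Quality and scale values are assumptions; results require expert review.
from PIL import Image, ImageChops

def error_level_analysis(image_path: str, quality: int = 90,
                         scale: float = 15.0) -> Image.Image:
    """Re-save the image as JPEG and amplify the per-pixel difference."""
    original = Image.open(image_path).convert("RGB")
    resaved_path = image_path + ".ela.jpg"
    original.save(resaved_path, "JPEG", quality=quality)
    resaved = Image.open(resaved_path)
    diff = ImageChops.difference(original, resaved)
    # Regions edited after the last save tend to show a different error level.
    return diff.point(lambda px: min(255, int(px * scale)))
```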

 

REFERENCES:

1. G. Varol, R. S. Kuzu, and Y. S. Akgül, "Product placement detection based on image processing," 2014 22nd Signal Processing and Communications Applications Conference (SIU), Trabzon, Turkey, 2014, pp. 1031-1034, doi: 10.1109/SIU.2014.6830408.
2. C. Kuthan, A. Aendapally, and A. Snyder, "Using Machine Learning for Programmatic Product Placement in TV Advertising," AWS for Industries Blog, 27 Jan. 2021, https://aws.amazon.com/blogs/industries/using-machine-learning-for-programmatic-product-placement-in-tv-advertising/

 

KEYWORDS: 2-D to 3-D rendering; image and video exploitation; object detection; image and video authentication; synthetic training environments; virtual, augmented, and mixed reality; deep-fake detection
