
High-Performance Multi-Platform / Sensor Computing Engine


TECHNOLOGY AREA(S): Electronics, Sensors

OBJECTIVE:

The objective of this topic is to develop a next-generation, multi-platform and multi-sensor capable, Artificial Intelligence-enabled (AIE), high-performance computational imaging camera with an optimal Size, Weight, Power, and Cost (SWaP-C) envelope. This computational imaging camera can be used in weapon sights, surveillance and reconnaissance systems, precision strike target acquisition, and other platforms. The development should provide bi-directional communication between tactical devices, with onboard real-time scene/data analysis that delivers critical information to the SOF Operator. As part of this feasibility study, Offerors shall address all viable overall system design options, with specifications for the key system attributes.
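
As a non-prescriptive illustration of onboard real-time scene/data analysis, the sketch below shows a minimal Python edge-inference loop that captures frames, runs a detector, and reports structured detections with per-frame latency. The capture source, the detect_objects() placeholder, and the output fields are assumptions for illustration only; the topic does not mandate any particular framework or interface.

```python
# Minimal sketch of an onboard real-time scene-analysis loop (illustrative only).
# Assumes OpenCV for frame capture and a hypothetical detect_objects() wrapper
# around whatever low-SWaP inference engine the Offeror selects.
import time
import cv2


def detect_objects(frame):
    """Hypothetical placeholder for an AI-enabled detector/classifier.

    A real implementation would run a quantized model on an embedded
    accelerator and return (class_label, confidence, bounding_box) tuples.
    """
    return []  # no detections in this stub


def main():
    cap = cv2.VideoCapture(0)  # assumed sensor interface; real hardware differs
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            t0 = time.monotonic()
            detections = detect_objects(frame)
            latency_ms = (time.monotonic() - t0) * 1000.0
            # Critical information surfaced to the operator / other devices:
            for label, confidence, bbox in detections:
                print(f"{label} conf={confidence:.2f} bbox={bbox} "
                      f"latency={latency_ms:.1f} ms")
    finally:
        cap.release()


if __name__ == "__main__":
    main()
```

In a fielded system, the placeholder detector would be replaced by a quantized model running on an embedded accelerator sized to the SWaP-C envelope, and the detections would be published to other tactical devices rather than printed to a console.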

DESCRIPTION:

A system-of-systems approach to "smart" Visual Augmentation Systems and the integration of a next-generation smart sensor enable information sharing between small arms, SOF VAS, and other target engagement systems. Sensors and targeting promote the ability to hit and kill the target while ensuring that the Rules of Engagement are met and that civilian casualties and collateral damage are eliminated. Positive identification of the target and a precise firing solution will optimize the performance of the operator, the weapon, and the ammunition to increase precision at longer ranges in multiple environments.
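
To make the information-sharing concept concrete, the sketch below builds a Cursor-on-Target (CoT)-style XML event describing a detected target, using only the Python standard library. The uid, the type code, and the point accuracy values are illustrative assumptions, not a mandated message format; actual interoperability requirements would be defined during the feasibility study.

```python
# Illustrative Cursor-on-Target (CoT)-style event for sharing a detected target
# between tactical devices; field values and type codes are assumptions.
from datetime import datetime, timedelta, timezone
import xml.etree.ElementTree as ET


def _iso(t):
    return t.strftime("%Y-%m-%dT%H:%M:%S.%fZ")


def build_cot_event(uid, lat, lon, stale_s=60):
    now = datetime.now(timezone.utc)
    event = ET.Element("event", {
        "version": "2.0",
        "uid": uid,                      # assumed device/target identifier
        "type": "a-h-G",                 # illustrative hostile-ground type code
        "time": _iso(now),
        "start": _iso(now),
        "stale": _iso(now + timedelta(seconds=stale_s)),
        "how": "m-g",                    # machine-generated
    })
    ET.SubElement(event, "point", {
        "lat": f"{lat:.6f}", "lon": f"{lon:.6f}",
        "hae": "0.0", "ce": "10.0", "le": "10.0",  # assumed accuracy values
    })
    return ET.tostring(event, encoding="unicode")


print(build_cot_event("sensor-01.track-7", 34.000000, -106.500000))
```

A payload of this kind could travel over whatever tactical data link the candidate architecture selects, allowing small arms, SOF VAS, and other target engagement systems to share a common picture of the target.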


This system could be used in a broad range of military applications where Special Operations Forces require: Faster Target Acquisition; Precise Targeting; Automatic Target Classification; Classification-Based Multi-Target Tracking; Ability to Engage Moving Targets; Decision Support System; Targeting with Scalable Effects; Battlefield Awareness; and an Integrated Battlefield (Common Operating Picture with IoBT, ATAK, and CoT across Squad and Platoon).
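
As an illustration of the classification-based multi-target tracking capability listed above, the sketch below associates detections to existing tracks only when their class labels match and their bounding-box overlap (IoU) exceeds a threshold. It is a minimal, dependency-free sketch; the data structures, thresholds, and greedy association strategy are assumptions for illustration, not a required algorithm.

```python
# Minimal sketch of classification-based multi-target tracking: detections are
# associated to existing tracks only when class labels match and box overlap
# (IoU) exceeds a threshold. Purely illustrative; thresholds are assumptions.
from dataclasses import dataclass, field
from itertools import count

_track_ids = count(1)


@dataclass
class Track:
    label: str
    box: tuple  # (x1, y1, x2, y2)
    track_id: int = field(default_factory=lambda: next(_track_ids))


def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0


def update_tracks(tracks, detections, iou_thresh=0.3):
    """Greedily associate (label, box) detections to same-class tracks."""
    unmatched = list(detections)
    for trk in tracks:
        best, best_iou = None, iou_thresh
        for label, box in unmatched:
            if label == trk.label and iou(trk.box, box) >= best_iou:
                best, best_iou = (label, box), iou(trk.box, box)
        if best is not None:
            trk.box = best[1]
            unmatched.remove(best)
    # Detections that matched no existing track start new tracks.
    tracks.extend(Track(label, box) for label, box in unmatched)
    return tracks


tracks = update_tracks([], [("person", (10, 10, 50, 90)), ("vehicle", (100, 40, 200, 120))])
tracks = update_tracks(tracks, [("person", (14, 12, 54, 92))])
print([(t.track_id, t.label, t.box) for t in tracks])
```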

PHASE I:

Conduct a feasibility study to assess what is within the art of the possible for satisfying the requirements specified in the "Objective" and "Description" paragraphs above.

PHASE II:

Develop and demonstrate a prototype system on a weapon sight or handheld binocular.

PHASE III:

This technology could also be adopted by the automotive industry for autonomous navigation.

KEYWORDS: Visual Augmentation, Computational Imaging Camera, Hyper Enabled, Artificial Intelligence, Machine Learning, Multi-Platform, Multi-Sensor.

References:

[1] The Hyper Enabled Operator, Small Wars Journal, https://smallwarsjournal.com/jrnl/art/hyperenabled-operator#_edn2

[2] AI Benchmark: All About Deep Learning on Smartphones in 2019, 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), https://arxiv.org/pdf/1910.06663.pdf
