Explosive Ordnance Disposal Visual Ordnance Identification Database (EODVOID)

Funding Agency

DOD

ARMY

Year: 2025

Topic Number: A254-002

Solicitation Number: 25.4

Tagged as: SBIR, BOTH

Solicitation Status: Open

NOTE: The solicitations and topics listed on this site are copies from the various SBIR agency solicitations and are not necessarily the latest or most up-to-date. For this reason, use the agency link below, which will take you directly to the appropriate agency server where you can read the official version of this solicitation and download the appropriate forms and rules.

View Official Solicitation

Release Schedule

  1. Release Date
    November 6, 2024

  2. Open Date
    October 2, 2024

  3. Due Date(s)

  4. Close Date
    November 20, 2024

Description

OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Trusted AI and Autonomy; Advanced Computing and Software

OBJECTIVE: The Explosive Ordnance Disposal Visual Ordnance Identification Database (EODVOID) will develop an automated photogrammetry method to greatly increase the speed of scanning and creating 3D models for thousands of ordnance samples. This would enable the development of a much-needed authoritative ordnance database and serve as a baseline standard for training and developing AI/ML detection and classification algorithms.

DESCRIPTION: Currently there is no Artificial Intelligence and Machine Learning (AI/ML) based ordnance detection/identification/recognition technology available to Army Explosive Ordnance Disposal (EOD) Soldiers. Such a technology would greatly assist in the detection and identification of ordnance items and potentially increase the safety of EOD personnel. EOD technicians have rendered safe over 100,000 improvised explosive devices in Iraq and Afghanistan since 2006 and have trained thousands of host nation forces. To effectively execute their mission, EOD technicians typically identify ordnance by physical sight or by analyzing images from fielded robotic platforms (ground or air). Currently, all identification of ordnance is done visually and relies on the expertise of EOD operators, who typically use printed reference materials that include photographs and line drawings of the potential explosive hazards. The problem of identifying threats is further complicated by the fact that there are tens of thousands of different types of threat items worldwide, and some may be made to look like other ordnance but function differently. To add to the issue, once ordnance is fired, its physical characteristics (shape) may change, and key identifying features such as markings may be altered or destroyed.
If ordnance has been left in certain environments for extended periods of time, its appearance may degrade significantly due to rust and damage. Once the ordnance is properly identified, the EOD technician can proceed to the next phase of the mission. The EODVOID will utilize photogrammetry methods developed under this SBIR, with cameras mounted on a robot to capture high-resolution files of ordnance and their components. We will develop an automated system that will scan, take photos, and create 3D models in one complete action. Once captured, these images will be stored, along with all the metadata for each item, in a database with the ability to be geographically tailored into subset databases for any regional deployment. It is extremely important to have high-resolution images for the purpose of training the computer vision (CV) algorithms. We will be generating (NOT SIMULATING) high-resolution images of ordnance that has been aged, rusted, dented, and broken and placed in different environments, covering all aspect angles of the ordnance.

PHASE I: This topic is only accepting Direct to Phase II (DP2) proposals, at a cost of up to $2,000,000 for a 24-month period of performance. Proposers interested in submitting a DP2 proposal must provide documentation to substantiate that scientific and technical merit and feasibility equivalent to a Phase I project have been met. Documentation can include data, reports, specific measurements, success criteria of a prototype, etc.

(DIRECT TO) PHASE II: Companies are expected to develop a fully automated photogrammetry scanning program for ordnance to create 3D models that can be populated into a database. This SBIR will create a controlled, automated environment using high-end DSLR cameras, computers, turntables, and proper lighting to ensure all the shadowing and detail on the ordnance is captured correctly.
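The scan-photograph-model capture pass described above can be sketched as a simple shot planner. The angular step, elevation set, and filename scheme below are illustrative assumptions — the topic does not prescribe any of them — and actual camera/turntable control is omitted, since no hardware API is specified.

```python
"""Hypothetical sketch of planning one automated photogrammetry capture pass:
rotate the item through a fixed azimuth step at several camera elevations,
and record pose metadata alongside each frame for later 3D reconstruction."""

import json
from dataclasses import dataclass, asdict


@dataclass
class Shot:
    item_id: str
    azimuth_deg: float    # turntable angle for this frame
    elevation_deg: float  # camera-arm angle for this frame
    filename: str


def plan_capture(item_id, azimuth_step=15.0, elevations=(0.0, 30.0, 60.0)):
    """Plan a full pass: one frame at every azimuth stop, at each elevation.

    The 15-degree step and three elevations are assumptions for illustration,
    not requirements from the topic.
    """
    shots = []
    for elev in elevations:
        az = 0.0
        while az < 360.0:
            fname = f"{item_id}_e{int(elev):02d}_a{int(az):03d}.jpg"
            shots.append(Shot(item_id, az, elev, fname))
            az += azimuth_step
    return shots


shots = plan_capture("projectile_105mm")
print(len(shots))                      # 24 azimuth stops x 3 elevations -> 72
print(json.dumps(asdict(shots[0])))    # per-frame metadata record for the database
```

Storing the `Shot` metadata with each image is what would later let the database be filtered into regional subsets and fed to reconstruction software.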
Because of this process, we will be able to generate (NOT SIMULATE) high-resolution 3D images, as well as age, dent, rust, and break apart items to show what actual fired ordnance would look like on the battlefield. This process will make data generation much faster and more comprehensive for the purpose of training the CV algorithms. This proposal will leverage highly mature computer vision approaches that have not previously been applied to EOD applications. Once the process to establish a database has been created, transfer learning methods will be employed as a first step toward achieving the required level of detection and classification. Convolutional Neural Networks (CNNs) are currently integrated as an important tool within the industrial base and have automated image and video recognition tasks, resulting in a high degree of effectiveness and efficiency across multiple sectors, including retail, automotive, healthcare, and manufacturing. CNNs are used in medical imaging applications and in manufacturing for monitoring and ensuring quality. Automotive manufacturers use related methods in the design of autopilot capabilities and autonomous driving applications. The development of the EODVOID and a corresponding metadata standard for training deep learning algorithms is fundamental to the development of AI/ML detection algorithms. The primary enabling technology for the autonomous recognition of ordnance is an automated scanning solution to populate the database. Such technology was proven to TRL 6 at Picatinny Arsenal and DEVCOM AC EOD in 2024. Companies must have extensive knowledge of photogrammetry methods and related automation software, and must be able to speed up the overall process of taking photos of ordnance and automatically transferring the images into complete 3D models. Currently, around 20-25 complete models can be produced per day.
The overall process needs to be brought up to around 50-75 models per day, if not more. Companies need to know how to work with Convolutional Neural Networks (CNNs) so that all the data captured through the photogrammetry method can eventually be used in a database that leads to ordnance identification on the battlefield.

PHASE III DUAL USE APPLICATIONS:

• Video gaming & AR/VR: leverages AI image detection for in-game object detection
• Healthcare: aids in creating 3D models as well as in classifying medical scans
• Autonomy: trains algorithms used to make computer-speed decisions about objects in the natural world
• Robotics & Manufacturing: industrial robotic applications use it to train robots in the manufacturing and quality control processes

REFERENCES:

1. https://digital.library.unt.edu/ark:/67531/metadc1085874/
2. https://ieeexplore.ieee.org/document/10566207

KEYWORDS: Photogrammetry; Scan; Classification; EOD; Ordnance; Identification