DoD SBIR Program Solicitation FY15.2
NOTE: The Solicitations and topics listed on this site are copies from the various SBIR agency solicitations and are not necessarily the latest and most up-to-date. For this reason, you should use the agency link listed below which will take you directly to the appropriate agency server where you can read the official version of this solicitation and download the appropriate forms and rules.
The official link for this solicitation is: http://www.acq.osd.mil/osbp/sbir/solicitations/sbir20152/index.shtml
Available Funding Topics
- A152-090: Linear Inflow Model Synthesis for Advanced Rotorcraft Configurations
- A152-091: Innovative Motion Measurement Package (M2P) for Guided and Un-Guided Munitions
- A152-092: Enhanced Analysis for Pulsed Voltammetry Evaluation Tool / System for Improved Power Systems
- A152-093: Techniques for Wire Recognition using mmW
- A152-095: Avian Vision Processing
- A152-096: Advanced Coordinated Control, Formation Flying for Nano-Satellite Applications
- A152-097: Underbody Blast, Crash and Rollover Interior Impact Injury Prevention Technologies
- A152-098: Variable Energy Ignition System for Heavy Fuel Rotary Engine
- A152-100: Low Cost, Low Temperature Processing, High Use Temperature Composite Material
- CBD152-001: Adjustable Focus Lenses for Respiratory Protection
- CBD152-002: Smart Split Neck Seals for Respiratory Protection
- CBD152-003: Development of Mycotoxin Medical Countermeasures
- CBD152-004: Exploiting Microbiome and Synthetic Biology to Discover and Produce Naturally Occurring Antibiotics
- CBD152-005: High Sensitivity, Low Complexity, Multiplexed Diagnostic Devices
- CBD152-006: Signal Processing for Layered Sensing
- DLA152-001: Advanced Manufacturing Technologies
- DLA152-002: Medical 3D Printing
- DLA152-003: Ceramic Additive Manufacturing for Metal Casting
- DMEA152-001: Rapid Non-destructive Detection of Advanced Counterfeit Electronic Material
- DMEA152-002: Analysis of Integrated Circuits Using Limited X-rays
- DTRA152-001: Radiation Hardened Optoelectronics for Optical Interconnects
- DTRA152-002: Materials Development for Enhanced X-ray Detection of Dynamic Material Events Under Fast Loading Rates
- DTRA152-003: High Performance Computing (HPC) Application Performance Prediction & Profiling Tools
- DTRA152-004: Instrumentation for Characterization of Fireballs, Hot Gases, & Aerosols from Defeat of Targets Containing Biological and Chemical Agents
- DTRA152-005: Joint Learning of Text-based Categories
- DTRA152-006: Island-mode Enhancement Strategies and Methodologies for Defense Critical Infrastructure
- DTRA152-007: Multi-mode Handheld Radioisotope Identification Instrument
- DTRA152-008: Standoff Detection of Highly Enriched Uranium
- MDA15-001: Advanced Cognition Processing and Algorithms for Improved Identification
- MDA15-002: Kinematic Reach/Containment
- MDA15-003: System Communications
- MDA15-004: Lethality Enhancement
- MDA15-005: Gaming Trainer
- MDA15-006: Command and Control Human-to-Machine Interface
- MDA15-008: Improved Track Accuracy for Missile Engagements
- MDA15-010: Innovative Methodologies for Modeling Fracture Under High Strain-rate Loading
- MDA15-014: Thermally Efficient Emitter Technology for Advanced Scene/Simulation Capability in Hardware in the Loop Testing
- MDA15-017: Innovative Antenna Arrays Enabling Continuous Interceptor Communications
- MDA15-018: Multi-Object Payload Deployment
- MDA15-020: Interceptor Thermal Protection Systems
- MDA15-022: Low Light Short Wave Infrared Focal Plane Arrays
- MDA15-023: Solid State High Power Amplifier for Communications
- MDA15-024: Non-Destructive Testing Methods for Detecting Red Plague Within an Insulated Silver Plated Copper Conductor
- MDA15-025: Passive Inter-Modulation RF Emissions Utilized for Identifying Galvanic Corrosion in Metal Structures
- N152-081: Synthesis and Realization of Broadband Magnetic Flux Channel Antennas
- N152-082: Design and Produce Millimeter Wave Dipole Chaff with High Radar Cross Section
- N152-083: Synthetic Aperture Radar Approaches for Small Maritime Target Detection and Discrimination
- N152-084: Test and Certification Techniques for Autonomous Guidance and Navigation Algorithms for Navy Air Vehicle Missions
- N152-085: Gallium Arsenide Based 1-Micrometer Integrated Analog Transmitter
- N152-086: Flight Deck Lighting Addressable Smart Control Modules
- N152-087: Ability for Electronic Kneeboard (EKB) to Communicate and Operate in a Multi-level Security Environment
- N152-088: Infrared Search and Threat Identification
- N152-089: High Peak Power 1.9 um Thulium-Doped Solid-State Lasers for Next-Generation Compact and Robust High Peak-Power Blue Lasers
- N152-090: Multi-Wavelength and Built-in Test Capable Local Area Network Node Packaging
- N152-091: Advanced Non-Destructive System to Characterize Subsurface Residual Stresses in Turbo-machinery Components
- N152-092: Inducing Known, Controlled Flaws in Electron Beam Wire Fed Additive Manufactured Material for the Purpose of Creating Non-Destructive Inspection Standards
- N152-093: Innovative, High-Energy, High Power, Light-Weight Battery Storage Systems Based on Li-air, Li-sulfur (Li-S) Chemistries
- N152-094: Model-Based Tool for the Automatic Validation of Rotorcraft Regime Recognition Algorithms
- N152-095: Ultra-High Temperature (UHT) Sensor Technology for Application in the Austere Environment of Gas Turbine Engines
- N152-096: Miniaturized, Fault Tolerant Decentralized Mission Processing Architecture for Next Generation Rotorcraft Avionics Environment
- N152-097: Low Emissions Waste to Energy Disposal
- N152-098: Modular Smart Micro/Nano-Grid Power Management System
- N152-099: Cooled BusWork for Shipboard Distribution and Energy Storage
- N152-100: Navy Air Cushion Vehicles (ACVs) Lift Fan Impeller Optimization
- N152-101: Amphibious Combat Vehicle Ramp Interface Modular Buoyant Kit (MBK) for Joint High Speed Vessel (JHSV) Stern Ramp
- N152-102: Modular Boat Ramp to Launch and Retrieve Watercraft from Joint High Speed
- N152-103: Innovative Flexible Equipment Support Infrastructure
- N152-104: Manufacturing Near-Net-Shape Conformal Electro-optic Sensor Window Blanks from Spinel
- N152-105: Metrology of Visibly Opaque, Infrared-Transparent Aerodynamic Domes, Conformal Windows, and Optical Corrector Elements
- N152-106: Metrology of Visibly Transparent Large Aspheric Optics
- N152-107: Manufacturing of Visibly Transparent Large Conformal Windows
- N152-108: Accelerating Instructor Mastery (AIM)
- N152-109: Reliability Centered Additive Manufacturing Design Framework
- N152-110: Dive Helmet Communication System
- N152-111: Rapid Initialization and Filter Convergence for Electro-optic / Infrared Sensor Based Precision Ship-Relative Navigation for Automated Ship Landing
- N152-112: Robust MEMS Oscillator Replacement for Quartz Crystal TCXO Oscillator
- N152-113: Unmanned Undersea Vehicle (UUV) Detection and Classification in Harbor Environments
- N152-114: GaN Avalanche Devices for RF Power Generation
- N152-115: Active Thermal Control System Optimization
- N152-116: Affordable Compact HPRF/HPM Attack Warning System
- N152-117: Low Size, Weight, Power, and Cost (SWAP-C) Magnetic Anomaly Detection (MAD) System
- N152-118: Ultra High Density Carbon Nanotube (CNT) Based Flywheel Energy Storage for Shipboard Pulse Load Operation
- N152-119: Guidance System on a Chip
- N152-120: Attack Sensitive Brittle Software
- N152-121: Compact Air-cooled Laser Modulate-able Source (CALMS)
- N152-122: In-Transit Visibility Module for Lifts of Opportunity Program (LOOP) & Transportation Exploitation Tool (TET)
- N152-123: Advanced UHF SATCOM Satellite Protection Features
- SB152-001: Cell Free Platforms for Prototyping and Biomanufacturing
- SB152-002: Cortical Modem Systems Integration and Packaging
- SB152-003: Broadband Self-calibrated Rydberg-based RF Electric Field and Power Sensor
- SB152-004: Many-Core Acceleration of Common Graph Programming Frameworks
- SB152-005: Ovenized Inertial Micro Electro Mechanical Systems
- SB152-006: Compact, Configurable, Real-Time Infrared Hyperspectral Imaging System
- SB152-008: Low Cost Expendable Launch Technology
Linear Inflow Model Synthesis for Advanced Rotorcraft Configurations
Current linear rotorcraft flight dynamics models are dependent on finite-state inflow theory based on potential flow modeling at the rotor plane. These inflow models have few parameters and are readily available in linear state-space form, making them easy to implement in flight dynamic models for stability assessment and control system design studies. These types of models have been developed for, and extensively used in, modeling single main rotor helicopters. The future Army rotorcraft fleet will include configurations beyond the traditional single main rotor/tail rotor helicopters (e.g., tiltrotors and compounds), and advances in simulation modeling are required. Physics-based inflow models (e.g., free-vortex wake models, CFD, VPM, etc.) provide a more accurate and wholly generic representation of the rotor wake and can capture interference effects between the rotor and other rotors, other lifting surfaces, the fuselage, and the ground. Rotor interference effects become critical when the wake of one rotor system is immersed in another, such as in coaxial helicopters. The physics-based models are nonlinear and so are limited in their use in flight dynamics and control studies. State-space inflow models of coaxial configurations have begun to be developed, but validation data is limited and needs to be improved in forward and maneuvering flight [6, 7]. A method to numerically determine a real-time linear state-space model directly from physics-based models that captures the key interference effects would provide a pathway for accurate model development and improved stability analysis and control law design of advanced configurations. The goal of the effort will be to develop and validate a methodology to determine linear time-invariant (LTI) parametric inflow models from physics-based models for any rotorcraft configuration. The parametric model should include interference effects of other aircraft components.
The inflow model should be easily included in existing rotorcraft simulation tools. The methodology should be applicable not only to single main rotor helicopters, for which linear inflow models exist, but should be generic and easily extensible to coaxial, compound, or other rotorcraft configurations. The validation should consist of wake and full aircraft dynamics comparisons (e.g., time histories, frequency responses, trim analysis, etc.) with higher-order models, the nonlinear simulation from which the model was derived, and experimental/flight data, if available. The methodology developed would reduce the iterative nature of control law design and lead to cost savings and increased efficiency in the development process of aircraft flight control systems. The feasibility of determining accurate linear inflow models for advanced rotorcraft configurations and an initial model structure will be established in Phase I. If the ability is confirmed, Phase II will fully develop the capability. The primary interests of this study are (1) linearization capability, (2) accuracy, and (3) scalability of parameter representations. There are broad dual-use applications for a linear representation of advanced rotorcraft inflow dynamics. Major military rotorcraft manufacturers all have multi-rotor aircraft and other advanced configurations in use or development. The inflow modeling of these configurations has proven to be problematic and a key barrier to proper predictive capability of aircraft flight dynamics characteristics. PHASE I: Determine the feasibility of extracting a real-time parametric linear inflow model for advanced rotorcraft configurations from a physics-based model that includes interference effects of other aircraft components. Propose an initial model structure, compare it to other inflow model types, and document the advantages and disadvantages of each method.
Describe the methodology that will be used to obtain the model and all simplifications or assumptions needed to produce the linear model. Describe the process that will be used to obtain time history and frequency response data for validation. PHASE II: Develop and validate a software tool that implements a numerical method to generate parameterized LTI state-space inflow models for use in rotorcraft flight dynamics analysis. The tool should require generic inputs, including aircraft states and control inputs, and be able to correctly predict the inflow at the rotor(s) in both hover and forward flight. The model should be applicable to a range of physics-based models. Demonstrate the derived model in a flight dynamics simulation of a representative test aircraft. Compare results of this model with prior work using finite-state and higher-order models, including predictions of transient response with flight test data. Model predictive capability will be evaluated by comparing time histories and frequency responses of the high-order and linear models. Time histories should provide an error of 25% or less, and the frequency responses should have costs of less than 100. Provide documentation for incorporation of this model with existing flight dynamics modeling tools. PHASE III: Transition the tool for commercial use with military and industry customers. Since the developed tool is generic, it will be compatible with each customer’s existing physics-based inflow models. The government will use the tool to develop accurate flight dynamics models to support handling qualities (through piloted simulation) and control law development for advanced rotorcraft configurations. Industry customers could use the tool for similar purposes.
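The extraction step described above, deriving an LTI state-space model from a nonlinear physics-based inflow code, can be sketched by numerical perturbation about a trim point. The example below is a minimal illustration, not any specific wake code: the two-state "two-rotor" model, its time constant, and its interference gain are all hypothetical stand-ins.

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference extraction of an LTI model x_dot = A*dx + B*du
    about the trim point (x0, u0), where f(x, u) is any nonlinear model."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Toy stand-in for a physics-based inflow model: one uniform-inflow state per
# rotor, with a cross-coupling (interference) term between the two rotors.
def two_rotor_inflow(x, u):
    tau, k_int = 0.1, 0.3          # time constant and interference gain (illustrative)
    x1, x2 = x
    return np.array([(-x1 + u[0] + k_int * x2) / tau,
                     (-x2 + u[1] + k_int * x1) / tau])

x_trim, u_trim = np.zeros(2), np.zeros(2)
A, B = linearize(two_rotor_inflow, x_trim, u_trim)
print(A)   # off-diagonal terms capture rotor-on-rotor interference
```

The same perturbation machinery applies unchanged whether f wraps a two-line toy model or a full free-vortex wake code; the identified off-diagonal entries of A are exactly the interference terms the topic asks the parametric model to retain.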
Innovative Motion Measurement Package (M2P) for Guided and Un-Guided Munitions
Performance of future munitions is dependent upon accurate estimation of the airframe’s angular motion, acceleration about each axis, velocity, and roll position relative to vertical. The M2P will reside within the munition airframe and measure actual projectile/airframe properties, which can be used by the munition's guidance package and/or fuzing system. The M2P technology can utilize conventional sensor technology but must also include novel technology that will enhance measurements compared to current methodologies. Data such as launch conditions need to be accurately predicted in order for the M2P package to integrate rates and accelerations and accurately predict the projectile’s attitude. The projectile’s M2P package shall not include GPS. The M2P sensor technology needs to function through the gun-launch shock environment and reliably measure, collect, and transmit data to the appropriate munition sub-system. The solution must be small enough to fit within the very limited volume claim of the airframe (ideally within 1 cubic inch) and utilize the existing power supply (ideally less than one watt). Life expectancy of the technology shall be at least 20 years. PHASE I: Identify all gun launch environmental effects and factors the projectile experiences. From these factors, the contractor shall formulate, and verify in the laboratory, technologies capable of accurately measuring the motion of the projectile during and after gun launch. The contractor will perform and document design analyses to demonstrate compliance with the requirements listed above. The results of Phase I will include an engineering analysis of alternatives noting the design capabilities and limitations and recommendations for the Phase II effort, as well as recommendations for the physical prototypes to be built in the laboratory and subjected to laboratory functional testing.
PHASE II: Based on success in Phase I, the contractor shall refine the selected design(s) to meet functional and environmental requirements. The contractor will design and build a minimum of 10 prototypes and provide them to the government. At the Government’s discretion, these prototypes may undergo air and/or rail gun testing culminating in a ballistic test (live fire). Deliverables include the 10 prototypes and an engineering report, in the contractor’s format, on the selected design. PHASE III: The topic author envisions that the LOS/BLOS Division within ARDEC’s Munitions Engineering and Technology Center will incorporate and test the developed technology in a time-fuzed munition. A need may exist to provide analysis, instructions, process control documents, and a design that can be used by Army engineering to integrate the Phase II technology into a selected munition for further performance evaluations. To this end, the contractor may be requested, if funds are available, to design and build a minimum of 10 prototype sensor packages to be integrated into an Army-selected munition. This munition may undergo air and/or rail gun testing culminating in a live-fire ballistics test.
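The attitude-prediction step described above, integrating measured body rates to track orientation without GPS, can be sketched with a first-order strapdown quaternion update. This is a minimal illustration under assumed conditions; the spin rate, time step, and single-axis motion are hypothetical, and a fielded M2P would need launch-condition initialization and error compensation on top of this.

```python
import numpy as np

def propagate_attitude(q, omega, dt):
    """Advance an attitude quaternion q = [w, x, y, z] by one time step
    using measured body rates omega (rad/s): first-order strapdown
    integration of q_dot = 0.5 * Omega(omega) * q, then renormalization."""
    wx, wy, wz = omega
    Omega = 0.5 * np.array([[0.0, -wx, -wy, -wz],
                            [ wx, 0.0,  wz, -wy],
                            [ wy, -wz, 0.0,  wx],
                            [ wz,  wy, -wx, 0.0]])
    q = q + Omega @ q * dt
    return q / np.linalg.norm(q)   # renormalize to suppress numerical drift

# Example: a projectile rolling at 10 rev/s about its spin axis for 50 ms
q = np.array([1.0, 0.0, 0.0, 0.0])          # initial attitude (aligned)
omega = np.array([2 * np.pi * 10, 0.0, 0.0])  # body rates from the gyros
dt = 1e-4                                     # 10 kHz sample rate
for _ in range(500):
    q = propagate_attitude(q, omega, dt)
roll = 2 * np.arctan2(q[1], q[0])   # recovered roll angle, ~pi rad here
```

The point of the sketch is the dependency the topic calls out: any gyro bias or wrong launch-condition initialization feeds directly into the integral, which is why launch conditions must be known accurately for the attitude estimate to hold.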
Enhanced Analysis for Pulsed Voltammetry Evaluation Tool / System for Improved Power Systems
In order to develop new high-performance batteries, fuel cells, and sensors, the electrochemical behavior of materials and devices needs to be quantitatively assessed. This assessment (models and systems characterization) will help identify the performance of electrochemical systems, leading to the development of significantly improved power sources. New electrochemical analysis tools will enable better characterization of electrodes, electrolytes, and devices. Often only a few data points from a voltammogram or working curves for each experimental condition are used to determine the formal potential and peak current. These analyses can be imprecise due to background currents, multiple redox processes, etc. Pulse voltammetry not only reduces charging currents but can also be mathematically described and modeled. To date, only a limited set of fundamental models have been developed to describe the pulsed voltammetric response for complex mechanisms and electrode geometries using the data contained in the complete voltammogram. Additional models need to be developed, and these techniques made accessible to the research community, using a generalized approach that can easily determine relevant electrochemical parameters from datasets obtained by modern instrumentation. In particular, it is not currently possible to determine the reversible half-wave potential, the charge transfer coefficient, the heterogeneous charge transfer rate, or the diffusion coefficient from the entire voltammogram. A tool is needed that is able to quantify electrochemical parameters using a variety of pulse profiles; it would enable enhanced understanding of the behavior of redox couples and of resultant devices such as batteries, fuel cells, and electrochemical sensors. Furthermore, the algorithms must be able to translate between different types of instrumentation and new pulse-based voltammetry techniques. These tools would be integrated into commercial electrochemistry data generation and analysis packages.
PHASE I: Develop models and measurement/characterization tools that can determine the reversible half-wave potential, the charge transfer coefficient, the heterogeneous charge transfer rate, and the diffusion coefficient for single square wave voltammograms. The resultant tool should allow the parameters to be calculated entirely from the voltammograms (with user-provided initial values) or allow individual variables to be set by the user and the other variables calculated based on the voltammogram. In addition, the package should determine the confidence/uncertainty for the calculated electrochemical parameters. PHASE II: Incorporate additional models to allow utilization of data from a variety of pulse voltammetry techniques, including normal pulse, reverse pulse, and differential pulse voltammetry, as well as coupled electrochemical and chemical mechanisms such as a preceding chemical reaction, following chemical reaction, catalytic chemical reaction, etc. The algorithm should also be capable of determining the electrochemical parameters from a "bundled" series of voltammograms with, for example, varying square wave frequency. Verify with statistical confidence intervals the accuracy of the data obtained. Integrate the models and evaluation tools into a commercial electrochemistry data generation and analysis package that will assist in determining the performance of prospective solutions. PHASE III: This product would be used in a broad range of military and civilian research with applications including advanced batteries, electrosynthesis, electrocatalysis, and detectors, to provide decisions on which technologies to further develop for improved electrochemical power sources.
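The fitting approach described above, estimating parameters from the entire voltammogram rather than a few peak points, can be sketched as a least-squares fit of a model wave to synthetic data. The sketch below assumes an idealized reversible steady-state wave and recovers only a half-wave potential and limiting current; a real tool would fit full square-wave voltammetry models for the kinetic and diffusion parameters and report confidence intervals. All numerical values are illustrative.

```python
import numpy as np

F_RT = 38.92  # F/(RT) at 25 C, in 1/V, for n = 1

def reversible_wave(E, E_half, i_lim):
    """Idealized reversible steady-state voltammetric wave: an illustrative
    stand-in for a full pulsed-voltammetry response model."""
    return i_lim / (1.0 + np.exp(-F_RT * (E - E_half)))

# Synthetic "measured" voltammogram: true E_half = 0.05 V, i_lim = 2 uA,
# plus measurement noise.
rng = np.random.default_rng(0)
E = np.linspace(-0.3, 0.3, 121)
i_meas = reversible_wave(E, 0.05, 2.0e-6) + 1e-8 * rng.standard_normal(E.size)

# Least-squares fit over the *entire* wave via a grid search on E_half;
# the amplitude i_lim is solved in closed form at each candidate.
best = (None, None, np.inf)
for E_half in np.linspace(-0.2, 0.2, 401):
    basis = 1.0 / (1.0 + np.exp(-F_RT * (E - E_half)))
    i_lim = (basis @ i_meas) / (basis @ basis)       # linear LSQ amplitude
    sse = np.sum((i_meas - i_lim * basis) ** 2)
    if sse < best[2]:
        best = (E_half, i_lim, sse)

E_half_fit, i_lim_fit, _ = best
```

Because every point on the wave contributes to the residual, the estimate is far less sensitive to background current and noise than reading off a single peak, which is the core advantage the topic is after.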
Techniques for Wire Recognition using mmW
Rotorcraft landing and takeoff are dangerous in environments where obstacles, particularly wires or power lines, exist and pilot vision is degraded by obscurants such as dust, smoke, fog, rain, and snow. This SBIR would focus on a radar solution to detecting wires and power cables when landing in a visually degraded environment. Existing data for wires and power lines with millimeter wave radars produce either curtains or periodic bright spots through the Bragg effect when used in traditional real beam imaging. However, if the sensor is capable of penetrating an obscurant and has an automated technique to recognize a power line or wire, the sensor can simply send data to an independent imager to nominally represent a wire without the need to image it. The development of a technique or algorithm to recognize wires at a variety of approach angles will improve the Army’s capability of operation in degraded visual environments. PHASE I: Phase I will investigate an approach to recognizing wires, including wires from 3/8” up to bundled high power transmission lines, at angles of incidence from 0 degrees (head on from the sensor) to 45 degrees. The techniques proposed will utilize previously collected, fully calibrated, government-furnished sensor data in the millimeter wave radar regime (W band). PHASE II: Phase II will further develop and refine techniques, algorithms, and software to demonstrate the ability to detect and identify wires using the approaches laid out in Phase I. The laboratory demonstration will include formatted and calibrated W-band government-supplied data with approaches to wires from a range of angles of incidence. The Phase II effort shall be capable of detecting bundled high power transmission lines (threshold) down to smaller 3/8” wires (objective), at angles of incidence of 0-20 degrees (threshold) and up to 45 degrees (objective), from detection ranges of 150 meters (threshold) out to 2 kilometers (objective).
PHASE III: Phase III will build on the laboratory demonstration from Phase II to include integration with a W band radar sensor and a synthetic vision system, providing a full detection and visualization system for pilot situational awareness on rotary wing aircraft. The proposers will also consider other military applications, including fixed wing and UAV operations, to enhance capabilities in degraded visual environments. Commercially, this capability enhancement can transition to the commercial aircraft industry for passenger airlines as well as the shipping industry. The end state system will also increase aircraft survivability for both private industry first responders and emergency medevac helicopter transports, as well as military pararescue aircraft in urban areas where wires may impact helicopter survivability. The end state system is directly applicable to sensor fusion efforts in the RDECOM Degraded Visual Environment Mitigation Program and can transition to PM Aviation. The technology, as it matures, can also transition to meeting the needs of Future Vertical Lift expected capabilities (survivability and enhanced capability in all weather and all environments) and expected mission tasks, including enhancing the survivability of medical pararescue rotary wing aircraft.
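One plausible first stage for recognizing the periodic bright returns described above is a constant false alarm rate (CFAR) detector run along each range profile, with the regular spacing of the detections serving as a wire signature for later classification. The cell-averaging CFAR sketch below runs on synthetic data; the glint spacing, noise model, and threshold are illustrative assumptions, not derived from the government-furnished W-band data.

```python
import numpy as np

def ca_cfar(profile, guard=2, train=8, scale=10.0):
    """Cell-averaging CFAR: flag range cells whose power exceeds scale times
    the mean of the surrounding training cells (guard cells excluded)."""
    n = profile.size
    hits = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - guard - train), min(n, i + guard + train + 1)
        window = np.r_[profile[lo:max(0, i - guard)],
                       profile[min(n, i + guard + 1):hi]]
        if window.size and profile[i] > scale * window.mean():
            hits[i] = True
    return hits

# Synthetic range profile: exponential clutter floor plus periodic bright
# cells, mimicking the Bragg-like glints a wire can return in real-beam
# mmW imagery.
rng = np.random.default_rng(1)
profile = rng.exponential(1.0, 400)
profile[50::70] += 40.0            # glints every 70 range cells
hits = ca_cfar(profile)
glints = np.flatnonzero(hits)      # near-uniform spacing suggests a wire
```

A recognition algorithm would then test the detected cells for the regular spacing (and its variation with approach angle) that distinguishes a wire from isolated point clutter.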
Avian Vision Processing
Birds of prey, also known as raptors, are birds that hunt or feed on other animals. They are characterized by keen vision that allows them to detect prey during flight. Since vision is the most important sense for birds, and good eyesight is essential for safe flight, this group has a number of adaptations which give visual acuity superior to that of other vertebrate groups. The objective of this SBIR is to develop an innovative solution which will allow the operator to achieve the same sort of superior imaging and improved situational awareness available to avian predators. The effective focal length (EFL) of the imaging system itself should be able to change; simple digital magnification of an image does not provide the kind of image clarity discussed here. While variable focal length lenses have been demonstrated (see references), shortcomings have been identified both in the range of the EFL shift and in the resolution capabilities of the system. The intent of this effort is to push the boundaries of this technology and achieve extreme shift and resolution. This EFL shift should be maximized, with a minimum EFL of no more than 40mm, and a dynamic shift of 10x (threshold), 100x (objective). Time for EFL shift will be considered; while there is no rigid time requirement, the lens must be able to shift across its full dynamic range quickly. The system must also automatically detect targets within the imager's field of view, with a probability of detection greater than 80% at 1000m. False alarms are to be minimized as much as possible. Image processing must be conducted on a man-portable computer system (laptop or smaller). Once detected, range to the target must be determined. Range is to be accurate to +/- 1m within 1000m. While passive range finding via computer vision (see reference) is preferred, active range finding (to include laser range finding) is permitted; however, the ranging must be done automatically (i.e., the operator isn't tasked with aiming a laser; the system instead aims and range-finds for the operator).
PHASE I: Investigate innovative optical solutions for imaging systems. Develop and document the overall optic component design and accompanying algorithms for operation/alteration of the lensing system. Develop support documentation for the lensing medium. Demonstrate a proof of principle of the design by producing a preliminary architecture concept (for example, lens size, sensor size/density) where image acquisition information can be displayed and analyzed with a computer system. Phase I deliverables will include monthly status reports, a final phase report, and demonstration hardware. PHASE II: Develop and demonstrate a prototype capability for insertion into a realistic, Government-supplied imaging system. The component must be capable of demonstrating key operational parameters (in particular, alteration of EFL) in a laboratory environment. Include analytical studies and laboratory studies to physically validate the analytical predictions of separate elements of the technology. Identify representative components that are not yet integrated. Demonstrate the ability to automatically locate and track a target within the imager’s field of view. Demonstrate the ability to accurately determine range to that target. Phase II deliverables will include monthly status reports, a final phase report, and a prototype system (TRL 5/6) demonstrating functionality of the lens, target detection, and rangefinding. PHASE III: Prototype system validation in a realistic (outdoor, controlled range) environment. Components are integrated into a meaningful form factor, in a package robust enough for Soldier use. System size, weight, and power will be optimized for functionality and reliability. A manufacturability assessment will be conducted. The intent for this technology is to transition to the Fire Control family of systems, particularly the Fire Control-Squad and Fire Control-Precision Programs of Record (PORs).
Commercial opportunities (to be defined by vendor in proposal) will be explored.
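As a sketch of passive range finding via computer vision, the pinhole stereo relation R = f·B/d (range from focal length, baseline, and pixel disparity) also shows why the +/- 1m at 1000m requirement is demanding. All numbers below (focal length in pixels, baseline) are hypothetical, chosen only to make the scaling visible.

```python
# Passive ranging from stereo disparity under a pinhole camera model.
def stereo_range(f_px, baseline_m, disparity_px):
    """Range (m) to a feature matched in a stereo pair:
    R = f * B / d, with f in pixels, baseline B in meters, disparity d in pixels."""
    return f_px * baseline_m / disparity_px

# Example: 8000-px-equivalent focal length (a long EFL), 0.5 m baseline.
f_px, B = 8000.0, 0.5
for d in (40.0, 8.0, 4.0):
    print(f"disparity {d:5.1f} px -> range {stereo_range(f_px, B, d):7.1f} m")

# Sensitivity: dR = R**2 / (f*B) per pixel of disparity error. With these
# assumed numbers, one pixel of error at R = 1000 m corresponds to
# 1000**2 / (8000 * 0.5) = 250 m, so meeting +/- 1 m at 1000 m requires
# f*B >= R**2 / dR = 1e6 m-px, i.e. sub-pixel matching and/or a much
# longer effective focal length or baseline.
```

The quadratic growth of range error with range is the reason the topic leaves active (laser) ranging as a permitted fallback for the +/- 1m requirement.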
Advanced Coordinated Control, Formation Flying for Nano-Satellite Applications
The focus and priority of this topic is seeking innovative space-based remote sensor capabilities supporting an all-weather, day-night imaging capability. Preliminary research assessments highlight the availability of next generation device/component technologies and outline novel approaches for creating flotillas, swarms, and/or formations of nano-satellites with multi-faceted functions and sensor capabilities. While each individual satellite should have a specific sensor or control function, the overall formation/swarm should have a greater function and be “greater than the sum of its parts.” Of particular interest are solutions with multiple onboard processing computer clusters, very high bandwidth communications architectures, imagery collection/dissemination, SAR/ISAR, MASINT, and GPS alternatives. Current small satellites such as cubesats are limited in power due to size and weight constraints. Of particular interest are new power storage, collection, handling, and distribution concepts that will enable higher power components for communications and active sensors. Power requirements for a distributed aperture RF/EO system must be determined in order to develop a flotilla craft design. Solutions should target the goals defined here and should be scalable across a network of mobile ground, air, sea, and space devices. PHASE I: Research and develop novel approaches to demonstrate the feasibility of the end goal of performing distributed RF/EO apertures using nano-satellites. The Phase I effort should consist of a study effort to determine if current nanosat capabilities can be implemented to demonstrate this goal from low Earth orbit. Assess through analysis the Technology Readiness Level (TRL) of the proposed concept at the conclusion of Phase I. PHASE II: Based on the verified successful results of Phase I, refine and extend the proof-of-concept design into a fully functioning pre-production prototype. Verify the TRL at the conclusion of Phase II.
PHASE III: Develop the prototype into a comprehensive solution that could be used in a broad range of military and civilian applications where rapid RF/EO imagery is required. There are no particular requirements on data resolution at this time. This demonstrated capability will benefit and have transition potential to Department of Defense (DoD) weapons and support systems, federal, local and state organizations as well as commercial entities. For instance, a swarm of commercial nanosatellite sensors could be used to monitor crops, roadways, etc. or a team of floating EO sensors could be deployed in waterways to inspect the integrity of dams and levees, or to monitor the smuggling of illegal contraband in US waters.
Underbody Blast, Crash and Rollover Interior Impact Injury Prevention Technologies
Non-traditional interior roof military vehicle impact injury prevention technologies address the challenge of providing warfighter survivability, allowing warfighters to complete their mission by preventing impact-related injuries such as skull fractures and neck injuries otherwise incurred during underbody blast, crash, and rollover events. The solution accounts for the full range of occupants, including the 5th-percentile female, 50th-percentile male, and 95th-percentile male occupant sizes. Additionally, it takes into account that the occupant may be wearing the ACH helmet and additional gear worn on the body. The occupant shall be considered restrained during the blast event. Injury data from theater shows that mounted warfighter head, neck, and upper spine injuries due to occupant impacts with the vehicle interior during blast, crash, and rollover events frequently occur (Head Injury Analysis for DOT & E Study, JTAPIC RFI 2013-N0114, 10APR2013, and 2012-N0161 Blast Injury Prevalence Rate BIPSR, 10JAN2013). Injuries to the head include traumatic brain injuries (TBI), primarily concussions, skull fractures, face fractures, and neck/upper spine fractures. The focus of this topic is to reduce injuries related to skull and neck fractures. Traumatic brain injuries are out of scope of this topic, although it can be assumed that if impact energy is mitigated to reduce fractures, TBIs related to occupant impacts are also likely to be reduced. Non-traditional technologies may include, but are not limited to, active protective technologies, optimized interior geometric design, and a durable, flame-resistant exposed surface allowing protection for multiple impact directions. Interior impact protection shall be developed for military vehicle applications such as the HMMWV (High Mobility Multi-purpose Wheeled Vehicle), AMPV (Armored Multiple Purpose Vehicle), Abrams, Bradley Fighting Vehicle, Stryker, 20 T Truck, and HTV (Heavy Transport Vehicle).
Non-traditional technologies are needed to address military vehicle design trade-offs, such as i) minimizing vehicle packaging space claims with non-intrusive designs, and ii) achieving lighter weight than traditional technologies such as energy-absorbing materials.

PHASE I: Phase I entails a feasibility study, concept development, analysis, modeling and simulation, risk analysis, cost analysis and concept design of a non-traditional, roof-mounted impact injury protection technology focused mainly on head and neck injury protection. The concept shall use one US Army tactical vehicle and one ground combat vehicle, such as the HMMWV (High Mobility Multi-purpose Wheeled Vehicle), AMPV (Armored Multiple Purpose Vehicle), Abrams, Bradley Fighting Vehicle, Stryker, 20 T Truck or HTV (Heavy Transport Vehicle), as the basis for concept development; these vehicles are ITAR Export Controlled. Non-traditional technologies will be designed to provide impact protection at an AIS (Abbreviated Injury Scale) score of 2 or less. The technology shall provide impact injury protection from multi-directional impacts, have a durable, flame-resistant exposed surface, and be non-intrusive, ensuring the technology does not hinder or encroach upon the mounted warfighter. The technology shall be capable of providing occupant head-neck protection in less than 15 milliseconds, given an impact velocity of 15 miles per hour (24 kilometers per hour). The energy attenuation criterion for head impact protection is to achieve less than 700 HICd (Head Injury Criterion) using the test evaluation method and equipment per FMVSS 201U. HICd is calculated in accordance with the following formula, taken from JSSG-2010-7 Crew Systems Crash Protection Handbook (208):

HIC = max over t1, t2 of { (t2 - t1) [ (1/(t2 - t1)) ∫ A_R dt (from t1 to t2) ]^2.5 }

where A_R = [A_x^2 + A_y^2 + A_z^2]^(1/2) is the resultant acceleration magnitude in g units at the dummy head CG, and t1 and t2 are any two points in time during the impact event separated by not more than 15 milliseconds (FMVSS 49 CFR 571.208: Occupant Crash Protection, 2013.10.01).
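As a hedged illustration of the head-impact criterion, the following Python sketch computes the HIC form cited above over all windows of 15 ms or less in a dummy-head acceleration trace, then applies the FMVSS 201U free-motion-headform conversion (0.75446 × HIC + 166.4) to obtain HICd. The function name, data format, and uniform-sampling assumption are ours, not the solicitation's.

```python
import numpy as np

def hic_15(time_s, ax_g, ay_g, az_g, window_s=0.015):
    """Sketch of the HIC / HICd calculation described in the topic.

    time_s     : uniformly spaced sample times, seconds
    ax_g..az_g : accelerations in g at the dummy head CG
    Returns (hic, hicd). hic maximizes
        (t2 - t1) * [ (1/(t2 - t1)) * integral(A_R dt, t1..t2) ]**2.5
    over all windows with t2 - t1 <= 15 ms; hicd applies the
    FMVSS 201U free-motion-headform conversion 0.75446*HIC + 166.4.
    """
    a_r = np.sqrt(np.asarray(ax_g)**2 + np.asarray(ay_g)**2 + np.asarray(az_g)**2)
    dt = float(time_s[1] - time_s[0])
    # Cumulative integral of A_R so each window average costs O(1).
    cum = np.concatenate(([0.0], np.cumsum(a_r) * dt))
    n = len(a_r)
    max_win = int(round(window_s / dt))
    hic = 0.0
    for i in range(n):
        for j in range(i + 1, min(i + max_win, n) + 1):
            t = (j - i) * dt
            avg = (cum[j] - cum[i]) / t
            hic = max(hic, t * avg**2.5)
    return hic, 0.75446 * hic + 166.4
```

For a constant 50 g resultant pulse lasting longer than 15 ms, the widest allowed window dominates, giving HIC = 0.015 × 50^2.5 ≈ 265.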
Neck impact injury protection shall be designed according to SAE J885, 2011-02, sections related to direct impact to the neck and neck injury due to head inertial loading. The maximum peak flexion bending moment about the occipital condyle shall not exceed 190 Nm. The maximum peak extension bending moment about the occipital condyle shall not exceed 57 Nm. The maximum peak axial tension shall not exceed 3300 N. The maximum peak axial compression shall not exceed 4000 N. The maximum peak fore-aft shear shall not exceed 3100 N. The neck moments are calculated from the following formula:

MOCy = My - Fx(0.01778 m)

where MOCy is the moment about the y-axis at the occipital condyle, My is the measured moment from the load cell, and Fx is the measured force from the load cell.

The durability criterion is no degradation in performance when exposed to MIL-STD-810 Basic Hot and Basic Cold storage temperatures, humidity, and tracked vehicle vibration schedules. The technology concept shall be developed with the intent of not incurring holes from abrasion, tear or puncture after being tested at 1,000 cycles to ASTM D2582, ASTM D751, ASTM D2261, ASTM D3384, ASTM 966 or ASTM D1242, or a similar standard based upon the exposed surface sheet material composition. The technology concept shall be developed with an exposed surface designed to prevent overhead dripping and melting from injuring the occupant, and shall not generate a significant heat index, rapid flashing (ignition greater than 15 seconds), or flame spread, smoke or toxicity when exposed to a large fire per ASTM E1354 at 50 kW/m2 (< 200, flaming and non-flaming) and FAR 25.853, and shall have toxicity approval from the U.S. Public Health and Safety Department and TARDEC.
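To make the neck limits concrete, here is a minimal Python sketch of the occipital-condyle moment calculation and a pass/fail screen against the topic's peak limits. The sign conventions (positive moment = flexion, positive axial force = tension) and function names are our assumptions for illustration; they are not stated in the topic.

```python
D_OC = 0.01778  # m, load-cell-to-occipital-condyle offset from the topic

def occipital_condyle_moment(my_nm, fx_n, d=D_OC):
    """MOCy = My - Fx * d, per the formula quoted above."""
    return my_nm - fx_n * d

def neck_criteria_pass(mocy_nm, fz_n, fx_n):
    """Screen peak values against the topic's neck-injury limits.

    Assumed conventions: mocy_nm > 0 flexion, < 0 extension;
    fz_n > 0 axial tension, < 0 axial compression;
    fx_n is fore-aft shear (either sign).
    """
    return (mocy_nm <= 190.0 and       # peak flexion moment, Nm
            -mocy_nm <= 57.0 and       # peak extension moment, Nm
            fz_n <= 3300.0 and         # peak axial tension, N
            -fz_n <= 4000.0 and        # peak axial compression, N
            abs(fx_n) <= 3100.0)       # peak fore-aft shear, N
```

For example, a measured My of 100 Nm with Fx of 500 N gives MOCy = 100 - 500 × 0.01778 = 91.11 Nm, inside the 190 Nm flexion limit.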
When active technologies are being considered for innovative technology development, the technology shall be developed with consideration for ASTM D5428-08 Standard Practice for Evaluating the Performance of Inflatable Restraint Modules; ASTM D5427-09 Standard Practice for Accelerated Aging of Inflatable Restraint Fabrics; ASTM D7559/D7559M-09 Standard Test Method for Determining Pressure Decay of Inflatable Restraint Cushions; ASTM D5807-08 (2013) Standard Practice for Evaluating the Over-pressurization Characteristics of Inflatable Restraint Cushions; ASTM D6476-12 Standard Test Method for Determining Dynamic Air Permeability of Inflatable Restraint Fabrics; and ASTM D6799 Standard Terminology Relating to Inflatable Restraints. Analytical tools such as Finite Element Analysis and modeling and simulation shall be used for concept development, including pulse development. The outcome of Phase I shall include the scientific and technical feasibility as well as the commercial merit of the technology solution(s) provided. The concept(s) developed shall be supported by sound engineering principles. Supporting data and initial test data, along with material safety data sheets and material specifications, shall also be included if available. The projected development and material cost and timing shall be included in the study.

PHASE II: Phase II of this effort shall focus upon the validation and correlation of the analytical concepts and pulses developed in Phase I, along with technology prototype fabrication and validation laboratory testing and evaluation. Military vehicle application(s) and integration concepts will be further refined for one or more military vehicles, such as the HMMWV (High Mobility Multi-purpose Wheeled Vehicle), AMPV (Armored Multiple Purpose Vehicle), Abrams, Bradley Fighting Vehicle, Stryker, 20 T Truck and HTV (Heavy Transport Vehicle).
Phase II shall include vehicle-specific prototype component design, modeling and simulation analysis, and early fabrication for purposes of laboratory verification and vehicle interface evaluation. The energy attenuation performance of the technology solution(s) for head-neck impact injury protection shall be in accordance with the criteria in Phase I and shall be verified during Phase II. Laboratory testing with prototype concept hardware shall also verify the technology solution(s)' capability to withstand military vehicle environments and climates for durability and flame, smoke and toxicity resistance as described in Phase I. The technologies shall be designed to be securely attached to the vehicle roof in areas of the vehicle where occupant head-neck impact is projected to occur based upon modeling and simulation. Vehicle packaging space shall be considered relative to the 5th percentile female through the 95th percentile male, and analysis conducted to verify the occupant's space requirements are met in accordance with MIL-STD-1472. This system has the potential to be utilized in military and civilian truck and automotive applications. Technology risks and risk mitigation plans shall be clearly identified; design guidelines and lessons learned shall be documented. Any required modifications and re-testing shall be conducted during Phase II.

PHASE III: In the final phase of the project, the contractor shall prove out the effectiveness of the system on the AMPV, Stryker, HTV and/or Bradley Fighting Vehicle for underbody blast, crash and rollover conditions, integration, environment, and durability. The technology solutions designed in Phases I and II for a variety of military vehicles will be tailored to specific military vehicles for ease of transition to fielded vehicle solutions. Technology risks and risk mitigation plans shall be clearly identified; design guidelines and lessons learned shall be documented.
Any required modifications and re-testing shall be conducted during Phase III. Manufacturability shall be verified during Phase III. This system has the potential to be utilized in military and civilian truck and automotive applications, as well as potential naval applications. Additionally, the technology will be applicable to the commercial automotive and locomotive industries.
Variable Energy Ignition System for Heavy Fuel Rotary Engine
There is currently a shortfall in heavy fuel engines with a rated power below 100 BHP that are compatible with both JP-8 and DF-2, have high power-to-weight and power-to-volume density, provide good fuel consumption characteristics, and operate over extreme climatic ranges from below -25 F to 125 F ambient. One developing technology that could potentially fit this niche market is the spark-ignited, heavy fuel rotary diesel engine, which can provide from 10 BHP to 50 BHP per rotor, achieve best brake fuel consumption of less than 0.5 lbm/bhp-hr, and offer an engine power density of 1 hp/lbm for small ground vehicles. A major challenge with such engines is combustion system development, of which the ignition system is a critical element, due to the ignition source's direct contact with injected fuel and the complex fuel and air flow path characteristics. There are significant performance and reliability gains from a high-energy ignition event at low engine speeds and startup, compared to low ignition energy at high-speed conditions. The performance requirements for this ignition system SBIR are as follows:

• Variable output: 50 - 200 mJ / spark
• Response time: 2000 Hz
• Ambient temperature range: -25 F to 125 F
• Size: 1.5 liters
• Weight: 4.5 lbs
• Power consumption: 120 W
• Output (plug) channels: 5
• Fully controllable: energy output, timing

PHASE I: Identify and assess ignition system components and design approach. Design a brassboard ignition module for simulated workbench use.

PHASE II: Develop and build two generations of ignition system prototypes. Demonstrate and validate the performance stated in the topic description through computational analysis, bench-top experimentation, and relevant engine hardware demonstration. The system shall be controllable via a software interface. Demonstrations shall be completed in a laboratory environment with the TARDEC Combat Vehicle Prototype (CVP) program Advanced Auxiliary Power.
The Advanced Auxiliary Power program utilizes a 700cc heavy fuel rotary engine to produce 45 kW of electric power.

PHASE III: Develop and build a hardened prototype ignition system module capable of meeting MIL-STD-805C environmental standards. The ignition system shall be readily integrated onto the CVP program, a transition mechanism to the Future Fighting Vehicle (FFV). The resulting ignition system could be available for rotary diesel engines used in future Army unmanned ground and aerial vehicles with power requirements ranging from 20 - 100 BHP, while improving performance and reliability over the current state of the art.
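For illustration only, the "fully controllable" energy and rate requirements from the specification list above can be sketched in software. The class and method names below are hypothetical; a real module would drive ignition hardware rather than clamp numbers, and the per-channel rate limit is our reading of the 2000 Hz response-time line.

```python
# Controller-side sketch of the topic's spec: clamp commanded spark energy
# to the 50-200 mJ envelope and reject spark requests on a channel that
# arrive faster than the 2000 Hz response-time figure allows.

SPARK_MJ_MIN, SPARK_MJ_MAX = 50.0, 200.0
MAX_SPARK_RATE_HZ = 2000.0
N_CHANNELS = 5  # output (plug) channels per the spec list

class IgnitionModule:
    def __init__(self):
        self._last_fire_s = [None] * N_CHANNELS

    def command(self, channel, energy_mj, t_s):
        """Return the energy (mJ) delivered at time t_s, or None if the
        request exceeds the per-channel rate limit."""
        if not 0 <= channel < N_CHANNELS:
            raise ValueError("channel out of range")
        last = self._last_fire_s[channel]
        if last is not None and (t_s - last) < 1.0 / MAX_SPARK_RATE_HZ:
            return None  # too soon after the previous spark on this channel
        self._last_fire_s[channel] = t_s
        return min(max(energy_mj, SPARK_MJ_MIN), SPARK_MJ_MAX)
```

A 300 mJ request would be clamped to 200 mJ, a 10 mJ request raised to 50 mJ, and back-to-back requests inside a 0.5 ms window on the same channel rejected.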
Low Cost, Low Temperature Processing, High Use Temperature Composite Material
There is an emphasis on lightweight systems; however, many armament systems have use temperatures that exceed the limits of traditional organic composite systems. Specialty polymers can extend the range to 700F, but they are expensive and hard to process. High use-temperature composites include pre-ceramic polymer, ceramic matrix and metal matrix composites; all of these are also expensive and hard to process. This effort focuses on developing a composite system which is low cost and can be processed at low temperatures while still having a high use temperature. The processing temperature must be low enough so as to not cause coefficient of thermal expansion mismatch issues with steel substrates.

PHASE I: Develop a composite material system that demonstrates low cost, low processing temperature and high use temperature. Demonstrate its capabilities by producing mechanical test results across the entire temperature range of interest. ASTM D3039 (Standard Test Method for Tensile Properties of Polymer Matrix Composite Materials), D3410 (Standard Test Method for Compressive Properties of Polymer Matrix Composite Materials with Unsupported Gage Section by Shear Loading), D2344 (Standard Test Method for Short-Beam Strength of Polymer Matrix Composite Materials and Their Laminates), and either D3518 (Standard Test Method for In-Plane Shear Response of Polymer Matrix Composite Materials by Tensile Test of a +/- 45° Laminate) or D5379 (Standard Test Method for Shear Properties of Composite Materials by the V-Notched Beam Method) should be used to generate these properties. If a novel material that is not a polymer matrix composite is developed, then appropriate test standards may be substituted. The material deliverable shall be 25 lbs of developed material in a continuous fiber form that can be processed on existing filament winding or tape placement equipment. The use temperature must range from -70F to at least 800F, preferably 1000F.
The material should be physically and environmentally stable across the entire temperature range. Cost of the system should be the same as or lower than standard-temperature thermoset materials.

PHASE II: Refine the material system and demonstrate high temperature stability by testing material samples at elevated temperatures. Property goals at room temperature in the fiber direction shall be a tensile strength of 200 ksi, a tensile modulus of 25 Msi, a compressive strength of 100 ksi, and a compressive modulus of 20 Msi. The interlaminar shear strength shall be equal to or greater than 9 ksi; any deviation from this value shall be reported and a plan to achieve 9 ksi shall be described. Shear modulus and strength, along with transverse properties, shall be measured as well. At 800F, properties in all directions shall not decrease by more than 20%.

PHASE III: Finalize the development of a material-based solution that can be readily implemented on existing manufacturing equipment. Non-DoD applications include down-well piping, engine components, etc.
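The Phase II property goals amount to a simple acceptance screen: meet the room-temperature fiber-direction targets, and retain at least 80% of each measured room-temperature value at 800F. The sketch below uses field names of our own choosing to express that check.

```python
# Hedged sketch of the Phase II acceptance criteria; dictionary keys are
# illustrative, and values come from the topic's stated goals.

RT_GOALS = {
    "tensile_strength_ksi": 200.0,
    "tensile_modulus_msi": 25.0,
    "compressive_strength_ksi": 100.0,
    "compressive_modulus_msi": 20.0,
    "ilss_ksi": 9.0,  # interlaminar shear strength
}
MAX_HOT_KNOCKDOWN = 0.20  # <= 20% property decrease allowed at 800F

def meets_goals(rt_props, hot_props):
    """rt_props / hot_props: dicts keyed like RT_GOALS, holding measured
    room-temperature and 800F values respectively."""
    for key, goal in RT_GOALS.items():
        if rt_props[key] < goal:
            return False  # misses a room-temperature goal
        if hot_props[key] < rt_props[key] * (1.0 - MAX_HOT_KNOCKDOWN):
            return False  # loses more than 20% at 800F
    return True
```

For instance, a material retaining 85% of every room-temperature value at 800F passes the knockdown check; one retaining only 70% of its interlaminar shear strength fails.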
Adjustable Focus Lenses for Respiratory Protection
Current respiratory protection systems require optical inserts for wearers requiring optical correction. Use of optical correction inserts limits optical compatibility with night vision goggles and weapon systems due to the added eye relief. One reason individual high index lenses are not used is because they cost seven times more than vision correction inserts. Additionally, polycarbonate lenses have distortions for diopters above positive 6 and below negative 6. Logistics associated with optical inserts are costly due to the need for stocking inserts because inserts may require yearly exchange based on annual vision exams. Similarly, stocking custom lenses to accommodate every soldier is not logistically possible or cost effective. Technology is needed that can provide on-the-fly adjustable focus lenses to accommodate all wearers. The vision correction could be adjusted as a wearer's vision changes. Ideally, these lenses could be built into the respiratory protection system and would reduce overall eye relief. There are many technology concepts for adjustable focus eyeglasses. However, none have been demonstrated to work in a respiratory protective mask system and none are able to cover the entire range of optical correction needed by the military (-9 to 9 diopters). The current effort would develop novel adjustable focus lenses to allow wearers to focus on both near and distance objects with one lens. The adjustable focus lenses should be able to be integrated into the existing Avon Protection Systems, Inc. M50 respirator, should be lightweight, and should change focus quickly. All methods of incorporating the lens into the respirator system will be considered (e.g., the lenses could be included in the respirator during or after manufacture). The lenses must be able to withstand a large range of temperature and environmental extremes and must be resistant to chemical, biological, and non-traditional threat agents.
PHASE I: Demonstrate a lab-scale prototype/breadboard system that provides adjustable vision correction from -5 to +5 diopters, and demonstrate its response time.

PHASE II: Refine optical performance. Demonstrate adjustable vision correction from -9 to +9 diopters. Provide a means for the user to easily change the optical correction. Demonstrate the technology can quickly change and maintain the optical correction until the user decides to change it again. Demonstrate a response time of < 1 sec. Demonstrate performance using human subject volunteers. Field of vision, optical distortion, haze, and clarity should be the same as or better than the current M50 mask. Demonstrate the technology can be integrated into the M50 respirator.

PHASE III: Complete optical refinement. Optimize the fabrication process to demonstrate large-scale production capabilities. Demonstrate the ability of the technology to be incorporated into a facemask. Demonstrate that the technology is durable and suitable for military combat applications.

PHASE III DUAL USE APPLICATIONS: Potential alternative applications include optical correction in industrial, international, and commercial respiratory protection systems.
Smart Split Neck Seals for Respiratory Protection
Current respiratory protection neck seal systems do not incorporate smart sensing technologies. Current neck seal systems are simply basic circular rubber cut-outs and are required to be constructed of one continuous piece of material. Many wearers find traditional neck seals to be uncomfortable. Respiratory protection systems utilized for fixed wing aircraft pilots (e.g. JSAM-FW, AR-5, and AERP), as well as escape purposes (e.g. JSCESM and NIOSH CBRN escape hoods), utilize neck seals as a primary protective barrier. These one-piece neck seals may only be donned by pulling the system over the head and down onto the neck. This donning methodology limitation greatly impacts the overall system design, its ability to be worn concurrently with other forms of head-borne PPE, and wearer acceptance. Innovative technology is needed that can allow for a smart split neck seal design providing donning versatility, improved comfort, and user feedback regarding seal performance. Application of smart technology to a neck seal will provide an additional option for wearers with facial hair and/or spectacles (optical correction), would assist in avoiding seal collisions with concurrently worn headgear (e.g. helmet chin straps), and would allow for optional overlapping neck seal wrap designs as opposed to the current continuous neck seal that must be donned over the head. Sensor technology that would continually assess the integrity of the seal but does not require the presence of a specific threat challenge is desired. This sensor would help balance comfort with protection by ensuring that a hermetic neck seal is maintained around the entire circumference of the neck while concurrently ensuring the pressures at the neck are not so excessive as to cause discomfort. Many types of flexible electronics and pressure-sensitive devices have been constructed from thin films, micro-electromechanical systems (MEMS), and nanowires.
However, none have been demonstrated to work in a respiratory protective mask seal system. The current effort would develop not only a novel split neck seal design but also an innovative sensor technology to enhance the sealing of the respiratory protective mask. The sensor would ensure the mask seal maintains proper pressure against the neck skin surface to prevent breakage of the mask seal. In addition, the sensors would be utilized to ensure that both sections of the two-piece seal design adequately adhere to one another to form a continuous hermetic sealing surface. The resultant system must be hygienic, durable, and easy to clean. The sensor should be able to be integrated with the seal of the mask, should be lightweight, and should not impact the flexibility or extensibility of the sealing surface. The system must be able to withstand a large range of temperature and environmental extremes and must be resistant to chemical, biological, and non-traditional threat agents. Lastly, the split neck seal system with the seal sensor should not add more than thirty dollars to the cost of the system in which it is intended to be integrated.

PHASE I: Investigate material sealing methodologies (e.g. ferrofluids, advanced closure systems, etc.) that would allow for the development of a split neck seal design with a hermetic closure. Identify seal performance and wearer comfort metrics for the neck while wearing a full facepiece respirator that seals to the neck. Identify appropriate sensor technology for the proposed split seal design. Demonstrate that the selected material sealing technology provides a hermetic seal to the geometry of the neck (e.g., on a headform) and to itself (as a neck closure). A negative pressure between 10 and 15 cmH2O shall be applied to the neck seal and maintained for 30 sec with less than a 2.5 cmH2O drop in pressure.

PHASE II: Refine seal performance and develop the sensor technology identified in Phase I.
Provide a functioning prototype sensor. The developed sensors should detect a seal performance change of less than or equal to 2% within a time period of 0.2 seconds (e.g. for a pressure sensor, detect a pressure change of less than or equal to 2% of the identified pressures). Apply the developed sensors to actual neck seal geometries to demonstrate performance. Concepts for a split seal design, a seal performance indicator, and electronic user-adjustable and automatic fit control shall be demonstrated. Demonstrate the technology can quickly identify repeated seal breaks occurring in the same region. Develop the ability to store baseline pressures acquired during a successful fit test. Develop and demonstrate indicator technology to warn the user of seal breaks and/or differences in fit from the baseline. Provide a capability for visualization of the potential leak location. Provide a pre-production prototype of both face and neck seal respirators with embedded Smart Seal technology. Provide a means for self-calibration of the sensor and seal technology. Demonstrate that the flexibility of the existing sealing surface does not change by more than 2% due to the sensor technology. Total weight of the sensors, housing, and electronics, including the power source, should not exceed 50 g. Demonstrate that the system can be electronically adjusted by the wearer to improve fit, that an automatic fit control option is provided, and that these controls are capable of achieving suitable fit and fit adjustment during simulated workplace protection factor testing.

PHASE III: Complete system refinement. Optimize the fabrication process to demonstrate large-scale production capabilities. Demonstrate the ability of the technology to be incorporated into a full facepiece respirator prototype. Demonstrate the technology is durable and suitable for military combat applications.
PHASE III DUAL USE APPLICATIONS: Potential alternative applications include industrial, international, and commercial respiratory protection systems as well as protective clothing seals and closures.
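The Phase I leak criterion above (a negative pressure between 10 and 15 cmH2O held for 30 seconds with less than a 2.5 cmH2O drop) can be expressed as a simple screen over a recorded pressure trace. The Python sketch below assumes a list of (time, pressure-magnitude) samples; the trace format and function name are our own, not the solicitation's.

```python
# Minimal sketch of the Phase I neck-seal leak-test pass criterion.

def leak_test_passes(trace, hold_s=30.0, max_drop_cmh2o=2.5,
                     start_min=10.0, start_max=15.0):
    """trace: time-ordered (seconds, negative-gauge-pressure magnitude in
    cmH2O) samples. Passes if the initial vacuum is in the 10-15 cmH2O
    band and decays by less than 2.5 cmH2O over the 30 s hold."""
    t0, p0 = trace[0]
    if not start_min <= p0 <= start_max:
        return False  # initial vacuum outside the specified band
    for t, p in trace:
        if t - t0 > hold_s:
            break  # only the 30 s hold window is scored
        if p0 - p >= max_drop_cmh2o:
            return False  # decayed 2.5 cmH2O or more during the hold
    return True
```

A trace starting at 12 cmH2O and ending at 10.5 cmH2O after 30 s passes; one dropping from 12 to 9 cmH2O fails.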
Development of Mycotoxin Medical Countermeasures
Mycotoxins are toxins produced by several species of fungi. Exposure to these toxins can result in incapacitation or even death of the exposed subject. From a biological warfare perspective, mycotoxins are relatively easy to produce in large quantities and many are readily accessible. For these reasons, mycotoxins present a real threat to the warfighter. T-2, a trichothecene mycotoxin, can be delivered via food or water sources, droplets, aerosols, or smoke from various dispersal systems and exploding munitions. T-2 toxin has allegedly been used as a biological weapon in the past. Owing to its high thermal stability, T-2 toxin baked into bread retains its activity, and it is also extremely resistant to inactivation by ultraviolet light. As with virtually all toxins, T-2 toxin is far more toxic when inhaled than through oral, dermal, or injection exposure. Aflatoxin is a fungal toxin that commonly contaminates maize and other types of crops during production, harvest, storage or processing. Exposure to aflatoxin is known to cause both chronic and acute hepatocellular injury. In Kenya, acute aflatoxin poisoning results in liver failure and death in up to 40% of cases. Similar to T-2 toxin, aflatoxins have high resistance to thermal and ultraviolet radiation inactivation, making them prime candidates for weapons of mass destruction (WMD) or weapons of mass casualties (WMC). The development of mycotoxin medical countermeasures, specifically against aflatoxins and T-2 toxin, addresses current technology requirements as defined by the Joint Requirements Office for Chemical and Biological Defense (JRO-CBD) and the Joint Program Executive Office for Chemical and Biological Defense (JPEO-CBD). Currently there are no FDA-approved medical countermeasures available for either aflatoxin or T-2 mycotoxin exposure. These toxins are debilitating and sometimes lethal on their own.
Reports in the open literature suggest that concurrent exposure to mycotoxins in combination with other toxins (or pathogens) results in increased susceptibility to toxin-induced organ damage/failure and greater mortality than is observed when either toxin is given alone.

PHASE I: Offerors must propose proof-of-concept experiments to demonstrate neutralization or attenuation of mycotoxins (of particular interest is aflatoxin or T-2 toxin in a lethal in vitro model). Demonstration of efficacy in some form of an in vivo model is also acceptable, but not required for Phase I. Technologies of interest include, but are not limited to, antibodies (human antibodies are strongly preferred), small molecules, aptamers, and other novel approaches. Repurposing of FDA-approved drugs, or drugs with successful completion of FDA Phase I clinical trials, is also to be considered. Exit criteria for successful completion of Phase I research would be the demonstration of efficacy at low molar concentration in in vitro studies, or a 50% increase in survival in any animal model (assuming the animal model has already been developed and an animal use protocol approved). Information garnered from Phase I experiments may be more qualitative than quantitative.

PHASE II: With successful completion of Phase I experiments, Phase II would further evaluate the medical countermeasure (MCM) in a small mammal study. In these studies the MCM would be administered to animals with mycotoxin-only exposure and with mycotoxin plus another toxin, such as Staphylococcal enterotoxin A (SEA) or Staphylococcal enterotoxin B (SEB). The animal model should be of sufficient size and scope to demonstrate a statistically significant increase in long-term survival in animals receiving the MCM. The SBIR Phase II studies shall design experiments in a manner that facilitates the collection of non-clinical GLP pharmacokinetic (PK) and pharmacodynamic (PD) data.
The PK and PD information will be of paramount importance in informing subsequent Phase III studies.

PHASE III: Phase III studies would further refine the animal model and the compound/drug dosing regimen. The goal would be to work towards FDA approval of an MCM for one or more mycotoxins, to include aflatoxin(s) and/or T-2 toxin(s). FDA licensure/approval is not necessary for the project to be deemed successful. However, an objective demonstration of MCM efficacy in at least an animal model relevant to the human condition, and/or successful completion of an FDA-approved clinical trial with accompanying efficacy demonstrated in an animal model, is required for success to be declared. One means for the offeror to document progress in the right direction is through a Technology Readiness Assessment (TRA) of the technology using the harmonized Quantitative Technology Readiness Level (Q-TRL) guidance document as described by the Public Health Emergency Medical Countermeasures Enterprise (PHEMCE). A second means for demonstrating success is the establishment of funding and partnering with commercial companies (if necessary) to bring the product to market.

PHASE III DUAL USE APPLICATIONS: Successful MCM products directed towards mycotoxins clearly have broad dual use applicability. Acute mycotoxin intoxication is a common occurrence throughout much of the world, usually due to the growth of mycotoxin-producing fungi on grains stored under conditions conducive to fungal contamination.
Exploiting Microbiome and Synthetic Biology to Discover and Produce Naturally Occurring Antibiotics
The explosion in the “omics” field has allowed for unprecedented genetic identification of some of the billions of bacteria that comprise the world of the microbiome. A potential wealth of information is available through the study of species that have developed sophisticated defense mechanisms to protect themselves from the onslaught of foreign invaders. Recent examples include the microbiomes of the New World vulture and of humans. The potential for identification of natural product antibiotics is now within technical reach, and could represent a large family of hitherto unknown naturally occurring antibacterial agents. Furthermore, current data suggest that these natural products are produced from a cluster of genes, and likely represent a variety of agents that have singular and synergistic effects against invading bacteria. Isolation of the natural products, as well as elucidation of the genes involved in their biosynthesis, might provide a number of convergent paths towards the development of new and improved antibacterial therapeutic agents. Another advantage of these types of investigations is the well-established regulatory pathway to FDA approval. The ever-increasing discovery of drug-resistant strains, such as methicillin-resistant Staphylococcus aureus (MRSA), coupled with dwindling antibiotic research efforts in pharmaceutical companies, is one reason that Executive Order 13676 was issued in September 2014 to expedite the discovery and development of new antibacterial agents. The Department of Defense (DoD) is increasingly concerned with both multi-drug resistant strains of bacteria and those bacterial threats that could potentially be used as biowarfare agents. In an effort to address these gaps, the DoD is soliciting researchers who will take advantage of recent advances in technology to identify and develop new families of antibiotics derived from the microbiome.
PHASE I: Perform proof-of-concept studies to define the source of the microbiome, the identity of a gene cluster relating to the biosynthesis of antibacterial agents, and in vitro data showing antibacterial effect. It is not expected at this stage that any natural products will be isolated and identified; rather, crude mixtures would be tested to show in vitro activity. The bacterium does not, at this stage, need to be a priority for the DoD, and could be Escherichia coli, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, etc. If selected for a Phase II contract, the Phase I Option period should be used to test the crude mixtures of natural products against bacteria of interest to the DoD (Bacillus anthracis, Burkholderia mallei, Burkholderia pseudomallei, Francisella tularensis, and Yersinia pestis). The government currently offers a Core testing capability to perform in vitro and/or in vivo screening of compounds (lead, advanced, or licensed), alone or in combination, against an extensive panel of biodefense pathogens, as well as a panel of multi-drug resistant (MDR) pathogens, to generate minimum inhibitory concentration (MIC) data at no cost and with no intellectual property implications to the providing party. Offerors selected for a Phase I award will be provided additional information regarding these opportunities.

PHASE II: During Phase II, it is expected the crude mixtures used in Phase I will be purified to homogeneity. Novel chemical entities will be fully defined and characterized. These materials will be further tested for efficacy in in vitro assays. Depending on the complexity of the structure, it may be advantageous to develop a medicinal chemistry program around any novel chemotype discovered. Mechanism-of-action studies should be commenced to understand the target(s) of these natural products in order to further exploit the compounds as potential therapeutics.
It is further anticipated that the cluster of genes responsible for the biosynthesis of these small molecule compounds will be fully identified, and methods for their expression in suitable vectors will be developed. Synthetic biology approaches to constructing bio-factories for the large scale production of these materials could be explored for feasibility. Structure-function and/or structure-activity relationships should be established during this Phase, and the further development of these compounds as novel antibacterial agents should be defined. These studies would encompass a large spectrum of bacterial targets, and identification of broad-spectrum agents should be a priority. By studying and understanding the mechanism of action of these compounds, it may be advantageous to define a multi-pronged therapeutic regime similar to what could be found in nature. The primary deliverable from this Phase of the project will be to demonstrate in vivo efficacy (target >50% survival in a lethal challenge model when initially dosed greater than or equal to 12 h post-exposure) of one or more of these compounds against multiple bacteria, including but not limited to agents of interest to the DoD (B. anthracis, B. mallei, B. pseudomallei, F. tularensis, and Y. pestis). Ideally, these compounds will show characteristics amenable to advancement into pre-clinical and clinical studies.

PHASE III: Phase III activities will focus on advancing the most promising candidate(s) towards the clinic and FDA approval. This would include pre-clinical studies such as further animal efficacy studies, pharmacology, toxicology, formulation, and dose ranging studies to determine likely human dosing and routes of administration, as well as manufacturing and all requisite studies to file an Investigational New Drug (IND) application with the FDA.
PHASE III DUAL USE APPLICATIONS: Although the DoD has specific requirements for therapeutics and/or prophylaxis against bacterial agents that can be used as biowarfare agents, it is fully expected that any product derived from this work will have significant and broad commercial applications to the health and safety of the general public.
High Sensitivity, Low Complexity, Multiplexed Diagnostic Devices
The U.S. Department of Defense requires infectious disease in vitro diagnostic (IVD) capabilities that are operationally suitable for use in far forward military environments and operationally effective against a wide range of threats. Current single-use disposable Lateral Flow Immunoassay-based diagnostic tests have many desirable operational suitability characteristics (low cost, minimal training, lightweight, results in 15 minutes, eye-readable results, and long shelf life at room temperature) but lack sufficient sensitivity to be clinically useful for most infectious diseases. Current nucleic acid amplification-based diagnostic tests provide adequate sensitivity for some diseases but are slow (>30 minutes), more complex, incompatible with many host response biomarker-based diagnostic approaches, and have a high cost per test. The High Sensitivity, Low Complexity, Multiplexed Diagnostic Devices topic seeks to develop novel approaches that will fundamentally improve sensitivity while maintaining desirable operational suitability characteristics. Furthermore, novel approaches will be needed to incorporate multiple analytical approaches into a single platform technology to provide clinical utility across a broad range of etiological agents (i.e., intracellular organisms, parasites, etc.), diseases, and clinical sample types, and to provide information to support force health protection decision making.

PHASE I: Describe the specific technical approaches that would be pursued for achieving better than state-of-the-art clinical sensitivity (≥90%) for acute infections (testing occurs within the first 168 hours after symptom onset or pre-symptomatically) in an operationally suitable platform for the representative etiological agents/diseases:
• Yersinia pestis / Plague (Gram-negative coccobacillus)
• Brucella spp. / Brucellosis (intracellular, Gram-negative bacteria)
• Alphaviruses / Venezuelan equine encephalitis, Chikungunya (single-stranded RNA viruses)
• Dengue virus with serotype identification / Dengue Fever (single-stranded RNA virus)
• Variola major / Smallpox (DNA virus)
The five diseases listed are representative of a larger set of diseases of operational concern to the U.S. military that would be pursued if selected for Phase III transition. One or more of the representative diseases would be selected as the basis for prototype development in Phase II, depending on the specific approach proposed. Within Phase I, these five representative diseases serve as the basis for offerors to illustrate the innovative elements of their proposed technical approach when applied to a specific real-world problem. For disease-specific tests, the description of the technical approach entails a detailed description of assay designs (bio-recognition elements), signal amplification and transduction techniques, selected sample types (least invasive clinical sampling), and sample preparation techniques (if any) for a specific diagnostic intended use that illustrates the contractor’s understanding of the disease, the diagnostic problem, and improvements over the current state of the art for the same market. The description should provide details on how sufficient inclusivity and specificity will be obtained to inform treatment decisions. Syndromic approaches (through multiplexing) add significantly to clinical utility. Provide an analysis of the envisioned technical approach with respect to the Clinical Laboratory Improvement Amendments (CLIA) guidelines for CLIA-waived status.

PHASE II: For one or more of the test types investigated in Phase I, develop and deliver a prototype IVD device and pilot lot assays (if applicable to the system design) to the Government for independent evaluation.
Complete pre-submission meetings with the FDA to inform inclusivity, specificity, syndromic approaches, and intended use for the test and the CLIA-waived clinical trial design. The degree of innovation will be measured by the offeror’s ability to achieve a high clinical sensitivity for a broad range of disease and sample types while retaining operationally desirable characteristics (cost < $40 per sample analyzed, training time less than 4 hours, system weight with consumables for 40 tests less than 25 lbs., single sample time to result less than 30 minutes, eye-readable results, and consumable shelf life greater than 1 year at 25 °C). By the end of Phase II, the offeror will have produced a pre-production prototype of the diagnostic device; optimized the assay design for performance in the relevant clinical sample types, temperature and shelf life stability, and manufacturability; and will be ready to begin pre-clinical trials shortly after Phase III award.

PHASE III: Complete the maturation of all hardware, software, and reagent elements of the diagnostic device. Conduct pre-clinical and clinical trials and 510(k) package preparation and submission (as the sponsor) to the U.S. Food and Drug Administration (FDA) for the initial IVD product developed under Phase II. Conduct follow-on developments and FDA clearances of IVD tests for additional known and emerging diseases of operational interest to the U.S. Military. Manufacture IVD devices and assays (as applicable to the technical approach) under Current Good Manufacturing Practices (cGMP) and other quality systems and deliver them to the Government for operational use by Warfighters. The Government will provide Government Furnished Information (GFI) and Materials (GFM), when not publicly available, to support assay design and testing. The Government will provide access to Biological Safety Level (BSL) 3 and 4 testing facilities when needed.
PHASE III DUAL USE APPLICATIONS: Beyond the diagnostic use for the military population, products of this effort are intended to be used in U.S. and European Union domestic health care markets for in vitro diagnostics. Furthermore, for some agents, the products of this effort may be useful for companion diagnostics to be used in therapeutic development studies and environmental field analytics applications.
Signal Processing for Layered Sensing
Asymmetric threats including chemical and biological agents, improvised dissemination devices, and vehicle- and personnel-borne improvised explosive devices represent a persistent hindrance to U.S. military operations. Various sensor and surveillance systems provide a capacity to warn of the presence of such threats on a point-by-point basis; however, the consumption of these data in the construction of a common operational picture for unit commanders remains an intractable challenge. Even if all of the systems were integrated onto a common networking and communications framework so that early warning data from any individual sensor were available in real time, the capacity to process and analyze such data in order to synthesize situational understanding for leaders is extremely limited. The current state of the art in the chemical and biological sensing paradigm is limited to the correlation of alarm information from networked sensor systems, rather than the actual multimodal fusion of quantized native sensor signals. Enabling technology in information theory and signal processing continues to advance and emerge as a reasonably mature approach toward the consumption of sensor data, leading to the development of improved situational understanding based on disparate and unrelated data sources. While the integration and correlation of information has advanced in recent times, due in part to improved accessibility of networking and communications architectures, few examples of multimodal data fusion using data from more than one sensor have been reduced to practice. Network bandwidth continues to constrain the content available for such systems to function; however, distributed processing architectures allow for the fusion of quantized native sensor signals at outlying nodes, mitigating to some extent the size of the data stream that must be fused at the central signal processing node.
This effort would be expected to demonstrate the power of multimodal signal processing of raw sensor data as the first tier of a layered sensing architecture, providing an unprecedented depth of information and a detection confidence that exceeds the baseline simple correlation approach. The system should demonstrate the management of uncertainty and error (e.g., position, navigation, and timing errors and spurious alarm response signals) at various levels in the architecture, and incorporate the fault analysis and multimodal fusion product, along with a representation of the overall confidence level of the final product, in a fashion that is intuitive to non-technical operational decision-makers. Such a demonstration would serve as a validation case for arguments on the implementation of disparate sensing architectures for threat detection and awareness. To date, there remains a significant level of skepticism surrounding the argument that data fusion architectures provide operational benefit.

PHASE I: Execute a comprehensive system study and define best-practices models and theory for the correlation and fusion of weighted signal outputs from multiple sensor modalities (including but not limited to: electro-optical/infrared imagery, acoustic, seismic, magnetic, passive infrared motion sensors, chemical and biological agent detectors, radiological detectors, explosives sensors, ground surveillance radar, airborne imagers and LIDAR systems, unmanned aerial vehicle imagery, and aerostat imagery) in the presence of bias and other errors, including position and time errors and faulty sensor operation. Assess the value of follow-on tasking of reconnaissance and surveillance assets. Accommodate real-time threat environment intelligence and meteorological data and apply decision logic that accounts for realistic operational conditions.
Provide a system level capability that manages false alarms while maintaining sufficient network-wide sensitivity to the threat condition.

PHASE II: Fabricate, integrate, test, and optimize the performance of a signal processing system that was defined as an outcome of the Phase I Feasibility Study.

PHASE III: A Phase III follow-on effort would effect a system demonstration that could be integrated into a wargame environment or table-top exercise to enable capability and doctrine developers to assess the value of an autonomous layered sensing architecture on battlefield situational understanding. The demonstration would have immediate value for the definition of future warfighter capability requirements while simultaneously maturing the technology readiness level of the multimodal data fusion environment.

PHASE III DUAL USE APPLICATIONS: An integrated multimodal data fusion environment would realize significant market potential in industrial process control, chemical transfer line integrity in engineering plants, and medical diagnostics. Environmental, law enforcement, security, and incident/disaster response applications would also lend themselves to the exploitation of multimodal data fusion integration systems and techniques.
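As a purely illustrative sketch (not a required or endorsed design), the fusion of weighted outputs from multiple sensor modalities described in this topic can be framed as a log-likelihood-ratio combination, where each modality contributes evidence in proportion to how much more probable its observation is under "threat present" than under "threat absent." All sensor values below are hypothetical, and conditional independence across modalities is assumed:

```python
import math

def fuse_log_likelihood(sensor_readings):
    """Fuse independent sensor outputs into a single decision statistic.

    sensor_readings: list of (p_detect, p_false_alarm) pairs, where each
    pair gives the probability of the observed signal under "threat
    present" and "threat absent". Assumes conditional independence of
    the modalities, so log-likelihood ratios simply add.
    """
    return sum(math.log(p_d / p_fa) for p_d, p_fa in sensor_readings)

# Three hypothetical modalities (e.g., chemical detector, acoustic,
# imagery) observing the same event: each is individually weak evidence,
# but the fused statistic clears the decision threshold.
readings = [(0.6, 0.3), (0.7, 0.4), (0.55, 0.25)]
score = fuse_log_likelihood(readings)
threshold = 0.0  # declare a detection when evidence favors "threat present"
print(score > threshold)  # True: fused evidence supports an alarm
```

A real system would replace the scalar probabilities with calibrated likelihood models per modality and would propagate the fused score, together with its confidence, up to the operator display; the additive structure is what lets outlying nodes fuse their local signals before forwarding a compact statistic to the central node.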
Advanced Manufacturing Technologies
DLA seeks drastically lower unit costs of discrete-parts support through manufacturing revolutions that also have applicability to low- and high-volume production for commercial sales. This will improve the affordability of these innovations for DLA and its customers and support the development of cost-effective methods to sustain existing defense systems while potentially impacting the next generation of defense systems. Proposals must include, and will be judged in part on, an economic analysis of the expected market impact of the proposed technology. This topic seeks a revolution in the reduction of unit cost metrics; incremental advancements will receive very little consideration. DLA seeks herein only projects that are too risky for ordinary capital investment by the private sector.

PHASE I: Determine, insofar as possible, the scientific, technical, and commercial feasibility of the idea. Include a plan to demonstrate the innovative discrete-parts manufacturing process and address implementation approaches for near term insertion into the manufacture of Department of Defense (DoD) systems, subsystems, components, or parts.

PHASE II: Develop applicable and feasible prototype demonstrations for the approach described, and demonstrate a degree of commercial viability. Validate the feasibility of the innovative discrete-parts manufacturing process by demonstrating its use in the production, testing, and integration of items for DLA. Validation would include, but not be limited to, system simulations, operation in test-beds, or operation in a demonstration system. A partnership with a current or potential supplier to DLA is highly desirable. Identify any commercial benefit or application opportunities of the innovation. Innovative processes should be developed with the intent to readily transition to production in support of DLA and its supply chains.

PHASE III: Technology transition via successful demonstration of a new process technology.
This demonstration should show near-term application to one or more Department of Defense systems, subsystems, or components. This demonstration should also verify the potential for enhancement of quality, reliability, and performance and/or reduction of unit cost or total ownership cost of the proposed subject.

Private Sector Commercial Potential: Discrete-parts manufacturing improvements have direct applicability to all defense system technologies. Discrete-parts manufacturing technologies, processes, and systems have wide applicability to the defense industry, including air, ground, sea, and weapons technologies. Competitive discrete-parts manufacturing improvements should have leverage into private sector industries as well as civilian sector relevance. Many of the technologies under this topic would be directly applicable to other DoD agencies, NASA, and any commercial manufacturing venue. Advanced technologies for discrete-parts manufacturing would directly improve production in the commercial sector, resulting in reduced cost and improved productivity.
Medical 3D Printing
DLA seeks to integrate 3D printing into the Medical supply chain. Medical 3D printing is a disruptive, game-changing technology that will significantly alter medical supply chains in the future. Integrating medical 3D printing will transform the customer experience because supplies will be customizable and available on demand. With medical 3D printing, the DLA Medical Supply Chain can offer new products and services, such as human tissue, that will meet customer needs while at the same time reducing inventory and stock levels for items like medication. 3D printing will also be cost-effective because it is often easier and cheaper to store raw material rather than finished products. There are three key ways the DLA Medical Supply Chain can use 3D printing to provide life-saving medical supplies:
1) Print medical equipment
2) Print human tissue
3) Print medicine
With medical 3D printing, DLA will shift from forecasting, storing, and supplying finished items to supplying raw materials and equipment. DLA will likely need to adapt its existing supply chain to handle biomaterial such as cells. Additionally, DLA will need to identify and assess appropriate suppliers.

PHASE I: Determine, insofar as possible, the scientific, technical, and commercial feasibility of the idea. Include, where appropriate, a process technology roadmap for implementing promising approaches for near term insertion in support of Department of Defense (DoD) systems, subsystems, or component production.

PHASE II: Develop applicable and feasible prototype demonstrations for the approach described, and demonstrate a degree of commercial viability. Validate the feasibility of medical 3D printing processes by demonstrating their use in the production, testing, and integration of items for DLA and its customers. Validation would include, but not be limited to, system simulations, operation in test-beds, or operation in a demonstration system. A partnership with a current or potential supplier to DLA is highly desirable.
Identify any commercial benefit or application opportunities of the innovation. Firms should develop innovative processes with the intent to readily transition to production in support of DLA and its supply chains.

PHASE III: Technology transition via successful demonstration of a new process technology. This demonstration should show near-term application to one or more medical areas. This demonstration should also verify the potential for enhancement of quality, reliability, and performance and/or reduction of unit cost or total ownership cost of the proposed subject.

Private Sector Commercial Potential: Medical 3D printing processes and systems have wide applicability to the defense industry, including air, ground, sea, and weapons technologies. There is significant interest within private sector industries as well as civilian sector relevance. Many of the technologies and applications under this topic would be directly applicable to other DoD agencies, NASA, and any medical venue. Medical 3D printing will directly increase the availability, reduce the cost, and improve the productivity of certain medical supplies.
Ceramic Additive Manufacturing for Metal Casting
DLA seeks drastically lower unit costs and improved availability of cast-parts support through manufacturing revolutions that also have applicability to low- or high-volume production for commercial sales. This will improve the affordability of these innovations for DLA and its customers and support the development of cost-effective methods to sustain existing defense systems while potentially impacting the next generation of defense systems. Proposals must include, and will be judged in part on, an economic analysis of the expected market impact of the proposed technology. This topic seeks a revolution in the reduction of unit cost metrics; incremental advancements will receive very little consideration. DLA seeks herein only projects that are too risky for ordinary capital investment by the private sector. Metal casting processes that use ceramic cores or molds, such as investment casting, rely on conventional tooling that is typically expensive and has long lead times. Ceramic additive manufacturing has become a more efficient way to manufacture cores and molds for some metal castings, including investment cast parts. DLA desires to make ceramic additive manufacturing a viable drop-in replacement in the manufacture and production of cores and molds used for metal cast parts.

PHASE I: Determine, insofar as possible, the scientific, technical, and commercial feasibility of the idea. Include, where appropriate, a process technology roadmap for implementing promising approaches for near term insertion in support of Department of Defense (DoD) weapon systems, subsystems, or component production.

PHASE II: Develop applicable and feasible prototype demonstrations for the approach described, and demonstrate a degree of commercial viability. Validate the feasibility of the innovative ceramic additive manufacturing process by demonstrating its use in the production, testing, and integration of items for DLA.
Validation would include, but not be limited to, system simulations, operation in test-beds, or operation in a demonstration system. A partnership with a current or potential supplier to DLA is highly desirable. Identify any commercial benefit or application opportunities of the innovation. Firms should develop innovative processes with the intent to readily transition to production in support of DLA and its supply chains.

PHASE III: Technology transition via successful demonstration of a new process technology. This demonstration should show near-term application to one or more Department of Defense weapon systems, subsystems, or components. This demonstration should also verify the potential for enhancement of quality, reliability, and performance and/or reduction of unit cost or total ownership cost of the proposed subject.

Private Sector Commercial Potential: Ceramic additive manufacturing innovations for metal castings have direct applicability to many defense weapon system technologies. New ceramic additive manufacturing technologies, processes, and systems for metal cast items have wide applicability to the defense industry, including air, ground, sea, and weapons technologies. There is significant interest within private sector industries as well as civilian sector relevance. Many of the technologies under this topic would be directly applicable to other DoD agencies, NASA, and many commercial manufacturing venues. Ceramic additive manufacturing innovations would directly improve production in the commercial sector, resulting in reduced cost and improved productivity.
Rapid Non-destructive Detection of Advanced Counterfeit Electronic Material
Counterfeit and subversively modified electronic components represent a substantial threat to Department of Defense (DoD) systems. Testimony to the Senate Armed Services Committee (SASC) concluded that “the scope and impact of counterfeits is not known … counterfeit electronic parts can compromise performance and reliability, risk national security, and endanger the safety of military personnel … [and] weaknesses in the testing for electronic parts create vulnerabilities that are exploited by counterfeiters.” The need for advanced tools with widespread applicability to electronics employed by the DoD is evident. The latest, most sophisticated forms of counterfeiting include cloned integrated circuits (ICs), overproduced ICs, and tampered ICs created by state-sponsored counterfeiters with the goal of embedding a back door and/or the capability to remotely disrupt, disable, or destroy the system in which the IC is installed. Cloned ICs are created when a component is reverse engineered and then produced without the permission of the Original Component Manufacturer (OCM). Currently, extensive electrical testing is required to detect advanced counterfeit components such as cloned ICs. Such testing requires expensive equipment, is time consuming, requires extensive data and cooperation from the OCM, and produces results that vary widely depending on interpretation by subject matter experts (SMEs). Although automated inspection equipment may help identify tampered or cloned components, extensive work has been done to camouflage IC layouts and standard cell functionality, which is capable of defeating automated inspection. A tool is required that can rapidly detect and compare the characteristic electromagnetic (EM) emissions and reflections of a device in real time. Such a tool would be able to detect cloned and tampered ICs unless they were made with the exact same material, via the same processes, by the same production equipment.
By capturing and comparing the unique signature or “fingerprint” of each component, the most sophisticated counterfeit components would be detected before installation. Electronic components exhibit characteristic electromagnetic responses when powered. These responses are currently being used for quality control, electronic health monitoring, and counterfeit detection. DMEA is seeking a novel tool that electromagnetically scans an electronic device without the need for a test fixture and when the device is in both a powered-on and a powered-off state. The tool should be able to simultaneously illuminate the device with EM radiation and collect its characteristic responses to illumination to non-destructively determine part authenticity and identify any subversive modifications made to the part. This re-radiation of the incident EM energy is analogous to X-ray fluorescence, in that the resulting radiation is unique to the inspected component. Only devices made of the exact same material, via the same processes, will exhibit identical signatures. Assessment of electronic components must be made in real time (no post-processing required) and must be able to be performed by an operator with minimal training. The tool needs to be applicable to a wide range of device types and sizes and be able to detect multiple types of typical and sophisticated counterfeit modalities. The developed tool must be capable of achieving a probability of detection (Pd) of >95% with a false alarm rate (FAR) of <5% to avoid adverse effects on trusted materiel in the DoD supply chain.

PHASE I: The goal of Phase I is to establish a design for a proof-of-concept tool capable of remotely scanning an electronic component with free-field EM energy to assess its authenticity and proper functionality. Proof of concept should be established through laboratory experimentation on representative material. The technique must be environmentally safe and must not exceed MIL-STD-461 emissions requirements.
Utilization of actual counterfeit, defective, and/or subversively modified components in establishing feasibility is desirable. The hardware and software architecture needed for an integrated tool will be designed. At the conclusion of this phase, a feasibility study report will be produced.

PHASE II: Phase II will build and test a prototype version of the assessment tool whose architecture was designed under Phase I. The prototype demonstration will include tests performed on relevant electronic devices to determine performance statistics (Pd, FAR). In conjunction with the demonstration, a detailed plan for achieving Manufacturing Readiness Level >6 and for transitioning the tool for insertion within the DoD supply chain will be prepared.

PHASE III: Phase III will transition the developed tool for active use by the DoD, academic, and private sectors. Commercialization of the concept will occur, as SMEs from each sector have identified a critical need for such capabilities. Integration and testing will be coordinated with a DoD organization that handles electronic parts within the supply chain. Additional government and commercial insertion points for the screening tool will be identified.
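One way to picture the "capture and compare the fingerprint" step described in this topic is as a similarity test between a device's measured EM response trace and a golden reference captured from known-good parts. The sketch below is only an illustration under that assumption (the data, the 0.95 threshold, and the use of Pearson correlation as the similarity measure are all hypothetical, not part of the solicitation), but it shows how a signature comparison could yield an authentic/suspect decision:

```python
import math

def correlation(a, b):
    """Pearson correlation between two equal-length response traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def is_authentic(measured, golden, threshold=0.95):
    """Declare a part authentic when its EM 'fingerprint' closely matches
    the golden reference; a clone made with different materials or
    processes is expected to deviate."""
    return correlation(measured, golden) >= threshold

golden = [0.1, 0.9, 0.4, 0.8, 0.2, 0.7]           # reference emission trace
authentic = [0.12, 0.88, 0.41, 0.79, 0.22, 0.69]  # same process, small noise
clone = [0.5, 0.4, 0.6, 0.3, 0.55, 0.35]          # different materials/process
print(is_authentic(authentic, golden))  # True
print(is_authentic(clone, golden))      # False
```

In practice the Pd/FAR targets stated above would drive the choice of similarity measure and threshold: the threshold is tuned over a population of known-good and known-counterfeit parts so that at least 95% of counterfeits fall below it while fewer than 5% of genuine parts do.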
Analysis of Integrated Circuits Using Limited X-rays
The DARPA TRUST program established the potential usefulness of an X-ray microscope at a synchrotron for analyzing ICs. We would like to further that development using a lab-based X-ray source. When utilizing a lab X-ray source, acquisition times for X-ray images increase significantly compared to using a synchrotron X-ray source. This is due to the decreased number of X-ray photons (i.e., X-ray flux) produced by a lab source. This reduced X-ray flux, combined with a relatively large imaging area, makes image acquisition prohibitively slow at the resolution required for the analysis of modern ICs. It therefore becomes necessary to develop an approach to X-ray imaging that can minimize the imaging time while still being able to reconstruct the internals of a multilayer laminar IC. This problem is very similar to the one faced in medical imaging, where the goal is to reduce the total radiation dosage to a patient. However, unlike in medical imaging, the internal structure of ICs is very regular and defined by design rules. These design rules limit the geometries and materials used in an IC. This a priori information can be combined with recent advances in low-dose X-ray imaging to create software with innovative algorithms optimized for IC inspection. The X-ray microscope, for which the imaging approach will be applicable, will conform to the following:
1) The sample will be mounted on a stage with at least four degrees of freedom (i.e., x, y, z, and rotation around an axis).
2) Stage positional uncertainty (i.e., error in reported position) will be smaller than the minimum feature size.
3) A scintillator and camera will be used to acquire an X-ray image of one specific region of the sample at a time. This region will be much smaller than the area of interest; therefore, many images will need to be taken and stitched together to image the entire area of interest.
4) Image data can be sent to a computer for real-time analysis, and there will be an interface for a computer to control the stage and image acquisition equipment. This communication can be used to dynamically optimize the scanning and reconstruction algorithms.
The software is required to do the following:
1) Direct the X-ray microscope to acquire X-ray images.
a. The interface to the microscope will allow the following to be controlled:
i. x, y, and z position
ii. Angle
iii. X-ray exposure time
b. The software will need to decide the best way to position the sample and at which angles to acquire images to minimize the total time it takes to image the IC.
2) Analyze the resulting X-ray images in such a way that the conductors in the entire IC are modeled.
a. A 3D model is anticipated as the output format.
b. The data must be segmented based on X-ray absorption contrast.

PHASE I: Conduct research on algorithms for X-ray image analysis and evaluate their application to imaging ICs as described in the paragraphs above. Conceive of a system in which an IC is placed in an X-ray microscope and software is run that utilizes the microscope to create a 3D map of the conductors in the IC while minimizing the X-ray imaging time. Propose a design for that software and determine its technical capabilities. The end product of Phase I is a feasibility study report, in which the following must be specified:
1. The hardware required to execute the software program (e.g., desktop computer, HPC, GPUs, Intel Phi, etc.). We expect the software to take advantage of parallel processing and to scale with the processing resources available.
2. A list of assumptions or requirements of the X-ray microscope beyond what is listed in this document.
3. A list of assumptions or requirements of the IC being imaged.
4. A clear description of the X-ray acquisition algorithm and why it is advantageous.
5. A clear description of the reconstruction algorithm and why it is advantageous.
6. A clear description of the analysis algorithms and why they are advantageous.
7. An estimate of imaging time for a 5mm x 5mm, 9-layer, 90nm technology node IC, given an X-ray microscope that takes “t0” time to acquire a single 20μm x 20μm area image when oriented perpendicular to the X-ray source. Include details of how this estimate is calculated. We expect the imaging time to be less than the time it would take to scan the whole chip at 30 angles.
a. Note that “t0” is expected to be at least 10 seconds for a lab-based X-ray microscope.
8. An estimate of any additional processing time required after imaging is complete. This estimate should scale with available processing capability.
9. An estimate of false positives, false negatives, and the trade space with respect to imaging speed.
10. A clear description of the proposed data output format, how it models the 3D structure of the conductors of the IC, and the coordinate system used.
11. A clear description of any inputs or operator interaction the system will require.

PHASE II: Develop a prototype of the Phase I concept and demonstrate its operation. Validate the performance using an X-ray microscope over multiple dissimilar, modern ICs (e.g., an FPGA or microprocessor at the 90nm technology node or better) and develop a test plan to fully characterize the prototype. The software being developed must have a royalty-free license for the Government, including its support service contractors, to use, modify, reproduce, release, perform, display, or disclose technical data or computer software generated for any United States government purpose. The software under development will operate on an X-ray microscope specified and made available by DMEA.

PHASE III: There may be opportunities for further development of this software for use in a specific military or commercial application.
During a Phase III program, the contractor may refine the performance of the design and produce pre-production quantities for evaluation by the Government.
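The scale of the baseline in item 7 above can be checked with simple arithmetic. The sketch below computes the naive full-tiling time for a 5 mm x 5 mm chip imaged in 20 μm x 20 μm tiles at 30 angles with t0 = 10 s; it is illustrative only and ignores stage-motion and readout overhead, which a real estimate would include.

```python
# Back-of-envelope baseline for item 7: full tiling of a 5 mm x 5 mm IC
# with 20 um x 20 um images at 30 angles, t0 = 10 s per image.
# (Illustrative arithmetic only; ignores stage-motion and readout overhead.)

chip_side_um = 5_000          # 5 mm chip edge
tile_side_um = 20             # 20 um image field
t0_s = 10                     # time per image (lab-source lower bound)
angles = 30                   # naive scan-at-30-angles baseline

tiles_per_side = chip_side_um // tile_side_um        # 250 tiles per edge
tiles = tiles_per_side ** 2                          # 62,500 tiles per angle
total_s = tiles * angles * t0_s                      # 18,750,000 s
total_days = total_s / 86_400                        # ~217 days
print(f"{tiles} tiles/angle, {total_s:,} s total (~{total_days:.0f} days)")
```

Any proposed acquisition algorithm must come in well under this roughly 217-day naive baseline, which is why adaptive, design-rule-aware scanning is central to the topic.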
Radiation Hardened Optoelectronics for Optical Interconnects
With the dominance of parallel processing, the rise of integrated “system on chip” (SoC) architectures, and the continuing need to handle more data more quickly, traditional electronic interconnects are reaching their practical limits. Optical data transfer has already replaced electronic data transfer in long-distance applications (km) and shorter-distance high-bandwidth applications (m-cm) due to the combination of high bandwidth and low loss. Optical interconnects can also be very robust in extreme temperature and radiation environments, making them well suited for satellite and unmanned vehicle applications. Optical data transfer over shorter (cm-mm) distances faces several significant technical and integration challenges. Some of these challenges are directly related to scaling: diffraction limitations, coupling efficiencies and cross talk, and fabricating efficient scaled emitters and detectors. Other challenges are related to materials and wavelengths. Silicon, the semiconductor of choice for the electronics industry, has an indirect optical band gap, making it an inefficient emitter or lasing material. Silicon, while a common detector material at visible wavelengths, cannot be used as a detector at the common telecommunication wavelengths (1550 nm and 1310 nm). All-silicon/silicon oxide optical interconnects would be ideal for integration with conventional CMOS fabrication; however, it is likely that techniques utilizing alternate semiconductors will be required, at least in the near term. Current solutions include epitaxial germanium and wafer-bonded III-V semiconductors for detectors and emitters. Hardening the optoelectronic components of optical interconnects against radiation effects (total ionizing dose, displacement damage, single events, color center formation, and optical loss) is necessary before they are incorporated in satellites or unmanned vehicles (unmanned aerial vehicles or robots) that are expected to operate in high-radiation environments.
Electronics and optoelectronics in these systems are typically expected to retain functionality during gamma, neutron, and high-energy ion exposures, with lifetime total ionizing doses between 100 kRad and 1 MRad (silicon). Optical interconnects may also offer significant advantages in hardening systems against electromagnetic pulses and electromagnetic weapons by eliminating the antenna effects caused by cabling or long electrical interconnects. PHASE I: Demonstration and preliminary radiation effects testing (to a WMD-relevant dose) of at least one scaled or near-scaled active optical interconnect component (e.g., emitter, modulator, detector). Development of a plan for a scaled (or near-scaled) complete optical interconnect prototype. PHASE II: Development, fabrication, and preliminary radiation effects testing (to a WMD-relevant dose) of a scaled or near-scaled optical interconnect prototype with at least two active components and a coupled waveguide. Development of an approach for mitigating any observed radiation effects. PHASE III: Dual-use applications: Suitably scaled and energy-efficient optical interconnects could be utilized in commercial servers, data centers, and high-performance computers. Radiation-resistant, high-reliability optical interconnects also have potential applications in commercial aviation, automobiles, and medical devices.
Materials Development for Enhanced X-ray Detection of Dynamic Material Events Under Fast Loading Rates
The Defense Threat Reduction Agency’s Basic Research Program, Thrust Area 4 – Science to Defeat WMD (weapons of mass destruction), has been supporting research on hard and deeply buried targets, including penetration of concretes and geological materials. With new experimental facilities that now couple high-intensity, high-flux x-ray capabilities with impact drivers (e.g., lasers, gas guns, etc.), an exciting opportunity exists for directly probing material deformation mechanisms under extreme loading conditions. Limitations in detection capabilities still present a significant hurdle to providing a complete mapping of the temporal evolution of complex materials under dynamic loading. New materials or methods that enhance x-ray diffraction and/or x-ray imaging measurements are desired, including both direct and indirect detection methods. Indirect detection uses phosphors or scintillators to convert x-rays to visible photons, whereas direct detection schemes directly convert an x-ray photon to an electrical signal for readout. Certain detection schemes are currently more attractive for different experimental conditions and regimes of interest. PHASE I: Any new or improved materials must provide good sensitivity for photon energies greater than 20 keV. Proposed materials for enhanced indirect detection must possess fast decay times below 25 ns, spatial resolution < 5 μm, and photon collection efficiency > 90%. In addition to being sensitive beyond 20 keV, materials for improved direct detection must be capable of achieving temporal resolution better than 150 ns (much faster temporal resolution would be desirable). Submitters are expected to produce materials that satisfy conditions for direct or indirect detection, not both. Experimental results must demonstrate that the materials of interest can reach or exceed these specifications.
PHASE II: Based upon the Phase I performance results of these materials, DTRA will decide, weighing costs and risks, whether Phase II work is to be initiated. Phase II work will focus on incorporating the improved material or technology into a detector capable of high frame rates (> 10 MHz) and many-frame data collection/storage (20-500 frames). Any detector will be submitted for further testing under high-rate experimental conditions. Development of a commercialization strategy should also be achieved in Phase II. PHASE III: In addition to providing enhanced detection capabilities for directly probing complex materials under high-rate deformation, new developments in time-domain x-ray detection technology could be beneficial for investigating complex biological processes.
High Performance Computing (HPC) Application Performance Prediction & Profiling Tools
DTRA uses high-fidelity computer codes to investigate weapon effects phenomenology and techniques for countering WMD. End-to-end high-fidelity simulations in support of the DTRA Agent Defeat Warfighter Capability will require calculations including multiple phenomena that occur on vastly different time scales (μs to hours). The resulting code run times will be prohibitively long without optimization for next-generation computer architectures. PHASE I: Develop an approach for the design or modification of existing code profiling tools that are capable of handling high-fidelity codes as described above. Identify key concepts and methods that, when implemented, will provide non-intrusive tools that are effectively operable on complex high-fidelity application codes. State-of-the-art, innovative application code profiling tools as envisioned here will need to enable performance and energy-use prediction on a cross-platform basis, i.e., run on one architecture to predict performance on a future or different architecture. The tools must operate on optimized executables, not source code, and produce readily understandable results. PHASE II: Develop a production-ready suite of profiling tools based on the Phase I approach. Demonstrate the use of the tools on DTRA in-house and DoD HPCMP systems on a broad range of high-fidelity application codes, to include both rectangular-grid and unstructured, three-dimensional adaptive-mesh, coupled Computational Fluid Dynamics (CFD) / Computational Structural Mechanics (CSM) codes; explicit finite element codes used for short, strong shocks; and chemistry codes used in conjunction with CFD codes. PHASE III: The code profiling tools developed for use on very demanding application codes will be well suited, once refined, for use on more general HPC workloads.
Improvements in this phase are expected to involve ease-of-use enhancements and hardening of the profiling tools for use on a wide range of application software used in Government research and industry.
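One simple way to illustrate the cross-platform prediction idea described above is a roofline-style bound: characterize a code once by its floating-point work and memory traffic, then bound its runtime on any target architecture from that machine's peak FLOP rate and memory bandwidth. This is a hypothetical sketch, not the solicited tool; the kernel profile and machine numbers below are invented for illustration, and real profiling tools would measure these quantities from optimized executables.

```python
# Minimal roofline-style cross-platform runtime estimate (illustrative only).

def roofline_time(flops, bytes_moved, peak_flops, peak_bw):
    """Lower-bound runtime: the kernel is limited by whichever is slower,
    compute (flops / peak_flops) or memory traffic (bytes / bandwidth)."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# Hypothetical kernel profile captured on one machine:
kernel = {"flops": 2e12, "bytes": 8e11}   # 2 TFLOP of work, 0.8 TB of traffic

# Hypothetical target architectures: (peak FLOP/s, peak bytes/s)
machines = {
    "cpu_node": (3e12, 2e11),
    "gpu_node": (2e13, 1.5e12),
}
for name, (pf, bw) in machines.items():
    t = roofline_time(kernel["flops"], kernel["bytes"], pf, bw)
    print(f"{name}: >= {t:.2f} s")
```

The same profile predicts the cpu_node is bandwidth-bound while the gpu_node is also bandwidth-bound but far faster, which is the essence of predicting performance on an architecture the code has never run on.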
Instrumentation for Characterization of Fireballs, Hot Gases, & Aerosols from Defeat of Targets Containing Biological and Chemical Agents
Testing of methods to defeat chemical and biological agents often requires scaled experiments involving rapid combustion of bio- and chemical-agent simulants. This effort will focus on the development of next-generation instrumentation for effective characterization of the physical and chemical processes occurring during rapid combustion in the expanding fireball, to provide quantitative and qualitative data on chemical reactions and physical changes, such as particle distribution and fluid flow, that result in formation of the final plume. The instrumentation developed must be stand-off or ruggedized to survive the high-temperature, high-pressure, and corrosive/reactive atmosphere created by weapon engagement with a chemical/biological target, and must demonstrate repeatable performance under such field conditions. Smaller, agile instruments that can be easily transported from one site to another, requiring minimal utilities (power, water, etc.) and infrastructure at the field site, are optimal. In addition, instruments should collect data in a highly dynamic environment where chemical reactions of chemical agents and physical changes such as fluid flow and particle-size redistribution may be occurring at microsecond to millisecond time scales. Specifically, we are interested in measuring the following:
• Temperature as a function of space and time,
• Chemical species and concentration as a function of space and time, and
• Droplet/particle velocity and size distribution as a function of space and time.
Spatially resolved measurements, where a single instrument could collect data from multiple positions within or just outside the fireball, are desired. The measurements from these instruments and data obtained from testing will provide the basis for modeling and simulation of the first few seconds of the expanding fireball.
Temperature measurements of laboratory-scale clean (no agent/simulant/byproducts or other debris) fireballs have progressed using pyrometry and emission spectroscopy. Species have been measured with mass spectroscopy and other optical techniques, and particle-size distribution by particle image velocimetry. However, these techniques have not been scaled up to field scale for at least two reasons. First, the fireball is dirty, containing not only detonation products but also agent/simulant, byproducts, and other debris; second, the environment is extremely harsh, making instrument survival difficult. PHASE I: Provide proof of concept for next-generation instrumentation for quantitative and qualitative characterization of the physical and chemical processes in fireballs. Demonstrate feasibility that new technologies will survive and collect useful data in the harsh environment of explosive fireballs. A proof-of-concept demonstration shall consist of measuring at least one of the following: temperature as a function of space and time; chemical species and concentration as a function of space and time; or droplet/particle velocity and size distribution as a function of space and time in an explosively generated fireball consisting of explosive detonation products and triethyl phosphate (TEP), a commonly used simulant. It is anticipated that this would be a laboratory-scale test. Technologies of interest include but are not limited to PHASE II: Expand the technology to other relevant agents/simulants. Scale up, ruggedize, and deliver prototypes of instrumentation to demonstrate in relevant testing. Plan and conduct small-scale testing within project scope, and participate in government-provided mid- to large-scale explosives testing. Demonstrate cost-effective data analysis and fast turn-around for repeated testing.
PHASE III: Support testing at DTRA Test Division (J9CXT); DTRA Weapons Division (J9CXW) the Army Corps of Engineers, Engineer Research and Development Center (ERDC); Air Force Research Lab (AFRL); Naval Surface Warfare Centers (NSWC); Naval Air Warfare Centers (NAWC); Energetic Materials Research and Testing Center (EMRTC) affiliated with New Mexico Tech; and other laboratories around the country. This technology might also have utility in rocket motor performance instrumentation, or other extremely high rate combustion processes.
Joint Learning of Text-based Categories
J9CXQ has the challenge of identifying and extracting evidential information from complex and ambiguous text. An automated extraction system is being developed that will detect and characterize categories of entities, relations, events, and topics. The extracted information will be stored in a knowledge base that will enable automatically finding patterns and searching for critical information. These detection and extraction algorithms depend on well-formed definitions of the elements (entities, relations, events). They are further disambiguated using context, such as the topics found in the documents. These definitions are typically expressed as probability distributions. Since these elements are not known beforehand, the algorithms must not only characterize them, but also discover them in the first place. The task of creating these characterizations is too large to do manually, and even when the elements are known beforehand, the task of annotating is prohibitive; hence it is necessary to automate the process, both to discover the categories and then to represent them probabilistically. Traditionally this is done using an unsupervised learning algorithm such as Latent Dirichlet Allocation (LDA). Currently, despite the fact that the elements are highly interrelated (i.e., topics, entities, relations, and events), each of these is learned independently. What is needed is to learn all of these elements in an interrelated manner, because a better characterization of one will improve characterizations of the others. For example, topics are identified in an unsupervised way, while entity classes are found by clustering in vector spaces in an unsupervised way, or by named entity recognition (e.g., using conditional random fields) in a supervised way. Not only will a joint learning approach improve accuracy, but it will also enable tighter, more specific classes, which should make the overall analysis of text much more powerful.
J9CXQ is seeking research in the area of knowledge representation and reasoning systems that can support the following combination of requirements: (1) The method should infer classes of entities, relationships, and contextual topics in a joint manner to account for interdependencies. (2) The method should not require extensive annotation, and should be unsupervised or require a minimal amount of human input. (3) None of the classes are predefined; they are discovered through learning. (4) The method readily adapts or transfers to new domains. (5) An analyst has the ability to set the topic or the distribution of entities or relations and see the effect on the remaining variables. (6) Ideally, the method will include context information and should not be based solely on a bag-of-words language model. (7) The output is flexible and facilitates data analysis and visualization. (8) Given the novelty of the method, it should be well documented, both within the source code and in auxiliary supporting documentation. PHASE I: From basic research, develop and demonstrate a proof of concept. Research and develop methodologies that infer classes of entities, relationships, and contextual topics in a joint manner to account for interdependencies. At the conclusion of Phase I, produce a conceptual architecture design identifying the hardware and software necessary to create a system, and identify technology gaps that must be resolved prior to building a system. Develop a proof-of-concept demonstration to support the architecture design. PHASE II: Build a prototype system that will support testing and evaluation. Develop, demonstrate, and validate a prototype system based on the preliminary design from Phase I. All appropriate engineering testing will be performed, and a critical design review will be performed to finalize the design. The Phase II deliverables will include a working prototype of the software, a specification for its development, and a demonstration of the eight specified requirements.
PHASE III: Integrate into the J9CXQ CWMD Analyst Reasoning Environment to provide a new inference capability over extracted events, entities, and relationships. Optimize the prototype system and demonstrate it at full scale. This technology will have broad application in military, government, and commercial settings. Within the military and government, there is an increasing emphasis on technologies that aid decision-makers in managing big data. Developing tools that can rapidly integrate information and provide a process for analyzing data to complement a user's decision-making process will be a powerful addition to strategic, operational, and tactical decision making.
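The unsupervised baseline the topic contrasts with joint learning, LDA, can be sketched with a minimal collapsed Gibbs sampler. This is a toy illustration only: the corpus, vocabulary, and hyperparameters below are made up, and a real system would couple the topic variable here with entity-class and relation variables so that sampling one improves the others.

```python
# Minimal collapsed Gibbs sampler for LDA (toy sketch; corpus and
# hyperparameters are invented for illustration).
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, alpha=0.1, beta=0.01, iters=200, seed=0):
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})          # vocabulary size
    nw = defaultdict(int)   # topic-word counts
    nd = defaultdict(int)   # doc-topic counts
    nt = [0] * n_topics     # tokens per topic
    z = []                  # topic assignment per token, per document
    for di, doc in enumerate(docs):                # random initialization
        zs = []
        for w in doc:
            t = rng.randrange(n_topics)
            zs.append(t); nw[(t, w)] += 1; nd[(di, t)] += 1; nt[t] += 1
        z.append(zs)
    for _ in range(iters):                         # collapsed Gibbs sweeps
        for di, doc in enumerate(docs):
            for wi, w in enumerate(doc):
                t = z[di][wi]                      # remove current assignment
                nw[(t, w)] -= 1; nd[(di, t)] -= 1; nt[t] -= 1
                # P(topic k | rest) ∝ (nd + alpha) * (nw + beta) / (nt + V*beta)
                weights = [(nd[(di, k)] + alpha) * (nw[(k, w)] + beta)
                           / (nt[k] + V * beta) for k in range(n_topics)]
                t = rng.choices(range(n_topics), weights=weights)[0]
                z[di][wi] = t                      # resample and restore counts
                nw[(t, w)] += 1; nd[(di, t)] += 1; nt[t] += 1
    return z

# Toy corpus with two clearly separable themes:
docs = [["missile", "defense", "radar"], ["radar", "missile", "sensor"],
        ["gene", "protein", "cell"], ["cell", "gene", "biology"]]
assignments = lda_gibbs(docs, n_topics=2)
print(assignments)
```

A joint model as solicited would extend the sampling step so that the topic, entity-class, and relation variables share the conditional distribution, rather than each being learned by a separate model like this one.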
Island-mode Enhancement Strategies and Methodologies for Defense Critical Infrastructure
The defense critical infrastructure (DCI) is the composite of DoD and non-DoD assets essential to project, support, and sustain military forces and operations worldwide. The DCI includes, but is not limited to, elements such as military bases, ballistic missile defense installations, and radar sites. An electromagnetic (EM) attack (nuclear electromagnetic pulse [EMP] or non-nuclear EMP [e.g., high-power microwave, HPM]) has the potential to degrade or shut down portions of the electric power grid important to the DoD. While a power grid may employ intentional islanding techniques to protect sections of the grid and prevent a cascading collapse, the broad reach of potential EM attacks, with the possibility of simultaneous disruption over large areas, might render traditional islanding protection methods insufficient for continued operation of the DCI. Restoring the commercial grid from the still-functioning regions may not be possible or could take weeks or months. Significant elements of the DCI require uninterrupted power for prolonged periods to perform time-critical missions (e.g., sites hardened to MIL-STD-188-125-1). To ensure continued operations, DCI sites must be able to function as a microgrid that can operate in both grid-connected and intentional island-mode (grid-isolated) states. Such a microgrid is defined as a group of interconnected loads and distributed energy resources within clearly defined electrical boundaries that acts as a single controllable entity with respect to the power grid. The purpose of this topic is, through systematic study of a typical DCI site, to develop enhanced methodologies and technologies for providing intentional island-mode capability at DCI sites in the event of grid loss. Methodologies should account for the need for immediate and continuous operations at sites and for seamless transition to and from commercial power (grid-connected and grid-isolated states).
The emphasis of this project should be on determining how best to prepare an existing DoD site for intentional island-mode operation and identifying major risks and hurdles. This work will require refinement of existing technologies and development of new technologies, and is directed specifically toward applying the new knowledge to ensure the survivability of DoD sites against EM attacks affecting large geographical areas. The goal of this project is to develop a set of methodologies and strategies that can be applied, along with existing methods, to enhance the resilience of DCI assets such as military bases. Such methods should aid in the development of islanding at DoD sites to ensure survivability against geographically large EM threats. These methods may also be applied to the commercial sector and other areas of the government: hospitals, civilian infrastructure, businesses, etc. PHASE I: The successful Phase I project should develop innovative strategies and methodologies for DCI island-mode operations in the event of power grid disruption or failure due to an EM threat. Sufficient detail should be developed to show technical competency and/or proof of concept. Phase I should include developing these strategies as well as establishing performance goals. Additionally, a draft roadmap should be developed indicating Phase II and Phase III plans and timelines, and addressing key decision points and milestones. PHASE II: Phase II will focus on intentional island-mode methodologies and strategies at a specific DCI site (TBD). Limited initial testing may occur at a prototype site, via modeling, or prior to full-scale testing at a DCI site. Identify and address key island-mode hurdles, limitations, and obstacles, and provide recommendations on addressing these areas. Methodologies and strategies should be improved and expanded based on testing, assessments, and available data. Clear documentation of strategies/methodologies and improvements is a priority.
Identification of dual-use commercial applications is an important aspect of this phase. PHASE III: The Phase III project would focus on execution of the Phase II test plan and on expanding these methodologies and strategies to include systems/infrastructure outside the DCI. This could include other DoD/government agency sites, hospitals, civilian infrastructure, or other commercial sites. Methodologies developed for the site-specific work in Phase II could be expanded for a different site or generalized to create overarching guidelines.
Multi-mode Handheld Radioisotope Identification Instrument
DTRA is seeking development of handheld radioisotope identification instrumentation with extended capabilities for identifying and categorizing isotopic sources. Passive measurements of gamma-ray signatures can be adversely compromised by shielding around the source. Neutrons are an additional signature that may either substantiate a finding or, more importantly, elucidate an anomaly that may arise from purposeful shielding. Currently offered instruments tend to optionally include neutron detectors of limited sensitivity that can be added to otherwise gamma-ray-centered designs. Of particular interest is significantly increasing thermal neutron sensitivity to a minimum of 15 cps/nv. Furthermore, the instrumentation may also find a role in verification and inspections, where determining the presence of particular Pu isotopes (or their enrichment) through fast neutron spectroscopy would be beneficial. Note that such a goal specifically requires sensitivity to energetic neutrons (to several MeV). It is preferable that the instrument be capable of accomplishing all detection functions without need for optional sensors that may increase the unit size, weight, or power consumption. The preferred form factor is that of a compact instrument that can be personally worn or carried in a holster, allowing for hands-free operation. Examples of this mode of operation and form are the identiFINDER® R400 and the HDS-101GN. The instrument should meet or exceed the requirements of the relevant ANSI N42.34 standard for handheld instruments for the detection and identification of radionuclides, and include the capability to distinguish neutrons by energy (thermal versus fast). Solutions must employ low-power electronics and need to be battery operated, with useful lifetime targets of 8+ hours between recharge or replacement. Solutions that are physically robust and insensitive to adverse environmental conditions are highly desired.
The systems should be capable of identifying radioisotopic sources in mixed radiation (gamma plus neutron) environments. Instruments may be entirely self-contained or may utilize a short-range wireless connection (e.g., Bluetooth) to commonly available tablet, laptop, or smartphone devices for user interaction. Overall practicality of the operating conditions and ergonomics should be a factor in selecting packaging designs. For gamma rays, it is desirable to utilize detectors that can achieve an energy resolution of < 5% FWHM at 662 keV. Additionally, the sensitivity should be greater than 1000 cps/μSv/hr for Cs-137. Commercial radioisotope identification software can be used. Ultimately, it is preferable that the instrument be capable of running GADRAS software for radioisotope identification. Minimally, it should provide file formats compatible with the ANSI N42.42 standard. PHASE I: Identify key operational components and develop the initial design of the handheld radioisotope identification instrument. Extensive modeling studies must be performed to demonstrate detector sensitivity and capability for gamma, fast neutron, and thermal neutron detection and radioisotope identification. Demonstrate pathways to meeting performance goals in Phase II. PHASE II: Develop a prototype instrument that accomplishes the goals of gamma-ray, thermal neutron, and fast neutron measurements. The instrument shall not be dependent on post-acquisition analysis of data. Incorporate GADRAS or other radioisotope identification software. Demonstrate radioisotopic identification consistent with N42.34, identifying areas where the prototype diverges from the standard. Demonstrate the application of neutron detection to identification of radioisotopes, with specific examples of SNM.
PHASE III: DUAL USE APPLICATIONS: Develop a commercial instrument, with suitable partners as needed, for military applications of interest to DTRA as well as domestic applications to support first responders and regulatory inspections, border and port security, power plant maintenance, and environmental clean-up.
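The gamma-ray specifications above translate into concrete numbers a candidate design can be checked against. The sketch below works through that arithmetic; the 0.5 μSv/hr field strength is an assumed example value, not part of the solicitation.

```python
# Sanity-check arithmetic for the gamma-ray specs: < 5% FWHM at 662 keV and
# sensitivity > 1000 cps per uSv/hr for Cs-137.
# The example field strength below is an assumption for illustration only.

fwhm_frac = 0.05                           # 5% energy resolution spec
cs137_kev = 662                            # Cs-137 photopeak energy
fwhm_kev = fwhm_frac * cs137_kev           # 33.1 keV photopeak width

sensitivity = 1000                         # cps per (uSv/hr), spec minimum
field_uSv_hr = 0.5                         # assumed example exposure rate
count_rate = sensitivity * field_uSv_hr    # expected 500 cps in that field
print(f"FWHM at 662 keV: {fwhm_kev:.1f} keV; expected rate: {count_rate:.0f} cps")
```

At the spec minimum, even a modest field therefore yields hundreds of counts per second, which is what makes rapid handheld identification against N42.34 timing requirements plausible.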
Standoff Detection of Highly Enriched Uranium
Within the Federal and State governments there are several select agencies whose mission is to detect the presence of highly enriched uranium without revealing the search activity or the means of detection. The most challenging task is detection from an undisclosed survey vehicle moving at no more than typical urban speeds. The commercial applications for this product would be fallout from the DoD requirement, in various size, weight, and power configurations. The end users would be nuclear power plant operators and first-responder organizations with hazardous material response missions. The market would be for standoff detection of reactor materials that are loose as a result of a reactor mishap. General requirements include:
a. The detector cannot be visible to the surrounding public;
b. The detector must fit within a compact vehicle, be no larger than 43 liters in volume and 45 kg in total weight, and may be powered by the vehicle electrical system;
c. Detector data must include the time, date, and location of the detection, with such data communicated to the search personnel operating the SUV;
d. The detector must be capable of operating continuously for not less than 12 hours.
PHASE I: 1) Model alternative detector approaches to identify the detector media, algorithms, and supporting software and hardware that meet the requirement. 2) Design and model breadboard-level systems that can be tested against surrogate material at the distances and speeds required. 3) Select the alternative approach capable of meeting the speed and distance detection of the target uranium sources. PHASE II: 1) Build a breadboard-level design and test it against surrogate material at the distances and speeds required. 2) Refine the breadboard system to a prototype design. 3) Construct a prototype capable of meeting the speed and distance detection of the target uranium sources and test it to show it meets performance requirements.
PHASE III: A compact, mobile detection system would be of utility to Federal and State agencies responsible for detecting HEU. This dual-use technology applies to both military and civilian detection requirements.
Advanced Cognition Processing and Algorithms for Improved Identification
Fixed measurements, features, and classifiers preclude systems from changing decision logic based on new information collected during an engagement, since tactical operational environments are often different from those used to collect or generate sample data. This potentially causes sensor bias and thus ultimately impacts object classification. In addition, the sample data may vary from the actual data of interest. Because the measurements, features, and classifiers are fixed, traditional techniques do not allow systems to adapt to new information and change the decision based on this new information. The next evolution of target recognition approaches needs to be less sample-driven and should focus on cognitive synthesis of disparate information sources. The selected innovative approach should incorporate not just classifiers, but should also consider learning techniques that blend logical inference, deductive reasoning, and expert knowledge systems. Techniques should not rely primarily on estimated sample points with fixed priors, but can employ physics, game theory, and knowledge engineering based on intelligence understanding. PHASE I: Develop a proof-of-concept design/study. Identify designs/models, and conduct a feasibility assessment for the proposed algorithm, model, technique, and/or methods. Work should clearly validate the viability of the proposed solution with a clear concept-of-operations document. PHASE II: Based on the results and findings of Phase I, develop and refine the proposed solution. The objective is to validate the new technology solution so that a customer can transition it in Phase III. Validate the feasibility of the Phase I concept through development and demonstrations that will be tested to ensure performance objectives are met. Validation would include, but is not limited to, system simulations, operation in test beds, or operation in a demonstration subsystem. This phase should result in a prototype with substantial commercialization potential.
PHASE III: The contractor will apply the innovations demonstrated in the first two phases to one or more missile defense applications. The objective is to demonstrate the scalability of the developed technology, transition the component technology to the missile defense system or payload contractor, mature it for operational insertion, and demonstrate the technology in an operational environment.

Commercialization: The contractor will pursue commercialization of the various technologies and models developed in Phase II for potential commercial uses in such diverse fields as network management, cell communications, air traffic control, finance, and other industries.
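As a simple illustration of the adaptive, less sample-driven reasoning the topic calls for, the sketch below applies a recursive Bayesian belief update: rather than committing to a fixed prior, the system revises its class beliefs as each new measurement arrives during the engagement. The two-class setup and all likelihood values are illustrative assumptions, not program data.

```python
import numpy as np

def update_beliefs(prior, likelihoods):
    """Recursive Bayesian update: fold a new measurement's class
    likelihoods into the current belief over object classes."""
    posterior = prior * likelihoods
    return posterior / posterior.sum()

# Hypothetical two-class problem (e.g., decoy vs. object of interest).
belief = np.array([0.5, 0.5])          # start with no preference
# Each row: assumed P(measurement | class) for a new sensor observation.
for likelihood in [np.array([0.7, 0.3]),
                   np.array([0.2, 0.8]),
                   np.array([0.1, 0.9])]:
    belief = update_beliefs(belief, likelihood)

print(belief)  # belief shifts toward class 1 as evidence accumulates
```

The same recursion accepts evidence from disparate sources (physics models, expert rules) as long as each can be expressed as a likelihood over the hypotheses.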
Seek innovative improvements and creative applications of mature product and material technologies that can address increased kinematic performance and containment. Reducing mass while maintaining or increasing performance (more divert delta-V or more efficient use of packaged delta-V) will increase the kinematic reach and containment of the vehicle. These innovations can range from lightweight rocket motor components minimizing missile stage inert mass, to innovative high-temperature non-eroding materials that can survive higher temperature environments (> 5000 degrees F for approximately 120 s for kinematic reach and > 3500 degrees F for approximately 300 s for containment), to innovative propulsion components which enable greater performance. This may involve innovative research and development, advanced material characterization testing, development of improved material manufacturing and component manufacturing processes, etc., that lead to specific products for improved missile kinematic performance.

PHASE I: Develop a proof-of-concept solution; identify candidate materials and manufacturing processes. Complete a preliminary evaluation of the process, technique, or manufacturing technology showing the assessment of improvement through improved performance and/or reduced inert mass. At completion of this program the design and assessment will be documented for Phase II.

PHASE II: Expand on Phase I results by producing components and demonstrating manufacturing and inspection processes. These activities will provide data to support the studies completed in the Phase I program to substantiate the performance improvements. This will allow a more thorough assessment of the technology for missile defense applications.

PHASE III: The developed process/product should have direct insertion potential. Conduct engineering and manufacturing development, test, evaluation, and qualification.
Demonstration would include, but is not limited to, demonstration in a real system or operation in a system-level test-bed with insertion planning for a missile defense application.

Commercialization: The technologies developed under this SBIR topic should have applicability to the defense industry as well as other potential applications such as commercial space flight and commercial industries which employ the use of energetic chemicals.
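The kinematic payoff of inert-mass reduction can be illustrated with the ideal (Tsiolkovsky) rocket equation: for fixed propellant, every kilogram of structure removed raises the achievable delta-V. The stage masses and specific impulse below are illustrative assumptions, not program values.

```python
import math

def delta_v(isp, m_initial, m_final, g0=9.80665):
    """Ideal (Tsiolkovsky) delta-V in m/s for a single stage."""
    return isp * g0 * math.log(m_initial / m_final)

# Hypothetical stage: 100 kg propellant, Isp 290 s.
propellant, isp = 100.0, 290.0
for inert in (40.0, 30.0):  # kg of inert (structural) mass
    dv = delta_v(isp, propellant + inert, inert)
    print(f"inert {inert:.0f} kg -> delta-V {dv:.0f} m/s")
```

In this toy case, trimming 10 kg of inert mass buys several hundred m/s of additional divert capability without touching the propellant load.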
As new missile defense CONOPS are developed, the requirements placed on weapon data links will increase. Lower latencies and higher data rates will be needed as weapons become more agile, targeting error requirements become tighter, and the need for real-time data becomes greater. In order to support future network communications, innovative concepts and technologies are needed to develop mitigation strategies and alternative approaches that reduce link latency without the need for hardware modification. The more stressing environments of future systems, including engagement coordination scenarios and stressed communications networks, require future networks to account for improvements in:
• Lower latency
• Increased bandwidth
• Increased data rates
• Transmission accuracy
Coordination algorithms that take advantage of alternate satellite and non-satellite based communication can be considered. Also, considerations should be made for latency and alternatives in case of link failure.

PHASE I: Develop a proof-of-concept design/study; identify designs/models, and conduct a feasibility assessment for the proposed algorithm, model, technique, and/or methods. Work should clearly validate the viability of the proposed solution with a clear concept-of-operations document.

PHASE II: Based on the results and findings of Phase I, develop and refine the proposed solution. The objective is to validate the new technology solution that a customer can transition in Phase III. Validate the feasibility of the Phase I concept through development and demonstrations that will be tested to ensure performance objectives are met. Validation would include, but not be limited to, system simulations, operation in test-beds, or operation in a demonstration subsystem. This phase should result in a prototype with substantial commercialization potential.

PHASE III: Apply the innovations demonstrated in the first two phases to one or more missile defense applications.
The objective of Phase III is to demonstrate the scalability of the developed technology, transition the component technology to a system integrator or payload contractor, mature it for operational insertion, and demonstrate the technology in an operational environment.

Commercialization: The contractor will pursue commercialization of the various technologies and models developed in Phase II for potential commercial uses in such diverse fields as network, cell, and financial communications, and other industries.
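A first-order link latency budget illustrates why path selection and coordination algorithms matter: one-way latency is dominated by propagation distance plus serialization time at the link rate. The distances, message size, and data rate below are illustrative assumptions.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def one_way_latency_ms(distance_km, payload_bits, rate_bps, processing_ms=0.0):
    """Simple link latency budget: propagation + serialization + processing."""
    propagation = distance_km * 1e3 / C * 1e3        # ms
    serialization = payload_bits / rate_bps * 1e3    # ms
    return propagation + serialization + processing_ms

# Hypothetical comparison: GEO satellite relay hop vs. line-of-sight terminal,
# 1 kB message over a 1 Mb/s link:
print(one_way_latency_ms(35_786, 8_000, 1e6))   # GEO hop
print(one_way_latency_ms(500, 8_000, 1e6))      # direct terminal
```

The GEO hop's propagation delay alone exceeds 100 ms, which is why coordination algorithms exploiting alternate non-satellite paths can reduce latency without any hardware modification.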
This topic will study the incorporation of innovative reactive materials into a kinetic warhead to increase lethality. Emphasis will be placed on reactive materials that would achieve high reaction temperatures (>4000 K) and generate high amounts of chemical energy (>2 kcal/g) on impact. The need exists to develop and test reactive materials with varying densities from 1 g/cm3 to 10 g/cm3 as substitutes (with proper strength, ductility, etc.) for inner plastics, aluminum and steel components, etc., or as an add-on structure. The proposed solution should enable the design of materials with specific reaction rates. Investigate cost-effective fabrication technologies that are scalable to production. The proposer should tailor reactive materials and manufacturing processes to warhead applications.

PHASE I: Analyze, evaluate, and conduct feasibility experimentation of the proposed lethality enhancement materials, including material characterization and fabrication. Complete a preliminary evaluation of the process, technique, or manufacturing technology showing improved performance and/or reduced inert mass.

PHASE II: Design, fabricate, and test a prototype-scale device or components under conditions which simulate targets and velocities of interest. Demonstrate applicability to selected military and commercial applications. These activities will provide data validating the studies completed in the Phase I effort with the performance improvements. This will allow a more thorough assessment of the technology for missile defense applications.

PHASE III: Conduct engineering and manufacturing development, test, evaluation, and qualification in a missile defense system, or demonstrate operation in a system-level test-bed with insertion planning for a missile defense application.

Commercialization: The technologies developed under this SBIR topic would have applicability to areas such as demolition and blasting, fusible links for electrical circuit protection, combustible structures, cutting torches, etc.
The technologies developed should also have applicability to the defense industry as well as other potential applications such as commercial space flight and commercial industries which employ the use of energetic chemicals.
Several missile defense training systems exist to assist the Warfighter in learning and becoming operationally proficient with the system. This topic seeks to take this a step further by leveraging gaming technologies to determine critical areas of performance and to also design a wrapper to encourage the users to "play" the system, exercising those critical components to refine performance. Modeling and simulation using white cell participants is a common method of training. However, no methods using this approach give instant feedback in a competitive gaming environment. The goal of this topic is to analyze current gaming techniques and determine if any of them could be applied to missile defense battle management training. The effort should address and investigate methodologies for improving learning, participation, and motivation through the application of gaming technologies. Feedback should include some type of reward, possibly points, where the users can compete for skill levels. In planning a missile defense design, the blue force asset lay-down (consisting of sensors, shooters, and command and control (C2) elements) will vary in number. Collaboration between shooters via their elements also varies based on both proximity (communications restrictions) and capability. The threat (red force asset lay-down) will also vary in number and capability. Particularly in Phase I, the researcher can assume simple kinematic impact of nominal missile defense threat trajectories in order to create threat scenarios for scoring analysis. The researcher can also assume simple sensor characteristics when determining intercept timelines. Shooters may consist of single or multiple missile configurations with basic ballistic missile trajectories. 
The goal is not to replicate current missile defense planning system analysis capabilities, or to develop placement optimization routines, but to focus on innovative ways to assess threat (red force) versus asset (blue force) missile defense design scenarios and to score the results in order to provide dynamic feedback to the operator. This feedback capability will optimize operator training with respect to determining planning methodologies and inherently improve retention and overall knowledge of battlespace management. A key focus of the innovation is identifying the set of variables, parameters and skills over which performance levels should be tested.

PHASE I: Develop and demonstrate a gaming concept utilizing a missile defense design scoring algorithm that accommodates multiple threats of varying types and capabilities pitted against multiple sensors, shooters, and C2 elements also with varying capabilities. Provide feedback to the game participant in a quantitatively measurable format. Provide the capability to compare these “scores” based on the participant’s alternatives or courses of action.

PHASE II: Refine and update concept(s) based on Phase I results, and demonstrate the impacts of attrition based on both missile expenditure and/or loss of defense assets during raid scenarios in stressing environments. Demonstrate how the gaming concepts improve the operator’s ability to quickly plan for variations in red force and blue force laydown restrictions. The government may choose to provide a government test bed at no cost if the developer wishes to utilize the facility for high fidelity testing.

PHASE III: Demonstrate the new technologies via operation as part of a complete system or operation in a system-level test bed to allow for testing and evaluation in realistic scenarios. Transition technologies developed under this solicitation to relevant missile defense elements directly or through vendors.
Commercialization: The contractor will pursue commercialization of the various technologies and optimization components developed in Phase II for potential commercial and military uses in many areas such as disaster drill planning, automated processing, accident trauma response planning, and manufacturing processes. Many applications involve planning for rare events, for which it is difficult to train adequately enough to maintain the needed response skills. Turning training into a "game" with rewards would incentivize users to train more frequently and maintain top skill levels.
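A scoring algorithm of the kind the topic describes can be sketched as a weighted tally over engagement outcomes: reward intercepts, penalize leakers, expended interceptors, and lost defended assets, so that competing courses of action become directly comparable. The weights and scenarios below are illustrative placeholders, not doctrinal values.

```python
def engagement_score(threats, intercepted, shots_fired, assets_lost):
    """Toy red-vs-blue planning score: reward intercepts, penalize
    leakers, expended interceptors, and lost assets. All weights are
    illustrative placeholders, not doctrinal values."""
    W_KILL, W_LEAKER, W_SHOT, W_ASSET = 50.0, 100.0, 1.0, 250.0
    leakers = threats - intercepted
    return (W_KILL * intercepted - W_LEAKER * leakers
            - W_SHOT * shots_fired - W_ASSET * assets_lost)

# Two hypothetical courses of action against a 10-missile raid:
print(engagement_score(10, 10, 20, 0))  # all killed, two shots per threat
print(engagement_score(10, 8, 12, 1))   # fewer shots, two leakers, one loss
```

Returning a single comparable number per course of action is what lets the game give the instant, competitive feedback (points, skill levels) the topic seeks.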
Command and Control Human-to-Machine Interface
Command and control human-to-machine interface is critical to overall missile defense system performance due to the human decisions and interactions associated with command and control systems. Recent advances in virtual reality, stereo-graphics, touch screen interfaces, and automated decision aids have the potential to revolutionize how Warfighters interact with command and control systems by providing situational awareness capabilities that immerse Warfighters in the battlespace environment and present relevant information for quick decisions. Innovations developed under this topic should be graphical displays or techniques that transmit critical information quickly and effectively to achieve an understanding of the underlying patterns, interrelationships of the data, and particularly critical aspects. Optimal visual representations of missile defense data form the basis for inferences and decisions. However, a poorly chosen graphical form can lead to erroneous inferences and potentially degrade performance. Important components will be the time-varying nature of the data, as well as varying priorities across space and time and varying access, i.e. different users will have different authority and visibility on the system. The graphical display system could be operated when the BMDS Warfighter is subject to considerable stress, so the system needs to be designed to accommodate such use. In addition, the system needs to be flexible and adaptable for new types of information. The underlying battle management framework exists to interoperate with the necessary tactical data link, mapping, and analysis engines. Therefore, the intent of this effort is not to create “yet another missile defense situational awareness display”, but to design and develop an original, innovative, and effective approach to human-to-machine interaction.
The proposer should assume that, for the tactical situation display, data fusion and correlation algorithms are not part of this effort.

PHASE I: Develop and demonstrate an original, innovative situational awareness and engagement management human-to-machine interface concept. Research and provide evidence as to how this approach improves situation recognition and reaction times through proof-of-principle tests utilizing simple battlespace management situations provided to the operator. Demonstrate how this technology can be used to enhance both C2BMC training and operational processes.

PHASE II: Refine and update concept(s) based on Phase I results and demonstrate the technology in a realistic environment using government provided operations scenarios. The deliverable would be a working prototype that demonstrates a missile defense situational awareness and engagement management environment that enables an individual to interact with the situation and make decisions quickly. Demonstrate the technology’s ability in a stressed raid environment with multiple input data sources and user types.

PHASE III: Demonstrate the new technologies via operation as part of the complete missile defense battle management system or operation in a system-level test bed to allow for testing and evaluation in realistic scenarios. A successful prototype could be transitioned into a battle management training and/or operational system. Market technologies developed under this solicitation to relevant missile defense elements directly, or transition them through vendors.

Commercialization: The contractor will pursue commercialization of the various technologies and optimization components developed in Phase II for potential commercial and military uses in many areas such as 911 centers, battlefield displays, integrated air and missile defense, or trauma triage displays.
Improved Track Accuracy for Missile Engagements
Missile defense performance is dependent on the efficient acquisition, tracking, and discrimination of threatening objects by disparate and geographically dispersed sensors. Precision tracking is a key component for all phases of a missile defense engagement to ensure efficient use of resources and to enhance each component’s contribution to the success of such engagements. Candidate solutions should address improvements in track accuracy for interesting objects following a ballistic trajectory, while retaining a robust capability to maintain track continuity, accuracy, and purity against evolving threats of increasing complexity. The expectation is that any final product from this solicitation will yield improvements in the accuracy of sensor tracking data that will enhance the effectiveness of all missile defense stakeholders whose technologies rely on precision track data. Innovative solutions that utilize real-time, near real-time, and/or hybrid techniques will be considered. Solutions that are adaptable to several radar frequency bands are preferred, but single-band solutions will be considered. The radar tracking quantities where accuracy improvements are expected include: measured position and velocity, predictive track propagation (with uncertainty), track correlation, and track purity.

PHASE I: Develop and conduct proof-of-principle studies and/or demonstrations of track accuracy techniques and algorithms that are easily adaptable to a wide range of sensors using simulated sensor data.

PHASE II: Update/develop algorithms based on Phase I results and demonstrate the technology in a realistic environment using data from multiple sensor sources (as applicable). Demonstrate the ability of the techniques and algorithms to work in real-time, high-clutter, and/or countermeasure environments.

PHASE III: Integrate techniques and algorithms into missile defense systems and demonstrate the overall updated capability. Pursue partnerships with DoD system integrators.
Commercialization: The contractor will pursue commercialization of the various technologies developed in Phase II for potential commercial and military uses in many areas such as weather radar, air traffic control, or satellite tracking.
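Predictive track propagation with uncertainty, named above as a quantity where accuracy gains are expected, is conventionally handled by Kalman-filter-style estimators that alternate a propagation (predict) step with a measurement update. The 1-D constant-velocity sketch below is a minimal illustration; the noise parameters and measurement sequence are assumed values, not sensor data.

```python
import numpy as np

def kalman_step(x, P, z, dt, q=1.0, r=25.0):
    """One predict/update cycle of a 1-D constant-velocity Kalman
    filter: x = [position, velocity], z = position measurement,
    q = process-noise intensity, r = measurement-noise variance."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
    H = np.array([[1.0, 0.0]])                     # measure position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])            # process noise
    # Predict: propagate state and covariance forward by dt
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend prediction with the new measurement
    S = H @ P @ H.T + r                            # innovation covariance
    K = P @ H.T / S                                # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Assumed noisy range measurements of a target moving ~50 m/s:
x, P = np.array([0.0, 0.0]), np.diag([1e4, 1e2])
for z in [1010.0, 1052.0, 1098.0, 1151.0, 1202.0]:
    x, P = kalman_step(x, P, z, dt=1.0)
print(x)  # estimated [position, velocity] converges toward the track
```

The covariance P is exactly the "with uncertainty" part of the propagated track: it quantifies how much to trust the prediction versus each new return, which also drives track correlation decisions.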
Innovative Methodologies for Modeling Fracture Under High Strain-rate Loading
Seek high-fidelity modeling tools for fracture mechanics that are accurate and cost-effective for post-intercept debris prediction. Acceptable solutions potentially incorporate improved damage models, meshless methods, “peridynamics,” or any combination thereof. Use of first-principles codes to predict the characteristics of post-intercept debris requires prediction of fracture and cracking of aerospace structures. Prediction of crack initiation and propagation can be done using a mesh that follows the crack, but this is time-consuming since it involves re-building the model (re-meshing) in the region of interest. Finite element codes, or hydro-codes (e.g. Dyna, Paradyn, Zapotec, Velodyne, etc.), must capture events on extremely short time-scales for high-rate problems such as high-velocity impact or explosive loading of structures. Within this group of codes, methodologies that address fracture, crack growth, shear bands, and voids are all of interest. Fracture is a challenging problem in applied mechanics, and both improved modeling and computational cost are critical to a successful approach.

PHASE I: Investigate the feasibility of new damage models or modeling approaches in first-principles codes. Select a tractable problem on an appropriate scale with a known solution or experimental data for verification of the proposed methodology, and demonstrate the feasibility of the approach using a representative structural model (incorporating materials commonly used in aerospace structures) with high-rate loadings (e.g. high-velocity impact or explosive loading) that would cause cracking or fracture on the time-scales (at least 20 ms duration) of this type of problem. Demonstrate improvements to the fidelity of fracture predictions and assess the computational cost.

PHASE II: Perform further demonstration of the methodology proposed in Phase I through application to more complex, larger-scale models of interest, and use of a broader range of experimental data sets.
Include implementation of this methodology to predict fracture and crack-growth in full-scale testing, such as flight, sled, or arena tests.

PHASE III: Transition the first-principles physics-based modeling capability developed under this program to lethality and debris prediction efforts. Execute model runs for design and analysis cases of interest to missile defense applications including flight test and engineering codes.

Commercialization: An innovative application of generalized finite element methods or other fracture modeling approaches to first-principles codes or hydro-codes could then be applied in the defense and aerospace industries wherever high strain-rates appear due to high-velocity impact or explosive loadings, or in other applications such as mining or demolitions where explosive loadings are involved.
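The appeal of the peridynamic approach mentioned above is that it replaces spatial derivatives with an integral over pairwise "bonds", so a crack is simply the set of irreversibly broken bonds and no re-meshing is required as it grows. A minimal bond-based 1-D sketch (node spacing, horizon, bond stiffness, and critical stretch are all illustrative assumptions):

```python
import numpy as np

def peridynamic_force(u, x, horizon, c, s_crit, broken):
    """Bond-based 1-D peridynamic internal force density: each node
    interacts with every neighbor inside its horizon through a bond;
    a bond fails irreversibly once its stretch exceeds s_crit, so
    cracks emerge from bond breakage without any re-meshing."""
    n = len(x)
    f = np.zeros(n)
    for i in range(n):
        for j in range(n):
            xi = x[j] - x[i]                     # reference bond vector
            if i == j or abs(xi) > horizon or broken[i, j]:
                continue
            eta = u[j] - u[i]                    # relative displacement
            stretch = (abs(xi + eta) - abs(xi)) / abs(xi)
            if stretch > s_crit:
                broken[i, j] = True              # bond failure = crack growth
                continue
            f[i] += c * stretch * np.sign(xi + eta)
    return f

# Uniform 1% stretch field: interior bond forces balance left/right.
x = np.linspace(0.0, 1.0, 11)
u = 0.01 * x
broken = np.zeros((11, 11), dtype=bool)
f = peridynamic_force(u, x, horizon=0.25, c=1.0, s_crit=0.1, broken=broken)
print(f[5])  # interior node: net force is ~0 under homogeneous stretch
```

Because no derivative is taken across the discontinuity, the same force loop runs unchanged whether bonds are intact or broken, which is the property that makes the method attractive for crack initiation and propagation.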
Thermally Efficient Emitter Technology for Advanced Scene/Simulation Capability in Hardware in the Loop Testing
Ground testing of exo-atmospheric interceptor IR sensors plays an essential role in the development of advanced algorithm concepts, mitigating flight test risk/cost, and evaluating tactical performance. Numerous next-generation IR emitter technologies, such as IR light emitting diodes (LEDs), photonic crystals, and resistors, are in development. These devices address the need for greater projected temperature ranges, faster frame update rates, and very large array formats, but present challenges in managing parasitic/waste heat. This solicitation seeks new and innovative emissive technologies to enable presentation of dynamic high-temperature scenes at higher frame rates for high-fidelity IR projection in ground test environments, meeting the test requirements for larger formats and more stressing tactical environments without thermal management becoming a dominating factor. The end result will provide a capability to evaluate exo-atmospheric IR sensors and target tracking/discrimination algorithms in ground test facilities with increasing confidence of success prior to flight test. Technical goals of this topic include:
• Pixel scene resolution of 4K x 4K
• Frame rates > 400 Hz
• Flickerless display
• Compatible with cryogenic chamber operation (~100K)
• MWIR/LWIR scene temperatures of 2000K
• Native non-uniformity <10%
• Cross-talk <1%
• Dynamic range of 16 bits

PHASE I: Conduct a feasibility study to identify one or more innovative thermally efficient emitter solutions that meet the temperature/speed goals and show promise for implementation in extremely large formats. Emitter-to-emitter spacing (pitch) must be minimized to avoid area-defect yields and ease optical interfacing challenges. Identify addressing and drive schemes to achieve a flickerless display. Use of modeling and simulation to conduct trade studies, optimize efficiency, predict overall performance, and forecast power requirements is essential.
Preliminary testing of materials at the “coupon” level to anchor model predictions is desirable.

PHASE II: Develop and execute an incremental test & integration plan that will address the technology challenges and produce a prototype/breadboard system for evaluation. Define any technology shortfalls, document the recommendations for resolution, and update the system-level model.

PHASE III: Based on Phase II lessons learned, revise the system model to prove out the new design. Develop and execute an incremental test & integration plan that will produce a final prototype. Demonstrate interface capability via bench test of a government-furnished very large format array at cryogenic temperature.

Commercialization: The primary market for thermally efficient materials for extremely large infrared projectors will be military and civilian government agencies with applications requiring testing of munition and surveillance sensors.
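The difficulty of the 2000 K scene-temperature goal can be seen directly from Planck's law: in-band MWIR radiance grows steeply with apparent temperature, so each increment in projected temperature demands disproportionately more emitted (and therefore wasted) power per pixel. A small numeric check (unit emissivity assumed, optics ignored):

```python
import math

H, C, K = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def spectral_radiance(wavelength_m, temp_k):
    """Planck spectral radiance, W / (m^2 sr m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = math.expm1(H * C / (wavelength_m * K * temp_k))
    return a / b

def band_radiance(lo_um, hi_um, temp_k, n=2000):
    """In-band radiance (W / m^2 / sr) by trapezoidal integration."""
    dl = (hi_um - lo_um) * 1e-6 / n
    vals = [spectral_radiance(lo_um * 1e-6 + i * dl, temp_k)
            for i in range(n + 1)]
    return dl * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# Ratio of 3-5 um (MWIR) radiance at 2000 K versus a 500 K scene:
print(band_radiance(3, 5, 2000) / band_radiance(3, 5, 500))
```

The steep scaling is what couples the topic's temperature and frame-rate goals to its thermal-management challenge: sustaining high apparent temperatures at >400 Hz inside a ~100 K chamber requires emitters that are thermally efficient, not just bright.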
Innovative Antenna Arrays Enabling Continuous Interceptor Communications
Phased antenna arrays are expensive, heavy systems with complex hardware configurations. Despite these complexities, phased arrays are advantageous in situations where mechanical steering is impractical. In the past decade, the use of digital beamforming (DBF) to substantially augment the system-level capabilities of phased array antennas has matured. However, disadvantages of this approach include potentially high power consumption and the data latency and throughput penalties introduced by digitization and beamforming operations. This topic solicits ideas to develop radio frequency antennas that are innovative, reliable, radiation hardened, and support high-speed continuous communications between fire control and interceptor/kill vehicles in an operational fading channel environment throughout all stages of flight, without reorientation requirements. Proposed communications schemes should have the lowest possible weight impact on the kill vehicle. Favorable solutions will consider multiple communication paths, including communication terminals and satellites, with scalable transmission power and ranges. Reduction in the antenna array footprint on the kill vehicle is also desired, including flexible ultra-wideband, conformal, and fractal antenna solutions that are capable of receiving signals from a large range of orientations.

PHASE I: Conduct an initial design evaluation of proposed systems and perform any laboratory/breadboard experimentation or numerical modeling needed to verify the proposed method. The contractor should identify the strengths/weaknesses associated with different solutions, methods, and concepts.

PHASE II: Based on the optimal communication antenna array design proposed in Phase I, the contractor should complete a detailed prototype design incorporating government performance requirements.
Fabricate and test a prototype for hardness, reliability, and performance in a simulated environment to verify theoretical/design assumptions. The final deliverable will be a detailed performance analysis of the experiment, an antenna prototype, and an initial design of an engineering development model of the resulting communications system. The contractor should coordinate with the solicitor during prototype design and development to ensure products will be relevant to ongoing and planned projects.

PHASE III: Either solely, or in partnership with a suitable production foundry, implement and verify at full scale that the Phase II demonstration technology is economically viable. Assist the solicitor in transitioning the technology to the appropriate system or payload integrators for engineering implementation and testing. Develop and execute a plan for marketing and manufacturing.

Commercialization: Innovations developed under this topic will benefit both DoD and commercial space and terrestrial programs. Possible uses for these products and techniques include long-range line-of-sight communications systems for satellites or aircraft.
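The digital beamforming mentioned above steers the beam purely by weight selection rather than mechanical pointing, which is why it supports continuous links without vehicle reorientation. A minimal uniform-linear-array sketch (element count, spacing, and angles are illustrative assumptions):

```python
import numpy as np

def steering_vector(n_elems, d_over_lambda, theta_deg):
    """Phase ramp seen across a uniform linear array for a plane
    wave arriving from scan angle theta (broadside = 0 deg)."""
    n = np.arange(n_elems)
    return np.exp(2j * np.pi * d_over_lambda * n
                  * np.sin(np.radians(theta_deg)))

def array_gain_db(weights, theta_deg, d_over_lambda=0.5):
    """Normalized array response (dB) toward theta; np.vdot conjugates
    its first argument, forming the beamformer output w^H v."""
    v = steering_vector(len(weights), d_over_lambda, theta_deg)
    return 20 * np.log10(abs(np.vdot(weights, v)) / len(weights) + 1e-12)

# Digitally "steer" a 16-element, half-wavelength array to 20 deg
# purely by weight choice -- no mechanical motion:
w = steering_vector(16, 0.5, 20.0)
print(array_gain_db(w, 20.0))    # full coherent gain at the steer angle
print(array_gain_db(w, -30.0))   # off-axis arrival strongly attenuated
```

The power and latency penalties the topic cites come from performing this complex multiply-accumulate on digitized samples from every element at the full sample rate.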
Multi-Object Payload Deployment
Future weapon systems may be required to deliver multiple payloads. A key technological driver for multi-object payload vehicles is the restraint and deployment method. This topic seeks innovative solutions to reliably restrain and release the payloads with precise deployment dynamics. Restraint technology must withstand high axial shock and acceleration loads. Payload deployment dynamics should create low radial acceleration loads. Deployment will occur outside the atmosphere, and technologies should offer flexible (simultaneous/sequential) deployment. The design should interface with current missile defense platforms. For the purposes of Phase I, deployable payloads should have identical configurations, each with a mass between 10 and 30 kg.

PHASE I: Develop a design that shows the feasibility of the concept, backed with low-fidelity, proof-of-concept component testing. The proposer should provide estimated performance and reliability characteristics.

PHASE II: Refine the concept through detailed design and analysis, including fabrication of hardware. Testing should include multiple test series that demonstrate the restraint and deployment characteristics of the design. This phase should conclude with an updated design based on test results. The proposer should provide performance and reliability characteristics.

PHASE III: Demonstrate the scalability of the developed technology, transition the technology to the missile defense system integrator or payload contractor, and ensure maturity for operational insertion into missile defense applications. Demonstration would include, but is not limited to, demonstration in a missile defense system or operation in a system-level test-bed with insertion planning for a missile defense application.

Commercialization: The proposal should show that the innovation has benefits to both commercial and defense applications. Technology developed can be applied to commercial satellite or launch platforms; weaponry; or military aircraft.
The projected benefits should demonstrate cost reduction and improved producibility or performance of products that use the technology. The proposer should estimate the market size for both commercial and defense applications. Success in this research area should strengthen the availability of reliable hardware for use by missile defense applications, Department of Defense agencies, and commercial entities.
Interceptor Thermal Protection Systems
Objectives for future missile defense applications include increased kinematic reach. One method of maximizing kinematic reach is through inert mass reduction. Interceptors require a significant amount of thermal protection system materials to survive fly-out trajectories. An example of a current state-of-the-art material for thermal protection systems has a density of approximately 1.72 g/cm^3 (0.06 lbm/in^3) and possesses a thermal conductivity of approximately 0.36 W/m-K. Innovative material solutions are sought to extend beyond the state-of-the-art. Robustness may be addressed within the thermal protection system by integrating features that address Electro-Static Discharge (ESD), Electromagnetic Interference (EMI), and Lightning Strike. Improvements in affordability may be addressed through manufacturing processing or component integration. Additional features for potential integration are antennas, cables & connectors, raceways, etc. The advanced material must be dimensionally and chemically stable during typical missile storage and flight environments.

PHASE I: Evaluate the feasibility of the material concept, backed with proof-of-concept material testing. Provide estimated performance and reliability characteristics.

PHASE II: Continue development of the material and associated concepts through detailed design and analysis, including fabrication of material and subscale hardware. Developmental testing should be conducted to validate modeling and property databases. Evaluate material aging effects. Provide in-house and independent verification and validation. Provide performance and reliability characteristics. Phase II should conclude with an updated design based on test results.

PHASE III: Demonstrate the scalability of the developed technology, transition the technology to a missile defense system integrator or payload contractor, and ensure maturity for operational insertion. Demonstrate operation in a missile defense system or in a system-level test-bed.
Plan insertion into a missile defense application.

Commercialization: Develop and execute a plan to manufacture the prototype developed in Phase II, and assist in transitioning this technology to the appropriate missile defense system integrator for engineering integration and testing.
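A rough 1-D transient-conduction estimate shows how the cited density and conductivity values drive TPS sizing: for a given fly-out duration, they set how fast the back-face temperature rises. The density and conductivity below are the state-of-the-art values quoted above; the specific heat, slab thickness, wall temperature, and boundary conditions are illustrative assumptions.

```python
def tps_backface_temp(thickness_m, t_surface, t_init, duration_s,
                      k=0.36, rho=1720.0, cp=1100.0, nodes=21):
    """Explicit 1-D transient conduction through a TPS slab with a
    fixed hot-wall temperature and an adiabatic back face; returns
    back-face temperature (K). k and rho follow the state-of-the-art
    values cited above; cp is an assumed, representative specific heat."""
    alpha = k / (rho * cp)                 # thermal diffusivity, m^2/s
    dx = thickness_m / (nodes - 1)
    dt = 0.4 * dx * dx / alpha             # stable explicit time step
    temps = [t_init] * nodes
    temps[0] = t_surface                   # hot wall
    for _ in range(int(duration_s / dt)):
        new = temps[:]
        for i in range(1, nodes - 1):
            new[i] = temps[i] + alpha * dt / dx**2 * (
                temps[i - 1] - 2 * temps[i] + temps[i + 1])
        new[-1] = new[-2]                  # adiabatic back face
        temps = new
    return temps[-1]

# Assumed 10 mm slab, 2500 K hot wall, 300 s exposure:
print(tps_backface_temp(0.010, 2500.0, 300.0, 300.0))
```

Lowering conductivity or raising volumetric heat capacity slows the back-face rise, which is why a material that beats the cited 0.36 W/m-K at lower density directly buys inert-mass reduction.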
Low Light Short Wave Infrared Focal Plane Arrays
This topic focuses on enabling next-generation sensors and improving FPA performance beyond the current state-of-the-art to support future missile defense applications. This topic seeks low-noise, high-sensitivity FPA technologies that detect very low signal levels. Current FPA technologies for imaging in low-light conditions at SWIR wavelengths are limited by poor quantum efficiency and/or poor noise characteristics. Silicon-based technologies offer low-noise devices; however, the quantum efficiencies of these devices are typically a few percent at SWIR wavelengths. Technologies based on other substrates (HgCdTe, InGaAs, etc.) have fairly high quantum efficiencies (greater than 50%) at SWIR wavelengths, but the noise characteristics of these devices are typically too large for very low light sensing conditions. Technologies based on avalanche photodiodes (APDs) are capable of imaging in low-light conditions; however, APD technologies have limitations: Geiger-mode APDs respond non-linearly to input photon flux, and linear-mode APD array sizes are small. This topic does not focus on particular substrate/readout integrated circuit combinations but solicits technical solutions for imaging objects under low light conditions at SWIR wavelengths.
Goals for the topic are: 1) quantum efficiencies for silicon-based substrates that are greater than 60% (at 900 nm), 35% (at 1000 nm), and 20% (at 1100 nm), or quantum efficiencies for non-silicon-based substrates that are greater than 30% across the 900-1200 nm region; 2) dark currents that are less than 100 electrons/pixel/second at 77K; 3) readout noises less than 5 electrons; 4) formats that are greater than 512 x 512 with extensibility to 1k x 1k; 5) frame rates of 30 Hz with integration times up to 30 milliseconds; 6) programmable with capability of 2 x 2 binning of pixels; 7) excess noise factor that is less than 2 for technologies that use gain; and 8) linear response to incident flux. The proposed technologies should also include designs that mitigate the effects of harsh radiation environments to prevent catastrophic system failure. PHASE I: Conduct modeling, simulations, and analysis (MS&A), and proof-of-principle experiments of the critical elements for the proposed FPA technology. This phase should validate the feasibility of the proposed technology. Phase I will conclude with a proof-of-concept design review of the detector technology to include a clear, concise technology development plan and schedule, predicted FPA performance metrics, a transition risk assessment, and associated requirements documentation. The contractor is strongly encouraged to collaborate and cultivate relationships with other system and/or sensor payload contractors to ensure the applicability of the FPA technology and to initiate work towards technology transition. No specific contact information will be provided by the topic authors. PHASE II: Using the resulting processes, designs, techniques, and architectures developed in Phase I, fabricate a prototype or engineering demonstration unit of the FPA technology. Perform characterization testing of the FPA within the program constraints of cost and schedule. 
The characterization tests should show the performance achieved from the FPA technology. Environmental testing (vibration, thermal, and, if applicable, radiation) is encouraged. Differences between the MS&A and the FPA performance data should be noted. During this phase, the contractor should continue to collaborate and cultivate relationships with other system and/or sensor payload contractors while considering the overall objective of commercialization of the detector technology in Phase III. PHASE III: The offeror will implement and verify in full scale, either solely, or in partnership with a suitable production foundry, that the Phase II demonstration technology is economically viable. Assist in transitioning the detector technology for missile defense applications to an appropriate contractor for engineering integration and testing. Commercialization: The contractor will pursue commercialization of the various technologies developed in Phase II for potential commercial uses in other government applications. In addition, there are potential applications for detector technologies in a wide range of diverse fields that include astronomy, commercial satellite imagery, optical and free-space communications, law enforcement, maritime and aviation sensors, spectroscopy, atmospheric measurements (in-situ and remote sensing), and terrain mapping.
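The sensitivity goals above can be related to one another through a simple per-pixel SNR model. The incident photon count below is illustrative; the quantum efficiency, dark current, read noise, and integration time are the topic goal values.

```python
import math

def snr(photons, qe, t_int_s, dark_e_per_s, read_noise_e, excess_noise=1.0):
    """Simple FPA SNR model: shot noise on signal plus dark charge (inflated
    by the excess noise factor for gain devices) combined in quadrature with
    readout noise. A sketch, not a full detector noise budget."""
    signal = qe * photons                      # photoelectrons collected
    dark = dark_e_per_s * t_int_s              # dark charge, electrons
    noise = math.sqrt(excess_noise**2 * (signal + dark) + read_noise_e**2)
    return signal / noise

# Goal values: QE 35% at 1000 nm, dark 100 e-/px/s at 77 K, read noise 5 e-,
# 30 ms integration; 1000 incident photons is an assumed low-light scene.
print(f"SNR ~ {snr(1000, 0.35, 0.030, 100, 5):.1f}")
```

With these goal values the dark charge over a 30 ms frame is only ~3 electrons, so the read-noise and signal shot-noise terms dominate, which is why the topic bounds both.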
Solid State High Power Amplifier for Communications
The goal of this topic is to investigate solid state power amplifier (SSPA) technologies that meet or exceed the output power (greater than 1 kW), duty factor, operating frequency (K-band: 20-22 GHz), reliability, sustainability, and supportability achievable with existing traveling-wave tube amplifiers as a potential replacement for klystron tubes in future communication systems. Klystron tube technology has reliability and supportability issues resulting from manufacturing processes and component (tungsten wire) availability. The proposed SSPA architecture should consist of a modular design which provides an additive power approach based on multiple radio frequency (RF) modules linked together to achieve desired power while providing for gradual degradation when a single RF module fails so components can be replaced without having to remove input power. Currently, input power must be cut when replacing a failed klystron. SSPA technologies that leverage built-in-tests and diagnostics to provide fault detection and isolation at a line replaceable unit level should be emphasized. This level of fault determination should allow for efficient replacement of failed hardware components and demonstrate enhanced safety for system operators and maintainers. Another goal of this topic is to demonstrate fast SSPA warm-up times (less than 1 second of “warm-up” time prior to transmission). Finally, the proposed SSPA technology should demonstrate enhanced availability and reliability to meet operational readiness at minimal cost. PHASE I: The contractor should develop a generic SSPA design that provides for future growth potential to support emerging interceptor communications requirements. The design should document expected reliability, availability, sustainability and supportability performance at power levels and duty factors that are sufficient for communications. 
PHASE II: The contractor should transition the generic solid state high power amplifier design completed in Phase I into a system specific end item prototype suitable for testing via insertion into an existing missile defense application. The prototype and design documentation should be provided to the government. PHASE III: The contractor should integrate the Phase II prototype into an existing government hardware string and then conduct subsystem and system level testing to ensure compatibility with legacy communication hardware components. The contractor will be expected to refine the design as needed to address any changes identified during testing to satisfy communication design/performance requirements. Commercialization: This innovative technology would have benefits for all commercial and/or defense systems applications operating in the X, Ka, K, & Ku bands requiring reliable, sustainable high power amplification of RF energy (e.g. commercial and military satellite communications) but the benefits are also applicable to bands above and below those specific bands. The contractor proposal should clearly explore and identify other specific applications for both commercial and defense systems.
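The graceful-degradation property of the additive power approach can be quantified with a simple combiner model. The 16-module sizing below is an illustrative assumption, not a topic requirement.

```python
import math

def degradation_db(n_modules, n_failed):
    """Output power drop for an ideal N-way coherent combiner when modules
    fail: the remaining voltage sums to (N-k)/N of nominal, so the combined
    power falls by 20*log10((N-k)/N). Ideal-combiner sketch only; real
    combiners add loss and phase-error terms."""
    working = n_modules - n_failed
    return -20.0 * math.log10(working / n_modules)

# E.g., a hypothetical 16-module, >1 kW K-band SSPA:
for failed in (1, 2, 4):
    print(f"{failed} failed of 16 -> {degradation_db(16, failed):.2f} dB down")
```

A single failed module in this example costs only about half a dB, which is the contrast with a klystron, where one tube failure takes down the entire transmit chain.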
Non-Destructive Testing Methods for Detecting Red Plague Within an Insulated Silver Plated Copper Conductor
Red Plague is a galvanic corrosion of silver coated copper materials which occurs when the silver coating does not adequately cover the underlying copper and is exposed to water by either direct contact or condensation. Red Plague causes degradation of the anodic copper while leaving the cathodic silver plating intact. More detail on causes and current mitigation is provided in SAE-ARP-6400, the NASA Red Plague Control Plan, and the European Space Agency’s “Corrosion of Silver-Plated Copper Conductor.” Therefore, the silver “straw” can carry current during a normal incoming inspection conductivity test and most system level high-frequency checks, but without the copper's strength and ductility, silver cannot carry the shock and vibration load in flight environments. Silver plated copper wire provides many advantages over other conductor systems such as excellent solderability, crimpability, and flexibility. The intent of this topic is ultimately to create a system which can be used to perform non-destructive testing on XL-ETFE insulated silver plated copper wire and determine acceptability prior to implementation within hardware. One proposed method for creating this system is by utilizing the skin effect. As the frequency of a signal increases, a given conductor will carry more and more of the signal on the "skin" of the conductor. This is often referred to as "skin effect." Conversely, as the frequency decreases, a more uniform distribution of current within the conductor is obtained. Therefore, within a given length of XL-ETFE insulated silver plated copper conductor (with 40 microinch mean plating thickness), a method could be developed such that by applying signals at various frequencies, and measuring the AC impedance of the wire length at each frequency, it may be possible to determine the relative extent of copper conductor degradation from varying degrees and numbers of sites of Red Plague corrosion. 
The levels of the variable frequency currents applied may also impact current density at corrosion sites and be used to determine effects to the characteristics of the wire from this corrosion. PHASE I: Develop a non-destructive test method to detect the presence of Red Plague. Conduct experimental and/or analytical efforts to demonstrate that Red Plague can be created consistently within a lab environment for use in the analysis of the non-destructive testing (NDT) method. It is critical that the performing company can consistently create a known amount of damage due to Red Plague within an XL-ETFE wire for verification of the non-destructive test method. PHASE II: Conduct experimental and/or analytical efforts to demonstrate proof-of-principle of proposed technology. Investigations shall consider the viability, feasibility, and cost-effectiveness of solutions to locate and quantify extent of Red Plague within XL-ETFE insulated silver plated copper conductor. Demonstrate the technology by developing a prototype in a representative environment. Demonstrate feasibility and engineering scale up of proposed technology as well as identify and address technological hurdles. Demonstrate the system’s viability and superiority under a wide variety of conditions typical of both normal and extreme operating conditions. PHASE III: Successfully demonstrate direct applicability or near-term application of technology in one or more missile defense applications. Demonstration should be in a real system or operational in a system level test-bed. This demonstration should also verify the potential for enhancement of quality, reliability, performance, and reduction of total ownership cost of the proposed subject. Commercialization pathways should be identified for both military and civilian applications. Commercialization: Equally important to military utility is the transferability of proposed technologies to Red Plague detection in aerospace, automotive, and industrial uses. 
The proposed technology should benefit commercial and defense systems through cost reduction as well as improved reliability and sustainment. As enabling technologies, it is anticipated that commercial and industrial transferability and applicability of such technologies will be high.
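The skin-effect rationale above can be made concrete by computing the skin depth in silver against the quoted 40 microinch plating. A minimal sketch, assuming room-temperature bulk resistivity for silver; the test frequencies are illustrative.

```python
import math

MU0 = 4e-7 * math.pi               # vacuum permeability, H/m
RHO_SILVER = 1.59e-8               # bulk resistivity, ohm-m, ~20 C

def skin_depth_m(resistivity, freq_hz, mu_r=1.0):
    """Skin depth: delta = sqrt(rho / (pi * f * mu))."""
    return math.sqrt(resistivity / (math.pi * freq_hz * MU0 * mu_r))

# 40 microinch plating ~ 1.02 micrometers of silver.
plating_m = 40e-6 * 0.0254
for f in (1e6, 100e6, 4e9):
    d = skin_depth_m(RHO_SILVER, f)
    where = "within" if d < plating_m else "beyond"
    print(f"{f/1e6:>6.0f} MHz: skin depth {d*1e6:7.2f} um ({where} the plating)")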
Passive Inter-Modulation RF Emissions Utilized for Identifying Galvanic Corrosion in Metal Structures
Corrosion is a major concern that causes premature deterioration or failure at damage sites in metal structures thereby necessitating monitoring, maintenance, repair or replacement. Passive intermodulation (PIM) emissions are a known problem for ships and land-based cellular systems where metal structures simultaneously receive RF radiation on two or more different signal frequencies. The received RF signal frequencies may then cause RF currents to flow in metal structures that can be non-linearly mixed together in metal-to-metal contacts in the structures affected by galvanic corrosion. These RF currents may then generate additional new intermodulation RF emissions from the structures due to sum and difference harmonic mixing of the two or more original frequencies. PIM may occur in a variety of areas from coaxial connectors to cables, rusty bolts, or any metallic structure joint where dissimilar metals, or similar metals in an electrolyte, are in contact. Corrosion sites could include poor, loose, or contaminated connectors, junctions between dissimilar metals, and mechanical connections that have become oxidized, or contaminated with typical corrosion chemicals found in missile defense systems. Demonstrate that PIM RF emissions can be used to accurately detect and characterize degradation arising from galvanic corrosion processes in metal structures. Areas of emphasis include determining if various conditions of corrosion severity found at points on a metal structure can be accurately and repeatedly identified to determine where and when a particular corrosion site is an issue for concern. Furthermore, the information gained by the preceding effort should be evaluated for its capability to support corrosion degradation monitoring, corrosion modeling and determining applicable acceleration factors for galvanic and other combinational corrosion processes. 
Also desired from this effort is to determine whether the RF currents that circulate due to RF radiation impinging on the metal structures increase the PIM emissions. Resolve how RF radiation affects or accelerates existing galvanic corrosion processes in the structures. PHASE I: Demonstrate a proof-of-concept for detecting PIM RF emissions from metal structures and provide analysis of how effectively the measurements are employed to detect and locate the corrosion points at sites on metal structures. Identify prototype equipment items, develop a preliminary equipment design and document developed techniques. PHASE II: Perform testing of a PIM detection prototype to obtain information on PIM emission “signatures” of corrosion sites with known differing metallurgical characteristics. Demonstrate that the results are accurate and repeatable enough to provide a basis for degradation monitoring, corrosion modeling and determination of corrosion model acceleration factors. Develop a library of repeatable signatures of corrosion to include metallurgical characteristics at corrosion sites such as joint contact pressure, surface topology, current density and area of the contact point. PHASE III: Develop a final equipment design and hardware and transition to the government. This phase is to obtain improved capability for identifying the presence and progression of underlying galvanic corrosion in metal-to-metal contacts. This effort offers significant cost avoidance for missile defense applications. Positive results from this project have great potential to improve system reliability, and to reduce the costs of metal structure corrosion from material and systems functional loss within the DoD, and on a national commercial level. Commercialization: In the U.S., the total direct cost of corrosion is estimated at about 300 billion dollars per year, which is about 3.2% of the gross domestic product. 
Corrosion also interferes with human safety, disrupts industrial operations and presents dangers to the environment.
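The sum-and-difference mixing described above determines exactly which frequencies a PIM survey should listen on. A short sketch, with two illustrative transmit frequencies (not taken from any fielded system), enumerates the low-order products:

```python
def intermod_products(f1, f2, max_order=3):
    """Frequencies |m*f1 + n*f2| for integer m, n with 2 <= |m|+|n| <= max_order,
    both tones contributing. Returns sorted (order, frequency) pairs."""
    out = set()
    for m in range(-max_order, max_order + 1):
        for n in range(-max_order, max_order + 1):
            order = abs(m) + abs(n)
            if 2 <= order <= max_order and m != 0 and n != 0:
                out.add((order, abs(m * f1 + n * f2)))
    return sorted(out)

# Two illustrative shipboard transmit frequencies, in MHz:
for order, f in intermod_products(225.0, 243.0):
    print(f"order {order}: {f:.1f} MHz")
```

The third-order products (here 207 and 261 MHz) fall closest to the original carriers and are typically the strongest, which is why PIM measurement practice concentrates on them.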
Synthesis and Realization of Broadband Magnetic Flux Channel Antennas
Significant advances have been made recently in the development of magnetic antennas. These antennas are magnetic duals of electric antennas, which allows them to be mounted directly on an aircraft surface. No frequency-dependent backing-cavities are required, which allows true frequency-independent operation. Flux channels in the form of magnetic rings have been shown to replace vertical elements and crossed-linear dipoles. In both cases, no hull penetrations are required save for the feed points, and the external presentation is minimized. However, application to the curved surfaces of aircraft has yet to be addressed. The issues involved are related to the tape-winding process that has been used to manufacture the channels. This process resists conforming to curved surfaces. Designs are required that can be applied to singly and doubly curved surfaces such as those found on aircraft. There is also opportunity here to arrange the geometries to adjust antenna pattern characteristics. The solutions will need to operate over a decade of frequency in the 3–600 MHz band with frequency-independent characteristics in both impedance and radiation pattern. The antenna should attain a gain greater than 0 dBi and a Voltage Standing Wave Ratio (VSWR) equal to or better than 2.5:1 over at least the upper two octaves of the band. The design should be extendable to other and wider frequency bands for both line-of-sight and satellite communications applications. The primary objective of this solicitation is to extend the capabilities of magnetic-current radiators by constructing frequency-independent geometries. A secondary objective is to consider electrically small antennas and high-power antennas. The underlying science and current state of development are described by Sebastian in the references [1, 2]. 
PHASE I: Determine technical feasibility and develop an approach for frequency-independent geometries for magnetic flux-channel antennas that are conformal to aircraft surfaces and designed to meet the performance requirements in the description section. Prove feasibility through analysis and simulation. PHASE II: Further develop design from Phase I through additional analysis and simulation. Design, manufacture, integrate, and demonstrate the operation of a prototype on a simulated aircraft body to establish practical performance parameters. Based on these results, propose any refinements to the antenna design and fabrication approach and determine the trade-off between cost, weight, and gain-bandwidth performance. Address fabrication cost and volume challenges that are relevant to the general application to aircraft. PHASE III: Finalize the design from Phase II, perform relevant testing and transition the technology to appropriate Navy and commercial platforms. The small business will support the Navy with certifying and qualifying the antenna for Naval use. As appropriate, the small business will focus on scaling up manufacturing capabilities and commercialization plans.
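The VSWR requirement above maps directly to reflection coefficient, return loss, and mismatch loss, which is how simulation results are usually reported. A short conversion sketch:

```python
import math

def vswr_to_metrics(vswr):
    """Reflection coefficient magnitude, return loss (dB), and mismatch loss
    (dB) corresponding to a given VSWR on a matched line."""
    gamma = (vswr - 1.0) / (vswr + 1.0)
    return_loss_db = -20.0 * math.log10(gamma)
    mismatch_loss_db = -10.0 * math.log10(1.0 - gamma**2)
    return gamma, return_loss_db, mismatch_loss_db

g, rl, ml = vswr_to_metrics(2.5)
print(f"VSWR 2.5:1 -> |Gamma| = {g:.3f}, "
      f"return loss {rl:.1f} dB, mismatch loss {ml:.2f} dB")
```

The 2.5:1 limit corresponds to about 7.4 dB return loss, i.e. under 1 dB of power lost to mismatch, a common threshold for broadband conformal antennas.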
Design and Produce Millimeter Wave Dipole Chaff with High Radar Cross Section
Current aircraft radio frequency (RF) chaff is made from aluminum coated glass filaments produced in a continuous strand and then cut to lengths that achieve the desired resonance at frequencies in the 2-18 GHz band. The filaments require a slip coating to prevent end welding of fibers when cut, and to minimize clumping when ejected. The typical chaff cartridge can contain millions of these coated glass filaments and has multiple sections of different lengths to create a reflective response across many frequencies at the same time [Ref. 2,3,4]. The millimeter wave band of 30 to 40 GHz is not addressed in the fielded chaff cartridge. The current chaff material is not well suited to be cut and packaged to the lengths required for efficacy in the millimeter wave region. Also, calculations show the amounts needed to produce the required response in that region cannot be achieved in the volume of the current chaff cartridge. Recent advances in nano-fibers, nanotubes, meta-materials, conductive polymers, graphite fibers, graphene fibers, metal nanowire technologies, and coating techniques using copper, silver, aluminum, zinc, etc., provide some promise that new higher performing chaff can be produced on a large scale [Ref. 1]. The new chaff may be able to double, triple, or even quadruple the number of dipoles in the available volume. A novel dipole chaff material is needed that can be utilized for millimeter wave frequencies, is low cost, and can be produced easily in sufficient quantities, by industry, to satisfy the needs of the military community. This new chaff must have a high scattering radar cross section (RCS) in the 33 to 38 GHz frequency band. Target cartridge volume is a 1.4 inch diameter x 5.8 inch long cylinder. It is desired that the material RCS exceed 500 square meters at 35 GHz per cartridge. PHASE I: Design, develop and prove feasibility of new innovative chaff in accordance with the parameters in the Description. 
Provide a detailed analysis conducted by modeling and simulation, calculation or measurement of individual dipole RCS performance and then scale up the RCS performance for a volumetric chaff cloud result. If the objective is met for the proposed frequency band of 33 to 38 GHz, then investigate the scalability of the material for the 2 to 18 GHz frequency band to show the increase/decrease of effectiveness. Agglomeration or bird-nesting of the dipole payload must not exceed 20 percent of the sample to facilitate dispersion of the chaff payload upon dispense. The target materials have been in existence for some time now and the basic process of combining and coating has been the subject of experimentation on a small scale. Production of a small quantity consisting of a few ounces of the material to prove the ease of manufacture and demonstrate the simplified process is desired. PHASE II: Develop a pilot scale manufacturing process for the chaff material. Test material in a controlled environment and demonstrate that modeling and simulation results confirm actual performance findings. Develop plans to integrate the chaff with a dissemination device such as the Navy RR-129 cartridge form factor. Using Government Furnished Equipment (GFE) cartridges, produce and provide 30 flight test ready samples for Government-furnished testing on an air platform in order to fully characterize the effects of this chaff on a Navy test range. Work produced in Phase II may become classified. Note: The prospective contractor(s) must be U.S. owned and operated with no foreign influence as defined by DoD 5220.22-M, National Industrial Security Program Operating Manual, unless acceptable mitigating procedures can and have been implemented and approved by the Defense Security Service (DSS). 
The selected contractor and/or subcontractor must be able to acquire and maintain a secret level facility and Personnel Security Clearances, in order to perform on advanced phases of this project as set forth by DSS and NAVAIR in order to gain access to classified information pertaining to the national defense of the United States and its allies; this will be an inherent requirement. The selected company will be required to safeguard classified material IAW DoD 5220.22-M during the advanced phases of this contract. PHASE III: Develop a full-scale manufacturing process for proposed material. Participate in qualification testing efforts of the proposed material. Assist in the transition of the technology to appropriate air platforms.
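Since chaff filaments are cut to resonate as half-wave dipoles, the required cut lengths at the fielded and millimeter wave bands follow directly from the wavelength. A minimal sketch; the 0.96 end-effect shortening factor is a typical thin-dipole assumption, not a chaff specification.

```python
C = 299_792_458.0  # speed of light, m/s

def dipole_cut_length_mm(freq_hz, shortening=0.96):
    """Approximate resonant dipole cut length: lambda/2 times a typical
    end-effect shortening factor (0.96 here is an assumed value)."""
    wavelength_m = C / freq_hz
    return shortening * wavelength_m / 2.0 * 1000.0

for f_ghz in (2, 18, 35):
    print(f"{f_ghz:>2} GHz: cut length ~ {dipole_cut_length_mm(f_ghz * 1e9):.2f} mm")
```

At 35 GHz the cut length shrinks to roughly 4 mm, an order of magnitude shorter than at 2 GHz, which illustrates both the cutting/handling difficulty with the current glass-filament material and why far more dipoles must fit in the same cartridge volume.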
Synthetic Aperture Radar Approaches for Small Maritime Target Detection and Discrimination
Traditionally SAR has been used to provide imagery of fixed structures on land. Objects moving in the scene were unfocused and generally not of value. For large vessels at sea in relatively calm conditions, some advanced focusing algorithms are able to provide high quality imagery, but they are not useful for small vessels with very dynamic movements. For maritime environments, the community has relied on low altitude (<1000’) non-coherent techniques that leverage the lower clutter returns present at these low grazing angles. Coherent techniques based on SAR processing are far less sensitive to grazing angle, allowing the platform to operate at higher altitudes and steeper grazing angles (perhaps tens of degrees). In addition, they offer improved performance due to a richer set of potential discriminants. Increased standoff ranges are attained using the coherent techniques, which also allow the use of lower cost, lower peak power radars. The suggested approach differs from traditional SAR in that the objective is not to “focus” the target but rather to leverage the nature of the target signature and the coherence of the background to improve detection, tracking and discrimination. PHASE I: Design and demonstrate the feasibility of a SAR based small maritime target approach for detection and tracking using available field data or synthetic data. PHASE II: Mature the detection and tracking approach to be suitable for integration into an existing Navy airborne maritime surveillance radar system. Develop a set of discriminants using field data for a limited set of target types identified by the Navy. PHASE III: Refine and improve the implementation for integration on Navy maritime surveillance radar systems suitable for platforms such as the MQ-8C, MQ-4C, MH-60R and P-8A.
Test and Certification Techniques for Autonomous Guidance and Navigation Algorithms for Navy Air Vehicle Missions
Many advanced autonomous guidance and navigation algorithms capable of dynamic route re-planning have been developed. The application of such algorithms to Unmanned Air System (UAS) missions has remained limited. This limited application results from multiple factors; however, the greatest obstacle is airworthiness certification. The development of certification methods for these algorithms remains challenging because of the difficulty in defining test cases and expected results that suitably enumerate the broad range of conditions and non-deterministic responses that may be generated in response to complex environments. The current certification approach relies on brute-force methods to exhaustively test the algorithms, making certification efforts prohibitive due to cost and schedule impacts. Therefore, the Navy’s current UAS inventory is limited to fixed, pre-planned mission plans that prohibit fully integrated operations with the fleet. Test and certification techniques for navigation and guidance algorithms specific to Naval missions are needed to integrate higher levels of autonomy into Naval Aviation operations. A wide range of classes of algorithms have seen application in recent UAS autonomous path planning research, such as: potential field methods, optimization methods, and heuristic search methods. The present inventory of these algorithms is quite large, and the application of each class of algorithm can be quite nuanced. Although some test/demonstration cases have been proposed, new, robust analysis, test, and demonstration techniques must be developed to enable the Navy acquisition community to certify systems with autonomous path planning algorithms. PHASE I: Develop and prove feasibility of initial test and certification techniques for autonomous guidance and navigation algorithms for representative Navy mission scenarios. 
Mission scenarios could include dynamic path re-planning during shipboard launch/departure and recovery/landing flight phases within the ship’s airspace, including integrated operations with other manned aircraft. The test and certification techniques will be evaluated against accuracy, algorithmic scope coverage, and reduction in test scope from full brute-force style evaluation techniques. PHASE II: Develop a prototype software application/suite for the test and certification techniques and integrate that software with real-time (or faster-than-real-time) software-in-the-loop capabilities that provides a virtual test bed for: (i) assessing the efficacy of various algorithms in different mission scenarios, (ii) developing suitable test/demonstration cases, and (iii) assessing the robustness of test/demonstration cases across algorithm classes. Develop integration, test, and certification guidelines for the software application/suite to enable testing of future algorithms. PHASE III: Transition a final software application/suite to the Navy (e.g., Triton, Fire Scout, and UCLASS) and other DoD agencies. Additionally, transition the developed technology to commercial UAS industries.
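Heuristic search is one of the algorithm classes named above. A minimal A* sketch on a toy grid (illustrative only; real UAS planners add vehicle kinematics, airspace constraints, and dynamic obstacles) shows the kind of deterministic planner behavior a certification test case would assert against:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid (1 = obstacle), Manhattan heuristic.
    Returns the optimal path as a list of (row, col) cells, or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, [start])]   # (f, g, node, path)
    seen = set()
    while open_heap:
        _, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                heapq.heappush(
                    open_heap,
                    (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None

# Toy scenario: the direct route is blocked, forcing a detour.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(path)
```

A certification-style check would then assert properties of the output (start and goal reached, no obstacle cell visited, path length optimal) rather than enumerate every environment, which is the test-scope reduction the topic is after.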
Gallium Arsenide Based 1-Micrometer Integrated Analog Transmitter
Current airborne military communications and electronic warfare systems require ever increasing bandwidths while simultaneously requiring reductions in space, weight and power (SWaP). The replacement of the coaxial cable used in various onboard RF/analog applications with RF/analog fiber optic links will provide increased immunity to electromagnetic interference, reduction in size and weight, and an increase in bandwidth. However, it requires the development of high performance, high linearity optoelectronic components that can meet extended temperature range requirements (-40 to 100 degrees Celsius (C)). Additionally, avionic platforms pose stringent requirements on the SWaP consumption of components such as optical transmitters for avionic fiber communications applications. To meet these requirements, new optical component technology will need to be developed. Current analog optical transmitter technology typically consists of discrete lasers and modulators operating at 1550 nanometers (nm), with a requirement for active cooling for operation in avionic environments. To meet avionic requirements, the transmitter should integrate the laser and modulator into a compact uncooled package that can maintain performance over the full avionic temperature range. It is envisioned that a Gallium Arsenide (GaAs) based transmitter at approximately 1 micrometer wavelength can meet this requirement. GaAs optical sources at 1 micrometer can operate over an extended temperature range (>100 degrees C) at high efficiency (up to ~60%). This is currently not possible at 1550 nm. 
The desired optical component is a GaAs-based integrated analog transmitter (laser and high-efficiency modulator), with an integrated optical source with low relative intensity noise (RIN) (<-160 dBc/Hz), 100 milliwatt (mW) output power, uncooled operation over a minimum temperature range of -40 to +100 degrees C, and an integrated optical intensity modulator with low V-pi (<2V), packaged in a ruggedized package that has a height less than or equal to 5 mm, and a volume of <2.5 cubic centimeters. The packaged transmitter must perform over the specified temperature range and maintain hermeticity and optical alignment upon exposure to typical Navy air platform vibration, humidity, thermal shock, mechanical shock, and temperature cycling environments. PHASE I: Develop and analyze a new design and packaging approach for an uncooled 1 micrometer optical transmitter that meets the requirements outlined in the Description section. Develop the fabrication process, packaging approach, and test plan. Demonstrate feasibility of the optical transmitter with a supporting proof of principle bench top experiment. PHASE II: Optimize the Phase I transmitter and package design and develop a prototype. Test the prototype transmitter to meet design specifications in a Navy air platform representative of a relevant application environment, which can range from an unpressurized wingtip or landing gear wheel well (with no environmental control) to an avionics bay (with environmental control). The prototype transmitter should be tested in an RF photonic link over temperature with the objective performance levels reached. Demonstrate a prototype fully packaged transmitter for direct insertion into analog fiber optic links. PHASE III: Perform extensive operational reliability and durability testing, as well as optimize manufacturing capabilities. Transition the demonstrated technology to Naval Aviation platforms and interested commercial applications.
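The RIN specification can be put in context by comparing the RIN-driven and shot-noise photocurrent terms in a link budget. A sketch, assuming a 0.7 A/W detector responsivity (an assumed value, not a topic requirement) with the topic's 100 mW output and -160 dBc/Hz RIN goal:

```python
import math

def link_noise_currents(rin_dbc_hz, photocurrent_a, bw_hz=1.0):
    """RIN and shot-noise current terms for a received photocurrent I:
    i_rin^2 = 10^(RIN/10) * I^2 * B ;  i_shot^2 = 2 * q * I * B."""
    q = 1.602e-19  # electron charge, C
    i_rin = math.sqrt(10.0 ** (rin_dbc_hz / 10.0) * photocurrent_a**2 * bw_hz)
    i_shot = math.sqrt(2.0 * q * photocurrent_a * bw_hz)
    return i_rin, i_shot

# 100 mW optical * 0.7 A/W (assumed, lossless link) -> 70 mA photocurrent.
i_rin, i_shot = link_noise_currents(-160.0, 0.070)
print(f"RIN noise  {i_rin * 1e12:6.1f} pA/rtHz")
print(f"shot noise {i_shot * 1e12:6.1f} pA/rtHz")
```

Even at -160 dBc/Hz the RIN term still dominates shot noise at this photocurrent, which shows why the topic bounds RIN so tightly for a high-power analog link.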
Flight Deck Lighting Addressable Smart Control Modules
Surface aviation and amphibious assault ships launch and recover aircraft whose pilots typically use Night Vision Devices (NVDs) for night operations. As a result, the NVD flight deck lighting solution requires control and dimming of various individual lighting fixtures and circuits aboard these ships. Digitally addressable control of these lighting fixtures is required in order to dim and/or turn these lights on and off depending on what the flight operations and atmospheric conditions require. The US Navy desires a new solution for aviation lighting aboard air capable ships utilizing LED technology through a standardized smart module that will be able to recognize the lighting package configuration and what type of fixture it is controlling through embedded firmware/software; this would allow lights of different functions and power requirements to be daisy chained, significantly reducing cable runs and installation costs. As lights are added to the system, they should self-configure and appear on the operator control panel in the correct lighting group. The smart module would eliminate the need for multiple configurations, set-up issues and complex troubleshooting while providing a simplified configuration that allows it to be easily replaced when a light is not able to be turned on/off, dimmed, or flashed from the operator control panel. Failures of any light should not affect the operation of any other light. The existing lighting control system includes an expensive (estimated $60,000) electronics box, FLEXDRIVER, which drives up to forty-eight light fixtures on a ship in each enclosure; this reduces system reliability and creates single point failures. A large ship may need in excess of twelve FLEXDRIVERs, each of which must be individually configured for the complement of lights the flight deck has. 
Each configuration is a unique mix of multiple driver cards for the specific light fixtures it drives, and each light has a direct connection to a FLEXDRIVER, increasing system cabling. An innovative approach is needed to identify the most cost-effective methods to achieve successful installation of the smart module. Objectives to consider include reducing the system cable plant, minimizing system interconnections, providing redundancy for fault tolerance, providing on/off, flashing, and dimming control of various lighting groups, and configurations that can be incorporated into existing lighting fixtures or interconnection junction boxes, all while minimizing total system cost of ownership. Consolidated control of the total system would be over the shipboard Local Area Network (LAN). The proposed system should meet the strict Electromagnetic Interference (EMI) requirements of MIL-STD-461 and Navy shipboard environmental requirements. Of note, there are twelve different LED lighting fixtures of varying quantities that make up the Advanced Flight Deck Lighting (AFDL) system; more will be developed in the future. Current lighting fixtures range from a single-LED fixture to fixtures containing multiple LED strings (1-6) with different voltage (3-28 VDC) and current (68-3150 mA) requirements. PHASE I: Design and demonstrate the feasibility of a universal smart LED Light Fixture Control module, as discussed in the Description section, with respective embedded firmware/software capable of universal application on all US Navy flight decks, while identifying methods to keep overall system costs to a minimum.
PHASE II: Based on the Phase I effort, develop a production-representative prototype of an LED Light Fixture Control smart module and demonstrate its functionality in a lighting control system on a shipboard-representative lighting layout with either real or simulated loads provided by NAVAIR (e.g., either a test bench or real fixtures at NAVAIR Lakehurst or a location to be determined (TBD)). Design concepts should be matured into detailed design documentation. At a minimum, an analysis of shipboard environmental suitability should be provided; ideally, environmental testing of the prototype would also be conducted. The ability to provide a failure analysis is desired, as is an estimate of service life. PHASE III: Finalize development of an optimized LED Light Fixture Control smart module design for robustness and full environmental qualification, including shock testing. Test the prototype in conjunction with a shipboard-representative flight deck lighting system (TBD by NAVAIR). Produce units for delivery to fleet and shore sites. Transition and integrate the smart module into its intended platform(s).
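The self-configuration behavior sought in this topic (each module reports its fixture type over the daisy chain, the controller groups it on the operator panel, and a failed light does not affect the rest of the chain) can be sketched as follows. This is an illustrative sketch only: the message flow, fixture types, and group names are invented for illustration and are not part of any Navy specification.

```python
# Hypothetical self-configuration handshake for daisy-chained smart lighting
# modules. Fixture types and panel groups below are assumed examples.
from dataclasses import dataclass

@dataclass
class SmartModule:
    serial: str
    fixture_type: str   # read from embedded firmware on the real module
    failed: bool = False

class LightingController:
    # Assumed mapping from fixture type to the operator-panel lighting group.
    GROUPS = {"deck_edge": "Deck Edge", "floodlight": "Overhead",
              "wand": "Visual Landing Aids"}

    def __init__(self):
        self.panel = {}  # group name -> list of module serial numbers

    def discover(self, chain):
        """Walk the daisy chain; each healthy module self-reports its type."""
        for module in chain:
            if module.failed:
                continue  # a failed light must not affect the rest of the chain
            group = self.GROUPS.get(module.fixture_type, "Unassigned")
            self.panel.setdefault(group, []).append(module.serial)
        return self.panel
```

In this sketch, adding a module to the chain is enough for it to appear in the correct panel group on the next discovery pass, which is the behavior the topic asks the embedded firmware/software to provide.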
Ability for Electronic Kneeboard (EKB) to Communicate and Operate in a Multi-level Security Environment
The Electronic Kneeboard (EKB) is currently being developed to enable access to digital publications, tactical imagery, and other dynamic data in all USN and USMC aircraft. This capability will greatly enhance aircrew situational awareness, reduce cockpit clutter, improve precision fire, and enable in-flight mission re-planning. The warfighter would greatly benefit from a mobile platform capable of communicating on multi-level security domains, leveraging any and all available transport media. The utility of the EKB depends on a tablet device's ability to operate at both unclassified and classified levels within a loosely-defined and inconsistent connectivity model. Unclassified operation will be required for various administrative functions (Naval Air Training and Operating Procedures Standardization (NATOPS)/Naval Aviation Technical Information Product (NATIP)/Standard Operating Procedures (SOP) study, access to email, and routine mission planning), while the classified environment will be essential for tactical mission execution that includes, but is not limited to, ingestion of live data feeds, chat, and tactical imagery. The objective of this project is to design and develop a software-based solution to achieve unclassified and classified (here, classified means Secret) personas on a single tablet. The development effort will have to address a major challenge requiring a highly innovative approach: devising a software tool that is sufficiently "secure" to meet National Security Agency (NSA) requirements for highly classified communications. Proposers should consider the requirements of NSA's Commercial Solutions for Classified (CSfC) program (see reference below). Further, the software-based solution should utilize a variety of transport media to send/receive data from/to the device when a network connection is present. The solution should address the need for predictable, timely execution of system commands.
The software tool should utilize a smart algorithm/load balancer to analyze available connections and make the most efficient use of the bandwidth provided over each security level, based on network performance metrics, application priority, and other factors. For example, a shipboard environment may have a Satellite Communications (SATCOM) presence/Consolidated Afloat Networks and Enterprise Services (CANES) Wi-Fi, a Forward Operating Base may have SATCOM/cellular, and a training squadron may have cell/Wi-Fi/Navy Marine Corps Intranet (NMCI) hardwire. This approach would enable devices to receive, process, and display a variety of data types from existing networks, aircraft systems, and sensors. Data types include standard Office documents, imagery files, e-mail, text, and voice traffic. A smart processing construct is critical to the success of this effort. Current solutions in this problem space fail to effectively leverage both internal system resources and external system interfaces. Internal resources (e.g., system memory, Central Processing Unit (CPU) cycles) are simply divided based on a predetermined split across various virtual machines. This static methodology does not account for the dynamic reallocation of critical resources based on mission need. Further, current tablet technologies do not gracefully assess system interfaces and the bandwidth available across each of them. Standard bandwidth monitoring techniques are obtrusive, utilizing methods which further exacerbate the limited-bandwidth problem. PHASE I: Design and develop a software-based concept to achieve high-assurance data isolation/compartmentalization via dynamic data identification. PHASE II: Develop a prototype software tool with a path toward multi-level secure processing capability and certification. Preliminary testing of the prototype will be conducted with the inputs/artifacts provided by the government sponsor to support the flight certification process.
Demonstration of load/resource balancing across security levels is key. Work produced in Phase II may become classified. Note: The prospective contractor(s) must be U.S. owned and operated with no foreign influence as defined by DoD 5220.22-M, National Industrial Security Program Operating Manual, unless acceptable mitigating procedures can and have been implemented and approved by the Defense Security Service (DSS). The selected contractor and/or subcontractor must be able to acquire and maintain a secret-level facility clearance and Personnel Security Clearances in order to perform on advanced phases of this project as set forth by DSS and NAVAIR and to gain access to classified information pertaining to the national defense of the United States and its allies; this will be an inherent requirement. The selected company will be required to safeguard classified material IAW DoD 5220.22-M during the advanced phases of this contract. PHASE III: Integrate the software tool into the EKB tablet to assure interoperability with existing EKB applications (list to be provided as needed) and conduct operational tests with mission-representative datasets in simulated network environments. Collect performance metrics from developmental tests and refine the smart processing algorithm(s) to optimize performance. All certification and accreditation artifacts will be provided for both information assurance and flight certification.
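The "smart algorithm/load balancer" idea in this topic can be sketched as a link-selection policy: score each available transport by measured bandwidth and latency, restricted to transports cleared for the traffic's security level. The metric names, weights, and scoring policy below are illustrative assumptions, not NAVAIR requirements.

```python
# Hedged sketch of transport selection across security levels. Each link is a
# dict with 'name', 'bandwidth_mbps', 'latency_ms', and 'levels' (the set of
# security levels the transport may carry). All values are hypothetical.
def pick_link(links, traffic_level, priority):
    """Return the name of the best eligible link, or None if no transport is
    cleared for this traffic's security level."""
    eligible = [l for l in links if traffic_level in l["levels"]]
    if not eligible:
        return None
    # High-priority traffic weights latency more heavily (assumed policy).
    latency_weight = 2.0 if priority == "high" else 0.5
    def score(l):
        return l["bandwidth_mbps"] - latency_weight * l["latency_ms"]
    return max(eligible, key=score)["name"]
```

For example, with a slow-but-cleared SATCOM link and a fast unclassified Wi-Fi link, unclassified traffic would ride Wi-Fi while Secret traffic falls back to SATCOM, which is the kind of per-level bandwidth usage the topic describes.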
Infrared Search and Threat Identification
A number of thermal imaging devices and sensor systems capable of tracking an IR signature exist in the fleet today; however, they do not have the capability to identify the threat level of the designated target. For example, the AN/AAQ-37 Distributed Aperture System (DAS) on the F-35 provides situational awareness, detection, and tracking but not threat identification. The Advanced Targeting Forward Looking Infrared (ATFLIR) on the F/A-18 provides long-range target detection without threat identification. A capability to infer, from existing sensor IR imagery data, the identification and classification of high-end airborne threats is needed. This passive capability would be integrated into existing airborne sensors to allow for threat identification in emission-controlled and emission-denied tactical operational environments. The goal is to provide threat identification as a software application within a current sensor system such as the F-35 DAS or the F/A-18 ATFLIR. The capability needs to provide temporal discrimination of IR signatures and positive threat identification with a low false alarm rate. The solution must be supportable within the form, fit, and function of the target sensor, with efforts to minimize space, weight, and power (SWAP) impacts for future compatibility. The government will provide SWAP requirements once the target sensor is selected. PHASE I: Define and develop a concept for an IR threat identification capability. Compare innovative techniques in IR threat identification against existing techniques and quantify the performance differences. Formulate and prove feasibility of a concept for implementation using existing IR sensor technology and identify whether the proposed solution will require a hardware component. A limited, unclassified threat data set will be provided for application in concept development. PHASE II: Produce prototype technology based on the concept developed in Phase I.
Develop models and simulation techniques to show achievable performance. Demonstrate the functional technology and its performance in a controlled environment. Validate the selected technique and demonstrate its advantages within the scope of specific IR threat data to be provided by the Government. Work produced in Phase II may become classified. Note: The prospective contractor(s) must be U.S. owned and operated with no foreign influence as defined by DoD 5220.22-M, National Industrial Security Program Operating Manual, unless acceptable mitigating procedures can and have been implemented and approved by the Defense Security Service (DSS). The selected contractor and/or subcontractor must be able to acquire and maintain a secret-level facility clearance and Personnel Security Clearances in order to perform on advanced phases of this project as set forth by DSS and NAVAIR and to gain access to classified information pertaining to the national defense of the United States and its allies; this will be an inherent requirement. The selected company will be required to safeguard classified material in accordance with DoD 5220.22-M during the advanced phases of this contract. PHASE III: Transition the technology into a suitable candidate IR sensor system, identified by the Navy, and provide empirical evidence of the effectiveness of system threat identification. Transition the developed technology to the Fleet.
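The "temporal discrimination of IR signatures" requirement can be illustrated with a toy classifier: reduce a tracked target's intensity time series to a dominant modulation frequency (e.g., from engine or rotor periodicity) and match it against known threat templates. The template table, tolerance, and zero-crossing estimator below are invented placeholders for illustration, not fielded or government-specified techniques.

```python
# Illustrative sketch only: temporal IR signature matching against templates.
TEMPLATES = {"threat_A": 12.0, "threat_B": 37.0}  # dominant modulation in Hz (hypothetical)

def dominant_frequency(samples, rate_hz):
    """Crude modulation-frequency estimate from zero crossings of the
    mean-removed intensity series."""
    mean = sum(samples) / len(samples)
    x = [s - mean for s in samples]
    crossings = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)
    duration_s = len(samples) / rate_hz
    return crossings / (2.0 * duration_s)

def identify(samples, rate_hz, tol_hz=1.0):
    """Return the first template within tolerance, else 'unknown'. Declaring
    'unknown' rather than guessing keeps the false alarm rate low."""
    f = dominant_frequency(samples, rate_hz)
    for name, f0 in TEMPLATES.items():
        if abs(f - f0) <= tol_hz:
            return name
    return "unknown"
```

A real solution would operate on imagery rather than a scalar series and would need validated threat signatures, but the structure (temporal feature extraction, template match, explicit "unknown" declaration) mirrors the low-false-alarm identification the topic asks for.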
High Peak Power 1.9 um Thulium-Doped Solid-State Lasers for Next-Generation Compact and Robust High Peak-Power Blue Lasers
A need exists for high pulse energy, high repetition-rate lasers for LIDAR transmitters. LIDAR systems have been shown to be a powerful tool to remotely probe various oceanographic and atmospheric processes. Each system generally requires specialized transmitters at often hard-to-achieve wavelengths, and the lasers available at these wavelengths are often not suited for high peak-power operation. A combined Thulium/Holmium 2 um system is ideally suited to cover this requirement. It has a large tuning range, and in combination with one or two frequency conversion steps, it can cover a large part of the electromagnetic spectrum. Furthermore, the long-lived upper states required for energy storage can facilitate high pulse energy operation. There is an even more compelling motivation for the Navy that is driving the development of this high peak-power 1.9-um solid-state laser. There has been a requirement for a high peak-power blue laser system solution to be operated in pulsed mode with high repetition rate for standoff oceanographic sensing applications. A combination of the intrinsic wide tuning range of Thulium and two-stage second-harmonic generation (SHG) can be utilized to generate light optimized for different ocean scenarios. The current state-of-the-art blue laser solutions with high peak-power capability usually incorporate complex, inefficient, and relatively bulky and heavy multiple-stage non-linear processes that require second or third harmonic generation and/or optical parametric oscillation approaches. These approaches often do not meet the stringent space, weight, and power (SWaP), performance, and reliability requirements set forth by the Navy. Therefore, it is anticipated that frequency quadrupling of a compact high peak-power 1.9 um laser would yield a high peak-power blue laser with a 50% reduction in size and weight, thereby circumventing many of the drawbacks of the more conventional approaches.
NASA Langley Research Center has been actively pursuing coherent Doppler wind LIDAR development for 3-D winds measurement for the last fifteen years. The intensive research efforts have led to significant advancement of a 2-um laser transmitter, and the NASA team has recently demonstrated a Holmium- (Ho) and Thulium- (Tm) doped solid-state laser and amplifier at ~2 um emission wavelength with 250 millijoules (mJ) per pulse at a pulse repetition rate of 10 Hz, and a Ho:YLF slab amplifier with 125 mJ at 350 Hz. These are Holmium-based systems at wavelengths slightly longer than 2 um. For some Navy applications, such as the aforementioned application in the blue spectral range, it is desirable to use wavelengths shorter than 1.9 um. To cover this desired wavelength range, further development of Tm-based solid-state lasers with high peak power and repetition rate is needed. It is therefore the goal of this program to seek the development of a compact, ruggedized, high-energy, high repetition-rate, 1.9 um Thulium laser that will meet the size, weight, power, performance, and reliability requirements stated below. The performance objectives of the laser solution are:
1. High repetition rate -- Threshold: 200 Hz; Objective: 250 Hz.
2. High peak power -- Threshold: 80 mJ per pulse with pulse width no more than 100 ns.
3. Wavelength -- Threshold: between 1.850 um and 1.960 um; Objective: 1.890 um.
4. Line width of less than or equal to 2 Angstroms.
5. Polarization -- single polarization >100:1.
6. Wall plug efficiency -- Threshold: 10%; Objective: 20%.
7. Laser beam quality M-squared less than 3.
8. Light weight (total weight including the laser head, cooling system, power supply, and control system) -- Threshold: less than 50 lbs; Objective: less than 35 lbs.
9. Small volume (total volume for the cooling system, power supply, control system, and laser head) -- Threshold: less than 2 cubic feet; Objective: less than 1 cubic foot.
10. Electrical power requirement -- Threshold: less than 1.5 kW; Objective: less than 1 kW.
11. Ability to be ruggedized and packaged to withstand the shock, vibration, pressure, temperature, humidity, electrical power conditions, etc. encountered in a system built for airborne use.
12. Reliability: mean time between equipment failure -- 300 operating hours.
13. No cryogenic cooling allowed.
PHASE I: Design and determine the feasibility of a viable, robust 1.9 um solid-state laser system which meets or exceeds the requirements specified in the Description section. Identify technological and reliability challenges of the design approach, and propose viable risk mitigation strategies. PHASE II: Design, fabricate, and deliver a laser system prototype based on the design from Phase I. Test and fully characterize the system prototype. PHASE III: Finalize the design, fabricate a ruggedized laser system solution, and assist in obtaining certification for flight on a NAVAIR R&D aircraft. Identify transition partners and create a business plan that will provide a robust, compact, and flexible LIDAR transmitter to the oceanographic and atmospheric sciences community.
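The conversion chain and peak-power figures above follow from simple arithmetic: two cascaded SHG stages each double the optical frequency (halve the wavelength), so frequency-quadrupling a 1.9 um source lands at 475 nm in the blue, and the 80 mJ / 100 ns threshold corresponds to roughly 0.8 MW of peak power.

```python
# Quick arithmetic check of the frequency conversion and peak-power figures
# stated in this topic (no laser physics beyond wavelength/energy bookkeeping).
def frequency_quadrupled_um(wavelength_um, shg_stages=2):
    """Each SHG stage doubles the optical frequency, halving the wavelength."""
    return wavelength_um / (2 ** shg_stages)

def peak_power_megawatts(pulse_energy_j, pulse_width_s):
    """Approximate peak power in MW assuming a rectangular pulse."""
    return pulse_energy_j / pulse_width_s / 1e6

blue_um = frequency_quadrupled_um(1.9)        # 0.475 um = 475 nm, in the blue
peak_mw = peak_power_megawatts(0.080, 100e-9) # 80 mJ in 100 ns -> 0.8 MW
```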
Multi-Wavelength and Built-in Test Capable Local Area Network Node Packaging
The Navy is interested in advancing built-in test (BIT) capable digital avionics single-mode wavelength division multiplexing (WDM) local area network (LAN) node technology. Combining integrated active and passive WDM components with planar light-wave circuits (PLCs) and integrated optical time domain reflectometry (OTDR) technology will create low-cost, low size, weight and power (SWAP) WDM packaging technology for Department of Defense (DoD) aviation platforms. Application of BIT-capable WDM technology on DoD aviation platforms will enable a drastic increase in aggregate transmission bandwidth and in network node connectivity, reliability, and maintainability relative to today's copper and single-wavelength fiber optic point-to-point link designs. Current fiber optic systems utilizing single-wavelength point-to-point links limit the ability of the avionics designer to maximize network redundancy, reliability, and maintainability while minimizing the number of onboard interconnects. This fundamental limitation of point-to-point fiber optic links for high-speed digital data transmission points directly toward WDM technology as a viable solution. The inherent speed and latency advantages of optical communication in future designs trend toward ultra-high-speed fiber optic WDM local area networking. The recent development of precision fiber optic component connection and OTDR application-specific integrated circuit technologies, along with advancements in the integration of ruggedized digital WDM active components and avionics WDM LAN topologies, points toward an innovative research program to integrate the various components into functional packages for board-level integration. It is expected that WDM LAN technology will be incorporated in future-generation avionics architectures.
In order to meet the needs of military avionics, the Navy is seeking innovative approaches for creating WDM LAN nodes based on hybrid-integrated, BIT-capable optoelectronic packages containing an OTDR application-specific integrated circuit (ASIC), tunable laser, wavelength converter, fixed and tunable multiplexer/de-multiplexer, planar lightwave circuit (PLC), and advanced connection technology. Placing WDM LAN node components on a printed circuit board enables easy insertion within avionics weapons replaceable assemblies. This will advance technology readiness and thus eliminate apprehension on the part of avionics integrators to adopt WDM LAN technology in next-generation designs. Successful development could also result in significantly reduced WDM LAN component packaging cost. Final packaged solutions, including all electronic interface circuitry, shall meet the following SWAP requirements. Size: package height shall be less than 8 mm Threshold / 5 mm Objective; package footprint shall be less than 100 cm^2 Threshold / 50 cm^2 Objective. Mass: package mass shall be less than 1000 grams Threshold / 500 grams Objective. Power: package shall require less than 12 W of electrical power (including cooling if needed) to meet Phase II routing objectives. PHASE I: Develop and demonstrate the feasibility of a hybrid-integrated WDM LAN node package. Simulate the in-package optical routing for ring, bus, star, and mesh topologies, including the performance over expected optoelectronic package assembly tolerances. Include a plan to meet the SWAP requirements. PHASE II: Develop a manufacturable prototype of the WDM LAN node package designed in Phase I. Demonstrate four optical inputs/outputs, each supporting a minimum of 4 C-band 100 GHz-spaced wavelengths per input and output port at a minimum of 10 gigabits per second per wavelength with a bit error rate no greater than 10^-12. Test both device components and the package over a -40 to +100 degrees Celsius temperature range.
Phase II has the potential to be classified; the contractor will need to be prepared for personnel and facility certification. PHASE III: Increase manufacturing readiness and transition to manufacturing for avionics applications for both Navy and commercial usage.
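The Phase II demonstration target implies some simple back-of-the-envelope numbers: four ports times at least 4 wavelengths per port times 10 Gb/s per wavelength gives a 160 Gb/s minimum aggregate, and 100 GHz channel spacing places channels on the standard dense-WDM grid (anchored at 193.1 THz per the ITU-T G.694.1 convention; the anchor and channel count below are only an example grid, not a topic requirement).

```python
# Arithmetic implied by the Phase II WDM LAN node demonstration target.
def aggregate_gbps(ports, wavelengths_per_port, gbps_per_wavelength):
    """Minimum aggregate throughput across all ports."""
    return ports * wavelengths_per_port * gbps_per_wavelength

def itu_grid_thz(start_thz=193.1, spacing_ghz=100.0, channels=4):
    """Center frequencies of the first few channels on a 100 GHz DWDM grid."""
    return [start_thz + i * spacing_ghz / 1000.0 for i in range(channels)]

total = aggregate_gbps(4, 4, 10)  # 160 Gb/s minimum aggregate per package
grid = itu_grid_thz()             # 193.1, 193.2, 193.3, 193.4 THz
```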
Advanced Non-Destructive System to Characterize Subsurface Residual Stresses in Turbo-machinery Components
Compressive surface treatments are frequently used in turbo-machinery components to add a factor of safety to component life. The residual stress (RS) profile imparted to metallic components can vary with application, service use, time, and environment. The US Navy is interested in non-destructively measuring the subsurface residual stress field in metallic engine components, specifically in titanium and Inconel® alloys. The current industry standard for measuring RS is x-ray diffraction (XRD), which is limited to measuring surface stresses. In order to measure subsurface stresses with XRD, the component must be destructively evaluated. Because of this, subsurface stress cannot be confirmed for a production component, except for the few candidates used for quality assurance. Even when these candidates are analyzed, XRD-measured RS does not correlate well with the design model, showing the difficulty of modeling subsurface stress. Extensive measurements of surface RS have been performed in recent years, confirming that surface RS relaxes at critical locations of engine components with operational usage. These components are designed to initially have compressive RS at key locations; however, as the RS relaxes with usage, it may approach a tensile condition, and the component no longer benefits from the intended factor of safety. A non-destructive inspection (NDI) system that provides quantitative, subsurface measurements of RS at critical locations of turbo-machinery components of the propulsion system is sought. Such technology will provide the ability to incorporate subsurface stresses into a design model, confirm that the design intent was met, and offer the ability to actively monitor the life remaining in a component subject to operational stress relaxation. Critical components of concern include fans, disks, and blisks/integrally bladed rotors (IBR). Critical locations tend to be inside surfaces of bolt-holes, bores, slots, and fillet radii.
In order to be a valuable design tool, the new NDI device must be capable of the following:
• Resolving stress within 10% of the actual stress value (emphasis will be placed on the method of validation);
• Resolving stress location within 10% of the modeled local mesh size;
• Discerning stress values in each element of the model used to life the component;
• Providing results for stress magnitude and location repeatable to less than 10%, using ASTM F1469 as a guideline;
• Through-component measurement for typical gas turbine engine components is desired. The ability to measure residual stress at depths customary to advanced surface treatments (~0.150 inches) is necessary. Components can be moved or manipulated to achieve this requirement;
• Measuring Titanium- or Inconel®-based alloys common to cold-section components, with the ability to expand to other materials;
• Producing a favorable return on investment;
• Being safe for the user.
Attributes such as time per measurement, measurement environment, and system size should be addressed. Time per measurement and system size should be minimized as much as possible, while the measurement environment should be practical (e.g., humidity, component cleanliness, etc.). The ability to quantify the percentage of each grain orientation (e.g., 100, 111 planes) for cubic and hexagonal structures in a specific volume and discern the stress of a specific orientation is not required but would be encouraged. The desire of this solicitation is to develop a technology with the ability to measure subsurface stresses for use in component design, with the ability to be developed into a manufacturing quality control tool as well as a portable field inspection device. Establishing a working relationship with relevant original engine manufacturer(s) (OEM), while not required, will greatly enhance the probability of success.
PHASE I: Demonstrate feasibility and proof-of-concept of proposed NDI system capable of quantitatively, nondestructively and reliably measuring and tracking surface and subsurface RS at critical locations of turbo-machinery components. Provide preliminary design for a system. PHASE II: Develop, produce, validate and implement a robust and rugged NDI RS measurement prototype system based on the results of Phase I. The prototype should be capable of obtaining the necessary subsurface RS data nondestructively and tracking it for comparative analyses during the life cycle of each individual component. Integrate the system to develop life management methodology and validate life management algorithms for application to be used on engine components. Provide detailed design for a system. Perform a demonstration of the developed NDI system. PHASE III: Mature the system for field use by making the system robust, rugged and ensuring ease of use for the operator in both Navy and commercial applications. Perform any final testing and commercialize and transition the technology for field and Original Equipment Manufacturer usage.
Inducing Known, Controlled Flaws in Electron Beam Wire Fed Additive Manufactured Material for the Purpose of Creating Non-Destructive Inspection Standards
Several military platforms are targeting electron beam wire-fed additive manufacturing (EBAM) for production of new, replacement, and repair components. Standard non-destructive inspection (NDI) methods are currently being applied to EBAM components, but significant capability gaps exist in inspections of component preforms thicker than approximately 3 inches. Uncertainty remains around the probability of detection, minimum detectable flaw size, and resolution of NDI on typical flaws found in additive manufacturing (AM) materials. Improved standards are needed to down-select and optimize specific NDI techniques (X-ray, computed tomography (CT), and ultrasound (UT)) for these components. Typical NDI standards for X-ray, CT, and UT inspection consist of a block of material fashioned to a representative shape and thickness to represent the component to be inspected. Representative flaws are introduced by drilling side-drilled and/or flat-bottomed holes and/or burning electric discharge machining (EDM) notches into the standard. These flaws do not always adequately represent volumetric flaws within a component, particularly with methods such as CT. A novel method to induce known, controlled flaws in EBAM materials for production of improved NDI standards offers the promise of increased confidence and reliability in these inspections. PHASE I: Determine and demonstrate the feasibility of developing reliable and repeatable methods for inducing known, controlled flaws representative of porosity, lack of fusion, and cracking due to residual stresses in Ti-6Al-4V samples deposited by EBAM. Demonstrate the feasibility of applying at least one such approach by fabricating and inspecting coupon specimens. PHASE II: Provide practical implementation of a production-scalable process to implement the recommended approach developed under Phase I. Evaluate the approach through the fabrication and evaluation of a sufficient quantity of NDI test coupons.
Develop and fabricate an NDI standard using the recommended method utilizing relevant specifications as cited in the references and Government defined flaw geometries (minimum flaw size will nominally be 0.0500 in. +/- 0.0005 in.). PHASE III: Transition the NDI standard manufacturing approach to military aircraft component fabricators and commercial industry which utilize EBAM for new, replacement, or repair components.
Innovative, High-Energy, High Power, Light-Weight Battery Storage Systems Based on Li-air, Li-sulfur (Li-S) Chemistries
As the Navy modernizes its Fleet, the energy needs of naval aircraft are increasing significantly. Meeting the energy demands of these aircraft is a formidable challenge which requires looking beyond current Lithium-ion (Li-ion) batteries. State-of-the-art Li-ion cells have a theoretical specific energy of 387 Wh/kg (watt-hours per kilogram) and a theoretical volumetric energy density of about 1015 Wh/L. The specific energy of Li-ion cells is attractive because, in comparison to nickel-cadmium and lead-acid batteries, Li-ion batteries offer significant advantages: decreased weight (~1/3) and increased capacity (~3X). The decrease in weight would result in cost savings due to lower fuel consumption during flight or the ability to increase payload, which increases mission capability. Li-air and Li-S are two emerging chemistries that can meet such energy demands. There are two types of rechargeable Li-air batteries undergoing research, namely, non-aqueous and aqueous systems. The theoretical specific energy for the non-aqueous system (organic electrolyte based) is 3505 Wh/kg and the theoretical energy density is about 3436 Wh/L, about ten times greater than those of Li-ion cells. The corresponding parameters for the aqueous systems are 3582 Wh/kg and 2234 Wh/L, respectively, also approximately ten times greater than those of Li-ion cells. The Li-sulfur cells have a theoretical specific energy of 2567 Wh/kg with a theoretical energy density of 2200 Wh/L. Each component of the non-aqueous Li-air battery faces unique technical challenges. For example, dendrite formation on the Li metal anode raises safety concerns that impact the capacity retention of the cell and contribute to the voltage gap during the cycling process. One of the challenges for the aqueous-type Li-air battery is the requirement of a Li-ion conducting membrane to protect the Li metal. Polysulfide solubility is a concern for the Li-S batteries [1-3].
These challenges lead to low specific energy and poor cycling efficiency for the current Li-air and Li-sulfur systems. Combinations of material innovations and advancements in obtaining stable interfaces are key to solving such challenges. Disruptive new Li-air and Li-S concepts have the potential to increase cycle life, round-trip efficiency, and power density from their current levels, which is critical to the development of reliable next-generation battery chemistry technologies. The purpose of this topic is to develop 28 V (Volt) DC (Direct Current) / 270 VDC electrical energy storage devices based on emerging chemistries such as Li-air and Li-S. The offerors must demonstrate a minimum specific energy in the range of 600-1000 Wh/kg or higher for Li-air cells (at least 3X higher than Li-ion cells) or in the range of 400-800 Wh/kg for Li-S cells. The offerors must propose innovative approaches to overcome the challenges mentioned above to achieve the defined threshold values, with the goal of approaching theoretical energy density as objective values. The battery system to be developed (28 V / 270 V DC) must be stable under aircraft operational, environmental, electrical, and safety requirements governed by applicable government documents [4-5]. The requirements include, but are not limited to, sustained operation over a wide temperature range from -40 deg (degrees) C (Celsius) to +71 deg C, including exposure to +85 deg C, the ability to withstand carrier-based shock and vibration loads and an altitude range up to 65,000 ft (feet), per MIL-STD-810G, and electromagnetic interference of up to 200 V/m (Volts per meter), per MIL-STD-461F. Proposed innovative pathways must meet additional requirements of low self-discharge (<5% per month), good cycle life (>2000 cycles at 100% Depth of Discharge (DOD)), and long calendar life (4-7 years' service life) at the cell level (threshold) and at the battery product level (objective).
Diagnostic/prognostic capabilities that would lead to a safer product with reduced total ownership cost should also be addressed. PHASE I: Design and develop an innovative concept to address low specific energy and low cycle life, and demonstrate the feasibility of a Li-air and/or Li-S battery at the full-cell level. Perform critical safety and electrical performance evaluations of Li-air and/or Li-sulfur batteries. PHASE II: Develop a prototype and demonstrate the functionality of a Li-air and/or Li-S battery over a wide temperature range, under select harsh environmental, storage, and cycling conditions. In addition, initiate the scale-up and design processes and develop a preliminary cost structure. PHASE III: Develop a functional, aircraft-worthy prototype battery product with performance specifications satisfying targeted acquisition requirements coordinated with Navy technical points of contact. Complete testing per military performance specifications and transition to appropriate platforms (e.g., F/A-18E/F, F-35). Commercialize the Li-air/Li-S battery technology and leverage the advantages of a scalable manufacturing process to develop a cost-effective manufacturing process for technology transition to various system integrations for both DoD and civilian applications.
Model-Based Tool for the Automatic Validation of Rotorcraft Regime Recognition Algorithms
Due to practical, technical, and logistical limitations associated with achieving direct loads monitoring for every fatigue-sensitive component on an aircraft, the Navy is relying on flight maneuver recognition to provide usage data across a fleet of aircraft in order to refine fatigue life calculations. However, current regime recognition (RR) tools have trouble accurately and precisely recognizing flight regimes. These existing RR tools are based on empirical or rule-based systems. They are derived from actual flight tests, which makes them vehicle-, load out-, weather-, and pilot-dependent. In addition, their development is costly and time-consuming, since each air vehicle system and maneuver type must be individually flight tested and verified against the RR code. As a result, RR codes do not have the accuracy required for fleet usage in Health and Usage Monitoring Systems (HUMS). It is therefore important to verify that future RR codes for use in rotorcraft applications correctly represent as many flight conditions as possible. In order to ensure the fitness of RR tools, per ADS-79D-HDBK, new RR codes require an independent verification and validation (IV&V) effort. Current verification of RR tools is performed by manually comparing the input of physical flight test data to the RR tool’s output. This process is labor intensive and error prone. Without an automatic and standardized way of comparing codes, selecting the ‘best’ code can be a subjective process. Automating the validation process would not only expedite it considerably, it would also allow for the quantitative comparison of RR codes. The use of physics-based simulation to recreate a set of validation test flights can reduce flight test costs and streamline the RR validation process. Despite their present shortcomings, effective RR codes would be invaluable for tracking fatigue damage to parts through the accurate detection and measurement of flight regimes experienced by a rotorcraft.
RR schemes have many other possible benefits to rotorcraft operations, including updating service usage spectrums and component damage tracking. These improvements could drastically reduce unscheduled maintenance and downtime. An automated validation tool is sought that leverages physics-based simulation to validate RR codes, is fast and simple to use, and provides feedback on the accuracy of the RR tool’s identification of regimes. This validation tool should be able to identify codes that capture at least 97 percent of maneuvers, or enough maneuvers that the fatigue damage fraction of life-limited parts is not under-predicted by more than 0.5 percent. The tool should alert users to inconsistencies between the outputs of the RR code and the performed maneuvers. The validation tool should use scripted HUMS flights on instrumented aircraft. PHASE I: Design and develop a model/concept for a physics-based tool for the automatic validation of rotorcraft RR software. Demonstrate the feasibility of the approach. PHASE II: Provide a practical implementation of the methodology developed in Phase I and incorporate it into a prototype tool which includes a suitable graphical user interface (GUI). The prototype tool must be able to analyze RR code for a specific rotorcraft platform and mission load out. Improve the accuracy, robustness, and speed of the tool to help spur the development of more robust RR codes. Demonstrate the developed prototype tool with scripted HUMS flights on instrumented aircraft and eventually streaming data from a flight simulator. PHASE III: Transition and integrate the validation tool into a software package for use with RR code outputs obtained from actual flight data from onboard a Navy/Marine helicopter. Perform field testing to demonstrate the robustness of the system when dealing with real flight data. Expand the tool to be able to analyze code for actual production platforms (e.g.
H-53E/K, H-60R/S, and H-1) and different load outs for each required platform, as well as for use in commercial establishments. Evaluate qualification test results and provide a procurement specification for transition to an actual production platform.
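The acceptance criterion described above (capture at least 97 percent of maneuvers, or not under-predict the fatigue damage fraction of life-limited parts by more than 0.5 percent) can be expressed as a simple check. The function below is a hypothetical illustration of that criterion, not part of any required deliverable:

```python
def rr_code_passes(maneuvers_detected, maneuvers_flown,
                   damage_predicted, damage_actual):
    """Hypothetical pass/fail check mirroring the thresholds in this topic:
    capture at least 97% of maneuvers, or under-predict the fatigue damage
    fraction of life-limited parts by no more than 0.5 percentage points."""
    capture_rate = maneuvers_detected / maneuvers_flown
    under_prediction = max(0.0, damage_actual - damage_predicted)
    return capture_rate >= 0.97 or under_prediction <= 0.005

# Example: 96% capture, but damage under-predicted by only 0.2 points -> passes
print(rr_code_passes(96, 100, 0.148, 0.150))
```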
Ultra-High Temperature (UHT) Sensor Technology for Application in the Austere Environment of Gas Turbine Engines
Current blade health monitoring sensors are capable of operating at 1100°F continuously uncooled, and have been demonstrated to work up to 1800°F with cooling. Use of compressor air for sensor cooling would adversely impact cycle efficiency and potentially produce case distortion; hence, a need exists to develop uncooled sensors that can operate in a +2500°F environment in the aft end of the turbomachinery. Design progression also points to a high-temperature need in the compression system. Current developmental and future engines for Department of Defense (DoD) propulsion systems will need to operate at high efficiencies to meet mission-weighted fuel burn (MWFB) requirements. This means that engines operate with tighter running clearances, increased blade loading, and at substantially higher temperatures. As these designs have matured, high cycle fatigue (HCF) failures have become a more common problem. The engine structural integrity program (ENSIP) criteria (MIL-HDBK-1783B, A.4.13.3) were modified to state that components shall have a minimum HCF life of 10^9 cycles (up from 10^7) and must be designed to 40% of the endurance limit (down from 60%); however, these contemporary design standards still lack the ability to mitigate HCF failures. As a result, broader design margins are required to mitigate safety risks. Mission requirements for enhanced propulsion system performance and reduced fuel burn present significant design challenges for original engine manufacturers (OEMs). An effective method to increase performance while decreasing weight and specific fuel consumption (SFC) is to dramatically increase compression ratio and use fewer, more efficient stages. These increased compression ratios directly translate to increased temperatures, which exacerbate the HCF issues.
There is a need to understand the temperature environment in which these airfoils operate to better assess the impact to dwindling HCF margins due to thermal effects on fatigue and endurance capabilities. Current developmental and future engines are expected to see increased creep incidents due to rising engine stresses and temperatures. With larger stresses present inside engines, creep can occur at lower temperatures (cold creep) and may become more prevalent in the cold sections of engines, although creep has traditionally been classified as a hot section issue. There exists a need to understand the impact of degradation mechanisms such as HCF, cold creep, and others on material capabilities during engine operation. Important aspects of a sensor system for this environment are footprint, overall ruggedness, and practicality of the system-engine interface. An onboard diagnostic system must be of compact geometry and maintain a minimum weight. Should a system utilize line-of-sight probes requiring ports through the engine casing, said ports should also be minimized in size and number. An onboard system will be subjected to engine vibration, debris in the gas path, and possible perturbations from bypass air (on a turbofan engine). The vibration experienced by the engine case and the rotor is not necessarily in unison, so the system must be able to discern blade vibration due to foreign or domestic object damage (FOD or DOD) or HCF from other vibratory noise such as that from relative motion between case and rotor. Ideally, the system must be structurally ruggedized to withstand the vibrations and loads of an operating aircraft, debris that may be in the flow path, and the high-speed airflow. Currently we work with probes that are 1/4 to 1/2 inch in diameter and just over 1 inch in length.
There is no current upper bound on weight; however, one must consider that weight is at a premium in aircraft design, and the more lightweight a design is, the more attractive it is as a solution. These probes penetrate engine cases, and today we would like to see innovation in decreasing these penetration sizes. Of course, an innovative design proposal that challenges our concepts of probe construction and functionality is welcomed. It is highly recommended, though not required, that collaboration with original equipment manufacturers be maintained throughout this development. PHASE I: Design and demonstrate the feasibility of a UHT in-situ engine sensor system at temperatures in the region of 2500°F and as cited in the Description section. Proof of concept demonstrating the system’s ability to identify and report airfoil high cycle fatigue and FOD/DOD events in real time, at elevated temperatures, is necessary. Operating in austere engine environments should be given serious consideration in this phase. PHASE II: Based on the Phase I effort, further develop and test the in-situ engine sensor system prototype. Rig testing should be conducted to elevate the technology readiness level (TRL) to an appropriate level for a representative demonstration (a minimum TRL of 4 is expected). The system should ultimately demonstrate robustness in design and function for successful operation in a representative environment. A conceptual design for the final system configuration should also be completed. PHASE III: Ruggedize the design and perform required operational testing. Commercialize and transition the developed technology to appropriate Navy platforms.
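As a point of reference, the revised ENSIP design criterion cited above (design to 40% of the endurance limit, down from 60%) translates directly into an allowable vibratory stress. The sketch below illustrates the arithmetic; the 60 ksi endurance limit is purely an illustrative material value, not a figure from this topic:

```python
def hcf_allowable_stress(endurance_limit_ksi, fraction=0.40):
    """Allowable vibratory stress under the ENSIP change noted above
    (MIL-HDBK-1783B): design to 40% of the material endurance limit,
    down from the previous 60%."""
    return fraction * endurance_limit_ksi

# Illustrative: a 60 ksi endurance limit yields a 24 ksi design allowable
# under the revised criterion, versus 36 ksi under the old 60% rule.
print(hcf_allowable_stress(60.0))        # 24.0
print(hcf_allowable_stress(60.0, 0.60))  # 36.0
```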
Miniaturized, Fault Tolerant Decentralized Mission Processing Architecture for Next Generation Rotorcraft Avionics Environment
Most avionics systems for rotorcraft currently rely on a federated mission computer/processing architecture which centralizes the aggregation of data for processing and subsequent Human Machine Interface (HMI)/subsystem transmission. Current rotorcraft federated architectures habitually claim redundancy by having a secondary processing computer that is either fully capable or has a reduced situational awareness (S/A) capability, giving the operators a 'fly home' mode upon failure of the primary processing unit. This architecture has resulted in the development of high-cost mission computers, $200K-$400K/unit, requiring multi-million dollar investments by the government. This centralized processing system worked in the past as power usage was lower; however, newer systems using commercial off-the-shelf (COTS)/government off-the-shelf (GOTS) processing boards in a 3U and 6U format are high speed/high power, creating the need for heat rejection >200W over a relatively small surface area, which is expected to increase as more video-intensive processing is introduced with the Digital Terrain Elevation Data (DTED) II/III based tracking and digital map systems. As part of this effort, the integration of COTS/GOTS systems vs. custom design needs to be investigated to optimize the processing profile. This system should be fault tolerant, capable of losing up to 50% of the processing nodes while maintaining full situational awareness (S/A) across 4 Extended Graphics Array/High Definition (XGA/HD) (720 and 1080p) displays, processing map, digital video, and A/C sensors. Additionally, it should have a nodal processing system of at least 3 nodes with a documented expandability limit and a unit cost 20% or less of existing mission computers, which cost $200,000 - $300,000. Another systems integration point to consider is that the majority of aircraft avionics systems are designed for a 1553B daisy chain architecture for digital data transfer.
New, higher speed systems rely more on a hub/spoke architecture with designated relay points (i.e., Ethernet), which drives physical integration issues based on the eight (8) wires necessary for connection between each node vs. two (2) wires for legacy wired communications systems, i.e., 1553 and serial. Modular Open Systems Architecture (MOSA) and the Future Airborne Capability Environment (FACE) will be required for the majority of avionics components in the future. While a MOSA computing environment should be achievable, there has been some debate about the ability to utilize an abstraction layer and maintain a true real-time processing environment. This issue should be addressed in the design of this system, identifying roadblocks to full FACE conformance and potential mitigations. This architecture could be utilized as a host architecture/framework for next-generation jet engine controls, serving as the enabling function for a dynamic distributed controls system if the flight-critical level of reliability and signal integrity is achieved during this effort. PHASE I: Determine the technical feasibility of a distributed avionics architecture, identifying any technological breakthroughs necessary to meet the high-uptime system requirements and distributed processing stability. PHASE II: Produce a prototype mission processing architecture validating the proposed design from Phase I that is capable of being certified for flight. The actual certification process need not be completed for this phase, but a high degree of confidence is required that the hardware qualification needed for a Safety of Flight or Mission Critical System will be achieved. Demonstrate full functionality, automatically without operator intervention, when one of the nodes is disabled.
PHASE III: Produce a set of Production Representative Units (PRUs) (pre-Low Rate Initial Production (LRIP) units) for retrofit integration into an AH-1Z, utilizing the existing monolithic software running on an existing rotorcraft platform and demonstrating the ability to process the existing software in at least three separate, fully redundant nodes. Commercial applications, such as in the automotive and manufacturing industries, will also continue to be developed.
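The fault-tolerance requirement above (a minimum of 3 nodes, with full situational awareness maintained after losing up to 50% of them) can be sketched as a simple availability check. The function and its name are hypothetical illustrations of the stated requirement:

```python
import math

def remains_fully_capable(total_nodes, failed_nodes, min_nodes=3):
    """Hypothetical availability check for the requirement above: the
    architecture starts with at least 3 nodes and must keep full
    situational awareness after losing up to 50% of them."""
    if total_nodes < min_nodes:
        raise ValueError("architecture requires at least 3 processing nodes")
    tolerable = math.floor(total_nodes * 0.50)
    return failed_nodes <= tolerable

print(remains_fully_capable(4, 2))  # True: a 50% loss is tolerated
print(remains_fully_capable(4, 3))  # False: more than 50% lost
```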
Low Emissions Waste to Energy Disposal
Island bases and other remote forward operating bases (FOBs) have limited land and energy resources to dispose of municipal solid waste (MSW). Open-air pits are discouraged and congressionally required to be nearly eliminated. Due to the high volume of generated MSW and the limited amount of real estate, landfills and bio-digestive approaches are impractical. Incinerators currently being used require expensive, complex scrubbers in order to meet air quality standards. Existing incinerators in use also consume excessive amounts of fuel (diesel or JP8) and require waste characterization and sorting to ensure proper operation. On island bases and other remote FOBs, fuel is costly to import - not only in monetary value, but in manpower and lives as well. Current DoD waste disposal practices for contingency bases involve trucking away waste or bringing in additional fuel to burn the waste, which adds to the transportation burden and increases risk to personnel. An Army Environmental Policy Institute (AEPI) study reported that a soldier or civilian was wounded or killed for every 24 fuel resupply convoys in Afghanistan during FY 2007. Thermal approaches to MSW disposal are sought that would generate energy in the form of a fuel, useful thermal energy, or electrical energy. The goal is to achieve net zero consumption of energy in the disposal of MSW while meeting air quality standards (see Ref 6). However, at a minimum, the results of this project must show quantifiable improvement in energy consumption. The new incinerator system must be simple to operate and maintain in all climate conditions. The system must be able to be set up and operational within 24 hours. Fuel or energy generation must be produced within 24 hours of operation. Typical thermal approaches that may be considered in developing the incinerator system include but are not limited to: combustion, gasification, pyrolysis, and thermal depolymerization.
Plasma arc gasification concepts should address the energy intensity of the process and the simplicity and robustness of the hardware to be used. Bio-approaches are not ideal but will be considered. Any bio or chemical system must be robust and capable of functioning in all global climate conditions. GENERAL SPECIFICATIONS: The proposed incinerator system, when developed into a working system, should meet the following logistics footprint and capacity: For transportability to remote island bases or FOBs, as well as ease of assembly, the system should be contained in TRICON-size containers that can be reassembled and dismantled like modular building blocks. Each TRICON must weigh no more than 10,000 lbs, the maximum capacity of the material handling equipment within the Naval Mobile Construction Battalion’s Table of Allowance. The system should be contained in no more than eight TRICON containers. The proposed incinerator system should be able to deliver at least a 95% reduction in volume of waste and be flexible in handling solid waste to include food, waste oil, and damp wood or vegetation. The system should be able to handle at least 1,200 lbs of waste a day and be able to be operated with a minimum of two personnel. Effluents and any char from the process need to be environmentally safe for easy disposal. The system must be able to be set up and operational within 24 hours. Fuel or energy generation must be produced within 24 hours of operation. PHASE I: Determine the feasibility of developing a portable incinerator system capable of MSW disposal with the goal of achieving net zero energy consumption while meeting air quality standards. Provide simulation and design plans for fabrication of a working prototype waste disposal incinerator system. A laboratory-scale demonstration would be desirable but is not required as a Phase I deliverable. PHASE II: Fabricate and demonstrate a fully functioning incinerator system prototype with measurable energy consumption improvement.
The measurable improvement should be close to or at the goal of net zero energy consumption. Air quality will be measured to quantify emissions. The prototype system should be delivered, sized, and fitted into TRICON containers, meeting the specifications discussed in the Description section. PHASE III: Based on the results of Phase II, the small business will manufacture an incinerator system with measurable energy consumption improvement close to or at the goal of net zero energy consumption and transition the system for Navy use in an operationally relevant environment. The small business will support the Navy with testing and validation of the system to certify and qualify it for Navy use. A system capable of handling a small 150-man camp will be field tested and evaluated. Standard MSW will be consumed with a targeted >95% reduction in volume. Portability and ease of setup will be evaluated. The primary application will be fixed facilities at remote Naval locations. Simple system operation and maintenance will also be considered in evaluating possible wider DoD implementation.
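The GENERAL SPECIFICATIONS above reduce to a small checklist. The following sketch (a convenience illustration only; all limits are taken directly from the text) shows one way a proposer might self-check a candidate design against them:

```python
def meets_general_specs(volume_reduction_pct, throughput_lb_day,
                        tricon_count, max_tricon_weight_lb,
                        operators, setup_hours):
    """Checklist of the GENERAL SPECIFICATIONS quoted in this topic:
    >=95% volume reduction, >=1,200 lbs of waste per day, no more than
    eight TRICONs at 10,000 lbs each, two operators, 24-hour setup."""
    return (volume_reduction_pct >= 95
            and throughput_lb_day >= 1200
            and tricon_count <= 8
            and max_tricon_weight_lb <= 10_000
            and operators <= 2
            and setup_hours <= 24)

# A candidate design: 96% reduction, 1,300 lbs/day, six 9,800 lb TRICONs
print(meets_general_specs(96, 1300, 6, 9800, 2, 20))  # True
```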
Modular Smart Micro/Nano-Grid Power Management System
Microgrids are being considered at DoD installations to better manage energy usage, with the objective of providing better efficiency, reliability, and higher integration of renewable generation such as wind and solar. While the benefits of microgrids are broadly touted, implementation has been slow and complex. A turnkey modular micro/nano-grid controller design is sought that would expedite test and validation of the benefits provided by a microgrid in actual facilities test platforms. A modular approach would also result in standardization and high volume, such that better economy of scale and reliability could be achieved than with low-volume production. The envisioned power controller would be analogous to a network router: modular and expandable, with each controller able to communicate with the others in a mesh-type network with cyber security features. Each power controller would be plug-and-play such that standard types of power sources such as PV and DC battery backup, grid connections, and loads can be detected and managed with minimal configuration. A modular approach would mean that the capacity of each controller may be small, with a few controlling the power on a building, but in sufficient aggregate the mesh of controllers would be able to manage a large base. Power would be transferred between buildings through existing power distribution lines. This would require integrated medium-voltage solid state transformers in the power controller. Proposals should focus on hardware design, cyber security, and networking functionality of the concept in order to achieve desired results. PHASE I: Phase I should address hardware, cyber security, and mesh network algorithms. Determine feasibility and develop a concept for a solid state power converter capable of bidirectional power conversion with connection directly to the distribution line (4kV-7.5kV).
The hardware should meet safety requirements for connection to the utility grid at medium voltage and be islandable. Cyber security solutions should address implementation of metering and controls that meet the DoD Information Assurance Certification and Accreditation Process. Also identify a design and concept for a modular hardware architecture to house the controls that would be universal and compatible with third-party microgrid converters. Approaches for mesh network algorithms should be developed, and simulation of aggregate functionality performed, to demonstrate the feasibility of intelligently managing load and generation across a base using 30kW modular controllers on each building. PHASE II: Develop and integrate hardware, software, and cyber security into a Modular Smart Microgrid Power Management System prototype connected to an existing base power grid. Prototype hardware delivered will include a microgrid controller with networking capability, integrated cyber security features, and 4 ports for grid connection, PV generation, DC battery backup, and 120VAC loads. The hardware shall be tested and validated for the capability to connect at medium distribution voltages. PHASE III: Conduct a full-scale operational demonstration of micro-grid capability using modular controllers in a mesh network between several buildings through existing distribution lines. Assist the Navy sponsor in transitioning a final-design functioning Modular Smart Microgrid Power Management System into designated base power grids.
Cooled BusWork for Shipboard Distribution and Energy Storage
Improvements in power density, power quality, and efficiency in power and energy management and control are needed by the Navy to meet power and energy demands and allow for future mission growth. The Navy is seeking to foster the development of common, affordable electrical components and systems that could have broad application to ships. Electrochemical storage (battery) cells have designs which do not lend themselves to effective thermal management. For planar and cylindrical cells, the axes which offer the maximum surface area are those which offer the greatest thermal resistance. This is due to the nature of the design, which is effectively a layered structure of polymer, metal, and some thin chemical coatings, all wetted in an organic electrolyte. Developing a cooling system acting in the in-plane direction of the conductors may result in better thermal regulation through the use of tab and bussing connections, as well as vents, sensors, and other features which are often tied to the cell ends, particularly by leveraging thermal regulation internal to the cells under high rate transient conditions (ref #1). Cooling of the out-of-plane surfaces would likely still be used in combination with the innovative new system. The objective of this effort is to optimize use of the busbar for cooling, given its intimate contact with the conductors internal to the cell, as well as its mass and the necessity for it to be routed throughout battery modules for electrical conductivity (ref #2). Typically, batteries are electrically connected to one another through busbars or similar buswork, and the main purpose of the busbars is to conduct current with minimal resistance. In this work, however, an advanced busbar providing innovative thermal control and enhanced electrical performance is desired. The system will have to include the capability to circulate a cooling medium for optimized thermal control.
Buswork developed should allow for integration on any battery module; thus, the technologies proposed should be scalable from small 10Ah-type cells through 60+Ah large-format cell designs. Proposed approaches can be passive or active in operation and must consider the flow of cooling media and electrons in terms of media selection, system continuity, and failure modes. Technologies developed for this specific application will also be explored for applicability to electric power distribution buswork, such as that used in switchboards. Technologies proposed under this effort should not contain precious or hazardous materials, nor require significant deviation from a typical battery system design (such as cells placed in a geometric array and connected in series). It is optimal for these devices to operate such that chilled water is not required, though glycol/water mixtures can be assumed to be available at 40 degrees C, with sufficient flow available to meet mission needs. Ambient spaces should be assumed to be up to 60 degrees C, and battery maximum operational temperatures should also be assumed to be 60 degrees C. In order to maintain density of the energy storage devices, the proposed approaches should not increase the cell length by more than 50% beyond the commonly accepted value for bussing for a specific ampacity associated with the cell’s maximum rating. If a special cooling fluid is to be utilized, the interface to shipboard cooling fluid should be considered, along with the impact on efficiency and device/system density and packaging size. The end intent is to utilize this design as part of a tightly-packaged battery system, complete with thermal management, monitoring, fire suppression, and requisite safety devices; the innovative approach proposed under this solicitation would serve as the connectivity between cells in battery modules of different sizes and ratings.
Proposed buswork concepts should meet the following thresholds (deliverable design characteristic: value):
• Chemistry: Li-ion
• Cell Capacity: 20-30Ah, scalable
• Cell Form Factor: Cylindrical or Prismatic (pouch or hard cell)
• Cell Case Polarity: Case positive, neutral, or negative
• Operational Rate: Continuous >15 C-rate
• Design: Modular; combine in series via rack mount to obtain system interface
• Module Packaging: Metal or Polymeric Exterior
• Module Voltage: ≥48 VDC
• System Voltage: ≥1000 VDC
• Voltage Isolation: >2000 VDC
• Ambient Conditions: 0-60°C air
• Coolant Media: 0-35°C Seawater and/or 5-40°C Coolant (50/50 Propylene Glycol/Water)
• Volumetric Penalty: ≤50% larger than appropriate copper busbar
• Management: Battery Monitoring System (BMS) capable of temperature and voltage cut-out
• Isolation: Contactor and Fuse
• Safety Process: NAVSEAINST 9310
• Shock*: MIL-S-901D
• Vibration*: MIL-STD-167-1A
• Transportability*: MIL-STD-810G
(* = Design to this attribute)
PHASE I: The company will demonstrate the feasibility of the concept in meeting Navy needs for an innovative modular bus bar cooling system and will establish that the concept can be feasibly developed into a useful product for the Navy. The company will prove the concept of external, axial heat transfer in-plane with the current flow and identify performance advantages. Proof of concept will be demonstrated on a small cell or cells, which can be cycled in a manner suitable to create sufficient internal heating, and compared to a control approach. The proof of concept should also be demonstrated on cycling cells or integrated into a battery versus a control of the same design. PHASE II: Based on the results of the Phase I effort, the small business will develop a Phase II prototype for evaluation.
The prototype will be evaluated to determine its capability in meeting the performance goals defined in the Phase II Statement of Work (SOW) and the Navy need for improved thermal management, in this case via an innovative modular bus bar cooling system for energy storage with high rate heat removal that leverages the thermal mass and conductivity of bussing systems. The company will demonstrate the cooling device supporting an arrangement of cells at the 48V module level. The prototype design should support no less than 10Ah (threshold), 30Ah (objective), and should show applicability to various cell geometries and battery architectures. The company will deliver a minimum of five of these prototypes to the Navy for evaluation. The company will perform detailed analysis to ensure materials are rugged and appropriate for Navy application. Environmental, shock, and vibration analysis will also be performed. PHASE III: The company will apply the knowledge gained in Phase II to build an advanced module, suitably packaged with 1000 VDC strings of cooled batteries, including a battery management system, and characterize its performance at high discharge rates as defined by Navy requirements. Working with the Navy and applicable industry partners, demonstrate the bus bar cooling system implemented within a shipboard and/or land-based test site to support energy storage or other applications. The company will support the Navy for test and validation to certify and qualify the system for Navy use. The company shall explore the potential to transfer the bus bar cooling system to other military and commercial systems (electric grid, electric vehicles). Market research and analysis shall identify the most promising technology areas, and the company shall develop manufacturing plans to facilitate a smooth transition to the Navy.
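To illustrate the heat loads involved, ohmic dissipation in a busbar link follows the standard relation P = I²R. The sketch below derives the current from the threshold values listed in this topic (a 30 Ah cell at a continuous 15 C-rate draws 450 A); the 50 micro-ohm link resistance is an assumed illustrative figure, not a requirement:

```python
def busbar_joule_heat_w(current_a, resistance_ohm):
    """Ohmic heat generated in a busbar segment, P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

# 30 Ah cell x 15 C-rate = 450 A continuous (threshold values from this topic).
# An assumed 50 micro-ohm link then dissipates ~10 W that the cooled buswork
# must remove, in addition to heat conducted out of the cell itself.
print(busbar_joule_heat_w(450, 50e-6))  # ~10.1 W per link
```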
Navy Air Cushion Vehicles (ACVs) Lift Fan Impeller Optimization
The Ship-to-Shore Connector (SSC), a replacement hovercraft for the existing fleet of Landing Craft, Air Cushion (LCAC) vehicles, utilizes a lift fan system to discharge air into the craft’s skirt and bow thrusters to lift the hovercraft under normal operation. Each SSC utilizes two identical lift fans, each defined by an impeller and a volute assembly. Each lift fan impeller includes a center disk, blades which are attached to both sides of the center disk, and two outer shrouds. Lift fan impeller blades are removable and replaceable without requiring the disassembly of the lift fan system. There is no commercial hovercraft of this scale and payload capacity. The current SSC lift fans meet craft performance requirements, but the SSC Program Office seeks the development of an advanced lift fan that increases fan efficiency by at least 10% while achieving minimal noise levels (125 dB or less). Improvements in fan efficiency will increase fuel economy. SSC lift fan performance will be enhanced when impeller blade characteristics are optimized through their entire length. Current impeller blade characteristics, such as shape and structure, affect aerodynamic losses and the efficiency of the impeller. However, studies suggest (Ref. 1) that conventional design methods, such as the streamline curvature of the fan blades, do not adequately address aerodynamic improvement. In addition, the extrusion manufacturing process used to form impeller blades also inhibits the development of an optimum shape. Each impeller blade includes a large attachment foot on each end of the blade which disturbs flow and reduces efficiency and mass airflow. Each inner lift fan impeller blade foot is attached to the fan center disk through a series of bolted fasteners, and each outer lift fan impeller blade foot is attached to one of the two outer shrouds through a series of bolted fasteners.
Although it has undergone dimensional modifications and other changes, the foot-to-blade transition area continues to negatively impact the aerodynamics within the lift fan system. This foot-to-blade transition promotes flow separation along the blade and may change the angle at which the airflow is directed into the bow thruster and skirt supply ducts, which reduces the efficiency of the fan by approximately 3-4% (Ref. 1). Extending the blades radially outward and away from the fastened area, or a design which removes the footing while maintaining structural integrity, are two ways the lift fan system could achieve greater efficiency. Sweeping the blade shape may also improve efficiency (Ref. 2); however, areas that may be affected by any changes to blade design include the blade leading and trailing edges, maximal flow, and direction of flow. There are few commercial alternatives to a Navy-grade centrifugal fan applied to an air-cushion system. Due to this limited availability, all performance aspects will be relative to legacy craft data and projected SSC estimates. The overarching goals of this effort are the development of a cost-effective advanced lift fan blade design and manufacturing process that will increase lift fan efficiency by at least 10% to optimize SSC fuel efficiency and reduce noise; extend SSC mission range; and minimize SSC production and life cycle costs. In addition, the Navy requires the removal and replacement of fan blades without the removal of the impeller from the SSC. The advanced lift fan impeller must meet the Navy’s fan specification (Ref. 3). PHASE I: The company will define and develop a concept for a lift fan impeller that meets the requirements stated in the Description section above. The company will demonstrate the feasibility of the concept through aerodynamic modeling and analysis and show that the concept will provide a cost-effective lift fan for the Navy with improved fan efficiency and fuel economy.
The company must also demonstrate the manufacturability of the fan. The concept must provide the capability for impeller blade removal and replacement without removing the impeller from its installed location on the craft. PHASE II: Based on the results of the Phase I effort and the Phase II Statement of Work (SOW), the company will develop a prototype lift fan impeller for evaluation. The prototype will be evaluated to determine its capability in meeting the performance goals defined in the Phase II SOW and the Navy fan specification for efficiency, noise, and vibration. System performance will be demonstrated through installation and testing on the SSC and by modeling and analysis. The fan must demonstrate increased fan efficiency (by at least 10%) and a reduced noise level (below 125 dB). The prototype will also need to be evaluated to ensure individual lift fan impeller blades can be removed without removing the impeller from its installed location on the craft. Evaluation results will be used to refine the prototype into an initial design that will meet the SSC Craft Specifications. PHASE III: The company will be expected to support the Navy in transitioning the lift fan impeller to Navy use on the SSC. The company will finalize the design and fabricate a production prototype lift fan impeller, according to the Phase III SOW, for evaluation to determine its effectiveness in an operationally relevant environment. The company will support the Navy for test and validation in accordance with SSC Craft Specifications to certify and qualify the system for Navy use and for transition into operational SSCs. Following testing and validation, the end design is expected to outperform the current SSC lift fan with regard to fan efficiency, air flow, and noise reduction.
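To illustrate what the 10% efficiency target buys in fuel terms, note that the shaft power needed to deliver a fixed air power to the skirt and bow thrusters scales inversely with fan efficiency. The baseline efficiency and air power below are assumed values for illustration only, not program figures:

```python
AIR_POWER_KW = 1000.0  # assumed constant lift air power demand (illustrative)
ETA_BASELINE = 0.75    # assumed current fan efficiency (illustrative)
ETA_IMPROVED = ETA_BASELINE * 1.10  # 10% relative improvement target

shaft_power_baseline = AIR_POWER_KW / ETA_BASELINE
shaft_power_improved = AIR_POWER_KW / ETA_IMPROVED

# Fractional reduction in shaft power (and, to first order, fuel burn)
saving = 1.0 - shaft_power_improved / shaft_power_baseline
print(round(100 * saving, 1))  # 9.1 (% less shaft power for the same lift)
```

Whatever the true baseline efficiency, a 10% relative gain yields roughly a 9% reduction in shaft power for the same lift, which is the mechanism by which the topic's efficiency goal translates into fuel economy and mission range.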
Amphibious Combat Vehicle Ramp Interface Modular Buoyant Kit (MBK) for Joint High Speed Vessel (JHSV) Stern Ramp
The United States Marine Corps has advised the Navy that it needs to develop a lightweight kit that can be readily attached to the JHSV’s stern cargo ramp so that when the ramp is lowered directly into the water it would allow Amphibious Assault Vehicles (AAVs) and Amphibious Combat Vehicles (ACVs) to be launched and retrieved from the JHSV near the shore (splash-off). The Marine Corps needs a high-speed shallow-draft connector that can launch a dozen or more AAVs/ACVs within three miles from shore. These amphibious vehicles do not carry sufficient fuel to carry out their assigned mission ashore if they are launched from deep-draft Amphibious Landing Ships positioned at a long standoff distance. The Navy is seeking concepts that do not require significant modifications to the ramp structure, but should rely instead on lightweight discrete systems to protect the ramp from sea state-induced motion and facilitate desired transfer operations. The Navy intends to develop a kit that will permit the launch and retrieval (L&R) of amphibious vehicles close to shore from shallow-draft high-speed JHSVs. The Navy seeks the development of an innovative compact lightweight MBK that can be stored inside the JHSV’s vehicle bay. The MBK will provide flotation for the current JHSV stern ramp to facilitate launch and retrieval of the USMC ACVs and AAVs (Ref 1) through full Sea State 3 (SS3), with significant wave heights up to 1.25 meters, without damaging the ramp. The primary purpose of the MBK will be to protect the ramp from sea-induced relative motions. The MBK also needs to be designed to accommodate the weight of the future ACV, which is projected to weigh approximately 30 short tons and will be splash-launched directly into the sea via the stern cargo ramp and then driven back up the ramp from the sea and aboard the JHSV. The current stern cargo ramp installed in the JHSV has had structural failures when operated above its specified design condition.
The current JHSV stern cargo ramp is designed to support vehicle transfer through SS1, flat seas up to 1 ft (Ref 2), during L&R (Ref 3). Torsion is particularly damaging to the JHSV ramp and its connections to the JHSV structure during at-sea transfers. JHSV or receiving platform motions may impart dangerous accelerations to vehicles transiting the ramp. MBK development should focus on satisfying requirements in a cost-effective manner. The MBK system should be portable enough to store in a Twenty-foot Equivalent Unit (TEU), meeting TEU tare weight restrictions of 47.6k lbs, and be easily transferred from the vehicle bay to the stern “porch” with a standard shipboard cargo-movement forklift or similar gear. While the JHSV slows to almost a complete stop prior to unfolding and deploying the ramp, the MBK must be designed so that it can be prepared for use by shipboard personnel while the JHSV is underway at speeds of up to 20 knots. As the ramp is being unfolded, the MBK should be designed to automatically position itself beneath the ramp prior to the ramp being lowered into the water. The MBK must be able to dynamically compensate for motion in SS3 while in use to ensure safe L&R. The MBK needs to be designed to operate in SS3 for the worst-case condition in which the vehicle is driven along the edge of the ramp instead of straight down the middle of the ramp. It needs to be able to dynamically compensate for SS3 conditions using buoyancy, configuration, and other innovative techniques to prevent damage to the JHSV and the ACVs or AAVs during L&R. The addition of the MBK should also have little to no effect on ramp deployment and retrieval time. This will allow deployment of amphibious combat vehicles from the much more affordable JHSV platform rather than the traditional delivery by LPD 17 and Landing Craft, Air Cushion (LCAC).
PHASE I: The company will develop concepts for an MBK to enable the JHSV stern ramp to be used to launch and retrieve amphibious vehicles meeting the requirements described above. The company is expected to demonstrate the feasibility of its concepts through dynamic modeling and simulation to show that the concepts avoid damage to the JHSV and amphibious vehicles during splash-launch and recovery of heavy 30 ST loads in Sea State 3. Feasibility demonstrations must show that ramp structural design limits are not exceeded and that the cost of the MBK is affordable to the Navy: not more than $250K; however, the Navy seeks the most affordable solution capable of meeting the technical requirements. PHASE II: Based on the results of the Phase I effort and the Phase II Statement of Work (SOW), the company will develop a prototype MBK for evaluation. The prototype will be evaluated by scale model test, modeling, and simulation to determine its capability in meeting the performance goals defined in the Phase II SOW and the Navy requirements listed in the description for an innovative compact lightweight modular MBK. The MBK prototype will be tested in the laboratory on a motion platform to demonstrate that the MBK can accommodate amphibious launch and recovery in Sea State 3 without exceeding the design limits of the JHSV and without damage to the JHSV, the amphibious vehicles, or the MBK. PHASE III: The company will be expected to support the Navy in transitioning the technology for Navy use. The company will finalize the design and fabricate a full-scale production prototype for the JHSV stern ramp, according to the Phase III SOW, for evaluation to determine its effectiveness in an operationally relevant environment. If the MBK successfully completes the relevant environment evaluation, the Navy expects the company to support full-scale test and validation during a Fleet Experiment to certify and qualify the system for Navy use.
Modular Boat Ramp to Launch and Retrieve Watercraft from Joint High Speed Vessel (JHSV)
The JHSV’s boat crane does not have the requisite man-loading safety factor needed to allow the boat crew to remain on board during L&R. In order for small boats to debark from the JHSV, they must enter the water using the boat crane without the crew on board, and then the small boat must be positioned alongside the JHSV in a coordinated effort by the crane operator and members of the ship’s crew using tag lines. After the small boat is secured alongside the JHSV, a Jacobs Ladder must be rigged between the JHSV and the small boat in order for the boat crew to climb into the small boat and depart. In order to embark a small boat aboard the JHSV after returning from a mission, the crane operator and members of the ship’s crew using tag lines must again position and secure the small boat alongside the JHSV and rig the Jacobs Ladder so that the boat crew can climb back aboard the JHSV. This operational limitation of not allowing the crane to lift the boat while its crew is aboard creates a safety concern whenever members of the boat crew are transferring between the JHSV and their small boat using a Jacobs Ladder in Sea State 3 or greater. This crane man-loading restriction not only creates an operational safety concern but also significantly slows the rate at which small boats can be launched and retrieved from the JHSV. Therefore, approaches that are safer and support higher rates of small boat launch are being sought. The JHSV has a requirement to rapidly L&R a variety of manned and unmanned watercraft in full Sea State 3 (SS3) conditions with wave heights of 3-5 feet (ref #1). The manned and unmanned vehicles include 11-meter Rigid Hull Inflatable Boats (RHIBs), 40-foot High Speed Boats (HSBs), SEAL Delivery Vehicles (SDVs), Unmanned Underwater Vehicles (UUVs), and 40-foot long-endurance drones.
These watercraft have large length-to-beam ratios, making them susceptible to spinning about the axis formed between the crane’s boom tip and hook, caused by motions imposed upon the JHSV by the sea. L&R at this scale is not performed commercially, and no commercial system launches craft of this weight at high volume. The Navy has determined that the existing crane installed aboard the JHSV should only be used for moving cargo aboard the JHSV from one place to another and that a different approach needs to be undertaken for watercraft L&R. To that end, this topic seeks to develop an approach for watercraft L&R that does not rely upon using the crane installed aboard the JHSV. An approach involving the development of an auxiliary ramp is being sought because that approach appears to be both safer and capable of providing faster L&R of watercraft. If successful, the Navy will either fund a ship alteration to modify the current stern cargo ramp or fund a ship alteration to develop and install an additional lightweight modular boat ramp between the existing crane and the cargo ramp for L&R of watercraft from the JHSV. The operational safety and the rate at which L&R can be performed become much worse as the Sea State increases from SS0 through SS3. The crane installed on board the JHSV for this purpose has not successfully demonstrated L&R of watercraft above Sea State 2. The Navy is seeking an innovative new Modular Boat Ramp (MBR) system that could be situated on the stern of the JHSV (ref #2). This system must be able to L&R manned 11-meter RHIBs, SDVs, or HSBs through SS3 (ref #3), allowing the boat crew to remain on board during the entire L&R cycle. This system should be readily adaptable for L&R of Unmanned Surface Vehicles (USVs) and UUVs in full SS3. The L&R system should be designed so that it can accommodate the launch or retrieval of all small craft less than or equal to 40 feet in length in less than 15 minutes in SS3.
When the system is fully operational, subsequent small craft should be able to be launched every 10 minutes and retrieved every 20 minutes. The maximum dimensions of the small boats that the JHSV shall accommodate and L&R are as follows: a. Length: 12.32 m (40.41 ft.), b. Width: 2.74 m (9.00 ft.), c. Height: 2.72 m (8.92 ft.). The design and fabrication of the system concept must be compatible with an all-aluminum vessel, comply with all applicable safety standards for manned boat operations, be readily stowed in the JHSV’s Vehicle Mission Bay (VMB), and be easily transferable from the VMB at a location near the back porch area to facilitate temporary installation of the system near the stern of the JHSV. This effort also requires technologies that facilitate command and control of the system by both ship’s force and the crew aboard the small boat. This L&R system must also provide a manual backup mode of operation permitting at-sea capture and release of the small craft from the JHSV. PHASE I: The company will develop a concept for a Modular Boat Ramp (MBR) system that meets the requirements described above. The company will conduct feasibility studies, including hydrodynamic assessments of the system, and demonstrate the feasibility of the operational Launch and Retrieval (L&R) system concept via physics-based modeling and simulation. Feasibility studies must show that the MBR is affordable for the Navy: not more than $125K; however, the Navy seeks the most affordable solution capable of meeting the technical requirements. PHASE II: Based on the results of Phase I and the Phase II Statement of Work (SOW), the small business will develop a prototype MBR for evaluation. The prototype will be evaluated to determine its capability in meeting the performance goals defined in the Phase II SOW and the Navy requirements for the MBR. The MBR must be able to safely launch and retrieve both manned and unmanned vehicles in SS3.
The MBR must be a modular device capable of meeting the L&R times and vehicle size requirements described above in SS3 to sustain the operational tempo of JHSV L&R. System performance will be demonstrated through prototype evaluation and testing, modeling, and analysis. The contractor will evaluate results and refine requirements for the L&R system. PHASE III: The company will support the Navy in transitioning the MBR system for Navy use aboard the JHSV and similar vessels. The company will develop a full-sized MBR system in accordance with the Phase III SOW for evaluation to determine its effectiveness in an operationally relevant environment. The company will support the Navy for test and validation to certify and qualify the system for Navy use.
Innovative Flexible Equipment Support Infrastructure
US Navy Destroyers need an equipment support infrastructure for several shipboard electronics and command spaces for a Common Processing System (CPS). The purpose of the flexible infrastructure (FI) is to provide equipment configuration flexibility and the ability to complete Command Center System modernization and upgrades at reduced cost (by 30 to 60 percent) (Ref 1) compared to fixed-system modifications. Costs are associated with changes in doctrine, organization, training, material, personnel, and facilities. The innovative designs developed will allow the Navy to incorporate a reconfigurable, modular support structure for large cabinets and consoles in the remaining DDG 51 Class ships as well as have it considered for backfit in existing DDG 51 Class ships. DDG 51 Destroyers are continuously upgraded for numerous reasons, including new mission capability, response to new threats, or the addition of advanced technology to increase reliability (Ref 2). These upgrades involve the addition or substitution of new cabinets, displays, and electronics, resulting in long periods in shipyards for re-outfitting of support structures in new configurations. During this time, ships are not available for deployment. An infrastructure that allows equipment cabinets to be moved or replaced without welding would remove significant work from the upgrade package. Support structure cutting and re-welding is labor intensive, time consuming, and very costly. The current FI systems found on other ships are designed with very specific bolt-hole patterns. Presently, DDG 51 Class cabinets and consoles do not align with the existing FI bolt-hole patterns, and the solution so far has been a heavy adapter plate. Although technically feasible, current adapter plates are undesirable because they are very heavy, restrict access for bolting, and protrude beyond the dimensions of the cabinet, causing a tripping hazard and increasing the height of the cabinet.
The total height of the cabinets is limited to 75 inches; consequently, when the FI replaces the existing deck structure, it must not cause cabinet height allowances to be exceeded. Increased height cannot be tolerated without significant ship redesign of the existing deck heights. This topic seeks innovative solutions to produce a light-weight, affordable, and flexible support structure that can accommodate very heavy cabinets and reduce installation costs by more than 30% in comparison to the current legacy structure. Current FI systems used on other ships are designed to handle cabinet loads of 150 pounds per square foot. A significant number of the cabinets on DDG 51 Class Destroyers are heavier than this. Typical shipboard computer racks and cabinets are 19 inches wide by 24 inches deep by 59.5 inches high (Ref 3) and weigh up to 2,000 pounds. While the FI must accommodate the weight of cabinets, it must not add weight to the ship when it replaces the existing deck structure. It is anticipated, however, that by eliminating the adapter plates and potentially using lighter material in the flexible structure itself, the overall flexible system will help reduce the total ship weight. The design of any support infrastructures, adapter plates, or modifications to the ship must meet ship specification requirements for underwater shock (MIL-S-901D), EMI (MIL-STD-461F), and safety (NAVSEA DDS 078-1). Additionally, modifications to the existing ship design need to be minimal in order to prevent increased modification costs of ships to deploy FI on new and existing ships. In addition, the FI must have minimal weight and component costs and be easy to install. PHASE I: The company will develop a concept for an affordable, innovative equipment support structure that will meet the requirements described above.
The company will demonstrate the feasibility of the design concept through material selection and testing, as well as analytical modeling and simulation with anticipated cost analysis. The approach/analysis should demonstrate that the structural design has the capability to meet load requirements at the same or reduced weight compared to the existing deck structure. PHASE II: Based on the results of Phase I and the Phase II Statement of Work (SOW), the company will develop a prototype FI for evaluation. The prototype FI will be evaluated to determine its capability in meeting the performance goals defined in the Phase II SOW and the Navy requirements for a weight-neutral and innovative equipment support structure. The FI will be evaluated in a mock-up of the CPS to ensure that it meets load, re-configurability, height, cabinet mounting, and weight requirements. Evaluation results will be used to refine the prototype FI into an initial design that will meet Navy needs. Transition of the technology for Navy use will include documentation that describes all installation, maintenance, and repair practices and procedures and meets shock, EMI, and safety requirements. PHASE III: The Navy expects the company to support transition of the FI to Navy use aboard DDG 51 Class ships. The company will produce production components for evaluation in the Combat Information Center and the Combat Systems Equipment Room on board a DDG 51 to determine their effectiveness in an operationally relevant environment. Additionally, the company will support the Navy for test and validation to certify and qualify the system for Navy use.
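The mismatch between the legacy FI rating and DDG 51 cabinet weights can be seen with simple footprint arithmetic. The cabinet figures below come from the topic text; this is an illustrative check, not a structural design calculation:

```python
CABINET_WIDTH_IN = 19.0
CABINET_DEPTH_IN = 24.0
CABINET_WEIGHT_LB = 2000.0    # heaviest cabinets cited in the topic
FI_RATING_LB_PER_FT2 = 150.0  # rating of current FI systems on other ships

# Deck footprint of a typical cabinet, in square feet
footprint_ft2 = (CABINET_WIDTH_IN / 12.0) * (CABINET_DEPTH_IN / 12.0)

# Resulting floor loading if the cabinet weight bears on its own footprint
floor_loading = CABINET_WEIGHT_LB / footprint_ft2

print(round(footprint_ft2, 2))  # 3.17 ft^2
print(round(floor_loading))     # 632 lb/ft^2, over 4x the legacy FI rating
```

This is why the existing 150 lb/ft² FI designs cannot simply be carried over: a 2,000 lb cabinet on a roughly 3.2 ft² footprint imposes more than four times the legacy rating, before any shock loading is considered.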
Manufacturing Near-Net-Shape Conformal Electro-optic Sensor Window Blanks from Spinel
Electro-optic sensor windows that conform to the local shape of an aircraft mold-line are desirable for future air platforms to allow for a large sensor angle of regard. Conformal shapes may have little to no symmetry depending upon their location. Spinel is an excellent material candidate as it is both durable and multi-spectral (ultraviolet through mid-wave infrared). Spinel is more erosion resistant than multispectral zinc sulfide, which is another logical candidate for a large conformal window. The availability of blanks currently limits the size and curvature for potential window applications. The objective of this project is to develop and demonstrate new manufacturing processes that provide near-net-shape blanks, improving upon current state-of-the-art processes which start with a thick, planar blank and grind it to the desired shape. Currently, the thickness of the planar blank limits the maximum sag (height of window from highest to lowest point) that can be obtained to approximately one inch. Although actual shapes will be chosen by mutual agreement with the government, it is expected that the approximate conformal shape for Phase II will be 16x16 inches with a sag of six inches. Small-grain spinel is preferred for its enhanced strength. PHASE I: Demonstrate feasibility to manufacture near-net-shape conformal electro-optic sensor window blanks from spinel. To prove feasibility, produce a minimum of two uncracked, fully dense spinel window blanks having final fired dimensions of at least a 6x6 inch footprint, sag of three inches, and thickness of ½ inch. A toroid is a possible demonstration shape. Measure the transmission spectrum of the material using polished flat specimens cut from one of the window blanks. Desired transmission is within 4% of the theoretical value for spinel at a wavelength of 0.63 microns and within 2% of the theoretical value for spinel at a wavelength of 4 microns.
PHASE II: Scale up to produce a minimum of two uncracked, fully dense spinel window blanks having final fired dimensions of at least a 16x16 inch footprint, sag of six inches, and thickness of one inch. The demonstration shape may be a toroid or other free-form shape mutually agreed upon with the government. Measure the transmission spectrum and flexure strength of the material using polished flat specimens cut from one of the window blanks. Desired transmission is within 4% of the theoretical value for spinel at a wavelength of 0.63 microns and within 2% of the theoretical value for spinel at a wavelength of 4 microns. A material strength of greater than 200 MPa is desirable, as measured by ring-on-ring flexure testing. PHASE III: Implement manufacturing processes for commercial production and commence full-rate production in order to support Navy requirements. Assist the Navy in transitioning this technology to identified platforms.
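The "theoretical value" of transmission for an uncoated, lossless window is set by Fresnel reflection at its two surfaces. A minimal sketch of that limit, using an assumed refractive index (spinel is roughly n ≈ 1.72 in the visible; the exact value and the reading of "within 4%" as percentage points are both illustrative assumptions):

```python
def theoretical_transmission(n):
    """Two-surface transmission of a lossless, uncoated window,
    including multiple internal reflections: T = 2n / (n^2 + 1)."""
    return 2.0 * n / (n * n + 1.0)

n_spinel_visible = 1.72  # assumed index of spinel near 0.63 microns
t_theory = theoretical_transmission(n_spinel_visible)
print(round(100 * t_theory, 1))  # 86.9 (% maximum uncoated transmission)

# Reading "within 4% of theoretical" as 4 percentage points gives a floor:
acceptance_floor = t_theory - 0.04
print(round(100 * acceptance_floor, 1))  # 82.9 (% acceptance floor)
```

The useful takeaway is that measured transmission is judged against a Fresnel-limited ceiling well below 100%, so scattering or absorption in the fired blank shows up directly as a shortfall from that ceiling.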
Metrology of Visibly Opaque, Infrared-Transparent Aerodynamic Domes, Conformal Windows, and Optical Corrector Elements
The function of electro-optical sensors is greatly impacted by the window’s properties. Survivability depends on material strength, hardness, and thermal properties. Targeting is limited by the optical properties of the window material. Drag is reduced by aerodynamic shapes. The objective of this project is to create metrology methods and hardware to measure the optical figure and transmitted wavefront error of visibly opaque, infrared-transparent aerodynamic domes, conformal windows, and optical corrector elements to provide feedback for optical figure correction by an optics shop. Example materials include standard grade zinc sulfide (ZnS), hot pressed magnesium fluoride (MgF2), and germanium (Ge). Some materials of interest have negligible transparency below 2 microns, and aspheric shapes made from these materials cannot be measured by any known method today. Possible candidate shapes include toroidal windows, tangent ogive domes, and arch-shaped correctors. Methods capable of measuring objects whose two surfaces deviate more than 5 degrees from parallel could be useful. Potential metrology methods should have a precision of one tenth of the measurement wavelength, or better. PHASE I: Evaluate the feasibility of measuring the optical figure and transmitted wavefront of visibly opaque, infrared-transparent aerodynamic domes, conformal windows, and optical corrector elements with a precision of one tenth of the measurement wavelength, or better. Demonstrate breadboard capability to measure freeform shapes such as a 5 inch diameter x 7 inch tall aerodynamic dome provided by the government. Measurement must produce surface figure and transmitted wavefront maps with a precision of 0.5 micron or better. PHASE II: Improve the metrology technique and hardware developed in Phase I by increasing ease of use, precision of results, measurement speed, and adaptability to different shapes and sizes.
The output of the method must be in a form that provides feedback to an optical polishing shop for figure correction. PHASE III: Implement commercial metrology capabilities. Manufacture an instrument for sale to optics manufacturers to measure visibly opaque aspheric optics. Alternatively, provide a commercial service to measure visibly opaque aspheric optics.
Metrology of Visibly Transparent Large Aspheric Optics
Aspheric optics represent the next generation in electro-optic sensor windows, allowing for windows that conform to the local shape of an aircraft moldline, domes that reduce drag in missiles, and optical elements that correct for distortions produced by conformal windows and aerodynamic domes. The objective of this project is to develop metrology methods and hardware to measure the optical figure and transmitted wavefront error of large conformal windows, aerodynamic domes, and optical corrector elements to provide feedback for optical figure correction by an optics shop. Methodology development could begin with glass or fused silica specimens but is expected to move to spinel by Phase II. Possible candidate shapes include toroidal windows, tangent ogive domes, and arched optical corrector elements. Specimens to be measured are expected to have a footprint up to 24x24 inches. Metrology methods should have a precision of 0.1 micron or better. PHASE I: Evaluate the feasibility of measuring the optical figure and transmitted wavefront of aspheric optics including conformal windows, aerodynamic domes, and corrector optics. Demonstrate breadboard capability to precisely measure (to 0.5 micron) freeform shapes such as a 4x4 inch toroidal window provided by the government. Produce a design capable of measuring large aspheric optics up to 24x24x24 inches. Instruments that operate at visible or one-micron wavelengths are acceptable. The measurement must be capable of handling optics with little to no symmetry. A method capable of measuring objects whose two surfaces deviate more than 5 degrees from parallel could be useful but is not required. PHASE II: Build and demonstrate the instrument designed in Phase I, capable of measuring aspheric optics up to 24x24x24 inches. Specimens for measurement, possibly made of glass or plastic, will be provided by the government. Measurement results must be in a form that provides feedback to an optical polishing shop for figure correction.
The measurement goal is a precision of 0.1 micron or better. The intent is for this instrument to be used on the floor of an optics shop making large aspheric and conformal optics. PHASE III: Implement commercial metrology capabilities. Manufacture an instrument for sale to optics manufacturers to measure large aspheric optics. Alternatively, provide a commercial service to measure aspheric optics.
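Precision figures like "0.1 micron or better" are typically reported as root-mean-square (RMS) and peak-to-valley (PV) statistics of the measured surface-error map, which is also the form a polishing shop consumes. A minimal sketch of that reduction on a synthetic map (NumPy assumed available; the map and grid size are illustrative, not instrument output):

```python
import numpy as np

def figure_statistics(error_map_um):
    """Reduce a 2D surface-error map (in microns) to RMS and PV after
    removing piston (the mean offset), which polishing cannot correct."""
    residual = error_map_um - error_map_um.mean()
    rms = float(np.sqrt(np.mean(residual ** 2)))
    pv = float(residual.max() - residual.min())
    return rms, pv

# Synthetic example: a gentle 0.3 micron bump on a 64x64 measurement grid
y, x = np.mgrid[-1:1:64j, -1:1:64j]
surface = 0.3 * np.exp(-(x**2 + y**2) / 0.2)

rms, pv = figure_statistics(surface)
print(rms < 0.1)  # True: this synthetic map meets a 0.1 micron RMS goal
```

In practice the shop would receive the full residual map for localized figure correction; the RMS and PV numbers are the pass/fail summary against goals like those in this topic.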
Manufacturing of Visibly Transparent Large Conformal Windows
Conformal electro-optic sensor windows are desirable for future air platforms as they maintain the shape of the aircraft moldline and allow for a large sensor angle of regard. Such windows may have little to no symmetry depending upon their location. Spinel is an excellent candidate window material as it is both durable and multi-spectral (ultraviolet through mid-wave infrared). Spinel is more erosion resistant than multispectral zinc sulfide, which is another logical candidate for a large conformal window. The objective of this project is to create methods to grind and polish freeform conformal sensor windows with dimensions up to 24x24 inches, a sag height of approximately 8 inches, and an optical precision of 0.5 micron or better. It is expected that methods will be developed with glass or fused silica and that a full-sized window will be made from one of those materials. It is also possible that a full-sized spinel blank may be available from the government during this project. PHASE I: Demonstrate the feasibility of grinding and polishing a glass or fused silica conformal window with dimensions of 12x12 inches. The contractor shall propose a shape with a window sag height of approximately four inches and lacking rotational symmetry. The precise shape of the conformal window will be selected by mutual agreement with the government. The root-mean-square precision of the optical figure shall be 0.5 micron or less, with a clear aperture extending to within half an inch of the edge. PHASE II: Based on the Phase I effort, scale up window fabrication with glass or fused silica to lateral dimensions of 24x24 inches with a sag of approximately eight inches. The shape shall be proposed by the contractor with mutual agreement from the government. A root-mean-square precision of the optical figure of 0.5 micron or less, with a clear aperture extending to within half an inch of the edge, is desired.
There is a possibility that a spinel blank will be made available by the government for Phase II. PHASE III: Implement commercial manufacturing capabilities for large conformal windows made of durable ceramics such as spinel. Manufacture an instrument for sale to optics manufacturers to make large, aggressively aspheric optics. Alternatively, provide a commercial service to manufacture such optics.
Accelerating Instructor Mastery (AIM)
Educators typically study for four years at a university, building a solid foundation of instructional knowledge. In addition, most educators also have supervised practical experience before they instruct on their own. In contrast, active duty military instructors often do not have the benefit of any education on how to instruct. They are often recently graduated students; although their content knowledge is strong, they lack "expert instructor techniques" (i.e., strategizing, executing, and adapting classroom instruction as needed to achieve desired outcomes). Given the limited learning and practice opportunities for military instructors, there is a need to capture, explore, and practice military-specific best instructional techniques in the schoolhouse, in order to enable and accelerate effective and engaging instruction even by inexperienced instructors. There are many possible ways to facilitate instructor development outside of a traditional education setting. One method is to develop simulated environments. For example, TeachLivE is a mixed reality environment that uses simulated students to help educators practice their pedagogical skills. In this type of environment, the scenarios can be tailored and scaffolding techniques can be provided for support until the instructional skills are mastered. Another method is to facilitate peer collaboration. Enabling reuse and recombination of previously developed training material (e.g., documents, PowerPoint slides, video, audio, practical exercises, etc.) allows instructors to use, update, and further develop items that have proven effective in similar training curricula. Similar approaches have been studied in higher education, where teachers collaborate to generate content based on knowledge about teaching with technology [2, 3]. It is unclear which approach and systems would best facilitate and expedite learning and development for inexperienced instructors.
Outside of the traditional civilian educational development model and system (e.g., a 4-year degree, professional conferences, etc.), there are limited technologies and materials for inexperienced military educators/instructors to effectively learn and practice pedagogical skills. Innovative solutions are sought to address this gap and enhance the capabilities of inexperienced instructors among military populations. These capabilities should be as open source as possible, require a low/no manpower footprint, and be a tool that can be self-sustaining and extensible to a wide variety of military courses.

PHASE I: Determine requirements for the development of software that provides the capability to improve and accelerate the development of novice instructors. Requirements for data collection should include the types of data and methods for easily capturing each type of data. Phase I deliverables will include: (1) requirements for the system components; (2) an overview of the system and plans for Phase II; and (3) mock-ups or a prototype of the system. If awarded, the Phase I Option should also include the processing and submission of all required human subjects use protocols, should these be required. Due to the long review times involved, human subject research is strongly discouraged during Phase I. Phase II plans should include key component technological milestones and plans for at least one operational test and evaluation, to include user testing.

PHASE II: Develop a prototype system based on the Phase I effort, and conduct a training effectiveness evaluation. Specifically, military instructor performance areas will be provided to support the development of the training effectiveness evaluation. A near-term training need will be identified as a use case for initial system development. All appropriate engineering tests and reviews will be performed, including a critical design review to finalize the system design.
Once the system design has been finalized, a training effectiveness evaluation will be conducted with a Marine Corps population. Phase II deliverables will include: (1) a working prototype of the system that is able to interact with existing system specifications and (2) a training effectiveness evaluation of system capabilities showing demonstrable improvement in the instructor population.

PHASE III: If Phase II is successful, the company will be expected to support the Marine Corps in transitioning the technology for Marine Corps use. The company will develop the software for evaluation to determine its effectiveness in a Marine Corps formal school setting. In addition, the small business will support the Marine Corps with certifying and qualifying the system for Marine Corps use within the Marine Corps Training Information Management System. As appropriate, the small business will focus on broadening capabilities and commercialization plans.
Reliability Centered Additive Manufacturing Design Framework
Additive Manufacturing (AM) offers the opportunity to fabricate equivalent (i.e., same fit/form/function) structures and components in a more cost-effective manner and in ways that are not currently possible with subtractive, casting, or other manufacturing approaches. Unfortunately, today only a limited number of AM components can achieve complete equivalency to their original counterparts, which significantly limits the use of this technology. Still, if the design space for AM parts is enlarged, enormous benefits could be realized by incorporating reliability, inspectability, and repairability into those parts. Efficient and reliable structural or functional components can be fabricated by explicitly exploiting some of the inherent AM material anisotropies while minimizing or eliminating known AM weaknesses through effective utilization of topological designs; material, microstructural, and functionally graded designs; and other design-space trade-offs. Standing in the way of exploiting these seemingly endless design-space trade-offs is the difficulty of inspecting and reliably assuring the operational safety and performance of such advanced designs throughout the operational life of these components. This SBIR topic is seeking innovative approaches to monitor in-situ the reliability of complex AM parts through their entire design life by including “reliability assurance” as an integral part of the design process, embedded in the AM part itself. This topic is not interested in AM parts with simple geometrical designs which could be inspected with existing Commercial Off The Shelf (COTS) Nondestructive Evaluation (NDE) equipment. Depending on the primary function of the AM part (such as carrying structural load, thermal management, material transport, flow control, signature control, etc.), the approach to ensure reliability will vary.
Other factors will also need to be considered when choosing the approach, such as part accessibility, part criticality, degree of multi-functionality, the AM method used to manufacture the part, AM material options, and many others. Cost should also be a factor when down-selecting amongst several approaches that yield similar levels of safety and reliability. Addressing this topic in its entirety would require resources beyond those available through a single SBIR program because of the number of disciplines that need to be brought to bear. An AM design framework that incorporates reliability throughout the life of the component should integrate understanding and modeling of:
1. the different manufacturing technologies (SLA, SLS, SLM, FDM, EBM, EBF3, etc.) used to manufacture AM parts;
2. the materials, processing, microstructure, properties, defect types, and defect distributions in the final AM components;
3. functional performance and its progression (load bearing, thermal management, material transport, EM);
4. the development, progression, and criticality of damage in the AM parts;
5. the interaction between the different damage types and the interrogation method used to monitor part integrity (such as ultrasonic, electromagnetic, thermal, visual, etc.);
6. the reliability of the monitoring approach itself.
Therefore, to narrow the scope of this SBIR topic, this solicitation will focus on only one AM technology for metallic components. Also, “repairability” will only be considered for implementation during Phase III if deemed necessary. The Navy will only fund proposals that are innovative, address the proposed R&D, and involve technical risk.

PHASE I: Develop a Reliability-Centered AM Computational Design Framework (RCAM CDF) that incorporates a specific AM process for metallic components.
This design framework should include at minimum part geometry, material mechanical properties, defect types, and an inspection, monitoring, or other methodology to guarantee reliability of the component throughout its operational design life. Only small laboratory coupons will be fabricated during this phase of the program to verify and validate different aspects of the RCAM CDF.

PHASE II: The Phase I RCAM CDF will be optimized and expanded to incorporate those characteristics that were not previously developed (such as a design optimization algorithm, material microstructure, microstructure evolution, damage nucleation and progression, and others as resources allow). The entire RCAM CDF framework will be further optimized for usability, robustness, and performance. Small-scale laboratory coupons will be fabricated to assist with the expansion and optimization activities of the RCAM CDF. For validation purposes, a small air-to-air heat exchanger (HX) will be designed using the RCAM CDF. The Principal Investigator will fabricate a minimum of four heat exchanger prototypes. The first HX will be used for dimensional and functional characterization after manufacturing. The second HX will be used to perform detailed cross-sectional photo-micrographic analysis to validate the RCAM CDF after manufacturing. The third HX will be used to monitor its reliability throughout an accelerated testing phase. The fourth HX prototype will be used to perform detailed photo-micrographic analysis to validate part reliability after the accelerated testing. The small business must involve an original equipment manufacturer (OEM) during this phase of the contract to facilitate the transition to Phase III.

PHASE III: If Phase II is successful, the company will be expected to support the Navy in transitioning the RCAM CDF for Navy use.
Working with the Navy, the company will integrate the RCAM CDF framework for evaluation to determine its effectiveness in an operationally relevant environment. The OEM involved during Phase II will be part of the transition team. Phase III will include defining the additive manufacturing parameters for qualified full scale system production and establishing facilities capable of achieving full scale production capability of Navy-qualified HXs. The small business will also focus on identifying potential commercialization opportunities.
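The sixth element above, the reliability of the monitoring approach itself, is commonly quantified in NDE practice with a probability-of-detection (POD) curve. Below is a minimal sketch of the standard log-normal hit/miss POD model (in the style of MIL-HDBK-1823A); all inspection parameter values are hypothetical placeholders, not figures from this solicitation.

```python
import math

def pod(a, mu, sigma):
    """Log-normal hit/miss POD model: Phi((ln a - mu) / sigma)."""
    z = (math.log(a) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def a90(mu, sigma):
    """Flaw size detected with 90% probability (Phi(z) = 0.9 at z ~ 1.2816)."""
    return math.exp(mu + 1.2816 * sigma)

# Hypothetical inspection parameters on a mm flaw-size scale.
mu, sigma = math.log(0.5), 0.35   # median detectable flaw ~0.5 mm
print(round(pod(0.5, mu, sigma), 2))   # 0.5 at the median flaw size
print(round(a90(mu, sigma), 3))        # ~0.78 mm flaw detected 90% of the time
```

A design framework like the RCAM CDF would fold curves of this kind into the trade between inspection method, part geometry, and allowable defect population.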
Dive Helmet Communication System
The communication system currently in use consists of analog circuits (circa 1960s) with components to match. The communication components are susceptible to moisture and handling damage; thus the existing communication system has reduced intelligibility in the varying noise levels of the current helmet. The diver helmet currently used by Navy divers (Kirby Morgan KM 37NS) has documented high noise levels from the breathing system, paired with a circa-1960s communication system (analog on copper wire). This is a challenging communication environment, further complicated when using helium mixed gas (squeaky voice) and/or operating at depth (compressed air). The maximum operating depth is 190 feet of seawater (fsw) for breathing air and 300 fsw for helium-oxygen mixes. Either breathing medium may be used in water temperatures as low as 37°F, or as low as 28°F if the helmet is used with the hot water shroud and a hot water supply. In addition, at water temperatures below 40°F, the diver will be dressed in either a variable volume dry suit or a hot water suit. Additionally, the diver community has a high incidence of (high frequency) noise induced hearing loss. An ideal communication system would use modern digital signal processing, with matched microphones and speakers (actuators), to overcome these challenges and the feedback that occurs in the current analog system. All required components must be included in the system for improved diver (helmet) and topside communications. The system shall be able to frequency compensate to keep incoming signals in the optimal voice range for divers and to correct for topside signals. The new system must be robust enough to handle the moisture/salt in a marine diving (90% humidity) scenario and the inherent rough handling of diver equipment. The system should also include the capability for noise monitoring and for future use of fiber optic signal umbilical lines (i.e., accommodate the use of fiber optic cables and the reduction of copper wire in the umbilical lines).
The system must work in the current noise levels of 94 to 97 dBA, which are noise hazardous (reference 1) and compromise current communications. ANSI S3.2-2009 (R2014) (Method for Measuring the Intelligibility of Speech over Communication Systems) would be an appropriate means of evaluation, using the modified rhyme test. ANSI S3.2-2009 (R2014) includes factors that affect the intelligibility of speech. The new system must reduce the noise feedback loop in the helmet, which produces high noise levels at the diver's ear and increases the noise exposure. The goal is to reduce noise levels below 85 dBA.

PHASE I: Determine the feasibility of developing and constructing communication technologies to provide clear communications between divers and the topside station. Develop a detailed design for a diver communication system that can meet the performance requirements and the constraints listed in the equipment description for the Kirby Morgan KM37NS dive helmet in reference 2. A similar commercial system is shown in reference 3. Currently approved communications systems are listed in reference 4. The technical manual (N00178-04-D-4012/HR06) is reference 5. Perform design trade-offs to provide an initial assessment of concept performance for topside, bottom-side, multiple-diver, and umbilical components, suitable for the marine environment. The dive helmet communication system must consider forward fit and back fit, to be reported in the Phase I Final Report. The system concept must address the pressure changes which occur in the helmet. The system may utilize frequency shifting to deal with gas density and/or mixed gas. The system concept must also address diver utility (i.e., ear equalization) if considered. Phase I includes the initial layout and capabilities description to build the unit in Phase II. Current topside design factors are Space: 10" x 9" x 14.5"; Weight: 22 lbs; Power: 110/220 VAC 50/60 Hz (autosensing) or internal/external battery 12 VDC.
PHASE II: Produce dive helmet communication system prototype hardware based on the Phase I concept design for evaluation. Finalize the prototype design and validate improved communications at the topside station. Also, validate improved communications while breathing air (maximum operating depth of 190 feet of seawater (fsw)) and helium-oxygen mixes (maximum operating depth of 300 fsw). Either breathing medium may be used in water temperatures as low as 37°F, or as low as 28°F if the helmet is used with the hot water shroud and a hot water supply. In addition, at water temperatures below 40°F, the diver will be dressed in either a variable volume dry suit or a hot water suit. Install the developed prototype in a Kirby Morgan KM 37NS dive helmet. The developer will use its own helmet during Phase II. Speech intelligibility tests shall be performed to confirm the communication improvements. Deliver two full prototype dive helmet communication system kits for testing and evaluation at a location chosen by the US Navy.

PHASE III: Construct production units suitable for certification for the Approved for Navy Use (ANU) List and develop marketing plans for a broad range of customers. Kits shall include all hardware required for modification of helmets, topside equipment, installation, and operations and maintenance procedures.
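For context on the noise requirement, the cited 94 to 97 dBA helmet levels can be related to allowable exposure time using a NIOSH-style criterion of 85 dBA with a 3 dB exchange rate (allowable time halves for every 3 dB above the criterion). A quick sketch of the arithmetic:

```python
def permissible_hours(level_dba, criterion=85.0, exchange=3.0, base_hours=8.0):
    """Allowable daily exposure: halves for every `exchange` dB above the criterion."""
    return base_hours / 2 ** ((level_dba - criterion) / exchange)

print(permissible_hours(85.0))  # 8.0 hours at the 85 dBA goal
print(permissible_hours(94.0))  # 1.0 hour at the low end of current helmet noise
print(permissible_hours(97.0))  # 0.5 hour at the high end
```

This illustrates why the current 94 to 97 dBA environment is classified as noise hazardous and why the 85 dBA goal is operationally significant.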
Rapid Initialization and Filter Convergence for Electro-optic / Infrared Sensor Based Precision Ship-Relative Navigation for Automated Ship Landing
Carrier based fixed wing aircraft need accurate, high rate, and high integrity precision ship-relative navigation (PS-RN) to conduct safe and efficient automated landings. This SBIR is focused on a capability that is not dependent on Radio Frequency (RF) emissions or the Global Positioning System (GPS), in order to reduce vulnerability to interference. The PS-RN solution must initialize very quickly because of the relatively high closure rates (approx. 120 knots). Electro-Optical/Infrared (EO/IR) aircraft mounted cameras are attractive sensors due to their low cost and size, but they do not directly measure range and angle, so these quantities must be reliably extracted from the imagery at a high data rate (approx. 30 Hz). They must initialize quickly (10 seconds or less) when imagery becomes usable as the aircraft approaches the ship, even with obscured visibility and deck motion. Current image detection, tracking, and template matching require too much time (over 1 minute) to be usable for a jet aircraft approach in reduced visibility. The convergence rate of linearized filters is not adequate for the required precision of relative navigation; non-linear filters may provide a workable approach. For this SBIR effort, the sensors to be used are aircraft mounted electro-optic and/or infrared cameras, with optional use of ship mounted light or beacon sources. The navigation scenario begins at 4 nautical miles (NMI), with the aircraft at 1200 feet Mean Sea Level (MSL) and within 10 degrees of the landing area centerline. The aircraft inertial measurement unit (IMU) can be used. Ship location is known within 1/2 NMI; course and speed are known within 10 degrees and 5 knots. The ship must be detected and tracking must begin rapidly when it enters the sensor field of view; the initialization goal is 10 seconds. The camera system, including lenses, must be small (approx. 200 cubic inches) and able to withstand the electromagnetic interference, shock, and vibration environment of a carrier landing. Only two fixed focal lengths can be used for the entire approach. Computer processing must be done on 1 or 2 cards added to an existing mission computer. The aircraft pose estimate must be accurate within 0.20 degree in azimuth and elevation, and in range within 4%. It must be a high integrity solution with sufficient accuracy and continuity (in concert with the IMU) suitable for aircraft control. Ship motions corresponding to sea state 4 must be identified. The lights or beacons must be suitable for installation on the deck, hull, catwalk, or other structural location on an aircraft carrier.

PHASE I: Determine feasibility for the development of rapidly initializing detection, tracking, and relative pose estimation processes that could be used to provide navigation inputs to the flight control system of a carrier based fixed wing aircraft for landing on an aircraft carrier. Develop a system concept for this purpose and report on the results.

PHASE II: Based on the Phase I effort, develop and demonstrate sensor processing using available sensor(s) for rapid-initialization PS-RN real-time capability, in simulation, in the full range of lighting conditions, from full sun to overcast moonless night, with sea state 4 deck motion. Determine through simulation the maximum range capability in fog and rain, and determine the heaviest fog and rain that can exist and still allow accurate navigation to begin at no less than ¾ nautical mile. Collect imagery data during flight approaches and landings in day and dark night conditions using a low cost surrogate aircraft and a land based facility. Use these imagery data to demonstrate rapidly initializing real-time PS-RN in the laboratory. Conduct an open loop demonstration of real-time PS-RN capability in flight. Update the models and software for delivery to the Navy simulation laboratory.
PHASE III: From the results of Phase II, design and fabricate a processing system using available sensors which can be carried by an F/A-18 aircraft. The Navy will gather flight test imagery using this system in a separately funded flight test effort. Using the data from this flight test, demonstrate precision ship-relative navigation capability in the laboratory. Develop a preliminary design for an integrated sensor and processing system to be installed on an aircraft carrier and a selected Navy carrier aircraft for operational use.
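As a rough illustration of how the angular and range accuracy requirements interact, consider estimating range from the angular subtense of a known baseline on the ship, such as two ship-mounted lights a known distance apart (the 30 m separation below is purely hypothetical). For small angles R ≈ B/θ, so the fractional range error equals the fractional error in the measured subtense; the sketch shows that at the 0.20 degree angular accuracy cited above, subtense-based ranging alone would fall well short of the 4% range requirement at ¾ NMI, implying roughly 0.05 degree subtense precision or fusion with other measurements.

```python
import math

def range_from_subtense(baseline_m, subtense_rad):
    """Small-angle range estimate from the angular subtense of a known baseline."""
    return baseline_m / subtense_rad

# Hypothetical: two ship lights 30 m apart viewed from 3/4 NMI (~1389 m).
true_range = 0.75 * 1852.0
theta = 30.0 / true_range            # ~0.0216 rad (~1.24 deg) subtense
est = range_from_subtense(30.0, theta)

# For this estimator, error propagation gives dR/R = dtheta/theta.
dtheta = math.radians(0.20)          # the topic's angular accuracy goal
frac_range_err = dtheta / theta
print(round(frac_range_err * 100, 1))  # ~16 percent at this geometry
```

This kind of geometry check is one reason the topic points toward non-linear filtering that fuses imagery with the IMU rather than relying on any single image measurement.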
Robust MEMS Oscillator Replacement for Quartz Crystal TCXO Oscillator
High Precision Temperature Compensated Crystal Oscillators (TCXOs) are used in GPS receivers, radio transceivers, and other radio frequency devices used in guided munitions. Quartz TCXOs are useful as a clock reference for these devices because of their high frequency stability, often well below 2 parts per million (ppm). However, quartz crystal devices are sensitive to both mechanical shock and rapid temperature changes. This is particularly an issue for high velocity gun-launched guided projectiles. This topic seeks a MEMS oscillator comparable to TCXOs in frequency accuracy and temperature stability, while providing similar or improved phase noise. The design must survive high-G mechanical shocks up to 30,000 g during firing and temperature extremes with rapid fluctuations up to 250°F/min (surface) caused by aeroheating during flight. The design must also remain within or reduce the size, weight, and power envelope of the existing crystal oscillators it is meant to replace. The design must also perform within the operational constraints of existing crystal oscillators by exhibiting minimal aging and acceleration sensitivity, and by reaching a functional level of stability (<0.1 ppm/sec) within 0.25 seconds of the application of power.

PHASE I: Develop a proof of concept approach for a MEMS oscillator that meets the requirements as stated in the topic description. Support the concept design by material testing and analytical modeling. A critical demonstration experiment with components in a laboratory environment is desired.

PHASE II: Further refine the approach and build prototype devices. Characterize the devices’ frequency accuracy, temperature stability, phase accuracy, and temperature and shock shifts. Deliver prototype devices for government testing.
PHASE III: If Phase II is successful, the small business will provide support in transitioning the technology for Navy use in the Hyper Velocity Projectile program. Work with GPS receiver manufacturers to integrate the MEMS oscillator into GPS receivers. Develop a plan to determine the effectiveness of the replacement MEMS oscillators in an operationally relevant environment. Support the Navy with certifying and qualifying the system for Navy use. When appropriate, focus on scaling up manufacturing capabilities and commercialization plans.
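The stability figures in this topic translate directly into receiver-level numbers: a fractional clock-frequency offset of 1 ppm corresponds to a GPS pseudorange-rate error of roughly c x 10^-6, about 300 m/s, which is why reaching sub-0.1 ppm stability within 0.25 seconds of power-up matters for a gun-launched receiver. A small arithmetic sketch (the 10 MHz nominal frequency is a hypothetical example, not a figure from this topic):

```python
C = 299_792_458.0  # speed of light, m/s

def freq_error_hz(nominal_hz, ppm):
    """Absolute frequency error for a given parts-per-million offset."""
    return nominal_hz * ppm * 1e-6

def pseudorange_rate_error(ppm):
    """GPS pseudorange-rate error caused by a fractional clock-frequency offset."""
    return C * ppm * 1e-6

# Hypothetical 10 MHz TCXO-class reference.
print(round(freq_error_hz(10e6, 2.0), 6))        # 20.0 Hz at the 2 ppm stability bound
print(round(pseudorange_rate_error(0.1), 1))     # ~30.0 m/s at the 0.1 ppm threshold
```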
Unmanned Undersea Vehicle (UUV) Detection and Classification in Harbor Environments
New or improved sensing concepts and technologies are needed to better recognize the presence of Unmanned Undersea Vehicles (UUVs) operating in ports and harbors, particularly in the proximity of U.S. Navy ships and submarines. The maturity and proliferation of UUVs throughout the world present an emerging challenge for force protection in harbor environments. Even sensor-laden units that do not present a direct threat are important to counter, but an armed UUV presents a particularly compelling challenge. The mobility of UUVs limits the effectiveness of traditional mine countermeasures such as change detection. The stationary nature of the assets being protected in harbors allows for slow and deliberate approaches by enemy platforms. Current strategies for detecting and classifying UUVs employ systems that were originally designed to detect combat swimmers and scuba divers. A number of these systems have demonstrated some capability against UUV targets presented in a controlled research environment, but the typical warning ranges do not provide a completely satisfactory response window. It is envisioned that a multi-modal, layered approach has the potential to significantly increase the average response window available to counter UUV approaches to U.S. Navy assets.

PHASE I: Determine the technical feasibility of a UUV sensing approach that would be effective against Remus 100 size and larger targets, and develop a system design and concept of operation for implementing it, either as an independent system or in concert with existing sonar technology. Design considerations include an objective standoff distance of 1000 meters and a false alarm tolerance of one per day. The improved sensing technology will be integrated into the over-arching asset protection infrastructure.

PHASE II: Produce UUV sensor prototype hardware along with a concept of operations based on the Phase I effort.
Demonstrate and validate performance of the UUV sensor developmental system against Remus 100 size targets in a relevant environment. A completely functional system is not required at the end of the Phase II effort; however, a demonstration during Phase II should clearly support the expected performance of a final design.

PHASE III: Based upon the Phase I and Phase II efforts, the developed sensing technologies and systems that have demonstrated effective detection and classification of UUVs in harbor environments will be candidates for prototype development and for test and evaluation supporting incorporation into the Strategic Systems Program Nuclear Weapons Security WQX-2 Program of Record. Additional transition targets include the Naval Facilities Command Electronic Harbor Security System.
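The Phase I design numbers can be turned into rough operational figures. Assuming a hypothetical Remus-100-class transit speed of about 2.5 m/s and a notional one-detection-decision-per-second processing rate (neither figure is specified in the topic), the 1000 m standoff and one-false-alarm-per-day budget imply:

```python
def response_window_s(standoff_m, target_speed_mps):
    """Warning time from first detection at standoff to arrival at the asset."""
    return standoff_m / target_speed_mps

def per_decision_pfa(false_alarms_per_day, decisions_per_second):
    """Per-decision false-alarm probability implied by a daily false-alarm budget."""
    decisions_per_day = decisions_per_second * 86_400
    return false_alarms_per_day / decisions_per_day

print(round(response_window_s(1000.0, 2.5) / 60.0, 1))  # ~6.7 minutes of warning
print(per_decision_pfa(1.0, 1.0))                        # ~1.16e-05 per decision
```

Under these assumed numbers, a 1000 m standoff buys only minutes of response time, and the daily false-alarm budget forces a very low per-decision false-alarm probability, which is the core detection-versus-false-alarm trade any proposed sensing approach must address.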
GaN Avalanche Devices for RF Power Generation
Radio Frequency (RF) power generation by diode sources enables compact and affordable sources for a wide range of sensor applications. Avalanche breakdown is an important mechanism for the generation of RF power in a two terminal diode; examples are IMPact ionization Avalanche Transit Time (IMPATT) diodes demonstrated in Silicon, Gallium Arsenide (GaAs), and Indium Phosphide (InP). On the basis of both thermal considerations and a large breakdown field, Gallium Nitride-based avalanche devices should offer a substantial advance (~100X) in power output with improved efficiency (~2X). The problem in wide bandgap nitrides is that, until recently, avalanche breakdown had not been experimentally observed, despite two decades of material advances. The absence of experimental observation is often attributed to the higher dislocation density of current GaN technology, which lowers the breakdown electric field threshold through non-avalanching mechanisms. Recently there have been reports of the observation of avalanche-breakdown-like behavior in GaN devices for power electronics, where avalanche breakdown phenomena are exploited to prevent device burnout. No RF devices have been developed to exploit the avalanche behavior. The goal of this topic is to demonstrate RF power generation in Gallium Nitride or related group III-Nitride diodes exploiting avalanche breakdown.

PHASE I: Determine the feasibility of exploiting avalanche breakdown in the III-Nitride system in a representative diode structure for RF power generation. Demonstrate avalanche gain behavior with a diode in a representative circuit. Design the required device geometries and material properties for W-band operation, along with the expected power output and DC conversion efficiency, based on the current state of the art for high quality GaN materials and prior theoretical work on scaling studies of microwave diodes and material properties of GaN.
The planned device should be capable of operation up to a nominal current density of 100,000 A/cm2.

PHASE II: Develop and demonstrate the device design formulated in Phase I. Fabricate the device with the appropriate GaN material technology, processing, and geometry to demonstrate RF power generation at a nominal frequency of 94 GHz in an appropriate circuit. Characterize the device performance as a function of DC operating parameters, circuit matching, and thermal effects. A fixtured device with waveguide output will be delivered to the government for validation, along with test data detailing the device performance. Based on the measured device performance and scaling considerations, estimate the expected performance for Ka- and G-band operation.

PHASE III: Develop an RF source module based on the Phase II results for compact payloads for expendable decoys. Phase III should optimize power output and efficiency and develop device packaging that minimizes device heating through thermal management approaches.
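The ~100X power advance claimed for GaN can be sanity-checked against the Johnson limit, under which achievable device power scales as (Ec·vsat)². Using approximate textbook material parameters (the values below are assumptions, not figures from this solicitation):

```python
def johnson_power_ratio(ec1, vsat1, ec2, vsat2):
    """Relative power capability from the Johnson limit: P scales as (Ec * vsat)^2."""
    return (ec1 * vsat1) ** 2 / (ec2 * vsat2) ** 2

# Approximate textbook material parameters (assumed values):
# breakdown field (V/cm), saturation velocity (cm/s).
GAN  = (3.3e6, 1.4e7)
GAAS = (0.4e6, 1.0e7)

print(round(johnson_power_ratio(*GAN, *GAAS)))  # ~133, consistent with the ~100X estimate
```

The order-of-magnitude agreement with the topic's ~100X figure rests mainly on GaN's nearly tenfold larger breakdown field, which is exactly why realizing true avalanche (rather than dislocation-mediated) breakdown in GaN is the key technical hurdle.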
Active Thermal Control System Optimization
Thermal Management is a critical requirement for future warships with electronic propulsion, weapon, and sensor systems. Innovative thermal architectures are needed to cool next-generation, high-energy density electronics which are expected to exhibit highly transient loads during pulsed operation. Two-phase cooling systems, such as vapor compression cycles, pumped cooling loops, and hybrid systems, are of particular interest for electronics cooling applications due to the large heat transfer coefficients from boiling flow, as well as their capability to maintain isothermal conditions as ambient temperature varies. In the presence of large thermal transients, active control may be required to avoid temperature variations, thermal lag, flow instabilities and critical heat-flux, which do not occur in state-of-the-art systems based on single-phase convective cooling. While control strategies for air-conditioning and refrigeration systems are well developed, the use of phase change cycles for electronics cooling is relatively new. In order for the thermal control system to accommodate transients, the component response times need to be understood and the control system optimized to ensure stable operation of coupled components while avoiding dry-out. The objective of this topic is to develop a software toolset with a graphical user interface to model various component configurations and control approaches for an electronics thermal control system. This toolset will allow for modeling of component interactions under dynamic thermal loads and evaluation of control methodologies for optimizing thermal performance, while minimizing system size and weight. Such components include, but are not limited to, variable speed compressors, pumps, electronic expansion devices, accumulators, charge compensators, liquid receivers, cold plates, condensers, and other components used in a two-phase thermal control system. 
The toolset will need to be able to monitor temperature, pressure, and flow, and to simulate active control of components for modification of operation based on control algorithms and user inputs.

PHASE I: Develop component-level models (using a subset of the components listed above) and formulate system-level concepts to characterize the control architecture for multiple-cold-plate, active thermal control systems. Validate model performance through static and transient laboratory experiments.

PHASE II: Based upon the Phase I results, develop full-scale system-level software tools and control system modeling capability to optimize the thermal response of advanced multi-cold-plate architectures. Characterize operation, including transient behavior, of a representative thermal control system. Operationally test the control system with relevant and/or simulated hardware. Deliver a standalone software toolset with a graphical user interface.

PHASE III: Based upon the Phase II effort, finalize the software toolset and graphical user interface. Provide transition and commercialization plans using the knowledge gained during Phases I and II. Provide support during developmental and operational testing on full-scale radar, weapon, and other systems.
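As a minimal illustration of the component-plus-control simulation such a toolset would automate, consider a single lumped-thermal-mass cold plate whose coolant flow is commanded by a feedforward term plus proportional trim against a step heat load. All parameter values below are hypothetical placeholders, and a real tool would model two-phase behavior rather than this single-phase lumped sketch:

```python
def simulate_cold_plate(q_load_w=2000.0, t_set=40.0, sim_time_s=60.0, dt=0.01):
    """Lumped cold plate: C*dT/dt = Q_load - m_dot*cp*(T - T_in).

    Flow command = feedforward sized for the design load + proportional trim.
    All parameter values are hypothetical placeholders.
    """
    C, cp, t_in = 5000.0, 3800.0, 25.0          # J/K, J/(kg K), coolant inlet (C)
    kp = 0.05                                    # proportional gain, kg/s per K
    m_ff = q_load_w / (cp * (t_set - t_in))      # flow that balances the design load
    t = t_in
    for _ in range(int(sim_time_s / dt)):
        m_dot = max(0.0, m_ff + kp * (t - t_set))   # pump flow cannot go negative
        t += dt * (q_load_w - m_dot * cp * (t - t_in)) / C   # explicit Euler step
    return t

print(round(simulate_cold_plate(), 1))  # ~40.0 C: settles at the setpoint
```

The proposed toolset would generalize this to networks of such components (compressors, expansion devices, accumulators, condensers), with phase-change physics and transient stability constraints that this single-node sketch omits.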
Affordable Compact HPRF/HPM Attack Warning System
Advances in high power microwave threats pose significant dangers to critical naval electronic systems. To mitigate these dangers, a warning system is needed that will cover a broad range of potential HPRF frequencies and a large dynamic range of intensities, with the ability to survive and remain operational under the highest intensities with a low false alarm rate. The HPRF sensor should be able to provide frequency information of the attack, which may be wideband pulses (100-500 MHz, pulse widths 2 – 200 ns) or narrowband (500 MHz – 5 GHz, pulse widths 1 ns – 5 μs). It must also measure the approximate power level as well as information about the direction to, and possible range of, the attacker to provide information for possible evasion maneuvers or potential retaliation. It is desirable to obtain HPRF geo-location information with an error of less than 5 degrees in both azimuth and elevation/declination and provide an approximate target range. The HPRF irradiation may or may not be repetitive. The system should be able to survive HPRF field intensities in excess of 50 W/cm2 without damage to the detection system (i.e., a sufficiently high damage threshold). The system should be able to detect and characterize the power and frequency of current and anticipated HPRF sources being developed. Peak power and waveform measurements of the HPRF, along with historical tracking of background RF irradiation/waveforms, should be used to maximize the detector dynamic range and reduce the false alarm rate to below 1%. The system should be capable of roughly estimating the direction of attack, estimated distance, and RF parameters so that facility decision makers can take proper action. The project is subject to technical risks in covering the broad range of potential intensities, possible frequencies, and waveforms. 
The affordable system should be immune to the HPRF attack and have a low SWaP footprint that can be easily integrated into navy platforms (such as helos and UAVs) and their power sources without negatively impacting current system functionality such as power supplies, aerodynamics, weight-balance, and/or cargo/passenger space. PHASE I: Conceptualize, design, develop, and model key elements for an innovative HPRF Advanced Warning System (AWS) that can meet the requirements discussed in the description section with emphasis on low false alarm rate and on providing accurate direction and range of the attacker; these latter items are important to extend the present state of the art. Perform modeling and simulation to provide initial assessment of the performance of the concept. The design should establish realizable technological solutions for a device capable of achieving the desired geo-locating accuracy for the wide range of HPRF waveforms listed while being immune to the effects of the HPRF irradiation. The proposed design should be an 80% complete solution and include all sub-systems necessary for this innovative HPRF AWS. The proposed brass board system should be designed to demonstrate a path towards providing a compact solution (with low SWaP) that can be easily integrated onto air (e.g. UAVs), ground or nautical platforms. Cost analysis and material development should be included to ascertain critical needs not yet fully developed or readily available given current technology. The design and modeling results of Phase I should lead to plans to build a prototype unit in Phase II. PHASE II: Phase II will involve the design refinement, procurement, integration, assembly, and testing of a proof of concept brass board prototype leveraging the Phase I effort. 
The Phase II brass board prototype will be capable of providing frequency information of the HPRF source, which may be wideband pulses (100-500 MHz, pulse widths 2 – 200 ns) or narrowband (500 MHz – 5 GHz, pulse widths 1 ns – 5 μs), and measure the approximate HPRF power level as well as provide accurate geo-location information as stated above. This brass board prototype must demonstrate a clear path forward to a full scale concept demonstrator based on the selected technology. Data packages on all critical components will be submitted throughout the prototype development cycle and test results will be provided for regular review of progress. The use of actual hardware and empirical data collection is expected for this analysis. PHASE III: The performer will apply the knowledge gained during Phases I and II to build and demonstrate the full scale functional final design that will include all system elements and represent a complete solution. The final design should be compact and ruggedized, and the sensor should be platform mountable (e.g., exterior of an airborne platform such as a UAV). The device should be applicable for test range use and should be immune to damage from HPRF. The functional final design will provide notification of attack and capture data for later analysis. The data will provide signal characterization including field strength, frequency, pulse width, and repetition rate. Additionally, it will provide angle of arrival and probable distance. The probe(s) should be immune to HPRF. Data packages on all critical components and subcomponents will be submitted throughout the final development cycle and test results will be regularly submitted for review of progress. It is desirable for the performer to work closely with NAVAIR Program Offices, e.g., PEO (U&W) PMA-263, Navy and Marine Corps Small Tactical UAS, and the PMA-272 Tactical Aircraft Protection Systems program office, to maximize transition and field testing opportunities. 
The initial use and desire for this final design will be to provide an Affordable Compact HPRF/HPM Attack Warning System that can be combined with existing Naval UAS or Helicopter assets in a protective configuration for future Directed Energy threats. Working with the Navy and Marine Corps, the company will integrate their prototype HPRF/HPM Attack Warning System onto an existing vehicle for evaluation to determine its effectiveness in an operationally relevant environment. The company will support the Navy and Marine Corps for test and validation to certify and qualify the system for Navy and Marine Corps use. The company will develop manufacturing plans and capabilities to produce the system for both military and commercial markets.
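As background on the geo-location requirement (error under 5 degrees in azimuth and elevation), one common approach is phase-comparison interferometry between paired sensor elements. The sketch below assumes a hypothetical two-element array and a noise-free narrowband pulse; a fielded AWS would need multiple baselines, ambiguity resolution, and calibration against platform effects.

```python
# Sketch: estimating angle of arrival (AoA) of a narrowband RF pulse from the
# phase difference between two sensor elements. The array geometry and test
# frequency below are illustrative assumptions, not program requirements.
import math

def aoa_from_phase(delta_phi_rad, freq_hz, baseline_m, c=3.0e8):
    """Return AoA in degrees from the inter-element phase difference.

    Unambiguous only when baseline <= lambda/2.
    """
    lam = c / freq_hz
    s = delta_phi_rad * lam / (2.0 * math.pi * baseline_m)
    s = max(-1.0, min(1.0, s))      # clamp against numerical noise
    return math.degrees(math.asin(s))

# A 1 GHz pulse (lambda = 0.3 m) arriving at 30 degrees on a 0.15 m baseline:
freq = 1.0e9
d = 0.15
true_angle = 30.0
# Forward model: phase difference such a wavefront produces across the pair
dphi = 2.0 * math.pi * d * math.sin(math.radians(true_angle)) / (3.0e8 / freq)
print("recovered AoA: %.1f degrees" % aoa_from_phase(dphi, freq, d))  # prints 30.0
```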
Low Size, Weight, Power, and Cost (SWAP-C) Magnetic Anomaly Detection (MAD) System
Research over the last decade has significantly reduced the Size, Weight, and Power (SWAP) of atomic vapor magnetometers, [1, 2] making these sensors a good match for unmanned Navy vehicles. This topic seeks innovative designs that incorporate such magnetometers into a Magnetic Anomaly Detection (MAD) system, including both the hardware and software to detect, localize, and track a magnetic dipole target from an Unmanned Aerial Vehicle (UAV). Traditionally, MAD systems must account for a variety of noise sources such as sensor noise, platform noise, geomagnetic noise, and movement in gradient fields so this effort must contain additional sensors to remove these noise sources. This MAD system is envisioned to provide a common sensor for use on Tier 1 UAVs as well as being towed from helicopters. The hardware goals are driven by the intended application for small UAVs. As such, the total field magnetometer should be commensurately small: sensor head size <100cc, electronics module <500cc, low-power (<5W total objective), and low-weight (<5 lbs. total). The noise floor should match or improve upon current commercially available sensors at 0.35 pT/rtHz between 0.01-100 Hz with a raw heading error <300 pT, compensated heading error <10 pT (objective), and remove dead zones inherent in traditional total field magnetometer designs. The system should operate in all Earth’s field conditions (roughly 25 μT – 75 μT). Proposals should include the performance of the existing total field magnetometer planned for incorporation into the MAD system and describe modifications that would be needed to meet these performance goals. The cost objective should be less than $10k in small quantities (~10/year). To reduce noise, additional sensors are usually included in a MAD system: a 3-axis vector magnetometer to compensate for platform noise,  a 3-axis accelerometer, GPS inputs, and other sensors. 
These additional sensors should be in line with an overall compact, low-power design, but need not be included in the SWAP parameters above. Software should be able to detect, localize, and track a magnetic dipole target using GPS coordinates, from flight paths that are not necessarily straight and level. The algorithms should allow for the possibility of geomagnetic noise reduction with an external reference magnetometer. Computationally intensive operations such as heading error correction, noise suppression, and MAD algorithms need not be done in the magnetometer and can be done in an external computer. PHASE I: Define a concept for a prototype compact MAD system. Demonstrate a total field magnetometer meeting the 0.35 pT/rtHz noise performance goal at 1 Hz in a bench top system. Include a 3-axis vector magnetometer and demonstrate an ability to compensate heading error. Develop software approaches for magnetic dipole detection and localization. Investigate noise reduction techniques to be implemented in the software and identify the associated hardware components. PHASE II: Based upon the Phase I effort, construct a prototype MAD system and a reference magnetometer. Verify the MAD system survives expected test-flight conditions and meets performance goals in Earth’s background field using the reference. Refine the software and integrate it with the hardware. Conduct a flight test to demonstrate the prototype MAD system’s performance against a simulated target. PHASE III: This system will be an integral part of the MAD UAV. Work with the UAV Primes to integrate, test, and productionize the MAD system. Conduct an operational demonstration of the system’s performance against a relevant target at sea. PMA-264 is the expected transition sponsor of the MAD UAV technology to be deployed on the P-8A for ASW MAD. 
Tasking would include: additional ruggedization of the system for Fleet use, implementation of cost reduction measures to provide a minimal-cost product for Navy acquisition, and integration of the system onto an ASW vehicle.
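For context on the sensitivity targets above, the sketch below evaluates the total-field anomaly of a magnetic dipole target along a straight survey track. The target moment (1e5 A·m², roughly submarine-class) and 300 m altitude are illustrative assumptions; the sub-nanotesla peak shows why a sub-picotesla noise floor and tight heading-error compensation matter for detection range.

```python
# Sketch: total-field anomaly of a magnetic dipole target along a survey
# track -- the basic signal a MAD detection/localization algorithm must pull
# out of noise. Target moment and geometry are illustrative assumptions.
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, T*m/A

def dipole_anomaly(m_am2, x, alt):
    """Simplified scalar anomaly (tesla) of a dipole of moment m_am2 (A*m^2)
    measured at horizontal offset x and altitude alt (both metres).
    Uses the 1/r^3 field-magnitude scaling and ignores orientation terms."""
    r = math.hypot(x, alt)
    return MU0 * m_am2 / (4.0 * math.pi * r ** 3)

# A 1e5 A*m^2 target overflown at 300 m altitude, sampled every 50 m:
track = [dipole_anomaly(1.0e5, x, 300.0) for x in range(-1000, 1001, 50)]
peak_nt = max(track) * 1e9
print("peak anomaly: %.2f nT" % peak_nt)  # prints 0.37 nT
```

The peak of roughly 0.37 nT at closest approach falls off as the cube of slant range, which is why even modest increases in standoff distance demand the low noise floors specified in this topic.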
Ultra High Density Carbon Nanotube (CNT) Based Flywheel Energy Storage for Shipboard Pulse Load Operation
The introduction of advanced weapons systems such as rail guns, lasers, and other future pulse loads to future warships creates power and energy demands that exceed what a traditional ship electric plant interface can provide. This creates the problem of satisfying growing demand with stored energy while working within the limited space available aboard ship platforms. Flywheel energy storage systems are potentially attractive due to high cycle life capability, tolerance for military environmental conditions, and capability for buffering multiple stochastic loads. This is provided by the capability to support rapid discharge and charge cycles on a continuous basis. However, prior Navy flywheel installations have been “built in,” which does not allow for easy removal and replacement via a hatchable and/or modular installation that is scalable for multi-MW operation. The size and overall design of these flywheel systems are driven by issues such as rotor energy density, which is limited by tip speed and, in turn, by available material strength. Carbon Nanotube (CNT) based macrostructures in the form of conductive fiber and sheets with high strength and resilience provide the potential to improve state-of-the-art flywheel energy storage. For flywheel designs, it is anticipated that CNT-based composites can increase the available energy by over 30% as compared to existing state-of-the-art composite materials. This results in a potentially thermally, mechanically, and dynamically compliant system operating with a tip speed greater than 1000 m/s. Given the infancy of this technology, innovative research is necessary to identify and prove the actual improvements over baseline state-of-the-art composite designs by applying new means of CNT integration into advanced wheel designs. The U.S. 
Navy is therefore interested in developing and characterizing the advantages of innovative shipboard flywheel system designs, directed and optimized to the MW scale, which maximize the advantages of CNT material integration. For the purposes of this effort, the theoretical metrics below will be used as a basis for the flywheel design. Continuous online charge-discharge of up to 50% duty cycle (e.g., up to 50% charging, 50% discharging). 26” shipboard hatchable design for easy removal or installation of components. Modular installation and operation capability to multi-MW levels, with relevant bus voltage and power conversion. Operation over the temperature range (40 – 140 F). Provide the capability to last for 60,000 hours of online use and support >20,000 cycles. The flywheel should be designed such that the inertial material or other moving parts cannot penetrate into any personnel space under a catastrophic failure. PHASE I: Determine feasibility and develop a conceptual system design for a CNT flywheel energy storage system, and provide a comparison against alternative existing metals or composite materials. The comparative design should highlight the advantages that CNT components provide with respect to size, weight, and performance in a shipboard environment. Perform an initial development effort that demonstrates the scientific merit and capabilities of the proposed CNT materials for application in a high-speed rotating storage application. Laboratory coupon specimens should be fabricated and characterized. PHASE II: Fabricate the prototype CNT-based components associated with the kinetic energy storage design developed in Phase I to the highest allowable scale given constraints of budget and schedule. Fully characterize and demonstrate the capabilities and limitations of the CNT material based component. Update the kinetic energy storage design developed in Phase I based on results. 
PHASE III: Based on the Phase I and II efforts, fabricate a full megawatt-scale kinetic energy storage system incorporating CNT materials to support shipboard pulse power requirements through the existing ONR Multifunction Energy Storage FNC.
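The anticipated energy gain follows directly from rim mechanics: for a thin rim, specific energy scales with the square of tip speed, and the hoop-stress limit sets the maximum tip speed at sqrt(sigma/rho). The sketch below uses rough, derated strength and density values (assumptions for illustration, not measured CNT composite data) to show that scaling.

```python
# Sketch: why tip speed drives flywheel specific energy. For a thin rim,
# E/m = v_tip^2 / 2, and the hoop-stress limit gives v_max = sqrt(sigma/rho).
# Strength/density numbers below are rough, derated illustrative assumptions.
import math

def max_tip_speed(sigma_pa, rho_kgm3):
    """Hoop-stress-limited tip speed (m/s) for usable strength sigma."""
    return math.sqrt(sigma_pa / rho_kgm3)

def specific_energy_wh_per_kg(v_tip):
    """Thin-rim specific energy, converted from J/kg to Wh/kg."""
    return 0.5 * v_tip ** 2 / 3600.0

# Illustrative comparison: a carbon-fiber composite rim at a derated usable
# strength vs. a hypothetical CNT-fiber composite with higher usable strength.
cfrp = max_tip_speed(1.5e9, 1600.0)   # ~968 m/s
cnt = max_tip_speed(2.2e9, 1400.0)    # ~1254 m/s (assumed CNT properties)
print("CFRP rim limit: %4.0f m/s, %5.1f Wh/kg" % (cfrp, specific_energy_wh_per_kg(cfrp)))
print("CNT  rim limit: %4.0f m/s, %5.1f Wh/kg" % (cnt, specific_energy_wh_per_kg(cnt)))
```

Under these assumed properties the CNT rim exceeds the 1000 m/s tip speed noted in the topic and stores well over 30% more energy per kilogram, consistent with the scaling argument in the description.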
Guidance System on a Chip
Small munitions and individual warfighter launchable unmanned systems place a premium on the volume and weight available to both primary and payload systems. For precision weapons, the current state of the art in guidance systems was developed for larger diameter systems (81mm and above) and is simply too large and consumes too much power to meet the needs of future precision weapon roadmaps (targeting 60mm and below). An innovative solution that combines all of the required processing, memory, inertial sensing, and multi-interface capabilities into a single microchip device will enable precision weapons as small as 30 to 40mm in diameter to be realized. For unmanned systems, this will help to fully maximize the available payload volume. For both manned and unmanned systems, it will decrease power system requirements. The guidance system will reduce cost and supply chain issues through part count reduction and commonality across procurements. Reductions in size, weight, and power of the guidance system will also benefit larger caliber systems. The risk lies in combining electronics with different substrates and manufacturing processes. The new guidance system will need to be a printed circuit board (PCB) mountable, self-contained device, no larger than 13mm (L) x 13mm (W). Internal packaging can be single or multiple die, as long as the whole device, when properly secured to the PCB, can be hardened to survive 50,000g acceleration loads. The embedded inertial sensors should include, at a minimum, a 3-axis rate sensor and a 3-axis accelerometer, with a desire for a 3-axis magnetometer. The I/O connections need to include 4-channel motor control (including actuator feedback), external analog sensor inputs, external digital sensor inputs, high speed serial for diagnostic/telemetry data, fuze communications, and a re-programming interface. PHASE I: Define and develop a concept for a Guidance System on a Chip that can meet the requirements listed in the description. 
Identify concepts and methods for integrating the required components and capabilities, possibly from different substrates and manufacturing techniques, onto a single chip die, or integrated multi-die package. PHASE II: Complete detailed design and layout of the chip solutions defined in Phase I. Conduct modeling and simulation of the chip designs to reduce fabrication error risk and validate shock survivability of selected packaging solution. Phase II includes first fabrication run of chip(s), with a possible Phase II option concluding with testing the fully packaged device. PHASE III: Integrate the prototype microchip device design from Phase II into the current USMC Guided Projectile (Mortar, Artillery, Rocket, Shoulder Launched), ONR 30 Guided Projectile S&T, and/or the Hyper Velocity Projectile (HVP) projectile. It will be tested in shock environment, Hardware-in-the-Loop, and live fire testing.
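As an illustration of the onboard processing such a device must host, the sketch below implements a basic complementary filter fusing the required 3-axis rate sensor (fast but drifting) with the accelerometer's gravity reference (noisy but drift-free) into a pitch estimate. The sample rate, filter gain, and static test case are illustrative assumptions, not program requirements.

```python
# Sketch: complementary-filter attitude estimation of the kind a guidance
# system on a chip would run, blending rate-sensor integration with an
# accelerometer-derived gravity reference. Gains and rates are assumptions.
import math

def complementary_filter(gyro_rates, accels, dt=0.001, alpha=0.98):
    """gyro_rates: pitch-rate samples (rad/s); accels: (ax, az) samples in g.
    Returns the fused pitch estimate (rad) after the final sample."""
    pitch = 0.0
    for q, (ax, az) in zip(gyro_rates, accels):
        pitch_gyro = pitch + q * dt        # integrate the rate sensor
        pitch_accel = math.atan2(ax, az)   # pitch implied by the gravity vector
        # High-pass the gyro path, low-pass the accelerometer path
        pitch = alpha * pitch_gyro + (1 - alpha) * pitch_accel
    return pitch

# Static test case: no rotation, body held at a 10 degree pitch, so the
# accelerometer alone should pull the estimate to the true angle.
n = 5000
true_pitch = math.radians(10.0)
gyro = [0.0] * n
acc = [(math.sin(true_pitch), math.cos(true_pitch))] * n
est = complementary_filter(gyro, acc)
print("estimated pitch: %.2f degrees" % math.degrees(est))  # prints 10.00
```

On real hardware this loop would run per-axis at the sensor sample rate alongside the motor-control and telemetry I/O the topic requires, which is the integration burden motivating a single-die or multi-die package.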
Attack Sensitive Brittle Software
Critical cyber systems are subject to attacks by the enemy. Generally, resilience and survivability are considered desirable properties for software in these systems. Such systems aim to remain operational, though at a degraded state, when they are compromised. However, this is not always desirable. There may be circumstances where it is preferable for the software to be brittle and simply crash when attacks/exploits are successful. This would be the case when integrity and confidentiality requirements are much more important than availability requirements. Compromised or misbehaving software could be much more dangerous than simple unavailability. Another circumstance is when redundant and diversified backup systems are readily available. A faster failure and timely switch-over would minimize the disruption/damage and actually enhance overall resilience. Brittleness would be a plus in these circumstances. The aim is for the crash to occur as soon as possible after a successful attack, once program control is lost. The objective of this solicitation is to develop transformation mechanisms for software that increase brittleness and achieve the “fast-crash” property. The transformation mechanisms should be generic and able to be applied to third party software. They should also preserve the functionality of the original software and incur relatively small computational overheads. We are interested in transformations that act on all layers including binary, byte code, and source code. The “fast-crash” property can be quantified as the number of instructions/jumps/stack operations that get executed after control of the program is lost. Compromised software will eventually crash; our goal is to speed up the crash so that malicious behavior cannot take place before it occurs. As such, “fast-crash” should occur within a handful of instructions/branches. Techniques that enhance resilience and survivability of code segments and systems have been an active area of research. 
We are looking for techniques that do the opposite. There is currently little research that explicitly aims to create brittleness in software. However, it is often the side-effect of some security techniques especially in the context of artificial diversity and randomization. For example, address space layout randomization (ASLR) aims to prevent buffer overflow attacks by making it difficult to reliably jump to a particular exploited function in memory [4,5]. Attempted attacks would crash the program by attempting jumps to invalid addresses. ASLR would be considered an effective “fast-crash” mechanism. We are looking for other mechanisms of achieving the “fast-crash” property against different categories of attacks such as return oriented programming (ROP). We would consider application, customization, or assessment of existing security mechanisms in the context of “fast-crash”. However, novel mechanisms will be prioritized especially if they are able to be simpler, lighter weight, and more dependable than mechanisms meant for other purposes. It’s important to note that “fast-crash” is different from active integrity verification techniques that directly identify attacks. We are NOT as interested in mechanisms that monitor the system and trigger a crash when attacks are detected. Ideally, the system crash should simply be a byproduct or guaranteed side-effect of a successful attack and compromise. The system should crash quickly and consistently after faults that are associated with deliberate attacks. At the same time, the system should not crash when attacks are not taking place or after random faults. The success of the result will be measured by both the false-positive rate and the false-negative rate. This SBIR topic will also consider active techniques if they are novel, generalized, and easily deployed. Another preference is for techniques that do not require source code and can be applied to raw binaries the military already deploys. 
PHASE I: Develop and discuss the approach for the “fast-crash” transformation mechanism. Demonstrate the feasibility and effectiveness of the mechanism on arbitrary code segments. Prototypes of components that demonstrate the concept will be desirable. PHASE II: Based upon Phase I effort, develop and demonstrate a fully functioning prototype of the “fast-crash” system on the provided experimental platform. Validate its efficacy during normal system operations and in the face of arbitrary attacks. PHASE III: Upon successful completion of Phase II effort, the performer will provide support in transitioning the technology for Navy use as required. The performer will develop a plan for integrating the product into the Navy’s HM&E software control systems and supervisory systems.
Compact Air-cooled Laser Modulate-able Source (CALMS)
Today, flexible compact laser sources in the UVA (315 nm - 400 nm) are not available for lab/field testing or other military applications. Technology solutions to this problem are needed in several key areas: 1) increasing the output power of individual laser modules operating in the UVA spectrum, 2) developing the capability to efficiently combine the outputs of multiple laser modules into a single optical fiber for delivery to an optical pointer, and 3) combining these technologies into a compact system package that is suitable for applications with severe size and weight constraints. State of the art off the shelf diode lasers in this band are 250 mW or less. The ideal system design would be able to accommodate 3 or more lines anywhere within the UVA band, provide greater than 3W power output, be electronically driven from an external pulse source (1-100% duty cycle pulses), permit 2 to 3 orders of magnitude amplitude control, quickly switch between waveforms (DC thru 10kHz), and couple to a 100 micron core fiber output. PHASE I: Formulate and develop a UVA laser system concept that can accommodate three or more lines anywhere within the UVA band, provide greater than 3W power output, be electronically driven from an external pulse source (1-100% duty cycle pulses), permit 2 to 3 orders of magnitude amplitude control, quickly switch between waveforms (DC thru 10kHz), and couple to a 100 micron core fiber output. If the laser is not truly continuous wave, then pulse repetition frequencies of greater than 100 kHz and pulse widths greater than 10ns are required. The volume of the full system shall not exceed 75 cubic inches. The system concept needs to specify all pertinent design details and explain why all materials chosen are believed to be suitable and capable of meeting the desired specifications. A detailed test plan must be developed explaining how a prototype would be validated during Phase II. 
The Phase I effort will not require access to classified information. If need be, data of the same level of complexity as secured data will be provided to support Phase I work. PHASE II: Upon successful completion of Phase I, the Phase I design will be built and validated using the test plan developed in Phase I. The Phase II effort may require access to classified information. If as a result of the firm's proposed effort access to classified data is required during Phase II, the small business will need to be prepared to obtain appropriate personnel and facility certification for secure data access. PHASE III: The product is expected to transition into military systems. The system could be integrated into existing systems or future developmental programs.
In-Transit Visibility Module for Lifts of Opportunity Program (LOOP) & Transportation Exploitation Tool (TET)
The United States Transportation Command (USTRANSCOM) plans and executes worldwide movement of cargo and people at sea, on land, and in the air, launching an average of 1,700 movements a day. Navy Fleet/Force transportation requests within USTRANSCOM are routed to specialists who focus on satisfying requirements using a single mode of transportation, with minimal coordination between them and their counterparts. To avoid the high cost of dedicated USTRANSCOM Special Assignment Airlift Missions (SAAM) or contracted commercial flights, the Navy developed and implemented a Transportation Exploitation Tool (TET) prototype. TET helps planners find cargo capacity on SAAMs or other conveyances that can support mission requirements, resulting in significant cost avoidance for the urgent movement of cargo worldwide. Once a mission is booked, transportation planners and fleet forces require the ability to accurately track urgently needed cargo as it moves from its origin to its destination and through all transportation nodes, and then allow planners to close out the mission. The ITV of critical supplies throughout the movement will have several important benefits: 1) it allows the team to plan for material handling and other special equipment and personnel needed to load the cargo at transportation nodes, 2) it provides early identification of problems or delays during the mission that may necessitate re-planning, and 3) increased visibility helps ensure critical items are not needlessly re-ordered. The current state of the art for large-scale DoD ITV systems uses radio frequency identification (RFID) tags and a network of read/write stations (to include satellites). Systems like the Army's Joint-Automatic Identification Technology (J-AIT) have over 1,900 stations alone. These types of systems are costly and manpower intensive. The Navy's manpower and funding limitations require an innovative solution for an ITV capability using data that is already available. 
The focus of the ITV module is threefold. First is development of software tools (Intelligent Agents (IAs) and semantic services) that can take disparate sources of data (real-time feeds, near-real-time feeds, manual entry, and historical references) and fuse them to accurately infer location data of transported items with high confidence. Second is development of additional decision support tools to aid planners. This includes automatic generation of alerts and tracking of mission performance metrics. Third and final is development of predictive models that will use the same data sources to anticipate arrival times of logistical items. The capability to predict arrival times (updated as the current location changes) of goods in finer granularity than current methodologies is necessary to facilitate planning. Using machine learning methodologies, the predictive models should be able to learn from previously executed missions to identify trends that support successful mission planning and execution. Challenges include development of a robust IA that can digest large quantities of data and mine pertinent information (specified and inferred) on specific items. Data may need to be semantically connected (for example, connection of the cargo ID to a strategic mission with its flight schedule and inflight tracking data) in order to derive relevant mission data (location, time to arrival, and probability of success for completing the next mission leg). The software tools need to be accurate, reliable, and able to run efficiently on handheld operating systems compatible with the Ozone Widget Framework (OWF). The existing TET system runs as a web service on the NIPRNET/SIPRNET (the Department of Navy's unclassified and classified internal computer networks). 
For this effort, the ITV module will initially attain static or live data feeds from existing TET services through an existing systems integration laboratory (SIL) environment (the government will ensure access to the SIL for the awarded companies) at The Pennsylvania State University (PSU) Applied Research Laboratory (ARL), independent of Department of Defense (DoD) networks but funded by the government. The SIL environment with virtual machines will allow experimentation and integration without the need for operational security and information assurance certification until the system is actually deployed. Message communications between TET and the ITV service is required; for example an alert of a missed transportation connection would be pushed back to TET to initiate dynamic re-planning of the mission. Examples of communication channels that need to interface with TET/ITV: Marine Corps' Tactical Service Oriented Architecture (TSOA), Naval Tactical Cloud (NTC). Examples of incoming data feeds provided via the SIL: IDE/GTN Convergence (IGC), Global Decision Support System (GDSS), Federal Aviation Administration (FAA), National ITV Server, Global Air Transportation Execution System (GATES), Weather, Coast Guard Automated Information System (AIS) or equivalent. Real time feeds will ultimately be pursued with support from the government. PHASE I: Define and develop a concept and software architecture for an ITV module that can infer location of a logistical item using available real time feeds, near-real time feeds, and historical data. The ITV module architecture shall support communication with NAVSUP's Transportation Exploitation Tool (TET) to receive mission details, data feeds, and to transmit alerts or other messages back to TET. The performer will define decision support tools to capture critical metrics of current/historical mission performance. 
These tools require automatically generated alerts for situations that may hinder mission success and that prompt actions from planners. In addition, develop predictive models that provide real/near-real time updates of arrival times for logistical items. The performer will also define an open source widget based approach to enable mobile/handheld devices to interact with the ITV module. At the conclusion of Phase I, the performer should provide a viable path forward for the implementation of the concept ensuring coupling of communication with the TET tool. The small business shall deliver architecture views, describe the software's major functions, describe the user interface, outline data messaging functions, and describe the proposed software development process, schedule, risk, and cost. PHASE II: Develop and demonstrate a handheld ITV prototype based on Phase I efforts in a SIL environment. The software shall demonstrate all major functionality to include ITV and prediction of arrival of TET missions in execution, implementation of decision support tools (i.e., real-time alerts), and collection of key performance metrics for use in future analytic models. This includes validating the ability of the ITV module to locate an object for a lift of opportunity mission in execution with high confidence and to accurately predict arrival times within a low margin of error, using only the data feeds provided to the performer through ONR and PSU ARL. The ability of the module to mine, digest, and derive object location, in the finest granularity possible, in a timely manner (within a few minutes) is necessary to ensure an appropriate planning capability. Alerts and other desired analytical and decision support aspects will also be demonstrated. Phase II will include a government evaluation by transportation planners at NAVSUP in Norfolk. 
PHASE III: Phase III will focus on the seamless integration of the ITV module with TET in a cross-domain NIPRNET/SIPRNET environment by transportation planners in Norfolk, VA. This includes refining and implementing enhancements to existing software functions and the addition of functions that support transportation planning and ITV. The performer will work closely with NAVSUP and the program of record for the TET application to ensure that both TET and ITV components work in an efficient and effective manner in a relevant environment during planning missions. This effort includes software verification and validation functions and completion of information assurance tasks necessary for deployment on NIPRNET/SIPRNET.
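As an illustration of the TET/ITV messaging described above (an alert pushed back to TET to trigger dynamic re-planning), the sketch below shows one hypothetical shape such a message might take. The field names, the `make_missed_connection_alert` helper, and the JSON encoding are illustrative assumptions, not a defined TET or ITV interface.

```python
# Hypothetical sketch of a TET-bound ITV alert message; every field name and
# value here is an illustrative assumption, not a defined TET/ITV schema.
import json
from datetime import datetime, timezone

def make_missed_connection_alert(mission_id: str, item_id: str,
                                 predicted_eta_utc: str,
                                 confidence: float) -> str:
    """Build an alert the ITV module might push to TET to trigger re-planning."""
    alert = {
        "type": "MISSED_CONNECTION",
        "mission_id": mission_id,
        "item_id": item_id,
        "predicted_eta_utc": predicted_eta_utc,
        "confidence": confidence,   # 0.0-1.0 location-inference confidence
        "issued_utc": datetime.now(timezone.utc).isoformat(),
        "action": "REQUEST_DYNAMIC_REPLAN",
    }
    return json.dumps(alert)

msg = make_missed_connection_alert("M-1234", "PALLET-0042",
                                   "2015-08-01T14:30:00Z", 0.87)
print(msg)
```

A confidence field of this kind would let TET planners weigh an alert from inferred location data differently from one backed by a direct RFID read.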
Advanced UHF SATCOM Satellite Protection Features
More than 60 percent of Satellite Communications (SATCOM) users are supported by the Ultra High Frequency (UHF) band. The Navy’s Communications Satellite Program Office (PMW 146) acquires UHF SATCOM satellites for the Department of Defense (DoD). The current operational UHF Follow On (UFO) satellites will soon be replaced by the Mobile User Objective System (MUOS) constellation, which should be fully launched by 2017. Soon the Navy will investigate potential next generation UHF SATCOM systems. Acquisition of the system that follows MUOS will likely start in the next few years. Enhanced protection features will be critical for that system. Since the MUOS satellites were designed, it has become apparent that the space domain “is becoming increasingly congested, contested, and competitive” as described in the National Security Space Strategy. The next generation of satellites must be protected from a number of threats including radio frequency interference. Many previous research efforts have developed technology to protect terrestrial UHF SATCOM terminals. These efforts aimed to protect terminals against local interference in the SATCOM downlink (space to ground link) or against interference in the SATCOM uplink (ground to space link) that was re-broadcast to the ground by the satellite transponder. This effort seeks innovative solutions which are resident on the satellite. The goal is to protect the SATCOM uplink at the satellite instead of protecting the terrestrial terminals. CubeSats or nano-satellites are popular among universities and are gaining momentum with commercial and government organizations. They may be an avenue to demonstrate new electromagnetic interference protection features quickly and at low cost. Such a demonstration path, if applicable to a particular interference protection feature, would likely allow the new technology to be tested in space in time for it to be incorporated into the next generation of UHF SATCOM satellites. 
If the technology is also applicable to nano-satellites, it could find use in emerging nano-satellite programs within DoD. PHASE I: Determine feasibility and develop advanced electromagnetic interference protection features for future UHF SATCOM satellites. Perform analysis, modeling and simulation, or other calculations to establish performance possibilities. Translate design concepts into a product development roadmap establishing a technical and program pathway to an operational capability demonstration. Tasks under this phase include:
• Develop new electromagnetic interference protection concepts for UHF SATCOM satellites
• Create an initial design of the protection feature(s)
• Characterize and explore system trades
• Predict performance parameters
• Implement early prototype(s) and demonstrate in a laboratory environment if applicable
• Evaluate potential CubeSat or nano-satellite demonstration options, if applicable
PHASE II: Develop a prototype electromagnetic interference protection feature(s) and demonstrate it in a space environment.
• Evaluate measured performance characteristics versus expectations and make design adjustments as necessary
• Demonstrate the performance of the electromagnetic interference protection feature
• Demonstrate the technology in a space environment. This could entail integration with a CubeSat or nano-satellite. Another possibility is laboratory-based environmental testing such as thermal vacuum, vibration, radiation, and other testing.
Capabilities demonstrated (e.g. performance characteristics and/or limitations) in Phase II may become classified. PHASE III: Based on the Phase I and II efforts, finalize the design and integrate the electromagnetic interference protection technology into future UHF SATCOM systems.
Cell Free Platforms for Prototyping and Biomanufacturing
There is a critical need for capabilities that will enable DoD to leverage the unique and powerful attributes of biology to solve challenges associated with production of new materials, novel capabilities, fuels, and medicines. This topic is focused on improving the utility of cell-free systems as a platform technology to address key technical hurdles associated with current practices in engineering biology. A successful platform should address several or all of the bottlenecks associated with the state-of-the-art in cell-free systems, including production of cell-free reagents with improved consistency and scalability, improved methods for characterizing and validating cell-free reagent preparations, new cell-free systems to expand the number of organisms capable of being modeled, and improved reproducibility of results over scaled volumes. In addition, these cell-free platforms should be distributable in a format that can be readily transitioned to academic, government, and commercial researchers, all of whom rely on the ability to rapidly assay engineered biological systems. Biological production platforms have great potential to provide new materials, capabilities, and manufacturing paradigms for the Department of Defense (DoD) and the Nation. However, the complete realization of this potential has been limited by current approaches to engineering biology that rely on ad hoc, laborious, and time-consuming processes, as well as the large amount of trial and error required to generate designs of even moderate complexity. One technology that could address many of these bottlenecks is the use of cell-free systems for the rapid prototyping and testing of biological systems. Conventional approaches to engineering genetic systems rely on molecular cloning into DNA vectors, transformation or transfection of cells, antibiotic resistance-based selection, growth in appropriate media, and assaying cells for the desired function. 
While significant progress has been made toward improving these processes, engineering living cells is inherently costly, slow, and complex. By short-circuiting many of the steps required for in vivo gene expression, cell-free systems offer several advantages that could potentially transform the state-of-the-art, including reduced cost, increased throughput, decreased system complexity, and the ability to be utilized in a distributed setting. In addition, cell-free systems enable the production and testing of cytotoxic compounds, the prototyping of pathways with toxic metabolic intermediates, and the production of molecules, such as proteins containing non-standard amino acids, that are difficult to engineer into living systems. Although the use of cell-free assays has significant potential to rapidly engineer and test biological systems, several technical hurdles remain that have prevented widespread adoption of the technology. Methods for preparing reagents used in cell-free experiments are often inconsistent, which can lead to irreproducible results. In addition, current methods do not produce batches of a sufficient volume of high-quality reagent to enable widespread use. Furthermore, existing internal controls are insufficient for the complete characterization and validation of reagents, which makes instituting process controls difficult. The cell-free platform itself also requires improvement, as only relatively simple biological processes have been demonstrated, and in only a handful of organismal environments. PHASE I: Develop an initial design and determine the technical feasibility of a technology platform for the consistent and large-scale production of cell-free reagents from multiple organisms, including methodologies for characterization and validation. 
Develop a detailed analysis of the cell-free platform’s predicted performance characteristics including, but not limited to, total volume of reagent to be produced, batch volume and variability, organisms to be utilized, cost per unit, and distribution format. Include analysis of predicted performance relative to current standard practices. Define key component technological milestones and metrics and establish the minimum performance goals necessary to achieve successful execution of the cell-free platform. Phase I deliverables will include: a detailed analysis of the proposed platform, a technical report detailing experiments and results supporting the feasibility of the approach, and defined milestones and metrics as appropriate for the program goals. Also included with the Phase I deliverables is a Phase II plan for transitioning initial designs and proof-of-concept experiments into protocols that are sufficiently robust and reproducible to be viable as commercial technologies. PHASE II: Finalize the design from Phase I and initiate the development and production of the cell-free platform. Establish appropriate performance parameters through experimentation to determine the efficacy, robustness, and fidelity of the approach being pursued. Develop, demonstrate, and validate the reagents and protocols necessary to meet the key metrics as defined for the program, and provide an experimentally validated comparison of the new methods relative to competing state-of-the-art processes. Phase II deliverables include a prototype set of cell-free reagents, including reagents for new organismal systems, and validated test data appropriate for a commercial production path. 
PHASE III: The widespread availability and use of cell-free systems will further enable the rapid engineering and optimization of biologically-based manufacturing platforms for the production of previously inaccessible technologies and products, and will facilitate the rapid prototyping of multi-pathway metabolic designs necessary for the engineering of complex biological systems. This will enable DoD to leverage the unique and powerful attributes of biology to solve challenges associated with production of new materials, novel capabilities, fuels, and medicines, while providing novel solutions and enhancements to military needs and capabilities. The successful development of reliable and distributable cell-free platforms for rapidly prototyping biological systems will have widespread applications across the biotechnology and pharmaceutical industries including rapid, optimized production of high value chemicals, industrial enzymes, diagnostics, and therapeutics. These cell-free platforms will be impactful for industrial biotechnology and pharmaceutical firms, as well as government and academic research-scale operations.
Cortical Modem Systems Integration and Packaging
The DoD has a critical need for breakthrough medical therapies to treat wounded warriors with multiple comorbidities of sensory organs. This topic seeks to integrate state-of-the-art electronics, packaging, and passivation technologies with the latest low-power data and power delivery semiconductor components in a single package. In other words, DARPA seeks to wirelessly bridge cortical neural activity sensing components within the skull to external computing and network systems, designing an effective “Cortical Modem” that connects human brains to computer equipment and networks in a direct analogy to early telephonic modems, which connected computers to the ARPANET. DARPA is open to a multiplicity of system architectures that, first and foremost, demonstrate significant improvements in the scale of neural channel bandwidth from the current 100-signal demonstrations, but secondly, may span a wide spectrum of implementation strategies, from high-bandwidth transmission systems with limited implantable computation capability to implantable integrated analysis and compression systems coupled to limited-bandwidth telemetry systems. Significant advances in the miniaturization and ever lower-power performance of electronic and photonic technologies have enabled critical developments in miniaturized communications products like cellular phones. However, the time lag between such advances and their adoption in the fields of neuroscience and neuro-engineering has, in many cases, grown to more than twenty years. With the large interface component feature sizes characteristic of the older technologies in common experimental use, the supporting interface electronics have now become one of the most significant and fundamental limits to their integration within human and animal bodies. For example, the Utah array features a 400 micrometer electrode pitch, a limitation compounded by the wet etch microfabrication technology available to the manufacturer. 
Note that this 400 micron feature size is representative of 1980s CMOS technologies, and is too coarse for interfacing with, for example, the visual cortex, where neural pitch ranges from ten to thirty microns. As the mobile computing industry continues to push miniaturization, functionality, and power-consumption requirements to their limits, so too is the field of neuroscience pushing ever closer to full-duplex single-neuron scale interfaces. With focused technology development and integration to build a Cortical Modem, the necessary critical electronics and packaging could be leveraged across the entire academic and corporate neuroscience ecosystem, resulting in dramatically accelerated advances in science and commercialization of neuroscience technologies. The goal of this topic is to develop cortical modem components that substantially improve the scale of signal transduction from the current 10x10 electronic probe arrays, as well as the scale of telemetry delivery of those signals. For reference purposes, one mm^3 volume of cortical tissue encloses approximately 100,000 neurons, indicating an eventual need to both transduce and deliver wireless telemetry for as many as 10^7 independent neural channels. Proposals should target the design and implementation of a COTS-based full-duplex cortical interface component. Essential elements of this component include flexible direct electronic interfaces to neural activity, sensors and low-power pre-processing circuitry to convert and encode neural sensor signals into formats that can be transmitted wirelessly across the skull, wireless telemetry suitable for safe use in humans, and power delivery electronics. Packaging must leverage state-of-the-art miniaturized single system-on-a-chip ceramic packaging that incorporates on-board wireless power reception and conditioning circuitry. Critical to the design of the system is a careful power and link budget analysis to account for relevant FDA and FCC regulations. 
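The channel counts above imply a steep telemetry burden, which is why on-implant analysis and compression feature in the architecture trade space. A back-of-envelope sketch follows; the per-channel sample rate, bit depth, and transcranial link rate are illustrative assumptions, not program requirements.

```python
# Rough telemetry-bandwidth arithmetic motivating on-implant compression.
# Sample rate, bit depth, and link rate are illustrative assumptions only.
NEURAL_CHANNELS = 10**7      # eventual channel count cited in the topic
SAMPLE_RATE_HZ = 1_000       # assumed per-channel sample rate
BITS_PER_SAMPLE = 10         # assumed ADC resolution

raw_bps = NEURAL_CHANNELS * SAMPLE_RATE_HZ * BITS_PER_SAMPLE
print(f"raw uplink: {raw_bps/1e9:.0f} Gbit/s")   # 100 Gbit/s for these values

# A practical transcranial link would likely carry far less, so the implant
# must reduce data (e.g. spike detection/compression) by a large factor:
LINK_BPS = 100e6             # assumed 100 Mbit/s safe transcranial link
print(f"required reduction: ~{raw_bps/LINK_BPS:.0f}x")
```

Under these assumptions the implant would need roughly three orders of magnitude of data reduction before the telemetry link, which is the trade-off between implantable computation and link bandwidth that the topic describes.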
In addition, proposals should detail the intended components (i.e., make, model, and part numbers), their interface design, and the technical and mechanical specifications that will ultimately yield the lowest-power, smallest form-factor, highest signal-to-noise ratio and bandwidth system possible using COTS components. Critical systems integration challenges must be addressed explicitly in the proposal. Technical challenges and considerations include system power, transmission bandwidth, frequency and data rates, transmission protocols, optical wavelengths, etc. Offerors are to first uncover and understand the critical integration challenges that may limit the translation and commercial viability of full-duplex cortical interfaces, and second to push the standards of integration by producing a first generation of truly miniaturized and implantable interface componentry, thereby accelerating innovation across the entire field of neuro-engineering. Industrial and military collaborators should then produce products and reach their first commercialization milestones on a similarly accelerated timeline. Technical challenges may include:
• The development of a standard interface between a multiplicity of different neural sensing components and the data collection and transmission system.
• Maximizing the scalability and bandwidth-power product of both the internal neural sensing and external wireless data and power interfaces, but doing so within safe heat dissipation limits of the outer cortex and skull.
• The potential need for data translation and encoding components to minimize power requirements for transcranial data and power delivery.
• Establishing optimal trade-offs between physical, electronic, and data transmission specifications required to minimize the componentry bill of materials (BoM) and hence the size of the device that needs to be implanted. 
• Sourcing state-of-the-art packaging and system-on-a-chip prototyping support.
• Determining optimal bio-material passivation strategies and packaging materials limitations.
• Determining optimal power-bandwidth tradeoffs and scalability to support increasing sensor density, resolution, and sensitivity limitations.
PHASE I: Explore and determine the fundamental systems integration and packaging limitations (common across the entire neural interface field) in implementing a full-duplex read/write neural interface system that bridges data and power delivery across the human skull. Phase I deliverables: 1) A final report that identifies the neural read/write signal modalities (not necessarily required to be the same); details the technical challenges relevant to the read and write signals within the deployment environment; quantifies the information limits of the system relative to the information input/output of the cortical area of interest; details component-level metrics for coping with the data and power requirements; describes the integration process and system-level challenges; and provides a thorough business plan describing the NRE costs, minimum rate of production, units per year required to achieve sustainable production of a cortical modem, and market analysis. 2) A fully operational proof-of-concept demonstration of the key components and functional systems in a bench-top / PC-board scaled prototype, along with all design documents and complete specifications, and documentation of committed sources and service providers for the fabrication of the ultimate integrated system-on-a-chip Cortical Modem device to be produced in Phase II; full specifications and a complete BoM are required, itemizing each component and system that comprises the final prototype system. These demonstrations should be performed in relevant in vitro environments analogous to the final deployment environment in the human skull and cortex. 
PHASE II: Development, demonstration, and delivery of a working fully-integrated cortical modem at a 1:1 physical scale with the underlying neurons. The Phase II demonstration should operate within a physical simulacrum that mimics as closely as possible the electrical and mechanical properties of human cortex, skull, and scalp. The integrated system should leverage COTS silicon and electro-optical devices wherever possible, and form a data and power bridge between the internal cortex and external machines. On the cortex side, a modular neural interface architecture should support bi-directional communications through a multiplicity of neural probe modalities, including, but not limited to, optical, electronic, and bio-molecular sensing interfaces. The external interface should comprise a wireless interconnection through intervening brain and skull tissue to external computing systems. Proposers are encouraged to adopt modular componentry strategies that are generalizable to a wide range of neural interfaces. The Cortical Modem system should be able to collect and transmit neural signals through the skull in a complete, implantable package. It will have a form factor and packaging that can be implanted in the cortex, with core system functionality provided by COTS semiconductor components in a single ceramic system-on-a-chip package, rather than a fully-customized chipset. The Phase II final report shall include (1) full system design and specifications detailing the electronics and proof-of-concept neural interfaces to be integrated; (2) expected performance specifications of the proposed components in vivo; and (3) calculations of energy and link budget scalability to larger cortical regions. PHASE III: Breakthrough medical treatments for wounded warriors with multiple comorbidities of the sensory organs. Effective restoration of sight, sound, smell, and vestibular sensation after massive head trauma. 
Breakthrough medical treatments for upper spinal cord injuries, enabling restoration of motor and sensory capability. Breakthrough medical treatments for diseases of sensory organs, providing sight and sound to treat indications not possible through use of current retinal prostheses and cochlear implants.
Broadband Self-calibrated Rydberg-based RF Electric Field and Power Sensor
There is a critical need for capabilities that will enable the DoD to have self-calibrated electric field and power sensors in the RF, microwave, and millimeter-wavelength regimes. This topic seeks the demonstration of a portable broadband (1 GHz – 1 THz) electric field sensor, power sensor, or key components toward such a device. The sensor should be capable of operating in greater than 1 kV/m electric fields so as to be usable for high-energy DoD applications. The electric field and power measurements must be SI-traceable to remove the need for the recalibration process. Furthermore, the electric field-sensing device should be capable of sub-wavelength imaging of RF electric fields with spatial resolutions exceeding 10 μm. Many DoD and commercial applications critically rely on using calibrated electric field and power sensors in the RF, microwave, and millimeter-wavelength regimes. Currently no self-calibrated sensor exists in the 100 GHz – 1 THz frequency band. Typical detectors in the sub-THz frequency range are antennas, which inherently perturb the field they are trying to sense, resulting in greater than 5% measurement errors. Antennas have the further limitation that they are narrow-band detectors. An SI-traceable sensor in the 1 GHz – 1 THz range would remove the need for costly recalibration of older devices and would replace many narrow-band antennas with a single low-SWaP device in a handheld package. Quantum sensors based upon Rydberg atoms offer the potential of traceable calibration, high sensitivity, wide spectral coverage, and high power capability. In addition to DoD applications, a Rydberg field and power sensor would have numerous commercial applications: circuit design [1, 2], biological sensing, aeronautics applications, and mobile communication. This technology would not only verify circuit design but inform it by employing sub-wavelength RF field imaging of the complicated electronic fields from various dense circuits and metamaterials [1, 2]. 
Current technology employing electromagnetically induced transparency (EIT) in Rydberg atoms in an atomic vapor cell is a promising route but requires further development in order to achieve DoD functionality. These devices function by converting an electric field amplitude into a measurable frequency splitting that is SI-traceable. The electric field magnitude E is given by |E| = ℏΔf/P, where ℏ is Planck’s constant divided by 2π, Δf is the measured frequency splitting, and P is the transition dipole moment. Current work has demonstrated sensitivities of 3 μV/sqrt(Hz), measuring electric fields as low as 7.3 μV/cm and up to 40 V/m in a 1-130 GHz frequency range. These results are the first calibrated field measurements in the 100 GHz – 1 THz frequency band to date. Employing this technique to image RF electric fields has resulted in sub-100 μm spatial resolutions for electric fields with frequencies up to 104 GHz [2, 10]. The fabrication of micrometer-sized vapor cells is one of the more challenging technological developments necessary for these sensors. The size of these vapor cells must be reduced to at least one quarter of the length of the minimum wavelength of interest in order to prevent variations in the measured RF fields produced by standing waves. These cells must be all-dielectric, made of quartz or Pyrex for example, and must be filled with alkali atoms such as Rb and Cs or a mixture of atomic species. The fabrication of micrometer-sized vapor cells suffers from atomic adsorption to the cell walls. These vapor cells must employ a mitigation technique for the reduced vapor pressure, such as novel coatings or materials, infrared-absorbing glass bonded to the outside of the cell for IR heating, or optical coupling mirrors bonded to the cell to form optical resonators for enhanced atom-light interaction. Such vapor cell production would benefit not only electric field sensing but also atomic vapor-based magnetometry. 
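As a worked example of the splitting-to-field relation above, the sketch below evaluates |E| = ℏΔf/P for illustrative values. The convention of treating the measured splitting in Hz as an angular frequency via a 2π factor (so ℏ·2πΔf = hΔf), and the ~1000 e·a0 dipole moment, are assumptions for illustration, not values given in this topic.

```python
# Illustrative evaluation of the Rydberg field-to-splitting relation
# |E| = hbar * (2*pi*delta_f) / P, with assumed (not topic-specified) values.
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
E_A0 = 8.4783536e-30     # atomic unit of dipole moment e*a0, C*m

def rydberg_field_v_per_m(delta_f_hz: float, dipole_ea0: float) -> float:
    """Electric field magnitude from a measured EIT/Autler-Townes splitting."""
    dipole_si = dipole_ea0 * E_A0            # transition dipole moment, C*m
    omega = 2 * math.pi * delta_f_hz         # angular splitting, rad/s
    return HBAR * omega / dipole_si          # V/m

# e.g. an assumed 1 MHz splitting on a transition with a ~1000 e*a0 dipole
E = rydberg_field_v_per_m(1.0e6, 1000.0)
print(f"|E| = {E:.4f} V/m = {E/100:.6f} V/cm")   # ~0.08 V/m for these values
```

Because the only inputs are a measured frequency and fundamental constants (plus a calculable atomic dipole moment), the measurement inherits SI traceability without an external field standard, which is the self-calibration property the topic emphasizes.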
Atomic vapor magnetometry currently provides the most sensitive magnetic field measurements, but it does not have high spatial resolution because it is limited to integration over the vapor cell length. Commercially available micrometer-sized atomic vapor cells would allow for the extension of atom-based magnetometry into a different spatial resolution regime [12, 13]. PHASE I: Demonstrate the operation of key components toward the electric field or power sensor in a laboratory setting, such as: broadband measurements (100-250 GHz), electric field sensitivities better than 100 μV/cm, circuitry imaging with better than 50 μm spatial resolution, or fabrication of an alkali vapor cell with sub-mm length scales and the development of a technique to mitigate reduced vapor pressures. Phase I deliverables include a final report that documents the results of each demonstration and design concepts to extend the measurement space to 1 GHz - 1 THz, improve the spatial resolution, and detail an experimental method to use the device in a high electric field environment (greater than 1 kV/m). PHASE II: Construct and demonstrate a breadboard system with a path toward a portable device. If the performer is developing components, fabricate the miniaturized alkali vapor cell to less than a 100 μm length. Phase II deliverables: 1) a demonstration in a simulated or relevant environment achieving broadband measurement (1 GHz – 1 THz), detection of electric fields below 1 μV/cm, and sub-wavelength imaging with better than 10 μm spatial resolution; 2) a final report that documents the results of the demonstration and the specifications of the fabricated alkali vapor cell; and 3) completed designs for a portable prototype. This phase is expected to reach TRL 5. PHASE III: If successful, this technology could transition to multiple DoD offices and could eventually replace current 1 GHz – 1 THz electric field and power sensors, removing the need for recalibration against standards. 
This device could also be commercially viable for examining densely packed microwave circuit designs, imaging the electric fields with sub-100 μm resolution to strongly inform and guide circuit design. Development of the micrometer-sized alkali-based vapor cells would be commercially usable for atomic vapor-based magnetometry, opening new realms of spatial resolution for the most sensitive magnetometers. Such vapor cells could also have potential use in the timing community.
Many-Core Acceleration of Common Graph Programming Frameworks
Today there is a DoD need for graph analytics capabilities, which are critical for a large range of application domains with a vital impact on both national security and the national economy, including, among others: counter-terrorism, fraud detection, drug discovery, cyber-security, social media, logistics and supply chains, and e-commerce. Widely used graph development frameworks have enabled online (but not real-time) graph analytics for broad classes of problems at modest data scales and support only offline analytics for very large data scales. The Facebook graph today has over 1 trillion edges. A single iteration of a graph traversal takes up to 3 minutes using Apache Giraph on 200 commodity CPU servers. A full breadth-first traversal of the graph could take nearly 20 minutes, and algorithms that relax to a solution can require 50-100 iterations, implying that it could take several hours to compute the PageRank of the Facebook graph. Bringing analytics within these graph programming frameworks into real time on large graphs requires that they be able to leverage the computing advances in multi-core platforms. However, scalable, data-parallel graph analytics on many-core hardware is a fundamentally hard problem that goes well beyond the current state of the art. Graph data models and algorithms are used for network-structured data, when the data are poorly structured, or when complex relationships must be drawn from multiple data sets and analyzed together. Graph operations are inherently non-local and, for many real-world data sets, that non-locality is aggravated by extreme data skew. Graph analytics are data intensive rather than compute intensive, which means that memory and network bandwidth are the bottlenecks for graph processing. 
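The runtime arithmetic quoted above (3 minutes per iteration, 50-100 iterations to converge) can be checked directly; a minimal sketch using the figures from the text:

```python
# Back-of-envelope check of the runtime figures cited in the topic text
# (illustrative numbers from the solicitation, not new measurements).
MIN_PER_ITERATION = 3           # one Giraph traversal of the ~1T-edge graph
PAGERANK_ITERS = (50, 100)      # typical iterations for relaxation to converge

for n in PAGERANK_ITERS:
    hours = n * MIN_PER_ITERATION / 60
    print(f"{n} iterations -> ~{hours:.1f} hours")
```

At 2.5 to 5 hours for a single PageRank computation, the "several hours" claim in the text follows directly, and it illustrates why per-iteration speedups from many-core hardware compound into the difference between offline and near-real-time analytics.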
Overall, current solutions applied to scaling graph frameworks such as TinkerPop and GraphLab do not have all of the desired attributes integrated. Specifically: (1) solutions based on map/reduce or requiring checkpoints to disk are thousands of times too slow to extract the value latent in graphs for time-sensitive analytics; (2) solutions based on non-updatable data representations are limited in their application to complex analytics; and (3) solutions that provide robust scaling and high performance require specialized programming techniques that are not easily accessible to the existing graph development community. Approaches leveraging many-core technology have significant promise. At the purely hardware level, GPU memory bandwidth is set to jump by 4x by Q1 2016 (Pascal), which should provide a corresponding 4x speedup, taking current 10x-100x speedups over CPUs to 40x-400x. PHASE I: Develop innovative approaches to apply many-core GPU and/or hybrid CPU technologies to existing graph development APIs. The focus should be on framework fidelity, computational scalability, and easing the burden of integration. In addition, develop a detailed analysis of predicted performance of the proposed approach and plans for developing the approach into a comprehensive platform to accelerate a graph framework in Phase II. The Phase I deliverable is a final report documenting the effort and results. PHASE II: Develop a comprehensive implementation of an existing graph framework accelerated for commodity high-performance many-core (GPU) and multi-core CPU technologies using the approaches identified in Phase I. Develop a prototype and establish a preliminary benchmark using various standard problems, and apply the tool to a DoD-relevant problem. Phase II deliverables will include software, a final report documenting the effort, a document describing the architecture, and a user’s manual. 
PHASE III: Real time data ingest and reasoning analytics for military situational awareness platforms. Commercial uses of the accelerated graph framework include a 1000-10000X acceleration of existing graph analytics such as Facebook’s current graph traversal.
Ovenized Inertial Micro Electro Mechanical Systems
There is a critical DoD need for capabilities that focus on temperature stabilization of MEMS inertial sensors to improve bias and scale factor stability. Military operations rely on the satellite-based Global Positioning System (GPS) for precision Positioning, Navigation & Timing (PNT) information. However, GPS is an extremely weak signal, which may be degraded by signal interference or obstructed by environmental factors such as clouds, urban canyons, or other impeding structures. In GPS-degraded environments, critical PNT information must be gathered from alternate sources, such as navigation by the technique of dead reckoning based on acceleration and rotation inputs from an Inertial Measurement Unit (IMU). IMUs based on Micro Electro Mechanical Systems (MEMS) are low in Cost, Size, Weight, and Power (CSWaP), but typically exhibit high calibration environmental sensitivity, particularly to external temperature variation [3,4]. MEMS sensors are early in their development; they have made their way into the consumer market, but the underlying limits to sensitivity and stability are not well understood. This is analogous to the development of crystal oscillators (XO) early in the 20th century. Over the past century, the temperature sensitivity of crystal oscillators has been improved by applying temperature compensation algorithms based on the externally sensed ambient temperature (TCXO). However, the best performing crystal oscillators rely on ovenization of the resonant device to provide the highest stability (OCXO). The evolution of MEMS-based inertial sensors is likely to follow a similar trajectory due to the similarity of vibrating MEMS devices to quartz oscillators. At present, uncompensated MEMS inertial sensors are widely available for commercial applications, and digital temperature compensation (TC-MEMS) devices are emerging. 
Temperature stabilization has been demonstrated to improve the long-term stability and reproducibility of MEMS inertial sensors in academic settings, but it has yet to be transitioned into marketable MEMS-based inertial sensors. This SBIR seeks to develop Ovenized Inertial MEMS (OI-MEMS) with a viable path to commercialization.

PHASE I: Design a concept for achieving tactical-grade inertial sensor performance, as listed below. The sensor should operate on 500 mW in a 0.5 cc package. Phase I deliverables will include a fabrication process flow and a detailed analysis of predicted performance metrics.
Bias stability over temperature (-40 to +85°C): • Gyroscope: 1°/hr • Accelerometer: 1 mg
Scale factor stability over temperature (-40 to +85°C): • Gyroscope: 10 ppm • Accelerometer: 1 ppm
ARW: • Gyroscope: 0.125°/rt(hr) • Accelerometer: 0.5 ft/s/rt(hr)

PHASE II: Develop, demonstrate, and validate the Phase I model predictions; refine fabrication procedures to fine-tune thermal expansion coefficient and second-order effects; conduct life-cycle and environmental testing to verify performance; and manufacture and deliver gyroscope or accelerometer prototypes for government evaluation. Required Phase II deliverables include five packaged sensors with the electronics necessary to operate the Ovenized Inertial MEMS device.

PHASE III: PNT information in the absence of GPS is in very high demand by the military. Current DARPA programs are pursuing self-contained navigation for applications such as missile guidance and mounted and dismounted soldier navigation in GPS-denied environments. Much progress has been made in existing microPNT programs; this SBIR will complement those efforts by addressing the key driver of long-term instability with a fast track to commercialization. Because of the high performance of the OI-MEMS, commercial applications are limited; however, there is a market for high-performance, small-CSWaP inertial sensors in oil drilling and agricultural applications.
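Bias instability and angle random walk (ARW) figures like those in the Phase I goals are conventionally read from an Allan deviation plot of a long static gyroscope or accelerometer recording: ARW from the tau^(-1/2) slope region, bias instability from the flat minimum. A minimal overlapping Allan deviation estimator (the function name and interface are illustrative assumptions) might look like:

```python
import numpy as np

def allan_deviation(rate, fs, taus):
    """Overlapping Allan deviation of a gyro/accelerometer output record.

    rate : 1-D array of sensor output samples (e.g. deg/hr)
    fs   : sample rate [Hz]
    taus : iterable of averaging times [s]
    Returns one Allan deviation value per tau.
    """
    theta = np.cumsum(rate) / fs  # integrated signal (e.g. angle)
    out = []
    for tau in taus:
        m = int(tau * fs)  # samples per averaging cluster
        # Overlapping second differences of the integrated signal
        d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
        avar = np.sum(d**2) / (2 * tau**2 * (len(theta) - 2 * m))
        out.append(np.sqrt(avar))
    return np.array(out)
```

For white sensor noise of standard deviation sigma per sample, the estimator should return approximately sigma / sqrt(fs * tau), which is the ARW slope.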
Compact, Configurable, Real-Time Infrared Hyperspectral Imaging System
There is a compelling DoD need for a low-cost, compact, and reconfigurable infrared imaging spectrometer that can operate in real time across a variety of backgrounds and ambient conditions. Hyperspectral imaging (HSI) systems have been fielded for the detection of hazardous chemical and explosive threat materials, tag detection, identification friend or foe (IFF), and other defense-critical sensing missions. Such systems currently exist in airborne and ground-sensing configurations in the short-wave, mid-wave, and long-wave infrared (IR) spectral regions. They are based on HSI sensor hardware architectures combined with multivariate analysis algorithms [1,2]. While these imaging systems can provide sensitive and specific detection of targets and identification of materials in complex backgrounds, they are typically large; costly to field, operate, and support; and generally do not operate in real time. Those systems that do operate in real time typically sacrifice some degree of freedom, such as the number of spectral bands, image definition, or number of targets being detected. Reconfiguring a system for an alternative set of targets or backgrounds requires significant effort, which makes adjusting to dynamic mission conditions impractical. Nonetheless, intelligence based on HSI systems has proven very useful, and demand for it is increasing; but because of the high cost of procuring and maintaining an HSI system, such systems are available only to privileged users.
Specifically, what is needed is an IR hyperspectral imaging and sensing capability with the following characteristics: (1) rapidly field-configurable operation to adapt to different targets or operating conditions; (2) real-time, target-on-the-move operation, ideally at the frame rate of the focal plane array camera; (3) real-time automated target signature detection, performed within the system to dramatically reduce data bandwidth, downlink transmission bandwidth requirements, and post-processing; (4) significantly reduced cost, size, and weight; and (5) imaging operation with minimal support infrastructure. The resulting system should be able to support one or more of the following missions: counter-IED detection, IFF, bio/chemical WMD detection, and tag, track, and locate (TTL). The performance goals of such a system are:
• Frame rate: 10 frames per second (fps) or greater
• Free spectral range covering at least one band: 850-1700 nm for SWIR, 3-5 μm for MWIR, or 8-11+ μm for LWIR
• Form factor suitable for operation in a handheld, wearable, or UAV-mounted configuration
• Weight: less than 5 lbs
• Run time: greater than 4 hours, with the power source included in the weight metric
• Cost: less than $50,000 in volumes of 1,000 or more
• High-definition chemical image: megapixel (1K x 1K) or greater
• Latency: less than or equal to 100 ms
• Interface compatible with XML schema
• Autonomous linking to existing military architecture or infrastructure (e.g., cell phone)
In summary, a Compact, Mission-Configurable, on-Demand, Real-Time, Infrared Hyperspectral Imaging Sensor is envisioned. It is acknowledged that all spectral ranges may not be accommodated in a single sensor, and that the objective vision may not be fully realizable during the course of a Phase II SBIR. However, concrete and compelling hardware/software progress towards this vision is expected to be demonstrated.
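As one illustration of requirement (3), real-time on-board signature detection is often implemented with per-pixel spectral matching, for example the Spectral Angle Mapper (SAM): because it reduces to one matrix product per frame, it maps well onto embedded processors at camera frame rates. The sketch below is a generic SAM implementation, not a design mandated by this topic:

```python
import numpy as np

def spectral_angle_map(cube, target):
    """Spectral Angle Mapper (SAM) detection over a hyperspectral frame.

    cube   : (rows, cols, bands) hyperspectral image
    target : (bands,) reference signature of the material of interest
    Returns the per-pixel spectral angle [rad]; small angles indicate a
    close match, and the metric is invariant to illumination scaling.
    """
    flat = cube.reshape(-1, cube.shape[-1])
    num = flat @ target
    denom = np.linalg.norm(flat, axis=1) * np.linalg.norm(target) + 1e-12
    ang = np.arccos(np.clip(num / denom, -1.0, 1.0))
    return ang.reshape(cube.shape[:2])
```

Thresholding the resulting angle map on board yields a detection mask that is orders of magnitude smaller than the raw cube, which is the bandwidth reduction requirement (3) calls for.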
PHASE I: Design a concept for an infrared hyperspectral imaging system capable of real-time, multi-mission, configurable-on-demand operation with the specific performance objectives described above. Develop an analysis of predicted performance and define key component technological milestones. Establish performance goals in terms of parameters such as time of operation; probability of detection and false alarm; detection time; spectral range; image quality; field of view; day, night, and obscured-condition visualization; image frame rate; and size, weight, and power (SWaP). In addition, provide a contrast with existing hyperspectral imaging systems. Produce an initial mockup, possibly using 3D-printed parts and/or solid models, showing the system form factor at the preliminary design level. Phase I deliverables would include:
• A description of the system design and functions mapped to real-time imaging system requirements
• A performance assessment against existing approaches
• An evaluation of key tradeoffs
• A risk-reduction and demonstration plan
• A final report/Phase II proposal
PHASE II: Develop and demonstrate a prototype real-time, mission-configurable infrared hyperspectral imaging sensor system with the specified features, including on-board detection and operation at a 10 fps or higher sampling rate. Construct and demonstrate the operation of a laboratory prototype having the core features needed to achieve mission-configurability capabilities. Exercise relevant software functions under exposure to different mission conditions, including demonstration of the ability to change system detection configurations against multiple different target sets through rapid field configuration. Perform additional analyses as needed to project eventual performance capabilities.
Phase II deliverables would include:
• A final design with all drawings, simulations, and modeling results
• One prototype of the real-time chemical imaging system
• Software applications as needed
• Performance data compared with performance and environmental goals
• A schedule with financial data for program execution
• Preliminary and critical design reviews
• Monthly reports
PHASE III: As described above, the military utility of the data and intelligence generated by the current large and costly systems has been demonstrated. Driving the SWaP and cost down so that the system can be used by a dismount or on a small UAV will enable proliferation of the capability, in the same way that night vision goggles and cell phones have become an integral part of the soldier’s arsenal. Requiring the system to be compatible with existing systems and data formats will help ensure more rapid acceptance and use. Commercial application of hyperspectral imaging has been increasing in parallel with military applications; examples include agriculture, mining, medical imaging and diagnosis, environmental management, disaster management, and hazard assessment. As with military applications, cost and size currently put these systems out of reach of all but the most privileged users. Driving system cost and SWaP down would enable proliferation of these devices to a potentially large user base, including municipalities (police, fire, etc.), agriculture (farmers, land managers, etc.), and healthcare (health screening and microbiology).
Low Cost Expendable Launch Technology
There is a compelling DoD need to leverage emerging commercial, entrepreneurial, and defense technologies enabling lightweight, high-specific-energy liquid-rocket stages. Many established aerospace companies and emerging entrepreneurial companies are developing new rocket stage technologies that promise to reduce the cost of access to space. The goal of this topic is to leverage these investments to enable low-cost launch vehicles that minimize gross and dry weight while maximizing propellant load, engine specific impulse, and/or payload. Technological trends facilitating such lightweight stages include: an ongoing computer/software revolution enabling affordable design, integration, and test, with sophisticated software in lieu of mechanical complexity; micro-miniaturization of electronics and mechanical actuators; high strength-to-weight composites and nano-engineered materials; lightweight structural concepts and thermal protection; advanced manufacturing methods; high thrust-to-weight rocket engines and turbo-machinery; and novel high-density-impulse liquid propellants that are safe, cheap, and easy to handle. The offeror must demonstrate a clear understanding of the system applications of the launch vehicle and the supporting technologies. A system application of interest to the government is modifying the launch vehicle into a low-cost upper stage for DARPA’s Experimental Spaceplane (XS-1) program. Key design goals include balancing low gross mass against adequate velocity change, payload, and manufacturing cost. Additionally, reusable launch concepts such as XS-1 may carry stages through either normal- or longitudinally-oriented hardpoints/racks. Stages with efficient structural arrangements that cope with such load paths while remaining low in mass and cost are of interest. Other potential system applications include a wide range of commercial launch vehicles, tactical missiles, satellite integral propulsion, and future boost-glide tactical or air-transport systems.
Similarly, a clear understanding of the technology applications to XS-1, as well as to other proposed military and commercial systems, is also essential. Critical technologies could include lightweight structures and propulsion, high-density-impulse propellants, miniaturized avionics, modular components, altitude compensation and complementary aerodynamic/propulsion integration, and stability, guidance, and control subsystems, all integrated into the stage while keeping the system simple and affordable. Offerors may seek to design and fabricate an entire stage or only critical subsystems.

PHASE I: Develop the design, manufacturing, and test approach for fabricating extremely low-cost, high-propellant-mass-fraction launch vehicles and upper stages for space access. Critical-component or analytical risk reduction is encouraged. Identify potential system-level and technology applications of the proposed innovation. Although multiple applications are encouraged, to help assess military utility the proposed stage should be useful as an upper stage on the XS-1 experimental spaceplane. The stage(s) must be designed to support: 1) an ideal velocity change of up to 20,000 fps (objective), 2) a payload of 3,000 lbs, 3) a gross mass of less than 30,000 lbs, 4) a unit flyaway cost of <$1M per stage, and 5) a safe and affordable alternative to today’s carcinogenic propellants such as hydrazine, unsymmetrical dimethylhydrazine, and red fuming nitric acid.

PHASE II: Finalize the Phase I design, then develop, demonstrate, and validate the system design, critical hardware components, and/or enabling technologies. Design, construct, and demonstrate the experimental hardware or component prototypes identified or developed in Phase I. The Phase II demonstration should advance the state of the art to between Technology Readiness Levels 5 and 6.
Required Phase II deliverables will include the experimental prototype hardware and a final report containing design data such as CAD models and detailed mass properties, the manufacturing and test plan, costing data, test data, updated future applications, and Phase III military transition and commercialization strategies.

PHASE III: The offeror will identify military applications of the proposed innovative technologies, including use as a low-cost upper stage on the XS-1 experimental spaceplane. Leveraging commercial and defense stages tailored to support specific upper-stage needs is encouraged. Technology transition opportunities will be identified, along with the most likely path for transition from SBIR research to an operational capability. The transition path may include use on commercial launch vehicles or alternative system and technology applications of interest to operational military and commercial customers.
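The Phase I delta-v and mass goals can be sanity-checked with the Tsiolkovsky rocket equation, dv = Isp * g0 * ln(m0/mf). The specific impulse used below (350 s, roughly a pump-fed LOX/kerosene-class vacuum engine) is an assumption for illustration; only the delta-v, payload, and gross-mass figures come from the topic:

```python
import math

# Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)
# Isp is an assumed value; dv, payload, and gross mass are from the topic.
G0 = 9.80665           # standard gravity [m/s^2]
DV = 20000 * 0.3048    # 20,000 fps objective, converted to m/s
ISP = 350.0            # assumed vacuum specific impulse [s]

mass_ratio = math.exp(DV / (ISP * G0))   # m0 / mf required
prop_fraction = 1.0 - 1.0 / mass_ratio   # propellant mass / gross mass

gross = 30000.0        # lb, topic ceiling
payload = 3000.0       # lb, topic requirement
burnout = gross / mass_ratio             # lb remaining at burnout
dry = burnout - payload                  # lb left for structure and engine
```

Under this assumed Isp, the stage needs a mass ratio near 5.9 (propellant fraction around 83%), leaving roughly 2,000 lb of the 30,000 lb gross for structure and engine after the 3,000 lb payload: feasible but tight, which is why the topic emphasizes high-propellant-mass-fraction designs.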