Department of Defense
November 20, 2013
SBIR / 2014
January 22, 2014
NOTE: The Solicitations and topics listed on this site are copies from the various SBIR agency solicitations and are not necessarily the latest and most up-to-date. For this reason, you should use the agency link listed below which will take you directly to the appropriate agency server where you can read the official version of this solicitation and download the appropriate forms and rules.
The official link for this solicitation is: http://www.acq.osd.mil/osbp/sbir/solicitations/index.shtml
OBJECTIVE: Develop and demonstrate gear coatings that increase the endurance of helicopter transmissions operating after loss of primary oil flow. The objective is to develop a low-cost, low-friction, highly reliable coating capable of allowing a transmission to run for 45 minutes in a loss-of-lubrication condition.

DESCRIPTION: Under normal rotorcraft operations, flowing lubricant keeps heat generation low and carries excess heat away from the gearboxes. In an oil-out condition, where primary oil flow no longer exists, the rotating gear components generate more heat, which leads to gearbox failure. Recent transmission oil-loss incidents have caused emergency landings as well as fatalities. Army rotorcraft have specific loss-of-lubrication requirements for drive systems, as noted in ADS-50-PRF. The drive systems are required to operate after loss of primary oil flow for a minimum of 30 minutes at cruise conditions (approximately 50% power rating). The Army desires the ability to run for a longer period after a loss-of-lubrication event to improve aircraft survivability. One way to meet and exceed this requirement is to employ an emergency oil system. This auxiliary system takes over during the loss of primary oil flow, but it requires the aircraft to always carry the additional associated weight, which reduces aircraft payload and range. This topic's goal is to develop a technology that will allow a gearbox to survive for at least 45 minutes at a 50% power rating without the need for an auxiliary lubrication system. To meet this goal, an innovative gear coating approach is desired that will allow the gears to run at elevated temperatures for a longer period of time before failure.
The gear coating technology developed under this effort should be affordable, capable of application to typical aircraft gear steels (AISI 9310, X-53, etc.), and compatible with typical gear manufacturing processes. Candidate coatings must be extremely durable, as gears are expected to operate without failure for many thousands of hours (on the order of 10^9 cycles) in a lubricated gearbox. The coating should be designed so that it does not wear at temperatures below 400 degrees Fahrenheit and will only degrade during oil-out conditions. The coating should be 1-40 microns thick, entirely cover each gear tooth, and not be detrimental to gear performance in any way. If the coating process results in increased surface roughness, a method to restore the original surface topography should be addressed.

PHASE I: Develop and conduct a feasibility demonstration of the proposed coating technology. This may include modeling of the coating performance and coupon-level experimental testing. This demonstration shall validate the solutions for identified critical technical challenges.

PHASE II: Contractors are encouraged to collaborate with an Army rotorcraft OEM during Phase II. The contractor shall further develop the gear coating based on the Phase I effort for implementation on an Army rotorcraft. The capabilities of the advanced coating will be validated by performing a loss-of-lubrication test in a full-scale rotorcraft gearbox. This testing shall validate the ability of the coating to increase loss-of-lubrication performance.

PHASE III: This technology could be integrated into a broad range of military/civilian aircraft where loss of primary oil flow is a concern. The potential exists to integrate and transition this coating into existing and future Army gearboxes, such as those for the Apache, Chinook, Black Hawk, and Kiowa Warrior.
OBJECTIVE: Develop and demonstrate innovative and advanced concepts for seat restraint systems that are intuitively easier and faster to use under any adverse conditions and can be readily integrated into existing forward-, aft-, and side-facing troop seats in military rotorcraft, while providing the protection required to prevent injuries during a crash.

DESCRIPTION: Unlike a pilot, who uses the seat restraint system so frequently that its use becomes a matter of habit, troops encumbered with varying types and quantities of equipment in military rotorcraft are often not familiar with the troop seat restraint systems. As a result, they may not use the restraint systems, or may don them incorrectly, especially under adverse conditions such as enemy fire or darkness. However, it is crucial for troops and passengers to don the restraint systems quickly and properly in their troop seats to prevent injuries during a crash. It has been reported that current 4-point and 5-point seat restraint systems are so difficult for troops to use that in many cases the crew chief must assist troops in finding the straps and donning them properly. This difficulty results from lack of familiarity; equipment worn by the troop, which blocks visibility; system complexity; and straps that get misplaced between or behind troop seats.

PHASE I: Develop innovative conceptual designs for seat restraint systems for troops in military rotorcraft that are intuitively easier and faster to use while providing the required protection to prevent injuries during a crash and meeting the ingress and egress requirements. Required Phase I deliverables include monthly progress reports and a final technical report.

PHASE II: Upon successful completion of the Phase I effort, conduct detailed design, analysis, and testing to demonstrate operational prototypes of improved restraint systems that can be easily integrated into existing forward-, aft-, and side-facing troop seats.
Required Phase II deliverables include bi-monthly progress reports, test plans, test reports, design review packages, and a final technical report.

PHASE III: Optimize the design and the seat integration resulting from the Phase II effort and conduct the necessary performance, environmental, and user-acceptance testing in an operational environment. The innovative and advanced seat restraint systems for troops in Army rotorcraft can be applied to troop seats for other military platforms as well as to mine-blast-protected seats in military and commercial ground vehicles.
OBJECTIVE: Develop and demonstrate a paint additive, with an associated non-destructive interrogation device, to detect the very early/incipient formation of corrosion.

DESCRIPTION: Corrosion is a chronic problem across DoD, and significant funding is spent annually detecting, repairing, and in some cases replacing aircraft components. Inspection for corrosion is currently accomplished visually by maintenance personnel. Thus, corrosion is usually only detected once it has formed and is causing structural damage to the affected components. In an effort to enhance corrosion detection practices, corrosion sensing devices that infer the possible formation of corrosion from monitored environmental parameters have been investigated. Sensors to detect the electrolyte catalyst that leads to corrosion formation have also been examined. In both cases, however, these sensors only monitor for corrosion in the immediate area where they are installed, so these corrosion detection approaches are greatly limited. It is anticipated that inspection for corrosion with a paint additive/non-destructive interrogation system would allow for discovery of nominal corrosion that can be easily treated before any damage occurs. This topic's goal is to develop and demonstrate a paint additive with an associated non-destructive interrogation device to detect the very early/incipient formation of corrosion. The additive must be able to function and survive in harsh internal environments. The non-destructive interrogation device should be as small and lightweight as possible and be capable of traversing small areas. The additive/non-destructive interrogation system should be affordable (Threshold: $10K unit cost, Objective: $5K unit cost) and readily producible. The paint additive must be compatible with current coating systems and not interfere with pretreatments. Camouflage of the aircraft must not be compromised in any way.
PHASE I: Develop and conduct a feasibility demonstration of the proposed paint additive/non-destructive interrogation technology. This may include modeling of the additive performance and coupon-level experimental testing. The paint additive must be compatible with current coating systems and not interfere with pretreatments. Camouflage of the aircraft must not be compromised in any way. The paint additive system will be graded for performance with metrics to include the sensitivity of the paint additive to indicate corrosion when interrogated; the total area the interrogator head is able to cover when investigating for corrosion (Threshold: 5 mm squared, Objective: 10 mm squared); and the flexibility (ability to change angles) of the interrogator to reach difficult-to-access areas of the aircraft. This demonstration shall validate solutions for the identified critical technical challenges.

PHASE II: The contractor shall further develop the paint additive/non-destructive interrogation device based on the Phase I effort for implementation on an Army rotorcraft. The capabilities of the advanced paint additive/non-destructive interrogation system will be validated by conducting testing using panels representative of manufactured aircraft structural sections. This testing shall validate the ability of the paint additive/non-destructive interrogation device to detect corrosion through the paint at the lowest level possible. Testing should include accelerated exposure to corrosion formation on the panels. The contractor shall address manufacturing/formulation of the new paint and additive, as well as identify a Phase III path ahead for military qualification.

PHASE III: This technology could be integrated into a broad range of military/civilian aircraft where corrosion is a concern. The potential exists to integrate and transition this coating into existing and future Army aircraft components, such as those for the Apache, Chinook, Black Hawk, and Kiowa Warrior.
OBJECTIVE: Develop and demonstrate novel, electrostatic methods for improved fine sand/dust particle separation in turbine engines.

DESCRIPTION: Sand/dust particles have significant detrimental effects on turbine engine performance and durability, impacting mission effectiveness and sustainment costs. Fine sand particles melt in the combustor and solidify (turn to glass) on turbine vanes, blades, and other hot-section components, thereby disturbing flow characteristics and reducing efficiency. The glass releases during engine cycles/transients and can damage downstream components. These fine particles can also get into the turbine cooling flow and plug cooling passages in turbine blades and vanes, causing premature oxidation. The above effects reduce turbine performance, increase turbine temperatures, and shorten the life of hot-section components, resulting in more frequent engine overhauls. Fine particles (1-80 micron size) are much more difficult to separate from the main flow stream because they tend to follow the flow more easily and are not easily filtered by inertial effects. The goal of this program is to improve separation of these fine particles via novel electrostatic methods (i.e., attraction/deflection of particles into the scavenge flow, or conglomeration of the particles into larger masses which can more easily be filtered/scavenged) to enable more effective aerodynamic separation techniques. The goal metric is to improve fine particle (1-80 micron size) separation efficiency from the current level of about 70% in inertial separators to greater than 90%. The electrostatic system design effort should include calculations/modeling and/or subcomponent tests to determine the most effective methods, along with the required electrostatic charges and particle velocity limits to effectively conglomerate/deflect the sand/dust particles for subsequent aerodynamic filtration.
A key technical challenge will be the ability to effectively achieve electrostatic conglomeration/deflection of fine sand particles in a realistic turbine engine airflow environment. Another key technical goal will be the ability to integrate the proposed electrostatic separation method into the engine with no increase in pressure drop and no more than a 15% increase in filtration system weight and volume relative to the proposed baseline engine filtration system. The size and weight of the proposed system will depend on the intended engine application. Proposed electrostatic filtration concepts can be applied to various parts of the engine wherever most effective. System development and validation efforts should include component testing to demonstrate the electrostatic system effectiveness as a function of inlet particle size distributions, where inlet-to-exit particle sizes will be evaluated. Small businesses are encouraged to work with major engine manufacturers to ensure effective application of proposed technologies into future gas turbine engine configurations.

PHASE I: Key components of the proposed electrostatic filtration methods should be developed and validated to substantiate the feasibility of achieving improved fine particle separation in turbine engines. Fine particles (1-80 micron size) are much more difficult to separate from the main flow stream because they tend to follow the flow more easily and are not easily filtered by inertial effects. The goal metric is to improve fine particle (1-80 micron size) separation efficiency from the current level of about 70% in inertial separators to greater than 90%.

PHASE II: Fully develop, fabricate, and demonstrate the electrostatic filtration system in a ground test environment to validate improved fine particle (0-80 microns) separation efficiency from the current level of 70% to greater than 90% in turbine engines, while not debiting the coarse sand separation capability.
The current efficiency levels are typically 90%-97% for coarse sands (80-1000 microns).

PHASE III: Phase III options should include endurance testing and integration of the enhanced electrostatic filtration system into a turbine engine system to demonstrate the performance of the system with engine sand-ingestion ground testing. The resulting technology will enable significantly enhanced durability of hot-section components of future advanced turbine engines operating in sand and dust environments. Both military and commercial aircraft are likely to encounter such environments and will thereby derive benefit from this technology. This improvement in electrostatic filtration technology should improve fuel efficiency and power density, and it may reduce overall maintenance costs.
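The practical impact of the efficiency goal above can be illustrated with a quick back-of-the-envelope calculation (an illustrative sketch, not part of the solicitation requirements): the amount of fine dust reaching the engine scales with one minus the separation efficiency.

```python
# Back-of-the-envelope look at the separation-efficiency goal in this topic:
# the fine dust that reaches the engine scales with (1 - efficiency).

def pass_through_fraction(efficiency: float) -> float:
    """Fraction of ingested fine particles that escape the separator."""
    return 1.0 - efficiency

current_eff, goal_eff = 0.70, 0.90  # values from the topic description

reduction = pass_through_fraction(current_eff) / pass_through_fraction(goal_eff)
print(f"Fine dust reaching the engine is cut by a factor of {reduction:.1f}")  # 3.0
```

In other words, raising separation efficiency from 70% to 90% cuts the fine-dust dose on hot-section components by a factor of three, which is why a seemingly modest 20-point efficiency gain matters for overhaul intervals.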
OBJECTIVE: Develop a power source system for electronic health monitoring systems capable of providing a fixed voltage (between 1.5 and 3.3 volts) at 100 microamps for 20 years over the industrial temperature range.

DESCRIPTION: The contractor shall develop an electronic system health monitoring power source to meet the following requirements: (1) conventional battery technology, nuclear batteries, and hybrid battery/capacitor/etc. power sources that meet all the requirements are acceptable solutions; we are not interested in thermal energy conversion, nor in energy harvesting, since there are little to no opportunities for scavenging energy in a storage environment as described in Bradford ; (2) the power source output voltage shall be a fixed voltage between 1.5 volts and 3.3 volts; (3) the power source shall have a 20-year lifetime covering both operation and storage (for example, the power source shall be capable of 19 years 364 days of storage and 1 day of operation; 19 years 364 days of operation and 1 day of storage; 10 years of storage and 10 years of operation; etc.); (4) the power source's temperature range shall cover the industrial temperature range of -40 C to +85 C, with a wider temperature range up to the full military temperature range of -55 C to +125 C desired; (5) the power source shall provide 100 microamps average current, a 1 milliamp current pulse for 1 second every hour, and a one-time current pulse of 10 milliamps for 1 second; (6) the power source shall be environmentally friendly, with minimal disposal issues (conventional chemical and nuclear batteries will be given equal technical evaluation weighting; however, nuclear battery proposals shall include a plan for obtaining an NRC license and a life-cycle management plan); (7) the power source shall be certifiable as flight worthy; (8) the power source shall be no larger than 2 by 2 by 1 inches; and (9) the power source shall weigh no more than 2.5 ounces (no more than 70 grams).
For reference, current-technology ultra-low power microcontrollers require 50 to 200 microamps per MHz of clock frequency over the voltage range of 0.9 to 3.6 volts. For example, a 100 microamp/MHz microcontroller with a 12 MHz clock frequency would require 1.2 mA of current.

PHASE I: The contractor shall provide a feasibility study for developing a power source meeting the requirements in the Description section. The contractor may develop models, prototypes, and/or simulations to meet the requirements. The contractor shall provide a final Phase I report. For a nuclear power source system, the contractor shall include a preliminary plan for obtaining an NRC license and a life-cycle management plan.

PHASE II: The contractor shall develop a power source to meet the requirements described in the Description section. The contractor shall develop test method(s) to verify that the power source meets those requirements, and shall provide the government with a report describing the test method(s) and test results. The contractor shall have an independent source, with government concurrence, test and evaluate the prototype power source, and shall provide a copy of the test and evaluation report to the government. The contractor shall deliver 2 prototype power sources to the government point of contact for test and evaluation. The contractor shall provide midterm and final reports. For a nuclear power source, the contractor shall include in the final report a plan for obtaining an NRC license and a life-cycle management plan.

PHASE III: Batteries are problematic in electronic health monitoring systems. Current batteries do not have more than 5 to 10 years of shelf life over the industrial temperature range. A new battery with a 20-year operating/shelf life would eliminate replacing batteries every 5 years.
Low power consumer electronics, industrial control, oil and gas monitoring systems, and harsh-environment monitoring systems would all benefit from long-life battery technology. Medical electronics (pacemakers and insulin pumps) would also benefit from improved battery technology.
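The current profile specified for this power source (100 microamps average, plus a 1 mA pulse for 1 second each hour) implies a total charge delivery over 20 years that can be estimated with a short calculation. This is an illustrative sketch only: self-discharge, the one-time 10 mA pulse, and temperature derating are all ignored here, and any real design would need margin for them.

```python
# Rough battery-capacity estimate for the 20-year power source in this topic.
# Assumptions: 100 uA continuous draw, a 1 mA pulse for 1 s each hour,
# 20-year life; self-discharge and temperature derating are ignored.

HOURS_PER_YEAR = 365.25 * 24
LIFETIME_YEARS = 20

base_mA = 0.100                 # 100 microamps average current
pulse_mA, pulse_s = 1.0, 1.0    # hourly 1 mA pulse lasting 1 second

hours = LIFETIME_YEARS * HOURS_PER_YEAR
base_mAh = base_mA * hours
# Extra charge from the hourly pulses (counting only the current above baseline)
pulse_mAh = (pulse_mA - base_mA) * (pulse_s / 3600.0) * hours

total_Ah = (base_mAh + pulse_mAh) / 1000.0
print(f"Required capacity: {total_Ah:.1f} Ah")  # about 17.6 Ah
```

The baseline draw dominates: the hourly pulses add well under 1% of the total, so the requirement is effectively a 17-18 Ah cell that also survives 20 years of calendar aging, which is the hard part.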
OBJECTIVE: Research and develop an electronic health monitoring system with ultra-low power sensors and signal processing chain(s).

DESCRIPTION: The purpose of this SBIR is to research and develop ultra-low power sensors and signal processing (analog and digital) chains for electronic health monitoring applications. Current electronic health monitoring systems spend most of their time in an ultra-low power sleep mode. We are interested in an electronic health monitoring system that can run continuously on less than 100 microamps of current. A typical health monitoring system consists of a sensor, analog signal processing, an analog-to-digital converter, a microcontroller, and digital signal processing. We are interested in an electronic health monitoring system with less than 100 microwatts continuous power consumption. A low power sensor by itself does not meet the low power system requirement for this topic. Due to the low current requirement, this topic is for a wired health monitoring system; an offeror may propose a wireless system, but must still meet the 100 microamp average current requirement. For example, a typical pressure sensor may output a voltage, current, or frequency proportional to the applied pressure. Analog signal processing may be required to appropriately scale the sensor's output signal for an analog-to-digital converter, and digital signal processing may be required to linearize the pressure transfer function. For example, a typical industrial pressure sensor outputs a current. A gain stage is required to convert the sensor output current to a 0 to 3 volt range for a 12-bit analog-to-digital converter. A digital signal processing step is then required to convert the 12-bit digital code (0 through 4095, or 0x0000 through 0x0fff) to a 0.0 to 100.0 psi (0.0 to 7.00 bar) floating point number. The entire sensor system is required to be ultra-low power. Missiles may be placed in long-term storage for 10 to 20 years or longer.
Health monitoring systems require extremely low power sensors and extremely low power analog and digital signal processing to achieve more than 10 years of operation. Batteries to power health monitoring systems are a separate concern; this SBIR topic is not seeking any battery research or development. This SBIR topic is also not seeking any chemical sensor development.

PHASE I: The contractor shall research the feasibility of developing an ultra-low power electronic health monitoring system with less than 100 microwatts continuous power consumption. We are interested in sensors for humidity, temperature, rate of temperature change, pressure, and battery charge level (e.g., percentage of battery life remaining). The contractor shall select 4 of these 5 sensors for the health monitoring system. The technical merit of Phase I proposals will be evaluated based on the following criteria: 25% weighting for sensors and sensor performance, 25% for the analog and digital signal processing chains, and 50% for low power. Phase I proposals are required to address these three criteria. For the Phase I proposal, the contractor shall provide a system block diagram and describe the system operation: (1) the proposed sensors, (2) the analog signal processing chain(s), (3) the analog-to-digital converter(s), (4) the digital signal processing chain(s), and (5) any post-digital processing and/or linearization required to convert the digitized sensor values to measurement values (for example V/m, Pascals, Kelvins, etc., in floating point or fixed point). To demonstrate the feasibility of the proposed health monitoring system, the contractor may develop models, simulations, and/or prototypes. The contractor shall provide midterm and final reports.
The final report shall describe the health monitoring system's (1) operation, (2) estimated system performance, (3) estimated power consumption for each individual element and the total system power, (4) estimated operating temperature range, (5) estimated operational vibration limits, (6) estimated operational lifetime, and (7) estimated non-powered lifetime (shelf life).

PHASE II: The contractor shall develop the electronic health monitoring system based on the Phase I research. The electronic health monitoring system (without battery) shall have a volume smaller than 1 x 1 x 0.2 inches and weigh less than 0.36 ounce (10 grams). The contractor shall have an independent source, with government concurrence, test and evaluate the performance of the electronic health monitoring system, and shall provide a copy of the test and evaluation report to the government. The contractor shall provide 2 electronic health monitoring systems to the government point of contact for test and evaluation, a preliminary datasheet for the electronic health monitoring system, and a final report describing the electronic health monitoring system. The contractor shall provide the government two days of on-site training on the electronic health monitoring system.

PHASE III: Military, aerospace, medical, and industrial applications are always looking for lower-weight, lower-power technologies. Automotive applications require rugged, low power electronic health monitoring systems for diagnostics and prognostics. Military systems (missiles, aircraft, ships, and vehicles) are interested in ultra-low power electronic health monitoring systems for system monitoring, diagnostics, and prognostics. In the medical industry, battery-powered systems like heart pacemakers and blood glucose monitors would benefit from ultra-low power electronic health monitoring systems.
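The pressure-sensor linearization step described in this topic (converting a 12-bit ADC code to engineering units) can be sketched as follows. The 0-4095 code range and the 0.0-100.0 psi span come from the topic text; the function names are illustrative, and a linear sensor transfer function is assumed for simplicity (real sensors often need a polynomial or table-based correction).

```python
# Sketch of the digital linearization step: mapping a 12-bit ADC code
# (0..4095) to engineering units (0.0..100.0 psi), assuming a linear sensor.

ADC_BITS = 12
FULL_SCALE = (1 << ADC_BITS) - 1   # 4095, i.e. 0x0fff
PSI_MAX = 100.0
PSI_PER_BAR = 14.5038              # standard conversion factor

def code_to_psi(code: int) -> float:
    """Convert a raw 12-bit ADC code to pressure in psi."""
    if not 0 <= code <= FULL_SCALE:
        raise ValueError("code out of 12-bit range")
    return PSI_MAX * code / FULL_SCALE

def psi_to_bar(psi: float) -> float:
    return psi / PSI_PER_BAR

print(code_to_psi(0))                    # 0.0 psi
print(code_to_psi(4095))                 # 100.0 psi
print(round(psi_to_bar(100.0), 2))       # 6.89 bar (the topic rounds to 7.00)
```

Note that 100 psi is about 6.89 bar; the topic's "7.00 bar" is a rounded figure. In an ultra-low-power implementation this conversion would typically be done in fixed point on the microcontroller rather than with floating-point division.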
OBJECTIVE: Develop and demonstrate ceramic matrix composite materials for missile applications with transpiration cooling.

DESCRIPTION: The advantages of external transpiration walls for cooling and drag reduction of hypersonic air vehicles are well known [1,2]. These same advantages have been applied to internal gas turbine combustor walls and should apply equally to airbreathing missiles: ducted rockets, ramjets, and particularly scramjet-powered air vehicles. Transpiration refers to the transport of fluid through a porous wall at near-zero momentum; i.e., an "oozing" or "bleeding" process. This transpiration cooling process provides a layer of fluid to insulate and convect heat away from the porous wall and, additionally, reduces the viscous drag of the cross flow. The application of transpiration cooling to missile systems has been inhibited, at least in part, by the added complexity plus the high cost and time required to fabricate porous transpiration walls from heat-resistant metals. Even for scramjet engine applications, where the internal viscous drag may well exceed the external form or pressure drag, the advantages of the technology have largely been deferred while the fundamental problems of engine design and operation are addressed. Ceramic matrix composites (CMCs), however, may well provide an alternative to the tedious and expensive production process for metal transpiration surfaces. Ceramic matrix composites offer unique properties for high temperature applications. Most commonly proposed as a structural material for rocket nozzles, motor cases, and airframes, they should provide adequate strength over the required range of operating temperatures, with the potential for weight savings and increased propellant loading since insulation materials may be reduced or eliminated.
Indeed, those strength and temperature characteristics are now being exploited for uncooled turbine shrouds and high-pressure turbine static seals in advanced turbojet engine designs [5,6]. More uniquely, in an unfilled form, these porous ceramic matrix composites could prove ideal as transpiration materials for combustor walls or external air vehicle surfaces. CMCs such as carbon (C), silicon carbide (SiC), and alumina (Al2O3) are often manufactured by a process that results in a highly porous product after the first step (often as much as 20% porosity). In subsequent steps they are filled and processed so as to reduce the porosity. The reduction in porosity improves both the structural and high temperature properties, which is beneficial for the applications mentioned above. CMCs with retained porosity may have applications for transpiration cooling, but their properties need further investigation to determine the optimum combination of porosity, strength, and temperature capability. Hence, while the technology for ceramic matrix composite production is now well established, innovation will be required for transpiration boundaries in terms of fabrication and properties, to include thermo-structural properties, containment fixtures, porosity, and fluid flow characteristics. What is needed, then, is a clear demonstration of porous CMCs for missile transpiration cooling applications, specifically for scramjet engine internal combustor walls and for hypersonic air-vehicle external walls. Success in those applications should assure equal success in many commercial applications as well. Metrics for success would be an areal weight of half that of a fabricated steel transpiration surface at one-fifth the cost. Additionally, the unfilled CMC material should show a degradation of no more than fifty percent for any time period at elevated temperatures.
PHASE I: Innovative technical approaches will be formulated leading to the development of ceramic matrix composite materials for transpiration cooling as a marketable product. Preliminary analysis of test article concepts will be performed for structural and thermal requirements.

PHASE II: To demonstrate and validate the technical approaches of Phase I, plans will be formulated for the completion of two ground-based tests in a Government test facility. The first test will be of a CMC transpiration-cooled and/or fueled planar wall in an existing supersonic combustion duct. The second test will be of a CMC transpiration-cooled external wall section on an existing conical body at hypersonic velocities. Additionally, prototype ceramic matrix composite coupons for these tests will be delivered to the Government for high temperature strength testing and fluid flow characterization. These two tests will require both planar and conical CMC transpiration surfaces. The combustion duct will employ two planar transpiration walls of approximately 0.10 m width by 1.2 m length. The 10.5 degree (half angle) conical body will use a transpiration wall of approximately 0.76 m in length with a base diameter of 0.34 m. Additional geometric details are given in references 1, 2, and 8. The two ground-based tests will be run to validate the use of ceramic matrix composite materials for transpiration-cooled combustor and air vehicle walls. Testing will require detailed analysis of thermal and structural requirements, fabrication of the required composite transpiration wall materials, installation in the existing combustor and conical tunnel model, and completion of the ground testing. Deliverables will consist of the resultant test data demonstrating the structural integrity and aerothermal characteristics of the transpiration-cooled installations.
Metrics for success in these ground-based tests will be (1) structural integrity for a test time of 50 ms with and without transpiration, (2) a reduction of viscous surface drag by a factor of ten with active transpiration, and (3) a reduction in surface heat flux by a factor of ten with active transpiration.

PHASE III: The end result of this research effort will be a validated approach for the production of composite materials for transpiration cooling as a marketable product. For military applications, this technology is directly applicable to all rocket propulsion missile systems as an advanced material for nozzles, motor cases, and airframes, with the additional application to combustors for airbreathing missiles. Gas turbine applications, both military and commercial, including both turboshaft and turbojet variants, have already employed CMC materials for combustor walls and variants of transpiration cooling for combustor walls and turbine blades [9,3]. This advancement in transpiration with porous CMC materials will have direct application to this large commercial arena. For strictly commercial applications, this technology is directly applicable to all commercial launch systems such as the NASA Ares and the Delta and Atlas families. Additionally, many industries utilizing high temperature combustors (petroleum, cement, power generation, and food processing, to name but a few) could employ this technology.
OBJECTIVE: Develop, investigate, and validate a novel optical primary optic for a wide field of view semi-active laser spot-tracking missile seeker based on a biologically-inspired compound eye for use in the near-infrared spectrum. Provide effective rejection of solar interference, and allow tracking of a target by a missile in flight without the need for a gimbaled sensor. DESCRIPTION: The US Army employs semi-active laser optical sensors on many missile platforms to provide precision guidance to targets. The Army now places a strong emphasis on low-cost missile seekers for use on relatively small missile platforms. The Army requires a novel, non-moving (strap-down) seeker solution in order to reduce cost and complexity as well as enable use on small missile platforms. The Army has a need for a missile seeker system which will not require the use of a moving mechanical structure, such as a gimbal, to maintain track on a target. A strap-down missile seeker must still evaluate the same wide field of regard as that which may be covered by an equivalent gimbaled sensor. A semi-active laser seeker must also operate in an environment which may be rich with background radiance, such as when the sun may be located in the sensor field of view. Small invertebrates have eyes specifically tailored for their tasks. Many of those tasks include wide-field source tracking in high-background environments. The Army therefore recognizes that a biologically-inspired compound eye-type sensor may provide a novel solution to the problem of a strap-down semi-active laser seeker. PHASE I: Create a design for a novel compound eye seeker and provide performance estimates, analyses, and simulations, to include, but not limited to, resolution, optical throughput, and any measures to predict rejection of solar or other radiation outside of a selected near-infrared band-pass and angles of interest.
The novel seeker optics shall be capable of detecting and tracking a target illuminated by a selected laser operating in the near-infrared (900 nm-1100 nm) wavelength region. The novel seeker optics shall enable narrow band-pass filtering of the source to a 10 nm (threshold), 2 nm (objective) transmission bandwidth, which shall be effective throughout the entire field of regard of the seeker. The novel seeker design shall allow tracking of the intended laser source throughout the sensor's field of regard when in the presence of a bright background source (i.e., the sun) located at any field point greater than 2 degrees (threshold), 1 degree (objective) from the intended source. The novel seeker design shall allow implementation on a missile platform as small as 2.75 inches and as large as 7 inches in diameter. The novel seeker shall provide a field of view not less than 25 degrees. The Army will not likely require a field of view greater than 50 degrees. The novel optic shall optimize tracking accuracy and light collection ability while simultaneously providing the previously-stated capabilities. A typical detector for this application is a quadrant detector less than 9.25 mm in diameter. The Army shall prefer such a solution for ease of implementation, but will consider other reasonable detector configurations in the interest of cost to performance benefits. Potential risks to achieving a low sensor cost with the proposed concept shall be identified in Phase I. Rough order of magnitude cost estimates on known components are encouraged in Phase I reporting to reduce risk early in the potential Phase II effort. The seeker design shall identify any assumptions or requirements regarding sensor/detector configuration or any additional optics required for the operation of the seeker.
Phase I proposals will be technically evaluated on the perceived ability of the technology to meet the previously-stated desired system performance goals as well as achieve future cost goals. PHASE II: Construct and deliver to the Army a prototype sensor system based upon the Phase I design, which shall consist of a complete assembly of the novel optical technology, sensor/detector device, electronics for sensor operation, and any hardware or software required to obtain complete signal output from the sensor. The Army technical point of contact shall provide any additional detailed requirements, sensor recommendations, and detector module hardware as necessary, pending a Phase II award. Detailed design analysis and verification thereof shall be performed as part of the Phase II effort. The Army shall use results from the analysis effort to run detailed performance models and simulations. The awardee shall provide detailed cost estimation for prototype and production-level quantities of the seeker. PHASE III: Development of the optical technology described herein will have immediate application to laser communications in both military and commercial sectors. The technology should also find ready use in laboratory settings. Additional military spin-offs would include missile warning sensors.
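Since the topic names a quadrant detector as the typical sensor, a strap-down tracker would derive the laser spot's angular error from the four quadrant signals. The following is a minimal sketch of standard sum/difference quadrant processing; the quadrant layout and normalization convention are illustrative assumptions, not taken from the topic.

```python
def quadrant_error(a, b, c, d):
    """Normalized spot-position error from four quadrant signals.

    Assumed quadrant layout (detector face, looking at the spot):
        b | a
        -----
        c | d
    Returns (x, y) in [-1, 1]; (0, 0) means the spot is centered.
    """
    total = a + b + c + d
    if total <= 0:
        raise ValueError("no signal on detector")
    x = ((a + d) - (b + c)) / total  # right minus left
    y = ((a + b) - (c + d)) / total  # top minus bottom
    return x, y

# Spot fully in the upper-right quadrant -> maximum positive error in x and y.
print(quadrant_error(1.0, 0.0, 0.0, 0.0))  # -> (1.0, 1.0)
```

A compound-eye primary optic would feed this same processing; the novelty sought by the topic lies in how the wide field of regard is mapped onto the detector, not in the tracking arithmetic itself.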
OBJECTIVE: Future thermal battery performance demands a substantial increase in potential and energy density over the current state-of-the-art. The objective is to develop high-potential and high-capacity thermal batteries based on advanced electrochemistry and engineering technologies, pushing the boundaries of the current state-of-the-art. DESCRIPTION: Smaller size and longer mission lifetime are among the critical changes being implemented in the next generation of missile systems. The need to increase the range of smaller missiles demands higher total energy in the miniaturized thermal batteries needed to power the guidance systems and other on-board electronics. The simultaneous fulfillment of higher energy and smaller footprint demands systems with significantly increased energy density. Because existing thermal batteries operate at lower potential, e.g., less than 1.8 V, the new thermal battery may need to operate at higher cell potential and be stable at thermal battery operating conditions. Many factors, including material characteristics, mechanical perturbations, thermal stabilities, and electrical issues, also prevent current thermal batteries from reaching their optimum potential and specific energy. PHASE I: Conduct an electrochemical and engineering feasibility analysis of the optimization of potential and specific energy. Target cell-level parameters should include an average operating voltage of at least 2.5 V and a specific energy of at least 1,000 Wh/kg. Identify component materials/engineering and provide merits of the proposed system. Address the performance trade-offs and compatibility issues among the electrodes and the electrolytes. Design and fabricate a first-generation thermal battery and perform cell testing to demonstrate the feasibility of the proposed materials and engineering approaches, providing a clear pathway to an advanced thermal battery with optimized performance leading to the target goals.
PHASE II: Design and fabricate a thermal battery to demonstrate an average cell-level operating voltage of >2.5 V and a cell-level specific energy of >1,000 Wh/kg at a current density not less than 0.5 A/cm2. All appropriate electrochemical characteristics, engineering testing, and validation of the performance of the prototype system should be performed. Provide all the cell components along with their synthesis and manufacturing processes. A working prototype should also be submitted to the Army for evaluation. PHASE III: Demonstrate improvements in performance in non-operational and operational environments. Provide complete engineering and test documentation for development of manufacturing prototypes. A Phase III application for Army missile systems could include battery miniaturization in legacy programs as well as incorporation into emerging programs. Programs that would benefit from this technological innovation include, but are not limited to: TOW, Excalibur, Stinger, Javelin, NLOS, Griffin, and JAGM. Other military applications of this technology may include future urban warfare surveillance/reconnaissance unmanned aerial vehicles. This technology is also applicable to sonobuoys, which are large users of thermal batteries. Among numerous civilian applications of this technology, smaller emergency backup power sources for the aviation industry are considered the most applicable.
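As a point of reference, the cell-level targets above can be combined into an areal power figure and compared against the sub-1.8 V potential quoted for existing chemistries. The 100 g stack mass in the last line is an illustrative assumption, not a topic requirement.

```python
# Hedged sketch: cell-level figures implied by the Phase II targets.
v_target = 2.5        # V, minimum average operating voltage
j_target = 0.5        # A/cm^2, minimum current density
e_target = 1000.0     # Wh/kg, minimum specific energy
v_legacy = 1.8        # V, upper bound quoted for existing chemistries

# Areal power delivered at the target operating point.
p_areal_target = v_target * j_target   # W/cm^2
p_areal_legacy = v_legacy * j_target   # W/cm^2 at the same current density

print(f"target areal power : {p_areal_target:.2f} W/cm^2")
print(f"legacy areal power : {p_areal_legacy:.2f} W/cm^2")
print(f"target vs. legacy  : {p_areal_target / p_areal_legacy:.0%}")

# Energy available from a hypothetical 100 g cell stack at the target
# specific energy (the mass value is illustrative, not from the topic).
print(f"100 g stack stores : {e_target * 0.1:.0f} Wh")
```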
OBJECTIVE: Develop an innovative, low-cost approach to capture fragment mass, location, and material type from a Joint Munitions Effectiveness Manual (JMEM) arena characterization test. The objective is to reduce cost, man-hours, and data turnaround time. DESCRIPTION: AMRDEC is interested in developing techniques to improve the data collection and analysis associated with performing ground-based static warhead arena characterization tests. These tests adhere to the guidelines and procedures described in the Joint Munitions Effectiveness Manual (JMEM). The current method of characterizing munitions is costly, labor intensive, and produces an incomplete record of data. For an arena test, a warhead is placed in the center of an arena consisting of blast-pressure gages and fragment collection media (often celotex bundles). When the warhead is initiated, its fragments embed themselves in the celotex bundles for subsequent measurement and analysis of their location, weight, and shape. Several days, if not weeks, of tedious and error-prone labor (~30 hours/bundle) are necessary to locate, recover, weigh, and record the geometry of each fragment into a database; many of the smaller fragments are never recovered. At an estimated $100/hr, the cost of an arena test can balloon substantially as the number of bundles and the man-hours to collect/analyze each bundle increase. A typical fragment collection procedure is as follows:
1. Examine one bundle sheet at a time (typically a 2-foot-thick bundle contains 48 sheets [4' x 8' x 0.5" per sheet])
2. Locate fragments by hand and record location
3. Dig fragment out of celotex and place in a bag with fragment number for reference in database
4. Clean fragment with acetone and determine material type
5. Weigh fragment and record
6.
Compile all information into a spreadsheet database for further analysis. PHASE I: Develop an innovative, low-cost concept to capture fragment mass, location, and material type using an automated method to detect and map each fragment and provide fragment mass from a celotex bundle. Modeling and simulation should demonstrate the ability to detect, track, derive position and mass for, and record each fragment embedded in a bundle. The deliverable thresholds are:
1. Material detection - steel only
2. Location coordinates - x, y, z within +/- 0.5 in
3. Mass - within 10% of the fragment's true weight
4. Time - 15 hours per bundle
The deliverable objectives are:
1. Material detection - steel, aluminum, and titanium
2. Location coordinates - x, y, z within +/- 0.25 in
3. Mass - within 5% of the fragment's true weight
4. Time - 10 hours per bundle
During this phase, a plan/design for implementing this system in hardware will need to be developed. PHASE II: Following the development plan outlined in Phase I, design, develop, and implement a prototype fragmentation data collection and analysis tool (hardware and software) during a JMEM arena test. Phase II deliverables will include an analysis system (hardware and software) delivered to AMRDEC and a successful demonstration of the hardware and software during a JMEM arena test, meeting the minimum threshold requirements (preferably the objectives) with documented results. Present a path forward to support JMEM arena testing with the implementation of the analysis tool, and identify potential commercialization areas. PHASE III: Mature the system developed in Phase II to a test-ready status. The contractor will pursue commercialization of the various technologies developed in Phase II for potential commercial users in the areas of sensors and software capable of high-speed, high-fidelity physical position and size measurements and detection.
Once proven, this technology could be used in a wide array of fields, such as medicine and the mining industry.
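The labor figures quoted in the description imply the following rough cost arithmetic; the bundle count per arena test is an assumed value for illustration, as the topic does not specify one.

```python
# Hedged sketch: labor-cost arithmetic from the figures quoted in the topic
# (~30 hours/bundle at ~$100/hr; a 2-ft bundle holds 48 sheets).
hours_per_bundle = 30
rate = 100          # $/hr
sheets_per_bundle = 48

cost_per_bundle = hours_per_bundle * rate
print(f"labor per bundle : ${cost_per_bundle:,}")  # $3,000
print(f"per sheet        : {hours_per_bundle / sheets_per_bundle * 60:.0f} min")

# Illustrative arena with 36 bundles (assumed, not from the topic):
bundles = 36
print(f"labor, full arena: ${bundles * cost_per_bundle:,}")  # $108,000

# Threshold vs. objective analysis times from Phase I:
for label, hrs in [("threshold", 15), ("objective", 10)]:
    print(f"{label}: {hrs} hr/bundle -> "
          f"{1 - hrs / hours_per_bundle:.0%} labor reduction")
```

Even the threshold target (15 hr/bundle) halves the analysis labor, which is the economic case the topic is making.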
OBJECTIVE: Develop (1) an industrially compatible microbial expression system for high-yield production of structural proteins and (2) bio-derived fibers from stable protein solutions through an aqueous-based, automated, scalable spinning process. DESCRIPTION: Nylon 6,6, a common chemically-derived polymer, is widely used in commercial and military applications that demand a high tolerance for wear and tear. For the Army Combat Uniform (ACU) and accompanying undergarments, blends of nylon and cotton are typically generated to enhance comfort and improve moisture management relative to nylon alone. However, certain aspects of durability and mechanical integrity are decreased in the blended fabric. Moreover, the thermal decomposition of Nylon 6,6, independently or within a blend, results in melt drip, which can cause secondary burns during a flame or thermal event in combat. Biologically-derived fibers have been identified as a viable alternative to nylon-based fibers, with mechanical integrity approaching nylon while being considered high-comfort materials. Furthermore, the biological nature of the materials avoids melt drip injuries, as the fiber would not exhibit melt behavior but rather complete thermal degradation. Additional benefits include (i) reduced weight, which addresses the ever-evolving challenge of Soldier load, (ii) reduced energy demands for fiber production, and (iii) biodegradability, which addresses a growing trend of reduced signature and footprint for contingency operations. Silkworm silk, the most common bio-fiber currently available for textile applications, is not viable for use in U.S. Soldier textiles due to limitations on the use of foreign-sourced materials. The Army must initiate efforts to investigate, create, and develop domestic bio-derived textiles to keep pace with the next generation of requirements, both environmental and performance related.
This topic seeks to create a bio-derived alternative to nylon by maturing the current state-of-the-art relating to fibers derived from structural proteins such as collagen (1), fibrin (2), spider silk (3) and resolubilized hagfish slime (4). Structural proteins can self-assemble into aligned polymer-like chains during fiber spinning, akin to melt extrusion of chemically-derived polymers. The self-assembly process of structural proteins is tailorable, dependent upon the solution matrix, and to some extent controllable, unlike other bio-fibers such as jute and cellulose. Structural proteins therefore represent an exciting opportunity to create bio-derived fibers that not only could possess mechanical stability similar to that of nylon but, by manipulation of the self-assembly process, could strive for the properties of high-performance polymeric fibers such as Kevlar. Furthermore, and in contrast to natural fibers, structural protein-based fibers possess surface functional groups that can serve as a platform for incorporating multifunctionality into bio-derived textiles, such as biorecognition elements for pathogen sensing and antimicrobials for odor and skin irritation control. While some advances have been made toward production of bio-derived fibers using structural proteins (e.g., spider silk (5)), several technical hurdles persist. Protein production remains a limiting factor for scaling of fiber production, and purification is a key hurdle, as impurities limit fiber extensibility. Protein stability and control of molecular chain alignment, before and during spinning, are also major challenges for reproducible fiber formation. The specific goal of this topic is to develop innovative methods to produce structural proteins in high yield and to reproducibly extrude these proteins into biological fibers.
The structural proteins should be produced using a microbial host capable of generating kilogram quantities of protein, and the purified protein monomers should be spun into fibers using aqueous-based spinning techniques to reproducibly generate fibers with properties equivalent to nylon. The processes and products developed within this topic will serve as the basis for commercialization of next-generation, lightweight bio-derived fibers to replace nylon in civilian and military textiles. PHASE I: Develop approaches to produce structural proteins in a scalable microbial host and screen structural protein constructs for reproducible fiber formation. Using methodology compatible with existing industrial biotechnology infrastructure, demonstrate production of structural protein at lab-scale quantities exceeding 10 grams of protein (fermentation yield >0.5 g/L; >50% purity). Assess the scalability of both the microbial host production approach and the purification process. Characterize protein conformation and self-assembly dynamics in solution. Develop an aqueous-based spinning approach, capable of scaling to industrial production levels, to extrude fibers from the purified protein solution. Using the developed spinning approach, screen structural protein constructs for fiber formation. Single fibers must meet or exceed the following metrics: minimum 2-fold single draw, pliable immediately after drying, >40% of Nylon 6,6 mechanical stability (tensile strength = 12-40 MPa; elastic modulus = 0.13-1.6 GPa). PHASE II: Optimize the protein production and purification approaches developed in Phase I and demonstrate production of structural protein constructs which achieved the required Phase I fiber metrics at pilot-scale quantities exceeding 2,500 grams of protein (fermentation yield >2 g/L; >80% purity). Assess the scalability of the optimized pilot-scale expression system and purification process to industrially-relevant levels.
The structural protein solution produced using the optimized pilot-scale system should exhibit no change in turbidity over a period exceeding one month while stored in a diluted state (i.e., non-spinnable concentration), no change in turbidity over a period of 2-4 days at concentrations suitable for fiber spinning, and an ability to maintain, in a concentrated state, the protein conformation and self-assembly dynamics required for reproducible fiber formation. Assess batch-to-batch reproducibility of protein solution production. Optimize and scale the aqueous-based spinning approach developed in Phase I to generate >1 pound of fiber. To assess reproducibility of fiber production, a minimum of 20 single fibers exceeding 10 inches in length from >5 distinct spin trials must demonstrate <20% variation in mechanical and thermal stability. A minimum of 10 yarn samples, each consisting of a minimum of 5 single fibers, must also demonstrate <20% variation in mechanical and thermal stability. Single fibers and yarns must meet or exceed the following metrics: mechanical stability equivalent to Nylon 6,6 (tensile strength = 30-100 MPa; elastic modulus = 0.32-4.0 GPa) and pliability after >1 month of storage. A minimum of 10 yarn samples must also be provided to the Army for independent assessment of mechanical and thermal stability. A minimum of 10 woven or non-woven fiber swatches measuring at least 6 inches x 6 inches must be produced that reproducibly demonstrate properties meeting or exceeding those of nylon swatches of equivalent form. A minimum of 10 fiber swatches must also be provided to the Army for lab-scale developmental testing. PHASE III: The development of methods to produce structural protein fibers at industrially-relevant scales will support reproducible, commercially-viable, "green" manufacturing of the next generation of lightweight fibers for military and civilian needs.
There are a number of military applications for a bio-derived alternative to nylon, including ropes, webbing and harnesses, and advanced textiles for protection (e.g., the flame-retardant ACU), which are relevant to the civilian sector as well. Additional opportunities for dual-use commercialization include biomedical materials (micro-sutures, artificial ligaments and tendons), biodegradable fishing nets, and automotive air bags. Optimization of solution, spinning, and post-spinning processing conditions, and thus enhancement of mechanical integrity, could lead to bio-derived fibers that replace synthetic fibers generated from harsh chemical processes (e.g., Kevlar) in ballistic applications for the military and civilian first responders. Optimization could also open avenues toward the use of bio-derived fibers in fiber-reinforced composites to replace the currently employed glass and natural fibers, which have disadvantages in terms of high energy demand for production and moisture absorptivity, respectively. Fiber-reinforced composites are widely used by the military in rigid shelter side panels, combat surgical hospitals, and airbeams for deployable structures, and in military/civilian dual-use applications such as vehicle body panels and helicopter blades. Further applications are possible upon assessment of biodegradability, biocompatibility, and environmental stability.
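The phase targets above imply minimum fermentation volumes, assuming the quoted yields are fully recovered (purification losses are ignored in this sketch):

```python
# Hedged sketch: minimum culture volumes implied by the phase targets.
phase_targets = {
    "Phase I":  {"protein_g": 10,   "yield_g_per_L": 0.5},
    "Phase II": {"protein_g": 2500, "yield_g_per_L": 2.0},
}
for phase, t in phase_targets.items():
    volume_L = t["protein_g"] / t["yield_g_per_L"]
    print(f"{phase}: >= {volume_L:.0f} L of culture for {t['protein_g']} g")
```

The jump from roughly 20 L at lab scale to over a thousand liters at pilot scale is why the topic stresses compatibility with existing industrial fermentation infrastructure.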
OBJECTIVE: Develop an ultra-low voltage system for neurological sensing capable of operating with no secondary power supply, with performance in the range of current commercially available solutions. A successful application will include coordination with U.S. Army Research Laboratory (ARL) scientists to ensure understanding of the application space of sensing brain activity within real-world environments. DESCRIPTION: Recent advances in neuroscience have made great strides towards improving our understanding of how the nervous system operates and our ability to monitor its function in action. Meanwhile, drastic improvements in sensor technology, as well as streamlined system design, have led to more wearable electroencephalography (EEG) solutions, which hold the promise of performing true neurophysiological monitoring outside of the lab and in more everyday or even potentially hazardous environments. For example, use of dry (gel-free) electrodes greatly minimizes preparation time, and improved higher-bandwidth wireless technology has eliminated the need for a physical tether between a wearer and the system, while also reducing artifact noise. Truly fieldable neurobiological sensing systems would be revolutionary for on-line monitoring of Soldier cognitive state in a wide range of environments, potentially integrated as part of a standard helmet or patrol cap. However, all current solutions for the electronics used within the central EEG data acquisition (DAQ) systems pose fieldability challenges because of size, weight, and power (SWAP) requirements. This is primarily due to the use of conventional integrated circuit (IC) design and components, which are based on standard operation levels of millivolts to volts and high-power radios. As a result, they have an implicit requirement for conventional power supplies (e.g., batteries) that add substantial weight to the total package.
Conventional systems have, at most, 8-12 hours of battery life; this adds the burden of requiring continual interaction and maintenance from the user. A system that is extremely small, lightweight, able to operate for extremely long periods (hundreds of hours to indefinitely), and requires virtually no interaction or direct attention from the user (a "construct-and-forget" approach) would reduce these barriers to fieldable systems integration and provide ground-breaking capabilities across a broad range of environments and application domains. The goal of this topic is to extend the operational time per use from hours to weeks, reduce the weight of the system to a few grams, and decrease the size of the system to a couple of square centimeters, through the development and refinement of a system-on-a-chip (SoC) DAQ system targeted for external electrophysiological neurological applications (e.g., EEG), which operates on ultra-low power (microwatts). While some previous work has demonstrated putative systems operating on ultra-low power [1-5], none has been produced in a usable form factor, nor does any operate within a power realm low enough to be completely self-sustaining (current systems operate in the milliwatt or hundreds-of-microwatt range) while meeting the processing requirements necessary for EEG. The Phase II goal will be to completely eliminate the necessity for an external power source through use of a local thermoelectric or other alternative source. In order to suit the largest range of envisioned applications, the final system would need to be capable of collecting high-resolution data such as from a dense array (e.g., 64+ channels), high data precision (24+ bit), or a high sample rate (1 kHz), with a resulting SNR comparable to that of conventional EEG measurement systems.
Additionally, it must be capable of handling typical signal conditioning and pre-processing procedures for EEG, data storage, and near-field (<6 meter) transmission, with integrated power management as a central tenet. Local power may be appropriate, but must not require service/maintenance from the user, must be consistent with the goal of minimizing the size, weight, safety concerns, and obtrusiveness of the total system, and must be compatible with long-term human use. PHASE I: Develop a system-on-a-chip design with multiple channel capability (three channels minimum) for application in a dry EEG data acquisition system. The SoC should include signal conditioning, preprocessing, storage, signal transmission, and integrated power management. The overall SoC should be capable of operating on an ultra-low (<100 microwatt) power supply. ARL can provide expertise in EEG application, potential use, and conventional system design as needed. Phase I deliverables include: 1) specifications and complete schematics for the proposed SoC and components, 2) proof-of-principle simulation results (e.g., SPICE, Cadence) providing evidence of operation equivalent to that of conventional-voltage EEG systems, and 3) a proposed power scavenging method that removes the need for an external power source or user maintenance, including plug-in charging or other interaction. The proposed SoC and power source should maintain an extremely small, lightweight form factor. PHASE II: Fabricate, test, and validate the performance of the design developed in Phase I, and develop a generation II design with expanded channel capability and operational performance equivalent to conventional methods, as evidenced in both simulated (phantom) models and human users. Coordinate with U.S.
Army Research Laboratory neurotechnology experts to enable the integration and testing of the developed technologies within a full EEG system form factor in a variety of relevant environments and use conditions; this testing can be performed by ARL. The generation II design should expand the SoC design to potentially include high-density (64+ channel) and/or high-fidelity (24-bit, 1 kHz) acquisition. This could be achieved in a single SoC or by multiplexing multiple SoC chips, but must maintain the goal of operation without an external power source or user maintenance. By the end of performance, deliver to ARL at least 10 functional SoC chip sets, either already integrated or capable of integration with existing systems. PHASE III: Develop a marketable device which could be used as the primary component of a fieldable EEG system, with potential uses in academic research, industry, medical, and military applications where high portability is crucial. Potential applications include monitoring of vigilance or mental fatigue, seizure prediction or identification, casualty or TBI assessment, and daily stress monitoring.
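For scale, the high-end acquisition configuration named above (64 channels, 24-bit precision, 1 kHz sampling) implies the following raw data rate; the energy-per-bit framing against the <100 microwatt Phase I budget is illustrative, not a requirement from the topic.

```python
# Hedged sketch: raw acquisition throughput implied by the topic's
# high-end configuration.
channels = 64
bits_per_sample = 24
sample_rate_hz = 1000

bits_per_s = channels * bits_per_sample * sample_rate_hz
print(f"raw rate : {bits_per_s / 1e6:.3f} Mbit/s")        # 1.536 Mbit/s
print(f"per hour : {bits_per_s * 3600 / 8 / 1e6:.0f} MB")  # ~691 MB

# Energy framing against the <100 microwatt Phase I power budget
# (illustrative only):
budget_w = 100e-6
print(f"energy per raw bit at budget: {budget_w / bits_per_s * 1e12:.0f} pJ/bit")
```

Handling over a megabit per second within a ~100 microwatt envelope leaves only tens of picojoules per bit for conditioning, storage, and transmission combined, which is why the topic treats integrated power management as a central tenet.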
OBJECTIVE: To develop an advanced flexible sensor system with distributed sensing capability for measuring extreme pressure/force/acceleration on Soldiers' heads and bodies caused by ballistic, blast, and blunt impacts. DESCRIPTION: Traumatic brain injuries (TBIs) caused by extreme events such as blast, ballistic, and blunt impacts have produced a high incidence of casualties and long-term chronic consequences among U.S. troops fighting in Iraq and Afghanistan. In order to augment medical understanding of the injury and to develop drastically improved personal protection, there is an urgent need for sensors capable of measuring such extreme pressure on human bodies. The US Army has recently developed a single-point helmet sensor to collect data on hits from explosions and other blunt impacts, which, however, is unable to measure the pressure distribution over the entire head. To fully understand the pressure distribution on the entire head upon blast/ballistic/blunt impact, and to evaluate the effectiveness of protective helmets, measurement of multi-point 3D pressure distribution is needed. Such advanced sensors are also needed for helmet and body armor testing. The current testing standards measure the maximum deformation of clay behind a helmet or a body armor plate during ballistic impact tests and use this back-face signature as a predictor of behind-armor blunt trauma injury [2-4]. However, clay deformation and blunt trauma injury have not been well correlated and understood. A direct measurement of multi-point dynamic pressure behind helmets and armor plates is extremely important to establish biomechanically-based behind-armor/helmet blunt trauma criteria for protective equipment. The current commercial off-the-shelf sensors have not been able to meet the stringent demands posed by these high-impact events.
It is the objective of this SBIR program to develop an advanced flexible, high-frequency, high-durability, and multifunctional lightweight film-type sensor array system, which has (1) a capacity to measure extremely high pressure of 350 MPa (50 ksi) caused by multiple blast, ballistic, and blunt impacts, (2) an extremely fast time response (1 microsecond or less), (3) a large deformation capacity of up to 5.08 cm (2 inches), (4) the ability to record multi-point pressure profiles at a spatial resolution of 0.6 cm (0.25 inch) or less with a scalable total surface area of coverage up to 2 square meters, and (5) a wearable electronic system including data acquisition, storage, and power with a total volume no larger than 50 cubic centimeters and a fast sampling rate (at least 1 MHz) to record the pressure history and the location of the blast/impact events. The system should be able to function for at least 48 hours on a single charge. Equally important, the sensor must be soft, flexible, thin, lightweight, and comfortable against the human body. PHASE I: The Phase I proposal should focus on pressure/force film sensors, including a detailed description of technical approaches to achieving the pressure/force, time response, and deformation requirements. The Phase I study shall develop a preliminary design of the proposed pressure/force film sensor array and a sensor calibration method. Ballistic and blunt impact tests shall be carried out to demonstrate the feasibility of the sensor to measure the behind-helmet/soft-armor impact associated with a large deformation over a total area of at least 230 square centimeters (36 square inches). The system design should demonstrate potential to meet the performance metrics enumerated in the topic description. PHASE II: Using the results from Phase I, the Phase II effort shall develop a prototype wearable sensor array system having multi-point distributed pressure/force sensing capability and incorporating MEMS accelerometers.
The prototype shall have micro data acquisition, communication, and power modules. The prototype shall be integrated into the U.S. Army's advanced combat helmets and soft armor and evaluated through ballistic, blast, and blunt impact tests. Required testing includes but is not limited to selected helmet and soft body armor systems. To facilitate sensor system development and integration into Army research initiatives, one (1) to four (4) developmental sensor systems should be supplied to the Army for further testing during the Phase II process. System performance must be demonstrated to meet or exceed the specific metrics enumerated in the topic description. PHASE III: In addition to military applications, the developed sensors can be applied to many other civilian applications such as crashworthiness in vehicle designs, blast resistance in commercial aircraft, weather and blast resistance in civil structures, sporting equipment, and law enforcement protective gear. Phase III should include establishment of a manufacturing infrastructure, either through an industrial partnership or through development of in-house capabilities. The manufacturing capability should include sufficient volume and facilities to establish a fully commercialized sensor product and support a potential operational capability.
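The spatial-resolution and coverage targets quoted for this sensor system imply a very large channel count; the square-grid layout assumed here is for illustration only, as the topic does not specify the array geometry.

```python
# Hedged sketch: channel count and aggregate sample rate implied by the
# topic's spatial-resolution (0.6 cm) and coverage (2 m^2) targets.
pitch_cm = 0.6          # spatial resolution, ~0.25 in
coverage_m2 = 2.0       # scalable total coverage

# Sensing sites on an assumed square grid (1 m^2 = 1e4 cm^2).
sites = coverage_m2 * 1e4 / pitch_cm**2
print(f"sensing sites         : ~{sites:,.0f}")

# If every site were sampled at the full 1 MHz rate, the aggregate
# throughput would be enormous -- a strong argument for local triggering
# and event-driven recording rather than continuous full-array capture.
aggregate_sps = sites * 1e6
print(f"aggregate samples/sec : {aggregate_sps:.2e}")
```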
OBJECTIVE: Develop low-profile Very-High Frequency/Ultra-High Frequency (VHF/UHF) antenna apertures using choice materials such as anisotropic/inhomogeneous magneto-dielectrics.

DESCRIPTION: The advent of metamaterials has extended the design space for antenna apertures [4, p.4, (2.20)], creating the possibility for very thin (relative to wavelength) antenna structures. While basic physics limits directivity [4, p.10, (2.107)] to the available aperture size, such restrictions do not preclude antennas of reasonable directivity having thicknesses that are fractions of a wavelength. Properly designed antennas may exhibit bandwidths of an octave or more from the standpoint of the realized gain [4, p.29, (2.321)], keeping the realized gain greater than 0 dBi over one or more octaves. Specifically, the intent of this solicitation is to model and create low-profile antennas. These antennas should have a thickness that does not exceed 1/30th of the free-space wavelength at the lowest operating frequency of the antenna, and the antenna should maintain a realized gain greater than 0 dBi over one or more octaves. The antenna should be realized using magneto-dielectric layer(s) having anisotropic/inhomogeneous constitutive parameters. It is presumed, but not required, that the anisotropic/inhomogeneous (effective) constitutive parameters are achieved through the use of metamaterials. The antenna should be operational in the VHF/UHF range. Any solution that achieves these antenna goals is desired. The use of high relative permeability and permittivity materials sandwiched between the radiating element (e.g., a dipole, bow tie, Archimedean spiral, etc.) and an electric ground plane allows the radiating element to be in close proximity to a ground plane while maintaining reasonable input impedance for matching purposes. A fundamental problem with such a geometry is the excitation of lateral (surface) waves [3, p. 736] that may guide a significant percentage of the power to the edges of the antenna before scattering into space. Presuming the antenna structure is contained in some type of enclosure (often metallic), these lateral waves may contribute to unwanted internal resonances having a deleterious effect on both the impedance match and the antenna's radiation pattern. Accordingly, the antenna may not perform as needed from the standpoint of efficiency or the stability of the radiation pattern broadside to the antenna. A possible solution to the aforementioned problem would employ a magneto-dielectric layer having anisotropic/inhomogeneous constitutive parameters. Properly tailored, such a layer could enhance a frequency-independent antenna design while mitigating (or exploiting) the lateral wave excitations. A magneto-dielectric layer created of anisotropic metamaterials (having graded unit cell parameters) is one possible realization of such a geometry. In general, the layer may have constitutive parameters that are anisotropic as well as inhomogeneous for both the permittivity and permeability. Because frequency-independent antennas inherently use the entire element at the lowest frequency while using less of the element as the frequency increases, the gain and input impedance remain reasonably constant [2, p. 611]. The anisotropic/inhomogeneous prescription for the metamaterial magneto-dielectric should be developed in such a way as to keep the efficiency of the antenna as high as possible while maintaining a good impedance match over an octave (or more) frequency band. Clearly, the "antenna" must be considered to be the entire structure consisting of the ground plane, magneto-dielectric, and the excitation element. The realization of the anisotropic/inhomogeneous magneto-dielectric layer is further complicated by the inherent problems associated with the ferromagnetic resonance of the magnetic material and the associated magnetic loss tangent.
Practical realizations of an antenna are likely to be limited to frequencies under 1 GHz.

PHASE I: Demonstrate through computer simulations and/or analysis the viability of using a choice material, such as anisotropic/inhomogeneous magneto-dielectrics, to enhance absolute antenna gain. Such an analysis, over an octave bandwidth, should demonstrate stability of the input impedance (<-10 dB return loss) as well as stability of the radiation pattern across the frequency band. The realized gain (dBi or dBic) should remain positive and not exhibit significant decreases in the broadside direction within a frequency band of at least one octave anywhere in the VHF/UHF range. The demonstration should be for a low-profile antenna having a depth that is a small fraction of a wavelength at the lowest frequency of operation. Demonstrate the material properties by simulating or measuring a small sample of the inhomogeneous material, and provide the accompanying data to the government. Additionally, the demonstration should indicate that the antenna will be suitable for transmit (up to 50 W) as well as receive.

PHASE II: Fabricate prototype antennas from the choice material, such as the inhomogeneous magneto-dielectric material demonstrated in Phase I. Measure their RF performance (reflection coefficient, radiation pattern, gain, etc.) and compare measured results with commercially available antennas in the same frequency band to establish a reasonable benchmark of performance. Refine antenna fabrication to minimize the volume of magneto-dielectric material to mitigate cost while maintaining antenna performance. At least one working antenna prototype with measured data should be delivered to the government. The antenna must be suitable for both transmitting and receiving, so the ability to contend with power levels up to 50 W on transmit and a low noise temperature on receive should be demonstrated.
Designs may be for linear polarization, but must not preclude the more desirable circular polarization.

PHASE III: Phase III will focus on the transition of selected low-profile antenna design(s) onto military platforms such as the MRAP and UAV wings, and/or onto commercial platforms such as vehicular and airborne systems. Address the fabrication cost and volume challenges that are relevant to this platform transition. Specific RF applications that may be targeted for these antennas include terrestrial communications, tactical satellite communications, radar, GPS, and RF sensors.
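As a quick illustration of the low-profile requirement, the 1/30th-wavelength constraint stated in this topic can be translated into a physical thickness limit. The lowest-frequency choices below are examples within the VHF/UHF range, not values specified by the topic.

```python
# Maximum antenna thickness under the lambda/30 low-profile constraint.
# The frequencies below are illustrative choices within VHF/UHF.
C = 299_792_458.0  # speed of light, m/s

def max_thickness_cm(f_lowest_hz: float) -> float:
    """Free-space wavelength at the lowest operating frequency, divided by 30."""
    return (C / f_lowest_hz) / 30.0 * 100.0

for f_mhz in (50, 150, 450):
    print(f"{f_mhz} MHz -> lambda/30 = {max_thickness_cm(f_mhz * 1e6):.1f} cm")
```

Even near the bottom of VHF the entire structure (ground plane, magneto-dielectric, and radiating element) must stay under roughly 20 cm, shrinking to about 2 cm for a 450 MHz lowest frequency, while still sustaining realized gain above 0 dBi over an octave.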
OBJECTIVE: To develop and demonstrate a "novel" direct fuel injection system for small unmanned compression-ignition heavy-fuel (primarily Jet Propellant-8) engines (3-70 horsepower).

DESCRIPTION: The Army is in need of efficient heavy-fuel small engines (3 to 70 horsepower) for unmanned aerial and ground systems. The success of these engines requires efficient and reliable micro direct fuel injection systems. Currently, there is no reliable direct fuel injection system for small engines, for various reasons; one of them is the manufacturing capability for micro nozzles. Direct fuel injection into a combustion chamber can achieve much higher engine efficiency than carburetion or port fuel injection. Direct fuel injection also provides the flexibility to control combustion processes to improve engine performance and efficiency by using injection strategies such as multiple injections and rate shaping; performance shall be compared with the current Shadow Unmanned Aerial Vehicle (UAV) engine (i.e., a 38-horsepower rotary engine running on aviation gasoline). Since much higher fuel injection pressure can be applied for direct fuel injection, engine power can be increased significantly beyond what older fuel injection systems allow. Direct fuel injection systems have predominantly been developed for light- to heavy-duty engines. Thus, while the demand for small engines is increasing, there has been no reliable direct fuel injection system for them, and in contrast to the light- to heavy-duty engine classes, for which significant amounts of research have been done by large companies, no noticeable research on small engines has been done until now. One of the main challenges in developing a direct fuel injection system for small engines is preparing a combustible fuel-air mixture within the confined space of the small engine combustion chamber.
The current off-the-shelf fuel injectors provide excessive liquid fuel penetration and high cycle-to-cycle variations in small fuel quantities. Smaller injector nozzles (<75 micrometers) can achieve a shorter penetration length; however, current manufacturing capabilities such as electrical discharge machining (EDM) are limited for nozzle sizes smaller than 75 micrometers. The lower end of the nozzle size range shall be as low as 40 micrometers, with the upper end being up to 150 micrometers. Therefore, the drilling method should be able to machine from 40 to 150 micrometers with tolerances of less than +/-2.5 micrometers. The material shall have good thermal shock resistance, high hardenability, excellent wear resistance, and hot toughness, such as AISI (American Iron and Steel Institute) Type H13. The material shall be heat treated to a Rockwell "C" hardness between 51 and 53 (air or oil quenched above 1000 °C), which is the most common heat treatment for nozzle manufacturing. The wear rate of the nozzle internal surface shall be minimized and be equivalent to current production common-rail diesel injector nozzle wear rates. The nozzle shall be a stand-alone component with the following requirements:
- Houses the needle and provides the needle seat
- Accommodates the hydraulic circuitry for needle motion and fuel delivery below the seat
- Interfaces with the control valve
- Provides a small sack volume below the seat for orifice intersection
- Operates up to 15,000 psi (or 1035 bar)
- Accommodates 1 to 8 orifices at the nozzle tip
- Ability to custom configure the number of orifices, size, spray angles, and directions from an undrilled "blank" needle/nozzle set
- Injection quantity: 0.5 to 45 mm3/cycle
- Cycle-to-cycle variability: less than 2%

PHASE I: Develop "novel" micro direct fuel injection system concepts which can provide an appropriate fuel-air mixture for efficient combustion in small internal combustion engines.
The technology shall be evaluated through fluid flow analysis for injection characteristics conducive to small engine environments. Assess the manufacturability of the proposed technology, identifying the methods and equipment capable of production. Phase I shall be assessed based on the micro injector design concepts; the manufacturing processes and machining capability; and analysis of nozzle internal flow and spray in a small engine environment (i.e., the Shadow UAV engine). The parameters for the analysis shall include nozzle inlet diameter, nozzle outlet diameter, nozzle shaping, sac volume, nozzle orifice orientation, and discharge coefficient, among others.

PHASE II: Develop and demonstrate the technology and manufacturing methods. Assess injector fuel charge preparation through experimental spray testing and analysis as well as 3-dimensional computational fluid dynamics (3-D CFD) analysis. Characteristics should include minimum injection quantity, cycle-to-cycle variability, rate of injection, and spray patterns, among others, at various fuel injection pressures, injection quantities, and multiple-injection schedules. The manufacturing assessment shall evaluate the method, repeatability, and tolerance-holding capability by measuring nozzle geometries with a scanning electron microscope (SEM) or scanning acoustic microscopy. The assessment shall be made according to the requirements in the description section. Deliverables should include the reports, test and analysis results, and manufactured prototype injector(s).

PHASE III: Develop and demonstrate a fuel injection system that can be integrated with small engines (3-70 horsepower) for unmanned aerial and ground systems. This system should be available for future Army and commercial unmanned aerial and ground systems. This heavy-fuel (i.e., Jet Propellant-8 (JP-8)) injection system should lead to the development of higher-performance, higher-fuel-economy, lower-noise, and more reliable engines for unmanned systems.
The developed fuel system could be implemented as a JP-8 fuel injection system for the Shadow Unmanned Aerial Vehicle (UAV) engine (currently a 38-horsepower rotary engine running on aviation gasoline). The demand for small UAVs is projected to grow not only in military but also in commercial applications. The developed fuel injection systems could facilitate the development of small UAV engines fueled with heavy fuels such as JP-8, Jet A, diesel, and alternative heavy fuels. Increased demand in the commercial sector would enhance research and development in small engines. This would lead to more advanced propulsion systems for future DoD UAV systems.
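The nozzle requirements above can be sanity-checked with a simple orifice-flow estimate. The sketch below assumes values not given in the topic: a discharge coefficient of 0.7, a JP-8 density near 800 kg/m^3, and steady flow at the full 15,000 psi (1035 bar) rail pressure, using the standard relation Q = Cd * A * sqrt(2*dP/rho).

```python
import math

# Sanity check of orifice sizing against the topic's injection-quantity range.
# Assumptions (not in the topic): discharge coefficient 0.7, JP-8 density
# ~800 kg/m^3, steady flow at the full 15,000 psi (1035 bar) rail pressure.
CD = 0.7
RHO = 800.0                  # kg/m^3, approximate JP-8 density
DP = 1035e5                  # Pa, 15,000 psi rail pressure (topic metric)

def orifice_flow_mm3_per_ms(d_um: float) -> float:
    """Steady single-orifice flow: Q = Cd * A * sqrt(2*dP/rho)."""
    area = math.pi / 4.0 * (d_um * 1e-6) ** 2
    q_m3_s = CD * area * math.sqrt(2.0 * DP / RHO)
    return q_m3_s * 1e6      # m^3/s -> mm^3/ms

q = orifice_flow_mm3_per_ms(75.0)          # 75 um orifice
print(f"per-orifice flow: {q:.2f} mm^3/ms")
print(f"45 mm^3 via 8 orifices: {45.0 / (8 * q):.1f} ms")
print(f"0.5 mm^3 via 1 orifice: {0.5 / q:.2f} ms")
```

Under these assumptions the 0.5 to 45 mm^3/cycle range maps onto sub-millisecond to few-millisecond injection durations, which is consistent with small-engine cycle times and illustrates why the less-than-2% cycle-to-cycle variability requirement is demanding at the small-quantity end.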
OBJECTIVE: The objective is to develop a software-defined navigation receiver with an improved assurance level for position, navigation, and timing (PNT). To mitigate the jamming, interference, and spoofing vulnerability of the Global Positioning System (GPS) receiver, the receiver gathers signals from GPS as well as other emerging global navigation satellite systems (GNSSs). Furthermore, the receiver can use opportunistic non-GNSS signals to extract useful information to assist the GNSS signals in order to improve PNT assurance.

DESCRIPTION: Current Army operations rely heavily on Global Positioning System (GPS) signals to provide position, navigation, and timing (PNT) information. Most current navigation receivers rely on GPS signals only and are vulnerable to jamming, interference, and spoofing. GPS signal gaps often exist in urban and indoor environments, where the signals are susceptible to line-of-sight blockage and multi-path reflection. With the modernization of GPS satellites and the emergence of other global navigation satellite systems (GNSSs), including Galileo, GLONASS, and COMPASS, many new GNSS signals beyond the widely used GPS L1 signal are available. PNT information can also be extracted from signals of opportunity. These signals of opportunity include both satellite and terrestrial signals, such as Long Range Navigation (LORAN), cellular code division multiple access (CDMA), Global System for Mobile Communications (GSM), 4G Long Term Evolution (LTE), broadcast TV/radio, Iridium, and more. Dedicated pseudolites can also be set up to broadcast PNT information. These signals of opportunity can be combined with GNSS signals to increase the availability and assurance of PNT information, and create seamless situational awareness for Soldiers in combat.
For example, it has been demonstrated that precision timing can be extracted from cellular CDMA signals and used to improve the coherent integration time of a GPS receiver to recover weak GPS signals in an indoor environment. Although GNSS receiver technology has matured significantly over the past 20 years, most receivers can only acquire signals at one or two GPS frequencies. Multi-GNSS, multi-frequency receivers are at an early stage of development as new GNSS signals begin to be broadcast. Techniques and algorithms to recover PNT or PNT-related information from the signals of opportunity mentioned above, to assist and complement GNSS receivers in GNSS-denied and weak-signal environments, are being actively pursued by many researchers. However, computationally efficient implementations of these signal fusion techniques that improve PNT assurance while remaining suitable for portable use in a low-power form factor still require significant development. The goal of this program is to develop a reprogrammable, software-defined, multi-GNSS, multi-frequency receiver with signals-of-opportunity fusion. The receiver will leverage recent developments in high-speed analog-to-digital converters and field-programmable gate arrays (FPGAs) for signal capturing, digitizing, and processing. An efficient digital signal processing algorithm will be developed to integrate multi-GNSS signals and signals of opportunity. A broadband radio frequency (RF) front end that can cover the ~500-MHz band occupied by GNSS signals at L-band as well as other frequencies used by signals of opportunity will also be developed as part of the receiver.

PHASE I: In Phase I, the proposer will analyze and select a suitable receiver architecture for receiving, capturing, and analyzing multiple GNSS signals at multiple frequencies as well as a select number of signals of opportunity. The receiver must cover GPS, Galileo, GLONASS, and COMPASS signals.
The analysis must show integration of at least three signals of opportunity and demonstrate a path to incorporate more signals if necessary. The receiver can be implemented using a wideband direct digital RF front end or a channelized RF front end. In order to integrate signals of opportunity, an alternative architecture using a wideband RF channel for GNSS signals and a number of narrowband RF channels for signals of opportunity is permissible. The proposer will analyze the trade-off between the speed and vertical resolution requirements of the analog-to-digital converters and the digitized signal distortion. The proposer will investigate digital signal processing algorithms to process and track multiple GNSS signals. The tracking can be performed using Kalman filters as well as other innovative approaches. The tracking must be able to incorporate signals of opportunity. Performance metrics, such as receiver sensitivity, interference rejection, etc., will be defined in the study to quantify the improvement in PNT assurance level. The proposer will analyze the implementation of the signal processing algorithms in FPGAs and determine the computational resource requirements. The overall power consumption and performance of the receiver will be analyzed and compared to current state-of-the-art GPS receivers.

PHASE II: In Phase II, the proposer will demonstrate a prototype of a multi-GNSS receiver with signals-of-opportunity fusion by implementing the receiver architecture and signal processing algorithm developed in Phase I. The receiver must be a complete receiver including a broadband antenna, an RF front end, digitizers, and a signal processing unit. A single broadband antenna must be used to cover all signals of interest. The signal processing unit must be implemented in an FPGA for programmability. Either evaluation boards or a dedicated developmental system can be used for the implementation.
The proposer will perform a field demonstration of the prototype against state-of-the-art GPS receivers under adverse environments such as weak signal (indoor), intentional jamming, etc., using the metrics developed in Phase I. The proposer will demonstrate the programmability of the receiver by using different numbers of GNSS signals and signals of opportunity. Programmability should also be demonstrated by loading new and updated signal processing algorithms.

PHASE III: In Phase III, the proposer will develop a portable version of the multi-GNSS receiver with signals-of-opportunity fusion. The receiver should be implemented in an application-specific integrated circuit (ASIC) form factor to lower the size, weight, power, and cost (SWaP-C). Initially, the SWaP-C of the multi-signal receiver could be worse than that of a state-of-the-art, single-signal GPS receiver because of the increase in functionality. At the commercialization stage of the program, the SWaP-C of the multi-signal receiver should be comparable to that of a state-of-the-art, single-signal GPS receiver. Also, a military version of the multi-signal receiver capable of receiving, processing, and integrating military GNSS signals (for example, the GPS L2 M-code) will be developed for transition to PD PNT.
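The core software-defined operation the topic's signal processing unit repeats, per satellite and per Doppler bin, is code-phase acquisition by circular correlation. A minimal sketch of that one step is below; it is illustrative only, using a random +/-1 sequence as a stand-in for a real spreading code (a real receiver would use the actual PRN codes, e.g., GPS C/A, and sweep Doppler bins as well).

```python
import numpy as np

# Minimal sketch of FFT-based code-phase acquisition, the core operation a
# software-defined GNSS receiver repeats per satellite and Doppler bin.
# The PRN here is a random +/-1 sequence standing in for a real spreading code.
rng = np.random.default_rng(0)
N = 1023
prn = rng.choice([-1.0, 1.0], size=N)       # stand-in spreading code

true_shift = 417                            # unknown code phase to recover
rx = np.roll(prn, true_shift) + 0.5 * rng.standard_normal(N)  # noisy signal

# Circular cross-correlation via FFT: corr = IFFT(FFT(rx) * conj(FFT(prn)))
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(prn))).real
est_shift = int(np.argmax(corr))
print("estimated code phase:", est_shift)   # recovers 417 here
```

The FFT form turns an O(N^2) correlation search into O(N log N), which is exactly the kind of computational efficiency the topic asks proposers to carry into an FPGA implementation when many GNSS signals and signals of opportunity must be tracked simultaneously.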
OBJECTIVE: To develop a plug-and-play nonlinear device based on a periodically poled lithium niobate waveguide (or similar) having high difference- and sum-frequency conversion efficiency with at least 10 dB signal-to-noise.

DESCRIPTION: The Department of Defense and the Army have a vested interest in secure communications. Quantum communication has been shown to be secure against eavesdropping due to the nature of entanglement. Potential next-generation quantum communication systems include transmission of quantum information entangled with quantum memories. Quantum memories allow for extended-distance quantum key distribution (QKD) and for storage and later retrieval of quantum information in a network composed of many nodes. In these cases, frequency conversion of single photons is needed. Such quantum frequency conversion has been demonstrated for certain wavelengths with periodically poled lithium niobate (and similar periodically poled ferroelectric waveguides). The objective here is to develop a complete package that supports quantum frequency conversion between the specified wavelengths. This quantum frequency conversion package would allow for long-haul quantum communication because the output/input is a photon in the telecommunications band. Successful demonstration of the package outlined below will directly and significantly impact quantum communication, long-haul quantum communication, and hybrid quantum technologies. Periodically poled lithium niobate (PPLN), a ferroelectric crystal, is a versatile nonlinear medium. It is well established as a material of choice for optical amplifiers, second harmonic generation, and nonlinear waveguides. For efficient use of PPLN in frequency conversion, a waveguide is often fabricated into the PPLN. Obtaining efficient input coupling of two nondegenerate optical frequencies into a PPLN waveguide is challenging.
Moreover, obtaining an efficient waveguide for both nondegenerate frequencies in order to achieve high-efficiency frequency conversion is another difficulty. This call is for two devices, one capable of difference frequency generation (DFG) and a second capable of sum frequency generation (SFG). The aim of this program is to design, fabricate, and successfully demonstrate a completely packaged PPLN (or other compact nonlinear medium, packaged < 30 cm3) having high efficiency, fiber or other waveguide coupling at the input, and an output signal-to-noise of at least 10 dB. There is appreciable overlap in the design, so simultaneous work on DFG and SFG is very reasonable. Any needed power supplies or oven controllers can be separate from the packaged PPLN, and their size is not critical.

PHASE I: The design of the periodically poled lithium niobate (or other nonlinear medium) waveguide must be demonstrated. The design and simulations must show (i) high-efficiency DFG (of inputs 795 nm and 1989 nm) and high-efficiency SFG (of inputs 1324 nm and 1989 nm), where both DFG and SFG are at the level > 30%/W/cm2 (where W/cm2 is the product of the input powers divided by the squared length, with < 300 mW total input power), (ii) a signal-to-noise of the output of at least 10 dB, (iii) an input coupling efficiency > 40% for all DFG and SFG inputs, and (iv) a design of the coupling method of the inputs to the PPLN waveguide for both DFG and SFG. The designed holder for the PPLN should be < 30 cm3.

PHASE II: The fabrication of the periodically poled lithium niobate waveguide (or other nonlinear medium) must be completed and the following demonstrated: (i) high-efficiency DFG (of 795 nm and 1989 nm) and SFG (of 1324 nm and 1989 nm), both > 30%/W/cm2 with total input power < 300 mW, (ii) a signal-to-noise of the output of at least 10 dB, and (iii) an input coupling efficiency > 40% for all SFG and DFG inputs.
The efficiency and signal-to-noise can be experimentally demonstrated with ample input powers that are well above the single-photon level. The package (< 30 cm3) should be designed for optimal coupling of the inputs into the waveguide, where for DFG the inputs are either fiber or waveguide coupled into the PPLN and for SFG the inputs are fiber coupled.

PHASE III: The complete packaged devices (< 30 cm3) must be delivered, one for sum frequency generation and one for difference frequency generation. That is, each is a periodically poled lithium niobate waveguide (or other nonlinear medium) that must (i) be a complete plug-and-play package; specifically, it should include the waveguide and its housing, (ii) have inputs that are either fiber coupled or waveguide coupled into the PPLN waveguide for DFG and fiber coupled into the PPLN waveguide for SFG, (iii) have a fiber-coupled output for both DFG and SFG, (iv) have an output signal-to-noise of at least 10 dB (measured for DFG at 795 nm vs 1324 nm and for SFG at 1324 nm and 795 nm), and (v) be tunable in the input over at least 5 nm with the output having at least 10 dB signal-to-noise. The output does not need to be measured at the single-photon level. Note that any needed power supplies or oven controllers can be separate from the packaged PPLN, and their size is not critical. The compact, portable, and robust nature of the device is an important feature. The product's commercialization would serve as a device to bridge two neutral-atom-based quantum systems that are remotely situated but connected by telecommunications fibers. This device could be integrated into a more secure quantum communication network for the DoD. Beyond a research tool, this device would operate as a bridge between hybrid quantum systems which require frequency conversion for spectral overlap.
Furthermore, commercialization of the methodology for optimal coupling and conversion would allow for consumer access to these specialized devices for specific use in extending the technique into other wavelength regimes where the inputs are highly nondegenerate.
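The wavelength triplet quoted throughout this topic follows directly from photon-energy conservation: 1/lambda_out = 1/lambda_1 - 1/lambda_2 for DFG and 1/lambda_out = 1/lambda_1 + 1/lambda_2 for SFG. A quick check of the stated inputs:

```python
# Photon-energy conservation for the DFG/SFG wavelengths in this topic:
# DFG: 1/lam_out = 1/lam1 - 1/lam2 ; SFG: 1/lam_out = 1/lam1 + 1/lam2.
def dfg_nm(lam1_nm: float, lam2_nm: float) -> float:
    return 1.0 / (1.0 / lam1_nm - 1.0 / lam2_nm)

def sfg_nm(lam1_nm: float, lam2_nm: float) -> float:
    return 1.0 / (1.0 / lam1_nm + 1.0 / lam2_nm)

print(f"DFG(795 nm, 1989 nm) = {dfg_nm(795.0, 1989.0):.0f} nm")   # ~1324 nm, telecom band
print(f"SFG(1324 nm, 1989 nm) = {sfg_nm(1324.0, 1989.0):.0f} nm") # ~795 nm, near the Rb D1 line
```

The two devices are thus exact inverses: DFG converts a 795 nm photon (near-resonant with neutral-atom systems such as rubidium) down to the 1324 nm telecom band for fiber transmission, and SFG converts it back.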
OBJECTIVE: Demonstrate the application of electromagnetic fields to develop, manipulate, process, and produce ultralightweight metals with superior properties.

DESCRIPTION: The Army is highly interested in the application of electromagnetic fields for the development of ultralightweight metals with tailored microstructures and properties. The current methods used to manipulate metal properties involve varying scale, composition, temperature, and pressure to improve strength, hardness, fracture toughness, elastic modulus, density, etc., but the use of these traditional techniques for tailoring a wide range of chemical and physical properties is reaching a plateau. It is worth noting that significant ongoing research is being dedicated to re-engineering and exploring the creation of materials at the nanoscale, which holds potential for future applications that inherently hinge on surmounting scalability, assembly, and producibility challenges. However, there is an emerging technology that goes beyond factors of scale, composition, temperature, and pressure, and holds great promise in facilitating the realization of transformational materials with the aid of externally applied fields. The application of fields may alter phase transformation pathways, create new microstructures, shift equilibria to favor new metastable alloys, align phases, manipulate and shape nanoscale architectures, and produce materials with revolutionary structural and multifunctional properties otherwise unattainable by conventional processing and production methods. The application of electromagnetic fields offers the unique opportunity to direct the architecture of material features across atomic, molecular, micro, meso, and continuum levels. These fields may either be used to induce a permanent material property improvement or to selectively activate enhanced time-dependent properties via dynamic stimulation.
Relatively low-energy field-assisted processing methods such as spark plasma sintering (SPS), microwave sintering, and flash sintering have been developed to introduce electric and microwave fields for the reduction of sintering temperatures and times [1-3]. Ultrahigh electric and magnetic fields have been applied during material consolidation to enhance material properties and alter conventional phase diagrams, pushing the limits of traditional materials science. However, the fundamental thermodynamics and reaction kinetics that result in improved processing and revolutionary changes in properties are not well understood. Research on materials subjected to ultrahigh magnetic fields has been conducted at the National High Magnetic Field Laboratory (NHMFL) [5-6]. As an example, work by Oak Ridge National Laboratory (Ludtka et al.) at the NHMFL has led to the prediction of modified phase diagrams for a number of metals, including bainitic steel under a 30 T applied magnetic field. While several field-assisted methods have recently emerged, the goal of this effort is the development of new technologies that combine, augment, or control metal properties with electromagnetic fields and concurrently drive dynamical processes within assembly and production. The overarching technical challenges are to (1) develop a fundamental understanding of the chemical, physical, structural, and engineering aspects of field augmentation of metals, (2) identify phenomena that enable control of applied-field manipulation of metals, (3) develop concepts and approaches demonstrating enhancement of strength, hardness, fracture toughness, elastic modulus, etc., (4) perform numerical modeling to describe and predict electromagnetic field influence on properties, (5) elucidate approaches that enable field control for scale-up of metals production, and (6) develop an agile manufacturing design for in-house fabrication and commercial licensing.
PHASE I: Perform research and analysis that will allow for the demonstration of new concepts to apply electromagnetic fields (electric, magnetic, microwave, etc.) for the development of high specific strength metals with tailored microstructures and properties. Concepts should demonstrate a significant enhancement in strength, hardness, fracture toughness, elastic properties, etc., for metals and metal alloys (e.g. magnesium, aluminum, etc.). Concept evaluation will include fabrication of coupons that demonstrate significant property improvements when compared to current state-of-the-art metals via comprehensive characterization techniques (microscopy, property testing, nondestructive evaluation, etc.). Explore the incorporation of derived principles and theories into modeling and simulation tools with design predictive capabilities. PHASE II: Demonstrate an approach that enables field control for scale-up of metals production through the development of a novel process and a functional system for applying electromagnetic fields. Design and construct the necessary equipment and devices for accurately and reproducibly applying electromagnetic fields to fabricate metals with improved properties. Continued investigation and insight into the physics of the interactions between the applied fields and metals is also required as it relates to scale-up. Development of appropriate process models is necessary and required. In-situ characterization capabilities including process control and feedback are desired but not required. PHASE III: Develop an agile manufacturing system for in-house fabrication and commercial licensing by assembling commercial equipment suitable for applying electromagnetic fields of interest to a range of metals during processing under necessary temperature and pressure conditions. 
This system will include in-situ characterization capabilities and process control for quantitatively analyzing the effects of electromagnetic fields in real-time (microscopy, x-ray diffraction, nondestructive evaluation, etc.). Demonstration of this innovative system for fabricating metals with tailored and enhanced properties will assist the proposer in commercialization of the process or metals developed under this effort. Anticipated commercial applications may include novel advanced metals, electromagnetic equipment and devices, and modeling tools that accurately simulate the effects of applied electromagnetic fields on materials processing. The potential advantages of developing these applications include energy savings, property control and tailoring, and small volume production, which are equally valuable to both commercial and defense manufacture. Virtually all metals and some other materials industries, even commodity industries, as well as commercial and defense aerospace, automotive, and ship industries could benefit.
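For a sense of the energies involved (an illustration, not a topic requirement), the energy density stored in an applied magnetic field follows from u = B^2/(2*mu0). The 30 T value matches the NHMFL field cited in this topic; the 1 T case is included only as a typical laboratory-magnet comparison.

```python
import math

# Magnetic energy density u = B^2 / (2*mu0) for applied-field processing.
# 30 T is the NHMFL field cited in this topic; 1 T is a typical lab
# magnet, included here only for comparison.
MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def energy_density_j_per_cm3(b_tesla: float) -> float:
    return b_tesla ** 2 / (2.0 * MU0) / 1e6  # J/m^3 -> J/cm^3

for b in (1.0, 30.0):
    print(f"{b:>4} T -> {energy_density_j_per_cm3(b):.1f} J/cm^3")
```

At 30 T the field energy density is several hundred J/cm^3, roughly the order of solid-state transformation driving forces in steels, which gives a rough intuition for why such fields can shift phase equilibria while a 1 T field (under 1 J/cm^3) generally cannot.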
OBJECTIVE: The objective of this topic is to produce abuse-tolerant, full LiCoPO4-based Li-ion cells of size greater than or equal to 1 Ah. DESCRIPTION: Li-ion batteries provide the most energy storage capability on a weight and volume basis, and high energy density batteries are needed to reduce the weight borne by the soldier. However, Li-ion batteries have been shown to be susceptible to abuse which may lead to fires and explosions, as recently shown by highly publicized airplane groundings. There is thus a need for high energy batteries which are tolerant of abuse conditions. LiFePO4 is a well-known battery material which is known to be safe owing to the nature of the bonding of the oxygen atoms: the covalent bonding binds the oxygen, whereas higher energy oxide cathode materials such as LiCoO2 may tend to lose oxygen and accelerate fires and explosions. The tradeoff is the lower energy content of LiFePO4, which is a 3.4 V system, where the energy of the cell is the product of voltage and capacity. LiCoPO4 is a cathode material which has the same chemical structure as LiFePO4 but a much higher voltage of 4.8 V, thereby offering the possibility of a high energy battery (40% more energy than LiFePO4) which is tolerant of abuse conditions. It has not yet been commercialized owing to restrictions on the voltage limits of the electrolytes and to capacity fade issues with the electrolyte. However, recent developments in high voltage electrolytes and the invention of a substituted form of LiCoPO4 have led to the possibility of commercializing this cathode material. This solicitation aims to build upon these new developments to build abuse-tolerant Li-ion cells and demonstrate this tolerance through standard testing. A successful program will smooth the path toward commercialization. PHASE I: Full Li-ion cells of size greater than 1 Ah will be produced using LiCoPO4 (or substituted LiCoPO4) as the cathode and standard commercial graphite as the anode.
Abuse tolerance results will be obtained, where abuse tolerance minimally includes overcharging and short circuit. Desirable additional testing includes crush, nail penetration and high temperature exposure. The overcharging test uses an excessive current rate and charging time to determine whether a sample cell can withstand an overcharge condition without an explosion or fire. The short circuit test directly connects the positive and negative terminals of the cell to find the cell's tolerance to a maximum current without explosion or fire. The heating test measures a cell's ability to withstand an elevated temperature for a period of time. The tests shall follow standard lithium battery testing protocols such as UL (Underwriters Laboratories) 1642 (dated 25 November 2009). Since this testing is highly dependent on cell format and the nature of the counter electrode, it is imperative that control cells with the same counter electrode and cell format are used. The preferred control cell chemistry is a LiCoO2 cathode with a graphite anode. Ideally, the same electrolyte would be used for test and control cells, though a small amount of high voltage stabilizing additives (<1 weight % of the electrolyte) may be required for the higher voltage cell. During Phase I, safety trends shall be determined in comparison to other high energy density Li-ion battery cathodes such as LiCoO2. A minimum of 3 cells, and preferably 5 cells, will be used to monitor temperature during overcharge and short-circuit testing. The results shall be documented in monthly reports and in a final report. The reports will include details of materials utilized, procedures and process parameters used; test setup descriptions, results and conclusions; and performance assessment. Additionally, the reports shall include a description of test results, discussion, analysis and conclusions.
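The energy tradeoff between LiFePO4 and LiCoPO4 described above can be sketched with a back-of-the-envelope calculation. This is purely illustrative, using the nominal voltages from the description and assuming equal capacity for both chemistries (real capacities differ); the ~40% figure follows from the voltage ratio alone:

```python
def cell_energy_wh(voltage_v, capacity_ah):
    """Cell energy in watt-hours: the product of voltage and capacity."""
    return voltage_v * capacity_ah

capacity_ah = 1.0                        # 1 Ah, the minimum cell size sought here
lfp = cell_energy_wh(3.4, capacity_ah)   # LiFePO4 at a nominal 3.4 V
lcp = cell_energy_wh(4.8, capacity_ah)   # LiCoPO4 at a nominal 4.8 V
gain = (lcp - lfp) / lfp
print(f"LiFePO4: {lfp:.1f} Wh, LiCoPO4: {lcp:.1f} Wh, gain: {gain:.0%}")
# The gain works out to roughly 41%, consistent with the ~40% cited above.
```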
PHASE II: Phase II will investigate non-electrochemically active components that may further enhance safety and abuse tolerance of LiCoPO4-based Li-ion cells and complete a full spectrum of testing as described by UL or other testing protocols. The work may also desirably include the study of electrolyte safety additives or strategies such as non-flammability, low volatility, and/or thermal barriers. During Phase II, the testing will determine whether the Li-ion cells (a minimum of 3, preferably 5, per test) pass or fail each abuse test. The findings will be reported in monthly progress reports and in a final report. The report shall include details of materials utilized, procedures and process parameters used; test setup descriptions, results and conclusions; and performance assessment. Additionally, the reports shall include a description of test results, discussion, analysis, and conclusions. Additionally, a minimum of 3, preferably 5, of the most abuse-tolerant Li-ion cells will be delivered to ARL, where the cells must meet the metrics of an initial discharge capacity of 120 mAh/g based on the active cathode material and maintenance of the discharge capacity at no less than 80% of initial capacity after 500 charge/discharge cycles. PHASE III: The end state of this research is a high energy, high voltage LiCoPO4-based Li-ion battery with demonstrated abuse tolerance. Military applications include soldier power, auxiliary power such as in a silent watch application, and energy storage for a microgrid. Commercial applications include personal electronics such as cellular phones, laptop computers, and power tools, and transportation in an electric or hybrid electric vehicle or in aviation.
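The Phase II delivery metrics for the cells can be expressed as a simple acceptance check. This sketch is illustrative only; the function name and inputs are hypothetical, not part of the solicitation:

```python
def meets_delivery_metrics(initial_capacity_mah_g, capacity_after_500_mah_g):
    """Check the two delivery metrics stated above: an initial discharge
    capacity of at least 120 mAh/g (based on the active cathode material)
    and retention of at least 80% of that capacity after 500 cycles."""
    if initial_capacity_mah_g < 120.0:
        return False
    retention = capacity_after_500_mah_g / initial_capacity_mah_g
    return retention >= 0.80

# A cell starting at 130 mAh/g and holding 110 mAh/g after 500 cycles
# retains ~84.6% and passes; one falling to 100 mAh/g (~76.9%) fails.
print(meets_delivery_metrics(130.0, 110.0))  # True
print(meets_delivery_metrics(130.0, 100.0))  # False
```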
OBJECTIVE: Combat vehicle tires/tracks represent a significant percentage of the vehicle's signature. The objective is to develop a high durability coating that provides long term color matching for combat ground vehicle tires and track treads, matching the body color of the combat vehicle. The developed technology will reduce the signature of combat vehicles by increasing the percentage of the vehicle addressed by camouflage coatings. DESCRIPTION: Currently, combat vehicles benefit from highly specialized paint coatings which provide significant reductions in detection by matching the vehicle's color with the spectrum of its surroundings. This ability is the basis of camouflage. A significant portion of a ground combat vehicle's front and side image profile consists of tires and tracks. Currently, these tires and tracks are black in color due to the materials from which they are constructed (MIL-DTL-3100H). This means that a significant portion of a vehicle is left unaided by camouflage and is often in high contrast to the vehicle and environment. As a result, vehicles can be identified and targeted more easily and at greater distances. The goal is to develop a durable coating method to reduce the contrast between the vehicle body and its tires and tracks. Ideally this will improve the overall visible signature reduction of the vehicle. This includes all surface areas of tires and track tread pads (side walls, tread grooves and tread surface). A coating method is being sought to provide this color matching capability. Current mil specification colors include those defined in MIL-DTL-53039D. The Army's primary camouflage colors are Aircraft Gray, 36300; Aircraft Green, 34031; Black, 37030; Brown 383, 30051; Green 383, 34094; Green 808; IRR Foliage Green 504, 34160; Tan 686A, 33446; and Woodland Desert Sage, 34201. Of these colors, the color of immediate interest for tire coloration is Tan 686A.
The coating should have a flat or lusterless finish as described in MIL-DTL-53072, as tested by ASTM D523 - Standard Test Method for Specular Gloss (Department of Defense adopted). Of particular importance is the durability of the coating material. It should remain true in color and finish for the duration of the application lifetime. The material should be suitable for driving conditions on both paved and off-road terrain. The coating should be designed to survive the high pressures and shear characteristics of the rapid acceleration and stopping encountered with military combat vehicles. The coating needs to retain its conformity to the tire and track elastomeric material. The thickness of the coating should not interfere with the performance of the vehicle. It should also not attract more contamination than the baseline tire/tread material. The coating must tolerate environmental durability issues such as vehicle heat, UV and weather. Coating materials should not contain heavy metals or other known hazardous materials (MIL-DTL-53072D). The color coating method should not damage or reduce the lifetime of the tires, treads, tracks, underbody coatings or vehicle paint. The color match coating method should require the same level of cleaning effort as the original tire/track tread. Minimal masking off of tires from the rest of the vehicle during the coating process is preferred. Application with the tires and tracks on the vehicle is desirable. PHASE I: Demonstrate a durable coating which will allow color matching of tires and vehicle track treads. This coating will be in mil spec colors with gloss levels to match current Army MIL-DTL-53072 requirements. For Phase I, the durable tire coating will match the color and gloss of Tan 686A. Materials developed during Phase I shall be evaluated on appropriate substrates to demonstrate their wear and adhesion performance, simulating application on a vehicle tire/track tread.
This data, along with samples of the material applied to an appropriate substrate, will be provided for comparison to Tan 686A. A cure schedule will also be provided that describes the time for each step required to apply the coating. This will include the time required to reach sufficient cure to handle a tire as well as the time needed to reach a full cure sufficient to drive a vehicle without damaging the coating. PHASE II: Demonstrate the ability to produce the coating system in primary camouflage colors Brown 383, Green 383, Green 808, IRR Foliage Green 504, Tan 686A, and Woodland Desert Sage 34201. Phase II will include best efforts to match mil spec colors in the visible (400-700 nanometers) and near infrared (700-900 nanometers). Phase II will include the demonstration of a prototype application station. The station will be portable and designed to be placed inside a military vehicle style paint spray booth or portable painting structure if necessary. It should not require any additional safety or waste management requirements beyond those for applying CARC paint (MIL-DTL-53072D). A field repair kit will be developed that uses handheld equipment that can be shipped by air to combat locations. Kits will be available in all primary camouflage colors with similar performance to the original coating. Each kit will be sufficient to completely color treat a surface area of 4 feet x 4 feet. The successful deliverable will be judged upon evaluation of the prototype application station's ability to apply the coating to the tires and track treads of combat vehicles. The coatings will be evaluated by the government for color accuracy and coating permanence after being driven on the vehicles for a period of 6 months under various conditions. PHASE III: Phase III will provide the military a commercialized version of the color coating system. The technology will be incorporated into the CARC coating acquisition.
Phase III coatings would seek to improve the overall durability and lifetime of the tire by protecting it from environmental damage (dry rot, ultraviolet light damage). The coating would be applied to all combat vehicle tires and track treads when they receive their initial camouflage coating in paint spray booths. Commercial touch-up kits will also be manufactured to allow in-field touch-ups and in-field modifications typically needed under special force missions. This technology would be applicable to producing a durable commercial coating to improve the lifetime of off-road sport utility, emergency response, recreational and construction vehicle tires and track treads, which are susceptible to UV and ground-level ozone (dry rot). For general public safety, this technology can provide cars and trucks with a means of coloring black-wall tires to improve the visibility of the vehicle against asphalt roadways. The coating technology could also be applied to elastomeric inflated structures to provide long lasting color and environmental protection. Such structures are used by both the Department of Defense and the civilian market. Additionally, this technology could also be used to color rubberized inflatable boats for military and commercial purposes.
OBJECTIVE: The objective is to research and develop a non-relational database (NRDB) approach for use with existing simulation based training (SBT) applications. NRDB strategies allow for horizontal scaling of computational resources (i.e., taking advantage of more nodes in a computational cluster of computers) that traditional structured query language databases do not. The resulting approach is expected to increase efficiencies in the way the SBT application operates. DESCRIPTION: The background rationale for scaling simulation based trainers is to address the issue of properly representing the operational environment for Army training needs. The majority of current simulation based virtual environment training applications are only used to train at the small unit level, 40 soldiers or less. The reason for this is the inability of current systems to handle larger numbers of concurrent users in the same place at the same time. This also means there are limited system resources left over for opposing forces and neutral entities. It is believed that virtual world technology may be used to achieve the goal of full spectrum operations during virtual mission rehearsal exercises. Traditional structured query language databases follow a "one-size-fits-all" approach to data storage and retrieval. High performance applications can be tuned for efficiency using non-relational approaches to database design. Algorithmic research is required to apply non-relational database techniques to virtual environment based trainers, thereby allowing for an increased number of simultaneous training participants and/or an increased level of complexity in the virtual training environment. After the analysis of the simulation based training prototype is complete and the non-relational database is constructed for it, a generalized version of that database and the analysis approach can then be applied to other simulation based training applications.
The impact of this SBIR technology directly affects the virtual training application's ability to scale the number of participants and to increase the complexity of the scenes. Non-relational database approaches have not yet been widely adopted by simulation based training architectures. Non-relational database architectures are still in their infancy compared to traditional database approaches. Early experimentation in academia shows great promise for efficiency gains with their proper employment. The commercial game engines that virtual training systems are based on have different data requirements, as they are proprietary and do not share a common code base. Non-relational databases rely on specific tuning for each application, making the prospect of a migration to a non-relational database risky for these systems. A generalized approach that shows significant increases in performance and efficiency would be extremely attractive to industry. Possible areas of optimization include scene graph management and object sorting. When dealing with extremely large data sets or many simultaneous users, both of these applications would benefit from increases in efficiency that translate to increased performance. Scalability can be examined in three different categories: size of the operational area, number of entities in the environment, and complexity of the environment. The next generation of training applications needs to handle more human users, more complex objects such as vehicles and non-player characters, and larger operational areas to create realistic scenarios in new operational environments. The application of next generation database technology to existing prototypes will help achieve increases in scalability. PHASE I: The offeror will be provided with a prototype virtual training environment currently in use by the US ARL/STTC.
The offeror will analyze and provide conceptual designs for alternative database deployments for the virtual environment provided by the US ARL/STTC. The alternate database designs will include non-relational distributed clusters intended to maximize scalability and availability for use with large quantities of data. The offeror will provide estimates of differences in performance. This effort will determine the feasibility of applying the non-relational distributed model to the problem of database scalability. PHASE II: The offeror will conduct a comparative study of the latest research in non-relational distributed databases against the current relational database deployments. The offeror will design performance tests which are representative of real-world usage and report the results of the testing. The offeror will develop, test, and demonstrate an implementation of the highest performing database approach in a relevant Army training environment, such as the ARL Simulation and Training Technology Center (STTC) Military Open Simulator Enterprise Strategy (MOSES) effort and the Training and Doctrine Command's (TRADOC) Enhanced Dynamic GeoSocial Environment (EDGE) effort. Although the approach may be demonstrated in a specific environment (e.g., MOSES), the design will not be tied to this environment. The outcome of this research is to provide designs and guidance for current simulation based training environments to assess for possible migration to a next generation non-relational distributed database. Phase II deliverables will include a comparative study of the various non-relational database designs, test results, and a working prototype of the highest performing implementation. PHASE III: The offeror will work to apply this approach to large scale (mission command to boots on the virtual ground) operations during mission rehearsal exercises.
A simulated mission rehearsal training exercise will support thousands of human users simultaneously with an accurately represented operational area. Beyond US Army use, the offeror will also work to commercialize the results of the application research and resulting algorithms as a solution in the entertainment industry to provide more realistic gaming experiences, with higher fidelity and larger numbers of simultaneous players.
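The horizontal scaling this topic seeks can be illustrated with a minimal sketch of a non-relational (key-value) scene store that shards virtual-environment objects across cluster nodes. This is a hypothetical illustration only; the class and method names are invented, and a real deployment would use a distributed store with replication and rebalancing rather than in-process dictionaries:

```python
import hashlib

class ShardedSceneStore:
    """Toy key-value scene store partitioned across cluster nodes.

    Each node holds a shard of the scene objects, so adding nodes adds
    capacity (horizontal scaling), in contrast to scaling up a single
    relational database server.
    """

    def __init__(self, num_nodes):
        # One dict stands in for each node's local store.
        self.nodes = [dict() for _ in range(num_nodes)]

    def _node_for(self, object_id):
        # Deterministic bucketing by hashing the object id; a production
        # system would use consistent hashing to limit rebalancing.
        h = int(hashlib.sha1(object_id.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def put(self, object_id, state):
        self._node_for(object_id)[object_id] = state

    def get(self, object_id):
        return self._node_for(object_id).get(object_id)

store = ShardedSceneStore(num_nodes=4)
store.put("entity-1", {"pos": (10, 20), "type": "vehicle"})
print(store.get("entity-1"))
```

More simultaneous participants are then accommodated by raising `num_nodes`, rather than by tuning a single SQL server, which is the scalability argument the description makes.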
OBJECTIVE: Demonstrate an approach for canceling cosite interference on dismounted soldiers resulting from collocated communications and electronic warfare (EW) systems, without a physical connection between the EW and communications systems. DESCRIPTION: Dismounted soldiers carry various Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) equipment associated with various mission profiles. The Army continues to strive to bring more information and extend the tactical network all the way down to the individual rifleman. This requires dismounted soldiers to carry various wireless communications equipment which is spectrum dependent. In addition to communicating over the network, some missions necessitate that soldiers carry electronic warfare (EW) systems for force protection. Soldiers carrying both radios and EW systems become at risk for electromagnetic fratricide between the systems. This can lead to degraded communications and the inability to deliver timely situational awareness and mission command in a friendly force EW environment. Solving this issue for dismounted soldiers introduces unique challenges, as the size, weight, and power (SWaP) associated with the equipment must be conducive to dismounted soldier operations. Adding cabling between the EW and communications systems introduces additional weight, snag hazards, and mobility issues. The Army is seeking an innovative solution to the cosite interference issue with minimal increase to the soldier load. The solution shall consist of an applique that is integrated onto the communications radio. Such an approach may consist of a "sleeve" for a handheld radio that provides the interference cancellation capability. The applique shall not increase the size of the integrated system (handheld + applique) by more than 50% over the standalone handheld radio. The target platform for this applique is the AN/PRC-154 "Rifleman Radio".
The applique shall be battery powered, with interconnections that are compatible with the Rifleman Radio. The approach must address in-band interference (within the modulation bandwidth of the primary communications signal) generated by the EW system. This interference can result from intermodulation products, harmonics, spurious emissions, and an elevated noise floor generated by the EW system. The approach shall also address out-of-band interference generated by the aforementioned effects, as well as interference generated by the primary EW signal(s). It is assumed that the communications signal is not assigned to a targeted EW frequency. The proposed approach cannot change the output of the EW system. The approach shall also consider the effects of nearby EW systems operating on adjacent soldiers in close proximity, as well as the desensitization effects that occur within the communications system. The solution shall address the Soldier Radio Waveform (SRW) operating over a tuning range of 225 MHz to 2 GHz. The solution shall provide at least 25 decibels (dB) of interference reduction, with a target of 40 dB, within the modulation bandwidth of the SRW channel. This shall be accomplished without a priori knowledge of the EW signal (i.e., a reference signal). PHASE I: The Phase I effort shall include a feasibility study outlining problem considerations and potential solutions. An analysis of the theoretical limits of the various technical approaches shall be presented, in addition to practical limitations. The Phase I effort will identify the best approach and provide a recommendation for Phase II implementation. The Phase I deliverable will be a report documenting the results of the Phase I effort. PHASE II: The Phase II effort shall construct and demonstrate the operation of a prototype that will cancel cosite interference on the Soldier Radio Waveform.
The prototype shall cover the 225 MHz to 450 MHz military Ultra High Frequency (UHF) communications band. The effort shall include power considerations with respect to battery life associated with the developed hardware. The Phase II prototype will be tested at a government facility in an operationally representative environment and shall demonstrate at least 25 dB of restored receiver sensitivity in the presence of the EW system. The prototype shall be delivered to the government with an associated user manual, interconnect diagram, and a report documenting the results of the Phase II effort. PHASE III: Phase III efforts will focus on reducing the size, weight, and power of the Phase II prototype and integrating it into Army Program of Record SRW radios. This work will include extending the Phase II prototype to cover additional frequency bands. The Phase III work may also target additional commercial off the shelf (COTS) SRW radios that are demonstrated during the Army's Network Integration Events (NIE), which currently occur twice a year. The technology developed under Phase II may also be modified and transitioned to the commercial cellular industry for use in mitigating strong interferers in Code Division Multiple Access (CDMA) systems.
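To put the 25 dB threshold and 40 dB objective above in linear terms, decibels convert to power ratios via the standard relation ratio = 10^(dB/10). A short sketch of that conversion:

```python
import math

def db_to_power_ratio(db):
    """Convert decibels to a linear power ratio: ratio = 10**(dB/10)."""
    return 10 ** (db / 10)

def power_ratio_to_db(ratio):
    """Convert a linear power ratio back to decibels."""
    return 10 * math.log10(ratio)

# The 25 dB threshold corresponds to cutting interference power by a
# factor of about 316; the 40 dB objective corresponds to a factor of 10,000.
print(round(db_to_power_ratio(25)))  # 316
print(round(db_to_power_ratio(40)))  # 10000
```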
OBJECTIVE: An innovative HW/SW solution will be developed to map a computer host/network, run attack scenarios without disrupting the host/network, and develop actionable courses of action to counter real cyber-attacks. DESCRIPTION: The cyber domain represents an elusive environment that the Army must defend. Over the past two decades, our cyber defenses have been largely ineffective, because our information systems are relatively static while our cyber defenses have remained reactive in nature, making it fairly easy for adversaries to explore, map, track, and then launch decisive attacks. Cyber-attack threat vectors are increasing in quantity and quality. Their level of sophistication and their ability to evade and circumvent our defensive cyber capabilities have grown by leaps and bounds. Cyber adversarial reasoning capabilities that can inform our defensive sensors about adversarial intent, and thus improve attack prediction on the host platform and network, are greatly needed. Such a solution will deliver timely mission command and tactical intelligence to provide cyber threat situational awareness in all environments. Today's cyber battle space has seen attack threat vectors increasing in quantity and quality. From Internet-connected televisions to SCADA systems hastily connected to the Internet, the number of Internet-connected devices has skyrocketed in recent years, resulting in an exponential increase in the attack surface. Nation-state-funded cyber espionage has brought highly skilled professional hackers into the cyber battle space. Hacktivism in the mainstream adds a new dimension to cyber warfare in that its actions have many unforeseen consequences. While the motivations behind hacktivism and nation-state-based espionage differ, the result is the same: someone always wants access to the data you are trying to protect, or to disrupt the network you rely on.
In an effort to understand this new battle space, capabilities to analyze cyber adversarial reasoning and predict attacks on the host and network are needed for both strategic and tactical networks. The tactical network brings additional cyber issues due to a communications and networking architecture that is resource constrained and ad hoc in nature, where both the infrastructure and end hosts have the ability to roam on the battlefield. The goal of this effort is to investigate the use of host- and network-based sensor agents to develop accurate host and network maps, analyze the data for potential threat vectors, and provide aggregated threat prediction with 85% confidence. This data can then be provided to an automated war gaming engine to run scenarios through the host and network in real time to determine the most likely attack vectors on the network. This approach will aid in an overall plan to understand the adversaries' strategies and tactics in order to build real-time adaptive software/network protection systems that will be developed and run in this environment. This solution will increase network resilience and provide the building blocks for proactive cyber defenses. This will enable better network planning and stronger configurations of strategic and tactical networks. PHASE I: 1) Develop algorithms that can reason on adversarial intent and that can improve cyber attack prediction. 2) Develop an automated war gaming engine to run scenarios in real time on the host and network. 3) Show overall feasibility of the concept, with demonstration software on representative hardware. The concept/approach should emphasize its scalability to an enterprise network and a minimum of 100 nodes. The concept/approach should also use major standards to ensure use in a common operating environment to the greatest extent possible.
4) Produce a detailed research report outlining the design and architecture of the system, as well as the advantages and disadvantages of the proposed approach. PHASE II: 1) Based on the results from Phase I, design and implement a fully functioning prototype solution for both the cyber adversarial reasoning and attack prediction capability and the war gaming engine. 2) Provide test and evaluation results that demonstrate the effectiveness and accuracy of the solution's capabilities to reason on adversarial intent and perform attack prediction on the host and network. 3) Develop a final report describing the strategy, architecture, design, and development of the cyber adversarial reasoning, attack prediction and war gaming engine techniques. PHASE III: Phase III Dual Use Application: Further develop the prototype into a transitional product with the necessary documentation for a Program of Record such as the Warfighter Information Network - Tactical (WIN-T) or Program Executive Office Enterprise Information Systems (PEO EIS) for integration into their infrastructure. This capability could be incorporated into commercial host/network security applications to enhance usability and improve commercial security protection.
OBJECTIVE: Research and develop programmable RF transceiver technology, including software, hardware and documentation, capable of fragmenting one RF transmission into multiple RF fragments and reassembling the fragments post reception into the original composite. DESCRIPTION: This Fragmented Spectrum Efficiency Manager (FSEM) system effort is intended to provide a communications capability to deliver detection-resistant, timely mission command and tactical intelligence and situational awareness in all environments. Use of Commercial Off The Shelf (COTS) products is important, but not to the extent of restricting research. The solution must demonstrate coherent processing in the fragmenting of a transmission and the distribution of fragments of spectrum to four or more geosynchronous satellite paths. The solution must also demonstrate the aggregation of the fragments post satellite transmission. Therefore this effort requires at least two independent hardware elements operating geographically separated. The FSEM fragmenting and aggregating will function in the frequency range of 1 to 2 GHz (L-band). The FSEM must, however, interface to a frequency conversion component for transport and address satellite communications latencies associated with frequencies from C- through Ka-band. The FSEM must also interface with satellite modems in the same frequency range as the frequency conversion interface. The solution can utilize overhead framing techniques, but efficiency must be great enough to stay below the 5% bandwidth overhead utilization threshold. Lastly, the energy per bit, commonly referenced to a 0 dB noise figure (written in the industry as Eb/N0), can grossly impact data throughput. Power levels from path to path will vary. Satellite link performance can range from unusable to completely error free within 2 dB of Eb/N0.
The energy per bit performance at the receive part of the modem will be a strong metric in terms of the receive quality of a satellite link in assessing FSEM aggregation performance. The solution should incorporate leveling and phase techniques to enable optimized aggregation. The technical risk associated with this effort is the means to fragment a spectrum transmission into 4 or more segments and re-aggregate the fragments, successfully recovering the original data error free within the original composite post aggregation, at low bandwidth cost. The FSEM fragmenting processor will have only one L-band carrier SMA input and at least four L-band SMA fragment outputs. As a threshold requirement, each input and output of the FSEM fragmenting processor must handle a single carrier bandwidth range of 38.4 kHz through 40 MHz, with an objective requirement of 150 Hz (or lower) through 1 GHz (or higher). The FSEM fragmenting processor outputs must be programmable by the user to distribute fragments between 0-100 percent of the original composite input across at least four spectrum outputs, including: replicating 100% to one channel; 100% to each channel; and non-replicated, uneven distribution. As an example, the FSEM fragmenting distribution non-replicated across four channels can be: 10%, 25%, 30% and 35%. Interfacing signal levels range from +5 to -40 dBm. The FSEM aggregating processor will be completely independent of the FSEM fragmenting processor and will have a corresponding minimum of four L-band SMA inputs and one L-band SMA output. Each FSEM aggregating processor channel input/output carrier bandwidth requirement is the same as for the FSEM fragmenting processor. Interfacing signal levels range from -25 to -70 dBm. PHASE I: In Phase I, the contractor shall develop the architecture and the design approach for the programmable Fragmented Spectrum Efficiency Manager (FSEM) system.
The architecture and design should, at a minimum, meet the threshold requirements identified in the Description paragraph, above. Existing technologies such as inverse multiplexing and packetizing are referenced as known architectures and will not be acceptable as a Phase I deliverable, due to circuit dependencies requiring feedback, bit stuffing and packetizing, which are inefficient methods in a SATCOM environment. The design must be state of the art and reflect agility in programmatically replicating fragments of bandwidth in a non-symmetrical fashion across four or more channels in a single direction, re-aggregated at the receiving end at an overhead cost within the threshold cited above. The design must show the encoding and preamble methods used in the transmission disassembly that provide a means of recovery at the receiver and handle the varied latencies and levels encountered in the reassembly process. PHASE II: In Phase II, the contractor shall build, test and deliver a prototype Fragmented Spectrum Efficiency Manager (FSEM) system in accordance with the design delivered in Phase I, to include all required hardware, software and user documentation. The prototype shall incorporate commercial and/or military standards on all interfaces. The contractor shall develop and deliver a test methodology that includes a Government-approved test plan, test procedures, verification cross reference matrix (VCRM) and script files as necessary for testing. The contractor shall support demonstration testing at the Joint Satellite Engineering Center (JSEC) laboratory at Aberdeen Proving Ground for one week. The prototype system shall, at a minimum, meet the threshold requirements identified in the Description paragraph, above. The prototype will be tested with multiple spectral transmissions for disassembly and reassembly. 
The prototype is required to be programmable and must be able to be pre-configured by an operator to disassemble/reassemble at programmable spectrum fragment and bandwidth distribution sizes. The disassembly will be handled at one location and the reassembly at a different location; therefore, the prototype must consist of two physical elements, each capable of being programmed and operated independently. PHASE III: In Phase III, the Fragmented Spectrum Efficiency Manager (FSEM) prototype design will be refined, optimized and productized for transition to military Programs of Record and commercial applications. All circuitry, fabrication and interfaces must utilize industry-recommended or military standards (e.g., MIL-STD-530, RS-422) wherever possible, must meet safety standards prior to delivery, and must be labeled in accordance with best commercial practices. The Fragmented Spectrum Efficiency Manager (FSEM) system has the potential for use in multiple emerging transmission technologies where there is a need for coherent fragment disassembly and reassembly along multiple transmission lines. Immediately, the system will provide an inherent passive Anti-Jam (AJ)/Anti-Scintillation (AS) capability, which will undergo testing upon delivery. The current focus is on emerging and existing military communications systems, but this technology may also be of use in commercial areas requiring high volume data communications, including video. Military efforts such as Future Advanced SATCOM Terminals (FAST) are launching efforts to expand the digital domain in today's transponded SATCOM. Creating a means to programmatically traverse multiple polarizations offers a robust means of communications impervious to man-made scintillation and interference that, if appropriately productized, can be utilized throughout DoD.
OBJECTIVE: Develop high-performance, low-power, acceleration-compensated oscillator technology with combined active and passive compensation, utilizing advanced Micro Electro-Mechanical Systems (MEMS) packaging technology. DESCRIPTION: C4ISR/EW systems mounted on high dynamic platforms such as tactical vehicles, fixed and rotary wing aircraft, unmanned aerial vehicles, and missiles rely on one or more oscillators generating precision frequency and time signals to function properly. Such systems include radar systems; sensor systems; signals intelligence systems; GPS-aided navigation, guidance, targeting systems; and broadband, high data rate communication systems. High-performance oscillators ubiquitously utilize quartz crystal resonators. Since those oscillators are the most acceleration-sensitive components in C4ISR/EW systems, the performance of the entire system is degraded when a platform carrying C4ISR/EW systems undergoes the ever-present severe dynamics of accelerations, vibrations and shocks. To overcome such degradation, the oscillators are compensated with combined active and passive methods against severe dynamic environments. Today's state-of-the-art oscillators offer performance acceptable for the current systems at relatively large Size, Weight, and Power (SWAP). As future weapon systems demand better performance at reduced Size, Weight, Power and Cost (SWAP-C), new high-performance acceleration-compensated oscillators need to be developed. The performance goals of the new oscillators are equivalent to the performance obtainable from a 10 MHz oscillator with: short-term stability 1E-13 from 1 sec to 100 secs; phase noise at rest of -140, -150, -155, -160 dBc/Hz at 10, 100, 1000, 10000 Hz, respectively; acceleration sensitivity 1E-12/g from 10 Hz to 2000 Hz; temperature coefficient +/-1E-11 from -40 C to 85 C; and aging 1E-8 over 10 years. The SWAP goals are: volume < 1 cc; power < 100 mW. 
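The acceleration-sensitivity goal can be put in perspective with the standard small-angle relation for the vibration-induced single-sideband level, L(fv) = 20*log10(Gamma * A * f0 / (2 * fv)), for sinusoidal vibration of peak acceleration A (in g) at frequency fv. This is a generic textbook relation used here to show that the 1E-12/g goal keeps vibration sidebands near or below the quoted at-rest phase noise floor:

```python
import math

def vibration_sideband_dbc(gamma_per_g: float, accel_g: float,
                           f0_hz: float, fv_hz: float) -> float:
    """Single-sideband level (dBc) induced by sinusoidal vibration, using the
    standard small-angle approximation L(fv) = 20*log10(Gamma*A*f0 / (2*fv))."""
    return 20 * math.log10(gamma_per_g * accel_g * f0_hz / (2 * fv_hz))

# Goal values from the topic: Gamma = 1E-12/g, f0 = 10 MHz; assume 1 g vibration
for fv in (10.0, 100.0, 1000.0, 2000.0):
    lvl = vibration_sideband_dbc(1e-12, 1.0, 10e6, fv)
    print(f"{fv:6.0f} Hz vibration -> {lvl:7.1f} dBc")
```

At 100 Hz this evaluates to roughly -146 dBc for 1 g of vibration, comparable to the -150 dBc/Hz at-rest goal at that offset; a typical uncompensated quartz sensitivity of ~1E-9/g would sit some 60 dB higher.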
In comparison to current oscillators, the new oscillators to be developed need to deliver significantly improved performance with combined active and passive acceleration-compensation while the SWAP is reduced by more than an order of magnitude. These challenging goals are difficult to accomplish through evolutionary engineering applied to the existing technology. Innovative oscillator technology utilizing the most advanced Micro Electro-Mechanical Systems (MEMS) packaging technology is sought. PHASE I: Conduct a feasibility study that identifies and addresses the problems that must be overcome in order to successfully demonstrate a high-performance, low-power oscillator with combined passive and active acceleration compensation. Develop the acceleration-compensation methodology and packaging design to meet the performance specifications. Demonstrate feasibility on a tabletop at TRL 3. Deliver a final report that covers the outcome of this study, performance specifications, and oscillator design and fabrication plan details. PHASE II: Fabricate prototype oscillators to test, demonstrate and validate the feasibility of combined passive and active acceleration-compensation under simulated laboratory conditions. Field testing will be performed at a Government facility to assess operability and reliability of the oscillator, using MIL-STD-810G as a guide for the testing. The final report, TRL 5 prototype oscillators (20 units), their description and operation guide, and test reports will be delivered. PHASE III: The purpose of this research is to develop a low phase noise capability for use with clock systems such as the Chip Scale Atomic Clock (CSAC). A product resulting from this effort could demonstrate, in the lab, improved performance with the CSAC for applications such as radars and certain communication systems. Other military applications could include surveillance UAVs, robotic platforms, as well as potential military satellite platforms. 
Commercial products could include telecommunication satellites and other small platforms that require high precision timing performance with low cost and low power consumption.
OBJECTIVE: Develop a GPS anti-jam antenna that interoperates with both GPS pseudolites and Blue Force Electronic Attack (BFEA) interference sources. DESCRIPTION: The signals transmitted by GPS satellites reach the surface of the earth at extremely low power levels, and as a result, the signals are susceptible to intentional and unintentional interference. Sources of intentional interference, known as GPS jammers, are becoming increasingly easy to obtain and use, and consequently, their use has become more pervasive. [1, 2] Military GPS receivers can be outfitted with special purpose antennas to help mitigate the effect of jamming. These antennas, or arrays of several antennas, create nulls in the direction of interference sources to cancel the incoming noise. One such antenna technology, known as the Controlled Radiation Pattern Antenna (CRPA), consists of an antenna array and a processing unit that performs a phase-destructive sum of the incoming interference signals. GPS pseudolites (pseudo satellites) are another technique utilized by the military to help mitigate intentional GPS interference. A GPS pseudolite is a terrestrial or airborne platform that transmits GPS signals at power levels strong enough to be received in a noisy environment. Pseudolites can be deployed in a wide Area of Operation (AoO), and compatible military GPS receivers can navigate using the signals transmitted by pseudolites. [4, 5] Blue Force Electronic Attack (BFEA) interference sources have the ability to deny the use of GPS and other satellite based navigation systems (collectively known as GNSS) to hostile forces, while simultaneously maintaining service to Military GPS User Equipment (MGUE). To this end, the BFEA interference sources will broadcast waveforms that are designed to preserve specific military GPS signals while denying access to civilian GPS and GNSS signals. 
Both of these technologies, GPS anti-jam antennas and GPS pseudolites, are effective at mitigating interference to GPS, or in the case of BFEA, denying its use to hostile forces, but they are largely incompatible. That is, a receiver with a GPS anti-jam antenna could null the strong signals produced by a GPS pseudolite, mistaking it for an interference source. The goal of this SBIR effort is to develop a GPS anti-jam antenna that interoperates with both GPS pseudolites and Blue Force Electronic Attack (BFEA) interference sources. The antenna shall be capable of receiving military GPS signals, civilian GPS signals, and GPS pseudolite signals. The antenna shall also be capable of nulling several hostile interference sources, and ignoring any BFEA interference sources. The antenna shall accomplish these tasks simultaneously, that is, it shall receive GPS signals from satellites or pseudolites, while ignoring hostile and BFEA interference sources. The initial intent is to deploy these anti-jam antennas on ground and air platforms, specifically platforms that will emit the pseudolite or BFEA signals. Antennas co-mounted on these platforms shall be capable of ignoring the strong emissions from sources mounted nearby. PHASE I: Design a novel military GPS anti-jam antenna capable of simultaneously receiving military GPS signals, civilian GPS signals, GPS pseudolite signals, and nulling or ignoring interference sources. Develop the overall antenna design and antenna processing unit. PHASE II: Develop a prototype GPS anti-jam antenna capable of simultaneously receiving military GPS signals, civilian GPS signals, GPS pseudolite signals, and nulling or ignoring interference sources. Demonstrate this capability in a controlled laboratory environment. PHASE III: This anti-jam antenna system could potentially be used in a broad range of military applications where access to GPS would otherwise be denied or degraded due to hostile interference. 
For example, if the Army is conducting operations in an area where GPS signals are denied as a result of hostile interference sources, but GPS pseudolites are deployed to help mitigate this issue, the anti-jam antenna system would allow a military GPS receiver access to the pseudolite signals while, at the same time, nulling any interference sources in the area. This gives the receiver a much greater advantage than using either one of those technologies alone. The anti-jam antenna system could also be installed on GPS pseudolites, since they will likely be used in environments where GPS is denied or degraded and where BFEA sources are deployed. Commercial applications include government agencies employing GPS devices in the pursuit of criminals using GPS jammers, or transportation systems that employ GPS augmentation signals for improved tracking.
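The phase-destructive-sum idea behind a CRPA can be sketched with the simplest possible case: a two-element array whose second weight is chosen so that signals from the jammer's direction cancel while a signal from another direction survives. This is a generic spatial-nulling illustration, not the CRPA processing algorithm itself; the array geometry and angles are hypothetical.

```python
import cmath
import math

def steering(theta_deg: float, d_wavelengths: float = 0.5) -> list[complex]:
    """Response of a 2-element line array to a plane wave arriving at theta."""
    phase = 2 * math.pi * d_wavelengths * math.sin(math.radians(theta_deg))
    return [1, cmath.exp(1j * phase)]

jam = steering(30.0)                 # hypothetical hostile jammer at 30 degrees
w = [1, -jam[0] / jam[1]]            # pick weights so w . a(jammer) = 0

def response(w: list[complex], a: list[complex]) -> float:
    """Magnitude of the weighted array output for arrival vector a."""
    return abs(sum(wi * ai for wi, ai in zip(w, a)))

print("jammer gain   :", response(w, steering(30.0)))    # nulled, ~0
print("satellite gain:", response(w, steering(-20.0)))   # survives
```

A real CRPA adapts many such weights continuously across several elements, which is exactly why it can mistakenly null a strong pseudolite unless it is told to preserve that direction or signal structure.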
OBJECTIVE: Development of a dual-frequency-band imaging radar operating at L-band and Ka-band with GMTI and "video SAR" capabilities for use on small manned and unmanned aircraft. DESCRIPTION: With different frequency bands, a radar can get very different views of the world. A low frequency radar, operating at L-band for example, will image the larger features of terrain, vegetation, and man-made targets due to the use of a longer wavelength signal. A very high frequency radar operating at Ka-band will image the same area very differently due to fine scale backscattering of the short wavelength signal. In addition, a Ka-band SAR can rapidly form a series of images of a target area that can be viewed sequentially at a sufficient frame rate, providing a "video SAR" view. Each frequency band has advantages in detecting different types of targets, especially when used to support advanced exploitation algorithms (e.g. SAR change detection, GMTI tracking, SAR & GMTI target recognition). A lightweight dual-band system for a small platform introduces challenges in power and processor management, yet it provides a significant payoff by increasing the chance of detecting targets of interest. The system must:
- Provide SAR and GMTI modes at L-band and Ka-band
- Operate from 3,000 to 10,000 feet AGL
- NES0 = -30 dB m2/m2
- MNR = -18 dB
- Cover a swath 5 km wide at L-band, and 0.5 km wide at Ka-band (at finest resolution)
- Have a maximum weight of 35 lbs
- Provide 1 meter range resolution at L-band
- Provide 10 cm range resolution at Ka-band
- GMTI minimum detectable velocity of 2.3 m/s at L-band (at 75 knots aircraft ground speed)
- GMTI minimum detectable velocity of 0.5 m/s at Ka-band (at 75 knots aircraft ground speed)
- Provide "video SAR" at Ka-band with a frame rate of 3 fps
PHASE I: Expected deliverables are a detailed radar system design, performance estimation, and a schedule and work-plan for completing Phase II. 
The radar system design includes details on data processing, including SAR image formation and GMTI Doppler processing. Performance estimation may include modeling and simulation. Included in the Phase II work plan is a proposal of the expected utility of the system and compatibility/effectiveness with exploitation algorithms. PHASE II: Phase II includes the development and demonstration of a prototype radar system. Performance measurements are to be made to validate that (A) the system meets the system requirements over the range of operational altitudes and (B) the imagery can be exploited to detect targets of interest. Image products are to be compliant with current standards (NITF 2.1 for SAR, STANAG 4607 for GMTI). The test data and results are to be provided to the Army as part of a final report detailing the work done in Phase II. PHASE III: The completion of this phase would result in a mature technology which would undergo an appropriate operational demonstration following integration onto manned and/or unmanned military aircraft. The technology developed under this SBIR would also have commercial applicability, as small form factor Ka-band sensors may have utility as collision avoidance sensors in commercial aircraft.
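The range-resolution requirements above imply waveform bandwidths via the standard relation delta_r = c / (2B), which helps show why the fine resolution is asked of the Ka-band channel (a fractional bandwidth of ~1.5 GHz is far easier to realize at 35 GHz than at L-band). This is a generic radar relation, not part of the solicitation:

```python
C = 299_792_458.0  # speed of light, m/s

def bandwidth_for_range_res(delta_r_m: float) -> float:
    """Waveform bandwidth (Hz) needed for slant-range resolution delta_r,
    from the standard relation delta_r = c / (2 * B)."""
    return C / (2 * delta_r_m)

print(f"L-band,  1 m resolution : {bandwidth_for_range_res(1.0) / 1e6:6.1f} MHz")
print(f"Ka-band, 10 cm resolution: {bandwidth_for_range_res(0.10) / 1e9:5.2f} GHz")
```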
OBJECTIVE: To design, develop and build a prototype RF current source or RF power amplifier that drives the type of low impedance magnetic current loop used in the magnetic sensor described below. DESCRIPTION: Operation of the sensor is dependent on the magnetic field that is projected, which in turn is directly related to the current in the magnetic current loop. Conventional 50 ohm amplifiers require considerable matching, resulting in a narrow bandwidth and excessive operational sensitivity. Current bandwidths are less than 0.5%, and it is expected that an RF current source will provide a bandwidth of greater than 10%. This will allow for much greater flexibility of mounting configurations. The only option today is to use a 50 ohm power amplifier. Using a conventional 50 ohm power amplifier requires a matching network to transform from 50 ohms to approx. 1.005-j3.09 ohms. Thus, the conventional approach would be to start with a voltage controlled current source into a 50 ohm power amplifier which is then impedance matched to 1.005-j3.09 ohms. This process provides for a maximum bandwidth of only 1% and results in poor efficiency. Developing an RF current source or RF power amplifier that drives a low impedance magnetic sensor will result in a minimum of a 10x increase in bandwidth and approximately a 5-10x reduction in required power, with the sensor operating at approx. 1.005-j3.09 ohms. This type of magnetic sensor can easily penetrate the ground to detect deeply buried threats, such as landmines, while its design will reduce unwanted electromagnetic (EM) interference. The commercial and military applications include the development of greatly improved metal/anomaly sensors. The concept behind transmitting a large magnetic field while minimizing the generation of a propagating EM wave is to use a current loop in which the current around the loop has a constant magnitude and a constant phase. 
Usually in a current loop sensor the current changes phase around the loop, and this phase change generates a propagating EM signal. By keeping both the magnitude and phase constant, little EM signal is projected, but a strong magnetic signal is produced that extends normal to the plane of the loop, creating a large magnetic field in the near field. This field will penetrate conducting dielectrics such as ground, which have little effect on the magnetic field but substantially terminate the electric field and, thus, a propagating EM wave. In one published approach, the in-phase current loop is created using multiple small loops. In another, an in-phase current loop design is presented in which reactive compensation is used. Periodic series capacitors placed around the loop compensate for the "time-of-flight" phase change along a segment of the loop. Thus a magnetic current loop was developed for use in a magnetic-current-loop-based communication system. This design divided the loop into small segments, and reactive compensation is added to each segment. Adding reactive compensation to each segment of the loop cancels the series reactance of each segment and provides for current magnitude and phase uniformity along the loop at any given instant in time. We have built and modeled such a magnetic sensor, and the impedance at 13.56 MHz is around 1.005-j3.09 ohms. PHASE I: The contractor shall conduct a feasibility study to develop a current source which can greatly improve the bandwidth and reduce the required power needed to drive a low impedance magnetic sensor. The contractor shall submit a report which shall detail the results of the feasibility study of the sensor to be used to perform this mission. The report should contain a description of the sensor, as well as technical details of how the sensor will perform the required task(s) and expected performance. A brief high level plan for Phase II work should be included in this report in the event of a Phase II selection. 
PHASE II: The contractor shall develop a robust prototype based on the results of the Phase I effort. The prototype will be able to drive a low impedance magnetic sensor and demonstrate an ability to penetrate the ground to detect deeply buried threats. A demonstration of the sensor will be done at a location determined by the government. PHASE III: Based upon Phase II results, the sensor will be improved upon and optimized for commercialization. Multiple military programs and commercial applications can benefit from this sensor, including R&D laboratories and both military and commercial metal/anomaly sensor developers/manufacturers. The most likely path for transition to operational capability is development of a metal/anomaly sensor superior to the sensors presently available.
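The per-segment reactive compensation described in this topic amounts to simple series resonance arithmetic: a series capacitor C = 1/(w * X_L) cancels an inductive segment reactance +jX_L, and conversely a series inductor L = X_C / w would resonate out the net capacitive -j3.09 ohm quoted at 13.56 MHz. The 50 nH segment value below is a hypothetical example, not from the topic:

```python
import math

F = 13.56e6               # operating frequency quoted in the topic, Hz
W = 2 * math.pi * F       # angular frequency, rad/s

def series_cap_to_cancel(x_l_ohms: float) -> float:
    """Series capacitance (F) that cancels an inductive reactance +jX at F."""
    return 1 / (W * x_l_ohms)

def series_ind_to_cancel(x_c_ohms: float) -> float:
    """Series inductance (H) that cancels a capacitive reactance -jX at F."""
    return x_c_ohms / W

# Hypothetical 50 nH loop segment: X_L = w*L, compensated per segment
x_seg = W * 50e-9
print(f"segment X_L = {x_seg:.2f} ohm -> series C = {series_cap_to_cancel(x_seg) * 1e12:.0f} pF")

# Resonating the quoted net load of 1.005 - j3.09 ohm takes a series inductor
print(f"net -j3.09 ohm -> series L = {series_ind_to_cancel(3.09) * 1e9:.1f} nH")
```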
OBJECTIVE: To develop a semiconductor laser giving an output power greater than 10 mW in the middle ultraviolet (UV) region, with a center wavelength tolerance of plus or minus 10 nm and with good reliability. DESCRIPTION: A compact, room temperature semiconductor laser diode emitting in the mid-UV region is needed for testing sensors within a hardware-in-the-loop simulation environment, and for other applications such as remote sensing and short-range non-line-of-sight communication. Existing UV lasers use nonlinear optical effects. This makes UV sources bulky and fragile, imposes significant power supply requirements, and provides limited UV wavelength options. A fieldable semiconductor laser in the mid-UV regime will provide the critical path for testing future sensors in a hardware-in-the-loop simulation environment. PHASE I: A detailed analysis of the proposed approach followed by a complete design is required. The design shall be made at one mid-UV wavelength and will discuss applicability to other mid-UV wavelengths. The contractor shall deliver a detailed report on the analysis, results, conclusions, and a feasibility plan to address this effort. PHASE II: A compact prototype mid-UV laser will be produced and delivered to the Army. The delivered laser would produce a continuous output power of 10 mW at the middle ultraviolet wavelength with a center wavelength tolerance of plus or minus 10 nm, a far-field full-angle beam divergence of less than 2 mrad, and a beam diameter of less than 2 mm, within an operating temperature range of 0 to 40 degrees Celsius. Required Phase II deliverables will include a prototype, testing in the laboratory, and a final report. PHASE III: The follow-on work would significantly improve the performance, size, weight and power to enable development for commercial marketing. The technology developed under this effort will be transitioned to military and commercial applications.
OBJECTIVE: Develop a High Frequency (HF) Time Difference of Arrival (TDOA) radio geolocation remote sensor system that uses a physically small antenna. The HF remote sensor system will be capable of and effective at providing accurate geolocation coordinates for HF radios using the NVIS (Near Vertical Incidence Skywave) communication mode. DESCRIPTION: Geolocation of High Frequency (HF) radios using Near Vertical Incidence Skywave (NVIS) mode propagation, with a remote sensor system using the TDOA (Time Difference of Arrival) technique, is needed for providing force protection for an area of operations. Innovation is required in developing TDOA processing of HF NVIS signals. There are many challenges to be met and problems to be solved: selecting and verifying the same wave point on the received signal, accurately time stamping that same point on the wave, and developing an algorithm to process the time stamped signal to provide a line-of-bearing. Multiple lines-of-bearing must be processed to determine accurately the geolocation of the radio. The time stamped data must be processed with the uniqueness of the HF ground-wave taken into account. The system must isolate the ground-wave from the direct-wave and sky-wave. The primary requirement of this research task is to provide solutions to these challenges in the form of a low cost remote sensor system that provides persistent surveillance of an area to be monitored for extended periods of time. Research and development efforts have been completed in the past using aircraft as TDOA platforms to provide LOB (lines-of-bearing) on ground based HF emitters using direct-wave propagation. Much work on ground based systems has been completed in the past using very large antenna arrays to do single station location of HF skywave mode communications, but these are ineffective and not accurate against NVIS mode communications. 
Large DF antenna arrays have been used in single station location systems, but these are too large to be used as a deployable force protection ground based system. None of these approaches satisfies the requirements for deployable force protection and persistent surveillance for imminent threat warnings, or for detection and geolocation of HF interferers for spectrum management purposes. The capability to get LOB and geolocation of HF radios using NVIS communications links is desired for remote sensors providing force protection over the area of operations. Ground remote sensors are the best solution for area of operations deployable force protection. These remote sensors must be easy to deploy and low cost, given that installation in remote locations makes the sensor vulnerable to being lost. This research effort would use innovative control and data processing of a remote wideband spectrum surveillance receiver system with organic precision time stamping of RF events. The research would involve developing a processing system, by means of the TDOA technique applicable to HF NVIS communications, to process the data collected by the wideband receive system to locate HF transmitters. Ground based remote sensor geolocation of NVIS emitters using a TDOA algorithm is the primary research area. Manpower and support is a major factor in the research and design approach of the sensor system. Minimal personnel time to deploy, operate and maintain the system is a key goal for this sensor system. The sensor system must not be dependent on availability of commercial or generator power. It must use renewable power and be compatible with multiple power sources. PHASE I: Will consist of researching past approaches to using TDOA for geolocation of HF transmitters and providing a detailed design of a low cost, ground based, deployable HF remote sensor system. 
The sensor system must be easily deployed, use renewable power, and include a data processor system and software that can be easily integrated into current geolocation systems. The data format must be compatible with the deployable force protection situational awareness database system and other national database systems. The cost to build a demonstration system shall be provided, along with the estimated cost to build six field evaluation systems. PHASE II: Develop a HF TDOA remote sensor demonstration system based on the detailed design presented during Phase I. Identify a test range and set up the demonstration sensor system, collecting the data needed to determine the accuracy, effectiveness, and viability of the sensor system. An operational field test report is to be provided with data, analysis, and evaluation of the sensor system. The report shall provide a lifetime cost analysis of the sensor system and the manpower required to deploy and operate it. PHASE III: Fully develop the low cost, ground based, easily deployable HF TDOA remote sensor system. U.S. Army, DOD, FAA and FCC uses: deployable force protection, persistent surveillance, imminent threat warnings, remote sensor systems, spectrum management. Commercial uses: detection and location of HF communications interferers.
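The core TDOA geometry can be sketched in a toy 2-D setting: each pair of sensors time-stamping the same point on the waveform yields a range difference c * dt that constrains the emitter to a hyperbola, and the intersection locates it. The sensor layout and emitter position below are hypothetical, and a brute-force grid search stands in for the line-of-bearing processing the topic asks to be researched; real NVIS geolocation must also contend with ionospheric path delays this sketch ignores.

```python
import math

C = 299_792_458.0  # propagation speed, m/s

# Hypothetical layout: three ground sensors and an HF emitter, coordinates in m
sensors = [(0.0, 0.0), (30e3, 0.0), (0.0, 40e3)]
emitter = (12e3, 17e3)

def tof(p, q):
    """Time of flight between two points."""
    return math.dist(p, q) / C

# "Measured" TDOAs, referenced to sensor 0
meas = [tof(emitter, s) - tof(emitter, sensors[0]) for s in sensors[1:]]

def misfit(p):
    """Sum of squared differences between predicted and measured TDOAs at p."""
    return sum((tof(p, s) - tof(p, sensors[0]) - m) ** 2
               for s, m in zip(sensors[1:], meas))

# Brute-force grid search over a 40 km x 40 km area at 1 km spacing
grid = [(x * 1e3, y * 1e3) for x in range(0, 41) for y in range(0, 41)]
best = min(grid, key=misfit)
print("estimated emitter position:", best)
```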
OBJECTIVE: Research, develop and design a Digital Readout Integrated Circuit (DROIC) optimized for high performance cooled IRFPA technology. Elegant, innovative 2D readout solutions/designs that utilize standard silicon foundry processes are preferred; however, a 3D approach that shows high yield potential and reasonable cost to fabricate will be considered. DESCRIPTION: The IR industry's continual desire for larger format, smaller pixel size FPAs to achieve higher resolution and wider fields of view (FOVs), without sacrificing existing performance, has presented a tremendous challenge for today's ROIC technology. Today, the vast majority of ROIC designs are still analog in the sense that a large integration capacitor in the unit pixel is utilized to integrate the detector photo current and dark current. The capacitor must be large enough to allow for sufficiently long integration time to achieve the desired Noise Equivalent Delta Temperature (NEDT) and also not saturate at higher background temperatures. However, by moving to smaller pixels, it becomes increasingly harder to achieve the capacitor sizes, or the well capacity in the pixel, needed to maintain the sensitivity and dynamic range requirement. This topic seeks to advance the performance of cooled IRFPA technology through innovative investigation and development of DROICs that could meet the following objectives: the readout will be large format (~1Kx1K), small pixel pitch (<12 um) and shall be designed to exhibit high injection efficiency, low noise, low power dissipation (<150 mW @ 60 Hz), on-chip A/D conversion with >20 bits of dynamic range, an effective well capacity greater than 500 million electrons, and non-linearity <0.1%. The readout shall also be capable of operating at a >1 kHz frame rate. PHASE I: Investigate, research and design a digital readout architecture optimized for large format, small pixel pitch, high performance IRFPAs through the use of modeling, analysis, empirical testing or construction. 
Innovative 2D readout solutions/designs that utilize standard silicon foundry processes are preferred. Establish a working relationship with an IR detector vendor to acquire IR detector arrays (such as SLS, QWIP, MCT, InSb) for a possible Phase II effort. PHASE II: Using the results of Phase I, design, develop and fabricate the DROIC with an objective of <12 um pixel pitch in a 1Kx1K format. To demonstrate performance of the DROIC, hybridize (mate) the DROIC to a detector array and evaluate the performance of the IRFPA through lab characterization. Develop and fabricate camera electronics to image the Infrared Focal Plane Array (IRFPA). Deliver the imaging system/camera to the government. PHASE III: Transition the DROIC technology to the IRFPA industry. Military applications include high performance FLIR imagers, compressive sensing, and hyperspectral imaging. The commercialization of this technology includes night driving aids, search and rescue, security, border patrol, and firefighting.
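The well-capacity and dynamic-range objectives above can be put into numbers with generic shot-noise arithmetic (an illustration, not part of the solicitation): a 500-million-electron effective well is far beyond what an analog capacitor at <12 um pitch can store, which is the motivation for accumulating counts digitally in the pixel, and at 20 bits one ADC count corresponds to only a few hundred electrons.

```python
import math

WELL = 500e6          # effective well capacity objective from the topic, electrons
BITS = 20             # on-chip A/D dynamic-range objective, bits

shot_noise = math.sqrt(WELL)                  # shot-noise floor, electrons rms
snr_db = 20 * math.log10(WELL / shot_noise)   # shot-noise-limited dynamic range
lsb = WELL / 2 ** BITS                        # electrons represented per ADC count

print(f"shot noise      : {shot_noise:8.0f} e- rms")
print(f"SNR (shot limit): {snr_db:8.1f} dB")
print(f"LSB @ {BITS} bits : {lsb:8.1f} e-/count")
```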
OBJECTIVE: Design, integrate and build a prototype see-through, head-borne Dismounted Soldier Display capable of wirelessly receiving and displaying high definition video and situational awareness information, utilizing state of the art display technologies. DESCRIPTION: Commercial and military products are available and emerging that include some of these capabilities, but none that address all three key areas of this proposal simultaneously: wireless receipt of a video source, high definition video display, and a see-through display. Advances in commercial smart phones, combined with results from research into two areas - micro-displays and wireless video transmission - are rapidly enabling a low cost, common input wireless wearable display. Current military fielded systems require a Soldier to either interrupt their current actions by looking at a handheld device, or raise their weapon to look through a device, taking up valuable time and potentially causing an escalation of force. Both are undesirable actions, increasing workload and creating hostile situations. A common input wireless display for dismounted Soldier applications would be a capability enabler. It would allow the presentation of Mission Command information such as situational awareness cueing, tactical maps, or sensor imagery. Additionally, video from other Soldier mounted sensors such as weapon sights, handheld targeting systems and sensor data from Unmanned Aircraft Systems (UAS) could be displayed. Wireless transmission of this video is important for remote video sources and for human factors considerations, as tethered solutions limit mobility and create snag hazards. As more capability gaps are addressed, more equipment will be available for the Soldier to utilize, so much so that information overload can occur. A display with common input can allow the Soldier to switch inputs quickly, without having to switch between systems in their hands. 
There has been considerable research into Heads-Up Display (HUD) technology spanning decades, primarily in the aviation community. Within the aviation community, HUD devices are either fixed-frame or helmet mounted. Fielded systems relevant to the Soldier include NAVAID, AN/AVS-7(V), Land Warrior, Nett Warrior, and accessory devices such as the HTWS head mounted display (HMD). These systems provide different capabilities, such as navigational assistance, piloting information, map display, and sensor data in various look-up, see-through, and occluded configurations. Accessory displays, such as the Tac-Eye or Red-I, are available for military equipment. Relevant commercial systems include a multitude of portable multimedia players integrated into eyewear, allowing for discreet, occluded, wired viewing of video at various display resolutions. These, however, are wired solutions, and require physically unplugging and changing inputs to the system. Recently, products have come onto the commercial market such as low resolution, GPS enabled ski goggles, capable of displaying performance statistics, but not video, in a look-down configuration. Combining these capabilities with Point of View (POV) sports cameras, products such as the Recon Jet exist: a glasses-mounted, look-down, occluded display capable of displaying video. Other products include Google Glass, marketed as a smart phone accessory in a look-up, transparent configuration, able to display video from a nearby smart phone. A wireless, high-definition, see-through display would be a capability enabler, providing a platform for weapon mounted sensors to stream video, UAS data, mapping and navigation, and situational awareness cueing such as targeting, gunshot detection, and Rapid Target Acquisition (RTA). In the far future, a common display platform could de-couple the requirement for every system used by the Soldier to include a display, potentially reducing costs across systems.
Similar to how a modern television functions, a Soldier would need simply to change the input to their common display. PHASE I: Develop a detailed design of the proposed Dismounted Soldier Display with Wireless input (DSDW). Perform a tradeoff study of candidate configurations (including the specific see-through display technology) and components, and identify the best solution in terms of SWaP and performance. The final report shall also provide an estimate of the display's cost, size, weight, and power consumption. Innovative mounting techniques to eyewear or the helmet are encouraged. Requirements include the following: 1. DSDW shall be compatible with modern military communications helmets, such as the Advanced Combat Helmet (ACH) or Enhanced Combat Helmet (ECH). 2. DSDW shall be compatible with protective masks, such as the Joint Service General Purpose Mask (JSGPM) or similar. 3. DSDW shall be compatible with the Ballistic Laser Protective Spectacles (BLPS) [MIL-PRF-44366B], Spectacles Special Protective Eyewear Cylindrical System (SPECS) [MIL-PRF-31013] and items from the Authorized Protective Eyewear List (APEL) [PEO Soldier APEL]. 4. DSDW shall not impede the use of PVS7 Night Vision Goggles (NVG), PVS14 NVG or PSQ20 Enhanced Night Vision Goggle (ENVG), and shall be able to be worn simultaneously with a pair of NVGs. 5. The wireless video protocol and interface will be determined as part of this effort. 6. Source video will be digital only. 7. The DSDW shall not exceed 0.5 lb (threshold), 0.1 lb (objective) including batteries. 8. DSDW should be able to support an 8 hour mission without a battery change. 9. Batteries shall be easily replaced, and commercial batteries are preferred. 10. DSDW shall have the capability to be used by both, or either, eye. 11. Latency between source video and displayed video shall be 1 frame maximum. 12. DSDW shall be functional in both day and night missions. 13.
DSDW shall have variable display brightness to allow viewing in ambient illumination conditions from bright sunshine to total darkness without degrading system performance. 14. Display resolution shall be 1920H x 1080V pixels minimum. 15. Display frame rate shall be 30 Hz (capable of 60 Hz and greater desired). 16. DSDW shall be capable of displaying full color video. 17. Bandwidth required to support HD video is in excess of 186 MB/sec. 18. Exit pupil shall be greater than 18 mm. 19. Eye relief shall be compatible with protective eyewear from Requirement 3. 20. DSDW must not emit noise that is detectable in any direction within five meters (audio security). 21. DSDW must not emit light in low light conditions that is detectable by another user within five meters, or there must be an acceptable approach for light security. 22. Distance from transmitter to DSDW shall not exceed 3 meters. PHASE II: Finalize design configurations and interfaces with a Critical Design Review, and integrate wireless display prototypes (2 minimum) for Soldier demonstration and evaluation. Information and design approaches from the SBIR effort(s) will support Army Research and Development of a Common Wireless Display for Dismounted Soldier applications. PHASE III: A wireless interface for the display of high resolution video and sensor information has multiple military, law enforcement, and civilian applications: for example, as accessories to mobile smart phones as augmented reality displays, for sporting related activities to display performance information, and as replacements or accessories for laptop, tablet and computer displays, video gaming, and wireless home video transmission.
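The bandwidth figure in requirement 17 is consistent with uncompressed video at the minimum display specification of requirements 14-16. The arithmetic below assumes 24-bit color (3 bytes per pixel), which is an assumption on our part since requirement 16 specifies only "full color video":

```python
# Uncompressed video bandwidth implied by requirements 14-16.
# 24-bit color depth (3 bytes/pixel) is assumed, not stated in the topic.
width, height = 1920, 1080   # requirement 14: minimum resolution
frame_rate = 30              # requirement 15: minimum frame rate (Hz)
bytes_per_pixel = 3          # assumed 24-bit full color

bytes_per_frame = width * height * bytes_per_pixel  # 6,220,800 bytes
bytes_per_sec = bytes_per_frame * frame_rate        # 186,624,000 bytes/s

print(bytes_per_sec / 1e6)   # roughly 186.6 MB/sec, matching requirement 17
```

A practical wireless link would compress the stream to a small fraction of this rate; the uncompressed figure sets the upper bound the requirement cites.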
OBJECTIVE: Design and develop techniques to detect emplaced roadside EFPs using vehicle mounted forward-looking active or passive sensor technologies. DESCRIPTION: The roadside environment is often a less homogeneous background than that of the adjacent road. Roadways are generally clear of surface clutter but may have varying degrees of subsurface strata. Roadside environments may contain a variety of roadway infrastructure, including curbs, signs and sidewalks, as well as unstructured terrain including vegetation, plants, trees and debris. EFP weapons are often emplaced and camouflaged within the cluttered roadside environment. Detection of these threats is the focus of this SBIR topic. PHASE I: The Phase I goal is for a vendor to utilize modeled/simulated data or government furnished data [1, 2] of EFPs to demonstrate emerging analytical techniques and concepts that extract features unique to EFP targets. The EFP feature concepts and techniques (e.g., range, intensity, texture, RCS) must be leveraged in Phase II, and must be documented for and demonstrated to government Subject Matter Experts. Fully autonomous algorithms are not required during Phase I. The Phase I final report must contain a full description of the EFP surrogate target(s) and sensor data, along with a description of and rationale for the features chosen and identified, taking into consideration concealing foliage and camouflage. The report must also provide recommendations of feature(s) for further investigation. PHASE II: The Phase II goal is to prototype autonomous algorithms for detecting and tracking EFPs. These algorithms must be implemented with either the target or the sensor in transit. The techniques/algorithms used and the sensor data will be provided to the Government for evaluation. The vendor's Phase II final report must include algorithm performance estimates for the terrain and vegetation conditions resident within the datasets. The report must also include recommendations for algorithm improvements.
PHASE III/CPP: The Phase III goal is to further develop the techniques and algorithms from the Phase II effort to a mature state (proof of technological feasibility), such that they can be implemented in a real time detection system with potential fielding through the normal DoD acquisition process. Products might lead to enhanced commercial vehicle-mounted obstacle-avoidance scanning systems for automobiles, or possibly to track ballast and throughway inspection systems for trains on railways. DoD applications might include sensor technology for detection of a wide range of side attack threat munitions.
OBJECTIVE: Develop a framework for a secure, standards based Attribute-Based Access Control (ABAC) solution that is capable of dynamically redacting and filtering data within the DIB Query Service (1.3 and later) SOAP endpoint and is interoperable with Simple Object Access Protocol (SOAP) Dial-Tone, Distributed Common Ground System (DCGS) Directory Information Base (DIB), and Distributed Common Ground System-Army (DCGS-A) architectures. DESCRIPTION: The current DCGS-A systems were built to provide best in class data services at the time and are in need of architecture enhancements to support current guidance. The currently deployed DIB systems lack identity and attribute awareness, leaving DCGS-A with systems that have constricted security boundaries. These boundaries impair a warfighter's ability to use information resources beyond the user's immediate visibility, awareness, or access. To comply with Intelligence Community (IC) Directives, DCGS-A requires the capability to redact and filter federated data within the DIB (1.3 and later) Query Service SOAP endpoint, in a manner that allows the re-use of existing deployed architectures to the greatest extent possible and supports information-sharing efforts, actionable intelligence, and use of new and emerging IT technologies (e.g. cloud and shared computing services). In the fielded system, it is currently not possible to redact or filter information accessed within the DIB Query Service SOAP endpoint. A standards-based ABAC solution would take in attributes and access control policies and return only data that the entity is allowed to see. Solution architecture for this effort will incorporate SOAP Dial-Tone, Distributed Common Ground/Surface System (DCGS) Multi-Service Execution Team (MET) Office (DMO) DIB architectures, and DCGS-A's fine-grained Attribute-Based Access Control (ABAC) mechanisms.
This solution will address Intelligence Community Directive 501 (ICD 501), "Discovery and Dissemination or Retrieval of Information within the Intelligence Community"; ICD 503, "Intelligence Community Information Technology Systems Security Risk Management, Certification and Accreditation"; and other relevant DoD/IC guidance and net centric requirements. The solution will leverage IC security markings maintained by the Office of the Director of National Intelligence (ODNI)/Controlled Access Program Coordination Office (CAPCO) and the XML Data Encoding Specification for Information Security Marking Metadata V9 (ISM.XML.V9), 17 July 2012. Previous efforts to be leveraged include DMO's DIB 2.0 PL3 certification and the DMO DIB 1.3 Redaction demonstration to support architecture and system development design goals. PHASE I: Prepare a feasibility study for a framework solution that can redact and filter data elements for Product Retrieval and Dissemination within the SOAP-based DIB (1.3 and later) Query Service, accessible through the DCGS-A ABAC architecture. This framework will support Special Operations Forces (SOF) and Army (513), with the Army as the producer node. The attribute store will be set up in the consumer node. The Army node will set up a trust between the consumer and producer Secure Token Service (STS). PHASE II: Using the resulting materials and/or designs from Phase I, develop an Integrated Master Schedule with resource allocation and assemble a prototype to demonstrate the feasibility and efficacy of the solution. Benchmark and identify production tasks, system throughput, scaling of capabilities, use of identity, policies and attributes, management of policies and attributes, and auditing for production. Use the resulting prototype to support an Interoperability Demonstration Pilot. PHASE III: Operationalize the dissemination of the solution within DCGS-A and the DIB. Prepare the roadmap to guide related efforts and support accreditation.
DUAL USE COMMERCIALIZATION: Military Application: Transition the capability into the current ABAC-based solution to support secure near real-time collaboration with DCGS-A and other entities. Commercial Application: Companies with a need to protect sensitive data while collaborating in interoperable environments, including healthcare, banking, and other industries.
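The core behavior this topic asks for - take in subject attributes and access control policies, then return only the data the entity is allowed to see - can be illustrated in miniature. The sketch below is purely illustrative: the classification levels, attribute names, and record shape are hypothetical stand-ins, and a fielded solution would evaluate IC ISM security markings against policy inside the SOAP Query Service response path rather than on Python dictionaries.

```python
# Minimal illustrative ABAC filter/redaction sketch (hypothetical model).
# Each record and each field carries a security marking; a subject's
# attributes must "dominate" a marking for the data to be released.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def permitted(marking, subject):
    """True if the subject's attributes dominate the marking."""
    return (LEVELS[subject["clearance"]] >= LEVELS[marking["level"]]
            and set(marking.get("compartments", []))
                <= set(subject.get("compartments", [])))

def filter_and_redact(records, subject):
    """Drop records the subject may not see; redact disallowed fields."""
    released = []
    for rec in records:
        if not permitted(rec["marking"], subject):
            continue  # filtering: whole record withheld
        clean = dict(rec)
        clean["fields"] = {k: v for k, v in rec["fields"].items()
                           if permitted(v["marking"], subject)}  # redaction
        released.append(clean)
    return released
```

For example, a SECRET-cleared subject querying a result set containing one SECRET record with a TOP SECRET field and one wholly TOP SECRET record would receive only the first record, with the TOP SECRET field redacted.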
OBJECTIVE: The objective is to develop a scanning lidar to measure the spatial evolution of dense obscurant clouds (one-way transmission of 0.25%) with high temporal and spatial resolution. The system should be capable of measuring an obscurant concentration point cloud contained in a 10x10x10 meter measurement volume with a sample spacing of 1/5 meter and a total 3D cloud update rate of 1 Hz. This measurement scenario supports the development of advanced obscurants and methods to disseminate them effectively. DESCRIPTION: Smoke and obscurants play a crucial role in protecting the Warfighter by decreasing the electromagnetic energy available for the functioning of sensors, seekers, trackers, optical enhancement devices and the human eye. Recent advances in materials science now enable the production of precisely engineered obscurants with nanometer level control over particle size and shape. Numerical modeling predicts that order of magnitude increases over current performance levels are possible if high aspect-ratio conductive particles can be effectively disseminated as an unagglomerated aerosol cloud. The high efficiency of these materials makes field evaluation a challenge. One dissemination mechanism packages the material into a grenade and functions one or more units for personal, vehicle or even wide area screening. During the first few seconds of an event, the obscurant cloud is optically dense and rapidly evolving. A non-invasive means to determine concentration over time and space would be an invaluable tool for obscurant development. Lidar systems have been used successfully to characterize obscurant clouds (Uthe) during military obscurant testing. While these systems did not have the spatial resolution needed for current testing, lessons learned from evaluating dense obscurant clouds and multiple scattering are relevant for this effort.
More recently, scanning lidar systems have been proposed for situational awareness during degraded visual environment (DVE) scenarios (Xiaoying). Obscurants and Chemical/Biological clouds are aerosols. Lidars can be used for detection and evaluation of any aerosol cloud. The obscurant clouds are a more difficult case because they are optically dense. This is the reason for the topic: no lidar has been developed with the capability to penetrate an obscurant aerosol and map the density profile of the cloud. The high resolution required along the line of sight requires a high speed detector and digitizer. Currently available electronics do not have the dynamic range to measure the round trip transmission through the obscurant cloud. This challenge could be mitigated by transmitting multiple pulses along each line of sight through the cloud; however, the need to scan a 3D point cloud limits the number of pulses that can be combined for each line of sight. Using multiple apertures or changing other lidar parameters over a small number of pulses are possible methods to solve this problem. Another method for cloud penetration is to use a portion of the electromagnetic spectrum where the material is less efficient. Unfortunately, obscurant materials are typically broadband and are being developed to cover all portions of the electromagnetic spectrum of military interest. Thus, other techniques will need to be developed to achieve the required dynamic range. PHASE I: Design a lidar system that maps the 3D concentration of an evolving obscurant cloud in a 10x10x10 meter volume. The design should address the dynamic range needed to measure path transmittance through the cloud of 0.25% or less while maintaining high spatial resolution inside the cloud. PHASE II: Fabricate, test and demonstrate a lidar system that meets the specifications described in Phase I. This prototype system will take the laboratory designed system and hardware and package it into a fieldable unit.
Housing for this unit should be weatherproof to withstand conditions in an outdoor environment. In addition to fabricating a prototype unit, the contractor will thoroughly test it alongside Army personnel at the Army's Edgewood Chemical and Biological Center (ECBC) test range. Standard military smokes and obscurants will be used to evaluate the prototype instrument's performance. A final report shall be produced by the contractor using data provided by the Army on the various aerosol clouds dispensed. Prepare a cost estimate for building the system in quantity. PHASE III: The lidar system developed in this program can be used by research and development organizations and test ranges to evaluate obscurant munitions and devices. It has application in other DoD interest areas including chemical and biological detection, aircraft landing aids, meteorology and pollution control. Industrial applications could be very similar to these.
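The dynamic-range challenge described above can be put in rough numbers. Assuming the return from the far side of the cloud traverses the cloud twice, so that received power scales with the square of the one-way transmission, a sketch of the calculation is:

```python
import math

one_way_T = 0.0025             # 0.25% one-way transmission (topic objective)
round_trip_T = one_way_T ** 2  # far-edge return passes through the cloud twice

# Receiver dynamic range needed to see the far-edge return relative to an
# unattenuated near-edge return, expressed as a power ratio in decibels.
dynamic_range_db = -10 * math.log10(round_trip_T)
print(round(dynamic_range_db, 1))  # about 52 dB
```

This simplified estimate ignores range dependence and multiple scattering, but it makes concrete why single-pulse detection is difficult and why the topic suggests combining pulses or using multiple apertures.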
OBJECTIVE: To develop a concept which produces microshear to efficiently separate and disseminate fine powders that are densely packed within a container. Concepts should address material agglomeration issues that arise with optimized packing densities. A systematic study of the forces necessary to overcome binding effects of the materials could be developed along with mathematical modeling to support separation concepts. This problem is compounded by the fact that in general, high packing densities are employed to maximize the yield of obscurant devices. Improved methods to separate these highly compressed materials are needed. DESCRIPTION: Smoke and obscurants play a crucial role in protecting the Warfighter by decreasing the electromagnetic energy available for the functioning of sensors, seekers, trackers, optical enhancement devices and the human eye. Recent advances in materials science now enable the production of precisely engineered obscurants with nanometer level control over particle size and shape. Numerical modeling predicts that order of magnitude increases over current performance levels are possible if high aspect-ratio conductive particles can be effectively disseminated as an unagglomerated aerosol cloud. Recent history demonstrates that when these designer particles are packed into a center-burster grenade configuration, obscuration performance can decrease by more than an order of magnitude, canceling the performance increases inherent in the particles. There is clearly no mechanism in the present design that acts to separate particles from one another. That is the need. In the classic center-burster configuration used by the Army for grenades, it is speculated that the shock wave provides no aid in deagglomeration (and may, in fact, contribute to agglomeration) because it passes through the confined material before aerosolization begins. The container is ruptured by the shock wave, but only after it has passed through the fill material. 
Consequently, an entirely new concept design may be required to take advantage of the energy contained in the shock wave and/or of the compact storage of energy in explosives of any sort. The modeling of explosives and resultant shock waves is still in development. Classic treatment of the phenomenon does not take into account the extreme temperatures, pressures and velocities, resulting in erroneous calculations. More accurate methods should be considered. PHASE I: Demonstrate, with modeling or other means, a concept that will create forces to separate anisotropic particles when disseminated. Demonstrate by electronic means or in the laboratory how the concept works. Estimate the separation efficiency possible. The Army is interested in a device the size of a Coke can, although any size can be used to demonstrate the concept in Phase I. Anisotropic powders to consider for demonstration purposes include fine metal or graphite flakes. PHASE II: Prepare devices that will demonstrate the mechanism. Provide 5 such items to ECBC for measurement of the separation efficiency achieved. In Phase II, a design of a manufacturing process to commercialize the concept should be developed. Present grenades used for disseminating fine powders achieve roughly a 30% packing efficiency and a 40% dissemination efficiency, and can reduce the extinction efficiency of the material (through agglomeration and destruction or modification of the particles) by 50%. This results in a total efficiency of about 6%. The goal of this effort should be to double that number. Consider how the design would change to effectively disseminate microfibers of graphite or metal. PHASE III: The techniques developed in this program can be integrated into current and future military obscurant applications. Improved grenades and other munitions are needed to reduce the current logistics burden of countermeasures to protect the soldier and his equipment.
This technology could have application in other DoD interest areas including high explosives, fuel/air explosives and decontamination. Improved separation techniques can be beneficial for all powdered materials in the metallurgy, ceramic, pharmaceutical and fuel industries. Industrial applications could include metal hardening, metal cladding, seismic exploration, destruction of hazardous waste and welding.
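The roughly 6% total efficiency quoted for present grenades is simply the product of the three stage efficiencies given in Phase II. A one-line check, including the stated goal of doubling the total:

```python
packing = 0.30              # packing efficiency of present grenades
dissemination = 0.40        # dissemination efficiency
extinction_retained = 0.50  # fraction of extinction efficiency surviving
                            # agglomeration/destruction of the particles

total = packing * dissemination * extinction_retained
print(round(total, 2))      # 0.06 -> about 6% total efficiency
print(round(2 * total, 2))  # 0.12 -> the doubled goal, about 12%
```

Because the stages multiply, a proposer can hit the doubled target by improving any one stage alone (e.g., raising dissemination efficiency from 40% to 80%) or by smaller gains across several stages.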
OBJECTIVE: Development of a novel platform for the detection/diagnosis of biological agents, harnessing the specificity of attachment of phage to bacteria and the sensitivity of Quantum Dot (QD) nanocrystals. DESCRIPTION: An ideal detection/diagnosis technology/platform would have the following desirable properties: the potential for rapid, highly sensitive (detection at very low concentrations of the agent), highly specific (high true-positive/true-negative and low false-positive/false-negative rates) detection of agents of interest with low background interference. Also, the platform to conduct the assays should be user-friendly and portable, and include a point of care or field device that is capable of detecting multiple agents simultaneously in a high throughput manner in any matrix type (clinical or environmental). Additionally, it is highly desirable to have a flexible technology amenable to a plug and play format to develop new assays rapidly; above all, the technology should be inexpensive to manufacture and operate. Currently, there are two general paradigms in bioagent detection: protein based immunodiagnostics and nucleic acid based PCR diagnostics. Although numerous current technologies address many of the desired characteristics, none of the existing technologies/platforms meets all these requirements. For example, some immunodiagnostic technologies based on Lateral Flow Immunoassay are rapid and inexpensive, user-friendly and field deployable, but lack sensitivity, have high background issues and almost always need confirmatory assays from another technology. The nucleic acid based assays are rapid, easy to develop and relatively user-friendly and field deployable, but lack multiplexing and high throughput capability and may have background (false positive) issues and interference issues with some matrix types.
The purpose of this topic is to explore the potential of a third paradigm, phage based diagnostics, to address many if not all of the limitations of the currently fielded assays and instruments, and to leverage existing platforms. The general concept is to harness the specificity (natural or engineered) of attachment of phages or phage proteins to bioagents, in addition to exploiting the sensitivity of Quantum Dots, for developing a fluorescent detection technology that can be used in currently fielded detection platforms. Phages are also useful for live agent detection, unlike the current approaches that detect both live and inactive agents. The proof of concept for phage Quantum Dot technology has been established in some phage-bacterial systems [1, 2]. The specificity of phages to their cognate bacterial species or strain is fairly well established. It is also conceivable to engineer specificity for any type of agent onto phages using affinity panning [3]. Quantum Dots are superior to many other fluorophores available today. They have a broad absorption spectrum and a narrow emission spectrum that is highly tunable based on size; hence, multiplexed detection of agents is possible. Quantum Dots can also be synthesized to carry specific functional groups or moieties on their surface for binding specific agents. This topic seeks technologies to demonstrate the utility of a phage-QD assay in a platform that can be readily fielded. Some of the proposed paths are: manipulate whole phage or phage structures such as tail fibers, phage tail fiber-like bacteriocins [5], or phage proteins such as the cell wall binding domains of endolysins [6] (modified to chemically link a functional group such as biotin, or genetically engineered to express a functional moiety such as a biotin binding peptide on the surface), and use Quantum Dots for detection upon binding to specific bacteria.
Engineering a universal phage with binding specificities in combination with QD for detection of any agent type (spore, vegetative cells or protein) is also desired. PHASE I: This phase will demonstrate the feasibility of the phage-QD assay in a user-friendly, fieldable fluorescent detection system. Milestones and deliverables for Phase I: (1) demonstrate a phage/phage protein-bacterium-QD detection assay for a biodefense relevant pathogen (e.g., B. anthracis vegetative cells or spores, Y. pestis, etc.) on a fieldable platform; (2) demonstrate multiplex capability; (3) delineate assay sensitivity and LoD in buffer and in a clinical matrix; (4) produce a summary report of the results that demonstrates a comparative evaluation, establishes suitability of the technology, and provides research findings with supplemental documentation to justify selection by providing insights into expanding the proposed technology to other agents of interest. A clear description of a path forward for expanding the technology to other agents of interest is required. PHASE II: This phase will involve expanding the work performed in Phase I to include multiple agents and developing a 510(k) package for FDA licensure of the platform technology. Milestones and deliverables for Phase II: (1) expand the phage-QD detection concept for detection of viral agents and proteins such as toxins; (2) develop inclusivity/exclusivity testing of a phage-bacterial pair; (3) develop and establish LoD for the assays; (4) conduct clinical trials with contrived samples or real specimens to establish sensitivity/specificity; (5) prepare initial packages for submission to FDA. PHASE III: This phase will expand the work in Phase II by integrating, commercializing and fielding the platform. Milestones and deliverables for Phase III: Extend the work accomplished in Phase II.
Focus will be on: (1) integrating the Phase II assays with the fieldable platform; (2) demonstrating the multiplexing and high-throughput applications in a field/point of care setting; (3) establishing quantitative parameters of detection; and (4) describing a commercialization strategy of the technology for commercial healthcare settings and/or transition to military operations and acquisitions. PHASE III Dual Use Applications: Further research and development during Phase III efforts will be directed towards expanding the panel of phages for detection of all bacterial and viral pathogens of interest to the medical community beyond DoD, using the plug and play format afforded by this technology. Inexpensive, portable fluorescence readers with the ability to exceed visual detection limits and to document test results would find extensive use by first responders, in civilian medical facilities, in the public health field, and for point of care diagnostics in remote regions throughout the world.
OBJECTIVE: Develop innovative approaches to substantially improve the serum half-life of protective monoclonal antibodies for prophylaxis and/or therapy against ricin intoxication. DESCRIPTION: Ricin, from the castor oil plant Ricinus communis, is a highly toxic, naturally occurring protein. A dose the size of a few grains of table salt can kill an adult human. Although estimates vary, the LD50 of ricin is approximately 25 micrograms per kilogram in humans if exposure is from injection or inhalation. In light of the abundant availability and relatively simple methods required to obtain ricin from castor oil plants, this toxin is considered a biological warfare threat agent. Moreover, the Department of Defense identified a capability gap to protect against exposure to the ricin toxin in the Initial Capabilities Document for Joint Medical Biological Warfare Agent Prophylaxes, approved 14 September 2004. Currently, there are no licensed, specific medical prophylactic or therapeutic measures against the ricin toxin. To that end, the goal of this SBIR topic is to solicit innovative approaches to improving the serum half-life of protective monoclonal antibodies (mAbs) for prophylaxis and/or therapy against ricin intoxication. The approaches may include, but are not restricted to, methods to reduce mAb degradation, use of genetically derived immunoglobulin glycoforms, methods to reduce hepatic turnover of mAbs, methods to create slow-release mAb depots, and combinations thereof. The specific improvements in duration should be determined using baseline values, and ideally technologies will be developed to improve mAb duration 2-fold or more. PHASE I: Phase I studies will focus on development of proof-of-concept in vitro and in vivo models and prototyping systems to modify ricin-specific mAbs.
PHASE II: Studies in Phase II will include small scale process development and production of material for in vivo potency and pharmacokinetic/pharmacodynamic studies in small animals, and will culminate in a pharmacokinetic/pharmacodynamic study in a non-human primate (NHP) model. PHASE III: Studies in this phase will focus on scale-up manufacturing and toxicity and potency assessment in appropriate animal models, followed by cGMP manufacturing, pivotal efficacy studies in NHPs and clinical trials. PHASE III Dual Use Applications: It is anticipated that successful development of a platform strategy to improve mAb half-life will have dual use, in both improved mAb countermeasures for the warfighter and improved therapeutic mAbs for use as pharmaceuticals.
OBJECTIVE: Develop novel technological methods for oral delivery of live adenovirus compared to the current FDA-licensed product, "Adenovirus Type 4 and Type 7 Vaccine, Live, Oral" (Barr Labs, Inc.). The new methods should provide material benefits in terms of production, and comparable safety and immunogenicity. DESCRIPTION: Adenoviruses are a frequent cause of epidemic acute respiratory disease (ARD) in military basic recruits and pose a significant threat to military readiness, especially during times of large mobilization. ARD is a debilitating febrile disease that frequently results in loss of training time, hospitalization and pneumonia and, on occasion, death. There is no FDA-licensed treatment option available for adenovirus. Before vaccines were available, adenovirus was isolated in 30-70% of recruits with ARD and 90% of cases of pneumonia. The vast majority of adenovirus infections in recruits are caused by types 4 and 7, although types 14 and 21 have also caused severe outbreaks. A live oral vaccine against adenovirus type 4 was developed in the 1960s and determined by clinical studies to be safe and effective in preventing ARD due to adenovirus type 4. Subsequently, a similar live oral adenovirus type 7 vaccine was developed. After extensive testing, these vaccines were approved by the FDA and manufactured by Wyeth Laboratories in 1980. The Wyeth adenovirus vaccines were very safe and effective in eliminating outbreaks of ARD in recruits. However, in 1995 Wyeth elected to cease production, and all DoD supplies were depleted by 1999. With the loss of the vaccine, adenovirus outbreaks and deaths in basic recruits returned to pre-vaccine levels. DoD contracted with Barr Laboratories to develop replacement live oral adenovirus vaccines. The new Barr adenovirus vaccines were demonstrated to be safe and effective and were licensed by the FDA in March 2011.
The vaccines are administered to all new basic recruits in the Army, Navy, Air Force, Marines, and Coast Guard. The Barr and Wyeth vaccines are intentionally very similar. The type 4 and 7 virus strains in both vaccines are identical, and both vaccines are prepared in WI-38 cell cultures. Both vaccines are similarly formulated as enteric-coated tablets containing either adenovirus type 4 or adenovirus type 7 (at least 32,000 tissue-culture infective doses per tablet). The vaccine is described as a "tablet within a tablet," with the inner core containing live virus and excipients and the outer tablet layer consisting of cellulose and other ingredients. Oral administration of the vaccine results in a self-limited, asymptomatic infection in the gastrointestinal tract and produces protective neutralizing antibodies in the serum. This topic is intended to enhance the manufacturing methods currently used in production (or "tableting") of the licensed adenovirus vaccine. The technology used to produce the tablets originates from the 1960s, and its continued long-term sustainability is of concern to DoD. The proposed solution should utilize novel technological methods and capitalize on advances in biotechnology and pharmaceutical production technologies. The proposed novel method will be suitable for production of live adenovirus vaccine for oral administration to healthy individuals at risk of acquiring adenovirus-associated acute respiratory disease. The new method will take advantage of reliable modern production methods; will be able to achieve licensure by the FDA; will provide clear benefits compared to current tableting methods, such as facility and equipment requirements, technical complexity, and robustness; and will ultimately yield vaccine that shares comparable virologic, safety, and immunologic characteristics with the licensed vaccine. 
Targets for improvement may include, but are not restricted to, drying, stabilization, and enteric protection of live virus for oral delivery. Ideally, the new vaccine will be reviewed by the FDA as "biosimilar" to the reference product, the Barr adenovirus vaccine, permitting expedited clinical development. PHASE I: Produce a conceptual design for developing, manufacturing, and testing the proposed solution. Produce a prototype vaccine of at least two adenovirus types (e.g., types 4 and 7) for the purpose of demonstrating feasibility of the proposed solution. Production methods and vaccine ingredients are not expected to be finalized at the end of this phase; GMP product is not required. Test(s) of feasibility may include modified disintegration and/or dissolution tests demonstrating the potential for pH-controlled release of adequate amounts of live adenovirus. Additional and/or alternative feasibility tests may be proposed. DoD may provide small quantities of the licensed vaccine for relevant non-human research conducted as part of this phase. Provide a comprehensive analysis of significant risks associated with the proposal, including manufacturing, technical, regulatory, cost, and performance risks. Provide a comprehensive report of all testing performed during this phase. PHASE II: Produce vaccine under GMP pilot production conditions, perform required nonclinical studies (such as microbial burden), and conduct an 8-week FDA phase 1 trial in healthy adults of at least two adenovirus types (e.g., types 4 and 7). Animal efficacy and reproductive toxicity studies are not required. Animal toxicology studies may be waived by the FDA if the product is deemed equivalent to the Barr vaccine. The results of the clinical trial will permit a clear comparison of the new and Barr vaccines in terms of fecal shedding, absence of spread beyond the gastrointestinal tract (i.e., no detectable vaccine virus in throat, blood, or urine), immune response, and preliminary safety. 
Vaccine shelf-life assessment will also be initiated during this phase. DoD may provide limited quantities of licensed vaccine required for testing proposed during this phase, including a clinical trial. Provide a comprehensive report of all testing performed during this phase, including a complete searchable dataset of the clinical trial. PHASE III: The overall goal of this effort is to modernize the production of vaccine tablets administered to all military basic recruits to prevent febrile respiratory illness due to adenovirus types 4 and 7. The technological innovations utilized in the modern vaccine will also permit more efficient development of new adenovirus vaccines comprised of different adenovirus types, such as types 14 or 21, should they be needed in the future by DoD. Modernization of tableting methods will also support commercialization of the product in other markets (e.g., South Korea and China have reported problems with adenovirus type 7) and derivative products. For instance, oral adenoviruses may be very useful and widely utilized for delivery of vaccine vectors for multiple other pathogens, gene therapy, and cancer therapy. The focus of this phase should be on further development and validation of the methods and product with the objective of obtaining FDA approval of the new vaccines with an indication for use in military personnel at risk of acquiring adenovirus infections.
OBJECTIVE: Develop an antibiotic clinical decision support system (CDSS) to improve treatment for critical care patients with severe infections. The system will provide recommendations to critical care providers in the intensive care unit (ICU) on antibiotic dosing based on multiple patient factors, including weight, infection source, infectious agent, cultures, susceptibilities, and prior treatments. The final system will include the ability to be configured to the unit's specific antibiotic regimens, characteristics, and antibiograms and will also integrate with hospital information technology (IT) infrastructure. DESCRIPTION: Many military casualties survive the first few days of burn and trauma resuscitation and surgeries only to fall victim to infection and sepsis. Intensive care for these casualties is complex, requires treatment of systemic conditions often involving multiple organ systems at a time, and is highly error-prone. Permanent morbidity or death results when the right care is not delivered or is delivered too late. Evidence-based best practices are published, yet they are not implemented in step-by-step detail at the bedside due to complexity. Severe sepsis and septic shock have a mortality rate near 30% (1). Recent data have shown that computer-based clinical decision support programs can greatly increase compliance with evidence-based best practices for sepsis, resulting in a 50% reduction in mortality (2), and for initial empiric antibiotic therapy (3). Targeted antibiotic therapy is important for effective treatment of the organism, for avoiding the highly toxic side effects of broad-spectrum antibiotics, and for avoiding increased pathogen resistance. However, the choice of antibiotics often requires changing after culture and susceptibility results are received. Burn casualties are particularly susceptible to multiple incidents of sepsis over a prolonged hospital stay. 
An antibiotic CDSS is needed to quickly process a myriad of factors in determining the optimal antibiotic therapy for a patient at any given time throughout the hospital stay. This system should take into account not only empiric antibiotic choices but also individualized dosages based on minimum inhibitory concentration (MIC) calculations. PHASE I: Develop an initial concept design and demonstrate elements of a CDSS software program providing specific antibiotic therapy guidance from initial diagnosis through discharge. Contractors should provide a basic software application that demonstrates the ability to process patient data and make recommendations on potential antibiotic therapies. Contractors should explore novel approaches to implementing antibiotic decision making as additional patient data are received. The CDSS should include broad-spectrum, narrow-spectrum, and targeted antibiotic therapy. The CDSS should provide new recommendations when susceptibility and culture results are received. The system should include MIC calculations. The contractor will conceptualize a graphical display showing the infection status and antibiotic therapy from initial infection diagnosis to discharge. PHASE II: The contractor will further develop and demonstrate the CDSS and will optimize bedside workflow. The contractor will demonstrate clinical data input via IT integration with the hospital IT system for at least one variable. The contractor will develop and demonstrate a graphical representation of infection status and antibiotic therapy from initial infection diagnosis to discharge. The successful clinical decision support program will allow physicians to adjust antibiotic therapy and will be easily adopted by bedside caregivers. PHASE III: The contractor will produce a working CDSS whose antibiotic decision rules can be updated by the medical director of a unit. 
The CDSS should gather most of the input laboratory data through IT integration. The CDSS should be configured so that multiple client computers, tablets, or smartphone devices can access the system simultaneously. The final CDSS must include evidence-based antibiotic therapy recommendations and an algorithm to optimize dosage for patients based on MIC calculations. Such a system could save lives by providing evidence-based, individualized, optimized antibiotic therapy for critical care patients throughout a hospital stay. Such a system should be of great commercial interest to all branches of the U.S. armed services as well as to civilian critical care professionals in multiple critical care units.
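As one illustration of the kind of MIC-based dosage calculation this topic calls for, the sketch below estimates a weight-based loading dose for a concentration-dependent antibiotic from a target peak-concentration-to-MIC ratio. The function name, the target ratio, and the volume-of-distribution figure are hypothetical placeholders for discussion, not clinical guidance or the solicited system's actual algorithm.

```python
def weight_based_dose_mg(mic_mg_per_l, vd_l_per_kg, weight_kg,
                         target_cmax_over_mic=10.0):
    """Illustrative loading-dose estimate: pick a dose whose predicted
    peak concentration (dose / total volume of distribution) meets a
    target Cmax/MIC ratio. All parameter values are placeholders."""
    target_cmax = target_cmax_over_mic * mic_mg_per_l   # desired peak, mg/L
    total_vd = vd_l_per_kg * weight_kg                  # distribution volume, L
    return target_cmax * total_vd                       # dose in mg

# e.g., MIC 1 mg/L, Vd 0.25 L/kg, 80 kg patient -> 200 mg
```

A deployed CDSS would of course layer renal function, prior levels, and susceptibility updates on top of any such kernel; this only shows why MIC enters the dosage arithmetic directly.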
OBJECTIVE: Develop a portable closed-loop burn resuscitation system to improve treatment for serious burn patients. The resuscitation system will provide optimized fluid therapy for a patient based on patient attributes as well as patient response to therapy. The final system will automate many traditional caregiver tasks so that a medic could operate the system with similar care results. The final system will also communicate complete resuscitation data to a central location. DESCRIPTION: Burns represent 5% of overall casualties, but 10% of potentially survivable deaths. Severely burned patients require significant intravenous resuscitation, often requiring 20 liters within a 24-hour period. Burn fluid therapy carries the risk of severe complications or mortality resulting from under-resuscitation (end-stage organ failure) or over-resuscitation (abdominal compartment syndrome). Fluid therapy requires adjustment at least hourly as the burn pathophysiology morphs during the resuscitation phase. Casualty evacuations (CASEVACs) are also performed over multiple hours, and caregivers available in the deployed setting often have limited training in and experience with burn care. The result is that many patients who arrive at definitive care facilities are over- or under-resuscitated. Clinical care tasks that involve significant cognitive workload, such as calculating an optimal fluid resuscitation dose hourly, are prone to mistakes and subsequent patient harm. Labor-intensive workload tasks also reduce the ability of a caregiver to provide adequate care to multiple patients simultaneously and make logistical placement of highly trained personnel difficult. The primary performance goal is to increase the average length of time between occasions when the medic needs to perform a technical or care task with the casualty. 
Current approaches utilize patient urinary output response as the primary feedback mechanism for adjusting fluid infusion rates; however, urine output is less useful, or not present, in patients with renal failure, high bladder pressure, or other physiological complications. Patients do not always respond to crystalloid therapy. A patient may need a secondary therapy, such as albumin or pressors, as an adjunct therapy. The decision of when to begin adjunct therapy, and at what dosage, is complex and difficult to optimize by guidelines or paper protocols. Detailed records of the resuscitation process, including how much of each fluid was given, and when, are often lost in the CASEVAC process. Multiple hand-offs from the forward surgical team to the combat support hospital to the critical care air transport to definitive care facilities result in lost records or missing data. It is important not only to retain the data, but to display the data graphically and easily communicate the data to a central location. A resuscitation system that optimizes and automates fluid therapy, includes an adjunct therapy for non-responders, includes a second patient response mechanism, and keeps and exports a complete record of the resuscitation process during CASEVAC will be of great value in improving treatment for severely burned military personnel. PHASE I: Develop and demonstrate a prototype portable burn resuscitation system which incorporates at least a urine monitoring device and an infusion pump. Conceptualize approaches to automating caregiver cognitive and physical tasks, including approaches to optimizing crystalloid resuscitation based on patient responses, to adjunct-therapy decisions, and to manual tasks. The contractor will conceptualize a graphical display that shows input variables and therapy settings in a meaningful way to caregivers. The contractor will identify a method for communicating data to a central location. 
The contractor will identify clinical and technological issues that would require fully automated care to disengage and require medic intervention. The contractor will create a plan to test or simulate each issue to gather data on the Mean Time Before Disengaging (MTBD) from fully automated mode. Furthermore, the contractor will identify a method for determining the MTBD of the system as a whole based on the MTBD data of the individual issues. PHASE II: The contractor will further develop the burn resuscitation system. The contractor will implement the best approaches from Phase I into hardware and software that optimizes and automates crystalloid resuscitation based on patient responses, provides an adjunct therapy, graphically displays the patient response variables and therapy settings, and reduces other cognitive and manual caregiver tasks. The contractor will demonstrate data export to a central location. Based on the MTBD plans created in Phase I, the contractor will perform tests and simulation studies, provide data on the MTBD for each clinical and technical issue, and then calculate the system MTBD. The performance goal will be an average MTBD of 45 minutes or longer for individual issues and a system MTBD of 20 minutes or longer. PHASE III: The contractor will validate and produce a working portable burn resuscitation system that optimizes and automates crystalloid fluid resuscitation based on patient responses, provides a therapy adjunct to crystalloid resuscitation, graphically displays patient responses and therapies over the resuscitation phase, and communicates data to a central location. The burn resuscitation system will have an MTBD of 20 minutes or longer. 
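One plausible method for rolling per-issue MTBD data up into a system MTBD, as this topic asks the contractor to define, treats each disengagement-triggering issue as an independent event source with a constant rate, so the system disengagement rate is the sum of the individual rates. This exponential-model assumption is the authors' to validate, not something the topic prescribes.

```python
def system_mtbd(issue_mtbds):
    """Combine per-issue MTBD values (same time unit, e.g. minutes) into
    a system MTBD, assuming each issue forces disengagement independently
    at a constant rate: system rate = sum of individual rates, so
    system MTBD = 1 / sum(1 / MTBD_i). An assumed reliability model."""
    return 1.0 / sum(1.0 / m for m in issue_mtbds)

# Three issues with MTBDs of 120, 90, and 60 minutes combine to a
# system MTBD of 360/13, roughly 27.7 minutes.
```

Note that under this model the system MTBD is always shorter than the shortest individual MTBD, which is why the topic's per-issue goal (45 minutes) sits well above its system goal (20 minutes).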
The final burn resuscitation system will optimize fluid therapy for patients, will identify and provide a second therapy for patients when crystalloid resuscitation is not adequately effective, and, importantly, will automate care with the primary goal of reducing the required frequency of caregiver interactions with the patient. Such a system will optimize care, will enable less-trained caregivers to provide adequate care of patients, and will enable caregivers to effectively take care of more patients simultaneously. Such a system should save lives, improve the quality of life for military and civilian patients, improve the CASEVAC process, and be of great commercial interest to all branches of the U.S. armed services and civilian burn care professionals. Validation of the system will be performed in accordance with FDA regulations related to the development and validation of a closed-loop medical device. The contractor will submit the developed system for FDA clearance for eventual use in a clinical environment.
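The urine-output feedback loop that this topic names as the primary adjustment mechanism can be sketched as a simple hourly rate controller. The target band and step size below are illustrative placeholders, not a clinical protocol, and a real closed-loop device would bound the rate and fall back to medic control when urine output is unreliable (the MTBD disengagement cases discussed above).

```python
def adjust_infusion_rate(rate_ml_hr, urine_ml_hr,
                         low=30.0, high=50.0, step=0.1):
    """Hourly crystalloid-rate adjustment driven by urine output, the
    primary feedback variable named in this topic. Target band (low/high,
    mL/hr) and proportional step are illustrative placeholders only."""
    if urine_ml_hr < low:              # output low: increase infusion rate
        return rate_ml_hr * (1.0 + step)
    if urine_ml_hr > high:             # output high: decrease infusion rate
        return rate_ml_hr * (1.0 - step)
    return rate_ml_hr                  # within target band: hold rate
```

Logging each (rate, urine output) pair from such a loop would also produce exactly the hourly resuscitation record the topic asks to retain and export during CASEVAC.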
OBJECTIVE: Develop a biocompatible dressing material for the controlled delivery of analgesic drugs to burn wounds. DESCRIPTION: Thousands of U.S. military personnel have suffered serious burn wounds and other injuries during Operations Iraqi Freedom (OIF) and Enduring Freedom (OEF), where burns have been identified as the primary cause of injury in 5% of all military personnel evacuated from those battlefields (1-2). Uncontrolled acute burn pain contributes to several sensory abnormalities, including the development of chronic pain (3-4). Burn patients report intense pain during procedures such as wound debridement, dressing changes, and strenuous physical and occupational therapy. In fact, procedural pain is the most common grievance reported by the burn population (5). Management of the intense pain that accompanies burn wounds relies heavily on systemically administered opioids, which produce many side effects including tolerance, hyperalgesia, hemodynamic instability, respiratory depression, and dependence (4). There is therefore a need to reduce the quantities of opioid analgesics administered to burn patients. This need could be met through development of a topical analgesic wound dressing for severe combat-associated burn injuries that would reduce or eliminate the need for systemic treatment with opioids. Such dressings would also serve to protect the wound from infection and thereby aid in wound healing. This request addresses one of the new USAMRMC SBIR topics for FY14, which falls within the goals of the Clinical and Rehabilitative Medicine Research Program: novel pharmaceutical approaches to alleviate the development of tolerance and physical dependence on opioids without diminishing opioids' analgesic effects. 
To further accelerate healing, active wound dressings have been developed that provide controlled local delivery of therapeutic agents, such as biological growth factors and antimicrobial agents, while keeping the wound surface moist, removing exudates, inhibiting bacterial invasion and allowing oxygen permeation. Active wound dressings would ideally maintain the local drug concentration at a constant optimum therapeutic dosage level from the moment of initial application until complete wound healing is achieved (i.e., zero-order kinetics). To be suitable for topical application to burns, the composite wound dressing materials must also be carefully selected for biocompatibility, mechanical strength and surface adhesion, and resistance to bacterial invasion so that clinicians can readily apply and remove them at appropriate times during therapy. Eventually the development of wound dressings that deliver combinations of analgesics and antimicrobials is planned. PHASE I: Identify and define a biocompatible material which can provide the controlled delivery of analgesic drugs. This material must be adaptable to a wound dressing format. Required Phase I deliverables will include determination of technical feasibility using appropriate in-vitro, and if possible in-vivo assays. Such assays would demonstrate the ability of a prototype material to act as a stable, biologically compatible depot for one or more of the opioid analgesics (morphine, methadone, fentanyl, meperidine, oxycodone), or other mainstay analgesics (ketamine, gabapentin) currently used for treatment of burn wounds in the military population. In addition, demonstration of relevant physical properties of the material, such as mechanical strength, surface adhesion, resistance to bacterial invasion, and capacity for controlled release of analgesics (including release kinetics), is expected. 
In addition, once proof of feasibility is achieved, applicants are required to provide specific plans for how they will test the material in a validated animal pain model in Phase II. No human testing will be proposed for the Phase I (6-month) period. Although the period of time is short, it is preferred that an animal model be used for some part of the feasibility studies in Phase I. PHASE II: The proposed study should demonstrate and validate the efficacy of the material tested in Phase I for pain control in an in-vivo model that replicates burn-injury-induced pain. The safety of the material in combination with one or more of the analgesics listed above must be demonstrated with respect to biocompatibility, toxicity, and immunogenicity. The ideal material would have physical properties allowing it to conform to any burn wound surface. This includes liquids, gels, bandages, and other dry materials that upon wound contact would conform to the complex topography of the wound bed. Phase II deliverables will include development, testing, and demonstration of a prototype analgesic wound dressing composed of the material in combination with one or more of the above-listed analgesics. The product should be self-administrable in the field and take effect within minutes of application. In addition, the product should be easily removed (i.e., dressing changed) without causing damage to the wound bed, or left in place (i.e., biodegradable). The product also needs to be shelf-stable for long periods of time at all likely temperatures experienced in austere combat environments. All these requirements must be demonstrated during the Phase II period. PHASE III: In this phase it is expected that all pre-clinical testing and validation of the Phase II prototype product(s) will be finalized. Phase III will primarily consist of clinical trials designed to test the safety and efficacy of the Phase II product(s) in burn pain control. 
The focus of these trials will be on testing for combat-related wound care and/or civilian wound care that is similar to combat wounds. Phase III efforts will be directed towards technology transfer, preferably commercialization, of the product(s) from Phase II. This will include application for FDA approval. Commercialization efforts will include GMP manufacturing of sufficient materials for evaluation. The small business should have plans to secure funding from non-SBIR government sources and/or the private sector to develop or transition the prototypes into viable product(s) for sale to the military and/or private sector markets. The end-state of this research will be the full development of one or more products consisting of biocompatible wound dressings that deliver sustained pain control and can be used effectively in austere combat environments. Such products are also expected to have utility extending beyond the combat environment, into civilian burn wound care and emergency medicine applications.
OBJECTIVE: Demonstrate a prototype diagnostic test for gastroenteritis caused by Norovirus infection. The technology shall be able to detect infection at the onset of symptoms, be able to test unprocessed samples, incorporate all necessary controls, be compatible with use in an austere environment (small, lightweight, and insensitive to environmental extremes), and provide users with automated results interpretation. DESCRIPTION: Infectious diseases can have a significant impact on the operational readiness of military forces. The military has a requirement for innovative technologies to diagnose patients at or near the point of care or point of need to improve clinical outcome and conserve resources. Due to the operational environment, patients seen at the military equivalent of an outpatient clinic (doctrinally termed Role of Care 1 or 2) must be quickly treated and returned to duty or evacuated to a more capable medical unit (Role of Care 3). Therefore, diagnosis on the day of symptom onset is essential. In military settings, the point of need is frequently an austere environment without, for example, access to typical laboratory infrastructure, reliable electric power, refrigeration or controlled room-temperature storage, or specially trained laboratory personnel. Therefore, the proposed solution should be small, lightweight, and insensitive to environmental extremes such as dust, high humidity, and storage temperatures of up to 45°C. If electric power is required, it should be provided by rechargeable or disposable commercially available batteries. The use of lithium-ion batteries is discouraged due to restrictions on their shipment by air. The proposed solution should test unprocessed clinical samples and provide easily interpretable results. A single test should be complete (sample to answer) within 60 minutes, and the proposed solution should be able to test 20 patient samples in four hours. 
The device should be designed to minimize the risk of contamination and resulting false positive or false negative results. The proposed technology and approach should be consistent with obtaining U.S. Food and Drug Administration (FDA) clearance as a Clinical Laboratory Improvement Act (CLIA)-waived diagnostic device allowing use outside a CLIA-regulated laboratory (e.g., a doctor's office), or, in the military, use by non-laboratory personnel (e.g., a medic or independent duty corpsman) when prescribed by a physician. The Army is not planning to field current and future US military polymerase chain reaction diagnostic devices (the Joint Biological Agent Identification and Diagnostic System and the Next Generation Diagnostic Systems Increment 1) at Army Role of Care 1 or 2, so development of assays for these systems is not consistent with the objective of this Topic. This effort is also intended to demonstrate the ability to rapidly develop diagnostic tests for novel diseases. If an analyzer component is proposed, it should be designed so that the system can be easily upgraded by the user to incorporate additional tests. Proposed solutions that utilize synthetic or molecular biology approaches and simplified manufacturing requirements are desired. Noroviruses are the most common cause of epidemic gastroenteritis. Most human cases of Norovirus gastroenteritis are caused by viruses in genogroups II and I. Norovirus genogroup IV may also cause human illness. The clinical symptoms include nausea, vomiting, diarrhea, and abdominal cramps. Norovirus infection is readily communicable between individuals sharing close quarters such as military camps and bases, cruise ships, and naval vessels. Transmission is most frequently through the consumption of contaminated food and person-to-person contact. Asymptomatic viral shedding for two or more weeks may complicate outbreak control. 
There is only one FDA-cleared diagnostic device for Norovirus (FDA, http://www.accessdata.fda.gov/scripts/cdrh/devicesatfda/index.cfm), and it is not cleared for the diagnosis of individual patients but only as an aid in investigating the cause of acute gastroenteritis outbreaks. Further, the technology used (Enzyme Immunoassay) is not compatible with use in the military equivalent of an outpatient clinic. The U.S. Department of Defense is seeking innovative materiel solutions to provide a deployable, rapid, easy to use capability to diagnose Norovirus infection. Development of an FDA-cleared diagnostic device is not contemplated within the scope of Phases I or II of this effort. PHASE I: Specific Aim 1: If an analyzer device is required by the technical approach proposed by the Offeror, demonstrate a breadboard prototype of such a device in conjunction with the reagents developed as part of Specific Aim 2. Specific Aim 2: Demonstrate the development of reagents (compatible with the prototype device under Specific Aim 1) capable of detecting Norovirus genogroup II viruses (or appropriate simulant) in clinically relevant sample matrix. Deliver a report describing the design of the reagents and initial assay performance data, if available. PHASE II: Specific Aim 3: Based on the Phase I prototype device and development feasibility report, produce a pre-production prototype demonstrating potential military utility. Deliver the pre-production prototype for DoD laboratory evaluation. Deliver a report describing the design and operation of the pre-production prototype device. Specific Aim 4: Further develop the Norovirus genogroup II analyte-specific reagents to include the ability to detect genogroup I (minimum) and genogroup IV (objective). Deliver a report documenting the performance of the reagents. Deliver sufficient quantities of reagents to allow the DoD to perform 50 tests during a DoD in-house laboratory evaluation. 
PHASE III: By the end of Phase III, the Contractor will obtain 510(k) clearance or Pre-Market Approval from the FDA to market the device and reagents as a diagnostic device for Norovirus infection. Ideally, the device will receive a CLIA waiver. Such a device will fulfill a documented capability gap (Initial Capability Document for Infectious Disease Countermeasures, CARDS Number 14057, February 2007), and supports the Military Infectious Disease Research Program, U.S. Army Medical Research and Materiel Command, and the Pharmaceutical Systems Project Management Office, U.S. Army Medical Materiel Development Activity (USAMMDA). USAMMDA is the advanced developer of medical materiel for the U.S. Army and manages contracts for product development from after the proof-of-concept phase through initial fielding to operational units. Further, the device may have commercial market applicability to the health care and cruise ship industries, possibly also the airline, hospitality, and food service industries, and to non-governmental and intergovernmental organizations (NGOs and IGOs) implementing public health, humanitarian assistance, and disaster relief projects in the developing world.
OBJECTIVE: Identify, design, synthesize, and characterize PET/SPECT radiotracers for imaging of tauopathies. Tauopathies are associated with abnormally phosphorylated or folded Tau protein in response to TBI or concussion. Small molecules which bind pathological Tau species are sought. DESCRIPTION: Traumatic Brain Injury (TBI) is a suspected risk factor for neurodegenerative diseases such as Alzheimer's disease (1) and Chronic Traumatic Encephalopathies (CTEs) (2). CTEs are associated with concussion/sports injuries. Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) radiotracers are under consideration for routine use in the clinic for diagnosing neurodegenerative disease. Indeed, the PET imaging agent Florbetapir (Amyvid) is now FDA-approved for assessing brain neuritic plaques (3) in Alzheimer's disease. Until Florbetapir was approved, confirmation of molecular-level neurodegenerative change in the clinic was limited to a handful of agents or autopsy. Of these agents, none are intended to assay specific changes related to TBI. Based on reports which link Tau protein with TBIs and CTEs (4), this topic seeks proposals that will develop small molecules that specifically bind Tau or its pathogenic species. It is anticipated that Phase I will encompass the identification, design, synthesis, and characterization of candidate molecules for Tau binding. Successful proposals will identify a specified Tau pathological target in the proposal. Candidate molecules may be identified via the literature, but a sound plan demonstrating how the agent would be synthesized, characterized, and radiolabeled must be included as part of the proposal. Phase II should provide validation of the lead agent (e.g., Phase I clinical trials) ahead of FDA approval and commercialization. 
The successful candidate agent could be used to 1) detect and diagnose tauopathies at their earliest stages, 2) monitor treatment/injury, and 3) aid researchers in distinguishing the differences between neurological disease and injury. Proposals should indicate how the marker will impact one or more of these research goals. PHASE I: Proposals should contain a rationale for the appropriateness of the lead compound. The proposal should also describe the synthetic chemistry strategy (or equivalent) and provide a plan for radiolabeling the compound(s). It is expected that at the end of Phase I, several candidate radiolabeled agents will be made available for Phase II funding under an approved IND. A structural validation strategy using analytical chemistry methods (e.g., NMR, UV/VIS, mass spectrometry) should be included. No animal/human research will be done as part of Phase I. PHASE II: Phase II funding is expected to characterize several candidate radiolabeled agents in pre-clinical and/or Phase I human studies. The characterization work will validate any lead agent(s) for Phase III SBIR research. Evidence that GMP/GLP manufacturing practices will be enforced should be included as part of the proposal. Studies into the distribution, metabolism, and pharmacokinetics of these agents are encouraged, along with a robust plan for assessing safety and efficacy as part of the work plan. The Phase II plan should also include a plan for commercialization and FDA approval. Suggested elements of the proposal narrative concerning commercialization include considering and detailing the anticipated costs and design of Phase II/III clinical trials. The Phase II/III clinical trials should feature robust neurological endpoints which would illustrate the efficacy of these agents. PHASE III: Sports injuries, Alzheimer's disease, and TBIs share tauopathies as a common pathological feature. There is an absence of reliable, quantifiable, non-invasive markers for Tau. 
This marker could be used to 1) detect and diagnose tauopathies at their earliest stages, 2) monitor treatment/injury, and 3) aid researchers in distinguishing between neurological disease and brain injury. For example, such a marker may help in making determinations regarding return to duty (or the practice field) for individuals who have suffered concussions. Other imaging strategies such as CT and MRI do not currently provide the functional resolution to assess brain injuries over time. SPECT/PET agents are uniquely positioned to study brain injuries and response to therapies longitudinally because they require a chemical tracer to specifically bind to pathological species in the brain. This research could have a large impact on how we understand, diagnose and treat concussion, both domestically and within the military.
OBJECTIVE: The objective of this topic is to develop and demonstrate a secure wireless communications capability and developer's implementation toolkit that can be installed on any handheld ruggedized SMART phone or tablet in order to connect the devices with wireless medical sensors and secure military communication networks. This research will incrementally advance the state of the art in enroute combat casualty care assessment, monitoring, and intervention at the point of injury (POI) and on attended casualty evacuation vehicles, such that the final demonstration shows proof-of-concept feasibility for secure proximal and remote wireless patient monitoring, encounter documentation, medical information exchange and telementoring from any location on the battlefield. DESCRIPTION: This topic is designed to address the current gap in availability of secure close-range broadband communications needed to implement a wide range of potentially hands-free wireless capabilities for medical monitoring and intervention; information capture, storage and security; and communications integration with both civilian and sensitive-but-unclassified military networks. It is now technically possible and operationally feasible to combine most of the physiological monitoring, medical information exchange, imaging, and telemedicine technologies already in use, undergoing evaluation, or still in development with emerging semi-autonomous, autonomous, or closed-loop treatment and intervention systems. This topic focuses on prototyping a wireless transmission capability and developer's toolkit that could potentially be used on any mobile device, wireless physiological or telemetry sensor, or network access point vehicle to enable secure wireless communications among devices.
Using the toolkit, the wireless communications application could be installed on a SMART phone, tablet, physiological sensor, and/or network access point by an application developer in order to facilitate development and implementation of far-forward combat casualty care applications that: 1) enable transfer of medical data from medical sensors via secure wireless transmission modes (e.g. ultra-wideband, secure Bluetooth, ultraviolet communication, etc.) so that medics, remote providers or even closed-loop patient support systems can monitor patients at the POI, while enroute via air or ground evacuation, or at forward medical treatment facilities such as battalion aid stations; 2) interface to secure tactical radios and tactical cellular mesh networks on the battlefield to enable tele-operated, semi-autonomous or autonomous operation, and command and control on the move; 3) record and transmit ("store and forward") electronic patient encounter documentation such as the electronic Tactical Combat Casualty Care (eTCCC) Card; and 4) enable secure two-way wireless imaging and video information exchange and telementoring. After Phase III development, the final production model of the secure wireless capability must be ruggedized for shock, dust, sand, and water resistance to enable reliable, uninterrupted operation in combat vehicles on the move, to include operation and storage at extreme temperatures, and the developer's kit must be able to install this capability on the devices. Size and weight are important factors; the ultimate objective of the system would be secure wireless connections from patient continuous medical monitoring sensors and military tactical radios to a handheld integrated processor, display, and communications device that is powerful enough to run "Predictive Algorithms".
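Item 1 above (secure wireless transfer of medical sensor data) can be illustrated at the framing level. The sketch below is a minimal, hypothetical example of one building block only, integrity/authenticity via HMAC over a serialized sensor reading, using only the Python standard library. A fielded system would also need encryption (e.g. AES-GCM), real key management, and certified algorithms per DoD policy; none of that is specified here, and all names and fields are invented for illustration.

```python
import hashlib
import hmac
import json
import os
import struct

def frame_reading(key: bytes, sensor_id: str, seq: int, payload: dict) -> bytes:
    """Serialize a sensor reading and append an HMAC-SHA256 tag.
    Illustrative only: a fielded system would add encryption (e.g.
    AES-GCM) and real key management on top of this framing."""
    body = json.dumps({"id": sensor_id, "seq": seq, "data": payload},
                      sort_keys=True).encode()
    tag = hmac.new(key, body, hashlib.sha256).digest()
    return struct.pack(">I", len(body)) + body + tag

def verify_frame(key: bytes, frame: bytes):
    """Return the decoded reading if the tag verifies, else None."""
    (n,) = struct.unpack(">I", frame[:4])
    body, tag = frame[4:4 + n], frame[4 + n:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return None
    return json.loads(body)

key = os.urandom(32)                        # hypothetical pre-shared pairing key
f = frame_reading(key, "ecg-01", 7, {"hr_bpm": 62})
assert verify_frame(key, f)["data"]["hr_bpm"] == 62
assert verify_frame(key, f[:-1] + bytes([f[-1] ^ 1])) is None  # tamper rejected
```

The sequence number is included so a receiver could also reject replayed frames, a concern for any "store and forward" link.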
A wireless device ("chip") and antenna that can be easily installed, using the developer's kit, into a mobile phone or tablet sized device that a soldier/medic can carry in a uniform pocket would be ideal. Quantitative values for acceptable operational and storage temperatures and power requirements should be planned to comply with applicable MIL-SPECs (available online). To facilitate commercialization, the developer's kit should enable embedding the secure wireless capability within medical sensors, mobile smart phone devices, military common-user tactical radios, and/or ubiquitous civilian communications equivalents. PHASE I: Research solutions for the technical challenges identified above, incorporating a feasible solution for a wireless transmission capability and developer's toolkit that could potentially be used on any mobile smart phone type device, wireless physiological or telemetry sensor, or network access point vehicle to enable secure wireless communications among devices via secure wireless transmission modes (e.g. ultra-wideband, secure Bluetooth, ultraviolet communication, etc.). Flesh out the commercialization plans developed in the Phase I proposal for elaboration or modification in the Phase II proposal. Explore commercialization potential with civilian emergency medical service systems development and manufacturing companies. Seek partnerships within government and private industry for transition and commercialization of the production version of the product. PHASE II: From the Phase I design, develop a ruggedized prototype of a wireless transmission capability and developer's toolkit that could potentially be used on any mobile smart phone type device, wireless physiological or telemetry sensor, or network access point vehicle to enable secure wireless communications among devices via secure wireless transmission modes (e.g. ultra-wideband, secure Bluetooth, ultraviolet communication, etc.).
In addition to demonstrating secure wireless connectivity to mobile smart phones, medical monitors and military tactical radios, the prototype device should demonstrate communications with ubiquitous civilian broadband wireless networks. Demonstrate the system with soldier medical attendants in a relevant environment, such as a US Army TRADOC Battle Lab. Flesh out the commercialization plans contained in the Phase II proposal for elaboration or modification in Phase III. Firm up collaborative relationships and establish agreements with military and civilian end users to conduct proof-of-concept evaluations in Phase III. Begin to execute the transition to Phase III commercialization in accordance with the Phase II commercialization plan. PHASE III: Refine and execute the commercialization plan included in the Phase II proposal. Execute a proof-of-concept evaluation in a suitable operational environment (e.g. Advanced Technology Demonstration (ATD), Joint Capability Technology Demonstration (JCTD), Marine Corps Limited Objective Experiment (LOE), Army Network Integration Exercise (NIE), etc.). Present the prototype, as a candidate for fielding, to applicable Army, Navy/Marine Corps, Air Force, Coast Guard, and Department of Defense Program Managers for Combat Casualty Care systems, along with government and civilian program managers for emergency, remote, and wilderness medicine within state and civilian health care organizations and the Departments of Justice, Homeland Security, and the Interior, and the Veterans Administration. Execute further commercialization and manufacturing through collaborative relationships with partners identified in Phase II.
OBJECTIVE: Demonstrate an ultra-low power multi-node ambulatory physiological monitoring system where a System on a Chip (SoC) senses, processes, and communicates health state information to a standard Android smart platform. DESCRIPTION: Commercially-available medical-grade wearable physiological status monitoring (PSM) devices (e.g., www.equivital.co.uk, www.zephyr-technology.com) are currently being used to (a) understand and improve the health and well-being of soldiers, and (b) collect quantitative data to guide improvements to the clothing and individual equipment that soldiers wear and use. For example, PSM systems are being used to assess the thermal-work strain imposed by training and deployment to harsh environments, and to document the physiological impact of wearing protective clothing ensembles. Currently available PSM systems are well-suited to these focused R & D applications, where test durations are limited to a few days at most and the size, weight, power, and cost of current PSM systems can be tolerated. However, transitioning these legacy systems into routine use by military personnel to support tactical decision making (e.g., how much water do I need? When should I take a rest break? Who is most at risk of overheating?) is not yet practical. Current systems suffer from excessive size, weight, power requirements, cost, and use of proprietary technology that restricts the use of 3rd-party sensors and algorithms, and they lack viable on-body low-power, short-range (2-5 m), low-signature, low-data-rate body area network communication capabilities. To be successful, a body area network typically needs to be low-bandwidth, low-power, short-range, and have a minimal electromagnetic signature. Future broad use of PSM systems requires wear-and-forget technologies that can be used routinely for extended periods of time without recharging or replacing batteries.
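As a rough sanity check on the low-data-rate claim above, a back-of-envelope budget (with illustrative sampling rates and bit depths, not requirements from this topic) shows why raw multi-sensor physiological data fits easily within a heavily duty-cycled short-range link:

```python
# Back-of-envelope body-area-network budget. Sampling rates and bit
# depths below are illustrative assumptions, not topic requirements.
ecg_hz, bits = 250, 12
raw_ecg_bps = ecg_hz * bits                 # 3000 bit/s of raw ECG
aux_bps = 1 * bits + 50 * 3 * bits          # skin temp (1 Hz) + 3-axis accel (50 Hz)
total_bps = raw_ecg_bps + aux_bps           # 4812 bit/s aggregate
print(f"aggregate sensor rate ~ {total_bps / 1000:.1f} kbit/s")

# Even doubled for framing and security overhead, this is orders of
# magnitude below Mbit/s-class short-range radio rates, which is what
# lets the radio sleep most of the time and keep average power low.
assert total_bps * 2 < 1_000_000
```

On-chip feature extraction (e.g., sending heart rate rather than raw ECG) would shrink this further, which is one way "wear-and-forget" battery life becomes plausible.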
PSM systems of this type could be used to facilitate physical training (avoiding overtraining and musculo-skeletal injuries) and to host individualized models that support mission and logistical planning (e.g., anticipating work rates and water and ration requirements), mission support, after-action reviews, and model validation, verification, and individualization. An example of a high-value application of next-generation SoC PSM systems is promoting the physical and mental fitness of soldiers and their family members through Activity, Nutrition, and Sleep Management (ANS) (http://www.armymedicine.army.mil/assets/home/Army_Medicine_2020_Strategy.pdf). Towards this end, the Army is interested in promoting the development and maturation of integrated next-generation open-architected SoC PSM solutions. The deliverable would be a Technology Readiness Level (TRL) 4 demonstration and validation of the SoC PSM system in a laboratory environment that includes (a) collection and processing of one or more physiological signals at a given node, (b) wireless transmission of collected data from one node to the SoC node serving as a gateway to an Android smart platform, and documentation of (c) ultra-low power requirements and (d) the reliability and validity of the data stream(s). PHASE I: Building on the baseline capabilities of relevant extant SoC PSM systems, (a) design a practical plan for achieving the desired hardware, firmware, network communication, and data display capabilities of the envisioned multi-node SoC PSM demonstration system described above, and (b) establish a plan for SoC PSM system test, evaluation and validation. No research or testing involving animal or human subjects will be needed.
PHASE II: Using the Phase I plans, and building on existing SoC capabilities, develop, demonstrate and validate an ultra-low power, multi-node, miniaturized SoC PSM system capable of sensing, processing, and communicating individual physiological information to a smart Android platform for display. The SoC PSM system will (a) incorporate an ECG sensor (required), incorporate other on-chip sensor types (e.g., skin temperature, accelerometry for activity and posture detection) (desired), and be able to link to off-chip sensors (e.g., photoplethysmography/oximetry) (desired); (b) have a processor capable of signal processing and data management (required) and execution of simple algorithms (desired); (c) have ultra-low power requirements (total SoC power < 100 W (required) or < 50 W (desired)); and (d) have at least one (required) or more (desired) integrated radios capable of interlinking SoC nodes and providing a digital data feed suitable for input to a handheld computer (e.g., secure Android platform). The on-body personal area network operating frequencies and/or power output should be such that licensing is not required. Ultra-wideband or narrowband radios are preferable to Bluetooth solutions. The demonstration SoC PSM system will include three (3) nodes (required) or four (4) nodes (desired) (sensors 1-3 plus an Android node). The system is required to recognize family members, i.e., the distributed SoC nodes (chest, head, foot) that reside on a given individual and the associated SoC node that serves as the gateway to the handheld Android display. All associated nodes should pair without human intervention. Demonstrate (a) SoC signal processing, (b) reliable and secure data transmission from sensor nodes to a central SoC node connected to a handheld computer with display (e.g., Android smart platform), and (c) delivery of digitized data suitable for Android platform data storage, processing, and display.
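The family-recognition and automatic-pairing behavior described above can be sketched abstractly: nodes and gateway share a provisioned family identifier, and the gateway associates only with its own family's sensors, without human intervention. Every name and field below is a hypothetical illustration, not a specified protocol:

```python
from dataclasses import dataclass, field

# Minimal sketch of "family" pairing: each soldier's nodes carry a
# provisioned family identifier; the gateway auto-pairs matching sensors.
# All identifiers and field names here are invented for illustration.

@dataclass
class Advertisement:
    family_id: str       # provisioned at issue time (e.g., per soldier)
    node_id: str         # chest, head, foot sensor, ...
    role: str            # "sensor" or "gateway"

@dataclass
class Gateway:
    family_id: str
    paired: set = field(default_factory=set)

    def on_advertisement(self, adv: Advertisement) -> bool:
        """Pair automatically iff the node belongs to this gateway's family."""
        if adv.role == "sensor" and adv.family_id == self.family_id:
            self.paired.add(adv.node_id)
            return True
        return False

gw = Gateway("soldier-042")
gw.on_advertisement(Advertisement("soldier-042", "chest-ecg", "sensor"))
gw.on_advertisement(Advertisement("soldier-042", "foot-imu", "sensor"))
gw.on_advertisement(Advertisement("soldier-007", "chest-ecg", "sensor"))  # another soldier: ignored
assert gw.paired == {"chest-ecg", "foot-imu"}
```

A real design would authenticate the family identifier cryptographically rather than trusting a plaintext advertisement, but the association logic is the same shape.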
The SoC PSM system will keep accurate track of time so that any data collected are linked to a timeline, even if the device is turned off or goes into a low-power mode. An open systems architecture is required that will accommodate third-party sensors and support the execution of simple Government-furnished algorithms that predict body core temperature from heart rate. This system is ultimately intended for general use by soldiers and is not intended to be a medical device that provides diagnostic information or information requiring medical knowledge to interpret. Testing involving animal or human subjects will be needed. A plan for cost-effective manufacturing of the SoC PSM system, including the associated components (e.g., antenna) and packaging, is required. PHASE III: The end state of this work effort is a well-designed ultra-low power SoC PSM system capable of reliably collecting, processing, and wirelessly communicating valid health state information over short (2-to-5 m) distances to an Android smart platform. The goal is to transition this SoC PSM technology to PEO-Soldier, the US Marine Corps, and Special Forces Program Managers to meet their requirements for real-time physiological status monitoring. Pursue commercialization of SoC PSM opportunities in the health and well-being market space, or in the home monitoring and health care space, to address national needs related to obesity and diabetes prevention and management and reducing health care costs.
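The Government-furnished algorithms mentioned above include models that predict core body temperature from heart rate. One published approach is an extended Kalman filter over a quadratic heart-rate observation model (in the style of Buller et al., 2013); the sketch below shows the filter structure only, and its coefficients are illustrative placeholders that must be taken from the governing publication before any real use:

```python
def core_temp_filter(hr_series, ct0=37.1):
    """Extended Kalman filter estimating core temperature (deg C) from a
    sequence of heart-rate readings (bpm). Structure follows a
    Buller-style model HR = a*CT^2 + b*CT + c; all constants here are
    illustrative placeholders, not validated values."""
    a, b, c = -4.5714, 384.4286, -7887.1          # quadratic CT -> HR mapping
    gamma_sq, sigma_sq = 0.022 ** 2, 18.88 ** 2   # process / observation noise
    ct, v = ct0, 0.0
    estimates = []
    for hr in hr_series:
        v_pred = v + gamma_sq                         # time update
        h = a * ct * ct + b * ct + c                  # expected HR at current CT
        m = 2 * a * ct + b                            # d(HR)/d(CT) linearization
        k = v_pred * m / (m * m * v_pred + sigma_sq)  # Kalman gain
        ct = ct + k * (hr - h)                        # measurement update
        v = (1 - k * m) * v_pred
        estimates.append(ct)
    return estimates

est = core_temp_filter([90] * 60)   # e.g., one reading per minute at 90 bpm
assert 36.5 < est[-1] < 39.0        # estimate stays physiological
```

An algorithm of this size is exactly the "simple algorithm" class the Phase II processor requirement contemplates: a handful of multiplies per sample, well within an ultra-low-power budget.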
OBJECTIVE: This project will develop and demonstrate an innovative and novel medical module payload for the Squad Multipurpose Equipment Transport (S-MET) unmanned ground vehicle (UGV), enabling the S-MET to extract a combat casualty and perform medic-attended CASEVAC. DESCRIPTION: Combat medics and Marine corpsmen routinely put themselves at risk to reach and extract the wounded, and in doing so often become casualties themselves. The military medical community is leveraging the robotics/UGV development work ongoing at the U.S. Army Maneuver Center of Excellence and elsewhere in DoD to provide improved and new capabilities for front-line medical personnel and to reduce their risk. S-MET Medical Module Payload Performance Goals: One of the key secondary uses of the S-MET is to provide non-autonomous casualty evacuation (CASEVAC) capability to the small unit. The ability to re-configure vehicles is critical during CASEVAC operations. Ultimately, the task of clearing the wounded from the battle space is the responsibility of the maneuver Commander. The CASEVAC capabilities within a formation are maneuver enablers for that Commander. Their efficacy is gauged by the unit's ability to perform this task, which is directly tied to the overall mobility of the formation. The S-MET must have tie-down points and will be re-configurable to accommodate a litter (or litters), providing full body access without any internal interference or displacement of crew or passengers. The S-MET must also be capable of carrying at least two Medical Equipment Sets (Combat Lifesaver). This topic will develop an S-MET Medical Module Payload (MMP) which, when integrated with the S-MET unmanned ground vehicle, will allow a combat lifesaver (an infantryman with additional medical training) or a medic to use the vehicle to safely and expeditiously extract and move a casualty to a Casualty Collection Point or a Medical Evacuation (MEDEVAC) point. Desired characteristics: 1.
Integrated module and S-MET vehicle capable of approaching an identified casualty, and extracting and moving the casualty with combat lifesaver or medic attendant assistance. 2. Separate module that can be installed on and removed from the S-MET platform quickly (less than 10 minutes) and easily. 3. Module does not impact S-MET performance (e.g., speed, maneuverability). 4. Integrated medic/combat lifesaver assist module and S-MET vehicle should be as fast as a human performing the same extraction and CASEVAC task. 5. Module should be safe for the casualty (i.e., no additional harm) and for other personnel in the area. 6. Module is teleoperated and/or semi-autonomous for some tasks (the end goal is a semi-autonomous system with a human in the loop for safety). The patient will be attended during the CASEVAC operation. A key goal of this research topic is to leverage and demonstrate the novel capabilities of 3-D printing to speed design and development, reduce prototyping costs, reduce production costs, and reduce maintenance and repair costs, as well as reduce required spare-parts inventories. 3-D printing should be used where it makes sense (e.g., to accelerate an iterative design, development and test approach, and to reduce part fabrication costs). PHASE I: The Principal Investigator (PI) will research S-MET documentation (when available for public release) applicable to this research topic. The team will also research and analyze the bio-mechanics of lifting and carrying, or dragging, a casualty (see the "Research Involving Animal or Human Subjects" section below; no human use during Phase I, therefore the use of modeling and simulation (M & S) is strongly encouraged). However, Human Use Protocol planning and documentation should be initiated, as required. The PI will develop and deliver a prototype medical mission payload module design (see the Description section, above).
The PI will develop and demonstrate as much of the prototype design functionality as possible using M & S and 'brass board' components. Finally, a draft Commercialization Plan will be developed. Phase I Deliverables: 1. Report cataloging and summarizing all S-MET and other documentation researched and used to develop the medical mission module payload. 2. Report describing the bio-mechanics of lifting, dragging and carrying a 300 lb casualty and the impact thereof on medical MMP functionality and design. 3. Modeling and Simulation Plan, if any, for Phase I, with links to any envisioned Phase II M & S Plan. 4. Initial medical MMP prototype design. 5. Demonstration of any M & S tools developed or in development, as well as any 'brass board' components. 6. Report detailing Human Use Protocol planning and documentation. 7. Draft Commercialization Plan. 8. Report describing the planned use of 3-D printing technology. PHASE II: The PI will leverage the Phase I work and refine the medical MMP design. A working prototype shall be built and demonstrated in the laboratory (minimum), and in a more relevant outside environment (desired), using a robotic system or UGV (S-MET vehicle desired, but if not available a surrogate UGV may be used). The Technology Readiness Level desired at the end of Phase II is TRL-5. The Phase I Commercialization Plan will be completed. Phase II Deliverables: 1. Prototype medical MMP demonstration in both a laboratory and field environment. 2. Technical reports containing each demonstration's results. 3. Updated medical MMP design. 4. Develop, implement and document any Human Use Protocol plans and schedules, as required. 5. Demonstration and documentation (report and software) of any M & S tools employed in this Phase II effort. 6. Updated Commercialization Plan, including targeted (or, ideally, acquired) commercial or academic partners for Phase III. 7. Report describing the use of 3-D printing and lessons learned. PHASE III: 1.
Technical and programmatic reports and plans, and technical and operational demonstrations supporting further medical MMP development and commercialization. 2. Updated and optimized medical MMP design. 3. Updated or new Human Use Protocols developed, approved and executed. 4. Operational demonstration of the medical MMP integrated with an S-MET vehicle (or a UGV surrogate, if no S-MET platform is available). 5. Report describing the use of 3-D printing and lessons learned. The dual-use applications for a robotic/UGV medical mission payload module for expeditious and safe extraction and short-range casualty movement are obvious. Military: tactical combat casualty extraction and evacuation, casualty extraction and evacuation from a contaminated (chemical, biological, radiological, nuclear (CBRN)) environment, Humanitarian Assistance missions, and Disaster Relief missions. Civilian: mass casualty situations (e.g., collapsed buildings) and victim rescue in a CBRN-contaminated environment (e.g., an industrial chemical spill).
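Phase I deliverable 2 above calls for analyzing the bio-mechanics of lifting, dragging and carrying a 300 lb casualty. A toy first-order estimate of steady dragging force, with assumed (not measured) friction coefficients, gives a feel for the loads the module and its actuators would face:

```python
# Toy first-order estimate: steady-state force to drag a casualty,
# F = mu * m * g. Friction coefficients are assumptions for
# illustration, not measured values for litters or terrain.
G = 9.81                           # gravitational acceleration, m/s^2
casualty_kg = 300 * 0.4536         # 300 lb design casualty from the topic
for surface, mu in [("smooth floor", 0.3), ("grass", 0.6), ("sand", 0.9)]:
    force_n = mu * casualty_kg * G
    print(f"{surface}: ~{force_n:.0f} N (~{force_n / 4.448:.0f} lbf)")
```

Real analysis would add slope, acceleration, and the dynamic loads of lifting; this static check is only a starting point for the M & S effort.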
OBJECTIVE: Develop technology to enable mobile, military, containerized cold-storage assets that use carbon dioxide as the refrigerant, for the purpose of eliminating reliance on the more heavily regulated and expensive refrigerants currently used. DESCRIPTION: The result of chlorofluorocarbon (CFC) and hydrochlorofluorocarbon (HCFC) regulation in the 1990s is that road and rail transport refrigeration systems today use hydrofluorocarbons (HFCs) as their refrigerants. This includes all of our military's existing and near-term mobile cold-storage assets. The issue is that HFCs are now under threat. In 2006, through the Kyoto Protocol, the international community, including the United States, came to an agreement that HFCs must eventually be phased out to help slow global climate change, because HFCs have a global warming potential (GWP) more than a thousand times greater than that of carbon dioxide (CO2). The first milestone -- phase-out of HFCs from automobiles in Europe beginning January 1, 2013 -- has passed. The next targets are supermarket and industrial refrigeration, then building air-conditioning, followed by road and rail container transport refrigerator/freezers and domestic appliances. This trend has inspired Product Manager - Force Sustainment Systems (PM-FSS) and the Combined Arms Support Command (CASCOM) to ensure the Army is not disadvantaged by such regulation. The Army is therefore seeking advanced development of technologies that enable mobile military containerized cold-storage assets to use alternative refrigerants, namely CO2. Several major corporations initiated development ahead of the upcoming ban. Having concerns that ammonia, hydrocarbons, and the current hydrofluoroolefins (HFOs) are not suitable replacement candidates due to flammability and toxicity issues, researchers are investing in CO2 solutions.
CO2 refrigeration is not new, but the need to eliminate refrigerants with high GWP, the necessity for greater efficiency to decrease energy consumption, and a desire for far lower refrigerant costs are driving innovation. That innovation is supported by recent advances in materials, processor-based electronic controls, and computer-aided analysis tools and techniques, which in turn improve the designs of heat exchangers, compressors and other components, and enhance control over the process. The issues with CO2 refrigeration -- high pressures and/or increased component counts -- now appear to be surmountable with the emergence of these new technologies. Furthermore, the inherently high process pressures result in higher fluid densities, meaning reductions in weight and size are also possible -- given the same power requirements -- for various components including the compressor. The majority of work thus far has focused on vending machines (Danfoss and the Coca-Cola Co.) and supermarkets (Hillphoenix, Hannaford Bros. and Sobey's), and recently Carrier has been developing a system for containerized transport refrigeration. While Carrier's containerized system is the closest to what we need, it is designed only for ordinary road, rail and ship transport conditions, not military application. And it is still in development, so commercial acquisition is not possible. As such, military investment is necessary. While small businesses may be able to leverage some of Carrier's technology, additional innovative development must address the fact that military assets require ruggedization for off-road transport, efficiency gains to minimize logistical burden, modification of power consumption levels to accommodate use with camp grids and gensets, dual evaporators, compactness to create space for the inclusion of onboard gensets and fuel tanks, and adaptation to extremely hot environments - areas of deficiency in the Carrier unit.
Concepts should target modular refrigeration units (RU) suitable for large containerized cold-storage. The weight objective for a system capable of cooling a 20' ISO container is < 1200 lbs, with a threshold target of < 1600 lbs -- including frame, onboard auxiliary power unit, and all ancillary components. The space available for the refrigeration mechanicals in such a frame is roughly 45" wide x 23" deep x 23" tall. At 135 degrees F ambient, the capacity required for simultaneously cooling 2/3rds of a 20' container to 38 degrees F and 1/3rd of the container to -5 degrees F is ~15,000 BTU/hr. The RU driving these temperatures should have a reserve capacity of 30% at this condition, and a target coefficient of performance of greater than 1 -- a reasonable goal for CO2 even in the most challenging climates. The annualized average power consumption target would be 2 kW in the Middle East. The design should eventually lead to a production cost of < $40k, the objective being < $25k. Respondents shall consider the two most common CO2 cycles, as well as alternative vapor compression cycles, and explain and justify their choice and safety considerations, especially with regard to cycles with extreme pressures. While it is expected that, to limit scope, the concepts will utilize the CO2 refrigeration compressors currently available (e.g., Carrier, Danfoss, Tecumseh, Daikin, etc.), development of new compressor technology will be considered if it does not jeopardize development of other technologies. Other technologies examined will be effective heat exchanger materials and geometries; efficient motors; variable-speed drives; algorithmic controllers employing process monitoring, logging and anticipatory software; and electrically or mechanically pulsing evaporative control valves.
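The capacity and power figures above can be cross-checked with simple unit arithmetic; this worked example only restates numbers given in the description and adds the BTU/hr-to-kW conversion:

```python
# Cross-check of the stated refrigeration numbers (values from the
# description above; only the unit conversion is added here).
BTU_PER_HR_PER_KW = 3412.14            # 1 kW = 3412.14 BTU/hr

capacity_btu_hr = 15_000               # ~load at 135 F ambient, both compartments
with_reserve = capacity_btu_hr * 1.30  # 30% reserve capacity
assert round(with_reserve) == 19_500

thermal_kw = with_reserve / BTU_PER_HR_PER_KW
# With the target coefficient of performance of just over 1, electrical
# input at this worst-case condition is roughly the thermal load:
peak_electric_kw = thermal_kw / 1.0
print(f"worst-case electrical draw ~ {peak_electric_kw:.1f} kW")

# The 2 kW target is an annualized average, so a ~5.7 kW worst-case
# peak is consistent: most hours of the year sit far below 135 F ambient.
assert peak_electric_kw > 2.0
```

This also frames the design trade: any COP improvement above 1 at high ambient directly shrinks the onboard genset and fuel-tank sizing the compactness requirement must accommodate.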
PHASE I: During Phase I and the Phase I Option, offerors shall develop the initial concept design; demonstrate the practical and technical feasibility of their approach materially via scaled-down bench-top/breadboard fabrications of the most critical component technologies; then validate empirical results with modeling and simulation. Phase I deliverables will include progress and final reports detailing activities, description and rationalization of the design process and resulting concept, successes and failures, results of performance modeling and benchtop evaluation, safety, risk mitigation measures, MANPRINT, and estimated production costs. The final report shall specify how requirements will be met with a full-scale prototype in Phase II. Concepts will be judged on adherence to the quantitative and qualitative factors in the Description section above, and more generally on metrics such as cost, complexity, reliability, maintainability, size and weight. PHASE II: During Phase II, the researcher is expected to refine and scale-up the technology developed during Phase I, and further validate the concept and demonstrate how goals are being met by fabricating for delivery one or more fully-functional, full-size CO2 refrigeration units that have been subjected to various performance and environmental evaluation exercises representative of actual field conditions. The data deliverable shall be progress reports and a final report documenting the theory, design, safety, MANPRINT, component specifications, performance characteristics, and any recommendations for future enhancement of the equipment. PHASE III: During Phase III, the researcher is expected to perform final tasks necessary to polish the technology and through advanced testing prove it is capable of fulfilling the requirements necessary for technology transition and commercialization. 
Likely military applications will be containerized cold-storage assets such as the Tricon Refrigerated Container System (TRCS), the Multi-Temperature Refrigerated Container System (MTRCS), and the single-temperature Refrigerated Container System (RCS). The technologies developed will be applicable to the millions of refrigerated transport containers that travel by road and rail across our nation and the world.
OBJECTIVE: Develop a technique or system to rapidly determine the impact of deviations from established shade performance specifications on the camouflage effectiveness of Soldier uniforms in a photorealistic and radiometrically correct manner. DESCRIPTION: This SBIR seeks innovative approaches to visualize and quantify the impact on camouflage effectiveness of materials determined to be "off spec" by various amounts for shade and near-infrared (NIR) performance, both upon initial submission and after completion of various performance tests, such as colorfastness to light and laundering or durability (wear). As an example, current specifications for printed camouflage fabric for combat uniforms (e.g., MIL-DTL-44436A, Cloth, Camouflage Pattern, Wind Resistant Poplin, Nylon/Cotton Blend) require visual evaluation of the submitted specimen against the established standard by a trained color specialist under standard lighting conditions (D75) in a shade booth. The visual appearance of the specimen is compared to a set of physical standard and tolerance samples pulled from production runs by a team of subject matter experts. The near-infrared performance is evaluated based on spectrophotometer measurements of each color at wavelengths between 600-860 nm and how they compare to the established tolerances in the current specification for that pattern. When specimens are judged to be outside of the acceptable range, a waiver may be granted if it is determined to be in the best interest of the US Government by subject matter experts and contracting personnel. However, there is currently no method to visualize, in a photorealistic and radiometrically correct manner, the appearance of these deviations, or to quantitatively evaluate the impact of waived specimens on the overall performance of the camouflage in a combat environment, without actually fabricating prototypes from the material in question and conducting a field test.
For instance, due to both the complexity of the camouflage patterns and the sensitivity of the human visual system, a small deviation in one color may have a minimal impact on a pattern's visual and/or NIR performance, while a change of similar magnitude in another color of the same pattern could have a profound impact. The goal for this task is to design, develop and demonstrate an innovative technique or techniques for rapidly visualizing and quantitatively determining the impact of the off-spec performance of Soldier camouflage materials in relevant, real-world background scenes in the visual and near-infrared regions of the spectrum. PHASE I: Design, develop and demonstrate a system or process for creating photorealistic and radiometrically correct visualizations of off-spec material for comparison to standard material performance at a minimum of 3 ranges (background-dependent close, mid and far ranges) of military relevance. Metrics to quantify time to generate visualizations, end-product accuracy and fidelity will be chosen or developed by the contractor. An actual world location will be selected to encompass typical background elements relevant to evaluating material conformance to specification requirements. PHASE II: Develop a prototype demonstration system for generating photorealistic and radiometrically correct visualizations of Soldier camouflage materials for comparison to standard materials in the visual and NIR that is extendable to other spectral regions, as well as a means to quantify the performance impact of the deviation. The software architecture and system operational requirements will be clearly stated and compatible with existing Army tool suites. The generated visualizations will be compared against actual physical samples of standard and off-spec material(s) and evaluated for scene generation speed, complexity and accuracy. Specimen and scene preparation and evaluation time shall be determined and documented.
PHASE III: Scene simulations are used in many current DoD applications, such as missile development, algorithm development and intelligence gathering, as well as in the motion picture and computer gaming industries. Our goal is to have a simpler version of a background and target scene generator than the hyperspectral variants required for sensor or missile development, suitable for use during full scale procurement. However, a more robust, radiometrically correct version than the photorealistic renderers used in the entertainment industry is required. Additional applications of this work could include extending the visualization software to other spectral regions beyond the NIR for other military products. PHASE III DUAL-USE APPLICATIONS: This visualization process could also aid customers in the commercial market by demonstrating the impact of varying shade tolerances to find the best combination of appearance, durability and cost. This could be useful in many different markets including textiles, paints and plastics, food and brand marketing. An additional application could be in the evaluation of transportation safety regulations.
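The spectrophotometric NIR check described earlier, where each printed color is measured between 600 and 860 nm and compared to the tolerance band in the pattern's specification, can be sketched as a simple pass/fail comparison. All wavelengths, tolerance bands and reflectance values below are hypothetical placeholders for illustration, not values from MIL-DTL-44436A or any other specification.

```python
# Sketch of a per-color NIR conformance check: measured spectral
# reflectance is compared to a lower/upper tolerance band at each
# sampled wavelength. Numbers are hypothetical placeholders.

def nir_in_tolerance(measured, lower, upper):
    """Return (passes, list of out-of-band wavelengths)."""
    out = [wl for wl in measured
           if not (lower[wl] <= measured[wl] <= upper[wl])]
    return (len(out) == 0, sorted(out))

# Hypothetical reflectance (%) at a few sample wavelengths (nm)
lower = {600: 20, 700: 30, 800: 40, 860: 45}
upper = {600: 30, 700: 45, 800: 60, 860: 70}
measured = {600: 25, 700: 44, 800: 62, 860: 50}  # 800 nm exceeds the band

ok, bad = nir_in_tolerance(measured, lower, upper)
print(ok, bad)  # False [800]
```

A real implementation would evaluate the full measured spectrum against the pattern-specific tolerances rather than a handful of sample wavelengths, but the go/waiver decision logic is the same shape.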
OBJECTIVE: Develop a technology to non-destructively inspect and test helicopter sling load slings to the standards in TM 4-48.09 and FM 3-55.93. DESCRIPTION: Helicopter slings are textiles used to attach a payload (e.g. a truck, howitzer, or container) to the underbody of a military helicopter. This "external underslung payload" is then transported from one location to another. Helicopter slings come in two strengths and sizes. The 2,500 lb capacity sling (NSN for full sling set: 1670-01-027-2902) has a 7/8" outside diameter and is twelve feet long. The 6,250 lb capacity sling (NSN for full sling set: 1670-01-027-2900) has a 1 1/4" outside diameter and is twelve feet long. Each sling leg consists of a double-braided nylon rope with an eye splice at each end. The outer braid is covered with a vinyl coating. Helicopter slings are exposed to harsh environmental conditions. They are dragged over sand, dirt and cement. They're exposed to blowing sand and debris under a helicopter during payload hookup and forward flight. The slings can also be sprayed with or submerged in salt water. The salt crystals, upon drying, abrade the fibers as sand or any other foreign particle would and weaken the sling from the inside out. Slings can also be subject to vibrations during flight, as well as UV degradation and decay over time. Manuals TM 4-48.09 and FM 3-55.93 contain specific inspection criteria that each helicopter sling load (HSL) component must meet in order to be deemed safe for use. Recent deployments have demonstrated the susceptibility of textile ropes, slings and pendants to damage from sand penetration in desert environments. During storage, transport and usage, sand has been shown to work its way into the core, strength-bearing fibers of textile slings, where it can cause microscopic damage to these fibers. This damage leads to reduced strength and safety.
The current inspection practices are acceptable for metallic components; however, these techniques cannot accurately determine the condition of the internal fibers of the ropes. Because the current inspection process for ropes, slings and pendants is subjective, it produces different results with different inspectors. In most cases the equipment is removed from service while its strength is still above the minimum breaking strength of the double-braided rope, meaning the inspection process falsely flags serviceable equipment as unserviceable. In other cases the slings break in flight, leading to the loss of payload and possible loss of life. About seven years ago, tests were conducted on HSL sling legs in an effort to correlate rope break strength to the visual inspection. The test results were inconclusive, as the breaking strengths did not correlate to the external condition of the sling. A visual inspection can only verify surface conditions, while the primary load-bearing element is within the sling and not visible. This SBIR proposes to use methods such as, but not limited to, electro-textiles, sling barrier coats, and/or low-stretch fibers that indicate overload conditions, to alert the inspector to significant strength loss and to allow non-destructive testing and data gathering. This would transform the current subjective inspection process into something quantifiable, significantly advancing the state of the art. This technology, when developed, could be embedded into the Special Patrol Insertion/Extraction System (SPIES), Fast Rope Insertion/Extraction System (FRIES), Helicopter Sling Load (HSL) equipment and climbing/rappelling ropes. In the cases where personnel are attached to these ropes, the enhanced inspection techniques would also provide a safety improvement.
The most successful proposal will present a technology with a "go/no-go" end state for a sling, based on the proof load values in Table IV of drawing 38850-00009, "Rope Assy. Sling" (attached), regardless of the type of damage (fiber breakage, UV degradation, chemicals, etc.). The proposed technology would determine the overall health of the entire sling with reference to the proof load values. While the entire length of the sling must be examined, the areas approximately one inch beneath each eye splice (end of splice taper) are the most likely candidates for damage. The device that will measure/determine the sling health should be portable, use DoD-approved batteries, and have an option for AC and DC power inputs. PHASE I: Develop innovative theoretical approaches to determine and display the current break strength of a textile sling or rope without destroying it. Develop an initial concept design and model key elements. Define and develop key component technological milestones. Phase I deliverables include a report detailing the theoretical approaches to the research, an initial concept design with modeling of key elements, key component technological milestones, a first-order prototype design, and a recommended path forward. PHASE II: Design, construct and demonstrate the operation of a prototype that can accurately and non-destructively determine and display the strength of a sling or other textile. Validate the accuracy of the prototype. Conduct life cycle and environmental testing of the prototype. Phase II deliverables include the physical prototype(s) produced, the prototype design (CAD files, technical drawings), and a report detailing the Phase II work and a recommended path forward. PHASE III: Refine and improve the Phase II prototype design to be more user friendly and size/cost efficient. Manufacture at least one of the refined/improved prototypes. Conduct user evaluations in the field. Write a training/operator manual.
One specific military application would be to use this prototype for HSL sling inspections prior to a mission. The most likely path for transition to operational capability, in the absence of a formal requirements document, would be for an air assault unit to formally request the ability to non-destructively test/inspect their slings. Potential commercial applications could include helicopter sling loads but would most likely center on inspection of crane slings or climbing ropes. PHASE III DUAL-USE APPLICATIONS: Military and civilian users of any kind of textile, especially a sheathed one like a sling, are under the same subjective inspection restrictions. An objective strength inspection without pull testing is not currently possible. Therefore, any product produced through this SBIR work would be viable in both military and commercial applications (crane slings and rock climbing ropes).
OBJECTIVE: Develop innovative anti-fog technology concepts compatible with impact-resistant transparent materials and associated coatings that can be applied to optically corrected complex-curvature lenses. DESCRIPTION: Fogging of eyewear has been a long-standing issue regardless of the eyewear's purpose. Protective eyewear is only effective when worn properly; if the user cannot see through the protective eyewear due to fogging, the eyewear is most often removed to accomplish the task, which negates the protection. Many commercial anti-fog coatings are available; however, their anti-fogging performance is still highly variable and subject to trade-offs with other performance objectives, chief among them compatibility with the lens material and anti-scratch coatings, and resistance to chemicals. An innovative new approach to prevent fogging is needed. Ideas such as highly durable superhydrophilic surfaces or coatings, oleophobic coatings to aid in self-cleaning, and other unique approaches are desired. Concepts should be compatible with the various impact-resistant transparent materials in use today, such as polycarbonate and nylon, and the associated anti-scratch and anti-reflective coatings used with those materials, as determined by tests described in ANSI Standard Z87.1 and MIL-PRF-32432. Both permanent and re-applied approaches will be considered; however, treatments/coatings lasting longer than 6 weeks of continued anti-fog usage are highly desired. The new concepts must maintain the lens material's ability to meet the ANSI Z87.1 standard and military ballistic fragment protection requirements, ultimately meet the performance tests described in MIL-PRF-32432, and be applicable to optically corrected complex-curvature lenses.
Powered concepts must minimize power consumption, be nearly inaudible (15 dB or less) when worn, be self-contained for power, minimize wearer vibration, maintain compatibility with helmet fit, and not be prone to snagging. Powered concepts should have a goal of lasting a full 72-hour mission without needing a battery change/recharge. Anti-fog effectiveness should be evaluated according to established international standards, such as ASTM F659 Appendix A, EN 168, and others as acceptable, with an ideal goal of a change in haze (ASTM Standard D1003-00) of less than 2%. PHASE I: Identify candidate anti-fog concepts and demonstrate anti-fog effectiveness. Demonstrate the ability to incorporate the concept and show compatibility with impact-resistant transparent materials in an optically corrected complex-curvature lens geometry. Identify partnerships with an eyewear manufacturer for guidance on manufacturability. Mitigate risk by identifying and addressing the most challenging technical barriers in order to establish viability of the concept. PHASE II: Refine the anti-fog technology concept to improve anti-fog effectiveness and address suitability for high-volume manufacturing. For powered concepts, optimize the size/power consumption with a goal of lasting a 72-hour mission without requiring a battery change/recharge. Minimize sound and vibration to the user. Conduct initial ballistic fragment protection tests and rework the design as necessary. Provide at least 50 final-version working prototypes for government testing and initial Warfighter acceptance testing. PHASE III: Further develop the anti-fog technology into a final technology able to be incorporated into the manufacturing lines of protective eyewear manufacturers. Conduct full acceptance tests in accordance with MIL-PRF-32432 on an Authorized Protective Eyewear List (APEL) approved product.
PHASE III DUAL-USE APPLICATIONS: The initial use for this technology will be to improve the anti-fog performance of military protective eyewear. Additional dual-use applications will naturally cover the commercial protective eyewear markets. Depending upon the applicability of the technology, further dual-use applications would cover any commercial market needing anti-fog protection for transparent materials, such as recreational SCUBA masks, automotive windows, aircraft windows, instrument panel windows, etc.
OBJECTIVE: Develop innovative and cost-effective power source solutions for fuzing and munitions applications that will improve reserve battery technology, improve energy harvesting capabilities and/or enable utilization of active battery technologies. DESCRIPTION: Munitions power sources, traditionally reserve batteries (liquid and thermal), are a critical component of fuzing technologies, which require electrical energy to provide mission power to electronic systems. The size and power reduction advancements realized in the supported electronics enable performance functionality traditionally associated with larger munition fuzing (155 mm and 105 mm Howitzer) to extend to smaller munitions platforms such as medium caliber munitions (e.g., 40 mm, 30 mm). These largely consumer-electronics-driven advances from circuit integration and smaller die sizes have not progressed proportionally for the DoD-unique reserve batteries needed for power, forcing unique DoD solutions. The complex mechanical design and manufacture of these miniature high-power devices are driven by the performance requirements of medium caliber munitions (30 mm and 40 mm), including a twenty-year shelf life, flight times of less than a minute, temperature ranges that include storage at high temperatures (165 degrees F) and operation as low as -45 degrees F, and very fast turn-on (rise time) performance, which demands the use of highly reactive electrochemical materials. The power sources must activate reliably at setback levels while providing sufficient drop safety and reliability, and must also survive extreme gun launch conditions. Specifically, these power sources should meet the following requirements: 2.9 V minimum, 40 mA, rise times of 10 ms @ 2 mA and 100 ms @ 40 mA, active life of 20 sec under maximum load, setback forces up to 100 kg's, spin rates of 1000 rps, and a size of 0.255" diameter x 0.275" length.
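The numbers above imply a demanding energy budget for the package. As a back-of-envelope sketch (our calculation, not part of the solicitation), the delivered energy is at least the minimum voltage times the current times the active life, and the stated cylindrical envelope fixes the available volume; the implied energy density is a lower bound, since the actual battery voltage exceeds the 2.9 V minimum.

```python
import math

# Back-of-envelope check of the stated requirements: 2.9 V minimum at
# 40 mA for a 20 s active life, in a 0.255" dia x 0.275" long package.
V_min, I, t = 2.9, 0.040, 20.0          # volts, amps, seconds
energy_J = V_min * I * t                # delivered energy (lower bound), joules

IN_TO_CM = 2.54
r_cm = 0.255 / 2 * IN_TO_CM             # package radius, cm
l_cm = 0.275 * IN_TO_CM                 # package length, cm
volume_cc = math.pi * r_cm**2 * l_cm    # package volume, cm^3 (~0.23 cc)

density = energy_J / volume_cc          # implied J/cc, lower bound (~10 J/cc)
print(round(energy_J, 2), round(volume_cc, 3), round(density, 1))
```

Roughly 2.3 J must be delivered from about 0.23 cc, i.e. on the order of 10 J/cc at the terminals, which illustrates why highly reactive electrochemical materials are called out.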
As a result, munitions designers face challenging integration problems with no readily available power source. Additionally, the uniqueness of current power sources has created a supply issue and a dwindling industrial base. When successful, this SBIR will provide a more universal power source that meets the above requirements and applies to all current and future electronic fuzing regardless of caliber. The Government is seeking cost-effective, innovative solutions, including but not limited to improving existing reserve battery technology. Novel power source solutions/architectures could incorporate energy harvesting devices with increased efficiencies. All proposed solutions would be subject to meeting the requirements described above. Present power source technical challenges are experienced at the medium caliber munitions allotted volume, but this topic is not limited to that form factor. PHASE I: Investigate concepts and approaches for novel power source solutions that address current technical challenges for fuzing and munitions applications. Deliver an engineering study that identifies the key components or technologies to be demonstrated in Phase II and the associated technical risks (e.g., but not limited to, performance, manufacturing, and volume). PHASE II: Develop prototype hardware based on Phase I findings for solutions to the identified power source technical challenges. The prototype shall at a minimum be demonstrated in a simulated environment (demonstrating compliance with the requirements listed above) and shall be easily verified to show increased performance over legacy technology. Government facilities may be needed to perform verification testing, specifically to simulate the gun launch environment. A cost analysis will also be delivered to estimate unit production costs.
PHASE III: Assuming success, this power source technology could be used in existing and planned munitions, either as a pre-planned product improvement or via insertion into development efforts. The technology will enable a new generation of munitions power sources applicable to direct and indirect fire applications, including medium caliber, tank, mortar, artillery, rocket and missile munitions, as well as consumer electronic devices such as smart phones.
OBJECTIVE: Develop a printed low-voltage ignition bridge for munition detonators and igniters that can be mass produced on standard/current production equipment. DESCRIPTION: Detonators and igniters are used in munitions to initiate energetic materials to detonate or burn, resulting in propulsion or explosion. Because printed electronics and energetics are relatively new technologies, current printed igniters are produced at laboratory scale, without the efficiencies of mass production. Additionally, the very small size of printed micro-initiators may allow the use of additional energetics to enhance performance and lethality, and may facilitate the integration of "smart fuzing" electronics within the warhead. Smart fuzing can increase munitions lethality significantly. Recent advances in printing techniques demonstrate the capability for low-cost, mass-produced ignition bridges. These techniques include, but are not limited to, screen and inkjet printing. This topic encourages new and novel mass fabrication approaches for low-cost, variable-volume production of ignition bridges that will accommodate adaptable design changes. Designs should be amenable to inclusion of other manufacturing processes to construct complete detonators. Proposed technologies should investigate the utility of the process for deposition onto or into various flexible and rigid substrates, including but not limited to polymers, paper, circuit boards and ceramics. The ability to change the deposition process and/or equipment is an important design criterion; in this context "flexible" means that the processes and equipment can be used for many different purposes, not just for the prescribed design. This will facilitate lower production costs for smaller production runs because the equipment can be used for multiple products.
This topic will result in a mass-producible (in the range of 14,000 units per year) material solution that will provide ignition of MEMS-based medium caliber fuzing as well as indirect fire cannon artillery post-launch propulsion systems, resulting in increased reliability and performance of the fuze and/or post-launch propulsion system. PHASE I: Perform an engineering study of current and future electronic printing production techniques that will demonstrate the feasibility of applying mass fabrication techniques to printed bridges in ammunition fuzes and boosters (medium and large caliber). The study shall include a product performance and heat transfer analysis, a weight savings analysis, and a manufacturing cost analysis, and will conclude with production of generic design prototypes. Goals for the study are as follows: 1. Demonstrate by analysis that the technology can survive and perform safely and reliably in the high-heat (160+ degrees F) and high-shock (20,000+ g's) environment of the ammunition and gun tube. 2. Realize a weight savings of at least 10 percent from current designs (for example, the current Multi Option Fuze for Artillery (MOFA) weighs 1.85 pounds; application of this technology should reduce the weight to 1.67 pounds). 3. Realize a cost savings of at least 20 percent of current unit costs (for example, the unit cost of the MOFA is approximately $300; application of this technology should reduce the cost to $240). 4. Demonstrate by analysis and prototype fabrication the feasibility of a pilot production run of 100 units in a 24-hour period. Specific values related to cost, size/weight, and environmental conditions for each intended end item will be provided to the contractor after contract award for use in the analyses.
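The MOFA savings targets in goals 2 and 3 are straightforward percentage reductions, which can be checked directly (the 1.67 lb figure in the text is the 10% reduction rounded up from 1.665 lb):

```python
# Arithmetic check of the Phase I savings goals using the MOFA example
# from the text: a 10% weight reduction and a 20% unit-cost reduction.
mofa_weight_lb = 1.85
mofa_cost_usd = 300.0

target_weight = mofa_weight_lb * (1 - 0.10)   # 1.665 lb (text rounds to 1.67)
target_cost = mofa_cost_usd * (1 - 0.20)      # $240

print(target_weight, target_cost)
```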
PHASE II: Based on the success of the Phase I study (as validated during Phase I by government subject matter experts), Phase II efforts will focus on developing and producing specific material solutions that provide ignition of MEMS-based medium caliber fuzing as well as indirect fire cannon artillery and mortar fuze and post-launch propulsion systems. The result will be new or modified designs that leverage the mass production techniques and equipment identified in Phase I, and a verification of the mass production capability by demonstrating the ability to produce at least 100 units of one design in a 24-hour period and then switching to a different design on the same equipment to produce 100 units in the subsequent 24-hour period. These produced items will then be tested in a simulated operational gun launch environment (most likely at a government facility) to validate that performance is reliable and safe and that the items survive the intended environment. The contractor is responsible for defining the pilot production procedures and simulated operational test procedures. The final report will include the design data developed in Phase II, the results of the pilot runs and simulated operational testing (including all procedures followed), and a cost analysis of producing the designs given the selected manufacturing method(s). PHASE III: Phase III will qualify the successful Phase II designs in the end item, to include validation by all applicable safety review boards. This will result in insertion of the new technology into the end item as a product improvement or next-generation design implementation. The results of this topic will also have widespread application to commercial electronics, particularly where miniature form factors and flexible geometries are required.
OBJECTIVE: Investigate and determine the optimal control law architecture and the amount of Automatic Flight Control System (AFCS) partial authority required to achieve ADS-33E-PRF Level 1 handling quality ratings in the Degraded Visual Environment/Usable Cue Environment-2 (DVE/UCE-2) with flight control augmentation on the OH-58F platform. DESCRIPTION: The cornerstone of a good degraded visual environment strategy, and a fixed design requirement per ADS-33E-PRF, is an Attitude Command Attitude Hold - Height Hold (ACAH-HH) augmentation mode to reduce the inner-loop workload of DVE pilotage. Traditionally, full authority systems have been employed to achieve DVE augmentation, but it has been shown that partial authority systems can also meet this requirement, saving system weight and cost through the removal of redundancy requirements. The key to implementing partial authority augmentation for DVE is to ensure that target Handling Quality Ratings (HQRs) can be achieved while still meeting the emergency hardover recovery requirement. The intent of this effort is to determine the feasibility and limitations of a partial authority system as a DVE solution on the OH-58, predict and measure the actual amount of authority required to meet Level 1 HQRs in the DVE/UCE-2, and derive a technical architecture model to be employed in a potential follow-on program of record. PHASE I: Develop a white paper/feasibility study to model the OH-58D/F flight control system and apply a predictive computational method/tool to analyze and predict the amount of partial authority required for a 4-axis AFCS to achieve Level 1 handling quality ratings in the DVE/UCE-2 IAW ADS-33E-PRF. Select and compare multiple control law architectures and analyze performance trade-offs and benefit differences.
Investigate and determine the most suitable hardware technology to implement the AFCS, considering the current Stability Control Augmentation System (SCAS) architecture, weight, reliability, power availability, unit cost, and maturity. Ultimately, provide a recommendation for the best solution set(s) that meets the above criteria. PHASE II: Design, develop, and model a prototype AFCS on an OH-58 flight test aircraft based on the solutions recommended in Phase I. Perform a flight test evaluation of the installed system per USNTPS-FTM-No. 107 and ADS-33E-PRF to validate the predicted performance based on the solutions developed in Phase I. PHASE III: Transition in the FY17 time frame to support development of the OH-58F Block II helicopter.
OBJECTIVE: Develop a high-capability active suspension that maximizes soft soil mobility and mitigates road-breakaway rollovers on 10-37 ton wheeled vehicles, e.g., the Joint Light Tactical Vehicle (JLTV) and Mine Resistant Ambush Protected (MRAP) vehicles. DESCRIPTION: The Army is looking for opportunities to enhance soft soil (mud and sand) mobility and reduce vehicle rollovers caused by road breakaways using advanced suspension technologies. The suspension technology would be designed and developed for use on JLTV and MRAP vehicle platforms with the intent of maximizing the vehicles' soft soil performance and reducing the severity of road-breakaway rollovers. There has been significant work in the past on advanced suspension technologies that improve the ride and handling stability of various vehicles, but no work in the area of suspension control algorithms that can walk a vehicle out of an immobilized condition in soft soil or prevent a rollover when the road breaks away under a heavy vehicle. These are the two largest mobility issues for MRAPs and are highly likely to be an issue for JLTV, especially since it is intended to operate in much softer soils than MRAP. This suspension system will require the ability to rapidly respond to unexpected events and control the vehicle's movement throughout its suspension's entire travel range. Control algorithms will need to be developed that can detect a rollover event and properly mitigate it at speeds of up to 65 mph without causing a loss of control of the vehicle. Additionally, control algorithms will need to be developed to determine when a vehicle is stuck in mud or sand and to generate enough side-to-side load transfer to get the vehicle unstuck. The Army is looking for innovative ideas related to mobility, and more specifically suspension systems, to improve Soldier performance in the field when Soldiers encounter unexpected mission or life-threatening events.
The final product of this effort would be a prototype system, built and tested to determine and demonstrate the system's ultimate level of capability. PHASE I: In Phase I, the Contractor would propose a technological solution that would enhance soft soil (mud and sand) mobility and reduce vehicle rollovers caused by road breakaways using an advanced suspension technology, develop a model that demonstrates the functionality and performance improvements that can be expected with the technology, and then write a final report that summarizes the effort and the expected benefits should the system be built and developed for the MRAP Family of Vehicles or the JLTV. The report will include a summary of the data generated, the benefits of the system, the concerns related to integration onto the vehicle, an estimate of the expected durability of the proposed system, and any commercial applications of the system. PHASE II: In Phase II, the Contractor would generate detailed designs of the parts modeled in Phase I. The Contractor would fabricate the parts and install them on an MRAP or JLTV. Once the parts are installed, the Contractor would conduct a proof-of-principle (PoP) test to demonstrate the performance improvements. Finally, the Contractor would write a technical report detailing the results of the test, the manufacturability of the components, and the cost of the integration and parts should the system go into production. PHASE III: In Phase III, the Contractor shall develop detailed manufacturing and installation plans for use on the MRAP All Terrain Vehicle (M-ATV), the MaxxPro Plus, and the JLTV vehicle (still to be determined). The Contractor shall also determine the potential use of this product on agricultural and mining vehicles.
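One metric a rollover-detection algorithm of the kind described above might monitor is the lateral load transfer ratio (LTR), a standard rollover indicator in the vehicle dynamics literature. The sketch below is our illustrative assumption, not a design from the solicitation: the LTR approaches ±1 as vertical load shifts entirely to one side, which precedes wheel lift, and the threshold value is a hypothetical tuning parameter.

```python
# Illustrative rollover indicator: lateral load transfer ratio (LTR)
# computed from per-wheel vertical loads. Threshold is hypothetical.

def load_transfer_ratio(left_loads, right_loads):
    """LTR = (sum right - sum left) / (sum right + sum left), in [-1, 1]."""
    left, right = sum(left_loads), sum(right_loads)
    total = left + right
    if total <= 0:
        raise ValueError("vertical loads must be positive")
    return (right - left) / total

def rollover_alarm(left_loads, right_loads, threshold=0.8):
    """Flag an incipient rollover when |LTR| exceeds the threshold."""
    return abs(load_transfer_ratio(left_loads, right_loads)) > threshold

# Per-wheel vertical loads (N) for a notional 4-wheel vehicle
level = rollover_alarm([40e3, 40e3], [40e3, 40e3])    # LTR = 0
tipping = rollover_alarm([5e3, 5e3], [75e3, 75e3])    # LTR = 0.875
print(level, tipping)  # False True
```

A fielded controller would estimate wheel loads from suspension sensors and act on the trend of the LTR, not just a static threshold, but this shows the basic quantity such algorithms track.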
OBJECTIVE: Develop a real-time, in-line diagnostic tool to provide simple and timely verification that treated water is safe to discharge. DESCRIPTION: This SBIR topic will deliver technology that the Army can integrate into its future wastewater treatment concept of operations. The Army is developing mobile wastewater treatment systems to provide tactical base commanders with more organic logistics support, reducing their need for wastewater convoys. The limited manpower at small bases will not include wastewater specialists, so operators must be given simple, effective methods to verify the wastewater treatment process and ensure that the treated water is safe to discharge. Essential process verification measurements relating to the nutrient content of the water currently can be done neither on-site nor at frequent intervals. This means that operators lack key information needed to adjust or correct wastewater treatment system operations to avoid pollution. There are a variety of measurement options that satisfy wastewater discharge permit requirements; parameters of interest include (but are not limited to) chemical oxygen demand, biological oxygen demand, total organic carbon, and coliform bacteria. As a worst case, the standard method to measure biological oxygen demand (BOD5) takes 5 days, so some quicker measurement methods will likely be predictive. As an innovative effort, this request is NOT looking for proposals that integrate various commercial items with minor modifications to meet the above requirements. Proposals should identify cutting-edge research that allows for the consolidation of the wastewater treatment verification parameters on a single platform, such as a chip, and that will overcome the limitations of current commercial methods toward real-time process verification for quickly emplaced treatment systems.
The effort should focus primarily on the development of sensors rather than data loggers, controls or communications. Ideally, prototypes delivered to the Army would be used to demonstrate the capability to monitor wastewater discharge during Army technology demonstrations (TECD4a) for small base support from 2016 to 2017. PHASE I: Demonstrate feasibility of the core technology in a laboratory setting. Verify a measurement range or sensitivity equivalent to commercially available equipment currently used by the water industry. Verify accuracy by comparing the results to analysis conducted using the appropriate reference method from the current edition of Standard Methods for the Examination of Water and Wastewater. Directly measured physical and chemical properties should have an accuracy within 10%. Each parameter shall be tested in standard preparations and then in selected tap water mixtures. PHASE II: Design and build a complete sensor prototype for multiple wastewater analytes housed within a single platform no larger than one cubic foot. The sensor should be capable of communicating with a device (internal or external, preferably commonly available) to log data. Test integrated prototypes to the criteria of Phase I with standard preparations and collected water. The delivered prototype must be suitable for third-party and Army laboratory testing and field demonstration, but the design does not need to be finalized, nor is military-standard durability required. Operational manuals must be clear but do not require military format. During this phase, the Army expects to work closely with the contractor to clarify mission integration requirements appropriate for the initial prototype maturity. PHASE III: The final solution is a quick-connect autonomous in-line system, though a kit that accepts batch samples may be suitable. The sensor platform should be self-calibrating, with a duration of at least one month before recalibration is needed.
The most supportable design would utilize commonly available supplies and common communication protocols, and would not directly interface with the controls of the wastewater treatment system. The Army can integrate the technology developed under this SBIR into the mobile wastewater treatment systems being developed to answer Acquisition requirements. Water utilities could insert the technology developed under this SBIR into facilities to improve maintenance and reduce contamination of our nation's waterways. A broader application may be monitoring in accordance with discharge permits for industrial and municipal facilities.
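The Phase I accuracy criterion above (directly measured properties within 10% of the reference method) can be sketched as a simple pass/fail comparison. The analyte names come from the topic, but all numeric readings below are illustrative placeholders, not data from the solicitation.

```python
# Sketch: checking prototype sensor readings against reference-method
# results, per the Phase I accuracy criterion (within 10% of reference).
# All sample values are hypothetical.

def relative_error(measured: float, reference: float) -> float:
    """Fractional deviation of a sensor reading from the reference value."""
    return abs(measured - reference) / reference

def passes_phase1(measured: float, reference: float, tol: float = 0.10) -> bool:
    """True if the reading is within the 10% Phase I tolerance."""
    return relative_error(measured, reference) <= tol

# Hypothetical paired readings (mg/L) for parameters named in the topic.
readings = {
    "COD":  (412.0, 430.0),   # (prototype, reference)
    "TOC":  (95.0, 100.0),
    "BOD5": (180.0, 210.0),
}

for analyte, (meas, ref) in readings.items():
    status = "PASS" if passes_phase1(meas, ref) else "FAIL"
    print(f"{analyte}: {relative_error(meas, ref):.1%} deviation -> {status}")
```

Because BOD5 itself takes 5 days to measure, in practice the "measured" BOD5 value would likely come from a predictive model rather than a direct sensor, as the description notes.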
OBJECTIVE: Develop and demonstrate a model or full-sized high energy density capacitor for microsecond discharge times operating at high voltages, with an energy density greater than or equal to 1.2 Joules per cubic centimeter (J/cc). DESCRIPTION: The Army is in need of pulse power components that dramatically reduce weight and volume while meeting the high voltage needs of a pulse forming network. Recent advances in the production and performance of current capacitors for the US Army have achieved many milestones; however, there is only one known US vendor able to produce a high energy density capacitor. The topic's goal is to develop another source with the ability to produce a high energy density capacitor. In order to meet this goal, an innovative approach is desired to meet the required discharge life, energy density, and pulse widths. PHASE I: In Phase I, deliver a study that demonstrates novel methods of producing a high energy density pulse forming network capacitor. The capacitor should be optimized for DC lifetime, with the goal of achieving the highest energy density possible while maintaining a discharge life of 10 events; discharge times are in the microsecond pulse widths. This study shall include modeling that allows the contractor to demonstrate the capabilities of the high energy density capacitor. PHASE II: Upon successful completion of Phase I, design and fabricate a high energy density capacitor suitable for a high voltage (3 kilo-Volt (kV) to 7 kV range) pulse forming network. Energy density should meet or exceed 1.2 Joules/cubic centimeter (J/cc). Contractors are encouraged to collaborate with the Army on minimizing the packaging designs and reducing weight and volume. A working prototype should also be submitted to the Army for evaluation. PHASE III: In Phase III, the contractor shall design and fabricate a high energy density capacitor suitable for a high voltage (above 7 kV) pulse forming network, based on improvements from Phase II.
Energy density should meet or exceed 1.3 J/cc, with a minimum DC lifetime of 1,000 hours and 10 discharge events.
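The 1.2 J/cc goal can be related to capacitance and voltage through the standard stored-energy formula E = ½CV². The sketch below works that arithmetic for the Phase II voltage range; the 100 cc package volume is an assumed example, not a requirement from the topic.

```python
# Sketch of the energy-density arithmetic behind the 1.2 J/cc goal,
# using E = 0.5 * C * V^2. The package volume is an assumption.

def stored_energy_joules(capacitance_f: float, voltage_v: float) -> float:
    """Energy stored in a capacitor charged to voltage_v."""
    return 0.5 * capacitance_f * voltage_v ** 2

def required_capacitance_f(energy_j: float, voltage_v: float) -> float:
    """Capacitance needed to hold energy_j at voltage_v."""
    return 2.0 * energy_j / voltage_v ** 2

volume_cc = 100.0                      # assumed package volume
target_density = 1.2                   # J/cc, Phase II goal
energy_needed = target_density * volume_cc   # 120 J

for voltage in (3e3, 7e3):             # Phase II range, 3 kV to 7 kV
    c_uf = required_capacitance_f(energy_needed, voltage) * 1e6
    print(f"{voltage/1e3:.0f} kV -> {c_uf:.1f} uF for {energy_needed:.0f} J")
```

The quadratic voltage dependence is why operating at the top of the 3-7 kV range dramatically reduces the capacitance (and hence dielectric volume) needed for a given energy.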
OBJECTIVE: This topic will identify material formulas, manufacturing processes, and parameters to allow complex die forming of thick-gage armored steel components. Upon successful completion, this technology may be used on all Army platforms, including GCV & JLTV. This technology will improve structural and armored panel performance while reducing part count. Anticipated applications include underbelly protection. DESCRIPTION: Steel armor manufacturing processes produce flat sheets/plates of heat-treated armor steels. These raw materials mandate flat surfaces on Army vehicles, which compromises armor protection due to the mandated joining technologies. Armor panels are typically joined using bolts or welds. Welding allows for continuous joints but negates the heat treatment of the armored steel in the weld heat-affected zone, thus reducing the protective properties. Bolts compromise protection due to non-continuous joints and require material overlap. A method is required to form armored steel into complex shapes without reducing the material's protective properties. Presently, the automotive industry uses hot-stamped steel in high-production environments to create crash-protective "cages" using steels up to 2.5mm thick. As stated, the current production material properties are optimized for automotive crash conditions. The Army requires material with properties which, after forming, will meet the Army high hard and rolled homogeneous armor steel specifications. Formed thicknesses of interest in this project range from 2.5mm up to 50mm. Upon successful completion, the Army will use this technology to form underbelly protection panels, driver hatches, gunner protection kits, etc., with reduced part count, fewer joints, less weight, and improved protection. Application of this technology to form non-standard military vehicle panels is of immediate interest (example: form an armored steel Ford Ranger door outer skin).
PHASE I: Phase I is expected to identify and supply the correct metallurgical material properties/formulas required in the base purchased material such that the properties after processing/forming will be within 95% of the values specified in the MIL specifications (see references listed below). Materials of interest are rolled homogeneous armor (RHA) and high hard armor (Hi-hard) steel. Form tolerances of stamped components must be held within +/- 1.5mm. Material suppliers for the base material must be identified, and information including material cost, volume break points, and lead time to procure the base materials (which upon completion of the forming process will result in the specified properties for Hi-hard and RHA) is to be provided. Thicknesses to be addressed will vary up to 2" thick. Forming methods, computer simulation, testing parameters, and testing methods shall be developed. Testing parameters and methods (the test plan) will require TARDEC approval. Of interest in this project is production of 2 ft. square blanks, 4 ft. square "boat hull," and 4 ft. square "V"-shaped panel samples from both steel variants. A bend radius of 3 times metal thickness is required; 2 times metal thickness is desired. Upon completion of Phase I, a complete study including all details required to enter into Phase II will be provided to US Army TARDEC. PHASE II: Phase II is expected to produce, test, and verify the material properties of the 2 ft. square blanks, 4 ft. square "boat hull," and 4 ft. square "V"-shaped configurations identified in Phase I. Samples/testing will be required in both high hard steel and rolled homogeneous steel. TARDEC will be provided the test results as detailed in the test plan developed and approved in Phase I. Both manufacturing (draw/forming) and performance (finite element analysis, ballistic, and blast) computer simulations are to be verified for the four thicknesses specified and for both material types.
US Army TARDEC is to receive 10 samples each of the 2 ft. square blanks, 4 ft. square "boat hull," and 4 ft. square "V"-shaped configurations, in both material types. TRL (Technology Readiness Level): TRL 2 - Technology concept and/or application formulated. PHASE III: Phase III is expected to specify the plan to commercialize this technology for both defense and commercial applications. Manufacturing readiness assessments are to be provided, with specific interest in the maximum sizes and weights of panels which can be manufactured. The initial defense interest is in forming a full vehicle underbelly protection shield for the Ground Combat Vehicle, forming door outer panels for commercial light truck applications, and forming vehicle body structures for the Joint Light Tactical Vehicle. It is expected that the individual submitters will present plans for additional commercial usage and partnerships.
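The Phase I bend-radius criterion (3 times metal thickness required, 2 times desired) scales directly with plate thickness across the 2.5-50 mm range named in the topic. A minimal sketch of that arithmetic, with the intermediate thicknesses chosen as illustrative examples:

```python
# Sketch: minimum bend radii implied by the Phase I criterion of
# 3x metal thickness (required) and 2x (desired). Intermediate
# thicknesses are illustrative (12.7 mm = 0.5", 25.4 mm = 1.0").

def bend_radius_mm(thickness_mm: float, multiple: float) -> float:
    """Minimum bend radius as a multiple of plate thickness."""
    return thickness_mm * multiple

for t in (2.5, 12.7, 25.4, 50.0):      # mm, within the topic's range
    required = bend_radius_mm(t, 3.0)  # required criterion
    desired = bend_radius_mm(t, 2.0)   # desired criterion
    print(f"{t:5.1f} mm plate: required >= {required:5.1f} mm, "
          f"desired >= {desired:5.1f} mm")
```

At the 50 mm top of the range, even the required criterion implies a 150 mm minimum bend radius, which bounds how sharp a formed "V" or "boat hull" cross-section can be.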
OBJECTIVE: The objective of this project is to develop and demonstrate a modular, open-system-architecture system to inform an EW operator or system of the effectiveness of an electronic attack. DESCRIPTION: EW systems attempt to disrupt or degrade an adversary's electronic assets and serve as an invaluable force protection asset by denying the adversary access to their electronics. Many of the electronic attack (EA) techniques used are relatively brute force and often apply a greater amount of power than the minimum required, ensuring that the desired effect is realized. However, the operational tradeoff of using more power than necessary is that it precludes the availability of the excess power for another attack at the same time. In order to prevent an EA from using more power than necessary, a feedback mechanism is required to inform an operator or the EA system how effective the attack is. This technique is known as battle damage assessment (BDA). The output of an EW BDA is most effective in real time (seconds) or near-real-time (hours) to provide timely, actionable feedback. For this SBIR topic, one of the challenges is to identify techniques that can provide autonomous, real-time EW BDA feedback with as much fidelity as possible without significant processing requirements. A second challenge of this SBIR topic is to develop autonomous, near-real-time techniques that generate very high fidelity data in a matter of hours.
Desired features include:
- A technique to generate low-fidelity, real-time BDA reporting
- A technique to generate high-fidelity, near-real-time BDA reporting
- Extraction of the BDA information without physical contact with the enemy asset, from standoff ranges equal to, or greater than, the standoff used by the EA system
- Reporting of the EA lethality estimate along with a margin of uncertainty
- Extraction of the BDA information from as many as 10 targets at the same time
The current state of the art for EW BDA is extremely limited for EA systems that attack wireless communication devices. A broad body of knowledge exists for EW systems that attack conventional targets such as missiles and vehicles, for which there is a clear and discernible physical body with which to measure the effectiveness of an EA. As an example, techniques exist for measuring the change in flight trajectory or velocity of a missile. However, when attempting to perform BDA on a wireless system for which there is no physical medium to measure, there are few, if any, techniques that are widely documented and demonstrated to be effective. Therefore, this SBIR represents a significant opportunity to advance the state of the art in EW BDA specifically for wireless systems, which greatly enhances the capability of future Army EW systems such as the Multi-Function Electronic Warfare (MFEW) system under Project Raven Fire. PHASE I: Phase I will perform a feasibility study of the proposed approach to deal with common wireless threats. This study will document the viability, risks, and tradeoffs, and will also identify technical approaches to the proposed solutions. The approach which offers the best balance of technical risk and performance will be developed into a deliverable prototype capable of demonstration in a laboratory environment. It is envisioned that no more than one meeting will be held at Aberdeen Proving Ground, Maryland.
All other communications and meetings will be via telecon/VTC or held at the contractor facility. US Government contractors may be used to facilitate proposal evaluation, but they will not be technical evaluators. PHASE II: Hardware and software will be developed and produced in support of relevant field environment testing and evaluation on terrestrial (vehicle/dismount) platforms. The contractor will validate that the prototype(s) meet the performance objectives and will demonstrate the EW BDA capability in a field environment. A plan will be developed to detail the development, demonstration, maturation, and verification and validation (V&V) of these capabilities to assist in a transition to Phase III. The small business will deliver one prototype for each of the platforms: one for the vehicle-mounted and one for the dismount platform. PHASE III: Develop and implement a technology transition plan with PD Raven Fire. The transition should include further technology maturation with respect to Information Assurance, Environmental, and EMI/EMC considerations. Finalize the EW BDA hardware and software, and conduct qualification testing of the product with R&D prototype and actual EW/EA production hardware. This technology is applicable to the US Army and other DoD users that require EW/EA capability. Potential commercial applications include sales of the original system to local, county, state, or federal law enforcement. A second commercial application is the prediction of the effects of unwanted EMI on commercial systems in order to reduce EMI to an acceptable level and ensure proper operation of commercial electronic devices. The expected transition to Program of Record use would be in the Multi-Function Electronic Warfare and Defensive Electronic Attack PORs.
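The power-management problem the description raises, applying no more jamming power than necessary, is typically framed as a jam-to-signal (J/S) link budget. The sketch below computes a free-space J/S estimate of the kind a BDA-informed power loop might evaluate; all parameter values (EIRPs, ranges, frequency) are illustrative assumptions, not figures from the topic.

```python
# Sketch: free-space jam-to-signal (J/S) estimate. Values are
# illustrative; the topic does not specify a link budget.
import math

def fspl_db(range_m: float, freq_hz: float) -> float:
    """Free-space path loss: 20*log10(4*pi*d*f/c)."""
    c = 3e8
    return 20.0 * math.log10(4.0 * math.pi * range_m * freq_hz / c)

def j_over_s_db(jam_eirp_dbw: float, sig_eirp_dbw: float,
                jam_range_m: float, sig_range_m: float,
                freq_hz: float) -> float:
    """Jammer-to-signal power ratio (dB) at the target receiver."""
    jam_at_target = jam_eirp_dbw - fspl_db(jam_range_m, freq_hz)
    sig_at_target = sig_eirp_dbw - fspl_db(sig_range_m, freq_hz)
    return jam_at_target - sig_at_target

# Example: 20 dBW jammer at 10 km standoff vs. a 0 dBW link at 1 km, 2.4 GHz.
print(f"J/S = {j_over_s_db(20.0, 0.0, 10e3, 1e3, 2.4e9):.1f} dB")
```

A real-time BDA feed would let the EA system lower the jammer EIRP toward the minimum J/S that still produces the desired effect, freeing the excess power for a second simultaneous attack, which is exactly the tradeoff the description identifies.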
OBJECTIVE: The objective of this project is to develop and demonstrate Global Positioning System (GPS) pseudo-satellite antenna solutions that are capable of installation and operation on a US Army Tier II/III UAS and US Army ground vehicles. DESCRIPTION: While GPS is the most prevalent navigation method in use today, its weak satellite signal is vulnerable to both unintentional interference and deliberate jamming from an adversary. With so much of the Army's operations and infrastructure depending on GPS, means are urgently needed to assure the continued availability of GPS-based Position, Navigation, and Timing (PNT) capabilities. Traditionally this has been solved through neutralization of the jammer or through nulling or cancellation of the jammer signal. An alternative method is the reception, processing, and re-broadcast of the GPS signal by a pseudo-satellite, or pseudolite. The intent of this initiative is to develop antenna solutions to broadcast the GPS augmentation signal according to IS-ICD-250. On a Tier II/III UAS, this antenna will need to be installed as a secondary payload and will be used to broadcast the GPS augmentation signal. On terrestrial platforms, the antenna will need to transmit the GPS augmentation signal according to IS-ICD-250. It is intended that no more than two antenna solutions will be necessary: one for the air tier and one for the terrestrial platforms. The antennas must function with existing pseudolite transceiver hardware developed under other US Army programs, which will be used in later phases for test and demonstration. PHASE I: Phase I will perform a feasibility study of the antennas for the Tier II/III UAS and terrestrial platforms and their respective missions. This study will document the viability, risks, and tradeoffs, and will also identify technical approaches to the required antenna solutions.
The platform antenna of highest technical risk in meeting the key performance requirements will be developed into a deliverable prototype capable of demonstration in a laboratory environment. It is envisioned that no more than one meeting will be held at Aberdeen Proving Ground, Maryland. All other communications and meetings will be via telecon/VTC or held at the contractor facility. US Government contractors may be used to facilitate proposal evaluation, but they will not be technical evaluators. PHASE II: Antenna apertures, based on the technical approaches presented and the prototype developed within Phase I, will be developed and produced in support of relevant field environment testing and evaluation on terrestrial and aviation platforms. The contractor will validate that the antennas meet the performance objectives and will demonstrate the pseudolite capability in a field environment. A plan will be developed to detail the development, demonstration, maturation, and verification and validation (V&V) of these capabilities to assist in a transition to Phase III. The small business will deliver one prototype antenna for each of the platforms: one for the air platform and one for the terrestrial platform. PHASE III: Develop and implement a technology transition plan with Product Director Positioning, Navigation and Timing (PD PNT). The transition should include further technology maturation with respect to Information Assurance, Environmental, EMI/EMC, and Airworthiness considerations. Finalize the antenna hardware and software, and conduct qualification testing of the product with R&D prototype and actual pseudolite production hardware. This technology is applicable to the US Army and other DoD users that require Assured PNT capability. The envisioned production rate is 20 antennas per month, for a total of approximately 1,500 systems to support Army pseudolite requirements.
OBJECTIVE: With recent advancements in digital processing technology, there exists the capability to develop an all-digital radar. The purpose of this topic is to solicit research and development of an all-digital transmit/receive module and a radar back end capable of processing the resulting large data sets. This design should have the potential of growing into a final software and hardware design leading to a demonstration on actual hardware. DESCRIPTION: Conventional radar receivers are constructed with analog components. Only the demodulated baseband signal is converted (at the sub-array level) to digital format using a medium-speed analog-to-digital converter (ADC). Since analog components are sensitive to temperature, supply voltage, and semiconductor processing variations, the analog radar receiver is both performance limited and power hungry. With the development of digital processing technology, there are emerging trends toward digitization in radar receiver designs by applying direct intermediate frequency-to-digital conversion (IF sampling) and a direct digital synthesizer (DDS) at the transmit/receive (T/R) module. Digitization at the T/R module allows for much higher precision, lower noise, lower power, and better stability than analog counterparts. Moreover, it retains the extreme flexibility of digital techniques, such as direct digital modulation and waveform generation. In order to obtain high-resolution and high anti-jam features, modern radar systems need to employ more complex waveforms; however, with analog radar receivers it is difficult to generate and process arbitrary waveforms. The All-Digital Radar (ADR) is a revolutionary approach to the development of modern radars capable of supporting the required functions within DOD, as well as a variety of potential commercial applications. Digitization at the T/R module supports the concept of one design for all radars.
The ADR is scalable and provides total flexibility for needs that range from the smallest applications involving the protection of a small number of troops up to and including cruise missile defense and large space-based ballistic applications. The T/R module can be implemented in an integrated chip; hence it consumes much less power and is much lighter and more efficient than its conventional analog counterparts. The small size, light weight, and low power greatly increase its applicability to force protection and communication. Two issues have limited the development of ADR technology: processing the data from the ADR array (each T/R module has the data bandwidth of a conventional radar) and low-power ADCs. Solutions to these two issues are available with today's technology. The intent of this effort is to develop ADR hardware and a digital signal processing back end, and to perform analysis of improvements to functionality and cost for radar and communication system capabilities. Follow-on efforts will build and test a prototype ADR array. PHASE I: The offeror shall develop a design of an all-digital T/R module and the signal/data processing back end to aggregate, distribute, and process data from a T/R module array. The minimum bandwidth of the module shall be 1 GHz. The T/R module shall include a DDS and an ADC. The DDS shall be capable of generating frequency- and phase-modulated signals. The ADC shall be low power (<2 watts). PHASE II: The offeror shall develop a 4-module ADR array and demonstrate signal generation, data aggregation, distribution, and signal processing. The offeror shall provide complete designs of the T/R module along with the digital hardware and radar back-end architectures for a 64 T/R module array. PHASE III: The offeror shall build and demonstrate a 64-module ADR array and digital hardware back end. The most likely transition of ADR technology is to systems that require low energy consumption in performing their mission.
This would include UAS vehicles, which are a candidate for ADR T/R modules integrated into the airframes. Satellite-based radar and communications systems also require low-power-consumption systems. ADR technology's energy efficiency, design flexibility, waveform diversity, common back end, and cost should lead to its adoption in all future DoD radar designs. Commercial applications include the communications industry (telephone and video communications), as well as low-cost applications such as radars to track aircraft on the ground at airports and surveillance systems for commercial buildings.
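The DDS that Phase I requires in each T/R module is conventionally built around a phase accumulator: a fixed-point counter whose increment (the tuning word) sets the output frequency, and whose wrap-around mimics phase wrapping. The sketch below models that behavior; the 32-bit accumulator width and 2 GHz clock are illustrative assumptions, not requirements from the topic.

```python
# Sketch of a phase-accumulator direct digital synthesizer (DDS) of the
# kind the Phase I T/R module calls for. Word width and clock rate are
# assumed values.
import math

ACC_BITS = 32
CLOCK_HZ = 2.0e9                      # assumed DDS clock

def tuning_word(freq_hz: float) -> int:
    """Phase increment producing the requested output frequency."""
    return round(freq_hz * (1 << ACC_BITS) / CLOCK_HZ)

def dds_samples(freq_hz: float, phase_offset: float, n: int):
    """Generate n sine samples; phase_offset supports phase modulation."""
    acc = 0
    step = tuning_word(freq_hz)
    out = []
    for _ in range(n):
        phase = 2.0 * math.pi * acc / (1 << ACC_BITS) + phase_offset
        out.append(math.sin(phase))
        acc = (acc + step) & ((1 << ACC_BITS) - 1)  # wrap like hardware
    return out

# Frequency modulation is just re-loading the tuning word each segment;
# here a two-step "chirp" of 8 samples at 100 MHz then 8 at 200 MHz.
chirp = dds_samples(100e6, 0.0, 8) + dds_samples(200e6, 0.0, 8)
print(len(chirp))
```

This is what makes arbitrary waveforms cheap in an all-digital design: frequency and phase modulation reduce to updating the tuning word and phase offset, with no analog components to drift.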
OBJECTIVE: The objective of this topic is to investigate and develop concepts for the replacement, adaptation, or modification of the 'shotgun' connector on the Hellfire missile that result in reduced aerodynamic effects, including drag, on the missile in flight after launch. DESCRIPTION: The semi-active laser seeker guided Hellfire air-to-ground missile provides the most reliable precision targeting capability in the inventory today. Laser designation allows the military to accurately designate the specific target of interest among urban clutter, decoys, and other less threatening targets. Such precision targeting minimizes both collateral damage and the need for multiple strikes to ensure destruction of the target. This sought-after capability has resulted in the desire to increase the range of the Hellfire missile and to launch it from high-speed fixed-wing aircraft. The Hellfire missile is carried for use in a rail launcher. The so-called 'shotgun' connector on the top of the missile body plugs into a mating connector on the rail when the missile is mounted, and the missile receives its targeting information and pre-launch power through this connector. The connector has shown itself to be both rugged and reliable. However, analysis has shown that the missile-mounted connector has high aerodynamic drag and presents an asymmetric load to lateral airflows and buffeting. The 'shotgun' connector contains two (2) thirty-seven (37) pin umbilical connectors capable of transmitting 392 watts of 28 VDC power, with possible voltage transients of 18 to 38 VDC. Approximately twelve (12) pins are dedicated to power transmission, while the rest are used for signaling or are unused, depending on the Hellfire missile model. All pins/wires in the present connector and associated internal cable designs are rated for the same electrical loads, however. Unclassified drawings and internal views will be provided upon award of contract.
PHASE I: The goal of Phase I is to conduct a feasibility study to identify techniques for the replacement of, modification to, or adaptation of the Hellfire missile shotgun connector that result in a 90% reduction in aerodynamic effects, including drag, on the missile in flight after launch. The proposed techniques must accommodate the existing internal packaging of the missile and cannot require a permanent modification to the connector on the launcher, since the launcher must be able to launch missiles with and without the proposed modification. The proposed techniques must be able to meet the physical, environmental, and carriage requirements imposed on the Hellfire and its launcher by the present Hellfire missile specification. Phase I should result in recommendations for the most promising approaches to be pursued in Phase II. PHASE II: The goal of Phase II is to produce and demonstrate prototypes of one or more of the proposed techniques. Field testing is expected and can be facilitated and supported by the Government if laboratory testing demonstrates sufficient promise. PHASE III: Numerous military programs would benefit from the technology, including the Joint Air-to-Ground Missile, Hellfire, Griffin, and other rail-launched missile systems that require targeting information and power prior to launch. Limited commercial applications are anticipated for this technology.
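The electrical constraints stated in the description (392 W at a 28 VDC nominal, 18-38 VDC transient range, roughly 12 power pins) imply a per-pin current budget that any replacement connector must carry. A minimal sketch of that arithmetic, assuming uniform current sharing across the power pins:

```python
# Sketch of the umbilical power arithmetic from the topic: 392 W over
# ~12 power pins, with the 18-38 VDC transient range setting the
# worst-case current. Uniform current sharing is an assumption.

def current_a(power_w: float, voltage_v: float) -> float:
    """Total bus current at a given voltage."""
    return power_w / voltage_v

def per_pin_a(power_w: float, voltage_v: float, pins: int) -> float:
    """Current per power pin, assuming uniform sharing."""
    return current_a(power_w, voltage_v) / pins

for v in (38.0, 28.0, 18.0):     # high / nominal / low bus voltage
    print(f"{v:4.0f} V: total {current_a(392.0, v):5.2f} A, "
          f"{per_pin_a(392.0, v, 12):4.2f} A per pin")
```

The worst case occurs at the 18 VDC transient floor, where total current rises to roughly 22 A, so a lower-drag connector cannot simply shrink or drop power pins without revisiting the per-pin current rating.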
OBJECTIVE: Design and build an inexpensive and ruggedized infrared projection system that can be utilized to create accurate, real-time, dynamic thermal representations on target silhouettes or other media, based on training doctrine, within the various live and virtual training applications to enhance realism and feedback for the trainee. DESCRIPTION: A dynamic infrared projection system technology would support the creation of a high-fidelity, real-time thermal representation within the Live and/or Virtual training domains. The technology would provide an accurate and realistic thermal portrayal of battlefield entities, vehicles, threats, and terrain conditions, to include posture-based, time-based, and event-based modifications to the thermal representations. This technology development will advance thermal representations to be consistent with the thermal imaging technology used by soldiers and tactical platforms. Current solutions are heating pads adhered to target silhouettes. The shapes are not accurate, get damaged during live fire engagements, and create thermal bleeding (non-realism). The shapes are static with respect to time and do not allow for changes in thermal representations over time, movement, or posture changes. Based on user feedback, thermal pads provide an unrealistic and limited training experience to soldiers. The proposed technology would remove the thermal generation from the line of fire and would support the CGI development of highly accurate (ROC-V/CID) thermal representations which would be time/movement/action based. The IR projector would also provide positive threat-fire indications (vice a delayed pyrotechnic solution) and vehicle kill indications (vice simple target silhouette lowering). The IR projector could be coupled with a simple screen, non-contact hit sensors, and a SAF model to project multiple targets/threats simultaneously.
The potential use cases for the technology would:
- Allow the thermal representation to change hull or turret orientation toward the training participant based on the training scenario or trainee action
- Allow the thermal representation to provide a status indication of killed or damaged via changes to the image
- Allow the thermal representation to provide a visual muzzle flash and an instantly hot turret/barrel
- Allow the thermal representation to provide an escalation of force from the target (e.g., a hostile person rising from the bed of a pick-up armed with an RPG)
The proposed technology would support enhanced realism in the training domains and provide a means to link the virtual and constructive domains into the live domain. The dynamic infrared projection system technology would have to support operational conditions including open-air environments, daylight, nighttime, rain, and snow, and could not rely on cooling systems. The dynamic IR projection system would require high reliability and would have to require minimal maintenance actions. PHASE I: Study, research, and conduct initial integration and design concepts of core technology components for the dynamic infrared projection system. Synchronization of work being completed by RDECOM, PM ITTS, PEO STRI, and academia will be required. Determine the feasibility of adapting DSP/DLP technology to support IR projection, development of supporting lens technology (low cost, durable), and a ruggedized platform to support performance in open-air environments (without cooling systems). Determine the feasibility of a mixed-mode projection system to support simultaneous visual and infrared images. PHASE II: Refine the design and continue technology investigation and integration into a prototype baseline, and implement basic modeling methods, algorithms, and interfaces between the control system and the projection system.
Investigate integration with OneSAF and virtual training systems to provide a projection of the constructive or virtual entities. Create algorithms to project multi-layer scenes onto a single medium. Focus on environmental stability and reliability enhancements. Prototype the basic model on a live fire range. Demonstration will be at TRL 6. PHASE III: Military application: Transition the technology to the Army Program of Record called Future Army System of Integrated Targets. The technology would be viable for both digital and non-digital ranges, urban operations, and other live training ranges where the injection of realistic IR imaging is required for training feedback. Develop modes of operation where the dynamic IR outputs serve as inputs into other training or testing systems. Commercial application: Test beds for commercial sensors with applications in infrared vision, monitoring, and imaging systems.
OBJECTIVE: Develop a low-cost sensor that can accurately measure the angular rate and position of a weapon system in all Six Degrees Of Freedom (6DOF), that is ultra low power and capable of determining absolute heading, and that requires no initial or sustainment calibration, for the purposes of Live and Virtual Non Line-of-Sight (NLOS) and Direct Fire tactical engagement simulation. DESCRIPTION: Many weapons systems and future concepts cannot be simulated using current line-of-sight laser-based systems, e.g. the Multiple Integrated Laser Engagement System (MILES), in Live training exercises. Additionally, the Army has identified a Virtual small arms weapon training capability gap, using actual weapons in lieu of simulated weapons. There are many existing sensors on the commercial market that meet some of the Army's requirements, but not all of them. Typically, Commercial Off The Shelf (COTS) sensor modules use both gyros and magnetometers to measure the pointing vector of the weapon system; however, these sensors have many error sources that substantially reduce the reliability and accuracy of the engagement, which results in unrealistic or negative training. Current Micro-Electro-Mechanical Systems (MEMS) technology has provided system-on-chip capabilities for measuring 6 DOF orientation; however, due to poor signal-to-noise ratios and their sensitivity to temperature changes, accuracy is sacrificed and has proven insufficient to meet the Army's technology gap. Current high-end, tactical grade Inertial Measurement Units (IMUs) provide the needed accuracy and environmental robustness, but remain unsuitable due to extraordinarily large size, power consumption, and unit cost. We are seeking an innovative approach to precisely measure 6 DOF orientation in a low-cost and low-power form factor. It must be capable of measuring absolute heading (geodetic north) with an accuracy of 3 mils.
The approach must be capable of measuring orientation in all environmental conditions where soldiers can operate. The device must operate while undergoing a slew rate of 60 degrees per second (threshold metric), with an objective of 300 degrees per second. Additionally, the sensor and its associated processing electronics shall be enclosed in a package no greater than 1 inch wide by 1 inch high by 4 inches long, assuming that power will be provided externally to the sensor but is very scarce. The sensor should require no initial or maintenance calibration. We are also seeking a low-cost solution, at a production cost of less than $2,000 per unit. PHASE I: Validate the viability of the technical approach through simulation or a mathematical model. PHASE II: Develop an initial breadboard prototype (TRL 4 - component and/or breadboard validation in a laboratory environment) and then an advanced prototype (TRL 6 - system/subsystem model or prototype demonstration in a relevant environment), collaborating with the transition customer on requirements, design review, and prototype test and evaluation. The system must be capable of being mounted on actual weapon systems and used in the Live/Virtual training environments. PHASE III: Likely military applications are simulated tactical engagement training; UAV (Unmanned Aerial Vehicle) flight control or ground-based unmanned vehicle navigation and flight control; and far-target laser designators. A commercial application would be light aircraft navigation and flight control. Likely transition opportunities in the test and training domains under PEO STRI are the One Tactical Engagement Simulation System (OneTESS), the Dismounted Soldier Training System (DSTS), the Call For Fire Trainer (CFFT), and the Engagement Skills Trainer (EST).
In the operational domain, a likely transition opportunity exists with PEO Soldier PM Soldier Systems & Lasers and their Laser Target Locator Modules (LTLM), which incorporate a digital magnetic compass (DMC).
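The 3 mil heading requirement above can be made concrete. The sketch below (illustrative only, not part of the solicitation) converts NATO angular mils to degrees and computes a magnetic heading from horizontal magnetometer components; the function names and the level-platform axis convention are assumptions.

```python
import math

MILS_PER_REV = 6400  # NATO convention: 6400 angular mils per full circle

def mils_to_degrees(mils):
    """Convert NATO angular mils to degrees (3 mils is about 0.17 deg)."""
    return mils * 360.0 / MILS_PER_REV

def heading_from_mag(mx, my):
    """Heading in degrees from horizontal magnetometer components,
    assuming a level platform with the x-axis toward magnetic north.
    Real systems must also tilt-compensate and correct for declination
    to report geodetic (true) north."""
    return math.atan2(my, mx) % (2 * math.pi) * 180.0 / math.pi

# The 3 mil accuracy requirement is roughly 0.17 degrees:
tolerance_deg = mils_to_degrees(3)
```

Framed this way, it is clear why the COTS error sources cited above (hard/soft-iron offsets, temperature drift, gyro bias) quickly consume a sub-0.2 degree heading budget.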
OBJECTIVE: The objective of this effort is to develop very compact explosive driven ferroelectric generators capable of producing more than 500 kV at the input terminals of a variety of loads. DESCRIPTION: As we develop new munitions with different types of payloads, there is an ever increasing requirement for smaller, lighter, and cheaper electrical power sources for use in a variety of munitions ranging in size from 25 mm (1 inch) diameter to 18 cm (7 inches) diameter. In the case of smaller munitions, the number of available useful power supplies is limited. One type of pulsed power source that can meet these constraints is explosive pulsed power. The field of Explosive Pulsed Power (EPP) [1-4] was established in the early 1950s. These power supplies either convert the chemical energy stored in explosives into electrical energy or use the shock waves generated by explosives to release the energy stored in materials such as ferroelectrics or ferromagnetics. Explosive pulsed power generators are currently under investigation by several Department of Defense laboratories as power supplies for new classes of warheads and munitions. Of particular interest is the Ferroelectric Generator (FEG), since it is one of the few power supplies capable of generating the very high voltages required to drive high power microwave tubes in the payload volume available on current small munitions. The potential Achilles' heel for FEGs is that they are very low energy devices. For them to be useful, one needs to be able to use several FEGs to power a single circuit. This means that one must use a single switch for the oscillator, as opposed to dielectric breakdown switching. Thus, we need either two or three times the energy from a single FEG, or a switch that requires exceedingly little energy to operate while still being able to control up to 500 kV. If either or both of these needs cannot be met, then the FEG will probably not prove useful for these applications. 
Thus, the objectives of this effort are to investigate those mechanical or electrical processes that can be modified to increase the output of FEGs from the state-of-the-art value of 100 kV to 500 kV and to ascertain their capabilities to drive payloads such as High Power Microwave (HPM) sources. This would include investigating the influence of such fundamental processes as shock dynamics, the electrical, mechanical, and chemical properties of the ferroelectric and potting materials used, methods for controlling electrical breakdown, power conditioning techniques (switches, transformers, etc.), load characteristics, and so on when driving one or more types of HPM loads. Since the load impacts the operation of the FEG, it is important that tests be done with the load. The desired goal is to deliver 500 kV pulses to various HPM loads including orbitrons, magnetrons, Virtual Cathode Oscillators (VIRCATORs), and/or Magnetically Insulated Line Oscillators (MILOs) with volumes as small as technically feasible. PHASE I: The goal of Phase I is to identify those mechanical, electrical, and/or chemical characteristics of the generator that could be modified in order to improve the performance of FEGs designed to drive HPM payloads. This will include proof-of-principle experiments to verify that the correct parameters to be modified to meet the objectives of this Topic have been identified. Since the type of explosives used impacts the operation of the FEG, explosive tests need to be done. This necessitates that the proposing firm have access to approved explosive test facilities. PHASE II: The objective of Phase II is to finalize the design of the FEG and demonstrate that it can deliver 500 kV to various high power microwave sources. The proposing firm must also address any power conditioning and integration issues. 
In addition, the proposing firm should address any manufacturing issues that would impact the production of these power supplies. PHASE III: These FEGs would be used in pulsed technologies that are applicable to multiple military and commercial applications requiring pulsed power. These include portable water purification units, portable nondestructive testing systems, portable lightning simulators, expendable X-ray sources, burst communications and telemetry, and oil and mineral exploration. Since several government labs and prime contractors are developing advanced munitions, the contractor will need to develop a business plan for working with these agencies and companies.
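The low-energy nature of FEGs noted above can be bounded with a simple estimate: the energy required just to charge a capacitive load to 500 kV. The sketch below is illustrative only; the 100 pF load value is a hypothetical assumption, not a figure from the solicitation.

```python
def charge_energy_joules(capacitance_f, voltage_v):
    """Energy stored in a capacitive load charged to a given voltage:
    E = (1/2) C V^2."""
    return 0.5 * capacitance_f * voltage_v ** 2

# A hypothetical 100 pF load at 500 kV requires 12.5 J delivered to the
# load alone, before switching and conditioning losses are counted.
energy_j = charge_energy_joules(100e-12, 500e3)
```

Estimates of this kind make clear why combining several FEGs, or a switch that consumes exceedingly little energy, is essential to the concept.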
OBJECTIVE: Design, develop and demonstrate innovative secure software architectures for mobile computing platforms that mitigate security threats within a space communications network, enabling tactical missions to be executed more effectively and securely. DESCRIPTION: As DoD has become increasingly dependent on the use of mobile computing platforms to conduct mission operations, the need for improving security in an environment that includes commercial mobile devices has grown. Key device-level security issues facing mobile computing platforms include network authentication, data protection, malware defense and mobile ad-hoc networking. The focus and priority of this topic is innovative software architectures for mobile ad-hoc networking that ensure secure communications in a complex space network environment. Mobile devices include, but are not limited to, smart phones and tablets that have a unique combination of computing power, mobile applications, and access to network data. Current software architectures provide insufficient protection for sensitive DoD systems utilizing mobile computing platforms. The desire for self-organizing, self-forming, scalable, multi-hop mobile ad hoc networks poses significant security challenges due to their wireless and distributed nature. These characteristics, along with frequent network reconfiguration, make such networks more vulnerable to intrusions and misbehaviors than wired networks. Additionally, applications and operating systems installed on mobile devices are susceptible to malware or spyware, or may perform unexpected functions such as tracking user actions or sending private information to outsiders. Malicious activities could disrupt Army networks and compromise sensitive information. Finally, current mobile devices and software architectures are limited in their processing capability for executing complex encryption algorithms or data-intensive mission computations. 
Preliminary research assessments highlight the availability of next generation device/component technologies and outline novel architecture designs with the potential to significantly improve network security. Of particular interest are Android-based platform solutions with multiple processors. Secure software architectures are being sought that fully utilize multiple processors within these devices and across a mobile ad hoc network to increase security, robustness, and computation capability. New innovative solutions are required in the form of secure software architectures for mobile ad-hoc networks that protect applications and data within a complex space network environment from being exploited or exfiltrated by advanced threats. Solutions should target one or more of the issues defined here and should be scalable across a network of mobile devices. PHASE I: Research and develop novel secure software architectures for mitigating security threats within a mobile ad-hoc network. Provide a proof-of-concept design and prototype demonstrating the feasibility of the concept. Verify the Technology Readiness Level (TRL) at the conclusion of Phase I. PHASE II: Based on the verified successful results of Phase I, refine and extend the proof-of-concept design into a fully functioning pre-production prototype. Verify the TRL at the conclusion of Phase II. PHASE III: Develop the prototype into a comprehensive solution that could be used in a broad range of military and civilian mobile device network applications where increased security is required. This demonstrated capability will benefit and have transition potential to Department of Defense (DoD) weapons and support systems; federal, state and local organizations; and commercial entities.
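One building block of the secure ad-hoc networking sought above is per-message authentication, so that tampered or spoofed frames are rejected at each hop. The sketch below is a minimal illustration assuming a pre-shared 32-byte key (a real architecture would derive per-link keys via a key-agreement protocol); it uses HMAC-SHA256 from the Python standard library.

```python
import hashlib
import hmac
import os

TAG_LEN = 32  # bytes in an HMAC-SHA256 tag

def sign(key, message):
    """Append an HMAC-SHA256 tag so peers can detect tampered frames."""
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message + tag

def verify(key, frame):
    """Constant-time check of the trailing tag; returns the payload,
    or None if the frame fails authentication."""
    message, tag = frame[:-TAG_LEN], frame[-TAG_LEN:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return message if hmac.compare_digest(tag, expected) else None

key = os.urandom(32)  # placeholder; real keys come from key agreement
frame = sign(key, b"position report")
assert verify(key, frame) == b"position report"
```

`hmac.compare_digest` is used rather than `==` to avoid timing side channels, one example of the device-level hardening the topic calls for.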
OBJECTIVE: Develop bipolar lead acid batteries that provide lighter weight and lower volume for military vehicle applications. DESCRIPTION: Lead acid batteries are used in nearly all military ground vehicles. Often, the size and weight of these batteries become prohibitive to meeting vehicle architecture and performance requirements. Bipolar lead acid technology offers a significant reduction in size and weight through the elimination of up to 50% of the inert lead grids. Traditional lead acid military batteries utilize a mono-polar construction, in which each cell consists of two plates (positive and negative) and each plate consists of a heavy lead metal grid pasted with active material. These plates are paired up to make a cell, and these cells are connected in series with metal connectors. In a bipolar design, each plate has positive active material on one side and negative active material on the other. Cells are created by stacking bipolar plates together so that the negative of one plate is paired with the positive of another plate, with each cell separated by the bipolar plate material. This construction alone offers an almost 50% reduction in inactive plate material. Additionally, the elimination of the metallic connectors between cells further reduces the weight and decreases the battery's internal resistance. Further improvements to the bipolar design can be accomplished through the investigation of alternate plate materials and appropriate sealing techniques. An optimal bipolar plate material would be lightweight, inexpensive, and corrosion resistant. An appropriate sealing technique would prevent electrolyte from crossing between cells. Once an optimized bipolar lead acid battery design is established, the intended application would be a military vehicle starter battery. Current military lead acid batteries have a specific energy of around 40 Wh/kg, an energy density of 100 Wh/L, and are capable of at least 120 deep discharge cycles. 
Some common military vehicle batteries are the 6T, 4HN, and 2HN, as well as some commercial form factors (Group 31, Group 75/86, Group 78, etc.). A successful battery design would demonstrate improvements in specific energy (60 Wh/kg), energy density (150 Wh/L) and deep discharge cycle life (300 cycles). Additionally, the ideal final design would fit a standard military form factor and meet or exceed the weight, capacity, cold cranking amps, life cycle, and battery resistance requirements for that standard. PHASE I: A successful Phase I would result in the development of bipolar lead acid cells that demonstrate 70 Wh/kg specific energy, 200 Wh/L energy density, and 50 deep discharge cycles at the cell level. Deliverables would include at least 5 bipolar lead acid cells for laboratory testing. PHASE II: A successful Phase II would scale up or otherwise optimize the cells developed in Phase I to fit a specific military form factor. These cells would be assembled into a multi-cell string or battery module that demonstrates the feasibility of reaching the end goal of 60 Wh/kg specific energy, 150 Wh/L energy density, and 300 deep discharge cycles. Deliverables would include at least 3 bipolar lead acid strings/modules for laboratory testing. PHASE III: A successful Phase III would result in the development of a bipolar lead acid battery that adheres to a standard military form factor and meets or exceeds the military specifications for weight, capacity, CCA, internal resistance, and cycle life. This battery should demonstrate 60 Wh/kg specific energy, 150 Wh/L energy density, and 300 deep discharge cycles. Potential military form factors include the 6T, 2HN, and 4HN as described in the military specifications MIL-PRF-32143B and MIL-B-11188H. Potential commercial applications include any commercial lead acid starter battery form factor, such as Group 31, Group 75/86, Group 78, etc., as defined by the Battery Council International. Deliverables would include at least 2 batteries for laboratory testing.
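The gravimetric and volumetric targets above are straightforward to check from a candidate design's capacity, nominal voltage, mass, and volume. A minimal sketch (the 12 V, 120 Ah module numbers are hypothetical, not from the solicitation):

```python
def wh_per_kg(capacity_ah, nominal_v, mass_kg):
    """Gravimetric figure of merit (Wh/kg): stored energy over mass."""
    return capacity_ah * nominal_v / mass_kg

def wh_per_l(capacity_ah, nominal_v, volume_l):
    """Volumetric figure of merit (Wh/L): stored energy over volume."""
    return capacity_ah * nominal_v / volume_l

# A hypothetical 12 V, 120 Ah module stores 1440 Wh, so it would need
# to weigh no more than 24 kg and occupy no more than 9.6 L to meet
# the 60 Wh/kg and 150 Wh/L end goals.
```

A design screen like this makes the bipolar weight argument concrete: removing roughly half the inert grid mass moves a conventional ~40 Wh/kg design most of the way to the 60 Wh/kg goal.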
OBJECTIVE: A rapid modeling algorithm is required to predict convective heat transfer for military ground vehicle thermal and infrared (IR) signature analyses. The convection algorithm should model flow details at the levels of complexity and accuracy needed for convective heat transfer predictions and thermal signature evaluation. The algorithm must be capable of accurately modeling natural wind, including its turbulent intensity. DESCRIPTION: Military vehicle survivability assessments require thermal modeling in order to achieve performance goals related to the control of thermal infrared (IR) signatures. Thermal management is the cornerstone of efficient IR signature control. A key part of the thermal analysis of ground vehicles is the prediction of the convection heat transfer caused by wind- and fan-driven flows, and by natural convection. Existing convection models are either CFD-based or they model convection using simple formulas. CFD solutions typically involve lengthy set-up times due to meshing requirements, require a high level of user expertise, and can have computational solution times that restrict or prohibit their use in rapid design cycles or multiple-condition signature analyses. Simple convection formulas are not an acceptable solution since they can fail to capture important flow details such as windward flow acceleration, wake regions, exhaust flow impingement, etc. What is needed is a convection prediction process that can model the flow details at a coarse level sufficient for convective heat transfer prediction, and that has none of the user burden usually associated with traditional CFD use. PHASE I: Develop and demonstrate an algorithm to predict the heat transfer due to wind-based convection on a simple object. Develop a plan for an advanced algorithm that can be integrated into an existing military ground vehicle modeling process. 
The plan must address how the convection algorithm will use the geometry description and property assignments currently made during the setup of the thermal model for vehicles. The plan must describe the expected changes in the inputs, operation, and computational speed of the thermal and signature modeling processes that will be caused by the integration of the convection algorithm. PHASE II: Develop a convection algorithm capable of modeling both exterior and interior convection flows, and demonstrate the algorithm by integrating it into an existing thermal and IR signature prediction model for a military ground vehicle. The demonstration must include an interior flow analysis of an engine soak condition, a wind-driven exterior flow analysis of a stationary vehicle, an exterior flow analysis of a moving vehicle, an advection analysis of the underbody of a vehicle showing the transfer of heat from hot components to downstream components, impingement of an exhaust plume flow, and the channeling of the flow due to wheels and other underbody flow obstructions. To demonstrate this, multiple vehicle models may be used. The convection algorithm must be validated by comparing predicted results against measured data or data obtained from CFD simulations, and by comparing LWIR signature predictions against measured sensor imagery. The rapid nature of the set-up and solution time of the algorithm as integrated into the thermal and signature modeling process must also be demonstrated. PHASE III: A rapid and low user-burden approach to accurate convection modeling can be transitioned to the thermal design of commercial automotive vehicles as well as to architecture, aerospace vehicles, HVAC, geothermal energy, electronics casings, and structures and electronics that will be subjected to extreme weather conditions.
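For context, the simple convection formulas that the description says are insufficient on their own look like the following: Newton's law of cooling paired with an average turbulent flat-plate correlation. This is a sketch, not the sought algorithm; the air properties near 300 K used as defaults are assumed typical values.

```python
def convective_flux(h, t_surface, t_air):
    """Newton's law of cooling: q'' = h * (Ts - Tinf), in W/m^2."""
    return h * (t_surface - t_air)

def flat_plate_h(wind_mps, length_m, k=0.026, nu=1.6e-5, pr=0.71):
    """Average convection coefficient for turbulent flow over a flat
    plate, Nu = 0.037 Re^0.8 Pr^(1/3). Defaults k (W/m-K), nu (m^2/s)
    and Pr are assumed air properties near 300 K."""
    re = wind_mps * length_m / nu
    nu_avg = 0.037 * re ** 0.8 * pr ** (1.0 / 3.0)
    return nu_avg * k / length_m

# A 5 m/s wind over a 3 m panel gives h on the order of 15-20 W/m^2-K.
h = flat_plate_h(5.0, 3.0)
```

Such correlations assume an idealized uniform boundary layer, which is exactly why they miss the windward acceleration, wake, and plume-impingement effects the topic requires the new algorithm to capture.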
OBJECTIVE: Develop reliability-based design optimization (RBDO) techniques and a software package for simulation-based design to improve military ground vehicle systems and subsystems. These techniques are intended to be used for new vehicle designs and for changes to existing designs. The techniques should be able to determine the effect on vehicle life that a particular change would cause. In addition, the techniques are expected to go beyond durability into other areas of vehicle design assessment, such as mobility and survivability. DESCRIPTION: This SBIR project will define, determine, and develop innovative numerical methods and computational tools for assessing uncertainty, risk and tradeoffs in vehicle designs, and will implement reliability-based design optimization (RBDO) of ground vehicle systems and subsystems for diverse physical applications including survivability, durability, mobility, fuel efficiency, etc. To extend RBDO, which traditionally has focused solely on durability, to broader applications such as survivability and mobility, it will be necessary to develop both sensitivity-based and sampling-based RBDO methods. Because the focus of this work is the optimization methodology itself, and not the solvers for the different disciplines, the expected RBDO solution is one that will allow the user to modularly plug in their own solver (referred to here as a "black box") for the discipline being studied. We will focus initially on a crash safety/survivability problem. RBDO problems involve large computations but exhibit a strongly parallel nature, so the proposed RBDO methods need to be mapped to a multiple-core High Performance Computing (HPC) environment to increase computational efficiency. For input random variables such as material strength parameters or duty cycle roughness, various input marginal distributions and correlated variables need to be supported. Also, both random and interval variables need to be supported. 
Methods for modeling input Probability Density Functions (PDFs) and Cumulative Distribution Functions (CDFs) from experimental data are necessary for both marginal and joint distributions. These new techniques should be able to handle interval distributions as well as the more traditional probability distributions. For sampling-based RBDO, the accuracy of the proposed surrogate model must be demonstrated. Since it is very expensive to run tests or full physics-based simulations of vehicle systems and subsystems, it is important to minimize the number of samples needed by the RBDO solution process, so an efficient sequential sampling strategy should be implemented. Also, user-generated surrogate models need to be supported for reliability analysis and RBDO. For sensitivity-based RBDO, in addition to the first-order reliability method (FORM), a higher-order method (similar to the second-order reliability method (SORM)) needs to be developed for highly nonlinear RBDO problems. The proposed RBDO code needs to be easy to integrate with available commercial/non-commercial M & S codes via interfaces for broader multidisciplinary applications. For problems with a large number of potential design variables, to effectively use surrogate models, a variable screening method should be developed that mathematically determines the effect of variables on the output variances and automatically selects the variables that have the most significant effect on the output. Lack of input model information and surrogate model uncertainty should be considered to assure confidence in the reliability of optimized designs. For prediction and evaluation of output distributions, a multi-dimensional visualization capability would be desirable, to allow human users to appreciate the variability of the problem and its solution. 
PHASE I: The contractor shall research, design, and develop a reliability-based design optimization method and software package for multidisciplinary ground vehicle applications under input uncertainty. The contractor shall demonstrate integration of the RBDO code with commercial/non-commercial codes. The design methodology shall have the ability to be mapped onto multiple processors, using standard parallel techniques, to speed the optimization process. The contractor shall show a plan for how to integrate the RBDO solver with various "black box" physics-based simulations in areas such as survivability, mobility and fuel efficiency. The key focus for this stage will be crash safety/survivability optimization, using a "black box" for the objective and constraint functions in the optimization. The contractor shall discuss with the contracting officer's representative (COR) a case study to be worked in Phase II. Feasibility of key capabilities - independent and correlated input variables, both sensitivity-based and sampling-based RBDO methods, random and interval variables, accuracy of surrogate models, and an efficient sequential sampling strategy - will be evaluated to help determine transition to Phase II. In addition, the contractor is expected to provide an assessment of the scalability of the algorithms to larger problems and more processors. PHASE II: The contractor shall extend the research and development of the robust optimization methodology from Phase I into a working, user-friendly software package. Tests of the various necessary capabilities shall be conducted to demonstrate the accuracy, robustness, and performance of the methodology in a variety of conditions. The contractor shall show successful integration with "black box" simulations in crash safety/survivability, an area not normally accommodated by RBDO techniques. 
In addition to survivability, it is desired that both mobility and fuel efficiency can be handled, but to keep the scope manageable, only the survivability portion will be demonstrated in this phase. The contractor shall perform a case study as agreed to with the COR before the start of this phase. By the end of Phase II, the software package must be ready to progress to full commercialization in Phase III. To improve the chances for successful commercialization, the user interface will be critical at this stage, and it is expected that a significant portion of the investment will go toward it. PHASE III DUAL USE APPLICATIONS: The RBDO design methodology and software package described above can be used in a broad range of military and civilian applications. For potential military applications, the RBDO techniques and software package can be used by the US Army, USMC, Air Force, and Navy, as well as vehicle OEMs and DoD suppliers, to analyze performance and optimize the reliability of vehicle components and systems. Civilian applications should also be promoted via commercialization; potential exists for use in the civilian automotive industry and for other applications. A good user interface and demonstrated integration with a variety of "black box" simulations are critical metrics for success in the marketplace.
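At its core, the sampling-based RBDO approach discussed above estimates a probability of failure over the input distribution, calling the discipline solver as a black box. The sketch below is a deliberately small illustration, not the proposed methodology: the limit state and sampler are toy stand-ins for a physics-based simulation, chosen so the exact answer is known.

```python
import random

def failure_probability(limit_state, sampler, n=100_000, seed=0):
    """Monte Carlo estimate of P_f = P(g(X) <= 0), where limit_state is
    the black-box g and sampler draws one random input realization."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if limit_state(sampler(rng)) <= 0)
    return failures / n

# Toy case: strength R ~ Normal(10, 1) versus a fixed load S = 7, with
# g = R - S.  Analytically, P_f = Phi((7 - 10) / 1), about 0.00135.
pf = failure_probability(lambda r: r - 7.0,
                         lambda rng: rng.gauss(10.0, 1.0))
```

The example also shows why the topic stresses sample efficiency and surrogates: each of the 100,000 evaluations here is trivial, but a crash simulation in their place would make plain Monte Carlo prohibitively expensive.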
OBJECTIVE: Develop a highly sulfur-tolerant fuel reformer for JP-8 and highway diesel fuel, for use with a sulfur-tolerant Solid Oxide Fuel Cell (SOFC) in a compact, 20-kilowatt (kW) net output system. DESCRIPTION: In order for fuel cell based Auxiliary Power Units (APUs) to meet the increasing power demands within the limited space claims available on combat and tactical vehicles, further development of sulfur-tolerant components is essential. TARDEC has developed and will be testing 7-10 kW fuel cell based APUs that operate on JP-8 fuel, the U.S. military's single battlefield fuel. These APUs allocate up to 30% of their volume to hardware that removes sulfur-bearing molecules from JP-8 before it is reacted in the fuel reformer. Without desulfurized JP-8, the reformer and the fuel cell downstream would be poisoned. A sulfur-tolerant reformer would eliminate the need for JP-8 desulfurization, but would convert sulfur in the JP-8 into hydrogen sulfide (H2S) gas, a fuel cell poison, in the reformate (fuel gas). To deal with the H2S, it can be absorbed with a compact, effective zinc oxide filter, a sulfur-tolerant fuel cell can be used, or a combination of the two approaches can be applied. This topic seeks to develop a sulfur-tolerant reformer capable of supporting a 20 kW APU. Developing a reformer that does not require sulfur to be removed from the fuel is one of two essential pieces needed in a sulfur-tolerant APU design, the other being the fuel cell stack. TARDEC is currently developing a sulfur-tolerant Solid Oxide Fuel Cell stack through an Army SBIR that has successfully completed Phase 1. Aligning the sulfur-tolerant reformer program to integrate with the sulfur-tolerant stack program has the potential to deliver a high power density fuel cell APU system that is quiet, efficient and has reduced system complexity compared to other fuel cell APU systems and especially conventional engine based APUs. 
This technology has support from both the Office of Naval Research (ONR) and the Air Force Research Laboratory (AFRL), making it a multi-service program. PHASE I: The contractor shall design a reformer to provide sufficient flow of reformate gas for a 20 kW SOFC, assumed to operate at 40% efficiency at rated power. The reformate gas delivered to the fuel cell must have a sulfur content not exceeding 50 parts per million by volume (ppmv) when processing JP-8 with a sulfur content of 3000 parts per million by weight (ppmw), and it is assumed that gas-phase sulfur removal will be used to limit sulfur levels in the reformate. Further, the reformate must contain less than 40 ppmv of two-carbon or higher-carbon reforming by-products to deter coke formation in the fuel cell system. PHASE II: The contractor shall build the reformer designed in Phase I, with automatic controls, and operate it for 2000 hours to demonstrate the capability to operate on ultra-low sulfur diesel (ULSD) fuel and on JP-8 at typical sulfur levels (about 600 ppm) and maximum sulfur levels while delivering the required reformate quality. The contractor shall demonstrate through packaging studies how the reformer would be packaged with a sulfur-resistant SOFC. PHASE III: Military uses of the design are the Abrams tank ECP 2 and other future combat vehicles, power for medium Unmanned Ground Vehicles (UGVs), and power for special operations all-terrain vehicles. The intended commercial applications are as a power source for refrigerated semi-trailers and as a quiet substitute for mobile diesel generators. The core sulfur-resistant reforming technology can be adapted to SOFC power systems ranging from a few kilowatts to hundreds of kilowatts, with diverse applications in aircraft APUs, recreational vehicles, and marine craft and ships that variously use jet fuels, commercial highway diesel fuel, compressed natural gas and propane.
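The Phase I sizing above implies a fuel throughput that is easy to bound: 20 kW net at 40% efficiency means roughly 50 kW of fuel energy input. A back-of-envelope sketch (the JP-8 lower heating value of 43.2 MJ/kg is an assumed typical figure, not from the solicitation):

```python
def fuel_flow_kg_per_h(net_power_kw, efficiency, lhv_mj_per_kg=43.2):
    """Fuel mass flow implied by net electrical output and overall
    system efficiency, using an assumed lower heating value (LHV)."""
    fuel_power_kw = net_power_kw / efficiency        # ~50 kW of fuel in
    return fuel_power_kw / (lhv_mj_per_kg * 1000) * 3600.0

# Roughly 4.2 kg/h of JP-8 for a 20 kW SOFC at 40% efficiency.
flow_kg_h = fuel_flow_kg_per_h(20, 0.40)
```

This is the flow the reformer must process continuously, including at the 3000 ppmw worst-case sulfur level, for the full 2000 hour Phase II demonstration.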
OBJECTIVE: Develop an in-vehicle software system, for an electronically-controlled diesel engine fuel system, that adaptively reduces inter-cylinder variability in output in real time, resulting in improved engine output, reliability, and fuel economy. DESCRIPTION: All multi-cylinder internal combustion engines exhibit imbalance in power output between cylinders. This is due to necessary hardware design compromises as well as manufacturing variances in engine components. An example of the former is the difference in the length and shape of intake manifold runners, which can lead to inter-cylinder variation in the mass of air delivered, as well as differences in mixture motion. An example of the latter is manufacturing tolerance differences in individual fuel injectors, leading to variation in the mass of fuel delivered to each cylinder. The variation in output between cylinders results in uneven acceleration of the crankshaft during each rotation. This causes vibration which can lead to accelerated wear and potential early failure of engine components. In addition, the secondary effects of variation in the combustion event within each cylinder are differences in the temperature and pressure of the combustion and exhaust gases. If fueling mass and timing are not adjusted for each cylinder individually, then when even one cylinder is approaching a dangerous operating condition, the output of the rest of the cylinders might have to be maintained at a sub-optimal level in order to protect the one "bad" cylinder. This can mean that the potential for higher output and/or better fuel economy for the engine as a whole is being limited. One widely-used measure of engine output is Indicated Mean Effective Pressure (IMEP). This metric is useful because it enables comparison of engines of differing size within a general design class (turbo-diesel, naturally-aspirated gasoline, etc.). 
Variations of 5% to 10% (depending on operating condition) in IMEP values between cylinders have been noted in diesel engines using DF2 fuel. Even higher variations have been seen for the same engines running JP8. The target for this project is to reduce inter-cylinder imbalance to below 2% under all operating conditions, regardless of fuel. In order to mitigate the effect of this imbalance, manufacturers will sometimes introduce engine control software calibration settings that vary the mass and timing of fuel delivered to individual cylinders, under prescribed operating conditions. However, when this is done, it is based on measured data acquired from a limited sample of engines tested on dynamometers. The result is a one-size-fits-all approach that remains static in the field, and cannot adapt to the variation between individual engines, or all of the potential environmental conditions that might be encountered. An alternative approach that has also been tried is to install in-cylinder pressure sensors that are capable, with appropriate analysis software, of quickly and accurately characterizing the combustion event. This system is quite capable of finding inter-cylinder imbalance so that it can be mitigated by uniquely tailoring the fuel delivery to each cylinder. Unfortunately, this system is expensive, both at installation and in terms of reliability and maintainability. An approach is required that can react to variations in inter-cylinder output in real-time, on the vehicle in the field, and continually adjust engine fueling parameters to reduce the variation, using only the instrumentation and electronic control systems installed by the manufacturer, by developing an intelligent signal processing algorithm, capable of being implemented in software that can be run in the manufacturer's engine control module. 
PHASE I: Complete a feasibility study to determine the technical and commercial merit of developing an Adaptive Inter-Cylinder Output Balancing System (AICOBS). This effort shall fully develop an initial concept design, establish programmatic (cost, schedule and performance) goals, and deliver a final technical report detailing the AICOBS concept. This initial concept must be designed to: 1) be "adaptive," meaning capable of reacting to the actual inter-cylinder imbalances being experienced by the engine (as opposed to being statically calibrated in advance); 2) be "real-time," meaning the recognition of the imbalance and the corrective reaction must occur within a sufficiently short time interval that inter-cylinder balance is maintained under all expected operating transients; 3) use only the standard sensors and actuators present in a typical electronically-controlled fuel system; 4) be capable of being run in a commercial engine control module along with all of the manufacturer's other control software, without negatively impacting the system's capability or throughput. PHASE II: This effort will culminate in two well-developed Adaptive Inter-Cylinder Output Balancing System (AICOBS) prototypes. The first prototype will be fabricated using the Phase I concept design. The performance goal for the first prototype will be 2% variation in IMEP across all cylinders under steady-state operating conditions. The second prototype will be improved through a testing and redesign process. The performance goal for the second prototype will be 2% variation in IMEP across all cylinders under steady-state and transient operating conditions. 1. Produce a prototype AICOBS based on the Phase I concept design. 2. Establish actual system performance through physical testing (and compare the results to the original performance goals). 3. Document "Lessons Learned". 4. Redesign the Phase I concept while applying the "Lessons Learned" from performance testing. 5. 
Fabricate a 2nd generation AICOBS prototype based on the previous redesign, applying the "Lessons Learned". 6. Validate expected performance through testing (and compare to performance goals). This effort shall deliver: 1. Demonstration of the Phase I AICOBS concept prototype, after testing is complete. 2. Demonstration of the second generation redesigned AICOBS prototype after testing is complete. 3. A final technical report detailing the 2nd generation redesigned AICOBS prototype. PHASE III: Inter-cylinder balancing is a technology that would provide enhanced performance, fuel economy, and reliability in all military ground vehicles that currently utilize digital electronic control for fueling. Most commercial diesel-powered vehicles already utilize electronic fueling control. Inter-cylinder balancing has been researched and tested by the major commercial manufacturers, but has not been put into production because of the high cost of the in-cylinder sensor hardware. This effort envisions a software-only solution, using sensor hardware that is already installed on commercial vehicles that feature electronic fuel control. Commercial vehicles would see the same benefits as military vehicles, namely enhanced performance, fuel economy, and reliability. These benefits would be realized without the significant expense of additional sensor hardware.
OBJECTIVE: To develop a flame, smoke, and toxicity resistant, recoverable (retains its shape after impact) energy absorption trim material for use within military vehicle interiors. The material provides occupant impact protection during blast, crash and rollover events. DESCRIPTION: During underbody blast, crash and rollover events, the vehicle occupant, even when properly restrained, experiences high velocity motion in unpredictable directions. Blast events in particular, by nature of the infinite possible locations of the blast initiator (e.g. an Improvised Explosive Device), are conducive to setting the vehicle in a variety of motions. During a blast event the vehicle is pushed in an upward motion, and is also susceptible to rollover side to side or end to end, depending on the location of the blast initiator relative to the vehicle. The intent of interior trim energy absorption materials is to absorb kinetic energy in a controllable and predictable manner, in such a way as to reduce the level of energy experienced by the vehicle occupant and thus reduce occupant injury due to impact. Currently there are little to no interior trim energy absorption materials used in the interior of military vehicles. Additionally, any materials which are used for military applications are typically validated to FMVSS 302, which is considered by the TARDEC Flame, Smoke and Toxicity Interior Team to be an inferior specification for military vehicle applications. There is a variety of commercially available energy absorbing material with both recoverable and non-recoverable characteristics; however, the commercially available materials are not designed with a high level of resistance to flame, smoke and toxicity.
The challenge to the military vehicle designer is to provide interior trim energy absorbing material solutions which achieve high energy absorption capabilities, are recoverable and durable, and are resistant to flame, smoke and toxicity. Resistance to flame, excessive smoke and toxicity are characteristics unique to military vehicle interior applications due to the vehicle's exposure to blast events, typically from IEDs (Improvised Explosive Devices). Unlike a commercial automobile, military vehicles are designed with heavy armor and heavy transparent armor, and are significantly more enclosed. Upon an underbody blast event in which the armor is penetrated and the vehicle interior is exposed to high heat and/or flame, the materials inside the vehicle shall resist FST to the extent that the occupant has sufficient time to evacuate the vehicle. PHASE I: Phase I of this effort shall consist of a feasibility study and concept development of one or more flame, smoke and toxicity resistant energy absorption material(s) which are capable of retaining their intended form upon multiple high and low impacts (high impact using Head Impact Test equipment at 15 mph). The feasibility study shall describe, through an analytical approach, the means by which the proposed material will be developed to achieve a pass performance to MIL-STD-2031 and UL-94 for flame, smoke and toxicity resistance. The study shall also describe what effects the flame, smoke and toxicity resistant formulation may have on the energy absorption, durability, and recoverability characteristics of the material. The energy absorption head impact criterion of < 1000 HICd is the level of protection required. The concept development shall provide the expected performance of the proposed material's head impact protection capability, and how this performance shall be achieved.
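For context on the HICd < 1000 criterion: the Head Injury Criterion is the maximum, over all time windows up to 15 ms, of the window duration times the average head acceleration (in g) raised to the 2.5 power, and one common convention (the FMVSS 201U free-motion-headform correlation) maps it to a dummy-equivalent HIC(d) as 0.75446 * HIC + 166.4. The sketch below evaluates that formula on a synthetic half-sine pulse; the pulse is invented test data, not a measured head impact.

```python
import math

def hic(accel_g, dt, window_s=0.015):
    """Max HIC over all windows up to window_s; accel_g sampled every dt seconds."""
    n = len(accel_g)
    max_w = int(window_s / dt)
    cum = [0.0]                      # running integral of a(t) dt
    for a in accel_g:
        cum.append(cum[-1] + a * dt)
    best = 0.0
    for i in range(n):
        for j in range(i + 1, min(i + max_w, n) + 1):
            T = (j - i) * dt
            avg = max((cum[j] - cum[i]) / T, 0.0)   # guard against negative windows
            best = max(best, T * avg ** 2.5)
    return best

def hic_d(hic_value):
    """Dummy-equivalent correction per the FMVSS 201U convention."""
    return 0.75446 * hic_value + 166.4

# Synthetic 10 ms half-sine pulse peaking at 120 g, sampled at 0.1 ms:
dt = 1e-4
pulse = [120 * math.sin(math.pi * i * dt / 0.010) for i in range(int(0.010 / dt))]
print(f"HIC(d) = {hic_d(hic(pulse, dt)):.0f}")
```

A pulse of this shape lands comfortably under the 1000 HICd limit, which is the sense in which the energy absorbing trim "spreads out" the impact: lower, longer acceleration pulses score far better than sharp spikes.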
The interior trim energy absorbing concept may include multiple layers of materials, all of which shall be capable of flame, smoke and toxicity resistance, with the outer layer being capable of withstanding normal wear and tear typical of a military vehicle interior compartment. Design constraints shall be clearly defined. The material concept(s) shall provide confidence in support of performance to the following specifications, supported by sound engineering principles: MIL-STD-2031 Fire and Toxicity Test; ASTM E162 Surface Flammability of Materials; ASTM E1354 Heat and Smoke; ASTM E662 Smoke Obscuration; ASTM D6264M-12 Damage Resistance for Fiber-Reinforced Polymer Matrix Composite (if applicable for surface material); ASTM D1242 Resistance to Abrasion (for surface material); MIL-STD-810 Environmental: Temperature Basic Hot A2 Method 501.5; Basic Cold C1 Method 502.5, Table 502.5-I and Table 502.5-II; Fungus Method 508.6, Part II; Vibration Procedure I Category 20, Table 514.6-I Annex D; Rain Method 506.5. Analytical tools such as Finite Element Analysis and modeling and simulation, where appropriate, shall be used for this purpose. The outcome of Phase I shall include the scientific and technical feasibility as well as the commercial merit of the material concept solution provided. The concept(s) developed shall be supported by engineering principles. Supporting data, along with material safety data sheets and material specifications, shall also be included if available. The projected development and material cost and timing shall be included in the study. Phase I shall cover no more than a 6-month effort. PHASE II: Phase II of this effort shall demonstrate that the material concept(s) successfully perform to the criteria developed in Phase I.
Thirty (30) material samples / component level assemblies sized as 12" x 12" squares, including a durable outer surface (cover trim) which is securely attached to the energy absorption material (if separate), shall be shipped to the SANG (Selfridge Air National Guard Base) HIL (Head Impact Laboratory) for pre-verification of energy absorption performance of less than 1000 HICd (15 ms) at 15 mph. Note: the cover trim shall also be durable and resist FST with minimal impact on the energy absorption performance of the EA (energy absorption) material it is covering. The material shall demonstrate the ability to absorb energy and recover. The recovery feature shall return the material to a state in which it can be tested repeatedly and perform the same as it did when initially tested. The system shall also provide visual indication that it is damaged and not intended for additional impacts, examples being crazing, evident deformation, color change, or a distinct odor. The designed system (after being validated to the above criteria) shall be presented to TARDEC and approved before it is integrated onto a vehicle. Once approved, six samples of the system shall be provided for integration onto a vehicle for the purposes of blast, crash, rollover and toxicity system testing. The Contractor shall assist TARDEC in the installation of the parts to ensure proper fit and finish is achieved. The size of the sample shall be defined by the vehicle roof structure, which will be made available by TARDEC to the contractor at the beginning of Phase II. In addition, Phase II shall focus upon the validation and correlation of the modeling and simulation effort mentioned in Phase I, along with the fabrication and validation of the proposed material(s).
Additionally, the study in Phase II shall provide test data, reports, and all modeling and simulation models used to develop the system for concept validation according to the attached DVP&R (Development Validation Plan and Report). Any required modifications and retesting shall be conducted during Phase II. PHASE III: In the final phase of the project the contractor shall prove out the effectiveness of the system on an Army vehicle (or a vehicle that is representative of a vehicle in the Army fleet) in both blast and crash scenarios. The contractor shall provide an interior trim energy absorption material prototype headliner for the roof of the military vehicle (e.g. Bradley, MATV, Stryker) as well as a prototype component for the hatch ring frame. If the material solution is also capable of being utilized for small component protection, such as grab handles, then the contractor shall provide such a prototype component as well. The prototype material shall be validated by the contractor according to the attached DVP&R. Head impact protection performance shall be validated in vehicle, utilizing the SANG HIL. This system has the potential to be utilized in military and civilian truck and automotive applications, as well as naval applications; further study for naval applications may be required, however. Additionally, the material will be applicable to the commercial automotive industry.
OBJECTIVE: The Phase II effort shall result in a novel polymer-based transparent composite material that can be integrated into the top, outer layer of the glass windshield or transparent armor in both commercial and military applications. This layer shall be designed to defeat rock strike threats and enhance transparent armor performance by reducing susceptibility to repetitive damage and latent cracks caused by rocks and debris. DESCRIPTION: One of the common problems for military convoys in remote and desert areas is windshield and transparent armor damage caused by stones flying off the wheels of other vehicles. Cracked or broken windshields need to be repaired or replaced with new ones, which causes logistical problems. The minor, but repetitive, damage and latent cracks caused by the stones significantly reduce the ability of the transparent armor panel to defeat other rock strikes and ballistic projectiles. To address these problems, this solicitation requests the development of an innovative transparent and tough nanocomposite laminate that can be added on top of the outer glass to prevent windshield cracking or breakage from road hazards, and reduce latent damage to the transparent armor. This development should ideally exploit innovative materials, designs, and/or manufacturing processes to create a light but tough transparent outer layer. Synthetic materials that take advantage of manufacturing techniques to develop fiber-based materials with three-dimensional axial control, including weaving techniques, are of interest for this project. Recent advances in this area have resulted in the development of materials with superior strength, stiffness, toughness, and ballistic shock mitigation properties.
With improvements in nanotechnology, the discovery and exploitation of various nanostructures (such as, but not limited to, nanofibers, clay nanoplatelets, nanotubes, and nanowires), and advances in composites fabrication processes, it is possible to develop new structures and materials that can be integrated into a transparent armor system, leading to tougher, lighter, and thinner transparent ballistic panels. A lighter transparent armor is needed to improve mobility, maneuverability, and survivability of crew personnel. The goal of this solicitation is to develop a new material that can offer enhanced ballistic protection with at least a 30% reduction in weight and a significant reduction in thickness, at comparable or reduced cost relative to currently fielded transparent armor windows. PHASE I: Phase I will consist of a feasibility study of an innovative design concept for the development of a polymer-based transparent armor protection shield through the utilization of advanced materials and/or innovative fabrication techniques. The contractor must demonstrate the concept design by manufacturing at least four (4) 400 mm x 400 mm prototype transparent armor shields of the proposed technology. The panels shall be tested for ballistic protection according to ATPD 2352R, light weight tactical vehicle transparent armor requirements. The transparent ballistic panels shall defeat, at a minimum, a multi-hit .30 caliber (7.62 mm) Armor Piercing bullet threat at 2800 feet per second. The multi-hit pattern to be utilized is available in the Army Tank Purchase Description (ATPD) 2352. Transparency requirements include at least 85% transmission of the maximum solar emission at 550 nm. The refractive index and coefficient of thermal expansion of the materials should be similar to those of glass; that is, a refractive index of approximately 1.45 in the 400-800 nm wavelength range. Stability of the index of refraction should be investigated in the range of -20 C to +40 C.
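A quick feasibility check suggests the 85% transmission target is compatible with the n = 1.45 index requirement: at normal incidence, Fresnel reflection at each air/material surface is the only unavoidable loss. The back-of-envelope calculation below deliberately ignores absorption, scattering, and multiple internal reflections, so it is an upper bound, not a prediction for any real laminate.

```python
# Fresnel reflectance at normal incidence for an index-matched layer (n = 1.45),
# and net transmission through its two air/material surfaces.
n = 1.45
R = ((n - 1) / (n + 1)) ** 2   # single-surface reflectance, ~3.4%
T = (1 - R) ** 2               # transmission through entry and exit surfaces
print(f"per-surface reflectance: {R:.3%}, net transmission: {T:.1%}")
```

With roughly 93% transmission available before bulk losses, the 85% requirement leaves a realistic budget for absorption and haze in the composite itself.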
The transparent armor panels must maintain the improved ballistic performance at low temperatures (-40 F) and withstand the thermal cycling testing profile (-60 to 180 F). The transparent armor panels shall be tested for abrasion resistance on the exterior surface per section 3.3.6 of ATPD 2352R. The novel material shall exhibit optical transparency as stated in the ATPD 2352R specifications; haze shall be less than 3%. The Phase I panels shall be at least 15% lighter than currently fielded transparent armor systems at comparable or reduced cost. Additionally, ballistic performance of the complete transparent armor system shall be equal to, or better than, currently fielded systems, as measured by the V50 value of the system. PHASE II: Phase II work shall expand on Phase I results through the optimization of manufacturing processes and material properties based on the Phase I proof-of-concept studies, and demonstrate capabilities for large-scale manufacturing. Fabricate a minimum of 12 (400 mm x 400 mm) coupons for rock strike testing to be conducted by TARDEC IAW ATPD 2352T. The contractor must verify the rock strike performance of their solution via testing prior to submitting the coupons to TARDEC. Additionally, fabricate a minimum of 12 (400 mm x 400 mm) coupons for ballistic testing to be conducted by TARDEC. The contractor must verify the ballistic performance of their solution via testing prior to submitting the coupons to TARDEC. The transparent ballistic panels shall defeat, at a minimum, a multi-hit .30 caliber (7.62 mm) Armor Piercing bullet threat at 2800 feet per second. The multi-hit pattern to be utilized is available in the Army Tank Purchase Description (ATPD) 2352.
The Phase II panels shall be at least 30% lighter than currently fielded transparent armor systems at comparable or reduced cost. Additionally, ballistic performance of the complete transparent armor system shall be equal to, or better than, currently fielded systems, as measured by the V50 value of the system. PHASE III: Development of polymer-based lightweight transparent armor materials will directly impact military vehicle ballistic resistance capabilities, and can also be adapted to address civilian defense and automotive safety issues. Additionally, such technology will have a broad range of commercial applications in the airline industry. The new transparent armor materials will benefit light weight tactical vehicles by decreasing the amount of transparent armor replaced due to rock strikes. The developed concept will be tested on light- to medium-weight Army tactical vehicles with the potential for implementation on other platforms. The commercial market for the developed composite includes aircraft, helicopters, the automotive industry, law enforcement, security vehicles, and security construction (bank windows, checkpoints, etc.).
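The V50 value used above as the acceptance metric is the estimated velocity at which a projectile has a 50% probability of penetrating the armor. It is conventionally estimated (e.g., per the MIL-STD-662F procedure) by averaging an equal number of the highest velocities that produced partial penetrations and the lowest velocities that produced complete penetrations. The minimal sketch below uses invented shot data purely for illustration:

```python
def v50(partials, completes, n=3):
    """Mean of the n highest partial and n lowest complete penetration velocities."""
    lo = sorted(completes)[:n]                 # slowest shots that still penetrated
    hi = sorted(partials, reverse=True)[:n]    # fastest shots that were stopped
    return sum(lo + hi) / (2 * n)

partial_hits  = [2710, 2745, 2760, 2780]   # projectile stopped (ft/s, invented)
complete_hits = [2790, 2805, 2820, 2850]   # projectile penetrated (ft/s, invented)
print(f"V50 estimate: {v50(partial_hits, complete_hits):.0f} ft/s")
```

Comparing the V50 of a candidate laminate against the fielded baseline, before and after rock strike damage, is what makes "equal to or better" a testable requirement.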
OBJECTIVE: Develop an improved system for maintaining lateral stability of extended manned and unmanned convoys. DESCRIPTION: Current robotic leader-follower autonomy methods, when applied to multi-vehicle convoys, often produce trailing vehicle trajectories unacceptably different from the lead vehicle trajectory. In military applications, this path deviation error can seriously endanger convoy mission success and the participating vehicles themselves, particularly in combat zones. Following vehicles which stray from the path are more likely to encounter roadside hazards or Improvised Explosive Devices (IEDs) which they would have avoided had they followed the lead vehicle more precisely. For such missions, consistent following performance with lateral tracking errors of centimeters (Threshold: 20 cm; Objective: 15 cm) is needed. Some automated convoy approaches employ a set of sensor packages, including Global Positioning System (GPS) and inertial sensing, installed on each vehicle. However, simple following of the GPS waypoints laid down by the lead vehicle does not provide sufficient accuracy and fails in the absence of a good GPS signal. Other methods, in which each vehicle tracks and follows its predecessor, work well for a few vehicles but do not scale well to longer convoys (10 or more vehicles), as small tracking and control errors accumulate with each successive follower. To date, fusion of these two approaches has also failed to consistently achieve the desired accuracy for long convoys. Tracking of environmental features (landmarks) has been explored to help address this problem. In the case that the convoy is driving on roads in good condition with clearly painted lines, it is possible to exploit computer vision lane-tracking technology to enable the following vehicles to stay on-path by exploiting the markings on the road. Other environmental reference features can help when a sufficient number of such features are present.
A more general solution is needed, though, if the convoy system is to operate on secondary roads (gravel, dirt, etc.), two-track trails, off road (fields, desert, etc.), or on poorly-marked roads where the presence of reliable landmarks cannot be guaranteed. The ideal solution shall improve on the state of the art for both convoy relative vehicle localization and control algorithms. It shall exploit both inter-vehicle sensing (e.g., sensors mounted on one vehicle which detect angle and/or range to other vehicles) and navigation sensors (e.g., GPS and inertial systems), but it shall not be tied to specific sensor hardware. Solutions which minimize requirements for environmental features are preferred. PHASE I: Design a system that is capable of using sensors from different manufacturers and software algorithms for accurate leader-follower behavior in convoys of 10 or more vehicles. Convoy operations range in speed from 0 to 55 miles per hour (mph) with gap distance (the distance between vehicles) from 5 to 125 meters (m). Use open architecture principles in the system design. Feasibility of the approach shall be demonstrated in a simulation environment across a variety of lead vehicle paths and convoy speeds. The Phase I deliverable shall include a description of the system sensing hardware requirements, an analysis of expected system accuracy across a range of mission conditions, and an analysis of computation requirements. PHASE II: Phase II shall implement the Phase I design for a multi-vehicle convoy using Government Furnished Equipment (GFE), fully robotic tactical wheeled vehicles, to perform a technical demonstration. The system shall take advantage of the native onboard GFE sensors (GPS, Light Detection And Ranging (LIDAR), radars, Inertial Measurement Unit (IMU), wheel encoders, gyro, Ultra-wideband (UWB) radios, color and Infrared (IR) cameras, Vehicle-To-Vehicle (V2V) radio) to determine its solution.
Adding additional sensors as part of the solution is discouraged. The technical demonstration site shall be provided by the Government in the Contiguous United States (CONUS). The site shall be selected to represent an operationally relevant environment. The technical demonstration shall cover full-spectrum operations and shall include long- and short-haul duration missions, varying from low to high speeds in different operational conditions. These conditions shall include combinations of, but not be limited to, the following examples: day and night; rain and snow; dust and fog; structured and unstructured roads, two-track trails and cross-country routes. The Phase II deliverable shall include a technical report, software, source code and documentation. PHASE III: Development of a modular package suitable for both commercial and military use. Last year alone, Class 8 commercial trucks drove over 130 billion miles in the US. This type of system shall help reduce the number of accidents where trucks depart from the road. This could also be integrated into the military robotics library supporting existing programs such as the Autonomous Mobility Applique System (AMAS) and Route Clearance Inspection System (RCIS) Program of Record (PoR). The Phase III deliverable shall include a technical report and full Government Purpose Rights (GPR) software, source code and documentation delivered using
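The lateral tracking metric at the heart of this topic (Threshold: 20 cm; Objective: 15 cm) can be made concrete as a cross-track error: the perpendicular distance from a follower's position to the leader's recorded path. The sketch below models the path as a piecewise-linear sequence of waypoints; all coordinates are synthetic, and a fielded system would measure against a fused leader-path estimate rather than raw waypoints.

```python
import math

def cross_track_error(path, p):
    """Minimum distance in meters from point p to the polyline path [(x, y), ...]."""
    best = float("inf")
    for (ax, ay), (bx, by) in zip(path, path[1:]):
        vx, vy = bx - ax, by - ay
        seg_len2 = vx * vx + vy * vy
        # parameter of the projection of p onto segment a->b, clamped to [0, 1]
        t = max(0.0, min(1.0, ((p[0] - ax) * vx + (p[1] - ay) * vy) / seg_len2))
        cx, cy = ax + t * vx, ay + t * vy
        best = min(best, math.hypot(p[0] - cx, p[1] - cy))
    return best

leader_path = [(0.0, 0.0), (10.0, 0.0), (20.0, 5.0)]   # meters, synthetic
follower = (12.0, 1.2)
err = cross_track_error(leader_path, follower)
print(f"lateral error: {err:.2f} m, within 20 cm threshold: {err <= 0.20}")
```

Evaluating this error for every follower over an entire run, and reporting its distribution (not just the mean), is the natural way to demonstrate the threshold and objective values in the Phase I simulation.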