DOD STTR 2011.B 5
NOTE: The Solicitations and topics listed on this site are copies from the various SBIR agency solicitations and are not necessarily the latest and most up-to-date. For this reason, you should use the agency link listed below which will take you directly to the appropriate agency server where you can read the official version of this solicitation and download the appropriate forms and rules.
The official link for this solicitation is: http://www.dodsbir.net/solicitation/sttr11B/default.htm
Release Date:
Open Date:
Application Due Date:
Close Date:
Available Funding Topics
- AF11-BT01: Electro-optic Material Development
- AF11-BT02: Electrically-Small Superconducting Wide-Bandwidth Receiver
- AF11-BT03: Cognitive Radio Spectrum Management and Waveform Adaptation for Advanced Wideband Space Communication Systems
- AF11-BT04: 3-D nondestructive imaging techniques for mesoscale damage analysis of composite materials
- AF11-BT05: Printable Integrated Photonic Devices
- AF11-BT06: Sensitivity Analysis Methods for Complex, Multidisciplinary Systems
- AF11-BT07: High efficiency materials & processes for the reduction of CO2 to syngas
- AF11-BT08: Plasma Simulation Code Encompassing Single-Fluid through Two-Fluid Models
- AF11-BT09: Intracellular Detection of Small Molecules in Live Cells
- AF11-BT10: Innovative Electric Propulsion Technology for Responsive Space
- AF11-BT11: Technologies for Nanoscale Imaging Using Coherent Extreme Ultraviolet and Soft X-Ray Light
- AF11-BT12: Routing for IP based Satellite Ad-Hoc Networks
- AF11-BT13: Conformal, Light-Weight & Load-Bearing Antennas Based on Conductive Textile Threads
- AF11-BT14: MIMO Radar Clutter Modeling
- AF11-BT15: End to End Trusted Path for Embedded Devices and Applications
- AF11-BT16: Instrumentation for high-bandwidth optical measurements in harsh reacting flows
- AF11-BT20: Volume Charge Distribution Measurement in Thin Dielectrics
- AF11-BT21: Dynamically Evolving Malware Detection in Streams
- AF11-BT22: Tool for Blade Stress Estimation during Multiple Simultaneous Vibratory Mode Responses
- AF11-BT23: Simulation software for strongly coupled plasma
- AF11-BT24: High efficiency up- and down-converting infrared nanoparticles
- AF11-BT25: Electrode Surface Erosion at High Pressures
- AF11-BT26: Cellular elements for ensemble based programmable matter
- AF11-BT27: Learning from Massive Data Sets Generated by Physics Based Simulations
- AF11-BT28: Metamaterial-based MEM ultra-low-loss non-dispersive phased-array antenna
- AF11-BT29: Satellite Drag Physical Model Module for a Near Real Time Operation Test Bed
- AF11-BT30: Assured Information Sharing in Clouds
- AF11-BT31: Electrical Energy Storage System by SMES Method for Ultra-High Power and Energy Density
- AF11-BT41: Complex field atmospheric sensing suite for deep turbulence research
- MDA11-T003: Intelligent Adaptive Needs Characterization for M&S Systems Engineering
- MDA11-T004: Dual S & C-Band Telemetry Transmitter System for Missile Testing
- OSD11-T01: Multi-Processor Computer Supervisory Control Development, Verification, and Validation
- OSD11-T02: Programming Constructs to Enable Formation of Efficient Algorithm Mapping for ExaScale Processors
- OSD11-T03: Design and Analysis of Multi-core Software
- OSD11-T04: Operating System Mechanisms for Many-Core Systems
- OSD11-TD1: Information Salience
TECHNOLOGY AREAS: Materials/Processes
OBJECTIVE: Develop techniques and processes for the scale-up and production of high-performance electro-optic (EO) organic material systems with a sufficiently large electro-optic coefficient (r33 > 150 pm/V).
DESCRIPTION: Recent interest in the development of high-performance electro-optic devices that enable critical military systems has led to a demand for large quantities of high-performance electro-optic materials. Traditionally the most utilized materials have been inorganic crystals such as lithium niobate; however, higher-performing organic electro-optic materials would in many cases be the better choice for the device of interest. Organics are not typically used for device and system development because they are unavailable in sufficient quantity to independent device and system developers. Organics offer higher electro-optic activity and more flexible fabrication technology, but this lack of availability has severely impeded system development. High-performance organic EO materials would be useful in a variety of devices, including passive RF imagers, active RF photonic systems, and very-high-bandwidth data communication links. The flexibility of these materials could also enable conformal RF photonic devices on airborne platforms. Besides active EO polymers, compatible associated materials are needed to form claddings and electro-poled structures. This topic solicits the development of scale-up capability for the electro-optic polymers and the associated materials, providing a complete system for processing and fabrication of triple-stack waveguide structures with an optimally electro-poled coefficient. Demonstration of the capability to successfully fabricate a Mach-Zehnder modulator from the material system is necessary.
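To illustrate why a large r33 matters at the device level, the half-wave voltage of a Mach-Zehnder modulator scales inversely with the electro-optic coefficient. The sketch below uses the standard V_pi relation; all device parameters (gap, length, overlap factor, index) are illustrative assumptions, not topic requirements.

```python
# Sketch: half-wave voltage of a Mach-Zehnder modulator,
# V_pi = lam * g / (n**3 * r33 * Gamma * L).
# Device parameters below are illustrative assumptions.

def v_pi(lam, g, n, r33, gamma, length):
    """Half-wave voltage (V) of a single-drive Mach-Zehnder modulator."""
    return lam * g / (n**3 * r33 * gamma * length)

lam = 1.55e-6      # telecom wavelength (m)
g = 8e-6           # electrode gap (m), assumed
n = 1.7            # polymer refractive index, assumed
gamma = 0.8        # optical/electrical overlap factor, assumed
length = 2e-2      # interaction length, 2 cm

for r33 in (30e-12, 150e-12):   # LiNbO3-like vs. the topic's organic target
    print(f"r33 = {r33*1e12:.0f} pm/V -> V_pi = "
          f"{v_pi(lam, g, n, r33, gamma, length):.2f} V")
```

With these assumed dimensions, raising r33 from a lithium-niobate-like 30 pm/V to the topic's 150 pm/V target drops the drive voltage by a factor of five, to roughly one volt.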
PHASE I: Demonstrate the capability to synthesize the one or two most promising organic electro-optic material systems for production scale-up. Demonstrate synthesis techniques compatible with mass-production methodologies on at least one variety. Demonstrate the fabrication of Mach-Zehnder modulators with at least one scaled-up material system.
PHASE II: Build up and optimize the synthesis of the organic materials. Demonstrate producibility of the material in mass production, while maintaining cost effectiveness for potential users of the material systems. Deliver at least 20 grams of EO polymers and sufficient associated materials in a complete device-fabrication-ready system for testing by a government laboratory.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: Phased Array Radar systems, high speed high volume communications, sensors, photonics.
Commercial Application: Broadband communications, high speed interconnects and field sensors.
TECHNOLOGY AREAS: Electronics
OBJECTIVE: Develop a wide-bandwidth (DC to 2 GHz) receiver utilizing two-dimensional arrays of high-transition-temperature (HTS) superconducting quantum interference devices (SQUIDs).
DESCRIPTION: Large-bandwidth communication systems are needed to handle high data throughput while also reducing size, weight, and power (SWaP) requirements by eliminating the need for multiple systems covering different frequency ranges. Sensors built from superconducting quantum interference devices (SQUIDs) have unmatched sensitivity but require operation in a flux-locked loop (FLL) to linearize the transfer function and reduce inter-modulation products. FLL operation, however, reduces the bandwidth significantly, to less than 100 MHz.
A technique has been proposed [1] that uses an array of SQUIDs whose individual areas are chosen so that their voltage oscillations correspond to the first few Fourier components of a triangle wave, e.g., the 1st, 3rd, and 5th components. Adding these components in the appropriate ratios yields a highly linear composite voltage response from the array. Arrays of this type have been demonstrated with niobium low-transition-temperature superconducting (LTS) SQUIDs. However, these devices add considerable weight and volume because they must be refrigerated down to liquid-helium temperature, about 4 K.
One potential solution is to use high-transition-temperature (HTS) SQUID arrays operating near 77 K to reduce SWaP requirements. The difficulty is that material non-uniformity creates large variations in SQUID critical currents and degrades the linearity of the array’s transfer function, resulting in large inter-modulation products. Recently, it has been shown that variations in HTS SQUID critical currents can be reduced by fabricating SQUIDs with Josephson junctions formed via an ion-damage process [2]. These junctions have no interfaces between different materials, which would normally be a major source of non-uniformity. Furthermore, SQUIDs fabricated from these junctions can be positioned freely on the substrate to construct two-dimensional, series-parallel arrays. This is important because amplifier gain scales with the series size, while non-uniformity is reduced as more SQUIDs are added in parallel.
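The linearization idea described above can be checked numerically: a single SQUID's sinusoidal voltage-flux response generates strong odd-harmonic distortion, while a three-section array weighted by the triangle-wave Fourier coefficients (1, -1/9, 1/25 on the 1st, 3rd, and 5th harmonics) suppresses it by orders of magnitude. The normalized responses below are a sketch of the concept, not a device model.

```python
# Compare 3rd-harmonic distortion of a single sinusoidal SQUID response
# against a 3-component array weighted as the triangle-wave Fourier terms.
import math

def squid_v(x):
    """Single SQUID: sinusoidal voltage-flux response (normalized)."""
    return math.sin(x)

def array_v(x):
    """3-section array: harmonics weighted as triangle-wave Fourier terms."""
    return math.sin(x) - math.sin(3 * x) / 9 + math.sin(5 * x) / 25

def harmonic(resp, n, amp=1.0, samples=4096):
    """Amplitude of the n-th output harmonic for an input flux amp*sin(theta)."""
    s = 0.0
    for i in range(samples):
        th = 2 * math.pi * i / samples
        s += resp(amp * math.sin(th)) * math.sin(n * th)
    return 2 * s / samples

for resp, name in ((squid_v, "single SQUID"), (array_v, "3-term array")):
    hd3 = abs(harmonic(resp, 3) / harmonic(resp, 1))
    print(f"{name}: 3rd-harmonic distortion = {hd3:.3%}")
```

The distortion drops from a few percent to well under 0.1%, which is the essence of why the composite array response is suitable for wideband receivers.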
PHASE I: Determine the optimum SQUID array structure for a linear voltage response while minimizing non-uniformity for operation between DC and 2 GHz.
PHASE II:
Task 1. Fabricate prototype SQUID arrays with designs developed in phase I.
Task 2. Characterize devices to determine linearity, bandwidth & gain.
Task 3. Investigate both near-field & far-field magnetic field sensitivity.
Task 4. Test array operation on compact cryocoolers to determine size & weight requirements.
Task 5. Investigate cryogenic integrated circuit packaging for array chips & integrate into communication systems.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: Successful Phase III applications would result in improved microwave receivers for numerous air platforms, while reducing on-board weight, volume and power consumption. Such HTS SQUID arrays may lead to use in space vehicles.
Commercial Application: The most immediate application with a huge potential market would be introduction into cellular base stations. It also could be adapted for more sensitive and secure communications between commercial aircraft and ground stations.
TECHNOLOGY AREAS: Space Platforms
OBJECTIVE: Develop advanced game-theoretic frameworks and hybrid approaches for spectrum sensing and management in wideband space communication systems and hybrid space-terrestrial systems, as well as countermeasures for adaptive RF interference and adversarial jamming.
DESCRIPTION: Space vehicles and air- and space-borne sensors are essential components for improved warfighting capabilities and enhanced defensive control over complex collaborative missions. The current military satellite communications infrastructure is multi-tiered to cater for different communication needs, ranging from high-capacity wideband transmissions and protected systems with anti-jamming features and covertness to narrowband systems for small and mobile users. To support growing communication needs on limited satellite bands, advanced wideband systems will prevail in the future satellite communication infrastructure, and hybrid space-terrestrial systems will be essential to ensure end-to-end network performance and improve mission-based criteria. In these networks, the unprecedented complexity and unpredictability of the operating environments, aggravated by the extremely high stakes of network management effectiveness, make it crucial to develop cognitive radio networking and management solutions that are context-aware, capable of predicting and tracking network conditions, and agile in waveform adaptation, providing countermeasures for persistent and adaptive RF interference and adversarial jamming (e.g., broadband noise, multi-tone, repeat-back, and frequency followers).
Game-theoretic approaches have produced efficient courses of action, in which competitive decisions depend on the actions of others, for autonomous command and control in dynamic and complex environments. In this STTR topic, advanced game-theoretic frameworks are particularly sought for novel cognitive waveform design and adaptation techniques that can efficiently utilize the space bandwidth, coherently interface with terrestrial systems, and effectively counteract persistent and adaptive jammers and interference. For effective gaming, wideband spectrum sensing and cognition must be integrated to gain awareness of the complex radio propagation environment and to quickly capture network dynamics and adversarial activities. The adaptive waveform designs shall be able to perform autonomous frequency band selection exploiting propagation characteristics, rapid antenna selectivity by means of narrow beamwidth and adaptive nulling, spread-spectrum and wideband processing, and satellite platform autonomy. Distributed implementations are sought for scalable and robust network operations with radio autonomy. Particular considerations are waveform adaptation and antenna analysis for UHF (Ultra-High Frequency), SHF (Super-High Frequency), and Ka-band, which are susceptible to adversarial jamming. Performance attributes include surviving space-segment threats without damage and operating through threats without interruption.
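The game-theoretic framing above can be illustrated with the smallest possible jamming game: a transmitter and a jammer each pick one of two bands, and the transmitter scores the band's capacity only if unjammed (zero-sum). The band capacities below are hypothetical; the point is that with no saddle point, both sides must randomize, and the equilibrium follows from making the opponent indifferent.

```python
# Toy anti-jamming game: mixed-strategy equilibrium of a 2x2 zero-sum game.
# Band capacities are hypothetical illustration values.

def mixed_equilibrium_2x2(payoff):
    """Mixed-strategy solution of a 2x2 zero-sum game with no saddle point.
    payoff[i][j] = row player's payoff for row i vs. column j."""
    (a, b), (c, d) = payoff
    denom = a - b - c + d
    p = (d - c) / denom          # probability row player picks row 0
    q = (d - b) / denom          # probability column player picks column 0
    value = (a * d - b * c) / denom
    return p, q, value

# Rows: transmitter's band; columns: jammer's band.
# Capacities: band 0 = 2, band 1 = 1; a jammed band yields 0 throughput.
payoff = [[0.0, 2.0],
          [1.0, 0.0]]
p, q, v = mixed_equilibrium_2x2(payoff)
print(f"transmitter uses band 0 with prob {p:.3f}")
print(f"jammer jams band 0 with prob {q:.3f}")
print(f"expected throughput at equilibrium: {v:.3f}")
```

Note how the jammer concentrates on the higher-capacity band (probability 2/3) while the transmitter hedges toward the lower-capacity one; scaled up to many bands, asymmetric information, and primary/secondary users, this is the hierarchical spectrum-sharing game the topic targets.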
PHASE I: Identify novel approaches and frameworks for joint waveform adaptation and cognitive spectrum sensing and management in hierarchical spectrum sharing games with primary users, secondary users, persistent jammers and asymmetric information structures.
PHASE II: Refine the Phase I results and designs for waveform adaptation and dynamic spectrum sharing games. Demonstrate a proof of concept on spectrum sharing throughput as well as dynamic behaviors of cognitive radios and adversarial jamming. Conduct performance assessment on peak vs. average interference power constraints, active interference and temperature control, and exploitation of primary link performance margins in semi-realistic environments.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: Transition the technology into military satellite communications such as advanced narrowband systems and wideband gapfiller system in a net-centric environment to provide transformational and assured communications for force projection.
Commercial Application: Augment the technology into commercial satellite communication systems to meet surge requirements and provide global coverage, tracking, and data acquisition services during launch, early orbit, low-earth-orbit operations, and satellite anomalies.
TECHNOLOGY AREAS: Materials/Processes, Weapons
OBJECTIVE: Develop techniques for detecting and modeling the evolution of damage in composite materials such as plastic bonded explosives or concretes using nondestructive means.
DESCRIPTION: In hard target penetration, the onboard energetic material may be subjected to severe environments of both pressure and shear loading. Damage to this material may result in premature initiation or suboptimal performance. This damage may be physically manifested at a scale far smaller than is usually captured in modern finite element codes. The damage affecting payload sensitivity is believed to occur at and below the grain scale, where the grain is defined as a single energetic crystal ranging in size from 5 to 400 microns. Crystal particles in the submicron range also exist but are usually not explicitly modeled at the mesoscale. Mesoscale simulations are frequently regarded as ones in which the mesh or material description is resolved down to the level where individual constituents are treated with separate continuum-level material descriptions. For a traditional energetic material, this implies resolving the description down to this same energetic crystal level or smaller. The binder and crystal in an energetic material are sometimes of very similar densities, which makes resolving the individual constituents using nondestructive techniques very difficult. A diagnostic technique that could capture the evolution of damage as a material sample is deformed would be an invaluable tool for validating simulation techniques. Such damage would need to be incorporated into constitutive models for the materials.
PHASE I: Design and evaluate appropriate experimental and diagnostic techniques for detecting the evolution of damage in representative volumes of composite materials. These materials shall include plastic bonded explosives and/or concrete materials. Potential damage modeling techniques should also be identified.
PHASE II: Develop and implement the designs and techniques from Phase I. Characterize a representative material over a selected range of conditions. Identify appropriate techniques for incorporating the observed damage into macro level constitutive models. Design validation experiments for these models.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: This technology is applicable to the study of a wide variety of composite materials, energetic materials, propellants, and geologic materials.
Commercial Application: These techniques are ideal for medical imaging, including imaging dental appliances in the mouth of a patient, locating inert objects in a patient, and aiding the placement of medical devices.
TECHNOLOGY AREAS: Sensors
OBJECTIVE: Develop proof-of-concept printing technology for the design, modeling and manufacture of integrated photonic devices at low dimensions.
DESCRIPTION: Printable electronics and photonics are emerging technologies that have attracted much attention over the last decade. Traditionally, CMOS processes have been used to fabricate electronic and photonic devices; however, those processes have generally restricted fabrication to small areas. It is highly desirable to have an affordable process (e.g., printing, nanoimprint lithography, or soft lithography) that can be used for large-area manufacturing of electronic and photonic components on any substrate, including flexible substrates. Such a process would allow functional integration, on a single backplane, of materials such as organic semiconductors, dielectric and conductive polymers, electro-optic polymers, carbon nanotubes, nanowires, quantum dots, and electro-biological materials, which are inherently flexible and compatible with flexible substrates such as plastic, fabric, and paper, permitting new functionalities and performance. This enables the production of electronic and photonic devices that can be conformable, foldable, stretchable, rollable, and deformable, capabilities of particular interest to the Air Force and DoD. Since more than one electronic and photonic device can be placed on the backplane to achieve full functionality, such systems have many potential applications, including conformal optoelectronic integrated circuits, flexible displays, lasers, optical waveguides, modulators, photodetectors, flexible lighting, e-paper, solar cells, RF amplifiers, batteries, sensors, actuators, radios, and antennas. These products can be interactive, energy-efficient, and ultra-low-cost (throwaway). For printed electronics and photonics to succeed, new material discoveries; device designs and structures; scaling to subwavelength and sub-100-nanometer dimensions; and tools for integrated optical, electronic, and mechanical quality assurance are needed.
Especially desired are efforts toward printable optoelectronic integrated circuits for conformal communication systems, reconfigurable photonics, and printable quantum-dot nanolaser arrays. Reliably placing nanoparticles, nanotubes, nanoclusters, quantum dots, and related structures with advanced printing technology will be one of the challenges to be demonstrated within a project.
PHASE I: Develop proof-of-concept printing technology for low dimensional electronic and photonic devices. Develop consumables (e.g., printing inks), processing technologies (e.g., printing), and tools for automation and real-time control. Demonstrate production of integrated electronic/photonic components.
PHASE II: Fabricate a specific printed electronic and photonic system prototype and demonstrate its utility and performance.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: Potential Air Force/DoD applications are lasers, modulators, photodetectors, sensors, photovoltaics for energy-harvesting, antennae and displays for communications.
Commercial Application: These systems, apart from being valuable to the military, can also be of commercial value in civilian applications.
TECHNOLOGY AREAS: Air Platform, Information Systems
OBJECTIVE: Develop computational tools to compute response sensitivities of parametric multidisciplinary systems that exhibit nonlinear, dynamic behavior for use in gradient-based optimization, smart sampling, uncertainty quantification, and risk analysis.
DESCRIPTION: Much progress has been made in the development of algorithms for the efficient computation of flows and their nonlinear, dynamic interaction with, for example, structure. Such couplings are complex, difficult to predict, and representative of other physical interactions that can dramatically constrain system performance. Recently, attention has been given to aircraft parameterization, e.g., the geometry of the outer mold line and the interior structure, for the long-term goal of automating aircraft design. However, little attention has been given to the computation of sensitivities of aircraft (or aircraft component) behavior to these parameters. This is a barrier to early identification of critical physical behaviors greatly constraining or enabling a vehicle concept.
Parametric aircraft descriptions are rich. Numerous parameters are needed to describe the geometry, structure, kinematic (when shape changing), material, and potentially thermal character of the aircraft. Design processes reflecting this richness emphasize gradient-based optimization (or hybrid approaches involving gradient-based optimization) owing to the large size of the design space. Gradient-based optimization requires sensitivities of responses and constraints at points in design spaces. Unfortunately, it is not realistic to compute sensitivities with respect to a large number of parameters, if the cost of these calculations is proportional to the number of design variables. This would be expected for many typical approaches, including finite-difference, complex step, dual-number, and analytic differentiation. Other applications of efficient sensitivity analysis include uncertainty quantification and risk analysis, which are needed to account for various sources of variability and lack of knowledge.
In recent years, adjoint methods have been investigated as a way to avoid proportional cost growth in sensitivity analysis [1, 2, 3], and results have been encouraging. The challenges that remain for the design of dynamic systems at levels approaching preliminary design are driven by a peculiar feature of the adjoint approach: forward-time computation of the nonlinear, dynamic response followed by reverse-time computation of the adjoint variables using an equation set linearized about the forward-time response. Challenges include: obtaining dynamic sensitivities of autonomous and non-autonomous equation sets (potentially with diverse time scales); treatment of time-periodic and aperiodic behaviors (potentially chaotic to a modest degree); development of a flexible adjoint framework suitable for legacy and research codes; and robust and efficient memory management techniques [4, 5] for very large problems involving numerous objectives (or performance measures in general) and constraints.
This effort will enable rich, parametric, aircraft models to be optimized for the purposes of improved energy efficiency/performance and ability to more readily consider new aircraft concepts, while also enabling failure modes to be accounted for at a high-fidelity level.
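The cost asymmetry motivating the adjoint approach can be sketched on a steady linear model (the topic targets the far harder time-dependent case). For A(p)x = b and objective J = c·x, finite differences cost one extra solve per design parameter, while a single transposed "adjoint" solve Aᵀλ = c yields every dJ/dp_i at once via dJ/dp_i = -λ·(∂A/∂p_i)x. The 2x2 system and parameterization below are purely illustrative.

```python
# Adjoint vs. finite-difference gradients on a steady 2x2 linear model.
# Illustrative sketch of the cost argument, not an aeroelastic solver.

def solve2(A, b):
    """Direct 2x2 linear solve."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def A_of(p):                      # parameters enter on the diagonal (assumed)
    return [[p[0], 1.0], [2.0, p[1]]]

def J_of(p, b, c):
    x = solve2(A_of(p), b)
    return c[0] * x[0] + c[1] * x[1]

p, b, c = [2.0, 3.0], [1.0, 2.0], [1.0, 1.0]

# Adjoint: one transposed solve, then a cheap product per parameter.
A = A_of(p)
x = solve2(A, b)
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]   # transpose of A(p)
lam = solve2(At, c)
# dA/dp_i has a single diagonal entry, so -lam.(dA/dp_i)x = -lam_i * x_i.
grad_adj = [-lam[0] * x[0], -lam[1] * x[1]]

# Finite differences: one extra solve per parameter (cost grows with len(p)).
h = 1e-6
grad_fd = [(J_of([p[0] + (i == 0) * h, p[1] + (i == 1) * h], b, c)
            - J_of(p, b, c)) / h for i in range(2)]

print("adjoint:          ", grad_adj)
print("finite difference:", grad_fd)
```

With thousands of shape and structural parameters, that one-solve-per-objective property is what makes gradient-based design of rich aircraft parameterizations tractable.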
PHASE I: Demonstrate feasibility of an adjoint-based sensitivity analysis methodology for a representative nonlinear aeroelastic problem exhibiting flutter (preferably limit-cycle oscillation), assuming parametric variations in geometry and structure.
PHASE II: Further develop the methodology demonstrated in Phase I to make it applicable to a generic aircraft exhibiting a rich parameterization and constrained/enabled by complex aeroelastic, and potentially aerothermoelastic, interactions. Develop and deliver the requisite algorithmic software tools, documentation of the methodology, and graphical user interface.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: This methodology and accompanying software will enable a rapid and efficient physics-based design and evaluation of new military aircraft concepts, including Next Gen Tac Air and high-speed concepts that require tight coupling and high fidelity.
Commercial Application: The methodology and software will enable physics-based design of new civil concepts, such as a high-speed civil transport, and enable large improvements in energy efficiency through optimization, reducing fuel burn and environmental impact.
TECHNOLOGY AREAS: Air Platform, Materials/Processes
OBJECTIVE: Develop high efficiency (>70%) electrodes for electrochemical conversion of CO2 and water to syngas for JP-8 production.
DESCRIPTION: The efficient conversion of CO2 into storable liquid fuels would help create a secure and sustainable source of carbon-neutral transportation fuels. Two approaches are to convert CO2 directly to multi-carbon alcohols [1], or to use CO2 to produce syngas (CO + H2), which can be converted to fuels by well-established processes. The materials and processes for the efficient electrochemical production of syngas from CO2 have not been identified or optimized. Demonstrating processes for the low-overpotential (<1.0 V), high-efficiency (>70%, based on measured vs. reversible free energies) reduction of CO2 to CO, with the proper ratio of hydrogen generation (~3 H2 : 1 CO), will offer an alternative pathway to carbon-neutral, high-density transportation fuels when CO2 from concentrated sources such as stationary power plants is utilized and the process is coupled to existing Fischer-Tropsch technologies.
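The relation between overpotential and the efficiency metric can be made concrete with back-of-envelope arithmetic. Taking the equilibrium cell voltage for CO2 -> CO + 1/2 O2 as roughly 1.33 V (from a standard Gibbs energy of about 257 kJ/mol and two electrons per CO), and defining voltage efficiency as E_rev/E_cell with all other losses ignored, the overpotential budget implied by the efficiency target follows directly. The numbers here are a hedged sketch, not topic data.

```python
# Back-of-envelope voltage efficiency for CO2 -> CO + 1/2 O2.
# Efficiency here means E_rev / (E_rev + total overpotential);
# faradaic and other losses are deliberately ignored.

F = 96485.0          # Faraday constant, C/mol
dG = 257.2e3         # standard Gibbs energy of reaction, J/mol (approximate)
n = 2                # electrons transferred per CO
E_rev = dG / (n * F)

def voltage_efficiency(overpotential):
    return E_rev / (E_rev + overpotential)

print(f"E_rev = {E_rev:.2f} V")
for eta in (1.0, 0.5):
    print(f"total overpotential {eta:.1f} V -> "
          f"efficiency {voltage_efficiency(eta):.0%}")

# Overpotential budget implied by a 70% voltage-efficiency target:
eta_max = E_rev * (1 / 0.70 - 1)
print(f"overpotential budget for 70%: {eta_max:.2f} V")
```

By this measure a full volt of overpotential caps voltage efficiency near 57%, so reaching the >70% goal requires holding the combined cathode and anode overpotentials to roughly 0.57 V or less.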
PHASE I: Develop and demonstrate materials, catalysts, and processes for the electrochemical conversion of CO2 and water to CO and H2 in ratios appropriate for conversion to JP-8. Demonstrate efficiencies greater than 60%, with pathways to greater than 70%.
PHASE II: Demonstrate an electrochemical conversion system for conversion of CO2 and water to CO and H2 in ratios appropriate for conversion to JP-8 with overall efficiencies greater than 70%.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: Conversion of high concentration CO2 effluents to liquid fuels via the demonstrated process coupled to Fischer-Tropsch conversion.
Commercial Application: Carbon neutral liquid transportation fuel (diesel and jet) from stationary power sources.
TECHNOLOGY AREAS: Information Systems, Space Platforms, Weapons
OBJECTIVE: Develop robust unified plasma simulation software that encompasses single-fluid through two-fluid models and is widely applicable to the large parameter space of Air Force needs in a single software package.
DESCRIPTION: This topic seeks to develop robust unified plasma simulation software that encompasses single-fluid through two-fluid models and that is widely applicable to many technologies and devices that span the large parameter space of Air Force needs in a single software package. Currently, simulation codes for collisional plasmas are developed using a single model, e.g. resistive magnetohydrodynamics (MHD), and are sometimes applied in situations beyond the region of validity of the model. Applying other plasma models involves learning a new code with its own peculiar structure and problem specification requirements. The development of a plasma simulation code that uses robust numerical algorithms to solve ideal MHD, resistive MHD, and two-fluid plasma models within the same code framework would significantly reduce the cost and time required to test and prototype plasma technologies. In particular, flexible, easy to use, and well-supported plasma fluid modeling software with a variety of physical models with different physics fidelity is needed to allow non-experts to simulate plasma dynamics in complex geometries.
Plasma simulation software is required to develop the understanding and predictability of plasma technologies and devices necessary to make design improvements and innovations. Fluid plasma models are derived by applying asymptotic approximations to the kinetic model. The approximations simplify the governing model and allow for more detailed calculations, but they also limit the model’s region of validity. Since most software uses only a single plasma model, switching to a different plasma model requires using a completely different code and modifying the problem specification to fit the new code. The goal of this program is to develop a robust unified plasma simulation code that models the plasma with a user-selectable model comprising several single-fluid and two-fluid options. The numerical algorithms should resolve complicated plasma dynamics that include high-speed flows and sharp gradients influenced by externally applied magnetic and electric fields with realistic boundary conditions. Diagnostics should guide the user to the appropriate choice of models and help bound the problem. The software should allow for simulating complex 3D geometries using body-fitted block-structured and unstructured meshes. An easy-to-use interface for specifying the simulation geometry, computational grid, and simulation parameters must be provided. The solution output must be in a form that is easily analyzed and visualized, including synthetic diagnostics. The software must be verified and validated and include a test and benchmark suite. Further, the software should be widely available, well documented, and supported through user and example manuals and hands-on training when required.
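The "user-selectable model within one framework" requirement is essentially a software-architecture constraint: every plasma model plugs into the same solver and grid machinery through a common interface, so switching models never means switching codes. The sketch below shows only that structural idea; the model names, State fields, and placeholder fluxes are illustrative assumptions with no real physics in them.

```python
# Structural sketch of a model-selection registry: each plasma model
# exposes the same flux interface to one shared time integrator.
# No real physics; fields and fluxes are placeholders.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class State:
    density: float
    momentum: float
    energy: float

MODELS: Dict[str, Callable[[State], State]] = {}

def register(name):
    def deco(fn):
        MODELS[name] = fn
        return fn
    return deco

@register("ideal_mhd")
def ideal_mhd_flux(u: State) -> State:
    # placeholder; a real code evaluates the ideal-MHD flux tensor here
    return State(u.momentum, u.momentum**2 / u.density, u.energy)

@register("resistive_mhd")
def resistive_mhd_flux(u: State) -> State:
    # same interface; a real code adds resistive terms here
    return ideal_mhd_flux(u)

def step(model: str, u: State, dt: float) -> State:
    """One schematic forward-Euler update using the selected model's flux."""
    f = MODELS[model](u)
    return State(u.density - dt * f.density,
                 u.momentum - dt * f.momentum,
                 u.energy - dt * f.energy)

u1 = step("ideal_mhd", State(1.0, 0.1, 1.0), 1e-3)
print(u1)
```

With this shape, adding a two-fluid option is a new registry entry rather than a new code, and the problem specification (geometry, grid, parameters) stays untouched when the model changes.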
PHASE I: Identify robust numerical algorithms appropriate for time-dependent plasma simulations using single-fluid and two-fluid models on a common computational grid capable of handling complex 3D geometries. Develop a unified software framework that incorporates these algorithms and develop an implementation plan for software development.
PHASE II: Develop a prototype 3D unified plasma simulation code with an interface that allows simple problem specification and input, including simulation geometry, computational grid, and simulation parameters. Configure the software-generated output so that it can be compared immediately with experimental measurements. Develop software documentation, including user and example manuals and a developer’s manual. Design a hands-on training workshop.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: This tool will advance the state-of-the-art in high-energy density physics applications that generate high power, high energy directed energy sources. Advanced plasma propulsion schemes will also benefit from this tool.
Commercial Application: This tool will advance our understanding and exploitation of advanced plasma processing for manufacturing technology and environmental remediation, via plasma chemistry. Manufacturing using EM fields, such as metal forming, will use such a tool.
TECHNOLOGY AREAS: Chemical/Bio Defense, Biomedical, Sensors
OBJECTIVE: For defense against toxic chemical and biological agents, develop a broad-based biosensor with “off”-to-“on” functionality that will allow for sensing of potential hazards.
DESCRIPTION: Many sensors exist for detecting chemical and biological agents in non-biological environments. However, low levels of acute or chronic exposure in humans often go undetected until the outward appearance of illness. Therefore, risk reduction and treatment can often only be undertaken long after contact with such agents. Biomolecular switches are a versatile mechanism for detecting targets in biological samples, including real-time detection in complex environments. Biological switching involves conformational changes that lead to a detectable signal [1]. This signal can be electrochemical, optical, or biochemical [2]. Prior work has shown that biomolecular switches can be adapted for multiple stimuli, including cations, sugars, metals, drugs, and small molecules [3-6]. Currently, there are no sensors designed for monitoring exposure to biological or chemical warfare agents, for either acute or chronic conditions, in living systems. Accordingly, and in order to continue to protect the safety and health of our armed military personnel, the intent is to develop nanoparticle-based sensors that can be deployed in biological environments for the real-time detection of agents of interest [7]. These sensors should have a quantitative capability and broad applicability to detect changes caused by known and unknown threats [8]. This will require the development of sensor systems that can enter living cells and complex environments and remain in an “off” state until exposure to a target leads to a direct or indirect signal transduction event. Ideally, this sensor should be easy to implant and non-toxic, so that it can be administered whenever exposure is suspected. Importantly, this will allow the sensor to be of value in both acute and chronic settings.
PHASE I: Develop a candidate nanoparticle sensor with the ability to specifically bind small molecules and induce a detectable signal upon exposure to a chemical or biological agent. Furthermore, demonstrate in human cell culture models that such a nanoparticle can detect such indicator molecules in a quantitative fashion in response to external stimuli.
PHASE II: Utilize the designed nanoparticle to detect stimuli in complex environments such as saliva or blood or in a point-of-care assay format. In addition, demonstrate detection at biologically relevant concentrations of stimuli.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: In order to continue to protect the safety and health of our armed military personnel, the intent is to develop nanoparticle-based sensors that can be deployed in biological environments for the real-time detection of agents of interest.
Commercial Application: Successful completion of Phase II would position the nanoparticle sensor for commercial development.
TECHNOLOGY AREAS: Space Platforms
OBJECTIVE: Develop new electric propulsion concepts with highly flexible operating envelope of thrust and specific impulse for both rapid maneuvering of large space assets and highly efficient orbit and station-keeping.
DESCRIPTION: The ability to rapidly perform space maneuvers, between various orbital inclinations, or between geostationary and low earth orbits (LEO), is a very desirable capability. Chemical propulsion is typically used to provide the high thrust capability for such maneuvers, but with the penalty of large propellant mass, and becomes prohibitive for repeated high-thrust maneuvers. Electric propulsion concepts are capable of reducing the system’s wet mass (thruster and propellant) by achieving much higher specific impulse (Isp) but usually at the expense of thrust or efficiency. For example, ion thrusters and Hall thrusters can have high Isp (2000 - 5000 sec), but the low plasma density greatly limits the thrust level. For higher thrust concepts, such as an arc-jet, the Isp is of the same order (x2) as chemical propulsion concepts but the efficiency is low and the thruster is poorly adapted to high-Isp/low-thrust maneuvers such as orbit and station-keeping. Dual-mode concepts, combining a chemical thruster and an electric propulsion thruster, are of high interest, but are difficult to achieve with a low mass constraint. What is desired is a compact, low-mass, high efficiency propulsion system with a potentially continuously variable thrust-Isp combination at constant electrical power. The desired operational characteristics are: an efficiency of 50% or above, thrust levels similar to chemical propulsion in the extreme low-Isp range (3 N or above at 25 kW), and an upper Isp limit in excess of 3000 sec in the low-thrust regime. The proposed concept will be evaluated on its reasonable potential to approach or exceed these performance figures. The expected power constraint may range from 20 to 200 kW. Since the efficient production of a high-density plasma (> 10^16 cm^-3) is a key difficulty for the high-thrust regime, innovative concepts that focus on revolutionary approaches to this aspect of the problem are of special interest.
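The thrust-Isp trade at constant power can be checked against the ideal-thruster relation T = 2ηP/(g0·Isp). The sketch below (an illustrative helper, not part of the solicitation) evaluates the two ends of the desired operating envelope at the topic's 25 kW, 50% efficiency point.

```python
# Sketch: T = 2*eta*P / (g0 * Isp) links thrust, electrical power, efficiency,
# and specific impulse for an idealized electric thruster at constant power.
G0 = 9.80665  # standard gravity, m/s^2

def thrust_newtons(power_w, efficiency, isp_s):
    """Thrust achievable at a given electrical power, efficiency, and Isp."""
    return 2.0 * efficiency * power_w / (G0 * isp_s)

# High-thrust mode: ~3 N at 25 kW, 50% efficiency implies Isp near 850 s.
t_low_isp = thrust_newtons(25e3, 0.5, 850)
# Station-keeping mode: at Isp = 3000 s the same power yields under 1 N.
t_high_isp = thrust_newtons(25e3, 0.5, 3000)
```

This makes concrete why a continuously variable thrust-Isp combination at constant power spans roughly a factor of 3-4 in both quantities for the stated requirements.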
PHASE I: Provide engineering analyses or small-scale experimental demonstration of the potential of the proposed device or the validity of the proposed approach; identify key requirements for validating the technology, potential problems and propose approach for Phase II demonstration; identify dual-use and commercialization potential.
PHASE II: Demonstrate the technology with small-scale experiments, or experiments aimed at verifying key aspects of the concept for overall validation; provide detailed plan for scaling-up and additional testing; develop and implement commercialization plan.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: The technology leads to significant mass savings in spacecraft design, extended operating lifetimes, and effective designs of orbital transfer vehicles (OTVs) for servicing military spacecraft or for space debris removal.
Commercial Application: The technology can also lead to revolutionary capabilities for commercial and civilian space applications. OTVs with this propulsion system would provide insurance against misplaced satellites with non-functional propulsion systems.
TECHNOLOGY AREAS: Sensors
OBJECTIVE: To explore next-generation nanoscale dynamic imaging microscope technologies employing Coherent Diffractive Imaging combined with tabletop-scale coherent EUV/soft x-ray sources.
DESCRIPTION: Intense femtosecond laser pulses propagating through gases can generate, through a process known as high harmonic generation, coherent extreme ultraviolet and x-ray radiation. Recent advances in phase matching of the generating pulse with a desired high harmonic wavelength have demonstrated useful outputs up to 1 keV x-rays, with 10 keV or more possible. Recent work has also demonstrated that Coherent Diffractive Imaging [1] can be used in conjunction with ultrafast, tabletop-scale short-wavelength light sources based on high harmonic generation to achieve record (~20 nm) resolution for full-field optical imaging [2,3]. Further advances to sub-10-nm resolution, as well as the possibility of three-dimensional imaging [4], make this a promising technique. This topic seeks to advance this technology to develop an optimized, compact, stand-alone, coherent diffractive imaging microscope for applications in: 1) nanotechnology (mask inspection and nanostructure imaging for next-generation electronics and data storage devices) in both transmission and reflection mode; 2) bio-imaging of whole cells in 3D without sectioning and staining, but with the inherent elemental contrast of x-rays; 3) understanding the function of interfaces relevant to catalysis and energy. Further advances depend primarily on improving the capabilities of the high harmonic illumination source [5]. In particular, optimized high harmonic sources are needed with well-controlled and adjustable bandwidth for 3D image extraction, and for the generation of fully spatially coherent, low-scatter beams at photon energies of 40-500 eV for nano-imaging, and in the water window (270-570 eV) for bio-imaging and interface studies.
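As an illustration of the reconstruction step at the heart of coherent diffractive imaging, the sketch below implements classic error-reduction phase retrieval (a standard textbook algorithm, not the specific method of the cited work): the measured diffraction magnitudes are alternately enforced in Fourier space and a support/positivity constraint in object space.

```python
# Sketch (assumption: Fienup-style error-reduction), showing how CDI recovers
# an object from far-field diffraction magnitudes alone, given a support.
import numpy as np

def error_reduction(magnitudes, support, n_iter=200, seed=0):
    """Recover a real, non-negative object from |FFT(object)| data."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(magnitudes.shape))
    g = np.real(np.fft.ifft2(magnitudes * phase))  # random-phase start
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = magnitudes * np.exp(1j * np.angle(G))  # enforce measured magnitudes
        g = np.real(np.fft.ifft2(G))
        g = np.where(support & (g > 0), g, 0.0)    # enforce support & positivity
    return g

# Toy demonstration: a small object inside a known support region.
obj = np.zeros((32, 32)); obj[12:20, 14:18] = 1.0
support = np.zeros((32, 32), bool); support[10:22, 12:20] = True
mags = np.abs(np.fft.fft2(obj))
rec = error_reduction(mags, support)
```

In practice the topic's harder requirements (adjustable bandwidth, 3D extraction) layer onto this basic iteration rather than replace it.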
PHASE I: Phase I will develop an engineering design for an imaging instrument that will have maximum flexibility of operation from 40-570eV. The imaging system will need adjustability in wavelength and in power. Feasibility of a working imaging system will be demonstrated.
PHASE II: Phase II will produce a working prototype capable of imaging a wide range of samples. A high speed data processor will need to be designed for fast retrieval of 3D images.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: The full instrument will be important for the military for imaging of nano-scale objects arising from nano-tech weapons, bio-agents, and battlefield medicine.
Commercial Application: In the commercial sector, new nano-imaging modalities will be enabled for understanding and inspection, for example in medical, micro-systems, and non-destructive evaluation applications.
TECHNOLOGY AREAS: Information Systems, Space Platforms
OBJECTIVE: Demonstrate novel onboard IP routing protocols for satellites that link user preferences to network conditions and improve end-to-end network performance, accounting for heterogeneity of satellite nodes, interference among satellite links, etc.
DESCRIPTION: The defense satellite communication system (DSCS) is a satellite system designed to provide a high-volume and secure communication infrastructure for supporting real-time military voice and data communication. A number of satellite communication systems, including the DSCS III satellite constellation, have been deployed and have successfully supported military communication over the past decades. However, those systems cannot meet the demands of modern mission-critical military operations, which require high bandwidth for a large number of war-fighting users [1, 2]. One example of this was demonstrated in Afghanistan, where the DoD had to lease transponders on commercial satellites to extend communication reach and increase bandwidth for end-users in the warfight [3]. To address this issue, the DoD initiated a migration of the existing satellite-based circuit-switched communication systems to packet-switched systems using Internet protocols. As shown in [4], the current MILSTAR II satellite communication system takes two minutes to transmit a 24 MB 8’’x10’’ image, while next-generation IP-based satellite communication will be able to do so in less than one second. Cisco Systems Inc. and Intelsat General Corp. demonstrated the viability of conducting military communication via an Internet router in space [5].
In an IP-based satellite communication network, each satellite can be considered a node or router in 3-D space, and a number of satellites can form an ad-hoc network in 3-D space. When designing routing protocols for such IP-based satellite ad-hoc networks, the following challenges must be considered due to the unique characteristics of satellite communication links, heterogeneous satellite nodes, and end-users. First, satellite links have higher bit error rates than the terrestrial links forming the Internet, and round-trip times range from several hundred milliseconds to half a second. These characteristics have a significant impact on the performance of the user datagram protocol and the transmission control protocol over satellite, along with QoS requirements including connection establishment delay, connection establishment failure probability, throughput, and transmit delay. Second, satellite nodes are heterogeneous and move through time-varying 3-D space. Satellite nodes can be located at low and medium earth orbits. Those nodes have different up/down link bandwidth and computing resources. Because those nodes are not geostationary, the slant range varies, inter-satellite links are dynamic, and communication on one link may interfere with other links. Third, end-users of satellite communication systems, including remotely piloted aircraft and ground-based vehicles, are heterogeneous. Lastly, for mission-critical operations, the satellite networks often have tight real-time requirements, e.g., end-to-end delay, bandwidth, etc.
To address those challenges, this STTR topic calls for novel theoretical constructs and effective designs of advanced onboard IP routing over satellites that link user preferences and network conditions, including timeliness, availability, throughput, and heterogeneity of satellite nodes. To improve the usage of network resources and reduce end-to-end latency, cross-layer designs and multiple-path/QoS routing mechanisms shall be developed to allow users to negotiate and meet end-user QoS requirements under given network conditions. In particular, the satellite ad-hoc network can be considered a mesh network in time-variant 3-D space, consisting of heterogeneous access satellite nodes, core satellite nodes, and clients with satellite antennas. While this topic encourages novel solutions, examples of state-of-the-art algorithms such as cross-layer design [6] and multi-path QoS routing [7] in mesh and ad-hoc networks can be found in recent scientific publications.
PHASE I: Develop IP-based satellite ad-hoc network architectures together with QoS for transmission control protocol (TCP) over satellite covering connection establishment delay, connection establishment failure probability, throughput, transmit delay, residual error ratio, protection, priority, and resilience. Demonstrate feasibility through a proof-of-concept implementation.
PHASE II: Refine the candidate solutions selected from Phase I results. Investigate options, e.g., maximum throughput vs. window size for various round-trip delays to improve performance of TCP over satellite. Assess computational requirements and communication overheads associated with centralized and decentralized implementation schemes for cross-layer algorithms and TCP over satellite. Document TCP research issues related to satellites.
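The Phase II throughput-versus-window study reduces, to first order, to the classic window-limited bound throughput ≤ window/RTT. The sketch below (an illustrative helper, not a required deliverable) shows why half-second satellite round trips throttle default TCP window sizes.

```python
# Sketch: window-limited TCP throughput bound, throughput <= window / RTT,
# evaluated for a GEO-like 500 ms round-trip time.
def max_tcp_throughput_bps(window_bytes, rtt_s):
    """Upper bound on TCP throughput (bits/s) for a given window and RTT."""
    return 8.0 * window_bytes / rtt_s

# Default 64 KiB window over a 500 ms satellite path: ~1.05 Mb/s ceiling.
geo = max_tcp_throughput_bps(64 * 1024, 0.5)
# Window scaling (RFC 7323) to a 1 MiB window raises the ceiling 16x.
scaled = max_tcp_throughput_bps(1024 * 1024, 0.5)
```

This is why window scaling and related TCP-over-satellite options appear among the Phase II research issues.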
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: Extend the architectures and/or technology components into real-time dynamic net-centric operations with security measures, e.g., IPSec to communicate at tactical edges where there exist users and warfighters with different priorities and credentials.
Commercial Application: Transition the architectures and/or technology components into commercial systems. Anticipated benefits may include significant improvement of end-to-end performance of links with errors and enhancement of TCP over satellite channels.
TECHNOLOGY AREAS: Materials/Processes, Electronics
OBJECTIVE: To develop conformal, light-weight and load-bearing antennas based on conductive textile threads that can endow small unmanned aerial vehicles (UAVs) with operational capability down to the UHF band.
DESCRIPTION: Communication missions of small UAVs in low-altitude operation are severely limited for signals such as HAM radio, satellite radio, cellular phone, or television because of the large antennas required. An approach to accommodating the required “low frequency” antennas on small UAVs has been to utilize the entire volume of the structures, which are often constructed of reinforced polymer composites. Recent advances in materials and electronics, as well as new design philosophies, have demonstrated that conductive, flexible, and high-strength textile threads can be employed and woven within the UAV structure to lower the antenna’s operational frequency. Similar textile thread-based antennas can also be employed for conformal on-surface installations on a variety of platforms. These can result in scalable and removable antennas that can be attached to existing vehicles without adversely impacting their mission performance.
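The size problem can be quantified with the quarter-wavelength rule of thumb, l = c/(4f): a resonant element at the low end of UHF is already comparable to a small airframe. A minimal sketch with illustrative frequencies (the specific bands are examples, not requirements):

```python
# Sketch: quarter-wave element length l = c / (4 f) versus operating frequency.
C = 299_792_458.0  # speed of light, m/s

def quarter_wave_length_m(freq_hz):
    """Length of a quarter-wavelength resonant element at frequency freq_hz."""
    return C / (4.0 * freq_hz)

l_uhf = quarter_wave_length_m(300e6)  # ~0.25 m at the bottom of UHF
l_fm = quarter_wave_length_m(100e6)   # ~0.75 m for FM broadcast reception
```

Weaving conductive threads through the 3D structure effectively buys this electrical length without a protruding element.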
PHASE I: Examine the performance of conductive textile threads in realizing planar and non-planar antennas with performance comparable to traditional metallic antennas. Examine new design concepts and demonstrate their capability to lower the operational frequency by exploiting the 3D structure of small UAVs.
PHASE II: Based on the design concepts proven under Phase I, develop laboratory prototypes for testing and evaluation of conformal, light-weight and load-bearing antennas for small UAVs based on conductive textile threads. Evaluate the performance of these prototypes under mechanical stress conditions and introduce remedies to avoid detuning and undesirable frequency variations.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: Conformal, light-weight and load-bearing antennas based on conductive textile threads will find significant usage in communication/navigation/surveillance systems for flight in controlled airspace.
Commercial Application: Conformal and flexible antennas can be used in various commercial products including cell phones, RFIDs, body-worn antennas and even medical applications for improved imaging and patient monitoring.
TECHNOLOGY AREAS: Sensors
OBJECTIVE: Develop physics-based MIMO radar clutter modeling and simulation capability.
DESCRIPTION: Fundamental to the performance evaluation of putative optimal and adaptive MIMO radar signal processing algorithms is a characterization of clutter in terms of its statistical and spectral properties. Radar clutter is the resultant of many scattering mechanisms, and MIMO radar clutter is certainly no exception. Therefore, this effort is geared towards the development of a physics-based modeling and simulation capability for MIMO radar clutter viewed from co-located (single platform), distributed (multiple platforms such as swarms of UAVs), and hybrid configurations. Important issues in this context include the following items:
1. Employ physics (Maxwell's equations) to analyze the scattering mechanisms pertaining to MIMO radar clutter, while including the impact of multipath (produced, for example, by topographic undulations, ground vegetation and boulders or outcroppings, scattering by other members of a swarm of UAVs, and urban complexities), and radar waveform effects (including various selected modulation schemes, chirps of various kinds, and Doppler shifts from moving targets and sources), and then develop analytical models for clutter scenarios encountered in MIMO radar. The analytical models must account for the statistical and spectral properties of clutter in these scenarios in terms of fundamentals pertaining to electromagnetic and geometric factors inherent to the scattering surfaces.
2. Develop computer simulation schemes for generating MIMO radar clutter scenarios using the physics-based models from (1). One key requirement of the models and their realization in computer simulation schemes is independent control of the first-order probability density function (PDF) and the correlation properties. Another requirement of the computer simulation scheme is accurate (eliminating diffraction from artifactual edges) and "water-tight" CAD file representation of targets, topography, and buildings or other urban details. It is also imperative that the computer simulation scheme be provably accurate and rapid. The need for rapidity translates into ensuring that the code scales slowly with problem size.
3. Account for statistical and spectral properties of simulated and measured MIMO radar clutter data using the physics based models
4. In light of the statistical aspects of the physics it is necessary that performance validation of the simulation schemes using statistical and spectral analysis tools is pursued.
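One standard way to meet the requirement in item 2, independent control of the amplitude PDF and the correlation properties, is the compound-Gaussian (SIRV) construction. The sketch below is an illustrative realization, not a prescribed method: an AR(1)-correlated complex-Gaussian speckle (setting the spectrum) is modulated by an independent gamma texture (setting the amplitude PDF), yielding correlated K-distributed clutter.

```python
# Sketch (assumption: compound-Gaussian / SIRV clutter model).
import numpy as np

def k_clutter(n, rho, shape, seed=0):
    """Correlated K-distributed clutter samples.

    rho   controls the correlation (AR(1) speckle coefficient).
    shape controls the amplitude PDF (gamma texture shape parameter)."""
    rng = np.random.default_rng(seed)
    # Correlated complex-Gaussian speckle: fixes the spectral properties.
    w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    s = np.empty(n, complex)
    s[0] = w[0]
    for k in range(1, n):
        s[k] = rho * s[k - 1] + np.sqrt(1.0 - rho**2) * w[k]
    # Independent gamma texture: fixes the first-order amplitude PDF.
    tau = rng.gamma(shape, 1.0 / shape, n)
    return np.sqrt(tau) * s

c = k_clutter(4096, rho=0.9, shape=0.5)  # spiky, strongly correlated clutter
```

Because rho and shape enter through independent factors, the PDF and correlation can be dialed separately, which is exactly the knob structure item 2 asks the physics-based models to justify.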
PHASE I: Develop physics-based models for MIMO radar clutter observed from co-located, distributed, and hybrid MIMO sensing assets. The models need to address the 4 items discussed in the "Description" above.
PHASE II: This phase will be devoted to extensive development and validation of the simulation schemes from Phase I. Phase II will include relevant code and a final report that provides comprehensive documentation of the techniques developed in the effort.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: More effective target identification within urban canyons and complex topography.
Commercial Application: Law enforcement and RFID within cluttered environments.
TECHNOLOGY AREAS: Information Systems, Electronics
OBJECTIVE: Provide monitoring and control of remote devices that operate in autonomous domains while assuring integrity of end-to-end control functions using untrusted COTS operating systems and software.
DESCRIPTION: The deployment, monitoring, and control of remote devices and applications operating in autonomous administrative domains enable a variety of new network-centric missions in both the defense and civilian domains. These applications range from control of remote vehicles and sensors to domestic healthcare and industrial process control. Large-scale deployment of embedded devices and applications requires that commercially available platforms (e.g., operating systems, network software) be used and that their operation be enabled in autonomous administrative domains; i.e., embedded devices and applications may operate in a physically remote domain outside the full control of the client who executes a particular mission or application. For instance, a remote sensing device may be launched by a different party than the one who controls the mission and establishes the perimeter and time of operation and fidelity of sensor readings. Yet the party that launches the sensing device may have access to the device’s software and could inadvertently or deliberately misconfigure it or initialize it in an insecure state. Furthermore, the use of commercially available platforms allows exploitation of operating system and application flaws by a knowledgeable entity. Thus it becomes necessary to ensure end-to-end control of remote embedded devices and applications despite the use of commercially available platforms and operation across different administrative domains. This makes the notion of an end-to-end trusted path a necessary feature for embedded devices and applications. Traditional notions of end-to-end trusted path have relied on establishing cryptographically secure channels between trustworthy operating systems, applications, and service providers [1, 2].
While use of such channels is adequate in many applications, it is insufficient whenever commercially available platforms are used and the remote control mission or application spans multiple administrative domains. New capabilities are needed to ensure the integrity of sensitive application-code execution on both local and remote platforms [3], to provide remote attestation of code execution across different administrative domains, and to ensure the resiliency of the end-to-end infrastructure that establishes and maintains the trusted path. New capabilities are also necessary to detect misconfigured local client machines and remote embedded devices and applications before trusted path use. It also becomes necessary to detect and remove malware-contaminated client and remote-server software, and enable isolation of both trusted-path ends from contaminated platform code and improper management by remote administrators.
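To make the remote-attestation idea concrete, the toy sketch below shows a nonce-based challenge-response in which a verifier checks an HMAC over the device's code measurement against a known-good ("golden") value. All names are hypothetical; real systems anchor the key and measurement in a hardware root of trust (e.g., a TPM) rather than software alone.

```python
# Sketch (assumption: toy hash-based attestation; illustrative only).
import hashlib
import hmac
import os

def measure(code: bytes) -> bytes:
    """Measurement of the platform's code: here, simply its SHA-256 digest."""
    return hashlib.sha256(code).digest()

def attest(key: bytes, nonce: bytes, code: bytes) -> bytes:
    """Device side: bind the fresh nonce to the current code measurement."""
    return hmac.new(key, nonce + measure(code), hashlib.sha256).digest()

def verify(key: bytes, nonce: bytes, report: bytes, golden: bytes) -> bool:
    """Verifier side: recompute the report over the golden measurement."""
    expected = hmac.new(key, nonce + golden, hashlib.sha256).digest()
    return hmac.compare_digest(report, expected)

key, code = os.urandom(32), b"firmware-image-v1"
golden = measure(code)
nonce = os.urandom(16)  # freshness: prevents replay of old reports
ok_clean = verify(key, nonce, attest(key, nonce, code), golden)
ok_tampered = verify(key, nonce, attest(key, nonce, b"tampered"), golden)
```

The nonce provides freshness across administrative domains; detecting a misconfigured or contaminated endpoint reduces to the measurement diverging from the golden value.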
PHASE I: Perform research necessary to design, develop and demonstrate a methodology for the establishment of end-to-end trusted path for embedded devices and applications execution on COTS platforms operating in autonomous administrative domains based on a network-centric computing model.
PHASE II: Develop and demonstrate a prototype implementing Phase I methodology and demonstrate prototype baseline capability using commercially available platforms and devices. Identify appropriate performance metrics (e.g., confidentiality, integrity of end-to-end trusted path in the presence of known malware and defined insider attacks) for prototype evaluation. Detail the plan for the Phase III effort.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: This research is highly practical for Intelligence Communities, Communications, and Homeland Security.
Commercial Application: This research will be useful for industrial process control, remote patient monitoring in domestic healthcare, and mobile social services.
TECHNOLOGY AREAS: Air Platform, Electronics
OBJECTIVE: Enable significant improvements in the spatial and temporal resolution of critical phenomena in flowfields relevant to AF air-breathing combustion systems via development of innovative high-bandwidth planar or volumetric imaging and quantitative measurements.
DESCRIPTION: The development of air-breathing engines requires the utilization of advanced diagnostic methods capable of providing nonintrusive dynamic measurements in harsh, often high-temperature environments. These measurements can help engineers and engine designers investigate a variety of challenging problems, including flame-holding and flame-spreading in scramjet engines and augmenters, combustor ignition and relight, and jet noise. To adequately understand these phenomena, engineers will increasingly rely upon advanced, nonintrusive, optical instrumentation capable of operating at high bandwidths and spatial resolution. Desired measurements include but are not limited to parameters such as combustion intermediate species, gas static temperature, gas velocity, and turbulence quantities at rates in excess of 50 kHz and at resolved spatial scales of less than a centimeter, with preference for resolution on the order of millimeters, in flowfields appropriate for the engineering development of propulsion subsystems. Such flowfields may approximate those encountered in gas turbines, augmenters, pulse detonation engines, and scramjets, as well as other portions of an air-breathing flowpath. This topic is broad in scope: a wide spectrum of methods may adequately address it, provided the proposed solution has the potential to radically improve data rates and resolution beyond the current state of the art.
PHASE I: Develop concepts for measurement systems for one or more flowfield quantities of relevance to AF-related air-breathing engines. Complete proof-of-concept measurements in a laboratory-scale flowfield (reacting or nonreacting, depending on the application). Demonstrate data acquisition rates in excess of 50 kHz.
PHASE II: Implement the full diagnostic system in an appropriately challenging flowfield. Demonstrate data rates greater than 50 kHz and spatial resolution of scales less than one centimeter, with preference for resolution on the order of millimeters, over planar or volumetric extents sufficient to adequately resolve the macroscale behavior of the demonstration flowfield.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: Success will yield diagnostic systems that can be used for a variety of propulsion applications as noted above.
Commercial Application: Such instrumentation can also be applied, potentially, to commercial engines (e.g., aircraft or stationary, power-generation gas turbines) or high-speed flowfields.
TECHNOLOGY AREAS: Electronics, Space Platforms
OBJECTIVE: Develop a Pulsed Electro-Acoustic or alternative technique to measure the volume charge distribution and electric fields within realistic spacecraft dielectric insulators at space-like radiation energies.
DESCRIPTION: Pulsed Electro-Acoustic (PEA) techniques were developed to determine the internal charge distribution of high voltage power cable insulators after manufacturing. [1] By determining the volume charge distribution, the internal electric fields in a dielectric can be determined, allowing for better understanding of conditions leading to current leakage and dielectric breakdown. In the context of spacecraft charging and material selection, understanding of internal electric fields is essential to understanding the conditions capable of causing breakdown in a complex radiation environment. Multiple spacecraft have suffered anomalies or been disabled due to presumed dielectric discharge [e.g. 2]. The effects of dielectric discharge and prolonged electrostatic fields can change material properties over time. Understanding the internal charge distribution and charge carrier mobility in dielectrics can lead to material selection less sensitive to dielectric discharge and improved spacecraft materials. The current PEA methods lack the resolution required to determine the charge distribution in thin spacecraft dielectrics and at the relatively low incident energy of typical space plasma.
PHASE I: Determine requirements for and design a vacuum-capable device with resolution to determine charge distributions in typical spacecraft materials at typical on-orbit radiation energies at a range of temperatures typical to orbit. Evaluate existing technology to identify components capable of providing the necessary resolution with modification.
PHASE II: Construct a product capable of determining charge distributions in typical spacecraft materials at typical on-orbit radiation energies and perform tests on several common materials to provide a baseline of capabilities. Perform analysis of what is required to improve resolution and implement improvements if possible.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: Provide material testing to determine spacecraft materials for use in hazardous environments, leading to longer spacecraft life and improved survivability and reliability during substorms and other space radiation hazards.
Commercial Application: Generate improved data on dielectric charge distributions to improve material selection for spacecraft, thin electrical insulators on high voltage cables, and capacitors. Other uses may become apparent after continued use.
TECHNOLOGY AREAS: Information Systems
OBJECTIVE: The United States Air Force is looking for technological innovations based on machine learning techniques for Dynamically Evolving Malware Detection in Data and Network Streams.
DESCRIPTION: Polymorphic malware poses increasing challenges to effective signature-based antivirus protection; antivirus defenses have managed to stay ahead in the virus-antivirus co-evolution race only because antivirus signature updates are specific and targeted, whereas most polymorphic malware variation is random and undirected. More powerful malware mutation strategies that use automated machine learning to adapt to signature updates in the wild are being examined [1]. By tailoring their mutations to specific signature updates, such malware can reliably survive signature updates without re-propagating, posing potentially serious threats to existing network infrastructures. The relentless appearance of novel malicious and non-malicious executables can be conceptualized as a data stream in which each data point is an executable. New kinds of attacks and mutations constitute concept drift in such a stream. Current machine learning-based classification approaches that are being examined to detect concept drift in data streams (e.g., [2]) show promise for detecting evolutionary malware. One open challenge in fully adapting these to malware detection is the difficulty of reliably and efficiently extracting useful features from binary executables, which tends to be substantially more difficult than the feature-extraction problem for purely textual streams [3]. A second challenge is concept evolution—the continuous appearance of novel classes (e.g., new types of malware) in the stream [4]. Machine learning approaches that assume a fixed number of classes are therefore impractical for malware detection. In order to detect new malware as well as reactive and adaptive malware, dynamic mechanisms for novel class detection based on sophisticated machine learning algorithms are needed. Such mechanisms should account for both concept drift and concept evolution to reliably detect new and old malware variants in infinite-length binary data streams. 
Furthermore, practical detection mechanisms should be time-constrained so that prompt action can be taken.
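A minimal sketch of the novel-class detection idea in a stream setting (an assumed centroid-and-radius model for illustration; practical systems use far richer ensemble and clustering machinery, per [4]): points far from every known class are flagged as potentially novel, and confirmed novel classes are folded back into the model to track concept evolution.

```python
# Sketch (assumption: toy centroid-based stream classifier with novelty flag).
import numpy as np

class StreamDetector:
    def __init__(self, radius):
        self.radius = radius   # max distance to still count as a known class
        self.centroids = {}    # label -> feature-space centroid

    def train(self, label, points):
        """Add or update a known class from labeled feature vectors."""
        self.centroids[label] = np.mean(points, axis=0)

    def classify(self, x):
        """Nearest known label, or 'novel' if nothing is within the radius."""
        best, dist = None, np.inf
        for label, c in self.centroids.items():
            d = np.linalg.norm(x - c)
            if d < dist:
                best, dist = label, d
        return best if dist <= self.radius else "novel"

det = StreamDetector(radius=2.0)
det.train("benign", np.array([[0.0, 0.0], [0.5, 0.5]]))
det.train("malware", np.array([[10.0, 10.0], [10.5, 9.5]]))
known = det.classify(np.array([0.2, 0.1]))    # lands near the benign centroid
novel = det.classify(np.array([50.0, 50.0]))  # far from every known class
```

Concept drift is handled by retraining centroids as the stream moves; concept evolution is handled by calling train() with the newly confirmed class, so the number of classes is not fixed in advance.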
PHASE I: Conduct a preliminary investigation of machine learning-based malware detection with novel classes in a controlled environment, as well as novel class detection in an unconstrained textual environment (e.g., web blog content).
PHASE II: Develop a proof-of-concept demonstration of the technology in a real-world environment, with real-time applications.
PHASE III DUAL USE COMMERCIALIZATION
Military Application: Tools developed from this research will be used for active defensive operations, such as robust malware detection and quick analysis of web content for malicious intent.
Commercial Application: Results of the research will be useful in commercial intrusion/malware detection systems and stream (e.g., text and binary) classification systems.
TECHNOLOGY AREAS: Air Platform, Space Platforms
OBJECTIVE: Develop a computational toolkit to estimate the stresses in rotating blades and other structures due to multiple structural vibratory modes.
DESCRIPTION: Turbine engine stress estimation is necessary during engine stress survey testing. Modern turbine engines with integrally bladed rotors (IBR) tend to have a higher number of blade vibratory modes excited by unsteady flows associated with inlet guide vanes (IGVs), stators, distortion, etc. This often results in multiple structural vibratory blade modes being excited simultaneously, making it difficult to estimate actual maximum blade stress using existing methods that model only single vibratory modes. A need exists for a method to extract component critical stress limits for complex multi-mode stress states from the simple modal limits provided by the manufacturer. In addition, a method is needed that would identify critical stresses arising from a combination of relatively benign individual modal peaks to guide test program planning and life estimation programs.
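The multi-mode difficulty can be illustrated with linear modal superposition: when the relative phases of simultaneously excited modes are unknown, the conservative per-location bound is the sum of the modal stress amplitudes, so modes that are individually benign can combine into a critical state. A minimal sketch with a hypothetical helper and illustrative numbers:

```python
# Sketch (assumption: linear superposition, worst-case phase alignment).
import numpy as np

def worst_case_stress(modal_stress):
    """modal_stress: (n_modes, n_locations) amplitudes from single-mode data.
    Returns the peak phase-aligned combined stress and its location index."""
    combined = np.abs(modal_stress).sum(axis=0)  # conservative upper bound
    loc = int(np.argmax(combined))
    return combined[loc], loc

# Two modes, each under a 100-unit single-mode limit at every location,
# together exceed it at location 0 (60 + 55 = 115).
modes = np.array([[60.0, 20.0, 40.0],
                  [55.0, 30.0, 10.0]])
peak, where = worst_case_stress(modes)
```

A full toolkit would refine this bound with measured phase relationships and search over all admissible mode combinations, as the Phase II software is asked to do.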
PHASE I: The theory will be developed to determine the stresses in a structure while multiple vibratory modes are excited. In addition, the architecture for a program to determine the maximum stresses and present results to the user will also be developed.
PHASE II: A software package will be developed utilizing the theory developed in Phase I to estimate blade stresses under all of the possible mode combinations. The software shall automatically search for and determine those combinations that produce the greatest stresses and present them in an actionable manner. In particular, the location and stress amplitude of the maximum stress for a variety of blade geometries and modal combinations shall be presented.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: This code will be used extensively in the analysis of turbine engine blade stresses for both ground test programs and in support of flight testing and operations.
Commercial Application: The software could be used to estimate stresses in other rotating systems such as industrial steam and wind turbines, fans, and other components.
TECHNOLOGY AREAS: Information Systems, Materials/Processes
OBJECTIVE: Develop algorithms and tools to advance the state-of-the-art in the simulation of plasma in the strongly coupled limit, where the energy associated with the long-range Coulomb fields is larger than the thermal energy associated with the particles.
DESCRIPTION: Strongly coupled plasmas occur in a wide range of physical situations, from ultra-cold neutral plasmas and tenuous ionospheric plasmas [1], through explosive gases associated with conventional munitions, to the extreme conditions associated with high-energy ultrafast laser interactions with matter, even touching on warm dense matter physics [3]. Since the flow of energy in these situations is both critical to the functioning of Air Force technology and central to our fundamental understanding of plasma physics, it is important to have theory and related software to understand and predict plasma behavior in the strongly coupled regime. Non-equilibrium plasmas play an important role in a variety of Air Force high-technology products, whether by providing the background operating environment (for space-based assets), the means by which electrical energy is converted into high-power electromagnetic signals (directed energy sources), novel plasma chemistry (in micro-plasma devices), or fundamental limits for certain classes of quantum information systems (trapped-ion and cold-atom systems). The creation and evolution of these non-equilibrium plasmas, and the management of energy flow in these potentially high-energy-density situations, are important to the further research and development of a wide range of Air Force technology. Currently, methods based on particle-in-cell (PIC) tools and single-fluid magnetohydrodynamic (MHD) models are the workhorse computational technology used to design and evaluate the intersections between plasma scenarios and Air Force needs.
While the PIC and MHD software packages have reached a relatively high state of development, with robust numerical algorithms, scalable parallel implementations, and high-fidelity physical accuracy, these codes generally simulate so-called "classical" plasmas [2], in which the kinetic energy of the charged-particle population greatly exceeds the potential energy associated with Coulomb self-fields. (Note that this is in contrast to other uses of the term "classical" in the context of quantum mechanical processes in plasmas.) When the Coulomb field energy becomes large, however, traditional methods of plasma physics often fail, as inter-particle dynamics and multi-particle correlations become important. The associated physics often requires fine resolution of spatial dynamics on length scales significantly shorter than the screening Debye length of classical plasmas. These small spatial scales naturally introduce novel time scales and equilibration processes that are important to understand and simulate. Simulation therefore requires coupling multiple physical models, retaining the long-range forces common in plasmas alongside the short-range inter-particle forces more commonly associated with molecular dynamics. Finally, although it lies outside the scope of the current topic, this is clearly important physics toward the eventual inclusion of quantum processes in plasma physics.
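The strongly coupled regime described above is conventionally quantified by the Coulomb coupling parameter Gamma, the ratio of nearest-neighbor Coulomb energy to thermal energy; Gamma >~ 1 marks strong coupling. A small sketch using the standard textbook formula (the example densities and temperatures are illustrative, not drawn from the cited systems):

```python
import math

def coupling_parameter(n_m3, T_K, Z=1):
    """Coulomb coupling parameter Gamma = (Z*e)^2 / (4*pi*eps0 * a * kB * T),
    where a is the Wigner-Seitz radius a = (3/(4*pi*n))^(1/3).
    Gamma >~ 1 marks the strongly coupled regime."""
    e = 1.602176634e-19       # elementary charge, C
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    kB = 1.380649e-23         # Boltzmann constant, J/K
    a = (3.0 / (4.0 * math.pi * n_m3)) ** (1.0 / 3.0)  # Wigner-Seitz radius, m
    return (Z * e) ** 2 / (4.0 * math.pi * eps0 * a * kB * T_K)

# Typical laboratory discharge plasma: weakly coupled (Gamma << 1)
print(f"{coupling_parameter(1e18, 1e4):.2e}")
# Ultracold neutral plasma: strongly coupled (Gamma >~ 1)
print(f"{coupling_parameter(1e15, 1.0):.2e}")
```

The same parameter is what distinguishes the "classical" plasmas handled well by existing PIC/MHD tools (Gamma << 1) from the regime this topic targets.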
PHASE I: Based on novel concepts beyond the current state-of-the-art, develop a plan to build and validate a strongly coupled plasma simulation model. Plan should address modeling, development of algorithms, implementation of computer code and, given the wide range of plasma parameters exhibiting strongly coupled phenomena, first tests of results for one physical system.
PHASE II: Develop algorithms for a strongly coupled plasma simulation tool and implement them in prototype computer code appropriate for, at least, two-dimensional physical scenarios. Again, given the wide range of potential physical arenas, apply the code to simulate two specific systems, and test simulation results against observed data from the literature.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: Use the tool to characterize the impact of strongly coupled plasma in high-power electromagnetic fields for counter-directed energy, explosive blast situations, and space-weather effects on electronics.
Commercial Application: The tool will also have broad application to novel chemistry based on micro-plasma technology for novel environmental remediation, and for quantum information systems based on trapped ions for secure business transactions.
TECHNOLOGY AREAS: Materials/Processes
OBJECTIVE: The synthesis and characterization of non-toxic, environmentally stable and optically efficient nanoparticles that absorb and emit in the infrared and lack any visible light emissions.
DESCRIPTION: There is an Air Force need to develop optical materials that operate in the infrared spectrum. The ideal material would be an environmentally stable, non-toxic and optically efficient nanoparticle that absorbs infrared light and either upconverts or downconverts the energy to a different infrared wavelength. Emissions from the nanoparticles should be detectable using commercial infrared imaging devices, such as night vision goggles, short-wave infrared (SWIR) cameras or forward-looking infrared (FLIR) cameras. It is anticipated that these nanoparticles will be used outdoors, requiring that any absorption and emission wavelengths fall outside atmospheric absorbance bands. Additionally, this material should have no emission at 400-700 nm wavelengths, be synthesizable economically and practically at large scale, resist photobleaching, and either fluoresce or phosphoresce.
PHASE I: Gram quantities of monodisperse nanoparticles with Stokes shifts of at least 100 nm that are optically active in the infrared will be synthesized and characterized using traditional laboratory equipment as well as infrared imaging devices. Materials will be excited with 830 nm or 1064 nm light sources and either up- or down-convert in the infrared.
PHASE II: Research on new infrared nanoparticles would continue, though with an emphasis on production scalability, emission strength, and environmental stability. Additionally, the effects of the materials on the environment and animal toxicology will be examined. Once an optimum formulation is identified, a > 500 g sample will be required as a final deliverable.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: Military applications include tagging and tracking of assets, identifying friend or foe, and for use in identifying disturbed soil in perimeter security.
Commercial Application: Commercial applications include fluorescence optical imaging for biomedical applications.
TECHNOLOGY AREAS: Materials/Processes
OBJECTIVE: Develop a computational model to predict the flow field in the attachment region of an electric arc and the associated material removal.
DESCRIPTION: Data and the ability to predict surface heating and material removal of electrodes at moderate to high pressures (10-300 atm) with a moving arc are needed. The complexity of the electrode erosion process, involving extreme temperatures (~14,000 K), a complex charge concentration zone, a material/gas interface, and the creation of a liquid pool at the arc foot, combined with the difficulty in obtaining validation data due to the brightness, temperatures, and voltages involved, has limited the capability to model electrode material removal. Current models can predict material removal rates within about an order of magnitude, but many factors are poorly understood, such that many different parameter settings can generate the same results. The flow field in the arc attachment region; the constriction and attachment of the arc; the magnetic field produced by the arc; the effect on the arc of externally applied magnetic fields; the melting and vaporization of the electrode surface; removal of material due to surface shear, chemical processes, magnetic forces, and material vaporization; and the flow of current once within the electrode surface all must be understood and relevant models included in the computational model.
PHASE I: Develop the theory needed to model the relevant processes and identify existing computational tools that can be utilized as-is or in modified form in development of the final computational tool. Develop the architecture for the computational tool and a design for the test apparatus to be used to gather data needed to develop and tune the model.
PHASE II: Develop the computational model. Construct experimental apparatus and perform tests and analysis needed to determine properties, understand relevant processes, and develop databases for use by the computational tool. Tune computational model and validate using experimental data.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: Computational tool has military uses in the design of electrodes for arc heaters and rail guns.
Commercial Application: Computational tool could be used to aid in the design of large transformers and other electrical equipment.
TECHNOLOGY AREAS: Materials/Processes, Sensors
OBJECTIVE: Design, develop, and fabricate functional units that can serve as the “cellular” elements of an ensemble-based mesoscopic approach toward eventual programmable matter.
DESCRIPTION: Computer science has provided a strong theoretical foundation for the concept of programmable ensemble based matter: a form of matter created from massive numbers (>10^9) of nearly homogeneous units where the individual units are small enough that the ensemble functionally appears to be a uniform material. Each individual unit is required to take on those properties necessary for it to operate within the ensemble including:
1. A physical structure under 0.1 cubic millimeter with no dimension greater than 700 micrometers
2. The ability to collect/scavenge energy from external sources (chemical, physical, or electromagnetic), store the energy as needed for the operations of the individual unit, and distribute energy to nearby units.
3. The ability to selectively modify the chemical, physical, electrical, or magnetic properties of the structure's surface to enable selective adhesion, separation, and/or actuation relative to similar structures.
4. The ability to download, store, and execute a program sequence.
5. The ability to communicate with nearby units at the level required for a group of units to operate collectively towards achieving goals such as forming shapes, controlling lattice structure and spacing, and achieving collective motion.
6. The ability to specialize based on physical, chemical, or electrical modification.
Of interest for this particular topic are nano- or micro-scale processes for fabricating units that provide the functionality detailed in the six items above. It is also desired that the process can be scaled down over time to achieve units as small as 0.001 cubic millimeters with no dimension greater than 150 micrometers. It is essential that these processes provide a low-cost approach for the fabrication of billions of units. Approaches based on batch fabrication, novel nanofabrication (such as nanomembranes or nano-imprint lithography), or bottom-up assembly are all acceptable provided they can achieve extremely low-cost, high-volume fabrication.
It is currently believed that the final complex cellular unit will require a hybrid approach with one or more of the six items above provided by differing fabrication processes. These separately fabricated functional elements would then need to be assembled to form the complex cellular unit.
PHASE I: Detail an effective system architecture that embodies the critical functions (i.e., power, signal, heat, mechanism, etc.) and define a credible process for achieving the fabrication of the cellular units. Take convincing steps toward experimental demonstration of the associated fabrication processes of the unit at the desired scale using the proposed approach.
PHASE II: Fabricate and demonstrate prototypes of cellular units at the size detailed in item (1) above, and demonstrate the ability to form scalable ensembles of these units.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: This work will create an innovative dynamic form of matter with application in software-defined reconfigurable antennas and sensors, 3D-displays, and surveillance and reconnaissance systems.
Commercial Application: Applications of programmable matter will include new types of display media, communication and visualization systems, medicine, biology, and software-defined 3D modeling.
TECHNOLOGY AREAS: Air Platform, Information Systems
OBJECTIVE: To develop and evaluate novel methods for understanding and visualizing information obtained from (ultra) massive data sets generated through physics based simulation.
DESCRIPTION: Large-scale simulations are carried out routinely in modern computational physics. Such simulations produce enormous data sets. There is a need to develop and explore new, more efficient methods for extracting coherent and usable information, along with measures of the uncertainty associated with that information, derived using only the given data. As an example, vortex detection [1] is an important aspect of mining data obtained from steady-state computational fluid dynamics simulations on curvilinear grids. A systematic theory for an ab initio data-driven organization of information and representation of data complexity is needed.
The effort should focus on novel approaches for developing a systematic theory for exploiting the relevant physical principles [2] in the simulation of physical phenomena. The approach should lend itself to a variety of secondary transformations for visualization [3]. Technologies may include, but are not limited to, information fusion, cognitive architectures, robust statistical learning, search and optimization, automated reasoning, and possibly new applications of mathematics [4]. Emphasis should be placed on identifying broadly applicable principles, supporting theoretical constructs, practical methods, and algorithmic realizations and measures of effectiveness.
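As a concrete instance of the vortex-detection example mentioned above, one widely used criterion is the Q-criterion, which compares the rotation-rate and strain-rate parts of the velocity gradient tensor. A minimal sketch on single illustrative tensors (a production tool would evaluate this pointwise over the whole simulation grid):

```python
import numpy as np

def q_criterion(grad_u):
    """Q-criterion: Q = 0.5*(||Omega||^2 - ||S||^2), where S and Omega are
    the symmetric (strain-rate) and antisymmetric (rotation-rate) parts of
    the velocity gradient tensor. Q > 0 flags rotation-dominated regions."""
    S = 0.5 * (grad_u + grad_u.T)       # strain-rate tensor
    Omega = 0.5 * (grad_u - grad_u.T)   # rotation-rate tensor
    return 0.5 * (np.sum(Omega**2) - np.sum(S**2))

# Solid-body rotation about z: pure rotation, so Q > 0
rot = np.array([[0.0, -1.0, 0.0],
                [1.0,  0.0, 0.0],
                [0.0,  0.0, 0.0]])
# Pure plane strain: no rotation, so Q < 0
strain = np.array([[1.0,  0.0, 0.0],
                   [0.0, -1.0, 0.0],
                   [0.0,  0.0, 0.0]])
print(q_criterion(rot), q_criterion(strain))  # 1.0 -1.0
```

Criteria of this kind are exactly the hand-crafted features that a systematic, data-driven theory of information extraction would aim to generalize.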
PHASE I: Perform research to develop a set of new technologies as described above and define or identify a representative data set to be used in demonstrating the effectiveness of the resulting technologies.
PHASE II: Extend the Phase I knowledge products to develop a prototype comprehensive framework applicable to massive amounts of data obtained from physics based simulations.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: Developing new techniques for computer simulations, including feature analysis in multi-physics simulations. The technology developed is also applicable to identifying threats hidden in massive datasets.
Commercial Application: This technology could be adapted to efficient analysis of gene expression and other high-throughput molecular data. Analysis of massive, high dimensional data from the life sciences would benefit.
TECHNOLOGY AREAS: Sensors
OBJECTIVE: Develop and demonstrate approaches for realizing metamaterial-based micro-electromechanical (MEM) ultra-low-loss non-dispersive phased-array antennas for multifunction radar and communication systems in unmanned and micro air vehicles.
DESCRIPTION: Phased-array multi-function radar and communication systems are ideal for unmanned and micro air vehicles that possess challenging requirements for size, weight, power consumption and cost. Whether a radar or communication system can meet such challenging requirements is critically dependent on the design of not only the antenna, but also key components such as phase shifters and transmit/receive electronics that are required to drive individual antenna elements. For example, recent advances in the integration of slow-wave structures with MEM switches have resulted [1], [2] in low-loss broadband-matched phase shifters, which will allow them to be integrated with the antenna [3] and driven by a few transmit/receive electronic modules, thereby reducing the size, weight, power and cost of the system. In addition, by taking advantage of combining the different dispersion characteristics of metamaterials and natural materials, non-dispersive phase shifters with a constant phase across a wide range of frequencies can be designed [4], which will greatly simplify the operation of frequency-agile and broadband-modulated multifunction radar and communication systems. To demonstrate the potential of the above-mentioned improvement in phased-array antennas, this project is divided into distinct phases as described in the following.
Both Phase I and Phase II should be conducted with a focus on not only performance improvement, but also robustness, reliability, manufacturability, yield, qualification and technology transition for deployment in unmanned and micro air vehicles. Costs of a transition/qualification effort should be estimated as part of the Phase II work package, and a potential transition strategy should be discussed in basic detail during Phase I and in finer detail during Phase II.
PHASE I: Phase I should focus on the design and evaluation of a metamaterial-based MEM Ka-band phase-shifter unit cell. The unit cell should be capable of uniform performance between 24.5 and 27 GHz with a phase shift of 45 +/- 5 degrees, an insertion loss < 1 dB, a return loss > 20 dB, a switching speed < 5 us, and a power consumption < 10 uW.
PHASE II: The first half of Phase II should focus on the design, fabrication and demonstration of 4-bit Ka-band phase shifters with uniform performance between 24.5 and 27 GHz, a phase resolution of 22.5 deg from 0 deg to 337.5 deg, a phase ripple of less than +/-5 deg, an insertion loss < 2 dB, a return loss > 20 dB, a switching speed < 5 us, and a power consumption < 10 uW. The size of the phase shifter should be smaller than 5 mm x 5 mm x 1 mm.
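For orientation, the 4-bit phase resolution specified above implies 2^4 = 16 states in 22.5-degree steps from 0 to 337.5 degrees, with a worst-case quantization error of half a step (11.25 degrees). A small illustrative sketch of the state arithmetic:

```python
# 4-bit phase shifter: 2**4 = 16 states in 22.5-degree steps, 0 to 337.5 deg.
BITS = 4
LSB = 360.0 / 2**BITS                 # least-significant-bit step = 22.5 deg
states = [k * LSB for k in range(2**BITS)]

def quantize(phase_deg):
    """Nearest realizable phase state (wrap-aware); max error is LSB/2."""
    return min(states, key=lambda s: abs(((phase_deg - s) + 180) % 360 - 180))

print(states[0], states[-1])   # 0.0 337.5
print(quantize(100.0))         # 90.0
print(quantize(355.0))         # 0.0 (wraps past 337.5)
```

The wrap-aware distance matters at the top of the range: a commanded 355 degrees is closer to the 0-degree state than to 337.5 degrees.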
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: The array should be applicable to all military radar and communication systems for which size, weight, power and cost are critical. With low RF loss and DC power, it can be readily stacked to form a two-dimensional array with only air cooling.
Commercial Application: The ultra-low-loss non-dispersive phased-array antenna should be equally applicable to commercial communications, such as satellite communications and wireless communications with multiple-input-multiple-output (MIMO) antennas.
TECHNOLOGY AREAS: Battlespace, Space Platforms
OBJECTIVE: Accurately determine current and future satellite drag with a thermospheric physical model using near real time space weather data and indices.
DESCRIPTION: Orbital drag on spacecraft is a critical factor in providing collision avoidance warnings for manned spaceflight and other high-value assets, accurately cataloging orbiting objects, predicting reentry times, and estimating satellite lifetimes, on-board fuel requirements, and attitude dynamics. Uncertainties in neutral density variations are the major limiting factor for precise low-Earth orbit determination at altitudes below about 700 km. Long-standing shortfalls in satellite drag prediction have been due to the complex thermosphere neutral density and wind response during periods of high geomagnetic activity and inadequate prediction capability for solar EUV and geomagnetic storm variations.
Although thermospheric physical models have been used for research for many years, their technical feasibility for satellite drag forecasting is still not on par with empirical models. A challenging R&D effort is to improve physical model capability so that its 72-hour neutral density forecast could be better than the current empirical Jacchia-Bowman 2008 (JB08) neutral density model. The objective of this STTR is to establish the technical feasibility of thermospheric physical models that can help improve orbit prediction in a near real time environment.
This R&D effort is enhanced by a recent AFOSR-supported Multi-University Research Initiative (MURI), "Neutral Atmosphere Density Interdisciplinary Research," which has greatly improved the understanding of the physics of geomagnetic storm response and of neutral density profiles under solar and geomagnetic quiet-time conditions. Under the MURI, various studies have used new data sets from orbital drag, satellite-borne accelerometers, and EUV remote sensors to improve climatological descriptions of thermospheric variability vs. altitude, latitude, day of year, local time, and solar-geomagnetic conditions. While near real time data and indices, including EUV data, solar flux and geomagnetic indices, and solar wind and IMF measurements, are now becoming available, thermospheric models have not yet taken advantage of these data for improving neutral density and wind modeling. This topic thus requests innovative approaches to develop a flexible and robust thermospheric physical model satisfying the needs of near real time satellite drag forecasting. Successful proposals will help develop innovative algorithms employing new physical concepts and near real time space weather data and indices. The new algorithms will eventually be utilized in the modeling of satellite drag by the Air Force Space Command (AFSPC) and the Joint Space Operations Center (JSpOC).
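The sensitivity described above follows directly from the drag equation: drag acceleration is linear in neutral density, so any fractional density error passes straight through to the drag estimate. A minimal sketch with illustrative satellite parameters (the Cd, area, mass, and ~400 km density values below are order-of-magnitude assumptions, not mission data):

```python
def drag_acceleration(rho, v, Cd=2.2, A=1.0, m=500.0):
    """Atmospheric drag deceleration a = 0.5 * rho * v^2 * Cd * A / m.
    Cd, A (m^2), and m (kg) are illustrative values for a small LEO satellite."""
    return 0.5 * rho * v**2 * Cd * A / m

rho = 5e-12   # kg/m^3, order of magnitude near 400 km altitude
v = 7.7e3     # m/s, roughly circular orbital speed in LEO
a = drag_acceleration(rho, v)
print(f"{a:.2e} m/s^2")  # 6.52e-07 m/s^2

# Drag is linear in density, so a 20% density error gives a 20% drag error.
print(f"{drag_acceleration(1.2 * rho, v) / a:.2f}")  # 1.20
```

Integrated over many orbits, such fractional errors accumulate into large along-track position uncertainty, which is why improved neutral density forecasts matter for conjunction assessment.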
PHASE I: Develop an initial model module running in a near real time environment. Demonstrate that the proposed new physical processes can achieve satellite drag specification capability equivalent to current empirical model JB08. Validate with historical orbit-averaged CHAMP and GRACE satellite neutral density data.
PHASE II: Incorporate new physical processes into thermospheric physical models. Develop innovative solar and geomagnetic forecast algorithms that use near real time space weather data and indices. Demonstrate that the developed model can improve current 3-day satellite drag forecast capability. Deliverables will be the model and prediction algorithms, validation reports, and any necessary data storage and network hardware.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: Results of this work can be used to improve AF space catalog accuracy, a critical component for space situational awareness. The developed model can be utilized in the DoD operational centers.
Commercial Application: The new algorithms matured under this grant can be used for high accuracy collision avoidance in commercial software applications.
TECHNOLOGY AREAS: Information Systems
OBJECTIVE: The United States Air Force is looking for technological innovations to provide assured information sharing capabilities using flexible cloud computing based architectures.
DESCRIPTION: Assured information sharing (AIS) frameworks should provide the ability to dynamically and securely share information at multiple classification levels among U.S., allied and coalition forces. As stated in the DoD Information Sharing Strategy, the vision for AIS is to "deliver the power of information to ensure mission success through an agile enterprise with freedom of maneuverability across the information environment" [1]. Current approaches are investigating ways to share data while at the same time enforcing various confidentiality, privacy and trust policies [2]. Furthermore, incentive-based approaches to sharing data are also being explored [3]. However, due to the need for sharing large amounts of complex data, organizations are increasingly examining flexible cloud computing platforms for storing, sharing, querying and analyzing such data. For example, the CIO of NSA has recently stated that the agency is “focusing on a cloud-centric approach to information sharing with other agencies” [4]. Managing and sharing data in a cloud results in unique confidentiality, privacy, integrity, trust and availability challenges. For example, secret splitting for confidentiality enforcement must be harmonized with data locality for efficient query processing in clouds. Some recent efforts have examined security for cloud computing environments [5]. However, these efforts have yet to address the security challenges of assured information sharing. Therefore, novel approaches are needed for the development of a secure framework for policy-based information sharing in a cloud, with access control, identity management, secure data storage and secure query processing.
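The secret-splitting tension mentioned above can be made concrete with a toy example. In the simplest n-of-n XOR scheme, every share must be fetched to reconstruct the data, so placing shares on different cloud nodes for confidentiality directly conflicts with data locality for query processing. This is an illustrative sketch only, not a production scheme (real systems would use threshold schemes such as Shamir's secret sharing):

```python
import os
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret, n=3):
    """n-of-n XOR secret splitting: n-1 uniformly random shares plus one
    share that XORs with them back to the secret. Any n-1 shares alone
    are uniformly random and reveal nothing about the secret."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def combine(shares):
    return reduce(xor_bytes, shares)

shares = split_secret(b"mission data", 3)
print(combine(shares))  # b'mission data'
```

Note the locality cost: answering even a simple query over split data requires contacting all three share-holding nodes, which is the efficiency challenge the topic asks proposers to address.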
PHASE I: Perform preliminary investigations of advanced AIS solutions that can be combined with cloud based architectures to provide flexible and efficient secure data storage, policy based sharing and secure query techniques.
PHASE II: Develop proof-of-concept demonstrations of the technology.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: Results of the research will have tremendous applications in assured information sharing across the services (Air Force, Navy, and Army), agencies (CIA, FBI, DHS) as well as coalition organizations (US, UK, Australia).
Commercial Application: Results of the research will have applications to commercial information sharing and data analytics needs (e.g., information exchange between financial and healthcare organization).
TECHNOLOGY AREAS: Air Platform, Ground/Sea Vehicles
OBJECTIVE: Develop an electrical energy storage system with combined energy density > 25 Wh/kg and power density > 100 kW/kg for storage capacity from 150 kJ to 150 MJ and cyclic efficiency > 98%.
DESCRIPTION: The U.S. Air Force has wide-ranging needs for electrical energy storage systems for airborne and space craft, such as transient energy management (150 kJ) by the INVENT method to reduce heat loads, high energy storage (5-45 MJ) for directed energy systems and communication satellites, and alternate propulsion methods such as hybrid-battery-electric, e.g. Boeing's SUGAR Volt. A storage system combining fast charge/discharge rates and high energy density does not presently exist, and would enable important new capabilities for the AF. Traditional methods such as batteries, fuel cells, and ultra-capacitors have been developed for energy storage; however, they have limitations such as slow charging rates, lower cyclic efficiencies (< 95%), and a ~40-90% decrease of charge capacity with limited cycling. An alternate method is Superconducting Magnetic Energy Storage (SMES), which stores energy in currents circulating with zero resistance in magnet configurations. SMES has known advantages including no theoretical limit to specific power, very high cycling efficiencies (> 98%), typical charge/discharge rates of 1-10 MJ/sec, and virtually zero degradation with unlimited cycling. SMES devices have been developed with superconductors such as NbTi or Bi-Sr-Ca-Cu-O; however, recent strong advances in improving the mechanical strength and magnetic-field-dependent engineering current density of RE-Ba-Cu-O wires (RE = rare-earth or Y) can significantly improve the specific energy and power densities. Specific power densities of SMES devices built with NbTi or Bi-Sr-Ca-Cu-O wire are typically very high, up to 10^5 kW/kg, but only at specific energy densities of about 20 Wh/kg. With the use of RE-Ba-Cu-O wires, the specific energy densities are expected to increase at least 8x. The energy storage density of SMES devices scales as E ~ B^2, where B is the magnetic field.
Magnets are now being designed with (Y,Gd)-Ba-Cu-O wire for B > 30 T, compared to only 11 T for NbTi wire, which can increase the energy density > 8x. There is also potential to operate RE-Ba-Cu-O at temperatures up to 20-40 K, compared to 5 K for NbTi, which decreases cryo-cooling power requirements ~30-100 times.
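The E ~ B^2 scaling quoted above can be made concrete with the magnetic energy density u = B^2/(2*mu0): moving from an 11 T NbTi-class field to a field of 30 T or more multiplies the stored energy per unit volume by at least (30/11)^2 ≈ 7.4, consistent with the ~8x figure for fields somewhat above 30 T. A minimal sketch:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def energy_density(B):
    """Magnetic energy density u = B^2 / (2*mu0), in J/m^3."""
    return B**2 / (2 * MU0)

u_nbti = energy_density(11.0)   # NbTi-class field, 11 T
u_rebco = energy_density(30.0)  # RE-Ba-Cu-O-class field, 30 T
print(f"{u_rebco / u_nbti:.1f}")  # 7.4
```

Since the ratio is purely (B2/B1)^2, every additional tesla above 30 T compounds the advantage of the higher-field wire.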
Electrical systems for air and space applications require the highest energy and power densities possible with minimal volume. An operating voltage of ± 270 V is generally required to avoid corona discharge for systems that are not self-contained.
It is expected that from this effort an energy storage system can be developed with the desired objectives, capable of > 300,000 charge/discharge cycles. SMES storage devices will operate at temperatures ~5-80 K with a cryocooler, and/or with the option of liquid cooling, e.g., liquid hydrogen at 20 K.
Basic research which can increase SMES performance and capability is also of interest, such as optimizing magnet configurations and coil spacing to minimize weight. Studies of alternate wire architectures and conductor cabling geometries, combined with magnet design, can improve quench protection, mechanical strength and integrity, and current balancing and sharing, and can minimize or control effects such as edge-field losses, joint resistance, flux creep, AC loss transients, reduction of wire currents from small defects, and wire weight.
PHASE I: Design an energy storage system based on the highest-performance technology, with the goal of the highest combined specific energy and power densities and lowest volume. Perform limited trade studies to compare performance for different energy storage capacities.
PHASE II: Construct and demonstrate a sub-size but scalable system designed in Phase I, including use of commercial-off-the-shelf (COTS) components. Conduct initial testing to verify expected specific energy and power densities are being achieved. Perform limited trade studies varying system design parameters to optimize and compare for different performance specifications.
PHASE III DUAL USE COMMERCIALIZATION:
Military Application: All DOD and NASA air and space vehicles require some form of electrical energy storage. The AF and NASA particularly require the highest specific energy and power densities possible, and other issues are important such as cyclic efficiencies.
Commercial Application: Commercial communication spacecraft and NASA spacecraft would use this technology. Large-scale electrical energy storage devices have commercial application for power storage and control, and for high energy particle accelerators and fusion energy.
TECHNOLOGY AREAS: Sensors
OBJECTIVE: Development of an innovative atmospheric sensing system to address deep turbulence challenges to support long-range sensing of weapon platforms.
DESCRIPTION: Successful development of future Air Force long-range sensing, surveillance, and weapon platforms, including their major components such as tracking, beam and wavefront control sub-systems, depends heavily on accurate prediction and assessment of their performance in various atmospheric conditions and engagement scenarios. Currently this performance analysis is based on atmospheric measurements taken over paths of less than a few kilometers in length. The obtained data (typically the refractive index fluctuation structure parameter) are then extrapolated to longer distances using classical (Kolmogorov, isotropic, stationary, and homogeneous) turbulence theory. Various studies using fine-wire probes have shown that these assumptions are violated. Since the existing turbulence theory and models have been experimentally verified only under relatively weak (or moderate) scintillation conditions (Rytov number < 1), the commonly used mechanical extrapolation of these models for performance assessment of long-range systems may result in significant errors, leading to serious miscalculations of system capabilities and conceptual mistakes in overall system designs. Adequate performance assessment becomes an even more complicated problem in target-in-the-loop (TIL) beam control scenarios with the outgoing beam scattering off an extended target, mostly due to still-unresolved challenges in both analysis and numerical modeling of target-return speckle-field propagation through thick turbulence. The development of an innovative multiple-wavelength complex optical field sensing capability would fill the existing gap in the understanding and assessment of deep turbulence effects on laser beam and image propagation in strong scintillation and anisoplanatic conditions. This would provide a common atmospheric diagnostic suite that does not currently exist.
The proposed sensing techniques should not rest on the assumed validity of the existing turbulence model, as measurements from well-known scintillometers do. The new system should provide the basic characteristics of complex optical fields, such as the spatial-temporal correlation of phase, intensity, and polarization state, image and beam quality metrics, etc. The developed system will be used to provide data urgently needed for the development of long-range passive and active imaging, space surveillance, tracking, and high-energy beam projection systems, as well as for verification of beam and image propagation models.
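As a concrete illustration of the Rytov number threshold cited above, the classical plane-wave Rytov variance for homogeneous Kolmogorov turbulence is sigma_R^2 = 1.23 Cn^2 k^(7/6) L^(11/6). The sketch below (the Cn^2 value and path lengths are chosen for illustration only) shows how quickly long paths leave the weak-scintillation regime in which the existing models have been experimentally verified:

```python
import math

def rytov_variance(cn2, wavelength_m, path_m):
    """Plane-wave Rytov variance for homogeneous Kolmogorov turbulence:
    sigma_R^2 = 1.23 * Cn2 * k^(7/6) * L^(11/6), with k = 2*pi/lambda."""
    k = 2.0 * math.pi / wavelength_m
    return 1.23 * cn2 * k ** (7.0 / 6.0) * path_m ** (11.0 / 6.0)

# Illustrative values: moderate turbulence (Cn2 = 1e-15 m^(-2/3)) and a
# 1.55 um beam. Classical weak-scintillation theory is usually taken to
# hold only for sigma_R^2 < 1; long paths exit that regime quickly.
for L_km in (1, 10, 100):
    s2 = rytov_variance(1e-15, 1.55e-6, L_km * 1e3)
    print(f"L = {L_km:>3} km: sigma_R^2 = {s2:.2f}")
```

At these assumed conditions the variance is well below 1 at 1 km but far above it at 100 km, which is exactly why extrapolating short-path measurements to long-range systems is suspect.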
PHASE I: The offeror should propose a concept, methods, and techniques for fast (shorter than the Greenwood time constant) detection and characterization of basic optical wave and image characteristics, and should develop data-processing algorithms that address the deep-turbulence challenges.
PHASE II: Develop a prototype deep-turbulence sensing suite system and demonstrate its capabilities over long (>100 km) atmospheric propagation paths.
PHASE III Dual Use Applications:
Military Application: This development has the potential to be an enabling technology for long-range laser communications, space surveillance, imaging and weapon systems for Air Force, Army and Navy applications.
Commercial Application: Civilian applications include support of extended-range high-data-rate laser communications, and high-altitude imaging for agriculture monitoring, traffic management, and law enforcement.
TECHNOLOGY AREAS: Information Systems, Electronics
ACQUISITION PROGRAM: Modeling and Simulation
The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), which controls the export and import of defense-related material and services. Offerors must disclose any proposed use of foreign nationals, their country of origin, and what tasks each would accomplish in the statement of work in accordance with section 3.5.b.(7) of the solicitation.
OBJECTIVE: This topic seeks to provide MDA M&S systems engineers with a tool that will intelligently assist them in the first phase of the systems engineering process: understanding the needs of the stakeholder. The MDA Directorate of Engineering, Simulation Architecture (MDA/DESA) desires a tool that combines advanced semantic analysis techniques, inferential engines, and adaptive/evolutionary algorithms to decrease the time required, increase the quality of the output, and decrease the number of repeated iterations of this phase of systems engineering. Based upon a semantic analysis of an initial stakeholder need description, the tool should infer possible relationships with previously collected stakeholder needs, existing solution architecture documentation, technical vernacular dictionaries, and other related material, and then identify potential issues and prompt systems engineers with additional questions to clarify and expand upon the stakeholder's need description. This should be an iterative process until the need statement meets acceptability criteria for clarity, characterization, and validity. This tool's inferential engine should continuously learn from previous interactions with the same stakeholder and other stakeholders, questions generated by the systems engineers, and feedback on need statements during functional requirements analysis.
DESCRIPTION: The MDA Directorate of Engineering, Simulation Architecture (MDA/DESA) collects and analyzes M&S needs for all system-level M&S uses from all stakeholders, both internal and external to MDA. Each collected need generally requires many hours of discussions with the stakeholder to translate the original expressed need into a fundamental need statement with sufficient context and detail for the M&S systems engineering team to begin developing requirements. Numerous follow-up discussions are common.
The output of this systems engineering phase is referred to as the software scope in software engineering textbooks. It includes context (how the described need fits into the larger use), functional objectives (what function is to be performed), informational objectives (what customer-visible data objects will be produced), and performance objectives (how well the capability needs to perform to meet the need). Additionally, some characterization of the need relative to the M&S should be made (is it a model/simuland, data management, external interface, user interface, core functionality, supporting tools, or pervasive need?) and its priority should be assigned. This output is then used in the next phase of the systems engineering process to develop “build-to” requirements.
For a variety of reasons, original expressed needs generally have at least one of several common recurring problems that must be resolved:
- Stated in overly general terms: The stakeholder's concept is often defined only in their subconscious. Semantic analysis should help spot overly general phrasing and prompt questions to help define clear objectives and constraints.
- Stated through a specific expressed solution: Often the stakeholder states how they believe their need can be achieved instead of stating their actual need. Their proposed solution at best unnecessarily constrains possible solutions and sometimes may not actually address their real unstated need. Semantic analysis should help spot these cases and prompt questions to determine their real underlying need.
- Words/phrases with ambiguous definitions: How a stakeholder uses a word or phrase often differs from the understanding of the systems engineering teams and developers. The tool should look for possible differences in vernacular, catalog known differences, and prompt questions leading to clearly defined terms in the need statements.
- Overly restrictive scope: A stakeholder may express a need to model a specific capability of a single real-system, but their real need is to model that capability for all similar real-systems. Semantic analysis should look for overly specific need phrasing and prompt questions to determine the more general need.
- No expressed performance levels: Stakeholders often provide no information on what is a sufficient minimum capability to fulfill the need. This lack should be spotted and questions prompted to develop specific performance criteria through either direct statement or indirect reference.
- Unreasonable performance levels: Unknowledgeable stakeholders often request unneeded or perfect performance rather than determining their real needs. Semantic analysis should identify phrasing that is indicative of unreasonable performance.
- No expression of priority: The stakeholders provide no information on a need's importance relative to their other needs. This lack should be spotted and questions prompted to determine priority.
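As a rough illustration of the kind of prompting sought, the sketch below flags a few of the recurring problems listed above using simple keyword heuristics. All term lists and patterns here are invented placeholders; the tool this topic seeks would rely on semantic analysis and an inferential engine, not keyword matching:

```python
import re

# Illustrative heuristics only; every phrase list is an invented example.
VAGUE_TERMS = {"user-friendly", "flexible", "robust", "fast", "adequate"}
SOLUTION_TERMS = {"use", "implement", "install"}
PERF_PATTERN = re.compile(r"\b\d+(\.\d+)?\s*(ms|hz|%|fps|records)\b", re.I)

def screen_need_statement(text):
    """Return prompts flagging common problems in a raw need statement."""
    prompts, lowered = [], text.lower()
    words = set(re.findall(r"[\w-]+", lowered))
    if words & VAGUE_TERMS:
        prompts.append("Overly general terms: ask for measurable objectives.")
    if words & SOLUTION_TERMS:
        prompts.append("Expressed solution: ask for the underlying need.")
    if not PERF_PATTERN.search(text):
        prompts.append("No performance levels: ask what minimum capability suffices.")
    if "priority" not in lowered:
        prompts.append("No priority: ask for importance relative to other needs.")
    return prompts
```

A vague statement such as "The tool should be fast and user-friendly." would trigger several prompts, while one with measurable performance and a stated priority would pass cleanly.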
PHASE I:
- Analyze the nature of the first phase of systems engineering, collecting and understanding stakeholder needs, as applied to MDA’s modeling and simulation enterprise.
- Identify techniques that can be developed into tools to assist systems engineers in working with the stakeholders to characterize their real needs.
- Develop a detailed concept description and long-term development plan.
- Prototype appropriate tools/tool components.
PHASE II:
- Develop the tools to an initial capability level.
- Test the developed tools through use by one or more MDA/DESA systems engineers conducting interchanges with stakeholders.
- Improve systems based on test feedback (iterate as needed).
- Demonstrate capability to other high-level systems engineering teams for evaluation.
PHASE III:
- Field full capability to MDA/DESA.
- Make further improvements based on feedback and return-on-investment.
- Transition tool to other MDA systems engineering organizations (MDA/DE and Elements), DoD acquisition organizations (service program offices and labs), and commercial engineering firms (software, aerospace, automotive, etc.).
COMMERCIALIZATION: Working with stakeholders (customers) to determine their needs is a fundamental effort for any systems engineering team, especially for iterative development efforts such as any software project, military or commercial. An adaptive inferential expert-system that adapts to a system engineer’s specific topical area of work and assists them in questioning their stakeholders and documenting the results in a form that can be easily decomposed into requirements would be of great assistance to all systems engineering teams, big or small, military or commercial. The potential users would be any government agency or company that develops requirements from expressed customer needs. This is essentially all engineering organizations.
TECHNOLOGY AREAS: Sensors, Weapons
OBJECTIVE: To produce a dual band telemetry system consisting of a transmitter, antenna cables, and possibly a combiner to replace current S-band telemetry systems on missile defense interceptors and associated test target vehicles. The proposer will consider form, fit, and function as design constraints and will design, integrate, and produce a prototype (in the Phase II) for an advanced dual band telemetry system.
DESCRIPTION: As commercial demand for wireless bandwidth has grown, the military anticipates increased pressure to use more restricted or alternate frequency bands. There is a need for an additional 1500-2000 MHz of bandwidth over the next 10-20 years as the government prepares to transition 500 MHz to the wireless industry as an initial increment. Given this precedent of reallocating bandwidth, the spectrum remains subject to future wireless industry requirements. New spectrum was reallocated to telemetry use by WRC-07, and utilization of that spectrum is the subject of this STTR topic.
TECHNICAL REQUIREMENTS: In this topic solicitation, MDA seeks solutions for telemetry systems which provide support for a dual S- and C-band telemetry system (S-band selectable from 2200-2400 MHz and C-band selectable from 4400-4950 MHz and 5091-5250 MHz). A 10-watt output transmitter with both PCM/FM and SOQPSK modulation modes as a minimum is required. The transmitter is to be no larger than 2 in x 3 in x 1 in, and must survive and operate in the vacuum of space at temperatures from -35 C to +85 C and accelerations of 100 g in 3 axes. Shock and vibration environments are negotiable because of shock mounting techniques. The offeror should assume that the dual band telemetry system will utilize the S-band transmitting frequency spectrum for range safety requirements (GPS tracking and other missile parameters) and the C-band transmitting frequency spectrum for MDA performance data. All range safety data (GPS receiver data and specific performance parameters) should remain in S-band and missile performance data should be in C-band. The proposer is to assume that the integrated dual S/C-band transmitter system will operate simultaneously in two bands throughout missile flight. Power conversion efficiency must be emphasized in missile applications. MDA will host a coordination and requirements review with the selected company to finalize design requirements and determine the best missile to use for an integrated flight test for Phase III. This decision will be made on the basis of the best host platform for the integrated dual band design, considering available electrical power, space available, and the telemetry stream to be collected.
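The band assignments above can be captured in a small check. The sketch below encodes the stated tuning ranges and maps a candidate carrier frequency to the band that must carry it; it is illustrative only and ignores channelization, guard bands, and range-specific frequency coordination:

```python
# Tuning ranges in MHz, as stated in the technical requirements.
S_BAND = [(2200.0, 2400.0)]
C_BAND = [(4400.0, 4950.0), (5091.0, 5250.0)]

def in_ranges(freq_mhz, ranges):
    return any(lo <= freq_mhz <= hi for lo, hi in ranges)

def classify_channel(freq_mhz):
    """Map a carrier frequency to the band that must carry it: range
    safety data stays in S-band, MDA performance data goes in C-band."""
    if in_ranges(freq_mhz, S_BAND):
        return "S-band (range safety)"
    if in_ranges(freq_mhz, C_BAND):
        return "C-band (performance data)"
    return "out of band"

print(classify_channel(2250.5))  # -> S-band (range safety)
print(classify_channel(5100.0))  # -> C-band (performance data)
```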
PHASE I: In the Phase I project, successful offerors will develop a dual band system concept and preliminary design. MDA will assist in providing design space constraints drawn from anticipated system performance capability needs. The proposer will utilize these requirements to design a new unitary dual S/C band transmitter that considers form, fit, and function of the S-Band transmitters that it will replace.
PHASE II: The contractor will fabricate and test the prototype telemetry system of Phase I. This telemetry system will be certified to the appropriate missile flight environments and its parameters will be fully characterized.
PHASE III: This prototype telemetry transmitter system will be installed in a missile and flight tested. The transmitter will be powered, connected to missile onboard data, and will transmit data to the ground. The data will be collected and evaluated by MDA.
COMMERCIALIZATION: The contractor will pursue commercialization of the various technologies developed in Phase II for potential commercial uses in such diverse fields as flight qualification and flight testing.
TECHNOLOGY AREAS: Information Systems
OBJECTIVE: Develop techniques, algorithms, and architectures to enable automated system-level supervisory control of jobs running on multi-processor systems. Develop performance metrics to evaluate this supervisory control.
DESCRIPTION: As the DoD progresses toward more multi-processor systems, the need to automate dispatching of compute jobs and also to verify and validate this control becomes vital to efficient system performance. Currently, there is a human in the loop in many places for supervising the subdivision of compute jobs. Developing performance metrics for a supervisory control system and designing a system to meet these metrics is the first step toward automating this process. Subsequent verification and validation of the supervisory control system to perform more of this control function could lead to more efficient, effective, and reliable system performance.
Also, such a system could increase the autonomy and capability of an unmanned combat air vehicle (UCAV), though verification and validation (V&V) would be necessary to ensure that the supervisory control worked effectively and reliably. The verification process should evaluate whether the supervisory control system operates in compliance with any applicable specifications, regulations, or conditions which are determined prior to or during the design phase. The validation process shall ensure with a high level of assurance that the supervisory control system accomplishes its specified mission, meeting required performance metrics.
Work on multi-processor systems breaks into multiple jobs running simultaneously. These jobs are often interdependent. There is a need for supervisory controls to manage job allocation, ensuring that interdependent jobs are properly coordinated for efficient, rapid system performance. While device- and processor-level V&V has been established, a control mechanism is needed to supervise multiple jobs running together on larger multi-processor systems, with V&V implemented for evaluating the process. The possible use of a cloud computing environment to support an oversight function for the supervisory control could be considered.
Novel approaches are needed to develop computer controls in military operations with highly dynamic constraints and topologies. As the speed of battlefield operations increases, there exists a need to rapidly add, remove, and field assets across multi-vehicle scenarios involving manned and unmanned air and ground vehicles, as well as human assets. Proposed concepts should place special emphasis on autonomous Unmanned Combat Air Systems (UCAS), which deliver intelligence, critical information, and munitions on demand. This will enable more autonomous, rapid-response system operation, with less need for humans in a high-threat environment.
The supervisory control should have the capability of reliably supervising and controlling multiple interrelated computing processes on a multi-processor system, with a verification and validation approach developed to test this supervisor, ensuring that jobs are completed in the proper order to enable the appropriate data to be properly fed from one job to another. This supervisory control should emphasize efficient operation, so that it is practical to field on a UCAV.
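The ordering guarantee described above, that interdependent jobs complete in the proper order so data can be fed from one job to the next, can be sketched with a topological dispatch (Kahn's algorithm). This toy supervisor runs jobs sequentially as a stand-in for real dispatch and performs none of the V&V the topic requires:

```python
from collections import deque

def supervise(jobs, deps):
    """Dispatch interdependent jobs in dependency order (Kahn's algorithm).

    jobs: {name: callable}; deps: {name: set of prerequisite names}.
    Returns the completion order; raises if the graph has a cycle."""
    pending = {j: set(deps.get(j, ())) for j in jobs}
    ready = deque(j for j, d in pending.items() if not d)
    order = []
    while ready:
        job = ready.popleft()
        jobs[job]()  # sequential stand-in for dispatching to a processor
        order.append(job)
        for j, d in pending.items():
            if job in d:
                d.discard(job)
                if not d and j not in order and j not in ready:
                    ready.append(j)
    if len(order) != len(jobs):
        raise RuntimeError("dependency cycle: supervisor cannot schedule")
    return order
```

The cycle check is the germ of a verifiable property: a V&V harness could assert, for every run, that each job executes only after all of its prerequisites.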
PHASE I: Identify novel approaches, supporting algorithms, and architectures to effectively supervise multi-processor computing systems. Develop metrics to perform V&V of supervisory control. Demonstrate the feasibility of the supervisory control approaches through laboratory experimentation and, where appropriate, V&V tools.
PHASE II: Evaluate supervisory control designs with tests on multi-processor systems, using test results to determine performance metrics compliance. Develop a supervisory control framework prototype and evaluate by performing V&V testing.
PHASE III DUAL USE APPLICATIONS:
Military Application: Transition proposed architecture, software, and/or technology components into DoD system(s). An area of special emphasis is in UCAVs. Demonstrate benefits of approach in real-world operations or exercises.
Commercial Application: The technology will be compatible with a large class of multi-processor computing applications in commercial industry. Specifically, this research will benefit emerging commercial research in supervising large multi-user, multi-processor systems.
TECHNOLOGY AREAS: Information Systems
OBJECTIVE: The overall objective is to develop new software tools for addressing the requirements of programming the emerging class of computational hardware for Exa-Scale computing. The DoD will benefit through reduced software production costs, and reduced overall life cycle costs through greater portability, fewer defects, and energy reduction.
DESCRIPTION: Researchers and commercial companies are advancing the state of the art in high performance computing through the development of computational technologies that can achieve 100 GFLOPS/W efficiencies. The route to Exa-Scale includes circuit innovations such as Near Threshold Voltage (NTV) operation, which reduces the energy expended per transistor switching event. However, NTV operation also reduces the speed of individual transistors, so to reach a specific performance level, a greater degree of application parallelism must be provided to the hardware beyond what is provided to conventional multi-core processors. Movement of data between processors, or between processors and memory, now expends much greater energy than computation. Thus computing hardware increasingly provides explicit controls for communications and explicit scratchpad memory management. New accelerators such as GPGPUs and the Cell architecture already exhibit these features.
The multiplicity of these complexities for Exa-scale computing will require software tools that enable the automatic extraction and formation of parallelism and computation choreographies that minimize communication, or in other words that increase spatio-temporal locality while simultaneously increasing parallelism. Parallelism and locality are considerations that are in tension. There currently exist analytic compiler optimization algorithms that can extract and balance parallelism and locality, so the program expression itself is now becoming the barrier to increasing performance. Program expressions in common high level languages such as C force sub-optimal bindings, e.g. array layout, and obscure semantic relationships, and thus restrict the parallelism that can be extracted. The solution is even higher level program expressions that provide for more precision in the expression of application semantics and that reduce extraneous constraints. These higher level program expressions can provide for more succinct specifications of software, and can also enable new parallelization and mapping opportunities such as cross-function fusion or exploitation of mathematical properties for reformulation. Rather than new parallel computing languages, these higher level expressions can be accomplished by embedding them in an existing language, as surface syntax on a more structured expression model that exposes more parallelism than the conventional high level language. Separating the semantics from the mapping is critical in providing the benefits of this approach.
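A minimal sketch of "surface syntax on a more structured expression model": overloaded operators build a deferred expression tree instead of executing immediately, so the whole pipeline can be fused into a single pass with no intermediate arrays. The class and function names are invented for illustration and are far simpler than the optimizing embedded languages the topic envisions:

```python
class Expr:
    """Deferred elementwise pipeline: operators record work instead of
    executing it, so the whole chain runs as one fused traversal."""
    def __init__(self, fn, src):
        self.fn, self.src = fn, src

    def __add__(self, k):
        return Expr(lambda x, f=self.fn: f(x) + k, self.src)

    def __mul__(self, k):
        return Expr(lambda x, f=self.fn: f(x) * k, self.src)

    def run(self):
        # Fusion: a single pass; no intermediate arrays are materialized.
        return [self.fn(x) for x in self.src]

def expr(seq):
    return Expr(lambda x: x, seq)

# (v * 3 + 1) builds one fused kernel rather than two temporaries.
pipeline = expr([1, 2, 3]) * 3 + 1
print(pipeline.run())  # -> [4, 7, 10]
```

The separation the text calls for is visible even here: the expression tree carries only semantics, while `run()` is one (replaceable) mapping of those semantics onto hardware.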
Improving programming tools can reduce software lifecycle costs. Succinct expression also means fewer lines of code which allows proportionally greater productivity and reduced opportunities for costly errors. Higher level specifications enable greater portability by maximizing the semantic application-specific information while minimizing extraneous bindings to the original target hardware. Portability reduces costs by avoiding hardware platform lock-in, and also allows mission capability increases by more rapid adoption of new hardware capabilities.
PHASE I: Co-design higher level algorithmic expressions with specific optimizations for parallelism and locality. Perform proof of concept measurement of the improvements from the use of the higher level algorithmic expressions in increasing parallelism or locality beyond that which can be achieved with common high level languages. Develop mathematical basis of the underlying optimization model to illustrate the means by which it enables exploitation of parallelism and locality across stages of sensor signal processing computational chains. Benchmark on model kernels relevant to Air Force sensor missions such as image formation, moving target detection, or advanced trackers.
PHASE II: Implement new higher level algorithm specific language and optimizations. Test and tune on sensor mission relevant kernels to establish generality over air force missions; target toward multiple processing targets to establish porting benefits. Demonstrate capability to exploit parallelism and locality across stages of sensor computational chains. Demonstrate methodology for applying to tactical kernels on Air Force sensor missions.
PHASE III DUAL USE COMMERCIALIZATION: Deliver tool for tactical code development and optimization, feature enhancements, performance enhancements, and customer support. Commercial applications include energy exploration, manufacturing, bio-computing, and financial computing.
TECHNOLOGY AREAS: Information Systems
OBJECTIVE: Develop innovative technologies for early design and analysis of multi-threaded software for multi-core systems enabling robust development of distributed, real-time systems targeted for dynamic environments.
DESCRIPTION: The multi-core era has significantly impacted the way software is developed. While multi-core processors are expected to increase system performance, attaining increased performance is not straightforward. Parallelization of software required for multi-core systems increases the complexity of the system, as additional requirements to account for shared resources brought about by concurrency introduce new performance bottlenecks, such as lock contention, cache coherence overheads, and lock convoys [1], not seen in single-core systems. Parallel programming models, such as PThreads, OpenMP, and Intel Thread Building Blocks, enable concurrent programming; however, these programming models require that the software developer be an expert in low-level multi-threaded programming (e.g., synchronization, communication, load balancing, and scalability) and require vendor-dependent, multi-core-specific architecture knowledge to program (e.g., cache size, shared resources, heterogeneous cores) [2]. Furthermore, portability is significantly hindered because multi-core architectures differ significantly, requiring that applications be adapted to each new platform, resulting in limited portability of code [3]. Moving the programming model to a higher level of abstraction can overcome these difficulties, including providing platform independence, which will help to reduce the complexity introduced by concurrent programming. For example, an actor-oriented design might reduce the complexity and non-determinism of multi-threaded code by using these design constructs to hide the underlying thread implementation [4].
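The actor-oriented design mentioned above can be sketched as a thread that owns a private mailbox: callers exchange messages instead of sharing locked state, and the thread machinery stays hidden behind the abstraction. This is a minimal illustration, not a proposed design:

```python
import queue
import threading

class Actor:
    """Minimal actor: owns one thread and a mailbox. Callers never touch
    locks or shared state directly; they only send messages."""
    def __init__(self, handler):
        self._inbox = queue.Queue()
        self._handler = handler
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def send(self, msg):
        self._inbox.put(msg)

    def stop(self):
        self._inbox.put(None)   # sentinel: drain the mailbox, then exit
        self._thread.join()

    def _loop(self):
        while True:
            msg = self._inbox.get()
            if msg is None:
                return
            self._handler(msg)

# State is touched only by the actor's own thread, so no locking is
# needed even though processing happens concurrently with the caller.
totals = []
acc = Actor(totals.append)
for i in range(3):
    acc.send(i)
acc.stop()
print(totals)  # -> [0, 1, 2]
```

Because the mailbox is FIFO and has a single consumer, message processing is deterministic in order, which is exactly the non-determinism reduction the actor construct buys.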
This topic seeks to increase the correctness of embedded mission critical software on multi-core processors through the development of early design and analysis methodologies and techniques that improve predictability and dependability. Due to the size (measured by lines of code, components, etc.) and complexity (multi-domain dimensionality, interconnection, etc.) of single-core software, analysis and testing during development and integration phases cannot readily verify and validate the correctness of the software system under development. As systems transition into the multi-core era, the complexity is further exacerbated because the software is expected to operate in a parallel manner across the cores to recover the performance increases no longer obtainable from rising clock frequencies. The concurrency of multi-core processors requires the software developer to be aware of shared resources between cores to maintain correct operation of the software. Shared resources, such as L2 cache, have a significant impact on the performance of multi-threaded software; performance reductions of 20 percent or more have frequently been reported due to inattentive developers and lack of tool support. To alleviate the complexity developers face, model-centric software development and analysis tools are sought that raise the level of developer abstraction, allow for complete performance analysis early in the design cycle, and provide automated artifact generation (e.g., code, scripts, tests, deployment plans, etc.).
PHASE I: Define the feasibility of novel approaches that will enable early design, analysis, test and validation of software targeted for multi-core machines. Select appropriate design, analysis, test and validation techniques and develop the conceptual approach and design to integrate the technologies. Define metrics to measure improvements offered by the concept and present an appropriate validation and verification plan.
PHASE II: Develop, design and demonstrate the novel approaches from Phase I in a prototype. Define and implement the appropriate tool and interface approaches, and methodologies for integration. Demonstrate the prototype's openness, scalability, and degree of automation in the exchange of design data by accomplishing performance, analysis, validation and verification against a representative DoD design.
PHASE III DUAL USE COMMERCIALIZATION: The ability to design and analyze software for multi-core systems has clear benefits for both the private and military sectors. With the shift to multi-core systems the only way to continue to achieve increases in performance is to develop parallel software. Design and analysis tools to help in the development of parallel software will be applicable to the private and public sectors since both domains are facing the same challenges of migrating to multi-core systems.
TECHNOLOGY AREAS: Information Systems
OBJECTIVE: Design and develop innovative technologies for operating system services that take advantage of abstract concepts to efficiently partition, communicate, and execute programs on many-core, multi-processor systems.
DESCRIPTION: The Department of Defense (DoD) is heavily reliant on computational power to perform rapid analysis of sensor data, search for records in databases, and to control complex machines. To retain our technical edge, the DoD must be able to take advantage of cutting edge techniques in parallel computing on many-core systems.
In the past, new hardware brought higher clock frequencies, which translated into increased performance. Due to thermal constraints, increasing performance the same way is no longer feasible, so chip manufacturers are now placing multiple processors (or cores) on a single chip. In doing so, it falls to the software developer to increase performance. That is, applications must be developed to execute in parallel, where possible, to take advantage of multiple cores. However, as we move into the many-core era, new operating system (OS) techniques will be needed. Many-core systems will have 10s-1000s of cores, complex interconnects between cores, heterogeneous cores, and complex shared resource management. These disruptive changes to the hardware will cause substantial scalability and correctness challenges for OS designers [1]. “Current operating systems are designed for single processor or small number of processor systems and were not designed to manage such scale of computational resources. The way that an operating system manages 1,000 processors is so fundamentally different than the manner in which it manages one or two processors that the entire design of an operating system must be rethought.” [2]
Research under this topic will explore OS mechanisms that take advantage of abstract concepts to efficiently and intelligently partition, communicate, and execute programs on many-core systems. For example, a smart scheduler (dynamic or static) that places threads in an optimal layout accounting for shared resources (e.g., cache, interconnects, I/O, etc.) is one possible mechanism being sought. However, it is not the only mechanism that could be proposed to enable the goals of this topic; other mechanisms that address the scalability issues [1,2,3] will be considered (e.g., memory management mechanisms). These mechanisms should allow continued and efficient operation of applications when the number of processors, memory, available communications bandwidth, interconnects, or other resources change; and they should be easily maintainable, adaptable, and usable by people with no more than a few days of specialized training. The solution needs to consider security aspects of the design in addition to performance and scalability. Solutions may target mechanisms to extend today's operating systems or fundamentally different OSs currently under research, e.g. Barrelfish [1], the Factored Operating Systems (FOS) [2], and OctoPOS [3].
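One way to read "a smart scheduler that places threads in an optimal layout accounting for shared resources" is a greedy placement that co-locates heavily communicating threads on cores sharing a cache. The sketch below is illustrative only: the topology model and the greedy heuristic are assumptions, not a validated scheduling policy, and a real scheduler would also handle contention, migration, and heterogeneity:

```python
def place_threads(pairs, core_groups):
    """Greedy cache-aware placement: put heavily communicating thread
    pairs on cores that share an L2 cache.

    pairs: [(thread_a, thread_b, traffic)]; core_groups: list of lists
    of core ids, each inner list sharing a cache. Returns {thread: core};
    threads that do not fit are simply left unplaced in this sketch."""
    free = [list(g) for g in core_groups]
    home, placement = {}, {}
    # Heaviest communicators first, so they win the shared caches.
    for a, b, _ in sorted(pairs, key=lambda p: -p[2]):
        for t, partner in ((a, b), (b, a)):
            if t in placement:
                continue
            # Prefer the group already holding the partner; otherwise
            # take the group with the most free cores.
            if partner in placement and free[home[partner]]:
                gi = home[partner]
            else:
                gi = max(range(len(free)), key=lambda i: len(free[i]))
            if free[gi]:
                placement[t] = free[gi].pop()
                home[t] = gi
    return placement
```

With two groups of two cores, two communicating pairs each end up sharing a cache, which is the locality property a cache-aware scheduler is after.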
PHASE I: Define operating system mechanisms that take advantage of abstract concepts to efficiently partition, communicate, and execute programs on multi-processor systems. Show how the mechanisms will allow continued and efficient operation of applications when the number of processors, memory, and available communications bandwidth change; and how they will be maintainable, adaptable, and usable by people with no more than a few days of specialized training.
PHASE II: Develop a prototype of the mechanism and demonstrate and validate the concepts and design created during Phase I. Include the solution's openness, scalability, and degree of automation; as well as metrics of performance or analysis against current practice.
PHASE III DUAL USE COMMERCIALIZATION: The ability to design and analyze software for multi-core systems has clear benefits for both the private and military sectors. With the shift to many-core systems, the only way to continue to achieve increases in performance is to develop parallel software and scalable OSs that handle shared resources in an intelligent manner. New OS mechanisms will be applicable to the private and public sectors since both domains are facing the same challenges of migrating to many-core systems.
TECHNOLOGY AREAS: Information Systems
OBJECTIVE: Identify and understand methods for determining information value as a basis for future decision support systems. Devise mathematical, information science, and computer science representations of information salience to provide a basis for automation, for the development of generalized information salience models, and for the subsequent development of automated decision support systems.
DESCRIPTION: Understanding how humans process information, determine salience, and combine seemingly unrelated information is essential to automated processing of large amounts of partially relevant information or information of unknown relevance. Recent neuroscience research in human perception, together with information science research on context-based modeling, provides a theoretical basis for using a bottom-up approach to automating the management of large amounts of information in ways directly useful to human operators.
Having a way of representing human perception and cognition, both active and unperceived, is only the first step. Applying recent information science research on representing human-centered perception and cognition is possibly the most critical piece in bridging cognitive science and computer science in this multi-disciplinary research topic. Once this is accomplished, it becomes tractable to apply contextual information models and other techniques, leading to generalized models and subsequent automation of human cognition-like processes.
Potential research areas are: 1) determining what human users consider to be essential pieces of information from large data elements in various situations (e.g., tasks, stress levels, distractions); 2) how does the human brain separate signal from noise; 3) what unconscious/unperceived processing do the brains of experts carry out that enables the extraction and interpretation of salient information; 4) how does the brain modify its “circuits” to enhance salient cue detection efficiency; 5) what are the salient features that are common across data types; 6) which features are most important as they map to human perception; 7) general description of information salience and combinations of salient information; 8) general description of situational context; 9) general description of space, time, and uncertainty; 10) managing the inexplicit relationship among independently-derived information values; 11) managing individual-to-individual variations that violate the general model; 12) describing essential pieces of information mathematically; and 13) development of efficient algorithms to implement the general mathematical descriptions of information salience.
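One simple way to make the final research areas concrete is a rarity-weighted context-overlap score, sketched below. The `salience` function, its token-based inputs, and the log-rarity weighting are illustrative assumptions, not a model proposed by this topic.

```python
import math

def salience(item_tokens, context_tokens, corpus_freq, corpus_size):
    """Toy salience score: sum of log-rarity weights for item tokens
    that also appear in the current situational context."""
    ctx = set(context_tokens)
    score = 0.0
    for tok in item_tokens:
        if tok in ctx:
            # Rare tokens carry more information than common ones
            score += math.log(corpus_size / (1 + corpus_freq.get(tok, 0)))
    return score
```

Under this toy model, a rare context-matching token (e.g., "convoy") contributes more salience than a common one (e.g., "the"), a crude stand-in for the signal-versus-noise separation the topic asks about.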
The intended environment for this capability is a military tactical command center with a limited number of intelligence analysts tasked with extracting useful information from large volumes of data drawn from multiple sources, including sensors, intelligence reports, unit observations, and media sources. Ultimately, this capability would serve as a basis for developing anticipatory decision support systems for small military unit leaders.
PHASE I: Develop an empirical and then a mathematical framework for representing human perception and cognition. Show correlation of empirical and mathematical approaches using a representative data set.
PHASE II: Develop computer algorithms based on the Phase I framework. Validate the algorithms against human-generated semantic labels on a single- or double-parameter information set. Demonstrate correlation of the automated method versus the human-generated solution using a representative data set.
PHASE III: Mature the Phase II work to develop an automated ontology generator that processes multiple-parameter data sets. Validate the system against human-analyzed information sets. Demonstrate time savings versus the human-generated data at equivalent accuracy using a multi-faceted data set.
PRIVATE SECTOR COMMERCIAL POTENTIAL/DUAL-USE APPLICATIONS: This STTR, if successful, would enable quicker fielding of semantic web capabilities, in that semantic labels (e.g., ontologies) could be generated automatically, allowing much more flexibility in providing new information content quickly.