DoD 2013.2 SBIR Solicitation
NOTE: The Solicitations and topics listed on this site are copies from the various SBIR agency solicitations and are not necessarily the latest and most up-to-date. For this reason, you should use the agency link listed below which will take you directly to the appropriate agency server where you can read the official version of this solicitation and download the appropriate forms and rules.
The official link for this solicitation is: http://www.acq.osd.mil/osbp/sbir/solicitations/index.shtml
Available Funding Topics
- SB132-001: Oxytocin: Improving measurement sensitivity and specificity
- SB132-002: Real-time Characterization of Variable-rate Streaming Data
- SB132-003: High Density Optical Interconnects
- SB132-004: Exploiting Radio Propagation Reciprocity in Wireless Networks
- SB132-005: Novel schemes for highly reliable aerospace electromechanical primary actuation systems
SB132-001: Oxytocin: Improving measurement sensitivity and specificity

OBJECTIVE: Improve oxytocin measurement techniques by developing quantitative assays that measure oxytocin more sensitively and specifically, particularly to discriminate between the 9- and 12-amino acid forms. Measurements of these two forms will be conducted in an in vivo system to determine how they vary under experimental conditions known to affect oxytocin levels.

DESCRIPTION: Oxytocin is a hormone widely known for its role in reproduction and childbirth. More recently its role as a neuromodulator has been highlighted, particularly in facilitating pair bonding, maternal interactions, and trust behaviors (1,2). An explosion of research on the effects of oxytocin has ensued, and the hormone is listed in 213 ongoing or completed clinical trials on clinicaltrials.gov. Oxytocin also affects behaviors relevant to national security, ranging from whether two individuals trust each other to how someone reacts to stress, and even wound healing. Developing oxytocin assays with improved sensitivity and specificity will therefore provide the tools needed to understand the function of this important neurohormone (3,4). These tools can be used across DoD research programs in areas such as stress resilience, human decision making, and PTSD treatment to enable advances in technology development relevant to the warfighter.

Unfortunately, from an experimental perspective, oxytocin is present at low levels in the body. Basal blood levels of oxytocin are in the pg/mL range (2). This low biological level makes accurate measurement of oxytocin difficult. In the laboratory, oxytocin is often measured by immunoassays, which involve the binding of antibodies to the molecule of interest (5). These molecular methods have improved somewhat to include extraction steps that concentrate the oxytocin samples. However, sensitivity is still an issue. While detection is usually possible in blood, measuring oxytocin in other, more readily available samples (i.e., saliva or sweat) is often not possible because the levels are undetectable. Innovative assay methods that significantly improve oxytocin assay sensitivity, particularly enabling measurement in these less traditional samples, would be a breakthrough for this research field.

Recent studies have shown the regulation of oxytocin to be a complex process. In particular, two forms of oxytocin have been identified. A 10-12-amino acid pro-hormone is first produced and then, at some point, may be cleaved to a 9-amino acid hormone. This shortened form is the active neuropeptide, oxytocin, known to bind to the oxytocin receptor and credited with oxytocin's behavior-altering effects. The biological role, if any, of the 12-amino acid pro-hormone is unknown, but it has been associated with atypical social behaviors in autism and possibly related to obesity in Prader-Willi syndrome (6,7). Additional forms of different lengths or biologically active metabolites of oxytocin may exist as well. Current assay techniques are non-specific to these different forms of oxytocin, failing to differentiate bioactive from potentially inert forms. This lack of specificity may add significant noise to a measurement of very low hormone levels (5). New detection techniques that distinguish between these different forms of oxytocin may elucidate a functional role for this complex biological regulation and could help explain seemingly paradoxical findings in the oxytocin literature regarding its role in both pro-social and pro-stress behaviors.
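As an informal illustration of how assay sensitivity is commonly quantified (not a required method for this topic), the following Python sketch fits a linear calibration curve to hypothetical synthetic-oxytocin standards and estimates a limit of detection from blank-sample noise; all concentration values, signal readings, and the ICH-style 3.3-sigma rule are assumptions for illustration.

    import numpy as np

    # Hypothetical calibration standards (pg/mL) and measured assay signal (arbitrary units)
    conc = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])            # known synthetic oxytocin levels
    signal = np.array([0.021, 0.118, 0.224, 0.533, 1.060, 2.110])   # illustrative readings

    # Linear calibration: signal = slope * concentration + intercept
    slope, intercept = np.polyfit(conc, signal, 1)

    # Noise of repeated blank measurements (illustrative values)
    blanks = np.array([0.018, 0.023, 0.020, 0.025, 0.019])
    sigma_blank = blanks.std(ddof=1)

    # Common rule-of-thumb limits: LOD ~ 3.3*sigma/slope, LOQ ~ 10*sigma/slope
    lod = 3.3 * sigma_blank / slope
    loq = 10.0 * sigma_blank / slope
    print(f"slope = {slope:.4f} AU per pg/mL, LOD ~ {lod:.2f} pg/mL, LOQ ~ {loq:.2f} pg/mL")

A Phase II benchmarking effort of the kind described below would compare such detection limits, for both the 9- and 12-amino acid forms, against existing immunoassay methods.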
Regardless, more sensitive and specific assay techniques for oxytocin would provide a more accurate picture of the complex regulation of this intriguing neurohormone. Clinical researchers and practitioners would also benefit from such a tool, to aid in understanding oxytocin's role in social disorders, such as autism, and its potential therapeutic applications, as demonstrated by its wide use in clinical trials. In particular, the ability to measure different forms of oxytocin, which is presently impossible, could yield a highly specific biomarker of social and/or stress disorders, or at least of altered responses to social and stressful stimuli.

PHASE I: Determine the technical feasibility of the proposed measurement technique for oxytocin. Performers will identify an innovative technical approach for the sensitive detection of oxytocin and its derivatives. If possible, performers will conduct proof-of-principle studies using synthetic oxytocin to demonstrate that different forms of oxytocin can be reliably detected. Phase I deliverables will include a technical report on the proposed technique and an experimental outline for the studies needed to improve assay sensitivity and determine oxytocin species' levels in an in vivo system.

PHASE II: Develop and refine the technique identified in Phase I, particularly to increase its sensitivity. In addition, the technique should be used to measure biological oxytocin collected from in vivo animal or human experiments to determine how levels of the long and short forms of oxytocin vary under conditions of stress and social interaction, which have previously been shown to affect oxytocin. Sensitivity of the new system should be benchmarked against existing detection methods, and measurements from in vivo systems compared to detection of known levels of synthetic oxytocin. Required Phase II deliverables will include a technical report detailing the new measurement technique and the results of the above-mentioned comparisons. Findings related to differences in short and long forms of oxytocin under varying experimental conditions (e.g., stress or social interaction) should also be included.

PHASE III: Military research laboratories would be very interested in using a highly sensitive and specific oxytocin assay. For example, the Air Force Research Laboratories have been measuring oxytocin with traditional enzyme immunoassay methods for defense applications. An improved assay technique could be used to reanalyze their samples and may shed new light on the problem of developing trust. Other military partners may be interested in this technology or a future derivative for the measurement of oxytocin to understand influence. The DARPA program Narrative Networks is examining oxytocin in this context and would benefit from increased measurement specificity and sensitivity. Developments made under this SBIR effort could be transitioned, in synergy with findings from Narrative Networks, to provide better assays of oxytocin as it changes with narrative influence. The clinical dimensions of oxytocin measurement may also present transition opportunities to the military, since oxytocin has been linked with stress reactions. New measurement techniques will allow clinical scientists to investigate whether it plays a role in, or is correlated with, Post-Traumatic Stress Disorder (8). A highly sensitive and specific assay to measure oxytocin in biological samples, including blood, saliva, and sweat, would have a number of commercial applications.
Biotechnology companies would be interested in refining an improved technique and would likely see additional investment in development and production as a small risk, given the research and commercial need for such an assay.

REFERENCES:
1. Carter, CS, Grippo, AJ, Pournajafi-Nazarloo, H, Ruscio, MG, Porges, SW. (2008). Oxytocin, vasopressin and sociality. Progress in Brain Research 170: 331-336.
2. Meyer-Lindenberg, A, Domes, G, Kirsch, P, Heinrichs, M. (2011). Oxytocin and vasopressin in the human brain: social neuropeptides for translational medicine. Nature Reviews Neuroscience 12: 524-538.
3. Gouin JP, Carter CS, Pournajafi-Nazarloo H, Glaser R, Malarkey WB, Loving TJ, Stowell J, Kiecolt-Glaser JK. (2010). Marital behavior, oxytocin, vasopressin and wound healing. Psychoneuroendocrinology 35: 1082-1090.
4. Karelina, K, DeVries, AC. (2011). Modeling social influences on human health. Psychosomatic Medicine 73: 67-74.
5. Szeto A, McCabe PM, Nation DA, Tabak BA, Rossetti MA, McCullough ME, Schneiderman N, Mendez AJ. (2011). Evaluation of enzyme immunoassay and radioimmunoassay methods for the measurement of plasma oxytocin. Psychosomatic Medicine 73: 393-400.
6. Green L, Fein D, Modahl C, Feinstein C, Waterhouse L, Morris M. (2001). Oxytocin and autistic disorder: alterations in peptide forms. Biological Psychiatry 50: 609-613.
7. Dombret C, Nguyen T, Schakman O, Michaud JL, Hardin-Pouzet H, Bertrand MJ, De Backer O. (2012). Loss of Maged1 results in obesity, deficits of social interactions, impaired sexual behavior and severe alteration of mature oxytocin production in the hypothalamus. Human Molecular Genetics 21: 4703-4717.
8. Olff, M. (2012). Bonding after trauma: on the role of social support and the oxytocin system in traumatic stress. European Journal of Psychotraumatology, epub April 27, 2012.
SB132-002: Real-time Characterization of Variable-rate Streaming Data

OBJECTIVE: Develop methods and tools for characterizing the underlying structure, trends, and events in streaming data sets in order to aid analysts in discovery and understanding. The methods and tools, applicable over a broad range of streaming data bandwidths -- from kbps up to beyond 100 Gbps -- will leverage established principles of statistical analysis, visualization, and cognitive science.

DESCRIPTION: Streaming data are either not stored, due to the cost of storage at very high data rates, or stored and processed with a delay that reduces their operational value. Increasing data generation and collection make a streaming data model inevitable for some streams -- the question becomes more about the data rate of the stream and the class of computational operations that are applicable. Current techniques for streaming data analysis use ad-hoc sampling and data decimation, leaving the overwhelming bulk of the collected data unexamined and its value lost.

Tasks of streaming data analysis include trend analysis, event detection, and discovery of underlying structures. Human cognitive abilities and the visual system are ideally suited to these tasks. Data visualization techniques leverage the human visual system to organize and structure data, using visual primitives (e.g., shape, color, intensity, size, position) to encode massive amounts of data and reveal relationships, anomalies, correlations, and associated uncertainties. However, current data visualization techniques, like many current analytical processes, rely on post-processing of stored data. Therefore, a new approach is required that enables analysis of data as it passes through system memory by converting the data stream, based on its rate/bandwidth, into appropriate visual elements that encode and characterize salient features of the data with real-time visualization processing. That is, the visualization process needs to be resident with the data as it passes through the system, and must be systematically driven by statistical characteristics of the data stream.

The visual elements generated in this way should incrementally capture base statistics (e.g., counts, distributions, frequency) and higher-order statistical measures (e.g., autocorrelation functions, probability distributions, time- and frequency-domain measures) and, when combined, provide insight into underlying structure, relationships, and trends in the data stream. The design of the visual elements should take into account cognitive abilities and biases in segmentation, grouping (e.g., gestalt measures), and chunking, as well as user expertise and training. Additionally, the visual elements should capture sufficient statistics and structure so that reconstruction of the data stream is possible to some level of precision. The real-time visualization tools need to be tunable to the available processing resources and data throughput while maintaining analytical utility.

The goal of this research topic is the application of established statistical and cognitive principles to the development and demonstration of a real-time system that can generate data visualizations capturing the structures and relationships in data streams (from kbps to beyond 100 Gbps, where methods may either be uniform or differ across this bandwidth spectrum). Visual primitives that leverage human visual processing will need to be defined based on cognitive principles applied to streaming information.
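As an informal sketch of the kind of single-pass, incremental statistics that could drive such visual elements (one possible approach, not a prescribed design), the following Python fragment maintains a running mean, variance (Welford's algorithm), and histogram over a stream without storing it; a renderer could then map these quantities to visual primitives such as position, glyph size, color, or a sparkline. The class name, bin count, and synthetic stream are assumptions for illustration.

    import math, random

    class StreamSummary:
        """Incrementally summarizes a numeric stream in O(1) memory per update."""
        def __init__(self, lo, hi, bins=16):
            self.n = 0
            self.mean = 0.0
            self.m2 = 0.0                      # sum of squared deviations (Welford)
            self.lo, self.hi = lo, hi
            self.hist = [0] * bins             # running histogram (a distribution primitive)

        def update(self, x):
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)
            b = int((x - self.lo) / (self.hi - self.lo) * len(self.hist))
            self.hist[min(max(b, 0), len(self.hist) - 1)] += 1

        @property
        def std(self):
            return math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0

    # Example: summarize a synthetic stream and emit values a renderer could map to
    # visual primitives (e.g., mean -> position, std -> glyph size, hist -> sparkline).
    s = StreamSummary(lo=-5.0, hi=5.0)
    for _ in range(100_000):
        s.update(random.gauss(0.0, 1.0))
    print(s.n, round(s.mean, 3), round(s.std, 3), s.hist)

Because each update is constant-time and constant-memory, summaries of this kind can in principle keep pace with a stream whose raw samples are never retained, which is the regime this topic targets.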
Algorithmic, statistical, or rule-based definition of the combination of visual and analytical primitives is desirable. The system should be modifiable on the fly by human operators to handle new salient features or to highlight discovered correlations. Streaming data may be open source, purchased, or synthetically generated. The techniques should be broadly applicable.

PHASE I: Task 1: Develop an approach for incrementally encoding statistical measures in visual elements. The visual elements should be able to be combined into complex visualizations that leverage human cognitive abilities for pattern recognition and correlation. In-situ visualization run-time code should be tunable to differences in system configuration (single- vs. multi-core) and data bandwidth. Task 2: Develop an approach for the application and combination of the visual elements (from Task 1). Task 3: Develop an architecture and conceptual design for the implementation of a dynamic system based on the elements and principles developed in Tasks 1 and 2. Task 4: Implement a minimal proof-of-concept real-time system that processes a set of representative data and generates visualizations constructed from the visual elements of Tasks 1-2. Phase I deliverables should include a final Phase I report that includes: (1) a detailed description of the approach (or algorithm) for applying statistical and cognitive principles to a specific data set; (2) a detailed system architecture and design; and (3) code and a demonstration of the approach using the proof-of-concept system.

PHASE II: Develop, demonstrate, and validate a proof-of-concept design of the real-time visualization generation tool. The required deliverables for Phase II will include the full prototype system, demonstration and testing of the prototype system with high-bandwidth data streams (on the order of Gbps), and a final report. The final report will include (1) a detailed design of the prototype tool, (2) the experimental results from the tool, and (3) a plan for Phase III.

PHASE III: Phase III will consist of the delivery of systems to analysts in DoD and/or commercial operational settings. Within DoD and the intelligence community, real-time visualization tools for variable data rates are generically applicable across a broad array of analyses applied to multi-INT data. It is anticipated that the final product will handle multiple data types, such as structured and unstructured data, imagery, and video, with different characteristics such as noise and reliability acquired through sensors. In the commercial space as well, streaming data is burgeoning, with wide variations across receiving devices, from handhelds to cloud computing. In Phase III, the commercial opportunity is to provide principled and effective visualization technology for this growing market.

REFERENCES:
1. http://people.hofstra.edu/geotrans/eng/ch8en/conc8en/fuel_consumption_containerships.html
2. Ware, C., Purchase, H., Colpoys, L., McGill, M., "Cognitive Measurements of Graph Aesthetics", Information Visualization, June 2002, vol. 1, no. 2, pp. 103-110.
3. Agrawala, M., Li, W., Berthouzoz, F., "Design Principles for Visual Communication", Communications of the ACM, April 2011, 54(4), pp. 60-69.
4. Mackinlay, J. (1986). "Automating the Design of Graphical Presentations of Relational Information", ACM Transactions on Graphics, 5(2), 110-141.
5. Roth, S. F., and Mattis, J. (1990). "Data Characterization for Intelligent Graphics Presentation", Proc. SIGCHI '90, Seattle, WA, ACM, 193-200.
6. Kosslyn, S. M. (2006). "Graph Design for the Eye and Mind", Oxford University Press.
7. Tufte, E. R., "Visual Explanations: Images and Quantities, Evidence and Narrative", February 1997.
8. http://en.wikipedia.org/wiki/Sailing_faster_than_the_wind
SB132-003: High Density Optical Interconnects

OBJECTIVE: Demonstrate low-loss, high-density optical waveguides suitable for chip-scale integration, with layout and pitch comparable to next-generation global-level interconnects. Identify and demonstrate active components beyond the state of the art by incorporating these waveguides.

DESCRIPTION: The use of optics has revolutionized communications due to its extremely large bandwidth, very low propagation loss, and intrinsic lack of electromagnetic interference. Historically, optical signaling began to uproot wired electronic communications with long trunk telecommunications lines. As the technology and manufacturing processes improved and reduced costs, the minimum link length for which optics was superior became shorter every year. Today it is clear that high data-rate optical links can outperform their wired counterparts for links longer than a few centimeters. It is the short-range electrical interconnect, however, that is the principal source of energy dissipation in modern computing systems.

In integrated circuits, performance and power efficiency improve with reducing the size of the system through scaling. For instance, the capacitance per unit length of an interconnect element tends to remain constant through geometric scaling. Because the loss of the line scales with capacitance, the size reduction of the components and interconnects reduces the total length and therefore the total power consumption. This reduction in communication costs was one of the original motivations for circuit integration. Miniaturization, however, cannot continue indefinitely. Although physical limitations such as ohmic resistance due to grain boundary scattering will increase with scaling, the principal limitation is the total number of lines enabled by continued scaling. To give a simple picture, if we are able to double the number of transistors on a chip without increasing its size, we have approximately doubled the thermal dissipation due to interconnects. Although electrostatic signaling is efficient for low-bandwidth and short-distance communications, the loss in interconnects will be a key limiter in future computing systems.

The huge information-carrying capacity of optical fiber networks is a result of the high frequency of an optical electromagnetic wave. Optical interconnects can have low loss and large bandwidth; however, optical interconnect dimensions are typically orders of magnitude larger than electrical lines, in general limited by diffraction effects to sizes on the order of the wavelength of light (~µm). Therefore, they have been unsuitable for integration with electronic devices at the chip level and consume waveguide pitches much larger than metal wires. That said, recent research in nanophotonics has identified techniques to confine optical fields to deeply sub-wavelength scales [1-4]. This project seeks the demonstration of optical waveguides with the low loss and high density required for future interconnects. In addition to mitigating loss for high-density interconnects, successful demonstration of highly confined fields will also provide several ancillary benefits for active devices. The limiting parameters of optical modulators, such as their bandwidth and driving voltage, can be significantly improved if implemented with highly confined optical fields. Other active devices, such as optical switches, can likewise be optimized.
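As a back-of-the-envelope illustration of the scaling argument above (the numbers are assumed, typical textbook values rather than figures from this solicitation), the Python sketch below estimates interconnect switching energy from a roughly constant capacitance per unit length and contrasts a diffraction-limited optical mode size with nanoscale metal wire pitches.

    # Rough, assumed numbers for illustration only.
    C_PER_MM = 0.2e-12        # ~0.2 pF per mm of on-chip wire, roughly constant across nodes
    V = 1.0                   # supply voltage in volts (assumed)
    E_BIT = 0.5 * C_PER_MM * 1.0 * V**2          # ~0.5*C*V^2 to charge a 1 mm wire once
    print(f"switching energy, 1 mm wire: {E_BIT*1e15:.0f} fJ")
    # Same chip area with twice as many such wires toggling -> roughly twice the dissipation.
    print(f"2x wires at the same activity: ~{2*E_BIT*1e15:.0f} fJ")

    # Diffraction-limited optical mode size vs. nanoscale wire pitch.
    wavelength_nm = 1550.0    # telecom wavelength (assumed)
    for n in (1.5, 3.5):      # cladding-like and silicon-like refractive indices (assumed)
        print(f"n={n}: mode limited to ~{wavelength_nm/(2*n):.0f} nm, "
              f"vs. metal wire pitches of tens of nm")

The point of the comparison is that conventional dielectric waveguides remain hundreds of nanometers wide at best, which is why the sub-wavelength confinement techniques cited in [1-4] are needed to approach the sub-200 nm pitches targeted in Phase II.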
Successful development of optical interconnects will result in advanced electronic components for DoD systems with higher data throughput at lower power dissipation levels. These components will advance embedded computing applications on various sensor platforms that require very data-intensive computations with severe power efficiency requirements. Some of the immediate benefits to the warfighter are real-time processing of complex sensor data for faster reaction times and the development of integrated, high-density optical switches and routers for extreme I/O applications.

PHASE I: Through simulation and analysis, determine the technical feasibility of low-loss nanophotonic waveguides with dimensions compatible with future global interconnects. Develop a conceptual design and model key waveguide elements to provide the required confinement for on-chip integration with less than 10 dB/cm propagation loss. Define the materials, processes, and fabrication steps needed to assess manufacturability and compatibility with standard integrated circuit fabrication processes. Design a preliminary concept for a high-speed, low-Vpi modulator.

PHASE II: Design, fabricate, and experimentally characterize prototype ultra-confined plasmonic waveguide devices to demonstrate dimensional compatibility with electronic devices and propagation losses below 20 dB/cm. The chip-level integration requirement will be satisfied by a waveguide pitch of less than 200 nm. Provide a path to achieving propagation losses below 10 dB/cm and a waveguide pitch below 100 nm. Required Phase II deliverables will include demonstration of the operation of a prototype device(s) meeting or exceeding the above specifications. In addition to waveguides, demonstrate a path to building optical couplers, modulators, switches, and multiplexers with sizes comparable to their electronic counterparts.

PHASE III: For the Department of Defense, examples include specialized real-time image and video processing for wide field-of-view, full-motion-video persistent surveillance systems. In general, embedded computing applications on various sensor platforms require very data-intensive computations with severe power efficiency requirements. Another key transition opportunity for the Department of Defense is in integrated, low-Vpi modulators for analog and digital photonic links. Dual-Use Applications: A nanoscale photonic interconnect architecture promises to alleviate the problems associated with the interconnect bottleneck for high clock-speed computing, improving both power efficiency and performance. Applications for this technology span both the military and commercial arenas, from terabit signal processors to multi-terabit-per-second routers. Apparent applications include data-intensive computational platforms where the processing unit executes intensive I/O operations or requires considerable data transfers between different areas on the chip. Commercial Application: In the commercial space, applications range from ultra-fast on-board and off-board signal routing to efficient high-performance signal processors. Current high-performance computing systems are moving to optics for board-to-board and box-to-box communications. In Phase III, the obvious commercial transition opportunity is in ushering optics into intra-chip communications.

REFERENCES:
1. D. K. Gramotnev, S. I. Bozhevolnyi, "Plasmonics beyond the diffraction limit", Nature Photonics 4, 83 (2010).
2. E. Ozbay, "Plasmonics: Merging Photonics and Electronics at Nanoscale Dimensions", Science 311, 189 (2006).
3. J. Conway, S. Vedantam, H. Lee, J. Tang, E. Yablonovitch, "What is the Smallest Volume Into Which Light Can Be Focused, Efficiently?", International Nano-Optoelectronics Workshop, i-NOW '07, p. 77 (2007).
4. R. Zia, J. A. Schuller, M. L. Brongersma, "Plasmonics: The Next Chip-Scale Technology", Materials Today 9, 20 (2006).
SB132-004: Exploiting Radio Propagation Reciprocity in Wireless Networks

OBJECTIVE: Develop the system components for exploiting the reciprocity characteristics of radio propagation to improve wireless network security and efficiency.

DESCRIPTION: There is a critical military need both for increases in wireless system performance and spectral efficiency and for more distributed security techniques. In particular, the military is deploying wireless systems much more broadly and to lower echelons, which significantly complicates the process of distributing cryptographic key information. Creating a secret among authorized radios in order to encrypt or authenticate traffic is paramount to protecting personnel and missions. A scalable technical approach is needed to meet the evolving needs of military wireless communications. This technology is directly applicable to future systems developed at the Joint Tactical Networking Center (JTNC, formerly the Joint Program Executive Office of the Joint Tactical Radio System, JPEO JTRS).

Wireless networks suffer from variations in received signal levels and delays due to motion of the transmitter or the receiver, as well as of objects in the vicinity of either. One fundamental characteristic of radio wave propagation is that the delays and fades are the same (i.e., reciprocal) in both directions. In other words, the channel experienced from radio 1 to radio 2 is the same as that experienced from radio 2 to radio 1. However, the radio frequency (RF) components and other electronics internal to the radios typically do not possess this reciprocity property. Consequently, wireless system designs typically assume no benefit from the reciprocity of the radio propagation. There is much to be gained if the system can take advantage of this property, both in performance and potentially in security. System performance can be improved by lowering control overhead and enabling faster reaction to time-varying fading, because a radio can adapt its transmissions based on its own measurements of the channel rather than waiting for feedback from its intended receiver. The benefit can be even more pronounced in multi-antenna systems [1]. Security can be improved by leveraging the variations in the reciprocal channel as a shared secret (i.e., a key) between the two communicating radios in order to authenticate or encrypt transmissions between them. Because the channel depends on the position of each of the radios, a third radio that is not collocated with either of the original two will not experience the same propagation fades and therefore will not be able to determine the secret key.

A variety of work, both theoretical and experimental, has been performed on this topic, but a consistent set of analysis and experimentation has yet to lead to a recommended system approach. Clearly, the accuracy with which the devices can be calibrated determines the correlation of the channel estimate with the actual reciprocal channel. Most existing work focuses on the use of channel state information for a particular measurable statistic. For this effort, proposed solutions should assess the trade-off between accurate device calibration and the relevant performance metric. Proposed solutions should quantify the secrecy rate of a pair of radios exploiting channel reciprocity relative to a potential eavesdropper, as well as suggest approaches to sharing such a secret among several radios, including potential vulnerabilities of such approaches.
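As an informal illustration of the key-extraction idea (one simplified scheme, not a recommended system approach), the Python sketch below simulates correlated reciprocal channel measurements at two radios and an uncorrelated eavesdropper, then quantizes each radio's measurements into bits; the fading statistics, calibration-error magnitudes, and median-threshold quantizer are assumptions for illustration.

    import random

    def measure(channel, calibration_error):
        """One radio's noisy estimate of the shared reciprocal channel gains (dB)."""
        return [g + random.gauss(0.0, calibration_error) for g in channel]

    def to_bits(samples):
        """Quantize fades around the median: above -> 1, below -> 0 (a common simple scheme)."""
        med = sorted(samples)[len(samples) // 2]
        return [1 if s > med else 0 for s in samples]

    random.seed(1)
    true_channel = [random.gauss(0.0, 6.0) for _ in range(128)]   # shared reciprocal fading (dB)
    radio_a = to_bits(measure(true_channel, calibration_error=1.0))
    radio_b = to_bits(measure(true_channel, calibration_error=1.0))
    # An eavesdropper at a different location sees an independent channel realization.
    eve = to_bits([random.gauss(0.0, 6.0) for _ in range(128)])

    agree_ab  = sum(a == b for a, b in zip(radio_a, radio_b)) / len(radio_a)
    agree_eve = sum(a == e for a, e in zip(radio_a, eve)) / len(radio_a)
    print(f"A-B bit agreement: {agree_ab:.2%}, A-Eve agreement: {agree_eve:.2%}")
    # Residual disagreements between A and B would be removed by information
    # reconciliation and privacy amplification before the bits are used as a key.

The sketch also makes the calibration trade-off visible: increasing the calibration_error value lowers the A-B agreement and hence the usable secret-key rate, which is exactly the trade space this topic asks proposers to quantify.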
In addition, the research should address the system impacts of maintaining calibration of the devices to produce a particular agreement in channel estimates. Analysis should be supported by experiments with actual radios, where the radios can be calibrated to varying degrees of fidelity. Agreement in channel estimates at participating radios should be quantified. The research should demonstrate the ability to generate a shared secret among the desired radios, as well as to utilize the channel estimate to improve system efficiency.

PHASE I: Propose a relevant channel model that incorporates time-varying fading and a model of RF component responses. The models should capture multiple antennas per radio and more than two radios in the system. Quantify the performance improvement and degree of secrecy as a function of the ability to calibrate radios and isolate propagation effects. Recommend approaches to extend to multiple antennas and to secrets shared among several participating radios. Evaluate the system-level impact of the effort required to maintain sufficient agreement of channel estimates. Assess potential vulnerabilities of proposed solutions. Propose an approach to test solutions in a relevant environment.

PHASE II: Develop a test environment to evaluate proposed solutions and assess vulnerabilities, including the ability to adjust the fidelity of channel state information available to the signal processing subsystem. Implement proposed solutions in test radios with sufficient data capture capabilities to quantify the desired performance metrics. Test radios and proposed solutions in a relevant environment with motion and interference. Adjust the analysis from Phase I based on experimental results. Evolve proposed solutions to improve performance and secrecy in real RF propagation.

PHASE III: Security of transmissions among radios is critical to military operations. As mobile systems become more widely deployed, more distributed key generation approaches will become necessary. In addition, information exchange requirements are increasing as access to radio spectrum is decreasing. The results of this research effort can enable scalability and efficiency of future military communications systems, and may potentially lead to an applique that can enhance existing systems in the near term. Leveraging channel reciprocity has great potential benefits in commercial applications, both for licensed commercial cellular systems and for unlicensed wireless access points. The ever-growing demand for wireless services, particularly for mobile devices, is driving technology to be more spectrally efficient as well as adaptive to changing channel conditions. Moreover, the commercial world tends to lag the defense community in system security, and the increased use of wireless devices for sensitive applications such as banking and medical monitoring has created a gap between the security needs and the capabilities of commercially available wireless systems. The solutions proposed as part of this effort can partially bridge that gap.

REFERENCES:
1. Kaltenberger, et al., "Relative Channel Reciprocity Calibration in MIMO/TDD Systems", ICT Mobile Summit 2010, 19th Future Network & Mobile Summit, June 16-18, 2010, Florence, Italy.
2. Liu, H., et al., "Collaborative Secret Key Extraction Leveraging Received Signal Strength in Mobile Wireless Networks", in Proceedings of 2012 IEEE INFOCOM, March 25-30, 2012, Orlando, FL.
3. Wilson, R., D. Tse, and R. A. Scholtz, "Channel Identification: Secret Sharing using Reciprocity in Ultrawideband Channels", IEEE Transactions on Information Forensics and Security, Vol. 2, No. 3, September 2007.
SB132-005: Novel schemes for highly reliable aerospace electromechanical primary actuation systems

OBJECTIVE: Define and demonstrate a novel design scheme for high-reliability, fault-tolerant electromechanical actuation for critical aerospace applications.

DESCRIPTION: Many emerging and future USAF and USN aircraft programs, including efforts related to Next Generation Air Dominance, drive to demanding actuator packaging requirements that today's electrohydrostatic or hydraulic actuators cannot easily meet. Lighter, smaller, more reliable actuation technology can help enable game-changing new aircraft capabilities. Electromechanical actuation offers great potential advantages in improved maintainability, ease of distribution, actuator packaging and installation, system-level power-to-weight ratios, stiffness, installation flexibility, and application-customized designs [Ref 1]. These theoretical advantages over hydraulic and pneumatic systems have been recognized for at least forty years [Ref 2]. At the same time, hydraulic actuation dominates critical applications in aerospace today, particularly for the movement of aircraft primary flight control surfaces. New aircraft, including the Lockheed Martin F-35, Boeing 787, and Airbus A380, still use hydraulics in primary flight control applications.

Airworthiness standards for large aircraft expect the system design to drive to a very low probability of catastrophic failure. As an example, the Federal Aviation Administration's advisory circular AC 25.1309 mandates system design with a probability of catastrophic failure of less than one in 10^9 operating hours [Ref 3]. Conventionally, hydraulic actuation achieves the requisite reliability at a system level by using multiple independent hydraulic circuits driving a single control surface, with a very low probability of failure of the final hydraulic actuator output link or links. By contrast, existing electromechanical actuators typically employ rotary-to-rotary or rotary-to-linear output elements that depend on mechanical rolling-element bearings, which, in serial arrangement, do not achieve the requisite reliability ratios. Single-point failures that can lead to a mechanical jam are generally not certifiable in a primary control application. While parallel devices or complex arrangements may mitigate these failures, system complexity, cost, and weight are increased to the point that hydraulics are generally favored [Ref 1]. Alternate electromechanical actuator designs and arrangements are needed that can achieve failure rates of less than one incidence per 10^9 hours, to enable their use in flight-critical applications as well as in critical space applications.

Novel approaches to electromechanical actuator design are sought that robustly and comprehensively consider component reliability levels and apply creative architectural solutions to achieve fault tolerance and high reliability. Fault tolerance is judged at a system level by a rigorous fault hazard analysis (FHA) and fault-tree buildup. An exemplary buildup for a state-of-the-art fault-tolerant electromechanical actuator design is provided in Ref 4. Note that this dual-lane fault-tolerant electric drive architecture is still only capable of achieving a probability of system failure of 8.68 x 10^-6 failures per hour, far short of the 1 x 10^-9 failures per hour goal for unaided primary flight control. The limiting factors are identified to be the mechanical elements: the gearbox and the actuator mechanism.
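As a rough illustration of why redundancy in the electrical lanes alone cannot reach a 10^-9 per hour goal (the failure rates and flight time below are assumed, order-of-magnitude placeholders, not figures from Ref 4), the Python sketch below combines series and parallel fault-tree elements under the usual constant-failure-rate approximation: a dual-redundant pair fails only if both lanes fail in the same flight, while a single-point series element contributes its full failure probability.

    import math

    FLIGHT_HOURS = 5.0            # assumed exposure time per flight

    def p_fail(rate_per_hour, hours=FLIGHT_HOURS):
        """Probability an element with a constant failure rate fails during one flight."""
        return 1.0 - math.exp(-rate_per_hour * hours)

    # Assumed, order-of-magnitude component failure rates (per hour), for illustration only.
    lane_electronics = 1.0e-4     # one motor-drive/control lane
    mech_single_point = 1.0e-6    # shared gearbox / output mechanism (no redundancy)

    # Dual-redundant electronics: the function is lost only if BOTH lanes fail (AND gate).
    p_dual_lanes = p_fail(lane_electronics) ** 2

    # Single mechanical element in series with the redundant lanes (OR gate).
    p_system = 1.0 - (1.0 - p_dual_lanes) * (1.0 - p_fail(mech_single_point))
    rate_system = p_system / FLIGHT_HOURS   # approximate equivalent rate per hour

    print(f"dual-lane electronics alone: {p_dual_lanes:.2e} per flight")
    print(f"system with single-point mechanism: ~{rate_system:.2e} per hour vs. 1e-9 goal")
    # The series mechanical element dominates: duplicating electronics does not help
    # unless the mechanical load path itself is made fault tolerant.

With these placeholder numbers the system failure rate collapses to roughly the single-point mechanical rate, which mirrors the conclusion drawn from Ref 4 that the gearbox and actuator mechanism, not the electronics, set the reliability floor.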
From a system perspective, simply creating more reliable mechanical components will not achieve the requisite failure rate goals; a novel electromechanical system architecture is required. Electromechanical design approaches should be widely scalable but readily demonstrable at a small scale in a laboratory bench setting. If claimed component performance is more than an order of magnitude better than that found in standard catalogues of electronic or non-electronic component reliability, a full justification must be provided. Rigorous consideration should be given to the complexity and reliability of any sensing, fault-detection, and toggling equipment incorporated in a design. Preferred design approaches are those readily scaled across different actuator sizes and different aircraft applications. Approaches are preferred that could be scaled to encompass a family of actuators able to eliminate the hydraulic flight control actuation systems of a large, man-rated aircraft.

Design concepts should also identify electrical bus requirements, including the preferred number of electrical circuits, efficiency, and thermal considerations, whether the actuator will put electric power back on the bus, and whether that power will need to be conditioned. Designs are desired that will accommodate power reconditioning and regeneration from actuator back-driving, to lower the overall heat rejection requirements. Additionally, thorough consideration of integration into an aircraft electrical bus is encouraged. In particular, minimizing the peak power used and the power regenerated by the actuation system is considered important in order to reduce the load on the aircraft's electrical system. Preferred actuator designs should be operable on 270 VDC or +/- 270 VDC buses, although lower DC bus voltages are considered acceptable for bench tests. Actuators are desired that provide at least 3 hp of peak power, although lower powers are considered acceptable for intermediate testing activity. Larger designs are preferred, especially actuator architectures that are modular or can be easily scaled across different applications requiring higher stroke, higher frequency response, higher output force, or frequent control reversal. High-stiffness designs are preferred. The scaling limitations of any architectural approach must be clearly identified.

PHASE I: Design a concept for a fault-tolerant, ultra-high-reliability electromechanical actuator. Develop an analysis of predicted performance, and define key component technological milestones. Establish performance goals in terms of power-to-weight ratio, and contrast them with existing systems. Perform a fault-tree analysis demonstrating a design approach capable of achieving the requisite reliability level. Perform initial hardware risk reduction or a mockup of the mechanical arrangement of the actuator output portion, possibly using 3D-printed parts. Phase I deliverables will include a description of the conceptual actuator design, a performance assessment against existing approaches, a thorough reliability analysis, and a risk reduction and demonstration plan.

PHASE II: Develop, demonstrate, and validate the architectural approach to high actuator reliability. Construct and demonstrate the operation of a laboratory prototype actuator that has all of the architectural features needed to achieve high-reliability actuation. Exercise the relevant fault modes, and show robust operation. Perform additional analyses to project the eventual performance capabilities of the architectural approach.
PHASE III: High-reliability, high power-to-weight ratio, fault-tolerant electromechanical actuators have applicability in many future USAF and USN aircraft programs, especially Next Generation Air Dominance, which have demanding packaging requirements that current electrohydrostatic actuators cannot easily meet. Unmanned aircraft programs, including demonstration programs executed by DARPA, may particularly benefit from such actuators, as the inclusion of hydraulic systems adds great expense, complexity, and weight to an aircraft. The military transition path would be inclusion of an actuator into a future aircraft program of record. Aerospace-grade electromechanical actuators can also find application in the commercial aerospace industry. Moving toward more-electric aircraft is an oft-stated goal; however, electromechanical actuators have largely been relegated to non-flight-critical applications. A commercial transition path would be development of a flight-grade actuator and inclusion into a future aircraft program of record. As an example, Airbus explicitly states that a move toward electric actuation is among its long-term goals, as it seeks to reduce conversion losses and increase overall systems efficiency [Ref 5].

REFERENCES:
1. Stephen L. Botten, Chris R. Whitley, and Andrew D. King, "Flight Control Actuation Technology for Next-Generation All-Electric Aircraft", 2000.
2. Roskam, J., Rice, M., Eysink, H., "A comparison of hydraulic, pneumatic, and electromechanical actuators for general aviation flight control", SAE Paper 790623, 1979.
3. Federal Aviation Administration Advisory Circular AC 25.1309-1A, "System Design and Analysis".
4. J.W. Bennett, B.C. Mecrow, D.J. Atkinson, and G.J. Atkinson, "Safety-critical design of electromechanical actuation systems in commercial aircraft", IET Electric Power Applications, 2010.
5. "All-Electric Aircraft", Clean Sky Initiative.