DOC SBIR 2015-NIST-SBIR-01
NOTE: The Solicitations and topics listed on this site are copies from the various SBIR agency solicitations and are not necessarily the latest and most up-to-date. For this reason, you should use the agency link listed below which will take you directly to the appropriate agency server where you can read the official version of this solicitation and download the appropriate forms and rules.
The official link for this solicitation is: http://www.grants.gov/web/grants/view-opportunity.html?oppId=275010
Available Funding Topics
- 9.01.01.73-R: Category-Theoretic Tools to Support Manufacturing Information Integration
- 9.01.02.73-R: Computer Aided Standards Development (CASD) – A Software Tool to Automate the Standards Development Process
- 9.01.03.68-R: High-Throughput Manufacturing Methods for Engineered MRI Contrast Agents
- 9.01.04.68-R: Laser Power Meter for Manufacturing Applications
- 9.01.05.68-R: Optical Microscopy as Applied to Fabrication of Atomic-Scale Devices
- 9.01.06.73-R: Predictive Modeling Tools for Metal-Based Additive Manufacturing
- 9.01.07.63-R: Stroboscopic Method for Dynamic Imaging in a Transmission Electron Microscope at GHz Frequencies
- 9.01.08.61-R: Tuning Germanium Crystal Reflectivity and Mosaic
Advanced Manufacturing is “a family of activities that (a) depend on the use and coordination of information, automation, computation, software, sensing, and networking, and/or (b) make use of cutting edge materials and emerging capabilities enabled by the physical and biological sciences, for example nanotechnology, chemistry, and biology. This involves both new ways to manufacture existing products, and especially the manufacture of new products emerging from new advanced technologies.”
This subtopic calls for a software tool to test the categorical formalism on integration problems in smart manufacturing and additive manufacturing. Category theory has been identified as a flexible and straightforward mathematical formalism for establishing compatibility of information structures and setting up the required information exchange. The software tool must enable the creation of the category-theoretic mappings needed for integrating different information models in multiple domains. In addition, the tool must be scalable so that it can be used to solve integration problems of varying size and complexity, including integration problems that change as systems evolve over time. This is crucial to the eventual commercialization of the tool.
The project goal is to develop prototype tools that can represent manufacturing information objects, currently stored in Excel spreadsheets or SQL-type databases, as categories and demonstrate that the tools enable the integration of information across these two representations.
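For illustration only, the sketch below shows, in Python, one minimal way such a mapping might be represented: each source schema (a spreadsheet export and a SQL-style table) is translated into a shared target schema by a functor-like attribute map, and records are merged along a common key. All names and structures are hypothetical and are not a prescribed design.

```python
# Illustrative sketch only: a toy "schema as category" view, where the explicit
# pieces are the functor-like maps that carry each source schema into a shared
# target schema. Names are hypothetical, not a NIST design.

# Source A: rows exported from a spreadsheet
sheet_rows = [
    {"PartNo": "A-100", "Matl": "Ti-6Al-4V", "MachineID": "M3"},
]

# Source B: rows from a SQL-style table with different column names
sql_rows = [
    {"part_id": "A-100", "material": "Ti-6Al-4V", "workcenter": "M3"},
]

# Structure-preserving maps (attribute -> attribute) from each source schema
# into the shared schema.
functor_from_sheet = {"PartNo": "part", "Matl": "material", "MachineID": "machine"}
functor_from_sql = {"part_id": "part", "material": "material", "workcenter": "machine"}

def apply_functor(row, functor):
    """Translate one record along a schema mapping."""
    return {functor[k]: v for k, v in row.items() if k in functor}

def merge(rows_a, functor_a, rows_b, functor_b, key="part"):
    """Push both sources into the shared schema and join on the common key."""
    merged = {}
    for row, functor in [(r, functor_a) for r in rows_a] + [(r, functor_b) for r in rows_b]:
        shared = apply_functor(row, functor)
        merged.setdefault(shared[key], {}).update(shared)
    return merged

if __name__ == "__main__":
    print(merge(sheet_rows, functor_from_sheet, sql_rows, functor_from_sql))
    # {'A-100': {'part': 'A-100', 'material': 'Ti-6Al-4V', 'machine': 'M3'}}
```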
Phase I activities and expected results:
- Develop a small-scale manufacturing-related demonstration software tool that can visually represent the information objects.
- Demonstrate how to merge the databases.
- Demonstrate how to answer a selected set of queries using the merged database.
Phase II activities and expected results:
- Expand the tool to handle integration when data is captured in Excel spreadsheets.
- Demonstrate that the tool works for a small manufacturing example.
- Demonstrate that the new tool works on realistic problems that involve both databases and Excel spreadsheets.
NIST will be available to assist the awardee in choosing the scenario and information objects.
Reference:
1. Spivak, D. Category Theory for Scientists, MIT Press. (October 2014).
The development of documentary and test standards is a long and tedious process. Challenges facing standards developers include complex, inadequately defined terminology, and rapidly changing associated information content. Even after a standard is “set,” its implementation and adoption can be hampered by the gap between the technical requirements of that standard and the technology required to implement those requirements.
NIST seeks a software tool that will make the process of designing and developing documentary standards faster, more robust, and more integrated. The model for this subtopic is the so-called Computer Aided Software Engineering (CASE) tool, itself modeled after the Computer Aided Design (CAD) tool, but applied to standards development and deployment. The tool should provide the following capabilities for standards development and deployment:
- Categorize and organize standards’ content in a structured information model, supporting modularization and reuse.
- Establish terminology connections between related standards, and maintain semantic consistency across standards.
- Generate a visual representation and navigation scheme for the standard, so that the standard may be communicated to the end-user through interactive means (such as a touch-screen tablet).
- Provide an underlying formal model that is amenable to testing and verification, and that facilitates the implementation of the standard by automatic or semiautomatic generation of software modules. This should allow software implementers to extract portions of the standard to meet specific implementation requirements.
Standards development organizations (SDOs) and the scientific and engineering societies that participate in those organizations will benefit greatly from such a tool. Vendors will also benefit: the tool would draw content from an existing standard, populate its information model, and support a consistent assessment that helps the vendor identify the applicable requirements. NIST will greatly benefit from such a tool, which will enable better testing of standards for broad industry deployment. Such a tool will also allow NIST to develop metrics to assess the “quality” of a standard and to verify whether an implementation meets a standard’s scope and requirements.
The life-cycle of a standard may involve three broad stages [1]. First is the development stage, in the course of which stakeholders gather within committees, prepare a draft, and come to a consensus on a final standard. The second stage is the deployment stage, which may include a pilot-scale implementation, followed by industry-wide implementation. The third stage is the maintenance stage, in which the standard is revised and maintained. A well-defined underlying information structure/model will facilitate the implementation of all three stages. In addition, it can support the instantiation and communication of the standard to the end-users using the varied digital media available today. Even though information management and software tools have advanced considerably over the recent decades, SDOs rarely take advantage of those advancements.
The goals of this project are the following: 1) Build a framework for developing a taxonomy and ontology for the terminology and concepts contained in a standard; 2) Capture the requirements of a standard in a formal model; and 3) Develop a standard as a structured information model, instead of a simple text document.
Additional tools to automatically verify these models for consistency and generate other artifacts such as documents and software implementation modules can be also developed. All of these will be supported by a tool that will allow standards developers and end-users to interactively view and navigate the information models. Such technology will greatly improve the deployment, adoption, and maintenance of standards.
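For illustration only, the sketch below shows one way a standard might be captured as a structured information model rather than flat text, with a simple machine check that every term cited in a requirement is actually defined. The structure is hypothetical and is not a prescribed CASD schema.

```python
# Illustrative sketch only: a standard represented as clauses, defined terms, and
# requirements, with a basic consistency check over the terminology.

standard = {
    "id": "EXAMPLE-STD-1",
    "terms": {
        "build platform": "Surface on which an additively manufactured part is built.",
        "layer thickness": "Nominal height of one deposited layer.",
    },
    "requirements": [
        {"clause": "5.1", "text": "The layer thickness shall be recorded for each build.",
         "uses_terms": ["layer thickness"]},
        {"clause": "5.2", "text": "The build platform shall be leveled before each build.",
         "uses_terms": ["build platform", "leveling procedure"]},
    ],
}

def undefined_terms(std):
    """Return (clause, term) pairs where a requirement cites an undefined term."""
    defined = set(std["terms"])
    return [(req["clause"], term)
            for req in std["requirements"]
            for term in req["uses_terms"]
            if term not in defined]

if __name__ == "__main__":
    for clause, term in undefined_terms(standard):
        print(f"Clause {clause}: term '{term}' is not defined")
    # Clause 5.2: term 'leveling procedure' is not defined
```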
The outcome of this effort will bring together SDOs, software implementers, and end-users (both manufacturers and their consumers) under a single framework and allow them to exchange standards information in an unambiguous and efficient manner. While the focus of this SBIR subtopic will be related to standards in manufacturing, the general methodology is applicable to other industry sectors.
Phase I activities and expected results:
- Expand on the NIST Ontological Visualization Interface for Standards (NOVIS) tool [2,3] to develop a taxonomy editor for standards. This should include a classification scheme and underlying ontology modeling the concepts and relationships.
- Develop a formal representation scheme to capture the requirements for a standard. This may be based on the Framework for Analysis, Comparison, and Test of Standards (FACTS) work [4].
- Develop an export/import mechanism for the information content of a standard and associated document formats.
- Develop a business case for a Computer Aided Standards Development (CASD) tool, working with standards developing organizations such as ASME, ASTM, IEEE, OMG, and ISO.
Phase II activities and expected results:
- Design an initial architecture and software for realizing a computer aided tool for standards development.
- Develop a Computer Aided Standards Development (CASD) tool and a comprehensive case study/demonstration.
- Design an interface between a CASD tool and document generation software, in the form of a plug-in to a document editor that interfaces with the underlying CASD model.
- Design a mechanism for automatic or semiautomatic generation of software to implement modules of the standard.
- Develop a framework for a standards repository where standards may reside as information models. The framework should support version control, cross-standard linking, and maintenance of information consistency across standards.
NIST will consult and provide input to assess progress and performance.
References:
1. Cargill, C.F., Why Standardization Efforts Fail, The Journal of Electronic Publishing. (2011).
2. Narayanan, A., et al., A Methodology for Handling Standards Terminology for Sustainable Manufacturing, NIST Interagency/Internal Report (NISTIR) – 7965. (2013).
3. Lechevalier, D., et al., NIST Ontological Visualization Interface for Standards User’s Guide, NIST Interagency/Internal Report (NISTIR) – 7945. (2013).
4. Witherell, P.W., et al., FACTS: A Framework for Analysis, Comparison, and Test of Standards, NIST Interagency/Internal Report (NISTIR) – 7935. (2013).
Microfabricated magnetic imaging agents with greater sensitivity and new functionality for magnetic resonance imaging (MRI) have recently been demonstrated at NIST [1-4]. The technology relies on thin-film fabrication methods adapted from the semiconductor industry. This “top-down” approach is expensive and suffers from low yield compared to “bottom-up” methods based on chemical synthesis for making other types of contrast agents. NIST seeks applications focused on the development and demonstration of high-throughput techniques for making this new class of MRI contrast agent in sufficient quantity that biologists and physicians can explore new applications. These methods must achieve the necessary control of dimensional and materials properties at nanometer size scales. In addition, manufacturing methods must lead to materials that can be readily prepared for animal studies and potentially for clinical trials using methods currently available in the biomedical community.
The technical proposal must provide details on how to achieve the following goals:
1. Manufacturing methods will lead to contrast agents with sensitivity and functionality similar to those demonstrated at NIST using wafer-based manufacturing methods. For T2* agents (see Ref. [1]), this implies micrometer-scale magnetic particles with sizes and magnetic moments that vary by no more than 5 to 10 % (ideally less) from one particle to another. For multispectral agents (see Refs. [2-4]), this implies, in addition, that magnetic particle shapes are sufficiently well controlled and similar to one another to ensure that the resulting shifted nuclear magnetic resonance (NMR) water linewidths generated by single particles, as well as by ensembles of particles, are no more than 10 to 20 % (ideally less) of the NMR frequency shift itself. For example, methods for making hollow magnetic nano-cylinders [3] should ideally have control to within a few percent over the thickness of the cylinder wall as well as the length and diameter of the cylinder. Depending on cylinder size, this may translate to control of a few nanometers for wall thickness and of a few tens of nanometers for cylinder length. A simple numerical check of these uniformity targets is sketched after this list.
2. Manufacturing methods should be capable of producing millimolar solutions in 0.1 liter batches for in-vivo biological applications.
3. Manufacturing methods should have promise for producing contrast agents at a cost that is comparable with that of current imaging agents for MRI.
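For illustration only, the sketch below applies the uniformity targets from item 1 to hypothetical measurement data; the thresholds follow the 5 to 10 % particle-to-particle and 10 to 20 % linewidth-to-shift guidance above.

```python
# Illustrative check of the uniformity targets in item 1, using hypothetical data.
import statistics

moments = [9.8, 10.1, 10.0, 9.7, 10.3]   # per-particle magnetic moments (arbitrary units)
linewidth_hz = 120.0                      # measured shifted NMR linewidth (hypothetical)
frequency_shift_hz = 1500.0               # NMR frequency shift produced by the particle

moment_spread = statistics.pstdev(moments) / statistics.mean(moments)
linewidth_ratio = linewidth_hz / frequency_shift_hz

print(f"particle-to-particle moment variation: {moment_spread:.1%} (target <= 5-10 %)")
print(f"linewidth / frequency shift: {linewidth_ratio:.1%} (target <= 10-20 %)")
```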
NIST seeks applications that address the issues identified above, as well as methods that streamline wafer-based manufacturing (such as nano-imprinting), use roll-to-roll transfer techniques, use chemical synthesis approaches, or a combination of any of the above.
Phase I activities and expected results:
Demonstrate a manufacturing method for producing contrast agents that have sensitivity and functionality similar to those of microfabricated contrast agents that have been produced at NIST using wafer-based manufacturing methods.
Phase II activities and expected results:
Show the capability for producing millimolar solutions of sufficient quantities at a cost that is comparable to that of current MRI contrast agents.
NIST will be available for consultation and collaboration, including testing contrast agents for sensitivity and functionality.
References:
1. G. Zabow, S.J. Dodd, E. Shapiro, J. Moreland, A.P. Koretsky, Microfabricated High-Moment Micrometer-Sized MRI Contrast Agents, MAGNETIC RESONANCE IN MEDICINE 65, 645–655. (2011).
2. G. Zabow, S.J. Dodd, A.P. Koretsky, Ellipsoidal Microcavities: Electromagnetic Properties, Fabrication, and Use as Multispectral MRI Agents, SMALL 10, 1902–1907. (2014).
3. G. Zabow, S.J. Dodd, J. Moreland, A.P. Koretsky, The Fabrication of Uniform Cylindrical Nanoshells and their Use as Spectrally Tunable MRI Contrast Agents, NANOTECHNOLOGY 20, 385301. (2009).
4. G. Zabow, S.J. Dodd, J. Moreland, A.P. Koretsky, Micro-Engineered Local Field Control for High-Sensitivity Multispectral MRI, NATURE 453, 1058–1062. (2008).
The decreasing cost and increasing efficiency of high-power lasers is revolutionizing manufacturing in the U.S. and around the world. Multi-kilowatt lasers are now routinely used for welding, cutting, and additive manufacturing. Precision control of these processes, and thus the uniform quality of the manufactured product, requires a meter that can measure the power of such lasers with an uncertainty of only a few percent. Historically, NIST’s standards for power measurements of high-power lasers have been massive thermal detectors [1]. While suitably accurate, their size, cost, and technical characteristics (e.g. temporal response) are not optimal for use in manufacturing operations. For example, an important goal is to incorporate a power meter into the head of a laser welder, measuring the output power in real time and without sacrificing laser power or beam quality.
NIST seeks further innovation to improve the state-of-the-art, leading to the commercialization of smaller, faster, and cheaper power meters, which would also have high accuracy. One approach would be a device based on our recent demonstration showing that the inherent force in light (radiation pressure) can be exploited to measure high-power lasers in a manner that could be 1/10th the cost, 10 times the speed, a fraction of the size, and yet with accuracy that is comparable to the existing technology of large thermal detectors [2]. NIST is especially interested in the development of a second-generation radiation pressure power meter.
The goal of the project is to develop a small, fast, rugged radiation pressure sensor capable of measuring, in situ, high power laser radiation up to 10 kW (5 kW/cm2).
- Small: Dimensions less than 50 mm x 50 mm x 50 mm
- Damage threshold: 5 kW/cm2
- Temporal Response: 10 ms
- Reflectance: Primary reflected beam contains 99.99 % of input
- Robust: Survives acceleration of 3 g, operates with sensor in random physical orientation and in motion (up to 1 m/s).
- Signal processing/data access: by separate (external) processor (laptop, FPGA, Raspberry Pi, etc.)
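For context, the sketch below works through the radiation-pressure relation that underlies this subtopic: at normal incidence on an ideal mirror the force is F = 2P/c, so the 10 kW target corresponds to a force of only tens of micronewtons. The numbers are illustrative only.

```python
# Back-of-envelope radiation-pressure arithmetic; values are illustrative only.
c = 299_792_458.0          # speed of light, m/s
g = 9.80665                # standard gravity, m/s^2

def force_from_power(power_w, reflectance=1.0):
    """Normal-incidence radiation force on a mirror of given reflectance."""
    return (1.0 + reflectance) * power_w / c

for p_kw in (1, 10):
    f = force_from_power(1e3 * p_kw)
    print(f"{p_kw:>2} kW -> {f*1e6:6.1f} uN  (equivalent to {f/g*1e6:5.2f} mg of mass)")
# 1 kW -> ~6.7 uN (~0.68 mg); 10 kW -> ~66.7 uN (~6.8 mg)
```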
Phase I activities and expected results:
- Develop suitable force-sensor mechanism (capacitive, current compensation, pressure, etc.).
- Demonstrate performance and calibration of force-sensor mechanism by using calibrated masses, or other means.
- Determine suitable high-reflectance mirror.
- Engineer packaging for small volume and ruggedization.
Phase II activities and expected results:
- Demonstrate temporal response.
- Demonstrate optical power density survivability and thermal management.
- Incorporate pressure sensor (power meter) into a laser-welding head, to demonstrate integration with real-world manufacturing processes.
NIST will be available to assist the awardee by discussing NIST’s research and ideas. The NIST 10 kW laser and laser-welding booth are available for device testing in collaboration with NIST; NIST can aid in determining the accuracy of the developmental power meter through comparison with NIST's Flowing Water Optical Power Meter. Temporal response can be determined by using NIST’s modulated laser source.
References:
1. C. L. Cromer, X. Li, J. H. Lehman, and M. L. Dowell, Absolute High-Power Laser Measurements with a Flowing Water Power Meter, presented at the 11th Conference on New Developments and Applications in Optical Radiometry, Maui, Hawaii, USA. (September 19–23, 2011). See also: http://www.nist.gov/pml/div686/laser_power_meter.cfm.
2. P.A. Williams, J.A. Hadler, R. Lee, F.C. Maring, and J.H. Lehman, Use of Radiation Pressure for Measurement of High-Power Laser Emission, Optics Letters, 38, 4248-4250. (2013). See also: http://www.nist.gov/pml/div686/laser-102213.cfm.
3. U.S. Patent Application No. 2014/0307253, Optical Meter and Use of Same. See: http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&co1=AND&d=PG01&s1=20140307253.PGNR.&OS=DN/20140307253&RS=DN/20140307253.
See also: http://tsapps.nist.gov/techtransfer/index.cfm?event=public.techdisplay&ItemID=409
NIST seeks development of an optical imaging system that has micrometer resolution, an image field of 50 to 200 micrometers, and a depth of focus that ensures image quality over the field of view of interest. Such a system must have a working distance of nominally 20 cm, image an object that is in vacuum, and potentially have flexibility to work around obstructed sight paths.
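To illustrate the design tension in these specifications, the back-of-envelope diffraction estimates below show what numerical aperture a roughly 1 micrometer resolution target implies, and what that aperture means for depth of focus and optic diameter at a 20 cm working distance, assuming visible light at 550 nm. The numbers are illustrative, not requirements.

```python
# Back-of-envelope diffraction estimates (not a design).
import math

wavelength_um = 0.55
target_resolution_um = 1.0
working_distance_mm = 200.0

na = 0.61 * wavelength_um / target_resolution_um        # Rayleigh criterion
depth_of_focus_um = wavelength_um / na**2               # common estimate, n = 1
aperture_diameter_mm = 2 * working_distance_mm * math.tan(math.asin(na))

print(f"required NA            ~ {na:.2f}")
print(f"depth of focus         ~ {depth_of_focus_um:.1f} um")
print(f"clear aperture needed  ~ {aperture_diameter_mm:.0f} mm at 20 cm working distance")
```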
To set the context, NIST is interested in developing new methods to fabricate atomically precise electronic devices. One of the primary challenges in this field, however, is in connecting these atomic-scale devices to macroscopic electrical contacts, enabling external measurement and additional fabrication steps. A new advance that links the atomic scale fabrication to macroscopic processes via imaging and fiducial markings is needed. Ideas for solving this challenge include stereo microscopy, a through-focus 3-D image reconstruction capability, and vacuum-compatible optical imaging. A product developed under this subtopic should be marketable as a tool for those doing research in atomic-scale devices, and ultimately in their commercial production.
The goal is to develop a solution for locating 50 nm-scale features relative to larger fiducial markings, enabling future measurement and process steps on tools such as an SEM-based e-beam lithography system. The objective is to design an imaging system that enables the relative position of near-atomic-scale features and their local contacts to be determined to within several nanometers of overlay. Numerous users of scanning tunneling microscopy (STM) systems will benefit from this much-needed capability. The project should demonstrate the concept (in Phase I) as well as develop a working prototype system (in Phase II) capable of being implemented in a vacuum system with an STM. The prototype must be the basis for an actual system to be developed and sold commercially to the STM and atomic-scale device fabrication communities.
Phase I activities and expected results:
Demonstration of a concept that clearly allows optical imaging of an STM tip and its relative position to optically visible fiducial markings. The system must allow the tip position to be accurately estimated relative to fiducial markings.
Phase II activities and expected results:
Development of a prototype system as defined in Phase I.
NIST will collaborate using its STM Ultra-High Vacuum facilities to help in the design and testing.
References:
1. Fuechsle, Martin, et al. A Single Atom Transistor, Nature Nanotechnology, Vol. 7, p. 242 (2012).
2. Morton, John, et al. Embracing the Quantum Limit in Silicon Computing, Nature 479, p. 345 (2011).
NIST seeks the development of tools that rely on a suite of physics-based and empirical models to support predictive analyses of metal-based additive manufacturing (AM) processes and products. Physics-based models will be developed in such a way as to ensure reusability in a predictive environment, irrespective of product geometry. The tool will support reliable and repeatable microstructure and performance predictions for various geometries for a given process and material. Such a tool should:
- Provide a set of physics-based and empirical models for metal powder-bed fusion manufacturing processes.
- Demonstrate scaling and composability of such models to support geometry-independent reusability.
- Provide ranges of parameter values for which models can be assumed reliable and accurate.
- Provide support for in situ feedback to allow for real-time adjustments during manufacture.
The successful development of such a tool will provide industry with a mechanism to move away from empirical testing and instead rely more on modeling and simulation, enabled primarily by measurement science underpinnings. As a potential means for qualifying AM parts, the tool will support NIST’s mission by reducing AM part lead times and enabling SMEs to expand their market participation.
Industry relies heavily on the manufacturing of coupons to qualify metal parts created using AM processes. Predictive models provide a means for industry to move away from their dependence on testing and towards an environment supported by models and simulation. The transition to modeling and simulation for part qualification is underway, albeit very cautiously and deliberately. Current qualification through modeling and simulation is achieved only with very specific models deployed under very specific circumstances.
The goal of this project is to develop a tool that will support the broader application of physics-based and empirical models as a means for product qualification. This will be achieved by developing sets of composable models, each accompanied by clear application boundaries. These models must be composable at a level of granularity such that microstructure, and to an extent performance, can be predicted to a stated degree of certainty for a given set of process parameters, irrespective of geometry. This tool will be an early step in allowing industry to move away from 100 % testing and towards reliable modeling and simulation in AM.
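For illustration only, the sketch below shows one minimal way "composable" models with declared application boundaries might be represented: each model states the parameter ranges over which it is valid, and a composition refuses to predict outside the joint envelope. Model names, parameters, and coefficients are hypothetical.

```python
# Illustrative sketch only: composable process models with declared validity ranges.

class ProcessModel:
    def __init__(self, name, valid_ranges, predict):
        self.name = name
        self.valid_ranges = valid_ranges      # {param: (low, high)}
        self.predict = predict                # callable(params) -> dict of outputs

    def applicable(self, params):
        return all(lo <= params[p] <= hi for p, (lo, hi) in self.valid_ranges.items())

def compose(models, params):
    """Run models in sequence, feeding each model's outputs into the next."""
    state = dict(params)
    for m in models:
        if not m.applicable(state):
            raise ValueError(f"{m.name}: parameters outside declared validity range")
        state.update(m.predict(state))
    return state

# Hypothetical melt-pool and microstructure models with made-up coefficients.
melt_pool = ProcessModel(
    "melt_pool", {"laser_power_w": (100, 400), "scan_speed_mm_s": (200, 1200)},
    lambda p: {"melt_pool_width_um": 0.3 * p["laser_power_w"] - 0.05 * p["scan_speed_mm_s"]},
)
microstructure = ProcessModel(
    "microstructure", {"melt_pool_width_um": (20, 120)},
    lambda p: {"mean_grain_size_um": 0.4 * p["melt_pool_width_um"]},
)

print(compose([melt_pool, microstructure], {"laser_power_w": 200, "scan_speed_mm_s": 800}))
```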
As AM nears production-ready capabilities, advancements in modeling and simulation have become increasingly necessary. Many institutions, especially universities and small companies, do not have the resources to test each part created. Nor do these institutions have the resources for developing reliable predictive models. Development for this tool will focus on support for composable modeling for metal powder bed fusion processes, including direct metal laser sintering and selective laser melting, though the principles applied during its development should support broader applications. Therefore, one goal of this project will be to provide a foundation for developing similar tools in the future for other processes, including those that build parts using polymer-based processes.
Phase I activities and expected results:
- Development of a set of parameterized, composable models to support predictive analysis in a proof-of-concept operating environment.
- Development of a specified set of operating conditions for which the models are applicable, including the degree of certainty that they are able to predict performance.
- Demonstration of model composability and reliability by predicting the microstructure, to a specified degree of certainty, for several basic shapes.
- Prediction of fabricated part performance of several basic shapes, to a specified degree of certainty.
Phase II activities and expected results:
- Demonstration of automated or semi-automated model composition to predict microstructure to a specified degree of certainty.
- Demonstration of identification of in situ adjustments based on real-time predictive analysis.
- Development of a tool from which models can be rapidly called and stored on demand.
- Demonstration of model composability and reliability by predicting the microstructure, to a specified degree of certainty, on complex geometry.
- Prediction of fabricated part performance of complex geometry, to a specified degree of certainty.
NIST staff from the Measurement Science for Additive Manufacturing Program in the Engineering Laboratory will work with the awardee, providing consultation and assessment of performance and progress, to develop the fundamental measurement science for this predictive tool. This will support development of a tool necessary to support composable predictive modeling for manufacturing with metal powder bed fusion processes, similar to how finite element analysis is used in conventional machining.
References:
1. Pollock, Neil, and Robin Williams. Software and Organisations: The Biography of the Enterprise-Wide System or How SAP Conquered the World. Taylor & Francis US. (2008).
2. Bourell, D., Leu, M., Rosen, D., eds. Roadmap for Additive Manufacturing: Identifying the Future of Freeform Processing, (http://wohlersassociates.com/roadmap2009.pdf). (2009).
3. Energetics Incorporated, Measurement Science Roadmap for Metal-based Additive Manufacturing, (http://events.energetics.com/NIST-AdditiveMfgWorkshop/pdfs/NISTAdd_Mfg_Report_FINAL.pdf). (May 2013).
A large portion of the global information technology (IT) infrastructure relies on nanoscale devices operating between 1 and 5 GHz. Familiar examples are GPS (1.5 GHz), cellular and wireless communication (2.4 GHz), dynamic random access memory (DRAM, 2 GHz), and computer processors (3 GHz). Although of wide interest and the subject of many research and development efforts, the capability of directly imaging the propagating electromagnetic waves in a device is not available.
Transmission Electron Microscopy (TEM) is the gold-standard technique for spatially resolved imaging. However, dynamic events are not temporally resolved because the signals are time-averaged on the order of a second. Until recently, TEM had been ruled out as a viable time-resolved technique [1, 2]. If the TEM had the power to obtain nanoscale spatial resolution and collect images at high sampling rates, entirely new modes of observation and investigation would become available.
The goal of this project is to enable imaging of periodic, ultrafast phenomena at GHz frequencies and sub-nanometer spatial resolution to enable new measurements for magnetic data storage, advanced materials, electrochemical systems, and wireless communication. This goal shall be accomplished through the design and development of an electron beam modulator that can be integrated with a TEM to allow the capture of rapidly changing structures or features using stroboscopic methods.
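For context, the sketch below works through the equivalent-time (stroboscopic) sampling arithmetic: a probe beam modulated slightly off the device drive frequency samples the device at a slowly slipping phase, so GHz dynamics are replayed at the beat frequency. The operating point shown is illustrative only.

```python
# Back-of-envelope stroboscopic sampling arithmetic; numbers are illustrative.

f_device = 2.0e9              # GHz-frequency process under study (Hz)
f_probe = f_device - 1.0e3    # electron-beam modulation frequency (Hz), detuned by 1 kHz

beat = abs(f_device - f_probe)                        # rate at which one device cycle is replayed
phase_slip_per_pulse = beat / (f_device * f_probe)    # equivalent-time step per probe pulse (s)
pulses_per_cycle = 1.0 / (f_device * phase_slip_per_pulse)

print(f"replay (beat) frequency      : {beat:.0f} Hz")
print(f"equivalent-time step / pulse : {phase_slip_per_pulse*1e15:.3f} fs")
print(f"probe pulses per device cycle: {pulses_per_cycle:.2e}")
```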
Phase I activities and expected results:
- Demonstration of the feasibility of modulating a 200 – 300 keV electron beam either spatially or temporally at a frequency that is tunable between 1 and 10 GHz.
- Demonstration that the modified beam maintains sufficient spatial/energy coherence so that it remains useful for imaging.
Phase II activities and expected results:
- Build and test the electron modulator, working collaboratively with NIST and making use of a NIST-owned TEM as part of that collaboration, if applicable.
NIST will work collaboratively to design and develop the beam modulator concept with the awardee. The awardee shall provide expertise in constructing and testing of the device (Phase II). NIST will collaborate by integrating this device into an existing microscope.
References:
1. B. Barwick, H. S. Park, O.-H. Kwon, et al., Science 322, 1227. (2008).
2. J. S. Kim, T. LaGrange, B. W. Reed, et al., Science 321, 1472. (2008).
The standard for performance in monochromatic scattering of neutrons and x-rays has been pyrolytic graphite (PG) crystals. PG has the disadvantages of scattering higher-order wavelengths and of offering only two useful reflections (002 and 004), which limits flexibility in the choice of wavelength and resolution. If the properties of germanium crystals could be tuned so that their reflectivity performance is comparable or superior to PG, then germanium would replace PG in many applications with improved performance and flexibility, since it rejects higher-order wavelength contamination and has a much larger range of useful lattice spacings than PG. NIST seeks a new processing technology to make the performance of germanium comparable to that of PG.
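For context, the sketch below works through the Bragg-angle arithmetic behind the comparison with PG: both crystals select wavelength via n·λ = 2d·sin θ, but for Ge(111) the second-order (222) reflection is forbidden by the diamond structure, which is why λ/2 contamination is suppressed. The d-spacings and wavelength below are nominal, illustrative values.

```python
# Illustrative Bragg-angle arithmetic; d-spacings are nominal textbook values.
import math

d_pg_002 = 3.354e-10     # pyrolytic graphite (002) spacing, m
d_ge_111 = 3.266e-10     # germanium (111) spacing, m (a = 5.658 A / sqrt(3))
wavelength = 2.36e-10    # desired neutron wavelength, m (illustrative)

for name, d in [("PG (002)", d_pg_002), ("Ge (111)", d_ge_111)]:
    theta = math.degrees(math.asin(wavelength / (2 * d)))
    print(f"{name}: Bragg angle for {wavelength*1e10:.2f} A is {theta:.1f} deg")
print("PG (004) passes lambda/2 at the same angle; Ge (222) is structure-forbidden.")
```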
The main goal of this project is to find a manufacturing technique that can improve the scattering performance of germanium crystals. The NIST Center for Neutron Research would serve as the main bridge for this project by testing the germanium crystal performance.
Phase I would consist of completing feasibility tests to find a promising manufacturing technique. This means the production of germanium crystals with peak reflectivities comparable to those of pyrolytic graphite (0.6 and higher) and mosaics in the range of 15’ to 40’. The NCNR neutron scattering results from Phase I will be published in the open literature.
Phase II would consist of tuning the manufacturing process to obtain a commercial product.
NIST will provide assistance in the form of neutron beam time, at no-cost, at the NCNR, including staff assistance with data taking, analysis and discussion. The use of the NCNR under this subtopic is for non-proprietary research purposes where results are publishable and available to the general public in the literature.
Implementation of renewable energy and climate change related policies around the globe will require access to accurate, internationally recognized measurements and standards. These will be critical for both policy-making purposes as well as evaluating the impact of mitigation efforts. Such capabilities will be equally important for assessing the impact of energy and climate change policies on the economic development of each country. National Metrology Institutes (NMI) in each country need to be aware of the measurement and standards capabilities necessary for implementation of such policies, and must be able to ensure the quality and international acceptance of data on Air Quality and Greenhouse Gas (GHG) measurements and characterization of renewable energy sources.
Remote-sensing systems for Earth monitoring often detect infrared radiation in the 2 μm to 2.5 μm and the 3 μm to 5 μm atmospheric windows, where radiation absorption is low. Calibrating such systems can be more difficult than calibrating optical systems that use visible light because, among other reasons, a type of measurement standard that is available for visible light is not yet available for this spectral range. This subtopic refers to so-called “trap” detectors. A “trap” detector is an assembly of photodiodes (or similar devices) arranged in optical series such that they absorb (that is, “trap”) virtually all incident radiation. (Only a very small fraction of the incident radiation is lost to reflection out of the detector.) The references below show examples of such designs.
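For context, the sketch below works through the arithmetic behind the “trap” idea: with n detector elements in optical series, only the fraction ρⁿ of the incident beam escapes, where ρ is the reflectance of a single element. The reflectance value is illustrative, not a measured property of any particular detector.

```python
# Illustrative trap-detector arithmetic; the per-surface reflectance is assumed.

rho = 0.30   # assumed reflectance of one detector element
for n in (1, 3, 6):
    escaped = rho ** n
    print(f"{n}-element trap: {100*(1-escaped):6.3f} % absorbed, {100*escaped:.3f} % lost")
# 1 element: 70 % absorbed; 3 elements: 97.3 %; 6 elements: 99.927 %
```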
Trap detectors have proven to be highly useful as precision standards for measuring optical power (and related quantities, such as irradiance) in the visible and ultraviolet wavelength ranges. An enabling technology was the silicon photodiode, once devices became available with sufficiently large area and sufficient spatial uniformity. Until recently, however, comparably large and uniform photodetectors for infrared radiation were not available.
NIST believes that this is changing, and that detectors made with infrared-sensitive materials (e.g. InAs, GaInAsSb, etc.) are within the state-of-the-art in large area (e.g., 1.5 cm2 active area) and with a spatial variability of internal quantum efficiency of less than 0.1 % between 1 μm and 4.5 μm. In addition, the internal quantum efficiency of the detectors (i.e., the device efficiency after taking into account the radiation loss due to front-surface reflection) must be close to unity.
The goals of the project are to engineer, build, and demonstrate a trap-detector design that takes advantage of modern uniform, large-area infrared detectors. Ultimately, the goal is for the awardee to market such detectors to the infrared measurement community, for applications including radiation thermometry. The trap detectors must be usable at ambient temperatures of –20 °C and warmer, using internal thermoelectric coolers. The goal is a spatial uniformity of response over the input aperture of 0.1 % or better at all wavelengths between 1 μm and 4.5 μm, where the source is a tunable laser beam of 0.2 mm diameter. An unvignetted acceptance angle of at least 7° is required. An ideal trap detector (3-element reflectance, 6-element tunnel, etc.) would have a 5 mm diameter precision entrance aperture. The design task includes precision electronics to accurately sum the signals from the individual photodetectors.
Phase I activities and expected results:
- An overall design of an infrared trap detector, including optical, electrical, mechanical, and thermal factors.
- Identification, development, and/or fabrication of single-element detectors that are consistent with the overall design.
- Demonstration that the single-element detectors meet the design requirements, e.g., in spatial and spectral uniformity.
Phase II activities and expected results:
- Build and demonstrate the performance of the infrared trap detectors. Performance includes spatial uniformity when measuring infrared power, and angular uniformity when measuring infrared irradiance. It also includes demonstrating linearity of response and a minimization of signal noise.
- Iteration to improve the design and its performance.
NIST will consult and provide applicable background information. NIST will collaborate with the awardee in determining the performance of the single-element detectors and the trap detector prototypes.
References:
1. Eppeldauer G. P. and Lynch D. C., Opto-Mechanical and Electronic Design of a Tunnel-Trap Si-Radiometer, J. Res. Natl. Inst. Stand. Technol. Vol. 105, No. 6, pp. 813–828. (2000).
2. Fox, N. P., Trap Detectors and Their Properties, Metrologia 28, 197. (1991).
Recognizing that the national and economic security of the United States depends on the reliable functioning of critical infrastructure, the President issued Executive Order 13636, Improving Critical Infrastructure Cybersecurity, in February 2013. It directed NIST to work with stakeholders to develop a voluntary framework – based on existing standards, guidelines, and practices – for reducing cyber risks to critical infrastructure.
The 2011 ISO/IEC standards for C [1] (C11) and C++ [2] (C++11) introduced a portable, relaxed multithreaded memory model. Instead of guaranteeing sequential consistency [3] for all legal (data-race-free) programs, these standards allow each atomic shared memory operation to specify the degree of memory consistency it requires. The compiler has to add only the synchronization needed to achieve the specified degree of consistency on a given architecture. The compiler also gains the freedom to reorder shared memory operations within a thread. With per-core processing throughput stagnant but core counts growing, engineers increasingly use this technology (in combination with lock-free synchronization [4]) to improve system performance, responsiveness and robustness, to reduce power consumption, and to maintain scalability on modern multicore and many-core processors. With the dominant role of C/C++ in systems software, much of our computing infrastructure (and many of our devices) will depend on such code.
Validating code written to the new model is difficult [5]. Beyond the usual problems of validating multithreaded code (e.g., an exponentially growing number of possibly relevant schedules), the new freedoms granted to compilers and hardware make thorough testing of a library practically impossible, since a bug might manifest only under some specific compiler or hardware reordering. This makes quality assurance and certification of libraries or systems containing such code by testing alone infeasible.
The only practical approach to this assurance challenge is formal verification, that is, a verification tool proves mathematically (typically guided by annotations provided by the software maker) that code meets its formal specifications. Because the software maker has done the hard work (annotation), a customer or certification authority can check the code for correctness by simply running the verifier on the annotated code, without having to trust the software maker. Thanks to improvements in technology, deductive verification is now practical for realistic code, and has been used to verify sophisticated software such as OS kernels [6], optimizing compilers [7], and concurrent system-level code [8].
A practical and sound deductive verifier for C11/C++11 code would allow library writers and device engineers to conclusively prove, to customers and certification authorities, that their code indeed meets its functional specifications. Such proofs enable development of an ecosystem of portable, trustworthy, and efficient software libraries.
Hard real-time guarantees are critical in some cybersecurity applications (e.g., to prove availability), as well as in software embedded in medical devices, manufacturing, and advanced communications. While real-time behavior is inherently nonportable and goes beyond the language standards, verification support for real-time properties would greatly extend the utility of the verifier.
The goal of this project is to construct a practical verifier for C11 and/or C++11, and to demonstrate its practicality. The verifier should have the following properties:
- The verifier targets C11 or a subset of C++11 with at least the full capabilities of C11.
- The verifier allows specification and sound verification of functional correctness.
- Verification is driven by specifications, and perhaps additional annotations, in the code itself; these annotations should be sensible to engineers.
- The verification methodology respects standard modular programming practice, such as information hiding, data abstraction, and separate specification/verification of libraries and concurrent components.
- The verification methodology supports specification of the behavior of external entities (e.g. devices) with which the software is designed to interact (e.g., through memory-mapped I/O), and verification of the behavior of the combined system.
- (optional) Verification may support specification and verification of real-time guarantees, based on Worst Case Execution Time (WCET) assumptions expressed through annotations.
NIST emphasizes that the desired outcome of the project is a practical tool that can be used as an aid to produce efficient, verified code, particularly libraries. NIST is interested in the utility of the tool, not in the novelty of the underlying technology.
Phase I activities and expected results:
The Phase I project should produce a verifier design (architecture and specification), a methodology for its use, a full-scale prototype development plan, and evidence that the proposed verifier can be built and will meet the project goals. The evidence should include examples of how code might be annotated to verify programs using (1) standard program control structures (references to stack/heap objects, loops, function calls, framing, termination, etc.), (2) data structures (structs/unions, abstraction, type invariants, etc.), (3) concurrency (locks, weak memory types, lock-free data structures, linearizability, etc.), (4) external devices/entities (modelling, verification of the composite system), and optionally (5) real-time constraints. The Phase I report should also describe some possible verification examples to be tackled in Phase II, and outline how these examples would be verified using the proposed methodology.
Phase II activities and expected results:
Phase II work should consist of construction of a prototype verifier based on the design proposed in phase I, and demonstration of the verifier on some small but interesting examples. The prototype should cover the essential programming mechanisms of C11 (and, in particular, should handle the C11 memory model), but need not cover all language features. Suitable demonstration examples include lock-free hashtables (using hazard pointers), a toy infusion pump, a robust flash-based key-value store, etc.
NIST will have staff available to discuss reasonable subsets of C11 or C++11, possible types of example software to verify, and usable specification and annotation formalisms.
References:
1. ISO/IEC 9899:2011 – Information technology – Programming languages – C.
2. ISO/IEC 14882:2011 – Information technology – Programming languages – C++.
3. S. Adve and H.-J. Boehm, Memory Models: A Case for Rethinking Parallel Languages and Hardware. CACM Vol. 53, No. 8. (2010).
4. M. Herlihy and N. Shavit, The Art of Multiprocessor Programming. Morgan Kaufman. (2012).
5. M. Batty, S. Owens, S. Sarkar, P. Sewell, and T. Weber, Mathematizing C++ Concurrency. In POPL. (2011).
6. The open sourcing of seL4: http://www.sel4.systems/.
7. The CompCert C Compiler: http://compcert.inria.fr/compcert-C.html.
8. VCC, mechanical verifier for concurrent C programs: http://vcc.codeplex.com/.
Nearly all applications that deal with financial, privacy, safety, or defense concerns include some form of access control. Access control is concerned with determining the allowed activities of legitimate users, mediating every attempt by a user to access a resource in the system. Access control policies are high-level requirements that specify how access is managed and who may access information under what circumstances [1]. For instance, policies may pertain to resource usage within or across organizational units or may be based on need-to-know, competence, authority, obligation, or conflict-of-interest factors.
Access control policies are increasingly written in policy specification languages such as XACML [2] and enforced through a Policy Decision Point (PDP) implementation. Example PDP implementations include Sun XACML [3] and XACML .NET [4]. As a result, the correctness of specified access control policies plays a crucial role in assuring adequate security. One technique for increasing confidence in the correctness of access control policies is systematic policy testing, with requests as test inputs and responses as test outputs.
There are two fundamental problems in systematic policy testing: (1) coverage criteria, such as structural coverage criteria [5], to decide the adequacy of test inputs and when to stop testing, and (2) request generation to satisfy the coverage criteria and detect faults in policies.
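For illustration only, the sketch below generates access requests over the cross-product of attribute values and evaluates them against a toy rule set, recording which rules each request exercises. The policy and attributes are hypothetical, far simpler than XACML, and do not represent an ACPT interface.

```python
# Illustrative sketch only: request generation and rule-coverage bookkeeping.
import itertools

roles = ["doctor", "nurse", "clerk"]
resources = ["record", "billing"]
actions = ["read", "write"]

# Ordered rule list: first match wins; last rule is a default deny.
rules = [
    ("r1", lambda r, res, a: r == "doctor" and res == "record", "permit"),
    ("r2", lambda r, res, a: r == "nurse" and res == "record" and a == "read", "permit"),
    ("r3", lambda r, res, a: res == "billing" and r != "clerk", "deny"),
    ("default", lambda r, res, a: True, "deny"),
]

def decide(role, resource, action):
    for rule_id, matches, effect in rules:
        if matches(role, resource, action):
            return rule_id, effect
    raise AssertionError("unreachable: default rule always matches")

covered = set()
for role, resource, action in itertools.product(roles, resources, actions):
    rule_id, effect = decide(role, resource, action)
    covered.add(rule_id)
    print(f"{role:6} {resource:7} {action:5} -> {effect} ({rule_id})")

print("rules never exercised:", {r[0] for r in rules} - covered or "none")
```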
To provide high security confidence levels for the nation’s critical IT infrastructure, it is important to provide a tool that can thoroughly and automatically check access control (AC) policies for syntactic and semantic faults before they are deployed for operation. NIST’s Access Control Policy Tool (ACPT) [6] provides (1) GUI templates for composing AC policies, (2) property checking for AC policy models through an SMV (Symbolic Model Verification) model checker, (3) complete test suites generated by NIST’s combinatorial testing tool ACTS [7], and (4) XACML policy generation as output of the verified model. Through these four major functions, ACPT performs syntactic and semantic verification and provides the interface for composing and combining AC rules into AC policies; it helps assure that specified AC policies are efficient and reduces the chance of deploying faulty AC policies that leak private information or prohibit legitimate information sharing. To enhance usability beyond ACPT, a tool with additional user interfaces, policy importing, and general security-property specification capabilities needs to be developed.
In addition to the fundamental functions for verification and testing of access control models provided by ACPT, advanced features are required for more capable and flexible use of the access control policy tool. NIST seeks development of additional access control policy test capabilities, which may include: (1) an easy and general user interface for policy property specification, (2) specification of object attribute inheritance relations, (3) more granular property verification results, (4) an API or mechanism for acquiring or consuming information about user attributes, resources, environment, and inheritance relations, (5) more policy combination algorithms, (6) import of XACML policies, and (7) operation under OS X and Linux in addition to Windows. (Awardees are encouraged to use NIST’s ACPT as the baseline system for the project; the ACPT web page and source code are available at http://csrc.nist.gov/groups/SNS/acpt/acpt-beta.html.)
Phase I activities:
Plan, specification and design for the access control policy tool development.
Phase I expected results:
Completed development plan, specification, and design including test plan for the proposed capabilities.
Phase II activities:
Code development, documentation, and testing of the access control policy tool capabilities.
Phase II expected results:
A robust beta version of the access control policy tool that contains the proposed enhanced capabilities, documentation for the code and a user manual, and testing results that verify the completeness of the development.
In addition to ACPT source code, NIST would provide consultation, input, and discussion with the awardee to help with the evaluation of the proposed development.
References:
1. V. C. Hu, D. F. Ferraiolo, and D. R. Kuhn, Assessment of Access Control Systems, NIST IR 7316, Computer Security Division, Information Technology Laboratory, National Institute of Standards and Technology. (September 2006).
2. XACML. http://www.oasis-open.org/committees/xacml/.
3. Sun XACML. http://sunxacml.sourceforge.net/.
4. XACML .NET. http://mvpos.sourceforge.net/.
5. V. C. Hu, D. R. Kuhn, T. Xie, J. Hwang, Model Checking for Verification of Mandatory Access Control Models and Properties, Int'l Journal of Software Engineering and Knowledge Engineering (IJSEKE) regular issue IJSEKE Vol. 21 No. 1. (2011).
6. Computer Security Resource Center, Access Control Policy Tool (ACPT): http://csrc.nist.gov/groups/SNS/acpt/index.html.
7. Computer Security Resource Center, Automated Combinatorial Testing for Software (ACTS): http://csrc.nist.gov/groups/SNS/acts/.
Email coming from spoofed senders (referred to as phishing) is often used to inject malware into an enterprise. Several new technologies have been proposed to combat phishing by having the sending domain state its sending policy and vouch for sent email [1] [2]. Recently, a consortium of email providers developed a new technique called Domain-Based Message Authentication, Reporting and Conformance (DMARC) [3] [4] that allows sending domains to advertise their anti-phishing policies and provides a means to report global validation results of those policies back to the sending domains. This enables the sender to see how its anti-phishing techniques are interpreted by outside receivers as well as receive alerts if a phishing campaign has been launched using their domain as the spoofed sender.
DMARC has two key functions: First, the ability to authoritatively state the email sending policy and security technologies in use for a sender, and second, to set up a means to receive reports from recipients about email streams and (possible) intercepted phishing attacks. This standardized way of stating where to send abuse reports is very powerful and allows a domain to be alerted when a third party (i.e. an attacker) is spoofing their domain when sending email to a victim. This reporting allows for a more coordinated response and aids in investigation of the incident.
Currently, there is a lack of tools available for domain owners to parse and extract data from forensic reports. A phishing campaign conducted against one or several organizations could consist of hundreds or thousands of spoofed emails. These emails could come from various sources and forensic reports could identify the possible origin of the attacks. Forensic reports from different sources could be used to identify botnets in use and other intelligence that can be used to stop the campaign as quickly as possible.
The goal of the project is to design, develop, and test a tool that takes multiple forensic reports (possibly stored in a database) and extracts statistics and any other useful intelligence. The tool should have a dashboard front end that gives the user a quick summary of current or recent sets of forensic reports as well as alleged sources and targets. The tool should allow a user to query for a set of reports based on date(s), target, or other relevant criteria, and should allow for exporting of data and reports.
Tools to do other functions of DMARC are out of the scope for this project, but the tool may need to integrate into other DMARC reporting tools.
Phase I consists of designing and documenting a database and the associated processes and algorithms used to parse and analyze a collection of forensic reports. DMARC forensic reports are sent as XML-formatted files, but they could be stored in a different format that allows for more flexibility in searching if necessary. Phase I also includes designing and developing a dashboard front end for the tool that presents a visualization of key statistics and allows a user to perform searches over the report database.
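For illustration only, the sketch below shows the kind of parse-and-summarize step such a tool might perform, assuming the forensic reports have been normalized into a simple XML layout; the element names are hypothetical and do not follow the DMARC report format.

```python
# Illustrative sketch only: summarizing a batch of normalized reports.
import xml.etree.ElementTree as ET
from collections import Counter

sample_report = """<reports>
  <report>
    <arrival>2015-03-01T12:04:00Z</arrival>
    <source_ip>192.0.2.15</source_ip>
    <header_from>example.gov</header_from>
    <disposition>reject</disposition>
  </report>
  <report>
    <arrival>2015-03-01T12:09:30Z</arrival>
    <source_ip>192.0.2.15</source_ip>
    <header_from>example.gov</header_from>
    <disposition>quarantine</disposition>
  </report>
</reports>"""

def summarize(xml_text):
    root = ET.fromstring(xml_text)
    sources = Counter(r.findtext("source_ip") for r in root.iter("report"))
    targets = Counter(r.findtext("header_from") for r in root.iter("report"))
    return {"reports": sum(sources.values()),
            "top_sources": sources.most_common(5),
            "spoofed_domains": targets.most_common(5)}

print(summarize(sample_report))
```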
Phase II consists of more robust development of the report analyzing tool and user front end. The front end should be cross platform or accessed via a web browser.
NIST will be available to act as subject matter experts in DMARC and the Domain Name System (DNS) as needed. NIST would also be available to help test the developed algorithm or system as necessary.
References:
1. S. Kitterman, Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1. RFC 7208. (April 2014).
2. E. Allman, J. Callas, M. Delany, M. Libby, J. Fenton, and M. Thomas, DomainKeys Identified Mail (DKIM) Signatures. RFC 4871. (May 2007).
3. M. Kucherawy and E. Zwicky. Domain-based Message Authentication, Reporting and Conformance (DMARC). Work in Progress. (Dec 2014).
4. Peterson, Alec, How DMARC is Saving Email: The New Authentication Standard Putting an End to Email Abuse. http://www.messagesystems.com/pdf/message-systems-ebook-how-dmarc-is-saving-email.pdf. (2013).
The majority of small offices and homes connect to the Internet through a so-called small office/home office (SOHO) router. These systems run their own embedded software and provide basic network services, both wireless and wired, for a small network. This embedded software is usually proprietary to the manufacturer, but open source SOHO router software exists [1][2]. These open source implementations often use (modified) well-known software packages to provide common network functions for a home network.
Recent security research has been focused on addressing vulnerabilities in embedded systems through penetration testing [3]. While very useful, penetration testing may not fully exercise all the individual components that are part of the router firmware. A more thorough testing of the open source components may be able to uncover and address issues like "Heartbleed" that compromised every system that relied on OpenSSL for Transport Layer Security (TLS) [4].
While there has been some recent research focused on assessing the vulnerabilities in SOHO routers [3], there has been little research or development focused on how SOHO routers could significantly improve the security posture of home networks. Customer premises/SOHO routers can serve as innovation platforms, implementing leading-edge security services on behalf of the wide range of systems (e.g., computers, appliances, sensors) that they connect to the Internet. For example, a router can detect that a secure connection to a particular website or service is possible and proxy a secure connection to that site, even though the originating host did not choose to use a secure connection. In effect, the router acts as a man-in-the-middle (MITM), except that this MITM improves overall security rather than launching attacks.
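As a minimal sketch of the decision step such a proxy might take, the example below (using only the Python standard library) probes whether a destination actually offers TLS with a certificate that validates before any upgrade is attempted. Interception and proxying of the original flow are not shown, and the host name is only an example.

```python
# Illustrative probe only: does the destination offer a validating TLS endpoint?
import socket
import ssl

def tls_available(host, port=443, timeout=3.0):
    """Return True if host presents a certificate that validates for its name."""
    context = ssl.create_default_context()   # system trust store, hostname checking on
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except (OSError, ssl.SSLError):
        return False

if __name__ == "__main__":
    host = "www.nist.gov"   # example destination only
    action = "upgrade to TLS proxy" if tls_available(host) else "pass through unchanged"
    print(f"{host}: {action}")
```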
The new SOHO router capabilities of interest include:
- Ability to intercept and provide opportunistic encryption for protocols that can optionally use TLS (e.g. HTTP), but are instead operating in an insecure mode.
- Support of DANE (DNS-based Authentication of Named Entities) [5] techniques for discovery, retrieval and validation of keying material.
- Ability to act as a Domain Name System Security Extensions (DNSSEC) validating cache.
- Ability to utilize DNS privacy techniques such as query minimization.
- Full support of IPv6 [6] and HomeNet [7] router standards.
- Full support for current requirements [8] for customer edge routers.
- Ability to intercept traffic going to known malware sites to prevent data exfiltration and/or infected host systems from communicating with Command & Control sites.
The goal of the project is to design, develop, and test a set of security tools that are part of the embedded system of a home router. These tools will allow the home router to act as a security proxy to protect insecure systems on a home network. The tools should require little or no bootstrap configuration and should work without user action (although a user could alter the configuration if desired).
Phase I consists of identifying a suitable open source SOHO router implementation and performing a full security analysis of the candidate software. Any vulnerability discovered should be documented and reported as necessary (e.g., to the NIST National Vulnerability Database (NVD) if found to be a vulnerability in a widely used package). The second part is to produce a plan for including the new security services described above in home router firmware. A proof-of-concept version of the code should be developed to ensure correctness and for testing, but it does not need to be part of the embedded software in the home router at this stage.
Phase II consists of more robust development of the code based on the architecture developed in Phase I. A new branch of the candidate open source router firmware should be used as the code base.
NIST will be available to act as subject matter experts in DNS, DNSSEC and DANE as needed. NIST would also be available to help set up testing and evaluation of any algorithm or finished code as necessary.
References:
1. OpenWrt, a Linux distribution for embedded devices: https://openwrt.org/.
2. DD-WRT, a Linux based alternative OpenSource firmware: http://www.dd-wrt.com/site/index.
3. Constantin, Lucian, Fifteen New Vulnerabilities Reported During Router Hacking Contest, PC World. (August 12, 2014). http://www.pcworld.com/article/2464300/fifteen-new-vulnerabilities-reported-during-router-hacking-contest.html.
4. Wheeler, David. How to Prevent the next Heartbleed. (April 2014). http://www.dwheeler.com/essays/heartbleed.html.
5. Dane Status Pages, DNS-based Authentication of Named Entities: https://tools.ietf.org/wg/dane/.
6. Singh, H., et al., RFC 6204, Basic Requirements for IPv6 Customer Edge Routers: http://www.rfc-editor.org/rfc/rfc6204.txt. (April 2011).
7. Home Networking (HomeNet) specifications: https://datatracker.ietf.org/wg/homenet/documents/.
8. Woodyatt, J., RFC 6092, Recommended Simple Security Capabilities in Customer Premises Equipment (CPE) for Providing Residential IPv6 Internet Service: http://www.rfc-editor.org/rfc/rfc6092.txt. (2011).
New medical diagnostic tests, improved quality and cost-effectiveness of electronic health records, reference materials for laboratory test methods, and faster screening of promising vaccines are a few of the many areas in which National Institute of Standards and Technology (NIST) research serves the needs of the bioscience and health care community. NIST collaborates extensively with other federal and private organizations to provide the new measurement and standards methods, tools, and data needed to advance biosciences and health research. Bioscience and health topics represent a large portion of that work.
Broadband Coherent Anti-Stokes Raman Scattering (BCARS) [1-5] offers noninvasive, label-free, three-dimensional chemical imaging of materials and biological systems. Single-frequency Coherent Anti-Stokes Raman Scattering (CARS) microscopy, introduced in 1999, also offers 3D label-free imaging, but does not provide the chemical sensitivity that is so important in complex biological systems.
Discrimination of critical chemical composition changes in biological tissues (e.g., for diagnostics) and analysis of complex materials require spectral data over a wide frequency range. The spectral range in the vibrational fingerprint region (500 cm-1 to 1800 cm-1) offered by BCARS makes possible both label-free imaging and the chemical sensitivity needed to fully characterize subtle changes in biological and material systems that accompany processes of interest such as disease progression and cellular differentiation.
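For concreteness, the Stokes wavelength generated by a Raman shift \tilde{\nu} from a pump at wavelength \lambda_p is \lambda_s = (1/\lambda_p - \tilde{\nu})^{-1}. Assuming, purely for illustration, a pump near 1000 nm (10 000 cm-1), the fingerprint window maps to Stokes wavelengths of roughly

    \lambda_s(500\ \mathrm{cm^{-1}}) = \frac{1}{9500\ \mathrm{cm^{-1}}} \approx 1053\ \mathrm{nm}, \qquad \lambda_s(1800\ \mathrm{cm^{-1}}) = \frac{1}{8200\ \mathrm{cm^{-1}}} \approx 1220\ \mathrm{nm}.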
Recent innovations in experimental technique and improvements in data analysis have brought the possibility of widespread use of BCARS much closer. The new techniques developed by NIST are based on a “3-color” CARS signal generation mechanism that relies on intrapulse generation of vibrational coherence in the sample [6], producing signals in the fingerprint spectral region with unprecedented strength and clarity. While this new approach is very powerful, it requires advanced performance from the laser system and significant complexity in the detection scheme. Commercialization of such a BCARS system is presently untenable due to the complexity of the laser sources and supporting apparatus.
In order to remove a crucial barrier to commercialization and wide dissemination of BCARS microscopy, proposals are invited for design studies of a user-friendly laser source for a time-domain coherent Raman spectroscopy instrument. The characteristics of the desired laser source are given below:
NIST seeks development of a laser that would produce two broadband laser pulses (bandwidth of > 3000 cm-1) with spectral content between 930 nm and 1400 nm, with repetition rates that are mismatched by a precise and time-invariant offset. Highly stabilized dual-comb laser systems are available commercially and are used for applications such as optical clocks; however, they are very expensive and operate at wavelengths unsuitable for coherent Raman imaging. The system sought here would have relaxed specifications for the absolute repetition rate and could be manufactured at much lower cost than the commercially available comb lasers. The availability of such a laser would also make performing BCARS microscopy straightforward.
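As a quick consistency check of these numbers (a worked example only, not an additional requirement), the spectral span from 930 nm to 1400 nm corresponds to a bandwidth of

    \Delta\tilde{\nu} = \frac{10^{7}\ \mathrm{nm/cm}}{930\ \mathrm{nm}} - \frac{10^{7}\ \mathrm{nm/cm}}{1400\ \mathrm{nm}} \approx 10753\ \mathrm{cm^{-1}} - 7143\ \mathrm{cm^{-1}} \approx 3610\ \mathrm{cm^{-1}},

comfortably above the > 3000 cm-1 bandwidth specification.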
The goals of the proposed work are to demonstrate feasibility for design and construction of a single turnkey laser source that would be suitable for time-domain coherent Raman spectroscopy, and to demonstrate its reliability.
Phase I activities and expected results: Design a dual-pulse laser source to have the following characteristics:
Two trains of broadband, compressible laser pulses, each with a repetition rate of approximately 500 MHz but with a fixed repetition rate mismatch of 0.1 MHz (see the worked timing example after this list), where each pulse train contains pulses that:
- Have > 3000 wavenumbers of spectral bandwidth between 930 nm and 1400 nm
- Are compressible to near the transform limit (< 10 fs)
- Have average power of at least 150 mW
Stability of the difference in repetition rates of the two pulse trains should be better than 0.001 over a 1-hour period.
Demonstrate that individual components of the laser system can function independently.
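As a worked timing example of what this repetition rate mismatch implies (a sketch of the asynchronous-sampling arithmetic, not an additional requirement), take f_1 ≈ 500 MHz and f_2 = f_1 + Δf with Δf ≈ 0.1 MHz. Successive pulse pairs then walk apart in time by

    \Delta t = \frac{1}{f_1} - \frac{1}{f_2} = \frac{\Delta f}{f_1 f_2} \approx \frac{0.1\ \mathrm{MHz}}{(500\ \mathrm{MHz})^2} \approx 0.4\ \mathrm{ps},

so the full interpulse window of 1/f_1 = 2 ns is swept once every 1/Δf = 10 μs, i.e., at a scan rate of about 100 kHz.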
Phase II activities and expected results: Assemble and test the laser system for the following:
- Pulse compressibility
- Pulse power stability
- Repetition rate mismatch stability
NIST will be available for consultation on progress and performance.
References:
1. T. W. Kee and M. T. Cicerone, Simple Approach to One-Laser, Broadband Coherent Anti-Stokes Raman Scattering Microscopy, Opt. Lett. 29, 2701-2703. (2004).
2. Y. J. Lee, Y. Liu, and M. T. Cicerone, Characterization of 3-Color CARS in a 2-Pulse Broadband CARS Spectrum, Opt. Lett. 32, 3370-3372. (2007).
3. Y. J. Lee and M. T. Cicerone, Vibrational Dephasing Time Imaging by Time-Resolved Broadband Coherent Anti-Stokes Raman Scattering Microscopy, Appl. Phys. Lett. 92, 041108. (2008).
4. Y. X. Liu, Y. J. Lee, and M. T. Cicerone, Broadband CARS Spectral Phase Retrieval Using a Time-Domain Kramers-Kronig Transform, Opt. Lett. 34, 1363-1365. (2009).
5. S. H. Parekh, Y. J. Lee, K. A. Aamer, and M. T. Cicerone, Label-Free Cellular Imaging by Broadband Coherent Anti-Stokes Raman Scattering Microscopy, Biophys. J. 99, 2695-2704. (2010).
6. J. J. Camp, Y. J. Lee, J. M. Heddleston, C. M. Hartshorn, et al., High-Speed Coherent Raman Fingerprint Imaging of Biological Tissues, Nat. Photon. 8, 627-634. (2014).
This is the main research area; please review the subtopics for a more detailed description of the available funding topics.
NIST has numerous technologies that require additional research and innovation to advance them to a commercial product. The goal of this SBIR subtopic is for small businesses to advance NIST technologies to the marketplace. The Technology Partnerships Office at NIST will provide the Awardee with a no-cost research license for the duration of the SBIR award. When the technology is ready for commercialization, a commercialization license will be negotiated with the Awardee.
Applications may be submitted for the development of any NIST-owned technology that is covered by a pending U.S. non-provisional patent application or by an issued U.S. patent. Available technologies can be found on the NISTTech website http://tsapps.nist.gov/techtransfer/ and are identified as “available for licensing” under the heading “Status of Availability.” Some available technologies are described as being available only non-exclusively, meaning that other commercialization licenses may already exist or that the technology is a joint invention between NIST and another institution. More information about licensing NIST technologies is available at http://www.nist.gov/tpo/Licensing.cfm.
The technical portion of an application should include a technical description of the research that will be undertaken. Within this technical portion, the applicant should also provide a brief description of a plan to manufacture the commercial product developed using the NIST technology. Absence of this manufacturing plan will make the application less competitive.