
NIST Small Business Innovation Research (SBIR) Program Federal Funding Opportunity (FFO)

Agency: Department of Commerce
Program/Year: SBIR / 2014
Solicitation Number: 2014-NIST-SBIR-01
Release Date: February 19, 2014
Open Date: February 19, 2014
Close Date: May 2, 2014
9.01: Cyber-Physical Systems : Residential Heat Pump Fault Detection and Diagnostic Datalogger

A low-cost, modular system for monitoring and controlling a process or environment is applicable to a wide variety of commercial and consumer-based endeavors. The ability to measure a quantity and then effect a change to control that quantity, or a related quantity, is highly desirable in innumerable scenarios, whether on the manufacturing plant floor or within the home. This type of monitoring and control functionality is being popularized by smartphone-based hardware such as the Nest Thermostat and other home automation technology. Sophisticated monitoring tools integrated with process control are already applied in the manufacturing environment, but for some purposes a commercial-type system is overkill. The refinement of home automation technology into a hardware and software tool focused on monitoring a residential heat pump would have broad application for the millions of heat pumps already installed in the United States, and would advance the development of more refined measurement tools for similarly complicated home appliances.

A primary objective of this research topic area is to develop fault detection and diagnosis (FDD) methods for residential heat pumps. In particular, the development of tools that use artificial intelligence, deductive modeling, and statistical methods to automatically detect and diagnose deviations between actual and optimal system performance is necessary to accelerate market penetration of heat pump FDD technology. Historical work has focused on vapor-compression heat pumps and rooftop/packaged systems [1-3]; however, the ability to detect and diagnose the cause of faults remains poorly understood. In particular, the following two questions must be answered to advance the FDD state of the science:

1) What is the true economic and energetic cost of a particular fault? and

2) What is the statistical prevalence of important faults?

A recent NIST project examining correlations of the effects of different faults in a yearly simulation for two home types in five climate zones has provided insight into this topical area; however, additional research to fully understand the economic and energetic cost of faults is necessary.

With respect to the second question, previous studies in California and other western states report various faults such as refrigerant charge and indoor airflow [4]. However, a larger sampling of data from different climate zones, with some degree of statistical certainty, is necessary. In particular, the development of a measurement tool to gather data from a large portfolio of residential systems, e.g., a datalogging tool, will advance FDD methods. An optimum tool should be easy to install with minimal invasiveness; economical and scalable; and able to communicate its data over wireless and wired networks. Moreover, an optimum tool should leverage existing home automation technologies, use a modular design that allows for the expansion of measurement inputs, and be applicable to a broad range of applications monitoring varied pieces of equipment, processes, and/or environments.

An optimum tool is envisioned as being installable on both indoor and outdoor units, and should withstand the environmental conditions associated with each location. Minimum functionality of such a device comprises: 1) a plurality of temperature measurement nodes distributed inside and outside of a building/residence that is using a heat pump; 2) a plurality of pressure transducers suitable for heat pump operation; 3) a plurality of suitable electrical measurement devices for both voltage and current, deployable at suitable locations relative to fault occurrence; 4) appropriate analog-to-digital conversion; and 5) suitable communications capabilities (e.g., wireless 3G/4G cellular, 802.11g, and hardwired TCP/IP). The optimum tool should be able to store data over long periods of operation, provide adequate data security, and preserve data in the event of power failure.
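As a rough illustration of the minimum functionality listed above, the sketch below bundles one polling cycle of sensor readings into a record a datalogger could store locally or transmit. All field names and sensor locations are illustrative assumptions, not requirements of this solicitation.

```python
import json
import time

def make_sample_record(temps_c, pressures_kpa, volts, amps):
    """Bundle one polling cycle of hypothetical sensor readings into a
    timestamped JSON record suitable for later FDD analysis."""
    return json.dumps({
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "temperature_c": temps_c,        # distributed temperature nodes
        "pressure_kpa": pressures_kpa,   # e.g., suction and discharge lines
        "voltage_v": volts,              # electrical measurements at
        "current_a": amps,               # fault-relevant locations
    })

record = make_sample_record(
    temps_c={"outdoor_ambient": 2.5, "suction_line": -1.2, "supply_air": 38.0},
    pressures_kpa={"suction": 410.0, "discharge": 1900.0},
    volts={"compressor": 238.1},
    amps={"compressor": 11.4},
)
print(record)
```

A self-describing record format like this keeps the design modular: adding a measurement input only adds a key, which matches the expandability requirement above.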

The product of this SBIR project will be an FDD datalogger tool that allows detailed performance data collection on residential heating and air conditioning equipment. This tool should be applicable to a broad set of residential heat pumps in order to maximize its applicability to collecting a wide range of field performance data. Data collection efforts will be designed with the aid of NIST statisticians to ensure that the goal of determining fault prevalence is met with a known degree of uncertainty.

Applicants should have access to a broad range of technical experts in consumer electronics, air conditioning, and related fields; these experts are expected to provide input toward the development of a practical FDD data-logging tool. This input should yield an effective device design at a cost deemed reasonable relative to the cost of developing the tool.

Phase I activities and expected results:
The awardee shall create a design that performs all of the necessary functions described above. If all functions cannot be directly incorporated, the awardee shall redefine the criteria to something buildable while meeting most of the previously listed requirements.

Phase II activities and expected results:
In Phase II of the SBIR project, the awardee shall construct three prototypes of the Phase I design for evaluation in the field, and conduct field testing. NIST personnel will be involved with field testing. During these prototype tests, the awardee shall work to improve its interfaces, perfect communication protocols/software tools, and refine the design to remove any flaws.

On a case-by-case basis, NIST may provide technical experts to work with Phase I and Phase II awardees for consultations and discussions to answer design questions and clarify any other technical aspects within the field of expertise.

1. Breuker, M.S. and Braun, J.E., 1998, “Common faults and their impacts for rooftop air conditioners,” International Journal of Heating, Ventilating, Air-Conditioning and Refrigerating Research, Vol. 4, No. 3, pp. 303-318.
2. Li, H., 2004, A Decoupling-Based Unified Fault Detection and Diagnosis Approach for Packaged Air Conditioners, Ph.D. Thesis, Purdue University, West Lafayette, IN.
3. Rossi, T.M. and Braun, J.E., 1997, “A statistical, rule-based fault detection and diagnostic method for vapor compression air conditioners,” International Journal of Heating, Ventilating, Air-Conditioning and Refrigerating Research, Vol. 3, No. 1.
4. Proctor, J., 2004, “Residential and Small Commercial Central Air Conditioning; Rated Efficiency isn’t Automatic,” Presentation at the Public Session, ASHRAE Winter Meeting, January 26, Anaheim, CA.
Point of Contact:
Mary Clague

9.02: Cybersecurity : Cryptographic Acceleration for Border Gateway Protocol Security (BGPSEC)

The Border Gateway Protocol (BGP) was initially developed in 1989 (RFC 1105 [1]) and last refined in 2006 to its current version, BGP-4 (RFC 4271 [2]). Because the protocol itself does not provide any notion of security, more and more successful attacks against it have been witnessed in recent years [3][4]. The Internet Engineering Task Force (IETF) is currently developing two mechanisms intended to protect BGP against attacks (malicious or due to misconfiguration) such as route hijacks and redirects (i.e., AS-path modifications). These security mechanisms, based on a Resource Public Key Infrastructure (RPKI), are called route-origin validation [5][6] and path validation [7]. The new security-enhanced BGP together with these two mechanisms is known as BGPSEC (BGP with Security) [7].

BGPSEC relies heavily on cryptographic processing because it involves cryptographically signing and verifying BGP update messages. In essence, BGPSEC requires real-time, line-speed cryptographic operations on BGP path announcements, and hence poses many challenges in today’s routing landscape. The router control-plane hardware currently deployed in the field is not up to the task, lacking adequate processing power for the cryptography-intensive computations in BGPSEC. The global Internet BGP routing table consists of approximately 500,000 routes (i.e., unique announced prefixes), with a re-convergence requirement of approximately 15 to 20 minutes following router reboots. The current lack of any special-purpose hardware for cryptographic processing in routers poses a challenge for hardware vendors.
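The convergence figures above imply a concrete cryptographic budget, which a back-of-the-envelope sketch makes explicit. Each BGPSEC update carries one signature per AS hop, so the average AS-path length multiplies the verification count; the value of 4 below is an illustrative assumption, not a figure from this solicitation.

```python
# Verification budget implied by re-validating a full ~500,000-route
# table within the stated 15-20 minute convergence window.
ROUTES = 500_000
AVG_AS_PATH_LEN = 4  # assumption for illustration only

for minutes in (15, 20):
    verifications = ROUTES * AVG_AS_PATH_LEN
    rate = verifications / (minutes * 60)
    print(f"{minutes} min budget: {rate:,.0f} signature verifications/s sustained")
```

Even under this modest assumption the sustained rate runs to thousands of public-key verifications per second, which is the load the specialized cryptographic processors discussed below would need to absorb.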

NIST is heavily involved in the design of the new BGPSEC protocol (i.e., route-origin and AS-path validation), and has developed a prototype validation software reference implementation [8]. There is a critical need to research and develop mechanisms to use embedded or off-board specialized cryptographic processors (e.g., any that may be available off-the-shelf from vendors such as Intel, AMD, Cavium, etc.) in routers so that future BGPSEC-enabled routers will be well equipped to handle the required cryptographic-processing loads. Special consideration needs to be given to router failure-recovery scenarios, when speed of convergence would be most critical. Cryptographic hardware can be combined with innovative BGPSEC processing optimization algorithms to further improve performance. It is important that cryptographic acceleration can be added to secure routers in a cost-effective manner. Additional problems that also need to be addressed are minimization of power consumption and efficient use of available memory, since there may be limitations on increasing memory size in routers.

The goal of the project is to research and develop an efficient router platform that is capable of performing the cryptographic operations associated with the evolving BGPSEC protocol. The resulting router would perform said cryptographic processing at nearly line speed even under router failure-recovery scenarios so that the speed of BGP convergence can meet operators’ stringent requirements while still maintaining security. The resulting router should be economically viable and commercially deployable in the near future.  

Phase I activities and expected results:
Phase I consists of incorporating an embedded or off-board cryptographic accelerator onto a PC-router platform using the NIST BGP-SRx [8] reference implementation. An optimum design would enable the addition of cryptographic hardware accelerators to a PC platform to completely off-load all cryptographic operations from the BGP-SRx software prototype. The selection of cryptographic accelerator hardware and software interfaces to BGP-SRx should be included in Phase I.

Phase II activities and expected results:

Phase II consists of developing and testing a prototype for the PC-router mentioned above in Phase I activities. The prototyped router should open the doors for commercial deployment of BGPSEC, including route-origin validation and AS-path validation.

NIST will be available for providing BGPSEC validation algorithms to be used in the router implementation as well as to assist in designing the prototype framework. The BGPSEC protocol standard has been only partially developed in the IETF; the route-origin validation is specified while the AS-path validation is still work-in-progress.

On a case-by-case basis, NIST may provide technical experts to work with Phase I and Phase II awardees to provide assistance in keeping the awardee abreast of the latest developments in the IETF and network operator community regarding the BGPSEC protocol. NIST may also provide expertise to assist with the design of the integration into today’s routing systems.

1. K. Lougheed and Y. Rekhter, RFC 1105, “A Border Gateway Protocol (BGP),” June 1989.
2. Y. Rekhter, T. Li, and S. Hares, RFC 4271, “A Border Gateway Protocol 4 (BGP-4),” January 2006.
3. Karrenberg, D., “YouTube and Pakistan Telecom,” Video presentation on YouTube, February 2008.
4. Cowie, J., “China’s 18-Minute Mystery,” Online Report from Renesys Corp., November 18, 2010.
5. Huston, G., Michaelson, G., and Kent, S., “Resource Certification - A Public Key Infrastructure for IP Addresses and AS’s,” Proc. of the IEEE Globecom Workshops, November 2009.
6. P. Mohapatra, J. Scudder, D. Ward, R. Bush, and R. Austein, RFC 6811, “BGP Prefix Origin Validation,” January 2013.
7. M. Lepinski (Ed.), draft-ietf-sidr-bgpsec-protocol-07, “BGPSEC Protocol Specification,” February 25, 2013.
8. NIST BGP Secure Routing Extension (BGP-SRx).
Point of Contact:
Mary Clague

Privacy Preserving Tools for Federated Authentication Models

Legacy Internet communication protocols were designed for secure communication in the Dolev-Yao model. This model consists of two communicating parties and an adversary who can overhear, intercept, and synthesize any message. In this paradigm, the legitimate communicating parties send messages only to each other; no messages are sent to the third party, an adversary intent on preventing the legitimate parties from achieving their goal.

In the last few decades, software and standards have been developed which satisfactorily solve the above problem. Current digital transactions, however, occur in a model that is very different from Dolev-Yao. Specifically, third parties are not necessarily malicious and, in fact, often are an important part of a multi-party communication protocol. These protocols seek to enhance the digital world. In particular, they aim at facilitating electronic commerce and other transactions in a cooperative, rather than adversarial, model. Of course, we cannot simply assume bad actors away. The next generation of protocols needs to replace “distrust” by “trust but verify,” and it seeks to use novel Internet technologies (see, for example, the NIST Beacon).

In the modern Internet world, with its myriad players - customers, standards bodies, industry, governments, privacy advocates, and many more - it will be hard to effect this transition. But transition we must if we are to realize the potential of the Internet for improving our quality of life and, from the United States perspective, our competitiveness.

However, industry and other actors often resist modification to their deployed technologies. This subtopic is about overcoming this critical barrier by focusing on test cases that are representative of many scenarios and specific enough to allow engineering of working solutions. To maximize the probability of successful commercialization and adoption by industry, these solutions should leverage existing Dolev-Yao protocols and standards by either using them as black-box primitives or implementing minimal changes to them.

A primary objective is to develop tools that solve remote authentication, identification, and attribute disclosure problems (e.g., JSON, SAML, OpenID Connect, OAuth). A representative problem is the “brokered identity problem,” in which there are identity and attribute providers that, due to privacy considerations, must issue assertions without knowing who the consumer of the assertion is. For example, an attribute verifier does not need to know what application the user is attempting to access. Signed assertions are issued and sent to a broker, who in turn forwards them to the assertion consumer, typically a service provider. It is fairly straightforward to use two-party protocols such as SAML to solve this problem if we are willing to allow the broker to read the assertions. However, it is more complicated to solve this problem under the so-called “honest but curious model” in which the broker follows the protocols but anything that it learns becomes public knowledge. The recipient will specifically develop and test working technologies that solve attribute disclosure problems in a multi-party authentication architecture for privacy preserving protocols outside the Dolev-Yao model.
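The brokered-identity flow described above can be sketched in a small toy program. Real deployments would use public-key signatures and encryption per standards such as SAML or JOSE; the HMAC keys and the XOR "sealing" below are stand-ins meant only to show why an honest-but-curious broker learns nothing readable.

```python
import base64
import hashlib
import hmac
import json
import secrets

idp_signing_key = secrets.token_bytes(32)       # toy IdP<->consumer key
consumer_sealing_key = secrets.token_bytes(32)  # unknown to the broker

def seal(payload: bytes, key: bytes) -> bytes:
    """Toy 'encryption': XOR with a SHA-256 keystream. NOT real crypto."""
    stream = hashlib.sha256(key).digest() * (len(payload) // 32 + 1)
    return bytes(a ^ b for a, b in zip(payload, stream))

def idp_issue(attributes: dict) -> dict:
    """Identity/attribute provider: seal the assertion, then sign it."""
    body = seal(json.dumps(attributes).encode(), consumer_sealing_key)
    sig = hmac.new(idp_signing_key, body, hashlib.sha256).hexdigest()
    return {"assertion": base64.b64encode(body).decode(), "sig": sig}

def broker_forward(message: dict) -> dict:
    """The broker sees only an opaque blob plus a signature; anything it
    learns here is assumed to become public, so nothing readable leaks."""
    return message

def consumer_accept(message: dict) -> dict:
    """Service provider: verify the signature, then unseal the assertion."""
    body = base64.b64decode(message["assertion"])
    expect = hmac.new(idp_signing_key, body, hashlib.sha256).hexdigest()
    assert hmac.compare_digest(expect, message["sig"]), "bad signature"
    return json.loads(seal(body, consumer_sealing_key).decode())

attrs = consumer_accept(broker_forward(idp_issue({"age_over_21": True})))
print(attrs)  # -> {'age_over_21': True}
```

The point of the sketch is structural: the signature travels in the clear so the broker can route and audit, while the attribute payload is opaque to it, matching the "honest but curious" requirement above.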

Phase I activities and expected results:
  1. Pick one or more representative problems;
  2. Research solutions from the cryptographic literature; and/or
  3. Choose candidate techniques and carry out a preliminary assessment of how these choices impact feasibility vis-à-vis compatibility with existing standards and industry practice.

Phase II activities and expected results:
Develop working prototypes and work with NIST, the NSTIC NPO [1], and industry to carry out feasibility tests and evaluations, with an eye toward downstream commercialization.

On a case-by-case basis, NIST may provide technical experts to work with Phase I and Phase II awardees for consultations and discussions to answer design questions and clarify any other technical aspects within the field of expertise.

1. National Strategy for Trusted Identities in Cyberspace.
2. National Cybersecurity Center of Excellence.
3. Privacy Enhancing Technologies Workshop.
Point of Contact:
Mary Clague

Secure Email Agent Using the Domain Name System (DNS) as a Trust Infrastructure

Email is widely used for Internet communication, both in dialogs between people and in one-way messaging and notification systems (e.g., email from your bank noting a deposit). However, email is inherently insecure and often spoofed by attackers looking to impersonate another user or institution in order to trick a victim into downloading malware or viewing a malicious site. This type of attack (often called "phishing") has been used to successfully infiltrate enterprises to steal sensitive data or leverage other attacks [1]. The current state of email security is considered so poor that many financial institutions and government agencies tell their customers they will never send them unsolicited email and to distrust all email purporting to come from their domain [2].

There are standards developed by the IETF to provide authentication (via digital signatures) and confidentiality (via encryption) to email through the Secure/Multipurpose Internet Mail Extensions (S/MIME) [3]. S/MIME encryption uses asymmetric cryptography. A user's public key is usually stored in a digital certificate signed by its enterprise or a provider’s Certificate Authority (CA). However, S/MIME lacks the ability to easily establish cross-domain trust. Users within an enterprise can all configure a central trust anchor (to validate S/MIME digital certificates), but may not be able to obtain and validate the S/MIME digital certificates of users in a different domain. For example, employees of one enterprise domain may wish to send digitally signed email to users in another domain; however, end users in the receiving domain are not able to obtain (or validate) the S/MIME digital certificates of the senders, so they cannot validate the signed messages. One way to make obtaining (or validating) S/MIME digital certificates possible is to use the Domain Name System (DNS) as a public key distribution infrastructure.

The DNS [4] [5] is a globally distributed, hierarchical naming infrastructure that supports almost every other form of Internet communication. A DNS query is usually the first step in communication and is already used by the email protocol to find the proper destination for email messages. The DNS Security Extensions (DNSSEC) [6] provide a means to protect the integrity of DNS data and to authenticate its origin (i.e., that it comes from the authoritative domain holder). This means DNS can be used as a lightweight distribution channel for security information such as public keys or digital certificates.

New DNS data types have been developed to store digital certificates to support Transport Layer Security (TLS) for web traffic (e.g., https) and other uses [7]. This new data type can be used as the model for another new data type [8] to store email digital certificates. Email certificates require different features than simple TLS certificates. Email security often involves two different certificates: one for generating digital signatures and one for encryption of email contents. The new DNS data type for email digital certificates takes these differences into account, but otherwise it is treated like any other DNS data.

The goal of the project is to design, develop, and test an extension to open source Mail User Agents (MUA) to use the DNS to obtain and verify email digital certificates. This extension could be a downloadable plug-in or a modified open source implementation that users can download and install on their own. This modified MUA would have two new functions related to secure email: First, upon receiving a digitally signed email, the MUA would issue DNS queries to obtain additional information to validate the certificate before using it to validate the message signature. Second, the MUA would be able to query the DNS for a receiver's email encryption certificate (or public key) in order to send an encrypted email.
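As a sketch of the first step the modified MUA would take, the function below turns a sender's address into the DNS owner name where that user's S/MIME certificate record would live. The construction (SHA2-256 of the local-part, truncated to 28 octets, under an `_smimecert` subdomain) follows the scheme in the S/MIME draft cited in [8]; consult the current draft text for the authoritative rule before relying on the details.

```python
import hashlib

def smimea_query_name(email: str) -> str:
    """Map an email address to the DNS owner name for its certificate
    record, per the hashing scheme in the cited S/MIME draft [8]."""
    local_part, _, domain = email.partition("@")
    # Hash the local-part so mailbox names are not exposed in the DNS.
    digest = hashlib.sha256(local_part.encode("utf-8")).digest()[:28]
    return f"{digest.hex()}._smimecert.{domain}"

print(smimea_query_name("alice@example.com"))
```

The MUA would issue a DNS query (with DNSSEC validation) for this name, then use the returned certificate data either to validate a received signature or to encrypt outgoing mail.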

Provisioning tools to format and store email digital certificates in the DNS are not part of this project, but may be developed to aid in the testing portion of the project.

Phase I activities and expected results:

Phase I consists of identifying possible candidates to use as the base code and designing the algorithms to query the DNS for digital certificates and to interpret the responses. MUA candidates are ideally open source and either easily extendable via plug-ins (for example, Mozilla Thunderbird) or proprietary MUA software with APIs available for developing plug-ins that users can download and install independently (for example, Apple Mail).

Phase II activities and expected results:

Phase II consists of developing and testing a prototype of the MUA plug-in to send and receive encrypted and/or digitally signed email. The MUA will use the DNS to obtain digital certificates (or additional information to validate digital certificates) used to sign email. The MUA will also use the DNS to obtain the public encryption key of an email receiver and use it to encrypt and send a confidential email message.

On a case-by-case basis for Phase I and Phase II awards, NIST may provide technical experts to act as subject matter experts in DNS, DNSSEC and email as needed. NIST may also be available to establish the network and DNS infrastructure that would be needed to conduct testing of the resulting modified MUA. NIST has experience from previous DNS projects in setting up test domains using newly specified data types.

1. "Washington Post Site Hacked After Successful Phishing Campaign."
3. B. Ramsdell and S. Turner, "Secure/Multipurpose Internet Mail Extensions (S/MIME) Version 3.2 Message Specification," RFC 5751, January 2010.
4. P. Mockapetris, "Domain Names - Concepts and Facilities," RFC 1034, November 1987.
6. R. Arends, R. Austein, M. Larson, D. Massey, and S. Rose, "DNS Security Introduction and Requirements," RFC 4033, March 2005.
7. P. Hoffman and J. Schlyter, "The DNS-Based Authentication of Named Entities (DANE) Transport Layer Security (TLS) Protocol: TLSA," RFC 6698, August 2012.
8. P. Hoffman and J. Schlyter, "Using Secure DNS to Associate Certificates with Domain Names For S/MIME," Work in Progress.
Point of Contact:
Mary Clague

Silicon Single-Photon Avalanche Diodes with Detection Efficiency that Exceeds 95 %

Recent advances in quantum communications and quantum random number generation have identified the critical need for detectors with single-photon detection efficiency above bounds that are determined by information theory. Additional losses in any preceding optical components require that the efficiency of the subsequent detectors be even higher. Devices of this type may be used in verifiable random number generation, a critical need for cryptography and cyber security [1]. Detectors with single-photon detection efficiency above 95 % are generally considered suitable for these applications, though higher efficiency is better. To date, the only candidates that meet this requirement operate at cryogenic temperatures, which significantly increases the complexity and cost of any cryptographic apparatus based on such detectors. Devices of this type are also critical to advancing communications systems based on generalized quantum measurements [2], which can discriminate low-photon-number states at error rates lower than those determined by the standard quantum limit.

While there are a wide variety of single-photon detectors, it has been demonstrated recently that silicon single-photon avalanche diodes can achieve detection efficiencies exceeding 80 % at some wavelengths while maintaining relatively low noise (less than 100 Hz dark count rate) [2]. These advances are promising, and suggest that it may be possible to achieve near-unity single-photon detection efficiency in a compact, low-cost device that requires only thermoelectric temperature control. Such a device could be a critical component in the development of small-form-factor quantum-random-number-generation equipment. A primary objective is the development of quantum random number generation for cyber-security and selective disclosure protocols, as well as an ongoing commitment to quantum optics research and devices that enable quantum information processing. More generally, high-efficiency, low-noise single-photon detectors play a critical role in applications ranging from quantum cryptography and DNA sequencing to 3D imaging based on time-of-flight ranging; technological advances in sensors that operate at the fundamental limit of electromagnetic signal strength are likely to have an impact in a variety of fields.

The award should ultimately result in the development and demonstration of single-photon avalanche diodes with single-photon detection efficiency exceeding 98 % at some wavelength in the silicon detection band (roughly between 350 nm and 1100 nm), with an intermediate Phase I goal of 95 %. In addition, noise is an important concern for detectors in cyber-security applications. To this end, the intrinsic dark count rate of the devices must be below 10 kHz, while the timing resolution must be better than 1 ns (full width at half maximum); equivalently, the per-gate dark count rate should not exceed 10^-5. Otherwise, there are no requirements on the wavelength at which the devices operate, their maximum count rate, or their recovery time, though a recovery time below 10 microseconds is preferable.
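The "equivalently" in the specification above is a simple rate-times-window product, sketched here as a consistency check: a 10 kHz intrinsic dark count rate observed through a 1 ns timing gate gives the stated per-gate dark count probability.

```python
# Consistency check of the noise specification: dark counts per gate
# equal the dark count rate multiplied by the gate (timing) window.
dark_count_rate_hz = 10e3   # 10 kHz intrinsic dark count rate
gate_s = 1e-9               # 1 ns timing resolution
per_gate = dark_count_rate_hz * gate_s
print(per_gate)             # ~1e-5 dark counts per gate
```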

To allow efficient optical coupling, the device’s active area should have a diameter larger than 50 micrometers. Devices that meet all these requirements would represent a significant advance in single-photon detection technology, and would benefit not only cyber-security and quantum information applications, but also the more conventional applications of high-efficiency single-photon detectors such as fluorescence spectroscopy and LIDAR.

Phase I activities and expected results:
Design, fabrication, packaging, testing, and characterization of silicon single-photon avalanche diodes with > 95 % single-photon detection efficiency.

Phase II activities and expected results:
Building on the Phase I devices, improve the efficiency from > 95 % to the ultimate goal of > 98 %.

On a case-by-case basis, NIST may provide technical experts to work collaboratively with Phase I and Phase II awardees to help test and characterize the devices fabricated and packaged under this project.

1. S. Pironio, A. Acin, S. Massar, A. Boyer de la Giroday, D. N. Matsukevich, P. Maunz, S. Olmschenk, D. Hayes, L. Luo, T. A. Manning, and C. Monroe, "Random Numbers Certified by Bell's Theorem," Nature 464, 1021 (2010).
2. F.E. Becerra, J. Fan, A. Migdall, “Implementation of generalized quantum measurements for unambiguous discrimination of multiple non-orthogonal coherent states,” Nature Communications 4, 2028 (2013).
Point of Contact:
Mary Clague

9.03: Health Care : Instrument to Detect Aerosolized-Droplet Dose Delivery of Vaccines

Delivery of aerosolized drugs through the pulmonary system has received much attention in recent years for addressing a variety of health issues – in particular, the delivery of vaccines. Higher costs and increased chemical toxicity of drugs under consideration require more stringent dose-delivery criteria, which has affected inhaler design and development. Little quantitative information is available on the spatially or temporally resolved concentration of a drug throughout the aerosol (i.e., the presence of the active pharmaceutical ingredient (API) within a particular liquid droplet or solid particle); concentration, along with size, is critical for proper dosage and transport efficiency to the site of action within the lungs.

NIST has developed a measurement approach for determining inhaler dose concentration of pharmaceutical-laden, multiphase aerosols [1]. Since many biological molecules either are naturally fluorescent or can be chemically modified with fluorophores, one can relate fluorescence intensity to the concentration (or mass) of these inclusions within the droplet volume. The approach distinguishes between aerosol droplets that may or may not contain fluorescing agent (i.e., to identify droplet-to-droplet variations in agent concentration). Development of a functioning prototype instrument is needed that integrates the measurement of particle/droplet fluorescence intensity with image-splitting technology and magnification optics, and provides software algorithms to identify API-laden droplets/particles and statistically evaluate the overall API concentration.

Foundational capabilities to conduct this research include: 1) image and record fluorescence and scattering intensity of individual droplets; 2) determine API drug fluorescence by fluorescence spectrophotometry; 3) prepare solutions with API to prove that natural fluorescence can be used to identify API in a solution or mixture; 4) image and distinguish fluorescent and non-fluorescent microspheres (of comparable size to droplets in metered-dose or dry-powder inhalers) using fluorescence microscopy; 5) extract and identify particle size, scattering and fluorescence intensity from recorded images using an already existing mathematical algorithm; and 6) form composite images of the combined scattering and fluorescence images.
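The classification step implied by capabilities 5) and 6) above can be sketched simply: given per-droplet size, scattering, and fluorescence intensities extracted from recorded images, flag API-laden droplets and estimate concentration from a fluorescence calibration. The threshold, calibration slope, and droplet values below are placeholder assumptions, not measured quantities.

```python
# Placeholder parameters for illustration only.
FLUOR_THRESHOLD = 50.0      # counts; detection threshold for API fluorescence
CAL_SLOPE_G_PER_L = 0.004   # g/L per fluorescence count; calibration slope

droplets = [
    # (diameter_um, scatter_counts, fluorescence_counts)
    (2.1, 310.0, 4.0),      # little fluorescence -> likely no API
    (5.4, 890.0, 120.0),    # fluorescent -> API-laden
    (7.8, 1500.0, 210.0),   # fluorescent -> API-laden
]

# Flag API-laden droplets and estimate their mean API concentration.
laden = [d for d in droplets if d[2] >= FLUOR_THRESHOLD]
fraction_laden = len(laden) / len(droplets)
mean_conc = sum(d[2] * CAL_SLOPE_G_PER_L for d in laden) / len(laden)
print(f"API-laden fraction: {fraction_laden:.2f}")
print(f"Mean API concentration in laden droplets: {mean_conc:.2f} g/L")
```

A prototype instrument would run logic of this kind over the full droplet population to produce the statistical evaluation of overall API concentration called for above.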

Phase I activities and expected results:
During Phase I, commercially available equipment will be identified and a prototype design will be developed that will accommodate different types of manufactured inhalers. Expected instrument operating parameters are: droplet size range of interest between 1 µm and 10 µm; API concentration of 0.2 g/L – 1.0 g/L; total test time < 1 min; instrument response time < 1 s; maximum flow rate of 15 L/min; dose volume of 50 µL – 100 µL; and droplet flow speed of 1 m/s – 10 m/s. The awardee will also experimentally address other potential issues that must be considered in developing a functioning prototype instrument.
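The operating parameters above imply how many droplets a single dose contains, which bounds the droplet statistics the instrument could sample. The mean diameter of 5 µm below is an assumption (mid-range of the 1 µm – 10 µm band), used only to illustrate the scale.

```python
import math

dose_ul = 50.0              # low end of the 50-100 uL dose range
mean_diameter_um = 5.0      # assumed mean droplet diameter, for illustration

# Sphere volume pi/6 * d^3 in cubic micrometers; 1 uL = 1e9 um^3.
vol_um3 = math.pi / 6 * mean_diameter_um ** 3
droplet_vol_ul = vol_um3 / 1e9
n_droplets = dose_ul / droplet_vol_ul
print(f"~{n_droplets:.2e} droplets per {dose_ul:.0f} uL dose")
```

With hundreds of millions of droplets per dose, the instrument necessarily samples a subset, which is why the < 1 min test time and statistical evaluation of the sampled population matter.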

Phase II activities and expected results:

The objective of Phase II is the delivery of a functioning prototype device applicable to a wide range of healthcare technologies.

On a case-by-case basis, NIST may provide technical experts to work with Phase I and Phase II awardees during both phases of the research, including coauthoring manuscripts to be submitted for archival publication.

1. Presser, C., "Detection of Dose Concentration in Droplets Generated by Pulmonary Drug Delivery Devices," 21st Annual Conf. on Liquid Atomization and Spray Systems (ILASS Americas 2008), on CD, Orlando, FL, 2008.
Point of Contact:
Mary Clague

Production of NIST/UCSF Breast Phantom for Magnetic Resonance Imaging (MRI)

NIST, in conjunction with University of California San Francisco (UCSF), has designed a breast phantom for quantitative magnetic resonance imaging (MRI), specific to American College of Radiology Imaging Network (ACRIN) trial 6698. A phantom is an inanimate structure used to calibrate and test MRI scanners, coils, and their operating protocols. The initial design has received interest from researchers conducting clinical trials in breast cancer research and major MRI vendors and from major research institutions that design breast imaging coils and pulse sequences. The breast phantom consists of two independent phantoms, one focused on diffusion MRI measurements and the other focused on accurate measurements of fat and fibroglandular tissue properties; additional details are provided in the References. The objective is to develop and commercialize the NIST/UCSF breast phantom based on this design. Development needs include: lowering cost through advanced manufacturing techniques, creating sterilization techniques to ensure stability for a minimum of five years, incorporating quantitative traceability into dimensional and magnetic resonance properties, and writing software that overlays the phantom structure with magnetic resonance images.

Project Goals:

  1. Cost reduction: Current prototype phantoms cost approximately $12,000 to construct. For successful commercialization, we require that the cost to the end-user be brought down to less than $5,000. Cost reduction can be achieved by simplifying the phantom design (while retaining the prescribed functionality), reducing the number of machined parts, integrating components, incorporating cost-efficient production techniques, or other methods.
  2. Phantom stability: The phantom should be stable for at least five years. Five-year stability requires that all components be sterilized so there will be no bacterial or fungal growth within the phantom. Current designs utilize corn syrup and other materials that are potential growth media for bacteria. The diffusion and T1 inserts must be well sealed, and their properties and concentrations must be stable for at least five years. The phantom should be robust enough to withstand shipment and at least five years of normal use.
  3. Incorporation of quantitative traceability: A plan must be developed to ensure quantitative traceability for critical components of the phantom. This includes traceable dimensional measurement of the resolution inserts, traceability of the chemical composition of the T1 and diffusion components, and traceability of the T1 and T2 relaxation properties to gold-standard NMR measurements. NIST may provide assistance and guidance on the best methods to ensure traceability.
  4. Analysis software: When imaging these phantoms an overlay is required to identify the exact position of the phantom components. Software is required to take the output of the three-dimensional models used for phantom construction and provide an output suitable for overlays in standard DICOM viewers. For the commercial phantom (Phase II), a complete analysis package is required to be disseminated with the phantom that can input DICOM images, overlay the designated regions of interest, process the data, and compare measured parameters with prescribed parameters.
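The ROI-overlay and comparison step described in goal 4 can be sketched as follows. This is an illustrative outline only: the function and array names are invented, and a real analysis package would read DICOM images (e.g., via pydicom) rather than synthetic arrays:

```python
import numpy as np

# Illustrative sketch of the analysis step above: measure a parameter
# inside a designated region of interest (taken from the phantom's 3-D
# model) and compare it with the prescribed value. All names and values
# here are hypothetical placeholders.
def compare_roi(image, roi_mask, prescribed, tolerance):
    """Return (measured_mean, within_tolerance) for one ROI."""
    measured = float(image[roi_mask].mean())
    return measured, abs(measured - prescribed) <= tolerance

# Synthetic 2-D "image" with a bright square standing in for one insert.
image = np.zeros((64, 64))
image[20:30, 20:30] = 1.2            # mock parameter value inside the insert
roi = np.zeros((64, 64), dtype=bool)
roi[20:30, 20:30] = True             # ROI overlay derived from the 3-D model

measured, ok = compare_roi(image, roi, prescribed=1.2, tolerance=0.05)
print(measured, ok)                  # the mock insert passes its tolerance check
```

The full Phase II package would repeat this comparison for every designated region and report measured versus prescribed parameters.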

Phase I activities and expected results:
Revise initial designs and develop manufacturing process to lower cost to end-user:
The goal is a total cost of $5,000 or less per phantom. Prototype phantoms should be constructed using low-cost manufacturing techniques. A plan is required to ensure measurement traceability of all key parameters. Traceable measurements should be made on key components going into prototype phantoms.

Develop sterilization plan and methods such that the fabricated phantoms have at least five-year stability.

Phase II activities and expected results:
Move toward manufacture of commercial breast phantoms: test performance of lower cost prototype, make final revisions with input from the breast MRI community, and produce breast phantoms for distribution. It is expected that at least six phantoms will be distributed to the user community in exchange for testing and feedback. A compact software analysis package will be developed to be distributed with the phantom to allow users quick feedback on their imaging protocols and image quality. A full distribution package will be designed that will include the phantom, analysis software, environmental site monitoring (e.g., temperature of the phantom during imaging), and any infrastructure required to ship the phantom and precisely load the phantom into various scanners and RF coil assemblies.

On a case-by-case basis, NIST may provide technical experts to work with Phase I and Phase II awardees for consultation, input, testing of prototype phantoms and solutions, and analysis of data collected from the user community. NIST may provide any needed clarification to the phantom specifications currently posted on the public TWIKI listed below, and NIST may be available for consultation and input.

1. The initial designs of the phantom and prototype testing are documented at the publicly accessible NIST TWIKI site:
Point of Contact:
Mary Clague

9.04: Manufacturing : Compact, Rapid Electro-Optic Laser Scanner for Absolute 3D Imaging

Real-time, three-dimensional (3D) imaging is needed by industry for both machine vision and monitoring of manufacturing processes. Today’s 3D imaging equipment has significant technical limitations: poor image resolution, low refresh rate, and a lack of rigorous, calibrated distance measurements, which render the equipment inadequate for high-quality measurements in today’s challenging manufacturing environments.

NIST has demonstrated a novel, prototype 3D self-calibrated laser radar (LADAR) imager. The NIST 3D LADAR imager is capable of non-contact, absolute dimensional measurements from distances as far as 5 m away. Expected uses for this technology include measurement of complicated 3D objects, such as parts and assemblies on a manufacturing line, and of footprints and other unstable trace evidence in forensic investigations. The NIST prototype can be improved upon further and can potentially be used with a number of different LADAR techniques.

An optimum LADAR instrument would be cheaper, easier to build, and easier to use, and also be more compact and have a faster refresh rate. A significant technical barrier to such an instrument is the lack of an appropriate, compact laser scanner. In existing LADAR systems, the laser beam is scanned across the target surface through mechanically actuated mirrors that are based on bulk optics packaged within a much larger device. A compact LADAR would be possible if this conventional scanner could be replaced by a compact electro-optic scanner.

The combination of a compact, electro-optic scanner with a LADAR approach that exploits modern laser technology, including frequency combs, would result in a compact 3D LADAR imager with extremely high performance. This imager could be used to qualify physical parts on manufacturing lines, enabling absolute multiscale dimensional measurements of parts and assemblies up to one meter in size at a resolution of one micrometer or better. These methods, and the knowledge gained by using them, will reduce manufacturing costs and accelerate the adoption of metal additive technologies by enabling real-time qualification of additive manufacturing parts for mission-critical uses.

The twin goals of a cooperative project are to transfer NIST 3D LADAR technology to the private sector and to develop the compact, rapid electro-optic (EO) laser scanner that would make this technology commercially viable. The EO laser scanner will be used to steer a swept laser across an object of interest. A commercially viable EO laser scanner must steer a swept laser with at least 5 THz of laser bandwidth, and it must be capable of sweep transit times of 0.5 ms or less. The center wavelength of the EO laser scanner should be 1550 nm, and it must have a clear aperture of 1 cm or greater. The EO laser scanner must be able to operate with bidirectional light to support a monostatic LADAR configuration. Most importantly, the EO laser scanner must contain no mechanical parts, thus enabling robust operation within manufacturing environments that may involve exposure to large temperature swings, mechanical vibration, or other environmental changes that could cause misalignment of mechanical parts.

Through this subtopic, a commercial-scale 3D LADAR system (or systems) based on NIST’s technology, which includes several proof-of-concept LADAR approaches, will be developed and demonstrated. For additional information, please see the NIST Fiber Sources and Applications website

Phase I activities and expected results:

  1. Design a two-dimensional electro-optic scanner that can enable near-diffraction-limited scanning over an instantaneous bandwidth of > 1 THz (defined as an excess beam spread of less than 20 % of the diffraction limit), in a 1 cm FWHM beam, over 10 mrad of angular range, at sweep transit times of less than 0.5 ms, and with insertion loss of less than 6 dB. The design should keep the overall volume of the instrument to 50 cm³ or less.
  2. Conduct necessary laboratory tests to verify the design with a particular emphasis on the instantaneous bandwidth, beam size, number of “spots” (defined as the angular range divided by the diffraction limited angular spread), and insertion loss.
  3. Conduct preliminary measurements and become familiar with the NIST technology available for use in a complete LADAR system, which would be a significant component of a Phase II application.
  4. Conclude Phase I with a report that describes, in detail, the approach to the electro-optic laser scanner, including calculations and data on how the electro-optic approach will meet specifications on sweep times, angular deviation, output beam diameter, beam divergence, insertion loss, and instantaneous optical bandwidth.
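The number of resolvable "spots" defined in activity 2 can be estimated directly from the specifications above. The sketch below uses the simple estimate θ_d ≈ λ/D for the diffraction-limited angular spread of a beam of diameter D; the exact constant depends on the beam profile, so treat this as an order-of-magnitude check rather than a design value:

```python
# Back-of-envelope count of resolvable "spots": angular range divided by
# the diffraction-limited angular spread (theta_d ~ lambda / D). The
# lambda/D estimate is a simplifying assumption; the exact prefactor
# depends on the beam profile.
wavelength = 1550e-9      # m, specified center wavelength
beam_diameter = 1e-2      # m, 1 cm FWHM beam from the spec
angular_range = 10e-3     # rad, 10 mrad from the spec

theta_d = wavelength / beam_diameter      # ~155 urad diffraction-limited spread
spots = angular_range / theta_d
print(round(spots))                       # ~65 resolvable spots
```

Roughly 65 resolvable spots over the 10 mrad range is therefore the scale of performance the Phase I laboratory tests would need to verify.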

Phase II activities and expected results:

  1. Build an electro-optic two-dimensional scanner.
  2. Test and validate the scanner within existing NIST LADAR hardware prototypes.
  3. Create design concepts and develop a rapid 3D LADAR imager for commercial deployment.

On a case-by-case basis, NIST may provide technical experts to work with Phase I and Phase II awardees to work collaboratively and to provide existing information regarding tradeoffs in EO scanner performance to optimize its capabilities for 3D imaging. NIST expects to provide limited testing by incorporating any prototype EO scanners into an existing NIST LADAR imager. NIST may provide expertise in 3D LADAR imaging data acquisition and analysis.

1. T.-A. Liu, N.R. Newbury, and I.R. Coddington, “Sub-micron absolute distance measurements in sub-millisecond times with dual free-running femtosecond Er fiber-lasers,” Opt. Express 19, 18501 (2011).
2. Esther Baumann, Fabrizio R. Giorgetta, Ian Coddington, Laura C. Sinclair, Kevin Knabe, William C. Swann, and Nathan R. Newbury, “Comb-calibrated frequency-modulated continuous-wave ladar for absolute distance measurements,” Opt. Lett. 38, 2026 (2013).
Point of Contact:
Mary Clague : Computer Aided Standards Development (CASD) – A Software Tool to Automate Standards Development Process

The design and development of standards is a long and tedious process. This process is often hampered by requirements to keep complex terminology consistent and to keep its associated information content current. The implementation and adoption of standards is slowed by the gap between the technical requirements in a standard and the technology required to implement those requirements. This SBIR subtopic seeks a software tool that will make the design and development process faster, more robust, and more integrated. The tool will be similar to a Computer Aided Software Engineering (CASE) tool, but applied to standards development and deployment. The tool will provide the following facilities for standards development and deployment:

  • Categorize and organize standards’ content in a structured information model, supporting modularization and reuse.
  • Establish terminology connections between related standards, and maintain semantic consistency across standards.
  • Generate a visual representation and navigation scheme for the standard, so that the standard may be communicated to the end user through interactive means (such as a touch-screen tablet).
  • Provide an underlying formal model that is amenable to testing and verification, and that facilitates the implementation of the standard by automatic or semiautomatic generation of software modules. This should allow software implementers to extract portions of the standard to meet specific implementation requirements.

Standards development organizations (SDOs) and the scientific and engineering societies that participate in those organizations will benefit greatly from such a tool. Vendors will benefit as well, since the tool would pull content from the existing standard, populate its own information model, and allow a consistent assessment through which the vendor can identify the requirements.

The life-cycle of a standard may involve the following three broad stages [1]. First is the development stage where stakeholders gather within committees, prepare a draft, and come to a consensus on a final standard. The second stage is the deployment stage where a pilot implementation is undertaken by some consortia followed by industry wide implementation. The third stage is the maintenance stage where the standard is revised and maintained. A well-defined underlying information structure/model will also facilitate the implementation of all three stages. In addition, it can support the instantiation and communication of the standard to the end users using the varied digital media available today.

Even though information management and software tools have advanced considerably over recent decades, SDOs rarely take advantage of those advancements. One area in which such advancements can help is managing the terminology contained in standards; to address this issue, we need a framework for developing a taxonomy of the terminology and concepts contained in a standard. A second is capturing the requirements of a standard in a formal model. A third is producing a standard as a structured information model, instead of a simple text document. This can be supported by additional tools to automatically verify these models for consistency and to generate other artifacts such as documents and software implementation modules. All of these will be supported by tools that allow standards developers and end-users to interactively view and navigate the information models. Such technology will greatly improve the deployment, adoption, and maintenance of standards.
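The terminology-management idea above can be sketched minimally: collect each standard's definitions into one structure and flag terms that are defined differently across standards. The standards and definitions below are invented placeholders, not real SDO content:

```python
# Minimal sketch of cross-standard terminology consistency checking.
# Inputs and definitions are hypothetical examples.
def find_conflicts(glossaries):
    """glossaries: {standard_name: {term: definition}} -> conflicting terms."""
    seen, conflicts = {}, set()
    for std, terms in glossaries.items():
        for term, definition in terms.items():
            if term in seen and seen[term] != definition:
                conflicts.add(term)
            seen.setdefault(term, definition)
    return sorted(conflicts)

glossaries = {
    "STD-A": {"tolerance": "permitted variation of a dimension"},
    "STD-B": {"tolerance": "allowed deviation from nominal",   # conflicts with STD-A
              "datum": "reference feature"},
}
print(find_conflicts(glossaries))   # ['tolerance']
```

A real tool would replace the string comparison with an ontology-backed semantic check, but the structure — a shared model over all standards' terminology — is the same.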

The outcome of this effort will bring SDOs, software implementers, and end-users (both manufacturers and their consumers) together under a single framework, and allow them to exchange standards information in an unambiguous and efficient manner. While the focus of this project will be related to standards in manufacturing, the general methodology is applicable to other industry sectors.

Phase I activities and expected results:

  • Expand on the NOVIS tool [2,3] to develop a taxonomy editor for standards. This should include a classification scheme and underlying ontology modeling the concepts and relationships.
  • Develop a formal representation scheme to capture the requirements for a standard. This may be based on the FACTS work [4].
  • Develop an export/import mechanism for the information content of a standard and associated document formats.

Phase II activities and expected results:

  • Design an initial architecture and software for realizing a computer-aided tool for standards development.
  • Develop a Computer Aided Standards Development (CASD) tool and a comprehensive case study/demonstration.
  • Design an interface between the CASD tool and document generation software, in the form of a plug-in to a document editor that interfaces with the underlying CASD model.
  • Design a mechanism for automatic or semiautomatic generation of software to implement modules of the standard as per requirements.
  • Design a framework for a standards repository in which the standards may reside as information models. The framework should support version control, cross-standard linking, and maintenance of information consistency across standards.

On a case-by-case basis, NIST may provide technical experts to work with Phase I and Phase II awardees to consult and provide inputs and work closely with awardees to assess their progress.

1. Cargill, C.F., “Why Standardization Efforts Fail,” The Journal of Electronic Publishing, 2011.
2. Narayanan, A., et al., A Methodology for Handling Standards Terminology for Sustainable Manufacturing, NIST Interagency/Internal Report (NISTIR) 7965, 2013.
3. Lechevalier, D., et al., NIST Ontological Visualization Interface for Standards User’s Guide, NIST Interagency/Internal Report (NISTIR) 7945, 2013.
4. Witherell, P.W., et al., FACTS: A Framework for Analysis, Comparison, and Test of Standards, NIST Interagency/Internal Report (NISTIR) 7935, 2013.
Point of Contact:
Mary Clague : Erbium-Based DPSS Lasers for Remote Sensing

The primary objective is to develop a narrow-band, tunable, diode-pumped solid-state (DPSS) pulsed laser system operating in the eye-safe infrared region around 1.6 micrometers in wavelength. Such laser systems are in demand for remote sensing of fugitive emissions, which can cost millions of dollars to industry, as well as for sensing and mitigation of pollutants for regulatory requirements and research. These applications demand high repetition rates (1 kHz – 10 kHz) and high-energy (>1 mJ) pulses.

Currently available commercial technologies for generating laser light in this region include telecommunications lasers using erbium-doped fibers, and optical parametric oscillators pumped by two additional laser systems. The former are limited in pulsed power output by nonlinear processes within the fiber, and the latter are complex and limit the field-portability of remote sensing instruments. The development of a system with orders of magnitude higher pulse energy than telecommunications lasers and lower complexity than optical parametric oscillator-based systems is necessary to overcome these limitations.

Diode-pumped solid-state lasers using erbium ions embedded in a crystal matrix (such as YAG or YVO4), which have emission lines in the appropriate spectral region, have been developed and demonstrated to be suitable for both high-energy and high-repetition-rate pulse production [1-6]. The awardee will develop and commercialize a system, or set of systems, optimized for remote sensing applications.

The project outcome should be a turnkey, environmentally robust DPSS Er-ion based laser system with high mode quality, high pulse energy, and high repetition frequency. The pulse duration should be in the tens of nanoseconds, and the linewidth should be as close to transform-limited as is practical. A state-of-the-art optical parametric oscillator-based system can generate pulses of several tens of mJ at a 100 Hz repetition rate when pumped with a high-power Nd:YAG [7]. The project outcome should have comparable pulse energies and a variable repetition rate exceeding 1 kHz and not exceeding 20 kHz. The laser should be capable of being reconfigured for a variety of spectroscopic lines and targets (for example, the 1570 nm CO2 absorption range as well as the 1645 nm CH4 range). There should also be fine tunability of the laser to densely sample points across a typical absorption feature. This tuning could be achieved, for example, by seeding with a tunable diode laser.
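The "fine tunability" requirement can be illustrated by stepping a seed laser across a single absorption feature. The Lorentzian line shape, the CH4-like 1645 nm center, and the width and step values below are illustrative assumptions only:

```python
import numpy as np

# Illustration of densely sampling one absorption feature with a finely
# tunable seed laser. Line-shape parameters are illustrative, not
# measured values.
center = 1645.0       # nm, near the CH4 line mentioned above
hwhm = 0.05           # nm, assumed half-width at half-maximum
step = 0.01           # nm, assumed tuning step of the seed laser

offsets = np.arange(-0.5, 0.5 + step / 2, step)       # tuning grid, nm
absorption = 1.0 / (1.0 + (offsets / hwhm) ** 2)      # Lorentzian profile

# The grid places about ten samples inside the FWHM of the line, enough
# to resolve its shape.
in_fwhm = int(np.sum(np.abs(offsets) <= hwhm + 1e-12))
print(len(offsets), in_fwhm)
```

A coarser tuning step than the line's half-width would under-sample the feature, which is why fine tunability is called out explicitly.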

Phase I activities and expected results:
1) Design of laser platform including material selection

2) Performance modeling of laser platform

3) Feasibility study of the use of the laser design for detection of CH4 and CO2.
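The detection scheme these lasers would enable is differential absorption (DIAL): an "on" wavelength on the absorption line and an "off" wavelength beside it yield a path-averaged concentration from the ratio of returned powers. The numbers below are illustrative placeholders, not measured cross-sections:

```python
import math

# Sketch of the standard two-wavelength DIAL retrieval. All parameter
# values are hypothetical, chosen only to demonstrate the round trip.
def dial_concentration(p_on, p_off, delta_sigma, path_m):
    """Path-averaged number density (molecules/m^3) over a two-way path."""
    return math.log(p_off / p_on) / (2.0 * delta_sigma * path_m)

# Round trip: synthesize returns for a known density, then recover it.
n_true = 2.5e21          # molecules/m^3, assumed
delta_sigma = 1.0e-24    # m^2, assumed differential absorption cross-section
path = 500.0             # m, assumed range-gate length
p_off = 1.0
p_on = p_off * math.exp(-2.0 * n_true * delta_sigma * path)

print(f"{dial_concentration(p_on, p_off, delta_sigma, path):.3e}")  # 2.500e+21
```

The retrieval depends only on the power ratio, which is why pulse energy (signal return) and repetition rate (averaging) dominate the laser requirements.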

Phase II activities and expected results:
1) Construction of laser system

2) Performance characterization of laser system

3) Environmental testing of laser system

4) Demonstration of absorption spectroscopy on at least one spectroscopic target using the laser

On a case-by-case basis, NIST may provide technical experts to work with Phase I and Phase II awardees and may consult and provide input through discussions.

1. Wang, X. et al. Dual-wavelength Q-switched Er:YAG laser around 1.6 μm for methane differential absorption lidar. Laser Phys. Lett. 10, 115804 (2013).
2. Wang, R. et al. Continuous-wave and Q-switched operation of a resonantly pumped U-shaped Er:YAG laser at 1617 and 1645 nm. Laser Phys. Lett. 10, 025802 (2013).
3. Zhu, L., Wang, M., Zhou, J. & Chen, W. Efficient 1645 nm continuous-wave and Q‑switched Er:YAG laser pumped by 1532 nm narrow-band laser diode. Opt. Express 19, 26810–26815 (2011).
4. Kim, J. W., Sahu, J. K. & Clarkson, W. A. High-energy Q-switched operation of a fiber-laser-pumped Er:YAG laser. Appl. Phys. B 105, 263–267 (2011).
5. Bigotta, S. & Eichhorn, M. Q-switched resonantly diode-pumped Er3+:YAG laser with fiberlike geometry. Opt. Lett. 35, 2970–2972 (2010).
6. Chen, D.-W., Birnbaum, M., Belden, P. M., Rose, T. S. & Beck, S. M. Multiwatt continuous-wave and Q-switched Er:YAG lasers at 1645 nm: performance issues. Opt. Lett. 34, 1501–1503 (2009).
7. Douglass, K. O. et al. Construction of a high power OPO laser system for differential absorption LIDAR. in Lidar Remote Sens. Environ. Monit. XII SPIE 8159, 81590D–81590D–9 (2011).
Point of Contact:
Mary Clague : Precision Specimen Control for Transmission Scanning Electron Microscopy

The primary objective is to significantly extend the capabilities of the scanning electron microscope (SEM), a tool considered invaluable for characterizing materials and products in numerous forms of manufacturing. Examples range from extremely fine-scale structures found in nanoparticle production and semiconductor processing to large-scale structures used for transportation and infrastructural applications. The efficiency and quality of all manufactured products depends intimately on the ability of engineering materials to perform their intended function. Those functions are a direct result of the properties imparted upon each material due to the arrangement of its atoms over dimensional scales from sub-nanometer to several hundreds of micrometers. It is therefore critical to product manufacturing and reliability that the microscopic structure of materials be precisely measurable over those size scales.

A rapidly emerging area of material characterization makes use of the detection of electrons that have transmitted through specimens within an SEM, in order to significantly improve spatial resolution and image contrast over many conventional SEM and transmission electron microscope (TEM) methods. This approach makes use of some operational principles analogous to those used in scanning transmission electron microscopy (STEM), and is therefore sometimes termed “STEM-in-SEM”. NIST is developing SEM-based technologies that make use of transmitted electrons in ways different from TEM-based STEM imaging, resulting in a broader characterization approach we call transmission SEM, or t-SEM.

Critical to the success of developing reliable quantitative material analysis methods based on detection of transmitted electrons in the SEM is the precise control of specimen position inside the SEM chamber. Quantitative analysis requires the microscope operator to position a specimen under very precise, well-defined (in terms of crystal directions as determined by electron diffraction) incident electron beam conditions, independent of the detector’s location. Existing SEM positioning systems are insufficient for the required level of control because (i) the detector itself mounts onto the stage, precluding independent movement of the specimen, (ii) the specimen cannot be tilted eucentrically, i.e., the transmitted electron image translates unacceptably during tilting, and/or (iii) the SEM stage itself gets in the way of detectors for STEM imaging and electron backscatter diffraction (EBSD).

Successful development of the required type of specimen control in the SEM would represent a major leap forward in advancing STEM-in-SEM (t-SEM) capabilities for quantitative analysis of materials. Two major benefits to manufacturing could result: (1) a host of powerful analytical material characterization methods would be brought within reach for those presently without access to TEMs, due to budgetary or personnel constraints, and (2) a new, broader spectrum of measurements will be achievable with relatively inexpensive modifications or add-ons to existing SEM investments, as compared to state-of-the-art TEM purchases. As a result, manufacturers may perform detailed measurements for product optimization, as well as meaningful root-cause failure analyses, both from the key perspective of the structure of engineering materials.

The following five goals must be met in order to consider this project successful:

  1. A method to hold a thin, fragile specimen, in the form of a circular, 3 millimeter diameter TEM grid, within an SEM, centered on the microscope’s optic axis.
  2. Precise operator control over specimen x and y translation (where z coincides with the optic axis of the microscope), as well as control over the incident beam direction (i.e., specimen orientation) within the specimen coordinate system (see activity number 2 for more detail).
  3. The positional control method must allow for the insertion of a STEM detector within the microscope, i.e., the specimen must reside and its position must be controllable within the available space between the bottom of the SEM pole piece and the top of the STEM detector.
  4. The positional control method must allow for the insertion of an existing EBSD camera on the microscope.
  5. The positional control method, when not in use, must allow for conventional SEM studies.

Phase I activities and expected results:
NOTE: any hardware design to enable control of specimen position for transmission imaging must also allow for conventional SEM operation. In other words, since NIST’s instrument is not dedicated solely to transmission SEM mode, the hardware must either: (i) be removable from the SEM, or (ii) be “placed aside” within the chamber to accommodate normal operations.

1. Specimen translation. Choose or develop a method for positioning a TEM specimen within the (x, y) plane beneath the SEM pole piece with a minimum step size of 250 nm or better. Specimen translation may be controlled either manually or in a motorized manner.

2. Specimen orientation. Choose or develop a method compatible with that in activity number 1 that allows for control of the incident electron beam direction (i.e., specimen orientation) within the specimen coordinate system via specimen manipulation; typically this is done in the field of TEM specimen control via either: (i) two orthogonal tilt axes (“double-tilt”) OR (ii) one tilt axis plus one rotation axis (“tilt-rotate”). The primary tilt axis must both: (a) have a tilt sensitivity of 0.5 degree or better, and (b) lie within the thin specimen plane to maintain a practically manageable degree of eucentricity. Specimen orientation may be controlled either manually or in a motorized manner.
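The "double-tilt" geometry in activity 2 can be sketched as two rotations about orthogonal in-plane axes, giving the incident beam direction in the specimen coordinate system. The axis convention and angles below are illustrative, not a NIST specification:

```python
import numpy as np

# Sketch of double-tilt specimen orientation: after tilts alpha and beta
# about the specimen x and y axes, express the (fixed) incident beam in
# the specimen's own frame. Convention here is an illustrative choice.
def beam_in_specimen_frame(alpha_deg, beta_deg):
    a, b = np.radians([alpha_deg, beta_deg])
    rx = np.array([[1, 0, 0],
                   [0, np.cos(a), -np.sin(a)],
                   [0, np.sin(a),  np.cos(a)]])   # tilt about specimen x
    ry = np.array([[ np.cos(b), 0, np.sin(b)],
                   [0, 1, 0],
                   [-np.sin(b), 0, np.cos(b)]])   # tilt about specimen y
    beam_lab = np.array([0.0, 0.0, 1.0])          # beam along the optic axis
    # Rotating the specimen by R means the beam appears as R^T @ beam
    # in the specimen frame.
    return (ry @ rx).T @ beam_lab

print(beam_in_specimen_frame(0, 0))   # untilted: beam along specimen z
```

The 0.5 degree tilt sensitivity in the requirement corresponds to controlling these angles finely enough to step the beam between neighboring crystallographic conditions.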

3. Compact design. Design the method combining control of specimen translation and incident beam direction (via e.g., double-tilt or tilt-rotate) to fit within the space between the bottom of the pole piece and the top of the inserted STEM detector. For our particular microscope and STEM detector combination (a LEO 1525 with a KE Developments 3-channel dark-field/bright-field detector), the distance between the bottom of the pole piece and top of the detector is 15 mm. From a broader commercial perspective, this distance will vary somewhat depending on the microscope manufacturer and STEM detector technology available.

4. Interface to operator. Design a port adapter and/or feed-through system that can accommodate any necessary connections external to the microscope chamber. The port adapter and/or feed-through must allow for concurrent use of an existing EBSD camera mounted beneath the EDS and WDS ports on our microscope. We have unused ports available that should allow this.

Expected results: by the end of Phase I, a design should be submitted for review by NIST. The design should be complete except for exact machining dimensions. It should describe the translation mechanism and minimum translation step size, and the mechanism for controlling incident beam direction while maintaining a manageable degree of eucentricity. The design must be feasible for incorporation into our SEM. Approximate dimensions may be provided at the end of Phase I, and refined dimensions can be addressed during Phase II.

Phase II activities and expected results:
1. Refinement of the design and construction of a prototype specimen holder/control system with port adapter and/or feed-through that fits our microscope.

2. If the Phase I design results in a novel specimen holder that is to be introduced each time transmission SEM is to be performed, determine feasibility of introducing an airlock to accommodate the holder.

Expected results: by the end of Phase II, a NIST t-SEM operator should be able to mount a TEM specimen into the new system, translate it to a desired position with 250 nm or better accuracy, observe a transmission Kikuchi electron diffraction pattern with the EBSD camera, and tilt and/or rotate the specimen to a desired crystallographic orientation. Finally, the operator must be able to observe the transmitted image with the existing STEM detector.

On a case-by-case basis, NIST may provide technical experts to work with Phase I and Phase II awardees for consultations and discussions to answer design questions and clarify any other technical aspects within the field of expertise.


Point of Contact:
Mary Clague : Predictive Modeling Tools for Metal-Based Additive Manufacturing

The primary objective is to develop tools that rely on a suite of physics-based models to support accurate predictive analyses of metal-based additive manufacturing processes and products. Physics-based models must be developed in such a way as to ensure reusability in a predictive environment, irrespective of product geometry. The tool will allow for accurate and reliable microstructure predictions for various geometries for a given process and material, reducing the need for empirical testing and allowing for part qualification based solely on analysis. This tool will allow industry to begin moving away from empirical testing and instead rely more on modeling and simulation, enabled primarily by measurement science underpinnings. Such a tool should:

  • Provide a set of physics-based models for metal powder-bed fusion manufacturing processes.
  • Demonstrate composability [1] of such models to support geometry-independent reusability. Provide ranges of parameter values for which composed models can be assumed reliable and accurate.
  • Provide an automated or semi-automated means for composing models.
  • Provide support for in situ feedback to allow for real-time adjustments during manufacture.
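The notion of composable models with explicit validity ranges, listed above, can be sketched minimally: each stage declares the input range over which it is trusted, and a composition checks each stage's range at run time. The models themselves are trivial placeholders, not real process physics:

```python
# Minimal sketch of composable models with explicit validity ranges.
# The model functions and ranges are invented placeholders.
class Model:
    def __init__(self, fn, valid_range):
        self.fn, self.valid_range = fn, valid_range   # (lo, hi) on the input

    def __call__(self, p):
        lo, hi = self.valid_range
        if not lo <= p <= hi:
            raise ValueError(f"input {p} outside validity range [{lo}, {hi}]")
        return self.fn(p)

def compose(*models):
    """Chain models; each stage enforces its own validity range."""
    def chained(p):
        for m in models:
            p = m(p)
        return p
    return chained

melt_pool = Model(lambda power: 0.001 * power, (100, 400))    # W -> melt-pool width, mm (placeholder)
grain_size = Model(lambda width: 50.0 * width, (0.05, 0.5))   # mm -> grain size, um (placeholder)

chain = compose(melt_pool, grain_size)
print(chain(200))   # melt pool 0.2 mm -> grain size 10.0 um (placeholder physics)
```

A real tool would compose validated process-physics models the same way, with each stage's trusted parameter range published alongside it, so a prediction outside any stage's range is rejected rather than silently extrapolated.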

Industry currently relies heavily on the manufacturing of coupons to qualify metal parts created using additive manufacturing processes. Physics-based models promise to provide the ability for industry to move away from relying solely on testing and towards an environment supported by models and simulation. The transition to modeling and simulation for part qualification is underway, albeit very cautiously and deliberately. Current qualification through modeling and simulation is achieved only with very specific models deployed under very specific circumstances. 

The goal of this project is to develop a tool that will support the broader application of physics-based models as a means for product qualification. This will be achieved by developing sets of composable models, each model accompanied with clear application boundaries. These models must be composable to a level of granularity that microstructure, and to an extent performance, can be predicted to a degree of certainty, for a given set of process parameters irrespective of geometry. This tool will be an early step in allowing industry to move away from 100% testing and towards part qualification that is able to rely more on modeling and simulation.
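The composability requirement described above (self-contained, stateless model components, each with declared application boundaries) can be sketched in code. The class, the toy process chain, and all numeric values below are hypothetical illustrations, not part of the solicitation:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass(frozen=True)
class Model:
    """A stateless model component with explicit application boundaries."""
    name: str
    fn: Callable[[Dict[str, float]], Dict[str, float]]
    bounds: Dict[str, Tuple[float, float]]  # valid ranges for required inputs

    def applicable(self, state: Dict[str, float]) -> bool:
        # A missing input (NaN) fails every range check.
        return all(lo <= state.get(k, float("nan")) <= hi
                   for k, (lo, hi) in self.bounds.items())

def compose(models, inputs):
    """Run models in sequence, refusing any step outside its stated bounds."""
    state = dict(inputs)
    for m in models:
        if not m.applicable(state):
            raise ValueError(f"{m.name}: inputs outside validated range")
        state.update(m.fn(state))
    return state

# Hypothetical two-step chain: laser power -> melt-pool size -> grain size.
melt = Model("melt_pool", lambda s: {"pool_um": 0.5 * s["power_W"]},
             {"power_W": (100.0, 400.0)})
grain = Model("grain", lambda s: {"grain_um": 0.1 * s["pool_um"]},
              {"pool_um": (50.0, 200.0)})

result = compose([melt, grain], {"power_W": 200.0})
```

Because each component is stateless and carries its own validity ranges, a composed chain can be checked step by step before its predictions are trusted, which is the sense in which "clear application boundaries" supports geometry-independent reuse.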

As additive manufacturing becomes increasingly popular, many institutions, especially universities and small companies, do not have the resources to test each part created. Nor do these institutions have the resources to develop reliable predictive models. Development of this tool will focus on support for composable modeling for metal powder-bed fusion processes, including direct metal laser sintering and selective laser melting, though the principles applied during its development should support broader applications.

Therefore, one goal of this project will be to provide a foundation for developing similar tools in the future for other processes, including those that build parts using polymer-based processes.

The awardee(s) will develop the fundamental measurement science for this predictive tool, enabling composable predictive modeling for manufacturing with metal powder-bed fusion processes, similar to how finite element analysis is used in conventional machining.

Phase I activities and expected results:

  • Development of a set of parameterized, composable models to support predictive analysis in a proof-of-concept operating environment.
  • Development of a specified set of operating conditions for which the models are applicable, including the degree of certainty that they are able to predict performance.
  • Conceptual tool to demonstrate model composability and reliability by predicting the microstructure, to a specified degree of certainty, for several basic shapes.
  • Predict fabricated part performance of several basic shapes, to a specified degree of certainty.

Phase II activities and expected results:

  • Prototype tool to demonstrate automated or semi-automated model composition to predict microstructure to a specified degree of certainty.
  • Demonstrate identification of in situ adjustments based on real-time predictive analysis.
  • Development of a framework from which models can be rapidly called and stored on demand.
  • Demonstrate model composability and reliability by predicting the microstructure, to a specified degree of certainty, on complex geometry.
  • Predict fabricated part performance of complex geometry, to a specified degree of certainty.

On a case-by-case basis, NIST may provide technical experts to work closely with Phase I and Phase II awardees to consult, provide input, and assess their progress.

[1] Composability is a system design principle that deals with the interrelationships of components. A highly composable system provides recombinant components that can be selected and assembled in various combinations to satisfy specific user requirements. The essential attributes that make a component composable are: 1) it is self-contained (i.e., it can be deployed independently; note that it may cooperate with other components, but dependent components are replaceable); and 2) it is stateless (i.e., it treats each request as an independent transaction, unrelated to any previous request) (see Reference [1] in Section of this FFO).

1. Pollock, Neil, and Robin Williams. Software and organisations: The biography of the enterprise-wide system or how SAP conquered the world. Taylor & Francis US, 2008.
2. Roadmap for Additive Manufacturing: Identifying the Future of Freeform Processing, 2009.
3. Measurement Science Roadmap for Metal-Based Additive Manufacturing, May 2013.
Point of Contact:
Mary Clague

Technology for Separation of Carbon Nanotubes

As an advanced material, carbon nanotubes (CNTs) hold great promise for a number of technological applications of strategic importance, including future digital electronics beyond current CMOS technology. A fundamental problem in CNT applications is the lack of purity of CNTs with well-defined electronic and optical properties. A recent NIST advancement in CNT separation has demonstrated that aqueous two-phase (ATP) extraction is a scalable and cost-effective solution to this long-standing problem. Automation of the process to enable high-resolution, multistage extraction is the key to turning the NIST finding into an industrial manufacturing process. This project calls for technology to improve the resolution and speed of CNT separations. The goal is to develop an automated technique that enables the total fractionation of a synthetic CNT mixture in a single run, allowing manufacturers to monitor CNT chirality distribution in their production process and application developers to obtain high-purity CNTs in an automated, continuous way.

Phase I activities and expected results:
Feasibility study regarding the design and fabrication for small-scale (1 mg) CNT separation, defined by the two specific goals listed below:

1. Single-chirality CNT separation of a synthetic mixture in a single run within 8 hours.

2. Separation of semiconducting and metallic tubes, obtaining 99.9999% semiconducting tube purity.
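For a sense of scale, reaching 99.9999% semiconducting purity by multistage extraction can be framed as repeated enrichment. The sketch below is illustrative arithmetic only; the per-stage selectivity is an assumed value, not a measured property of the NIST ATP process:

```python
import math

def stages_needed(initial_metallic_frac, target_metallic_frac, selectivity):
    """Stage count to reach a target metallic impurity fraction, assuming
    each stage cuts the metallic-to-semiconducting ratio by `selectivity`."""
    r0 = initial_metallic_frac / (1.0 - initial_metallic_frac)
    rt = target_metallic_frac / (1.0 - target_metallic_frac)
    return math.ceil(math.log(r0 / rt) / math.log(selectivity))

# As-grown mixtures are roughly 1/3 metallic; the Phase I goal of
# 99.9999% semiconducting purity corresponds to a 1e-6 metallic fraction.
# A tenfold per-stage enrichment is assumed purely for illustration.
n_stages = stages_needed(1.0 / 3.0, 1e-6, selectivity=10.0)
```

Under these assumptions about half a dozen stages suffice, which is why automated multistage operation, rather than single-pass resolution alone, is the emphasis of this topic.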

Phase II activities and expected results:
Fabrication and testing of a new instrument including instrument optimization, integration, and increase of throughput to achieve separation of 100 mg or higher CNT materials in a single run. 

On a case-by-case basis, NIST may provide technical experts to work collaboratively with Phase I and Phase II awardees, providing input, discussion, and evaluation of instrument performance in CNT separation, as well as benchmarking against other CNT separation methods.

1. C. Y. Khripin, J. A. Fagan, M. Zheng, "Spontaneous Partition of Carbon Nanotubes in Polymer Modified Aqueous Phases" Journal of the American Chemical Society, 135, 6822, (2013).
Point of Contact:
Mary Clague

Ultra-Sensitive, Wide-Dynamic-Range Cavity Ring-Down Spectroscopy System for Detection of Ozone

The Standard Reference Photometer for Ozone (SRP) has met the need for an ozone standard for National Metrology Institutes (NMIs) and the Environmental Protection Agency (EPA) since 1980. The instrument is based on UV optical spectroscopy and 1980s electronics. The inherent problems with this technology are long-term stability, sensitivity, and noise. Going forward, there is an unmet need for an instrument that provides the stability, sensitivity, and accuracy to serve as an intrinsic standard for ozone, the Primary Standard. The technology could also be simplified for mass production into instruments for field use to monitor ozone in the environment, the secondary standard.

NIST is interested in the development of a new instrument to replace the 1980s-technology SRP with one that has better sensitivity, better stability, and lower noise. This would produce accurate and precise results for the world's NMIs and support the regulatory and measurement needs for ozone. From this molecule, other environmentally important chemicals could also be within reach.

The project goal is to develop an ozone measurement tool able to measure in the range of 0.1 micromole per mole (ppm) to 5000 ppm with a relative uncertainty of less than 0.5%. The instrument should be stable to within 0.5% over one year when reading ozone from a stable source.

The NIST SRP is not capable of measuring ozone below 1 ppm, and thus cannot calibrate field instruments in this range. In order for NIST and other NMIs to support these measurements, a stable and reliable instrument is needed that can measure down to 0.1 ppm or lower. The uncertainty of these measurements must be low enough to accommodate the natural uncertainty expansion of secondary instruments; thus the uncertainty must be 0.5% relative or better.

Other industries also require ozone traceability. Emissions from process streams can be very high and must be monitored. The upper concentration limit of the instrument must accommodate these needs as well. The upper limit of 5000 ppm was chosen to reach a majority of these applications. However, some processes go beyond even this limit, e.g., when ozone is used to sterilize components. Some need for traceability exists for these applications as well; however, it may have to be covered by a different instrument.

Long-term stability is required in order to demonstrate that this instrument can meet the stringent requirements of an intrinsic standard. An intrinsic standard is one that can be defined as a Primary Standard in its own right, owing to the traceability of its signal to known principles that have defined uncertainties and no known biases. Therefore, the concentration can be derived from the signal without reference to an external standard. The cavity ring-down design is one where an intrinsic standard is possible; however, long-term stability must be measured and be as low as possible. A 0.5% relative drift over one year is the maximum allowed drift for the instrument to be useful as an intrinsic standard. The instrument must also remain accurate, so any drift must be random about the true concentration of ozone.
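Cavity ring-down measurements derive concentration from the change in the cavity decay rate with and without absorber, rather than from comparison to an external standard, which is why an intrinsic standard is possible. A minimal sketch of that relation, with purely illustrative ring-down times and an assumed cross section:

```python
# Illustrative numbers only: the ring-down times and the absorption cross
# section below are assumed values, not NIST specifications.
C_CM_S = 2.9979e10   # speed of light, cm/s
SIGMA = 5e-21        # assumed absorption cross section, cm^2
TAU_EMPTY = 1.2e-4   # empty-cavity ring-down time, s (assumed)
TAU = 1.0e-4         # ring-down time with absorber present, s (assumed)

# Absorption coefficient from the two decay rates, then number density.
alpha = (1.0 / TAU - 1.0 / TAU_EMPTY) / C_CM_S   # cm^-1
number_density = alpha / SIGMA                   # molecules per cm^3
```

Every quantity on the right-hand side is either a measured time constant or a fundamental constant (plus a spectroscopic cross section), so no reference gas standard enters the result; this is the traceability argument for an intrinsic standard.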

The primary objective is to design, construct, and test a cavity ring-down spectrometer suitable for measuring 0.1 – 5000 ppm of O3 in air. Because of the need to span these O3 concentrations, the spectrometer must have a wide dynamic range. This can be achieved either by probing different spectral regions of O3 to access both relatively weak and strong absorption cross sections, through gas dilution methods, or by realizing a dynamic range of 50,000:1 or more at a given wavelength. The spectrometer should use robust, commercial single-frequency laser technology, for example distributed feedback diode lasers (DFBs) or external cavity diode lasers in the visible and/or near-infrared regions. Measurements near 600 nm could access the Chappuis band (peak cross section of 5×10⁻²¹ cm²), yielding an absorbance of ~10⁻⁶ for a 75-cm-long cavity given 0.1 ppm of O3 in air. This absorbance level would be ~750 times greater than the estimated detection limit assuming standard low-loss mirrors (20 ppm) and 0.02% relative uncertainty in the measured time constant. However, at this wavelength, measurements on 5000 ppm of O3 would require that the sample be diluted by a factor of ~500 to ensure that the sample is not optically thick. Alternatively, or in parallel, the spectrometer could be operated near a wavelength of 1.8 µm, which would give a peak absorbance of ~1.5×10⁻⁵ for 5000 ppm and ~3×10⁻¹⁰ at 0.1 ppm. The small absorbance corresponding to 0.1 ppm would be challenging to measure, but it is within demonstrated cavity ring-down detection limits and would mean that dilution would not be required.
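As a sanity check on the ~10⁻⁶ absorbance figure quoted above for 0.1 ppm of O3 in a 75-cm cavity near 600 nm, the single-pass absorbance is simply mole fraction × air number density × cross section × path length. The air number density here is an assumed round value for roughly 1 atm at room temperature; the cross section is the one given in the text:

```python
# Single-pass absorbance estimate for 0.1 ppm O3, Chappuis band, 75-cm cavity.
N_AIR = 2.5e19    # molecules/cm^3, assumed for ~1 atm at room temperature
SIGMA = 5e-21     # cm^2, Chappuis-band peak cross section (from the text)
X_O3 = 0.1e-6     # mole fraction (0.1 ppm)
L_CM = 75.0       # cavity length, cm

absorbance = X_O3 * N_AIR * SIGMA * L_CM
```

The product comes out just under 10⁻⁶, consistent with the estimate in the solicitation.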

Phase I activities and expected results:

Phase I involves the design and construction of a laboratory table-top instrument. The principal Phase I deliverable should demonstrate O3 detection over the 0.1 – 5000 ppm range with one integrated instrument.

Phase II activities and expected results:
Phase II involves the development of a fully integrated prototype spectrometer. This system should be rack-mountable, incorporate fiber-based components when possible, and be temperature-regulated (0.01 K maximum variation) to ensure long-term stability. The final prototype will include all necessary optical, gas-handling, instrument-control, and data-acquisition systems.

NIST has extensive experience in the development and application of cavity ring-down spectroscopy for quantitative measurements of gaseous species. On a case-by-case basis, NIST may provide technical experts to work collaboratively with Phase I and Phase II awardees through on-site training and by using NIST resources to provide critical data and implement experiments to support the effort. NIST may also provide extensive consultation to ensure that the awardee is knowledgeable about the existing technology and aware of the most advanced techniques in cavity ring-down spectroscopy.

1. Potentially relevant NIST IP: U.S. Patent # 6,727,492 issued 04-27-2004.
Point of Contact:
Mary Clague

9.05: Technology Transfer : NIST Tech Transfer

NIST has numerous technologies that require additional research and innovation to advance them to a commercial product. The goal of this SBIR subtopic is for small businesses to advance NIST technologies to the marketplace. The Technology Partnership Office at NIST will provide the Awardee with a no-cost research license for the duration of the SBIR award. When the technology is ready for commercialization, a commercialization license will be negotiated with the Awardee. 

Applications may be submitted for the development of any NIST-owned technology that is covered by a pending U.S. non-provisional patent application or by an issued U.S. patent. Available technologies can be found on the NISTTech website and are identified as “available for licensing” under the heading “Status of Availability.” Some available technologies are described as available only non-exclusively, meaning that other commercialization licenses may already exist or that the technology is a joint invention between NIST and another institution. More information about licensing NIST technologies is available at

The technical portion of an application should include a technical description of the research that will be undertaken. Within this technical portion, the applicant should also provide a brief description of a plan to manufacture the commercial product developed using the NIST technology. Absence of this manufacturing plan will result in the application being less competitive.

Point of Contact:
Mary Clague