NIST SBIR 2016 Phase I
NOTE: The Solicitations and topics listed on this site are copies from the various SBIR agency solicitations and are not necessarily the latest and most up-to-date. For this reason, you should use the agency link listed below which will take you directly to the appropriate agency server where you can read the official version of this solicitation and download the appropriate forms and rules.
The official link for this solicitation is: http://www.nist.gov/tpo/sbir/upload/FY16-Phase-I-SBIR-FFO-final.pdf
Available Funding Topics
- 9.01: Advanced Sensing for Manufacturing
- 9.01.01: Absolute Interferometry with Nanometer Precision
- 9.01.02: Design of Fiber-coupled Waveguide Difference Frequency Generation Devices
- 9.01.03: High-Accuracy Angle Generator for Precision Measurements
- 9.01.04: High-Density Cryogenic Probe Station
- 9.01.05: High Temperature In Situ Pressure Sensor
- 9.01.06: Iron Corrosion Detection Technology Using THz Waves: A field-operable Unit Based on NIST Spectroscopic Technology
- 9.01.07: Object Identification and Localization via Non-Contact Sensing for Enhancing Robotic Systems in Manufacturing Operations
- 9.01.08: Pre-Concentration Technology for Analysis of Halocarbon Gases at Trace Levels
- 9.01.09: Quantitative Magnetometry of Single Nanoparticles with High Throughput
- 9.02: Biomanufacturing
- 9.03: Cryptography and Privacy
- 9.04: Cyber Physical Systems
- 9.05: Lab to Market
- 9.06: Materials Genome
- 9.07: Quantum-based Sensors and Measurements
Absolute length metrology with improved repeatability and uncertainty is needed for advanced manufacturing and other applications, including coordinate measuring machine/computer numerical control (CMM/CNC) calibration and positioning [1], laser materials processing, medical materials quality assurance, gauge block calibration, and optical and UV metrology of free-form optics [2,3]. There is also a need in some metrology systems, including atomic force microscopes (AFM) [4] and scanning electron microscopes (SEM), to measure the drift in stage or probe-related components relative to the metrology loop of the instrument. Absolute interferometry can remove the ambiguity of classical interferometry without the need for continuous monitoring of a displacement. Specialized research needs can benefit from absolute distance measurement with nanometer-level accuracy. In a commercial setting, classical interferometry cannot satisfy needs such as measuring rough or high-relief surfaces, discontinuous steps, or thickness, or handling reflections from multiple interfaces; absolute interferometry potentially can overcome all of these limitations.
The goal is affordable, accurate, and rapid absolute interferometric length measurement with improved repeatability and uncertainty, for precision measurement or research applications. Desired is development of an instrument that can absolutely measure the distance between two parallel surfaces with resolution and accuracy as high as possible: better than 10 nm for measuring short lengths and better than 1 part in 10^7 for longer lengths (ideally up to 500 mm), exclusive of uncertainties associated with air refractive index (i.e., better than 1 part in 10^7 when measuring a separation in vacuum). The parallel surfaces may be pointing in the same direction but laterally separated (such as in a gage block measurement), or they might be facing each other but made of transparent material allowing transmission of a laser beam, as in the case of optical thickness measurement.
Measurement update rates should exceed 1 kHz. For metrology of rough surfaces, such as many unpolished metals, the repeatability and uncertainty should approach the material surface roughness. Moreover, the solution should be able to meet specifications on both shiny and dull surfaces, achieve diffraction-limited lateral resolution, and accommodate co-alignment with a processing beam for cases such as laser materials processing (welding, drilling, cutting).
Phase I expected results:
Experimentally prove the feasibility of a non-contact length metrology system that can provide measurement repeatability and expanded uncertainty better than 10 nm + (1×10^-7)·L, where L is the measured length (surface separation) up to 500 mm, at >1 kHz update rates, exclusive of uncertainties associated with air refractive index. The system should also be able to measure non-specular surfaces to within approximately the surface roughness.
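For orientation only (a hedged illustration, not an additional requirement), the target expanded uncertainty U(L) = 10 nm + (1×10^-7)·L can be evaluated at a few example lengths; the lengths in the sketch below are chosen purely for illustration.

```python
# Illustrative only: evaluate the Phase I expanded-uncertainty target
# U(L) = 10 nm + (1e-7) * L, exclusive of air refractive index effects.
def target_expanded_uncertainty_nm(length_mm):
    """Return the targeted expanded uncertainty in nm for a measured length in mm."""
    length_nm = length_mm * 1.0e6        # 1 mm = 1e6 nm
    return 10.0 + 1.0e-7 * length_nm     # 10 nm fixed term + 1 part in 1e7 of L

for L_mm in (1.0, 100.0, 500.0):         # example lengths (illustrative choices)
    print(f"L = {L_mm:6.1f} mm -> U <= {target_expanded_uncertainty_nm(L_mm):.1f} nm")
# At the 500 mm limit the target works out to 10 nm + 50 nm = 60 nm.
```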
Phase II expected results:
Design, construct, and demonstrate a prototype of the non-contact length metrology system for which the feasibility was proven in Phase I.
NIST staff will be available for consultation, input, and discussion.
NIST seeks to determine the technical feasibility of fiber-coupled waveguide devices for the highly efficient (≥10% W^-1) difference frequency generation (DFG) of mid-infrared laser radiation. The proposed compact photonic device would enable the deployment of optical sensors that operate in the important “molecular fingerprint” region of the electromagnetic spectrum (3000-5000 nm), thus meeting present and future demands for air-quality monitoring, gas metrology, and atmospheric monitoring. The demonstration of highly efficient fiber-coupled waveguide devices for frequency conversion, specifically at a wavelength ≈4500 nm, would allow the transfer of mature frequency-agile rapid scanning technology from the telecommunication bands into the mid-infrared, where strong molecular transitions promise ultrasensitive detection limits. Compact mid-infrared optical sensors with frequency agility would be of great interest to various stakeholders within the U.S. gas sensors market, which is expected to increase to $550 million by 2018 [1].
The goal of this subtopic is to determine the technical feasibility of fiber-coupled waveguide devices for highly efficient DFG of mid-infrared radiation through proof-of-concept demonstrations. The frequency conversion process performed by these proposed devices should create an output photon with frequency f_out = f_1 - f_2, where f_1 and f_2 are unique input laser frequencies in the near-infrared. Specifically, NIST seeks to determine the feasibility of a waveguide device capable of creating free-space continuous-wave (cw) radiation at an output wavelength of 4530 nm from the combined fiber-coupled input of cw lasers operating at 1572 nm and 1167 nm, respectively. Beyond a fiber-coupled waveguide device for DFG of coherent 4530 nm radiation, NIST has identified a long-term need for fiber-coupled waveguide devices (either narrowband or broadly tunable) that cover the entire mid-infrared region from 3000-5000 nm.
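For orientation, the requested output wavelength follows from energy conservation in DFG: 1/λ_out = 1/λ_1 - 1/λ_2. The short sketch below (illustrative only) checks that the two stated input wavelengths combine to approximately 4530 nm.

```python
# Illustrative check of the DFG energy-conservation relation:
# f_out = f_1 - f_2 is equivalent to 1/lambda_out = 1/lambda_1 - 1/lambda_2.
lambda_1_nm = 1167.0   # shorter-wavelength (higher-frequency) input
lambda_2_nm = 1572.0   # longer-wavelength (lower-frequency) input
lambda_out_nm = 1.0 / (1.0 / lambda_1_nm - 1.0 / lambda_2_nm)
print(f"DFG output wavelength ~ {lambda_out_nm:.0f} nm")   # ~4530 nm, in the mid-infrared
```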
Phase I expected results:
Report on the feasibility and design of a proof-of-concept fiber-coupled waveguide device for DFG with ≥10% W^-1 conversion efficiency.
Phase II expected results:
Construct prototype waveguide devices and demonstrate their highest achievable conversion efficiency at an output wavelength of 4530 nm.
NIST may be available to provide technical guidance, work collaboratively on design concepts, discuss goals, and aid in prototype evaluation.
Reference:
[1] Research and Markets: “U.S. Gas Sensors Market 2015-2020 - Growth, Trends & Forecasts for $550 Million Industry”, Reuters Press Release, June 10, 2015.
More accurate angle generators would allow NIST and other metrology laboratories to lower uncertainty in high-precision angle measurements. These generators would provide a needed tool for NIST and other world-leading metrology laboratories to understand how surface flatness affects the measurement of angle using autocollimators, the de facto tool for high-accuracy angle measurements at NIST. These effects are the limiting uncertainty component in angle measurements that support R&D in critical technology-intensive sectors.
X-ray studies of the atomic structure of materials, biological molecules, etc. at synchrotron light sources are currently limited by the quality of the x-ray spot focused onto the specimen. The quality of the focused spot is determined by the form accuracy of the mirrors used to focus the beam, typically a pair of elliptically shaped mirrors used at grazing incidence. Improvements to the form accuracy of these mirrors are limited by the current abilities of metrology techniques. Specifically, improvements to the measurement of the local surface slope are needed. Metrologists at these synchrotron light sources are now using an autocollimator-based scanning technique to measure the surface profile of large (up to 1.5 m length), curved mirrors [1,2]. Effects of the curvature of the surface under test on the accuracy of the autocollimator used to measure the local surface slope need to be characterized to achieve the required tolerances of the mirrors.
Next-generation photonic devices incorporate various components whose geometry must be well characterized. NIST is currently measuring the angular attributes of these artifacts for industrial customers using autocollimators. Uncertainty components due to non-flat surfaces in these measurements are not well understood, and published reports in this area are not exhaustive [3].
The goal is to develop precision angle generators with accuracies better than those of currently available commercial instruments. It is desired that the angle generator be able to accommodate mirrors with clear apertures between 2 mm and 35 mm that are up to 300 mm × 50 mm × 50 mm in size (i.e., the approximate size of curved mirrors used at synchrotron light sources).
Phase I expected results:
Experimentally prove the feasibility of fabricating an angle generator with an expanded uncertainty (k=2) of less than 0.01 arc-seconds over an angular range of 2.5 degrees.
Phase II expected results:
Provide a prototype automated precision angle generator with a rigorously documented uncertainty budget demonstrating that the target uncertainty requirements, an expanded uncertainty (k=2) of less than 0.01 arc-seconds over an angular range of 2.5 degrees, are met.
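For context, an illustrative conversion of the target uncertainty (not an additional requirement): 0.01 arc-second corresponds to roughly 48 nanoradians, about one part in 10^6 of the 2.5-degree range. The short sketch below shows the arithmetic.

```python
import math

# Illustrative unit conversion for the target uncertainty (not an added requirement).
u_arcsec = 0.01                                  # target expanded uncertainty (k=2), arc-seconds
range_deg = 2.5                                  # angular range, degrees
u_rad = math.radians(u_arcsec / 3600.0)          # arc-seconds -> degrees -> radians
range_arcsec = range_deg * 3600.0                # 2.5 degrees = 9000 arc-seconds
print(f"0.01 arc-second = {u_rad * 1e9:.1f} nrad")                        # ~48.5 nrad
print(f"relative to the 2.5 degree range: {u_arcsec / range_arcsec:.1e}")  # ~1.1e-06
```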
NIST will be available for consultation and collaboration as necessary.
References:
[1] Geckeler, R.D., Just, A., Krause M. and Yashchuk, V.V. “Autocollimators for deflectometry: Current status and future progress”, Nucl. Instrum. Methods Phys. Res. A 616 (2010) 140–146.
[2] Yandayan, T., Geckeler, R.D. and Siewert, F. “Pushing the limits – latest developments in angle metrology for the inspection of ultra-precise synchrotron optics”, Proc. SPIE 9206, Advances in Metrology for X-Ray and EUV Optics V, 92060F (2014).
[3] Kruger, O.A. “Methods for determining the effect of flatness deviations, eccentricity and pyramidal errors on angle measurements”, Metrologia 37 (2000) 101–105.
Electrical probe stations are ubiquitous tools in the semiconductor electronics and data storage industries [1-6]. These instruments enable the probing of electrical properties of microfabricated electronics on silicon wafers or other planar substrates. This probing is used to determine whether the microfabrication was successful; if so, the silicon wafers are then cut into smaller pieces called dies that are packaged and integrated into more complex electronic assemblies. Typically, a silicon substrate contains many identical dies, so the electrical probes are translated over the substrate, aligned to the relevant features, placed in contact with the substrate, used for measurements, and then translated to the next die location. Probe station technology is well developed for electronics that function at room temperature. In particular, so-called probe cards allow a large number (hundreds) of temporary electrical connections to be made to a substrate using only mechanical pressure. However, there is an unmet need for a probe station with numerous closely packed electrical probes that can operate at temperatures near 4 K.
In recent years, the need for and variety of electronics that operate at temperatures near 4 K have expanded greatly. Examples include sensors for industrial materials analysis, nuclear security, concealed weapons detection, and astrophysics. The next generation of instruments to study the cosmic microwave background, for instance, may require tens to hundreds of silicon wafers containing cryogenic circuitry. Another example is classical computing using high speed, low power superconducting elements. Still another example is quantum computing using novel circuit components also based on superconducting films. Both research and commercial activity based on cryogenic electronics are growing. In order to aid the manufacture of cryogenic electronics, NIST is soliciting proposals for the development of a probe station optimized for this emerging market area.
Cryogenic electronics must be tested at low temperatures near their planned operating temperatures. Testing after microfabrication but before dicing and integration can save manufacturers and customers the enormous expense of packaging, shipping, cooling, and attempting to use flawed electronic components.
While cryogenic probe stations are already commercially available, these units do not have performance suitable for emerging applications. For example, niobium is a crucial material in cryogenic superconducting electronics. The transition temperature of niobium is 9.2 K and devices containing niobium must be probed at temperatures well below this value in order for the tests to accurately predict device behavior. Hence, the silicon substrate being tested should be at a temperature near 4.2 K or colder. Existing cryogenic probe stations are not able to achieve temperatures this low for the large substrates (up to 150 mm in diameter) that are now used to make superconducting circuits. As the complexity of cryogenic electronics has increased, so too has the number of circuit elements that need to be probed on a single die. However, existing cryogenic probe stations have only small numbers of probes (typically less than 10) that are physically large and therefore cannot be used to contact the closely spaced features that are increasingly used in cryogenic electronics. Further, existing probes are often optimized for much higher signal bandwidth than is now needed for basic tests of circuit functionality. Finally, these probes often contact the substrate under test from a warmer temperature stage and therefore are a major heat load that makes temperatures near 4.2 K difficult or impossible to achieve.
To aid the manufacture of cryogenic electronics for sensing and computing, NIST seeks proposals for a high-density cryogenic probe station that meets the following technical goals:
- Sample cooling to 4.5 K or below. This value refers to the temperature of the substrate under test and not the temperature of the underlying metal. Use of a mechanical cryocooler is preferred, but liquid or gaseous helium is also acceptable.
- Rapid cooling and warming are desirable. A cool-down time from 300 K to base temperature below 2 hours is preferred. A warm-up time from base temperature to 300 K below 1 hour is preferred.
- Compatibility with substrates up to 150 mm in diameter.
- The ability to simultaneously make 100 or more electrical contacts to a die under test. Contacts to be made using mechanical pressure only, not wirebonding or other contact schemes that mechanically alter the test substrate.
- Electrical contacts must be pre-cooled at the cold stage of the probe station before contacting the substrate under test so as to preserve a sample temperature below 4.5 K.
- Electrical contacts should be low resistance, with a best-effort goal of 10 milliohm contact resistance.
- Electrical contacts should also be high density, with a best-effort goal of center-to-center pitches as small as 150 µm. Metallic contact features on the substrate are expected to be as small as 100 µm in diameter. The mechanical pattern of the contacts can be fixed so long as it is reconfigurable via use of alternate probe cards.
- Electrical contacts to be compatible with signal bandwidths below 500 kHz.
- Mechanical provisions to move the electrical contacts over the complete substrate under test while cold in order to probe multiple identical contact patterns on the substrate.
- Optical or other provisions to align the electrical connections to the contact pattern on the substrate.
- Provisions at room temperature to perform basic electrical measurements (continuity, current-voltage curves, etc.) among any combination of the electrical contacts.
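As a hedged sketch of the room-temperature electrical checks called for in the last requirement above, the following illustrative code screens all pin pairs for unintended shorts; the pin count, threshold, and simulated measurement are placeholder assumptions, not specifications.

```python
import itertools
import random

# Illustrative sketch only (not a specified deliverable): screen pairwise continuity
# among the probe contacts at room temperature. The simulated measurement below is a
# placeholder for the driver of whatever source-measure unit is actually used.
N_CONTACTS = 100              # assumed pin count, consistent with the >=100 contact goal
SHORT_THRESHOLD_OHM = 10.0    # illustrative threshold for flagging unintended shorts

def measure_resistance(pin_a, pin_b):
    """Placeholder measurement: returns a simulated pair resistance in ohms."""
    return random.uniform(1.0, 1.0e7)     # stand-in for a real hardware reading

def find_shorts(n_contacts=N_CONTACTS):
    """Return all pin pairs whose resistance falls below the short-circuit threshold."""
    return [(i, j) for i, j in itertools.combinations(range(n_contacts), 2)
            if measure_resistance(i, j) < SHORT_THRESHOLD_OHM]

print(f"suspect shorted pairs: {len(find_shorts())}")
```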
Phase I expected results:
Develop a mechanical and electrical design that addresses the project goals described above.
Phase II expected results:
Construct a prototype high-density cryogenic probe station that is able to achieve the project goals described above.
NIST personnel will be available to assist the awardee in a variety of ways, including but not limited to:
- consulting on instrument design including participation in design reviews,
- sharing the results of NIST research to develop high-density probe cards suitable for cryogenic applications, including prototypes, with the awardee, and
- fabricating and providing at no cost substrates up to 150 mm in diameter with metal test patterns that can be used by the awardee to demonstrate cryogenic probing.
References:
[1] Applications for superconducting electronics: http://www.scientificcomputing.com/news/2014/12/iarpa-develop-superconducting-supercomputer.
[2] http://arxiv.org/pdf/1309.5383v3.pdf (for the motivation for arrays of 500,000 superconducting sensors).
[3] Examples of existing cryogenic probe stations: http://www.lakeshore.com/products/cryogenic-probe-stations/pages/cryogenic-probe-stations.aspx.
[4] Micro-manipulated Cryogenic & Vacuum Probe Systems for Chips, Wafers and Device Testing from ~3.5 K to 675 K, http://www.janis.com/ProbeStations_Home_KeySupplier.aspx.
[5] Examples of existing high-density probe cards for room temperature operation: https://www.cmicro.com/products/probe-cards.
[6] Cantilever Probe Cards - Well Beyond the State of the Art, http://www.technoprobe.com/cantilever-probe-card/.
Any mention of commercial products is for information only; it does not imply recommendation or endorsement by NIST.
There is a need to measure pressure accurately. Chemical manufacturers need process sensors that are able to monitor changes in manufacturing systems. These sensors need to have low uncertainty and high sensitivity to change. Accurate pressure measurements in the chemical manufacturing industry are often necessary to keep manufacturing processes safe. Sometimes these changes affect other parameters, such as a flow measurement, and must be tracked to maintain good manufacturing processes. At NIST, the need for highly accurate pressure measurements is prominent when determining the thermophysical properties of fluids, especially because these measurements lead to the development of theoretical models for industry. NIST researchers have developed methods to achieve better-than-quoted uncertainty in today’s pressure transducers through good practices, but commercially available options remain limited. In order to meet the high standards NIST has for metrology measurements, we seek a high temperature in situ pressure sensor that achieves better resolution than the pressure transducers available today.
The goal of this SBIR subtopic is to develop an in situ pressure sensor for fluid systems that operate up to 200 ˚C and pressures up to 7 MPa. Here, “in situ” means that the sensor is either attached directly to the system being measured (e.g., attached to a standard pipe fitting) or in very close proximity to the system; in either case it would be at the same temperature as the system. The sensor shall have excellent temperature stability and well-controlled drift, have a small wetted volume (less than 5 mL), and be manufactured from materials that are highly corrosion resistant. On the market today, there are sensors that can reach the desired temperature of 200 ˚C, but these sensors often have large volumes or high uncertainty. The desired pressure sensor is expected to have a small volume to allow easy coupling to a variety of precision measurement systems at NIST [1,2], as well as to serve as a process sensor in the chemical industries.
NIST envisions at least two basic design approaches, and either would be acceptable, as would other proposed designs. In the first approach, the sensor would directly measure the pressure and transmit a signal to a control computer. In the second basic approach, the sensor would measure the difference between the pressure of the fluid system and a reference pressure. In this embodiment, the reference pressure would be that of an inert gas or a hydraulic fluid, which would then be measured by a conventional pressure sensor that would be located remotely, e.g., at ambient temperature.
NIST is interested in a system with the following performance metrics:
· The pressure sensor shall operate in thermostated conditions of –70 ˚C to 200 ˚C.
· The pressure measurement must reach equilibrium in a reasonable time (< 5 minutes).
· The uncertainty in pressure must be equal to or better than that of current pressure transducers, i.e., 0.7 kPa or 0.01% of range (an illustrative consistency check follows this list). The uncertainty shall include all effects, including hysteresis with increasing or decreasing pressures, compensation for temperature over the full operating range, and drift in the zero point.
· The measurement pressure range shall be ambient up to 7 MPa. The sensor shall provide a direct pressure measurement up to 7 MPa or be able to withstand differential pressures up to 7 MPa (if a differential pressure measurement).
· Electronic signals (both raw signals and computed pressure) shall be accessible to the user through a standard interface (e.g., USB, IEEE-488, or RS-232).
· Any electronic coupling within the thermostated area (i.e., wiring, connectors) shall be temperature compatible up to 200 ˚C. Other components (such as read-out electronics) may operate at room temperature.
· All wetted parts of the sensor shall be fabricated of corrosion-resistant materials.
· The sensor should be able to be calibrated and maintain its calibration with minimal drift. It is desired that NIST scientists be able to calibrate it at regular intervals.
· Drift: The sensor must meet the uncertainty specification with calibration intervals of no more than 4 months.
· Hysteresis: Any hysteresis associated with changes in temperature or pressure must fall within the overall uncertainty specification.
· The internal volume shall be less than 5 mL.
· The overall size of the sensor within the thermostated zone shall be 1 L or less. Electronics, however, may reside outside of this area.
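As a short consistency check of the uncertainty target in the bullet above (illustrative arithmetic only, not an additional requirement), 0.01% of the 7 MPa range equals the stated 0.7 kPa:

```python
# Illustrative arithmetic only: relate the stated uncertainty target to the 7 MPa range.
full_scale_pa = 7.0e6        # 7 MPa measurement range
u_target_pa = 0.7e3          # 0.7 kPa uncertainty target
print(f"0.01% of range = {1.0e-4 * full_scale_pa:.0f} Pa")                        # 700 Pa = 0.7 kPa
print(f"relative uncertainty at full scale = {u_target_pa / full_scale_pa:.1e}")  # 1.0e-04
```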
Phase I expected results:
Provide a complete design of the pressure sensor. It is expected that CAD drawings of the sensor will be produced. It is also expected that theory and calculations relevant to the sensor function will be explained and provided. In the Phase I final report, the awardee shall state the expected value for each of the metrics above and describe how the sensor was designed to meet it.
Phase II expected results:
Construct a fully-functional, tested prototype. The prototype shall be cycled over the full range of temperature and pressure at least 10 times to demonstrate the stability and performance metrics and the data from these tests shall be provided. Documentation on drift, stability and chemical stability shall also be made available. Each metric in the bulleted list above must be measured and addressed. The prototype will be made available to NIST for testing prior to the end of the SBIR Phase II award.
NIST staff is willing to participate in discussions and provide input on the awardee’s design during the development process through email, teleconference or face-to-face visits. NIST is also willing to do testing, although the awardee will not be able to count this testing towards the testing requirements of Phase II. NIST scientists may be available for demonstration of the device at the awardee’s home site, if desired.
References:
[1] Outcalt, S. L. and Lee, B.-C. “A Small-Volume Apparatus for the Measurement of Phase Equilibria”, J. Res. Natl. Inst. Stand. Technol. 2004, 109 (6), 525−531.
[2] McLinden, M. O. and Lösch-Will, C. “Apparatus for wide ranging, high-accuracy fluid (p, ρ, T) measurements based on a compact two-sinker densimeter”, J. Chem. Thermodyn. 2007, 39, 507-530.
Corrosion of steel costs the U.S. several hundred billion dollars per year. Early detection of this corrosion, most of which is buried under some kind of protective coating such as concrete or polymers, would reduce remediation costs and improve the safety of infrastructure and factories. Current detection methods cannot identify a particular corrosion product, only the presence of something, especially at early stages when little corrosion is present. NIST ran a Corrosion Detection Innovative Measurement Science project from 2010-2014, in which a 0.1-1 THz wave technology based on antiferromagnetic resonance detection was successfully developed. Two of the most common iron corrosion products, hematite and goethite, are antiferromagnetic and thus can be detected by this method: hematite through several centimeters of concrete and goethite through polymer layers. This laboratory-based technology needs to be taken to the field in order to have practical commercial use. Discussions with government and industry indicate a large need for this technology, and a large commercial market apparently exists as well. NIST desires to see this technology commercialized, which involves some technical challenges in translating the laboratory technology into the field.
The goal of this project is to demonstrate a field-operable measurement system, using 0.1-1 THz waves, that identifies the presence of the iron corrosion compounds hematite and goethite under a variety of protective coverings, including concrete (hematite) and polymers (hematite and goethite). NIST has demonstrated this technology in the laboratory; the goal is to move this technology into the field for application to corrosion detection problems in factories and physical infrastructure. The ability to detect specific iron corrosion products through protective barriers is an unmet need in the U.S.
Phase I expected results:
Demonstrate the feasibility of taking the laboratory-based technology to the field, with identification of the technical problems that need to be solved and the current equipment that is available for doing so. A plan for how the field equipment would be operated and a list of sample applications are part of the Phase I expected results.
Phase II expected results:
Demonstrate, in the field and on an important application, a portable antiferromagnetic-resonance-based THz system for corrosion detection of hematite and goethite. This system should be suitable for commercialization.
NIST may be available to provide technical guidance, comments and advice on design concepts, discussion of technical problems, and previous lab data.
This project seeks to advance new low-cost, non-contact sensing technologies for identifying and localizing manufactured-part types of objects to improve the adaptability and ease of use of robots in manufacturing. Quantitative assessment of system performance is an essential project component.
Robots are expensive and complex. With steep learning curves and high adoption costs, small- and medium-sized enterprises (SMEs), which account for 98% of all manufacturing entities, find it difficult to justify including robots in everyday operations. Despite this difficulty, robot installations are expected to increase by 12% on average per year between 2015 and 2017. This sustained increase in adoption is expected to occur principally in the automotive, electronics, and materials sectors, where manufacturing applications involve structured environments with high-volume throughput and low mixtures or variations of parts. However, the International Federation of Robotics states that future product life-cycles will decrease alongside an increase in product variety. Unfortunately, the cost of automation is expected to grow exponentially in these low-volume, high-mixture part environments. Therefore, the new era of factory automation requires the adoption of a different technological paradigm in which robotic systems can quickly adapt and reconfigure for high variations in parts. Many key technologies are required to unlock these desirable qualities in robotic systems, including “human-like” dexterity and robust perception. This topic focuses on robust perception technology for part identification and localization [1-7].
A cornerstone of robot adaptability is perception. Like people, robots need to see and locate parts in the environment to interact with them efficiently. Using expensive tooling, current high-volume manufacturing robots operate under the assumption that parts are predictably placed, and therefore the robot does not need to perceive the object. This automation structure runs counter to environments with low-volume, high-mixture parts, where automatic part-finding capabilities using non-contact sensing such as cameras and laser scanners become necessary. To adapt, robots need to quickly and accurately identify and locate parts in their environment so that they can make informed operational decisions.
Despite much progress, robotic object perception has yet to reach capability levels that are both robust and easy-to-use. Seamless integration necessitates a perception system capable of variable part identification and localization in six degree-of-freedom Cartesian space without the aid of reference markers or other specialized indicators in the environment. However, use of Computer Aided Design (CAD) model data is justifiable since all parts are assumed known in a manufacturing environment. These perception systems should leverage calibration and registration techniques that are conducive to the re-configurable and re-tasking environments associated with SMEs. Moreover, the solution must be comparatively low-cost, using commercial off-the-shelf hardware to make perception solutions easily accessible for SMEs. Finally, the system should be robust to lighting conditions, surface properties, part geometries, and occlusions. Performance levels should be reported using clear and concise verification and validation metrics and methodologies. Given that these specifications can be met, robot perception would help reduce integration costs, yielding conditions for improving adoption rates by SMEs.
The goals for this project are three-fold: 1) develop a non-contact perception technology, conducive to the reconfigurable and frequently re-tasked environments associated with SMEs, that is capable of identifying and localizing a variety of part types (e.g. spheres, cuboids, prisms, cylinders, gears, fasteners, hand tools) in six degrees of freedom (leveraging part CAD data to improve system performance is acceptable), 2) benchmark the performance measurement of said technology under a formal approach such as ASTM E2919-14 “Standard Test Method for Evaluating the Performance of Systems that Measure Static, Six Degrees of Freedom (6DOF) Pose,” and 3) work with NIST in constructing new test methods to benchmark robotic system performance for smart manufacturing using the developed perception technology.
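As a hedged sketch of the kind of pose-error statistics such benchmarking would report, the code below computes translation and rotation errors between a measured and a ground-truth 6DOF pose using common conventions; it is illustrative only and is not a restatement of ASTM E2919-14.

```python
import numpy as np

# Illustrative sketch: translation and rotation error between a measured 6DOF pose and
# ground truth, using common conventions (not a restatement of ASTM E2919-14).
def pose_error(R_meas, t_meas, R_true, t_true):
    """Return (translation error, rotation error in radians) between two poses."""
    t_err = float(np.linalg.norm(t_meas - t_true))
    R_rel = R_meas @ R_true.T                        # relative rotation
    # Angle of the relative rotation from its trace, clipped for numerical safety.
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return t_err, float(np.arccos(cos_theta))

# Tiny usage example with made-up numbers.
R_true = np.eye(3)
t_true = np.zeros(3)
theta = np.deg2rad(1.0)                              # 1 degree rotation error about z
R_meas = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_meas = np.array([0.002, 0.0, 0.0])                 # 2 mm translation error
print(pose_error(R_meas, t_meas, R_true, t_true))    # ~ (0.002, 0.01745)
```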
NIST is currently developing test methods and metrics for measuring robot system performance in next-generation manufacturing environments. Meeting these project goals will foster innovation in robot perception and aid in the development of performance tests by applying state-of-the-art technology to challenging, dynamic manufacturing operations.
Phase I expected results:
Develop a prototype non-contact perception technology for identifying and localizing parts, with methodical documentation of performance and test conditions (e.g., ASTM E2919-14).
Phase II expected results:
Demonstrate refined perception performance, and provide the necessary tools for simplifying its integration and ease of use for non-expert commercial SME users. Develop task-level metrics and test methods that empirically convey the performance and significance of perception in SME environments. Demonstrate a hardened product ready for adoption by industry and commercialization in the marketplace.
NIST will be available to work collaboratively, exercising expertise in the modularity and re-tasking requirements of SMEs and measurement science for perception systems to ensure successful completion of project goals.
References:
[1] ASTM E2919-14 “Standard Test Method for Evaluating the Performance of Systems that Measure Static, Six Degrees of Freedom (6DOF) Pose”, [Available from: http://www.astm.org/Standards/E2919.htm].
[2] Falco, J., Marvel, J. and Messina, E. “Dexterous Manipulation for Manufacturing Applications Workshop”, NISTIR 7940, June 2013.
[3] International Federation of Robotics, Industrial Robot Statistics, 2014. [cited: 2015, Available from: http://www.ifr.org/industrial-robots/statistics/].
[4] Marvel, J., Messina, E., Antonishek, B., Van Wyk, K. and Fronczek, J, “Tools for Robotics in SME Workcells: Challenges and Approaches for Calibration and Registration”, NISTIR, under review.
[5] Computing Community Consortium, Workshop on Opportunities in Robotics, Automation, and Computer Science, 2014.
[6] Barajas, L.G., Thomaz, A.L. and Christensen, H.I. “Ushering in the Next Generation of Factory Robotics and Automation”, in National Workshop on Challenge to Innovation in Advanced Manufacturing: Industry Drivers and R&D Needs, 2009.
[7] Knight, W. “Increasingly, robots of all sizes are human workmates”, in MIT Technology Review, 2014.
Halocarbons are widely used in the manufacture of semiconductor and related nanoscale devices. These compounds are also greenhouse gases that can contribute significantly to warming of the atmosphere. Although emitted from manufacturing processes in relatively small quantities, fluorinated halocarbon gases contribute approximately 3% of the roughly 6,670 million metric tons of greenhouse gases emitted to the atmosphere.
Measuring these gases in the atmosphere is challenging due to their very low concentrations, typically in the picomole/mole range or lower. In recent years there have been major advances in instrumental methods to measure the concentrations and isotopic compositions of the fluorinated gases in air and in other gaseous media by optical (e.g., cavity ring-down spectroscopy, Fourier transform infrared spectroscopy), mass-spectrometric (e.g., quadrupole, magnetic sector, time-of-flight), and other types of detection methods. These advances can be exploited more fully through sample pre-concentration prior to introduction into the measuring instrument of choice. In some cases, separation from interfering compounds can also be accomplished in the pre-concentration step, significantly improving detection and quantification capabilities. Sample pre-concentration enables detection and measurement of fluorinated gases present in the atmosphere at concentrations that are currently difficult to reach (picomole/mole and below), over collection times that can facilitate identification of the emission source and estimation of the quantity released.
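To illustrate the scale of the challenge (back-of-the-envelope arithmetic under ideal-gas assumptions, not a solicitation requirement), the absolute amount of analyte present at a 1 picomole/mole amount fraction in a modest air sample is extremely small, which is why pre-concentration is needed:

```python
# Back-of-the-envelope estimate (ideal-gas assumptions; illustrative only) of how little
# analyte is present at a 1 picomole/mole amount fraction, motivating pre-concentration.
R = 8.314            # J/(mol K), molar gas constant
T = 293.15           # K, ~20 degrees C
P = 101325.0         # Pa, ~1 atm
sample_volume_L = 1.0
air_mol = P * (sample_volume_L * 1e-3) / (R * T)     # moles of air in the sample
analyte_mol = 1e-12 * air_mol                        # at 1 pmol/mol
print(f"air in sample: {air_mol:.4f} mol")           # ~0.0416 mol
print(f"analyte: {analyte_mol:.2e} mol (~{analyte_mol * 6.022e23:.1e} molecules)")
```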
For atmospheric monitoring applications, the goal of this project is to demonstrate a fieldable prototype pre-concentration methodology, suitable for use with commercially available detection technologies (e.g., quadrupole mass spectrometry or similar), for 20 or more fluorinated gases used in the manufacture of advanced devices at the 1 picomole/mole level or below.
Phase I expected results:
Demonstrate a laboratory prototype capability for at least 5 fluorinated gases having detection limits of 10 picomole/mole or below.
Phase II expected results:
Demonstrate a fieldable pre-concentration prototype in an atmospheric monitoring setting, using a commercially available quadrupole mass spectrometer, for 20 or more fluorinated gases with a detection limit of 1 picomole/mole or below for at least half of these trace gases sampled from the atmosphere.
NIST may be available to provide advice drawing on relevant in-house expertise, e.g., NIST expertise in the advanced refrigeration technologies likely to be used by any pre-concentration approach.
Magnetic nanoparticles have diverse applications in biomedical analysis and therapy, environmental remediation, and nanoscale and microscale manipulation. However, it is difficult to quantitatively measure the magnetic properties of single nanoparticles with high throughput. This measurement problem limits the ability to perform quality control in manufacturing processes for magnetic nanoparticles, which in turn limits the ability to obtain reproducible and predictable results in commercial applications of magnetic nanoparticles. Through its ongoing inter-OU Nanoparticle Manufacturing Program and recent workshop, Advancing Nanoparticle Manufacturing [1], NIST has clearly identified the need of its stakeholders for new measurement technologies to solve this problem. Because of the widespread use of magnetic nanoparticles, there is a particular need for economical technologies that are commercially available to the many manufacturers and users of magnetic nanoparticles with diverse properties. NIST in general, and the Center for Nanoscale Science and Technology in particular, is interested in enabling innovative commercial research to close this measurement gap and fulfill its mission to support the U.S. nanotechnology enterprise from discovery to production by providing industry, academia, NIST, and other government agencies with access to nanoscale measurement and fabrication methods and technology.
The general goals of this project are to increase private sector commercialization of an innovative measurement technology, to use small business to meet federal research and development needs, and to stimulate small business innovation in technology. The specific goals of this project are to develop an innovative measurement technology that enables quantitative magnetometry of single nanoparticles with diverse properties with high throughput, and to develop a manufacturing process that enables the mass production of this measurement technology.
Nanoparticles require routine characterization for quality control to obtain reproducible and predictable results in research and development, manufacturing, commerce, and standardization. But there are no commercially available technologies to quantitatively measure functionally relevant magnetic properties of single nanoparticles, such as hysteresis loops and magnetic anisotropy, with industrially relevant throughput. Most existing instruments for magnetometry are intended for measurements of macroscopic sample volumes. Application of these instruments to nanoparticle samples requires measurement of many particles in an ensemble, complicating a quantitative interpretation of the data and obscuring the distribution of particle properties. More specialized instruments for magnetometry can resolve single nanoparticles, but the throughput of such measurements is low, limiting the rapid analysis of a large number of single particles to populate a distribution of properties. Measurement of distributions of magnetic properties is essential to characterize sample heterogeneity for quality control in nanoparticle manufacturing.
Commercial development of an innovative and economical measurement technology will benefit manufacturers and users of magnetic nanoparticles, as well as manufacturers of scientific instruments. The widespread availability of this measurement technology will enable nanoparticle manufacturers to improve quality control of magnetic nanoparticles, allowing users to obtain reproducible and predictable results with those samples and potentially implement the measurements themselves. Growth in this overall market will motivate instrument manufacturers to further develop the technology to serve the market better.
Phase I expected results:
Demonstrate quantitative magnetometry of nanoparticles with high throughput, as defined by the following performance metrics: measurements of coercivity with a limit of uncertainty of less than 1 mT; measurements of isotropic or anisotropic nanoparticles with at least one critical dimension of less than 100 nm; measurement of more than 100 single ferromagnetic nanoparticles in less than 100 minutes.
Phase II expected results:
Demonstrate broad applicability of the measurement technology to a variety of commercially relevant magnetic nanoparticles with diverse magnetic properties. Demonstrate different forms of magnetometry including vector magnetometry. Increase the precision of the measurement technology by an order of magnitude. Increase the throughput of the measurement technology by an order of magnitude. Develop an economical manufacturing process for the measurement technology that is suitable for production and commercial venture.
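Restating these targets as simple arithmetic (illustrative only, not additional requirements): the Phase I metrics imply a throughput above one particle per minute, and the requested order-of-magnitude Phase II improvements imply roughly ten particles per minute with 0.1 mT coercivity uncertainty.

```python
# Illustrative arithmetic only: Phase I targets and the order-of-magnitude improvements
# requested for Phase II.
phase1 = {"coercivity_uncertainty_mT": 1.0, "particles": 100, "minutes": 100}
phase1["throughput_per_min"] = phase1["particles"] / phase1["minutes"]       # 1 particle/min

phase2 = {
    "coercivity_uncertainty_mT": phase1["coercivity_uncertainty_mT"] / 10,   # 0.1 mT
    "throughput_per_min": phase1["throughput_per_min"] * 10,                 # 10 particles/min
}
print(phase1)
print(phase2)
```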
NIST staff may be available to work collaboratively to develop the technology.
Reference:
[1] Advancing Nanoparticle Manufacturing, http://www.nist.gov/cnst/anm.cfm.
Biologic medicines represent the largest sector of the U.S. bioeconomy, with a global market of $145 billion, growing at greater than 15% annually and employing more than 810,000 workers. In 2013, seven of the ten top-selling drugs were biologics, specifically protein therapeutics, and it is projected that by 2020, 50% of all prescription drug sales will be biologics. A general unmet need in the development and manufacturing of these products is the inability of the current state of the art in measurement technology to characterize these complex protein products with sufficient precision and accuracy to ensure desirable clinical performance.
NIST seeks the development of new or improved measurement tools and methods that can more quickly, accurately, and precisely characterize the structure of biologic drugs. Advances in such measurement technology will:
· Enable faster and more confident assessments of the potential effects of changes in the manufacturing process, equipment, or raw materials.
· Aid development of biosimilars that will lead to greater competition and improved access to many medications for US patients.
· Increase general knowledge in the field of biopharmaceuticals and allow industry to develop improved and next generation protein therapeutics.
New analytical methods that can more accurately assess finished products, as well as analytical tools that can monitor attributes of biologics throughout the manufacturing process, are desirable. This opportunity applies to all types of analytical methods, including those used for process monitoring, characterization, or lot release. New analytical tools or methods to be developed should have some advantage over current analytics in terms of higher resolution (greater sensitivity, orthogonality, or specificity), reliability, or the clinical relevance of the product attribute that is measured.
The development of new or improved analytical technologies to assess the product quality attributes below is of interest in this opportunity.
• Post-translational Modifications
Many protein therapeutics have post-translational modifications that are critical to their clinical activity. These modifications are typically complex and heterogeneous, and analytical methods for fast qualitative or quantitative assessment of these modifications and how they relate to potency and clinical performance need to be improved. For example, glycosylation of the Fc region of many monoclonal antibody therapeutics is important for their mechanism of action in cancer treatment. Of particular interest in this opportunity are improved methods for quickly analyzing and quantifying glycosylation and other modifications known to affect the efficacy and safety of these types of products. Analytical technologies to improve measurement of other post-translational modifications, including deamidated species, glycated species, sialylated species, C-terminal variants (HC-Lys, HC-Pro amide), N-terminal variants (Gln vs. pyro-Glu), and oxidized forms, would also be of interest.
• Higher Order Structure
Protein therapeutics must fold into a three-dimensional structure to become functional, and that structure can be misfolded. A distribution of three-dimensional structures can exist for a product, with one major three-dimensional structure present along with minor variants differing in three-dimensional structure. We seek the development of new or improved higher order structure measurement tools that can detect and quantify the properly folded three-dimensional structure along with misfolded variants of protein therapeutics.
• Protein Aggregation and Particulates
Protein molecules can stick to each other to form aggregates, which are thought to be the precursors to formation of larger particulate species. It is believed that these species have the potential to stimulate adverse immune responses in patients that can lead to neutralization of the protein therapeutic. Aggregation and particulate formation are particularly problematic for monoclonal antibody therapeutics, which are typically formulated at high concentrations of 100 mg/mL or greater. In order to better understand adverse immune reactions, the ability to measure and quantify different types of aggregates in products needs to be improved. We seek the development of new or improved analytical tools that can directly measure the size and shape of protein aggregates or particulates, or assess their composition.
For assessment of all the product attributes above, new or improved analytical technologies or methods that require minimal sample preparation, or are capable of interrogating high concentration, formulated protein therapeutics, or protein therapeutics in raw process streams are also of particular interest in this opportunity.
Phase I expected results:
Establish proof of concept of the new or improved analytical technology to measure the desired product quality attribute(s) of protein therapeutics. Optimize and establish performance characteristics of the new or improved analytical technology, including sensitivity, resolution, quantitation, measurement speed (including sample preparation steps), accuracy, precision, and reproducibility.
Phase II expected results:
Directly compare results of the new or improved analytical technology with results from a current state-of-the-art analytical method (conceptually similar or orthogonal) used for characterization or product release testing of protein therapeutics. Demonstrate a factor of 2X or greater improvement in sensitivity, resolution, accuracy, precision, or speed of the new or improved analytical technology.
NIST will be available to consult or for potential collaboration with the awardee, depending on the measurement technology to be developed. NIST will also be willing to provide, where appropriate, reference materials to better compare performance of the proposed new method or technology with that of the current state of the art.
In working toward National Strategy for Trusted Identities in Cyberspace (NSTIC)-aligned online transactions, the NSTIC National Program Office (NPO) has identified current inconveniences that arise when individuals request services and benefits from the government. In these transactions, government websites repeatedly ask individuals for the same personal information. In some cases, websites do not actually need specific values for attributes; they just need a claim (e.g., a claim that a user is in a certain age range, instead of using a birthdate). Users lack a convenient way to disclose requested information without repetitious form filling, transform verified attributes into claims, or track where information has been disclosed, leaving them susceptible to over-collection and over-sharing of personal information. Potentially old and incorrect data also reduces the quality of services, and can be costly for government agencies. Finally, agencies incur security costs and liabilities in maintaining personal information in order to communicate with customers. This is not only an issue with government transactions; it is also part of our everyday interactions with private companies [1-2].
One solution is commonly known as a Personal Data Store (PDS). While commercial pilot programs have demonstrated its utility, it has yet to reach broad adoption in the government space. A PDS will provide users the ability to grant relying parties – sites that utilize identities or attributes provided by a third party service – secure, ongoing access to their personal information, attributes, and preferences. Hosted PDSs are segmented away from the rest of an information system, insulating private information and attribute details. Individuals will retain full control of their personal information; they’ll decide which attributes to release or permit access to and to whom.
The PDS should meet several critical requirements:
· Interoperate with any number of agencies or private companies engaging customers in online transactions by using open standards to transmit and store personal information and attributes;
· Revoke access to information provided to third or relying parties, and provide individuals with a clear and usable method for managing access to their stored data;
· Allow users to download their data in an open, portable format that can be migrated to other PDSs, maintaining the focus on user choice and convenience;
· Store, or link to, credential and attribute verifications from credential and attribute service providers, allowing individuals to share both self-asserted and signed or verified data about themselves;
· Provide the option for users to disclose verified attributes without revealing the user’s relationship with the relying party to the verifying credential or attribute service provider or revealing the user’s relationship with the verifying credential or attribute service provider to the relying party; and
· Transform verified attributes into provable claims (a minimal illustrative sketch follows this list).
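The claim-transformation requirement can be made concrete with a small sketch. The following Python fragment is purely illustrative and assumes a hypothetical verified birthdate attribute; it derives a yes/no age claim and attaches an HMAC tag that merely stands in for a credential provider's standards-based signature (e.g., a JWS).

# Minimal sketch (hypothetical): turning a verified attribute into a claim.
# A real PDS would use standards-based signatures and verified attribute
# formats; the HMAC tag here only stands in for a provider signature.
import hmac, hashlib, json
from datetime import date

PROVIDER_KEY = b"demo-key"  # placeholder for a credential provider's signing key

def derive_age_claim(birthdate: date, threshold_years: int, today: date) -> dict:
    """Derive a boolean claim (e.g., 'age >= 18') without disclosing the birthdate."""
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))
    return {"claim": f"age>={threshold_years}", "value": age >= threshold_years}

def sign_claim(claim: dict) -> dict:
    """Attach a provider tag so a relying party can check integrity of the claim."""
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["tag"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return claim

if __name__ == "__main__":
    claim = derive_age_claim(date(1990, 6, 1), 18, date(2016, 1, 15))
    print(sign_claim(claim))  # relying party sees only the claim, not the birthdate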
Through the PDS, the user will manage their self-asserted data as well as provide access to any user-approved external authoritative sources. The PDS will be beneficial when citizens engage with government agencies online – especially through Connect.Gov. With Connect.Gov, individuals will be able to use their credentials from approved external websites to log in at federal websites. A personal data store would put citizens in control of their personal information in these transactions, cultivating trust in Connect.Gov. As the private sector becomes increasingly competitive regarding privacy, and more devices request information as part of the growing Internet of Things, tools like PDSs provide privacy benefits by helping individuals manage their information disclosures and increasing trust in online transactions across the Internet.
PDSs will enable trust, accuracy, and convenience for individuals providing the same information to multiple agencies and companies without needing to fill out cumbersome, error-prone forms. Additionally, PDSs will improve service delivery by the U.S. government by allowing its authoritative data to be made available, if an individual chooses, through a PDS. This is a unique opportunity for a provably secure, technical solution to keep personal information under user control at a time when well-intentioned government actions toward data protection are so often met with suspicion. PDSs will provide a more explicit approach to consent, giving individuals greater control over what information is released, under what conditions, and to whom.
Phase I expected results:
Develop the architecture and a functioning prototype of the PDS, including user interfaces. These should be testable and deployable, and should support integration based on open standards.
Phase II expected results:
Develop an open-source PDS architecture based on open standards (where applicable). Demonstrate successful integration into the Connect.Gov architecture and successfully test and pilot this integration with at least two applications at different federal agencies.
NIST will provide consultation and input through regular discussions to solve problems as they occur. NIST will also work with General Services Administration (GSA) and other agencies to provide the test applications through Connect.Gov to which the integration will occur.
References:
[1] National Strategy for Trusted Identities in Cyberspace (http://www.whitehouse.gov/sites/default/files/rss_viewer/NSTICstrategy_041511.pdf).
[2] Privacy Enhancing Technologies Workshop (http://www.nist.gov/itl/csd/ct/pec-workshop.cfm).
The ability to compose existing interacting systems into a single higher-level system is essential to several emerging transformational technology platforms, such as cloud services, the Internet of Things (IoT), and cyber-physical systems (CPS). On a small scale, this has been achieved by data exchange formats, programming language interfaces, and algebraic specification frameworks. However, a formal account of modularity for large-scale systems has been out of reach. Category theory, as a theory of structure and compositionality, stands out as a possible mathematical formalism for specifying, analyzing, and composing large-scale modular structures. This SBIR subtopic is calling for a software tool to test the categorical formalism on a testbed of interacting-systems composition problems. The software tool should be able to create categorical models of systems composition and show that these models apply to concrete problems, such as those occurring for cyber-physical systems. The work proposed should lead to a new generation of tools that will enhance the ability to build, test, and validate this new generation of cyber-physical systems [1].
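To make the notion of compositionality concrete, the following Python sketch (illustrative only, not a categorical tool) treats systems as morphisms between named interface types; composition is defined only when interfaces match, which is the basic discipline a category-theoretic tool would enforce and generalize to products, functors, and diagram-level reasoning.

# Minimal sketch (illustrative only): systems as morphisms between interface
# types, with composition defined only when the interfaces match.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class System:
    name: str
    source: str          # input interface type
    target: str          # output interface type
    behavior: Callable   # maps an input value to an output value

def compose(g: System, f: System) -> System:
    """Return g after f; defined only if f's output interface equals g's input interface."""
    if f.target != g.source:
        raise TypeError(f"cannot compose {g.name} after {f.name}: {f.target} != {g.source}")
    return System(f"{g.name}*{f.name}", f.source, g.target,
                  lambda x: g.behavior(f.behavior(x)))

# Example: a sensor producing Celsius readings feeding a controller.
sensor = System("sensor", "RawSignal", "Celsius", lambda v: v * 0.1)
controller = System("controller", "Celsius", "ActuatorCmd", lambda t: "ON" if t > 25 else "OFF")
plant = compose(controller, sensor)
print(plant.name, plant.behavior(300))  # controller*sensor ON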
The goal of the project is to create a suite of tools based on category theory to transcend current ad hoc practices in the creation of large-scale cyber-physical systems. The project will demonstrate the ability to define, build, test, and validate the design of cyber-physical systems. Given their possible ubiquity in the future, it is important to have good formal methods that form the basis of design for cyber-physical systems and the Internet of Things.
Phase I expected results:
Demonstrate the feasibility of creating tools based on category theoretic foundations through exemplars of such a system. Evaluate the market needs and requirements of such tools.
Phase II expected results:
Develop and demonstrate the use of the tools proposed in Phase I, involving potential customers. Develop business plans and a detailed path to commercialization.
NIST may work in both a consultative and a collaborative capacity in assisting the awardee.
Reference:
[1] Lee, E.A. "Cyber Physical Systems: Design Challenges", International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing (ISORC), May, 2008; Invited Paper.
Current approaches to understanding the environmental impacts of manufactured products are based on a methodology known as life cycle assessment (LCA). LCA uses generalized estimates for the impact of the different processes involved in the manufacturing of these products. The actual impacts of manufacturing processes can vary dramatically and are influenced by a wide range of factors, including the manufacturing operating environments and process settings and the cost and availability of labor, energy, and materials [1-3].
The LCA tools available today focus on material production and use sustainability approximations that fail to account for specific manufacturing process performance, thus resulting in uncertain comparisons. With newer equipment, or by outfitting older equipment with sensors, we are able to accurately measure many of the factors contributing to the overall sustainability impact, such as energy, water, and material use; however, each process assessment is time consuming and the results are not widely shared.
An easily accessible source of manufacturing process reference data would allow more accurate assessments of the overall sustainability impact. A collection of manufacturing process reference data, represented with formal methods that can be integrated into software solutions such as analysis packages, would be useful for industry decision making, integration with LCA solution providers, and educational purposes in general. For example, this reference data could be incorporated by solution providers to analyze manufacturing plans and reduce the impact of manufacturing processes in terms of both the environment and operational costs. The data could further be used to predict production costs, schedule manufacturing resources, and control the quality of production.
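As an illustration of what a formally represented unit-process record might look like, the sketch below uses hypothetical field names (not the ASTM E60.13 templates) for a single machining step, in a machine-readable form that an LCA or costing tool could consume.

# Hypothetical unit-process record (field names and values are illustrative,
# not the ASTM E60.13 templates): one machining step with its resource flows.
import json

milling_record = {
    "process": "CNC milling, aluminum 6061",
    "functional_unit": "1 cm^3 material removed",
    "inputs": {
        "electricity_kWh": 0.12,     # measured or estimated per functional unit
        "cutting_fluid_mL": 3.0,
        "material_loss_g": 2.7,
    },
    "outputs": {"chips_g": 2.7},
    "conditions": {"spindle_speed_rpm": 8000, "feed_mm_per_min": 900},
    "provenance": {"source": "shop-floor measurement", "rating": "unreviewed"},
}

print(json.dumps(milling_record, indent=2))  # shareable, machine-readable form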
The repository of reference data should support collaboration for data contributions from a variety of sources, mechanisms for rating data accuracy and validity, mechanisms for people to find relevant data sources, and mechanisms to support automated interfaces to the repository. Furthermore, a business model for providing a viable service is needed. Reference data collection is time consuming, and the resulting data may contain sensitive information. The project should outline a business model that will support a variety of different stakeholder needs including academic research, industry-specific interests, and integration with an enterprise’s proprietary reference data.
The goal of this project is to make manufacturing process sustainability reference data available to enable accurate estimates of the impact of manufacturing processes. Process reference data can facilitate process trade-off analysis that will result in an overall reduction of the impact of manufacturing activities and that will reduce costs to manufacturers.
Phase I expected results:
Demonstrate a framework for collecting unit manufacturing process data in a standard and reusable way from a range of contributors, making use of emerging standards from ASTM E60.13, Committee on Sustainable Manufacturing, for process characterization and the development of standard templates for information collection, storage, and retrieval. The reference data set should provide, at a minimum, the information needed for composing the unit manufacturing processes for purposes of sustainability-related decision support. Create and provide such a database to address existing gaps in manufacturing process-specific information for life cycle analysis. The framework will include methods to support collaborative contribution, open discussion, and rating of contributed process data.
Phase II expected results:
Develop a repository of reference data sets for the manufacturing processes represented in Phase I, using a standardized format suitable for access by both end users and software applications. The repository should accept and solicit contributions of data and provide mechanisms for the data to be reviewed and discussed. Features for determining the validity and accuracy of the datasets should be considered. The number of manufacturing processes in use is so large that collaborative development of the data sets will be necessary for the system to have the broad coverage that increases its usefulness. Provide a collection of unit manufacturing process data to a wide range of manufacturing customers either directly or through providers of manufacturing analysis solutions. Provide a variety of innovative mechanisms to make the data available.
NIST will be available to work collaboratively with the awardee, providing consultation and input on standards activities and directions, and connecting the awardee with a network of data providers.
References:
[1] Sustainability-related standards are currently being developed by standards development agencies such as ASTM International. http://www.astm.org/WorkItems/WK35705.htm.
[2] Unit process life cycle inventory (UPLCI), as part of the CO2PE! Initiative is an international effort to document, analyze, and improve the environmental footprint for a wide range of manufacturing processes. http://www.co2pe.org/.
[3] Mani, M., Madan, J., Lee, J.H., Lyons, K.W. and Gupta, S.K. “Sustainability characterization for manufacturing processes”, International Journal of Production Research 52 (20), 5895-5912; http://www.tandfonline.com/doi/abs/10.1080/00207543.2014.886788.
Today, there are an estimated 8 billion square meters of commercial building space in the U. S., which consume 19 % of the energy used in the U. S. [1]. A report by the U. S. Department of Energy estimates that reducing air leakage through the exterior envelope of commercial buildings could result in energy savings of over 800 TBtu U. S.-wide by 2030 (5 % of the energy consumed by commercial buildings in 2014) [2]. Leaky building exteriors also lead to moisture issues inside wall cavities and building interiors, which can affect the integrity of the building envelope and the health of the occupants [3, 4]. However, barriers to making improvements in building envelope airtightness include not knowing a building’s current airtightness and thus not knowing how much energy (and thus money) one could save by investing in airtightness improvements. The current primary technique for determining building envelope airtightness is the building pressurization test, which is standardized in the American Society for Testing and Materials (ASTM) Standard E779-10 [5]. However, many building owners, tenants, and other stakeholders choose not to conduct this test because of the cost, time, and disruption to day-to-day building operations. It has also been shown that building envelope airtightness does not correlate with building age or construction characteristics [6].
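For context, the data reduction behind a pressurization test is a power-law fit of measured airflow against pressure difference, Q = C(ΔP)^n. The Python sketch below uses invented readings and omits the corrections and uncertainty analysis required by ASTM E779; it only illustrates the fit and the reporting of leakage at a reference pressure.

# Sketch of fan pressurization data reduction (Q = C * dP**n) with made-up
# readings; ASTM E779 adds corrections and uncertainty analysis beyond this
# simple log-log fit.
import numpy as np

dP = np.array([10.0, 20.0, 30.0, 45.0, 60.0, 75.0])   # pressure difference, Pa
Q  = np.array([0.55, 0.86, 1.10, 1.42, 1.69, 1.93])   # measured airflow, m^3/s

n, logC = np.polyfit(np.log(dP), np.log(Q), 1)          # slope = flow exponent
C = np.exp(logC)
print(f"C = {C:.3f} m^3/(s*Pa^n), n = {n:.2f}")
print(f"leakage at 75 Pa: {C * 75.0**n:.2f} m^3/s")     # a common reference pressure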
Another method to determine building envelope airtightness is to use building energy models calibrated by utility data. Changes to equipment operation and efficiency, and to occupant usage, are made in the model to reflect differences between design assumptions and operating conditions (e.g., [7]). When significant differences still exist after these changes are made, they are attributed to unknowns in the input parameters, such as building envelope airtightness. This value may then be adjusted using engineering expertise. However, it is difficult to know whether the airtightness value identified during the calibration process is the actual value or merely one that matches the modeling results and utility data.
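The calibration loop described above can be illustrated with a toy example. In the Python sketch below, the energy "model" and the utility data are invented, and the single tuned parameter stands in for envelope airtightness; the point is only that an optimizer can usually find a value that matches the bills, which is exactly why the fitted value may not be the physical one.

# Toy illustration of calibrating an energy model's infiltration parameter to
# utility data. The 'model' and data are invented.
import numpy as np
from scipy.optimize import minimize_scalar

degree_days = np.array([600, 520, 400, 250, 120, 60])               # monthly heating degree-days
measured_kwh = np.array([31000, 27200, 21500, 14000, 7600, 4400])   # utility data

def modeled_kwh(ach, dd):
    """Placeholder model: base load plus conduction plus an infiltration term."""
    base, conduction_per_dd, infiltration_per_dd_ach = 1500.0, 40.0, 12.0
    return base + conduction_per_dd * dd + infiltration_per_dd_ach * ach * dd

def rmse(ach):
    return np.sqrt(np.mean((modeled_kwh(ach, degree_days) - measured_kwh) ** 2))

fit = minimize_scalar(rmse, bounds=(0.0, 3.0), method="bounded")
print(f"infiltration that best matches the bills: {fit.x:.2f} ACH (RMSE {fit.fun:.0f} kWh)")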
Thus, a novel method is needed to determine building envelope airtightness that would be attractive to building owners and tenants. Such a method should be lower in cost, time, and effort and be less disruptive than the current best available technology, i.e., building pressurization testing. It should also provide a building envelope airtightness value with more confidence than calibrating energy models to coarse utility data. The new method should be accessible to the entire building industry. A commercial product that can deliver envelope airtightness values would be crucial for clients faced with choosing whether or not to make capital investments to increase airtightness. It would also allow building energy modelers to increase the confidence in their model results, since infiltration is one of the largest sources of uncertainty [8].
The adoption of such a method has the potential to improve the airtightness and energy usage of innumerable commercial buildings. In addition, improved airtightness can prolong the life of buildings by reducing or eliminating moisture migration into the building. The development of a method to easily determine building envelope airtightness would serve NIST in satisfying its goal by advancing the measurement science in this field. It would also serve the U. S. in reaching its goals to reduce reliance on oil and improve energy security [9].
The method developed to determine building envelope airtightness should be lower in cost, time, and effort and be less disruptive to normal operations compared to currently available technology. It should also provide a building envelope airtightness value with more confidence than calibrating to coarse utility data. It should be a method that can be used in a variety of building types including, but not limited to, stand-alone, those with shared walls, single story, and skyscrapers. The airtightness values determined by such a method should include bounds of uncertainty and be validated with pressurization test results. The method should be demonstrated in actual commercial buildings, with both multizone airflow and energy models of the buildings developed to support future research.
Phase I expected results:
Develop a literature review or market research demonstrating knowledge of the state of the art in sensors and approaches to determining building envelope airtightness that could be commercialized. Present the details of the proposed feasibility study (or studies) that have high potential to address the need for novel methods to determine building envelope airtightness in commercial buildings. Report the results of the study (or studies), including sensitivity and uncertainty analyses, in the Phase I final report.
Phase II expected results:
Provide a schedule to the NIST technical expert on how the method will be developed from Phase I to a final product. Select 3 or more commercial buildings for field testing (to be approved by the NIST technical expert) and obtain written consent to perform building envelope airtightness testing in order to validate the method developed. Identify plans to bring the method to the commercial marketplace and demonstrate that it is lower in cost, time and effort, and less disruptive to normal operations, compared to currently available technology. Provide a building envelope airtightness value with more confidence than calibrating to coarse utility data.
The NIST technical expert will be available for consultations and discussions to answer questions and clarify any other technical aspects of this effort.
References:
[1] DOE, Building Energy Data Book, 2011 [cited 2014]. Available from: http://buildingsdatabook.eren.doe.gov/
[2] DOE, Windows and Building Envelope Research and Development: Roadmap for Emerging Technologies. 2014, U. S. Department of Energy: Washington, D. C.
[3] Bomberg, M., Kisilewicz, T. and Nowak, K. “Is there an optimum range of airtightness for a building?”, Journal of Building Physics, 2015.
[4] Institute of Medicine, “Damp Indoor Spaces and Health”, 2004, The National Academies Press: Washington, D.C.
[5] ASTM, ASTM E779-10 Standard Test Method for Determining Air Leakage Rate by Fan Pressurization. 2010, American Society for Testing and Materials: Philadelphia.
[6] Emmerich, S.J. and Persily, A.K. “Analysis of U. S. Commercial Building Envelope Air Leakage Database to Support Sustainable Building Design”, International Journal of Ventilation, 2014. 12(4): p. 331-343.
[7] Raftery, P., Keane, M. and Costa, A. “Calibrating whole building energy models: Detailed case study using hourly measured data”, Energy and Buildings, 2011. 43(12): p. 3666-3679.
[8] Hopfe, C.J. and Hensen, J.L.M. “Uncertainty analysis in building performance simulation for design support”, Energy and Buildings, 2011. 43(10): p. 2798-2805.
[9] Obama, B., “The President's Climate Action Plan”, 2013: Washington, D. C.
The Global Positioning System (GPS) is used for a myriad of innovative—and now, essential—applications that were not envisioned when the system was first designed [1]. The Department of Homeland Security reports that of their 18 defined areas of U.S. critical infrastructure (e.g., communications, transportation, and energy), 16 of them rely on GPS for precision timing and synchronization in their system operations [2]. However, the GPS signal is exceedingly weak, and it is vulnerable to interference, both accidental and deliberate.
Programs have been proposed to provide resilience to the many modern cyber physical systems that rely on GPS for timing data. One that is often suggested is “eLoran,” which could augment GPS by providing a complementary data transmission channel for timing and more [3–8].
One of the developments that allowed GPS service to become so widely used was the development of application-specific integrated circuits (ASICs) that allowed engineers to incorporate GPS receivers into their products at very low cost (a few dollars or less). No such single-chip receivers currently exist for eLoran. Development and demonstration of ASICs for eLoran, or ASICs that integrate eLoran receiver functionality with that of other systems, would accelerate adoption and utilization of eLoran if and when it is deployed. While NIST has no operational responsibility for either GPS or eLoran, NIST seeks development of eLoran ASICs in order to help facilitate the broad dissemination of precise, accurate time standards and to provide robustness and resilience for critical cyber physical systems.
The goals of this project are to develop and demonstrate reference designs for single-chip eLoran receivers (ASICs). These designs could be for stand-alone eLoran receivers or—even better—integrated into ASICs that receive multiple time-dissemination signals (e.g., GPS, NIST’s WWVB). Designs must take care to capture all the timing precision available in these signals, and the designs must be amenable to eventual mass production at very low cost (commensurate in cost with today’s ASICs that provide precision time but which lack eLoran compatibility).
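For orientation, the Loran-C pulse is commonly described (see the signal specification in [4]) as a 100 kHz carrier under an envelope proportional to t² exp(−2t/65 µs), with receivers deriving timing from a zero crossing early in the pulse. The Python sketch below only generates this nominal shape; it is not a receiver design, and the sample rate and evaluation point are arbitrary.

# Sketch of the nominal Loran-C pulse shape (100 kHz carrier, envelope
# ~ t^2 * exp(-2t/65 us), peaking near 65 us). Shape illustration only.
import numpy as np

fs = 4.0e6                        # sample rate, Hz (illustrative)
t = np.arange(0, 300e-6, 1 / fs)  # 300 us of one pulse
tau = 65e-6                       # time of envelope peak
envelope = (t / tau) ** 2 * np.exp(2 * (1 - t / tau))
pulse = envelope * np.sin(2 * np.pi * 100e3 * t)   # phase coding would flip this sign

peak_us = t[np.argmax(envelope)] * 1e6
early = envelope[np.searchsorted(t, 30e-6)]
print(f"envelope peaks at ~{peak_us:.0f} us; relative amplitude at 30 us = {early:.2f}")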
As of this writing, the only U.S. eLoran signal is broadcast from Wildwood, NJ, on an intermittent, experimental basis. The signal has a coverage radius of a few hundred miles. Proposals under this subtopic (Phase I now, and perhaps a Phase II later) should make clear the extent to which access to this signal might be required, and what if any arrangements might have been made for access to this signal when needed. Neither the U.S. Government nor its CRADA partners [9] make any representation or commitment through this SBIR solicitation that this signal would be available or guaranteed. Open-air eLoran signals may also be available in the UK and other nations [10].
Phase I expected results:
Develop a feasibility study consisting, at a minimum, of a system design and supporting analysis for timing accuracy and volume manufacturing costs. The design should be based on published eLoran specifications (e.g., [3–7]) and the analysis should greatly benefit from experimental validation of elements in the design.
Phase II expected results:
Production of prototype integrated circuits. The prototypes should be produced to specifications that meet the needs of functionality testing and support the broad and rapid commercialization of eLoran technology.
NIST will not provide assistance on this project.
References:
[1] See http://www.dhs.gov/science-and-technology/cyber-physical-systems.
[2] See http://www.gps.gov/multimedia/presentations/2012/10/USTTI/graham.pdf.
[3] The eLoran definition document may be found at: http://www.loran.org/news/eLoran%20Definition%20Document%200%201%20Released.pdf.
[4] The Loran C signal specification may be found at: http://www.navcen.uscg.gov/?pageName=loranSignalSpec.
[5] The eLoran 9th pulse modulation technique is described in http://www-personal.umich.edu/~tmikulsk/loran/ref/eloran_ldc.pdf and http://www.dtic.mil/get-tr-doc/pdf?AD=ADA575171.
[6] See http://rntfnd.org/wp-content/uploads/Delivering-a-National-Timescale-Using-eLoran-Ver1-0.pdf and the references therein.
[7] See http://www-personal.umich.edu/~tmikulsk/loran/index_3.html and the references therein.
[8] Additional background information on eLoran and updates on its current status may be obtained by searching on that term with an Internet search engine.
[10] See http://www.gla-rrnav.org/radionavigation/eloran/index.html.
Today’s manufacturing systems are able to collect vast amounts of data; however, much of that data is never used unless and until there is a known problem in the process. Sometimes the problem will not even be detected until the product is being used in the field, implying that the manufacturing problem may have persisted for several generations of the product. Advances in data visualization, which is a fundamental means of observing data and discovering problems, have come a long way for generalized applications. However, data visualization still requires considerable effort to integrate easily with the systems generating the data [1].
Current approaches (drag-and-drop dashboards, tableaus, etc.) to visualizing smart and sustainable manufacturing enterprises are quite limited and suffer from many drawbacks. Substantial manual effort from specialized practitioners is required to use them. In some cases, significant skilled programming is necessary; in other cases, significant visualization expertise is necessary. A further difficulty is that large amounts of data, often stored as combinations of relational and non-relational data in a variety of quasi-federated databases or streamed directly from machines, are not well understood by anyone in the enterprise. Combining all of these skills in a single person is unlikely and is likely to remain out of reach, particularly for small manufacturers. (Large manufacturers have similar problems, although for different reasons – while visualization teams exist, inordinately larger data sets make visualization harder in its own way.)
Even the best current results are inflexible, unable to adapt to in-process schema changes or schema-less databases. This leads to software that either suffers from “bit rot” as schemas and databases change out from under the visualization software or cannot incorporate new data to improve visualizations.
Manufacturing systems pose other unique characteristics for data. For instance, correlations between time and spatial coordinates are one fundamental concept for assessing manufacturing performance. Performance is often plagued by the interaction of variables along multiple dimensions, rather than a two-factor correlation. Other unique characteristics also exist.
This project will investigate fundamental concepts that are of relevance to manufacturing data, develop procedures for applying visualization techniques to those concepts, and provide a natural language-based user interface to allow manufacturers to quickly assemble their own visualizations based on their datasets.
The project goal is to make available manufacturing visualization software that is flexible, powerful, and easy to use. A natural language-based front-end is a necessary component and a superior interface to traditional drag-and-drop techniques. The system should be able to give advice to the user, for example, proffering certain visualization techniques for data that is recognizably appropriate or dissuading the use of visualization techniques that are inappropriate for given data. The software should include an expandable library of plugin visualization components allowing for new visualization technologies as they become available. A visualization browser must offer and suggest appropriate choices to deal with challenging data such as high-dimension data. A backend data crawler should be able to adapt to new data as it becomes available within the enterprise, with and even without explicit schemas. Application programming interfaces (API) should be provided so that visualizations may be augmented with scripts using arbitrary programming languages and the results integrated into other software without restrictive licenses.
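A minimal sketch of the kind of plugin registry and data-driven recommendation described above follows; the plugin names and heuristics are invented, and a real system would inspect schemas, dimensionality, and time or spatial columns far more carefully.

# Sketch of a plugin registry and a data-driven visualization recommender.
# Plugin names and heuristics are illustrative only.
import pandas as pd

PLUGINS = {"line_chart": "time series", "scatter_matrix": "few numeric columns",
           "parallel_coordinates": "high-dimensional numeric data"}

def suggest(df: pd.DataFrame) -> str:
    """Recommend (or dissuade from) visualizations based on simple data traits."""
    numeric = df.select_dtypes("number").columns
    if isinstance(df.index, pd.DatetimeIndex):
        return "line_chart"                 # time-indexed data -> trend view
    if len(numeric) > 6:
        return "parallel_coordinates"       # too many axes for a scatter matrix
    return "scatter_matrix"

demo = pd.DataFrame({"spindle_load": [0.4, 0.7], "vibration": [0.1, 0.3],
                     "temp_C": [40, 55]})
print(suggest(demo), "->", PLUGINS[suggest(demo)])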
Phase I expected results:
Demonstrate the feasibility of software for visualizations using limited natural language, based on a library of visualizations and manufacturing “big data” (large and varied databases).
Phase II expected results:
Demonstrate richer natural language interfaces, techniques to recommend visualizations based on data, and API libraries for both new visualizations and integration into other software. Demonstrate the interaction of non-visualization specialists with the system. Produce visualizations that are better than those of commercially available spreadsheet packages such as Microsoft Excel, and at least as good as those from tools such as R, Wolfram, and D3JS, but with the capability to produce visualizations much more quickly and without the development time or skills required by current commercially available visualization software such as those mentioned here.
NIST is available to work collaboratively with the awardee providing consultation and input on the activities and directions and providing data and scenarios.
Reference:
[1] Visualization-related collections described in “visualization zoos” such as https://queue.acm.org/detail.cfm?id=1805128, http://www.idea.org/blog/2012/10/25/great-tools-for-data-visualization/, and http://d3js.org.
NIST owns inventions that require additional research and innovation to advance the technologies to a commercial product or service. The goal of this SBIR subtopic is for small businesses to advance NIST-owned inventions to the marketplace. The Technology Partnerships Office at NIST will provide the awardee with a royalty-free research license for the duration of the SBIR award. When the technology is ready for commercialization, a commercialization license will be negotiated with the awardee.
Applications may be submitted for the development of any NIST-owned invention that is covered by a pending U.S. patent application or by an issued patent. Available NIST-owned inventions can be found on the NISTTech website at http://tsapps.nist.gov/techtransfer/ and are identified as “available for licensing” under the heading “Status of Availability.” Some available technologies are described as only being available non-exclusively, meaning that other commercialization licenses may currently exist. More information about licensing NIST’s inventions is available at http://www.nist.gov/tpo/Licensing.cfm.
The technical portion of an application should include a technical description of the research that will be undertaken. In this technical portion of the application, the applicant should also provide a brief description of a development plan to manufacture the commercial product or to develop a commercial service using the NIST-owned invention. The absence of this development plan will make the application less competitive.
Phase I expected results:
Develop a feasibility study that examines expectations of the research to produce a commercial product.
Phase II expected results:
Provide further R&D that leads to demonstrated practical application and advancement toward a commercial product.
NIST staff may be available for consultation and collaboration.
As a part of the Materials Genome Initiative, NIST is charged with developing a materials innovation infrastructure. Key aspects of this infrastructure include the real-time acquisition and curation of experimental and simulation data and associated metadata, and control of scientific equipment over a network. To accomplish this, we need research and development on the core requirements and on an overall strategy and software architecture that would enable control of diverse and geographically distributed experimental equipment (e.g., scanning electron microscopes (SEM), transmission electron microscopes (TEM), x-ray diffractometers, dilatometers), computational resources (e.g., workstations, clusters, demonstration code), and the automatic capture and curation of their acquired scientific data and associated metadata across a network using backend systems such as the NIST-developed Materials Data Curation System and the National Data Service’s Material Data Facility [1-3].
The goal of the project is to discover and document core requirements and develop an overall strategy and software architecture that, when implemented, will allow for the control of geographically distributed NIST research equipment and computational resources and their integration with scientific informatics backends, including the NIST Materials Data Curator and the National Data Service’s Material Data Facility. Both the Materials Data Curator and the Materials Data Facility have REST APIs to facilitate automated data curation. The project will provide documented requirements and develop a specific strategy and software architecture for controlling NIST scientific instruments and computational resources and interfacing them with scientific informatics backends, in a format amenable to implementation by software engineers.
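As a rough illustration of automated curation against a REST backend, the Python sketch below posts an instrument record to a placeholder endpoint. The URL, payload fields, template name, and authentication are assumptions made for illustration and are not the actual MDCS or Materials Data Facility interfaces; their own documentation defines the real routes.

# Sketch of automated curation over a REST API. Endpoint, payload, and
# authentication are placeholders, not the real MDCS or MDF interfaces.
import requests

CURATOR_URL = "https://curator.example.gov/api/records"   # placeholder URL

record = {
    "instrument": "SEM-01",
    "template": "EDS-composition-scan",      # hypothetical schema/template name
    "metadata": {"operator": "jdoe", "accelerating_voltage_kV": 15},
    "data_uri": "file:///data/sem/scan_0421.h5",
}

resp = requests.post(CURATOR_URL, json=record,
                     headers={"Authorization": "Bearer <token>"}, timeout=30)
resp.raise_for_status()
print("curated record id:", resp.json().get("id"))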
Phase I expected results:
Discover, validate, and document requirements for a system to enable scientific equipment control and scalable scientific data and metadata acquisition and curation as described in the project goals. Using the documented requirements, develop and document an overall strategy, and then develop and document a software architecture that, when implemented, will meet the project goals. NIST believes that a successful architecture would have several key properties:
1) It is structured in independent layers; the top-most layer presents a high-level user interface to allow unified user access and control, while the lowest layer provides connectivity to the scientific equipment or computational output.
2) The architecture relies on two public interfaces, one at the highest level and the other at the lowest level, that allow the components to interact as a single application.
3) The architecture includes the notion of a default scripting language and provisions for integrated development environments to facilitate customization and extension of a system implementing the architecture in a standardized fashion.
4) The architecture is highly modular and includes the concepts of plugins and a generalized, abstract command set that facilitates interaction with the scientific equipment.
5) The public interfaces and abstract command set are conceived as being language neutral and allow users to control and extend a system implementing the architecture from a large variety of commonly used programming languages, including Python, Java, and C++.
6) The architecture provides for the capture of scientific provenance and system configuration to facilitate reproducibility.
7) The architecture supports the concept of scientific workflows.
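One way to picture properties 1), 4), and 5) above is a small, abstract command set that every instrument plugin implements so that higher layers can drive any device uniformly. The Python sketch below is illustrative only; the class and method names are invented, not part of any NIST system.

# Sketch of a layered design: an abstract command set implemented by
# instrument plugins, driven by a top-level workflow that never sees
# device-specific details. Names are illustrative.
from abc import ABC, abstractmethod

class InstrumentPlugin(ABC):
    """Lowest layer: one plugin per instrument, exposing a generic command set."""

    @abstractmethod
    def connect(self, address: str) -> None: ...

    @abstractmethod
    def configure(self, settings: dict) -> None: ...

    @abstractmethod
    def acquire(self) -> dict:
        """Return data plus the metadata needed for curation and provenance."""

class FakeDiffractometer(InstrumentPlugin):
    def connect(self, address): self.address = address
    def configure(self, settings): self.settings = settings
    def acquire(self):
        return {"data": [0.1, 0.4, 0.2], "metadata": {"scan": self.settings}}

dev = FakeDiffractometer()
dev.connect("tcp://xrd-lab-3")
dev.configure({"two_theta_range_deg": [20, 80]})
print(dev.acquire()["metadata"])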
Phase II expected results:
Develop an extensible infrastructure for the development of APIs to facilitate data curation of materials data from NIST dilatometers, x-ray diffractometers, scanning electron microscopes (EDS composition scan, EBSD pattern), transmission electron microscopes, and tensile testing machines.
NIST staff familiar with the various instruments (SEM, TEM, optical microscopes, dilatometer, x-ray diffractometer) and simulations will be available to work with awardees to discover the requirements and develop the metadata schemas needed to collect the data. NIST staff responsible for the development of the Materials Data Curation System can be made available to help the awardees understand the architecture and capabilities of the MDCS.
References:
[1] Documentation for Materials Data Curator System (https://github.com/usnistgov/MDCS).
[2] Campbell, C.E., Kattner, U.R. and Liu, Z-K. “File and Data Repositories for the Next Generation CALPHAD”, Scripta Mater. (2014) 70, 7-11.
[3] Campbell, C.E., Kattner, U.R. and Liu Z-K. “The development of phase-based property data using the CALPHAD method and infrastructure needs”, IMMI (2014) 3 doi:10.1186/2193-9772-3-12.
Hybrid quantum networks are a key step towards realizing distributed quantum computing and quantum communications, two areas that promise to fundamentally change the way information is processed, delivered, and secured. Hybrid quantum networks consist of quantum components that operate at different optical wavelengths. This is necessary since different functions of the network are best performed by different technologies (e.g., trapped ions, Rydberg atoms, nitrogen-vacancy color centers) that operate at incompatible wavelengths. Quantum interfaces are needed to make the different nodes compatible. Ideally, these interfaces are optical frequency converters that convert photons of one wavelength to another with 100% conversion efficiency and no additional noise. In practice, up to 86% internal conversion efficiency has been demonstrated [1] using periodically poled lithium niobate (PPLN) waveguides, and when coupling and collection losses are included, external conversion efficiencies between 51% [2] and 65% [1] have been shown. Furthermore, these devices have been developed by academic institutions and, to our knowledge, no commercial vendors exist that can achieve this level of performance.
An unmet need exists for a commercial source of high-efficiency, high-performance optical frequency converters. Efficient, low-loss converters are needed for both up- and down-conversion. Devices must be able to efficiently convert between far-separated wavelengths, which requires on-chip mode conditioning and directional coupling [3]. Attention to packaging and fiber coupling is needed to make the devices robust and easy to use. NIST is interested in these devices to further efforts in realizing a hybrid quantum network. We are seeking proposals from US industry to develop reliable, high-efficiency devices and demonstrate a path towards commercialization.
The goal of this project is to develop commercial facilities and capabilities to manufacture, characterize and test high-performance optical frequency converters. High performance includes high conversion efficiency, low noise, robust packaging, good long-term stability and performance free of photorefractive damage. High external conversion efficiency requires excellent waveguide quality, low propagation losses and high coupling efficiency. On-chip filters and couplers are likely needed to achieve high launching and coupling efficiencies.
Phase I expected results:
Demonstration of expertise and capability in fabricating high-efficiency waveguides. Design and verification via modeling for high conversion efficiency and high-efficiency input and output coupling (likely utilizing on-chip filters) for (a) upconversion of 1892 nm + 1550 nm → 852 nm and (b) down-conversion of 852 nm + 1892 nm → 1550 nm.
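As a quick check of the photon-energy bookkeeping behind these two processes, the listed wavelengths satisfy 1/1892 + 1/1550 ≈ 1/852 (sum-frequency upconversion) and 1/852 − 1/1892 ≈ 1/1550 (difference-frequency downconversion), as the short calculation below confirms; the values are the nominal wavelengths from the text.

# Photon-energy bookkeeping for the two processes (wavelengths in nm).
lam_pump, lam_signal, lam_up = 1892.0, 1550.0, 852.0

sfg_out = 1.0 / (1.0 / lam_pump + 1.0 / lam_signal)   # upconversion output
dfg_out = 1.0 / (1.0 / lam_up - 1.0 / lam_pump)       # downconversion output
print(f"upconversion output ~ {sfg_out:.1f} nm, downconversion output ~ {dfg_out:.1f} nm")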
Phase II expected results:
Demonstrate packaged, fiber-coupled optical frequency converters for upconversion and downconversion. Fiber coupling should enable high launching efficiencies of both pump and signal beams, which will likely require two separate input fibers and on-chip beam combining. Develop waveguides with at least 80% internal conversion efficiency and 50% external conversion efficiency using a continuous-wave (CW) pump. The devices should achieve maximum conversion with input CW pump power below 1 W. Demonstrate the capability to characterize (a) internal and external conversion efficiencies, (b) waveguide propagation losses, (c) input coupling efficiency, accounting for facet, fiber-pigtailing, and mode-matching losses, and (d) output coupling efficiency. Demonstrate fabricated waveguides for both processes mentioned in Phase I and show that the designs can be adapted and executed at other wavelengths, for instance processes having a pump longer than 2100 nm.
It is expected that NIST researchers will be available for consultation and input.
References:
[1] Pelc, J.S., Ma, L., Phillips, C.R., Zhang, Q., Langrock, C., Slattery, O., Tang, X. and Fejer, M.M. “Long-wavelength-pumped upconversion single-photon detector at 1550 nm: performance and noise analysis”, Opt. Express 19, 21445-21456 (2011).
[2] Kuo, P.S., Pelc, J.S., Slattery, O., Kim, Y.S., Fejer, M.M. and Tang, X. “Reducing noise in single-photon-level frequency conversion”, Opt. Lett. 38, 1310-1312 (2013).
[3] Pelc, J.S., Yu, L., De Greve, K., McMahon, P.L., Natarajan, C.M., Esfandyarpour, V., Maier, S., Schneider, C., Kamp, M., Höfling, S., Hadfield, R.H., Forchel, A., Yamamoto, Y. and Fejer, M.M. “Downconversion quantum interface for a single quantum dot spin and 1550-nm single-photon channel”, Opt. Express 20, 27510-27519 (2012).