FY 2017 NIST SBIR Notice of Funding Opportunity (NOFO)
NOTE: The Solicitations and topics listed on this site are copies from the various SBIR agency solicitations and are not necessarily the latest and most up-to-date. For this reason, you should use the agency link listed below which will take you directly to the appropriate agency server where you can read the official version of this solicitation and download the appropriate forms and rules.
The official link for this solicitation is: https://www.grants.gov/web/grants/view-opportunity.html?oppId=291164
Application Due Date:
Available Funding Topics
9.01: Collaboration and Partnership
9.02: Data and Modeling
- 9.02.01.73: Interactive Software Tools for Processing QIF-formatted Part Models to Generate Realistic and Accurate Measurement Plans and Programs in Standard Digital Languages
- 9.02.02.73: Modeling and Simulation Analysis for Manufacturing Systems
- 9.02.03.73: Non-Destructive Testing Qualification of Complex Parts with Digital Image Correlation and Digital Signatures
- 9.02.04.77: Parallel Algorithms for Processing Huge Sparsely Labeled Datasets on Clusters of Multicore Processors for Healthcare and Manufacturing Applications
9.03: Precision Measurements
- 9.03.01.67: Atomic Vapor Cell Technology for Electric-Field Metrology
- 9.03.02.68: Innovative Manufacturing of Nanoscale Calibration Spheres
- 9.03.03.68: Multichannel, Chemically Precise X-ray Pulse Processor
- 9.03.04.62: Picogram-scale Atomic Force Microscopy Probes with Integrated Nanophotonic Readout
- 9.03.05.68: Precision 10 kV Programmable Voltage Source
- 9.03.06.68: Quantum-based Compact Programmable Primary Thermometer
- 9.03.07.63: Steel Corrosion Detection Technology Using THz Waves: A Field-operable Unit Based on NIST Spectroscopic Technology
- 9.03.08.67: Superconducting RF Filters for Advanced Signal Processing
- 9.04.01.60: An Automated System for Firearm Evidence Identifications
- 9.04.02.77: Design Recovery Driven Porting and Refactoring of Fortran 77 Codes for Performance, Sustainability, and Consistency
- 9.04.03.60: Digital Data Structure Understanding Tools
- 9.04.04.77: Facilitating Security, Reliability, and Privacy in Networked Internet of Things (IoT) Devices
- 9.04.05.77: High Speed Large Field-of-View Optical Microscope
- 9.04.06.63: Infrastructure Requirements, Strategy and Architecture to Enable Scalable Scientific Data and Metadata Acquisition and Curation in Support of the Materials Genome Initiative
- 9.04.07.73: Low-Latency High-reliability Wireless Protocol for Advanced Manufacturing Applications
- 9.04.08.77: Medical Device Cybersecurity Tools or Compensating Controls
- 9.04.09.77: Policy Machine/Next Generation Access Control Implementation
- 9.04.10.73: Smart Visualization for Smart Manufacturing
- 9.04.11.77: Sources of and Triggers for Cybersecurity Failures
Small businesses can obtain information about available NIST-owned inventions through multiple sources. NIST provides information on its website at www.nist.gov/tpo and at www.nist.gov/Licensing. Applicants can also obtain information from the U.S. Patent and Trademark Office site (www.uspto.gov) and private search engines.
Applicants will need to confirm that the NIST-owned invention is available for licensing by searching for the invention using the links on the websites listed above. Alternatively, the applicant can confirm its availability by submitting a question to the NIST SBIR website as directed in Section 1.04. Some NIST-owned inventions are described as available for licensing only on a non-exclusive basis, which typically means that NIST has already granted at least one non-exclusive commercialization license. Any such NIST-owned inventions remain available for licensing on a non-exclusive basis. Any NIST-owned invention that has been exclusively licensed to another party is not available for licensing. If the NIST-owned invention has become unavailable for licensing in the field of use relevant to the subtopic prior to the close of this NOFO, NIST has the sole discretion to deem such an application ineligible under the subtopic, as stated in Section 4.02 (Phase I Screening Criteria).
An SBIR awardee under this subtopic will need to contact NIST’s Technology Partnership Office for a research license to use the NIST-owned invention. Such awardees will be granted a non-exclusive research license and will be given the opportunity to negotiate a non-exclusive or an exclusive commercialization license to the NIST-owned invention, in accordance with the Federal patent licensing regulations set forth in 37 C.F.R. Part 404, to the extent that the NIST-owned invention is available for licensing and has not otherwise been exclusively licensed to another party. Any research license granted for the purpose of this subtopic will be royalty-free during the award period. More information about licensing NIST inventions is available at www.nist.gov/Licensing.
The technical portion of an application should cite the NIST patent or patent application and include a description of the research that will be undertaken. Within this technical portion, the applicant should also provide a brief description of the proposed plan to develop a commercial product or service using the NIST invention. The absence of this development plan will make the application less competitive.
Phase I expected results: Develop a feasibility study that examines the potential of the research to produce a commercial product or service.
Phase II expected results: Provide further R&D that leads to demonstrated practical application and advancement toward a commercial product or service.
NIST staff may be available for consultation and collaboration to the extent that resources are available.
Interactive Software Tools for Processing QIF-formatted Part Models to Generate Realistic and Accurate Measurement Plans and Programs in Standard Digital Languages
The Quality Information Framework (QIF) Version 2.1 suite of quality information models is an ANSI-approved standard developed by the Dimensional Metrology Standards Consortium. All QIF information models are written in the XML Schema Definition Language (XSDL). Data files (called instance files) conforming to the QIF models are also written in XML. QIF, along with the Dimensional Measuring Interface Specification (DMIS), provides rich and comprehensive digital modeling specifications for every key element in the manufacturing quality measurement process, from digital part models, to measurement resources, to measurement rules, to measurement plans, to measurement programs, to measurement results, to summary measurement statistics. Measurement programs are generated from measurement plans, which in turn are generated from part models and a set of measurement rules. Measurement rules are intended to help users select appropriate measuring equipment, fitting algorithms, and methods for reporting measurement results. Since the use of QIF in actual software systems is in its early stages, measurement system software vendors currently create and consume relatively simplistic rule sets and plans. Furthermore, the ANSI QIF Rules 2.1 standard is still in the early phases of development. What is needed now are sophisticated, human-interactive software tools that can receive QIF-formatted models and QIF-formatted rules as input and create complex measurement plans that are validated and verified to be correct, complete, and compliant with ANSI QIF. Such a tool would also be used in testing the validity of the QIF specification itself, which is of special importance to NIST. A well-designed tool could reduce measurement plan size substantially, depending on the part to be measured and its features.
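Because QIF instance files are plain XML, a processing tool can begin by walking the feature and tolerance trees of a part model. The sketch below uses illustrative placeholder element names, not the actual QIF 2.1 schema vocabulary, to show the general shape of such a reader.

```python
# Minimal sketch: reading feature and tolerance entries from a QIF-like
# XML instance file. Element and attribute names here are illustrative
# placeholders, not the actual QIF 2.1 schema vocabulary.
import xml.etree.ElementTree as ET

SAMPLE = """\
<QIFDocument>
  <Features>
    <CircleFeature id="f1" diameter="10.0"/>
    <PlaneFeature id="f2"/>
  </Features>
  <Tolerances>
    <DiameterTolerance feature="f1" plus="0.05" minus="0.05"/>
  </Tolerances>
</QIFDocument>
"""

def load_features(xml_text):
    """Return {feature id: tag name} for every element under <Features>."""
    root = ET.fromstring(xml_text)
    return {f.get("id"): f.tag for f in root.find("Features")}

def tolerances_for(xml_text, feature_id):
    """Collect tolerance elements that reference the given feature id."""
    root = ET.fromstring(xml_text)
    return [t.tag for t in root.find("Tolerances")
            if t.get("feature") == feature_id]
```

A plan-generation tool would pair a reader like this with a rules engine that maps each feature/tolerance combination to measurement operations; validation against the official QIF XSDL schemas would be a separate step.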
This subtopic seeks to enable the development and testing of the QIF Rules specification. The awardee will provide a testing ground for the current version of the QIF Rules specification and valuable input for the development of the next version of QIF Rules. The same can be said for QIF Plans: QIF Plans has not been thoroughly exercised by current software implementations of QIF. If QIF Plans is going to be ready for widespread use in the dimensional metrology software world, it needs to be tested against more complex requirements and manufacturing use cases.
Phase I expected results:
The awardee will design a structure for the interactive software tool, develop a prototype, and test the prototype with a real part. The prototype will be designed, created, and tested with relatively simple inputs, all in QIF format: it will demonstrate its capability on a relatively simple part with geometry, features, and tolerances modeled in QIF Model Based Definition (MBD). The tool will process the part, using simplified QIF Rules, and generate QIF-formatted measurement plans. The goal in this phase is to design a tool, and build a prototype, that can be expanded to produce non-verbose QIF-compliant plans for a rich set of part types with a complex and realistic set of features and tolerances, using the specified rules in QIF Plans format.
Phase II expected results:
The awardee will build and demonstrate a more sophisticated measurement plan generation tool. First, the awardee will test the optimization of the tool against 1) a wide variety of parts, features, and tolerances and 2) a wide variety of rules for measurement equipment types, measurement tool types, and feature-type measurement requirements. Second, the awardee will test the correctness of the tool against 1) expert (human) analysis of the resulting plans and 2) the performance of the plans in simulation tools and in a real environment. For example, the rules portion of the interactive software tool might have three interrelated functional modules: a module for intelligent selection of coordinate measuring machines and probes, a module for intelligent selection of fitting algorithms for processing measured points to fit substitute features, and a module for intelligent selection of the methods for reporting measurement results. During software development, the information models used in software engineering have to be QIF-compliant, e.g., the product models, resource models, and rules models.
Finally, the interactive software tool will generate programs in DMIS for coordinate measuring machines (CMMs) from QIF-formatted plans. The system will also be able to accept a QIF Resources file as input. Applicants may propose to extend an existing CMM programming system or to build one from the ground up.
NIST researchers may be available to collaborate with and provide consultation to the awardee.
Global competition requires manufacturers to assess the likely effects of decisions about design and configuration of products, processes, and resources. The complexity of modern manufacturing requires advanced analysis, simulation in particular, to answer questions such as: What will be the capacity of this proposed line? What throughput, cycle time, and other metrics can be achieved for this product with this line configuration? Where should capacity be added if demand increases? Large enterprises usually employ experts to build simulation models to answer these questions, although the process can be lengthy and expensive. Most small- and medium-sized enterprises (SMEs) simply do not have access to the necessary analytical capabilities.
Discrete-event simulation analysis is increasingly critical to analyzing, designing, evaluating, and controlling large-scale, complex, uncertain systems. However, it currently takes too long to design, collect information and data for, build, execute, and analyze simulation models, leading to insufficient input to decision-making. These barriers are particularly formidable to SMEs, leaving them with little ability to quantitatively assess the effect of potential decisions. An ability to automate the development of simulations would have significant benefits for manufacturing enterprises.
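The kinds of questions listed above ("what throughput and cycle time can this line achieve?") map directly onto discrete-event simulation. A minimal sketch, using illustrative deterministic arrival and service times for a single-machine line:

```python
# Minimal discrete-event simulation sketch: one machine, first-come
# first-served, deterministic interarrival and service times. Computes
# throughput (parts per time unit) and average cycle time. The parameter
# values are illustrative, not tied to any real production line.
import heapq

def simulate(n_parts, interarrival, service):
    # Event queue of (time, kind, part id); "arrive" sorts before "done".
    events = [(i * interarrival, "arrive", i) for i in range(n_parts)]
    heapq.heapify(events)
    machine_free_at = 0.0
    arrival, completion = {}, {}
    while events:
        t, kind, part = heapq.heappop(events)
        if kind == "arrive":
            arrival[part] = t
            start = max(t, machine_free_at)   # wait if machine is busy
            machine_free_at = start + service
            heapq.heappush(events, (machine_free_at, "done", part))
        else:
            completion[part] = t
    makespan = max(completion.values())
    throughput = n_parts / makespan
    cycle_time = sum(completion[p] - arrival[p] for p in completion) / n_parts
    return throughput, cycle_time
```

With 3 parts arriving every 1.0 time unit into a 2.0-unit service, the queue builds up: throughput is limited by the machine, and cycle time grows for each successive part. A commercially useful tool would generate models like this automatically from a manufacturing system description rather than requiring hand-coding.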
The goal is to develop tools that significantly reduce the time, cost, and expertise required to use simulation analysis to answer questions about designing and operating manufacturing systems. Tools are sought that can answer a flexible and general set of questions about manufacturing systems, scenarios within them, and alternatives to them.
The tools will leverage models of manufacturing systems that provide the necessary input to analysis.
Phase I expected results:
Demonstrate an architecture with widely understood semantics for manufacturing system modeling, appropriate model-to-model transformations, and well-structured manufacturing system simulations. This demonstration should start with at least one frequently asked manufacturing question or use case, answer the question using simulation analysis, and elucidate how a proper software implementation would significantly reduce the time, cost, and expertise required to answer the question as compared with the status quo.
Phase II expected results:
Deploy the Phase I architecture in a form usable by SMEs who currently have no access to manufacturing system simulations or other advanced analytics. Expand and mature the Phase I implementation to additional questions, use cases, and classes of manufacturing systems. The result should be a prototype of a commercially viable software solution.
NIST staff may be available for consultation, input, and discussion, as needed.
Non-Destructive Testing Qualification of Complex Parts with Digital Image Correlation and Digital Signatures
Qualification of final parts is a much-discussed topic in additive manufacturing (AM). Process immaturity and the associated uncertainty of part quality in metal AM have led many to seek new methods for qualifying manufactured parts, especially those of high complexity and low-volume production. Manufacturers have defaulted to costly destructive testing methods, often misrepresenting final part properties by creating and testing dog-bone-shaped witness specimens. It is becoming progressively understood that, due to variations in processing and thermal history, witness specimens do not accurately represent the final characteristics of the manufactured part.
As an alternative to the often costly, destructive testing of AM parts, Non-Destructive Testing (NDT) methods are increasingly part of AM part qualification. The most common methods focus on high-resolution scanning, both internal and external, using methods such as X-ray Computed Tomography (X-ray CT). While these methods support detailed, fine-grained inspections, they also generate large amounts of data that are not readily interpreted. It is not always known how the data should be interpreted to qualify a part against its intended functionality, or how a part might perform under intended loading conditions. This subtopic focuses on qualifying parts by leveraging these functional requirements and utilizing NDT techniques to develop repeatable “signatures” of parts. These signatures will be based on the mechanical response of a part’s critical geometry under designed loading conditions, thereby qualifying a part against its intended use.
This subtopic addresses gaps in current qualification methods, specifically the need to address performance uncertainty introduced when manufacturing a part with AM processes. New methods are needed for the NDT qualification of complex parts to complement current predictive modeling, destructive, and non-destructive qualification methods. These methods should be robust enough to apply to any complex geometry while being mature enough to qualify against a part’s intended functionality.
The subtopic seeks to leverage high-resolution Digital Image Correlation (DIC) technologies and digital signatures [2,3] to evaluate mechanical responses under critical loading conditions at critical geometries of complex parts. By establishing acceptable loading thresholds and performance expectations, benchmark measurements taken from qualified parts can be compared with those of parts yet to be qualified under simulated loading conditions. The digital signatures of the qualified parts can then be used to verify and validate untested parts through DIC technologies and signature verification. Applications of such qualification techniques include NDT testing for deliberately introduced defects in AM-manufactured parts, a major concern in AM security logistics.
The topics addressed under this subtopic directly relate to NIST’s goal of supporting U.S. commerce through measurement science. Successful development of the techniques will provide industry an affordable means of qualifying complex AM parts through advanced imaging and modeling technologies while maintaining manageable amounts of data generation.
The goal of this project is to develop and validate novel NDT methods that are well suited for the low-volume, high-complexity, high-cost production requirements of additive manufacturing. Methods should include novel, high-resolution digital image correlation techniques that can be applied to any complex geometry (including lattices). Based on predictive analytics and modeling, the methods will identify functionally critical locations and establish correlations between critical geometries in the design and the surface points from which deformations will be measured. The stress and strain measurements and analysis results around these critical point locations will then be used to form a repeatable, function-based response “signature.”
This project will develop new methods for qualifying functional parts based on DIC technologies and functional loading requirements. Goals include establishing the following methods:
• Based on actual loading conditions, a method will be developed for identifying critical surfaces and volumes on a functional part.
• A method will be developed to correlate actual loading conditions with simulated loading. The method will use simplified loading conditions to simulate the stresses observed at critical surfaces and volumes under complex loading conditions. The method will demonstrate that the loading experienced by the critical geometries of complex parts/shapes under actual loading conditions can be simplified so that the same loading conditions can be simulated in a lab environment. The method will be validated by demonstrating consistent mechanical testing results and alignment between simulated and actual loading conditions.
• A method will be developed that utilizes high-resolution DIC to observe and analyze previously identified critical surfaces/volumes. Results of the DIC related to stress and strain will then be mapped to the 3D geometry, providing a “DIC signature” that can be benchmarked to establish the expected performance of a given material and geometry for given loading conditions.
• Methods will be established that correlate the known responses of qualified parts with the “DIC signatures” of parts yet to be qualified. Methods and thresholds will be established that allow simulated loading conditions and corresponding DIC to attain “DIC signatures” of unqualified parts. These signatures will then be verified and validated against the signature of the qualified part. The “signature verification” method will be validated by demonstrating consistent results across “DIC signatures” for a given geometry, loading, and uncertainty threshold.
In concert, these methods will support the establishment of performance benchmarks for a given complex part. A benchmark signature will then be used to qualify remaining parts by comparing the established digital signatures with those obtained from other parts of the same material and geometry using high resolution DIC technologies.
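The benchmark-comparison step described above reduces to a simple numeric check once the DIC signatures are expressed as strain values at matched critical points. A minimal sketch, with illustrative strain values and an assumed acceptance threshold:

```python
# Sketch of the "signature verification" step: compare a candidate part's
# strain signature against a qualified benchmark at the same critical
# points, accepting the part if the RMS deviation stays under a threshold.
# The strain values and the threshold are illustrative assumptions.
import math

def rms_deviation(benchmark, candidate):
    """Root-mean-square difference between two point-matched signatures."""
    assert len(benchmark) == len(candidate)
    return math.sqrt(sum((b - c) ** 2 for b, c in zip(benchmark, candidate))
                     / len(benchmark))

def qualifies(benchmark, candidate, threshold):
    return rms_deviation(benchmark, candidate) <= threshold

benchmark = [0.0010, 0.0024, 0.0031, 0.0018]   # strains at critical points
good_part = [0.0011, 0.0023, 0.0030, 0.0019]   # close to benchmark
bad_part  = [0.0025, 0.0040, 0.0055, 0.0031]   # deviates well beyond it
```

A real implementation would compare full strain fields mapped onto the 3D geometry rather than a handful of scalars, and the threshold would be derived from the measurement uncertainty budget.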
Phase I expected results:
1. Validated method that induces the response of critical geometries (surfaces/volumes) experienced under actual loading conditions in a simplified, simulated lab environment.
2. Development and demonstration of a high-resolution DIC method that can be used to attain the stress maps around critical part geometries.
3. Validated method that demonstrates repeatability in high-resolution DIC on the critical geometries of complex shapes.
4. Validated method that matches benchmark “DIC signatures” consistently against parts with the same material, geometry, and loading.
5. Validated method that demonstrates that parts qualified using “DIC signatures” perform within acceptable thresholds and uncertainty, as predicted when compared against the performance of the benchmark part.
Phase II expected results:
1. Identified conditions on which the DIC method can be applied, demonstration of the limitations of the approach.
2. Demonstrated DIC using embedded markers and other imaging technologies to overcome some of the identified limitations.
3. Development of a prototype software package that supports the integration of the developed methods, including DIC, obtaining DIC signatures, and DIC signature validation.
NIST may be available to work collaboratively and provide technical direction and consultation.
Parallel Algorithms for Processing Huge Sparsely Labeled Datasets on Clusters of Multicore Processors for Healthcare and Manufacturing Applications
Many healthcare and manufacturing applications are unavoidably spatiotemporal and generate large amounts of data. Expert interaction and measurement costs for these datasets imply that only a very small fraction of the data can be labeled such that models or results are presented in a form that supports human understanding. In general, the underlying spatial and spatiotemporal relationships in these data can be represented as three-dimensional grid or graph structures. Traditional machine learning algorithms are not applicable, as they tend to be sample-based, requiring labeling of a significant fraction of the data. Therefore, a pressing need exists for algorithms that can handle very large datasets with only a fraction of the data labeled. Additionally, the data volume and speed of data acquisition require that such algorithms effectively exploit networked multicore, GPU, and parallel computing resources. The underlying technology should have broad applicability in spatiotemporal big data applications.
The goal of the proposed research is to develop fast, parallel, semi-supervised machine learning (ML) algorithms that address the challenges of very large datasets and applications in the healthcare and manufacturing domains. Such ML algorithms should be effective for datasets having millions to billions of data points, with only a few thousand data points labeled.
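One classic semi-supervised approach that fits this setting is label propagation over the graph structure: labels known at a few seed nodes spread to their neighbors by majority vote. A minimal sketch on a toy graph (the real datasets would have millions to billions of nodes, and each sweep parallelizes naturally across nodes):

```python
# Sketch of semi-supervised label propagation: each unlabeled node takes
# the majority label of its labeled neighbors, repeated until the labeling
# stabilizes. The tiny graph below is illustrative only.
def propagate(adjacency, seed_labels, n_sweeps=20):
    labels = dict(seed_labels)
    for _ in range(n_sweeps):
        updates = {}
        for node, neighbors in adjacency.items():
            if node in seed_labels:
                continue  # ground-truth labels stay fixed
            votes = {}
            for nb in neighbors:
                if nb in labels:
                    votes[labels[nb]] = votes.get(labels[nb], 0) + 1
            if votes:
                # Majority vote; ties broken deterministically by label order.
                updates[node] = max(sorted(votes), key=votes.get)
        labels.update(updates)
    return labels

# Two clusters joined by one edge; only one node per cluster is labeled.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
         3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
seeds = {0: "A", 5: "B"}
labels = propagate(graph, seeds)
```

At scale, each sweep is an embarrassingly parallel pass over the node set, which is why graph-based semi-supervised methods map well onto multicore and GPU clusters; the research challenge is doing this efficiently when the graph does not fit on one machine.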
Phase I expected results:
Develop novel machine learning algorithms that can work effectively for sparsely labeled datasets. Demonstrate their parallelization capability on hundreds of traditional cores.
Phase II expected results:
Demonstrate the effectiveness of machine learning algorithms developed in Phase I on real-world applications. Demonstrate scalability of these algorithms on large clusters of GPU and multicore machines. Phase II results should lead to a commercialization path for the technology applications in specific domains (for example, healthcare and manufacturing).
NIST may be available to work both in consultative and collaborative capacity in assisting the awardee.
NIST is developing a fundamentally new approach for electric (E) field measurements [9]. The probe is based on the interaction of RF fields with Rydberg atoms: alkali atoms placed in atomic vapor cells are excited optically to Rydberg states, and an applied RF field alters the resonant state of the atoms. The Rydberg atoms act like an RF-to-optical transducer, converting an RF E-field to an optical-frequency response. The probe utilizes the concept of Electromagnetically Induced Transparency (EIT), where the RF transition in the four-level atomic system causes a split of the EIT transmission spectrum for the probe laser. This splitting is easily measured and is directly proportional to the applied RF field amplitude, so measuring it yields a direct measurement of the RF E-field strength. The significant dipole response of Rydberg atoms over the GHz regime enables this technique to make direct SI-traceable measurements over a large frequency band, including 400 MHz to 500 GHz. One of the main contributions to the uncertainties in the approach is the perturbation of the measured E-field caused by the size and shape of the vapor cell. Fundamental to the development of this technique is having specially designed vapor cells that minimally perturb the RF field being measured.
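The field-strength calculation implied above is short: the Autler-Townes splitting of the EIT line, Δf = Ω/2π with Rabi frequency Ω = ℘E/ħ, gives |E| = h·Δf/℘. The dipole moment in this sketch is an illustrative assumption; real values depend on the specific Rydberg transition used:

```python
# Sketch of the direct, SI-traceable field calculation: measured EIT
# splitting Δf (Hz) and transition dipole moment ℘ (C·m) give the field
# amplitude |E| = h·Δf/℘. The 1000 e·a0 dipole moment and 10 MHz splitting
# below are illustrative assumptions, not measured values.
H = 6.62607015e-34       # Planck constant, J·s (exact, SI 2019)
EA0 = 8.478353625e-30    # atomic unit of electric dipole moment e·a0, C·m

def e_field_from_splitting(delta_f_hz, dipole_moment_cm):
    """E-field amplitude (V/m) from a measured EIT splitting (Hz)."""
    return H * delta_f_hz / dipole_moment_cm

# Assume a 1000 e·a0 Rydberg transition dipole and a 10 MHz splitting.
E = e_field_from_splitting(10e6, 1000 * EA0)
```

Because the result depends only on the Planck constant and an atomic property that can be calculated, the measurement is traceable to the SI without a calibrated reference antenna, which is the central appeal of the technique.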
Project goals include the development of compact atomic vapor cells (filled with Rb and/or Cs) suitable for atom-based E-field metrology. Ideally, the vapor cell should be on the order of a few mm and have no stems (which are commonly found in current technology) in order to have minimal RF perturbation. These new cells should also include hollow-core fiber vapor-cell designs.
Phase I expected results:
Design and simulate compact vapor cells. The vapor cell should be on the order of a few mm and have no stems (which are commonly found in current technology). Successful designs should be compact and minimally perturb the applied RF field. Results of full three-dimensional RF simulations at a range of frequencies are required in order to evaluate the impact of the vapor cell on the measured field.
Phase II expected results:
Phase II expected results include using the compact vapor-cell designs from Phase I to develop fiber-coupled vapor cells. These vapor cells need to contain pure atomic vapor (no buffer gases) and demonstrate a ground-state absorption of 40% or better; EIT must also be demonstrated in the cell. Successful designs should be compact and minimally perturb the applied RF field. Results of full three-dimensional RF simulations are required in order to evaluate the impact of the vapor cell on the measured field. Also, in Phase II, hollow-core fiber vapor cells will be designed. Prototypes will be filled with either Cs or Rb and need to demonstrate EIT.
NIST will use its facilities to test the designs.
Nanoscale calibration spheres serve an essential role in dimensional metrology as a means of in-situ size calibration for industrial, scientific, and medical applications. Modern usage has increased with the importance of detecting and characterizing nanoparticle materials in the environment. NIST has long been a source of particle standard reference materials such as polystyrene latex (PSL) spheres [Refs. 1-4], but the quality of its presently available items has not kept pace with current applications and instrumentation. The unmet need is that commercial production methods for PSLs are based on 1950s-era R&D, sufficient for industrial processes but not optimized, or appropriate, for current application requirements as nanoscale size transfer standards. The consequence is that the average size, size distribution, shape, and material properties vary considerably and rapidly degrade as particle size decreases, especially below 100 nm diameter.
There are many commercial vendors of calibration spheres, and they rely on NIST reference materials for traceability. Deficient reference materials, however, result in poor metrological quality, i.e., poor traceability and large uncertainty, since the precision and accuracy of the measurement method are influenced by sample inhomogeneity.
In the interest of improving the practice of dimensional metrology, NIST recognizes that reviving R&D efforts in manufacturing of calibration spheres is essential for fulfilling NIST’s mission of disseminating the unit of length.
This subtopic focuses on significantly improving manufacturing methods for PSL spheres by the technique of emulsion polymerization. This industrially useful process has proven to be quite robust; nevertheless, it suffers from numerous secondary reactions which lead to highly polydisperse size distributions particularly for small average particle sizes. That the reaction conditions can be optimized to produce a very narrow, monodisperse size distribution was demonstrated in the production of the discontinued 100 nm diameter NIST SRM 1963 [Ref. 2]. This excellent size distribution was created by accident and has not been duplicated, either by the original manufacturer, JSR (Japan), or any other commercial vendor. However, it does offer a proof of principle and the historical literature offers numerous hints at alternative process conditions and choices of components which could result in the understanding and control necessary to synthesize PSL particles in the range of 20 nm to 100 nm diameter with a size distribution as good as or better than NIST SRM 1963. The project goal is the commercial realization of such high quality PSL calibration spheres for use as transfer standards by NIST and other national measurement institutes and for more general uses in industry and technology fields.
The expected outcome of Phase I is twofold. First, exploration of process control parameters and chemical constituents of the emulsion polymerization system will be undertaken, and rational understanding and control of the PSL particle size distribution parameters will be demonstrated. Multiple sizes below 100 nm will be synthesized and shown to possess a size distribution close to that of NIST SRM 1963. Second, metrics to allow statistically significant comparison of the particle size distribution to SRM 1963 will be developed and validated.
The expected outcome of Phase II will be the production of prototype amounts of PSL samples over the range of 20 nm to 100 nm diameter for which the particle size distribution is as good as or better than that of NIST SRM 1963. In addition, the stability and purity of the samples will be assessed for suitability as a viable commercial product.
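One simple form the Phase I comparison metrics could take is a summary of each batch by mean diameter and coefficient of variation (CV), compared against a reference CV. The numbers below are illustrative; the actual SRM 1963 distribution parameters would come from its certificate:

```python
# Sketch of a batch-comparison metric: summarize a sample of measured
# particle diameters by mean and coefficient of variation (CV), then
# check whether the batch is at least as monodisperse as a reference.
# Diameter values and the reference CV are illustrative assumptions.
import statistics

def size_metrics(diameters_nm):
    """Return (mean diameter, coefficient of variation) for a sample."""
    mean = statistics.mean(diameters_nm)
    cv = statistics.stdev(diameters_nm) / mean
    return mean, cv

def at_least_as_monodisperse(candidate_nm, reference_cv):
    _, cv = size_metrics(candidate_nm)
    return cv <= reference_cv

narrow_batch = [99.0, 100.0, 101.0, 100.0, 100.0]  # tight distribution
broad_batch = [90.0, 100.0, 110.0, 95.0, 105.0]    # polydisperse
```

A statistically rigorous comparison would also account for measurement uncertainty and sample size, e.g., via a two-sample distribution test, rather than comparing point estimates of the CV alone.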
NIST may be available to work with the awardee principally on choices of measurement and validation methods, such as atomic force microscopy, scanning electron microscopy, and dynamic light scattering.
Energy dispersive x-ray spectroscopy is a widely used materials analysis technique in industries such as metallurgy and semiconductor manufacturing. In particular, it is used to determine elemental compositions including trace components (e.g. yield-destroying, processed-wafer defects in large semiconductor fabrication lines). In recent years, energy dispersive x-ray spectroscopy using cryogenic microcalorimeter sensors has become commercially viable. Microcalorimeter sensors provide more than 10x better spectral resolving power than previous semiconducting detectors. This additional resolving power can be used to separate overlapping x-ray features, improve peak-to-background ratios for trace constituents, and observe so-called ‘chemical shifts’, x-ray features that identify the chemical state of a particular element. While commercial x-ray pulse processors exist for semiconducting detectors, there is an unmet need for a pulse processor compatible with the electrical signals produced by emerging arrays of superconducting microcalorimeter x-ray sensors. A complete microcalorimeter x-ray spectrometer consists of a cryostat, the sensors, their readout electronics, and a pulse processor. At the present time, all of these components are commercially available except an optimized pulse processor. Hence, the commercial development of a pulse processor will enable the widespread dissemination of microcalorimeters to the large analytical community that already uses energy dispersive x-ray spectroscopy. NIST has a strong interest in the development of such a pulse processor because of its connections to precision industrial analysis, especially semiconductor manufacturing, its ability to extend precision x-ray measurements to chemical composition, and NIST’s previous role in the commercialization of other aspects of microcalorimeter technology. 
The ability to more precisely quantify chemical composition via this technology is expected to enable analytical applications in a wide range of U.S. industries.
The central goal of this subtopic is the development of pulse processing electronics with the specifications described below:
1. Accept analog input signals spanning the range 0 V to 10 V.
2. Digitize analog input signals with sampling rates >= 200 kHz and >= 12 bits of resolution.
3. Filter digitized input signals to extract pulse heights with the highest possible signal-to-noise ratio. Pulse processing should produce energy resolutions better than 3 eV at 6 keV and better than 2 eV at 1.5 keV from sensors with sufficient intrinsic signal-to-noise to achieve those resolutions. Extracted pulse heights should be independent of the DC level of the pulse signals.
4. Accept pulse rise times in range 20 μs to 100 μs and pulse fall times in range 100 μs to 500 μs. Pulse decays are expected to be well modeled by a single exponential time constant.
5. Algorithms to extract accurate pulse heights in the presence of pulse pile-up are encouraged.
6. Algorithms to correct pulse heights for slow drifts in system response are encouraged. An example would be the application of a correction based on a previously measured correlation between baseline value and pulse height.
7. Pulse heights measured in electrical units shall be converted into absolute energy by means of a user-supplied non-linear calibration curve. The calibration curve is expected to be a second-order polynomial or a more complicated functional form. Provisions for a separate operating mode to determine the calibration curve from spectra of a reference material or materials are encouraged.
8. Processed, calibrated pulse heights shall be output in a format that is accepted by the software environment of one or more vendors of tools for energy dispersive x-ray spectroscopy on scanning electron microscopes. When possible, the number of output pulse height channels shall reflect the improved spectral resolution provided by microcalorimeter sensors. For example, more than 3000 output channels are needed over the range 0 eV to 10 keV to avoid discarding useful energy information. Pulse heights shall be accompanied by time tags or other information needed to correlate x-ray results with the position of the microscope’s electron beam so as to generate x-ray maps of spatially varying materials.
9. Pulse processing shall be performed for at least 16 independent sensor channels.
10. User shall have the ability to look at a single co-added master spectrum or 16 individual spectra. User shall have the ability to exclude user-selected channels from the co-added master spectrum.
11. The pulse processing must be performed in real-time but a short latency is acceptable (< 3 sec).
12. Pulse processing shall be compatible with total input count rates > 10 kHz across 16 sensor channels.
13. Pulse processor shall compute and report deadtime.
14. Pulse processing architectures that can be scaled in the future to larger numbers of independent sensor channels (>16) are preferred. In this future vision, input signals might come in a single interleaved timestream as is produced by time-division SQUID multiplexing.
15. Applicant should either be a vendor of x-ray microcalorimeter spectrometers or should collaborate with a vendor of x-ray microcalorimeter spectrometers in order to be sure that the pulse processor is optimally matched to these instruments.
16. Algorithms that detect the loss of flux lock in the SQUID readout electronics and that can rapidly reset such electronics or signal that such a reset is needed are encouraged.
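To illustrate the DC-insensitive pulse-height extraction called for in items 3 and 4, the sketch below applies a least-squares (matched) filter built from a single-exponential pulse template. This is a minimal illustration, not a real-time implementation; the sampling rate, rise and fall times, and pulse heights are illustrative values chosen within the ranges above.

```python
import math

def make_template(n, dt, tau_rise, tau_fall):
    """Single-exponential-decay pulse template, normalized to unit peak."""
    raw = [math.exp(-i * dt / tau_fall) - math.exp(-i * dt / tau_rise)
           for i in range(n)]
    peak = max(raw)
    return [v / peak for v in raw]

def matched_filter_height(record, template):
    """Least-squares pulse height, insensitive to the record's DC level.

    Subtracting the template mean makes the estimator orthogonal to any
    constant baseline offset (items 3 and 4 above)."""
    m = sum(template) / len(template)
    t0 = [v - m for v in template]
    norm = sum(v * v for v in t0)
    return sum(a * b for a, b in zip(record, t0)) / norm

# Toy demonstration: 200 kHz sampling (dt = 5 us), 30 us rise, 200 us fall,
# with an arbitrary DC offset that must not bias the extracted height.
dt = 5e-6
tmpl = make_template(400, dt, 30e-6, 200e-6)
true_height = 2.5
dc_offset = 1.7
record = [dc_offset + true_height * v for v in tmpl]
est = matched_filter_height(record, tmpl)
```

Because the mean-subtracted template has zero inner product with any constant offset, the estimate recovers the true height regardless of baseline level; a production processor would additionally handle noise weighting, pile-up, and drift correction.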
Phase I expected results:
A full design study of the desired pulse processor is expected for Phase I. In addition, partial demonstrations of pulse processing functionality with actual microcalorimeter sensors or with previously digitized and recorded data from microcalorimeter sensors are encouraged. These demonstrations can be performed at the single channel level.
Phase II expected results:
Construction and a full demonstration of the desired pulse processor at the 16 channel scale is expected for Phase II. This demonstration shall be performed with a microcalorimeter x-ray spectrometer. This demonstration shall include seamless integration of the pulse processor with a software environment for energy dispersive x-ray spectroscopy on electron microscopes.
NIST may be available to consult with the awardee on desired specifications and candidate algorithms for filtering, calibration, pileup, and drift correction; to provide previously digitized microcalorimeter pulse data; and to exchange site visits for technical discussions and system demonstrations.
Atomic Force Microscopy (AFM) is a nanoscale metrology technique essential for both nanoscience and nanotechnology research as well as nanoscale structure and device manufacturing. Decreasing the mass and size of AFM cantilevers both improves the speed of their response and decreases thermal mechanical noise by decreasing drag when operated in air or fluid environments. In both cases the measurement quality and throughput can be increased. Currently such miniaturization is limited by the difficulty of realizing precision measurement of the motion of such cantilevers.
NIST research has demonstrated a way to overcome this difficulty by integrating nanophotonic resonators in close proximity to extremely small cantilevers with masses below 1 pg. These allow motion measurements with a noise floor below 1 fm/Hz^0.5 and a force noise floor of a few fN/Hz^0.5 in air, and a similarly low force noise in water. High resonance frequencies enable the devices to respond on sub-microsecond time scales.
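The connection between cantilever mass, damping, and force noise follows from the fluctuation-dissipation theorem. The sketch below evaluates the standard thermomechanical force noise expression sqrt(4 k_B T m ω0 / Q); the resonance frequency and quality factor used are assumed illustrative values, not NIST device parameters.

```python
import math

# Physical constants and illustrative parameters (assumed, not from the
# solicitation): a 1 pg cantilever at a few MHz with Q ~ 100 in air.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T   = 295.0          # temperature, K
m   = 1e-15          # modal mass, kg (1 pg)
f0  = 3e6            # resonance frequency, Hz (assumed)
Q   = 100.0          # quality factor in air (assumed)

# Fluctuation-dissipation theorem for a damped harmonic oscillator:
# sqrt(S_F) = sqrt(4 * k_B * T * m * omega0 / Q)
omega0 = 2 * math.pi * f0
force_noise = math.sqrt(4 * k_B * T * m * omega0 / Q)   # N / Hz^0.5

print(f"thermomechanical force noise ~ {force_noise * 1e15:.2f} fN/Hz^0.5")
```

With these assumed parameters the result lands in the few-fN/Hz^0.5 regime quoted above, showing why reducing modal mass (at fixed resonance frequency and Q) directly lowers the force noise floor.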
Currently such probes are economically difficult to produce in quantity, and are hard to integrate with commercial atomic force microscopes. Further development and commercialization of such probes, together with the development of the associated measurement techniques, will enable large improvements in the state of the art of AFM metrology, and its dissemination to a broader community. NIST in general, and the Center for Nanoscale Science and Technology (CNST) in particular, is interested in enabling innovative commercial research to achieve such improvements in AFM metrology, and in fulfilling its mission to support the U.S. nanotechnology enterprise from discovery to production by providing industry, academia, NIST, and other government agencies with access to nanoscale measurement and fabrication methods and technology.
The general goals of this subtopic are to increase the quality, availability, and throughput of high-precision AFM metrology by private sector commercialization of an innovative, NIST-developed measurement technology. In addition, this project should use small business to meet federal research and development needs, and to stimulate small business innovation in technology. The specific goals of this project are to develop nanoscale AFM cantilever probes with integrated nanophotonic cavity optomechanical high-precision readout and to develop and establish a manufacturing process that enables their mass production.
Similar to current commercial AFM cantilever probes, the new probes should be available in a variety of stiffnesses. They should be compatible with commercial AFMs and easy to exchange. They should operate in the optical telecommunication wavelengths range and allow quick and efficient coupling to commercial single-mode optical fiber excitation and detection systems with minimal mechanical alignment.
Development of such probes will meet an immediate need for these devices at NIST and enable and facilitate multiple AFM metrology research projects at CNST. Commercialization of this technology is expected to result in significant advances in the commercial state of the art in AFM instrumentation, with a further positive impact on U.S. nanotechnology enterprise as a whole.
Phase I expected results:
Demonstration of key elements of the proposed probe design and fabrication process. Specifically, demonstration of the feasibility of fabricating all on-chip elements required for: 1) single-mode fiberoptic connection (such as single-mode fiber pigtailing or another optical connection approach); 2) exposing the probe tip for interaction with a sample (such as, but not necessarily, overhanging the edge of the chip); 3) batch fabricating probe chips on 100 mm or 150 mm wafers (this may include e-beam lithography).
The demonstrated approach should be suitable for probes with modal mass below 1 pg, stiffness in the sense direction of 1 N/m to 100 N/m, and optomechanical readout with a noise floor below 1 fm/Hz^0.5. The sense direction should be near-normal to the sample surface. NIST is not looking for a detailed optomechanical probe structure design, but is seeking to establish a clear path to batch fabrication on a wafer scale. A sufficient feasibility demonstration will require fabricating simplified test structures to demonstrate key fabrication process steps (such as overhanging the probe and/or fiber coupling), and a theoretical study specifying a detailed fabrication process flow.
Phase II expected results:
Implementation and demonstration of batch fabrication of functional AFM probes, and supply of functional probes to NIST for validation. Optimization of the probe design for improved performance. The probes shall have a modal mass below 1 pg and optomechanical readout noise below 1 fm/Hz^0.5 with less than 50 mW incident optical power. Probes shall be available in several different designs covering a total stiffness range in the sense direction of at least 1 N/m to 100 N/m. NIST expects the awardee to have established a commercial supply of these probes at the end of the program.
NIST will provide full data on the current NIST AFM probe design and performance, and details of the currently used fabrication process and its implementation in the NIST CNST NanoFab, including recipes for all steps. Some of these steps are not currently compatible with batch fabrication, and NIST seeks to remedy this. NIST will provide full details of the current approach to assembly and integration of these probes into a commercial AFM instrument. NIST may be available for regular discussions/consultations regarding optomechanical design, all other aspects of fabrication, and use of the probes in AFM. In Phase II, NIST expects to test the early prototype probes and final supplied probes in our fiber-optical sensing and AFM systems. NIST will implement the changes necessary to integrate and test the supplied probes in our AFM (such as custom adaptors or fiberoptic couplers, if necessary).
Measurements of high resistance standards (10 MΩ to 100 TΩ) and high voltage resistive dividers (100 MΩ to 225 MΩ) are made using modified Wheatstone bridge techniques. The most accurate technique for high resistance uses programmable dc voltage sources in the main ratio arms of a modified Wheatstone bridge. Programmable bipolar dc voltage sources with ranges from 220 mV to 1100 V, having expanded uncertainties (k=2) on the order of 6 μV/V, provide the best accuracies for high resistance measurement. Similar Wheatstone bridges are used to calibrate high voltage dividers at voltages ranging from 10 kV to 220 kV. The best high voltage sources available in this range have an expanded uncertainty of 100 μV/V. The only commercially available precision sources have uncertainties on the order of 500 μV/V in the 1 kV to 10 kV range, more than an order of magnitude greater than commercial sources covering the other voltage ranges (i.e., 1 kV and below). Test laboratories would benefit from the development of a commercial bipolar programmable dc voltage source with expanded uncertainties on the order of 6 μV/V to 10 μV/V to bridge this gap in measurement capability. Such a programmable dc voltage source would allow extension of the voltage range for calibration of high resistance standards and allow comparison of high voltage resistive dividers to high resistance standards in the 1 kV to 10 kV range with traceability to quantum resistance standards.
Commercial multifunction calibrators are used as the bipolar dc sources in high resistance bridges because they have state-of-the-art uncertainty for dc voltage on the 220 mV to 1100 V ranges. Unfortunately, as the name suggests, they have other functions, such as ac voltage, ac current, and resistance, that are not utilized for dc voltage applications and only add cost if they are not used. Commercial dc-only calibrators exceed the 1-year specification of multifunction calibrators by factors of 1.4 to 2.8 on the corresponding ranges of 220 mV to 1100 V. To make high resistance bridges more cost effective and therefore more widely used in standards laboratories, an alternative dc-voltage-only source with the same performance as that of state-of-the-art multifunction calibrators is needed.
The development of an alternative 1 kV dc bipolar voltage source technology that could be scaled to 10 kV would (1) improve performance in the 1 kV to 10 kV range and (2) provide a cost effective alternative to calibrators in the 220 mV to 1100 V range. This technology would be of interest to NIST as well as other standards and commercial laboratories.
Technologies that would scale from the existing voltage range of 1 kV up to 10 kV are desired. A programmable bipolar dc voltage source that has linearity error of less than 1 μV/V over the range 1 kV to 10 kV would be complementary to the state of the art sources at 1 kV. State-of-the-art voltage sources have expanded uncertainties of 6 μV/V over a one-year interval at 1 kV. To be useful in automated bridges, the bipolar programmable dc voltage source would need to be controllable by IEEE-488.2 interface or USB interface from a computer just as other instrumentation in an automated bridge would be. Front panel display of the output state (i.e. energized or standby) and voltage would be necessary for safe operation in a laboratory environment.
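As a rough illustration of how programmable sources serve as the ratio arms of a modified Wheatstone bridge, the sketch below computes an unknown resistance from the source voltages at detector null and propagates the 6 μV/V source uncertainty. The 10:1 ratio and resistor values are hypothetical examples, not solicitation requirements.

```python
import math

def bridge_resistance(r_std, v_x, v_std):
    """Dual-source bridge arm: at detector null, V_x/R_x + V_std/R_std = 0,
    so R_x = -R_std * V_x / V_std (the two sources have opposite polarity)."""
    return -r_std * v_x / v_std

# Hypothetical 10:1 measurement: a 100 MOhm unknown against a 10 MOhm
# standard, driven by +100 V and -10 V programmable sources.
r_std = 10e6
r_x = bridge_resistance(r_std, 100.0, -10.0)   # 100 MOhm

# Combined relative uncertainty contributed by two uncorrelated 6 uV/V
# sources (root-sum-of-squares of the voltage-ratio terms):
u_source = 6e-6
u_ratio = math.sqrt(2) * u_source   # ~8.5 parts in 10^6
```

This illustrates why the source uncertainty dominates the bridge budget: improving the 1 kV to 10 kV sources from ~500 μV/V to the 6 μV/V to 10 μV/V level directly improves the achievable resistance-ratio uncertainty.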
Phase I expected results: Design a bipolar programmable dc voltage source that extends the range from 1 kV to 10 kV and meets the project goals described above.
Phase II expected results: Construct a prototype 1 kV to 10 kV bipolar programmable dc voltage source based on the design of Phase I and demonstrate that the specifications and performance described above are met.
NIST may be available to work collaboratively on design concepts, discuss goals, and to aid in prototype evaluation.
There is great potential for improved thermometry and new applications with the development of a scalable primary thermometer that can replace the ITS-90 temperature scale, which consists of fixed-point artifact standards. A very promising approach is a purely electronic temperature standard that exploits Johnson noise thermometry (JNT) and a quantum voltage noise source (QVNS) based on pulse-driven Josephson junctions made with high-temperature superconductors (HTS). This novel primary thermometer will enable a new paradigm for disseminating temperature that, for the first time, will be independent of artifact standards. Any national, military, or corporate metrology laboratory could then possess and operate a single primary thermodynamic temperature standard and calibrate a wide range of secondary thermometers over the range from ~230 K to 1300 K. This would essentially replace the ten defined “artifact standard” fixed points, including the triple point of water, that are based on the phase-change states of ten different materials. Additionally, the perfect scalability of the HTS JNT would eliminate the use of complex mathematical interpolation for measurement of temperatures between these points.
In addition to creating the world’s first quantum-accurate primary thermometer, development of an HTS JNT would also enable improved accuracy in industrial point-of-use applications. These include embedded sensors in extreme environments like nuclear or industrial furnaces, where installed probes can continue to be used in place with only an in-situ resistance measurement, which reduces downtime and improves process control (as opposed to artifact probes, which require periodic replacement). Successful execution of this project will lead to practical application of the first temperature metrology systems incorporating quantum-based voltage synthesizers, elimination of many levels of the temperature calibration chain, and placement with immediate users of the world’s first intrinsically accurate, scalable, primary thermodynamic thermometer.
National Measurement Institutes (NMIs) and corporate metrology labs maintain the international temperature scale (ITS-90) with a series of fixed points and Standard Platinum Resistance Thermometers (SPRTs). Operating all of these artifact standards is complicated, labor intensive, and requires a large amount of both capital equipment and floor space. Calibrating temperatures between the fixed points requires interpolation, which increases uncertainty. A quantum-based electronic primary thermometer is intrinsically linear and programmable. As a single standard it can replace all of these instruments and it can also measure arbitrary temperatures, not just the fixed points, enabling application-specific calibration protocols. NIST currently leads the world in Johnson noise thermometry with a research system designed to measure Boltzmann’s constant using a 4 K quantum voltage noise source (QVNS) chip made of niobium-based junctions and custom-built bias and measurement electronics.
The subtopic goal is to replace the current QVNS chip, which operates at 4 K in a liquid helium dewar, with an HTS QVNS on a compact (smaller than one liter), closed-cycle cryocooler. A successful development under this project could enable direct conversion of the research JNT system into an automated programmable primary standard that can be operated by non-experts.
The goal can be met through the following approach:
1) Develop a compact, closed-cycle system operating at 77 K that requires less than 100 W, preferably using a commercially available cryocooler. This system should be capable of providing DC and RF connections to the chip for proper operation. The electrical connections must be optimized to minimize electrical loss and minimize the thermal impact on the chip. The system will need the development of chip packaging and a cryostat microwave design for 30 GHz pulse-drive bias.
2) Develop a HTS QVNS chip that synthesizes accurate voltage waveforms at a temperature of 77 K. This will require the design, fabrication, and testing of the chip. As few as 4 junctions are required for this device, so several possible junction fabrication methods could be used.
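For orientation, Johnson noise thermometry rests on the Nyquist relation S_V = 4 k_B T R; in the JNT/QVNS scheme, the measured sensor noise power is referenced to a calculable pseudo-noise waveform synthesized by the Josephson chip. A minimal sketch of the underlying relation, with assumed illustrative values:

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_temperature(psd, resistance):
    """Invert the Nyquist relation S_V = 4 * k_B * T * R for temperature.

    In the JNT/QVNS scheme, the sensor noise power spectral density is
    measured as a ratio to a calculable quantum-synthesized waveform,
    making the result traceable without artifact standards."""
    return psd / (4 * k_B * resistance)

# Illustrative round trip (assumed values): a 100 Ohm sensor at 273.16 K
# produces a noise PSD of about 1.5e-18 V^2/Hz.
R = 100.0
S_V = 4 * k_B * 273.16 * R
T = johnson_temperature(S_V, R)
```

The tiny signal levels implied by this relation are why the readout chain, chip packaging, and microwave bias design in items 1 and 2 above are the critical engineering challenges.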
Phase I expected results:
The Phase I goal would be a complete system design for a compact closed-cycle system and the design and fabrication of a HTS QVNS chip.
Phase II expected results:
The Phase II goal would be to build a laboratory demonstration of a complete system with a working HTS QVNS chip.
NIST staff may be available to participate in discussions and provide input on the awardee’s design during the development process through email, teleconference or face-to-face visits. NIST may also be available to do testing, although the awardee will not be able to count this testing towards the testing requirements of Phase II. NIST scientists may be available for demonstration of the device at the awardee’s home site, if desired.
Steel Corrosion Detection Technology Using THz Waves: A Field-operable Unit Based on NIST Spectroscopic Technology
Corrosion of steel costs the U.S. several hundred billion dollars per year. Early detection of this corrosion, most of which is buried under some kind of protective coating like concrete or polymers, would reduce this remediation cost and improve the safety of infrastructure and factories. Current detection methods cannot identify a particular corrosion product, only the presence of something, and they perform especially poorly at early stages when not much corrosion is present. NIST had a Corrosion Detection Innovative Measurement Science project from 2010-2014, in which a 0.1 THz to 1 THz wave technology based on antiferromagnetic resonance (AFMR) detection was successfully developed. Two of the most common iron corrosion products, hematite and goethite, are antiferromagnetic, and thus can be detected by this method: hematite through several centimeters of concrete and goethite through similar or greater thicknesses of polymer layers. This laboratory-based technology needs to be taken to the field in order to have practical commercial use. Discussions with government and industry indicate that there is certainly a large need for this technology, and apparently a large commercial market exists, too. NIST desires to see this technology commercialized, which involves some technical challenges in translating the laboratory technology into the field. The main technical challenges are probably using real corrosion products, getting enough power through the barrier, especially concrete layers, to see the AFMR, and locally controlling the temperature of the specific material site of interest (where the beam is impacting).
The goal of this subtopic is to demonstrate a field-operable measurement system, using 0.1 THz to 1 THz waves and NIST technology, that identifies the presence of the iron corrosion compounds hematite and goethite under a variety of protective coverings, including concrete (hematite) and polymers (hematite and goethite). NIST has demonstrated this technology in the laboratory; the goal is to move this technology into the field for application to corrosion detection problems in factories and physical infrastructure. The ability to detect specific iron corrosion products through protective barriers is an unmet need in the U.S.
Phase I expected results: Demonstration of the feasibility of taking the laboratory-based technology to the field, with identification of the technical problems that need to be solved and the current equipment that is available for doing so. A plan for how the field equipment would be operated and a list of successful sample applications in the laboratory is part of the Phase I expected results.
Phase II expected results: Demonstration, in the field and on an important application, of a portable antiferromagnetic resonance-based THz system for corrosion detection of hematite and goethite. This system should be capable of being commercialized.
NIST may be available to provide technical guidance, comments and advice on design concepts, discussion of technical problems, and previous lab data.
High-performance filters can be enabled by operation at cryogenic temperatures. The lower loss associated with superconductors can enable unique functionality, such as power-limiting behavior and extremely high selectivity. These two unique characteristics can be combined to develop, for example, auto-limiting multiplexers, which can be used to excise large in-band interference signals without attenuating smaller signals of interest within a given frequency band. The further development of superconducting filters is of interest both for analog signal processing applications such as multiplexers, and for control of signals generated by precision microwave sources. Auto-limiting circuitry based on superconducting devices has been under development at NIST for a number of years, and commercialization of the technology has the potential to improve the performance of sensitive RF and microwave receivers. The results of this research can also be applied to the high-performance filtering necessary in the development of precision microwave sources based on superconductor technology.
Subtopic goals include the development of compact superconductor filters for application in auto-limiting multiplexers and precision sources based on superconductor technologies. For signal-limiting applications, there is particular interest in frequency bands centered around 1 GHz and 3.5 GHz. The filters are expected to be implemented in microstrip technology on microwave-friendly substrates such as sapphire, and packaged for operation on closed-cycle cryocooler-based platforms.
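As a back-of-the-envelope check on the selectivity implied by a 20 MHz passband at 3.5 GHz (a fractional bandwidth of roughly 0.6 %), the sketch below evaluates an ideal Butterworth bandpass response. The filter order is an assumed illustrative value, not a design requirement; a real superconducting microstrip design would be simulated electromagnetically.

```python
import math

def bandpass_gain_db(f, f0, bw, n):
    """Magnitude (dB) of an ideal n-pole Butterworth bandpass centered at f0
    with 3 dB bandwidth bw (standard lowpass-to-bandpass transformation)."""
    x = (f**2 - f0**2) / (f * bw)
    return -10 * math.log10(1 + x**(2 * n))

f0, bw, n = 3.5e9, 20e6, 6   # 3.5 GHz center, 20 MHz bandwidth; order n assumed

# Geometrically symmetric band edges satisfy f1 * f2 = f0^2 and f2 - f1 = bw.
f2 = bw / 2 + math.sqrt((bw / 2)**2 + f0**2)
f1 = f2 - bw

edge_lo = bandpass_gain_db(f1, f0, bw, n)          # ~ -3.01 dB
edge_hi = bandpass_gain_db(f2, f0, bw, n)          # ~ -3.01 dB
skirt = bandpass_gain_db(f0 + 5 * bw, f0, bw, n)   # deep out-of-band rejection
```

Even this idealized response shows rejection beyond 100 dB a few bandwidths away from the center, the kind of selectivity that the low loss of superconducting resonators makes physically achievable.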
Phase I expected results:
Designs and simulated results for compact microstrip filters based on superconductor technology with approximately 20 MHz (3 dB) bandwidth operating at 3.5 GHz. Successful designs should emphasize compact filter layouts, and be designed for high-temperature superconductors deposited on high-quality sapphire substrates. Results of simulations are required in order to evaluate the impact of Phase II work for fabrication and packaging of the most promising designs.
Phase II expected results:
Phase II expected results include fabricated superconductor filters for operation at 3.5 GHz with approximately 20 MHz (3 dB) bandwidth. Up to two packaged devices for operation at cryogenic temperatures on a compact closed-cycle cryocooler, as well as one or more unpackaged die, are required for detailed evaluation of performance in both the linear and nonlinear regimes.
Collaboration may be available in terms of discussion of design configurations, as well as simulated response; preliminary measurements of unpackaged devices that are amenable to wafer-probe style measurements; and support for advanced device modeling.
Since the 2009 NRC report, there has been a fundamental challenge in forensic science to establish a stronger scientific foundation and a statistical procedure for accurate firearm evidence identification and error rate estimation. To answer this challenge, researchers at NIST developed the Congruent Matching Cells (CMC) method [2,3], which is based on the principle of discretization: it divides the entire image into cells, then uses cell correlations to quantify the topography similarity and pattern congruency of the correlated images. This makes possible ballistics identification and error rate estimation based on objective methods.
In addition to the congruent matching cells (CMC) method for correlation of breech face images, NIST researchers have recently developed the congruent matching cross-sections (CMX) method for firing pin correlations, the congruent matching profile segments (CMPS) method for bullet correlations, and the congruent matching features (CMF) method for a similarity map that shows the similar and dissimilar areas on the correlated images. All validation tests for the above methods using known matching (KM) and known non-matching (KNM) images show clear separation between KM and KNM scores without any false identifications or false exclusions.
NIST has received requests from U.S. and international customers to provide a commercial system or source code based on the NIST-invented congruent matching methods to support their firearm evidence identification and error rate estimation. The goal of the proposed SBIR project is an Automated System for Firearm Evidence Identification and Error Rate Estimation. This will be a commercial system based on the NIST-invented CMC, CMX, CMPS, and CMF methods. It will be used for automatic and objective firearm evidence identification covering all ballistics images, including breech face, firing pin, and ejector marks of cartridge cases, and bullets. The output of the system will include an objective and conclusive result of identification or exclusion with false positive (for identifications) or false negative (for exclusions) error rate estimations. The system will also generate a similarity map to visualize the similar and dissimilar areas on the correlated image pairs. Both the error rate estimation and the similarity mapping will provide a powerful tool to support ballistics examiners in court proceedings.
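A highly simplified sketch of the discretization idea behind the CMC method [2,3]: divide the two images into cells, correlate corresponding cells, and count the cells whose correlation exceeds a threshold. The full method additionally requires congruent registration of the matching cells (consistent x, y, and rotation offsets), which is omitted here; the cell size and threshold below are illustrative assumptions.

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-size cells (flattened)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma)**2 for x in a) * sum((y - mb)**2 for y in b)) ** 0.5
    return num / den if den else 0.0

def cmc_score(img1, img2, cell, threshold=0.5):
    """Count cells whose pairwise correlation exceeds the threshold.

    The full CMC method also tests the congruency of each matching cell's
    registration; that step is omitted from this sketch."""
    rows, cols = len(img1), len(img1[0])
    count = 0
    for r in range(0, rows - cell + 1, cell):
        for c in range(0, cols - cell + 1, cell):
            a = [img1[i][j] for i in range(r, r + cell)
                 for j in range(c, c + cell)]
            b = [img2[i][j] for i in range(r, r + cell)
                 for j in range(c, c + cell)]
            if ncc(a, b) >= threshold:
                count += 1
    return count
```

For an identical pair of topography images every cell matches, while an anti-correlated pair yields a score of zero; the KM/KNM separation demanded in the validation tests comes from thresholding scores like this one together with the congruency tests.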
Phase I expected results:
• Based on the NIST-invented CMC method [2,3], conduct a feasibility study for the development of a commercial correlation software system, using C++, OpenCV, Java, Python, or other languages, with high correlation speed and accuracy for breech face image correlations.
• Validation tests using two sets of breech face images in NIST’s ballistics and toolmark research database, including:
- Fadul dataset with 40 images including 63 KM and 717 KNM image pairs;
- Weller dataset with 95 images including 370 KM and 4095 KNM image pairs.
The correlation results must show clear separation between KM and KNM scores without any false identifications or false exclusions.
Phase II expected results:
Based on the NIST-invented CMC [2,3], CMX, CMPS, and CMF methods, develop an automated system for automatic and objective identification of all ballistics images, including breech face, firing pin, and ejector marks of cartridge cases, and bullets.
• Validation tests for the developed commercial software using at least two sets of breech face, firing pin, and ejector images of cartridge cases and at least two sets of bullet images in NIST’s ballistics and toolmark research database. The correlation results must show clear separation between KM and KNM scores without any false identifications or false exclusions.
• A similarity map showing the similar and dissimilar areas in the correlated image pairs.
• A Statistical Fitting Program based on the NIST-proposed statistical procedures combined with an Error Rate Procedure [2,3], which can report the cumulative and individual error rates for both identification and exclusion conclusions using the CMC, CMX, CMPS, and CMF methods for bullet and cartridge case correlations.
Design Recovery Driven Porting and Refactoring of Fortran 77 Codes for Performance, Sustainability, and Consistency
NIST has a substantial code base written in Fortran 77 that will become very difficult to maintain as the generation of original developers retires; this issue was highlighted during the massive drive to update enterprise software systems to address the Y2K problem, and it will manifest itself again when the Unix “epoch” reaches its 32-bit limit in 2038. As such, there is a need for a tool that assists developers in modernizing this code base by (1) porting it to a more modern version of the language (e.g., Fortran 2008) or an altogether different language (e.g., C++), and (2) transforming the code into a system that is documented, maintainable, and far better performing. This task is simply too onerous to achieve manually without tool support if the system at hand exceeds a few thousand lines of code. Furthermore, this need is not unique to NIST, but is rather universal across many federal agencies and national labs.
The project should develop a new tool or extend an existing tool that will assist developers in modernizing Fortran 77 systems and port them to more modern versions of the language (e.g., Fortran 2003 or 2008) or to a completely unrelated programming language such as C++. It is important to note that this is not a purely syntactic translation and will serve to reveal the underlying algorithms and data structures, document them, and potentially replace them by alternatives that are functionally equivalent, but deliver higher performance because they have lower runtime complexity or can take advantage of hardware parallelism that is available in modern computing platforms. Furthermore, the infrastructure embedded in the tool can be used to provide novel functionalities such as annotating a program’s source code to augment quantities with units (as in SI units) and evaluating the consistency of a program’s use of units.
At a minimum, the tool should provide the following functionality:
• Identify idioms of the language (common blocks, GOTO statements, poorly specified prototypes…) that impede sustainability, and replace them with more appropriate programming constructs. This should happen under the guidance of the developer, who will specify the idioms to identify and how they will be replaced in the target language.
• Identify “dead code”.
• Support developers as they identify pieces of code that they want to refactor into separate routines. In doing so, the tool should automate many of the syntactic tasks associated with this refactoring (e.g., define the prototype of the new routine, declare variables in the new routine, remove unneeded declarations from the calling routine, and format the source code of the new routine).
• Identify code sections that were replicated via “cut-and-paste” operations or that differ by simple changes in variable names.
• Allow developers to document code via text and Boolean expressions that can be evaluated at compile time or even during runtime.
• Keep track of all transformations implemented by developers so they can replay them in full or selectively to regenerate the “new” version of the system or to generate a different version of the system that is targeted at a new platform.
• Analyze the transformations put in place by developers to keep track of dependencies between these transformations. This will allow the tool to notify developers that they are introducing transformations that will break existing assumptions or prevent other transformations from being applied.
• Instrument the code to identify computational bottlenecks so developers can reengineer the code to reveal the underlying algorithms and data structures and, if possible, replace them with alternatives that provide the same functionality but have better performance characteristics.
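As a rough illustration of the idiom-identification functionality listed above, the Python sketch below scans fixed-form Fortran 77 source for a few sustainability-impeding idioms. The specific patterns and the demo snippet are illustrative assumptions, not requirements of this solicitation:

```python
import re

# Illustrative regexes for Fortran 77 idioms that impede sustainability.
IDIOMS = {
    "common_block": re.compile(r"^\s*COMMON\b", re.IGNORECASE),
    "goto": re.compile(r"\bGO\s*TO\s+\d+", re.IGNORECASE),
    "implicit_typing": re.compile(r"^\s*IMPLICIT\s+(?!NONE)", re.IGNORECASE),
}

def scan_fortran(source: str):
    """Return a list of (line_no, idiom_name, line) hits for developer review."""
    hits = []
    for no, line in enumerate(source.splitlines(), start=1):
        if line[:1].upper() in ("C", "*"):  # fixed-form full-line comments
            continue
        for name, pattern in IDIOMS.items():
            if pattern.search(line):
                hits.append((no, name, line.strip()))
    return hits

demo = """      COMMON /BLK/ A, B
      IMPLICIT REAL (A-H,O-Z)
      GO TO 100
C     this comment mentions GO TO 100 but is skipped
"""
for hit in scan_fortran(demo):
    print(hit)
```

A production tool would use a real Fortran parser rather than regular expressions, but even this level of scanning supports the developer-guided identify-and-replace workflow described above.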
Phase I expected results:
The first phase should develop an architecture for the tool along with an implementation that provides a first set of functionalities. This first phase will be tested on a small system (e.g., 1,000–10,000 lines of Fortran 77 code).
Phase II expected results:
The second phase should provide an implementation with a complete set of functionalities. The tool will be tested on systems consisting of 10,000–100,000 lines of code. It will allow a “learned” developer to modernize a system consisting of 10,000–50,000 lines of code in a matter of weeks instead of months or years.
NIST may be available to consult with the awardee, discuss the problem and potential solutions, and evaluate the proposed implementations.
Digital forensic investigators often encounter structured data in an unknown format. So that an investigator can present accurate and complete evidence to the court, the data needs to be well understood. There has been at least one court case where an incomplete understanding of data recovered from a web browser log was misleading in court. A better understanding of the data structure involved could have prevented presentation of inaccurate testimony in this case.
There are several scenarios where such data can be encountered, e.g., a non-standard file system, a memory acquisition of an unknown OS version (or custom kernel), or a database with unknown schema. An investigator has few automated options for analysis of digital objects within an unstructured environment. For example, file carving is often productive for individual files with a known structure and a known signature identifying specific file types. An investigator can try to unravel the unknown object structure manually, but this is a tedious, error-prone process.
NIST would like to see development of one or more tools that can aid the understanding of the structure of a digital object. Such a tool would be useful for developers of forensic tools used by law enforcement and also for digital forensics research currently conducted at NIST.
There are many different ways to approach this problem. One way such a tool could work would be to take a baseline image of the object, apply a series of operations, and analyze the changes produced by each operation. Given a description of the applied operation and a list of information to look for within the changes, the tool can compare the states before and after the operation to infer some object parameter.
Depending on the type of data structure being reverse engineered, a set of questions can be posed. For example, reverse engineering a file system might proceed as follows:
• Create a single file, making note of the time. Identifying a list of differences between the state before creating the file and the state after creating the file creates a set of candidate locations for the file name and any meta-data such as recorded time values or file size.
• Append data to the file created above. This operation may reveal the location of a file size field.
• Create some more files. This could reveal the basic structure of directory entries and general layout. In general, the tools would make a small modification to a digital object with known data and then examine the raw object for changes including the known data.
This is just one of many possible approaches. A successful applicant is expected to be creative and innovative.
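The baseline-and-diff workflow sketched above can be illustrated in a few lines of Python; the image contents and offsets below are invented purely for the demonstration:

```python
def diff_images(before: bytes, after: bytes):
    """Return (start, end) byte ranges at which two equal-length images differ.

    Runs of consecutive changed offsets are collapsed into ranges; these
    ranges are the candidate locations for metadata touched by the
    operation performed between the two snapshots.
    """
    assert len(before) == len(after), "snapshots must be the same size"
    changed = [i for i, (a, b) in enumerate(zip(before, after)) if a != b]
    ranges = []
    for off in changed:
        if ranges and off == ranges[-1][1]:
            ranges[-1][1] = off + 1        # extend the current run
        else:
            ranges.append([off, off + 1])  # start a new run
    return [(s, e) for s, e in ranges]

# Simulate "create a file named NOTE.TXT" on a toy 64-byte image.
before = bytearray(64)
after = bytearray(before)
after[16:24] = b"NOTE.TXT"   # candidate file-name field
after[40] = 0x2A             # candidate timestamp/size byte
print(diff_images(bytes(before), bytes(after)))  # [(16, 24), (40, 41)]
```

On a real file system the diff would run over full device images and the known data (here, the file name) would be searched for within the changed ranges.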
Phase I expected results:
An architecture of and development plan for an automated method to discover the layout and structure of an unknown data object of interest to a digital forensics investigator.
Phase II expected results:
A demonstration version of a tool that, given an unknown data object, deduces the parts of the object relevant to a forensic exam. This tool will be marketable to law enforcement and forensic science entities.
NIST may be available to work collaboratively and for consultation, input and discussion.
The Internet of Things (IoT) increasingly appears to be the next great technology revolution. It is expected to impact everything from healthcare delivery, to how food is produced, to how we work, to all forms of transportation and communication, and to virtually all forms of automation. With that said, IoT will impact everyone, and in multiple ways.
With a technology revolution of such large impact on society, it is imperative that IoT-based systems can be trusted. This means that they should exhibit secure, reliable, and private behaviors, as well as many other attributes associated with quality [2, 4]. Privacy is particularly important because IoT-based systems will likely spin off huge amounts of data as a result of sensing and surveillance [1, 3, 4]. Therefore, techniques, tools, and methods to mitigate the numerous ‘trust’ challenges are needed before these automated IoT-based networks manage much of daily life.
Therefore, innovative research is needed to aid in answering the following question: “how can a network be trusted that was built based on the core principles of IoT?” These core principles include computing power, sensing, communication protocols and bandwidth between devices and objects, and actuation affecting the external systems that the IoT networks will control. The approaches sought could include testing techniques, formal methods, certification of devices, auditing and logging during operational usage, certification of networks, analysis of networks of things, and any other approach that addresses the question.
The goal of this subtopic is to facilitate the security, reliability, and privacy of clusters of networked IoT devices (NoTs) by securely auditing and logging their internal and external operations and data interactions in a scalable manner. The presence of an auditing system that can operate independently of any IoT vendor will foster IoT vendor interoperability and will steer technologies toward standards that enable auditing for both security and reliability of IoT systems. Furthermore, it will offer end-users operational transparency and will empower them to identify components that can be used together, thus improving the utility of IoT systems. Another advantage of auditing and logging is that they offer the ability to increase reliability and resilience without requiring major changes to the architectures of NoTs. Moreover, in the future, NIST envisions NoT platforms where individual devices and sensors become the enabling platform for third-party applications to offer services in the form of an application. Having a common auditing system for system operations will help identify and address reliability and security issues. NIST is specifically interested in applications for home automation, building access control, personal health, and NoT use cases that are deployed as part of the monitoring and control of functionality in critical infrastructures.
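One way to make such an audit log tamper-evident is to hash-chain its entries, so that modifying any logged operation after the fact breaks the chain on verification. The following Python sketch (device names and events are illustrative assumptions) shows the idea:

```python
import hashlib
import json

class AuditLog:
    """Append-only, hash-chained log of IoT device events.

    Each entry embeds the hash of its predecessor, so any after-the-fact
    modification of an entry is detectable when the chain is verified.
    """
    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis hash

    def append(self, device_id: str, event: dict, ts: float):
        record = {"device": device_id, "event": event, "ts": ts,
                  "prev": self._prev}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        record["hash"] = digest
        self.entries.append(record)
        self._prev = digest

    def verify(self) -> bool:
        prev = "0" * 64
        for rec in self.entries:
            body = {k: v for k, v in rec.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or recomputed != rec["hash"]:
                return False
            prev = rec["hash"]
        return True

log = AuditLog()
log.append("thermostat-01", {"op": "set_temp", "value": 21}, ts=1.0)
log.append("door-lock-02", {"op": "unlock"}, ts=2.0)
print(log.verify())                     # True
log.entries[0]["event"]["value"] = 30   # tamper with an old entry
print(log.verify())                     # False
```

A deployable system would add secure time-stamping, distribution across devices, and protection of the log store itself, but the chained-hash core is what gives log immutability in the sense used above.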
Phase I expected results:
Proof of concept using a simple network of IoT sensors, devices, and applications in one or two use cases showing how the innovation can produce a secure audit log of the overall IoT network operations. Also, show a system design of a functional prototype that can provide continuous reliable performance, log immutability, and protocol and device scalability.
Phase II expected results:
A full-scale prototype utility that can be applied to a more complex network of IoT sensors, devices, and applications for multiple use cases and for different vertical markets (e.g., healthcare, transportation, agriculture, etc.); a user’s manual for the innovation; and experimental results from applying the prototype should be produced.
NIST may be available for consultation, input, and discussions.
In time-lapse optical microscopy over multiple fields of view (FOVs), there is a trade-off between the spatial coverage that can be achieved at high spatial resolution and the temporal resolution. In other words, the larger the spatial coverage, the longer it takes to acquire all small FOVs to create one large FOV. Consequently, events in a specimen that are characterized by high rates of change, and thus require high temporal resolution, cannot be captured over a large FOV (e.g., at population levels).
To address this tradeoff, NIST is interested in designs (Phase I) and prototypes (Phase II) of an optical microscope that would enable large FOV imaging of dynamic events and meet the specifications provided by the project goals.
Included are a few references [1-4] that describe some of the published designs addressing the current tradeoff. Proposers are encouraged to innovate published designs or pursue their own new designs.
The project goals are to enable microscopy measurements that could assist in characterizing dynamic events over entire populations of mammalian cells. One such example would be time-lapse imaging of live cell cultures.
To enable such measurements, the microscope design and its prototype should aim at meeting the following specifications:
Resolution: 1.4 μm or smaller when using the imaging system in a non-automated fashion and manually focusing on an object. This figure was arrived at by computing the Abbe diffraction-limited resolution, dAbbe = λ/(2·NA), with λ = 550 nm and NA = 0.3, and allowing for a 50% deviation from ideality. The measurement of resolution can be performed using the method described by Vainrub [5] or an equivalent method. Preference will be given to solutions that provide similar resolution while functioning in an automated mode with auto-focusing.
Spatial coverage and acquisition time: Solutions should be capable of imaging a 1 cm² area in 60 seconds.
Modality: Widefield or confocal solutions (solutions that also include transmitted-light imaging are preferred).
Modularity: Plug-and-play component to an automated microscope equipped with a scanning stage and focus control.
Material cost: < $20K
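For reference, the 1.4 μm resolution target stated above follows directly from the Abbe formula with the stated parameters:

```python
# Abbe diffraction limit check for the stated parameters.
wavelength_nm = 550.0
numerical_aperture = 0.3
d_abbe_nm = wavelength_nm / (2 * numerical_aperture)  # diffraction-limited resolution
target_um = d_abbe_nm * 1.5 / 1000                    # allow 50% deviation from ideality
print(f"{d_abbe_nm:.0f} nm -> {target_um:.1f} um")    # 917 nm -> 1.4 um
```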
Phase I expected results:
Design an optical microscope that could acquire high-resolution large-FOV images “near real time” as specified by the project goals.
Phase II expected results:
Prototype the designed optical microscope.
NIST may be available to discuss microscope design approaches and to provide additional specific inputs about a variety of demanding imaging applications at NIST.
Infrastructure Requirements, Strategy and Architecture to Enable Scalable Scientific Data and Metadata Acquisition and Curation in Support of the Materials Genome Initiative
As a part of the Materials Genome Initiative, NIST is charged with developing a materials innovation infrastructure. Key aspects of this infrastructure include the real-time acquisition and curation of experimental and simulation data and associated metadata, and control of scientific equipment over a network. To accomplish this, NIST needs research and development on the core requirements and on an overall strategy and software architecture that would enable control of diverse and geographically distributed experimental equipment (e.g., SEM, TEM, x-ray diffractometers, dilatometers, differential scanning calorimeters) and computational resources (e.g., workstations, clusters, demonstration code), and the automatic capture and curation of their acquired scientific data and associated metadata across a network using backend systems such as the NIST-developed Materials Data Curation System and the National Data Service’s Materials Data Facility.
There is a need to develop an infrastructure to push results and metadata from instruments into a data curation system/platform. The goal of the project is to discover and document core requirements and develop an overall strategy and software architecture that, when implemented, will allow for the control of geographically distributed research equipment and computational resources and their integration with scientific informatics backends, including the NIST Materials Data Curator and the National Data Service’s Materials Data Facility. Both the Materials Data Curator and the Materials Data Facility have REST APIs to facilitate automated data curation. The project will provide documented requirements and develop a specific strategy and software architecture for controlling scientific instruments and computational resources and interfacing them with scientific informatics backends, in a format amenable to implementation by software engineers.
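As a sketch of what automated curation against such a REST API could look like, the Python fragment below assembles an instrument record and prepares an HTTP POST. The endpoint URL and field names are hypothetical assumptions; the actual Materials Data Curation System API defines its own schema:

```python
import json
import urllib.request

# Hypothetical endpoint - the real Materials Data Curation System
# REST API defines its own URL structure and record schema.
CURATION_URL = "https://mdcs.example.gov/api/records"

def build_curation_record(instrument: str, data_uri: str, metadata: dict) -> dict:
    """Assemble an instrument result plus its metadata into one curation record."""
    return {"instrument": instrument, "data": data_uri, "metadata": metadata}

def prepare_push(record: dict) -> urllib.request.Request:
    """Prepare (without sending) an HTTP POST pushing the record to the curator."""
    return urllib.request.Request(
        CURATION_URL,
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
        method="POST")

rec = build_curation_record(
    "SEM-01", "file:///data/scan_0042.tif",
    {"technique": "EDS", "accelerating_voltage_kV": 15})
req = prepare_push(rec)
print(req.get_method(), req.full_url)
```

In the envisioned architecture, the lowest layer would emit such records automatically as each instrument run completes, so that curation requires no manual step.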
Phase I activities and expected results:
Discover, validate, and document requirements for a system to enable scientific equipment control and scalable scientific data and metadata acquisition and curation as described in the project goals. Using the documented requirements, develop and document an overall strategy, then develop and document a software architecture that, when implemented, will meet the project goals. We believe that a successful architecture would have several key properties:
• It is structured in independent layers: the top-most layer presents a high-level user interface to allow unified user access and control, while the lowest layer provides connectivity to the scientific equipment or computational output.
• It relies on two public interfaces, one at the highest level and the other at the lowest level, that allow the components to interact as a single application.
• It includes the notion of a default scripting language and provisions for integrated development environments to facilitate customization and extension of a system implementing the architecture in a standardized fashion.
• It is highly modular and includes the concepts of plugins and a generalized, abstract command set that facilitates interaction with the scientific equipment.
• Its public interfaces and abstract command set are language neutral and allow users to control and extend a system implementing the architecture from a large variety of commonly used programming languages, including Python, Java, and C++.
• It provides for the capture of scientific provenance and system configuration to facilitate reproducibility.
• It supports the concept of scientific workflows.
We have been largely inspired by the Micro-Manager project (https://www.micro-manager.org/wiki/Micro-Manager%20Project%20Overview) and recommend that awardees review this project.
Phase II activities and expected results:
Develop an extensible infrastructure for the development of APIs to facilitate data curation of materials data from dilatometers, x-ray diffractometers, scanning electron microscopes (e.g., EDS composition scans, EBSD patterns), transmission electron microscopes, differential scanning calorimeters, and tensile testing machines.
NIST staff familiar with the various instruments (SEM, TEM, optical microscopes, dilatometer, x-ray diffractometer) and simulations may be available to work with the awardee to discover the requirements and develop the metadata schemas needed to collect the data. NIST staff responsible for the development of the Materials Data Curation System may be made available to help the awardee understand the architecture and capabilities of the MDCS.
The future of manufacturing will include highly adaptive, reconfigurable, and mobile machinery that can interact and collaborate with humans safely, reconfigure quickly and cost-effectively depending on factory needs, and anticipate the situational environment. Mobile robotic work agents will have the ability to move between work cells, reconfigure, and perform tasks within each work cell. The control systems that operate such factories will require communication technologies for command and control of machines with rapidity, reliability, and timeliness. Wireless protocols such as IEEE 802.11 currently address these requirements singularly but not all of them simultaneously. For example, existing protocols can provide low latency for streaming video at the cost of reliability. Other protocols can provide high reliability using low-density parity-check codes for forward error correction, but they sacrifice latency. Still others, such as IEEE 802.15.4-based protocols, offer reliability while sacrificing latency and throughput. New protocols are needed to address reliability (data transaction error rate < 1e-9) and latency (closed-loop sense-to-actuation time < 1 ms) simultaneously within a factory work cell with at least ten (10) sensing/actuation devices such as proximity sensors, scanners, and switches. Such a system would address all aspects of the communications system, including RF band selection, antenna selection, radio diversity, forward error correction, bandwidth, and interference mitigation. Ideally, the solution would build upon existing physical layer specifications.
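To illustrate how the reliability and latency targets interact, a simple retransmission budget can be computed. The per-attempt error rate and airtime below are assumed for illustration only and are not taken from this solicitation:

```python
def attempts_needed(per_try_error: float, target_error: float) -> int:
    """With independent retransmissions, the residual failure probability
    after n attempts is per_try_error ** n; count attempts until it falls
    at or below the target."""
    n, p = 0, 1.0
    while p > target_error:
        p *= per_try_error
        n += 1
    return n

# Assumed link parameters (illustrative, not from the solicitation):
per_try_error = 1e-2   # 1% raw packet error rate per transmission attempt
airtime_ms = 0.15      # assumed airtime per attempt, including the ACK

n = attempts_needed(per_try_error, 1e-9)
worst_case_ms = n * airtime_ms
print(n, worst_case_ms < 1.0)  # 5 True
```

Under these assumed numbers, five attempts meet the 1e-9 reliability target within the 1 ms sense-to-actuation budget; real links have correlated errors and shared-medium contention, which is exactly why the co-simulation study described below is needed.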
This project addresses the wireless requirements of the future factory by developing a solution using existing physical layer technologies such as IEEE 802.11. The project will produce models and simulations of the proposed wireless communications system within a discrete manufacturing work-cell. The proposed communications system will include all aspects of the channel including the antenna system, radio front-ends, baseband processing, and error coding. Study of the proposed solution will include the co-simulation of the wireless communications model with a model of a factory process. Coexistence of the proposed solution with other competing data sources within the factory will be addressed.
Phase I expected results:
Produce a model of the end-to-end communications system, with detailed attention to its focal points (such as the antenna system), that addresses the communications reliability requirement (data transaction error rate < 1e-9) while maintaining latency constraints (closed-loop sense-to-actuation time < 1 ms).
Phase II expected results:
Produce a working system prototype of the wireless communications system. The hardware prototypes will include all aspects of the system modeled during Phase I. The prototype will include an Ethernet-based port for injecting/extracting sensor/actuator data. The prototype will also include common analog inputs and outputs such as on/off and pulse-width modulation interfaces. The prototypes will be demonstrated within a testbed that emulates the harsh radio frequency environment of the future factory, and may be used to demonstrate a wireless protocol standardization candidate. The prototype design shall demonstrate the commercialization potential of the wireless solution.
The NIST Engineering Lab has conducted several RF measurement campaigns to assess the characteristics of RF propagation within the factory environment. The results of these measurement campaigns, which are publicly available (http://doi.org/10.18434/T44S3N), include channel impulse responses (magnitude and phase) and can be used to develop novel approaches for wireless communications that are highly reliable and have low latency. In addition, NIST industrial wireless project staff may be available to collaborate with the awardee as an advisor, provide manufacturing use case examples, and offer test-bed resources including the use of a wireless channel emulator.
Medical devices, such as infusion pumps, are a critical component of our national healthcare delivery system. There are millions of digitally connected medical devices in our hospitals, nursing homes, outpatient clinics, other commercial points of care and, increasingly, in the home. These devices are connected to people and to critical networks in these environments, and vulnerabilities in their programming provide entry points for cyber attacks with significant consequences.
There will be billions of exposures between patients and connected medical devices over the next 10 years. It is imperative that technology, security, medical, and public health experts collaborate to better design, implement, and operate medical devices that compose critical cyber-physical human systems.
NIST is working on cybersecurity guidance for wireless infusion pumps. NIST is interested in funding innovative technologies to better secure medical devices and device-associated networks in order to deliver safer clinical workflows and environments. These technologies should have near-term commercial potential and promise of adoption by healthcare delivery systems. They would deliver increased awareness of device fitness, function, and security threats in the promotion of safer healthcare delivery environments.
Phase I expected results:
Design of the hardware and/or software for a medical device, or a compensating control for a medical device, that demonstrates the desired security properties/features while not having a negative impact on the functionality or safety of the device. Some examples of these security properties/features are malware protection, device monitoring, asset inventory, risk assessment, encryption, patching and updating, device tracking, etc. Include a description of the threats that the design will counter and example scenarios of specific attacks that will be thwarted by the device.
Phase II expected results: Provide a prototype that demonstrates the design and attack scenarios from Phase I. Along with the prototype, provide a discussion of the security properties/features it addresses and how it fits within the healthcare ecosystem, taking into consideration the cyber-safety concerns of medical devices as well as the usability challenges presented within the healthcare environment.
To solve the interoperability and policy enforcement problems of today’s access control paradigm, NIST has developed a specification and an open source reference implementation of an access control system referred to as the Policy Machine (PM). The PM is designed in support of, and in alignment with, an emerging ANSI/INCITS standard titled “Next Generation Access Control” (NGAC). The PM/NGAC is a fundamental reworking of traditional access control into a form suited to the needs of the modern, distributed, interconnected enterprise. It is based on a flexible infrastructure that can provide access control services for a number of different types of resources accessed by a number of different types of applications and users. The PM/NGAC infrastructure is proven scalable and can support policies of various types simultaneously while remaining manageable in the face of changing technology, organizational restructuring, and increasing data volumes.
The PM/NGAC is defined in terms of a standardized set of configurable data relations and a set of standardized functions that are generic to the specification and enforcement of arbitrary combinations of attribute-based access control policies. The PM is not an extension or adaptation of any existing access control model or mechanism, but instead is an attempt to fundamentally redefine access control in terms of its basic configuring data abstractions and functions. Its objective is to provide a unifying framework to support not only current OS and application policies, but also a host of orphan policies for which no mechanism yet exists for their viable enforcement. The PM requires changes only in its data configuration in the enforcement of arbitrary and organization-specific, attribute-based access control policies.
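A drastically simplified Python sketch of NGAC-style relations (assignments of users and objects to attributes, and associations granting operations between attributes) may help convey the data-driven nature of the model. The entities here are illustrative inventions, and the real PM adds policy classes, prohibitions, obligations, and attribute hierarchies:

```python
# Users and objects are assigned to attributes; an association grants a
# set of operations from a user attribute to an object attribute.
user_assignments = {"alice": {"Doctors"}, "bob": {"Clerks"}}
object_assignments = {"chart-17": {"MedRecords"}, "invoice-3": {"Billing"}}
associations = [
    ("Doctors", {"read", "write"}, "MedRecords"),
    ("Clerks", {"read"}, "Billing"),
]

def allowed(user: str, op: str, obj: str) -> bool:
    """Access is granted iff some association links an attribute of the
    user to an attribute of the object and includes the operation."""
    ua = user_assignments.get(user, set())
    oa = object_assignments.get(obj, set())
    return any(u in ua and op in ops and o in oa
               for u, ops, o in associations)

print(allowed("alice", "write", "chart-17"))  # True
print(allowed("bob", "write", "invoice-3"))   # False
print(allowed("bob", "read", "chart-17"))     # False
```

The key point the sketch illustrates is that enforcing a different organizational policy requires only changing the data configuration (the assignments and associations), not the decision function.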
The current version of the open source software is a close Java implementation of the NGAC standard, including a policy and attribute store, a Policy Enforcement Point, a centralized Policy Decision Point, and an administrative tool for managing policies and attributes. In addition, the implementation includes in-memory structures, a session manager, several applications, and a system for viewing available resources, among other components.
The PM and NGAC compare favorably to XACML, the current de facto access control standard, in many respects, including performance, scalability, policy expression and enforcement, policy and attribute administration and visualization, and application adaptation.
Of the two standard Attribute-Based Access Control methodologies, XACML is the older, with the first version having been published in 2003. Compared to the relatively young NGAC standard (published in 2013), there exist many more XACML implementations, and it has achieved much greater adoption. This is most likely because XACML was available first and, up to this point, there has been a lack of compelling evidence to convince the community to use PM/NGAC. Paramount to the argument to deploy PM/NGAC is a demonstration of its scalability. This concern has recently been put to rest with a publication showing linear run-time algorithms for both computing decisions and reviewing policies. These algorithms are now included in the latest version 1.6 of the open source reference implementation. The next logical step in promoting PM/NGAC’s widespread use is the availability of a commercially viable implementation.
In addition to the fundamental features of the open source version of the PM, advanced features are required for enhanced performance and usability. This SBIR subtopic seeks development of additional PM features, which may include: (1) an easy and general user interface for managing, visualizing, and analyzing policies; (2) extension of the current in-memory structures, developed for a subset of the policy relations, to the entire standard set necessary for computing decisions and reviewing policies; (3) a more efficient and flexible storage mechanism for importing and exporting policies to/from memory; (4) enhancement of the existing permission delegation approach through a better API and GUI; (5) replacement of the existing windows manager (Microsoft-dependent) with a Java-based implementation for enhanced portability; (6) review and removal of dormant features for better maintainability and increased performance; and (7) better user, administrator, and application developer documentation.
Phase I activities and expected results:
Plan, specification and design for an enhanced implementation based on the existing PM open source for future commercial use. Completed development plan, specification, and design including test plan for the proposed capabilities.
Phase II activities and expected results:
Code development, documentation, and testing of the beta version of a commercially viable PM/NGAC product: a robust beta version that contains the proposed enhanced capabilities, documentation for the code and a user manual, and testing results to verify the completeness of the development.
In addition to PM source code, NIST may be available to provide consultation, input, and discussion with the awardee to help with the evaluation of the proposed development.
Today’s manufacturing systems are able to collect vast amounts of data; however, much of that data is never used unless and until there is a known problem with the equipment. Sometimes the problem will not even be detected until the product is being used in the field, implying that the manufacturing problem may have persisted for several generations of the product. Advances in data visualization, which is a fundamental means of observing data and discovering problems, hold the potential of faster detection of issues and more rapid improvements. However, data visualization still requires considerable effort to integrate with the systems generating data.
Current approaches (drag-and-drop dashboards, tableaus, etc.) to visualizing smart and sustainable manufacturing enterprises are limited and suffer from many drawbacks. Substantial manual effort from experienced practitioners is required. In some cases, skilled programming is necessary. In other cases, significant visualization expertise is necessary. Understanding large amounts of data, often stored as combinations of relational and non-relational data in a variety of quasi-federated databases or being streamed directly from machines and not well understood by anyone in an enterprise, adds further difficulty. Combining all of these skills in a single person is costly and is likely to remain out of reach, particularly for small manufacturers. (Large manufacturers have similar problems although for different reasons – while visualization teams exist, inordinately larger data sets make visualization harder in other ways.)
Currently, even the best results are inflexible, unable to adapt to in-process schema changes or schema-less databases. This leads to inflexible software that either suffers from “bit rot” as schemas and databases change out from under the visualization software or from the inability to incorporate new data to improve visualizations.
Manufacturing systems present other characteristics unique to their data. For instance, correlations between time and spatial coordinates are one fundamental concept for assessing manufacturing performance. Performance is often plagued by the interaction of variables along multiple dimensions, rather than a two-factor correlation. In response, some solutions focus on prioritizing dimensions or mathematically reducing dimensionality to best fit practical visualizations. However, such data transformations can lead to a loss of context and information. Other unique characteristics exist. All in all, the manufacturing environment has become data rich but information poor.
The goal is to make available manufacturing visualization software that is more flexible, powerful, and easy to use than existing tools. The project will study fundamental concepts that are of relevance to manufacturing data, develop procedures for automatically applying visualization techniques to those concepts, and provide a natural language-based user interface to allow manufacturers to quickly assemble their own visualizations based on their datasets. The solution is expected to make use of accepted and practical visualization principles, such as the proper mapping of visual variables to the target data, and apply these principles to create a manufacturing-focused toolset.
Additional features of the toolset may include a natural language-based front-end, user guidance on types of visualizations to apply to a given dataset, and data crawling capabilities. A natural language-based front-end will be a helpful component and, for some users, a superior interface to traditional drag-and-drop techniques. User guidance may come in the form of proffering certain visualization techniques that are recognizably appropriate for a dataset, dissuading the use of visualization techniques that are inappropriate for given data, and explaining visualizations that are not immediately obvious. The tool should offer and suggest appropriate choices to deal with challenging data such as high-dimension data. The software should include an expandable library of plugin visualization components allowing for inclusion of new visualization technologies as they become available. A backend data crawler may adapt to new data as it becomes available within the enterprise, with or even without explicit schemas.
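The visualization-guidance idea can be sketched as a small rule-based recommender mapping column types to chart types. The type names and rules below are illustrative assumptions; a real toolset would ground such rules in established visualization principles:

```python
def recommend_chart(columns: dict) -> str:
    """columns maps column name -> one of 'time', 'numeric', 'categorical'.
    Returns an illustrative chart recommendation."""
    types = list(columns.values())
    if "time" in types and "numeric" in types:
        return "line chart (numeric over time)"
    if types.count("numeric") >= 2:
        return "scatter plot (numeric vs. numeric)"
    if "categorical" in types and "numeric" in types:
        return "bar chart (numeric per category)"
    return "table (no rule matched)"

print(recommend_chart({"timestamp": "time", "spindle_load": "numeric"}))
print(recommend_chart({"station": "categorical", "defect_rate": "numeric"}))
```

A natural language front-end would sit above rules like these, translating a request such as "show defect rate by station" into column selections before the recommender picks the presentation.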
Phase I expected results:
Phase I of this subtopic will demonstrate the feasibility of developing software for visualizations using limited natural language and based on a library of visualizations for manufacturing-specific “big data” (large and varied databases).
Phase II expected results:
Phase II of the project will focus on richer natural language interfaces, techniques to recommend visualizations based on data, and automated assistance in understanding novel visualizations. The end goal of Phase II will be a user interface that accepts natural language as input and produces interactive visualizations as output.
At the end of the project, non-visualization specialists should be able to interact with the system, producing visualizations better than those from Excel and at least as good as those from R, Wolfram, D3.js, etc., but much more quickly and without the development time or skills required by current visualization software.
NIST may be available to work collaboratively with the awardee, providing consultation and input on activities and directions as well as data and scenarios.
NIST is developing a “Bugs Framework” (BF) to categorize and describe classes of bugs in software. For each bug class, the framework includes rigorous definitions and attributes of the class, along with its related dynamic properties, such as proximate, secondary, and tertiary fault causes, consequences, and sites in code. The boundary of the framework is source code; it does not describe the source of the bug (that is, when in the software lifecycle the programmer made a mistake causing the bug) nor what inputs trigger a failure from the bug. The sources and triggers are vital to connect the BF to the software development life cycle. Once this connection is made, software developers can determine the proper tools and techniques to preclude, find, remove, or mitigate bugs.
This project will develop an automated method to discover the source of a bug and what triggers it, given an identified instance of a bug in a particular class. The automated method will draw on the history of changes to the code and test inputs to the software under development. Although the programming languages that NIST is interested in are Turing complete, NIST believes that undecidable problems, such as the Halting Problem, need not prevent development of a satisfactory method. Choosing a particular class of bugs should allow a stochastic or heuristic approach to suffice.
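One heuristic approach that exploits the history of code changes, in the spirit described above, is a binary search over revisions to locate the change that introduced the bug (the idea behind tools such as git bisect). The sketch below is illustrative only, not NIST's method; the revision list and trigger predicate are hypothetical stand-ins for a real version-control history and test harness.

```python
# Illustrative heuristic: binary-search a chronological revision history
# for the change that introduced a bug, given a predicate that reports
# whether the bug triggers on a revision.
from typing import Callable, Sequence

def find_bug_source(revisions: Sequence[str],
                    bug_triggered: Callable[[str], bool]) -> str:
    """Return the earliest revision on which the bug triggers.

    Assumes the first revision is good, the last is bad, and the bug,
    once introduced, persists in later revisions -- the simplifying
    assumptions that let a decidable search suffice despite the general
    undecidability noted above.
    """
    lo, hi = 0, len(revisions) - 1   # lo: known good, hi: known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if bug_triggered(revisions[mid]):
            hi = mid                 # bug already present at mid
        else:
            lo = mid                 # still good at mid
    return revisions[hi]

# Toy history in which the bug is introduced at revision "r3".
history = ["r0", "r1", "r2", "r3", "r4", "r5"]
buggy_from = history.index("r3")
print(find_bug_source(history,
                      lambda r: history.index(r) >= buggy_from))
# → r3
```

The search needs only O(log n) evaluations of the trigger predicate over n revisions; connecting the discovered change back to a BF class attribute would be the method's remaining, harder step.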
Phase I expected results:
An architecture and development plan for an automated method to discover the source of, and one or more triggers for, an instance of a program bug from one of the BF classes.
Phase II expected results:
A demonstration version of a tool that, given an instance of a bug from a BF class, (a) identifies the source of the bug and (b) produces one or more sets of inputs that trigger it.
NIST may be available for consultation, input, and discussion.