Digital Assistants for Science and Engineering

Description:

Lead Center: LaRC

Participating Centers: ARC, JPL, JSC, MSFC

Scope Title:

Digital Assistants for Science and Engineering

Scope Description:

NASA is seeking innovative solutions that combine modern digital technologies (e.g., natural language processing, speech recognition, computer vision, machine learning, artificial intelligence, virtual reality, and augmented reality) to create digital assistants. These digital assistants can range in capability from low-level cognitive tasks (e.g., information search, information categorization and mapping, information surveys, and semantic comparisons), to expert systems, to autonomous ideation. NASA is interested in digital assistants that reduce the cognitive workload of its engineers and scientists so that they can concentrate their talents on innovation and discovery. NASA is also interested in digital assistants for operators and crew that improve the safety and efficiency of facilities and vehicles. Digital assistant solutions can target tasks characterized as research, engineering, operations, data management and analysis (of science data, ground and flight test data, or simulation data), and business or administrative.

Digital assistants can fall into one of two categories: productivity multipliers and new capabilities. Productivity multipliers reduce the time that engineers, scientists, facility operators, and vehicle crew members spend on tasks defined by NASA policies, procedures, standards, and handbooks; on common and best practices in science and engineering domains within the scope of NASA's missions; on standard operating procedures; on maintenance and troubleshooting; or on search and transformation of scientific and technical information. Proposals for productivity multipliers should demonstrate an in-depth understanding of NASA workflows and information needs for science, engineering, or operations. New capabilities are disruptive transformations of NASA's engineering, science, facility, or vehicle environments that enable technological advances infeasible or too costly under current paradigms. Proposals for new capabilities should show clear applicability to NASA's missions.

Moreover, proposals relying on natural language processing (NLP) of scientific and technical information should demonstrate capability, or define a work plan, to train NLP algorithms on technical and scientific terms that are in common use within NASA or within a science and engineering discipline. Proposals targeting digital assistants for crew must be deployable to hardware meeting size, weight, and power (SWaP) constraints typical for the vehicle(s) of interest. Furthermore, digital assistants for spacecraft must execute all functions onboard and cannot rely on ground systems to function. Additionally, digital assistants should be hands-free, especially for activities in which crew are wearing spacesuits or have their hands occupied, such as performing maintenance tasks.

Examples of potential digital assistants include but are not limited to:

  • A digital assistant that uses the semantic, numeric, and graphical content of engineering artifacts (e.g., requirements, design, and verification) to automate traces among the artifacts and to assess completeness and consistency of traced content. For example, the digital assistant can use semantic comparison to determine whether the full scope of a requirement may be verified based on the description(s) of the test case(s) traced from it. Similarly, the digital assistant can identify from design artifacts any functional, performance, or nonfunctional attributes of the design that do not trace back to requirements. Currently, this work is performed by project systems engineers, quality assurance personnel, and major milestone review teams. A minimal sketch of this kind of semantic comparison appears after this list.
  • A digital assistant that can identify current or past work related to an idea by providing a list of related government documents, academic publications, and/or popular publications. This is useful in characterizing the state of the art when proposing or reviewing an idea for government funding. Currently, engineers and scientists accomplish this by executing multiple searches using different combinations of keywords from the idea text, each on a variety of search engines and databases; the engineers then read dozens of the returned documents to establish relevance. This example seeks digital assistive technologies that substantially reduce this workload.
  • A digital assistant that can highlight lessons learned, suggest reusable assets, highlight past solutions, or suggest collaborators based on the content that the engineer or scientist is currently working on. This example encourages digital solutions that can parse textual and/or graphical information from an in-progress work product and search Agency knowledge bases, project repositories, asset repositories, and other in-progress work products to identify relevantly similar information or assets. The digital assistant can then notify the engineer of the relevant information and/or its author (a potential collaborator). A simple retrieval sketch illustrating this and the preceding example follows this list.
  • A digital assistant that can recommend an action in real time to operators of a facility or the crew of a vehicle. Such a system could work from a corpus of system information such as design artifacts, operator manuals, maintenance manuals, and operating procedures to correctly identify the current state of a system given sensor data, telemetry, component outputs, or other real-time data. The digital assistant can then use the same information to autonomously recommend a remedial action to the operator when it detects a failure, to warn the operator when their actions will result in a hazard or the loss of a mission objective, or to suggest a course of action to the operator that will achieve a new mission objective given by the operator. A rule-based sketch of this state-identification and recommendation pattern follows this list.
  • A digital assistant that can create one or more component or system designs from a concept of operations, a set of high-level requirements, or a performance specification. Such an agent may combine reinforcement learning techniques, generative adversarial networks, and simulations to autonomously ideate solutions. A toy sketch of the underlying propose-simulate-select loop follows this list.
  • An expert system that uses a series of questions to generate an initial system model (e.g., using the Systems Modeling Language (SysML)), plans, estimates, and other systems engineering artifacts. A minimal sketch of question-driven model generation follows this list.
  • Question and answer (Q&A) bots: a digital agent that can answer commonly asked "how to" questions for scientists and engineers (e.g., what resources (ground facilities, labs, media services, and IT) are available; where to get site licenses for software packages; who to contact for assistance on a topic; and answers to general business procedures such as procurement, travel, time and attendance, etc.).
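
The following minimal sketches are illustrative only and do not constrain proposed approaches; the documents, telemetry fields, limits, and thresholds in them are hypothetical.

For the requirement-to-test traceability example, one way to approximate the semantic comparison is to compare requirement clauses against the traced test-case descriptions and flag clauses with no sufficiently similar test. The sketch below uses TF-IDF cosine similarity from scikit-learn as a simple stand-in for a stronger semantic model trained on NASA terminology:

```python
# Minimal sketch: flag requirement clauses that no traced test case appears to cover.
# TF-IDF cosine similarity stands in for a stronger semantic model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirement_clauses = [  # hypothetical requirement, split into clauses
    "The vehicle shall maintain cabin pressure between 95 and 103 kPa.",
    "The vehicle shall alert the crew within 2 seconds of a pressure fault.",
]
traced_tests = [  # hypothetical test-case descriptions traced to the requirement
    "Verify cabin pressure stays between 95 and 103 kPa during nominal ascent.",
]

vectorizer = TfidfVectorizer().fit(requirement_clauses + traced_tests)
similarity = cosine_similarity(
    vectorizer.transform(requirement_clauses),
    vectorizer.transform(traced_tests),
)
for clause, scores in zip(requirement_clauses, similarity):
    if scores.max() < 0.2:  # arbitrary coverage threshold, for illustration only
        print("Possible verification gap:", clause)
```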
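
The related-work and lessons-learned examples both reduce to ranking a corpus of documents by similarity to the text the engineer or scientist is currently working on; the same pattern also covers a simple Q&A bot that retrieves the closest matching FAQ entry. A sketch under the same stand-in assumption, with an invented corpus and draft text:

```python
# Minimal sketch: rank knowledge-base entries by similarity to an in-progress work product.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = {  # hypothetical lessons-learned and report entries
    "LLIS-0001": "Lesson learned: thermal vacuum testing of deployable radiators revealed hinge binding.",
    "TM-2018-1234": "Design and test of a deployable radiator for small spacecraft.",
    "PROC-77": "Procurement procedure for ground support equipment.",
}
draft = "Trade study of deployable radiator concepts for a lunar lander thermal control system."

ids, texts = zip(*corpus.items())
vectorizer = TfidfVectorizer(stop_words="english").fit(list(texts) + [draft])
scores = cosine_similarity(vectorizer.transform([draft]), vectorizer.transform(texts))[0]

# Surface the closest matches so the engineer can follow up with the document or its author.
for doc_id, score in sorted(zip(ids, scores), key=lambda pair: pair[1], reverse=True):
    print(f"{doc_id}: similarity {score:.2f}")
```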
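
For the real-time operator assistant, the core pattern is mapping live telemetry to a known system state and looking up the response documented for that state. A rule-based sketch follows; an actual assistant would derive the states, limits, and procedures from design artifacts and operating documents rather than hard-coding them:

```python
# Minimal sketch: classify system state from telemetry and look up a documented response.
from dataclasses import dataclass

@dataclass
class Telemetry:
    cabin_pressure_kpa: float
    pump_current_a: float

# Hypothetical mapping from a detected state to the procedure the operator is pointed to.
PROCEDURES = {
    "CABIN_PRESSURE_LOW": "Execute procedure ECLSS-21: isolate leak and switch to backup supply.",
    "PUMP_OVERCURRENT": "Execute procedure TCS-07: shut down coolant pump A and start pump B.",
}

def classify(t: Telemetry) -> str:
    if t.cabin_pressure_kpa < 95.0:  # hypothetical lower limit
        return "CABIN_PRESSURE_LOW"
    if t.pump_current_a > 8.0:       # hypothetical overcurrent limit
        return "PUMP_OVERCURRENT"
    return "NOMINAL"

def recommend(t: Telemetry) -> str:
    return PROCEDURES.get(classify(t), "No action required.")

print(recommend(Telemetry(cabin_pressure_kpa=92.3, pump_current_a=4.1)))
```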
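
The design-generation example mentions reinforcement learning and generative adversarial networks; the toy sketch below substitutes plain random search purely to show the underlying propose-simulate-select loop, with an invented one-line performance model standing in for a real simulation:

```python
# Toy ideation loop: propose a candidate design, score it in a simulation, keep the best.
# Random search stands in for the learning-based approaches mentioned in the example.
import random

def simulate(area_m2: float, thickness_m: float) -> float:
    """Hypothetical radiator performance model: reward heat rejection, penalize mass."""
    heat_rejected_w = 300.0 * area_m2
    mass_kg = 2700.0 * area_m2 * thickness_m  # aluminum plate
    return heat_rejected_w - 50.0 * mass_kg

best_design, best_score = None, float("-inf")
for _ in range(1000):
    candidate = (random.uniform(0.1, 2.0), random.uniform(0.001, 0.01))
    score = simulate(*candidate)
    if score > best_score:
        best_design, best_score = candidate, score

print(f"Best candidate: area={best_design[0]:.2f} m^2, "
      f"thickness={best_design[1] * 1000:.1f} mm, score={best_score:.1f}")
```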
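
Finally, the expert-system example can be pictured as a fixed question script whose answers are stitched into a starting model skeleton. The sketch below emits SysML-flavored text for illustration only; a real tool would target a modeling tool's API, and the questions and answers are invented:

```python
# Minimal sketch: turn answers to a question script into a SysML-flavored model skeleton.
answers = {  # hypothetical answers captured by the assistant's questionnaire
    "system_name": "LunarSamplePayload",
    "subsystems": ["Power", "Thermal", "SampleHandling"],
    "top_requirement": "The payload shall acquire 100 g of regolith per sortie.",
}

lines = [
    f"package {answers['system_name']}Model {{",
    f"  block {answers['system_name']} {{",
]
for name in answers["subsystems"]:
    lines.append(f"    part {name.lower()} : {name};")
lines += [
    "  }",
    f"  requirement R1 {{ text = \"{answers['top_requirement']}\" }}",
    "}",
]
print("\n".join(lines))
```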
Expected TRL or TRL Range at completion of the Project: 3 to 5
Primary Technology Taxonomy:
    Level 1: TX 11 Software, Modeling, Simulation, and Information Processing
    Level 2: TX 11.4 Information Processing

Desired Deliverables of Phase I and Phase II:
  • Prototype
  • Hardware
  • Software

Desired Deliverables Description:

The Phase I deliverable can be a detailed architecture for a digital assistant with supporting analysis, or a set of individual or integrated software functions that substantiate features of the digital assistant considered key or high risk. Phase II would conclude with a demonstration (prototype) or a deployable digital assistant showing a quantifiable reduction in the time or cost of an activity typically performed by NASA scientists, engineers, or operators.

State of the Art and Critical Gaps:

Digitally assistive technologies currently permeate the consumer market with products like the Amazon Echo, Apple devices with Siri, Google devices with Google Assistant, and Microsoft devices with Cortana. Though Apple, Google, and Microsoft are also moving their assistive technologies into the enterprise space, these developments are largely focused on reducing information technology costs. Some cities and college campuses have also acted as early adopters of smart city or smart campus technologies that include digital assistants. However, application of these assistive technologies to engineering and science has largely been limited to university research. Moreover, most assistive technologies exercise no more cognition than answering questions or executing simple commands. The emergence of improved natural language processing brings the possibility of digital assistants that can perform low-level cognitive tasks. This subtopic aims not only to bring commercially available assistive technologies to the engineering environment, but also to elevate their cognitive capabilities so that engineers and scientists can spend more time innovating and less time on low-level cognitive work that is laborious or repetitive.

Relevance / Science Traceability:

This subtopic is related to technology investments in the NASA Technology Roadmap, Technical Area 11 Modeling, Simulation, Information Technology, and Processing, under sections 11.1.2.6 Cognitive Computer, 11.4.1.4 Onboard Data Capture and Triage Methodologies, and 11.4.1.5 Real-Time Data Triage and Data Reduction Methodologies. This subtopic seeks similar improvements in computer cognition, applied more generally to the activities performed by engineers and scientists and made more easily accessible through technologies like speech recognition.
