NASA is seeking innovative solutions that combine modern digital technologies (e.g., natural language processing, speech recognition, machine vision, machine learning and artificial intelligence, and virtual and augmented reality) to create digital assistants. These digital assistants can range in capability from low-level cognitive tasks (e.g., information search, information categorization and mapping, information surveys, semantic comparisons) to expert systems to autonomous ideation. NASA is interested in digital assistants that reduce the cognitive workload of its engineers and scientists so that they can concentrate their talents on innovation and discovery. Digital assistant solutions can target tasks characterized as research, engineering, operations, data management and analysis (of science data, ground and flight test data, or simulation data), business, or administrative.

Digital assistants can fall into one of two categories: productivity multipliers and new capabilities. Productivity multipliers reduce the time that engineers or scientists spend on tasks defined by NASA policies, procedures, standards, and handbooks; on common and best practices in science and engineering domains within the scope of NASA's missions; or on search and transformation of scientific and technical information. Proposals for productivity multipliers should demonstrate an in-depth understanding of NASA science and engineering workflows or NASA's information needs. New capabilities are disruptive transformations of the engineering and science environments that enable technological advances infeasible or too costly under current paradigms. Proposals for new capabilities should show clear applicability to NASA's missions.

Examples of useful digital assistants include, but are not limited to:
- A digital assistant that can formulate candidate designs (of components or systems) from a concept of operations, a set of high-level requirements, or a performance specification. Such an agent may use a combination of technologies (e.g., reinforcement learning, generative adversarial networks) to autonomously ideate solutions.
- A digital assistant that uses the semantic, numeric, and graphical content of engineering artifacts (e.g., requirements, design, verification) to automate traces among the artifacts and to assess completeness and consistency of traced content. For example, the digital assistant can use semantic comparison to determine whether the full scope of a requirement may be verified based on the description(s) of the test case(s) traced from it. Similarly, the digital assistant can identify from design artifacts any functional, performance, or non-functional attributes of the design that do not trace back to requirements. Currently, this work is performed by project systems engineers, quality assurance personnel, and major milestone review teams as defined in NASA governing documents for engineering such as NPR 7123.1, Systems Engineering.
- A digital assistant that can recommend an action in real time to operators of a facility, vehicle, or other physical asset. Such a system could work from a corpus of system information such as design artifacts, operator manuals, maintenance manuals, and operating procedures to correctly identify the current state of a system given sensor data, telemetry, component outputs, or other real-time data. The digital assistant can then use the same information to autonomously recommend a remedial action to the operator when it detects a failure, to warn the operator when their actions would result in a hazard or the loss of a mission objective, or to suggest a course of action that will achieve a new mission objective given by the operator.
- A digital assistant that can identify current or past work related to an idea by providing a list of related government documents, academic publications, and/or popular publications. This is useful in characterizing the state of the art when proposing or reviewing an idea for government funding. Currently, engineers and scientists accomplish this by executing multiple searches, each using a different combination of keywords from the idea text, on a variety of search engines and databases; they then read dozens of returned documents to establish relevance. This example imagines a digital assistant that accomplishes a substantial portion of this work given the idea text.
- A digital assistant that can highlight lessons learned, suggest reusable assets, highlight past solutions, or suggest collaborators based on the content that the engineer or scientist is currently working on. This example encourages digital solutions that can parse textual and/or graphical information from an in-progress work product and search Agency knowledge bases, project repositories, asset repositories, and other in-progress work products across the Agency to identify relevant, similar information or assets. The digital assistant can then notify the engineer of the relevant information and/or its author (a potential collaborator).
- A digital assistant that understands system dependencies and, when presented with a design change, can assist (or autonomously perform) selection, modification, and execution of engineering analyses to be updated.
- A digital assistant that can autonomously subset, transform, analyze, and visualize large science datasets in response to a user query.
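The design-formulation bullet above can be illustrated, very loosely, as a spec-in, candidates-out loop. The sketch below uses random search over a toy design space; the specification values, parameter names, and the linear thrust surrogate are all invented for illustration, and a real system would substitute reinforcement learning or a generative model for the random sampler.

```python
import random

random.seed(0)  # reproducible toy run

# Hypothetical performance specification (all values invented).
SPEC = {"min_thrust_n": 100.0, "max_mass_kg": 5.0}

def sample_design():
    """Randomly propose a candidate design (stand-in for RL/generative ideation)."""
    return {"nozzle_cm": random.uniform(1, 10), "mass_kg": random.uniform(1, 8)}

def meets_spec(design):
    """Crude surrogate model: assume thrust scales linearly with nozzle size."""
    thrust_n = 12.0 * design["nozzle_cm"]
    return thrust_n >= SPEC["min_thrust_n"] and design["mass_kg"] <= SPEC["max_mass_kg"]

# Ideate many candidates and keep those that satisfy the specification.
candidates = [d for d in (sample_design() for _ in range(1000)) if meets_spec(d)]
```

The value of the loop is its shape, not the sampler: any generator of candidate designs can be dropped in against the same specification check.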
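The trace-assessment bullet can be sketched with string similarity standing in for true semantic comparison. The requirement texts, IDs, and the 0.5 threshold below are illustrative placeholders.

```python
from difflib import SequenceMatcher

def coverage(requirement, test_descriptions):
    """Best lexical match between a requirement and its traced tests
    (a crude stand-in for semantic comparison)."""
    if not test_descriptions:
        return 0.0
    return max(SequenceMatcher(None, requirement.lower(), t.lower()).ratio()
               for t in test_descriptions)

# Invented example artifacts.
requirements = {
    "REQ-1": "The valve shall close within 2 seconds of a shutdown command.",
    "REQ-2": "Telemetry shall be logged at 10 Hz during ascent.",
}
traces = {
    "REQ-1": ["Verify that the valve closes within 2 seconds of a shutdown command."],
    "REQ-2": [],  # nothing traced: a completeness gap
}

# Flag requirements whose traced tests do not appear to cover them.
gaps = [rid for rid, text in requirements.items()
        if coverage(text, traces[rid]) < 0.5]
```

A production assistant would replace the lexical ratio with an embedding-based comparison, but the gap-flagging logic around it would look much the same.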
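At its simplest, the real-time operator-assistance bullet reduces to state identification against limits plus an action lookup. The telemetry fields, limit values, and remedial actions below are invented placeholders for knowledge that would be extracted from design artifacts and operating procedures.

```python
# Ordered rules: (state name, predicate over telemetry, recommended action).
# All thresholds and actions are hypothetical.
RULES = [
    ("overpressure", lambda t: t["tank_psi"] > 350.0, "Open relief valve RV-2."),
    ("pump_overtemp", lambda t: t["pump_temp_c"] > 90.0, "Throttle pump to 50 percent."),
    ("nominal", lambda t: True, None),
]

def assess(telemetry):
    """Identify the current state and recommend a remedial action, if any."""
    for state, predicate, action in RULES:
        if predicate(telemetry):
            return state, action
```

The hard part the bullet actually solicits is building the rule base automatically from the document corpus rather than hand-coding it as done here.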
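The search fanout described in the related-work bullet can be sketched as keyword-combination generation. The stopword list is an illustrative placeholder; a real assistant would use a proper NLP pipeline and dispatch each query to multiple search engines and databases.

```python
from itertools import combinations

# Illustrative stopword list (placeholder for real keyword extraction).
STOPWORDS = {"a", "an", "the", "for", "of", "to", "and", "in", "on", "using"}

def keyword_queries(idea_text, k=2):
    """Generate one query per k-keyword combination drawn from the idea text."""
    words = (w.strip(".,;").lower() for w in idea_text.split())
    keywords = sorted({w for w in words if w not in STOPWORDS and len(w) > 3})
    return [" ".join(combo) for combo in combinations(keywords, k)]

queries = keyword_queries(
    "Autonomous anomaly detection for cryogenic propellant tanks")
# Each query would then be submitted to each configured search engine,
# and the returned documents ranked for relevance against the idea text.
```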
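The lessons-learned bullet depends on matching an in-progress work product against repository content. A minimal sketch, using bag-of-words cosine similarity as a crude stand-in for semantic search; the knowledge-base entries and author names are invented.

```python
import math
from collections import Counter

def cosine(text_a, text_b):
    """Bag-of-words cosine similarity (stand-in for semantic matching)."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Invented knowledge-base entries with authors (potential collaborators).
knowledge_base = [
    {"author": "j.doe", "text": "composite tank cryogenic insulation lessons learned"},
    {"author": "a.lee", "text": "flight software fault protection design notes"},
]

draft = "insulation approach for a composite cryogenic tank"
best_match = max(knowledge_base, key=lambda e: cosine(draft, e["text"]))
# The assistant would then surface best_match["text"] and notify the
# engineer of best_match["author"] as a potential collaborator.
```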
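The dependency-aware bullet is, at its core, change-impact analysis over a dependency graph. A sketch under an assumed toy graph (all analysis and parameter names are hypothetical): given a changed design parameter, walk downstream edges to find every analysis that must be re-run.

```python
from collections import deque

# Hypothetical dependency map: analysis/artifact -> inputs it consumes.
DEPENDS_ON = {
    "thermal_analysis": ["panel_thickness", "coating"],
    "mass_rollup": ["panel_thickness"],
    "structural_margin": ["mass_rollup", "load_cases"],
}

def impacted(changed_item):
    """Breadth-first walk downstream of a changed item to collect every
    analysis that must be updated and re-executed."""
    downstream = {}
    for analysis, inputs in DEPENDS_ON.items():
        for item in inputs:
            downstream.setdefault(item, []).append(analysis)
    seen, queue = set(), deque([changed_item])
    while queue:
        for nxt in downstream.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)
```

Selecting the impacted analyses is the easy half; the assistant envisioned above would also modify and execute them, which requires the dependency map to carry tool and model metadata.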
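The subset/transform/analyze/visualize pipeline in the last bullet can be sketched over a toy in-memory dataset; the field names and values are invented, and the visualization step is only noted in a comment since it would require a plotting library.

```python
# Toy "dataset": one record per orbit (all values invented).
records = [
    {"orbit": 1, "alt_km": 410.2},
    {"orbit": 2, "alt_km": 409.8},
    {"orbit": 3, "alt_km": 412.5},
    {"orbit": 4, "alt_km": 405.1},
]

# Pipeline stages a query-driven assistant would compose on the user's behalf:
subset = [r for r in records if r["alt_km"] > 408.0]   # subset
alts_m = [r["alt_km"] * 1000.0 for r in subset]        # transform (km -> m)
mean_alt_m = sum(alts_m) / len(alts_m)                 # analyze
# visualize: plot alts_m against orbit number with a plotting library
```

The assistant's contribution would be translating a natural-language query into these stages and executing them at scale, not the stages themselves.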
This subtopic targets terrestrial uses of digital assistive technologies in science and engineering environments. For in-space applications of digital assistive technologies, see subtopic H6.03, Spacecraft Autonomous Agent Cognitive Architectures for Human Exploration.
Further, this subtopic is related to technology investments in the NASA Technology Roadmap, Technical Area 11 (Modeling, Simulation, Information Technology, and Processing), under the sections Cognitive Computer; Onboard Data Capture and Triage Methodologies; and 11.4.1.5, Real-time Data Triage and Data Reduction Methodologies. This subtopic seeks similar improvements in computer cognition, but applied more generally to the activities performed by engineers and scientists and made more easily accessible through technologies such as speech recognition.
The expected TRL for this project is 3 to 5.
- CIMON (Crew Interactive Mobile Companion)
- NASA/TM-2016-219361, Big Data Analytics and Machine Intelligence Capability Development at NASA Langley Research Center: Strategy, Roadmap, and Progress. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20170000676.pdf
- NASA/TM-2016-219358, Machine Learning Technologies and Their Applications for Science and Engineering Domains Workshop – Summary Report