
Spacecraft Autonomous Agent Cognitive Architectures for Human Exploration


Scope Title:

Spacecraft Autonomous Agent Cognitive Architectures for Human Exploration

Scope Description:

Autonomous and partially autonomous systems promise a future of self-driving automobiles, air taxis, packages delivered by unmanned aerial vehicles (UAVs), and other revolutionary Earth applications. At the same time, future NASA deep space missions are expected to operate at distances that impose significant communication barriers between the spacecraft and Earth, including lag due to light distance and intermittent loss of communications. As a result, it will be difficult to control every aspect of spacecraft operations from an Earth-based mission control, and crews will be required to manage, plan, and execute the mission and to respond to unanticipated system failures and anomalies more autonomously. Similarly, there is an opportunity for unmanned vehicles on Earth to benefit from autonomous, cognitive agent architectures that can respond to emergent situations without the aid of human controllers. For this reason, it is advantageous for operational functionality currently performed by external human-centric control stations (e.g., mission control) to be migrated to the vehicle and crew (if piloted). Since spacecraft operations will be conducted by a limited number of crewmembers, each with limited performance capacity (in terms of both cognition and tasks), the spacecraft will need assistive, autonomous, and semi-autonomous agents responsible for a large proportion of spacecraft operations so as not to overburden the crew.

Cognitive agents could provide meaningful help with many tasks performed by humans. Novel operational capabilities required by deep space missions, such as spacecraft and systems health, crew health, maintenance, consumable management, payload management, and activities such as food production and recycling, could benefit from the assistance of autonomous agents, which could interface directly with the crew and onboard systems, reducing the crew's cognitive load and scheduling time. Additionally, cognitive agents could contribute to many general operational tasks in collaboration with the crew, such as training, inspections, and mission planning. Finally, autonomous agents could increase the mission’s resilience to hazardous events, both by directly responding to certain events (e.g., ones that unfold too quickly for the crew to catch or that immobilize the crew) and by providing assistive services (e.g., fault diagnosis, contingency analysis, and mission replanning).

However, implementing these cognitive agents presents significant challenges to the underlying software architecture. First, these agents will need to take significant responsibility for mission operations while still operating under crew directives. Additionally, agents with different dedicated roles will need to share resources and hardware and may have differing goals and instructions from human operators that need to be managed and coordinated. Such agents will thus need to act autonomously while enabling (1) effective crew (or vehicle occupant) control of the vehicle even when the agent is operating autonomously (meaning the agents should not act in unexpected ways and should report when the situation has changed enough to justify a change in operations), (2) direct crew control of the task when manual intervention is needed, and (3) autonomous and manual coordination/deconfliction of agent goals and tasks. Second, for NASA space missions, long-duration spaceflight is likely to uncover new challenges during the mission that require some level of adaptation. Whether because of known low-probability hazardous events or because of “unknown unknown” situations that were not planned for, cognitive agents will need a capacity for “graceful extensibility.” This concept is not unique to space missions: Earth-based vehicles will also need to respond to similar types of events in time, given the highly variable and heterogeneous environments they will likely encounter when operated at scale. As a result, the architecture of the cognitive agent will need to be able to learn (both from taught examples and from the environment) and reconfigure itself (in collaboration with the crew) to perform new tasks. Finally, these capabilities need to be implemented with the high level of assurance required by mission operations, meaning that learned and autonomous behavior must be transparent, predictable, and verifiable using traditional software assurance techniques.
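As a minimal, illustrative sketch of requirements (1) through (3) above, one possible arbitration layer keeps the agent's goal queue deconflicted by priority and suspends autonomous action whenever the crew takes direct control. All class names, the priority scheme, and the goal fields here are hypothetical assumptions, not part of any interface specified in this solicitation:

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional

class ControlMode(Enum):
    AUTONOMOUS = auto()   # agent acts on its own and reports situation changes
    MANUAL = auto()       # crew has taken direct control of the task

@dataclass(order=True)
class Goal:
    priority: int                           # lower value = higher priority
    name: str = field(compare=False)
    issued_by: str = field(compare=False)   # "crew" or an agent identifier

class AgentTaskManager:
    """Toy arbitration layer: crew directives preempt lower-priority agent
    goals (deconfliction), and a crew override halts autonomous action."""

    def __init__(self) -> None:
        self.mode = ControlMode.AUTONOMOUS
        self.goals: List[Goal] = []

    def submit(self, goal: Goal) -> None:
        # Deconflict by keeping the queue ordered by priority.
        self.goals.append(goal)
        self.goals.sort()

    def crew_override(self) -> None:
        # Requirement (2): direct crew control when manual intervention is needed.
        self.mode = ControlMode.MANUAL

    def next_action(self) -> Optional[Goal]:
        # Requirement (1): in manual mode the agent takes no autonomous action.
        if self.mode is ControlMode.MANUAL or not self.goals:
            return None
        return self.goals.pop(0)
```

In this sketch the crew's directive is simply a higher-priority goal in the same queue; a real architecture would add negotiation, reporting, and resource arbitration across multiple agents.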

This subtopic solicits intelligent autonomous agent cognitive architectures that are open and modular, make decisions under uncertainty, interact closely with humans, incorporate diverse input/data sources, and learn such that the performance of the system is assured and improves over time. This subtopic will enable small businesses to develop the underlying learning/knowledge representation, methods for enabling the required behavior (e.g., operations and interactions), and necessary software architectures required to implement these technologies within the scope of cognitive agents that assist operators in managing vehicle operations. It should be feasible for cognitive agents based on these architectures to be certified or licensed for use on deep space missions to act as liaisons that interact with the mission control operators, the crew, and vehicle subsystems. With access to all onboard data and communications, such a cognitive agent could continually integrate this dynamic information and advise the crew and mission control accordingly through multiple modes of interaction, including text, speech, and animated images. This agent could respond to queries and recommend courses of action and direct activities to the crew that consider all known constraints, the state of the subsystems, available resources, risk analyses, and goal priorities. Cognitive architectures capable of being certified for crew support on spacecraft are required to be open to NASA, with interfaces open to NASA partners who develop modules that integrate with the agent, in contrast to proprietary black-box agents. It should be noted that fulfilling this requirement would additionally make the cognitive agent suitable for a wide variety of Earth applications where a high level of assurance is needed (e.g., autonomous vehicles and aircraft).
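One way to meet the openness requirement is a published module contract that every partner-developed capability implements, so each recommendation remains inspectable rather than a black box. The sketch below is purely illustrative: the interface, method names, telemetry keys, and the example consumables module are all assumptions, not a NASA-specified API:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, Optional

class CognitiveModule(ABC):
    """Hypothetical open module interface: each capability (diagnosis,
    planning, dialogue, ...) plugs into the agent through the same
    inspectable contract rather than a proprietary black box."""

    @abstractmethod
    def ingest(self, telemetry: Dict[str, Any]) -> None:
        """Consume onboard data streams (sensor values, comms, logs)."""

    @abstractmethod
    def advise(self) -> Dict[str, Any]:
        """Return a human-reviewable recommendation, including the
        evidence behind it to support assurance and audit."""

class ConsumablesMonitor(CognitiveModule):
    """Illustrative module: flags a consumable below a reserve threshold."""

    def __init__(self, reserve: float) -> None:
        self.reserve = reserve
        self.level: Optional[float] = None

    def ingest(self, telemetry: Dict[str, Any]) -> None:
        self.level = telemetry.get("water_kg")

    def advise(self) -> Dict[str, Any]:
        low = self.level is not None and self.level < self.reserve
        return {"alert": low,
                "evidence": {"water_kg": self.level,
                             "reserve_kg": self.reserve}}
```

Returning the evidence alongside each recommendation is one simple way to keep learned or autonomous behavior transparent to both crew and certification authorities.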

An effective cognitive architecture would be capable of integrating a wide variety of knowledge sources to perform a wide variety of roles depending on mission requirements. For example, an effective prognostics and health management (PHM) agent would need to take sensor data; interpret these data to diagnose the current state of the system using learned artificial intelligence (AI) models, digital twin simulations and data, and user input; and project out potential contingencies to plan optimal maintenance and/or fault-avoidance operations under uncertainty. These operations would need to be modifiable during the mission, for example, if a hazardous event occurs, if there are changes to the mission, or if there is a learnable change in behavior that reduces arising projection errors. The agent would need to conduct low-level inspection and maintenance operations autonomously while enabling safe human intervention throughout the process. Finally, it would need to communicate with crews for planning and performance of maintenance operations, to report/escalate potential hazardous contingencies, and for modification of operations (e.g., learning). This communication could include producing human-interpretable visual dashboards, communicating directly via speech, and direct manipulation of hardware (e.g., to teach/learn certain operations). Agents like this (with functionality appropriate to the given task) would be needed to perform a variety of roles in the spacecraft, including low-level tasks like state estimation, hardware control, and subsystem management and high-level tasks like mission planning and scheduling. Agents with independent responsibilities will furthermore need to be managed and coordinated to enable functional and resilient overall operations.
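The sense-diagnose-project-plan loop described above for a PHM agent can be sketched in simplified form. The sensor threshold, failure rates, and risk threshold below are invented placeholders for illustration only, not real spacecraft data or models:

```python
import random

def diagnose(sensor_temp_c: float, nominal=(10.0, 40.0)) -> str:
    """Toy state estimation: classify subsystem health from one reading.
    A real PHM agent would fuse learned AI models, digital twin output,
    and user input here."""
    lo, hi = nominal
    return "nominal" if lo <= sensor_temp_c <= hi else "degraded"

def project_contingencies(state: str, n_samples: int = 1000,
                          seed: int = 0) -> float:
    """Monte Carlo projection of failure risk under uncertainty.
    The per-sample failure rates are invented placeholders."""
    rng = random.Random(seed)
    fail_rate = 0.02 if state == "nominal" else 0.30
    failures = sum(rng.random() < fail_rate for _ in range(n_samples))
    return failures / n_samples

def plan(risk: float, threshold: float = 0.1) -> str:
    """Pick a maintenance action; escalate to the crew above a risk
    threshold, keeping the human in the loop."""
    if risk > threshold:
        return "schedule maintenance and notify crew"
    return "continue nominal operations"
```

A reading outside the nominal band drives the projected risk above the threshold, which in turn escalates the plan to the crew; each stage could be swapped for a richer model without changing the loop's structure.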

Well-constructed proposals will focus on developing a prototype cognitive agent(s) in the context of a limited test. This agent will need to embody a cognitive architecture that can be readily applied to a wide variety of roles and tasks throughout a mission and that demonstrates the desired capabilities of autonomous and semi-autonomous operation, modifiable and/or learned behaviors, data/model fusion for decision-making under uncertainty, advanced user interaction, and assurance/transparency. This architecture could then be extended to a wider scope in a more advanced mission in future phases of the project. The project and the agent architecture will need to be thoroughly documented and demonstrated to enable understanding of this technology's capabilities and limitations.

Expected TRL or TRL Range at completion of the Project: 2 to 5

Primary Technology Taxonomy:

  • Level 1: 10 Autonomous Systems
  • Level 2: 10.3 Collaboration and Interaction

Desired Deliverables of Phase I and Phase II:

  • Research
  • Analysis
  • Prototype
  • Software

Desired Deliverables Description:

For Phase I, the expectation is to develop (1) a preliminary cognitive architecture with a trade study/requirements analysis supporting its selection for a desired mission (e.g., the Human Exploration of Mars Design Reference Mission), (2) early feasibility prototypes of architecture features and a conceptual description (e.g., in SysML) of a cognitive agent(s) in that mission, and (3) a detailed implementation plan for the full architecture with technical risks identified and managed.

For Phase II, the implementation plan will be executed, resulting in a functional prototype of the agent capable of performing the desired roles/tasks for the chosen mission that passes the preliminary tests required to meet mission requirements. Ideally, this functional prototype will be suitable for a flight demonstration in a relevant operational context (e.g., the International Space Station (ISS)). At this phase, it will also be necessary to provide comprehensive documentation of the cognitive architecture and the final prototype of the agent(s), including architectural/process/interaction diagrams (e.g., SysML), reporting on the implementation and design process, and the flowdown of prototype tests (with results included) from high-level requirements. It is also desired that the software developed (both the agent prototype and the underlying architecture modules/library) be releasable as open-source software that can be improved, modified, and adapted to new missions.

State of the Art and Critical Gaps:

Long-term crewed spacecraft, such as the ISS, are so complex that a significant portion of the crew's time is spent keeping them operational even under nominal conditions in low Earth orbit (LEO), and they still require significant real-time support from Earth. Autonomous agents performing cognitive computing can support crews on future missions beyond cislunar space by providing robust, accurate, and timely information and by performing tasks, freeing the crew to spend more time on mission science. The considerable challenge is to migrate the knowledge and capability embedded in current Earth mission control, with its tens to hundreds of human specialists ready to provide instant knowledge, to onboard agents that team with flight crews to autonomously manage a spaceflight mission.

Most Apollo missions required the timely guidance of mission control for success, typically within seconds of an off-nominal situation. Outside of cislunar space, the time delays will become untenable for Earth to manage time-critical decisions as was done for Apollo. The emerging field of cognitive computing is a vast improvement on previous information retrieval and integration technology and is likely capable of providing this essential capability. This subtopic is directly relevant to the Exploration Systems Development Mission Directorate (ESDMD) and Space Operations Mission Directorate (SOMD) Advanced Exploration Systems (AES) domain: Foundational Systems - Autonomous Systems and Operations.

Relevance / Science Traceability:

There is growing interest in NASA to support long-term human exploration missions to the Moon and eventually to Mars. Human exploration up to this point has relied on continuous communication with short delays. To enable missions with intermittent communication and long delays while keeping crew sizes small, new artificial intelligence technologies must be developed. Technologies developed under this subtopic are expected to be suitable for testing on Earth analogues of deep space spacecraft, as well as the Deep Space Gateway envisioned by NASA.


References:

Zhou, J., Zhou, Y., Wang, B., & Zang, J. (2019). Human–cyber–physical systems (HCPSs) in the context of new generation intelligent manufacturing. Engineering, 5(4), 624-636.

Woods, D. D. (2018). The theory of graceful extensibility: Basic rules that govern adaptive systems. Environment Systems and Decisions, 38(4), 433-457.

Thomaz, A. L., & Breazeal, C. (2008). Teachable robots: Understanding human teaching behavior to build more effective robot learners. Artificial Intelligence, 172(6-7), 716-737.

Kotseruba, I., & Tsotsos, J. K. (2020). 40 years of cognitive architectures: Core cognitive abilities and practical applications. Artificial Intelligence Review, 53(1), 17-94.

Tang, L., Kacprzynski, G. J., Goebel, K., Saxena, A., Saha, B., & Vachtsevanos, G. (2008, October). Prognostics enhanced automated contingency management for advanced autonomous systems. In 2008 International Conference on Prognostics and Health Management (pp. 1-9). IEEE.
