An AMBA-Compliant, Radiation Tolerant Tensor Core for Use in A.I. Applications
Phone: (719) 531-0805
A novel Radiation-Hardened-by-Design (RHBD) data-processing accelerator intellectual property (IP) core is proposed for use in future NASA missions. The core is an artificial-neural-network accelerator based on work done at Google, IBM, and elsewhere. Known as a tensor core, the IP follows an architecture of matrix multipliers, accumulators, register files, and fast, abundant memory access. The tensor core will be Advanced Microcontroller Bus Architecture (AMBA) bus compliant and will feature an architectural approach that allows the data-processing elements to be expanded easily when more die area is available. The core will be developed on the trusted GlobalFoundries (GF) 32nm Silicon-on-Insulator (SOI) process, a technology with extensive ongoing development, including NASA's future High Performance Spaceflight Computing (HPSC) platform.

The core is proposed as an effort to develop data-processing acceleration that decreases the downlink data bandwidth of future space missions. If more processing can be accomplished in situ, a given mission can be expected to require less downlink bandwidth, a constraint that is becoming more critical with the ever-increasing number of active missions.

The IP core will be developed for incorporation into other designs at the 32nm process node. It will also be structured for incorporation into Micro-RDC's future Reticle Programmable System on Chip (RPSoC) platform. The RPSoC, under development with funding from NASA and the Air Force, is a platform for digital and mixed-signal designs intended to lower the cost of development at 32nm and to decrease lead time from design inception to product delivery. The tensor core will be featured on this platform as a data-acceleration core. The core will employ RHBD techniques throughout the front end of line (FEOL) and back end of line (BEOL) to ensure that no data is corrupted within the artificial-neural-network configuration or the data path.
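The matrix-multiplier/accumulator dataflow described above can be illustrated with a minimal sketch, assuming an output-stationary design in which operand tiles stream through a fixed grid of multiply-accumulate (MAC) units backed by an accumulator register file. The 4x4 tile size and all function names here are illustrative assumptions; the proposal does not specify the core's dimensions or dataflow.

```python
# Minimal sketch of the multiply-accumulate dataflow a tensor core
# typically implements. The TILE x TILE processing-element array and the
# output-stationary accumulation scheme are illustrative assumptions.

TILE = 4  # assumed size of the MAC array; not specified in the proposal


def mac_tile(a_tile, b_tile, acc):
    """One pass of the MAC array: accumulate a_tile @ b_tile into acc."""
    for i in range(TILE):
        for j in range(TILE):
            for k in range(TILE):
                acc[i][j] += a_tile[i][k] * b_tile[k][j]


def matmul_tiled(a, b, n):
    """Multiply two n x n matrices (n a multiple of TILE) by streaming
    TILE x TILE operand tiles through the MAC array, holding partial
    sums in local accumulators as a register-file-based core would."""
    c = [[0] * n for _ in range(n)]
    for ti in range(0, n, TILE):
        for tj in range(0, n, TILE):
            # Accumulators stay resident while operand tiles stream past.
            acc = [[0] * TILE for _ in range(TILE)]
            for tk in range(0, n, TILE):
                a_tile = [[a[ti + i][tk + k] for k in range(TILE)]
                          for i in range(TILE)]
                b_tile = [[b[tk + k][tj + j] for j in range(TILE)]
                          for k in range(TILE)]
                mac_tile(a_tile, b_tile, acc)
            # Write the finished accumulator tile back to memory.
            for i in range(TILE):
                for j in range(TILE):
                    c[ti + i][tj + j] = acc[i][j]
    return c
```

In hardware, the inner triple loop of `mac_tile` would execute in parallel across the processing-element grid each cycle, which is why expanding the array with additional die area (as the proposal describes) directly scales throughput.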
* Information listed above is at the time of submission. *