Kokkos Tensor Library

Award Information
Agency: Department of Energy
Branch: N/A
Contract: DE-SC0022430
Agency Tracking Number: 0000263420
Amount: $250,000.00
Phase: Phase I
Program: SBIR
Solicitation Topic Code: C53-02a
Solicitation Number: N/A
Solicitation Year: 2021
Award Year: 2022
Award Start Date (Proposal Award Date): 2022-02-14
Award End Date (Contract End Date): 2023-02-13
Small Business Information
5335 Far Hills Avenue Suite 315
Dayton, OH 45429-4248
United States
DUNS: 141943030
HUBZone Owned: No
Woman Owned: No
Socially and Economically Disadvantaged: No
Principal Investigator
 Gerald Sabin
 (937) 433-2886
Business Contact
 V. Nagarajan
Phone: (937) 433-2886
Research Institution

Tensors are fundamental building blocks in a wide range of high-performance computing applications, including Artificial Intelligence (e.g., Deep Neural Networks) and Numerical Modeling and Simulation (e.g., Finite Element codes). High-performance computing platforms are increasingly heterogeneous: upcoming exascale systems use heterogeneous processors (e.g., Intel Sapphire Rapids, Nvidia Grace, and AMD EPYC processors) that include vector engines, matrix engines, and heterogeneous cores, coupled with compute accelerators from a variety of vendors (e.g., Intel Xe HPC/Ponte Vecchio, Nvidia A100, AMD MI100). Developing applications that compile and run efficiently across this wide range of computing environments is a massive challenge. Tools such as Kokkos and KokkosKernels help reduce the burden, but they currently lack a performance-portable tensor library. The proposed project will develop an optimized KokkosTensor API that supports tensor transpose and tensor contraction, as well as optimization of tensor expressions involving tensor contractions and other element-wise tensor operators. The implementations will use Kokkos primitives to enable performance-portable code, will leverage high-performance vendor libraries where they exist (e.g., cuTENSOR), and will include architecture-aware tuned implementations where appropriate (e.g., for Nvidia, AMD, and/or Intel accelerators).

The Phase I effort will develop a prototype KokkosTensor implementation that includes tensor contraction and tensor transpose. The library will support multiple GPU architectures and will be demonstrated using existing Kokkos finite element applications. KokkosTensor will benefit many DOE and commercial applications, such as the higher-order finite element discretizations used in hypersonic reentry, mechanics, and fire simulations at Sandia. Other examples include solvers for matrix-free multigrid methods and several additional use cases.

* Information listed above is at the time of submission. *
