Photonic Accelerators for Artificial Neural Networks



OBJECTIVE: To develop high-speed, scalable, power-efficient photonic accelerators for vector, matrix, and tensor operations with potential applications in artificial neural networks.

DESCRIPTION: In the post-Moore's-law era, electronic hardware accelerators [1,2] with parallel computing structures and optimized local memory for special-purpose computing, as opposed to CPUs for general-purpose von Neumann computing, have enabled new applications that circumvent the physical limitations of integrated circuits (ICs). These accelerators include the well-known graphics processing units (GPUs) and tensor processing units (TPUs). Benefiting from these hardware accelerators, new applications based on artificial intelligence (AI) and machine learning (ML) that utilize artificial neural networks (ANNs) have proliferated across academia, industry, and society at large, despite the stagnation in raw IC processing power. In particular, deep-learning neural networks (DNNs), consisting of many hidden layers, have shown the ability to generate solutions that are sometimes superior to those based on human intelligence. For example, in 2016, Google's AlphaGo, trained in part through self-play, defeated the world's top human player in the game of Go. It is generally believed that the early successes of AI in the last few years herald a much wider array of AI solutions for both commercial and defense applications. One example of AI for defense applications is GPS-less navigation on the jammed battlefield, in which navigation is enabled by AI-based pattern recognition of scenes acquired in real time. The important role that electronic hardware accelerators have played so far clearly indicates that future developments in ANNs depend on advances in both software and hardware. However, electronic hardware accelerators have already been pushed to their limits in terms of scalability. Against this backdrop, there have been renewed efforts to explore the role of optics in computing [3-5].
Three major building blocks of ANNs and DNNs are 1) interconnects, 2) matrix-vector and matrix-matrix multiplication, and 3) nonlinearity. Optics and photonics can implement the first two functions as well as, if not better than, electronics, and optical nonlinearity at the per-neuron level, rather than at the logic level, is quite practical; now is therefore the right time to explore the role of optics and photonics in ANNs and DNNs. This topic focuses on photonic accelerators for linear vector, matrix, and tensor operations.
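As an illustrative sketch only (the incoherent-intensity architecture, the 8-channel dimension, and the 10 GHz input rate below are assumptions for illustration, not requirements of this topic), a photonic linear accelerator can be modeled as encoding the input vector in optical power on parallel channels, realizing the weight matrix as per-channel transmissions, and summing the routed power at photodetectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (illustrative) incoherent-intensity model: the input vector is
# encoded in optical power on N parallel channels, the weight matrix is
# realized as transmissions in [0, 1], and each detector sums the power
# routed to it, yielding y = W @ x.
N = 8
x = rng.uniform(0.0, 1.0, N)          # input optical powers
W = rng.uniform(0.0, 1.0, (N, N))     # transmission (weight) matrix

y = W @ x                             # incoherent summation at the detectors

# Each matrix-vector product costs about 2*N*N operations (one multiply
# and one add per matrix element), so throughput in operations per second
# is 2*N^2 times the vector input rate.
input_rate_hz = 10e9                  # assumed 10 GHz input data rate
ops_per_second = 2 * N * N * input_rate_hz
print(f"{ops_per_second / 1e12:.2f} TOPS")  # prints "1.28 TOPS"
```

Note that this toy model captures only the linear portion of a layer; the nonlinearity (building block 3) is applied per neuron after detection.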

PHASE I: To develop a photonic accelerator architecture, build a prototype, and experimentally demonstrate the operational principle and feasibility of the photonic accelerator. The prototype should be able to perform at least 2 TOPS (tera operations per second), achieve a matrix loading speed > 100 MHz, and consume no more than 0.5 W of electrical power. In addition, the layout of an integrated photonic accelerator, with performance matching or exceeding the Phase II requirements described below and consistent with available fabrication platforms, should be developed.
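As a hedged back-of-envelope check (the 1 GHz input data rate and the 2*N^2 operation count per matrix-vector product are illustrative assumptions, not part of the solicitation), the throughput targets can be related to the required matrix dimension:

```python
import math

def required_dimension(target_tops: float, input_rate_hz: float) -> int:
    """Smallest square matrix dimension N such that an N x N
    multiply-accumulate array at the given vector input rate reaches
    the target throughput of 2*N^2*f_in operations per second.
    Illustrative sizing model only."""
    ops_target = target_tops * 1e12
    return math.ceil(math.sqrt(ops_target / (2.0 * input_rate_hz)))

# Phase I target of 2 TOPS at an assumed 1 GHz input data rate:
print(required_dimension(2.0, 1e9))   # prints 32, i.e. roughly a 32 x 32 array
```

Under the same assumptions, the Phase II target of 100 TOPS would call for a matrix dimension on the order of a few hundred, which motivates the integrated (rather than prototype-scale) implementation.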

PHASE II: To fabricate and test an integrated photonic accelerator that can perform at least 100 TOPS, achieve a matrix loading speed > 500 MHz, and consume no more than 10 W of electrical power. Investigate the performance limits of the adopted photonic accelerator architecture in terms of computational dimensionality, computing power in TOPS, and power efficiency as functions of the input data rate and matrix loading speed. Production-scale costs of the photonic accelerator should be studied to show the viability of reasonable cost reduction at manufacturing volumes. The motivation for Phase III follow-on investment should be made evident.

PHASE III: Pursue system-level AI applications based on the photonic accelerator(s) developed in Phase II. Clearly identify the advantages of the photonic accelerator over state-of-the-art electronic accelerators, and determine whether the photonic accelerator will be used for inference and/or training. The AI system should be integrated at a military installation or on a military platform, in potential application scenarios including but not limited to communications, target classification and recognition, navigation, and simulation and training. The suitability of installing the photonic accelerator on mobile platforms such as UAVs, UGVs, and satellites, where power supply is limited, should be investigated. Dual-use AI applications of the photonic accelerator(s) in medicine and health care, finance, gaming, marketing, and autonomous vehicles are encouraged.

KEYWORDS: lasers, modulators, photodetector, optical computing, artificial intelligence, neural networks


REFERENCES:
1. S. A. Manavski, "CUDA compatible GPU as an efficient hardware accelerator for AES cryptography," in ICSPC 2007 Proceedings - 2007 IEEE International Conference on Signal Processing and Communications, 2007.
2. G. Quintana-Ortí, F. D. Igual, E. S. Quintana-Ortí, and R. A. van de Geijn, "Solving dense linear systems on platforms with multiple hardware accelerators," ACM SIGPLAN Not., 2009.
3. H. J. Caulfield and S. Dolev, "Why future supercomputing requires optics," Nat. Photonics, vol. 4, no. 5, pp. 261-263, May 2010.
4. D. Brunner, S. Reitzenstein, and I. Fischer, "All-optical neuromorphic computing in optical networks of semiconductor lasers," in 2016 IEEE International Conference on Rebooting Computing (ICRC), 2016.
5. T. Deng, J. Robertson, and A. Hurtado, "Controlled Propagation of Spiking Dynamics in Vertical-Cavity Surface-Emitting Lasers: Towards Neuromorphic Photonic Networks," IEEE J. Sel. Top. Quantum Electron., 2017.
