
Implementing Neural Network Algorithms on Neuromorphic Processors



OBJECTIVE: Deploy Deep Neural Network algorithms on near-commercially available Neuromorphic or equivalent Spiking Neural Network processing hardware.

DESCRIPTION: Biologically inspired neural networks provide the basis for modern signal processing and classification algorithms. Implementing these algorithms on conventional computing hardware requires significant compromises in efficiency and latency due to fundamental design differences. A new class of hardware is emerging that more closely resembles the biological neuron/synapse model found in nature and may resolve some of these limitations and bottlenecks. Recent work has demonstrated significant performance gains on these new hardware architectures while converging on solutions of equivalent accuracy [Ref 1].

The most promising devices in this new class are based on Spiking Neural Networks (SNN) and analog Processing in Memory (PiM), where information is spatially and temporally encoded onto the network. A simple spiking network can reproduce the complex behavior found in the neural cortex with a significant reduction in complexity and power requirements [Ref 2]. Fundamentally, there should be no difference between algorithms targeting neural network hardware and those targeting current processing hardware; in fact, the algorithms can readily be transferred between hardware architectures [Ref 4]. The performance gains, the broad applicability of neural networks, and the relative ease of transitioning current algorithms to the new hardware motivate the consideration of this topic.

Hardware based on Spiking Neural Networks is currently under development at various stages of maturity. Two prominent examples are the IBM TrueNorth and Intel Loihi chips. The IBM approach uses conventional CMOS technology, while the Intel approach uses a less mature memristor architecture. Estimated efficiency gains exceed three orders of magnitude over state-of-the-art graphics processing units (GPUs) and field-programmable gate arrays (FPGAs). More advanced architectures based on all-optical or photonic SNNs show even more promise.
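The "simple model of spiking neurons" cited above [Ref 2] reduces cortical-neuron dynamics to two coupled equations. The following is a minimal Euler-integration sketch of that model; the constant input current and the "regular spiking" parameter set (a, b, c, d) are illustrative choices from the cited paper, not values specified by this topic.

```python
def izhikevich(I=10.0, T_ms=1000.0, dt=0.25,
               a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler integration of Izhikevich's simple spiking-neuron model [Ref 2].

    v' = 0.04 v^2 + 5 v + 140 - u + I
    u' = a (b v - u);  on a spike (v >= 30 mV): v <- c, u <- u + d.
    I is a constant input current; a, b, c, d are the 'regular spiking'
    parameter set from the paper (illustrative, not from the topic text).
    """
    v, u = c, b * c          # resting membrane potential and recovery variable
    spike_times = []
    for step in range(int(T_ms / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike: record the time, then reset
            spike_times.append(step * dt)
            v, u = c, u + d
    return spike_times

spikes = izhikevich()
print(f"{len(spikes)} spikes in 1 s of simulated time")
```

Despite using only two state variables per neuron, varying the four parameters reproduces many of the firing patterns observed in cortical neurons, which is the basis of the complexity and power reductions noted above.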
Nano-photonic systems are estimated to achieve a six-order-of-magnitude increase in efficiency and computational density, approaching the performance of a human neural cortex. The primary goal of this effort is to deploy Deep Neural Network algorithms on near-commercially available Neuromorphic or equivalent Spiking Neural Network processing hardware, benchmark the performance gains, and validate suitability for warfighter applications.

Work produced in Phase II may become classified. Note: The prospective contractor(s) must be U.S. owned and operated with no foreign influence as defined by DoD 5220.22-M, National Industrial Security Program Operating Manual, unless acceptable mitigating procedures can and have been implemented and approved by the Defense Counterintelligence and Security Agency (DCSA). The selected contractor and/or subcontractor must be able to acquire and maintain a secret level facility and Personnel Security Clearances. This will allow contractor personnel to perform on advanced phases of this project as set forth by DCSA and NAVAIR in order to gain access to classified information pertaining to the national defense of the United States and its allies; this will be an inherent requirement. The selected company will be required to safeguard classified material in accordance with (IAW) DoD 5220.22-M during the advanced phases of this contract.

PHASE I: Develop an approach for deploying Neural Network algorithms and identify suitable hardware, learning algorithm framework and benchmark testing and validation methodology plan. Demonstrate performance enhancements and integration of technology as described in the description above. The Phase I effort will include plans to be developed under Phase II.

PHASE II: Transfer government-furnished algorithms and training data running in a desktop computing environment to the new hardware environment. An example algorithm development framework for this work would be TensorFlow. Some modification of the framework and/or algorithms may be required to facilitate the transfer. Optimization will be required, and is expected, to maximize the performance of the algorithms on the new hardware; it should focus on throughput, latency, and power draw/dissipation. Benchmark testing should be conducted against these metrics. Develop a transition plan for Phase III.

It is probable that the work under this effort will be classified under Phase II (see Description section for details).
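One common route for transferring a trained conventional network to spiking hardware is rate coding, in the spirit of the ANN-to-SNN conversion work in [Ref 3]: each analog activation is replaced by the firing rate of an integrate-and-fire neuron. The sketch below illustrates only that core idea; the threshold, time window, and reset-by-subtraction scheme are illustrative assumptions, not a prescription from this topic.

```python
def snn_rate(z, T=500, theta=1.0):
    """Approximate a clipped ReLU activation with an integrate-and-fire neuron.

    Each timestep the membrane potential accumulates the analog input z;
    crossing the threshold theta emits a spike and resets by subtraction.
    The firing rate over T steps then approximates min(max(z, 0), 1).
    T and theta are illustrative choices, not values from the topic text.
    """
    v, spikes = 0.0, 0
    for _ in range(T):
        v += z
        if v >= theta:
            spikes += 1
            v -= theta      # reset by subtraction preserves residual charge
    return spikes / T

for z in (0.25, 0.6, -0.4):
    print(f"input {z:+.2f}  ReLU {max(z, 0.0):.2f}  spike rate {snn_rate(z):.2f}")
```

Because the spike rate saturates at one spike per timestep, activations must be normalized into the representable range before conversion; that kind of rescaling is one example of the framework/algorithm modification anticipated above.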

PHASE III: Optimize the algorithms and conduct benchmark testing. Adjust the algorithms as needed and transition to the final hardware environment. Successful technology development could benefit industries that conduct data mining, high-end processing, computer modeling, and machine learning, such as the manufacturing, automotive, and aerospace industries.

KEYWORDS: Neural Networks, Neuromorphic, Processor, Algorithm, Spiking Neurons, Machine Learning


REFERENCES:
1. Ambrogio, S., Narayanan, P., Tsai, H., Shelby, R., Boybat, I., Nolfo, C., . . . Burr, G. "Equivalent-Accuracy Accelerated Neural-Network Training Using Analogue Memory." Nature, June 6, 2018, pp. 60-67.
2. Izhikevich, E. "Simple Model of Spiking Neurons." IEEE Transactions on Neural Networks, 2003, pp. 1569-1572.
3. Diehl, P., Zarrella, G., Cassidy, A., Pedroni, B. & Neftci, E. "Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-Power Neuromorphic Hardware." Cornell University, 2016.
4. Esser, S., Merolla, P., Arthur, J., Cassidy, A., Appuswamy, R., Andreopoulos, A., . . . Modha, D. "Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing." IBM Research: Almaden, May 24, 2016.
5. Department of Defense. National Defense Strategy 2018. United States Congress.
