
SBIR Phase I: Addressing the memory bottleneck in deep neural networks in cloud platforms

Award Information
Agency: National Science Foundation
Branch: N/A
Contract: 1747360
Agency Tracking Number: 1747360
Amount: $224,586.00
Phase: Phase I
Program: SBIR
Solicitation Topic Code: IT
Solicitation Number: N/A
Solicitation Year: 2017
Award Year: 2018
Award Start Date (Proposal Award Date): 2018-01-01
Award End Date (Contract End Date): 2018-06-30
Small Business Information
2208 Pacific Coast Dr
Goleta, CA 93117
United States
DUNS: 080596325
HUBZone Owned: No
Woman Owned: No
Socially and Economically Disadvantaged: No
Principal Investigator
 Farnood Merrikh Bayat
 (805) 708-4652
Business Contact
 Farnood Merrikh Bayat
Phone: (805) 708-4652
Research Institution

The broader impact/commercial potential of this Small Business Innovation Research (SBIR) Phase I project is to define a path toward an ultra-fast, energy-efficient accelerator for Machine Learning applications deployed in the cloud. The merging of cloud computing and Machine Learning is shaping our everyday experience. Applications that run on the cloud and exploit Machine Learning algorithms include data mining, natural language processing, and pattern recognition. Together these constitute cognitive computing, and thanks to a vast and growing number of developer APIs, it is becoming easier to access the computational power of the cloud and build new applications. Businesses use this computational potential to connect data and find patterns valuable for commerce, or to improve cybersecurity.

This Small Business Innovation Research (SBIR) Phase I project will define a new kind of hardware accelerator that speeds up cognitive computation by orders of magnitude while reducing energy consumption relative to state-of-the-art processors. The proposed technology is fast and energy efficient, but it is prone to limited precision and to sensitivity to temperature variation. During Phase I, the company will define the hardware accelerator at the system level, optimizing the design for ultra-high speed while retaining sufficient precision for the required cognitive computation. At the same time, the effects of temperature variation and noise will be minimized through improved design. Finally, the energy consumption of the new designs will be estimated and compared with the overall performance of state-of-the-art competitive architectures.
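The precision trade-off mentioned above can be illustrated with a minimal NumPy sketch (not taken from the award; the bit width, layer size, and quantization scheme are illustrative assumptions). It quantizes a layer's weights to a few bits, as a low-precision analog accelerator effectively would, and measures how much the dominant DNN kernel, the matrix-vector product, degrades:

```python
import numpy as np

def quantize(w, n_bits=4):
    """Uniformly quantize weights to 2**n_bits levels over their range.
    A crude stand-in for the limited precision of analog weight storage."""
    levels = 2 ** n_bits - 1
    w_min, w_max = w.min(), w.max()
    step = (w_max - w_min) / levels
    return np.round((w - w_min) / step) * step + w_min

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))  # hypothetical layer weights, full precision
x = rng.normal(size=256)         # input activations

y_full = W @ x                        # reference matrix-vector product
y_quant = quantize(W, n_bits=4) @ x   # same product with 4-bit weights

# Relative error introduced by the reduced weight precision
rel_err = np.linalg.norm(y_full - y_quant) / np.linalg.norm(y_full)
print(f"relative error with 4-bit weights: {rel_err:.3f}")
```

Errors on this order are often tolerable for inference, which is why "sufficient precision" rather than full precision is the stated design target; modeling additional error sources such as temperature drift and noise would follow the same pattern.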

* Information listed above is at the time of submission. *
