SBIR Phase I: Processor Architecture for Radically Improved Performance and Energy Efficiency on Sparse Machine Learning

Award Information
Agency: National Science Foundation
Branch: N/A
Contract: 1746469
Agency Tracking Number: 1746469
Amount: $224,643.00
Phase: Phase I
Program: SBIR
Solicitation Topic Code: S
Solicitation Number: N/A
Solicitation Year: 2017
Award Year: 2018
Award Start Date (Proposal Award Date): 2018-01-01
Award End Date (Contract End Date): 2018-06-30
Small Business Information
9308 Springwood Drive, Austin, TX, 78750
DUNS: 080649095
HUBZone Owned: N
Woman Owned: N
Socially and Economically Disadvantaged: N
Principal Investigator
 Mitchell Hayenga
 (979) 450-4469
Business Contact
 Mitchell Hayenga
Phone: (979) 450-4469
Research Institution
N/A

Abstract
The broader impact/commercial potential of this Small Business Innovation Research (SBIR) Phase I project is to expand the capability of modern computer systems to execute machine learning applications and to enable new uses of machine learning in people's everyday lives. Owing to its widespread success, machine learning is being applied to automate tasks across most modern businesses. However, these new problems are increasingly computationally complex and data intensive; their scale stresses the computational abilities and memory requirements of modern systems. The technology to be developed under this Phase I project will enable computer systems with radically higher performance and energy efficiency on these machine learning tasks. Efficient machine learning will allow complex tasks to fit within modern mobile devices while simultaneously enabling computers within datacenters to solve increasingly large problems. Finally, as businesses rush to deploy hardware for machine learning, the underlying algorithms and techniques are rapidly evolving. The technology to be developed is highly adaptable, delivering high efficiency on current machine learning techniques while mitigating risk for businesses likely to adopt new machine learning algorithms.

The proposed project introduces a new hardware architecture for the execution of data- and control-intensive machine learning workloads. As machine learning has expanded in use, growing data sizes have driven the adoption of compressed data representations. However, modern computational devices such as microprocessors and graphics processors are highly inefficient when working on problems that use these compressed representations, due to their irregular control flow and data access patterns. This Phase I project introduces an adaptable architecture that excels at irregular computation and can dynamically re-allocate resources to hasten execution.
To demonstrate the capabilities of the new architecture, key execution kernels from modern machine learning applications will be adapted to operate on compressed representations. An existing simulation infrastructure will be extended to model key hardware requirements and to gather performance estimates for the newly proposed hardware architectures. Preliminary estimates indicate that multiple-factor improvements in energy efficiency and performance are expected across the key operations of machine learning applications.
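The award abstract does not name the specific compressed representations targeted. As an illustration only, a common example is compressed sparse row (CSR) storage, whose sparse matrix-vector product exhibits the data-dependent loop bounds and indirect memory gathers that make such workloads irregular on conventional processors. A minimal sketch (CSR and the kernel below are an assumption for illustration, not the project's design):

```python
# Illustrative sketch: compressed sparse row (CSR) storage and a sparse
# matrix-vector product (SpMV). CSR is one common compressed format;
# the award abstract does not specify which representations are used.

def dense_to_csr(dense):
    """Convert a dense row-major matrix (list of lists) to CSR arrays."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))  # running count of stored nonzeros
    return values, col_idx, row_ptr

def spmv_csr(values, col_idx, row_ptr, x):
    """Compute y = A @ x for a CSR matrix A.

    The inner-loop trip count depends on the data (row_ptr), and the
    read x[col_idx[k]] is an indirect gather -- the irregular control
    and memory access patterns the abstract refers to.
    """
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

A = [[5, 0, 0],
     [0, 0, 3],
     [2, 0, 1]]
vals, cols, ptrs = dense_to_csr(A)
print(spmv_csr(vals, cols, ptrs, [1.0, 2.0, 3.0]))  # [5.0, 9.0, 5.0]
```

Because each row's work and memory footprint vary, fixed-width SIMD or GPU hardware tends to be underutilized on such kernels, which is the inefficiency the proposed architecture aims to address.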

* Information listed above is at the time of submission. *