Authors

Presenter(s)

Yangjie Qi

Files

Download Project (1.1 MB)

Description

General-purpose computing systems serve a wide variety of applications, but the extensive support for flexibility in these systems limits their energy efficiency. Neural networks, including deep networks, are widely used for signal processing and pattern recognition. This poster presents a digital multicore on-chip learning architecture for deep neural networks, with memory internal to each neural core to store synaptic weights. The architecture can process a variety of deep learning applications. Its system-level area and power are compared with an NVIDIA GeForce GTX 980 Ti GPGPU. Our experimental evaluations show that the proposed architecture can provide significant area and energy efficiency gains over GPGPUs for both training and inference.
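The core idea of keeping synaptic weights in memory local to each neural core can be illustrated with a minimal sketch. This is a hypothetical software analogy only, not the poster's actual hardware design: each simulated core holds its own slice of a layer's weight matrix, so output neurons are computed entirely from core-local state. All names (`NeuralCore`, `partition`, `layer_forward`) are illustrative.

```python
# Conceptual sketch (hypothetical): output neurons of a layer are
# partitioned across cores, and each core stores its weight rows in
# local memory, mimicking per-core synaptic weight storage.

class NeuralCore:
    def __init__(self, weights):
        # Local weight memory: one row per output neuron on this core.
        self.weights = weights

    def forward(self, inputs):
        # Weighted sums for this core's output neurons only.
        return [sum(w * x for w, x in zip(row, inputs)) for row in self.weights]

def partition(weights, n_cores):
    # Assign contiguous chunks of weight rows to each core.
    chunk = (len(weights) + n_cores - 1) // n_cores
    return [NeuralCore(weights[i:i + chunk]) for i in range(0, len(weights), chunk)]

def layer_forward(cores, inputs):
    # Each core computes independently; outputs are concatenated.
    out = []
    for core in cores:
        out.extend(core.forward(inputs))
    return out

weights = [[1, 0], [0, 1], [1, 1], [2, -1]]   # 4 output neurons, 2 inputs
cores = partition(weights, 2)                  # 2 cores, 2 neurons each
print(layer_forward(cores, [3, 5]))            # → [3, 5, 8, 1]
```

In a hardware realization of this pattern, keeping weights core-local avoids shuttling them over a shared memory bus, which is a major source of the energy cost the abstract attributes to general-purpose systems like GPGPUs.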

Publication Date

4-18-2018

Project Designation

Graduate Research

Primary Advisor

Tarek M. Taha

Primary Advisor's Department

Electrical and Computer Engineering

Keywords

Stander Symposium project

A Low Power High Throughput Architecture for Deep Network Training
