With the rapid proliferation of computing systems and the internet, the amount of data generated has been increasing exponentially. This includes data from mobile devices, where almost all information is now computerized, and from science experiments, where large simulations on supercomputers are increasingly the norm. A key issue raised by this massive increase in data is how we process and make sense of it; this is called the "Big Data" challenge. Deep learning is a class of mathematical algorithms, based on very large scale neural networks, that is now heavily used for Big Data analytics. One of the key challenges with deep learning is that it requires massive computing power. At present, clusters of high performance graphics cards designed primarily for computation (known as GPGPUs) are used for these tasks. A key problem with clusters of GPGPUs is that they consume large amounts of energy, making it difficult to scale existing massive computing systems to future Big Data volumes. The deep neural network designed by the Parallel Cognitive Systems Laboratory is based on application specific integrated circuits (ASICs), which provide high performance at reasonably low power consumption; however, ASICs are extremely expensive to fabricate. The Field Programmable Gate Array (FPGA) is a type of integrated circuit that can be reconfigured to implement a large range of arbitrary functions according to application requirements. FPGAs are much cheaper than ASICs and consume less power than CPUs and GPUs. The objective of this proposal is to develop a deep learning network based on FPGAs. I will optimize the whole design to make it more suitable for deep learning. Several pattern recognition applications that use deep learning will be used to test and evaluate the design.
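To illustrate the kind of computation a deep learning accelerator (GPGPU, ASIC, or FPGA) must perform at massive scale, the following is a minimal sketch, not the poster's actual design, of a single dense layer's forward pass: a multiply-accumulate over the inputs for each output neuron, followed by a ReLU activation. The function and example values are hypothetical and chosen only for illustration.

```python
def dense_forward(inputs, weights, biases):
    """Compute outputs[j] = relu(sum_i inputs[i] * weights[i][j] + biases[j]).

    Each output neuron is a dot product of the inputs with one weight
    column plus a bias; these multiply-accumulate operations dominate
    the cost of deep networks and are what FPGAs parallelize in hardware.
    """
    outputs = []
    for j in range(len(biases)):
        acc = biases[j]
        for i, x in enumerate(inputs):
            acc += x * weights[i][j]
        outputs.append(max(0.0, acc))  # ReLU: clamp negatives to zero
    return outputs

# Hypothetical example: a 3-input, 2-output layer.
x = [1.0, 2.0, 3.0]
w = [[0.1, -0.2],
     [0.3, 0.4],
     [-0.5, 0.6]]
b = [0.0, 0.1]
print(dense_forward(x, w, b))
```

A real network stacks many such layers with thousands of neurons each, which is why dedicated hardware is needed for both training and inference.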
Primary Advisor
Tarek M. Taha
Primary Advisor's Department
Electrical and Computer Engineering
Stander Symposium poster
"Deep Neural Network Based on FPGA" (2018). Stander Symposium Posters. 1435.