Memristor based low power high throughput circuits and systems design

Date of Award


Degree Name

Ph.D. in Electrical Engineering


Department of Electrical and Computer Engineering


Advisor: Tarek Taha


Power density constraints and device reliability issues have driven energy-efficient, fault-tolerant architecture designs in recent years. With the emergence of big data applications, low-power, high-throughput architectures are attracting growing interest. Neural networks are used in diverse areas including big data analysis and sensor and signal processing applications. The memristor is a novel device with a large, variable resistance range. Physical memristors can be laid out in a high-density grid known as a crossbar. A memristor crossbar can evaluate many multiply-add operations in parallel in the analog domain; multiply-add is the dominant operation in neural network applications. The objective of this thesis is to examine memristor-based, extremely low power neuromorphic architectures for signal and big data processing applications.

This thesis examines in-situ training of memristor-based multi-layer neural networks, in which the entire crossbar is updated in four steps per training instance (datum); existing training approaches update a crossbar serially, column by column. Training of memristor-based deep neural networks is examined using autoencoders for layer-wise pre-training of the networks. We propose a novel technique for ex-situ training of memristor-based neural networks that takes sneak-path currents into consideration. Multicore architectures based on memristor neural cores are developed, and system-level area and power are compared with those of traditional computing systems. Results show that the memristor neural network based architectures could be about five orders of magnitude more energy efficient than traditional computing systems.
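The parallel analog multiply-add described above follows from Ohm's and Kirchhoff's laws: each memristor's conductance encodes a weight, input voltages drive the rows, and each column current is the dot product of the voltage vector with that column's conductances. The following is a minimal, idealized NumPy sketch of this behavior; the conductance range and voltages are illustrative assumptions, and real crossbars must additionally handle negative weights (e.g. via paired columns) and the sneak-path currents the thesis addresses.

```python
import numpy as np

# Idealized memristor crossbar: G[i, j] is the conductance (in siemens)
# of the device at row i, column j, and V[i] is the voltage applied to
# row i. By Ohm's law each device contributes current V[i] * G[i, j],
# and by Kirchhoff's current law column j collects
#     I[j] = sum_i V[i] * G[i, j]
# so all column dot products are evaluated in one parallel analog step.

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-3, size=(4, 3))  # 4 rows x 3 columns of conductances
V = np.array([0.2, -0.1, 0.3, 0.05])      # row input voltages (volts)

I = V @ G  # one analog evaluation yields all 3 column currents
print(I)
```

In a neural network mapping, each column current corresponds to one neuron's weighted sum, so a single crossbar read replaces an entire layer's multiply-add loop.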


Keywords

Memristors--Energy consumption; Neural networks (Computer science)--Energy consumption; Computer network architectures--Design; Electrical Engineering; Computer Engineering; Memristor crossbar; neural networks; training; multicore architecture; low power system

Rights Statement

Copyright 2016, author