Patrick K Martell
Neural networks have become increasingly popular in recent years due to their ability to accurately classify images in complex tasks. This began with AlexNet in 2012, which led to better-performing networks such as GoogLeNet and ResNet. The network architecture used in this work is the nonlinear line attractor (NLA) network. Our method uses a polynomial weighting scheme rather than a linear one. The architecture was also improved with a Gaussian weighting scheme, which provides modularity in the architecture and reduces redundancy in the network. The polynomial weighting scheme improves the network on the tested datasets, yielding better convergence characteristics, quicker training times, and higher recognition rates than its linear counterpart. These changes led to a polynomial network that modifies the NLA architecture to include different ways of applying polynomial weighting. In each layer, different orders of each input can be connected by a weight set, which can be trained with a backpropagation algorithm. While this network performs very well, we believe there is still room for improvement. The previous method performs near the top of its class, though it does not outperform the best deep learning networks on the MNIST database. By combining the polynomial network and region-based approach with current state-of-the-art deep learning techniques, we believe the combination will outperform the regular polynomial-based networks, regardless of polynomial order and region-based connections. This expected improvement will come from the ability of the polynomial method to further augment the capacity of deep learning networks to understand the feature space. The accuracy gain will most likely come at the cost of added complexity and training time, a cost usually accepted when using deep learning networks.
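The abstract describes layers in which several orders of each input are connected through their own weight sets. As a rough illustration only (the class name, initialization, and tanh activation below are assumptions, not details from the project), a polynomial weighting layer of this kind might be sketched as follows, where each polynomial order of the input vector gets its own weight matrix:

```python
import numpy as np

class PolynomialLayer:
    """Sketch of a polynomial weighting layer: each input is raised to
    powers 1..order, and each power is connected by its own weight set."""

    def __init__(self, n_in, n_out, order=3, seed=0):
        rng = np.random.default_rng(seed)
        # one weight matrix per polynomial order (assumed small random init)
        self.weights = [rng.normal(0.0, 0.1, (n_out, n_in))
                        for _ in range(order)]
        self.bias = np.zeros(n_out)

    def forward(self, x):
        # y = b + sum_k W_k @ x**k, with x**k taken elementwise
        y = self.bias.copy()
        for k, Wk in enumerate(self.weights, start=1):
            y += Wk @ (x ** k)
        return np.tanh(y)

layer = PolynomialLayer(n_in=4, n_out=2, order=3)
out = layer.forward(np.array([0.5, -0.2, 0.1, 0.8]))
print(out.shape)  # (2,)
```

Because every order contributes linearly through its own weight matrix, the gradients with respect to each weight set are straightforward, which is what makes training by ordinary backpropagation possible as the abstract notes.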
Graduate Research - Graduate
Vijayan K. Asari, Theus H. Aspiras
Primary Advisor's Department
Electrical and Computer Engineering
Stander Symposium project
"CNN Based HAPNet for Deep Learning" (2017). Stander Symposium Projects. 1071.