Presenter Information

Sudarshan Babu, Anna University

Location

Science Center Auditorium, University of Dayton

Start Date

4-22-2016 4:25 PM

Description

We propose a learning algorithm that delivers a significant and consistent improvement in accuracy over the existing sparse representation learner. The proposed work is built on two essential properties of data. The first is that data points belonging to the same class span the same subspace (hereafter referred to as the subspace property), and the second is that data points belonging to the same class are part of the same cluster (hereafter referred to as the clustering property).
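The two properties can be illustrated with a small synthetic check; this is an illustrative sketch, not from the paper — the dimensions, class sizes, and centroids are all made-up values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Subspace property: samples of one class drawn from a 2-D subspace of R^10,
# so the class sample matrix has rank equal to the subspace dimension.
basis = rng.normal(size=(10, 2))            # hypothetical subspace basis
X = basis @ rng.normal(size=(2, 30))        # 30 samples of the class
assert np.linalg.matrix_rank(X) == 2

# Clustering property: samples of a class concentrate around a common
# centroid, so intra-class distances are smaller than inter-class ones.
A = rng.normal(size=(30, 10)) + 10.0        # class A cluster
B = rng.normal(size=(30, 10)) - 10.0        # class B cluster
intra = np.linalg.norm(A - A.mean(axis=0), axis=1).mean()
inter = np.linalg.norm(A - B.mean(axis=0), axis=1).mean()
assert intra < inter
```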

This paper proceeds by discussing a mode of breakdown for the sparse representation learner. We then introduce the clustering-exploiting sparse representation learner, which exploits both the subspace and clustering properties to overcome the issues faced by the sparse representation learner. The paper provides a strong geometric perspective on the classification setting induced by the different optimization frameworks discussed in the paper. To support our claims empirically, experiments were conducted comparing the sparse representation learner against the clustering-exploiting sparse representation learner on a set of five diverse datasets. The final finding of this work is that the clustering-exploiting sparse learner can be safely assumed to give an improvement of 5% to 10% over the ordinary sparse representation learner.
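The abstract does not give implementation details, so the following is a minimal sketch of one plausible reading: plain sparse representation classification (SRC) codes a test point over the full training dictionary and assigns the class with the smallest residual, while the clustering-exploiting variant first restricts the dictionary to the test point's cluster. The sparse coder (a basic orthogonal matching pursuit), the cluster model (k-means), and all function names are illustrative assumptions, not the authors' method:

```python
import numpy as np
from sklearn.cluster import KMeans

def omp_code(D, y, k=5, tol=1e-10):
    """Greedy orthogonal matching pursuit: sparse-code y over dictionary D."""
    x = np.zeros(D.shape[1])
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

def src_predict(D, labels, y, k=5):
    """Plain SRC: code y over the full dictionary, pick the class whose
    atoms reconstruct y with the smallest residual."""
    x = omp_code(D, y, k)
    residuals = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)

def clustered_src_predict(D, labels, y, n_clusters=2, k=5):
    """Clustering-exploiting variant (one plausible reading): cluster the
    training samples, then run SRC with the dictionary restricted to the
    cluster nearest the test point."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(D.T)
    mask = km.labels_ == km.predict(y[None, :])[0]
    return src_predict(D[:, mask], labels[mask], y, k)

# Toy data: each class spans its own 1-D subspace and forms its own cluster.
rng = np.random.default_rng(0)
u0, u1 = rng.normal(size=10), rng.normal(size=10)
D = np.hstack([np.outer(u0, rng.uniform(1, 2, 20)),
               np.outer(u1, rng.uniform(1, 2, 20))])
labels = np.repeat([0, 1], 20)
print(src_predict(D, labels, 1.5 * u0),
      clustered_src_predict(D, labels, 1.5 * u0))  # both should report class 0
```

On this idealized data both predictors agree; the paper's claimed 5-10% gain concerns the harder regimes where the full-dictionary coder breaks down and the cluster restriction helps.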

Comments

Copyright © 2016 by the author. This paper was presented at the 2016 Modern Artificial Intelligence and Cognitive Science Conference, held at the University of Dayton April 22-23, 2016. Permission documentation is on file.

 

Clustering Exploiting Sparse Representation Learner for Robust Classification

