Document Type

Conference Paper

Publication Date

7-2015

Publication Source

2015 International Joint Conference on Neural Networks

Abstract

Extreme Learning Machine (ELM) has been introduced as a new algorithm for training single-hidden-layer feed-forward neural networks (SLFNs) in place of classical gradient-based algorithms. Building on the consistency property of data, which enforces that similar samples share similar properties, ELM is a biologically inspired learning algorithm for SLFNs that learns much faster, generalizes well, and performs well in classification applications. However, the random generation of the weight matrix in current ELM-based techniques can lead to unstable outputs in both the learning and testing phases. We therefore present a novel approach for computing the weight matrix in ELM, forming a State Preserving Extreme Learning Machine (SPELM). SPELM stabilizes ELM training and testing outputs while monotonically increasing accuracy by preserving state variables. Furthermore, three popular feature extraction techniques, namely Gabor, Pyramid Histogram of Oriented Gradients (PHOG), and Local Binary Patterns (LBP), are incorporated with SPELM for performance evaluation.

Experimental results show that the proposed algorithm yields the best performance on widely used face datasets such as Yale, CMU, and ORL compared with state-of-the-art ELM-based classifiers.
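For background, the classical ELM training scheme that the abstract contrasts against can be sketched as follows: hidden-layer weights and biases are drawn at random, and only the output weights are solved in closed form via the Moore–Penrose pseudoinverse. This is a minimal illustrative sketch of standard ELM, not the paper's SPELM; the toy data, function names, and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=50):
    """Train a single-hidden-layer ELM (illustrative sketch).

    Input weights W and biases b are random — the source of the
    output instability the paper addresses; only the output weights
    beta are learned, in closed form.
    """
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                     # pseudoinverse solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem: label points by the sign of their first coordinate.
X = rng.standard_normal((200, 2))
labels = (X[:, 0] > 0).astype(int)
Y = np.eye(2)[labels]                                # one-hot targets
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
acc = (pred == labels).mean()
```

Because the hidden weights are random, re-running with a different seed changes `acc` — the instability that motivates SPELM's deterministic, state-preserving weight computation.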

Inclusive pages

1-7

ISBN/ISSN

2161-4407

Document Version

Postprint

Comments

The document available for download is the authors' accepted manuscript, provided in compliance with the publisher's policy on self-archiving. Permission documentation is on file.

Some differences may exist between this version and the published version; as such, researchers wishing to quote directly from this source are advised to consult the version of record.

Publisher

IEEE

Place of Publication

Killarney, Ireland
