Authors

Presenter(s)

Abdulbasit Alhinqari

Comments

3:00-4:15, Kennedy Union Ballroom

Files

Download Project (49.5 MB)

Description

As AI applications continue to grow in importance across many areas of life, non-invasive Brain-Computer Interfaces (BCIs) are expected to become a top priority. BCIs enable humans to control surrounding equipment and devices through a direct communication link from the brain. These systems often rely on the classification of Electroencephalogram (EEG) signals, which are recordings of human brain activity. Given this potential, an increasing number of researchers and scientists are focusing on this field.

Traditionally, algorithms for classifying EEG datasets have relied on manual feature extraction. However, recent advances in Convolutional Neural Networks (CNNs) and deep learning architectures have demonstrated significant success in tasks such as computer vision, natural language processing, and contextual analysis, largely due to their ability to perform automatic feature extraction. Despite this success in other domains, these methods still struggle to generalize effectively on EEG signals due to their non-stationary and random nature.

This work focuses on EEG-based BCI systems that leverage CNNs and deep learning tools. Specifically, it explores the application of self-supervised contrastive learning techniques to the classification of motor imagery (MI) actions.
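The abstract does not specify which contrastive objective is used; a common choice in self-supervised contrastive learning is the NT-Xent (normalized temperature-scaled cross-entropy) loss. The sketch below is purely illustrative, not the project's implementation: the function name, toy embeddings, and temperature value are assumptions, and real EEG pipelines would produce the two "views" by augmenting epochs (e.g., cropping or channel dropout) and embedding them with a CNN encoder.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two batches of embeddings.

    z1, z2: (N, D) arrays; row i of z1 and row i of z2 are
    embeddings of two augmented views of the same EEG epoch
    (a positive pair); all other rows act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # unit-normalize
    sim = z @ z.T / temperature                         # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    n = z1.shape[0]
    # The positive partner of row i is its other view at i +/- n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    # Cross-entropy of the positive against all candidates per row.
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()

# Toy usage: 4 "epochs" embedded in 8 dims; the second view is a
# small perturbation of the first, standing in for an augmentation.
rng = np.random.default_rng(0)
z1 = rng.standard_normal((4, 8))
z2 = z1 + 0.1 * rng.standard_normal((4, 8))
print(nt_xent_loss(z1, z2))
```

Minimizing this loss pulls the two views of each epoch together in embedding space while pushing apart views of different epochs, which lets an encoder learn MI-relevant features without labels.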

Publication Date

4-23-2025

Project Designation

Graduate Research

Primary Advisor

Vijayan K. Asari, Theus H. Aspiras

Primary Advisor's Department

Electrical and Computer Engineering

Keywords

Stander Symposium, School of Engineering

Institutional Learning Goals

Practical Wisdom; Community

Self-Supervised Contrastive Learning for BCI system
