M M Shaifur Rahman


Presentation: 1:15-2:30 p.m., Kennedy Union Ballroom





Deep learning (DL) is currently one of the most popular branches of machine learning, and its Deep Convolutional Neural Network (DCNN) architectures have the potential to transform medical diagnostics. DCNN predictions depend significantly on high-quality input data. However, large-scale images are challenging to process with classical deep-learning architectures due to their vast memory and computational requirements. One popular approach to handling large-scale input images is to resize the large image to a smaller dimension, which degrades the performance of the overall system. Another popular approach is to sequentially crop the high-resolution image into multiple smaller images that fit in GPU memory. In this work, we demonstrate a novel approach to training and inference on higher-resolution input images (e.g., 1024 x 1024) with DCNNs. Our proposed architectures are constructed with state-of-the-art DCNN backbone models such as ResNet-101, DenseNet-121, and EfficientNet. Finally, the models are evaluated using large-scale diabetic retinopathy datasets (e.g., Dataset for Diabetic Retinopathy, Kaggle 2019 BD). The experimental results are compared against existing deep learning methods and demonstrate significant improvements in accuracy.
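The cropping strategy described above can be sketched in a few lines of NumPy. This is a minimal illustration only, not the authors' implementation: the function name, the 1024 x 1024 input size, and the 256 x 256 patch size are assumptions chosen to match the example resolution in the abstract, and the patch size is assumed to divide the image dimensions evenly.

```python
import numpy as np

def crop_into_patches(image, patch_size):
    """Tile a (H, W, C) image into non-overlapping square patches.

    Illustrative sketch only: assumes patch_size evenly divides both
    image dimensions, as with 1024 x 1024 inputs and 256 x 256 patches.
    """
    h, w, c = image.shape
    return (
        image.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
             .transpose(0, 2, 1, 3, 4)          # group row-blocks with col-blocks
             .reshape(-1, patch_size, patch_size, c)
    )

# A 1024 x 1024 RGB image yields 16 patches of 256 x 256,
# each small enough to fit in GPU memory for a DCNN backbone.
image = np.zeros((1024, 1024, 3), dtype=np.uint8)
patches = crop_into_patches(image, 256)
print(patches.shape)  # (16, 256, 256, 3)
```

In practice each patch would be fed to the backbone (e.g., ResNet-101) and the per-patch features aggregated; how that aggregation is done is the interesting part of such architectures and is not shown here.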

Publication Date


Project Designation

Graduate Research

Primary Advisor

Tarek Taha

Primary Advisor's Department

Electrical and Computer Engineering


Stander Symposium, School of Engineering

Institutional Learning Goals


Analysis of Large-Scale Diabetic Retinopathy using Deep Convolutional Neural Network