Title: Diabetic Retinopathy Detection and Categorizing Using a Lightweight Deep Learning Approach
Authors: D. Praneeth and N. Satheesh Kumar, Department of Computer Science and Engineering, Chaitanya (Deemed to be University), Telangana, India
Year of Publication: 2024 (published in ARPN Journal of Engineering and Applied Sciences, Vol. 19, No. 13)
Methodology Used: A lightweight Convolutional Neural Network (CNN) with fewer trainable parameters handles DR detection, targeting resource-constrained environments. EfficientNetB3, which applies compound scaling jointly to network depth, width, and input resolution, handles DR severity classification. Preprocessing: Gaussian filtering and resizing of images to 224×224. Evaluation metrics: Accuracy, Precision, Recall, F1-Score, Confusion Matrix.
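A minimal sketch of the described preprocessing and classification pipeline is given below. The framework choice (TensorFlow/Keras, OpenCV), the Gaussian kernel size, and the classifier head are assumptions for illustration, not the authors' implementation.

    # Sketch of the preprocessing (Gaussian filter + resize to 224x224) and an
    # EfficientNetB3-based classifier. Framework and head layers are assumed.
    import cv2
    import numpy as np
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import EfficientNetB3

    def preprocess(path, size=224):
        """Gaussian filtering followed by resizing, as described in the methodology."""
        img = cv2.imread(path)                   # BGR fundus image
        img = cv2.GaussianBlur(img, (5, 5), 0)   # kernel size is an assumed value
        img = cv2.resize(img, (size, size))
        return img.astype(np.float32)            # Keras EfficientNet rescales internally

    def build_classifier(num_classes=5, size=224):
        """EfficientNetB3 backbone (compound-scaled depth/width/resolution) with a small head."""
        base = EfficientNetB3(include_top=False, weights="imagenet",
                              input_shape=(size, size, 3), pooling="avg")
        x = layers.Dropout(0.3)(base.output)     # assumed regularization
        out = layers.Dense(num_classes, activation="softmax")(x)
        model = models.Model(base.input, out)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model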
Dataset Used: APTOS 2019 Blindness Detection dataset (Kaggle). Fundus images labeled into five classes:
- 0 – No_DR
- 1 – Mild
- 2 – Moderate
- 3 – Severe
- 4 – Proliferate_DR
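A short sketch of loading these labels and mapping the diagnosis codes to class names follows. The file and column names (train.csv with id_code and diagnosis) follow the Kaggle release; the directory paths are illustrative assumptions.

    # Sketch of loading APTOS 2019 labels and mapping diagnosis codes to class names.
    import pandas as pd

    CLASS_NAMES = {0: "No_DR", 1: "Mild", 2: "Moderate", 3: "Severe", 4: "Proliferate_DR"}

    df = pd.read_csv("aptos2019/train.csv")                         # illustrative path
    df["label"] = df["diagnosis"].map(CLASS_NAMES)
    df["path"] = "aptos2019/train_images/" + df["id_code"] + ".png"

    print(df["label"].value_counts())  # exposes the class imbalance noted in the research gap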
Parameters Considered:
- Image size: 224×224 pixels
- Features: microaneurysms, hemorrhages, exudates, neovascularization
- Evaluation metrics: Training/Validation Accuracy, Precision, Recall, F1-Score, Confusion Matrix
- CNN with reduced weights for detection
- EfficientNetB3 with compound scaling for classification
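The listed metrics, including the class-wise precision and recall reported in the results, can be reproduced from validation predictions with scikit-learn; a brief sketch follows, where y_true and y_pred are placeholders for the validation labels and predicted grades.

    # Sketch of the evaluation metrics using scikit-learn; y_true/y_pred are placeholders.
    from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

    y_true = [0, 2, 4, 1, 0]   # ground-truth severity grades (placeholder values)
    y_pred = [0, 2, 2, 1, 0]   # predicted grades (placeholder values)

    print("Accuracy:", accuracy_score(y_true, y_pred))
    print(confusion_matrix(y_true, y_pred, labels=[0, 1, 2, 3, 4]))
    print(classification_report(
        y_true, y_pred, labels=[0, 1, 2, 3, 4],
        target_names=["No_DR", "Mild", "Moderate", "Severe", "Proliferate_DR"],
        zero_division=0))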
Result:
- Detection model (lightweight CNN): 95% validation accuracy, outperforming a standard CNN (69%), VGG (76%), EfficientNetB0 (78%), and MobileNet (85%).
- Classification model (EfficientNetB3): 84% validation accuracy, higher than ResNet50 (56%), DenseNet (73%), and InceptionV3 (74%).
- Class-wise performance:
  - No_DR: Precision 0.99, Recall 0.98
  - Mild: Precision 0.62, Recall 0.67
  - Proliferate_DR: Precision 0.61, Recall 0.47
Research Gap:
- Sensitivity and specificity issues limit clinical reliability
- Need for real-time, cost-effective, accessible DR screening solutions
- Imbalanced datasets hinder severe-case detection (Moderate vs. Proliferate_DR misclassification)
- Vulnerable to overfitting/underfitting, requiring better generalization for real-world deployment