
COMPARATIVE ANALYSIS OF MACHINE LEARNING METHODS FOR MULTI-LABEL SKIN CANCER CLASSIFICATION
ABSTRACT

Skin diseases such as acne, eczema, psoriasis, and melanoma affect millions worldwide, often
requiring timely diagnosis and treatment. However, access to professional dermatological
care remains limited, particularly in remote areas. To address this challenge, this project
develops an AI-powered automated system designed to diagnose and predict common
dermatological conditions with high accuracy. Implemented as a web-based platform, the
system enables users to capture real-time skin images via smartphone cameras or upload pre-
existing photos for analysis.

A critical component of this system is image preprocessing, which enhances the quality of
skin images before diagnosis. Preprocessing techniques such as noise reduction, contrast
enhancement, and image resizing ensure optimal conditions for accurate analysis. Once
processed, the images undergo deep learning-based classification using Convolutional Neural
Networks (CNNs) and InceptionV3, two advanced architectures known for their ability to
extract and interpret complex patterns in medical imaging.

CNNs play a crucial role in automatically identifying key dermatological features, such as
lesion borders, texture, color variations, and shape. By learning these intricate details, the
model can differentiate between various skin conditions with minimal human intervention.
InceptionV3 further enhances classification accuracy by analyzing images at multiple scales,
ensuring that even minor abnormalities are detected. The model is trained on an extensive
dataset of labeled dermatological images, allowing it to distinguish between conditions with a
high degree of precision.

Beyond simply identifying skin diseases, the system provides users with a severity
percentage, helping them assess the extent of their condition. This quantitative evaluation
enables individuals to gauge the seriousness of their symptoms and take appropriate action.
The system generates personalized recommendations based on the diagnosis, suggesting
over-the-counter treatments, skincare routines, and, in severe cases, referrals to professional
dermatologists. By offering tailored guidance, the platform empowers users to manage their
skin health more effectively.
A unique feature of the system is its ability to track a user’s skin condition over time. By
storing previous scans, the platform allows individuals to monitor improvements or
worsening symptoms through a historical record. This functionality is particularly useful for
chronic conditions such as eczema and psoriasis, where treatment effectiveness can be
assessed over time. Additionally, the system integrates a notification feature that reminds
users to conduct follow-up skin assessments. These timely reminders ensure proactive
monitoring, especially for those at risk of developing severe dermatological conditions.

The proposed AI-driven platform significantly enhances accessibility to dermatological care, particularly benefiting individuals in underserved communities where medical specialists are
scarce. By leveraging deep learning techniques, the system provides an efficient, cost-
effective, and user-friendly solution for early detection, accurate diagnosis, and continuous
skin health management. The automation of the diagnostic process reduces dependency on
specialists while maintaining a high level of accuracy, making dermatological assessments
available to a broader population.

Overall, this AI-powered system represents a transformative advancement in dermatology, bridging the gap between technology and healthcare. By integrating CNN and InceptionV3
models, real-time skin analysis, and intelligent recommendations, the platform ensures that
individuals receive timely and reliable insights into their skin health. With its ability to
provide early intervention and continuous monitoring, this solution holds immense potential
for improving dermatological care and enhancing the overall well-being of users worldwide.
CHAPTER 1

INTRODUCTION

Skin diseases such as acne, eczema, psoriasis, and melanoma affect millions of individuals
worldwide, impacting their overall well-being and quality of life. While some skin conditions
are mild and manageable, others, such as melanoma, can be life-threatening if not detected
early. Proper diagnosis and timely treatment play a crucial role in preventing complications
and ensuring effective management. However, access to dermatological care remains a
significant challenge, particularly in remote and underserved areas where specialized medical
professionals are scarce. To address this gap, artificial intelligence (AI) has emerged as a
powerful tool in healthcare, enabling the automated diagnosis and real-time monitoring of
skin conditions with remarkable accuracy.

This project introduces an AI-powered automated system designed to diagnose and predict
common dermatological conditions using advanced image processing and machine learning
techniques. Implemented as a web-based application, the system provides users with a
convenient and accessible way to assess their skin health. By allowing individuals to capture
or upload real-time skin images, the platform ensures that dermatological assessments can be
conducted from the comfort of one’s home without requiring immediate medical intervention.
The system enhances accessibility to dermatological care, particularly for those living in
remote locations or facing financial constraints that limit their access to specialists.

To achieve precise classification of skin conditions, the system employs a hybrid deep-
learning approach that combines Convolutional Neural Networks (CNNs) with additional
machine-learning classifiers. CNNs play a fundamental role in automatically extracting
relevant features from images, such as lesion borders, texture, and color variations, which are
critical in distinguishing between different skin diseases. The integration of machine learning
classifiers further refines the diagnostic process by improving prediction accuracy and
ensuring the system adapts to various skin conditions. This hybrid approach not only
enhances reliability but also reduces the risk of misdiagnosis, thereby improving user trust in
the platform.

A key aspect of the system’s functionality is image preprocessing, which optimizes skin
images for analysis. Techniques such as noise reduction, contrast enhancement, and resizing
are applied to improve image clarity and facilitate accurate feature extraction. High-quality
image processing is essential in medical applications, as even minor distortions can impact
diagnosis. By ensuring that images are well-processed before classification, the system
enhances its overall efficiency and reliability.

Once the analysis is complete, the system provides users with a severity percentage, helping
them understand the extent of their skin condition. This feature is particularly beneficial for
conditions such as psoriasis and eczema, where the severity of symptoms varies among
individuals. Based on the diagnostic results, the platform generates personalized treatment
recommendations, including over-the-counter solutions, skincare routines, and, for severe
cases, referrals to dermatologists. This guidance empowers users to make informed decisions
about their skin health and take proactive steps toward treatment.

To ensure continuous monitoring, the system integrates an intelligent notification feature that
reminds users to conduct follow-up assessments. This proactive approach is especially
valuable for individuals managing chronic skin conditions or those at risk of developing
serious complications. Regular assessments enable users to track changes in their skin
condition over time, allowing for timely interventions when necessary.

By integrating AI-driven diagnosis, real-time monitoring, and automated alerts, this system
represents a significant advancement in dermatological care. It not only enhances early
detection but also improves healthcare accessibility for individuals with limited access to
dermatologists. The implementation of AI in dermatology bridges healthcare gaps and
ensures that individuals receive timely and effective skin health assessments. This innovative
solution holds immense potential in transforming dermatological care, making it more
efficient, accurate, and widely available to individuals worldwide.
CHAPTER 2

LITERATURE REVIEW

2.1 TITLE: A deep learning based approach for automated skin disease detection using Fast
R-CNN

AUTHOR: Prakriti Dwivedi; Akbar Ali Khan

YEAR: 2021

DESCRIPTION: Skin conditions vary widely in their symptoms and criticality: they can be persistent or temporary, pain-free or painful, mild or severe, and at times situational or genetic in nature. This varying complexity and uncertainty not only makes it difficult for a patient to recognize a condition but also makes it a daunting task for doctors to manage. Consequently, if ignored or untreated, it can even be fatal. Therefore, a rapid detection system for skin disorders is essential to reduce their criticality. This paper is an attempt to develop a system using deep learning technology to detect skin diseases accurately. Using the Fast R-CNN deep learning architecture, an appropriate annotation technique, and proper parameter selection, the model detects the specified skin disease from the given classes with an overall accuracy of 90% and a loss of 0.3, which demonstrates its effectiveness.
2.2 TITLE: A benchmark for automatic visual classification of clinical skin disease images.

AUTHOR: Sun, Xiaoxiao.

YEAR: 2016

DESCRIPTION: Skin disease is one of the most common human illnesses. It pervades all cultures, occurs at all ages, and affects between 30% and 70% of individuals, with even higher rates in at-risk populations. However, diagnosing skin diseases by visual observation is very difficult for both doctors and patients, and an intelligent system can be helpful. In this paper, we introduce a benchmark dataset for clinical skin diseases to address this problem. To the best of our knowledge, this dataset is currently the largest for visual recognition of skin diseases. It contains 6,584 images from 198 classes, varying in scale, color, shape, and structure. We hope that this benchmark dataset will encourage further research on visual skin disease classification. Moreover, since the recent successes of many computer-vision tasks are due to the adoption of Convolutional Neural Networks (CNNs), we also perform extensive analyses on this dataset using state-of-the-art methods, including CNNs.
2.3 TITLE: Improving the diagnostic accuracy of dysplastic and melanoma lesions using the
decision template combination method.

AUTHOR: Faal, Maryam

YEAR: 2013

DESCRIPTION: Melanoma is the most dangerous type of skin cancer, and early detection
of suspicious lesions can decrease the mortality rate of this cancer. In this article, we present
a multi‐classifier system for improving the diagnostic accuracy of melanoma and dysplastic
lesions based on the decision template combination rule. First, the lesion is differentiated from
the surrounding healthy skin in an image. Next, shape, colour and texture features are
extracted from the lesion image. Different subsets of these features are fed to three different
classifiers: k‐nearest neighbour (k‐NN), support vector machine (SVM) and linear
discriminant analysis (LDA). The decision template method is used to combine the outputs of
these classifiers.
2.4 TITLE: Diagnosis of skin diseases using Convolutional Neural Networks

AUTHOR: Jainesh Rathod; Vishal Waghmode

YEAR: 2018

DESCRIPTION: Dermatology is one of the most unpredictable and difficult terrains to diagnose due to its complexity. In the field of dermatology, extensive tests often have to be carried out to decide upon the skin condition a patient may be facing. The time taken may vary from practitioner to practitioner and also depends on the practitioner's experience. So, there is a need for a system that can diagnose skin diseases without these constraints. We propose an automated image-based system for recognition of skin diseases using machine learning classification. This system utilizes computational techniques to analyze, process, and classify the image data based on various features of the images. Skin images are filtered to remove unwanted noise and processed to enhance the image. Features are extracted using a Convolutional Neural Network (CNN), the image is classified with a softmax classifier, and a diagnosis report is obtained as output. This system gives higher accuracy and generates results faster than the traditional method, making it an efficient and dependable application for dermatological disease detection. Furthermore, it can also be used as a reliable real-time teaching tool for medical students in the dermatology stream.
2.5 TITLE: Comparison of Breast Cancer and Skin Cancer Diagnoses Using Deep Learning
Method

AUTHOR: Burcu Bılgıç

YEAR: 2021

DESCRIPTION: Artificial intelligence applications are of great importance in addressing cancer, one of the biggest health problems of our age. In this study, deep learning methods were investigated for the early diagnosis of breast cancer and skin cancer, which are among the most common types of cancer worldwide. Breast cancer and skin cancer data were classified as benign and malignant by deep learning methods, with the classification performed using the Convolutional Neural Network (CNN) algorithm. In this classification, the data are divided into benign cancer sets and malignant cancer sets. Finally, the data provided by the logistic regression method were analyzed, success charts were created, and both types were compared. As a result, accuracy and loss graphs of both cancer types were produced. The aim of the study is to compare breast cancer and skin cancer with the deep learning method, since some breast cancer and skin cancer diagnoses are confused. This study lays the groundwork for differentiating the diagnosis of these two types of cancer in further studies.
2.6 TITLE: Skin Disease Detection and Classification

AUTHOR: Mritunjay Kumar Ojha; Dilrose Reji Karakattil

YEAR: 2022

DESCRIPTION: Skin, being the outermost integument of the human body, plays a significant role in protecting the body from sunlight, cold, and other harmful germs and substances. Human skin type varies from person to person, and this variety provides a diverse habitat for microorganisms and bacteria. Sometimes abnormalities of the skin are also noticeable signs of underlying disease. Hence it is highly necessary to detect and properly diagnose skin disease in the early stages to avoid spread of the disease. Our project involves image processing and classification of skin diseases using image data, implemented with the help of two classifiers, Convolutional Neural Networks and MobileNet, in the domain of deep learning. Therefore, this project is a computer-aided early-warning tool that informs users about the skin diseases they are suffering from.
2.7 TITLE: Classification and Detection of Acne on the Skin using Deep Learning
Algorithms

AUTHOR: Nikhil Pancholi; Silky Goel

YEAR: 2021

DESCRIPTION: Teenagers and young adults are prone to acne. It's a skin condition that
arises when oil and dead skin cells clog hair follicles. The face, forehead, upper back, and
chest are the most common places where it occurs. Excess bacteria, inflammation, and
blocked hair follicles are all reasons that contribute to acne. It is the seventh epidemic and is
thought to afflict 9.4% of the world's population. For the identification of acne on diverse
skins, we used various pre-trained CNN models such as Inception V3, VGG16, and VGG19.
We also used machine learning classifiers for a thorough examination of acne detection. The
Inception v3 with the logistic regression classifier provides the best accuracy of 99.5%.
2.8 TITLE: Skin Cancer and Oral Cancer Detection using Deep Learning Technique

AUTHOR: Geetika Sharma; Raman Chadha

YEAR: 2022

DESCRIPTION: Skin cancer and oral cancer are extremely dangerous and deadly forms of cancer. Regular examination for both skin and oral cancer can help prevent the disease and treat it at an early stage. Moreover, cases of both skin and oral cancer are increasing day by day, leading to a rise in the death rate as well. Another major reason why symptoms should be diagnosed at an early stage is the expense of medical treatment. Many researchers have therefore worked on cancer detection, but the literature so far has focused on detecting a single type of cancer. The main focus of this paper is to propose a methodology that detects two types of cancer, skin cancer and oral cancer, using a deep learning technique. A literature review is conducted on various research papers on skin and oral cancer detection, and the proposed methodology is presented in the form of a flowchart for better understanding.
2.9 TITLE: Ovarian Cancer Detection and Classification Using Machine Learning

AUTHOR: Ms Aditya; I Amrita

YEAR: 2021

DESCRIPTION: Ovarian cancer is one of the leading causes of death among women. It ranks fifth in cancer deaths among women and affects women of all demographics and ethnicities. It is important to accurately distinguish it from benign tumours, thereby avoiding false positives for cancer and catering to patients' appropriate needs. In this direction, a methodology is designed to classify between benign ovarian tumour and ovarian cancer with different machine learning classifiers and different imputation methods, with and without feature selection, and with deep learning, using a Kaggle dataset. It was evident that feature selection greatly increased the accuracy of the machine learning models. Out of all the classifiers, Random Forest with median imputation gave the best result. The accuracy of the DL model was on par with the Random Forest classifier and did not show any significant improvement over the traditional machine learning model. Ovarian cancer (OC) ranks fifth in cancer deaths, accounting for more deaths than any other cancer of the female reproductive system. A woman's risk of getting OC is about 1 in 78, and the likelihood of dying from it is approximately 1 in 100. OC typically affects older women; about half of the women diagnosed with OC are over sixty years of age. In India, ovarian cancer deaths increased by 89% from 2007 to 2017, and the total annual number of deaths from ovarian cancers across all ages worldwide was 175,982 in 2017 [3]. In India, the five-year survival rate was 13.9% in 2009. The total population with any form of cancer, measured as an age-standardised percentage, has increased steadily across the world and in India, reaching 1.31% worldwide and 0.31% in India in 2017. In 2017, 0.02% of the world population with cancer had ovarian cancer, compared with 0.01% in India.
2.10 TITLE: Deployment of Breast Cancer Hybrid Net using Deep Learning

AUTHOR: Nipun B Nair; Tripty Singh

YEAR: 2022

DESCRIPTION: Breast cancer (BC) occurs when healthy breast cells grow out of control and become tumors. According to the American Cancer Society, breast cancer occurs in one out of eight women and in one out of a thousand men. Early breast cancer detection is thus important to give the patient the maximum chance of survival. Breast biopsy is used to analyze breast cells and diagnose whether the sample contains breast cancer; it is an invasive method that is manually analyzed by a pathologist under a microscope, which is time-consuming and carries a chance of human error. Oncologists can diagnose breast cancer in a faster, more accurate, and less painful way by using machine learning and image classification algorithms. One of the best machine learning techniques is the Support Vector Machine; combined with the computational power of a Convolutional Neural Network, it becomes an immensely powerful classification algorithm. The Support Vector Machine and Convolutional Neural Network model gives better accuracy than other image classifiers such as the VGG16, ResNet-50, and InceptionV3 models. This research designs and deploys a Breast Cancer Hybrid Net using deep learning; a dataset of 3,538 images was used. During the experiments, the SVM-CNN, VGG16, ResNet-50, and InceptionV3 models reported accuracies of 93.35%, 89.54%, 92.45%, and 88.6% respectively.
2.11 TITLE: Using ImageNet Xception Model to Identify Skin Cancer and Non-Skin
Cancer Image Classification

AUTHOR: Md. Ismiel Hossen Abir; Awolad Hossain; Taspia Salam

YEAR: 2024

DESCRIPTION:

Skin cancer is a significant public health concern, requiring efficient and accurate diagnostic tools. This research explores the application of deep learning, specifically the Xception architecture, for binary classification using images of skin cancer and non-skin-cancer conditions. Two datasets, comprising 250 training images and 40 testing images, were carefully selected and augmented to enhance model generalization. The Xception model, pre-trained on ImageNet, demonstrated superior performance with 94 percent accuracy compared to an initial convolutional neural network (CNN) model. A comprehensive analysis, including a confusion matrix and sample predictions, provides insights into the model's predictive behavior. We also use SHAP values from explainable AI with the Xception model.
2.12 TITLE: Performance of Multi Layer Perceptron and Deep Neural Networks in Skin
Cancer Classification

AUTHOR: Yessi Jusman; Indah Monisa Firdiantika; Dhimas Arief Dharmawan; Kunnu
Purwanto

YEAR: 2021

DESCRIPTION:

Skin cancer refers to a condition in which there is abnormal growth of skin cells, mostly occurring on skin exposed to the sun. There are several types of skin cancer, the most common being basal cell carcinoma, squamous cell carcinoma, and melanoma. Without proper treatment, skin cancer, particularly in the melanoma form, can lead to death. Fortunately, early detection and classification of skin cancer are highly effective in preventing serious damage. In this paper, we train a Multi-layer Perceptron, a custom convolutional neural network, and VGG-16 for skin cancer classification on a large skin cancer dataset, HAM10000. The performance of each trained model is subsequently compared and analyzed in terms of classification accuracy and computational time. Our experiments reveal that the VGG-16 model achieves the best classification accuracy among the compared networks, while in terms of testing time the VGG-16 and custom CNN models are much faster than the Multi-layer Perceptron. The results of our study are beneficial in providing a systematic comparison and analysis of several neural networks in skin cancer classification.
2.13 TITLE: Detection and Classification of Skin Cancer Using YOLOv8n

AUTHOR: Munawar A Riyadi; Adela Ayuningtias; R Rizal Isnanto

YEAR: 2024

DESCRIPTION:

Skin cancer is a disease caused by the growth of abnormal cells in skin tissues. The World
Health Organization (WHO) has recorded an 88% increase in deaths due to skin cancer
caused by exposure to ultraviolet rays. Currently, in the medical field, the diagnosis of skin
cancer involves a biopsy process, which requires considerable time and cost. Therefore, this
study aims to develop a system for detecting and classifying skin cancer based on the shape
of skin lesions using the You Only Look Once version 8 nano (YOLOv8n), which can detect
lesions rapidly. The dataset used is ISIC 2019, comprising 4,289 images of cancerous skin
lesions divided into 9 classes: Basal Cell Carcinoma, Squamous Cell Carcinoma, Melanoma,
Actinic Keratosis, Dermatofibroma, Nevus, Seborrheic Keratosis, Pigmented Benign
Keratosis, and Vascular Lesion. Experimental results show that the designed system performs
well in detecting and classifying the lesions, achieving an overall accuracy of 93.5%, with a
Precision of 93.5%, Recall of 93.7%, and an F1-Score of 93.5%.
2.14 TITLE: Cascaded Approach for Image Segmentation and Classification for Skin
Cancer Detection

AUTHOR: Abhipsa Pattanaik; Leena Das; Shobhan Banerjee

YEAR: 2024

DESCRIPTION:

With advancements in medical sciences, the detection and classification of skin cancer is a field where extensive efforts are being put in by researchers to improve real-time solutions. Skin cancer is a state in which the growth of skin cells goes out of control. Since cancer evolves over time, the image must be segmented properly before classification in order to classify the type of skin cancer, which can lead to more reliable results in detecting the type of cancer that has occurred. In this paper, we propose a cascaded approach to detect the type of skin cancer in which, instead of feeding the original image directly to the classifier, we first input it to U-Net for segmentation and derive granular details from it. The segmented image is then fed to CNNs, through which the classification of the cancer is performed. We have implemented and compared the performances of MobileNet, DenseNet121, and ResNet50 in the classification of skin cancer.
2.15 TITLE: Skin Cancer Classification and Detection Using VGG-19 and DenseNet

AUTHOR: Ashwinee Barbadekar; Varad Ashtekar; Atharva Chaudhari

YEAR: 2024

DESCRIPTION:
Skin cancer is an alarming problem; over 150 thousand cases of skin cancer have been detected around the world. It is necessary that skin cancer is detected and diagnosed at its initial stage. The system proposed in this paper performs lesion segmentation and classification of cancer by taking dermatoscopic images as input. The skin lesion segmentation system uses the BCDU-Net model; its Dice coefficient and IoU are 90.66% and 83.09% respectively. The performances of VGG-19 and the DenseNet model are compared for skin cancer classification. VGG-19 provides an accuracy of 97.29%, which is considerably better than some of the previous models.
2.16 TITLE: Skin Cancer Classification Using Transfer Learning-Based Pre-Trained VGG
16 Model

AUTHOR: Archana Saini; Kalpna Guleria; Shagun Sharma

YEAR: 2023

DESCRIPTION:

Skin cancer is one of the most dangerous types of cancer, affecting millions of people every year. The identification and treatment of skin cancer is challenging and expensive due to the requirement of advanced technologies. According to recent studies, dermatologists can categorise medical images with the aid of machine learning and deep learning-based methods. Skin cancer is currently one of the most prevalent and deadliest forms of cancer if it reaches the malignant stage. Melanoma is a skin cancer that occurs when the skin undergoes various abnormal changes. To reduce the effort, time, and risk involved with skin cancer, an accurate automated system for skin lesion classification is required. Due to the complexity of the skin's texture and the visual proximity of the diseases, it becomes very difficult to accurately identify this type of cancer. In this work, a pre-trained VGG16 model has been used for the prediction of skin cancer. The results of the proposed model are reported in terms of AUC and loss at the validation and training phases, obtained by fine-tuning the pre-trained VGG16 model with epoch values of 2, 10, and 20. The analysis identified the best AUC at epoch 20, with an ROC value of 0.841 and a loss value of 0.034. In future, this work can be expanded with further transfer-learning models to improve accuracy.
2.17 TITLE: Skin Cancer Classification using Deep Learning Algorithms

AUTHOR: Kr Senthil Murugan; A S V Jayamaharaja; R Vishal; R Deepalakshmi; P Balamuruga

YEAR: 2023

DESCRIPTION:
Skin cancer detection, a critical facet of dermatology, has witnessed significant advancements
through deep learning methodologies. Leveraging the ISIC dataset, this project introduces a
potent approach to multi-class skin lesion classification, emphasizing skin cancer
identification, which achieves an impressive accuracy rate of 90%. The methodology
meticulously selects, preprocesses, and augments the dataset to ensure diversity and data
quality. Centered on Convolutional Neural Networks (CNNs), with considerations for
transfer learning and strategic dropout integration, the model exhibits robustness. Rigorous
evaluations, encompassing diverse performance metrics, provide comprehensive insights.
Additionally, potential deployment avenues, such as API development and cloud-based
solutions, are explored. This research underscores the substantial accuracy gains attainable
with the ISIC dataset and the transformative potential of deep learning in advancing skin
cancer diagnosis, offering hope for enhanced patient care and outcomes.
2.18 TITLE: Integrating Dielectric Modelling with Machine Learning for Skin Cancer
Classification

AUTHOR: Md Abdul Awal; Syed Akbar Raza Naqvi; Amin Abbosh

YEAR: 2024

DESCRIPTION:

This study explores using the Cole-Cole dielectric model in a machine-learning algorithm to
distinguish between healthy and cancerous skin at microwave frequencies. Dielectric
properties were measured using an open-ended coaxial probe and optimized as second-order
Cole-Cole parameters via an adaptive weighted vector mean optimization algorithm. These
parameters were then fed into an XGB classifier, achieving an F1 Score of 95.24%. The
results demonstrate that optimized Cole-Cole model parameters can effectively classify
normal and cancerous skin, highlighting the potential for non-invasive skin cancer
diagnostics. This study underscores the promising combination of dielectric modeling and
machine learning in enhancing skin cancer detection.
2.19 TITLE: SCCNet: An Improved Multi-Class Skin Cancer Classification Network using
Deep Learning

AUTHOR: Tanvir Ahmed; Farzana Sharmin Mou; Amran Hossain

YEAR: 2024

DESCRIPTION:

Skin cancer is the most prevalent type of cancer worldwide, and detecting it early is crucial to a successful course of treatment. In recent years, machine learning methods have demonstrated great potential for making skin cancer detection simpler. Using a fine-tuned Skin Cancer Classification Network (SCCNet) model based on deep learning, we propose a unique method for classifying skin cancer into multiple categories, comprising the Xception deep learning model and four additional types of fine-tuned layers. The proposed model was trained on the publicly available skin cancer dataset ISIC-2018. The dataset consists of seven classes and 21,000 images after data augmentation, with 3,000 images in each class. The proposed model achieved an accuracy, precision, recall, and F1-score of 95.20%, 95.14%, 95.00%, and 95.14% respectively for seven-class classification. The proposed SCCNet model, which achieved an impressive accuracy of 95.20% in the classification of multiple skin cancer classes, outperforms several state-of-the-art approaches, demonstrating its potential to enhance dermatological diagnostics and guide effective therapeutic interventions for patients with skin cancer.
2.20 TITLE: Vision Transformers (ViT) for Enhanced Skin Cancer Classification

AUTHOR: Mohamed Ghassen Dahmani; Mounira Tarhouni; Salah Zidi

YEAR: 2024

DESCRIPTION:

The deployment of artificial intelligence for the analysis of dermoscopic images has
catalyzed significant advancements in the early detection and subsequent treatment of skin
cancer, which continues to escalate in prevalence globally, presenting a formidable public
health challenge. Notable progress has been achieved in the realm of skin cancer detection
through the adoption of convolutional neural networks (CNNs), which constitute the core
architecture for deep learning in this field. However, while this approach excels at extracting
features from minor elements within the images, it falls short in accurately identifying and
localizing critical regions. As a result, our research has led us to utilize Vision Transformer
(VIT) models to enhance skin cancer detection and classification. By applying these VIT
models to various datasets and comparing the results with those obtained from CNNs and
other models, we have achieved remarkable outcomes. Specifically, we attained an accuracy
exceeding 96% with the VIT-B16 model on the benign versus malignant skin cancer dataset
and 94% on the HAM10000 dataset. These findings underscore the potential of Vision
Transformer models in improving diagnostic accuracy and advancing the field of
dermatological AI.
CHAPTER 3

EXISTING SYSTEM

The diagnosis and management of skin diseases have traditionally relied on clinical
evaluations by dermatologists, visual inspections, and laboratory tests. These conventional
methods, while effective, are often limited by accessibility, time constraints, and human
subjectivity. Patients must schedule appointments with dermatologists, which can involve
long waiting times, particularly in remote or underserved areas where specialized healthcare
professionals are scarce. The reliance on manual examination also introduces variability in
diagnosis, as different doctors may interpret symptoms differently, leading to potential
misdiagnosis or delays in treatment.

In the existing system, dermatologists primarily diagnose skin conditions through physical
examinations and dermatoscopy, a technique that involves the use of a magnifying instrument
to analyze skin lesions. In some cases, a biopsy may be required, where a small sample of
skin is extracted and examined under a microscope for definitive diagnosis. While these
methods are accurate, they are invasive, time-consuming, and often require specialized
laboratory facilities, which are not always available in rural healthcare centers.

For less severe conditions such as acne, eczema, or mild psoriasis, general practitioners and
pharmacists often provide initial treatment recommendations. Patients may rely on over-the-
counter (OTC) medications, topical ointments, or lifestyle modifications based on basic
visual assessments. However, these approaches lack precision, as many skin diseases share
similar symptoms. For example, fungal infections can resemble eczema, and non-cancerous
moles may mimic melanoma. Without proper diagnostic tools, patients may receive
ineffective treatments, leading to prolonged discomfort or worsening of the condition.

Moreover, self-diagnosis through online resources and mobile applications has become
increasingly common. Patients frequently use internet searches, symptom checker websites,
or dermatology forums to identify potential conditions. While online information can provide
basic awareness, it is often inaccurate, generalized, and lacks the expertise of a medical
professional. Many mobile applications claim to offer skin disease diagnosis based on user-
uploaded images, but most of these tools rely on limited datasets and simplistic algorithms,
making them unreliable for medical decision-making. Additionally, privacy concerns arise
when personal medical images are uploaded to unsecured platforms.

Some existing AI-powered dermatology systems have been developed to assist in skin
disease detection, but they often face limitations. Many of these systems rely on basic
machine learning models that require extensive labeled datasets for training. Due to the
diverse range of skin conditions and variations across different skin tones, these models may
struggle with accuracy, particularly for underrepresented populations. Additionally, AI-based
mobile applications often provide only binary classifications, such as "normal" or
"abnormal," without detailed insights into disease severity or personalized treatment
recommendations. This lack of comprehensive analysis makes them less effective for users
seeking precise medical guidance.

Another challenge in the existing system is the lack of continuous skin health monitoring.
Traditional dermatological evaluations are typically one-time consultations, with no
structured follow-up unless the patient proactively schedules another appointment. This can
be problematic for chronic conditions like psoriasis or eczema, where symptoms fluctuate
over time. Patients may not recognize gradual changes in their skin health, leading to delayed
interventions.

In conclusion, while the existing system for dermatological diagnosis provides effective
results in clinical settings, it faces limitations in accessibility, accuracy, and continuous
monitoring. AI-powered solutions have the potential to bridge these gaps, but current
implementations still lack reliability, personalized recommendations, and comprehensive
monitoring features. An improved AI-driven system must address these shortcomings to
provide an accessible, accurate, and proactive approach to dermatological care.
CHAPTER 4

PROPOSED SYSTEM

The increasing prevalence of skin diseases, such as acne, eczema, psoriasis, and melanoma, highlights the need for accessible, accurate, and efficient dermatological care. However, access to specialized dermatologists remains a challenge, particularly in remote and underserved areas where healthcare resources are scarce. To address this gap, this project proposes an AI-powered automated system designed to diagnose and predict common dermatological conditions with high precision. By integrating deep learning algorithms, real-time image processing, and personalized health recommendations, the system offers a comprehensive solution for early detection, continuous monitoring, and proactive skin health management.

The proposed system is implemented as a web-based platform, allowing users to either capture real-time skin images via smartphone cameras or upload pre-existing photos for analysis. Unlike conventional diagnostic approaches that rely on in-person dermatological consultations, this system empowers users to conduct preliminary skin assessments anytime and anywhere.

A fundamental aspect of the system is image preprocessing, which enhances the quality of input images before classification. Preprocessing techniques such as noise reduction, contrast enhancement, and image resizing optimize images for accurate feature extraction. This ensures that the system can analyze images taken under different lighting conditions, resolutions, and angles while maintaining diagnostic accuracy.

The core diagnostic process is powered by a hybrid deep learning model that combines Convolutional Neural Networks (CNNs) and InceptionV3. CNNs automatically detect dermatological features such as lesion borders, texture, color variations, and shape, enabling precise classification of skin conditions. InceptionV3 further enhances classification accuracy by analyzing images at multiple scales, ensuring that even minor abnormalities are detected. The model is trained on an extensive dataset of labeled dermatological images, allowing it to recognize a wide range of skin conditions across diverse skin tones and demographics, as illustrated by the training sketch at the end of this chapter.

Beyond simply classifying skin diseases, the system also provides a severity assessment by assigning a percentage score to indicate the extent of the condition. This feature helps users understand the seriousness of their skin problem and take necessary action accordingly. For example, a mild acne case may require basic skincare recommendations, whereas a high-severity melanoma detection would prompt an immediate referral to a dermatologist.

A unique feature of the proposed system is its ability to track a user’s skin condition over time. All scanned images and diagnostic results are securely stored, allowing users to monitor improvements or worsening symptoms through a historical record. This feature is particularly beneficial for chronic conditions like eczema and psoriasis, where long-term monitoring is essential for effective treatment management.

Additionally, the system integrates an automated notification feature that reminds users to conduct follow-up skin assessments at recommended intervals. This proactive monitoring ensures that users stay informed about their skin health, helping them detect early signs of disease progression or treatment effectiveness.

The proposed AI-driven dermatological platform significantly enhances the accessibility of dermatological care, particularly benefiting individuals in underserved communities where medical specialists are scarce. By leveraging deep learning techniques, the system provides a cost-effective, user-friendly, and highly efficient alternative to traditional dermatological assessments.

This system not only reduces the dependency on dermatologists for initial diagnosis but also ensures early intervention and continuous monitoring, leading to better health outcomes. With ongoing advancements in AI and medical imaging, future enhancements could include real-time telemedicine consultations, integration with wearable health devices, and expanded diagnostic capabilities for additional skin conditions.

The proposed system represents a transformative advancement in dermatological diagnosis, bridging the gap between technology and healthcare. By integrating CNN and InceptionV3 models, real-time image analysis, severity assessment, and intelligent recommendations, this platform provides timely, reliable, and accessible dermatological insights to users worldwide. Its ability to offer early detection, continuous monitoring, and proactive treatment guidance makes it a valuable innovation in modern dermatological care.
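
To make the hybrid pipeline more concrete, the sketch below illustrates one way the transfer-learning setup described above could be assembled and trained in Keras. It is a minimal sketch under stated assumptions: the dataset directory, four-class label set, batch size, and epoch count are illustrative placeholders and not the project's actual configuration.

```python
# Minimal transfer-learning sketch (dataset path, class count, and hyper-parameters are assumptions).
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# InceptionV3 backbone pre-trained on ImageNet; the classifier head is replaced.
base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(299, 299, 3))
base.trainable = False  # freeze ImageNet features for the initial training phase

model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(4, activation="softmax"),  # e.g. acne, eczema, psoriasis, melanoma (illustrative)
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Images are read from a hypothetical "data/skin_images/<class>/" folder layout.
gen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)
train_data = gen.flow_from_directory("data/skin_images", target_size=(299, 299),
                                     batch_size=32, subset="training")
val_data = gen.flow_from_directory("data/skin_images", target_size=(299, 299),
                                   batch_size=32, subset="validation")
model.fit(train_data, validation_data=val_data, epochs=10)
```

Freezing the pre-trained backbone first and fine-tuning later is a common transfer-learning choice; the report's exact training schedule may differ.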
CHAPTER 5

SYSTEM REQUIREMENT

HARDWARE REQUIREMENT:-

PROCESSOR : INTEL® CORE™ I9-14900K 3.20 GHZ

RAM : 16 GB

HARD DISK : 1 TB

SOFTWARE REQUIREMENT:-

Frontend : HTML, CSS

Backend : PYTHON

Framework : FLASK

HARDWARE DESCRIPTION:-

1. PROCESSOR : INTEL® CORE™ I9-14900K 3.20 GHZ

The Intel® Core™ i9-14900K processor with a base clock speed of 3.20 GHz is a
powerhouse in terms of computational capabilities. Designed for intensive workloads, it
offers exceptional multi-core performance, making it ideal for tasks like gaming, video
editing, 3D rendering, and other processor-intensive applications. Its high clock speed and
modern architecture ensure efficient data processing, quick task execution, and minimal
latency.

2. RAM

With 16 GB of RAM, the system provides ample memory for seamless multitasking. This
capacity supports the smooth operation of modern software, gaming, and professional
applications like video editing tools or machine learning frameworks. It ensures that
switching between applications remains fluid, without system slowdowns.
3. HARD DISK

The 1 TB hard disk offers a significant amount of storage space to accommodate the
operating system, essential software, media files, and other data. While traditional hard drives
provide large storage capacities, combining this with an SSD can greatly enhance system
performance, particularly in boot times and file transfers.

SOFTWARE DESCRIPTION:-

FRONTEND : HTML,CSS

For the Front End, HTML (HyperText Markup Language) and CSS (Cascading Style Sheets)
are used. HTML serves as the backbone for structuring the content on web pages, such as
text, images, and forms. CSS complements HTML by defining the design and layout of these
elements, ensuring the application has a visually appealing and responsive user interface.
Together, HTML and CSS create the foundation for engaging and user-friendly front-end
development.
BACKEND : PYTHON

The Back End of the application is powered by Python, a versatile and beginner-friendly
programming language known for its readability and extensive libraries. Python efficiently
handles server-side operations, including data processing, database interactions, and
executing business logic, making it a reliable choice for backend development.

FRAMEWORK : FLASK

To streamline the development process, the Flask framework is used. Flask is a lightweight
and flexible Python web framework that allows developers to build web applications quickly
and efficiently. Its minimalist nature makes it easy to add or remove features based on project
requirements. Flask also supports integrations with databases, APIs, and other tools, enabling
seamless full-stack development.
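
As an illustration of how Flask can tie the HTML/CSS frontend to the Python backend, the following minimal sketch shows a single prediction endpoint that accepts an uploaded skin image and returns a JSON response. The route name, form field name, and placeholder pipeline call are assumptions for illustration, not the project's actual code.

```python
# Minimal Flask sketch (endpoint and field names are assumptions).
from flask import Flask, request, jsonify
from PIL import Image
import io

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect the uploaded file under the form field "image"
    file = request.files.get("image")
    if file is None:
        return jsonify({"error": "no image uploaded"}), 400
    image = Image.open(io.BytesIO(file.read())).convert("RGB")
    # Placeholder: hand the image to the preprocessing and classification pipeline
    # result = classify(preprocess(image))
    result = {"condition": "pending", "confidence": None}
    return jsonify(result)

if __name__ == "__main__":
    app.run(debug=True)
```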
CHAPTER 6

MODULE LIST

 User Authentication and Registration Module
 Image Acquisition Module
 Image Pre-Processing Module
 Feature Extraction Module
 AI-Based Classification Module
 Severity Assessment Module
 User Dashboard and Report Generation Module

MODULE DESCRIPTION

1. User Authentication and Registration Module

This module enables users to create an account and securely log in to the system. User
credentials are encrypted to ensure data privacy, and personal health records are securely
stored. Registered users can access their past diagnoses, monitor their skin health, and receive
personalized recommendations. Additionally, this module tracks user interactions, allowing
the AI to refine predictions based on historical data.

2. Image Acquisition Module

Users can capture real-time skin images using their smartphone cameras or upload pre-
existing images for analysis. The module ensures high-resolution image capture to improve
diagnostic accuracy. Basic image validation checks confirm that uploaded photos meet the
necessary quality standards, ensuring that only clear and properly lit images are processed for
analysis.
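
A simple way to implement such quality checks is sketched below; the minimum resolution and brightness thresholds are illustrative assumptions rather than the system's actual validation rules.

```python
# Basic upload validation sketch (thresholds are illustrative assumptions).
from PIL import Image, ImageStat

MIN_SIDE = 299          # matches the 299x299 input size used downstream
MIN_BRIGHTNESS = 40     # reject images that are too dark to analyze
MAX_BRIGHTNESS = 230    # reject overexposed images

def is_acceptable(image: Image.Image) -> bool:
    # Check that the image is large enough and reasonably lit
    w, h = image.size
    if min(w, h) < MIN_SIDE:
        return False
    brightness = ImageStat.Stat(image.convert("L")).mean[0]
    return MIN_BRIGHTNESS <= brightness <= MAX_BRIGHTNESS
```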

3. Image Pre-Processing Module

This module enhances image quality by applying noise reduction, contrast adjustment, and
segmentation techniques. Pre-processing standardizes image dimensions (299×299 pixels)
and color properties, ensuring consistency for AI analysis. These optimizations are
particularly important for InceptionV3, as the model relies on well-processed input data to
improve feature extraction and classification accuracy.
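
The sketch below shows one possible implementation of the preprocessing steps named above using OpenCV; the filter size and CLAHE parameters are assumptions and may differ from the project's exact settings.

```python
# One possible preprocessing pipeline: noise reduction, contrast enhancement, 299x299 resizing.
import cv2
import numpy as np

def preprocess(image_bgr: np.ndarray) -> np.ndarray:
    # Noise reduction with a small Gaussian blur
    denoised = cv2.GaussianBlur(image_bgr, (3, 3), 0)
    # Contrast enhancement via CLAHE on the lightness channel
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
    # Resize to the 299x299 input expected by InceptionV3 and scale to [0, 1]
    resized = cv2.resize(enhanced, (299, 299))
    return resized.astype(np.float32) / 255.0
```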

4. Feature Extraction Module

Feature extraction is performed using Convolutional Neural Networks (CNNs) and InceptionV3 to analyze skin texture, color variations, and lesion shape. InceptionV3’s deep
architecture enables multi-scale feature learning, identifying fine-grained patterns in
dermatological images. This step enhances the ability to detect subtle variations indicative of
different skin conditions.
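
As a concrete illustration, a pre-trained InceptionV3 backbone can serve as the feature extractor, as sketched below; the use of ImageNet weights and average pooling is an assumption about the setup rather than the report's confirmed configuration.

```python
# Using pre-trained InceptionV3 as a feature extractor (a sketch; the exact setup may differ).
import numpy as np
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input

# include_top=False drops the ImageNet classifier; pooling="avg" yields a 2048-d feature vector.
feature_extractor = InceptionV3(weights="imagenet", include_top=False,
                                pooling="avg", input_shape=(299, 299, 3))

def extract_features(batch_rgb: np.ndarray) -> np.ndarray:
    # batch_rgb: array of shape (N, 299, 299, 3) with pixel values in [0, 255]
    return feature_extractor.predict(preprocess_input(batch_rgb.astype(np.float32)))
```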

5. AI-Based Classification Module

This module employs a deep learning-based classification system using CNNs and
InceptionV3 to diagnose skin diseases. CNNs extract relevant visual features, while
InceptionV3 enhances classification by analyzing images at multiple levels of abstraction.
The model assigns a confidence score to its prediction, ensuring accurate and reliable
diagnoses. Continuous learning on diverse dermatological datasets improves the AI's
performance over time.
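
The sketch below shows one way a classification head could sit on top of the extracted feature vectors and report the softmax probability as the confidence score; the label set and layer sizes are illustrative assumptions.

```python
# Classification head sketch (class list and layer sizes are assumptions).
import numpy as np
from tensorflow.keras import layers, models

CLASSES = ["acne", "eczema", "psoriasis", "melanoma"]  # illustrative label set

classifier = models.Sequential([
    layers.Input(shape=(2048,)),           # InceptionV3 pooled feature vector
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(len(CLASSES), activation="softmax"),
])
classifier.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

def diagnose(features: np.ndarray):
    # features: array of shape (1, 2048) from the feature extractor
    probs = classifier.predict(features)[0]
    idx = int(np.argmax(probs))
    # The softmax probability acts as the confidence score mentioned above
    return CLASSES[idx], float(probs[idx])
```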

6. Severity Assessment Module

After classification, the severity assessment module determines the extent of the detected
skin condition. InceptionV3-based analysis provides a percentage-based severity score,
helping users assess the urgency of their condition. This information is crucial for
recommending appropriate treatment options or suggesting dermatologist consultations for
severe cases.
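
One simple way to turn model outputs into the percentage-based severity score described above is sketched here; the equal weighting of prediction confidence and lesion area, and the mild/moderate/severe thresholds, are assumptions rather than the system's actual formula.

```python
# Illustrative severity scoring (weights and thresholds are assumptions).
def severity_score(confidence: float, lesion_area_fraction: float) -> dict:
    # Combine prediction confidence with the fraction of skin area covered by the lesion
    score = round(100 * (0.5 * confidence + 0.5 * lesion_area_fraction), 1)
    if score < 30:
        band = "mild"
    elif score < 70:
        band = "moderate"
    else:
        band = "severe"
    return {"severity_percent": score, "band": band}
```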

7. User Dashboard and Report Generation Module

The user dashboard presents diagnosis results, severity levels, and treatment
recommendations in an easy-to-understand format. A history of past analyses allows users to
track their skin health over time. Additionally, the system generates downloadable reports,
which can be shared with healthcare professionals for further consultation. The integration of
InceptionV3 ensures high diagnostic precision, making the system a valuable tool for skin
disease monitoring and management.
CHAPTER 7

USE CASE DIAGRAM & FLOW DIAGRAM

USE CASE DIAGRAM


FLOW DIAGRAM
CHAPTER 8

CONCLUSION
The AI-powered dermatological diagnosis system represents a significant breakthrough in medical technology, offering a fast, accurate, and accessible solution for identifying and monitoring common skin conditions such as acne, eczema, psoriasis, and melanoma. By leveraging cutting-edge deep learning techniques, specifically Convolutional Neural Networks (CNNs) and InceptionV3, this system transforms traditional dermatological care into a user-friendly, digital, and cost-effective alternative. Its ability to process, analyze, and classify skin conditions in real time makes it an invaluable tool in both urban healthcare settings and remote regions where dermatologists are scarce.

One of the key strengths of this system is its ability to extract and interpret multi-scale features from skin images, enabling precise classification of various dermatological conditions. CNNs, known for their exceptional performance in image analysis and feature extraction, allow the system to detect intricate details such as skin texture variations, lesion shapes, and border irregularities. InceptionV3, with its deep architecture and multi-resolution processing capabilities, further enhances diagnostic accuracy by analyzing skin images at different scales. This ensures that even subtle patterns associated with early-stage dermatological conditions are detected, reducing the chances of false positives or misclassification.

Unlike traditional dermatological evaluations that rely solely on human expertise, this AI-driven system operates with consistent accuracy, minimizing human errors and subjectivity in diagnosis. By processing a vast dataset of labeled dermatological images, the model continuously improves its ability to recognize varied skin conditions across different skin tones, ages, and ethnicities, ensuring fair and unbiased diagnoses for all users.

Beyond mere classification, the system provides users with a severity assessment module, offering a percentage-based severity score that helps individuals understand the urgency of their condition. By categorizing conditions into mild, moderate, or severe cases, users can make informed decisions about their next steps. Those diagnosed with mild cases receive recommendations for over-the-counter treatments and skincare solutions, while individuals with moderate or severe conditions are advised to seek professional medical consultation. This personalized approach ensures that users receive accurate and actionable guidance tailored to their skin health needs.

Additionally, the system promotes continuous skin health monitoring through its follow-up notification feature. By reminding users to track changes in their skin condition, this function helps ensure timely intervention, preventing the progression of chronic dermatological issues. This is particularly beneficial for patients with recurring conditions like eczema or psoriasis, where long-term monitoring is essential. By automating follow-up assessments, users gain a proactive approach to skin health management without the need for frequent in-person visits.

One of the most significant advantages of this AI-powered system is its ability to bridge healthcare accessibility gaps, particularly for individuals in remote or underserved regions. Traditional dermatological care requires physical consultations, which may be difficult to obtain due to geographical barriers, long wait times, and high costs. With this AI-driven web-based platform, individuals can receive instant skin assessments, reducing their dependency on dermatologists while still ensuring high-quality diagnostic accuracy.

This system not only benefits patients but also supports healthcare professionals by reducing their workload. Dermatologists can focus on complex cases, while AI handles preliminary screenings, making healthcare systems more efficient. By integrating AI with telemedicine platforms, future iterations of this system could facilitate direct dermatologist consultations when severe cases are detected, further strengthening its real-world medical applications.

As AI technology continues to evolve, future improvements to this system could include enhanced predictive analytics using InceptionV3’s deeper learning models. Advancements in real-time image recognition could refine diagnostic precision, allowing the system to detect skin anomalies at even earlier stages. Furthermore, integration with wearable skin monitoring devices could enable continuous tracking of skin conditions, providing real-time alerts for users at risk of developing severe dermatological diseases.

Expanding the system’s training datasets to include a broader range of skin conditions, ethnicities, and age groups would further enhance its accuracy and fairness. Additionally, integrating Natural Language Processing (NLP) for chatbot-based consultations could make the platform even more interactive, enabling users to receive instant responses to their skin health inquiries.

In conclusion, the AI-powered dermatological diagnosis system represents a transformative step forward in healthcare innovation, offering early detection, personalized recommendations, and continuous monitoring of skin conditions. By integrating CNNs and InceptionV3, the platform ensures high diagnostic accuracy, making AI-driven dermatological assessments reliable and widely accessible. This system not only improves healthcare efficiency but also empowers individuals to take proactive control over their skin health. By addressing challenges in accessibility, accuracy, and affordability, this AI-powered solution is set to redefine dermatological care, making it more inclusive, efficient, and patient-centric.
CHAPTER 9

FUTURE ENHANCEMENT

While the current AI-powered skin cancer detection system already provides accessible and accurate dermatological care, several avenues for future enhancement could further improve its functionality, accuracy, and overall user experience. These advancements would allow the system to evolve with technological progress and user needs, ensuring it continues to serve as a reliable tool in skin health management.

Currently, the system focuses on common skin conditions such as acne, eczema, psoriasis, and melanoma. Future enhancements could add further conditions, such as fungal infections, dermatitis, vitiligo, and skin allergies. By expanding the range of conditions it can detect, the system would offer broader diagnostic capability and cater to more diverse user needs.

To increase diagnostic accuracy, the system could incorporate more advanced AI models through transfer learning. By leveraging pre-trained deep learning models (for example, models trained on large-scale image datasets such as ImageNet or on specialized dermatology datasets), the system could identify even more subtle features in skin images. Transfer learning would also allow faster model training and adaptation to new skin conditions with less data; a minimal sketch of this approach is given at the end of this chapter.

To further bridge the gap between users and dermatologists, the system could be integrated with real-time telemedicine consultations. After diagnosing a condition, the system could automatically connect users to dermatologists for live consultations, enhancing the user experience and enabling professional validation of the AI's diagnosis. This feature would improve the suitability of suggested treatments and ensure more personalized care.

Future versions of the system could also integrate with wearable health devices that continuously monitor skin health, such as sensors measuring skin hydration or UV exposure. By collecting real-time data, the system could track skin conditions over time, offer dynamic treatment recommendations, and send timely alerts based on user activity or environmental conditions.

Finally, to improve accessibility and user engagement, the system could incorporate voice recognition and conversational AI, allowing users to interact with it using voice commands. Offering multilingual support would ensure the system serves a global audience, particularly in non-English-speaking regions, making dermatological care more inclusive.
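As a concrete illustration of the transfer-learning enhancement, the sketch below shows how a pretrained torchvision backbone could be fine-tuned on an expanded dermatology dataset. It is a minimal, hypothetical example: the dataset path, number of epochs, and other hyperparameters are assumptions for illustration only and are not part of the current system.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: one sub-folder per skin condition under data/train (placeholder path).
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])
train_ds = datasets.ImageFolder('data/train', transform=train_tf)
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Start from ImageNet weights and replace only the classification head,
# so the new classes can be learned from relatively little data.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False  # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Short fine-tuning loop over the new dermatology classes (illustrative only).
for epoch in range(5):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# The resulting weights could then be loaded by the load_model() helper shown in the appendix.
torch.save(model.state_dict(), 'model.pth')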
REFERENCES

1. Kachuee, Mohammad, Shayan Fazeli, and Majid Sarrafzadeh. "ECG heartbeat classification: A deep transferable representation." 2018 IEEE International Conference on Healthcare Informatics (ICHI). IEEE, 2018.
2. S. Zhang, W. Wang, J. Ford, and F. Makedon, “Learning from incomplete ratings using
non-negative matrix factorization,” in Proc. 6th SIAM Int. Conf. Data Mining, 2006, pp.
549–553.
3. C. L. Chin, M. C. Chin, T. Y. Tsai and W. E. Chen, "Facial skin image classification
system using Convolutional Neural Networks deep learning algorithm", 2018 9th Int.
Conf. Aware. Sci. Technol. iCAST 2018, no. c, pp. 51-55, 2018
4. B. M. Sarwar, G. Karypis, J. A. Konstan, and J. Riedl, "Item-based collaborative filtering recommendation algorithms," in Proc. 10th Int. World Wide Web Conf., 2001, pp. 285–295.
5. T. George and S. Merugu, “A scalable collaborative filtering framework based on co-
clustering,” in Proc. 5th IEEE Int. Conf. Data Mining, 2005, pp. 625–628
6. C. Baur, S. Albarqouni and N. Navab, "Generating highly realistic images of skin lesions with GANs," in Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, Springer, 2018.
7. Nawal Soliman ALKolifi ALEnezi, "A Method of Skin Disease Detection Using Image Processing and Machine Learning," Procedia Computer Science, vol. 163, pp. 85-92, 2019, ISSN 1877-0509.
8. V.R. Balaji, S.T. Suganthi, R. Rajadevi, V. Krishna Kumar, B. Saravana Balaji and
Sanjeevi Pandiyan, "Skin cancer detection and segmentation using dynamic graph cut
algorithm and classification through Naive Bayes classifier", Measurement, vol. 163, pp.
107922, 2020, ISSN 0263-2241.
9. H. Q. Yu and S. Reiff-Marganiec, "Targeted Ensemble Machine Classification Approach
for Supporting IoT Enabled Skin Disease Detection", IEEE Access, vol. 9, pp. 50244-
50252, 2021.
10. L. F. Li, X. Wang, W. J. Hu, N. N. Xiong, Y. X. Du and B. S. Li, "Deep Learning in Skin
Disease Image Recognition: A Review", IEEE Access, vol. 8, pp. 208264-208280, 2020.
11. The Brainstorm Consortium et al. Analysis of shared heritability in common disorders of the brain. Science 360, eaap8757 (2018).
12. Ke Zhou, Wenguang He, Yonghui Xu, Gangqiang Xiong, and Jie Cai. 2018. Feature selection and transfer learning for Alzheimer's disease clinical diagnosis. Applied Sciences 8, 8 (2018), 1372.
13. Wang Pingping, Xie Yanming, Luo Yumin, Gao Li. Research progress of neuroimaging
in risk assessment of recurrence of ischemic cerebrovascular disease [J]. Journal of cardio
cerebrovascular disease of integrated traditional Chinese and Western medicine, 2020,18
(18): 3026-3029.
14. S. L. Oh et al., “A deep learning approach for Parkinson’s disease diagnosis from EEG
signals,” Neural Comput & Applic, vol. 32, no. 15, pp. 10927–10933, Aug. 2020, doi:
10.1007/s00521-018-3689-5.
15. Ting, D.S.; Liu, Y.; Burlina, P.; Xu, X.; Bressler, N.M.; Wong, T.Y. AI for medical
imaging goes deep. Nat. Med. 2018, 24, 539–540.
16. Han, H.S.; Choi, K.Y. Advances in nanomaterial-mediated photothermal cancer therapies: Toward clinical applications. Biomedicines 2021, 9, 305.
17. Danilo Barros Mendes, "Skin Lesions Classification Using Convolutional Neural
Networks in Clinical Images", arXiv, 2018.
18. Shakar H. Salih and Shereen Al-Raheym, "Comparison of Skin Lesion Image Between Segmentation Algorithms," Journal of Theoretical and Applied Information Technology, vol. 96, no. 18, 2018.
19. Yuexiang Li and Linlin Shen, "Skin lesion analysis towards melanoma detection using
deep learning network", Sensors, vol. 18, no. 2, pp. 556, 2018.
20. Samy Bakheet, "An SVM framework for malignant melanoma detection based on optimized HOG features," Computation, vol. 5, no. 1, p. 4, 2017.
APPENDIX

SAMPLE CODE

# Flask web application: loads a trained ResNet-18 skin-condition classifier
# and classifies images uploaded through a simple web form.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image
import numpy as np
from torchvision.datasets import ImageFolder
from flask import Flask, request, render_template, redirect, url_for
import os
from datetime import datetime

# Setup
app = Flask(__name__)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Define the transforms applied to every image before inference
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])

# Load the trained model
def load_model(model_path='model.pth', num_classes=2):
    model = models.resnet18(pretrained=False)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    model.load_state_dict(torch.load(model_path, map_location=device))
    model = model.to(device)
    model.eval()
    return model

# Predict function: classify one image and return the class probabilities
def predict_single_image(model, image_path, class_names):
    image = Image.open(image_path).convert('RGB')
    image_tensor = transform(image).unsqueeze(0)
    image_tensor = image_tensor.to(device)
    with torch.no_grad():
        outputs = model(image_tensor)
        probabilities = torch.nn.functional.softmax(outputs[0], dim=0)
        predicted_idx = torch.argmax(outputs[0]).item()
    predicted_class = class_names[predicted_idx]
    return {
        'predicted_class': predicted_class,
        'probabilities': probabilities.cpu().numpy(),
        'class_names': class_names
    }

# Load class names and model once at startup
train_dataset = ImageFolder(root=r'F:\ABDUL\ABDUL 2024\SKIN_DISES_VIT\SKIN_CANCER\datasets - Copy\train')
class_names = train_dataset.classes
num_classes = len(class_names)
model = load_model('model.pth', num_classes)

# Routes
@app.route('/', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        if 'file' not in request.files:
            return redirect(request.url)
        file = request.files['file']
        if file.filename == '':
            return redirect(request.url)

        # Save the file temporarily
        upload_folder = 'static/uploads'
        os.makedirs(upload_folder, exist_ok=True)
        file_path = os.path.join(upload_folder, file.filename)
        file.save(file_path)

        # Make prediction
        result = predict_single_image(model, file_path, class_names)

        # Prepare data for the template
        probs = {cls: f"{prob:.4f}" for cls, prob in zip(result['class_names'],
                                                         result['probabilities'])}
        confidence = float(result['probabilities']
                           [result['class_names'].index(result['predicted_class'])] * 100)

        # Get timestamp
        timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")

        # Instead of removing, keep the file for display (clean up later if needed)
        image_url = f"uploads/{file.filename}"

        return render_template('result.html',
                               predicted_class=result['predicted_class'],
                               probabilities=probs,
                               image_url=image_url,
                               confidence=confidence,
                               timestamp=timestamp,
                               explanation=(f"This image was classified as {result['predicted_class']} "
                                            "based on the model's analysis of visual features."))

    return render_template('index.html')

if __name__ == '__main__':
    app.run(debug=True)
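For quick verification outside the browser, the prediction helper can also be called directly once the module has loaded the model. The snippet below is a hypothetical usage example; 'sample.jpg' is a placeholder path for any local skin image and is not part of the project dataset.

# Hypothetical usage example (e.g., in a Python shell after importing this module).
result = predict_single_image(model, 'sample.jpg', class_names)
print('Predicted class:', result['predicted_class'])
for cls, prob in zip(result['class_names'], result['probabilities']):
    print(f'{cls}: {prob:.4f}')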

SAMPLE OUTPUT