Documentation MINI
Professor
Bachupalli, Kukatpally,
Hyderabad, Telangana, India, 500090
2023-2024
I
GOKARAJU RANGARAJU
INSTITUTE OF ENGINEERING AND TECHNOLOGY
(Autonomous)
CERTIFICATE
This is to certify that the mini project entitled “Skin Care Products Recommendation System”
is submitted by S. Kavya (21241A054D), K. Abhijitha (21241A057A), P. Mamatha
(21241A051D), and L. Arpitha (21241A050B), in fulfillment of the requirements for the award
of the degree of BACHELOR OF TECHNOLOGY in Computer Science and Engineering during
the academic year 2023-2024.
EXTERNAL EXAMINER
ACKNOWLEDGEMENT
Many people helped us directly and indirectly to complete our project successfully. We
would like to take this opportunity to thank one and all. First, we wish to express our deep
gratitude to our internal guide Dr. S. Govinda Rao, Professor, Department of CSE, for his
support in the completion of our project report. We wish to express our honest and sincere
thanks to Dr. B. Sankara Babu, HOD, Department of CSE, and to our principal Dr. J.
Praveen for providing the facilities to complete our mini project. We would like to thank
all our faculty and friends for their help and constructive criticism during the project
completion phase. Finally, we are very much indebted to our parents for their moral
support and encouragement in achieving our goals.
S.Kavya 21241A054D
K.Abhijitha 21241A057A
P.Mamatha 21241A051D
L.Arpitha 21241A050B
DECLARATION
We hereby declare that the Mini Project entitled “Skin Care Products Recommendation
System” is the work done during the period 2023-2024 and is submitted in fulfillment
of the requirements for the award of the degree of Bachelor of Technology in
Computer Science and Engineering from Gokaraju Rangaraju Institute of Engineering
and Technology (Autonomous, under Jawaharlal Nehru Technological University,
Hyderabad). The results embodied in this project have not been submitted to any other
university or institution for the award of any degree or diploma.
S.Kavya 21241A054D
K.Abhijitha 21241A057A
P.Mamatha 21241A051D
L.Arpitha 21241A050B
Table of Contents
Abstract
1 Introduction
2 System Requirements
3 Literature Survey
4 Proposed Approach, Modules Description, UML Diagrams
4.1 Modules
7 References
Appendix
i) Full Paper-Publication Proof
ii) Snapshot of the Result
iii) Optional (Software Installation / Dependencies / Pseudocode)
List of Figures
Fig No Fig Title
1.1.1 Skincare Products
1.1.2 Main Products
2.1.1 Python System Requirements
2.1.2 Software Requirements
2.3.1 Dataset
2.3.2 Dry Folder
2.3.3 Dry Sample
2.3.4 Oily Folder
2.3.5 Oily Sample
2.3.6 Normal Folder
2.3.7 Normal Sample
4.2.1 CNN
4.2.2 TensorFlow
4.2.3 NumPy
4.2.4 Matplotlib
4.3.1 Use Case Diagram 1
4.3.2 Activity Diagram
4.3.3 System Architecture
4.3.4 Use Case Diagram 2
5.1–5.15 Result Screenshots
7.1 Paper Publication Proof
7.2 Streamlit Representation
LIST OF TABLES
ABSTRACT
A machine learning-based recommendation system for skincare products is a personalized tool that makes
product recommendations to consumers based on their skin type and preferences. First, the user's face is
given as input, and the system examines several aspects of it. This analysis enables the system to
understand the user's particular needs and beauty preferences while also revealing insights into the user's
distinct facial traits. The user's skin type is then determined, and products such as serum, face wash,
moisturizer, and sunscreen are suggested according to that skin type. The system's recommendations are
highly tailored, taking into account each user's requirements, preferences, and distinct facial features.
Through constant learning from user interactions and feedback, machine learning algorithms are able to
update and optimize the suggestions.
The desire for customized beauty treatments and rising consumer awareness have propelled the skincare
industry's exponential growth. To improve user satisfaction and engagement, a Skin Care Products
Recommendation System in this scenario makes use of advanced machine learning and data-analysis tools
to provide customized product recommendations. The strategy merges product properties like ingredients,
efficacy, and user ratings with user-specific data like skin type, concerns, preferences, and environmental
conditions. Through the use of content-based filtering, hybrid recommendation models, and collaborative
filtering, the system is able to forecast and suggest skincare items that are most suited to individual needs.
Complete user-profile setup, interactive feedback systems, and real-time suggestion updates are some of the
key features. The system complies with applicable laws and industry best practices to ensure privacy
and data security. Through a simple user interface, it offers actionable data and tailored suggestions,
making it a useful tool both for customers looking for the best skincare solutions and for businesses
looking to increase customer loyalty and stand out in the market.
Thorough testing and user feedback are used to assess the recommendation system's efficacy, showing
notable increases in customer satisfaction and product relevance. This project captures the core of creating
an advanced, user-focused skincare recommendation system that leverages artificial intelligence to
transform the skincare buying experience.
1. INTRODUCTION
The market for skincare products is expanding as people realize how important it is to have
customized skincare regimens that meet their individual demands. Finding the best solutions for their
particular skin type and problems may be difficult, though, due to the broad spectrum of alternatives
available. Our project uses the latest advances in machine learning and artificial intelligence to create
a sophisticated skincare recommendation system that addresses this problem.
Goals:
Our project's main goal is to develop a customized recommendation system that can suggest the skincare
products that best match each user's unique profile. These user profiles will be built from a number of
data points, such as:
Brand choices, sensitivity to ingredients (allergies, for example), and product categories (vegan, cruelty-free, etc.) are examples of preferences.
Environmental factors include the climate of the area, levels of pollution, and other circumstances that may
have an impact on skin health.
Essential Parts:
Design of User Profiles: Users will provide comprehensive data on their interests, concerns, and skin type.
The suggestion procedure is built around this information.
Data Integration: To guarantee thorough and precise suggestions, the system will combine data from a
variety of sources, such as user evaluations, ingredient lists, product databases, and clinical studies.
Content-Based Filtering: Generates suggestions by examining user preferences and product attributes.
Hybrid Models: Provides more reliable and precise recommendations by combining content-based and
collaborative filtering.
Feedback Loop: By letting users comment on the suggestions they get, the system can improve its
algorithms and provide better suggestions in the future.
The User Interface: An intuitive design facilitates easy data entry for users and guarantees smooth
interaction.
By offering individualized and highly tailored product suggestions based on unique customer preferences
and skin types, a machine learning-powered beauty product recommendation system has the potential to
completely transform the skincare sector. Targeted skincare treatments are made possible by the system's
use of advanced face-analysis tools to comprehend each user's unique facial traits. Growth of the industry
is among the main advantages of such a system.
1.1.1 Skin care products
1.2 Existing System
Lack of Personalization: Current beauty recommendation systems frequently fall short in terms of tailoring
advice to individual characteristics like age, skin tone, skin type, and preferences. In order to
guarantee that consumers receive ideas that are pertinent to their particular requirements, personalization is
crucial.
Restricted Diversity in Recommendations: Some beauty recommendation systems lack diversity in their
product suggestions, resulting in a limited range of items that fails to meet customers' diverse demands.
This may lead to the neglect of specialized or emerging brands and a failure to adequately reflect the
range of skin tones and ethnic preferences.
Countless recommendation engines have emerged in the skincare sector to assist customers in selecting the
ideal products. These systems vary from simple recommendation engines to complex AI-powered
networks. An outline of various current systems, together with information on their features and methods,
is provided below.
Simple Filtering Mechanisms: Based on little user input, these systems provide straightforward suggestions
based on factors like skin type and principal skin concern. Usually, they propose items based on
pre-established guidelines. For instance, basic filtering is used by online retailers such as Amazon and
Sephora to classify items and provide recommendations according to broad tags such as "oily skin" or
"wrinkle prevention."
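Such pre-established guidelines can be sketched as a small rule-based filter. The catalog entries, product names, and tags below are hypothetical placeholders, not real products:

```python
# Minimal rule-based filtering sketch: map broad tags such as
# "oily skin" to a pre-established product list. All product
# names and tags below are hypothetical placeholders.
CATALOG = [
    {"name": "Foaming Face Wash", "tags": {"oily skin", "acne"}},
    {"name": "Hydrating Moisturizer", "tags": {"dry skin"}},
    {"name": "Retinol Serum", "tags": {"wrinkle prevention"}},
    {"name": "Gentle Cleanser", "tags": {"dry skin", "sensitive skin"}},
]

def filter_products(tag):
    """Return every catalog item labelled with the requested tag."""
    return [p["name"] for p in CATALOG if tag in p["tags"]]

print(filter_products("dry skin"))  # ['Hydrating Moisturizer', 'Gentle Cleanser']
```

This is exactly the limitation noted above: the output depends only on the tag, never on the individual user.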
The main goal is to develop a system that recommends cosmetic products to consumers based on their
skin type, preferences, and budget.
There are a few important phases in developing a recommendation system for beauty items. First and
foremost, data collecting is crucial. This includes learning about customer preferences and compiling
detailed information on beauty goods. User profiling is essential for capturing individual preferences, such
as skin type, worries, and financial limitations, once data has been gathered. The next step is algorithm
selection, in which the project's specifications are used to determine whether recommendation algorithms—
such as content-based or collaborative filtering—are suitable. To produce precise recommendation models,
model training is then carried out utilizing past user-product interaction data. Additionally, feedback
integration is essential since it enables the recommendation system to be continuously improved.
1.3.1 Methodology
To guarantee reliable and customized suggestions for customers, the creation of an entire Skin Care
Product Recommendation System requires a number of crucial phases and procedures. The primary
elements and procedures involved in developing such a system are described in this section.

1. Creation of User Profiles and Data Gathering
Skin Analysis: The optional incorporation of image-analysis technologies to evaluate user-uploaded images
for wrinkles, pores, and pigmentation.

Data points:
Skin type (such as combination, dry, oily, or sensitive)
Skin issues (such as sensitivity, pigmentation, aging, and acne)
Sensitivities to ingredients and allergies Preferences for brands and products
Environmental elements (such as the pollution levels and climate)
Product Database: Detailed information on skincare products, including each product's ingredient list (so
that users can pick products whose ingredients suit their skin), the product type (moisturizer, cleanser,
or sunscreen), and user ratings and reviews.
We also need a dataset consisting of skin type, product type and its URL, and skin tone in order to train
our model. Algorithms such as CNN (Convolutional Neural Network), Random Forest, and Multiple Regression
can be used to train the machine learning model.
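As a sketch of how such a dataset could drive recommendations once a skin type has been predicted, the rows below are illustrative stand-ins for the (skin type, product type, URL) dataset described above, not actual project data:

```python
import csv
import io

# Stand-in for the (skin type, product type, url) dataset described
# above; every row below is an illustrative placeholder.
RAW = """skin_type,product_type,url
oily,face wash,https://example.com/fw1
oily,sunscreen,https://example.com/ss1
dry,moisturizer,https://example.com/mo1
normal,serum,https://example.com/se1
"""

def recommend(skin_type):
    """Return (product_type, url) pairs recorded for a skin type."""
    rows = csv.DictReader(io.StringIO(RAW))
    return [(r["product_type"], r["url"]) for r in rows
            if r["skin_type"] == skin_type]

print(recommend("oily"))
```

In the full system, `skin_type` would come from the trained classifier rather than being supplied by hand.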
Security and Privacy of Data:
Ensures compliance with data-privacy laws (such as the CCPA and GDPR).
Puts strong security measures into effect to guard user data from breaches and unauthorized access.
2. REQUIREMENT ENGINEERING
2.1.2 Software Requirements
The minimum hardware requirements for an Enthought Python / Canopy / VS Code user vary mainly
depending on the software. Apps that keep large array objects in memory need more RAM,
while apps that must perform many computations or respond quickly need a faster CPU.
In this system the hardware requirements are:
• System: Intel i5 or higher
• Hard Disk: 120 GB
• Input devices: Keyboard
• RAM: 8 GB or higher
Functional Requirements
Functional requirements describe the capabilities of a software system and how the system is
intended to behave given specific inputs or conditions. They may include computations, data
processing, and additional special functions.
Non-Functional Requirements
A non-functional requirement offers criteria for judging a system's operation rather than
specific behaviors. Non-functional requirements encompass the majority of the measurements
that define the system's standard and quality; parameters that fall under this category
include performance, security, and so on.
Performance: Performance is primarily used to quantify the parameters known as time and
space. This project takes up less space and the actions or operations conducted are completed
in a matter of seconds.
Security: One of the most important aspects of any computerized program is security or
authorization. Because the information is private, no harmful person should be permitted to
access it.
The project takes up no additional space, and the procedure is completed promptly.
Portable: The system is usable on any operating system that supports Python
OVERALL DESCRIPTION
A software system's behavior is fully described in a software requirements specification (SRS), which is an
essential specification for a software system. It comes with a series of scenarios that outline every
interaction clients will encounter with the program. Non-functional requirements are included in the SRS
in addition to use cases. Requirements that provide limitations on the design or execution are known as
non-functional requirements (performance engineer specifications, quality standards, design restrictions,
etc.). A system requirements specification is an organized set of data that represents a system's needs. A
business analyst, also known as a system analyst, is in charge of examining the demands of stakeholders
and clients in order to pinpoint issues and propose solutions. This analysis usually takes place early in
the systems-development life cycle.
Business requirements lay forth what has to be done or provided in order to add value.
Product requirements list the characteristics of a system or product, which may be one of several ways to deliver on the business requirements.
A feasibility study:
At this point, the project's viability is assessed, and a business proposal is presented along with a
general plan for the project and some cost estimates.
FINANCIAL FEASIBILITY
The purpose of this study is to evaluate the system's potential financial impact on the organization.
The funds the organization can invest in the system's research and development are limited, so the
expenditures must be justified. Because the majority of the technologies used are freely available,
the system was developed within budget; only the customized products had to be acquired.
TECHNICAL FEASIBILITY
This study is conducted to verify the technical feasibility, that is, the technical requirements of the
system. Any system that is developed must not place an unreasonable demand on the available technical
resources; otherwise, high demands would be placed on the client.
SOCIAL FEASIBILITY
The aspect of this study is to assess the user's level of acceptance of the system. This includes the
process of training the user to use the system efficiently. The user must not feel threatened by the
system; instead, the user must accept it as a necessity. The level of acceptance by the users depends
solely on the methods employed to educate users about the system and make them familiar with it. Users'
confidence must be raised so that they can offer the constructive criticism that is welcomed, since they
are the system's final users.
A vast and multidimensional dataset comprising user-specific data, comprehensive product data,
environmental conditions, and market trends is used in the Skin Care Product Recommendation System.
The development of an efficient recommendation system that provides precise and customized product
recommendations depends on the proper gathering, preparation, and incorporation of this data. Through the
utilization of an extensive as well as meticulously organized dataset, the system has the potential to greatly
augment customer contentment and involvement by offering customized skincare remedies that cater to
distinct requirements and inclinations.
Our dataset consists of facial images of people with normal, oily, and dry skin types. It is organized
into three subfolders of facial images (oily, normal, and dry), with around 3,000 images totalling about
120 MB. We also have a dataset containing skin type, product type, and product URL.
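The three-folder layout described above can be turned into a labelled training list with the standard library alone. The sketch below recreates the folder structure with empty placeholder files (the real dataset would contain actual images, and the counts here are illustrative):

```python
import os
import random
import tempfile

# Recreate the oily/dry/normal folder layout described above with
# empty placeholder files; folder names match the report, file
# counts are illustrative only.
root = tempfile.mkdtemp()
for cls in ("oily", "dry", "normal"):
    os.makedirs(os.path.join(root, cls))
    for i in range(10):
        open(os.path.join(root, cls, f"img_{i}.jpg"), "w").close()

# Build a labelled (path, class) list from the folder names.
samples = [(os.path.join(root, cls, f), cls)
           for cls in sorted(os.listdir(root))
           for f in sorted(os.listdir(os.path.join(root, cls)))]

# Shuffle and take an 80/20 train/validation split.
random.seed(0)
random.shuffle(samples)
split = int(0.8 * len(samples))
train, val = samples[:split], samples[split:]
print(len(train), len(val))  # 24 6
```

With the real images in place, the same (path, label) pairs could be fed to an image loader for CNN training.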
2.3.1 Dataset

Dry: (Figures 2.3.2 and 2.3.3 show the dry-skin folder and a sample image.)
Oily: (Figures 2.3.4 and 2.3.5 show the oily-skin folder and a sample image.)
Normal: (Figures 2.3.6 and 2.3.7 show the normal-skin folder and a sample image.)
3. LITERATURE SURVEY
Authors: He, X., Liao, L., Zhang, H., Nie, L., Hu, X., & Chua, T.-S.
Deep neural networks have made significant progress in speech recognition, computer vision, and natural
language processing in recent years. On recommender systems, however, the investigation of deep neural
systems has gotten comparatively little attention. In this study, we aim to build neural network-based
methods to address the central recommendation problem, namely collaborative filtering, based on implicit
feedback.
While deep learning has been utilized for recommendation in some recent work, its main application has
been in modeling auxiliary data, such as textual item descriptions and acoustic features of music. To
model the crucial component of collaborative filtering, the interaction between user and item features,
prior work continued to rely on matrix factorization, applying an inner product on the latent features
of users and items.
https://dl.acm.org/doi/10.1145/3038912.3052569
Author: R. Burke
Recommender systems employ user preferences to provide recommendations for things to buy or look at.
With their recommendations that efficiently filter vast information spaces to point users toward the
products that most closely match their interests and requirements, they have evolved into essential tools for
online purchasing and information access. Many methods have been put forth to carry out
recommendations, such as knowledge-based, collaborative, content-based, and other methods. These
techniques have occasionally been combined to create hybrid recommenders, which increase performance.
In addition to introducing EntreeC, a novel hybrid that combines knowledge-based recommendation with
collaborative filtering to recommend restaurants, this study surveys the landscape of actual and possible
hybrid recommenders. Additionally, it demonstrates that semantic ratings derived from the system's
knowledge-based component improve the effectiveness of the collaborative filtering.
https://link.springer.com/article/10.1023/A:1021240730564
3.3 Collaborative Filtering Techniques
Authors: Sarwar, B., Karypis, G., Konstan, J., & Riedl, J.
Data-intensive applications analyze large amounts of data with growing complexity and analytical
requirements by utilizing advances in hardware, software, and algorithms. These applications are vital in a
variety of domains, including scientific research, finance, and healthcare, where prompt and precise
insights are necessary for creativity and decision-making. Cloud computing innovations, collaborative
computing frameworks like Spark and Apache Hadoop, and specialized hardware like GPUs have
improved the performance and scalability of data-intensive applications dramatically. Through the effective
handling of data processing, storage, and analysis, these innovations allow businesses to extract valuable
insights from enormous datasets, leading to increased efficiency, productivity, and creativity in the digital
era.
https://dl.acm.org/doi/10.1145/3038912.3052569
Practitioners and researchers in a wide range of fields, including data mining, marketing, management, e-
commerce customization, retrieval of data, and pervasive and mobile computing, have acknowledged the
significance of contextual information. Even though recommender systems have been the subject of
extensive research, most current methods concentrate on providing users with the most relevant
recommendations without accounting for any extra contextual information, like the time, location, or
presence of other individuals (e.g., for dining out or watching movies). We contend in this chapter that
pertinent contextual information matters to recommender systems and that it is critical to consider this
information when making recommendations. We discuss the broad concept of context and how recommender
systems may model it.
https://link.springer.com/chapter/10.1007/978-0-387-85820-3_7
Customers' disorganized bike movements in bike-sharing systems (BSSs) result in vacant or crowded
stations, which substantially reduces customer demand. A wide range of current research has used a
predefined set of past patterns to build effective bike-repositioning methods that mitigate the lost
demand.
However, there is still a long way to go until proactive, reliable bike repositioning solutions are designed
and the root cause uncertainties in demand are well understood. In order to close this gap, we provide a
probabilistic satisficing technique based dynamic bike repositioning strategy that makes use of demand
characteristics that are unpredictable but can be learned from past data. Our approach involves creating a
new and effective hybrid integer linear program that maximizes the likelihood of meeting the unpredictable
demand.
https://www.ijcai.org/proceedings/2019/0813.pdf
4. PROPOSED APPROACH, MODULES DESCRIPTION, UML DIAGRAMS
4.1 MODULES
The Skin Care Products Recommendation System can be divided into a number of modules, each with its own
important functionality.
RECOMMENDATION ENGINE:
Collaborative Filtering: Makes product recommendations based on user activity data.
Content-Based Filtering: Matches products to users according to user preferences and product attributes.
Hybrid Model: For increased accuracy, combines content-based and collaborative filtering.
Machine Learning Models: Apply algorithms to improve suggestions in response to user input and
interactions.
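A minimal sketch of the hybrid model described above: a content-based score (cosine similarity between a user-preference vector and product-attribute vectors) is blended with a collaborative score. All vectors, ratings, and the 0.6/0.4 weighting are illustrative assumptions, not values from the project:

```python
import numpy as np

# Three products described by illustrative binary attribute vectors.
products = np.array([[1.0, 0.0, 1.0],
                     [0.0, 1.0, 0.0],
                     [1.0, 1.0, 0.0]])
user_pref = np.array([1.0, 0.0, 1.0])   # user's attribute preferences
collab = np.array([0.2, 0.9, 0.5])      # normalized collaborative scores

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Content-based score per product, then a weighted hybrid blend.
content = np.array([cosine(p, user_pref) for p in products])
hybrid = 0.6 * content + 0.4 * collab
best = int(np.argmax(hybrid))
print(best)  # index of the top-ranked product
```

Here the blend ranks product 0 first even though its collaborative score is lowest, which is exactly the robustness the hybrid approach is meant to provide.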
USER INTERFACE:
Frontend design: Offers an easy-to-use interface via which users may communicate with the system.
Interactive Features: Contains comparison charts for products, skin examination tools, and customized
advice.
Notifications for Users: Notifies users of updates and suggestions and serves as a reminder.
4.2 ALGORITHMS AND LIBRARIES
ALGORITHMS
A particular class of machine learning model known as convolutional neural networks (CNN) is a deep
learning technique that is particularly well-suited for the analysis of visual data. CNNs, also known as
convnets, extract features and recognize patterns in images using concepts from linear algebra, specifically
convolution processes. CNNs can be configured to handle sound and other signal data, even if processing
images is their primary function.
The connection patterns found in the human brain, particularly in the visual cortex, which is crucial for
the perception and processing of visual inputs, served as the model for CNN design. These models can
comprehend whole images because the artificial neural networks in a CNN are constructed to effectively
interpret visual information.
CNNs employ a number of layers, each of which picks up unique characteristics from an input
image. A CNN may have hundreds, thousands, or even more layers, depending on how complicated the
task for which it is designed is. Each layer builds on the results of the one before it to identify intricate
patterns. Initially, a filter intended to identify certain characteristics is slid over the input image; this
procedure is called the convolution operation, which is why the term "convolutional neural network" was
coined. The feature map that indicates the locations of the identified features in the picture is the end
product of this method. The following layer uses this feature map as input, allowing a CNN to
progressively create the hierarchical structure of the image.
Basic characteristics like lines and simple textures are often detected by first filters. The filters in
subsequent levels are more intricate, combining the fundamental characteristics found in previous layers to
identify more intricate patterns. For instance, a deeper layer may begin recognizing forms once an earlier
layer has identified the existence of edges.
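The convolution operation described above can be demonstrated in miniature with NumPy. The image, filter, and sizes below are illustrative: a 3x3 vertical-edge filter slides over a 6x6 image whose left half is bright, and the resulting 4x4 feature map lights up along the edge:

```python
import numpy as np

# Tiny input image: bright left half, dark right half.
image = np.zeros((6, 6))
image[:, :3] = 1.0

# 3x3 vertical-edge filter (positive left column, negative right).
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

# Slide the filter over every 3x3 window: the convolution operation.
fm = np.zeros((4, 4))                  # (6-3+1) x (6-3+1) feature map
for i in range(4):
    for j in range(4):
        fm[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)

print(fm)  # nonzero responses mark where the vertical edge lies
```

The feature map is strongest at the windows straddling the boundary between the bright and dark halves, which is how early CNN layers localize basic characteristics like lines.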
4.2.1 CNN
Object detection: CNNs can recognize and pinpoint several items in a picture. This feature is
essential in a variety of retail shelf scanning settings where the goal is to detect out-of-stock
merchandise.
Another major industry in which CNNs are used is facial recognition. For effective access control
based on face characteristics, this technology may be integrated into security systems.
CNNs are adept at finding characteristics and patterns in audio, video, and image signals.
LIBRARIES
TENSORFLOW
The team at Google Brain created the open-source artificial intelligence framework TensorFlow. Large-
scale machine learning applications, deep learning, and neural network development are just a few of the
many uses for it. Researchers can advance the state of machine learning with TensorFlow's extensive and
19
4.2.2 Tensor flow
adaptable network of instruments, libraries, and community resources, while developers can create and
implement machine learning-powered apps with ease.
Versatility:
Deep neural networks, reinforcement learning, and other machine learning models are only a few of the
many models and techniques that are supported.
Flexibility:
Offers many APIs for various abstraction levels. Although the lower-level TensorFlow Core API provides
more flexibility and customisation, the high-level Keras API facilitates the creation and training of models.
Performance:
TPU (Tensor Processing Unit) acceleration, GPU support, and CPU optimization allow for quicker
training and inference.
Scalability: Designed to be easily deployed across a range of platforms, including mobile devices and
distributed computing clusters.
Community and Support: Features a sizable, vibrant community.
Tensors: Multi-dimensional arrays are represented using TensorFlow's core data structure, Tensors. The
fundamental units for manipulating and representing data are tensors.
Graphs and Sessions: TensorFlow represents computations via dataflow graphs. Operations are
represented as nodes in the graph, and the data (tensors) that flow between operations are represented by
edges. Graphs are executed using sessions.
Keras: A high-level API for creating and training models. Because Keras is integrated with TensorFlow, building neural networks takes very little code.
Estimators: A high-level TensorFlow API that simplifies training and deploying machine learning models. Estimators manage training, evaluation, prediction, and model exporting.
tf.data: A module for constructing complex input pipelines from simple, reusable components. It makes data loading and preprocessing efficient and scalable.
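As a brief illustration, the following sketch (assuming TensorFlow 2.x) builds a small Keras CNN of the kind this report uses for skin-type classification. The layer sizes, input resolution, and class count are illustrative assumptions, not the project's exact architecture.

```python
# Minimal Keras CNN sketch for skin-type classification.
# Layer sizes and the three assumed classes (dry, oily, normal)
# are illustrative, not the project's actual configuration.
from tensorflow import keras
from tensorflow.keras import layers

NUM_SKIN_TYPES = 3  # assumed: dry, oily, normal

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),           # small RGB face crops
    layers.Conv2D(16, 3, activation="relu"),  # learn low-level features
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),  # learn higher-level features
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_SKIN_TYPES, activation="softmax"),  # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

A trained model of this shape would be called with `model.predict(batch_of_images)` to obtain per-class probabilities, after which the highest-probability skin type drives the product recommendation step.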
NUMPY
4.2.3 NumPy
NumPy, short for Numerical Python, is a core package for scientific computing in Python. It supports large, multi-dimensional arrays and matrices, along with a collection of mathematical operations that can be performed on these arrays. Because of its efficiency and ease of use, NumPy is widely used in scientific research, data analysis, and machine learning.
1. Ndarray: The core data structure of NumPy is the ndarray, a fast and space-efficient multi-dimensional array that provides vectorized operations for performance optimization.
2. Mathematical Functions: A comprehensive set of mathematical functions, including linear
algebra, statistical operations, and random number generation.
3. Broadcasting: Allows for arithmetic operations on arrays of different shapes, enabling efficient
vectorization and reducing the need for explicit looping.
4. Integration with Other Libraries: Seamlessly integrates with other scientific computing libraries
such as SciPy, Pandas, and Matplotlib.
5. Performance: Written in C, NumPy provides fast array processing and is often used as a base for
other high-performance libraries.
NumPy is a robust numerical computing package for Python that provides features essential for machine learning, scientific research, and data analysis. In the context of a skin care recommendation system, NumPy can be used efficiently for data preprocessing, similarity estimation, and other mathematical operations, offering a strong basis for building sophisticated recommendation algorithms. Through NumPy's efficient array-processing features, developers can create scalable and effective systems that provide consumers with customized skincare suggestions. In such a system, NumPy can be applied to data preparation, feature engineering, and the implementation of machine learning algorithms. Here is a condensed example showing some possible uses for NumPy.
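One such use is similarity-based matching between a user profile and candidate products. The sketch below uses only NumPy; the products, feature columns, and numeric values are made-up assumptions for illustration.

```python
# Cosine-similarity matching between an assumed user skin profile and
# assumed product feature vectors (values are illustrative only).
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rows: products; columns: assumed features (hydration, oil control, SPF)
products = np.array([
    [0.9, 0.1, 0.2],   # moisturizer
    [0.2, 0.8, 0.1],   # face wash
    [0.3, 0.2, 0.9],   # sunscreen
])
user_profile = np.array([0.8, 0.2, 0.3])  # assumed dry-skin profile

scores = np.array([cosine_similarity(user_profile, p) for p in products])
best = int(np.argmax(scores))  # index of the most similar product
```

With these assumed values the moisturizer row scores highest, so it would be recommended first; broadcasting and vectorized operations let the same computation scale to thousands of products without explicit loops.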
MATPLOTLIB
4.2.4 Matplotlib
Matplotlib is a comprehensive Python visualization library for creating static, animated, and interactive graphics. It is highly flexible and widely used in the research and data analysis communities to create a wide range of plots and graphs. Many other Python visualization libraries, including Seaborn and the plotting features in Pandas, are built on top of Matplotlib.
Numerous Plot Types: Supports simple line, bar, scatter, histogram, and pie charts, as well as more intricate visualizations such as contour and 3D plots.
Customizability: Provides a wide range of plot customization options, allowing adjustments to colors, labels, markers, and other elements.
Integration: Integrates effectively with other libraries such as NumPy, Pandas, and SciPy, making it straightforward to visualize data from these sources.
Support for Interactive Plots: Enables interactive plots to be embedded in Jupyter notebooks and GUI applications.
Publication Quality: Can create plots of publication quality, with precise control over plot aesthetics.
Plotting Features:
Plotting functions such as plot, scatter, bar, hist, pie, and imshow allow the creation of many plot kinds, while functions such as grid, legend, xlabel, and ylabel add annotations and enhance plot readability.
Personalization:
Plot customization options abound, such as using xlim and ylim to set axis limits, color and linestyle to select colors and line styles, and marker to customize markers.
With its numerous customization options and wide variety of plotting features, Matplotlib is an indispensable tool for data visualization in Python. In the context of a skin care recommendation system, it is useful for visualizing consumer characteristics, product ratings, and the performance of the recommendation algorithm. By offering clear and insightful visualizations, Matplotlib improves data comprehension and decision-making, which raises the recommendation system's overall efficacy.
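For example, product ratings of the kind mentioned above could be visualized with a simple bar chart; the product names and rating values below are made-up sample data.

```python
# Bar chart of (made-up) average product ratings using Matplotlib.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

products = ["Moisturizer", "Face wash", "Serum", "Sunscreen"]
ratings = [4.2, 3.8, 4.5, 4.0]  # illustrative average user ratings

fig, ax = plt.subplots()
ax.bar(products, ratings, color="tab:blue")
ax.set_xlabel("Product")
ax.set_ylabel("Average rating")
ax.set_ylim(0, 5)                 # ratings assumed on a 0-5 scale
ax.set_title("Average user ratings per product")
fig.savefig("ratings.png")        # export a publication-quality image
```

The same figure object can be handed to Streamlit's `st.pyplot` later, which is how plots like this would reach the web interface.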
STREAMLIT
Streamlit is a free, open-source app framework created expressly to make it simple and quick to build and share data-driven web applications. It enables data scientists and machine learning engineers to turn data scripts into interactive web apps without in-depth knowledge of web programming.
Real-Time Interactivity: Streamlit supports real-time interactivity with widgets such as buttons, sliders, and text inputs, letting users change data and view results instantly.
Integration: Integrates easily with a variety of well-known data science libraries, including NumPy, Pandas, Matplotlib, Plotly, and others.
Automatic UI Generation: Automatically creates a user interface from Python code, eliminating the need for HTML or JavaScript.
Live Code Updates: Streamlit applications update in real time as you change and save your code, giving an efficient development process.
Deployment: Streamlit applications are simple to set up and distribute, whether through Streamlit Sharing or self-hosting.
Streamlit is a robust and intuitive Python tool for building interactive web apps. Its ease of use and seamless integration with the Python ecosystem make it a great option for researchers and machine learning practitioners. Streamlit can provide a dynamic and interactive user interface for a skin care recommendation system, enabling users to examine visualizations, explore suggestions, and interact with the system in real time. This improves the user experience and increases the accessibility and engagement of the data and suggestions.
Within the realm of software engineering, the Unified Modeling Language (UML) has become a widely used standard visual modeling language. Its main goal is to make the complex behavior and structure of software systems easier to describe, visualize, and document. With UML's extensive collection of diagrammatic techniques, developers, architects, and other stakeholders can convey system design concepts clearly and effectively. UML was created out of the need for a consistent way of modeling system architecture that improves comprehension, lowers complexity, and fosters clarity across the software planning and development stages. UML diagrams come in a variety of forms, each intended to depict a distinct aspect of a software system. Among the most popular UML diagrams are the following:
USE CASE DIAGRAM:
[Use case diagram: actors "user" and "server"; use cases "Input image", "Train models", "Predict", "Recommend products", and "Data retrieval".]
Initially, the system receives the user's image as input and analyzes several facial features. This analysis provides information about the user's unique facial attributes and helps the system understand their specific beauty needs and preferences. After the input stage, data augmentation transformations such as resizing, zooming, and flipping are applied. Then a Convolutional Neural Network identifies the individual's skin type, and products such as moisturizer, face wash, serum, and sunscreen are recommended based on that skin type. The recommendations made by the algorithm are highly personalized, accounting for the needs, tastes, and distinctive facial characteristics of every individual. The machine learning algorithms can continuously learn from user comments and interactions to update and improve the suggestions.
Activity Diagram:
System Architecture:
4.3.4 Use case diagram 2
5. IMPLEMENTATION, EXPERIMENTAL RESULTS AND TEST CASES
Model Creation:
Fig 5.1
Fig 5.2
Fig 5.3
Fig 5.4
Fig 5.5
Fig 5.6
Fig 5.7
Fig 5.8
Interface Code:
Fig 5.9
Fig 5.10
Fig 5.11
Results:
Fig 5.12
Fig 5.13
Fig 5.14
Fig 5.15
6. CONCLUSION AND FUTURE SCOPE
The creation of a Skin Care Recommendation System is a noteworthy step in the personalization of skincare regimens. By using personalized user data, extensive product details, and advanced recommendation algorithms, the system offers customized skincare solutions that cater to each person's requirements and tastes. Collaborative filtering, content-based filtering, and hybrid approaches work together to improve the relevance and accuracy of suggestions, enhancing user engagement and satisfaction. The project's major accomplishments include:
Personalized Suggestions: The system efficiently matches users with skincare products that address their individual skin types, issues, and needs.
Complete Data Integration: Combining data from several sources, including customer feedback, product databases, and environmental variables, ensures a comprehensive approach to skincare suggestions.
User-Friendly Interface: A well-designed interface enables users to enter data and receive suggestions with ease.
Future Scope
This Skin Care Products Recommendation System can be extended and improved in a number of ways to further boost its accuracy, usability, and functionality. Some possible avenues for further research and development:
Advanced Machine Learning and AI Methods:
Deep Learning Models: Using deep learning models to analyze customer information and product features more thoroughly.
Natural Language Processing (NLP): Enhancing the system's capacity to examine and understand user reviews and feedback in order to provide more insightful suggestions.
Improved Skin Examination:
Image Processing: Applying cutting-edge image processing methods to examine user-uploaded images and diagnose skin conditions more precisely.
Real-Time Examination: Creating tools for on-the-spot skin examination that can deliver prompt feedback and suggestions.
Integration with Wearable Technology:
Smart Devices: Connecting with wearables and smart devices that track variables related to skin health, and integrating health data from devices and apps to provide comprehensive skincare recommendations that take lifestyle and general health into account.
Skincare Routines:
Tailored Content and Education: Providing tailored skincare routines and advice based on user profiles, along with educational resources on trends, ingredients, and best practices in skincare.
REFERENCES
1. Du B., Bian X., "BeautyNet: Preserving Facial Beauty for Makeup Recommendation", 2017.
2. Panakorn Konsan Srivut, "Deep Learning-Based Beauty Product Recommendation System", 2018.
3. Zhang Q., "BeautyGAN: Instance-Level Facial Makeup Transfer with Deep GAN", 2018.
4. Yang F., Chang S., "Deep Beauty: Beauty Prediction and Harmonization with User Preferences", 2019.
5. Aditi Kanhere, Vaijayanthi M., "Makeup Mantra: A Deep Learning Makeup Recommendation System", 2020.
6. Xue, Cheng J., Zhu, "BeautyPlus: A Recommendation System for Beauty Products", 2021.
7. Hameed N., Shabut A.M., Hossain, "Multi-Class Skin Diseases Classification Using Deep Convolutional Neural Network and Support Vector Machine", 2018.
8. Hsia C.-H., Chiang J.-S., Lin, "A Fast Face Detection Method for Illumination Variant Conditions", 2015.
9. Vesal S., Ravikumar N., Maier, "SkinNet: A Deep Learning Framework for Skin Lesion Segmentation", 2018.
APPENDIX
Additional Software
>Streamlit:
Streamlit is a freely available open-source framework that makes it easy to develop and distribute attractive data science and machine learning web applications. It is a Python library designed mostly with machine learning developers in mind. Data scientists and machine learning engineers are not web developers, and they have little interest in spending weeks learning how to create web applications with conventional web frameworks. As long as a tool can display data and gather the inputs needed for modeling, they prefer the easier-to-use option. With Streamlit, you can construct an attractive application with just a few lines of code.
• Every time the state changes, the script reruns, causing the view to update.
• The application is structured so that values in the session state can be read and written.
• The session state is a package-level global with a Python dictionary interface.
Update and Capture Widget Status:
• The @st.cache decorator accepts custom object hash functions and a time-to-live (ttl) parameter.
• Caching checks the function's body, arguments, and external variables to decide whether to recompute.
• Widget constructors support callback functions passed via the on_click or on_change parameters.