Deep Learning Lab Manual
AD3511 - DEEP LEARNING LABORATORY
DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND DATA SCIENCE
LABORATORY MASTER RECORD
Branch : ___________________
TABLE OF CONTENTS
1. Index
2. Course Syllabus
4. List of Experiments
8. Experiment/Exercise Details
1. INDEX
Ex. No. | Date | Title of the Experiment/Exercise | Page No. | Marks Awarded | Signature of the Faculty
1. Solving XOR problem using DNN
2. Character recognition using CNN
3. Face recognition using CNN
4. Language modeling using RNN
5. Sentiment analysis using LSTM
6. Parts of speech tagging using Sequence to Sequence architecture
7. Machine Translation using Encoder-Decoder model
8. Image augmentation using GANs
9. Mini-project on real world application
Content Beyond Syllabus
10. Build Regression Model
2. COURSE SYLLABUS
LIST OF EXPERIMENTS:
1. Solving XOR problem using DNN
2. Character recognition using CNN
3. Face recognition using CNN
4. Language modeling using RNN
5. Sentiment analysis using LSTM
6. Parts of speech tagging using Sequence to Sequence architecture
7. Machine Translation using Encoder-Decoder model
8. Image augmentation using GANs
9. Mini-project on real world application
TOTAL: 60 PERIODS
COURSE OUTCOMES:
CO | CO Statement | BT Level
CO3 | Apply Recurrent Neural Network and its variants for text analysis | Apply
CO5 | Develop real-world solutions using suitable deep neural networks | Apply
CO-PO Mapping:
CO1 | 3 2 2 2 3 2 3 2 2
CO2 | 3 2 2 2 3 2 3 3 3
CO3 | 3 2 2 2 3 2 3 3 3
CO4 | 3 2 2 2 3 2 3 3 3
CO5 | 3 3 3 3 3 3 3 3 3
4. LIST OF EXPERIMENTS
S.No. | List of Experiments | Course Outcome
The software used in the Deep Learning Laboratory course includes various
programming languages, libraries, tools, and frameworks that provide the
functionality needed to process, analyze, model, and visualize data. Below is a
breakdown of the essential software for the course:
1. Programming Language: Python
Python is the most widely used language in data science and analytics due to its
simplicity and extensive ecosystem of libraries and frameworks. Libraries such as
Pandas, NumPy, Matplotlib, Scikit-learn, TensorFlow, and Keras allow students to
perform data manipulation, analysis, visualization, machine learning, and deep
learning tasks efficiently.
2. Data Science and Machine Learning Libraries:
• Pandas: For data manipulation and analysis, particularly with structured data (CSV, Excel, SQL).
• NumPy: For numerical computing, matrix operations, and handling arrays.
• Matplotlib and Seaborn: For creating static, animated, and interactive visualizations of data.
• Scikit-learn: For machine learning tasks, such as classification, regression, clustering, and model evaluation.
• TensorFlow and Keras: Popular frameworks for deep learning, supporting neural networks, natural language processing (NLP), and computer vision tasks.
• Statsmodels: For statistical models and hypothesis testing.
• Plotly: For creating interactive plots and visualizations.
8. EXPERIMENT/EXERCISE DETAILS
Ex.No: 01 DATE:
SOLVING XOR PROBLEM USING DNN
AIM:
To write a Python program to solve the XOR problem using a Deep Neural Network (DNN).
EXERCISE DESCRIPTION:
This experiment demonstrates the use of a simple DNN to model the XOR function.
It uses a 2-input, 2-hidden, 1-output architecture with the sigmoid activation function.
The model is trained using gradient descent for binary classification.
TERMINOLOGIES USED:
• DNN: Deep Neural Network with hidden layers to learn complex patterns.
• XOR: A logic function that outputs 1 only when inputs differ.
• Sigmoid: An activation function to squash output between 0 and 1.
• Forward Propagation: Calculating output from inputs using weights.
• Backpropagation: Adjusting weights to minimize the error.
• Gradient Descent: Optimization method for learning by minimizing cost.
ALGORITHM:
1. Initialize input and output data for the XOR logic.
2. Define the sigmoid activation function and its derivative.
3. Randomly initialize weights and biases.
4. Perform forward propagation to get the predicted output.
5. Calculate error and apply backpropagation to update weights.
6. Repeat for several iterations (epochs).
7. After training, perform predictions on the input data.
8. Display the final predicted results.
PROGRAM:
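The following is a minimal NumPy sketch of the network described above (2 inputs, 2 hidden units, 1 output, sigmoid activations, plain gradient descent). The learning rate, weight-initialization range, and iteration count are assumptions.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(s):
    # derivative expressed in terms of the sigmoid output s
    return s * (1 - s)

# XOR truth table: inputs and expected outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

np.random.seed(42)
W1 = np.random.uniform(-1, 1, size=(2, 2))   # input -> hidden weights
b1 = np.random.uniform(-1, 1, size=(1, 2))   # hidden biases
W2 = np.random.uniform(-1, 1, size=(2, 1))   # hidden -> output weights
b2 = np.random.uniform(-1, 1, size=(1, 1))   # output bias
lr = 0.5                                     # assumed learning rate

for epoch in range(10000):                   # assumed iteration count
    # forward propagation
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # backpropagation of the error
    error = y - output
    d_output = error * sigmoid_derivative(output)
    d_hidden = (d_output @ W2.T) * sigmoid_derivative(hidden)

    # gradient-descent weight updates
    W2 += lr * hidden.T @ d_output
    b2 += lr * d_output.sum(axis=0, keepdims=True)
    W1 += lr * X.T @ d_hidden
    b1 += lr * d_hidden.sum(axis=0, keepdims=True)

# display the final predictions
for inp, out in zip(X, sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)):
    print(f"Input: {inp}, Output: {int(round(out[0]))}")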
OUTPUT:
Input: [0 1], Output: 1
Input: [1 0], Output: 1
Input: [1 1], Output: 0
Input: [0 0], Output: 0
RESULT:
Thus, the Python program to implement the XOR logic using a Deep Neural
Network (DNN) was successfully executed; the network correctly classified all
XOR inputs, and the output was verified.
Ex.No: 02 DATE:
CHARACTER RECOGNITION USING CNN
AIM:
To build and train a Convolutional Neural Network (CNN) using Python and
Keras to perform handwritten character recognition on the MNIST dataset.
PROGRAM DESCRIPTION:
In this experiment, we use a CNN to recognize handwritten digits (0–9) from the
MNIST dataset. CNNs are particularly effective for image classification tasks due to their
ability to learn spatial hierarchies. The model includes convolutional, pooling, and fully
connected layers, and uses ReLU activation and softmax for multi-class classification.
TERMINOLOGIES USED:
• CNN (Convolutional Neural Network): A type of deep neural network specifically
designed for image data.
• MNIST: A dataset of 28×28 grayscale images of handwritten digits (0–9).
• Convolution Layer: Extracts features from image regions using filters.
• Pooling Layer: Downsamples feature maps to reduce computation.
• ReLU: Activation function that outputs max(0, x).
• Softmax: Converts outputs to probabilities for multi-class classification.
• Epoch: One complete pass through the entire training dataset.
ALGORITHM:
1. Import the MNIST dataset and preprocess the input data.
2. Build a CNN model with convolution, pooling, and dense layers.
3. Compile the model using categorical crossentropy and Adam optimizer.
4. Train the model on the training data for a fixed number of epochs.
5. Evaluate the model's accuracy on the test data.
6. Display the test accuracy and sample predictions.
7. Save the trained model (optional).
PROGRAM:
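A minimal Keras sketch of the model outlined in the algorithm above. The exact layer sizes are assumptions; the batch size of 64 matches the 938 steps per epoch shown in the output below.

import tensorflow as tf
from tensorflow.keras import layers, models

# 1. Load and preprocess MNIST
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)

# 2. Build the CNN: convolution, pooling, and dense layers (sizes assumed)
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

# 3. Compile with categorical crossentropy and the Adam optimizer
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# 4. Train, then 5-6. evaluate and report accuracy
model.fit(x_train, y_train, epochs=5, batch_size=64, verbose=2)
loss, acc = model.evaluate(x_test, y_test, verbose=0)
print("Test Accuracy:", acc)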
OUTPUT:
Epoch 1/5
938/938 - 10s - accuracy: 0.92 - loss: 0.23
...
Epoch 5/5
938/938 - 4s - accuracy: 0.98 - loss: 0.05
Test Accuracy: 0.9857
RESULT:
Thus, the program to implement Character Recognition using CNN was
successfully executed and the output was verified.
Ex.No: 03 DATE:
FACE RECOGNITION USING CNN
AIM:
To write a Python program to implement Face Recognition using CNN.
PROGRAM DESCRIPTION:
This Python program implements a Convolutional Neural Network (CNN) using
TensorFlow and Keras to classify face images from the LFW dataset. It uses 500
samples for quick execution. The model includes convolution, pooling, and dense
layers to extract facial features and classify them into predefined identities.
Performance is evaluated using a confusion matrix.
TERMINOLOGIES:
• Conv2D – Applies convolutional filters to extract image features.
• MaxPooling2D – Downsamples feature maps to reduce dimensions.
• Flatten() – Converts 2D feature maps into a 1D vector for dense layers.
• Dense – Fully connected neural network layer for classification.
• to_categorical – Converts class labels into one-hot encoded format.
• confusion_matrix – Evaluates classification performance by comparing predictions
with actual labels.
ALGORITHM:
1. Start the program.
2. Import the relevant packages for face recognition.
3. Load the face images from the LFW dataset.
4. Reshape the data for model creation.
5. Train the model and predict on the test data.
6. Evaluate the predictions using a confusion matrix.
7. Stop the program.
PROGRAM:
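A minimal sketch matching the description above, loading LFW faces through scikit-learn's fetch_lfw_people and capping the data at 500 samples for quick execution. The min_faces_per_person value, layer sizes, and train/test split are assumptions.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.datasets import fetch_lfw_people
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Load LFW faces and keep only 500 samples for quick execution
lfw = fetch_lfw_people(min_faces_per_person=70, resize=0.4)  # assumed settings
X = lfw.images[:500][..., np.newaxis] / 255.0   # add channel axis, scale to [0, 1]
y = lfw.target[:500]
n_classes = len(lfw.target_names)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
y_train_cat = tf.keras.utils.to_categorical(y_train, n_classes)
y_test_cat = tf.keras.utils.to_categorical(y_test, n_classes)

# Convolution, pooling, and dense layers to extract facial features
model = models.Sequential([
    layers.Input(shape=X.shape[1:]),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train_cat, epochs=5, batch_size=32,
          validation_data=(X_test, y_test_cat))
print("Training completed.")

# Evaluate classification performance with a confusion matrix
y_pred = model.predict(X_test).argmax(axis=1)
print("Prediction completed.")
print(confusion_matrix(y_test, y_pred))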
OUTPUT:
Epoch 1/5
C:\Users\rohit\anaconda3\Lib\site-
packages\keras\src\layers\convolutional\base_conv.py:107: UserWarning: Do not pass an
`input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using
an `Input(shape)` object as the first layer in the model instead.
super().__init__(activity_regularizer=activity_regularizer, **kwargs)
13/13 ━━━━━━━━━━━━━━━━━━━━ 3s 158ms/step - accuracy: 0.3264 - loss:
1.5228 - val_accuracy: 0.4600 - val_loss: 1.4287
Epoch 2/5
13/13 ━━━━━━━━━━━━━━━━━━━━ 2s 152ms/step - accuracy: 0.4968 - loss:
1.3866 - val_accuracy: 0.4600 - val_loss: 1.4622
Epoch 3/5
13/13 ━━━━━━━━━━━━━━━━━━━━ 2s 148ms/step - accuracy: 0.4497 - loss:
1.4845 - val_accuracy: 0.4600 - val_loss: 1.4306
Epoch 4/5
13/13 ━━━━━━━━━━━━━━━━━━━━ 2s 153ms/step - accuracy: 0.4509 - loss:
1.4498 - val_accuracy: 0.4600 - val_loss: 1.4264
Epoch 5/5
13/13 ━━━━━━━━━━━━━━━━━━━━ 2s 145ms/step - accuracy: 0.4656 - loss:
1.4287 - val_accuracy: 0.4600 - val_loss: 1.4183
Training completed.
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
Prediction completed.
RESULT:
Thus, the program to implement the Face Recognition using CNN was
successfully executed and the output was verified.
Ex.No: 04 DATE:
LANGUAGE MODELING USING RNN
AIM:
To write a Python program to implement Language modeling using RNN.
PROGRAM DESCRIPTION:
In this experiment, we build a character-level Recurrent Neural Network (RNN) for
language modeling. The network is trained on names drawn from several language
categories and learns to predict the next character given the category, the current
character, and its hidden state. After training, the model generates new names one
character at a time: given a category and a starting letter, each predicted character is
fed back into the network until an end-of-sequence marker is produced.
TERMINOLOGIES USED:
• RNN (Recurrent Neural Network): A neural network suited for sequential data like
text, where output depends on previous inputs.
• One-Hot Encoding: Representing categorical data (like characters or class labels) as
binary vectors.
• Category Tensor: A tensor used to represent the class (e.g., nationality) of a data
point.
• Loss Function (NLLLoss): Measures how far the model's output is from the target,
optimized during training.
• Sampling: The process of generating new data (e.g., names) by feeding predicted
characters back into the model.
• Softmax / LogSoftmax: Converts raw model outputs into probabilities for
classification.
ALGORITHM:
1. Start the program.
2. Import the relevant packages for language modeling.
3. Read each data file and split it into lines.
4. Build the category_lines dictionary, a list of lines per category.
5. Define a helper to pick a random item from a list.
6. Get a random category and a random line from that category.
7. Build a one-hot vector for the category.
8. Make category, input, and target tensors from a random (category, line) pair.
9. Sample from a category and starting letter.
10. Get multiple samples from one category and multiple starting letters.
11. Train the model.
12. Generate sample names from the trained model.
13. Stop the program.
PROGRAM:
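A condensed PyTorch sketch of the character-level model described above, in the style of the classic name-generation tutorial. It assumes per-category name files such as data/names/Russian.txt (one name per line); the hidden size, learning rate, and iteration count are also assumptions.

import glob, os, random, string, unicodedata
import torch
import torch.nn as nn

all_letters = string.ascii_letters + " .,;'-"
n_letters = len(all_letters) + 1          # extra slot for the EOS marker

def unicode_to_ascii(s):
    return "".join(c for c in unicodedata.normalize("NFD", s)
                   if unicodedata.category(c) != "Mn" and c in all_letters)

# Build the category_lines dictionary: a list of names per category
category_lines, all_categories = {}, []
for filename in glob.glob("data/names/*.txt"):   # assumed data location
    category = os.path.splitext(os.path.basename(filename))[0]
    all_categories.append(category)
    with open(filename, encoding="utf-8") as f:
        names = [unicode_to_ascii(line.strip()) for line in f]
    category_lines[category] = [n for n in names if n]
n_categories = len(all_categories)
print("# categories:", n_categories, all_categories)

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(n_categories + input_size + hidden_size, hidden_size)
        self.i2o = nn.Linear(n_categories + input_size + hidden_size, output_size)
        self.o2o = nn.Linear(hidden_size + output_size, output_size)
        self.dropout = nn.Dropout(0.1)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, category, inp, hidden):
        combined = torch.cat((category, inp, hidden), 1)
        hidden = self.i2h(combined)
        output = self.o2o(torch.cat((hidden, self.i2o(combined)), 1))
        return self.softmax(self.dropout(output)), hidden

# One-hot tensors for the category, input characters, and shifted targets
def category_tensor(category):
    t = torch.zeros(1, n_categories)
    t[0][all_categories.index(category)] = 1
    return t

def input_tensor(line):
    t = torch.zeros(len(line), 1, n_letters)
    for i, ch in enumerate(line):
        t[i][0][all_letters.find(ch)] = 1
    return t

def target_tensor(line):
    idx = [all_letters.find(line[i]) for i in range(1, len(line))]
    idx.append(n_letters - 1)                 # EOS index
    return torch.LongTensor(idx)

rnn = RNN(n_letters, 128, n_letters)          # assumed hidden size
criterion = nn.NLLLoss()
optimizer = torch.optim.SGD(rnn.parameters(), lr=0.0005)

for it in range(10000):                       # assumed iteration count
    category = random.choice(all_categories)
    line = random.choice(category_lines[category])
    cat_t, inp_t, tgt_t = category_tensor(category), input_tensor(line), target_tensor(line)
    hidden = torch.zeros(1, rnn.hidden_size)
    optimizer.zero_grad()
    loss = 0
    for i in range(inp_t.size(0)):
        output, hidden = rnn(cat_t, inp_t[i], hidden)
        loss = loss + criterion(output, tgt_t[i].unsqueeze(0))
    loss.backward()
    optimizer.step()

# Sample a new name from a category and starting letter
def sample(category, start_letter="A", max_length=20):
    with torch.no_grad():
        cat_t, inp = category_tensor(category), input_tensor(start_letter)
        hidden = torch.zeros(1, rnn.hidden_size)
        name = start_letter
        for _ in range(max_length):
            output, hidden = rnn(cat_t, inp[0], hidden)
            topi = output.topk(1)[1][0][0].item()
            if topi == n_letters - 1:         # EOS reached
                break
            name += all_letters[topi]
            inp = input_tensor(all_letters[topi])
        return name

print(sample("Russian", "R"))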
OUTPUT:
# categories: 4 ['Russian', 'German', 'Spanish', 'Chinese']
RESULT:
Thus, the program to implement the Language Modeling using RNN was
successfully executed and the output was verified.
Ex.No: 05 DATE:
SENTIMENT ANALYSIS USING LSTM
AIM:
To write a Python program to implement Sentiment analysis using LSTM.
PROGRAM DESCRIPTION:
This Python program demonstrates sentiment analysis using a Recurrent Neural
Network (RNN) built with TensorFlow and Keras. It is designed to work fully offline by
using a small set of manually defined movie reviews labeled as positive or negative. The
reviews are tokenized into sequences of integers using Keras's Tokenizer, and these
sequences are padded to a fixed length to ensure uniform input size. The model
architecture consists of an embedding layer to convert word indices into dense vectors,
followed by an LSTM (Long Short-Term Memory) layer that captures temporal
dependencies in the sequence data, and a dense output layer with a sigmoid activation
for binary classification. The model is trained and validated on a split of the dataset and
then evaluated to report accuracy and loss. This implementation provides a compact and
self-contained example of how natural language processing and deep learning can be
applied to classify text sentiment without the need for internet connectivity or large
datasets.
TERMINOLOGIES USED:
• Tokenizer: A utility that converts text into sequences of integers, where each integer
represents a word's index in the vocabulary. It prepares raw text data for model
input.
• Padding: A process that ensures all sequences (reviews) have the same length by
adding zeros (or truncating) to match a defined maximum length.
• Embedding layer: Transforms word indices into dense vector representations,
capturing semantic relationships between words in a lower-dimensional space.
• LSTM (Long Short-Term Memory): A special type of recurrent neural network layer
that is capable of learning long-term dependencies, ideal for sequence data like text.
• Binary crossentropy: A loss function used for binary classification tasks, measuring
the difference between predicted probabilities and actual binary labels.
ALGORITHM:
1. Start the program.
2. Import the relevant packages for Keras preprocessing.
3. Load the IMDB Dataset.csv file.
4. Remove HTML tags, URLs, and non-alphanumeric characters.
5. Read the file and split it into lines.
6. Tune the hyperparameters of the model.
7. Initialize the model.
8. Compile the model.
9. Predict on new reviews.
10. Stop the program.
PROGRAM:
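A minimal offline sketch of the model described above, assuming a TensorFlow 2.x environment where keras.preprocessing.text.Tokenizer is available. The tiny review set, vocabulary size, sequence length, and layer dimensions are assumptions.

import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras import layers, models

# Manually defined reviews with binary sentiment labels (1 = positive)
reviews = [
    "the movie was fantastic and inspiring",
    "a wonderful film with a great cast",
    "absolutely loved this brilliant story",
    "the plot was dull and boring",
    "a terrible waste of time",
    "worst movie I have ever seen",
]
labels = np.array([1, 1, 1, 0, 0, 0])

# Tokenize text into integer sequences and pad to a fixed length
tokenizer = Tokenizer(num_words=1000, oov_token="<oov>")
tokenizer.fit_on_texts(reviews)
sequences = tokenizer.texts_to_sequences(reviews)
padded = pad_sequences(sequences, maxlen=10, padding="post")

# Embedding -> LSTM -> sigmoid output for binary classification
model = models.Sequential([
    layers.Input(shape=(10,)),
    layers.Embedding(input_dim=1000, output_dim=16),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Train with a validation split, then report loss and accuracy
model.fit(padded, labels, epochs=10, validation_split=0.33, verbose=2)
loss, acc = model.evaluate(padded, labels, verbose=0)
print("Loss:", loss)
print("Accuracy:", acc)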
OUTPUT:
Epoch 1/10
Epoch 2/10
Epoch 3/10
Epoch 4/10
Epoch 5/10
Epoch 6/10
Epoch 7/10
Epoch 8/10
Epoch 9/10
Epoch 10/10
Loss: 0.6943531632423401
Accuracy: 0.5
RESULT:
Thus, the program to implement the Sentiment analysis using LSTM was
successfully executed, and the output was verified.
Ex.No: 06 DATE:
PARTS OF SPEECH TAGGING USING SEQUENCE TO SEQUENCE ARCHITECTURE
AIM:
To implement Parts of Speech tagging using Sequence to Sequence (Seq2Seq)
architecture.
PROGRAM DESCRIPTION:
This program demonstrates Parts of Speech (POS) tagging using a Sequence-to-Sequence
(Seq2Seq) architecture built with TensorFlow and Keras. The model is designed to map
input sentences (sequences of words) to corresponding sequences of POS tags. It uses an
encoder-decoder structure where the encoder LSTM processes the input sentence and
compresses it into context-rich states, which are then passed to the decoder LSTM to
generate a sequence of POS tags. During training, the model learns from sentence-tag
pairs using teacher forcing. For inference, the decoder generates POS tags sequentially
starting with a <sos> token and stopping at an <eos> token. The model is trained on a
small synthetic dataset of English sentences and their POS tags.
TERMINOLOGIES USED:
• POS Tagging: The process of assigning grammatical tags (e.g., noun, verb) to
words in a sentence.
• Seq2Seq Model: A neural network architecture that transforms one sequence
into another, used for tasks like translation and tagging.
• Encoder: The part of the model that reads and summarizes the input
sentence into a context vector.
• Decoder: The model component that generates the output sequence (POS
tags) using the context vector.
• Embedding Layer: Converts words or tags into dense vector representations
to capture semantic meaning.
ALGORITHM:
1. Define the sentences and their corresponding POS tag sequences.
2. Build word and tag vocabularies and add <sos> and <eos> tokens.
3. Convert sentences and tag sequences into padded integer sequences.
4. Define the encoder embedding and LSTM layers to obtain the context states.
5. Define the decoder embedding, LSTM, and softmax output layers.
6. Compile and train the model with teacher forcing.
7. Build the inference models and generate tags starting from <sos> until <eos>.
8. Test the model on sample sentences.
PROGRAM:
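A minimal Keras sketch of the encoder-decoder tagger described above. The toy sentence/tag pairs, vocabulary handling, and all layer sizes are assumptions.

import numpy as np
from tensorflow.keras import layers, Model

# Tiny synthetic dataset of sentence / tag-sequence pairs (assumed)
sentences = [["the", "cat", "sleeps"], ["a", "dog", "barks"],
             ["the", "dog", "runs"]]
tag_seqs = [["DET", "NOUN", "VERB"]] * 3

words = sorted({w for s in sentences for w in s})
tags = sorted({t for ts in tag_seqs for t in ts}) + ["<sos>", "<eos>"]
word2i = {w: i + 1 for i, w in enumerate(words)}   # 0 reserved for padding
tag2i = {t: i + 1 for i, t in enumerate(tags)}
i2tag = {i: t for t, i in tag2i.items()}
max_len = 4                                        # assumed maximum length

def encode(seq, table, length):
    ids = [table[x] for x in seq]
    return ids + [0] * (length - len(ids))

enc_in = np.array([encode(s, word2i, max_len) for s in sentences])
# Decoder input starts with <sos>; the target is shifted and ends with <eos>
dec_in = np.array([encode(["<sos>"] + t, tag2i, max_len + 1) for t in tag_seqs])
dec_out = np.array([encode(t + ["<eos>"], tag2i, max_len + 1) for t in tag_seqs])

n_words, n_tags, units = len(word2i) + 1, len(tag2i) + 1, 64

# Encoder: embed the sentence, keep only the final LSTM states
enc_inputs = layers.Input(shape=(max_len,))
enc_emb = layers.Embedding(n_words, 32)(enc_inputs)
_, state_h, state_c = layers.LSTM(units, return_state=True)(enc_emb)

# Decoder: generate the tag sequence from the encoder states (teacher forcing)
dec_inputs = layers.Input(shape=(max_len + 1,))
dec_emb_layer = layers.Embedding(n_tags, 32)
dec_lstm = layers.LSTM(units, return_sequences=True, return_state=True)
dec_dense = layers.Dense(n_tags, activation="softmax")
dec_seq, _, _ = dec_lstm(dec_emb_layer(dec_inputs), initial_state=[state_h, state_c])
model = Model([enc_inputs, dec_inputs], dec_dense(dec_seq))
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit([enc_in, dec_in], dec_out, epochs=50, verbose=2)

# Inference: the encoder yields states; the decoder emits one tag per step
encoder_model = Model(enc_inputs, [state_h, state_c])
in_h, in_c = layers.Input(shape=(units,)), layers.Input(shape=(units,))
step_in = layers.Input(shape=(1,))
step_seq, out_h, out_c = dec_lstm(dec_emb_layer(step_in), initial_state=[in_h, in_c])
decoder_model = Model([step_in, in_h, in_c], [dec_dense(step_seq), out_h, out_c])

def tag_sentence(sentence):
    h, c = encoder_model.predict(np.array([encode(sentence, word2i, max_len)]), verbose=0)
    token, result = np.array([[tag2i["<sos>"]]]), []
    for _ in range(max_len):
        probs, h, c = decoder_model.predict([token, h, c], verbose=0)
        idx = int(probs[0, -1].argmax())
        if i2tag.get(idx) == "<eos>":          # stop at the <eos> token
            break
        result.append(i2tag.get(idx, "?"))
        token = np.array([[idx]])
    return result

print(tag_sentence(["the", "dog", "sleeps"]))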
OUTPUT:
Epoch 1/50
Epoch 2/50
...
Epoch 50/50
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild
TensorFlow with the appropriate compiler flags.
RESULT:
Thus, the program to implement the Parts of speech tagging using Sequence
to Sequence architecture was successfully executed and the output was verified.
Ex.No: 07 DATE:
MACHINE TRANSLATION USING ENCODER-DECODER MODEL
AIM:
To write a Python program to implement Machine Translation using the Encoder-Decoder model.
PROGRAM DESCRIPTION:
Machine translation using an Encoder-Decoder model is a key technique in natural
language processing (NLP). It translates sentences from one language to another using a
two-part neural network:
• The encoder processes an input sentence in the source language and creates a
context vector representing its meaning.
• The decoder uses this context to generate the corresponding sentence in the target
language.
• The model is trained on pairs of source and target language sentences. During
inference, the encoder summarizes the source sentence and the decoder translates
it word-by-word using previously predicted words and encoder states.
TERMINOLOGIES USED:
• Encoder: Part of the model that processes the input sentence into a fixed
representation.
• Decoder: Translates the encoder’s output into the target language sentence.
• LSTM (Long Short-Term Memory): A type of RNN used for handling long-term
dependencies in sequence data.
• Tokenization: The process of converting words into numerical indices.
• Embedding: Converts word indices into dense vector representations for training
neural networks.
ALGORITHM:
1. Define the input and output sequences.
2. Create a set of all unique words in the input and target sequences.
3. Add <sos> and <eos> tokens to target_words.
4. Create dictionaries to map words to integers.
5. Define the maximum sequence lengths.
6. Prepare the encoder input data.
7. Prepare the decoder input and target data.
8. Define the encoder input and LSTM layers.
9. Define the decoder input and LSTM layers.
10. Define, compile, and train the model.
11. Define the encoder model to get the encoder states.
12. Define the decoder model with the encoder states as its initial state.
13. Define a function to perform inference and generate translations.
14. Test the model.
PROGRAM:
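A minimal word-level Keras sketch of the translator described above; it follows the numbered algorithm and mirrors the structure of the POS-tagging sketch in the previous exercise. The toy English-French pairs and layer sizes are assumptions.

import numpy as np
from tensorflow.keras import layers, Model

# Toy parallel corpus (English -> French); a real corpus would be far larger
pairs = [("i am cold", "j ai froid"),
         ("she is happy", "elle est heureuse"),
         ("we are tired", "nous sommes fatigues")]

src_vocab = sorted({w for s, _ in pairs for w in s.split()})
tgt_vocab = sorted({w for _, t in pairs for w in t.split()}) + ["<sos>", "<eos>"]
src2i = {w: i + 1 for i, w in enumerate(src_vocab)}   # 0 reserved for padding
tgt2i = {w: i + 1 for i, w in enumerate(tgt_vocab)}
i2tgt = {i: w for w, i in tgt2i.items()}
max_src = max(len(s.split()) for s, _ in pairs)
max_tgt = max(len(t.split()) for _, t in pairs) + 1   # room for <sos>/<eos>

def ids(words, table, length):
    v = [table[w] for w in words]
    return v + [0] * (length - len(v))

enc_in = np.array([ids(s.split(), src2i, max_src) for s, _ in pairs])
dec_in = np.array([ids(["<sos>"] + t.split(), tgt2i, max_tgt) for _, t in pairs])
dec_out = np.array([ids(t.split() + ["<eos>"], tgt2i, max_tgt) for _, t in pairs])

n_src, n_tgt, units = len(src2i) + 1, len(tgt2i) + 1, 64

# Encoder: summarize the source sentence into its final LSTM states
enc_inputs = layers.Input(shape=(max_src,))
_, h, c = layers.LSTM(units, return_state=True)(layers.Embedding(n_src, 32)(enc_inputs))

# Decoder: generate the target sentence from the encoder states
dec_inputs = layers.Input(shape=(max_tgt,))
dec_emb = layers.Embedding(n_tgt, 32)
dec_lstm = layers.LSTM(units, return_sequences=True, return_state=True)
dec_dense = layers.Dense(n_tgt, activation="softmax")
seq, _, _ = dec_lstm(dec_emb(dec_inputs), initial_state=[h, c])
model = Model([enc_inputs, dec_inputs], dec_dense(seq))
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit([enc_in, dec_in], dec_out, epochs=50, verbose=2)

# Inference models: encode the source, then translate word by word
encoder_model = Model(enc_inputs, [h, c])
ih, ic = layers.Input(shape=(units,)), layers.Input(shape=(units,))
step = layers.Input(shape=(1,))
sseq, oh, oc = dec_lstm(dec_emb(step), initial_state=[ih, ic])
decoder_model = Model([step, ih, ic], [dec_dense(sseq), oh, oc])

def translate(sentence):
    h1, c1 = encoder_model.predict(np.array([ids(sentence.split(), src2i, max_src)]), verbose=0)
    token, out = np.array([[tgt2i["<sos>"]]]), []
    for _ in range(max_tgt):
        probs, h1, c1 = decoder_model.predict([token, h1, c1], verbose=0)
        idx = int(probs[0, -1].argmax())
        if i2tgt.get(idx) == "<eos>":          # stop at the <eos> token
            break
        out.append(i2tgt.get(idx, "?"))
        token = np.array([[idx]])
    return " ".join(out)

print("i am cold ->", translate("i am cold"))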
OUTPUT:
Epoch 1/50
Epoch 2/50
...
Epoch 50/50
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild
TensorFlow with the appropriate compiler flags.
RESULT:
Thus, the program to implement Machine Translation using the Encoder-Decoder
model was successfully executed and the output was verified.
Ex.No: 08 DATE:
IMAGE AUGMENTATION USING GANS
AIM:
To write a Python program to implement Image augmentation using GANs.
PROGRAM DESCRIPTION:
Image augmentation using Generative Adversarial Networks (GANs) is a technique that
leverages the power of GANs to generate new, realistic images that are variations of
existing images. This approach is commonly used in computer vision tasks, such as
image classification and object detection, to increase the diversity and size of training
datasets.
1. Generative Adversarial Networks (GANs): GANs consist of two neural networks: a
generator and a discriminator. The generator network takes random noise as
input and generates synthetic images. The discriminator network tries to
distinguish between real and synthetic images. During training, the generator
aims to produce images that are indistinguishable from real ones, while the
discriminator tries to get better at telling them apart.
2. Image Augmentation with GANs: You train a GAN on a dataset of existing
images, where the generator learns to generate images similar to those in the
dataset, and the discriminator learns to distinguish real images from generated
ones.
3. Generating Augmented Images: Once the GAN is trained, you can use the
generator to create new, synthetic images. To augment an image from your
dataset, you feed it to the generator, and the generator produces a new image.
These generated images are typically variations of the original images,
introducing changes in aspects like style, lighting, perspective, or other factors
that the GAN has learned from the training data.
TERMINOLOGIES USED:
• Generator: A neural network that creates synthetic images from random noise.
• Discriminator: A neural network that classifies images as real or fake.
• Latent Space: The input noise vector space from which the generator creates data.
• Conv2DTranspose: A layer that upsamples images, used in generators.
• Binary Crossentropy: The loss function used to train both generator and
discriminator.
ALGORITHM:
1. Load and normalize the training images.
2. Build the generator network to map latent noise vectors to synthetic images.
3. Build the discriminator network to classify images as real or fake.
4. Alternately train the discriminator on real and generated batches, and the
generator to fool the discriminator, using binary crossentropy.
5. Use the trained generator to produce new, augmented images.
PROGRAM:
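A compact sketch of the GAN described above, trained on MNIST with a Conv2DTranspose generator and binary-crossentropy losses, using an explicit GradientTape training loop. All layer sizes, the latent dimension, and the step count are assumptions.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

latent_dim = 100   # assumed latent-space size
batch = 64

# Generator: upsample a latent noise vector into a 28x28 synthetic image
generator = models.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(7 * 7 * 128),
    layers.LeakyReLU(0.2),
    layers.Reshape((7, 7, 128)),
    layers.Conv2DTranspose(64, 4, strides=2, padding="same"),
    layers.LeakyReLU(0.2),
    layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="sigmoid"),
])

# Discriminator: classify 28x28 images as real (1) or fake (0)
discriminator = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(64, 4, strides=2, padding="same"),
    layers.LeakyReLU(0.2),
    layers.Conv2D(128, 4, strides=2, padding="same"),
    layers.LeakyReLU(0.2),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])

(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., np.newaxis].astype("float32") / 255.0

bce = tf.keras.losses.BinaryCrossentropy()
d_opt = tf.keras.optimizers.Adam(1e-4)
g_opt = tf.keras.optimizers.Adam(1e-4)

for step in range(1000):                      # assumed number of training steps
    real = x_train[np.random.randint(0, len(x_train), batch)]
    noise = tf.random.normal((batch, latent_dim))
    # Train the discriminator: real images -> 1, generated images -> 0
    with tf.GradientTape() as tape:
        fake = generator(noise, training=True)
        d_loss = (bce(tf.ones((batch, 1)), discriminator(real, training=True)) +
                  bce(tf.zeros((batch, 1)), discriminator(fake, training=True)))
    grads = tape.gradient(d_loss, discriminator.trainable_variables)
    d_opt.apply_gradients(zip(grads, discriminator.trainable_variables))
    # Train the generator so the discriminator labels its output as real
    noise = tf.random.normal((batch, latent_dim))
    with tf.GradientTape() as tape:
        g_loss = bce(tf.ones((batch, 1)),
                     discriminator(generator(noise, training=True), training=True))
    grads = tape.gradient(g_loss, generator.trainable_variables)
    g_opt.apply_gradients(zip(grads, generator.trainable_variables))

# Use the trained generator to produce augmented images
augmented = generator(tf.random.normal((16, latent_dim)), training=False).numpy()
print("Generated augmented batch:", augmented.shape)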
OUTPUT:
RESULT:
Thus, the Image augmentation using GANs was successfully executed and
the output was verified.
CONTENT BEYOND SYLLABUS
Ex.No: 10 DATE:
BUILD REGRESSION MODEL
AIM:
To write a Python program to build and evaluate a regression model.
PROGRAM DESCRIPTION:
Regression analysis is a supervised learning technique used to model the relationship
between a dependent variable (target) and one or more independent variables (features). It
is widely used for predicting continuous values, such as house prices, stock prices, and
sales forecasting.
TERMINOLOGIES USED:
Dependent Variable (Target Variable): The variable we want to predict (e.g., house price,
sales).
Independent Variables (Features): The variables used to make predictions (e.g., square
footage, number of rooms).
R² Score: Indicates how well the independent variables explain the variability of the
dependent variable.
ALGORITHM:
1. Gather and prepare the data by handling missing values and errors.
2. Exploratory data analysis: Understand the relationships within the data through
visualizations and summary statistics.
3. Model selection and training: Choose a suitable regression algorithm and train it
on a portion of the data.
4. Model evaluation: Test the trained model on new data to assess its accuracy.
5. Deployment: Use the final, refined model to make predictions.
PROGRAM:
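A minimal scikit-learn sketch of the workflow in the algorithm above, fitted on synthetic housing-style data. The feature names and generated values are assumptions.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# 1-2. Prepare and inspect synthetic data: price driven by area and rooms
rng = np.random.default_rng(0)
area = rng.uniform(500, 3500, 200)            # square footage
rooms = rng.integers(1, 6, 200)               # number of rooms
price = 50 * area + 10000 * rooms + rng.normal(0, 5000, 200)
X = np.column_stack([area, rooms])

# 3. Split the data and train a linear regression model
X_train, X_test, y_train, y_test = train_test_split(
    X, price, test_size=0.2, random_state=42)
model = LinearRegression().fit(X_train, y_train)

# 4. Evaluate on unseen data with MSE and the R² score
y_pred = model.predict(X_test)
print("MSE:", mean_squared_error(y_test, y_pred))
print("R²:", r2_score(y_test, y_pred))

# 5. Deploy: predict the price of a new 2000 sq ft, 3-room house
print("Predicted price:", model.predict([[2000, 3]])[0])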
OUTPUT:
RESULT:
Thus, the program to build a regression model was successfully executed and
the output was verified.