PMDS603P Deep Learning Lab Experiment 4
August 2025
1 Work to do today
Note: Make a single PDF file of the work you are doing in a Jupyter notebook. Upload with
the proper format. Please mention your name and roll no properly with the Experiment
number on the first page of your submission.
Question 1. The cifar10 dataset is another inbuilt dataset; it contains 10 classes of images,
namely: 0-airplane, 1-automobile, 2-bird, 3-cat, 4-deer, 5-dog, 6-frog, 7-horse, 8-ship, 9-truck.
Load the inbuilt dataset cifar10 as you did in the last lab by replacing mnist.load_data() with
cifar10.load_data(). First, try to import it from keras.datasets as you did for mnist.
Now, first of all, identify the size of the images you have. You will see 32*32*3 images,
that is, 32*32-pixel images with 3 channels that give the RGB values, since we have color
images.
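The loading and inspection steps above can be sketched as follows (assuming TensorFlow's bundled Keras; the dataset is downloaded on first use):

```python
# Load CIFAR-10 from Keras' built-in datasets
from tensorflow.keras.datasets import cifar10

(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Each image is 32x32 pixels with 3 (RGB) channels
print(x_train.shape)  # (50000, 32, 32, 3)
print(x_test.shape)   # (10000, 32, 32, 3)
```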
Try to print the shape of each image and see; you will find it is stored as a 32*32*3 array.
Now, try to visualize certain images using appropriate functions.
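One possible way to visualize a few images, sketched here with matplotlib (the 3x3 grid layout is an arbitrary choice):

```python
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import cifar10

(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Class names in label order, as given in the question
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

# Show the first 9 training images with their class labels
fig, axes = plt.subplots(3, 3, figsize=(6, 6))
for i, ax in enumerate(axes.flat):
    ax.imshow(x_train[i])
    ax.set_title(class_names[int(y_train[i][0])])
    ax.axis('off')
plt.tight_layout()
plt.show()
```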
Check the size of x_train and x_test and reshape them into one-dimensional arrays, as done
in the case of the mnist dataset.
Do necessary pre-processing and split the data into training, validation, and testing sets.
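A minimal sketch of the pre-processing and splitting (the 45000/5000 train/validation split is an arbitrary choice):

```python
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Flatten each 32*32*3 image into a 3072-length vector and scale to [0, 1]
x_train = x_train.reshape(-1, 32 * 32 * 3).astype('float32') / 255.0
x_test = x_test.reshape(-1, 32 * 32 * 3).astype('float32') / 255.0

# One-hot encode the 10 class labels
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# Carve a validation set out of the training data
x_val, y_val = x_train[45000:], y_train[45000:]
x_train, y_train = x_train[:45000], y_train[:45000]

print(x_train.shape, x_val.shape, x_test.shape)
```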
Create a new model using the Sequential class with appropriate hidden layers and output-layer
neurons. Choose appropriate activation functions (sigmoid, relu, etc.), as well as an
appropriate one for the output layer.
Choose the error function appropriately. Include the early stopping technique in your model
and run the model for 500 epochs. Try to come up with a better model with decent accuracy.
The choices made in the model here may not be the most appropriate ones, but you
can see what accuracy you are able to reach without overfitting happening there.
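The model-building and training steps above might look like the following sketch (the layer widths, batch size, and patience value are arbitrary choices, not prescribed by the lab):

```python
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.utils import to_categorical

# Load and pre-process as described above
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.reshape(-1, 3072).astype('float32') / 255.0
x_test = x_test.reshape(-1, 3072).astype('float32') / 255.0
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

model = Sequential([
    Dense(512, activation='relu', input_shape=(3072,)),
    Dense(256, activation='relu'),
    Dense(10, activation='softmax'),  # one neuron per class
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Stop training once validation loss stops improving
early_stop = EarlyStopping(monitor='val_loss', patience=10,
                           restore_best_weights=True)

history = model.fit(x_train, y_train,
                    validation_split=0.1,  # hold out 10% for validation
                    epochs=500,
                    batch_size=128,
                    callbacks=[early_stop])
```

With early stopping in place, the run will usually terminate well before 500 epochs once the validation loss plateaus.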
Question 2. Next, run: from keras.regularizers import l2
Now we will try to include L2 regularization in our modeling part. You can add this
weight decay to any of your hidden layers.
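For example, a hidden layer with L2 weight decay might look like the following sketch (the layer width 256 is an arbitrary choice):

```python
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l2

# Add L2 weight decay (regularization parameter 0.001) to a hidden layer
hidden = Dense(256, activation='relu', kernel_regularizer=l2(0.001))
```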
The parameter 0.001 is an arbitrary choice for the regularization parameter α that we
saw in class.
Now try to fit your same model as above with this change and check for any improvements.
Question 3: Now, let's see how we can proceed to perform some hyperparameter
tuning and find appropriate parameter values. The following part is done for a very
simple model with one hidden layer and an output layer. The number of neurons and the
dropout parameter are tuned to find appropriate values.
What you see above is that we have loaded and pre-processed the data and created a
model that would be re-used for the tuning part.
Next, we will set up the Keras Tuner, which can be used for the tuning via tuner.search().
Our objective is to maximize val_accuracy. We are using RandomSearch with a maximum of
10 combinations of our hyperparameters, with executions per trial set to 1. A directory
named mnist_tuning is created on the disk, which stores logs and trial information.
Now, using the tuner.search method, we can try to get the best model. Here only
10 epochs are given; you can increase this as well.
Now, if you want to re-tune your model by adding a few more layers, etc., then you have to
first delete the information already stored in the directory mnist_tuning. For that you can
use this:
import shutil
shutil.rmtree('mnist_tuning/dense_dropout_tune', ignore_errors=True)
Then again you can re-run.
So your work is to include L2 regularization and tune the above model to get better
accuracy with mnist.
Then later go back to the cifar10 dataset problem and come up with your best model.