Assessment
Congratulations on going through today's course! Hopefully, you've learned some valuable skills along the way and had fun doing it. Now it's time to put
those skills to the test. In this assessment, you will train a new model that is able to recognize fresh and rotten fruit. You will need to get the model to a
validation accuracy of 92% in order to pass the assessment, though we challenge you to do even better if you can. You will have to use the skills that
you learned in the previous exercises. Specifically, we suggest using some combination of transfer learning, data augmentation, and fine tuning. Once
you have trained the model to be at least 92% accurate on the validation dataset, save your model, and then assess its accuracy. Let's get started!
The Dataset
In this exercise, you will train a model to recognize fresh and rotten fruits. The dataset comes from Kaggle (https://www.kaggle.com/sriramr/fruits-fresh-and-rotten-for-classification), a great place to go if you're interested in starting a project after this class. The dataset is located in the data/fruits
folder. There are 6 categories of fruit: fresh apples, fresh oranges, fresh bananas, rotten apples, rotten oranges, and rotten bananas. This means
that your model will require an output layer of 6 neurons to categorize the images successfully. You'll also need to compile the model with
categorical_crossentropy, as there are more than two categories.
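Before building the model, it can be helpful to confirm the folder layout on disk. The optional sketch below prints the number of images found per class; it assumes one sub-folder per class inside data/fruits/train and data/fruits/valid, which is the structure the rest of this notebook relies on.

import os

# Optional sanity check of the dataset layout described above.
# Assumes data/fruits/<split>/<class_name>/ holds the images for each class.
for split in ("train", "valid"):
    split_dir = os.path.join("data/fruits", split)
    for class_name in sorted(os.listdir(split_dir)):
        class_dir = os.path.join(split_dir, class_name)
        print(split, class_name, len(os.listdir(class_dir)), "images")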
Load ImageNet Base Model
We encourage you to start with a model pretrained on ImageNet. Load the model with the correct weights, set an input shape, and choose to remove
the last layers of the model. Remember that images have three dimensions: a height, a width, and a number of channels. Because these pictures are
in color, there will be three channels for red, green, and blue. We've filled in the input shape for you. This cannot be changed or the assessment will fail.
If you need a reference for setting up the pretrained model, please take a look at notebook 05b (05b_presidential_doggy_door.ipynb) where we
implemented transfer learning.
In [3]: from tensorflow import keras

base_model = keras.applications.VGG16(
    weights='imagenet',
    input_shape=(224, 224, 3),
    include_top=False)
Freeze Base Model
Next, we suggest freezing the base model, as done in notebook 05b (05b_presidential_doggy_door.ipynb). This is done so that all the learning from the
ImageNet dataset does not get destroyed in the initial training.
In [4]: # Freeze base model
base_model.trainable = False
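If you'd like to double-check that the freeze took effect, one optional way is to inspect the model's weight lists, as in the quick sketch below.

# After setting base_model.trainable = False, none of VGG16's weights should
# be listed as trainable, so the first count is expected to be 0.
print("Trainable weights:", len(base_model.trainable_weights))
print("Non-trainable weights:", len(base_model.non_trainable_weights))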
Add Layers to Model
Now it's time to add layers to the pretrained model. Notebook 05b (05b_presidential_doggy_door.ipynb) can be used as a guide. Pay close attention to
the last dense layer and make sure it has the correct number of neurons to classify the different types of fruit.
In [5]: # Create inputs with correct shape
inputs = keras.Input(shape=(224, 224, 3))
x = base_model(inputs, training=False)
# Add pooling layer or flatten layer
x = keras.layers.GlobalAveragePooling2D()(x)
# Add final dense layer
outputs = keras.layers.Dense(6, activation='softmax')(x)
# Combine inputs and outputs to create model
model = keras.Model(inputs, outputs)
In [6]: model.summary()
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
vgg16 (Model) (None, 7, 7, 512) 14714688
_________________________________________________________________
global_average_pooling2d (Gl (None, 512) 0
_________________________________________________________________
dense (Dense) (None, 6) 3078
=================================================================
Total params: 14,717,766
Trainable params: 3,078
Non-trainable params: 14,714,688
_________________________________________________________________
Compile Model
Now it's time to compile the model with loss and metrics options. Remember that we're training on a number of different categories, rather than a binary
classification problem.
In [7]: model.compile(loss='categorical_crossentropy', metrics=['accuracy'])
Augment the Data
If you'd like, try to augment the data to improve the dataset. Feel free to look at notebook 04a (04a_asl_augmentation.ipynb) and notebook 05b
(05b_presidential_doggy_door.ipynb) for augmentation examples. There is also documentation for the Keras ImageDataGenerator class
(https://keras.io/api/preprocessing/image/#imagedatagenerator-class). This step is optional, but it may be helpful to get to 92% accuracy.
In [22]: from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen_train = ImageDataGenerator(
    rotation_range=10,  # randomly rotate images in the range (degrees, 0 to 180)
    zoom_range=0.1,  # randomly zoom image
    width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width)
    height_shift_range=0.1,  # randomly shift images vertically (fraction of total height)
    horizontal_flip=True,  # randomly flip images horizontally
    vertical_flip=False,
)
datagen_valid = ImageDataGenerator(samplewise_center=True)
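To sanity-check what these settings do, you can ask the training generator to transform a single image and plot a few augmented variants. This step is optional, and the image path below is only a placeholder; substitute any file from your training set.

import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import load_img, img_to_array

# Placeholder path; replace with a real image from data/fruits/train/.
sample = img_to_array(load_img('data/fruits/train/freshapples/example.png',
                               target_size=(224, 224)))

plt.figure(figsize=(8, 2))
for i in range(4):
    # random_transform applies one random augmentation using the settings above
    augmented = datagen_train.random_transform(sample)
    plt.subplot(1, 4, i + 1)
    plt.imshow(augmented.astype(np.uint8))
    plt.axis('off')
plt.show()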
Load Dataset
Now it's time to load the train and validation datasets. Pick the right folders, as well as the right target_size of the images (it needs to match the
height and width input of the model you've created). For a reference, check out notebook 05b (05b_presidential_doggy_door.ipynb).
In [23]: # load and iterate training dataset
train_it = datagen_train.flow_from_directory(
    'data/fruits/train/',
    target_size=(224, 224),
    color_mode="rgb",
    class_mode="categorical",
)
# load and iterate validation dataset
valid_it = datagen_valid.flow_from_directory(
    'data/fruits/valid/',
    target_size=(224, 224),
    color_mode="rgb",
    class_mode="categorical",
)
Found 1182 images belonging to 6 classes.
Found 329 images belonging to 6 classes.
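It's also worth confirming that both iterators found all 6 classes and agree on the label ordering, since the one-hot labels are built from this mapping. A quick optional check:

# Both iterators should map the same six folder names to the same indices.
print(train_it.class_indices)
print(valid_it.class_indices)
assert train_it.class_indices == valid_it.class_indices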
Train the Model
Time to train the model! Pass the train and valid iterators into the fit function, and set the desired number of epochs.
In [24]: model.fit(train_it,
                   validation_data=valid_it,
                   steps_per_epoch=train_it.samples/train_it.batch_size,
                   validation_steps=valid_it.samples/valid_it.batch_size,
                   epochs=20)
Epoch 1/20
37/36 [==============================] - 18s 485ms/step - loss: 0.6301 - binary_accuracy: 0.9994 - val_loss:
0.6369 - val_binary_accuracy: 0.9858
Epoch 2/20
37/36 [==============================] - 18s 478ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6350 - val_binary_accuracy: 0.9909
Epoch 3/20
37/36 [==============================] - 18s 490ms/step - loss: 0.6301 - binary_accuracy: 0.9994 - val_loss:
0.6368 - val_binary_accuracy: 0.9868
Epoch 4/20
37/36 [==============================] - 18s 487ms/step - loss: 0.6299 - binary_accuracy: 1.0000 - val_loss:
0.6355 - val_binary_accuracy: 0.9889
Epoch 5/20
37/36 [==============================] - 18s 479ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6374 - val_binary_accuracy: 0.9858
Epoch 6/20
37/36 [==============================] - 18s 475ms/step - loss: 0.6301 - binary_accuracy: 0.9994 - val_loss:
0.6358 - val_binary_accuracy: 0.9878
Epoch 7/20
37/36 [==============================] - 18s 492ms/step - loss: 0.6304 - binary_accuracy: 0.9992 - val_loss:
0.6374 - val_binary_accuracy: 0.9838
Epoch 8/20
37/36 [==============================] - 18s 477ms/step - loss: 0.6303 - binary_accuracy: 0.9992 - val_loss:
0.6368 - val_binary_accuracy: 0.9858
Epoch 9/20
37/36 [==============================] - 18s 484ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6347 - val_binary_accuracy: 0.9894
Epoch 10/20
37/36 [==============================] - 18s 476ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6353 - val_binary_accuracy: 0.9889
Epoch 11/20
37/36 [==============================] - 18s 483ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6352 - val_binary_accuracy: 0.9889
Epoch 12/20
37/36 [==============================] - 18s 476ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6342 - val_binary_accuracy: 0.9904
Epoch 13/20
37/36 [==============================] - 18s 480ms/step - loss: 0.6300 - binary_accuracy: 0.9997 - val_loss:
0.6342 - val_binary_accuracy: 0.9919
Epoch 14/20
37/36 [==============================] - 18s 481ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6358 - val_binary_accuracy: 0.9878
Epoch 15/20
37/36 [==============================] - 18s 480ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6338 - val_binary_accuracy: 0.9919
Epoch 16/20
37/36 [==============================] - 17s 466ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6365 - val_binary_accuracy: 0.9858
Epoch 17/20
37/36 [==============================] - 17s 472ms/step - loss: 0.6301 - binary_accuracy: 0.9994 - val_loss:
0.6321 - val_binary_accuracy: 0.9959
Epoch 18/20
37/36 [==============================] - 18s 485ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6320 - val_binary_accuracy: 0.9949
Epoch 19/20
37/36 [==============================] - 18s 478ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6332 - val_binary_accuracy: 0.9939
Epoch 20/20
37/36 [==============================] - 18s 478ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6332 - val_binary_accuracy: 0.9949
Out[24]: <tensorflow.python.keras.callbacks.History at 0x7faed05c2978>
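If you'd rather not hand-pick the number of epochs, Keras callbacks can stop training once validation accuracy stops improving and keep the best weights seen so far. The sketch below is optional and was not used in the run above; the monitored metric name assumes the model was compiled with metrics=['accuracy'], and the checkpoint filename is just a placeholder.

from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Stop after 3 epochs without improvement and roll back to the best weights;
# also save the best model to disk along the way.
callbacks = [
    EarlyStopping(monitor='val_accuracy', patience=3, restore_best_weights=True),
    ModelCheckpoint('best_fruit_model.h5', monitor='val_accuracy', save_best_only=True),
]

model.fit(train_it,
          validation_data=valid_it,
          steps_per_epoch=train_it.samples/train_it.batch_size,
          validation_steps=valid_it.samples/valid_it.batch_size,
          epochs=20,
          callbacks=callbacks)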
Unfreeze Model for Fine Tuning
If you have reached 92% validation accuracy already, this next step is optional. If not, we suggest fine tuning the model with a very low learning rate.
In [14]: # Unfreeze the base model
base_model.trainable = True

# Compile the model with a low learning rate
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=0.00001),
              loss=keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=[keras.metrics.BinaryAccuracy()])
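Since the model's final layer is a 6-way softmax, you could also keep the categorical loss from the earlier compile when fine tuning. A minimal sketch of that alternative (not the configuration used to produce the logs below):

# Alternative compile that matches the 6-way softmax output.
# The softmax is already applied by the final layer, so from_logits stays False.
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=0.00001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])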
In [16]: model.fit(train_it,
                   validation_data=valid_it,
                   steps_per_epoch=train_it.samples/train_it.batch_size,
                   validation_steps=valid_it.samples/valid_it.batch_size,
                   epochs=20)
Epoch 1/20
37/36 [==============================] - 21s 555ms/step - loss: 0.6301 - binary_accuracy: 0.9994 - val_loss:
0.6321 - val_binary_accuracy: 0.9949
Epoch 2/20
37/36 [==============================] - 21s 565ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6323 - val_binary_accuracy: 0.9949
Epoch 3/20
37/36 [==============================] - 21s 562ms/step - loss: 0.6304 - binary_accuracy: 0.9989 - val_loss:
0.6321 - val_binary_accuracy: 0.9949
Epoch 4/20
37/36 [==============================] - 20s 553ms/step - loss: 0.6300 - binary_accuracy: 0.9997 - val_loss:
0.6311 - val_binary_accuracy: 0.9980
Epoch 5/20
37/36 [==============================] - 21s 570ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6314 - val_binary_accuracy: 0.9970
Epoch 6/20
37/36 [==============================] - 21s 557ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6321 - val_binary_accuracy: 0.9949
Epoch 7/20
37/36 [==============================] - 21s 556ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6317 - val_binary_accuracy: 0.9959
Epoch 8/20
37/36 [==============================] - 21s 577ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6304 - val_binary_accuracy: 0.9990
Epoch 9/20
37/36 [==============================] - 21s 575ms/step - loss: 0.6300 - binary_accuracy: 0.9997 - val_loss:
0.6328 - val_binary_accuracy: 0.9939
Epoch 10/20
37/36 [==============================] - 21s 575ms/step - loss: 0.6301 - binary_accuracy: 0.9997 - val_loss:
0.6320 - val_binary_accuracy: 0.9959
Epoch 11/20
37/36 [==============================] - 21s 561ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6326 - val_binary_accuracy: 0.9949
Epoch 12/20
37/36 [==============================] - 21s 562ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6319 - val_binary_accuracy: 0.9970
Epoch 13/20
37/36 [==============================] - 21s 564ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6319 - val_binary_accuracy: 0.9959
Epoch 14/20
37/36 [==============================] - 21s 577ms/step - loss: 0.6299 - binary_accuracy: 0.9997 - val_loss:
0.6336 - val_binary_accuracy: 0.9929
Epoch 15/20
37/36 [==============================] - 20s 551ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6324 - val_binary_accuracy: 0.9949
Epoch 16/20
37/36 [==============================] - 21s 572ms/step - loss: 0.6300 - binary_accuracy: 0.9997 - val_loss:
0.6334 - val_binary_accuracy: 0.9929
Epoch 17/20
37/36 [==============================] - 21s 567ms/step - loss: 0.6302 - binary_accuracy: 0.9994 - val_loss:
0.6334 - val_binary_accuracy: 0.9929
Epoch 18/20
37/36 [==============================] - 21s 563ms/step - loss: 0.6308 - binary_accuracy: 0.9980 - val_loss:
0.6324 - val_binary_accuracy: 0.9949
Epoch 19/20
37/36 [==============================] - 21s 562ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6325 - val_binary_accuracy: 0.9939
Epoch 20/20
37/36 [==============================] - 21s 567ms/step - loss: 0.6298 - binary_accuracy: 1.0000 - val_loss:
0.6327 - val_binary_accuracy: 0.9939
Out[16]: <tensorflow.python.keras.callbacks.History at 0x7faeecd68080>
Evaluate the Model
Hopefully, you now have a model that has a validation accuracy of 92% or higher. If not, you may want to go back and either run more epochs of
training, or adjust your data augmentation.
Once you are satisfied with the validation accuracy, evaluate the model by executing the following cell. The evaluate function will return a list, where the first value is your loss, and the second value is your accuracy. To pass, the model will need to have an accuracy value of 92% or higher.
In [17]: model.evaluate(valid_it, steps=valid_it.samples/valid_it.batch_size)
11/10 [================================] - 4s 348ms/step - loss: 0.6306 - binary_accuracy: 0.9990
Out[17]: [0.6306365132331848, 0.9989868998527527]
Run the Assessment
To assess your model, run the following two cells.
NOTE: run_assessment assumes your model is named model and your validation data iterator is called valid_it. If for any reason you have modified these variable names, please update the names of the arguments passed to run_assessment.
In [18]: from run_assessment import run_assessment
In [19]: run_assessment(model, valid_it)
Evaluating model 5 times to obtain average accuracy...
11/10 [================================] - 4s 348ms/step - loss: 0.6330 - binary_accuracy: 0.9939
11/10 [================================] - 4s 355ms/step - loss: 0.6322 - binary_accuracy: 0.9949
11/10 [================================] - 4s 354ms/step - loss: 0.6310 - binary_accuracy: 0.9970
11/10 [================================] - 4s 356ms/step - loss: 0.6329 - binary_accuracy: 0.9939
11/10 [================================] - 4s 381ms/step - loss: 0.6316 - binary_accuracy: 0.9959
Accuracy required to pass the assessment is 0.92 or greater.
Your average accuracy is 0.9951.
Congratulations! You passed the assessment!
See instructions below to generate a certificate.
Generate a Certificate
If you passed the assessment, please return to the course page and click the "ASSESS TASK" button, which will generate your certificate for the course.