Lecture 21
April 6, 2021
1 Lecture 21
1.1 ID5059
Tom Kelsey - April 2021
Chapter 10 – Introduction to Artificial Neural Networks with Keras
This notebook contains all the sample code and solutions to the exercises in chapter 10.
2 Setup
First, let’s import a few common modules, ensure Matplotlib plots figures inline, and prepare a
function to save the figures. We also check that Python 3.5 or later is installed (although Python
2.x may work, it is deprecated, so we strongly recommend you use Python 3 instead), as well as
Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
[1]: # Python ≥3.5 is required
     import sys
     assert sys.version_info >= (3, 5)

     try:
         # %tensorflow_version only exists in Colab.
         %tensorflow_version 2.x
     except Exception:
         pass

     # Common imports
     import numpy as np
     import os

     # To plot figures inline
     %matplotlib inline
     import matplotlib.pyplot as plt

     # Function to save the figures (simplified: saves to the current directory)
     def save_fig(fig_id, tight_layout=True):
         if tight_layout:
             plt.tight_layout()
         plt.savefig(os.path.join(".", fig_id + ".png"), format="png", dpi=300)

     # to make this notebook's output stable across runs
     np.random.seed(42)
3 Perceptrons
Note: we set max_iter and tol explicitly to avoid warnings about the fact that their default values
will change in future versions of Scikit-Learn.
In sklearn, Perceptron is a classification algorithm that shares its underlying implementation
with SGDClassifier. In fact, Perceptron() is equivalent to
SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant", penalty=None).
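To see this equivalence in practice, here is a small sketch (assuming a recent scikit-learn, where `penalty=None` is accepted) that trains both classifiers on the same two-feature iris task and compares their predictions:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron, SGDClassifier

iris = load_iris()
X = iris.data[:, (2, 3)]            # petal length, petal width
y = (iris.target == 0).astype(int)  # Iris-Setosa?

per_clf = Perceptron(max_iter=1000, tol=1e-3, random_state=42)
sgd_clf = SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant",
                        penalty=None, max_iter=1000, tol=1e-3, random_state=42)
per_clf.fit(X, y)
sgd_clf.fit(X, y)

# With identical settings and seed, both learn the same decision boundary
same = (per_clf.predict(X) == sgd_clf.predict(X)).all()
```

Since the Setosa class is linearly separable on these two features, both models fit the training set perfectly and agree on every prediction.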
[2]: from sklearn.datasets import load_iris
     from sklearn.linear_model import Perceptron

     iris = load_iris()
     X = iris.data[:, (2, 3)]  # petal length, petal width
     y = (iris.target == 0).astype(int)

     per_clf = Perceptron(max_iter=1000, tol=1e-3, random_state=42)
     per_clf.fit(X, y)
     y_pred = per_clf.predict([[2, 0.5]])
[3]: y_pred
[3]: array([1])
[4]: axes = [0, 5, 0, 2]

     x0, x1 = np.meshgrid(
         np.linspace(axes[0], axes[1], 500).reshape(-1, 1),
         np.linspace(axes[2], axes[3], 200).reshape(-1, 1),
     )
     X_new = np.c_[x0.ravel(), x1.ravel()]
     y_predict = per_clf.predict(X_new)
     zz = y_predict.reshape(x0.shape)

     plt.figure(figsize=(10, 4))
     plt.plot(X[y==0, 0], X[y==0, 1], "bs", label="Not Iris-Setosa")
     plt.plot(X[y==1, 0], X[y==1, 1], "yo", label="Iris-Setosa")
     save_fig("perceptron_iris_plot")
     plt.show()
4 Activation functions
[5]: def sigmoid(z):
         return 1 / (1 + np.exp(-z))

     def relu(z):
         return np.maximum(0, z)

     def derivative(f, z, eps=0.000001):
         return (f(z + eps) - f(z - eps)) / (2 * eps)

     z = np.linspace(-5, 5, 200)

     plt.figure(figsize=(11, 4))
     plt.subplot(121)
     plt.plot(z, np.sign(z), "r-", linewidth=1, label="Step")
     plt.plot(z, sigmoid(z), "g--", linewidth=2, label="Sigmoid")
     plt.plot(z, np.tanh(z), "b-", linewidth=2, label="Tanh")
     plt.plot(z, relu(z), "m-.", linewidth=2, label="ReLU")
     plt.grid(True)
     plt.legend(loc="center right", fontsize=14)
     plt.title("Activation functions", fontsize=14)
     plt.axis([-5, 5, -1.2, 1.2])
plt.subplot(122)
plt.plot(z, derivative(np.sign, z), "r-", linewidth=1, label="Step")
plt.plot(0, 0, "ro", markersize=5)
plt.plot(0, 0, "rx", markersize=10)
plt.plot(z, derivative(sigmoid, z), "g--", linewidth=2, label="Sigmoid")
plt.plot(z, derivative(np.tanh, z), "b-", linewidth=2, label="Tanh")
plt.plot(z, derivative(relu, z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
#plt.legend(loc="center right", fontsize=14)
plt.title("Derivatives", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("activation_functions_plot")
plt.show()
plt.figure(figsize=(10,4))
plt.subplot(121)
plt.contourf(x1, x2, z1)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: heaviside", fontsize=14)
plt.grid(True)
plt.subplot(122)
plt.contourf(x1, x2, z2)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: sigmoid", fontsize=14)
plt.grid(True)
[10]: tf.__version__
[10]: '2.5.0-rc0'
[11]: keras.__version__
[11]: '2.5.0'
Let’s start by loading the fashion MNIST dataset. Keras has a number of functions to load popular
datasets in keras.datasets. The dataset is already split for you between a training set and a test
set, but it can be useful to split the training set further to have a validation set:
[12]: %%time
fashion_mnist = keras.datasets.fashion_mnist
(X_train_full, y_train_full), (X_test, y_test) = fashion_mnist.load_data()
CPU times: user 229 ms, sys: 35.6 ms, total: 265 ms
Wall time: 264 ms
The training set contains 60,000 grayscale images, each 28x28 pixels:
[13]: X_train_full.shape
[13]: (60000, 28, 28)
[14]: X_train_full.dtype
[14]: dtype('uint8')
Let’s split the full training set into a validation set and a (smaller) training set. We also scale the
pixel intensities down to the 0–1 range and convert them to floats by dividing by 255.
[15]: X_valid, X_train = X_train_full[:5000] / 255., X_train_full[5000:] / 255.
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
X_test = X_test / 255.
X_test[1:17][1,1]
[15]: array([0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0.00784314, 0. ,
0.76862745, 1. , 1. , 1. , 0.94509804,
0.98431373, 1. , 0.96078431, 1. , 0.29803922,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. ])
You can plot an image using Matplotlib’s imshow() function, with a 'binary' color map:
[16]: plt.imshow(X_train[0], cmap="binary")
plt.axis('off')
plt.show()
The labels are the class IDs (represented as uint8), from 0 to 9:
[17]: y_train
[19]: 'Coat'
The validation set contains 5,000 images, and the test set contains 10,000 images:
[20]: X_valid.shape
[20]: (5000, 28, 28)
[21]: X_test.shape
[21]: (10000, 28, 28)
[22]: n_rows = 4
      n_cols = 10
      plt.figure(figsize=(n_cols * 1.2, n_rows * 1.2))
      for row in range(n_rows):
          for col in range(n_cols):
              index = n_cols * row + col
              plt.subplot(n_rows, n_cols, index + 1)
              plt.imshow(X_train[index], cmap="binary", interpolation="nearest")
              plt.axis('off')
              plt.title(class_names[y_train[index]], fontsize=12)
      plt.subplots_adjust(wspace=0.2, hspace=0.5)
      save_fig('fashion_mnist_plot', tight_layout=False)
      plt.show()
[24]: keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
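The cell that defines the Sequential model is not shown in this extract; a sketch consistent with the layer names and parameter counts in the summary below would be:

```python
from tensorflow import keras

# Flatten 28x28 images into 784 inputs, two hidden ReLU layers, softmax output.
# The compile settings are an assumption (the defining cell is missing here),
# but they match the loss/accuracy columns in the training log below.
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd",
              metrics=["accuracy"])
```

The parameter counts work out as 784×300+300 = 235,500 for the first hidden layer, 300×100+100 = 30,100 for the second, and 100×10+10 = 1,010 for the output layer, matching the summary.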
[26]: model.layers
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) (None, 784) 0
_________________________________________________________________
dense (Dense) (None, 300) 235500
_________________________________________________________________
dense_1 (Dense) (None, 100) 30100
_________________________________________________________________
dense_2 (Dense) (None, 10) 1010
=================================================================
Total params: 266,610
Trainable params: 266,610
Non-trainable params: 0
_________________________________________________________________
[29]: hidden1 = model.layers[1]
hidden1.name
[29]: 'dense'
[30]: True
[33]: weights.shape
[33]: (784, 300)
[34]: biases[1:15]
[34]: array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
dtype=float32)
[35]: biases.shape
[35]: (300,)
[37]: %%time
history = model.fit(X_train, y_train, epochs=30,
validation_data=(X_valid, y_valid))
Epoch 1/30
1719/1719 [==============================] - 2s 996us/step - loss: 1.0187 -
accuracy: 0.6807 - val_loss: 0.5207 - val_accuracy: 0.8234
Epoch 2/30
1719/1719 [==============================] - 2s 928us/step - loss: 0.5028 -
accuracy: 0.8260 - val_loss: 0.4353 - val_accuracy: 0.8532
Epoch 3/30
1719/1719 [==============================] - 2s 947us/step - loss: 0.4485 -
accuracy: 0.8422 - val_loss: 0.5320 - val_accuracy: 0.7982
Epoch 4/30
1719/1719 [==============================] - 2s 938us/step - loss: 0.4210 -
accuracy: 0.8530 - val_loss: 0.3914 - val_accuracy: 0.8642
Epoch 5/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4063 -
accuracy: 0.8581 - val_loss: 0.3747 - val_accuracy: 0.8692
Epoch 6/30
1719/1719 [==============================] - 2s 994us/step - loss: 0.3757 -
accuracy: 0.8676 - val_loss: 0.3712 - val_accuracy: 0.8728
Epoch 7/30
1719/1719 [==============================] - 2s 932us/step - loss: 0.3656 -
accuracy: 0.8708 - val_loss: 0.3628 - val_accuracy: 0.8732
Epoch 8/30
1719/1719 [==============================] - 2s 946us/step - loss: 0.3483 -
accuracy: 0.8756 - val_loss: 0.3860 - val_accuracy: 0.8616
Epoch 9/30
1719/1719 [==============================] - 2s 933us/step - loss: 0.3489 -
accuracy: 0.8759 - val_loss: 0.3594 - val_accuracy: 0.8698
Epoch 10/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3297 -
accuracy: 0.8830 - val_loss: 0.3435 - val_accuracy: 0.8774
Epoch 11/30
1719/1719 [==============================] - 2s 945us/step - loss: 0.3221 -
accuracy: 0.8839 - val_loss: 0.3441 - val_accuracy: 0.8774
Epoch 12/30
1719/1719 [==============================] - 2s 951us/step - loss: 0.3123 -
accuracy: 0.8876 - val_loss: 0.3304 - val_accuracy: 0.8820
Epoch 13/30
1719/1719 [==============================] - 2s 988us/step - loss: 0.3056 -
accuracy: 0.8894 - val_loss: 0.3284 - val_accuracy: 0.8868
Epoch 14/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2994 -
accuracy: 0.8923 - val_loss: 0.3408 - val_accuracy: 0.8772
Epoch 15/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2936 -
accuracy: 0.8942 - val_loss: 0.3213 - val_accuracy: 0.8842
Epoch 16/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2866 -
accuracy: 0.8980 - val_loss: 0.3098 - val_accuracy: 0.8898
Epoch 17/30
1719/1719 [==============================] - 2s 927us/step - loss: 0.2781 -
accuracy: 0.9003 - val_loss: 0.3561 - val_accuracy: 0.8728
Epoch 18/30
1719/1719 [==============================] - 2s 917us/step - loss: 0.2781 -
accuracy: 0.8994 - val_loss: 0.3138 - val_accuracy: 0.8898
Epoch 19/30
1719/1719 [==============================] - 2s 922us/step - loss: 0.2744 -
accuracy: 0.9026 - val_loss: 0.3118 - val_accuracy: 0.8916
Epoch 20/30
1719/1719 [==============================] - 2s 942us/step - loss: 0.2703 -
accuracy: 0.9034 - val_loss: 0.3276 - val_accuracy: 0.8820
Epoch 21/30
1719/1719 [==============================] - 2s 921us/step - loss: 0.2673 -
accuracy: 0.9049 - val_loss: 0.3063 - val_accuracy: 0.8926
Epoch 22/30
1719/1719 [==============================] - 2s 937us/step - loss: 0.2618 -
accuracy: 0.9051 - val_loss: 0.2971 - val_accuracy: 0.8966
Epoch 23/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2551 -
accuracy: 0.9071 - val_loss: 0.2979 - val_accuracy: 0.8940
Epoch 24/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2454 -
accuracy: 0.9121 - val_loss: 0.3068 - val_accuracy: 0.8896
Epoch 25/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2497 -
accuracy: 0.9106 - val_loss: 0.2976 - val_accuracy: 0.8938
Epoch 26/30
1719/1719 [==============================] - 2s 923us/step - loss: 0.2435 -
accuracy: 0.9130 - val_loss: 0.3058 - val_accuracy: 0.8888
Epoch 27/30
1719/1719 [==============================] - 2s 919us/step - loss: 0.2377 -
accuracy: 0.9154 - val_loss: 0.3020 - val_accuracy: 0.8954
Epoch 28/30
1719/1719 [==============================] - 2s 921us/step - loss: 0.2319 -
accuracy: 0.9173 - val_loss: 0.2993 - val_accuracy: 0.8936
Epoch 29/30
1719/1719 [==============================] - 2s 916us/step - loss: 0.2283 -
accuracy: 0.9173 - val_loss: 0.3043 - val_accuracy: 0.8892
Epoch 30/30
1719/1719 [==============================] - 2s 920us/step - loss: 0.2255 -
accuracy: 0.9200 - val_loss: 0.3030 - val_accuracy: 0.8922
CPU times: user 2min, sys: 29.2 s, total: 2min 30s
Wall time: 50.7 s
[38]: history.params
[39]: print(history.epoch)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21,
22, 23, 24, 25, 26, 27, 28, 29]
[40]: history.history.keys()

[41]: import pandas as pd

      pd.DataFrame(history.history).plot(figsize=(8, 5))
      plt.grid(True)
      plt.gca().set_ylim(0, 1)
      save_fig("keras_learning_curves_plot")
      plt.show()
y_proba.round(2)
[45]: np.array(class_names)[y_pred]
6 Regression MLP
Let’s load, split and scale the California housing dataset (the original one, not the modified one
used in chapter 2):
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

housing = fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(
    housing.data, housing.target, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(
    X_train_full, y_train_full, random_state=42)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_valid = scaler.transform(X_valid)
X_test = scaler.transform(X_test)
[49]: np.random.seed(42)
tf.random.set_seed(42)
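The regression model itself is missing from this extract. A sketch of the kind of MLP typically used here — one 30-unit hidden layer over the 8 housing features and a single linear output trained on MSE — is given below; these exact choices are an assumption, since the defining cell is not shown:

```python
from tensorflow import keras

# 8 input features (California housing), one hidden layer, linear output
model = keras.models.Sequential([
    keras.layers.Dense(30, activation="relu", input_shape=[8]),
    keras.layers.Dense(1),
])
model.compile(loss="mean_squared_error",
              optimizer=keras.optimizers.SGD(learning_rate=1e-3))
```

Note there is no activation on the output layer: for regression we want the raw linear value, and the loss is plain MSE rather than cross-entropy.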
[51]: %%time
history = model.fit(X_train, y_train, epochs=20,
validation_data=(X_valid, y_valid))
Epoch 1/20
363/363 [==============================] - 0s 758us/step - loss: 2.2656 -
val_loss: 0.8560
Epoch 2/20
363/363 [==============================] - 0s 605us/step - loss: 0.7413 -
val_loss: 0.6531
Epoch 3/20
363/363 [==============================] - 0s 607us/step - loss: 0.6604 -
val_loss: 0.6099
Epoch 4/20
363/363 [==============================] - 0s 598us/step - loss: 0.6245 -
val_loss: 0.5658
Epoch 5/20
363/363 [==============================] - 0s 602us/step - loss: 0.5770 -
val_loss: 0.5355
Epoch 6/20
363/363 [==============================] - 0s 609us/step - loss: 0.5609 -
val_loss: 0.5173
Epoch 7/20
363/363 [==============================] - 0s 622us/step - loss: 0.5500 -
val_loss: 0.5081
Epoch 8/20
363/363 [==============================] - 0s 590us/step - loss: 0.5200 -
val_loss: 0.4799
Epoch 9/20
363/363 [==============================] - 0s 599us/step - loss: 0.5051 -
val_loss: 0.4690
Epoch 10/20
363/363 [==============================] - 0s 599us/step - loss: 0.4910 -
val_loss: 0.4656
Epoch 11/20
363/363 [==============================] - 0s 583us/step - loss: 0.4794 -
val_loss: 0.4482
Epoch 12/20
363/363 [==============================] - 0s 599us/step - loss: 0.4656 -
val_loss: 0.4479
Epoch 13/20
363/363 [==============================] - 0s 599us/step - loss: 0.4693 -
val_loss: 0.4296
Epoch 14/20
363/363 [==============================] - 0s 596us/step - loss: 0.4537 -
val_loss: 0.4233
Epoch 15/20
363/363 [==============================] - 0s 597us/step - loss: 0.4586 -
val_loss: 0.4176
Epoch 16/20
363/363 [==============================] - 0s 597us/step - loss: 0.4612 -
val_loss: 0.4123
Epoch 17/20
363/363 [==============================] - 0s 596us/step - loss: 0.4449 -
val_loss: 0.4071
Epoch 18/20
363/363 [==============================] - 0s 600us/step - loss: 0.4407 -
val_loss: 0.4037
Epoch 19/20
363/363 [==============================] - 0s 604us/step - loss: 0.4184 -
val_loss: 0.4000
Epoch 20/20
363/363 [==============================] - 0s 596us/step - loss: 0.4128 -
val_loss: 0.3969
CPU times: user 6.03 s, sys: 835 ms, total: 6.87 s
Wall time: 4.6 s
[54]: array([[0.38856643],
[1.6792021 ],
[3.1022794 ]], dtype=float32)
7 Functional API
Not all neural network models are simply sequential. Some may have complex topologies. Some
may have multiple inputs and/or multiple outputs. For example, a Wide & Deep neural network
(see paper) connects all or part of the inputs directly to the output layer.
[55]: np.random.seed(42)
tf.random.set_seed(42)
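The cell that builds this model is not shown here; a functional-API sketch consistent with the layer names and parameter counts in the summary below would be:

```python
from tensorflow import keras

# Wide & Deep: the raw input is concatenated with the deep path's output,
# so the output layer sees both the original features and the learned ones
input_ = keras.layers.Input(shape=[8])
hidden1 = keras.layers.Dense(30, activation="relu")(input_)
hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)
concat = keras.layers.Concatenate()([input_, hidden2])
output = keras.layers.Dense(1)(concat)
model = keras.models.Model(inputs=[input_], outputs=[output])
```

The concatenate layer's shape (None, 38) is simply the 8 raw inputs plus the 30 units of the second hidden layer, and the final Dense layer therefore has 38+1 = 39 parameters.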
[57]: model.summary()
Model: "model"
__________________________________________________________________________
Layer (type)                 Output Shape    Param #   Connected to
==========================================================================
input_1 (InputLayer)         [(None, 8)]     0
__________________________________________________________________________
dense_5 (Dense)              (None, 30)      270       input_1[0][0]
__________________________________________________________________________
dense_6 (Dense)              (None, 30)      930       dense_5[0][0]
__________________________________________________________________________
concatenate (Concatenate)    (None, 38)      0         input_1[0][0]
                                                       dense_6[0][0]
__________________________________________________________________________
dense_7 (Dense)              (None, 1)       39        concatenate[0][0]
==========================================================================
Total params: 1,239
Trainable params: 1,239
Non-trainable params: 0
__________________________________________________________________________
[58]: model.compile(loss="mean_squared_error",
optimizer=keras.optimizers.SGD(learning_rate=1e-3))
history = model.fit(X_train, y_train, epochs=20,
validation_data=(X_valid, y_valid))
mse_test = model.evaluate(X_test, y_test)
y_pred = model.predict(X_new)
Epoch 1/20
363/363 [==============================] - 1s 1ms/step - loss: 1.9731 -
val_loss: 3.3940
Epoch 2/20
363/363 [==============================] - 0s 652us/step - loss: 0.7638 -
val_loss: 0.9360
Epoch 3/20
363/363 [==============================] - 0s 655us/step - loss: 0.6045 -
val_loss: 0.5649
Epoch 4/20
363/363 [==============================] - 0s 645us/step - loss: 0.5862 -
val_loss: 0.5712
Epoch 5/20
363/363 [==============================] - 0s 650us/step - loss: 0.5452 -
val_loss: 0.5045
Epoch 6/20
363/363 [==============================] - 0s 650us/step - loss: 0.5243 -
val_loss: 0.4831
Epoch 7/20
363/363 [==============================] - 0s 653us/step - loss: 0.5185 -
val_loss: 0.4639
Epoch 8/20
363/363 [==============================] - 0s 648us/step - loss: 0.4947 -
val_loss: 0.4638
Epoch 9/20
363/363 [==============================] - 0s 653us/step - loss: 0.4782 -
val_loss: 0.4421
Epoch 10/20
363/363 [==============================] - 0s 652us/step - loss: 0.4708 -
val_loss: 0.4313
Epoch 11/20
363/363 [==============================] - 0s 639us/step - loss: 0.4585 -
val_loss: 0.4345
Epoch 12/20
363/363 [==============================] - 0s 645us/step - loss: 0.4481 -
val_loss: 0.4168
Epoch 13/20
363/363 [==============================] - 0s 652us/step - loss: 0.4476 -
val_loss: 0.4230
Epoch 14/20
363/363 [==============================] - 0s 656us/step - loss: 0.4361 -
val_loss: 0.4047
Epoch 15/20
363/363 [==============================] - 0s 645us/step - loss: 0.4392 -
val_loss: 0.4078
Epoch 16/20
363/363 [==============================] - 0s 638us/step - loss: 0.4420 -
val_loss: 0.3938
Epoch 17/20
363/363 [==============================] - 0s 659us/step - loss: 0.4277 -
val_loss: 0.3952
Epoch 18/20
363/363 [==============================] - 0s 656us/step - loss: 0.4216 -
val_loss: 0.3860
Epoch 19/20
363/363 [==============================] - 0s 653us/step - loss: 0.4033 -
val_loss: 0.3827
Epoch 20/20
363/363 [==============================] - 0s 656us/step - loss: 0.3939 -
val_loss: 0.4054
162/162 [==============================] - 0s 397us/step - loss: 0.4032
What if you want to send different subsets of input features through the wide or deep paths? We
will send 5 features through the wide path (features 0 to 4), and 6 through the deep path (features
2 to 7). Note that 3 features will go through both (features 2, 3 and 4).
[59]: np.random.seed(42)
tf.random.set_seed(42)
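The cell defining the two-input model is not shown in this extract; a sketch matching the input splits described above (layer and input names are illustrative assumptions):

```python
from tensorflow import keras

input_A = keras.layers.Input(shape=[5], name="wide_input")  # features 0 to 4
input_B = keras.layers.Input(shape=[6], name="deep_input")  # features 2 to 7
hidden1 = keras.layers.Dense(30, activation="relu")(input_B)
hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)
concat = keras.layers.concatenate([input_A, hidden2])
output = keras.layers.Dense(1, name="output")(concat)
model = keras.models.Model(inputs=[input_A, input_B], outputs=[output])

# The feature subsets would then be split along these lines
# (3 features, columns 2-4, go through both paths):
# X_train_A, X_train_B = X_train[:, :5], X_train[:, 2:]
```

With two inputs, fit/evaluate/predict all take a tuple of arrays, one per input, as in the `model.predict((X_new_A, X_new_B))` call below.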
[61]: model.compile(loss="mse",
                    optimizer=keras.optimizers.SGD(learning_rate=1e-3))
      history = model.fit((X_train_A, X_train_B), y_train, epochs=20,
                          validation_data=((X_valid_A, X_valid_B), y_valid))
      mse_test = model.evaluate((X_test_A, X_test_B), y_test)
      y_pred = model.predict((X_new_A, X_new_B))
Epoch 1/20
363/363 [==============================] - 0s 848us/step - loss: 3.1941 -
val_loss: 0.8072
Epoch 2/20
363/363 [==============================] - 0s 685us/step - loss: 0.7247 -
val_loss: 0.6658
Epoch 3/20
363/363 [==============================] - 0s 690us/step - loss: 0.6176 -
val_loss: 0.5687
Epoch 4/20
363/363 [==============================] - 0s 687us/step - loss: 0.5799 -
val_loss: 0.5296
Epoch 5/20
363/363 [==============================] - 0s 698us/step - loss: 0.5409 -
val_loss: 0.4993
Epoch 6/20
363/363 [==============================] - 0s 681us/step - loss: 0.5173 -
val_loss: 0.4811
Epoch 7/20
363/363 [==============================] - 0s 688us/step - loss: 0.5186 -
val_loss: 0.4696
Epoch 8/20
363/363 [==============================] - 0s 686us/step - loss: 0.4977 -
val_loss: 0.4496
Epoch 9/20
363/363 [==============================] - 0s 687us/step - loss: 0.4765 -
val_loss: 0.4404
Epoch 10/20
363/363 [==============================] - 0s 693us/step - loss: 0.4676 -
val_loss: 0.4315
Epoch 11/20
363/363 [==============================] - 0s 682us/step - loss: 0.4574 -
val_loss: 0.4268
Epoch 12/20
363/363 [==============================] - 0s 694us/step - loss: 0.4479 -
val_loss: 0.4166
Epoch 13/20
363/363 [==============================] - 0s 688us/step - loss: 0.4487 -
val_loss: 0.4125
Epoch 14/20
363/363 [==============================] - 0s 681us/step - loss: 0.4469 -
val_loss: 0.4074
Epoch 15/20
363/363 [==============================] - 0s 681us/step - loss: 0.4460 -
val_loss: 0.4044
Epoch 16/20
363/363 [==============================] - 0s 689us/step - loss: 0.4495 -
val_loss: 0.4007
Epoch 17/20
363/363 [==============================] - 0s 684us/step - loss: 0.4378 -
val_loss: 0.4013
Epoch 18/20
363/363 [==============================] - 0s 694us/step - loss: 0.4375 -
val_loss: 0.3987
Epoch 19/20
363/363 [==============================] - 0s 677us/step - loss: 0.4151 -
val_loss: 0.3934
Epoch 20/20
363/363 [==============================] - 0s 669us/step - loss: 0.4078 -
val_loss: 0.4204
162/162 [==============================] - 0s 419us/step - loss: 0.4219
Adding an auxiliary output for regularization:
[62]: np.random.seed(42)
tf.random.set_seed(42)
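The auxiliary-output cell is also missing from this extract. A sketch is given below; the loss weights [0.9, 0.1] are an assumption, but they are consistent with the log below (e.g. in epoch 1, 0.9 × 0.8468 + 0.1 × 8.6117 ≈ 1.6233, the reported val_loss):

```python
from tensorflow import keras

input_A = keras.layers.Input(shape=[5], name="wide_input")
input_B = keras.layers.Input(shape=[6], name="deep_input")
hidden1 = keras.layers.Dense(30, activation="relu")(input_B)
hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)
concat = keras.layers.concatenate([input_A, hidden2])
main_output = keras.layers.Dense(1, name="main_output")(concat)
# The auxiliary head sits directly on the deep path, forcing it to
# learn something useful on its own (a regularization effect)
aux_output = keras.layers.Dense(1, name="aux_output")(hidden2)
model = keras.models.Model(inputs=[input_A, input_B],
                           outputs=[main_output, aux_output])
model.compile(loss=["mse", "mse"], loss_weights=[0.9, 0.1],
              optimizer=keras.optimizers.SGD(learning_rate=1e-3))
```

Because the auxiliary loss gets only 10% of the weight, the optimizer prioritizes the main output while still nudging the deep path toward independently predictive features.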
Epoch 1/20
363/363 [==============================] - 1s 1ms/step - loss: 3.4633 -
main_output_loss: 3.3289 - aux_output_loss: 4.6732 - val_loss: 1.6233 -
val_main_output_loss: 0.8468 - val_aux_output_loss: 8.6117
Epoch 2/20
363/363 [==============================] - 0s 807us/step - loss: 0.9807 -
main_output_loss: 0.7503 - aux_output_loss: 3.0537 - val_loss: 1.5163 -
val_main_output_loss: 0.6836 - val_aux_output_loss: 9.0109
Epoch 3/20
363/363 [==============================] - 0s 791us/step - loss: 0.7742 -
main_output_loss: 0.6290 - aux_output_loss: 2.0810 - val_loss: 1.4639 -
val_main_output_loss: 0.6229 - val_aux_output_loss: 9.0326
Epoch 4/20
363/363 [==============================] - 0s 798us/step - loss: 0.6952 -
main_output_loss: 0.5897 - aux_output_loss: 1.6449 - val_loss: 1.3388 -
val_main_output_loss: 0.5481 - val_aux_output_loss: 8.4552
Epoch 5/20
363/363 [==============================] - 0s 799us/step - loss: 0.6469 -
main_output_loss: 0.5508 - aux_output_loss: 1.5118 - val_loss: 1.2177 -
val_main_output_loss: 0.5194 - val_aux_output_loss: 7.5030
Epoch 6/20
363/363 [==============================] - 0s 796us/step - loss: 0.6120 -
main_output_loss: 0.5251 - aux_output_loss: 1.3943 - val_loss: 1.0935 -
val_main_output_loss: 0.5106 - val_aux_output_loss: 6.3396
Epoch 7/20
363/363 [==============================] - 0s 793us/step - loss: 0.6114 -
main_output_loss: 0.5256 - aux_output_loss: 1.3833 - val_loss: 0.9918 -
val_main_output_loss: 0.5115 - val_aux_output_loss: 5.3151
Epoch 8/20
363/363 [==============================] - 0s 791us/step - loss: 0.5765 -
main_output_loss: 0.5024 - aux_output_loss: 1.2439 - val_loss: 0.8733 -
val_main_output_loss: 0.4733 - val_aux_output_loss: 4.4740
Epoch 9/20
363/363 [==============================] - 0s 792us/step - loss: 0.5535 -
main_output_loss: 0.4811 - aux_output_loss: 1.2057 - val_loss: 0.7832 -
val_main_output_loss: 0.4555 - val_aux_output_loss: 3.7323
Epoch 10/20
363/363 [==============================] - 0s 796us/step - loss: 0.5456 -
main_output_loss: 0.4708 - aux_output_loss: 1.2189 - val_loss: 0.7170 -
val_main_output_loss: 0.4604 - val_aux_output_loss: 3.0262
Epoch 11/20
363/363 [==============================] - 0s 812us/step - loss: 0.5297 -
main_output_loss: 0.4587 - aux_output_loss: 1.1684 - val_loss: 0.6510 -
val_main_output_loss: 0.4293 - val_aux_output_loss: 2.6468
Epoch 12/20
363/363 [==============================] - 0s 797us/step - loss: 0.5181 -
main_output_loss: 0.4501 - aux_output_loss: 1.1305 - val_loss: 0.6051 -
val_main_output_loss: 0.4310 - val_aux_output_loss: 2.1722
Epoch 13/20
363/363 [==============================] - 0s 787us/step - loss: 0.5100 -
main_output_loss: 0.4487 - aux_output_loss: 1.0620 - val_loss: 0.5644 -
val_main_output_loss: 0.4161 - val_aux_output_loss: 1.8992
Epoch 14/20
363/363 [==============================] - 0s 810us/step - loss: 0.5064 -
main_output_loss: 0.4459 - aux_output_loss: 1.0503 - val_loss: 0.5354 -
val_main_output_loss: 0.4119 - val_aux_output_loss: 1.6466
Epoch 15/20
363/363 [==============================] - 0s 814us/step - loss: 0.5027 -
main_output_loss: 0.4452 - aux_output_loss: 1.0207 - val_loss: 0.5124 -
val_main_output_loss: 0.4047 - val_aux_output_loss: 1.4812
Epoch 16/20
363/363 [==============================] - 0s 799us/step - loss: 0.5057 -
main_output_loss: 0.4480 - aux_output_loss: 1.0249 - val_loss: 0.4934 -
val_main_output_loss: 0.4034 - val_aux_output_loss: 1.3035
Epoch 17/20
363/363 [==============================] - 0s 810us/step - loss: 0.4931 -
main_output_loss: 0.4360 - aux_output_loss: 1.0075 - val_loss: 0.4801 -
val_main_output_loss: 0.3984 - val_aux_output_loss: 1.2150
Epoch 18/20
363/363 [==============================] - 0s 801us/step - loss: 0.4922 -
main_output_loss: 0.4352 - aux_output_loss: 1.0053 - val_loss: 0.4694 -
val_main_output_loss: 0.3962 - val_aux_output_loss: 1.1279
Epoch 19/20
363/363 [==============================] - 0s 803us/step - loss: 0.4658 -
main_output_loss: 0.4139 - aux_output_loss: 0.9323 - val_loss: 0.4580 -
val_main_output_loss: 0.3936 - val_aux_output_loss: 1.0372
Epoch 20/20
363/363 [==============================] - 0s 794us/step - loss: 0.4589 -
main_output_loss: 0.4072 - aux_output_loss: 0.9243 - val_loss: 0.4655 -
val_main_output_loss: 0.4048 - val_aux_output_loss: 1.0118
8 The subclassing API
[67]: class WideAndDeepModel(keras.models.Model):
          def __init__(self, units=30, activation="relu", **kwargs):
              super().__init__(**kwargs)
              self.hidden1 = keras.layers.Dense(units, activation=activation)
              self.hidden2 = keras.layers.Dense(units, activation=activation)
              self.main_output = keras.layers.Dense(1)
              self.aux_output = keras.layers.Dense(1)

          def call(self, inputs):
              input_A, input_B = inputs
              hidden1 = self.hidden1(input_B)
              hidden2 = self.hidden2(hidden1)
              concat = keras.layers.concatenate([input_A, hidden2])
              main_output = self.main_output(concat)
              aux_output = self.aux_output(hidden2)
              return main_output, aux_output
Epoch 1/10
/usr/local/lib/python3.9/site-
packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:374: UserWarning:
The `lr` argument is deprecated, use `learning_rate` instead.
warnings.warn(
363/363 [==============================] - 1s 1ms/step - loss: 3.3855 -
output_1_loss: 3.3304 - output_2_loss: 3.8821 - val_loss: 2.1435 -
val_output_1_loss: 1.1581 - val_output_2_loss: 11.0117
Epoch 2/10
363/363 [==============================] - 0s 786us/step - loss: 1.0790 -
output_1_loss: 0.9329 - output_2_loss: 2.3942 - val_loss: 1.7567 -
val_output_1_loss: 0.8205 - val_output_2_loss: 10.1825
Epoch 3/10
363/363 [==============================] - 0s 810us/step - loss: 0.8644 -
output_1_loss: 0.7583 - output_2_loss: 1.8194 - val_loss: 1.5664 -
val_output_1_loss: 0.7913 - val_output_2_loss: 8.5419
Epoch 4/10
363/363 [==============================] - 0s 796us/step - loss: 0.7850 -
output_1_loss: 0.6979 - output_2_loss: 1.5689 - val_loss: 1.3088 -
val_output_1_loss: 0.6549 - val_output_2_loss: 7.1933
Epoch 5/10
363/363 [==============================] - 0s 798us/step - loss: 0.7294 -
output_1_loss: 0.6499 - output_2_loss: 1.4452 - val_loss: 1.1357 -
val_output_1_loss: 0.5964 - val_output_2_loss: 5.9898
Epoch 6/10
363/363 [==============================] - 0s 798us/step - loss: 0.6880 -
output_1_loss: 0.6092 - output_2_loss: 1.3974 - val_loss: 1.0036 -
val_output_1_loss: 0.5937 - val_output_2_loss: 4.6933
Epoch 7/10
363/363 [==============================] - 0s 804us/step - loss: 0.6918 -
output_1_loss: 0.6143 - output_2_loss: 1.3899 - val_loss: 0.8904 -
val_output_1_loss: 0.5591 - val_output_2_loss: 3.8714
Epoch 8/10
363/363 [==============================] - 0s 820us/step - loss: 0.6504 -
output_1_loss: 0.5805 - output_2_loss: 1.2797 - val_loss: 0.8009 -
val_output_1_loss: 0.5243 - val_output_2_loss: 3.2903
Epoch 9/10
363/363 [==============================] - 0s 816us/step - loss: 0.6270 -
output_1_loss: 0.5574 - output_2_loss: 1.2533 - val_loss: 0.7357 -
val_output_1_loss: 0.5144 - val_output_2_loss: 2.7275
Epoch 10/10
363/363 [==============================] - 0s 826us/step - loss: 0.6160 -
output_1_loss: 0.5456 - output_2_loss: 1.2495 - val_loss: 0.6849 -
val_output_1_loss: 0.5014 - val_output_2_loss: 2.3370
162/162 [==============================] - 0s 496us/step - loss: 0.5841 -
output_1_loss: 0.5188 - output_2_loss: 1.1722
WARNING:tensorflow:6 out of the last 7 calls to <function
Model.make_predict_function.<locals>.predict_function at 0x18a37fd30> triggered
tf.function retracing. Tracing is expensive and the excessive number of tracings
could be due to (1) creating @tf.function repeatedly in a loop, (2) passing
tensors with different shapes, (3) passing Python objects instead of tensors.
For (1), please define your @tf.function outside of the loop. For (2),
@tf.function has experimental_relax_shapes=True option that relaxes argument
shapes that can avoid unnecessary retracing. For (3), please refer to
https://www.tensorflow.org/guide/function#controlling_retracing and
https://www.tensorflow.org/api_docs/python/tf/function for more details.
9 Saving and Restoring
[70]: np.random.seed(42)
tf.random.set_seed(42)
Epoch 1/10
363/363 [==============================] - 0s 795us/step - loss: 3.3697 -
val_loss: 0.7126
Epoch 2/10
363/363 [==============================] - 0s 639us/step - loss: 0.6964 -
val_loss: 0.6880
Epoch 3/10
363/363 [==============================] - 0s 651us/step - loss: 0.6167 -
val_loss: 0.5803
Epoch 4/10
363/363 [==============================] - 0s 641us/step - loss: 0.5846 -
val_loss: 0.5166
Epoch 5/10
363/363 [==============================] - 0s 646us/step - loss: 0.5321 -
val_loss: 0.4895
Epoch 6/10
363/363 [==============================] - 0s 638us/step - loss: 0.5083 -
val_loss: 0.4951
Epoch 7/10
363/363 [==============================] - 0s 642us/step - loss: 0.5044 -
val_loss: 0.4861
Epoch 8/10
363/363 [==============================] - 0s 635us/step - loss: 0.4813 -
val_loss: 0.4554
Epoch 9/10
363/363 [==============================] - 0s 644us/step - loss: 0.4627 -
val_loss: 0.4413
Epoch 10/10
363/363 [==============================] - 0s 656us/step - loss: 0.4549 -
val_loss: 0.4379
162/162 [==============================] - 0s 396us/step - loss: 0.4382
[73]: model.save("my_keras_model.h5")
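The cell that reloads the model (between [73] and [75]) is missing from this extract. A self-contained sketch of the save/load round-trip, using a small stand-in model with hypothetical shapes since the trained model is not available here:

```python
import numpy as np
from tensorflow import keras

# Small stand-in model, just to demonstrate the round-trip
model = keras.models.Sequential([
    keras.layers.Dense(4, activation="relu", input_shape=[3]),
    keras.layers.Dense(1),
])
model.compile(loss="mse", optimizer="sgd")
model.save("roundtrip_demo.h5")

# load_model restores architecture, weights, and compile configuration
restored = keras.models.load_model("roundtrip_demo.h5")
X_demo = np.random.rand(2, 3).astype(np.float32)
```

The restored model reproduces the original's predictions exactly, since the HDF5 file stores the full architecture, weights, and optimizer state.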
[75]: model.predict(X_new)
[75]: array([[0.54002357],
[1.650597 ],
[3.0098243 ]], dtype=float32)
[76]: model.save_weights("my_keras_weights.ckpt")
[77]: model.load_weights("my_keras_weights.ckpt")
[80]: model.compile(loss="mse",
                    optimizer=keras.optimizers.SGD(learning_rate=1e-3))
      checkpoint_cb = keras.callbacks.ModelCheckpoint("my_keras_model.h5",
                                                      save_best_only=True)
      history = model.fit(X_train, y_train, epochs=10,
                          validation_data=(X_valid, y_valid),
                          callbacks=[checkpoint_cb])
      mse_test = model.evaluate(X_test, y_test)
Epoch 1/10
363/363 [==============================] - 0s 815us/step - loss: 3.3697 -
val_loss: 0.7126
Epoch 2/10
363/363 [==============================] - 0s 647us/step - loss: 0.6964 -
val_loss: 0.6880
Epoch 3/10
363/363 [==============================] - 0s 653us/step - loss: 0.6167 -
val_loss: 0.5803
Epoch 4/10
363/363 [==============================] - 0s 665us/step - loss: 0.5846 -
val_loss: 0.5166
Epoch 5/10
363/363 [==============================] - 0s 656us/step - loss: 0.5321 -
val_loss: 0.4895
Epoch 6/10
363/363 [==============================] - 0s 647us/step - loss: 0.5083 -
val_loss: 0.4951
Epoch 7/10
363/363 [==============================] - 0s 670us/step - loss: 0.5044 -
val_loss: 0.4861
Epoch 8/10
363/363 [==============================] - 0s 651us/step - loss: 0.4813 -
val_loss: 0.4554
Epoch 9/10
363/363 [==============================] - 0s 655us/step - loss: 0.4627 -
val_loss: 0.4413
Epoch 10/10
363/363 [==============================] - 0s 652us/step - loss: 0.4549 -
val_loss: 0.4379
162/162 [==============================] - 0s 411us/step - loss: 0.4382
[81]: model.compile(loss="mse",
optimizer=keras.optimizers.SGD(learning_rate=1e-3))
early_stopping_cb = keras.callbacks.EarlyStopping(patience=10,
restore_best_weights=True)
history = model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=[checkpoint_cb, early_stopping_cb])
mse_test = model.evaluate(X_test, y_test)
Epoch 1/100
363/363 [==============================] - 0s 806us/step - loss: 0.4578 -
val_loss: 0.4110
Epoch 2/100
363/363 [==============================] - 0s 650us/step - loss: 0.4430 -
val_loss: 0.4266
Epoch 3/100
363/363 [==============================] - 0s 650us/step - loss: 0.4376 -
val_loss: 0.3996
Epoch 4/100
363/363 [==============================] - 0s 657us/step - loss: 0.4361 -
val_loss: 0.3939
Epoch 5/100
363/363 [==============================] - 0s 676us/step - loss: 0.4204 -
val_loss: 0.3889
Epoch 6/100
363/363 [==============================] - 0s 651us/step - loss: 0.4112 -
val_loss: 0.3866
Epoch 7/100
363/363 [==============================] - 0s 661us/step - loss: 0.4226 -
val_loss: 0.3860
Epoch 8/100
363/363 [==============================] - 0s 669us/step - loss: 0.4135 -
val_loss: 0.3793
Epoch 9/100
363/363 [==============================] - 0s 652us/step - loss: 0.4039 -
val_loss: 0.3746
Epoch 10/100
363/363 [==============================] - 0s 646us/step - loss: 0.4023 -
val_loss: 0.3723
Epoch 11/100
363/363 [==============================] - 0s 647us/step - loss: 0.3950 -
val_loss: 0.3697
Epoch 12/100
363/363 [==============================] - 0s 659us/step - loss: 0.3912 -
val_loss: 0.3669
Epoch 13/100
363/363 [==============================] - 0s 650us/step - loss: 0.3939 -
val_loss: 0.3661
Epoch 14/100
363/363 [==============================] - 0s 649us/step - loss: 0.3868 -
val_loss: 0.3631
Epoch 15/100
363/363 [==============================] - 0s 647us/step - loss: 0.3878 -
val_loss: 0.3660
Epoch 16/100
363/363 [==============================] - 0s 664us/step - loss: 0.3935 -
val_loss: 0.3625
Epoch 17/100
363/363 [==============================] - 0s 651us/step - loss: 0.3817 -
val_loss: 0.3592
Epoch 18/100
363/363 [==============================] - 0s 649us/step - loss: 0.3801 -
val_loss: 0.3563
Epoch 19/100
363/363 [==============================] - 0s 659us/step - loss: 0.3679 -
val_loss: 0.3535
Epoch 20/100
363/363 [==============================] - 0s 659us/step - loss: 0.3624 -
val_loss: 0.3709
Epoch 21/100
363/363 [==============================] - 0s 656us/step - loss: 0.3746 -
val_loss: 0.3512
Epoch 22/100
363/363 [==============================] - 0s 658us/step - loss: 0.3605 -
val_loss: 0.3699
Epoch 23/100
363/363 [==============================] - 0s 657us/step - loss: 0.3822 -
val_loss: 0.3476
Epoch 24/100
363/363 [==============================] - 0s 661us/step - loss: 0.3626 -
val_loss: 0.3561
Epoch 25/100
363/363 [==============================] - 0s 669us/step - loss: 0.3610 -
val_loss: 0.3527
Epoch 26/100
363/363 [==============================] - 0s 673us/step - loss: 0.3626 -
val_loss: 0.3700
Epoch 27/100
363/363 [==============================] - 0s 666us/step - loss: 0.3685 -
val_loss: 0.3432
Epoch 28/100
363/363 [==============================] - 0s 670us/step - loss: 0.3684 -
val_loss: 0.3592
Epoch 29/100
363/363 [==============================] - 0s 688us/step - loss: 0.3581 -
val_loss: 0.3521
Epoch 30/100
363/363 [==============================] - 0s 656us/step - loss: 0.3687 -
val_loss: 0.3626
Epoch 31/100
363/363 [==============================] - 0s 647us/step - loss: 0.3613 -
val_loss: 0.3431
Epoch 32/100
363/363 [==============================] - 0s 657us/step - loss: 0.3555 -
val_loss: 0.3766
Epoch 33/100
363/363 [==============================] - 0s 658us/step - loss: 0.3620 -
val_loss: 0.3374
Epoch 34/100
363/363 [==============================] - 0s 656us/step - loss: 0.3502 -
val_loss: 0.3407
Epoch 35/100
363/363 [==============================] - 0s 653us/step - loss: 0.3471 -
val_loss: 0.3614
Epoch 36/100
363/363 [==============================] - 0s 672us/step - loss: 0.3451 -
val_loss: 0.3348
Epoch 37/100
363/363 [==============================] - 0s 660us/step - loss: 0.3780 -
val_loss: 0.3573
Epoch 38/100
363/363 [==============================] - 0s 656us/step - loss: 0.3474 -
val_loss: 0.3367
Epoch 39/100
363/363 [==============================] - 0s 658us/step - loss: 0.3689 -
val_loss: 0.3425
Epoch 40/100
363/363 [==============================] - 0s 646us/step - loss: 0.3485 -
val_loss: 0.3368
Epoch 41/100
363/363 [==============================] - 0s 660us/step - loss: 0.3674 -
val_loss: 0.3514
Epoch 42/100
363/363 [==============================] - 0s 656us/step - loss: 0.3471 -
val_loss: 0.3426
Epoch 43/100
363/363 [==============================] - 0s 646us/step - loss: 0.3545 -
val_loss: 0.3677
Epoch 44/100
363/363 [==============================] - 0s 666us/step - loss: 0.3407 -
val_loss: 0.3563
Epoch 45/100
363/363 [==============================] - 0s 672us/step - loss: 0.3554 -
val_loss: 0.3336
Epoch 46/100
363/363 [==============================] - 0s 675us/step - loss: 0.3499 -
val_loss: 0.3456
Epoch 47/100
363/363 [==============================] - 0s 673us/step - loss: 0.3623 -
val_loss: 0.3433
Epoch 48/100
363/363 [==============================] - 0s 667us/step - loss: 0.3401 -
val_loss: 0.3658
Epoch 49/100
363/363 [==============================] - 0s 664us/step - loss: 0.3528 -
val_loss: 0.3286
Epoch 50/100
363/363 [==============================] - 0s 662us/step - loss: 0.3560 -
val_loss: 0.3268
Epoch 51/100
363/363 [==============================] - 0s 650us/step - loss: 0.3483 -
val_loss: 0.3438
Epoch 52/100
363/363 [==============================] - 0s 655us/step - loss: 0.3405 -
val_loss: 0.3263
Epoch 53/100
363/363 [==============================] - 0s 669us/step - loss: 0.3468 -
val_loss: 0.3910
Epoch 54/100
363/363 [==============================] - 0s 645us/step - loss: 0.3337 -
val_loss: 0.3275
Epoch 55/100
363/363 [==============================] - 0s 657us/step - loss: 0.3462 -
val_loss: 0.3560
Epoch 56/100
363/363 [==============================] - 0s 659us/step - loss: 0.3342 -
val_loss: 0.3237
Epoch 57/100
363/363 [==============================] - 0s 657us/step - loss: 0.3395 -
val_loss: 0.3242
Epoch 58/100
363/363 [==============================] - 0s 644us/step - loss: 0.3315 -
val_loss: 0.3764
Epoch 59/100
363/363 [==============================] - 0s 649us/step - loss: 0.3394 -
val_loss: 0.3289
Epoch 60/100
363/363 [==============================] - 0s 651us/step - loss: 0.3377 -
val_loss: 0.3502
Epoch 61/100
363/363 [==============================] - 0s 661us/step - loss: 0.3522 -
val_loss: 0.3456
Epoch 62/100
363/363 [==============================] - 0s 666us/step - loss: 0.3473 -
val_loss: 0.3445
Epoch 63/100
363/363 [==============================] - 0s 659us/step - loss: 0.3427 -
val_loss: 0.3290
Epoch 64/100
363/363 [==============================] - 0s 663us/step - loss: 0.3212 -
val_loss: 0.3217
Epoch 65/100
363/363 [==============================] - 0s 661us/step - loss: 0.3374 -
val_loss: 0.3351
Epoch 66/100
363/363 [==============================] - 0s 658us/step - loss: 0.3323 -
val_loss: 0.3232
Epoch 67/100
363/363 [==============================] - 0s 650us/step - loss: 0.3470 -
val_loss: 0.3568
Epoch 68/100
363/363 [==============================] - 0s 654us/step - loss: 0.3316 -
val_loss: 0.3256
Epoch 69/100
363/363 [==============================] - 0s 654us/step - loss: 0.3354 -
val_loss: 0.3349
Epoch 70/100
363/363 [==============================] - 0s 654us/step - loss: 0.3316 -
val_loss: 0.3560
Epoch 71/100
363/363 [==============================] - 0s 640us/step - loss: 0.3371 -
val_loss: 0.3582
Epoch 72/100
363/363 [==============================] - 0s 656us/step - loss: 0.3201 -
val_loss: 0.3286
Epoch 73/100
363/363 [==============================] - 0s 669us/step - loss: 0.3373 -
val_loss: 0.3203
Epoch 74/100
363/363 [==============================] - 0s 652us/step - loss: 0.3327 -
val_loss: 0.3839
Epoch 75/100
363/363 [==============================] - 0s 646us/step - loss: 0.3268 -
val_loss: 0.3234
Epoch 76/100
363/363 [==============================] - 0s 655us/step - loss: 0.3322 -
val_loss: 0.3475
Epoch 77/100
363/363 [==============================] - 0s 649us/step - loss: 0.3224 -
val_loss: 0.3409
Epoch 78/100
363/363 [==============================] - 0s 660us/step - loss: 0.3331 -
val_loss: 0.3461
Epoch 79/100
363/363 [==============================] - 0s 636us/step - loss: 0.3310 -
val_loss: 0.3348
Epoch 80/100
363/363 [==============================] - 0s 651us/step - loss: 0.3323 -
val_loss: 0.3352
Epoch 81/100
363/363 [==============================] - 0s 658us/step - loss: 0.3297 -
val_loss: 0.3277
Epoch 82/100
363/363 [==============================] - 0s 652us/step - loss: 0.3441 -
val_loss: 0.3167
Epoch 83/100
363/363 [==============================] - 0s 657us/step - loss: 0.3369 -
val_loss: 0.3280
Epoch 84/100
363/363 [==============================] - 0s 638us/step - loss: 0.3182 -
val_loss: 0.3636
Epoch 85/100
363/363 [==============================] - 0s 646us/step - loss: 0.3235 -
val_loss: 0.3175
Epoch 86/100
363/363 [==============================] - 0s 654us/step - loss: 0.3184 -
val_loss: 0.3156
Epoch 87/100
363/363 [==============================] - 0s 655us/step - loss: 0.3395 -
val_loss: 0.3530
Epoch 88/100
363/363 [==============================] - 0s 650us/step - loss: 0.3264 -
val_loss: 0.3256
Epoch 89/100
363/363 [==============================] - 0s 653us/step - loss: 0.3210 -
val_loss: 0.3625
Epoch 90/100
363/363 [==============================] - 0s 647us/step - loss: 0.3192 -
val_loss: 0.3378
Epoch 91/100
363/363 [==============================] - 0s 655us/step - loss: 0.3237 -
val_loss: 0.3211
Epoch 92/100
363/363 [==============================] - 0s 652us/step - loss: 0.3281 -
val_loss: 0.3455
Epoch 93/100
363/363 [==============================] - 0s 652us/step - loss: 0.3424 -
val_loss: 0.3158
Epoch 94/100
363/363 [==============================] - 0s 646us/step - loss: 0.3209 -
val_loss: 0.3408
Epoch 95/100
363/363 [==============================] - 0s 652us/step - loss: 0.3230 -
val_loss: 0.3380
Epoch 96/100
363/363 [==============================] - 0s 656us/step - loss: 0.3342 -
val_loss: 0.3212
162/162 [==============================] - 0s 406us/step - loss: 0.3310
val/train: 1.08
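EarlyStopping(patience=10, restore_best_weights=True) stops training once ten epochs pass without a new validation-loss minimum and rolls the weights back to the best epoch, which is why the run above halts at epoch 96 of 100. A plain-Python sketch of the patience logic (hypothetical helper, not the Keras implementation):

```python
def early_stopping(val_losses, patience=10):
    """Return (best_epoch, stop_epoch): training halts once `patience`
    epochs pass without a new validation-loss minimum, and the weights
    from best_epoch are restored."""
    best_epoch, best_loss, wait = 0, float("inf"), 0
    stop_epoch = len(val_losses) - 1  # default: ran to completion
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_epoch, best_loss, wait = epoch, loss, 0
        else:
            wait += 1
            if wait >= patience:
                stop_epoch = epoch
                break
    return best_epoch, stop_epoch

# Improvement at epoch 1 (0.40), then three non-improving epochs:
# stops at epoch 4 and restores the epoch-1 weights.
best, stop = early_stopping([0.50, 0.40, 0.45, 0.44, 0.46], patience=3)
```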
11 TensorBoard
[84]: root_logdir = os.path.join(os.curdir, "my_logs")
run_logdir = get_run_logdir()
run_logdir
[85]: './my_logs/run_2021_04_06-17_41_42'
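The get_run_logdir helper is defined earlier in the notebook and not shown in this extract; a minimal sketch consistent with the output above (a fresh timestamped subdirectory of my_logs per run):

```python
import os
import time

root_logdir = os.path.join(os.curdir, "my_logs")

def get_run_logdir():
    # Each run gets its own timestamped subdirectory,
    # e.g. ./my_logs/run_2021_04_06-17_41_42
    run_id = time.strftime("run_%Y_%m_%d-%H_%M_%S")
    return os.path.join(root_logdir, run_id)
```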
[86]: keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
Epoch 1/30
363/363 [==============================] - 0s 876us/step - loss: 3.3697 -
val_loss: 0.7126
Epoch 2/30
363/363 [==============================] - 0s 668us/step - loss: 0.6964 -
val_loss: 0.6880
Epoch 3/30
363/363 [==============================] - 0s 676us/step - loss: 0.6167 -
val_loss: 0.5803
Epoch 4/30
363/363 [==============================] - 0s 665us/step - loss: 0.5846 -
val_loss: 0.5166
Epoch 5/30
363/363 [==============================] - 0s 660us/step - loss: 0.5321 -
val_loss: 0.4895
Epoch 6/30
363/363 [==============================] - 0s 639us/step - loss: 0.5083 -
val_loss: 0.4951
Epoch 7/30
363/363 [==============================] - 0s 659us/step - loss: 0.5044 -
val_loss: 0.4861
Epoch 8/30
363/363 [==============================] - 0s 682us/step - loss: 0.4813 -
val_loss: 0.4554
Epoch 9/30
363/363 [==============================] - 0s 655us/step - loss: 0.4627 -
val_loss: 0.4413
Epoch 10/30
363/363 [==============================] - 0s 658us/step - loss: 0.4549 -
val_loss: 0.4379
Epoch 11/30
363/363 [==============================] - 0s 663us/step - loss: 0.4416 -
val_loss: 0.4396
Epoch 12/30
363/363 [==============================] - 0s 684us/step - loss: 0.4295 -
val_loss: 0.4507
Epoch 13/30
363/363 [==============================] - 0s 667us/step - loss: 0.4326 -
val_loss: 0.3997
Epoch 14/30
363/363 [==============================] - 0s 658us/step - loss: 0.4207 -
val_loss: 0.3956
Epoch 15/30
363/363 [==============================] - 0s 651us/step - loss: 0.4198 -
val_loss: 0.3916
Epoch 16/30
363/363 [==============================] - 0s 660us/step - loss: 0.4248 -
val_loss: 0.3937
Epoch 17/30
363/363 [==============================] - 0s 670us/step - loss: 0.4105 -
val_loss: 0.3809
Epoch 18/30
363/363 [==============================] - 0s 655us/step - loss: 0.4070 -
val_loss: 0.3793
Epoch 19/30
363/363 [==============================] - 0s 651us/step - loss: 0.3902 -
val_loss: 0.3850
Epoch 20/30
363/363 [==============================] - 0s 651us/step - loss: 0.3864 -
val_loss: 0.3809
Epoch 21/30
363/363 [==============================] - 0s 659us/step - loss: 0.3978 -
val_loss: 0.3701
Epoch 22/30
363/363 [==============================] - 0s 652us/step - loss: 0.3816 -
val_loss: 0.3781
Epoch 23/30
363/363 [==============================] - 0s 656us/step - loss: 0.4042 -
val_loss: 0.3650
Epoch 24/30
363/363 [==============================] - 0s 678us/step - loss: 0.3823 -
val_loss: 0.3655
Epoch 25/30
363/363 [==============================] - 0s 685us/step - loss: 0.3792 -
val_loss: 0.3611
Epoch 26/30
363/363 [==============================] - 0s 672us/step - loss: 0.3800 -
val_loss: 0.3626
Epoch 27/30
363/363 [==============================] - 0s 664us/step - loss: 0.3858 -
val_loss: 0.3564
Epoch 28/30
363/363 [==============================] - 0s 649us/step - loss: 0.3839 -
val_loss: 0.3579
Epoch 29/30
363/363 [==============================] - 0s 667us/step - loss: 0.3736 -
val_loss: 0.3561
Epoch 30/30
363/363 [==============================] - 0s 659us/step - loss: 0.3843 -
val_loss: 0.3548
To start the TensorBoard server, one option is to open a terminal, activate the virtualenv where you installed TensorBoard (if needed), change to this notebook’s directory, and type:
$ tensorboard --logdir=./my_logs --port=6006
You can then open your web browser at localhost:6006 and use TensorBoard. Once you are done, press Ctrl-C in the terminal window; this will shut down the TensorBoard server.
Alternatively, you can load TensorBoard’s Jupyter extension and run it like this:
[89]: %load_ext tensorboard
%tensorboard --logdir=./my_logs --port=6006
Reusing TensorBoard on port 6006 (pid 79706), started 0:53:05 ago. (Use '!kill 79706' to kill it.)
<IPython.core.display.HTML object>
[90]: './my_logs/run_2021_04_06-17_41_49'
[91]: keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
Epoch 1/30
363/363 [==============================] - 1s 921us/step - loss: 0.7645 -
val_loss: 302.8523
Epoch 2/30
363/363 [==============================] - 0s 694us/step - loss: nan - val_loss:
nan
Epoch 3/30
363/363 [==============================] - 0s 682us/step - loss: nan - val_loss:
nan
Epoch 4/30
363/363 [==============================] - 0s 648us/step - loss: nan - val_loss:
nan
Epoch 5/30
363/363 [==============================] - 0s 649us/step - loss: nan - val_loss:
nan
Epoch 6/30
363/363 [==============================] - 0s 673us/step - loss: nan - val_loss:
nan
Epoch 7/30
363/363 [==============================] - 0s 655us/step - loss: nan - val_loss:
nan
Epoch 8/30
363/363 [==============================] - 0s 667us/step - loss: nan - val_loss:
nan
Epoch 9/30
363/363 [==============================] - 0s 665us/step - loss: nan - val_loss:
nan
Epoch 10/30
363/363 [==============================] - 0s 665us/step - loss: nan - val_loss:
nan
Epoch 11/30
363/363 [==============================] - 0s 645us/step - loss: nan - val_loss:
nan
Epoch 12/30
363/363 [==============================] - 0s 652us/step - loss: nan - val_loss:
nan
Epoch 13/30
363/363 [==============================] - 0s 655us/step - loss: nan - val_loss:
nan
Epoch 14/30
363/363 [==============================] - 0s 661us/step - loss: nan - val_loss:
nan
Epoch 15/30
363/363 [==============================] - 0s 657us/step - loss: nan - val_loss:
nan
Epoch 16/30
363/363 [==============================] - 0s 648us/step - loss: nan - val_loss:
nan
Epoch 17/30
363/363 [==============================] - 0s 643us/step - loss: nan - val_loss:
nan
Epoch 18/30
363/363 [==============================] - 0s 663us/step - loss: nan - val_loss:
nan
Epoch 19/30
363/363 [==============================] - 0s 649us/step - loss: nan - val_loss:
nan
Epoch 20/30
363/363 [==============================] - 0s 645us/step - loss: nan - val_loss:
nan
Epoch 21/30
363/363 [==============================] - 0s 656us/step - loss: nan - val_loss:
nan
Epoch 22/30
363/363 [==============================] - 0s 658us/step - loss: nan - val_loss:
nan
Epoch 23/30
363/363 [==============================] - 0s 653us/step - loss: nan - val_loss:
nan
Epoch 24/30
363/363 [==============================] - 0s 653us/step - loss: nan - val_loss:
nan
Epoch 25/30
363/363 [==============================] - 0s 651us/step - loss: nan - val_loss:
nan
Epoch 26/30
363/363 [==============================] - 0s 645us/step - loss: nan - val_loss:
nan
Epoch 27/30
363/363 [==============================] - 0s 649us/step - loss: nan - val_loss:
nan
Epoch 28/30
363/363 [==============================] - 0s 656us/step - loss: nan - val_loss:
nan
Epoch 29/30
363/363 [==============================] - 0s 649us/step - loss: nan - val_loss:
nan
Epoch 30/30
363/363 [==============================] - 0s 647us/step - loss: nan - val_loss:
nan
Notice how TensorBoard now sees two runs, and you can compare the learning curves.
Check out the other available logging options:
[94]: help(keras.callbacks.TensorBoard.__init__)
12 Hyperparameter Tuning
[95]: keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
Epoch 1/100
363/363 [==============================] - 1s 757us/step - loss: 1.5673 -
val_loss: 20.7721
Epoch 2/100
363/363 [==============================] - 0s 618us/step - loss: 1.3216 -
val_loss: 5.0266
Epoch 3/100
363/363 [==============================] - 0s 631us/step - loss: 0.5972 -
val_loss: 0.5490
Epoch 4/100
363/363 [==============================] - 0s 608us/step - loss: 0.4985 -
val_loss: 0.4529
Epoch 5/100
363/363 [==============================] - 0s 614us/step - loss: 0.4608 -
val_loss: 0.4188
Epoch 6/100
363/363 [==============================] - 0s 617us/step - loss: 0.4410 -
val_loss: 0.4129
Epoch 7/100
363/363 [==============================] - 0s 620us/step - loss: 0.4463 -
val_loss: 0.4004
Epoch 8/100
363/363 [==============================] - 0s 618us/step - loss: 0.4283 -
val_loss: 0.3944
Epoch 9/100
363/363 [==============================] - 0s 616us/step - loss: 0.4139 -
val_loss: 0.3961
Epoch 10/100
363/363 [==============================] - 0s 625us/step - loss: 0.4107 -
val_loss: 0.4071
Epoch 11/100
363/363 [==============================] - 0s 615us/step - loss: 0.3992 -
val_loss: 0.3855
Epoch 12/100
363/363 [==============================] - 0s 620us/step - loss: 0.3982 -
val_loss: 0.4136
Epoch 13/100
363/363 [==============================] - 0s 627us/step - loss: 0.3983 -
val_loss: 0.3997
Epoch 14/100
363/363 [==============================] - 0s 620us/step - loss: 0.3910 -
val_loss: 0.3818
Epoch 15/100
363/363 [==============================] - 0s 611us/step - loss: 0.3948 -
val_loss: 0.3829
Epoch 16/100
363/363 [==============================] - 0s 634us/step - loss: 0.3981 -
val_loss: 0.3739
Epoch 17/100
363/363 [==============================] - 0s 638us/step - loss: 0.3821 -
val_loss: 0.4022
Epoch 18/100
363/363 [==============================] - 0s 618us/step - loss: 0.3851 -
val_loss: 0.3873
Epoch 19/100
363/363 [==============================] - 0s 616us/step - loss: 0.3753 -
val_loss: 0.3768
Epoch 20/100
363/363 [==============================] - 0s 611us/step - loss: 0.3634 -
val_loss: 0.4191
Epoch 21/100
363/363 [==============================] - 0s 633us/step - loss: 0.3787 -
val_loss: 0.3926
Epoch 22/100
363/363 [==============================] - 0s 622us/step - loss: 0.3628 -
val_loss: 0.4238
Epoch 23/100
363/363 [==============================] - 0s 616us/step - loss: 0.3892 -
val_loss: 0.3523
Epoch 24/100
363/363 [==============================] - 0s 622us/step - loss: 0.3676 -
val_loss: 0.3842
Epoch 25/100
363/363 [==============================] - 0s 649us/step - loss: 0.3677 -
val_loss: 0.4162
Epoch 26/100
363/363 [==============================] - 0s 637us/step - loss: 0.3690 -
val_loss: 0.3980
Epoch 27/100
363/363 [==============================] - 0s 614us/step - loss: 0.3731 -
val_loss: 0.3474
Epoch 28/100
363/363 [==============================] - 0s 628us/step - loss: 0.3725 -
val_loss: 0.3920
Epoch 29/100
363/363 [==============================] - 0s 607us/step - loss: 0.3660 -
val_loss: 0.3566
Epoch 30/100
363/363 [==============================] - 0s 633us/step - loss: 0.3700 -
val_loss: 0.4191
Epoch 31/100
363/363 [==============================] - 0s 641us/step - loss: 0.3635 -
val_loss: 0.3721
Epoch 32/100
363/363 [==============================] - 0s 624us/step - loss: 0.3628 -
val_loss: 0.3947
Epoch 33/100
363/363 [==============================] - 0s 627us/step - loss: 0.3647 -
val_loss: 0.3423
Epoch 34/100
363/363 [==============================] - 0s 625us/step - loss: 0.3547 -
val_loss: 0.3453
Epoch 35/100
363/363 [==============================] - 0s 608us/step - loss: 0.3496 -
val_loss: 0.4068
Epoch 36/100
363/363 [==============================] - 0s 613us/step - loss: 0.3476 -
val_loss: 0.3416
Epoch 37/100
363/363 [==============================] - 0s 610us/step - loss: 0.3786 -
val_loss: 0.3787
Epoch 38/100
363/363 [==============================] - 0s 617us/step - loss: 0.3540 -
val_loss: 0.3379
Epoch 39/100
363/363 [==============================] - 0s 616us/step - loss: 0.3769 -
val_loss: 0.3419
Epoch 40/100
363/363 [==============================] - 0s 611us/step - loss: 0.3522 -
val_loss: 0.3704
Epoch 41/100
363/363 [==============================] - 0s 602us/step - loss: 0.3705 -
val_loss: 0.3660
Epoch 42/100
363/363 [==============================] - 0s 624us/step - loss: 0.3545 -
val_loss: 0.3803
Epoch 43/100
363/363 [==============================] - 0s 616us/step - loss: 0.3596 -
val_loss: 0.3766
Epoch 44/100
363/363 [==============================] - 0s 610us/step - loss: 0.3443 -
val_loss: 0.3814
Epoch 45/100
363/363 [==============================] - 0s 617us/step - loss: 0.3591 -
val_loss: 0.3326
Epoch 46/100
363/363 [==============================] - 0s 607us/step - loss: 0.3528 -
val_loss: 0.3385
Epoch 47/100
363/363 [==============================] - 0s 608us/step - loss: 0.3663 -
val_loss: 0.3657
Epoch 48/100
363/363 [==============================] - 0s 607us/step - loss: 0.3479 -
val_loss: 0.3577
Epoch 49/100
363/363 [==============================] - 0s 620us/step - loss: 0.3601 -
val_loss: 0.3359
Epoch 50/100
363/363 [==============================] - 0s 615us/step - loss: 0.3616 -
val_loss: 0.3317
Epoch 51/100
363/363 [==============================] - 0s 611us/step - loss: 0.3532 -
val_loss: 0.3564
Epoch 52/100
363/363 [==============================] - 0s 608us/step - loss: 0.3427 -
val_loss: 0.3524
Epoch 53/100
363/363 [==============================] - 0s 609us/step - loss: 0.3502 -
val_loss: 0.4589
Epoch 54/100
363/363 [==============================] - 0s 608us/step - loss: 0.3402 -
val_loss: 0.3810
Epoch 55/100
363/363 [==============================] - 0s 616us/step - loss: 0.3497 -
val_loss: 0.3544
Epoch 56/100
363/363 [==============================] - 0s 621us/step - loss: 0.3401 -
val_loss: 0.3728
Epoch 57/100
363/363 [==============================] - 0s 630us/step - loss: 0.3440 -
val_loss: 0.3339
Epoch 58/100
363/363 [==============================] - 0s 603us/step - loss: 0.3347 -
val_loss: 0.4001
Epoch 59/100
363/363 [==============================] - 0s 609us/step - loss: 0.3444 -
val_loss: 0.3264
Epoch 60/100
363/363 [==============================] - 0s 623us/step - loss: 0.3414 -
val_loss: 0.3271
Epoch 61/100
363/363 [==============================] - 0s 614us/step - loss: 0.3621 -
val_loss: 0.3350
Epoch 62/100
363/363 [==============================] - 0s 613us/step - loss: 0.3497 -
val_loss: 0.3489
Epoch 63/100
363/363 [==============================] - 0s 610us/step - loss: 0.3484 -
val_loss: 0.3398
Epoch 64/100
363/363 [==============================] - 0s 608us/step - loss: 0.3299 -
val_loss: 0.3273
Epoch 65/100
363/363 [==============================] - 0s 614us/step - loss: 0.3409 -
val_loss: 0.3296
Epoch 66/100
363/363 [==============================] - 0s 608us/step - loss: 0.3363 -
val_loss: 0.3307
Epoch 67/100
363/363 [==============================] - 0s 619us/step - loss: 0.3558 -
val_loss: 0.3251
Epoch 68/100
363/363 [==============================] - 0s 611us/step - loss: 0.3372 -
val_loss: 0.3242
Epoch 69/100
363/363 [==============================] - 0s 606us/step - loss: 0.3394 -
val_loss: 0.3253
Epoch 70/100
363/363 [==============================] - 0s 628us/step - loss: 0.3350 -
val_loss: 0.3661
Epoch 71/100
363/363 [==============================] - 0s 619us/step - loss: 0.3428 -
val_loss: 0.3378
Epoch 72/100
363/363 [==============================] - 0s 614us/step - loss: 0.3260 -
val_loss: 0.3271
Epoch 73/100
363/363 [==============================] - 0s 612us/step - loss: 0.3409 -
val_loss: 0.3242
Epoch 74/100
363/363 [==============================] - 0s 614us/step - loss: 0.3394 -
val_loss: 0.3664
Epoch 75/100
363/363 [==============================] - 0s 622us/step - loss: 0.3286 -
val_loss: 0.3284
Epoch 76/100
363/363 [==============================] - 0s 625us/step - loss: 0.3391 -
val_loss: 0.3242
Epoch 77/100
363/363 [==============================] - 0s 621us/step - loss: 0.3293 -
val_loss: 0.3375
Epoch 78/100
363/363 [==============================] - 0s 618us/step - loss: 0.3372 -
val_loss: 0.3365
[100]: y_pred = keras_reg.predict(X_new)
[101]: np.random.seed(42)
tf.random.set_seed(42)
Warning: the following cell crashes at the end of training. This seems to be caused by Keras issue
#13586, which was triggered by a recent change in Scikit-Learn. Pull Request #13598 seems to
fix the issue, so this problem should be resolved soon.
[102]: %%time
from scipy.stats import reciprocal
from sklearn.model_selection import RandomizedSearchCV

param_distribs = {
    "n_hidden": [0, 1, 2, 3],
    "n_neurons": np.arange(1, 100),
    "learning_rate": reciprocal(3e-4, 3e-2),
}

rnd_search_cv = RandomizedSearchCV(keras_reg, param_distribs,
                                   n_iter=10, cv=3, verbose=2)
rnd_search_cv.fit(X_train, y_train, epochs=100,
                  validation_data=(X_valid, y_valid),
                  callbacks=[keras.callbacks.EarlyStopping(patience=10)])
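reciprocal(3e-4, 3e-2) is a log-uniform distribution, so the search samples learning rates evenly across orders of magnitude rather than linearly. A dependency-free sketch of the same sampling (hypothetical helper, mimicking scipy.stats.reciprocal(low, high).rvs(n)):

```python
import math
import random

def sample_log_uniform(low, high, n, seed=42):
    # Draw n values whose logarithms are uniform on [log(low), log(high)],
    # matching the reciprocal (log-uniform) distribution used above.
    rng = random.Random(seed)
    return [math.exp(rng.uniform(math.log(low), math.log(high)))
            for _ in range(n)]

rates = sample_log_uniform(3e-4, 3e-2, 5)
```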
242/242 [==============================] - 0s 607us/step - loss: 0.6558 -
val_loss: 0.9118
Epoch 7/100
242/242 [==============================] - 0s 624us/step - loss: 0.6644 -
val_loss: 0.8495
Epoch 8/100
242/242 [==============================] - 0s 634us/step - loss: 0.6308 -
val_loss: 0.8605
Epoch 9/100
242/242 [==============================] - 0s 620us/step - loss: 0.6182 -
val_loss: 0.6524
Epoch 10/100
242/242 [==============================] - 0s 618us/step - loss: 0.6155 -
val_loss: 0.8619
Epoch 11/100
242/242 [==============================] - 0s 613us/step - loss: 0.5881 -
val_loss: 0.8659
Epoch 12/100
242/242 [==============================] - 0s 642us/step - loss: 0.5572 -
val_loss: 0.5962
Epoch 13/100
242/242 [==============================] - 0s 622us/step - loss: 0.6052 -
val_loss: 0.9062
Epoch 14/100
242/242 [==============================] - 0s 613us/step - loss: 0.5939 -
val_loss: 0.9541
Epoch 15/100
242/242 [==============================] - 0s 606us/step - loss: 0.5904 -
val_loss: 0.6402
Epoch 16/100
242/242 [==============================] - 0s 620us/step - loss: 0.5608 -
val_loss: 0.7806
Epoch 17/100
242/242 [==============================] - 0s 637us/step - loss: 0.5925 -
val_loss: 0.7985
Epoch 18/100
242/242 [==============================] - 0s 616us/step - loss: 0.5859 -
val_loss: 0.8756
Epoch 19/100
242/242 [==============================] - 0s 626us/step - loss: 0.5636 -
val_loss: 0.8958
Epoch 20/100
242/242 [==============================] - 0s 623us/step - loss: 0.5662 -
val_loss: 0.8657
Epoch 21/100
242/242 [==============================] - 0s 636us/step - loss: 0.5437 -
val_loss: 0.5940
Epoch 22/100
242/242 [==============================] - 0s 635us/step - loss: 0.5378 -
val_loss: 0.8007
Epoch 23/100
242/242 [==============================] - 0s 624us/step - loss: 0.5596 -
val_loss: 0.7792
Epoch 24/100
242/242 [==============================] - 0s 616us/step - loss: 0.5427 -
val_loss: 0.7622
Epoch 25/100
242/242 [==============================] - 0s 617us/step - loss: 0.5540 -
val_loss: 0.6476
Epoch 26/100
242/242 [==============================] - 0s 603us/step - loss: 0.5298 -
val_loss: 0.5424
Epoch 27/100
242/242 [==============================] - 0s 606us/step - loss: 0.5479 -
val_loss: 0.8687
Epoch 28/100
242/242 [==============================] - 0s 613us/step - loss: 0.5472 -
val_loss: 0.5390
Epoch 29/100
242/242 [==============================] - 0s 616us/step - loss: 0.5539 -
val_loss: 0.7179
Epoch 30/100
242/242 [==============================] - 0s 635us/step - loss: 0.5388 -
val_loss: 0.6029
Epoch 31/100
242/242 [==============================] - 0s 616us/step - loss: 0.5269 -
val_loss: 0.5947
Epoch 32/100
242/242 [==============================] - 0s 614us/step - loss: 0.5219 -
val_loss: 0.5305
Epoch 33/100
242/242 [==============================] - 0s 610us/step - loss: 0.5247 -
val_loss: 0.6601
Epoch 34/100
242/242 [==============================] - 0s 610us/step - loss: 0.5255 -
val_loss: 0.6326
Epoch 35/100
242/242 [==============================] - 0s 613us/step - loss: 0.5271 -
val_loss: 0.5072
Epoch 36/100
242/242 [==============================] - 0s 610us/step - loss: 0.5184 -
val_loss: 0.7270
Epoch 37/100
242/242 [==============================] - 0s 610us/step - loss: 0.5270 -
val_loss: 0.5055
Epoch 38/100
242/242 [==============================] - 0s 600us/step - loss: 0.5055 -
val_loss: 0.7985
Epoch 39/100
242/242 [==============================] - 0s 613us/step - loss: 0.5232 -
val_loss: 0.5176
Epoch 40/100
242/242 [==============================] - 0s 604us/step - loss: 0.5131 -
val_loss: 0.5823
Epoch 41/100
242/242 [==============================] - 0s 601us/step - loss: 0.5264 -
val_loss: 0.7114
Epoch 42/100
242/242 [==============================] - 0s 606us/step - loss: 0.5277 -
val_loss: 0.5059
Epoch 43/100
242/242 [==============================] - 0s 610us/step - loss: 0.5240 -
val_loss: 0.5008
Epoch 44/100
242/242 [==============================] - 0s 607us/step - loss: 0.5282 -
val_loss: 0.7397
Epoch 45/100
242/242 [==============================] - 0s 598us/step - loss: 0.5246 -
val_loss: 0.6169
Epoch 46/100
242/242 [==============================] - 0s 605us/step - loss: 0.5206 -
val_loss: 0.5264
Epoch 47/100
242/242 [==============================] - 0s 613us/step - loss: 0.5286 -
val_loss: 0.6916
Epoch 48/100
242/242 [==============================] - 0s 607us/step - loss: 0.5100 -
val_loss: 0.6554
Epoch 49/100
242/242 [==============================] - 0s 612us/step - loss: 0.5291 -
val_loss: 0.6607
Epoch 50/100
242/242 [==============================] - 0s 612us/step - loss: 0.5377 -
val_loss: 0.8497
Epoch 51/100
242/242 [==============================] - 0s 614us/step - loss: 0.5125 -
val_loss: 0.6664
Epoch 52/100
242/242 [==============================] - 0s 606us/step - loss: 0.5409 -
val_loss: 0.5996
Epoch 53/100
242/242 [==============================] - 0s 612us/step - loss: 0.5257 -
val_loss: 0.6414
121/121 [==============================] - 0s 358us/step - loss: 0.5368
[CV] END learning_rate=0.001683454924600351, n_hidden=0, n_neurons=15; total
time= 8.2s
Epoch 1/100
1/242 […] - ETA: 30s - loss: 7.3553
/usr/local/lib/python3.9/site-
packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:374: UserWarning:
The `lr` argument is deprecated, use `learning_rate` instead.
warnings.warn(
242/242 [==============================] - 0s 845us/step - loss: 4.6915 -
val_loss: 23.0855
Epoch 2/100
242/242 [==============================] - 0s 605us/step - loss: 1.6966 -
val_loss: 10.8387
Epoch 3/100
242/242 [==============================] - 0s 615us/step - loss: 1.0801 -
val_loss: 4.4392
Epoch 4/100
242/242 [==============================] - 0s 614us/step - loss: 0.8957 -
val_loss: 1.5338
Epoch 5/100
242/242 [==============================] - 0s 606us/step - loss: 0.8083 -
val_loss: 0.7192
Epoch 6/100
242/242 [==============================] - 0s 624us/step - loss: 0.7513 -
val_loss: 1.2046
Epoch 7/100
242/242 [==============================] - 0s 626us/step - loss: 0.7264 -
val_loss: 2.4524
Epoch 8/100
242/242 [==============================] - 0s 618us/step - loss: 0.6758 -
val_loss: 4.1421
Epoch 9/100
242/242 [==============================] - 0s 611us/step - loss: 0.6653 -
val_loss: 5.9820
Epoch 10/100
242/242 [==============================] - 0s 601us/step - loss: 0.6333 -
val_loss: 7.7654
Epoch 11/100
242/242 [==============================] - 0s 608us/step - loss: 0.6136 -
val_loss: 9.6230
Epoch 12/100
242/242 [==============================] - 0s 606us/step - loss: 0.5906 -
val_loss: 11.3609
Epoch 13/100
242/242 [==============================] - 0s 608us/step - loss: 0.5808 -
val_loss: 12.9821
Epoch 14/100
242/242 [==============================] - 0s 609us/step - loss: 0.5506 -
val_loss: 14.2266
Epoch 15/100
242/242 [==============================] - 0s 611us/step - loss: 0.5965 -
val_loss: 15.4321
121/121 [==============================] - 0s 359us/step - loss: 0.9198
[CV] END learning_rate=0.001683454924600351, n_hidden=0, n_neurons=15; total
time= 2.5s
Epoch 1/100
1/242 […] - ETA: 29s - loss: 5.8528
/usr/local/lib/python3.9/site-
packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:374: UserWarning:
The `lr` argument is deprecated, use `learning_rate` instead.
warnings.warn(
242/242 [==============================] - 0s 834us/step - loss: 4.5646 -
val_loss: 1.3307
Epoch 2/100
242/242 [==============================] - 0s 615us/step - loss: 1.1490 -
val_loss: 0.6934
Epoch 3/100
242/242 [==============================] - 0s 612us/step - loss: 0.6342 -
val_loss: 0.5469
Epoch 4/100
242/242 [==============================] - 0s 608us/step - loss: 0.5420 -
val_loss: 0.7322
Epoch 5/100
242/242 [==============================] - 0s 605us/step - loss: 0.5172 -
val_loss: 0.4963
Epoch 6/100
242/242 [==============================] - 0s 614us/step - loss: 0.5124 -
val_loss: 0.5539
Epoch 7/100
242/242 [==============================] - 0s 617us/step - loss: 0.5163 -
val_loss: 0.5729
Epoch 8/100
242/242 [==============================] - 0s 617us/step - loss: 0.5377 -
val_loss: 0.7873
Epoch 9/100
242/242 [==============================] - 0s 615us/step - loss: 0.5345 -
val_loss: 0.5968
Epoch 10/100
242/242 [==============================] - 0s 619us/step - loss: 0.5096 -
val_loss: 0.4951
Epoch 11/100
242/242 [==============================] - 0s 605us/step - loss: 0.5419 -
val_loss: 0.7591
Epoch 12/100
242/242 [==============================] - 0s 611us/step - loss: 0.5164 -
val_loss: 0.5368
Epoch 13/100
242/242 [==============================] - 0s 597us/step - loss: 0.4886 -
val_loss: 0.4968
Epoch 14/100
242/242 [==============================] - 0s 605us/step - loss: 0.5110 -
val_loss: 0.5778
Epoch 15/100
242/242 [==============================] - 0s 613us/step - loss: 0.5475 -
val_loss: 0.5117
Epoch 16/100
242/242 [==============================] - 0s 617us/step - loss: 0.5359 -
val_loss: 0.7055
Epoch 17/100
242/242 [==============================] - 0s 610us/step - loss: 0.5182 -
val_loss: 0.5399
Epoch 18/100
242/242 [==============================] - 0s 609us/step - loss: 0.5416 -
val_loss: 0.5257
Epoch 19/100
242/242 [==============================] - 0s 605us/step - loss: 0.5374 -
val_loss: 0.7902
Epoch 20/100
242/242 [==============================] - 0s 612us/step - loss: 0.5233 -
val_loss: 0.5852
121/121 [==============================] - 0s 362us/step - loss: 0.5317
[CV] END learning_rate=0.001683454924600351, n_hidden=0, n_neurons=15; total
time= 3.3s
Epoch 1/100
1/242 […] - ETA: 30s - loss: 4.8657
/usr/local/lib/python3.9/site-
packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:374: UserWarning:
The `lr` argument is deprecated, use `learning_rate` instead.
warnings.warn(
242/242 [==============================] - 0s 859us/step - loss: 2.2879 -
val_loss: 66.5657
Epoch 2/100
242/242 [==============================] - 0s 622us/step - loss: 0.8607 -
val_loss: 137.1490
Epoch 3/100
242/242 [==============================] - 0s 619us/step - loss: 1.2411 -
val_loss: 716.1611
Epoch 4/100
242/242 [==============================] - 0s 608us/step - loss: 2.8496 -
val_loss: 2297.8618
Epoch 5/100
242/242 [==============================] - 0s 620us/step - loss: 7.2695 -
val_loss: 9988.3408
Epoch 6/100
242/242 [==============================] - 0s 612us/step - loss: 245.1801 -
val_loss: 39231.9766
Epoch 7/100
242/242 [==============================] - 0s 615us/step - loss: 385.6387 -
val_loss: 155196.9375
Epoch 8/100
242/242 [==============================] - 0s 617us/step - loss: 2780.1350 -
val_loss: 612492.9375
Epoch 9/100
242/242 [==============================] - 0s 618us/step - loss: 2928.6683 -
val_loss: 2435757.5000
Epoch 10/100
242/242 [==============================] - 0s 619us/step - loss: 54063.5914 -
val_loss: 10128977.0000
Epoch 11/100
242/242 [==============================] - 0s 610us/step - loss: 149789.0356 -
val_loss: 39694556.0000
121/121 [==============================] - 0s 363us/step - loss: 105477.6016
[CV] END learning_rate=0.008731907739399206, n_hidden=0, n_neurons=21; total
time= 1.9s
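The fold above diverged: at learning_rate≈0.0087 the validation loss grows by orders of magnitude per epoch, ending around 4×10⁷. The randomized search simply scores such runs badly, but one common guard — not used in this notebook, so purely an assumption — is to clip the gradient norm so a single step cannot blow up the weights:

```python
from tensorflow import keras

# Gradient-norm clipping: an illustrative guard against the kind of
# divergence seen in the fold above (clipnorm value is arbitrary).
optimizer = keras.optimizers.SGD(learning_rate=8.7e-3, clipnorm=1.0)
```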
Epoch 1/100
1/242 […] - ETA: 30s - loss: 7.0882
/usr/local/lib/python3.9/site-
packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:374: UserWarning:
The `lr` argument is deprecated, use `learning_rate` instead.
warnings.warn(
242/242 [==============================] - 0s 816us/step - loss: 2.2000 -
val_loss: 23.1193
Epoch 2/100
242/242 [==============================] - 0s 617us/step - loss: 0.5372 -
val_loss: 22.1675
Epoch 3/100
242/242 [==============================] - 0s 633us/step - loss: 0.5351 -
val_loss: 22.3752
Epoch 4/100
242/242 [==============================] - 0s 613us/step - loss: 0.5158 -
val_loss: 21.3891
Epoch 5/100
242/242 [==============================] - 0s 606us/step - loss: 0.5126 -
val_loss: 20.8855
Epoch 6/100
242/242 [==============================] - 0s 608us/step - loss: 0.5075 -
val_loss: 20.6379
Epoch 7/100
242/242 [==============================] - 0s 605us/step - loss: 0.4916 -
val_loss: 20.0736
Epoch 8/100
242/242 [==============================] - 0s 619us/step - loss: 0.5100 -
val_loss: 20.7178
Epoch 9/100
242/242 [==============================] - 0s 613us/step - loss: 0.5000 -
val_loss: 20.0844
Epoch 10/100
242/242 [==============================] - 0s 622us/step - loss: 0.4819 -
val_loss: 17.0622
Epoch 11/100
242/242 [==============================] - 0s 612us/step - loss: 0.5075 -
val_loss: 19.1666
Epoch 12/100
242/242 [==============================] - 0s 606us/step - loss: 0.4912 -
val_loss: 20.8246
Epoch 13/100
242/242 [==============================] - 0s 619us/step - loss: 0.5010 -
val_loss: 22.0298
Epoch 14/100
242/242 [==============================] - 0s 604us/step - loss: 0.4672 -
val_loss: 17.6022
Epoch 15/100
242/242 [==============================] - 0s 616us/step - loss: 0.5293 -
val_loss: 18.6171
Epoch 16/100
242/242 [==============================] - 0s 603us/step - loss: 0.5032 -
val_loss: 20.0451
Epoch 17/100
242/242 [==============================] - 0s 610us/step - loss: 0.4972 -
val_loss: 17.5898
Epoch 18/100
242/242 [==============================] - 0s 597us/step - loss: 0.5131 -
val_loss: 17.4526
Epoch 19/100
242/242 [==============================] - 0s 606us/step - loss: 0.5060 -
val_loss: 19.5015
Epoch 20/100
242/242 [==============================] - 0s 604us/step - loss: 0.4820 -
val_loss: 17.3223
121/121 [==============================] - 0s 347us/step - loss: 0.9327
[CV] END learning_rate=0.008731907739399206, n_hidden=0, n_neurons=21; total
time= 3.3s
Epoch 1/100
123/242 [==============>…] - ETA: 0s - loss: 3.7717
/usr/local/lib/python3.9/site-
packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:374: UserWarning:
The `lr` argument is deprecated, use `learning_rate` instead.
warnings.warn(
242/242 [==============================] - 0s 810us/step - loss: 2.7871 -
val_loss: 0.5742
Epoch 2/100
242/242 [==============================] - 0s 599us/step - loss: 0.5753 -
val_loss: 6.7367
Epoch 3/100
242/242 [==============================] - 0s 606us/step - loss: 0.5774 -
val_loss: 6.5227
Epoch 4/100
242/242 [==============================] - 0s 608us/step - loss: 0.6096 -
val_loss: 19.7082
Epoch 5/100
242/242 [==============================] - 0s 610us/step - loss: 0.4987 -
val_loss: 205.7216
Epoch 6/100
242/242 [==============================] - 0s 603us/step - loss: 1.3101 -
val_loss: 282.6049
Epoch 7/100
242/242 [==============================] - 0s 625us/step - loss: 1.6634 -
val_loss: 656.3256
Epoch 8/100
242/242 [==============================] - 0s 631us/step - loss: 20.9502 -
val_loss: 1380.0128
Epoch 9/100
242/242 [==============================] - 0s 632us/step - loss: 8.9660 -
val_loss: 2817.4570
Epoch 10/100
242/242 [==============================] - 0s 617us/step - loss: 4.7009 -
val_loss: 4499.3882
Epoch 11/100
242/242 [==============================] - 0s 628us/step - loss: 189.1043 -
val_loss: 8457.8770
121/121 [==============================] - 0s 371us/step - loss: 11.0521
[CV] END learning_rate=0.008731907739399206, n_hidden=0, n_neurons=21; total
time= 1.9s
Epoch 1/100
1/242 […] - ETA: 38s - loss: 4.5980
/usr/local/lib/python3.9/site-
packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:374: UserWarning:
The `lr` argument is deprecated, use `learning_rate` instead.
warnings.warn(
242/242 [==============================] - 0s 996us/step - loss: 3.4030 -
val_loss: 2.6033
Epoch 2/100
242/242 [==============================] - 0s 755us/step - loss: 1.1446 -
val_loss: 1.0424
Epoch 3/100
242/242 [==============================] - 0s 754us/step - loss: 0.8391 -
val_loss: 0.7507
Epoch 4/100
242/242 [==============================] - 0s 730us/step - loss: 0.7445 -
val_loss: 0.6758
Epoch 5/100
242/242 [==============================] - 0s 726us/step - loss: 0.6914 -
val_loss: 0.6484
Epoch 6/100
242/242 [==============================] - 0s 734us/step - loss: 0.6403 -
val_loss: 0.6241
Epoch 7/100
242/242 [==============================] - 0s 739us/step - loss: 0.6369 -
val_loss: 0.6073
Epoch 8/100
242/242 [==============================] - 0s 735us/step - loss: 0.5994 -
val_loss: 0.5826
Epoch 9/100
242/242 [==============================] - 0s 739us/step - loss: 0.5732 -
val_loss: 0.5597
Epoch 10/100
242/242 [==============================] - 0s 735us/step - loss: 0.5738 -
val_loss: 0.5445
Epoch 11/100
242/242 [==============================] - 0s 755us/step - loss: 0.5434 -
val_loss: 0.5314
Epoch 12/100
242/242 [==============================] - 0s 745us/step - loss: 0.5087 -
val_loss: 0.5147
Epoch 13/100
242/242 [==============================] - 0s 749us/step - loss: 0.5376 -
val_loss: 0.5030
Epoch 14/100
242/242 [==============================] - 0s 745us/step - loss: 0.5223 -
val_loss: 0.4904
Epoch 15/100
242/242 [==============================] - 0s 738us/step - loss: 0.5138 -
val_loss: 0.4791
Epoch 16/100
242/242 [==============================] - 0s 762us/step - loss: 0.4883 -
val_loss: 0.4695
Epoch 17/100
242/242 [==============================] - 0s 739us/step - loss: 0.5036 -
val_loss: 0.4608
Epoch 18/100
242/242 [==============================] - 0s 723us/step - loss: 0.4899 -
val_loss: 0.4524
Epoch 19/100
242/242 [==============================] - 0s 746us/step - loss: 0.4692 -
val_loss: 0.4476
Epoch 20/100
242/242 [==============================] - 0s 728us/step - loss: 0.4681 -
val_loss: 0.4383
Epoch 21/100
242/242 [==============================] - 0s 763us/step - loss: 0.4555 -
val_loss: 0.4355
Epoch 22/100
242/242 [==============================] - 0s 740us/step - loss: 0.4393 -
val_loss: 0.4282
Epoch 23/100
242/242 [==============================] - 0s 736us/step - loss: 0.4564 -
val_loss: 0.4230
Epoch 24/100
242/242 [==============================] - 0s 730us/step - loss: 0.4370 -
val_loss: 0.4166
Epoch 25/100
242/242 [==============================] - 0s 733us/step - loss: 0.4397 -
val_loss: 0.4161
Epoch 26/100
242/242 [==============================] - 0s 724us/step - loss: 0.4248 -
val_loss: 0.4142
Epoch 27/100
242/242 [==============================] - 0s 734us/step - loss: 0.4414 -
val_loss: 0.4100
Epoch 28/100
242/242 [==============================] - 0s 732us/step - loss: 0.4322 -
val_loss: 0.4132
Epoch 29/100
242/242 [==============================] - 0s 739us/step - loss: 0.4282 -
val_loss: 0.4103
Epoch 30/100
242/242 [==============================] - 0s 729us/step - loss: 0.4197 -
val_loss: 0.4032
Epoch 31/100
242/242 [==============================] - 0s 731us/step - loss: 0.4054 -
val_loss: 0.3964
Epoch 32/100
242/242 [==============================] - 0s 726us/step - loss: 0.4043 -
val_loss: 0.3956
Epoch 33/100
242/242 [==============================] - 0s 747us/step - loss: 0.3994 -
val_loss: 0.4013
Epoch 34/100
242/242 [==============================] - 0s 736us/step - loss: 0.4002 -
val_loss: 0.4004
Epoch 35/100
242/242 [==============================] - 0s 742us/step - loss: 0.3965 -
val_loss: 0.3913
Epoch 36/100
242/242 [==============================] - 0s 733us/step - loss: 0.3933 -
val_loss: 0.3986
Epoch 37/100
242/242 [==============================] - 0s 747us/step - loss: 0.3963 -
val_loss: 0.3871
Epoch 38/100
242/242 [==============================] - 0s 762us/step - loss: 0.3867 -
val_loss: 0.3998
Epoch 39/100
242/242 [==============================] - 0s 765us/step - loss: 0.3919 -
val_loss: 0.3858
Epoch 40/100
242/242 [==============================] - 0s 774us/step - loss: 0.3729 -
val_loss: 0.3967
Epoch 41/100
242/242 [==============================] - 0s 766us/step - loss: 0.3850 -
val_loss: 0.3918
Epoch 42/100
242/242 [==============================] - 0s 758us/step - loss: 0.3933 -
val_loss: 0.3866
Epoch 43/100
242/242 [==============================] - 0s 733us/step - loss: 0.3862 -
val_loss: 0.3800
Epoch 44/100
242/242 [==============================] - 0s 748us/step - loss: 0.3803 -
val_loss: 0.3997
Epoch 45/100
242/242 [==============================] - 0s 770us/step - loss: 0.3894 -
val_loss: 0.3861
Epoch 46/100
242/242 [==============================] - 0s 758us/step - loss: 0.3701 -
val_loss: 0.3805
Epoch 47/100
242/242 [==============================] - 0s 765us/step - loss: 0.3769 -
val_loss: 0.3919
Epoch 48/100
242/242 [==============================] - 0s 757us/step - loss: 0.3630 -
val_loss: 0.3826
Epoch 49/100
242/242 [==============================] - 0s 749us/step - loss: 0.3836 -
val_loss: 0.3812
Epoch 50/100
242/242 [==============================] - 0s 748us/step - loss: 0.3705 -
val_loss: 0.3905
Epoch 51/100
242/242 [==============================] - 0s 761us/step - loss: 0.3599 -
val_loss: 0.3832
Epoch 52/100
242/242 [==============================] - 0s 743us/step - loss: 0.3842 -
val_loss: 0.3827
Epoch 53/100
242/242 [==============================] - 0s 746us/step - loss: 0.3698 -
val_loss: 0.3859
121/121 [==============================] - 0s 397us/step - loss: 0.3865
[CV] END learning_rate=0.0006154014789262348, n_hidden=2, n_neurons=87; total
time= 10.0s
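Notice that every fit stops well before the 100-epoch limit (53, 15, 20, 11 epochs above), which is consistent with an `EarlyStopping` callback monitoring validation loss. A minimal sketch of that setup — the `patience` value and `restore_best_weights` flag are assumptions, not confirmed by these logs:

```python
from tensorflow import keras

# Halt training once val_loss has not improved for `patience` epochs,
# and roll the model back to its best observed weights.
early_stopping_cb = keras.callbacks.EarlyStopping(patience=10,
                                                  restore_best_weights=True)

# Typical use inside each search fit (names are illustrative):
# model.fit(X_train, y_train, epochs=100,
#           validation_data=(X_valid, y_valid),
#           callbacks=[early_stopping_cb])
```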
Epoch 1/100
1/242 […] - ETA: 37s - loss: 6.4054
/usr/local/lib/python3.9/site-
packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:374: UserWarning:
The `lr` argument is deprecated, use `learning_rate` instead.
warnings.warn(
242/242 [==============================] - 0s 992us/step - loss: 3.8639 -
val_loss: 17.5435
Epoch 2/100
242/242 [==============================] - 0s 737us/step - loss: 1.2643 -
val_loss: 15.4502
Epoch 3/100
242/242 [==============================] - 0s 739us/step - loss: 0.8537 -
val_loss: 11.1084
Epoch 4/100
242/242 [==============================] - 0s 749us/step - loss: 0.7261 -
val_loss: 8.0885
Epoch 5/100
242/242 [==============================] - 0s 764us/step - loss: 0.6709 -
val_loss: 6.1076
Epoch 6/100
242/242 [==============================] - 0s 737us/step - loss: 0.6338 -
val_loss: 4.7302
Epoch 7/100
242/242 [==============================] - 0s 762us/step - loss: 0.6050 -
val_loss: 3.6783
Epoch 8/100
242/242 [==============================] - 0s 757us/step - loss: 0.5728 -
val_loss: 2.8274
Epoch 9/100
242/242 [==============================] - 0s 741us/step - loss: 0.5618 -
val_loss: 2.2526
Epoch 10/100
242/242 [==============================] - 0s 749us/step - loss: 0.5420 -
val_loss: 1.7966
Epoch 11/100
242/242 [==============================] - 0s 759us/step - loss: 0.5264 -
val_loss: 1.4646
Epoch 12/100
242/242 [==============================] - 0s 760us/step - loss: 0.5001 -
val_loss: 1.1656
Epoch 13/100
242/242 [==============================] - 0s 753us/step - loss: 0.4951 -
val_loss: 0.9599
Epoch 14/100
242/242 [==============================] - 0s 743us/step - loss: 0.4816 -
val_loss: 0.8400
Epoch 15/100
242/242 [==============================] - 0s 762us/step - loss: 0.5011 -
val_loss: 0.7148
Epoch 16/100
242/242 [==============================] - 0s 758us/step - loss: 0.4759 -
val_loss: 0.6408
Epoch 17/100
242/242 [==============================] - 0s 783us/step - loss: 0.4740 -
val_loss: 0.5679
Epoch 18/100
242/242 [==============================] - 0s 778us/step - loss: 0.4779 -
val_loss: 0.5264
Epoch 19/100
242/242 [==============================] - 0s 779us/step - loss: 0.4594 -
val_loss: 0.4894
Epoch 20/100
242/242 [==============================] - 0s 781us/step - loss: 0.4491 -
val_loss: 0.4711
Epoch 21/100
242/242 [==============================] - 0s 769us/step - loss: 0.4446 -
val_loss: 0.4525
Epoch 22/100
242/242 [==============================] - 0s 755us/step - loss: 0.4432 -
val_loss: 0.4467
Epoch 23/100
242/242 [==============================] - 0s 748us/step - loss: 0.4445 -
val_loss: 0.4404
Epoch 24/100
242/242 [==============================] - 0s 741us/step - loss: 0.4271 -
val_loss: 0.4333
Epoch 25/100
242/242 [==============================] - 0s 737us/step - loss: 0.4087 -
val_loss: 0.4303
Epoch 26/100
242/242 [==============================] - 0s 743us/step - loss: 0.4180 -
val_loss: 0.4284
Epoch 27/100
242/242 [==============================] - 0s 732us/step - loss: 0.4333 -
val_loss: 0.4270
Epoch 28/100
242/242 [==============================] - 0s 754us/step - loss: 0.4217 -
val_loss: 0.4269
Epoch 29/100
242/242 [==============================] - 0s 745us/step - loss: 0.4192 -
val_loss: 0.4416
Epoch 30/100
242/242 [==============================] - 0s 757us/step - loss: 0.4192 -
val_loss: 0.4363
Epoch 31/100
242/242 [==============================] - 0s 749us/step - loss: 0.4109 -
val_loss: 0.4330
Epoch 32/100
242/242 [==============================] - 0s 759us/step - loss: 0.4095 -
val_loss: 0.4407
Epoch 33/100
242/242 [==============================] - 0s 746us/step - loss: 0.4074 -
val_loss: 0.4484
Epoch 34/100
242/242 [==============================] - 0s 746us/step - loss: 0.3977 -
val_loss: 0.4646
Epoch 35/100
242/242 [==============================] - 0s 756us/step - loss: 0.3947 -
val_loss: 0.4789
Epoch 36/100
242/242 [==============================] - 0s 755us/step - loss: 0.3863 -
val_loss: 0.4745
Epoch 37/100
242/242 [==============================] - 0s 733us/step - loss: 0.3977 -
val_loss: 0.4971
Epoch 38/100
242/242 [==============================] - 0s 720us/step - loss: 0.3813 -
val_loss: 0.5131
121/121 [==============================] - 0s 400us/step - loss: 0.4088
[CV] END learning_rate=0.0006154014789262348, n_hidden=2, n_neurons=87; total
time= 7.3s
Epoch 1/100
1/242 […] - ETA: 37s - loss: 7.1338
/usr/local/lib/python3.9/site-
packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:374: UserWarning:
The `lr` argument is deprecated, use `learning_rate` instead.
warnings.warn(
242/242 [==============================] - 0s 1ms/step - loss: 3.9716 -
val_loss: 2.0961
Epoch 2/100
242/242 [==============================] - 0s 748us/step - loss: 1.2441 -
val_loss: 1.2079
Epoch 3/100
242/242 [==============================] - 0s 746us/step - loss: 0.8593 -
val_loss: 0.8075
Epoch 4/100
242/242 [==============================] - 0s 736us/step - loss: 0.7813 -
val_loss: 0.7207
Epoch 5/100
242/242 [==============================] - 0s 746us/step - loss: 0.7201 -
val_loss: 0.6952
Epoch 6/100
242/242 [==============================] - 0s 759us/step - loss: 0.6827 -
val_loss: 0.6614
Epoch 7/100
242/242 [==============================] - 0s 739us/step - loss: 0.6703 -
val_loss: 0.6378
Epoch 8/100
242/242 [==============================] - 0s 752us/step - loss: 0.6521 -
val_loss: 0.6132
Epoch 9/100
242/242 [==============================] - 0s 762us/step - loss: 0.6350 -
val_loss: 0.6043
Epoch 10/100
242/242 [==============================] - 0s 761us/step - loss: 0.5927 -
val_loss: 0.5937
Epoch 11/100
242/242 [==============================] - 0s 756us/step - loss: 0.5958 -
val_loss: 0.5658
Epoch 12/100
242/242 [==============================] - 0s 752us/step - loss: 0.5681 -
val_loss: 0.5551
Epoch 13/100
242/242 [==============================] - 0s 745us/step - loss: 0.5353 -
val_loss: 0.5476
Epoch 14/100
242/242 [==============================] - 0s 757us/step - loss: 0.5369 -
val_loss: 0.5450
Epoch 15/100
242/242 [==============================] - 0s 752us/step - loss: 0.5528 -
val_loss: 0.5314
Epoch 16/100
242/242 [==============================] - 0s 742us/step - loss: 0.5318 -
val_loss: 0.5067
Epoch 17/100
242/242 [==============================] - 0s 774us/step - loss: 0.5005 -
val_loss: 0.4983
Epoch 18/100
242/242 [==============================] - 0s 766us/step - loss: 0.5180 -
val_loss: 0.4873
Epoch 19/100
242/242 [==============================] - 0s 738us/step - loss: 0.5039 -
val_loss: 0.4748
Epoch 20/100
242/242 [==============================] - 0s 756us/step - loss: 0.4846 -
val_loss: 0.4767
Epoch 21/100
242/242 [==============================] - 0s 768us/step - loss: 0.4711 -
val_loss: 0.4719
Epoch 22/100
242/242 [==============================] - 0s 752us/step - loss: 0.4926 -
val_loss: 0.4623
Epoch 23/100
242/242 [==============================] - 0s 756us/step - loss: 0.4587 -
val_loss: 0.4640
Epoch 24/100
242/242 [==============================] - 0s 756us/step - loss: 0.4524 -
val_loss: 0.4777
Epoch 25/100
242/242 [==============================] - 0s 744us/step - loss: 0.4561 -
val_loss: 0.4488
Epoch 26/100
242/242 [==============================] - 0s 755us/step - loss: 0.4432 -
val_loss: 0.4475
Epoch 27/100
242/242 [==============================] - 0s 747us/step - loss: 0.4535 -
val_loss: 0.4420
Epoch 28/100
242/242 [==============================] - 0s 770us/step - loss: 0.4486 -
val_loss: 0.4449
Epoch 29/100
242/242 [==============================] - 0s 765us/step - loss: 0.4329 -
val_loss: 0.4581
Epoch 30/100
242/242 [==============================] - 0s 765us/step - loss: 0.4322 -
val_loss: 0.4385
Epoch 31/100
242/242 [==============================] - 0s 761us/step - loss: 0.4157 -
val_loss: 0.4226
Epoch 32/100
242/242 [==============================] - 0s 752us/step - loss: 0.4463 -
val_loss: 0.4458
Epoch 33/100
242/242 [==============================] - 0s 758us/step - loss: 0.4255 -
val_loss: 0.4242
Epoch 34/100
242/242 [==============================] - 0s 761us/step - loss: 0.4045 -
val_loss: 0.4542
Epoch 35/100
242/242 [==============================] - 0s 765us/step - loss: 0.4014 -
val_loss: 0.4279
Epoch 36/100
242/242 [==============================] - 0s 756us/step - loss: 0.4058 -
val_loss: 0.4341
Epoch 37/100
242/242 [==============================] - 0s 745us/step - loss: 0.4116 -
val_loss: 0.4189
Epoch 38/100
242/242 [==============================] - 0s 753us/step - loss: 0.4083 -
val_loss: 0.4344
Epoch 39/100
242/242 [==============================] - 0s 743us/step - loss: 0.4144 -
val_loss: 0.4235
Epoch 40/100
242/242 [==============================] - 0s 746us/step - loss: 0.4052 -
val_loss: 0.4183
Epoch 41/100
242/242 [==============================] - 0s 743us/step - loss: 0.3805 -
val_loss: 0.4552
Epoch 42/100
242/242 [==============================] - 0s 770us/step - loss: 0.3908 -
val_loss: 0.4411
Epoch 43/100
242/242 [==============================] - 0s 768us/step - loss: 0.3971 -
val_loss: 0.4073
Epoch 44/100
242/242 [==============================] - 0s 748us/step - loss: 0.4066 -
val_loss: 0.4294
Epoch 45/100
242/242 [==============================] - 0s 753us/step - loss: 0.3865 -
val_loss: 0.4238
Epoch 46/100
242/242 [==============================] - 0s 745us/step - loss: 0.3847 -
val_loss: 0.4128
Epoch 47/100
242/242 [==============================] - 0s 748us/step - loss: 0.3953 -
val_loss: 0.3977
Epoch 48/100
242/242 [==============================] - 0s 748us/step - loss: 0.3861 -
val_loss: 0.4028
Epoch 49/100
242/242 [==============================] - 0s 754us/step - loss: 0.3941 -
val_loss: 0.4362
Epoch 50/100
242/242 [==============================] - 0s 742us/step - loss: 0.3816 -
val_loss: 0.4235
Epoch 51/100
242/242 [==============================] - 0s 737us/step - loss: 0.3967 -
val_loss: 0.4171
Epoch 52/100
242/242 [==============================] - 0s 740us/step - loss: 0.3808 -
val_loss: 0.4273
Epoch 53/100
242/242 [==============================] - 0s 748us/step - loss: 0.3852 -
val_loss: 0.4076
Epoch 54/100
242/242 [==============================] - 0s 760us/step - loss: 0.3787 -
val_loss: 0.3885
Epoch 55/100
242/242 [==============================] - 0s 744us/step - loss: 0.3765 -
val_loss: 0.4003
Epoch 56/100
242/242 [==============================] - 0s 754us/step - loss: 0.3831 -
val_loss: 0.4176
Epoch 57/100
242/242 [==============================] - 0s 766us/step - loss: 0.3727 -
val_loss: 0.4201
Epoch 58/100
242/242 [==============================] - 0s 765us/step - loss: 0.3825 -
val_loss: 0.4177
Epoch 59/100
242/242 [==============================] - 0s 774us/step - loss: 0.3836 -
val_loss: 0.4166
Epoch 60/100
242/242 [==============================] - 0s 752us/step - loss: 0.3914 -
val_loss: 0.3910
Epoch 61/100
242/242 [==============================] - 0s 762us/step - loss: 0.3911 -
val_loss: 0.4094
Epoch 62/100
242/242 [==============================] - 0s 758us/step - loss: 0.3759 -
val_loss: 0.4363
Epoch 63/100
242/242 [==============================] - 0s 763us/step - loss: 0.3712 -
val_loss: 0.4025
Epoch 64/100
242/242 [==============================] - 0s 761us/step - loss: 0.3880 -
val_loss: 0.4028
121/121 [==============================] - 0s 409us/step - loss: 0.3737
[CV] END learning_rate=0.0006154014789262348, n_hidden=2, n_neurons=87; total
time= 12.1s
Epoch 1/100
1/242 […] - ETA: 40s - loss: 3.7904
/usr/local/lib/python3.9/site-
packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:374: UserWarning:
The `lr` argument is deprecated, use `learning_rate` instead.
warnings.warn(
242/242 [==============================] - 0s 998us/step - loss: 3.0172 -
val_loss: 7.9722
Epoch 2/100
242/242 [==============================] - 0s 739us/step - loss: 1.1314 -
val_loss: 5.6563
Epoch 3/100
242/242 [==============================] - 0s 750us/step - loss: 0.8873 -
val_loss: 4.1443
Epoch 4/100
242/242 [==============================] - 0s 739us/step - loss: 0.7990 -
val_loss: 3.1169
Epoch 5/100
242/242 [==============================] - 0s 747us/step - loss: 0.7510 -
val_loss: 2.6199
Epoch 6/100
242/242 [==============================] - 0s 742us/step - loss: 0.7433 -
val_loss: 2.2830
Epoch 7/100
242/242 [==============================] - 0s 732us/step - loss: 0.7276 -
val_loss: 1.9726
Epoch 8/100
242/242 [==============================] - 0s 730us/step - loss: 0.6950 -
val_loss: 1.7536
Epoch 9/100
242/242 [==============================] - 0s 735us/step - loss: 0.6651 -
val_loss: 1.5654
Epoch 10/100
242/242 [==============================] - 0s 743us/step - loss: 0.6774 -
val_loss: 1.4316
Epoch 11/100
242/242 [==============================] - 0s 731us/step - loss: 0.6459 -
val_loss: 1.3165
Epoch 12/100
242/242 [==============================] - 0s 731us/step - loss: 0.6046 -
val_loss: 1.2101
Epoch 13/100
242/242 [==============================] - 0s 729us/step - loss: 0.6378 -
val_loss: 1.1236
Epoch 14/100
242/242 [==============================] - 0s 733us/step - loss: 0.6303 -
val_loss: 1.0591
Epoch 15/100
242/242 [==============================] - 0s 743us/step - loss: 0.6127 -
val_loss: 0.9875
Epoch 16/100
242/242 [==============================] - 0s 743us/step - loss: 0.5844 -
val_loss: 0.9345
Epoch 17/100
242/242 [==============================] - 0s 728us/step - loss: 0.6108 -
val_loss: 0.8832
Epoch 18/100
242/242 [==============================] - 0s 739us/step - loss: 0.5836 -
val_loss: 0.8424
Epoch 19/100
242/242 [==============================] - 0s 751us/step - loss: 0.5577 -
val_loss: 0.8079
Epoch 20/100
242/242 [==============================] - 0s 758us/step - loss: 0.5619 -
val_loss: 0.7646
Epoch 21/100
242/242 [==============================] - 0s 727us/step - loss: 0.5377 -
val_loss: 0.7347
Epoch 22/100
242/242 [==============================] - 0s 749us/step - loss: 0.5261 -
val_loss: 0.7075
Epoch 23/100
242/242 [==============================] - 0s 736us/step - loss: 0.5350 -
val_loss: 0.6815
Epoch 24/100
242/242 [==============================] - 0s 727us/step - loss: 0.5126 -
val_loss: 0.6537
Epoch 25/100
242/242 [==============================] - 0s 735us/step - loss: 0.5209 -
val_loss: 0.6361
Epoch 26/100
242/242 [==============================] - 0s 731us/step - loss: 0.4920 -
val_loss: 0.6174
Epoch 27/100
242/242 [==============================] - 0s 750us/step - loss: 0.5056 -
val_loss: 0.6011
Epoch 28/100
242/242 [==============================] - 0s 736us/step - loss: 0.4962 -
val_loss: 0.5887
Epoch 29/100
242/242 [==============================] - 0s 744us/step - loss: 0.4973 -
val_loss: 0.5778
Epoch 30/100
242/242 [==============================] - 0s 744us/step - loss: 0.4783 -
val_loss: 0.5671
Epoch 31/100
242/242 [==============================] - 0s 737us/step - loss: 0.4640 -
val_loss: 0.5557
Epoch 32/100
242/242 [==============================] - 0s 728us/step - loss: 0.4596 -
val_loss: 0.5475
Epoch 33/100
242/242 [==============================] - 0s 734us/step - loss: 0.4550 -
val_loss: 0.5403
Epoch 34/100
242/242 [==============================] - 0s 740us/step - loss: 0.4566 -
val_loss: 0.5322
Epoch 35/100
242/242 [==============================] - 0s 726us/step - loss: 0.4479 -
val_loss: 0.5250
Epoch 36/100
242/242 [==============================] - 0s 735us/step - loss: 0.4390 -
val_loss: 0.5165
Epoch 37/100
242/242 [==============================] - 0s 729us/step - loss: 0.4465 -
val_loss: 0.5106
Epoch 38/100
242/242 [==============================] - 0s 733us/step - loss: 0.4261 -
val_loss: 0.5053
Epoch 39/100
242/242 [==============================] - 0s 730us/step - loss: 0.4368 -
val_loss: 0.5004
Epoch 40/100
242/242 [==============================] - 0s 731us/step - loss: 0.4189 -
val_loss: 0.4966
Epoch 41/100
242/242 [==============================] - 0s 731us/step - loss: 0.4284 -
val_loss: 0.4922
Epoch 42/100
242/242 [==============================] - 0s 734us/step - loss: 0.4333 -
val_loss: 0.4891
Epoch 43/100
242/242 [==============================] - 0s 745us/step - loss: 0.4217 -
val_loss: 0.4850
Epoch 44/100
242/242 [==============================] - 0s 737us/step - loss: 0.4198 -
val_loss: 0.4854
Epoch 45/100
242/242 [==============================] - 0s 742us/step - loss: 0.4223 -
val_loss: 0.4828
Epoch 46/100
242/242 [==============================] - 0s 737us/step - loss: 0.4080 -
val_loss: 0.4779
Epoch 47/100
242/242 [==============================] - 0s 720us/step - loss: 0.4182 -
val_loss: 0.4783
Epoch 48/100
242/242 [==============================] - 0s 743us/step - loss: 0.3936 -
val_loss: 0.4755
Epoch 49/100
242/242 [==============================] - 0s 737us/step - loss: 0.4146 -
val_loss: 0.4765
Epoch 50/100
242/242 [==============================] - 0s 733us/step - loss: 0.4050 -
val_loss: 0.4753
Epoch 51/100
242/242 [==============================] - 0s 726us/step - loss: 0.3944 -
val_loss: 0.4714
Epoch 52/100
242/242 [==============================] - 0s 740us/step - loss: 0.4134 -
val_loss: 0.4726
Epoch 53/100
242/242 [==============================] - 0s 735us/step - loss: 0.3958 -
val_loss: 0.4721
Epoch 54/100
242/242 [==============================] - 0s 729us/step - loss: 0.4072 -
val_loss: 0.4708
Epoch 55/100
242/242 [==============================] - 0s 744us/step - loss: 0.3842 -
val_loss: 0.4703
Epoch 56/100
242/242 [==============================] - 0s 728us/step - loss: 0.4246 -
val_loss: 0.4713
Epoch 57/100
242/242 [==============================] - 0s 744us/step - loss: 0.3932 -
val_loss: 0.4704
Epoch 58/100
242/242 [==============================] - 0s 739us/step - loss: 0.4030 -
val_loss: 0.4718
Epoch 59/100
242/242 [==============================] - 0s 736us/step - loss: 0.4073 -
val_loss: 0.4712
Epoch 60/100
242/242 [==============================] - 0s 732us/step - loss: 0.3927 -
val_loss: 0.4701
Epoch 61/100
242/242 [==============================] - 0s 734us/step - loss: 0.3792 -
val_loss: 0.4718
Epoch 62/100
242/242 [==============================] - 0s 728us/step - loss: 0.3849 -
val_loss: 0.4716
Epoch 63/100
242/242 [==============================] - 0s 747us/step - loss: 0.3975 -
val_loss: 0.4704
Epoch 64/100
242/242 [==============================] - 0s 745us/step - loss: 0.3874 -
val_loss: 0.4735
Epoch 65/100
242/242 [==============================] - 0s 748us/step - loss: 0.3619 -
val_loss: 0.4738
Epoch 66/100
242/242 [==============================] - 0s 748us/step - loss: 0.4170 -
val_loss: 0.4729
Epoch 67/100
242/242 [==============================] - 0s 748us/step - loss: 0.3865 -
val_loss: 0.4716
Epoch 68/100
242/242 [==============================] - 0s 744us/step - loss: 0.3869 -
val_loss: 0.4731
Epoch 69/100
242/242 [==============================] - 0s 746us/step - loss: 0.3941 -
val_loss: 0.4720
Epoch 70/100
242/242 [==============================] - 0s 747us/step - loss: 0.4128 -
val_loss: 0.4721
121/121 [==============================] - 0s 395us/step - loss: 0.4001
[CV] END learning_rate=0.0003920021771415983, n_hidden=3, n_neurons=24; total
time= 12.9s
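The `[CV] END …` lines above come from a randomized hyperparameter search over the learning rate, the number of hidden layers, and the number of neurons per layer; each sampled configuration is fitted once per cross-validation fold, which is why the same `learning_rate=0.000392…, n_hidden=3, n_neurons=24` combination appears several times. As a rough sketch of how such candidates are drawn (the actual distributions used for this run are not shown in the log, so the bounds below are assumptions), scikit-learn's `ParameterSampler` can be used directly:

```python
from scipy.stats import reciprocal, randint
from sklearn.model_selection import ParameterSampler

# Assumed search space -- the true ranges are not visible in the log.
param_distribs = {
    "n_hidden": randint(0, 8),                # number of hidden layers
    "n_neurons": randint(1, 100),             # neurons per hidden layer
    "learning_rate": reciprocal(3e-4, 3e-2),  # log-uniform learning rate
}

# Draw 10 candidate configurations, as RandomizedSearchCV does internally.
candidates = list(ParameterSampler(param_distribs, n_iter=10, random_state=42))
for params in candidates[:3]:
    print(params)
```

With 3-fold cross-validation, each of these 10 candidates would trigger 3 separate `fit` calls, producing 30 training logs like the ones in this section.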
Epoch 1/100
1/242 […] - ETA: 41s - loss: 6.5298
/usr/local/lib/python3.9/site-
packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:374: UserWarning:
The `lr` argument is deprecated, use `learning_rate` instead.
warnings.warn(
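This UserWarning is emitted on every fit because the deprecated `lr` keyword is passed to the optimizer; renaming it to `learning_rate` removes it at the source. If rerunning is not an option, the warning can also be filtered once up front. The snippet below is a stdlib-only sketch of that filtering (the message text is copied from the log; nothing here touches TensorFlow):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Silence only the deprecation notice seen above; the message
    # argument is a regex matched against the start of the warning text.
    warnings.filterwarnings(
        "ignore",
        message=r"The `lr` argument is deprecated",
        category=UserWarning,
    )
    warnings.warn("The `lr` argument is deprecated, use `learning_rate` "
                  "instead.", UserWarning)
    warnings.warn("an unrelated warning", UserWarning)

# Only the unrelated warning survives the filter.
print(len(caught), str(caught[0].message))
```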
242/242 [==============================] - 0s 1ms/step - loss: 4.6799 -
val_loss: 28.0492
Epoch 2/100
242/242 [==============================] - 0s 744us/step - loss: 2.2876 -
val_loss: 43.0472
Epoch 3/100
242/242 [==============================] - 0s 757us/step - loss: 1.7123 -
val_loss: 37.0128
Epoch 4/100
242/242 [==============================] - 0s 759us/step - loss: 1.4436 -
val_loss: 28.7538
Epoch 5/100
242/242 [==============================] - 0s 753us/step - loss: 1.1760 -
val_loss: 20.6120
Epoch 6/100
242/242 [==============================] - 0s 741us/step - loss: 1.0429 -
val_loss: 14.6245
Epoch 7/100
242/242 [==============================] - 0s 736us/step - loss: 0.9553 -
val_loss: 10.5960
Epoch 8/100
242/242 [==============================] - 0s 738us/step - loss: 0.8539 -
val_loss: 7.2861
Epoch 9/100
242/242 [==============================] - 0s 735us/step - loss: 0.8283 -
val_loss: 5.1836
Epoch 10/100
242/242 [==============================] - 0s 735us/step - loss: 0.7813 -
val_loss: 3.7344
Epoch 11/100
242/242 [==============================] - 0s 739us/step - loss: 0.7572 -
val_loss: 2.7778
Epoch 12/100
242/242 [==============================] - 0s 737us/step - loss: 0.7330 -
val_loss: 1.9391
Epoch 13/100
242/242 [==============================] - 0s 735us/step - loss: 0.7016 -
val_loss: 1.4673
Epoch 14/100
242/242 [==============================] - 0s 743us/step - loss: 0.6931 -
val_loss: 1.2321
Epoch 15/100
242/242 [==============================] - 0s 728us/step - loss: 0.7247 -
val_loss: 0.9812
Epoch 16/100
242/242 [==============================] - 0s 743us/step - loss: 0.6746 -
val_loss: 0.8534
Epoch 17/100
242/242 [==============================] - 0s 730us/step - loss: 0.6902 -
val_loss: 0.7166
Epoch 18/100
242/242 [==============================] - 0s 745us/step - loss: 0.6639 -
val_loss: 0.6424
Epoch 19/100
242/242 [==============================] - 0s 713us/step - loss: 0.6338 -
val_loss: 0.5949
Epoch 20/100
242/242 [==============================] - 0s 743us/step - loss: 0.6267 -
val_loss: 0.5764
Epoch 21/100
242/242 [==============================] - 0s 740us/step - loss: 0.6066 -
val_loss: 0.5809
Epoch 22/100
242/242 [==============================] - 0s 743us/step - loss: 0.6149 -
val_loss: 0.6027
Epoch 23/100
242/242 [==============================] - 0s 731us/step - loss: 0.5979 -
val_loss: 0.6369
Epoch 24/100
242/242 [==============================] - 0s 745us/step - loss: 0.5640 -
val_loss: 0.6922
Epoch 25/100
242/242 [==============================] - 0s 738us/step - loss: 0.5534 -
val_loss: 0.7604
Epoch 26/100
242/242 [==============================] - 0s 742us/step - loss: 0.5521 -
val_loss: 0.8304
Epoch 27/100
242/242 [==============================] - 0s 752us/step - loss: 0.5661 -
val_loss: 0.8810
Epoch 28/100
242/242 [==============================] - 0s 737us/step - loss: 0.5517 -
val_loss: 0.9624
Epoch 29/100
242/242 [==============================] - 0s 739us/step - loss: 0.5452 -
val_loss: 0.9578
Epoch 30/100
242/242 [==============================] - 0s 734us/step - loss: 0.5387 -
val_loss: 1.0158
121/121 [==============================] - 0s 395us/step - loss: 0.5490
[CV] END learning_rate=0.0003920021771415983, n_hidden=3, n_neurons=24; total
time= 5.8s
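This fold stops after epoch 30 even though 100 epochs were requested: `val_loss` bottoms out at epoch 20 (0.5764) and then fails to improve for 10 consecutive epochs, which is the signature of an early-stopping callback with `patience=10` (an inference from the log; the callback configuration itself is not shown here). The patience logic can be sketched without TensorFlow:

```python
def stopped_epoch(val_losses, patience=10):
    """Return the 1-based epoch where patience-based early stopping halts,
    or None if every epoch runs."""
    best, wait = float("inf"), 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, wait = loss, 0   # new best: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# val_loss values copied from the fold above (epochs 1-30).
val_losses = [28.0492, 43.0472, 37.0128, 28.7538, 20.6120, 14.6245, 10.5960,
              7.2861, 5.1836, 3.7344, 2.7778, 1.9391, 1.4673, 1.2321, 0.9812,
              0.8534, 0.7166, 0.6424, 0.5949, 0.5764, 0.5809, 0.6027, 0.6369,
              0.6922, 0.7604, 0.8304, 0.8810, 0.9624, 0.9578, 1.0158]
print(stopped_epoch(val_losses))  # halts at epoch 30, matching the log
```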
Epoch 1/100
242/242 [==============================] - 1s 1ms/step - loss: 3.7671 -
val_loss: 4.3285
Epoch 2/100
242/242 [==============================] - 0s 784us/step - loss: 1.4168 -
val_loss: 2.8653
Epoch 3/100
242/242 [==============================] - 0s 775us/step - loss: 0.9880 -
val_loss: 1.8260
Epoch 4/100
242/242 [==============================] - 0s 779us/step - loss: 0.8960 -
val_loss: 1.2974
Epoch 5/100
242/242 [==============================] - 0s 766us/step - loss: 0.7894 -
val_loss: 0.9606
Epoch 6/100
242/242 [==============================] - 0s 761us/step - loss: 0.7342 -
val_loss: 0.7924
Epoch 7/100
242/242 [==============================] - 0s 754us/step - loss: 0.7074 -
val_loss: 0.7158
Epoch 8/100
242/242 [==============================] - 0s 759us/step - loss: 0.6971 -
val_loss: 0.6616
Epoch 9/100
242/242 [==============================] - 0s 767us/step - loss: 0.6864 -
val_loss: 0.6363
Epoch 10/100
242/242 [==============================] - 0s 778us/step - loss: 0.6453 -
val_loss: 0.6160
Epoch 11/100
242/242 [==============================] - 0s 744us/step - loss: 0.6323 -
val_loss: 0.5999
Epoch 12/100
242/242 [==============================] - 0s 767us/step - loss: 0.6111 -
val_loss: 0.5855
Epoch 13/100
242/242 [==============================] - 0s 752us/step - loss: 0.5843 -
val_loss: 0.5729
Epoch 14/100
242/242 [==============================] - 0s 749us/step - loss: 0.5785 -
val_loss: 0.5615
Epoch 15/100
242/242 [==============================] - 0s 748us/step - loss: 0.5963 -
val_loss: 0.5509
Epoch 16/100
242/242 [==============================] - 0s 755us/step - loss: 0.5864 -
val_loss: 0.5399
Epoch 17/100
242/242 [==============================] - 0s 748us/step - loss: 0.5564 -
val_loss: 0.5301
Epoch 18/100
242/242 [==============================] - 0s 763us/step - loss: 0.5697 -
val_loss: 0.5210
Epoch 19/100
242/242 [==============================] - 0s 745us/step - loss: 0.5575 -
val_loss: 0.5129
Epoch 20/100
242/242 [==============================] - 0s 744us/step - loss: 0.5358 -
val_loss: 0.5062
Epoch 21/100
242/242 [==============================] - 0s 749us/step - loss: 0.5197 -
val_loss: 0.4992
Epoch 22/100
242/242 [==============================] - 0s 752us/step - loss: 0.5448 -
val_loss: 0.4932
Epoch 23/100
242/242 [==============================] - 0s 745us/step - loss: 0.5138 -
val_loss: 0.4875
Epoch 24/100
242/242 [==============================] - 0s 748us/step - loss: 0.5078 -
val_loss: 0.4857
Epoch 25/100
242/242 [==============================] - 0s 739us/step - loss: 0.5057 -
val_loss: 0.4783
Epoch 26/100
242/242 [==============================] - 0s 745us/step - loss: 0.4962 -
val_loss: 0.4746
Epoch 27/100
242/242 [==============================] - 0s 767us/step - loss: 0.5014 -
val_loss: 0.4700
Epoch 28/100
242/242 [==============================] - 0s 757us/step - loss: 0.4995 -
val_loss: 0.4676
Epoch 29/100
242/242 [==============================] - 0s 749us/step - loss: 0.4819 -
val_loss: 0.4687
Epoch 30/100
242/242 [==============================] - 0s 744us/step - loss: 0.4860 -
val_loss: 0.4618
Epoch 31/100
242/242 [==============================] - 0s 756us/step - loss: 0.4709 -
val_loss: 0.4607
Epoch 32/100
242/242 [==============================] - 0s 765us/step - loss: 0.4939 -
val_loss: 0.4630
Epoch 33/100
242/242 [==============================] - 0s 755us/step - loss: 0.4799 -
val_loss: 0.4583
Epoch 34/100
242/242 [==============================] - 0s 754us/step - loss: 0.4540 -
val_loss: 0.4643
Epoch 35/100
242/242 [==============================] - 0s 751us/step - loss: 0.4503 -
val_loss: 0.4591
Epoch 36/100
242/242 [==============================] - 0s 766us/step - loss: 0.4635 -
val_loss: 0.4562
Epoch 37/100
242/242 [==============================] - 0s 772us/step - loss: 0.4585 -
val_loss: 0.4539
Epoch 38/100
242/242 [==============================] - 0s 751us/step - loss: 0.4549 -
val_loss: 0.4547
Epoch 39/100
242/242 [==============================] - 0s 769us/step - loss: 0.4663 -
val_loss: 0.4534
Epoch 40/100
242/242 [==============================] - 0s 746us/step - loss: 0.4478 -
val_loss: 0.4523
Epoch 41/100
242/242 [==============================] - 0s 745us/step - loss: 0.4289 -
val_loss: 0.4613
Epoch 42/100
242/242 [==============================] - 0s 748us/step - loss: 0.4342 -
val_loss: 0.4593
Epoch 43/100
242/242 [==============================] - 0s 747us/step - loss: 0.4458 -
val_loss: 0.4497
Epoch 44/100
242/242 [==============================] - 0s 737us/step - loss: 0.4526 -
val_loss: 0.4544
Epoch 45/100
242/242 [==============================] - 0s 759us/step - loss: 0.4302 -
val_loss: 0.4533
Epoch 46/100
242/242 [==============================] - 0s 751us/step - loss: 0.4310 -
val_loss: 0.4497
Epoch 47/100
242/242 [==============================] - 0s 748us/step - loss: 0.4346 -
val_loss: 0.4470
Epoch 48/100
242/242 [==============================] - 0s 762us/step - loss: 0.4268 -
val_loss: 0.4470
Epoch 49/100
242/242 [==============================] - 0s 756us/step - loss: 0.4403 -
val_loss: 0.4532
Epoch 50/100
242/242 [==============================] - 0s 745us/step - loss: 0.4206 -
val_loss: 0.4549
Epoch 51/100
242/242 [==============================] - 0s 748us/step - loss: 0.4437 -
val_loss: 0.4534
Epoch 52/100
242/242 [==============================] - 0s 744us/step - loss: 0.4239 -
val_loss: 0.4594
Epoch 53/100
242/242 [==============================] - 0s 766us/step - loss: 0.4189 -
val_loss: 0.4535
Epoch 54/100
242/242 [==============================] - 0s 768us/step - loss: 0.4209 -
val_loss: 0.4484
Epoch 55/100
242/242 [==============================] - 0s 759us/step - loss: 0.4266 -
val_loss: 0.4489
Epoch 56/100
242/242 [==============================] - 0s 760us/step - loss: 0.4238 -
val_loss: 0.4464
Epoch 57/100
242/242 [==============================] - 0s 754us/step - loss: 0.4181 -
val_loss: 0.4489
Epoch 58/100
242/242 [==============================] - 0s 744us/step - loss: 0.4279 -
val_loss: 0.4513
Epoch 59/100
242/242 [==============================] - 0s 742us/step - loss: 0.4194 -
val_loss: 0.4499
Epoch 60/100
242/242 [==============================] - 0s 778us/step - loss: 0.4358 -
val_loss: 0.4441
Epoch 61/100
242/242 [==============================] - 0s 747us/step - loss: 0.4350 -
val_loss: 0.4476
Epoch 62/100
242/242 [==============================] - 0s 740us/step - loss: 0.4226 -
val_loss: 0.4501
Epoch 63/100
242/242 [==============================] - 0s 749us/step - loss: 0.4131 -
val_loss: 0.4432
Epoch 64/100
242/242 [==============================] - 0s 743us/step - loss: 0.4273 -
val_loss: 0.4385
Epoch 65/100
242/242 [==============================] - 0s 752us/step - loss: 0.4199 -
val_loss: 0.4380
Epoch 66/100
242/242 [==============================] - 0s 748us/step - loss: 0.4050 -
val_loss: 0.4440
Epoch 67/100
242/242 [==============================] - 0s 741us/step - loss: 0.4157 -
val_loss: 0.4354
Epoch 68/100
242/242 [==============================] - 0s 726us/step - loss: 0.4271 -
val_loss: 0.4382
Epoch 69/100
242/242 [==============================] - 0s 752us/step - loss: 0.4010 -
val_loss: 0.4341
Epoch 70/100
242/242 [==============================] - 0s 738us/step - loss: 0.4111 -
val_loss: 0.4395
Epoch 71/100
242/242 [==============================] - 0s 748us/step - loss: 0.4344 -
val_loss: 0.4340
Epoch 72/100
242/242 [==============================] - 0s 756us/step - loss: 0.3948 -
val_loss: 0.4407
Epoch 73/100
242/242 [==============================] - 0s 743us/step - loss: 0.4121 -
val_loss: 0.4310
Epoch 74/100
242/242 [==============================] - 0s 745us/step - loss: 0.4067 -
val_loss: 0.4328
Epoch 75/100
242/242 [==============================] - 0s 760us/step - loss: 0.3966 -
val_loss: 0.4349
Epoch 76/100
242/242 [==============================] - 0s 747us/step - loss: 0.4146 -
val_loss: 0.4346
Epoch 77/100
242/242 [==============================] - 0s 743us/step - loss: 0.4084 -
val_loss: 0.4338
Epoch 78/100
242/242 [==============================] - 0s 756us/step - loss: 0.4064 -
val_loss: 0.4333
Epoch 79/100
242/242 [==============================] - 0s 760us/step - loss: 0.4073 -
val_loss: 0.4282
Epoch 80/100
242/242 [==============================] - 0s 766us/step - loss: 0.4009 -
val_loss: 0.4354
Epoch 81/100
242/242 [==============================] - 0s 766us/step - loss: 0.3946 -
val_loss: 0.4321
Epoch 82/100
242/242 [==============================] - 0s 748us/step - loss: 0.4032 -
val_loss: 0.4298
Epoch 83/100
242/242 [==============================] - 0s 747us/step - loss: 0.3856 -
val_loss: 0.4290
Epoch 84/100
242/242 [==============================] - 0s 738us/step - loss: 0.4027 -
val_loss: 0.4340
Epoch 85/100
242/242 [==============================] - 0s 764us/step - loss: 0.4061 -
val_loss: 0.4218
Epoch 86/100
242/242 [==============================] - 0s 753us/step - loss: 0.3864 -
val_loss: 0.4289
Epoch 87/100
242/242 [==============================] - 0s 747us/step - loss: 0.3967 -
val_loss: 0.4326
Epoch 88/100
242/242 [==============================] - 0s 747us/step - loss: 0.3844 -
val_loss: 0.4303
Epoch 89/100
242/242 [==============================] - 0s 745us/step - loss: 0.4083 -
val_loss: 0.4336
Epoch 90/100
242/242 [==============================] - 0s 743us/step - loss: 0.3898 -
val_loss: 0.4249
Epoch 91/100
242/242 [==============================] - 0s 740us/step - loss: 0.4016 -
val_loss: 0.4234
Epoch 92/100
242/242 [==============================] - 0s 734us/step - loss: 0.3923 -
val_loss: 0.4188
Epoch 93/100
242/242 [==============================] - 0s 766us/step - loss: 0.3795 -
val_loss: 0.4237
Epoch 94/100
242/242 [==============================] - 0s 753us/step - loss: 0.4013 -
val_loss: 0.4202
Epoch 95/100
242/242 [==============================] - 0s 740us/step - loss: 0.4025 -
val_loss: 0.4206
Epoch 96/100
242/242 [==============================] - 0s 738us/step - loss: 0.3884 -
val_loss: 0.4212
Epoch 97/100
242/242 [==============================] - 0s 738us/step - loss: 0.3891 -
val_loss: 0.4247
Epoch 98/100
242/242 [==============================] - 0s 750us/step - loss: 0.3781 -
val_loss: 0.4159
Epoch 99/100
242/242 [==============================] - 0s 740us/step - loss: 0.3767 -
val_loss: 0.4197
Epoch 100/100
242/242 [==============================] - 0s 748us/step - loss: 0.3783 -
val_loss: 0.4240
121/121 [==============================] - 0s 397us/step - loss: 0.3897
[CV] END learning_rate=0.0003920021771415983, n_hidden=3, n_neurons=24; total
time= 18.9s
Epoch 1/100
1/242 […] - ETA: 30s - loss: 7.7274
242/242 [==============================] - 0s 845us/step - loss: 3.7946 -
val_loss: 5.2312
Epoch 2/100
242/242 [==============================] - 0s 628us/step - loss: 0.7322 -
val_loss: 26.5013
Epoch 3/100
242/242 [==============================] - 0s 624us/step - loss: 0.7799 -
val_loss: 40.6122
Epoch 4/100
242/242 [==============================] - 0s 640us/step - loss: 0.7607 -
val_loss: 135.6917
Epoch 5/100
242/242 [==============================] - 0s 637us/step - loss: 1.0753 -
val_loss: 237.1147
Epoch 6/100
242/242 [==============================] - 0s 628us/step - loss: 5.9390 -
val_loss: 506.5565
Epoch 7/100
242/242 [==============================] - 0s 634us/step - loss: 5.5124 -
val_loss: 1165.5577
Epoch 8/100
242/242 [==============================] - 0s 624us/step - loss: 19.9616 -
val_loss: 2646.9749
Epoch 9/100
242/242 [==============================] - 0s 624us/step - loss: 12.8133 -
val_loss: 5780.9756
Epoch 10/100
242/242 [==============================] - 0s 621us/step - loss: 113.3383 -
val_loss: 13751.4189
Epoch 11/100
242/242 [==============================] - 0s 618us/step - loss: 202.5718 -
val_loss: 31633.9219
121/121 [==============================] - 0s 365us/step - loss: 81.5957
[CV] END learning_rate=0.006010328378268217, n_hidden=0, n_neurons=2; total
time= 2.0s
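This candidate (`learning_rate≈0.006` with no hidden layer and only 2 neurons) diverges: `val_loss` climbs from about 28 to over 31,000 in eleven epochs, the classic signature of a step size too large for the loss surface. A toy gradient-descent example (not from the notebook) shows the same effect on f(w) = w²:

```python
def gd_final_loss(lr, steps=20, w0=1.0):
    # Gradient descent on f(w) = w**2, whose gradient is 2*w.
    # Each step multiplies w by (1 - 2*lr), so any lr > 1 diverges.
    w = w0
    for _ in range(steps):
        w -= lr * 2 * w
    return w * w

print(gd_final_loss(0.1))   # small steps: loss shrinks toward 0
print(gd_final_loss(1.5))   # oversized steps: loss explodes, as above
```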
Epoch 1/100
1/242 […] - ETA: 31s - loss: 5.7349
242/242 [==============================] - 0s 825us/step - loss: 2.6526 -
val_loss: 14.0701
Epoch 2/100
242/242 [==============================] - 0s 616us/step - loss: 0.5878 -
val_loss: 16.8410
Epoch 3/100
242/242 [==============================] - 0s 617us/step - loss: 0.5581 -
val_loss: 19.0635
Epoch 4/100
242/242 [==============================] - 0s 620us/step - loss: 0.5405 -
val_loss: 19.7342
Epoch 5/100
242/242 [==============================] - 0s 615us/step - loss: 0.5304 -
val_loss: 20.0593
Epoch 6/100
242/242 [==============================] - 0s 623us/step - loss: 0.5191 -
val_loss: 20.2376
Epoch 7/100
242/242 [==============================] - 0s 623us/step - loss: 0.5017 -
val_loss: 20.0296
Epoch 8/100
242/242 [==============================] - 0s 629us/step - loss: 0.5172 -
val_loss: 20.3793
Epoch 9/100
242/242 [==============================] - 0s 634us/step - loss: 0.5052 -
val_loss: 20.1103
Epoch 10/100
242/242 [==============================] - 0s 624us/step - loss: 0.4871 -
val_loss: 18.4892
Epoch 11/100
242/242 [==============================] - 0s 641us/step - loss: 0.5093 -
val_loss: 19.4013
121/121 [==============================] - 0s 386us/step - loss: 0.9640
[CV] END learning_rate=0.006010328378268217, n_hidden=0, n_neurons=2; total
time= 2.0s
Epoch 1/100
1/242 […] - ETA: 31s - loss: 9.2609
242/242 [==============================] - 0s 870us/step - loss: 3.7475 -
val_loss: 13.7380
Epoch 2/100
242/242 [==============================] - 0s 641us/step - loss: 0.6583 -
val_loss: 10.0594
Epoch 3/100
242/242 [==============================] - 0s 616us/step - loss: 0.5972 -
val_loss: 41.2693
Epoch 4/100
242/242 [==============================] - 0s 660us/step - loss: 1.4270 -
val_loss: 74.9048
Epoch 5/100
242/242 [==============================] - 0s 641us/step - loss: 0.5524 -
val_loss: 205.5686
Epoch 6/100
242/242 [==============================] - 0s 632us/step - loss: 1.5429 -
val_loss: 246.7373
Epoch 7/100
242/242 [==============================] - 0s 622us/step - loss: 1.7293 -
val_loss: 388.8352
Epoch 8/100
242/242 [==============================] - 0s 619us/step - loss: 10.7464 -
val_loss: 620.5344
Epoch 9/100
242/242 [==============================] - 0s 626us/step - loss: 4.7920 -
val_loss: 919.7242
Epoch 10/100
242/242 [==============================] - 0s 641us/step - loss: 2.2121 -
val_loss: 1082.5522
Epoch 11/100
242/242 [==============================] - 0s 647us/step - loss: 35.8402 -
val_loss: 1471.0354
Epoch 12/100
242/242 [==============================] - 0s 626us/step - loss: 5.9639 -
val_loss: 1957.3052
121/121 [==============================] - 0s 370us/step - loss: 2.0491
[CV] END learning_rate=0.006010328378268217, n_hidden=0, n_neurons=2; total
time= 2.2s
Epoch 1/100
1/242 […] - ETA: 35s - loss: 9.9587
242/242 [==============================] - 0s 939us/step - loss: 2.4113 -
val_loss: 22.8633
Epoch 2/100
242/242 [==============================] - 0s 704us/step - loss: 0.7462 -
val_loss: 36.5661
Epoch 3/100
242/242 [==============================] - 0s 710us/step - loss: 0.7579 -
val_loss: 304.7437
Epoch 4/100
242/242 [==============================] - 0s 722us/step - loss: 0.8555 -
val_loss: 71.4701
Epoch 5/100
242/242 [==============================] - 0s 711us/step - loss: 0.5979 -
val_loss: 312.6012
Epoch 6/100
242/242 [==============================] - 0s 712us/step - loss: 4.4233 -
val_loss: 0.4035
Epoch 7/100
242/242 [==============================] - 0s 714us/step - loss: 0.4554 -
val_loss: 0.3813
Epoch 8/100
242/242 [==============================] - 0s 723us/step - loss: 0.3791 -
val_loss: 0.3613
Epoch 9/100
242/242 [==============================] - 0s 717us/step - loss: 0.3656 -
val_loss: 0.3613
Epoch 10/100
242/242 [==============================] - 0s 743us/step - loss: 0.3690 -
val_loss: 0.3454
Epoch 11/100
242/242 [==============================] - 0s 728us/step - loss: 0.3489 -
val_loss: 0.3520
Epoch 12/100
242/242 [==============================] - 0s 723us/step - loss: 0.3376 -
val_loss: 0.3549
Epoch 13/100
242/242 [==============================] - 0s 716us/step - loss: 0.3622 -
val_loss: 0.3438
Epoch 14/100
242/242 [==============================] - 0s 714us/step - loss: 0.3504 -
val_loss: 0.3414
Epoch 15/100
242/242 [==============================] - 0s 715us/step - loss: 0.3501 -
val_loss: 0.3418
Epoch 16/100
242/242 [==============================] - 0s 703us/step - loss: 0.3467 -
val_loss: 0.3412
Epoch 17/100
242/242 [==============================] - 0s 734us/step - loss: 0.3556 -
val_loss: 0.3395
Epoch 18/100
242/242 [==============================] - 0s 720us/step - loss: 0.3629 -
val_loss: 0.3413
Epoch 19/100
242/242 [==============================] - 0s 727us/step - loss: 0.3793 -
val_loss: 0.3405
Epoch 20/100
242/242 [==============================] - 0s 729us/step - loss: 0.3448 -
val_loss: 0.3372
Epoch 21/100
242/242 [==============================] - 0s 714us/step - loss: 0.3390 -
val_loss: 0.3418
Epoch 22/100
242/242 [==============================] - 0s 721us/step - loss: 0.3303 -
val_loss: 0.3402
Epoch 23/100
242/242 [==============================] - 0s 715us/step - loss: 0.3463 -
val_loss: 0.3732
Epoch 24/100
242/242 [==============================] - 0s 721us/step - loss: 0.3563 -
val_loss: 0.3528
Epoch 25/100
242/242 [==============================] - 0s 719us/step - loss: 0.3464 -
val_loss: 0.3602
Epoch 26/100
242/242 [==============================] - 0s 699us/step - loss: 0.3321 -
val_loss: 0.3460
Epoch 27/100
242/242 [==============================] - 0s 720us/step - loss: 0.3523 -
val_loss: 0.3553
Epoch 28/100
242/242 [==============================] - 0s 732us/step - loss: 0.3478 -
val_loss: 0.3563
Epoch 29/100
242/242 [==============================] - 0s 725us/step - loss: 0.3400 -
val_loss: 0.3557
Epoch 30/100
242/242 [==============================] - 0s 710us/step - loss: 0.3478 -
val_loss: 0.3449
121/121 [==============================] - 0s 398us/step - loss: 0.3551
[CV] END learning_rate=0.008339092654580042, n_hidden=1, n_neurons=38; total
time= 5.6s
Epoch 1/100
1/242 […] - ETA: 35s - loss: 5.8135
242/242 [==============================] - 0s 962us/step - loss: 1.4728 -
val_loss: 3.0949
Epoch 2/100
242/242 [==============================] - 0s 729us/step - loss: 0.5447 -
val_loss: 0.4712
Epoch 3/100
242/242 [==============================] - 0s 723us/step - loss: 0.4555 -
val_loss: 0.4231
Epoch 4/100
242/242 [==============================] - 0s 710us/step - loss: 0.4392 -
val_loss: 0.4021
Epoch 5/100
242/242 [==============================] - 0s 709us/step - loss: 0.4306 -
val_loss: 0.4323
Epoch 6/100
242/242 [==============================] - 0s 714us/step - loss: 0.4154 -
val_loss: 0.6513
Epoch 7/100
242/242 [==============================] - 0s 722us/step - loss: 0.3944 -
val_loss: 0.8508
Epoch 8/100
242/242 [==============================] - 0s 718us/step - loss: 0.3985 -
val_loss: 1.0201
Epoch 9/100
242/242 [==============================] - 0s 718us/step - loss: 0.3915 -
val_loss: 1.1757
Epoch 10/100
242/242 [==============================] - 0s 726us/step - loss: 0.3958 -
val_loss: 0.8698
Epoch 11/100
242/242 [==============================] - 0s 696us/step - loss: 0.3928 -
val_loss: 0.9377
Epoch 12/100
242/242 [==============================] - 0s 689us/step - loss: 0.3723 -
val_loss: 1.0793
Epoch 13/100
242/242 [==============================] - 0s 713us/step - loss: 0.3766 -
val_loss: 1.1923
Epoch 14/100
242/242 [==============================] - 0s 725us/step - loss: 0.3620 -
val_loss: 1.1186
121/121 [==============================] - 0s 401us/step - loss: 0.4037
[CV] END learning_rate=0.008339092654580042, n_hidden=1, n_neurons=38; total
time= 2.8s
Epoch 1/100
1/242 […] - ETA: 36s - loss: 7.7147
242/242 [==============================] - 0s 940us/step - loss: 1.6459 -
val_loss: 1.2874
Epoch 2/100
242/242 [==============================] - 0s 702us/step - loss: 0.5071 -
val_loss: 0.7809
Epoch 3/100
242/242 [==============================] - 0s 710us/step - loss: 0.4427 -
val_loss: 1.8555
Epoch 4/100
242/242 [==============================] - 0s 732us/step - loss: 0.5716 -
val_loss: 18.7096
Epoch 5/100
242/242 [==============================] - 0s 726us/step - loss: 0.4271 -
val_loss: 78.6912
Epoch 6/100
242/242 [==============================] - 0s 726us/step - loss: 0.6930 -
val_loss: 0.4362
Epoch 7/100
242/242 [==============================] - 0s 713us/step - loss: 0.4296 -
val_loss: 0.3913
Epoch 8/100
242/242 [==============================] - 0s 688us/step - loss: 0.4304 -
val_loss: 0.4217
Epoch 9/100
242/242 [==============================] - 0s 687us/step - loss: 0.4042 -
val_loss: 0.4237
Epoch 10/100
242/242 [==============================] - 0s 714us/step - loss: 0.4109 -
val_loss: 0.3657
Epoch 11/100
242/242 [==============================] - 0s 704us/step - loss: 0.3989 -
val_loss: 0.4483
Epoch 12/100
242/242 [==============================] - 0s 709us/step - loss: 0.3793 -
val_loss: 0.3705
Epoch 13/100
242/242 [==============================] - 0s 725us/step - loss: 0.3596 -
val_loss: 0.3624
Epoch 14/100
242/242 [==============================] - 0s 710us/step - loss: 0.3681 -
val_loss: 0.4260
Epoch 15/100
242/242 [==============================] - 0s 709us/step - loss: 0.3914 -
val_loss: 0.3562
Epoch 16/100
242/242 [==============================] - 0s 710us/step - loss: 0.3793 -
val_loss: 0.4255
Epoch 17/100
242/242 [==============================] - 0s 717us/step - loss: 0.3600 -
val_loss: 0.3505
Epoch 18/100
242/242 [==============================] - 0s 709us/step - loss: 0.3731 -
val_loss: 0.3618
Epoch 19/100
242/242 [==============================] - 0s 719us/step - loss: 0.3816 -
val_loss: 0.4293
Epoch 20/100
242/242 [==============================] - 0s 717us/step - loss: 0.3696 -
val_loss: 0.3456
Epoch 21/100
242/242 [==============================] - 0s 712us/step - loss: 0.3674 -
val_loss: 0.3828
Epoch 22/100
242/242 [==============================] - 0s 720us/step - loss: 0.3817 -
val_loss: 0.3408
Epoch 23/100
242/242 [==============================] - 0s 717us/step - loss: 0.3564 -
val_loss: 0.3563
Epoch 24/100
242/242 [==============================] - 0s 707us/step - loss: 0.3570 -
val_loss: 0.4301
Epoch 25/100
242/242 [==============================] - 0s 728us/step - loss: 0.3501 -
val_loss: 0.3361
Epoch 26/100
242/242 [==============================] - 0s 718us/step - loss: 0.3518 -
val_loss: 0.3491
Epoch 27/100
242/242 [==============================] - 0s 715us/step - loss: 0.3681 -
val_loss: 0.4150
Epoch 28/100
242/242 [==============================] - 0s 719us/step - loss: 0.3623 -
val_loss: 0.3396
Epoch 29/100
242/242 [==============================] - 0s 718us/step - loss: 0.5428 -
val_loss: 0.4401
Epoch 30/100
242/242 [==============================] - 0s 716us/step - loss: 0.3784 -
val_loss: 0.3476
Epoch 31/100
242/242 [==============================] - 0s 710us/step - loss: 0.3451 -
val_loss: 0.3738
Epoch 32/100
242/242 [==============================] - 0s 714us/step - loss: 0.3640 -
val_loss: 0.4246
Epoch 33/100
242/242 [==============================] - 0s 699us/step - loss: 0.3502 -
val_loss: 0.3970
Epoch 34/100
242/242 [==============================] - 0s 703us/step - loss: 0.3372 -
val_loss: 0.3671
Epoch 35/100
242/242 [==============================] - 0s 715us/step - loss: 0.3431 -
val_loss: 0.3425
121/121 [==============================] - 0s 392us/step - loss: 0.3441
[CV] END learning_rate=0.008339092654580042, n_hidden=1, n_neurons=38; total
time= 6.4s
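All three folds for `learning_rate≈0.00834, n_hidden=1, n_neurons=38` have now completed, with per-fold test losses of 0.3551, 0.4037, and 0.3441 (the `121/121 … loss:` lines above). The search ranks each candidate by the mean of its fold scores (scikit-learn internally maximizes a score, i.e. the negative loss); a quick sketch of that aggregation:

```python
import statistics

# Per-fold test losses read off the log for this candidate.
fold_losses = [0.3551, 0.4037, 0.3441]
mean_loss = statistics.mean(fold_losses)
print(round(mean_loss, 4))
```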
Epoch 1/100
1/242 […] - ETA: 41s - loss: 5.1910
242/242 [==============================] - 0s 1ms/step - loss: 4.6508 -
val_loss: 7.0502
Epoch 2/100
242/242 [==============================] - 0s 774us/step - loss: 2.5504 -
val_loss: 7.2037
Epoch 3/100
242/242 [==============================] - 0s 760us/step - loss: 1.6914 -
val_loss: 5.5884
Epoch 4/100
242/242 [==============================] - 0s 764us/step - loss: 1.3626 -
val_loss: 3.7640
Epoch 5/100
242/242 [==============================] - 0s 775us/step - loss: 1.1498 -
val_loss: 2.5552
Epoch 6/100
242/242 [==============================] - 0s 781us/step - loss: 1.0632 -
val_loss: 2.0914
Epoch 7/100
242/242 [==============================] - 0s 787us/step - loss: 1.0119 -
val_loss: 1.6989
Epoch 8/100
242/242 [==============================] - 0s 778us/step - loss: 0.9387 -
val_loss: 1.4173
Epoch 9/100
242/242 [==============================] - 0s 772us/step - loss: 0.8833 -
val_loss: 1.2066
Epoch 10/100
242/242 [==============================] - 0s 785us/step - loss: 0.8319 -
val_loss: 1.0479
Epoch 11/100
242/242 [==============================] - 0s 781us/step - loss: 0.7739 -
val_loss: 0.9248
Epoch 12/100
242/242 [==============================] - 0s 784us/step - loss: 0.7316 -
val_loss: 0.8264
Epoch 13/100
242/242 [==============================] - 0s 787us/step - loss: 0.7549 -
val_loss: 0.7581
Epoch 14/100
242/242 [==============================] - 0s 785us/step - loss: 0.7192 -
val_loss: 0.7119
Epoch 15/100
242/242 [==============================] - 0s 791us/step - loss: 0.7140 -
val_loss: 0.6743
Epoch 16/100
242/242 [==============================] - 0s 784us/step - loss: 0.6738 -
val_loss: 0.6514
Epoch 17/100
242/242 [==============================] - 0s 782us/step - loss: 0.6862 -
val_loss: 0.6371
Epoch 18/100
242/242 [==============================] - 0s 785us/step - loss: 0.6566 -
val_loss: 0.6283
Epoch 19/100
242/242 [==============================] - 0s 788us/step - loss: 0.6387 -
val_loss: 0.6229
Epoch 20/100
242/242 [==============================] - 0s 778us/step - loss: 0.6456 -
val_loss: 0.6221
Epoch 21/100
242/242 [==============================] - 0s 754us/step - loss: 0.6197 -
val_loss: 0.6180
Epoch 22/100
242/242 [==============================] - 0s 768us/step - loss: 0.6060 -
val_loss: 0.6178
Epoch 23/100
242/242 [==============================] - 0s 787us/step - loss: 0.6194 -
val_loss: 0.6150
Epoch 24/100
242/242 [==============================] - 0s 752us/step - loss: 0.6007 -
val_loss: 0.6175
Epoch 25/100
242/242 [==============================] - 0s 751us/step - loss: 0.6106 -
val_loss: 0.6112
Epoch 26/100
242/242 [==============================] - 0s 750us/step - loss: 0.5789 -
val_loss: 0.6049
Epoch 27/100
242/242 [==============================] - 0s 739us/step - loss: 0.5925 -
val_loss: 0.6013
Epoch 28/100
242/242 [==============================] - 0s 752us/step - loss: 0.5956 -
val_loss: 0.5932
Epoch 29/100
242/242 [==============================] - 0s 744us/step - loss: 0.5957 -
val_loss: 0.5873
Epoch 30/100
242/242 [==============================] - 0s 750us/step - loss: 0.5703 -
val_loss: 0.5832
Epoch 31/100
242/242 [==============================] - 0s 757us/step - loss: 0.5484 -
val_loss: 0.5789
Epoch 32/100
242/242 [==============================] - 0s 747us/step - loss: 0.5451 -
val_loss: 0.5713
Epoch 33/100
242/242 [==============================] - 0s 747us/step - loss: 0.5415 -
val_loss: 0.5664
Epoch 34/100
242/242 [==============================] - 0s 744us/step - loss: 0.5481 -
val_loss: 0.5613
Epoch 35/100
242/242 [==============================] - 0s 757us/step - loss: 0.5367 -
val_loss: 0.5537
Epoch 36/100
242/242 [==============================] - 0s 781us/step - loss: 0.5277 -
val_loss: 0.5535
Epoch 37/100
242/242 [==============================] - 0s 785us/step - loss: 0.5339 -
val_loss: 0.5419
Epoch 38/100
242/242 [==============================] - 0s 782us/step - loss: 0.5052 -
val_loss: 0.5418
Epoch 39/100
242/242 [==============================] - 0s 775us/step - loss: 0.5250 -
val_loss: 0.5343
Epoch 40/100
242/242 [==============================] - 0s 763us/step - loss: 0.5021 -
val_loss: 0.5282
Epoch 41/100
242/242 [==============================] - 0s 762us/step - loss: 0.5117 -
val_loss: 0.5252
Epoch 42/100
242/242 [==============================] - 0s 774us/step - loss: 0.5180 -
val_loss: 0.5198
Epoch 43/100
242/242 [==============================] - 0s 781us/step - loss: 0.5046 -
val_loss: 0.5159
Epoch 44/100
242/242 [==============================] - 0s 810us/step - loss: 0.4940 -
val_loss: 0.5110
Epoch 45/100
242/242 [==============================] - 0s 791us/step - loss: 0.4971 -
val_loss: 0.5071
Epoch 46/100
242/242 [==============================] - 0s 783us/step - loss: 0.4914 -
val_loss: 0.5049
Epoch 47/100
242/242 [==============================] - 0s 776us/step - loss: 0.4904 -
val_loss: 0.4988
Epoch 48/100
242/242 [==============================] - 0s 767us/step - loss: 0.4738 -
val_loss: 0.4950
Epoch 49/100
242/242 [==============================] - 0s 764us/step - loss: 0.4841 -
val_loss: 0.4902
Epoch 50/100
242/242 [==============================] - 0s 758us/step - loss: 0.4785 -
val_loss: 0.4869
Epoch 51/100
242/242 [==============================] - 0s 759us/step - loss: 0.4601 -
val_loss: 0.4851
Epoch 52/100
242/242 [==============================] - 0s 761us/step - loss: 0.4914 -
val_loss: 0.4779
Epoch 53/100
242/242 [==============================] - 0s 761us/step - loss: 0.4747 -
val_loss: 0.4731
Epoch 54/100
242/242 [==============================] - 0s 769us/step - loss: 0.4734 -
val_loss: 0.4699
Epoch 55/100
242/242 [==============================] - 0s 791us/step - loss: 0.4565 -
val_loss: 0.4657
Epoch 56/100
242/242 [==============================] - 0s 784us/step - loss: 0.4904 -
val_loss: 0.4605
Epoch 57/100
242/242 [==============================] - 0s 765us/step - loss: 0.4600 -
val_loss: 0.4583
Epoch 58/100
242/242 [==============================] - 0s 808us/step - loss: 0.4670 -
val_loss: 0.4528
Epoch 59/100
242/242 [==============================] - 0s 765us/step - loss: 0.4664 -
val_loss: 0.4496
Epoch 60/100
242/242 [==============================] - 0s 764us/step - loss: 0.4454 -
val_loss: 0.4473
Epoch 61/100
242/242 [==============================] - 0s 792us/step - loss: 0.4301 -
val_loss: 0.4437
Epoch 62/100
242/242 [==============================] - 0s 793us/step - loss: 0.4441 -
val_loss: 0.4411
Epoch 63/100
242/242 [==============================] - 0s 784us/step - loss: 0.4517 -
val_loss: 0.4392
Epoch 64/100
242/242 [==============================] - 0s 779us/step - loss: 0.4342 -
val_loss: 0.4340
Epoch 65/100
242/242 [==============================] - 0s 776us/step - loss: 0.4191 -
val_loss: 0.4314
Epoch 66/100
242/242 [==============================] - 0s 782us/step - loss: 0.4601 -
val_loss: 0.4298
Epoch 67/100
242/242 [==============================] - 0s 786us/step - loss: 0.4365 -
val_loss: 0.4278
Epoch 68/100
242/242 [==============================] - 0s 788us/step - loss: 0.4366 -
val_loss: 0.4237
Epoch 69/100
242/242 [==============================] - 0s 792us/step - loss: 0.4414 -
val_loss: 0.4218
Epoch 70/100
242/242 [==============================] - 0s 769us/step - loss: 0.4523 -
val_loss: 0.4200
Epoch 71/100
242/242 [==============================] - 0s 762us/step - loss: 0.4277 -
val_loss: 0.4167
Epoch 72/100
242/242 [==============================] - 0s 781us/step - loss: 0.4383 -
val_loss: 0.4143
Epoch 73/100
242/242 [==============================] - 0s 781us/step - loss: 0.4355 -
val_loss: 0.4125
Epoch 74/100
242/242 [==============================] - 0s 784us/step - loss: 0.4271 -
val_loss: 0.4105
Epoch 75/100
242/242 [==============================] - 0s 788us/step - loss: 0.4272 -
val_loss: 0.4085
Epoch 76/100
242/242 [==============================] - 0s 753us/step - loss: 0.4215 -
val_loss: 0.4071
Epoch 77/100
242/242 [==============================] - 0s 767us/step - loss: 0.4092 -
val_loss: 0.4048
Epoch 78/100
242/242 [==============================] - 0s 785us/step - loss: 0.4062 -
val_loss: 0.4032
Epoch 79/100
242/242 [==============================] - 0s 784us/step - loss: 0.4078 -
val_loss: 0.4017
Epoch 80/100
242/242 [==============================] - 0s 788us/step - loss: 0.4120 -
val_loss: 0.4002
Epoch 81/100
242/242 [==============================] - 0s 784us/step - loss: 0.3889 -
val_loss: 0.3986
Epoch 82/100
242/242 [==============================] - 0s 772us/step - loss: 0.3950 -
val_loss: 0.3973
Epoch 83/100
242/242 [==============================] - 0s 784us/step - loss: 0.3843 -
val_loss: 0.3959
Epoch 84/100
242/242 [==============================] - 0s 782us/step - loss: 0.4200 -
val_loss: 0.3949
Epoch 85/100
242/242 [==============================] - 0s 782us/step - loss: 0.4112 -
val_loss: 0.3931
Epoch 86/100
242/242 [==============================] - 0s 755us/step - loss: 0.3803 -
val_loss: 0.3926
Epoch 87/100
242/242 [==============================] - 0s 768us/step - loss: 0.3995 -
val_loss: 0.3914
Epoch 88/100
242/242 [==============================] - 0s 784us/step - loss: 0.4035 -
val_loss: 0.3897
Epoch 89/100
242/242 [==============================] - 0s 783us/step - loss: 0.3939 -
val_loss: 0.3888
Epoch 90/100
242/242 [==============================] - 0s 775us/step - loss: 0.3815 -
val_loss: 0.3880
Epoch 91/100
242/242 [==============================] - 0s 775us/step - loss: 0.4073 -
val_loss: 0.3866
Epoch 92/100
242/242 [==============================] - 0s 779us/step - loss: 0.4051 -
val_loss: 0.3861
Epoch 93/100
242/242 [==============================] - 0s 785us/step - loss: 0.4188 -
val_loss: 0.3848
Epoch 94/100
242/242 [==============================] - 0s 771us/step - loss: 0.4010 -
val_loss: 0.3839
Epoch 95/100
242/242 [==============================] - 0s 779us/step - loss: 0.3718 -
val_loss: 0.3836
Epoch 96/100
242/242 [==============================] - 0s 777us/step - loss: 0.3869 -
val_loss: 0.3840
Epoch 97/100
242/242 [==============================] - 0s 793us/step - loss: 0.3889 -
val_loss: 0.3822
Epoch 98/100
242/242 [==============================] - 0s 780us/step - loss: 0.3886 -
val_loss: 0.3806
Epoch 99/100
242/242 [==============================] - 0s 782us/step - loss: 0.3868 -
val_loss: 0.3822
Epoch 100/100
242/242 [==============================] - 0s 781us/step - loss: 0.3936 -
val_loss: 0.3795
121/121 [==============================] - 0s 432us/step - loss: 0.3993
[CV] END learning_rate=0.00030107783636342726, n_hidden=3, n_neurons=21; total
time= 19.2s
Epoch 1/100
242/242 [==============================] - 0s 1ms/step - loss: 6.3342 -
val_loss: 2.9725
Epoch 2/100
242/242 [==============================] - 0s 791us/step - loss: 2.6097 -
val_loss: 5.9015
Epoch 3/100
242/242 [==============================] - 0s 792us/step - loss: 1.3728 -
val_loss: 10.8119
Epoch 4/100
242/242 [==============================] - 0s 780us/step - loss: 1.1298 -
val_loss: 11.3108
Epoch 5/100
242/242 [==============================] - 0s 786us/step - loss: 0.9871 -
val_loss: 9.9424
Epoch 6/100
242/242 [==============================] - 0s 788us/step - loss: 0.9507 -
val_loss: 8.2069
Epoch 7/100
242/242 [==============================] - 0s 796us/step - loss: 0.9126 -
val_loss: 6.6004
Epoch 8/100
242/242 [==============================] - 0s 777us/step - loss: 0.8405 -
val_loss: 4.8507
Epoch 9/100
242/242 [==============================] - 0s 784us/step - loss: 0.8445 -
val_loss: 3.5263
Epoch 10/100
242/242 [==============================] - 0s 785us/step - loss: 0.8077 -
val_loss: 2.6353
Epoch 11/100
242/242 [==============================] - 0s 783us/step - loss: 0.7772 -
val_loss: 1.9734
Epoch 12/100
242/242 [==============================] - 0s 781us/step - loss: 0.7525 -
val_loss: 1.4481
Epoch 13/100
242/242 [==============================] - 0s 794us/step - loss: 0.7316 -
val_loss: 1.1077
Epoch 14/100
242/242 [==============================] - 0s 790us/step - loss: 0.7188 -
val_loss: 0.8819
Epoch 15/100
242/242 [==============================] - 0s 779us/step - loss: 0.7596 -
val_loss: 0.7221
Epoch 16/100
242/242 [==============================] - 0s 770us/step - loss: 0.7084 -
val_loss: 0.6649
Epoch 17/100
242/242 [==============================] - 0s 784us/step - loss: 0.7167 -
val_loss: 0.6775
Epoch 18/100
242/242 [==============================] - 0s 760us/step - loss: 0.7042 -
val_loss: 0.7491
Epoch 19/100
242/242 [==============================] - 0s 764us/step - loss: 0.6807 -
val_loss: 0.8815
Epoch 20/100
242/242 [==============================] - 0s 756us/step - loss: 0.6637 -
val_loss: 1.0684
Epoch 21/100
242/242 [==============================] - 0s 751us/step - loss: 0.6566 -
val_loss: 1.3065
Epoch 22/100
242/242 [==============================] - 0s 751us/step - loss: 0.6570 -
val_loss: 1.5427
Epoch 23/100
242/242 [==============================] - 0s 759us/step - loss: 0.6442 -
val_loss: 1.8315
Epoch 24/100
242/242 [==============================] - 0s 778us/step - loss: 0.6155 -
val_loss: 2.1426
Epoch 25/100
242/242 [==============================] - 0s 785us/step - loss: 0.5986 -
val_loss: 2.5085
Epoch 26/100
242/242 [==============================] - 0s 770us/step - loss: 0.5954 -
val_loss: 2.8640
121/121 [==============================] - 0s 407us/step - loss: 0.6770
[CV] END learning_rate=0.00030107783636342726, n_hidden=3, n_neurons=21; total
time= 5.3s
Epoch 1/100
242/242 [==============================] - 0s 1ms/step - loss: 5.0000 -
val_loss: 3.5308
Epoch 2/100
242/242 [==============================] - 0s 777us/step - loss: 2.9421 -
val_loss: 3.0045
Epoch 3/100
242/242 [==============================] - 0s 767us/step - loss: 1.5277 -
val_loss: 2.5464
Epoch 4/100
242/242 [==============================] - 0s 763us/step - loss: 1.0632 -
val_loss: 1.8717
Epoch 5/100
242/242 [==============================] - 0s 755us/step - loss: 0.8311 -
val_loss: 1.3067
Epoch 6/100
242/242 [==============================] - 0s 758us/step - loss: 0.7572 -
val_loss: 0.9966
Epoch 7/100
242/242 [==============================] - 0s 758us/step - loss: 0.7190 -
val_loss: 0.8331
Epoch 8/100
242/242 [==============================] - 0s 753us/step - loss: 0.7035 -
val_loss: 0.7309
Epoch 9/100
242/242 [==============================] - 0s 773us/step - loss: 0.6890 -
val_loss: 0.6922
Epoch 10/100
242/242 [==============================] - 0s 783us/step - loss: 0.6539 -
val_loss: 0.6623
Epoch 11/100
242/242 [==============================] - 0s 755us/step - loss: 0.6642 -
val_loss: 0.6391
Epoch 12/100
242/242 [==============================] - 0s 755us/step - loss: 0.6379 -
val_loss: 0.6199
Epoch 13/100
242/242 [==============================] - 0s 761us/step - loss: 0.6053 -
val_loss: 0.6066
Epoch 14/100
242/242 [==============================] - 0s 749us/step - loss: 0.6072 -
val_loss: 0.5952
Epoch 15/100
242/242 [==============================] - 0s 752us/step - loss: 0.6384 -
val_loss: 0.5855
Epoch 16/100
242/242 [==============================] - 0s 754us/step - loss: 0.6207 -
val_loss: 0.5761
Epoch 17/100
242/242 [==============================] - 0s 776us/step - loss: 0.5912 -
val_loss: 0.5671
Epoch 18/100
242/242 [==============================] - 0s 777us/step - loss: 0.6168 -
val_loss: 0.5590
Epoch 19/100
242/242 [==============================] - 0s 754us/step - loss: 0.5850 -
val_loss: 0.5515
Epoch 20/100
242/242 [==============================] - 0s 764us/step - loss: 0.5741 -
val_loss: 0.5445
Epoch 21/100
242/242 [==============================] - 0s 781us/step - loss: 0.5603 -
val_loss: 0.5376
Epoch 22/100
242/242 [==============================] - 0s 758us/step - loss: 0.5874 -
val_loss: 0.5308
Epoch 23/100
242/242 [==============================] - 0s 761us/step - loss: 0.5510 -
val_loss: 0.5241
Epoch 24/100
242/242 [==============================] - 0s 762us/step - loss: 0.5431 -
val_loss: 0.5179
Epoch 25/100
242/242 [==============================] - 0s 774us/step - loss: 0.5475 -
val_loss: 0.5114
Epoch 26/100
242/242 [==============================] - 0s 768us/step - loss: 0.5327 -
val_loss: 0.5049
Epoch 27/100
242/242 [==============================] - 0s 761us/step - loss: 0.5406 -
val_loss: 0.4989
Epoch 28/100
242/242 [==============================] - 0s 750us/step - loss: 0.5348 -
val_loss: 0.4930
Epoch 29/100
242/242 [==============================] - 0s 761us/step - loss: 0.5169 -
val_loss: 0.4880
Epoch 30/100
242/242 [==============================] - 0s 764us/step - loss: 0.5190 -
val_loss: 0.4819
Epoch 31/100
242/242 [==============================] - 0s 773us/step - loss: 0.4954 -
val_loss: 0.4768
Epoch 32/100
242/242 [==============================] - 0s 766us/step - loss: 0.5252 -
val_loss: 0.4726
Epoch 33/100
242/242 [==============================] - 0s 766us/step - loss: 0.5053 -
val_loss: 0.4680
Epoch 34/100
242/242 [==============================] - 0s 761us/step - loss: 0.4751 -
val_loss: 0.4647
Epoch 35/100
242/242 [==============================] - 0s 756us/step - loss: 0.4812 -
val_loss: 0.4611
Epoch 36/100
242/242 [==============================] - 0s 767us/step - loss: 0.4779 -
val_loss: 0.4579
Epoch 37/100
242/242 [==============================] - 0s 765us/step - loss: 0.4804 -
val_loss: 0.4552
Epoch 38/100
242/242 [==============================] - 0s 752us/step - loss: 0.4722 -
val_loss: 0.4523
Epoch 39/100
242/242 [==============================] - 0s 786us/step - loss: 0.4810 -
val_loss: 0.4503
Epoch 40/100
242/242 [==============================] - 0s 798us/step - loss: 0.4618 -
val_loss: 0.4481
Epoch 41/100
242/242 [==============================] - 0s 777us/step - loss: 0.4426 -
val_loss: 0.4479
Epoch 42/100
242/242 [==============================] - 0s 790us/step - loss: 0.4495 -
val_loss: 0.4464
Epoch 43/100
242/242 [==============================] - 0s 783us/step - loss: 0.4570 -
val_loss: 0.4442
Epoch 44/100
242/242 [==============================] - 0s 780us/step - loss: 0.4589 -
val_loss: 0.4442
Epoch 45/100
242/242 [==============================] - 0s 778us/step - loss: 0.4456 -
val_loss: 0.4436
Epoch 46/100
242/242 [==============================] - 0s 780us/step - loss: 0.4375 -
val_loss: 0.4428
Epoch 47/100
242/242 [==============================] - 0s 785us/step - loss: 0.4475 -
val_loss: 0.4428
Epoch 48/100
242/242 [==============================] - 0s 783us/step - loss: 0.4362 -
val_loss: 0.4431
Epoch 49/100
242/242 [==============================] - 0s 785us/step - loss: 0.4473 -
val_loss: 0.4436
Epoch 50/100
242/242 [==============================] - 0s 775us/step - loss: 0.4252 -
val_loss: 0.4441
Epoch 51/100
242/242 [==============================] - 0s 792us/step - loss: 0.4523 -
val_loss: 0.4441
Epoch 52/100
242/242 [==============================] - 0s 778us/step - loss: 0.4198 -
val_loss: 0.4456
Epoch 53/100
242/242 [==============================] - 0s 776us/step - loss: 0.4317 -
val_loss: 0.4461
Epoch 54/100
242/242 [==============================] - 0s 771us/step - loss: 0.4264 -
val_loss: 0.4467
Epoch 55/100
242/242 [==============================] - 0s 782us/step - loss: 0.4277 -
val_loss: 0.4484
Epoch 56/100
242/242 [==============================] - 0s 781us/step - loss: 0.4249 -
val_loss: 0.4490
Epoch 57/100
242/242 [==============================] - 0s 779us/step - loss: 0.4195 -
val_loss: 0.4493
121/121 [==============================] - 0s 415us/step - loss: 0.4256
[CV] END learning_rate=0.00030107783636342726, n_hidden=3, n_neurons=21; total
time= 11.1s
Epoch 1/100
1/242 […] - ETA: 34s - loss: 5.7856
242/242 [==============================] - 0s 963us/step - loss: 2.1136 -
val_loss: 38.2652
Epoch 2/100
242/242 [==============================] - 0s 712us/step - loss: 0.7686 -
val_loss: 0.6706
Epoch 3/100
242/242 [==============================] - 0s 720us/step - loss: 0.5623 -
val_loss: 0.5520
Epoch 4/100
242/242 [==============================] - 0s 718us/step - loss: 0.5151 -
val_loss: 0.5090
Epoch 5/100
242/242 [==============================] - 0s 697us/step - loss: 0.4713 -
val_loss: 0.4813
Epoch 6/100
242/242 [==============================] - 0s 685us/step - loss: 0.4438 -
val_loss: 0.4761
Epoch 7/100
242/242 [==============================] - 0s 688us/step - loss: 0.4416 -
val_loss: 0.4565
Epoch 8/100
242/242 [==============================] - 0s 688us/step - loss: 0.4308 -
val_loss: 0.4533
Epoch 9/100
242/242 [==============================] - 0s 708us/step - loss: 0.4168 -
val_loss: 0.4502
Epoch 10/100
242/242 [==============================] - 0s 683us/step - loss: 0.4202 -
val_loss: 0.4389
Epoch 11/100
242/242 [==============================] - 0s 701us/step - loss: 0.4045 -
val_loss: 0.4360
Epoch 12/100
242/242 [==============================] - 0s 719us/step - loss: 0.3855 -
val_loss: 0.4313
Epoch 13/100
242/242 [==============================] - 0s 727us/step - loss: 0.4205 -
val_loss: 0.4253
Epoch 14/100
242/242 [==============================] - 0s 698us/step - loss: 0.4080 -
val_loss: 0.4228
Epoch 15/100
242/242 [==============================] - 0s 715us/step - loss: 0.4023 -
val_loss: 0.4209
Epoch 16/100
242/242 [==============================] - 0s 696us/step - loss: 0.3953 -
val_loss: 0.4192
Epoch 17/100
242/242 [==============================] - 0s 695us/step - loss: 0.4113 -
val_loss: 0.4156
Epoch 18/100
242/242 [==============================] - 0s 686us/step - loss: 0.4109 -
val_loss: 0.4137
Epoch 19/100
242/242 [==============================] - 0s 680us/step - loss: 0.3907 -
val_loss: 0.4128
Epoch 20/100
242/242 [==============================] - 0s 684us/step - loss: 0.4010 -
val_loss: 0.4104
Epoch 21/100
242/242 [==============================] - 0s 697us/step - loss: 0.3926 -
val_loss: 0.4101
Epoch 22/100
242/242 [==============================] - 0s 717us/step - loss: 0.3816 -
val_loss: 0.4070
Epoch 23/100
242/242 [==============================] - 0s 712us/step - loss: 0.3937 -
val_loss: 0.4080
Epoch 24/100
242/242 [==============================] - 0s 691us/step - loss: 0.3830 -
val_loss: 0.4037
Epoch 25/100
242/242 [==============================] - 0s 700us/step - loss: 0.3926 -
val_loss: 0.4030
Epoch 26/100
242/242 [==============================] - 0s 715us/step - loss: 0.3775 -
val_loss: 0.4000
Epoch 27/100
242/242 [==============================] - 0s 711us/step - loss: 0.3982 -
val_loss: 0.3972
Epoch 28/100
242/242 [==============================] - 0s 710us/step - loss: 0.3891 -
val_loss: 0.3974
Epoch 29/100
242/242 [==============================] - 0s 725us/step - loss: 0.3945 -
val_loss: 0.3943
Epoch 30/100
242/242 [==============================] - 0s 705us/step - loss: 0.3872 -
val_loss: 0.3948
Epoch 31/100
242/242 [==============================] - 0s 697us/step - loss: 0.3820 -
val_loss: 0.3922
Epoch 32/100
242/242 [==============================] - 0s 696us/step - loss: 0.3771 -
val_loss: 0.3917
Epoch 33/100
242/242 [==============================] - 0s 689us/step - loss: 0.3736 -
val_loss: 0.3905
Epoch 34/100
242/242 [==============================] - 0s 679us/step - loss: 0.3757 -
val_loss: 0.3893
Epoch 35/100
242/242 [==============================] - 0s 685us/step - loss: 0.3741 -
val_loss: 0.3876
Epoch 36/100
242/242 [==============================] - 0s 710us/step - loss: 0.3700 -
val_loss: 0.3916
Epoch 37/100
242/242 [==============================] - 0s 722us/step - loss: 0.3735 -
val_loss: 0.3831
Epoch 38/100
242/242 [==============================] - 0s 710us/step - loss: 0.3728 -
val_loss: 0.3875
Epoch 39/100
242/242 [==============================] - 0s 701us/step - loss: 0.3697 -
val_loss: 0.3847
Epoch 40/100
242/242 [==============================] - 0s 702us/step - loss: 0.3557 -
val_loss: 0.3846
Epoch 41/100
242/242 [==============================] - 0s 712us/step - loss: 0.3669 -
val_loss: 0.3842
Epoch 42/100
242/242 [==============================] - 0s 717us/step - loss: 0.3727 -
val_loss: 0.3814
Epoch 43/100
242/242 [==============================] - 0s 707us/step - loss: 0.3737 -
val_loss: 0.3808
Epoch 44/100
242/242 [==============================] - 0s 713us/step - loss: 0.3707 -
val_loss: 0.3834
Epoch 45/100
242/242 [==============================] - 0s 718us/step - loss: 0.3775 -
val_loss: 0.3804
Epoch 46/100
242/242 [==============================] - 0s 713us/step - loss: 0.3581 -
val_loss: 0.3824
Epoch 47/100
242/242 [==============================] - 0s 721us/step - loss: 0.3659 -
val_loss: 0.3798
Epoch 48/100
242/242 [==============================] - 0s 715us/step - loss: 0.3524 -
val_loss: 0.3800
Epoch 49/100
242/242 [==============================] - 0s 698us/step - loss: 0.3804 -
val_loss: 0.3783
Epoch 50/100
242/242 [==============================] - 0s 702us/step - loss: 0.3626 -
val_loss: 0.3797
Epoch 51/100
242/242 [==============================] - 0s 687us/step - loss: 0.3512 -
val_loss: 0.3820
Epoch 52/100
242/242 [==============================] - 0s 691us/step - loss: 0.3767 -
val_loss: 0.3765
Epoch 53/100
242/242 [==============================] - 0s 687us/step - loss: 0.3610 -
val_loss: 0.3772
Epoch 54/100
242/242 [==============================] - 0s 683us/step - loss: 0.3674 -
val_loss: 0.3766
Epoch 55/100
242/242 [==============================] - 0s 690us/step - loss: 0.3558 -
val_loss: 0.3773
Epoch 56/100
242/242 [==============================] - 0s 718us/step - loss: 0.3912 -
val_loss: 0.3754
Epoch 57/100
242/242 [==============================] - 0s 706us/step - loss: 0.3644 -
val_loss: 0.3750
Epoch 58/100
242/242 [==============================] - 0s 693us/step - loss: 0.3700 -
val_loss: 0.3750
Epoch 59/100
242/242 [==============================] - 0s 694us/step - loss: 0.3735 -
val_loss: 0.3766
Epoch 60/100
242/242 [==============================] - 0s 697us/step - loss: 0.3573 -
val_loss: 0.3749
Epoch 61/100
242/242 [==============================] - 0s 687us/step - loss: 0.3500 -
val_loss: 0.3764
Epoch 62/100
242/242 [==============================] - 0s 692us/step - loss: 0.3524 -
val_loss: 0.3759
Epoch 63/100
242/242 [==============================] - 0s 688us/step - loss: 0.3693 -
val_loss: 0.3736
Epoch 64/100
242/242 [==============================] - 0s 696us/step - loss: 0.3503 -
val_loss: 0.3750
Epoch 65/100
242/242 [==============================] - 0s 710us/step - loss: 0.3388 -
val_loss: 0.3727
Epoch 66/100
242/242 [==============================] - 0s 713us/step - loss: 0.3891 -
val_loss: 0.3757
Epoch 67/100
242/242 [==============================] - 0s 712us/step - loss: 0.3613 -
val_loss: 0.3733
Epoch 68/100
242/242 [==============================] - 0s 718us/step - loss: 0.3544 -
val_loss: 0.3719
Epoch 69/100
242/242 [==============================] - 0s 723us/step - loss: 0.3611 -
val_loss: 0.3714
Epoch 70/100
242/242 [==============================] - 0s 713us/step - loss: 0.3815 -
val_loss: 0.3698
Epoch 71/100
242/242 [==============================] - 0s 717us/step - loss: 0.3611 -
val_loss: 0.3689
Epoch 72/100
242/242 [==============================] - 0s 706us/step - loss: 0.3771 -
val_loss: 0.3717
Epoch 73/100
242/242 [==============================] - 0s 716us/step - loss: 0.3701 -
val_loss: 0.3699
Epoch 74/100
242/242 [==============================] - 0s 712us/step - loss: 0.3599 -
val_loss: 0.3664
Epoch 75/100
242/242 [==============================] - 0s 720us/step - loss: 0.3567 -
val_loss: 0.3663
Epoch 76/100
242/242 [==============================] - 0s 713us/step - loss: 0.3515 -
val_loss: 0.3689
Epoch 77/100
242/242 [==============================] - 0s 713us/step - loss: 0.3475 -
val_loss: 0.3670
Epoch 78/100
242/242 [==============================] - 0s 707us/step - loss: 0.3475 -
val_loss: 0.3682
Epoch 79/100
242/242 [==============================] - 0s 715us/step - loss: 0.3498 -
val_loss: 0.3647
Epoch 80/100
242/242 [==============================] - 0s 707us/step - loss: 0.3515 -
val_loss: 0.3669
Epoch 81/100
242/242 [==============================] - 0s 705us/step - loss: 0.3373 -
val_loss: 0.3654
Epoch 82/100
242/242 [==============================] - 0s 722us/step - loss: 0.3411 -
val_loss: 0.3639
Epoch 83/100
242/242 [==============================] - 0s 715us/step - loss: 0.3316 -
val_loss: 0.3644
Epoch 84/100
242/242 [==============================] - 0s 709us/step - loss: 0.3642 -
val_loss: 0.3632
Epoch 85/100
242/242 [==============================] - 0s 712us/step - loss: 0.3553 -
val_loss: 0.3619
Epoch 86/100
242/242 [==============================] - 0s 709us/step - loss: 0.3372 -
val_loss: 0.3615
Epoch 87/100
242/242 [==============================] - 0s 696us/step - loss: 0.3409 -
val_loss: 0.3592
Epoch 88/100
242/242 [==============================] - 0s 687us/step - loss: 0.3502 -
val_loss: 0.3633
Epoch 89/100
242/242 [==============================] - 0s 696us/step - loss: 0.3472 -
val_loss: 0.3581
Epoch 90/100
242/242 [==============================] - 0s 694us/step - loss: 0.3351 -
val_loss: 0.3597
Epoch 91/100
242/242 [==============================] - 0s 710us/step - loss: 0.3602 -
val_loss: 0.3582
Epoch 92/100
242/242 [==============================] - 0s 717us/step - loss: 0.3612 -
val_loss: 0.3575
Epoch 93/100
242/242 [==============================] - 0s 708us/step - loss: 0.3713 -
val_loss: 0.3572
Epoch 94/100
242/242 [==============================] - 0s 697us/step - loss: 0.3591 -
val_loss: 0.3582
Epoch 95/100
242/242 [==============================] - 0s 690us/step - loss: 0.3306 -
val_loss: 0.3579
Epoch 96/100
242/242 [==============================] - 0s 708us/step - loss: 0.3427 -
val_loss: 0.3626
Epoch 97/100
242/242 [==============================] - 0s 714us/step - loss: 0.3468 -
val_loss: 0.3564
Epoch 98/100
242/242 [==============================] - 0s 707us/step - loss: 0.3474 -
val_loss: 0.3560
Epoch 99/100
242/242 [==============================] - 0s 704us/step - loss: 0.3440 -
val_loss: 0.3618
Epoch 100/100
242/242 [==============================] - 0s 719us/step - loss: 0.3492 -
val_loss: 0.3552
121/121 [==============================] - 0s 391us/step - loss: 0.3645
[CV] END learning_rate=0.005153286333701512, n_hidden=1, n_neurons=22; total
time= 17.5s
Epoch 1/100
1/242 […] - ETA: 34s - loss: 6.8320
242/242 [==============================] - 0s 962us/step - loss: 2.2710 -
val_loss: 0.6451
Epoch 2/100
242/242 [==============================] - 0s 711us/step - loss: 0.6091 -
val_loss: 0.8942
Epoch 3/100
242/242 [==============================] - 0s 712us/step - loss: 0.5103 -
val_loss: 1.2421
Epoch 4/100
242/242 [==============================] - 0s 713us/step - loss: 0.4883 -
val_loss: 1.2691
Epoch 5/100
242/242 [==============================] - 0s 717us/step - loss: 0.4682 -
val_loss: 0.9915
Epoch 6/100
242/242 [==============================] - 0s 718us/step - loss: 0.4420 -
val_loss: 0.6535
Epoch 7/100
242/242 [==============================] - 0s 708us/step - loss: 0.4253 -
val_loss: 0.5216
Epoch 8/100
242/242 [==============================] - 0s 699us/step - loss: 0.4264 -
val_loss: 0.4130
Epoch 9/100
242/242 [==============================] - 0s 693us/step - loss: 0.4122 -
val_loss: 0.3818
Epoch 10/100
242/242 [==============================] - 0s 712us/step - loss: 0.4108 -
val_loss: 0.4044
Epoch 11/100
242/242 [==============================] - 0s 702us/step - loss: 0.4073 -
val_loss: 0.4355
Epoch 12/100
242/242 [==============================] - 0s 695us/step - loss: 0.3881 -
val_loss: 0.4276
Epoch 13/100
242/242 [==============================] - 0s 704us/step - loss: 0.3901 -
val_loss: 0.4761
Epoch 14/100
242/242 [==============================] - 0s 689us/step - loss: 0.3783 -
val_loss: 0.5445
Epoch 15/100
242/242 [==============================] - 0s 703us/step - loss: 0.4093 -
val_loss: 0.5613
Epoch 16/100
242/242 [==============================] - 0s 709us/step - loss: 0.3892 -
val_loss: 0.6763
Epoch 17/100
242/242 [==============================] - 0s 725us/step - loss: 0.3903 -
val_loss: 0.6692
Epoch 18/100
242/242 [==============================] - 0s 719us/step - loss: 0.3994 -
val_loss: 0.7573
Epoch 19/100
242/242 [==============================] - 0s 715us/step - loss: 0.3808 -
val_loss: 0.6834
121/121 [==============================] - 0s 399us/step - loss: 0.3963
[CV] END learning_rate=0.005153286333701512, n_hidden=1, n_neurons=22; total
time= 3.6s
Epoch 1/100
1/242 […] - ETA: 34s - loss: 7.2460
242/242 [==============================] - 0s 935us/step - loss: 2.0896 -
val_loss: 71.0121
Epoch 2/100
242/242 [==============================] - 0s 714us/step - loss: 0.8697 -
val_loss: 42.2913
Epoch 3/100
242/242 [==============================] - 0s 717us/step - loss: 0.7858 -
val_loss: 1.3112
Epoch 4/100
242/242 [==============================] - 0s 719us/step - loss: 0.5407 -
val_loss: 0.5968
Epoch 5/100
242/242 [==============================] - 0s 699us/step - loss: 0.4735 -
val_loss: 0.4855
Epoch 6/100
242/242 [==============================] - 0s 689us/step - loss: 0.4410 -
val_loss: 0.4448
Epoch 7/100
242/242 [==============================] - 0s 707us/step - loss: 0.4404 -
val_loss: 0.4217
Epoch 8/100
242/242 [==============================] - 0s 710us/step - loss: 0.4545 -
val_loss: 0.4094
Epoch 9/100
242/242 [==============================] - 0s 697us/step - loss: 0.4325 -
val_loss: 0.4025
Epoch 10/100
242/242 [==============================] - 0s 703us/step - loss: 0.4273 -
val_loss: 0.3958
Epoch 11/100
242/242 [==============================] - 0s 707us/step - loss: 0.4366 -
val_loss: 0.3918
Epoch 12/100
242/242 [==============================] - 0s 707us/step - loss: 0.4123 -
val_loss: 0.3892
Epoch 13/100
242/242 [==============================] - 0s 712us/step - loss: 0.3925 -
val_loss: 0.3915
Epoch 14/100
242/242 [==============================] - 0s 722us/step - loss: 0.3992 -
val_loss: 0.3996
Epoch 15/100
242/242 [==============================] - 0s 710us/step - loss: 0.4301 -
val_loss: 0.4076
Epoch 16/100
242/242 [==============================] - 0s 709us/step - loss: 0.4131 -
val_loss: 0.4218
Epoch 17/100
242/242 [==============================] - 0s 698us/step - loss: 0.3954 -
val_loss: 0.4317
Epoch 18/100
242/242 [==============================] - 0s 695us/step - loss: 0.4112 -
val_loss: 0.4354
Epoch 19/100
242/242 [==============================] - 0s 713us/step - loss: 0.4170 -
val_loss: 0.4429
Epoch 20/100
242/242 [==============================] - 0s 719us/step - loss: 0.4070 -
val_loss: 0.4453
Epoch 21/100
242/242 [==============================] - 0s 727us/step - loss: 0.3989 -
val_loss: 0.4460
Epoch 22/100
242/242 [==============================] - 0s 712us/step - loss: 0.4140 -
val_loss: 0.4415
121/121 [==============================] - 0s 404us/step - loss: 0.3947
[CV] END learning_rate=0.005153286333701512, n_hidden=1, n_neurons=22; total
time= 4.1s
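Notice that each cross-validation fit halts well before the 100 scheduled epochs, roughly ten epochs after its lowest validation loss (the fit just above bottoms out at 0.3892 in epoch 12 and stops after epoch 22). That pattern is consistent with an early-stopping callback with patience 10 — in Keras this is `keras.callbacks.EarlyStopping`; the `patience=10` value is inferred from the logs, not stated in them. A minimal pure-Python sketch of the patience rule:

```python
def early_stopping_epoch(val_losses, patience=10):
    """Return the 1-based epoch training would stop after, or None if it never triggers.

    Mirrors the basic EarlyStopping rule: stop once `patience` consecutive
    epochs have failed to improve on the best validation loss seen so far.
    """
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, wait = loss, 0   # new best: reset the patience counter
        else:
            wait += 1
            if wait >= patience:   # patience exhausted: stop here
                return epoch
    return None
```

For example, `early_stopping_epoch([5, 4, 3, 3, 3], patience=2)` returns 5, since epochs 4 and 5 fail to beat the best loss from epoch 3.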
Epoch 1/100
242/242 [==============================] - 1s 873us/step - loss: 8.2504 -
val_loss: 43.0907
Epoch 2/100
242/242 [==============================] - 0s 643us/step - loss: 5.4519 -
val_loss: 27.4372
Epoch 3/100
242/242 [==============================] - 0s 640us/step - loss: 4.2168 -
val_loss: 17.4473
Epoch 4/100
242/242 [==============================] - 0s 646us/step - loss: 3.2806 -
val_loss: 11.0914
Epoch 5/100
242/242 [==============================] - 0s 617us/step - loss: 2.3482 -
val_loss: 7.0664
Epoch 6/100
242/242 [==============================] - 0s 613us/step - loss: 1.9986 -
val_loss: 4.5088
Epoch 7/100
242/242 [==============================] - 0s 601us/step - loss: 1.6719 -
val_loss: 2.9277
Epoch 8/100
242/242 [==============================] - 0s 637us/step - loss: 1.3693 -
val_loss: 1.9631
Epoch 9/100
242/242 [==============================] - 0s 647us/step - loss: 1.1748 -
val_loss: 1.3974
Epoch 10/100
242/242 [==============================] - 0s 641us/step - loss: 1.0381 -
val_loss: 1.0599
Epoch 11/100
242/242 [==============================] - 0s 637us/step - loss: 0.9239 -
val_loss: 0.8770
Epoch 12/100
242/242 [==============================] - 0s 646us/step - loss: 0.8351 -
val_loss: 0.7898
Epoch 13/100
242/242 [==============================] - 0s 635us/step - loss: 0.8418 -
val_loss: 0.7557
Epoch 14/100
242/242 [==============================] - 0s 630us/step - loss: 0.7796 -
val_loss: 0.7567
Epoch 15/100
242/242 [==============================] - 0s 632us/step - loss: 0.7733 -
val_loss: 0.7696
Epoch 16/100
242/242 [==============================] - 0s 634us/step - loss: 0.7238 -
val_loss: 0.7947
Epoch 17/100
242/242 [==============================] - 0s 649us/step - loss: 0.7449 -
val_loss: 0.8231
Epoch 18/100
242/242 [==============================] - 0s 626us/step - loss: 0.7233 -
val_loss: 0.8540
Epoch 19/100
242/242 [==============================] - 0s 624us/step - loss: 0.6904 -
val_loss: 0.8834
Epoch 20/100
242/242 [==============================] - 0s 614us/step - loss: 0.6946 -
val_loss: 0.9089
Epoch 21/100
242/242 [==============================] - 0s 622us/step - loss: 0.6678 -
val_loss: 0.9189
Epoch 22/100
242/242 [==============================] - 0s 624us/step - loss: 0.6622 -
val_loss: 0.9382
Epoch 23/100
242/242 [==============================] - 0s 619us/step - loss: 0.6785 -
val_loss: 0.9536
121/121 [==============================] - 0s 369us/step - loss: 0.6673
[CV] END learning_rate=0.0003099230412972121, n_hidden=0, n_neurons=49; total
time= 4.1s
Epoch 1/100
1/242 […] - ETA: 30s - loss: 10.0177
242/242 [==============================] - 0s 869us/step - loss: 8.2416 -
val_loss: 25.5463
Epoch 2/100
242/242 [==============================] - 0s 637us/step - loss: 6.2565 -
val_loss: 23.8232
Epoch 3/100
242/242 [==============================] - 0s 652us/step - loss: 4.5922 -
val_loss: 22.6165
Epoch 4/100
242/242 [==============================] - 0s 648us/step - loss: 3.4697 -
val_loss: 21.7670
Epoch 5/100
242/242 [==============================] - 0s 650us/step - loss: 2.7249 -
val_loss: 21.1673
Epoch 6/100
242/242 [==============================] - 0s 638us/step - loss: 2.1906 -
val_loss: 20.7451
Epoch 7/100
242/242 [==============================] - 0s 635us/step - loss: 1.8076 -
val_loss: 20.4552
Epoch 8/100
242/242 [==============================] - 0s 634us/step - loss: 1.5361 -
val_loss: 20.2628
Epoch 9/100
242/242 [==============================] - 0s 631us/step - loss: 1.3128 -
val_loss: 20.1364
Epoch 10/100
242/242 [==============================] - 0s 627us/step - loss: 1.1191 -
val_loss: 20.0567
Epoch 11/100
242/242 [==============================] - 0s 612us/step - loss: 0.9981 -
val_loss: 20.0180
Epoch 12/100
242/242 [==============================] - 0s 608us/step - loss: 0.9184 -
val_loss: 20.0099
Epoch 13/100
242/242 [==============================] - 0s 606us/step - loss: 0.8320 -
val_loss: 20.0241
Epoch 14/100
242/242 [==============================] - 0s 602us/step - loss: 0.7501 -
val_loss: 20.0459
Epoch 15/100
242/242 [==============================] - 0s 612us/step - loss: 0.7671 -
val_loss: 20.0826
Epoch 16/100
242/242 [==============================] - 0s 611us/step - loss: 0.6917 -
val_loss: 20.1289
Epoch 17/100
242/242 [==============================] - 0s 604us/step - loss: 0.6745 -
val_loss: 20.1804
Epoch 18/100
242/242 [==============================] - 0s 627us/step - loss: 0.6572 -
val_loss: 20.2376
Epoch 19/100
242/242 [==============================] - 0s 633us/step - loss: 0.6467 -
val_loss: 20.3030
Epoch 20/100
242/242 [==============================] - 0s 611us/step - loss: 0.6150 -
val_loss: 20.3653
Epoch 21/100
242/242 [==============================] - 0s 618us/step - loss: 0.6367 -
val_loss: 20.4378
Epoch 22/100
242/242 [==============================] - 0s 613us/step - loss: 0.6039 -
val_loss: 20.5061
121/121 [==============================] - 0s 361us/step - loss: 1.0973
[CV] END learning_rate=0.0003099230412972121, n_hidden=0, n_neurons=49; total
time= 3.6s
Epoch 1/100
1/242 […] - ETA: 30s - loss: 7.4915
242/242 [==============================] - 0s 834us/step - loss: 6.7231 -
val_loss: 7.6683
Epoch 2/100
242/242 [==============================] - 0s 606us/step - loss: 4.8260 -
val_loss: 4.9412
Epoch 3/100
242/242 [==============================] - 0s 613us/step - loss: 3.4818 -
val_loss: 3.3299
Epoch 4/100
242/242 [==============================] - 0s 619us/step - loss: 2.7802 -
val_loss: 2.3535
Epoch 5/100
242/242 [==============================] - 0s 605us/step - loss: 2.1777 -
val_loss: 1.7864
Epoch 6/100
242/242 [==============================] - 0s 623us/step - loss: 1.7499 -
val_loss: 1.4390
Epoch 7/100
242/242 [==============================] - 0s 613us/step - loss: 1.4061 -
val_loss: 1.2303
Epoch 8/100
242/242 [==============================] - 0s 616us/step - loss: 1.2991 -
val_loss: 1.1115
Epoch 9/100
242/242 [==============================] - 0s 608us/step - loss: 1.1093 -
val_loss: 1.0396
Epoch 10/100
242/242 [==============================] - 0s 614us/step - loss: 0.9825 -
val_loss: 0.9896
Epoch 11/100
242/242 [==============================] - 0s 605us/step - loss: 0.9155 -
val_loss: 0.9739
Epoch 12/100
242/242 [==============================] - 0s 608us/step - loss: 0.8496 -
val_loss: 0.9570
Epoch 13/100
242/242 [==============================] - 0s 616us/step - loss: 0.7428 -
val_loss: 0.9426
Epoch 14/100
242/242 [==============================] - 0s 627us/step - loss: 0.7397 -
val_loss: 0.9414
Epoch 15/100
242/242 [==============================] - 0s 627us/step - loss: 0.7434 -
val_loss: 0.9351
Epoch 16/100
242/242 [==============================] - 0s 619us/step - loss: 0.7074 -
val_loss: 0.9457
Epoch 17/100
242/242 [==============================] - 0s 617us/step - loss: 0.6917 -
val_loss: 0.9437
Epoch 18/100
242/242 [==============================] - 0s 622us/step - loss: 0.6979 -
val_loss: 0.9404
Epoch 19/100
242/242 [==============================] - 0s 620us/step - loss: 0.6856 -
val_loss: 0.9554
Epoch 20/100
242/242 [==============================] - 0s 620us/step - loss: 0.6577 -
val_loss: 0.9559
Epoch 21/100
242/242 [==============================] - 0s 609us/step - loss: 0.6467 -
val_loss: 0.9558
Epoch 22/100
242/242 [==============================] - 0s 621us/step - loss: 0.6808 -
val_loss: 0.9576
Epoch 23/100
242/242 [==============================] - 0s 626us/step - loss: 0.6454 -
val_loss: 0.9476
Epoch 24/100
242/242 [==============================] - 0s 620us/step - loss: 0.6332 -
val_loss: 0.9447
Epoch 25/100
242/242 [==============================] - 0s 625us/step - loss: 0.6360 -
val_loss: 0.9405
121/121 [==============================] - 0s 376us/step - loss: 0.6455
[CV] END learning_rate=0.0003099230412972121, n_hidden=0, n_neurons=49; total
time= 4.1s
Epoch 1/100
1/242 […] - ETA: 39s - loss: 5.9514
242/242 [==============================] - 0s 1ms/step - loss: 2.3082 -
val_loss: 19.2760
Epoch 2/100
242/242 [==============================] - 0s 746us/step - loss: 0.7060 -
val_loss: 4.6055
Epoch 3/100
242/242 [==============================] - 0s 728us/step - loss: 0.5848 -
val_loss: 0.7004
Epoch 4/100
242/242 [==============================] - 0s 735us/step - loss: 0.5041 -
val_loss: 0.5034
Epoch 5/100
242/242 [==============================] - 0s 732us/step - loss: 0.4466 -
val_loss: 0.4495
Epoch 6/100
242/242 [==============================] - 0s 729us/step - loss: 0.4172 -
val_loss: 0.4262
Epoch 7/100
242/242 [==============================] - 0s 729us/step - loss: 0.4067 -
val_loss: 0.4112
Epoch 8/100
242/242 [==============================] - 0s 731us/step - loss: 0.3991 -
val_loss: 0.4155
Epoch 9/100
242/242 [==============================] - 0s 726us/step - loss: 0.3828 -
val_loss: 0.4120
Epoch 10/100
242/242 [==============================] - 0s 724us/step - loss: 0.3832 -
val_loss: 0.4010
Epoch 11/100
242/242 [==============================] - 0s 742us/step - loss: 0.3696 -
val_loss: 0.4074
Epoch 12/100
242/242 [==============================] - 0s 726us/step - loss: 0.3527 -
val_loss: 0.3889
Epoch 13/100
242/242 [==============================] - 0s 728us/step - loss: 0.3835 -
val_loss: 0.3859
Epoch 14/100
242/242 [==============================] - 0s 726us/step - loss: 0.3673 -
val_loss: 0.4045
Epoch 15/100
242/242 [==============================] - 0s 728us/step - loss: 0.3625 -
val_loss: 0.3846
Epoch 16/100
242/242 [==============================] - 0s 725us/step - loss: 0.3631 -
val_loss: 0.3959
Epoch 17/100
242/242 [==============================] - 0s 727us/step - loss: 0.3670 -
val_loss: 0.4089
Epoch 18/100
242/242 [==============================] - 0s 724us/step - loss: 0.3748 -
val_loss: 0.3869
Epoch 19/100
242/242 [==============================] - 0s 726us/step - loss: 0.3501 -
val_loss: 0.3860
Epoch 20/100
242/242 [==============================] - 0s 737us/step - loss: 0.3588 -
val_loss: 0.3805
Epoch 21/100
242/242 [==============================] - 0s 743us/step - loss: 0.3540 -
val_loss: 0.3894
Epoch 22/100
242/242 [==============================] - 0s 714us/step - loss: 0.3430 -
val_loss: 0.3799
Epoch 23/100
242/242 [==============================] - 0s 737us/step - loss: 0.3521 -
val_loss: 0.4104
Epoch 24/100
242/242 [==============================] - 0s 736us/step - loss: 0.3502 -
val_loss: 0.3684
Epoch 25/100
242/242 [==============================] - 0s 729us/step - loss: 0.3486 -
val_loss: 0.3799
Epoch 26/100
242/242 [==============================] - 0s 725us/step - loss: 0.3392 -
val_loss: 0.3619
Epoch 27/100
242/242 [==============================] - 0s 724us/step - loss: 0.3567 -
val_loss: 0.3645
Epoch 28/100
242/242 [==============================] - 0s 724us/step - loss: 0.3458 -
val_loss: 0.3707
Epoch 29/100
242/242 [==============================] - 0s 719us/step - loss: 0.3460 -
val_loss: 0.3731
Epoch 30/100
242/242 [==============================] - 0s 736us/step - loss: 0.3413 -
val_loss: 0.3582
Epoch 31/100
242/242 [==============================] - 0s 738us/step - loss: 0.3418 -
val_loss: 0.3508
Epoch 32/100
242/242 [==============================] - 0s 764us/step - loss: 0.3313 -
val_loss: 0.3451
Epoch 33/100
242/242 [==============================] - 0s 764us/step - loss: 0.3343 -
val_loss: 0.3366
Epoch 34/100
242/242 [==============================] - 0s 772us/step - loss: 0.3327 -
val_loss: 0.3431
Epoch 35/100
242/242 [==============================] - 0s 766us/step - loss: 0.3301 -
val_loss: 0.3285
Epoch 36/100
242/242 [==============================] - 0s 756us/step - loss: 0.3276 -
val_loss: 0.3474
Epoch 37/100
242/242 [==============================] - 0s 767us/step - loss: 0.3270 -
val_loss: 0.3244
Epoch 38/100
242/242 [==============================] - 0s 760us/step - loss: 0.3357 -
val_loss: 0.3484
Epoch 39/100
242/242 [==============================] - 0s 769us/step - loss: 0.3274 -
val_loss: 0.3235
Epoch 40/100
242/242 [==============================] - 0s 759us/step - loss: 0.3134 -
val_loss: 0.3421
Epoch 41/100
242/242 [==============================] - 0s 760us/step - loss: 0.3237 -
val_loss: 0.3287
Epoch 42/100
242/242 [==============================] - 0s 747us/step - loss: 0.3262 -
val_loss: 0.3236
Epoch 43/100
242/242 [==============================] - 0s 725us/step - loss: 0.3276 -
val_loss: 0.3245
Epoch 44/100
242/242 [==============================] - 0s 722us/step - loss: 0.3226 -
val_loss: 0.3410
Epoch 45/100
242/242 [==============================] - 0s 736us/step - loss: 0.3316 -
val_loss: 0.3232
Epoch 46/100
242/242 [==============================] - 0s 733us/step - loss: 0.3098 -
val_loss: 0.3294
Epoch 47/100
242/242 [==============================] - 0s 742us/step - loss: 0.3201 -
val_loss: 0.3362
Epoch 48/100
242/242 [==============================] - 0s 748us/step - loss: 0.3096 -
val_loss: 0.3341
Epoch 49/100
242/242 [==============================] - 0s 758us/step - loss: 0.3297 -
val_loss: 0.3173
Epoch 50/100
242/242 [==============================] - 0s 747us/step - loss: 0.3164 -
val_loss: 0.3410
Epoch 51/100
242/242 [==============================] - 0s 735us/step - loss: 0.3042 -
val_loss: 0.3400
Epoch 52/100
242/242 [==============================] - 0s 747us/step - loss: 0.3283 -
val_loss: 0.3325
Epoch 53/100
242/242 [==============================] - 0s 756us/step - loss: 0.3120 -
val_loss: 0.3270
Epoch 54/100
242/242 [==============================] - 0s 737us/step - loss: 0.3132 -
val_loss: 0.3180
Epoch 55/100
242/242 [==============================] - 0s 730us/step - loss: 0.3083 -
val_loss: 0.3237
Epoch 56/100
242/242 [==============================] - 0s 748us/step - loss: 0.3328 -
val_loss: 0.3194
Epoch 57/100
242/242 [==============================] - 0s 752us/step - loss: 0.3165 -
val_loss: 0.3319
Epoch 58/100
242/242 [==============================] - 0s 767us/step - loss: 0.3163 -
val_loss: 0.3240
Epoch 59/100
242/242 [==============================] - 0s 765us/step - loss: 0.3182 -
val_loss: 0.3100
Epoch 60/100
242/242 [==============================] - 0s 770us/step - loss: 0.3117 -
val_loss: 0.3208
Epoch 61/100
242/242 [==============================] - 0s 762us/step - loss: 0.3022 -
val_loss: 0.3263
Epoch 62/100
242/242 [==============================] - 0s 765us/step - loss: 0.3078 -
val_loss: 0.3355
Epoch 63/100
242/242 [==============================] - 0s 764us/step - loss: 0.3146 -
val_loss: 0.3103
Epoch 64/100
242/242 [==============================] - 0s 759us/step - loss: 0.3029 -
val_loss: 0.3337
Epoch 65/100
242/242 [==============================] - 0s 770us/step - loss: 0.2935 -
val_loss: 0.3249
Epoch 66/100
242/242 [==============================] - 0s 744us/step - loss: 0.3246 -
val_loss: 0.3055
Epoch 67/100
242/242 [==============================] - 0s 758us/step - loss: 0.3108 -
val_loss: 0.3229
Epoch 68/100
242/242 [==============================] - 0s 758us/step - loss: 0.3065 -
val_loss: 0.3248
Epoch 69/100
242/242 [==============================] - 0s 765us/step - loss: 0.3051 -
val_loss: 0.3160
Epoch 70/100
242/242 [==============================] - 0s 759us/step - loss: 0.3247 -
val_loss: 0.3117
Epoch 71/100
242/242 [==============================] - 0s 728us/step - loss: 0.3097 -
val_loss: 0.3200
Epoch 72/100
242/242 [==============================] - 0s 731us/step - loss: 0.3251 -
val_loss: 0.3111
Epoch 73/100
242/242 [==============================] - 0s 745us/step - loss: 0.3121 -
val_loss: 0.3038
Epoch 74/100
242/242 [==============================] - 0s 760us/step - loss: 0.3118 -
val_loss: 0.3215
Epoch 75/100
242/242 [==============================] - 0s 764us/step - loss: 0.3035 -
val_loss: 0.3112
Epoch 76/100
242/242 [==============================] - 0s 763us/step - loss: 0.3036 -
val_loss: 0.3152
Epoch 77/100
242/242 [==============================] - 0s 766us/step - loss: 0.2962 -
val_loss: 0.3010
Epoch 78/100
242/242 [==============================] - 0s 743us/step - loss: 0.2948 -
val_loss: 0.3217
Epoch 79/100
242/242 [==============================] - 0s 725us/step - loss: 0.3003 -
val_loss: 0.3045
Epoch 80/100
242/242 [==============================] - 0s 729us/step - loss: 0.3010 -
val_loss: 0.3122
Epoch 81/100
242/242 [==============================] - 0s 756us/step - loss: 0.2849 -
val_loss: 0.3009
Epoch 82/100
242/242 [==============================] - 0s 768us/step - loss: 0.2887 -
val_loss: 0.3379
Epoch 83/100
242/242 [==============================] - 0s 768us/step - loss: 0.2802 -
val_loss: 0.2980
Epoch 84/100
242/242 [==============================] - 0s 727us/step - loss: 0.3055 -
val_loss: 0.3536
Epoch 85/100
242/242 [==============================] - 0s 716us/step - loss: 0.3040 -
val_loss: 0.2979
Epoch 86/100
242/242 [==============================] - 0s 723us/step - loss: 0.2856 -
val_loss: 0.3236
Epoch 87/100
242/242 [==============================] - 0s 743us/step - loss: 0.2850 -
val_loss: 0.3018
Epoch 88/100
242/242 [==============================] - 0s 737us/step - loss: 0.2984 -
val_loss: 0.3213
Epoch 89/100
242/242 [==============================] - 0s 752us/step - loss: 0.2915 -
val_loss: 0.2977
Epoch 90/100
242/242 [==============================] - 0s 759us/step - loss: 0.2834 -
val_loss: 0.3187
Epoch 91/100
242/242 [==============================] - 0s 732us/step - loss: 0.3039 -
val_loss: 0.2950
Epoch 92/100
242/242 [==============================] - 0s 756us/step - loss: 0.3093 -
val_loss: 0.3174
Epoch 93/100
242/242 [==============================] - 0s 756us/step - loss: 0.3131 -
val_loss: 0.2980
Epoch 94/100
242/242 [==============================] - 0s 748us/step - loss: 0.2998 -
val_loss: 0.2947
Epoch 95/100
242/242 [==============================] - 0s 726us/step - loss: 0.2782 -
val_loss: 0.3049
Epoch 96/100
242/242 [==============================] - 0s 735us/step - loss: 0.2868 -
val_loss: 0.2970
Epoch 97/100
242/242 [==============================] - 0s 720us/step - loss: 0.2937 -
val_loss: 0.3092
Epoch 98/100
242/242 [==============================] - 0s 717us/step - loss: 0.2914 -
val_loss: 0.2941
Epoch 99/100
242/242 [==============================] - 0s 729us/step - loss: 0.2873 -
val_loss: 0.3262
Epoch 100/100
242/242 [==============================] - 0s 720us/step - loss: 0.2920 -
val_loss: 0.2895
121/121 [==============================] - 0s 393us/step - loss: 0.3205
[CV] END learning_rate=0.0033625641252688094, n_hidden=2, n_neurons=42; total
time= 18.4s
Epoch 1/100
1/242 […] - ETA: 36s - loss: 6.2538
242/242 [==============================] - 0s 1ms/step - loss: 2.1569 -
val_loss: 0.8642
Epoch 2/100
242/242 [==============================] - 0s 736us/step - loss: 0.6291 -
val_loss: 0.7994
Epoch 3/100
242/242 [==============================] - 0s 728us/step - loss: 0.5353 -
val_loss: 1.0803
Epoch 4/100
242/242 [==============================] - 0s 736us/step - loss: 0.5002 -
val_loss: 1.1494
Epoch 5/100
242/242 [==============================] - 0s 731us/step - loss: 0.4735 -
val_loss: 0.9498
Epoch 6/100
242/242 [==============================] - 0s 723us/step - loss: 0.4442 -
val_loss: 0.6208
Epoch 7/100
242/242 [==============================] - 0s 722us/step - loss: 0.4210 -
val_loss: 0.4657
Epoch 8/100
242/242 [==============================] - 0s 724us/step - loss: 0.4193 -
val_loss: 0.3888
Epoch 9/100
242/242 [==============================] - 0s 762us/step - loss: 0.4020 -
val_loss: 0.4084
Epoch 10/100
242/242 [==============================] - 0s 735us/step - loss: 0.3975 -
val_loss: 0.4312
Epoch 11/100
242/242 [==============================] - 0s 725us/step - loss: 0.3939 -
val_loss: 0.5341
Epoch 12/100
242/242 [==============================] - 0s 735us/step - loss: 0.3730 -
val_loss: 0.6081
Epoch 13/100
242/242 [==============================] - 0s 720us/step - loss: 0.3728 -
val_loss: 0.7209
Epoch 14/100
242/242 [==============================] - 0s 750us/step - loss: 0.3594 -
val_loss: 0.8821
Epoch 15/100
242/242 [==============================] - 0s 729us/step - loss: 0.3869 -
val_loss: 0.9049
Epoch 16/100
242/242 [==============================] - 0s 727us/step - loss: 0.3657 -
val_loss: 0.9792
Epoch 17/100
242/242 [==============================] - 0s 720us/step - loss: 0.3699 -
val_loss: 0.9533
Epoch 18/100
242/242 [==============================] - 0s 716us/step - loss: 0.3774 -
val_loss: 1.0397
121/121 [==============================] - 0s 392us/step - loss: 0.3909
[CV] END learning_rate=0.0033625641252688094, n_hidden=2, n_neurons=42; total
time= 3.5s
Epoch 1/100
1/242 […] - ETA: 38s - loss: 4.8979
242/242 [==============================] - 0s 951us/step - loss: 1.6995 -
val_loss: 2.2824
Epoch 2/100
242/242 [==============================] - 0s 725us/step - loss: 0.7088 -
val_loss: 2.5063
Epoch 3/100
242/242 [==============================] - 0s 726us/step - loss: 0.5980 -
val_loss: 1.3345
Epoch 4/100
242/242 [==============================] - 0s 720us/step - loss: 0.5484 -
val_loss: 1.8303
Epoch 5/100
242/242 [==============================] - 0s 729us/step - loss: 0.4788 -
val_loss: 1.1690
Epoch 6/100
242/242 [==============================] - 0s 728us/step - loss: 0.4416 -
val_loss: 1.0937
Epoch 7/100
242/242 [==============================] - 0s 736us/step - loss: 0.4296 -
val_loss: 0.5393
Epoch 8/100
242/242 [==============================] - 0s 722us/step - loss: 0.4395 -
val_loss: 0.5528
Epoch 9/100
242/242 [==============================] - 0s 736us/step - loss: 0.4101 -
val_loss: 0.4217
Epoch 10/100
242/242 [==============================] - 0s 730us/step - loss: 0.4069 -
val_loss: 0.3978
Epoch 11/100
242/242 [==============================] - 0s 724us/step - loss: 0.4052 -
val_loss: 0.7642
Epoch 12/100
242/242 [==============================] - 0s 720us/step - loss: 0.3847 -
val_loss: 0.3953
Epoch 13/100
242/242 [==============================] - 0s 727us/step - loss: 0.3598 -
val_loss: 0.3690
Epoch 14/100
242/242 [==============================] - 0s 732us/step - loss: 0.3651 -
val_loss: 0.6782
Epoch 15/100
242/242 [==============================] - 0s 721us/step - loss: 0.3875 -
val_loss: 0.5137
Epoch 16/100
242/242 [==============================] - 0s 736us/step - loss: 0.3783 -
val_loss: 1.5716
Epoch 17/100
242/242 [==============================] - 0s 733us/step - loss: 0.3612 -
val_loss: 1.5438
Epoch 18/100
242/242 [==============================] - 0s 722us/step - loss: 0.3768 -
val_loss: 2.5256
Epoch 19/100
242/242 [==============================] - 0s 734us/step - loss: 0.4469 -
val_loss: 1.2077
Epoch 20/100
242/242 [==============================] - 0s 724us/step - loss: 0.3835 -
val_loss: 0.8839
Epoch 21/100
242/242 [==============================] - 0s 725us/step - loss: 0.3575 -
val_loss: 0.3408
Epoch 22/100
242/242 [==============================] - 0s 738us/step - loss: 0.3744 -
val_loss: 0.3928
Epoch 23/100
242/242 [==============================] - 0s 733us/step - loss: 0.3562 -
val_loss: 0.3411
Epoch 24/100
242/242 [==============================] - 0s 750us/step - loss: 0.3505 -
val_loss: 0.4823
Epoch 25/100
242/242 [==============================] - 0s 754us/step - loss: 0.3463 -
val_loss: 0.3589
Epoch 26/100
242/242 [==============================] - 0s 743us/step - loss: 0.3505 -
val_loss: 0.3810
Epoch 27/100
242/242 [==============================] - 0s 742us/step - loss: 0.3626 -
val_loss: 0.4593
Epoch 28/100
242/242 [==============================] - 0s 756us/step - loss: 0.3593 -
val_loss: 0.3360
Epoch 29/100
242/242 [==============================] - 0s 745us/step - loss: 0.3502 -
val_loss: 0.4983
Epoch 30/100
242/242 [==============================] - 0s 725us/step - loss: 0.3481 -
val_loss: 0.3747
Epoch 31/100
242/242 [==============================] - 0s 742us/step - loss: 0.3388 -
val_loss: 0.4128
Epoch 32/100
242/242 [==============================] - 0s 733us/step - loss: 0.3576 -
val_loss: 0.5464
Epoch 33/100
242/242 [==============================] - 0s 724us/step - loss: 0.3469 -
val_loss: 0.3827
Epoch 34/100
242/242 [==============================] - 0s 725us/step - loss: 0.3339 -
val_loss: 0.5037
Epoch 35/100
242/242 [==============================] - 0s 730us/step - loss: 0.3414 -
val_loss: 0.3439
Epoch 36/100
242/242 [==============================] - 0s 712us/step - loss: 0.3287 -
val_loss: 0.4822
Epoch 37/100
242/242 [==============================] - 0s 722us/step - loss: 0.3391 -
val_loss: 0.3598
Epoch 38/100
242/242 [==============================] - 0s 756us/step - loss: 0.3429 -
val_loss: 0.6269
121/121 [==============================] - 0s 403us/step - loss: 0.3388
[CV] END learning_rate=0.0033625641252688094, n_hidden=2, n_neurons=42; total
time= 7.1s
Epoch 1/100
363/363 [==============================] - 0s 822us/step - loss: 1.4435 -
val_loss: 7.9910
Epoch 2/100
363/363 [==============================] - 0s 679us/step - loss: 0.8791 -
val_loss: 4.4949
Epoch 3/100
363/363 [==============================] - 0s 647us/step - loss: 0.5555 -
val_loss: 0.4376
Epoch 4/100
363/363 [==============================] - 0s 637us/step - loss: 0.4665 -
val_loss: 0.4602
Epoch 5/100
363/363 [==============================] - 0s 665us/step - loss: 0.4358 -
val_loss: 0.4209
Epoch 6/100
363/363 [==============================] - 0s 667us/step - loss: 0.4102 -
val_loss: 0.4768
Epoch 7/100
363/363 [==============================] - 0s 666us/step - loss: 0.4165 -
val_loss: 0.4360
Epoch 8/100
363/363 [==============================] - 0s 676us/step - loss: 0.3978 -
val_loss: 0.3768
Epoch 9/100
363/363 [==============================] - 0s 676us/step - loss: 0.3809 -
val_loss: 0.4160
Epoch 10/100
363/363 [==============================] - 0s 647us/step - loss: 0.3761 -
val_loss: 0.4245
Epoch 11/100
363/363 [==============================] - 0s 652us/step - loss: 0.3666 -
val_loss: 0.3531
Epoch 12/100
363/363 [==============================] - 0s 646us/step - loss: 0.3677 -
val_loss: 0.4498
Epoch 13/100
363/363 [==============================] - 0s 644us/step - loss: 0.3647 -
val_loss: 0.3535
Epoch 14/100
363/363 [==============================] - 0s 638us/step - loss: 0.3598 -
val_loss: 0.3505
Epoch 15/100
363/363 [==============================] - 0s 637us/step - loss: 0.3611 -
val_loss: 0.3513
Epoch 16/100
363/363 [==============================] - 0s 647us/step - loss: 0.3636 -
val_loss: 0.3396
Epoch 17/100
363/363 [==============================] - 0s 651us/step - loss: 0.3473 -
val_loss: 0.3905
Epoch 18/100
363/363 [==============================] - 0s 657us/step - loss: 0.3519 -
val_loss: 0.3525
Epoch 19/100
363/363 [==============================] - 0s 665us/step - loss: 0.3411 -
val_loss: 0.3348
Epoch 20/100
363/363 [==============================] - 0s 673us/step - loss: 0.3324 -
val_loss: 0.4457
Epoch 21/100
363/363 [==============================] - 0s 657us/step - loss: 0.3433 -
val_loss: 0.3289
Epoch 22/100
363/363 [==============================] - 0s 664us/step - loss: 0.3302 -
val_loss: 0.4407
Epoch 23/100
363/363 [==============================] - 0s 673us/step - loss: 0.3515 -
val_loss: 0.3499
Epoch 24/100
363/363 [==============================] - 0s 680us/step - loss: 0.3342 -
val_loss: 0.3958
Epoch 25/100
363/363 [==============================] - 0s 675us/step - loss: 0.3342 -
val_loss: 0.3428
Epoch 26/100
363/363 [==============================] - 0s 672us/step - loss: 0.3364 -
val_loss: 0.3885
Epoch 27/100
363/363 [==============================] - 0s 664us/step - loss: 0.3360 -
val_loss: 0.3411
Epoch 28/100
363/363 [==============================] - 0s 672us/step - loss: 0.3373 -
val_loss: 0.4380
Epoch 29/100
363/363 [==============================] - 0s 665us/step - loss: 0.3265 -
val_loss: 0.3209
Epoch 30/100
363/363 [==============================] - 0s 662us/step - loss: 0.3346 -
val_loss: 0.5193
Epoch 31/100
363/363 [==============================] - 0s 662us/step - loss: 0.3265 -
val_loss: 0.3637
Epoch 32/100
363/363 [==============================] - 0s 678us/step - loss: 0.3269 -
val_loss: 0.4826
Epoch 33/100
363/363 [==============================] - 0s 672us/step - loss: 0.3288 -
val_loss: 0.3504
Epoch 34/100
363/363 [==============================] - 0s 676us/step - loss: 0.3219 -
val_loss: 0.3878
Epoch 35/100
363/363 [==============================] - 0s 678us/step - loss: 0.3164 -
val_loss: 0.3179
Epoch 36/100
363/363 [==============================] - 0s 674us/step - loss: 0.3119 -
val_loss: 0.3128
Epoch 37/100
363/363 [==============================] - 0s 673us/step - loss: 0.3403 -
val_loss: 0.3940
Epoch 38/100
363/363 [==============================] - 0s 669us/step - loss: 0.3195 -
val_loss: 0.3803
Epoch 39/100
363/363 [==============================] - 0s 673us/step - loss: 0.3360 -
val_loss: 0.7009
Epoch 40/100
363/363 [==============================] - 0s 671us/step - loss: 0.3195 -
val_loss: 0.5399
Epoch 41/100
363/363 [==============================] - 0s 676us/step - loss: 0.3329 -
val_loss: 0.7795
Epoch 42/100
363/363 [==============================] - 0s 679us/step - loss: 0.3202 -
val_loss: 0.4065
Epoch 43/100
363/363 [==============================] - 0s 675us/step - loss: 0.3302 -
val_loss: 0.7454
Epoch 44/100
363/363 [==============================] - 0s 676us/step - loss: 0.3229 -
val_loss: 0.4692
Epoch 45/100
363/363 [==============================] - 0s 659us/step - loss: 0.3184 -
val_loss: 0.9428
Epoch 46/100
363/363 [==============================] - 0s 668us/step - loss: 0.3186 -
val_loss: 0.7037
CPU times: user 4min 57s, sys: 41.8 s, total: 5min 39s
Wall time: 3min 42s
[102]: RandomizedSearchCV(cv=3,
estimator=<tensorflow.python.keras.wrappers.scikit_learn.KerasRegressor object
at 0x18c937d00>,
param_distributions={'learning_rate':
<scipy.stats._distn_infrastructure.rv_frozen object at 0x18a362730>,
'n_hidden': [0, 1, 2, 3],
'n_neurons': array([ 1, 2, 3, 4, 5,
6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34,
35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,
52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68,
69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85,
86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99])},
verbose=2)
[103]: rnd_search_cv.best_params_
[104]: rnd_search_cv.best_score_
[104]: -0.35007914900779724
[105]: rnd_search_cv.best_estimator_
[106]: -0.3268001675605774
[108]: 0.3268001675605774
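Note that best_score_ is negative: Scikit-Learn always maximizes scores, so error metrics such as the MSE are negated, and the cell above simply flips the sign back. A minimal illustration of the convention on toy data (the tiny regression below is an assumption for demonstration, not the housing data used above):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# A trivial noisy linear dataset:
rng = np.random.RandomState(42)
X = np.arange(20, dtype=float).reshape(-1, 1)
y = 2 * X.ravel() + rng.normal(scale=0.1, size=20)

# "neg_mean_squared_error" returns negated MSEs, so higher is better:
scores = cross_val_score(LinearRegression(), X, y, cv=2,
                         scoring="neg_mean_squared_error")
print(scores)          # negative values
print(-scores.mean())  # negate to recover the MSE
```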
13 Exercise solutions
13.1 1. to 9.
See appendix A.
13.2 10.
Exercise: Train a deep MLP on the MNIST dataset (you can load it using
keras.datasets.mnist.load_data()). See if you can get over 98% precision. Try searching
for the optimal learning rate by using the approach presented in this chapter (i.e., by growing the
learning rate exponentially, plotting the loss, and finding the point where the loss shoots up). Try
adding all the bells and whistles: save checkpoints, use early stopping, and plot learning curves
using TensorBoard.
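The core of the learning-rate search is just multiplicative growth: start from a small rate and multiply it by a constant factor after every batch. A quick sketch of the arithmetic (the 1e-3 starting rate, 1.005 factor, and 1719 batches per epoch match the values used below):

```python
import numpy as np

def exponential_rates(lr0, factor, n_batches):
    """Learning rate used on each batch under multiplicative growth."""
    return lr0 * factor ** np.arange(n_batches)

# Growing 1e-3 by 0.5% per batch covers several orders of magnitude
# in a single epoch of 1719 batches:
rates = exponential_rates(1e-3, 1.005, 1719)
print(rates[0], rates[-1])  # from 1e-3 up past 1.0
```

So a single epoch is enough to sweep the whole useful range of learning rates.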
Let’s load the dataset:
[109]: (X_train_full, y_train_full), (X_test, y_test) = keras.datasets.mnist.load_data()
Just like for the Fashion MNIST dataset, the MNIST training set contains 60,000 grayscale images,
each 28x28 pixels:
[110]: X_train_full.shape
[111]: X_train_full.dtype
[111]: dtype('uint8')
Let’s split the full training set into a validation set and a (smaller) training set. We also convert
the pixel intensities to floats and scale them down to the 0-1 range by dividing by 255, just like we
did for Fashion MNIST:
[112]: X_valid, X_train = X_train_full[:5000] / 255., X_train_full[5000:] / 255.
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
X_test = X_test / 255.
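As a sanity check on the scaling step, dividing a uint8 array by 255. promotes it to floats in the 0-1 range (a tiny self-contained illustration, not part of the original notebook):

```python
import numpy as np

# uint8 pixel intensities in 0-255...
img = np.array([[0, 128, 255]], dtype=np.uint8)
# ...become float64 values in [0, 1] after dividing by 255.
scaled = img / 255.
print(scaled.dtype, scaled.min(), scaled.max())
```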
Let’s plot an image using Matplotlib’s imshow() function, with a 'binary' color map:
[113]: plt.imshow(X_train[0], cmap="binary")
plt.axis('off')
plt.show()
The labels are the class IDs (represented as uint8), from 0 to 9. Conveniently, the class IDs
correspond to the digits represented in the images, so we don’t need a class_names array:
[114]: y_train
The validation set contains 5,000 images, and the test set contains 10,000 images:
[115]: X_valid.shape
[116]: X_test.shape
plt.title(y_train[index], fontsize=12)
plt.subplots_adjust(wspace=0.2, hspace=0.5)
plt.show()
Let’s build a simple dense network and find the optimal learning rate. We will need a callback to
grow the learning rate at each iteration. It will also record the learning rate and the loss at each
iteration:
[118]: K = keras.backend

class ExponentialLearningRate(keras.callbacks.Callback):
    def __init__(self, factor):
        self.factor = factor
        self.rates = []
        self.losses = []
    def on_batch_end(self, batch, logs):
        self.rates.append(K.get_value(self.model.optimizer.lr))
        self.losses.append(logs["loss"])
        K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
[119]: keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
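The cell defining the model does not appear in this export. A plausible architecture for 28x28 MNIST images, in the style of the dense networks used earlier in the chapter (the layer sizes here are an assumption, not the notebook's recorded cell):

```python
from tensorflow import keras

model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),    # 784 inputs per image
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
```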
We will start with a small learning rate of 1e-3, and grow it by 0.5% at each iteration:
[121]: model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
expon_lr = ExponentialLearningRate(factor=1.005)
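The cells that run fit() with this callback for one epoch and then plot the recorded values are not shown in this export. The plot would be produced roughly as follows; the rates and losses arrays below are dummy stand-ins for expon_lr.rates and expon_lr.losses (assumed shapes, not recorded values):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Dummy stand-ins: the rate grows by 0.5% per batch; the loss falls,
# then blows up once the rate gets too large.
rates = 1e-3 * 1.005 ** np.arange(1719)
losses = np.where(rates < 3e-1, 2.3 / (1 + 20 * rates), 2.3 * rates / 3e-1)

plt.figure(figsize=(8, 3))
plt.plot(rates, losses)
plt.gca().set_xscale("log")  # log x-axis spreads the sweep out evenly
plt.xlabel("Learning rate")
plt.ylabel("Loss")
```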
The loss starts shooting back up violently around 3e-1, so let’s try using 2e-1 as our learning rate:
[124]: keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
[126]: model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=2e-1),
metrics=["accuracy"])
run_logdir
[127]: './my_mnist_logs/run_001'
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
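The fit() call with these callbacks is not shown in this export. Per the exercise, checkpointing and early stopping would be wired in roughly like this (the checkpoint path and patience value are assumptions):

```python
from tensorflow import keras

# Save the best model seen so far after each epoch:
checkpoint_cb = keras.callbacks.ModelCheckpoint("my_mnist_model.h5",
                                                save_best_only=True)
# Stop once the validation loss stops improving, rolling back to the
# best weights seen during training:
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20,
                                                  restore_best_weights=True)

# history = model.fit(X_train, y_train, epochs=100,
#                     validation_data=(X_valid, y_valid),
#                     callbacks=[checkpoint_cb, early_stopping_cb,
#                                tensorboard_cb])
```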
Epoch 1/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4177 -
accuracy: 0.8705 - val_loss: 0.1056 - val_accuracy: 0.9672
Epoch 2/100
1719/1719 [==============================] - 2s 937us/step - loss: 0.0950 -
accuracy: 0.9706 - val_loss: 0.0901 - val_accuracy: 0.9740
Epoch 3/100
1719/1719 [==============================] - 2s 969us/step - loss: 0.0660 -
accuracy: 0.9795 - val_loss: 0.0765 - val_accuracy: 0.9776
Epoch 4/100
1719/1719 [==============================] - 2s 947us/step - loss: 0.0433 -
accuracy: 0.9858 - val_loss: 0.0797 - val_accuracy: 0.9788
Epoch 5/100
1719/1719 [==============================] - 2s 952us/step - loss: 0.0346 -
accuracy: 0.9886 - val_loss: 0.0714 - val_accuracy: 0.9794
Epoch 6/100
1719/1719 [==============================] - 2s 962us/step - loss: 0.0241 -
accuracy: 0.9923 - val_loss: 0.0683 - val_accuracy: 0.9818
Epoch 7/100
1719/1719 [==============================] - 2s 963us/step - loss: 0.0189 -
accuracy: 0.9940 - val_loss: 0.0784 - val_accuracy: 0.9798
Epoch 8/100
1719/1719 [==============================] - 2s 951us/step - loss: 0.0121 -
accuracy: 0.9964 - val_loss: 0.0701 - val_accuracy: 0.9820
Epoch 9/100
1719/1719 [==============================] - 2s 971us/step - loss: 0.0079 -
accuracy: 0.9977 - val_loss: 0.0732 - val_accuracy: 0.9830
Epoch 10/100
1719/1719 [==============================] - 2s 975us/step - loss: 0.0046 -
accuracy: 0.9988 - val_loss: 0.0659 - val_accuracy: 0.9850
Epoch 11/100
1719/1719 [==============================] - 2s 992us/step - loss: 0.0031 -
accuracy: 0.9991 - val_loss: 0.0696 - val_accuracy: 0.9854
Epoch 12/100
1719/1719 [==============================] - 2s 955us/step - loss: 8.6432e-04 -
accuracy: 0.9999 - val_loss: 0.0703 - val_accuracy: 0.9860
Epoch 13/100
1719/1719 [==============================] - 2s 948us/step - loss: 5.9146e-04 -
accuracy: 1.0000 - val_loss: 0.0732 - val_accuracy: 0.9852
Epoch 14/100
1719/1719 [==============================] - 2s 943us/step - loss: 6.4432e-04 -
accuracy: 0.9999 - val_loss: 0.0720 - val_accuracy: 0.9866
Epoch 15/100
1719/1719 [==============================] - 2s 941us/step - loss: 2.7277e-04 -
accuracy: 1.0000 - val_loss: 0.0737 - val_accuracy: 0.9868
Epoch 16/100
1719/1719 [==============================] - 2s 943us/step - loss: 2.2512e-04 -
accuracy: 1.0000 - val_loss: 0.0743 - val_accuracy: 0.9866
Epoch 17/100
1719/1719 [==============================] - 2s 937us/step - loss: 1.8048e-04 -
accuracy: 1.0000 - val_loss: 0.0755 - val_accuracy: 0.9868
Epoch 18/100
1719/1719 [==============================] - 2s 934us/step - loss: 1.5622e-04 -
accuracy: 1.0000 - val_loss: 0.0759 - val_accuracy: 0.9868
Epoch 19/100
1719/1719 [==============================] - 2s 939us/step - loss: 1.5396e-04 -
accuracy: 1.0000 - val_loss: 0.0764 - val_accuracy: 0.9872
Epoch 20/100
1719/1719 [==============================] - 2s 932us/step - loss: 1.4264e-04 -
accuracy: 1.0000 - val_loss: 0.0772 - val_accuracy: 0.9868
Epoch 21/100
1719/1719 [==============================] - 2s 971us/step - loss: 1.2908e-04 -
accuracy: 1.0000 - val_loss: 0.0776 - val_accuracy: 0.9868
Epoch 22/100
1719/1719 [==============================] - 2s 971us/step - loss: 1.2576e-04 -
accuracy: 1.0000 - val_loss: 0.0782 - val_accuracy: 0.9868
Epoch 23/100
1719/1719 [==============================] - 2s 957us/step - loss: 1.1881e-04 -
accuracy: 1.0000 - val_loss: 0.0788 - val_accuracy: 0.9868
Epoch 24/100
1719/1719 [==============================] - 2s 958us/step - loss: 1.0815e-04 -
accuracy: 1.0000 - val_loss: 0.0791 - val_accuracy: 0.9872
Epoch 25/100
1719/1719 [==============================] - 2s 951us/step - loss: 1.0094e-04 -
accuracy: 1.0000 - val_loss: 0.0795 - val_accuracy: 0.9868
Epoch 26/100
1719/1719 [==============================] - 2s 943us/step - loss: 9.4517e-05 -
accuracy: 1.0000 - val_loss: 0.0796 - val_accuracy: 0.9870
Epoch 27/100
1719/1719 [==============================] - 2s 940us/step - loss: 9.1678e-05 -
accuracy: 1.0000 - val_loss: 0.0799 - val_accuracy: 0.9872
Epoch 28/100
1719/1719 [==============================] - 2s 940us/step - loss: 9.2947e-05 -
accuracy: 1.0000 - val_loss: 0.0806 - val_accuracy: 0.9868
Epoch 29/100
1719/1719 [==============================] - 2s 958us/step - loss: 8.4733e-05 -
accuracy: 1.0000 - val_loss: 0.0810 - val_accuracy: 0.9868
Epoch 30/100
1719/1719 [==============================] - 2s 962us/step - loss: 8.2568e-05 -
accuracy: 1.0000 - val_loss: 0.0811 - val_accuracy: 0.9868
We got over 98% accuracy. Finally, let’s look at the learning curves using TensorBoard:
[130]: %tensorboard --logdir=./my_mnist_logs --port=6006
Reusing TensorBoard on port 6006 (pid 81629), started 0:07:36 ago. (Use '!kill 81629' to kill it.)
<IPython.core.display.HTML object>