EXPERIMENT – 1
1. Numerical Methods to solve matrix problems in Python
Python code to demonstrate matrix operations
# add(), subtract() and divide()
# importing numpy for matrix operations
import numpy
# initializing matrices
x = numpy.array([[1, 2], [4, 5]])
y = numpy.array([[7, 8], [9, 10]])
# using add() to add matrices
print ("The element wise addition of matrix is : ")
print (numpy.add(x,y))
# using subtract() to subtract matrices
print ("The element wise subtraction of matrix is : ")
print (numpy.subtract(x,y))
# using divide() to divide matrices
print ("The element wise division of matrix is : ")
print (numpy.divide(x,y))
OUTPUT:-
The element wise addition of matrix is :
[[ 8 10]
[13 15]]
The element wise subtraction of matrix is :
[[-6 -6]
[-5 -5]]
The element wise division of matrix is :
[[0.14285714 0.25 ]
[0.44444444 0.5 ]]
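Beyond element-wise arithmetic, "matrix problems" usually also means linear systems. A minimal added sketch (the system below is an assumed example, not part of the recorded session) using numpy.linalg.solve:
import numpy as np

A = np.array([[1, 2], [4, 5]], dtype=float)
b = np.array([5, 14], dtype=float)

# Solve A @ x = b directly (preferred over computing the inverse)
x = np.linalg.solve(A, b)
print(x)                      # [1. 2.]
print(np.allclose(A @ x, b))  # True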
EXPERIMENT – 2
2. Eigenvalue decomposition techniques
import numpy as np
A = np.array([[3, 1], [1, 3]])
eigenvalues, eigenvectors = np.linalg.eig(A)
V = eigenvectors
Λ = np.diag(eigenvalues)
V_inv = np.linalg.inv(V)
A_decomposed = np.dot(np.dot(V, Λ), V_inv)
print("Original Matrix A:\n", A)
print("Eigenvalue Decomposition of A:\n", A_decomposed)
Output
Original Matrix A:
[[3 1]
[1 3]]
Eigenvalue Decomposition of A:
[[3. 1.]
[1. 3.]]
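As a quick check (an added sketch, not part of the recorded session), the decomposition can be verified numerically:
# Inspect the eigenvalues and verify the reconstruction A = V Λ V⁻¹
print(eigenvalues)                    # expected: [4. 2.] for this matrix
print(np.allclose(A, V @ Λ @ V_inv))  # True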
EXPERIMENT – 3
3. Dimensionality Reduction – PCA
import tensorflow as tf
import tensorflow_transform as tft

def preprocessing_fn(inputs):
    features = []
    outputs = {}
    for feature_tensor in inputs.values():
        # Standard scaling (z-score) is a prerequisite for PCA
        features.append(tft.scale_to_z_score(feature_tensor))
    # Concatenate into the feature matrix PCA will run over
    feature_matrix = tf.concat(features, axis=1)
    # Get the orthonormal projection matrix
    orthonormal_vectors = tft.pca(feature_matrix, output_dim=2, dtype=tf.float32)
    # Multiply the feature matrix by it to get the projected transformation
    pca_examples = tf.linalg.matmul(feature_matrix, orthonormal_vectors)
    # Unstack the components and add them to the output dict
    pca_examples = tf.unstack(pca_examples, axis=1)
    outputs['Principal Component 1'] = pca_examples[0]
    outputs['Principal Component 2'] = pca_examples[1]
    return outputs
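The function above targets TensorFlow Transform (tft) inside a TFX pipeline, so it is not runnable on its own. For reference, a standalone sketch of the same idea (standardize, then project onto two components) using scikit-learn; the toy data and the library choice are my assumptions:
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Toy feature matrix: 5 samples, 3 features (assumed data)
X = np.array([[2.5, 2.4, 0.5],
              [0.5, 0.7, 2.1],
              [2.2, 2.9, 0.3],
              [1.9, 2.2, 0.8],
              [3.1, 3.0, 0.1]])

# Standardize first (the z-score step), then project onto 2 components
X_std = StandardScaler().fit_transform(X)
X_pca = PCA(n_components=2).fit_transform(X_std)
print(X_pca.shape)   # (5, 2)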
EXPERIMENT – 4
4. Fundamentals of TensorFlow
Tensors
TensorFlow operates on multidimensional arrays or tensors represented
as tf.Tensor objects. Here is a two-dimensional tensor:
import tensorflow as tf
x = tf.constant([[1., 2., 3.],
                 [4., 5., 6.]])
print(x)
print(x.shape)
print(x.dtype)
Output:
tf.Tensor(
[[1. 2. 3.]
[4. 5. 6.]], shape=(2, 3), dtype=float32)
(2, 3)
<dtype: 'float32'>
The most important attributes of a tf.Tensor are its shape and dtype:
• Tensor.shape: tells you the size of the tensor along each of its axes.
• Tensor.dtype: tells you the type of all the elements in the tensor.
TensorFlow implements standard mathematical operations on tensors, as well as many
operations specialized for machine learning.
For example:
x + x
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 2., 4., 6.],
[ 8., 10., 12.]], dtype=float32)>
5 * x
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 5., 10., 15.],
[20., 25., 30.]], dtype=float32)>
x @ tf.transpose(x)
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[14., 32.],
[32., 77.]], dtype=float32)>
tf.concat([x, x, x], axis=0)
<tf.Tensor: shape=(6, 3), dtype=float32, numpy=
array([[1., 2., 3.],
[4., 5., 6.],
[1., 2., 3.],
[4., 5., 6.],
[1., 2., 3.],
[4., 5., 6.]], dtype=float32)>
tf.nn.softmax(x, axis=-1)
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[0.09003057, 0.24472848, 0.6652409 ],
[0.09003057, 0.24472848, 0.6652409 ]], dtype=float32)>
tf.reduce_sum(x)
<tf.Tensor: shape=(), dtype=float32, numpy=21.0>
Note: Typically, anywhere a TensorFlow function expects a Tensor as input, the function will
also accept anything that can be converted to a Tensor using tf.convert_to_tensor. See
below for an example.
tf.convert_to_tensor([1,2,3])
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 2, 3], dtype=int32)>
tf.reduce_sum([1,2,3])
<tf.Tensor: shape=(), dtype=int32, numpy=6>
Running large calculations on CPU can be slow. When properly configured, TensorFlow
can use accelerator hardware like GPUs to execute operations very quickly.
if tf.config.list_physical_devices('GPU'):
    print("TensorFlow **IS** using the GPU")
else:
    print("TensorFlow **IS NOT** using the GPU")
TensorFlow **IS** using the GPU
Refer to the Tensor guide for details.
Variables
Normal tf.Tensor objects are immutable. To store model weights (or other mutable state)
in TensorFlow use a tf.Variable.
var = tf.Variable([0.0, 0.0, 0.0])
var.assign([1, 2, 3])
<tf.Variable 'UnreadVariable' shape=(3,) dtype=float32, numpy=array([1., 2., 3.], dtype=float32)>
var.assign_add([1, 1, 1])
<tf.Variable 'UnreadVariable' shape=(3,) dtype=float32, numpy=array([2., 3., 4.], dtype=float32)>
Refer to the Variables guide for details.
Automatic differentiation
Gradient descent and related algorithms are a cornerstone of modern machine learning.
To enable this, TensorFlow implements automatic differentiation (autodiff), which uses
calculus to compute gradients. Typically you'll use this to calculate the gradient of a
model's error or loss with respect to its weights.
x = tf.Variable(1.0)

def f(x):
    y = x**2 + 2*x - 5
    return y
f(x)
<tf.Tensor: shape=(), dtype=float32, numpy=-2.0>
At x = 1.0, y = f(x) = (1**2 + 2*1 - 5) = -2.
The derivative of y is y' = f'(x) = 2*x + 2, which is 4 at x = 1. TensorFlow can calculate this automatically:
with tf.GradientTape() as tape:
    y = f(x)
g_x = tape.gradient(y, x) # g(x) = dy/dx
g_x
<tf.Tensor: shape=(), dtype=float32, numpy=4.0>
This simplified example only takes the derivative with respect to a single scalar (x), but
TensorFlow can compute the gradient with respect to any number of non-scalar tensors
simultaneously.
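For instance, a small sketch (my addition, not from the original tutorial) of taking gradients with respect to several tensors at once:
w = tf.Variable(tf.ones((3, 2)))
b = tf.Variable(tf.zeros(2))
x_in = tf.constant([[1., 2., 3.]])   # hypothetical input batch

with tf.GradientTape() as tape:
    y = x_in @ w + b
    loss = tf.reduce_mean(y**2)

# A single call returns one gradient per listed source
dw, db = tape.gradient(loss, [w, b])
print(dw.shape, db.shape)   # (3, 2) (2,)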
Refer to the Autodiff guide for details.
Graphs and tf.function
While you can use TensorFlow interactively like any Python library, TensorFlow also
provides tools for:
• Performance optimization: to speed up training and inference.
• Export: so you can save your model when it's done training.
These require that you use tf.function to separate your pure-TensorFlow code from
Python.
@tf.function
def my_func(x):
    print('Tracing.\n')
    return tf.reduce_sum(x)
The first time you run the tf.function, although it executes in Python, it captures a
complete, optimized graph representing the TensorFlow computations done within the
function.
x = tf.constant([1, 2, 3])
my_func(x)
Tracing.
<tf.Tensor: shape=(), dtype=int32, numpy=6>
On subsequent calls TensorFlow only executes the optimized graph, skipping any non-TensorFlow steps. Below, note that my_func doesn't print 'Tracing.' since print is a Python function, not a TensorFlow function.
x = tf.constant([10, 9, 8])
my_func(x)
<tf.Tensor: shape=(), dtype=int32, numpy=27>
A graph may not be reusable for inputs with a different signature (shape and dtype), so a
new graph is generated instead:
x = tf.constant([10.0, 9.1, 8.2], dtype=tf.float32)
my_func(x)
Tracing.
<tf.Tensor: shape=(), dtype=float32, numpy=27.3>
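As an aside (my addition, using tf.function's documented input_signature option), retracing for new shapes can be avoided by pinning the signature:
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
def my_func_fixed(x):
    return tf.reduce_sum(x)

my_func_fixed(tf.constant([1.0, 2.0]))       # traced once
my_func_fixed(tf.constant([3.0, 4.0, 5.0]))  # same graph, no retrace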
These captured graphs provide two benefits:
• In many cases they provide a significant speedup in execution (though not in this trivial example).
• You can export these graphs, using tf.saved_model, to run on other systems like
a server or a mobile device, no Python installation required.
Refer to Intro to graphs for more details.
Modules, layers, and models
tf.Module is a class for managing your tf.Variable objects, and the tf.function objects that
operate on them. The tf.Module class is necessary to support two significant features:
1. You can save and restore the values of your variables using tf.train.Checkpoint. This is
useful during training as it is quick to save and restore a model's state.
2. You can import and export the tf.Variable values and the tf.function graphs
using tf.saved_model. This allows you to run your model independently of the Python
program that created it.
Here is a complete example exporting a simple tf.Module object:
class MyModule(tf.Module):
    def __init__(self, value):
        self.weight = tf.Variable(value)

    @tf.function
    def multiply(self, x):
        return x * self.weight
mod = MyModule(3)
mod.multiply(tf.constant([1, 2, 3]))
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([3, 6, 9], dtype=int32)>
Save the Module:
save_path = './saved'
tf.saved_model.save(mod, save_path)
INFO:tensorflow:Assets written to: ./saved/assets
The resulting SavedModel is independent of the code that created it. You can load a
SavedModel from Python, other language bindings, or TensorFlow Serving. You can
also convert it to run with TensorFlow Lite or TensorFlow JS.
reloaded = tf.saved_model.load(save_path)
reloaded.multiply(tf.constant([1, 2, 3]))
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([3, 6, 9], dtype=int32)>
The tf.keras.layers.Layer and tf.keras.Model classes build on tf.Module providing additional
functionality and convenience methods for building, training, and saving models. Some
of these are demonstrated in the next section.
Refer to Intro to modules for details.
Training loops
Now put this all together to build a basic model and train it from scratch.
First, create some example data. This generates a cloud of points that loosely follows a
quadratic curve:
import matplotlib
from matplotlib import pyplot as plt

matplotlib.rcParams['figure.figsize'] = [9, 6]

x = tf.linspace(-2, 2, 201)
x = tf.cast(x, tf.float32)

def f(x):
    y = x**2 + 2*x - 5
    return y

y = f(x) + tf.random.normal(shape=[201])

plt.plot(x.numpy(), y.numpy(), '.', label='Data')
plt.plot(x, f(x), label='Ground truth')
plt.legend();
Create a quadratic model with randomly initialized weights and a bias:
class Model(tf.Module):
    def __init__(self):
        # Randomly generate weight and bias terms
        rand_init = tf.random.uniform(shape=[3], minval=0., maxval=5., seed=22)
        # Initialize model parameters
        self.w_q = tf.Variable(rand_init[0])
        self.w_l = tf.Variable(rand_init[1])
        self.b = tf.Variable(rand_init[2])

    @tf.function
    def __call__(self, x):
        # Quadratic model: quadratic_weight * x^2 + linear_weight * x + bias
        return self.w_q * (x**2) + self.w_l * x + self.b
First, observe your model's performance before training:
quad_model = Model()

def plot_preds(x, y, f, model, title):
    plt.figure()
    plt.plot(x, y, '.', label='Data')
    plt.plot(x, f(x), label='Ground truth')
    plt.plot(x, model(x), label='Predictions')
    plt.title(title)
    plt.legend()

plot_preds(x, y, f, quad_model, 'Before training')
Now, define a loss for your model:
Given that this model is intended to predict continuous values, the mean squared error
(MSE) is a good choice for the loss function. Given a vector of predictions, y^, and a
vector of true targets, y, the MSE is defined as the mean of the squared differences
between the predicted values and the ground truth.
$MSE = \frac{1}{m}\sum_{i=1}^{m}(\hat{y}_i - y_i)^2$
def mse_loss(y_pred, y):
    return tf.reduce_mean(tf.square(y_pred - y))
Write a basic training loop for the model. The loop will make use of the MSE loss function and its gradients with respect to the model's parameters in order to iteratively update them. Using mini-batches for training provides both memory efficiency and faster convergence. The tf.data.Dataset API has useful functions for batching and shuffling.
batch_size = 32
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.shuffle(buffer_size=x.shape[0]).batch(batch_size)

# Set training parameters
epochs = 100
learning_rate = 0.01
losses = []

# Format training loop
for epoch in range(epochs):
    for x_batch, y_batch in dataset:
        with tf.GradientTape() as tape:
            batch_loss = mse_loss(quad_model(x_batch), y_batch)
        # Update parameters with respect to the gradient calculations
        grads = tape.gradient(batch_loss, quad_model.variables)
        for g, v in zip(grads, quad_model.variables):
            v.assign_sub(learning_rate * g)
    # Keep track of model loss per epoch
    loss = mse_loss(quad_model(x), y)
    losses.append(loss)
    if epoch % 10 == 0:
        print(f'Mean squared error for step {epoch}: {loss.numpy():0.3f}')

# Plot model results
print("\n")
plt.plot(range(epochs), losses)
plt.xlabel("Epoch")
plt.ylabel("Mean Squared Error (MSE)")
plt.title('MSE loss vs training iterations');
Mean squared error for step 0: 57.050
Mean squared error for step 10: 10.417
Mean squared error for step 20: 4.270
Mean squared error for step 30: 2.171
Mean squared error for step 40: 1.429
Mean squared error for step 50: 1.162
Mean squared error for step 60: 1.062
Mean squared error for step 70: 1.027
Mean squared error for step 80: 1.017
Mean squared error for step 90: 1.014
Now, observe your model's performance after training:
plot_preds(x, y, f, quad_model, 'After training')
That's working, but remember that implementations of common training utilities are
available in the tf.keras module. So, consider using those before writing your own. To
start with, the Model.compile and Model.fit methods implement a training loop for you:
Begin by creating a Sequential model in Keras using tf.keras.Sequential. One of the simplest Keras layers is the dense layer, which can be instantiated with tf.keras.layers.Dense. The dense layer is able to learn multidimensional linear relationships of the form $Y = WX + \vec{b}$. In order to learn a nonlinear equation of the form $w_1x^2 + w_2x + b$, the dense layer's input should be a data matrix with $x^2$ and $x$ as features. The lambda layer, tf.keras.layers.Lambda, can be used to perform this stacking transformation.
new_model = tf.keras.Sequential([
    tf.keras.layers.Lambda(lambda x: tf.stack([x, x**2], axis=1)),
    tf.keras.layers.Dense(units=1, kernel_initializer=tf.random.normal)])

new_model.compile(
    loss=tf.keras.losses.MSE,
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01))

history = new_model.fit(x, y,
                        epochs=100,
                        batch_size=32,
                        verbose=0)
new_model.save('./my_new_model.keras')
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1727918680.462363 10695 service.cc:146] XLA service 0x7f51a0006eb0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
I0000 00:00:1727918680.462397 10695 service.cc:154] StreamExecutor device (0): Tesla T4, Compute Capability 7.5
I0000 00:00:1727918680.462401 10695 service.cc:154] StreamExecutor device (1): Tesla T4, Compute Capability 7.5
I0000 00:00:1727918680.462404 10695 service.cc:154] StreamExecutor device (2): Tesla T4, Compute Capability 7.5
I0000 00:00:1727918680.462407 10695 service.cc:154] StreamExecutor device (3): Tesla T4, Compute Capability 7.5
I0000 00:00:1727918680.790826 10695 device_compiler.h:188] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.
Observe your Keras model's performance after training:
plt.plot(history.history['loss'])
plt.xlabel('Epoch')
plt.ylim([0, max(plt.ylim())])
plt.ylabel('Loss [Mean Squared Error]')
plt.title('Keras training progress');
plot_preds(x, y, f, new_model, 'After Training: Keras')
EXPERIMENT – 5
5. Build a Convolutional Neural Network for MNIST handwritten digit classification.
Here is a basic outline for building a convolutional neural network (CNN) for MNIST
handwritten digit classification:
1. Import libraries: Start by importing the required libraries such as TensorFlow, NumPy, and Matplotlib.
2. Load the MNIST dataset: Load the MNIST dataset using the tensorflow library and split it into train and test sets.
3. Preprocess the data: Normalize the pixel values of the images and one-hot encode the labels.
4. Define the model architecture: Define the architecture of the CNN using the Sequential model from Keras and add the following layers:
   a. Convolution layer: Apply filters to the input images to extract features.
   b. Max-pooling layer: Downsample the spatial dimensions of the features.
   c. Flatten layer: Flatten the features into a 1D array.
   d. Dense layer: Apply a fully connected layer to classify the digits.
5. Compile the model: Compile the model using an optimizer such as Adam, a loss function such as categorical_crossentropy, and an evaluation metric such as accuracy.
6. Train the model: Train the model on the training data with a chosen batch size and number of epochs.
7. Evaluate the model: Evaluate the model on the test data and check the accuracy.
8. Make predictions: Use the trained model to make predictions on new images.
import tensorflow as tf

# Load the MNIST dataset and split into train and test sets
(train_data, train_labels), (test_data, test_labels) = tf.keras.datasets.mnist.load_data()

# Preprocess: scale pixels to [0, 1], add a channel axis, one-hot encode labels
train_data = train_data.reshape(-1, 28, 28, 1).astype('float32') / 255.0
test_data = test_data.reshape(-1, 28, 28, 1).astype('float32') / 255.0
train_labels = tf.keras.utils.to_categorical(train_labels, 10)
test_labels = tf.keras.utils.to_categorical(test_labels, 10)

# Define the model architecture
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))

# Compile the model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(train_data, train_labels, epochs=10, validation_data=(test_data, test_labels))

# Evaluate the model
test_loss, test_acc = model.evaluate(test_data, test_labels)

# Make predictions on the test images
predictions = model.predict(test_data)
output:
Epoch 1/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.4933 - accuracy: 0.8247 - val_loss: 0.3958 - val_accuracy: 0.8630
Epoch 2/10
1875/1875 [==============================] - 2s 1ms/step - loss: 0.3696 - accuracy: 0.8698 - val_loss: 0.3569 - val_accuracy: 0.8735
Epoch 3/10
1875/1875 [==============================] - 2s 1ms/step - loss: 0.3279 - accuracy: 0.8830 - val_loss: 0.3486 - val_accuracy: 0.8753
...
Epoch 10/10
1875/1875 [==============================] - 2s 1ms/step - loss: 0.2082 - accuracy: 0.9222 - val_loss: 0.3851 - val_accuracy: 0.8835
313/313 [==============================] - 0s 1ms/step - loss: 0.3753 - accuracy: 0.8880
EXPERIMENT – 6
6. Build a Convolutional Neural Network for simple image (dogs and cats) classification
Here is a high-level implementation of a Convolutional Neural Network (CNN) for classifying
images of dogs and cats:
Code:
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
# Define the model
model = Sequential()
# Add the first convolutional layer
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=(224, 224, 3)))
# Add the max pooling layer
model.add(MaxPooling2D(pool_size=(2, 2)))
# Add the second convolutional layer
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
# Add the max pooling layer
model.add(MaxPooling2D(pool_size=(2, 2)))
# Add the flatten layer
model.add(Flatten())
# Add the dense layer
model.add(Dense(units=64, activation='relu'))
# Add the output layer
model.add(Dense(units=1, activation='sigmoid'))
# Compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Use ImageDataGenerator to load and augment the image data
train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.2, zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    'train',
    target_size=(224, 224),
    batch_size=32,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    'validation',
    target_size=(224, 224),
    batch_size=32,
    class_mode='binary')
# Train the model on the dogs and cats dataset
# (model.fit accepts generators directly; fit_generator is deprecated)
model.fit(
    train_generator,
    steps_per_epoch=2000,
    epochs=10,
    validation_data=validation_generator,
    validation_steps=800)
The code assumes that the images of dogs and cats have been preprocessed and split into train and validation sets stored in separate directories, with all images resized to 224x224 with three color channels (RGB). The number of filters, the kernel size, the activation functions, and the number of epochs can be adjusted as needed.
output:
Epoch 1/10
2000/2000 [==============================] - 2107s 1s/step - loss: 0.6641 - accuracy: 0.5959 - val_loss: 0.5844 - val_accuracy: 0.6798
Epoch 2/10
2000/2000 [==============================] - 2014s 1s/step - loss: 0.5808 - accuracy: 0.6912 - val_loss: 0.5498 - val_accuracy: 0.7135
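Once trained, the model can classify a single image. A hedged sketch (my addition; the file path is hypothetical, and class indices follow flow_from_directory's alphabetical ordering, typically cats = 0, dogs = 1):
import numpy as np
from keras.preprocessing import image

# Load and preprocess one image the same way as the training data
img = image.load_img('sample.jpg', target_size=(224, 224))  # hypothetical file
x = image.img_to_array(img) / 255.0   # same rescale as ImageDataGenerator
x = np.expand_dims(x, axis=0)         # add the batch dimension
prob = model.predict(x)[0][0]         # sigmoid output
print('dog' if prob > 0.5 else 'cat')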
EXPERIMENT - 7
7. Implement one-hot encoding of words or characters
Code:
from keras.preprocessing.text import Tokenizer
# define the input text
text = ["apple", "banana", "cherry", "apple",
"banana", "cherry"]
# instantiate the tokenizer
tokenizer = Tokenizer()
# fit the tokenizer on the input text
tokenizer.fit_on_texts(text)
# encode the text into numerical representations
encoded_text = tokenizer.texts_to_matrix(text, mode='binary')
print(encoded_text)
output:
[[0. 1. 0. 0.]
 [0. 0. 1. 0.]
 [0. 0. 0. 1.]
 [0. 1. 0. 0.]
 [0. 0. 1. 0.]
 [0. 0. 0. 1.]]
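The experiment title also mentions characters; a minimal character-level one-hot sketch with plain NumPy (my addition, using an assumed example string):
import numpy as np

text = "apple"
chars = sorted(set(text))                      # vocabulary: ['a', 'e', 'l', 'p']
char_to_idx = {c: i for i, c in enumerate(chars)}

# One row per character, one column per vocabulary entry
one_hot = np.zeros((len(text), len(chars)))
for i, c in enumerate(text):
    one_hot[i, char_to_idx[c]] = 1.0
print(one_hot)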
EXPERIMENT – 8
8. Word2vec framework
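The manual provides no code for this experiment. A minimal sketch using the gensim library (an assumption on my part, since no library is named; the gensim 4.x API is used) illustrates training word2vec embeddings:
from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences (assumed data)
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["dogs", "and", "cats", "are", "animals"],
]

# Train a small skip-gram model (sg=1); vector_size is the embedding width
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1)

print(model.wv["king"].shape)          # (50,)
print(model.wv.most_similar("king"))   # nearest words by cosine similarity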
EXPERIMENT – 9
9. Implement word embeddings for IMDB dataset.
Source code:
import numpy as np
from keras.preprocessing import sequence
from keras.preprocessing.text import Tokenizer
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Embedding
from keras.layers import Flatten
# Load the IMDB dataset: one review per line, with the 0/1 label as the last token
# (np.loadtxt cannot parse free text, so read and split the lines manually)
with open("imdb.txt") as f:
    lines = [line.rsplit(maxsplit=1) for line in f if line.strip()]
texts = [text for text, label in lines]
y = np.array([int(label) for text, label in lines])
# Tokenize the input text, keeping the 5,000 most frequent words
# to match the Embedding layer's input_dim below
tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(texts)
X = tokenizer.texts_to_sequences(texts)
# Pad the sequences of tokens to the same length
max_length = 500
X = sequence.pad_sequences(X, maxlen=max_length)
# Define the model
model = Sequential()
model.add(Embedding(input_dim=5000, output_dim=32, input_length=max_length))
model.add(Flatten())
model.add(Dense(units=250, activation='relu'))
model.add(Dense(units=1, activation='sigmoid'))
# Compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model on the data
model.fit(X, y, epochs=10, batch_size=32)
# Evaluate the model
_, accuracy = model.evaluate(X, y, verbose=0)
print('Accuracy: %.2f' % (accuracy*100))
Explanation:
This code reads the raw text data and uses the Tokenizer class to convert it into numerical sequences of tokens. The texts_to_sequences method converts the text into numerical sequences, which are then padded to the same length using the pad_sequences function. The model is defined using the Keras Sequential API and contains an Embedding layer followed by a Flatten layer, a Dense layer, and a final output layer with a sigmoid activation function. The model is compiled using the binary cross-entropy loss and the Adam optimizer, and is fit on the data using the fit method. Finally, the accuracy of the model is evaluated on the same data.
output:
Epoch 1/10
25000/25000 [==============================] - 4s 156us/step - loss: 0.4429 - accuracy: 0.7738
Epoch 2/10
25000/25000 [==============================] - 3s 130us/step - loss: 0.2340 - accuracy: 0.9081
Epoch 3/10
25000/25000 [==============================] - 3s 130us/step - loss: 0.1374 - accuracy: 0.9465
Epoch 4/10
25000/25000 [==============================] - 3s 130us/step - loss: 0.0888 - accuracy: 0.9656
Epoch 5/10
EXPERIMENT – 10
10. Implement a Recurrent Neural Network for the IMDB movie review classification problem.
Source code:
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
max_features = 5000  # top 5,000 words, per the explanation and the Embedding param count
maxlen = 100
batch_size = 32
embedding_dims = 50
lstm_units = 128
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)
model = Sequential()
model.add(Embedding(max_features, embedding_dims, input_length=maxlen))
model.add(LSTM(lstm_units))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=5, validation_data=(x_test, y_test))
score, accuracy = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', accuracy)
Explanation
Here's a step-by-step explanation of the code:
1. We import the necessary libraries from Keras.
2. We set the parameters for the model, including the maximum number of features to consider,
the maximum length of each review, the batch size for training, the embedding dimensions, and
the number of LSTM units.
3. We load the IMDB dataset from Keras and only consider the top 5,000 words in the dataset.
4. We pad the sequences to the same length of 100 words, truncating longer sequences and
padding shorter sequences with zeros.
5. We build the model, starting with an embedding layer that maps each word to a dense vector of
size 50. We then add an LSTM layer with 128 units and a dense layer with a sigmoid activation
function to output a probability of a positive review.
6. We compile the model, using binary cross-entropy loss and the Adam optimizer, and
specifying accuracy as the evaluation metric.
7. We train the model on the training set for 5 epochs, using a batch size of 32, and validate on
the test set.
8. We evaluate the model on the test set, printing the test loss and accuracy.
Output:
Model: "sequential_1"
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 100, 50) 250000
lstm_1 (LSTM) (None, 128) 91648
dense_1 (Dense) (None, 1) 129
=================================================================
Total params: 341,777
Trainable params: 341,777
Non-trainable params: 0
Train on 25000 samples, validate on 25000 samples
Epoch 1/5
25000/25000 [==============================] - 42s 2ms/step - loss: 0.4992 - accuracy: 0.7574 - val_loss: 0.3646 - val_accuracy: 0.8407
Epoch 2/5
25000/25000 [==============================] - 43s 2ms/step - loss: 0.2955 - accuracy: 0.8831 - val_loss: 0.3298 - val_accuracy: 0.8605
Epoch 3/5
25000/25000 [==============================] - 43s 2ms/step - loss: 0.2344 - accuracy: 0.9114 - val_loss: 0.3324 - val_accuracy: 0.8606
Epoch 4/5
25000/25000 [==============================] - 42s 2ms/step - loss: 0.1913 - accuracy: 0.9300 - val_loss: 0.3589 - val_accuracy: 0.8547
Epoch 5/5
25000/25000 [==============================] - 42s 2ms/step - loss: 0.1598 - accuracy: 0.9424 - val_loss: 0.3814 - val_accuracy: 0.8532
25000/25000 [==============================] - 10s 391us/step
Test score: 0.3813556725502014
Test accuracy: 0.8531999588012695
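As a hedged usage sketch (my addition, relying on Keras's documented IMDB index conventions: word ids are offset by 3, with 2 marking out-of-vocabulary words), the trained model can score a new review:
word_index = imdb.get_word_index()

def encode_review(text):
    # Map words to IMDB ids (offset by 3); unseen or rare words become 2 (OOV)
    ids = [word_index.get(w, -1) + 3 for w in text.lower().split()]
    return [i if 2 < i < max_features else 2 for i in ids]

review = "a wonderful film with a great cast"   # assumed example input
x_new = pad_sequences([encode_review(review)], maxlen=maxlen)
print(model.predict(x_new))   # probability of a positive review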