LSTMImplementation.ipynb
The program proceeds in the following steps:
1) Import necessary libraries.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Embedding
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence
2) Load and prepare the IMDB dataset.
max_features = 10000 # Number of words to consider as features
maxlen = 500 # Cut texts after this number of words
print("Loading data...")
(input_train, y_train), (input_test, y_test) = imdb.load_data(num_words=max_features)
print(len(input_train), "train sequences")
print(len(input_test), "test sequences")
print("Pad sequences (samples x time)")
input_train = sequence.pad_sequences(input_train, maxlen=maxlen)
input_test = sequence.pad_sequences(input_test, maxlen=maxlen)
print("input_train shape:", input_train.shape)
print("input_test shape:", input_test.shape)
Loading data...
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
17464789/17464789 [==============================] - 1s 0us/step
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
input_train shape: (25000, 500)
input_test shape: (25000, 500)
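Each review is stored as a sequence of word indices rather than raw text. As an optional sanity check (an addition, not part of the original notebook), a review can be decoded back to words with imdb.get_word_index(); indices are offset by 3 because 0, 1, and 2 are reserved for padding, start-of-sequence, and unknown tokens.
word_index = imdb.get_word_index()
reverse_word_index = {value: key for key, value in word_index.items()}
# Offset by 3 for the reserved tokens; '?' stands in for anything unknown.
decoded = ' '.join(reverse_word_index.get(i - 3, '?') for i in input_train[0])
print(decoded[:200])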
3) Define the model.
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))
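Before compiling, it can help to confirm the architecture. The snippet below (an optional addition, not from the original notebook) builds the model on the known input shape so that model.summary() works prior to training; the parameter counts can be checked by hand.
# Build on the known input shape so summary() works before training.
model.build(input_shape=(None, maxlen))
model.summary()
# Expected parameter counts:
#   Embedding: 10000 * 32               = 320,000
#   LSTM:      4 * (32*32 + 32*32 + 32) =   8,320  (4 gates x (W_x + W_h + bias))
#   Dense:     32 * 1 + 1               =      33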
4) Compile the model.
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['acc'])
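The single sigmoid output pairs naturally with binary cross-entropy: the loss for a prediction p against a label y is -(y * log(p) + (1 - y) * log(1 - p)). A minimal NumPy illustration (not from the original notebook):
import numpy as np

def binary_crossentropy(y, p, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)  # avoid log(0)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

print(binary_crossentropy(1, 0.9))  # ~0.105: confident and correct
print(binary_crossentropy(1, 0.1))  # ~2.303: confident and wrong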
5) Train the model.
history = model.fit(input_train, y_train,
                    epochs=10,
                    batch_size=128,
                    validation_split=0.2)
Epoch 1/10
157/157 [==============================] - 57s 347ms/step - loss: 0.6067 - acc: 0.6564 - val_loss: 0.5497 - val_acc: 0.7200
Epoch 2/10
157/157 [==============================] - 51s 327ms/step - loss: 0.3575 - acc: 0.8517 - val_loss: 0.4184 - val_acc: 0.8042
Epoch 3/10
157/157 [==============================] - 52s 332ms/step - loss: 0.2714 - acc: 0.8957 - val_loss: 0.4682 - val_acc: 0.8214
Epoch 4/10
157/157 [==============================] - 52s 332ms/step - loss: 0.2313 - acc: 0.9123 - val_loss: 0.2921 - val_acc: 0.8858
Epoch 5/10
157/157 [==============================] - 52s 332ms/step - loss: 0.2017 - acc: 0.9245 - val_loss: 0.3899 - val_acc: 0.8634
Epoch 6/10
157/157 [==============================] - 50s 321ms/step - loss: 0.1854 - acc: 0.9326 - val_loss: 0.3171 - val_acc: 0.8656
Epoch 7/10
157/157 [==============================] - 52s 329ms/step - loss: 0.1660 - acc: 0.9398 - val_loss: 0.3038 - val_acc: 0.8798
Epoch 8/10
157/157 [==============================] - 51s 323ms/step - loss: 0.1494 - acc: 0.9483 - val_loss: 0.3052 - val_acc: 0.8770
Epoch 9/10
157/157 [==============================] - 51s 328ms/step - loss: 0.1390 - acc: 0.9518 - val_loss: 0.4212 - val_acc: 0.8686
Epoch 10/10
157/157 [==============================] - 52s 334ms/step - loss: 0.1275 - acc: 0.9557 - val_loss: 0.3880 - val_acc: 0.8438
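The log shows classic overfitting: training accuracy climbs steadily to 0.9557 while validation loss bottoms out around epoch 4 (val_acc 0.8858) and then degrades. Plotting the history makes this visible; the snippet below is an optional addition, assuming matplotlib is available.
import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, 'bo-', label='Training acc')
plt.plot(epochs, val_acc, 'ro-', label='Validation acc')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()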
6) Evaluate the model.
evaluation = model.evaluate(input_test, y_test)
print(f'Test Loss: {evaluation[0]} - Test Accuracy: {evaluation[1]}')
782/782 [==============================] - 34s 43ms/step - loss: 0.4023 - acc: 0.8432
Test Loss: 0.40225595235824585 - Test Accuracy: 0.8432000279426575
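To use the trained model on new text, a review must be encoded the same way the dataset was: words mapped through the IMDB word index, offset by 3, capped at max_features, with unknown words falling back to the reserved token 2. The helper below is a hypothetical sketch, not part of the original notebook.
word_index = imdb.get_word_index()

def encode_review(text):
    ids = [1]  # 1 = start-of-sequence token
    for w in text.lower().split():
        i = word_index.get(w)
        ids.append(i + 3 if i is not None and i + 3 < max_features else 2)
    return sequence.pad_sequences([ids], maxlen=maxlen)

sample = encode_review("this movie was a wonderful surprise from start to finish")
print(model.predict(sample)[0][0])  # values near 1.0 indicate positive sentiment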