DL LAB MANUAL
REGISTER NUMBER :
BONAFIDE CERTIFICATE
Reg.No.
PSO 1 – Applying AI principles and practices to develop innovative solutions for society.
PSO 2 – Adapting emerging technologies and tools to solve existing and novel problems.
Program Outcomes (POs)
SYLLABUS
COURSE OBJECTIVES:
LIST OF EXPERIMENTS
COURSE OUTCOMES:
CO–PO/PSO Mapping

CO    PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12  PSO1  PSO2
I     3    2    1    1    1    -    -    -    3    2     3     2     3     3
II    1    3    2    2    2    -    -    -    3    2     2     2     1     3
III   3    2    1    2    1    -    -    -    2    3     1     1     2     3
IV    3    3    1    2    1    -    -    -    1    3     2     2     3     2
V     3    3    3    3    2    -    -    -    1    2     3     1     3     3
CO–PO Justification

C307.1 (BKL: K3)
PO1 - Substantial: Strongly mapped as students analyze the efficiency of recursive and non-recursive algorithms using engineering principles and mathematical knowledge.
PO2 - Substantial: Strongly mapped as students identify and analyze complex problems in algorithmic efficiency, applying first principles of mathematics and engineering.
PO3 - Substantial: Strongly mapped as students design efficient algorithms and evaluate them, considering various constraints.
PO4 - Slight: Slightly mapped as students use research methods to conduct investigations and interpret data on algorithm efficiency.
PO5 - Slight: Slightly mapped as students use modern tools to analyze algorithms with an understanding of their limitations.
PO9 - Slight: Slightly mapped as students document and communicate their analysis and findings regarding algorithm efficiency.
PO10 - Slight: Slightly mapped as students apply management principles to tasks involving algorithm analysis.
PO11 - Moderate: Moderately mapped as students recognize the importance of lifelong learning in understanding evolving algorithmic techniques.
PO12 - Moderate: Moderately mapped as students apply AI principles to develop innovative algorithmic solutions.
PSO1 - Substantial: Strongly mapped as students adapt new technologies for improving algorithm efficiency.
PSO2 - Moderate: Moderately mapped as students analyze the efficiency of recursive and non-recursive algorithms using engineering principles and mathematical knowledge.
C307.2 (BKL: K3)
PO1 - Moderate: Moderately mapped as students apply engineering and mathematical principles to analyze brute force, divide and conquer, and other algorithmic techniques.
PO2 - Slight: Slightly mapped as students identify key elements in different algorithmic techniques and formulate their analysis.
PO3 - Slight: Slightly mapped as students design components using various algorithmic techniques, considering different constraints.
PO4 - Substantial: Strongly mapped as students conduct research-based analysis of algorithmic techniques and validate their efficiency.
PO5 - Moderate: Moderately mapped as students use modern tools to apply algorithmic techniques to solve complex problems.
PO9 - Moderate: Moderately mapped as students communicate their analysis and findings through effective documentation and presentations.
PO10 - Moderate: Moderately mapped as students apply management principles in tasks involving the analysis of algorithmic techniques.
PO11 - Slight: Slightly mapped as students understand the need for continuous learning to stay updated with new algorithmic techniques.
PO12 - Moderate: Moderately mapped as students apply AI and data science principles to develop solutions using different algorithmic techniques.
PSO1 - Moderate: Moderately mapped as students adapt emerging technologies and tools to solve existing problems using algorithmic techniques.
PSO2 - Moderate: Moderately mapped as students apply engineering and mathematical principles to analyze brute force, divide and conquer, and other algorithmic techniques.
C307.3 (BKL: K3)
PO1 - Substantial: Strongly mapped as students implement and analyze problems using dynamic programming and greedy algorithmic techniques.
PO2 - Moderate: Moderately mapped as students review and formulate problems suitable for dynamic programming and greedy methods.
PO3 - Slight: Slightly mapped as students design system components using dynamic programming and greedy techniques.
PO4 - Moderate: Moderately mapped as students conduct experiments and interpret data using dynamic programming and greedy methods.
PO5 - Moderate: Moderately mapped as students use modern engineering tools to implement dynamic programming and greedy techniques.
PO9 - Moderate: Moderately mapped as students document and present their analysis of problems solved using these techniques.
PO10 - Slight: Slightly mapped as students manage projects involving dynamic programming and greedy algorithms.
PO11 - Slight: Slightly mapped as students recognize the importance of lifelong learning to stay updated on new methods in dynamic programming and greedy algorithms.
PO12 - Moderate: Moderately mapped as students apply AI principles to solve problems using dynamic programming and greedy techniques.
PSO1 - Slight: Slightly mapped as students adapt dynamic programming and greedy methods to solve novel problems.
PSO2 - Substantial: Strongly mapped as students implement and analyze problems using dynamic programming and greedy algorithmic techniques.
C307.4 (BKL: K3)
PO1 - Substantial: Strongly mapped as students solve problems using iterative improvement techniques for optimization.
PO2 - Moderate: Moderately mapped as students analyze optimization problems using iterative improvement methods.
PO3 - Substantial: Strongly mapped as students design and implement solutions using iterative improvement techniques, considering various constraints.
PO4 - Moderate: Moderately mapped as students conduct investigations and interpret data related to optimization problems.
PO5 - Moderate: Moderately mapped as students use modern tools and techniques to apply iterative improvement methods for solving optimization problems.
PO9 - Substantial: Strongly mapped as students effectively communicate their process and results in solving optimization problems.
PO10 - Substantial: Strongly mapped as students manage projects involving the application of iterative improvement techniques.
PO11 - Substantial: Strongly mapped as students recognize the importance of lifelong learning to keep up with advancements in optimization techniques.
PO12 - Moderate: Moderately mapped as students apply AI principles to optimize solutions using iterative improvement techniques.
PSO1 - Moderate: Moderately mapped as students adapt iterative improvement methods to solve novel optimization problems.
PSO2 - Slight: Slightly mapped as students solve problems using iterative improvement techniques for optimization.
C307.5 (BKL: K3)
PO1 - Substantial: Strongly mapped as students compute the limitations of algorithmic power and solve problems using backtracking techniques.
PO2 - Slight: Slightly mapped as students analyze and formulate problems suitable for backtracking.
PO3 - Moderate: Moderately mapped as students design system components or processes using backtracking techniques.
PO4 - Substantial: Strongly mapped as students conduct research to understand the limitations of algorithms and validate backtracking techniques.
PO5 - Substantial: Strongly mapped as students use modern tools to implement and solve problems using backtracking.
PO9 - Moderate: Moderately mapped as students effectively communicate their findings through reports and presentations.
PO10 - Moderate: Moderately mapped as students apply project management principles in tasks related to backtracking.
PO11 - Moderate: Moderately mapped as students engage in continuous learning to understand the evolving nature of algorithmic limitations and backtracking techniques.
PO12 - Moderate: Moderately mapped as students apply AI principles to solve complex problems using backtracking.
PSO1 - Substantial: Strongly mapped as students adapt backtracking methods to solve new and existing problems.
PSO2 - Slight: Slightly mapped as students compute the limitations of algorithmic power and solve problems using backtracking techniques.
DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND DATA SCIENCE
Ex. No: 1 Solving XOR Problem using DNN Date:
Aim:
To write a program for Solving XOR problem using DNN
Algorithm:
1. Initialize weights and biases randomly for each neuron in the network.
2. Set the learning rate and the number of epochs for training.
3. Define the activation function (sigmoid or another suitable function):
   f(x) = 1 / (1 + exp(-x))
4. Training: repeat for each epoch, for each training example (the XOR input pairs [0,0], [0,1], [1,0], [1,1]):
   a. Compute the weighted sum for each neuron in the hidden layer from the inputs and weights.
   b. Apply the activation function to the hidden-layer neurons.
   c. Compute the weighted sum for the output neuron from the hidden-layer outputs and weights.
   d. Apply the activation function to the output neuron.
   e. Compute the error between the predicted output and the actual XOR output.
   f. Backpropagate the error to adjust weights and biases using gradient descent: first update the output-layer weights and biases, then the hidden-layer weights and biases (see the NumPy sketch after this list).
5. Testing: for each XOR input pair, pass the input through the trained network and compare the output to the expected XOR value (0 or 1).
6. Adjust the hyperparameters (learning rate, number of hidden neurons, epochs) as needed for better performance.
7. Once the network produces accurate results for the XOR problem, it is trained and ready to solve XOR inputs.
8. Use the trained network to predict XOR outputs for new inputs.
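Since the Keras program below hides the weight updates, the following NumPy sketch shows the manual training loop the algorithm describes. The layer size, learning rate, and epoch count are illustrative assumptions, not values from the original program.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # output layer
lr = 0.5

for epoch in range(10000):
    # Forward pass (steps a-d)
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Error and backpropagation (steps e-f) with the sigmoid derivative
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out).astype(int).ravel())  # expected: [0 1 1 0]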
Program:
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
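# NOTE: the data-definition, model-building, and training steps are missing
# from the source listing; the block below is a minimal reconstruction
# (hidden-layer size, learning rate, and epoch count are assumptions -- the
# 25% accuracy in the Output suggests the original run had not converged).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # XOR inputs
y = np.array([[0], [1], [1], [0]])              # XOR targets

model = Sequential([
    Dense(4, activation='sigmoid', input_shape=(2,)),  # hidden layer
    Dense(1, activation='sigmoid')                     # output layer
])
model.compile(optimizer=SGD(learning_rate=0.1),
              loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=1000, verbose=0)

# Evaluate on the four XOR patterns
loss, accuracy = model.evaluate(X, y)
print(f"Model accuracy: {accuracy * 100:.2f}%")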
# Make predictions
predictions = model.predict(X)
print("Predictions:")
print((predictions > 0.5).astype(int)) # Threshold predictions for binary output
Output:
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 143ms/step - accuracy: 0.2500 - loss: 0.6931
Model accuracy: 25.00%
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 51ms/step
Predictions:
[[0]
[0]
[0]
[1]]
Result:
Thus the program was executed successfully.
Ex: 02 Character Recognition using CNN Date:
Aim:
To write a python program to implement the Character recognition using CNN.
Algorithm:
1. Start the program.
2. Import the relevant packages for recognition.
3. Load the 0 to 9 handwritten-digit data from the MNIST dataset.
4. Reshape the data for model creation.
5. Train the model and make predictions on the test data.
6. Predict on an external image.
7. Stop the program.
Program:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Flatten
from tensorflow.keras.utils import to_categorical
# Load the MNIST handwritten-digit data (this step is implied by the
# reshape below but was omitted in the source)
(X_train, y_train), (X_test, y_test) = mnist.load_data()

# Reshape data to fit the CNN (28x28 pixels with 1 color channel)
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
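The listing ends after the reshape step; the preprocessing, model-definition, and training code is absent from the source. A minimal sketch of the remaining steps follows (the layer sizes, epoch count, and batch size are assumptions, not the original values; the external-image prediction step is not reconstructed here):

# Scale pixel values and one-hot encode the labels (assumed preprocessing)
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# A small CNN for 10-class handwritten-character recognition (sizes assumed)
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train the model and predict on test data
model.fit(X_train, y_train, epochs=5, batch_size=128,
          validation_data=(X_test, y_test))
test_loss, test_acc = model.evaluate(X_test, y_test)
print(f"Test accuracy: {test_acc:.4f}")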
Ex. No: 3 Face Recognition using CNN Date:
Aim:
To write a python program to implement the Face recognition using CNN.
Algorithm:
1. Start the program.
2. Import the relevant packages for face recognition.
3. Reshape the data for model creation.
4. Train the model and make predictions on the test data.
5. Predict on an external image.
6. Stop the program.
Program:
import numpy as np
import pandas as pd
from sklearn.datasets import fetch_lfw_people
import matplotlib.pyplot as plt
import seaborn as sns
from collections import Counter
from tensorflow.keras.utils import to_categorical
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
import tensorflow.keras.preprocessing.image as image
from sklearn.metrics import confusion_matrix  # needed for the evaluation below
# Load dataset
faces = fetch_lfw_people(min_faces_per_person=100, resize=1.0,
slice_=(slice(60, 188), slice(60, 188)), color=True)
class_count = len(faces.target_names)
# Count
counts = Counter(faces.target)
names = {faces.target_names[key]: counts[key] for key in counts.keys()}
df = pd.DataFrame.from_dict(names, orient='index')
df.plot(kind='bar')
plt.title("Number of Images per Person")
plt.show()
# Limit each person to at most 100 images
mask = np.zeros(faces.target.shape, dtype=bool)
for target in np.unique(faces.target):
    mask[np.where(faces.target == target)[0][:100]] = 1
x_faces = faces.data[mask]
y_faces = faces.target[mask]
x_faces = np.reshape(x_faces, (x_faces.shape[0], faces.images.shape[1],
faces.images.shape[2], faces.images.shape[3]))
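# NOTE: the normalization, train/test split, and model definition are
# missing from the source; the block below is a minimal reconstruction
# (layer sizes and split ratio are assumptions).
x_faces = x_faces / 255.0
y_encoded = to_categorical(y_faces, num_classes=class_count)
x_train, x_test, y_train, y_test = train_test_split(
    x_faces, y_encoded, train_size=0.8, random_state=0)

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=x_train.shape[1:]),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(class_count, activation='softmax')
])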
# Compile
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
# Train
history = model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=20,
batch_size=25)
y_predicted = model.predict(x_test)
conf_matrix = confusion_matrix(y_test.argmax(axis=1), y_predicted.argmax(axis=1))
plt.figure(figsize=(10, 8))
sns.heatmap(conf_matrix.T, square=True, annot=True, fmt='d', cbar=False, cmap='Blues',
xticklabels=faces.target_names, yticklabels=faces.target_names)
plt.xlabel('Actual Label')
plt.ylabel('Predicted Label')
plt.show()
OUTPUT:
Result:
Thus the program was executed successfully.
Ex. No : 4 LANGUAGE MODELING USING RNN Date:
Aim:
To write a python program to implement the Language modeling using RNN.
Algorithm:
1. Start the program.
2. Import the relevant packages for language modeling.
3. Read the training text and tokenize it into words.
4. Build n-gram input sequences from the tokenized text and pad them to a uniform length.
5. Split each padded sequence into predictors (all words except the last) and a label (the last word).
6. Define a model with an embedding layer, LSTM layers, and a softmax output over the vocabulary.
7. Train the model on the sequences.
8. Generate text by repeatedly predicting the next word from a seed text.
9. Stop the program.
Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
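# NOTE: the training corpus is elided in the source; any short text works.
# The generated sentence in the Output suggests it began roughly like this:
text = "Once upon a time there was a young programmer who wanted to create amazing things"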
# Preprocessing
tokenizer = Tokenizer()
tokenizer.fit_on_texts([text])
total_words = len(tokenizer.word_index) + 1
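# NOTE: the sequence-building, model, training, and generation code is
# missing from the source. The reconstruction below is a minimal sketch
# consistent with the two-LSTM summary and 200-epoch run in the Output;
# the exact layer sizes are assumptions.
input_sequences = []
for line in text.split('\n'):
    token_list = tokenizer.texts_to_sequences([line])[0]
    for i in range(1, len(token_list)):
        input_sequences.append(token_list[:i + 1])
max_sequence_len = max(len(seq) for seq in input_sequences)
input_sequences = pad_sequences(input_sequences, maxlen=max_sequence_len,
                                padding='pre')
X, y = input_sequences[:, :-1], input_sequences[:, -1]
y = tf.keras.utils.to_categorical(y, num_classes=total_words)

model = Sequential([
    Embedding(total_words, 100),
    LSTM(150, return_sequences=True),
    LSTM(100),
    Dense(total_words, activation='softmax')
])
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
model.summary()
model.fit(X, y, epochs=200, verbose=1)

def generate_text(seed_text, next_words, max_sequence_len):
    # Repeatedly predict the most likely next word and append it to the seed
    for _ in range(next_words):
        token_list = tokenizer.texts_to_sequences([seed_text])[0]
        token_list = pad_sequences([token_list], maxlen=max_sequence_len - 1,
                                   padding='pre')
        predicted = int(np.argmax(model.predict(token_list, verbose=0), axis=-1)[0])
        for word, index in tokenizer.word_index.items():
            if index == predicted:
                seed_text += " " + word
                break
    return seed_text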
# Example usage
seed_text = "Once upon a time"
next_words = 10
print(generate_text(seed_text, next_words, max_sequence_len))
Output:
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━
━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇
━━━━━━━━━━━━━━━━━┩
│ embedding (Embedding) │? │ 0 (unbuilt) │
├──────────────────────────────────────┼────────
─────────────────────┼─────────────────┤
│ lstm (LSTM) │? │ 0 (unbuilt) │
├──────────────────────────────────────┼────────
─────────────────────┼─────────────────┤
│ lstm_1 (LSTM) │? │ 0 (unbuilt) │
├──────────────────────────────────────┼────────
─────────────────────┼─────────────────┤
│ dense (Dense) │? │ 0 (unbuilt) │
└──────────────────────────────────────┴────────
─────────────────────┴─────────────────┘
Total params: 0 (0.00 B)
Trainable params: 0 (0.00 B)
Non-trainable params: 0 (0.00 B)
Epoch 1/200
2/2 ━━━━━━━━━━━━━━━━━━━━ 7s 29ms/step - accuracy: 0.0000e+00 - loss: 3.5840
Epoch 2/200
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 24ms/step - accuracy: 0.1118 - loss: 3.5733
Epoch 3/200
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 27ms/step - accuracy: 0.0559 - loss: 3.5643
...
(epochs 4-197 omitted: accuracy rises steadily toward 1.0000 while the loss falls from about 3.56 to 0.25)
...
Epoch 198/200
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step - accuracy: 1.0000 - loss: 0.2250
Epoch 199/200
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 24ms/step - accuracy: 0.9720 - loss: 0.2343
Epoch 200/200
2/2 ━━━━━━━━━━━━━━━━━━━━ 0s 33ms/step - accuracy: 0.9720 - loss: 0.2366
Once upon a time there was a young programmer who wanted to create amazing
Result:
Thus the program was executed successfully.
Ex.No: 5 Sentiment Analysis using LSTM Date:
Aim:
To write a python program for implementing sentiment analysis using LSTM.
Algorithm:
1. Load the IMDB dataset, keeping the top 10,000 most frequent words.
2. Preprocess the data by padding sequences to a fixed length (200 in this case).
3. Build a simple LSTM model with an embedding layer, LSTM layers, and a dense output layer with a sigmoid activation function for binary sentiment classification.
4. Compile and train the model on the training data.
5. Evaluate the trained model on the test data.
6. Perform sentiment analysis on custom text by tokenizing, padding, and using the trained model to make predictions.
7. Adjust the number of training epochs, batch size, model architecture, and hyperparameters to improve performance for the specific use case.
Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout
from tensorflow.keras.datasets import imdb
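The listing ends after the imports; the data-loading, model-definition, training, and evaluation code is absent from the source. A minimal sketch consistent with the output below follows (layer widths, dropout rate, batch size, and validation split are assumptions):

num_words = 10000   # keep the top 10,000 most frequent words
maxlen = 200        # pad/truncate reviews to 200 tokens
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=num_words)
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)

# Embedding -> LSTM -> Dropout -> LSTM -> sigmoid output, matching the
# layer order in the summary below
model = Sequential([
    Embedding(num_words, 128),
    LSTM(64, return_sequences=True),
    Dropout(0.5),
    LSTM(32),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
model.summary()

model.fit(x_train, y_train, epochs=5, batch_size=64, validation_split=0.2)
loss, accuracy = model.evaluate(x_test, y_test)
print(f"Test Accuracy: {accuracy:.2f}")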
Output:
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
17464789/17464789 ━━━━━━━━━━━━━━━━━━━━ 0s 0us/step
Model: "sequential_1"
Layer (type)              Output Shape      Param #
---------------------------------------------------
embedding_1 (Embedding)   ?                 0 (unbuilt)
lstm_2 (LSTM)             ?                 0 (unbuilt)
dropout (Dropout)         ?                 0 (unbuilt)
lstm_3 (LSTM)             ?                 0 (unbuilt)
dense_1 (Dense)           ?                 0 (unbuilt)
---------------------------------------------------
Total params: 0 (0.00 B)
Trainable params: 0 (0.00 B)
Non-trainable params: 0 (0.00 B)
Epoch 1/5
313/313 ━━━━━━━━━━━━━━━━━━━━ 251s 787ms/step - accuracy: 0.7127 - loss: 0.5322 - val_accuracy: 0.8372 - val_loss: 0.3869
Epoch 2/5
313/313 ━━━━━━━━━━━━━━━━━━━━ 259s 779ms/step - accuracy: 0.8893 - loss: 0.2925 - val_accuracy: 0.8708 - val_loss: 0.3101
Epoch 3/5
313/313 ━━━━━━━━━━━━━━━━━━━━ 247s 787ms/step - accuracy: 0.9291 - loss: 0.1946 - val_accuracy: 0.8626 - val_loss: 0.3258
Epoch 4/5
313/313 ━━━━━━━━━━━━━━━━━━━━ 259s 779ms/step - accuracy: 0.9516 - loss: 0.1350 - val_accuracy: 0.8668 - val_loss: 0.3931
Epoch 5/5
313/313 ━━━━━━━━━━━━━━━━━━━━ 264s 784ms/step - accuracy: 0.9665 - loss: 0.0982 - val_accuracy: 0.8520 - val_loss: 0.3982
782/782 ━━━━━━━━━━━━━━━━━━━━ 115s 147ms/step - accuracy: 0.8460 - loss: 0.4178
Test Accuracy: 0.85
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb_word_index.json
1641221/1641221 ━━━━━━━━━━━━━━━━━━━━ 0s 0us/step
Result:
Thus the program was executed successfully.
Ex. No: 6 PARTS OF SPEECH TAGGING USING SEQUENCE TO SEQUENCE ARCHITECTURE Date:
Aim:
To write a python program to implement the parts of speech tagging using
Sequence to Sequence architecture.
Algorithm:
1. Import the NLTK library and download the necessary data (tokenizers and the POS tagger).
2. Define a sample text on which to perform POS tagging.
3. Tokenize the text into words using nltk.word_tokenize.
4. Use nltk.pos_tag to tag the words; these tags serve as the target sequences.
5. Build and train an encoder-decoder (sequence-to-sequence) model that maps word sequences to tag sequences.
6. Loop through the tagged words and print each word along with its corresponding POS tag; the tags are labels such as 'NN' (noun), 'VB' (verb), and 'JJ' (adjective), depending on the part of speech.
7. Stop the program.
Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense, Embedding
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
import nltk
nltk.download('averaged_perceptron_tagger')
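# NOTE: the source omits the training corpus and preprocessing; the values
# below are illustrative placeholders so the listing is self-contained.
nltk.download('punkt')
sentences = ["the quick brown fox jumps over the lazy dog",
             "she sells sea shells by the sea shore"]
target_tags = [[tag for _, tag in nltk.pos_tag(nltk.word_tokenize(s))]
               for s in sentences]
word_tokenizer = Tokenizer()
word_tokenizer.fit_on_texts(sentences)
input_sequences = word_tokenizer.texts_to_sequences(sentences)
num_words = len(word_tokenizer.word_index) + 1
max_sequence_length = max(len(seq) for seq in input_sequences)
input_sequences = pad_sequences(input_sequences, maxlen=max_sequence_length,
                                padding='post')
embedding_dim = 256
hidden_dim = 256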
# Encode
tags = set(tag for tags in target_tags for tag in tags)
tag_to_index = {tag: i for i, tag in enumerate(tags)}
index_to_tag = {i: tag for tag, i in tag_to_index.items()}
num_tags = len(tags)
# Define encoder
encoder_inputs = Input(shape=(max_sequence_length,))
embedding_layer = Embedding(input_dim=num_words, output_dim=embedding_dim,
input_length=max_sequence_length)
encoder_embeddings = embedding_layer(encoder_inputs)
encoder_lstm = LSTM(hidden_dim, return_state=True)
_, encoder_state_h, encoder_state_c = encoder_lstm(encoder_embeddings)
encoder_states = [encoder_state_h, encoder_state_c]
# Define decoder
decoder_inputs = Input(shape=(max_sequence_length,))
decoder_embeddings = embedding_layer(decoder_inputs)
decoder_lstm = LSTM(hidden_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_embeddings, initial_state=encoder_states)
decoder_dense = Dense(num_tags, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
# Build the model
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.summary()
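# NOTE: the source then restates the architecture as a second,
# self-contained encoder-decoder (fixed vocabulary of 5,000 and
# 256-dimensional layers); this second variant is the one whose summary
# appears in the Output below.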
# Encoder
encoder_inputs = Input(shape=(None,))
encoder_embedding = Embedding(input_dim=5000, output_dim=256)(encoder_inputs)
encoder_lstm = LSTM(256, return_state=True)
_, state_h, state_c = encoder_lstm(encoder_embedding)
encoder_states = [state_h, state_c]
# Decoder
decoder_inputs = Input(shape=(None,))
decoder_embedding = Embedding(input_dim=5000, output_dim=256)(decoder_inputs)
decoder_lstm = LSTM(256, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_embedding, initial_state=encoder_states)
decoder_dense = Dense(5000, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
# Model
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
# Summary
model.summary()
Output:
Model: "functional"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━
━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃ Connected to ┃
┡ ━━━━━━━━━━━━━━━━━━━━━━━━━━━╇ ━━━━━━━━━━━━━━━━━━━
━━━━━╇ ━━━━━━━━━━━━━━━━╇ ━━━━━━━━━━━━━━━━━━━━━━━━┩
│ input_layer (InputLayer) │ (None, None) │ 0│- │
├───────────────────────────┼────────────────────────
┼────────────────┼────────────────────────┤
│ input_layer_1 │ (None, None) │ 0│- │
│ (InputLayer) │ │ │ │
├───────────────────────────┼────────────────────────
┼────────────────┼────────────────────────┤
│ embedding (Embedding) │ (None, None, 256) │ 1,280,000 │ input_layer[0][0]
│
├───────────────────────────┼────────────────────────
┼────────────────┼────────────────────────┤
│ embedding_1 (Embedding) │ (None, None, 256) │ 1,280,000 │
input_layer_1[0][0] │
├───────────────────────────┼────────────────────────
┼────────────────┼────────────────────────┤
│ lstm (LSTM) │ [(None, 256), (None, │ 525,312 │ embedding[0][0] │
│ │ 256), (None, 256)] │ │ │
├───────────────────────────┼────────────────────────
┼────────────────┼────────────────────────┤
│ lstm_1 (LSTM) │ [(None, None, 256), │ 525,312 │ embedding_1[0][0],
│
│ │ (None, 256), (None, │ │ lstm[0][1], lstm[0][2] │
│ │ 256)] │ │ │
├───────────────────────────┼────────────────────────
┼────────────────┼────────────────────────┤
│ dense (Dense) │ (None, None, 5000) │ 1,285,000 │ lstm_1[0][0] │
└───────────────────────────┴────────────────────────
┴────────────────┴────────────────────────┘
Total params: 4,895,624 (18.68 MB)
Trainable params: 4,895,624 (18.68 MB)
Non-trainable params: 0 (0.00 B)
Result:
Thus the program was executed successfully.
Ex.No:8 Image Augmentation using GANs Date:
Aim:
To write a python program for the implementation of image augmentation using GANs.
Algorithm:
1. Load the MNIST images and scale them for GAN training.
2. Define the generator and discriminator networks.
3. Train the GAN, alternating between discriminator and generator updates.
4. Use the trained generator to produce new (augmented) images and display or save them.
Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Reshape, Flatten, BatchNormalization
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, LeakyReLU
from tensorflow.keras.models import Sequential
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt
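# NOTE: the generator/discriminator definitions and the training loop are
# missing from the source; the sketch below is a minimal DCGAN-style
# reconstruction (latent size, layer widths, and epoch/batch counts are
# assumptions, and only one update per epoch is shown for brevity).
latent_dim = 100

def build_generator():
    return Sequential([
        Dense(7 * 7 * 128, input_dim=latent_dim),
        LeakyReLU(0.2),
        Reshape((7, 7, 128)),
        Conv2DTranspose(128, (4, 4), strides=2, padding='same'),
        BatchNormalization(),
        LeakyReLU(0.2),
        Conv2DTranspose(64, (4, 4), strides=2, padding='same'),
        BatchNormalization(),
        LeakyReLU(0.2),
        Conv2D(1, (7, 7), padding='same', activation='tanh'),
    ])

def build_discriminator():
    model = Sequential([
        Conv2D(64, (3, 3), strides=2, padding='same', input_shape=(28, 28, 1)),
        LeakyReLU(0.2),
        Conv2D(128, (3, 3), strides=2, padding='same'),
        LeakyReLU(0.2),
        Flatten(),
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model

# Scale MNIST to [-1, 1] to match the generator's tanh output
(X_train, _), _ = mnist.load_data()
X_train = (X_train.astype('float32') - 127.5) / 127.5
X_train = np.expand_dims(X_train, -1)

discriminator = build_discriminator()
generator = build_generator()
discriminator.trainable = False  # freeze D when training G through the stack
gan = Sequential([generator, discriminator])
gan.compile(optimizer='adam', loss='binary_crossentropy')

batch_size, epochs, n_images = 128, 10, 25
for epoch in range(epochs):
    # Discriminator step: one real batch, one generated batch
    idx = np.random.randint(0, X_train.shape[0], batch_size)
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    fake_images = generator.predict(noise, verbose=0)
    discriminator.train_on_batch(X_train[idx], np.ones((batch_size, 1)))
    discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
    # Generator step: try to make D label fakes as real
    gan.train_on_batch(noise, np.ones((batch_size, 1)))

# Generate a grid of augmented digit images for display
noise = np.random.normal(0, 1, (n_images, latent_dim))
generated_images = generator.predict(noise, verbose=0)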
plt.figure(figsize=(5, 5))
for i in range(n_images):
    plt.subplot(5, 5, i + 1)
    plt.imshow(generated_images[i, :, :, 0], cmap='gray')
    plt.axis('off')
plt.tight_layout()
plt.savefig(f"gan_generated_epoch_{epoch}.png")
plt.close()
Output:
Result:
Thus the program was executed successfully.