UNITED COLLEGE OF EDUCATION,
GREATER NOIDA
Deep Learning with Python LAB FILE
(BCA 354)
SESSION 2023-24
DEPARTMENT OF COMPUTER APPLICATION
Submitted To: Ms. Priyanshi Submitted By:
(H.O.D) Roll no. :
BCA 6th SEM
United College of Education, Greater Noida
BCA VI Sem (2023– 24 Session)
BCA 354 Introduction to Deep Learning with Python Lab
List of Experiments
S. No. List of Programs Date of Execution Faculty Signature
1. Write a program for creating a perceptron.
2. Write a program to implement a multi-layer perceptron using TensorFlow. Apply the multi-layer perceptron (MLP) on the Iris dataset.
3. a. Write a program to implement a convolutional neural network (CNN) in Keras. Perform predictions using the trained CNN.
   b. Write a program to build an image classifier with the CIFAR-10 data.
4. a. Write a program to perform face detection using CNN.
   b. Write a program to demonstrate hyperparameter tuning in CNN.
   c. Predicting bike-sharing patterns – build and train a neural network from scratch to predict the number of bike-share users on a given day.
5. Write a program to build an autoencoder in Keras.
6. Write a program to implement a basic reinforcement learning algorithm to teach a bot to reach its destination.
7. a. Write a program to implement a recurrent neural network.
   b. Write a program to implement LSTM and perform time series analysis using LSTM.
8. a. Write a program to perform object detection using deep learning.
   b. Dog breed classifier - design and train a convolutional neural network to analyze images of dogs and correctly identify their breeds. Use transfer learning and well-known architectures to improve the model.
9. a. Write a program to demonstrate different activation functions.
   b. Write a program in TensorFlow to demonstrate different loss functions.
10. Write a program to build an artificial neural network by implementing the backpropagation algorithm and test the same using appropriate data sets.
Practical- 1
Write a program for creating a perceptron.
CODE:-
import torch
import torch.nn as nn
import torch.optim as optim

# Define the Perceptron model
class Perceptron(nn.Module):
    def __init__(self, input_size):
        super(Perceptron, self).__init__()
        self.linear = nn.Linear(input_size, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.linear(x)
        x = self.sigmoid(x)
        return x

# Example data (logical AND)
X_train = torch.tensor([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=torch.float32)
y_train = torch.tensor([[0], [0], [0], [1]], dtype=torch.float32)

# Initialize the Perceptron model
input_size = 2
perceptron = Perceptron(input_size)

# Define loss function and optimizer
criterion = nn.BCELoss()
optimizer = optim.SGD(perceptron.parameters(), lr=0.1)

# Training loop
epochs = 1000
for epoch in range(epochs):
    # Forward pass
    outputs = perceptron(X_train)
    # Calculate loss
    loss = criterion(outputs, y_train)
    # Backward pass and optimization
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if (epoch + 1) % 100 == 0:
        print(f'Epoch [{epoch+1}/{epochs}], Loss: {loss.item():.4f}')

# Test the model
with torch.no_grad():
    outputs = perceptron(X_train)
    predicted = (outputs > 0.5).float()
    print("Predictions:", predicted.squeeze())
Output:
Epoch [100/1000], Loss: 0.5207
Epoch [200/1000], Loss: 0.5048
Epoch [300/1000], Loss: 0.4953
Epoch [400/1000], Loss: 0.4884
Epoch [500/1000], Loss: 0.4831
Epoch [600/1000], Loss: 0.4788
Epoch [700/1000], Loss: 0.4753
Epoch [800/1000], Loss: 0.4723
Epoch [900/1000], Loss: 0.4697
Epoch [1000/1000], Loss: 0.4674
Predictions: tensor([0., 0., 0., 1.])
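Once trained, the same model can classify a new input; a minimal usage sketch that reuses the perceptron variable from the program above:
import torch

# Classify a new, unseen input using the trained perceptron
new_x = torch.tensor([[1.0, 1.0]], dtype=torch.float32)
with torch.no_grad():
    prob = perceptron(new_x)          # sigmoid output in [0, 1]
    label = (prob > 0.5).float()      # threshold at 0.5
print(f"P(class 1) = {prob.item():.3f}, predicted label = {int(label.item())}")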
Practical- 2
Write a program to implement a multi-layer perceptron using TensorFlow. Apply the
multi-layer perceptron (MLP) on the Iris dataset.
CODE:-
import tensorflow as tf
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load the Iris dataset
iris = load_iris()
X, y = iris.data, iris.target

# Split the dataset into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Define the MLP model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(X_train.shape[1],)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')  # Output layer with 3 units for 3 classes
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(X_train_scaled, y_train, epochs=100, batch_size=32, verbose=2)

# Evaluate the model
loss, accuracy = model.evaluate(X_test_scaled, y_test, verbose=0)
print(f'Test Accuracy: {accuracy}')

# Make predictions
predictions = model.predict(X_test_scaled)
Output:
Epoch 1/100
4/4 - 0s - loss: 1.1635 - accuracy: 0.3500
Epoch 2/100
4/4 - 0s - loss: 1.0796 - accuracy: 0.3500
...
Epoch 100/100
4/4 - 0s - loss: 0.2013 - accuracy: 0.9667
Test Accuracy: 1.0
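The predictions array holds one softmax probability vector per test sample; a short follow-up snippet (reusing predictions and y_test from the program above) converts them to class labels:
import numpy as np

# Convert softmax probabilities to predicted class indices
predicted_classes = np.argmax(predictions, axis=1)
print("Predicted:", predicted_classes[:10])
print("Actual:   ", y_test[:10])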
Practical- 3
a. Write a program to implement a convolutional neural network (CNN) in Keras.
Perform predictions using the trained convolutional neural network (CNN).
CODE:-
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.utils import to_categorical

# Load and preprocess the MNIST dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32') / 255
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32') / 255
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

# Define the CNN model
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(X_train, y_train, epochs=5, batch_size=64, verbose=1)

# Evaluate the model
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print(f'Test Accuracy: {accuracy}')

# Make predictions
predictions = model.predict(X_test)
Output:
The output shows the training progress and the test accuracy; the predictions are computed
but not printed directly. The output might look like this:
Epoch 1/5
938/938 [==============================] - 30s 32ms/step - loss: 0.1701 - accuracy:
0.9498
Epoch 2/5
938/938 [==============================] - 29s 31ms/step - loss: 0.0517 - accuracy:
0.9839
Epoch 3/5
938/938 [==============================] - 29s 31ms/step - loss: 0.0351 - accuracy:
0.9890
Epoch 4/5
938/938 [==============================] - 29s 31ms/step - loss: 0.0263 - accuracy:
0.9918
Epoch 5/5
938/938 [==============================] - 29s 31ms/step - loss: 0.0200 - accuracy:
0.9939
Test Accuracy: 0.9901000261306763
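To connect a prediction back to an image, a small hedged snippet (reusing X_test, y_test, and predictions from the program above; matplotlib is assumed to be installed) displays one test digit with its predicted and true labels:
import numpy as np
import matplotlib.pyplot as plt

idx = 0  # index of the test image to inspect
plt.imshow(X_test[idx].reshape(28, 28), cmap='gray')
plt.title(f"Predicted: {np.argmax(predictions[idx])}, Actual: {np.argmax(y_test[idx])}")
plt.axis('off')
plt.show()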
b. Write a program to build an image classifier with the CIFAR-10 data.
CODE:-
import tensorflow as tf

# Display the version
print(tf.__version__)

# Other imports
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Dropout
from tensorflow.keras.layers import GlobalMaxPooling2D, MaxPooling2D
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.models import Model

# Load in the data
cifar10 = tf.keras.datasets.cifar10

# Distribute it to train and test set
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)

# Reduce pixel values
x_train, x_test = x_train / 255.0, x_test / 255.0

# Flatten the label values
y_train, y_test = y_train.flatten(), y_test.flatten()

# Visualize data by plotting images
fig, ax = plt.subplots(5, 5)
k = 0
for i in range(5):
    for j in range(5):
        ax[i][j].imshow(x_train[k], aspect='auto')
        k += 1
plt.show()
Output:
# number of classes
K = len(set(y_train))
# calculate total number of classes
# for output layer
print("number of classes:", K)
# Build the model using the functional API
# input layer
i = Input(shape=x_train[0].shape)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(i)
x = BatchNormalization()(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)
x = Dropout(0.2)(x)
# Hidden layer
x = Dense(1024, activation='relu')(x)
x = Dropout(0.2)(x)
# last hidden layer, i.e. the output layer
x = Dense(K, activation='softmax')(x)
model = Model(i, x)
# model description
model.summary()
Output:
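The program above only builds and summarizes the network; a minimal hedged continuation (reusing model, x_train, y_train, x_test, y_test from above; the epoch count and batch size are illustrative assumptions) compiles, trains, and evaluates it:
# Compile: sparse categorical crossentropy since the labels are integer class ids
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train and validate (epochs/batch_size are illustrative values)
r = model.fit(x_train, y_train, validation_data=(x_test, y_test),
              epochs=10, batch_size=64)

# Evaluate on the test set
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print(f'Test Accuracy: {accuracy:.4f}')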
Practical- 4
a. Write a program to perform face detection using CNN.
CODE:-
import cv2
import numpy as np

# Load the pre-trained face-detection CNN (Caffe SSD); the file names assume the
# standard OpenCV face-detector files have been downloaded beforehand
net = cv2.dnn.readNetFromCaffe('deploy.prototxt', 'res10_300x300_ssd.caffemodel')

# Load the input image (path to your input image)
image = cv2.imread('input.jpg')
(h, w) = image.shape[:2]

# Preprocess the image into the 300x300 blob the network expects
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0,
                             (300, 300), (104.0, 177.0, 123.0))

# Perform face detection
net.setInput(blob)
detections = net.forward()

# Draw bounding boxes around the detected faces
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (x1, y1, x2, y2) = box.astype('int')
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)

# Display the output image
cv2.imshow('Output', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
b. Write a program to demonstrate hyperparameter tuning in CNN.
CODE:-
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.model_selection import GridSearchCV
# KerasClassifier wrapper (in newer TensorFlow versions, use scikeras.wrappers instead)
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier

# Define the CNN model function
def create_model(learning_rate=0.001, num_filters=32, kernel_size=3, dropout_rate=0.2):
    model = keras.Sequential([
        layers.Conv2D(num_filters, kernel_size, activation='relu', input_shape=(28, 28, 1)),
        layers.MaxPooling2D(2),
        layers.Dropout(dropout_rate),
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dense(10, activation='softmax')
    ])
    optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
    model.compile(optimizer=optimizer, loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Load dataset (for demonstration purposes, let's use a subset of MNIST)
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[:10000]
y_train = y_train[:10000]

# Preprocess the data
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0

# Define hyperparameters for tuning (reduced set for quicker demo)
param_grid = {
    'learning_rate': [0.001, 0.01],
    'num_filters': [16, 32],
    'kernel_size': [3, 5],
    'dropout_rate': [0.1, 0.2]
}

# Create Keras classifier
model = KerasClassifier(build_fn=create_model, epochs=3, batch_size=32, verbose=0)

# Perform grid search for hyperparameters
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=2)
grid_result = grid.fit(x_train, y_train)

# Summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))
Output:
0.894000 (0.004000) with: {'dropout_rate': 0.1, 'kernel_size': 3, 'learning_rate': 0.001,
'num_filters': 16}
0.900500 (0.003500) with: {'dropout_rate': 0.1, 'kernel_size': 3, 'learning_rate': 0.001,
'num_filters': 32}
...
c. Predicting bike-sharing patterns – build and train a neural network from
scratch to predict the number of bike-share users on a given day.
CODE:-
import numpy as np

class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights
        self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
                                                         (self.input_nodes, self.hidden_nodes))
        self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
                                                          (self.hidden_nodes, self.output_nodes))
        self.lr = learning_rate

        # Sigmoid activation function for the hidden layer
        def sigmoid(x):
            return 1 / (1 + np.exp(-x))
        self.activation_function = sigmoid

    def train(self, features, targets):
        ''' Train the network on a batch of features and targets.

            Arguments
            ---------
            features: 2D array, each row is one data record, each column is a feature
            targets: 1D array of target values
        '''
        n_records = features.shape[0]
        delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
        delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
        for X, y in zip(features, targets):
            # Forward pass
            final_outputs, hidden_outputs = self.forward_pass_train(X)
            # Backpropagation
            delta_weights_i_h, delta_weights_h_o = self.backpropagation(final_outputs,
                                                                         hidden_outputs, X, y,
                                                                         delta_weights_i_h,
                                                                         delta_weights_h_o)
        self.update_weights(delta_weights_i_h, delta_weights_h_o, n_records)

    def forward_pass_train(self, X):
        ''' Implement the forward pass.

            Arguments
            ---------
            X: features batch
        '''
        # Hidden layer
        hidden_inputs = np.dot(X, self.weights_input_to_hidden)    # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)   # signals from hidden layer
        # Output layer (regression, so the output activation is the identity)
        final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)  # signals into final output layer
        final_outputs = final_inputs                                           # signals from final output layer
        return final_outputs, hidden_outputs

    def backpropagation(self, final_outputs, hidden_outputs, X, y, delta_weights_i_h,
                        delta_weights_h_o):
        ''' Implement backpropagation.

            Arguments
            ---------
            final_outputs: output from forward pass
            y: target (i.e. label) batch
            delta_weights_i_h: change in weights from input to hidden layers
            delta_weights_h_o: change in weights from hidden to output layers
        '''
        # Output layer error is the difference between desired target and actual output.
        error = y - final_outputs
        # The hidden layer's contribution to the error
        hidden_error = np.dot(self.weights_hidden_to_output, error)
        # Backpropagated error terms
        output_error_term = error
        hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
        # Weight step (input to hidden)
        delta_weights_i_h += hidden_error_term * X[:, None]
        # Weight step (hidden to output)
        delta_weights_h_o += output_error_term * hidden_outputs[:, None]
        return delta_weights_i_h, delta_weights_h_o

    def update_weights(self, delta_weights_i_h, delta_weights_h_o, n_records):
        ''' Update weights on gradient descent step.

            Arguments
            ---------
            delta_weights_i_h: change in weights from input to hidden layers
            delta_weights_h_o: change in weights from hidden to output layers
            n_records: number of records
        '''
        # Update hidden-to-output weights with gradient descent step
        self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records
        # Update input-to-hidden weights with gradient descent step
        self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records

    def run(self, features):
        ''' Run a forward pass through the network with input features.

            Arguments
            ---------
            features: 1D array of feature values
        '''
        # Hidden layer
        hidden_inputs = np.dot(features, self.weights_input_to_hidden)   # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)         # signals from hidden layer
        # Output layer
        final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)  # signals into final output layer
        final_outputs = final_inputs                                           # signals from final output layer
        return final_outputs

#########################################################
# Set your hyperparameters here
#########################################################
iterations = 5000
learning_rate = 0.56
hidden_nodes = 32
output_nodes = 1
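The class and hyperparameters above come from the bike-sharing project scaffold; the dataset loading and training loop are not shown. A minimal hedged sketch of how the network could be trained follows: the synthetic features below are a stand-in for the preprocessed bike-sharing data, and input_nodes is assumed to equal the feature count.
# Stand-in data: 200 records with 10 features and one continuous target
rng = np.random.default_rng(42)
features = rng.random((200, 10))
targets = rng.random(200)

input_nodes = features.shape[1]
network = NeuralNetwork(input_nodes, hidden_nodes, output_nodes, learning_rate)

# Mini-batch training loop
for it in range(iterations):
    batch = rng.choice(features.shape[0], size=32, replace=False)
    network.train(features[batch], targets[batch])
    if (it + 1) % 1000 == 0:
        mse = np.mean((network.run(features).flatten() - targets) ** 2)
        print(f"Iteration {it+1}, training MSE: {mse:.4f}")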
Practical- 5
Write a program to build an autoencoder in Keras.
CODE:-
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Generate some random data for demonstration
data = np.random.rand(1000, 50)

# Define the dimensions of the input and encoding layers
input_dim = data.shape[1]
encoding_dim = 10  # Choose an arbitrary size for the encoding layer

# Define the input layer
input_layer = Input(shape=(input_dim,))

# Define the encoding layer
encoded = Dense(encoding_dim, activation='relu')(input_layer)

# Define the decoding layer
decoded = Dense(input_dim, activation='sigmoid')(encoded)

# Create the autoencoder model
autoencoder = Model(input_layer, decoded)

# Compile the model
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# Train the autoencoder
autoencoder.fit(data, data, epochs=50, batch_size=32, shuffle=True)

# Once trained, use the encoder part to get the encoded representation of the input data
encoder = Model(input_layer, encoded)
encoded_data = encoder.predict(data)

# Use the decoder part to reconstruct the input data from the encoded representation
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(encoded_input, decoder_layer(encoded_input))
reconstructed_data = decoder.predict(encoded_data)
Output:
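A quick sanity check on the result (reusing data and reconstructed_data from the program above) is to measure the average reconstruction error:
import numpy as np

# Mean squared reconstruction error over the whole dataset
reconstruction_mse = np.mean((data - reconstructed_data) ** 2)
print(f"Mean squared reconstruction error: {reconstruction_mse:.4f}")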
Practical - 6
Write a program to implement a basic reinforcement learning algorithm to teach
a bot to reach its destination.
CODE:-
import numpy as np

# Define the grid world
GRID_SIZE = 5
START_STATE = (0, 0)
END_STATE = (4, 4)

# Define actions
ACTIONS = ['UP', 'DOWN', 'LEFT', 'RIGHT']

# Define rewards
REWARDS = {
    (4, 4): 100,  # Reward for reaching the destination
    (1, 1): -10,  # Penalty for entering a specific state
    (2, 2): -5    # Penalty for entering a specific state
}

# Initialize Q-values
Q_values = np.zeros((GRID_SIZE, GRID_SIZE, len(ACTIONS)))

# Define parameters
LEARNING_RATE = 0.1
DISCOUNT_FACTOR = 0.9
EPISODES = 1000
EPSILON = 0.1

# Function to choose action using epsilon-greedy policy
def choose_action(state):
    if np.random.uniform(0, 1) < EPSILON:
        return np.random.choice(ACTIONS)
    else:
        return ACTIONS[np.argmax(Q_values[state[0]][state[1]])]

# Function to update Q-values using Q-learning
def update_Q_values(state, action, reward, next_state):
    max_next_reward = np.max(Q_values[next_state[0]][next_state[1]])
    Q_values[state[0]][state[1]][ACTIONS.index(action)] += \
        LEARNING_RATE * (reward + DISCOUNT_FACTOR * max_next_reward -
                         Q_values[state[0]][state[1]][ACTIONS.index(action)])

# Function to perform one episode of training
def run_episode():
    state = START_STATE
    while state != END_STATE:
        action = choose_action(state)
        next_state = state
        if action == 'UP':
            next_state = (max(state[0] - 1, 0), state[1])
        elif action == 'DOWN':
            next_state = (min(state[0] + 1, GRID_SIZE - 1), state[1])
        elif action == 'LEFT':
            next_state = (state[0], max(state[1] - 1, 0))
        elif action == 'RIGHT':
            next_state = (state[0], min(state[1] + 1, GRID_SIZE - 1))
        reward = 0
        if next_state in REWARDS:
            reward = REWARDS[next_state]
        update_Q_values(state, action, reward, next_state)
        state = next_state

# Train the agent
for _ in range(EPISODES):
    run_episode()

# Function to get the optimal path
def get_optimal_path():
    path = []
    state = START_STATE
    while state != END_STATE:
        action = ACTIONS[np.argmax(Q_values[state[0]][state[1]])]
        path.append((state, action))
        if action == 'UP':
            state = (max(state[0] - 1, 0), state[1])
        elif action == 'DOWN':
            state = (min(state[0] + 1, GRID_SIZE - 1), state[1])
        elif action == 'LEFT':
            state = (state[0], max(state[1] - 1, 0))
        elif action == 'RIGHT':
            state = (state[0], min(state[1] + 1, GRID_SIZE - 1))
    path.append((state, 'GOAL'))
    return path

# Print the optimal path
optimal_path = get_optimal_path()
for step in optimal_path:
    print(step)
Practical – 7
a. Write a program to implement a recurrent neural network.
CODE:-
import numpy as np

# Define the sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Define the RNN class
class RNN:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        # Weight matrices: input-to-hidden, hidden-to-hidden, hidden-to-output
        self.Wxh = np.random.randn(hidden_size, input_size)
        self.Whh = np.random.randn(hidden_size, hidden_size)
        self.Why = np.random.randn(output_size, hidden_size)
        # Biases for the hidden and output layers
        self.bh = np.zeros((hidden_size, 1))
        self.by = np.zeros((output_size, 1))

    def forward(self, inputs):
        h = np.zeros((self.hidden_size, 1))
        # Process the sequence one time step at a time
        for x in inputs:
            h = np.tanh(np.dot(self.Wxh, x) + np.dot(self.Whh, h) + self.bh)
        output = np.dot(self.Why, h) + self.by
        return output

# Example usage
input_size = 3
hidden_size = 4
output_size = 2
rnn = RNN(input_size, hidden_size, output_size)
inputs = [np.random.randn(input_size, 1) for _ in range(5)]
output = rnn.forward(inputs)
print(output)
print(output)
b. Write a program to implement LSTM and perform time series analysis
using LSTM.
CODE:-
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Generate some random data for demonstration
data = np.random.rand(1000, 1)

# Prepare the data for LSTM
def prepare_data(data, n_steps):
    X, y = [], []
    for i in range(len(data)):
        end_ix = i + n_steps
        if end_ix > len(data) - 1:
            break
        X.append(data[i:end_ix, 0])
        y.append(data[end_ix, 0])
    return np.array(X), np.array(y)

n_steps = 3
X, y = prepare_data(data, n_steps)

# Reshape data for LSTM [samples, timesteps, features]
X = X.reshape(X.shape[0], X.shape[1], 1)

# Define the LSTM model
model = Sequential()
model.add(LSTM(50, activation='relu', input_shape=(n_steps, 1)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

# Fit the model
model.fit(X, y, epochs=200, verbose=0)

# Make predictions
predictions = model.predict(X, verbose=0)
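Since the practical asks for time series analysis, a short hedged follow-up (reusing y and predictions from the program above) compares predicted and actual values numerically:
import numpy as np

# Report the in-sample mean squared error and a few example predictions
mse = np.mean((predictions.flatten() - y) ** 2)
print(f"In-sample MSE: {mse:.4f}")
for actual, pred in zip(y[:5], predictions[:5]):
    print(f"actual={actual:.3f}  predicted={pred[0]:.3f}")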
Practical- 8
a) Write a program to perform object detection using deep learning.
CODE:-
import numpy as np

# Define the environment
class Environment:
    def __init__(self, size):
        self.size = size
        self.bot_position = 0

    def step(self, action):
        # Move the bot based on the action
        if action == 0:    # Move left
            self.bot_position = max(0, self.bot_position - 1)
        elif action == 1:  # Move right
            self.bot_position = min(self.size - 1, self.bot_position + 1)
        # Calculate reward
        if self.bot_position == self.size - 1:
            reward = 1     # Destination reached
            done = True
        else:
            reward = 0
            done = False
        return self.bot_position, reward, done

# Define the Recurrent Neural Network
class RNN:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        # Initialize weights
        self.Wxh = np.random.randn(hidden_size, input_size) * 0.01
        self.Whh = np.random.randn(hidden_size, hidden_size) * 0.01
        self.Why = np.random.randn(output_size, hidden_size) * 0.01
        self.bh = np.zeros((hidden_size, 1))
        self.by = np.zeros((output_size, 1))

    def forward(self, x):
        # Initialize hidden state
        h_prev = np.zeros((self.hidden_size, 1))
        # Forward pass
        h = np.tanh(np.dot(self.Wxh, x) + np.dot(self.Whh, h_prev) + self.bh)
        y = np.dot(self.Why, h) + self.by
        return h, y

# Define the main function
def main():
    # Define the environment and bot
    env = Environment(size=5)
    rnn = RNN(input_size=1, hidden_size=10, output_size=2)
    # Training loop
    num_episodes = 1000
    for episode in range(num_episodes):
        state = np.array([[env.bot_position]])
        scores = rnn.forward(state)[1]
        # Softmax turns the raw output scores into action probabilities
        action_probs = np.exp(scores) / np.sum(np.exp(scores))
        action = np.random.choice(range(rnn.output_size), p=action_probs.ravel())
        next_state, reward, done = env.step(action)
        # Update the RNN (not implemented here)
        if done:
            print(f"Episode {episode+1}: Destination reached!")
            break

if __name__ == "__main__":
    main()
Output:
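The program above actually demonstrates a simple RNN-driven agent rather than object detection. A minimal hedged sketch of deep-learning object detection with OpenCV's DNN module follows; the MobileNet-SSD model/config file names, the input image path, and the class list are assumptions and must be obtained separately:
import cv2
import numpy as np

# Assumed files: a MobileNet-SSD Caffe model and its prototxt (downloaded separately)
net = cv2.dnn.readNetFromCaffe('MobileNetSSD_deploy.prototxt', 'MobileNetSSD_deploy.caffemodel')
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus",
           "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike",
           "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]

image = cv2.imread('input.jpg')  # path to an input image (assumed)
(h, w) = image.shape[:2]

# Preprocess into the 300x300 blob the network expects
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()

# Each detection row: [image_id, class_id, confidence, x1, y1, x2, y2] (box in normalized coords)
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:
        class_id = int(detections[0, 0, i, 1])
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (x1, y1, x2, y2) = box.astype('int')
        label = f"{CLASSES[class_id]}: {confidence:.2f}"
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(image, label, (x1, y1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

cv2.imshow('Detections', image)
cv2.waitKey(0)
cv2.destroyAllWindows()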
b) Dog breed classifier - design and train a convolutional neural network to
analyze images of dogs and correctly identify their breeds. Use transfer
learning and well-known architectures to improve this model.
CODE:-
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Generate sample time series data
def generate_time_series_data(num_points):
    x = np.linspace(0, 20, num_points)
    y = np.sin(x)
    return y

# Prepare data for LSTM
def prepare_data_for_lstm(data, time_steps):
    X, y = [], []
    for i in range(len(data) - time_steps):
        X.append(data[i:i+time_steps])
        y.append(data[i+time_steps])
    return np.array(X), np.array(y)

# Define LSTM model
def build_lstm_model(time_steps):
    model = Sequential()
    model.add(LSTM(units=50, input_shape=(time_steps, 1)))
    model.add(Dense(units=1))
    model.compile(optimizer='adam', loss='mse')
    return model

# Train LSTM model
def train_lstm_model(model, X_train, y_train, epochs):
    history = model.fit(X_train, y_train, epochs=epochs, verbose=1)
    return history

# Plot loss history
def plot_loss_history(history):
    plt.plot(history.history['loss'], label='Training Loss')
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.title('Training Loss History')
    plt.legend()
    plt.show()

# Main function
def main():
    # Generate time series data
    num_points = 1000
    time_steps = 10
    data = generate_time_series_data(num_points)
    # Prepare data for LSTM
    X, y = prepare_data_for_lstm(data, time_steps)
    # Split data into training and testing sets
    split_ratio = 0.8
    split_index = int(split_ratio * len(X))
    X_train, X_test = X[:split_index], X[split_index:]
    y_train, y_test = y[:split_index], y[split_index:]
    # Reshape input data for LSTM
    X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
    X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
    # Build LSTM model
    lstm_model = build_lstm_model(time_steps)
    # Train LSTM model
    epochs = 100
    history = train_lstm_model(lstm_model, X_train, y_train, epochs)
    # Plot loss history
    plot_loss_history(history)
    # Evaluate LSTM model
    loss = lstm_model.evaluate(X_test, y_test)
    print(f'Test Loss: {loss}')

if __name__ == "__main__":
    main()
Output:
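The listing above is an LSTM time-series example rather than the dog breed classifier the practical describes. A minimal hedged sketch of the intended transfer-learning approach follows; the directory name 'dog_breeds/' (one sub-folder per breed), the image size, and the epoch count are assumptions:
import tensorflow as tf

IMG_SIZE = (224, 224)
BATCH = 32

# Assumed layout: dog_breeds/<breed_name>/*.jpg, one sub-folder per breed
train_ds = tf.keras.utils.image_dataset_from_directory(
    'dog_breeds/', validation_split=0.2, subset='training', seed=42,
    image_size=IMG_SIZE, batch_size=BATCH)
val_ds = tf.keras.utils.image_dataset_from_directory(
    'dog_breeds/', validation_split=0.2, subset='validation', seed=42,
    image_size=IMG_SIZE, batch_size=BATCH)
num_breeds = len(train_ds.class_names)

# Well-known architecture (ResNet50) pre-trained on ImageNet, used as a frozen feature extractor
base = tf.keras.applications.ResNet50(weights='imagenet', include_top=False,
                                      input_shape=IMG_SIZE + (3,))
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(num_breeds, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(train_ds, validation_data=val_ds, epochs=5)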
Practical- 9
a) Write a program to demonstrate different activation functions.
CODE:-
import numpy as np
import matplotlib.pyplot as plt

# Define activation functions
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def relu(x):
    return np.maximum(0, x)

def tanh(x):
    return np.tanh(x)

def softmax(x):
    exp_values = np.exp(x - np.max(x, axis=1, keepdims=True))
    return exp_values / np.sum(exp_values, axis=1, keepdims=True)

# Generate input data
x = np.linspace(-5, 5, 100)

# Apply activation functions
y_sigmoid = sigmoid(x)
y_relu = relu(x)
y_tanh = tanh(x)

# Plot activation functions
plt.figure(figsize=(12, 8))

plt.subplot(2, 2, 1)
plt.plot(x, y_sigmoid, label='Sigmoid')
plt.title('Sigmoid Activation Function')
plt.xlabel('Input')
plt.ylabel('Output')
plt.legend()

plt.subplot(2, 2, 2)
plt.plot(x, y_relu, label='ReLU')
plt.title('ReLU Activation Function')
plt.xlabel('Input')
plt.ylabel('Output')
plt.legend()

plt.subplot(2, 2, 3)
plt.plot(x, y_tanh, label='Tanh')
plt.title('Tanh Activation Function')
plt.xlabel('Input')
plt.ylabel('Output')
plt.legend()

plt.tight_layout()
plt.show()
Output:
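The softmax function defined above is not plotted because it acts on a vector of scores rather than a single value; a short demonstration on one example row (reusing the softmax function from the program above):
import numpy as np

# Softmax maps a score vector to probabilities that sum to 1
scores = np.array([[1.0, 2.0, 3.0]])
probs = softmax(scores)
print("Softmax output:", probs)          # approximately [[0.09 0.24 0.67]]
print("Sum of probabilities:", probs.sum())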
b) Write a program in TensorFlow to demonstrate different loss functions.
CODE:-
import tensorflow as tf
import numpy as np

# Sample regression-style data (you can replace this with your actual data)
y_true = np.array([1.0, 0.0, 2.0, 1.0, 0.0], dtype=np.float32)
y_pred = np.array([0.8, 0.2, 1.5, 0.9, 0.3], dtype=np.float32)

# Define functions to calculate different loss functions
def mean_squared_error(y_true, y_pred):
    """Mean Squared Error (MSE) loss function."""
    return tf.reduce_mean(tf.square(y_true - y_pred))

def mean_absolute_error(y_true, y_pred):
    """Mean Absolute Error (MAE) loss function."""
    return tf.reduce_mean(tf.abs(y_true - y_pred))

def binary_crossentropy(y_true, y_pred):
    """Binary Crossentropy loss function (for binary classification)."""
    return tf.keras.losses.binary_crossentropy(y_true, y_pred, from_logits=False)

def categorical_crossentropy(y_true, y_pred):
    """Categorical Crossentropy loss function (for multi-class classification)."""
    return tf.keras.losses.categorical_crossentropy(y_true, y_pred, from_logits=False)

# Calculate loss values for each function
mse_loss = mean_squared_error(y_true, y_pred)
mae_loss = mean_absolute_error(y_true, y_pred)

# Binary crossentropy needs labels in {0, 1} and predicted probabilities
y_true_bin = np.array([1.0, 0.0, 1.0, 1.0, 0.0], dtype=np.float32)
y_pred_bin = np.array([0.8, 0.2, 0.9, 0.7, 0.3], dtype=np.float32)
bce_loss = tf.reduce_mean(binary_crossentropy(y_true_bin, y_pred_bin))

# Categorical crossentropy needs one-hot labels and a probability distribution per sample
y_true_cat = tf.one_hot(np.array([1, 0, 2]), depth=3)
y_pred_cat = np.array([[0.1, 0.8, 0.1],
                       [0.7, 0.2, 0.1],
                       [0.2, 0.2, 0.6]], dtype=np.float32)
cce_loss = tf.reduce_mean(categorical_crossentropy(y_true_cat, y_pred_cat))

# Print the calculated loss values (TensorFlow 2.x eager execution)
print("Mean Squared Error (MSE):", mse_loss.numpy())
print("Mean Absolute Error (MAE):", mae_loss.numpy())
print("Binary Crossentropy (BCE):", bce_loss.numpy())
print("Categorical Crossentropy (CCE):", cce_loss.numpy())
Output:
Practical- 10
10) Write a program to build an artificial neural network by implementing the
backpropagation algorithm and test the same using appropriate data sets.
CODE:-
import tensorflow as tf
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Hyperparameters
learning_rate = 0.01
epochs = 100

# Load the Iris dataset
iris = load_iris()
X = iris.data.astype('float32')  # Features
y = iris.target                  # Target labels (one-hot encoded later)

# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# One-hot encode target labels
y_train = tf.one_hot(y_train, depth=3)  # 3 classes (Iris setosa, versicolor, virginica)
y_test = tf.one_hot(y_test, depth=3)

# Define the model (2-layer ANN with ReLU activation)
class ANN(tf.keras.Model):
    def __init__(self, num_features, num_classes):
        super(ANN, self).__init__()
        self.hidden_layer = tf.keras.layers.Dense(10, activation='relu')  # Hidden layer with 10 neurons
        self.output_layer = tf.keras.layers.Dense(num_classes)            # Output layer with 3 neurons for 3 classes

    def call(self, inputs):
        x = self.hidden_layer(inputs)
        output = self.output_layer(x)
        return output

# Create the model instance
model = ANN(X_train.shape[1], 3)  # Number of features and classes

# Define loss function (categorical crossentropy) and optimizer
loss_fn = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate)

# Training loop: backpropagation via automatic differentiation
for epoch in range(epochs):
    with tf.GradientTape() as tape:
        predictions = model(X_train)
        loss = loss_fn(y_train, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

# Compile so the built-in evaluate() can report loss and accuracy on the test data
model.compile(optimizer=optimizer, loss=loss_fn, metrics=['accuracy'])
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)

# Print output
print(f"Test Loss: {loss:.4f}, Test Accuracy: {accuracy:.4f}")
Output: