Deep Learning Practical

12202080501072

Practical : 1

AIM : Write a code to read a dataset using the appropriate Python library and display it.

Code:

import pandas as pd

# Read the dataset and display it
df = pd.read_csv('dl1.csv')
print(df.head())
print(df)

# Clean numeric columns: strip thousands separators and coerce to numbers
cols_to_clean = ['Applicantincome', 'Coapplicantincome', 'LoanAmount']
for col in cols_to_clean:
    df[col] = pd.to_numeric(df[col].astype(str).str.replace(',', ''), errors='coerce')
print(df[cols_to_clean].head())


print("Shape:", df.shape) print("Means:")


print(df[cols_to_clean].mean())


Practical : 2

AIM : Write a code to create Perceptron algorithms for the following logic
gates

1. AND
2. OR
3. NOT
4. NAND
5. NOR
6. XOR

Code:

import numpy as np

# Simple unit step activation function
def unitStep(v):
    return 1 if v >= 0 else 0

# Perceptron model for 2 inputs: fires (outputs 1) when w.x + b >= 0
def percepModel(x, w, b):
    v = np.dot(w, x) + b
    return unitStep(v)

# Define logic gate functions using appropriate weights and bias

def AND(x):
    return percepModel(x, w=[1, 1], b=-1.5)

def OR(x):
    return percepModel(x, w=[1, 1], b=-0.5)

def NAND(x):
    return percepModel(x, w=[-1, -1], b=1.5)

def NOR(x):
    return percepModel(x, w=[-1, -1], b=0.5)

def XOR(x):
    # XOR is not linearly separable, so a single perceptron cannot compute it;
    # instead we compose it from NAND, OR and AND, as in a logic circuit
    return AND([OR(x), NAND(x)])

def NOT(x):  # x is a single input, not an array
    return percepModel([x], w=[-1], b=0.5)

# Testing all logic gates
test_inputs = [
    (0, 0),
    (0, 1),
    (1, 0),
    (1, 1),
]

print("\n--- Logic Gates using Perceptron ---") for


x1, x2 in test_inputs:
x_pair = [x1, x2]
print(f"\nInputs: ({x1}, {x2})")
print("AND =", AND(x_pair))
print("OR =", OR(x_pair))
print("NAND =", NAND(x_pair))
print("NOR =", NOR(x_pair))
print("XOR =", XOR(x_pair))

print("\n--- NOT Gate ---") for


x in [0, 1]:
print(f"NOT({x}) =", NOT(x))

Output:


Practical : 3

AIM : Implementation of a multi-layer network and study of various parameters for an application

Code:

import numpy as np

# Inputs and outputs
inputs = np.array([[2, 3]])
op = np.array([[1]])

# Hyperparameters
epochs = 75
lr = 0.1

# Neuron structure
inputLayerNeurons, hiddenLayerNeurons, outputLayerNeurons = 2, 2, 1

# Convert weights to NumPy arrays (important!)
hidden_weights = np.array([[0.11, 0.12], [0.21, 0.08]])
output_weights = np.array([[0.14], [0.15]])

print("Initial hidden weights: ", hidden_weights)
print("Initial output weights: ", output_weights)

for _i in range(epochs):
    # Forward pass
    hidden_layer_output = np.dot(inputs, hidden_weights)
    print("\nHidden layer output: ", hidden_layer_output)

    predicted_output = np.dot(hidden_layer_output, output_weights)
    print("Predicted output: ", predicted_output)

    # Loss and error
    delta = predicted_output - op
    print("Delta: ", delta)

    error = 0.5 * delta ** 2
    print("Error: ", error)


    # Backpropagation
    hidden_layer_error = np.dot(delta, output_weights.T)
    print("Hidden layer error: ", hidden_layer_error)

    # Update weights
    output_weights -= np.dot(hidden_layer_output.T, delta) * lr
    print("\n-------------------------------------------------------------")
    print("Updated output weights: ", output_weights)

    hidden_weights -= np.dot(inputs.T, hidden_layer_error) * lr
    print("\n-------------------------------------------------------------")
    print("Updated hidden weights: ", hidden_weights)

# Final values
print("\n\nFinal hidden weights: ", hidden_weights)
print("Final output weights: ", output_weights)
print("Final predicted output: ", predicted_output)

Output:


Final epochs:
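Note: the network above uses purely linear layers (no activation function), so the stacked layers collapse to a single linear map. A minimal sketch of the same forward pass with a sigmoid activation on the hidden layer is shown below; the sigmoid helper and its placement are an assumption for further study, not part of the original practical.

import numpy as np

def sigmoid(z):
    # Standard logistic activation (hypothetical addition, not in the practical above)
    return 1.0 / (1.0 + np.exp(-z))

inputs = np.array([[2, 3]])
hidden_weights = np.array([[0.11, 0.12], [0.21, 0.08]])
output_weights = np.array([[0.14], [0.15]])

# Forward pass with a nonlinearity on the hidden layer
hidden_layer_output = sigmoid(np.dot(inputs, hidden_weights))
predicted_output = np.dot(hidden_layer_output, output_weights)
print("Predicted output:", predicted_output)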


Practical : 4

AIM : Study of TensorFlow, Keras and PyTorch Frameworks

Deep learning frameworks are libraries that simplify the development and training of
complex neural networks. These frameworks provide pre-built and optimized
components such as tensor manipulation, automatic differentiation, neural network
layers, optimizers, and GPU acceleration support. Among these, TensorFlow, Keras,
and PyTorch are the most widely used in both academic research and industrial
applications.
TensorFlow:
Developer: Google Brain
Initial Release: 2015
Language: Python, C++ (with bindings for other languages)
Latest Version: TensorFlow 2.x
Overview:
TensorFlow is an end-to-end open-source platform for machine learning. It offers a
comprehensive ecosystem with tools for model building, training, and deployment at
scale. Initially built around static computational graphs, TensorFlow 2.x adopted a more
dynamic and Pythonic approach, largely integrating Keras as its official high-level API.
Key Features:
• Ecosystem: TensorFlow Lite (mobile), TensorFlow.js (browser),
TensorBoard (visualization), TF-Serving (deployment).
• Performance: Highly optimized for CPU and GPU usage, supports TPU acceleration.
• Scalability: Enables distributed training across GPUs, TPUs, and multiple devices.
• Serialization: Models can be saved and exported for use across platforms
using the SavedModel format.
• Automatic Differentiation: Supports gradient calculation for backpropagation using tf.GradientTape (see the sketch below).
Use Cases:
• Industrial deployment (e.g., Google Search, Gmail)
• Research requiring production-level scalability
• Time series prediction, object detection, NLP
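As a minimal sketch of automatic differentiation with tf.GradientTape (the variable and quadratic loss here are illustrative assumptions, not from any specific application):

import tensorflow as tf

x = tf.Variable(3.0)

with tf.GradientTape() as tape:
    loss = x ** 2              # a toy quadratic loss

grad = tape.gradient(loss, x)  # d(loss)/dx = 2x
print(grad.numpy())            # 6.0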


Keras:
Developer: François Chollet
Initial Release: 2015
Language: Python
Latest Version: Integrated with TensorFlow 2.x (tf.keras)
Overview:
Keras is a high-level neural networks API that enables fast experimentation. It was
originally designed to be modular and extensible, running on top of multiple backends
like Theano, CNTK, and TensorFlow. With the release of TensorFlow 2.x, Keras became
tightly integrated and is now the official high-level API for TensorFlow.
Key Features:
• User-Friendly: Intuitive API for beginners and researchers alike.
• Modular Design: Each component (layer, loss, optimizer) is a standalone
module that can be reused.
• Pre-trained Models: Supports many state-of-the-art models (e.g., ResNet, MobileNet).
• Rapid Prototyping: Simplifies model development with functions like
Sequential and Model.
Strengths:
• Simplified training using .fit(), .evaluate(), and .predict() (illustrated in the sketch below).
• Clean syntax and readable code, ideal for education and experimentation.
• Strong integration with TensorFlow tools like TensorBoard and tf.data.
Use Cases:
• Educational purposes
• Small to medium research experiments
• Quick model prototyping and testing
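A minimal sketch of the Sequential workflow with .fit() and .predict(); fitting the OR gate here is an assumed toy task chosen only for illustration:

import numpy as np
import tensorflow as tf

# Toy data: the OR truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [1]], dtype=np.float32)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(4, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

model.fit(X, y, epochs=200, verbose=0)
print(model.predict(X, verbose=0).round())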

PyTorch:
Developer: Facebook AI Research (FAIR)
Initial Release: 2016
Language: Python (with C++ backend)
Latest Version: PyTorch 2.x
Overview:
PyTorch is a popular open-source deep learning framework known for its dynamic computation graph, making it more intuitive and easier to debug compared to static graph frameworks. Its imperative style is closer to Python's native programming flow, making it highly suitable for research.
Key Features:
• Dynamic Computation Graphs: Graphs are built on-the-fly during runtime, providing more flexibility (see the sketch below).
• Pythonic: Tightly integrated with Python, enabling seamless
debugging and customization.
• TorchScript: Allows transition from eager mode to static graph for deployment.
• ONNX Support: Enables export of PyTorch models for cross-framework compatibility.
Libraries & Tools:
• TorchVision, TorchText, TorchAudio for domain-specific datasets and models
• Lightning / Ignite / HuggingFace support for higher-level abstractions
• Native AMP (Automatic Mixed Precision) for better training performance on GPUs
Use Cases:
• Cutting-edge AI research
• Reinforcement learning
• Custom architectures requiring runtime control
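A minimal sketch of PyTorch's imperative, define-by-run style; the tiny network and toy tensors below are illustrative assumptions:

import torch
import torch.nn as nn

# A tiny two-layer network; the graph is traced dynamically on each forward pass
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2, 4)
        self.fc2 = nn.Linear(4, 1)

    def forward(self, x):
        return torch.sigmoid(self.fc2(torch.relu(self.fc1(x))))

net = TinyNet()
x = torch.tensor([[0.0, 1.0]])    # toy input
y = torch.tensor([[1.0]])         # toy target

loss = nn.functional.binary_cross_entropy(net(x), y)
loss.backward()                   # autograd walks the dynamic graph
print(net.fc1.weight.grad.shape)  # torch.Size([4, 2])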

Feature              TensorFlow                 Keras                     PyTorch
Backend Engine       TensorFlow                 Runs on TF/Theano/CNTK    PyTorch (native)
API Level            Low & Mid                  High-level only           Low & Mid
Control Flow         Functional + Imperative    Functional only           Imperative
Model Saving         SavedModel, HDF5           HDF5 (via TF)             .pt/.pth
Custom Layers        Verbose                    Simplified                Very Flexible
Industrial Usage     Widely Adopted             Mostly Prototyping        Increasing in Production
Multi-GPU Training   Built-in Support           Yes (via TF)              Manual or DDP
Origin               Developed by Google        Developed by Google       Developed by Facebook
