
Deep Learning And Applications(202047804) 12202080501060

Practical 2: Perceptron Implementation

AIM: Write code to implement the Perceptron algorithm for the following logic gates:
1. AND 2. OR 3. NOT 4. NAND 5. NOR 6. XOR

Introduction
In this practical, we implement the Perceptron algorithm — one of the simplest types of
artificial neural networks used for binary classification. A perceptron is a linear classifier
that makes predictions based on a linear predictor function. It is a foundational concept in
deep learning and understanding it is essential for grasping more complex architectures.
The implementation helps us understand how weights are updated using the error-correction
rule, and how decision boundaries are formed for a linearly separable dataset.

Theory: The Perceptron Algorithm

🔹 What is a Perceptron?

A perceptron is the simplest form of a feedforward neural network, consisting of a
single layer of weights and a step activation function. It is used for binary classification
tasks, especially when the data is linearly separable.

🔹 Structure

A perceptron takes input features, applies weights, adds a bias, and passes the result
through an activation function (usually a step function) to make a decision.

Mathematically:

y = f( Σᵢ wᵢ·xᵢ + b )

Where:

• xᵢ: Input features

• wᵢ: Weights

• b: Bias

• f: Activation function (typically unit step)


• y: Output (0 or 1)
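
For example, using the AND-gate weights from the implementation below (w = [1, 1], b = -1.5): for input x = [1, 1], v = 1·1 + 1·1 − 1.5 = 0.5 ≥ 0, so y = 1; for x = [0, 1], v = 0 + 1 − 1.5 = −0.5 < 0, so y = 0.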

🔹 Perceptron Learning Rule

The weights of a perceptron are updated iteratively using the following rule:

wᵢ ← wᵢ + η·(t − y)·xᵢ

Where:

• η: Learning rate (typically small, e.g., 0.1)

• t: Target output

• y: Predicted output

The process continues until the output stabilizes (converges) or for a fixed number of
epochs.
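
To make the update rule concrete, the following is a minimal training sketch. It is not part of the original listing (which uses hand-picked weights): the helper name trainPerceptron, the learning rate, the epoch count, and the use of the AND truth table as training data are illustrative assumptions.

import numpy as np

def trainPerceptron(X, T, lr=0.1, epochs=20):
    # Start from zero weights and bias, then repeatedly apply
    # w_i <- w_i + lr*(t - y)*x_i (and b <- b + lr*(t - y)) for each sample.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, T):
            y = 1 if np.dot(w, x) + b >= 0 else 0   # unit step prediction
            w = w + lr * (t - y) * x
            b = b + lr * (t - y)
    return w, b

# Example: learn the AND gate from its truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
T = np.array([0, 0, 0, 1])
w, b = trainPerceptron(X, T)
print("Learned weights:", w, "bias:", b)

For a linearly separable gate such as AND, this loop converges to a valid separating boundary within a few epochs.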

🔹 Logic Gates and Linear Separability:

Logic Gate   Linearly Separable?   Implemented with Single Perceptron?
AND          Yes                   Yes
OR           Yes                   Yes
NOT          Yes                   Yes
NAND         Yes                   Yes
NOR          Yes                   Yes
XOR          No                    Needs Multi-Layer Perceptron

XOR is not linearly separable and cannot be learned by a single-layer perceptron, which
highlights the limitations of early neural models and the need for deeper architectures
(such as MLPs). For example, XOR outputs 1 for the inputs (0, 1) and (1, 0) but 0 for
(0, 0) and (1, 1), and no single straight line in the input plane can separate these two
classes.


Implementation

Code:

import numpy as np

# Unit step activation: returns 1 if the weighted sum is non-negative, else 0
def unitStep(v):
    if v >= 0:
        return 1
    else:
        return 0

# Single perceptron: weighted sum of inputs plus bias, passed through the step function
def perceptronModel(x, w, b):
    v = np.dot(w, x) + b
    y = unitStep(v)
    return y

def AND_logicFunction(x):
    w = np.array([1, 1])
    b = -1.5
    return perceptronModel(x, w, b)

def OR_logicFunction(x):
    w = np.array([1, 1])
    b = -0.5
    return perceptronModel(x, w, b)

def NOT_logicFunction(x):
    w = -1
    b = 0.5
    return perceptronModel(x, w, b)

# XOR is built by composing linearly separable gates:
# XOR(a, b) = NOT((a AND b) OR (NOT a AND NOT b)) = NOT(XNOR(a, b))
def XOR_logicFunction(x):
    and_ab = AND_logicFunction(x)
    not_a = NOT_logicFunction(x[0])
    not_b = NOT_logicFunction(x[1])
    and_not_a_not_b = AND_logicFunction(np.array([not_a, not_b]))
    or_result = OR_logicFunction(np.array([and_ab, and_not_a_not_b]))
    xor_result = NOT_logicFunction(or_result)
    return xor_result
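
Note that the weights and biases above are fixed by hand rather than learned with the update rule from the Theory section, and that XOR_logicFunction is not a single perceptron: it chains the AND, NOT, and OR perceptrons defined above, which is effectively a small multi-layer network.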


test1 = np.array([0, 1])
test2 = np.array([1, 1])
test3 = np.array([0, 0])
test4 = np.array([1, 0])

test5 = 0
test6 = 1

print("AND({}, {}) = {}".format(0, 1, AND_logicFunction(test1)))
print("AND({}, {}) = {}".format(1, 1, AND_logicFunction(test2)))
print("AND({}, {}) = {}".format(0, 0, AND_logicFunction(test3)))
print("AND({}, {}) = {}".format(1, 0, AND_logicFunction(test4)))

print("OR({}, {}) = {}".format(0, 1, OR_logicFunction(test1)))
print("OR({}, {}) = {}".format(1, 1, OR_logicFunction(test2)))
print("OR({}, {}) = {}".format(0, 0, OR_logicFunction(test3)))
print("OR({}, {}) = {}".format(1, 0, OR_logicFunction(test4)))

print("NOT({}) = {}".format(0, NOT_logicFunction(test5)))
print("NOT({}) = {}".format(1, NOT_logicFunction(test6)))

print("XOR({}, {}) = {}".format(0, 1, XOR_logicFunction(test1)))
print("XOR({}, {}) = {}".format(1, 1, XOR_logicFunction(test2)))
print("XOR({}, {}) = {}".format(0, 0, XOR_logicFunction(test3)))
print("XOR({}, {}) = {}".format(1, 0, XOR_logicFunction(test4)))


def NOR_logicFunction(x):
    w = np.array([-1, -1])
    b = 0.5
    return perceptronModel(x, w, b)

print("NOR({}, {}) = {}".format(0, 1, NOR_logicFunction(test1)))
print("NOR({}, {}) = {}".format(1, 1, NOR_logicFunction(test2)))
print("NOR({}, {}) = {}".format(0, 0, NOR_logicFunction(test3)))
print("NOR({}, {}) = {}".format(1, 0, NOR_logicFunction(test4)))

Conclusion
This practical demonstrates how perceptrons can model fundamental logic gates using
simple linear computations. Through this hands-on implementation, we observed both the
elegance and the limitations of single-layer networks. While the AND, OR, NOT, NAND, and
NOR gates were implemented with single perceptrons, the XOR gate required a composition
of gates, showcasing the necessity of multi-layer architectures for non-linearly separable
problems. This activity reinforces the importance of the perceptron as a conceptual and
practical foundation for advancing toward deep learning models such as MLPs, CNNs, and
beyond.

