1)Explain Perceptron model with Bias
Perceptron Model with Bias
1. Definition
The Perceptron is a single-layer neural network and the simplest form of a feedforward
network. It serves as a linear classifier for binary decisions. The inclusion of bias allows the
model to shift the decision boundary away from the origin by introducing an adjustable offset.
2. Mathematical Formulation
The output of a Perceptron with bias is calculated as:
yin = b + ∑ xi wi   (sum over i = 1 to n)
where:
• xi = Input features
• wi = Weights
• b = Bias term
The final output is determined by an activation function:
y = 1, if yin ≥ θ
y = 0, if yin < θ
(where θ is the threshold, often set to 0).
3. Role of Bias
• Decision Boundary Adjustment: Enables the model to learn decision boundaries that do not pass
through the origin.
• Flexibility: Adds a degree of freedom to the model, improving its ability to fit data.
• Implementation: Treated as an extra input with fixed value 1 and adjustable weight b.
4. Learning Rule (Weight Update)
The Perceptron adjusts its weights and bias iteratively using:
wi(new) = wi(old) + η (t − y) xi
b(new) = b(old) + η (t − y)
where η is the learning rate, t is the target output, and y is the actual output.
5. Example: AND Gate Implementation
x1 | x2 | yAND
0  | 0  | 0
0  | 1  | 0
1  | 0  | 0
1  | 1  | 1
Parameters:
• Weights: w1 = 1, w2 = 1
• Bias: b = −1.5
• Threshold: θ = 0
Calculation:
• For x1 = 1, x2 = 1: yin = 1 ⋅ 1 + 1 ⋅ 1 − 1.5 = 0.5 ≥ 0 ⇒ y = 1
• For x1 = 1, x2 = 0: yin = 1 ⋅ 1 + 0 ⋅ 1 − 1.5 = −0.5 < 0 ⇒ y = 0 (the remaining cases are analogous)
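The same example can be checked in code. Below is a minimal Python sketch (an illustration, not part of the original notes) that trains a perceptron with bias on the AND gate using the update rule from section 4; the zero initial weights and learning rate of 1 are assumed values.

```python
# Minimal perceptron-with-bias sketch for the AND gate.
# Assumed for illustration: weights and bias start at 0, learning rate eta = 1.

def step(y_in, theta=0.0):
    """Threshold activation: 1 if the net input reaches theta, else 0."""
    return 1 if y_in >= theta else 0

def train_and_gate(eta=1.0, epochs=10):
    samples = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]   # AND truth table
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, t in samples:
            y_in = b + x1 * w1 + x2 * w2      # net input including the bias
            y = step(y_in)
            # Perceptron update rule: w <- w + eta*(t - y)*x, b <- b + eta*(t - y)
            w1 += eta * (t - y) * x1
            w2 += eta * (t - y) * x2
            b  += eta * (t - y)
    return w1, w2, b

w1, w2, b = train_and_gate()
print(w1, w2, b)   # one valid solution under these assumptions, e.g. w1=2, w2=1, b=-3
for x1, x2, t in [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]:
    print(x1, x2, "->", step(b + x1 * w1 + x2 * w2), "target:", t)
```

The learned weights differ from the hand-picked values (w1 = w2 = 1, b = −1.5), but any weight/bias combination that puts the line on the correct side of all four points implements AND.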
6. Limitations
• Linearly separable data only: cannot solve non-linearly separable problems (e.g., XOR).
• Fixed Threshold: Requires manual tuning of θ.
7. Applications
• Spam detection (SPAM/NON-SPAM filtering)
• Medical diagnosis (benign/malignant classification)
• Logic gate implementation (AND, OR)
2)Draw a block diagram of the Error Back Propagation Algorithm and
explain with the flow chart the Error Back Propagation Concept.
Introduction
• Error Backpropagation Algorithm (BPA) is the most widely used algorithm for
training Multilayer Perceptrons (MLP).
• It is based on the gradient descent method to minimize the error by adjusting
weights.
• It involves two passes:
o Forward pass: Inputs are passed through the network to get the output.
o Backward pass: Errors are propagated back to adjust the weights.
Block Diagram of Error Backpropagation
Blocks:
1. Input Layer: Receives input features.
2. Hidden Layer(s): Process inputs through weights and activation functions.
3. Output Layer: Produces the final predicted output.
4. Error Computation: Difference between target and actual output.
5. Backpropagation: Adjusts weights to reduce the error.
Flowchart of Error Backpropagation Algorithm
Here is a step-by-step flow:
[Start]
|
Initialize weights and biases randomly
|
Input the training sample
|
Forward propagate:
- Compute the net input at each neuron
- Apply activation function to get output
|
Compute error at output neurons:
- Error = (Target output - Actual output)
|
Backward propagate:
- Compute gradient (error signals) for output layer
- Propagate error signals back to hidden layers
|
Update weights and biases:
- Adjust weights to minimize the error
|
Check for convergence (error below threshold or max epochs)
|
|----[No]----> Go to next training sample and repeat
|
[Yes]
Training complete
|
[Stop]
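The flowchart above maps directly onto code. The following is a minimal Python/NumPy sketch of backpropagation for a single hidden layer trained on the XOR problem; the hidden-layer size, sigmoid activation, learning rate, and epoch count are illustrative assumptions rather than values fixed by the algorithm.

```python
import numpy as np

# Minimal backpropagation sketch: one hidden layer, sigmoid activations,
# trained on XOR. Hidden size, learning rate, and epoch count are assumptions.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
T = np.array([[0], [1], [1], [0]], dtype=float)               # targets (XOR)

# Step 1: initialize weights and biases randomly
rng = np.random.default_rng(0)
W1 = rng.normal(scale=1.0, size=(2, 4)); b1 = np.zeros(4)     # input -> hidden
W2 = rng.normal(scale=1.0, size=(4, 1)); b2 = np.zeros(1)     # hidden -> output
eta = 1.0

for epoch in range(10000):
    # Forward pass: net input and activation at each layer
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Error at the output neurons
    error = T - y

    # Backward pass: error signals for the output layer, then the hidden layer
    delta_out = error * y * (1 - y)                 # uses the sigmoid derivative
    delta_hid = (delta_out @ W2.T) * h * (1 - h)

    # Update weights and biases (gradient descent step)
    W2 += eta * h.T @ delta_out; b2 += eta * delta_out.sum(axis=0)
    W1 += eta * X.T @ delta_hid; b1 += eta * delta_hid.sum(axis=0)

print(np.round(y, 2))   # outputs typically approach [0, 1, 1, 0]
```

The forward pass, error computation, backward pass, and weight update in the loop correspond one-to-one to the boxes in the flowchart; here convergence is handled simply by a fixed number of epochs.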
3)What are activation functions? Explain Binary, Bipolar, Continuous, and
Ramp activation functions.
4)Write a short note on LMS-Widrow Hoff
The Least Mean Squares (LMS) algorithm, also known as the Widrow-Hoff learning rule, is a
foundational adaptive filtering technique introduced by Bernard Widrow and Ted Hoff in
1960. It is designed for linear regression tasks and forms the basis of training adaptive linear
neurons (ADALINE). Unlike the perceptron, which focuses on classification, LMS
minimizes the mean squared error (MSE) between predictions and targets, making it ideal for
signal processing and system identification.
Applications
1. Adaptive Filtering:
o Noise cancellation in audio signals.
o Echo suppression in telecommunications.
2. System Identification:
o Modeling unknown systems (e.g., channel equalization).
3. Neural Networks:
o Training linear layers in ADALINE networks.
o Precursor to backpropagation in multilayer perceptrons.
Advantages
• Simplicity: Easy to implement with minimal computational overhead.
• Online Learning: Processes data incrementally, making it memory-efficient.
• Robustness: Works well with non-stationary data (e.g., changing environments).
Limitations
• Linear Limitation: Only models linear relationships; fails on non-linear problems.
• Learning Rate Sensitivity: Requires careful tuning of the learning rate η.
• Input Scaling: Performance degrades if inputs are not normalized.
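As an illustration of the adaptive-filtering use case, the following Python/NumPy sketch applies the LMS update to identify an unknown 3-tap FIR system; the tap values, step size, and noise level are assumed for this example and are not part of the note.

```python
import numpy as np

# Minimal LMS (Widrow-Hoff) sketch for system identification: an adaptive
# filter learns to match an "unknown" 3-tap FIR system from input/output data.
# The tap values, step size, and noise level are assumed for this example.

rng = np.random.default_rng(1)
unknown = np.array([0.8, -0.4, 0.2])          # system to be identified
n_taps = len(unknown)
w = np.zeros(n_taps)                          # adaptive filter weights
eta = 0.05                                    # learning rate (step size)

x = rng.normal(size=2000)                     # input signal
for k in range(n_taps - 1, len(x)):
    u = x[k - n_taps + 1:k + 1][::-1]         # current and past samples [x_k, x_{k-1}, x_{k-2}]
    d = unknown @ u + 0.01 * rng.normal()     # desired response (noisy system output)
    y = w @ u                                 # adaptive filter output
    e = d - y                                 # instantaneous error
    w += eta * e * u                          # LMS update: w <- w + eta * e * x

print(np.round(w, 2))                         # should end up close to [0.8, -0.4, 0.2]
```

Because each sample is processed once and then discarded, this loop also illustrates the online, memory-efficient character of LMS mentioned above.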
5)Draw and explain a biological neuron
Components:
• Dendrites: Receive signals from other neurons.
• Soma (Cell Body): Processes incoming signals.
• Axon: Transmits signals to other neurons.
• Synapse: Junction between axon terminals and another neuron’s dendrites.
Working:
• Electrical impulses (action potentials) are generated based on received chemical
signals.
• If the signal exceeds a threshold, it is transmitted down the axon.
6) Explain in detail the MP neuron model.
7) Draw delta learning rule (LMS WIDROW HOFF) model and explain it
with a training process flowchart.
Key Components
1. Linear Activation Function:
Unlike the perceptron, Adaline uses a linear activation (y = yin) instead of a step function.
2. Error Calculation:
Error is computed using the net input (yin), not the activated output.
3. Convergence Criteria:
• Total error falls below a threshold (ε).
• Maximum number of epochs is reached.
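A minimal Python sketch of the delta-rule training loop is given below; the linear target function, learning rate, and error threshold are chosen for illustration and are not part of the notes. It shows the three key components: linear activation, error computed on the net input, and convergence by error threshold or maximum epochs.

```python
import numpy as np

# Minimal delta-rule (LMS / Widrow-Hoff) training sketch for an Adaline unit.
# The target d = 2*x1 - 3*x2 + 1, the learning rate, and the error threshold
# are illustrative assumptions.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))            # training inputs
d = 2 * X[:, 0] - 3 * X[:, 1] + 1               # targets (exactly linear)

w = np.zeros(2)
b = 0.0
eta = 0.1          # learning rate
eps = 1e-6         # convergence threshold on total squared error
max_epochs = 500

for epoch in range(max_epochs):
    total_error = 0.0
    for x, t in zip(X, d):
        y_in = w @ x + b                         # linear activation: y = yin
        error = t - y_in                         # error uses the net input
        w += eta * error * x                     # delta-rule weight update
        b += eta * error
        total_error += error ** 2
    if total_error < eps:                        # convergence criterion
        break

print(epoch, np.round(w, 3), round(b, 3))        # weights approach [2, -3], bias approaches 1
```

The stopping test combines both criteria from the list above: the loop ends either when the total squared error of an epoch drops below ε or when the maximum number of epochs is exhausted.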
8)short note on artificial neural networks
9) Compare biological neural network with artificial neural network
10)Implement XOR function using McCulloch Pitts Model
11)Discuss different activation functions used in Neural Networks.
(Formula, Graph and Range).
12) Implement the ANDNOT logic functions using McCulloch Pitts Model.
13) Explain Multilayer perceptron with a neat diagram and its working
with flowchart or algorithm.
14) Explain Back Propagation Neural Network with flowchart.
15)Implement OR function using a single layer perceptron. Assume w1=0.6,
w2=1.1, learning rate=0.5, threshold=1