Advanced Software Testing
Lab Report
Submitted by
Name:- Subrat Das
Reg No:- 23BAI10495
Submitted to
Dr. Praveen Kumar Tyagi
VIT Bhopal University
Bhopal, Madhya Pradesh
13th May, 2025
Table of Contents
Sl. No.  Name of the Experiment                                                    Date
1.       To study and understand the fundamental concepts, architecture,
         components, and types of Artificial Neural Networks (ANN).                22/07/25
2.       To perform matrix addition, which is a foundational operation in ANN
         computations such as forward propagation and weight updates.              22/07/25
3.       To perform matrix multiplication, which is a foundational operation in
         ANN computations such as forward propagation and weight updates.          22/07/25
4.       To perform matrix transpose, which is a foundational operation in ANN
         computations.                                                             22/07/25
Aim: Understanding the Fundamentals of Artificial Neural Networks (ANN)
Objective:
To study and understand the fundamental concepts, architecture, key components, and different
types of Artificial Neural Networks (ANNs).
Theory :
Artificial Neural Networks (ANNs) are computational models inspired by the biological neural
networks in the human brain. They consist of interconnected nodes (neurons) organized in layers—
input layer, hidden layers, and output layer. Each neuron receives inputs, applies weights, adds a bias,
and passes the result through an activation function (e.g., Sigmoid, ReLU, Tanh) to introduce non-
linearity.
Key Components:
1. Neurons: Basic processing units that compute weighted sums of inputs (see the short sketch after this list).
2. Weights & Biases: Adjusted during training to minimize error.
3. Activation Functions: Determine neuron output (e.g., ReLU for hidden layers, Softmax for
classification).
4. Layers:
o Input Layer: Receives raw data.
o Hidden Layers: Extract features (deep networks have multiple hidden layers).
o Output Layer: Produces final predictions.
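To make the neuron computation above concrete, the short NumPy sketch below computes a weighted sum of inputs, adds a bias, and applies a ReLU activation. The input values, weights, and bias are assumed purely for illustration.

import numpy as np

# Single neuron: weighted sum of inputs plus bias, passed through ReLU.
# The inputs, weights, and bias below are assumed values for illustration.
x = np.array([0.5, -1.2, 3.0])   # input features
w = np.array([0.4, 0.1, -0.6])   # one weight per input
b = 0.2                          # bias term

z = np.dot(w, x) + b             # weighted sum: w·x + b
a = np.maximum(0.0, z)           # ReLU activation

print("weighted sum z =", z)     # -1.52
print("activation a  =", a)      # 0.0 (ReLU clips negative values)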
Types of ANNs:
Feedforward Neural Networks (FNN): Simplest type, data flows in one direction.
Convolutional Neural Networks (CNN): Used for image processing (e.g., filters detect edges).
Recurrent Neural Networks (RNN): Process sequential data (e.g., time series, NLP).
Example:
A 3-layer Feedforward ANN for binary classification:
Input Layer: 2 neurons (features).
Hidden Layer: 4 neurons (ReLU activation).
Output Layer: 1 neuron (Sigmoid activation).
Procedure:
1. Sketch the ANN architecture.
2. Label weights, biases, and activation functions.
3. Explain how forward propagation computes predictions.
Code (Python - Keras):
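A minimal sketch of the 3-layer feedforward network from the example above (2 input features, 4 hidden ReLU neurons, 1 sigmoid output neuron). The synthetic training data, optimizer, and epoch count are illustrative assumptions.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# 3-layer feedforward ANN for binary classification:
# input (2 features) -> hidden (4 neurons, ReLU) -> output (1 neuron, Sigmoid)
model = keras.Sequential([
    layers.Input(shape=(2,)),
    layers.Dense(4, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Illustrative synthetic data: 100 samples with 2 features and binary labels.
X = np.random.rand(100, 2)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

model.fit(X, y, epochs=10, batch_size=8, verbose=0)
print(model.predict(X[:5], verbose=0).ravel())   # probabilities from the sigmoid output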
Conclusion :
This experiment introduced the fundamental structure of ANNs, including neurons, layers, and
activation functions. A simple Feedforward Neural Network was implemented using Keras,
demonstrating how input data propagates through layers to produce an output. Understanding ANN
architecture is crucial before diving into complex operations like matrix computations in deep
learning.
Aim: Matrix Addition in ANN (Bias Addition in Forward Propagation)
Objective:
To perform matrix addition, a fundamental operation used in ANN for bias addition during forward
propagation.
Theory :
Matrix addition is a critical operation in ANNs, particularly when adding bias terms to the weighted
sum of inputs. The formula for matrix addition is:
Cᵢⱼ = Aᵢⱼ + Bᵢⱼ
Role in ANN:
1. Bias Addition: After computing the weighted sum Z = W·X, a bias vector b is added:
Z = W·X + b
2. Layer Outputs: Used in residual networks (ResNets) where skip connections add outputs from
previous layers.
Key Properties:
Element-wise operation: Matrices must have the same dimensions.
Broadcasting: NumPy allows adding a scalar or vector to a matrix.
Example:
Given:
A = [[1, 2], [3, 4]], b = [0.1, 0.2]
After broadcasting:
A + b = [[1.1, 2.2], [3.1, 4.2]]
Procedure:
1. Define input matrix A and bias vector b.
2. Perform element-wise addition.
Code (Python - NumPy):
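A minimal NumPy sketch of the bias-addition example above; the matrix A and bias vector b are the values from the worked example.

import numpy as np

# Input matrix A and bias vector b from the example above.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([0.1, 0.2])

# Element-wise addition with broadcasting: b is added to every row of A.
C = A + b
print(C)   # [[1.1 2.2]
           #  [3.1 4.2]]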
Conclusion :
Matrix addition is essential in ANNs for incorporating bias terms into neuron computations. This
experiment demonstrated how biases are added to weighted inputs during forward propagation.
Understanding this operation is crucial for implementing and debugging neural networks.
Aim: Matrix Multiplication in ANN (Forward Propagation)
Objective:
To perform matrix multiplication, a fundamental operation used in ANN for computing weighted sums
during forward propagation.
Theory :
Matrix multiplication is the backbone of neural network computations. It is used to calculate the
weighted sum of inputs in each layer. The general formula is:
Z = W·X + b
where:
W = Weight matrix
X = Input matrix
b = Bias vector
Key Concepts:
1. Dot Product Requirement: The number of columns in W must match the number of rows in X.
2. Dimensionality:
o If W is m×n and X is n×p, the result Z is m×p.
3. Role in ANN:
o Input-Hidden Layer: Transforms input features into hidden representations.
o Hidden-Output Layer: Produces final predictions.
Example:
Given:
W = [[0.5, 0.6], [0.7, 0.8]], X = [1, 2], b = [0.1, 0.2]
The weighted sum Z is:
Z = W·X + b = [0.5×1 + 0.6×2, 0.7×1 + 0.8×2] + [0.1, 0.2] = [1.7, 2.3] + [0.1, 0.2] = [1.8, 2.5]
Procedure:
1. Define weight matrix W, input X, and bias b.
2. Compute Z = W·X + b using matrix multiplication.
Code (Python - NumPy):
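A minimal NumPy sketch of the forward-propagation example above, using the same W, X, and b.

import numpy as np

# Weight matrix W, input vector X, and bias b from the example above.
W = np.array([[0.5, 0.6],
              [0.7, 0.8]])
X = np.array([1.0, 2.0])
b = np.array([0.1, 0.2])

# Weighted sum for one layer: Z = W·X + b
WX = np.dot(W, X)
Z = WX + b
print("W·X =", WX)   # [1.7 2.3]
print("Z   =", Z)    # [1.8 2.5]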
Conclusion :
Matrix multiplication is crucial for computing neuron activations in ANNs. This experiment
demonstrated how weights transform input data into hidden layer outputs. Mastering this operation
is essential for implementing forward propagation efficiently.
Aim: Matrix Transpose in ANN (Gradient Computation in Backpropagation)
Objective:
To perform matrix transpose, a key operation used in ANN for gradient computation during
backpropagation.
Theory :
The transpose of a matrix flips its rows and columns, denoted as Aᵀ. In ANNs, it is used in:
1. Backpropagation:
o Transposing weight matrices to propagate errors backward.
o If δ(l) is the error at layer l, then:
δ(l−1) = (W(l))ᵀ · δ(l) ⊙ σ′(Z(l−1))
2. Dimension Matching:
o Ensures matrix multiplications are valid during gradient updates.
Properties:
(Aᵀ)ᵀ = A
(A + B)ᵀ = Aᵀ + Bᵀ
(A·B)ᵀ = Bᵀ·Aᵀ
Example:
Given:
W = [[1, 2], [3, 4]]
The transpose Wᵀ is:
Wᵀ = [[1, 3], [2, 4]]
Procedure:
1. Define matrix W.
2. Swap rows and columns to compute Wᵀ.
Code (Python - NumPy):
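A minimal NumPy sketch: the transpose of W from the example above, followed by an illustrative backward-pass step using the transposed weights. The error vector δ and pre-activations Z_prev below are assumed values, and a ReLU derivative stands in for σ′.

import numpy as np

# Weight matrix from the example above.
W = np.array([[1, 2],
              [3, 4]])

W_T = W.T                     # transpose: rows and columns swapped
print(W_T)                    # [[1 3]
                              #  [2 4]]

# Illustrative backpropagation step (assumed values):
# delta_prev = (W^T · delta) ⊙ sigma'(Z_prev), here with a ReLU derivative.
delta = np.array([0.2, -0.1])            # error at the current layer (assumed)
Z_prev = np.array([0.5, -0.3])           # pre-activations of the previous layer (assumed)
relu_grad = (Z_prev > 0).astype(float)   # ReLU derivative: 1 where Z_prev > 0, else 0
delta_prev = W_T.dot(delta) * relu_grad
print(delta_prev)                        # [-0.1  0. ]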
Conclusion :
Matrix transposition is vital for backpropagation in ANNs, enabling efficient gradient flow. This
experiment illustrated how transposing weight matrices helps compute errors for earlier layers.
Understanding this operation is key to training deep neural networks effectively.