
Assignment 2: Regression

The document outlines an assignment for a course on Neural Networks and Deep Learning, focusing on implementing various regression and classification algorithms in Python. It includes tasks such as linear regression, logistic regression, and multinomial logistic regression using different optimization techniques like stochastic gradient descent and Adam. Additionally, it emphasizes plotting loss curves, predicting outcomes, and evaluating model performance using metrics like precision and recall.


Department of Electronics & Communication Engineering

National Institute of Technology Karnataka


Surathkal, Mangalore - 575025 (Karnataka), India
EC460 - Neural Networks and Deep Learning
Assignment 2: Linear Regression and Logistic Regression
Issue date: 04-09-2021 Submission date: 14-09-2021
Q.1. Write Python code from scratch for a simple linear regression problem with the following training data:
X = [2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6]
Y = [5.1, 6.1, 6.9, 7.8, 9.2, 9.9, 11.5, 12, 12.8]
Model Y as a linear function of X. (a) Use the batch gradient descent algorithm to learn the model parameters with α = 0.01, randomly initialized weights and bias, and epochs = 1000. Use MSE as the loss function with an appropriate convergence criterion. (b) Plot the cost function (J) over the learning duration. (c) Plot the regression line. (d) Repeat (b) and (c) for the stochastic gradient descent and Adam optimization algorithms. (e) Plot a comparative loss curve.
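As a sketch of part (a), batch gradient descent on the MSE loss can be written as below. The function name `batch_gd` and the convergence tolerance `tol` are illustrative choices, not specified by the assignment:

```python
import numpy as np

# Training data from Q.1
X = np.array([2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6])
Y = np.array([5.1, 6.1, 6.9, 7.8, 9.2, 9.9, 11.5, 12, 12.8])

def batch_gd(X, Y, alpha=0.01, epochs=1000, tol=1e-8, seed=0):
    """Fit Y ~ w*X + b by batch gradient descent on the MSE loss."""
    rng = np.random.default_rng(seed)
    w, b = rng.standard_normal(2)            # random initial weight and bias
    losses = []
    for _ in range(epochs):
        err = (w * X + b) - Y
        losses.append(np.mean(err ** 2))     # MSE over the whole batch
        # Gradients of MSE with respect to w and b
        dw = 2 * np.mean(err * X)
        db = 2 * np.mean(err)
        w -= alpha * dw
        b -= alpha * db
        # Convergence criterion: stop when the loss change is below tolerance
        if len(losses) > 1 and abs(losses[-2] - losses[-1]) < tol:
            break
    return w, b, losses

w, b, losses = batch_gd(X, Y)   # `losses` can then be plotted for part (b)
```

Parts (c) to (e) follow by plotting `w * X + b` against the data and rerunning with per-sample (SGD) or Adam updates in place of the batch step.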

Q.2. The neural network shown in Fig. 1 has the following hyperparameters and inputs: choose random weights and bias for the neuron and a learning rate of 0.01. The inputs to the neuron and the target values are as follows.

| X1 | X2 | Y (target) |
| - | --- | --- |
| 4 | 1 | 2 |
| 2 | 8 | -14 |
| 1 | 0 | 1 |
| 3 | 2 | -1 |
| 1 | 4 | -7 |
| 6 | 7 | -8 |

Fig.1
(a) Write Python code to predict the output of the neural network for the given set of inputs using the stochastic gradient descent algorithm with the loss functions: (i) mean square error and (ii) squared epsilon-hinge loss. (b) Plot a comparative loss curve. (c) Repeat (a) with the Adam optimization algorithm.
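A sketch for part (a): a single linear neuron trained sample by sample, with the loss selected by a flag. The name `sgd_neuron` and the value of ε are illustrative (the assignment does not fix ε). Incidentally, the table above happens to satisfy y = x1 − 2·x2 exactly, so SGD can recover the weights:

```python
import numpy as np

# Inputs and targets from the table in Q.2
X = np.array([[4, 1], [2, 8], [1, 0], [3, 2], [1, 4], [6, 7]], dtype=float)
Y = np.array([2, -14, 1, -1, -7, -8], dtype=float)

def sgd_neuron(X, Y, loss="mse", alpha=0.01, epochs=2000, eps=0.1, seed=0):
    """Train a single linear neuron y_hat = w.x + b with per-sample SGD."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])      # random initial weights
    b = rng.standard_normal()                # random initial bias
    losses = []
    for _ in range(epochs):
        total = 0.0
        for x, y in zip(X, Y):
            e = y - (w @ x + b)              # prediction error for this sample
            if loss == "mse":
                total += e ** 2
                grad_out = -2 * e            # dL/dy_hat for MSE
            else:                            # squared epsilon-hinge loss
                h = max(0.0, abs(e) - eps)   # errors inside the eps tube cost 0
                total += h ** 2
                grad_out = -2 * h * np.sign(e)
            w -= alpha * grad_out * x        # per-sample parameter update
            b -= alpha * grad_out
        losses.append(total / len(X))
    return w, b, losses

w, b, losses = sgd_neuron(X, Y, loss="mse")
```

Calling again with `loss="hinge"` and plotting both `losses` lists gives the comparative curve of part (b); part (c) swaps the update rule for Adam.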
Q.3. A group of 20 students studied 0 to 6 hours for an exam; some passed and others failed. The results are given below.
| Student | Hours studied - x | Result (0 – fail, 1 – pass) - y |
| - | --- | --- |
| 1 | 0.5 | 0 |
| 2 | 0.75 | 0 |
| 3 | 1.00 | 0 |
| 4 | 1.25 | 0 |
| 5 | 1.50 | 0 |
| 6 | 1.75 | 0 |
| 7 | 1.75 | 1 |
| 8 | 2.00 | 0 |
| 9 | 2.25 | 1 |
| 10 | 2.50 | 0 |
| 11 | 2.75 | 1 |
| 12 | 3.00 | 0 |
| 13 | 3.25 | 1 |
| 14 | 3.50 | 0 |
| 15 | 4.00 | 1 |
| 16 | 4.25 | 1 |
| 17 | 4.50 | 1 |
| 18 | 4.75 | 1 |
| 19 | 5.00 | 1 |
| 20 | 5.50 | 1 |
(a) Write Python code from scratch to build a neural network model that determines the optimal linear hypothesis using linear regression to predict whether a student passes based on the number of hours studied, using the stochastic gradient descent and Adam optimization algorithms with α = 0.01, randomly initialized weights and bias, and epochs = 10000. Use an appropriate regression loss function.
(b) (i) Write Python code from scratch to determine the optimal logistic hypothesis using logistic regression to predict whether a student passes based on the number of hours studied, using stochastic gradient descent with α = 0.01, randomly initialized weights and bias, and epochs = 40000; loss function: binary cross-entropy (BCE); threshold value = 0.5. (a) Plot the cost function vs. epoch. (b) Predict the pass or fail result of your designed model on random study hours entered by you. (ii) Repeat the part (i) analysis with the Dice loss function.
(iii) Repeat the part (i) analysis with the Adam optimization algorithm.
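Part (b)(i) can be sketched as follows. Function names are illustrative, and the demo call uses 4,000 epochs rather than the assignment's 40,000 only because this small dataset converges well before that:

```python
import numpy as np

# Hours studied and pass/fail results for the 20 students
hours = np.array([0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 1.75, 2.0, 2.25, 2.5,
                  2.75, 3.0, 3.25, 3.5, 4.0, 4.25, 4.5, 4.75, 5.0, 5.5])
passed = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_sgd(x, y, alpha=0.01, epochs=40000, seed=0):
    """Logistic regression trained with per-sample SGD on binary cross-entropy."""
    rng = np.random.default_rng(seed)
    w, b = rng.standard_normal(2)        # random initial weight and bias
    losses = []
    for _ in range(epochs):
        total = 0.0
        for xi, yi in zip(x, y):
            p = sigmoid(w * xi + b)
            total -= yi * np.log(p + 1e-12) + (1 - yi) * np.log(1 - p + 1e-12)
            g = p - yi                   # BCE gradient w.r.t. the logit
            w -= alpha * g * xi
            b -= alpha * g
        losses.append(total / len(x))    # mean BCE for this epoch
    return w, b, losses

w, b, losses = logistic_sgd(hours, passed, epochs=4000)

def predict(h, threshold=0.5):           # part (b): pass/fail for entered hours
    return int(sigmoid(w * h + b) >= threshold)
```

Plotting `losses` gives the cost-vs-epoch curve; parts (ii) and (iii) reuse the loop with the Dice loss gradient or Adam updates substituted in.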

Q.4. Build a model to recognize handwritten digits from the MNIST dataset using multinomial logistic regression. Use the Adam optimization algorithm to learn the model with α = 0.01, epochs = 40000, random initial model parameters, and the softmax loss function. (a) Plot the cost function vs. epoch. (b) Predict the digit with your designed model on random test data entered by you. (c) Print the confusion matrix. (d) Calculate classification metrics such as precision, recall, F1-score, and accuracy.
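Loading MNIST is environment-dependent (e.g. via `keras.datasets` or `torchvision`), so the sketch below shows only the core of Q.4 — multinomial logistic regression trained with Adam on the softmax cross-entropy loss — as a function accepting any `(n_samples, n_features)` array; `train_softmax_adam` is an illustrative name:

```python
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)   # shift logits for numerical stability
    e = np.exp(Z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax_adam(X, y, n_classes, alpha=0.01, epochs=500, seed=0):
    """Multinomial logistic regression trained with Adam on softmax cross-entropy."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = 0.01 * rng.standard_normal((d, n_classes))   # random initial parameters
    b = np.zeros(n_classes)
    T = np.eye(n_classes)[y]                         # one-hot targets
    # Adam state: first and second moment estimates for W and b
    mW = np.zeros_like(W); vW = np.zeros_like(W)
    mb = np.zeros_like(b); vb = np.zeros_like(b)
    b1, b2, eps = 0.9, 0.999, 1e-8
    losses = []
    for t in range(1, epochs + 1):
        P = softmax(X @ W + b)
        losses.append(-np.mean(np.sum(T * np.log(P + 1e-12), axis=1)))
        gW = X.T @ (P - T) / n                       # cross-entropy gradients
        gb = np.mean(P - T, axis=0)
        for g, m, v, p in ((gW, mW, vW, W), (gb, mb, vb, b)):
            m[...] = b1 * m + (1 - b1) * g           # update moments in place
            v[...] = b2 * v + (1 - b2) * g * g
            # Bias-corrected Adam step
            p[...] -= alpha * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    return W, b, losses
```

Applied to flattened MNIST images, `X` has shape (60000, 784) with `n_classes=10`; the confusion matrix and metrics of parts (c) and (d) follow from comparing `np.argmax(softmax(X @ W + b), axis=1)` with the labels.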
Q.5. Build a model to discriminate the red, green, and blue points in the 2-dimensional space shown below.

The input data and targets are as follows:


X=np.array([[-0.1, 1.4],
[-0.5, 0.2],
[ 1.3, 0.9],
[-0.6, 0.4],
[-1.6, 0.2],
[ 0.2, 0.2],
[-0.3,-0.4],
[ 0.7,-0.8],
[ 1.1,-1.5],
[-1.0, 0.9],
[-0.5, 1.5],
[-1.3,-0.4],
[-1.4,-1.2],
[-0.9,-0.7],
[ 0.4,-1.3],
[-0.4, 0.6],
[ 0.3,-0.5],
[-1.6,-0.7],
[-0.5,-1.4],
[-1.0,-1.4]])

y = np.array([0, 0, 1, 0, 2, 1, 1, 1, 1, 0, 0, 2, 2, 2, 1, 0, 1, 2, 2, 2])
Here, 0 = red, 1 = green and 2 = blue dots.
In other words, given a point in 2 dimensions, x = (x1, x2), predict either red, green, or blue using multinomial logistic regression. (a) Use the gradient descent algorithm to learn the model with α = 0.01, the softmax loss function, and random initial model parameters. (i) Compare the predicted results with the ground truth using a bar chart. (ii) Plot the loss curve. (iii) Print the confusion matrix. (iv) Calculate classification metrics such as precision, recall, F1-score, and accuracy. (v) Visualize the classified data with a scatter plot. (b) Repeat part (a) using the stochastic gradient descent algorithm to learn the model. (c) Repeat part (a) using the Adam optimization algorithm to learn the model.
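Since the data above are fully specified, part (a) can be sketched end to end with full-batch gradient descent on the softmax loss (the iteration count and seed are illustrative choices):

```python
import numpy as np

# Input points and color labels from Q.5 (0 = red, 1 = green, 2 = blue)
X = np.array([[-0.1, 1.4], [-0.5, 0.2], [1.3, 0.9], [-0.6, 0.4], [-1.6, 0.2],
              [0.2, 0.2], [-0.3, -0.4], [0.7, -0.8], [1.1, -1.5], [-1.0, 0.9],
              [-0.5, 1.5], [-1.3, -0.4], [-1.4, -1.2], [-0.9, -0.7], [0.4, -1.3],
              [-0.4, 0.6], [0.3, -0.5], [-1.6, -0.7], [-0.5, -1.4], [-1.0, -1.4]])
y = np.array([0, 0, 1, 0, 2, 1, 1, 1, 1, 0, 0, 2, 2, 2, 1, 0, 1, 2, 2, 2])

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(Z)
    return e / e.sum(axis=1, keepdims=True)

# Full-batch gradient descent on softmax cross-entropy with random init
rng = np.random.default_rng(0)
W = 0.01 * rng.standard_normal((2, 3))
b = np.zeros(3)
T = np.eye(3)[y]                           # one-hot targets
alpha, losses = 0.01, []
for _ in range(20000):
    P = softmax(X @ W + b)
    losses.append(-np.mean(np.sum(T * np.log(P + 1e-12), axis=1)))
    W -= alpha * X.T @ (P - T) / len(X)
    b -= alpha * np.mean(P - T, axis=0)

pred = np.argmax(softmax(X @ W + b), axis=1)

# Confusion matrix: rows = true class, columns = predicted class
conf = np.zeros((3, 3), dtype=int)
for t_, p_ in zip(y, pred):
    conf[t_, p_] += 1

accuracy = np.trace(conf) / conf.sum()
precision = np.diag(conf) / conf.sum(axis=0)   # per-class precision
recall = np.diag(conf) / conf.sum(axis=1)      # per-class recall
f1 = 2 * precision * recall / (precision + recall)
```

The bar chart, loss curve, and scatter plot of parts (i), (ii), and (v) come from plotting `pred` vs. `y`, `losses`, and `X` colored by `pred`; parts (b) and (c) replace the batch update with SGD or Adam.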
