
Lamrin Tech Skills University, Punjab

University School of Engineering & Technology


B.Tech CSE (AI & ML) – 5th Semester
Assignment - 1
UGCS-333 (Deep Learning)

Date: 3rd September 2025


Submission Date: on or before 9th September 2025, 5:00 PM
Maximum Marks: 15

Instructions
• The assignment should be handwritten.

• A scanned copy of the sheets should be submitted in PDF format.

• Submission can be made via e-mail, WhatsApp, or as a hard copy.

• The PDF file should be named with the Course, Name, and Roll No.

• Submissions after the due date will be awarded 0 marks.

Questions
1. Since a single perceptron cannot solve the XOR problem, does it mean the perceptron model is useless? Can non-linear activation functions alone solve XOR without multiple layers? Discuss.
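As a starting point for this question (not a complete answer), the sketch below shows that although no single perceptron separates the XOR points, a two-layer network with ReLU hidden units does. The weights here are hand-picked for illustration, not learned:

```python
import numpy as np

# XOR truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# Hand-picked two-layer network: h = ReLU(X W1 + b1), output = h w2
W1 = np.array([[1, 1], [1, 1]])
b1 = np.array([0, -1])
w2 = np.array([1, -2])

h = np.maximum(0, X @ W1 + b1)  # hidden layer with ReLU
out = h @ w2                    # linear output layer
print(out)                      # [0 1 1 0] — matches XOR
```

The hidden layer maps the four inputs into a space where a single linear unit can separate them, which is exactly what one perceptron alone cannot do.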

2. If you were to design a perfect activation function, what mathematical properties should it possess? Based on this analysis, critically evaluate the suitability of commonly used activation functions such as Sigmoid, Tanh, ReLU, Leaky ReLU, and Softmax.
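For reference while answering, these are the standard definitions of the five functions named above, written as a minimal NumPy sketch (the max-subtraction in softmax is a common numerical-stability convention, not part of the mathematical definition):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    return np.tanh(z)

def relu(z):
    return np.maximum(0, z)

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)

def softmax(z):
    e = np.exp(z - np.max(z))  # subtract max for numerical stability
    return e / e.sum()
```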

3. (a) If we remove all activation functions from a deep neural network, what happens
to the model’s capacity? Why is a network without activation functions unable
to learn complex patterns?
(b) You are building a binary classification model and observe that the output
layer with a sigmoid activation is producing probabilities very close to 0 or 1
(e.g., 0.001 or 0.999), even for uncertain cases. What could be causing this
issue, and how would you modify your activation strategy?
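A hint for part (a): without activations, a stack of layers is just a product of weight matrices, i.e. a single linear map. This toy check (random weights, purely illustrative) demonstrates the collapse:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)

# Three "layers" with no activation functions between them
W1, W2, W3 = (rng.normal(size=(4, 4)) for _ in range(3))
deep = W3 @ (W2 @ (W1 @ x))

# The identical map expressed as one linear layer
W = W3 @ W2 @ W1
shallow = W @ x

print(np.allclose(deep, shallow))  # True: depth adds no capacity here
```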

4. “Feed forward is only about computation, not learning.” Do you agree? Explain
with the help of a simple neural network diagram, showing how inputs are processed
through layers to produce an output.
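To accompany the diagram, a forward pass can be sketched as below. Note the function only reads the parameters and computes an output; nothing is updated, which is the point the quotation is making (the shapes and values here are arbitrary, chosen only for illustration):

```python
import numpy as np

def forward(x, params):
    """One forward pass: pure computation, no weights are changed."""
    W1, b1, W2, b2 = params
    h = np.maximum(0, W1 @ x + b1)   # hidden layer (ReLU)
    return W2 @ h + b2               # output layer (linear)

x = np.array([1.0, 2.0])
params = (np.ones((3, 2)), np.zeros(3), np.ones((1, 3)), np.zeros(1))
print(forward(x, params))  # learning would require a separate backward pass
```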

5. Consider the two neural networks (NNs) shown in Figures 1 and 2, with ReLU activation (ReLU(z) = max{0, z}, ∀z ∈ R, where R denotes the set of real numbers). The connections and their corresponding weights are shown in the figures. The biases at every neuron are set to 0. For what values of p, q, r in Figure 2 are the two NNs equivalent, when x1, x2, x3 are positive?

Figure 1: [network diagram with inputs x1, x2, x3 and labelled edge weights — the weight layout is not recoverable from the text extraction; refer to the original figure]
Figure 2: [diagram — x1, x2, x3 feed the output through weights p, q, r respectively]
