
Department of Electronics & Communication Engg.

National Institute of Technology, Rourkela


MID-SEMESTER EXAMINATION. Spring 2022-23
CLASS: B.Tech (EC & EI) 6th sem. Full Marks: 30
SUBJECT: Introduction to Machine Intelligence Time: 2 hours
SUBJECT CODE: EC3606
Answer all questions. Figures in the right-hand side margin indicate marks.
All parts of a question should be answered in one place.
The question paper contains one page.

1. Answer the following: [1 × 5]


a. Briefly explain two knowledge representation rules.
b. Briefly explain the following terms:
i. Unsupervised learning with example,
ii. Supervised learning with example.
c. Briefly explain the significance of step size/learning rate in the stochastic gradient
algorithm.
d. Briefly discuss batch and online learning in neural networks.
e. Discuss the concepts of overfitting and the ill-posed problem in regression.

2. Discuss the learning algorithm of the single-layer perceptron and establish its convergence.
Also explain the limitations of the single-layer perceptron that motivate the multi-layer
perceptron. [5]
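For orientation, a minimal NumPy sketch of the single-layer perceptron learning rule on a linearly separable problem; the AND data, bipolar targets, learning rate and stopping rule are assumptions made for the illustration:

import numpy as np

# Bipolar AND problem: linearly separable, so the perceptron rule converges.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([-1, -1, -1, 1], dtype=float)

w = np.zeros(2)   # weight vector
b = 0.0           # bias
eta = 0.1         # learning rate

for epoch in range(100):
    errors = 0
    for x_i, d_i in zip(X, d):
        y_i = 1.0 if (w @ x_i + b) >= 0 else -1.0
        if y_i != d_i:                 # update only on a misclassified sample
            w += eta * d_i * x_i
            b += eta * d_i
            errors += 1
    if errors == 0:                    # no mistakes in a full pass: converged
        break

print(w, b, epoch)

On a non-separable problem such as XOR, no error-free pass exists; this is the classical limitation that motivates the multi-layer perceptron.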

3. Answer the following: [2.5+2.5]


a. Find the optimal solution of the following cost function and explain the
significance of the parameter 𝜆 in the cost function

𝐽(𝑤) = ‖𝑦 − 𝑋𝑤‖² + 𝜆‖𝑤‖²

where 𝑋 ∈ ℝ^(𝑁×𝑑), 𝑦 ∈ ℝ^𝑁 and 𝑤 ∈ ℝ^𝑑 are the regressor, response and regression
parameter, respectively.
b. Develop the stochastic gradient algorithm for the above cost function.
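As an illustration of parts (a) and (b), one possible NumPy sketch; the synthetic data, the step size and the per-sample scaling of 𝜆 are assumptions made for the sketch rather than specifications of the question:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (assumed): N samples, d features.
N, d = 200, 5
X = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=N)

lam = 0.1        # regularization parameter (lambda)
eta = 0.01       # step size / learning rate
w = np.zeros(d)  # regression parameter, initialized at zero

# Part (a): setting the gradient of J(w) to zero gives the ridge solution
# w* = (X^T X + lam * I)^{-1} X^T y.
w_star = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Part (b): stochastic gradient descent, one randomly drawn sample per update.
# lambda is scaled by 1/N so that the per-sample gradient below is an unbiased
# estimate of (1/N) times the gradient of the full cost.
for it in range(5000):
    i = rng.integers(N)
    e_i = y[i] - X[i] @ w
    grad_i = -2.0 * e_i * X[i] + 2.0 * (lam / N) * w
    w -= eta * grad_i

print(np.linalg.norm(w - w_star))   # small: the SGD iterate approaches the closed-form solution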

4. Develop an optimal Bayesian classifier for the data {0, 1}, whose probabilities of generation
are 𝑝(0) = 0.4 and 𝑝(1) = 0.6, in the presence of Gaussian noise 𝒩(0, 𝜎²). Show that, as the
distance between the points 0 and 1 increases, the probability of misclassification
decreases. [5]
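A small numerical check of this claim; the noise level 𝜎 and the placement of the second symbol at a distance Δ from 0 (so that Δ = 1 recovers the stated problem) are assumptions of the sketch:

import numpy as np
from scipy.stats import norm

p0, p1 = 0.4, 0.6   # prior probabilities of generating 0 and the second symbol
sigma = 1.0         # assumed noise standard deviation

def bayes_error(delta):
    # MAP rule: decide the symbol at delta when p1*N(r; delta, sigma^2) > p0*N(r; 0, sigma^2),
    # which reduces to r > t with threshold t = delta/2 + (sigma^2/delta)*ln(p0/p1).
    t = delta / 2 + (sigma**2 / delta) * np.log(p0 / p1)
    perr_given_0 = 1.0 - norm.cdf(t, loc=0.0, scale=sigma)   # 0 pushed above the threshold
    perr_given_1 = norm.cdf(t, loc=delta, scale=sigma)       # delta pushed below the threshold
    return p0 * perr_given_0 + p1 * perr_given_1

for delta in [0.5, 1.0, 2.0, 4.0]:
    print(delta, bayes_error(delta))   # the misclassification probability shrinks as delta grows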

5. Develop the learning algorithm for a multilayer perceptron of type [2 × 4 × 1]. In this
algorithm, explain the parameter initialization, activation function, forward- and
backward-pass. [5]
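One compact NumPy sketch of such a [2 × 4 × 1] network trained by backpropagation; the XOR data, the sigmoid activations, the learning rate and the initialization scheme are choices assumed for the illustration:

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# XOR targets: not solvable by a single-layer perceptron, solvable by a [2 x 4 x 1] MLP.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([[0.0], [1.0], [1.0], [0.0]])

# Parameter initialization: small random weights, zero biases.
W1 = 0.5 * rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = 0.5 * rng.normal(size=(4, 1)); b2 = np.zeros(1)
eta = 0.5   # learning rate

for epoch in range(20000):
    # Forward pass
    a1 = sigmoid(X @ W1 + b1)      # hidden activations, shape (4 samples, 4 units)
    y = sigmoid(a1 @ W2 + b2)      # network outputs, shape (4, 1)

    # Backward pass for the squared-error cost 0.5 * ||d - y||^2
    delta2 = (y - d) * y * (1 - y)              # output-layer local gradients
    delta1 = (delta2 @ W2.T) * a1 * (1 - a1)    # hidden-layer local gradients

    # Gradient-descent parameter updates
    W2 -= eta * a1.T @ delta2; b2 -= eta * delta2.sum(axis=0)
    W1 -= eta * X.T @ delta1;  b1 -= eta * delta1.sum(axis=0)

print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))   # should approach the XOR targets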

6. Develop the backpropagation algorithm for the following architecture. Use kernel variables
𝒲^(1) and 𝒲^(2) for the first and second convolution layers. The number of nodes in the hidden
layer of the fully connected network is 𝑁ₕ. The input tensor is a vector of dimension 𝑁. There is
one neuron in the output layer of the fully connected network. You may use an activation
function of your choice. Assume a least-squares error cost function for the classification.

[5]
[Figure: CNN classifier architecture. Input 𝒙 ∈ ℝ^(𝑀×1) → Conv. → ReLU → Conv. → Fully Connected Classifier]
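For orientation, the gradient flow through this pipeline can be sketched as follows, using 1-D cross-correlation convolutions and a differentiable activation 𝜑 in the fully connected part; all symbols other than 𝒲^(1), 𝒲^(2), 𝑁ₕ and the input 𝒙 are assumptions of the sketch (𝑢ₕ and 𝑢ₒ denote the hidden and output pre-activations, 𝑑 the target):

\[
\begin{aligned}
\text{Forward:}\quad
& z^{(1)}_n = \sum_k \mathcal{W}^{(1)}_k x_{n+k}, \qquad
  a^{(1)} = \operatorname{ReLU}\!\big(z^{(1)}\big), \qquad
  z^{(2)}_n = \sum_k \mathcal{W}^{(2)}_k a^{(1)}_{n+k},\\
& h = \varphi\!\big(W_h z^{(2)} + b_h\big) \in \mathbb{R}^{N_h}, \qquad
  \hat{y} = \varphi\!\big(w_o^{\top} h + b_o\big), \qquad
  J = \tfrac{1}{2}\,(d - \hat{y})^2.\\[6pt]
\text{Backward:}\quad
& \delta_o = (\hat{y} - d)\,\varphi'(u_o), \qquad
  \frac{\partial J}{\partial w_o} = \delta_o\, h, \qquad
  \delta_h = (w_o\,\delta_o) \odot \varphi'(u_h),\\
& \frac{\partial J}{\partial W_h} = \delta_h\,\big(z^{(2)}\big)^{\top}, \qquad
  \delta^{(2)} = W_h^{\top}\,\delta_h, \qquad
  \frac{\partial J}{\partial \mathcal{W}^{(2)}_k} = \sum_n \delta^{(2)}_n\, a^{(1)}_{n+k},\\
& \delta^{(1)}_m = \Big(\sum_k \mathcal{W}^{(2)}_k\, \delta^{(2)}_{m-k}\Big)\,
  \mathbf{1}\!\left[z^{(1)}_m > 0\right], \qquad
  \frac{\partial J}{\partial \mathcal{W}^{(1)}_k} = \sum_n \delta^{(1)}_n\, x_{n+k}.
\end{aligned}
\]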
