SELFMADEMCQASCST2

The document provides a comprehensive overview of Artificial Neural Networks (ANNs) and their differences from Biological Neural Networks (BNNs), including key concepts such as learning rules, activation functions, and types of neural networks. It covers various applications of ANNs in fields like healthcare, finance, and autonomous vehicles, as well as advanced topics like optimization and problem-solving techniques. The content includes multiple-choice questions and answers to reinforce understanding of the material.


1. Artificial Neural Networks (ANN) vs. Biological Neural Networks (BNN)

1. Which of the following is a key difference between ANNs and BNNs?


a) ANNs have slower processing speed than BNNs.
b) BNNs store information in contiguous memory locations.
c) ANNs have fault tolerance, while BNNs do not.
d) BNNs process information in milliseconds, while ANNs process in nanoseconds.
Answer: d)

2. In BNNs, information is stored in:


a) Contiguous memory locations
b) Synaptic strength
c) External hard drives
d) Binary code
Answer: b)

3. ANNs lack fault tolerance because:


a) They cannot recover corrupted information.
b) They are biologically inspired.
c) They use parallel processing.
d) They have no synaptic connections.
Answer: a)

2. Basic Models of ANN

4. A single-layer feed-forward network has:


a) Multiple hidden layers
b) Only input and output layers
c) Recurrent connections
d) No activation function
Answer: b)

5. In a multilayer feed-forward network, the layer between input and output is called:
a) Processing layer
b) Hidden layer
c) Feedback layer
d) Convolutional layer
Answer: b)

6. A recurrent neural network (RNN) is used for:


a) Static data classification
b) Sequential data processing
c) Image recognition only
d) Linear regression
Answer: b)

7. Maxnet is a type of neural network used for:


a) Reinforcement learning
b) Competitive learning (winner-takes-all)
c) Supervised classification
d) Unsupervised clustering
Answer: b)
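The winner-takes-all competition named in question 7 can be sketched as an iterative mutual-inhibition loop. The inhibition constant `eps` below is an illustrative assumption, not a value fixed by the Maxnet definition.

```python
# Minimal Maxnet sketch: every unit inhibits every other unit until
# only the unit with the largest initial activation survives
# (winner-takes-all). eps is an illustrative inhibition constant.

def maxnet(activations, eps=0.1):
    a = list(activations)
    # Iterate until at most one unit remains positive.
    while sum(1 for v in a if v > 0) > 1:
        total = sum(a)
        # Each unit subtracts eps times the sum of the other units' outputs,
        # clipped at zero (a ReLU-style transfer function).
        a = [max(0.0, v - eps * (total - v)) for v in a]
    return a.index(max(a))  # index of the winning unit

print(maxnet([0.2, 0.5, 0.9, 0.4]))  # unit 2 wins
```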

3. Learning Rules

8. Supervised learning requires:


a) Only input data
b) Labeled input-output pairs
c) Random weight initialization
d) No feedback mechanism
Answer: b)

9. Unsupervised learning is used in:


a) Classification with labeled data
b) Clustering without predefined labels
c) Reinforcement feedback
d) Regression tasks
Answer: b)

10. Reinforcement learning involves:


a) Learning from labeled datasets
b) Trial-and-error with rewards/penalties
c) Only backpropagation
d) Static weight adjustments
Answer: b)

11. Incremental training updates weights:


a) After processing all training data
b) Only at the final epoch
c) After each training sample
d) Never
Answer: c)

12. Batch mode training updates weights:


a) After each sample
b) Only once at the end
c) After processing the entire dataset
d) Randomly
Answer: c)
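The contrast in questions 11 and 12 — update after each sample versus after the whole dataset — can be sketched on a one-weight linear model. The data, learning rate, and model are illustrative assumptions.

```python
# Sketch: incremental (per-sample) vs. batch weight updates for a
# one-weight model y = w * x with squared error.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, target), true w = 2
lr = 0.05

def incremental_epoch(w):
    # Incremental mode: the weight changes after EVERY training sample.
    for x, t in data:
        grad = 2 * (w * x - t) * x
        w -= lr * grad
    return w

def batch_epoch(w):
    # Batch mode: gradients are accumulated over the WHOLE dataset,
    # then a single update is applied.
    grad = sum(2 * (w * x - t) * x for x, t in data) / len(data)
    return w - lr * grad

w_inc, w_bat = 0.0, 0.0
for _ in range(50):
    w_inc = incremental_epoch(w_inc)
    w_bat = batch_epoch(w_bat)
print(round(w_inc, 3), round(w_bat, 3))  # both approach the true weight 2.0
```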

4. Activation Functions

13. The binary step function outputs:


a) Continuous values between 0 and 1
b) 0 or 1 based on threshold
c) -1 or +1
d) Linear values
Answer: b)
14. Sigmoidal functions are preferred in backpropagation because they are:
a) Linear
b) Non-differentiable
c) Differentiable and nonlinear
d) Only used in CNNs
Answer: c)

15. ReLU (Rectified Linear Unit) outputs:


a) Negative values only
b) Zero for negative inputs, linear for positive
c) Sigmoidal probabilities
d) Binary values
Answer: b)
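The three activation functions in questions 13–15 can be written directly from their definitions; this is a minimal sketch, with the step threshold of 0 as an illustrative default.

```python
import math

# The three activations discussed above.

def binary_step(x, threshold=0.0):
    # Outputs 0 or 1 depending on the threshold (not differentiable).
    return 1 if x >= threshold else 0

def sigmoid(x):
    # Smooth, differentiable, nonlinear: suitable for backpropagation.
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Zero for negative inputs, identity for positive inputs.
    return max(0.0, x)

print(binary_step(-0.3), binary_step(0.7))  # 0 1
print(sigmoid(0.0))                          # 0.5
print(relu(-2.0), relu(3.5))                 # 0.0 3.5
```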

5. Types of Neural Networks

16. CNNs are best suited for:


a) Time-series forecasting
b) Image processing
c) Text classification
d) Reinforcement learning
Answer: b)

17. RNNs are used for:


a) Static data
b) Sequential data (e.g., speech, text)
c) Only image recognition
d) Binary classification
Answer: b)

18. A GAN consists of:


a) Two competing networks (Generator & Discriminator)
b) Only a single perceptron
c) A self-organizing map
d) A radial basis function network
Answer: a)

19. RBFN uses which activation function in hidden layers?


a) Sigmoid
b) ReLU
c) Gaussian
d) Binary step
Answer: c)

20. Self-Organizing Maps (SOMs) are used for:


a) Supervised classification
b) Unsupervised clustering and visualization
c) Regression tasks
d) Reinforcement learning
Answer: b)

6. Perceptrons and Learning Algorithms

21. The McCulloch-Pitts neuron uses:


a) Linear regression
b) Threshold logic
c) Only backpropagation
d) No weights
Answer: b)
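The threshold logic from question 21 can be sketched as follows; the AND-gate weights and threshold are an illustrative hand-chosen example, not the only possible configuration.

```python
# McCulloch-Pitts neuron sketch: binary inputs, fixed weights, and a
# hard threshold (threshold logic, no learning).

def mcp_neuron(inputs, weights, threshold):
    # Fires (outputs 1) only if the weighted sum reaches the threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND gate: both inputs must be 1 for the weighted sum to reach 2.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, mcp_neuron((a, b), (1, 1), threshold=2))
```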

22. The Hebbian learning rule states:


a) "Neurons that fire together, wire together."
b) Weights are updated based on error gradients
c) Only inhibitory connections are strengthened
d) No weight updates occur
Answer: a)
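"Fire together, wire together" corresponds to the update Δw = η·x·y, where x and y are the pre- and post-synaptic activities. A minimal sketch, with an illustrative learning rate η:

```python
# Hebbian update sketch: the weight grows only when both the
# pre-synaptic activity x and post-synaptic activity y are active.
# The learning rate eta is an illustrative value.

def hebbian_update(w, x, y, eta=0.1):
    return w + eta * x * y

w = 0.5
# Both neurons fire together: the connection strengthens.
w = hebbian_update(w, x=1.0, y=1.0)
print(round(w, 2))  # 0.6
# One neuron is silent: the weight is unchanged.
w = hebbian_update(w, x=0.0, y=1.0)
print(round(w, 2))  # 0.6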

23. A perceptron can solve:


a) XOR problem
b) Only linearly separable problems
c) All non-linear problems
d) No classification tasks
Answer: b)

24. The XOR problem requires:


a) A single-layer perceptron
b) A multi-layer perceptron (MLP)
c) No hidden layers
d) Only linear activation
Answer: b)

25. Gradient descent minimizes:


a) The accuracy
b) The loss function
c) The number of layers
d) The input size
Answer: b)
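Question 25's point — gradient descent minimizes the loss — can be shown on a toy loss L(w) = (w − 3)², whose minimum sits at w = 3. The learning rate and step count are illustrative.

```python
# Gradient descent sketch on a toy loss L(w) = (w - 3)^2.

def grad(w):
    # dL/dw = 2 * (w - 3)
    return 2.0 * (w - 3.0)

w, lr = 0.0, 0.1
for _ in range(100):
    w -= lr * grad(w)  # step against the gradient to reduce the loss
print(round(w, 4))  # converges to 3.0, the minimizer of the loss
```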

7. Backpropagation & Deep Learning

26. Backpropagation is used to:


a) Initialize weights randomly
b) Adjust weights based on error gradients
c) Only in CNNs
d) Replace activation functions
Answer: b)
27. In MLPs, hidden layers help in:
a) Only increasing computation time
b) Feature extraction and hierarchical learning
c) Reducing the number of inputs
d) Linear transformations only
Answer: b)

28. The vanishing gradient problem occurs when:


a) Gradients become too large
b) Gradients become too small to update weights
c) There are no hidden layers
d) Using ReLU activation
Answer: b)

29. The learning rate in gradient descent controls:


a) The number of epochs
b) The step size for weight updates
c) Only the bias term
d) The activation function
Answer: b)

30. A high learning rate may cause:


a) Slow convergence
b) Oscillations or divergence
c) Perfect accuracy immediately
d) No weight updates
Answer: b)
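Questions 29 and 30 can be demonstrated on the same toy loss L(w) = (w − 3)²: a small learning rate converges, while a learning rate above 1 makes each step overshoot the minimum, so the iterates oscillate with growing amplitude and diverge. The two rates chosen are illustrative.

```python
# Sketch: effect of the learning rate on gradient descent for
# L(w) = (w - 3)^2. With lr = 1.1 each update multiplies the error
# (w - 3) by -1.2, so the iterates oscillate and diverge.

def step(w, lr):
    return w - lr * 2.0 * (w - 3.0)

results = {}
for lr in (0.1, 1.1):
    w = 0.0
    for _ in range(20):
        w = step(w, lr)
    results[lr] = w
print(results)  # lr=0.1 ends near 3; lr=1.1 has diverged far from it
```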

8. Applications of ANN

31. ANNs in healthcare are used for:


a) Only administrative tasks
b) Disease prediction and medical imaging
c) Only drug manufacturing
d) No real-world applications
Answer: b)

32. In finance, ANNs are used for:


a) Only cash counting
b) Stock prediction and fraud detection
c) Only ledger maintenance
d) No predictive tasks
Answer: b)

33. Speech recognition uses:


a) Only CNNs
b) RNNs/LSTMs for sequential data
c) No neural networks
d) Only binary classifiers
Answer: b)

34. Autonomous vehicles rely on ANNs for:


a) Only engine control
b) Object detection and path planning
c) Only passenger entertainment
d) No real-time processing
Answer: b)

35. GANs are used for:


a) Only classification
b) Generating synthetic data (e.g., images)
c) Only regression
d) No creative tasks
Answer: b)

9. Advanced Topics

36. K-means clustering is a type of:


a) Supervised learning
b) Unsupervised learning
c) Reinforcement learning
d) Semi-supervised learning
Answer: b)
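K-means needs no labels, which is what makes it unsupervised: it alternates between assigning points to the nearest centroid and recomputing centroids as cluster means. A minimal 1-D sketch with k = 2; the data and initial centroids are illustrative.

```python
# One K-means iteration: assignment step, then update step.

def kmeans_step(points, centroids):
    clusters = [[] for _ in centroids]
    for p in points:
        # Assignment: nearest centroid by absolute distance.
        nearest = min(range(len(centroids)),
                      key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    # Update: each centroid moves to the mean of its assigned points.
    return [sum(c) / len(c) if c else centroids[i]
            for i, c in enumerate(clusters)]

points = [1.0, 2.0, 1.5, 8.0, 9.0, 8.5]
centroids = [0.0, 10.0]
for _ in range(5):
    centroids = kmeans_step(points, centroids)
print(centroids)  # [1.5, 8.5] — the means of the two clusters
```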

37. The derivative of the sigmoid function is maximum at:


a) x = 0
b) x → ∞
c) x → -∞
d) x = 1
Answer: a)
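Question 37 follows from the identity σ′(x) = σ(x)·(1 − σ(x)): since σ(0) = 0.5, the derivative peaks at 0.5 × 0.5 = 0.25 at x = 0 and shrinks toward zero in the tails.

```python
import math

# Sigmoid derivative sketch: maximal at x = 0, tiny in the tails
# (the source of the vanishing gradient problem in deep sigmoid nets).

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_deriv(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # sigma'(x) = sigma(x) * (1 - sigma(x))

print(sigmoid_deriv(0.0))  # 0.25, the maximum
print(sigmoid_deriv(5.0))  # much smaller far from zero
```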

38. The chain rule in backpropagation is used to compute:


a) Input data
b) Gradients for weight updates
c) Only the learning rate
d) The number of layers
Answer: b)

39. A saddle point in optimization is where:


a) The gradient is zero but not a minimum/maximum
b) The loss is minimized perfectly
c) The learning rate is too high
d) No training occurs
Answer: a)

40. The role of the input layer in ANN is to:


a) Perform complex computations
b) Distribute raw data to the network
c) Apply activation functions
d) Replace hidden layers
Answer: b)

10. Problem Solving & Case Studies

41. The XOR problem is solved using:


a) A single perceptron
b) A multilayer perceptron with hidden layers
c) Only linear regression
d) No neural network
Answer: b)
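Question 41 can be made concrete with a hand-wired MLP: one hidden unit computes OR, another computes AND, and the output fires for "OR but not AND", which is exactly XOR. The weights and thresholds below are chosen by hand for illustration, not learned.

```python
# XOR via a minimal multilayer perceptron with step activations.
# A single-layer perceptron cannot do this: XOR is not linearly separable.

def step(x):
    return 1 if x >= 0 else 0

def xor_mlp(x1, x2):
    h_or = step(x1 + x2 - 0.5)           # fires if at least one input is 1
    h_and = step(x1 + x2 - 1.5)          # fires only if both inputs are 1
    return step(h_or - 2 * h_and - 0.5)  # OR minus AND: the XOR pattern

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_mlp(a, b))  # truth table of XOR
```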

42. In Hebbian learning, if two neurons fire together, their connection weight:
a) Decreases
b) Increases
c) Stays the same
d) Becomes zero
Answer: b)

43. If the learning rate is too small, gradient descent may:


a) Converge too quickly
b) Get stuck in local minima
c) Diverge
d) Skip optimal solutions
Answer: b)

44. A CNN's pooling layer helps in:


a) Increasing spatial dimensions
b) Reducing computation and overfitting
c) Only adding noise
d) Removing all features
Answer: b)
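The dimensionality reduction in question 44 is easy to see in a minimal sketch: 2×2 max pooling with stride 2 keeps only the largest value in each window, halving each spatial dimension. The input feature map is an illustrative example.

```python
# 2x2 max-pooling sketch with stride 2: each output cell keeps the
# largest value in its 2x2 window, reducing computation downstream.

def max_pool_2x2(matrix):
    rows, cols = len(matrix), len(matrix[0])
    return [
        [max(matrix[i][j], matrix[i][j + 1],
             matrix[i + 1][j], matrix[i + 1][j + 1])
         for j in range(0, cols, 2)]
        for i in range(0, rows, 2)
    ]

feature_map = [
    [1, 3, 2, 1],
    [4, 6, 5, 0],
    [7, 2, 9, 8],
    [1, 0, 3, 4],
]
print(max_pool_2x2(feature_map))  # [[6, 5], [7, 9]] — a 4x4 map becomes 2x2
```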

45. The critic in reinforcement learning provides:


a) Only rewards
b) Evaluative feedback (reward/penalty)
c) No feedback
d) Only weights
Answer: b)

11. Miscellaneous

46. The main property of an ANN is its ability to:


a) Store data permanently
b) Learn from data
c) Replace all traditional algorithms
d) Work without weights
Answer: b)

47. The threshold in a neuron determines:


a) Its memory capacity
b) Whether it fires based on input
c) Only its weight values
d) The number of layers
Answer: b)

48. The bias in a neuron allows:


a) Only negative outputs
b) Flexibility in decision boundaries
c) No impact on learning
d) Fixed outputs
Answer: b)

49. The main challenge in training deep networks is:


a) Too few parameters
b) Vanishing/exploding gradients
c) No need for activation functions
d) Linear separability
Answer: b)

50. The output layer in a classification ANN uses:


a) Linear activation
b) Softmax for multi-class
c) No activation
d) Only ReLU
Answer: b)
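The softmax from question 50 can be sketched directly: exponentiate each logit and normalize so the outputs form a probability distribution over the classes. Subtracting the maximum logit first is a standard numerical-stability step; the example logits are illustrative.

```python
import math

# Softmax sketch for a multi-class output layer.

def softmax(logits):
    m = max(logits)  # shift by the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])  # largest logit gets the largest probability
print(abs(sum(probs) - 1.0) < 1e-9)  # the outputs sum to 1
```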
