MACHINE LEARNING L T P C
3 0 0 3
COURSE OBJECTIVES:
• To understand the concepts and mathematical foundations of machine learning and the types of
problems tackled by machine learning.
• To explore the different supervised learning techniques including ensemble methods.
• To learn different aspects of unsupervised learning and reinforcement learning.
• To learn the role of probabilistic methods for machine learning.
• To understand the basic concepts of neural networks.
• To understand the basic concepts of deep learning.
UNIT I INTRODUCTION AND MATHEMATICAL FOUNDATIONS 8
What is Machine Learning? – Need – History – Definitions – Applications – Advantages, Disadvantages and
Challenges – Types of Machine Learning Problems – Mathematical Foundations – Linear Algebra and Analytical
Geometry – Probability and Statistics – Bayesian Conditional Probability – Vector Calculus and Optimization –
Decision Theory – Information Theory
UNIT II SUPERVISED LEARNING 8
Introduction – Discriminative and Generative Models – Linear Regression – Least Squares – Under-fitting /
Over-fitting – Cross-Validation – Lasso Regression – Classification – Logistic Regression – Generalized Linear
Models – Support Vector Machines – Kernel Methods – Instance-based Methods – K-Nearest Neighbors –
Tree-based Methods – Decision Trees – ID3 – CART – Ensemble Methods – Random Forest – Evaluation of
Classification Algorithms
UNIT III UNSUPERVISED LEARNING AND REINFORCEMENT LEARNING 8
Introduction – Clustering Algorithms – K-Means – Hierarchical Clustering – Cluster Validity – Dimensionality
Reduction – Principal Component Analysis – Recommendation Systems – EM Algorithm. Reinforcement Learning –
Elements – Model-based Learning – Temporal Difference Learning
UNIT IV PROBABILISTIC METHODS FOR LEARNING 8
Introduction – Naïve Bayes Algorithm – Maximum Likelihood – Maximum A Posteriori – Bayesian Belief Networks –
Probabilistic Modelling of Problems – Inference in Bayesian Belief Networks – Probability Density Estimation –
Sequence Models – Markov Models – Hidden Markov Models
UNIT V NEURAL NETWORKS 7
Neural Networks – Biological Motivation – Perceptron – Multi-layer Perceptron – Feed-Forward Networks –
Backpropagation – Gradient Descent Optimization – Stochastic Gradient Descent – Error Backpropagation – From
Shallow Networks to Deep Networks – Activation and Loss Functions – Limitations of Machine Learning. Natural
Language Processing – Computer Vision – Speech Recognition – Recommender Systems
UNIT VI DEEP LEARNING 7
Deep Learning – Convolutional Neural Networks – Recurrent Neural Networks – Model Evaluation – Autoencoders
and Generative Models – Deep Generative Models: Variational Autoencoders – Generative Adversarial Networks –
Use Cases
TOTAL: 45 PERIODS
COURSE OUTCOMES:
Upon completion of the course, students will be able to:
CO1: Understand and outline problems for each type of machine learning
CO2: Design a Decision tree and Random forest for an application
CO3: Implement Probabilistic Discriminative and Generative algorithms for an application and analyze the results.
CO4: Use a tool to implement typical Clustering algorithms for different types of applications.
CO5: Design and implement an application using neural networks and identify suitable applications for them.
CO6: Implement an application using deep learning.
REFERENCES
1. Stephen Marsland, “Machine Learning: An Algorithmic Perspective”, Second Edition, Chapman & Hall/CRC,
2014.
2. Kevin Murphy, “Machine Learning: A Probabilistic Perspective”, MIT Press, 2012.
3. Ethem Alpaydin, “Introduction to Machine Learning”, Third Edition, Adaptive Computation and Machine
Learning Series, MIT Press, 2014.
4. Tom M. Mitchell, “Machine Learning”, McGraw Hill Education, 2013.
5. Peter Flach, “Machine Learning: The Art and Science of Algorithms that Make Sense of Data”, First
Edition, Cambridge University Press, 2012.
COs & POs MAPPING
CO   PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12
CO1   1    2    1    3    1    1    -    -    -    -     -     -
CO2   2    3    1    2    1    2    -    -    -    -     -     -
CO3   1    1    2    1    -    2    -    -    -    -     -     -
CO4   2    2    -    -    -    3    -    -    -    -     -     -
CO5   3    3    1    1    1    3    -    -    -    -     -     -
CO6   3    3    1    1    1    3    -    -    -    -     -     -