Fundamentals of Machine Learning - Lecture Notes
Machine Learning (ML) is a branch of artificial intelligence that focuses on building systems that
learn from data.
1. Introduction to Machine Learning:
ML enables systems to learn and make predictions without being explicitly programmed. The main types of ML are:
- Supervised Learning: Learning from labeled data (e.g., classification, regression).
- Unsupervised Learning: Finding patterns in unlabeled data (e.g., clustering).
- Reinforcement Learning: Learning via feedback from actions (e.g., rewards).
2. Key Algorithms:
- Decision Trees: Model decisions with a tree structure based on feature values.
- k-Nearest Neighbors (k-NN): Classifies a point by the majority class among its k nearest
training points.
- Support Vector Machines (SVM): Finds the optimal hyperplane to separate classes.
- Linear Regression: Predicts a continuous output from input features.
- Logistic Regression: Used for binary classification tasks.
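Of the algorithms above, k-NN is simple enough to sketch from scratch. A minimal NumPy version using Euclidean distance and majority vote (the function name and toy data are illustrative, not from any library):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify point x by majority vote among its k nearest training points."""
    # Euclidean distance from x to every training point
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k closest training points
    nearest = np.argsort(dists)[:k]
    # Majority class among those neighbors
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy data: two well-separated clusters labeled 0 and 1
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X, y, np.array([0.15, 0.15])))  # → 0
print(knn_predict(X, y, np.array([1.05, 0.95])))  # → 1
```

Note that k-NN stores the entire training set and does all its work at prediction time, which is why it is often called a "lazy" learner.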
3. Model Evaluation:
- Accuracy, precision, recall, F1 score.
- Confusion matrix for classification tasks.
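All of these metrics fall out of the confusion matrix's four cells (TP, FP, FN, TN). A sketch for the binary case, with made-up toy labels:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 from binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
    tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)                  # of predicted positives, how many were right
    recall = tp / (tp + fn)                     # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Toy example: 4 of 5 predictions correct, one positive missed
acc, prec, rec, f1 = binary_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 0])
print(acc, prec, rec, f1)  # → 0.8 1.0 0.666... 0.8
```

The division by (tp + fp) or (tp + fn) fails when a class is never predicted or never occurs; production libraries handle those edge cases explicitly.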
4. Bias-Variance Tradeoff:
- Bias: Error due to overly simplistic assumptions.
- Variance: Error due to excessive sensitivity to the particular training data.
- Goal: Find a model that generalizes well (low total error).
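One way to see variance concretely is to refit the same model on many freshly sampled training sets and watch how much its prediction at a fixed point swings. A rough sketch (the quadratic target, noise level, and polynomial degrees are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def prediction_spread(degree, n_datasets=200, n_points=20):
    """Fit a polynomial of the given degree on many noisy samples of
    y = x**2 and return the variance of its prediction at x = 0.5."""
    preds = []
    for _ in range(n_datasets):
        x = rng.uniform(-1, 1, n_points)
        y = x**2 + rng.normal(0, 0.1, n_points)  # noisy quadratic target
        coeffs = np.polyfit(x, y, degree)
        preds.append(np.polyval(coeffs, 0.5))
    return np.var(preds)

# The rigid linear model is biased but stable; the flexible high-degree
# model's prediction swings far more between training sets.
print(prediction_spread(1))
print(prediction_spread(12))
```

The linear fit gives roughly the same (wrong) answer every time, while the degree-12 fit chases the noise in each sample; total error is minimized somewhere between the two extremes.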
5. Overfitting and Underfitting:
- Overfitting: Model captures noise and performs poorly on new data.
- Underfitting: Model is too simple to capture underlying patterns.
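The usual symptom of overfitting is a large gap between training error and error on held-out data. A small sketch with polynomial fits (the sine target, noise level, and degrees are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def train_test_mse(degree):
    """Fit a polynomial on noisy training data drawn from y = sin(2x)
    and return (train MSE, test MSE) on held-out data."""
    x_train = rng.uniform(-1, 1, 15)
    y_train = np.sin(2 * x_train) + rng.normal(0, 0.2, 15)
    x_test = rng.uniform(-1, 1, 100)
    y_test = np.sin(2 * x_test) + rng.normal(0, 0.2, 100)
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = lambda x, y: np.mean((np.polyval(coeffs, x) - y) ** 2)
    return mse(x_train, y_train), mse(x_test, y_test)

for degree in (1, 3, 14):
    train_err, test_err = train_test_mse(degree)
    print(f"degree {degree:2d}: train {train_err:.4f}  test {test_err:.4f}")
```

With 15 training points, the degree-14 polynomial can pass through every point, driving training error toward zero while test error explodes; the degree-1 fit underfits, with both errors moderately high.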
6. Loss Functions:
- Mean Squared Error (MSE) for regression.
- Cross-Entropy for classification.
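Both losses are a few lines of NumPy. A sketch of MSE and the binary form of cross-entropy (the eps clipping is a common numerical safeguard, not part of the mathematical definition):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: the average squared residual."""
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """Binary cross-entropy; p_pred holds predicted probabilities of class 1.
    Clipping to [eps, 1 - eps] avoids log(0)."""
    p = np.clip(np.asarray(p_pred, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]))    # → 1/3: one residual of 1
print(binary_cross_entropy([1, 0], [0.9, 0.1]))  # low loss: confident and correct
print(binary_cross_entropy([1, 0], [0.1, 0.9]))  # high loss: confident but wrong
```

Cross-entropy punishes confident wrong predictions very hard (the loss grows without bound as the predicted probability of the true class approaches zero), which is exactly the pressure a classifier needs during training.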
These notes offer a foundational overview for students beginning in ML. Further learning should
include hands-on coding, mathematical foundations, and advanced topics such as deep learning.