Machine Learning Lab (3) Report (21 CP 81)

This lab report discusses tasks completed for a machine learning lab class. The tasks covered linear regression, gradient descent, and stochastic gradient descent: the report analyzed code demonstrating that linear regression is not effective for randomly scattered data, implemented gradient descent for simple linear regression to minimize the mean squared error, and modified that code to handle multivariate linear regression and stochastic gradient descent.


UNIVERSITY OF ENGINEERING AND TECHNOLOGY, TAXILA

LAB REPORT # 03

MACHINE LEARNING (LAB)

NAME : Muhammad Islam ul haq

ROLL NO. : 21-CP-81

SECTION : Alpha
TASKS

Task 1:

Statement:
Download the folder Lab 03 Pre_tasks from Teams, upload the notebook to Google Colab along with the required CSV file, then run and analyze the code and write down your understanding.

Answer:
After uploading and analyzing the code, we came to know that linear regression works well as long as the data points follow a roughly straight-line pattern. Once the data points are scattered at random locations, it becomes difficult to predict values from a fitted line, so linear regression is not a good choice for that kind of data.
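
The pre-task notebook itself is not reproduced in this report, but the effect is easy to demonstrate. Below is a minimal sketch (the data and variable names are invented for illustration, and it assumes scikit-learn is available, as it is by default in Colab): fitting on straight-line data gives an R-squared near 1, while fitting on random scatter gives an R-squared near 0.

import numpy as np
from sklearn.linear_model import LinearRegression

# Straight-line data: the fitted line explains the data almost perfectly
x_line = np.arange(50, dtype=float).reshape(-1, 1)
y_line = 3 * x_line.ravel() + 2
print(LinearRegression().fit(x_line, y_line).score(x_line, y_line))  # R^2 ~ 1.0

# Randomly scattered data: there is no line to find, so R^2 stays near 0
x_rand = np.random.rand(50, 1)
y_rand = np.random.rand(50)
print(LinearRegression().fit(x_rand, y_rand).score(x_rand, y_rand))  # R^2 ~ 0.0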

Task 2:
Statement:
Download the 3_gradient_descent zip file from Teams, extract it, locate the files in the Exercise folder, run the code, and write down your understanding.
Answer:
This code is an implementation of the gradient descent algorithm for simple linear regression. The goal of
the algorithm is to find the best-fitting line (linear relationship) between a single independent variable (x)
and a dependent variable (y). The code iteratively updates the slope (m) and y-intercept (b) of the line to
minimize the mean squared error between the predicted and actual values.
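
The exercise file itself is not copied into this report; the sketch below shows the kind of update it performs, assuming a mean-squared-error cost and made-up sample data:

import numpy as np

def simple_gradient_descent(x, y, learning_rate=0.01, num_iterations=1000):
    m, b = 0.0, 0.0  # slope and y-intercept
    n = len(x)
    for _ in range(num_iterations):
        y_pred = m * x + b
        # Partial derivatives of the mean squared error with respect to m and b
        dm = -(2 / n) * np.sum(x * (y - y_pred))
        db = -(2 / n) * np.sum(y - y_pred)
        m -= learning_rate * dm
        b -= learning_rate * db
    return m, b

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([5, 7, 9, 11, 13], dtype=float)  # generated from y = 2x + 3
print(simple_gradient_descent(x, y))  # approaches (2, 3)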

Task 3:
Multivariate linear regression.
Statement:
Modify the Example 1 code to handle multivariate linear regression. Update the synthetic dataset
to include multiple features.
Code:
import numpy as np
import pandas as pd

def gradient_descent(X, y, learning_rate, num_iterations):
    # Initialize coefficients
    theta = np.zeros(X.shape[1])
    m = len(y)

    for iteration in range(num_iterations):
        # Calculate predictions
        predictions = np.dot(X, theta)

        # Calculate errors
        errors = predictions - y

        # Update coefficients
        gradient = np.dot(X.T, errors) / m
        theta -= learning_rate * gradient

        # Calculate and print the cost
        cost = np.sum(errors**2) / (2 * m)
        print(f"Iteration {iteration + 1}/{num_iterations}, Cost: {cost}")

    return theta

# Load data from CSV file
data = pd.read_csv('/canada_per_capita_income.csv')

# Extract 'year' and 'per_capita_income' columns
X = data[['year']].values
y = data['per_capita_income'].values

# Standardize the feature: raw year values (around 1970-2016) make the
# gradient explode at this learning rate, so rescale to zero mean and
# unit variance before fitting
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Add a column of ones to X for the intercept term
X_b = np.c_[np.ones((X.shape[0], 1)), X]

# Apply gradient descent
learning_rate = 0.01
num_iterations = 10
theta = gradient_descent(X_b, y, learning_rate, num_iterations)

print("Final Coefficients:", theta)

OUTPUT:

Task 4:
Statement:
Modify the Example 1 code to implement stochastic gradient descent. Update the algorithm to update the
coefficients based on a single randomly chosen instance at each iteration.
Code:
import numpy as np

def stochastic_gradient_descent(X, y, learning_rate, num_iterations):
    # Initialize coefficients
    theta = np.zeros(X.shape[1])
    m = len(y)

    for iteration in range(num_iterations):
        # Randomly shuffle the data indices
        indices = np.random.permutation(m)

        for i in indices:
            # Select a single instance
            X_i = X[i:i+1]
            y_i = y[i:i+1]

            # Calculate prediction
            prediction = np.dot(X_i, theta)

            # Calculate error
            error = prediction - y_i

            # Update coefficients based on the single instance
            gradient = np.dot(X_i.T, error)
            theta -= learning_rate * gradient

        # Calculate and print the cost for the entire dataset
        predictions = np.dot(X, theta)
        errors = predictions - y
        cost = np.sum(errors**2) / (2 * m)
        print(f"Iteration {iteration + 1}/{num_iterations}, Cost: {cost}")

    return theta

# Usage example
X = np.random.rand(100, 1)  # Example feature
# Example linear relationship with noise; y is kept 1-D so its shape
# matches the 1-D theta inside the update loop
y = 2 * X[:, 0] + 1 + 0.1 * np.random.randn(100)

# Add a column of ones to X for the intercept term
X_b = np.c_[np.ones((100, 1)), X]

# Apply stochastic gradient descent
learning_rate = 0.01
num_iterations = 100
theta = stochastic_gradient_descent(X_b, y, learning_rate, num_iterations)

print("Final Coefficients:", theta)

OUTPUT: