Question 1(a):
% Course: MATH*2130
% Name: Tran Thanh Thao Vuong (Darnell)
% Student Number: 1221600
clear
clc
% Define given constants
g = 9.81; % Acceleration due to gravity
m = 80; % Mass of the skydiver
v = 50; % Velocity
t = 10; % Time
% Define the function f(cd)
f = @(cd) (g * m / cd) * (1 - exp(-cd * t / m)) - v;
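% Setting f(cd) = 0 asks for the drag coefficient cd at which the model
% velocity (g*m/cd)*(1 - exp(-cd*t/m)) equals v = 50 m/s at t = 10 s.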
% Define the initial interval and the error tolerance
a = 1; % Lower bound for cd
b = 20; % Upper bound for cd
err = 0.01; % Error tolerance for stopping the iteration
nmax = 100; % Maximum number of iterations allowed
% Using Bisection Method to find cd
[c, fc] = bisect(f, a, b, err);
disp(['Bisection Method: c_d = ', num2str(c)]);
% Using False Position Method to find cd
[c, fc] = false_p(f, a, b, err, nmax);
disp(['False Position Method: c_d = ', num2str(c)]);
% Bisection Method Function
function [c, fc] = bisect(f, a, b, err)
% Inputs:
% f - function handle for f(cd)
% a - lower bound of the interval
% b - upper bound of the interval
% err - error tolerance for stopping iteration
% Outputs:
% c - approximate root cd
% fc - function value at c
c_old = a; % Initialize previous midpoint to track the change
while true
% Compute the midpoint c of the interval
c = (a + b) / 2;
fc = f(c); % Evaluate the function at the midpoint
% Check for the convergence (c is within the error tolerance)
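% Note: |c - c_old| equals half the width of the current bracket, so as long
% as the bracket still contains the root, passing this test guarantees that
% the true root lies within err of c.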
if abs(c - c_old) <= err % Convergence condition
break;
end
% Determine the new interval
if f(a) * fc < 0
b = c; % Root is in the left subinterval
else
a = c; % Root is in the right subinterval
end
% Update previous midpoint for the next iteration
c_old = c;
end
end
% False Position Method
function [c, fc] = false_p(f, a, b, err, nmax)
% Inputs:
% f - function handle for f(cd)
% a - lower bound of the interval
% b - upper bound of the interval
% err - error tolerance for stopping iteration
% nmax - Maximum number of iterations
% Outputs:
% c - approximate root cd
% fc - function value at c
% Evaluate the function at the initial interval endpoints
fa = f(a);
fb = f(b);
% Initialize previous approximation of c
c_old = a;
for i = 1:nmax
% Compute the False Position formula
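% (c is the x-intercept of the straight line through (a, f(a)) and (b, f(b)))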
c = b - fb * (b - a) / (fb - fa);
fc = f(c); % Evaluate function at new estimate
% Check for convergence
if abs(c - c_old) <= err
break;
end
% Update previous c approximation
c_old = c;
% Determine the new interval
if fc == 0
return; % Stop execution as the root is found
elseif fa * fc < 0 % Root is in the left subinterval
b = c; % Update upper bound
fb = fc; % Update function value at b
else % Root is in the right subinterval
a = c; % Update lower bound
fa = fc; % Update function value at a
end
end
% Recompute the final estimate and its function value after the loop
c = b - fb * (b - a) / (fb - fa);
fc = f(c); % Keep the returned c and fc consistent
end
Outputs:
Bisection Method: c_d = 12.3462
False Position Method: c_d = 12.3419
Yes, both root approximations have absolute errors of less than err = 0.01. Each method stops only when the change between successive estimates, |c - c_old|, falls below err = 0.01; for bisection this bounds how far c can be from the true root, and false position behaves well here because f(cd) is smooth and changes sign only once on [1, 20]. In addition, the two results differ from each other by only 0.0043, which further supports that both are close to the true root.
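As an optional check (a small sketch, not part of the graded script; it assumes these lines are appended to the end of the script above, before the local function definitions, and uses the illustrative names c_b, c_f, fc_b, fc_f), the two estimates can be recomputed and compared directly:
[c_b, fc_b] = bisect(f, a, b, err); % Bisection estimate and its residual
[c_f, fc_f] = false_p(f, a, b, err, nmax); % False-position estimate and its residual
fprintf('Bisection:      c_d = %.4f, f(c_d) = %.4e\n', c_b, fc_b);
fprintf('False position: c_d = %.4f, f(c_d) = %.4e\n', c_f, fc_f);
fprintf('Difference between the two estimates: %.4e\n', abs(c_b - c_f));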
Question 1(b):
% Course: MATH*2130
% Name: Tran Thanh Thao Vuong (Darnell)
% Student Number: 1221600
clear
clc
% Define the function f(x) = x^10 - 1
f = @(x) x^10 - 1;
% Define the initial interval and error tolerance
a = 0; % Lower bound for x
b = 1.6; % Upper bound for x
err = 0.01; % Error tolerance for stopping iteration
nmax = 100; % Maximum number of iterations for the False Position method
% Using Bisection Method
[c, fc] = bisect(f, a, b, err);
disp(['Bisection Method: Root = ', num2str(c), ', f(c) = ', num2str(fc)]);
% Using False Position Method
[c, fc] = false_p(f, a, b, err, nmax);
disp(['False Position Method: Root = ', num2str(c), ', f(c) = ', num2str(fc)]);
% Bisection Method Function
function [c, fc] = bisect(f, a, b, err)
% Inputs:
% f - function handle for f(x)
% a - lower bound of the interval
% b - upper bound of the interval
% err - error tolerance for stopping iteration
% Outputs:
% c - approximate root of f(x)
% fc - function value at c
c_old = a; % Initialize previous midpoint to track the change
while true
% Compute the midpoint c of the interval
c = (a + b) / 2;
fc = f(c); % Evaluate the function at the midpoint
% Check for the convergence (c is within the error tolerance)
if abs(c - c_old) <= err % Convergence condition
break;
end
% Determine the new interval
if f(a) * fc < 0
b = c; % Root is in the left subinterval
else
a = c; % Root is in the right subinterval
end
% Update previous midpoint for the next iteration
c_old = c;
end
end
% False Position Method
function [c, fc] = false_p(f, a, b, err, nmax)
% Inputs:
% f - function handle for f(x)
% a - lower bound of the interval
% b - upper bound of the interval
% err - error tolerance for stopping iteration
% nmax - Maximum number of iterations
% Outputs:
% c - approximate root of f(x)
% fc - function value at c
% Evaluate the function at the initial interval endpoints
fa = f(a);
fb = f(b);
% Initialize previous value of c
c_old = a;
for i = 1:nmax
% Compute the False Position formula
c = b - fb * (b - a) / (fb - fa);
fc = f(c); % Evaluate function at new estimate
% Check for convergence
if abs(c - c_old) <= err
break;
end
% Update previous c approximation
c_old = c;
% Determine the new interval
if fc == 0
return; % Stop execution as the root is found
elseif fa * fc < 0 % Root is in the left subinterval
b = c; % Update upper bound
fb = fc; % Update function value at b
else % Root is in the right subinterval
a = c; % Update lower bound
fa = fc; % Update function value at a
end
end
% Recompute the final estimate and its function value after the loop
c = b - fb * (b - a) / (fb - fa);
fc = f(c);
end
Outputs:
Bisection Method: Root = 1.1938, f(c) = 4.8767
False Position Method: Root = 0.50985, f(c) = -0.99881
For the Bisection Method, the reported root (1.1938, with f(c) = 4.8767) is actually not within 0.01 of the true root x = 1. The reason is an edge case in the code: at the third iteration the midpoint lands exactly on the root (c = 1.0, so f(c) = 0), the sign test f(a)*fc < 0 is then false, and the lower bound is moved past the root, so the remaining iterations bisect an interval that no longer brackets it. The False Position Method also ends up with an absolute error much larger than 0.01: because f(x) = x^10 - 1 is nearly flat near x = 0, each interpolated estimate moves only about 0.01 to 0.015 to the right, so the stopping test |c - c_old| <= err is satisfied near x ≈ 0.51 while the estimate is still far from the true root.
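To see this concretely, the following optional sketch (not part of the graded script) applies the false-position update by hand to f(x) = x^10 - 1 on [0, 1.6] and prints the first few estimates; each one moves only slightly to the right, which is why the |c - c_old| <= 0.01 test fires long before x = 1:
f = @(x) x.^10 - 1; % Same function and interval as above
a = 0; b = 1.6;
fa = f(a); fb = f(b);
for k = 1:5
    c = b - fb * (b - a) / (fb - fa); % False-position estimate
    fc = f(c);
    fprintf('Iteration %d: c = %.4f, f(c) = %.4f\n', k, c, fc);
    if fa * fc < 0 % Keep the subinterval that still brackets the root
        b = c; fb = fc;
    else
        a = c; fa = fc;
    end
end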
Question 1(c):
% Course: MATH*2130
% Name: Tran Thanh Thao Vuong (Darnell)
% Student Number: 1221600
clear;
clc;
close all;
function [c, fc] = mod_false_p(f, a, b, err, nmax)
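% Modified false position: same idea as false_p above, but when the same
% endpoint is kept for two or more consecutive iterations its stored function
% value is halved (similar to the Illinois modification). This pulls the next
% interpolated point toward the stagnant side and avoids the slow, one-sided
% convergence seen in part (b).
% Inputs:
% f - function handle for f(x)
% a - lower bound of the interval
% b - upper bound of the interval
% err - error tolerance for stopping iteration
% nmax - maximum number of iterations
% Outputs:
% c - approximate root of f(x)
% fc - function value at c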
% Compute function values at endpoints
fa = f(a);
fb = f(b);
% Ensure that the function has opposite signs at a and b
if fa * fb > 0
error('Function values at a and b must have opposite signs.')
end
c_old = a; % Initialize previous c
stuck_a = 0; stuck_b = 0;
for n = 1:nmax
% Compute the new c using the false position formula
c = (a * fb - b * fa) / (fb - fa);
fc = f(c);
% Check for convergence
if abs(fc) < err || abs(c - c_old) < err
return;
end
% Update the interval based on the function sign at c
if fa * fc < 0
b = c;
fb = fc;
stuck_b = 0;
stuck_a = stuck_a + 1;
% If the method is stuck at a, modify fa
if stuck_a >= 2
fa = fa / 2;
end
else
a = c;
fa = fc;
stuck_a = 0;
stuck_b = stuck_b + 1;
% If the method is stuck at b, modify fb
if stuck_b >= 2
fb = fb / 2;
end
end
c_old = c; % Store the previous c for checking convergence
end
% If max iterations reached, return last computed values
warning('Maximum number of iterations reached without full convergence.');
end
% Test the modified false position method
% Define the function
f = @(x) x.^10 - 1;
% Define the interval and parameters
a = 0;
b = 1.6;
err = 0.01;
nmax = 100;
[c, fc] = mod_false_p(f, a, b, err, nmax);
fprintf('Approximate root: %.6f\n', c);
fprintf('Function value at root approximation: %.6e\n', fc);
Outputs:
Approximate root: 0.999374
Function value at root approximation: -6.239464e-03
Question 2:
% Course: MATH*2130
% Name: Tran Thanh Thao Vuong (Darnell)
% Student Number: 1221600
clear; % Clears all workspace variables
clc; % Clears command window
close all; % Closes all figure windows
format long; % Sets numeric output format to long precision
digits(100); % Sets symbolic variable precision to 100 digits
%% Secant Method Function
function [x2, f2] = secant(f, x0, x1, p, nmax)
f0 = double(f(x0)); % Evaluate function at initial guess x0
f1 = double(f(x1)); % Evaluate function at initial guess x1
re = 1; % Initialize relative error
n = 0; % Initialize iteration counter
% Loop until desired precision or max iterations are reached
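% (A relative error below 0.5*10^(-p) corresponds to at least p correct significant figures.)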
while (re >= (0.5) * 10^(-p) && (n < nmax))
if abs(f1 - f0) < eps
fprintf('Division by zero detected in Secant method.\n');
x2 = NaN; f2 = NaN; % Return NaN for both outputs so callers requesting f2 do not error
return;
end
% Update approximation using Secant formula
x2 = x1 - f1 * ((x1 - x0) / (f1 - f0));
f2 = double(f(x2)); % Evaluate function at new approximation
% Shift values for next iteration
x0 = x1;
f0 = f1;
x1 = x2;
f1 = f2;
% Compute relative error
re = abs((x1 - x0) / x1);
n = n + 1; % Increment iteration counter
end
% Check if max iterations were reached
if n == nmax
fprintf('Max iterations reached before convergence to %d significant digits in Secant method.\n', p);
else
fprintf('Secant method converged to %d significant digits after %d iterations.\n', p, n);
end
end
%% Newton Method Function
function [xr, fr] = newton(f, g, x0, p, nmax)
xr = x0; % Initial guess for root
re = 1; % Initialize relative error
n = 0; % Initialize iteration counter
% Loop until desired precision or max iterations are reached
while (re >= (0.5) * 10^(-p) && (n < nmax))
if abs(double(g(xr))) < eps
fprintf('Derivative too small in Newton method.\n');
xr = NaN; fr = NaN; % Return NaN for both outputs so callers requesting fr do not error
return;
end
% Update approximation using Newton's formula
xr = xr - double(f(xr)) / double(g(xr));
fr = double(f(xr)); % Evaluate function at new approximation
% Use the magnitude of the next Newton step, |f(xr)/g(xr)|, as the error estimate for the convergence check
re = abs(double(f(xr)) / double(g(xr)));
n = n + 1; % Increment iteration counter
end
% Check if max iterations were reached
if n == nmax
fprintf('Max iterations reached before convergence to %d significant digits in Newton method.\n', p);
else
fprintf('Newton method converged to %d significant digits after %d iterations.\n', p, n);
end
end
%% Define Function and Derivatives
syms x; % Define symbolic variable x
f(x) = exp(-0.01 * x) + x *(0.1 * x - 100) - x; % Define function f(x)
g(x) = diff(f, x); % First derivative of f(x)
h(x) = diff(g, x); % Second derivative of f(x)
%% Parameters
p = 8; % Number of significant digits for convergence
nmax = 500; % Maximum number of iterations
%% Secant Method Approximations
% Smaller root
x0 = 4.58 * 10^(2); % Initial guess x0 for smaller root
x1 = 4.63 * 10^(2); % Initial guess x1 for smaller root
[xs1, fs1] = secant(f, x0, x1, p, nmax); % Approximate smaller root
fprintf('Approximation of smaller root (Secant): %.8e\n', xs1);
% Larger root
x0 = 4.80 * 10^(2); % Initial guess x0 for larger root
x1 = 5.50 * 10^(2); % Initial guess x1 for larger root
[xs2, fs2] = secant(f, x0, x1, p, nmax); % Approximate larger root
if ~isnan(xs2)
fprintf('Approximation of larger root (Secant): %.8e\n', xs2);
else
fprintf('Failed to approximate larger root using Secant method.\n');
end
% Point of Inflection
x0 = 4.58 * 10^(2); % Initial guess x0 for POI
x1 = 4.63 * 10^(2); % Initial guess x1 for POI
[xs3, fs3] = secant(g, x0, x1, p, nmax); % Approximate point of inflection
fprintf('Approximation of POI (Secant): %.8e\n', xs3);
%% Newton Method Approximations
% Smaller root
x0 = 450; % Initial guess for smaller root
[xn1, fn1] = newton(f, g, x0, p, nmax); % Approximate smaller root
if ~isnan(xn1)
fprintf('Approximation of smaller root (Newton): %.8e\n', xn1);
else
fprintf('Failed to approximate smaller root using Newton method.\n');
end
% Larger root
x0 = 520; % Initial guess for larger root
[xn2, fn2] = newton(f, g, x0, p, nmax); % Approximate larger root
if ~isnan(xn2)
fprintf('Approximation of larger root (Newton): %.8e\n', xn2);
else
fprintf('Failed to approximate larger root using Newton method.\n');
end
Outputs:
Max iterations reached before convergence to 8 significant digits in Secant method.
Approximation of smaller root (Secant): 4.62767202e+02
Secant method converged to 8 significant digits after 13 iterations.
Approximation of larger root (Secant): 1.01000000e+03
Secant method converged to 8 significant digits after 3 iterations.
Approximation of POI (Secant): 5.05000320e+02
Newton method converged to 8 significant digits after 14 iterations.
Approximation of smaller root (Newton): 9.90010698e-03
Newton method converged to 8 significant digits after 9 iterations.
Approximation of larger root (Newton): 1.01000000e+03