Finance Time Series Analysis

The document summarizes a time series analysis of Brent Spot Price data conducted in four stages: model specification, parameter estimation, model checking, and forecasting. Based on the analysis, the ARMA(2,1) model was identified as the best fitting model for the data. Model checking showed that residuals of both the ARMA(2,1) and AR(1) models were independently distributed. Forecasting accuracy measures also indicated that the ARMA(2,1) model produced more accurate forecasts than the AR(1) model.

Uploaded by

Radhwen Ahmed

Time Series Analysis

Project Report

Presented By:
Rim Moalla
Nour Fourati
Rayda Mallek
May Boulaares Elharzi
(Senior Finance/BA)

Submission Date: 20/01/2023

I. Introduction
This report presents the implementation of the Box-Jenkins method on the “Brent Spot
Price” dataset, which follows a four-stage process:
1. Model Specification
2. Parameter Estimation
3. Model Checking
4. Forecasting

II. Model Identification


1. Checking Stationarity and Seasonality

◆ The initial data’s plot suggests a non-constant mean with time-dependent variance.
◆ The graph displays low prices during the first period, but starting from 2005 it shows a significant increase with high fluctuations.

➔ Our initial data is not stationary.


➔ It is essential to transform it into a stationary form prior to analysis.

● The first difference’s plot shows a significant transformation of our initial data:
○ The previous clear trend is absent.
○ The mean and variance are stabilized; yet, we can observe a slight drop in the
mean in the second half of our period, with a few outliers present.
➔ Visually, it is safe to conclude that our data is stationary.
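The differencing step described above can be sketched in a few lines. This is a minimal NumPy illustration on a synthetic random walk standing in for the price series; the data are simulated, not the actual Brent dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic random-walk series standing in for the Brent Spot Price
# (illustrative only; the actual dataset is not reproduced here).
prices = np.cumsum(rng.normal(0.1, 1.0, size=396))

# First difference: y_t - y_{t-1}, which removes the stochastic trend.
diff = np.diff(prices)

print(len(prices), len(diff))  # 396 395
```

Differencing once costs one observation, which is why ACF/PACF bands later use a sample size near the original 396.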

● Implementing the necessary stationarity tests leads to a firmer conclusion about our data’s stationarity:

○ Unit Root Tests:

For both tests, ADF and PP, we have:

H0: One unit root (i.e., non-stationarity)
H1: No unit root (i.e., stationarity)
➢ The results show both p-values = 0.01 < 0.05.
➢ We reject H0 => stationary data.

○ Stationarity Test:

For KPSS, we have:


H0: Stationary
H1: Not Stationary
➢ The results show a p-value = 0.1 > 0.05
➢ We fail to reject H0 => Stationary data.

⇒ According to the three tests coupled with the first difference’s plot, it is
safe to assume that our data is stationary.
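The ADF and PP statistics above come from a statistics package; as an illustration of the underlying idea, here is a minimal sketch of the simplest Dickey-Fuller regression (no lag augmentation), Δy_t = α + β·y_{t−1} + ε_t, testing H0: β = 0. The data and function name are illustrative, not the report’s:

```python
import numpy as np

def dickey_fuller_tstat(y):
    """t-statistic on beta in: diff(y)_t = alpha + beta * y_{t-1} + e_t.
    Large negative values reject the unit-root null H0: beta = 0."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=500))   # unit-root series
stationary = np.diff(walk)               # its first difference

print(round(dickey_fuller_tstat(walk), 2), round(dickey_fuller_tstat(stationary), 2))
```

Note that this t-statistic must be compared against Dickey-Fuller critical values (roughly −2.87 at the 5% level with a constant), not the usual Student-t table; packaged ADF implementations also add lagged-difference terms.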

2. Choosing model specification


Significance bands for the ACF/PACF: [−2/√396; +2/√396], where 396 is the sample size.

● The ACF plot cuts off after lag 1; the remaining autocorrelations fall within the interval (dashed lines).
=> We can assume that an MA(1) model is appropriate.

● The PACF plot shows only one significant spike, at lag 1; all the other spikes fall within the interval (dashed lines).
=> We can assume that an AR(1) model is appropriate.
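The sample ACF and the ±2/√n band used above can be computed directly. A minimal sketch on a synthetic MA(1)-like series with θ = 0.35 (the report’s MA(1) point estimate, reused here only for illustration):

```python
import numpy as np

def acf(x, nlags):
    """Sample autocorrelations r_1..r_nlags of a 1-D series."""
    x = np.asarray(x, float) - np.mean(x)
    denom = x @ x
    return np.array([x[:-k] @ x[k:] / denom for k in range(1, nlags + 1)])

rng = np.random.default_rng(2)
# Synthetic MA(1)-style series: only the lag-1 autocorrelation
# should stand out from the significance band.
e = rng.normal(size=396)
y = e[1:] + 0.35 * e[:-1]

r = acf(y, 12)
band = 2 / np.sqrt(len(y))   # the +/- 2/sqrt(n) band from the report
print(abs(r[0]) > band)      # True: lag-1 spike is significant
```

For an MA(1) process the theoretical lag-1 autocorrelation is θ/(1 + θ²) ≈ 0.31 here, comfortably outside the ≈ 0.10 band, which is exactly the “cuts off after lag 1” pattern described above.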

III. Parameter Estimation

(standard errors in parentheses)

          AR(1)     MA(1)     ARMA(1,1)  ARMA(1,7)  ARMA(2,1)  ARMA(2,7)
a0         0.09      0.09      0.09       0.10       0.09       0.10
          (0.36)    (0.30)    (0.34)     (0.17)     (0.19)     (0.18)
ɸ1         0.37                0.27       0.91       1.28      -0.09
          (0.04)              (0.11)     (0.13)     (0.10)     (0.19)
ɸ2                                                  -0.39       0.78
                                                    (0.04)     (0.16)
𝛉1                   0.35      0.12      -0.52      -0.89       0.49
                    (0.04)    (0.11)     (0.14)     (0.10)     (0.20)
𝛉2                                       -0.24                 -0.63
                                         (0.07)                (0.24)
𝛉3                                       -0.15                 -0.34
                                         (0.06)                (0.10)
𝛉4                                       -0.06                 -0.20
                                         (0.06)                (0.06)
𝛉5                                        0.06                 -0.005
                                         (0.05)                (0.08)
𝛉6                                       -0.059                -0.01
                                         (0.06)                (0.08)
𝛉7                                        0.05                 -0.02
                                         (0.063)               (0.07)
AIC     2308.23   2312.03   2309.04    2310.62    2304.04    2309.81
Q(1)       0.25      0.65      0.002      0.00       0.00       0.00
Q(12)     12.19     16.06     11.09       3.05       6.60       4.56

● Based on the models’ AIC values, we can consider ARMA(2,1) and AR(1) as our
candidate models, as they show the lowest AICs: 2304.04 and 2308.23,
respectively.
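As an illustration of how such an AIC is obtained, here is a sketch of a conditional least-squares AR(1) fit with a Gaussian AIC, run on simulated data using values close to the report’s AR(1) estimates (a0 = 0.09, ɸ1 = 0.37). The resulting AIC will not match the table above, since the data differ:

```python
import numpy as np

def fit_ar1_aic(y):
    """Conditional least-squares AR(1) fit: y_t = a0 + phi * y_{t-1} + e_t,
    with Gaussian AIC = 2k - 2*logL (k = 3: a0, phi, sigma^2)."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    resid = y[1:] - X @ coef
    n = len(resid)
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return coef, 2 * 3 - 2 * loglik

# Simulate an AR(1) with parameters near the report's estimates.
rng = np.random.default_rng(3)
y = np.zeros(1000)
for t in range(1, 1000):
    y[t] = 0.09 + 0.37 * y[t - 1] + rng.normal()

(a0, phi), aic = fit_ar1_aic(y)
print(round(phi, 2))
```

Lower AIC is better; it rewards fit (the log-likelihood) while penalizing each extra parameter, which is why the heavily parameterized ARMA(1,7) and ARMA(2,7) do not win despite fitting more flexibly.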

IV. Model Checking
To examine the independence of the residuals of the estimated models, we base our
conclusions on the Ljung-Box test, whose hypotheses are:

H0: The residuals are independently distributed.
H1: The residuals are not independently distributed (i.e., they exhibit serial
correlation).

                     AR(1)     ARMA(2,1)
p-value (lag = 1)    0.6169    0.9976
p-value (lag = 12)   0.4303    0.8828

Conclusion: all p-values are greater than 0.05 for both models:
➢ Fail to reject H0.

➔ Both models’ residuals are independent.
➔ The candidate models conform to the specifications of a stationary univariate process.
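The Ljung-Box statistic behind these p-values can be sketched directly. This pure-NumPy illustration compares Q against the tabulated chi-square 5% critical value for 12 degrees of freedom (about 21.03) rather than computing an exact p-value; the data are synthetic:

```python
import numpy as np

def ljung_box_q(resid, lags):
    """Ljung-Box Q statistic: n(n+2) * sum_k rho_k^2 / (n - k), k = 1..lags.
    Under H0 (independent residuals), Q ~ chi-square with `lags` dof."""
    r = np.asarray(resid, float) - np.mean(resid)
    n = len(r)
    denom = r @ r
    rho = np.array([r[:-k] @ r[k:] / denom for k in range(1, lags + 1)])
    return n * (n + 2) * np.sum(rho**2 / (n - np.arange(1, lags + 1)))

rng = np.random.default_rng(4)

# Strongly autocorrelated "residuals": Q blows far past the chi-square
# 5% critical value for 12 dof (about 21.03), so H0 is rejected.
ar = np.zeros(396)
for t in range(1, 396):
    ar[t] = 0.9 * ar[t - 1] + rng.normal()

q = ljung_box_q(ar, 12)
print(q > 21.03)  # True: serial correlation detected
```

For well-specified models, as in the table above, Q stays below the critical value (equivalently, the p-value exceeds 0.05) and H0 is not rejected.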

V. Forecasting
1. Forecasts Characteristics
We based our forecasts on 50 observations held out from our total sample of 396 observations.

           Forecasted AR(1)   Forecasted ARMA(2,1)
Mean       -0.095             -0.033
Variance    5.095              5.697
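For the AR(1) case, multi-step forecasts follow a simple recursion, ŷ_{t+h} = a0 + ɸ1·ŷ_{t+h−1}, decaying toward the unconditional mean a0/(1 − ɸ1). A sketch using the report’s estimated AR(1) parameters (a0 = 0.09, ɸ1 = 0.37); the starting value is hypothetical:

```python
import numpy as np

def ar1_forecast(last_value, a0, phi, steps):
    """h-step-ahead AR(1) forecasts: yhat_{t+h} = a0 + phi * yhat_{t+h-1}.
    As h grows, forecasts decay toward the unconditional mean a0/(1-phi)."""
    out = []
    y = last_value
    for _ in range(steps):
        y = a0 + phi * y
        out.append(y)
    return np.array(out)

# Report's AR(1) estimates; last_value = 2.0 is a made-up starting point.
f = ar1_forecast(last_value=2.0, a0=0.09, phi=0.37, steps=50)
print(round(f[-1], 3), round(0.09 / (1 - 0.37), 3))  # 0.143 0.143
```

With |ɸ1| < 1 the influence of the starting value dies out geometrically, so by the 50th step the forecast has effectively converged to the unconditional mean.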

2. Forecasts’ Accuracy

● Almost all of the forecast accuracy measures (all except RMSE) take lower
values for the ARMA(2,1) model than for the AR(1) model.
➔ It is safe to conclude that the ARMA(2,1) model produces the more accurate forecasts and best fits our data.
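The accuracy measures compared above are not reproduced in the text; typical choices include the mean error (ME), mean absolute error (MAE), and RMSE. A minimal sketch with hypothetical numbers, purely to show how such measures are computed:

```python
import numpy as np

def accuracy(actual, forecast):
    """Common forecast-accuracy measures used to compare candidate models."""
    err = np.asarray(actual, float) - np.asarray(forecast, float)
    return {
        "ME":   np.mean(err),             # signed bias
        "MAE":  np.mean(np.abs(err)),     # average magnitude of errors
        "RMSE": np.sqrt(np.mean(err**2)), # penalizes large errors more
    }

# Hypothetical actual vs. forecast values (not from the report).
actual   = np.array([1.0, -0.5, 2.0, 0.3])
forecast = np.array([0.8, -0.2, 1.5, 0.0])

m = accuracy(actual, forecast)
print(round(m["RMSE"], 3))  # 0.343
```

Because RMSE squares the errors, a model can win on MAE yet lose on RMSE when its rival makes fewer large misses, which is consistent with the RMSE exception noted above.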
