
Understanding Value at Risk (VaR) Methods

The document discusses various methods for calculating Value at Risk (VaR), which is a measure of the risk of loss on a portfolio of assets. It describes the historical, variance-covariance, and Monte Carlo simulation methods for calculating VaR. The historical method examines past returns, while the variance-covariance method assumes returns are normally distributed. The document also discusses calculating VaR for single assets and portfolios with multiple assets using a variance-covariance matrix. It provides examples of calculating VaR for both single and multi-asset portfolios.

Volatility Measurement

VAR, ARCH and GARCH


Unit – 3
FRMD
Introduction – Idea Behind VAR

• The most popular and traditional measure of risk is volatility.

• The main problem with volatility, however, is that it does not care about the direction of an investment's movement: a stock can be volatile because it suddenly jumps higher. Of course, investors aren't distressed by gains.

• For investors, risk is about the odds of losing money, and VAR is based on that common-sense fact. By assuming investors care about the odds of a really big loss, VAR answers the question, "What is my worst-case scenario?"


What is VAR

• What is the maximum I can lose on this investment? This is a question that almost every investor who has invested, or is considering investing, in a risky asset asks at some point in time.

• Value at Risk tries to provide an answer, at least within a reasonable estimate.

• In layman's terms, Value at Risk measures the largest loss likely to be suffered (in future) on a portfolio position over a holding period, with a given probability (confidence level).


• Value-at-risk (VaR) is a Probabilistic Metric of Market Risk (PMMR) used by banks and other organizations to monitor risk in their trading portfolios.

• For a given probability and a given time horizon, value-at-risk indicates an amount of money such that there is that probability of the portfolio not losing more than that amount of money over that horizon.
Value at Risk

• Value at risk (VaR) is a way to quantify the risk of potential losses for a firm or an investment portfolio.

• This metric can be computed in several ways, including the historical, variance-covariance, and Monte Carlo methods.

• A VAR statistic has three components: a time period, a confidence level and a loss amount (or loss percentage).

• E.g.: What is the most I can expect to lose in rupees, with a 95% or 99% level of confidence, over the next day/month/year?


VAR Methodologies

• Historical method - looks at one's prior returns history and orders them from worst losses to greatest gains, following the premise that past returns experience will inform future outcomes.

• The variance-covariance method - Rather than assuming the past will inform the future, this method assumes that gains and losses are normally distributed. This way, potential losses can be framed in terms of standard-deviation events from the mean.

• Monte Carlo simulation - This technique uses computational models to simulate projected returns over hundreds or thousands of possible iterations. Then it takes the chance that a loss will occur, say 5% of the time, and reveals the impact.
Historical Method

• The historical method simply re-organizes actual historical returns, putting them in order from worst to best. It then assumes that history will repeat itself, from a risk perspective.

• For example, suppose we want to calculate the 1-day 95% VaR for an equity using 100 days of data. The 95th percentile corresponds to the least worst of the worst 5% of returns. In this case, because we are using 100 days of data, the VaR simply corresponds to the 5th worst day.

• The historical approach is non-parametric: we have not made any assumptions about the distribution of historical returns.

VaR Calculation using the
Historical Simulation approach in Excel

• The steps for VaR calculation using the historical method in Excel are as follows (a Python sketch of the same procedure appears after this list):

• Similar to the variance-covariance approach, first we calculate the returns of the stock:
Returns = LN(Today's Price / Yesterday's Price)

• Sort the returns from worst to best.

• Next, we calculate the total count of the returns using the COUNT function.

• The VaR(90) is the sorted return corresponding to 10% of the total count.

• Similarly, the VaR(95) and VaR(99) are the sorted returns corresponding to 5% and 1% of the total count respectively.
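As a companion to the Excel steps, here is a minimal Python sketch of historical-simulation VaR (not from the original slides; the dummy price series and the use of numpy's percentile function are illustrative assumptions):

```python
import numpy as np
import pandas as pd

def historical_var(prices: pd.Series, level: int) -> float:
    """Historical-simulation VaR: the log return at the (100 - level)th percentile."""
    # Returns = ln(today's price / yesterday's price)
    returns = np.log(prices / prices.shift(1)).dropna()
    # For VaR(95), take the 5th percentile of the sorted returns, and so on
    return np.percentile(returns, 100 - level)

# Illustrative usage with dummy closing prices; replace with real data
prices = pd.Series([100, 101.2, 99.8, 100.5, 98.9, 100.1, 99.5])
for level in (90, 95, 99):
    print(f"VaR({level}): {historical_var(prices, level):.4%}")
```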


Variance-Covariance Method
• The variance-covariance method assumes that a stock investment's returns will be normally distributed around the mean of a normal or bell-shaped probability distribution.

• Since returns are assumed to follow a normal or bell curve, we need the standard deviation of the returns. This can be looked up or computed for most traded stocks.

• A complicating factor of this method is that stocks can have a tendency to move up and down together, usually caused by some external factor. That means we need the covariance of returns for each stock in a portfolio against all of the other stocks.


Computing the variance-covariance for a
one-stock portfolio

• This method requires the stock's price and its standard deviation, along with a confidence level.

• Most value at risk calculations use either a 95% or 99% confidence level.

• From a statistics table, one can look up the z-value that corresponds to the desired confidence level. In the example here, the z-value for a 95% confidence level is 1.645.

• Then the numbers go into the formula:

• VaR = stock price (or investment amount) × standard deviation × z-value


• Suppose you want to calculate VaR for an investment in QRS Co. The price of QRS Co. stock is Rs.100, its standard deviation of monthly returns is 10%, and we would like a 95% confidence level for the greatest monthly loss on this stock. The z-value for a 95% confidence level is 1.645. The calculation is:

• Rs.100 × 0.10 × 1.645 = Rs.16.45

• This means that 95% of the time, you will not have a monthly loss greater than Rs.16.45 per share.
• Consider a portfolio that includes only one security, stock ABC, with Rs.5,00,000 invested in it. The standard deviation of stock ABC over 252 days, or one trading year, is 7%. Following the normal distribution, the one-sided 95% confidence level has a z-score of 1.645.

• The value at risk of this portfolio is:

• Rs.57,575 = Rs.5,00,000 × 1.645 × 0.07

• Therefore, with 95% confidence, the maximum loss will not exceed Rs.57,575 in a given trading year.
VaR Calculation in Excel using Variance-Covariance
approach (Single Stock)

• Calculate the returns of the closing price:
Returns = LN(Today's Price / Yesterday's Price)

• Calculate the mean of the returns using the AVERAGE function.

• Calculate the standard deviation of the returns using the STDEV function.

• Finally, we calculate the VaR for the 90, 95, and 99 confidence levels using the NORM.INV function (a Python sketch follows this list).

• This function has three parameters: probability, mean, and standard deviation. For the probability, we use 0.1, 0.05 and 0.01 respectively for the VaR(90), VaR(95), and VaR(99).
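A minimal Python equivalent of these Excel steps, assuming scipy is available; `norm.ppf` plays the role of NORM.INV here, and the price series is a placeholder:

```python
import numpy as np
import pandas as pd
from scipy.stats import norm

def parametric_var(prices: pd.Series, level: int) -> float:
    """Variance-covariance VaR for a single stock, expressed as a return quantile."""
    returns = np.log(prices / prices.shift(1)).dropna()
    mu, sigma = returns.mean(), returns.std()
    # norm.ppf(0.05, mu, sigma) corresponds to Excel's NORM.INV(0.05, mean, std dev)
    return norm.ppf(1 - level / 100, loc=mu, scale=sigma)

prices = pd.Series([100, 101.2, 99.8, 100.5, 98.9, 100.1, 99.5])  # dummy data
for level in (90, 95, 99):
    print(f"VaR({level}): {parametric_var(prices, level):.4%}")
```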


Example of VAR with Two Securities

• The value at risk of a portfolio with two securities can be determined by first calculating the portfolio's volatility.

• Multiply the square of the first asset's weight by the square of the first asset's standard deviation, and add it to the square of the second asset's weight multiplied by the square of the second asset's standard deviation.

• Add to that value two times the product of: the weights of the first and second assets, the correlation coefficient between the two assets, asset one's standard deviation, and asset two's standard deviation. Then multiply the square root of that value by the z-score and the portfolio value. (The formula is written out below.)
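In symbols, the procedure described above is the standard two-asset portfolio volatility and parametric VaR formula:

$$\sigma_p = \sqrt{w_1^2\sigma_1^2 + w_2^2\sigma_2^2 + 2\,w_1 w_2\,\rho_{12}\,\sigma_1\sigma_2}, \qquad \text{VaR} = V \times z \times \sigma_p$$

where the $w_i$ are the portfolio weights, the $\sigma_i$ are the standard deviations of returns, $\rho_{12}$ is the correlation coefficient, $V$ is the portfolio value, and $z$ is the z-score for the chosen confidence level.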
• For example, suppose a risk manager wants to calculate the value at risk using the parametric method for a one-day time horizon.

• The weight of the first asset is 40%, and the weight of the second asset is 60%. The standard deviation is 4% for the first and 7% for the second asset. The correlation coefficient between the two is 25%. The z-score for a one-sided 95% confidence level is 1.645 (taking VaR as a positive loss amount). The portfolio value is Rs.50 million.

• The parametric value at risk over a one-day period, with a 95% confidence level, is:

Rs.3.99 million = Rs.50,000,000 × 1.645 × √(0.4² × 0.04² + 0.6² × 0.07² + 2 × 0.4 × 0.6 × 0.25 × 0.04 × 0.07)
VAR – Multiple Asset Portfolio

• If a portfolio has multiple assets, its volatility is calculated using a matrix. A variance-covariance matrix is computed for all the assets. The portfolio variance is then the transpose of the vector of asset weights multiplied by the covariance matrix of all the assets multiplied by the weight vector, i.e. wᵀΣw (see the sketch below).

• In practice, the calculations for VaR are typically done through financial models. Modeling functions will vary depending on whether the VaR is being calculated for one security, two securities, or a portfolio with three or more securities.
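A minimal numpy sketch of the matrix calculation (portfolio variance = wᵀΣw); the three-asset weights and covariance numbers below are illustrative assumptions:

```python
import numpy as np

# Illustrative inputs: three assets' weights and the covariance matrix of their returns
w = np.array([0.4, 0.35, 0.25])
cov = np.array([[0.0016, 0.0004, 0.0002],
                [0.0004, 0.0049, 0.0008],
                [0.0002, 0.0008, 0.0025]])

portfolio_variance = w @ cov @ w      # w' * Sigma * w
portfolio_vol = np.sqrt(portfolio_variance)

z = 1.645                             # one-sided 95% confidence level
portfolio_value = 50_000_000
var_95 = portfolio_value * z * portfolio_vol
print(f"95% VaR: Rs.{var_95:,.0f}")
```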


Time Series Forecasting

• Time series forecasting occurs when scientific predictions are made based on historical time-stamped data.

• Forecasting a time series can be broadly divided into two types:

• If only the previous values of the time series are used to predict its future values, it is called Univariate Time Series Forecasting.

• If predictors other than the series itself (exogenous variables) are used to forecast it, it is called Multivariate Time Series Forecasting.
Statistical stationarity

• A stationary time series is one whose statistical properties such as mean, variance, autocorrelation, etc. are all constant over time.

• A time series whose statistical properties change over time is called a non-stationary time series. Thus a time series with a trend or seasonality is non-stationary in nature. This is because the presence of trend or seasonality will affect the mean, variance and other properties at any given point in time.


Stationary Time Series | Non-Stationary Time Series
Statistical properties are independent of the point in time at which the series is observed. | Statistical properties are a function of the time at which the series is observed.
Mean, variance and other statistics remain constant, so conclusions from the analysis of a stationary series are reliable. | Mean, variance and other statistics change with time, so conclusions from the analysis of a non-stationary series might be misleading.
A stationary time series always reverts to the long-term mean. | A non-stationary time series does not revert to the long-term mean.
A stationary time series will not have trends, seasonality, etc. | The presence of trends or seasonality makes a series non-stationary.
Why is stationarity one of the key components in time series analysis?

• Inferences drawn from a non-stationary process will not be reliable, as its statistical properties keep changing with time. While performing analysis, one would typically be interested in the expected value of the mean, the variance, etc.

• But if these parameters are continuously changing, estimating them by averaging over time will not be accurate. Hence, stationary data are easier to analyze, and any forecast made using non-stationary data would be erroneous and misleading.

• Because of this, many statistical procedures applied in time series analysis make the assumption that the underlying time series data is stationary.


Detecting Stationarity

• Looking at the data - Both stationary and non-stationary series have some properties that can be detected very easily from a plot of the data. For example, in a stationary series, the data points would always return towards the long-run mean with a constant variance. In a non-stationary series, the data points might show some trend or seasonality.

• Augmented Dickey-Fuller test - The Augmented Dickey-Fuller (ADF) test is one of the most popular tests to check for stationarity. Its null hypothesis H0 is that the series has a unit root, i.e. is non-stationary. If the p-value is less than or equal to 0.05, you reject H0 and conclude that the time series is stationary. (A Python sketch follows.)
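A minimal sketch of the ADF test using statsmodels; the random-walk series here is placeholder data, non-stationary by construction:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
series = pd.Series(rng.normal(size=500).cumsum())  # random walk

stat, pvalue = adfuller(series)[:2]
print(f"ADF statistic: {stat:.3f}, p-value: {pvalue:.3f}")
if pvalue <= 0.05:
    print("Reject H0: the series is stationary")
else:
    print("Fail to reject H0: the series is non-stationary")
```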


Transforming a Non-Stationary Series into a
Stationary Series

• Since stationary series are easy to analyze, you can convert a non-stationary series into a stationary series by the method of differencing.

• In fact, it is necessary to convert a non-stationary series into a stationary series in order to use time series forecasting models.

• In this method, the difference of consecutive terms in the series is computed as below (a short code sketch follows):

y't = yt − yt−1
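In pandas, first differencing and a re-test for stationarity might look like this (again on placeholder random-walk data):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
series = pd.Series(rng.normal(size=500).cumsum())  # non-stationary random walk

# First difference: y't = yt - yt-1; drop the initial NaN
diffed = series.diff().dropna()

stat, pvalue = adfuller(diffed)[:2]
print(f"p-value after differencing: {pvalue:.3f}")  # small p-value => stationary
```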
Regression Analysis

• Regression analysis is commonly used for modeling the relationship between a single dependent variable Y and one or more predictors. When we have one predictor, we call this "simple" linear regression: Y = β0 + β1X + ε

• When we have more than one predictor, we call it multiple linear regression: Y = β0 + β1X1 + β2X2 + … + βpXp + ε

• There are four assumptions associated with a linear regression model (a short fitting sketch follows the list):

– Linearity: The relationship between X and the mean of Y is linear.

– Homoscedasticity: The variance of the residuals is the same for any value of X.

– Independence: Observations are independent of each other.

– Normality: For any fixed value of X, Y is normally distributed.
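For completeness, a minimal statsmodels OLS sketch of simple linear regression on made-up data (purely illustrative, not part of the original slides):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=200)  # Y = b0 + b1*X + noise

X = sm.add_constant(x)       # add the intercept column
model = sm.OLS(y, X).fit()
print(model.params)          # estimated [b0, b1]
```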


ARIMA - Auto Regressive Integrated Moving Average

• ARIMA is a class of models that 'explains' a given time series based on its own past values, that is, its own lags and the lagged forecast errors.

• An ARIMA model is characterized by 3 terms: p, d, q, where:

– p is the order of the Auto Regressive term

– q is the order of the Moving Average term

– d is the number of differencing operations required to make the time series stationary

• The first step to build an ARIMA model is to make the time series stationary.
• The term 'Auto Regressive' in ARIMA means it is a linear regression model that uses its own lags as predictors. Linear regression models, as you know, work best when the predictors are not correlated and are independent of each other.

• So how do we make a series stationary? The most common approach is to difference it: subtract the previous value from the current value. Sometimes, depending on the complexity of the series, more than one differencing may be needed.

• The value of d, therefore, is the minimum number of differencing operations needed to make the series stationary. If the time series is already stationary, then d = 0.

• Next, what are the 'p' and 'q' terms? 'p' is the order of the 'Auto Regressive' (AR) term: it refers to the number of lags of Y to be used as predictors. 'q' is the order of the 'Moving Average' (MA) term: it refers to the number of lagged forecast errors that should go into the ARIMA model.
• A pure Auto Regressive (AR only) model is one where Yt depends only on its own lags; that is, Yt is a function of the lags of Yt.

• Likewise, a pure Moving Average (MA only) model is one where Yt depends only on the lagged forecast errors.

• An ARIMA model is one where the time series was differenced at least once to make it stationary, and the AR and MA terms are combined. (The equations are written out below.)
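In standard notation, the AR, MA and combined forms described above are:

$$\text{AR}(p):\quad Y_t = \alpha + \beta_1 Y_{t-1} + \dots + \beta_p Y_{t-p} + \varepsilon_t$$

$$\text{MA}(q):\quad Y_t = \alpha + \varepsilon_t + \phi_1 \varepsilon_{t-1} + \dots + \phi_q \varepsilon_{t-q}$$

$$\text{ARIMA}(p,d,q):\quad Y'_t = \alpha + \sum_{i=1}^{p} \beta_i Y'_{t-i} + \varepsilon_t + \sum_{j=1}^{q} \phi_j \varepsilon_{t-j}$$

where $Y'_t$ is the series after $d$ rounds of differencing and $\varepsilon_t$ is the forecast error.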
• An AR(1) autoregressive process is one in which the current value is based on the immediately preceding value, while an AR(2) process is one in which the current value is based on the previous two values.

• An AR(0) process is used for white noise and has no dependence between the terms.
• ARIMA, a form of the Box-Jenkins model, is named after the authors George Box and Gwilym Jenkins, who proposed a three-step method to select appropriate ARIMA models for forecasting economic variables.

• We will try to find a model that fits the data well and can forecast appropriate values.

The method consists of three basic steps:

• Stage 1: Identification

• Stage 2: Estimation

• Stage 3: Diagnostics and Forecasting


Stage 1: Identification

• The graph

• The correlogram - ACF & PACF

• Formal tests: Augmented Dickey-Fuller


[Chart slides: "Non-Stationary EOD Close Price"; "ADF Test Results – Non-Stationary Data"; "Stationary Data"; "ADF Test Result – Stationary Data" (p < 0.05 is significant, so the null hypothesis is rejected: the data series is stationary); "Descriptive Statistics"; "Correlogram – Test of Autocorrelation"; "Correlogram – Autocorrelation of the Differenced Data".]
Stage 2: ARIMA Model Estimates

ARIMA Model    Significant terms   Sigma Sq.   Adj. R²   AIC    SIC
ARIMA (5,1,5)  0                   7.459       0.027     4.86   4.89
ARIMA (5,1,6)  2                   7.347       0.042     4.84   4.88
ARIMA (6,1,5)  2                   7.346       0.042     4.84   4.86
ARIMA (6,1,6)  1                   7.499       0.022     4.86   4.90

Model evaluation criteria:

• Significance of the ARMA terms: select the model with the most significant terms (p-values < 0.05).

• Sigma Sq.: a measure of volatility; select the model with the lowest value.

• Adjusted R²: select the highest value (best fit).

• Information criteria: select the model with the smallest Akaike (AIC) and Schwarz (SIC) values.
Adjusted ARIMA Model
Stage 3: Diagnostics and Forecasting

• We identified possible models and estimated them in Stage 2, and selected the most appropriate model based on various criteria. Now it is time to ensure the model satisfies the requirements needed to forecast and predict future values.

• In Stage 3 of the ARIMA Box-Jenkins method we:

– Ensure the model satisfies the stability conditions

– Check that there is no residual autocorrelation

• If the above requirements are met, then we can forecast!


ARIMA Forecast

ARIMA Results

Day   % Change in Close Price   Yt−1     Forecasted Value (Yt+1)
T+1   0.18%                     781.15   782.55
T+2   0.029%                    782.55   782.77
T+3   0.047%                    782.77   783.13
T+4   0.245%                    783.13   785.04
T+5   −0.126%                   785.04   784.051
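For readers working outside the spreadsheet/EViews workflow shown in these slides, here is a minimal statsmodels sketch of fitting and forecasting one of the candidate models from the comparison table; the price series is a placeholder:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Placeholder close prices; replace with the actual EOD series
rng = np.random.default_rng(7)
close = pd.Series(780 + rng.normal(size=300).cumsum())

# Fit the ARIMA(5,1,6) candidate from the comparison table
res = ARIMA(close, order=(5, 1, 6)).fit()
print(f"AIC: {res.aic:.2f}, BIC: {res.bic:.2f}")

# Forecast the next 5 periods (T+1 to T+5)
print(res.forecast(steps=5))
```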


Checking the Existence of ARCH Effects

• View → Residual Diagnostics → Heteroskedasticity Test → select the ARCH option

• H0 = There are no ARCH effects

• H1 = There is an ARCH effect

• Hint: if p < 0.05, we reject the null hypothesis and confirm the existence of an ARCH effect (a Python sketch follows).
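Equivalently in Python, Engle's ARCH-LM test from statsmodels can be run on the residuals of the fitted mean model; the residuals below are placeholders:

```python
import numpy as np
from statsmodels.stats.diagnostic import het_arch

# Placeholder residuals (in practice, use e.g. res.resid from the fitted ARIMA model)
rng = np.random.default_rng(3)
resid = rng.normal(size=500)

lm_stat, lm_pvalue, f_stat, f_pvalue = het_arch(resid)
print(f"ARCH-LM p-value: {lm_pvalue:.3f}")
if lm_pvalue < 0.05:
    print("Reject H0: ARCH effects are present")
else:
    print("Fail to reject H0: no ARCH effects")
```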
Residual Diagnostics

• Correlogram of squared residuals

• Heteroskedasticity test

Heteroskedasticity Test Indicating an ARCH Effect

• H0 = No ARCH effect; H1 = There is an ARCH effect.

• The p-value is < 0.05, so we reject H0 and accept H1: there is an indication of heteroskedasticity.

• So, the next step is to go for ARCH and GARCH models.
ARCH – Auto Regressive
Conditional Heteroscedasticity

• In econometrics, the autoregressive conditional heteroskedasticity (ARCH) model is a statistical model for time series data that describes the variance of the current error term.

• The ARCH model is appropriate when the error variance in a time series follows an autoregressive (AR) model.

• If an autoregressive moving average (ARMA) model is assumed for the error variance, the model is a generalized autoregressive conditional heteroskedasticity (GARCH) model.


• ARCH models are commonly employed in modeling financial time series that exhibit time-varying volatility and volatility clustering.

• In finance, volatility clustering refers to the observation, first noted by Mandelbrot (1963), that "large changes tend to be followed by large changes, of either sign, and small changes tend to be followed by small changes."
Varying Volatility, Volatility Clustering
& Leverage Effect
ARCH – Auto Regressive Conditional
Heteroskedasticity

• Autoregressive: the current value can be expressed as a function of the previous values, i.e. they are correlated.

• Conditional: the variance is conditioned on (based on) past errors.

• Heteroskedasticity: the series displays unusual, i.e. varying, variance.

• An ARCH(p) model is simply an AR(p) model applied to the variance of a time series.
ARCH Model
GARCH
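The "ARCH Model" and "GARCH" slides present the equations as images; in standard notation, the conditional-variance equations are:

$$\text{ARCH}(p):\quad \sigma_t^2 = \alpha_0 + \sum_{i=1}^{p} \alpha_i\,\varepsilon_{t-i}^2$$

$$\text{GARCH}(1,1):\quad \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2$$

where the $\varepsilon_{t-i}$ are past error terms and $\sigma_{t-1}^2$ is the previous conditional variance.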
GARCH

• Generalized Autoregressive Conditional Heteroskedasticity, or GARCH, is an extension of the ARCH model that incorporates a moving average component together with the autoregressive component.

• Specifically, the model includes lagged variance terms (e.g. the observations, if modeling the white-noise residual errors of another process), together with lagged residual errors from a mean process.

• The introduction of a moving average component allows the model to capture both the conditional change in variance over time and changes in the time-dependent variance, for example conditional increases and decreases in variance.
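A minimal sketch of fitting an AR-GARCH(1,1) model with the Python `arch` package, as an alternative to the EViews workflow in these slides; the returns are placeholder data, and setting `o=1` in `arch_model` would give the asymmetric GJR/T-ARCH variant discussed below:

```python
import numpy as np
from arch import arch_model

# Placeholder percentage returns; replace with real data
rng = np.random.default_rng(11)
returns = rng.normal(scale=1.2, size=1000)

# AR(5) mean equation with a GARCH(1,1) conditional-variance equation
am = arch_model(returns, mean="AR", lags=5, vol="GARCH", p=1, q=1)
res = am.fit(disp="off")
print(res.summary())

# One-step-ahead conditional variance forecast
print(res.forecast(horizon=1).variance.iloc[-1])
```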
Results of the AR(5) AR(6) MA(6) ARCH(1) GARCH(1) Model

• The p-values are less than 0.05, indicating that the model is significant.

• The next step is to again check for heteroskedasticity.

• Here the p-values are greater than 0.05, indicating no remaining heteroskedasticity; this is a good-fitting model.
T ARCH vs. GARCH

• The T-ARCH model incorporates asymmetric volatility.

• The T-ARCH model captures something that is not contemplated by the GARCH model: the empirically observed fact that negative shocks at time t−1 have a stronger impact on the variance at time t than positive shocks do.

• This asymmetry used to be called the leverage effect, because the increase in risk was believed to come from the increased leverage induced by a negative shock.


Asymmetric Effect
T ARCH Equation
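The "T ARCH Equation" slide shows the equation as an image; a standard way to write the T-ARCH / GJR-GARCH(1,1) conditional variance is:

$$\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \gamma\,\varepsilon_{t-1}^2\, I_{\{\varepsilon_{t-1}<0\}} + \beta\,\sigma_{t-1}^2$$

where $I_{\{\varepsilon_{t-1}<0\}}$ equals 1 for negative shocks and 0 otherwise, so a positive $\gamma$ means negative shocks raise the variance more than positive shocks.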
• Empirical studies have shown that the EGARCH model provides a more accurate result compared to the conventional GARCH model (Alberg, Shalit & Yosef 2008), indicating that incorporating asymmetric volatility yields a more adequate result.
