A Practical Guide To
Using Econometrics
A. H. Studenmund
Chapter 9
Time Series
• Time series data involve a single entity over multiple
points in time.
• Notation for time series is different than for cross-sectional data.
• Cross-sectional: Yi = β0 + β1Xi + εi
  where i goes from 1 to N
• Time series: Yt = β0 + β1Xt + εt
  where t goes from 1 to T
Time Series (continued)
• Time-series data have some characteristics that make them
more difficult to deal with than cross-sectional data.
1. The order of observations in a time series is fixed.
2. Time-series samples tend to be much smaller
than cross-sectional ones.
3. The theory underlying time-series analysis can
be quite complex.
4. The stochastic error term in a time-series is often
affected by events that took place in a previous
time period. This is called serial correlation!
Pure Serial Correlation
• Pure serial correlation occurs when Classical
Assumption IV is violated in a correctly specified
equation.
• The most commonly assumed form of serial correlation is
first-order serial correlation:
  εt = ρεt-1 + ut
where:
  ε = the error term of the equation in question
  ρ = the first-order autocorrelation coefficient
  u = a classical (not serially correlated) error term
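As a sketch, the process above can be simulated and ρ recovered from the generated errors. This is illustrative code, not from the text; the sample size and ρ = 0.7 are arbitrary choices:

```python
import random

random.seed(0)
T, rho = 5000, 0.7  # illustrative sample size and autocorrelation

# First-order serial correlation: eps_t = rho * eps_{t-1} + u_t,
# where u_t is a classical (white-noise) error term
eps = [random.gauss(0, 1)]
for _ in range(1, T):
    eps.append(rho * eps[-1] + random.gauss(0, 1))

# Estimate rho by regressing eps_t on eps_{t-1} (no intercept)
num = sum(eps[t] * eps[t - 1] for t in range(1, T))
den = sum(eps[t - 1] ** 2 for t in range(1, T))
rho_hat = num / den
```

With a sample this large, rho_hat lands close to the true ρ of 0.7.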
Pure Serial Correlation (continued)
• ρ is called the first-order autocorrelation coefficient.
• Magnitude of ρ indicates the strength of the serial
correlation.
• If ρ = 0, there is no serial correlation.
• As ρ approaches 1 in absolute value, a high degree of
serial correlation exists.
• In general, −1 < ρ < +1:
  • If ρ > 0, there is positive serial correlation.
  • If ρ < 0, there is negative serial correlation.
Pure Serial Correlation (continued)
• Serial correlation can take on many forms other than
first-order.
• For quarterly data, the current quarter’s error term may
be functionally related to the error term from the same
quarter in the previous year:
  εt = ρεt-4 + ut
• The error term in an equation might be a function of more
than one previous observation of the error term, such as:
  εt = ρ1εt-1 + ρ2εt-2 + ut
Impure Serial Correlation
• Impure Serial Correlation is serial correlation caused by
a specification error.
• The error term of an incorrectly specified equation
includes a portion of the effect of the misspecification.
• Even if the true error term is not serially correlated, the
error term containing the specification error can be.
• Consider two cases of specification error:
1. Omitted variable
2. Incorrect functional form
Impure Serial Correlation (continued)
• Suppose the true equation is:
  Yt = β0 + β1X1t + β2X2t + εt
• If X2 is accidentally omitted, then:
  Yt = β0 + β1X1t + εt*
  where εt* = β2X2t + εt
• εt* will tend to exhibit detectable serial correlation when:
  1. X2 itself is serially correlated.
  2. The size of ε is small compared to the size of β2X2.
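A small simulation can illustrate the first condition. Here the data are hypothetical: X2 is itself serially correlated, and omitting it leaves residuals that inherit that serial correlation, pushing the Durbin-Watson d statistic (defined later in the chapter) far below 2:

```python
import random

random.seed(42)
T = 500

# Hypothetical true model: Y = 1 + 2*X1 + 3*X2 + eps, with a classical
# (white-noise) eps, but X2 is itself serially correlated (AR(1), 0.9)
X1 = [random.gauss(0, 1) for _ in range(T)]
X2 = [random.gauss(0, 1)]
for _ in range(1, T):
    X2.append(0.9 * X2[-1] + random.gauss(0, 1))
Y = [1 + 2 * X1[t] + 3 * X2[t] + random.gauss(0, 1) for t in range(T)]

# Misspecified regression: omit X2 and regress Y on X1 alone
mx, my = sum(X1) / T, sum(Y) / T
b1 = (sum((X1[t] - mx) * (Y[t] - my) for t in range(T))
      / sum((x - mx) ** 2 for x in X1))
b0 = my - b1 * mx
e = [Y[t] - b0 - b1 * X1[t] for t in range(T)]

# The residuals absorb 3*X2, so they inherit its serial correlation:
# the Durbin-Watson d statistic falls far below 2
d = (sum((e[t] - e[t - 1]) ** 2 for t in range(1, T))
     / sum(v * v for v in e))
```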
Impure Serial Correlation (continued)
• Suppose the true equation is polynomial in nature:
  Yt = β0 + β1X1t + β2X1t² + εt
• But the following linear regression is run instead:
  Yt = α0 + α1X1t + εt*
• The new error term is now a function of the true error
term and of the differences between the linear and
polynomial functional forms.
• As Figure 9.4 displays, these differences often follow an
autoregressive pattern.
The Consequences of Serial Correlation
• Serial correlation in the error term has at least three
consequences:
1. Pure serial correlation does not cause bias in the
coefficient estimates.
2. Serial correlation causes OLS to no longer be the
minimum variance estimator (of all linear
estimators).
3. Serial correlation causes the OLS estimates of
the SE(β̂)s to be biased, leading to unreliable
hypothesis testing.
The Durbin-Watson Test
• The Durbin-Watson test is used to determine if there is
first-order serial correlation.
• It requires three assumptions:
1. The regression model includes an intercept term.
2. The serial correlation is first-order in nature.
3. The regression model does not include a lagged
dependent variable as an independent variable.
The Durbin-Watson Test (continued)
• The equation for the Durbin-Watson d statistic for T
observations is:
  d = Σt=2..T (et − et-1)² / Σt=1..T et²
  where the ets are the OLS residuals
• Extreme positive serial correlation: d ≈ 0
• Extreme negative serial correlation: d ≈ 4
• No serial correlation: d ≈ 2
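The statistic is simple to compute from the residuals. A minimal sketch, with the two extreme cases:

```python
def durbin_watson(e):
    """d = sum_{t=2}^{T} (e_t - e_{t-1})^2 / sum_{t=1}^{T} e_t^2."""
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    return num / sum(x * x for x in e)

# Extreme positive serial correlation: residuals never change
print(durbin_watson([1.0] * 100))       # 0.0
# Extreme negative serial correlation: residuals alternate in sign
print(durbin_watson([1.0, -1.0] * 50))  # 3.96, close to 4
```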
Using the Durbin-Watson Test
• The Durbin-Watson test is unusual in two respects:
1. Econometricians almost never test the one-sided
null hypothesis that there is negative serial
correlation.
2. The Durbin-Watson test has an acceptance
region and a rejection region, but also an
inconclusive region.
• With these exceptions, the Durbin-Watson test is quite
similar to the use of the t-test.
Using the Durbin-Watson Test (continued)
• To test for positive serial correlation:
1. Obtain the OLS residuals from the equation to be
tested and calculate the d statistic.
2. Determine the sample size and the number of
explanatory variables, and consult a Durbin-Watson
table to find dU and dL.
3. Set up the null hypothesis of no positive serial
correlation and a one-sided alternative hypothesis:
  H0: ρ ≤ 0 (no positive serial correlation)
  HA: ρ > 0 (positive serial correlation)
Using the Durbin-Watson Test (continued)
• The appropriate decision rule:
If d < dL Reject H0
If d > dU Do not reject H0
If dL < d < dU Inconclusive
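The decision rule translates directly into code. The critical values used below are illustrative (they are the tabled values for a 5% one-sided test with 3 explanatory variables and 25 observations):

```python
def dw_decision(d, d_L, d_U):
    """One-sided Durbin-Watson test for positive serial correlation."""
    if d < d_L:
        return "reject H0"         # evidence of positive serial correlation
    if d > d_U:
        return "do not reject H0"  # no evidence of positive serial correlation
    return "inconclusive"

# With critical values d_L = 1.12 and d_U = 1.66:
print(dw_decision(0.60, 1.12, 1.66))  # reject H0
print(dw_decision(1.28, 1.12, 1.66))  # inconclusive
print(dw_decision(1.78, 1.12, 1.66))  # do not reject H0
```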
Using the Durbin-Watson Test (continued)
• In rare cases, a two-sided Durbin-Watson test might be used:
  H0: ρ = 0 (no serial correlation)
  HA: ρ ≠ 0 (serial correlation)
• The appropriate decision rule:
  If d < dL Reject H0
  If d > 4 − dL Reject H0
  If dU < d < 4 − dU Do not reject H0
  otherwise Inconclusive
Examples of the Use of
the Durbin-Watson Statistic
Example: 5% test, 3 variables, and 25 observations
• Critical values: dL = 1.12 and dU = 1.66
• Hypothesis statement: H0: ρ ≤ 0 (no positive serial correlation)
                        HA: ρ > 0 (positive serial correlation)
• Decision rule: If d < 1.12 Reject H0
If d > 1.66 Do not reject H0
If 1.12 < d < 1.66 Inconclusive
• If d = 1.78? (do not reject H0) If d = 1.28? (inconclusive) If d = 0.60? (reject H0)
Examples of the Use of
the Durbin-Watson Statistic (continued)
Example: Chicken demand model
  Yt = β0 + β1PCt + β2PBt + β3YDt + εt
where:
Yt = per capita chicken consumption (in pounds) in
year t
PCt = the price of chicken (in cents per pound) in
year t
PBt = the price of beef (in cents per pound) in year t
YDt = U.S. per capita disposable income (in
hundreds of dollars) in year t
Examples of the Use of
the Durbin-Watson Statistic (continued)
• The Durbin-Watson statistic is calculated to be 0.99.
• dL = 1.20 and dU = 1.65
• Decision rule: If d < 1.20 Reject H0
  If d > 1.65 Do not reject H0
  If 1.20 < d < 1.65 Inconclusive
• Since d = 0.99 < 1.20, reject H0: there is evidence of
positive serial correlation.
The Lagrange Multiplier (LM) Test
• The Lagrange multiplier (LM) test tests for serial
correlation by analyzing how well the lagged residuals
explain the residual of the original equation in an
equation that also includes all the original explanatory
variables.
• If lagged residuals are statistically significant, then the
null hypothesis of no serial correlation is rejected.
The Lagrange Multiplier (LM) Test
(continued)
• The LM test involves three steps (assume an equation
with two independent variables):
1. Obtain the residuals et from the estimated equation.
2. Specify the auxiliary equation:
   et = α0 + α1X1t + α2X2t + α3et-1 + ut
The Lagrange Multiplier (LM) Test
(continued)
3. Use OLS to estimate the auxiliary equation and test the
null hypothesis that α3 = 0 with the following test statistic:
   LM = N × R²
   where N = sample size and R² is the unadjusted
   coefficient of determination (both of the auxiliary
   equation).
• For large samples, LM has a chi-square distribution with
one degree of freedom.
• If LM is greater than the critical chi-square value, reject
the null hypothesis of no serial correlation.
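The three steps can be sketched end to end. The data below are hypothetical (simulated with a built-in ρ of 0.6, so the test should reject); 3.84 is the 5% chi-square critical value with one degree of freedom:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500

# Hypothetical equation with two independent variables and an
# AR(1) error term (rho = 0.6)
X1, X2 = rng.normal(size=T), rng.normal(size=T)
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = 0.6 * eps[t - 1] + rng.normal()
y = 1.0 + 2.0 * X1 - 1.0 * X2 + eps

# Step 1: residuals from the original equation
X = np.column_stack([np.ones(T), X1, X2])
e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Step 2: auxiliary equation e_t = a0 + a1*X1 + a2*X2 + a3*e_{t-1}
Z = np.column_stack([X[1:], e[:-1]])
resid = e[1:] - Z @ np.linalg.lstsq(Z, e[1:], rcond=None)[0]

# Step 3: LM = N * R^2 of the auxiliary equation
dev = e[1:] - e[1:].mean()
R2 = 1 - (resid @ resid) / (dev @ dev)
LM = (T - 1) * R2
reject = LM > 3.84  # 5% chi-square(1) critical value
```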
Remedies for Serial Correlation
• The first place to start in correcting serial correlation is to
look carefully at the specification of the equation for
possible errors that might be causing impure serial
correlation.
• Only after a careful review should the possibility of an
adjustment for pure serial correlation be considered.
• The appropriate response if you have pure serial
correlation is to consider:
1. Generalized Least Squares
2. Newey-West standard errors
Generalized Least Squares
• Generalized least squares (GLS) rids an equation of
pure first-order serial correlation and restores the
minimum variance property to its estimation.
• GLS starts with an equation that has pure serial
correlation and transforms it into one that does not.
• It is instructive to examine this transformation in order to
understand GLS.
Generalized Least Squares (continued)
• Start with an equation that has first-order serial
correlation:
  Yt = β0 + β1X1t + εt  (9.18)
• If εt = ρεt-1 + ut, then:
  Yt = β0 + β1X1t + ρεt-1 + ut  (9.19)
• Multiply Equation (9.18) by ρ and lag it by one period:
  ρYt-1 = ρβ0 + ρβ1X1t-1 + ρεt-1  (9.20)
• Subtract Equation (9.20) from Equation (9.19):
  Yt − ρYt-1 = β0(1 − ρ) + β1(X1t − ρX1t-1) + ut  (9.21)
Generalized Least Squares (continued)
• Equation (9.21) can be rewritten as:
  Yt* = β0* + β1X1t* + ut  (9.22)
where:
  Yt* = Yt − ρYt-1
  X1t* = X1t − ρX1t-1
  β0* = β0(1 − ρ)
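The transformation itself is a simple "quasi-differencing" of the data. A minimal sketch, assuming ρ is known:

```python
def quasi_difference(y, x, rho):
    """Apply the GLS transformation: Y*_t = Y_t - rho*Y_{t-1},
    X*_t = X_t - rho*X_{t-1}. The first observation is dropped."""
    y_star = [y[t] - rho * y[t - 1] for t in range(1, len(y))]
    x_star = [x[t] - rho * x[t - 1] for t in range(1, len(x))]
    return y_star, x_star

y_star, x_star = quasi_difference([1.0, 2.0, 3.0], [4.0, 5.0, 6.0], 0.5)
# y_star = [1.5, 2.0], x_star = [3.0, 3.5]
```

In practice ρ must be estimated, which is what the iterative methods below address.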
Generalized Least Squares (continued)
• Notice that in Equation 9.22:
1. The error term is not serially correlated.
2. The slope coefficient β1 is the same as the slope
coefficient of the original equation.
3. The dependent variable has changed, which means
the GLS R̄² is not comparable to the OLS R̄².
• OLS cannot be used to estimate a GLS model directly,
because ρ is unknown.
• There are a number of techniques that can be used to
estimate a GLS equation (the Prais-Winsten method is one).
Generalized Least Squares (continued)
• The Prais-Winsten method is a two-step, iterative technique.
  Step 1: Estimate ρ by running a regression based
  on the residuals:
    et = ρet-1 + ut
  Step 2: Use this ρ̂ to estimate GLS Equation (9.21) with
  OLS on the adjusted data.
• These steps are repeated until further iteration
results in little change in ρ̂.
• The last estimate of step 2 is the final estimate.
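A sketch of the iteration on simulated data. This is a simplified, Cochrane-Orcutt-style version: it drops the first observation, whereas full Prais-Winsten retains it rescaled by √(1 − ρ²). All data values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
T, rho_true = 400, 0.6

# Hypothetical model Y_t = 1 + 2*X_t + eps_t with AR(1) errors
x = rng.normal(size=T)
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = rho_true * eps[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + eps

rho_hat = 0.0
for _ in range(20):
    # Step 2: OLS on the quasi-differenced data; the intercept
    # column is (1 - rho), so beta[0] recovers beta_0 itself
    ys = y[1:] - rho_hat * y[:-1]
    Xs = np.column_stack([(1 - rho_hat) * np.ones(T - 1),
                          x[1:] - rho_hat * x[:-1]])
    beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
    # Step 1 (repeated): residuals of the ORIGINAL equation,
    # then re-estimate rho from e_t = rho * e_{t-1}
    e = y - beta[0] - beta[1] * x
    new_rho = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])
    if abs(new_rho - rho_hat) < 1e-6:
        break
    rho_hat = new_rho
```

The loop settles on estimates close to the true β1 = 2 and ρ = 0.6.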
Generalized Least Squares (continued)
Example: Chicken demand model
• OLS:
• GLS:
Newey-West Standard Errors
• Newey-West standard errors take account of the serial
correlation by changing the estimated standard errors
without changing the estimated betas.
• Newey-West standard errors are biased but are generally
more correct than uncorrected standard errors.
• Newey-West standard errors tend to be larger than OLS
standard errors, thus producing lower t-scores.
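A sketch of the estimator using Bartlett-kernel weights. The lag length L is a user choice (an assumption of the example); in practice one would typically rely on a library implementation such as statsmodels' `fit(cov_type='HAC')` rather than hand-rolled code:

```python
import numpy as np

def newey_west_se(X, e, L):
    """Newey-West (HAC) standard errors with Bartlett-kernel
    weights and L lags. X: (T, k) regressors; e: (T,) OLS residuals."""
    T = len(e)
    Xe = X * e[:, None]               # moment contributions x_t * e_t
    S = Xe.T @ Xe / T                 # lag-0 (White) term
    for l in range(1, L + 1):
        w = 1 - l / (L + 1)           # Bartlett weight keeps S psd
        G = Xe[l:].T @ Xe[:-l] / T
        S += w * (G + G.T)
    XtX_inv = np.linalg.inv(X.T @ X / T)
    V = XtX_inv @ S @ XtX_inv / T     # sandwich covariance of beta-hat
    return np.sqrt(np.diag(V))
```

Note that only the standard errors change: the coefficient estimates themselves are still the OLS estimates.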
Newey-West Standard Errors (continued)
Example: Chicken demand model
• OLS:
• Newey-West:
CHAPTER 9: the end