Dickey and Fuller developed a procedure for testing whether a variable has a unit root or, equivalently, whether the variable follows a random walk. The null hypothesis is that the variable contains a unit root, and the alternative is that the variable was generated by a stationary process. You may optionally exclude the constant, include a trend term, and include lagged values of the difference of the variable in the regression. Hamilton (1994, 528–529) describes the four cases to which the augmented Dickey–Fuller test can be applied. The null hypothesis is always that the variable has a unit root; the cases differ in whether the null hypothesis includes a drift term and in whether the regression used to obtain the test statistic includes a constant term and a time trend. Becketti (2013, chap. 9) provides additional examples showing how to conduct these tests. The true model is assumed to be

y_t = α + y_{t−1} + u_t
where u_t is an independently and identically distributed zero-mean error term. In cases one and two, presumably α = 0, so that y_t is a random walk without drift. In cases three and four, we allow for a drift term by letting α be unrestricted. The Dickey–Fuller test involves fitting the model

y_t = α + ρy_{t−1} + δt + u_t
by ordinary least squares (OLS), perhaps setting α = 0 or δ = 0. However, such a regression is likely to be plagued by serial correlation. To control for that, the augmented Dickey–Fuller test instead fits a model of the form
∆y_t = α + βy_{t−1} + δt + ζ_1∆y_{t−1} + ζ_2∆y_{t−2} + · · · + ζ_k∆y_{t−k} + ε_t    (1)
where k is the number of lags specified in the lags() option. The noconstant option removes the constant term α from this regression, and the trend option includes the time trend δt, which by default is not included. Because β = ρ − 1, testing β = 0 is equivalent to testing ρ = 1, that is, that y_t follows a unit-root process. In the first case, the null hypothesis is that y_t follows a random walk without drift, and (1) is fit without the constant term α and the time trend δt. The second case has the same null hypothesis as the first, except that we include α in the regression; in both cases, the population value of α is zero under the null hypothesis. In the third case, we hypothesize that y_t follows a unit root with drift, so that the population value of α is nonzero; we do not include the time trend in the regression. Finally, in the fourth case, the null hypothesis is that y_t follows a unit root with or without drift, so that α is unrestricted, and we include a time trend in the regression.
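As an illustrative sketch (not Stata's dfuller implementation), the augmented regression with a constant and one lagged difference can be fit by ordinary least squares directly. The helper names solve, adf_beta, and simulate below are hypothetical, and the parameter values are made up for the example; in practice the statistic on β must be compared with Dickey–Fuller critical values, not standard t tables.

```python
import random

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [M[r][j] - f * M[c][j] for j in range(n + 1)]
    return [M[i][n] / M[i][i] for i in range(n)]

def adf_beta(y):
    """OLS estimate of beta in dy_t = alpha + beta*y_{t-1} + zeta1*dy_{t-1} + e_t (k = 1)."""
    dy = [y[i + 1] - y[i] for i in range(len(y) - 1)]
    rows = [([1.0, y[i], dy[i - 1]], dy[i]) for i in range(1, len(dy))]
    XtX = [[sum(x[a] * x[b] for x, _ in rows) for b in range(3)] for a in range(3)]
    Xty = [sum(x[a] * t for x, t in rows) for a in range(3)]
    return solve(XtX, Xty)[1]          # coefficient on y_{t-1}

def simulate(rho, n=500, seed=42):
    """Simulate y_t = rho*y_{t-1} + u_t with i.i.d. N(0, 1) errors."""
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(n):
        y.append(rho * y[-1] + rng.gauss(0, 1))
    return y

beta_stationary = adf_beta(simulate(rho=0.5))   # true beta = rho - 1 = -0.5
beta_unit_root = adf_beta(simulate(rho=1.0))    # true beta = 0
```

Under the stationary alternative the estimate of β is clearly negative, while under the unit-root null it sits near zero, which is what the test exploits.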
To understand the econometric issues associated with unit root and stationarity tests, consider the stylized trend-cycle decomposition of a time series y_t:
y_t = TD_t + z_t
TD_t = κ + δt
z_t = φz_{t−1} + ε_t,  ε_t ∼ WN(0, σ²)
where TD_t is a deterministic linear trend and z_t is an AR(1) process. If |φ| < 1, then y_t is I(0) about the deterministic trend TD_t. If φ = 1, then z_t = z_{t−1} + ε_t = z_0 + Σ_{j=1}^{t} ε_j, a stochastic trend, and y_t is I(1) with drift. Simulated I(1) and I(0) data with κ = 5 and δ = 0.1 are illustrated in Figure 4.1. The I(0) data follow the trend TD_t = 5 + 0.1t very closely and exhibit trend reversion. In contrast, the I(1) data drift upward but do not necessarily revert to TD_t.
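The stochastic-trend algebra above — that with φ = 1 the recursion collapses to z_t = z_0 + Σ_{j=1}^{t} ε_j — can be checked numerically. This is a small illustrative sketch with arbitrary parameter values, not code from the text:

```python
import random

rng = random.Random(1)
eps = [rng.gauss(0, 1) for _ in range(200)]

# Recursion with phi = 1: z_t = z_{t-1} + eps_t, starting from z_0
z0 = 5.0
z = [z0]
for e in eps:
    z.append(z[-1] + e)

# Closed form: z_t = z_0 + sum of eps_1, ..., eps_t
closed = [z0 + sum(eps[:t]) for t in range(len(eps) + 1)]
identical = all(abs(a - b) < 1e-9 for a, b in zip(z, closed))
```

Each shock ε_j enters z_t permanently, which is exactly why the I(1) series never needs to revert to the deterministic trend.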
Autoregressive unit root tests are based on testing the null hypothesis that φ = 1 (difference stationarity) against the alternative hypothesis that φ < 1 (trend stationarity). They are called unit root tests because under the null hypothesis the autoregressive polynomial of z_t, φ(z) = (1 − φz) = 0, has a root equal to unity. Stationarity tests instead take trend stationarity as the null hypothesis. If the trend-stationary y_t is first differenced, it becomes
∆y_t = δ + ∆z_t
∆z_t = φ∆z_{t−1} + ε_t − ε_{t−1}
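To see what these equations imply, a short simulation (with hypothetical parameter values) shows that first differencing a trend-stationary series removes the deterministic trend — the sample mean of ∆y_t is close to δ — at the cost of introducing the MA(1)-type error ε_t − ε_{t−1} in ∆z_t:

```python
import random

rng = random.Random(7)
kappa, delta, phi = 5.0, 0.1, 0.5     # assumed values matching the Figure 4.1 setup
z, y = 0.0, []
for t in range(2001):
    z = phi * z + rng.gauss(0, 1)     # stationary AR(1) cycle z_t
    y.append(kappa + delta * t + z)   # trend-stationary y_t = TD_t + z_t

dy = [y[t] - y[t - 1] for t in range(1, len(y))]
mean_dy = sum(dy) / len(dy)           # telescopes to delta + (z_T - z_1) / T, close to delta
```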
Phillips and Perron (1988) developed a number of unit root tests that have become popular in the analysis of financial time series. The Phillips–Perron (PP) unit root tests differ from the ADF tests mainly in how they deal with serial correlation and heteroskedasticity in the errors. In particular, where the ADF tests use a parametric autoregression to approximate the ARMA structure of the errors in the test regression, the PP tests ignore any serial correlation in the test regression and instead correct the test statistics nonparametrically. The test regression for the PP tests is

∆y_t = β′D_t + πy_{t−1} + u_t

where D_t is a vector of deterministic terms (constant, trend, etc.).
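As an illustrative sketch, with the deterministic term D_t reduced to a constant, the PP coefficient π can be estimated by a simple OLS regression of ∆y_t on y_{t−1}; the nonparametric correction to the resulting test statistic — the part that actually distinguishes PP from ADF — is omitted here, and the simulated data are hypothetical.

```python
import random

rng = random.Random(3)
y = [0.0]
for _ in range(500):
    y.append(0.8 * y[-1] + rng.gauss(0, 1))   # stationary AR(1): true pi = 0.8 - 1 = -0.2

dy = [y[t] - y[t - 1] for t in range(1, len(y))]
x = y[:-1]                                    # regressor y_{t-1}
xbar, dbar = sum(x) / len(x), sum(dy) / len(dy)
num = sum((xi - xbar) * (di - dbar) for xi, di in zip(x, dy))
den = sum((xi - xbar) ** 2 for xi in x)
pi_hat = num / den                            # slope from the demeaned regression
```

Note this is the same regression as (1) with no lagged differences: PP keeps the regression unaugmented and pushes the serial-correlation adjustment into the test statistic instead.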