A PRIMER ON THE CONDITION FOR STATIONARITY
OF
SYSTEM-RELATED MULTIPLE TIME SERIES

SUPPLEMENTARY READING FOR
AGECON 993.07: TIME SERIES ANALYSIS WORKSHOP

The Ohio State University, 1998

KRASSIMIR PETROV

This primer investigates the stability of a system of time-series processes. It should be read as a direct extension of the primer on the stationarity condition for a single time series. The approach and objectives remain unchanged: an informal and intuitive understanding of the condition for stationarity.

As a reminder, roughly stated, an autoregressive process is stationary if and only if all the roots of its autoregressive polynomial lie outside the unit circle.
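
To make the single-equation condition concrete, here is a minimal sketch (assuming NumPy is available; the coefficient values are illustrative, not taken from the text) of the polynomial-root check for an AR(2) process $y_t = 0.5\,y_{t-1} + 0.3\,y_{t-2} + \varepsilon_t$:

```python
import numpy as np

phi1, phi2 = 0.5, 0.3
# np.roots expects coefficients from the highest power down, so the
# AR polynomial 1 - phi1*z - phi2*z^2 is passed as [-phi2, -phi1, 1].
roots = np.roots([-phi2, -phi1, 1.0])
print(roots)                            # approximately -2.84 and 1.17
print(all(abs(r) > 1 for r in roots))   # True -> the AR(2) is stationary
```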

Our current interest is in a stationarity condition for a system of multiple time series. These are often referred to in the literature as vector autoregressions (VARs). For vector autoregressions, the condition for stationarity is very similar to that for a single autoregression. We cite the proposition as it appears at the very bottom of p. 259 of Hamilton's textbook.

PROPOSITION: A VAR(p) is covariance-stationary if all values of $z$ satisfying $\det(I_n - \Phi_1 z - \Phi_2 z^2 - \cdots - \Phi_p z^p) = 0$ lie outside the unit circle.

Let us attempt to clarify this proposition. First, the form of the vector autoregression is $y_t = c + \Phi_1 y_{t-1} + \Phi_2 y_{t-2} + \cdots + \Phi_p y_{t-p} + \varepsilon_t$. This should make clear what $\Phi_i$ means and how it is defined. We need only note that it is an $n \times n$ matrix, where $n$ is the number of components in $y$, i.e., the number of time series in the vector autoregression.
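
For concreteness, here is a hypothetical bivariate VAR(1), with made-up coefficients, written out and simulated in a short NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.array([0.1, -0.2])               # intercept vector
Phi1 = np.array([[0.5, 0.1],
                 [0.2, 0.4]])           # the n x n coefficient matrix, n = 2

# Simulate 100 observations of y_t = c + Phi1 y_{t-1} + e_t.
y = np.zeros((100, 2))
for t in range(1, 100):
    y[t] = c + Phi1 @ y[t - 1] + rng.normal(size=2)
```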

Second, our roots must still lie outside the unit circle.

Third, covariance stationarity is the term used by Hamilton to refer to stationarity in the broad (weak) sense.

Fourth, and critically important, this is a determinantal equation. Few textbooks in numerical methods discuss how to solve a determinantal equation, and it is considered a fairly advanced topic.
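
For the smallest case, however, the equation can be expanded by hand: for a bivariate VAR(1), $\det(I - \Phi_1 z) = 1 - \operatorname{tr}(\Phi_1)\,z + \det(\Phi_1)\,z^2$. The following sketch (assuming NumPy, and reusing the illustrative $\Phi_1$ above) finds the roots and applies the outside-the-unit-circle test:

```python
import numpy as np

Phi1 = np.array([[0.5, 0.1],
                 [0.2, 0.4]])
tr, d = np.trace(Phi1), np.linalg.det(Phi1)
roots = np.roots([d, -tr, 1.0])          # coefficients of z^2, z, 1
print(roots)                             # approximately 3.33 and 1.67
print(all(abs(z) > 1 for z in roots))    # True -> covariance-stationary
```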

In conclusion, everything else in this proposition should be clear to those familiar with the single-equation stationarity condition. We will leave this problem here and state a theorem that does not require finding the roots of a determinantal equation (notice that the order of the equation is n·p) but instead requires finding characteristic roots. If you remember your assignment from Prof. Thraen, there were two standard methods for determining whether a process is stationary: the eigenvalue method and the polynomial-root method. Now, for the purpose of applications, without building new numerical techniques, we must resort to the eigenvalue method. The condition is stated in the following

THEOREM. A VAR(p) is covariance-stationary if all eigenvalues of F lie inside the unit circle.

First of all, we must clarify the definition of the matrix F. It is defined as follows (eq. [10.1.10] from Hamilton):

$$
F = \begin{bmatrix}
\Phi_1 & \Phi_2 & \cdots & \Phi_{p-1} & \Phi_p \\
I_n & 0 & \cdots & 0 & 0 \\
0 & I_n & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & I_n & 0
\end{bmatrix}
$$

This equation is derived by rewriting the VAR(p) model as a VAR(1) model in the stacked vector $\xi_t = (y_t', y_{t-1}', \dots, y_{t-p+1}')'$ by means of the "megamatrix" F. This approach, while somewhat artificial, should be crystal-clear to any student. The student is strongly encouraged to attempt to rewrite a single AR(p) model as an AR(1); this is all done in Hamilton on pages 7-8. Note how the matrix F in [1.2.3] looks just like our matrix F.
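
The construction can also be automated. The following sketch (assuming NumPy; the helper name `companion` and the coefficient values are our own, not from the text) builds F from the list of coefficient matrices and applies the eigenvalue test:

```python
import numpy as np

def companion(Phi):
    """Stack the n x n matrices Phi_1, ..., Phi_p into the np x np matrix F."""
    p = len(Phi)
    n = Phi[0].shape[0]
    F = np.zeros((n * p, n * p))
    F[:n, :] = np.hstack(Phi)            # top block row: Phi_1 ... Phi_p
    F[n:, :-n] = np.eye(n * (p - 1))     # identity blocks below, zeros elsewhere
    return F

# Illustrative bivariate VAR(2):
Phi = [np.array([[0.5, 0.1], [0.2, 0.4]]),
       np.array([[0.1, 0.0], [0.0, 0.1]])]
F = companion(Phi)
eigvals = np.linalg.eigvals(F)
print(np.max(np.abs(eigvals)))   # < 1 -> the VAR(2) is covariance-stationary
```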

Second, the requirement is that the eigenvalues lie inside the unit circle, not outside. For an AR(1) process $y_t = A y_{t-1} + \varepsilon_t$ we would normally require that $|A| < 1$, but in the multiple time-series case A is a matrix, not a scalar, and the requirement becomes that all eigenvalues of A be less than one in absolute value. The intuition behind this requirement is somewhat harder to grasp. Here is one way to think about it. When the matrix A is not diagonal, there are very complicated relationships among the variables. However, when A is diagonal, the variables are not related. In this case, where interaction between the variables is precluded, our single-equation condition applies equation by equation: we want all of our autoregressive coefficients to be less than one in absolute value, that is, we require that all diagonal elements of A be less than one in absolute value. Now, the crux of understanding the general requirement lies in the fact that we may transform A into different equivalent forms, and the most convenient form is, of course, the diagonal one. It turns out that when a matrix is diagonalized, the elements on its main diagonal are exactly its characteristic roots. Moreover, a deeper look at the definition of characteristic roots shows that they are defined precisely so that, upon diagonalizing the matrix, the roots themselves appear on the diagonal. For more on characteristic roots, the reader is referred to the textbook Linear Algebra with Applications by Leon or to Bellman's classic Introduction to Matrix Analysis.
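
A small sketch (assuming NumPy; the matrix A is the same illustrative one as above) of the diagonalization argument: the eigendecomposition $A = P D P^{-1}$ puts the characteristic roots of A on the diagonal of D, so checking them is the matrix analogue of checking a scalar AR coefficient:

```python
import numpy as np

A = np.array([[0.5, 0.1],
              [0.2, 0.4]])
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)                              # roots on the main diagonal
print(np.allclose(P @ D @ np.linalg.inv(P), A))   # True: A = P D P^{-1}
print(all(abs(eigvals) < 1))                      # True -> stationary VAR(1)
```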

At this point we should clear up the confusion about the outside/inside unit circle requirement. The former refers to the roots of the determinantal polynomial equation, and the latter refers to the eigenvalues of the companion matrix F. Nonetheless, you may sometimes read in the literature that the requirement is that the eigenvalues be positive, or lie outside the unit circle. But the eigenvalues of which matrix?
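
Before answering, note that the inside/outside duality itself is easy to verify numerically: the roots z of the determinantal polynomial are the reciprocals of the nonzero eigenvalues of F, so "roots outside" and "eigenvalues inside" are the same condition. A sketch, assuming NumPy and the illustrative VAR(1) from above:

```python
import numpy as np

Phi1 = np.array([[0.5, 0.1],
                 [0.2, 0.4]])
# For a VAR(1) the companion matrix F is just Phi1 itself.
eigvals = np.linalg.eigvals(Phi1)
roots = np.roots([np.linalg.det(Phi1), -np.trace(Phi1), 1.0])
print(np.sort(eigvals))        # [0.3 0.6]  -> inside the unit circle
print(np.sort(1.0 / roots))    # [0.3 0.6]  -> reciprocals of the roots
```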

The answer is in the following

THEOREM: The following three conditions are equivalent (A is assumed invertible in (b)):

(a) the eigenvalues of A lie inside the unit circle;

(b) the eigenvalues of $A^{-1}$ lie outside the unit circle;

(c) the eigenvalues of $(I - A)$ have positive real parts.
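
A quick numerical sanity check of the three conditions, in a NumPy sketch with an illustrative matrix whose eigenvalues lie inside the unit circle:

```python
import numpy as np

A = np.array([[0.5, 0.1],
              [0.2, 0.4]])
print(all(abs(np.linalg.eigvals(A)) < 1))                  # (a) holds
print(all(abs(np.linalg.eigvals(np.linalg.inv(A))) > 1))   # (b) holds
print(all(np.linalg.eigvals(np.eye(2) - A).real > 0))      # (c) holds
```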
