A Primer on Beveridge-Nelson Decomposition


The Ohio State University, 1998

This primer is on the Beveridge-Nelson decomposition. It will state the theorem and give a basic insight into its meaning. The discussion follows closely Hamilton (pp. 504-505). For a rigorous proof see pp. 534-535.

THEOREM [Beveridge-Nelson (1981)]. Let u_t = ψ(L)ε_t = Σ_{j=0}^∞ ψ_j ε_{t−j}, where

ε = {ε_t} is an innovation process and Σ_{j=0}^∞ j·|ψ_j| < ∞.

Then Σ_{s=1}^t u_s = ψ(1)·(Σ_{s=1}^t ε_s) + η_t − η_0, where

ψ(1) = Σ_{j=0}^∞ ψ_j ,  η_t = Σ_{j=0}^∞ α_j ε_{t−j} ,  α_j = −(ψ_{j+1} + ψ_{j+2} + ψ_{j+3} + …) ,  and Σ_{j=0}^∞ |α_j| < ∞.

First of all, this is a theorem about unit root processes. A process is called a unit root process, or a process integrated of order one, if its first differences are stationary in the wide sense. These are usually denoted as I(1) processes. All of the analysis of non-stationary time series revolves around I(1) processes, and you will be dealing with them until the end of the course. You are expected to have a profound understanding of them, so they will not be discussed here.
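As a quick numerical sketch of the definition (the MA(1) innovation process and all parameter values below are illustrative choices, not from the text): build a stationary u_t, accumulate it into an I(1) level series, and check that first-differencing recovers the stationary series.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000

# Stationary MA(1) innovations: u_t = e_t + 0.5 * e_{t-1}  (illustrative choice)
e = rng.standard_normal(T + 1)
u = e[1:] + 0.5 * e[:-1]

# The partial sums y_t = u_1 + ... + u_t form an I(1) process:
# nonstationary in levels, stationary in first differences.
y = np.cumsum(u)
dy = np.diff(y)

# Differencing recovers u_t (from the second observation on)
print(np.allclose(dy, u[1:]))  # True
```

The assertion that y_t is I(1) is exactly the third observation below: y_t − y_{t−1} = u_t by construction.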

Next, let us consider the condition Σ_{j=0}^∞ j·|ψ_j| < ∞. Without the factor j, this is an ordinary absolute-summability condition, which is natural to impose in cases like these in order to guarantee convergence of the sums. In our case, however, we impose a stronger condition that allows the theorem to work, though at no substantial cost: any stationary ARMA process satisfies this stronger condition, thus the condition is not as restrictive as it may seem. For those familiar with big O-little o notation, it says, roughly, that the ψ's tend to zero at order little o of j^{−2}: ψ_j = o(j^{−2}).
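To make the condition concrete, here is a small check for a stationary AR(1), for which ψ_j = φ^j (the value φ = 0.8 and the truncation point are illustrative assumptions, not from the text). For this geometric case the weighted sum Σ j·|ψ_j| has the closed form φ/(1−φ)², so it is finite, as claimed for any stationary ARMA process.

```python
import numpy as np

phi = 0.8                        # illustrative AR(1) coefficient
j = np.arange(0, 400)            # truncate the infinite sum (tail is negligible here)
psi = phi ** j                   # for AR(1), psi_j = phi**j

weighted_sum = np.sum(j * np.abs(psi))   # sum of j * |psi_j|
closed_form = phi / (1 - phi) ** 2       # known closed form for the geometric case

print(weighted_sum, closed_form)         # the two agree to high precision
```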

The third observation we should make is that y_t = Σ_{s=1}^t u_s = u_1 + u_2 + … + u_t is a unit root process, since its first difference, y_t − y_{t−1} = u_t, is a stationary process. This is the basis for our very first observation that this theorem is about unit root processes.

Fourth, the α_j are represented as infinite sums of ψ's:

α_j = −(ψ_{j+1} + ψ_{j+2} + ψ_{j+3} + …). It just happens that this representation does the job for the theorem. What one may notice here is that although each α_j is an infinite sum of ψ's, the ψ's are of order o(j^{−2}), i.e., they tend to zero very fast, so we have the guarantee that α_j is of order little o of 1/j. This is a basic result from real analysis: the tail sum of a series whose terms are of order (−k) is of order (−k+1); in our case k = 2. At this moment we can see why the theorem requires the stronger condition discussed above: it guarantees that the α_j tend to zero fast enough to guarantee the convergence of η_t = Σ_{j=0}^∞ α_j ε_{t−j} and of ψ(1)·(Σ_{s=1}^t ε_s).
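A numerical sketch of this construction (again using the illustrative AR(1) case, ψ_j = φ^j, which is my assumption and not in the text): compute the α_j directly as tail sums of the ψ's, compare them with the closed form −φ^{j+1}/(1−φ) for this case, and verify that they are absolutely summable.

```python
import numpy as np

phi = 0.8
J = 400
psi = phi ** np.arange(J + 1)            # illustrative AR(1): psi_j = phi**j

# tails[j] = psi_j + psi_{j+1} + ... + psi_J  (reverse cumulative sum)
tails = np.cumsum(psi[::-1])[::-1]
alpha = -(tails - psi)                   # alpha_j = -(psi_{j+1} + psi_{j+2} + ...)

# Closed form for the geometric case: alpha_j = -phi^(j+1) / (1 - phi)
jj = np.arange(100)
print(np.allclose(alpha[jj], -phi ** (jj + 1) / (1 - phi)))  # True

# The alpha's are absolutely summable, so eta is stationary
print(np.sum(np.abs(alpha)))             # a finite number
```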

Our fifth observation is that Σ_{s=1}^t ε_s is a very special type of I(1) stochastic process, known everywhere in the literature as a random walk. For a discussion of the basics of this process, you may refer to pages 435-436-… in Hamilton.

Having noticed already that we have imposed on the α's an absolute summability condition, that is, Σ_{j=0}^∞ |α_j| < ∞, we have guaranteed ourselves that the process η = {η_t} is stationary. This final observation allows us to assemble the pieces of the theorem.

Now we are ready to interpret the conclusion of the theorem. It states that any I(1) process [in our case u_1 + u_2 + … + u_t] may be represented as the sum of a random walk [ψ(1)·(ε_1 + ε_2 + … + ε_t); notice that ψ(1) is just a constant], a stationary process [in our case η_t], and some initial condition [η_0]. It is precisely this theorem that allows us to think of I(1) processes as random walks plus some stationary components. Moreover, it tells us that an insight into the random walk process is sufficient for a deep understanding of (discrete) unit root processes.
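The whole decomposition can be verified numerically in a case where everything has a closed form. The sketch below uses an AR(1), u_t = φ·u_{t−1} + ε_t, started at u_0 = 0 (the model and φ = 0.6 are illustrative assumptions, not from the text). For this model ψ_j = φ^j, so ψ(1) = 1/(1−φ) and η_t = Σ α_j ε_{t−j} collapses to −φ/(1−φ)·u_t, which makes the identity easy to check exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
phi, T = 0.6, 500                # illustrative parameter choices

# AR(1): u_t = phi * u_{t-1} + e_t, started at u_0 = 0
e = rng.standard_normal(T + 1)
u = np.zeros(T + 1)
for t in range(1, T + 1):
    u[t] = phi * u[t - 1] + e[t]

psi1 = 1.0 / (1.0 - phi)         # psi(1) = sum of phi**j over j
eta = -(phi / (1.0 - phi)) * u   # eta_t = sum_j alpha_j e_{t-j} = -phi/(1-phi) * u_t

# Beveridge-Nelson: u_1 + ... + u_t = psi(1)*(e_1 + ... + e_t) + eta_t - eta_0
lhs = np.cumsum(u[1:])                              # partial sums of u (the I(1) process)
rhs = psi1 * np.cumsum(e[1:]) + eta[1:] - eta[0]    # random walk + stationary part + initial condition
print(np.allclose(lhs, rhs))     # True
```

The random-walk component ψ(1)·Σε_s carries the permanent (trend) part, while η_t is the transitory part; here the identity holds exactly, not just asymptotically.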

The reader is advised to carefully go through the example of how this result may be used which starts at the last paragraph on Hamilton’s p.504.
