
Karhunen-Loève expansions: A primer

March 17, 2011

Somehow I’ve managed to have nothing to do with Karhunen-Loève expansions of random processes at work or in the lab. A couple of weeks ago, though, I started working with stochastic collocation for partial differential equations, and there they were: Karhunen-Loève expansions. That name, Karhunen-Loève, is long enough to promise good entertainment, and since probability theory remains on the list of topics that give me an insta-headache every time I have to deal with them, it seemed to me, well, not a recipe for a good time. To my surprise, though, after a couple of days of hands-on work I can say this is actually a lot of fun (well, at least as fun as probability theory can be in your book). The topic allows going from a state of almost-no-idea-whatsoever to oh-now-I-kinda-get-what’s-going-on in a not painfully long time. Let’s summarize, then, some of the stuff I’ve learned about K-L expansions and in the meantime practice some \LaTeX.

Let’s say we have a second-order centered random process {Y(x)}, {x \in D}, with {D=[0,L]} (that is, with zero mean, {\langle Y(x)\rangle=0}). We’ll assume that {Y(x)} is continuous in quadratic mean (q.m.), so that we can employ the calculus in q.m. developed by Loève (1978) (for more on that, and expositions on the topic with actual technical rigour, check that reference and Potthoff, 2010). This process can be represented as an infinite linear combination of orthonormal functions whose coefficients are zero-mean random variables {\{Y_i\}}, that is,

\displaystyle  Y(x)=\sum_{i=1}^{\infty} Y_i\phi_i(x).

As they are orthonormal, the functions {\phi_i} satisfy {\int_D \phi_m(x)\phi_n(x)dx=\delta_{mn}}. The coefficients are then given by

\displaystyle  Y_i=\int_D Y(x)\phi_i(x)dx,

as is customary with orthogonal expansions. K-L expansions require the random variables to be mutually uncorrelated, i.e., {\langle Y_i Y_j\rangle=\sigma^2_{Y_j}\delta_{ij}}, with {\sigma^2_{Y_j}} the variance of {Y_j}. What does this say about the random variables and the orthonormal functions? To investigate this, we’ll follow the procedure outlined in this article, starting with using the expression above for the {Y_i} to rewrite the covariance of the random variables:

\displaystyle  \langle Y_iY_j \rangle = \left\langle \int_D Y(x)\phi_i(x)dx \int_D Y(x')\phi_j(x')dx' \right\rangle = \int_{D\times D} C_Y(x,x')\phi_i(x)\phi_j(x')dxdx',

where {C_Y(x,x')} is the covariance of {Y(x)} (which exists, as {Y(x)} is second order). This last expression can be rewritten as

\displaystyle  \int_D \phi_i(x) \left\{\int_D C_Y(x,x') \phi_j(x') dx'\right\}dx=\sigma^2_{Y_j}\delta_{ij},

which can be easily satisfied by setting the stuff in the curly brackets equal to {\sigma^2_{Y_j}\phi_j(x)}. If we let {\sigma^2_{Y_j}=\lambda_j} we obtain the following relation:

\displaystyle   \int_D C_Y(x,x') \phi_j(x') dx' = \lambda_j \phi_j(x) \ \ \ \ \ (1)

This result, a homogeneous Fredholm integral equation of the second kind, states the relationship satisfied by each of the {\phi_j} and the variance of its corresponding random variable. It is not difficult to see that {\{\phi_j\}} and {\{\lambda_j\}} are the sets of eigenfunctions and eigenvalues of the linear operator

\displaystyle  [\mathcal{T}_{C_Y} \phi](x) = \int_D C_Y(x,x') \phi(x') dx',

that is, {[\mathcal{T}_{C_Y} \phi_j](x) = \lambda_j \phi_j(x)}.
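Equation (1) rarely has a closed-form solution, but it discretizes nicely. Here’s a minimal numerical sketch — not from the post, and using an exponential covariance {e^{-|x-x'|}} on {[0,1]} as a stand-in example — that approximates the integral operator on a uniform grid (the Nyström method) and solves the resulting matrix eigenproblem:

```python
import numpy as np

# Stand-in example: exponential covariance C(x, x') = exp(-|x - x'|)
# on D = [0, 1], discretized on a uniform grid (Nystrom method).
L_dom, n = 1.0, 200
x = np.linspace(0.0, L_dom, n)
w = L_dom / n                       # uniform quadrature weight
C = np.exp(-np.abs(x[:, None] - x[None, :]))

# Discretized operator: (T phi)(x_k) ~ sum_m C(x_k, x_m) phi(x_m) w.
# With uniform weights, C * w is symmetric, so eigh applies directly.
evals, evecs = np.linalg.eigh(C * w)
idx = np.argsort(evals)[::-1]       # sort eigenvalues descending
lam = evals[idx]                    # approximations to lambda_i
phi = evecs[:, idx] / np.sqrt(w)    # rescale so that int phi_i^2 dx = 1

# Discrete check of orthonormality: int phi_i phi_j dx ~ delta_ij
gram = phi.T @ phi * w
```

Note that for non-uniform quadrature weights one would symmetrize with {\sqrt{w_k}} factors before calling the eigensolver; the uniform grid keeps the sketch short.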

The usual form of the K-L expansion is obtained by writing {Y_i=\sqrt{\lambda_i} \xi_i}, where the {\xi_i} are zero-mean, unit variance random variables. The final result is then

\displaystyle   Y(x)=\sum_{i=1}^{\infty} \sqrt{\lambda_i} \xi_i \phi_i(x), \ \ \ \ \ (2)

where {\phi_i}, {\lambda_i} satisfy (1) and the random variables {\xi_i} are given by

\displaystyle   \xi_i=\frac{1}{\sqrt{\lambda_i}}\int_D Y(x)\phi_i(x)dx, \ \ \ \ \ (3)

with {\langle \xi_i \xi_j \rangle = \delta_{ij}}. It can be proved that the series (2) converges in quadratic mean, uniformly in {D}. For this, we’ll follow the procedure of Loève (1978) and start by introducing Mercer’s theorem: a continuous function {C_Y(x,x')} of nonnegative-definite type in {D \times D} can be expanded as

\displaystyle  C_Y(x,x') = \sum^{\infty}_{i=1} \lambda_i \phi_i(x) \phi_i(x'),

where {\phi_i}, {\lambda_i} are again the solutions of (1). This series converges absolutely and uniformly on {D \times D}. Covariance functions are nonnegative-definite, so good times. Now, equation (3) implies {\langle Y(x) \xi_i \rangle = \sqrt{\lambda_i} \phi_i(x)}; we also need to define
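Mercer’s theorem can be checked numerically with the same hypothetical setup as before (exponential covariance on a uniform grid, numpy assumed): reconstruct the covariance from a growing number of eigenpairs and watch the uniform error on the grid shrink.

```python
import numpy as np

# Same stand-in example: C(x, x') = exp(-|x - x'|) on [0, 1].
n = 200
x = np.linspace(0.0, 1.0, n)
w = 1.0 / n
C = np.exp(-np.abs(x[:, None] - x[None, :]))

evals, evecs = np.linalg.eigh(C * w)    # eigh returns ascending order
lam = evals[::-1]                       # descending eigenvalues
phi = evecs[:, ::-1] / np.sqrt(w)       # normalized eigenfunctions

# Partial Mercer sums: C_m(x, x') = sum_{i < m} lam_i phi_i(x) phi_i(x')
for m in (5, 20, 100):
    C_m = (phi[:, :m] * lam[:m]) @ phi[:, :m].T
    err = np.max(np.abs(C - C_m))       # uniform error on the grid
    print(f"m = {m:3d}, max |C - C_m| = {err:.2e}")
```

Since {C - C_m} is itself nonnegative-definite, its largest absolute entry sits on the diagonal, and the diagonal only shrinks as terms are added — a discrete shadow of the uniform convergence the theorem promises.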

\displaystyle  Y_n(x)=\sum_{i=1}^{n} \sqrt{\lambda_i} \xi_i \phi_i(x).

With these elements, it’s easy to see that

\displaystyle  \left\langle [Y(x)-Y_n(x)]^2 \right\rangle = \sigma^2_{Y(x)} - \sum^n_{i=1} \lambda_i \phi^2_i(x).

By virtue of Mercer’s theorem, the sum on the RHS converges to {C_Y(x,x)=\sigma^2_{Y(x)}} as {n \rightarrow \infty}, and so the LHS converges to zero uniformly on {D}.
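To tie it together, here’s a sampling sketch under the same assumptions as the earlier snippets (exponential covariance as a stand-in, numpy): draw Gaussian realizations via the truncated series (2) with i.i.d. standard normal {\xi_i}, then compare the empirical covariance of the samples against {C_Y}.

```python
import numpy as np

# Draw realizations of a zero-mean Gaussian process with the stand-in
# covariance exp(-|x - x'|) via the truncated K-L expansion (2).
rng = np.random.default_rng(0)
n, n_terms, n_samples = 200, 50, 20000
x = np.linspace(0.0, 1.0, n)
w = 1.0 / n
C = np.exp(-np.abs(x[:, None] - x[None, :]))

evals, evecs = np.linalg.eigh(C * w)
lam = evals[::-1][:n_terms]                  # n_terms largest eigenvalues
phi = evecs[:, ::-1][:, :n_terms] / np.sqrt(w)

# Y(x) = sum_i sqrt(lam_i) xi_i phi_i(x), with xi_i ~ N(0, 1) i.i.d.
xi = rng.standard_normal((n_samples, n_terms))
Y = xi @ (np.sqrt(lam) * phi).T              # shape (n_samples, n)

# Empirical covariance of the samples vs. the target covariance
C_emp = Y.T @ Y / n_samples
print("max |C_emp - C| =", np.max(np.abs(C_emp - C)))
```

The residual gap has two sources: the Monte Carlo noise (shrinking like {1/\sqrt{n_{samples}}}) and the truncation bias from keeping only {n_{terms}} eigenpairs, which Mercer’s theorem says vanishes as the truncation grows.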

Well, that’s gonna be pretty much it for the moment. I just want to add that this entry was brought to you with the help of LaTeX2WP, this super-neat Python script that converts a LaTeX file into discernible HTML ready to copy-paste into the WordPress editor. Awesome! Now go and do something fun.

References
Adler, R.J. (1990). An Introduction to Continuity, Extrema and Related Topics for General Gaussian Processes. Institute of Mathematical Statistics.
Johnson, D. (n.d.). Karhunen-Loève Expansions. In Connexions. Retrieved March 17, 2011 from http://cnx.org/content/m11259/latest/
Loève, M. (1978). Probability theory. Vol. II, 4th ed. New York: Springer-Verlag.
Potthoff, J. (2010). Sample properties of random fields — III: Differentiability. Commun. Stoch. Anal., 4, 335-353.
And obviously, the Wikipedia article:
Karhunen-Loève theorem. (n.d.). In Wikipedia. Retrieved March 17, 2011 from http://en.wikipedia.org/wiki/Karhunen-Loève_theorem

3 Comments
  1. March 17, 2011 6:40 pm

    I’ll stick with my small Pastor López. Still, the Karhunen love parade kicks asses! It rules!

  2. May 10, 2011 2:50 am

    Hello,

    The Karhunen-Loève expansion does NOT require the random variables to be uncorrelated – this is just a special case of the KL-expansion (Gaussian random fields). In general it is wrong to assume uncorrelated random variables.

    ..see:
    Stochastic finite elements: a spectral approach
    By Roger G. Ghanem, Pol D. Spanos
    page: 23 – after equation 2.30

    http://books.google.com/books?id=WzgKyTQQcAwC&printsec=frontcover&dq=spanos&hl=en&ei=4RXJTdPnDMr1-gaS96XSBg&sa=X&oi=book_result&ct=result&resnum=3&ved=0CDUQ6AEwAg#v=onepage&q=karhunen&f=false

    Best Regards,
    Wolfgang Betz

    • May 11, 2011 8:39 am

      I was wrong, the random variables are uncorrelated (since they have zero mean) – but not necessarily independent.
