A Discrete Fourier Transform View of Polynomial Approximation Methods in Uncertainty Quantification

This post was inspired by a recent email exchange with Leo Ng, a graduate student in Karen Willcox's research group in MIT's Aero/Astro department.

There is no shortage of names for polynomial approximation methods in uncertainty quantification (UQ): polynomial chaos, generalized polynomial chaos (gPC), stochastic Galerkin, stochastic collocation, non-intrusive spectral projection, pseudospectral methods, stochastic expansion methods, polynomial dimensional decomposition — did I miss any? And then there are various tweaks for improving each of these, like sparse grids, sparse sampling strategies, sparse approximation strategies (e.g., compressed sensing), anisotropic methods that favor certain input variables over others, or dimension reductions that ignore unimportant input variables. Everyone has his or her favorite name, and the variations are easy to mix and match to suit a particular application (or publish a new paper).

The common element here is the polynomial approximation; I have an unknown function – presumably sufficiently smooth – that I want to approximate with a polynomial of the input variables. How do I arrange the monomial terms into a set of basis polynomials? And once I decide on that, how do I compute coefficients of a series in that basis such that the resulting polynomial is close to the unknown function? Often there is a system of equations that defines the function implicitly, like a partial differential equation with operators that depend on input parameters representing uncertainty. In such cases, the system residual can be used to build a minimization problem to compute the coefficients — and this gives us the Galerkin approximation. I'll leave the Galerkin system for another post and instead focus on the case where I have a function that I can evaluate directly given values for the inputs — the so-called non-intrusive variety.

I'm going to discuss these polynomial approximation ideas using terms more familiar to folks in signal processing, like discrete Fourier transform. This is certainly not the first time polynomial approximation has been treated this way. In fact, the top-notch team on Nick Trefethen's Chebfun project has been analyzing and implementing polynomial approximation in this manner since the project got started. They take advantage of the connections between Chebyshev polynomials/points and cosine transforms to implement fast transform algorithms.

In contrast, I'll discuss similar transforms using Gaussian quadrature points, since these are more common in UQ due to their connections with density functions of random variables. Unfortunately, we won't get any fast algorithms out of this approach; all of the transform algorithms will cost O(n^2) operations instead of O(n \log n). This isn't typically a problem in UQ applications, where we assume that the function evaluations are significantly more costly than any transform.

Okay, why do I keep mentioning transforms? Let’s get to that.

Assume I have a function f(x) that maps the interval [-1,1] into the reals. Equip the input space with the uniform measure, which has density 1/2; seasoned UQ-ers will recognize this as the case corresponding to a uniform random variable x. I want to approximate f(x) by a polynomial in x. My approximation model is

f(x) \;\approx\; \sum_{i=1}^n \hat{f}_i\, \pi_i(x) \;=\; \hat{\mathbf{f}}^T\mathbf{\pi}(x),

where \pi_i(x) is the degree i-1 polynomial from the set of orthonormal Legendre polynomials. Note that I wrote this in linear algebra notation with \hat{\mathbf{f}} being an n-vector of coefficients and \mathbf{\pi}(x) being a vector containing the basis polynomials. This notation will really come in handy.
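To make this concrete, here's a minimal numpy sketch of a basis evaluator. The helper name ortho_legendre_basis is my own, not anything standard; the normalization \sqrt{2i-1} for the degree i-1 polynomial comes from requiring orthonormality under the uniform measure 1/2.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def ortho_legendre_basis(x, n):
    """Evaluate pi_1(x), ..., pi_n(x): Legendre polynomials of degree
    0 through n-1, normalized to be orthonormal with respect to the
    uniform measure 1/2 on [-1, 1]. Returns a len(x)-by-n array."""
    x = np.atleast_1d(x)
    V = leg.legvander(x, n - 1)  # columns are standard P_0(x), ..., P_{n-1}(x)
    # Standard Legendre satisfies int P_k^2 dx = 2/(2k+1), so the
    # orthonormal version under the 1/2 measure is sqrt(2k+1) * P_k.
    return V * np.sqrt(2 * np.arange(n) + 1)
```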

If we set the coefficients to be the Fourier (or spectral) coefficients,

\hat{f}_i = \frac{1}{2}\int_{-1}^1 f(x)\,\pi_i(x)\,dx,

then the resulting polynomial would be the best degree n-1 polynomial approximation to f in the mean-squared sense; the factor of 1/2 is the density of the uniform measure. But we can't compute integrals with f in the integrand if we don't know f! So we instead approximate them with a Gaussian quadrature rule. Let x_j and w_j be the points and weights, respectively, of the n-point Gauss-Legendre quadrature rule, with the weights normalized to sum to one to match the uniform measure. We'll come back to why we use a rule with the same number of points as terms in the expansion.
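Of course, for a cheap test function that we can evaluate anywhere, the Fourier coefficients are computable by adaptive quadrature. Here's a sketch using scipy and the basis evaluator above, with exp standing in for f; it will serve as a reference to compare the pseudospectral coefficients against.

```python
from scipy.integrate import quad

def fourier_coefficients(f, n):
    """hat{f}_i = (1/2) int_{-1}^{1} f(x) pi_i(x) dx, computed with
    one adaptive quadrature per coefficient."""
    fhat = np.empty(n)
    for i in range(n):
        integrand = lambda x, i=i: 0.5 * f(x) * ortho_legendre_basis(x, n)[0, i]
        fhat[i], _ = quad(integrand, -1.0, 1.0)
    return fhat

print(fourier_coefficients(np.exp, 5))
```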

To approximate the Fourier coefficients, we compute

\hat{f}_i = \sum_{j=1}^n f(x_j)\,\pi_i(x_j)\,w_j,

which are the pseudospectral coefficients. (Get it?) We can write all of these sums — one for each i — conveniently using linear algebra notation. Let \mathbf{f} be the vector of the evaluations f(x_j) of f at the quadrature points. Define the matrix \mathbf{P} with elements

\mathbf{P}_{ij} = \pi_i( x_j ).

Then define a diagonal matrix \mathbf{W}^2 with \mathbf{W}^2_{ii}=w_i. The squaring will make sense in a moment. Now we can write the computation of the pseudospectral coefficients as

\hat{\mathbf{f}} = \mathbf{P}\mathbf{W}^2\mathbf{f},

and I’ll let you verify that if it’s not obvious. The next thing to note is that the matrix \mathbf{Q}=\mathbf{P}\mathbf{W} is orthogonal, that is \mathbf{Q}^{-1}=\mathbf{Q}^T. There are two ways to verify this: One is to know — like I do — that \mathbf{Q} is the matrix of normalized eigenvectors of the symmetric, tridiagonal Jacobi matrix of recurrence coefficients for the Legendre polynomials. The other way is to use the polynomial exactness of the Gaussian quadrature rule for one side and the Christoffel-Darboux formula for the other side. I’ll let the interested reader pursue this further.
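Here's a sketch of the whole construction in numpy, reusing ortho_legendre_basis from above. One caveat: numpy's leggauss returns weights that sum to 2 (its weight function is 1 on [-1,1]), so I divide by 2 to match the uniform measure.

```python
n = 8
x, w = leg.leggauss(n)            # Gauss-Legendre points and weights
w = w / 2.0                       # rescale weights for the uniform measure 1/2
P = ortho_legendre_basis(x, n).T  # P_{ij} = pi_i(x_j)
W = np.diag(np.sqrt(w))           # W^2 is the diagonal matrix of weights

fhat = P @ np.diag(w) @ np.exp(x)            # pseudospectral coefficients of exp
Q = P @ W
print(np.max(np.abs(Q @ Q.T - np.eye(n))))   # ~1e-15: Q is orthogonal
```

For a smooth function like exp, these pseudospectral coefficients agree with the Fourier coefficients from the earlier sketch to many digits; the difference is the aliasing error of the quadrature rule.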

With \mathbf{Q}, we can write

\hat{\mathbf{f}} = \mathbf{Q}\mathbf{W}\mathbf{f},

which is the discrete Fourier transform, and

\mathbf{f} = \mathbf{W}^{-1}\mathbf{Q}^T\hat{\mathbf{f}},

which is the inverse discrete Fourier transform. Voila! In signal processing parlance, these operations transform the function between physical space (the Gauss points) and coefficient space (the pseudospectral coefficients).
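Continuing the snippet above, the round trip between physical space and coefficient space is exact up to roundoff:

```python
fvals = np.exp(x)                      # f in physical space (the Gauss points)
fhat = Q @ W @ fvals                   # forward discrete transform
frec = np.linalg.solve(W, Q.T @ fhat)  # inverse transform, W^{-1} Q^T fhat
print(np.max(np.abs(frec - fvals)))    # ~1e-15
```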

This is the primary reason I keep the number of points in the quadrature rule the same as the number of terms in the polynomial expansion. It is not strictly necessary in other derivations. (At some point I’ll post about least-squares methods.) But you can see from this perspective that you lose the forward/inverse discrete transform relationship without it.

If you look carefully at the inverse discrete transform, you see that it’s just evaluating the expansion at the quadrature points and getting back the true function at the quadrature points. If you remember that the polynomial interpolant of degree n-1 on a set of n distinct points is unique, then you’ve just proven that the Lagrange interpolant at the Gauss quadrature points — which in UQ we call stochastic collocation — is equivalent to the pseudospectral approximation. Cool, huh?
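You can check this equivalence numerically. In the sketch below, scipy's BarycentricInterpolator stands in for the stochastic collocation surrogate; it and the pseudospectral expansion agree at arbitrary test points up to roundoff.

```python
from scipy.interpolate import BarycentricInterpolator

n = 8
x, w = leg.leggauss(n)
fvals = np.exp(x)
fhat = ortho_legendre_basis(x, n).T @ np.diag(w / 2.0) @ fvals

xt = np.linspace(-1.0, 1.0, 101)                   # test points
expansion = ortho_legendre_basis(xt, n) @ fhat     # pseudospectral surrogate
collocation = BarycentricInterpolator(x, fvals)(xt)
print(np.max(np.abs(expansion - collocation)))     # ~1e-14: same polynomial
```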

All of this works for univariate functions, and it extends to multivariate functions with tensor product constructions; the multivariate transform operators become Kronecker products of the 1d transform operators (see the sketch below). However, if you give up the tensor product construction, things get messy quickly. We'll leave that for the subject of another post.
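As a small sketch of the tensor product case, here's a two-variable forward transform built from a Kronecker product of 1d transform operators. It assumes the 2d basis and grid are ordered consistently with a C-order ravel of the tensor grid.

```python
def forward_transform_1d(n):
    """Return the 1d transform matrix P W^2 and the Gauss points."""
    x, w = leg.leggauss(n)
    return ortho_legendre_basis(x, n).T @ np.diag(w / 2.0), x

T1, x1 = forward_transform_1d(8)
T2, x2 = forward_transform_1d(8)
X1, X2 = np.meshgrid(x1, x2, indexing="ij")
fvals = np.exp(X1 + 2.0 * X2).ravel()   # f on the tensor grid, flattened
fhat2d = np.kron(T1, T2) @ fvals        # 2d pseudospectral coefficients
```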
