variability. The natural variability is uncorrelated from one realization to another, and the geographical dependence of the natural fluctuations will be taken to be stationary in time. The first step is to write down the mathematical expression for the MSE.
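As a sketch in assumed notation (data stream D(r,t) = S(r,t) + N(r,t), filter F, angle brackets denoting an ensemble average; none of these symbols are fixed by the surrounding text), the MSE of the filtered estimate of the signal amplitude can be written

```latex
\varepsilon^{2} \;=\; \left\langle \left[\, \int \! dt \int \! d^{2}r \; F(\mathbf{r},t)\, D(\mathbf{r},t) \;-\; 1 \right]^{2} \right\rangle ,
\qquad
D(\mathbf{r},t) \;=\; S(\mathbf{r},t) + N(\mathbf{r},t),
```

where the unit target reflects the convention used later in this section, namely that the filtered data stream should recover the signal with a scale factor whose expectation value is unity.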
After formulating the expression for the MSE, we find that it can be written in terms of a weighted space and time integral over the unknown filter. Taking the variation of the MSE with respect to infinitesimal changes in the unknown filter function in space and time, and setting the result equal to zero, leads to an integral equation in which the filter is the unknown. An important special case is that for which a no-bias constraint is imposed: we would like our estimator of the signal to be unbiased. For this case, the filtered data stream is the true signal multiplied by a realization-dependent scale factor whose ensemble average is unity, so that the ensemble average of the filtered data stream is identical to the true signal. The constraint can be incorporated by the familiar method of Lagrange multipliers. The result is again an integral equation that must be solved for the optimal filter as its unknown.
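In the same assumed notation, stationarity of the MSE under variations of F, together with the no-bias constraint ∫dt∫d²r F S = 1 enforced by a Lagrange multiplier Λ, yields an integral equation of the familiar form (a sketch, not the source's exact equation):

```latex
\int \! dt' \int \! d^{2}r' \; \rho(\mathbf{r},t;\mathbf{r}',t')\, F(\mathbf{r}',t') \;=\; \Lambda\, S(\mathbf{r},t),
```

with Λ fixed afterwards by substituting the solution back into the constraint.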
where ρ(r,t;r',t') is the space-time lagged covariance of the natural variability and Λ is the Lagrange multiplier.
The integral equation for the optimal filter is a simple nonhomogeneous linear integral equation, in which the desired filter function occurs under the integral sign, weighted by a kernel. The kernel of the integral equation happens to be the space-time lagged covariance of the field in question (e.g., the surface temperature field). This tells us that the optimal filter is strongly dependent on the structure of the space-time covariance of the natural variability. In addition, the optimal filter depends on the signal itself. Later we will discuss solutions for which there is some uncertainty in the signal (as actually occurs).
In solving a linear integral equation in which the unknown occurs underneath the integral sign, weighted by a kernel that is symmetric in its arguments (e.g., a covariance), it is advantageous to expand the kernel into its eigenfunctions. Since the kernel is symmetric and well behaved, we can assume the eigenfunctions form a complete basis set with positive eigenvalues that satisfy ∫dt'∫d²r' ρ(r,t;r',t') Ψn(r',t') = λn Ψn(r,t),
where Ψn(r,t) is the frequency-dependent EOF and λn is the eigenvalue. The eigenfunctions of a covariance kernel are known as the empirical orthogonal functions (EOFs). Each eigenvalue is the variance attributable to the corresponding EOF "mode." We can reduce the integral equation to so-called diagonal form by expanding all quantities in terms of the eigenfunctions of the kernel. This is equivalent to inverting a matrix by using a basis in which the matrix is diagonal, which simply requires inverting the elements along the diagonal.
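As a concrete discrete analogue (purely illustrative: the field, grid size, and sample count below are invented), the EOFs of a sampled covariance kernel are simply the eigenvectors of the sample covariance matrix, and the eigenvalues give the variance in each mode:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "natural variability": 500 realizations of a field on 20 grid
# points, with spatially correlated noise standing in for the climate field.
n_points, n_samples = 20, 500
x = np.arange(n_points)
true_cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 5.0)  # smooth covariance kernel
L = np.linalg.cholesky(true_cov)
data = L @ rng.standard_normal((n_points, n_samples))       # columns are realizations

# Sample covariance matrix: the discrete analogue of the kernel rho.
rho = data @ data.T / n_samples

# EOFs = eigenvectors of the symmetric covariance; eigenvalues = modal variances.
lam, psi = np.linalg.eigh(rho)          # eigh returns ascending order
lam, psi = lam[::-1], psi[:, ::-1]      # sort descending by explained variance

assert np.all(lam > 0)                  # covariance of full-rank data is positive definite
print("fraction of variance in first 3 EOFs:", lam[:3].sum() / lam.sum())
```

Because the covariance matrix is symmetric, the EOFs form an orthonormal basis, which is what makes the diagonalization in the text possible.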
Since the natural variability can be taken to be stationary in time, the temporal part of the problem can be solved in the Fourier (integral) basis. The spatial part of the problem is then solved at each frequency by using the EOF basis set for that frequency. This choice of frequency-dependent EOFs (fdEOFs) completely diagonalizes the space- and time-dependent kernel of the integral equation, rendering the problem invertible. Hence, constructing the optimal filter amounts to obtaining adequate knowledge of the fdEOFs and a priori knowledge of the signal shape in space and time. In principle, each could be obtained from simulations with sufficiently reliable coupled ocean-atmosphere climate models.
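A minimal numerical sketch of this two-step diagonalization (all sizes and the synthetic field below are invented for illustration): Fourier transform each grid point's time series, estimate the cross-spectral matrix at each frequency from an ensemble, and diagonalize it frequency by frequency.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ens, n_space, n_time = 200, 8, 64

# Synthetic stationary ensemble: AR(1) in time, spatially correlated noise.
sep = np.abs(np.subtract.outer(np.arange(n_space), np.arange(n_space)))
mix = np.linalg.cholesky(np.exp(-sep / 3.0))
field = np.zeros((n_ens, n_space, n_time))
for t in range(1, n_time):
    field[..., t] = 0.8 * field[..., t - 1] + rng.standard_normal((n_ens, n_space)) @ mix.T

# Step 1: stationarity in time -> the Fourier basis diagonalizes the temporal part.
spec = np.fft.rfft(field, axis=-1)                  # shape (n_ens, n_space, n_freq)

# Step 2: at each frequency, the spatial part is diagonalized by the EOFs of the
# cross-spectral matrix -- these are the frequency-dependent EOFs (fdEOFs).
n_freq = spec.shape[-1]
fd_eigvals = np.zeros((n_freq, n_space))
fd_eofs = np.zeros((n_freq, n_space, n_space), dtype=complex)
for k in range(n_freq):
    cross_spec = np.einsum('ei,ej->ij', spec[:, :, k], spec[:, :, k].conj()) / n_ens
    lam, psi = np.linalg.eigh(cross_spec)            # Hermitian -> real eigenvalues
    fd_eigvals[k], fd_eofs[k] = lam[::-1], psi[:, ::-1]

print("variance in leading fdEOF mode, per frequency:", fd_eigvals[:, 0].round(2))
```

The per-frequency eigendecomposition is what "completely diagonalizes" the kernel: each (frequency, mode) pair can then be inverted independently.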
The signal waveform in space and time and the data stream are now expanded in the frequency-dependent EOF basis set. Inserting these expansions into the integral equation reduces it to a mode-by-mode relation that can be solved at once for the optimal filter.
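Writing Fn and Sn for the expansion coefficients of the filter and the signal in the fdEOF basis (coefficient notation assumed in this sketch, not quoted from the source), the diagonalized equation reads, mode by mode,

```latex
\lambda_n F_n \;=\; \Lambda\, S_n
\quad\Longrightarrow\quad
F_n \;=\; \Lambda\, \frac{S_n}{\lambda_n},
\qquad
\Lambda \;=\; \left( \sum_m \frac{S_m^{2}}{\lambda_m} \right)^{-1},
```

the value of Λ following from the unbiasedness condition Σn Fn Sn = 1. Each Fn is proportional to Sn, but down-weighted by the natural variability λn in that mode.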
The unbiased optimal filter for a known, non-random signal turns out to be exactly proportional to the known signal waveform. This space- and time-dependent shape factor can be moved outside the integral sign when the data are filtered (see equations (2) and (6)). The filtered data stream is then the imposed signal, as a function of space and time, multiplied by a realization-dependent dimensionless scale factor whose expectation value is unity. The signal-to-noise ratio is the reciprocal of the standard deviation of the scale factor; this standard deviation is a key property of the optimal filter. The square of the signal-to-noise ratio can be written as a sum of terms, each of which represents the contribution from a particular fdEOF mode.
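In the fdEOF coefficient notation assumed in this sketch (Sn the signal coefficients, λn the eigenvalues of the natural-variability covariance), that mode sum takes the form

```latex
\mathrm{SNR}^{2} \;=\; \sum_{n} \frac{S_n^{2}}{\lambda_n},
```

each term being the squared signal amplitude in mode n measured against the natural variability in that mode.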
This representation is particularly convenient, since it allows us to see how adding more modes in the expansion affects the performance of the filter. A filter in which the series is