errors for specific applications and quantities of interest (Becker and Rannacher, 2001; Oden and Prudhomme, 2001; Ainsworth and Oden, 2000). This body of research provides the ingredients needed to control numerical error in the QOIs of a given simulation. Recent extensions include the ability to treat error in stochastic PDEs (Almeida and Oden, 2010) and errors for multiscale and multiphysics problems (Estep et al., 2008; Oden et al., 2006), including molecular and atomistic models and combined atomistic-to-continuum hybrids (Bauman et al., 2009). Parallel adaptive mesh refinement (AMR) methods (Burstedde et al., 2011) have been integrated with adjoint-based error estimators to bring error estimation to very large-scale problems on parallel supercomputers (Burstedde et al., 2009).
Despite the recent successes in the development of goal-oriented, adjoint-based methods, a number of challenges remain. These include the development of two-sided bounds for a broader class of problems (beyond elliptic PDEs), further extensions to stochastic PDEs, and the generalization of adjoint methods to nonsmooth and chaotic systems. Additionally, challenges remain in developing the theory and scalable implementations for error estimation on adaptive and complex meshes (e.g., p- and r-adaptive discretizations and AMR). The development of rigorous a posteriori error estimates and adaptive control of all components of error for complex, multiphysics, multiscale models is an area that will remain ripe for research in computational mathematics over the coming decade.
Finding: Methods exist for estimating tight two-sided bounds for numerical error in the solution of linear elliptic PDEs. Methods are lacking for similarly tight bounds on numerical error in more complicated problems, including those with nonlinearities, coupled physical phenomena, coupling across scales, and stochasticity (as in stochastic PDEs).
The results of the solution-verification process provide a quantitative estimate of the numerical error affecting the quantity of interest. More sophisticated techniques allow the numerical error to be controlled in the simulation, allowing researchers to target a particular maximum tolerable error and adapt the simulation to meet that requirement, provided sufficient computational time and memory are available. Typically, such adaptation controls the discretization error present in the model. Such techniques can then be extended to manage the total error in a simulation, including discretization error as well as errors introduced by iterative algorithms and other approximations.
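The adapt-to-tolerance idea can be sketched in a few lines. The example below is a deliberately minimal stand-in (the "simulation" is just composite trapezoid quadrature, and the function name `adapt_to_tolerance` is invented): the mesh is refined until a Richardson-style error estimate, obtained by comparing successive refinements of a second-order method, falls below the user's tolerance.

```python
import numpy as np

def trap(fx, x):
    # Composite trapezoid rule (written out to avoid version-dependent NumPy names)
    return float(np.sum((fx[1:] + fx[:-1]) * np.diff(x)) / 2.0)

def adapt_to_tolerance(f, a, b, tol, max_levels=20):
    """Refine uniformly until the estimated error in the QOI is below tol.

    Illustrative sketch only: a real adaptive code would refine locally,
    driven by element-wise (e.g., adjoint-weighted) error indicators.
    """
    n = 4
    x = np.linspace(a, b, n + 1)
    coarse = trap(f(x), x)
    for _ in range(max_levels):
        n *= 2
        x = np.linspace(a, b, n + 1)
        fine = trap(f(x), x)
        # For a 2nd-order method, halving h cuts the error by ~4, so
        # the error of `fine` is approximately (fine - coarse) / 3.
        err_est = abs(fine - coarse) / 3.0
        if err_est < tol:
            return fine, err_est, n
        coarse = fine
    raise RuntimeError("tolerance not reached within refinement budget")

qoi, err_est, n = adapt_to_tolerance(np.sin, 0.0, np.pi, tol=1e-8)
print(qoi, err_est, n)   # qoi approximates the exact integral, which is 2
```

The same loop structure (solve, estimate, check tolerance, refine) underlies adaptive PDE codes; only the error estimator and the refinement step become more sophisticated.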
Finding: Methods exist for estimating and controlling spatial and temporal discretization errors in many classes of PDEs. There is a need to integrate the management of these errors with techniques for controlling errors due to incomplete convergence of iterative methods, both linear and nonlinear. Although work has been done on balancing discretization and solution errors in an optimal way in the context of linear problems (e.g., McCormick, 1989; Rüde, 1993), research is needed on extending such ideas to complex nonlinear and multiphysics problems.
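The balancing idea referenced in the finding can be made concrete on a model problem. The sketch below is a hedged illustration of the principle (the problem setup, stopping constant, and eigenvalue bound are chosen for demonstration, not taken from the cited works): iterating a linear solver far below the discretization error wastes work, so the conjugate gradient iteration is stopped once a residual-based bound on the algebraic error drops safely below an a priori O(h²) discretization-error scale.

```python
import numpy as np

# Balancing algebraic and discretization error, in the spirit of the
# discussion above: stop the iterative solver once its estimated error
# is safely below the discretization error. Illustrative constants only.

# 1D Poisson: -u'' = f on (0,1), u(0) = u(1) = 0, manufactured solution sin(pi x)
n = 100                              # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)
u_true = np.sin(np.pi * x)
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

disc_err_est = h**2                  # a priori O(h^2) scale (constant taken as 1)
target = 0.1 * disc_err_est         # stop once algebraic error is ~10x smaller
lam_min = np.pi**2                   # approx. smallest eigenvalue of A: ||e|| <= ||r|| / lam_min

# Conjugate gradients with an error-balancing stopping criterion
u = np.zeros(n)
r = f.copy()                         # residual for the zero initial guess
p = r.copy()
rr = r @ r
iters = 0
while np.sqrt(h * rr) / lam_min > target and iters < 10 * n:
    Ap = A @ p
    alpha = rr / (p @ Ap)
    u = u + alpha * p
    r = r - alpha * Ap
    rr_new = r @ r
    p = r + (rr_new / rr) * p
    rr = rr_new
    iters += 1

u_h = np.linalg.solve(A, f)                            # exact discrete solution
alg_err = np.sqrt(h) * np.linalg.norm(u_h - u)         # algebraic (solver) error
disc_err = np.sqrt(h) * np.linalg.norm(u_h - u_true)   # discretization error
print(iters, alg_err, disc_err)   # algebraic error sits below the discretization error
```

Extending this kind of balancing to nonlinear and multiphysics problems, where the two error sources interact through coupled solvers, is precisely the open research direction the finding identifies.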
Managing the total error of a solution offers opportunities to gain efficiencies throughout the verification, validation, and uncertainty quantification (VVUQ) process. As with other aspects of the verification process, managing total error is best done in the context of the use of the model and the QOIs. The error may be managed differently for a “best physics” estimate of a particular quantity of interest versus an ensemble of models being used to train a reduced-order model. Managing the total error appropriately throughout the VVUQ study can improve the turnaround time of the study.
Important principles that emerge from the above discussion of code and solution verification are as follows:
• Solution verification must be done in terms of specified QOIs, which are usually functionals of the full computed solution.