3 Verification
3.1 INTRODUCTION
In Chapter 1, verification is defined as the process of determining how accurately a computer program (“code”) solves the equations of a mathematical model. This includes code verification (determining whether the code correctly implements the intended algorithms) and solution verification (determining the accuracy with which the algorithms solve the mathematical model’s equations for specified quantities of interest [QOIs]).
In this chapter verification is discussed in detail. The chapter begins with overarching remarks, then discusses
code verification and solution verification, and closes with a summary of verification principles.
Many large-scale computational models are built up from a hierarchy of models, as is illustrated in Figure
1.1. Opportunities exist to perform verification studies that reflect the hierarchy or collection of these models
into the integrated simulation tool. Both code and solution verification studies may benefit by taking advantage
of this composition of submodels because the submodels may be more amenable to a broader set of techniques.
For example, code verification can fruitfully employ “unit tests” that assess whether the fundamental software
building blocks of a given code correctly execute their intended algorithms. This makes it easier to test the next
level in the code hierarchy, which relies on the previously tested fundamental units. As another example, solution
verification in a calculation that involves interacting physical phenomena is aided and enhanced if it is performed
for the individual phenomena. In the following sections, it is implicitly assumed that this principle of hierarchical
decomposition is to be followed when possible.
The processes of verification and validation (V&V) and uncertainty quantification (UQ) presuppose a computational model or computer code that has been developed with software quality engineering practices appropriate for
its intended use. Software quality assurance (SQA) procedures provide an essential foundation for the verification
of complex computer codes and solutions. The practical implementation of SQA in software development may
be approached using risk-based grading with respect to software quality. The basic notion of risk-based grading
is straightforward—the higher the risk associated with the usage of the software, the greater the care that must
be taken in the software development. This approach attempts to balance the programmatic drivers, scientific and
technological creativity, and quality requirements. Requirements governing the development of software may
manifest themselves in regulations, orders, guidance, and contracts; for example, the Department of Energy (DOE)
provides documents detailing software quality requirements in (DOE, 2005). Standards are available that outline
activities that are elements for ensuring appropriate software quality (American National Standards Institute,
2005). The discipline of software quality engineering presents a breadth of practices that can be put into place.
32 ASSESSING THE RELIABILITY OF COMPLEX MODELS
For example, the DOE provides a suggested set of goals, principles, and guidelines for software quality practices
(DOE, 2000). The use of these practices can be tailored to the development environment and application area.
For example, pervasive use of software configuration management and regression testing is the state of practice
in many scientific communities.
3.2 CODE VERIFICATION
Code verification is the process of determining whether a computer program (“code”) correctly implements
the intended algorithms. Various tools for code verification and techniques that employ them have been proposed
(Roache, 1998, 2002; Knupp and Salari, 2003; Babuska, 2004). The application of these processes is becoming
more prevalent in many scientific communities. For example, the computer model employed in the electromagnetics case study described in Section 4.5 uses carefully verified simulation techniques. Tools for code verification
include, but are not limited to, comparisons against analytic and semianalytic (i.e., independent, error-controlled)
solutions and the “method of manufactured solutions.” The latter refers to the process of postulating a solution
in the form of a function, followed by substituting it into the operator characterizing the mathematical model in
order to obtain (domain and boundary) source terms for the model equations. This process then provides an exact
solution for a model that is driven by these source terms (Knupp and Salari, 2003; Oden, 2003).
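As a sketch of how this works in practice, the following Python fragment manufactures a solution for the one-dimensional model problem -u''(x) = f(x) with homogeneous Dirichlet conditions, derives the source term symbolically, and checks that a standard second-order finite-difference solve converges at the rate theory predicts. The model problem, grid sizes, and tolerances are illustrative choices, not drawn from this report.

```python
import numpy as np
import sympy as sp

# Manufacture a solution for the model problem -u''(x) = f(x), u(0) = u(1) = 0:
# postulate u, then substitute it into the operator to obtain the source term.
x = sp.symbols("x")
u_exact = sp.sin(sp.pi * x)                     # postulated (manufactured) solution
f_expr = -sp.diff(u_exact, x, 2)                # source term from the operator
f = sp.lambdify(x, f_expr, "numpy")
u = sp.lambdify(x, u_exact, "numpy")

def solve_poisson(n):
    """Second-order finite-difference solve on n cells (n - 1 interior points)."""
    h = 1.0 / n
    xs = np.linspace(h, 1.0 - h, n - 1)
    A = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h**2
    uh = np.linalg.solve(A, f(xs))
    return np.max(np.abs(uh - u(xs)))           # max-norm error vs. the exact solution

e_coarse, e_fine = solve_poisson(32), solve_poisson(64)
order = np.log2(e_coarse / e_fine)              # observed convergence order
print(f"observed order of accuracy ~ {order:.2f} (theory: 2)")
```

Because the manufactured solution is exact for the sourced problem, the discretization error is directly observable, and halving the mesh width should cut it roughly fourfold.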
The comparison of code results against independent, error-controlled (“reference”) solutions allows researchers
to assess, for example, the degree to which code implementation achieves the expected solution to the mathematical
system of equations. Because the reference solution is exact and the code implements numerical approximation
to the exact solution, one can test convergence rates against those predicted by theory. As a separate, complementary activity one can often construct a reference solution to the discretized problem, providing an independent
solution of the computational model (not the mathematical model). This verification activity allows assessment
of correctness in the pre-asymptotic regime. To make such reference solutions mathematically tractable, typically
simplified model problems (e.g., ones with lower dimensionality and with simplified physics and geometry) are
chosen. However, there are few analytical and semianalytical solutions for more complex problems. There is a need
to develop independent, error-controlled solutions for increasingly complex systems of equations—for example,
those that represent coupled-physics, multiscale, nonlinear systems. Developing such solutions that are relevant to
a given application area is particularly challenging. Similarly, developing manufactured solutions becomes more challenging as the mathematical models become more complex, since the source expressions grow in size and complexity, requiring great care in managing and implementing the source terms in the model.
Another challenge is the need to construct manufactured solutions that expose different features of the model
relevant to the simulation of physical systems, for example, different boundary conditions, geometries, phenomena,
nonlinearities, or couplings. The method of manufactured solutions is employed in the verification methodology
used in the Center for Predictive Engineering and Computational Sciences (PECOS) study described in Section 5.9.
Finally, as indicated in Section 5.9, manufactured or analytical solutions should reproduce known challenging features of the solution, such as boundary layers, effects of interfaces, anisotropy, singularities, and loss of regularity.
Some communities employ cross-code comparisons (in which different codes solve the same discretized
system of partial differential equations [PDEs]) and refer to this practice as verification. Although this activity
provides valuable information under certain conditions and can be useful to ensure accuracy and correctness, this
activity is not “verification” as the term is used in this report. Often the reference codes being compared are not
themselves verifiable. One of the significant challenges in cross-code comparisons is that of ensuring that the
codes are modeling identical problems; these codes tend to vary in the effects that they include and in the way
that the effects are included. It may be difficult to simulate identical physics processes and problems. One needs
to model the same problem for the different codes; ideally the reference solution is arrived at using a distinct
error-controlled numerical technique.
Upon completion of a code-verification study, a statement can be made about the correctness of the implementation of the intended algorithms in the computer program under the conditions imposed by the study (e.g.,
selected QOIs, initial and boundary conditions, geometry, and other inputs). To ensure that the code continues to be subjected to verification tests as changes are made to it, verification problems from the study are typically incorporated in a code-development test suite, established as part of the software quality practices.
In practice, regression test suites are composed of a variety of tests. Since the problems used for regression
suites may be constructed for that purpose, it may be that a solution in continuous variables is known or that
the problem is constructed with a particular solution in mind (manufactured solution). Such a suite may include
verification tests (including comparison against continuous solutions, solutions to a discretized version of the
problem, and manufactured solutions) in addition to other types of tests—unit tests (tests of a particular part or
“unit” of the code), integration tests (tests of integrated units of the code), and user-acceptance tests. Performing these verification studies and augmenting test suites with them help to ensure the quality and pedigree of the
computer code. A natural question arises as to the sufficiency and adequacy of regularly passing these test suites
as code development continues. Various metrics (coverage metrics) have been developed to measure the ability of
the tests to cover various aspects of the code, including source lines of computer code, functions of the code, and
features of the code (Jones, 2000; Westfall, 2010). Care must be taken when interpreting the results of these coverage metrics, particularly for scientific code development, because coverage metrics tend to measure the execution
of a particular portion of the computer code, independent of the input values. Uncertainty quantification studies
explore a broad variety of input parameters, which may result in unexpected results from algorithms and physics
models, even those that have undergone extensive testing.
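As a minimal illustration of the kind of unit test that might live in such a suite, the sketch below checks a single building block, a centered-difference derivative routine, against a known function and asserts that its observed order of accuracy matches theory. The routine, test function, and tolerances are hypothetical examples, not taken from any particular code.

```python
import numpy as np

def central_difference(f, x, h):
    """Second-order centered approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def test_central_difference_order():
    """Unit test: halving h should cut the error roughly fourfold (order 2)."""
    x0 = 0.7
    exact = np.cos(x0)                              # d/dx sin(x) = cos(x)
    err = lambda h: abs(central_difference(np.sin, x0, h) - exact)
    observed = np.log2(err(1e-2) / err(5e-3))       # observed order of accuracy
    assert abs(observed - 2.0) < 0.1

test_central_difference_order()
print("unit test passed")
```

A test of this form exercises the algorithmic property (the convergence rate), not just a single hard-coded answer, so it continues to catch regressions as the routine is modified.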
Some software development teams have found utility in employing static analysis tools, including those that
incorporate logic-checking algorithms, for source code checking (Ayewah et al., 2008). Static code analysis is
a software analysis technique that is performed without actually executing the software. Modern static analysis
tools parse the code in a way similar to what compilers do, creating a syntax tree and database of the entire program’s code, which is then analyzed against a set of rules or models (Cousot, 2007). On the basis of those rules,
the analysis tool can create a report of suspected defects in the code. The formalism associated with these rules
allows potential defects to be categorized according to severity and type. Since the analysis tools have access to a
database of the entire source code, defects that are a combination of source code statements in disparate locations
in the code implementations can be identified (e.g., allocation of memory in one portion of the implementation
without release of that memory prior to returning control flow). Such tools can aid in verifying that the source
code implementation is a correct realization of the intended algorithms. However, to date these tools have been
able to answer only limited questions about codes of limited complexity. The expansion of these tools to science
and engineering simulation software, and the kinds of questions that they may ultimately be able to answer, remain
topics for further study.
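A toy sketch of the idea, in the spirit of the resource-leak example above: walk the syntax tree of a program, without ever executing it, and flag open() calls that are not managed by a with block (so the file handle may never be released). The rule and the code under analysis are illustrative; production tools apply far richer rule sets.

```python
import ast

# Code under analysis: it is parsed, never executed.
SOURCE = """
f = open("data.txt")          # suspect: no guaranteed release of the handle
with open("log.txt") as g:    # fine: the context manager closes the file
    g.read()
"""

def find_unmanaged_open(source):
    """Return line numbers of open() calls that are not part of a with-item."""
    tree = ast.parse(source)
    # Record parent links so each call can be checked against its context.
    parents = {child: parent for parent in ast.walk(tree)
               for child in ast.iter_child_nodes(parent)}
    return sorted(node.lineno for node in ast.walk(tree)
                  if isinstance(node, ast.Call)
                  and isinstance(node.func, ast.Name)
                  and node.func.id == "open"
                  and not isinstance(parents.get(node), ast.withitem))

print("suspected leaks on lines:", find_unmanaged_open(SOURCE))
```

Because the analysis sees the whole syntax tree at once, the rule can relate statements in different places, which is what allows real tools to catch defects such as an allocation in one routine with no matching release elsewhere.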
3.3 SOLUTION VERIFICATION
Solution verification is the process of determining or estimating the accuracy with which algorithms solve the
mathematical-model equations for the given QOIs. A breadth of tools has been developed for solution verification; these include, but are not limited to, a priori and a posteriori error estimation^1 and grid adaptation to minimize
numerical error. The most sophisticated solution-verification techniques incorporate error estimation with error
control (by means of h-, p-, or r-adaptivity) in the physics simulation.
QOIs are typically expressed as functionals of the fully computed solution across the problem domain. The
solution of the mathematical-model equations is often a set of dependent-variable values that are evaluated at a
large number of points in a space defined by a set of independent variables. For example, the dependent variables
could be pressure, temperature, and velocity; the independent variables could be position and time. In the usual
case, it is not the value at each point that is of interest but rather the more aggregated quantities—such as the
average pressure in a space-time region—that are functionals of the complete solution.
^1 A priori error estimation is done by an examination of the model and computer code; a posteriori error estimation is done by an examination of the results of the execution of the code.
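As a small illustration of a QOI as a functional of the full field, the sketch below evaluates the average of a pressure-like field p(x, t) over a space-time region by trapezoidal quadrature. The field is an analytic stand-in for a solver's output, and all names and grids are illustrative.

```python
import numpy as np

def trapezoid(y, dx):
    """Composite trapezoidal rule along the last axis of y."""
    return dx * 0.5 * (y[..., 1:] + y[..., :-1]).sum(axis=-1)

# Independent variables: position and time on a uniform grid.
x = np.linspace(0.0, 1.0, 201)
t = np.linspace(0.0, 2.0, 401)
X, T = np.meshgrid(x, t, indexing="ij")
# Dependent variable: a pressure-like field standing in for solver output.
p = np.sin(np.pi * X) * np.exp(-T)

# The QOI is not any pointwise value but a functional of the whole field:
# its average over the space-time region.
integral = trapezoid(trapezoid(p, t[1] - t[0]), x[1] - x[0])
qoi = integral / ((x[-1] - x[0]) * (t[-1] - t[0]))
print(f"average pressure over the region ~ {qoi:.4f}")
```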

Finding: Solution verification (determining the accuracy with which the numerical methods in a code solve the
model equations) is useful only in the context of specified quantities of interest, which are usually functionals of
the fully computed solution.
The accuracy of the computed solution may be very different for different pointwise quantities and for different functionals. It is important to identify the QOIs because the discretization and resolution requirements for
predicting these quantities may vary (e.g., predicting an integral quantity across the spatial domain may be less
restrictive than predicting local, high-order derivatives). The PECOS case study presented in Section 5.9 employs
quantities of interest as a fundamental aspect of solution verification.
Solution verification is a matter of numerical-error estimation, the goal being to estimate the error present in
the computational QOI relative to an exact QOI from the underlying mathematical model. While code verification
considers generic formulations of simplified problems within a class that the code was designed to treat, solution
verification pertains to the specific, large-scale modeling problem that is at the center of the simulation effort,
with specific inputs (boundary and initial conditions, constitutive parameters, solution domains, source terms) and
outputs (the QOIs). The goal of the solution-verification process is to estimate and control the error for the simulation problem at hand. The most sophisticated realization of this technique operates online during the solution process
to ensure that the actual delivered numerical solution from the code is a reliable estimate of the true solution to
the underlying mathematical model. Not all discretization techniques and simulation problems lend themselves to
this level of sophistication. Solution verification may also employ relevant reference solutions, self-convergence,
and other techniques for estimating and controlling numerical error prior to performing the simulation at hand.
Solution-verification practices may employ independent, error-controlled (“reference”) solutions. Comparing
code results against reference solutions allows researchers to estimate, for example, the numerical error introduced
by the discretized equations being employed and to assess the order of accuracy. Maintaining second-order, or even
first-order, convergence can be challenging in complex, nonlinear, multiphysics simulations. Obtaining reference
solutions that are demonstrably relevant to the simulation at hand is challenging, particularly for highly complex
large-scale models; thus, the application of this approach to solution verification is limited. More reference solutions
that exhibit features of the phenomena of interest are needed for complex problems, including those with strong
nonlinearities, coupled physical phenomena, coupling across scales, and stochastic behavior. Generating relevant
reference solutions for these and other complex, nonlinear, multiphysics problems would extend the breadth of
problems for which this approach to solution verification could be employed.
Solution verification may also be performed by using the code itself to produce high-resolution reference
solutions—a practice referred to as performing “self-convergence” studies. If rigorous error estimates are available,
they can be used to extrapolate successive discrete calculations to estimate the infinite-resolution solution. In the
absence of such an error estimate, the highest-resolution simulation may be used as the reference “converged”
solution. Such studies can be used to assess the rate at which self-convergence is achieved in the QOIs and to
inform the discretization and resolution requirements to control numerical error for the simulation problem at hand.
This approach has the benefit that the complexity of the problem of interest is limited only by the capabilities
of the code being studied and the computer being used, removing the limitation typically imposed by requiring
independent error-controlled solutions.
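A self-convergence study can be sketched in a few lines. Here a midpoint-rule quadrature stands in for a full simulation: the QOI is evaluated at three successively doubled resolutions, the observed order is extracted from the differences, and Richardson extrapolation estimates the infinite-resolution value. The quadrature problem and resolutions are illustrative.

```python
import numpy as np

def qoi_at_resolution(n):
    """Stand-in 'simulation': midpoint-rule estimate of the integral of e^x on [0, 1]."""
    xm = (np.arange(n) + 0.5) / n        # midpoints of n equal cells
    return np.exp(xm).sum() / n

# Evaluate the QOI at three successively doubled resolutions.
q1, q2, q3 = (qoi_at_resolution(n) for n in (50, 100, 200))

order = np.log2((q2 - q1) / (q3 - q2))          # observed convergence order
q_inf = q3 + (q3 - q2) / (2.0**order - 1.0)     # Richardson-extrapolated QOI
print(f"observed order ~ {order:.2f}, extrapolated QOI ~ {q_inf:.8f}")
```

No exact solution is consulted anywhere in the loop; the successive discrete results alone supply both the observed order and the extrapolated "converged" value, which is what makes the approach usable on problems where independent reference solutions are unavailable.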
Methods of numerical-error estimation generally fall into two categories: a priori estimates and a posteriori estimates. The former, when available, can provide useful information on the convergence rates obtainable as approximation parameters (e.g., mesh sizes) are refined, but they are of little use in quantifying numerical error in quantities of interest. A posteriori estimates aim to achieve quantitative estimates of numerical error (Babuska and Strouboulis, 2001; Ainsworth and Oden, 2000). Methods in this category include explicit and implicit residual-based methods for global error measures, variants of Richardson extrapolation,^2 superconvergence recovery methods, and goal-oriented methods based on adjoint solutions. The recent development of goal-oriented adjoint-based methods, in particular, has produced methods that are capable of yielding, in many cases, guaranteed bounds on errors for specific applications and quantities of interest (Becker and Rannacher, 2001; Oden and Prudhomme, 2001; Ainsworth and Oden, 2000). This research incorporates ingredients needed to control numerical errors for QOIs in simulation problems at hand. Recent extensions include the ability to treat error in stochastic PDEs (Almeida and Oden, 2010) and errors for multiscale and multiphysics problems (Estep et al., 2008; Oden et al., 2006), including molecular and atomistic models and combined atomistic-to-continuum hybrids (Bauman et al., 2009). Parallel adaptive mesh refinement (AMR) methods (Burstedde et al., 2011) have been integrated with adjoint-based error estimators to bring error estimation to very large-scale problems on parallel supercomputers (Burstedde et al., 2009).
^2 Richardson extrapolation is a numerical technique used to accelerate the rate of convergence of a sequence. See Brezinski and Redivo-Zaglia (1991).
Despite the recent successes in the development of goal-oriented, adjoint-based methods, a number of challenges remain. These include the development of two-sided bounds for a broader class of problems (beyond elliptic
PDEs), further extensions to stochastic PDEs, and the generalizations of adjoints for nonsmooth and chaotic systems.
Additionally, challenges remain for the development of the theory and scalable implementations for error estimation
on adaptive and complex meshes (e.g., p- and r-adaptive discretizations and AMR). The development of rigorous
a posteriori error estimates and adaptive control of all components of error for complex, multiphysics, multiscale
models is an area that will remain ripe for research in computational mathematics over the coming decade.
Finding: Methods exist for estimating tight two-sided bounds for numerical error in the solution of linear elliptic
PDEs. Methods are lacking for similarly tight bounds on numerical error in more complicated problems, including
those with nonlinearities, coupled physical phenomena, coupling across scales, and stochasticity (as in stochastic
PDEs).
The results of the solution-verification process help in quantitatively estimating the numerical error impacting
the quantity of interest. More sophisticated techniques allow the numerical error to be controlled in the simulation,
allowing researchers to target a particular maximum tolerable error and adapt the simulation to meet that requirement, provided sufficient computational time and memory are available. Typically, such adaptation controls the discretization error present in the model. Such techniques can then lead to managing the total error in a simulation, including discretization error as well as errors introduced by iterative algorithms and other approximation
techniques.
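The control loop described above can be sketched as follows: double the resolution until a Richardson-style estimate of the error in the QOI falls below a target tolerance. A trapezoidal quadrature stands in for the simulation; the problem, tolerance, and refinement budget are illustrative.

```python
import numpy as np

def qoi_at_resolution(n):
    """Stand-in 'simulation': trapezoidal estimate of the integral of sin(pi*x) on [0, 1]."""
    y = np.sin(np.pi * np.linspace(0.0, 1.0, n + 1))
    return (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1]) / n

def refine_to_tolerance(tol, n=8, max_doublings=20):
    """Double the resolution until the estimated error in the QOI is below tol."""
    q_coarse = qoi_at_resolution(n)
    for _ in range(max_doublings):
        n *= 2
        q_fine = qoi_at_resolution(n)
        err_est = abs(q_fine - q_coarse) / 3.0   # Richardson estimate for order 2
        if err_est < tol:
            return q_fine, n, err_est
        q_coarse = q_fine
    raise RuntimeError("tolerance not reached within the refinement budget")

q, n, err = refine_to_tolerance(1e-6)
print(f"QOI ~ {q:.8f} at n = {n}, estimated error {err:.1e}")
```

The loop illustrates the trade made explicit in the text: the error target is met only if sufficient computational resources are available, which here appears as the refinement budget.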
Finding: Methods exist for estimating and controlling spatial and temporal discretization errors in many classes
of PDEs. There is a need to integrate the management of these errors with techniques for controlling errors due
to incomplete convergence of iterative methods, both linear and nonlinear. Although work has been done on balancing discretization and solution errors in an optimal way in the context of linear problems (e.g., McCormick,
1989; Rüde, 1993), research is needed on extending such ideas to complex nonlinear and multiphysics problems.
Managing the total error of a solution offers opportunities to gain efficiencies throughout the verification,
validation, and uncertainty quantification (VVUQ) process. As with other aspects of the verification process,
managing total error is best done in the context of the use of the model and the QOIs. The error may be managed
differently for a “best physics” estimate of a particular quantity of interest versus an ensemble of models being
used to train a reduced-order model. Managing the total error appropriately throughout the VVUQ study may allow
improvement of the turnaround time of the study.
3.4 SUMMARY OF VERIFICATION PRINCIPLES
Important principles that emerge from the above discussion of code and solution verification are as follows:
• Solution verification must be done in terms of specified QOIs, which are usually functionals of the full computed solution.
• The goal of solution verification is to estimate and control, if possible, the error in each QOI for the simulation problem at hand.
• The efficiency and effectiveness of code- and solution-verification processes can often be enhanced by
exploiting the hierarchical composition of codes and solutions, verifying first the lowest-level building
blocks and then moving successively to more complex levels.
• Verification is most effective when performed on software developed under appropriate software quality
practices. These include software-configuration management and regression testing.
3.5 REFERENCES
Ainsworth, M., and J.T. Oden. 2000. A Posteriori Error Estimation in Finite Element Analysis. New York: Wiley Interscience.
Almeida, R.C., and J.T. Oden. 2010. Solution Verification, Goal-Oriented Adaptive Methods for Stochastic Advection-Diffusion Problems. Computer Methods in Applied Mechanics and Engineering 199(37-40):2472-2486.
American National Standards Institute. 2005. American National Standards: Quality Management Systems—Fundamentals and Vocabulary
ANSI/ISO/ASQ Q9001-2005. Milwaukee, Wisc.: American Society for Quality.
Ayewah, N., D. Hovemeyer, J.D. Morgenthaler, J. Penix, and W. Pugh. 2008. Using Static Analysis to Find Bugs. IEEE Software 25(5):22-29.
Babuska, I. 2004. Verification and Validation in Computational Engineering and Science: Basic Concepts. Computer Methods in Applied Mechanics and Engineering 193:4057-4066.
Babuska, I., and T. Strouboulis. 2001. The Finite Element Method and Its Reliability. Oxford, U.K.: Oxford University Press.
Bauman, P.T., J.T. Oden, and S. Prudhomme. 2009. Adaptive Multiscale Modeling of Polymeric Materials with Arlequin Coupling and Goals
Algorithms. Computer Methods in Applied Mechanics and Engineering 198:799-818.
Becker, R., and R. Rannacher. 2001. An Optimal Control Approach to a Posteriori Error Estimation in Finite Element Methods. Acta Numerica
10:1-102.
Brezinski, C., and M. Redivo-Zaglia. 1991. Extrapolation Methods. Amsterdam, Netherlands: North-Holland.
Burstedde, C., O. Ghattas, T. Tu, G. Stadler, and L. Wilcox. 2009. Parallel Scalable Adjoint-Based Adaptive Solution of Variable-Viscosity
Stokes Flow Problems. Computer Methods in Applied Mechanics and Engineering 198:1691-1700.
Burstedde, C., L.C. Wilcox, and O. Ghattas. 2011. Scalable Algorithms for Parallel Adaptive Mesh Refinement on Forests of Octrees. SIAM
Journal on Scientific Computing 33(3):1103-1133.
Cousot, P. 2007. The Role of Abstract Interpretation in Formal Methods. Pp. 135-137 in SEFM 2007, 5th IEEE International Conference on
Software Engineering and Formal Methods, London, U.K., September 10-14. Mike Hinchey and Tiziana Margaria (Eds.). Piscataway,
N.J.: IEEE Press.
DOE (Department of Energy). 2000. ASCI Software Quality Engineering: Goals, Principles, and Guidelines. DOE/DP/ASC-SQE-2000-
FDRFT-VERS2. Washington, D.C.: Department of Energy.
DOE. 2005. Quality Assurance. DOE O414. Washington, D.C.: Department of Energy.
Estep, D., V. Carey, V. Ginting, S. Tavener, and T. Wildey. 2008. A Posteriori Error Analysis of Multiscale Operator Decomposition Methods for
Multiphysics Models. Journal of Physics: Conference Series 125:1-16.
Jones, C. 2000. Software Assessments, Benchmarks, and Best Practices. Upper Saddle River, N.J.: Addison Wesley Longman.
Knupp, P., and K. Salari. 2003. Verification of Computer Codes in Computational Science and Engineering. Boca Raton, Fla.: Chapman and
Hall/CRC.
McCormick, S. 1989. Multilevel Adaptive Methods for Partial Differential Equations. Philadelphia, Pa.: Society for Industrial and Applied
Mathematics.
Oden, J.T. 2003. Error Estimation and Control in Computational Fluid Dynamics. Pp. 1-23 in The Mathematics of Finite Elements and Applications. J.R. Whiteman (Ed.). New York: Wiley.
Oden, J.T., and S. Prudhomme. 2001. Goal-Oriented Error Estimation and Adaptivity for the Finite Element Method. Computers and Mathematics with Applications 41:735-756.
Oden, J.T., S. Prudhomme, A. Romkes, and P. Bauman. 2006. Multi-Scale Modeling of Physical Phenomena: Adaptive Control of Models.
SIAM Journal on Scientific Computing 28(6):2359-2389.
Roache, P. 1998. Verification and Validation in Computational Science and Engineering. Albuquerque, N.Mex.: Hermosa Publishers.
Roache, P. 2002. Code Verification by the Method of Manufactured Solutions. Journal of Fluids Engineering 124(1):4-10.
Rüde, U. 1993. Mathematical and Computational Techniques for Multilevel Adaptive Methods. Philadelphia, Pa.: Society for Industrial and
Applied Mathematics.
Westfall, L. 2010. Test Coverage: The Certified Software Quality Handbook. Milwaukee, Wisc.: ASQ Quality Press.