ASSESSING THE RELIABILITY OF COMPLEX MODELS

SUMMARY

Computational models that simulate real-world physical processes are playing an ever-increasing role in engineering and physical sciences. These models, encoding physical rules and principles such as Maxwell’s equations or the conservation of mass, are typically based on differential and/or integral equations. Advances in computing hardware and algorithms have dramatically improved the ability to computationally simulate complex processes, enabling simulation and analysis of phenomena that in the past could be addressed only by resource-intensive experimentation, if at all.
Computational models are being used to study processes as large scale as the evolution of the universe and as small scale as protein folding. They are used to predict the future state of Earth’s climate and to decide among alternative product designs in manufacturing. Nevertheless, regardless of their underlying mathematical formalism or their intended purpose, they share a common feature—they are not reality.
Models differ from reality for a variety of reasons. Key model inputs—initial conditions, boundary conditions, or important parameters controlling the model—are usually not known with certainty or are inadequately described. For example, an ocean model must be initialized with temperature, salinity, pressure, velocity, and so on over the entire planet before it can run, but these variables are not precisely known. Another source of discrepancy between model and reality is the approximations that are necessary for representing mathematical concepts within a computational model. For example, the ocean must be represented on a grid, or some other finite data structure, and computational operations propagating this ocean over time are only approximations of mathematics defined on the continuum. More fundamentally still, models deviate from reality because they necessarily ignore some phenomena and represent others as simpler than they really are. Without such omissions and simplifications the models would be intractably complicated.
Given inevitable flaws and uncertainties, how should computational results be viewed by those who wish to act on them? The appropriate level of confidence in the results must stem from an understanding of a model’s limitations and the uncertainties inherent in its predictions. Ideally this understanding can be obtained from three interrelated processes that answer key questions:
• Verification. How accurately does the computation solve the underlying equations of the model for the quantities of interest?
• Validation. How accurately does the model represent reality for the quantities of interest?
• Uncertainty quantification (UQ). How do the various sources of error and uncertainty feed into uncertainty in the model-based prediction of the quantities of interest?

Computational scientists and engineers have made significant progress in developing these processes and using them to produce not just a single predicted value of a physical quantity of interest (QOI) but also information about the range of values that the QOI may have in light of the uncertainties and errors inherent in a computational model. However, there remain many open questions, including questions about the mathematical foundations on which various processes and methods are based or could be based.

In recognition of the importance of computational simulations and the need to understand uncertainties in their results, the Department of Energy’s (DOE’s) National Nuclear Security Administration, the DOE’s Office of Science, and the Air Force Office of Scientific Research requested that the National Research Council study the mathematical sciences foundations of verification, validation, and uncertainty quantification (VVUQ) and recommend steps that will lead to improvements in VVUQ capabilities. The statement of task is as follows:

• A committee of the National Research Council will examine practices for VVUQ of large-scale computational simulations in several research communities.
• The committee will identify common concepts, terms, approaches, tools, and best practices of VVUQ.
• The committee will identify mathematical sciences research needed to establish a foundation for building a science of verification and validation (V&V) and for improving the practice of VVUQ.
• The committee will recommend educational changes needed in the mathematical sciences community and mathematical sciences education needed by other scientific communities to most effectively use VVUQ.
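The forward-propagation step of UQ, feeding input uncertainties through a model to obtain a distribution over a QOI, can be sketched with plain Monte Carlo sampling. The damped-oscillator model, the input distributions, and the sample size below are illustrative assumptions standing in for an expensive simulation code; they are not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an expensive simulation code: a damped
# oscillator whose late-time peak amplitude is the quantity of interest.
def model(stiffness, damping):
    t = np.linspace(0.0, 10.0, 1000)
    x = np.exp(-damping * t) * np.cos(np.sqrt(stiffness) * t)
    return np.max(np.abs(x[100:]))  # QOI: peak amplitude after t = 1

# Assumed (illustrative) probability distributions for uncertain inputs.
n = 2000
stiffness = rng.normal(4.0, 0.2, n)
damping = rng.uniform(0.1, 0.3, n)

# Propagate: run the model at each sampled input, collect the QOI samples.
qoi = np.array([model(k, c) for k, c in zip(stiffness, damping)])

print(f"QOI mean        : {qoi.mean():.3f}")
print(f"QOI std dev     : {qoi.std():.3f}")
print(f"central 95% band: [{np.quantile(qoi, 0.025):.3f}, "
      f"{np.quantile(qoi, 0.975):.3f}]")
```

The output is not a single predicted value but a sample from the QOI's distribution, from which summaries such as a central interval can be reported. Sampling-based propagation is only one of several approaches discussed in the report's main text.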
KEY PRINCIPLES AND PRACTICES

The Committee on Mathematical Foundations of Verification, Validation, and Uncertainty Quantification views its charge as emphasizing the mathematical aspects of VVUQ and, because of the breadth of the subject overall, has limited its focus to physics-based and engineering models. However, much of its discussion applies more broadly. Although the case studies presented in this report include physics or engineering considerations, they are meant to illuminate mathematical aspects of the associated VVUQ analysis. As a first step toward identifying best practices, the committee noted several key VVUQ principles:

• VVUQ tasks are interrelated. A solution-verification study may incorrectly characterize the accuracy of a code’s solution if code verification was inadequate. A validation assessment depends on the assessment of numerical error produced by solution verification and on the propagation of model-input uncertainties to computed QOIs.
• The processes of VVUQ should be applied in the context of an identified set of QOIs. A model may provide an excellent approximation to one QOI in a given problem while providing poor approximations to other QOIs. Thus, the questions that VVUQ must address are not well posed unless the QOIs have been defined.
• Verification and validation are not yes/no questions with yes/no answers, but rather are quantitative assessments of differences. Solution verification characterizes the difference between a computational model’s solution and that of the underlying mathematical model. Validation involves quantitative characterization of the difference between computed QOIs and true physical QOIs.

Specific to verification, the committee identified several guiding principles and associated best practices. The main text discusses all of these and provides supporting detail.
Some of the more important principles and practices are summarized here:

• Principle: Solution verification is well defined only in terms of specified quantities of interest, which are usually functionals of the full computed solution.
—Best practice: Clearly define the QOIs for a given VVUQ analysis, including the solution-verification task. Different QOIs will be affected differently by numerical errors.
—Best practice: Ensure that solution verification encompasses the full range of inputs that will be employed during UQ assessments.

• Principle: The efficiency and effectiveness of code and solution verification can often be enhanced by exploiting the hierarchical composition of codes and mathematical models, with verification performed first on the lowest-level building blocks and then on successively more complex levels.
—Best practice: Identify hierarchies in computational and mathematical models and exploit them for code and solution verification. It is often worthwhile to design the code with this approach in mind.
—Best practice: Include in the test suite problems that test all levels in the hierarchy.

• Principle: The goal of solution verification is to estimate, and control if possible, the error in each QOI for the problem at hand.
—Best practice: When possible in solution verification, use goal-oriented a posteriori error estimates, which give numerical error estimates for specified QOIs. In the ideal case the fidelity of the simulation is chosen so that the estimated errors are small compared to the uncertainties arising from other sources.
—Best practice: If goal-oriented a posteriori error estimates are not available, try to perform self-convergence studies (in which QOIs are computed at different levels of refinement) on the problem at hand, which can provide helpful estimates of numerical error.

Many VVUQ tasks introduce questions that can be posed, and in principle answered, within the realm of mathematics. Validation and prediction introduce additional questions whose answers require judgments from the realm of subject-matter expertise.
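The self-convergence practice described above can be made concrete with a small sketch. Here the "computational model" is simple trapezoidal quadrature standing in for a discretized solve, and the integrand and refinement levels are illustrative choices, not from the report. Computing the QOI at three refinement levels yields an observed order of accuracy and a Richardson-style estimate of the remaining numerical error.

```python
import numpy as np

# "Computational model": composite trapezoidal quadrature of a smooth
# integrand, standing in for a discretized solve. QOI: the integral on [0, 1].
def qoi(n_cells):
    x = np.linspace(0.0, 1.0, n_cells + 1)
    f = np.exp(-x**2)
    h = x[1] - x[0]
    return h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

# Self-convergence study: compute the QOI at three refinement levels.
q1, q2, q3 = qoi(50), qoi(100), qoi(200)   # spacings h, h/2, h/4

# Observed order of accuracy inferred from the three levels.
p = np.log2((q1 - q2) / (q2 - q3))

# Richardson-style estimate of the numerical error at the finest level.
err_est = (q2 - q3) / (2**p - 1)

print(f"observed order      ~ {p:.2f}")      # trapezoidal rule is 2nd order
print(f"est. error at h/4   ~ {err_est:.2e}")
```

If the observed order matches the scheme's theoretical order, the error estimate can be compared against the uncertainties from other sources, in line with the best practice above.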
For validation and prediction, the committee identified several principles and associated best practices, which are detailed in the main text. Some of the more important of these are summarized here:

• Principle: A validation assessment is well defined only in terms of specified quantities of interest (QOIs) and the accuracy needed for the intended use of the model.
—Best practice: Early in the validation process, specify the QOIs that will be addressed and the required accuracy.
—Best practice: Tailor the level of effort in assessment and estimation of prediction uncertainties to the needs of the application.

• Principle: A validation assessment provides direct information about model accuracy only in the domain of applicability that is “covered” by the physical observations employed in the assessment.
—Best practice: When quantifying or bounding model error for a QOI in the problem at hand, systematically assess the relevance of supporting data and validation assessments (which were based on data from different problems, often with different QOIs). Subject-matter expertise should inform this assessment of relevance (as discussed above and in Chapter 7).
—Best practice: If possible, use a broad range of sources of physical observations so that the accuracy of a model can be checked under different conditions and at multiple levels of integration.
—Best practice: Use “holdout tests” to test validation and prediction methodologies. In such a test some validation data are withheld from the validation process, the prediction machinery is employed to “predict” the withheld QOIs, with quantified uncertainties, and finally the predictions are compared to the withheld data.
—Best practice: If the desired QOI was not observed for the physical systems used in the validation process, compare sensitivities of the available physical observations with those of the QOI.
—Best practice: Consider multiple metrics for comparing model outputs against physical observations.
• Principle: The efficiency and effectiveness of validation and prediction assessments are often improved by exploiting the hierarchical composition of computational and mathematical models, with assessments beginning on the lowest-level building blocks and proceeding to successively more complex levels.
—Best practice: Identify hierarchies in computational and mathematical models, seek measured data that facilitate hierarchical validation assessments, and exploit the hierarchical composition to the extent possible.
—Best practice: If possible, use physical observations, especially at more basic levels of the hierarchy, to constrain uncertainties in model inputs and parameters.

• Principle: Validation and prediction often involve specifying or calibrating model parameters.
—Best practice: Be explicit about what data/information sources are used to fix or constrain model parameters.
—Best practice: If possible, use a broad range of observations over carefully chosen conditions to produce more reliable parameter estimates and uncertainties, with less “trade-off” between different model parameters.

• Principle: The uncertainty in the prediction of a physical QOI must be aggregated from uncertainties and errors introduced by many sources, including discrepancies in the mathematical model, numerical and code errors in the computational model, and uncertainties in model inputs and parameters.
—Best practice: Document assumptions that go into the assessment of uncertainty in the predicted QOI, and also document any omitted factors. Record the justification for each assumption and omission.
—Best practice: Assess the sensitivity of the predicted QOI and its associated uncertainties to each source of uncertainty as well as to key assumptions and omissions.
—Best practice: Document key judgments—including those regarding the relevance of validation studies to the problem at hand—and assess the sensitivity of the predicted QOI and its associated uncertainties to reasonable variations in these judgments.
—Best practice: The methodology used to estimate uncertainty in the prediction of a physical QOI should also be equipped to identify paths for reducing uncertainty.

• Principle: Validation assessments must take into account the uncertainties and errors in physical observations (measured data).
—Best practice: Identify all important sources of uncertainty/error in validation data—including instrument calibration, uncontrolled variation in initial conditions, variability in measurement setup, and so on—and quantify the impact of each.
—Best practice: If possible, use replications to help estimate variability and measurement uncertainty.
—Remark: Assessing measurement uncertainties can be difficult when the “measured” quantity is actually the product of an auxiliary inverse problem—that is, when it is not measured directly but is inferred from other measured quantities.

PROMISING RESEARCH AREAS

After surveying today’s VVUQ methods and their mathematical foundations, the committee identified several research topics that offer the promise of improved methods and improved outcomes. The areas identified for verification research are discussed in detail in Chapter 3 and summarized in Chapter 7; they include:

• Development of goal-oriented a posteriori error-estimation methods that can be applied to mathematical models that are more complicated than linear elliptic partial differential equations (PDEs).
• Development of algorithms for goal-oriented error estimates that scale well on massively parallel architectures, especially given complicated grids (including adaptive-mesh grids).
• Development of methods to estimate error bounds when meshes cannot resolve important scales. An example is turbulent fluid flow.
• Development of reference solutions, including “manufactured” solutions, for the kinds of complex mathematical models described above.
• For computational models that are composed of simpler components, including hierarchical models: development of methods that use numerical-error estimates from the simpler components, along with information about how the components are coupled, to produce numerical-error estimates for the overall model.
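The "manufactured solutions" idea mentioned in the verification research topics can be illustrated on a toy problem. The equation, exact solution, scheme, and grid sizes below are illustrative assumptions: choose an exact solution, derive the forcing term analytically, run the code on the manufactured problem, and check that the error shrinks at the scheme's theoretical rate.

```python
import numpy as np

# Code verification by manufactured solutions for -u'' = f on (0, 1) with
# u(0) = u(1) = 0. Manufactured solution u(x) = sin(pi x), which forces
# f(x) = pi^2 sin(pi x).
def solve(n):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.pi**2 * np.sin(np.pi * x[1:-1])
    # Standard second-order finite-difference Laplacian (tridiagonal).
    A = (np.diag(np.full(n - 1, 2.0))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f)
    return x, u

def max_error(n):
    x, u = solve(n)
    return np.max(np.abs(u - np.sin(np.pi * x)))  # error vs. exact solution

# Halving h should cut the max error by ~4 for a correct 2nd-order code.
e1, e2 = max_error(40), max_error(80)
order = np.log2(e1 / e2)
print(f"observed order ~ {order:.2f}")
```

An observed order well below the theoretical value would signal a coding or discretization error, which is exactly what this kind of test is designed to expose.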
Research needed to improve uncertainty quantification methodologies is discussed in Chapter 4 and summarized in Chapter 7. Key identified UQ research topics include:
• Development of scalable methods for constructing emulators that reproduce the high-fidelity model results at training points, accurately capture the uncertainty away from training points, and effectively exploit salient features of the response surface.
• Development of phenomena-aware emulators, which would incorporate knowledge about the phenomena being modeled and thereby enable better accuracy away from training points.
• Development of methods for characterizing rare events, for example by identifying input configurations for which the model predicts significant rare events, and estimating their probabilities.
• Development of methods for propagating and aggregating uncertainties and sensitivities across hierarchies of models. (For example, how to aggregate sensitivity analyses across micro-scale, meso-scale, and macro-scale models to give accurate sensitivities for the combined model remains an open problem.)
• Research and development in the compound area of (1) extracting derivatives and other features from large-scale computational models and (2) developing UQ methods that efficiently use this information.
• Development of techniques to address high-dimensional spaces of uncertain inputs.
• Development of algorithms and strategies across the spectrum of UQ-related tasks that can efficiently use modern and future massively parallel computer architectures.

Promising research topics to support validation and prediction are discussed in Chapter 5 and summarized in Chapter 7. Identified topics for validation and prediction include:

• Development of methods and strategies to quantify the effect of subject-matter judgments, which necessarily are involved in validation and prediction, on VVUQ outcomes.
• Development of methods that help to define the “domain of applicability” of a model, including methods that help quantify the notions of near neighbors, interpolative predictions, and extrapolative predictions.
• Development of methods or frameworks that help with the important problem of relating model-to-model differences, among models in an ensemble, to the discrepancy between models and reality.
• Development of methods to assess model discrepancy and other sources of uncertainty in the case of rare events, especially when validation data do not include such events.

Computational modeling and simulation will continue to play key roles in research in engineering and physical sciences (and in many other fields). It already aids scientific discovery, advances understanding of complex physical systems, augments physical experimentation, and informs important decisions. Future advances will be determined in part by how well VVUQ methodology can integrate with the next generation of computational models, high-performance computing infrastructure, and subject-matter expertise. This integration will require that students in these various areas be adequately educated in the mathematical foundations of VVUQ. The committee observes that students in VVUQ-dependent fields are not as well prepared today as they could be to deal with uncertainties that invariably affect problem formulation, software development, and interpretation and presentation of results. As requested by its tasking, the committee identified several actions that could help to address this.

Recommendation: An effective VVUQ education should encourage students to confront and reflect on the ways that knowledge is acquired, used, and updated.

Recommendation: The elements of probabilistic thinking, physical-systems modeling, and numerical methods and computing should become standard parts of the respective core curricula for scientists, engineers, and statisticians.

Recommendation: Researchers should understand both VVUQ methods and computational modeling to more effectively exploit synergies at their interface. Educational programs, including research programs with graduate-education components, should be designed to foster this understanding.

Recommendation: Support for interdisciplinary programs in predictive science, including VVUQ, should be made available for education and training to produce personnel that are highly qualified in VVUQ methods.
Recommendation: Federal agencies should promote the dissemination of VVUQ materials and the offering of informative events for instructors and practitioners.

SUMMARY APPROACH

In summary, the committee has studied VVUQ as it applies to predictive science and engineering, with a focus on the mathematical foundations of VVUQ methodologies. It has identified key principles that it finds helpful and has identified best practices that it has observed in the application of VVUQ to difficult problems in computational science and engineering. It has identified research areas that promise to improve the mathematical foundations that undergird VVUQ processes. Finally, it has discussed changes in the education of professionals and dissemination of information that should enhance the ability of future VVUQ practitioners to improve and properly apply VVUQ methodologies to difficult problems, enhance the ability of VVUQ customers to understand VVUQ results and use them to make informed decisions, and enhance the ability of all VVUQ stakeholders to communicate with each other. The committee offers its observations and recommendations in the hope that they will help the VVUQ community as it continues to improve VVUQ processes and broaden their applications.