A VALIDATION EXPERIMENT WITH TRIM2

The primary goals of the experiment were to (1) determine the resources required to conduct a sensitivity analysis and an external validation and, more generally, examine the feasibility of microsimulation model validation; (2) identify those modules that, when altered, have an appreciable effect on model outputs; (3) obtain an admittedly limited measure of model validity against comparison values from administrative records (limited because only one time period was examined and the comparison values used are subject to several sources of error); and (4) illustrate some of the data analysis techniques that analysts can use to help answer questions about the sources and magnitude of errors in projections from microsimulation models. In addition, we were interested in attempting to identify component alternatives superior to those currently used in TRIM2. Other efforts to validate microsimulation models are presented in Cohen (Chapter 7 in this volume). Two closely related studies are Kormendi and Meguire (1988) and Doyle and Trippe (1989).

OVERVIEW OF THE EXPERIMENT

The experiment was a combination of a sensitivity analysis and an external validation of TRIM2. The general idea was to use a previous year's TRIM2 database to predict various quantities associated with a program change put into effect in a later year. The sensitivity analysis was used to measure the variability of TRIM2's projections resulting from the specification of various components of the model. This was accomplished by using alternatives for the components in various combinations to generate projections associated with the program change. The variability of these projections provided evidence of their sensitivity to the various combinations of component alternatives.
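The sensitivity-analysis design described above can be sketched in a few lines: run the model once for every combination of component alternatives and examine the spread of the resulting projections. The module names, the alternative labels, and the projection function below are hypothetical placeholders for illustration only; they are not part of TRIM2.

```python
from itertools import product
from statistics import mean, pstdev

# Hypothetical component modules, each with two candidate alternatives.
alternatives = {
    "aging": ["baseline", "revised"],
    "participation": ["logit", "rule_based"],
    "benefit_calc": ["v1", "v2"],
}

def project_caseload(combo):
    """Stand-in for a full model run: returns one projection
    (in thousands of cases) for a combination of alternatives."""
    effects = {"revised": 1.04, "rule_based": 0.97, "v2": 1.02}
    value = 3600.0  # illustrative baseline projection
    for choice in combo.values():
        value *= effects.get(choice, 1.0)
    return value

# Full factorial over the component alternatives (2 x 2 x 2 = 8 runs).
modules = list(alternatives)
projections = []
for choices in product(*(alternatives[m] for m in modules)):
    projections.append(project_caseload(dict(zip(modules, choices))))

# The variability across runs is the evidence of sensitivity.
print(f"{len(projections)} runs, mean {mean(projections):.0f}, "
      f"sd {pstdev(projections):.0f}, range "
      f"[{min(projections):.0f}, {max(projections):.0f}]")
```

With real model runs in place of the stub, a large spread across combinations that differ only in one module would flag that module as one whose specification appreciably affects model outputs.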
For the external validation, comparison values from the Integrated Quality Control System (IQCS) were used in a variety of ways to measure how closely the projections produced by various combinations of alternatives corresponded to those values. Depending on the closeness of the comparison values to the true values, these correspondences may be used to rate or rank model versions defined by the choices among component alternatives.

Specification of the experiment involved identifying the year for the TRIM2 database, the year of the program change that was instituted, the component modules to be examined and the alternatives to be used for each, and, finally, what is meant by a good model, which entails deciding which outputs, and in which form, are important to predict well. As part of this decision, it is important to note that analysts