A VALIDATION EXPERIMENT WITH TRIM2

Analysts working in microsimulation modeling are more interested in estimating changes, such as changes in total costs, resulting from a choice between two competing programs, than in estimating, say, the costs of an individual program. While this focus is open to question (since there are occasions when estimates are needed for entirely new programs), if change is of primary importance, a more relevant experiment would have investigated the difference between projections for two or more rules changes. Such an experiment would have been feasible as a sensitivity analysis, but because only one rules change can ever be instituted in a given year, an external validation of projections of program change is rarely feasible. We therefore examined instead estimates of a program change based on an earlier database against measures of what actually happened in the year the change took effect.

CHOICE OF MODEL YEAR, PROGRAM YEAR, AND COMPARISON VALUES

The first step in planning the experiment was to choose a previous year's TRIM2 database with which to predict the costs and caseload characteristics of a program change in existence in a later year, also to be determined. We decided to use TRIM2 with 1983 data (based on the March 1984 Current Population Survey [CPS]) to estimate the costs of the 1987 law for the AFDC program in that year, as well as other distributional characteristics of the program. (See Giannarelli [1990b] for a description of differences between the 1983 and 1987 laws for AFDC and supplemental security income [SSI]. Simulations were also run for SSI for the experiment, but the results were not analyzed.) The years 1983 and 1987 were chosen for three reasons: the March 1988 CPS (which was needed to generate known population control totals) was the latest file available at the time; at least a 3-year forecasting window was desired; and definitional and other comparability problems began cropping up for CPS data as the forecasting horizon grew appreciably longer.
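The core of such an external validation is straightforward: compare the model's projections, made from the earlier database, against the values later observed for the program year, and summarize the deviations. A minimal sketch of that comparison follows; all figures and variable names are hypothetical placeholders, not results from the TRIM2 experiment:

```python
# Sketch of an external validation comparison: model projections made from
# an earlier database versus later observed "comparison values" treated as
# surrogate truth. All figures below are hypothetical, not TRIM2 results.

def percent_error(projected: float, observed: float) -> float:
    """Signed percent deviation of a projection from the comparison value."""
    return 100.0 * (projected - observed) / observed

# Hypothetical projected vs. observed program characteristics.
comparisons = {
    "total_benefits_dollars": (16.2e9, 15.8e9),
    "average_monthly_caseload": (3.71e6, 3.78e6),
}

for name, (projected, observed) in comparisons.items():
    print(f"{name}: {percent_error(projected, observed):+.1f}%")
```

In practice the deviations would be computed for each cost and caseload characteristic of interest, and judged against the sampling and nonsampling error attached to the comparison values themselves.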
Although some comparability problems were avoided, the time period examined and the March 1984 and 1988 CPS files exhibited some unique features that limited the generalizability of the results. (This is discussed further below and in Giannarelli [1990c].) Of course, every time period and every database are unique in some respects, and therefore no single experiment can be used to make general inferences about the efficacy of a model.

Throughout the remainder, the 1987 quality control data serve as comparison values (i.e., as surrogates for the truth). Used in this way, the quality control data are potentially flawed: they are subject to sampling variability, to bias from the use of different quality control systems in each state, and to response and procedural errors. In particular, the data are believed to be subject to nonsampling errors in the measurement of such characteristics as the composition of the AFDC unit's household. In this analysis, problems raised