INTRODUCTION

They typically include static "aging" routines to bring their databases up to date or to project them into the future. Such routines reweight the individual records to match outside control totals for key demographic characteristics and make other adjustments for changes in income and employment. Dynamic models operate on longitudinal databases that contain individual histories. They "grow" their databases forward in time by applying transition probabilities to each record for such events as birth, death, marriage, and labor force status change.

Within these two distinct model types, there are variations in the handling of common functions that result from such factors as differences in client needs and in the styles of the model developers. In Chapter 3 Citro and Ross describe the different approaches taken by three static models (TRIM2, MATH, and HITSM; see below) to two important functions of models that simulate income support programs such as AFDC and food stamps: the routines that simulate the participation decision and the routines that convert annual to monthly values. In Chapter 4 Ross compares and contrasts two major dynamic models (DYNASIM2 and PRISM; see below) and reflects generally on the dynamic modeling approach.

Computing Technology

Given their complexity and size, microsimulation models depend heavily on computer hardware and software capabilities to operate in a cost-effective manner. Most models in widespread use today are designed for mainframe, batch-oriented processing, which minimizes the cost of single computer runs but imposes barriers to access and inhibits flexible, timely adaptation to new policy needs. In Chapter 5 Cotton and Sadowsky compare and contrast the mainframe computing environment of the TRIM2 model with the personal computer-based environment of SPSD/M (see below), the model developed by Statistics Canada.
Cotton and Sadowsky also assess likely future directions for computer hardware and software that offer potential benefits for improved microsimulation model capabilities.

Model Evaluation

Assessing the quality of model outputs is a vitally important part both of using model estimates in the policy debate and of determining fruitful directions for investment in improved model capabilities. For a variety of reasons, however, validation of microsimulation models has been a largely neglected activity. In Chapter 6 Cohen discusses the potential of relatively new, computer-intensive sample reuse techniques for developing variance estimates for the outputs of microsimulation models. In Chapter 7 Cohen reviews the scanty literature of previous microsimulation model validation studies.
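The sample reuse idea discussed above can be illustrated with a minimal bootstrap sketch. The survey records, weights, and the `simulate_benefits` policy routine below are hypothetical stand-ins invented for illustration; they are not part of TRIM2, DYNASIM2, or any other model named in this chapter:

```python
import random
import statistics

# Hypothetical microdata: (monthly_income, household_weight) pairs.
# A real model database would hold thousands of survey records
# with many more variables per record.
RECORDS = [(300, 1200), (800, 950), (1500, 1100), (450, 1300),
           (2200, 800), (600, 1050), (950, 900), (1750, 1000)]

def simulate_benefits(records, guarantee=700, reduction_rate=0.3):
    """Toy income-support rule: benefit = guarantee minus 30 percent
    of income, floored at zero; returns the weighted aggregate benefit."""
    total = 0.0
    for income, weight in records:
        benefit = max(0.0, guarantee - reduction_rate * income)
        total += benefit * weight
    return total

def bootstrap_se(records, n_reps=1000, seed=42):
    """Resample the records with replacement and rerun the simulation
    each time; the standard deviation of the replicate estimates is a
    sample-reuse estimate of the output's standard error."""
    rng = random.Random(seed)
    replicates = []
    for _ in range(n_reps):
        sample = [rng.choice(records) for _ in records]
        replicates.append(simulate_benefits(sample))
    return statistics.stdev(replicates)

point_estimate = simulate_benefits(RECORDS)
std_error = bootstrap_se(RECORDS)
print(f"aggregate benefit: {point_estimate:,.0f} +/- {std_error:,.0f}")
```

Note that this sketch resamples individual records independently, which ignores the stratified, clustered designs of real survey databases; applications of the kind Cohen discusses would need resampling schemes that respect the sample design.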