STATISTICAL MATCHING AND MICROSIMULATION MODELS

Iterative proportional fitting may provide an alternative to statistical matching. Suppose that in the recent past a survey collected information on all needed variables, but more recent data collection efforts have updated only the marginal information about certain variables, not the information about their joint distribution. Iterative proportional fitting could then use the more recent marginal information to update the older information on the joint distribution. The procedure successively modifies the frequency counts in the relevant k-way contingency table, one dimension at a time, bringing the marginal totals of the table into agreement with the newer marginal totals, until a modified contingency table with the updated marginals is obtained. Iterative proportional fitting therefore retains some of the joint distributional structure present in the original contingency table. (For a good reference on iterative proportional fitting, see Bishop, Fienberg, and Holland, 1975.)

There are at least two advantages of iterative proportional fitting over statistical matching of individual files from the newer surveys: first, the statistical match generally requires more computation; second, the statistical match, as typically carried out, ignores the information about the joint distribution present in the older comprehensive survey. Paass (1988) presents a newer algorithm that has advantages over iterative proportional fitting when the table has a large number of dimensions.

More Data Collection

In order to avoid the need to assume that Y and Z are conditionally independent given X, in some situations it may be possible to collect data on a small subset of individuals, a subset that is in some sense representative of the entire data set, and then directly estimate the amount of conditional dependence. Such estimates of conditional dependence could then be used to direct the statistical matching process.
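The iterative proportional fitting update described above can be sketched for the simplest, two-way case. The function name and the example counts below are illustrative, not taken from the text; the sketch assumes the newer marginal totals are consistent with one another (they sum to the same grand total).

```python
import numpy as np

def ipf(table, row_targets, col_targets, tol=1e-8, max_iter=1000):
    """Iteratively rescale a two-way contingency table so that its
    margins match the target totals, one dimension at a time.
    The rescaling preserves the table's cross-product (odds) ratios,
    i.e., some of the original joint distributional structure."""
    fitted = table.astype(float).copy()
    for _ in range(max_iter):
        # Scale each row to match the newer row margins.
        fitted *= (row_targets / fitted.sum(axis=1))[:, None]
        # Scale each column to match the newer column margins.
        fitted *= col_targets / fitted.sum(axis=0)
        # Stop once the row margins also agree after column scaling.
        if np.allclose(fitted.sum(axis=1), row_targets, atol=tol):
            break
    return fitted

# Older survey: joint distribution of two variables (illustrative).
old_table = np.array([[30.0, 10.0],
                      [20.0, 40.0]])
# Newer data collection: updated marginal totals only.
new_rows = np.array([50.0, 70.0])
new_cols = np.array([60.0, 60.0])

updated = ipf(old_table, new_rows, new_cols)
```

After convergence, `updated` has the newer margins while keeping the odds ratio of `old_table`, which is the sense in which the older joint structure is retained.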
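The direct estimation of conditional dependence described above can also be sketched. The text does not specify an estimator; one simple choice, assumed here for illustration, is to regress Y and Z on X in the training sample and take the covariance of the residuals, which estimates Cov(Y, Z | X) under a linearity assumption. The simulated data and function name are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training sample in which X, Y, and Z are all observed;
# in the text this is the special survey of a small subset of individuals.
n = 500
x = rng.normal(size=n)
y = x + rng.normal(size=n)              # Y depends on X
z = x + 0.5 * y + rng.normal(size=n)    # Z depends on X and on Y

def conditional_covariance(y, z, x):
    """Estimate Cov(Y, Z | X) by regressing Y and Z on X and taking
    the covariance of the residuals (assumes linear dependence on X)."""
    design = np.column_stack([np.ones_like(x), x])
    beta_y, *_ = np.linalg.lstsq(design, y, rcond=None)
    beta_z, *_ = np.linalg.lstsq(design, z, rcond=None)
    resid_y = y - design @ beta_y
    resid_z = z - design @ beta_z
    return np.cov(resid_y, resid_z)[0, 1]

est = conditional_covariance(y, z, x)
```

Under conditional independence of Y and Z given X the estimate would be near zero; a clearly nonzero value, as in this simulated sample, is the kind of information that could be used to direct the statistical matching process.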
Suppose one collected data on a special survey of 500 individuals, a training data set, enabling a rough estimate of V(Y,Z). One would then add additional constraints into the statistical match, equating a left-hand term computed from the small study to a right-hand term that is a function of the two large samples. The computation of V(Y,Z|X) involves w_ij, the weight given to matching the ith record from file A to the jth record from file B. Clearly, such a constraint is considerably nonlinear in w_ij, which would greatly increase the computational complexity of the matching algorithm, whether or not it was otherwise constrained. While this procedure has many advantages, including retaining many of the benefits of the statistical match with respect to increased disclosure avoidance and reduced respondent burden, the variability of the estimate of