Effects on Formula Outputs of Errors in Formula Inputs
As discussed in Chapter 2, several presentations at the workshop focused on what inputs and features (e.g., thresholds) should be included in a particular formula to obtain the desired allocations or what data sources and estimation methods should be used to estimate a formula's inputs. Presenters also discussed the many sources of errors in estimates of formula inputs. These errors include sampling variability, bias, and lack of conceptual fit between the inputs specified by legislation or regulation and the inputs that are estimated from survey or other data.
Although it is widely understood that such errors in formula inputs can lead to errors in formula outputs (i.e., allocations to states or other jurisdictions), it is less well known that the effects of errors in formula inputs can be amplified, attenuated, or influenced in other ways when the errors in inputs interact with the properties of formulas. Thus, one session of the workshop was devoted to formal, statistical issues via presentation and discussion of two papers that explored how estimation errors in formula inputs can affect formula outputs in a single year and over time. This session focused on the interactions among data sources, estimation methods, and formula features and their combined effects on formula outputs, which are—like the formula inputs—statistical quantities. The papers presented showed how such interactions can produce sometimes unanticipated results.
First, Alan Zaslavsky of Harvard Medical School discussed findings from work conducted jointly with Allen Schirm of Mathematica Policy Research. Zaslavsky began by reviewing the statistical properties of estimation methods and data sources. He contrasted direct and indirect estimates. Direct estimates for a “domain” (defined by both geography and time) are based on data from that domain only. Indirect estimates are constructed using direct information and information from other domains, such as other geographic areas or other time periods. Thus, indirect estimates can be spatially or temporally indirect, or both. Zaslavsky noted that many if not most estimates that have been used for fund allocations are indirect.
Next, Zaslavsky discussed the relative limitations of the principal data sources for estimating formula inputs: the decennial census, current surveys such as the Current Population Survey (CPS) and the Survey of Income and Program Participation, and administrative records. He then described the potential implications of introducing the American Community Survey (ACS) as a source of data for allocating funds. His main point was that introducing a new data source could have substantial consequences for allocations because it changes the estimation errors in formula inputs and may change the frequency at which allocations are recomputed.
Zaslavsky concluded his review of formulas and their inputs by discussing some common formula features. Formulas often contain features that cause allocations to be disproportionate to need even though proportional allocation may be the primary objective. Such features include hold-harmless provisions that limit downward fluctuations in funding and thresholds that require a minimum level of need for distribution of funds, thus concentrating funding where it is most needed.1
In the remainder of his presentation, Zaslavsky focused on the interactions between the statistical properties of data sources, estimation methods, and the resulting estimates of formula inputs and the features of funding formulas. He began his discussion of these interactions by stating several general results:
In the fund allocation process, the procedure for estimating formula inputs cannot be separated entirely from the funding formula. For example, a formula that specifies the use of a moving average of estimates for three single years produces the same allocations as a formula that specifies the use of the “best” estimate for a year if that estimate is technically implemented as the moving average over three years.

1Some threshold provisions are designed to avoid distribution of amounts so small that they could not be used effectively by the areas receiving them. Others are designed to channel resources to those areas whose needs are greatest in absolute or relative terms.
Two main implications of using decennial census estimates of formula inputs are that allocations are stable in most years with possibly large shifts every 10 years (depending on what hold-harmless provision may pertain) and that allocations are sensitive to the particular socioeconomic and demographic conditions in the census reference year rather than to the average conditions over a decade.
The effects of a hold-harmless provision depend on the frequency with which fund allocations are recomputed.
Averaging over time reduces the variances of estimates of formula inputs.
If the estimation procedure and the funding formula are linear, allocations will be unbiased, that is, correct on average over time.
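The last two general results can be checked with a small simulation sketch. The numbers below (a true need of 100, a sampling standard deviation of 10, and 1,000 dollars allocated per unit of estimated need) are hypothetical values chosen only for illustration, not figures from any actual program.

```python
import random

random.seed(3)

TRUE_NEED = 100.0   # hypothetical true need for one area
SIGMA = 10.0        # assumed sampling standard deviation of the estimate
RATE = 1000.0       # assumed dollars allocated per unit of estimated need
N = 40000           # number of simulated allocation draws

def alloc(est):
    """A purely linear formula: allocation proportional to estimated need."""
    return RATE * est

# Allocations based on a single year's estimate...
singles = [alloc(random.gauss(TRUE_NEED, SIGMA)) for _ in range(N)]
# ...versus allocations based on a three-year moving average.
averaged = [alloc(sum(random.gauss(TRUE_NEED, SIGMA) for _ in range(3)) / 3)
            for _ in range(N)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Both schemes are unbiased (mean allocation close to RATE * TRUE_NEED),
# but averaging over three years cuts the variance roughly threefold.
```

Because the formula is linear, averaging changes the spread of allocations but not their long-run mean; as the later findings show, that property disappears once a threshold or hold-harmless provision is introduced.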
After describing these relatively straightforward, general results, Zaslavsky presented simulation results that illustrate the more complex interactions between the statistical properties of estimates and the features of allocation formulas. The simulation scenarios were defined by those statistical properties (e.g., method of estimation and magnitude of sampling error) and formula features (e.g., presence of a hold-harmless provision or a threshold). Amounts allocated to each geographic area were determined independently.2
Principal findings from these simulations were:
When there is a threshold in a formula, sampling variability in estimates of formula inputs smooths allocations toward the threshold, an effect that is strongest for areas whose true need is near the threshold. As sampling variability rises, areas whose true need is below the threshold for receiving funds are more likely to receive funds, and those whose true need is above the threshold are more likely to receive nothing. On average, areas with true need below the threshold get more than they deserve, while areas with true need above the threshold get less than they deserve. The amount of smoothing of average allocations toward the threshold increases as sampling variability increases. Thus, there is a tendency for the allocations for smaller areas (which typically have smaller samples and larger sampling errors) to be distorted more than the allocations for larger areas. This implies that the sampling plan for the data source used to produce estimates can affect the allocation of funds, an effect that is almost certainly not anticipated when statisticians specify the sampling plan or when policy makers specify a threshold for a formula.

2For a full description of the specifications for these simulations and the results, see Zaslavsky and Schirm, 2000.
When there is a hold-harmless provision in a formula that allows an area's allocation to rise by any amount but fall by only a limited amount, sampling variability in estimates of formula inputs “ratchets up” allocations over time. The amount of ratcheting increases as sampling variability increases. Thus, smaller areas tend to benefit more from a hold-harmless provision than larger areas because the upward bias in allocations is greater for the smaller areas.
Using moving average estimates (e.g., averaging information from the most recent three years) can greatly reduce the biasing effect of a hold-harmless provision.
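The ratcheting effect and its mitigation by moving averages can be illustrated with a minimal simulation under assumed parameter values. For simplicity, the sketch uses a total hold-harmless floor (allocations may rise freely but never fall at all), which is stronger than most real provisions but makes the mechanism easy to see; the true need, sampling error, and horizon are hypothetical.

```python
import random

random.seed(2)

TRUE_NEED = 100.0   # hypothetical true need, constant over time
SIGMA = 20.0        # assumed sampling standard deviation
YEARS = 30          # number of allocation cycles simulated
TRIALS = 2000       # independent simulation runs

def mean_final_alloc(hold_harmless, window):
    """Mean allocation in the final year, averaged over many runs.

    window=1 uses each year's direct estimate; window=3 uses a three-year
    moving average.  With hold_harmless=True the allocation may rise but
    never fall (a total floor, assumed here for simplicity).
    """
    total = 0.0
    for _ in range(TRIALS):
        history = []
        alloc = TRUE_NEED
        for _ in range(YEARS):
            history.append(random.gauss(TRUE_NEED, SIGMA))
            recent = history[-window:]
            est = sum(recent) / len(recent)
            alloc = max(alloc, est) if hold_harmless else est
        total += alloc
    return total / TRIALS

no_floor = mean_final_alloc(False, 1)  # unbiased: stays near 100
ratchet = mean_final_alloc(True, 1)    # drifts well above 100 over time
damped = mean_final_alloc(True, 3)     # averaging shrinks the ratchet
```

Even though true need never changes, the floored allocation climbs year after year because each fortuitously high estimate is locked in; the three-year moving average draws less variable estimates and so locks in smaller windfalls.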
Next, Zaslavsky discussed the implications of assuming statistical independence of the geographic areas. He noted that for most formula allocation programs, this specification is not strictly true because the sum of the amounts allocated for each time period must equal the total amount appropriated for the program. When total funding is fixed, an undeservedly high allocation to one area (due to a fortuitous sampling error) comes at the expense of areas that were not so fortunate, so these other areas are allocated less than they would have been if total program funding were open-ended. Nevertheless, based on algebraic derivations, Zaslavsky argued that under some—but not all—circumstances, the relative biases in allocations to different areas under such fixed funding are essentially the same as the relative biases found in the simulations under the assumption of open-ended funding. One circumstance under which biases might be notably different with fixed funding is when a small number of areas substantially influence the estimate of the basic allocation parameter (e.g., the dollars allocated per eligible person).
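The fixed-funding mechanism can be seen in a toy example (the needs and appropriation total below are hypothetical). When estimated needs are scaled so that allocations exhaust a fixed appropriation, one area's overestimate is necessarily paid for by the others.

```python
def fixed_pot_alloc(estimated_needs, appropriation=1000.0):
    """Scale estimated needs so that allocations sum to a fixed total."""
    total_est = sum(estimated_needs)
    return [appropriation * e / total_est for e in estimated_needs]

true_need = [100.0, 200.0, 300.0]   # hypothetical true needs of three areas
on_target = fixed_pot_alloc(true_need)

# Suppose a fortuitous sampling error inflates area 0's estimate by half.
estimated = [150.0, 200.0, 300.0]
off_target = fixed_pot_alloc(estimated)

# Area 0's windfall comes at the expense of areas 1 and 2, even though
# their own needs were estimated exactly right.
```

Under open-ended funding, by contrast, areas 1 and 2 would receive their deserved amounts regardless of area 0's error, which is why the two funding specifications can yield different biases.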
Zaslavsky concluded his presentation by observing that there are inherent conflicts of values in allocating funds. If allocations are responsive to changes in need, an area's funding may be unstable, whereas if funding is stable, it may not be responsive to changes in need. When policy makers specify a formula that is responsive to increases in need and impose a hold-harmless provision to ensure stability (in the form of protection against a substantial drop in funding), allocations will be biased. The biases will be different across areas in ways that are unrelated to differences in need. Some of these conflicts in values could be lessened if all participants in the allocation process worked together and better appreciated that data sources, estimation methods, and the funding formula are all part of a single process. Policy makers could then take account of the properties of potential data sources and estimation methods when designing formulas, and statisticians could take account of policy objectives and formula properties when evaluating new data sources or estimation methods.
David Betson of the University of Notre Dame gave the second presentation. Betson first addressed whether the estimated biases reported in the paper by Zaslavsky and Schirm are sensitive to the specification of open-ended funding used in their simulations. To investigate this issue, Betson conducted simulations under both fixed and open-ended funding. He discussed the following findings:
In the absence of hold-harmless provisions and thresholds, there are no biases in allocations if funding is open-ended. However, if total funding is fixed, there may be small biases. A few of the largest areas (those with stable estimates) may receive slightly less than they would receive under open-ended funding, while some of the remaining areas may receive slightly more.
In the presence of a threshold, the biases in allocation under open-ended and fixed total funding are about the same except for large areas with true need substantially above the threshold. Under open-ended funding, allocations to such areas are approximately unbiased, while under fixed funding they may be somewhat downwardly biased. However, this finding pertains only when a substantial fraction of the total true need is in areas with true need near the threshold.
Next, Betson addressed the goal of stable funding for every area. Stability is often justified on grounds of equity, with the claim that large decreases in funding are unfair. Betson proposed that stable, predictable funding is also needed if funds are to be spent effectively. Money may be wasted if there are large swings in funding, whether they are up or down. Two methods of achieving stability are to include a hold-harmless provision in a formula or to smooth estimates of formula inputs by calculating moving averages. Alternatively, the allocation formula can be smoothed in a way that is not explicitly temporal. For example, Betson noted that stability in funding could be increased if the step function that defines a funding threshold is replaced by a logistic function or other smooth function. The limitation of this approach is that there is a tradeoff between the goal of stability and the goal of concentrating funds where need is greatest. As the allocation formula becomes smoother, there is less concentration of funding. Thus, as noted earlier in the presentation by Alan Zaslavsky, attractive policy objectives are often in conflict in practice, even when policy makers agree on the objectives.
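The contrast between a step function and a logistic phase-in can be sketched as follows. The 15 percent cutoff and the logistic steepness below are illustrative values, not taken from any actual program.

```python
import math

THRESHOLD = 0.15   # hypothetical cutoff, e.g., a 15 percent poverty rate

def step_share(rate):
    """Hard threshold: full weight at or above the cutoff, nothing below."""
    return rate if rate >= THRESHOLD else 0.0

def logistic_share(rate, steepness=60.0):
    """Smooth replacement: the weight rises gradually through the cutoff
    instead of jumping from 0 to 1 (steepness is an assumed tuning value)."""
    weight = 1.0 / (1.0 + math.exp(-steepness * (rate - THRESHOLD)))
    return rate * weight

# A tiny change in the estimated rate moves the step allocation from zero
# to everything, while the logistic allocation barely moves...
jump = step_share(0.151) - step_share(0.149)
wiggle = logistic_share(0.151) - logistic_share(0.149)

# ...but the logistic function also pays areas well below the cutoff,
# which is the loss of concentration Betson described.
below_cutoff = logistic_share(0.10)
```

The steepness parameter makes the tradeoff explicit: a steeper logistic concentrates funding more but restores the instability of the step, while a flatter one is more stable but spreads funds to lower-need areas.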
Paul Siegel of the U.S. Census Bureau led off the discussion of these two presentations. He began by observing that allocations are the result of combining a formula and estimates of the formula's inputs, and those estimates are the result of combining truth and statistical error. Generally, allocations based on estimates will not be the same as allocations based on truth. The Zaslavsky-Schirm and Betson papers demonstrate that the differences between these allocations are exacerbated by features of the formulas. But, according to Siegel, that may simply reflect the fact that formulas result from many inevitable compromises over many different goals. Those compromises and perhaps some of the biases that may result from them are not necessarily inefficiencies or deficiencies that must be eliminated.
However, Siegel was troubled by the fact that, as demonstrated by Zaslavsky and Schirm, areas with the same true need could receive different expected allocations simply because need was measured with varying precision. This finding suggested to him that one important advantage of using model-based estimates, such as the estimates from the U.S. Census Bureau's Small Area Income and Poverty Estimates (SAIPE) Program, is that the model-based estimates are more precise and are much more equal in precision across areas than are direct estimates.
Siegel concluded by remarking that the paper by Zaslavsky and Schirm has illustrated many of the potentially troubling ways in which properties of estimates and features of formulas interact. He noted that Betson has shown that these interaction effects may be significant in some areas and negligible in others. Thus, it is important to begin assessing how well actual allocation processes are performing.
Robin Fisher of the U.S. Census Bureau was the second discussant. He showed graphically how, in the presence of a threshold, an area's expected allocation could change as the area's true need changes. Fisher noted a problem that Zaslavsky and Schirm found in their simulations: when an area's true need is not far above the threshold, the area may receive much less than it deserves on average. This bias occurs because when there is substantial variability in estimated need, there is a substantial probability that the area's estimated need falls below the threshold and the area receives no funds.
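Fisher's point can be reproduced numerically with a short sketch. The threshold of 50 and the sampling standard deviations below are hypothetical: an area deserving 55 receives far less on average when its need is measured noisily, while an area deserving nothing receives a positive amount on average.

```python
import random

random.seed(4)

THRESHOLD = 50.0   # hypothetical cutoff on the measure of need
TRIALS = 60000     # simulated draws per scenario

def mean_alloc(true_need, sigma):
    """Expected allocation under a hard threshold: pay the estimated need
    when the estimate clears the cutoff, otherwise pay nothing."""
    total = 0.0
    for _ in range(TRIALS):
        est = random.gauss(true_need, sigma)
        if est >= THRESHOLD:
            total += est
    return total / TRIALS

precise = mean_alloc(55.0, sigma=2.0)    # close to the deserved 55
noisy = mean_alloc(55.0, sigma=15.0)     # well below the deserved 55
windfall = mean_alloc(45.0, sigma=15.0)  # positive, though 0 is deserved
```

The same true need of 55 yields very different expected allocations depending only on measurement precision, which is the inequity that troubled Siegel and that motivates both model-based estimates and smoother formulas.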
Fisher noted that he interprets a formula as an expression of policy makers' intent, although some of the previous discussion at the workshop had led him to question that interpretation. If a formula truly reflects such intent, abstract arguments to make formulas smoother—despite their appeal to statisticians—might not convince policy makers. However, illustrating how very small errors in estimates can cause some areas to lose all of their funding may persuade policy makers that some changes to formulas could enhance fairness.
The remarks by Siegel and Fisher were followed by a period of open discussion. One workshop participant asked Alan Zaslavsky what specific suggestions he would make for replacing a step function in a funding formula. Zaslavsky responded by noting that there are two problems with step functions.3 First, they can produce allocations that are highly variable. Second, the allocations, on average, depend on the sampling properties of estimates of formula inputs. Surely, neither of these outcomes was intended by policy makers when they specified a step function. Thus, it may not be hard to convince policy makers that a smoother function would be desirable. One drawback to a smoother function that would have to be addressed is that some of the amounts allocated will be smaller than the amount implied by the threshold of the original step function. Thus, one would need to consider the specific program to determine whether such amounts would be too small to be spent effectively. Zaslavsky observed that piecewise linear functions like those that Congress has specified in the tax code would be simpler and easier to explain than the logistic function (discussed by David Betson) but would have a similar effect. Betson added that although smoother functions will reduce some of the problems under discussion, they will not eliminate the problems if they are nonlinear.

3A threshold, in which nothing is received if the estimate of need is below a specified number or proportion, is one form of step function. Another example occurred in the Title I education allocations for fiscal year 1997, in which the hold-harmless provision applied to basic grants at variable rates. Counties and school districts with 30 percent or more poor school-age children were guaranteed at least 95 percent of the previous year's grant. The guarantee dropped to 90 percent for areas with 15-30 percent poor school-age children and 85 percent for areas with fewer than 15 percent poor school-age children (National Research Council, 1997:50).
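A piecewise linear phase-in of the kind Zaslavsky mentioned might be sketched as follows; the breakpoints are hypothetical. Instead of jumping from zero to full weight at a single cutoff, the weight ramps up linearly between two cutoffs.

```python
def ramp_share(rate, lo=0.10, hi=0.15):
    """Piecewise-linear phase-in with hypothetical breakpoints: no weight
    below lo, full weight above hi, and a straight-line ramp in between."""
    if rate <= lo:
        weight = 0.0
    elif rate >= hi:
        weight = 1.0
    else:
        weight = (rate - lo) / (hi - lo)
    return rate * weight
```

The ramp is continuous, so a small error in the estimated rate can never move an area's allocation from everything to nothing; but, being nonlinear, it still leaves expected allocations dependent on sampling error, which is Betson's closing caveat.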
One workshop attendee raised the potentially important issue that improving estimates of a formula input may not be all that helpful if the input is only weakly associated with objective measures of program success. For example, obtaining better estimates of the number of poor children for Title I allocations may not improve the overall effectiveness of Title I funds in improving educational outcomes for the target population. Alan Zaslavsky responded to this point, noting that even if the input to a formula were exactly the sole measure of program success and there were no conceptual problems in measuring the input from available data, the problem of sampling error in estimating the input would still exist. Even in this best-case situation, there would still be the unintended effects of interactions between properties of estimates and features of allocation formulas. Many of the approaches to reducing those effects would cost very little and would surely be worth pursuing. A key point of Zaslavsky's response was that the issues raised by the presenters in this session are relevant to real fund allocation processes, even though there may be other important concerns, such as whether a formula's inputs, even if they could be perfectly measured, are appropriate.