
4

Roundtable and Concluding Sessions

The penultimate session of the workshop was a roundtable with presentations by four speakers, followed by open discussion. The speakers had been asked in advance to discuss one or more of the following questions:

  • Are there problems with the quality or timeliness of available data?

  • Are there features of new or future datasets that are particularly relevant to issues of formula allocation?

  • Do you think that the estimates used in formulas or the features of formulas have unintended consequences with respect to equity between jurisdictions? If so, what changes might resolve such problems?

  • Do you have any suggestions for changing formulas, data, and estimation procedures?

  • What issues could be usefully addressed by the Committee on National Statistics in a study of statistical and data needs for allocation formulas?

Paula Schneider, of the U.S. Census Bureau, led off by discussing the use of the Bureau's data products in funding formulas. She focused on potential future uses of data from the American Community Survey (ACS), which is now being developed. The ACS has been designed as a continuing household survey that, when fully implemented, will provide annually updated demographic and economic information for small areas. Its content will be similar to that of the decennial census long form. Testing for the ACS started in 1996. In April 2000 the survey was operating at 31 sites across the country, with sufficient sample size to produce estimates down to the census tract level, so it will be possible to compare these estimates with data from the 2000 census long form. Assuming that Congress provides funding, the ACS will become fully operational in 2003, with estimates for areas with populations as small as 65,000 available in 2004 and updated annually. By 2010 there will be no need for a long form in the decennial census.

As part of the 2000 census evaluation program, an ACS questionnaire was sent to a national sample of about 700,000 households. State estimates based on this supplementary survey will be available in July 2001. These data can be compared with the long-form data and could be used in conjunction with selected funding formulas to see what would happen if the new data were used.

Schneider emphasized that it will be up to Congress, working with program agencies and the U.S. General Accounting Office, to specify what data should be used in funding formulas. She believed that data from the ACS, supplemented by data from other surveys and administrative records, have the potential to improve the timeliness and quality of estimates used in funding formulas. She gave two examples:

  • Grants to states under the Individuals with Disabilities Education Act. At present, the state shares are still being determined on the basis of data from the 1990 census long form.

  • Community development block grants that go to metropolitan areas. ACS data for the largest areas will be released starting in July 2004, to be followed by data for successively smaller areas over the next four years.

Linda Gage, the state demographer with the California Department of Finance, discussed the role of current population estimates in determining the relative amounts received by states under various funding formulas. For example, the U.S.
Census Bureau's estimates of total population are used as control totals for Current Population Survey (CPS) estimates and for the estimates of school-age children in poverty that are produced by its Small Area Income and Poverty Estimates (SAIPE) Program for use in the allocation of Title I education grant funds. In the future, they will be used in the same way to produce estimates from the ACS. She noted the importance of the issue of whether all of these estimates should be adjusted for the census undercount. A 1999 U.S. General Accounting Office study showed that if adjusted 1990 census numbers had been used in fiscal year 1998, California would have received an additional $223 million in formula grant funding. If the adjusted 1990 census numbers had been used throughout the decade, the state might have received an additional $2.2 billion. A recently released study by PricewaterhouseCoopers estimated that if unadjusted rather than adjusted numbers from the 2000 census are used, California will lose an estimated $5 billion in the years 2002 to 2012.

Some states, including California, have their own population estimates programs used for allocating grant funds within their states. During the 1980s, the U.S. Census Bureau averaged its estimates for California with the state's independent estimates, which were higher, and published these averages as its official estimates. The 1990 census count for California exceeded both the U.S. Census Bureau and the state estimates. After the 1990 census, both the U.S. Census Bureau and the state of California evaluated their estimation procedures and introduced changes. The two sets of estimates have continued to diverge, and currently the state's estimate exceeds that of the U.S. Census Bureau by about 900,000 persons. Taking into account this difference plus the 1990 census estimate of undercount for California, Gage believes that the numbers currently being used in funding formulas are about 5 to 6 percent below the state's true population. She expressed her concern about the effects of differential underestimation on funding to the states, and her hope that evaluation of estimation procedures following the 2000 census can lead to changes that will reduce the divergence between the state's estimates and those produced by the U.S. Census Bureau.

The third roundtable speaker was Katherine Wallman, chief statistician at the U.S. Office of Management and Budget.
She is responsible for the development and periodic updating of several classification systems used in the collection and presentation of official statistics, covering concepts such as industry, occupation, race and ethnicity, poverty, and metropolitan areas. These classification systems are developed solely for statistical purposes; programmatic uses, such as regulation or allocation of funds, do not influence their general structure or specifics. Nevertheless, in practice they are often used to determine eligibility for federal assistance or to allocate funds to eligible areas. She provided several examples of how the metropolitan-area classifications have been used in these ways:

  • Under the legislation that governs the Medicare program, reimbursement rates for hospitals vary significantly, depending on whether or not they are located within metropolitan areas.

  • Under the Rural Revitalization Through Forestry Program, the term “rural community” means any county that is not contained within a metropolitan area as defined by the U.S. Office of Management and Budget.

  • In the Urban Park and Recreation Recovery Program, the secretary of the interior is authorized to establish eligibility to general-purpose local governments in standard metropolitan statistical areas.

  • Under the Rural Homelessness Grant Program, which makes grants to organizations providing direct emergency assistance to homeless individuals and families in rural areas, the terms “rural area” and “rural community” mean any area or community, no part of which is within an area designated as a standard metropolitan statistical area by the U.S. Office of Management and Budget.

Other examples covered such diverse areas as mortgage insurance, tax credits for low-income housing, organ transplants, and immigrant visas. In a recent review of the U.S. Code, her office found at least 20 such references to the metropolitan-area construct.

John Rolph, of the University of Southern California and chair of the Committee on National Statistics, was the final speaker. He reviewed some of the earlier discussion of the effects of thresholds and hold-harmless provisions used in fund allocation formulas. Clearly, it is not always well understood in advance how formulas will operate, and they do sometimes have unintended effects. As noted by several speakers, there are important conceptual and measurement error questions associated with the data elements included in funding formulas. Two fundamental questions need to be addressed: (1) What might be done to mitigate the effects of errors or uncertainty in the formulas? (2) Is correcting these flaws of principal importance, or are there more fundamental questions that need to be addressed about how formula allocation processes operate?
Commenting on the first question, Rolph noted that some ways to improve formulas had been suggested, including replacing thresholds with S-shaped, continuous functions and using moving averages rather than hold-harmless provisions to dampen the effects of large year-to-year changes. He suggested that better correspondence between the intentions of those who draft legislation and the actual formula might be achieved by creating an analytical resource, perhaps located in the U.S. General Accounting Office or the Congressional Research Service, that would provide real-time advice and appropriate modeling and simulation capabilities to legislative staff.

With regard to what may be more fundamental issues, Rolph noted that the Committee on National Statistics had been exploring the possibility of undertaking a panel study on formula allocation. Such a panel would be a logical follow-on to the Panel on Estimates of Poverty for Small Geographic Areas, to this workshop, and to the panel that was being established to evaluate the estimates used in the WIC program. He asked workshop participants for suggestions as to what issues should be on the agenda for the proposed panel.

In the general discussion that followed, several themes were suggested for consideration by the proposed panel on formula allocations. Some speakers proposed that the panel should go beyond an examination of the formulas themselves and how they are affected by the quality of the statistical data used as inputs. It should study the conceptual foundations of grant formulas and the desired outcomes with regard to efficiency and equity. It should address questions such as how to integrate measures of need, cost, and fiscal capacity into a formula and whether there are other components, such as outcome measures, that should be included. Some participants alluded to the resources provided for research on improving the data and the formula allocation procedures used in the Title I education and WIC programs, and suggested that other formula allocation programs might benefit from similar provisions. However, one participant cautioned that not all program agencies have the same ability to sponsor and monitor relevant research and to apply research findings to the allocation process. Others pointed out that there is a limit to how much accuracy is needed; at some point the costs of further improvements outweigh the benefits from any gains in efficiency and equity. Some felt that more resources should be devoted to the evaluation of program outcomes.
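Rolph's two suggested refinements (an S-shaped, continuous function in place of a hard threshold, and a moving average in place of a hold-harmless floor) can be illustrated with a small sketch. This is a hypothetical illustration only: the function names, the cutoff, and all parameter values are invented for the example and do not correspond to any actual program formula.

```python
# Hypothetical sketch of two smoothing ideas discussed by Rolph.
import math

def hard_threshold(need, cutoff=1000.0):
    """All-or-nothing eligibility: a small data error near the cutoff
    can swing the entire grant."""
    return 1.0 if need >= cutoff else 0.0

def s_curve(need, cutoff=1000.0, steepness=0.01):
    """S-shaped (logistic) alternative: eligibility phases in smoothly,
    so a small error in `need` changes the share only slightly."""
    return 1.0 / (1.0 + math.exp(-steepness * (need - cutoff)))

def hold_harmless(grants, floor=0.95):
    """Each year's grant may not fall below `floor` times the previous
    year's grant, regardless of what the new data say."""
    out = [grants[0]]
    for g in grants[1:]:
        out.append(max(g, floor * out[-1]))
    return out

def moving_average(grants, window=3):
    """Moving-average alternative: dampen year-to-year swings by
    averaging up to `window` years of data-driven grants."""
    out = []
    for i in range(len(grants)):
        start = max(0, i - window + 1)
        out.append(sum(grants[start:i + 1]) / (i + 1 - start))
    return out

# A jurisdiction measured just below the cutoff gets nothing under the
# hard threshold but close to half the phased-in share under the
# S-curve; a sharp one-year drop in measured need is absorbed gradually
# under either smoothing rule rather than applied at full force.
```

The point of the sketch is that both alternatives make the allocation a continuous function of the data, so small estimation errors produce small funding changes instead of discontinuous jumps.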
One attendee, referring back to the presentation by Linda Gage, reminded the group that current population estimates will continue to play an important role in providing updated estimates for use in allocation formulas. Finally, one person urged careful consideration by the panel of the intended audience for its findings. If the intention of the panel is to affect the process of writing formulas for the allocation of funds, its reports should be prepared with the idea that they are directed primarily at Congress and the agencies that have been assisting Congress in this process.

Henry Aaron of the Brookings Institution wrapped up the workshop. He started by noting that there has been considerable dedicated and talented work to produce improved measurements for use in formula allocation. However, in his view, relative to other kinds of needed work, such as exploring the effects of varying the program rules, the marginal value of additional efforts to improve the quality of input data is relatively small.

He then asked what role formula allocation plays in the political process, considering that it would be possible for Congress to vote direct dollar amounts and that it sometimes does so. He answered this question by noting that allocation formulas are a device to bring closure to what would otherwise be unendable debates. Members of Congress are elected by constituents to serve their interests. If they do not strive to get the largest possible appropriations for their states and districts, they will pay for it later. However, they can shield themselves from unpleasant consequences if they can point to a plausibly objective formula and say they did the best they could, but this is what the formula produced.

To support his subsequent remarks about formula allocation programs, Aaron defined the following concepts:

  • G = the goal(s) of a federal grant program, e.g., educational outcomes.
  • T = a government transfer of funds.
  • Ti = the transfer to the ith jurisdiction.
  • Ii = an indicator of true need for the ith jurisdiction.
  • ei = the difference between Ti and Ii.
  • P = a politically determined goal for the transfer.
  • R = program rules that determine resource allocations within districts.

The error term has two parts: any conceptual error separating the transfer amount from the (unknowable) indicator of true need, and errors in measurement or estimation that distinguish the numbers actually used from ones that are potentially more accurate.

Aaron emphasized that it is important to distinguish between T and R. The former is simply a transfer of money, but the program rules determine what goes on in each jurisdiction when the money is received. Transfers of funds can affect recipients in ways that are not necessarily intended or obvious.
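Aaron's error term can be written out explicitly. The following decomposition is an illustrative sketch, not part of Aaron's remarks; the intermediate quantity T̂i (the transfer that would result from the most accurate inputs potentially available) is introduced here only to separate the two components he described:

```latex
e_i = T_i - I_i
    = \underbrace{\bigl(T_i - \hat{T}_i\bigr)}_{\text{measurement/estimation error}}
    + \underbrace{\bigl(\hat{T}_i - I_i\bigr)}_{\text{conceptual error}}
```

The first term vanishes if the numbers actually used were already the most accurate available; the second remains even with perfect data, because the indicator of true need is unknowable.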
As an example of such unintended effects, in the legislation under which Medicare distributes extra funds to hospitals that care for disproportionate numbers of low-income persons, the amounts hospitals receive are determined by the numbers of Medicare and Medicaid patients they serve. This measure of need provides an incentive for states to have a broad but shallow Medicaid program that covers as many people as possible. Not surprisingly, the legislation that established this program was initiated and supported by the Senate Finance Committee at a time when Russell Long, whose state of Louisiana had such a program, was chair of the committee.

It is also important, Aaron said, to look at the effects of grant funding on overall government spending. By giving money, you may change incentives within a state. Under a matching grant program, you can bring in additional funds. If the expenditures are in a new area, political constituencies may arise on behalf of activities that never before existed. But state legislators might find funds for these new activities by reducing appropriations for other programs. There is a case for a quasi-anthropological approach that looks very carefully at the institutions and details associated with particular grant programs.

Aaron said he believed that, for two reasons, the impact of mistakes on the achievement of program goals is typically very small. First, suppose we start from an allocation where marginal dollars spent yield equal marginal benefits in all jurisdictions. If we make a mistake, the marginal benefit to the jurisdiction that receives more money will decline a bit, and for the jurisdiction that receives less money it will increase slightly, but the impact of these changes on welfare is distinctly second order. Second, evaluation studies suggest that some allocation programs are not very effective in achieving program goals. For example, evidence that the Title I education program has had significant effects on educational outcomes is very limited.

In Aaron's judgment, the process of collecting data for use in allocation formulas on topics such as poor children, nutritional intake, and various illnesses has an important educational function in shaping political views about what is considered decent and acceptable. Nevertheless, in the final analysis, to some degree the choice of formula inputs reflects a political compromise or consensus, and this is what determines the amounts of money transferred.
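The "distinctly second order" claim in Aaron's first reason can be sketched with a standard Taylor expansion. This derivation is illustrative and not from the workshop: let Bj(Tj) be a hypothetical smooth, concave benefit function for jurisdiction j, and suppose allocations start where marginal benefits are equal, B1'(T1) = B2'(T2). A mistake that shifts a small amount δ from jurisdiction 2 to jurisdiction 1 then changes total welfare by

```latex
\Delta W = B_1(T_1+\delta) + B_2(T_2-\delta) - B_1(T_1) - B_2(T_2)
         \approx \underbrace{\delta\bigl(B_1'(T_1) - B_2'(T_2)\bigr)}_{=\,0}
         + \frac{\delta^2}{2}\bigl(B_1''(T_1) + B_2''(T_2)\bigr),
```

so the first-order terms cancel and only a term proportional to δ² remains, which is negligible for small mistakes.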
The effects of such choices of formula inputs are difficult to predict over time.

Aaron stressed the importance of program rules. In discussing the Title I education program, the workshop participants reviewed the procedures for allocations to each state, county, and school district, but that is where the analysis ended. He suggested that what happens after the funds reach the school district is an order of magnitude more important than marginal adjustments in the distribution formulas. Should poor children receive special instruction outside the classroom? If a poor child transfers to a magnet school, should Title I funds also be transferred to that school? Answers to questions like these could have a major effect on the evolution of the educational system and its efforts to meet the special needs of poor children.

It is important to pay attention to formulas and their inputs to guard against the possibility that some groups may try to manipulate the process to get grossly more than their fair share. What is more important, however, is to understand what makes a program work. Aaron observed that statisticians, working to address this question in collaboration with economists, sociologists, and other social scientists, can make important contributions.