

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




4 Data Quality and Statistical Methods

Like most other government statistical agencies, both here and around the world, the U.S. Census Bureau defines quality as "fitness for use," a definition crafted with an eye toward the needs of data users (U.S. Census Bureau, 2006a). Following the lead of the U.S. Office of Management and Budget (2002), the Census Bureau defines fitness for use in terms of three attributes: utility (to the intended users); objectivity (whether the information is accurate, reliable, and unbiased and is presented as such); and integrity (the security or protection of the information from unauthorized access or revision). The Census Bureau further defines these attributes in terms of their dimensions or elements.

After a brief discussion of dimensions of quality, this chapter considers several issues of statistical methodology and reporting that affect the quality and usability of the data from the Census of Governments and the annual and quarterly surveys of government finances and employment: sample frame development and design, data collection, unit nonresponse, editing and imputation, estimation, data processing, revision policies, and cognitive testing of questionnaires. For each of these topics, the panel makes recommendations for the Governments Division's research and development program. The last two sections of the chapter discuss the planned redesign of the Quarterly Tax Survey and the division's infrastructure for improvements in statistical methodology.

STATE AND LOCAL GOVERNMENT STATISTICS AT A CROSSROADS

DIMENSIONS OF QUALITY

It is generally accepted that there are several dimensions to data quality. A quality schematic proposed by Brackstone (1999) was subsequently refined by the Census Bureau. The dimensions of data quality adapted in the Census Bureau publication Definition of Data Quality include relevance, accuracy, timeliness, accessibility, interpretability, and transparency.

•	Relevance: the degree to which data products provide information that meets user needs.
•	Accuracy: the closeness between an estimate and its true value, usually characterized in terms of systematic (bias) and random (variance) errors.
•	Timeliness: the length of time between the reference period of the information and when the information is delivered to users.
•	Accessibility: the ease with which users can identify, obtain, and use the information.
•	Interpretability: the availability of documentation to aid users in understanding the data.
•	Transparency: the existence of evidence that users can employ to assess the accuracy of the data, including information on assumptions, methods, and results presented in a manner that would allow a third party to reproduce the information, subject to constraints of confidentiality and privacy.

We address aspects of accuracy and transparency that apply to the Governments Division's data collections generally, including the Census of Governments and the annual and quarterly surveys. The special case of the Quarterly Tax Survey, which has been undergoing a major redesign while the panel examined the state and local government statistics programs, is discussed separately.

The accuracy of data from a census or a survey begins with the development of concepts, methods, and design; continues with the necessary steps of data collection; and includes processing and editing, the development of estimates, and data analysis.
The responsibility for several key aspects of accuracy for the Census Bureau's data on state and local governments is split between two divisions in the Census Bureau's Economic Directorate. Generally, the Governments Division is responsible for data collection and editing, and a branch of the Economic Statistical Methods and Programming Division selects samples of local governments for the annual and quarterly surveys and is responsible for imputation and estimation. The responsibility for development and testing of questionnaires is shared between the two divisions. Despite this division of labor, the Governments Division has overall responsibility for ensuring the relevance, accuracy, timeliness, accessibility, and other aspects of the quality of the data.

(Brackstone's sixth dimension was "coherence," which the Census Bureau replaced with "transparency.")

SAMPLE FRAME DEVELOPMENT AND DESIGN

Frame Development and Coverage

It is not easy to build and maintain a complete roster of governments, since there is considerable churning among governmental units. Governments may dissolve or be incorporated into larger units, and new governments may be formed. Still, this should be a fairly manageable activity for the Governments Division, since the universe of governments is much smaller and substantially more stable than, say, the universe of businesses or households.

The list of state and local governments is maintained in the Governments Integrated Directory. This list is updated periodically with information on newly established government units that meet Census Bureau definitions and on dissolved or inactive units. General-purpose governments are updated on the basis of the annual Boundary and Annexation Survey, conducted by the Geography Division of the Census Bureau, and school districts are updated with a list maintained by the U.S. Department of Education's National Center for Education Statistics. Updating the list of special districts is more complicated and involves several steps: a review of state legislation; a review of published bond and other financial transactions; a mail survey of city and county clerks; and analysis of information provided by respondents to annual surveys.
This updated directory is further screened by the Directory Survey of Local Governments, conducted as part of the quinquennial Census of Governments, which both updates the directory and collects basic characteristics of each governmental unit. The survey produces an up-to-date list of all local governments, which is used for the census and as a sample frame for the design of the annual and quarterly surveys.

Since it is used in developing a list for the census and a sample frame for the surveys, the completeness of the directory is a quality issue. Based on the collective judgment of the staff of the Governments Division and the users contacted by the panel, there is no significant problem with incomplete enumeration of governments in the directory. Two other issues were noted, however, both potentially affecting the quality of the Quarterly Tax Survey. First, there is concern about duplication and bad addresses in the universe listing of local tax collection agencies used to select the sample for the property tax component of the survey (Hogue, 2005a). Second, the adequacy of coverage of special districts in this same survey has been questioned as well. Special districts present a problem for sample selection because many of them have existed for relatively brief periods and have wide geographic boundaries, which can cross state lines. The best opportunity to address these concerns is during the redesign of the Quarterly Tax Survey, discussed later in this chapter, and the panel encourages the Census Bureau to take full advantage of this and similar opportunities to assess the quality of the sample frame for the survey.

Conclusion 4-1: Coverage of the universe of general governments in the Census of Governments and annual and quarterly surveys appears to be complete for virtually all analytical purposes.

Sample Design

About two years after each Census of Governments, the Census Bureau draws a new sample for the annual surveys. Table 4-1 lists the annual and quarterly surveys of state and local governments, the universes they cover, and their current or most recent sample sizes. The sample design methods employed for each of the surveys listed in Table 4-1 are discussed in Appendix A. For example, the selection of the 13,000 state and local governments for the Annual Survey of State and Local Government Finances is a "size-based sampling procedure . . . based on the size of its long-term debt, expenditure, population, or enrollment. All local governments above variable size cutoffs (such as a population of at least 50,000) or performing key functions (such as mass transit) are selected with certainty." Each sample consists of a fixed set of units that is surveyed for the next 5 years, supplemented with all identified births. This 5-year cycle of sample redesign has, on occasion, been relaxed for varying reasons, mostly to do with budget shortfalls. The undesirable result is a floating redesign program.
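The size-based design quoted above can be sketched in a few lines of Python. This is an illustrative reading of the quoted description only, not the Bureau's actual procedure: the population cutoff and key-function rule are taken from the text, while the sampling of the remaining units (simple random sampling at a fixed rate here) and all field names are assumptions for the example.

```python
import random

def select_sample(units, pop_cutoff=50_000,
                  key_functions=frozenset({"mass transit"}),
                  rate=0.1, seed=1):
    """Split a frame of governments into certainty and sampled units.

    Units above the population cutoff or performing a key function are
    selected with certainty; the rest are sampled at `rate` (simple
    random sampling here; the actual design may differ).
    """
    certainty = [u for u in units
                 if u["population"] >= pop_cutoff
                 or key_functions & set(u["functions"])]
    rest = [u for u in units if u not in certainty]
    rng = random.Random(seed)
    k = max(1, round(rate * len(rest)))
    sampled = rng.sample(rest, k)
    return certainty, sampled

# Hypothetical frame entries for illustration.
frame = [
    {"name": "Big City", "population": 120_000, "functions": ["police"]},
    {"name": "Transit District", "population": 4_000, "functions": ["mass transit"]},
    {"name": "Small Town A", "population": 2_500, "functions": ["roads"]},
    {"name": "Small Town B", "population": 1_800, "functions": ["roads"]},
]
certainty, sampled = select_sample(frame)
```

In a real redesign the noncertainty units would typically be stratified and selected with probabilities tied to the size measures named in the quote (long-term debt, expenditure, population, or enrollment).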
For example, the post-1992 census redesign of the Annual Finance Survey was conducted in 1993 using 1987 census information because the 1992 data had not yet been collected and edited. The next redesign was in 2000, based on 1997 census results. In 2001, a smaller sample was selected to allow more time for developing a new questionnaire and new editing processes. The post-2002 census redesign, conducted in 2004, returned to a larger sample.

These irregularly scheduled redesigns and fluctuating sample sizes can affect the consistency of time series and the level of detail provided and have been of concern to users (see Chapter 3). Whether they have affected the quality of the data in other ways, for example, by introducing systematic biases, is not clear. In addition, samples are about half the size of what they were 20 to 30 years ago due to reductions in sample allotments to smaller units over time, a possible source of error. These cutbacks have raised questions for users as to the relevance and accuracy of the survey estimates and whether the decrease in sample sizes has limited the analytical capabilities of the data.

TABLE 4-1 Sample Surveys of State and Local Governments

Annual Surveys

Annual Survey of State and Local Government Finances
  Universe covered: all state and local governments (a)
  Sample size: all states and 13,000 local governments

Local Government School System Finance Survey
  Universe covered: public school systems providing elementary or secondary education
  Sample size: all systems in census years; 15,000 in 2004

State Government Tax Collections Survey
  Universe covered: all states
  Sample size: all states

Annual Public Employment Survey
  Universe covered: all state and local governments (a)
  Sample size: all states and 11,000 local governments

State and Local Government Public Employee Retirement System Survey
  Universe covered: all state and local government public employee retirement systems (b)
  Sample size: all state systems and more than 1,000 local systems

Quarterly Surveys

Quarterly Tax Survey
  Universe covered: all state and local governments with tax collection authority; local governments with large non-property tax collections
  Sample size: 50 states; about 600 counties and all local governments within those counties with local property tax collection authority; over 100 local governments with significant non-property tax collections (c)

Quarterly Public Employee Retirement Systems Survey
  Universe covered: the 100 largest public employee retirement systems (d)
  Sample size: 100

(a) Includes counties, municipalities, townships, special districts, and school districts, which collectively currently number about 87,000.
(b) The universe of public employee retirement systems is about 2,600; the list is updated regularly.
(c) These local governments account for about 65 percent of all local non-property tax collections.
(d) The 100 largest systems account for about 85 percent of all national activity.

The Governments Division can provide users with better information for estimating the statistical significance of changes over time in the data given changing designs and sample sizes. If the division were to provide standard errors and confidence intervals for estimates of change (for example, estimates of year-to-year increases or decreases in revenues and expenditures by type of government), this information would give data users guidance on interpreting the significance of the changes.

Recommendation 4-1: With respect to future modifications of its methodologies, the Governments Division should conduct research to determine the effects of any redesigns of its surveys or changes in sample sizes on the accuracy of the data, especially the accuracy of measures of change. The division should provide information to users, including standard errors and confidence intervals, to help them assess the effects of redesigns and changes in sample sizes on the accuracy and usefulness of time series.

Most of the samples for the Governments Division surveys are drawn using standard probability methods in which each governmental unit's probability of inclusion can be calculated, estimates can be produced along with estimates of the sampling error, and inferences can be made about the population. The Annual State and Local Government Public Employee Retirement System Survey was converted to a probability basis in 2004.
However, nonprobability methods continue to be used to select the sample for the Quarterly Public Employee Retirement System Survey and a portion of the Quarterly Tax Survey. Converting these remaining two nonprobability samples to a probability basis, a priority goal for the Governments Division, should lead to improved estimates of national aggregates and would allow the estimation of variances, enabling the Governments Division to conform to Census Bureau statistical standards in this area.

DATA COLLECTION METHODS

States are sovereign governments, and local governments are their subdivisions (Keffer, 2006, p. 8). As a result, in contrast to other Census Bureau economic statistics data collections, there is no provision for mandatory reporting by state and local governments. In dealing with state and local governments, the Census Bureau does not have an opportunity to impose accounting systems and standards across the board that would work to ensure that all data at all levels are defined, collected, and aggregated in the same way, although voluntary mechanisms, such as the Government Accounting Standards Board, are playing an important and growing role in standardization (discussed in Chapter 6).

Due to the sovereignty of the states and the traditions of data collection devised by the Governments Division and the states over the years, reporting arrangements are unusually complex. As an example, the division now operates three separate collection systems for gathering the periodic Census of Governments and ongoing survey data: a division-managed mail-out/mail-back questionnaire for state and local governmental units in 22 states and the District of Columbia; direct collection from 48 large units, which is also managed by the division; and a central collection program, managed by the states, under which data on the state's localities are collected and transmitted to the division by 28 states. To complicate matters further, in some cases the data are provided on paper; in other cases, the data are provided in standardized electronic formats; and in still other cases, the data are provided in nonstandardized electronic formats that are negotiated between the reporters and the Governments Division. It is clear that the division has leaned over backward to work with the differing data processing systems maintained by the states and to accommodate the desires of many state governments to serve as intermediaries for their subordinate units of government in dealings with Washington. In so doing, the data collection system has built in layers of complexity that can affect the accuracy, timeliness, and relevance of the data. Central collection states constitute both a source of strength and a potential weakness in the system.
The Governments Division cultivates long-standing arrangements whereby local governments report to state governments, which in turn report to the division. These arrangements include a rich variety of procedures. Data collection can be by electronic filing, Internet response, or mail response. These pass-through arrangements ascribe an appropriate role to intervening sovereign units of government; they can reduce the burden on the Census Bureau, which would otherwise have to contact and follow up on many more governmental units; and they ensure that another level of quality control can be brought to bear. Central collection states have far and away the best record of obtaining and forwarding complete responses for all units in their areas. Nonresponse for central collection states is virtually nonexistent, in contrast to states in which the Governments Division collects the data by mail-out/mail-back means. Table 4-2 shows the response rates for noncentral collection states for the 2004 Annual Finance Survey: the median rate for general governments (counties, cities, etc.) is 79 percent, while the median rate for special districts is only 62 percent.

(There may be additional opportunities for consolidating reporting along the lines of the central collection procedures. For example, elementary and secondary education finance data are collected centrally in most states, but in several, collection is by the state department of education. This suggests that state departments of education could serve as collection intermediaries and add value to the data.)

TABLE 4-2 Response Rates for Noncentral Collection States by General-Purpose Governments and Special Districts, 2004 Annual Finance Survey

State                  General Purpose (%)   Special Districts (%)
Alabama                68                    62
Arkansas               93                    55
Colorado               89                    64
Connecticut            73                    52
Delaware               76                    57
District of Columbia   n/a                   100
Florida                93                    60
Hawaii                 100                   33
Idaho                  76                    54
Louisiana              100                   90
Maine                  79                    66
Mississippi            61                    53
New Jersey             90                    71
New Mexico             61                    42
Ohio                   89                    91
Oregon                 86                    79
Rhode Island           79                    51
South Dakota           67                    61
Tennessee              83                    72
Texas                  85                    67
Vermont                54                    45
Virginia               79                    76
West Virginia          69                    62
Median                 79                    62

NOTE: The response rates are unweighted usable responses. In 2004, the national response rate was 88 percent. This list includes only states that do not collect data centrally and transmit a consolidated report to the Census Bureau. Omits elementary and secondary education finance data.
SOURCE: Governments Division, U.S. Census Bureau.

Nonetheless, the arrangements layer the reporting in ways that complicate the ability of the Governments Division to change content, add new content, and edit the data. In central collection states, the Governments Division has limited ability to provide some of the kinds of quality control at the data source that, for other surveys, would be obtained in random

reinterviews and other methods for understanding and controlling data quality. Finally, the Governments Division is at the mercy of the timing of the collection, summation, and verification activities of the state governments. This is one reason for the delay in the publication of estimates from the surveys.

The Governments Division has not conducted a rigorous study of the impact on data quality of state central collection for the Annual Finance Survey. Division staff offered several observations to the panel on the basis of anecdotal and historical information:

•	Local knowledge. State central collection provides a source of local knowledge and expertise to the Governments Division staff as they discuss data issues with the state agency contacts.
•	State data review. State agencies often review the data for completeness and accuracy before providing the information to the Governments Division.
•	Greater detail. The detail requested by the states from their local governments normally far surpasses the detail needed for the Governments Division surveys. When states edit the data from local jurisdictions, they have access to more detailed categories than those established by the division. For example, if a state maintains 10 categories for reporting property taxes, editing is likely to take place among the 10 categories, whereas the Governments Division collects and edits only a single property tax total.
•	Translation of categories. States provide a service by translating Governments Division categories into the local accounting conventions called for by state and local laws. As more and more states and localities move to standardized accounting conventions, such as those in the Government Accounting Standards Board's Statement 34 (see Chapter 6), the need for this service may diminish.
•	Completeness. The unit and item response rates for central collection states are exceptionally high, often in excess of 95 percent, reducing or eliminating the need for nonresponse imputation.

Although probably indicative of a high level of accuracy in the data from central collection states, the above observations are not a substitute for a rigorous evaluation that could help understand the implications of central collection and help the Census Bureau determine the efficacy of this collection methodology. Such an evaluation would need to take into account other dimensions of quality, such as the effects on timeliness of central collection compared with other collection modes and the ability to release preliminary estimates.

Recommendation 4-2: The Governments Division should evaluate the data received from states that have central collection to ensure that high response rates are associated with high quality of the data. The division should rigorously assess the costs and benefits of central collection compared with other collection modes.

NONRESPONSE

In seeking to achieve high response rates, the Governments Division must confront two challenges. First, as discussed above, participation in the Census of Governments and the division's annual and quarterly surveys is voluntary rather than mandatory. The voluntary aspect of the government surveys almost certainly contributes to unit nonresponse and may increase item nonresponse. Second, the data collected in the government surveys are not confidential, and, depending on the survey, many of the responding governments are identified individually in publications and data files. The awareness that an entity's responses will be subject to public scrutiny could dampen enthusiasm for responding and lower response rates. The public and the media have a consummate interest in tax loads, payrolls, and other measures of the effectiveness of state and local governments, but those governments may not welcome such scrutiny.

Response Rates

Unit response rates vary across the government surveys and between census and noncensus years. While a few surveys achieve high response rates by any standard, nonresponse is generally higher among the Governments Division surveys, and especially the Census of Governments, than among the other major economic surveys. For the Annual Finance Survey, the response rates in recent years have ranged between 87 and 90 percent in noncensus years, but the rate fell to 77 percent during the last census year, 2002.
Because governments vary in population size across orders of magnitude, from a few hundred to millions of people, and in annual revenues and expenditures from thousands to billions of dollars, the implications of a given response rate depend on how the likelihood of response varies with the importance of the unit in the final estimate of the variable of interest. While the Governments Division regularly publishes unit response rates for its surveys, these typically include only unweighted rates. An exception is the 2005 Annual Survey of Public Employee Retirement Systems, for which the unweighted response rate among eligible governments was 92.5 percent, but the weighted response rate, based on the value of holdings and investments, was 99.5 percent. The difference in rates indicates that governments with larger retirement systems were more likely to respond. If this is true generally, then the potential impact of unit nonresponse on estimated aggregates is not as great as the observed nonresponse rates might suggest. Clearly, it is important to see both types of rates.

Understanding of response is also affected by the use of third-party data to substitute for nonresponse. Sometimes, data that state and local governments have submitted in response to other surveys are used by the Governments Division as substitutes when units of government fail to respond to the division's surveys. This situation raises an issue about the calculation of unit response rates. If a government fails to respond to one of the division's surveys, but the same data are available from another source, should the government be counted as not responding or responding? The Governments Division will have to make a decision about the handling of such cases as it works to improve its documentation of unit response rates. In the panel's view, the government in question is in fact a nonrespondent to the division's survey, regardless of the quality of the substituted data. A response rate that treats this situation as nonresponse is useful for tracking response trends and assessing the cooperation of respondents. At the same time, there is value in calculating a second statistic that measures the end result: a completed survey. The panel contends that both types of response rates should be developed and made available.

While the Governments Division routinely publishes unit response rates, it has not historically calculated item nonresponse rates, except for the portion that is attributable to unit nonresponse, which is not ordinarily counted as part of item nonresponse. In part, this practice reflects the ambiguous nature of nonresponse in some cases.
For example, some respondents refer the Governments Division to their websites rather than reporting certain subsets of items. Is this nonresponse, or is the respondent providing information from which the division can extract a response that better fits what it is requesting? The way in which the Census Bureau approaches the issue of item nonresponse plays a role here. Item nonresponse appears to be addressed only by analysts in the Governments Division as part of their editing function. It has not been taken on as a quality issue by the statistical staff in another division, the Economic Statistical Methods and Programming Division (ESMPD), who have been assigned responsibilities with regard to understanding the nature of and adjusting for unit nonresponse. One consequence of this approach is that item nonresponse rates have not been calculated, and little is known about the character and effect of item nonresponse.

(Available: http://www.census.gov/govs/retire05quality.html.)
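The response-rate distinctions discussed above (unweighted versus weighted rates, and a cooperation rate versus a completed-survey rate that counts third-party substitutes) can be made concrete in a short sketch. This is illustrative Python; the unit list, size measure, and status labels are invented for the example, not the division's actual data or definitions.

```python
def response_rates(units):
    """Compute unweighted, weighted, and completed-survey response rates.

    Each unit carries a size measure (e.g., value of holdings) and a
    status: 'responded', 'third_party' (data substituted from another
    source), or 'nonresponse'.
    """
    n = len(units)
    total_size = sum(u["size"] for u in units)
    responded = [u for u in units if u["status"] == "responded"]
    completed = [u for u in units if u["status"] in ("responded", "third_party")]
    return {
        # cooperation rate: third-party substitutes count as nonresponse
        "unweighted": len(responded) / n,
        # size-weighted rate, showing the influence of large units
        "weighted": sum(u["size"] for u in responded) / total_size,
        # end-result rate: a completed record exists, whatever its source
        "completed": len(completed) / n,
    }

units = [
    {"size": 900, "status": "responded"},   # large system, responded
    {"size": 60, "status": "third_party"},  # filled from another survey
    {"size": 40, "status": "nonresponse"},
]
rates = response_rates(units)
```

With these invented numbers the unweighted rate is one-third while the weighted rate is 0.9, mirroring the pattern in the retirement survey example: when large units are more likely to respond, the weighted rate far exceeds the unweighted rate.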

for each item is taken from the donor and applied to the missing unit's population size. The unit nonresponse imputation methods are continuously evolving at the Census Bureau, and further refinements of the imputation procedures are expected. In researching possible improvements in methodology, ESMPD could benefit from considering methods used by other agencies to impute missing units. For example, facing a similar problem, the National Center for Education Statistics imputes enrollment, finance, graduation rate, and student financial aid data for nonresponding institutions in the Integrated Postsecondary Education Data System (IPEDS).

Editing for Item Nonresponse

Item nonresponse is handled by analysts in the Governments Division. Missing items are filled in by call-backs to respondents and searches through other data sources. For example, debt is a commonly missing item in the Annual Finance Survey, but there are external sources of data on debt at all levels of government. Imputation generally is not used to compensate for item nonresponse, the Quarterly Tax Survey being a notable exception: there, missing items often are filled in with reported data for earlier quarters.

An intensive editing process is used to correct items for apparent misreporting. After the Governments Division has completed editing, the edited file is transmitted to the ESMPD for imputation of missing units. The file contains no units with partial data, as all missing items are filled in before the file is sent. It would be very useful if both the Governments Division and ESMPD would report the amount of data that are edited and imputed, both unweighted and weighted by dollars.
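The donor approach described above, in which a value for each item is taken from a donor and applied to the missing unit's population size, can be sketched as follows. This is an illustrative reading of that one sentence; the field names and per-capita scaling details are assumptions for the example.

```python
def impute_from_donor(missing_unit, donor, items):
    """Impute items for a nonresponding unit from a similar donor unit.

    For each item, the donor's per-capita value is applied to the
    missing unit's population size.
    """
    imputed = dict(missing_unit)
    for item in items:
        per_capita = donor[item] / donor["population"]
        imputed[item] = round(per_capita * missing_unit["population"])
    return imputed

# Hypothetical units: a responding donor and a nonrespondent of the same type.
donor = {"population": 10_000, "property_tax": 2_000_000, "debt": 5_000_000}
missing = {"population": 4_000}
result = impute_from_donor(missing, donor, ["property_tax", "debt"])
```

Here the donor reports $200 of property tax per capita, so the 4,000-person nonrespondent is imputed $800,000. The quality of such an imputation rests entirely on how similar the donor is to the missing unit, which is one reason the panel asks for evaluation of these procedures.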
Research on Nonresponse Adjustment Procedures

The currently used imputation methods for unit nonresponse may be the most accurate way to deal with unit nonresponse in the government surveys, but they tend to be very labor intensive, and staff resources in the ESMPD are scarce. If statistically based imputation were found to be as effective in correcting for unit nonresponse among nonself-representing units, this could free up resources to pursue quality improvements in other areas.

Likewise, by all accounts, the currently used editing procedures for item nonresponse produce substantial added value, but too little is known about how these edits affect overall data quality. This has not been systematically studied, even though internal memoranda from ESMPD have repeatedly recommended that the effects of editing be evaluated. Moreover, the editing procedures are very resource intensive. Conducting research to identify ways to reduce the time that Governments Division analysts spend on editing without adversely affecting data quality could give them more time for analysis of the data, which could lead them to identify potential quality improvements elsewhere in the survey process.

Research in this area should consider the effects of different procedures for nonresponse adjustment on microanalytical uses of the data in addition to the effects on uses of aggregate estimates. For example, the use of weighting to compensate for unit nonresponse could produce reasonably accurate estimates of aggregates at reduced cost, but it would result in less useful estimates for analysis of individual governments than imputation using third-party data or growth rates estimated from prior responses. The Census Bureau publishes estimates for individual governments, so, regardless of the method used, published estimates should indicate the source of the data as survey response, third-party response, imputation, or editing by the use of growth rates. The information on the source of the data should be included in the metadata that should accompany the electronically published data (as discussed in Chapter 5).

Recommendation 4-6: The Governments and Economic Statistical Methods and Programming Divisions (ESMPD) should review their programs for editing and imputation of data to evaluate the costs and benefits compared with other methods. The review should investigate:

•	the potential limitations of using prior-year data with an assumed growth rate for editing and imputing values for nonresponding governments and
•	the merits of weighting as an alternative to imputation to compensate for unit nonresponse among nonself-representing localities.
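The two alternatives in Recommendation 4-6 can be contrasted in a small sketch. Weighting inflates respondent weights within an adjustment cell so that respondents stand in for nonrespondents; growth-rate imputation instead fills each nonrespondent with its own prior-year value times an assumed growth rate. This is illustrative Python with invented values, growth rate, and a single adjustment cell; it is not either division's actual procedure.

```python
def weighted_total(respondents, cell_units):
    """Nonresponse weighting: inflate respondent weights so respondents
    represent all units in the adjustment cell, then sum."""
    adjustment = len(cell_units) / len(respondents)
    return sum(u["value"] * adjustment for u in respondents)

def imputed_total(cell_units, growth_rate):
    """Growth-rate imputation: use reported values where available and
    prior-year value times an assumed growth rate otherwise."""
    total = 0.0
    for u in cell_units:
        if u.get("value") is not None:
            total += u["value"]
        else:
            total += u["prior_value"] * growth_rate
    return total

cell = [
    {"value": 100.0, "prior_value": 95.0},
    {"value": 110.0, "prior_value": 100.0},
    {"value": None, "prior_value": 90.0},   # nonrespondent
    {"value": None, "prior_value": 105.0},  # nonrespondent
]
respondents = [u for u in cell if u["value"] is not None]
w = weighted_total(respondents, cell)      # 210 * (4/2) = 420.0
g = imputed_total(cell, growth_rate=1.05)  # 210 + 94.5 + 110.25 = 414.75
```

Both methods target the same cell total, but only imputation yields a value for each individual government, which is why the text notes that weighting would be less useful for microanalytical uses even if it were cheaper for aggregates.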
ESTIMATION

Estimates for the annual and quarterly surveys are produced for a variety of domains—state totals and subtotals of local governments as well as national totals, depending on the survey. (A domain may also refer to a particular type of government function rather than all functions combined.) Two issues for estimation are the effectiveness of regression adjustments of direct sample estimates and providing information to calculate the precision of estimates of change within and across sample design periods.
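On the second of these issues, the precision of a change estimate depends on how correlated the two years' estimates are: the correlation is high when the years share a sample and near zero across redesigns. A minimal sketch of the standard variance formula, with hypothetical variances and correlations:

```python
import math

def var_of_change(v1, v2, rho):
    """Variance of the difference between two estimates with
    variances v1 and v2 and correlation rho. Overlapping samples
    (same design period) give rho well above zero; independent
    samples across designs give rho near zero."""
    return v1 + v2 - 2.0 * rho * math.sqrt(v1 * v2)

# hypothetical: same variance in both years
print(var_of_change(25.0, 25.0, 0.8))  # small: years share a sample
print(var_of_change(25.0, 25.0, 0.0))  # large: years span a redesign
```

With these illustrative numbers, the change estimate within a design period is five times as precise as one that crosses designs, which is the point developed later in this chapter.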

Regression Adjustments

To improve the precision of domain estimates for a number of the surveys, the ESMPD applies a regression adjustment to the direct sample estimate, which is called the "simple unbiased estimate." The regression adjustment is based on the relationship between the last Census of Governments estimate, using all of the units in a domain, and an alternative census estimate using census data for just those units that are included in the current sample. When the number of sample units in a domain is small (generally fewer than 20), the regression estimate is not used because the regression coefficients are too unstable. The estimates for these domains are the direct sample estimates, which are unbiased but less precise than they would be if a suitable regression adjustment could be developed. It is ironic that the domains for which the regression adjustment is not applied are those in which the direct sample estimates are the least precise.

A research question that the Governments Division should explore is whether suitable regression coefficients could be developed for combinations of similar domains that would circumvent the sample size limitation. While the domains that are combined may not have identical "true" regression coefficients, the bias that is introduced by combining them may be outweighed by the reduction in variance achieved with an increased sample size, yielding an overall reduction in mean squared error. This is the principle behind methods of "borrowing strength" that have gained popularity among statisticians in recent years (National Research Council, 2000). An alternative and perhaps more effective approach to borrowing strength in this situation would involve the application of model-based methods, which would allow the regression coefficients for small areas to vary but would derive them from a model estimated over the full sample.
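The mechanics can be sketched with a generic survey regression estimator. This is a simplified stand-in for the ESMPD procedure, not a reproduction of it; the fallback mirrors the small-domain rule described above, and all numbers are hypothetical.

```python
def regression_estimate(y, x, w, X_total, min_units=20):
    """Regression-adjusted estimate of a domain total.
    y: current survey values for the sampled units
    x: last Census of Governments values for the same units
    w: sampling weights
    X_total: known census total over every unit in the domain
    Falls back to the direct ("simple unbiased") estimate when the
    domain has too few sample units for a stable slope."""
    Y_hat = sum(wi * yi for wi, yi in zip(w, y))  # direct estimate
    if len(y) < min_units:
        return Y_hat
    X_hat = sum(wi * xi for wi, xi in zip(w, x))  # census total estimated from sample
    sw = sum(w)
    ybar, xbar = Y_hat / sw, X_hat / sw
    num = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    b = num / den  # weighted least-squares slope of y on x
    return Y_hat + b * (X_total - X_hat)

# hypothetical domain in which current values are exactly twice the
# census values, so the adjusted estimate is twice the census total
print(regression_estimate([2.0, 4.0, 6.0], [1.0, 2.0, 3.0],
                          [1.0, 1.0, 1.0], 10.0, min_units=2))  # 20.0
```

The adjustment term `b * (X_total - X_hat)` corrects the direct estimate by how far the sampled units' census values fall from the known census total, which is where the gain in precision comes from.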
The regression estimator encounters difficulties when there has been a break in the data series, such as recently happened when the Annual Finance Survey was redesigned in 2005. The regression methodology links to prior-year or prior census data in computing the regressions. When there is a break in series, the ESMPD has selected a crosswalk method to link the prior results with the new data to determine whether the regression estimator could be used for any of the new variables (Hogue, 2005b, p. 1).

Recommendation 4-7: The Governments Division and the Economic Statistical Methods and Programming Division (ESMPD) should evaluate the effectiveness of a model-based approach or other method of borrowing strength in yielding improved estimates for small domains from state and local government surveys. Overall, the application of regression-based adjustments to direct sample estimates should be reviewed to determine which adjustments produce the most improvement.
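One common way to borrow strength, offered here only as an illustration of the idea rather than a specific proposal, is a composite ("shrinkage") estimator in the Fay-Herriot spirit: each small domain's direct estimate is pulled toward a synthetic model estimate, with more shrinkage where the direct estimate is noisier. All values below are hypothetical.

```python
def composite(direct, var_direct, synthetic, model_var):
    """Shrink a noisy direct estimate toward a synthetic estimate.
    gamma approaches 1 when the direct estimate is precise and
    0 when it is noisy, minimizing mean squared error under the
    usual area-level model assumptions."""
    gamma = model_var / (model_var + var_direct)
    return gamma * direct + (1.0 - gamma) * synthetic

# hypothetical small domain: direct estimate 120 with variance 300,
# synthetic estimate 100, assumed model variance 100
print(composite(120.0, 300.0, 100.0, 100.0))  # 105.0
```

A very precise direct estimate is left essentially untouched, while a tiny domain's estimate is carried mostly by the model, which is the trade-off Recommendation 4-7 asks the division to evaluate.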

Computing Precision of Estimates of Change

After a redesign, each annual and quarterly survey sample includes the same sample units—adjusted by births, deaths, and mergers—for a period of 5 years. This design feature greatly increases the precision of estimates of change calculated for any combination of years within the 5-year period compared with a design that drew a completely new sample every year, but users may not recognize or be able to take advantage of this feature. By the same token, estimates of change that cross different sample designs will be less precise than estimates of change covering years in the same design.

Recommendation 4-8: The Governments Division should provide its users with the information needed to correctly calculate the precision of estimates of change between specific pairs of years from its surveys, including years that fall within a 5-year design period and years that cross periods.

DATA PROCESSING

The errors associated with data processing can be generated in any of a number of processing steps, and they range in type from simple recording errors during data entry to complex errors arising from misspecification of an edit or imputation. The solution to these errors is influenced by survey planning, resources, constraints, and technology (Federal Committee on Statistical Methodology, 2001, p. 7-1). Such errors are linked to administrative and management processes, and they can be mitigated by the use of process control techniques and continuous quality improvement (Morganstein and Marker, 1997, pp. 475–500).

Possible improvements may occur as the Governments Division converts its surveys to the Standard Economic Processing System (StEPS), as part of an overall standardization of processing systems in the Economics Directorate. The conversion is scheduled to be completed in 2011.
StEPS is a generalized processing system that is now used by other economic surveys at the Census Bureau. It has standard data set structures and modules that perform administrative functions and post-collection processes and that support data collection technologies (Ahmed and Tasky, 1999). StEPS has the advantage of standardizing processes and enforcing a discipline on the various survey operations to develop common approaches to instituting definitional and conceptual rigor into their operations.

The conversion to StEPS will require changes in the way data are collected, received, and processed, and it remains to be seen how successfully StEPS can facilitate electronic data collection from state and local governments and how well it can handle the imputation and regression estimation

procedures currently employed by the ESMPD. Experience in previous installations suggests that StEPS is capable of handling complex estimation procedures, but these procedures must be programmed and require special output for analysts to confirm that they have worked correctly.

REVISION POLICIES

Because estimation is improved with the benefit of hindsight through the collection of additional data, the Governments Division revises its data series when new data are collected. For all annual and quarterly series (except the Quarterly Tax Survey), the revision policy is to adjust the two survey cycles prior to the survey that is currently being completed. This policy means, for example, that when the division produces the fiscal year (FY) 2006 Annual Finance Survey data, it would revise the comparable FY 2005 and FY 2004 survey data. Similarly, for an employment survey, the division would revise the March 2005 and March 2004 data when the March 2006 series is released.

This two-cycle revision policy drives a fairly complex revision process. For example, the finance survey is a series of surveys, each building on another, so when the Governments Division changes one or more elements of a component survey, the change could affect a large number of statistical calculations and measures from that survey and related surveys, as well as the ratio estimators and the coefficients of variation. This possibility necessitates a staff-intensive process of review and a redissemination of survey results. The revision process is even more complex when the Governments Division obtains improved data for, say, a special district in the form of an audited financial statement to replace data that were originally imputed.
Depending on the extent of the difference between the imputed and actual data, the change might require recalculating all the imputations for that time period, because the scarcity of special district data in specific imputation cells means that the original imputations were developed using regional or national groupings.

Revisions provide an opportunity for the Governments Division to estimate the levels of such key sources of nonsampling error as nonresponse and misreporting. This information can be extremely important for helping users understand new releases in historical context. As an example, Figure 4-1 shows revisions to 12 quarters of the Quarterly Tax Survey. Revisions can typically be expected to range from a downward 1.2 percent to an upward 5.8 percent, with an average upward change of 2.3 percent (using control limits of 95 percent). An average revision of this magnitude suggests that users should be cautious in interpreting quarter-to-quarter changes on a current basis.
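A revision summary of the kind quoted above can be computed directly from a series of past revisions. The figures below are hypothetical, and the control limits are taken here as the mean plus or minus 1.96 standard deviations:

```python
import statistics

def revision_summary(revisions):
    """Mean revision and approximate 95 percent control limits
    for a series of percent revisions."""
    mean = statistics.mean(revisions)
    sd = statistics.stdev(revisions)
    return mean - 1.96 * sd, mean, mean + 1.96 * sd

# hypothetical percent revisions for eight quarters
lower, mean, upper = revision_summary([1.5, 2.0, 2.5, 3.0, 2.5, 2.0, 1.0, 3.5])
print(round(mean, 2))  # 2.25
```

Published alongside each initial release, a summary like this would give users a concrete sense of how much the first estimate is likely to move.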

FIGURE 4-1 Quarterly Tax Survey revisions, 2003–2005 (percent change by quarter, Q1 2003 through Q4 2005, with the average and upper and lower 95 percent limits). SOURCE: Data furnished to the panel by the Census Bureau.

The fact that all 12 consecutive revisions were upward revisions also suggests that there may be a downward bias in the original estimates. Such a bias could result from the fact that the Governments Division fills in missing items by carrying over reported items for the same government from previous quarters. Given that taxes on average are likely to rise over time due to growth in the taxpaying population, growth in the real economy, and inflation, this "no-growth" method of imputation probably understates the initial estimates.

Recommendation 4-9: The Governments Division should review its revision policies. The division should regularly report typical revision levels when initial data are released from its surveys. In addition, if intermediate data are released, such as 1-year revisions when 2-year revisions will be released later, estimates should be provided of the likely final revisions based on past experience.
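The contrast between the current carry-forward ("no-growth") imputation and a growth-adjusted alternative can be sketched as follows; the tax amount and growth rate are hypothetical:

```python
def impute_carry_forward(prior):
    """Current approach: repeat the prior period's reported value."""
    return prior

def impute_with_growth(prior, growth_rate):
    """Alternative: scale the prior value by a growth rate estimated,
    for example, from governments that did report in the current period."""
    return prior * (1.0 + growth_rate)

prior_tax = 1_000_000.0
print(impute_carry_forward(prior_tax))              # unchanged
print(round(impute_with_growth(prior_tax, 0.023)))  # grown by 2.3 percent
```

If taxes typically grow by a few percent per year, the carry-forward value systematically understates the missing amount, which is consistent with the string of upward revisions shown in Figure 4-1.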

COGNITIVE TESTING OF QUESTIONNAIRES

Good questions can be understood and answered by respondents, and they do not adversely affect cooperation (U.S. Census Bureau, 2006c, p. 2). To determine whether questions are good, a number of prefield and field techniques can be employed to identify whether respondents have difficulty with question content, order and context effects, skip instructions, and formatting. These techniques generally include respondent focus groups, exploratory and feasibility site visits, interviews that focus on the cognitive processes that respondents use to answer surveys (cognitive interviews), techniques for evaluating the usability of the questionnaires (usability techniques), expert reviews, respondent debriefings, and split-sample tests. Postfield evaluation methods include analysis of nonresponse rates, imputation rates, edit failures, and response distributions (U.S. Census Bureau, 2006d, p. 2).

The Governments Division has not often employed techniques for determining whether its questions are good. The redesign of the 2005 version of the Annual Finance Survey is the only example in the Governments Division domain of the use of modern prefield cognitive design techniques. The cognitive testing was a cooperative effort between the Governments Division experts and the survey design staff of the ESMPD. The analysts examined the content and determined detail that could be eliminated, and the questionnaire design team conducted cognitive interviews.

The ESMPD hailed this effort as a great success, and the Governments Division has plans to use cognitive testing for other surveys. However, in implementing the cognitive redesign of the 2005 survey, the Governments Division did not conduct a bridge sample to introduce the new questions.
Consequently, it is difficult to differentiate actual change between 2004 and 2005 in the phenomena being measured (revenues, expenditures) from change due to the revision of the survey instrument.

In addition to assisting analysts in understanding the meaning of the information that is being provided by respondents, cognitive testing can help identify areas that may lead to errors in reporting. Information about the knowledge, experience, and record-keeping practices of the respondents would also be helpful in providing training and the kind of intensive interaction that now takes place between many of the respondents and Governments Division staff experts.

Although the success of the initial cognitive testing project was not measured (presentation of Carma Hogue to the panel's data-gathering workshop, June 2006), this use of sophisticated design and testing techniques is commended. However, the real impact of the pioneering cognitive testing of the

Annual Finance Survey can be ascertained only through a careful analysis using some of the methods described above.

Recommendation 4-10: The Governments Division should carefully document and assess the results of the cognitive redesign of the 2005 Annual Finance Survey to determine the cost-benefit trade-off of implementing a policy calling for a similar pretesting process for other questionnaires. In future redesigns, major revisions in survey instruments should be implemented using a bridge sample or other technique to isolate changes in the survey instrument from changes in the economic phenomena that are being measured.

REDESIGN OF THE QUARTERLY TAX SURVEY

The upcoming redesign of the Quarterly Tax Survey, officially known as the Quarterly Summary of State and Local Government Tax Revenue, offers a rare opportunity to make quality improvements in this survey and, at the same time, to test and develop improvements in other aspects of the survey that could contribute beneficially to other state and local government surveys (Hogue, 2005a). Accordingly, the upcoming redesign received some attention in the panel's workshop.

The Quarterly Tax Survey now consists of three pieces: universal coverage of state government taxes, a probability sample of county areas to obtain county property tax collections, and a nonprobability sample of local government nonproperty taxes. While the property tax sample is drawn at the county level, data must be collected in each county for all local areas with independent taxing authority. In recent years, the total number of respondents has been about nine times the number of counties sampled. The third component is based on the local governments that have the largest amounts of nonproperty tax collections; the sampled areas account for 65 percent of the nonproperty tax revenue collected by local areas.
This nonprobability sample is updated on the basis of information from the last Census of Governments.

The Governments Division was twice forced to forgo the normal sample selection updates for property tax collection under the Quarterly Tax Survey program because of limited resources. This means that in 2006, the division was still using a sample based on the 1992 Census of Governments. There is an opportunity to introduce a new sample design to coincide with the selection of a new sample in the near future.

An even more important change that the Census Bureau is considering is to convert the current nonprobability method used to select the sample for estimation of local nonproperty tax revenue to a probability methodology. One proposal would use the same sample of county areas to collect

both the property tax and nonproperty tax. There are both sample design and questionnaire design issues to be addressed in combining the two samples. Even if separate samples are maintained, substituting a probability sample for the current nonprobability sample would enable the Governments Division to estimate total local tax revenue directly and more reliably than at present. Currently, the estimate of local nonproperty tax collections obtained from the nonprobability sample must be inflated to derive an estimate of the total local revenue from this source, as no data are collected from areas accounting for about 35 percent of total local nonproperty tax revenue. Converting the nonprobability sample to a probability basis would also allow the estimation of variances for local nonproperty tax and total tax revenue, which has not been possible to date.

The Governments Division faces five other challenges regarding the new sample design:

1. Questionnaire redesign. As noted, the new sample could be designed to collect income and sales taxes from the same sample of county areas as the property tax sample. The forms would have to be redesigned to include sales, income, and property tax items and tailored to the respondent, since not all jurisdictions have all three types of taxes. This tailoring of the questionnaire to reflect local practices will present a challenge to survey designers, but it also constitutes a unique opportunity to consider further fine-tuning of data collection in this and other surveys so as to improve response rates and timeliness.

2. Variance computation. The Governments Division has never calculated coefficients of variation for its estimates of local property taxes, nonproperty taxes, or total tax revenue. This is contrary to good statistical practice and violates Census Bureau standards, so a system must be designed to compute these coefficients in conjunction with the sample redesign.

3.
Adjustment for unit nonresponse. The current method of adjusting for unit nonresponse for state governments generally amounts, in effect, to applying the national growth rate for each item to the previously reported data for the state—assuming, in essence, that tax revenues and other amounts for the missing state would grow at the national rate. This approach may result in acceptable estimates of national totals, serving users primarily interested in national data, but it clearly does not use all of the available information and will not result in state-level estimates that are useful to those who care about state differences. For example, Western states tend to have faster growing populations and economies than Northeastern states, so that, all else being equal, one

would expect tax revenue in Western states to grow more quickly than in the Northeast, but the Governments Division adjustment method does not allow for this pattern. A method that takes into account regional differences might yield more accurate state-level estimates.

4. Imputation for item nonresponse. The current method of imputation for item nonresponse in the Quarterly Tax Survey is to pull forward data for several years. The Governments Division recognizes that this is not statistically defensible and that the current method may be leading to underestimates of property taxes and other taxes, as noted earlier. A more reliable imputation procedure would reflect the growth of the variable being measured.

5. Editing. Editing is now done by three analysts. Automating and otherwise modernizing the editing procedures would assist in getting the data out in a more timely manner and in freeing the analysts to actually analyze the data.

These challenges, in sum, suggest that a great deal of statistical methods research and development is needed to bring these Governments Division programs into compliance with Census Bureau standards and generally recognized good statistical practices. They will require a fairly significant investment of technical resources, the deployment of sophisticated statistical and survey methodology skills, and a commitment to conduct the necessary research and development on an ongoing basis. In view of the shortage of resources, it may be advisable for the Governments Division to start by integrating the testing and development of new methods and procedures into the upcoming redesign of the Quarterly Tax Survey, to which it is already committed.
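The regional alternative raised in the third challenge above can be sketched simply; the states, regions, and growth rates used here are hypothetical:

```python
def adjust_nonrespondents(prior, region_of, regional_rates, national_rate):
    """Grow each nonresponding government's prior-period value by its
    region's growth rate when one is available, falling back to the
    national rate (in effect, the current method) otherwise."""
    return {
        gov: value * (1.0 + regional_rates.get(region_of.get(gov), national_rate))
        for gov, value in prior.items()
    }

prior = {"NV": 100.0, "VT": 100.0}
region_of = {"NV": "West", "VT": "Northeast"}
rates = {"West": 0.06, "Northeast": 0.01}
print(adjust_nonrespondents(prior, region_of, rates, national_rate=0.03))
```

Under a uniform national rate both states would receive the same adjusted value; the regional version lets the fast-growing Western state and the slower-growing Northeastern state diverge, which is the pattern the current method cannot capture.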
Recommendation 4-11: The Governments Division should use the redesign of the Quarterly Tax Survey to assess the quality of the sample frame, to develop a probability sample of local governments for nonproperty tax measurement, to streamline questionnaires, and to develop cost-effective variance estimation, editing, and imputation procedures that meet Census Bureau standards.

PLANNING FOR IMPROVEMENTS IN STATISTICAL METHODOLOGY

Taken as a whole, the panel's recommendations in this chapter constitute a tall order for the staff of the Governments Division and its supporting organizations in the Census Bureau, in particular the ESMPD. The statistical methods underpinning federal government surveys require

continuous attention and the commitment of scarce human, technological, and financial resources. The panel has outlined some steps that can be taken in the short term to shore up the statistical infrastructure for the state and local governments program, steps that are not excessively resource-intensive and that could well have an immediate payoff.

Any program of improvement begins with a plan. In this case, the strategic plan for the Governments Division can be enriched with time-phased activities to enhance statistical methods. Scheduled survey redesigns and recurring post–Census of Governments sample updating operations can be used as test beds for improving practices and procedures that can be widely applied across the various data collections. Outside advice and guidance can be obtained from advisory structures, such as the advisory group discussed in Chapter 6, and by regularly scheduling sessions with the American Statistical Association component of the Census Advisory Committee of Professional Associations. A clear delineation of responsibility for statistical activities between the Governments Division and the supporting organizations, particularly for imputation and variance estimation functions, can be an early priority.

Finally, the payoff from a relatively small investment in professional staff training and development can be quite significant. While Governments Division staff participate in a wide variety of professional meetings, the ESMPD staff who provide statistical support to the Governments Division rarely have the opportunity to do so. Extending the opportunity to attend and actively participate in statistical conferences and workshops can pay exceptional dividends for maintaining the currency of methodological skills in the fast-changing world of survey methodology, at relatively low cost.