A Guidebook for Using American Community Survey Data for Transportation Planning
CHAPTER 4

Using ACS Data

Once a data user has obtained the ACS data needed for a specific analysis, she or he will need to consider the special issues that affect the ACS data. The issues described in this section affect how analyses are done and how analysis results are interpreted and reported. As discussed above, the Census Bureau's migration to ACS adds some complexity to common data uses, but it also introduces the ability to perform new and better data analyses.

This section begins with a discussion of ACS data quality, focusing on non-sampling errors, bias, and other issues that could affect how well an ACS estimate reflects the actual population. The issues identified in this section can help data users understand their results better and can help explain why unexpected results may be occurring.

The issues discussed in Section 4.1 relate to ACS data quality and accuracy. Sections 4.2 and 4.3 describe two Census Bureau data processing issues that affect how users will need to work with and interpret ACS data. Section 4.2 considers the effects of the Census Bureau's accumulation of ACS data over time and geography, and the use of one-, three-, and five-year averages. Section 4.3 discusses potential data use and analysis challenges introduced by data disclosure limitations. These first three sections outline many of the key issues that data users will need to be aware of to design and implement ACS analyses.

The following three sections describe issues related to how analysts actually perform their analyses. Section 4.4 describes the need to consider the effects of sampling error on ACS estimates. Section 4.5 describes the issues analysts will need to consider in comparing ACS results with Census 2000 results. Finally, Section 4.6 outlines the implications and opportunities of ACS's frequent data releases.
4.1 Accuracy of ACS Data

A key objective for the Census Bureau in migrating from decennial census Long Form data collection to the continuous data collection approach of ACS was to improve the quality of the data collected by improving the ways the data are collected and processed. To evaluate whether this objective is being achieved, the Census Bureau and other researchers have evaluated quality measures for the initial ACS effort and have compared the early ACS results to the decennial census Long Form.

4.1.1 Census Bureau Evaluation of ACS

The Census Bureau has published 11 reports discussing ACS data quality issues based on the test site data and the C2SS experiment. The 11 reports are published under the title "Meeting 21st Century Demographic Data Needs--Implementing the American Community Survey" and are made available at www.census.gov/acs/www/AdvMeth/Reports.htm. The individual reports are

· Report 1: Demonstrating Operational Feasibility (issued July 2001);
· Report 2: Demonstrating Survey Quality (issued May 2002);
· Report 3: Testing the Use of Voluntary Methods (issued December 2003);
· Report 4: Comparing General Demographic and Housing Characteristics with Census 2000 (issued May 2004);
· Report 5: Comparing Economic Characteristics with Census 2000 (issued May 2004);
· Report 6: The 2001-2002 Operational Feasibility Report of the American Community Survey (issued May 2004);
· Report 7: Comparing Quality Measures: The American Community Survey's Three-Year Averages and Census 2000's Long Form Sample Estimates (issued June 2004);
· Report 8: Comparison of the American Community Survey Three-Year Averages and the Census Sample for a Sample of Counties and Tracts (issued June 2004);
· Report 9: Comparing Social Characteristics with Census 2000 (issued June 2004);
· Report 10: Comparing Selected Physical and Financial Characteristics of Housing with the Census 2000 (issued July 2004); and
· Report 11: Testing Voluntary Methods--Additional Results (issued December 2004).

These reports are summarized throughout the remainder of Section 4.

Data Quality Measures

Measuring how accurately a survey like ACS captures the attributes of the survey sample and the population from which it is drawn is very difficult, because to do so one would need to know the true characteristics of the population (in which case the survey would not be a very useful effort). Therefore, survey researchers try to detect clues to potential problems in different survey components. In surveys, non-sampling error can result from a variety of problems, including

· Coverage errors,
· Reporting errors,
· Non-response errors, and
· Processing and coding errors.
As discussed below, some of these errors lend themselves to quantitative analyses, so indicators can be used to assess the presence and degree of these non-sampling errors.

Coverage Rates

Survey coverage refers to how closely the sampling frame covers the target population. Coverage error occurs

· If housing units that belong to the target population are excluded (under-coverage),
· If housing units that belong to the target population are counted more than once (over-coverage), or
· If out-of-scope housing units (i.e., those not in the target population) are included in the sampling frame (over-coverage).

The sample completeness rate indicates how well a target population is covered by a survey's sample population. This rate is calculated by dividing the survey's weighted population estimates, without non-response or coverage error adjustments, by independently derived population estimates or counts.

Unit Response Rates

Unit response rates measure the degree of participation of sampled housing units in the survey. Non-response due to the inability or unwillingness of housing units to participate can cause bias if the characteristics of non-respondents differ from those of respondents.
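The sample completeness rate defined under Coverage Rates above is a simple ratio. The following sketch illustrates the calculation; the figures are hypothetical, not published ACS values:

```python
def sample_completeness_rate(weighted_survey_estimate: float,
                             independent_estimate: float) -> float:
    """Sample completeness rate, as described above: the survey's
    weighted population estimate (before non-response or coverage
    adjustments) divided by an independently derived population
    estimate, expressed as a percentage."""
    return 100.0 * weighted_survey_estimate / independent_estimate

# Hypothetical county: survey weights sum to 92,900 persons, while
# the independent population estimate is 100,000.
rate = sample_completeness_rate(92_900, 100_000)
print(f"{rate:.1f}%")  # a rate below 100 indicates under-coverage
```

A rate above 100 would instead signal over-coverage, the other error case listed above.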
Table 4.1. Comparison of C2SS and Census 2000 population item imputation rates (percent of eligible items).

Variable          Census 2000 Imputation   C2SS Imputation
Relationship      2.2%                     1.5%
Gender            1.0%                     0.5%
Age               3.6%                     2.4%
Hispanic Origin   4.2%                     3.6%
Race              3.9%                     2.4%

Source: United States Census Bureau, 2002.

Item Non-Response

Item non-response occurs when a given respondent does not provide answers for one or more items on the questionnaire. Robust methods for reducing item non-response were employed through the different ACS phases. For mail responses, the automated clerical review and the follow-up operations contribute to reducing item non-response. During the CATI and CAPI procedures, the fact that the automated instrument requires a response to every question before the next question is asked reduces item non-response significantly, even when "don't know" responses are allowed. After all data collection phases, items that were still missing were obtained by borrowing the data from respondents with similar characteristics, a process known as imputation or item allocation.

ACS Data Quality

The Census Bureau's first assessment of the potential data quality of ACS was the assessment of the accuracy and timeliness of the C2SS data reported in the second report of the Census Bureau evaluation series, available at http://www.census.gov/acs/www/Downloads/Report02.pdf. In this report, Census Bureau experts and managers concluded:

When implemented, the ACS will improve survey quality compared to the decennial census Long Form. That is, some increase in sampling error will occur due to smaller sample sizes in any given year. However, timeliness will greatly improve, and non-sampling error should be reduced by the use of permanent, highly trained field staff.13

The report evaluated C2SS on the basis of unit non-response, item non-response, sample completeness, control of processing/measurement errors, and sampling errors.
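The imputation (item allocation) process described under Item Non-Response above, borrowing a missing value from a respondent with similar characteristics, is often implemented as a "hot-deck" procedure. The sketch below is a toy illustration of that idea, not the Census Bureau's production algorithm; the field names and records are hypothetical:

```python
import random

def hot_deck_impute(records, key_fields, target):
    """Fill missing `target` values by borrowing from a donor record
    that matches on `key_fields` -- a toy version of item allocation,
    not the Census Bureau's actual procedure."""
    # Pool the observed values by matching characteristics.
    donors = {}
    for r in records:
        if r.get(target) is not None:
            cell = tuple(r[k] for k in key_fields)
            donors.setdefault(cell, []).append(r[target])
    # Borrow a value from the same cell for each missing item.
    for r in records:
        if r.get(target) is None:
            pool = donors.get(tuple(r[k] for k in key_fields))
            if pool:
                r[target] = random.choice(pool)
    return records

# Hypothetical respondents: impute a missing commute mode from
# respondents with the same age group and housing tenure.
people = [
    {"age_group": "25-34", "tenure": "rent", "mode": "bus"},
    {"age_group": "25-34", "tenure": "rent", "mode": None},
    {"age_group": "65+",   "tenure": "own",  "mode": "car"},
]
hot_deck_impute(people, ["age_group", "tenure"], "mode")
```

After the call, the second record's missing mode is borrowed from the matching first record. Real allocation procedures use far richer matching rules and ordered donor searches.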
Unit non-response rates for C2SS were found to be quite low (and lower than those of other Census Bureau surveys), but statistically significant differences in response rates were found between census tracts with different dominant racial/ethnic groups. Tracts with 75 percent or more of the population reporting a race or ethnicity of African American/black or American Indian/Alaska Native had statistically lower response rates than tracts that were similarly dominated by a population reporting to be white.

In terms of item non-response, the C2SS imputation rates for basic demographic items were significantly lower than those for the decennial census. Significant differences in the imputation requirements were found for several key population variables, as shown in Table 4.1.

The C2SS sample completeness was evaluated in relation to Census 2000 and was compared to the sample completeness ratio for the 1990 Census Long Form in relation to 1990 decennial counts (sample completeness measures for the year 2000 Long Form were not yet available at the writing of the report). The percent of the population represented in the C2SS sample was slightly higher than for the 1990 Long Form sample.

13 U.S. Census Bureau, Meeting 21st Century Demographic Data Needs--Implementing the American Community Survey: Report 2: Demonstrating Survey Quality (May 2002), p. 7.
The researchers believe that, through ongoing monitoring of ACS quality measures, improvements in unit and item response rates can be realized in the future. In addition, improvements in the MAF that are underway and will continue as part of the ACS program will lead to improvements in sample completeness.

It was not possible for the researchers to fully evaluate the potential processing and measurement errors. However, they do note several procedures that help to reduce these errors and have been implemented under the ACS quality assurance program.

Sampling error was the one quality measure that the analysts determined would be adversely affected by ACS. Data users have concluded that the higher sampling error of ACS will have a significant impact on the usefulness of the data. With a sample size of 3 million housing units per year, data accumulated over five years will correspond to a sampling rate (five years at roughly 2.5 percent per year, or about 12.5 percent) of less than three-fourths of the roughly 16.7 percent sampling rate achieved with the Long Form survey.
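The standard-error comparisons in this section follow from standard errors scaling with one over the square root of the sample size. The back-of-the-envelope check below applies that scaling alone, using the roughly 16.7 percent Long Form rate and an assumed constant 2.5 percent annual ACS rate; it is illustrative only, and the factors cited in the text (2.8, 1.6, and 1.25) are somewhat higher than pure sample-size scaling would suggest:

```python
from math import sqrt

LONG_FORM_RATE = 16.7   # approximate Long Form sampling rate, percent
ACS_ANNUAL_RATE = 2.5   # assumed constant annual ACS sampling rate, percent

def se_ratio_vs_long_form(years_accumulated: int) -> float:
    """Ratio of ACS standard error to Long Form standard error if only
    the accumulated sample size mattered (SE proportional to 1/sqrt(n))."""
    acs_rate = ACS_ANNUAL_RATE * years_accumulated
    return sqrt(LONG_FORM_RATE / acs_rate)

for years in (1, 3, 5):
    print(years, round(se_ratio_vs_long_form(years), 2))
# Sample size alone gives roughly 2.6x, 1.5x, and 1.2x for one-, three-,
# and five-year accumulations; published factors also reflect other
# aspects of the survey design.
```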
Considering the effect of sample size alone on the standard error of the estimates, and assuming a constant sampling rate of 2.5 percent, the ACS estimates will have standard errors equal to 2.8 times, 1.6 times, and 1.25 times those of the Long Form for annual estimates and three- and five-year moving averages, respectively.14

By examining the ACS test site data, the Census Bureau researchers drew the following conclusions:

While the targeted levels of sampling error for single year estimates were met overall, differentials in levels of mail response for some population groups indicate that sampling error is disproportionately higher, suggesting the need for design changes.15

Even with improved survey follow-up procedures to address the problem of differential response to the initial mail surveys, the authors concluded that

The ACS five-year averages are expected to have somewhat higher [relative standard error levels] than corresponding Census 2000 Long Form estimates... The premise of the ACS design is that this moderate increase in [standard errors] for a five-year average is worthwhile in order to obtain regular updates of the estimates throughout the decade, and to obtain what is expected to be a generally lower level of non-sampling error.16

The best assessments of actual ACS (as opposed to C2SS) non-sampling error are the Census Bureau's Accuracy of the Data reports, which are updated annually and available at www.census.gov/acs/www/UseData/Accuracy/Accuracy1.htm, and Report 7 of the Census Bureau's ACS evaluation series, which is available at www.census.gov/acs/www/AdvMeth/acs_census/creports/Report07.pdf.

The evaluation report compares Census 2000 data quality measures to the same ACS (1999-2001) data quality measures at the county and census tract level for the ACS test sites. To analyze the differences at smaller geographic breakdowns, census tracts within the ACS test sites were divided into five groups:

1. County population less than 100,000;
2. County population between 100,000 and 1 million, with tract population less than 4,000;
3. County population between 100,000 and 1 million and tract population greater than 4,000;

14 Ronald Eash, Impacts of Sample Sizes in the ACS, presented at the TRB Census Data for Transportation Planning: Planning for the Future Conference, May 12, 2005.
15 U.S. Census Bureau, Meeting 21st Century Demographic Data Needs--Implementing the American Community Survey: Report 2: Demonstrating Survey Quality (May 2002), p. 27.
16 U.S. Census Bureau, Meeting 21st Century Demographic Data Needs--Implementing the American Community Survey: Report 2: Demonstrating Survey Quality (May 2002), p. 29.
4. County population greater than 1 million and tract population less than 4,000; and
5. County population greater than 1 million and tract population greater than 4,000.17

Table 4.2. Comparison of quality measures at the county level.

Characteristic                                            ACS     Census 2000
Self-Response Rate                                        55.3%   68.1%
Total Housing Unit Non-Response                           4.4%    9.7%
Occupied Housing Unit Non-Response Rate                   5.2%    8.7%
Allocation Rates
  Population Item Total Allocation Rates                  6.5%    11.2%
  Occupied Housing Unit Total Allocation Rates            7.7%    15.8%
  Vacant Housing Unit Total Allocation Rates              23.2%   19.8%
  Population and Occupied Housing Unit Total
    Allocation Rates                                      6.9%    12.8%
Sample Completeness Rates
  Housing Sample Completeness                             92.9%   90.3%
  Household Population Sample Completeness                90.4%   91.1%

Source: United States Census Bureau, 2004.

Table 4.2 summarizes some of the key quality measures compared in the ACS and Census 2000 at the county level. The figures shown reflect the Census Bureau's weighted definitions of response and completion rates. Based on their evaluation of all of these items, the authors concluded:

The quality measures suggest that the ACS multiyear averages are at least as good as the estimates from the Long Form. When we also consider the enhanced timeliness of information from the ACS, the superiority of reengineering the 2010 Census over retaining traditional methods is clear.18

ACS Unit Response

The self-response rate for Census 2000 was 68.1 percent, while the ACS rate was lower at 55.3 percent. This means that Census 2000 respondents were more likely to mail back their questionnaires than were ACS respondents. The authors note

. . . the higher census Long Form self-response rates mean that the success of the census depended less on follow-up operations than did the success of the ACS.
This was an expected result--past experience has consistently indicated that the census will produce mail return rates 10 to 20 percentage points higher than other similar operations, even decennial tests.19

The decennial census benefits from a large advertising and public relations campaign and, therefore, has much higher visibility. The authors also point out, "Census 2000 used questionnaires in languages other than English, especially in Spanish, which would have increased self-response rates in linguistically isolated areas--the ACS used English questionnaires only."

17 Tracts with population less than 500 were discarded for this study; there are about 590 such tracts in the country. The average tract population in the United States (65,000 tracts) is about 4,300.
18 U.S. Census Bureau, Meeting 21st Century Demographic Data Needs--Implementing the American Community Survey: Report 7: Comparing Quality Measures: The American Community Survey's Three-Year Averages and Census 2000's Long Form Sample Estimates (June 2004), p. vii.
19 U.S. Census Bureau, Meeting 21st Century Demographic Data Needs--Implementing the American Community Survey: Report 7: Comparing Quality Measures: The American Community Survey's Three-Year Averages and Census 2000's Long Form Sample Estimates (June 2004), p. 15.
Similar statistically different self-response rates were found for each tract group that was analyzed. Despite the lower initial return rate, the non-response rates for total housing units and occupied housing units were lower in the ACS than in Census 2000. At the tract level, ACS also consistently showed lower rates.

ACS Sample Completeness Rates

The sample completeness rate indicates how well a target population is covered by a survey's sample population. Rates greater than 100 would indicate over-coverage of the population, and rates less than 100 would indicate under-coverage. Both efforts failed to include the whole universe in their samples. The housing unit sample completeness rate for ACS was reported to be 92.9 percent compared to 90.3 percent for Census 2000, while the household population sample completeness rates were 90.4 percent and 91.1 percent, respectively.

ACS Item Response Rates and Item Allocation (Imputation)

The reported total allocation rates in Table 4.2 are the weighted averages of the item allocation rates for the individual corresponding variables. Both Census 2000 and ACS allocate (impute) responses when items are left blank or responses are out of range. For all population items (54 responses) and occupied housing unit items (29 responses), Census 2000 had higher allocation rates than ACS. For both population and occupied housing unit responses, the ACS allocation/imputation rate was about five percentage points lower than the Census 2000 rate, with a similar trend across the five tract groups. The differences in the vacant housing unit items (12 responses) are most likely the result of issues related to the comparability of the two estimates. The lower ACS imputation rates are a strong indication that the quality of ACS data compares favorably with census Long Form data.
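The total allocation rates in Table 4.2 are described as weighted averages of the per-item allocation rates. A minimal sketch of that aggregation, using hypothetical item counts and rates rather than published figures:

```python
def total_allocation_rate(items):
    """Overall allocation rate as the weighted average of per-item
    allocation rates, weighting each item by its number of eligible
    responses. `items` is a list of (eligible_responses, rate_percent)
    pairs -- hypothetical inputs, mirroring how the Table 4.2 totals
    are described."""
    total_eligible = sum(eligible for eligible, _ in items)
    total_allocated = sum(eligible * rate / 100.0 for eligible, rate in items)
    return 100.0 * total_allocated / total_eligible

# Three hypothetical items: (eligible responses, allocation rate %).
items = [(10_000, 2.0), (10_000, 4.0), (5_000, 10.0)]
print(round(total_allocation_rate(items), 2))  # → 4.4
```

Items with many eligible responses dominate the total, which is why a single high-rate item (such as the vacant housing unit items noted above) need not move the overall figure much.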
The lower (improved) levels of item non-response for ACS can be seen in Table 4.2 in the previous section. While the reduced need for item allocation is very good news for data users, as noted in the previous section, the item allocation procedures used by the Census Bureau are still limited by the individual sequencing of these allocations. Although individual transportation-related items show reasonable allocation rates, many of the household items, when combined with person items, show unusual results. This is likely to result from the Census Bureau's practice of processing the allocations of household items and person items separately, without any cross-referencing.

ACS Operational Quality Measures

Reports 1 and 6 of the Census Bureau's evaluation series (available at www.census.gov/acs/www/Downloads/Report01.pdf and www.census.gov/acs/www/Downloads/Report06.pdf) reviewed the operational feasibility of ACS. In Report 1, Census Bureau staff reviewed the outcome of the C2SS and the 1999 and 2000 ACS test site deployment to evaluate ACS from an operational standpoint. The key findings of this effort were

1. Implementing the ACS should improve the year 2010 decennial census; and
2. The successful implementation of the C2SS during 2000 demonstrated that full implementation of the ACS is operationally feasible.

According to the report:

By having only a Short Form in 2010, the Census Bureau can more sharply focus on its constitutional mandates--to fully enumerate the population to apportion the House of Representatives. The ACS development program--supported by a complete and accurate address system--will simplify the decennial design, resulting in improved coverage in 2010.20

The researchers also report that C2SS achieved the quality standards, budgets, and schedules that the Census Bureau had established.
The C2SS effort came in slightly under budget, and most of the workload issues identified with the effort were attributed to the fact that the C2SS was

20 U.S. Census Bureau, Meeting 21st Century Demographic Data Needs--Implementing the American Community Survey: Report 1: Demonstrating Operational Feasibility (July 2001), p. 7.