
Using the American Community Survey: Benefits and Challenges (2007)


PART I
Using the American Community Survey


2
Essentials for Users

This chapter addresses users of the decennial census long-form sample who want to know, in general terms, what benefits—and challenges—the new American Community Survey (ACS) presents to them. The chapter first summarizes the basics that every user should know about the ACS and key ways in which it is similar to and differs from the decennial census long-form sample. It then addresses two central issues: (1) why users should care about the ACS in terms of the benefits it offers and (2) some of the challenges those benefits present for users. Finally, it offers the panel’s assessment of the value of the ACS to users based on the available knowledge about its properties.

2-A
ACS DESIGN BASICS

To work with data from the ACS, users should be acquainted with the following features of its design and operations: the population or universe covered, rules for assigning people to a place of residence, questionnaire content and reference periods, sample size and design, data collection procedures, data products, and data-processing procedures to generate the products. The key factor to keep in mind is that, unlike the census long-form sample, the ACS is continuous: a fresh sample of addresses is surveyed every month, and data products represent cumulations of monthly data for 1-year, 3-year, and 5-year periods. The discussion below pertains to the ACS in the United States; see Box 2-1 for a brief overview of the Puerto Rico Community Survey (PRCS).1


BOX 2-1

The Puerto Rico Community Survey (PRCS)

The Puerto Rico Community Survey (PRCS) is identical in most respects to the American Community Survey in the 50 states and the District of Columbia. There were no PRCS test surveys or test sites for Puerto Rico in 2000–2004, so that 2005 PRCS data are the first post-2000 long-form-type data available for Puerto Rico.

Following the same basic design as the ACS, the initial sample size of the PRCS is 3,000 housing units each month or 36,000 housing units each year—about 2.4 percent of the total of about 1.5 million residential addresses in Puerto Rico. Initial sampling rates for blocks vary by the estimate of occupied housing units in the governmental jurisdiction or census tract (see Table 2-3a), although the PRCS rates are slightly higher than the ACS rates (see U.S. Census Bureau, 2006:Table 4.1).

Data collection in the PRCS uses mailout, CATI, and CAPI, like the ACS. However, because of low mail response, all mailout/CATI nonrespondents are sampled at a 50 percent rate for the CAPI follow-up as of June 2005.

Areas for which PRCS products are published include:

  • 78 municipios (county equivalents): 12 will have 1-year estimates; 65 will have 3-year estimates

  • 455 barrios (subdivisions of municipios, similar to minor civil divisions): 5 will have 1-year estimates; 34 will have 3-year estimates (based on 2000 census counts)

  • 225 zonas urbanas (census designated places that are governmental centers of municipios) and comunidades (other census designated places): 9 will have 1-year estimates; 20 will have 3-year estimates (based on 2000 census counts)

  • 871 census tracts and 2,477 block groups (5-year estimates only)

The PRCS is explained in greater detail by U.S. Census Bureau (2006).

1 All of Section 2-A draws heavily on U.S. Census Bureau (2006).

2-A.1
Population Coverage (Universe)

The ACS for 2005 covered the household population. The 2006 ACS covered not only the household population, but also people who live in college dormitories, armed forces barracks, prisons, nursing homes, and other group quarters. The 2006 ACS population coverage was the same as the census long-form-sample coverage, except that the ACS did not include people found at soup kitchens or street locations frequented by the homeless and a few other transient situations. Table 2-1 lists the types of residences included in the 2006 ACS.

2-A.2
Residence Rules

The ACS instructs the respondent for a household to provide data on all people who, at the time of filling out the questionnaire, are living or staying at the household address for more than 2 months (including usual residents who are away for less than 2 months). In contrast, the long-form sample asked household respondents to report all people who usually lived at the address as of Census Day, April 1, meaning they lived or stayed there most of the time. People whom the ACS samples in group quarters beginning in 2006 are counted at the group quarters location, in effect applying a de facto residence rule regardless of how long an individual has lived or expects to live in the group quarters. The long-form sample also in effect generally applied a de facto residence rule for group quarters residents, although residents of some types of group quarters were allowed the option of indicating another usual place of residence. (An unduplication process was used to determine the correct enumeration for people listed at the group quarters and the other residence; such a process would not be possible for the ACS because it is not embedded in a census.)

For many people, their ACS residence will be the same as their long-form-sample residence. However, some people may report a different residence: for example, people who live in a house or apartment in New York most of the year but reside in Florida in December through March should report Florida as their address if sampled for the ACS in Florida in the winter, whereas their Census Day address is in New York.
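To make the contrast concrete in code, here is a minimal sketch of the two residence rules described above (illustrative only; the function names and simplified inputs are assumptions, not Census Bureau definitions):

```python
def counted_at_address_acs(months_living_or_staying_here: float) -> bool:
    """ACS rule (simplified): count a person at the sample address if, at the time
    the questionnaire is filled out, they are living or staying there for more
    than 2 months (usual residents away for less than 2 months are also counted)."""
    return months_living_or_staying_here > 2

def counted_at_address_census(lives_here_most_of_the_time_on_april_1: bool) -> bool:
    """2000 long-form rule (simplified): count a person where they usually lived
    as of Census Day, April 1, meaning where they stayed most of the time."""
    return lives_here_most_of_the_time_on_april_1

# The Florida/New York example: sampled in Florida in January during a
# four-month winter stay, the person is an ACS resident of Florida even
# though the Census Day (usual) residence is New York.
print(counted_at_address_acs(months_living_or_staying_here=4))                   # True
print(counted_at_address_census(lives_here_most_of_the_time_on_april_1=False))   # False
```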

2-A.3
Content and Reference Periods

The 2005 ACS includes about 55 questions for every person and 30 questions for every housing unit in the sample—approximately the same content as in the 2000 census long-form sample. There are some differences:

  • The ACS mail questionnaire uses a matrix layout for questions on sex, age, race, ethnicity, and household relationship, compared with a person-by-person format in the long-form questionnaire.

  • The ACS mail questionnaire provides room to respond for 5 household members compared with 6 on the long-form questionnaire (telephone follow-up is used to obtain information on additional household members).


TABLE 2-1 Types of Residences in the American Community Survey (ACS)

Residence Type: 2000 Population (Percentage)

Housing Unit Residence (a): 97.2
  Single-family, detached: 64.6
  Single-family, attached: 5.4
  2-or-more-unit structure: 20.4
    In an apartment building (including condominium or co-op)
    In an assisted living facility with separate apartments
    In a group quarters (e.g., house master's residence)
    In a home (e.g., basement apartment, upstairs apartment)
    In multi-unit military family housing on or off base
  Mobile home that is occupied or, if vacant, that is permanently sited: 6.7
  Boat at a mooring, RV, or occupied van: 0.1

Institutional Group Quarters Residence (beginning in 2006 ACS): 1.4
  Nursing home or other long-term care facility: 0.6
  Correctional institution (for example, prison or jail): 0.7
  Other institutions (for example, hospital or residential school for people with disabilities, long-term care home for juveniles): 0.1

Noninstitutional Group Quarters Residence (beginning in 2006 ACS): 1.3
  College dormitory: 0.7
  Military quarters (in barracks on a base; on a ship assigned to home port): 0.1
  Other noninstitutional group quarters: 0.5 (b)
    Residence in the ACS and the 2000 Long-Form Sample:
      Convent, monastery
      Group home
      Halfway home
      Hospice
      Job Corps center
      Migrant worker quarters
      Shelter, emergency shelter
      YMCA, YWCA, hostel
    Residence NOT in the ACS but in the 2000 Long-Form Sample:
      Circus quarters
      Crews on merchant ships
      Domestic violence shelter
      Recreational vehicle in a campground
      Soup kitchen or mobile food van site
      Street location for the homeless

(a) Housing units are separate living quarters with direct access from the outside or through a common hall (U.S. Census Bureau, 2006:D-17).

(b) Includes 170,706 people (0.06 percent of the population of 281.4 million in 2000) living in emergency shelters for the homeless, shelters for runaway children, transitional shelters, and hotels and motels used to provide shelter for people without conventional housing (U.S. Census Bureau, 2001).

SOURCES: Types of residences adapted from U.S. Census Bureau (2006:Ch. 8, Attachment A); population percentages from http://factfinder.census.gov, Summary File 1, Table P37; Summary File 3, Table H33.

  • Many ACS items refer to a time period different from that of the corresponding items on the long-form questionnaire: for example, usual hours worked per week, weeks worked per year, and income items on the ACS refer to the 12 months prior to the day when the household filled out the questionnaire, whereas these items on the long form always referred to the previous calendar year (1999 for the 2000 census).

  • The ACS currently includes three items not on the 2000 long form: (1) whether the household received food stamps in the previous 12 months and their value; (2) the length of time and main reason for staying at the address (for example, permanent home, vacation home, to attend school or college); and (3) for women ages 15–50, whether they gave birth to any children in the past 12 months.

Table 2-2 compares the items on the 2005 ACS questionnaire with the items on the 2000 census long form. The Census Bureau is proposing several changes to the ACS questionnaire beginning in 2008. These changes, if approved, will include the addition of three new questions on marital history, health insurance coverage, and veterans’ service-related disability; the deletion of the question on length of time and main reason for staying at the address; changes to the basic demographic items for consistency with the 2010 census questionnaire; and changes in wording and format, determined by a 2006 test, to improve reporting of several other questions. A question on field of bachelor’s degree will be tested in 2007 and may be added to the ACS beginning in 2009.

2-A.4
Sample Design and Size

Each month the ACS sends questionnaires to about 250,000 housing unit addresses that have been sampled from the Census Bureau’s Master Address File (MAF; see Chapter 4 for details of the sampling operation). Each month’s sample includes addresses in every one of the nation’s 3,141 counties. The monthly samples cumulate to about 3 million addresses over a year, or about 2.3 percent of the total of about 129.5 million housing unit addresses in the United States in 2005. The sample is constructed so that no housing unit address will be included more than once every 5 years.

The ACS sample is very large compared with the samples for major national household surveys. However, the long-form sample was even larger: in 2000, the long form was sent to about 18 million addresses, or one-sixth of total housing unit addresses in the United States at the time, and 16.4 million usable long-form questionnaires were included in the final edited data file. The ACS monthly and even yearly samples cannot be as large as the long-form sample because the costs would be too great.


TABLE 2-2 Items on the 2005 ACS Questionnaire and the 2000 Census Long Form

Each entry shows the 2005 ACS item (in question order), followed by whether it was asked in the 2000 census.

Person Items
  Sex: Yes (short-form item)
  Age (at interview): Yes (as of April 1, short form)
  Date of birth (month, day, year): Yes (short form)
  Relationship to household reference person (person 1): Yes (more detail, short form)
  Marital status: Yes
  Hispanic origin: Yes (short form)
  Race (option for multiple races): Yes (short form)
  Place of birth: Yes
  Citizenship: Yes
  Year of immigration: Yes
  Attended school in last 3 months: Yes (since February 1, 2000)
  Grade attending: Yes
  Highest degree completed: Yes
  Ancestry or ethnic origin: Yes
  Language spoken at home: Yes
  How well speaks English: Yes
  Place of residence 1 year ago (city or town, county, state): Yes (5 years ago)
  Disability involving sight or hearing: Yes
  Disability limiting physical activity: Yes
  Difficulty learning, remembering due to disability of 6 or more months: Yes
  Difficulty dressing, bathing, or getting around the home: Yes
  For people ages 15 and older
    Difficulty going outside the home alone to shop, etc.: Yes (ages 16 and older)
    Difficulty working at a job or business: Yes (ages 16 and older)
    Given birth in past 12 months (women ages 15–50): No
    Responsible for own grandchildren in the home: Yes
    How long responsible for grandchildren: Yes
    Veteran status (active duty): Yes
    Period of active military service: Yes
    Years of active military service (less than 2 years, 2 years or more): Yes
    Working last week for pay or profit: Yes
    Place of work (address): Yes
    Usual means of transportation to work last week: Yes
    If by car, truck, or van, how many people used it: Yes
    Time left home for work: Yes
    Minutes to work: Yes
    On layoff last week: Yes
    Temporarily absent from work last week: Yes
    Whether will be recalled to work: Yes
    Looking for work last 4 weeks: Yes
    Could have worked last week: Yes
    When last worked: Yes
    Weeks worked, last 12 months: Yes (1999)
    Hours usually worked per week, last 12 months: Yes (1999)
    Class of worker of current or most recent employment: Yes
    Industry of current or most recent employment: Yes
    Occupation of current or most recent employment: Yes
    Wage and salary income, last 12 months: Yes (1999)
    Self-employment income (farm and nonfarm), last 12 months: Yes (1999)
    Interest, dividend, net rent, royalty, and trust income, last 12 months: Yes (1999)
    Social Security income, last 12 months: Yes (1999)
    Supplemental Security Income, last 12 months: Yes (1999)
    State or local public assistance income, last 12 months: Yes (1999)
    Retirement, survivor, or disability pension income, last 12 months: Yes (1999)
    Any other regular income, last 12 months: Yes (1999)
    Total income, last 12 months: Yes (1999)

Housing Items
  Type of building/number of units in structure: Yes
  Year building built: Yes
  When household reference person (person 1) moved in: Yes
  Number of acres on property (single-family or mobile home): Yes
  Agricultural sales, last 12 months (single-family or mobile home on 1 or more acres): Yes (1999)
  Whether business on property (single-family or mobile home): Yes
  Rooms in unit: Yes
  Bedrooms in unit: Yes
  Complete plumbing facilities: Yes
  Complete kitchen facilities: Yes
  Telephone service available: Yes
  Number of automobiles, vans, trucks for use by household members: Yes
  Heating fuel most used: Yes
  Electricity cost, last month: Yes (annual cost)
  Gas cost, last month: Yes (annual cost)
  Water and sewer cost, last 12 months: Yes (annual cost)
  Oil, coal, kerosene cost, last 12 months: Yes (annual cost)
  Receive food stamps, value last 12 months: No
  Monthly condominium fee: Yes
  Owner or renter (tenure): Yes (short-form item)
  Monthly rent (and whether includes various utilities): Yes
  Whether rent includes meals: Yes
  Value of property if were for sale: Yes
  Annual real estate taxes: Yes (last year)
  Annual hazard insurance: Yes
  Monthly mortgage payment: Yes
  Whether mortgage payment includes taxes: Yes
  Whether mortgage payment includes insurance: Yes
  Whether a second mortgage and/or home equity loan: Yes
  Second mortgage/home equity loan monthly payment: Yes
  Annual costs for mobile home and site (personal property taxes, site rent, fees and licenses): Yes (last year, also includes installment loans)
  Whether any household members live here year round: No
  Number of months members live here: No
  Main reason members stay at this address: No

NOTES: The 2005 ACS and 2000 census long-form sample provided room on the mailback questionnaire for characteristics of up to 5 and 6 household members, respectively. Questionnaires should be consulted for precise question wording.

SOURCES: See http://www.census.gov/acs/www/SBasics/SQuest/SQuest1.htm for the 2005 ACS; Anderson (2000:388-399) for the 2000 long form.

Accumulated over 5 years, the ACS sample will total about 15 million housing unit addresses, but the ACS sample is then reduced by the subsampling for in-person follow-up of households not responding to the mail and telephone data collection procedures (see below). This subsampling may reduce the ACS 5-year sample to 10–11 million housing unit addresses.

Because data on governmental jurisdictions will be an important output of the ACS and because many governmental units are very small in population size, the ACS oversamples housing unit addresses in small governmental units relative to other areas, much as the 2000 long-form-sample design did. Oversampling provides more precise estimates for small counties, places, townships, school districts, and American Indian and Alaska Native areas than would otherwise be possible. In a similar manner, for the personal visit follow-up operation, the ACS oversamples mail and telephone nonrespondents in census tracts that are expected to have low mail and telephone response rates relative to other census tracts. To afford the costs of this additional follow-up, not only are smaller subsamples followed up in person in census tracts that are expected to have high mail and telephone response, but also the initial sample for these tracts is reduced by 8 percent. Tables 2-3a, 2-3b, and 2-3c provide initial annual and 5-year sampling rates for governmental units and census tracts of different population sizes, subsampling rates for addresses that must be followed up in person, and estimated 5-year rates of sample cases after subsampling; Table 2-4 provides counts of governmental units by type and population size.
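To make the arithmetic behind Tables 2-3b and 2-3c concrete, the following minimal sketch (illustrative only; the function names are assumptions) reproduces the completed-sample calculations: mail/CATI respondents are retained in full, and nonrespondents contribute only the CAPI-subsampled fraction.

```python
def completed_share_of_initial_sample(mail_cati_rate: float,
                                      capi_subsample_rate: float) -> float:
    """Share of the initial sample yielding completed interviews:
    mail/CATI respondents plus the CAPI-followed-up share of nonrespondents."""
    return mail_cati_rate + capi_subsample_rate * (1.0 - mail_cati_rate)

def five_year_completed_rate(annual_initial_rate: float, mail_cati_rate: float,
                             capi_subsample_rate: float) -> float:
    """Cumulative 5-year rate of completed sample cases (before unit nonresponse),
    as in Table 2-3c: 5 years of initial sampling times the completed share."""
    return 5 * annual_initial_rate * completed_share_of_initial_sample(
        mail_cati_rate, capi_subsample_rate)

# Table 2-3b example: 20% mail/CATI response with a 1-in-2 CAPI subsample
# gives a completed sample equal to 60% of the initial sample.
print(completed_share_of_initial_sample(0.20, 0.5))          # 0.6
# Table 2-3c example: a very small governmental unit sampled at 10% per year
# with 20% mail/CATI response ends up with about a 30% completed 5-year rate.
print(round(five_year_completed_rate(0.10, 0.20, 0.5), 3))   # 0.3
```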


TABLE 2-3a Housing Unit Addresses, 2005 ACS and 2000 Census Long-Form Sample: Approximate Initial Block-Level Sampling Rates

Type and Size of Smallest Area Containing a Block | 2005 ACS Annual Initial Sampling Rate | 2005 ACS Cumulative 5-Year Initial Sampling Rate | 2000 Long-Form-Sample Census Day Sampling Rate

Governmental unit (county, place, township in 12 states, school district, American Indian or Alaska Native area):
  With < 200 occupied housing units (fewer than about 500 people) | 10.0% (1 in 10) | 50.0% (1 in 2) | 50.0% (1 in 2)
  With 200–800 occupied housing units (about 500–2,000 people) | 6.9% (1 in 14) | 34.5% (1 in 3) | 50.0% (1 in 2)
  With 800–1,200 occupied housing units (about 2,000–3,000 people) | 3.5% (1 in 28) | 17.5% (1 in 6) | 25.0% (1 in 4)
Census tract with > 2,000 occupied housing units (more than about 5,000 people) (a) | 1.7% (1 in 59) or 1.6% (1 in 63) | 8.5% (1 in 12) or 8.0% (1 in 13) | 12.5% (1 in 8)
Other area (a) | 2.3% (1 in 44) or 2.1% (1 in 48) | 11.5% (1 in 9) or 10.5% (1 in 10) | 16.7% (1 in 6)
Overall | 2.3% (1 in 44) | 11.5% (1 in 9) | 16.7% (1 in 6)

NOTES: Number of occupied housing units is estimated from the MAF. Because the initial ACS sample size will be kept at approximately 3 million residential addresses per year, the initial sampling rates shown will be slightly reduced as the number of occupied housing units grows. Townships and other minor civil divisions are recognized for sampling purposes in 12 states where they are functioning governments: Connecticut, Maine, Massachusetts, Michigan, Minnesota, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, Vermont, and Wisconsin.

(a) The smaller of the two ACS sampling rates shown applies for blocks in census tracts with predicted mail/CATI response rates greater than 60% (see Table 2-3b).

SOURCE: Adapted from U.S. Census Bureau (2006:Tables 4.1, 4.2) for the ACS.


TABLE 2-3b Housing Unit Addresses, 2005 ACS and 2000 Census Long-Form Sample: Census Tract-Level CAPI Subsampling Rates in the 2005 ACS for Mail/CATI Nonrespondents (a)

Subsampling Rate Category | CAPI Subsampling Rate | Illustrative Completed Sample Cases as Percentage of Initial Sample

Census tracts with predicted mail/CATI response rate less than or equal to 35% | 50% (1 in 2) | If, say, 20% mail/CATI response, then completed sample will be 60% of initial sample—20% mail/CATI plus 40% CAPI (1/2 of 80%)
Census tracts with predicted mail/CATI response rate between 36 and 50% | 40% (2 in 5) | If, say, 40% mail/CATI response, then completed sample will be 64% of initial sample—40% mail/CATI plus 24% CAPI (2/5 of 60%)
Census tracts with predicted mail/CATI response rate between 51 and 60% | 33% (1 in 3) | If, say, 55% mail/CATI response, then completed sample will be 70% of initial sample—55% mail/CATI plus 15% CAPI (1/3 of 45%)
Census tracts with predicted mail/CATI response rate greater than 60% (b) | 33% (1 in 3) | If, say, 80% mail/CATI response, then completed sample will be 87% of initial sample—80% mail/CATI plus 7% CAPI (1/3 of 20%)

(a) Nonmailable addresses are followed up in the CAPI data collection phase at a rate of 67% (2 in 3).

(b) In census tracts outside oversampled governmental units with high predicted mail/CATI response rates, the initial sample is reduced by a factor of 0.92 (see Table 2-3a). This reduction is implemented to satisfy a budget constraint for personal interviewing.

SOURCE: Adapted from U.S. Census Bureau (2006:Tables 4.1, 4.2) for the ACS.


TABLE 2-3c Housing Unit Addresses, 2005 ACS and 2000 Census Long-Form Sample: Illustrative Rates of Completed Sample Cases (a)

Type and Size of Area | 2005 ACS Cumulative 5-Year Rate of Completed Sample Cases (Percent), at Mail/CATI Response Rates for CAPI Subsampling of 20 / 40 / 55 / 80 Percent | 2000 Long-Form-Sample Census Day Rate of Completed Sample Cases

Governmental unit (county, place, township in 12 states, school district, American Indian or Alaska Native area):
  With < 200 occupied housing units (fewer than about 500 people) | 30.0 / 32.0 / 35.0 / 43.5 | 50.0
  With 200–800 occupied housing units (about 500–2,000 people) | 20.7 / 22.1 / 24.2 / 30.0 | 50.0
  With 800–1,200 occupied housing units (about 2,000–3,000 people) | 10.5 / 11.2 / 12.3 / 15.2 | 25.0
Census tract with > 2,000 occupied housing units (more than about 5,000 people) | 5.1 / 5.4 / 6.0 / 7.0 (b) | 12.5
Other area | 6.9 / 7.4 / 8.1 / 9.1 (b) | 16.7

NOTES: The ACS rate of completed sample cases for an area is the initial sampling rate (see Table 2-3a, second column under ACS) times the percentage calculated in the last column of Table 2-3b to illustrate the effect of CAPI subsampling. The illustrative rates of completed sample cases shown will be reduced by unit nonresponse, which was 3% in the 2005 ACS and 7% in the 2000 long-form sample. CAPI: computer-assisted personal interviewing; CATI: computer-assisted telephone interviewing.

(a) For nonmailable addresses, the illustrative rate of completed sample cases (before nonresponse) will be 67% of the initial sampling rates in Table 2-3a, or, reading from top to bottom, 33.5%, 23.0%, 11.7%, 5.7% (using the higher of the two initial sampling rates in Table 2-3a), and 7.7% (using the higher of the two initial sampling rates in Table 2-3a).

(b) Uses the lower of the two initial sampling rates in Table 2-3a.

SOURCE: Adapted from U.S. Census Bureau (2006:Tables 4.1, 4.2) for the ACS.



2-A.5
Data Collection

Each month the residential housing unit addresses in the ACS sample with mailable addresses—about 95 percent of each month’s sample of 250,000 addresses—are sent a notification letter followed 4 days later by a questionnaire booklet. A reminder postcard is sent out 3 days after the questionnaire mailing. Whenever a questionnaire is not returned by mail within the following 3 weeks, a second questionnaire is mailed to the address. If there is still no response and if the Census Bureau is able to obtain a telephone number for the address, then trained interviewers conduct telephone follow-up using computer-assisted telephone interviewing (CATI) equipment. Telephone follow-up is also used to obtain missing information from households that mailed back incomplete questionnaires. About 33 percent of mail questionnaires in 2005 required telephone follow-up because key items were missing or because the household reported more members than there was room to provide information (U.S. Census Bureau, 2006:7-9).

For sampled addresses for which no mail or CATI response has been received after 2 months, for which the postal service returned the questionnaire because it could not be delivered as addressed, or for which the address is not in street name and number format and so was never mailed a questionnaire in the first place (for example, post office box or rural route addresses), interviewers are sent into the field with laptop computers. They visit housing units in person (or, in about 20 percent of cases, make contact by telephone) and collect the ACS data through computer-assisted personal interviewing (CAPI). The personal interview follow-up is conducted on a sample basis in order to save costs: about two-thirds of nonmailable addresses and between one-third and one-half of mailable addresses in each census tract—depending on the expected mailback and CATI response rate for the census tract—are followed up in person. Interviewers also visit group quarters in person to collect data for residents, using paper-and-pencil questionnaires.
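The mode sequence just described can be summarized as a simple decision rule. The sketch below is only a schematic of that flow (the function names are assumptions); the CAPI subsampling rates are the ones given above and in Table 2-3b.

```python
import random

def capi_subsample_rate(mailable: bool, predicted_mail_cati_rate: float) -> float:
    """Rate at which remaining nonrespondents are followed up in person (CAPI)."""
    if not mailable:                      # e.g., post office box or rural route address
        return 2 / 3
    if predicted_mail_cati_rate <= 0.35:  # tract categories from Table 2-3b
        return 1 / 2
    if predicted_mail_cati_rate <= 0.50:
        return 2 / 5
    return 1 / 3

def collection_mode(mailable: bool, responded_by_mail: bool, has_phone: bool,
                    responded_by_cati: bool, predicted_mail_cati_rate: float) -> str:
    """Schematic sequence: mail, then CATI follow-up, then subsampled CAPI."""
    if mailable and responded_by_mail:
        return "mail"
    if mailable and has_phone and responded_by_cati:
        return "CATI"
    rate = capi_subsample_rate(mailable, predicted_mail_cati_rate)
    return "CAPI" if random.random() < rate else "not followed up"

print(collection_mode(True, False, True, False, predicted_mail_cati_rate=0.30))
```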

An important difference between the ACS and 2000 census long-form-sample data collection procedures is that all nonresponding housing units were included in the long-form follow-up operations. Long-form-sample questionnaires were sent out in mid-March 2000, preceded by a notification letter and followed by a reminder postcard. For every address for which a questionnaire was not returned by mail, temporary interviewers (enumerators) went into the field in late April through June to try to obtain responses. The enumerators were often not successful in obtaining complete information.


TABLE 2-4 Types of Governmental Units by Population Size in 2000

Each cell gives the number of units (No.) followed by the percentage of total population.

Population Size Category | Counties (a) No. / % Pop. | Places (b) No. / % Pop. | Minor Civil Divisions (c) No. / % Pop. | School Districts (d) No. / % Pop.
Total | 3,141 / 100.00% | 19,849 / 76.57% | 8,414 / 12.46% | 14,125 / 100.00%
Under 500 people | 6 / <0.01 | 6,230 / 0.54 | 1,966 / 0.18 | 854 / 0.07
500–2,000 | 71 / 0.03 | 6,194 / 2.33 | 3,226 / 1.27 | 2,380 / 0.93
2,000–3,000 | 72 / 0.06 | 1,498 / 1.30 | 847 / 0.74 | 1,151 / 0.91
Areas containing blocks that are oversampled, subtotal (e) | 149 / 0.09 | 13,922 / 4.17 | 6,039 / 2.19 | 4,385 / 1.91
3,000–5,000 | 143 / 0.20 | 1,498 / 2.07 | 868 / 1.19 | 1,638 / 2.05
5,000–10,000 | 404 / 1.08 | 1,677 / 4.23 | 719 / 1.78 | 2,505 / 5.76
10,000–20,000 | 652 / 3.40 | 1,147 / 5.70 | 448 / 2.25 | 2,370 / 10.75
20,000–50,000 | 879 / 10.09 | 971 / 10.69 | 280 / 2.90 | 2,009 / 19.61
50,000–65,000 | 167 / 3.41 | 189 / 3.82 | 31 / 0.62 | 326 / 5.89
65,000–100,000 | 223 / 6.41 | 189 / 5.35 | 19 / 0.55 | 379 / 9.58
100,000–250,000 | 293 / 15.88 | 175 / 9.17 | 6 / 0.34 | 380 / 18.32
250,000–500,000 | 119 / 14.80 | 45 / 5.85 | 3 / 0.38 | 87 / 9.53
500,000–1,000,000 | 78 / 19.67 | 19 / 4.29 | 1 / 0.26 | 32 / 6.79
1,000,000–2,500,000 | 28 / 15.36 | 10 / 4.66 | 0 / 0.00 | 11 / 4.85
2,500,000 or more | 6 / 9.60 | 7 / 16.57 | 0 / 0.00 | 3 / 4.96
Other areas, subtotal | 2,992 / 99.91 | 5,927 / 72.40 | 2,375 / 10.27 | 9,740 / 98.09

(a) Source for counties: 2000 Census Gazetteer File (extract of Summary File 1) (http://www.census.gov/geo/www/gazetteer/); total 2000 population of 281.4 million.

(b) Places include all incorporated places plus census-designated places in Hawaii; census-designated places in the other states will have estimates published for them (see Table 2-5) but are not recognized for purposes of oversampling; see note (a) above for source of estimates.

(c) Minor civil divisions (not coterminous with places) are recognized in 12 states for purposes of oversampling (Connecticut, Maine, Massachusetts, Michigan, Minnesota, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, Vermont, and Wisconsin); they are recognized in additional states for publication but do not benefit from oversampling in these states; see note (a) above for source of estimates.

(d) Population size for school districts is as of 2003; some school districts (for example, elementary and secondary districts) overlap in population, so the population base for percentages is 314.6 million and not the 2003 population of 290.8 million. Source for school districts: 2003 population estimates (http://www.census.gov/hhes/www/saipe/school/sd03layout.html).

(e) See Table 2-3a for oversampling rates for blocks in small jurisdictions of fewer than 500 people, 500–2,000 people, and 2,000–3,000 people.


Moreover, there was no telephone follow-up to fill in missing items on mailed-back questionnaires. Consequently, missing data rates were very high for many long-form questionnaire items in 2000—considerably higher than in the ACS to date (see Section 2-B.2 below).

2-A.6
Data Products

The ACS will produce data products that resemble those from the 2000 long-form sample. The products will be available primarily through the American FactFinder on the Census Bureau’s web site: http://factfinder.census.gov (see Box 2-2). Because of the continuous design of the ACS, its data products will differ in two important respects from the long-form-sample products. First, the ACS data will be available every year instead of once a decade. As was just done for the ACS 2005 data, each year’s data products will be released in waves from August through November.2 Second, the ACS data that are released each year will not pertain to a point in time, like the long-form-sample data for Census Day; instead, they will be cumulated over a 12-, 36-, or 60-month period for governmental and statistical units depending on population size in order to provide sufficiently precise estimates for publication (see Table 2-5).

2 See http://www.census.gov/acs/www/Special/Alerts/Latest.htm for the release schedule for data products from the 2005 ACS.

ACS products will include tables and profiles of characteristics for governmental and statistical areas; see Box 2-2. The confidentiality of these products will be protected by various means. One method is to combine detailed categories into broader categories when the individual categories contain too few sample cases. Another method is termed “data swapping,” in which computer programs may swap the data for an entire household that is at risk of being identified (for example, the only minority household in a block group) with the data for another similar household in a different area. Only a small percentage of records, which are never identified, are swapped. In addition to the various procedures that are implemented to protect confidentiality for the 1-year, 3-year, and 5-year period estimates, the Census Bureau will combine individual categories and even delete entire tabulations from 1-year and 3-year period products when the sampling errors are very large (see Chapter 4 for details).

ACS products will also include public use microdata samples (PUMS). PUMS files comprise samples of individual records, suitably processed to protect confidentiality by such means as:

  • deleting names and addresses from the records;

  • limiting geographic identification to large areas known as public use microdata areas (PUMAs), which are defined to include about 100,000 people; and

  • limiting the detail that is provided for sensitive variables—for example, assigning a catchall code to income amounts over a certain threshold, such as $100,000 or more, and not identifying the specific amount.
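As a simple illustration of the last kind of detail limitation, the sketch below top-codes income values above a threshold; the threshold and function name are illustrative assumptions, not the Census Bureau's actual disclosure rules.

```python
def top_code_income(income: float, threshold: float = 100_000) -> str:
    """Report sensitive amounts at or above the threshold only as a catchall
    category rather than as a specific dollar value."""
    if income >= threshold:
        return f"{threshold:,.0f} or more"
    return f"{income:,.0f}"

print(top_code_income(54_300))    # 54,300
print(top_code_income(250_000))   # 100,000 or more
```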

2-A.7
Data Processing—1-Year Period Estimates

The computer programs to generate the ACS 1-year period data products each summer operate on 12 months of data collected in the preceding calendar year. These data include all of the mailed-back, CATI, and CAPI responses that were received in January through December of that year (including additional information obtained by telephone for incompletely filled out mail questionnaires). The major data processing steps are described briefly below.

2-A.7.a
Coding, Editing, and Imputation

As in the long-form sample, the first data-processing step for the ACS is to assign codes for write-in responses for such items as ancestry, industry, and occupation by using automated and clerical coding procedures. Coding is performed on a monthly basis. Then once a year, the raw data, the codes assigned to write-in items, and various operational data for the responses for the preceding January–December are assembled into an “edit-input file.” Computer programs review the records on this file for each household to determine if the data are sufficiently complete to be accepted for further processing and to determine the best set of records to use in instances when more than one questionnaire was obtained for a household.

Computer programs then edit the data on the accepted, unduplicated records in various ways. Computer programs also supply values for any missing information that remains after editing, using data from neighboring households with similar characteristics in a process called “hot-deck imputation.” The goal of editing and imputation is to make the ACS housing and person records complete for all persons and households.

Because of the varying reference periods in the ACS, dollar amounts of income are adjusted for inflation using the national consumer price index for all urban consumers research series (CPI-U-RS) so that every amount reflects the average value of the dollar for the calendar year. For example, a person who reported an income of $20,000 in February 2005 for the period February 2004–January 2005 would have this amount inflated by a figure of 1.031 to give an amount of $20,620. The figure of 1.031 comes from dividing the average annual CPI-U-RS for 2005 (which has the value of 284.3 relative to the base index of 100 in December 1977) by the average of the monthly CPIs for the 12 months from February 2004 to January 2005 (which is 275.8).3
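The adjustment just described is simple enough to check directly. The short sketch below (function names are assumptions) reproduces the $20,000 to $20,620 example using the index values quoted in the text.

```python
def inflation_factor(avg_annual_index_data_year: float,
                     avg_index_reference_months: float) -> float:
    """CPI-U-RS factor: the average annual index for the data year divided by
    the average of the monthly indexes over the 12-month reference period."""
    return avg_annual_index_data_year / avg_index_reference_months

def adjust_amount(amount: float, factor: float) -> float:
    """Express a reported amount in average dollars of the data (calendar) year."""
    return round(amount * factor)

factor = inflation_factor(284.3, 275.8)   # reference period February 2004-January 2005
print(round(factor, 3))                   # 1.031
print(adjust_amount(20_000, 1.031))       # 20620
```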


BOX 2-2

Data Products from the American Community Survey

Tabulations

  • Base or detailed tables, for 1-year, 3-year, and 5-year periods, all geographies that meet the relevant population size cutoff—Hundreds of tables that cross-classify two or more characteristics for a wide variety of subjects (for example, employment by sex and age); race and Hispanic-origin iterations for key characteristics; tables providing item imputation rates. For 1-year and 3-year periods, collapsed tables may be provided when categories in a detailed table are suppressed because the estimates do not meet minimum precision criteria.

    Similar to the tabulations in Summary File 3 from the 2000 long-form sample; 5-year period ACS estimates will also include tabulations of journey-to-work items for traffic analysis zones (one or more blocks, block groups, or census tracts) that, in 2000, were produced on a special tabulation basis (known as the Census Transportation Planning Package).

  • Subject tables, for 1-year, 3-year, and 5-year periods, all geographies as above—Over 60 single-topic tables of frequently requested information, with distributions and medians.

    Comparable to the Quick Tables from the 2000 long-form sample but with more detail.

  • Population profiles for selected race, ethnicity, and ancestry groups, for 1-year, 3-year, and 5-year periods, areas with 1 million or more people—Key distributions (availability of 1-year and 3-year profiles depends on the size of the group). New data product for the ACS. Profiles will be produced for most of the groups tabulated in Summary File 4 from the 2000 long-form sample.

  • Data profiles (single year), all geographies with 65,000 or more people—Four profiles of demographic, social, economic, and housing characteristics and one narrative. Comparable to profiles from the 2000 long-form sample with more geographic detail; narrative profile, which covers all four subject areas, is new.

  • Multiyear profiles, all geographies with 65,000 or more people—Four profiles for the current year and four prior years, indicating differences for a specified year from the current year that are statistically significant with 90 percent confidence. New data product for the ACS; multiyear profiles are available for the 2000–2004 ACS test surveys; the first release of multiyear profiles from the full ACS will be in 2008.

  • Ranking tables and charts, for 1-year periods—86 ranking tables for states that enable the user to determine which differences among states are statistically significant with 90 percent confidence.

Expanded from 19 subjects from the 2000 long-form sample.

Downloadable Tables

These tables are accessible from the ACS File Transfer Protocol (FTP) site (http://www.census.gov/acs/www/Special/acsftp.html).

  • Base tables, single-year profiles, multiyear profiles, and ranking tables—Permits user analysis, such as summing categories, computing percentages, etc.

  • ACS summary files for 1-year, 3-year, and 5-year period estimates (under development)—In response to users, the Census Bureau is developing an ACS product similar to Summary File 3 from the 2000 long-form sample. ACS summary files will contain all of the base tables for 1-year, 3-year, and 5-year period estimates and will be readily analyzable for such uses as comparing areas and population groups.

Public Use Microdata Sample (PUMS) Files

  • 1 percent sample file, available for each calendar year of ACS data—Each year’s file will contain about 1.25 million household and 3 million person records selected from the final realized sample of about 2 million housing units, or 40 percent of the final sample.

    Content: All housing and person items, together with imputation flags.

    Geographic identification: states, within-state areas of 100,000 or more population (public use microdata areas or PUMAs).

    Comparable, when cumulated over 5 years, to the 5 percent sample file from the 2000 long-form sample; the 1 percent ACS sample file provides finer geographic identification than the 1 percent long-form-sample file (PUMAs of 100,000 or more population compared with super-PUMAs of 400,000 or more population). Permits multivariate, microlevel analysis.

Geographic Products

  • Geographic comparison tables—86 tables (same topics as ranking tables, but not in rank order) for geographic components of the nation and states (for example, a table of median age for counties in Alabama).

  • Thematic maps—86 maps (same topics as ranking tables) for geographic components of the nation and states.

NOTE: See Table 2-5 for types of geographic areas and population sizes for which 1-year, 3-year, and 5-year period estimates are available. Unless otherwise noted, products are available from http://www.census.gov/acs/www and provide 90 percent margins of error for estimates.


TABLE 2-5 Major Types of Geographic Areas for Which 1-Year, 3-Year, and 5-Year Period Estimates Are Available from the American Community Survey

Each cell gives the number of areas for which that type of period estimate is available.

Area Type | 1-Year Period | 3-Year Period | 5-Year Period
States and District of Columbia | 51 | 51 | 51
Congressional districts | 436 | 436 | 436
Public use microdata areas (PUMAs) (these areas have at least 100,000 people) | 2,071 | 2,071 | 2,071
Metropolitan and micropolitan statistical areas | 492 | 905 | 936
Urban areas | 363 | 809 | 3,607
Counties and county equivalents | 775 | 1,812 | 3,141
Cities, towns, and census-designated places | 492 | 2,062 | 25,112
Townships and villages (minor civil divisions) (recognized for publication in 28 states) | 186 | 984 | 21,200
School districts (elementary, secondary, and unified) | 878 | 3,257 | 14,394
American Indian and Alaska Native areas | 14 | 36 | 603
Census tracts | 0 | 0 | 65,433
Block groups | 0 | 0 | 208,790

NOTES: 1-year period estimates are available for governmental and statistical areas with at least 65,000 people; 3-year period estimates are available for governmental and statistical areas with at least 20,000 people; 5-year period estimates are available for all governmental and statistical areas, including census tracts (statistical areas of about 4,000 people) and block groups (statistical areas of about 1,500 people). Other areas for which estimates are provided (not shown) include combined statistical areas, Hawaiian Home Lands, urban and rural territory, areas inside and outside the principal city of a metropolitan or micropolitan statistical area, areas outside metropolitan and micropolitan areas.

SOURCE: Tabulation provided by the U.S. Census Bureau, February 21, 2007. Because of changes in population and geographic boundaries, the actual numbers of areas with estimates published may differ from the numbers shown.



2-A.7.b
Weighting

The edited, filled-in data records for the 12 months in a calendar year are weighted in a series of steps to produce 1-year period estimates that represent the entire population. Chapter 5 provides a complete description of the nine steps in the weighting process for housing units and their members; four key steps (1, 3, 5, and 6) are briefly described here. (Similar steps will be used to weight the sample records for residents of group quarters beginning in 2006.)


Step 1, Base Weights Initially, the ACS housing unit and person records are assigned “base” weights that reflect the rate at which the unit was originally sampled from the MAF and, for CAPI responses, the rate at which it was subsampled for follow-up. Housing unit base weights can vary from as low as 10 (for housing units selected at a rate of 1 in 10 that mailed back their questionnaire or responded by telephone) to as high as 180 (for housing units selected at a rate of 1 in 60 that were followed up by CAPI at a rate of 1 in 3).
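A base weight is simply the inverse of the overall selection probability; the minimal sketch below (an illustration, with assumed function names) reproduces the range of 10 to 180 cited above.

```python
def base_weight(initial_sampling_rate: float, capi_subsample_rate: float = 1.0) -> float:
    """Inverse of the selection probability. Mail/CATI respondents carry only the
    initial rate; CAPI respondents also carry the follow-up subsampling factor."""
    return 1.0 / (initial_sampling_rate * capi_subsample_rate)

# Selected at 1 in 10 and responded by mail or telephone: weight of 10.
print(round(base_weight(1 / 10)))          # 10
# Selected at 1 in 60 and followed up by CAPI at 1 in 3: weight of 180.
print(round(base_weight(1 / 60, 1 / 3)))   # 180
```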


Step 3, Nonresponse Adjustment An important adjustment is made to the base weights for occupied housing units to account for unit nonresponse; in this adjustment, the weights are inflated to account for the failure to interview all housing units in the sample.


Steps 5 and 6, Housing and Population Controls Two other key adjustments are made to the weights to improve the precision of the survey estimates and to compensate for the fact that some people are overlooked in sample households and some addresses are left off the MAF. Each of these adjustments is performed for estimation areas, which are large counties or groups of small counties. First (step 5), the weights for housing units are adjusted (controlled) to agree with independently derived estimates of total housing units in the applicable estimation area as of July 1 of the year being processed. The housing controls are derived by updating the previous census counts with information on new construction building permits, shipments of mobile homes, and estimates of housing loss (see Chapter 5). Second (step 6), the weights for persons are adjusted to agree with independently derived estimates of people in age, sex, race, and ethnic groups in the applicable estimation area as of July 1.


The population controls are derived by updating the previous census counts with information on births, deaths, and net migration (see Chapter 5). In a similar, but much more detailed, procedure for 2000, long-form-sample responses were weighted up to agree with the census complete counts for age, sex, race, and ethnic groups and type and size of household in subcounty areas as of April 1, 2000 (see U.S. Census Bureau, 2003:Ch. 8).
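Conceptually, each control step is a ratio adjustment: within an estimation area, the weights are scaled so that they sum to the independently derived control total. The sketch below illustrates the idea only; it is not the Census Bureau's estimation code, and the names are assumptions.

```python
def control_weights(weights: list[float], control_total: float) -> list[float]:
    """Scale weights so their sum equals an independent control total (for example,
    the July 1 housing-unit or population estimate for an estimation area)."""
    factor = control_total / sum(weights)
    return [w * factor for w in weights]

# Toy example: sample weights summing to 95,000 in an area whose independent
# estimate is 100,000, so every weight is inflated by about 5.3 percent.
weights = [95.0] * 1000                    # sums to 95,000
adjusted = control_weights(weights, 100_000)
print(round(adjusted[0], 2), round(sum(adjusted)))   # 100.0 100000
```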

The multistep weighting process is designed to produce estimates of people and housing units that are as complete as possible and that take into account the various aspects of the complex ACS design. A key point for users to keep in mind is that the weights will vary—sometimes a great deal—both within and across many governmental units. On one hand, this variation in weights will make the estimates less reliable than they would be with an equal probability sample of the same size. On the other hand, the variations in initial sampling and CAPI subsampling rates are intended to serve specific purposes, such as improving the precision of estimates for small governmental units within a budget constraint that limits the total sample size.

2-A.7.c
Tabulation

The final data-processing steps for the 1-year period estimates are to generate tabulations, profiles, and other products, such as PUMS. At this stage, procedures are implemented to protect data confidentiality and to combine categories (or delete entire tables) to meet precision standards.

Throughout, the ACS data for a calendar year are processed as a whole and not month by month. The only exception to date is that for areas in states affected by Hurricane Katrina and Hurricane Rita, the Census Bureau issued two sets of 2005 data products in early June 2006—one set for January–August 2005 and the other set for September–December 2005, reflecting conditions before and after the hurricanes.4

2-A.8
Data Processing—3-Year and 5-Year Period Estimates

The computer programs to produce 3-year period products use the fully processed records for the 3 preceding calendar years, containing 36 months’ worth of responses; the programs to produce 5-year period products use the fully processed records for the 5 preceding calendar years, containing 60 months’ worth of responses. The only new steps are to modify the inflation adjustments and the weights.

The income inflation adjustments are modified so that every amount is expressed in terms of the average value of the dollar for the most recent calendar year.


For example, for 3-year or 5-year period estimates released in 2010, covering 2007–2009 and 2005–2009, respectively, all income amounts would be adjusted to reflect the average value of the dollar for 2009. Amounts for housing value and costs are also inflated to reflect the average value of the dollar for the most recent calendar year.

While the Census Bureau has not worked out all of the details of the weighting for 3-year and 5-year period data products, the general procedure will be to remove the adjustments to the 1-year period weights for housing unit nonresponse and agreement with housing unit and population controls and to make new adjustments. Unit nonresponse adjustments will be implemented for all occupied housing units for which data were obtained in the relevant 36 months or 60 months. Averages of the independent housing unit and population estimates for 3 years or 5 years, as applicable, will be used to adjust the weights of each housing unit and person for whom data were obtained during the relevant 36 months or 60 months.
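In outline, the multiyear weighting replaces each single-year control with the average of the annual controls over the period, as described above; the sketch below (assumed names; the Bureau's detailed procedure was still being finalized) shows that one step.

```python
def multiyear_control(annual_controls: list[float]) -> float:
    """Average of the independent annual housing-unit or population estimates over
    a 3- or 5-year period, used in place of a single-year control total."""
    return sum(annual_controls) / len(annual_controls)

# Hypothetical county population estimates for 2005-2009:
annual_population = [98_000, 99_500, 101_000, 102_500, 104_000]
print(multiyear_control(annual_population))   # 101000.0
```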

The final data-processing steps for the 3-year and 5-year period estimates are to generate the various data products. At this stage, procedures are implemented to protect data confidentiality; also, for the 3-year period estimates, procedures are implemented (as for the 1-year period estimates) to combine categories (or delete entire tables) to meet precision standards. No screening for precision is applied to 5-year period estimates, as they are considered to be the building blocks for user-defined areas, such as groups of census tracts or block groups in a city.

2-B
ACS BENEFITS

Two paramount benefits that users will gain from the ACS in comparison with the census long-form sample are the more timely issuance of the data and the greater frequency with which the data will be released. Timeliness refers to the speed with which estimates are produced after the data are collected; frequency refers to how often the estimates are produced. A third important benefit is likely to be improved data quality: the ACS data should be more complete and accurate than the long-form-sample data.

2-B.1
Timeliness and Frequency

Instead of producing point-in-time estimates once a decade for governmental and statistical areas, every year the ACS will produce period estimates—5-year period estimates for all areas, including small neighborhoods (census tracts and block groups) and small governmental units; 3-year period estimates for all areas with at least 20,000 people; and 1-year period estimates for areas with at least 65,000 people.


Table 2-6 shows how the various period estimates will become available over the period 2005–2009, as sufficient months of data are accumulated, and how the ACS estimates will continue to be produced from that point onward.
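The population cutoffs that determine which period estimates an area receives reduce to a few lines of logic; the sketch below is an illustration of those cutoffs only (the function name is assumed).

```python
def available_period_estimates(population: int) -> list[str]:
    """ACS period estimates published for an area: 5-year estimates for all areas,
    3-year estimates for 20,000 or more people, 1-year for 65,000 or more."""
    products = ["5-year"]
    if population >= 20_000:
        products.append("3-year")
    if population >= 65_000:
        products.append("1-year")
    return products

print(available_period_estimates(4_000))     # ['5-year']  (e.g., a census tract)
print(available_period_estimates(30_000))    # ['5-year', '3-year']
print(available_period_estimates(250_000))   # ['5-year', '3-year', '1-year']
```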

The Census Bureau’s data release schedule calls for each set of estimates to become available 8–10 months after all the data needed to produce the estimates are collected. Long-form-sample tabulations typically required 2 years or more after Census Day to become available. Even more important than the faster schedule for data processing is that the ACS estimates released each summer and fall will provide a continual flow of updated information, enabling users to analyze recent data for an area or population group and compare it with other areas and groups.

TABLE 2-6 Release Year and Calendar Year of Period Estimates from the ACS

Each cell gives the calendar year(s) of data for estimates released in that year (releases occur in late summer-fall); n/a indicates that the estimate type is not yet available in that release year.

Type of Period Estimate | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | 2015
1-year period estimates for areas with 65,000 or more people | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014
3-year period estimates for areas with 20,000 or more people | n/a | n/a | 2005–2007 | 2006–2008 | 2007–2009 | 2008–2010 | 2009–2011 | 2010–2012 | 2011–2013 | 2012–2014
5-year period estimates for block groups, census tracts, and all other areas | n/a | n/a | n/a | n/a | 2005–2009 | 2006–2010 | 2007–2011 | 2008–2012 | 2009–2013 | 2010–2014

NOTE: See Table 2-5 for major types of governmental and statistical areas for which 1-year, 3-year, and 5-year period estimates are available.



Moreover, the annual ACS products will enable users to construct time series for analyzing change. To the extent permitted by sampling error, these data will make it possible to detect trends in the percentage of people in an area who are employed, live in poverty, or have attained a college degree, and whether the trends for an area mirror or deviate from national trends. Similarly, the data will show changes in the ethnic makeup of an area, housing costs for homeowners and renters, and many other characteristics of interest to data users. Of course, several factors can interrupt a time series for an area of interest, such as a change in geographic boundaries, a change in the wording of the question used to measure a characteristic of interest, or, occasionally, a revision to the county population estimates that are used to control the ACS estimates.

2-B.2
Data Quality

Another major benefit of the ACS over the census long-form sample should be higher quality of the data in terms of the completeness and accuracy of response. Missing and inaccurate responses are components of nonsampling error that can result in bias in survey estimates, as distinct from the variable error due to the use of a sample (discussed in Section 2-C below). Both kinds of error are important: sampling variability can be so large as to render an unbiased estimate of little use for decision making, while even a very precise estimate in terms of sampling error can be misleading if the bias in the estimate is large (see Box 2-3 for brief descriptions of elements of bias and variability).

The assessment of the likely higher quality of the ACS rests primarily on comparisons of estimates from the Census 2000 Supplementary Survey (C2SS) and the 2000 long-form sample. The C2SS was a full-scale test of the ACS questionnaire and data collection procedures. It included about 587,000 responses from a nationwide sample in 1,203 counties plus samples in 36 counties that were ACS test sites.

By comparing the C2SS and the 2000 long-form sample, the Census Bureau was able to evaluate the relative quality to be expected from the ACS in terms of unit (household) weighted response rates, population coverage, item response rates, and quality control processes. See Box 2-4 for indicators of sample size, household response, population coverage, and item response for each year of the ACS.5 The C2SS equaled or outperformed the 2000 long-form sample on all of these attributes, as did the C2SS-like ACS test surveys conducted in 2001–2004 and the full 2005 ACS. Population coverage and unit and item response rates have also been higher in the ACS than in the Current Population Survey Annual Social and Economic Supplement (CPS ASEC), which is the nation's premier household survey of income, participation in government programs, employment, and family relationships.6

5. The indicators can be accessed from the main ACS web site (http://www.census.gov/acs/www) under "Using the Data" or by selecting a subject area for which ACS data are desired, clicking on "survey methodology" and then on "quality measures": http://www.census.gov/acs/www/UseData/sse/index.htm.

6. In comparing the ACS and the CPS ASEC, users should bear in mind the many differences between the two surveys (see Nelson, 2006, and Section 3-F.2).

BOX 2-3

Sources of Sampling and Nonsampling Error in Survey Estimates

Multiple sources of error can affect the estimates from a survey such as the ACS. In this context, error has a statistical meaning; namely, it refers to the difference between an estimate and the (unknown) true population parameter (for example, the true percent poor school-age children). Despite the colloquial meaning of the word, such an error is not necessarily an indication that a mistake has been made.

Survey methodologists generally classify statistical errors into two major categories—variability, or errors that lead to variation in the survey estimates across hypothetical repetitions of the survey process under identical survey conditions; and bias, or the systematic component of errors that results in a difference between the average of the survey estimates across these hypothetical repetitions and the true value of the parameter being estimated. Some sources of error (for example, differences among interviewers) can substantially affect both variability and bias. Much survey research is devoted not only to the measurement of variability and bias, but also to the development of procedures to reduce their effects on survey estimates. (For categorizations of sources of error, see Groves et al., 2004, and Biemer and Lyberg, 2003.)


Variability

  • The estimates from a survey are never precise but vary to a greater or lesser extent from their average over hypothetical repetitions of the survey under identical survey conditions.

  • For most surveys, the major source of imprecision is sampling variance that arises when estimates are based on a sample and not a complete census of the universe. Other things equal, sampling error decreases as the size of the sample increases, and vice versa. (See Box 2-5 for explanations of statistical terms associated with the measurement of sampling error.)

  • Other elements of a survey can increase the imprecision of estimates, including variability introduced by respondents, interviewers, and coders, and by procedures to impute values for missing responses. Errors arising from such other sources are referred to as nonsampling errors. For example, the same respondent may give different answers to a question about income or race when interviewed more than once due to random factors, such as how the respondent interprets the question. For large-scale surveys for estimates of totals, these other sources of variance may contribute much more to imprecision than sampling variance.

Bias

  • The estimates from a survey may differ systematically from the true value for any number of reasons. Nonsampling error sources often give rise to bias.

  • Sources of bias include that the question wording elicits responses that differ from the construct intended by the survey designer; that respondents consistently overestimate or underestimate the true value (for example, the amount of their income last year); that imputation and weighting adjustment procedures may not compensate adequately for nonresponse and noncoverage; and that the weighting adjustment controls used to correct for coverage errors are inaccurate for certain areas and population groups.

Some variability and bias in survey estimates is inevitable (bias and some sources of variability also affect censuses). The challenge for users is, with the help of methodologists, to understand enough of the extent and nature of sampling and nonsampling errors in survey estimates to assess the utility of the estimates for the user's purpose and identify possible strategies for ameliorating the effects of these errors on survey inferences.

In addition to examining basic quality measures, the Census Bureau also compared the distributions of responses to individual items for the C2SS and the long-form sample, which mainly identified consistencies between the two surveys, as well as a few differences. These comparisons were performed for the nation as a whole and for individual counties in the ACS test sites, which were oversampled relative to the other C2SS counties. The finding of consistency between estimates from the C2SS and the long-form sample cannot prove that the C2SS estimates are unbiased. Consistency, however, does offer reassurance that the C2SS—and, by extension, the ACS—are measuring items in the same way.


The highlights of the overall and individual item evaluations of the C2SS compared with the 2000 long-form sample are summarized below; the complete findings are available in seven reports issued by the Census Bureau (U.S. Census Bureau, 2002b, 2004a-f; see also National Research Council, 2004b:Ch. 7; Schneider, 2004).

BOX 2-4

Four Quality Measures Available for the American Community Survey

The Census Bureau currently provides four indicators of nonsampling errors for the nation and states. They can be accessed from the main ACS web site under “Using the Data” (http://www.census.gov/acs/www, middle of the page) or by selecting a subject area for which ACS data are desired, clicking on “survey methodology” and then on “quality measures” (http://www.census.gov/acs/www/UseData/sse/index.htm).


Sample Size


Sample size is critical for estimating the level of sampling error in survey estimates. The ACS web site provides two sample sizes for the year in question: (1) initial sample addresses, or the total number of addresses selected from the MAF to receive a questionnaire, and (2) final interviews, or the total number of questionnaires received by mail, CATI, or CAPI. The second measure is smaller than the first—for example, 2.9 million addresses were initially selected for the 2005 ACS, but the number of final interviews was only 1.9 million. The principal reason is that CAPI is used to follow up only a subsample of addresses that lack a mail or CATI response; in addition, some sampled addresses turn out to be nonexistent or nonresidential, and some households do not respond even after follow-up.


Coverage Error


Coverage error occurs in the ACS as in other household surveys. Undercoverage occurs when the sampling frame does not include all addresses and when not all people in sampled addresses are included on the questionnaire; overcoverage occurs when households or individuals are duplicated or otherwise erroneously included. Net coverage is defined relative to decennial census–based population estimates.

The ACS web site provides net coverage rates for men and women for states and the United States and for six race/ethnicity categories for the United States. These rates are the weighted ACS estimate for the year in question for the relevant demographic group and geographic area before being controlled to population estimates, divided by the corresponding census-based population estimate.

The population and housing unit estimates used to adjust the ACS estimates for coverage errors (see Sections 5-C and 5-D) pertain to only a few characteristics (age, sex, race, ethnicity, and total housing), are only applied for large counties and groups of small counties (estimation areas), and themselves contain errors. To the extent that the controls are flawed and that noncovered or duplicated people differ from correctly covered people, estimates from the ACS may be biased.

Unit Nonresponse

Unit nonresponse relates to the number of final interviews from a survey. To the extent that responding and nonresponding units differ, estimates from a survey may be biased. The ACS web site provides unit response rates and unit nonresponse rates by type (refusal, unable to locate, no one home, temporarily absent, language problem, other, and insufficient data from an interview to be included in the data set). The numerator for unit response rates is the number of mail, CATI, and CAPI responses for the year in question, weighted to account for different initial sampling and CAPI subsampling rates. The denominator is a similarly weighted estimate of the number of cases eligible to be interviewed. The intent in estimating the denominator is to exclude that fraction of the sample of addresses that turn out to be nonexistent, nonresidential, or otherwise ineligible for inclusion in the ACS.

Item Nonresponse

Item nonresponse occurs when interviews are complete enough to include in the estimation but answers are missing (or invalid) for one or more questions. Item nonresponse rates indicate the potential for measurement error due to differences between the values imputed for missing responses and the actual values. The ACS web site provides nonresponse rates for individual tabulated items. The numerator for each rate is the number of allocated responses (imputations that use reported information from other persons or housing units); the denominator is the total number of responses, including valid responses, allocations, and assignments (assignments use other information for the same person to fill in or correct a response).
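
To make the net coverage measure in Box 2-4 concrete, the short Python sketch below computes a coverage rate from hypothetical counts; the figures and the function name are illustrative, not published ACS quality measures.

    def net_coverage_rate(weighted_acs_population, census_based_estimate):
        """Net coverage rate: the weighted ACS population estimate for a group
        and area, before the population controls are applied, divided by the
        corresponding census-based population estimate (in percent)."""
        return 100 * weighted_acs_population / census_based_estimate

    # Hypothetical example: a weighted ACS estimate of 4.75 million people
    # against a census-based estimate of 5.00 million implies 95 percent net
    # coverage, that is, 5 percent net undercoverage.
    print(round(net_coverage_rate(4_750_000, 5_000_000), 1))  # 95.0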

2-B.2.a
High Unit Response Rate in the C2SS

The Census Bureau never expected that the mail response would be as high in the ACS as in the census, nor has it been: it was 56 percent in the C2SS compared with 71 percent in the 2000 long-form sample. To save on costs, the Census Bureau specified that only about one-third of ACS mail and telephone nonrespondents would be interviewed in person. Hence, to obtain a final household response rate that can be compared to the long-form-sample rate as a measure of public cooperation, the subsample of households sent for CAPI (in-person) interviewing in the ACS must be weighted to account for the subsampling before they are added to the mail and CATI (telephone) respondents.
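
The weighting described above can be sketched in a few lines. The counts below are hypothetical, and treating each CAPI case as standing in for all of the mail and CATI nonrespondents it represents is a simplification of the Census Bureau's actual weighting; the sketch is meant only to show how the CAPI subsample is weighted up before a response rate comparable to the long-form-sample rate is computed.

    def weighted_response_rate(mail, cati, capi_interviews, capi_noninterviews,
                               capi_subsampling_fraction):
        """Approximate weighted household response rate (percent).

        CAPI cases are weighted by the inverse of the subsampling fraction so
        that the followed-up subsample represents all mail/CATI nonrespondents.
        Hypothetical simplification: ineligible (nonexistent or nonresidential)
        addresses are assumed to have already been removed.
        """
        capi_weight = 1.0 / capi_subsampling_fraction
        responses = mail + cati + capi_weight * capi_interviews
        eligible = mail + cati + capi_weight * (capi_interviews + capi_noninterviews)
        return 100 * responses / eligible

    # Hypothetical example: 560 mail and 90 CATI responses; one-third of the
    # remaining nonrespondents are sent to CAPI, yielding 110 interviews and
    # 7 noninterviews.
    print(round(weighted_response_rate(560, 90, 110, 7, 1/3), 1))  # 97.9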


Using a weighted response rate, the C2SS compares favorably with the 2000 long-form-sample rate. The 2000 final edited long-form-sample data file included 93 percent of sampled households; the remaining 7 percent were dropped because the information collected for them was too scant (National Research Council, 2004b:Table 7.7). By comparison, the C2SS weighted household response rate was 95 percent. The weighted household response rates in the 2001–2003 ACS test surveys were higher than the C2SS rate, averaging 97 percent. The 2004 rate was lower (93 percent) because of a funding reduction that necessitated dropping telephone and personal follow-up operations for January 2004. The 2005 weighted household response rate (the first year under full implementation) was 97 percent.7 The 2005 ACS mail response rate declined to 51 percent from 56 percent in the C2SS, but the CATI and CAPI operations more than made up the difference.

2-B.2.b
Good Population Coverage

When weighted to account for sampling and unit nonresponse, the ACS estimates of the population, like those from other household surveys, typically fall short of census counts (or census-based population estimates). More people are missed in surveys than in the census, either because the sampling frame of addresses is less complete or because larger numbers of people are not reported by sampled households. People may also be duplicated or included erroneously in surveys as in the census, but surveys more often miss people, resulting in greater net undercoverage of the population.

The process of controlling the survey weights to the population estimates attempts to compensate for coverage errors, but the controls are available for only a few characteristics (see Section A.7.b above), so that achieving high coverage rates to begin with is important. Before applying controls, the C2SS covered 97 percent of the household population; subsequent supplementary surveys covered 94 percent of the population, and the 2005 ACS covered 95 percent of the household population.8 By comparison, the 2004 CPS ASEC covered only 88 percent of the population age 16 and over (Nelson, 2006:Table B).

2-B.2.c
More Complete Item Response in the C2SS

Imputation rates for questionnaire items—that is, the percentage of item responses for households (for housing questions) or household residents (for person questions) for which an answer had to be imputed from another household’s responses because the item was missing—are a commonly used measure of missing data. By this measure, the C2SS significantly outperformed the 2000 long-form sample. The C2SS had lower imputation rates for household members for 26 of 27 housing items and 48 of 54 population items that were included on both questionnaires (Schneider, 2004: Appendix). For example, 19.3 percent of household residents in the 2000 long-form sample were imputed a response for the question on number of weeks they worked last year, compared with only 9.6 percent of household residents in the C2SS who were imputed a response for the question on number of weeks they worked in the past 12 months.
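
The imputation (allocation) rates cited above are simple percentages. A minimal Python sketch with hypothetical counts, following the definition given in Box 2-4, shows the calculation for a single item:

    def item_allocation_rate(allocated, assigned, valid):
        """Item allocation rate (percent): responses imputed from other persons
        or housing units (allocations), divided by all responses, where the
        total includes valid responses, allocations, and assignments."""
        total = valid + allocated + assigned
        return 100 * allocated / total

    # Hypothetical example: 960 valid answers, 96 allocations, and 12
    # assignments give an allocation rate of about 9 percent for the item.
    print(round(item_allocation_rate(96, 12, 960), 1))  # 9.0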

For most items, the 2000 census imputation rates for enumerator returns exceeded those for mailed-back returns, sometimes by large margins. In contrast, for many items, the C2SS imputation rates were lower for interviewer returns than for mailed-back returns, indicating the higher quality of the follow-up effort in the C2SS (National Research Council, 2004b:Table 7.5).

Moreover, there is evidence suggesting that the quality of the ACS data collection improved after 2000. The imputation rates for 20 of 36 housing items and 51 of 57 population items were lower in the 2001–2004 ACS test surveys compared with the C2SS—sometimes substantially lower—and no rate was higher.9 For example, only 18.8 percent of household members age 15 and over had some or all of their income imputed in the 2004 supplementary survey, compared with 23.9 percent in the C2SS and 29.7 percent in the 2000 long-form sample. Item imputation rates remained at low levels in the 2005 ACS.

2-B.2.d
Greater Quality Control in the C2SS and the ACS

For the C2SS the Census Bureau implemented quality assurance procedures that were not included in the 2000 long-form-sample procedures because of cost and timing constraints. These same procedures are being used in the ACS.

An important operation that contributes to quality for the ACS is the telephone follow-up of mailed-back questionnaires that do not meet standards for completeness of coverage of household members or content. Moreover, telephone and in-person follow-up of nonrespondents is conducted by experienced, highly trained interviewers who are assisted by computerized questionnaires with built-in edit checks and skip patterns. The 2000 census lacked telephone follow-up for mailed-back questionnaires that were missing several items, and the enumerators who conducted the in-person follow-up of households that did not mail back their forms used paper-and-pencil questionnaires. Moreover, the temporary, lightly trained enumerators focused on obtaining the answers to the basic questions and not on the additional long-form-sample questions.

Another quality assurance procedure in the ACS is not to allow proxies for household respondents, such as neighbors or landlords. In 2000, 6.2 percent of long forms were obtained by proxy, and three-fifths of these had to be dropped from the final tabulation file because the data were so incomplete (National Research Council, 2004b:291).

2-B.2.e
Consistency of Responses for Many Items

Comparisons of the C2SS and the 2000 long-form-sample estimates for individual items cannot establish which is closer to a measure of truth—additional research would be required to examine this issue, using such techniques as matches with consistently defined administrative records and reinterviews of households. Yet such comparisons can identify the extent to which items are broadly consistent between the two surveys, thereby giving users confidence in the ACS as a replacement for the long-form sample. Complete consistency should not be expected even with the same questions because of differences in reference periods and residence rules, question formatting, editing, interview mode, and other survey procedures; yet the finding of major differences would be cause for disquiet and would suggest the need for further research.

The comparisons of the C2SS and the 2000 long-form sample for household residents found that most items were broadly consistent between the two sources at the national level. Individual comparisons performed for 18 of the 36 counties in the ACS test sites also found a high degree of consistency for most items. (It was not possible to perform comparisons for group quarters residents because they were not included in the C2SS.) “Driving to work alone” was an instance of a category in which national estimates were similar between the C2SS and the long-form sample, but estimates differed substantially for some of the counties that were examined individually. For this category, the C2SS estimates were appreciably higher in three counties and appreciably lower in three counties than the corresponding long-form-sample estimates (U.S. Census Bureau, 2004b:23).

National-level differences between the two surveys that were statistically and substantively significant occurred for race, ancestry, vacancy status, tenure (owner/renter), number of rooms in the housing unit, disability status for people ages 5–64, employment status, and median income. Users should always exercise care when comparing estimates from any of the ACS data sets (2005, 2001–2004 test surveys, C2SS) with the 2000 long-form sample because of differences in the ACS and long-form-sample design and operations. They should be particularly careful when making comparisons for the items discussed below.


Race and Ethnicity The C2SS estimated a higher percentage of "white alone" and a lower percentage of "some other race alone" compared with the 2000 long-form sample (77.5 versus 75.3 percent for "white alone" and 3.9 versus 5.5 percent for "some other race alone"). These results appear due in part to differences in wording and format of the race and ethnicity questions, which the Census Bureau is investigating (U.S. Census Bureau, 2004a:32). Another contributing factor is that census enumerators, who were not as well trained as C2SS interviewers, often failed to ask the race question appropriately (Martin and Gerber, 2003:42-43).


Ancestry The C2SS estimated higher percentages in many ancestry groups compared with the 2000 long-form sample, particularly for Germans (17.0 versus 15.4 percent), English (10.3 versus 8.8 percent), and Irish (12.1 versus 11.0 percent). The reason is that the Census Bureau does not impute an ancestry when none is reported, and the C2SS had more complete reporting of this item than the long-form sample: 88 percent of people reported at least one ancestry in the C2SS compared with only 81 percent in the 2000 long-form sample (U.S. Census Bureau, 2004e:45-47).


Vacancy Status The C2SS estimated a higher percentage of vacant housing units (9.7 percent) than the 2000 long-form sample (9.0 percent). The higher estimated vacancy rate in the C2SS applied not only to the nation as a whole, but also to most of the counties examined individually in spite of wide variations among them in vacancy rates. This result is contrary to the expectation that the ACS 2-month residence rule and 3-month data collection period implemented in the C2SS would lead to a lower estimated percentage of vacant housing. (For example, a vacant unit to which a questionnaire is mailed in the first month of collection could well be occupied by the time it is visited by an interviewer in month 3.) Why the C2SS estimated a higher vacancy rate than the 2000 long-form sample is not clear; it may be that census enumerators were more apt to classify a vacant unit as an occupied unit than the C2SS interviewers (U.S. Census Bureau, 2004a:vi-vii, 4-41). Continued assessment of vacancy status estimates in comparison with the American Housing Survey and other data sources will be required to evaluate the accuracy of this item in the ACS.


Tenure (Owner/Renter) Status The C2SS estimated a lower percentage of owner-occupied housing units (65.4 percent) than the 2000 long-form sample (66.2 percent), which may be due to a high rate of imputation for this item in the census, so that the C2SS estimates may be more accurate (U.S. Census Bureau, 2004a:43, 45).


Rooms in Housing Unit The C2SS produced different estimates of numbers of rooms in the housing unit compared with the 2000 long-form sample—specifically, the C2SS estimated fewer small units of 1–2 rooms (5.5 versus 7.0 percent), more mid-sized units of 4–5 rooms (39.7 versus 36.9 percent), and fewer large units of 7 or more rooms (26.4 versus 27.9 percent). The extent to which these differences may be due to differences in question wording, format, and sequencing, differences in data capture and editing, inconsistent definitions of what constitutes a separate room by interviewers and respondents, and other factors is not known (U.S. Census Bureau, 2004f:36-41).


Disability Status The C2SS estimated a substantially lower percentage of disabled people ages 21–64 than the 2000 long-form sample (13.8 versus 19.1 percent), as well as a lower percentage of disabled people ages 5–20 (6.8 versus 8.0 percent). The problem seems to involve the nonresponse follow-up phase of the 2000 census. It appears that people who were visited by census enumerators misunderstood the questions about employment disability and difficulty going outside the home alone. An indicator that supports this hypothesis is that 75 percent of people in the census who reported an employment difficulty to an interviewer were actually employed, compared with only 21 percent in the C2SS (U.S. Census Bureau, 2004e:33-36; see also Israel, 2006).


Employment Status The C2SS estimated a higher percentage of employed people than the 2000 long-form sample (62.3 versus 61.4 percent). This difference may be due to several factors, including different reference periods. The C2SS and the 2000 long-form-sample estimates of unemployment were comparable (3.5 versus 3.4 percent) but significantly lower than the Current Population Survey estimate of 4.0 percent civilian unemployment for 2000 (U.S. Census Bureau, 2004b:18-20). Over the period 2000–2003, the CPS persistently estimated not only substantially higher annual unemployment rates than the ACS test surveys, but also somewhat higher employment rates (Palumbo, 2005). The CPS is the official source of unemployment figures, and its estimates are presumably more accurate because they are based on responses to a more detailed set of questions obtained by trained interviewers using CATI and CAPI interviewing. In addition, the CPS uses a fixed reference period for reporting employment status (see Bureau of Labor Statistics and U.S. Census Bureau, 2002:Ch. 16).


Median Income The C2SS estimated lower median household and family income than the 2000 long-form sample—$40,137 versus $41,994 for household median income and $48,014 versus $50,046 for family median income. Estimates of families and people in poverty were virtually the same, although the C2SS estimated higher percentages of poor children and poor single-woman families with children than the 2000 long-form sample (16.8 versus 16.1 percent poor children, 35.4 versus 34.3 percent poor single-woman families with children) (U.S. Census Bureau, 2004b:33-38).

The reasons for these differences are not known, although they may be due to differences in completeness of reporting and reference periods. The income reference periods for the C2SS spanned January–December 1999 (for people interviewed in January 2000) to December 1999–November 2000 (for people interviewed in December 2000), while the income reference period for the 2000 long-form sample was a uniform January–December 1999. It is possible that the C2SS captured the onset of recession in a way that the 2000 long-form sample could not. The inflation adjustment procedure for the C2SS could also be a factor. For the Census Bureau's comparative analysis, the C2SS data were backward adjusted to reflect the average inflation experience for 1999, not 2000. Analysis by Turek, Denmead, and James (2005) suggests that inflation adjustments to the ACS may not accurately reflect economic growth (or decline) over a year.

2-C
ACS CHALLENGES

The ACS will benefit users by providing more timely and frequent data for small areas that are likely to be based on responses of higher quality than the census long-form sample. However, there are challenges to using the data that stem principally from the continuous design of the ACS. Two major challenges are (1) the period nature of the ACS 1-year, 3-year, and 5-year estimates in contrast to the point-in-time nature of the long-form-sample estimates (Section C.1) and (2) the greater sampling error of the ACS estimates compared with the long-form-sample estimates (Section C.2).

2-C.1
Period Estimates

All estimates from the ACS will be period estimates—that is, the estimates will represent averages of months of data—12 months for 1-year estimates, 36 months for 3-year estimates, and 60 months for 5-year estimates. The issue is how to interpret the period estimates from the ACS, which are not the same as the (approximately) point-in-time estimates for Census Day (April 1) from the census long-form sample.

Consider first the 1-year period estimates, which include responses in all 12 months of the calendar year for sampled housing units that existed on the MAF as of January of the year (see Section 4-A). An independent estimate of total housing for July 1 is used to control the estimated number of housing units in an estimation area, but the reported characteristics of the units may vary throughout the year. Consequently, for housing characteristics (utility costs, value, rent, number of rooms, and others), the 1-year period estimates are 12-month averages, which may often differ from long-form-type point-in-time estimates.

Similarly, even though independently derived census-based population estimates for July 1 for major age, sex, race, and ethnicity groups are used to control the 1-year period estimates of people, such characteristics as education, income, and others may vary during the year. The 1-year period estimates are consequently 12-month averages of such population characteristics as education, income, veterans status, and others. A 1-year period estimate for an area will correspond to a point-in-time estimate for July 1 only if the population and its characteristics are stable during the year, which will not be true of areas with distinct seasonal populations, such as summer and winter residents who differ appreciably not only in numbers, but also in socioeconomic characteristics (see Section 3-C.3).
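
A small numerical sketch makes the distinction concrete. The monthly values below are hypothetical, for an area whose characteristic of interest peaks sharply in summer; the 1-year period estimate (a 12-month average) differs substantially from a July point-in-time figure.

    # Hypothetical monthly values of a characteristic (for example, occupied
    # seasonal housing units) in an area with a strong summer peak.
    monthly_values = [200, 200, 250, 300, 500, 900,
                      1000, 950, 500, 300, 220, 200]

    one_year_period_estimate = sum(monthly_values) / 12   # 12-month average
    july_point_in_time = monthly_values[6]                # July value only

    print(round(one_year_period_estimate))   # 460
    print(july_point_in_time)                # 1000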

The 3-year and 5-year period estimates have similar attributes to the 1-year estimates. They do not represent the characteristics of the population for either the end year or the middle year—interpretations that may appeal to users but are misleading. Rather, they are period estimates, or averages, over 36 or 60 months. Such period estimates have lower sampling error than other types of estimates that could be constructed from the data, such as a middle-year estimate (see Chapter 6). They place more of a burden on the user, however, in interpreting them individually and in interpreting trends in them over time (see Section 3-C.1.b).

To become comfortable with the 3-year and 5-year period estimates, users need to think of them as pertaining to a period of time, not to a specific year or date. Thus, the poverty estimate for a small town from averaging the ACS data for the 5-year period from 2010 to 2014 could be termed “the average poverty estimate for our town for the first half of the decade,” while the estimate from averaging the ACS data for the 5-year period from 2015 to 2019 could be termed “the average poverty estimate for our town for the second half of the decade.” Similarly, poverty rates based on 3 years of ACS data could be assigned such terms as “the average poverty estimate for our city for the early [or middle, or later] part of the decade.” This kind of description will not work when a 3-year or a 5-year period estimate does not neatly correspond to a readily identifiable portion of a decade (for example, an estimate for 2012–2015). Yet the general point remains, which is the need for users to develop descriptive phrases and other ways to reinforce the idea that all ACS estimates pertain to a period of time.

Once the ACS has been fully operational for a sufficient number of years, many large areas will have estimates available each summer and fall for more than one period, such as 1-year, 3-year, and 5-year estimates for areas with 65,000 or more people (refer back to Table 2-6). Unless very little or no change has occurred in the area's population or characteristics, these 1-year, 3-year, and 5-year estimates are likely to differ. Users who are only interested in the estimates for an area as a whole, and not for any smaller components, can decide which set of period estimates best suits the goals of their analysis, considering such factors as the likely variability in the characteristic over time and the level of sampling error that is tolerable for their application (see Section 2-C.2 below). Users who want to look at larger areas and also their components—for example, a city and its planning areas made up of groups of census tracts—will need to use the same period estimates throughout to ensure comparability. Most likely, users will have to work with the 5-year estimates, which will be the only estimates available for the smallest areas. Alternatively, they will need to develop
methods to relate estimates for different time periods, such as using 1-year period estimates for a state, large county, or PUMA to update 5-year period estimates for small governmental jurisdictions (see Section 3-B.3).

In using the ACS data to study trends and changes over time, users will need to keep in mind the implications of changes in an area’s geographic boundaries and population size for their analysis. With regard to population size, a governmental unit may gain or lose population so that it crosses a population size threshold for the publication of estimates. For example, a small city may grow from 60,000 to 70,000 over a 5-year period. Beginning in the year when the city achieves 65,000, it will have 1-year as well as 3-year and 5-year period estimates produced. Significant population decline, however, if sustained, could cause an area to be dropped from the 1-year or even the 3-year period estimates series. Population changes may also increase or decrease the initial sampling rate for an area.

With regard to boundaries, the Census Bureau will continue to update the geographic boundaries of most types of governmental units every year—for example, to reflect an annexation or the combination or splitting of units. It will update school district boundaries every 2 years and update the boundaries of statistical areas, including metropolitan areas, urbanized areas, PUMAs, census tracts, and block groups, every decade in conjunction with the census. For ACS estimates for such governmental units as counties and cities, the Census Bureau will use the geographic boundaries as of January 1 of the most recent year of data that figure into the particular set of estimates. Consider a large city that annexed territory in late 2008 and for which the user is working with 1-year, 3-year, and 5-year period estimates pertaining to 2010, 2008–2010, and 2006–2010, respectively. All these estimates will include data for the current enlarged city boundaries. The Census Bureau will not, however, revise estimates that precede the most recent 5 years to reflect boundary changes.
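
Stated as a rule, the boundary vintage for any ACS period estimate is January 1 of the last calendar year of data in the period. The short Python sketch below (the function name is ours, for illustration only) applies that rule to the annexation example above.

    def boundary_vintage(last_data_year):
        """Boundaries used for a 1-, 3-, or 5-year period estimate: those in
        effect on January 1 of the most recent year of data in the period.
        Illustrative restatement of the rule described in the text."""
        return f"January 1, {last_data_year}"

    # The 2010, 2008-2010, and 2006-2010 estimates for the enlarged city all
    # use boundaries as of January 1, 2010, so all include the territory
    # annexed in late 2008.
    print(boundary_vintage(2010))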

2-C.2
Sampling Error

The use of a sample rather than a complete enumeration introduces sampling error that affects the precision of the estimates from a survey. Such error is related to the variability of the characteristic in the population, the size of the sample, and the sample design. For a given estimate and sample design, the larger the sample size, the smaller is the sampling error.

2-C.2.a
Design Factors

Overall, the ACS 5-year period estimates for an area will exhibit greater sampling error than the 2000 census long-form-sample estimates for the same area. (The sampling errors for 3-year and 1-year period estimates will be greater yet.) Two reasons are that the ACS cumulative 5-year initial sample size is only about three-fourths that of the long-form sample and that the ACS then uses subsampling for the CAPI interviews. For planning purposes, the Census Bureau estimated that the sampling errors (known as the standard errors) of the ACS 5-year period estimates would be about 20 percent greater than the errors of the long-form-sample estimates, but recent work (Starsinic, 2005) suggests that the ACS sampling errors will exceed the long-form-sample errors by about 50 percent.

The ACS design, like the long-form-sample design, oversamples very small governmental jurisdictions (refer back to Table 2-3a). This oversampling reduces the sampling errors of estimates for those units, but it increases the errors for larger areas that are undersampled, as well as somewhat increasing the errors for larger units that include some oversampled and some undersampled areas relative to a design with the same sampling rate for all areas.

The subsampling used for CAPI interviews in the ACS increases sampling error for two reasons: first, the subsampling reduces the final sample size; second, the additional weighting that is needed to compensate for the subsampling increases the sampling error relative to a design without subsampling. The reduction in sample size would be particularly severe for areas in which households are less likely to mail back their questionnaires, but, to ameliorate this effect, the Census Bureau follows up somewhat higher proportions of nonresponding households in census tracts with lower expected mail and telephone response rates than in other areas (refer back to Table 2-3b).

BOX 2-5

Brief Descriptions of Statistical Terms Used in This Report

  • Standard error of an estimate: A commonly used statistic that expresses the imprecision in an estimate that is due to sampling. This imprecision is known as sampling error. It is to be distinguished from nonsampling errors from such sources as misreporting and nonresponse, which are often systematic in nature and result in biased survey estimates (see Box 2-3).

  • Coefficient of variation (CV) or relative standard error: The standard error expressed as a percentage of the estimate. CVs of 10–12 percent or less are often accepted as a reasonable standard of precision for an estimate.

  • 90 percent margin of error (MOE): Plus or minus 1.65 times the standard error of an estimate.

  • 90 percent confidence interval (CI): The 90 percent MOE expressed as a range around the estimate.

Example Calculations


Consider the example of MEDIUM CITY, 5-Year Period ACS Estimate (see Tables 2-7a, 2-7b, and 2-7c). Assume that MEDIUM CITY has a population of 100,000 with an estimated 20,000 school-age children, of whom 3,000 (15 percent) are estimated to be poor. For a 15.0 percent poverty rate for school-age children with a 1.28 percentage point standard error:

  • CV = 8.5 percent (1.28/15.0)

  • 90 percent MOE = ±2.1 percentage points (1.28 × 1.65)

  • 90 percent CI = 12.9–17.1 percent poor children

Interpretation of Example

What does it mean to say that the 90 percent MOE for this estimate is plus or minus 2 percentage points, which translates into a CI of 13 to 17 percent poor children? This interval provides a measure of the uncertainty in the estimate due to taking a sample rather than measuring the city's entire population. A different sample would give a slightly different estimate—perhaps 14 percent or 16 percent poor school-age children. If one could look at all the possible samples that could be selected for the city using the ACS sample selection method and construct a 90 percent CI from each sample, one would expect 90 percent of these intervals to include the true percentage of poor school-age children in the city.

Another way to look at this is to consider the 90 percent CI for the percent poor school-age children for all U.S. cities. One would expect 90 percent of the city CIs to include the true percent for their respective cities. However, if the city samples are selected independently, one would expect 10 percent of the cities to have samples for which the percentage of poor school-age children is far enough away from the true value that their 90 percent CIs do not include the true value.

NOTE: The ACS data products show 90 percent MOEs. This practice is not standard in survey research. The standard 95 percent MOE (1.96 times the standard error of an estimate) results in wider CIs, which are more likely to cover the true value.
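
The calculations in Box 2-5 can be reproduced in a few lines of Python; the 15.0 percent estimate and 1.28 percentage point standard error are the illustrative MEDIUM CITY figures used in the box.

    estimate = 15.0         # percent poor school-age children (Box 2-5 example)
    standard_error = 1.28   # percentage points (Box 2-5 example)

    cv = 100 * standard_error / estimate            # coefficient of variation
    moe_90 = 1.65 * standard_error                  # 90 percent margin of error
    ci_90 = (estimate - moe_90, estimate + moe_90)  # 90 percent confidence interval

    print(round(cv, 1))                              # 8.5 (percent)
    print(round(moe_90, 1))                          # 2.1 (percentage points)
    print(round(ci_90[0], 1), round(ci_90[1], 1))    # 12.9 17.1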


2-C.2.b
Illustrative, Approximate Sampling Error Estimates for the ACS

The sampling error in an estimate may be measured by its standard error (see Box 2-5 for definitions of statistical terms). In the case of a percentage estimate, the estimated standard error depends on the size of the percentage and on the sample size for the relevant population that is used as the base for estimating the percentage. Sample size is affected not only by the original design, but also by nonresponse and, in the case of the ACS, by the extent of CAPI subsampling that is done for personal visit follow-up to contain costs.


Tables 2-7a, 2-7b, and 2-7c provide rough, approximate estimates of sampling error for an estimated 15 percent poor school-age children from the ACS (1-year, 3-year, and 5-year period estimates) and the 2000 long-form sample for areas ranging in population from 500 to 2.5 million people. The calculations assume that school-age children are 20 percent of the total population and that areas with 3,000 or fewer people are oversampled. The calculations take account—for both the ACS and the long-form sample—of the added sampling error from household nonresponse but not the added error from item nonresponse.

Specifically, Table 2-7a shows relative standard errors—that is, the standard error as a percentage of the estimate, also called the coefficient of variation (see Box 2-5). Table 2-7b shows approximate 90 percent margins of error (MOEs) plus or minus the estimate of 15 percent poor school-age children for each size area (90 percent MOEs are 1.65 times the corresponding standard error). Finally, Table 2-7c translates the MOEs into 90 percent confidence intervals surrounding the 15 percent school-age poverty estimates.

The tables and text use 90 percent MOEs and confidence levels to follow the long-standing practice of Census Bureau publications; however, this practice is not standard in statistical work. It gives smaller MOEs and confidence intervals than is the case when the 95 percent standard is used: with the 95 percent standard, the MOEs and confidence intervals would be about 20 percent larger.
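
The roughly 20 percent difference comes directly from the ratio of the two multipliers, as a short check shows:

    # Ratio of the 95 percent MOE multiplier (1.96) to the 90 percent
    # multiplier (1.65) used in the tables.
    moe_ratio = 1.96 / 1.65
    print(round(100 * (moe_ratio - 1)))   # about 19 percent larger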

The panel developed the sampling error estimates in the tables by starting with a generalized variance estimation function provided by the Census Bureau for the 2000 long-form sample; we then computed the sampling error estimates for the ACS as multiples of the long-form-sample estimates (see notes at the end of Table 2-7c). The multiplication factors are derived from Census Bureau research with the ACS test sites, the C2SS, and the 2001–2004 ACS test surveys.
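
The construction just described can be sketched for a single row of the tables. The long-form formula and the 1.5 design factor are taken from the notes to Table 2-7a, and the 1.51, 1.29, and 2.24 multipliers are those cited there; the example reproduces the row for an area of 500,000 people (100,000 school-age children, 15 percent of them poor).

    from math import sqrt

    p = 15.0        # percent poor school-age children
    b = 100_000     # school-age children in an area of 500,000 people
    F = 1.5         # long-form design factor for poverty, areas of 5,000+ people

    se_longform = F * sqrt((5 / b) * p * (100 - p))   # 2000 long-form-sample SE
    se_5yr = 1.51 * se_longform                       # ACS 5-year period estimate
    se_3yr = 1.29 * se_5yr                            # ACS 3-year period estimate
    se_1yr = 2.24 * se_5yr                            # ACS 1-year period estimate

    for label, se in [("long form", se_longform), ("5-year", se_5yr),
                      ("3-year", se_3yr), ("1-year", se_1yr)]:
        print(label, round(100 * se / p, 1), round(1.65 * se, 1))
    # Prints CVs of 2.5, 3.8, 4.9, and 8.5 percent and 90 percent MOEs of
    # 0.6, 0.9, 1.2, and 2.1 percentage points, matching the 500,000-person
    # row of Tables 2-7a and 2-7b.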

For the 2005 ACS, the Census Bureau directly estimated the sampling errors for specific estimates, including not only school-age poverty, but also other characteristics, using a repeated replication method (U.S. Census Bureau, 2006:Ch.12). The 2005 ACS data were only recently released, however, and the panel was not able to analyze their sampling errors; moreover, these estimates pertain only to areas with 65,000 or more people. Nevertheless, an unsystematic examination of the sampling errors for selected 2005 ACS poverty estimates suggests that they are similar to those shown in Tables 2-7a, 2-7b, and 2-7c.

2-C.2.c
Assessment of Sampling Error from Illustrative Estimates

Looking at Table 2-7a for the ACS 5-year period estimates, the relative standard errors, or coefficients of variation, for all but the smallest governmental units are half again as large (51 percent) as the corresponding relative standard errors for the 2000 long-form sample. For the ACS 3-year period estimates, the relative standard errors are, in turn, almost 30 percent larger than those for the ACS 5-year estimates and 95 percent larger than those for the long-form-sample estimates. For the ACS 1-year period estimates, the relative standard errors are more than 2 times larger than those for the ACS 5-year period estimates and more than 3 times larger than those for the long-form-sample estimates.

To illustrate, consider first the best case shown in Tables 2-7a, 2-7b, and 2-7c, which is the long-form-sample estimate of 15 percent poor school-age children for an area with 2.5 million people. For this estimate, the relative standard error is only 1.1 percent, the 90 percent MOE is only ±0.3 percentage points, and the 90 percent confidence interval is quite narrow—14.7 to 15.3 percent. In other words, the estimate is very precise and provides useful information for a variety of applications, such as fund allocation and program planning. For the same estimate for the same size area from ACS data accumulated over 5 years, the relative standard error is only somewhat larger at 1.7 percent, and the ACS data have the advantage of being more up to date.

At the other extreme, the worst case is for estimates of 15 percent poor school-age children for areas with 500 people. These areas are oversampled in both the ACS and the long-form sample, but the sample sizes are so small that the estimates are very imprecise. The 90 percent confidence interval for the estimate of 15 percent poor school-age children from the long-form sample ranges from 5.8 to 24.2 percent poor (90 percent MOE of ±9.2 percentage points), while that from the ACS 5-year period estimates ranges from 3.9 to 26.1 percent poor (90 percent MOE of ±11.1 percentage points). Intervals this wide are not helpful to users, and the range would be wider yet for areas with 500 people that are not oversampled—for example, a township in one of the 38 states for which the Census Bureau does not recognize townships as functioning governments for purposes of oversampling (refer back to Table 2-3), or a block group in a large area.

What constitutes an acceptable level of precision for a survey estimate depends on the uses to be made of the estimate. A commonly used standard for many uses is that a sample estimate should have a relative standard error, or coefficient of variation, of 10 percent or less—sometimes increased to 12 percent or less for a characteristic like poverty, which is clustered within a household or family. This standard does not apply in some instances: specifically, for estimates of proportions that are less than 5 percent


TABLE 2-7a Illustrative, Approximate Relative Standard Errors (Coefficients of Variation, or CVs) for an Estimate of 15 Percent Poor School-Age Children from the ACS and the 2000 Census Long-Form Sample, by Population Size of Area

Population      Total Children   Poor Children    ACS 1-Year Period     ACS 3-Year Period     ACS 5-Year Period     2000 Long-Form Sample
Size of Area    Ages 5–17        Ages 5–17        Estimate:             Estimate:             Estimate:             Estimate:
(1)             (20% of pop.)    (15% of 5–17)    CV(%) (Sample Cases)  CV(%) (Sample Cases)  CV(%) (Sample Cases)  CV(%) (Sample Cases)
                (2)              (3)              (4a)   (4b)           (5a)   (5b)           (6a)   (6b)           (7a)   (7b)

2,500,000       500,000          75,000            3.8   (7,900)         2.2   (23,700)        1.7   (39,550)        1.1   (77,500)
1,000,000       200,000          30,000            6.0   (3,150)         3.5   (9,500)         2.7   (15,800)        1.8   (31,000)
500,000         100,000          15,000            8.5   (1,600)         4.9   (4,750)         3.8   (7,900)         2.5   (15,550)
250,000         50,000           7,500            12.1   (800)           7.0   (2,350)         5.4   (3,950)         3.6   (7,750)
100,000         20,000           3,000            19.1   (300)          11.0   (950)           8.5   (1,600)         5.6   (3,100)
65,000          13,000           1,950            23.7   (200)          13.6   (600)          10.6   (1,050)         7.0   (2,000)
50,000          10,000           1,500            27.0   (150)          15.6   (450)          12.1   (800)           8.0   (1,550)
25,000          5,000            750              38.2   (80)           22.0   (250)          17.1   (400)          11.3   (800)
20,000          4,000            600              42.7   (60)           24.6   (200)          19.1   (300)          12.6   (600)
10,000          2,000            300              60.4   (30)           34.8   (90)           27.0   (150)          17.9   (300)
5,000           1,000            150              85.4   (20)           49.2   (50)           38.1   (100)          25.2   (150)
3,000           600              90               95.6   (10)           55.0   (40)           42.7   (80)           28.3   (150)
1,500           300              45               72.8   (10)           41.9   (40)           32.5   (70)           21.5   (150)
500             100              15              100.2   (10)           57.7   (20)           44.7   (30)           37.3   (50)

NOTES: The coefficient of variation (CV) is the standard error (SE) of an estimate expressed as a percentage of the estimate (see Box 2-5). CVs of roughly 10–12 percent or less indicate often acceptable levels of precision. ACS 1-year period estimates (columns 4a and 4b) are not published for areas with fewer than 65,000 people, and ACS 3-year period estimates (columns 5a and 5b) are not published for areas with fewer than 20,000 people; entries for those areas are shown for illustration only.

Column 1: Assumed population size of an area.

Column 2: Assumed number of school-age children (ages 5–17); assumed to be 20 percent of column 1.

Column 3: Assumed number of poor school-age children; assumed to be 15 percent of column 2.

Column 4a: ACS 1-year period estimate CV; based on SE estimated as 2.24 times ACS 5-year period estimate SE (see column 6a).


Column 4b: Approximate number of completed sample cases of school-age children (column 2) for 1 year of ACS; estimated as 20 percent of the ACS 5-year number of completed sample cases in column 6b; rounded to nearest 50 (nearest 10 when fewer than 100 cases).

Column 5a: ACS 3-year period estimate CV; based on SE estimated as 1.29 times ACS 5-year period estimate SE (see column 6a).

Column 5b: Approximate number of completed sample cases of school-age children (column 2) for 3 years of ACS; estimated as 60 percent of the ACS 5-year number of completed sample cases in column 6b; rounded to nearest 50 (nearest 10 when fewer than 100 cases).

Column 6a: ACS 5-year period estimate CV; for areas with 1,500 or more people (column 1), based on SE estimated as 1.51 times 2000 long-form-sample SE (from Starsinic, 2005); for areas with 500 people (column 1), based on SE estimated as 1.2 times 2000 long-form sample SE (see column 7a). The factor of 1.51 accounts for the smaller initial 5-year ACS sampling rate compared with the 2000 long-form-sample rate, as well as CAPI subsampling and nonresponse in the ACS. The factor of 1.2 takes into account that the ACS initial sampling rate is the same as the long-form sampling rate for areas of this small size (see Table 2-3, Part A).

Column 6b: Approximate number of completed sample cases of school-age children (column 2) for 5 years of ACS; estimated as 0.51 and 0.73 times the long-form number of completed sample cases (column 7b) for areas with 1,500 or more people and 500 people, respectively, times 0.97 to allow for nonresponse in the ACS; rounded to nearest 50 (nearest 10 when fewer than 100 cases). The 0.51 and 0.73 factors are based on the ratio of ACS 5-year period cumulative rates of completed sample cases to the 2000 long-form-sample rate (see Table 2-3, Part C). These factors assume a 60 percent mail and CATI response rate from the initial sample as in the 2005 ACS.

Column 7a: 2000 long-form sample CV; based on SE estimated according to the formula in U.S. Census Bureau (2005:8-23), which is SE(p) = F × √[(5/b) × p × (100 - p)], where b is the population base of the estimated percentage, p, and F is a design factor. The base, b, is the number of school-age children in column 2; p is 15 percent poor school-age children; and F varies by the characteristic estimated (poverty) and the assumed long-form-sample sizes for different size areas (from U.S. Census Bureau, 2005:Table C):

• 1.5 design factor for areas of 5,000 or more people, with assumed sample sizes of about 15 percent, instead of 16.7 percent, of school-age children (allowing for unit nonresponse);

• 1.3 design factor for oversampled areas of 3,000 people, with assumed sample sizes of about 20 percent instead of 25 percent of school-age children; and

• 0.7 design factor for oversampled areas of 1,500 people or fewer, with assumed sample sizes of about 45 percent instead of 50 percent of school-age children.

SEs for areas of 3,000 or fewer people that are not oversampled (including census tracts in larger governmental units and townships not in one of the 12 states in which they are recognized as functioning governments for purposes of oversampling—see Table 2-3, Part A) will be larger than those calculated.

Column 7b: Approximate number of completed sample cases of school-age children (column 2) for 2000 long-form sample; estimated using 2000 long-form sampling rates from Table 2-3 times 0.93, which is the percentage of usable cases of the total sample in 2000. These estimates do not enter into the SE and CV calculations, which are based on design factors estimated for the actual 2000 long-form sample.


TABLE 2-7b Illustrative, Approximate 90 Percent Margins of Error (MOEs), Plus or Minus an Estimate of 15 Percent Poor School-Age Children from the ACS and the 2000 Census Long-Form Sample, by Population Size of Area

Population      Total Children   Poor Children    ACS 1-Year Period   ACS 3-Year Period   ACS 5-Year Period   2000 Long-Form Sample
Size of Area    Ages 5–17        Ages 5–17        Estimate:           Estimate:           Estimate:           Estimate:
                (20% of pop.)    (15% of 5–17)    90% MOE             90% MOE             90% MOE             90% MOE

2,500,000       500,000          75,000           ±0.9                ±0.5                ±0.4                ±0.3
1,000,000       200,000          30,000           ±1.5                ±0.9                ±0.7                ±0.4
500,000         100,000          15,000           ±2.1                ±1.2                ±0.9                ±0.6
250,000         50,000           7,500            ±3.0                ±1.7                ±1.3                ±0.9
100,000         20,000           3,000            ±4.7                ±2.7                ±2.1                ±1.4
65,000          13,000           1,950            ±5.9                ±3.4                ±2.6                ±1.7
50,000          10,000           1,500            ±6.7                ±3.8                ±3.0                ±2.0
25,000          5,000            750              ±9.5                ±5.4                ±4.2                ±2.8
20,000          4,000            600              ±10.6               ±6.1                ±4.7                ±3.1
10,000          2,000            300              ±14.9               ±8.6                ±6.7                ±4.4
5,000           1,000            150              (±21.1)             ±12.2               ±9.4                ±6.2
3,000           600              90               (±23.6)             ±13.6               ±10.6               ±7.0
1,500           300              45               (±18.0)             ±10.4               ±8.0                ±5.3
500             100              15               (±24.8)             ±14.3               ±11.1               ±9.2

NOTES: The 90 percent margin of error (MOE) is plus or minus (±) the standard error of an estimate times 1.65 (see Table 2-7a notes). The MOEs in parentheses are inexact: in these cases, the subtraction of the MOE from the 15 percent estimate yields a negative value, which is an impossible result. Although the standard procedure for deriving the MOE is applied throughout the table, the underlying assumption of that procedure—that the sampling distribution of the estimate is approximately the normal distribution—is not applicable in these cases.


TABLE 2-7c Illustrative, Approximate 90 Percent Confidence Intervals (CIs) Around an Estimate of 15 Percent Poor School-Age Children from the ACS and the 2000 Census Long-Form Sample, by Population Size of Area

Population      Children Ages 5–17             ACS 1-Year      ACS 3-Year      ACS 5-Year      2000 Long-Form
Size of         Total          Poor            Period          Period          Period          Sample
Area            (20% of        (15% of         Estimate        Estimate        Estimate        Estimate
                total pop.)    ages 5–17)      90% CI          90% CI          90% CI          90% CI

 2,500,000        500,000        75,000      14.1–15.9%      14.5–15.5%      14.6–15.4%      14.7–15.3%
 1,000,000        200,000        30,000      13.5–16.5       14.1–15.9       14.3–15.7       14.6–15.4
   500,000        100,000        15,000      12.9–17.1       13.8–16.2       14.1–15.9       14.4–15.6
   250,000         50,000         7,500      12.0–18.0       13.3–16.7       13.7–16.3       14.1–15.9
   100,000         20,000         3,000      10.3–19.7       12.3–17.7       12.9–17.1       13.6–16.4
    65,000         13,000         1,950       9.1–20.9       11.6–18.4       12.4–17.6       13.3–16.7
    50,000         10,000         1,500       8.3–21.7       11.2–18.8       12.0–18.0       13.0–17.0
    25,000          5,000           750       5.5–24.5        9.6–20.4       10.8–19.2       12.2–17.8
    20,000          4,000           600       4.4–25.6        8.9–21.1       10.3–19.7       11.9–18.1
    10,000          2,000           300       0.1–29.9        6.4–23.6        8.3–21.7       10.6–19.4
     5,000          1,000           150      (0.0–36.1)       2.8–27.2        5.6–24.4        8.8–21.2
     3,000            600            90      (0.0–38.6)       1.4–28.6        4.4–25.6        8.0–22.0
     1,500            300            45      (0.0–33.0)       4.6–25.4        7.0–23.0        9.7–20.3
       500            100            15      (0.0–39.8)       0.7–29.3        3.9–26.1        5.8–24.2

NOTES: The 90 percent confidence interval (CI) ranges from an estimate minus the 90 percent margin of error to the estimate plus the 90 percent margin of error (see Table 2-7b). The 90 percent confidence intervals in parentheses are inexact. The lower limit of the confidence interval calculated in the standard way is a negative number, which is not possible. For simplicity, the lower limit has been set to 0 in these cases. See also the notes for Table 2-7b.
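
The construction described in these notes is easy to script. The sketch below (Python; our own illustration, not from the report) forms the 90 percent MOE and confidence interval from a standard error and truncates the lower bound at 0, as the parenthesized entries in Tables 2-7b and 2-7c do; the truncation is a reminder that the normal approximation has broken down for such small samples:

    def ninety_pct_moe_and_ci(estimate, se):
        # The 90 percent MOE is 1.65 times the standard error; the lower CI
        # bound is truncated at 0 when the standard calculation goes negative,
        # mirroring the notes to Tables 2-7b and 2-7c.
        moe = 1.65 * se
        return moe, (max(0.0, estimate - moe), estimate + moe)

    # ACS 1-year estimate of 15 percent poor school-age children in an area of
    # 5,000 people: Table 2-7b shows an MOE of about +/-21.1 points, implying
    # an SE of roughly 21.1 / 1.65 = 12.8.
    moe, ci = ninety_pct_moe_and_ci(15.0, 12.8)
    print(round(moe, 1), ci)   # about 21.1 and (0.0, 36.1), as in Table 2-7c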


of a population group in an area. The formula for estimating the coefficient of variation is very unstable for estimates of small proportions, and the estimated coefficients can be misleadingly large.

Table 2-7a shows that estimates of 15 percent poor school-age children from the 2000 long-form sample meet the 12 percent standard of precision for areas with a minimum population between 20,000 and 25,000 people (4,000–5,000 school-age children), but estimates from accumulated ACS 5-year data meet this standard only for areas with at least 50,000 people (10,000 school-age children). Estimates from the ACS 3-year and 1-year data meet this standard only for areas with at least 80,000 people (16,000 school-age children) and 250,000 people (50,000 school-age children), respectively.

The relative standard errors in Table 2-7a are calculated for estimates of 15 percent poor children among all school-age children. The latter group, in turn, is assumed to be 20 percent of the total population, so that poor school-age children are only 3 percent of the total population. If, instead, the table were to provide relative standard errors for estimates of 15 percent poor people—including all children and adults—among the total population, then the levels of precision shown would be considerably improved (see Table 2-8). Thus, the long-form sample would provide estimates that meet the 12 percent or less precision standard for areas as small as 1,500 people, while estimates from accumulated ACS 5-year data would meet this standard for areas as small as 10,000 people. Estimates from accumulated ACS 3-year and 1-year data would meet this standard for areas as small as about 15,000 and 50,000 people, respectively (see Table 2-8). In other words, simple one-way tabulations from the ACS may meet common standards for precision for relatively small areas, although that is not likely to be the case once another variable is introduced, such as age or race.
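
The improvement comes almost entirely from the larger base of the estimate: with the same sampling rate and the same 15 percent proportion, the standard error of a percentage shrinks roughly in proportion to the square root of the base, so moving from a base of school-age children (about one-fifth of the population) to the total population cuts the CV by a factor of roughly the square root of 5, or about 2.2. A quick check in Python (our own calculation; the 12.1 percent CV is derived from the ±3.0 MOE in the 50,000-person row of Table 2-7b):

    import math

    # ACS 5-year CV for 15 percent poor school-age children in an area of
    # 50,000 people: an MOE of +/-3.0 points implies SE = 3.0 / 1.65 = 1.8 and
    # CV = 1.8 / 15 = about 12.1 percent. The base for poor people overall is
    # five times larger (the whole population rather than one-fifth of it).
    cv_school_age = 12.1
    cv_total_pop = cv_school_age / math.sqrt(5)
    print(round(cv_total_pop, 1))   # about 5.4, matching the 50,000 row of Table 2-8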

Users should not simply rely on commonly cited precision standards in deciding whether to use particular estimates. They also need to take into account the specific requirements of their application. For example, deciding which subset of school districts should receive additional funding directed to low-income students may require a narrower confidence interval than the standard. Thus, a 90 percent confidence interval of 12 to 18 percent poor school-age children, which corresponds to a 12 percent relative standard error for an estimate of 15 percent poor school-age children, may be too wide an interval for purposes of fund allocation. Still, for some applications, a ballpark estimate with an even wider confidence interval may suffice.

In deciding which set of ACS estimates is best suited for a particular application, users will need to make trade-offs between timeliness and sampling error. For example, a user could decide that a 3-year period estimate is preferable to a 1-year period estimate for a large city or county in order to achieve a greater level of precision. Alternatively, a user could decide that several years of 1-year period ACS estimates will be informative regarding trends and the current situation for the city, even though the estimates are less precise (see discussion in Chapter 3).


TABLE 2-8 Illustrative, Approximate Relative Standard Errors (Coefficients of Variation, or CVs) for an Estimate of 15 Percent Poor People from the ACS and the 2000 Census Long-Form Sample, by Population Size of Area

Population      Poor People       ACS 1-Year Period      ACS 3-Year Period      ACS 5-Year Period      2000 Long-Form-
Size of Area    (15% of total     Estimate               Estimate               Estimate               Sample Estimate
(1)             pop.) (2)         CV (%)    (Sample      CV (%)    (Sample      CV (%)    (Sample      CV (%)    (Sample
                                  (3a)       Cases)      (4a)       Cases)      (5a)       Cases)      (6a)       Cases)
                                             (3b)                   (4b)                   (5b)                   (6b)

 2,500,000       375,000          1.7       (39,550)     1.0       (118,600)    0.8       (197,650)    0.5       (387,500)
 1,000,000       150,000          2.7       (15,800)     1.6       (47,450)     1.2       (79,050)     0.8       (155,000)
   500,000        75,000          3.8       (7,900)      2.2       (23,700)     1.7       (39,550)     1.1       (77,500)
   250,000        37,500          5.4       (3,950)      3.1       (11,850)     2.4       (19,750)     1.6       (38,750)
   100,000        15,000          8.5       (1,600)      4.9       (4,750)      3.8       (7,900)      2.5       (15,500)
    65,000         9,750         10.6       (1,050)      6.1       (3,100)      4.7       (5,150)      3.1       (10,100)
    50,000         7,500         12.1       (800)        7.0       (2,350)      5.4       (4,050)      3.6       (7,750)
    25,000         3,750         17.1       (400)        9.8       (1,200)      7.6       (2,000)      5.1       (3,900)
    20,000         3,000         19.1       (300)       11.0       (950)        8.5       (1,600)      5.6       (3,100)
    10,000         1,500         27.0       (150)       15.6       (450)       12.1       (800)        8.0       (1,550)
     5,000           750         38.2       (80)        22.0       (250)       17.1       (400)       11.3       (800)
     3,000           450         42.7       (70)        24.6       (200)       19.1       (350)       12.6       (700)
     1,500           225         32.5       (70)        18.7       (200)       14.5       (350)        9.6       (700)
       500            75         44.8       (30)        25.8       (100)       20.0       (150)       16.7       (250)

NOTES: See Notes for Table 2-7a—columns 3a–6b in Table 2-8 correspond to columns 4a–7b, respectively, in Table 2-7a. Population sizes for calculating standard errors are in column 1. To obtain an approximate 90 percent margin of error, multiply 15 percent by the estimated coefficient of variation (CV) above to obtain the estimated standard error and multiply the result by 1.65. For example, the 90 percent margin of error for an ACS 1-year period estimate of 15 percent poor people in an area of 65,000 total population is 15 times 0.106 equals 1.6, times 1.65 equals ±2.6, which, in turn, gives a 90 percent confidence interval of 12.4–17.6 percent poor people.
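
The arithmetic in the note above can be packaged as a small helper. The sketch below (Python; the names are ours) reproduces the worked example for an area of 65,000 people:

    def moe_and_ci_from_cv(estimate_pct, cv_fraction):
        # Convert a coefficient of variation (expressed as a fraction) into a
        # 90 percent margin of error and confidence interval, as in the note
        # to Table 2-8.
        se = estimate_pct * cv_fraction   # 15 * 0.106 = about 1.6
        moe = 1.65 * se                   # about +/-2.6
        return moe, (estimate_pct - moe, estimate_pct + moe)

    # ACS 1-year estimate of 15 percent poor people, area of 65,000 people
    # (CV of 10.6 percent in Table 2-8):
    moe, ci = moe_and_ci_from_cv(15.0, 0.106)
    print(round(moe, 1), round(ci[0], 1), round(ci[1], 1))   # 2.6, 12.4, 17.6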



2-C.2.d
Documentation of Sampling Error

The Census Bureau is commendably trying to impress upon users the extent of sampling error in the ACS estimates. Originally, for data products issued through mid-2005 from the C2SS and the ACS test surveys for 2001–2004, the Census Bureau published upper and lower 90 percent confidence interval bounds (for example, 13–17 percent for a 15 percent estimate of poor school-age children). In response to users, who are more accustomed to the MOE concept (as reported in the media for public opinion polls), the Census Bureau decided to replace the upper and lower bounds in tables with the 90 percent MOEs for specific estimates (such as ±0.2 percentage points). In addition, the Census Bureau will not publish 1-year or 3-year estimates when their imprecision is deemed to be too great. In these instances, the standard tabulation categories will be combined to the point at which the tabulations meet the Census Bureau’s threshold for a minimally acceptable level of precision. The 5-year period estimates will not be treated in this manner, even for very small areas for which they are highly imprecise, because the 5-year small-area estimates are the building blocks for a wide range of user applications, much as the long-form-sample data were (see Section 4-D.2).

In contrast, the sampling error of the long-form-sample estimates was not highlighted, but instead was contained in footnotes and auxiliary documentation. Moreover, margins of error were not provided for specific estimates; instead, users were given general formulas for making their own computations of sampling error. As a result, many users have been unaware of the sampling error in the long-form-sample estimates they have been using.

2-D
SUMMARY ASSESSMENT

The ACS promises to be of great benefit to many users for a wide range of applications for which they previously relied on information from the decennial census long-form sample. The three major benefits of the ACS are its timeliness, its frequency, and the improved quality of its responses compared with the long-form sample. Not only will the ACS information be released within 8–10 months of the completion of data collection, compared with 2 years or more for the long-form sample, but it will also be updated every year instead of once a decade. Moreover, there is strong evidence that


the ACS will provide data with reduced nonsampling error because of such factors as the use of trained interviewers to collect the information from nonrespondents. In tests of the ACS, improvements in quality are evident in more complete response to almost every item compared with the long-form sample. Furthermore, in personal interviews, some items will be more accurately reported because the computer-assisted interviewing can more readily correct respondent misperceptions about what is being asked and resolve inconsistent responses.

A complication for users in switching from the census long-form sample to the ACS is the survey’s continuous fielding and processing. This design produces estimates that pertain to periods of time—averages over 12, 36, or 60 months—instead of the traditional point-in-time estimates with which users are familiar from the long-form sample and other household surveys. Users will need to work together and with the Census Bureau to develop strategies for applying the ACS information that take account of the survey’s continuous design. In Chapter 3 we outline some of these strategies.

Sampling error or imprecision of the estimates is a problematic aspect of the ACS, although users should remember that many long-form-sample estimates did not meet common standards of precision for small areas, either (see Tables 2-7a, 2-7b, 2-7c, and 2-8). When the data are averaged over 5 years, it appears that the ACS will provide reasonably precise estimates for small population groups, such as poor school-age children, for areas with 50,000 or more people but not for smaller areas. The ACS 1-year estimates for such a small population group will have low precision unless the area has at least 250,000 people. For larger population groups, such as total poor, the ACS 5-year estimates will likely provide reasonably precise estimates for areas of at least 10,000 people, while the ACS 1-year estimates will meet that standard for areas of at least 50,000 people.

ACS estimates for census tracts, which average 4,000 people, and block groups, which average 1,500 people, will be very imprecise. Indeed, long-form-sample estimates for these areas were imprecise for all but large population groups. However, these areas can be combined in various ways by users who want to compare planning districts, wards, or other components of large cities, counties, and other areas.

The bottom line for large geographic areas—such as states, congressional districts, and large metropolitan areas, cities, and counties—is that the ACS estimates will be a great asset to data users. The data will be timely, up to date, of good quality, and reasonably precise. The 5-year data for census tracts and block groups, while not precise in and of themselves, will provide building blocks that should enable detailed analyses of the populations of large geographic areas.

Estimates from the ACS for small governmental units, even with oversampling, are the most problematic from the perspective of sampling error. Consider a place of 1,500 people and 300 school-age children, of whom 45 children, or 15 percent, are estimated to be poor. Table 2-7c shows a 90 percent confidence interval of 7 to 23 percent poor school-age children from 5 years of ACS data. Based on the calculations used to derive Table 2-7c, the margin of error of the ACS estimate is 51 percent greater than that from the 2000 long-form sample, which already has a high margin of error, and this increase may be somewhat underestimated. Moreover, the option of combining small governmental units into larger analytical units in order to improve the precision of estimates is less applicable than in the case of combining census tracts or block groups within a larger jurisdiction.

Chapter 3 discusses possible strategies by which data users interested in very small governmental units can make effective use of the ACS estimates. It will also be imperative to maintain the planned sample sizes for the ACS over time and, furthermore, for the Census Bureau, in cooperation with users, to seek ways to improve the precision of the estimates for small areas (see Section 4-A.5).
