Since its origin 23 years ago as a pilot test conducted in four U.S. counties, the U.S. Census Bureau’s American Community Survey (ACS) has been the focus of continuous research, development, and refinement. The survey cleared critical milestones 14 years ago, when it began full-scale operations with comprehensive nationwide coverage, and 5 years later, when the ACS replaced a long-form sample questionnaire in the 2010 census as the source of detailed demographic and socioeconomic information. Throughout that history and continuing today, ACS research and testing has worked to improve the survey’s conduct in the face of challenges ranging from the detailed and procedural (e.g., the best means for collecting ACS information from “group quarters” or nonhousehold populations, such as residents of correctional or health care facilities) to the broad and existential (e.g., assessing how much the legal requirement that individuals respond actually matters to the level of survey participation).
As summarized by Victoria Velkoff (U.S. Census Bureau) in opening the workshop covered by this document, the ACS is an ongoing monthly survey sent to 3.5 million addresses per year. ACS data collection also involves visiting about 20,000 group quarters facilities and obtaining data on roughly 194,000 of their residents. A separate Puerto Rico Community Survey complements ACS collection on the mainland. At present, the ACS supports over 300 known federal government uses, and it provides the basis for the allocation of more than $765 billion in federal funds each year. The ACS yields estimates on an annual basis, in two varieties. In September of each year, the Census Bureau releases ACS estimates, based on 1 year (12 months) of ACS responses, for all geographic areas and population groups with populations of 65,000 or greater.1 In December of each year, ACS estimates are released for every part of the United States, but based on the accumulation of 5 years (60 months) of survey responses, in order to generate sufficient sample size to sustain estimation for small areas.
To understand the content and structure of the ACS, one must also briefly understand the basic evolution of data collection in the U.S. decennial census because the two are inextricably linked. Velkoff explained that the decennial censuses conducted between 1790 and 1930 involved the same set of questions asked of all responding households—a steadily increasing number of data items, to be sure, but the same items collected from all respondents. In 1940, to make census data collection more tractable, some census content was shifted to a “long-form” interview administered only to a sample of respondents. This practice of having both a short form and a long form continued over the decades, including during the switch to a primarily mail-based census, through the 2000 census. Cognizant of pressure to report more data more often, the Census Bureau began pilot testing of what would become the ACS in 1996, and this demonstration period continued through 2004. In early planning for the 2010 census, the Census Bureau decided that the next decennial count would be conducted on a short-form-only basis and that the full-scale ACS, which achieved coverage of all geographies in 2005, would become the source of information previously available through the census long-form sample. Response to the ACS is required by law, a legacy of the survey’s history as the long-form sample of the decennial census.
Procedures have been developed for the review and refinement of ACS content, which currently covers more than 35 topics (see Box 1.1) over 71 questions; those questions form the basis of more than 1,000 tables and 11 billion individual estimates on an annual basis, Velkoff said. The ACS is conducted through three response modes: self-response via mail, self-response via the Internet, and personal interviews with households that do not use the self-response channels. The Census Bureau states that it takes about 40 minutes, on average, to complete the survey. Yet the data collection that began its life as a burden-reduction measure, shifting dozens of detailed census questions onto a separate long-form questionnaire administered only to a sample, is under constant pressure to further reduce respondent burden.
Velkoff described quality as the central and leading priority—the “North Star”—for the ACS program, but she added that the ACS program views its other priorities as essential as well: maintaining the accuracy and utility of the ACS estimates, improving the survey process underlying the ACS, and ensuring that the respondent experience is as pleasant as possible. She pointed out that the introduction of the online ACS questionnaire in 2013 was a major step in that direction, and online response also appears to be paying dividends in terms of data quality and accuracy. Tightening budgets have prompted the ACS program to make some major operational cuts in recent years, reducing cost without compromising quality. Notably, this meant eliminating computer-assisted telephone interviewing (CATI) as a mode of ACS data collection and eliminating the production of estimates based on 3 years of accumulated ACS responses.

1 Velkoff noted that it has become practice for the Census Bureau to release some limited 1-year ACS estimates for selected areas with populations of 20,000 or greater in October of each year.
Accordingly, as noted in opening this summary, research and testing have always been an integral part of the ACS experience, and this workshop proceedings represents a continued step along that path. Following the release of the first version of its Agility in Action report (U.S. Census Bureau, 2015), the Census Bureau requested that the National Academies of Sciences, Engineering, and Medicine (NASEM) convene a workshop in spring 2016 on the formulation of a research agenda for the next few years of ACS research. That workshop was held on March 8–9, 2016, and is summarized in National Academies of Sciences, Engineering, and Medicine (2016). It was followed by four expert meetings, which dug deeper into four of the principal strategies for reducing respondent burden raised at the workshop: matrix sampling, administrative records, group quarters data collection, and communication and messaging.2 Velkoff described the 2016 workshop and the expert meetings as “invaluable to us” in directly shaping the research agenda, noting that the first hour of the session on matrix sampling made it clear that “the juice is not worth the squeeze” and that subsequent discussion of exactly how much telephone interviewing costs in the ACS environment contributed to the decision to terminate CATI response. Velkoff said that the workshop, expert meetings, and subsequent discussions (including with the Behavioral Insight Group at Harvard University in December 2016) fueled the development of version 2.0 of Agility in Action, which was published on May 31, 2017 (U.S. Census Bureau, 2017).
Velkoff said that, since that time, ACS program staff have published over 30 research papers, following the agenda articulated in Agility in Action. Both to provide a platform for discussing the research that has been completed and, more importantly, to look further into the future of ACS research and development, the Census Bureau requested that NASEM conduct another workshop and set of expert meetings. The statement of task for the project was:
A steering committee will organize and execute a 1.5-day public workshop for the U.S. Census Bureau to discuss recent methodological research conducted by the Census Bureau to improve the American Community Survey (ACS), which replaced the previous decennial census long-form sample. The workshop will examine efforts to improve communication materials provided to respondents to encourage response as well as the follow-up procedures with respondents. The workshop will also address evaluating administrative data as a replacement for survey questions, which may also improve accuracy and/or reduce costs. After the workshop, a proceedings of the presentations and discussions will be prepared by a designated rapporteur in accordance with institutional guidelines.

2 Respondent burden is a complicated concept, and a single definition of it was not developed and used in structuring the workshop. Interested readers are directed to the proceedings of the 2016 workshop (National Academies of Sciences, Engineering, and Medicine, 2016) for further discussion of the dimensions of respondent burden and, in particular, to the summary of Scott Fricker’s presentation at pages 8–12 of that proceedings on defining and measuring respondent burden.
Subsequent discussion with Census Bureau staff led to reversing the order of these two broad topics in structuring the workshop and to focusing the respondent-communication portion on the self-response channels (mail and Internet) rather than on the in-person follow-up interviews. The workshop and another set of expert meetings are intended to provide the Census Bureau with material for promulgating version 3.0 of Agility in Action.
This proceedings volume encapsulates the presentations and discussion at the September 26–27, 2018, Workshop on Improving the American Community Survey (ACS), sponsored by the U.S. Census Bureau and organized by a planning committee appointed by NASEM. As requested, the workshop covered two broad topic areas:
- Uses of Administrative Records and Third-Party Data to Improve ACS Operations: Broad visions for use of these auxiliary data sources, including replacement of questionnaire items, incorporation into editing or imputation routines, and creation of new blended data products, as well as the articulation of lessons learned or best practices from roughly analogous combinations of administrative and survey data; and
- Boosting Respondent Participation Through Improved Communication: Revisiting the full suite of print materials received by respondents in the ACS process and suggesting improvements, as well as generally refining the communication and contact strategy.
The workshop was held in Washington, D.C., and webcast live. Video clips of all workshop presentations are available through the Committee on National Statistics website at http://sites.nationalacademies.org/DBASSE/CNSTAT/DBASSE_188596. The workshop agenda and a list of registered participants appear in Appendix A. At its peak on both days, approximately 80 audience members and webcast viewers followed the workshop.
This proceedings has been prepared by the workshop rapporteur as a factual summary of what occurred at the workshop. The planning committee’s role was limited to planning and convening the workshop. The views contained in the proceedings are those of individual workshop participants and do not necessarily represent the views of all workshop participants, the planning committee, or the National Academies of Sciences, Engineering, and Medicine.