As described in Section 1–A, the Workshop on the Benefits (and Burdens) of the American Community Survey (ACS) sought to emphasize the benefits of the ACS to a wide array of data users while also giving its burdens—its challenges and drawbacks—an honest and appropriate airing. The workshop presentations and discussions summarized in previous chapters reflect different aspects of burden, including the relative disadvantage rural areas confront in access to and accuracy of ACS estimates compared to more populous areas (Sections 5–B and 6–A), questions and concepts that do not completely mesh with pressing policy interests (Section 2–A), and restrictive data embargo protocols that can hinder the work of groups serving as “interpreters” of the data (Section 3–B). In addition to burden-related material in these and other individual presentations, the workshop steering committee devoted a separate session to issues of burden, assembling a small group to speak about a selection of important aspects or components of burden:
- The workshop was intended to focus on nonfederal users of ACS data, and so no speakers or applications from the federal executive agencies were included in the workshop program. Yet a major “burden” associated with the ACS is that it needs to fill the role of the previous census long-form-sample data in informing general policy. The committee felt that the U.S. Government Accountability Office (GAO) is uniquely positioned to speak to the application of ACS data to the full sweep of policy decisions and could also speak to the potential costs and benefits of a voluntary ACS. Ron
Fecso summarized GAO perspectives on behalf of himself and GAO senior analyst Kathleen Padulchick (Section 8–A).
- Relatedly, and extending themes raised during the state, local, and tribal perspectives session (Chapter 5), the committee sought someone to speak about the challenges of communicating estimates, and uncertainty, to state and local decision makers. Accordingly, Warren Brown (Cornell University) discussed examples from his time as state demographer in Georgia (Section 8–B).
- As touched on in the newly swirling legislative discussion of a voluntary ACS (Section 1–B), the ACS is continually open to criticism along privacy and confidentiality lines; some questions on the survey—such as the presence or absence of a flush toilet in the housing unit or what time each person leaves home for work—are routinely challenged as being intrusive or tantamount to identity theft. Hence, the committee asked Barry Steinhardt (former associate director of the American Civil Liberties Union) for perspectives on how one privacy rights advocate reads the ACS (Section 8–C).
- Michigan state demographer Kenneth Darga was asked to comment on the respondent burden issues associated with the ACS—how long it takes to complete the questionnaire, whether the resulting information and data justify the “imposition” of the survey on a large annual sample, and—consistent with the new debate—how response to the ACS might change if the survey were made voluntary (Section 8–D).
- Finally, the committee asked longtime demographic consultant Stephen Tordella to play devil’s advocate—to get a sense of and comment on the general complaints and concerns about the ACS and its content raised by respondents in the public, and how those complaints are registered with decision makers (Section 8–E).
At the workshop, each of the speakers gave a short opening statement before the floor was opened up to discussion; that round of questions and answers is summarized in Section 8–F.
This chapter is also an appropriate point to summarize the brief period of closing discussion for the workshop. The workshop steering committee invited Steve Murdock (Rice University)—former director of the U.S. Census Bureau and former Texas state demographer—to wrap up the workshop with brief remarks on what he heard at the workshop and on the prospects of the ACS. This discussion is summarized in Section 8–G.
Ron Fecso (U.S. Government Accountability Office [GAO], on behalf of himself and Kathleen Padulchick) began with a short introduction of GAO’s role as an investigative arm of the legislative branch. Formerly known as the General Accounting Office, GAO was renamed about 10 years ago to reflect that the core of its work had changed from primarily auditing programs to evaluating them, across many dimensions. Most of GAO’s studies are done at the request of Congress—often 800 or more jobs a year—but Fecso said that the office is also uniquely positioned to occasionally scan the environment and inform Congress of developments and trends that it should know about—“and the ACS, believe me, is one where we try to get their ear as often as we can.”
Fecso reiterated that the basic goal of GAO is to provide Congress with information that is “objective, fact-based, non-partisan”—“totally nonpolitical”—and that, as a result, GAO is extremely serious about the quality of the data that it uses in its reports. Though it most often relies on already available data, GAO will occasionally conduct its own data collections as studies warrant, including surveys of local governments and school systems, but it is not equipped for doing big population surveys. Fecso said that GAO makes a point of conducting a data reliability assessment on data sources available to it, be they public- or private-sector-generated sources, and that this assessment includes such factors as the competence of the source, the reasonableness of the resulting estimates, and the soundness of the methodology. If the data do not meet GAO’s standards, “we cannot use them.”
One concern as the ACS was launched was whether the estimates from the survey could satisfy all the functions and demands then placed on estimates from decennial census long-form samples. Fecso said that the breadth of applications in which GAO has used the ACS (after satisfying itself of the ACS’s fitness for the task) testifies to the fact that the ACS has proven itself a desirable replacement. Some of these wide-ranging applications of the ACS in GAO studies (with their GAO report number, to facilitate easy reference to the studies at http://www.gao.gov) are:
- Study of veterans’ housing characteristics and the affordability of rental housing among low-income veterans (GAO-07-1012); ACS data revealed a surprisingly large group of veteran renter households with low incomes (about 2.3 million) but ultimately suggested that—at the time—veteran renter households were not significantly different from nonveteran renter households in having problems affording housing.
- Study of differences in educational attainment and income among detailed Asian and Pacific Islander demographic subgroups (GAO-07-925); ACS data suggested, for instance, a higher propensity for Asian Indians and Chinese persons in the United States to hold college degrees than other subgroups like Vietnamese and Native Hawaiians.
- Evaluation of the National Flood Insurance Program (NFIP) and the possible financial impact of changes in policy and premium rates (GAO-09-20); in this case, ACS data on such variables as median household income and median value of owner-occupied homes were used to target a sample of counties for intensive case-study analysis—somewhat similar to AIR Worldwide’s use of ACS data to characterize areas and properties at risk of catastrophes (see Section 6–E).
- Comparison of alternative methodologies for allocating grant monies for vocational rehabilitation programs across the states (GAO-09-798); the ACS data proved ideal for the analysis not only for its detailed information on a wide variety of disabilities—disability rates being a key factor in the need for vocational rehabilitation programs—but also for its coverage of the nonhousehold “group quarters” population, including people living in group homes or specific rehabilitation facilities.
- Assessment of the possible relationship between limited English proficiency and financial literacy, or awareness of consumer finance issues (GAO-10-518); while the study required external data (besides the ACS) for the financial literacy component, GAO appreciated that the multiple questions on the ACS gave it great flexibility in defining and constructing its own standard definitions of “limited English proficiency” for small areas.
- Description of the characteristics of women in managerial positions in the workplace (GAO-10-892R); in a 2010 update of a report originally done in 2001 using Current Population Survey (CPS) data, GAO was able to use ACS data to examine pay and demographic differences between women and men in management positions—for a wider range of industries than was possible with the CPS data.
- Profile of the demographic and economic characteristics of one very specific workforce, people working in early child care and education (GAO-12-248); among the interesting findings, about 93 percent of the workers in this field who have a bachelor’s degree do not have a degree specifically in early childhood education.
Fecso noted that GAO has occasionally been called upon to examine and advise upon technical aspects of the ACS. GAO’s legal staff was asked by Members of Congress to render an opinion on the legal justification for the ACS, and their resulting memo (GAO report B-289852) is still an important part of the ongoing voluntary-versus-mandatory-response debate because GAO concurred that the Census Bureau has the legal authority to make the ACS mandatory. In 2002, GAO’s The American Community Survey: Accuracy and Timeliness Issues
(GAO-02-956R) forecast the decline in response rate—and with it the decline in accuracy—that could accompany a switch from mandatory to voluntary ACS collection. Most recently, earlier in 2012, GAO issued a report looking at the whole portfolio of federal household surveys and the role of the ACS, making the case that the ACS was successfully able to add a question on field of degree that permits the ACS to serve as the sampling basis for the National Science Foundation’s National Survey of College Graduates but that—large though it is—the ACS’s current sample size is too small for it to be piggybacked upon for other follow-up survey designs.
Fecso closed by offering brief comments on general ACS issues. He reiterated his, and GAO’s, concern that a switch to voluntary collection could degrade the quality of the ACS estimates. Additional funding—and additional sample units—could get around the basic problem of a lower initial sample size, but he said that he would still be very worried about the biases that might be introduced by nonresponse. He acknowledged that the ACS genuinely imposes burdens as well as conferring benefits—it can be a lengthy questionnaire to complete, and it can be a challenge to communicate. That said, he expressed hope that communication and education could provide reassurance to ACS respondents, first that only a very small percentage of the population receives the survey and, second, that completing the survey is positive civic participation (rather than rote civic duty). He said that he appreciates the privacy and confidentiality concerns associated with the survey, and the arguments about the questions being overly intrusive, but—in his assessment—the wealth of information that those seemingly invasive questions can provide outweighs the burden. To that extent, he said (similar to Terri Ann Lowenthal’s discussion in Section 7–D) that concrete, accessible examples of ACS use like those at the workshop should really be developed and spread around to bolster the case for the survey.
Warren Brown (Cornell University) began his remarks by commenting that a frequently raised argument in favor of the ACS—not one dwelt upon at all in this workshop, but invoked in other settings—is that it is essential to the federal government and specifically to the distribution of more than $450 billion of federal funds. Important—and true—though this might be, Brown said that he recognized that this argument is not very persuasive in some circles: There are those “who would like to kill the beast” altogether and feel strongly that such large sums should not be collected and redistributed in the first place. That said, he argued, there is a legitimate question underlying this counterargument—what are the proper roles of and relationships between federal and state governments?—in light of which it is useful to consider the role
of federally collected data like the ACS. “Even if we went back to the Articles of Confederation”—weak federal government and strong state and local governments—Brown argued that it would still be incumbent on the state and local governments to provide some services to their populations; would it or would it not be short-sighted to shun a federally collected data collection that provides detailed information on all states and localities?
Brown continued that he wanted to use his time to talk about the uses of ACS data by state and local governments in addressing the needs of their residents—and, in that light, the burdens that the ACS in its present form creates for state and local users. He said that the basic point he wanted to express is that the ACS creates tradeoffs for users—fix or relieve one burden, create or exacerbate another—and he said that he would briefly illustrate his point through an example of work done with the Georgia Division of Aging Services, during his 3-year period at the University of Georgia as state demographer.
Over his 35-year career, Brown said that his role has been to serve as a “data intermediary”—making the data from statistical agencies more useful to analysts, policy makers, and general staff members of state and local governments. In that capacity, he said that he has heard—extensively—complaints about the inadequacies and burdens of government statistics. In the days of the decennial census long-form sample, the recurring complaint was that the data are so out of date as to be useless. When the ACS came online, the complaints changed dramatically. Some challenged the sheer volume of the data and found it to be almost too timely—the thrust of the complaint being that “this flow of information is overwhelming.” Others found more subtle, technical grounds for complaints; officials from small-population areas with significant group quarters populations (e.g., prisons, college dormitories, nursing homes) critiqued the “erratic and inaccurate” nature of the ACS group quarters data.1 Significantly, state and local government users also complained about the burdens of interpreting multiyear period estimates relative to the point-in-time estimates of the old long form and of grappling with more prominent margins of error; so many state and local programs have specific cutoffs or thresholds that estimates that look like they are bouncing above and below those thresholds are highly disconcerting.
With that as context, he moved to the example. The Georgia Division of Aging Services administers a variety of programs to deliver services to the elderly, to persons with disabilities, and specifically to military veterans with disabilities—the central objective of which is to enable people who wish to remain in their own homes (without having to be institutionalized) to do so. Consequently, the division has to coordinate extensively with a variety of service providers, to streamline delivery approaches so that the maximum of services
1To that end, Brown held up a copy of—and commended—the recent National Research Council (2012) report on measuring the group quarters population in the ACS for offering recommendations on resolving these problems.
[Figure source: American Community Survey, 2008–2010, Table B21100; adapted from workshop presentation by Warren Brown.]
can be provided with a minimum of cost—“a laudable goal,” Brown said, and undoubtedly one that is replicated by similar agencies in other states as well.
When the Division of Aging Services came to him with a problem, Brown emphasized that they did so with tabulations in hand: “They used data from the American Community Survey; I didn’t bring this data to them,” or have to bring it to them. The heart of the problem is that one specific question on the ACS makes it, seemingly, ideal for the division’s purposes. The U.S. Department of Veterans Affairs assigns service-connected disability ratings—expressed as a percentage in the sequence 0, 10, 20, through 100 percent—to affected veterans, and the ACS asks about those disability ratings.2 The division hoped that the ACS would be able to provide “statistically reliable information to provide more accurate assessments,” Brown said; it seemed as though the ACS would be a very valuable tool in generating regional estimates of current and anticipated demand for services specifically aimed at veterans with disabilities. Indeed, Brown said, the division had used the ACS to explore the age of veterans, their living conditions, and their military service—all information collected in the ACS.
Figure 8-1 displays the estimates—and the upper and lower confidence limits
2Person Question 28a on the 2012 questionnaire asks “Does this person have a VA service-connected disability rating?” If the answer is “Yes (such as 0%, 10%, 20%, …, 100%),” then the respondent is asked to report the rating using the five categories implied in Figures 8-1 and 8-2: “0 percent,” “10 or 20 percent,” “30 or 40 percent,” “50 or 60 percent,” or “70 percent or higher.”
on those estimates, shown as red bars—for the service-connected disability rating question, based on 3-year ACS data (2008–2010) for the state of Georgia as a whole. The confidence bands are quite tight—“but we don’t deliver the program at the state level.” Instead, the program for outreach to veterans with disabilities is administered through regional offices for the aging, so there is a need to drill down to finer levels of geography.
The top graph in Figure 8-2 shows the same service-connected disability estimates (again from 2008–2010 ACS data) for the largest geographic subdivision in the state: the Atlanta metropolitan area, in which over half of Georgia’s total population resides. Looking at the graph, Brown said, “you can do some program planning at that level”—“the confidence limits are still pretty acceptable,” and there is still some clear separation in group sizes between the extremes of the disability rating groups. But the bottom graph shows the results for the Macon metropolitan area—“not a small population,” at around 250,000 total population, roughly 17,500 veterans in total, and about 2,300 of those having a service-connected disability rating. And, there, the division came to Brown, wondering what to do. The confidence limits widen a great deal and, from this picture, Brown said that “it is difficult to justify how many persons in need” of the veteran-specific services are in the Macon area. Yet “this is the only information they have, to set those kinds of planning objectives.”
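The widening confidence limits that Brown describes follow directly from sampling arithmetic. As a rough illustration (not Brown's calculation; the effective sample sizes below are hypothetical), the simple-random-sampling margin of error for an estimated proportion shrinks only with the square root of the sample size, and ACS margins of error are published at the 90 percent confidence level:

```python
import math

Z90 = 1.645  # ACS margins of error use the 90 percent confidence level

def moe_proportion(p, n, z=Z90):
    """Approximate margin of error for an estimated proportion p from an
    effective sample size n (simple-random-sampling formula; actual ACS
    errors also reflect weighting and design effects)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical effective sample sizes, for illustration only: a statewide
# tabulation versus a small metro-area tabulation of the same proportion.
for label, n in [("statewide", 5000), ("small metro", 150)]:
    moe = moe_proportion(0.30, n)
    print(f"{label:>12}: estimate 30% \u00b1 {100 * moe:.1f} points")
```

Cutting the effective sample by a factor of 33 widens the interval by a factor of nearly 6, which is the pattern visible in moving from the Atlanta panel to the Macon panel.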
Brown closed by arguing that what is needed for effective programming in state and local government programs is “reliable, consistent, accurate, precise estimates, as best we can get them.” Brown said that the example of the disabled veterans brings home to him that the ACS creates, and will continue to create, tradeoffs. Making responses voluntary might benefit the privacy of individual responses, but the resulting diminution of the sample size could hurt the accuracy of estimates. Brown said that his worry is that these shifts in quality might disproportionately “create tangible negative consequences for some of our most vulnerable residents: youth, the elderly, and disabled veterans.”
Barry Steinhardt (Friends of Privacy USA) opened his comments with an important disclaimer: some people in the workshop know him from his lengthy career with the American Civil Liberties Union (ACLU) and, in particular, from his service as ACLU’s representative to the Census Advisory Committee. However, he emphasized that he does not represent or speak for the ACLU today. What he said he does speak from is the perspective of a long-time observer of the intersection between the census and privacy and confidentiality issues—in which capacity he asked to start by stating his “incredible respect” for the Census Bureau’s privacy safeguards. In his assessment, the Census Bureau is the federal government agency with the most effective program for “protecting the privacy of the respondents and the integrity and the privacy of the data that it receives.” Steinhardt added that he currently sits on the Privacy Advisory Committee of the U.S. Department of Homeland Security (DHS) and that “they largely roll their eyes when I set the Census Bureau as a model” for DHS to follow; still, he finds the contrast between the Census Bureau and DHS in terms of privacy protection to be remarkable.
Steinhardt also stated his conclusion that “it is very difficult and nearly impossible to glean personally identifiable information from aggregated census data,” in large part because of the Census Bureau’s well-developed techniques (including data swapping) for minimizing identifiable information. He knows of no example since World War II—the “infamous example” of disclosure of information involving Japanese Americans—of the Census Bureau willfully or inadvertently releasing personally identifiable information about an individual, an extremely commendable record. Steinhardt did note that, while one might not be able to identify particular individuals in aggregate census data, one can tell when there are some outliers—cases far different from the norm in some characteristic, within some fairly discrete answer. How much information one can glean from outliers, and whether marketers and others in the for-profit sector make use of them, are open and interesting questions, Steinhardt observed.
With that said, and recognizing the many “extraordinarily valuable role[s]” played by the ACS, as illustrated by the presentations at the workshop, Steinhardt said that it has to be recognized that some of the questions on the ACS involve personally sensitive information. Moreover, many of the ACS questions are inherently highly personal in nature—Steinhardt said that there is “no other way to get the value out of the ACS without asking those questions,” using the personal answers to produce estimates at aggregate levels.
So, getting to the way in which a privacy rights advocate reads the ACS, Steinhardt said that the first natural question is how well the Census Bureau protects the data. As he said before, “the answer is ‘very well’”—the Census Bureau has a “sterling record of protecting the data.” He recalled—from his tenure at the ACLU—actively urging people to fill out the decennial census form and the ACS questionnaire, saying that they could do so “with great confidence that their data would be protected.” So, on that level, the ACS is not troubling.
However, the question of whether responses to the ACS should be voluntary or mandatory is another matter. Steinhardt said that, in his opinion, “it is a close constitutional question” as to whether the Census Bureau can compel responses to all of the questions on the ACS. To be clear, he continued, “I am hardly a constitutional originalist on this”—census practices have had to change, and have changed, since ratification of the Constitution and the conduct of the 1790 census, and he does not want to try to parse exactly what the founders could have possibly envisioned regarding the census. Moreover, there is “certainly a whole range of questions which I believe people have a constitutional obligation to answer.” But, he suggested, “I think it is a stretch” to argue that questions of
“whether or not you heat your home with natural gas” and whether that natural gas is in a bottle or comes through a pipe3 flow directly from the constitutional mandate to apportion the Congress.
In the voluntary-versus-mandatory debate, Steinhardt observed, it is frequently mentioned that the Census Bureau has never prosecuted anyone for failure to answer the questions. In the cases that he is aware of, where litigants have challenged the penalties for nonresponse, Steinhardt said that the courts have uniformly declined to hear the question; because the Census Bureau has not brought charges against anyone, the litigants have “no credible fear of prosecution” and so the courts rule that they lack standing to bring the case. In his opinion, Steinhardt said that he thinks “the courts are wrong in that,” and that actual judicial examination of the issues would be very interesting—and “an extraordinarily close question about whether or not you can be required to answer all the questions.”
Concluding his remarks, Steinhardt said that “if you ask me as a privacy advocate what I worry about,” the mandatory-versus-voluntary issue is not at the top of the list. Again commending the Census Bureau’s “sterling record” of resisting outside requests for its data, Steinhardt argued that a breach in the Census Bureau’s protections—misuse of census or ACS data by other elements in the federal government—would be vastly more damaging to the enterprise of the ACS. He said that he appreciated hearing about the manifold uses of the ACS, that he thinks that the ACS plays an extremely valuable role, and that Congress should continue to support the ACS. He is, however, firm in his conclusion that whether all of the ACS questions should require mandatory responses is a close constitutional question.
The Paperwork Reduction Act of 1995 requires federal agencies like the Census Bureau to submit proposed “information collections”—essentially, any gathering of information from 10 or more respondents—to the U.S. Office of Management and Budget (OMB) for clearance, at least once every 3 years for an ongoing survey like the ACS. OMB, in turn, is asked to weigh—among other things—whether the demands that the collection puts on respondents’ time are appropriate (and reduced to the extent possible). In its most recent request for clearance of the ACS (viewable by searching for Information Collection Review 201202-0607-003 on http://www.reginfo.gov), the Census Bureau estimated that the standard ACS housing unit questionnaire takes about 40 minutes to complete on the mailed paper form and about 27 minutes administered
3Housing Question 10 on the 2012 ACS questionnaires asks “Which FUEL is used MOST for heating this house, apartment, or mobile home?” The first two response categories, out of nine, are “Gas: from underground pipes serving the neighborhood” and “Gas: bottled, tank, or LP.”
by phone or personal interview. (The actual time to complete the survey depends critically on the number of people in the household—and, with it, the number of person-level questions that must be answered.) Based on these assumptions, and including quality control interview and data collection from group quarters as well as households, the Census Bureau said that it anticipates the average annual respondent burden for 2012–2015 to be roughly 2,435,568 hours across 3,805,200 respondents—a large commitment of time and resources that invites continued and active discussion to keep that burden in check.
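As a quick sanity check on those figures (an illustrative calculation, not part of the Census Bureau's clearance filing), the implied average burden per respondent is consistent with the 27-to-40-minute range quoted above:

```python
# Rough check on the burden estimate cited in the text.
total_hours = 2_435_568   # estimated average annual respondent burden, 2012-2015
respondents = 3_805_200   # estimated annual respondents (households plus group quarters)

avg_minutes = total_hours / respondents * 60
print(f"Implied average burden: {avg_minutes:.1f} minutes per respondent")
```

This works out to roughly 38 minutes per respondent.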
Asked to comment specifically on issues of respondent burden, Kenneth Darga (Michigan state demographer) framed the problem by noting that true respondent burden includes at least two major components: demands that the survey puts on respondents’ time and effort (as described above) and the cost/burden of revealing personal information. He said that his comments are intended to place these components into context through comparison with other demands for time and effort and with disclosure of personal information.
He first asked the workshop audience to consider that various levels of government—federal, state, local—place myriad demands on individuals’ time and effort (not to mention, in some cases, their money). Among these, the federal government demands compulsory military registration (and, in some wartime periods, compulsory service); it also directs that individuals wait in line for Transportation Security Administration screening in order to travel by air. Depending on the letter of the law in individual states, state government makes a strong, mandatory claim on young people’s time: mandatory school attendance for anywhere from 10 to 14 years. And state and local governments both demand that people wait at traffic lights, and that they keep their lawn mowed and their sidewalks clear of snow in winter.
Against this backdrop, Darga argued, is the particular demand by the federal government to complete the ACS questionnaire—a demand, roughly speaking, that is made of a specified household “about once every 45 years.” As has been said of the ACS, Darga commented that it is conceptually possible to make any of the other demands that he listed voluntary rather than mandatory—but, as Brown had suggested in his remarks, there are tradeoffs. If waiting at traffic lights were made voluntary, it could appear on the surface like giving people a freedom that they did not have before—yet, at the same time, Darga said that the move would work to take away or impair the special freedom of being able to drive safely. Likewise, as others have argued, it might appear that making the ACS voluntary would free up individuals’ time, but it could introduce major costs in impeding good and relatively unbiased data. Darga reasoned that the ACS is the least of the demands that he listed, especially amortized over a 45-year period, but that the fundamental problem is that the ACS is a little-understood (and hence much-resented) demand; people instinctively understand why they have to stop at traffic lights or mow their lawns, but the reasons for completing the ACS questionnaire are less clear.
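Darga's "about once every 45 years" figure is consistent with simple arithmetic on the sampling rate. A back-of-envelope sketch (the address counts here are assumptions for illustration, not figures given at the workshop):

```python
# Assumed values, for illustration only: roughly 2.9 million addresses in
# the annual ACS sample, out of roughly 131 million addresses nationwide.
annual_sample = 2_900_000
total_addresses = 131_000_000

years_between_selections = total_addresses / annual_sample
print(f"Expected gap between selections: about {years_between_selections:.0f} years")
```

Under these assumptions a given address would expect to be selected about once every 45 years.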
Darga added that, to be sure, the actual time and effort needed to complete the ACS varies considerably across households. Most fundamentally, most of the questions on the ACS are asked about each individual in the household; larger households necessarily require longer times to complete the questionnaire. Other factors that can inflate the time needed to complete the ACS questionnaire—in essence, continuing his analogy, mowing the lawn several times rather than just once—include language barriers and literacy issues. He said that he would shortly offer some suggestions for reducing this burden but that, in his mind, the relevant question is, essentially: Is completing the ACS as important as spending the same number of minutes (or more) cutting grass, waiting at traffic lights, or—generally—doing any of the large number of other things that various governments require us to do for the public good?
Turning to the second broad component of burden, Darga echoed Steinhardt’s conclusion that—for some people—some of the information called for by the ACS questionnaire can be sensitive. Someone’s actual level of educational attainment might be less than most people think it is; racial or ethnic background might be a deep family secret; in some contexts, and in a climate of concern about identity theft or government intrusiveness, even the basic data item of a person’s name might be deemed sensitive. On this point, Darga said that it is impossible for the Census Bureau to fully anticipate which data items might be deemed sensitive by which particular individuals, so its only natural response—a commendable one—is to zealously guard all personal data as confidential. Continuing, Darga suggested an important contrast with other demands for personal information: “when the [Internal Revenue Service] or an insurance company or a police officer wants information about you it is generally because they want to make some sort of decision about you.” But the Census Bureau is fundamentally different in that what it really wants is aggregate information, not personal information—the catch being that it is not possible to arrive at those aggregate data without asking respondents inherently personal questions and then adding the responses together.
Under the broad heading of disclosing personal information, Darga said that he wanted to make two other points (and draw two additional contrasts between the ACS and other situations). One is that it should be acknowledged that some (but definitely not all) of the information requested by the ACS is already known and disclosed to many individuals and organizations. Tax records, employer files, driver records, local assessor records, conversations with neighbors, and so forth—Darga said that if the FBI or some other entity wants specific information about a particular person, it can get it from these kinds of sources. But the contrast with the ACS is important to bear in mind: these other data sources are well suited to the revelation of personal information but not to producing the aggregate characteristics of an entity like a census tract, which is the ACS’s strength. Another point is that information about specific individuals is not available from the ACS—and, arguably, would not be very useful if it were.
- Procedurally, the confidentiality safeguards lauded by Steinhardt prevent browsing or look-up of individual information for purposes other than compiling statistics or evaluating the survey, and
- Logically, one could not possibly find information about Darga from the ACS for the simple reason that “my household has not gotten an ACS form yet”—the same statement that could be made by the vast majority of persons and households in the country.
Put more succinctly, Darga said that “if Big Brother wants information about me, he does not really need the ACS”—“his best bet is the Internet.”
Darga concluded by offering four specific suggestions that could be considered for reducing respondent burden and alleviating privacy concerns:
- Improve the layout of the ACS questionnaire to make it easier and quicker to report information for large families: Darga observed that the current ACS questionnaire can seem intimidatingly long but that a major reason for that is cosmetic in nature—the same core of questions is repeated five times, for each member of the family. So, he said, the person responding for a large family “needs to read through four pages of questions five separate times”—frustrating for many people, and actually difficult to do in households where language or literacy concerns exist. Darga suggested that strategies for developing and deploying a large-family form—a columnar format where each question would only have to be read once and then answers for each person made across a row—could shorten the form and make it less imposing.
- Make better provisions for complex households: For living situations like co-op student housing or group homes, the ACS data collection task is one of collecting all the information of households of many unrelated residents—and, Darga said, it is tough to envision or rely on a “house secretary who is going to take responsibility for answering the ACS” on behalf of 20 unrelated residents. In these kinds of households, the Census Bureau’s effective “all-or-nothing” approach to collection—assuming that the response will be automatically coordinated for everyone in the household—may not be effective. It might be worth considering asking the lead respondent to make the roster of people in the household but permitting checkboxes to be filled for people (or subhouseholds) where the Census Bureau would be better off making a separate contact (by mail or phone). This kind of approach could prove helpful in settings where people do not want to disclose personal information to a housemate or where some household members are unavailable when the (single) form is being filled out.
- Suggest an alternative for people who do not want to provide their legal names: Darga said that simply reporting one’s name can take respondents
aback; they might fear identity theft, they might fear exposure of immigration status, or—for other reasons—they might not want to give their name to a government agency for any other purpose. So, he reasoned, one possible fix would be to take the person’s name out of the equation. Structurally, Darga said that the primary roles played by person name in the current ACS are to simply “keep people straight while answering the questions” (e.g., to ensure that “Person 3” is actually the same person through the whole questionnaire), and to help follow-up workers clarify missing or contradictory information. It may be feasible to achieve both of those objectives by allowing respondents to use or claim a nickname or an alias; Darga added that deemphasizing the need for reporting names could help the Census Bureau make clear its interest in aggregate information rather than “building a master database of personal information on individuals.”
- Consider the use of an incentive to help reverse attitudes toward participating in the ACS: Acknowledging that “it is probably not feasible to include a cash incentive payment in the Census Bureau’s budget” for the ACS, Darga suggested “many politicians—and taxpayers—do like tax cuts.” Darga suggested that some small tax credit for people who have submitted a complete ACS form could be a reasonable way to offset a sample household’s time and effort in completing the survey. Indeed, he remarked, “we might even start to see people complaining that they haven’t received an ACS form instead of complaining that they have.”
Acknowledging that he had been asked by the workshop planners to serve as a sort of “designated complainer,” Stephen Tordella (Decision Demographics, Inc.) opened his remarks by stating that “the ACS really is a burden; there is no way of getting around that.” But, he continued, his remarks were meant to underscore two basic points, the first being that the “burden” of the ACS is—and should be—partially borne by Congress, one of the survey’s most important stakeholders. As to the second, he recalled that he used to work for a sales-driven organization, in which the constant mantra was “nothing happens until somebody makes a sale.” Tordella said that it is equally true that, for the census and the ACS, “nothing happens until people send in their responses.” Hence, his second major point—that respondents should be viewed as “the most valuable commodity we have”—“they should be revered and treasured, not threatened,” and they deserve more prominence in census and ACS operations.
First, specific to the burden argument, Tordella said that he had talked to a lot of people in the weeks before the workshop, to get their reactions to some of
the ACS questions. From those conversations, Tordella concluded that the data and the underlying estimates are certainly important but that even the advocates of the estimates have to concede that there is something ridiculous, something odd about asking people “when they left the house that morning or whether they have toilets.” He said that his own wife’s reactions—the morning of the workshop, previewing his comments—were telling. Asking her whether she knew about the question about toilets, she shrugged, and admitted that she did not understand why the Census Bureau would be asking about toilets. He followed up: “How about if somebody asked you what time you left the house this morning?” His wife’s immediate response was that the question “seemed kind of creepy,” and Tordella suggested that “there is no way that it will ever not seem creepy to some group of people.”
No one knows the real number of complaints lodged against the ACS, Tordella said—certainly not the congressional staff members he talked to about the ACS. The Census Bureau “has a little better idea,” keeping a record of correspondence that it receives, but even that misses the silent complaints—questionnaires not filed out of anger or aggravation. Still, he suggested that the correspondence suggests some of the flavor and the magnitude of complaints about the survey—even if only 1 in 1,000 of the roughly 3 million households reached by the ACS each year complain, “that is still 3,000 complaints,” from which insight can be gleaned.
From his interactions with Census Bureau staff, Tordella said that he received information about some broad categories of reactions to the ACS. The most voluminous of the complaints are those objecting to the perception of intrusive and invasive questions. He quoted from some of these complaints:
- “This is a lengthy questionnaire that asks very personal questions that are frankly no one’s business.”
- “Why would you ask, or need to know, what time I leave for work each day, and how long it takes me to get to work? Nor do I understand what my monthly bills have to do with it?”
- “I find this survey very invasive and really none of the Bureau’s business.”
A second broad category of complaints comes from people who did not return the mailed questionnaire, and so complain about the telephone and field visit follow-up steps. Again, Tordella recited some quotes from the correspondence to the Census Bureau:
- “I spoke with a Census rep last month and since then I have been getting five to ten calls a day from random people claiming to be employees of Census.”
- “I called the number for assistance [and] was horrified at the rude and insulting and harassing and threatening language used by the person supposedly there to provide assistance. Naturally, this made me more suspicious of the survey of its intended purpose.”
- “A worker came to my home and she was very hostile. She said that it was mandatory that we fill out the Census. We told her we already did the Census. She said we had to fill it out. We told her that we were under the impression that the Census was voluntary. She went on to say that there was personal information that was needed. We did report her to our local police department.”
Following the theme of the last quote, the next set of complaints concerns the mandatory nature of response:
- “In which country am I living [if this is mandatory]?”
- “I am only responding so I do not have to pay a fine—but I am EXTREMELY uncomfortable about providing this information” (Tordella emphasized that “extremely” was written in all-capitals).
- “Being forced to complete a survey like this gives me the feeling of being a disenfranchised federal tax-paying citizen with no option but to shut up, fill out the survey used to distribute money that we don’t have, and pay higher taxes.”
Other complaints are fewer in number but can be particularly vocal and passionate. Tordella said that worries about whether the ACS questionnaire is really a scam seem to be particularly acute among the elderly and infirm; their surrogates or caretakers will make appeals and inquiries raising that concern. Other complaints question the constitutionality of the ACS and the degree of overlap between the ACS and other agencies’ data sources. He closed the recitation of common complaints by reading a longer quote from one letter:
I find this American Community Survey to be appalling, invasive, and intrusive, and none of the government’s business and I intend to let my senators and congressmen know how I feel. Take note of that.
And, Tordella concluded, “a good number of them do.” Even if it is 1 in 1,000, “these are legitimate and heartfelt beliefs,” and that is the backdrop against which the ACS must operate. He said that these kinds of complaints are never going to go away completely—but neither are they a completely new phenomenon. He recalled being interviewed on Wisconsin Public Radio during the preparations for the 1980 census; reviewing some of the reported complaints about the ACS brought back memories of that day in Wisconsin because many of the same issues and claims of harassment by the census were raised then.
The dots are not hard to connect concerning the way these kinds of ACS complaints register on Capitol Hill, Tordella said. Congress is necessarily “part of the complaint bureau for the ACS”—their staff members field these kinds of complaints and the ACS, being in continuous operation, spurs such complaints every week of the year—“and Congress controls your budget.” To be sure, he added, Congress is a diverse body just as the American public is diverse, and so contains a range of views on the role of the government in the census and
ACS. Tordella noted his sincere personal belief that “there should be some people in Congress who hate the census and everything it is about—just to keep the Census Bureau on its toes.” But there are many other members of Congress for whom the ACS only exists—to the extent that they recognize it—as a source of “continual pain,” the source of those complaints from constituents.
From his conversations with congressional staff members, Tordella said that he concluded that there is a feeling out there that the Census Bureau is doing little to alleviate the perception problems with the ACS. “Press these congressional staffers about the idea of a voluntary ACS and the cost and quality implications, and their response is, ‘You fix it.’” Use cost savings from Internet data collection, or work out some new methodology, but “just go and fix it.”
With that in mind, Tordella suggested that one possible solution—or at least “a place to start”—would be for the Census Bureau to recognize that “respondents really should be king.” Newspapers have ombudsmen to take readers’ perspectives in mind and challenge editorial approaches—Tordella asked “why shouldn’t the respondent have an ombudsman” appointed at the Census Bureau? There are mechanisms within the Census Bureau that serve to protect respondents’ rights—a chief privacy officer and a Disclosure Review Board, for instance—but those are little understood (or appreciated) by the public. A highly visible ombudsman and a citizen’s advisory panel, “where people feel like they can be heard,” could go a long way to improving perceptions of the ACS.
Tordella then briefly displayed some recent screenshots of the Census Bureau’s homepage to suggest that “there should really be a lot made on the website about respondents.” On the particular day he visited the site, one of the four “top stories” on a few-seconds rotation at the top of the page actually did speak to respondents: “Your Response Makes a Difference for Small Businesses” read the headline, above a link to more information about the 2012 Economic Census. That is good, but seemingly a one-shot deal—if you have received a questionnaire from the ACS or some other survey you have to scroll all the way to the bottom of the page and find, in small type, the link “Are You in a Survey?” in order to start having your questions answered. He argued that the placement of the link has probably been tested in some way—“Census tests most things”—but he concluded that it ought to be much easier for respondents to find supporting information, to find justification for the questions they are being asked, and to feel as though their concerns are heard.
He closed by noting that these kinds of “respondent relations” steps might not alleviate all of the complaints, but that the Census Bureau could still benefit from them—and learn from the examples of other agencies. Arguably, he said, the federal agency “that has the most teeth and the most penalties and the most influence over the American people” is the Internal Revenue Service (IRS), which went through a substantial modernization in the 1990s. That modernization succeeded, he said, not solely through upgrades in information technology,
but also through concerted attention to taxpayer needs and concerns. These days, “even the IRS is kinder and gentler”—surely, he said, there must be ways for the Census Bureau to be kinder and gentler as well.
In the general discussion session following the speakers’ opening statements, Darga’s suggestion of finding some way to incentivize ACS response drew a variety of reactions. Patrick Jankowski (Greater Houston Partnership) asked other session speakers to comment on that specific proposal, and Fecso said that he worried about the idea “snowballing to all the other federal surveys”—ultimately, the financial costs of the incentive might outweigh “what you are getting out” of the incentives in terms of good response. Recalling his immersion in the complaints expressed about the ACS, Tordella replied that people might look at an incentive program and ask “why are we giving away more tax dollars for nothing?”—an incentive might boost response, but it could also create more hard-set opposition to the survey. Brian Harris-Kojetin (U.S. Office of Management and Budget) reminded the workshop audience that—just the week before the workshop—the House of Representatives approved an amendment to an appropriations bill (albeit not the Commerce, Justice, and Science bill that funds the Census Bureau) prohibiting the use of money as a respondent incentive in a survey, so the legality of federal survey incentives is a matter of considerable current debate.4 Dan Weinberg (U.S. Census Bureau) argued that even the idea of a tax credit, rather than a cash incentive, is infeasible because the Census Bureau would have to violate its own confidentiality provisions in order to tell the IRS who had completed the survey; Darga countered that some kind of stub or “receipt” from the ACS response could be attached with a tax return if the respondent wanted to claim the credit, but Weinberg argued that simply confirming a person or household’s inclusion in the ACS could constitute a Title 13 violation.
Alan Zaslavsky (Harvard University) addressed a question to Brown and his specific example of estimating service-connected disability ratings among veterans (though, he noted, other speakers in the workshop sessions could address it in their own fields). Would clients or end users like the Division of Aging Services accept a move to much more model-based estimates? Such estimates might be smoother and have smaller intervals—at the risk of possibly looking
4On June 6, 2012, the House voted 355–51 in favor of an amendment to H.R. 5325, the Energy and Water appropriations bill for fiscal year 2013, offered by Rep. Scott Tipton (R-Colorado). Motivated specifically by $2 and $20 incentives offered in a 1,000-household Bureau of Reclamation survey on attitudes toward removing dams on the Klamath River, the amendment’s language swept wider: “None of the funds made available by this Act may be used to conduct a survey in which money is included or provided for the benefit of the responder.” The whole bill, as amended, was passed that same day, and awaits consideration in the Senate.
more like a constant rate across the different disability rate categories. Put another way, is the problem for the division that the margins of error are so large and the intervals so wide, or is it that the statistical evidence does not differentiate much between the categories? Brown indicated that he thought such model-based estimates could certainly win acceptance, and could be viewed as credible compared to estimates with much larger margins of error. Brown said that his sense is that people generally accept that the highly publicized monthly unemployment statistics consist of model-based estimates.
Terri Ann Lowenthal (Funders Census Initiative and the Census Project) and Steinhardt engaged in a colloquy over Steinhardt’s assertion that the mandatory nature of the ACS is a close constitutional question. Lowenthal said that she had to “respectfully but firmly disagree” about mandatory response being a close call; she mentioned discussions in the first Congress about the content of the 1790 census (asking for more than a simple headcount) and court cases upholding the constitutionality of the content on the previous census long form. In her mind, the constitutionality of mandatory response “seems to be on pretty solid ground.” Steinhardt replied that there are legal scholars who see the question as similarly clear-cut—in the opposite direction. He said that what he finds significant is that the courts have never really addressed the issue head-on—that is, dismissing cases for lack of standing is not quite the same as strongly supporting the constitutionality of the survey content. He reiterated his comment from his opening statement that “the level of detail in the ACS”—down to the method of delivery of natural gas—“is such that I think it is a close question.” Lowenthal answered that she saw Steinhardt’s point but sees things in a different way. She said that the issue is not really whether the framers could have possibly envisioned specific questions, but the more general issue of “whether the government has a right” and the authority “to gather information for the public good.”
Asked for his comments on the main topics of the workshop and the prospects of the ACS—from his perspective as a long-time researcher, as former Texas demographer, and as former director of the U.S. Census Bureau—Steve Murdock began with a slight apology for not being able to sit in on more of the workshop. However, he said, one of the reasons that he could not be present for longer was a previous speaking engagement for officers of the American Association for Affirmative Action. When those officers learned where Murdock was heading next, he said that several asked him to “please let everybody know how important the ACS is to us—we hear it’s under attack in Congress.”
Murdock said that the ACS is “critical to what the country does for decision making,” and he commended the workshop speakers for providing very
clear indications of the value of the data in a wide range of applications. He said that he wanted to make two basic points in his short remarks, the first of which is that the ACS sample size is unlikely to ever be so large as to completely mitigate the challenges that confront users of small-area ACS estimates. He described preparing a speech in a community of about 25,000 people, and he said that he commonly prepares standard demographic profiles—from the ACS—for such speeches, together with his Rice University colleague George Hough. One of the standard figures that he likes to display is change in poverty rates over time, by race. In this particular instance, the 1990 and 2000 census data both showed the percent of Hispanic households in poverty in this community as being roughly 36 percent; generating the same variable using 2005–2009 ACS data, the figure was 11 percent. Clearly, something seemed off, and Murdock said that he decided not to use the city-level figures in his talk—and he did not. But, “sure enough,” he was challenged during discussion by someone in the audience who noted that he had shown these data for the nation as a whole, for the state, and for large metropolitan areas, and that he had presented all manner of other data for the particular community, but not that specific poverty figure. As it turned out, this particular questioner worked for the local mayor and approached Murdock after the talk, saying that the local government was currently trying to figure out how to allocate funds to various community groups—she asked whether they could cut the distribution to Hispanics in the community because the ACS shows so few Hispanics in poverty, and Murdock explained that this probably was not the case. The questioner replied, “Oh, so you’re saying these data aren’t really all that good?,” and Murdock tried to explain that the numbers have to be understood in a broader context.
Murdock explained that he brought up this story because perceptions matter a great deal; consistent with Tordella’s earlier remarks, Murdock said that it does not take a majority to create major problems for the ACS—single, vocal complaints from key stakeholders can do just as much damage.
Acknowledging that it might sound to some “like I’m a traitor to the cause”—“I am not”—Murdock said that his second main point is that it is prudent for the survey’s stakeholders to give serious thought “about what happens if our defense of the ACS Alamo doesn’t work.” The history of the battle at the Alamo suggests that many of the casualties at the fort “died so bravely because there was no way out.” Murdock emphasized that he “has absolutely no doubt in the utility of the ACS data”; having led the Bureau, he also has “nothing but complete and total admiration for the Census Bureau staff that does this work.” Nonetheless, he suggested that stakeholders need to “start talking quietly, constructively,” about viable alternatives and about managing the tradeoffs that might come through alternative sample sizes or aggregations of periods over time. He closed by expressing his hope that this workshop could be followed up with one that asks: “What would you do? What can we do? What can we suggest?” Work, he said, should certainly continue to support the current ACS.
At the end of Murdock’s remarks, steering committee co-chair Linda Gage (California Department of Finance, retired) asked whether anyone from the Census Bureau had any summary comments that they would like to offer. Jim Treat (chief, American Community Survey Office, U.S. Census Bureau) said that he came to the workshop with “high expectations of what I was going to hear,” and that those expectations had been exceeded by all the presentations and discussions. He said that he and his colleagues have a great deal of information to take back—examples of specific data uses to flesh out and emphasize, challenges in communication to work on—and he thanked the presenters and participants for their efforts. Constance Citro (Committee on National Statistics) added that the day and a half had been “fantastic,” outlining the breadth and depth of uses of ACS data—alone and in combination with other data sources—for a wide range of policy areas. She noted that the workshop would contribute, in part, to the work of a new National Research Council panel on ACS technical issues, and credited the Census Bureau with taking a hard, open look at the whole ACS program and finding ways “where it can be focused, improved, and made more useful.” Like Murdock, she said that she did “not want to be defeatist in any way,” but that she agreed that ongoing examination of the ACS demands serious alternatives—ways to make the ACS “as cost-effective as possible in a very tough political environment.”