The workshop’s second session maintained a focus on particular themes—in this case, the applications of the American Community Survey (ACS) in planning and administering social services (not just the allocation of funds but the way federal and state funds are administered at subnational levels) and in preparing for and responding to disasters. But, by its construction, it also served to focus on a particular sector of ACS users: nonprofit organizations.
Section 3–A summarizes a presentation on the use of the ACS for studying welfare “safety net” policies and its strength relative to other data sources, while Section 3–B outlines the way in which a research organization serves as an “interpreter” of ACS estimates for state and local policy makers. The disaster planning and recovery portion of the session included both a specific example—work to assess the impacts of Hurricanes Katrina and Rita and New Orleans’ recovery from those natural disasters (Section 3–C)—and a more general description of the framework for using the ACS and other data for disaster preparedness (Section 3–D). The workshop would revisit the theme of disaster preparedness in one of its featured uses of ACS data by private business; see Section 6–E. (The only questions in the closing minutes of discussion were to clarify individual remarks, so that material is woven into the other sections of the chapter rather than a final standalone section.)
Linda Giannarelli (Urban Institute) began her discussion by noting that her remarks could not be exhaustive of the hundreds, if not thousands, of ways that the ACS data have been brought to bear on examining the low-income population and social welfare. Rather, she said that she wanted to focus specifically on examples from work at the Urban Institute and elsewhere. Touched on briefly in Kathleen Thiede Call’s discussion (Section 2–A), the comparison of ACS data with those from the Current Population Survey (CPS) was a principal focus of Giannarelli’s presentation.
Giannarelli began with a brief overview of the relevant piece of the broader CPS—the module that longtime data users think of as the “March supplement” but which is formally known as the Annual Social and Economic Supplement (ASEC) to the CPS. A detailed battery of questions on income, work experience, family structure, and other topics, the ASEC is administered to roughly 100,000 households per year; this includes the households already in the CPS sample in March of a particular year as well as additional Hispanic households (since 1976) and additional households with children ages 18 or younger (since 2001; this latter category is also known as the “CHIP sample” because it was designed to improve estimates of participation in state Children’s Health Insurance Programs) (U.S. Census Bureau, 2006:11-5–11-6).1 This major supplement to the CPS has particular prominence because its information on income in the preceding year is the source of the government’s official poverty statistics; echoing Call’s description in the previous session, Giannarelli observed that the CPS ASEC has been “the workhorse of federal surveys regarding population issues” for decades, given its inclusion of questions on family structure and demographic issues as well as (dozens of questions on) types of income. The ASEC sample is designed to be representative of the nation as a whole, of broad census regions, and of individual states. However, Giannarelli added the significant caveat that the sample size of the CPS is not big enough to support analysis of low-income households in most states.
There are sharp trade-offs in the detail and scope of analysis that can be done using the ACS rather than the ASEC. In terms of income data, the ACS collects all types of welfare income in a single variable, while the ASEC splits them across different questions (and programs); with the ASEC, for instance,
1The CPS differs from the ACS in that the CPS universe is intended to be the civilian noninstitutional population (completely excluding military personnel), while the ACS includes a sample of persons living in nonhousehold group quarters. The ASEC differs from its parent CPS sample in that it “includes military personnel who live in households with at least one other civilian adult” (U.S. Census Bureau, 2006:11-6). As would be borne out in questions later in the workshop, the CPS (and ASEC) also differ from the ACS in that response to the CPS is voluntary, not required by law.
one can look specifically at benefits received under the Temporary Assistance for Needy Families (TANF) Program. Likewise, some policy-relevant types of income such as child support, unemployment compensation, and workers’ compensation are lumped together into one “all other income” variable in the ACS. On employment, the ACS asks only for the duration of a person’s employment in ranges of weeks and not a more precise measure;2 it asks only a yes/no question on whether a person has “been ACTIVELY looking for work” in the previous 4 weeks, and not a question on how many weeks a person has been seeking work.3 Finally, ACS questions on benefits received—arguably most crucial to studying welfare “safety net” policies—are limited to specific amount breakdowns for “Social Security or Railroad Retirement,” “Supplemental Security Income (SSI),” “any public assistance or welfare payments from the state or local welfare offices,” and “any other sources of income” such as veterans’ benefits or unemployment compensation.4 Clearly, Giannarelli said, this lack of detail and confounding of different income types is not ideal for safety net analysis—among other things, the ACS provides no direct insight on income received in Supplemental Nutrition Assistance Program (SNAP) benefits (formerly called Food Stamps),5 Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) benefits, or public housing assistance or vouchers.
That said, Giannarelli echoed Call’s bottom-line conclusion: the great advantage of the ACS over the CPS ASEC, and the thing that makes the ACS “irresistible” to researchers, is sheer sample size. To illustrate the point that the ACS supports state and substate analysis that the CPS ASEC simply cannot, Giannarelli displayed the 2009 ASEC and 2008 ACS sample sizes (expressed as number of people, not households) for the states of Georgia, Illinois, Massachusetts, and Wisconsin; she also displayed the numbers of those persons in each state sample with low income (less than 200 percent of the poverty line).6 For Illinois, for example, the ASEC sampled roughly 8,200 people, 2,600 of whom were low income; the ACS’s coverage for Illinois was about 123,000 people, just over 30,000 of whom would qualify as low income—and, Giannarelli said, there is simply no comparison in the degree to which that low-income population in the ACS sample could be scrutinized without having to either combine years of ASEC data or look only at very high-level geography (state or region).
2In the 2012 version of the questionnaire, Person Question 39a asks “During the PAST 12 MONTHS (52 weeks), did this person work 50 or more weeks?” (yes/no) and 39b asks “How many weeks DID this person work, even for a few hours …?,” with responses being 50–52 weeks, 48–49 weeks, 40–47 weeks, 27–39 weeks, 14–26 weeks, and 13 weeks or less.
3This latter question is Person Question 36 on the 2012 version of the ACS questionnaire.
4These are Person Questions 47d, e, f, and h on the 2012 ACS questionnaire.
5As discussed in Section 5–B, the ACS does include a yes/no indicator of whether anyone in the household received SNAP/Food Stamp benefits during the past year, but respondents are not asked about specific levels of assistance received.
6Persons in group quarters were removed from the ACS totals, to match the coverage of the CPS ASEC.
Giannarelli outlined four types of safety net analyses that can be done using ACS data, two of which she described briefly and two of which she sketched out with specific examples. The two mentioned in brief are highly similar to the types of profile analyses described by previous speakers in making use of the wide range of characteristic information available in the ACS. First, demographic profiles (numbers and characteristics) can be constructed for persons and families in poverty—either using the official measure or working through the definitions in the expanded Supplemental Poverty Measure (SPM)—at the state and, often, substate level. The advantage with the ACS is that these profiles can be derived from a single product; in the past, many states would have to combine averages across multiple years of CPS data even to get a sense of state-level poverty. And, second, for those benefit types that are clearly delineated and captured in the ACS (e.g., Social Security or SSI), the ACS permits profiles and characteristics of families or persons receiving those benefits using measures not available in administrative data.
The third type of safety net analysis made possible by the ACS uses the additional covariate information in the survey to compute person and household eligibility for benefits under state-level requirements, and then to compare the eligible population with those who actually receive the benefits. As specific examples—and further contrast between the CPS and ACS as sources—Giannarelli described two pieces of work done by the Urban Institute for different clients. First, the U.S. Department of Health and Human Services (HHS) periodically asks the Urban Institute to generate state-level estimates of eligibility for federally funded child care subsidies under the Child Care and Development Fund. In the past, this work has required using 2 years of CPS ASEC data to construct the state-level estimates but—though this work does generate some useful insights—Giannarelli conceded that the resulting standard errors on the estimates are sufficiently large as to make one question the utility of the estimates for the states. For instance, a state-level point estimate of children eligible for these subsidies might be 25,000 but, given the dollars involved, one has to wonder whether the natural next statement—that a 95 percent confidence interval for the number of eligible children suggests that the count is between 15,000 and 35,000—is really useful or informative. By comparison, the Food and Nutrition Service of the U.S. Department of Agriculture contracts with the Urban Institute to estimate state-level eligibility for WIC benefits, and the institute has been doing this work using a combination of CPS and ACS—a combination, in part, because data on WIC benefits are not directly collected by the ACS questionnaire. Simply put, Giannarelli said, there is no way that the Urban Institute could have even attempted to compute state-level eligibility estimates for a program focused on such a precise population (women, infants, and children under age 5) using the CPS ASEC alone.
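The arithmetic behind Giannarelli's caution is easy to reproduce. The sketch below (in Python) shows how a 95 percent confidence interval follows from a point estimate and its standard error; the standard error of roughly 5,100 is an invented figure, chosen only because it reproduces the 15,000-to-35,000 interval in her example:

```python
def ci95(estimate, se):
    """Return a 95 percent confidence interval (normal approximation, z = 1.96)."""
    half_width = 1.96 * se
    return (estimate - half_width, estimate + half_width)

# Hypothetical state-level estimate of eligible children with a large
# standard error, as in the CPS ASEC scenario Giannarelli described.
low, high = ci95(25_000, 5_102)
print(f"95% CI: {low:,.0f} to {high:,.0f}")
```

An interval this wide, spanning 20,000 children, is the point of the example: the estimate is formally valid but of limited practical use to a state budgeting office.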
As further examples of work of this form being done by other organizations, Giannarelli noted New York City’s Center for Economic Opportunity (CEO),
which tracks trends in poverty over time, adopting a revised measure of poverty initially suggested by a National Research Council (1995) panel. Its work—including efforts to adapt the New York City methods to estimates of poverty for New York State as a whole—has relied on the ACS. Likewise, she said that the ACS has been used extensively by the Institute for Research on Poverty at the University of Wisconsin–Madison; that group has looked at Wisconsin’s eligibility estimates at both the statewide and substate levels.
The New York City and Wisconsin groups have also done work in what Giannarelli called the fourth type of ACS-based safety net analysis—“what if” analysis, to simulate outcomes had a certain policy not been implemented or had a specific mix of eligibility requirements been altered. She specifically cited New York City work making use of the ACS Public Use Microdata Sample (PUMS) data, comparing the city’s actual and hypothetical poverty rates (computed under the city’s adopted formula) had the changes in SNAP eligibility included in the 2009 American Recovery and Reinvestment Act not been implemented.7 Absent the 2009 act’s changes, the city’s study found that the percentage of the population in poverty might have ticked up over the course of 3 years by about 3 percent (New York City Center for Economic Opportunity, 2012).
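The structure of such a "what if" calculation on person-level microdata can be sketched in a few lines. The records, benefit amounts, and poverty threshold below are all invented for illustration; they are not drawn from the ACS PUMS or the New York City study, and a real simulation would also apply survey weights:

```python
# Each record: (cash income, actual SNAP benefit, SNAP benefit under the
# counterfactual "no 2009 act" rules, applicable poverty threshold).
# All dollar figures are hypothetical.
records = [
    (14_000, 3_200, 2_400, 17_000),
    (16_500, 1_800, 1_200, 17_000),
    (22_000,     0,     0, 17_000),
    (10_000, 4_000, 3_000, 17_000),
]

def poverty_rate(use_actual_snap):
    """Share of records whose resources (income plus SNAP) fall below the threshold."""
    poor = 0
    for income, snap_actual, snap_counterfactual, threshold in records:
        resources = income + (snap_actual if use_actual_snap else snap_counterfactual)
        if resources < threshold:
            poor += 1
    return poor / len(records)

print("actual policy:", poverty_rate(True))
print("counterfactual:", poverty_rate(False))
```

Running the same poverty formula twice, once with each benefit schedule, and differencing the two rates is the essence of the approach; the analytic machinery (eligibility rules, resource definitions) stays fixed while only the policy inputs change.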
Wrapping up, Giannarelli described a few safety net analyses conducted by the Urban Institute with the ACS data. In one study, funded by the Annie E. Casey Foundation, the motivating question was how states’ current safety net policies affect child and non-elderly adult poverty. She and her Urban Institute colleagues focused on three states—Georgia, Illinois, and Massachusetts—classifying those states’ existing policies as narrow, medium, and broad safety nets, respectively. The study calculated state-level estimates using SPM-type poverty measures and made use of 2008 ACS data supplemented by data from the institute’s TRIM3 microsimulation model.8 Key results from that work are evident in the graphs reproduced in Figure 3-1. In the top graph, comparing the absence of safety net policies (the leftmost “no safety net” bars) with the full set of safety net policies (the rightmost “all benefits” bars) suggests that the safety net policies serve to cut child poverty rates by half; the bottom graph shows that two specific safety net programs (TANF and SNAP) both serve to reduce the child poverty rate to varying degrees by state. Giannarelli commented that the Urban Institute could never have conducted this type of analysis using the
7The American Recovery and Reinvestment Act, commonly referred to as the 2009 stimulus bill, increased SNAP benefits by 13.6 percent. The hypothetical estimates were based on the normal expansion that would have applied to Food Stamps without the 2009 act; it also made assumptions about the growth rate of the SNAP caseload in the city. See New York City Center for Economic Opportunity (2012:32–34).
8Formally the Transfer Income Model, Version 3, TRIM3 simulates tax and health programs, generating estimates at the individual and family levels as well as for geographic entities (state and nation). TRIM3 is developed by the Urban Institute with primary funding from the Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services. Additional information on TRIM3 is available at http://trim3.urban.org.
FIGURE 3-1 Top: Child poverty rate by general extent of “safety net” provisions. Bottom: Effects of two specific safety net programs on child poverty rates.
NOTES: SNAP, Supplemental Nutrition Assistance Program; SPM, Supplemental Poverty Measure; TANF, Temporary Assistance for Needy Families Program.
SOURCE: Workshop presentation by Linda Giannarelli, based on data from the 2008 American Community Survey.
CPS alone; even if they had combined years of CPS data to get a sufficiently large sample of low-income people in each of the states, the safety net programs change sufficiently from year to year that the results would not be credible.
She added that the Urban Institute had developed a body of work doing “what if” state poverty analysis relying primarily on the CPS (in combination with TRIM3); for instance, this type of analysis was done for poverty commissions in Connecticut and Minnesota in 2008–2009. In 2009–2010, the Urban Institute used funding from the Annie E. Casey Foundation to adapt TRIM3 to work with ACS data for these “what if” studies. To date, the institute has completed two projects making use of the new ACS-based methods, conducting poverty analyses and estimating the effects of specific packages of policy changes in Illinois (for Heartland Alliance) and Wisconsin (for Community Advocates).
Established in 1989 as the Heartland Alliance Mid-America Institute on Poverty and bearing its current name since 2010, the Social IMPACT Research Center (IMPACT) is a research and evaluation arm of the nonprofit Heartland Alliance for Human Needs and Human Rights in Chicago.9 For more than 12 years, IMPACT has been involved in the generation of an annual Report on Illinois Poverty (the most current version of which is Social IMPACT Research Center, 2011); as IMPACT’s associate director Amy Terpstra noted in her workshop presentation, IMPACT’s work focuses on populations who are economically vulnerable or experiencing economic hardship—and on the issues affecting them. Its primary role is to convey important issues and trends affecting quality of life for low-income individuals to local social service agencies, policy makers, and the general public. In this work, Terpstra suggested that IMPACT and agencies like it play an important role as “interpreters” of ACS data for the broader public—and that, for purposes of informing policy debates, the availability of ACS data has been a “game changer.”
Terpstra began her remarks by recalling when she first got into this work; she said that she was shocked at the degree to which policy decisions related to low-income individuals were made based on gut feelings—or whims and fancies—rather than empirical evidence. IMPACT takes as its goal equipping decision makers with good information, or the best information available, and making it accessible and easily digestible.
Terpstra structured her presentation around a few major ways in which IMPACT uses ACS data, the first of which is to educate and promote policy change—in particular, policies to help people experiencing economic hardship. IMPACT’s annual report on Illinois poverty is a primary resource in this regard. The report grew out of concerns in the late 1990s when the national economy seemed flush yet Heartland was still seeing people “coming through our doors needing jobs, needing extra support to make ends meet”; it was conceived primarily as a way to educate elected officials on continuing struggles with poverty. Terpstra said that the structure of the annual report has
evolved over the years, but that IMPACT still endeavors to make it a very visual data book, using information graphics and graphic design to make the data visually appealing and accessible. She displayed a summary page from the 2011 report (reproduced in Figure 3-2); as she observed, the page is not particularly infographic-laden, but it does cleanly break down high-level figures on poverty in the state. For people well steeped in the data, the figures are nothing new, but for most people—and many decision makers—the concept that there are about 1.6 million people in Illinois who are poor is “mind-blowing.” In addition to presenting the basic facts of poverty, IMPACT’s annual report also focuses on what it calls “pathways out of poverty”—summarizing statistical indicators of employment, health and nutrition, assets, and housing.
Consistent with Call’s and Giannarelli’s experiences, Terpstra said that IMPACT used to rely heavily on the CPS for the facts and figures in its reports, but has now transitioned to using the ACS. She said that this has provided greater flexibility in analysis and greater confidence in the results. In 2012, with the availability of 5-year ACS numbers, IMPACT revisited the way that it presented information for all 102 counties in Illinois. In the past, the available information on the counties was basically put into an appendix listing in the print report; this year, IMPACT created a web-based portal at http://www.ilpovertyreport.org to allow users to directly access ACS results and other data for the counties in an interactive manner. In addition to screen display, the site provides users with the capacity to download data tables and to access ready-made “fact sheets” for easy reference.
Terpstra said that IMPACT and Heartland Alliance are finding that county social service agencies are using this web portal quite extensively. This is encouraging because those local agencies are probably IMPACT’s primary user constituency; through IMPACT’s analyses, the local agencies are able to better understand needs in their community and decide what information they can take to their elected officials, to inform debates on budgets and priorities. Live since December 2011, the Illinois Poverty Report portal has had on the order of 6,000–7,000 unique users.
The online data and report portal is one increasingly important tool but, for purposes of IMPACT’s second principal use of ACS data—documenting local trends in poverty and related phenomena—some more old-fashioned approaches remain effective. Each fall, when new ACS (and CPS) data generally become available, Terpstra said that IMPACT does a very big push to pull in that information and then to generate updated series of custom “fact sheets” for a wide assortment of geographies (counties and cities) in Illinois. She displayed some pages of the standard fact sheet, providing data for Winnebago County in northern Illinois; one of those pages is reprinted in Figure 3-3. Like the annual poverty report, the intent is to present the information in an easy- and attractive-to-read format. IMPACT then disseminates these fact sheets to its list of interested users and agency partners, and also launches a fairly extensive media campaign.
FIGURE 3-2 Poverty and hardship in Illinois are not limited to one region of the state; counties all across Illinois struggle with poverty-related issues. Visit www.ilpovertyreport.org to access county-level data and download the state poverty map.
SOURCE: Extract (p. 4) from Social IMPACT Research Center (2011) (with minor cropping for size), as displayed in workshop presentation by Amy Terpstra.
FIGURE 3-3 SOURCE: Extract (p. 2) from Winnebago County fact sheet, downloaded from http://www.scribd.com/doc/65972489/Winnebago-County-Fact-Sheet, as displayed in workshop presentation by Amy Terpstra.
On this note, Terpstra said that she wanted to express one soapbox position in the presentation—concerning the Census Bureau’s embargo policy on ACS releases. In 2011, the Bureau’s policy changed—tightened—so that IMPACT and policy groups like it had their early-access rights (under an embargo period) removed, severely handicapping their ability to serve as “interpreters” for local media reporters. IMPACT had to switch to doing all (or the bulk) of its analysis, and trying to update dozens of fact sheets, on the day of release. She argued that the Bureau should revise its embargo policy or implement a secondary policy covering agencies that provide media support services. Otherwise, the tighter embargo policy serves to blunt the media attention that might otherwise accompany the new data. IMPACT has found a niche in serving reporters and media sources without the means to conduct analysis on their own—in many cases, even lacking access to Microsoft Excel or the most basic of spreadsheet programs with which to manipulate data. Those small media outlets, technically, are the ones that could have access to the data under the embargo period, but they completely lack the wherewithal to do anything with embargoed data; meanwhile, organizations that could serve to parse the data for those reporters are locked out. (The topic of the ACS embargo policy would naturally arise again in the workshop session on media perspectives; see Chapter 4.)
Another use that IMPACT tries to make of the ACS is to actually have an impact on programs and policies concerning people experiencing economic hardship—to drive informed, solutions-based change. The example that Terpstra walked through in this area is work that IMPACT has done for the Greater Chicago Food Depository, one of the numerous social service agencies that make data requests of IMPACT. IMPACT’s relationship with the Food Depository goes back many years, and IMPACT has performed many iterations of its analysis of the need for nutrition support among seniors in Cook County and the city of Chicago. In the work for the Food Depository, the basic objective is to estimate hunger among seniors in small areas in Chicago and then to compare those findings with actual disbursements (of funds from federal nutrition programs and of food from the Depository itself). The most recent iterations of the analysis make use of 5-year estimates from the ACS, aggregating census tracts into the 77 long-established community areas that are commonly used as “neighborhoods” in studies of Chicago. The community-area ACS estimates were then compared to administrative data from the nutrition programs; maps were then drawn up to clearly portray areas where need was greatest (and areas where services were most lacking). The Food Depository has taken this analysis and begun to develop plans for new service in some areas of the city; similar action has been taken based on a similar analysis IMPACT did on children’s nutrition programs.
Completing this train of thought, Terpstra quickly displayed a screenshot
of an ongoing data book (and data product) that IMPACT continually updates, which aggregates ACS data to the previously mentioned community areas that are well known and understood in the city. Again, she said, IMPACT serves as an interpreter and conduit for the data; most of their downstream data users do not have the capacity to directly manipulate ACS files into the familiar community areas, so IMPACT provides a service by assembling the tract-level ACS data into a more usable format.
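The tract-to-community-area assembly that Terpstra describes can be sketched in a few lines. The tract estimates and margins of error below are invented for illustration; the root-sum-of-squares rule for combining margins of error reflects the Census Bureau's published guidance for aggregating independent ACS estimates:

```python
import math

# Hypothetical tract-level ACS estimates for one Chicago community area:
# tract id -> (estimate, published 90 percent margin of error).
tracts = {
    "tract_A": (1_250, 210),
    "tract_B": (980, 185),
    "tract_C": (1_430, 240),
}

# Summing estimates is straightforward; the combined margin of error is
# approximated as the square root of the sum of squared tract MOEs.
estimate = sum(est for est, _ in tracts.values())
moe = math.sqrt(sum(m ** 2 for _, m in tracts.values()))
print(f"community area estimate: {estimate:,} ± {moe:,.0f}")
```

Note that the combined margin of error (about 369 here) is much smaller than the simple sum of the tract MOEs (635), which is one reason aggregating tracts into larger, familiar geographies yields usably precise figures.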
Finally, Terpstra summarized two examples of IMPACT’s use of the ACS to inform targeted poverty reduction strategies. First, IMPACT does analysis of extreme poverty conditions for the state’s Commission on the Elimination of Poverty, beyond the production of the annual poverty report described above. The commission itself was established by the state after Heartland Alliance’s work identified an increasing trend in extreme poverty (defined as people with income below 50 percent of the federal poverty level); IMPACT was then engaged to produce specialized tables and graphics on this population and its characteristics. The first cut at this analysis came early in IMPACT’s experience with ACS data. If it is repeated, Terpstra suggested that it would likely be more sophisticated given the experience that they have acquired in working with ACS PUMS files. But the work was sufficient to demonstrate three clear subgroups of interest in the extreme poverty population. One segment consists of vulnerable populations who are prone to extreme poverty because they either cannot work or are not expected to work; this segment includes young children, seniors, and persons with disabilities. A second segment consists of people who are in the labor force and working, but who do not have enough work (e.g., part-time) or simply do not make enough at work to lift themselves out of extreme poverty. But the third segment raised questions and concerns because they are unemployed but—on paper, at least—look like they should be able to work; they are of working age, they are not disabled, and they are not in school or college. This research was useful to the commission in structuring its work and framing its recommendations to the Illinois legislature; the commission is still in existence and using the analytic framework developed by IMPACT, and Terpstra said that IMPACT intends to update the analysis for the commission.
Second, IMPACT has also been engaged to perform analyses in support of the state’s Human Services Commission. Working primarily with 1- and 3-year ACS estimates, IMPACT has generated extensive analyses of areas of need for youth services, disability services, and housing and homeless services in the state. For instance, IMPACT has used ACS data to identify extremely rent-burdened households—those paying over half of their income on rent—and then to project potential demand and need for assistance from state funds targeted to relieve such households.
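The rent-burden screen itself is a simple threshold test. A minimal sketch, using invented household figures rather than actual ACS microdata, flags households whose gross rent exceeds half of income:

```python
# Hypothetical households: (annual household income, annual gross rent).
households = [
    (24_000, 13_000),
    (40_000, 15_000),
    (18_000, 10_000),
    (60_000, 20_000),
]

# "Extremely rent-burdened": gross rent exceeds 50 percent of income.
burdened = [(inc, rent) for inc, rent in households if rent > 0.5 * inc]
share = len(burdened) / len(households)
print(f"{share:.0%} of households are extremely rent-burdened")
```

In practice the same flag can be computed directly from the ACS's gross-rent-as-a-percentage-of-income tabulations; the point of the sketch is only the definition being applied.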
Terpstra concluded by reiterating that the ACS has been a game changer for IMPACT’s work; very little of the work they have performed in recent years would be possible with any other data set. Accordingly, she noted that
IMPACT is worried about the prospect of the ACS being either defunded or hobbled. Such a development would cut off the flow of reliable information that social service organizations, advocacy groups, local media outlets, and state legislators have come to expect.
Allison Plyer began her remarks by noting a similarity of mission between her organization and Terpstra’s. The Greater New Orleans Community Data Center (GNOCDC)—at which Plyer serves as deputy director—is a member of the National Neighborhood Indicators Partnership (NNIP)10 organized by the Urban Institute. More colloquially, Plyer described the member organizations of the partnership as “a group of data geeks from around the country trying to help local communities work with data”; like Terpstra’s organization, a major part of this effort is interpretive, making those data more understandable to a variety of users. In Plyer’s case, a great deal of GNOCDC’s work in recent years has been marshaling data resources to explain the devastating impact of Hurricane Katrina in late August 2005 and New Orleans’ recovery. As she explained, data from the ACS have proved instrumental and, without ACS data, many constituencies would be “flying blind.”
Though the Katrina example looms large, Plyer began more broadly, displaying a county-level map shaded by the number of times each county had been included in a presidential disaster declaration between 1964 and 2010.11 The red shading on the map—particularly dense in areas like eastern North Dakota (flooding), southern California and the Pacific Northwest (flooding and wildfires), the tornado belt in the plains, and the Gulf Coast and Florida (hurricanes and severe storms)—underscores the basic notion that large parts of the United States are at risk of catastrophic disasters, natural and man-made. It is also well known that many heavily populated areas face significant disaster risk—the San Francisco Bay area and the large cities of Florida—but other risks are not quite as obvious. To illustrate this point, Plyer showed two graphics; one, showing New York City’s defined hurricane evacuation zones, is sufficiently abstract to provide some “emotional distance” when viewed. But the other—a rendered aerial map of what central Boston could look like in the year 2100, under the combined brunt of rising sea levels, natural subsidence, and a category 2 hurricane storm surge—is considerably more arresting.
11An expanded version of this map is available from the Federal Emergency Management Agency at http://gis.fema.gov/maps/FEMA_Presidential_Disaster_Declarations_1964_2011.pdf; the higher-level page at http://gis.fema.gov/DataFeeds.html (cited by Plyer as her source for the rendered map in her presentation) contains links to the underlying data in a variety of formats.
Plyer observed that a previous National Research Council (2007a:72) panel had commented that “people who are responding to disasters complain that they are often operating in a data vacuum” and that, in this vacuum, “responders have difficulty setting short-term priorities, allocating scarce resources efficiently, or establishing strategic plans for longer-term recovery efforts.” Plyer said that when a disaster strikes, the first “data” that are commonly available are images—photos from the media—that can have an effect on their viewers but that do not really provide a sound basis for characterizing who has been affected by the disaster, how much assistance is needed, and where those resources should be steered. GNOCDC originated in 1997 to help civic and nonprofit leaders in New Orleans use data to work and plan strategically. By 2001, GNOCDC had developed its website (http://www.gnocdc.org) with easy-to-use data profiles (using data from the 2000 census and its long-form sample), broken down by the 73 New Orleans neighborhoods defined by the city’s planning commission. GNOCDC also emphasized a willingness to answer questions from users unable to find information on the site; Plyer briefly displayed an “Ask Allison” page from the site, asking interested (or perplexed) site visitors for contact information and offering New Orleans-area nonprofit organizations some free consulting time to address concerns. Plyer displayed graphs showing that the GNOCDC website averaged about 5,000 visits per month from August 2003 through July 2005—many visitors involved in planning and advocacy activities, as Terpstra described, and many local groups seeking data to support grant applications.
That steady state of visitors to the website (and the operations of GNOCDC itself) was upended as Hurricane Katrina gained strength over the Gulf of Mexico and tracked toward southeast Louisiana between August 26 and 28, 2005, prompting an unprecedented mandatory evacuation order from the New Orleans city government on the morning of August 28. As is now—tragically—well known, the storm made landfall southeast of New Orleans early on August 29; storm surge waters and catastrophic failure of the city’s protective levees and floodwalls caused flooding in roughly 80 percent of the city’s land area; the hurricane maintained strength as it hit the Gulf Coast of Mississippi before finally starting to weaken inland. The scramble for information in the buildup and aftermath of the storm is evident from the website access graphs; from a historic steady clip of 5,000 visits per month, GNOCDC’s site experienced 40,000 visits in August 2005 and 80,000 in September 2005. Starting in October 2005, visits to the GNOCDC site began to stabilize around the new steady average it has since maintained, about 15,000 visits per month. This massive spike in web traffic came as GNOCDC’s small staff was—like the rest of the city’s populace—physically displaced, to Georgia, Texas, and elsewhere. (Fortunately, the center’s computer server was physically located in Kentucky.)
Plyer recounted some of the queries received by GNOCDC in the days and weeks following Katrina, and in the first period of recovery, all of which she described:
- From a national charitable organization, a request for information on the number of low-income seniors in areas across Louisiana, to steer their aid;
- From a state emergency preparedness agency, a request for solid counts of non-English-speaking persons in small areas of southeast Louisiana, in order to best print and circulate evacuation guides and other materials in Spanish, Vietnamese, and French;
- From the local public defenders’ office, a request for a comprehensive demographic profile of the post-Katrina city—without which they would have no basis for determining whether court juries are actually representative of the population;
- From a state health agency, a request for detailed demographic statistics by small area, in order to make sure that state HIV/AIDS outreach efforts were being appropriately planned during the area’s economic recovery; and
- From a large real estate group, an updated demographic profile for a specific high-ground New Orleans neighborhood, to make the case to a major potential client that the specific neighborhood was booming.
Faced with these questions, Plyer said that GNOCDC had to make use of the best data then available—flagging census blocks by their extent of flood damage (as determined by the U.S. Geological Survey and others) and aggregating small-area population statistics from the 2000 census (and its long-form sample) for flooded and nonflooded areas. This provided some useful insight into the number of people who could return to relatively undamaged (and high-ground) parts of New Orleans when the city reopened. But the data—already 5 years old—were static, and so were not ideal for chronicling the city’s recovery. As the city began to repopulate, it remained an open question how the demographics of the city were changing, and which pre-Katrina residents were returning and which were not. Accordingly, GNOCDC eagerly welcomed the ACS as it entered full-scale collection—and was greatly relieved that the region would not have to wait until the 2010 census for a good reading on New Orleans demographics.
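The block-level aggregation Plyer described can be sketched in miniature (the block records and counts below are entirely hypothetical; the actual GNOCDC tabulations are not reproduced in this summary):

```python
# Hypothetical block-level records: 2000 census population counts plus a
# flood-damage flag of the kind derived from USGS flood-extent mapping.
blocks = [
    {"block": "A1", "pop": 1200, "flooded": True},
    {"block": "A2", "pop": 800,  "flooded": False},
    {"block": "B1", "pop": 1500, "flooded": True},
    {"block": "B2", "pop": 600,  "flooded": False},
]

# Aggregate small-area population totals for flooded vs. nonflooded areas.
totals = {"flooded": 0, "nonflooded": 0}
for b in blocks:
    totals["flooded" if b["flooded"] else "nonflooded"] += b["pop"]

print(totals)  # {'flooded': 2700, 'nonflooded': 1400}
```

The same roll-up, applied to any long-form variable (income, vehicle access, language), is what let GNOCDC estimate how many people could return to the undamaged parts of the city.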
As 2006 and 2007 ACS data became available, GNOCDC began to generate a series of analyses that it has since updated on an annual basis. For example, the ACS data showed that the populace of Orleans Parish had changed strikingly along some key variables: significantly fewer people who had not completed a high school degree, fewer households lacking access to a vehicle, a drop in the percentage of the population living in poverty, and an uptick in the percentage of foreign-born population. Plyer conceded that their analyses lack a clean baseline for pre- and post-Katrina comparison because the ACS data for 2004 were still being
collected at the reduced, Census 2000 Supplementary Survey level; GNOCDC used the 2000 census long-form sample data as the basis for comparison with 2006 and 2007 ACS data. Later, in response to a question, Plyer noted that use of the ACS data was not struggle-free: ACS estimates rely on the Census Bureau’s population estimates to weight the sample responses, and if those estimates are off—as might reasonably happen in an area undergoing drastic population shifts—then the ACS estimates might be problematic. This was a problem that GNOCDC and the Census Bureau struggled with, she said; there is a mechanism for localities to challenge the Bureau’s population estimates by submitting alternative data and arguments, and Orleans Parish had its 2007 population estimates revised upward after GNOCDC and local officials argued that the Census Bureau estimate seemed low.
ACS data on economic conditions have proven particularly useful in studying the rebuilding area in recent years because the area has been responding to multiple shocks. Post-Katrina, the number of people in poverty in New Orleans dropped significantly because, as Plyer said, “those folks had a hard time returning.” But the poverty rate then ticked upward with the national recession. Along related lines, GNOCDC began partnering with the Urban Institute to produce a series of housing reports, examining trends in greater New Orleans (principally using ACS numbers) and comparing them with other cities or the nation as a whole. These analyses document noticeably higher housing costs in Orleans Parish post-Katrina, regardless of whether the housing is rented or owned/mortgaged. They also show that, in 2010, about 35 percent of New Orleans homeowners were housing-cost-burdened, in that they paid at least 30 percent of their pre-tax income on housing—a figure greater than the national average (but less than levels in cities like Las Vegas and New York). Similarly, GNOCDC has partnered with researchers from the Brookings Institution on a major project to track New Orleans’ recovery and to put it into long-term perspective, drawing on roughly 30 years of trend data for comparison.
Plyer acknowledged that other sources can provide specific glimpses—for instance, Louisiana Department of Education data, compiled from local districts, were the source of one displayed graph tracking public school enrollment before and after Katrina. And, arguably, other sources might provide more detail (for instance, wage data from the Bureau of Economic Analysis). But she emphasized that, “to look at community well-being, we really needed the ACS.” She said the ACS was pivotal in answering the myriad questions that came into GNOCDC, and without it all of those requesters—“the business sector, the legal community, policymakers, emergency preparedness folks, public health, nonprofits, and the media”—“really would have been flying blind for five years.”12 When disasters strike, “everything is uncertain”—and uncertainty in information only compounds the problem.
12In mentioning the media, Plyer endorsed Terpstra’s comments about serving an interpretive role for media outlets; many of the reporters “definitely count on us” at GNOCDC because they
Plyer used her closing minutes to discuss work that GNOCDC is doing to address one major challenge associated with ACS data: presenting the uncertainty inherent in the estimates. As Terpstra and IMPACT do for neighborhoods in Chicago and places within neighborhoods, GNOCDC is planning to roll out additional ACS data tables by neighborhood for New Orleans. Plyer displayed a couple of screenshots of the tabular interface, including columns for the standard error of each estimate. She said that other NNIP partners had simply posted the standard errors, without much more context than that. GNOCDC is currently completing work on a small online widget as a companion piece for the new ACS tables; for instance, users can enter two percentages and their associated standard errors and see a plain-English, yes/no statement as to whether the difference is statistically significant. Users will be able to access a similar widget after using other features of the site, such as combining income categories. Through these easy-to-use features, GNOCDC hopes to make the margins of error less mysterious (or frightening) to its downstream data users.13
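The calculation behind such a widget can be sketched as follows; this is an illustrative guess at the widget’s internals, not GNOCDC’s actual code. It applies the standard rule for comparing two survey estimates: divide the difference by the square root of the sum of the squared standard errors, and compare the result with a critical value (1.96 for 95 percent confidence). A companion function approximates the standard error of combined categories, the other use case Plyer mentioned.

```python
import math

def compare_estimates(est1, se1, est2, se2, z_crit=1.96):
    """Return a plain-English verdict on whether two survey estimates
    (e.g., neighborhood percentages) differ at the chosen confidence
    level; z_crit=1.96 corresponds to 95 percent confidence."""
    z = abs(est1 - est2) / math.sqrt(se1 ** 2 + se2 ** 2)
    if z > z_crit:
        return "Yes, the difference is statistically significant."
    return "No, the difference is not statistically significant."

def combined_se(standard_errors):
    """Approximate standard error for a sum of estimates (e.g., merged
    income categories): square root of the sum of squared SEs."""
    return math.sqrt(sum(se ** 2 for se in standard_errors))
```

For example, comparing a 34.0 percent estimate (SE 2.1) with a 27.5 percent estimate (SE 2.4) yields a “Yes” verdict, while two estimates a couple of points apart with similar SEs typically do not.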
Closing the session, Russ Paulsen (executive director for community preparedness and resilience, American Red Cross) conceded that his remarks would be unlike the other presentations in that they would not be ACS-centric; he said that he would be unable to sort out exactly what information the Red Cross derives from the ACS versus the CPS versus any other data source, and that analysts at the American Red Cross national headquarters tend to use data products prepared by outside vendors. What the workshop steering committee asked him to do was to talk through a general framework through which data like those from the ACS are used in disaster response, recovery, and preparedness planning—the field in which Paulsen said he has some 20 years of experience, including the major response to Hurricane Katrina discussed by Plyer.
simply “don’t have spreadsheets” and lack the ability to directly manipulate data. This thread would be revisited in the media perspectives session; see Chapter 4.
13The ACS margins of error at the census tract level, and some oddities in the data, are “the bane of our existence with ACS data,” Plyer said—ending her talk by showing a map of tract-level median household income derived from 2006–2010 ACS data. Much of the picture that results makes sense; relatively wealthy and relatively poor areas of the city stand out from each other—except for a one-tract pocket of the otherwise low-income Lower Ninth Ward that shows a “higher than average” median income. As is well known, the Lower Ninth Ward suffered Katrina’s worst devastation, and still struggles to recover. Put bluntly by Plyer, “we don’t know what to do with this”—the anomalous item has baffled GNOCDC, local housing planners, and other officials.
Paulsen said that the American Red Cross uses data (and the ACS) throughout the entire “disaster cycle”: preparedness for the event, response when it happens, recovery from the effects, and back to preparedness. Similar to the point Plyer made by showing the map of presidential disaster declarations, Paulsen displayed statistics for American Red Cross disaster response in 2011: mobilizing about 28,000 disaster workers and 2.6 million relief items and opening just over 1,000 emergency shelters. The organization fielded responses costing at least $10,000 in nearly every state as well as Puerto Rico. Though the major, large-scale disasters—among them the 29 tornadoes, 27 floods, and 15 hurricanes the American Red Cross responded to in 201114—are most prominent in media coverage, Paulsen noted that the organization responds to thousands of smaller disasters—on the order of 70,000 house fires alone.
That demographic information from a collection like the ACS can be useful in responding to a large-scale disaster is fairly clear and was made vivid by Plyer’s remarks. Granted, Paulsen continued, it might not be immediately obvious how that demographic information might be useful in responding to a house fire. But it is crucial for an organization like the Red Cross to have sound data on which to base its projections and its decisions on allocating resources and staff. The American Red Cross has developed formulas to project how many temporary shelters might be needed in an area when a disaster strikes, and these formulas are based heavily on the demographics of the affected area. As Paulsen said, it is important that those demographic data be up-to-date for the models to work; projections based on 10-year-old data would not be terribly helpful. Similar projections and formulas are used in estimation and planning—predicting how many meals might need to be served during a disaster response, how many responders or vehicles are needed, and the potential cost of the response. In addition to these projections of the scope of the disaster and the requisite magnitude of response, data feed a variety of ad hoc reports while the response is in progress. Because the Red Cross is a broad national organization building on the work of individual chapters, data from the disaster response stage inform reports from local chapters, which national headquarters uses to assess how resources are being allocated throughout the organization.
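The summary does not reproduce the Red Cross’s actual shelter formula. Purely as an illustration of the kind of demographically weighted projection Paulsen described, one might scale a baseline sheltering rate by the local shares of groups likelier to use shelter services; every coefficient below is invented for the example.

```python
def projected_shelter_need(population, pct_elderly, pct_poverty,
                           pct_no_vehicle, base_rate=0.02):
    """Invented illustration, not the Red Cross formula: start from a
    baseline share of the affected population expected to need shelter,
    then scale it up where elderly, low-income, or carless residents
    (who are likelier to need or use shelters) are concentrated."""
    multiplier = 1.0 + 1.5 * pct_elderly + 1.0 * pct_poverty + 0.5 * pct_no_vehicle
    return round(population * base_rate * multiplier)
```

For an affected county of 100,000 people with 15 percent elderly, 20 percent in poverty, and 10 percent of households lacking a vehicle, this sketch would project about 2,950 shelter spaces. The point is Paulsen’s: whatever the coefficients, the projection is only as good as the currency of the demographic inputs.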
For a particular disaster—a fairly localized one, like a tornado—Paulsen sketched the basic way in which data like ACS estimates are used in disaster response. The first step harkens back to the beginnings of the frameworks described for using ACS data in health care and transportation planning (Sections 2–C and 2–D): quick assessment of the demographic profile of the affected area (say, a county), and at a more granular level as appropriate, to try to get a sense of where impacts are likely to be worst and where Red Cross services might be needed most urgently. Specific data variables considered in this stage include population density, age breakdown (with particular emphasis on the elderly, who are more likely to need—and use—emergency shelter services), language barriers, and combinations of age and ethnicity that might correlate with dietary and nutritional requirements. A variety of economic variables—percentage living below the poverty level and housing tenure (renter/owner)—are also important to assessing outcomes, as is the extent of housing vacancies (including seasonal homes) in the area. Mapping these data is often the most effective way to focus services and resources on the subareas of most acute need.
14Per Paulsen’s slides, other major disasters covered by the American Red Cross in 2011 were 45 multifamily fires, 10 wildfires, 4 blizzards, and the August 2011 Virginia earthquake.
In disaster response, Paulsen said, the “name of the game is getting ahead of the curve as fast as you can.” In its planning, the Red Cross works to have people on hand with the specialized skills needed to address particular problems—but it still takes time to get them into position where they can do the most good. A community like Pascagoula, Mississippi, might ordinarily have a local Red Cross staff of two people; during the response to Katrina, the Red Cross needed to deploy on the order of 25,000 people to Pascagoula, many (if not most) from outside the local area. So, he stressed, the data-driven assessments of impact and need in the wake of a disaster must be completed quickly, and the value of data in making effective resource allocations decreases sharply with the age of the data. Put bluntly, he said, “we are going to make bad decisions” if all that is available are 10-year-old data. Paulsen diverged from his presentation to comment on the question raised by Census Bureau staff in the earlier discussion session (Section 2–F) of whether a shift of a few weeks or months in data release time would really affect results. As an end user and a manager doing disaster response, he said, “I am really hungry for speed because I feel like I am going to make better decisions the more current the data is”; the actual effective difference that a few weeks or months of increased data recency makes in a response would depend on the magnitude of the change caused by the disaster. For longer-term planning and for disaster preparedness—as he would discuss next—change is slower, and so lags might not matter. But fresh, recent data are invaluable in the immediate response and recovery phases of the disaster cycle.
Paulsen echoed Plyer’s comment about the usefulness of timely, accurate data in the next step in the cycle—recovery from the disaster. However, given that Plyer had discussed recovery in great depth, Paulsen moved on in the cycle to discuss the role of data in disaster preparedness. He said that preparedness is his new major focus in work at the Red Cross; having led recovery from Katrina and having worked in disaster response for a long time, he said that he wants to get ahead of the curve and try to reach a point where people do not have to suffer so much when disasters occur. As he put it, Red Cross services cannot be instantaneous at every disaster everywhere, no matter how big the Red Cross is; “people have to do some stuff on their own, and they can reduce their own risk.” He outlined the basic steps of a community preparedness strategy:
- The first step is community assessment: just as in disaster response, a preparedness plan begins by taking stock of available existing resources.
- Integral to that overall assessment is understanding the demographics of the community.
- These steps combine to allow a community to document its assets and to assess gaps—in other words, to map the community’s vulnerabilities and be aware of them.
- Finally, with the vulnerabilities known, an action plan to address them is developed.
Paulsen said that this kind of strategy is difficult to accomplish at high levels of geography; it is tough to do in a focused way for whole states, counties/parishes, or cities. Rather, he said, this is work that needs to be done at the neighborhood level to be most effective—and so data from the ACS, which can drill down to very small areas, are critical.
Specific demographic variables covered by the ACS that Paulsen suggested are most useful in a preparedness strategy include total population, age breakdowns, race and ethnicity breakdowns, foreign-born population, language other than English spoken at home, educational attainment, household structure and median household income, percentage living in nonpermanent housing stock (i.e., mobile homes), percentage lacking phone service, percentage lacking an available vehicle, and percentage unemployed (persons over 16 not in the labor force). These data are important to getting the “big picture” of the community and homing in on vulnerabilities—and they are vital to Red Cross staff when disasters do strike, to focus attention on areas of greatest need.
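As a toy example of how such ACS variables might be rolled into a single neighborhood screening measure (this scoring scheme is invented for illustration; it is not a Red Cross method, and any real index would weight and normalize its inputs with care), one could average the tract-level shares, each expressed as a 0–1 fraction:

```python
def vulnerability_score(tract):
    """Invented illustration: average several tract-level shares (each a
    0-1 fraction) so that higher scores flag neighborhoods with more of
    the vulnerability markers Paulsen listed."""
    factors = [
        tract["pct_elderly"],          # age breakdown
        tract["pct_limited_english"],  # language other than English at home
        tract["pct_mobile_homes"],     # nonpermanent housing stock
        tract["pct_no_phone"],         # lacking phone service
        tract["pct_no_vehicle"],       # lacking an available vehicle
        tract["pct_poverty"],          # income/poverty proxy
    ]
    return sum(factors) / len(factors)

# Hypothetical tract profile built from ACS-style shares.
tract = {"pct_elderly": 0.20, "pct_limited_english": 0.05,
         "pct_mobile_homes": 0.10, "pct_no_phone": 0.02,
         "pct_no_vehicle": 0.15, "pct_poverty": 0.18}
print(round(vulnerability_score(tract), 3))  # 0.117
```

Scored across a city’s tracts, even a crude measure like this supports the mapping step Paulsen emphasized: documenting assets, exposing gaps, and focusing the action plan on the neighborhoods that need it most.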
In summary, Paulsen said that he cannot imagine having implemented response and recovery to Katrina without the work done by Plyer and GNOCDC—or working through longer-term recovery issues in the area without data more current than 2000 vintage. As a professional in disaster response, he cannot imagine an effective response with really old data; likewise, it is inconceivable that either effective disaster planning or preparedness can be done well without fine-grained small-area data like those provided in the ACS.
Paulsen closed by making brief note of one specific application in which Red Cross staff use census, and now ACS, data regularly and extensively: blood services. The major blood types in the commonly used ABO classification system (A, B, AB, and O) and their + and - variants (depending on the presence or absence of the Rhesus [Rh] factor) occur in different proportions among race and ethnicity groups. Beyond those main types, some rare blood types are effectively unique to specific demographic groups.15 As a result, the Red Cross uses ACS and census data to understand the potential blood donor markets and the potential recipient markets for local hospitals. By effectively modeling population by blood type, Red Cross can also mount special collection efforts in particular small areas and target collection sites.
15The American Red Cross provides overview pages on blood types and the rare blood types that vary by race and ethnic origin at http://www.redcrossblood.org/learn-about-blood/blood-types and http://www.redcrossblood.org/learn-about-blood/blood-and-diversity, respectively.
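The population-by-blood-type modeling Paulsen mentioned amounts to a weighted sum: multiply each demographic group’s local population by that group’s blood-type prevalence. The group populations and prevalence rates below are invented for the example (actual frequencies by ancestry are published by blood services, per the pages cited in footnote 15):

```python
# Invented small-area populations for three demographic groups and invented
# O-negative prevalence rates; real frequencies differ by ancestry group.
populations = {"group_a": 40000, "group_b": 25000, "group_c": 10000}
o_negative_rate = {"group_a": 0.08, "group_b": 0.04, "group_c": 0.06}

# Expected pool of potential O-negative donors in the area.
potential_donors = sum(populations[g] * o_negative_rate[g] for g in populations)
print(round(potential_donors))  # 4800
```

Estimates of this kind, computed area by area, are what let the Red Cross target collection sites and mount special collection drives in the small areas with the richest potential donor pools.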