Suggested Citation:"Appendix B: White Papers." National Research Council. 2011. Grand Challenges in Earthquake Engineering Research: A Community Workshop Report. Washington, DC: The National Academies Press. doi: 10.17226/13167.

Appendix B

White Papers

Six workshop participants, through a keynote presentation and associated white paper, were tasked with presenting a vision that would help guide the deliberations of the workshop participants. Each discussed a key component of earthquake engineering research—community, lifelines, buildings, information technology, materials, and modeling and simulation—and considered the four cross-cutting dimensions—community resilience, pre-event prediction and planning, design of infrastructure, and post-event response and recovery. The white papers were distributed to all participants prior to the workshop, and they are published here in their original form. Final responsibility for their content rests entirely with the individual author.


TRANSFORMATIVE EARTHQUAKE ENGINEERING
RESEARCH AND SOLUTIONS FOR ACHIEVING
EARTHQUAKE-RESILIENT COMMUNITIES

Laurie A. Johnson, PhD, AICP

Principal, Laurie Johnson Consulting | Research

Summary

This paper was prepared for the National Science Foundation–sponsored, National Research Council–led Community Workshop to describe the Grand Challenges in Earthquake Engineering Research, held March 14–15, 2011, in Irvine, California. It offers ideas to help foster workshop discussions on transformative earthquake engineering research and on achieving earthquake resilience in communities. Over the next 50 years, America’s population will exceed 400 million, and much of it will be concentrated in the earthquake-prone mega-regions of the Northeast, the Great Lakes, the Pacific Northwest, and northern and southern California. To achieve an earthquake-resilient nation, as envisioned by the National Earthquake Hazards Reduction Program, earthquake professionals are challenged to strengthen the physical resilience of our communities’ buildings and infrastructure while simultaneously addressing the environmental, economic, social, and institutional resilience of these increasingly dense, complex, and interdependent urban environments. Achieving community resilience will require a whole host of new, innovative engineering solutions, as well as significant and sustained political and professional leadership and will, an array of new financial mechanisms and incentives, and concerted efforts to integrate earthquake resilience into other urban design and social movements.

There is tremendous need and opportunity for networked facilities and cyberinfrastructure in support of basic and applied research on community resilience. Key ideas presented in this paper include developing better models of community resilience in order to establish a baseline and to measure resilience progress and effectiveness at an urban scale; developing more robust models of building risk/resiliency and aggregate inventories of community risk/resiliency for use in mitigation, land use planning, and emergency planning; enhancing efforts to upgrade the immense inventory of existing structures and lifelines to be more earthquake-resilient; developing a broader understanding of resiliency-based performance objectives for building and lifeline design and construction; building the next generation of post-disaster damage assessment tools and emergency response and recovery “dashboards” based upon sensing networks; and sustaining systematic monitoring of post-disaster response and recovery activities for extended periods of time.

Envisioning Resilient Communities, Now and in the Future

The National Earthquake Hazards Reduction Program (NEHRP) envisions “a nation that is earthquake-resilient in public safety, economic strength, and national security.” The White House National Security Strategy, released in May 2010, offers the following definition of resilience: “the ability to prepare for, withstand, and rapidly recover from disruption, and adapt to changing conditions” (White House, 2010). The first part of this definition encapsulates the vast majority of work that has been done under NEHRP and as part of modern earthquake engineering research and practice: strengthening the built environment to withstand earthquakes with life-safety standards and codes for new buildings and lifeline construction, developing methods and standards for retrofitting existing construction, and preparing government institutions for disaster response. The second half of this definition captures much of the recent learning and research in earthquake engineering: codes and standards that consider post-disaster performance with minimal to no disruption, as well as the linkages between building and lifeline performance and business, macro-economic, societal, and institutional recovery. But there is much more work yet to be done, particularly in translating research into practice.

What the 1994 Northridge, 1995 Kobe, and 2010 Chile earthquakes and the 2005 Hurricane Katrina disaster have in common is that they all struck relatively dense, modern urban settings, and collectively they illustrate varying degrees of resilience in modern societies. Resilient communities need more than physical resilience, which is best characterized by the physical condition of communities’ buildings, infrastructure, and hazard defenses. They need to have environmental, economic, social, and institutional resilience as well. They also need to do more than withstand disruption; resilient communities need to be able to rapidly recover and adapt to the new conditions created by a disaster.

We are now familiar with the physical vulnerabilities of New Orleans’ levee system, but Hurricane Katrina struck a city that lacked resilience across these other dimensions as well, conditions that likely contributed to New Orleans’ limited adaptive capacity and slow recovery in the five years following the disaster (Public Strategies Group, 2011). Prior to Hurricane Katrina, New Orleans’ population (455,000 people in 2005) had been in decline for 40 years, resulting in 40,000 vacant lots or abandoned residences, a stagnant economy, and local budgetary challenges that severely affected the maintenance of local services, facilities, and infrastructure, most notably the school, water, and sewer systems (Olshansky and Johnson, 2010). New Orleans’ social fabric was also fragile. In 2005, the city’s median household income of $27,000 was well below the national average of $41,000, as were its home-ownership and minimum literacy rates of 46 and 56 percent, respectively (compared with national averages of 68 and 75 percent, respectively) (U.S. Census Bureau, 2000; U.S. Department of Education, 2003). The city’s poverty rate of 23.2 percent was also much higher than the national rate of 12.7 percent, and 29 percent of residents did not own cars (U.S. Census Bureau, 2004).


Although, in aggregate, these statistics might seem like an extreme case of community vulnerability, they are not dissimilar from conditions in at least portions of many of our most earthquake-prone communities in southern and northern California, the Pacific Northwest, and the central and eastern United States. And, with the exception of a few pockets in northern and southern California and Seattle, none of the most densely urbanized and vulnerable parts of our earthquake-prone communities has been struck by a recent, large, damaging earthquake. Our modern earthquake experience, like most of our disaster experience in the United States, has largely been a suburban experience, and our engineering and preparedness efforts of the past century have not yet been fully tested by a truly catastrophic urban earthquake.

In April 2010, the U.S. Census officially counted the country’s resident population at 308,745,538, and the nation is expected to add another 100 million people in the next 50 years (U.S. Census Bureau, 2011). This population growth is expected to be accommodated in the country’s fifth wave of migration, a wave of re-urbanism that began in the 1980s (Fishman, 2005). By the time the fifth migration is complete, 70 percent of the country’s population is expected to be concentrated within 10 “mega-regions” (Barnett, 2007; Lang and Nelson, 2007). Half of these mega-regions are in earthquake-prone areas: the Northeast (from Washington, D.C., to Boston); the Great Lakes (Cleveland, Cincinnati, Detroit, and Chicago/Milwaukee); Cascadia (Seattle and Portland); northern California (the San Francisco Bay Area); and southern California.

As these metropolitan areas continue to grow, development patterns are predicted to become increasingly dense as older urban cores are revitalized and the suburban land use patterns of the last half of the 20th century become more costly to both inhabit and serve (Barnett, 2007). These assumptions are based upon expected increases in energy costs, an emphasis on transportation and climate change policies that promote more centralized development, and the significant fiscal challenges that local agencies are likely to face in supporting distributed patterns of services. The demographics of these regions are also likely to shift as more affluent younger professionals, aging empty-nesters, and immigrant populations concentrate in the metropolitan cores, a trend that is already advanced in Boston, New York, Chicago, Los Angeles, and San Francisco/Oakland (Nelson and Lang, 2007). In general, our population will be older and more diverse than in previous decades, adding to the social vulnerabilities of metropolitan areas.

To accommodate the next 100 million people, 70 million housing units will need to be added to the current stock of 125 million; 40 million are likely to be new housing units, while the remaining 30 million are likely to replace damaged or demolished units on existing property (Nelson and Lang, 2007). Also, to accommodate these growing mega-economies, 100 billion square feet of nonresidential space will likely be added; 30 billion square feet of this is likely to be new, and 70 billion square feet will be rebuilt or replaced (Lang and Nelson, 2007). These statistics were developed before “The Great Recession” slowed housing starts from annual rates of more than 2 million in 2005 and 2006 to 0.5 million in 2009 and 2010, and pushed annual foreclosure rates to more than 3 million (U.S. Census Bureau, 2011). The recent recession has also dramatically slowed commercial development and postponed the upgrade of local facilities and infrastructure, much of which was already in sore need of modernization and maintenance before the recent fiscal crisis.

To achieve community resilience, now and in the foreseeable future, we must take a more holistic approach to our work as earthquake professionals. With physical resilience as the foundation of our communities’ resilience, we also need to focus on the environmental, economic, social, and institutional resilience of our increasingly dense, complex, and interdependent communities. Also, as past experience and future projections suggest, physical resilience cannot be achieved through expected rates of new construction and redevelopment alone. It is going to require a whole host of new, innovative engineering solutions, as well as significant and sustained political and professional leadership and will, an array of new financial mechanisms and incentives, and concerted efforts to integrate earthquake resilience into other urban design and social movements. Otherwise, “an earthquake-resilient nation” will remain an idealistic mantra of our profession, and the expected earthquakes of the 21st century will cause unnecessary human, socioeconomic, and physical hardship for the communities they strike.

A “Sputnik Moment” in Earthquake Engineering Research

In his 2011 State of the Union address, President Obama referred to recent news of technological advances by other nations as this generation of Americans’ “Sputnik moment,” and he called for a major national investment “in biomedical research, information technology, and especially clean energy technology—an investment that will strengthen our security, protect our planet, and create countless new jobs for our people” (White House, 2011). Following the Soviet Union’s launch of the Sputnik satellite in 1957, the United States responded with a major sustained investment in research—most visibly through the establishment of the National Aeronautics and Space Administration (NASA)—and education. The National Defense Education Act of 1958 dramatically increased federal investment in education and made technological innovation and education into national-security issues (Alter, 2011).

It is well known that disasters are focusing events for public policy agenda setting, adoption, and change (Birkland, 1997). The September 11, 2001, terrorist attacks put man-made threats at the forefront of disaster policy making, management, and program implementation. September 11 has also been described by some as the major focusing event that significantly expanded the size and scope of the federal government as well as its debt (Stone, 2010). Similarly, Hurricane Katrina has been another focusing event for hazard and disaster management policy and program implementation. To some extent, it has reversed some of the trends started after September 11, but disaster recovery and mitigation have yet to regain their former status as officially preferred disaster policy responses (Waugh, 2006).

For earthquake engineering research and seismic policy making, adoption, and change in the United States, the 1971 San Fernando earthquake has been the most recent and salient focusing event. It led to the formation of the California Seismic Safety Commission in 1975, the passage of the Earthquake Hazards Reduction Act in 1977, and the formation of NEHRP thereafter (Birkland, 1997). But was the 1971 earthquake, or any other more recent U.S. earthquake, a Sputnik moment for the United States? The pairing of the 1994 Northridge and 1995 Kobe earthquakes may well have been a Sputnik moment for Japan. The tremendous human loss, economic consequences, and, in some cases, surprising causes and levels of damage to structures and infrastructure all contributed to Japan’s major investment in earthquake engineering and disaster management research, education, and policy reform over the past decade. Will we have to wait until a major catastrophic urban earthquake strikes the United States, causing unprecedented human and economic losses, to have our Sputnik moment in earthquake engineering research, practice, and policy reform?

If some of the underlying motivations of a Sputnik moment stem from shock as well as a sense of being surpassed, then is there any way for our earthquake professional community to better communicate the lessons from Chile versus Haiti and other disasters around the world, and thereby compel more focused policy and investment in earthquake engineering and risk reduction research, education, and action? What can we learn from the biomedical, high-tech, and “green” engineering movements, as examples, which may recently have had, or may currently be part of, Sputnik moments in which policy makers and private investors are motivated to take action in ways that earthquake preparedness has not inspired with the same growth trajectory and enthusiasm?

Ideas for Transformative Earthquake Engineering Research and Solutions

The remainder of this paper presents ideas to help foster workshop discussions on transformative earthquake engineering research and on achieving earthquake-resilient communities. It is organized around four dimensions: community resilience, pre-event prediction and planning, design of infrastructure, and post-event response and recovery. Particular emphasis is given to community-level ideas that might utilize the networked facilities and cyberinfrastructure of the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES).

Community Resilience

Drawing upon the research literature of several social science disciplines, Norris et al. (2008) define community resilience as a process linking a network of adaptive capacities in “economic development, social capital, information and communication, and community competence.” To build collective resilience, they recommend that “communities must reduce risk and resource inequities, engage local people in mitigation, create organizational linkages, boost and protect social supports, and plan for not having a plan, which requires flexibility, decision-making skills, and trusted sources of information that function in the face of unknowns” (Norris et al., 2008). To achieve earthquake resilience, we, as earthquake engineering researchers and professionals, need to look beyond earthquakes to other disasters, and even outside of disasters, to understand how our work fits in and how to link our work with other initiatives to build adaptive capacities and incite resiliency-related policy and actions.

In 2006, earthquake professionals and public policy advocates joined forces to develop a set of policy recommendations for enhancing the resiliency of existing buildings, new buildings, and lifelines in San Francisco (SPUR, 2009). The San Francisco Planning and Urban Research Association’s (SPUR’s) “Resilient City Initiative” chose to analyze the “expected” earthquake, rather than the “extreme” event, because it is a large event that can reasonably be expected to occur once during the useful life of structures and lifeline systems in the city. It also defined a set of performance goals—as target states of recovery within hours, days, and months following the expected earthquake—in terms of four community clusters: critical response facilities and support systems; emergency housing and support systems; housing and neighborhood infrastructure; and community recovery (SPUR, 2009).

Lacking a theoretical model or set of quantifiable measures of community resilience, SPUR relied on expert opinion to set the target states of recovery for San Francisco’s buildings and infrastructure and to assess the current performance status of the cluster elements. For example, SPUR set a target goal to have 95 percent of residences available for “shelter-in-place” by their inhabitants within 24 hours after an expected earthquake; it also estimated that it would take 36 months for the current housing stock to be available for “shelter-in-place” in 95 percent of residences. But, is 95 percent an appropriate target for ensuring an efficient and effective recovery in the city’s housing sector following an expected earthquake? Does San Francisco really need to achieve all the performance targets defined by SPUR to be resilient following an expected earthquake? Which target should be worked on first, second, and so forth? And, given all the competing community needs, when is the most appropriate time to promote an earthquake resiliency policy agenda?
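The size of the gap between a SPUR-style target state and the estimated current performance can be expressed as a simple ratio. The sketch below is purely illustrative: the shelter-in-place figures (a 24-hour target versus roughly 36 months estimated for the current housing stock) are taken from the discussion above, while the `recovery_gap` helper and the hours-per-month approximation are hypothetical constructions, not part of SPUR's methodology.

```python
# Illustrative gap between a SPUR-style recovery target and the estimated
# current performance of the housing stock. The 24-hour target and ~36-month
# estimate are quoted in the text; everything else here is hypothetical.

HOURS_PER_MONTH = 30 * 24  # coarse approximation: 720 hours

def recovery_gap(target_hours: float, estimated_hours: float) -> float:
    """How many times longer the estimated recovery is than the target state."""
    return estimated_hours / target_hours

target_hours = 24                       # goal: 95% of residences shelter-in-place ready
estimated_hours = 36 * HOURS_PER_MONTH  # current estimate: ~36 months, in hours

print(f"Estimated recovery time is {recovery_gap(target_hours, estimated_hours):.0f}x the target")
# → Estimated recovery time is 1080x the target
```

A ratio like this says nothing about which investments would close the gap; it simply gives planners a single number for comparing clusters when debating which performance target to pursue first.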

There is tremendous need for networked facilities and cyberinfrastructure in support of basic and applied research on community resilience. This includes:

  • Developing an observatory network to measure, monitor, and model the earthquake vulnerability and resilience of communities, including communities’ adaptive capacities across many physical, social, economic, and institutional dimensions. Clearer definitions, metrics, and timescales are needed to establish a baseline of resilience and to measure resilience progress and effectiveness on an urban scale.
  • Collectively mapping and modeling the individual and organizational motivations to promote earthquake resilience, the feasibility and cost of resilience actions, and the removal of barriers and building of capacities to achieve successful implementation. Community resilience depends in large part on our ability to better link and “sell” physical resilience with environmental, economic, social, and even institutional resilience motivations and causes.
  • Developing the quantitative models or methods that prioritize and define when public action and subsidy are needed (and how much) to fund seismic rehabilitation of certain building or infrastructure types, groups, or systems that are essential to a community’s resilience capacity versus ones that can be left to market forces, attrition, and private investment to address.
  • Developing a network of community-based earthquake resiliency pilot projects to apply earthquake engineering research and other knowledge to reduce risk, promote risk awareness, and improve community resilience capacity. Understanding and developing effective, alternative methods and approaches to building local resilience capacity are needed because earthquake-prone communities have varying cultures, knowledge, skills, and institutional capacities.

Pre-Event Prediction and Planning

To date, much of the pre-event research and practice has focused on estimating the physical damage to individual structures and lifeline systems, creating inventories and scenarios for damage and loss estimation, and preparing government institutions for disaster response. Ideas for “operationalizing” a vision of community-level resiliency include:

  • Developing more robust earthquake forecasting and scenario tools that address multiple resiliency performance objectives and focus on community-level resilience impacts and outcomes.
  • Developing more holistic models of individual building risk/resiliency that extend structural simulations and performance testing to integrate information on soil and foundation interaction, non-structural systems, and lifeline systems with the structure and contents information and that model post-disaster building functionality and lifeline dependency and interdependency and how these affect building functionality, time required to recover various levels of building functionality, and other economic and social resilience factors.
  • Developing aggregate inventories and models of community or regional risk/resiliency that can be used in mitigation, land use planning, and emergency planning. Local building and planning review processes and emergency management practices need tools to assess the incremental changes in community risk/resiliency over time caused by new construction, redevelopment, and implementation of different mitigation policies and programs. Real estate property valuation and insurance pricing also need better methods to more fully reflect risk and resilience in risk transfer transactions. Within current decision frameworks and practices, redevelopment of a low-density, low-rise, but structurally vulnerable neighborhood into a high-density, high-rise development is likely to be viewed as a lowering of earthquake risk and an increase in economic value to the community. But is it really? Tools that more accurately value the aggregation of risk across neighborhoods, incremental changes in community resiliency, effects of aging and densification of the urban environment and accumulation of risk over time, and the dynamics of adaptive capacity of a community post-disaster are needed.
  • Developing models of the effects of institutional practices and governance on community resilience in terms of the preparedness, recovery, and adaptive capacities. This includes modeling the effects of building and land use planning regulatory regimes, emergency decision-making processes, institutional leadership and improvisational capacities, and post-disaster financing and recovery management policies.
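One way to see why the redevelopment question raised above is not straightforward is to separate an aggregate resiliency index from absolute exposure. The following sketch is hypothetical: the occupant counts and post-earthquake functionality fractions are invented for illustration, and the occupancy-weighted index is one simple choice among many possible aggregation methods.

```python
from dataclasses import dataclass

@dataclass
class Building:
    occupants: int
    functionality: float  # expected fraction of the building usable after the "expected" earthquake

def resiliency_index(buildings):
    """Occupancy-weighted expected functionality: a crude community-level aggregate."""
    total = sum(b.occupants for b in buildings)
    return sum(b.occupants * b.functionality for b in buildings) / total

def expected_displaced(buildings):
    """Occupants of space expected to be unusable after the event."""
    return sum(b.occupants * (1 - b.functionality) for b in buildings)

# Invented numbers: a vulnerable low-rise neighborhood vs. a denser,
# better-performing high-rise redevelopment of the same parcels.
low_rise = [Building(occupants=40, functionality=0.5) for _ in range(50)]
high_rise = [Building(occupants=6000, functionality=0.75)]

print(resiliency_index(low_rise), expected_displaced(low_rise))    # → 0.5 1000.0
print(resiliency_index(high_rise), expected_displaced(high_rise))  # → 0.75 1500.0
```

Here the aggregate index improves (0.5 to 0.75) while the expected number of displaced occupants grows (1,000 to 1,500) because density tripled; a decision tool that reports only the index would record the redevelopment as an unambiguous risk reduction.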

Design of Infrastructure

Achieving community resilience will require enhanced efforts to upgrade the immense inventory of existing structures and lifelines to be more earthquake-resilient and a broader understanding of resiliency-based performance objectives for building and lifeline design and construction. Ideas include:

  • Developing enhanced methods for evaluating and retrofitting existing buildings and lifeline systems. These methods need to reliably model the expected responses of existing buildings and lifelines to different levels of ground motions and multiple resiliency performance objectives. Methods need to go beyond estimating the costs to retrofit toward developing more robust models that consider the full range of resiliency benefits and costs of different mitigation
      policy and financing strategies. These alternatives need to think creatively of ways to reuse existing stock, cost-effectively piggy-back on other rehabilitation efforts, and incentivize and ease the burden of retrofitting existing stock. Current and future political challenges are likely to include pressures to preserve historic and cultural integrity, and resistance to invest limited capital resources in seismic rehabilitation projects. Concepts of a federal infrastructure bank might be expanded to include all seismically vulnerable structures and infrastructure, and new public-private financing mechanisms may need to be developed. Mechanisms to effectively communicate the vulnerability of existing structures and lifeline systems to owners, occupants, and policy makers to incite and reward action, such as an earthquake-resilience certification system, also need to be carefully assessed. Sustained efforts to build consensus for standards and actions to evaluate and retrofit existing building and lifeline systems, develop guidelines, and transfer knowledge and technology to building officials, owners, and engineers; utility owners, operators, regulators, and engineers; and other policy and decision makers are also needed.

  • Advancing performance-based earthquake engineering for buildings, lifelines, geotechnical structures, and hazard defenses. Performance-based engineering needs to take a broader look at the integrated performance of a structure as well as the layers of substructures, lifeline systems, and surrounding community infrastructure that it depends upon. For example, a 50-plus-story residential high-rise is, in fact, a vertical neighborhood or community, making lifeline conveyance and social resilience concerns for a single structure. Even if the structure is deemed safe following a disaster, lifeline disruptions may impede evacuation and render the structure uninhabitable if, for example, its windows are sealed and elevator service is lost.
  • To have resilient communities, we cannot think of building-specific performance only. Community-level performance-based engineering models are needed. These may require a systems approach to consider the complex interactions of lifeline systems, critical network vulnerability and dependencies, and dependencies between physical, social, economic, and institutional sectors of a community, and to develop guidelines and “urban-level design standards” for community-level performance.
  • Making seismic risk reduction and resilience an integral part of many professional efforts to improve the built environment, and building new alliances and coalitions with interest groups working on these goals. This includes the Green Building Council and the Council’s Leadership in Energy and Environmental Design (LEED) program; architects and engineers developing green, adaptive building “skins,” construction materials, and sensing networks; and urban designers working on sustainable community standards and practices. Current efforts to build new “smart” buildings and cities could potentially benefit from the networked facilities and cyberinfrastructure that the earthquake engineering community has developed in managing and processing sensing data. In turn, earthquake engineering could help develop better and more cost-effective “sensing retrofits” of existing structures and lifeline systems to make them “smarter” and to better integrate disaster resilience into green building and sustainable community standards and practices. In November 2010, the Green Building Council reached a major milestone in its short 10-year life span, having certified more than 1 billion square feet of commercial building space (Koch, 2010). Since it was introduced in 2000, the Council’s LEED program has had more than 36,000 commercial projects and 38,000 single-family homes participating, of which 7,194 commercial projects and 8,611 homes have been completed and certified as LEED compliant (Koch, 2010). Although the costs of becoming LEED certified may be substantially lower than the costs of enhancing seismic performance, this alone does not fully explain the program’s comparative national and marketing success. Minimizing damage and reducing the deconstruct/construct cycles of development with higher building performance levels should also be considered as benefits in building valuation.

Post-Event Response and Recovery

To date, much of the post-event research and practice has focused on estimating the physical damage and economic losses caused by earthquakes and aiding government institutions in disaster response. Ideas for enhancing community-level capabilities to rapidly recover from disruption and adapt to changing conditions include:

  • Creating a more integrated multi-disciplinary network and information management system to capture, distill, integrate, and disseminate information about the geological, structural, institutional, and socioeconomic impacts of earthquakes, as well as post-disaster response and recovery. This includes the creation and maintenance of a repository for post-earthquake reconnaissance data.
  • Developing the next generation of post-disaster damage assessments. Post-disaster safety assessment programs are now well institutionalized in local emergency management and building inspection departments, with legal requirements, procedures,
     and training. The next generation of post-disaster assessments might integrate the sensing networks of “smart” buildings and lifeline systems so that emergency responders, safety inspectors, and system operators, as well as residential and commercial building occupants themselves, can more quickly understand the post-disaster conditions of buildings or systems and either resume occupancy and operation safely or seek alternatives. The next generation of assessments could also take a more holistic view of disaster impacts and losses, focusing on the economic and social elements as well as the built environment. Just like physical damage assessments, these socioeconomic, or resilience, assessments need to be done quickly after a disaster, and also iteratively, so that more-informed, and appropriately timed, program and policy responses can be developed. Such assessments need to look at the disaster-related losses, including the ripple effects (i.e., lost wages, tax revenue, and income); the spectrum of known available resource capital (both public and private wealth and disaster financing resources) for response and recovery; social capital; and the potential unmet needs, funding gaps, and shortfalls, to name a few.

  • Developing the next-generation emergency response and recovery “dashboard” that uses sensing networks for emergency response and recovery, including impact assessment, resource prioritization and allocation, and decision making. Research from recent disasters has reported on the use of cell phone, social networking, and Internet activity as indicators of post-disaster human activity. Researchers also caution that sensing networks need to be passive, part of the act of doing something else, rather than requiring deliberate reporting or post-disaster surveys. They also need to be statistically representative, culturally appropriate, and conscious of the “digital divide” across socioeconomic and demographic groups. These systems can also push, and not just pull, information that can be valuable in emergency response management and communication.
  • Sustained documentation, modeling, and monitoring of emergency response and recovery activities, including the mix of response and recovery activities; multi-organizational and institutional actions, funding, interdependencies, and disconnections that both facilitate and impede recovery; and resiliency outcomes at various levels of community (i.e., household, organizational, neighborhood, and regional levels). This is longitudinal research requiring sustained efforts for 5 to 10 years and possibly even longer, which does not fit well with existing research funding models.
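The sensing-network damage assessments described above could, in their simplest form, reduce to an automated tagging rule once instrument data are in hand. The Python sketch below assigns an ATC-20-style occupancy tag (green, yellow, red) from peak interstory drift recorded by a building's sensors; the drift thresholds and tag names are illustrative assumptions, not values from any code or standard.

```python
# Sketch: automated post-event tagging from a "smart" building's sensor
# network. Drift thresholds and tag names are illustrative assumptions,
# not code or policy values.

def tag_building(peak_drift_ratios):
    """Assign an occupancy tag from peak interstory drift ratios (one
    value per instrumented story, as a fraction of story height)."""
    worst = max(peak_drift_ratios)
    if worst < 0.005:   # below ~0.5% drift: likely no structural damage
        return "green"  # occupancy may resume pending confirmation
    if worst < 0.015:   # moderate drift: limited entry, inspect first
        return "yellow"
    return "red"        # large drift: keep occupants out

# Example: a 4-story building whose third story saw 1.1% drift
print(tag_building([0.002, 0.006, 0.011, 0.004]))  # -> yellow
```

In practice such a rule would be one input among several, alongside visual inspection and system-specific criteria, rather than a substitute for them.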
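The passive-sensing “dashboard” idea can likewise be illustrated with a toy disruption signal: comparing post-event activity in each zone against its pre-event baseline. The zone names, counts, and 50 percent threshold below are assumptions for illustration only.

```python
# Sketch of a passive-sensing "dashboard" signal: flag zones whose
# post-event cell activity falls far below its pre-event baseline.
# Zone names, counts, and the 50% threshold are illustrative assumptions.

def flag_disrupted_zones(baseline, observed, drop_threshold=0.5):
    """Return zones whose observed activity is below drop_threshold
    times their baseline activity (a crude disruption indicator)."""
    flagged = []
    for zone, base in baseline.items():
        if base > 0 and observed.get(zone, 0) < drop_threshold * base:
            flagged.append(zone)
    return sorted(flagged)

baseline = {"harbor": 1200, "downtown": 5000, "hills": 800}
observed = {"harbor": 300, "downtown": 4100, "hills": 790}
print(flag_disrupted_zones(baseline, observed))  # -> ['harbor']
```

A real system would also have to correct for the demographic and “digital divide” biases noted above before treating a drop in activity as evidence of disruption.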

Acknowledgments

This paper was developed with the support of the National Research Council and the organizers of Grand Challenges in Earthquake Engineering Research: A Community Workshop. The ideas and opinions presented in this paper are those of the author and also draw upon the work of the National Research Council’s Committee on National Earthquake Resilience—Research, Implementation, and Outreach, of which the author was a member, and its final report (NRC, 2011). In addition, the author acknowledges the work of members of the San Francisco Planning and Urban Research Association’s “Resilient City Initiative.” The author also appreciates the suggestions and detailed review of this paper provided by workshop Co-chair Chris Poland and session moderator Arrietta Chakos; Liesel Ritchie for her sharing of ideas; and Greg Deierlein and Reggie DesRoches (authors of companion papers for the workshop) for their helpful discussion while preparing the paper. Appreciation is also extended to the conference Co-chairs, Greg Fenves and Chris Poland, and the NRC staff for their leadership and efforts in organizing this workshop.

References

Alter, J. 2011. A script for “Sputnik”: Obama’s State of the Union challenge. Newsweek. January 10. Available at www.newsweek.com/2011/01/05/obama-sputnik-and-the-state-of-the-union.html.

Barnett, J. 2007. Smart growth in a changing world. Planning 73(3):26-29.

Birkland, T. 1997. After Disaster: Agenda Setting, Public Policy, and Focusing Events. American Governance and Public Policy. Washington, DC: Georgetown University Press.

Fishman, R. 2005. The fifth migration. Journal of the American Planning Association 71(4):357-366.

Koch, W. 2010. U.S. Green Building Council certifies 1 billion square feet. USAToday.com. Green House. November 15. Available at content.usatoday.com/communities/greenhouse/post/2010/11/green-buildingcouncil-one-billion-square-feet-/1.

Lang, R. E., and A. C. Nelson. 2007. The rise of the megapolitans. Planning 73(1):7-12.

Nelson, A. C., and R. E. Lang. 2007. The next 100 million. Planning 73(1):4-6.

Norris, F. H., S. P. Stevens, B. Pfefferbaum, K. F. Wyche, and R. L. Pfefferbaum. 2008. Community resilience as a metaphor, theory, set of capacities, and strategy for disaster readiness. American Journal of Community Psychology 41(1):127-150.

NRC (National Research Council). 2011. National Earthquake Resilience—Research, Implementation, and Outreach. Washington, DC: The National Academies Press.

Olshansky, R. B., and L. A. Johnson. 2010. Clear as Mud: Planning for the Rebuilding of New Orleans. Washington, DC: American Planning Association.

Public Strategies Group. 2011. City of New Orleans: A Transformation Plan for City Government. PSG, March 1. Available at media.nola.com/politics/other/NOLA%20Transformation%20Plan.pdf.

SPUR (San Francisco Planning and Urban Research Association). 2009. The Resilient City: A New Framework for Thinking about Disaster Planning in San Francisco. The Urbanist. Policy Initiative. San Francisco, CA. Available at www.spur.org/policy/the-resilient-city.

Stone, D. 2010. Hail to the chiefs: The presidency has grown, and grown and grown, into the most powerful, most impossible job in the world. Newsweek. November 13.

U.S. Census Bureau. 2000. Census of Housing. Available at www.census.gov/hhes/www/housing/census/historic/owner.html.

U.S. Census Bureau. 2004. Statistical Abstract of the United States. Available at www.census.gov/prod/www/abs/statab2001_2005.html.

U.S. Census Bureau. 2011. Census Bureau Home Page. Available at www.census.gov.

U.S. Department of Education. 2003. State and County Estimates of Low Literacy. Institute of Education Sciences. Available at nces.ed.gov/naal/estimates/StateEstimates.aspx.

Waugh, W. L. 2006. The political costs of failure in the Katrina and Rita disasters. The Annals of the American Academy of Political and Social Science 604(March):10-25.

White House. 2010. National Security Strategy. May 27. Washington, DC. Available at www.whitehouse.gov/sites/default/files/rss_viewer/national_security_strategy.pdf.

White House. 2011. Remarks of President Barack Obama in State of the Union Address—As Prepared for Delivery. January 25. Available at www.whitehouse.gov/the-press-office/2011/01/25/remarks-presidentbarack-obama-state-union-address.

GRAND CHALLENGES IN LIFELINE EARTHQUAKE
ENGINEERING RESEARCH

Reginald DesRoches, PhD

Professor and Associate Chair, School of Civil & Environmental Engineering

Georgia Institute of Technology

Summary

Lifeline systems (transportation, water, waste disposal, electric power, gas and liquid fuels, and telecommunication) are intricately linked with the economic well-being, security, and social fabric of the communities they serve, and the nation as a whole. The mitigation of earthquake risks for lifeline facilities presents a number of major challenges, primarily because of the vast inventory of facilities, their wide range in scale and spatial distribution, the fact that they are partially or completely buried and are therefore strongly influenced by soil-structure interaction, their increasing interconnectedness, and their aging and deterioration. These challenges will require a new set of research tools and approaches to adequately address them. The increasing access to high-speed computers, inexpensive sensors, new materials, improved remote sensing capabilities, and infrastructure information modeling systems can form the basis for a new paradigm for lifeline earthquake engineering in the areas of pre-event prediction and planning, design of the next-generation lifeline systems, post-event response, and community resilience. Traditional approaches to lifeline earthquake engineering have focused on component-level vulnerability and resilience. However, the next generation of research will also have to consider issues related to the impact of aging and deteriorating infrastructure, sustainability considerations, increasing interdependency, and system-level performance. The current generation of the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES) was predicated on large-coupled testing equipment and has led to significant progress in our understanding of how lifeline systems perform under earthquake loading. The next generation of NEES can build on this progress by adapting the latest technological advances in other fields, such as wireless sensors, machine vision, remote sensing, and high-performance computing.

Introduction: Lifelines—The Backbone of American Competitiveness

The United States is served by an increasingly complex array of critical infrastructure systems, sometimes referred to as lifeline systems. For the purposes of the paper, lifeline systems include transportation, water, waste disposal, electric power, gas and liquid fuels, and telecommunication systems. These systems are critical to our economic competitiveness, national security, and overall quality of life. Water and wastewater systems support population growth, industrial growth, and public health. Power systems provide lighting to homes, schools, and businesses and energize communications. Transportation systems are the backbone of mobility and commerce and connect communities. Telecommunications systems provide connectivity on the local, national, and global scale.

Lifeline systems are the basis for producing and delivering goods and services that are key to economic competitiveness, emergency response and recovery, and overall quality of life. Following an earthquake, lifeline systems provide a critical link to communities and individuals, including water for putting out fires, roads for evacuation and repopulation of communities, and connectivity for emergency communications. The resilience of lifeline systems has a direct impact on how quickly a community recovers from a disaster, as well as the resulting direct and indirect losses.

Challenges in Lifeline Earthquake Engineering

The mitigation of earthquake hazards for lifeline facilities presents a number of major challenges, primarily because of the vast inventory of facilities, their wide range in scale and spatial distribution, the fact that they are partially or completely buried and strongly influenced by interactions with the surrounding soil, and their increasing interconnectedness. Because of their spatial distribution, they often cannot avoid crossing landslide areas, liquefaction zones, or faults (Ha et al., 2010).

One of the challenges in the area of lifeline systems, when it comes to testing, modeling, or managing these systems, is the vast range of scales. Testing or modeling of new innovative materials that might go into bridges or pipelines could occur at the nano (10⁻⁹ m), micro (10⁻⁶ m), or milli (10⁻³ m) scale, while assessment of the transportation network or fuel distribution system occurs at the mega (10⁶ m) scale. Multi-scale models required for lifeline systems involve trade-offs between the detail required for accuracy and the simplification needed for computational efficiency (O’Rourke, 2010).

A second challenge related to the assessment of the performance of lifeline systems is that many lifeline systems have a substantial number of pipelines, conduits, and components that are completely below ground (e.g., water supply, gas and liquid fuel, electric power) or partially underground (e.g., bridge or telecommunication tower foundations) and are heavily influenced by soil-structure interaction, surface faulting, and liquefaction. Hence, a distinguishing feature in evaluating the performance of lifelines is establishing a thorough understanding of the complex soil-structure interaction.

A third and critical challenge related to lifeline systems is their vast spatial distribution and interdependency between lifeline systems—either by virtue of physical proximity or via operational interaction. Damage to one infrastructure component, such as a water main, can cascade into damage to a surrounding lifeline component, such as electrical or telecommunications cables, because they are often co-located. From an operational perspective, the dependency of lifelines on each other complicates their coupled performance during an earthquake (Duenas-Osorio et al., 2007), as well as their post-event restoration. For example, electrical power networks provide energy for pumping stations and control equipment for transmission and distribution systems for oil and natural gas.

A fourth challenge is the aging of lifeline systems. Many lifelines were designed and constructed 50-100 years ago without special attention to earthquake hazards and are deteriorating (ASCE, 2009). Moreover, many lifeline systems now carry demands much higher than those they were originally designed for. Many lifeline systems are already damaged or deteriorated before an earthquake strikes, which increases their vulnerability.

Recent Advances in Lifeline Earthquake Engineering

The field of lifeline earthquake engineering has experienced significant progress over the past decade. Early studies in lifeline earthquake engineering focused on component behavior and typically used simple system models. They often looked at the effects of earthquakes on the performance of sub-components within each infrastructure system (e.g., columns in a bridge). As more advanced experimental and computational modeling facilities came online via the NEES program, the effects of larger systems (e.g., entire bridge) and coupled soil-structure systems (e.g., pile-supported wharf) were assessed (McCullough et al., 2007; Johnson et al., 2008; Kim et al., 2009). Most recently, advances in modeling and computation have led to the ability to study entire systems (e.g., transportation networks, power networks, etc.), including the local and regional economic impact of earthquake damage to lifeline systems (Kiremidjian et al., 2007; Gedilkli et al., 2008; Padgett et al., 2009; Romero et al., 2010; Shafieezadeh et al., 2011).

Transformative Research in Lifeline Earthquake Engineering

A new set of research tools is needed to adequately address the critical challenges noted above, namely the vast range in scales, complex mix of soil-structure-equipment systems, interdependencies, and aging and deteriorating lifeline systems. Modeling and managing interdependent systems, such as electric power, water, gas, telecommunications, and transportation systems, requires testing and simulation capabilities that can accommodate the many geographic and operational interfaces within a network, and among the different networks.

The increasing access to high-speed computers, closed-form techniques for near-real-time network analysis, inexpensive sensors, new materials, improved remote sensing capabilities, and building or bridge information modeling (BIM or BrIM) systems can form the basis for a new paradigm for lifeline earthquake engineering in the areas of pre-event prediction and planning, design of the next-generation lifeline systems, post-event response, and community resilience.

Although the current generation of NEES is predicated on large-coupled testing equipment and has led to significant progress in our understanding of how lifeline systems perform under earthquake loading, the next generation of NEES can build on this progress by adapting the latest technological advances in other fields, such as wireless sensors, machine vision, remote sensing, and high-performance computing. In addition, the coupling of seismic risk mitigation with other pressing global needs, such as sustainability, will require a different way of thinking about lifeline earthquake engineering. Sustainability, in this paper, is defined as the ability to meet the needs of current and future generations by being resilient, cost-effective, environmentally viable, and socially equitable. Lifeline systems account for 69 percent of the nation’s total energy use, and more than 50 percent of greenhouse gas emissions are from lifeline systems, so their continued efficient performance is critical for sustainable development (NRC, 2009).

Pre-Event Prediction and Planning

Because earthquakes are low-probability, high-consequence events, effective planning is critical to making informed decisions given the risk and potential losses. One of the key tools in pre-event planning is the use of seismic risk assessment and loss estimation methodologies, which combine the systematic assessment of regional hazards with infrastructure inventories and vulnerability models through geographic information systems.
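As a minimal sketch of how such a methodology combines its ingredients, the following Python fragment convolves a mapped shaking intensity at each site with a lognormal fragility curve and an asset's replacement cost to produce a scenario loss. All medians, dispersions, and costs are illustrative assumptions, not values from HAZUS or any published study.

```python
# Sketch of GIS-style scenario loss estimation: combine a mapped hazard
# intensity at each site with a lognormal fragility curve and the asset's
# replacement cost. All medians, dispersions, and costs are illustrative.
from math import erf, log, sqrt

def p_damage(im, median, beta):
    """Lognormal fragility: P(damage | intensity measure im)."""
    return 0.5 * (1.0 + erf(log(im / median) / (beta * sqrt(2.0))))

def scenario_loss(assets, shaking):
    """Expected loss = sum over assets of P(damage) x replacement cost.
    `shaking` maps site id -> intensity (e.g., PGA in g)."""
    return sum(a["cost"] * p_damage(shaking[a["site"]], a["median"], a["beta"])
               for a in assets)

assets = [
    {"site": "s1", "median": 0.6, "beta": 0.5, "cost": 2.0e6},  # pump station
    {"site": "s2", "median": 0.9, "beta": 0.6, "cost": 5.0e6},  # bridge
]
print(round(scenario_loss(assets, {"s1": 0.4, "s2": 0.3}), -3))
```

A regional assessment repeats this computation over thousands of assets and many hazard realizations, which is where the inventory and uncertainty challenges discussed below become dominant.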

The performance of lifeline systems is strongly a function of the seismic hazard and the geological conditions on which the lifeline systems are sited. Lifeline systems are strongly affected by peak ground deformation, which often comes from surface faulting, landslides, and soil liquefaction. Development of approaches to quantitatively predict various ground motion parameters, including peak ground displacement, will be important for understanding the performance of lifeline systems. This quantitative assessment has traditionally been performed using costly and time-consuming approaches that are typically applied only at a local scale. The advent of advanced remote sensing products, from air- and spaceborne sensors, now allows for the exploration of land surface parameters (i.e., geology, temperature, moisture content) at different spatial scales, which may lead to new approaches for quantifying soil conditions and properties (Yong et al., 2008).

One of the main challenges in regional risk assessment is the lack of reliable and consistent inventory data. Research is needed in finding better ways to acquire data on the vast inventories contained within a lifeline network. Although researchers have effectively deployed remote sensing technologies (such as LiDAR) following natural disasters, research is needed in developing ways that these technologies can be effectively used in acquiring inventory data, including physical attributes of different lifeline systems and at different scales.

Pre-event planning will require that we learn from past earthquake events. This will require us to vastly improve post-earthquake information acquisition and management. Comprehensive and consistent information on the earthquake hazard, geological conditions and responses, structural damage, and economic and social impacts observed in previous earthquakes are invaluable in planning for future events. This will provide unprecedented information on the performance of individual lifelines but will also provide critical information on the interaction among lifeline systems. A major effort of the future NEES program should be to develop a comprehensive effort among professional, governmental, and academic institutions to systematically collect, share, and archive information on the effects of significant earthquakes, including on the built and natural infrastructures, society, and the economy. The information will need to be stored, presented, and made available in structured electronic data management systems. Moreover, the data management systems should be designed with input from the communities that they are intended to serve.

The use of regional seismic risk assessment is key to pre-event planning for various lifeline systems under conditions of uncertainty. For example, detailed information on the performance of the bridges in a transportation network, coupled with traffic flow models, can inform decision makers on the most critical bridges for retrofit, and which routes would best serve as evacuation routes following an earthquake (Padgett et al., 2010). Significant progress has been made in understanding the seismic performance of lifeline components (e.g., bridges) via component and large-scale testing and analysis; however, much less is known about the operability of these components, and the system as a whole, as a function of various levels of damage. The use of sensors and data management systems would better allow us to develop critical relationships between physical damage, spatio-temporal correlations, and operability.
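A toy version of this system-level prioritization can be sketched by ranking network links by the connectivity lost when each one fails. The four-node network below is an illustrative assumption; a real application would couple bridge fragilities with traffic flow models as described above.

```python
# Sketch of system-level prioritization: rank bridge links in a road
# network by the connectivity lost when each one fails. The toy network
# and link names are illustrative assumptions.
from collections import defaultdict, deque

def connected_pairs(nodes, edges):
    """Count node pairs that remain connected given undirected edges."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    seen, total = set(), 0
    for s in nodes:
        if s in seen:
            continue
        comp, q = {s}, deque([s])
        while q:
            for w in adj[q.popleft()]:
                if w not in comp:
                    comp.add(w); q.append(w)
        seen |= comp
        total += len(comp) * (len(comp) - 1) // 2
    return total

def rank_bridges(nodes, edges):
    """Most critical first: links whose loss disconnects the most pairs."""
    base = connected_pairs(nodes, edges)
    impact = {e: base - connected_pairs(nodes, [x for x in edges if x != e])
              for e in edges}
    return sorted(impact, key=impact.get, reverse=True)

nodes = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "D")]
print(rank_bridges(nodes, edges)[0])  # ('A', 'B') is the only cut link
```

Connectivity is only one proxy; retrofit prioritization would weight links by traffic volume, damage probability, and restoration cost, but the inner loop is the same remove-and-reevaluate pattern.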

Finally, as infrastructure systems continue to age and deteriorate, it will be necessary to quantify the in situ condition of these systems so that we can properly assess the increased vulnerability under earthquake loading. A dense network of sensors, coupled with advanced prognostic algorithms, will enable the assessment of in situ conditions, which will allow for better predictions of the expected seismic performance (Kim et al., 2006; Lynch and Loh, 2006; Glaser et al., 2007).
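One simple way to picture the coupling of sensing with prognostics is to smooth a noisy deterioration signal and shrink an assumed component capacity accordingly, as in the sketch below. The smoothing constant and sensitivity factor are illustrative assumptions, not a calibrated prognostic model.

```python
# Sketch of condition-adjusted vulnerability: smooth noisy sensor readings
# of a deterioration proxy (e.g., corrosion rate) and shrink the component's
# assumed fragility median accordingly. Factors are illustrative assumptions.

def smoothed(readings, alpha=0.3):
    """Exponential smoothing of a sensor time series."""
    s = readings[0]
    for r in readings[1:]:
        s = alpha * r + (1 - alpha) * s
    return s

def adjusted_median(healthy_median, deterioration_index, k=0.4):
    """Reduce the fragility median (capacity) as deterioration grows;
    index is normalized to [0, 1], k is an assumed sensitivity."""
    return healthy_median * (1.0 - k * min(max(deterioration_index, 0.0), 1.0))

idx = smoothed([0.1, 0.15, 0.2, 0.4, 0.35])
print(round(adjusted_median(0.9, idx), 3))  # lower than the healthy 0.9 g
```

The adjusted median would then feed directly into the fragility-based risk assessments discussed elsewhere in this paper, closing the loop between monitoring and seismic performance prediction.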

Performance-Based Design of Lifeline Systems

The earthquake performance of a lifeline system is often closely correlated with the performance of its components (e.g., pipes, bridges, substations). Significant progress has been made in understanding the performance of lifeline components using the current generation of NEES facilities (Johnson et al., 2008; O’Rourke, 2007; Abdoun et al., 2009; Ivey et al., 2010; Shafieezadeh et al., 2011). However, additional work is needed in designing these systems, considering their role within a larger network, the interdependent nature of lifeline systems, and the trade-offs in cost and safety associated with various design decisions. One critical tool for performing this type of analysis is regional risk assessment programs, such as HAZUS or REDARS. These programs have traditionally been used to assess and quantify risks; however, they can also be the foundation for design of infrastructure systems based on system performance. One key element that goes into these analyses is a fragility, or vulnerability, curve. Fragility curves are critical not only for comparing the relative vulnerability of different systems, but also for serving as input to cost-benefit studies and life-cycle cost (LCC) analyses. Although cost-benefit analyses are often conducted for scenario events or deterministic analyses, probabilistic cost-benefit analyses are more appropriate for evaluating the anticipated return on investment in a novel high-performance system because they account for the expected costs of potential seismic damage. Additionally, LCC analyses provide an effective approach for characterizing the lifetime investment in a system. Models often incorporate costs associated with construction, maintenance, upgrade, and at times deconstruction (Frangopol et al., 1997). The LCC models can be enhanced to also include costs associated with lifetime exposure to natural hazards (Chang and Shinozuka, 1996).
Such models offer viable approaches for evaluating the relative performance of different structural systems. Given the increased emphasis on sustainability, the next generation of LCC models can also include environmental impacts (both from materials usage and construction, and from deconstruction resulting from earthquake damage) and weigh them against resilience. For example, although greater material investment is often required to make infrastructure systems more resilient, it may also make them less sustainable. Conducting this systems-level design will require access to structural data (e.g., bridge configuration) as well as environmental and operational data (such as traffic flows). One research challenge will be how to design our infrastructure systems using an “inverse-problem” paradigm. For example, a goal in design might be to have power and telecommunications restored within four hours of an earthquake event. Using this information as a constraint, the systems (and subsystems) can be designed to achieve these targets.
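A minimal probabilistic LCC comparison along these lines might look like the following sketch, which adds the present value of expected annual earthquake losses to up-front cost. The hazard bins, fragility parameters, discount rate, and costs are all illustrative assumptions.

```python
# Sketch of a probabilistic life-cycle cost comparison: construction cost
# plus the present value of expected annual earthquake losses. Hazard
# rates, fragility parameters, and costs are illustrative assumptions.
from math import erf, log, sqrt

def p_damage(im, median, beta):
    """Lognormal fragility: P(damage | intensity measure im)."""
    return 0.5 * (1.0 + erf(log(im / median) / (beta * sqrt(2.0))))

def expected_annual_loss(hazard, median, beta, repair_cost):
    """hazard: list of (intensity, annual occurrence rate) bins."""
    return sum(rate * p_damage(im, median, beta) * repair_cost
               for im, rate in hazard)

def life_cycle_cost(build_cost, eal, years=50, discount=0.03):
    pv = sum(eal / (1 + discount) ** t for t in range(1, years + 1))
    return build_cost + pv

hazard = [(0.2, 0.02), (0.4, 0.005), (0.8, 0.001)]  # (PGA in g, events/yr)
conventional = life_cycle_cost(1.0e6, expected_annual_loss(hazard, 0.5, 0.6, 8.0e5))
retrofitted = life_cycle_cost(1.2e6, expected_annual_loss(hazard, 0.8, 0.6, 8.0e5))
print(retrofitted < conventional)  # True only if avoided losses beat the extra cost
```

Extending this toward sustainability would add terms for embodied carbon of construction and of earthquake-induced repair, weighed on the same present-value basis.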

The next generation of BIM or BrIM systems will provide unprecedented information that can be used in the performance-based seismic design community (Holness, 2008). Building information modeling and associated data acquisition sensors (e.g., 3-D scanning and mapping) and visualization tools (e.g., augmented and virtual reality) cover precise geometry, spatial relationships, light analysis, geographic information, and quantities and properties of building and other infrastructure components. The earthquake-resistant-design community can take advantage of the BrIM or BIM platform to develop and demonstrate design trade-offs that include seismic vulnerability, constructability, costs, schedule, and energy usage.

Post-Event Response and Recovery

There is an ever-increasing need for quick, yet data-driven response to earthquake disasters. To be most effective, a focused response to an earthquake needs to be initiated and executed within seconds of the event. The challenge for lifeline systems is the need to assess the damage over a wide range of scales. For example, one must rapidly assess the damage to an entire transportation network, to enable rapid determination of emergency routes and to determine where critical resources should be focused. Also, once investigation teams are deployed to the areas of likely damage, advanced tools are necessary to quantify the damage and structural integrity, particularly in cases where the extent of damage is not obvious.

Currently, the U.S. Geological Survey (USGS) program ShakeCast (ShakeMap BroadCast) is a fully automated system for delivering specific ShakeMap products to critical users (Wald and Lin, 2007). Its role is primarily in emergency response, loss estimation, and public information. Caltrans uses ShakeCast to set priorities for traffic re-routing, closures, and inspections following a damaging earthquake. The current generation of ShakeCast flags bridges as high, medium-high, medium, or low priority for inspection, primarily based on the expected level of shaking and the system-level performance of bridges. Advancements in bridge modeling and fragility analysis can provide much more informed decision making for emergency response. For example, fragility curves that provide component-level fragility information (e.g., probability of damage to columns, footings, or shear keys) can be much more informative to bridge inspectors and can significantly increase the speed and effectiveness of bridge inspections following an earthquake (Nielson and DesRoches, 2007; Padgett and DesRoches, 2009).
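The component-level enrichment of ShakeCast-style triage suggested here can be sketched as follows; the bridge inventory, fragility parameters, and shaking values are illustrative assumptions, not USGS or Caltrans data.

```python
# Sketch of ShakeCast-style inspection triage enriched with component
# fragilities: sort bridges by the probability that their most vulnerable
# component is damaged at the mapped shaking level. All parameters are
# illustrative assumptions, not Caltrans or USGS values.
from math import erf, log, sqrt

def p_damage(im, median, beta):
    """Lognormal fragility: P(damage | intensity measure im)."""
    return 0.5 * (1.0 + erf(log(im / median) / (beta * sqrt(2.0))))

def inspection_queue(bridges, shakemap):
    """bridges: id -> {component: (median, beta)}; shakemap: id -> PGA (g).
    Returns (bridge id, worst component, probability), highest risk first."""
    rows = []
    for bid, comps in bridges.items():
        im = shakemap[bid]
        comp, (m, b) = max(comps.items(), key=lambda kv: p_damage(im, *kv[1]))
        rows.append((bid, comp, p_damage(im, m, b)))
    return sorted(rows, key=lambda r: r[2], reverse=True)

bridges = {
    "B1": {"column": (0.7, 0.5), "shear key": (0.5, 0.6)},
    "B2": {"column": (0.9, 0.5), "shear key": (1.1, 0.6)},
}
for bid, comp, p in inspection_queue(bridges, {"B1": 0.45, "B2": 0.30}):
    print(bid, comp, round(p, 2))
```

Naming the likely damaged component, not just a priority bin, is what lets an inspector head straight to the shear keys or footings rather than conducting a full walk-down.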

Another challenge that needs to be addressed is the assessment of structural integrity, given damage to various components within a lifeline. Current approaches to assessing structural integrity are qualitative and often biased by personal experience. Research is needed to find ways to exploit recent advances in sensor technology and/or machine image technology for post-earthquake assessment of structural integrity. Researchers have recently proposed using high-resolution video cameras mounted on first responders’ outfits for determining structural integrity of buildings following a disaster (Zhu and Brilakis, 2010; German et al., 2011). Using the camera, damage inflicted on critical structural elements (in the form of cracks, spalling, bar buckling, etc.) is detected using state-of-the-art recognition techniques. The detected damage is then superimposed on the detected concrete column element to measure the damage properties (e.g., length, width, and position of crack). This information could be used to query a database of column tests to determine the likely load-carrying capacity of the column and the structural system as a whole. Significant research is needed to better correlate the damage to individual components with the overall structural integrity of the building system.
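A hedged sketch of the final step, mapping vision-detected damage metrics on a column to a damage state and a residual-capacity estimate, appears below. The crack-width thresholds and residual-capacity fractions are illustrative assumptions, not values from any column-test database.

```python
# Sketch of mapping vision-detected damage metrics on a concrete column
# to a damage state and a residual-capacity estimate, in the spirit of
# querying a column-test database. Thresholds and factors are assumptions.

def column_damage_state(max_crack_width_mm, spalling, bars_buckled):
    """Classify from detected crack width (mm) and boolean observations."""
    if bars_buckled:
        return "severe"
    if spalling or max_crack_width_mm >= 2.0:
        return "moderate"
    if max_crack_width_mm >= 0.3:
        return "slight"
    return "none"

# Assumed residual axial-capacity fractions per damage state
RESIDUAL = {"none": 1.0, "slight": 0.95, "moderate": 0.75, "severe": 0.4}

state = column_damage_state(max_crack_width_mm=2.4, spalling=False,
                            bars_buckled=False)
print(state, RESIDUAL[state])  # -> moderate 0.75
```

The open research question noted above is exactly how to replace these assumed fractions with capacities interpolated from a curated database of column tests, and how to roll component capacities up to a system-level verdict.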

Recent earthquakes in Haiti, Chile, and New Zealand have illustrated the power of remote sensing and have transformed the way that earthquake reconnaissance is performed (Ghosh et al., 2011). Remote sensing was the source of much information in the early days following the 2010 Haiti earthquake as data providers, such as DigitalGlobe and GeoEye, released newly captured imagery with spatial resolutions of up to 50 cm to aid response efforts. Within days of the earthquake, an aerial remote sensing data collection effort was commissioned by the World Bank, in collaboration with ImageCat and the Rochester Institute of Technology. In direct response, the Global Earth Observation Catastrophe Assessment Network (GEO-CAN) community was formed to assist in quantifying building damage using the remotely sensed data and to harness online “crowds” of experts, allowing critical damage assessment tasks to be completed rapidly by a distributed network. Such an approach can be adopted for lifeline systems, although challenges would remain for lifelines that are completely or partially buried.

Community Resilience

The damage to lifeline systems during an earthquake, and the disruption to the vital services that they provide, are critical to the resulting resilience of a community. Resilience refers to the ability of an individual (or community) to respond and recover following a disturbance. It includes those inherent conditions that allow the system to absorb impacts and cope with the event, as well as post-event adaptive processes that facilitate the ability of the system to reorganize, adapt, and learn in the response to the event (Cutter et al., 2008). Researchers have shown that the economic and social impacts from an earthquake are strongly linked to the performance of lifeline systems (Chang et al., 2008).

Vulnerability of communities arises from the complex intersection of the built environment, the natural environment, and human systems. Early social science research on community resilience focused on earthquake prediction, forecasting, and warning. This research led to the development of conceptual and empirical models of risk communication and perception, and warning responses. Advances in mapping and modeling the physical vulnerability of a community or region, through GIS and remote sensing technology, significantly improved our understanding of how disasters put communities at risk.

The recent increase in access to broadband connections, the widespread availability of affordable global positioning systems (i.e., via cell phones), and social media perhaps provide the greatest opportunity to learn from, respond to, and prepare for earthquakes at the community and individual level. These technologies essentially provide the opportunity for individuals to act as sensors (Shade et al., 2010). The potential for tens of thousands to millions of people to monitor and report on the state of damage following an earthquake, as well as the environmental impacts, organization, and human behavior, can help to provide a wealth of information that is useful for better understanding how disasters unfold. More research is needed, however, on the development of tools to collect, synthesize, verify, and redistribute the information in a useful manner.
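A simple illustration of treating “individuals as sensors,” with a basic verification step, is sketched below; the grid size and report threshold are illustrative assumptions.

```python
# Sketch of treating "individuals as sensors": bin geolocated damage
# reports into grid cells and mark a cell verified only when independent
# reports agree. Grid size and thresholds are illustrative assumptions.
from collections import defaultdict

def grid_cell(lat, lon, size=0.01):
    """Snap a coordinate to a ~1 km grid cell (at mid-latitudes)."""
    return (round(lat / size) * size, round(lon / size) * size)

def verified_cells(reports, min_reports=3):
    """reports: iterable of (lat, lon). Returns cells with enough
    independent reports to treat the signal as verified."""
    counts = defaultdict(int)
    for lat, lon in reports:
        counts[grid_cell(lat, lon)] += 1
    return {cell for cell, n in counts.items() if n >= min_reports}

reports = [(37.801, -122.402), (37.802, -122.399), (37.799, -122.401),
           (37.900, -122.500)]  # three clustered reports + one outlier
print(len(verified_cells(reports)))  # -> 1
```

Requiring agreement across independent reports is one crude answer to the verification problem raised above; research-grade systems would also weight reporter credibility and correct for uneven population coverage.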

Research Needs for Lifeline Systems

The sections of the paper above highlighted the challenges related to the performance of lifeline systems during earthquakes and opportunities for transformative research on lifeline systems. Below is a summary of the key research needs for lifeline systems in the areas of pre-earthquake planning, design, post-earthquake response, and community resilience.

1. Site Response Using Remote Sensing

Lifeline systems are strongly affected by permanent ground deformation, which often results from surface faulting, landslides, and soil liquefaction. There is a need to develop approaches to quantitatively predict various ground motion parameters and to reduce the uncertainty associated with these parameters. Moreover, there is a major research need to use advances in remote sensing products, from air- and spaceborne sensors, to characterize land surface parameters at different spatial scales for quantitative assessment, which has traditionally relied on costly and time-consuming approaches that are typically applied only at a local scale.

2. Inventory Assessment Using Remote Sensing Technologies

One of the main barriers to regional risk assessment is the lack of reliable and consistent inventory data. Research is needed to identify better ways to acquire data on the vast inventories contained within lifeline networks. Although researchers have effectively deployed remote sensing technologies (such as LiDAR) following natural disasters, research is needed on ways these technologies can be used to acquire inventory data, including the physical attributes of different lifeline systems.

3. Improved Data Management System for Enhancements in Learning from Earthquakes

Pre-event planning requires that we learn from past earthquakes, which in turn requires vastly improved post-earthquake information acquisition and management. Comprehensive and consistent information on the earthquake hazard, geological conditions and responses, structural damage, and economic and social impacts observed in previous earthquakes is invaluable in planning for future events. Such information provides not only unprecedented detail on the performance of individual lifelines, but also critical insight into the interactions among lifeline systems. A major effort of the future NEES program should be to develop a comprehensive effort among professional, governmental, and academic institutions to systematically collect, share, and archive information on the effects of significant earthquakes, including effects on the built and natural infrastructure, society, and the economy. The information will need to be stored, presented, and made available in structured electronic data management systems, and those systems should be designed with input from the communities they are intended to serve.

4. Improved Fragility Relationships for Lifeline Components and Systems

Testing of lifeline components has provided important information on fragility relationships. However, more research is needed to develop simulation-based fragility curves, which include component fragility, system fragility, and critical information on functionality, repair time, and repair cost. Such enhanced curves will significantly improve regional seismic risk assessment and form a basis for system-level design of lifelines.
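Fragility curves of this kind are commonly expressed as lognormal functions of a ground motion intensity measure. The short sketch below illustrates only the general mathematical form; the median capacity and dispersion values are hypothetical and are not drawn from any tested component.

```python
from math import log, erf, sqrt

def fragility(im, median, beta):
    """P(damage state reached | intensity measure = im), using the
    common lognormal form: Phi(ln(im / median) / beta)."""
    return 0.5 * (1.0 + erf(log(im / median) / (beta * sqrt(2.0))))

# Hypothetical parameters for one pipeline damage state:
# median capacity of 0.45 g PGA with lognormal dispersion of 0.5.
for pga in (0.2, 0.45, 0.9):
    print(f"PGA {pga:.2f} g -> P(damage) = {fragility(pga, 0.45, 0.5):.2f}")
```

At the median intensity the exceedance probability is 0.5 by construction; simulation-based fragility models would replace the assumed median and dispersion with values derived from component and system response analyses.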

5. Use of Infrastructure Information Modeling Systems for Performance-Based Lifeline Design

The next generation of building information modeling (BIM) or bridge information modeling (BrIM) systems will provide unprecedented information that can be used in the performance-based seismic design of lifeline systems. The earthquake-resistant-design community can take advantage of BIM-type platforms to perform high-resolution simulation modeling and to demonstrate design trade-offs that include seismic vulnerability, constructability, cost, schedule, and energy usage.

6. Pre-Earthquake and Post-Earthquake Condition Assessment

Many lifelines were designed and constructed 50 to 100 years ago, and they are rapidly aging and deteriorating. Recent advances in sensor technology and non-destructive health monitoring methodologies provide significant opportunities for assessing the condition of infrastructure systems in the field. Research is needed to quantify the deterioration of infrastructure systems and how this deterioration increases the vulnerability of various systems. In addition, there is a need to use sensor technology and/or machine vision technology, coupled with damage detection algorithms and analytical models, to conduct rapid post-earthquake damage assessment and determination of structural integrity.

7. Post-Earthquake Damage Assessment Using Remote Sensing

Recent earthquakes in Haiti, Chile, and New Zealand have illustrated the power of remote sensing and have transformed the way that earthquake reconnaissance is performed. Additional research is needed to exploit the use of remote sensing technologies to identify the spatial distribution of damage to lifeline systems and the associated economic and operational impacts.

8. Development of Relationships Between Physical Damage and Operability

Although significant progress has been made in understanding the seismic performance of lifeline components via component testing, large-scale testing, and analysis, much less is known about the operability of these components and the system as a whole, as a function of various levels of damage. Research is needed to develop more accurate relationships between component- and system-level damage, and the corresponding functionality, repair cost, and downtime of the lifeline system. The use of sensors and data management systems would better allow us to develop critical relationships between physical damage, spatio-temporal correlations, and operability.

9. Systems-Level Design of Lifeline Systems

Significant progress has been made in designing individual lifeline components (e.g., bridges, water mains, substation equipment); however, additional work is needed in designing these systems considering their role within a larger network, the interdependent nature of lifeline systems, and the trade-offs in cost and safety associated with various design decisions. One research challenge will be how to design our infrastructure systems using an “inverse-problem” paradigm. For example, a design goal might be to have power and telecommunications restored within four hours of an earthquake, considering the relationship between damage and functionality and the expected interdependency between lifeline systems.
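One way to make such a restoration target operational is to estimate, by Monte Carlo simulation, the probability that the system is restored within the target time. The sketch below does this for a hypothetical three-component series system; the component failure probabilities and repair times are invented for illustration, and real models would draw them from fragility and restoration data.

```python
import random

# Hypothetical components of a small power feed: each entry gives a
# failure probability for the scenario event and a repair time (hours).
COMPONENTS = {
    "substation": (0.10, 8.0),
    "feeder": (0.25, 3.0),
    "transformer": (0.15, 5.0),
}

def restoration_time(rng):
    """Time until all components of this series system are functional,
    assuming failed components are repaired in parallel by separate crews."""
    return max((rt for p, rt in COMPONENTS.values() if rng.random() < p),
               default=0.0)

def prob_restored_within(target_hours, trials=100_000, seed=1):
    rng = random.Random(seed)
    ok = sum(restoration_time(rng) <= target_hours for _ in range(trials))
    return ok / trials

print(f"P(restored within 4 h) = {prob_restored_within(4.0):.3f}")
```

Under these assumed numbers, meeting a four-hour target requires that neither component with a repair time above four hours fails; an inverse-problem design would instead search for component capacities or redundancy that push this probability above a chosen threshold.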

10. Improvement in Probabilistic Cost-Benefit Methodologies

Research is needed to determine how probabilistic cost-benefit analyses can be used to assess anticipated return on investment in new materials, novel high-performance systems, and retrofit. Moreover, research is needed on how lifecycle cost analyses can be used to characterize the lifetime investment in lifeline systems. The models can incorporate costs associated with construction, maintenance, upgrade, and, at times, deconstruction.
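A minimal probabilistic cost-benefit sketch, under heavily simplified assumptions, is shown below: expected annual loss is computed from a few scenario rates, and the benefit of a retrofit is the discounted present value of avoided losses. All scenario rates, losses, and the retrofit cost are hypothetical numbers chosen only to illustrate the calculation.

```python
# Hypothetical scenario losses (in $M) for a lifeline component,
# before and after a candidate retrofit, with annual exceedance rates.
SCENARIOS = [  # (annual rate, loss as-is, loss after retrofit)
    (0.01,    5.0,  1.0),
    (0.002,  40.0, 12.0),
    (0.0004, 120.0, 50.0),
]

def expected_annual_loss(column):
    """Sum of rate * loss over scenarios; column 0 = as-is, 1 = retrofit."""
    return sum(rate * losses[column] for rate, *losses in SCENARIOS)

def benefit_cost_ratio(retrofit_cost, horizon_years=50, discount=0.03):
    """Present value of avoided expected losses over the planning
    horizon, divided by the up-front retrofit cost."""
    avoided = expected_annual_loss(0) - expected_annual_loss(1)
    pv_factor = (1 - (1 + discount) ** -horizon_years) / discount
    return avoided * pv_factor / retrofit_cost

print(f"EAL as-is: ${expected_annual_loss(0):.3f}M per year")
print(f"BCR for a $2M retrofit: {benefit_cost_ratio(2.0):.2f}")
```

A full life-cycle model would add maintenance, upgrade, and deconstruction costs to the same discounted-cash-flow structure, as the text above notes.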

11. Sustainability

Given the increased emphasis on sustainability, the design of the next generation of lifeline systems must consider both resilience and sustainability. The major research need is to determine how life-cycle cost models can incorporate environmental impacts (in materials usage, in construction, and in deconstruction resulting from earthquake damage) and weigh them against resilience.

12. Interdependencies

The interdependencies among lifelines complicate their coupled performance during an earthquake. Research is needed to better quantify the interactions between lifeline systems and to better design the systems when those interactions are considered.
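One simple way to represent such interdependencies is a functional dependency graph in which a failure propagates to every system that requires the failed one. The dependency map below is hypothetical and greatly simplified; real interdependency models also capture partial functionality, restoration dynamics, and feedback loops.

```python
# Hypothetical dependency map: each system lists the systems it needs
# to remain functional (e.g., water pumping needs electric power).
DEPENDS_ON = {
    "power": [],
    "telecom": ["power"],
    "water": ["power"],
    "hospital_supply": ["water", "telecom"],
}

def functional_systems(directly_failed):
    """Propagate failures through the dependency graph until stable,
    and return the set of systems that remain functional."""
    failed = set(directly_failed)
    changed = True
    while changed:
        changed = False
        for system, needs in DEPENDS_ON.items():
            if system not in failed and any(n in failed for n in needs):
                failed.add(system)
                changed = True
    return set(DEPENDS_ON) - failed

print(sorted(functional_systems({"power"})))
print(sorted(functional_systems({"telecom"})))
```

Even this toy cascade shows why component-level fragility alone understates risk: a single upstream failure can remove downstream systems that sustained no direct damage.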

13. Applications of Citizens as Sensors

Research is needed on how cell phones and social media can be used to learn from, respond to, and prepare for earthquakes at the community and individual level. The potential for tens of thousands to millions of people to monitor and report on the state of damage following an earthquake, as well as on environmental impacts, organization, and human behavior, can provide a wealth of information for better understanding how disasters unfold. More research is needed, however, on tools to collect, synthesize, verify, and redistribute this information in a useful and effective manner.

Acknowledgments

This paper was developed with the support of the National Science Foundation and the National Research Council. The ideas and opinions presented in this paper are those of the author, with considerable input from members of the lifeline earthquake engineering community. The author would like to recognize the following individuals for their valuable comments and suggestions: Tom O’Rourke, M. Saiid Saiidi, Leonardo Dueñas-Osorio, Jerry Lynch, Jamie Padgett, Yang Wang, Glenn Rix, Michael Symans, and Mary Lou Zoback. A special thanks to Sharon Wood, Ron Eguchi, and Arrietta Chakos for their detailed review of the manuscript and helpful comments, and to Laurie Johnson and Greg Deierlein (who are authoring companion papers) for their helpful discussion while preparing the paper. Finally, I would like to thank the conference co-chairs, Greg Fenves and Chris Poland, for their leadership in organizing this workshop.

References

Abdoun, T. H., D. Ha, M. J. O’Rourke, M. D. Symans, T. D. O’Rourke, M. C. Palmer, and H. E. Stewart. 2009. Factors influencing the behavior of buried pipelines subjected to earthquake faulting. Soil Dynamics and Earthquake Engineering 29(3):415-427.

ASCE. 2009. Report Card for America’s Infrastructure. American Society of Civil Engineers. Available at www.infrastructurereportcard.org.

Chang, S. E., and M. Shinozuka. 1996. Life-cycle cost analysis with natural hazard risk. Journal of Infrastructure Systems 2:118.

Cutter, S. L., S. Barnes, M. Berry, C. Burton, E. Evans, E. Tate, and J. Webb. 2008. A place-based model for understanding community resilience to natural disasters. Global Environmental Change 18(4):598-606.

Dueñas-Osorio, L., J. I. Craig, and B. J. Goodno. 2007. Seismic response of critical interdependent networks. Earthquake Engineering and Structural Dynamics 36(2):285-306.

Frangopol, D. M., K.-Y. Lin, and A. C. Estes. 1997. Life-cycle cost design of deteriorating structures. Journal of Structural Engineering 123:1390-1401.

Gedikli, A., M. A. Lav, and A. Yigit. 2008. Seismic vulnerability of a natural gas pipeline network. Pipeline Asset Management: Maximizing Performance of Our Pipeline Infrastructure, Proceedings of Pipelines Congress 2008. ASCE Conference Proceedings. DOI: 10.1061/40994(321)77.

German, S., I. Brilakis, and R. DesRoches. 2011. Automated detection of exposed reinforcement in post-earthquake safety and structural evaluations. Proceedings of the 2011 ISEC-6 Modern Methods and Advances in Structural Engineering and Construction Conference, June 21-26, 2011, Zurich, Switzerland.

Ghosh, S., C. K. Huyck, M. Greene, S. P. Gill, J. Bevington, W. Svekla, R. DesRoches, and R. T. Eguchi. 2011. Crowdsourcing for rapid damage assessment: The Global Earth Observation Catastrophe Assessment Network (GEO-CAN). Earthquake Spectra, In Press, March.

Glaser, S. D., H. Li, M. L. Wang, J. Ou, and J. P. Lynch. 2007. Sensor technology innovation for the advancement of structural health monitoring: A strategic program of US-China research for the next decade. Sensor Structures and Systems 3(2):221-244.

Ha, D., T. H. Abdoun, M. J. O’Rourke, M. D. Symans, T. D. O’Rourke, M. C. Palmer, and H. E. Stewart. 2010. Earthquake faulting effect on buried pipelines—Case history and centrifuge study. Journal of Earthquake Engineering 14(5):646-669.

Holness, G. V. 2008. Building information modeling gaining momentum. ASHRAE Journal (June):28-30.

Ivey, L. M., G. J. Rix, S. D. Werner, and A. L. Erera. 2010. A framework for earthquake risk assessment for container ports. Journal of the Transportation Research Board 2166:116-123.

Johnson, N., R. T. Ranf, M. S. Saiidi, D. Sanders, and M. Eberhard. 2008. Seismic testing of a two-span reinforced concrete bridge. Journal of Bridge Engineering 13(2):173-182.

Kim, J., J. P. Lynch, R. L. Michalowski, R. A. Green, P. G. Mohammed, W. J. Weiss, and A. Bradshaw. 2009. Experimental study on the behavior of segmented buried concrete pipelines subject to ground movements. Proceedings of the International Society for Optical Engineering, Vol. 7294.

Kim, J. Y., L. J. Jacobs, J. Qu, and J. W. Littles. 2006. Experimental characterization of fatigue damage in a nickel-base superalloy using nonlinear ultrasonic waves. Journal of the Acoustical Society of America 130(3):1266-1272.

Kiremidjian, A., J. Moore, Y. Y. Fan, O. Yazlali, N. Basoz, and M. Williams. 2007. Seismic risk assessment of transportation network systems. Journal of Earthquake Engineering 11(3):371-382.

Lynch, J. P., and K. J. Loh. 2006. A summary review of wireless sensors and sensor networks for structural health monitoring. The Shock and Vibration Digest 38(2):91-128.

McCullough, N. J., S. E. Dickerson, S. M. Schlechter, and J. C. Boland. 2007. Centrifuge seismic modeling of pile supported wharves. Geotechnical Testing Journal 30(5):349-359.

Nielson, B., and R. DesRoches. 2007. Analytical fragility curves for typical highway bridge classes in the central and southeastern United States. Earthquake Spectra 23(3):615-633.

NRC (National Research Council). 2009. Sustainable Critical Infrastructure Systems: A Framework for Meeting 21st Century Imperatives. Washington, DC: The National Academies Press.

O’Rourke, T. D. 2007. Critical infrastructure, interdependencies, and resilience. National Academy of Engineering Bridge Magazine 37(1).

O’Rourke, T. D. 2010. Geohazards and large, geographically distributed systems. Geotechnique 60(7):505-543.

Padgett, J. E., and R. DesRoches. 2009. Retrofitted bridge fragility analysis for typical classes of multi-span bridges. Earthquake Spectra 29(1):117-141.

Padgett, J. E., K. Dennemann, and J. Ghosh. 2009. Risk-based seismic life-cycle cost-benefit analysis (LCC-B) for bridge retrofit assessment. Structural Safety 32(3):165-173.

Padgett, J. E., R. DesRoches, and E. Nilsson. 2010. Regional seismic risk assessment of bridge network in Charleston, South Carolina. Journal of Earthquake Engineering 14(6):918-933.

Romero, N., T. D. O’Rourke, L. K. Nozick, and C. R. Davis. 2010. Seismic hazards and water supply performance. Journal of Earthquake Engineering 14(7):1022-1043.

Shade, S., G. Lurashci, B. De Longueville, S. Cox, and L. Diaz. 2010. Citizens as Sensors for Crisis Events: Sensor Web Enablement for Volunteered Geographic Information. WebMGS.

Shafieezadeh, A., R. DesRoches, G. Rix, and S. Werner. 2011. Seismic performance of pile supported wharf structures considering soil structure interaction in liquefied soils. Earthquake Spectra, In Review.

Wald, D., and K.-W. Lin. 2007. USGS ShakeCast. U.S. Geological Survey Fact Sheet 2007-3086. Golden, CO.

Yong, A., S. E. Hough, M. J. Abrams, H. M. Cox, C. J. Wills, and G. W. Simila. 2008. Site characterization using integrated imaging analysis methods of satellite data of the Islamabad, Pakistan, region. Bulletin of the Seismological Society of America 98(6):2679-2693.

Zhu, Z., and I. Brilakis. 2010. Concrete column recognition in images and videos. Journal of Computing in Civil Engineering 24(6):478-487.

EARTHQUAKE ENGINEERING RESEARCH NEEDS IN
THE PLANNING, DESIGN, CONSTRUCTION, AND
OPERATION OF BUILDINGS

Gregory G. Deierlein, PhD, PE

John A. Blume Professor of Engineering

Stanford University

Introduction

Past reports on research needs and grand challenges in earthquake engineering have described the importance of building performance in controlling earthquake losses and life-safety risks (e.g., EERI, 2003; NRC, 2004). A 2000 HAZUS study estimated a $4.4 billion annualized earthquake loss associated with buildings in the United States (EERI, 2003); the actual losses are likely to be larger with today’s building inventory. Scenario studies of large earthquakes in the United States suggest losses on par with or larger than those caused by Hurricane Katrina, with $100 to $200 billion in economic losses and damaged buildings displacing hundreds of thousands of residents and thousands of businesses (Kircher et al., 2006; Jones et al., 2008). Earthquake risks are generally considered to be increasing, as population growth in cities and urban regions is outpacing mitigation measures. In the United States, more than 75 million people live in urban regions with moderate to high seismic hazards (NEHRP, 2008), and this number will continue to climb because of increasing population and societal pressures toward denser urban communities. Risks in earthquake-threatened cities are even more pronounced outside the United States, particularly in developing countries that are experiencing rapid urban growth (Deutsche Bank Research, 2008).

It is generally accepted that the most significant earthquake threats affecting buildings in the United States are those associated with (1) casualty risks from collapse in existing buildings that are seismically deficient relative to current building code standards and (2) excessive economic losses, business interruption, and displacement of residents caused by earthquake damage to new and existing buildings. The latter point reflects the fact that current building codes primarily deal with life-safety and do not explicitly address the broader performance factors that can impact communities. With the goal of promoting community resilience to earthquake threats, the recent San Francisco Planning and Urban Research Association (SPUR) study (2009) proposes specific targets for building performance that go beyond basic building code requirements. Specifically, the SPUR study defines five levels of building performance, described in terms of safety and post-earthquake functionality (e.g., safe and operational, safe and useable during repair, etc.). These descriptions of performance highlight important societal needs for maintaining key services (emergency, medical, government, among others) and for sheltering residents in place after a large earthquake. Methods to accurately assess earthquake damage and its impact on continued occupancy and functionality of buildings are essential to implement SPUR’s resilient city vision.

Performance-based earthquake engineering provides the means to quantify building performance in terms of (1) collapse and fatality risks, (2) financial losses associated with damage and repairs, and (3) loss of function and recovery time. These performance metrics are intended to inform earthquake risk management and mitigation decisions by building owners, financial/insurance interests, public building code officials, and other stakeholders. The implementation of performance-based engineering requires computational models, criteria, and enabling technologies to (1) simulate building response and performance, (2) design and configure systems with the desired performance, and (3) create building materials, components, and systems that fulfill the design intent. Although there has been significant progress on performance-based methods over the past two decades, continued research is needed to fully achieve and implement the vision of performance-based design for earthquakes.

This paper summarizes significant research and development needs for the assessment, design, and creation of earthquake-resilient communities. Although the fundamental concepts of earthquake safety and resiliency are not new, performance-based engineering strategies and methods for addressing the needs are new. Coupled with emerging computational, sensing, and information technologies, the performance-based methods promise to transform the practices of earthquake engineering and risk management.

The paper is intentionally focused toward fundamental research and development that fall under the mission of the National Science Foundation (NSF), and it does not attempt to broadly address all of the issues that are important to earthquake risk management. In particular, the paper does not address more applied research and development, technology transfer, and adoption/enforcement of building code standards that fall under the mission of other federal and state agencies. Moreover, the focus in this paper relates specifically to earthquake concerns related to buildings. Broader research needs for earthquake-resilient communities and civil infrastructure are covered in companion workshop papers (DesRoches, 2011; Johnson, 2011); other important scientific research areas, such as geophysics and seismology, are outside the scope of this paper.

The paper begins with a summary of research challenges and needs to achieve the vision of resilient communities through performance-based engineering. These needs form the basis for topical research thrust areas that are described in the next section. The paper concludes with brief comments on research facilities and organizations that would be required to conduct the research.

Research Challenges and Needs for Buildings

Research needs and challenges for buildings can be broadly distinguished between those associated with either pre-earthquake planning, design, and construction, or post-earthquake response, evaluation, and restoration. Within each category, further distinctions can be made between needs related to (1) new versus existing buildings, (2) individual buildings versus large building inventories, and (3) short-term (rapid-response) decision making versus longer term planning. Although some research needs are specific to one category, many of them cut across boundaries and pertain to multiple situations. Therefore, in the following discussion, the research challenges and needs are presented in a single group, and comments on the likely applications are discussed under each topic.

1. Simulation of Structural System Response

Accurate nonlinear analysis of structural system response, from the initiation of damage to the onset of collapse, is essential to modern performance-based design methods. Analysis of damage initiation is a major factor in assessing post-earthquake functionality and losses associated with repair costs and downtime, and assessment of collapse is a fundamental metric in establishing minimum life-safety requirements for buildings. For example, the FEMA P695 (2009) procedures for evaluating seismic building code design requirements are based on collapse capacities calculated by nonlinear dynamic analysis. Accurate simulation models of damaged structures are likewise important to assess the post-earthquake safety of buildings to aftershocks and to establish requirements for structural repair.

In spite of significant advances in nonlinear structural analysis, state-of-the-art methods are still fairly limited in their ability to model nonlinear dynamic response. This is especially true as structures approach collapse, where failure is often triggered by localized concentration of strains, resulting in strength and stiffness degradation that is sensitive to loading history. Examples of such behavior include local buckling and fracture in steel (both structural steel and steel reinforcement in concrete), shear failures in concrete columns and walls, and connection or splice failures. Current methods to simulate these effects rely heavily on phenomenological models, which require empirical calibration and are limited in applicability and/or accuracy by the test specimen sizes, configurations, design parameters, and loading histories considered in the calibration testing. New high-fidelity analyses are needed whose model formulations represent the underlying mechanics and material behavior more directly, such that the models can capture energy dissipation, strength and stiffness degradation, and other effects under arbitrary loading. The model formulations should incorporate basic material and topology parameters and should be validated through large-scale testing of realistic structural components and systems.
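The simplest phenomenological hysteresis model of the kind referred to above is an elastic-perfectly-plastic spring; the sketch below steps a single-degree-of-freedom version through a ground acceleration history. The mass, stiffness, yield force, and pulse values are arbitrary illustrative numbers, and real degradation models (with strength and stiffness deterioration) are far more elaborate.

```python
from math import sqrt

def elastoplastic_peak(accel_history, dt, m=1.0, k=400.0, fy=3.0, zeta=0.05):
    """Semi-implicit Euler time-stepping of a single-DOF elastoplastic
    oscillator under ground acceleration (m/s^2); returns the peak
    absolute relative displacement (m)."""
    c = 2.0 * zeta * sqrt(k * m)        # viscous damping coefficient
    u = v = u_plastic = 0.0
    peak = 0.0
    for ag in accel_history:
        fs = k * (u - u_plastic)        # spring force with plastic offset
        if fs > fy:                     # yield in the positive direction
            u_plastic = u - fy / k
            fs = fy
        elif fs < -fy:                  # yield in the negative direction
            u_plastic = u + fy / k
            fs = -fy
        a = (-m * ag - c * v - fs) / m  # equation of motion
        v += a * dt
        u += v * dt
        peak = max(peak, abs(u))
    return peak

# A strong acceleration pulse drives the oscillator well past its
# elastic limit (u_y = fy / k = 0.0075 m for these values).
pulse = [10.0] * 500 + [0.0] * 2500
print(f"peak displacement: {elastoplastic_peak(pulse, 0.001):.4f} m")
```

Because the plastic offset depends on the loading history, even this toy model shows the path dependence that makes phenomenological calibration so restrictive; high-fidelity formulations aim to derive such behavior from the underlying material mechanics instead.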

2. Comprehensive Assessment of Building Performance

Beyond improved structural analysis, research is needed to develop more robust and accurate models to assess complete building performance. Performance-based procedures, such as embodied in ATC 58 (ATC, 2011), provide a fairly comprehensive framework to evaluate building performance. However, current methodologies rely almost entirely on empirical fragility models to assess post-earthquake functionality, damage, and repairs to structural, architectural, electrical, and mechanical building components. Although empirical fragility models offer a practical approach to evaluating component performance, they are inherently constrained by available test data, the types and number of components tested, and the realism of the tests (e.g., representation of boundary conditions, loading, etc.). Moreover, most fragility testing does not provide direct measures of impact assessment (e.g., implications of the damage on other components and systems, restoration and recovery times, etc.). Instead, the impacts are often defined and quantified based on ad hoc judgments and experience.

In the near term, component testing and empirical fragility models will likely continue to play an important role toward implementing performance-based methods. However, looking further into the future, more research should be directed toward developing simulation-based fragility models, whereby the component behavior is explicitly modeled to calculate component damage and related impacts on functionality, repairs, and recovery. For example, one can envision detailed models of ceiling systems, including lighting, HVAC, and sprinkler piping, where the physical behavior of the overall system, including component damage and functionality, is directly simulated. Such simulations could include direct modeling of damage to the ceiling components, along with cascading damage and restoration, such as simulation of sprinkler piping failures, water damage, and reconstruction operations to repair and restore the facility. Modern building information modeling (BIM) is an important enabling technology to facilitate high-resolution simulation modeling and data management for the wide variety of structural, architectural, and other building systems and components.

3. Life-Cycle Analysis and Decision Making

Life-cycle analysis of economic and other performance metrics is an important tool for performance-based engineering for earthquake risk management and decision making. As with the prior topic (building performance assessment), general frameworks for life-cycle analysis are fairly well established; however, beyond the basic tools of performance-based assessment, utilization of life-cycle assessment is limited by (1) a lack of accurate models and information on post-earthquake building repair costs and (2) a lack of financial and other metrics to quantify the effects of building downtime, displacement of building occupants, and environmental sustainability. Research should focus on filling these needs and on facilitating the integration of life-cycle analysis with earthquake risk mitigation and management.

4. High-Performance Building Systems and Materials

Whereas earthquake engineering has traditionally been focused on strengthening and toughening conventional materials and systems to provide minimum life-safety protection, more robust and damage-resistant systems are needed to achieve the vision of earthquake-resilient communities. Seismic base isolation offers perhaps the highest level of seismic protection available today, but it comes with a significant cost premium and other requirements that limit its application. Therefore, research is needed to develop and test other types of high-performance systems, particularly ones that are economically competitive with conventional systems and viable for widespread implementation. Examples of recent developments in damage-resistant seismic systems include (1) self-centering precast concrete and steel framing systems, which employ controlled rocking and elastic posttensioning, (2) moment frame or wall systems that employ high-damping devices, and (3) architectural partitions and other non-structural components that are more resistant to damage from building drifts and accelerations. Given the growth of urban regions and the need for high-density housing and businesses, new earthquake damage-resistant systems for mid- to high-rise residential and office buildings are especially needed. Ideally, these newly developed systems should utilize construction automation, prefabrication, and holistic integration of structural and non-structural components to resist earthquake effects. These needed innovations can apply to both new buildings and retrofit of existing buildings.

5. Development and Evaluation of Repair Technologies

In contrast to the significant research on the design and behavior of new construction and pre-earthquake retrofit, comparatively less research is available to develop and evaluate common earthquake repair techniques. For example, it is very common to employ epoxy injection to fill earthquake-induced cracks in concrete walls and frames; however, there is comparatively little research to demonstrate the effectiveness of epoxy injection to restore the strength and stiffness of the damaged components. To further support efforts to quickly restore buildings to service, research should focus on (1) technologies to rapidly assess earthquake damage and its effect on building safety to earthquake aftershocks and (2) innovative repair techniques that can be implemented quickly with minimal disruption to building occupants and operations.

6. Sensing and Rapid Damage Assessment

In further support of the previous need for building repair methods, development of improved sensors and damage assessment methodologies could greatly facilitate post-earthquake response and recovery. Observations from past earthquakes reveal instances where safe buildings are inadvertently closed and taken out of service because of a lack of reliable information on the condition of the structure. Although there is an understandable tendency to err on the conservative side when making decisions about building closure, this conservatism exacerbates problems with displaced residents or loss of important building services. In some instances, overly pessimistic tagging of buildings may even result in buildings being unnecessarily abandoned and demolished. On the other hand, there may also be instances where unsafe buildings remain open and occupied, although these instances are probably less common. In either case, accurate and timely assessment of building conditions would be facilitated through improved sensors and diagnostic tools that can provide immediate feedback as to the building integrity following a significant earthquake.

7. Characterizing Ground Motions and Site/Foundation Response

Although free-field ground motions and/or ground motion intensities are commonly used as input for seismic design and analysis, it is generally recognized that the effective input motions experienced by a structure can differ significantly from free-field motions. For example, in stiff short-period buildings, many engineers consider free-field motions to overestimate the effective input motions to the structure. These impressions are supported by the discrepancy between calculated and observed damage to short-period buildings. Similar trends have been observed in other (longer period) buildings, depending on the building site and foundation conditions. It is hypothesized that the free-field ground motions are reduced by localized deformations in the nearby soil and the soil-foundation interface, but information to confirm this is lacking. Definition of input ground motions is further complicated in (1) buildings with deep basements, (2) buildings where ground conditions vary considerably across the building site, and (3) dense urban regions where localized ground motions are influenced by closely spaced buildings, underground structures, and other facilities. Although provisions for soil-structure interaction are available, they are limited in their ability to address situations such as these. Moreover, there is a general lack of well-documented laboratory and/or field data to develop reliable models that characterize soil-foundation-structure interaction and its effect on input ground motions.


8. Assessing and Remediating Sites Prone to Ground Deformations

Excessive ground deformations are a serious concern for any structure and may result in abandonment of a building site or expensive ground improvement to the site. Moreover, as land becomes scarce in urban regions, there is increasing demand to build structures on marginal sites with soft soils. Over the past 20 years, methods to estimate earthquake ground deformations have improved considerably; however, even the best methods are fairly empirical and have limited resolution to differentiate certain site conditions. As with structural and building systems performance, there is a need to develop simulation-based models that can more accurately calculate expected ground deformations for specific earthquake hazard levels and ground motion. The models should have the capabilities to assess ground deformations at sites with variable soil types (in depth and plan area), and the methods should have sufficient resolution to quantify the effectiveness of ground modification techniques to mitigate the deformations. In addition to developing improved methods to assess ground deformations, research is needed to develop and evaluate new techniques for ground modification, which are more economical and effective than existing methods.

9. Building Benchmarking and Rating

With the aim of improving building codes and promoting effective public policy, research is needed to (1) benchmark the seismic performance of buildings and (2) provide rating methods that make stakeholders more aware of expected building performance and how it can vary between buildings. Benchmarking studies could serve various purposes, such as evaluating the performance implied by new buildings that conform to current building codes. Alternatively, studies could benchmark the performance of building types that are predominant in an urban region and inform policy decisions on seismic safety. To the extent possible, the benchmark studies should be validated (or corroborated) by observed earthquake damage and losses.

FEMA P695 (2009) provides a framework for evaluating the collapse safety of buildings, but this procedure relies heavily on judgment to characterize variability in ground motions and model uncertainties; the accuracy of the method is ultimately limited by available nonlinear dynamic analysis models. ATC 58 (ATC, 2011) provides a framework for assessing the more complete response of buildings (casualty risks, direct dollar losses, and facility downtime), but, as noted previously, the performance-assessment techniques and fragility models of ATC 58 are heavily based on empirical evidence and judgment. Therefore, research is needed to develop more robust methods for benchmarking building performance.
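To make the lognormal fragility form concrete, the sketch below evaluates a collapse fragility curve of the kind used in FEMA P695-style assessments; the median capacity (1.2 g) and dispersion (0.6) are hypothetical placeholder values, not results from any particular study:

```python
import math

def collapse_fragility(sa, median, beta):
    """P(collapse | Sa = sa) under a lognormal fragility model.

    median -- spectral acceleration (g) at 50% collapse probability
    beta   -- lognormal standard deviation (dispersion), combining
              record-to-record and modeling uncertainty
    """
    z = math.log(sa / median) / beta
    # Standard normal CDF evaluated via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical archetype: median collapse capacity 1.2 g, dispersion 0.6
for sa in (0.3, 0.6, 1.2, 2.4):
    p = collapse_fragility(sa, median=1.2, beta=0.6)
    print(f"Sa = {sa:.1f} g -> P(collapse) = {p:.3f}")
```

By construction the probability is exactly 0.5 at the median; the dispersion controls how steeply the curve rises around it, which is precisely the quantity that, as noted above, currently relies heavily on judgment.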

The impact of benchmarking studies on building codes and policy very much depends on the accuracy (or perceived accuracy) of the studies. Therefore, it is important that the benchmarking studies be conducted on realistic building inventories, using comprehensive building simulations. Ideally, the benchmark metrics would go beyond collapse safety to incorporate complete building performance metrics (e.g., information on safety and post-earthquake functionality could be interpreted through one of the SPUR building ratings). Modern information technologies would play an important role in storing, accessing, and managing data for the benchmarking studies.

Building-specific rating systems to characterize the relative seismic performance between buildings have been proposed as a mechanism that would (1) promote greater awareness of expected earthquake performance of buildings, (2) provide more transparency in seismic design, and (3) encourage more proactive earthquake risk management. The significance of building-specific ratings would be more meaningful if they could be contrasted against comparable ratings for other buildings. Thus, the benchmarking studies of realistic building archetypes would serve an important role in establishing meaningful building ratings.

10. Performance-Based Design and Optimization

To date, most of the research on performance-based earthquake engineering has focused on assessing performance, with comparatively less research on how to use more advanced assessment tools to design cost-effective buildings. The implicit presumption is that effective design solutions can be developed by design professionals and then evaluated (checked) for conformance with the desired performance targets. However, given the inherent nonlinearities in the building response and the large number of design parameters, design optimization for seismic performance can be quite challenging. As performance-assessment technologies are further refined, their practical utilization will require techniques to optimize designs for specific performance targets (e.g., one of the SPUR building resiliency categories) or to minimize life-cycle costs. Therefore, computational design optimization tools are required, whereby a proposed building system can be optimally designed to meet specified performance targets for the lowest life-cycle cost. Design optimization can be applied on a building-specific basis or to archetype building types to develop simplified design provisions for certain classes of structural system or building types.
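As a toy sketch of what such an optimization tool might do, the code below grid-searches a single normalized design-strength parameter to minimize life-cycle cost subject to a performance target; the cost and loss functions are invented placeholders, not calibrated models:

```python
import math

def expected_annual_loss(strength):
    # Placeholder loss model: stronger designs lose less per year
    return 50_000 * math.exp(-2.0 * strength)

def initial_cost(strength):
    # Placeholder cost model: construction cost grows with strength
    return 1_000_000 + 150_000 * strength

def life_cycle_cost(strength, years=50, discount=0.03):
    # Initial cost plus discounted expected losses over the service life
    pv_losses = sum(expected_annual_loss(strength) / (1 + discount) ** t
                    for t in range(1, years + 1))
    return initial_cost(strength) + pv_losses

# Grid search over a normalized design-strength parameter, keeping only
# designs that satisfy a hypothetical performance target (annual loss cap)
candidates = [s / 10 for s in range(5, 31)]   # strengths 0.5 .. 3.0
feasible = [s for s in candidates if expected_annual_loss(s) < 20_000]
best = min(feasible, key=life_cycle_cost)
print(f"optimal strength = {best}, life-cycle cost = {life_cycle_cost(best):,.0f}")
```

In a real application the single scalar parameter would become a full design vector and the placeholder models would be replaced by performance-assessment simulations, which is what makes the optimization computationally demanding and motivates the tools described above.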

Topical Research Study Areas

The research needs outlined above are fairly broad and ambitious, and each will require carefully planned research programs. The following summary of general research thrust areas is intended to describe the experimental, computational, and information technology resources required to address those needs.


Probabilistic Framework for Performance-Based Life-Cycle Design and Decision Making

Although many of the research needs described in the previous section can be tackled independently, it would be highly desirable to have an overarching performance framework to promote coherence between research projects. This would help ensure, for example, that the computational simulation models, laboratory testing, benchmarking exercises, etc., are coordinated so as to quantify performance criteria in consistent ways that facilitate data and model sharing. To some extent, systematic performance assessment methodologies already exist in the form of HAZUS (Kircher et al., 1997), the PEER methodology (Krawinkler and Miranda, 2004; Moehle and Deierlein, 2004), and ATC 58 (ATC, 2011); however, these frameworks may need to be extended to incorporate simulation-based (in contrast to empirical observation-based) damage, impact, and recovery models in procedures for evaluation and decision making.
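The PEER methodology cited above is often summarized by its framing equation, which chains the hazard, structural response, damage, and loss analyses (after Moehle and Deierlein, 2004):

```latex
\lambda(DV) \;=\; \iiint G(DV \mid DM)\,
                  \bigl|\,dG(DM \mid EDP)\,\bigr|\,
                  \bigl|\,dG(EDP \mid IM)\,\bigr|\,
                  \bigl|\,d\lambda(IM)\,\bigr|
```

Here $IM$ is a ground motion intensity measure, $EDP$ an engineering demand parameter (e.g., inter-story drift), $DM$ a damage measure, and $DV$ a decision variable such as dollar loss or downtime; $\lambda$ denotes a mean annual rate of exceedance and $G$ a conditional complementary distribution function. An overarching framework of this form would give coordinated research projects consistent interfaces at each of these variable boundaries.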

High-Performance Computational Simulation and Visualization

As outlined above, an important innovation in the computational simulation of building performance is an emphasis on fundamental model formulations that better represent the underlying phenomena. This approach is in contrast to more conventional reliance on phenomenological models and simplified fragility models that rely almost exclusively on empirical data and judgment. The proposed simulation models should employ mechanics-based idealizations (finite elements, discrete particle, or other methods) to capture the physical behavior and damage of structural and geotechnical materials and components. The consequences of damage for repair and recovery operations could be simulated using construction planning/logistics models. To the extent possible, forward-looking research should apply more fundamental modeling and analysis methods to simulate the performance of non-structural architectural, mechanical, and electrical components and systems.

This proposed computational model development is aligned with broader simulation-based engineering and science initiatives at NSF (e.g., Oden et al., 2006; Cummings and Glotzer, 2010; Dyke et al., 2010). As described in these reports, development of fundamental models requires data capture, fusion, and visualization that is similar to the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES) vision. The computational demands of the fundamental models will require high-performance petascale (and beyond) computing applications and resources. For some in the earthquake engineering research community, this change in approach will require a culture change. Nevertheless, development of fundamental models that require high-performance computing is inevitable and will provide for solutions that are scalable and adaptable to larger and more meaningful problems (e.g., going beyond simulating building components to entire buildings and urban regions).

Development and Parameter Calibration of Structural and Geotechnical Models

Development of more fundamental (mechanics-based) models for structural and geotechnical materials and components will require high-resolution tests to characterize the underlying mechanics and material behavior that govern component response. In contrast to traditional experimental research, where large-scale component tests have typically been used to develop “design models” (e.g., nominal strength equations for building codes) and calibrate phenomenological models (e.g., generalized force-deformation models in ASCE 41), the need is for tests that are more directly aimed at development of computational models and calibration of underlying input parameters. Thus, the required testing programs will involve a range of component test specimens to interrogate multi-axial stress and strain states in materials and sub-assemblies. These tests will require complementary computational modeling in order to extract appropriate modeling parameters. The resulting computational models can then be validated against data from realistic (large-scale) component tests.

Consider, for example, reinforced concrete components that experience degradation due to longitudinal reinforcing bar buckling and fracture. These are complex phenomena that require detailed tests to characterize the buckling and fracture behavior more systematically than can be achieved in full-scale column tests. Ultimately, large-scale column tests are needed to validate the models, but large-scale tests alone are not necessarily well suited to model development. Similarly, models of soil-foundation-structure interaction would require detailed soil characterization that builds up to larger system tests. In this regard, the required tests may involve more material and small-scale testing than has been the practice up to now. As noted previously, the resulting analysis models should accurately capture the complete range of behavior from the initiation of damage through to the onset of large inelastic deformations that can trigger collapse.

Development and Calibration of Non-Structural Building Component Models

Just as with structural and geotechnical components, material and component testing is needed to enable and support the development of simulation-based models for nonstructural building components, such as architectural partitions and finishes, ceiling systems, HVAC, and plumbing and electrical systems and components. For certain components, testing and empirical model development will continue to be the most cost-effective way to develop performance models (e.g., damage and fragility curves). However, where the component behavior is amenable to detailed analysis and/or


where the distributed systems are too large to test, emphasis should be on testing to characterize features and parameters for computational models.

Data Capture and Utilization of Observed Building Performance in Earthquakes

Many of the research needs would benefit from more complete data collection to document the performance of real buildings in earthquakes. Because of limitations in resources and training and lack of community leadership, earthquake reconnaissance is often cursory and anecdotal. Although current reconnaissance efforts provide a reasonably good understanding of the “big picture” issues that have arisen in recent earthquakes, the reconnaissance efforts are not generally effective at collecting data and information in ways that support more detailed long-term research. Moreover, sometimes the most important lessons are in buildings that performed well, which are often overlooked during earthquake reconnaissance.

The following are suggested as steps toward improved documentation and utilization of data from buildings subjected to earthquakes: (1) protocols should be developed for more consistent procedures for planning and executing post-earthquake reconnaissance; (2) technologies should be developed and made available to facilitate rapid and effective recording and uploading of field observations, with appropriate markers/identifiers for data provenance, management, and use; and (3) information technologies should be developed to facilitate storage, management, and use of earthquake observations, including but not limited to (i) photos, videos, and other field observations, (ii) recorded strong motions—both free field and from instrumented structures, (iii) information to classify and model individual buildings and building inventories in earthquake-affected regions, and (iv) information on earthquake losses, impacts on building function and operations, and recovery.

Validation Testing of Conventional and Innovative Building Components and Systems

Large-scale testing will continue to be important for validating simulation models of both conventional and new building components and systems. However, in contrast to past practice where tests are often run before detailed models have been developed, greater emphasis should be placed on model validation, where detailed analysis models are developed and scrutinized prior to the large-scale tests. Predictive analysis results should reflect the uncertainties in nonlinear behavior and analysis, where the uncertainties are built into the analysis ahead of the test, rather than being rationalized after the test to explain discrepancies between the calculated and measured data. Large-scale validation tests should be planned and developed with significant input and involvement of the research community so as to (1) make sure that the tests capture the most relevant and important behavioral effects and (2) engage the broader community in making effective use of data from expensive large-scale tests. Research funding models may need to be revised to support this sort of involvement.

Development and Validation of Sensors and Damage Models

The rapid pace of technological advancement in sensors, wireless communication, and digital information technologies over the past 20 years offers unprecedented opportunities for collecting extensive data from laboratory tests, real buildings, and distributed inventories of buildings. As with many new technologies, much of the previous research on sensing and health monitoring has focused on the development of sensors, signal processing, and stand-alone damage detection algorithms. Looking ahead, research programs should focus on ways to integrate sensing technologies with (1) computational model updating and validation, (2) interpretation of observed building performance in earthquakes, and (3) rapid post-earthquake assessment of buildings. Given the rapid proliferation of high-resolution imaging (still images and video), there are important research opportunities to investigate ways of automating (or semi-automating) the interpretation and use of image data.

Fusion of Inventory Data with High-Fidelity Simulations and Visualization for Benchmarking, Building Rating, and Cost-Benefit Studies

The needs for building benchmarking, seismic rating systems, and cost-benefit studies offer ideal testbeds for applying high-performance computing to practical research needs. Such studies would promote close collaboration between researchers in high-performance computing, high-fidelity computational modeling, and earthquake risk assessment, management, and decision making. The studies would help bring realism to the research, which would ultimately lead to greater impacts on building design, adoption of new technologies, and development of policies for seismic risk mitigation.

Implications on Research Facilities

The research needs and thrust areas described above will require unprecedented coordination and data fusion between high-fidelity computational simulations, building inventory descriptions and information models, laboratory tests, and observations/measurements of building performance during earthquakes. Although large-scale laboratory testing will continue to be a critically important component of earthquake engineering research, increased emphasis should be placed on computational model development and physical testing that is in direct support of its development. As noted previously, the needs for physical testing are not all large


scale. In fact, in some cases, the most critical needs are for testing facilities and instruments to characterize material behavior and mechanics at small (micro) scales, e.g., characterizing detailed nonlinear behavior, damage, and failure of structural materials such as concrete, steel, soils, wood, and composite fibers. At the other extreme, greater emphasis should be placed on methods to measure and collect data from field conditions, including both planned field experiments (including building demolition) and unplanned events (earthquakes and other extreme loadings).

Given the breadth and depth of the research needs, many of the needs may best be addressed through research center (or center-like) programs. Many significant facilities are already in place for large-scale testing, and these will need to be maintained. Although the United States has some remarkable high-performance computing facilities available, these tend to be underutilized for earthquake engineering research. Therefore, there is a critical need for a more concerted effort on simulation-based high-performance computing research, which involves development and validation of (1) models, (2) improved computational algorithms for large models, and (3) tools to facilitate management and integration of massive test and analysis databases. Otherwise, given the current state of earthquake engineering research, it would be difficult for any individual researcher (or small group of researchers) to assemble the critical mass of computational simulation expertise and resources to make the types of transformative changes that are necessary to address the research needs.

Acknowledgments

This paper was developed with the support and invitation of the organizers of Grand Challenges in Earthquake Engineering Research: A Community Workshop. The ideas and suggestions presented in this paper were developed by the author with input from many members of the earthquake engineering community. Although it is impossible to acknowledge all of the individuals who contributed to the ideas in the paper, the author would like to recognize the following individuals as making specific suggestions that are reflected in the paper: R. Boulanger, G. Brandow, M. Comerio, C. Comartin, G. Fenves, A. Filiatrault, J. Hajjar, R. Hamburger, J. Heintz, W. Holmes, C. Kircher, H. Krawinkler, E. Miranda, J. Moehle, J. Osteraas, C. Poland, K. Porter, J. Ramirez, R. Sabelli, S. Wood. The author would also like to acknowledge helpful discussions with L. Johnson and R. DesRoches, who are authoring companion papers for the workshop.

References

ATC (Applied Technology Council). 2011. Guidelines for Seismic Performance Assessment of Buildings. ATC Project 58, 75% Draft Guidelines. Redwood City, CA.

Cummings, P. T., and S. C. Glotzer. 2010. Inventing a New America through Discovery and Innovation in Science, Engineering and Medicine: A Vision for Research and Development in Simulation-Based Engineering and Science in the Next Decade. Baltimore, MD: World Technology Evaluation Center (WTEC). Available at www.wtec.org/sbes-vision/RDW-color-FINAL-04.22.10.pdf.

DesRoches, R. 2011. Grand challenges in lifeline earthquake engineering research. In Grand Challenges in Earthquake Engineering Research: A Community Workshop. Washington, DC: The National Academies Press.

Deutsche Bank Research. 2008. Megacities: Boundless Growth, 18 pp. Available at www.dbresearch.com/PROD/DBR_INTERNET_EN-PROD/PROD0000000000222116.pdf.

Dyke, S. J., B. Stojadinovic, P. Arduino, M. Garlock, N. Luco, J. Ramirez, and S. Yim. 2010. Vision 2020: An Open Space Technology Workshop on the Future of Earthquake Engineering. A Report on the NSF-Funded Workshop, January 25-26, 2010, St. Louis, MO. Available at nees.org/resources/1637/download/Vision_2020__Final_Report.pdf.

EERI (Earthquake Engineering Research Institute). 2003. Securing Society Against Catastrophic Earthquake Losses: A Research and Outreach Plan in Earthquake Engineering. Oakland, CA. Available at www.eeri.org/cds_publications/securing_society.pdf.

FEMA (Federal Emergency Management Agency). 2009. Quantification of Building System Performance Factors. FEMA P-695. Prepared by the Applied Technology Council, Redwood City, CA.

Johnson, L. A. 2011. Transformative earthquake engineering research and solutions for achieving earthquake resilient communities. In Grand Challenges in Earthquake Engineering Research: A Community Workshop. Washington, DC: The National Academies Press.

Jones, L. M., R. Bernknopf, D. Cox, J. Goltz, K. Hudnut, D. Mileti, S. Perry, D. Ponti, K. Porter, M. Reichle, H. Seligson, K. Shoaf, J. Treiman, and A. Wein. 2008. The ShakeOut Scenario. USGS Open File Report 2008-1150. Reston, VA.

Kircher, C. A., R. K. Reitherman, R. V. Whitman, and C. Arnold. 1997. Estimation of earthquake losses to buildings. Earthquake Spectra 13(4):703-720.

Kircher, C. A., H. A. Seligson, J. Bouabid, and G. C. Morrow. 2006. When the big one strikes again—Estimated losses due to a repeat of the 1906 San Francisco earthquake. Earthquake Spectra 22(S2):S297-S339.

Krawinkler, H., and E. Miranda. 2004. “Performance-Based Earthquake Engineering,” Chapter 9 in Earthquake Engineering from Engineering Seismology to Performance-Based Engineering, Y. Bozorgnia and V. V. Bertero, eds. Boca Raton, FL: CRC Press.

Moehle, J., and G.G. Deierlein. 2004. A Framework Methodology for Performance-Based Earthquake Engineering. Proceedings of the 13th World Conference on Earthquake Engineering, Vancouver, B.C., Canada, on CD-ROM.

NEHRP (National Earthquake Hazards Reduction Program). 2008. Strategic Plan for the National Earthquake Hazards Reduction Program: Fiscal Years 2009-2013. October. Available at nehrp.gov/pdf/strategic_plan_2008.pdf.

NRC (National Research Council). 2004. Preventing Earthquake Disasters: The Grand Challenge in Earthquake Engineering. Washington, DC: The National Academies Press.

Oden, J. T., T. Belytschko, J. Fish, T. J. R. Hughes, C. Johnson, D. Keyes, A. Laub, L. Petzold, D. Srolovitz, and A. Yip. 2006. Simulation-Based Engineering Science: Revolutionizing Engineering Science through Simulation. Report of the National Science Foundation Blue Ribbon Panel on Simulation-Based Engineering Science, 88 pp. Available at www.nsf.gov/pubs/reports/sbes_final_report.pdf.

SPUR (San Francisco Planning and Urban Research Association). 2009. The Resilient City: A New Framework for Thinking about Disaster Planning in San Francisco. The Urbanist Policy Initiative. San Francisco, CA. Available at www.spur.org/policy/the-resilient-city.


CYBERINFRASTRUCTURE-INTENSIVE APPROACHES
TO GRAND CHALLENGES IN EARTHQUAKE
ENGINEERING RESEARCH

James D. Myers

Director, Computational Center for Nanotechnology Innovations

Rensselaer Polytechnic Institute

Introduction

In just the past decade, the power of the largest supercomputers has increased by more than a factor of a thousand. The cost per bit of data storage has decreased by a similar factor. We can buy personal cell phones today with Internet connections rivaling those of entire universities in the year 2000. Further, it is clear that such trends of increasing performance and decreasing price-performance ratios will continue in the coming decade. Such dramatic, ongoing change brings both opportunities and challenges to the earthquake engineering community. Research can be done faster, more cost-effectively, on larger systems, and with higher fidelity “simply” by applying the latest technology. However, technological advances also bring “nonlinear” opportunities—when, for example, automated sensing becomes cheaper than manual measurement, when simulation becomes more accurate than experiments, when discovering and using existing data becomes simpler than generating new data. Realizing this type of opportunity involves significant changes to “business as usual” along with technology adoption—from the development of new research techniques to additional coordination required for data integration to support for new collaborations and career paths.

The earthquake engineering community has made significant scientific and engineering advances through the use of cyberinfrastructure over the past decade: large networks of seismic stations generate ground motion data that are integrated and stored as a long-term community resource; strong wall and shake table experiments record thousands of data channels and video to provide very high resolution information about structural performance; simulation of structural behavior has become a tool for interpreting experiments and extrapolating from them even to the point of becoming part of hybrid experiments; geospatial modeling tools have grown to encompass socioeconomic effects, inter-network interactions, and decision support capabilities; and the community-scale sharing of data, tools, and equipment has accelerated community development and adoption of new techniques.

To continue such advancements in the next decade, and to capitalize on emerging cyberinfrastructure capabilities, in pursuit of earthquake engineering grand challenges, it will be important to understand the current state and promising research directions in cyberinfrastructure. Although there are directions in which exponential increases in computing capabilities will continue, there are others, such as the raw clock speed of individual CPUs, where progress has effectively stopped. There are also areas where synergies from progress on multiple fronts are enabling dramatic new possibilities. Indeed, as this paper is being written, a computer challenged human supremacy at the game of Jeopardy! and won, opening an era in which computers will actively aid researchers in applying reference information to their work.

Cyberinfrastructure: A Working Definition

There are a number of connected challenges in providing a succinct overview of cyberinfrastructure and its potential. Cyberinfrastructure is broad, both in terms of the underlying suite of technologies and in the potential areas of application. It includes the hardware for sensing, networking, computation, data management, and visualization, as well as the software required to turn these technologies into capabilities—e.g., for automated data collection, modeling, data analysis, and group and community-scale collaboration. Most definitions of cyberinfrastructure, including those from the National Science Foundation (Atkins et al., 2003; NSF, 2007), also recognize its “socio-technical” nature—using it effectively requires changes in practices and culture, shifts in responsibilities between organizations, and even the development of new career paths.

In this sense, the definition quickly becomes “everything remotely related to information technology and its use.” However, the definition can be constrained somewhat by restricting it to true infrastructure (Edwards et al., 2007)—areas where economies of scale, coordination challenges, and/or transformative potential argue for shared provisioning, support for cross-disciplinary collaborations, and the creation of common middleware and standards that simplify and guide further development. These considerations naturally refocus the definition in areas of rapid change and where there is significant expected value and broad demand. Cyberinfrastructure development and deployment can thus be distinguished from curiosity-driven research through a requirement for a direct connection to domain (i.e., earthquake engineering) problems, a clear argument for its role in solving them, and defined metrics for its success (Berman et al., 2006). Although this definition is still not fully prescriptive, it does provide a useful framework for the following discussion.

The Near Future of Cyberinfrastructure

Although predicting the exact future of cyberinfrastructure is difficult, getting a sense of the future landscape is less so. The roadmaps of manufacturers indicate that trends such as the increasing computational capacity of supercomputers will continue for much of the decade and deliver at least 100 to 1,000 times the performance available today. Power efficiency and density will also increase, meaning that desktop and rack-scale systems will also see dramatic

Suggested Citation:"Appendix B: White Papers." National Research Council. 2011. Grand Challenges in Earthquake Engineering Research: A Community Workshop Report. Washington, DC: The National Academies Press. doi: 10.17226/13167.
×

performance improvements. However, much of that power will come from increases in parallelism, either from an increase in the number of CPUs integrated together or via increased parallelism within chips, as is already occurring through the use of general purpose graphics processing units—video cards—and their hundreds of parallel pipelines. The corollary here is that software that is not engineered or reengineered to leverage that parallelism will see minimal gains—a dramatic difference from the past decade.

Data storage is also increasing rapidly in capacity and decreasing in price. However, an often overlooked fact is that single disk read/write speeds have not kept pace—the average time to retrieve a randomly located byte on a disk has decreased by less than a factor of 10 since the 1970s. This has opened a large—many orders of magnitude—gap between raw compute performance and the ability to compute effectively on large datasets, which is now receiving significant attention under the banner of “data-intensive” computing. Hardware vendors are devoting significant effort to adding fast data caches (multiple levels) to chips, creating solid state (flash memory) disks and caches, and distributing data across many disks for faster access. Innovation is also occurring in file systems and database software, as well as processing software, which is enabling massively parallel processing of data and statistical analysis across large datasets.

Early adopters of data-intensive techniques include Internet search engines and processors of text in general (e.g., books as well as web pages), in part because of the massive amount of text available. However, the use of distributed sensors, images and video feeds, and high-throughput analysis techniques is rapidly increasing the need for similar capabilities in scientific research. The size and cost of sensors, the bandwidth available to wireless and mobile sensor platforms, the resolution of cameras, and the size and resolution of displays are all benefitting from the advances in computer chip manufacturing and commodity/consumer applications, resulting again in orders of magnitude of change.

In synergy with hardware advances, software advances are occurring, and the combination can be stunning. It has become possible to automatically integrate photos (e.g., tourist photos from Flickr) to reconstruct an underlying 3-D model of the scene (e.g., of the Statue of Liberty). Cars, planes, quad-copters, and submersibles can pilot themselves. Grids can provide scheduled access to tens of thousands of interconnected processors, while clouds can provide truly on-demand access to commodity clusters or to applications and services directly. Computers can automatically monitor thousands of video streams and identify features and events, and they can read the literature and databases and answer questions in natural language.

Empowering Individuals to Address Grand Challenges

Although it would be entertaining to continue to review the existing and emerging wonders of computing, the goal of planning for cyberinfrastructure-intensive research in earthquake engineering is better served by returning to the definition of cyberinfrastructure and discussing the potential for these technology advances to further progress on grand challenges. The real questions are these: What can several orders of magnitude of increase in compute power, data sizes, and sensor density enable, when combined with a rapidly growing capability to focus those resources on demand and to automate larger and more complex tasks? And what is required beyond the raw technologies to realize those new opportunities?

To begin, consider first the ways that increased compute power can be used. In modeling physical processes, 3 to 4 orders of magnitude can provide (only!) a factor of 10 in spatial resolution in three dimensions and time, which would be useful if, and only if, such increases improve fidelity. Or one could model structures 10 times larger at the same resolution. In domains where stochastic and/or chaotic processes occur, the best use of additional power can be to run ensembles of simulations to map the uncertainties in initial measurements to a range of outcomes. Combined with advances in adaptive meshing and multi-scale/multi-domain modeling, processing power can also be expended to do very fine-grained modeling or to use more advanced models in small areas, e.g., where bending is occurring or cracks are developing.
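The budget arithmetic behind the "(only!) a factor of 10" remark can be made concrete. The sketch below, with illustrative numbers only, shows why four orders of magnitude of additional compute buys just a factor of 10 in resolution for a time-dependent, three-dimensional model, and what the same budget buys if spent on an ensemble instead:

```python
# Refining a 3-D, time-dependent simulation uniformly by a factor r costs
# roughly r^4 more compute: r^3 for the spatial grid, r for the time steps.

def affordable_refinement(speedup: float, dimensions: int = 4) -> float:
    """Uniform refinement factor that a given compute speedup can pay for."""
    return speedup ** (1.0 / dimensions)

# Four orders of magnitude more compute buys only a factor of ~10 in resolution:
print(affordable_refinement(10_000))  # ~10.0

# The same budget spent on uncertainty instead: a 1,000-member ensemble of
# runs, each 10x more expensive than today's single simulation.
print(10_000 / 10)  # 1000.0
```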

The growing capabilities for data generation, storage, analysis, and visualization can be analyzed in a similar way. Increased resolution and/or experiments on larger or more complex samples become practical. Statistical properties can also be computed (assuming sufficient samples and/or non-destructive testing) and compared with statistical simulations or with reconnaissance data. Even more interestingly, massive amounts of cheap data can substitute for more costly measurements. Aerial photography is already used to infer inventory information and reduce the need for people to physically categorize buildings. One could also imagine dense arrays of randomly located sensors replacing hand-placed ones, with positions calibrated through the use of multiple image/video feeds. Advances in sensors themselves—to auto-calibrate and to dynamically adjust measurement rates—and in mobile sensor platforms that enable dynamic, in-depth observations when and where needed will allow “stretching” within the data budget, analogous to the way adaptive and multi-scale modeling can stretch the computational budget, yielding more valuable results at lower cost.
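As a toy illustration of the last point, consider calibrating the position of a randomly dropped sensor from several independent image-based estimates. The fusion step below simply averages hypothetical estimates; a real system would triangulate from calibrated camera geometry:

```python
from statistics import mean, stdev

def fuse_position(estimates):
    """Average independent (x, y) position estimates; report per-axis scatter."""
    xs = [e[0] for e in estimates]
    ys = [e[1] for e in estimates]
    return (mean(xs), mean(ys)), (stdev(xs), stdev(ys))

# Four hypothetical image-derived estimates of one sensor's location (meters):
estimates = [(4.9, 10.2), (5.1, 9.8), (5.0, 10.1), (5.2, 9.9)]
position, scatter = fuse_position(estimates)
print(position)  # close to the underlying location, about (5.05, 10.0)
```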

Implicit in the use of such high-throughput techniques is the challenge of managing more measurement streams and more datasets. Folder/file hierarchical names become increasingly cumbersome as the number of files increases. Further, any manual actions required to find, select, or convert files, or to extract specific data from files, or to transfer files between machines, become bottlenecks. Fortunately, tools for automatically capturing descriptive information about data as well as its provenance (data history) and to allow subsequent human annotation are emerging. Such an infrastructure then opens a range of opportunities—use of provenance for debugging and automating subsequent analyses, automation of reporting and data publication activities, etc. Given the management overheads in running sensor networks and equipment sites at the current scale, the potential to actually reduce those overheads rather than having them scale with, for example, the number of sensors could become critical in maintaining/increasing productivity.
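A minimal sketch of such automated provenance capture is shown below. The record structure and step names are invented for illustration; production systems attach similar metadata automatically as each processing step runs:

```python
from datetime import datetime, timezone

def with_provenance(dataset, step, params, parent_ids):
    """Attach a provenance record to a derived dataset (illustrative schema)."""
    record = {
        "step": step,            # e.g., the name of the processing operation
        "params": params,        # parameters the step was run with
        "parents": parent_ids,   # identifiers of the datasets it derives from
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    dataset.setdefault("provenance", []).append(record)
    return dataset

raw_id = "shake_table_run_042"  # hypothetical upstream dataset identifier
filtered = with_provenance({"id": "filtered_run_042"}, "bandpass_filter",
                           {"low_hz": 0.1, "high_hz": 25.0}, [raw_id])
print(filtered["provenance"][0]["step"])  # bandpass_filter
```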

Increasing amounts of data, whether from experiments or simulations, raise new challenges beyond those of acquisition and storage. Understanding large amounts of data, and/or data representing a large number of dimensions, cannot be achieved by manual inspection and manipulation of the data. Automated detection of features and events of interest, and automated analysis of their correlations and evolution, become a necessity. (Minimally, feature and event detection are required to provide navigational guides for manual exploration of the data.) As data volumes grow, algorithms to detect known features (e.g., cracks) will be supplemented by “machine-learning” algorithms that detect and report correlations that may or may not have been expected (e.g., if two sets of sensors suddenly show less correlation in their measurements, it might indicate a broken joint or loose mounting that the researcher had not anticipated). The potential to automate the detection of unanticipated features is at the heart of arguments for a “Fourth Paradigm” of research complementing experiment, theory, and simulation.
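The "two sets of sensors suddenly have less correlation" example can be sketched in a few lines: compute a sliding-window Pearson correlation between two channels and flag windows where it collapses. The signals, window size, and threshold below are synthetic:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def decorrelation_alerts(a, b, window=8, threshold=0.5):
    """Start indices of windows where the two channels decorrelate."""
    return [i for i in range(len(a) - window + 1)
            if pearson(a[i:i + window], b[i:i + window]) < threshold]

# Channel b tracks channel a, then goes noise-like for samples 16-23
# (as if a mounting came loose), then tracks again:
a = [float(i % 5) for i in range(32)]
b = a[:16] + [3.0, -1.0, 4.0, 0.0, 2.5, -2.0, 1.0, 3.5] + a[24:]
alerts = decorrelation_alerts(a, b)
print(alerts)  # window starts clustered around the anomalous region
```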

As discussed in previous sections, the raw capabilities to acquire, process, and store increased data volumes have advanced tremendously and will continue to do so. There have also been significant technical advances and cost reductions in the areas of large displays, stereoscopic 3-D displays, and remote visualization over the past few years, but significant challenges remain in effectively using such hardware, such as creating capabilities for feature-based navigation, analysis, and display on such devices. For example, tiled displays now exist that provide more than 300 million pixels. Smaller systems, ones that still match the resolution of the human eye (~15 million pixels), can now run unmodified applications (web browsers, Excel, Matlab) using only a single computer (versus the visualization cluster and specialized software required to run larger displays). However, using either class of displays for something more than showing a single large image or massive arrays of sensor traces quickly becomes a custom development effort. Over the next few years, though, it should become possible to trigger feature-detection algorithms to run automatically as data are stored, and for feature-based navigation tools not only to show synchronized displays of multiple inputs but also to select which of the hundreds or thousands of available inputs are the best to examine to understand a given feature. Direct visualization of the distribution of values across an ensemble of experiments, and/or visualization of the deviations between experimental and simulated results, are also likely to be capabilities that can be quickly configured and automated.

Empowering the Community to Address Grand Challenges

As significant as the direct benefits of cyberinfrastructure are for an individual’s ability to increase the scope, scale, and speed of his or her research, the community-scale impacts, in terms of increased capacity to address grand challenges, are likely to be greater. For grand challenges, reducing the time-to-solution requires reducing the time required for problems to be understood, for new ideas to be implemented, for successful techniques to be reported and widely disseminated, and for new experimental results to become reference data and influence theories, models, and practice, as well as for information to flow in the reverse directions. Cyberinfrastructure, with orders-of-magnitude increases in raw performance, can have dramatic impacts throughout this web of processes.

The earthquake engineering community is well versed in the core technologies that enable community coordination: distributed meeting software (e.g., videoconferencing, shared computer displays), asynchronous collaboration and networking tools (e.g., email, wikis, blogs, Facebook, Twitter), equipment sharing, community databases, remote modeling and simulation services, search engines, and digital libraries.

Performance and the price/performance ratio will continue to improve in these areas, driven by the computing trends previously discussed and similar rapid increases in available network bandwidth. However, the more important trend will be automation of the information exchange across the community. Imagine:

  • sending a model to observe an experiment rather than observing it yourself and receiving alerts when the experiment and model results diverge;
  • shifting from manually searching the Internet and databases for relevant results to receiving automated notification as new results appear;
  • shifting from notices to automatic incorporation of such results into the calibration and validation suites of computational models. (Indeed, services that can automatically assess whether new results are consistent or inconsistent with existing reference information, as well as services to dynamically update reference information based on new results, have already been developed in fields such as chemistry [thermodynamics]);
  • combining such active reference services with domain computational models to enable goal-driven research, e.g., being able to plug individual models and datasets into an open framework to immediately assess their import in achieving regional resilience. In chemistry, such combined capabilities are expected to, for example, help identify which thermochemistry experiments would have the most impact in reducing the uncertainties and errors in modeling engine performance. In earthquake engineering, a similar approach might potentially be used to identify which experiments and what additional observational data would most directly address the largest uncertainties in the design of new structures and the estimation of earthquake impacts; or,

  • simply asking for the evidence in the literature for and against a given theory rather than reading all the papers with relevant keywords.
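The first item in the list above, sending a model to observe an experiment, reduces in caricature to comparing a measurement stream against a model prediction and alerting on divergence. Everything in this sketch (the traces, the tolerance) is invented for illustration:

```python
def divergence_alerts(measured, predicted, tolerance):
    """Indices (with values) where measurement and model disagree."""
    return [(i, m, p) for i, (m, p) in enumerate(zip(measured, predicted))
            if abs(m - p) > tolerance]

# Hypothetical displacement trace (mm) versus a model's prediction of the run:
measured  = [0.0, 1.1, 2.0, 3.2, 9.8, 5.1]   # unexpected excursion at step 4
predicted = [0.0, 1.0, 2.1, 3.0, 4.1, 5.0]
alerts = divergence_alerts(measured, predicted, tolerance=0.5)
print(alerts)  # [(4, 9.8, 4.1)]
```

In practice the "model" would be a running simulation and the alert would steer the researcher's (or the experiment's) attention, but the comparison logic is this simple at its core.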

Although such speculation is perhaps utopian, it is hard to argue away the significance of the incremental progress that will be made. The deliberations of expert groups will be informed by automated tests of data quality and mathematical “meta”-analysis of the entire corpus of experiments. Data and models will be available as services (“in the cloud”), and sensitivity analyses will inform experiment selection and design. It will become possible to identify specific claims being made in the literature and to retrieve the data and analysis details required to assess them. Further, as data, models, and metadata become more accessible, and as the computational resources available to individuals grow, small groups will tackle and succeed at tasks that would require unsustainable levels of community coordination today.

Challenges in Realizing the Potential of Cyberinfrastructure

Adoption of information systems and cyberinfrastructure in academia and business has often had what can tactfully be called mixed success. Looked at over the long term, progress is unmistakable. Over shorter timescales and at the scale of individual projects, it is much harder to discern what has and has not worked and to assess how well money and time are being spent. There is growing understanding, however, some of it from the experiences of the earthquake engineering community, of the nature of the challenges involved and of emerging best practices.

At their core, the major challenges are all related to the unprecedented and almost incomprehensible rate of technological advancement. By the time a non-IT specialist can acquire skills in a given technology, it becomes obsolete. Work practices, based on implicit assumptions about the relative difficulties of different tasks, become inefficient or ineffective as technological change invalidates those assumptions. Central coordination mechanisms that can be highly effective in disseminating best practices fail as small groups, empowered by technology, create disruptive innovations. Thus, in addition to the direct challenges of creating and deploying useful technologies, the pace of change creates “meta-challenges” to our traditional practices of development and deployment. Fortunately, across the efforts developing and deploying cyberinfrastructure, there has been a concerted effort to address these meta-challenges and to identify social, managerial, and technical practices that are robust in the face of rapid change (NEES, 2007; Faniel, 2009; Ribes and Finholt, 2009; Sustain, 2009).

Placing such general best practices in the context of the preceding sections can help make them concrete. For example, the shift from faster clock speeds to massive parallelism as the mechanism for increasing performance during the next decade means that graduate students without parallel programming skills will not be able to realize the value of the emerging technology. Explicit support for cross-training, collaborative projects pairing computational scientists and domain researchers, and, most scalably, training in the use of programming frameworks and libraries that embed parallel expertise in their design will all be critical in capitalizing on the raw performance increases.

At the project level, there is growing recognition that user-centered, experiment-driven, iterative development practices are most effective in supporting co-evolution of technology and work practices. Central to such methodologies are partnerships in which the end goal of enabling new research is explicit and the design space of solutions includes both technologies and new practices. Such a process targeting the need to compare experimental and simulated results might first address the data conversions necessary to perform experiments and simulations on the same system, followed by enhancements that would allow simulations to be run in parallel with experiments and that would enable direct visualization of the differences between them. Such technical changes would be paralleled by changes in practice where simulations might be run before experiments to improve their design, or where adaptive sensing or steering of the experiment might provide additional insight when differences are seen. Providing incremental capabilities that deliver value and thereby affect practice improves efficiency compared with monolithic, pre-planned approaches.

At the scale of communities, emerging best practice recognizes that true infrastructure is not a system, but a system of systems, which has consequences for design and management. Designs appropriate for infrastructure support “innovation at the edges” and focus on standardizing common functionality. Well-known examples include Internet routing, which does not constrain what is sent across the Internet, and HTTP/HTML, which defined formatting and linking rules but did not constrain the content of web pages. Cyberinfrastructure designs such as web services, workflow, content management, global identifiers and vocabularies (the basis of the semantic web), and separable authentication mechanisms (enabling single-sign-on) work analogously, providing simple, best practice means for addressing common problems without constraining future innovation. At the level of grand challenges, where many independent organizations must coordinate as peers, such designs are critical (ALA, 2008). Management structures that encourage such designs—independent processes for defining interfaces and for implementation within the defined framework, early definition of interfaces and competitive funding of functionality written to those interfaces, and inter-project and interdisciplinary communication to identify functionality ripe for standardization, for example—are a critical complement. Such non-technical concerns extend to the implementation of mechanisms to recognize inter-disciplinary work and to support career paths for those who cross such boundaries.
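The "innovation at the edges" design principle can be illustrated with a minimal service interface: the community standardizes only how datasets are listed and fetched, while each site's implementation remains unconstrained. All names below are hypothetical:

```python
from abc import ABC, abstractmethod

class DataService(ABC):
    """The agreed-upon surface: how to list and fetch datasets."""

    @abstractmethod
    def list_datasets(self) -> list: ...

    @abstractmethod
    def fetch(self, dataset_id: str) -> dict: ...

class ShakeTableArchive(DataService):
    """One site's implementation; its internals are unconstrained."""
    def __init__(self):
        self._store = {"run_001": {"peak_accel_g": 0.42}}

    def list_datasets(self):
        return sorted(self._store)

    def fetch(self, dataset_id):
        return self._store[dataset_id]

def summarize(service: DataService):
    """Community tooling that works against any conforming implementation."""
    return {d: service.fetch(d) for d in service.list_datasets()}

print(summarize(ShakeTableArchive()))  # {'run_001': {'peak_accel_g': 0.42}}
```

This mirrors how HTTP fixes the exchange rules but not the pages: new archives can be added without any change to the shared tooling.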

Conclusions

The technical advances that have made cyberinfrastructure-centric approaches to scientific and engineering endeavors a major driver of overall scientific progress will continue and broaden in the next decade. Progress will bring more computational and data management capabilities to individual researchers, while the decreasing cost of those resources will allow their use more broadly to automate manual processes throughout the scientific lifecycle. Application of cyberinfrastructure to coordinate and accelerate community-scale processes will further increase the earthquake engineering community’s collective ability to tackle grand challenge issues. Although this paper has touched on many aspects of cyberinfrastructure and highlighted a number of potential uses, there are both technologies and application areas (e.g., most glaringly, education and outreach) that have not been discussed because of space limitations. However, the core conclusion from this paper is not about any specific application of cyberinfrastructure. Rather, the conclusions are that the underlying trends, and even relatively straightforward analysis of the potential applications, make it clear that further investment in cyberinfrastructure should be very profitable in terms of impact on grand challenge agendas and that experience over the previous decade provides clear guidance concerning what will be required to realize that value.

Acknowledgments

This paper was developed with the support of the National Research Council and the organizers of Grand Challenges in Earthquake Engineering Research: A Community Workshop. The author wishes to acknowledge contributions by many members of the Network for Earthquake Engineering Simulation, the Mid-America Earthquake Center, and the larger earthquake engineering community to the concepts presented here through many informal discussions and formal planning exercises over the past decade.

References

ALA (American Lifelines Alliance). 2008. Post-Earthquake Information Management Systems (PIMS) Scoping Study. Washington, DC: National Institute of Building Sciences.

Atkins, D. E., K. K. Droegemeier, S. I. Feldman, H. Garcia-Molina, M. L. Klein, D. G. Messerschmitt, P. Messina, J. P. Ostriker, and M. H. Wright. 2003. Revolutionizing science and engineering through cyberinfrastructure: Report of the National Science Foundation Blue-Ribbon Advisory Panel on Cyberinfrastructure.

Berman, F., J. Bernard, C. Pancake, and L. Wu. 2006. A Process-Oriented Approach to Engineering Cyberinfrastructure. Report from the Engineering Advisory Committee Subcommittee on Cyberinfrastructure. Available at director.sdsc.edu/pubs/ENG/.

Edwards, P., S. Jackson, G. Bowker, and C. Knobel. 2007. Understanding Infrastructure: Dynamics, Tensions, and Design. Report of a Workshop on “History & Theory of Infrastructure: Lessons for New Scientific Cyberinfrastructures.” The National Science Foundation.

Faniel, I. M. 2009. Unrealized Potential: The Socio-Technical Challenges of a Large Scale Cyberinfrastructure Initiative. NSF Report.

NEES (Network for Earthquake Engineering Simulation). 2007. Information Technology within the George E. Brown, Jr. Network for Earthquake Engineering Simulation: A Vision for an Integrated Community. NEES Task Group on Information Technology Vision. Davis, CA.

NSF (National Science Foundation). 2007. Cyberinfrastructure Vision for 21st Century Discovery. NSF 07-28.

Ribes, D., and T. A. Finholt. 2009. The long now of technology infrastructure: Articulating tensions in development. Journal of the Association for Information Systems 10(5).

Sustain. 2009. Position papers from the NSF-sponsored Workshop on Cyberinfrastructure Software Sustainability, March 26-27, 2009, Bloomingdale, IN.


A BUILT ENVIRONMENT FROM FOSSIL CARBON

John W. Halloran
Professor, Department of Materials Science and Engineering University of Michigan

Transformative Materials

These white papers are intended to stimulate discussion in this Workshop on Grand Challenges in Earthquake Engineering Research. I am a materials engineer with no expertise at all in earthquake engineering, so what can I possibly have to say that could, in some way, stimulate the discussions of the experts at this workshop? This workshop seeks “transformative solutions” for an earthquake-resilient society involving, among other important issues, the design of the physical infrastructure. I will address points relating to the design of the infrastructure, in terms of the most basic materials from which we build our infrastructure. I hope to address transformational change in infrastructural materials that could make our society more resilient not only to the sudden shaking of the ground, but also to the gradual changing of the climate. If you seek a change in the infrastructure large enough to be considered transformational for earthquake resilience, can you also make the change large enough to make a fundamental change in sustainability?

I want to attempt to stimulate discussion not only on how to use steel and concrete to make us less vulnerable to shock, but also to make us less vulnerable to climate change. I hope to be provocative to the point of being outrageous, because I want you to think about abandoning steel and concrete. I am going to suggest designing a built environment with more resilient, lighter, stronger, and more sustainable materials based on fossil carbon.

The phrase “sustainable materials based on fossil carbon” seems like an oxymoron. To explain this, I must back up. Fossil resources are obviously not renewable, so they are not sustainable in the very long run. But in the shorter run, the limit to sustainability is not the supply of fossil resources but the damage the fossil resources do to our climate. The element of particular concern is fossil-carbon, which was removed from the ancient atmosphere by living creatures and stored as fossil-CO2 in carbonate rocks and as reduced fossil-carbon in hydrocarbons like gas, oil, and coal. The difficulty is that industrial society liberates the fossil carbon to the atmosphere at rates much faster than contemporary photosynthesis can deal with it. It is more sustainable to use fossil resources without liberating fossil-CO2. Can this be done?

A Modern World Based on Fossil Life

Modern industrial society enjoys prosperity in large part because we are using great storehouses of materials fossilized from ancient life. As the term “fossil fuels” implies, much of our energy economy exploits the fossil residue of photosynthesis. For the energy economy, we all realize that coal and petroleum and natural gas consist of chemically reduced carbon and reduced hydrogen from ancient biomass. Coal and oil are residues of the tissues of green creatures. It is clear that burning fossil-carbon returns to today’s atmosphere the CO2 that was removed by photosynthesis eons ago.

We do not often consider that our built environment is also predominantly created from fossils. Cement is made from carbonate limestone, consisting of ancient carbon dioxide chemically combined with calcium oxide in fossil shells. Limestone is one of our planet’s great reservoirs of geo-sequestered carbon dioxide, removed from the air long ago. The reactivity of Portland cements comes from the alkaline chemical CaO, which is readily available because limestone fossil rocks are abundant and the CaO can be obtained by simply heating CaCO3. However, this liberates about a ton of CO2 per ton of cement. This is fossil-CO2, returned to the atmosphere after being sequestered as a solid. Limestone-based cement—the mainstay of the built environment—is not sustainable.
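The "ton of CO2 per ton of cement" figure follows largely from calcination stoichiometry, CaCO3 = CaO + CO2, as the short check below shows. The CaO content of cement and the note about kiln fuel are approximate, illustrative figures:

```python
# Calcination: CaCO3 -> CaO + CO2. Molar masses in g/mol.
M_CAO, M_CO2 = 56.08, 44.01

co2_per_ton_cao = M_CO2 / M_CAO          # t CO2 per t CaO, calcination only
print(round(co2_per_ton_cao, 2))         # 0.78

# Portland cement is roughly 65 percent CaO by mass (illustrative figure),
# so calcination alone contributes about half a ton of CO2 per ton of cement;
# kiln-fuel combustion pushes the total toward the "about a ton" cited above.
calcination_share = 0.65 * co2_per_ton_cao
print(round(calcination_share, 2))       # 0.51
```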

Steel is cheap because we have enormous deposits of iron oxide ore. These iron ores are a kind of geochemical fossil that accumulated during the Great Oxygenation Event 2 billion years ago, as insoluble ferric oxide formed from oxygen produced by photosynthesis. Red iron oxide is a reservoir of the oxygen exhaled by ancient green creatures. We smelt iron ore using carbon from coal, so that carbon fossilized for 300 million years combines with oxygen fossilized for 2,000 million years to flood our current atmosphere with CO2. Every ton of steel is associated with about 1.5 tons of fossil-CO2 liberated into the modern atmosphere. We could, at some cost, capture and sequester the CO2 from limekilns and blast furnaces, or we could choose to smelt iron ore without carbothermic reduction. I prefer to consider a different way to use fossil resources, both in the energy economy and the built environment. We need something besides steel to resist tensile forces, and something besides concrete to resist compression.

In this white paper, I consider using the fossil-hydrogen for energy, and the fossil-carbon for durable materials, an approach called HECAM—Hydrogen Energy and Carbon Materials. This involves simple pyrolysis of coal, oil, or gas to extract fossil-hydrogen for use as a clean fuel. The fossil-carbon is left in the condensed state, as solid carbon or as carbon-rich resins. Production of enough fossil-hydrogen to satisfy the needs of the energy economy creates a very large amount of solid carbon and carbon-rich resins, which can satisfy the needs of the built environment.

Fossil Hydrogen for Energy, Fossil Carbon for Materials

Fossil fuels and building materials are the substances that our industrial society uses in gigatonne quantities. To continue to exploit our fossil resources, and still avoid climate change, we could stop burning fossil-carbon as fuel and stop burning limestone for lime. Coal, petroleum, and gas would instead be used as “hydrogen ores” for energy and “carbon ores” for materials, as presented in more detail previously (Halloran, 2007). Note that this necessarily means that only a fraction of the fossil fuel is available for energy. This fraction ranges from about 60 percent for natural gas to about 20 percent for coal. This might seem like we are “wasting” 40 percent of the gas or 80 percent of the coal—but it is wasted only if a valuable co-product cannot be found. If the solid carbon is used as a building material, the residual carbon could have more value as a solid than it did as a fuel.

Natural gas is a rich hydrogen ore, offering about 60 percent of its higher heating value (HHV) from hydrogen combustion. It is a relatively lean carbon ore; the hydrogen can be liberated by simple thermal decomposition: CH4 = 2H2 + C. The solid carbon is deposited from the vapor. Such vapor-deposited carbons are usually sooty nanoparticles, such as carbon black, which might be of limited use in the built environment. However, it may be possible to engineer new processes to produce very high-strength, high-value vapor-deposited carbons, such as fibers, nanotubes, or pyrolytic carbons (Halloran, 2008). Pyrolytic carbon has exceptional strength. Vapor-derived carbon fibers could be very strong. Carbon nanotubes, which are very promising, are made from the vapor phase, and the large-scale decomposition of millions of tons of hydrocarbon vapors might provide a path for their mass production.

An independent path for methane-rich hydrocarbon gases involves dehydrogenating the methane to ethylene: 2CH4 = 2H2 + C2H4. Conversion to ethylene liberates only half the hydrogen for fuel use, reducing the hydrogen energy yield to about 30 percent of the HHV of methane. However, the ethylene is a very useful material feedstock. It can be polymerized to polyethylene, the most important commodity polymer. Polyethylene is mostly carbon (87 wt percent C). Perhaps it is more convenient to sequester the fossil-carbon with some of the fossil-hydrogen as an easy-to-use white resin rather than as more difficult-to-use black elemental carbon. We can consider the polyethylene route as “white HECAM,” with the elemental carbon route as “black HECAM.”
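The hydrogen-energy fractions quoted for the two methane routes can be checked from standard higher heating values; the HHVs below are approximate textbook values:

```python
# Higher heating values (approximate): H2 = 285.8 kJ/mol, CH4 = 890.4 kJ/mol.
HHV_H2, HHV_CH4 = 285.8, 890.4  # kJ/mol

# Black HECAM: CH4 = 2H2 + C, two moles of H2 per mole of methane.
black = 2 * HHV_H2 / HHV_CH4
print(round(black, 2))  # 0.64, i.e., "about 60 percent" of methane's HHV

# White HECAM: 2CH4 = 2H2 + C2H4, only one mole of H2 per mole of methane.
white = 1 * HHV_H2 / HHV_CH4
print(round(white, 2))  # 0.32, i.e., "about 30 percent"
```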

Polyethylene from white HECAM could be very useful in the built environment as a thermoplastic resin, the matrix for fiber composites, or a binder for cementing aggregates. It should not be a difficult challenge to produce thermoset grades, to replace hydraulic-setting concretes with chemically setting composites. Moreover, if cost-effective methods can be found for producing ultrahigh-molecular-weight polyethylene (UHMWPE) fibers, we could envision construction materials similar to the current SpectraTM or DyneemaTM fibers, which are among the strongest of all manufactured materials. The tensile strength of these UHMWPE fibers is on the order of 2,400 MPa, more than 10 times the typical yield strength of grade 60 rebar steel. The density of steel is 8 times that of polyethylene (PE), so the specific strength can be 80 times better for UHMWPE.

Petroleum as a hydrogen ore offers about 40 percent of its HHV from hydrogen, and is an exceptionally versatile carbon ore. The petrochemical industry exists to manipulate the C/H ratio of its feedstocks, producing many structural materials. Indeed, carbon fibers—the premier high-strength composite reinforcement—are manufactured from petroleum pitch. Fabricated carbons—the model for masonry-like carbon building materials—are manufactured from petroleum coke.

Coal, with an elemental formulation around CH0.7, is the leanest hydrogen ore (but the richest carbon ore). When coal is heated in the absence of air (pyrolyzed), it partially melts to a viscous liquid (metaplast). Hydrogen and hydrocarbon gases evolve, which swell the viscous metaplast into a foam. In metallurgical coke, the metaplast solidifies as a spongy cellular solid. Coke is about as strong as ordinary Portland cement concrete (OPC), but only about one-third the density of OPC. A stronger, denser solid carbon can be obtained by controlling the foaming during pyrolysis by various methods, or by pulverizing the coke and forming a carbon-bonded carbon with coal tar pitch (a resin from coal pyrolysis). Wiratmoko and Halloran have demonstrated that these pitch-bonded cokes, similar to conventional carbon masonry, can be 3-10 times as strong as OPC, and stronger than high-strength concrete or fired clay brick, at less than half the density of OPC and 60 percent the density of clay brick (Wiratmoko and Halloran, 2009). Like petroleum pitch, coal tar pitch can be used as a precursor for carbon fibers.

Although less than 20 percent of the HHV of coal comes from the burning of its hydrogen, coal pyrolysis can still be competitive for the manufacture of hydrogen fuel. Recently, Guerra conducted a thorough technoeconomic analysis of HECAM from coal, using a metallurgical coke plant as a model (Guerra, 2010). Hydrogen could be produced by coal pyrolysis with much less CO2 emission than hydrogen from the standard method of steam reforming of natural gas. The relative hydrogen cost depends on the price of natural gas, the price of coal, and the market value of the solid carbon co-product. Assuming that the solid carbon could be manufactured as a product comparable to concrete masonry blocks, hydrogen from coal pyrolysis is cheaper if the carbon masonry blocks command about 80 percent of the market value of concrete blocks (based on 2007 coal and gas prices).

Since the fossil-carbon is not burned in the HECAM process, the carbon dioxide emission is much lower. However, much more coal has to be consumed for the same amount of energy. This carbon is not wasted, but rather put to productive use. Because HECAM combines energy and materials, comparisons of environmental impact are more complicated. For one example (Halloran and Guerra, 2011), we considered producing a certain quantity of energy (1 TJ) and a certain volume of building materials (185 cubic meters). HECAM, with hydrogen energy and carbon building materials, emitted 47 tons of CO2 and required 236 m3 to be removed by mining. Burning coal and using OPC for the material emitted 150 tons of CO2 and required 245 m3 to be removed from quarries and mines.

Can Carbon and Carbon-Rich Solids Be Used in the Infrastructure?

The mechanical properties of carbons appear to be favorable. For compression-resistors, carbons have been made that offer significant advantages in strength and strength/density ratio. Carbons are quite inert with respect to aqueous corrosion and should be durable. But of course the properties of these carbons in real environments have never been tested. For tensile-resistors, carbon-fiber composites or UHMWPE-fiber composites should be more than adequate, because they should be stronger and much less dense than steel. Durability has yet to be demonstrated. Fire resistance is an obvious issue. The ability to manufacture these materials in realistic volumes has yet to be demonstrated. An analogue to the setting of cement has not been demonstrated, although conventional chemical cross-linking appears to be viable. Construction methods using these materials have yet to be invented.

The cost of HECAM materials is not clearly defined, largely because the materials cost is tied to the value of the energy co-product, in the same way that the energy cost is tied to the value of the materials co-product. Guerra's preliminary analysis looks favorable, with each co-product subsidizing the other. Fundamentally, durable materials such as concrete and steel are worth much more per ton than coal, and about the same as natural gas. Values in 2003 were about $70/ton for OPC, $220/ton for rebar, $44/ton for coal, and $88/ton for natural gas (Halloran, 2007). Carbon-rich solids are lower in density (and stronger), so a comparison on the basis of volume suggests that converting some of the fossil fuels into construction materials should be economically feasible. But none of this has been demonstrated.
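The volume-basis argument can be made concrete with a small calculation. The prices are the 2003 figures quoted above; the bulk densities are assumed typical values, and natural gas is omitted because a per-volume comparison is not meaningful for a gas:

```python
# Value per unit volume from the 2003 prices quoted in the text ($/metric ton)
# and assumed bulk densities (kg/m^3, handbook values not from the text).
prices = {"OPC": 70, "rebar": 220, "coal": 44}          # $/metric ton
densities = {"OPC": 2400, "rebar": 7850, "coal": 1350}  # kg/m^3 (assumed)

per_m3 = {m: prices[m] * densities[m] / 1000.0 for m in prices}
for m, v in per_m3.items():
    print(f"{m:6s}: ${v:6.0f} per cubic meter")
# Durable materials are worth several times more per cubic meter than coal,
# which is the margin a carbon-rich building material would exploit.
```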

Similar carbon materials and composites are known in aerospace technologies as high-performance, very high-cost materials. Clearly, aerospace-grade graphite fibers or Spectra™ fibers could not possibly be affordable in the tonnages required for infrastructure. For example, a tensile strength of about 1,400 MPa has been reported for carbon fibers produced from coal tar with relatively cheap processing (Halloran, 2007). These are not as strong as aerospace-grade graphite fibers (5,600 MPa), but they are comparable in strength to the glass fibers now used in construction, which have a tensile strength of 630 MPa as strands. So we must stretch our imagination to envision construction-grade carbon fibers and UHMWPE fibers, perhaps not as strong as aerospace grades but not nearly as costly. In the same sense, the steel used in rebar is not nearly the cost (or the quality) of the steel used in aircraft landing gear.

Much will also depend on when (or if) there will be an effective cost for CO2 emissions. At present in the United States, carbon dioxide from power plants, steel mills, and cement kilns is vented to the atmosphere at no monetary cost to the manufacturer. It is likely in the future that climate change abatement and greenhouse gas control will become a concern for the building industry.

Could Carbon-Rich Materials Be Better for Earthquake Resilience?

A non-specialist like me, at a workshop of experts like this, should not offer an opinion on this question. I simply do not know. However, two of the strongest and lightest structural materials available for any type of engineering are carbon fibers and UHMWPE fibers, which are the fossil-carbon-sequestering materials we contemplate for HECAM. The specific strength and specific stiffness of fiber composites based on fossil carbon should easily exceed those of structural steel. Masonry based on fossil carbon might easily exceed the performance of ordinary Portland cement concrete, and could be stronger, lighter, and more durable. Would this enable a more earthquake-resilient built infrastructure? Would it make a more environmentally sustainable infrastructure? I hope this is discussed in this workshop.

The 19th Century as a Guide for the 21st Century

When contemplating any grand challenge, it is useful to look back into history. One hundred years ago, concrete and steel construction was still quite new. One hundred fifty years ago, structural steel was not available for construction. Two hundred years ago, there was no modern cement. So let me go back two centuries, to 1811, and consider what was available for the built environment. There was no structural steel in 1811. Steel was then a very costly engineering material, used in swords and tools in relatively small quantities. Steel was simply not available in the quantity and the cost needed for use as a structural material. There was no Portland cement concrete in 1811. Joseph Aspdin did not patent Portland cement until 1824.

But a great deal changed in a rather short time. In 1810, the Tickford Iron Bridge was built with cast iron, not steel. By 1856, Bessemer had reduced the cost of steel, and Siemens had produced ductile steel plate. By 1872, steel was used to build the Brooklyn Bridge. The Wainwright Building in 1890 had the first steel framework. The first glass and steel building (Peter Behrens' AEG Turbine Factory in Berlin) arrived in 1908. I.K. Brunel used Portland cement in his Thames Tunnel in 1828. Joseph Monier used steel-reinforced concrete in 1867, and the first concrete high-rise was built in Cincinnati in 1893. Perhaps a similar change can occur in the 21st century, and perhaps our descendants will think us fools to burn useful materials like carbon.


References

Guerra, Z. 2010. Technoeconomic Analysis of the Co-Production of Hydrogen Energy and Carbon Materials. Ph.D. Thesis, University of Michigan, Ann Arbor.

Halloran, J. W. 2007. Carbon-neutral economy with fossil fuel-based hydrogen energy and carbon materials. Energy Policy 35:4839-4846.

Halloran, J. W. 2008. Extraction of hydrogen from fossil resources with production of solid carbon materials. International Journal of Hydrogen Energy 33:2218-2224.

Halloran, J. W., and Z. Guerra. 2011. Carbon building materials from coal char: Durable materials for solid carbon sequestration to enable hydrogen production by coal pyrolysis. Pp. 61-71 in Materials Challenges in Alternative and Renewable Energy: Ceramic Transactions, Vol. 224, G. G. Wicks, J. Simon, R. Zidan, E. Lara-Curzio, T. Adams, J. Zayas, A. Karkamkar, R. Sindelar, and B. Garcia-Diaz, eds. John Wiley & Sons, Inc., Hoboken, NJ.

Wiratmoko, A., and J. W. Halloran. 2009. Fabricated carbon from minimally-processed coke and coal tar pitch as a carbon-sequestering construction material. Journal of Materials Science 44(8):2097-2100.


UNCERTAINTY QUANTIFICATION AND EXASCALE
COMPUTING: OPPORTUNITIES AND CHALLENGES
FOR EARTHQUAKE ENGINEERING

Omar Ghattas

Departments of Mechanical Engineering and Geological Sciences

Institute for Computational Engineering & Sciences

The University of Texas at Austin

Introduction

In this white paper we consider opportunities to extend large-scale simulation-based seismic hazard and risk analysis from its current reliance on deterministic earthquake simulations to those based on stochastic models. The simulations we have in mind begin with dynamic rupture, proceed through seismic wave propagation in large regions, and ultimately couple to structural response of buildings, bridges, and other critical infrastructure—so-called “rupture-to-rafters” simulations. The deterministic forward problem alone—predicting structural response given rupture, earth, and structural models—requires petascale computing, and is receiving considerable attention (e.g., Cui et al., 2010). The inverse problem—given observations, infer parameters in source, earth, or structural models—increases the computational complexity by several orders of magnitude. Finally, extending the framework to the stochastic setting—where uncertainties in observations and parameters are quantified and propagated to yield uncertainties in predictions—demands the next major prefix in supercomputing: the exascale.

Although the anticipated arrival of the age of exascale computing near the end of this decade is expected to provide the raw computing power needed to carry out stochastic rupture-to-rafters simulations, the mere availability of O(10^18) flops per second peak performance is insufficient, by itself, to ensure success. There are two overarching challenges: (1) can we overcome the curse of dimensionality to make uncertainty quantification (UQ) for large-scale earthquake simulations tractable, even routine; and (2) can we design efficient parallel algorithms for the deterministic forward simulations at the heart of UQ that are capable of scaling up to the expected million nodes of exascale systems, and that also map well onto the thousand-threaded nodes that will form those systems? These two questions are wide open today; we must begin to address them now if we hope to overcome the challenges of UQ in time for the arrival of the exascale era.

We illustrate several of the points in this white paper with examples taken from wave propagation, which is just one component of the end-to-end rupture-to-rafters simulation, but typically the most expensive (some comments are made on the other components). Moreover, we limit the discussion of UQ to the stochastic inverse problem. Despite the narrowing of focus, the conclusions presented here could have been equally drawn from a consideration of the stochastic forward problem, or the stochastic optimization problem.

Uncertainty Quantification: Opportunities and Challenges

Perhaps the central challenge facing the field of computational science and engineering today is: how do we quantify uncertainties in the predictions of our large-scale simulations? For many societal grand challenges, the "single point" deterministic predictions issued by most contemporary large-scale simulations of complex systems are just a first step: to be of value for decision making (optimal design, optimal allocation of resources, optimal control, etc.), they must be accompanied by the degree of confidence we have in the predictions. This is particularly true in the field of earthquake engineering, which historically has been a leader in its embrace of stochastic modeling. Indeed, Vision 2025, the American Society of Civil Engineers' (ASCE's) vision for what it means to be a civil engineer in the world of the future, asserts among other characteristics that civil engineers must serve as "managers of risk and uncertainty caused by natural events" (ASCE, 2009). Once simulations are endowed with quantified uncertainties, we can formally pose the decision-making problem as an optimization problem governed by stochastic partial differential equations (PDEs) (or other simulation models), with objective and/or constraint functions that take the form of, for example, expectations, and decision variables that represent design or control parameters (e.g., constitutive parameters, initial/boundary conditions, sources, geometry).

Uncertainty quantification arises in three fundamental ways in large-scale simulation:

  • Stochastic inverse problem: Estimation of probability densities for uncertain parameters in large-scale simulations, given noisy observations or measurements.
  • Stochastic forward problem: Forward propagation of the parameter uncertainties through the simulation to issue stochastic predictions.
  • Stochastic optimization: Solution of the stochastic optimization problems that make use of statistics of these predictions as objectives and/or constraints.

Although solution of stochastic inverse, forward, or optimization problems can be carried out today for smaller models with a handful of uncertain parameters, these tasks are computationally intractable for complex systems characterized by large-scale simulations and high-dimensional parameter spaces using contemporary algorithms (see, e.g., Oden et al., 2011). Moreover, existing methods suffer from the curse of dimensionality: simply throwing more processors at these problems will not address the basic difficulty. We need fundamentally new algorithms for estimation and propagation of, and optimization under, uncertainty in large-scale simulations of complex systems.


To focus our discussion, in the remainder of this section we will consider challenges and opportunities associated with the first task above, that of solving stochastic inverse problems, and employing Bayesian methods of statistical inference. This will be done in the context of the modeling of seismic wave propagation, which typically constitutes the most expensive component in simulation-based rupture-to-rafters seismic hazard assessment.

The problem of estimating uncertain parameters in a simulation model from observations is fundamentally an inverse problem. The forward problem seeks to predict output observables, such as seismic ground motion at seismometer locations, given the parameters, such as the heterogeneous elastic wave speeds and density throughout a region of interest, by solving the governing equations, such as the elastic (or poroelastic, or poroviscoelastoplastic) wave equations. The forward problem is usually well posed (the solution exists, is unique, and is stable to perturbations in inputs), causal (later-time solutions depend only on earlier time solutions), and local (the forward operator includes derivatives that couple nearby solutions in space and time). The inverse problem, on the other hand, reverses this relationship by seeking to estimate uncertain (and site-specific) parameters from (in situ) measurements or observations. The great challenge of solving inverse problems lies in the fact that they are usually ill-posed, non-causal, and non-local: many different sets of parameter values may be consistent with the data, and the inverse operator couples solution values across space and time.

Non-uniqueness in the inverse problem stems in part from the sparsity of data and the uncertainty in both measurements and the model itself, and in part from non-convexity of the parameter-to-observable map (i.e., the solution of the forward problem to yield output observables, given input parameters). The popular approach to obtaining a unique "solution" to the inverse problem is to formulate it as an optimization problem: minimize the misfit between observed and predicted outputs in an appropriate norm while also minimizing a regularization term that penalizes unwanted features of the parameters. This is often called Occam's approach: find the "simplest" set of parameters that is consistent with the measured data. The inverse problem thus leads to a nonlinear optimization problem that is governed by the forward simulation model. When the forward model takes the form of PDEs (as is the case with the wave propagation models considered here), the result is an optimization problem that is extremely large-scale in the state variables (displacements, stresses or strains, etc.), even when the number of inversion parameters is small. More generally, because of the heterogeneity of the earth, the uncertain parameters are fields, which, when discretized, result in an inverse problem that is very large-scale in the inversion parameters as well.

Estimation of parameters using the regularization approach to inverse problems as described above will yield an estimate of the "best" parameter values that simultaneously fit the data and minimize the regularization penalty term. However, we are interested in not just point estimates of the best-fit parameters, but also a complete statistical description of all the parameter values that are consistent with the data. The Bayesian approach does this by reformulating the inverse problem as a problem in statistical inference, incorporating uncertainties in the measurements, the forward model, and any prior information on the parameters. The solution of this inverse problem is the so-called "posterior" probability density of the parameters, which reflects the degree of credibility we have in their values (Kaipio and Somersalo, 2005; Tarantola, 2005). Thus we are able to quantify the resulting uncertainty in the model parameters, taking into account uncertainties in the data, model, and prior information. Note that the term "parameter" is used here in the broadest sense—indeed, Bayesian methods have been developed to infer uncertainties in the form of the model as well (so-called structural uncertainties).

The Bayesian solution of the inverse problem proceeds as follows. Suppose the relationship between observable outputs y and uncertain input parameters p is denoted by y = f(p, e), where e represents noise due to measurement and/or modeling errors. In other words, given the parameters p, evaluating f invokes the solution of the forward problem to yield y, the predicted observables. Suppose also that we have the prior probability density πpr(p), which encodes the confidence we have in prior information on the unknown parameters (i.e., independent of information from the present observations), and the likelihood function π(yobs|p), which describes the conditional probability that the parameters p gave rise to the actual measurements yobs. Then Bayes' theorem of inverse problems expresses the posterior probability density of the parameters, πpost, given the data yobs, as the conditional probability

πpost(p) = π(p | yobs) = k πpr(p) π(yobs | p),    (1)

where k is a normalizing constant. The expression (1) provides the statistical solution of the inverse problem as a probability density for the model parameters p.
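As a minimal numerical illustration of expression (1), consider a hypothetical one-parameter problem (not from the paper) with a linear forward map y = 2p and Gaussian noise and prior; the posterior can then be tabulated directly on a grid:

```python
import numpy as np

# One uncertain parameter p, forward map y = f(p) = 2p, Gaussian noise and
# prior; the posterior follows from expression (1) up to the constant k.
# All numbers here are illustrative assumptions.
sigma_noise, sigma_prior = 0.5, 1.0
y_obs = 1.8
p_grid = np.linspace(-4.0, 4.0, 2001)

prior = np.exp(-0.5 * (p_grid / sigma_prior) ** 2)              # pi_pr(p)
likelihood = np.exp(-0.5 * ((y_obs - 2.0 * p_grid) / sigma_noise) ** 2)
post = prior * likelihood
post /= post.sum()                     # discrete normalizing constant k

p_map = p_grid[np.argmax(post)]        # posterior mode (analytic value: 0.847)
print(f"posterior mode: {p_map:.3f}")
```

For this Gaussian example the grid result matches the closed-form posterior mean; the point of the sketch is that each grid point requires one forward solve, which is precisely why grid-based tabulation fails for expensive simulations and many parameters.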

Although it is easy to write down expressions for the posterior probability density such as (1), making use of these expressions poses a challenge, because of the high dimensionality of the posterior probability density (which is a surface of dimension equal to the number of parameters) and because the solution of the forward problem is required at each point on this surface. Straightforward grid-based sampling is out of the question for anything other than a few parameters and inexpensive forward simulations. Special sampling techniques, such as Markov chain Monte Carlo (MCMC) methods, have been developed to generate sample ensembles that typically require many fewer points than grid-based sampling (Kaipio and Somersalo, 2005; Tarantola, 2005). Even so, MCMC approaches become intractable as the complexity of the forward simulations and the dimension of the parameter spaces increase. When the parameters are a (suitably discretized) field (such as the density or elastic wave speeds of a heterogeneous earth), and when the forward PDE requires many hours to solve on a supercomputer for a single point in parameter space (such as seismic wave propagation in large regions), the entire MCMC enterprise collapses dramatically.

The central problem in scaling up the standard MCMC methods for large-scale forward simulations and high-dimensional parameter spaces is that this approach is purely black-box, i.e., it does not exploit the structure of the parameter-to-observable map f(p). The key to overcoming the curse of dimensionality, we believe, lies in effectively exploiting the structure of this map to implicitly or explicitly reduce the dimension of both the parameter space and the state space. The motivation for doing so lies in the fact that the data are often informative about just a fraction of the modes of the parameter field, because of the ill-posedness of the inverse problem. Another way of saying this is that the Jacobian of the parameter-to-observable map is typically a compact operator, and thus can be represented effectively using a low-rank approximation—that is, it is sparse with respect to some basis (Flath et al., 2011). The remaining dimensions of parameter space that cannot be inferred from the data are typically informed by the prior; however, the prior does not require solution of forward problems, and is thus cheap to compute. Compactness of the parameter-to-observable map suggests that the state space of the forward problem can be reduced as well. A number of current approaches to model reduction for stochastic inverse problems show promise. These range from Gaussian process response surface approximation of the parameter-to-observable map (Kennedy and O'Hagan, 2001), to projection-type forward model reductions (Galbally et al., 2010; Lieberman et al., 2010), to polynomial chaos approximations of the stochastic forward problem (Narayanan and Zabaras, 2004; Ghanem and Doostan, 2006; Marzouk and Najm, 2009), to low-rank approximation of the Hessian of the log-posterior (Flath et al., 2011; Martin et al., In preparation).
In the remainder of this section, as just one example of the above ideas, we illustrate the dramatic speedups that can be achieved by exploiting derivative information of the parameter-to-observable map, and in particular the properties of the Hessian.

Exploiting derivative information has been the key to overcoming the curse of dimensionality in deterministic inverse and optimization problems (e.g., Akçelik et al., 2006), and we believe it can play a similar critical role in stochastic inverse problems as well. Using modern adjoint techniques, gradients can be computed at a cost of a single linearized forward solve, as can actions of Hessians on vectors. These tools, combined with specialized solvers that exploit the fact that many ill-posed inverse problems have compact data misfit operators, often permit solution of deterministic inverse problems in a dimension-independent (and typically small) number of iterations. Deterministic inverse problems have been solved for millions of parameters and states in tens of iterations, for which the (formally dense, of dimension in the millions) Hessian matrix is never formed, and only its action on a vector (which requires a forward/adjoint pair of solves) is required (Akçelik et al., 2005). These fast deterministic methods can be capitalized upon to accelerate sampling of the posterior density πpost(p), via Langevin dynamics. Long-time trajectories of the Langevin equation sample the posterior, and integrating the equation requires evaluation of the gradient at each sample point. More importantly, the equation can be preconditioned by the inverse of the Hessian, in which case its time discretization is akin to a stochastic Newton method, permitting us to recruit many of the ideas from deterministic large-scale Newton methods developed over the past several decades.
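The flavor of a Hessian-preconditioned ("stochastic Newton") sampler can be conveyed on a toy Gaussian posterior. This is a hypothetical 3-parameter example, not the seismic problem; for a Gaussian target the Newton-preconditioned proposal draws exactly from the posterior, so the Metropolis correction needed in the nonlinear case is omitted here:

```python
import numpy as np

# Toy negative log-posterior 0.5 p^T H p with (assumed) Hessian H. The
# stochastic Newton proposal is a Newton step toward the mode plus a
# random draw from N(0, H^{-1}), i.e., Hessian-preconditioned Langevin.
rng = np.random.default_rng(1)
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])        # Hessian of negative log-posterior
Hinv = np.linalg.inv(H)
L = np.linalg.cholesky(Hinv)           # so L @ z ~ N(0, H^{-1})

def stochastic_newton_step(p):
    grad = H @ p                       # gradient of negative log-posterior
    return p - Hinv @ grad + L @ rng.standard_normal(3)

p = np.zeros(3)
samples = []
for _ in range(20000):
    p = stochastic_newton_step(p)
    samples.append(p)

cov = np.cov(np.array(samples).T)
# The sample covariance reproduces the posterior covariance H^{-1}.
print(np.round(cov - Hinv, 2))
```

In a real problem the gradient and Hessian actions come from adjoint and forward solves rather than explicit matrices, and the Hessian is applied only through matrix-vector products with a low-rank approximation, as described above.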

This stochastic Newton method has been applied to a nonlinear seismic inversion problem, with the medium parameterized into 65 layers (Martin et al., In preparation). Figure 1 indicates that just O(10^2) samples are necessary to adequately sample the (non-Gaussian) posterior density, while a reference (non-derivative) MCMC method (Delayed Rejection Adaptive Metropolis) is nowhere near converged even after O(10^5) samples. Moreover, the convergence of the method appears to be independent of the problem dimension when scaling from 65 to 1,000 parameters. Although the forward problem is still quite simple (wave propagation in a 1D layered medium) and the parameter dimension is moderate (up to 1,000 parameters), this prototype example demonstrates the considerable speedups that can be had when problem structure is exploited, as opposed to viewing the simulation as a black box.

Exascale Computing: Opportunities and Challenges

The advent of the age of petascale computing—and the roadmap for the arrival of exascale computing around 2018—bring unprecedented opportunities to address societal grand challenges in earthquake engineering, and more generally in such fields as biology, climate, energy, manufacturing, materials, and medicine (Oden et al., 2011). But the extraordinary complexity of the next generation of high-performance computing systems—with hundreds of thousands to millions of nodes, each having multiple processors, each with multiple cores, heterogeneous processing units, and deep memory hierarchies—presents tremendous challenges for scientists and engineers seeking to harness their raw power (Keyes, 2011). Two central challenges arise: how do we create parallel algorithms and implementations that (1) scale up to and make effective use of distributed-memory systems with O(10^5-10^6) nodes and (2) exploit the power of shared-memory, massively multi-threaded individual nodes?

Although the first challenge cited is a classical difficulty, we can at least capitalize on several decades of work on constructing, scaling, analyzing, and applying parallel algorithms for distributed-memory high-performance computing systems. Seismic wave propagation, in particular, has had a long history of being at the forefront of applications that can exploit massively parallel supercomputing, as illustrated, for example, by recent Gordon Bell Prize finalists and winners (Bao et al., 1996; Akçelik et al., 2003; Komatitsch et al., 2003; Burstedde et al., 2008; Carrington et al., 2008; Cui et al., 2010). To illustrate the strides that have been made and the barriers that remain to be overcome, we provide scalability results for our new seismic wave propagation code. This code solves the coupled acoustic-elastic wave equations in first-order (velocity-strain) form using a discontinuous Galerkin spectral element method in space and explicit Runge-Kutta time stepping (Wilcox et al., 2010). The equations are solved in a spherical earth model, with properties given by the Preliminary Reference Earth Model. The seismic source is a double-couple point source with a Ricker wavelet in time, with a central frequency of 0.28 Hz. Sixth-order spectral elements are used, with at least 10 points per wavelength, resulting in 170 million elements and 525 billion unknowns. Mesh generation is carried out in parallel prior to wave propagation, to ensure that the mesh respects material interfaces and resolves local wavelengths. Table 1 depicts strong scaling of the global seismic wave propagation code to the full size of the Cray XT5 supercomputer (Jaguar) at Oak Ridge National Laboratory (ORNL). The results indicate excellent strong scalability for the overall code (Burstedde et al., 2010). Note that mesh generation costs about as much as 25 time steps (out of the tens of thousands typically required), so the cost of mesh generation is negligible for any realistic simulation. Not only is online parallel mesh generation important for accessing large memory and avoiding large input/output (I/O), but it becomes crucial for inverse problems, in which the material model changes at each inverse iteration, resulting in a need to remesh repeatedly.

Figure 1 Left: Comparison of the number of points needed to sample the posterior density for a 65-dimensional seismic inverse problem to identify the distribution of elastic moduli of a layered medium from reflected waves. DRAM (black), unpreconditioned Langevin (blue), and stochastic Newton (red) sampling methods are compared. The convergence indicator is the multivariate potential scale reduction factor, for which a value of unity indicates convergence. Stochastic Newton requires three orders of magnitude fewer sampling points than the other methods. Right: Comparison of convergence of the stochastic Newton method for 65 and 1,000 dimensions suggests dimension independence. SOURCE: Courtesy of James Martin, University of Texas at Austin.
The results in Table 1 demonstrate that excellent scalability on the largest contemporary supercomputers can be achieved for the wave propagation solution, even when taking meshing into account, by careful numerical algorithm design and implementation. In this case, a high-order approximation in space (as needed to control dispersion errors) combined with a discontinuous Galerkin formulation (which provides stability and optimal convergence) together provide a higher computation to communication ratio, facilitating

Table 1 Strong scaling of discontinuous Galerkin spectral element seismic wave propagation code on the Cray XT-5 at ORNL (Jaguar), for a number of cores ranging from 32K to 224K.

 # cores    meshing time* (s)    wave prop per step (s)    par eff wave    Tflops
  32,460          6.32                  12.76                  1.00          25.6
  65,280          6.78                   6.30                  1.01          52.2
 130,560         17.76                   3.12                  1.02         105.5
 223,752         47.64                   1.89                  0.99         175.6

*Meshing time is the time for parallel generation of the mesh (adapted to local wave speed) prior to wave propagation solution; wave prop per step is the runtime in seconds per time step of the wave propagation solve; par eff wave is the parallel efficiency associated with strong scaling; and Tflops is the double-precision flop rate in teraflops/s.
SOURCE: Courtesy of Carsten Burstedde, Georg Stadler, and Lucas Wilcox, University of Texas at Austin.
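The parallel efficiencies in Table 1 can be reproduced from the reported per-step times; a quick sketch, with values transcribed from the table and the 32,460-core run taken as the baseline:

```python
# Core counts and wave-propagation runtime per time step (s), from Table 1.
cores = [32460, 65280, 130560, 223752]
step_s = [12.76, 6.30, 3.12, 1.89]

# Strong-scaling efficiency relative to the baseline: eff = (T0 * P0) / (T * P),
# i.e., how close doubling the cores comes to halving the per-step time.
base = step_s[0] * cores[0]
eff = [base / (t * p) for t, p in zip(step_s, cores)]
print([round(e, 2) for e in eff])
```

The recomputed values match the table to within rounding of the published per-step times; efficiencies slightly above 1.0 (superlinear scaling) typically reflect improved cache behavior as the per-core problem shrinks.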


better latency tolerance and scalability to O(10^5) cores, while also resulting in dense local operations that ensure better cache performance. Explicit time integration avoids a global system solve, while the space-filling-curve-based ordering of the mesh results in better locality.
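The space-filling-curve ordering mentioned above can be illustrated with a Morton (Z-order) code, which interleaves coordinate bits so that elements nearby in space tend to land nearby in memory; a minimal 3D sketch (not the production ordering used in the code):

```python
def morton3(x, y, z, bits=10):
    """Interleave the low `bits` bits of integer coordinates (x, y, z)
    into a Z-order (Morton) key.

    Bit i of x lands at position 3i, of y at 3i+1, of z at 3i+2, so
    sorting mesh elements by this key yields a locality-preserving
    linear ordering suitable for partitioning across cores."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

# Unit steps along each axis map to keys 1, 2, 4; (1,1,1) maps to 7.
print(morton3(1, 0, 0), morton3(0, 1, 0), morton3(0, 0, 1), morton3(1, 1, 1))
```

Partitioning is then just cutting the sorted key sequence into equal pieces, which is why such orderings also make parallel load balancing cheap.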

However, if we consider full rupture-to-rafters simulations beyond wave propagation, new and greater challenges arise. Rupture modeling may require dynamic mesh adaptivity to track the evolving rupture front, and historically this has presented significant challenges on highly parallel systems. In recent work, we have designed scalable dynamic mesh refinement and coarsening algorithms that scale to several hundred thousand cores, present little overhead relative to the PDE solution, and support complex geometry and high-order continuous/discontinuous discretization (Burstedde et al., 2010). Although they have not yet been applied to dynamic rupture modeling, we expect that the excellent scalability observed in Table 1 will be retained. On the other hand, coupling of wave propagation with structural response presents much greater challenges, because of the need to solve the structural dynamics equations with an implicit method (earthquakes usually excite structures in their low modes, for which explicit methods are highly wasteful). Scalability of implicit solvers to hundreds of thousands of cores and beyond remains extremely challenging because of the global communication required by effective preconditioners, though progress continues to be made (Yang, 2006). Finally, adding nonlinear constitutive models or finite deformations into the soil or structural behavior greatly increases parallel complexity, because of the need for dynamic load balancing and possibly local time stepping. It is fair to say that the difficulties associated with scaling end-to-end rupture-to-rafters simulations are formidable, but not insurmountable if present rates of progress can be sustained (and accelerated).
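The implicit time integration required for structural dynamics can be illustrated, for a single degree of freedom, with the unconditionally stable average-acceleration Newmark method (beta = 1/4, gamma = 1/2); this is a minimal sketch of the scheme, not the production structural solver:

```python
import math

def newmark_sdof(m, c, k, u0, v0, force, dt, steps, beta=0.25, gamma=0.5):
    """Implicit Newmark time stepping for m*a + c*v + k*u = f(t).

    With beta=1/4, gamma=1/2 (average acceleration) the scheme is
    unconditionally stable, which is why implicit methods suit the
    low-mode structural response discussed in the text: the step size
    is set by accuracy, not by the highest frequency in the model."""
    u, v = u0, v0
    a = (force(0.0) - c * v - k * u) / m
    # Effective stiffness: the (here scalar) system solved each step.
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt * dt)
    for n in range(steps):
        t1 = (n + 1) * dt
        rhs = (force(t1)
               + m * (u / (beta * dt * dt) + v / (beta * dt)
                      + (0.5 / beta - 1.0) * a)
               + c * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                      + dt * (0.5 * gamma / beta - 1.0) * a))
        u_new = rhs / k_eff
        a_new = ((u_new - u) / (beta * dt * dt) - v / (beta * dt)
                 - (0.5 / beta - 1.0) * a)
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
    return u, v

# Undamped free vibration with natural period T = 1 s: after one period
# the displacement should return (almost) to its initial value.
w = 2.0 * math.pi
u_end, _ = newmark_sdof(m=1.0, c=0.0, k=w * w, u0=1.0, v0=0.0,
                        force=lambda t: 0.0, dt=0.01, steps=100)
print(u_end)
```

For a full structure, k_eff becomes a large sparse matrix, and the per-step linear solve is exactly the globally coupled operation whose parallel scalability the text identifies as the bottleneck.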

On the other hand, the second challenge identified above—exploiting massive on-chip multithreading—has emerged in the past several years and presents new and pernicious difficulties, particularly for PDE solvers. A sea change is under way in the design of the individual computer chips that power high-end supercomputers (as well as scientific workstations). These chips have exploded in complexity, and now support multiple cores on multiple processors, with deep memory hierarchies and add-on accelerators such as graphic processing units (GPUs). The parallelism within compute nodes has grown remarkably in recent years, from the single core processors of a half-decade ago to the hundreds of cores of modern GPUs and forthcoming many-core processors. These changes have been driven by power and heat dissipation constraints, which have dictated that increased performance cannot come from increasing the speed of individual cores, but rather by increasing the numbers of cores on a chip. As a result, computational scientists and engineers increasingly must contend with high degrees of fine-grained parallelism, even on their laptops and desktop systems, let alone on large clusters and supercomputers. This trend will only continue to accelerate.

Current high-end GPUs are capable of a teraflop per second peak performance, which offers a spectacular two orders of magnitude increase over conventional CPUs. The critical question, however, is: can this performance be effectively harnessed by scientists and engineers to accelerate their simulations? The new generation of many-core and accelerated chips performs well on throughput-oriented tasks, such as those supporting computer graphics, video gaming, and high-definition video. Unfortunately, a different picture emerges for scientific and engineering computations. Although certain specialized computations (such as dense matrix problems and those with high degrees of locality) map well onto modern many-core processors and accelerators, the mainstream of conventional scientific and engineering simulations—including the important class of PDE solvers—involve sparse operations, which are memory bandwidth-bound, not throughput-bound. As a result, the large increases in the number of cores on a processor, which have occurred without a concomitant increase in memory bandwidth (because of the large cost and low demand from the consumer market), deliver little increase in performance. Indeed, sparse matrix computations often achieve just 1-2 percent of peak performance on modern GPUs (Bell and Garland, 2009). Future peak performance increases will continue to come in the form of processors capable of massive hardware multithreading. It is now up to scientists and engineers to adapt to this new architectural landscape; the results thus far have been decidedly mixed, with some regular problems able to achieve large speedups, but many sparse unstructured problems unable to benefit.
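The memory-bandwidth argument above can be made quantitative with a simple roofline estimate. The sketch below uses illustrative, assumed hardware numbers (roughly 1 Tflop/s peak and 150 GB/s memory bandwidth, in the spirit of GPUs of that era), not measurements:

```python
# Roofline estimate for CSR sparse matrix-vector multiply (double precision).
# Per nonzero: 2 flops (one multiply, one add). Bytes moved per nonzero,
# assuming no cache reuse of the x vector (a pessimistic assumption):
# 8 (matrix value) + 4 (column index) + 8 (x entry) = 20.
flops_per_nnz = 2.0
bytes_per_nnz = 8.0 + 4.0 + 8.0
intensity = flops_per_nnz / bytes_per_nnz  # flops per byte of traffic

# Illustrative (assumed) GPU characteristics.
peak_flops = 1.0e12   # 1 Tflop/s peak
bandwidth = 150.0e9   # 150 GB/s memory bandwidth

# Roofline: performance is capped by whichever resource saturates first.
attainable = min(peak_flops, bandwidth * intensity)
print(attainable / peak_flops)  # fraction of peak: 0.015, i.e., ~1.5 percent
```

Because the arithmetic intensity of sparse matvec is an order of magnitude below what the hardware balance rewards, the achievable fraction of peak lands in the 1-2 percent range reported by Bell and Garland (2009), independent of how cleverly the kernel is tuned.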

Here we provide evidence of the excellent GPU performance that can be obtained by a hybrid parallel CPU-GPU implementation of the discontinuous Galerkin spectral element seismic wave propagation code described above (Burstedde et al., 2010). The mesh generation component remains on the CPU, because of the complex, dynamic data structures involved, while the wave propagation solution has been mapped to the GPU, capitalizing on the local dense blocks that stem from high-order approximation. Table 2 provides weak scaling results on the Longhorn GPU cluster at the Texas Advanced Computing Center (TACC), which consists of 512 NVIDIA FX 5800 GPUs, each with 4GB graphics memory, and 512 Intel Nehalem quad core processors connected by QDR InfiniBand interconnect. The combined mesh generation–wave propagation code is scaled weakly from 8 to 478 CPUs/GPUs, while maintaining between 25K-28K seventh-order elements per GPU (the adaptive nature of mesh generation means we cannot precisely guarantee a fixed number of elements). The largest problem has 12.3 million elements and 67 billion unknowns. As can be seen in the table, the scalability of the wave propagation code is exceptional; parallel efficiency remains at 100 percent in weak scaling over the range of GPUs considered. Moreover,


Table 2 Weak scaling of discontinuous Galerkin spectral element seismic wave propagation code on the Longhorn cluster at TACC.

#GPUs       #elem     mesh (s)   transf (s)   wave prop   par eff wave   Tflops (s.p.)
    8       224,048     9.40       13.0         29.95        1.000           0.63
   64     1,778,776     9.37       21.3         29.88        1.000           5.07
  256     6,302,960    10.6        19.1         30.03        0.997          20.3
  478    12,270,656    11.5        16.2         29.89        1.002          37.9

#elem is the number of 7th-order spectral elements; mesh is the time to generate the mesh on the CPU; transf is the time to transfer the mesh and other initial data from CPU to GPU memory; wave prop is the normalized runtime (in µsec per time step per average number of elements per GPU); par eff wave is the parallel efficiency of the wave propagation solver in scaling weakly from 8 to 478 GPUs; and Tflops is the sustained single-precision flop rate in teraflops/s. The wall-clock time of the wave propagation solver is about 1 second per time step; meshing and transfer time are thus completely negligible for realistic simulations. SOURCE: Courtesy of Tim Warburton and Lucas Wilcox.

the wave propagation solver sustains around 80 gigaflops/s (single precision), which is outstanding performance for an irregular, sparse (albeit high-order) PDE solver.
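The roughly 80 gigaflops/s per-GPU figure follows directly from the last row of Table 2; a quick check, with values transcribed from the table:

```python
# Last row of Table 2: 478 GPUs sustaining 37.9 Tflop/s (single precision).
total_flops = 37.9e12
n_gpus = 478

per_gpu = total_flops / n_gpus
print(per_gpu / 1e9)  # ~79.3 Gflops/s sustained per GPU
```

That this per-GPU rate holds steady from 8 to 478 GPUs is precisely what the flat parallel-efficiency column in Table 2 expresses.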

Although these results bode well for scaling earthquake simulations to future multi-petaflops systems with massively multi-threaded nodes, we must emphasize that solvers that are high-order (which enhances local dense operations) and explicit (which maintains locality of operations) are in the sweet spot for GPUs. Implicit sparse solvers (as required in structural dynamics) are another story altogether: the sparse matrix-vector multiply alone (which is just the kernel of an iterative linear solver, and much more readily parallelizable than the preconditioner) often sustains only 1-2 percent of peak performance in the most optimized of implementations (Bell and Garland, 2009). Adding nonlinear constitutive behavior and adaptivity for rupture dynamics further complicates matters. In these cases, the challenges of obtaining good performance on GPU and like systems appear overwhelming, and will require a complete rethinking of how we model, discretize, and solve the governing equations.

Conclusions

The coming age of exascale supercomputing promises to deliver the raw computing power that can facilitate data-driven, inversion-based, high-fidelity, high-resolution rupture-to-rafters simulations that are equipped with quantified uncertainties. This would pave the way to rational simulation-based decision making under uncertainty in such areas as design and retrofit of critical infrastructure in earthquake-prone regions. However, making effective use of that power is a grand challenge of the highest order, owing to the extraordinary complexity of the next generation of computing systems. Scalability of the entire end-to-end process—mesh generation, rupture modeling (including adaptive meshing), seismic wave propagation, coupled structural response, and analysis of the outputs—is questionable on contemporary supercomputers, let alone future exascale systems with three orders of magnitude more cores. Even worse, the sparse, unstructured, implicit, and adaptive nature of rupture-to-rafters deterministic forward earthquake simulations maps poorly to modern consumer-market-driven, throughput-oriented chips with their massively multithreaded accelerators. Improvements in computer science techniques (e.g., auto-parallelizing and auto-tuning compilers) are important but insufficient: this problem goes back to the underlying mathematical formulation and algorithms. Finally, even if we could exploit parallelism on modern and emerging systems at all levels, the algorithms at our disposal for UQ suffer from the curse of dimensionality; entirely new algorithms that can scale to large numbers of uncertain parameters and expensive underlying simulations are critically needed.

It is imperative that we overcome the challenges of designing algorithms and models for stochastic rupture-to-rafters simulations with high-dimensional random parameter spaces that can scale on future exascale systems. Failure to do so risks undermining the substantial investments being made by federal agencies to deploy multi-petaflops and exaflops systems. Moreover, the lost opportunities to harness the power of new computing systems will ultimately have consequences many times more severe than mere hardware costs. The future of computational earthquake engineering depends critically on our ability to continue riding the exponentially growing curve of computing power, which is now threatened by architectures that are hostile to the computational models and algorithms that have been favored. Never before has there been as wide a gap between the capabilities of computing systems and our ability to exploit them. Nothing less than a complete rethinking of the entire end-to-end enterprise—beginning with the mathematical formulations of stochastic problems, leading to the manner in which they are approximated numerically, the design of the algorithms that carry out the numerical approximations, and the software that implements these algorithms—is imperative in order that we may exploit the radical changes in architecture with which we are presented, to carry out the stochastic forward and inverse simulations that are essential for rational decision making. This white paper has provided several examples—in the context of forward and inverse seismic wave propagation—of the substantial speedups


that can be realized with new formulations, algorithms, and implementations. Significant work lies ahead to extend these and other ideas to the entire spectrum of computations underlying simulation-based seismic hazard and risk assessment.

Acknowledgments

Figure 1 is the work of James Martin; Table 1 is the work of Carsten Burstedde, Georg Stadler, and Lucas Wilcox; and Table 2 is the work of Tim Warburton and Lucas Wilcox. This work was supported by AFOSR grant FA9550-09-1-0608, NSF grants 0619078, 0724746, 0749334, and 1028889, and DOE grants DEFC02-06ER25782 and DE-FG02-08ER25860.

References

Akçelik, V., J. Bielak, G. Biros, I. Epanomeritakis, A. Fernandez, O. Ghattas, E. J. Kim, J. Lopez, D. R. O’Hallaron, T. Tu, and J. Urbanic. 2003. High resolution forward and inverse earthquake modeling on terascale computers. In SC03: Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, ACM/IEEE.

Akçelik, V., G. Biros, A. Draganescu, O. Ghattas, J. Hill, and B. Van Bloemen Waanders. 2005. Dynamic data-driven inversion for terascale simulations: Real-time identification of airborne contaminants. In Proceedings of the 2005 ACM/IEEE Conference on Supercomputing, Seattle, 2005.

Akçelik, V., G. Biros, O. Ghattas, J. Hill, D. Keyes, and B. Van Bloemen Waanders. 2006. Parallel PDE constrained optimization. In Parallel Processing for Scientific Computing, M. Heroux, P. Raghaven, and H. Simon, eds., SIAM.

ASCE (American Society for Civil Engineers). 2009. Achieving the Vision for Civil Engineering in 2025: A Roadmap for the Profession. Available at content.asce.org/vision2025/index.html.

Bao, H., J. Bielak, O. Ghattas, L. F. Kallivokas, D. R. O’Hallaron, J. R. Shewchuk, and J. Xu. 1996. Earthquake ground motion modeling on parallel computers. In Supercomputing ’96, Pittsburgh, PA, November.

Bell, N., and M. Garland. 2009. Implementing sparse matrix-vector multiplication on throughput-oriented processors. In SC09: Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, ACM/IEEE.

Burstedde, C., O. Ghattas, M. Gurnis, E. Tan, T. Tu, G. Stadler, L. C. Wilcox, and S. Zhong. 2008. Scalable adaptive mantle convection simulation on petascale supercomputers. In SC08: Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, ACM/IEEE.

Burstedde, C., O. Ghattas, M. Gurnis, T. Isaac, G. Stadler, T. Warburton, and L. C. Wilcox. 2010. Extreme-scale AMR. In SC10: Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, ACM/IEEE.

Carrington, L., D. Komatitsch, M. Laurenzano, M. M. Tikir, D. Michéa, N. L. Goff, A. Snavely, and J. Tromp. 2008. High-frequency simulations of global seismic wave propagation using SPECFEM3D GLOBE on 62K processors. In SC08: Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, ACM/IEEE.

Cui, Y., K. B. Olsen, T. H. Jordan, K. Lee, J. Zhou, P. Small, D. Roten, G. Ely, D.K. Panda, A. Chourasia, J. Levesque, S. M. Day, and P. Maechling. 2010. Scalable earthquake simulation on petascale supercomputers. In SC10: Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, ACM/IEEE.

Flath, H. P., L. C. Wilcox, V. Akçelik, J. Hill, B. Van Bloemen Waanders, and O. Ghattas. 2011. Fast algorithms for Bayesian uncertainty quantification in large-scale linear inverse problems based on low-rank partial Hessian approximations. SIAM Journal on Scientific Computing 33(1):407-432.

Galbally, D., K. Fidkowski, K. Willcox, and O. Ghattas. 2010. Nonlinear model reduction for uncertainty quantification in large-scale inverse problems. International Journal for Numerical Methods in Engineering 81:1581-1608.

Ghanem, R. G., and A. Doostan. 2006. On the construction and analysis of stochastic models: Characterization and propagation of the errors associated with limited data. Journal of Computational Physics 217:63-81.

Kaipio, J., and E. Somersalo. 2005. Statistical and Computational Inverse Problems. Applied Mathematical Sciences, Vol. 160. New York: Springer-Verlag.

Kennedy, M. C., and A. O’Hagan. 2001. Bayesian calibration of computer models. Journal of the Royal Statistical Society. Series B (Statistical Methodology) 63:425-464.

Keyes, D. E. 2011. Exaflop/s: The why and the how. Comptes Rendus Mécanique 339:70-77.

Komatitsch, D., S. Tsuboi, C. Ji, and J. Tromp. 2003. A 14.6 billion degrees of freedom, 5 teraflops, 2.5 terabyte earthquake simulation on the Earth Simulator. In SC03: Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, ACM/IEEE.

Lieberman, C., K. Willcox, and O. Ghattas. 2010. Parameter and state model reduction for large-scale statistical inverse problems. SIAM Journal on Scientific Computing 32:2523-2542.

Martin, J., L. C. Wilcox, C. Burstedde, and O. Ghattas. A stochastic Newton MCMC method for large scale statistical inverse problems with application to seismic inversion. In preparation.

Marzouk, Y. M., and H. N. Najm. 2009. Dimensionality reduction and polynomial chaos acceleration of Bayesian inference in inverse problems. Journal of Computational Physics 228:1862-1902.

Narayanan, V. A. B., and N. Zabaras. 2004. Stochastic inverse heat conduction using a spectral approach. International Journal for Numerical Methods Engineering 60:1569-1593.

Oden, J. T., O. Ghattas, et al. 2011. Cyber Science and Engineering: A Report of the NSF Advisory Committee for Cyberinfrastructure Task Force on Grand Challenges. Arlington, VA: National Science Foundation.

Tarantola, A. 2005. Inverse Problem Theory and Methods for Model Parameter Estimation. Philadelphia, PA: SIAM.

Wilcox, L. C., G. Stadler, C. Burstedde, and O. Ghattas. 2010. A high-order discontinuous Galerkin method for wave propagation through coupled elastic-acoustic media. Journal of Computational Physics 229:9373-9396.

Yang, U. M. 2006. Parallel algebraic multigrid methods—high performance preconditioners. Pp. 209-236 in Numerical Solution of Partial Differential Equations on Parallel Computers, A. Bruaset and A. Tveito, eds., Lecture Notes in Computational Science and Engineering, Vol. 51, Heidelberg: Springer-Verlag.


  1. ×

    Welcome to OpenBook!

    You're looking at OpenBook, NAP.edu's online reading room since 1999. Based on feedback from you, our users, we've made some improvements that make it easier than ever to read thousands of publications on our website.

    Do you want to take a quick tour of the OpenBook's features?

    No Thanks Take a Tour »
  2. ×

    Show this book's table of contents, where you can jump to any chapter by name.

    « Back Next »
  3. ×

    ...or use these buttons to go back to the previous chapter or skip to the next one.

    « Back Next »
  4. ×

    Jump up to the previous page or down to the next one. Also, you can type in a page number and press Enter to go directly to that page in the book.

    « Back Next »
  5. ×

    Switch between the Original Pages, where you can read the report as it appeared in print, and Text Pages for the web version, where you can highlight and search the text.

    « Back Next »
  6. ×

    To search the entire text of this book, type in your search term here and press Enter.

    « Back Next »
  7. ×

    Share a link to this book page on your preferred social network or via email.

    « Back Next »
  8. ×

    View our suggested citation for this chapter.

    « Back Next »
  9. ×

    Ready to take your reading offline? Click here to buy this book in print or download it as a free PDF, if available.

    « Back Next »
Stay Connected!