Global energy markets are in transition. Demand growth is shifting from mature economies of North America and Europe to developing economies in Asia, the Middle East, and Africa. Energy resources are abundant, in contrast to concerns of shortages beginning in the 1950s and prominent again just 10 years ago, and a number of energy technologies are available. Efforts to decarbonize energy systems, driven by environmental concerns (e.g., clean air and climate change) and market forces, are creating a shift away from traditional fossil fuels to lower carbon sources, which is expected to continue. Projections of energy systems into the future are, however, subject to great uncertainty. Important questions today concern the pace of change in energy markets and the depth of decarbonization (WEC, 2016).
This chapter provides an overview of current and future energy trends and summarizes the nation’s geologically based energy challenges. As part of its task (see Box 1.1), the committee reviews current and projected energy trends in the context of the national and international energy outlooks and then summarizes the nation’s current geologically based energy challenges.
An affordable, secure, and sustainable energy supply is the cornerstone of a strong economy and personal well-being. With global population projected to grow from just over 7 to more than 9 billion and gross domestic product expected to double by 2040 (EIA, 2017a; ExxonMobil, 2017), global energy demand is forecast to increase by 30% between 2016 and 2040 (IEA, 2017). Most of this growth in demand is expected to come from nations that are not part of the Organisation for Economic Co-operation and Development (OECD), particularly in the Asia-Pacific region. Over half of that demand will be for power generation (IEA, 2016; WEC, 2016; IEA, 2017; ExxonMobil, 2017). Energy efficiency gains are expected to restrain demand growth considerably (WEC, 2016). Without restraints, forecast demand would increase by as much as 100% (ExxonMobil, 2017).
Electrification is projected to become the largest use of energy (IEA, 2016; 2017), accounting for more than 55% of global energy demand growth over the next 25 years (ExxonMobil, 2017). Global electricity production is expected to double by 2060 (WEC, 2016). Urbanization and an expanding middle class will drive demand increases,
particularly in China and India. New technologies (e.g., smart cities; automated zero-carbon mass transit systems; integrated grid/storage and electric vehicles) are expected to result in a transition from energy technologies used during the past 45 years (WEC, 2016). Given China’s rising significance in the energy sector (see Box 2.1) and its expected influence on global energy markets (WEC, 2016; IEA, 2017), policy changes in China are highlighted in most global projections (BP, 2017; IEA, 2017).
Despite efforts by several governments and international organizations to promote low-carbon energy sources, fossil fuels—and in particular oil and natural gas—are projected to continue to dominate global energy supply through 2040 (BP, 2017; EIA, 2017a; ExxonMobil, 2017). Oil will remain the world’s primary energy source to 2040, driven by demand increases in the transportation sector and industrial applications (e.g., plastics and advanced materials; ExxonMobil, 2017). Demand for oil in transportation and industry is expected to continue in all but the most extreme decarbonization scenarios (IEA, 2016).
Factors Affecting Oil Demand
Production of renewable fuels could reduce oil demand. For example, global ethanol production has nearly doubled in the last decade (from 13,489 million gallons in 2006 to 26,504 million gallons in 2016), and biodiesel production has grown as well (from 1,585 million gallons in 2006 to 8,136 million gallons in 2016) (DOE, 2016; RFA, 2017). Sustainability issues will need to be addressed for this growth to continue, perhaps through the development and commercialization of advanced biofuel production methods (IEA, 2016). Given the uncertainty about adoption of transportation options such as autonomous vehicles, car sharing, and ride pooling, their impact on oil demand is difficult to predict (BP, 2017; EIA, 2017a). Market uncertainties associated with cuts in upstream spending and declining conventional oil discoveries following the dramatic decline in oil prices in 2014 may also affect oil demand (IEA, 2016).
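As a rough check on the biofuel growth figures cited above, the production volumes imply very different annual growth rates for ethanol and biodiesel. The following sketch is illustrative only; the function and variable names are not from the report, and the volumes are those cited (DOE, 2016; RFA, 2017):

```python
# Back-of-envelope check of the cited biofuel growth figures.
# Volumes are in millions of gallons; names are illustrative.

def cagr(start, end, years):
    """Compound annual growth rate over the given number of years."""
    return (end / start) ** (1.0 / years) - 1.0

ethanol_2006, ethanol_2016 = 13_489, 26_504
biodiesel_2006, biodiesel_2016 = 1_585, 8_136

print(f"Ethanol:   {ethanol_2016 / ethanol_2006:.2f}x over 10 yr "
      f"({cagr(ethanol_2006, ethanol_2016, 10):.1%}/yr)")
print(f"Biodiesel: {biodiesel_2016 / biodiesel_2006:.2f}x over 10 yr "
      f"({cagr(biodiesel_2006, biodiesel_2016, 10):.1%}/yr)")
```

The ethanol figures correspond to roughly 7% compound annual growth, while biodiesel grew from a much smaller base at nearly 18% per year.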
Natural gas is forecast to be the world’s fastest growing fuel source (IEA, 2016; EIA, 2017a), meeting 25% of global energy demand by 2040 (ExxonMobil, 2017). Production is expected to grow—led by the United States, the Middle East, and China—with shale and tight resources (i.e., resources found in low permeability geologic formations) becoming increasingly important (EIA, 2017a). Other projections agree, but they raise concerns of market sensitivities to changing decarbonization rates (BP, 2017), varying estimates of U.S. shale gas resources (IEA, 2016), the need for better control of methane emissions (IEA, 2017), and risks associated with the supply and distribution of liquefied natural gas as growth and trade expands rapidly (IEA, 2016).
In contrast to projected growth of other energy sources, coal demand is expected either to remain flat (EIA, 2017a) or to decline slightly, from 30% of primary energy demand today to 27% (IEA, 2016) or to less than 25% in 2035 (BP, 2017). Global electricity supply from coal is expected to plateau at around 30% in 2035, a 10% decline from 2015 (ExxonMobil, 2017). The degree to which China and India utilize coal will ultimately drive long-term global demand (WEC, 2016). China has made clear its intention to move away from coal to reduce severe air pollution (IEA, 2017) (see Box 2.1), but expected increases in coal use in India and Southeast Asian countries could counter demand reductions in China.
Cumulative global installed capacity of renewable electricity (e.g., electricity from geothermal, solar, wind, biomass, and hydroelectric sources) grew by 9.1% in 2016, continuing the steady 7.5% compound annual growth rate from 2006 to 2016. Renewable sources accounted for nearly 26% (6,211 TWh) of all electricity generation worldwide in 2016 (DOE, 2016; REN21, 2017). Electricity generation from renewable energy sources is expected to continue to grow and account for approximately 30% of world electricity
generation by 2040 as technologies improve and government incentives continue (EIA, 2017a). But this comes with a price. Meeting global decarbonization goals with renewable energy sources will require additional cost reductions in the technologies, favorable government policies, and subsidy support (peaking at $210 billion in 2030). Associated technology challenges include significant improvements in energy efficiency (IEA, 2016) as well as technologies that reduce the costs of reliable electricity delivery to end users from distributed energy sources (MIT, 2016). A number of new cost-effective energy storage solutions also will be required to support growth in variable renewable energy sources that are not generated 24 hours a day (e.g., solar) (IEA, 2016). Biomass, solar, energy storage, carbon capture, and the electric grid are identified as prime areas in which innovation could facilitate development (MIT, 2016).
While subsidies for renewables are substantial in some jurisdictions, they do not yet rival those for fossil fuels, although the gap narrowed from 10:1 in 2008 to 2:1 in 2015 (IEA, 2016, 2018). Moreover, a substantial carbon tax ($100/tonne carbon dioxide) would be required to advance the decarbonization transition more rapidly (IEA, 2016). A complex and competitive market landscape driving efficiency, innovation, and deployment of new technologies—combined with global action on security, environmental, and economic issues, and on government support enabling technological development—could hasten the transition to low-carbon energy sources to 2060 (WEC, 2016).
This transition will also increase pressure on other resources. Dependence on scarce elements (e.g., tellurium, indium, gallium, and selenium; see Box 2.2) could present challenges to ramping up solar photovoltaic capacity (MIT, 2016). The International Energy Agency (IEA, 2016) highlights water and energy linkages, noting that increased reliance on biofuels, concentrating solar power, carbon capture, or nuclear power will also create new demands for water. Additional pressures on water are expected as urban centers continue to grow, including increased demand, rising prices, and deteriorating water quality, particularly along coastlines (MIT, 2016).
While geothermal energy resources are included among renewable resources, a separate discussion of the topic is warranted given the emphasis placed on them within the Energy Resource Program (ERP). Mean estimates of untapped conventional geothermal resources in the western United States (from identified and undiscovered resources) range from 9 GWe to 30 GWe (Williams et al., 2008a,b). The potential for engineered geothermal systems (EGS) in the western United States is much greater, estimated as 500 GWe or higher (MIT, 2006; Williams et al., 2008a,b; Lopez et al., 2012; Augustine, 2016), which is almost half of today’s total installed generation capacity in the United States. Yet, current U.S. geothermal energy production accounts for less than 0.4% of total electricity generation (EIA, 2018). Unlocking the potential of geothermal energy resources requires new technologies and approaches to reduce the exploration risk related to conventional geothermal resources as well as to support the successful creation of new subsurface reservoirs in EGS. EGS is an example of a development process that is still not
economically viable, but has shown technical success in small-scale demonstration projects (< 5 MWe) such as the Soultz-sous-Forêts1 (France; e.g., Koelbel and Genter, 2017), Landau and Insheim2 (Germany; e.g., Schindler et al., 2010), and Cooper Basin (Australia; e.g., Hogarth et al., 2013; Mills and Humphreys, 2013) projects. Exploration risk can be reduced through studies to characterize and evaluate the geological controls on geothermal resource type and location, by new data acquisition, and through improved predictive methods for mapping resource potential.
Since the 2011 Fukushima accident, the shutdown of all reactors in Japan, and the pause in nuclear development in multiple countries to assess national regulatory requirements (NEA, 2016), the uranium market has been oversupplied. Uranium prices have dropped, and a number of producers have reduced output, including the global leaders of Kazakhstan and Canada. This bear market has also caused production cutbacks in the United States (NEA, 2017; Ostroff, 2017). However, with more than 50 reactors currently under construction around the world (PRIS, 2018), and at least another 100 planned (WNA, 2018), increasing demand will reduce inventories, with positive impacts on the uranium market.
Nuclear generating capacity is projected to grow, led by China, India, and other developing countries (see Box 2.1). In contrast, capacity is projected to decline in the United States and Western Europe, due in part to policies to reduce nuclear power generation in some countries (e.g., Belgium, Germany, and Switzerland), but mainly because the pace of construction of new reactors is insufficient to counter the retirement of older units (EIA, 2017a; IEA, 2017). Financial difficulties of key reactor developers in the United States and France, along with increased costs and delays in the construction of new U.S.- and French-designed reactors in China, Finland, France, and the United States, add uncertainty to projected growth in nuclear generating capacity (IEA, 2017).
There are challenges to maintaining energy security, economic competitiveness, and environmental responsibility while also navigating the constant changes in U.S. energy production and use (DOE, 2015). Fossil fuels will dominate the U.S. energy mix for the coming decades, but with reduced reliance on imports. Fossil fuel imports declined by 35% from 2011 to 2015 (DOE, 2015), a trend that is expected to continue. Within 10 years, the United States is projected to become a net energy exporter for the first time since the 1950s, owing to the growing oil and gas production that has resulted from technological breakthroughs (EIA, 2017b). Energy production is projected to increase by 30% to 40% by 2040, led by shale gas, with the spot price3 doubling by 2040. Projections of tight oil and shale gas production are uncertain, however, because large portions of known formations have relatively short production histories and production technologies continue to evolve rapidly (IEA, 2016; EIA, 2017b).
From 2011 to 2015, U.S. energy consumption increased by about 1%, even as the population grew by 3.1% and the economy grew by 8.2%, reflecting gains in energy efficiency (DOE, 2015). This trend is expected to continue, with U.S. energy consumption projected to remain relatively flat, increasing by only 5% from 2016 to 2040 (EIA, 2017b). Electricity demand is projected to increase by 1% or less each year, a significant decline from historical growth rates, owing to market saturation, improved efficiencies, and a shift to less energy-intensive industry (EIA, 2013; 2017b).
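The 2011-2015 figures above imply substantial declines in energy intensity, which can be made explicit with a simple calculation. The percentages are those cited (DOE, 2015); the derived quantities and variable names are illustrative:

```python
# Implied efficiency gains from the cited 2011-2015 U.S. figures.

consumption_growth = 0.01   # total U.S. energy consumption growth
gdp_growth = 0.082          # real economic growth over the same period
population_growth = 0.031   # population growth over the same period

# Energy intensity: energy used per unit of GDP.
intensity_change = (1 + consumption_growth) / (1 + gdp_growth) - 1
# Per-capita energy use.
per_capita_change = (1 + consumption_growth) / (1 + population_growth) - 1

print(f"Energy per unit GDP: {intensity_change:.1%}")   # ~ -6.7%
print(f"Energy per capita:   {per_capita_change:.1%}")  # ~ -2.0%
```

In other words, roughly flat consumption against 8.2% economic growth implies that energy use per dollar of GDP fell by almost 7% over those four years.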
As elsewhere, the domestic fuel mix is changing, featuring a rapid increase in natural gas use (Box 2.3). As of January 2017, coal consumption has declined from a peak of 22.5 quadrillion Btu to 14.5 quadrillion Btu (EIA, 2018) as it loses market share to low-cost natural gas, the falling costs of renewable energy sources, and the domestic regulatory environment. Renewables (mainly solar and wind), on the other hand, are projected to grow due to existing state and federal policies that encourage adoption. By 2040, renewables are projected to account for about 27% of U.S. electricity production and consumption (EIA, 2017b). The electric power sector remains the largest consumer of primary energy (i.e., energy sources such as coal and natural gas that can be used directly as extracted) throughout the projection period.
3 Current price at any point in time.
The United States has been central to global civil nuclear power generation since such generation began more than 50 years ago. With 99 operational reactors, the U.S. fleet remains the world’s largest, but with attrition through aging and only 2 reactors under construction (IAEA, 2017), U.S. nuclear power generation is expected to decline (EIA, 2017b), particularly after 2035. In 2016, nuclear power generated 20% of U.S. electricity (see Box 2.3), accounting for more than 60% of the nation’s carbon-free electricity (NEI, 2017a). In contrast to other jurisdictions, particularly those in the developing world (e.g., China and India), nuclear generation in the United States is projected to decline gradually until 2040, and then more rapidly as 25% of existing plants are retired from operation (EIA, 2017b). Nuclear power is also losing market share to natural gas and renewables. Some states are making efforts to maintain nuclear capacity, and the U.S. government has indicated its intent to maintain nuclear power as a part of the nation’s energy mix (Perry, 2017). In states such as New York and Illinois that have more liberalized electricity markets,4 legislation recognizing zero-carbon-emissions generation from nuclear power has been passed but is the subject of legal challenges (Smith, 2017). Other states are considering policy measures to keep more reactors from closing before the end of their operational lives.
As directed by the committee’s statement of task, the next sections summarize the nation’s geologically based energy-related challenges identified by the committee.
4 Liberalized electricity markets are those that incorporate price bidding for electricity supply as opposed to long-term purchases made under contracts.
Addressing these challenges will be important for maintaining a secure, resilient, environmentally responsible, and economically competitive energy supply. Most of these challenges are directly relevant to ERP efforts and the decisions regarding the U.S. energy resources sector that the ERP addresses. The challenges include:
- Maintaining a robust understanding of the national resource inventory and its associated uncertainties;
- Exploring and developing geologic energy resources in an environmentally and socially responsible manner;
- Overcoming technical and economic barriers to new resource development processes; and
- Adapting to variable power-generation sources (e.g., wind and solar) and related energy storage.
These are broad and overarching challenges. The ERP may only contribute to addressing specific aspects of them. Nonetheless, a broad, inclusive view of the various geologically based energy challenges provides important contextual information about the current energy landscape in the United States. Subsequent text describes how current ERP activities address those challenges (Chapter 3), and how well the work aligns with stakeholder needs (see Chapter 4).
Challenge #1: Maintaining an Up-to-Date Geologic Resource Inventory
Geologic resources encompassed by the ERP include oil, natural gas, coal, methane hydrates, geothermal energy, and uranium. It is important to maintain robust assessments of both U.S. and international resources that accurately describe expected future recoverable quantities. Inventories of the global distributions of geologic resources are as important as those of domestic distributions because decisions regarding the nation’s future energy mix may need to account for political, financial, and technical uncertainties.5 Developing robust resource assessments involves answering questions such as: How much of the resource is there? How much is technically recoverable using current technologies? Where is it located? What are the uncertainties in the estimate? How difficult is it to extract? How much will extraction cost? Developing resource inventories for geologic resources such as oil, gas, geothermal, coal, or uranium requires extensive observations, experience, and scientific analyses that lead to reproducible predictions about the complex subsurface environment. Accurate descriptions of risk and uncertainty are necessary. Before such inventories can be developed, systems of standardization will need to be adopted.
Challenges associated with developing reliable resource inventories will change over the lifecycle of a given resource—from early exploration through depletion and reclamation. During resource identification and early exploration, resource estimates often
depend on conceptual geologic models with large uncertainty. They require significant interpolation and extrapolation from sparse geoscientific data. The challenge, therefore, is to understand and model the fundamental physical, chemical, and biologic systems that control resource distribution at spatial scales that can range from thousands of kilometers to microns. After initial exploration, resource development includes extensive data acquisition and mapping; coupled 3-dimensional numerical modelling, spatial analyses, and statistical analyses that all involve increasingly large quantities of geophysical and geological data and advanced computing tools to manage and interpret those large quantities. Compounding this challenge, the public and private sectors often acquire and interpret geologic data independently and in parallel; different resource estimates may result.
Private-sector development of subsurface energy resources results in substantial useful data (e.g., from seismic surveys, well log and other geophysical surveys, and production data). Those data, however, often are proprietary and remain confidential for a period specified by leasing regulations. Not all of those data enter the public domain. Nonetheless, the cost and pace of data acquisition, coupled with the diversification in energy research, has led the private sector to pursue new partnerships with research organizations to advance understanding of geologic fundamentals, providing both new opportunities and new challenges for public-private collaborations (e.g., Massachusetts Institute of Technology Energy Initiative,6 Texas Bureau of Economic Geology Consortia7).
Technically recoverable resource estimates may increase or decrease over time, with changes triggered by new regulations and by the ability to identify new resources or to access previously unrecoverable resources using new technologies. For example, once horizontal drilling and hydraulic fracturing enabled the extraction of previously commercially inaccessible resources, the economics of oil and gas extraction changed radically. In the future, captured greenhouse gas (such as carbon dioxide [CO2]) may be used to extract residual oil trapped in small pore spaces in the rock, both reducing CO2 emissions and changing the face of enhanced oil recovery. As the nation’s primary public source for energy resource assessments, the ERP faces the challenge of remaining sufficiently technically engaged to monitor advances in the industry and to respond to rapid changes.
Challenge #2: Environmentally and Socially Responsible Exploration and Development of Geologic Energy Resources
Developing geologic resources requires access to and careful management of surface lands and the prevention and mitigation of potential environmental impacts. Depending on the resource being developed, issues may be related to land-use and water-use requirements; management of produced water; the potential for aquifer contamination during drilling and hydraulic stimulation; the potential for induced seismicity; CO2
sequestration to offset emissions; and long-term geologic storage of radioactive and other energy waste products. Subsequent sections discuss these topics individually.
Water Used, Produced, and Disposed, and the Energy-Water Nexus
It is increasingly important to recognize the complex connections between energy and water and their various impacts on water supplies, on water quality, and on related factors such as induced seismicity associated with produced water disposal (see Box 2.4). The relationship between how water is used to produce energy and how energy is used to
extract, treat, and distribute water for end use is referred to as the “water-energy nexus.”8 This section addresses challenges associated with the use of water for energy production.
The energy extraction industry accounts for approximately 1% of total water withdrawn in the United States, including water used in hydraulic fracturing (Maupin et al., 2014). While this percentage may seem insignificant, the percentage of water withdrawn locally may be much higher (e.g., in arid regions where many of the nation’s oil and gas resources are located). Water is also consumed for electricity generation and use, and those amounts can exceed the volumes of water used for resource extraction (e.g., the volume of water consumed to generate electricity from natural gas can be 15 times the volume of water used for hydraulic fracturing to produce that gas [Laurenzi and Jersey, 2013; Scanlon et al., 2014]). Similarly, water lost as a result of evaporation during the combustion of hydrocarbon fuels may equal or exceed the volume of produced waters disposed of via deep well injection (Belmont et al., 2017).
The United States accounted for 35% of total global water withdrawn for power generation and primary energy production in 2010 (IEA, 2016). An estimated 38% of total freshwater withdrawals in the United States are used for thermoelectric power generation (i.e., electricity generated using steam-driven turbines in coal, nuclear, gas, some geothermal, and solar-thermal power plants). Almost 100% of these waters are sourced from surface water (i.e., not groundwater; Maupin et al., 2014), and most are returned to the source and available for downstream use. Irrigation accounts for a similar percentage (38%) of freshwater withdrawals, sourced from both surface water (57%) and groundwater (43%), but most irrigation water is consumed rather than returned. Irrigation is concentrated primarily in 17 western states, which account for 83% of irrigation water withdrawals. Regional variability in water use within the United States must therefore be considered when evaluating the energy-water nexus.
Hydraulic fracturing is often used to stimulate the flow of oil and gas from geologic reservoirs to a well and then to the surface. Depending on the reservoir and the basin in which it is located, estimates of median water use for hydraulic fracturing range up to 10.5 million gallons/well (40 × 10^6 L/well) (Kondash and Vengosh, 2015; Scanlon et al., 2017). The amount of water used for multistage hydraulic fracturing development in horizontal wells increased by a factor of 10 in the Permian Basin between 2008 and 2015 (Scanlon et al., 2017). This increase may stress water resources (as well as the sand resources needed as a proppant in conjunction with the injected water) (Scanlon et al., 2016; 2017). This is of particular concern where access to large volumes of water is difficult or where water sources are limited and sensitive to drought (e.g., Texas, California, New Mexico, and parts of the Midwest; Freyman, 2014). Wells produce not only oil and gas, but also some of the fluids that were pumped into the well as part of the hydraulic fracturing process, along with much larger volumes of fluids naturally present in the geologic formation. As described in Box 2.4, these produced fluids often contain additives that make proper treatment and disposal necessary to avoid adverse environmental impacts. Improper well casing and cementing can allow methane to contaminate local groundwater supplies (Raimi, 2017). Other potential negative effects associated with injection of this
produced fluid include increased risk of induced seismicity (discussed later in this chapter) and undesirable geochemical feedbacks that have the potential to alter geochemistry or hydraulic properties over time. Proper handling of the produced water is critical because surface spills can contaminate soils. Transportation of produced water via trucks may create traffic issues and have other impacts on infrastructure such as roads and bridges (TAMEST, 2017). These impacts need to be identified and weighed against other physical and social impacts associated with produced water management.
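As a quick sanity check on the hydraulic fracturing water-use figure above (10.5 million gallons/well, equivalently about 40 × 10^6 L/well), a short sketch using the standard U.S. gallon conversion confirms the equivalence. Variable names are illustrative:

```python
# Unit check: median hydraulic fracturing water use per well,
# converting U.S. gallons to liters (1 US gal = 3.78541 L).

LITERS_PER_US_GALLON = 3.78541

median_use_gal = 10.5e6                       # gallons per well (cited)
median_use_l = median_use_gal * LITERS_PER_US_GALLON

print(f"{median_use_l / 1e6:.1f} million L/well")  # ~39.7, i.e. ~40 x 10^6 L
```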
Water is also required for the development of EGS reservoirs: for drilling new wells, for stimulation activities to increase reservoir permeability, for well testing, and to create the hydrothermal systems that circulate through the engineered subsurface reservoirs from which hot water is produced to the surface for electricity generation. The amount of water required for stimulation depends on the reservoir properties and the type of stimulation used (hydraulic, chemical, or thermal): on average, EGS projects developed to date have required approximately 5.1 million gallons per well (Harto et al., 2013). Over a project lifecycle (> 30 years of EGS power plant operation), modelling indicates that water needed for drilling and reservoir development may comprise less than 3% of total water requirements (Schroeder et al., 2014). In greenfield EGS projects (i.e., those not connected to a productive hydrothermal reservoir), predicted losses of injected water into the reservoir at depth (i.e., injected fluids that do not return to the production wells) will require supplementary water to be added to the system (makeup water) to maintain a viable EGS. This makeup water may comprise 80-95% of a project’s water requirements over its lifetime (Schroeder et al., 2014). Therefore, EGS projects need access to long-term water sources to meet this anticipated demand. As with development of oil and gas resources, meeting the demand may be challenging in areas with limited water availability. There may be opportunities to use recycled fluids (e.g., wastewater), brackish groundwater, or produced waters from oil and gas wells for this makeup fluid (Harto et al., 2014). The potential geochemical reactions that may cause undesirable loss of permeability in the reservoir will also need to be considered.
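The proportions above imply that makeup water dominates an EGS project's lifetime water budget. The following sketch illustrates the implied magnitudes; the well count and the use of exactly 3% for the development share are illustrative assumptions for a small hypothetical greenfield project, not values from the report:

```python
# Rough EGS lifecycle water budget implied by the cited figures
# (~5.1 Mgal/well for stimulation; development water <~3% of
# lifetime needs; makeup water 80-95% of lifetime needs).

wells = 4                                  # assumed small project
stimulation_per_well_gal = 5.1e6           # average cited per well
development_water = wells * stimulation_per_well_gal

development_share = 0.03                   # assumed upper-bound share
lifetime_total = development_water / development_share

makeup_low = 0.80 * lifetime_total         # low end of makeup range
makeup_high = 0.95 * lifetime_total        # high end of makeup range

print(f"Development water:      {development_water / 1e6:.1f} Mgal")
print(f"Implied lifetime total: {lifetime_total / 1e6:.0f} Mgal")
print(f"Makeup water range:     {makeup_low / 1e6:.0f}-{makeup_high / 1e6:.0f} Mgal")
```

Even under these rough assumptions, hundreds of millions of gallons of makeup water could be needed over a project's life, which is why long-term water access matters for greenfield EGS siting.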
Studies of water management are only in the early stages of collecting data and analyzing water use for different energy pathways. Where the water system is most strained is only now starting to be understood (e.g., Ch2MHill, 2015; Veil, 2015). Given water scarcity issues in some areas, produced waters may represent an alternative water source for hydraulic fracturing, or makeup water for EGS (although the latter is untested and will depend on the geochemical compatibility between the produced fluids and geothermal formations [Clark et al., 2011]). Water quality requirements for hydraulic fracturing have evolved, from use of freshwater during early practice to use of “clean brine” or direct reuse of produced water (Scanlon et al., 2017). Characterization of produced waters is complicated because of difficulties with analytical techniques, problems with high-salinity matrices (Oetjen et al., 2017), and lack of available reference materials. National databases related to produced water that include information about volumes and geochemistry will inform future research on the study and recycling of these produced fluids as well as the regulations and protocols associated with their use.
Induced seismicity refers to earthquakes caused by human activity such as those related to development of some oil and gas reservoirs or of engineered geothermal resources. Induced seismicity results primarily from the disposal of produced fluids into deep geological formations near the basement due to over-pressurization (e.g.,
Langenbruch and Zoback, 2016; see Box 2.5). Many of these earthquakes are strong enough to be felt, which may lead to increased regulation of fluid disposal. Understanding the risks and managing the hazards associated with induced seismicity requires development of coupled geomechanical and hydrological models that account for the effects of increased fluid pressures. The relationships between induced seismicity and the rate, volume, and depth of injection of produced waters are being evaluated (e.g., Weingarten et al., 2015; Langenbruch and Zoback, 2016; Hincks et al., 2018). Industry and state regulators, however, are still learning about these effects and their associated hazards, and they are adapting injection strategies in real time to manage seismicity (e.g., Oklahoma Corporation Commission, 2016). While individual states address induced seismicity through regulation and study, some national organizations have developed information and guidance documents (e.g., GWPC and IOGCC, 2017). Much remains to be done before induced seismicity can be mitigated: better knowledge of bottom-hole pressures, new fault mapping, and an improved understanding of pore pressure changes and the distance at which they influence seismic activity are all necessary. In addition, statewide databases of disposal volumes and pressures, and access to them, would be invaluable.
Geologic Sequestration of Carbon Dioxide (Carbon Capture and Storage)
Given the dominance of gas-fired and coal-fired power plants (collectively producing approximately 65% of the nation’s electricity in 2016),9 electricity-generating facilities are responsible for the largest share of CO2 emissions (1,928 million metric tons)10 among U.S. energy sectors (about 35%).11 Depending on how federal and state policies evolve to address CO2 emissions, and pending further research, development, and commercial demonstration, subsurface geologic CO2 sequestration may become more widespread in the United States. As of January 3, 2018, more than 16 million metric tons of CO2 had been injected in the United States as part of the Department of Energy (DOE) Clean Coal Research, Development, and Demonstration Programs.12 This represents a small fraction of annual CO2 emissions from the U.S. electric power sector alone (EIA, 2016). Important challenges associated with successful geologic CO2 sequestration center on identifying potential subsurface reservoirs and understanding their behavior under long-term sequestration conditions. For example, positive net fluid budgets from injecting CO2 into the subsurface can increase subsurface pressures and induce seismicity (NRC, 2013; Zoback and Gorelick, 2015). In addition, dissolution and precipitation reactions can affect long-term fluid chemistry, water quality, and aquifer or caprock permeability. Such phenomena have been studied in deep reservoirs for activities such as carbon sequestration (Johnson et al., 2005; Xu et al., 2011). Potential reservoirs:
- have appropriate porosities and permeabilities to store the CO2;
- have appropriate stratigraphic, geochemical, or structural trapping mechanisms (e.g., a cap rock) to prevent migration of CO2 to the surface or into other surrounding geologic and hydrologic formations (e.g., aquifers); and
- are geologically stable (i.e., where active faults will not compromise the reservoir trap or increase the risk of induced seismicity).
These challenges can be addressed through substantial subsurface characterization: geological and geophysical data acquisition and analysis, laboratory studies, and numerical simulation. The DOE has taken the lead on producing maps of both CO2 sources and potential sequestration sites.13
Further research and development are required to facilitate wider deployment of geologic CO2 sequestration. For example, sequestration projects can benefit from information generated by and lessons learned from induced seismicity studies and experiences (NRC, 2013). The DOE aims to develop and advance the effectiveness of onshore and offshore carbon capture and storage technologies, reduce the challenges to their implementation, and prepare them for widespread commercial deployment in the 2025-2035 time frame.14 Understanding waste disposal issues and evaluating potential impacts associated with CO2 sequestration can involve basin-wide and lifecycle approaches, even at the earliest phases of resource assessment and development. ERP expertise in geochemistry, structural geology, and detailed stratigraphic analysis would complement DOE capabilities; ERP collaboration with the DOE could provide more detailed geologic characterization of subsurface reservoirs that may be suitable for carbon sequestration.
Subsurface Storage of Radioactive Waste
Because the energy density of uranium is thousands of times greater than that of fossil fuels,15 the full lifecycle volumes of waste (including atmospheric emissions) produced as a result of electricity generation are much smaller than those produced by fossil fuels. Disposal of the relatively small volumes of the resulting radioactive waste is, however, controversial, particularly for spent fuel, given the long half-lives of its radioactive constituents (Alley and Alley, 2013), although progress is being made in a number of countries, notably Finland and Sweden (IAEA, 2018). Since the initiation of U.S. nuclear power generation, some 79,000 metric tons of spent fuel have been discharged from reactors (NEA, 2017). Currently, spent fuel is stored in aboveground facilities (e.g., in spent fuel pools and dry cask storage systems).
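The scale of the energy density comparison can be illustrated with a back-of-envelope calculation. The sketch below (with rounded physical constants; the coal heating value is an illustrative assumption) compares the energy released by complete fission of uranium-235 with the heating value of coal. Practical reactor fuel, with low enrichment and finite burnup, yields a smaller but still very large ratio.

```python
# Back-of-envelope energy density comparison (illustrative values only).
MEV_TO_J = 1.602e-13   # joules per MeV
AVOGADRO = 6.022e23    # atoms per mole

# ~200 MeV is released per U-235 fission; molar mass ~0.235 kg/mol
energy_per_fission_j = 200 * MEV_TO_J
atoms_per_kg = AVOGADRO / 0.235
u235_j_per_kg = energy_per_fission_j * atoms_per_kg  # ~8e13 J/kg

# Assumed typical heating value for bituminous coal (~24 MJ/kg)
coal_j_per_kg = 2.4e7

ratio = u235_j_per_kg / coal_j_per_kg
print(f"U-235 (complete fission): {u235_j_per_kg:.1e} J/kg")
print(f"Coal (heating value):     {coal_j_per_kg:.1e} J/kg")
print(f"Ratio: ~{ratio:.1e}x")
```

Complete fission gives a ratio in the millions; accounting for enrichment levels and achievable burnup in commercial reactors reduces this to the "thousands of times" cited in the text.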
13 An atlas is available at https://www.netl.doe.gov/research/coal/carbon-storage/atlasv and an online mapper is located at https://edx.netl.doe.gov/geocube/#natcarbviewer.
As outlined by the Nuclear Regulatory Commission (NRC, 2017), radioactive wastes include low-level waste with radioactivity just above natural background levels (such as gloves and lab coats used in nuclear power stations and medical facilities, uranium mill tailings, sandy material discharged from uranium mines) and high-level wastes (such as those produced from the irradiation of fuel in reactors and from national defense activities). Long-term disposal of each type of waste ranges from near surface sites for low-level wastes and tailings to deep geologic disposal sites for high-level radioactive wastes.
There are four existing sites in the United States for the disposal of low-level radioactive wastes (NRC, 2017). Low-level wastes are treated and packaged according to the level of hazard and disposed of in licensed, near surface facilities. Tailings produced at uranium mining and milling sites are generally stabilized on-site. Since the radium contained in tailings decays over thousands of years to produce the radioactive gas radon, tailings are placed in piles—usually lined, covered, and monitored for leaks—for long-term disposal. Remediation work continues at uranium mining and milling sites that operated prior to the Uranium Mill Tailings Radiation Control Act of 1978 (referred to as legacy sites).
Spent fuel discharged from reactors is thermally hot, highly radioactive, and potentially harmful. It must be handled with care and safely stored until it becomes harmless through decay processes, which can take hundreds of thousands of years (NRC, 2017). Spent fuel is first temporarily stored in cooling ponds at reactor sites until it can be safely handled and transferred to dry storage casks, either at reactor sites or at a consolidated interim storage facility. Ultimately, spent fuel is destined for disposal in a deep geologic repository. Siting of a high-level radioactive waste facility in the United States has been a lengthy, controversial, and politically charged process.
After decades of study, the DOE submitted a license application in 2008 to receive authorization to begin construction of a repository for high-level radioactive wastes at Yucca Mountain, Nevada. This process was stopped shortly thereafter and the DOE subsequently began to implement a consent-based process to select and evaluate sites and license facilities, reversing previous efforts to select a high-level waste repository based predominantly on engineering studies (NEA, 2017). In 2017, however, a House committee voted overwhelmingly to advance a bill meant to establish Yucca Mountain as the nation’s repository for high-level radioactive wastes (Cama, 2017).
Given the continued use of nuclear power in the United States, radioactive waste subsurface disposal facilities will need to be licensed and operated. Although the DOE is responsible for the siting and licensing of disposal sites for wastes for which the federal government is responsible, U.S. Geological Survey expertise will continue to be required during the selection and characterization of these facilities. This represents an opportunity for ERP input as uranium-groundwater research matures.
Surface Land Use
Oil and gas resource development between 2000 and 2012 is estimated to have required 30,000 km2 of land for well pads, roads, and storage facilities (Allred et al., 2015), making that land unavailable for agriculture, horticulture, wildlife habitat, recreation, rural and
urban development, or other uses. Approximately 200,000 km2 of new lands are projected to be required for energy production in the United States by 2040 (Trainor et al., 2016). When spacing requirements are included in the projection, up to 800,000 km2 of new land could be impacted by energy production by 2040. This accounts for the area of land between well pads or other surface infrastructure that might affect wildlife (e.g., by breaking up ecosystems that affect large mammal populations and wildlife migration) and human use. Land use for surface coal mining is not actively tracked by federal agencies and is thus imprecisely understood (Epstein et al., 2007), and the committee could not locate land use estimates in the scientific literature. Based on estimates of the tonnage and thickness of coal mined at the surface in 2016 (EIA, 2017c), about 270 km2 of land was disturbed by surface coal mining during that year.
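The committee's coal land-disturbance figure follows from a simple relation: the area of coal removed scales with mined tonnage divided by coal density and average seam thickness. The sketch below shows the form of that calculation with hypothetical inputs; the tonnage, density, and thickness values are assumptions for illustration, not the committee's actual figures derived from EIA (2017c).

```python
# Rough surface coal mining area estimate (hypothetical inputs;
# not the committee's actual assumptions).

def mined_area_km2(tonnage_t, density_t_per_m3, seam_thickness_m):
    """Seam footprint in km^2 from tonnage / (density x thickness)."""
    volume_m3 = tonnage_t / density_t_per_m3
    return volume_m3 / seam_thickness_m / 1e6  # m^2 -> km^2

# Illustrative values: 430 Mt surface-mined coal at 1.3 t/m^3,
# 2 m average mined seam thickness
area = mined_area_km2(430e6, 1.3, 2.0)
print(f"Seam footprint: ~{area:.0f} km2")
# Note: total disturbed land (overburden piles, haul roads, facilities)
# exceeds the seam footprint itself.
```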
Fossil fuels are forecast to account for 60% to 80% of the direct land footprint required for new energy development, and more than 90% of the footprint when spacing requirements are considered (see Box 2.6). Energy resource studies need to account for land disturbance over the full lifecycle of energy development. Understanding the economic and environmental tradeoffs that may occur as additional public or private land access is granted to develop new geologic energy resources is challenging, as is recognizing new issues that arise with the land-use requirements of growing alternative energy production.
Challenge #3: Technical and Economic Barriers to New Resource Development Processes
Technical innovation is also required for more efficient and responsible development of existing geologic energy sources: for example, to improve resource recovery, mitigate environmental impacts, and manage waste disposal. Examples include innovations that address
- reducing methane (CH4) leakage associated with oil and gas production involving high-water-volume hydraulic fracturing;
- identifying EGS sites with the most desirable characteristics, namely those that support the creation and maintenance of an engineered reservoir that accommodates commercial rates of fluid flow and heat exchange;
- maintaining permeability in EGS reservoirs, for example by managing potential thermo-hydraulic-mechanical-geochemical (THMC) feedbacks and by developing new technologies to monitor the system throughout its lifetime;
- achieving commercially viable methane recovery from hydrates;
- achieving stable long-term CO2 sequestration with appropriate monitoring;
- increasing the volumes of CO2 that can be efficiently injected for sequestration;
- increasing oil recovery efficiency, leaving less oil in place;16
- lowering the environmental impacts of coal production and fly ash disposal;
- managing volumes of produced fluids in oil and gas wells; and
16 Even small levels of improved recovery can translate into trillions of dollars of additional revenue.
- mitigating induced seismicity hazard during development of continuous oil and gas resources and in EGS.
Through continued subsurface geologic characterization, the collection and archiving of available data sets (e.g., well data, rock properties, production data), and basin-scale modeling and assessments, the ERP has an opportunity to contribute to the innovations that address these challenges. Its products could provide vital information leading to technical breakthroughs and viable solutions.
Challenge #4: Adapting to Variable Power-Generation Sources
Contributions from renewable energy sources (e.g., wind and solar) to electricity generation are projected to increase in coming decades (EIA, 2017a). These energy sources do not always produce electricity continuously (e.g., solar energy is not generated at night), and generation can also be variable over short timescales (e.g., minutes). The current electricity grid was designed around stable baseload power generation, and, given the small storage capacity of the grid, electricity supply and demand must always be closely balanced to avoid grid collapse (e.g., blackouts) or other issues. To maintain grid stability and reliability, and to more closely match energy generation with times of energy demand, energy storage and other options (e.g., load following power plants that adjust power as demand fluctuates) are needed to accommodate sources of variable power generation such as wind and solar. Energy storage options include aboveground systems such as pumped hydroelectric storage (water moved to higher elevation using solar or wind energy generated during peak production, and then drained to lower elevations to produce energy during off-peak times), batteries, flywheels, chemical/thermal systems (e.g., molten salts), and subsurface storage options.
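The energy available from pumped hydroelectric storage follows directly from the gravitational potential energy of the raised water. A minimal sketch, assuming a hypothetical reservoir volume, head, and round-trip efficiency:

```python
# Pumped hydro storage capacity: E = rho * V * g * h * efficiency.
# Reservoir volume, head, and efficiency below are illustrative assumptions.
RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def stored_energy_mwh(volume_m3, head_m, round_trip_efficiency=0.75):
    """Recoverable energy (MWh) from water raised by head_m meters."""
    joules = RHO_WATER * volume_m3 * G * head_m * round_trip_efficiency
    return joules / 3.6e9  # J -> MWh

# Hypothetical upper reservoir: 1 million m^3 of water raised 300 m
energy = stored_energy_mwh(1e6, 300)
print(f"Stored energy: ~{energy:.0f} MWh")
```

Even a modest reservoir stores hundreds of megawatt-hours, which is why pumped hydro dominates installed grid storage capacity; subsurface options such as compressed air and thermal storage extend the same load-shifting principle to geologic formations.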
Subsurface energy storage options include compressed air energy storage (CAES) in geologic formations (see Box 2.7), gas storage (e.g., hydrogen, methane gas—including liquefied natural gas), or thermal storage (storing hot fluids in subsurface reservoirs). Thermal storage (e.g., sensible heat storage) usually occurs at shallow depths up to a couple of hundred meters, whereas CAES and gas storage sites may use geological formations as deep as 2 km (Kabuth et al., 2017).