The Forms and Costs of Carbon Sequestration and Capture from Energy Systems
SFA Pacific is a consulting firm that provides second opinions to people high on the learning curve before they make investments. We have no vested interest in promoting anything. Most of our work is in the private sector, and a lot of it is done outside the United States. In fact, about two-thirds of our work related to carbon dioxide is done abroad. We have been working on carbon dioxide for about 15 years, in the last two or three years mostly for private industry. When private industry asks people like us to get involved, you know it takes the issue very seriously.
Power generation is going through enormous changes: 20 years of difficulty driven by newly deregulated utilities entering the competitive free market. The big ugly “C” word that all regulated utilities fear the most is competition. People who still work in power generation probably have ulcers and high blood pressure.
The sector of the electricity business operating in the greatest uncertainty today is not transmission or distribution, but power generation itself. The uncertainties for power generation are tremendous, especially when it comes to environmental law, which is constantly changing and is highly restrictive for new power plants. Incredibly, we have created a situation in which utilities are actually driven by economics to extend the lives of old, “big dirty,” inefficient power plants because environmental laws grandfather them in. Coal-fired power plants with a mean age of 25 to 30 years generate about half of the total kilowatt hours produced in this country. In the last 15 years, essentially no new coal plants have been built, for two reasons: environmental laws and cheap natural gas.
One of the key things that has come out of Kyoto is that the Kyoto agreement basically will not work. Nevertheless, companies are trying to make it work. Shell and British Petroleum, for example, are five years ahead of governments on effective, transparent, internal trading of carbon emissions. These private industries are actually ahead of their schedules in terms of reductions.
There are only four basic options for reducing carbon dioxide (CO2) emissions: (1) reducing world population; (2) reducing the standard of living in industrialized countries; (3) reducing energy intensity; or (4) reducing carbon intensity. For meaningful worldwide reductions in CO2, there are really only two places to look—energy intensity and carbon intensity in the world’s two 800-pound gorillas, the United States and China. The United States accounts for 25 percent of the world’s greenhouse gas emissions. But that is because we also create more than 25 percent of the world gross domestic product. We produce a high level of CO2 emissions because we are the economic engine that drives the entire world. China will probably surpass us in the next 20 years because that country’s industry is mostly coal based, and its energy systems are very, very inefficient.
Keep in mind that power generation is where the growth is. Power generation grows at the same rate as gross domestic product, whereas other end-use energies grow at about half that rate.
The United States has an incredible overall energy balance. Power generation accounts for 35 or 36 percent of the whole; transportation fuels account for about 26 percent. Those two add up to about 62 percent of the total energy consumed in this country. But an enormous amount of energy is wasted in our power generation, which is only 33 percent efficient. This is an embarrassing situation. Moreover, our transportation is only 20 percent efficient.
What is even worse, the efficiency numbers are not going up. In fact, they are going down. Power plants are becoming more inefficient because our pollution laws encourage old power plants to add scrubbers that extend their lifetimes, making them even less efficient. Transportation efficiency numbers keep going down because of the American love affair with sport utility vehicles (SUVs). Last year, for the first time, more SUVs were sold than cars. The efficiency numbers for fleet sales of cars last year declined to 20 miles per gallon, the lowest point since 1980. We are going in the wrong direction.
Three sectors are to blame for carbon emissions in the United States: the industrial sector, the transportation sector, and the power-generation sector. Which of these will have to be responsible for the major share of CO2 reduction? If a carbon tax is put in place, the U.S. industrial sector will move to China. The transportation sector won’t ever be expected to comply with a carbon tax, because SUVs are our sacred cows, and no one can gore a sacred cow and be reelected to Congress. (By the way, don’t blame Detroit for SUVs. Blame yourselves, because automakers produce what you want.) So which sector is left? Power plants.
The power-generation sector operates aging, inefficient, coal-burning plants with efficiencies of less than 35 percent that produce 35 percent of the emissions in the United States and 9 percent of the emissions around the world. Power plants are vulnerable because they cannot move to China. Therefore, we should rethink power plants, because there are a lot of ways to improve them, both by improving efficiencies and by using lower carbon fuels. But ultimately, we will probably need carbon capture, and, to do that, we will need large point sources, such as existing, coal-fired power plants.
The win-win situation for reducing carbon emissions in power generation is reducing energy intensity, that is, increasing efficiency. There are two ideologically opposed approaches to higher efficiency. The regulated utilities want to build new ultra-supercritical coal plants. The industrial sector wants cogeneration. In my opinion, the utilities should focus on their old plants. Under current laws, if they try to improve their old coal plants, they are punished. We have to change the laws to encourage utilities to make their old plants more efficient. For new capacity, however, we should look to cogeneration. The key factor in cogeneration is that it works effectively not with steam cycles but with gas turbines, because cogeneration is heat-host limited, a very simple issue that most people don’t appreciate. For a given heat host, you need technologies that give the highest power per unit of cogenerated heat. That is why you use gas turbines, which are already commercially proven. If we ever develop intercooled gas turbines, we will actually double the efficiency numbers.
Cogeneration is critical to reducing energy intensity. Without going into depth, I would point out that cogeneration could be brought into play in the two critical places, North America and China. Cogeneration will not become important in the United States until the old coal-burning power plants are no longer life extended. The marginal dispatch cost of old plants is so low that you can’t compete with them. In China, the enormous potential for cogeneration is being stymied because it is not in the best interests of China’s regulated utilities, which make their money on a guaranteed return on investment, to buy high-efficiency cogeneration from others. Meanwhile, 50 percent of China’s coal use is in very small boilers with very low efficiency and very high pollution.
Later in these sessions, you will hear talks about biosinks, which are good for mankind but questionable for net carbon reduction. First of all, they are not really sinks; they are carbon offsets. In addition, there are long-term issues that must be addressed in terms of permanence, verification, transparency, and especially fairness. The problem is that carbon offsets would allow Americans to take unfair advantage of poorer nations.
The important thing is to reduce the carbon content of fuels. We could use natural gas if it were cheap. But one thing you can be sure of is that in a carbon-constrained world natural gas will not be cheap.
Another win-win situation will be to life-extend existing nuclear plants. We cannot build new nuclear plants until we go through the ugly side—the shutting
down and decommissioning of parts of the existing fleets. We must also resolve the waste issue before we can build new nuclear plants.
When it is cost effective and when we can deliver it at a sound price, we want to co-fire biomass in existing boiler systems. Generally speaking, nuclear energy, renewable energy sources, and reforestation biomass are great ideas, but they have very limited possibilities, and we must appreciate these limits. Some popular ideas will never be very useful. When we think of biomass, we should concentrate on waste biomass, because afforestation doesn’t mean much for an existing coal plant; the economics of growing biomass for power are terrible. Wind turbines are great, but we have to be honest about what they can and can’t do. Cycling-load wind turbines can’t replace a base-load coal plant because they cycle low, and they need backup. Those are very real problems. Compare the 30-percent annual load factor produced by wind turbines with the 85-percent load factor produced by coal plants.
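The gap between those load factors is worth working through. A back-of-the-envelope sketch (the 30- and 85-percent figures are the ones quoted above; the rest is arithmetic) shows how much wind nameplate capacity is needed just to match the annual energy of a base-load coal plant, before the backup problem is even considered:

```python
HOURS_PER_YEAR = 8760

def annual_mwh(nameplate_mw, load_factor):
    """Annual energy delivered at a given average load factor."""
    return nameplate_mw * load_factor * HOURS_PER_YEAR

wind_mwh = annual_mwh(1.0, 0.30)  # 1 MW of wind at a 30-percent load factor
coal_mwh = annual_mwh(1.0, 0.85)  # 1 MW of coal at an 85-percent load factor

# Nameplate wind capacity needed to match 1 MW of base-load coal on
# annual energy alone; backup capacity for calm periods comes on top
wind_needed_mw = coal_mwh / wind_mwh
```

On energy alone, roughly 2.8 MW of wind nameplate is needed for every megawatt of base-load coal, and because wind cycles, the backup the text mentions is an additional requirement.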
The new approach to reducing carbon intensity is CO2 capture and storage. This concept has changed the debate in the last five years because capture and storage cost less than the most politically correct approach, wind turbines. The best approaches to capture and storage appear to be through enhanced oil recovery and coal-bed methane production. Those are the places to start. Sequestering CO2 from dilute streams would increase costs by an order of magnitude, but there are many pure CO2 vents out there right now.
The next thing to look at is gasification, repowering existing coal plants and ultimately building new plants with cogeneration or polygeneration systems. Most power-plant engineers will tell you that gasification is not commercially viable. They say it is very risky, and it doesn’t work, but that is not correct. A large demonstration in gasification is going on right now. Because expertise in chemical processes is necessary to make gasification work effectively, it is being used primarily in refineries. There are 65 commercial gasification plants right now, mostly in China, most of them producing ammonia. They also make pure hydrogen with gasification, and they do it every day. Hydrogen is becoming increasingly important for producing carbon-free energy. In fact, after conducting a series of tests, General Electric now guarantees performance for burning hydrogen-rich gas in its turbines. In other words, the technology for gasification is all commercially available today. The issue is cost.
One commercial power plant in the United States now makes pure hydrogen and pure CO2 by gasification from coke, which is basically coal without the volatile fraction. Farmers in the Midwest own the oil refinery that makes the coke they feed to the gasification plant. What the U.S. coal-based utilities say can’t be done, farmers in this country are doing commercially right now, with no subsidies.
Polygeneration is a unique approach. Commercial polygeneration plants are in production right now, with high availability and no spare gasifiers. These plants are very cost effective, and several are operating commercially in major oil refineries without subsidies. All of these new plants represent the future of gasification. No central power plant will be able to compete against them.
Now let’s turn to the cost of power generation with CO2 control. First, we have to define a baseline—a natural-gas combined-cycle power plant. Next, we must determine which energy source will be most cost competitive. Then, we must look at the costs of recovering CO2. The cost of capturing CO2 includes 50 percent to get the pure stream, about 25 percent to compress it, and 25 percent to dispose of it down a well. That is very important because, if the transfer price of the CO2 goes from this operating cost to the byproduct credit, it reduces the overall cost by about 50 percent.
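The cost split and the byproduct-credit effect described above can be sketched numerically. The 50/25/25 split is from the text; the $40-per-tonne total and the assumption that the sale price exactly equals the avoided disposal share are purely illustrative:

```python
def net_capture_cost(total, sale_price=0.0):
    """Net cost per tonne of CO2 captured, using the shares quoted in the
    text: ~50% to produce the pure stream, ~25% to compress it, and ~25%
    to dispose of it down a well."""
    pure_stream = 0.50 * total
    compression = 0.25 * total
    disposal = 0.25 * total
    if sale_price > 0.0:
        # Selling the CO2 (e.g., for enhanced oil recovery) avoids the
        # disposal leg and adds a byproduct credit instead.
        return pure_stream + compression - sale_price
    return pure_stream + compression + disposal

total = 40.0  # hypothetical $/tonne; not a figure from the talk
baseline = net_capture_cost(total)
with_credit = net_capture_cost(total, sale_price=0.25 * total)
```

A sale price merely equal to the avoided disposal share already halves the net cost, which is the roughly 50-percent reduction the text describes.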
If we compare a new coal plant and a new natural-gas combined-cycle plant, both built without CO2 capture, the incremental cost for a gasification plant to add CO2 capture is much less than for the natural-gas plant. A new coal plant will probably not be built until the price of natural gas climbs to about $4.50. If a carbon tax comes into play, the cost of new plants will continue to go up. Under those constraints, we would use natural gas until the carbon tax became very high. But that assumes a constant price for natural gas. In a carbon-constrained world, the price of natural gas would go way up.
A more exciting idea is retrofitting existing coal plants. Retrofitting has many possibilities. The most important thing I have to say today is that repowering old coal plants with gasification would increase their capacity and efficiency and, at the same time, reduce all emissions to zero. Retrofitting is the only large-scale way to use CO2 and is, perhaps, the most important issue in CO2 capture and storage.
Old coal plants are a major problem. No carbon stick is big enough to beat old coal plants to death. A carbon tax would have to reach $200 to $300 per ton before changing would be cheaper for old coal plants than paying it. We have considered allowing caps and trades with old coal plants. Many people in the power-generation sector would prefer to pay the carbon tax, which, in a zero-sum game, would supply funds to those who want to reduce CO2.
In sum, Kyoto has big problems, but the key thing is that international industries are leading the way. Besides the Carbon Capture Project, another very ambitious project, called the Canadian Clean Power Coalition, plans to have a 300-megawatt, retrofitted, coal-fired power plant with CO2 capture in service five years from now. This project will probably lead the way for the future of coal plants.
Utilities will be forced to comply with most of the CO2 reductions, primarily because they can’t move to China. We won’t be seeing new central power plants. The sensible choices for efficient new capacity will be polygeneration and cogeneration with gas turbines. The utilities will have to be a lot more objective about where they are going in the future, but they have a lot of options. The key thing for the long term is CO2 capture.
To sum it up, there are two important issues that have to be addressed, the
two 800-pound gorillas, the United States and China. China is putting in inefficient systems under a regulated environment. The cogeneration and gasification expertise already used in Chinese ammonia plants is twice as efficient. The United States also has to change many things, starting with addressing the issue of old, inefficient coal plants.
Public Policy on Carbon Emissions from Fossil Fuels
DAVID G. HAWKINS
Natural Resources Defense Council
As Dale Simbeck pointed out, we need to end coal-plant life extensions. But these extensions haven’t been caused by the Clean Air Act; they have been caused by violations of the Clean Air Act. Actions to enforce the Clean Air Act are now before U.S. courts, and the operators of existing plants, not surprisingly, are investing lots of money in lobbyists to fight these enforcement actions. Today I want to stress the importance of moving ahead with the deployment of low-carbon technologies, including technologies like coal gasification, that are compatible with geologic carbon sequestration. We need to take this step to avoid a technology lock-in caused by additional commitments to high-carbon energy systems.
The fundamental aspects of the science of climate change are already settled. We know that carbon dioxide (CO2) emissions lead to increases in atmospheric concentrations, which lead to increases in temperature. Although we cannot forecast precise temperature increases as a result of any particular concentration level, the link between emissions and concentrations is clear. A given emissions path will produce a given concentration. To avoid going above a target concentration, we need a carbon budget. To stabilize carbon concentrations, as called for in the 1992 Climate Convention, only a fixed amount of carbon can be put into the atmosphere. We have to start thinking of this as a budgeting problem.
The difficulty is that we haven’t decided what the concentration target should be. In other words, instead of a classic budgeting problem, we have an options-preservation problem. What must we do to make sure we can achieve lower targets? Clearly, once we decide on the target level, we must put ourselves on a path that keeps our options open to achieve a “safe” concentration level. This
means we have to begin now, first to slow down increases in emissions to below the business-as-usual forecast and then to turn global emissions downward.
In the last few hundred years, since human beings started systematically transferring fossil carbon into the biosphere, we have emitted about 300 gigatons into the atmosphere. A possibly “safe” cumulative emissions budget for this century is 600 gigatons. The bad news is that the midrange reference forecasts for carbon emissions in the next hundred years are 1,500 gigatons—way above a safe budget. Midrange reference forecasts indicate that in the next quarter century we will put another 300 gigatons into the atmosphere—half the prudent budget for the next century.
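The budget arithmetic in this paragraph can be restated in a few lines; every figure is one quoted above:

```python
# All quantities in gigatons of carbon (GtC), as quoted in the text
emitted_to_date = 300     # emitted since large-scale fossil fuel use began
safe_budget = 600         # a possibly "safe" cumulative budget for this century
forecast_century = 1500   # midrange reference forecast for the next 100 years
forecast_25yr = 300       # midrange forecast for the next quarter century

# The midrange forecast overshoots the prudent budget by 2.5x, and the
# next 25 years alone would consume half of it
overshoot = forecast_century / safe_budget
budget_spent_by_quarter_century = forecast_25yr / safe_budget
```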
Let’s say we want to preserve our option to stabilize CO2 concentrations at 450 parts per million volume (ppmv), a figure established in a study by the Intergovernmental Panel on Climate Change that summarizes five categories of environmental and health threats: (1) the risks to unique and threatened ecosystems; (2) risks posed by extreme climate events, such as more frequent storms, more intense storms, and droughts; (3) widespread negative impacts; (4) total negative impacts; (5) the risk from large-scale discontinuities. The last category relates to abrupt climate change, the surprise scenario, which by definition has a high degree of uncertainty. The chart shows the temperature increase that will put humanity into the red, or danger, zone for each risk category (Figure 1).
With the 450-ppmv scenario, a change of just 2°C, which is the midpoint for the 450 scenario, would plunge us into the red zone for the first category. The midpoint estimate for the 550-ppmv scenario is 3°C, which would place us in the red danger zone for the first two categories. At the upper limit of the uncertainty range for the temperature response to the 550 concentration, we would enter the red zone for three or four of the five categories.
We now face this dilemma: the longer we stay on a 550 ppmv or higher scenario, the more difficult it will be to get off of it. The long lifetime of CO2 in the atmosphere means that we are committing not only ourselves but also future generations to unknown consequences. For this reason alone, we should preserve the option of stabilizing CO2 at lower concentrations.
How are we doing? The newest forecasts from the Energy Information Administration (EIA) of total greenhouse gas emissions (expressed as carbon equivalence and gigatons of carbon) for the next 20 years (Figure 2) show that the United States is going to go from a little less than 1.5 gigatons of annual carbon emissions in 1990 to more than 2 gigatons in 2020. We also see big jumps in emissions in China and India. The total for the globe according to this forecast will go from about 6 gigatons in 2002 to 10 gigatons in 2020—not a good picture.
But I want to focus on something even more significant. Emissions are a function of energy investment, and energy investments are not annual phenomena. A much longer term commitment, the remainder of the century, is embedded in them. Once you make investments, you are committed to them for the life
of a facility, absent a breakthrough in technology that makes it very cheap to change something after the fact.
I want to focus on new conventional coal plants, which have the potential to consume a significant amount of the twenty-first-century carbon budget. The EIA forecasts nearly 200 gigawatts of new coal capacity in just three countries: 100 in China, 65 in India, and 31 in the United States. Unless we change our policies, almost all of that will be from conventional coal plants rather than from gasification plants. The implications of this are significant. Every decade that we delay establishing a policy of investing in low-carbon energy systems means additional long-term commitments to high-carbon systems. Ironically, current U.S. policy puts us at a serious disadvantage. The longer we stand by while rapidly growing countries invest in technologies that have a high-carbon commitment, the less of the carbon budget will be available for the rest of the century.
Does that mean that we should wave a stick at these countries, and say don’t do it? No, that isn’t going to work. A better approach would be to recognize that it is in our strategic interest to develop both technologies that will avoid high-carbon emission commitment and diplomacy that will convince rapidly growing
economies that it is in their interest to deploy these technologies. But that won’t happen unless we also deploy them at home.
Let’s examine the portfolio options. Whatever the state of our economic growth and population growth, the portfolio will include increased efficiency. We will be better off if we use our resources more efficiently, and there are many untapped opportunities to help us achieve that. Wind power, solar power, and other renewable energy sources won’t solve the entire problem—certainly not in the next several decades—but they will be important components of a comprehensive response. Carbon capture and geologic sequestration is another important strategy. To stabilize concentrations at lower levels, we will have to pursue all of these approaches very aggressively.
What should our emphasis be? Should we focus on buying down the costs or on new gasification plants or on retrofitting existing plants? Currently, there is a very high energy penalty associated with an existing combustion unit for separation techniques, such as amines. There is also an economic penalty. So when we build a new power plant, we must assess how long it can operate without limiting its carbon emissions. To be comfortable with new commitments to conventional coal plants, we would have to assume we will find a magic bullet to bring down costs for those plants so that their carbon can be captured. If this does not happen, we will face an unpleasant choice. Either we will have to incur large retrofitting or premature retirement costs, or we will have to accept continuing high
emissions. In my view, a wiser approach would be to minimize the construction of new conventional coal plants.
To do this we will have to deploy gasification technologies in the field to persuade skeptics in the industry and in the investment community who would apply a significant financial penalty to such projects. We have to demonstrate that gasification technologies can make electricity in the United States, and this will require more than the few subsidized gasification plants producing electricity today. Gasifiers are already making fertilizer in China and chemicals in Tennessee.
Here are the challenges we face. The capital costs of integrated-gasification combined-cycle (IGCC) plants are high compared to existing baseload plants and to the alternatives. Current gas prices are too low to stimulate investment in IGCC; it is cheaper to build natural gas plants. Because of the policy confusion about climate change, nobody knows if there will be a payoff in the near term, or even in the midterm, for investing in a technology that facilitates future sequestration of carbon. Accordingly, gasification technology is hardly being considered for new projects.
A final challenge is the schizophrenic federal policy of subsidies and incentives. For instance, the House Energy Bill and the pending Senate Energy Bill both include a mix of incentives for coal-fired power generation. They include some research and development (R&D) programs, focused largely on gasification and sequestration—thanks to some quiet advocacy by the environmental community rather than to efforts by the coal industry or the electric generating industry. However, the money for R&D is offset by much more lavishly funded policies, federal tax production incentives amounting to a billion dollars or more over the next 10 years to patch up existing, old, conventional coal plants, the very plants that should be replaced. In other words, with these energy bills, Congress is making our energy policy go to war with itself. There are dollars to move IGCC forward, but there are also dollars on the table to push it back by keeping existing conventional capacity running longer. This is not a recipe for rapid progress, but it is a recipe for spending a lot of money, a lot of your money. Clearly, there is a major disconnect in our energy policy.
The results are predictable. The EIA Energy Outlook forecasts a need for large additional capacity in the next 20 years. But the predictions about what kind of capacity will be built don’t include more widespread coal-gasification technology; and there is very little increase in renewables or improvements in efficiency to meet this need. The new coal in the forecast is assumed to be all conventional. Natural gas dominates the picture with nearly 300 gigawatts of new capacity. The picture is clear. Instead of finding ways to proliferate coal-gasification technology, the plan ignores it. Under current policy, investors will minimize capital costs by building natural gas plants, assuming that gas prices will not increase before they get their money out. The necessary steps
to demonstrate ways of capturing and storing carbon from advanced coal technologies are not in the picture.
On February 14, 2002, President George W. Bush said the government’s policy toward greenhouse emissions would be to “slow and stop” them. The day before, his advisors had him add the phrase “and, as the science justifies, reverse global warming emissions.” The question is by when? The White House released graphics about greenhouse gas emissions, but they don’t include dates for emissions increases to end. Our calculations of the implied dates, based on available information, show that the United States must basically get to zero growth in greenhouse gas emissions by 2020 (Figure 3).
If the White House really believes that a prudent course of action would be for us to get to a zero growth rate in greenhouse gas emissions in the United States by 2020, wouldn’t it help to announce that now? Delaying the announcement only eats into our lead time. We urgently need a signal from the president indicating the path ahead. So far, the administration is withholding the signal the private sector needs to make more climate-friendly investments in new capacity. Let us hope we can get beyond this impasse because we have no time to waste. If we allow climate change to progress at its current rate, we will incur lasting damages that we cannot undo.
EIA (Energy Information Administration). 2002. International Energy Outlook 2002. Washington, D.C.: Government Printing Office.
EPA (Environmental Protection Agency). 2002. Global Climate Change Policy Book. Available online at http://yosemite.epa.gov/oar/globalwarming.nsf/UniqueKeyLookup/SHSU5BNMAJ/$File/bush_gccp_021402.pdf
IPCC (Intergovernmental Panel on Climate Change). 2001. A Contribution of Working Groups I, II, and III to the Third Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, U.K.: Cambridge University Press.
Active Climate Stabilization
Practical Physics-Based Approaches to Preventing Climate Change
RODERICK A. HYDE
Lawrence Livermore National Laboratory
LOWELL L. WOOD
Stanford University and Lawrence Livermore National Laboratory
It is not generally realized that Earth’s seasonally averaged climate is colder now than it has been 99 percent of the time since life on Earth got seriously under way with the Cambrian Explosion 545 million years ago. Nor is it widely appreciated that atmospheric concentrations of carbon dioxide (CO2) are only very loosely correlated with average climatic conditions over this extended interval of geologic time. In fact, it has been much colder with substantially higher air concentrations of CO2 and also much warmer with substantially lower atmospheric levels of CO2 than at present. Indeed, the CO2 level in the air in the geologic record is one of the weaker determinants of globally and seasonally averaged temperature. If one nevertheless wishes to maintain global climate at its current temperature level—or at the somewhat higher level that characterized the Holocene Optimum several thousand years ago or at the lower value of the Little Ice Age of three centuries ago or at any other reasonable level—then the purposeful modification of the basic radiative properties of Earth (i.e., active management of the radiative forcing of the temperature profiles of Earth’s atmosphere and oceans by the Sun) is an obvious gambit. Indeed, active management is likely to be the most practical approach overall.
This paper is concerned with the best way to effect—to actively manage— the desired changes in radiative forcing of Earth’s fluid envelopes. “Best” will be
determined in terms of practicality; the economic efficiency mandated by the U.N. Framework Convention; minimal interference with human activities; aesthetic considerations; collateral effects; and so on. We make no pretense that there is an absolute or objective way to determine practicality. Our examples are merely illustrative of what might be accomplished in the very near term, how much it might cost, and what some of the more obvious externalities might be. Detailed supporting information can be found in our earlier paper (Teller et al., 1997).
RADIATIVE BUDGET CONTROL
We note at the outset that basic concepts for purposeful modification of Earth’s radiative properties did not originate with us; they were proposed at least as long ago as 1979 by Dyson and Marland in the context of CO2-driven global warming and, perhaps most prominently, by the National Research Council (NRC) global change study group in 1992, which noted what appeared to them to be the surprising practicality of active intervention. A subsequent study in 1995 by the Intergovernmental Panel on Climate Change produced similar findings. Our studies are set in the context of the U.N. Framework Convention, Article 3, which states in part that “policies and measures to deal with climate change should be cost-effective so as to ensure global benefits at the lowest possible cost.” We have merely mass-optimized and cost-optimized previous schemes and offered a few new ones, with a little attention given to how near-term studies of optimized schemes for ensuring climatic stability might commence.
The comparatively rudimentary atmospheric and oceanic circulation models currently used to predict climate variability with time predict increases in mean planetary temperature of between ~1.5 and ~5 K for a doubling of atmospheric CO2 concentrations from the current level of ~350 ppm to ~700 ppm (and associated changes in the mean concentrations of atmospheric water vapor, other greenhouse gases, such as CH4 and N2O, aerosols of various compositions and sizes, Earth-surface and atmosphere reflectivity, radiative transport, etc.). Temperature changes of this magnitude would also be induced by a change in either solar heating or terrestrial radiative cooling of about 2 W/m², which is of the order of 1 percent. Thus, if sunlight is to be preferentially scattered back into space or if Earth is to be induced to thermally radiate more net power, the characteristic surface area involved in changing net solar input by a space-and-time average of 2 W/m² is ~10⁻² Aproj ≈ 1.3 × 10¹⁶ cm² ≈ 1.3 × 10¹² m² ≈ 1.3 × 10⁶ km², where Aproj is the area the solid Earth projects onto the plane perpendicular to the Earth-Sun axis. To impose a change uniformly over the entire Earth, the treated area must be four times this size (i.e., the ratio of Earth’s surface area to the surface area of its disc).
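The area estimate in the preceding paragraph is easy to verify. The only input beyond the figures in the text is the standard mean Earth radius of about 6,371 km:

```python
import math

R_EARTH_M = 6.371e6  # mean Earth radius in meters

# Area the solid Earth projects onto the plane perpendicular to the
# Earth-Sun axis: a disc of radius R
a_proj_m2 = math.pi * R_EARTH_M ** 2

# A ~2 W/m^2 change is about 1 percent of net solar input, so the
# characteristic treated area is ~1 percent of the projected disc
area_1pct_km2 = 0.01 * a_proj_m2 / 1e6  # convert m^2 to km^2

# To impose the change uniformly over the whole globe, scale by the ratio
# of Earth's surface area (4*pi*R^2) to that of its disc (pi*R^2), i.e., 4
area_uniform_km2 = 4.0 * area_1pct_km2
```

This reproduces the ~1.3 × 10⁶ km² figure in the text (about 1.28 × 10⁶ km² with this radius).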
Radiative budget control on the scales of present interest thus centers on generating and maintaining coverage of this 1- to 2-percent fraction of Earth’s surface—or its Sun-presented disc—with one or another of the materials that
substantially modify the transport of either incoming sunlight (i.e., insolation) or outgoing thermal radiation emitted at or near Earth’s surface over this area. If sunlight is blocked but terrestrial thermal radiation of ~20× greater wavelength is allowed to pass out into space, then Earth will cool by the desired amount in the space-and-time average. Conversely, if sunlight is allowed to pass through to Earth’s surface but terrestrial thermal radiation is blocked from escaping into space, then Earth will warm by the same amount—again in the space-and-time average.
Govindasamy and Caldeira (2000) and Govindasamy et al. (in press) have shown that fractional removal of insolation uniformly over the entire surface of Earth not only results in temperature changes of the predicted amounts in the space-and-time average, but also preserves the present climate in its seasonal and geographic detail, at least down through the mesoscales in space and time that are treated more or less aptly by present-day global circulation models. The most notable modeling results (Plate 7)—contrary to previous pessimistic hypotheses, which were unsupported by modeling—have been confirmed by subsequent work and indicate that terrestrial climate may be stabilized by adding or subtracting insolation along the lines that we propose, not only “in the large,” but also in the considerable spatial and temporal detail of interest to the man on the street who experiences the highest frequency components of climate as the daily weather in his microclimate. Govindasamy and collaborators also have offered a plausible mechanistic explanation for why these remarkable results might have been expected.
WAYS AND MEANS OF ACTIVE MANAGEMENT OF RADIATIVE FORCING
“Covering” a million square kilometers of Earth’s area with something that substantially affects the sunlight falling on it—or Earth’s thermal re-radiation—might appear to be a rather ambitious task. However, because matter may be made to interact quite strongly with radiation if its composition and geometry are properly chosen, the principal challenge is not the preparation or handling of the quantities of materials involved, but rather ensuring that they will stay in place for usefully long intervals. (The average thickness of scattering material over this ~10⁶ km² is at most 10⁻⁴ cm, so that the total volume is about 10¹² cm³ [the volume of a cube 100 meters on an edge]; the associated mass is only about 1 million tonnes.) As a specific example, and looking ahead to one of our results, the present concern about global warming centers on the input of about 7 billion tonnes of carbon into the atmosphere each year, with several times this level expected several decades hence. The annual deployment of barely 0.01 percent of this mass of sulfur (roughly one ten-thousandth as much sulfur as carbon), in appropriate form and location, can be made to offset entirely the “greenhouse effect” of the 35,000-fold greater mass of added CO2.
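The volume and mass figures in the parenthetical, and the ~35,000-fold sulfur-to-CO2 mass ratio, can be verified with a few lines of arithmetic (the ~1 g/cm³ scatterer density is our illustrative assumption, not a figure from the text):

```python
# Volume and mass of the scattering layer.
area_cm2 = 1.0e6 * 1.0e10          # 10^6 km^2, at 10^10 cm^2 per km^2
thickness_cm = 1.0e-4              # average scattering-layer thickness
volume_cm3 = area_cm2 * thickness_cm
side_m = volume_cm3 ** (1 / 3) / 100.0       # edge of equivalent cube, meters
mass_tonnes = volume_cm3 * 1.0 / 1.0e6       # assumed 1 g/cm^3; 10^6 g/tonne

print(volume_cm3)    # 1e12 cm^3
print(side_m)        # a cube ~100 m on an edge
print(mass_tonnes)   # ~1 million tonnes

# Sulfur-to-CO2 mass ratio: 0.01% of 7 Gt C, vs. that carbon as CO2 (x44/12).
carbon_t = 7.0e9
sulfur_t = 1.0e-4 * carbon_t                 # "barely 0.01 percent"
co2_t = carbon_t * 44.0 / 12.0               # C -> CO2 by molar mass
print(co2_t / sulfur_t)                      # ~3.7e4, the ~35,000-fold ratio
```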
We examined such considerations in some detail, combined with the summary of earlier results, and came to the following conclusions. From a basic physics viewpoint, materials vary greatly in their ability to interact with, and thus to manipulate, optical-spectrum radiation. Resonant scatterers have the greatest mass efficiency by far; good metals have roughly one ten-thousandth the specific radiative-interaction efficiency of resonant scatterers; and dielectrics have about 1 percent the specific radiative-interaction power of the best metals. Each of these classes of materials offers distinct, independent, eminently practical ways and means of accomplishing the technical management of radiative forcing.
Positioning scatterers of incoming solar radiation in Earth’s upper atmosphere—specifically in the middle to upper stratosphere—is a venerable approach that appears to provide the most practical deployment for two reasons: (1) operational lifetimes of engineered scatterers can be as long as five years; and (2) required replacement rates are correspondingly modest. Thus, the stratosphere is where we would deploy all of the insolation-modulation scattering systems we propose for near-term study.
Insolation-reducing means that have been demonstrated twice in the past two decades (by the eruptions of El Chichon and Mount Pinatubo, two large tropical volcanoes) and that were noted in the NRC study illustrate the simplest kind of radiative forcing management—Rayleigh scattering by aerosols of dielectric materials—although in a grossly nonoptimized way. Each volcanic event injected sufficient sulfate aerosol into the stratosphere to decrease ground-level temperatures in various regions of the Northern Hemisphere for 1 to 3 years by 10 to 30 percent of the amount that CO2 is variously predicted to increase these temperatures by 2100. Optimized formation and emplacement of sulfate aerosol is the most mass-costly—although one of the more dollar-economical—means of scattering back out into space the sunlight fraction necessary to offset the predicted effects of atmospheric CO2 concentration in 2100. Interestingly, Rayleigh scattering of sunlight performed by stratospherically deployed aerosols, with quite small diameters compared to the wavelength of light itself, will selectively scatter back into space the largely deleterious ultraviolet (UV) component of sunlight while only imperceptibly diminishing the light we see and the light plants use for photosynthesis.
From the human perspective, if a stratospheric Rayleigh scattering system were deployed, skies would be bluer, twilights would be more visually spectacular, plants would be less stressed by UV photodamage and thus would be more productive, and children playing outdoors would be much less susceptible to sunburn (and thus to skin dysplasias and dermal cancers as adults).
We estimate the dollar outlay for active management of radiative forcing on the 2100 scale to be about $1 billion per year. No one to our knowledge has taken issue with this estimate since we offered it five years ago. Indeed, the NRC study implicitly acknowledged the practicality of this kind of approach, although it considered only thoroughly nonoptimized dielectric aerosol scattering.
Incidentally, the costs appear to be an order of magnitude lower than the savings in health care from avoided UV skin damage, and far smaller than the gains in agricultural productivity from avoided crop photodamage, in the United States alone.2 Thus, the cost to the U.S. taxpayer of implementing this system of benefit to all humanity would appear to be negative, because the economic benefits in the United States alone would greatly outweigh the costs.
Metals are greatly superior to dielectrics in the specific efficiency with which they scatter radiation, and the several particular means we considered for using metals in the management of radiative forcing reflect a 10-fold to 100-fold mass savings over dielectric aerosols. The geometries of metallic scatterers center on metal dipoles and metallic screens, with dimensions selected to be comparable to the reduced wavelengths of the portion of the solar spectrum we wish to scatter. The physics of metallic scatterers (which also include small, thin, metallic-walled superpressure balloons) suggest that they could most effectively scatter back into space the UV portions of solar insolation, just as dielectric scatterers do. These more highly engineered scatterers would cost significantly more to replace in the stratosphere than would dielectric aerosols, but because they have far lower
masses, the estimated annual costs to address the reference year 2100 problem might be as much as five times lower than approaches of comparable efficacy based on dielectrics, that is, about $0.2 billion per year (Teller et al., 1997). Because highly engineered scatterers would also diminish the intensity of a portion of the solar spectrum that is damaging to both plants and animals, their beneficial side effects would be comparable to those of dielectric aerosol Rayleigh scatterers. Again, the net economic cost of deployment would be negative.
Resonant scatterers of sunlight offer huge gains in mass efficiency— although much of this gain seems likely to be lost in “packaging” these materials so that they would be both harmless and unharmed in the photoreactive stratosphere. Overall, these novel materials appear to offer mass budgets a few-fold lower than the most interesting metallic scatterers but have operating costs comparable to dielectrics. This novel type of climate stabilization probably would be used to attenuate the near-UV solar spectrum, and thus the net economic costs would again be negative.
Most of these atmospherically deployed scatterers would remain “locked” into the air-mass parcels into which they were initially deployed and thus eventually would descend from the stratosphere, mostly as a result of vertical transport in the polar vortices at high latitudes. Once out of the stratosphere, they would “rain out” along with other tropospheric particulate material. The quantities deposited would be tiny compared to natural particulate depositions (e.g., wind-lofted dust and volcanic aerosols). The radiative forcing “magic” results from the midstratospheric deployment of these optimally formed scatterers. Virtually no natural particulate—except for a small fraction of particulates from explosive volcanic eruptions—ever ascends that high. Thus no other particulates are atmosphere-resident for as long, or “work” as hard. Tropospheric particulates usually rain out in a few days to a few weeks. Even volcanic aerosol particulates are far too large to be mass optimal, besides which they are loaded with chemical impurities that unfavorably impact stratospheric ozone levels. In fact, they are of interest in the present discussion only as an undoubted proof-of-concept of the several types of engineered-scatterer systems we propose.
Finally, deployment of one or more metallic scattering screens, so diaphanous as to be literally invisible to the human eye, just inside the interior Lagrange point of the Earth-Sun system and on the Earth-Sun axis, represents the absolute optimum of all means known to us for ensuring long-term climate stability. Barely 3,000 tonnes of optimally implemented metallic screen would suffice to stabilize climate against worst-case greenhouse warming through preferential scattering of near-infrared solar radiation so that it would just barely miss Earth. The same size screen in a slightly off-axis position could be used to prevent future Ice Ages by scattering “near-miss” solar radiation back onto Earth. It isn’t clear exactly how such a long-term capital asset of the human race would be deployed, so no cost estimates can be made.
If you are inclined to subscribe to the U.N. Framework directive that mitigation of anthropogenic global warming should be effected at the “lowest possible cost”—whether or not you believe that Earth is indeed warming significantly beyond natural rates, that human activities are largely responsible for such warming, or that problems likely to have significant impacts only a century hence should be addressed with current technological means rather than deferred for more advanced ones—then you will necessarily prefer active technical management of radiative forcing to administrative management of greenhouse gas inputs to Earth’s atmosphere. Indeed, if credit is properly taken for improved agricultural productivity resulting from increased CO2 and decreased solar UV fluxes—and if human dermatological health benefits are properly accounted for—we expect that the net economic cost of radiative forcing management would be negative, perhaps amounting to several hundred billion dollars each year worldwide (Plate 8). The spectacular sunrises and sunsets and bluer skies would be noneconomic benefits.
Active technical management of radiative forcing would entail expenditures of no more than $1 billion per year, commencing about a half-century hence, even in worst-case scenarios.3 Thus we might just put a sinking fund of $1.7 billion into the bank for use in generating $1 billion per year forever, commencing a half-century hence, and proceed with business as usual. All of Earth’s plants would be much better fed with CO2 and much less exposed to solar UV radiation, kids could play in the sun without fear, and we would continue to enjoy today’s climate, bluer skies, and more beautiful sunsets until the next Ice Age commences. There is no obvious economic counterargument to this approach. Human-impacts counterarguments are even less obvious. Based on preliminary examinations to date, the externalities of active technical management, including environmental costs, seem likely to be small.
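The sinking-fund arithmetic is consistent with an assumed real rate of return of about 5 percent per year (our assumption; the text specifies only the $1.7 billion principal and the $1-billion-per-year target). A quick check:

```python
# Sinking-fund check: $1.7B compounding for 50 years, then drawn as a
# perpetuity. The 5% real rate of return is our illustrative assumption.
rate = 0.05
principal = 1.7e9
years = 50

future_value = principal * (1 + rate) ** years   # ~$19.5 billion
perpetuity = future_value * rate                 # ~$0.97 billion per year

print(f"${future_value / 1e9:.1f}B after {years} years")
print(f"${perpetuity / 1e9:.2f}B per year, in perpetuity")
```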
We therefore conclude that technical management of radiative forcing of Earth’s fluid envelopes, not administrative management of gaseous inputs to the atmosphere, is the path mandated by the pertinent provisions of the U.N. Framework Convention on Climate Change. Moreover, this appears to be true by a very large economic margin, almost $1 trillion per year worldwide, because crops could be fertilized by greater concentrations of atmospheric CO2 without climatic regrets. One of the most pressing problems facing the human race in the twenty-first century—how to nourish a population that increases by 60 percent—thereby begins to look distinctly manageable. The areas of greatest gain in land-plant productivity would largely coincide with the areas of the planet where the largest gains in human population are projected (Plate 8). With active management of the radiative forcing of the atmosphere and oceans, humankind would be able to air-fertilize its way around the basic food-production challenge of the twenty-first century.
We have put forward four independent sets of technical options for implementing active management of radiative forcing, three of which could commence operation as soon as desired. All three have been peer-reviewed in international conferences and ad hoc specialist workshops for the past five years. We therefore suggest that the U.S. government would be well advised to launch an intensive program immediately to address all of the salient issues in active technical management of radiative forcing, including well-designed subscale experiments in the atmosphere. All of these experiments would terminate naturally back onto the present climatic posture on known, relatively short time scales. Because of the obvious global impacts of any management scheme, broad international participation in this program should be invited.
Dyson, F.J., and G. Marland. 1979. Technical Fixes for the Climate Effects of CO2. USDOE report CONF-770385. Washington, D.C.: U.S. Department of Energy.
Govindasamy, B., and K. Caldeira. 2000. Geoengineering Earth’s radiation balance to mitigate CO2-induced climate change. Geophysical Research Letters 27(14): 2141–2144.
Govindasamy, B., S. Thompson, P.B. Duffy, K. Caldeira, and C. Delire. 2002. Impact of geoengineering schemes on the terrestrial biosphere. Geophysical Research Letters 29(22): 2061–2064.
Govindasamy, B., K. Caldeira, and P.B. Duffy. In press. Geoengineering Earth’s radiation balance to mitigate climate change from a quadrupling of CO2. Global and Planetary Change.
IPCC (Intergovernmental Panel on Climate Change). 1995. Climate Change 1995: Impacts, Adaptations, and Mitigation of Climate Change: Scientific Technical Analysis, R.T. Watson et al., eds. Cambridge, U.K.: Cambridge University Press.
NRC (National Research Council). 1992. Policy Implications of Global Warming: Mitigation, Adaptation, and the Science Base. Washington, D.C.: National Academy Press.
Teller, E., L. Wood, and R. Hyde. 1997. Global Warming and Ice Ages: I. Prospects for Physics-Based Modulation of Global Change. UCRL-JC-128715. Available online at http://www.llnl.gov/global-warm/
Large-Scale, Zero-Emissions Technology
JAMES A. LAKE
Idaho National Engineering and Environmental Laboratory
It is widely recognized that abundant, affordable energy supplies are critical to a healthy world economy and an improved standard of living for future generations. World energy demand is growing at about 2.7 percent per year, driven primarily by developing countries. Because world supplies of fossil fuels are reaching their predictable limits, this growth is driving up prices; in addition, concerns are growing about the effects of increasing air pollution on human health and of rising greenhouse-gas emissions on the global climate. Developed nations like the United States have a moral obligation to take the lead in the deployment of advanced, clean-energy technologies, including nuclear power, to ensure that remaining supplies of affordable fossil fuels will be available in the future.
CURRENT STATE OF NUCLEAR POWER
Four hundred thirty-nine nuclear power plants in 31 nations currently generate 16 percent of world electricity (6 percent of total world energy demand). Compared with coal-combustion plants, nuclear power plants annually avoid the worldwide emission of more than 610 million tons (Mt) of carbon (2,200 Mt of carbon dioxide). Nuclear power, with a carbon-emissions intensity (measured in kilograms of carbon per kilowatt hour of electricity) only 1/50 that of coal and 1/25 that of the best liquefied natural-gas combined-cycle electricity-generating technology, has prevented emissions in the United States alone of more than 90 Mt of sulfur dioxide (SO2) and 40 Mt of nitrogen oxides (NOx), in addition to 2.5 billion tons of carbon, over the last 25 years.
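The carbon and carbon-dioxide figures above are related by the molar-mass ratio of CO2 to C (44/12, about 3.67); a quick consistency check on the worldwide number quoted in the text:

```python
# Convert avoided carbon emissions to the equivalent mass of CO2.
C_TO_CO2 = 44.0 / 12.0            # molar masses: CO2 = 44 g/mol, C = 12 g/mol

carbon_avoided_mt = 610.0         # Mt of carbon avoided worldwide per year
co2_avoided_mt = carbon_avoided_mt * C_TO_CO2

print(f"{co2_avoided_mt:.0f} Mt CO2")   # ~2,240 Mt, matching the "2,200 Mt"
```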
Nuclear power in the world is influenced (positively and negatively) by
national politics. It is highly valued for its positive energy-security attributes and low operating costs, but it is burdened with high capital costs that inhibit growth, especially in a deregulated energy market. Public support for nuclear power is positive overall (but fragile), hinging on perceptions of safety and physical protection. Finally, nuclear power faces an uncertain social and political future because of questions about waste disposal and concerns about the proliferation of growing inventories of spent nuclear fuel. In 2002, the U.S. government took positive steps to open a national geological waste repository at Yucca Mountain.
In spite of an uncertain future, the economic and safety performance of the world nuclear fleet continues to improve, and interest is being expressed in many nations, including the United States, in expanding the use of nuclear energy for economic, energy security, and environmental reasons (Lake, 2002; Lake et al., 2002). The U.S. National Energy Policy calls for “the expansion of nuclear energy in the United States as a major component of our national energy policy” (NEPDG, 2001).
CHALLENGES TO THE GROWTH OF NUCLEAR ENERGY
To fulfill its potential of providing affordable, abundant, clean energy, nuclear energy must overcome challenges associated with sustainability; economics; safety and reliability; proliferation; and physical security.
Sustainability is defined as the ability to meet the needs of the present generation while improving the ability of future generations to meet their own needs. The 1987 World Commission on Environment and Development described sustainable development in terms of three dimensions: economic, environmental, and social.
Future sustainable nuclear energy systems must have the following characteristics:
They must have a substantial, positive impact on the quality of the environment, primarily through the displacement of polluting electricity and transportation energy sources by clean, nuclear-generated electricity and nuclear-produced hydrogen transportation fuels.
They must enable geological waste repositories to accept nuclear wastes from substantially more megawatt hours of nuclear plant operations by producing less waste and reducing the decay heat of waste from the open (so-called once-through) fuel cycle.
They must simplify the scientific basis for safe, long-term repository performance and licensing by greatly reducing the lifetime and toxicity of residual radioactive waste material committed for long-term geological disposal.
They must extend future nuclear fuel supplies to centuries and eliminate uncertainties associated with known, affordable uranium reserves by recycling used nuclear fuel to recover its residual energy.
The economic performance of nuclear power is country or region specific. The cost of nuclear electricity generation in many countries (including the United States) is the same as or lower than the cost of producing electricity from the burning of coal, and substantially lower than the cost of producing electricity from oil or natural gas. In some countries (including the United States), the high capital cost and financial risk of constructing new nuclear power plants are a significant deterrent, especially in deregulated energy markets. To be competitive in the future, nuclear energy must meet several criteria:
overall competitive life-cycle and energy-production costs through innovative advances in plant and fuel cycle efficiency, design simplification, and perhaps, plant sizes matched to market conditions
reduced economic risk through a reduction in regulatory uncertainty and the development of innovative fabrication and construction techniques
production of other products, such as hydrogen, fresh water, and other process heat applications, to open up new economic markets for nuclear energy
Safety and Reliability
Safety is the key to worldwide public acceptance of nuclear energy. The safety performance of nuclear energy has improved substantially since the Three Mile Island Unit 2 and Chernobyl accidents, and current safety performance indicators, tracked by the World Association of Nuclear Operators, are excellent. Public support for continued nuclear power operations is also strongly influenced by confidence in the regulatory process. Continuous improvement in nuclear power technology and operations is essential to the growth of nuclear power.
Future nuclear energy systems must have the following goals:
disciplined, safe, and reliable nuclear operations and deliberate, transparent regulation of nuclear operations worldwide
improved accident management and minimization of accident consequences to the public to reduce or eliminate the need for off-site emergency response
protection of the financial investment in the plant
increased use of so-called inherent or passive safety features and
transparency in the safety performance capabilities of nuclear power that can be more easily understood and evaluated by the public
Nonproliferation and Physical Security
Fissile materials in civilian nuclear power programs are very well protected by effective international safeguards overseen by the International Atomic Energy Agency. Furthermore, the very robust designs and security systems of current nuclear power plants effectively protect them against acts of terrorism. Nevertheless, future nuclear reactors and fuel-cycle systems, and future nuclear materials safeguards regimes, should be designed with even higher levels of resistance to the diversion of nuclear materials and the undeclared production of nuclear materials for nonpeaceful purposes. Finally, future nuclear energy systems must provide better physical protection against real or perceived threats of terrorism.
Future proliferation-resistant nuclear energy systems must have the following characteristics:
more intrinsic barriers and extrinsic safeguards against diversion or the production of nuclear materials for nonpeaceful purposes
better physical protection against terrorism
U.S. GOVERNMENT PROGRAMS
The U.S. Department of Energy (DOE) has three programs to address challenges to the continued use and growth of nuclear power: the Nuclear Power 2010 Program; the Generation IV Advanced Reactor Program; and the Advanced Fuel Cycle Program.
Nuclear Power 2010 Program
In February 2002, Secretary of Energy Spencer Abraham unveiled the Nuclear Power 2010 initiative in response to recommendations of the DOE Nuclear Energy Research Advisory Committee (DOE, 2002). The goal of the program is to establish a public-private partnership to build new nuclear power plants in the United States before the end of the decade. The initiative will accomplish three things: (1) explore federal and private sites that could host new nuclear power plants in the future; (2) demonstrate the efficiency and timeliness of the new 10 CFR Part 52 Nuclear Regulatory Commission licensing process, which is designed to make licensing of new plants more efficient, more effective, and more predictable; and (3) conduct research to make the safest and most efficient nuclear plant technologies available to the U.S. marketplace.
Gas-cooled, high-temperature reactors, such as the gas-turbine modular helium reactor (GT-MHR) and the pebble bed modular reactor (PBMR), are examples of advanced technologies that could be deployable sometime after 2010. Engineering teams in South Africa, Russia, France, Japan, and the United States are pursuing gas-cooled reactor system designs and technologies, and the South African utility, ESKOM, plans to build a prototype PBMR plant before the end of this decade.
The PBMR design is based on a fuel element called a “pebble,” a billiard-ball-sized graphite sphere containing 15,000 uranium oxide particles about the size of poppy seeds. The fuel particles are coated with layers of high-strength graphite and silicon carbide to retain the products of the fission process during reactor operations or accidental high-temperature excursions. About 333,000 of these pebbles are placed in a large vessel surrounded by a graphite shield to form the reactor core. Inert-helium coolant is circulated through the bed of pebbles to remove heat to the power generation system. High-temperature refractory materials throughout the core enable the PBMR to operate with a helium-coolant outlet temperature of 850°C, substantially higher than conventional nuclear power plants. With the heat fed directly to a gas-turbine electrical generator, the high-temperature PBMR can produce electricity with thermal efficiencies that exceed 40 percent.
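As a plausibility check on the claim that thermal efficiencies exceed 40 percent, one can compare against the Carnot limit set by the 850°C helium outlet temperature (the ~30°C heat-sink temperature below is our illustrative assumption, not a figure from the text):

```python
# Carnot upper bound for a heat engine between the PBMR helium outlet
# temperature and an assumed ambient heat sink.
T_hot_k = 850.0 + 273.15    # helium outlet temperature, kelvin
T_cold_k = 30.0 + 273.15    # assumed heat-sink temperature, kelvin

carnot_limit = 1.0 - T_cold_k / T_hot_k
print(f"Carnot limit: {carnot_limit:.0%}")   # ~73%

# A real recuperated helium Brayton cycle captures only a fraction of this
# bound, which is why ">40 percent" is plausible where water-cooled plants
# (steam at much lower temperatures) typically reach ~33 percent.
```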
PBMR technology will have several attractive features. First, with its high-temperature refractory core materials, the large thermal capacity of the graphite in the system, and inert-helium coolant, the PBMR can survive a complete loss-of-helium-coolant accident without the fuel melting or the loss of core integrity that could release the contained fission products. This passive safety capability could greatly increase public acceptance of nuclear power. Second, because of the comparatively small size of the PBMR, it can be produced by factory fabrication. Third, because it requires substantially fewer plant operating and safety systems (about a dozen compared with more than 200 in a water-cooled reactor plant) and because its power output is only 120 to 140 megawatts (compared to 1,000 to 1,400 megawatts for large, water-cooled reactors), PBMR could significantly reduce the plant capital cost and may be better suited to meeting increased power demands in deregulated markets. Finally, the helium temperature of 850°C enables the thermochemical or thermoelectrical production of hydrogen from water, which could expand the market for nuclear energy to include transportation fuels.
Generation IV Advanced Reactor Program
The Generation IV International Forum (GIF) was founded in 2000 to facilitate international cooperation in the design, development, and deployment of next-generation advanced nuclear energy and fuel-cycle systems. The goal was to identify systems that could be licensed,
constructed, and operated in world markets in a way that would provide competitively priced, reliable, secure energy products. At the same time, GIF wanted to identify opportunities to improve reactor safety and waste management, ease concerns about proliferation, and improve physical protection. The 10 member countries of GIF (Argentina, Brazil, Canada, France, Japan, the Republic of Korea, South Africa, Switzerland, the United Kingdom, and the United States) produced a comprehensive Generation IV technology road map to accomplish this goal. The road map describes the requirements for constructing one or more demonstration Generation IV advanced reactor systems for deployment in the world market by 2030. The road map was completed in 2002 and published at www.inel.gov. Currently, a broad spectrum of advanced-reactor concepts is being considered, including high-temperature, gas-cooled reactors; liquid-metal-cooled reactors using liquid sodium or lead alloy; and water-cooled reactors that use supercritical water. Special consideration is being given to fast-neutron-spectrum systems and a closed fuel cycle that would enable the effective management and “burn up” of plutonium and other long-lived materials. In addition, the goal is to produce systems that provide efficient conversion of fertile uranium to fissile fuel, thus providing a sustainable fuel cycle for the future.
Advanced Fuel Cycle Program
One of the most important issues facing nuclear energy is the disposal of nuclear wastes and spent nuclear fuel. Since the late 1970s, the policy of the United States has been to dispose of these materials geologically and not to process or recycle the remaining fuel constituents in spent nuclear fuel. Although this policy is technically feasible, and the U.S. government is making progress in the development and licensing of a geological repository at Yucca Mountain, Nevada, this approach has turned out to be both scientifically difficult and enormously expensive. In addition, the social and political acceptability of direct geological disposal is problematic.
Several countries, notably France, the United Kingdom, and soon Japan, have taken a different approach that involves the treatment, recycling, and transmutation of spent fuel to reduce the quantity, toxicity, and lifetime of wastes that require geological disposal and to improve energy security by extracting substantially more of the energy content of spent fuel materials. The advantages of this approach are: (1) it can reduce the cost and improve the safety of the geological repository; and (2) it can reduce inventories of plutonium in spent fuel.
The U.S. National Energy Policy directs that “in the context of developing advanced nuclear fuel cycles and next generation technologies for nuclear energy, the United States should reexamine its policies to allow for research, development and deployment of fuel conditioning methods (such as pyroprocessing)
that reduce waste streams and enhance proliferation resistance” (NEPDG, 2001). The research program would have the following goals:
to reduce the volume of high-level nuclear waste, principally through the extraction of uranium, which constitutes 96 percent of spent fuel
to reduce the cost of geological disposal of waste residues, principally by the optimum use of the repository to store smaller volumes of separated, shorter lived wastes and by eliminating the need for a second repository
to reduce the national security risks associated with growing inventories of civilian plutonium by recycling this material and burning it in advanced reactors
to reduce the toxicity and lifetime of high-level nuclear waste by removing long-lived, highly toxic plutonium and other actinides from the waste streams and burning or transmuting this material in advanced reactors so that the residual fission-product waste materials in the repository will decay to the level of natural uranium in less than 1,000 years
The Advanced Fuel Cycle Program, which will be initiated in 2003, will develop and demonstrate proliferation-resistant spent-fuel treatment and transmutation technologies to enable the government to make an informed decision about future fuel-cycle policy and deployment alternatives in five to six years.
NEW MISSIONS FOR NUCLEAR ENERGY
Nuclear energy currently supplies 16 percent of worldwide electricity generation. This share could increase considerably, especially because electricity demand is growing faster than total energy demand. However, electricity represents only about one-third of total energy demand. Transportation fuel, which accounts for another third, is dominated by oil, a substantial fraction of which is imported from politically unstable parts of the world. Furthermore, oil is a rapidly depleting resource and, consequently, represents our most immediate energy security challenge.
In January 2002, Energy Secretary Spencer Abraham announced a new public-private partnership called FreedomCAR to develop and deploy hydrogen as a primary fuel for fuel-cell-powered cars and trucks as part of the U.S. effort to reduce its dependence on foreign oil. Currently, hydrogen is produced by the steam reforming of natural gas, with attendant greenhouse-gas emissions. Although this process produces fewer emissions than the direct combustion of oil, it substitutes one depleting fossil fuel for another and, at best, is an interim solution to what is expected to be an enormous market for hydrogen in the future. Therefore, a truly large-scale, zero-emissions hydrogen-production technology is critical to meeting the goal of a zero-emissions transportation fuel that meets our energy security needs. The preferred source of hydrogen fuel is water.
High-temperature nuclear energy, such as energy from a gas-cooled reactor, represents a unique, high-efficiency, zero-emissions capability for manufacturing hydrogen from water. Although it is possible to produce hydrogen by standard electrolysis using nuclear-generated electricity, the low overall efficiency of that route (perhaps 25 percent) makes it unlikely to be economical except for distributed, point-of-use hydrogen generation. Current research is focused on two primary high-efficiency alternatives: high-temperature steam electrolysis and thermochemical cycles.
High-temperature steam electrolysis uses a combination of heat from a high-temperature reactor to produce steam at 700 to 800°C and electricity from the same reactor to electrolyze the water in a high-temperature solid-oxide fuel cell at 50 to 55 percent net efficiency. With this system, the operating utility could also sell electricity during peak price periods and produce hydrogen during lower price periods, either for direct sale or for storage for the subsequent generation of electricity from fuel cells during peak demand.
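The co-generation strategy described above can be sketched as a simple hour-by-hour dispatch rule. All of the numbers below (reactor rating, thermal-to-electric efficiency, price threshold) are illustrative assumptions; only the 50 to 55 percent net hydrogen efficiency range comes from the text.

```python
REACTOR_MWTH = 600.0        # assumed reactor thermal rating
ELEC_EFFICIENCY = 0.45      # assumed thermal-to-electric efficiency
H2_NET_EFFICIENCY = 0.53    # within the 50-55% net range cited above
H2_LHV_MJ_PER_KG = 120.0    # lower heating value of hydrogen

def dispatch(hourly_prices_usd_per_mwh, threshold_usd_per_mwh):
    """Return (MWh of electricity sold, kg of hydrogen made) over the series."""
    mwh_sold, h2_kg = 0.0, 0.0
    for price in hourly_prices_usd_per_mwh:
        if price >= threshold_usd_per_mwh:
            # high price: generate and sell electricity for this hour
            mwh_sold += REACTOR_MWTH * ELEC_EFFICIENCY
        else:
            # low price: divert the hour's thermal output to electrolysis
            thermal_mj = REACTOR_MWTH * 3600.0   # 1 MWth-hour = 3600 MJ
            h2_kg += thermal_mj * H2_NET_EFFICIENCY / H2_LHV_MJ_PER_KG
    return mwh_sold, h2_kg
```

With an assumed $50/MWh threshold, a day of fluctuating prices splits naturally into hours of electricity sales and hours of hydrogen production, which is the arbitrage the text describes.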
Several thermochemical cycles (and some hybrid thermochemical/thermoelectrical cycles) for splitting water are under development. A leading thermochemical candidate is the iodine-sulfur process, which uses high-quality heat from a high-temperature reactor at 800 to 1,000°C to drive an iodine-catalyzed dissociation of sulfuric acid. This reaction can produce hydrogen at efficiencies exceeding 60 percent, but the use of highly caustic and corrosive chemicals at high temperature and pressure will require materials research and may present some difficult safety issues. The capacity of a relatively small-scale hydrogen-production facility using a 600 MWth gas-cooled reactor and thermochemical water splitting is about 7,500 kg/hour (sufficient to power 175,000 hydrogen-fueled vehicles).
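A back-of-the-envelope check shows the ~7,500 kg/hour figure is plausible. The ~50 percent net plant-level efficiency and the use of hydrogen's higher heating value here are assumptions made to test the arithmetic, not values stated in the text.

```python
THERMAL_POWER_MW = 600.0              # gas-cooled reactor rating from the text
NET_THERMAL_TO_H2_EFFICIENCY = 0.50   # assumed net plant-level efficiency
H2_HHV_MJ_PER_KG = 141.8              # higher heating value of hydrogen

mj_into_h2_per_hour = THERMAL_POWER_MW * 3600.0 * NET_THERMAL_TO_H2_EFFICIENCY
kg_h2_per_hour = mj_into_h2_per_hour / H2_HHV_MJ_PER_KG
# comes out near 7,600 kg/hour, within a few percent of the cited 7,500
```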
Nuclear power today delivers excellent economic, operating, and safety performance. Nuclear power can respond to the challenges associated with rising world energy demand, diminishing fossil energy resources, and growing concerns about environmental quality and emissions of greenhouse gases. The current U.S. nuclear energy production of 100 GWe results in the avoidance of about 640 Mt of CO2 emissions per year (175 Mt of carbon per year) compared with the combustion of coal.
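The avoided-emissions figure for 100 GWe displacing coal can be roughly reconstructed. The capacity factor and coal CO2 intensity below are illustrative assumptions, not values from the text.

```python
NUCLEAR_CAPACITY_KW = 100e6      # 100 GWe expressed in kW
CAPACITY_FACTOR = 0.90           # assumed fleet-average capacity factor
COAL_CO2_KG_PER_KWH = 0.80       # assumed coal-plant emissions intensity
HOURS_PER_YEAR = 8760.0

kwh_per_year = NUCLEAR_CAPACITY_KW * CAPACITY_FACTOR * HOURS_PER_YEAR
avoided_mt_co2 = kwh_per_year * COAL_CO2_KG_PER_KWH / 1e9   # kg -> Mt
# roughly 630 Mt CO2/year, consistent with avoiding ~640 Mt of CO2 annually
```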
With ongoing license extensions, the current fleet of 103 U.S. nuclear power plants will continue to contribute at this level of performance until about 2030, after which output will decline as older plants are retired. The U.S. nuclear industry road map, Vision 2020, contemplates the construction of 50 new 1,000-MWe plants by 2020 and an overall increase of 10 percent in output from existing plants to achieve 160 GWe of U.S. electrical generation in 2020 (this will be necessary just to keep the overall share of nuclear-plus-hydroelectric emissions-free capacity near 30 percent of projected U.S. electricity demand in 2020) (NEI, 2002). If this can be accomplished, the contribution to greenhouse-gas emissions reduction by U.S. nuclear power will increase to more than 1 billion tons of CO2 per year.
Generation IV nuclear energy systems will come into the marketplace between 2020 and 2030, leading to substantially faster growth in nuclear capacity. With a goal of nuclear power supplying 50 percent of projected U.S. electrical generating capacity, plus nuclear-generated hydrogen displacing 25 percent of oil for transportation fuel by 2050, the required nuclear capacity could be as high as 700 GWe. At that level, nuclear energy could account for the avoidance of more than 4.5 billion tons of CO2 per year, compared with energy from coal.
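The capacity-level emissions figures quoted above follow from linear scaling of the ~640 Mt of CO2 avoided per 100 GWe of nuclear power displacing coal:

```python
MT_CO2_PER_GWE = 640.0 / 100.0   # baseline: ~640 Mt CO2 avoided per 100 GWe

avoided_2020_mt = 160 * MT_CO2_PER_GWE   # Vision 2020 target of 160 GWe
avoided_2050_mt = 700 * MT_CO2_PER_GWE   # 2050 scenario of 700 GWe
# about 1,020 Mt and 4,480 Mt: roughly the "more than 1 billion tons"
# and "4.5 billion tons" figures cited in the text
```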
Whether this growth in nuclear energy can be achieved with Generation IV technology in a world with substantially higher energy demand and depleted fossil-fuel resources is a matter for debate. However, as the preceding examples illustrate, nuclear energy, as a major source of electrical energy and a growing source of hydrogen transportation fuel, can have a significant impact on future greenhouse-gas emissions.
REFERENCES

DOE (U.S. Department of Energy). 2002. A Roadmap to Deploy New Nuclear Power Plants in the United States by 2010. Washington, D.C.: U.S. Department of Energy. Available online at www.inel.gov.
Lake, J.A. 2002. The fourth generation of nuclear power. Progress in Nuclear Energy 40(3-4): 301–307.
Lake, J.A., R.G. Bennett, and J.F. Kotek. 2002. Next-generation nuclear power. Scientific American 286(1): 72–81.
NEI (Nuclear Energy Institute). 2002. Vision 2020: Powering Tomorrow with Clean Nuclear Energy. Washington, D.C.: Nuclear Energy Institute. Available online at http://gen-iv.ne.doe.gov.
NEPDG (National Energy Policy Development Group). 2001. National Energy Policy. Washington, D.C.: Government Printing Office.