Shortly after the end of World War II, America’s electricity use rose rapidly with the introduction of labor-saving appliances and tools in the home, the electrification of manufacturing processes and assembly lines in factories, and the increased distribution of refrigerated and frozen foods into markets. This unprecedented growth averaged almost 7 percent annually on a compound basis for two decades. Helping to fuel this growth was the lower price of electricity made possible by economies of scale achieved as new plants were built.
With the close of the 1960s and the start of the 1970s, a series of events changed the face of electric power economics and structure, and this process continues today. The 1970 National Environmental Policy Act (NEPA) and the creation of the U.S. Environmental Protection Agency (EPA) signaled that environmental considerations would be required for every decision regarding expansion, construction, and operation of electric power systems and components. In 1973 the Organization of the Petroleum Exporting Countries (OPEC) imposed an oil embargo on the United States, exposing the vulnerability of the nation's supply of transportation and boiler fuels. On the heels of the embargo, the United States experienced sharp increases in the cost of electricity driven by the higher price of fuels. As the 1980s arrived, it became far more costly to construct large baseload power plants—particularly nuclear plants—because of lengthy approval processes and, after the Three Mile Island accident, the reevaluation and redesign of nuclear safety systems.
The advent of deregulation, beginning with the Public Utility Regulatory Policies Act (PURPA) of 1978 and subsequent legislation, meant that new project-financed independent power generators would pursue least-cost options, which usually meant natural-gas-fired combined-cycle power plants.
Based on a series of studies by the White House Office of Science and Technology Policy in the early 1970s, a few developers and utilities began to look into