Benefits of Technological Literacy
The argument for technological literacy is rooted in a single, fundamental belief: in a world permeated by technology, an individual can function more effectively if he or she is familiar with and has a basic understanding of technology. A higher level of technological literacy in the United States would have a number of benefits, both for individuals and for society as a whole.
Improving Decision Making
Technological literacy prepares individuals to make well-informed choices in their role as consumers. The world is full of products and services that promise to make people’s lives easier, more enjoyable, more efficient, or healthier, and more and more of these products appear every year. A technologically literate person cannot know how each new technology works, its advantages and disadvantages, how to operate it, and so on, but he or she can learn enough about a product to put it to good use or to choose not to use it.
Americans are not only consumers; they are also workers, members of families and communities, and citizens of a large, complex democracy. In all of these spheres, they face personal decisions that involve the development or use of technology. Is a local referendum on issuing bonds for the construction of a new power plant a wise use of taxpayer dollars? Does a plan to locate a new waste incinerator within several miles of one’s home pose serious health risks, as opponents of the initiative may claim? How should one react to efforts by local government to place surveillance cameras in high-crime areas of the city? Technologically literate people
will be much better able to address these and many other technology-related questions.
Decision making is not only personal. Leaders in a variety of sectors, including business, government, and the media, make decisions daily that affect what others—sometimes thousands or even millions of people—think and do. These individuals in particular will benefit from a considerable understanding of the nature of technology, and an awareness that all technologies involve trade-offs and may result in unintended consequences. With a higher level of technological literacy in the nation, people in positions of power will be more likely to manage technological developments in a way that maximizes the benefits to humankind and minimizes the negative impacts. Of course, there is no hard-and-fast line between purely personal concerns and business interests, the needs of states, and the needs of the nation. In most cases the personal interests of everyday Americans do influence decisions by policy makers and company CEOs.
Some concrete examples can illustrate the importance of technological literacy to decision making at all levels. The next three sections present descriptions of current issues that require decision making of some sort. The first is the use of car air bags and relates mostly to the concerns of individual citizens. The second addresses genetically modified foods, an issue relevant to individuals, who must decide which foods to buy at the grocery store; policy makers, who must take into account regulatory, trade, and other considerations; and the biotechnology industry and farmers, the two groups most responsible for creating and selling such products. The third example is the California energy crisis, which has put pressure on individuals, businesses, and political leaders to develop short-term and long-term solutions.
All three examples have a central technological component, which may be part of the problem, part of a solution, or both. The technological component cannot be separated from political, legal, social, and other concerns. A box at the end of each example shows how the three dimensions of technological literacy—knowledge, capabilities, and ways of thinking and acting—might come into play in each case.
On or Off? Deciding About Your Car Air Bag
By now, almost everyone knows that car air bags can cause injury or even death, as well as offer protection. Most car owners are aware of
recommendations by safety experts that young children be placed in the back seat and that a distance of at least 10 inches be maintained between the driver and the steering wheel to minimize the chances of air bag-induced injury. Some people feel that air bags are not worth the risk and would like to shut them off, or at least have the option to do so. An on-off switch can be installed, but it requires permission from the National Highway Traffic Safety Administration (NHTSA) and costs several hundred dollars.
The decision to disable your air bag has potentially serious consequences. To make the best choice, the decision maker should know something about how air bags work, how well they protect, and in what situations.
All air bag systems operate in basically the same way. Onboard sensing devices measure crash impact. Once activated, the crash sensors signal solid-propellant inflators to begin the chemical reaction that generates nitrogen gas that fills the air bag. The gas inflates a folded nylon bag, which acts as a protective cushion between the occupant and the inside of the car. As the person collides with the air bag, vents in the bag allow the gas to escape, absorbing energy and reducing the severity of impact. Ideally, occupants collide with the bag just as it becomes fully inflated. But if the bag strikes the occupant while it is still inflating, it can cause serious injury or death because the bags travel at speeds of more than 100 mph.
Studies show that air bags are about 13 percent effective in saving the lives of drivers not wearing a lap-shoulder seat belt (NHTSA, 1996). That is, if 100 fatally injured drivers in cars without air bags had been driving cars with air bags, 13 of them would have survived. By comparison, seat belts are approximately 42 percent effective in preventing driver fatalities, compared to situations in which no seat belts are worn. The combined effectiveness, for drivers, of seat belts and air bags is 47 percent. This means that, overall, air bags reduce the risk of death for drivers wearing seat belts by 9 percent ([58 – 53]/58).
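The arithmetic behind that 9 percent figure can be made explicit. The sketch below uses only the effectiveness percentages cited above (NHTSA, 1996); the baseline of 100 fatalities is a round number for illustration.

```python
# Incremental benefit of air bags for drivers who already wear seat belts,
# using the NHTSA (1996) effectiveness figures cited in the text.
baseline_fatalities = 100            # fatally injured drivers with no belt, no air bag
belt_effectiveness = 0.42            # seat belts alone
belt_bag_effectiveness = 0.47        # seat belts plus air bags

deaths_belt_only = baseline_fatalities * (1 - belt_effectiveness)         # 58 deaths
deaths_belt_and_bag = baseline_fatalities * (1 - belt_bag_effectiveness)  # 53 deaths

# Additional risk reduction air bags offer a belted driver: (58 - 53) / 58
incremental = (deaths_belt_only - deaths_belt_and_bag) / deaths_belt_only
print(round(incremental * 100))  # → 9 (percent)
```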
As it turns out, the government vastly overestimated the effectiveness of air bags, claiming in the late 1970s that they would save 12,000 lives annually (Federal Register, 1977). The actual record is not nearly as impressive. From 1986 through April 2001, fewer than 7,000 lives had been saved by air bags. An estimated 246 people (including 61 unconfirmed air bag-related fatalities), mostly drivers and children, had been
killed by air bags during the same period (GAO, 2001a). By comparison, about 11,000 lives are saved every year by seat belts.
The benefits of air bags depend on many factors. One of the most important factors is the weight and, especially, the height of the occupants. Because those two parameters are closely linked to gender, the effectiveness of air bags differs greatly for men and women. For example, nearly three-quarters of the drivers killed by air bags were women. In one study, air bags used in conjunction with seat belts reduced total harm (a mix of fatalities and injuries) among male drivers by 11 percent but increased the harm to female drivers wearing seat belts by 9 percent (Dalmotas et al., 1996). For people of small stature (shorter than 5 feet, 3 inches tall), air bags increased total harm. The data also show that age makes a difference. Drivers between the ages of 15 and 50 wearing seat belts were better protected with air bags. However, no clear evidence showed added protection for belted drivers over the age of 50.
A number of factors besides air bags affect the safety of vehicle occupants. Consider the 9 percent figure, which represents the additional lifesaving potential of air bags for belted drivers. A belted driver could reduce his or her risk of dying in a crash by the same amount by driving a car 200 pounds heavier (Evans, 1991). The same 9 percent reduction in driver fatalities could be achieved across the nation by lowering average driving speeds on U.S. roads by 2 mph.
Recently, the technological landscape for air bags has begun to change. New NHTSA regulations require that automakers design and install more advanced air bag systems for model 2004 vehicles. The new devices are meant to meet the safety needs of drivers and passengers of different sizes, weights, and seating positions. The rules have stimulated millions of dollars of research on occupant classification sensors, seat belt usage sensors, multistage inflators that can fill air bags at varying rates, and less aggressive air bag designs (GAO, 2001a).
Return now to the original decision—whether or not to install an on-off switch. The decision will depend on many factors related not only to the personal characteristics of the people who will use the vehicle— drivers and passengers—but also to the type and age of the vehicle itself. To make an intelligent choice, the individual will have to draw on all three dimensions of technological literacy (Box 2-1).
BOX 2-1 The Technologically Literate Citizen and Air Bags
Waiter, There’s a GMO in My Soup
In fall 2000, American consumers were informed that a type of genetically modified corn approved for use in animal feed had somehow made its way into grocery stores as an ingredient in taco shells manufactured by Kraft Foods. There were concerns that a bacterial protein inserted into the corn’s genetic makeup to protect growing plants from the European corn borer could trigger an allergic reaction in some people. Kraft recalled millions of its taco shells in response (Washington Post, September 18, 2000).
Groups opposed to genetically modified foods cited the episode as evidence that the risks had not been taken seriously enough. The biotechnology industry downplayed the importance of the mix-up, pointing out that the same protein is present in other types of corn grown for human consumption—including organically grown corn—and noting that
the amount of modified corn was so small that it was unlikely to cause any noticeable health effects. The media and the public were left to flounder in a sea of conflicting opinions and interpretations.
In early 2001, batches of seed corn grown by U.S. farmers and slated for sale overseas were found to contain small amounts of the same genetically modified version discovered in taco shells (Washington Post, March 1, 2001). Because European and Asian opposition to genetically modified organisms (GMOs) is very strong (Laget and Cantley, 2001), billions of dollars of U.S. exports were threatened. The U.S. government ended up buying back millions of dollars worth of seed stock that had been mixed with the genetically modified version, called StarLink.
In late July 2001, a scientific advisory panel to the Environmental Protection Agency (EPA) concluded there was not enough evidence to prove that the modified corn does not pose an allergic risk to people. Based on the panel’s finding, the agency decided to maintain its policy of banning even trace amounts of the modified corn in foods (Washington Post, July 28, 2001).
Because of fears of adverse health effects, the European Union (EU) had already effectively banned the importation of most biotech-derived foods in 1998, causing sales of exported U.S. corn to plunge from about $300 million annually in the mid-1990s to less than $10 million in recent years (GAO, 2001b). The EU accounts for only about 5 percent of the market for this U.S. crop, but other, larger markets in Asia and Latin America have also taken steps, such as requiring labeling of genetically modified food products, that are expected to decrease the size of the export market for American farmers.
Perhaps no technology better illustrates the current mismatch between the adoption of a new technology and society’s ability to deal with it. In the past 10 years, the idea of taking genes from one organism and transferring them into another has gone from a laboratory demonstration to a commercial reality. In 1999, U.S. farmers planted some 70 million acres of genetically engineered crops, including 36 percent of all corn, 55 percent of soybeans, and 43 percent of cotton. Most of those crops were modified either to produce a substance, often a protein, that defends them against insect pests—as was the case for the corn that ended up in the tacos—or else to be resistant to herbicides that are sprayed on the fields to control weeds (New York Times, March 14, 2000).
In the next decade, we could see explosive growth in the agricultural uses of genetic engineering. Researchers are constantly improving
techniques for putting new genes into organisms, and scientists can now map out entire genomes—that is, the entire genetic makeup of organisms—quickly and at relatively low cost. This will have two effects. First, it will improve our understanding of the genetics of crops and farm animals. Second, it will provide a multitude of new genes to work with.
In the United States, the genetically engineered changes benefit both farmers and the environment. In the case of StarLink, for example, farmers growing the modified corn can use less chemical pesticide, thus cutting their production costs and, at the same time, reducing harmful pesticide-laden runoff. In developing countries, however, the benefits could be even greater. Genetic enhancements could mean the difference between starvation and survival for large numbers of people and between dependency on foreign imports and agricultural self-sufficiency for entire nations.
Some gene splicing dramatically improves the health benefits of foods. In Switzerland, for instance, a German scientist, Ingo Potrykus, has engineered a new type of rice that produces generous amounts of beta carotene, which the human body turns into Vitamin A. If widely adopted, this so-called golden rice could prevent 1 to 2 million deaths and 500,000 cases of blindness each year among children who survive almost completely on rice for months at a time and suffer from Vitamin A deficiency. Healthier foods of this sort could enhance diets and improve health around the world, in both developed and developing countries.
Today, we find ourselves with a volatile combination of rapidly growing biotech capabilities and a public that is not prepared to understand or assess those capabilities. In Europe, the mismatch has led to a nearly complete ban on genetically modified foodstuffs. Ingo Potrykus’s plan to distribute his beta-carotene rice to poor farmers around the world is threatened by an effort in Switzerland to pass legislation forbidding the export of GMOs.
The development and use of GMOs raise a number of questions, not only for consumers but also for farmers and policy makers. Which foods are safe to eat? Which crops should be grown, under what conditions, and to whom can they be sold? How should products containing GMOs be labeled? It is impossible to know whether a technologically literate population would reject GMOs, embrace them, or find a middle ground, accepting foods that provided significant improvements, such as the beta-carotene rice, but rejecting foods that simply lowered the cost of production by a few percentage points. Whatever the outcome, the decision should be made by people with a basic understanding of technology and an ability to weigh risks and benefits (Box 2-2).
BOX 2-2 The Technologically Literate Citizen and GMOs
Turning the Lights Out: The California Energy Crisis
In January 2001, California was facing an energy crisis. Demand for electric power had grown to the point that the state’s two major utilities, Pacific Gas & Electric and Southern California Edison, were having difficulty meeting the need. On days of particularly high demand, they instituted rolling blackouts, turning off electricity to first one area, then another. In addition, the utilities were losing money so rapidly that both were predicting bankruptcy.
How did California, which would have the world’s sixth largest economy if it were a country, get into this predicament? The answer is complex. At least part of the explanation is the failure of state officials to understand—or perhaps their decision to ignore—basic facts about how the electric power industry works. The state also appears to have miscalculated when it deregulated the electric power industry. In addition, uncontrollable factors, such as the pace of economic growth in California, and drought and colder than average temperatures in the Northwest, conspired to put further pressure on the system.
Commercial electricity is generated in plants large enough to provide energy for tens of thousands of homes. All electricity, whether generated from hydroelectric dams, solar collectors, wind turbines, or plants that consume coal, oil, natural gas, or nuclear fuel, is fed into a network of transmission wires—the “grid”—which delivers the power where it is needed. Operators keep track of demand on an hourly basis, making sure that enough power is being fed into the system. If demand outstrips supply, the operators attempt to find extra power from outside plants attached to the grid. If they cannot, they shut off power to some customers to prevent the entire system from failing.
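The hour-by-hour balancing described above can be sketched as a simple rule: cover any shortfall with power from outside plants attached to the grid, and shed load (a rolling blackout) only for whatever remains. The function and the numbers below are hypothetical illustrations, not actual grid data.

```python
def balance_hour(in_state_supply_mw, demand_mw, imports_available_mw):
    """Return (imports_used, load_shed) in megawatts for one hour.

    A toy version of the grid operator's decision described in the text:
    meet demand from in-state plants first, then from outside plants on
    the grid, and shut off customers only as a last resort.
    """
    shortfall = max(0, demand_mw - in_state_supply_mw)
    imports_used = min(shortfall, imports_available_mw)
    load_shed = shortfall - imports_used  # covered by rolling blackouts
    return imports_used, load_shed

# A hypothetical peak hour: demand exceeds in-state supply plus imports,
# so 2,000 MW is imported and 1,500 MW of load must be shed.
print(balance_hour(42_000, 45_500, 2_000))  # → (2000, 1500)
```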
Once it enters the grid, there is no distinction between electricity generated by, say, a natural gas plant outside Sacramento and a nuclear plant near San Diego. In short, electricity becomes a commodity that can be bought and sold by the kilowatt-hour. Because of this, a company like Southern California Edison does not have to generate exactly enough power for its customers. If it needs more, the company can buy extra power from another producer; if it has extra power, it can sell it.
For decades the electric power industry has been closely regulated by the states. Each utility was required to have enough generating capacity to serve its customers. In turn, the state set rates for electricity that guaranteed the utilities a reasonable return on their investment. Although this was a safe arrangement for the utilities, some critics argued that regulation removed much of the utilities’ incentive to produce power at the lowest possible cost.
In response to these arguments, the state of California decided in 1996 to deregulate its electric utilities. According to the plan devised by legislators, the two major utilities would sell off much of their generating capacity and buy electric power wholesale from whatever companies would provide it to them at the lowest cost (New York Times, January 2, 2001). The idea was that competition would drive prices down, and the utilities would be able to purchase power at a lower cost than the cost of producing it. At a second, later stage, the retail market would be deregulated, allowing consumers to benefit from the lower costs of electricity production.
As events would prove, the plan had at least two major flaws.
First, it did not pay enough attention to the building of new generating plants. In the early 1990s, California had an excess of electrical generating capacity, and its economy was growing slowly enough that new plants did not seem to be a priority (New York Times, January 11, 2001). Pacific Gas & Electric and Southern California Edison had always provided enough electricity, and the lawmakers who wrote the bill assumed that, with deregulation, other companies would build whatever plants were necessary (New York Times, January 5, 2001).
But they had not counted on the hurdles these companies would face. California’s environmental laws are among the nation’s toughest, so building new plants is more difficult there than in many other states (New York Times, January 10, 11, 2001). Those difficulties, combined with uncertainties about how the deregulated industry would work, made companies cautious about committing to new plants. And those that did commit found that the approval process was slowed both by the state agencies that approve new plants and by local activist groups that did not want generating plants built in their backyards (New York Times, January 5, 11, 12, 2001). As a result, in the 3 years after the deregulation law passed, California added only 2 percent to its generating capacity (New York Times, January 11, 2001).
Meanwhile, the California economy grew rapidly, twice the national average in the late 1990s, and demand for electricity grew apace (New York Times, January 11, 2001). By summer 2000, demand had caught up with supply, and on hot days during peak hours, demand exceeded maximum generating capacity. The utilities were forced to buy electricity from outside the state, but other Western states had little to spare, and the scarcity drove prices up sharply. The California utilities, which had been accustomed to paying about $60 to $70 per megawatt-hour, suddenly found themselves paying as much as $750, the federally mandated maximum. Later, when the cap was removed, they were forced to pay spot prices as high as $1,400 per megawatt-hour.
This increased cost could not be passed on to consumers, however, which was a second major flaw in the deregulation plan. According to the 1996 law, retail prices of electricity were not scheduled to be deregulated until March 2002; until then, the utilities could charge no more than $65 per megawatt-hour (New York Times, January 4, 2001). As a result, Pacific Gas & Electric and Southern California Edison found themselves paying out several times as much to buy power as they took in for selling it; by January 2001, they had lost a combined $12 billion.
Unable to pay their bills and unable to find creditors willing to lend them the billions they needed to keep going, both utilities warned they might go bankrupt by February.
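The squeeze on the utilities is easy to quantify. The sketch below uses the prices cited above (the $65 per megawatt-hour retail cap from the 1996 law and the $750 per megawatt-hour wholesale price at the federal ceiling); the purchase volume is a hypothetical round number, not actual utility data.

```python
# Buying wholesale above the retail cap guarantees a loss on every sale.
retail_cap = 65        # $/MWh: maximum the utilities could charge (1996 law)
wholesale_spot = 750   # $/MWh: price at the federally mandated maximum
mwh_purchased = 1_000  # hypothetical volume for a single peak hour

loss = (wholesale_spot - retail_cap) * mwh_purchased
print(f"${loss:,} lost on {mwh_purchased:,} MWh")  # → $685,000 lost on 1,000 MWh
```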
The price freeze also meant that consumers, who were paying an artificially low price for energy, had no incentive to use less electricity. As a result, demand continued to rise. The only exception was in the San Diego area, where San Diego Gas & Electric had sold all of its power plants and was free to raise its retail rates in response to wholesale costs. In the summer of 2000, when that utility more than doubled its rates, consumer energy use dropped by more than 5 percent in a few weeks (New York Times, January 10, 2001).
The California energy crisis illustrates the danger of taking a technology for granted and acting without thinking carefully about the factors that influence the technology in question. A more technologically literate California legislator might have insisted that planning for additional generating capacity begin before deregulation went forward. The trade-offs between increasing electricity supply and protecting the environment might also have been more prominent in the state’s debate on energy policy. More knowledgeable citizens might have made a difference, too, for instance by being more supportive of proposals for building new generating plants, agreeing to stricter conservation measures, or pushing for more investment in alternative energy sources, such as solar, wind, and thermal power. If lawmakers had believed their constituents were technologically savvy enough to understand the need for steps like these, they might have been more confident about making politically unpopular, but necessary, decisions.
Even after the crisis had begun, a more technologically literate public might have made a difference. Much of the debate over the crisis ignored the fact that the utilities had enough power except during times of peak load—the hours when demand is at or near a maximum. If consumers had been convinced to cut their usage slightly during those hours, the utilities might not have been forced to buy electricity at inflated prices.
Based on the three dimensions of technological literacy, we can suggest the kinds of understanding and competencies technologically literate Californians—legislators and citizens—might have brought to bear on the state’s energy crisis (Box 2-3). It is impossible to know, of course, whether the crisis could have been avoided if the level of technological literacy had been higher. It seems reasonable, however, that the debate over electric power in California would have been different and might have included more prominently the voices of everyday citizens.
BOX 2-3 The Technologically Literate Citizen and California’s Energy Crisis
Increasing Citizen Participation
In addition to being consumers and workers, Americans are also citizens of a democracy who have a right—indeed a responsibility—to let their voices be heard on matters that concern them. Most current political, legal, and ethical issues, from what to do about global warming to how to protect privacy in the Information Age, have a technological component. A technologically literate citizen is likely to participate in the decision making, whether by voting for a candidate or in a referendum, writing a letter to the editor of a local paper, sending an e-mail to a member of Congress, participating in a public opinion poll, speaking out at a town meeting, or supporting the work of an organized special-interest group.
In a democratic society, people must be involved in the technological decisions that affect them for two very different reasons—one practical and one philosophical. First, decisions made without public input are often eventually rejected as illegitimate and antidemocratic,
which can impede the acceptance of a technology. Second, democratic principles are based on citizen participation—at least indirect participation through elected representatives—in decisions that affect them. Few decisions today affect people more than those about the kinds of technologies that are developed and how they are used. Citizen input can be influential during the design or research and development (R&D) phase of technology. People can also affect how a technology is used once it passes into the public arena.
Public participation in discussions about the development and uses of technology is also important for another reason—it can lead to greater technological literacy. The simple act of asking and trying to answer questions about technology can lead to a better understanding not only of the technical but also of the social, economic, and political aspects of the issue at hand. What are the risks and benefits, and the trade-offs, of developing or using a technology? Who wins and who loses? What are the costs and the alternatives? Public involvement also gives policy makers a sense of their constituents’ fears and hopes, and thus an indication of the public response to a particular path of technology development, as well as to new or lesser known alternatives.
Slaying the “Green Snake” 1
The design and construction of the Boston Central Artery and Tunnel, the largest public works project under way in the United States, illustrates the power of everyday people to influence the shape and direction of technological development.
Scheduled for completion in 2004, the $12 billion-plus project, involving 160 lane miles in a 7.5-mile corridor, will bring to a close the development of a massive interstate highway network begun during the administration of President Dwight Eisenhower. The central artery portion of the project will remove the “Green Snake,” the elevated roadway that has been an enormous eyesore in the heart of downtown Boston. The Green Snake, which was built in 1959, is now clogged with almost three times as much traffic as planners originally anticipated. The Boston Central Artery and Tunnel will replace the elevated structure with an underground route that is expected to facilitate the movement of interstate highway traffic through the Boston region. The harbor tunnel portion of the project will provide a route to Logan Airport. The project also calls for a new bridge across the Charles River from Boston into Cambridge.
The project is unique in the extent and nature of public participation during the design phase and the sensitivity to environmental concerns shown by the developers. Many people believe the project could become a model for other cities throughout the world.
Critics, however, point out that the actual cost of the project has greatly exceeded the original projected figure. Unanticipated construction problems can account for much of the cost overrun, but also to blame are the enormous expenses incurred in responding to the concerns of interest groups about potential environmental, economic, and cultural impacts on Boston.
Because the federal government funds about 90 percent of the work, the project had to comply with the National Environmental Policy Act of 1969, which requires the preparation of an Environmental Impact Statement (EIS). EISs are lengthy documents that identify in detail how a project will positively and negatively affect the environment. The EIS prepared for the Central Artery and Tunnel addressed 17 categories, including transportation, air quality, noise and vibration, energy, economic characteristics, visual characteristics, historic resources, water quality, wetlands and waterways, and vegetation and wildlife.
Because the law mandated public participation in the design of the project, a draft EIS was widely circulated by managers of the project. Copies were placed in libraries; a public hearing was held; and a public comment period was provided. One hundred seventy-five people, including spokespersons for government agencies, such as EPA, and public interest groups, including the Sierra Club, testified at the hearing, and 99 individuals provided written comments.
Even before the EIS was circulated, negotiations between project management and the public, especially neighborhood, business, and environmental groups, had resulted in a number of changes, called “mitigations,” in the plan to address adverse impacts. Affluent organizations even hired their own engineers to provide detailed alternative designs for highway alignment, ramps, and locations of ventilation buildings. The citizens of East Boston called upon their congressional representatives to block funding for the project if the harbor tunnel emerged in their neighborhood. The tunnel now emerges on Logan Airport property. Overall, the
project has accommodated some 1,100 mitigations, which added an estimated $2.8 billion to the total cost.
In 1990, public attention was focused on the design for the Charles River bridge and ramps. The twenty-sixth alternative design, nicknamed Scheme Z, was announced in August 1988 but aroused little reaction, probably because three-dimensional models and easily comprehensible drawings of the design were not available. When a model of the structure was displayed a year later, the architectural critic of the Boston Globe compared the bridge and access ramps to a massive wall across the Charles River. An EPA official predicted that the structure would be the ugliest in New England.
Various citizen groups responded vociferously to Scheme Z. A newly formed organization, Citizens for a Livable Charlestown, added its voice to the complaints and hired an artist to prepare an illustration emphasizing the overwhelming size of the bridge and associated roadways. Publication of the drawing in the Charlestown Patriot caused a public uproar. Within weeks, other groups, including the Charles River Watershed Association, which has more than 1,000 members, and the New England chapter of the Sierra Club, joined the chorus of opposition. A weeklong series of articles in the Boston Globe in December 1990 stressing the potential noise, shadows, and blight of the enormous structure fanned the fires of discontent, demonstrating the effectiveness of an alliance of media and activist groups in stimulating public participation. In light of the growing opposition, the Boston City Council, by unanimous vote, declared its opposition to Scheme Z.
In January 1991, the Massachusetts secretary of transportation attempted to assuage various interest groups by establishing a Bridge Design Review Committee. The composition of the 42-member committee was based on current thinking about participatory design and conflict resolution. The committee’s deliberations were open, multidisciplinary, and consensus seeking. Members represented national environmental organizations, such as the Sierra Club; local environmental, transportation, and business groups, such as the Charles River Watershed Association and the Boston Chamber of Commerce; and organizations of professional engineers, architects, and urban planners.
Instead of revising Scheme Z, in June 1991 the committee voted unanimously to abandon it and proposed a new conceptual design for a tunnel under the Charles River to replace some of the massive bridge structure. The Federal Highway Administration and the U.S. Army Corps of Engineers, however, called for other, nontunnel alternatives. Critics warned that digging for a tunnel would not only be expensive, but would also cause serious pollution problems for the river.
The conflict was resolved when the state selected a bridge designed by the world-famous Swiss bridge engineer Christian Menn. The new design specified that two bridges be built side by side, one with 10 lanes, and one with 4. Peter Zuk, the Central Artery/Tunnel project director, proclaimed it a world-class, elegant design. Others characterized it as a signature structure, and an appropriate gateway to a great city.
The Boston project illustrates some interesting ideas about technological literacy. In this case it was primarily organizations, especially environmental organizations, not individuals, that were active, effective participants in design reviews, controversies, and the negotiation of mitigations. The public at large did not have to be knowledgeable about the technical details of highway construction and environmental impact. However, public support—financial and political—for the involved organizations was critical. In addition, the media, especially local newspapers, played a major role in informing the public and raising the level of concern.
Supporting a Modern Workforce
One of the obvious benefits of technological literacy is in the economic realm. Technology, particularly in the high-tech sector, has been driving much of the economic growth in the United States and elsewhere, and an increasing percentage of jobs require technological skills (Rausch, 1998). Although technological literacy and technical competency are not the same thing, they are related. Increasing the overall level of technological literacy would almost certainly improve the climate for technology-driven economic growth. A technologically literate population would, for example, understand that science and technology are the foundation of our economic strength and would be more likely to support the research, education, and economic policies that support that foundation. Conversely, technologically literate citizens would be less likely to support policies that would undermine the technological basis of the economy.
Improving technological literacy would also help to prepare individuals for jobs in our technology-driven economy, thus strengthening the economy. Technologically literate workers are more likely than those lacking such literacy to have a broad range of knowledge and abilities, such as the critical skills identified by the Secretary’s Commission on Achieving Necessary Skills (SCANS) (DOL, 1991).
The study of technology involves evaluating how others have successfully solved problems and provides experience in hands-on problem solving; hence, technologically literate workers are likely to be able to identify and solve problems. They are also more likely to put things in a broad context, because the study of technology emphasizes systems thinking. They are more likely to be comfortable with complex interrelationships, which are common in technological systems. And they may be able to troubleshoot problems with equipment when necessary because they have learned how to ask the necessary questions to understand why a technology works—or why it isn’t working.
Technology is everywhere in the business world. Doctors, nurses, and other medical personnel depend on a growing number of medical devices for examination, diagnosis, and treatment. Teachers are bombarded with new tools for preparing and delivering lessons, researching new teaching techniques, and enabling students to learn outside the traditional setting. Farmers use the Global Positioning System to help monitor crop yields and tailor the application of herbicides, and they must decide whether or not to plant genetically modified seeds. Self-employed workers must set up home offices and purchase and operate their own office technology. Technologically literate people will tend to be more comfortable dealing with technologies that their jobs demand and will find it easier to master new technologies as they come along.
The military is also becoming increasingly dependent on technology. The nation’s 1.4 million soldiers, airmen, sailors, and marines must be able to operate and manage technically complex weaponry, transportation systems, and communications systems (DOD, 2001). The effectiveness of U.S. fighting forces depends largely on how well they do their jobs. Their performance, in turn, depends not only on their knowledge of the specific systems but also on their problem-solving, critical-thinking, and teamwork skills. Improving the overall technological literacy of the population will make it easier for the military to find men and women who can serve effectively.
Employers in all sectors are demanding workers with a mix of factual and conceptual knowledge, critical thinking skills, and procedural knowledge. In this climate, technologically literate workers may have a competitive advantage in the job market and may be more likely to land better-paying, more interesting jobs. For similar reasons, technological literacy can help narrow the growing wage gap—and related skills gap—between salaried workers with a higher education and hourly workers without one (DOL, 1999).
At the moment, the United States does not produce enough technically skilled workers to support certain sectors of its high-tech economy. Therefore, we must depend on workers brought in from other countries (Committee on Workforce Needs in Information Technology, 2001; 21st Century Workforce Commission, 2000). A campaign for technological literacy could lessen our dependence on foreign workers by encouraging young students to pursue scientific or technical careers. Boosting the awareness of the importance of technology in the general population may increase the esteem and respect accorded to jobs in the technology sector, which would also encourage more students to pursue careers in science and engineering.
Narrowing the Digital Divide
Many commentators have noted a distressing pattern in the use of the Internet. Most of the people who have access to it, either at work or at home, and those most likely to know how to take advantage of its resources are more affluent, better educated, urban, and not members of ethnic or racial minorities. The most recent data from the federal government show that this “digital divide” has been decreasing as Internet usage among most groups of Americans continues to increase (DOC, 2000). For instance, in rural areas, 39 percent of households had access to the Internet as of August 2000, a 75 percent jump from just 20 months earlier. The gap between the percentage of rural households with Internet access and the nationwide average fell from 4 percentage points to 2.6 percentage points in 2000, a drop of 35 percent.
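These figures mix two measures that are easy to confuse: percentage-point gaps and percent changes. The arithmetic behind the numbers quoted from the DOC (2000) survey can be checked directly; the sketch below uses only the figures given in the text, and the implied earlier rural access rate is a back-calculation rather than a number the survey reported:

```python
# Check the digital-divide arithmetic cited from the DOC (2000) survey.
# All inputs are figures quoted in the text; the calculations illustrate
# the difference between a percentage-point gap and a percent change.

def percent_change(old, new):
    """Relative change, expressed as a percentage of the old value."""
    return (new - old) / old * 100

# Rural Internet access reached 39% of households in August 2000,
# described as a 75 percent jump over 20 months earlier.
earlier_rural = 39 / 1.75  # implied earlier rate (back-calculated)

# The rural-vs-national gap fell from 4 percentage points to 2.6.
# Measured as a percent change, that is a 35 percent drop.
gap_drop = percent_change(4.0, 2.6)

print(f"implied earlier rural access: {earlier_rural:.1f}%")
print(f"change in rural access gap: {gap_drop:.0f}%")
```

Note that the gap narrowed by only 1.4 percentage points, yet that is correctly described as a 35 percent drop, because the change is measured relative to the original 4-point gap.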
Blacks and Hispanics have made significant gains in Internet access. Over the 20-month period, the proportion of black households with access increased from 11.2 percent to 23.5 percent; Hispanic access rose from 12.6 percent to 23.6 percent. However, large gaps still remain for these groups when measured against the national average, and these gaps appear to be growing. The gap in Internet access between black and Hispanic households and the national average was 18 percentage points in August 2000, an increase of 3 percentage points for blacks and 4.3 percentage points for Hispanics. Large gaps in the ownership of computers between these two groups and the national average of ownership have not narrowed since the last government survey.
Access to a personal computer is the single most important factor in whether or not a person uses the Internet. Not surprisingly, people in higher socioeconomic brackets are far more likely than those in lower brackets to have personal computers at home or have access to them at work. In addition, people with higher levels of education are more likely to use the Internet, regardless of their income level.
Black students are less likely than white students to own a home computer even when household incomes are factored into the equation. Furthermore, among those without home computers, black students are less likely than white students to access the Internet outside the home—in school, libraries, or friends’ houses. As a result, many fewer black students than white students use the Internet.
A number of remedies have been suggested for closing the digital divide. Most focus on providing universal access to the Internet so that everyone can get online regardless of income level or job status. Equally important will be improving technological literacy, because the better people understand the Internet and its value, and the more comfortable they are with technology, the more likely they will be to make the effort to learn to use it.
A similar situation exists for technology in general. All technology, not just computers and the Internet, empowers those who own it and understand it and puts those who do not at a disadvantage. Thus, the nation’s poor and minorities stand to benefit greatly from being technologically literate; with that literacy, they will find it easier to overcome their lack of preparation and to participate effectively in an increasingly technological world.
If overall technological literacy is not improved, particularly among the technological have-nots, we can expect to see the growth of a “technological divide” more pervasive than today’s digital divide. Interesting, well-paying jobs that require a technological understanding and skills will go mostly to well-educated upper- and middle-class Americans and foreign nationals, while the American underclass will continue to be stuck in low-wage, low-skill jobs. On a deeper level, the needs and views of this underclass will, for the most part, not be taken into account by those responsible for developing and setting policy about technology. Thus, new technologies and new applications of existing technologies will be largely irrelevant to this group, who will fall further and further outside the mainstream.
Enhancing Social Well-being
It has become a cliché that only the young are up to date on technology, particularly in the fast-moving world of computers and the Internet. Can’t figure out how to set up your Web page? Ask a 15-year-old. Confused by e-mail? To many elementary school children it is easier to use than the U.S. mail. But behind the cliché is a basic truth. Technology is changing so rapidly that people who are not prepared to deal with it can quickly find themselves falling behind.
Losing touch in this way can leave people with a sense that they have somehow lost control of their lives, that the world is moving on without them. For much of human history, this was not a problem because changes occurred slowly enough that people had plenty of time to adapt and get used to them. But eras of rapid change—the Industrial Revolution in England, for example, or the United States in the late 1800s and early 1900s—have tested the limits of human adaptability. In times of rapid change, many people struggle to adjust to a world that is suddenly quite different from the one they have known. Even for people who can cope with the specific how-tos of modern life, living in a highly technological world can be alienating. This idea has been studied by sociologists and historians and explored in the popular media, including books, movies, and television programs.
In the next few decades, people’s abilities to adjust to new ways of doing things will be tested far more than they have ever been tested before. People in their forties and fifties already often feel as if technology is passing them by; in another generation, people in their thirties could feel the same way. The more adaptable people—those who are invigorated, or at least not threatened, by the new and the unfamiliar—will do well. But many people will find that their sense of well-being and their quality of life are diminished rather than enhanced by new and improved technologies. They will wish that the world were not moving quite as quickly toward the future.
Technological literacy can provide a tool for dealing with rapid changes. A technologically literate person will find it easier to understand and assimilate new technologies and so will be less likely to be left behind.
Equally important, technologically literate people will have a high enough comfort level with and broad comprehension of technology to put the changes in context and accept them even if they do not fully understand them. Technological literacy, along with many other types of literacy, can empower people by giving them the tools to make sense of their world, even as it changes around them.
Much would be gained, for individuals and the country as a whole, by raising the general level of technological literacy in the United States. Of course, even if technological literacy reaches a high level among a majority of Americans, it will not solve all of our problems or compensate for the shortcomings of human nature. There will never be such a panacea. But it seems equally certain that technological literacy will be an essential ingredient in realizing the benefits outlined in this chapter.
A technologically literate public will undoubtedly make some poor decisions. But many more decisions will be good ones that benefit the whole society rather than only one part of it. Participation in itself is no guarantee of sound decision making. But if participation occurs in an environment in which education about technology is common and in which taking part in technological affairs is encouraged, then it will have a positive influence.
Technological literacy in the workplace is likely to be most relevant in technology-intensive industries, such as communications, biotechnology, and aerospace. But employers in other sectors of the economy that are not involved directly in the creation of technology will also reap the benefits. They, too, need employees with basic technological competence and the ability to solve problems. Any estimate of the positive effect of technological literacy on the national economy is necessarily speculative. Still, the arguments that have been made about the importance of literacy in mathematics and science to the economic future of the country are at least as salient in the context of technological literacy.
The case for technological literacy related to the digital divide and social well-being is at heart about equity, about leaving no one behind. Technological literacy is not a sufficient condition for eliminating all inequities, but it is among the necessary conditions for improvement in a modern society.
References

Committee on Workforce Needs in Information Technology. 2001. Building a Workforce for the Information Economy. National Research Council. Washington, D.C.: National Academy Press.
Dalmotas, D.J., J. Hurley, A. German, and K. Digges. 1996. Air bag deployment crashes in Canada. Paper 96-S1O-05, 15th Enhanced Safety of Vehicles Conference, Melbourne, Australia, May 13-17, 1996.
DOC (U.S. Department of Commerce). 2000. Falling Through the Net: Toward Digital Inclusion. Available online at: <http://www.ntia.doc.gov/ntiahome/fttn00/contents00.html> (November 13, 2001).
DOD (U.S. Department of Defense). 2001. Armed Forces Strength Figures for April 2001. Available online at: <http://web1.whs.osd.mil/mmid/military/ms0.pdf> (June 26, 2001).
DOL (U.S. Department of Labor). 1991. What Work Requires of Schools: A SCANS Report for America 2000. Washington, D.C.: U.S. Department of Labor.
DOL. 1999. Futurework: Trends and Challenges for Work in the 21st Century. U.S. Department of Labor, Washington, D.C. Available online at: <http://www.dol.gov/asp/futurework/report.htm> (November 13, 2001).
Evans, L. 1991. Traffic Safety and the Driver. New York: Van Nostrand.
Federal Register. 1977. Federal Motor Vehicle Standards: Occupant Protection Systems. 42 (128): 34289–34305.
GAO (General Accounting Office). 2001a. Vehicle Safety: Technologies, Challenges, and Research and Development Expenditures for Advanced Air Bags. Report to the Chairman and Ranking Minority Members, Committee on Commerce, Science, and Transportation, U.S. Senate. June 2001. Washington, D.C.: GAO.
GAO. 2001b. International Trade: Concerns Over Biotechnology Challenge U.S. Agricultural Exports. Report to the Ranking Minority Member, Committee on Finance, U.S. Senate. GAO-01-727. June 2001. Washington, D.C.: GAO.
Hughes, T.P. 1998. Coping with complexity: Central Artery/Tunnel. Pp. 197–254 in Rescuing Prometheus. New York: Pantheon Books.
Laget, P., and M. Cantley. 2001. European responses to biotechnology: Research, regulation, and dialogue. Issues in Science and Technology. Summer 2001. Available online at: <http://www.nap.edu/issues/17.4/p_laget.htm> (December 14, 2001).
NHTSA (National Highway Traffic Safety Administration). 1996. Effectiveness of Occupant Protection Systems and Their Use. Third Report to Congress. Available online at: <http://www.nhtsa.dot.gov/people/injury/airbags/208con2e.html> (November 13, 2001).
Rausch, L.M. 1998. High-Tech Industries Drive Global Economic Activity. Available online at: <http://www.nsf.gov/sbe/srs/issuebrf/sib98319.htm> (November 13, 2001).
21st Century Workforce Commission. 2000. A Nation of Opportunity: Building America’s 21st Century Workforce. Washington, D.C.: U.S. Department of Labor.