Getting Serious About Computer Security
David D. Clark
We Americans have been remarkably lucky. As far as we know, no one has systematically subverted our critical computing systems. Not yet.
There are signs our luck soon may run out. Thousands of computer "virus attacks" have been reported, money and information have been stolen, and lives have even been lost because of computer errors. A German computer club broke into NASA's computer. A student injected a "worm" into a nationwide computer system. Hackers have taken over TV satellite linkups. Patient information in a Michigan hospital computer was altered by a virus. A computer expert nearly defrauded the Pennsylvania Lottery of $15.2 million by pirating unclaimed computerized ticket numbers.
Some of the most serious problems have been unintentional. A year ago, for example, a software design error froze much of the country's long-distance network. Nonetheless, the nation has not yet suffered a truly catastrophic computer breakdown or security breach.
However, whether due to sabotage, poor design, insufficient quality control or an accident, the problem of computer security is very real — and growing. The advent of widespread computer networking and increasing computer literacy among the public have brought us to the point where
we must all begin taking computer security seriously or suffer the nearly inevitable consequences.
As matters now stand, far too many of the nation's computer users are like people in a small town who leave their houses and cars unlocked because they feel secure. Yet, with every day that computer systems become more prevalent and interconnected, this bucolic view becomes more outdated — and dangerous. Hackers receive the most attention, but they probably are less of a threat than disgruntled employees, terrorists and others who could wreak havoc on inadequately protected computers.
I chaired an expert committee of the National Research Council that recently studied this problem, and we identified some specific actions to improve computer security, both now and over the next decade:
Companies and organizations must implement better policies to protect electronic access, physical security, networking and the like.
Where appropriate, organizations should form computer emergency response teams to deal with security violations.
A systematic effort should be undertaken to gather data on computer crimes.
Universities should provide training in security engineering.
U.S. computer manufacturers and software designers must improve the security features of their products to remain competitive with foreign firms.
One of the most important needs is a clear articulation of what constitutes basic computer security — a set of security principles comparable in acceptance and scope to building codes or to the "Generally Accepted Accounting Principles" that guide the accounting industry. These requirements might build on criteria already established for the Defense Department, ensuring that a minimal level of security exists for all computer users. The goal would be to "raise the security floor" and provide a safer computing environment for everyone.
Raising everyone to a minimum level of security is important because often it takes only one weak link in a computer network to permit access to the entire system. The current situation is like a polluting factory in a densely populated area; one person's laxness can affect many others.
Accomplishing these goals will require sustained collective action by the group at risk: the users of computers. Companies should join to establish a private, not-for-profit organization to focus attention and provide research on security issues. No such body now exists and one is sorely needed, preferably outside the government. While the government does have a key role to play, a private organization would be best positioned to direct this effort.
These measures will take time. But there are several things everyone who uses a computer can do right now. People can choose passwords that are unpronounceable, and change them regularly, so others cannot gain access to their files. They can avoid software that has not been checked for viruses, and turn off their systems when not in use.
Most important, everyone must wake up and recognize the threat. A thief today can steal more with a computer than with a gun; a terrorist tomorrow can do more damage with a keyboard than with a bomb. Our luck, and our computer files, could change in an instant.
January 6, 1991
David Clark is a senior research scientist at the Massachusetts Institute of Technology.
* * *
Preventing Oil Spills Here at Home
Henry S. Marcus
The Persian Gulf conflict has once again focused our attention on oil spills. The terrible spills off Saudi Arabia are
vastly different in origin and size from those that have occurred near our own coastlines. Yet they should spur us to renew our efforts to prevent oil spills here at home. One of the best ways to do this is by improving the design and operation of the tankers that carry oil to our shores.
The need for this oil is growing. Energy experts expect imports of crude oil and petroleum products to increase by up to 50 percent within a decade. Most of this additional oil will arrive by sea. Less than one in every 50,000 gallons of the oil moving through U.S. waters is spilled, but that adds up to an average of 9,000 tons annually — more than enough to cause serious environmental damage.
Last year, in the wake of the Exxon Valdez disaster, Congress mandated that all new tankers traveling in U.S. waters be built with double hulls. Yet double-hulled ships alone cannot protect our coastlines from disaster.
I chaired a committee of the National Research Council that has just released a study of 17 possible tanker designs. We found that the double-hull approach has many advantages over traditional single-hull designs, especially in a low-speed collision or grounding. A severe accident such as occurred with the Exxon Valdez, however, probably would puncture both hulls and spill oil. The double-hull design also poses some potential safety problems of its own, such
as difficulty in inspecting the large void space between the hulls.
These concerns are manageable; overall, we found no design superior to double hulls in all accident scenarios. But double hulls are not a panacea; all designs perform better in some situations and worse in others. One design that deserves more consideration, for example, has double sides and an oil-tight deck across its middle to divide cargo tanks into upper and lower chambers. In theory, such a vessel would spill less oil in a severe accident — although it, too, has drawbacks.
This and other alternatives look good on paper, but implementing them raises other concerns. When automobile manufacturers consider the feasibility of safety options, they crash vehicles into a wall and do other tests to see which options work best. Yet the maritime industry has done relatively few safety experiments, largely because it has not been expected to design craft to withstand accidents. As a result, it lacks the database or even the criteria to evaluate how well a vessel will survive an accident. No one really knows how double hulls and other designs will perform. This is unacceptable.
It also is difficult to assess the costs and benefits of different options. Our committee's best estimate was that double hulls could cut by half the oil now spilled in U.S. waters as a result of collisions and groundings. They would add one or two cents to the cost of each gallon shipped, or $700 million per year when the Oil Pollution Act of 1990 is fully implemented.
More information is badly needed. But in the meantime, it is essential to consider ways of controlling pollution more effectively on existing vessels. The phase-out of single-hull tank vessels in U.S. waters will begin only in 1995 and then take 20 years to complete. Serious consideration should be given to requiring that all existing crude-oil tankers in U.S. waters promptly meet the latest International Maritime Organization provisions for pollution prevention for new tankers. Structural or operational changes also might be required of existing tankers, although the cost and effectiveness of these need to be weighed against other possible safety
measures, such as increased crew training or improved traffic systems.
Vessel design is an essential aspect of preventing oil spills, and the law mandating double-hulled tankers is a useful step. More needs to be done, however, and many questions about tanker safety remain. The Persian Gulf spills, although in a different setting, remind us of the catastrophe that could strike our own shores again if we do not become more vigilant and get some answers soon.
March 3, 1991
Henry Marcus chairs the ocean systems management program at the Massachusetts Institute of Technology.
* * *
Looking Beyond Potholes
Damian J. Kulash
The next time you lose a hubcap in a pothole or are stuck waiting for a road crew to repave the highway, think of Iraq. No, not the Iraq of Saddam Hussein, but the Iraq of nearly 4,000 years ago. The asphalt we use has its origins in ancient Iraq, or Babylonia. One reason our roads aren't better is that asphalt is still mired in too much ancient mystery.
With the winter pothole season approaching, we need to get smarter about this essential material. More than half of our nation's highways need everything from repaving to structural overhauls. The cost will be billions of dollars. Recent progress in asphalt chemistry makes it possible to build pavements that ride smoother and last years longer. We don't have to live with more potholes, more delays, and more spending.
We should look beyond our outdated technology and take
advantage of advances in materials science and diagnostic technology to steer into the 21st century.
Asphalt has been used for thousands of years. Surviving roads from Babylonian times still show good adhesion and toughness. When these roads were excavated early in this century, the asphalt mortar was so strong that it was hard to separate the bricks. The Sumerians and Assyrians also distinguished among many separate grades of asphalt, one of which was poured over the heads of convicted felons as a punishment.
Asphalt fell out of use later in history but re-emerged as a paving material in the 19th century. Since then, however, relatively little has been done to study it scientifically. Now we're paying the price for that neglect. Today's paving practices are based more on experience than on science; materials and methods are used because they worked before. Many of these tried-and-true approaches are more tried than true, unable to meet the growing demands on our highways.
Until recently, asphalt was thought of as a sticky gel binding together discrete lumps of material. But new findings show it to be more like strands of spaghetti swimming in sauce. Researchers working under a program of the National Research Council have generated a chemical model that sheds new light on the inner workings of asphalt. Using advanced chemical analysis techniques and studies of asphalt properties, they learned about a specific component that helps determine how asphalt performs.
The component, known as an amphoteric, is a combination of an acid and a base in the same molecule. The acidic and basic sites bind together and form the "spaghetti" that gives asphalt its structure. Although amphoterics usually constitute less than a quarter of the asphalt, they affect its overall quality profoundly.
This advance will allow highway engineers to specify the asphalt required for given climates and road conditions. States are already conducting tests to help them select the specific asphalt cements and aggregates they need. Refiners and manufacturers, in turn, will satisfy these requests by determining the chemical and physical properties of asphalt cement. The result will be materials that stand up better to heavy trucks, winter storms and millions of cars.
These advances could not come at a better time. The number of sources of crude oil used to make asphalt has risen to more than 100 nationwide. Since the oil from each source is chemically and physically different, engineers cannot control the final product adequately. They've been like chefs choosing from an ever-changing list of ingredients. The only way for them to avoid a lengthy and inexact process of trial and error is to understand what is happening chemically inside the asphalt and learn to pick the materials that work best for each purpose.
That's now possible, and we should begin improving our road infrastructure not just in small increments, but dramatically. Together with new insights from Europe and elsewhere about the construction of asphalt mixes, state and local highway agencies are experimenting with technologies to build more durable pavements.
Change will take time. Highway agencies and contractors need to shift specifications and tests. They also require new expertise and testing apparatus. With help from the Federal Highway Administration, they are beginning to get the training and laboratory capacity they need. But, for the sake of the nation's motorists, we need to push even harder. It's time we made our roads a lot smoother, tougher and more cost-effective.
November 24, 1991
Damian J. Kulash is executive director of the Strategic Highway Research Program of the National Research Council.
* * *
Getting Smart About 'Intelligent' Vehicles and Highways
Daniel Roos
You're late for work, so you hop in your car and immediately check your dashboard control for a detailed traffic update. You decide to take a new route, so you call up a map on your electronic console to guide you. You drive along until suddenly an alarm inside the car warns you of a truck about to veer into your fender. You steer to avoid it just in time.
Next you encounter a toll booth. No problem; you zip through it without stopping to pay. Instead, a code on your car is "read" by an electronic device. You'll receive the bill in the mail next week. Finally, as you pull into the parking lot, a beacon on your dashboard guides you to an empty spot.
If all this sounds preferable to sitting in endless traffic jams, it should. Technologies like these could not only reduce traffic delays but also cut fuel consumption, ease air pollution, lower freight costs and save lives.
A committee that I chaired for the National Research Council concluded recently that high-tech approaches to traffic management are increasingly feasible. Many of the ideas are not new, and components of them have been in use for many years. However, recently there has been a resurgence of interest in finding bold new solutions to our traffic woes. Many concepts have become practical with the spectacular evolution and declining costs of computers and telecommunications. Meanwhile it has become increasingly difficult to solve transportation problems by building new roads.
Computers have transformed our workplaces but had relatively little impact so far on our road and transit systems. By applying the tools of the Information Revolution, we might improve travel dramatically. Imagine, for example, that vehicles were equipped with sensors and adaptive cruise control technology enabling them to automatically maintain a constant distance from adjacent vehicles. This would
make it possible for vehicles to travel safely only inches apart, like railroad cars. Each lane could thus carry far more vehicles, reducing the need for new roads. Drivers might even be free to read a book or enjoy the scenery during the trip.
Some new technologies might develop as entirely private systems, with firms providing services to subscribers. For example, a traveler information system could be organized like a private cable television or cellular telephone service. Alternatively, new systems might be operated publicly, as are nearly all existing traffic management services. The best approach may be a partnership in which the government provides some components while private companies provide others.
The recently passed surface transportation law will greatly expand federal research and development of "Intelligent Vehicle and Highway Systems," or IVHS, and many states are supporting similar efforts. It is becoming possible to visualize a transportation system built upon a new kind of national "information infrastructure," just as the construction of the interstate highway system transformed travel in an earlier era. A newly created, private, non-profit organization called IVHS America is helping to coordinate and gain support for these initiatives.
A danger is that research funds intended to develop genuinely new ideas may be diverted for more cautious efforts. Public agencies have a natural tendency to stick with familiar approaches and try to maintain control over traffic management. Turf concerns are a related problem; squabbles between highway and transit officials could choke innovations that cut across current boundaries. What's needed is a national program that pursues truly creative ideas, one in which government is organized to accommodate new technologies.
It also is essential that the public and private sectors work together closely. Automobile and electronics manufacturers and others in the private sector have to join in meeting this emerging challenge. Major markets are likely to emerge both in the United States and abroad for new kinds of vehicles, electronics and communications. American companies can ill afford to fall behind their counterparts in Europe and Japan, where a great deal of activity already is under way.
Transportation of the future could look vastly different — and better — than it does today. Instead of beating our breasts about our worsening traffic jams, we should use our heads to solve them.
January 5, 1992
Daniel Roos is director of the Center for Technology, Policy and Industrial Development, and Japan Steel Industry Professor at the Massachusetts Institute of Technology.
* * *
A High-Tech Cure for Traffic Jams?
Lawrence D. Dahms
With our highways and airports more crowded every year, Americans can look forward to increasing gridlock and frustration unless something is done to solve our transportation headaches.
One tantalizing solution that has been proposed is to build high-speed trains like those found in Japan and Europe. Traveling up to 200 miles per hour, these trains could whisk passengers between New York and Washington, Dallas and Houston, San Francisco and Los Angeles, Orlando and Tampa, or elsewhere. They'd give travelers more options while making airports and highways less congested.
At the request of the U.S. Department of Transportation, a committee of the National Research Council, which I chaired, examined the feasibility not only of existing high-speed trains but of more futuristic possibilities, such as "maglev" trains, which use magnetic levitation.
High-speed trains similar to those in Japan and Europe are technologically feasible right now. Unfortunately they are costly. Proposals for building such systems in the United States have ranged between $10 million and $20 million per mile, depending on the location. The most likely market is intercity trips of approximately 150 to 500 miles, with high-speed trains competing principally with air travel for ridership.
Getting people to use high-speed trains in the United States will be much tougher than it has been in Europe or Japan. Why? In part because substantial intercity rail ridership already existed in Europe and Japan when high-speed trains were introduced. Also, competition from the air and auto modes is likely to be stiffer in the United States. We have more frequent flights, lower fares and cheaper gasoline. When these factors are combined with the high cost of building and operating high-speed rail systems, it is almost impossible to imagine how ticket revenues could cover the full costs of new train systems in our country.
Those who dream of riding fast trains in the United States must face the fact that subsidies will be required.
Whether such subsidies are justified is more a political than a technical question. If building a high-speed train can be shown to be better than expanding airports or adding freeways between certain major cities, then perhaps the subsidies could be paid with money saved from the airport or highway funds. Alas, our governments are not organized to make such tradeoffs. This problem was discussed during the recent debate over the surface transportation act. But as matters now stand, highway funds are held in trust for highways, airport funds for airports, and no one has the power to make a meaningful exception.
Our committee offered four main messages:
Fast trains can be built and operated; we are not unduly limited by technology. High-speed trains are more feasible for the near term than maglev trains, which still are in a relatively early stage of development.
These systems cannot be paid for solely with revenues
from the fare box. Subsidies are necessary if real progress is to be made.
Subsidies may be justified in specific cases if it can be shown that a rail investment makes more sense than building new highways or expanding airports.
Congress and state legislatures must write new laws empowering both the U.S. Department of Transportation and the states to consider fast trains as an integral part of the national transportation system. Only then can subsidies be directed to appropriate projects.
This last conclusion is the most critical. Congress is expected to invest in research to advance the promise of maglev trains as a faster and perhaps less costly alternative for the distant future. This initiative is encouraging but, even if the research is successful, we still have to figure out how to get airplane and train operators working together to select the best investment for a particular transportation corridor.
In Los Angeles, for instance, a new airport is planned in Palmdale. Further north, San Francisco International Airport hopes to expand. Would the alternative of building a high-speed rail system to connect these two cities make more sense? No governmental agency now is empowered to make such a decision. Isn't it time such tradeoffs were made possible?
December 15, 1991
Lawrence D. Dahms is executive director of the Metropolitan Transportation Commission, Oakland, Calif.
* * *
Crossing the Bridge to More Beautiful Journeys
Frederick Gottemoeller
Suppose you were making a commercial to convey the sense of a product reaching consumers from coast to coast. Which visual images would you choose?
When United Airlines faced that problem a few years ago, it chose the Brooklyn Bridge to represent the East Coast and the Golden Gate Bridge for the West Coast. That's not surprising. Bridges are among our most celebrated structures. Stretching across Tampa Bay or the Mississippi River, they can become symbols for entire regions.
Our thousands of "everyday" bridges are important, too. The bridges on Chicago's Dan Ryan Expressway, for example, are seen by millions of commuters weekly. Collectively, those bridges have a greater impact than any single public building on people's perception of that city.
As any traveler knows, however, many bridges are sadly nondescript. Carrying traffic but lacking grace, they are merely functional.
They could be much more. They should be works of civic art that enliven each day's travel, making all of our journeys more pleasant.
I was among a group of engineers and architects that recently examined bridge design for the National Research Council. We identified a number of examples where engineers are designing bridges that are not only beautiful but reasonable in cost.
Two changes are needed to make such bridges more routine. First, people must insist that their public agencies make good appearance an explicit goal of public works. Second, the engineering profession must improve its ability to respond to that challenge.
When citizens speak up, more attractive bridges often result. In Columbus, Ohio, public interest encouraged officials to organize an informal competition to obtain the best possible design for the new Broad Street Bridge. In Tennessee and California, continuing public support has led to a tradition of outstanding bridges. In Maryland, a new three-year program will improve the appearance of state bridges.
Many people think improved aesthetic impact derives from expensive "add-ons," such as an unusual color, ornamental features, or special materials like stone or brick. But in fact the greatest aesthetic impact is made by the structural components themselves — the cables, girders and piers. If these elements are well-shaped, the bridge will be attractive without added cost.
The Golden Gate Bridge, for example, owes its appeal to the graceful shape of its towers and cables, not to its reddish color. If the towers were ugly, painting them red would not make them attractive.
John Roebling, designer of the Brooklyn Bridge, and other notable engineers of the past and present have designed bridges that achieve structural excellence and outstanding appearance at a cost no greater than competing solutions. Their success proves that beauty need cost no more than mediocrity.
The dreariness of many bridges is not due to a lack of good intentions among designers. Most engineers do believe that concern for appearance should be an integral part of their work. But other matters often grab their attention. Engineers respond to the priorities of their clients — state
highway departments, toll authorities and other public agencies that now give aesthetics too little consideration.
These agencies have the most influence to improve the situation. They set the standards, select the designers and pay for the results, and they should demand more. They would do so if they thought the public expected it. To put it another way, Americans should insist on getting full value for their tax money. They should demand not just bridges but beautiful bridges.
Engineers can meet that demand although, as a group, they have some problems to overcome. Bridge design is an art, one that integrates judgments based on science and mathematics with others based on aesthetics. Many engineers are more comfortable with mathematical formulas than with the imprecision of appearance. Their bridges suffer as a result.
So engineers must expand their skills. The key is to make clear to them that appearance is a criterion equal in importance to performance and cost. Doing so will encourage engineering schools to place greater emphasis on aesthetic concerns and lead engineers to develop their aesthetic abilities in everyday practice.
The engineering challenge of building bridges is not just to find the "least cost" solution. It is to bring forth elegance from utility. We should not be content with bridges that move only vehicles and people. They should move our spirits as well.
September 1, 1991
Frederick Gottemoeller, a consulting architect and engineer in Columbia, Md., specializes in the design of bridges.
* * *
Launching Into a New Era in Space
Joseph G. Gavin Jr.
It once was one of our country's proudest enterprises. But then its facilities began to deteriorate and its vehicles failed to keep pace with models from abroad. Today, unless it starts building engines and vehicles that are more reliable and reasonably priced, it will fall even further behind competitors from Europe and Japan.
No, this is not another article about the automobile industry.
It is about a problem occurring high above Detroit and the rest of the country — in space. The United States, the nation that sent men to the moon, is falling behind in its ability to launch vehicles into orbit and beyond.
Both the European Arianespace group and the former Soviet Union have achieved higher launch rates with greater efficiency and lower costs than in the United States. Japan and other nations are making long-term commitments to large undertakings in space.
Our country, by contrast, continues to depend on outdated technology. The launch vehicles used by government and private industry are fragile, expensive and unreliable. Only one major advance in U.S. launch engines, the main engine of the space shuttle, has been made in 30 years. This was a superb engineering achievement when introduced in the 1970s but now operates at the very upper limits of its performance margins. Development of the more robust Space Transportation Main Engine should proceed immediately.
Our launch facilities in Florida and California are several decades old. They are customized to accommodate each vehicle, causing costly scheduling problems.
A committee of the National Research Council, which I chaired, concluded recently that this serious situation is getting worse. To meet future national needs and compete with other nations, the United States must reduce its launch costs by one-third to one-half. Action by the federal government also could enhance the competitiveness of our country's budding commercial launch industry.
The administration has proposed a National Launch System to help revive our "Earth-to-orbit" capability. Yet the program's top priority is developing a launch vehicle to carry very heavy payloads, those in the 135,000-pound class. Although heavy loads could be lifted to low Earth orbit, it is not completely clear what this vehicle would be used for. One possibility might be to resupply a space station. The Space Station Office, however, has no firm plans for this capability. The same is true for lifting heavy loads for space exploration.
So the need for a heavy payload vehicle is unclear, and developing it will be expensive and complex. Instead, we should focus on a different model with a clearer purpose and lower cost. A much smaller vehicle in the 20,000-pound payload class would have immediate commercial and national security applications. It also could be produced at a rate allowing individual units to have a reasonable cost. Building it first would provide good experience for tackling larger models, including an intermediate design for payloads of about 50,000 pounds, which eventually could replace the Air Force's workhorse Titan IV vehicle.
In so doing, we should learn from the success of the former Soviet Union, choosing low system cost and high reliability over exotic technology. The United States needs access to an array of engines and should evaluate all possibilities, regardless of their origin. For example, we should examine the Russian RD-170 rocket engine used in the Energia and Zenit launch vehicles.
The National Aeronautics and Space Administration (NASA) is developing an advanced solid rocket motor for use on the space shuttle and other launch vehicles. Given that program's high technical and programmatic risks, NASA should reconsider funding it. With constrained resources, NASA should keep using its redesigned solid rocket motor, which has proven reliable since the Challenger accident. For future "strap-on" booster applications, liquid boosters should be considered because they have several advantages over solid boosters.
The United States also needs to strengthen its long-term
investment in the technology base that leads to new approaches. Without adequate research and development, our launch vehicles and facilities are certain to slip even further behind. Companies with satellites, scientists with experiments and others will turn elsewhere to launch their payloads into orbit. This trend, already alarming, must be reversed. Whether on the highway or up in space, we cannot compete using outdated vehicles or facilities.
May 17, 1992
Joseph G. Gavin Jr. is former president of Grumman Corp.