Emerging Technologies and Ethical Issues in Engineering: Papers from a Workshop, October 14–15, 2003 Keynote Address WILLIAM A. WULF National Academy of Engineering When I looked carefully at the attendance list for this symposium, I realized that a number of you probably have only a vague idea of what the National Academy of Engineering (NAE) is. So, let me give you my elevator speech about the National Academies. Most academies of science and academies of engineering around the world share two properties. First, they are private organizations, not part of their governments. Second, they are honorific. You cannot join the Royal Society in London or the Academie des Sciences in Paris. You have to be elected by the existing membership, and that election is generally considered a high honor. Back in 1863, in the middle of the Civil War, a group of Americans got together and created the National Academy of Sciences. They incorporated it in the District of Columbia as a not-for-profit corporation. You may remember from your high school civics class that until thirty years ago there was no city government in Washington—the federal government acted as the city government. Thus, the Academy’s articles of incorporation were actually a bill passed by Congress and signed by Abraham Lincoln. Somebody inserted about 40 words into this otherwise boilerplate corporate charter, and those words made all the difference. In modern English, they say that the academy will provide advice to the federal government on any issue of science or technology, whenever asked, and without compensation. On that basis, the academy became schizophrenic. It has two distinct personalities. On the one hand, it is an honorific organization like other academies around the world. On the other hand, it is an unbiased, absolutely authoritative advisor to our nation.
Fast forward to today. What was then one academy, the National Academy of Sciences, is now three academies—the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine—all honorific organizations. Collectively, these three organizations manage a fourth organization, an “operating arm” called the National Research Council (NRC) that organizes most projects and studies that provide advice when the government asks for it. Together, the four organizations are now called the National Academies. When the federal government asks us a question, we put together a committee of 10 to 20 people, literally the best people in the country on whatever the subject is, who participate pro bono. They must have no conflicts of interest, and their biases are carefully balanced. They then deliberate for anywhere from three months to three years, depending on the subject. Finally, they write a report, which I think of as a Ph.D. dissertation. It is usually 200 to 300 pages long, and the last 20 pages are citations to the literature. This fact-based, tightly reasoned report is then reviewed by a group of peers, people as eminent as the committee members themselves. The National Academies produces one of these reports every working day—about 280 last year. If you take a snapshot of the organization, there are 6,000 to 10,000 experts serving on these committees—a veritable Who’s Who of people in the science and technology community, doing the best they can to serve their nation. So, that’s who we are. Now I will turn to my real topic, the subject of this gathering. We just had the annual meeting of the National Academy of Engineering, at which the president is expected to deliver an address on an issue of importance to the engineering community.
For the 1999 meeting, since we were approaching the turn of the millennium, I decided to talk about the accomplishments of engineers in the twentieth century and the challenges facing them in the twenty-first century. Preparing the first part of the lecture was easy. We had made an arrangement with the engineering professional societies to collaborate on producing a list (everybody was making lists then, if you remember). Our list was of the 20 greatest engineering achievements of the twentieth century, defined not in terms of technology “gee whiz,” but in terms of impact on quality of life. I could easily just trot out that list in my lecture, but I’m not going to repeat it here. But it is amazing! It includes electrification, automobiles, airplanes, radio and TV, agricultural mechanization, refrigeration, and on and on. The striking thing about the list is how profoundly the items on it have transformed our daily lives. If you imagine taking any one thing off the list, you quickly realize how different life would be. My favorite item on the list was ranked number four—and that is simply clean water. The average life span in 1900 in the United States was 46. It is now 77 plus—a difference of more than 31 years. It has been estimated that 20 of those 31 years are attributable simply to clean water and sanitation—the most prosaic engineering you can think of. In 1900, waterborne diseases were the third largest cause of death in this country. They still are in developing countries.
Almost nothing we could do for developing countries would have a bigger impact than supplying them with clean water. Preparing the second half of the lecture, the challenges of the twenty-first century, was more difficult. Everybody agrees that the pace of technological change seems to be accelerating. Like a lot of other people, I can “guesstimate” what things will be like 10 years from now, but predicting the engineering challenges for the whole of the twenty-first century is daunting. As I looked back on the achievements in the twentieth century, I was struck by two things. First, as I have already indicated, I was in awe of how much engineering and engineers matter—how much they affect our daily lives. Second, I realized that the immense societal impact of engineering achievements was almost never predicted by their inventors. As Norm Augustine, a former CEO of Lockheed Martin, wrote in The Bridge (the quarterly publication of the NAE), “The bottom line is that the things engineers do have consequences, both positive and negative, sometimes unintended, often widespread, and occasionally irreversible.” The more I thought about that, the more I realized that there are deep moral and ethical responsibilities associated with the impact of engineering on individuals and on society. So as I searched for the second half of my speech, I began to read broadly and deeply about engineering ethics and applied ethics. In the end, I was convinced that I should pose only one challenge for the twenty-first century—engineering ethics. I now believe that the quickening pace of technological innovation and the spread of nano-, bio-, and information technology, coupled with the vastly increased complexity of the systems engineers are building, raise a new class of ethical questions that the engineering profession hasn’t thought about.
But we need to think about them, and, in fact, the need is urgent! In particular, we need to think about issues that go beyond the ethical behavior of individual engineers; we need to think about ethical behavior for the profession as a whole. After my speech in October 1999, I also became convinced that the NAE is the one pan-disciplinary organization that has the standing and prestige in the engineering community to take on this issue—to start our fellow engineers thinking about it. With the enthusiastic backing of the NAE Council, I asked Norm Augustine to chair a committee, which Deborah Johnson, chair of this workshop steering committee, served on, to suggest how we should proceed. This meeting today is one result of the committee’s report, one step in a process I hope will lead to the establishment of a permanent center on engineering ethics here at NAE. Let me back up now and go into a little more depth. First, I don’t think there is a crisis. I believe that engineers are, by and large, ethical individuals. Ethics courses are taught at many engineering schools, and there is a large literature on the subject. In addition, every engineering society has a code of ethics that usually starts with some words from the National Society of Professional Engineers code, “Hold paramount the health and welfare of the public.” I think this captures very well the overall responsibility of individual engineers. These codes typically
elaborate an engineer’s responsibilities to clients and employers—to report dangerous or lax practices, to respect the consequences of conflicts of interest, and so on. Beyond those codes, there are daily discussions. I can remember talking with my father and my uncle, who were both engineers, about ethical issues ranging from appropriate safety margins to undue pressure from management. I have similar vivid memories of discussions with my professors when I was in school, and with my colleagues. I remember late-night debates on the subject with my friends in college. All of that is still in place, and it’s one of the reasons I’m proud to be an engineer. Individual engineers take ethics seriously. But engineering is changing in ways that raise issues that are not covered by existing codes or discussions or the textbooks I have read. These new issues are called macroethical questions (as opposed to microethical questions). “Macro” and “micro” are not intended to suggest big and important versus small and unimportant. A microethical question refers to the behavior of an individual, whereas a macroethical question refers to the responsibilities of a profession as a collective group. The changes I want to discuss are the macroethical issues—the ones that raise questions for the profession as a whole, rather than for an individual. For engineers in the audience for whom this distinction may not be transparent, let me give you an analogy with the medical profession. The Hippocratic oath, which focuses on the behavior of individual physicians, is similar in a lot of ways to the ethical codes of professional engineering societies. But medicine is also grappling with some macroethical questions—for example, allocation. If there are not enough organs for all of the patients who need transplants, who should get them?
If there is not enough medicine for all of the patients who need it, who should get it? If there are more patients than there is time for the physician to treat them, who should be treated, and on what basis? These are not questions an individual physician decides for himself or herself. They are questions the profession must grapple with, or maybe society, guided by the profession. Once a decision is made, a physician’s choice to follow it (or not to follow it) becomes a microethical question. Several things have changed, and are changing, in engineering that raise macroethical questions. I’m going to talk only about the one that is closest to my professional experience—complexity. The level of complexity of the systems we are engineering today, specifically systems involving information technology, biotechnology, and increasingly nanotechnology, is simply astonishing. When systems reach a sufficiently high level of complexity, it becomes impossible to predict their behavior. It’s not just hard to predict their behavior, it’s impossible to predict their behavior. The question can’t be answered by taking more things into account or thinking harder about the problem or using a new set of tools. At a certain threshold of complexity, it becomes impossible to predict all system behaviors.
Over the last several decades, mathematical theories of complexity have developed. Although these are relatively immature compared to the mathematical tools most engineers are familiar with, one thing is crystal clear. There is a level of complexity beyond which it becomes impossible to predict the behavior of systems. Unfortunately, these theories carry some undeserved baggage. For example, the term for unanticipated, unpredictable behaviors is “emergent properties,” a term that was first used in the 1930s in conjunction with attempts to explain why group behavior was different from individual behavior. As I understand it, these theories of group behavior are now discredited. Some postmodernists trying to discredit science, specifically reductionist approaches to science, have used similar terminology. Nevertheless, the results of these theories are solid. It is impossible, or to use the correct technical term, “intractable,” to predict the behavior of sufficiently complex systems. Let me give you an example from my own field—software. I find it fascinating that the general public tolerates a large number of errors in computer software. At any given moment, there are roughly half a million to a million bugs in the Microsoft Office suite, for example. Most people do not understand that only some of these bugs are blatant errors. Some of them are emergent properties—properties that could not be predicted. Let’s talk about impossibility for a moment. Just for a touchstone, there are about 10^100 atoms in the universe. The number of “states” in my laptop—that is, the number of patterns of zeros and ones in its primary memory—is 10^100,000,000,000,000,000,000. That is an unimaginably large number! This raises an interesting problem about testing. But first, there is something else you need to understand.
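The scale of these numbers can be sanity-checked with a few lines of integer arithmetic. In the sketch below, the memory size (1 GB) and the age of the universe are illustrative assumptions, not figures from the talk; the atom count and per-second test rate are the talk's own. Even with a memory far smaller than the one behind Wulf's figure, the conclusion is the same.

```python
import math

# How many distinct states does a memory have? 2^bits.
# Assumption (illustrative): 1 GB of primary memory.
bits = 8 * 10**9
# Decimal exponent of 2^bits is floor(bits * log10(2)) + 1 digits.
state_exponent = int(bits * math.log10(2)) + 1
print(f"states in memory ~ 10^{state_exponent}")

# A maximally generous testing budget: 10^100 computers (one per
# atom, using the talk's figure), each checking 10^100 states per
# second, running for the ~4.4e17 seconds since the Big Bang.
budget = 10**100 * 10**100 * (44 * 10**16)
budget_exponent = len(str(budget)) - 1
print(f"states testable  ~ 10^{budget_exponent}")
```

Even this absurdly generous budget reaches only about 10^217 states, while a mere gigabyte of memory has on the order of 10^2,400,000,000 states, so exhaustive testing is hopeless by hundreds of millions of orders of magnitude.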
Engineers will understand this better than ethicists perhaps, but physical systems have a wonderful property called “continuity.” Basically, that means that for most mathematical functions that describe physical systems, if you make a small change in the input, you get a small change in the output. In other words, around a given point, continuous functions have pretty much the same value. They don’t do anything radically different. The trouble with digital systems like my laptop is that they are not continuous. If you change even one bit in the memory, the meaning of what is being represented may be radically changed. The lack of continuity has profound implications for testing. In testing physical systems, you can pick a finite number of test points that are sufficiently closely spaced, and, because of continuity, you can be reasonably certain the behavior in between those points will be similar. You cannot do that with digital systems. You have to test every configuration. But that is impossible; if every atom in the universe were a computer, and every computer in the universe could test 10^100 states per second, there wouldn’t be enough time, even starting from the time of the Big Bang, to test all of the states in my laptop! We have a procedure that you could follow, but there isn’t enough time. The question then becomes how to engineer software ethically when you know ahead of time that there will be behaviors you cannot predict. You cannot
test for all of them, and some of them will be undesirable, possibly even disastrous. Take another example—the U.S. Army Corps of Engineers is about to undertake an exercise to “remediate” the Everglades. We have “screwed up” the Everglades by draining them to make places where people can live, work, and shop, and the Corps is now going to “fix” them—or so they claim. But the Everglades are at least as complicated as my laptop. We don’t understand the Everglades system, and we cannot predict all of the behaviors that will result from particular modifications. How can we make ethical decisions when we cannot predict what the outcomes will be? Yet doing nothing is, in fact, also doing something. We do not have the option of not doing anything and avoiding the ethical choice. My time is just about up, so I’ll have to conclude. Engineers have made tremendous contributions to our current quality of life. Certainly, we have made missteps, and certainly we need to do a lot more to bring the benefits we enjoy in the developed world to people in the developing world. I am unabashedly optimistic that we will do that, but progress is not guaranteed. We face many challenges, among them understanding what the process of engineering should be so we can engineer ethical solutions to the world’s problems. I happened on a quote from John Ladd, an emeritus professor of philosophy at Brown University, that seems apropos. “Perhaps the most mischievous side effect of ethical codes is that they tend to divert attention from the macro ethical problems of a profession to its micro ethical ones.” Thank you.