CHARLES ANTONIAK: A question for Jon Kettenring: Is higher management at AT&T appreciative of these needs for statisticians in industry?
JON KETTENRING: As an employee of Bellcore, I will not attempt to speak for AT&T. However, I do not believe higher management generally has any particular interest in statisticians per se. They have considerable interest in people whom they perceive as problem solvers and who can demonstrate through a track record of accomplishment that they contribute to the success of the business. Statisticians are as well equipped to make these contributions as anybody. So we have a special opportunity here, and in some sense we are negligent in not capturing this opportunity more effectively than we do already.
WILLIAM EDDY: Jon, where are we going to get space in our curriculum, in our years in school, to do all these things?
KETTENRING: I do not have an easy answer for that. One suggestion would be to integrate many of the things I described into the existing curriculum; that way you actually get some synergy and hence some possible time savings. That may not do it all, but keep in mind that I need people who are as good at some of the things I spoke about as the people you are producing for me today. And indeed, as you know, a lot of these people are not coming out ready for success in an environment where industry does not have time to retrain them. So if that forces statisticians not to go quite so far down some mathematical track, or so deeply into some more esoteric, theoretical statistical track, I am willing to make that compromise.
But I do believe that the experiences with real data, the experiences communicating, writing, and so on, can be integrated at least to some extent into an existing curriculum. I am not claiming it is easy. I believe this is going on in the engineering field in some schools these days, where they are trying to do some of the things I mentioned by integrating them into existing classes.
ANTONIAK: I taught at Berkeley for six years and then worked at Bell Labs for seven years, so I am familiar with both the academic and the industry environments. Peter, has Berkeley instituted anything in the way of Bayesian analysis?
PETER BICKEL: Of course, there has been a long-standing Bayesian course in the industrial engineering department. Berkeley also has a graduate Bayesian course that has been developed over the last couple of years by Andrew Gelman. When I mentioned inference, both Bayesian and frequentist, I meant it; both have to be taught as part of model-building. I devote time in my course to basic Bayesian inference, and I would like to expand it.
STEPHEN FIENBERG: Peter, where is there going to be space in your curriculum or in ours for these kinds of activities? And a related question: How do you bring these interdisciplinary activities into those theory courses that actually ask questions about inference? Is squeezing the curriculum to fit in these things going to be a workable mechanism?
BICKEL: I think it is. We at Berkeley currently have two parallel courses at the graduate level, one in theory and one in applications. This year David Freedman is teaching the applications course, and I am teaching the theory course. We are in fact talking about integrating the two in the future. It would be very difficult, and I certainly would hesitate to do that by myself precisely because I feel my own background is inadequate. On the other hand, there is enough expertise in the department to put together such courses. Whether all of my Berkeley colleagues would agree to do that is a different matter.
The time problem is critical, but a very exciting idea would be to work out a coherent summer placement program in which it is clear that students truly learn some of the critical data analysis and communication skills they need, rather than merely being used as computer programmers. They could then bring their impressions back to the home department, provide some feedback, and perhaps some of their experiences could be incorporated into some of the courses.
EDDY: On your notion of dividing the courses so that we are sure to teach all of those activities: About 10 years ago we at Carnegie Mellon tried exactly that. We split all the introductory graduate program courses, and it was great because we certainly had a lot more courses and could teach many more things. But we found that each mini-course started to become a whole course, and the students began to collapse under the work load. In the last few years, we have migrated away from that and now have a very limited number of those divided mini-courses. Consequently, I would urge caution about dividing the courses and teaching responsibilities.
JEROME SACKS: One question that I have concerning the academic situation is the extent to which the environment within departments is suitable to get some of these things started. Peter, I believe you alluded to some difficulties with some of your colleagues. The question is, rather than just some of your colleagues, is it most of your colleagues or even all of your colleagues?
BICKEL: It is certainly not all of my colleagues. Either you or Jon said that things have to happen right away, that change has to happen immediately. I think, intrinsically, it cannot happen immediately just because the current faculty is a mix of people who were trained in different ways, some of which are more and some less attuned to interdisciplinary study. However, it seems clear that the way that the profession and the field are evolving, the great majority if not all of my younger colleagues would in fact support such initiatives.
MARJORIE HAHN: Peter, all of your comments were directed toward statistics PhD programs that are in separate statistics departments. Throughout the country, however, a large number of people get training in statistics within mathematics departments. Can you make any comments on that situation, or is the feeling that statisticians should be trained only within the statistics departments?
BICKEL: That is a very difficult question. We have in some sense faced that in our own department because we have a very strong pure probability group, and I believe there is much less willingness within that group to accept a view that a statistician should be the holistic "all-around-er" that Jon Kettenring described. I would hope that in mathematics departments the natural vehicle would be to form alliances with applied mathematics and to produce people who are PhDs in applied mathematics, because I think there is hope of having mathematicians who also recognize the need for some of these other skills. Also, there are going to be people who want to be pure probabilists, and I think that is fine. But as to what should be the typical product of a statistics department, I would like to see it be what I described.
ANTONIAK: Addressing public policy and the public's perception of statistical measures: I am sure that many people along the Missouri River, having suffered through the worst flood in a hundred years, now feel they are safe for another hundred years. In other areas, such as the ozone layer and greenhouse warming, there is still much controversy. How do statisticians handle such tough issues and give guidance when there are large uncertainties?
N. PHILLIP ROSS: We do that on a case-by-case basis, exactly as you have phrased the question. We try to talk to people, mostly our own management chain. Statisticians rarely get called on the telephone by the general public. Sometimes contact comes through EPA's Office of Federal Activities or a similar office. The answers we give are probably not very satisfactory, especially to someone who has actually experienced the incident for which we have given a probability. Although the incidence of an event is given "on the average," if you happen to be the one experiencing it, the average or the probability does not mean very much. If you are in a car accident, it does not help to be told that accidents happen only 1 in 10,000 times. But that is the only answer we can give.
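The flood misconception Antoniak raises, and Ross's 1-in-10,000 example, turn on the same probability point. A minimal sketch (an editor's illustration with the standard textbook numbers, not part of the discussion) shows why surviving a hundred-year flood confers no hundred-year safety:

```python
# Editor's sketch: the "100-year flood" has probability 1/100 in ANY
# given year, independently of the past; experiencing one does not
# reset a hundred-year clock.

p_annual = 1 / 100  # annual exceedance probability of a "100-year" flood

# Probability of at least one such flood over the next 100 years:
p_century = 1 - (1 - p_annual) ** 100
print(f"P(at least one flood in next 100 years) = {p_century:.3f}")  # ~0.634

# Probability of a flood next year, given one just happened
# (unchanged, by independence of years):
print(f"P(flood next year) = {p_annual:.3f}")  # 0.010
```

The point is that a "100-year" event is far from certain even over a century (about a 63 percent chance), and no less likely the year after it occurs.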
J. LAURIE SNELL: As something of a philosophical question, to what extent should an academic institution ignore all the desires for particular training and continue being an ivory tower providing traditional education?
ROSS: As a person dealing with public policy, I believe universities have a role to play in the traditional sense. Everybody should be educated. My daughter has a friend who has decided to major in the classics and who, upon being asked the logical question, "What are you going to do with that?" replies, "Teach the classics after I get my doctorate in it." I believe that in mathematics and statistics, universities should teach the theory, be the forum for debate, and give students the tools that prepare them not only to think but also to participate in the world of work, whatever that work might happen to be. In some instances it might be teaching; if students are to be well prepared for that situation, the art of teaching takes priority. There must also, however, be a way to expose students to real experience without universities inappropriately becoming institutes of vocational education.
There are people who wonder why they went to school and why they got the doctorate; it does not seem to have helped them at all. They wish that the university had told them there were not very many jobs in that field. There is some obligation to make students aware of such things, but I do not know how that is done in the curriculum, how to give a course for that. Integrating real experience, either outside of class or during the summer, is one way to address the issue. But the university has a role and is responsible for teaching the theory and raising the questions and the arguments, both those that may and those that may not be relevant in the real world.
CARL MORRIS: A small technical question for John Bailar: What do you mean by "bias"?
JOHN BAILAR: Consider the human cancer hazard of a specific level of exposure to a specific chemical compound, such as dioxin. There is great uncertainty attached to any number that can be produced from that. There are profound statistical questions involved in identifying the sources of uncertainty, the correlations among those sources, and producing estimates that are compounded, in a technical sense, with the total uncertainty. There is a need for a good bit of education of policy makers of all persuasions about the meaning of the uncertainty that is attached to these numbers and how to interpret it.
I commonly see something I call log thinking: someone assumes that being a log off in one direction balances being a log off in the other. Of course, that is untrue. There is also the difference between uncertainty in an estimate of a specific biologic parameter in this context and true variation from person to person or subject to subject. All of these illustrate within one narrow field what I would like our students to know on a much broader scale.
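Bailar's "log thinking" can be made concrete with a small sketch (an editor's illustration with hypothetical numbers): averaging a factor-of-ten underestimate and a factor-of-ten overestimate on the raw scale does not recover the true value, because the two errors balance only on the log scale.

```python
# Editor's sketch of "log thinking": a factor-of-10 error in one
# direction does NOT cancel a factor-of-10 error in the other.
true_value = 100.0
low, high = true_value / 10, true_value * 10  # each estimate "a log off": 10 and 1000

arithmetic_mean = (low + high) / 2    # 505.0 -- badly biased upward on the raw scale
geometric_mean = (low * high) ** 0.5  # 100.0 -- the errors cancel only multiplicatively

print(arithmetic_mean, geometric_mean)  # 505.0 100.0
```

The asymmetry grows with the size of the error: being two logs off high and two logs off low averages to roughly half the high value, not the truth.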
Sometimes there are even opportunities to study bias in quite specific ways without being able to estimate a particular bias. Much has been said over the years about interviewer bias, and about bias in public surveys and other kinds of data-gathering activities; the question is how big the interviewer bias is. There is a well-recognized but not commonly used technique for studying this. It has been used in the Current Population Survey and is called interpenetrated subsamples; it produces a component of variance that can then be labeled bias. In the end one really has a probability distribution of biases: you do not know what the bias is for any specific interviewer. But I believe the concept can be extended.
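The interpenetrated-subsamples idea can be sketched as a toy simulation (an editor's illustration; the model and all parameter values are invented, not the Current Population Survey design): when respondents are assigned to interviewers at random, any between-interviewer variance in excess of sampling noise estimates the variance of the distribution of interviewer biases, even though no individual interviewer's bias is identified.

```python
# Editor's toy simulation of interpenetrated subsamples: random
# assignment of respondents to interviewers lets a one-way variance-
# components analysis isolate the interviewer-bias component.
import random

random.seed(1)
k, n = 50, 40                  # interviewers, respondents per interviewer
sigma_b, sigma_e = 2.0, 5.0    # std devs of interviewer bias and response noise

data = []
for _ in range(k):
    b = random.gauss(0, sigma_b)  # this interviewer's (unknown) bias
    data.append([100 + b + random.gauss(0, sigma_e) for _ in range(n)])

means = [sum(g) / n for g in data]
grand = sum(means) / k

# One-way ANOVA mean squares: between interviewers and within
msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
msw = sum((x - means[i]) ** 2 for i, g in enumerate(data) for x in g) / (k * (n - 1))

# Estimated variance of interviewer biases; should land near sigma_b**2 = 4
var_bias = max((msb - msw) / n, 0.0)
print(f"estimated interviewer-bias variance component: {var_bias:.2f}")
```

This recovers only the spread of biases across interviewers, exactly as Bailar says: a probability distribution of biases, not the bias of any one interviewer.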
DANIEL SOLOMON: I represent a large PhD-granting statistics department. We hired two new faculty this year. One has a PhD in physics, and the other has a PhD in genetics; neither has a PhD in statistics. Perhaps we should view that as a sad commentary that underscores Jon Kettenring's remarks.
In order to build a more interdisciplinary curriculum, the statistics community must encourage more interdisciplinary faculty. To do that, the university's reward structure for interdisciplinary work must be changed. As long as publications outside the mainstream statistics literature are undervalued in the tenure process, faculty in the early stages of their careers will not be encouraged to undertake such work. Developing interdisciplinary work often takes longer than continuing work on one's dissertation, that is, continuing work in statistical theory, so it is risky for new faculty to undertake.
Thus how is interdisciplinary work to be brought into the process? It falls on the senior faculty and administrators of the departments to continuously educate the tenure committees, the deans, and whoever makes decisions about tenure in the university, about the nature of statistics and the nature of interdisciplinary research. The reason this must be a continuous process of education is that those faces change and the educating must begin anew as new faces come on board.
BICKEL: This is a theme that has been sounded a great deal, and it has a great deal of truth to it. On the other hand, if you ask which way the arrow is pointing, it seems in fact to be pointing toward greater ease and rewards in that direction. To cite an example, Berkeley appointed a PhD in geophysics some years ago. I believe it benefits and enriches the field to bring in such people who eventually identify themselves as statisticians.
Concerning publications, I do not think that there are difficulties at the higher levels, at least at Berkeley, in persuading deans and tenure committees of the value of publications outside the field of statistics, because deans and tenure committees are usually not statisticians and would not know or care. The question is, What do the people in the department say about this work? What do the outside letters say about this work? There I think you may well be right that perhaps some of our colleagues still define excellence in rather narrow terms.
DICK BECKMAN: I am glad to see that universities are now appointing scientists in the statistics departments, because almost every young PhD these days is scientifically illiterate. Where I work, it is very hard for people who do not know anything about science to be hired.
SACKS: I have proposed to a number of people that the most important course in the statistics curriculum would be an integrated science course for graduate students in statistics covering some chemistry, physics, and biology.
CLIFFORD CLOGG: First a comment and then a question for any of the panel of speakers.
A great many of the statisticians who were role models in my own career, people who would be thought of as applied statisticians or methodologists who have made contributions to many different areas, have backgrounds that in reality were multi- or interdisciplinary. Some of these people had undergraduate training in psychology or engineering or computer science, and even graduate-level training that involved serious work outside a statistics department. It is striking that no one has mentioned strengthening multidisciplinary education; making statistics education multidisciplinary would mean having statistics graduate students do serious work in some other field. That does not mean that they take a "Bugs Bunny" course in statistics for economists at the 300 level, but rather that they do serious work in economics or biology, or a serious project in urban affairs, and so forth. I do not concur on this umbrella course in integrated science, whatever that might be, because such courses tend to be a laughingstock for real scientists.
But is room available in the curriculum for this kind of serious work? Why not require that statisticians in the 1990s do some serious work in another field that gives them credentials and perspective, and so on, even if it extends the degree program an extra year?
BICKEL: That is something that has concerned me also. Some 10 years ago I unsuccessfully proposed in the Berkeley department that all of our students be expected to take a master's degree in another field, possibly mathematics. The difficulty, though, is that in the hard sciences, in the biological sciences, and perhaps less so in some of the social sciences, there simply is not time. Furthermore, our students just cannot do it. If you plunge a statistics student into a relatively high-level course in biology, he or she has to go back to the basic freshman course. I agree that doing serious work in another discipline is the ideal and I still believe that to some extent it can be done, but there are severe difficulties.
EDWARD ROTHMAN: I am troubled by the idea of sending students to learn a significant amount about one thing or another. There may be 20 or 30 courses that teach precisely the same principle. The conservation of momentum law will appear in a course in aerodynamics and in physics. We want to avoid that type of departmentalization. The physics department teaches F = ma, and the cardiologist talks about blood pressure and cardiac output, and so on, but basically there is a first-order push-pull law. What we do as statisticians is try to extract what is common to all these things. If we simply say to our students that they should gain expertise in a particular area, they will not grasp that big picture. I believe we have to provide that in our own courses.
BAILAR: We ought to do it ourselves, but the immediate implication is that we have to know how to do it, which means that we have to be doing it on a large scale ourselves.
ANTONIAK: Does that not also raise the question of cross-disciplinary knowledge? As John Bailar says, bias or unfamiliarity with a field leads to inappropriate extrapolations in risk analysis, as with dioxin. Peter said that David Freedman has been teaching a class that covers good historical papers. Are there any classes reviewing great disasters, total misappropriations of funds and effort, and so forth, as in, say, asbestos or DDT? Students could get an idea of the pitfalls that are ever present when they enter a defined area. To name a classic example, storks bringing the babies comes to mind; the statistical tool is a correlation that we all think we understand.
BICKEL: I believe you will find a number of examples at that level in the Freedman et al. book I cited in my talk, but not major recent disasters, although I think it is an interesting question.
To disagree slightly, my son is starting a program in molecular biology at MIT in which one of the courses examines historical papers, focusing on the points that are questionable: Was the evidence in the paper actually adequate for the conclusion that was reached?
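The storks-and-babies correlation Antoniak mentions can be reproduced in a toy simulation (an editor's sketch; all numbers are invented): a shared confounder, here the land area of a district, induces a strong correlation between two causally unrelated counts.

```python
# Editor's sketch of the classic storks-and-babies correlation:
# larger districts have both more storks and more births, so the two
# counts correlate strongly despite no causal link between them.
import random

random.seed(0)
areas = [random.uniform(1, 10) for _ in range(200)]      # district land areas (confounder)
storks = [5 * a + random.gauss(0, 3) for a in areas]     # more land -> more stork habitat
births = [80 * a + random.gauss(0, 50) for a in areas]   # more land -> more people -> more births

def corr(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(f"corr(storks, births) = {corr(storks, births):.2f}")  # strong, yet not causal
```

Conditioning on the confounder (comparing districts of similar size) would make the apparent association largely disappear, which is precisely the pitfall the example is meant to teach.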