DUANE MEETER: Consider a hypothetical situation in which undergraduate students take a mathematical statistics course and get excited about some theories but, to their shock, suddenly find out on getting a degree and going to graduate school that they are going to have to work with other people and in groups larger than two. This makes me feel that we have to move upstream, look at our mathematical statistics courses at the undergraduate level, and use some of the methods that have been discussed here, for example, ask people to work in teams so that they can expect more of that when they get to graduate school.
Of course, the other problem is that the faculty are not trained this way either, and so it is a tremendous job to change their thinking.
ROTHMAN: I would certainly welcome any suggestions you have about how we can change the faculty so that they think about these things and actually do something about it. Some of them have thought about it. But there are tremendous disincentives to making changes. Frankly, it is far easier to teach a well-defined body of theory. It is neat and fun to simply go into the class and present the standard theorems and be done; you have nicely defined lectures. This interdisciplinary and cooperative stuff is a lot harder. Also, it takes more time to do this right, whereas the other material you can dig out of an old book you had when you were a student.
At the University of Michigan — this is not said out loud, but you know it is true — people do not get promoted for outstanding teaching. Even if your colleagues think very highly of you as a teacher, the basis for promotion at the University of Michigan is ten outside letters. Where are you going to get ten outside letters? One must write a pen sketch of who those people are, and people in the field are contacted who are experts in this specialty and that specialty. They are themselves not universal masters, and they say, "Here are a few of the good papers that I understand, but I do not know enough about those other papers." This results in a situation in which assistant professors have a tremendous disincentive to get engaged in interdisciplinary education. Their best bet is to write research papers and get published in the Annals of Statistics and perhaps a few other places. That is not to say that Michigan will not promote other people; I am an example of that, but it is pretty rare. I think that changing that is the biggest challenge.
MORRIS: However, there are two problems. You spoke about measurements in your presentation; one problem I would like you to address is how you measure good teaching. It is easy, in some sense, to measure good research: you get those ten letters and see what the person is like. It is harder to observe teaching, and so how can we measure that? I do not expect an answer today, but that is one problem.
When we teach well, looking at this as a collective team process, we are helping hundreds of students to learn; if we could measure how much better off the field is 20 years from now because of what somebody does in his or her teaching, we might find that that person's teaching is more important than some abstract research published in an obscure journal.
ROTHMAN: I think that is the key. I have heard the word "holistic" twice — call it system thinking. The way you measure the performance of anything is in terms of its impact on the aim.
To make it quite simple here, to evaluate teaching, I think it would be wrong to look at things that are too focused on just one aspect of the whole. We are not interested in maximizing individual parts of the function; that is what is meant by total quality. You look at the function of all these variables. And you are right; ultimately, what we want to know is how much learning is taking place 20 years from now. So if we look too carefully at what happens in the classroom, we may lose sight of the real aim.
My attitude about teaching is that it cannot be quantified in such simple terms. I have been the beneficiary of some of those simple measures. I have all the little awards, but that is nonsense, frankly; you can manipulate the audience to get those awards and garner high grades. But what really matters to me are the successes of some of my students and how they did later after having passed my way. That is what I feel best about, not what I did, nor whether this person liked me or not.
I think measuring good teaching will have to be based on some sort of written evaluation. You know who the poorer teachers are, and I think help must be focused on those who are not doing quite the job that ought to be done, on a one-to-one basis. When we set up some measure, people will optimize for that measure as opposed to the purpose of increasing the joy in learning.
ANTONIAK: Having taught for many years, I have to agree. In a parallel field, how do medical schools prepare doctors to recognize that they might possibly make the wrong diagnosis, or make the right diagnosis and provide the wrong treatment, and that in both cases, the patient will die? How do we prepare applied statisticians for dealing with poor data?
ROTHMAN: In medicine, there is no issue of truth to provide a yardstick. They can at best look at the process by which they arrive at some decision, but knowing all the while there are very few gold standards; even autopsies can have errors.
One bad approach that I know they use is multiple-choice tests for some evaluations. It is bad because, in a world where the women who entered medicine were clearly of a higher intellectual level than the men, thanks to vast levels of past discrimination, women nevertheless did worse on those tests. Yet when women were evaluated one-to-one, they did much better than their male counterparts. So some of their measures were simply nonsense.
At many good medical schools they simply throw out all that grade nonsense and report Pass/Fail; everybody of course passes, because if you do not pass, you are out of medical school. That is a much closer approximation to the truth than reporting on some numerical scale. This idea of connection to an outcome, a real outcome, I think is misleading. I wish I had more time to go into this.
MYERS: Are you a voice crying in the wilderness, or does anybody pay attention? What is the attitude of your colleagues at the university?
ROTHMAN: There are a few people in the department who agree. There is another group that is on the fence (you will find this is true everywhere), and there is a group that is in opposition. Our department has calmed down over the years. I do not think I am a voice in the wilderness, but there certainly is not a big parade out there pushing this along. I believe there are people who understand much of what Deming has been saying and are implementing
it, but a vast majority are not changing. Whether they believe that the change is right or otherwise, they are simply not changing. I do not see any radical changes taking place.
MYERS: You essentially said in a slightly different way that all of this is clearly not having much success.
ROTHMAN: We have got to change ourselves. One could sit back and say, "All of this and a buck and a half will get you a cup of coffee." What I am suggesting is that at least those of you who are here can change. It is such a simple idea that if you change yourself, that is all you can do, really; you cannot get other people to change, but if you yourself change, we are going to make something happen.
We have to increase what we consider to be our faculty, and include adjunct faculty. We have this tremendous industry resource. We choose to view statisticians in industry as not academic statisticians; that is ridiculous! There is much more that industrial and academic statisticians have in common than what they do not, and we ought to be bringing people from industry into the classroom instead of only sending our classroom theoreticians to industry. It is a two-way street.
There are a lot of little changes we can make. It is a gradual process. Stephen Fienberg said we should measure progress on the scale of generations. I do not know about that, but I feel that there is something of a moral obligation that when you have knowledge about how things ought to be, you should not sit idly by and let other people define the world for you. You need to say, "If that is what I think is right, I am going to make some changes along those lines."
The fact that we have a big barrier and a lot of people who are opposed to changing or doing anything differently: so what? Look at the fellow who proved Fermat's last theorem. He went seven years without a publication. I wonder if he talked about his paycheck at the end of each year the way we might, when he did not get any salary increase. He was afraid to tell people he was working on it. He had one colleague in the department review his work, apparently.
If you think what we say here is right, make some changes.
SOLOMON: I declined John Tucker's earlier invitation to talk about the statistics department at North Carolina State University, but instead I am going to do a little marketing for the Journal of Statistics Education. It has been referred to a couple of times in these two days. The first issue of that journal, which is accessible electronically in a number of ways, appeared on July 19th. The founders had two motivations in mind for establishing the journal that are relevant to this symposium. The primary one was what I hope is the obvious one, and that is to provide a vehicle for exchange of information on postsecondary statistics education. But there was a secondary goal that relates to the reward structure that we have talked about and has come up a couple of times in the course of this symposium. That goal is to provide a rigorously refereed, and, hopefully, therefore respected outlet for scholarship in statistics education, one that will provide some tangible fodder for promotion and tenure files. Whether it achieves that goal depends on whether all of you begin to submit papers to it. Part of the excitement of it is that it is distributed electronically, and so offers opportunities that simply are not available in the print media. For example, an article in the first issue written by George Cobb describes the results of a sequence of NSF-sponsored projects in statistics education. The article itself is a summary of those projects, but if you wish, you can request from the Journal
of Statistics Education additional information at various levels that will provide, for example, the final report of the project team or, in fact, some of the data sets or work sheets that were prepared in connection with some of these projects. That is part of the vision for this journal, to make materials as well as articles available electronically.
There are some brochures outside at the registration table, and there is going to be a poster session at the Joint Statistical Meetings in which the Journal of Statistics Education managing editor and the editor will actually demonstrate with a computer how to access the journal and let you see what it looks like on-line. I would be happy to talk privately to anybody who is interested in more information.
PREM GOEL: A question for Laurie Snell: How do you assess students in a course such as Chance?
SNELL: We benefit from the fact that Dartmouth, like many other schools, has wonderful grade inflation; for example, the average grade at Dartmouth is more than B+. Thus we are talking about giving grades of A, A-, B, and B+, and so we do not have to worry about huge distinctions. The actual final grade is based on the following components.
The students keep a journal of what has taken place in the course. The journal is used for students' comments about the articles that they were a little shy to make in class or in their discussions. They put in the journal solutions to problems that they have worked out of Freedman, Pisani, Purves, and Adhikari, and so on. The students do all of the review exercises in the relevant chapters, which is usually about two-thirds of the book, pretty much on their own. Very little time is spent discussing that. They also hand in a lot of homework. At the end of the course they do a final project that is related to something they were interested in that came up in the course. That project is a substantial effort. Students do a poster show, and there is a kind of casino that goes with it; that, too, is a big event, and students seem to like it very much. A student's final grade is our subjective assessment of the journal, the homework, and the project. With there not being a lot of grade range to worry about, the true answer to your question is that assessment is not taken very seriously.
WARD: Is this class one large group?
SNELL: The schools where I have been involved with teaching the class have by chance been three different ones: Princeton, Dartmouth, and the University of California, San Diego. Two classes were of size 60, and one was 30. Sixty is pretty large, although I personally always team teach it with somebody else; I would not teach it any other way. There are simply too many ideas floating around at one time for one person to handle, and it is very convenient to be able to say, "Well, I do not know anything about that but maybe Joe does." The students love this, of course; there is nothing they like better than to see experts arguing. I do not know why they love that so much, but they do.
JOHN McKENZIE: How do you gather the information? How do you get the clippings?
SNELL: I am exposing all my secrets and am probably going to go to jail even faster! I scan through Mead Data Central, Inc.'s Lexis and Nexis for articles on the topics that have been interesting in the past, and then I try to find new ones by searching on such topics as probability theory and statistical theory. I thus get a long list of candidate topics. One of the wonders of modern electronic roads is that all these newspapers — the Los Angeles Times, the New York Times, yesterday's paper — are all there today, and so you are at most one day late in searching for these things. When I find articles that are interesting, I try to write a little
abstract of them. Then I simply download these articles to our database. All of this is done electronically; I never print anything.
MYERS: How are those newspapers accessible?
SNELL: Via gopher. Sign on to chance.dartmouth.edu, look into the Full Text folder, and there they are.
MYERS: But prior to that?
SNELL: They are on Lexis, Nexis, or any of a number of databases that have the current major newspapers on them. Finding articles from journals is not as easy. Newspapers seem to be more casual about allowing their current issues to appear on the database. The journals do not like that, and so they tend to be a month or so behind.
THISTED: How much of your time is spent preparing for and teaching this course?
SNELL: I guess all, most, lots. But then it is just something of a hobby with me; it is in a way my final fling. Other people have taught it; for example, Tom Moore has taught it at Grinnell and Bill Peterson has taught it at Middlebury, and they have followed a somewhat less crazy route by picking three or four topics, such as quality control or streaks in sports, so that to some extent they can prepare that material ahead of time. On the other hand, their courses were actually part of a writing curriculum course that, they said, was really murder; to try to do two things like this at once is just astronomically difficult. There is no doubt that it is a difficult course to teach.
When we applied to renew our NSF grant, they made it perfectly clear to us that if we would figure out some way to allow somebody in, say, Oshkosh to know what lesson one is, lesson two, lesson three, lesson four, they would like this course much better. They are very nervous, obviously, about how likely it is that this Chance course will make an impact in the sense of people simply being able to do it, just having the time and effort. We hope people will do something much more modest, namely, in their own courses or just occasionally, use one or two of these things that they get from our CHANCE database. I am sure many people already do it; the idea of using current events is certainly not something we invented. I suspect many of you talk about something you read in the New York Times and so on, but this database just makes it a little more accessible.
MYERS: A question for Prem Goel: To what extent does computing come into play in the Ohio State course on statistical practice?
GOEL: Throughout the year computing is an integral part of the course. You cannot do statistics today without computing.
MOHAMED MADI: What happened to the more traditional courses in the OSU statistics program?
GOEL: In the first year there are three sequences under way. The first is the mathematical statistics sequence, which is basically an introductory course. The second sequence is the real analysis sequence for the whole year, and the third sequence is the statistics practice course I described. Basically, everybody takes all three courses each quarter; note that the three books they have been using will change next year.
KETTENRING: Just to comment on the reaction from the students to the unstructured course, I have two thoughts. One is that their reaction may in some sense be an early litmus test, identifying students who are going to find these sorts of statistics that we are talking about beneficial. My second thought is that we are really providing these people a wonderful, early
exposure to what life is like in the real world, where it has nothing of that structure, and I hope that they appreciate some of what you are doing.
ROTHMAN: Students find identifying a series of questions to be less satisfactory than seeking specific answers. However, concentrating on what the right questions are is a better model than you might gather from talking with the students. It is not always acceptable to a student driven by a course grade, and I think that removing that course grade is the key to any successful transformation.
WARD: Again, what about the course grades for the OSU course, since it has been brought up again. Does Prem Goel have any comments about grades?
GOEL: For the statistical practice course? The grade was only Satisfactory or Unsatisfactory. Competition is a problem in grading. If you have four people working together, you cannot have grades of A, B, C, D, and F, because then you will have problems in terms of their cooperation with each other.
WARD: Do the students have some idea of the course objectives, so that they can know how grades of Satisfactory and Unsatisfactory are determined?
GOEL: I believe they do, but still, it is something they have not seen in their undergraduate years, and so they are uneasy about it in the beginning.
TUCKER: Are the faculty for the statistical practice course members with secure positions who do not have to worry about how devoting time to such a course might affect a tenure review?
GOEL: Yes. Both of the faculty members involved are senior associate professors, and so they are not worried about such things. We had four volunteers for this course, and I, as department chair at that time, chose two of them knowing that they would work very well together. It is important that a good team be selected and that the team people can work with each other. In this case, both these fellows worked with each other quite well.
ROSENBERGER: This is a first-year course?
GOEL: That is correct.
ROSENBERGER: We at Pennsylvania State University have worked on something similar in our first-year courses, but the question we discussed and have not resolved is whether there should be an advanced, second-year data analysis course for PhD students. It would replace a menu of topic courses that they now take; do you think from the experience at OSU that such a thing should be attempted?
GOEL: That is an interesting question, because we were not sure ourselves whether it should be a first-year course or a second-year course. However, if you want to implement something you have to keep in mind what the context is going to be. Quite a few of the OSU statistics faculty were very unhappy about so-called tool box courses that were being taught in our program. We considered this to be one way to solve that; it would be introduced the first year, and, hopefully, after three years of evaluation, it would then be decided if it should be in the second year. However, by that time, the tool box courses will not be in the program anyway. So it is not clear that the first year is the best time to do it; in that I agree with you.
SOLOMON: We at North Carolina State University have added such a requirement: we have had for some time a second course in methods. It was to address what our graduates, who in our case are often employed in the pharmaceutical industry and similar places, were telling
us they were using in their jobs. To try to keep the curriculum manageable, we decided to collect many of those topics into a single one-semester course that introduces students to the existence of the topics, points them to the literature, and gives them a notion about the topics without going into the kind of detail given in, for instance, a whole course on survival analysis. This sort of survey is what we at NCSU call intermediate methods.
GOEL: I forgot to mention that we at OSU use seminars partially for the purpose of educating the students. We have nearly 2 seminars per week; in a 10-week quarter we may have about 17 seminars. One is supposed to be theory and one is supposed to be applied, again a counterpoint. The seminar is a course for students; they get one credit hour for it every quarter. Students are to attend 9 out of 17 or 18 seminars per quarter. We invite a number of faculty on campus for either of the two seminars each week so that they can present what problems are being addressed at Ohio State. It becomes a good tool for students to learn about the scope of statistics. The only problem is that students do not necessarily understand all the details, and so we tell them, "If you can understand the first ten minutes, you are doing fine; do not worry about it."
SOLOMON: We at NCSU are large enough that we can afford to offer quite a variety of courses, and at the graduate level we have introduced over the years a number of courses called Applied X, for example, applied time series, applied whatever, with the target audience being graduate students from other disciplines. But who is taking those courses? All the statistics majors, the master's degree students, as well as the originally intended audience. But they get the word that it is useful stuff.
ROTHMAN: Concerning the statistical practice course's content, I noted that regression and all the standard things were included, but one thing I did not see, which may nevertheless be in the course, is a requirement that the student give a presentation. We talked about written expression. But I think that what we have now is a graphics medium that enables us to express ourselves in a very different way — much the same way as we present results in a scientific paper. Presenting the results of an analysis in a graphics mode has only begun to get the attention of our faculty and researchers but should, I think, be integral to such a course. Do you do that at OSU?
GOEL: Not at present, because that is more or less covered in the consulting course rather than in the statistical practice course.
ROTHMAN: It would be nice to hear what in fact you do, what principles you advocate and so on. Another aspect of the course has to do with design. Design often focuses on randomization and procedures like that, when in fact most of what is out there are observational studies. Yet I believe we have a lot to share about what are interesting things to look at in observational studies, such as what some of their limitations are. To what extent do you talk about that?
GOEL: That is addressed quite extensively in the course, in the first quarter, in fact.
MYERS: To what extent do you include things in the course that require students to look at the literature?
GOEL: Not to a great extent as yet. If we ever make this course a second-year course, we may do more of that. They currently look for some examples of real data that relate to projects, but they do not look at the literature as such.
MYERS: I teach a spatial statistics course in the mathematics department that is populated almost entirely by students from the statistics department. I require that they do three journal-article reviews. These are probably not done in real depth. I ask them to identify what problem is discussed in the paper, use some kind of "legal review" technique or approach, and perhaps make some comments about the usefulness of what has been obtained. In part, I really want them to simply look at the literature and see what is there, but it forces them to use the literature.
GOEL: I do quite a bit of that in the second core consulting course.
FIENBERG: For a similar course at CMU, one that evolved away from what I did some ten years ago rather than solidified it, efforts are now made to reach out and draw in both faculty members and graduate students from other areas. The very exciting thing that happened to our curriculum and our students over the last six years was that we began involving students early on in projects. Even master's degree students have actually either collaborated with graduate students or been part of a team, albeit sometimes a junior member in a team. Currently, I am at the point where they have written up the material and have presented it, which is sometimes done jointly with a graduate student in another department. You often get a four-person team — two faculty members and two graduate students.
I believe you have to actively reach out; again, that is an even more complex kind of mentorship-apprenticeship relationship, but it is absolutely essential in order to do what I call interdisciplinary research. You must actually immerse yourself, even for a restricted period of time, in that culture. You have to go to them; you cannot expect them to come to you, because when they come to you, they will narrow the question that they ask and you may not know what they are really doing.
MYERS: A question for Steve Fienberg: Have you thought about the distinction between statistics in the law and statistics in public policy?
FIENBERG: What I call statistics in the law is a very focused and structured thing. Almost everything else I do is in a much larger sense statistics and public policy. Laurie Snell's presentation was very illustrative. There are a number of absolutely critical overlays that we do not teach our students in a typical statistics class, and I think that they are essential. The word "ethics" does not come up very much in the statistics curriculum. But if you go out and do a real project, it comes up all the time in disguise.
The notion of social values defining how fields structure their language and their investigations, and how public policy making is structured is something that nobody talks about in statistics journals and professional meetings, let alone with statistics students. When you get into things such as this, you are drawing in a much more complex array of issues than those that we traditionally talk about within statistics — ones that would come up much more naturally in, perhaps, an ethics course in a philosophy department. I think that that is an exciting aspect for me as a statistician, and it raises yet another set of interdisciplinary questions that I find quite challenging.
BAILAR: I said yesterday that I had set aside the entirety of what I came prepared to talk about and presented something entirely different. One of the things I left out has to do with the teaching of ethics. I teach a course in advanced epidemiology. It is presented in a combined department, so that there is also a lot of biostatistics in it. It is basically designed to be the last course that our PhD candidates take before they spend full time on their dissertations.
I started with one hour — by student demand, it is now up to six, and next year it is going to go up to 10 — in which we talk about ethical problems. It has something of a seminar format; the students do as much talking as I do, sometimes more. We talk about the smaller ethical issues; we do not spend much time on fabrication of data, falsification, and plagiarism — the big three of scientific fraud. We do not spend any time on those because everybody agrees those are wrong! There are no issues with them.
We spend a lot of time talking about cutting corners, which I feel is a much more serious threat to the progress of science, and about how to identify the lesser sins: namely, not telling people about all the things that went wrong in your experiments, or about how much you are leaving out that might reflect badly on the initial design, or about when in the course of the work you, the statistician, or you, the investigator, actually came up with the hypothesis that you present with a P-value. All of this I present in an ethical forum — stressing what actions are wrong — and the students gobble it up. Every year they have asked for more, and they have said something I think is a terrible reflection on the prior teaching they have had both in our department and, I suspect, everywhere else. They say that nobody has ever mentioned these things to them anywhere before as ethical problems. It makes my colleagues very uncomfortable, but the students are clearly very interested in this.
THISTED: I would like to ask John Bailar if his colleagues are uncomfortable about making value judgments or if they are uncomfortable in talking about it to the students?
BAILAR: Half of it, I believe, is that they are uncomfortable talking about this. They do not like to think that deliberate deception, cutting corners, and shaving results is really a part of science; yet you know and I know that it is, and we had better recognize it so that we can do something about it. The other half is that they do not feel this is really part of statistics, and I think that they are flat wrong.
MORRIS: Regarding on-the-job training, we need to prepare the students essentially to be fast learners or to know where the references are. If we are not going to teach them everything, we need to give them the ability to teach themselves and, obviously, to network with other statisticians.
ROTHMAN: I think ethics is an integral part of what we do as statisticians. I introduce it in the Michigan class as one of the many constraints in designing a study. I think it is interesting for students not just because it is important, but also because of the relatively recent history of ethics in experimentation. We have names for some of the things that you talked about, going back, for example, to Charles Babbage, such as "cooking" data and trimming data, where cooking means serving up only what you feel is fit for consumption to support your case. This is not something that has been around for centuries or even a hundred years. Much of the ethical constraint on randomized experimentation came out of the Nuremberg trials and dates only from 1948. I think ethics is something to truly get students thinking, and that it is absolutely essential for them to grasp.
FIENBERG: When I first raised this ethics dimension, I did so in a very special context, and I think it is great that John Bailar brought out some of these other aspects that Ed has amplified. It is striking to me that over the last decade, I found myself drawn more and more into issues to which the word "ethics" could be attached that did not start out looking to be that. A number of years ago I was involved in a project that ultimately came under the label "sharing of research data," where the goal was to achieve access to data sets largely for secondary
analysis, something that fits quite naturally into the theme of this symposium because it provides terrific resources for classroom use. We discovered the problems about access; some of them are ethical and some of them are practical. In going from field to field, you discover that within fields the ethical problems are rarely discussed. As that work has evolved, so also has my involvement with it. It seems ever more the responsibility of statisticians, not because we are a higher moral authority, but because of our ability to generalize, pulling together the examples that we see in multiple fields, and pointing out ethical issues that others have not always identified as such.
I take a number of John Bailar's examples as illustrative of that. I see it everywhere in what I do and I now view things differently than I did earlier. I now look for vehicles for sharing that with students that I did not use a decade ago, before I became involved with this.
BAILAR: We at McGill spend an hour, sometimes two hours, talking about sharing data, during which I always cite the National Research Council's nice report Sharing Research Data (NRC, 1985).
JAIME GRABINSKY: Many times when we are teaching civil engineers, mechanical engineers, or physicists, we do not present the problems that really are of concern to them. For example, I know of one book on engineering statistics that has many interesting problems for civil engineers, but it was of course written by a civil engineer. I am not able to find the types of books and materials that deal with other subjects, especially for engineers. I have looked through many books in physics and many books in statistics, and I know there are many important statistical problems in physics that are not appearing in the statistics books, as well as many topics in statistics that are not appearing in physics books. So there is a lack of communication, a lack of mental infrastructure, if you will, that we need to build.
SOLOMON: I came across a book that is used in a graduate course in the chemistry department, Statistics for Analytical Chemistry (Miller and Miller, 1993). It is not very thick, yet in a couple of hundred pages it covers material that, if we were teaching it in our statistics department, would require chemistry students to take four or five courses; they do it all in the one course in chemistry. Also, in briefly looking through the book, I found that it seems to be largely what we statisticians would judge as correct and even uses much of our language.
SNELL: This relates to a problem that has not been solved and I think never will be solved. In a way, the most depressing thing I heard at this symposium was that a person would have to spend a year or two to learn what "bootstrap" was all about. That was one thing I learned a few years ago and thought was a wonderful idea, and I thought I understood it. I really do not know what the answer is to the question of how to teach it all. I do not know much about statistics — it is not my field; but in mathematics people have talked about the same problem. There everybody says, "What we need are students who really can think on their feet and are not interested in all these specialized things." They talk about horizontal knowledge instead of vertical knowledge — William Thurston is a key spokesman for that. Yet I know very well that when such a student applies to Princeton for graduate work in mathematics, they want to know whether the student knows this or that and so on; it is probably true in statistics, too. In other words, when you admit students to graduate work in statistics, do you really look at whether they are interested in different ideas and different kinds of work, or do you find out what courses they have had and how much statistics they know? Is horizontal knowledge
valued, or is the focus on how the student is going to be able to handle this very difficult program and the rather theoretical work he or she faces? And there you probably prefer the applicant who makes you think it is going to be a safe bet.
I do all of my current teaching with somebody who is now regarded as so brilliant that everybody would give him tenure, no problem; but he had a lot of trouble getting into graduate school because he really was interested in ideas and not specific facts.
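[For readers unfamiliar with the bootstrap that Snell mentions, the core idea fits in a few lines: resample the observed data with replacement, recompute the statistic each time, and use the spread of the recomputed values to gauge the statistic's variability. The following is a minimal illustrative sketch; the data values are invented for the example and do not come from the discussion.]

```python
import random

# Observed sample (invented, for illustration only).
data = [2.1, 2.5, 3.0, 2.2, 2.8, 3.4, 2.6, 2.9]

def mean(xs):
    return sum(xs) / len(xs)

# Bootstrap: draw B resamples of the same size, with replacement,
# and recompute the statistic (here, the mean) on each resample.
random.seed(0)
B = 1000
boot_means = sorted(
    mean([random.choice(data) for _ in data]) for _ in range(B)
)

# A 95% percentile interval for the mean, read off the sorted
# bootstrap distribution.
lo, hi = boot_means[int(0.025 * B)], boot_means[int(0.975 * B)]
print(round(mean(data), 3), round(lo, 3), round(hi, 3))
```

The same recipe works for medians, correlations, or regression coefficients by swapping out the statistic being recomputed, which is much of why the idea spread so quickly once introduced.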
Miller, J. C., and J. N. Miller. 1993. Statistics for Analytical Chemistry. 3rd ed. Englewood Cliffs, N.J.: Prentice Hall. 256 pp.
National Research Council. 1985. Sharing Research Data. Committee on National Statistics. Washington, D.C.: National Academy Press.