Respondent: What Is Interdisciplinary Research?
Stephen E. Fienberg
Carnegie Mellon University
Only a week or so ago, I was very concerned about having to respond to the case studies because I had received no papers to read, despite promises that papers would be available in June. I became even more concerned as the date of this symposium got closer, and I arrived in San Francisco with nothing of my own on paper. I finally acquired a copy of the third talk and made some notes, trying to anticipate how the presentations would turn out. In the end, I have been pleasantly surprised, because the case studies and the discussion during the past two days have in fact fit into the framework that I laid out in my notes prior to the start of the symposium. Thus, my response is a partially planned presentation and discussion, but linked to topics that have emerged over the last two days, and in particular to issues raised in the three case studies.
To my mind, there were actually four case studies; the first was presented yesterday, with John Lehoczky describing the approach that my colleagues at Carnegie Mellon have taken, and I will make reference to it as well, partly because I am familiar with it and linked into it.
What Is Interdisciplinarity?
As I read preliminary versions of yesterday's presentations, I decided that there was some confusion between three different terms: "interdisciplinary," "multidisciplinary," and "cross-disciplinary." Thus I begin my response by focusing on what I think interdisciplinarity means, and then I will link my notion to the presentations we have heard today.
When I speak about multidisciplinarity, I mean people coming together, representing different disciplines, and somehow trying to work with one another but not necessarily changing their approach or adapting to the knowledge base or techniques of the other disciplines. This is not what I plan to talk about today.
When I speak of cross-disciplinarity, I have in mind people from one discipline, say chemistry, choosing to work in another discipline, say biology, and bringing to bear the tools of their original discipline in the new one. I think that this notion is closer to what we have been discussing, but it still falls short.
What is interdisciplinarity? In part, interdisciplinarity to me means people from two or more disciplines coming together, learning about the ideas and research in each other's fields, and then developing new ideas and research that meld those from the original disciplines. For the statistician attempting to engage in interdisciplinary activities, the first step is to learn what researchers in another discipline do and how they do it.
I have listed some of the things researchers in other disciplines do. They think, they speak, they have their own language, they theorize. And theory itself means something different
in each discipline with which I have ever had some involvement. They measure things. There is a form of empiricism and a style of measurement in each discipline, and tools for measurement. And then they evaluate empirical evidence, often with their own analytical and quantitative tools as well as with the standard statistical methods with which we are quite familiar. Statisticians must take these activities seriously if they plan to work with people from another field.
Four goals for statisticians who are going to engage in this kind of interdisciplinarity include, but are not restricted to, the following. A primary goal must be that you plan to contribute to the other field — that is, that you as a statistician have something to bring to your collaboration.
At the same time, if you are engaging in scientific collaboration you hope not only to contribute to the other field but also to draw something back to the field of statistics, and I distinguish between two kinds of impact on our field. First, one may draw back new statistical methods, or at least variations on methods that have been appropriate for the questions that you are investigating in the language of the other field involved in the collaboration.
But then the statistician occasionally takes the methods as embedded in that field, steps back, and thinks about them, linking them to methods from other disparate fields. The resulting integration of ideas and methods becomes applicable more generally. A unique feature of statistics as a discipline comes from the ability of statisticians to generalize from multiple disciplinary settings and to devise methodology that has broader applicability than to one field alone.
Something that we have not heard a lot about today and yesterday is my fourth category of goals for interdisciplinary collaborations. If you are really doing interdisciplinary work involving, say, geology and statistics, and if you are really successful and the problems are important, then what you may end up with is a melding of techniques, ideas, and language that do not fit naturally in either geology or in statistics, but indeed form what those who do interdisciplinary research refer to as an interdiscipline.
The exciting projects that I have been involved with are ones that take on that interdisciplinary character and go far beyond even a sustained consulting opportunity.
I thought that some personal examples from the various speakers at this symposium would help to describe interdisciplinary activities to others and explain why we as statisticians find them exciting. In that sense, I have been disappointed, especially because I know that many of the people in this room are working on substantive problems and making important contributions. The trouble is that we tend to speak in a meta-language. We step back from what we as statisticians are doing when we engage in interdisciplinary work, and thus we fail to convey to one another the excitement that goes with it.
That excitement of discovery and creativity is what I think all of us really want to see in the statistics curriculum.
Examples of Interdisciplinary Activities
I have listed three topics that are examples of interdisciplinary research that I have been engaged in off and on over the last little while:
Cognitive aspects of survey methodology,
Statistics and the law (including the use of DNA fingerprinting as evidence), and
Designing and evaluating bilingual educational programs.
I will talk about the first at length because I suspect it is the one you may be least familiar with. The second is something that has been mentioned several times in the last two days in different guises and for which I will provide only some references to my own work (see Fienberg, 1989, 1990; and Fienberg and Kaye, 1991).
The third topic was a project of a panel of the Committee on National Statistics (see Meyer and Fienberg, 1992) and focused on a critical public policy issue of how to educate students with limited proficiency in English. The panel involved statisticians, educators, linguists, and survey specialists, and it evaluated two major studies of different approaches to bilingual education. Now let me return to the first topic: cognitive aspects of survey methodology.
In 1980 I was engaged in work with colleagues studying crime victimization. This was interdisciplinary research, and it involved criminologists, social psychologists, sociologists, and statisticians. We were exploring how to understand victimization patterns in the United States, using data that had been collected as part of the National Crime Survey — a household survey launched in the 1970s under the sponsorship of what was then the Law Enforcement Assistance Administration and implemented by the Bureau of the Census. With the creation of the Bureau of Justice Statistics in 1979, there was an effort to reexamine the survey and, in particular, to try to come to grips with reporting bias. Virtually every expert on criminal victimization seemed to believe that we were not measuring enough crime, that is, that there were more crimes being committed than the respondents to the survey were reporting. This was bias that, everybody perceived, was associated less with sampling procedures and the traditional sources of nonsampling error than with the questionnaire — the fundamental design of the instrument used to measure victimization.
At a very early stage in our work, somebody said, "Maybe we do not know how people think about the questions that are being asked when interviewers go out in the field." And then someone else said, "Who knows about how people think?" "Well, psychologists worry about that," the first person responded. "Wouldn't it be interesting if we did something multidisciplinary and took a group of psychologists, a group of criminologists, and a group of people actually interested in the National Crime Survey and put them in a room and let them talk!"
In fact, we did just that in the form of a two-day workshop. Mainly what resulted was babble, because the psychologists had their language and they wanted to tell us about what they did, but only in their own language; and the criminologists had their language and the survey people had their language. What was interesting was that the three groups often used the same words, but they did not mean the same thing.
Nonetheless, there was an interesting dialogue. What happened after a while was that a couple of people went off and they actually talked about a problem that lay close to the interface of both their interests. Then a few more did the same. The result was that some of us got quite excited about the possibility that there was something more in this enterprise, even
though this particular workshop was not coming off all that well in that we were not learning how to redesign the National Crime Survey.
A few years later, the Committee on National Statistics (the other statistics committee at the National Academy of Sciences) organized a week-long workshop. We selected people from the fields of cognitive psychology, statistics, and survey research and asked them to explore in very different ways what each of the fields did, what they had in common, and what they could bring to one another in the form of research ideas and research opportunities. Further, we commissioned special background papers to try to draw together research ideas at the interface that would be explored at greater length.
Finally, we went to the people at the National Center for Health Statistics who did the National Health Interview Survey, and we asked them if we could use their survey to focus our attention.
Because we wanted all of the participants to have some common knowledge, we arranged for each one to be visited by one of the interviewers in the National Health Interview Survey (NHIS). That is, we arranged for somebody who would normally go out and interview a family somewhere in the United States to come to each of our homes. My wife, my older son, and I sat in the living room of our house and went through the hour-long interview for the NHIS. The interviewer did not know why she was coming to my house. She did know, however, that we were not part of the regular NHIS sample. In planning for the workshop, we also sent the interviewers out to carefully selected households — not because we knew what those in the households had to say, but rather because we needed their cooperation — and we filmed the interviews.
Later, during the course of the workshop, we actually got together in the evenings and watched the filmed interviews. We would be watching the videotapes and suddenly one of the psychologists would say, "My God! Look at what's happening here!" And then the psychologists would try to explain to us in psychological terms something that was happening in the interview. The governmental health statisticians thought of what was happening in the interview in very different ways, as did the statisticians from universities.
What we discovered was that the survey statisticians at the National Center for Health Statistics had been doing what we at this workshop were talking about earlier. They had a wonderful statistical tool — the household survey. It was like a hammer, and when you have a hammer, you look for nails. Well, they had a well-developed approach to survey-taking. They had a way to design questionnaires for household interviews, and they applied it to a series of issues on health. But when we watched the videotapes, what we learned was that the respondents to the health survey had stories to tell us about their personal lives and the consequences for their interactions with the health care system.
I have a vivid memory of watching a woman try to tell the interviewer that she had been in a car accident, and as a result of that car accident she had gone to see her doctor several days later. She did not leave work early any day, but she had had a headache and she had had "minor" symptoms. It was a full and consistent story, and it turned out that every other question in the NHIS questionnaire tapped into that story. The answer to each question that she wanted to give was related to this incident, and she had no natural way to explain this to the interviewer in the context of the NHIS questionnaire. So she would try to give the answer in her own
terms, but this did not satisfy the interviewer, who would repeat the question, but slightly more loudly than before, and so on and so forth.
Over the course of that workshop, we began to have dialogues across disciplinary lines, we learned the language of others, we worked together, we developed teams looking at different aspects of that particular survey, and we developed interdisciplinary research projects. Jon Kettenring was the first to use the word "teamwork" yesterday, and teams played a very important role in that enterprise.
The participants left that workshop and started small research projects, and they did this in teams. The people in the statistical agencies actually brought in cognitive psychologists to work on survey problems. They began to use some of the tools developed by cognitive researchers—for example, something called protocol analysis.
There is a very long story associated with this interdisciplinary set of activities that I will skip over, but I can tell you that there are now cognitive laboratories in three of the largest statistical agencies in Washington. If you read the program of the American Statistical Association meetings that are about to commence on Monday, you will find several sessions devoted to this topic. For example, if you attend the session on the redesign of the year 2000 census, the user-friendly census form being described there is linked to work of the cognitive laboratory in the Bureau of the Census.
Cognitive aspects of survey methodology (CASM) is an enterprise that has brought together people from multiple disciplines, and they have formed what I describe as an interdiscipline. They now have their own language, their own dialogue — and they will probably have several journals before we are all done.
Statistics has had a critical role in this story. As the CASM interdiscipline evolved, we began to see an interplay between cognitive laboratory work, tightly and loosely designed experiments in the laboratory, and survey field tests so that we understand what happens when you make the transition from one environment to another — something that as statisticians we have learned about in other scientific disciplines. Finally, there have been formal tests, with well-designed embedded randomized experiments within surveys so that researchers can measure bias, measure change, and integrate ideas in lots of ways. (For further details see Fienberg and Tanur, 1989, 1994; Jabine et al., 1984; and Tanur and Fienberg, 1992.)
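The embedded randomized experiments mentioned above can be sketched in miniature. In the following simulation, respondents are randomly assigned to one of two hypothetical question wordings, and the difference in reporting rates estimates the wording effect. All the numbers here (a 30% true victimization rate, reporting probabilities of 70% versus 95%) are illustrative assumptions, not figures from the NHIS or any real study.

```python
import random

random.seed(2)

# Hypothetical split-ballot experiment embedded in a survey:
# each respondent is randomized to question wording "A" or "B",
# and the gap in reporting rates estimates the wording effect.
def simulate_respondent(wording):
    # Assumed 30% true victimization rate; wording "B" is presumed
    # to cue recall and so recover more of the true incidents.
    truly_victimized = random.random() < 0.30
    report_prob = 0.95 if wording == "B" else 0.70
    return truly_victimized and random.random() < report_prob

n = 5000
reports_a = sum(simulate_respondent("A") for _ in range(n)) / n
reports_b = sum(simulate_respondent("B") for _ in range(n)) / n
print(f"Wording A reporting rate: {reports_a:.3f}")
print(f"Wording B reporting rate: {reports_b:.3f}")
print(f"Estimated wording effect: {reports_b - reports_a:.3f}")
```

Because the assignment is randomized within the same survey, the two groups are comparable, and the estimated difference isolates the questionnaire effect from the many other sources of nonsampling error.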
I did not actually plan to tell this story before coming here but I think it illustrates a number of themes evoked in this symposium, and it is a description of a very different kind of interdisciplinary activity that one does not normally associate with statistical education.
I want to go back for just a moment to my third example, bilingual education programs, mainly because of the word design. Ed Rothman said in the earlier discussion that most data are not designed; they come to us in different forms in the real world but with little planning. I do not think that came out in quite the way Ed intended. I agree with him that not all statistical studies come to us in the form of a carefully designed randomized control trial, although perhaps it would be better if more did; however, almost every study is designed, by hook or by crook, and the problem is that most studies are ill-designed and have no possibility of yielding the answer that people set out to provide. In the Committee on National Statistics (CNSTAT) panel study to evaluate programs in bilingual education, we had to learn about what bilingual education meant, what bilingual education theory was, what happens in classrooms, and so on. Then we asked, "What does it mean to design studies involving different forms of bilingual
education that will affect the development and achievement of students in the modern public education curriculum?" I will not tell you our answer, but I do commend our report to you.
How Do We Do Interdisciplinary Research?
My story on cognitive aspects of survey methodology was intended to give you the flavor of what is different about interdisciplinary research. I think it is very important to distinguish between interdisciplinary research and what has been referred to by Prem Goel, by Ed Rothman, and by a number of others as statistical consultation or collaboration. Even collaboration is not quite the same as interdisciplinary research, at least of the sort that I have described, but it comes close.
I want to reemphasize the design issue here, largely because when we talk about what goes into modern statistical education, we often are willing to take data from anywhere and think that that's a sensible exercise. I take issue with that. If you have an important question you have to answer, you must ask how to get data that will assist you. Only if you have thought seriously about data requirements and for some reason are not able to acquire the best data for the purposes, should you be prepared to ask: "Well, if we have not got these data, what can we say?" But I think that if you fail to ask the first question, the one of primary interest, then you have not fulfilled your job as a statistician. Therefore design plays a critical role when you are really collaborating with others, when you are really doing interdisciplinary work, as opposed to doing statistical consulting, when it is often too late to have any influence on what data get collected. While statisticians do not get to design all of the studies they collaborate on, they must learn to think in a design mode, rather than just in an analysis mode.
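Thinking in a design mode can be made concrete with a standard sample-size calculation: before any data are collected, ask how large a study must be to answer the question at all. The sketch below uses the usual normal-approximation formula for comparing two proportions; the 30% and 35% rates, the 5% two-sided significance level, and the 80% power are illustrative assumptions, not figures from any study discussed here.

```python
# A minimal sketch of design-mode thinking: how many respondents
# per group are needed to detect a hypothetical 5-point difference
# in rates (30% vs. 35%) with two-sided alpha = 0.05 and 80% power?
# Uses the standard normal-approximation formula for two proportions.
def sample_size_two_proportions(p1, p2, z_alpha=1.96, z_beta=0.84):
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

n_per_group = sample_size_two_proportions(0.30, 0.35)
print(f"Respondents needed per group: {n_per_group:.0f}")
```

Under these assumptions the answer is roughly 1,372 respondents per group — the kind of number that, asked in advance, can change what data get collected, and that is useless to ask once the data have already arrived.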
Building Applications Into Statistics Courses
Today's session was really about the case studies, about statisticians having attempted to build applications into the statistics curriculum. I want to pick up a few highlights and make some observations.
We actually had two presentations that focused more on the first course in statistics; and indeed, throughout the discussions at this symposium there has been a tension between how to teach the introductory courses versus where in the advanced statistics curriculum there is a place for serious applications. Ed Rothman described features of the first course developed at the University of Michigan, and Laurie Snell has told us about Chance, his new first-course effort developed at Dartmouth College, but transported elsewhere.
I think they are terrific examples. The detailed ideas and the illustrations Ed and Laurie have told us about are very good, and I think that the variation in approaches that Ed talked about is also important. I am not yet convinced about buying into the full framework and structure that Ed laid down, but I am excited about some of the components that he described and I suspect I will try some in my next undergraduate course.
I do think that, at some point, you need a philosophy and framework for statistics courses in a broader undergraduate curriculum or you get an unstructured course. And students tend to
react poorly to unstructured courses. Prem Goel talked about the problems that his colleagues were having in the first go-round of a graduate course built around applications, and one feature the students reacted to negatively was the lack of perceived structure in the problems they were seeing. It is one thing not to structure the applications problems in the sense of predigesting them for the students, but you had better have a structure in what you are trying to get across to the students. Students need to understand that there are different kinds of statistical structures for different substantive problems, and I think that that is probably part of the problem the faculty members at Ohio State found in starting up their enterprise.
I cannot help but take a moment and talk about Chance, because Bill Eddy and I were there at the beginning. Chance is a magazine, not a journal, and when we created it, we intentionally designed it to have something like the look and feel of Life, Time, or Sports Illustrated. I am envious of the resources Laurie has been able to draw upon. When I last taught an introductory statistics class, I did not have the same rich array of materials at my disposal. The world of computing has clearly changed. His suggestion of getting the students engaged in looking at examples through the use of the Internet has given me some great ideas about what to do when I am back in an introductory class.
The last time I taught the big introductory statistics class at Carnegie Mellon, I did do some of the things that Laurie described. I began each lecture with a newspaper article or something in the news. Actually, I did not create this idea. Fred Mosteller used to do something like that in a different form back when I was a teaching assistant at Harvard in the 1960s. In my class, I actually encouraged the students to go out and find statistical material and bring it into class for discussion. But you have got to be very careful if you try this approach, because if the students do not have resources at their fingertips, what you can get is a lot of mush, and then it is very hard to respond positively to the students' efforts.
What we have now is an abundance of resources, and if you organize it just a bit and encourage students to reach into those resources, I think that there are exciting opportunities. I also wish Laurie had been around selling copies of Chance to the libraries when we were getting going back in 1987 and 1988, because that was and remains a difficult activity.
You can also link these things Laurie spoke about to videotapes. A couple of years ago, I taught a short course with a former science reporter for the Washington Post, Victor Cohn. Vic had written a book called News & Numbers (Cohn, 1989), and he was viewed by the science journalists as a statistician. What we did was to share with them several recent issues of Chance, and we were very fortunate because at that time David Moore had just put together Against All Odds, the wonderful Public Broadcasting System series.
Those kinds of resources exist, not just in Against All Odds, but also in many other places. We have classrooms that are fully automated and where it is easy to integrate standard lectures with computer-based and video materials. This requires preparation, but it is very exciting for the students and for the teacher. Parts of the Against All Odds lectures were drawn from Chance stories, and so Vic and I showed several excerpts as part of our presentation. Students and even journalists enjoy multimedia presentations.
We have also had some discussion at this symposium about focused method courses. The problem associated with those courses is mainly that the examples almost always are tied to methods introduced in the course. So the students know that when the smokestack data out of Daniel and Wood (1971) is used in a regression course, it is there for them to address with a
linear regression model — even if they need to put in a nonlinear term later on, and even if all the assumptions are not fully satisfied. When they take these courses, students know the statistical tool to apply because it is the one that you taught last week just before you set the assignment.
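The point about students reaching for last week's tool can be illustrated with a toy dataset (made-up numbers, not the Daniel and Wood smokestack data): a straight-line fit to gently curved data leaves a systematic pattern in the residuals that signals the missing nonlinear term.

```python
# Illustrative data with a truly quadratic response; the values are
# invented for this sketch. A simple least-squares line is fit by
# the closed-form slope/intercept formulas, and the residuals are
# examined for the curvature the line cannot capture.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [x ** 2 for x in xs]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

residuals = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
print("Residuals:", [round(r, 1) for r in residuals])
# Positive at both ends and negative in the middle: a systematic
# pattern, not random scatter, telling the student the straight
# line is the wrong model.
```

The student who knows only the hammer of linear regression can still swing it here and discover that it does not work — which is exactly the diagnostic habit a problem-oriented course tries to build.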
Now, the difficulty comes when you want to study statistical problems rather than statistical methods. When you give students a problem, they do not have the hammer in their hand. They are not even sure if they should look for a hammer and nails. They do not know whether regression is appropriate, or categorical data analysis, or time-series methods. This is when they struggle. We have actually heard two different features of that struggle, and the Ohio State experience is telling.
That struggle involves students who do not necessarily have a tool kit to carry around to begin with and who thus have a real handicap. At least if they have regression analysis, then they can try it and find out that it does not work. But if they do not have at least one or two tools available to them, then it is very hard for them to address new statistical problems because they have not gotten into the mode of statistical thinking.
Yesterday, we heard from John Lehoczky, who described the Carnegie Mellon approach, in which the students come with at least a partial tool kit and are taught about how to draw in other tools. At the same time, however, they are reoriented to look at statistical problems, and they ultimately have to face the real test: taking a problem from beginning to end, working with others, and presenting the results of their effort.
I would emphasize the word "transition" here because I think that we do not yet understand — I speak for my colleagues at CMU as well as others that I have talked with — how you shift from one mode of statistical education to the other. You do not do it simply by stopping teaching statistics one way and starting to do it the other; you have to guide students into this second way of approaching problems involving, or crying out for, statistical thinking and methodology.
What Is MIUSE?
Now I want to ask a much harder question: What is this acronym — MIUSE — that expresses the topic of this symposium? Well, I can tell you what it is not; I am not sure I can tell you what it is. It is not the "alphabet soup" that John Lehoczky referred to yesterday, the collection of the latest "hot" computer-based data analysis techniques that fill our journals. These may be wonderful data analytic ideas — innovative, creative, and coming out faster than anybody can possibly keep up with — but they do not constitute MIUSE. It is not even dynamic graphics, whether they involve formal analysis or the kind of summary display that Ed Rothman was drawing our attention to a little while ago. It is not even statistical process control (SPC) and total quality management (TQM), a little more alphabet soup, although these are important tools for possible use in the enterprise.
How to Do It
The issue we have been getting close to, but not yet getting our hands on, is how to bring into the curriculum real interdisciplinary work of the sort described through my example earlier. Some relevant themes have come up over the last two days, and I thought I would at least make reference to a few of them.
The statistics faculty who teach about applied problems must actually do interdisciplinary research, or this just is not going to become a part of the curriculum that anybody will take seriously. One of the modes for accomplishing this is cross-appointments. This topic arose in a few presentations and then was either dismissed or at least set aside.
Over the course of my career I have had cross-appointments in theoretical biology, in social science, and in law. (My fellow responder on this panel, Ron Thisted, also has a substance-based cross-appointment at the University of Chicago Medical School.) A cross-appointment is not simply two sets of faculty meetings, two sets of students to be responsible for, and two department chairs or heads who determine your salary. A cross-appointment typically opens up for the statistician a wonderful source for problems for collaborative study. The person with the cross-appointment also needs to draw research problems back into the department of statistics and share them with others, not simply work on them in isolation. Statistics as a field has had a tradition of cross-appointments, but I suspect that we have had fewer than we should have had in recent years. There is a chicken-and-egg problem here. How do you get a cross-appointment in the law faculty if you do not have any expertise in the law, or at the interface of statistics and the law, that the law faculty members judge to be of value to them? So you have to work on this.
We have also talked about how you make room in the curriculum for interdisciplinary activities, and I mentioned in the discussion yesterday afternoon my view that we must look at the graduate statistics curriculum from a much broader perspective. Prem Goel raised the related problem of required courses in the graduate curriculum. If we ask about requirements one by one, then it is inevitable that some faculty members will support keeping each and every thing in the curriculum, without change, unless, of course, they wish to add additional required courses. All I can say is that we have adopted a much more flexible approach at Carnegie Mellon. This does not mean that we do not have similar kinds of pressures; they just manifest themselves in a different form.
There is another feature about how to do modern interdisciplinary university statistics education that we have addressed only in passing in this symposium, but I take the Carnegie Mellon example to illustrate my point. One faculty member does not take sole responsibility for the interdisciplinary part; a team of two makes it a little easier, but even two faculty members will end up having trouble if the class has more than a few students.
Essentially, what we are trying to do when we attempt to implement MIUSE is to teach our students how to work and think in an interdisciplinary environment. You do not do that by taking 30 students (or even 5 to 10 students) to watch a statistician work collaboratively with two or three colleagues. You must approach this either in very small groups or even one-on-one.
The CMU model of data analysis projects for our PhD students literally ties student and faculty together, working with someone else in a different field. There is this mentor-and-apprentice
relationship that is an essential part of getting students going on applied problems and helping them through the inevitable rough patches. This is a very expensive way of educating graduate students, and we have to recognize that.
Laurie Snell mentioned something for introductory statistics courses that I think is very important for graduate courses as well: our students are not empty vessels. They often have an interest and experience in substantive fields other than statistics. I had a first-year graduate student who is my new advisee come into my office last week and ask, "Is it okay if I take an economics course? Do you think the department would be upset?" I replied, "Well, the one thing I can tell you is that the department will certainly not be upset. We may have to help you juggle various requirements, but all you have to do is explain why you're interested in studying economics and I'm sure that there will be support for your request. We think it's important that you have substantive knowledge, because the more things you know beyond statistics, the better you will be as a statistician." I am sure that, in two or three years, this student will bring his substantive knowledge about economics to bear on a statistics problem in a way that we did not anticipate. I believe that we must learn how to utilize such interests on the part of our students.
It is important that we remind ourselves as statisticians, and our students, about the rich tradition of our field involving people who work both on statistical methods and in other areas of science. I made only a brief list, in advance of hearing Jean Thiebaux make a related point yesterday: Pierre-Simon de Laplace, Carl Friedrich Gauss, Francis Y. Edgeworth, R. A. Fisher, Jerzy Neyman, William G. Cochran, Morris H. Hansen, Frederick Mosteller, and John Tukey. These are all people who drew strength for their statistical ideas from other fields.
In fact, when we talked about training problem solvers yesterday, I wanted to stand up and say that this is not a new idea. I would encourage all of you to read a wonderful paper published in 1949 in Science, called "The Education of a Scientific Generalist." In this paper (Bode et al., 1949), Hendrik Bode, Fred Mosteller, John Tukey, and Charlie Winsor described a curriculum that we today might label as the education of a statistical generalist, the student who is a problem solver and can move into new areas, learn the substance, work on difficult problems, and then move on, taking the lessons learned and putting them to work on fresh problems.
We must also be careful in articulating the goal of bringing interdisciplinary approaches into the curriculum for our students. Even if you have faculty colleagues with a very rich tradition of working with others in multiple disciplines, not all of your students will feel comfortable moving from area to area.
An outcome of modern interdisciplinary university statistics education may well be that we train a number of students who develop a primary interest in one interdiscipline, in one area of application, and who may not move on to another interdisciplinary area for a long part of their careers. This is different from the notion of training lots of people who will be busy solving problems in several substantive areas simultaneously, or possibly sequentially. I think we are much more likely to produce the first kind of student than the second.
Finally, I note that you cannot simply expect materials for interdisciplinary statistics education to arise from your own resources. Even if you are a Fred Mosteller or a John Tukey, you are unlikely to have enough resources at your fingertips to get every student going on a different kind of substantive problem. So we will need to rely on our colleagues for assistance, both local colleagues and others elsewhere. I think that we need a set of interdisciplinary research resources at an advanced level. One approach is to develop something like what is being proposed by the Institute of Mathematical Statistics, an Annals of Applied Statistics. This would be the kind of publication that would publish case studies, get people excited about interdisciplinary work, and provide both the substance and the development of statistical ideas. These are things that statisticians are not accustomed to writing about in our typical journal articles. I hope that, if you have friends on the IMS Council, you will tell them that this is important for the field and that they should vote to support its development.
People have asked me why, if a topic such as cognitive aspects of survey methodology is so terrific, we have not introduced it in our graduate curriculum at Carnegie Mellon. I must confess that even we at CMU have work to do in developing modern interdisciplinary statistics education. There is no course on cognitive aspects of survey methodology, no course in statistics and the law, and not even one in statistics and public policy. All of these topics are near and dear to my heart. But we do try to introduce some of our students to these ideas, even if in an informal manner. I note with anticipation that, in three weeks' time, I will begin teaching the data analysis course for our PhD students. There I plan to address how to bring the kinds of interdisciplinary projects we have been discussing at this workshop into the mainstream of our graduate statistics curriculum. To me this is what MIUSE is all about.
References
Bode, H., F. Mosteller, J. Tukey, and C. Winsor. 1949. The education of a scientific generalist. Science 109:553-559.
Cohn, V. 1989. News & Numbers. Ames: Iowa State University Press.
Daniel, C., and F. S. Wood. 1971. Fitting Equations to Data. New York: Wiley.
Fienberg, S. E., ed. 1989. The Evolving Role of Statistics as Evidence in the Courts. New York: Springer-Verlag.
Fienberg, S. E. 1990. Legal likelihoods and a priori assessments: What goes where? Pp. 141-162 in Bayesian and Likelihood Methods in Statistics and Econometrics: Essays in Honor of George Barnard. S. Geisser, J. S. Hodges, S. J. Press, and A. Zellner, eds. Amsterdam: North-Holland.
Fienberg, S. E., and D. H. Kaye. 1991. Legal and statistical aspects of some mysterious clusters. J. R. Stat. Soc. A 154:61-74.
Fienberg, S. E., and J. M. Tanur. 1989. Combining cognitive and statistical approaches to survey design. Science 243:1017-1022.
Fienberg, S. E., and J. M. Tanur. 1994. Surveying an interdisciplinary CASM. In The Interdisciplinary Experience: Some Eclectic Reflections. L. Salter and A. Hearns, eds. Montreal: McGill-Queens Press. In press.
Jabine, T., M. L. Straf, J. M. Tanur, and R. Tourangeau. 1984. Cognitive Aspects of Survey Methodology: Building a Bridge Between Disciplines. Washington, D.C.: National Academy Press.
Meyer, M. M., and S. E. Fienberg. 1992. Assessing Evaluation Studies: The Case of Bilingual Education Strategies. Washington, D.C.: National Academy Press.
Tanur, J. M., and S. E. Fienberg. 1992. Cognitive aspects of surveys: Yesterday, today, and tomorrow. J. Off. Stat. 8:5-17.