ROTHMAN: I would like to hear just a few things about the reward system.
JOHN LEHOCZKY: You may not believe this, but in terms of value and cross-disciplinary research, at CMU we actually value applications papers, papers in journals that are not statistics journals, equally with — I would say more than — non-applications papers. And we value those just as much in our year-to-year performance appraisals, which determine year-to-year salaries, let alone in promotion and tenure. We value those contributions in the same coin as contributions to JASA or the Annals of Statistics or whatever your favorite statistics journals may be.
We have those applications papers reviewed. If we ourselves cannot review them, because we cannot always judge such contributions, we have them reviewed by substantive experts to assure ourselves that they are in fact good contributions. We go through that extra step.
We do expect our faculty to have excellent credentials in some areas of the core discipline of statistics, whether it be Bayesian statistics or probability theory or time series, or whatever the classic areas may be. The person has to have recognized standing in some area or set of areas. The individual has to have credentials in statistics. And we want that person to be collaborating with other faculty members in the department, because that is a very important way our department works, and to be collaborating with subject matter experts in other departments in the university or outside the university.
So we value those applications and non-applications aspects in the same coin. Molly Hahn wondered earlier about mathematics departments, and I do, too. I think that for statistics departments that sit within mathematics departments, whose faculty members are evaluated alongside their mathematician colleagues, this will be incredibly difficult to pull off.
JEAN THIEBAUX: I did not hear John Lehoczky suggest that students with specific other disciplinary backgrounds be recruited to statistics graduate programs. Doing that is a different way of creating cross-disciplinary graduates, rather than retrofitting them. It does not take time from graduate concentration in statistics. Has CMU looked at that possibility?
LEHOCZKY: Our original master's program had as a concept that students could have a disciplinary area of their own, whether it be biology or oceanography, whatever the field would happen to be, and would study statistics. Our faculty strongly endorsed that as a concept. There is unanimity in our feeling that we are very interested in such students. But I think it is simply a failure of the recruiting process that we are not seeing those students. We are just not getting the applicants to have the opportunity to bring them into the program. And I think the failure is ours; it is a marketing question. But I agree wholeheartedly with the spirit.
EDDY: I am very interested in Joan Garfield's references to research on using collaborative work as a way to improve teaching. I am starting to teach a course in probability theory, and I want to try some collaborative learning methods that I have never tried before.
JOAN GARFIELD: A paper of mine that just appeared in the new, first issue of the Journal of Statistics Education, an electronic journal available on gopher, is specifically on using cooperative learning in teaching statistics. Concerning the research that says this is a more effective way of learning, some of my colleagues at the University of Minnesota, David and Roger Johnson, have put together a huge literature review [see p. 46, above]; I believe they cite over 250 articles that have shown that students do tend to learn better, that is, achievement seems to be higher, when they work in groups.
ANTONIAK: We technical types tend to be more impressed personally with these computer-generated animated demonstrations of principles. We tend to think that the right way to get a concept across is to find a new, good way of demonstrating something, by being colorful. What comes to my mind is Tom Apostol's work with Project Mathematics in which, for example, there are very neat demonstrations of the theorem of Pythagoras. Joan Garfield's presentation focused mainly on the methodology, the dynamics, the learning environment, the interpersonal dynamics between the students and the teacher. Have any studies been done on whether the cleverness of the demonstration accounts for 30 percent, or 50 percent, or whatever, of what is required? Or are you really saying that the biggest problem is in recognizing a different way to go about teaching any kind of material of this type?
GARFIELD: I am not sure I understood the question. I first thought you were going to ask me if there was research on the use of computer graphics, demonstrations, and so on, and would that have an impact on student learning? Is that part of what you were asking me?
ANTONIAK: Yes, basically.
GARFIELD: I think that studies are starting to be done on the use of different kinds of software, the ways students interact with them, and the most effective ways to use such software to help students learn. What I have seen so far is very encouraging. With technology offering such sophisticated ways to demonstrate things that we have never been able to present before, it seems to offer the potential to help students understand very complex concepts better than was previously possible. But I cannot say that there is a body of literature out there that supports that right now.
JOHN TUCKER: Computer technology can be very effective in improving student learning, especially in reinforcing class presentations and for self-paced instruction, but its effectiveness strongly depends on how well or poorly the software is designed.
GARFIELD: Right, and how well students are able to interact with it, whether it is just a demonstration or whether it permits them to manipulate variables.
ROTHMAN: I would like to address this issue of grading and assessment and also encouraging cooperative learning at the same time. If in fact we put people in competition for a grade, do we not undermine the purpose of learning?
GARFIELD: I think I have a different view of assessment than that. My idea of assessment is giving feedback to students on their learning, not just handing them an end-of-the-term grade. It is more an ongoing interaction with the student that says, here are some areas of weakness, here are things I think you need to work on. I view assessment as an ongoing process, and as a very complex process where ideally we would be giving feedback to students on their statistical knowledge, how well they apply it, how well they communicate it, and so on.
I think that assessment is very much a part of collaborative activity because if a group works together and turns in a product, they need feedback on how well they did on that product. I know that most professors view assessment as grading, and I think that issues do come up when you are grading group work and students are worried about their grades. There have been different suggestions in the literature on different approaches to dealing with that.
ROTHMAN: Specifically, how do you feel about comparison between students? If you base assessment on how well they present their work, you are making a comparison, relative to someone else rather than to what that student has already done.
GARFIELD: I guess I do not see assessment as comparison to other students. I see it more as comparison to a standard: "Here is what we would like you to be able to do, and you are not there yet, but here are some suggestions for areas you should work on." I personally do not think of assessment as a way of comparing students to each other. I do not do rankings. I believe in more of a mastery approach, whereby if every student in the class masters things to the level I am looking for, they will all get the same grades.
JAMES ROSENBERGER: The idea that statistics is at the hub of a hub-and-spoke paradigm is one of the themes I have encountered here that Pennsylvania State University is probably not aware of. I wonder if we need to do a better job of selling the statistics discipline to the rest of the academic community.
MORRIS: I am afraid so. But let us get started.
FIENBERG: One of the problems that is going to come up repeatedly, and has been alluded to everywhere, is how to fit everything in. In reflecting, I have been associated in one form or another with at least five different departments over my career, and every department has had this problem. So it was not a problem that only I encountered; indeed, it existed at Harvard when I was associated with that university early in my career.
John Lehoczky was correct in saying there is clear agreement on the goals and the importance of data-analytic and cross-disciplinary training. But it is also very clear to me that there is not unanimity at CMU about the curricular details. Further, one could probably take any pair of people who, viewed from afar, would seem to agree, and find that they think very differently about the curriculum.
A number of years ago, when I was at Minnesota — before Joan was there — I observed that, when put together in a room, the faculty demanded the union of the knowledge of all of the people in the room rather than the intersection. A consequence is that you add course requirements and you never take them away. If allowed to go to its ultimate end, you have an infinite-year curriculum, a curriculum that cannot work. So there is a serious problem here.
The other observation comes from my life as an administrator, which is now over; I am now languishing back into the field of being a faculty member. We in statistics are not alone. We talk about this as if it were unique to statistics, but in fact every field in every university faces these decisions. In fact, the pace of curricular reform and change is similar in other places, and indeed, I believe statistics in many respects is moving more rapidly ahead. In my most recent administrative role as a vice president, I was astonished at the slowness of some fields' willingness to embrace the notion that you had to reexamine what you were doing, let alone change it.
What I would commend to everybody is to think not in terms of 2 or 4 or 5 or even 10 years as the increment for comparison, but to think in terms of generations and centuries. If you go back a century, statistics did not exist as a discipline. If you look at universities a hundred years ago, there was no English department, because English did not exist as a separate, identifiable field. And therefore, anybody who tells you that you cannot change the curriculum over that length of time is just talking from ignorance. If you take that long-term view, you know that change has to occur, and the question is how rapidly you make it happen and how acceptable it is to be making changes regularly. Statistics as a field has actually been a good model for that. The notion of process control, where you do make regular changes and adapt, is something that we have been teaching others for years. Perhaps it is appropriate to bring that back and use it ourselves as we adapt.
MORRIS: It is always easier to make change when departments are being built. The first statistics departments in the United States were formed just before World War II. Changing is much harder once you become institutionalized. Statistics is going to be more like the classics department the next time.
SACKS: Joan Garfield offered a set of tactics to go with the strategy that had been discussed before by Peter Bickel. Do you have, or do you know if there has been attempted, an assessment of the cost, in terms of time or resources, of implementing those sorts of tactics at a graduate level, or even at an undergraduate level, or whether it is cost-effective to do so?
GARFIELD: I do not know how to answer that. I have lots of suggestions on how it can be done, but I am not directly involved in a statistics program, and so I cannot speak to what the cost would be.
ROTHMAN: At the University of Michigan, we have one class in which we use mastery assessments or portfolios rather than tests at the end of the period of time. Students demonstrate their understanding by writing something that indicates that they understand the facts and that they can apply the facts to situations that have not been described in class. They have to go to a newspaper or a scientific journal and say, "Here is an application of this principle to some other situation." They get some feedback from the teaching fellow, and then they either have mastered the topic or have to revisit it. So the grade is "mastered" or "not yet." That simple change from assigning numbers as grades is very important because it focuses on learning as opposed to performance on tests. We are out there trying to encourage learning.
We have a class of 250 students. Even with teaching fellows involved for more than one term in this section, plus two other graders, it is a full-time job just to implement that seemingly small change. We are trying to find new ways of doing it by putting more of the burden on the student, and getting some software that allows them to check their own work. We have the $4,000 to do that this summer and will see how it plays out. We are going to have to do a lot of work to get the cost down.
The bottom line is that, from my understanding, it is going to be a very expensive policy.
EDDY: What we have been talking about seems to me to be the distinction between educating students and training them in something specific, and that the historical mode of lectures is to drum the information into them. What these various things that have been talked about this afternoon really focus on is educating students so they have the tools and savvy for these situations, and not worrying so much about training in the specifics.
In thinking about this, I am still struggling with my earlier question of how we are going to teach them all of these things. The answer is that we are not, and we do not have to worry so much about it.
Also, John Lehoczky omitted mentioning one of the other mechanisms that we at CMU have incorporated in the last few years, namely, small groups of students and faculty members that get together. We now have six or seven of these groups that meet two or three nights a week, in which the students have to make presentations. The setting is much smaller; it is not a course or anything like that, it is just a get-together or workshop. In the ones in which I play a role, in the course of a month the students probably make one or two presentations. They get feedback on the communications part and on the technical part, and their fellow students get exposed to whatever ideas they are discussing. So there are other mechanisms that are imparting the knowledge that they did not get in course work, and in realizing this I actually feel much better about it than I did earlier.
JAMES LANDWEHR: I want to comment about this issue of how one covers more in the same amount of time, from the perspectives of having worked with the Quantitative Literacy Project and of trying to get high school and middle school teachers to teach more statistics. Of course, if you say to a mathematics teacher that, in addition to everything else, he or she now should be teaching statistics and the students should understand statistics by the time they graduate, the immediate response is, "Well, what can I throw out? You tell me what to throw out, what is not important, and I will consider it."
Eventually other teachers may say to them something like the following: think about it differently. Instead of spending a lot of time teaching the algebra of the straight line so abstractly that the students do not get it anyway, which takes more time than it should, give the students some real data and ask them what it means; talk about scatter plots, look at association, and eventually ask the class, "Could a straight line help us in understanding this?" We may end up not only teaching some statistics, but also teaching the simple algebra better — and I am now repeating what teachers tell me — so that the kids end up learning the algebra of the straight line better than they did the traditional way.
So if you can come up with a different method, sometimes you can kill two birds with one stone. That is not to say it is easy, but I think this is a perspective worth taking, as opposed to the "What-can-I-get-rid-of?" perspective.
BICKEL: I have to agree completely with Steve Fienberg that when you get the department together, the tendency, of course, is always toward the conclusion that one must offer the union of all things rather than the intersection. Furthermore, it is driven not only by the faculty interests that lead to that union, but also by what skills are, say, most relevant in the environment. For universities in the Washington, D.C., area, focusing on survey sampling and statistical policy may be most relevant; for the University of Michigan, focusing on Teach UM and its various aspects may be what is most important. On the other hand, it is clear that it is impossible to offer everything. So there has to be selection. Nothing prevents there being a large number of topics available, as long as you can get the faculty to agree on the intersection, that is, on what every student should have some exposure to. The other things can then be offered as time permits or as faculty members are willing to teach them.
BODAPATI GANDHI: There has been discussion about cost. We try to extend the basic philosophy of quantitative literacy to undergraduate students who are taking undergraduate introductory courses. We ask them to do a project instead of a third partial examination. Each student has to collect primary and secondary data, prepare a small proposal, and then analyze the data on a computer. It is done at no cost to the university. There have been about 10 sections with each section containing about 34 students, so that around 300 students total have experienced this over two semesters. In the first semester they focus on descriptive statistics, and in the second semester they do a regression analysis project. We offered it this year also, and it was very successful, according to comments of the faculty.
SNELL: I was just going to comment that we at Dartmouth, like many other people, have experimented; some approaches are very expensive, and others are very cheap. I heard a wonderful presentation the other day about a physics program at Harvard in which the instructor of a class of a couple hundred students simply comes in each day with a very short question, gives the students a few minutes to think of an answer, and then has them turn around, discuss the problem with a neighbor, and come to a joint conclusion. This is done simply to show them, by first-hand experience, how much better they do after they have talked about a problem a little with each other. He happened to have a lot of high-technology equipment that probably was expensive, because all the statistical results were displayed at the front of the room almost instantly, with automatic recording and such things, to show the students how much they had learned from just one or two minutes of discussion with their neighbors.
But the basic method itself is very simple and does not cost a lot of money.