
The Future of Statistical Software: Proceedings of a Forum (1991)

Chapter: Afternoon Discussion

Suggested Citation:"Afternoon Discussion." National Research Council. 1991. The Future of Statistical Software: Proceedings of a Forum. Washington, DC: The National Academies Press. doi: 10.17226/1910.

Afternoon Discussion

TURKAN GARDENIER (Equal Employment Opportunity Commission): I agree entirely with Andrew Kirsch on the need for configurable output. In litigation cases, an offense-defense strategy is played in which one has to selectively provide information to the opposing party without giving too much. If too much is provided, not only might they fail to comprehend it, but, attorneys have told me, it could also be used against the person providing it.

ANDREW KIRSCH: The same risks are faced by engineers who must present their data analyses to others.

TURKAN GARDENIER: It has sometimes been necessary to cut and paste the standard statistical output, which is not kosher. If asked, “Did you present all the information the computer gave?” do you answer yes or no? If you reply that the omitted information is not relevant to the issue, they say, “Present everything,” and then ask you to explain everything in depositions. A deposition can consume three hours on some trivial item in the computer output that is totally irrelevant to the issue, and frequently confuses the opposing party rather than shedding light on it. There needs to be a road map for selecting the information to be displayed, depending on the capability of the user to understand or use it.

ANDREW KIRSCH: That is as much an issue in learning as it is in litigation. I tell my students that there are professionals whose careers are based on adding new kinds of tests and checks on various kinds of statistical analyses. In order to know everything about the output, individuals have to make a career of it themselves.

PAUL VELLEMAN: There seems to be a mild contradiction in this. I love Andrew's conclusion that ease of use is more important. Somehow, software vendors never hear “please give us more ease of use” from users. They hear “please give us another feature.” Andrew began with a list of capabilities. If a vendor's package does not have one of those capabilities, and the vendor wants 3M to use that package, that missing capability must be added, rather than the program being made easier to use. If greater ease of use is of prime importance, and I believe it is, then that is what the package developers need to hear from the user community.

ANDREW KIRSCH: I agree. That very comment, on being willing to set aside adding new capabilities for the sake of ease of use, has been made by 3M to some people present today.

By the way, 3M does not try to meet all four of those categories (acceptable cost; core statistical capabilities; availability for a variety of hardware environments, with interfaces to common database, spreadsheet, and word processing programs and ready availability of support along with complete, readable documentation; and lastly, ease of use for infrequent users) with one software package. 3M has separate software for acceptance sampling, design of experiments, and so on. When going through an evaluation process, it is really hard to say what the core capabilities are.

ERIC STELLWAGEN (Business Forecast Systems, Inc.): As a developer of software, we never hear from our users concerning the vast majority of products that go out the door. The reason is that we strive very hard to make them easy to use.

Now, we have fallen into the trap of having different products aimed at different markets. Our most sophisticated product, statistically speaking, is the one on which we get the greatest number of comments, such as “It does not have this test” or “Do you not know about this other technique?” But those comments come not from infrequent users, but from people who use the product every day in their work.

The vast majority of our clients are the ones who once a month take a program off the shelf, do their sales forecast, and put it back on the shelf until next month. So the points expressed at this forum mirror exactly my experience.

PAUL TUKEY: I think we have to make a clear distinction between ease of use and oversimplification of the technique. I, too, believe in ease of use. For instance, one of the reasons I like S is that it has high-level objects that are easy to use, because they are self-describing. This means I do not have to be constantly reminding the system of all the attributes.

But there is the danger of mixing up ease of use with pretending that the world is simple when it is not. If naive people do naive analyses, e.g., fit a straight line when a straight line is not appropriate, they may miss what is actually happening, because they are using Lotus. Frankly, Lotus is not all that easy to use for statistics. Try to program a regression in the Lotus programming language; it is horrendous.

This is not a plea for oversimplified statistics. Some may worry that if software is made too easy to use, it sweeps the hard problems under the rug and everybody is left to believe that the world is always simple, when that is not necessarily so.

ANDREW KIRSCH: I tried to mention the idea of ease of flow through the program as well as that of limiting the amount of output generated. Certainly there is a danger in limiting output if it is reduced to the point of vast oversimplification. So users must keep focused on the things that are most important, because a large laundry list of output, of potential things to consider and act on, does not provide much focus.

FORREST YOUNG: Returning to the topic of guidance, one way of making systems easier to use is not to “cripple” them by taking options out or perhaps hiding options, but to make suggestions to the user as to which options, what kinds of analyses, are the ones that are appropriate at this point. The software should not give the appearance that there is only one thing the user can do, namely the thing that is being suggested by the system. Rather, the program should identify the things some expert thinks would be reasonable things to do next. That would certainly make systems easier to use for relatively new users.

TURKAN GARDENIER: Recently I received a demonstration copy of a menu-driven tutorial called Statistical Navigator that explains when to use statistical tests. It encompasses most of the usual modules but does not analyze data. It provides menus of questions and answers about the data it is given, and advises on which test to use.

DARYL PREGIBON: There is a publication that has been out for probably about 10 years from the University of Michigan, I believe from the School of Social Sciences. It is a small monograph or paperback that basically does what that software implements. This Navigator tutorial is probably one of the first to implement those ideas in software.

PAUL TUKEY: Daryl, your enthusiasm is clear about solving the expert system problem in the near future. What are your thoughts on extracting some of the lessons from that experience and incorporating pieces into some more classical kinds of statistical software packages, so that one has a better understanding of what are the good diagnostics and so on? Such programs would not necessarily always say what should be done next, but could at least perform some operations in the background.

DARYL PREGIBON: Bill DuMouchel mentioned something related to that, the concept of meta-data. These are qualitative features, such as the units of measurement and the range of a variable, that can and should be integrated into the software. One can go to great lengths to provide a certain amount of guidance, but it also requires some discipline. If you have a procedure to do a Box-Cox analysis, you want it to involve more than just the numerical values of X and Y; you also want this meta-data. Many statisticians have probably written software procedures for Box-Cox, but how many have written any that truly ask for or require this meta-data to be present? Yet in Box and Cox's paper [Box and Cox, 1964], approximately the first five pages were on the qualitative aspects of data transformations. Various guidelines were given there, and only after following them can one then do the optimization.

Anyone who has created software that implements only the procedure is at fault. Statisticians have factored all that qualitative meta-data out of their routines. There is much room for improvement in applying the lessons that have already been learned, simply by bringing that existing guidance into analyses. That would drastically improve the level of guidance available in statistical software and prevent indiscriminate transformation of variables when there is no rationale for doing so on physical or other grounds.
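[The point about a Box-Cox routine that insists on meta-data can be sketched in present-day terms. The code below is a hypothetical illustration, not the interface of any actual package; the function and field names (`boxcox_with_metadata`, `units`, `scale`) are invented for this sketch. It performs a qualitative screen, checking that meta-data is supplied, that the variable is on a ratio scale, and that all observations are positive, before doing any numerical optimization of the transformation parameter.]

```python
import math

def boxcox_loglik(y, lam):
    """Profile log-likelihood (up to a constant) of the Box-Cox
    parameter lambda for a sample y of positive values."""
    n = len(y)
    if lam == 0:
        z = [math.log(v) for v in y]
    else:
        z = [(v ** lam - 1) / lam for v in y]
    mean = sum(z) / n
    ss = sum((v - mean) ** 2 for v in z) / n
    # Variance term plus the Jacobian of the transformation.
    return -n / 2 * math.log(ss) + (lam - 1) * sum(math.log(v) for v in y)

def boxcox_with_metadata(y, metadata, lambdas=None):
    """Box-Cox analysis that requires qualitative meta-data
    before any numerical optimization is attempted."""
    # Qualitative screen, in the spirit of the guidance discussed above:
    for key in ("units", "scale"):
        if key not in metadata:
            raise ValueError(f"meta-data field '{key}' is required")
    if metadata["scale"] != "ratio":
        # A power transformation presupposes a meaningful zero point.
        raise ValueError("Box-Cox assumes a ratio-scale variable")
    if any(v <= 0 for v in y):
        raise ValueError("all observations must be positive")
    # Only after the qualitative checks pass do we optimize numerically,
    # here by a simple grid search over candidate lambda values.
    lambdas = lambdas or [i / 10 for i in range(-20, 21)]
    return max(lambdas, key=lambda lam: boxcox_loglik(y, lam))
```

A routine structured this way cannot be called "indiscriminately": the caller is forced to state what the variable measures and on what scale before any transformation is estimated.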

KEITH MULLER: This goes back to the ultimate expert question: when individuals come to a statistical consultant, often they are asking the wrong question of the data. Daryl's idea of meta-methods, in which a shell is built around the little pieces to try to build a somewhat bigger piece, is very attractive. But the insolubility of that problem of asking the wrong question may be the reason that many of us were originally so cynical about the feasibility of expert systems.

DARYL PREGIBON: I emphatically agree. The most interesting and difficult parts of an analysis problem are recognizing at the outset what the problem really is, and understanding what to disregard in order to reduce the problem to something for which one can draw up a sequence of steps.

CARL RUSSELL (U.S. Army Operational Test and Evaluation Command): When you mentioned mini-steps, I expected you to discuss the kind of implementation that is similar to the “next” function in both JMP and Data Desk. In those packages, whenever the user has an individual window, he or she can go up and essentially ask for the next step. Those little steps are already implemented, at least in a couple of packages.

DARYL PREGIBON: Everyone should explore this approach in more detail. Those vendors probably saw the light before I did on this, which is why they were successful where I was not.

WILLIAM DUMOUCHEL: Every menu system helps to some extent in that manner, because there is a particular structure: you want to do one set of analyses only in certain contexts, after other analyses have been completed.

FORREST YOUNG: What is your opinion about the future of expert systems?

WILLIAM DUMOUCHEL: That is a rather general question. I agree with Daryl about how hard the problem is in the absence of making use of context. The best thing a software company can do is to provide tools for users to design their own on-site expert environments. If a general tool is offered that has menu-building tools in it as well as other kinds of extensibility options, then a group at 3M who know what kind of data they usually use can fine-tune it and put in expertise.

The future effort toward incorporating expertise will be to try to make everything more concrete. Desktop metaphors work wonders in many aspects of using computers quickly and easily. If we can think of other metaphors that help with data analysis software, then somehow we can make this whole process more transparent. Expertise can then be invisible rather than explicit.

CLIFTON BAILEY: With expert systems, one is often asking to deal with models that are inherently not in the class of models normally considered in everyday statistical work. Also, in putting expertise into a product, one needs to be aware that there are certain symmetries and structures that would never be followed in a particular context.

ANDREW KIRSCH: To reinforce a well-made point of Daryl's, the necessary antecedent to statisticians making progress in the world of expert systems is to rethink and specify what it is that statisticians do. It is not just automating some system, but is more akin to asking in advance whether it is laid out in the best possible way. Simply automating may be an inferior practice. The problem here is not inferior practice, but that the practice has not been adequately specified.

AL BEST (Virginia Commonwealth University): These ideas are all very exciting, but clearly a lot more needs to be known before much can be done that is concrete in promulgating guidelines. Should there be guidelines on whether there needs to be a “next” function, or guidelines on whether a user should be allowed to get only a system table without a graph? There are many questions here.

Reference

Box, G.E.P., and D.R. Cox. 1964. An analysis of transformations (with discussion). Journal of the Royal Statistical Society, Series B 26:211–252.
