Survey Automation
Report and Workshop Proceedings
THE NATIONAL ACADEMIES PRESS
Washington, D.C.
www.nap.edu
500 Fifth Street, NW Washington, DC 20001
NOTICE: The project that is the subject of this report was approved by the Governing Board of the National Research Council, whose members are drawn from the councils of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine. The members of the committee responsible for the report were chosen for their special competences and with regard for appropriate balance.
The project that is the subject of this report was supported by contract no. SBR-9709489 between the National Academy of Sciences and the National Science Foundation. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the organizations or agencies that provided support for the project.
International Standard Book Number 0-309-08930-1 (book)
International Standard Book Number 0-309-51010-4 (PDF)
Library of Congress Control Number: 2003106250
Additional copies of this report are available from the National Academies Press, 500 Fifth Street, NW, Washington, D.C. 20001; (202) 334-3096; Internet, http://www.nap.edu
Copyright 2003 by the National Academy of Sciences. All rights reserved.
Printed in the United States of America
Suggested citation: National Research Council (2003). Survey Automation: Report and Workshop Proceedings. Oversight Committee for the Workshop on Survey Automation. Daniel L. Cork, Michael L. Cohen, Robert Groves, and William Kalsbeek, eds. Committee on National Statistics. Washington, DC: The National Academies Press.
THE NATIONAL ACADEMIES
Advisers to the Nation on Science, Engineering, and Medicine
The National Academy of Sciences is a private, nonprofit, self-perpetuating society of distinguished scholars engaged in scientific and engineering research, dedicated to the furtherance of science and technology and to their use for the general welfare. Upon the authority of the charter granted to it by the Congress in 1863, the Academy has a mandate that requires it to advise the federal government on scientific and technical matters. Dr. Bruce M. Alberts is president of the National Academy of Sciences.
The National Academy of Engineering was established in 1964, under the charter of the National Academy of Sciences, as a parallel organization of outstanding engineers. It is autonomous in its administration and in the selection of its members, sharing with the National Academy of Sciences the responsibility for advising the federal government. The National Academy of Engineering also sponsors engineering programs aimed at meeting national needs, encourages education and research, and recognizes the superior achievements of engineers. Dr. Wm. A. Wulf is president of the National Academy of Engineering.
The Institute of Medicine was established in 1970 by the National Academy of Sciences to secure the services of eminent members of appropriate professions in the examination of policy matters pertaining to the health of the public. The Institute acts under the responsibility given to the National Academy of Sciences by its congressional charter to be an adviser to the federal government and, upon its own initiative, to identify issues of medical care, research, and education. Dr. Harvey V. Fineberg is president of the Institute of Medicine.
The National Research Council was organized by the National Academy of Sciences in 1916 to associate the broad community of science and technology with the Academy’s purposes of furthering knowledge and advising the federal government. Functioning in accordance with general policies determined by the Academy, the Council has become the principal operating agency of both the National Academy of Sciences and the National Academy of Engineering in providing services to the government, the public, and the scientific and engineering communities. The Council is administered jointly by both Academies and the Institute of Medicine. Dr. Bruce M. Alberts and Dr. Wm. A. Wulf are chair and vice chair, respectively, of the National Research Council.
OVERSIGHT COMMITTEE FOR THE WORKSHOP ON SURVEY AUTOMATION
ROBERT M. GROVES (Co-Chair),
Survey Research Center, University of Michigan, and Joint Program in Survey Methodology
WILLIAM KALSBEEK (Co-Chair),
Survey Research Unit, Department of Biostatistics, University of North Carolina
MICK P. COUPER,
Survey Research Center, University of Michigan, and Joint Program in Survey Methodology
JOEL L. HOROWITZ,
Department of Economics, Northwestern University
DARYL PREGIBON,
AT&T Labs—Research, Florham Park, New Jersey
DANIEL L. CORK, Study Director
MICHAEL L. COHEN, Senior Program Officer
MICHAEL SIRI, Program Assistant
COMMITTEE ON NATIONAL STATISTICS 2003
JOHN E. ROLPH (Chair),
Marshall School of Business, University of Southern California
JOSEPH G. ALTONJI,
Department of Economics, Yale University
ROBERT M. BELL,
AT&T Labs—Research, Florham Park, New Jersey
LAWRENCE D. BROWN,
Department of Statistics, The Wharton School, University of Pennsylvania
ROBERT M. GROVES,
Survey Research Center, University of Michigan, and Joint Program in Survey Methodology
JOEL L. HOROWITZ,
Department of Economics, Northwestern University
WILLIAM KALSBEEK,
Survey Research Unit, Department of Biostatistics, University of North Carolina
ARLEEN LEIBOWITZ,
School of Public Policy and Social Research, University of California at Los Angeles
THOMAS A. LOUIS,
Bloomberg School of Public Health, Johns Hopkins University
VIJAYAN NAIR,
Department of Statistics and Department of Industrial and Operations Engineering, University of Michigan
DARYL PREGIBON,
AT&T Labs—Research, Florham Park, New Jersey
KENNETH PREWITT,
School of Public Affairs, Columbia University
NORA CATE SCHAEFFER,
Department of Sociology, University of Wisconsin-Madison
MATTHEW D. SHAPIRO,
Department of Economics, University of Michigan
ANDREW A. WHITE, Director
Preface
This volume on survey automation differs in structure from other workshop reports issued by the National Academies. We have chosen to present this finished volume as the combination of two sub-reports:
- The proceedings of the workshop, as it occurred on April 15–16, 2002. This is a transcript of the workshop presentations, edited for basic flow and supplemented with the presentation graphics essential to conveying the points of the presentations.
- A short report by the workshop's oversight committee, containing the committee's reactions to the proceedings of the workshop and providing its recommendations.
These two reports—the report and the proceedings—are packaged together in this single volume to provide a unified discussion of the workshop material. We believe that putting the committee's conclusions in a concise report is an effective means of communicating those results, while packaging the short report with the proceedings provides all the relevant back-up and reference material. The report is Part I of the volume; the proceedings is Part II. The surrounding sections—such as the references and acknowledgments—have been constructed so as to apply to both sub-reports.
The text of this report contains references to particular company and trade names, including references to specific computer software packages. Such identification of specific names should not be interpreted as endorsement by the authoring committee or the National Academies, nor should it imply that the specific products are the best available for specific purposes.
Acknowledgments
The authoring committee for the Workshop on Survey Automation extends its thanks to the many people who made the workshop possible and whose contributions helped to bridge the computer science and survey methodology communities.
Our thanks first to the U.S. Census Bureau for its sponsorship of the workshop. Through a long and convoluted path from the project’s initiation to the completion of the workshop, Pat Doyle and her staff provided much useful assistance and occasional prodding, and were most receptive to questions and suggestions.
The shape and content of the workshop took form rapidly after a very successful planning meeting on December 11, 2001—arranged at the request of the authoring committee—that brought selected computer scientists into the same room with the Census Bureau's practitioners. This planning session was most useful in clarifying paths of approach to the documentation and testing problems. Pat Doyle, Janis Lea Brown, and other Census staff put together a thorough briefing on a very short schedule. The authoring committee is most grateful to the three computer scientists called in for the meeting—Jesse Poore, Lawrence Markosian, and James Whittaker (Florida Institute of Technology)—for their eagerness to take on a problem new to their experience. Unfortunately, other commitments precluded Whittaker from participating in the workshop itself; we nonetheless appreciate his guidance at the early stages.
About the time the workshop took place, the staff was asked by the workshop’s parent committee—the Committee on National Statistics (CNSTAT)—to present a synopsis of the workshop material at the public seminar portion of CNSTAT’s regular meeting on May 8, 2002. We are very grateful to two workshop participants—Jesse Poore and Mark Pierzchala—for reprising their workshop presentations, on short notice, at the CNSTAT seminar.
Travel and logistics arrangements for the workshop were deftly made by the workshop’s project assistant, Michael Siri. We also appreciate the last-minute help of Danelle Dessaint of the CNSTAT staff in arranging the participants’ dinner.
Part I of this volume was reviewed in draft form by individuals chosen for their diverse perspectives and technical expertise, in accordance
with procedures approved by the Report Review Committee of the National Research Council (NRC). The purpose of this independent review is to provide candid and critical comments that will assist the institution in making the published reports as sound as possible and to ensure that the reports meet institutional requirements for objectivity, evidence, and responsiveness to the study charge. The review comments and draft manuscript remain confidential to protect the integrity of the deliberative process.
We thank the following individuals for their participation in the review of Part I of this volume: Don A. Dillman, Departments of Sociology and Rural Sociology, Washington State University; William L. Nicholls II, consultant, Alexandria, Virginia; James O’Reilly, Blaise Services at Westat, Durham, North Carolina; Stacy Prowell, Software Quality Research Laboratory, University of Tennessee; Nora Cate Schaeffer, Department of Sociology, University of Wisconsin; Elizabeth Stephenson, Institute for Social Science Research, University of California at Los Angeles; and Dave Zubrow, Software Engineering Institute, Carnegie Mellon University.
Although the reviewers listed above provided many constructive comments and suggestions, they were not asked to endorse the conclusions or recommendations, nor did they see the final drafts of the reports before their release. The review of Part I was overseen by Richard Kulka, Social and Statistical Sciences, RTI International, Research Triangle Park, North Carolina. Appointed by the National Research Council, he was responsible for making certain that an independent examination of this report was carried out in accordance with institutional procedures and that all review comments were carefully considered. Responsibility for the final content of this report rests entirely with the authoring committee and the institution.
Robert Groves, Co-Chair
William Kalsbeek, Co-Chair
Workshop on Survey Automation
Contents
List of Figures
I-1 Example of paper-and-pencil-style questionnaire, as seen by an interviewer
I-2 Example of questionnaire item flow patterns in a CAI instrument
I-3 Prototype product line architecture for a CAPI process
I-4 Effect on mathematical complexity of a small change in code
II-1 One-page excerpt (out of 63) from the core questionnaire document, Wave 3 of the Survey of Income and Program Participation (SIPP), 1993 Panel
II-2 General structure of a product line architecture
II-3 Schematic model of a successful software environment
II-4 Conjectured multipliers on cost of correcting errors at different phases of a software design project
II-5 Item ME16 from Instrument Document (IDOC) for Wave 6 of the Survey of Income and Program Participation (SIPP)
II-6 "More Information About This Item" view of Item ME16 from Instrument Document (IDOC) for Wave 6 of the Survey of Income and Program Participation (SIPP)
II-7 Portion of a sample questionnaire, as it might be represented on paper
II-8 Portion of a sample questionnaire, as it might be represented in CASES
II-9 Portion of a sample questionnaire, as it might be represented in Blaise
II-10 Hypothetical routing graph of a questionnaire
II-11 Sample question, as coded in the Questionnaire Definition Language (QDL) used in the TADEQ project
II-12 Sample route instruction, as coded in the Questionnaire Definition Language (QDL) used in the TADEQ project
II-13 Screen shot of TADEQ applied to a sample questionnaire, with some sub-questionnaires unfolded
II-14 Screen shot of some route statistics generated by TADEQ for a sample questionnaire
II-15 Simple C algorithm with flow graph
II-16 More complicated C algorithm with flow graph
II-17 Schematic diagram of error-prone algorithm
II-18 Four sources of unstructured logic in software programs
II-19 Choice A in software metrics quiz
II-20 Choice B in software metrics quiz
II-21 Example of large change in complexity that can be introduced by a single change in a software module
II-22 Simple survey example
II-23 Operational states of Windows clock application, viewed as a flow graph and a model
II-24 Example of static design for a Web questionnaire
II-25 Humanizing touches in a Web questionnaire interface
II-26 Web instrument with human face added to personalize the instrument