REDUCING INSULARITY

In making the preceding comments and recommendations, we do not in any way mean to imply that current survey design standards are “wrong” while software engineering standards are “right.” As one of the workshop discussants aptly observed and McCabe echoed in his remarks, the software industry hardly has an unblemished record of providing error-free and perfectly modular code on budget and on time. As we stated earlier, there are no quick fixes for improving current CAPI implementation; there is no single software engineering panacea that can be applied.

That said, current practice in computer science offers ways of structuring software projects that could markedly improve CAPI implementation. The computer-assisted survey research community is a relatively small group that has pursued CAPI through its first two decades with great professionalism and curiosity, often making do with very limited resources. But the Workshop on Survey Automation suggests that furthering the CAPI cause will require solutions and approaches with which current survey practitioners may be unfamiliar. Accordingly, the survey world remains insulated from developments in computer science and software engineering at its peril; opportunities for long-lasting collaboration should be actively pursued.

Survey research is a relatively small industry; as an activity within the federal government, the total budget for federal surveys is a very small pool. As a consequence, the burden of building bridges to other disciplines rests principally on the survey research community. Although the problems are interesting and formidable, the scale of survey computing and its limited user base are such that computer scientists and software developers are unlikely, of their own accord, to latch onto survey work as a viable area of specialization. It will take money and effort to build external connections (as well as to conduct vital information sharing and standards building within the survey community), but the benefit of outside experience is substantial. The task of building external connections to software expertise parallels the one faced in other product development industries, such as consumer electronics. These industries typically begin developing software functionality using in-house resources, without reference to professional software engineers. Ultimately, dependence on software to deliver functionality has led companies to take a more professional approach to their software development activities.

The need for outreach to outside experts will only grow as technology advances. The second day of the workshop featured talks in three areas in which new and emerging technologies are beginning to become part of the survey experience:



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.



• Development of surveys for deployment on the Internet (presented by Roger Tourangeau), thus removing a human interviewer from the process and requiring higher standards for human interface and usability;

• Incorporation of geographic information systems and global positioning satellite technology in the survey process (presented by Sarah Nusser), opening exciting new prospects for the development of survey frames and easing field interviewers’ basic navigation and task work; and

• Migration of surveys from laptop computers to portable handheld computers (presented by Jay Levinsohn and Martin Meyer), literally lightening the burden of field interviewers while presenting new challenges in terms of reduced on-screen space and more limited battery capacity and storage space.

Coverage of these and other topics in general survey automation—among them the use of wireless networks and synchronization with the case management systems used to track completed questionnaires and assign follow-up interviews—was necessarily limited in this single workshop, and each topic merits fuller study. Realizing the benefits of these and other new technologies will be difficult without increased attention to standards and practices in the survey industry and without drawing on the expertise of fields outside of survey research.

For example, one segment of contemporary survey work that poses great potential challenges is the development of mixed-mode surveys. To boost response rates and improve survey coverage, survey designers increasingly consider conducting the same survey using multiple response modes (e.g., offering respondents the chance to reply either by mail or the Internet, or conducting a mail survey but following up with nonrespondents by telephone). Thus, the inherent difficulties of implementing a survey in any particular medium are compounded by the problem of managing parallel versions of the same survey in different media.
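One way to make the parallel-versions problem concrete is a single, mode-neutral item definition from which each mode’s presentation is generated, so that question wording and response categories are specified exactly once. The sketch below is purely illustrative: the Item class, its fields, and the rendering functions are hypothetical, not drawn from any survey system discussed at the workshop.

```python
from dataclasses import dataclass


@dataclass
class Item:
    """A mode-neutral survey item: specified once, rendered per mode."""
    name: str
    text: str
    choices: list


def render_web(item: Item) -> str:
    # Self-administered web mode: question text plus numbered options.
    opts = "\n".join(f"  [{i}] {c}" for i, c in enumerate(item.choices, 1))
    return f"{item.text}\n{opts}"


def render_phone(item: Item) -> str:
    # Interviewer-administered mode: same wording, as a read-aloud script.
    opts = ", ".join(item.choices)
    return f"READ: {item.text} Would you say {opts}?"


tenure = Item("tenure", "Is this house owned or rented?", ["Owned", "Rented"])
print(render_web(tenure))
print(render_phone(tenure))
```

Because both renderings derive from one definition, a wording change propagates to every mode automatically, which is the kind of common element a product line approach would identify and emphasize.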
Assigning total responsibility for developing a survey in each response mode to different groups of workers seems an inefficient and possibly error-prone way to proceed, particularly if each group develops its own standards and processes to best suit its given response mode. Hence, in addition to reaching internal agreement on survey specifications and item types, adopting a product line architecture (identifying and emphasizing common elements, such as data movement and processing routines) seems a vital step in making mixed-mode surveys work most effectively; so too is carefully weighing the trade-offs between needed functionality in one mode and the added complexity thereby incurred in another.

Foster Collaboration Beyond Walls of Survey Research

Survey research organizations and federal statistical agencies involved in survey work should seek ways to foster genuine collaboration with outside experts in computer science. Convening functions, such as the Workshop on Survey Automation or computer science-based sessions at survey research meetings, are useful in this regard. What is most needed, however, are creative ways to achieve true collaboration: engaging experts from both sides in detailed work and making them familiar with each other’s fields. A useful starting point for forging partnerships may lie in pilot work on some of the more immediately achievable recommendations suggested by this workshop. The task of finding a way to parse CAPI code in order to produce complexity metrics is one such project; another is selecting a nontrivial but still manageable existing survey and working out the mechanics of model-based testing for that instrument. Another immediate point of useful survey–computer science collaboration could be tapping extant software engineering work on tracking project specifications over the life cycle of a software project. The Census Bureau is working on tracking changes to survey specifications using database structures; this is an area in which outside expertise may be immediately applicable. As mentioned earlier in this report, drawing on software engineering best practices to design defect- or error-tracking systems is similarly an area in which collaboration could be fruitful.

Draw from Experience of Related Applications

At the workshop, Robert Smith spoke about the experiences of the Computer Curriculum Corporation in developing software for computer-based instruction. These software packages pose novel challenges in measuring student advancement and deciding appropriate times to move students toward more advanced topics.
But, at its heart, the course curriculum example shares many features with CAPI: the question-and-answer format is central to both areas, both involve software projects of considerable size and scope, and both demand strong attention to human factors and communication issues. The computer-assisted survey community should identify application areas like computer-based instruction that, while not identical to CAPI implementation, share common features with the survey experience. These application areas may serve as existence proofs that problems similar to those in the CAPI world have been encountered by other practitioners, and those experiences may usefully be brought to bear in survey work.

In addition to computer curriculum software, another obvious analogue to CAPI survey collection is automated tax preparation software. Like a CAPI survey, tax software must adapt to the needs of a particular user through flow and skip sequences; as with federal surveys in particular, the number of items a given user encounters varies with his or her situation but can be quite large. In terms of production or release cycles, the annual revisions of tax software may be a better analogy to CAPI surveys than other software projects would be. Like a CAPI survey administered to a respondent, tax software is also a case in which user exposure is fairly limited: for a particular user, the software effectively has one chance to work, and it must work correctly. Indeed, the requirements of tax software may in some respects be more stringent than those of many CAPI instruments, because tax software is self-administered rather than mediated by a human interviewer; as traditional CAPI surveys evolve toward self-administered Internet-based questionnaires, the lessons learned in tax preparation could be particularly important.

Enhance Training of Current and Future Survey Researchers

Making the changes suggested in this report will require not only serious commitment by survey organizations but also a diffusion of new knowledge among existing survey development staff. Sustaining the changes will require that the skill sets of new survey practitioners reflect new organizational styles. Accordingly, in her remarks, Pat Doyle stressed the importance of educating current and future survey staff about the importance and methods of documenting survey instruments, and of encouraging academic programs specializing in survey methodology to incorporate such training in their curricula.
On this point, we agree; to the extent that software design is a key part of survey development, training in contemporary survey methodology should provide some background in software engineering. This includes not only effective strategies for instrument documentation but also best practices in managing intensive software projects and emerging techniques for testing survey software.

Keep the Survey-Computer Science Discussion Active

In closing, it is our hope that the survey research industry will strive to build continuing channels of communication with the field of computer science. As several participants commented, the Workshop on Survey Automation was not the first such gathering they had attended. Over the past decades, several conferences on emerging technology and survey practice have been held, and they—like the Workshop on Survey Automation—serve a useful purpose. Jesse Poore reminded the workshop participants of Moore’s Law, the adage that technological capacity tends to double on a roughly 18-month cycle. In light of this observation, and of the great lessons that survey methodology and computer science have to offer each other, holding joint activities like the Workshop on Survey Automation once a decade is clearly too infrequent to be useful. Regardless of the forum for such collaborations—workshops like this one, special sessions at professional meetings, or other means—the computer-assisted survey community should strive to create formal collaborative opportunities with computer science and related fields on at least a three-year (or twice Moore’s Law) cycle. The greater challenge, as ever, will be to move from discussion to action, forging enduring partnerships. Maintaining these channels of communication is too important a task to neglect.
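As a concrete starting point for one of the pilot projects suggested earlier, parsing instrument code to produce complexity metrics, recall McCabe's cyclomatic complexity: for a control-flow or questionnaire routing graph, V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. The sketch below applies the formula to a small hypothetical routing graph; the function and the example graph are illustrative only, not taken from any real instrument.

```python
def cyclomatic_complexity(edges, num_components=1):
    """McCabe's V(G) = E - N + 2P for a routing/control-flow graph.

    edges: list of (from_node, to_node) pairs.
    """
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2 * num_components


# Hypothetical questionnaire routing: Q1 branches (a skip pattern),
# both paths rejoin at Q3, and Q3 branches again toward the end.
routing = [
    ("Q1", "Q2a"), ("Q1", "Q2b"),   # branch out of Q1
    ("Q2a", "Q3"), ("Q2b", "Q3"),   # paths rejoin at Q3
    ("Q3", "Q4"),  ("Q3", "END"),   # second branch
    ("Q4", "END"),
]
print(cyclomatic_complexity(routing))  # E=7, N=6, P=1 -> 3
```

The hard part for a real CAPI instrument is not this arithmetic but producing the edge list in the first place, which is precisely the parsing task the proposed pilot collaboration would address.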