Existing Systems: CASES/IDOC and Blaise/TADEQ

Methods that ease the creation of documentation by generating some form of it automatically from the electronic questionnaire itself have long been a prized goal, and the computer-assisted survey community has made significant inroads toward automated instrument documentation. With sponsorship from the Census Bureau, the Computer Survey Methodology group at the University of California at Berkeley developed companion software for CASES, the DOS-based instrument authoring language that was a major force in early CAPI implementations but that has declined in use for lack of a Windows version. This companion software processes an instrument to produce an instrument document (IDOC): an automatically generated set of linked HTML pages that let a user browse the logic of a questionnaire, identifying the questions or decision points that flow directly into or out of any particular item.
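
To make the mechanics concrete, the sketch below mimics the IDOC approach on a toy questionnaire: it inverts a routing graph and writes one cross-linked HTML page per item, listing the items that flow into and out of it. The item names and the dictionary representation of routing are hypothetical illustrations, not CASES's actual instrument format or IDOC's output.

    # A minimal sketch of the IDOC idea: given a questionnaire's routing
    # structure, emit one HTML page per item, cross-linked to the items
    # that flow into it and out of it. The item names and the dict-based
    # routing representation are hypothetical, not CASES's format.

    routing = {
        "Q1_AGE":   ["Q2_WORK"],             # Q1 always flows to Q2
        "Q2_WORK":  ["Q3_HOURS", "Q5_END"],  # branch on employment status
        "Q3_HOURS": ["Q5_END"],
        "Q5_END":   [],
    }

    # Invert the graph to find, for each item, the items that lead into it.
    inflows = {item: [] for item in routing}
    for item, targets in routing.items():
        for target in targets:
            inflows[target].append(item)

    def link(item):
        return f'<a href="{item}.html">{item}</a>'

    for item in routing:
        page = (
            f"<h1>{item}</h1>"
            f"<p>Flows in from: {', '.join(map(link, inflows[item])) or 'entry point'}</p>"
            f"<p>Flows out to: {', '.join(map(link, routing[item])) or 'exit'}</p>"
        )
        with open(f"{item}.html", "w") as f:
            f.write(page)

Even this toy version exhibits the navigational structure described above: from any item's page, a reader can follow links forward or backward through the questionnaire's logic.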

Blaise, the emerging dominant survey authoring language, also has a companion software suite for automated documentation under development. Sponsored by a consortium of European statistical agencies, the TADEQ Project has developed prototype software that can process a questionnaire and produce a flowchart-style overview of the questionnaire's logic, as well as some descriptive statistics. The eventual hope is for TADEQ to be independent of the software platform used to write the questionnaire: any electronic questionnaire that could be ported into the XML markup language could be processed by TADEQ. Initial development, however, appears to have focused on coordination with Blaise.
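
The following sketch illustrates that premise: once a questionnaire is expressed in XML, a generic processor can derive a flowchart-style outline and simple descriptive statistics with no knowledge of the authoring system. The element and attribute names here are hypothetical; they are not TADEQ's actual schema.

    # A minimal sketch of the TADEQ premise: a questionnaire ported into
    # XML can be processed without knowledge of the authoring system.
    # The element and attribute names below are hypothetical.

    import xml.etree.ElementTree as ET

    doc = """
    <questionnaire>
      <question id="Q1" text="What is your age?"/>
      <condition test="Q1 &gt;= 16">
        <question id="Q2" text="Did you work last week?"/>
      </condition>
      <question id="Q3" text="End of interview."/>
    </questionnaire>
    """

    root = ET.fromstring(doc)

    # Descriptive statistics of the kind a documentation tool might report.
    print("questions:", sum(1 for _ in root.iter("question")))
    print("branch points:", sum(1 for _ in root.iter("condition")))

    # Flowchart-style outline: indent questions by their routing depth.
    def outline(node, depth=0):
        for child in node:
            if child.tag == "question":
                print("  " * depth + child.get("id"), "-", child.get("text"))
            elif child.tag == "condition":
                print("  " * depth + "[if " + child.get("test") + "]")
                outline(child, depth + 1)

    outline(root)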

These two automated documentation initiatives are good first steps in addressing the global documentation problem in electronic surveys, but both suffer from inherent practical limitations: IDOC applies only to CASES-coded instruments and lacks an overall map of what can be a massive number of linked HTML pages, while TADEQ is perceived to have difficulty processing very large and complicated instruments. More fundamentally, both systems are essentially post-processors of coded instruments; hence, the extent to which they can contribute to up-front guidance on questionnaire development, as a diagnostic tool during survey design, is not clear. Both also suffer from the reality that automated documentation can convey only so much meaning and context to a human reader: it can suggest the functional flow from item to item but, on its own, it may not explain why those items are related to each other. Contextual tags and explanatory text in survey questionnaires require human input during coding, which often is not done given time and resource limitations.
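
The sketch below illustrates this last point: a documentation generator can surface an explanation for a branch only if a human coder supplied one. The rationale attribute shown is a hypothetical annotation added to the same illustrative XML as above, not a feature of CASES, Blaise, or TADEQ.

    # A sketch of the final point: automated tools can extract routing,
    # but the reason two items are linked must be supplied by a human
    # during coding. The "rationale" attribute is hypothetical.

    import xml.etree.ElementTree as ET

    doc = """
    <questionnaire>
      <question id="Q2" text="Did you work last week?"/>
      <condition test="Q2 == 'yes'"
                 rationale="Hours apply only to respondents who reported working.">
        <question id="Q3" text="How many hours did you work?"/>
      </condition>
    </questionnaire>
    """

    root = ET.fromstring(doc)
    for cond in root.iter("condition"):
        # The generator can report the annotation only if a person wrote it.
        print("branch:", cond.get("test"))
        print("why:", cond.get("rationale", "(no rationale coded)"))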


