Letter Report on the Omnibus Survey Program
November 5, 2002
Mr. Rick Kowalewski
Bureau of Transportation Statistics
400 7th Street, S.W., Washington, D.C. 20590
Dear Mr. Kowalewski:
We are pleased to transmit this second letter report of the Committee to Review the Bureau of Transportation Statistics’ (BTS) Survey Programs. This committee was convened by the Transportation Research Board and the Committee on National Statistics in response to a request from BTS. The membership of the committee is shown in Enclosure A. The committee has been charged with reviewing the current BTS survey programs in light of transportation data needs for policy planning and research and in light of the characteristics and functions of an effective statistical agency. This letter presents the committee’s consensus findings and recommendations concerning the Omnibus survey.
The committee held its second meeting on June 26–27, 2002, at the National Academies facilities in Washington, D.C. The purpose of this meeting was to review the Omnibus survey. To this end, the committee heard presentations from representatives of BTS and from Omnibus survey customers. A list of the presentations at the meeting is provided in Enclosure B. Following the data-gathering sessions, the committee met
in closed session to deliberate on its findings and recommendations and begin the preparation of this report, which was completed through correspondence among the members. In developing these findings and recommendations, the committee drew on information gathered at its June meeting, examples of Omnibus surveys and products, articles in the technical literature,1 and the experience and expertise of individual members. The committee would like to thank all those who contributed to this review through their participation in the June meeting and their responses to follow-up questions. The assistance of Lori Putman of BTS is particularly appreciated.
In summary, the committee found that the Omnibus Survey Program has value as a source of timely data to inform decision making on a range of transportation issues. These data capture public opinion about a wide range of topics broadly related to transportation and provide a means of monitoring the public’s use of and satisfaction with the transportation system. However, the committee is concerned that a BTS survey of public opinion on topical items has the potential to compromise the agency’s credibility as an independent provider of statistical data and services. Therefore, the committee suggests that BTS take steps to safeguard the integrity of the Omnibus program as an independent source of high-quality data. In particular, the committee recommends that the agency (a) establish an appropriate review mechanism for all proposed Omnibus surveys, (b) implement measures aimed at improving and ensuring survey quality, and (c) take steps to improve the quality of data analysis products and reports.
This report presents the committee’s findings and recommendations in four areas: the value of the Omnibus program, and three areas of action to safeguard the integrity of the program—review of proposed surveys, implementation of measures to improve and ensure survey quality, and steps to improve the quality of data analysis products and reports. Enclosure C provides an overview of the Omnibus Survey Program.
A list of all nonproprietary materials considered by the committee is available from the Public Records Office of the National Academies (e-mail: email@example.com).
VALUE OF THE OMNIBUS SURVEY PROGRAM
Finding 1: The Omnibus Survey Program has value for the U.S. Department of Transportation (USDOT) because it provides
A flexible, quick-response mechanism for assessing public opinion about a range of transportation issues and delivering timely data to inform decision making; and
A means of monitoring the public’s use of and opinions about the transportation system on a frequent and regular basis.
BTS is required to provide its customers with statistics that “support transportation decision-making by all levels of government, transportation-related associations, private businesses, and consumers” [49 U.S.C. 111(c)(7)]. The Omnibus Survey Program focuses on meeting some of the information needs of customers within USDOT. The program currently comprises two components: a monthly household survey and targeted surveys, up to a maximum of four per year, that address special transportation topics.2
The Omnibus program delivers timely data to inform decision making, as illustrated by two security-related examples. The monthly household survey provides a mechanism for conducting periodic assessments of traveler reactions to airport screening processes, thereby allowing the Transportation Security Administration (TSA) to track customer reactions to its programs. The 2001 Mariner Survey—a targeted survey—provided the Maritime Administration (MARAD) with information about the numbers of mariners who would be willing to take an afloat position in the event of a national defense emergency and the period of time they would be willing to serve (BTS/MARAD 2001).
The Omnibus survey is also being used to explore topics prior to, or in parallel with, more extensive investigations. For example, the National Highway Traffic Safety Administration (NHTSA) added questions to the monthly household survey to investigate drivers’ complaints about headlight glare. The resulting data from a national sample of survey respondents will be used to supplement information from specific populations, namely, those who respond to NHTSA’s recent notice asking for comments on drivers’ complaints about headlight glare.
The committee anticipates that there will continue to be opportunities for BTS to support various Omnibus survey initiatives requested by other agencies within USDOT. The example of the 2001 Mariner Survey demonstrates that a clearly defined agency need (from MARAD), combined with BTS’s survey expertise, can result in a useful, high-quality survey.
Indeed, the committee believes the Omnibus program has the potential to benefit a wider range of data users both inside and outside of USDOT. For example, the Omnibus surveys could be used to provide interim information on the transportation system between the periods of major surveys. Key aspects of the dynamics of the transportation system could be captured more frequently than once every 4 to 5 years when the National Household Travel Survey (NHTS) is conducted. To take advantage of its potential update capability, the Omnibus monthly household survey would require modification, with appropriate phrasing and structuring of survey questions to ensure that data are comparable with NHTS data.3
Recommendation 1: BTS should continue its Omnibus Survey Program as a relatively low-budget activity that provides timely information on a range of transportation issues.
SAFEGUARDING THE INTEGRITY OF THE OMNIBUS SURVEY PROGRAM
The opportunity to obtain timely public opinion data on key transportation issues makes the Omnibus program an attractive tool for policy makers. However, a recent National Research Council (NRC) report, Principles and Practices for a Federal Statistical Agency, notes that “one reason to establish a separate statistical agency is the need for data series to be independent of control by policy makers or regulatory or enforcement
agencies” (Martin et al. 2001, 3). The committee has some concerns that, in its role as a survey service organization within USDOT, BTS may be asked to conduct Omnibus targeted surveys, or add questions to the Omnibus monthly household survey, that could ultimately damage the agency’s credibility as an independent provider of transportation data. Therefore, the committee encourages BTS to take a proactive approach in ensuring that the Omnibus program is an independent source of high-quality data on the transportation system. The committee identified opportunities to enhance and ensure the integrity of the Omnibus program in three areas:
Review of proposed surveys,
Implementation of measures to improve and ensure survey quality, and
Steps to improve the quality of data analysis products and reports.
Review of Proposed Surveys
Finding 2: Current BTS procedures for approving Omnibus surveys are unsatisfactory because they do not ensure that every survey is subject to a rigorous, objective, and informed review of its content and method before being fielded.
While the existing OMB blanket approval for the Omnibus program facilitates the rapid implementation of surveys, it imposes an additional responsibility on BTS to ensure that all Omnibus surveys are appropriately reviewed before being fielded. The committee is concerned about the effectiveness of current review procedures.
In the case of the monthly household survey, the review of each month’s draft questionnaire by a panel of experts selected by the survey contractor provides an important mechanism for identifying and correcting problems, although the time available to incorporate and test the panel’s suggestions may be insufficient (see Finding 3 below). Furthermore, the extent to which the expert panel considers the appropriateness of specific questions for a federal statistical agency is unclear. The committee’s review of questionnaires for the monthly household survey led it to conclude that the design and selection of questions would benefit from additional consideration of how the survey results will be used to inform analyses of the transportation system.
The current lack of an established mechanism for external review of proposed targeted surveys is of serious concern to the committee. In the committee’s view, BTS is responsible for establishing and implementing an effective review mechanism (or mechanisms) for the Omnibus surveys. Because BTS staff may not have the experience and insights needed to understand all the policy implications of proposed surveys, these procedures should include external review of all targeted surveys. Starting in 2003, OMB will review a shortened clearance package for each targeted survey and will require a 30-day public comment period on proposed targeted surveys. These additional OMB requirements may help in remedying some of the present deficiencies, but more needs to be done in this area.
The Intermodal Surface Transportation Efficiency Act of 1991 established BTS as a statistical agency with responsibility for compiling transportation statistics—not as “a policy development office or an administrative unit” (Citro and Norwood 1997, 2). In the committee’s opinion, effective review procedures for the Omnibus program would assist BTS in maintaining its independence from USDOT’s policy-making activities, while allowing the agency to continue providing valuable statistical services to its customers within USDOT.
Recommendation 2: BTS should establish an independent review mechanism for the Omnibus program with contributions from experts outside BTS to ensure that
Proposed surveys are consistent with BTS’s overall mission and do not address inappropriate questions that could undermine the independence of the agency; and
The objective of every survey is clearly defined and the proposed design will achieve that objective.
Implementation of Measures to Improve and Ensure Survey Quality
Finding 3: There is a risk that the quality of the Omnibus monthly household survey will be compromised by the time constraints imposed by the monthly schedule.
The availability of very timely information on topical issues is important to some data users. In addition, a monthly survey captures short-term
effects of factors influencing transportation use—effects that may be difficult to measure with a less frequent survey. Nevertheless, the committee is concerned that the quality of the Omnibus monthly household survey is being jeopardized by the limited time available to (a) develop and test the survey questionnaire and (b) collect the data.
Development and Testing of the Survey Questionnaire
The committee questions whether the time available for formulating the survey content and testing the questionnaire is sufficient to ensure that the resulting data will provide a sound basis for analysis. Because resource limitations preclude working on several months’ surveys simultaneously, the monthly schedule does not allow time to conduct a pilot survey.4 The draft survey questionnaire is reviewed by an expert panel and subjected to cognitive testing using a mall intercept. These two activities are conducted in parallel over a 1-week period. During the course of the following week, BTS staff develop a revised questionnaire that addresses any problems identified as the result of the expert panel review and cognitive interviews. This revised questionnaire is then sent to the survey contractor without further evaluation.
Currently, a minimum of 20 people are interviewed for cognitive testing of the monthly household questionnaire.5 Potential interviewees are intercepted in a New Jersey shopping mall and screened on the basis of race, gender, age, and income “to ensure the ending sample of respondents [is] reflective of the United States population as a whole regarding the aforementioned characteristics” (BTS 2002). All the cognitive interviews are conducted on a single day, and the interviewers are required to compile results from their interviews and develop a summary of noteworthy issues and any suggested solutions by the end of the next day.
There is empirical evidence that the response to a survey question depends on the way in which the question is framed (see, for example, Sudman and Bradburn 1978; Schuman and Presser 1981). Therefore, careful cognitive testing of questionnaires is needed to ensure that they will yield
interpretable data consistent with survey objectives. Cognitive testing explores the mental process by which respondents reach an answer to a question, and in doing so it can show whether a question is working as intended. If modifications are made in response to test results, good survey practice requires that further cognitive testing be conducted to evaluate the modified questionnaire.
The committee is concerned about three features of the current cognitive testing, all of which appear to be adversely affected by the time constraints imposed by the monthly schedule.
Sampling procedure: Quota sampling at a single location (a New Jersey shopping mall), while relatively quick, is unlikely to ensure that the sample reflects the U.S. population in terms of race, gender, age, and income. At best an attempt can be made to get some diversity on these four characteristics.
Scope of testing: Insufficient time is available to conduct the necessary in-depth cognitive testing of each month’s household survey questionnaire. For example, a question from the May 2002 Monthly Household Survey about security procedures at airports asks, “How satisfied were you with the time that you waited in line at the passenger screening checkpoint?” It is not clear to the committee that the very limited cognitive testing of the survey questionnaire, conducted in a single day with a small sample, is sufficient to establish what respondents understand by the possible answers to this question, which range from “very unsatisfied” to “very satisfied.”
Absence of a test-modify-retest cycle: Because there is insufficient time to conduct more than one iteration of the survey questionnaire, modifications to the draft questionnaire in response to cognitive testing and suggestions from the expert review panel are not adequately evaluated.6
In general, data for the monthly household survey are collected over a period of 10 consecutive days, although the data collection schedule may,
on occasion, be modified. For example, the data collection schedule for May 2002 was interrupted because the interviewers did not work on Mother’s Day (Sunday, May 12). Data are collected using computer-assisted telephone interviewing (CATI) procedures.
The response rate for the monthly household survey has been a source of some concern to BTS and was a factor influencing the agency’s decision to change contractor after the survey had been fielded for 8 months. During the initial period from August 2000 through March 2001, the response rate increased from 10 percent to 34 percent. Following a brief hiatus associated with the change in contractor, the survey resumed in July 2001, when the response rate was 38 percent. The response rate has now increased to a plateau of approximately 43 percent for the 4 months ending June 2002. The committee is concerned about the possibility of nonresponse bias associated with the relatively low response rate.
The survey contractor uses a range of strategies to maximize the number of completed interviews, including an unrestricted number of call attempts, callback scheduling, messages left on answering machines at the seventh call attempt, a toll-free number for respondents to call to complete the survey, the use of Spanish-speaking interviewers as necessary, and the use of refusal conversion specialists. Although the numbers of call attempts and callbacks are, in principle, unlimited, restricting the data collection period to 10 days effectively limits the total number of calls that can be made. An additional day of data collection was needed for the May 2002 survey to obtain the required 1,000 household interviews.
Increasing the data collection period for the monthly household survey could increase the response rate by increasing the numbers of call attempts and callbacks. However, in the absence of additional resources, a longer data collection period for the monthly household survey would reduce the already limited time available for questionnaire development and testing, thereby heightening concerns about the limited testing of the draft questionnaire.
Measures to Improve and Ensure Quality
Question Design and Evaluation The formulation and evaluation of proposed survey questions should ensure that the resulting questionnaire will provide a sound basis for analysis and assessment. Linking attitudinal questions to operational information would be beneficial in ensuring that data are meaningful and are not readily susceptible to misrepresentation. For example, assessments of passenger attitudes toward airport security screening procedures would be more informative if they were linked to objective measures of passenger screening delays.7
The committee sees a need for BTS to be more proactive in examining issues relating to the purpose and use of survey data. If appropriate transportation expertise and experience are not available within the agency, BTS should enlist the help of outside experts in planning investigations of the transportation system and formulating appropriate survey questions.
Additional cognitive research may also be needed to better inform the development of questions that generate usable data. Several government agencies, notably the National Center for Health Statistics, the Bureau of Labor Statistics, and the Census Bureau, have professionally staffed cognitive laboratories that are recognized for their contributions to understanding the survey process. The committee urges BTS to seek advice on cognitive testing from the staffs of these laboratories.
Response Rate Response rate is one of a number of factors affecting total survey quality and is of concern because of the potential for nonresponse bias. Survey respondents may differ from nonrespondents in ways that are germane to the objectives of the survey, with associated implications for the validity of the survey results. The nonresponse bias associated with estimates from random digit dialing (RDD) telephone surveys—such as the Omnibus monthly household survey—is not known unless special studies are undertaken. Consequently, obtaining a high response rate is often the only way to reduce the potential for a significant nonresponse bias. There is likely to be a trade-off between response rate and survey cost, since achieving high response rates generally involves extensive calling procedures to reach households and a number of attempts to convert refusals and breakoffs (Massey et al. 1998).
An investigation of 39 RDD surveys sponsored by government and other organizations between 1990 and 1996 showed response rates
ranging from 42 percent to 79 percent (Massey et al. 1998).8 The average response rate was 62 percent—significantly higher than the 43 percent currently obtained in the Omnibus monthly household survey. The committee recognizes that the average response rate reported by Massey et al. (1998) is only broadly indicative of typical RDD response rates during the period 1990–1996, and that today’s average may be lower.9 Nevertheless, the committee believes that, in the light of concerns about nonresponse bias, two additional efforts are warranted. First, a concerted effort is needed to increase the response rate in the Omnibus monthly household survey. Second, BTS should undertake methodological investigations to assess the consequences of reducing nonresponse and estimate differences between survey respondents and nonrespondents.
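The committee’s concern can be illustrated with the standard deterministic expression for nonresponse bias, in which the bias of the respondent mean equals the nonresponse rate multiplied by the difference between respondent and nonrespondent means. The sketch below is illustrative only: the 60/40 respondent–nonrespondent split is a hypothetical assumption, not a figure from the survey; only the 43 percent and 62 percent response rates come from the discussion above.

```python
def nonresponse_bias(resp_mean, nonresp_mean, response_rate):
    """Deterministic nonresponse bias of the respondent-based estimate:
    bias = (1 - response_rate) * (respondent mean - nonrespondent mean)."""
    return (1.0 - response_rate) * (resp_mean - nonresp_mean)

# Hypothetical illustration: suppose 60 percent of respondents report
# being "satisfied" but only 40 percent of nonrespondents would.
# At the survey's current 43 percent response rate, the respondent-based
# estimate overstates the population proportion by:
bias_43 = nonresponse_bias(0.60, 0.40, 0.43)   # 0.57 * 0.20 = 0.114

# At the 62 percent average rate reported by Massey et al. (1998),
# the same respondent/nonrespondent gap produces a smaller bias:
bias_62 = nonresponse_bias(0.60, 0.40, 0.62)   # 0.38 * 0.20 = 0.076
```

The point of the sketch is that the bias shrinks in direct proportion to the nonresponse rate, which is why raising the response rate reduces the *potential* for bias even when the respondent–nonrespondent difference is unknown.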
Efforts to increase response rate. The committee encourages BTS to continue investigating a range of approaches that may help reduce nonresponse, including providing incentives to respondents, increasing the number of calls, subsampling numbers that ring without answer and making additional calls to that subsample, and using bilingual or multilingual interviewers. Careful selection of the survey contractor10 and provision of the necessary technical guidance and support to that contractor are important mechanisms for achieving a high-quality survey in general and a high response rate in particular. It is generally acknowledged that different survey organizations achieve different response rates for the same voluntary survey—the so-called “house effect” (NRC 1979). In some instances, lower response rates may result from a lack of relevant experience and expertise. Therefore, the committee suggests that BTS consider appointing a consultant to assist the contractor responsible
for the monthly household survey in implementing best industry practices with the potential to improve the response rate.
Methodological investigations. The committee encourages BTS to investigate the consequences of reducing nonresponse in the Omnibus monthly household survey. A recent study by Keeter et al. (2000) compared two RDD national telephone surveys that used identical questionnaires but very different levels of effort. The quick turnaround survey, conducted over a 5-day period, yielded a response rate of 36.0 percent, whereas the more rigorous survey, conducted over an 8-week period, had a response rate of 60.6 percent. Nevertheless, the two surveys produced similar results, with an average difference across 91 comparisons of about 2 percentage points. A comparable experiment with the Omnibus monthly household survey—comparing the present design with a more intensive effort yielding a substantially higher response rate—would allow BTS to better understand the effect of increased effort on estimates from the survey. The committee also urges BTS to launch methodological investigations of nonresponse on a continuing basis. These investigations should include examinations of the characteristics of nonresponding telephone households, longer-term follow-up of a subsample of nonrespondents to determine whether differences exist between respondents and nonrespondents, and experimental study of the use of incentives to reduce nonresponse rates.
Trading Frequency for Quality The committee urges BTS to consider replacing the Omnibus monthly household survey with a similar survey done quarterly (once every 3 months). The reduced survey frequency would allow more time to develop and test the survey questionnaire and collect data. It would also permit a threefold increase in sample size and an associated reduction in sampling error without increasing either the overall quantity of data collected over a 3-month period or the associated respondent burden time. Extending the data collection period beyond the current 10-day limit could also reduce nonresponse by providing more time for call attempts and callbacks. All these benefits could probably be achieved without substantial increases in survey cost.
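The sampling-error benefit of pooling three monthly samples into one quarterly sample follows directly from the inverse-square-root behavior of the standard error. A minimal sketch, assuming simple random sampling, a sample of roughly 1,000 households per month as described above, and a proportion near 50 percent (where sampling error is largest); design effects and weighting are ignored:

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Approximate 95 percent margin of error for a proportion under
    simple random sampling (ignores design effects and weighting)."""
    return z * sqrt(p * (1.0 - p) / n)

monthly   = margin_of_error(0.5, 1_000)   # about +/- 3.1 percentage points
quarterly = margin_of_error(0.5, 3_000)   # about +/- 1.8 percentage points

# Tripling the sample size shrinks the sampling error by sqrt(3), about 1.73:
ratio = monthly / quarterly
```

Under these assumptions, a quarterly survey of 3,000 households would cut the margin of error by roughly 40 percent relative to the monthly design, at the same total data collection volume.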
The committee recognizes that some customers may have a genuine need for data on a monthly basis or at very short notice to measure public
reaction to a “hot topic.” The needs of these customers could be met by occasionally making one of the targeted surveys under the Omnibus program a very quick response survey with a short turnaround time. Such a survey could be conducted on a one-off basis or could be repeated several times over a period of several months, depending on customer requirements.
Recommendation 3: BTS should implement a range of measures aimed at ensuring that all surveys conducted under the Omnibus program are of a consistently high quality. In particular, BTS should
Ensure that procedures for developing and evaluating survey questionnaires are effective;
Aggressively pursue strategies for increasing response rates, notably for the monthly household survey; and
Consider trading frequency for quality in the monthly household survey.
Steps to Improve the Quality of Data Analysis Products and Reports
Finding 4: BTS reporting of the results of the Omnibus monthly household survey does not consistently meet the quality standards expected of a federal statistical agency.
The data from the Omnibus monthly survey are made available on the BTS website (www.bts.gov) and are also used by BTS to prepare OmniStats, two- or three-page popular reports that offer “items of widespread interest from the BTS monthly Omnibus Household Survey” (Omnibus Survey—OmniStats, Overview, www.bts.gov/publications/omnistats/). The committee examined six issues of OmniStats, some of which gave cause for concern:
OmniStats, November 19, 2001, “Fewer Americans Plan Thanksgiving Travel”;
OmniStats, December 17, 2001, “Nine Million Americans Change Holiday Travel Plans Because of September 11 Tragedies”;
OmniStats, March 7, 2002, “American Public Is Concerned About National Security Issues but Satisfied with Federal Government’s Efforts”;
OmniStats, March 28, 2002, “Passengers Quickly Adapt to New Baggage Rules”;
OmniStats, April 23, 2002, “46% of Transit Users Bike or Walk (or Both) to Transit Stop”; and
OmniStats, June 27, 2002, “How Americans Use Our National Transportation System.”
In general, OmniStats in its present form does not consistently reflect the considerable effort that goes into the Omnibus monthly household survey. The problems encountered include the following:
Providing interpretations of survey results without considering plausible rival hypotheses: For example, the report entitled “Nine Million Americans Change Holiday Travel Plans Because of September 11 Tragedies” does not consider that stated plans and actual behavior may differ considerably. Such omissions could give the appearance of promoting a particular interpretation.
Inconsistently reporting the form of the question: Since the wording of the question may affect both the answer and its interpretation, good practice calls for including the question in the report.
Presenting graphics that are difficult to interpret: For example, it is not clear how the graphics in the report on new baggage rules support the assertion in the headline that “passengers quickly adapt to new baggage rules.”
Paying insufficient attention to statistical reliability: For example, the report on new baggage rules notes that the average number of carry-on bags in February 2002 was 1.2, compared with 1.3 prior to December 2001. No comment is made about the statistical significance of this difference.
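The kind of check the committee has in mind can be sketched as a simple two-sample comparison of means. The 1.2 and 1.3 averages below are the figures from the report; the sample sizes (about 1,000 per survey) and standard deviations are assumptions for illustration, not values taken from the survey documentation, and the conclusion depends entirely on them:

```python
from math import sqrt, erf

def two_sample_z(mean1, mean2, sd1, sd2, n1, n2):
    """Two-sample z statistic and two-sided p-value for a difference
    in means (normal approximation, adequate for samples of ~1,000)."""
    se = sqrt(sd1**2 / n1 + sd2**2 / n2)
    z = (mean1 - mean2) / se
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p

# Reported averages: 1.3 carry-on bags before December 2001 versus 1.2
# in February 2002. Sample sizes and standard deviations are assumed.
z, p = two_sample_z(1.3, 1.2, 1.0, 1.0, 1000, 1000)
# Whether the 0.1-bag difference is statistically significant hinges on
# these assumed inputs -- which is precisely why a report should state
# the standard errors alongside the point estimates.
```

Reporting the standard error (or an equivalent significance statement) with each comparison would let readers make this assessment for themselves.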
It is the committee’s understanding that the deficiencies in OmniStats are due largely to a lack of time and resources within BTS. The expertise needed for analysis of survey data is somewhat different from that needed to design and conduct a survey, and while BTS has considerable knowledge and experience in survey methodology, its expertise in data analysis and reporting is more limited. The committee also recognizes that BTS is anxious to disseminate its survey results to decision makers and the public and thereby gain credibility for the agency.13 OmniStats is an attempt to popularize the results of the Omnibus monthly survey for a general audience—including USDOT staff who are not technical specialists. Presenting survey results to a general audience without compromising
statistical rigor is a challenge, given the very technical nature of many statistical reports. In the committee’s view, the fact that OmniStats is oriented toward a nontechnical audience places a special obligation on BTS to provide clear and objective analyses, to reveal rival hypotheses and data limitations, and to discuss possible uncertainties in data interpretation. While applauding BTS’s intent to inform a broad audience about survey findings, the committee is concerned that the dissemination of inferior-quality products may diminish the reputation of BTS as a credible federal statistical agency of high professional standing.
In general, more substantive reports are needed, with more thoughtful and in-depth interpretations, effective graphics, and better integration. The Bureau of Justice Statistics (BJS) Bulletin provides a good model of high-quality statistical reporting. The publication of each issue of BJS Bulletin is accompanied by a press release highlighting the major features of the study and providing the appropriate link to the full report on the BJS website (www.ojp.usdoj.gov/bjs/). BTS may wish to consider producing reports analogous to those in the BJS Bulletin, accompanied by carefully constructed and vetted fact sheets and, on occasion, press releases. Given the challenges in “popularizing” statistical reports, extreme caution is needed in issuing press releases that attempt to convey sophisticated statistical information in a simplified form. In general, the task of preparing press releases that interpret survey results in terms of policy issues may be better left to the modal administrations responsible for transportation policy.
To encourage high-quality statistical reporting, it would be valuable for BTS to develop and promulgate guidelines for reporting on items such as sample design, standard errors, and response rates. These guidelines would assist BTS staff—as well as others within USDOT—in preparing survey reports.
The reporting of the results of surveys requested by the modal administrations within USDOT poses particular challenges for BTS because the missions of BTS and its internal customers differ. BTS needs to ensure that its own analyses and reports are of high quality and that it draws on the appropriate transportation expertise in preparing these products. There should be a clear demarcation between BTS’s technical reports and the more interpretive, policy-oriented reports coming from other groups
within USDOT. For example, the latter reports could include a disclaimer noting that survey data were provided by BTS but that analyses and interpretation are the responsibility of the issuing administration.
Recommendation 4: BTS should take steps to ensure that its analyses of Omnibus survey data are technically robust and that the resulting products comply with established guidelines for the reporting of statistical data.
The committee appreciates this opportunity to review and comment on the Omnibus Survey Program and hopes that the recommendations made in this report are helpful to BTS in building on its initial experience with this innovative program. We look forward to continuing to work with BTS staff, contractors, and the professional community as a whole in the committee’s forthcoming review of the Commodity Flow Survey.
Joseph L. Schofer
Committee to Review the Bureau of Transportation Statistics’ Survey Programs
REFERENCES

BTS. 2002. Survey Documentation for the Bureau of Transportation Statistics Omnibus Survey Program, Public Use. U.S. Department of Transportation, May.
BTS/MARAD. 2001. 2001 Mariner Survey: Principal Findings. U.S. Department of Transportation.
Citro, C. F., and J. L. Norwood (eds.). 1997. The Bureau of Transportation Statistics: Priorities for the Future. Panel on Statistical Programs and Practices of the Bureau of Transportation Statistics, National Research Council, Washington, D.C.
De Leeuw, E., and W. de Heer. 2001. Trends in Household Survey Nonresponse: A Longitudinal and International Comparison. In Survey Nonresponse (R. M. Groves, D. A. Dillman, J. L. Eltinge, and R. J. A. Little, eds.), John Wiley and Sons, London, pp. 41–69.
Keeter, S., C. Miller, A. Kohut, R. M. Groves, and S. Presser. 2000. Consequences of Reducing Nonresponse in a National Telephone Survey. Public Opinion Quarterly, Vol. 64, No. 2, pp. 125–148.
Martin, M. E., M. L. Straf, and C. F. Citro (eds.). 2001. Principles and Practices for a Federal Statistical Agency, 2nd ed. Committee on National Statistics, National Research Council, Washington, D.C.
Massey, J. T., D. O’Connor, and K. Krotki. 1998. Response Rates in Random Digit Dialing (RDD) Telephone Surveys. In 1997 Proceedings of the Section on Survey Research Methods, American Statistical Association, Alexandria, Va.
NRC. 1979. Privacy and Confidentiality as Factors in Survey Response. National Academy Press, Washington, D.C.
Schuman, H., and S. Presser. 1981. Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording, and Context. Academic Press, San Diego, Calif.
Sudman, S., and N. Bradburn. 1982. Asking Questions: A Practical Guide to Questionnaire Design. Jossey-Bass, San Francisco, Calif.
OVERVIEW OF THE OMNIBUS SURVEY PROGRAM
The Omnibus Survey Program15 currently comprises two categories of survey:
A household survey, conducted monthly, that addresses a range of transportation issues; and
Targeted surveys, up to a maximum of four per year, that address special transportation topics, such as transportation use by persons with disabilities and mariners’ willingness to serve in a defense emergency.
Although the Omnibus monthly household and targeted surveys differ in many respects, as discussed later, they carry the same OMB clearance number. As such, they are subject to constraints defined in the OMB clearance package for the BTS Omnibus program.16 In particular, all surveys under the Omnibus program are required to include questions assessing customer satisfaction with various aspects of the transportation system—the core function of the Omnibus program. For example, the May 2002 household survey asked, “In terms of security from crime or terrorism, did you feel more secure or less secure flying on a commercial airline in April than a year ago?” This customer satisfaction component, which is a relatively unusual feature for federal government surveys, assists USDOT in complying with the requirements of the Government Performance and Results Act of 1993 for federal agencies to establish standards measuring their performance and effectiveness.
In addition to the customer satisfaction questions, the Omnibus surveys include questions designed to obtain factual (behavioral) information on transportation use or other transportation-related items. For example, the 2002 Mariner (targeted) survey asks merchant mariners, “Do you have a Standards of Training, Certification, and Watchkeeping (STCW) 95 certificate?”
The Omnibus program in general, and the monthly household survey in particular, offers opportunities for expediting the survey process so that data can be delivered to customers in a timely fashion:
Surveys that fall within the scope of the Omnibus program can be initiated at relatively short notice because a blanket survey clearance has already been obtained from OMB.
BTS staff, who have experience in conducting surveys of this type, provide methodological support to other administrations within USDOT, notably those with limited statistical expertise in-house.
The survey process for the monthly household survey is already established, so special questions from the modal administrations (see below) can be added relatively easily and at relatively low cost.
THE OMNIBUS MONTHLY HOUSEHOLD SURVEY
The purpose of the Omnibus monthly household survey is to “monitor expectations of and satisfaction with the transportation system and to gather event, issue, and mode-specific information” (BTS 2002). In addition to a core set of demographic questions to determine respondents’ age, gender, geographic area, and so forth, each month’s survey questionnaire contains three sets of questions:
A core set of transportation questions: These questions, which remain the same from month to month, ask respondents about their use of different modes of transportation and their perceptions and experiences using these modes.
Questions to assess achievement of USDOT’s strategic goals: The goals of safety, mobility, human and natural environment, and security17 are addressed on a rotating basis. For example, questions on the environment are asked three times a year (in January, May, and September). A particular question may be included only once or may be repeated in several editions of the survey.
Questions provided by the modal administrations within USDOT: These questions address specific issues of immediate interest to the modal administrations. For example, NHTSA has asked opinion and behavioral questions about headlight glare and tire pressure measurement, and TSA has asked questions about security screening procedures at airports. Each question may be included only once or may be asked for several consecutive months.
Data are collected every month from approximately 1,000 U.S. households using an RDD telephone methodology. Data collection, which occurs over a 10-day period, is performed by a contractor, who also programs the CATI instrument. BTS provides the contractor with the survey questionnaire.
The existing OMB clearance for the Omnibus program avoids the need for BTS to obtain approval for each monthly household survey individually unless deviations from the preapproved survey package are proposed.18 The draft survey questionnaire is reviewed each month by a panel of experts selected by the survey contractor and drawn from the statistical and transportation communities.
The committee was unable to obtain an estimate of the total cost of the monthly household survey because BTS staff time spent developing the questionnaire and processing the data is not itemized. However, this survey appears to be a relatively low-budget initiative. BTS spends $109,000 per month ($1.3 million per year) on contractor costs for the survey and charges $800 for each modal administration question included in the survey.
The Omnibus monthly household survey is distinguished from many federal surveys not only by its customer satisfaction component but also by its ability to provide quick responses to a range of questions on a continuing basis.19 For example, the questions for the May 2002 survey were finalized on April 26, 2002, and the final data tabulations and microdata file were made available on June 20, 2002. Thus, the survey allows the modal administrations within USDOT to obtain answers to well-defined policy questions with a turnaround time of approximately 2 months.
THE OMNIBUS TARGETED SURVEYS
The Omnibus targeted surveys fulfill the needs of modal administrations within USDOT for information on special interest populations, such as air travelers, mariners, and travelers with disabilities.20 In some cases these surveys originate from modal administration questions in the Omnibus monthly household survey. For example, TSA would like to correlate operational data on airport screening procedures (e.g., staffing levels, time of day, operation of screening equipment) with air travelers’ experiences and opinions. The agency obtained some preliminary information from questions added to the monthly survey but is now considering using a targeted survey to investigate customer satisfaction and confidence in more depth.
18. BTS is still required to provide OMB with each month’s survey questionnaire 30 days before data collection commences.
19. Other customer satisfaction surveys conducted by federal agencies include the in-depth visitor survey and the customer satisfaction card survey conducted as part of the National Park Service Visitor Service Project (www.nps.gov/socialscience/waso/products.htm). However, these are not quick-response surveys; for example, the results of the customer satisfaction card survey are published annually. Federal agencies such as the National Science Foundation and the National Center for Education Statistics have on occasion conducted quick-response surveys, but not on a continuing basis.
20. In a presentation to the committee on June 26, 2002, Michael Cohen of BTS reported that the following Omnibus targeted surveys have been completed or are in progress: Air Traveler I and II, Highway Use, Mariner I and II, Transportation Use (Disability), and Bicycle Use and Pedestrians.
In contrast to the monthly household survey, which relies on telephone methods to gather data, the targeted surveys use a variety of data collection methods—including mail out/mail back, telephone, and Web-based approaches—depending on the purpose of the survey and the target population. For example, the 2001 Mariner Survey was conducted primarily by mail, but telephone interviews were conducted with some nonrespondents in an effort to increase the overall response rate. The data collection cycle for targeted surveys is determined by information requirements and, in contrast to the monthly household survey, is not routinely constrained by the need for a quick response. The sample size is determined by the purpose of the survey and the availability of resources.
BTS is required to inform OMB in advance of its general plans for Omnibus targeted surveys for the forthcoming year. At present, senior BTS staff review proposed targeted surveys and have, on occasion, sought advice from OMB regarding a particular feature of a proposed survey. However, there is currently no formal review process for targeted surveys that fall within the scope of the OMB clearance package for BTS Omnibus surveys. This situation is about to change. Starting in 2003, a shortened version of a full OMB clearance package for each targeted survey will be submitted to OMB for review. The purpose of the truncated package is to ensure either that proposed surveys do not deviate from approved sample or survey designs or that any such deviations are justified. In addition, OMB will require a 30-day public comment period on proposed targeted surveys.21
As in the case of the monthly household survey, the committee was unable to obtain an estimate of the total cost for any given targeted survey because BTS staff time is not itemized. Funding for the targeted surveys is provided by BTS and the modal administration requesting the survey. In some instances the BTS contribution is “in kind” and consists of staff time to conduct a range of survey-related tasks.22 In other cases, BTS also provides funds for the survey. The reported costs of targeted surveys range from $125,000 to $745,000.