The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.

Survey Design

Sampling of air passengers to survey can be particularly problematic because of their transience. Passengers arriving late at the gate are typically under-represented and have a high non-response rate, while connecting passengers are often over-represented. Appropriate sampling methods are discussed in Sections 5.2 and 5.3.

4.3 Questionnaire Design and Structure

The design and structure of the survey questionnaire, including the wording of individual questions, is crucial to the success of a survey. Issues to be considered include what information to request, the order in which the questions are asked, how much detail to try to obtain, and the amount of time that respondents can be expected to spend completing the survey.

4.3.1 Length

As the amount of information to be obtained by a survey or the level of detail desired for the responses increases, so does the length of the questionnaire. Once the decision is made to incur the cost of performing a survey, there is often a strong desire to increase the amount of information it provides. However, increasing the length of the survey may increase the refusal rate and the number of incomplete responses, and also reduce the number of surveys that the field staff can perform in a given time period, thereby increasing the cost of the survey to obtain the same number of responses.

There are a number of practical limitations on survey length. The most obvious is the time that respondents are willing to spend answering the questions. This length of time will depend in part on the circumstances. Someone completing a survey questionnaire in the comfort of their office or home will generally be willing to answer more questions, and in greater detail, than someone who is standing in a busy airport terminal and is anxious to catch a flight.

The survey methodology may also impose limitations on survey length.
If the survey questionnaire is a printed form that is completed by hand, it should take up no more than two sides of a single sheet of paper. The text has to be large enough for respondents to read and the form has to provide enough space to write in the answers.

4.3.2 Response Options

Survey questions fall into three broad types, based on the response options:

- Numerical, in which respondents provide a numerical value, which could include dates and times.
- Categorical, in which respondents choose among predefined alternatives.
- Open-ended, in which respondents can answer in their own words.

The results of open-ended questions are much more difficult to analyze, but they may provide richer information because the respondents are not forced to select from a limited number of categories. For many applications it is common to use a hybrid form, in which respondents are presented with a set of categorical responses, one of which is "Other" with an option for an open-ended response. This option allows common responses that were not covered by the categorical options to be assigned their own category code after the fact. Also, "Other" responses that really should have been one of the defined categories can be recoded.

However, adding a category after the survey based on the "Other" responses can result in an under-reporting of that category, because some respondents who would have selected that option if it had been presented chose a defined category instead. This occurrence is less of a

problem with an interview survey, where the respondents cannot see the defined categories when they answer the question, than with a self-completed survey.

In addition to being easier to analyze, categorical questions have the advantage that they are generally quicker to answer, because they typically involve just checking a box. Also, because they present the respondent with a predefined set of possible responses, they encourage the use of standardized terminology, yet may also trigger a response that would not otherwise have been mentioned. While this is true for self-completed questionnaires, there is a potential disconnect with interview surveys, where the respondent does not see all the options and the interviewer assigns the response provided to one of the defined categories. Different interviewers may handle a similar response in different ways. This inconsistency reduces inter-rater reliability, and it can be a particular problem when asking about ground transportation modes, because different respondents may refer to the same mode in many different ways. One solution is to provide interviewers with printed cards that list the defined options, which can be shown to respondents to help them provide an appropriate response.

Categorical questions and the responses obtained from them may include the following types of problems:

- The respondent checks multiple boxes when asked to check only one. The instruction should make it clear whether one or multiple boxes should be checked; it should be separate from the question text itself and should stand out. Web-based surveys and those using electronic data collection devices (discussed in Section 4.9) eliminate this error.
- A categorical response for "Not applicable" or "No opinion" is not provided where some such form of non-response is appropriate.
- When using a rating scale for opinions, such as 1 to 5 or 1 to 7, be careful to word all the questions in a consistent way, so that the highest number always corresponds to the most positive opinion and 1 corresponds to the most negative opinion.

Consideration should be given to including a comment box after each group of questions to allow respondents to note any clarifications or other relevant information.

4.3.3 Question Wording

The wording of questions is critical to the success of a survey. Respondents who misunderstand a question are not going to provide the desired information. Worse, it may not even be clear that they have answered a different question from the one intended. Similarly, interviewers who misunderstand a question may miscode the response. Therefore, considerable effort should be devoted to developing clear and unambiguous questions.

Consider the question "How far do you live from the nearest airport?" How far in terms of what: miles? blocks? travel time? travel time by car or by transit? And what kind of airport is meant: the local general aviation airport? the nearest airport with scheduled passenger service? Getting from the general and vague to the specific is both necessary and difficult. Airports with years of experience conducting user surveys are still investigating possible refinements to their questions. Unfortunately, the easiest way to discover that a question is problematic is to look at the resulting data. Preventing this after-the-fact problem requires a serious commitment of time and effort to planning, thoughtful consideration of possible answers, and thorough testing of questions before the survey is deployed.
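The scale-consistency point made in Section 4.3.2 can also be enforced at analysis time: if an item was inadvertently worded so that 1 is the most positive response, it can be reverse-coded before results are combined. A minimal sketch, assuming a 1-to-5 scale (the item and responses below are hypothetical):

```python
def reverse_code(response, scale_max=5):
    """Map a rating on a 1..scale_max scale onto the reversed scale,
    so that 1 becomes scale_max and scale_max becomes 1."""
    if not 1 <= response <= scale_max:
        raise ValueError(f"response {response} outside 1..{scale_max}")
    return scale_max + 1 - response

# Hypothetical responses to a negatively worded 1-5 item
# ("The check-in queues were too long", 1 = strongly disagree).
# Reverse-coding aligns it with positively worded items.
negative_item = [1, 2, 5, 4]
aligned = [reverse_code(r) for r in negative_item]
print(aligned)  # [5, 4, 1, 2]
```

The same transformation applies unchanged to a 1-to-7 scale by passing `scale_max=7`.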

There are two broad categories of questions:

- Factual questions.
- Opinion questions.

Factual questions ask for information that the respondent should be able to provide (such as how many bags they checked or how they got to the airport), while opinion questions seek the respondents' views on an issue. Opinion questions present respondents with a range of options so they can select the one that best describes their opinion. This type of question may take the form of a statement, with the respondents being asked how strongly they agree or disagree. Satisfaction questions, which explore the respondents' satisfaction with particular facilities or services, are a subcategory of opinion questions.

Wording concerns with factual questions largely revolve around ensuring that the intent of the question is clear to the respondents and that the descriptions used for categorical questions are unambiguous. For example, difficulties can arise over local terminology that may not be familiar to visiting air passengers, such as the names of different ground transportation services (discussed further in Section 5.4). Question clarity is particularly important with self-completed questionnaires, where there is limited opportunity for respondents to clarify the intent of a question or ask how their response should be classified in terms of the response categories provided.

The challenge with opinion questions is to ask the question in a way that allows for a meaningful answer. Such careful wording is particularly important with questions that ask respondents to indicate how likely they would be to use some proposed facility or service, or their satisfaction with some existing facility or service. Because the likelihood of using a facility or service depends on the circumstances affecting the decision, such questions have to be framed in terms of a specific situation, such as the trip that an air passenger is currently taking.
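To make the agree/disagree form of opinion question concrete, the item can be represented as an explicit mapping from category labels to scale values, which also makes non-responses easy to handle. A sketch only; the statement, labels, and responses are hypothetical:

```python
# Hypothetical 1-5 agreement scale for an opinion statement such as
# "Finding my way to the gate was easy."
SCALE = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

def mean_rating(responses):
    """Average the numeric values of the chosen labels,
    skipping non-responses recorded as None."""
    values = [SCALE[r] for r in responses if r is not None]
    return sum(values) / len(values) if values else None

sample = ["Agree", "Strongly agree", None, "Disagree"]
print(round(mean_rating(sample), 2))  # (4 + 5 + 2) / 3 -> 3.67
```

Keeping the label-to-value mapping in one place ensures every item on the questionnaire is scored on the same scale, in line with the consistency advice in Section 4.3.2.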
Similarly, because satisfaction with a given facility or service is influenced by both expectations and the respondent's experience with the use of the facility or service, customer satisfaction questions need to be worded in a way that allows these influences to be identified.

4.3.4 Question Order and Interview Flow

At least four considerations affect the order in which the different questions are asked:

- The most obvious consideration is where the answer to one question affects subsequent questions. For example, it is important to determine whether an air passenger is starting a trip or connecting between flights before asking questions about the ground access trip to the airport.
- A more subtle but equally important consideration is to introduce requests for information in a logical sequence. Asking survey respondents the type of place from which they began their trip to the airport gets them thinking about where they started their trip and leads naturally to questions about the location of that trip origin, such as the city or zip code. Earlier questions can also help clarify the intent of subsequent questions. Asking how many people are traveling together clarifies subsequent references to the travel party, such as how many bags the travel party checked.
- A third consideration is to obtain as much key information as possible if there is a likelihood that the respondent will be unable to complete the survey. Asking those questions earlier in the survey makes it more likely that they will be answered.
- The fourth consideration is that most surveys involve some branching that depends on the responses to earlier questions. These branches or skip patterns can request more detailed information for certain responses or omit questions that do not apply. In the case of printed questionnaires, these skip patterns should not be too complex, or respondents or interviewers will have difficulty deciding where to go next in the questionnaire and may miss key