
Guidebook for Conducting Airport User Surveys (2009)

Chapter 8 - Surveys of Area Residents


Many of the issues related to planning and designing surveys of area residents are common to other types of airport user surveys, and the reader will be referred to those sections of the guidebook where applicable.

8.1 Purpose of the Survey and the Data to Be Collected

Most surveys of area residents are conducted to obtain information for marketing and airport planning purposes. Common areas of inquiry include reasons residents choose one airport over another, the extent to which residents of one area are using an airport in another area, the trip characteristics (airline, final destination, airfare, etc.), why the other airport is preferred, what might make the local airport more attractive to prospective passengers, what messages about the airport would resonate with these passengers, and what information sources passengers are using to make airport choices.

8.2 Survey Methodology

Surveys of members of the general public are most commonly conducted by telephone, because this is by far the most cost-effective method. Although some contend that such surveys can now be conducted via the Internet, the fact remains that only 45% to 60% of households are online, depending on whose figures one uses and the community in question. In addition, online surveys generally have the lowest response rates of any of the available survey strategies, which can lead to results that are unrepresentative of the population of interest. For these two reasons, the telephone remains the preferred approach. Whether the Internet comes into its own as a vehicle for general public surveys will depend on its future rate of penetration.

Of course, telephone surveys also have their drawbacks. These, and the methods used to overcome them, are discussed in the following section.

8.3 Sampling, Coverage, and Timing

8.3.1 Types of Telephone Survey Samples

Unfortunately, there are no lists of telephone numbers of all members of the general public from which an airport could select a list of people to call. Accordingly, less than optimal lists or some alternative approach must be utilized.

Various types of lists do exist, but none of them represent randomly selected samples of all people in a given geographical area. In addition, most such lists, usually purchased from brokers, are compilations of people with particular characteristics.

Typically, these lists contain non-random samples of “low-incidence” target groups—groups whose proportion in the general population is small. If such a target group is of interest, it is acceptable to sample from non-random lists because the cost of searching for members of low-incidence groups is usually prohibitive. If the target is the general public as a whole, however, the non-randomness of lists makes them widely frowned on as sampling sources.

The alternative, which is theoretically elegant but messy in practice, is to use something called “random-digit dialing” (RDD). In brief, RDD samples are constructed by combining known pairs of area codes and prefixes (the first three digits of a telephone number) with a random four-digit suffix. The elegant aspect of an RDD sample is that it represents a true random sample of every telephone-owning household in an area. Households without telephones are excluded, but this is only an issue in areas with high proportions of non-telephone households; overall, about 97% of American households have telephones. Households with multiple land lines are also oversampled, but this usually has a trivial impact on survey results. Importantly, households with unlisted numbers (as well as newly listed and erroneously listed numbers) are included. Because about a third of numbers in the United States are unlisted, this is a key benefit of RDD.

One challenging aspect of RDD is that it includes a lot of “junk” numbers: fax machines, data lines, businesses, non-working numbers, and the like. Although this does not affect the response rate for an RDD sample—these numbers are simply excluded from the calculations—it does affect the cost of the survey. It is not at all unusual to have to generate and dial 8 to 10 numbers for every completed interview in a relatively simple survey. Completion rates are generally between 1.25 and 1.75 interviews per interviewer per hour, which becomes boring for the interviewers and expensive for the sponsors.

Another challenging aspect of RDD sampling is what happens with a “ring-no-answer”—a number that is never answered when repeatedly dialed. Answering machines usually give sufficient clues for categorizing the number as either a residence or a business, but many of these numbers have no answering machines. Dialing such numbers dozens of times usually resolves their status, but this is extremely expensive and usually done only during large academic or federal government surveys. How these numbers are treated in final response rate calculations thus becomes problematic.
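
The RDD construction described above is straightforward to script. The sketch below is purely illustrative and is not part of the guidebook's methodology: the area-code/prefix pairs are invented placeholders (a real study would obtain valid pairs from a sample vendor), and the oversampling factor simply reflects the rule of thumb of 8 to 10 dialed numbers per completed interview.

    import math
    import random

    # Hypothetical area-code/prefix pairs assumed to serve the survey area.
    KNOWN_PREFIXES = [("416", "555"), ("416", "556"), ("905", "555")]

    def generate_rdd_sample(target_completes, numbers_per_complete=9, seed=None):
        """Generate random-digit-dialing numbers: a known area code and prefix
        combined with a random four-digit suffix. numbers_per_complete reflects
        the rule of thumb of 8 to 10 dialed numbers per completed interview."""
        rng = random.Random(seed)
        sample_size = math.ceil(target_completes * numbers_per_complete)
        numbers = set()
        while len(numbers) < sample_size:
            area, prefix = rng.choice(KNOWN_PREFIXES)
            suffix = rng.randint(0, 9999)              # random four-digit suffix
            numbers.add(f"({area}) {prefix}-{suffix:04d}")
        return sorted(numbers)

    # For 400 completed interviews, expect to generate roughly 3,600 numbers.
    print(len(generate_rdd_sample(400)))
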
8.3.2 Call Sequence and Design

If most studies do not dial numbers dozens of times, how many calls are usually placed? This matters, because the more calls that are made, the more representative the sample becomes as more and more hard-to-reach people are included. However, multiple dialings lead to increased costs. The general rule among public opinion researchers outside academia is to use a sequence of between four and six calls spread over different days of the week and different times of day. Most call centers dial from 5 to 9 p.m. local time Monday through Thursday or Friday (Friday evening is the least productive time) and during some hours Saturday and Sunday (Sunday evening is the most productive time). Generally, calls past 9 p.m. are frowned upon, as are calls before 10 a.m. Saturday. Whether calling before noon on Sunday makes sense is a function of the area and how many people attend church, go to Sunday brunch, or both.
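
As an illustration only, not a prescription from the guidebook, the calling windows described above (weekday evenings excluding Friday, Saturday daytime after 10 a.m., and Sunday evening) can be rotated across a four-to-six-attempt sequence along the following lines. The specific time slots and the five-attempt default are assumptions.

    # Calling windows consistent with the guidance above (local time).
    # Friday evening is omitted as the least productive window.
    CALL_WINDOWS = [
        ("Monday", "5-9 p.m."),
        ("Tuesday", "5-9 p.m."),
        ("Wednesday", "5-9 p.m."),
        ("Thursday", "5-9 p.m."),
        ("Saturday", "10 a.m.-5 p.m."),
        ("Sunday", "6-9 p.m."),      # typically the most productive window
    ]

    def attempt_schedule(number_index, attempts=5):
        """Spread one number's call attempts over different days and times,
        staggering the starting window so the overall call load is balanced."""
        start = number_index % len(CALL_WINDOWS)
        return [CALL_WINDOWS[(start + i) % len(CALL_WINDOWS)] for i in range(attempts)]

    for day, window in attempt_schedule(number_index=7):
        print(day, window)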

8.3.3 Sources of Bias

As noted previously, one small source of bias in telephone surveys derives from the exclusion of non-telephone households, and another from households with two land lines. Both are quite trivial and in most cases can be dismissed as inconsequential.

A larger potential source of bias is how ring-no-answer numbers are handled. Whether these numbers actually create a bias in any given study is generally unknown, because in most cases the numbers are fairly rapidly abandoned.

A potentially more important source of bias is refusals, because it is clear from many studies that people who refuse differ from those who do not. As a result, it is generally wise to use only call centers that monitor refusals and keep them under control. For a medium-interest, relatively brief, and well-designed survey of the general public, a refusal rate of more than 30% is an indicator that refusals are not being controlled.

Many call centers now attempt refusal conversions, and the general consensus is that these efforts are worthwhile. In a conversion attempt, the most persuasive interviewers call back a number at which a refusal occurred and try again; only if they are refused a second time is the number abandoned. It is worth noting in this regard that most people refuse not because they never do surveys, but because the interviewer called at an inconvenient time. Frequently, the callback occurs at a better time for the respondent, and consent is readily obtained. It is also possible that someone else in the household who is eligible to participate, and who has a more favorable attitude, will be reached. At the same time, so-called “hard refusals”—people who say they do not do surveys or who ask to be placed on a do-not-call list—are never called back, because the outcome is predictable. (Survey research is not subject to the do-not-call laws, but many people do not know this and ask for do-not-call protection anyway. Most call centers oblige them.)

If sample types other than RDD are used, a major source of bias is the exclusion of households with unlisted telephone numbers. This is particularly true in areas where the proportion of unlisted numbers is high; in some parts of the United States, the unlisted rate currently exceeds 70%.

Finally, there is the issue of cell-phone-only households. Although estimates of the number of such households vary, the problem is not trivial. It is compounded by the fact that it is illegal to use automated dialing equipment, which many call centers rely on, to call cell phones. In addition, if interviews are actually conducted on cell phones, the respondent may be paying for the privilege with precious minutes. At present, the survey research profession has not arrived at a satisfactory solution to this problem. Recent experiments with dual sampling frames (one for cell phones and one for land lines) have had some success in reaching the cell-phone-only population, but these experiments are in their infancy. In the meantime, it probably makes sense to stay with a traditional RDD sample.
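
A simple way to monitor the 30% threshold mentioned above is to track call dispositions and compute a refusal rate as fieldwork progresses. The sketch below uses one common convention—refusals divided by all contacts that ended in a complete, a termination, or a refusal—which is an assumption on our part, since the guidebook does not prescribe a particular formula; the numbers shown are invented.

    def refusal_rate(completes, refusals, terminations=0):
        """Refusals as a share of all eligible contacts reached
        (completes + terminations + refusals). One convention among several."""
        contacts = completes + terminations + refusals
        return refusals / contacts if contacts else 0.0

    rate = refusal_rate(completes=400, refusals=210, terminations=15)
    print(f"Refusal rate: {rate:.1%}")                  # 33.6% in this example
    if rate > 0.30:
        print("Warning: refusals may not be under control.")
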
8.3.4 Dates to Avoid

Although it may seem obvious that certain dates should be avoided when conducting a telephone survey, some organizations have overlooked this basic point. Dates to avoid include:

• Major holidays, including the day before, day of, and day after Thanksgiving.
• The annual income tax due date in April.
• Any date from December 15 through January 2.
• The day of any major sporting event.

8.3.5 Sample Size

Generally, sample sizes for surveys of area residents are determined using the formula for proportional data, as most of the results obtained in such surveys are expressed as percentages. It is also generally assumed that the distribution of the data will follow the worst-case scenario (a 50/50 split); true pilot tests to establish a different and more favorable benchmark are rarely conducted in telephone research. Further parameters for the sample are usually fixed in advance based on the available budget, assumptions about the importance of the results, and the risk of making wrong decisions based on the findings.

Customarily in opinion research, the confidence level is fixed at 95%. The confidence interval, or margin of error, is then stipulated based on the factors outlined in the previous paragraph. For public opinion research, the margin of error is most often fixed at ±5 percentage points, leading to a sample size of 400 (rounded up from 384). A margin of ±3 percentage points, requiring a sample size of about 1,000, is also common. Larger sample sizes are used for high-risk projects and when subgroup analysis will be conducted. Commercial market researchers frequently use a ±6 percentage point margin of error and thus a sample size of 300. Selection of sample sizes and the associated errors and confidence intervals is discussed in detail in Section 3.4, and further examples for determining the required sample size are provided in Appendix B.
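
To make the arithmetic behind these figures explicit, the sketch below applies the standard sample-size formula for a proportion, n = z^2 p(1 - p) / e^2, with the worst-case 50/50 split. The function itself is illustrative; the rounding of 384 up to a conventional 400 (and of the other results up to round figures) follows the practice noted above rather than anything the formula requires.

    from math import ceil
    from statistics import NormalDist

    def sample_size(margin_of_error, confidence=0.95, p=0.5):
        """Minimum sample size for estimating a proportion:
        n = z^2 * p * (1 - p) / e^2, using the worst-case p = 0.5 by default."""
        z = NormalDist().inv_cdf(0.5 + confidence / 2)   # 1.96 for 95% confidence
        return ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

    print(sample_size(0.05))   # 385; commonly quoted as 384 and rounded up to 400
    print(sample_size(0.06))   # about 267, typically rounded up to 300
    print(sample_size(0.03))   # about 1,068, close to the 1,000 often used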

8.4 Questionnaire Wording and Length

Experience suggests that surveys of the general public will start to experience unacceptable rates of refusals and terminations (people who quit in the middle of the interview) when the interview goes past about 10 minutes. Cooperation rates are highest when interviews are five minutes or less, although it is admittedly difficult to craft such a short survey on most topics.

A detailed discussion of question wording issues can be found in Section 4.3. There are nuances in how questions are worded across methods (e.g., the phrase “the following list” works well for self-completed questionnaires but sounds nonsensical over the telephone), but the fundamental principles are the same for all methods. A sample questionnaire for area residents is provided in Appendix J.

8.5 Measures to Obtain Adequate Response

As pointed out in previous sections, obtaining an adequate response is key to a survey’s success; a low response rate leads to questionable results. Important aspects in conducting a telephone survey include the following:

• A centralized facility where interviewers are closely supervised and regularly monitored, ideally during every shift.
• A thorough interviewer training program.
• A comprehensive and proactive approach to coaching interviewers whose skills need improvement or who are doing something wrong during their interviews.

Generally, data quality will be superior if a computer-assisted telephone interviewing (CATI) system is used, although this is not necessarily the case. However, complicated questionnaires with many skips and branches should always be done on CATI systems; they are simply too difficult for interviewers to follow correctly on paper.
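
To illustrate why skip-and-branch logic is easy for a CATI system but hard to follow on paper, the fragment below encodes a hypothetical three-question branch as a routing table. The questions, answer codes, and routing are invented for illustration and are not drawn from the guidebook's sample questionnaire.

    # Hypothetical routing table: each answer determines the next question.
    QUESTIONS = {
        "Q1": ("In the past 12 months, have you taken any trips by air?",
               {"yes": "Q2", "no": "END"}),
        "Q2": ("Which airport did you depart from on your most recent trip?",
               {"local": "Q3", "other": "Q3"}),
        "Q3": ("What was the main reason you chose that airport?",
               {"any": "END"}),
    }

    def next_question(current, answer):
        """Return the next question ID given the current question and answer;
        a CATI system applies these skips automatically during the interview."""
        _, routes = QUESTIONS[current]
        return routes.get(answer, routes.get("any", "END"))

    print(next_question("Q1", "no"))    # END: non-flyers skip the follow-ups
    print(next_question("Q1", "yes"))   # Q2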

Finally, there is the issue of languages. In some communities, interviewing needs to be conducted in a language other than English. However, even in areas with high percentages of people who speak other languages, many of these people also speak English, and some would prefer to be interviewed in English to show they are trying to learn it, even if the interview takes longer as a result. It is therefore wise to ask the call center what its experience has been when questionnaires are translated; often the translation is not worth the time and cost.

It is also important to note that interviewing in other languages requires a written translation of the questionnaire in addition to bilingual interviewers. If interviewers are simply asked to translate on the fly, they will come up with many different wordings, which will cause inconsistencies in the way questions are interpreted and possibly in the responses obtained, and can compromise the utility of the results.

8.6 Survey Budget

A number of factors influence a telephone survey budget. In terms of the survey itself, interview length, the types of questions (open-ended versus closed-ended), the languages to be used (both the number of languages and the difficulty of recruiting interviewers who are fluent in them), the call sequence, the level of data analysis required, and the nature of the desired deliverables all play important roles.

Geography is also important, in two respects. In areas with a high cost of living, survey costs will be higher because rents, salaries, and wages are higher. And in urban areas, costs will be higher because cooperation is more difficult to achieve than in rural or suburban areas.

Given these factors, it is difficult to quote “a price” for a telephone survey. However, for a typical survey—10 minutes long, directed to the general public, conducted in a moderately cooperative market, and including two open-ended questions—the unit price (cost per interview) is likely to be in the $40 to $50 range. Surveys that are shorter, are targeted to more cooperative areas, or have fewer open-ended questions will generally cost less; those that have a narrower target audience (e.g., only people who have made an air trip in the preceding year) will tend to cost more.

8.7 Summary

Most surveys of area residents are conducted to obtain information for marketing and airport planning purposes. Typically, these surveys are conducted by telephone. Unfortunately, there are no lists of telephone numbers for all members of the general public from which an airport could select a list of people to call. The widely accepted alternative is to use RDD.

Perhaps the greatest source of bias in telephone surveys is the refusal of potential interviewees to participate. Smaller sources of bias are non-telephone households and telephone numbers that are not answered after repeated calls. A more recent, and growing, cause for concern is households that have only cellular telephones. Most RDD samples exclude cell phones because it is illegal to dial cell phone numbers automatically and because the respondent must pay the cost of the call.

Experience suggests that surveys of the general public will start to experience unacceptable rates of refusals and terminations when the interview exceeds about 10 minutes. Cooperation rates are highest when interviews are five minutes or less.

The cost of a telephone survey is influenced by a wide variety of factors—including interview length, types of questions, languages to be used, how many calls are placed to each number, the level of data analysis required, and the nature of the desired deliverables—making it difficult to provide a generalized cost estimate. For a typical 10-minute survey, however, the unit price (cost per interview) is likely to be in the $40 to $50 range.
