PHASE I – AIRCRAFT NOISE ANNOYANCE SURVEYS

2 Annoyance Survey Method

2.1 Design Considerations

The goal of this ACRP Study was to test two methods of data collection that could be used in a national survey of community reactions to aircraft noise. In the study, a telephone survey was compared with a mail survey. These survey modes were selected after considering the costs and possible sources of bias of the modes that could be employed in a survey. This section describes methods that can be used to collect data in a sample survey and summarizes the advantages and disadvantages of each method in the context of an aircraft noise and annoyance survey. The study design used an address sample based on the USPS Delivery Sequence File (DSF).[2]

[2] This is the list of all addresses where the USPS delivers mail.

Table 1 ranks four methods of survey data collection against several evaluation criteria. Each column ranks the methods on a different criterion, with a low value considered better than a high value. Unit cost refers to the expense of completing and processing an interview; this cost is directly related to the total number of interviews (quantity) that can be collected. There are four measures related to data quality. Coverage refers to the correspondence between the population of interest (adults living around the airport) and the sample frame. The sample frame is the list or set of procedures that produces the elements eligible to be sampled for the study; a data collection method has complete coverage if the sampling frame includes everyone in the population of interest. Response rate and respondent selection are additional quality measures. A respondent selection method is required to randomly assign the interview to a specific individual living within each sampled address, and a method receives a better (lower) score if the survey researcher can ensure that the person randomly assigned to respond to the survey is actually the person who responds. A method scores well on follow-up if it is easy to determine whether a sampled unit is eligible for the survey and to follow up with the household after the initial contact. Complex questionnaire refers to the capability of the method to accommodate complex skip and logic navigation patterns within the questionnaire. Comparability refers to whether the method allows comparison to prior surveys on airport noise.

Table 1. Rank Order of Survey Data Collection Methods by Quality and Cost Criteria
(A rank of 1 is best on each criterion: lowest unit cost, highest data quality, greatest ability to handle a complex questionnaire, and greatest comparability to prior surveys; a rank of 4 is worst. Coverage, Response Rate, Respondent Selection, and Follow-up together make up the data quality criteria.)

Method       Unit Cost   Coverage   Response Rate   Respondent Selection   Follow-up   Complex Questionnaire   Comparability
In-Person        4           1            1                  1                 1                 1                  1
Telephone        3           3            3                  2                 2                 2                  1
Mail             2           2            2                  4                 3                 4                  2
Web              1           4            4                  4                 3                 3                  2

2.1.1 Unit cost

A review of all columns in Table 1 reveals that, if not for cost, in-person would be the best method of collection. However, because of the extensive travel requirements, in-person collection generally costs 6 to 8 times as much per completed interview as a telephone interview. A web survey is the least expensive. An additional contributor to in-person cost is the need to cluster in-person interviews in small geographic areas to minimize travel costs and time. This can decrease the effective sample size (Groves et al., 2004),[3] although the compact regions included in an aircraft noise survey would involve less travel than surveys that cover a larger geographic area.

[3] References are listed in Section 13.
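
The loss of effective sample size from geographic clustering can be illustrated with the standard design-effect approximation deff = 1 + (m - 1) * rho, where m is the number of interviews per cluster and rho is the intraclass correlation of the outcome within clusters. The sketch below is a minimal illustration under assumed values; the cluster size, correlation, and sample size are hypothetical and are not taken from this study or from Groves et al. (2004).

```python
# Illustrative sketch: how clustering in-person interviews to limit travel
# reduces the effective sample size. All numeric values are assumptions.

def effective_sample_size(n_total: int, cluster_size: int, rho: float) -> float:
    """Effective sample size under the design-effect approximation
    deff = 1 + (m - 1) * rho, with m interviews per cluster and rho the
    intraclass correlation of the survey outcome within clusters."""
    deff = 1.0 + (cluster_size - 1) * rho
    return n_total / deff

if __name__ == "__main__":
    n = 1200      # completed in-person interviews (hypothetical)
    m = 10        # interviews clustered per neighborhood segment (hypothetical)
    rho = 0.05    # intraclass correlation of annoyance responses (hypothetical)
    print(f"Nominal n = {n}, effective n ~= {effective_sample_size(n, m, rho):.0f}")
```
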

2.1.2 Coverage

In-person surveys cover the largest portion of the population by allowing direct contact with both the sampled units (in our case, households at specific addresses) and the eligible populations within households. Mail is rated second because the DSF frame has been found to have very good coverage of adults living in the civilian non-institutional population. Telephone and web surveys have lower coverage than in-person or mail surveys because of their inability to efficiently contact particular units.

A telephone survey relies on obtaining telephone numbers to call sampled households. This might be done using a random digit dial (RDD) sample. However, because the ACRP sample has to be targeted within a relatively small area, it is not efficient to use RDD.[4] A second way to collect telephone numbers is to sample households based on addresses and then match these against a reverse phone directory that associates phone numbers with addresses. This approach yields telephone numbers that are listed in the phone book, as well as telephone numbers from other data sources used by the sample vendor (e.g., warranties or subscriptions). For those addresses that do not match to a telephone number, a short mail survey can be sent to the household asking for a telephone number for purposes of participating in the survey. This two-pronged method has been found to yield telephone numbers for approximately 88% of households in the original address sample (Montaquila et al., 2010). As shown in Table 1, the telephone method provides less coverage than both the mail and in-person methods because it leaves a percentage of households for which no telephone number can be acquired (i.e., no reverse directory match, no mail survey returned with a telephone number, or no telephone in the household).

[4] It is possible to do some linking of area codes and telephone prefixes to particular areas for landline telephones. This process, however, is subject to some error, and it is even more problematic for generating numbers for cell phone users.

A web survey is lowest on coverage because it is limited to those with access to the internet. Approximately 73% of the population has internet access in some way (Pew, 2013). Those who do not have access tend to have lower incomes and to be in older age groups (Groves et al., 2004), which automatically creates a biased sample. If a web mode were used, the survey would also have to use another method to reach those who do not have access to the web or who are not willing to use it (Messer and Dillman, 2011).
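
As a back-of-the-envelope illustration of how the two-pronged approach can reach roughly 88% of sampled addresses, the sketch below combines a reverse-directory match rate with a mail-back rate for the unmatched addresses. Both component rates are assumptions chosen purely for illustration; they are not figures reported by Montaquila et al. (2010).

```python
# Sketch of the arithmetic behind the two-pronged telephone-number acquisition:
# addresses are first matched to a reverse directory; unmatched addresses are
# sent a short mail request for a phone number. Rates below are assumed.

match_rate = 0.60        # share of addresses matched by the reverse directory (assumed)
mail_return_rate = 0.70  # share of unmatched addresses returning a phone number (assumed)

combined_yield = match_rate + (1 - match_rate) * mail_return_rate
print(f"Share of addresses with a telephone number: {combined_yield:.0%}")  # 88%
```
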
2.1.3 Response rate

In-person surveys have the highest response rates. An interviewer's ability to make contact with the sampled unit and make a personal appeal is the most effective way to obtain cooperation. Until several years ago, telephone surveys also had relatively high response rates, but there has been a dramatic decline over the last decade (Curtin et al., 2005). Telephone survey response rates of approximately 20% to 30% are now standard in the industry for random digit dialing (though for the three test airports of this study, the response rate was only 12.1%; see Section 7). Mail and mail/web surveys are increasingly being used to replace telephone surveys because they can achieve higher response rates. For example, Westat conducted parallel mail and telephone surveys as part of the Health Information National Trends Survey and found the mail survey to have a higher response rate than the telephone survey. Similarly, Westat recently conducted a two-phase mail survey for the National Household Education Survey (NHES) and achieved response rates of approximately 45% (Montaquila et al., 2010).[5]

[5] See also Messer and Dillman (2011) for a similar result.

With respect to response rates, mail surveys consistently outperform web surveys for a general population survey like the one contemplated in this ACRP study (Tourangeau et al., 2013; Messer and Dillman, 2011). Combined with the lower coverage of the web (see the discussion above), there do not at present seem to be any advantages to using a web survey rather than a mail survey if the goal is to increase the representativeness of the survey.
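
For reference, a response rate of the kind quoted above is the number of completed interviews divided by the number of sampled units treated as eligible. The sketch below shows one conservative way to compute it, counting units of unknown eligibility in the denominator (similar in spirit to AAPOR's RR1). The disposition counts are hypothetical and were chosen only so the result lands in the range discussed; they are not dispositions from this study.

```python
# Minimal sketch of a conservative response-rate calculation for an address
# sample. Units of unknown eligibility are kept in the denominator; clearly
# ineligible units (vacant, business, out of area) would be excluded before
# this step. All counts below are hypothetical.

completes = 605
refusals = 1300
noncontacts = 2400
unknown_eligibility = 695

denominator = completes + refusals + noncontacts + unknown_eligibility
response_rate = completes / denominator
print(f"Response rate: {response_rate:.1%}")  # 12.1% with these hypothetical counts
```
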

2.1.4 Respondent selection

Self-administered surveys, such as those conducted by mail or on the web, rely on the household to determine who completes the survey. This is typically done by giving the household a rule to follow to determine who should participate (for example, the person with the next birthday, the oldest male, etc.). Prior research has found that respondents have some trouble following selection rules. For example, Battaglia et al. (2008) found that approximately one-third of the respondents to a mail survey did not follow the birthday rule. In in-person and telephone surveys, the interviewer follows the prescribed procedures for selecting the respondent. There is some evidence that mail surveys lead to higher estimated levels of noise annoyance (Yamada, Kaku, Yokota, Namba, and Ogata, 2008); the suspicion is that the person who is most concerned about noise will respond. Despite these concerns, there is an increasing trend for noise surveys in other countries to use self-administered methods such as mail surveys (Janssen et al., 2011). The present test at three airports, however, found that mail and telephone surveys produced similar annoyance response results (see Section 7).
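
To make the two selection approaches concrete, the sketch below implements interviewer-style random selection among the eligible adults in a household alongside the self-administered next-birthday rule. The household roster and dates are hypothetical and are used only for illustration.

```python
# Sketch of two within-household respondent-selection procedures discussed in
# the text: random selection among eligible adults (as an interviewer would
# apply it) and the "next birthday" rule used in self-administered surveys.

import random
from datetime import date

adults = [
    {"name": "Adult A", "birthday": date(1970, 3, 14)},
    {"name": "Adult B", "birthday": date(1985, 11, 2)},
    {"name": "Adult C", "birthday": date(1992, 6, 30)},
]

def random_selection(household, rng=random):
    """Interviewer-style selection: each eligible adult has equal probability."""
    return rng.choice(household)

def next_birthday_rule(household, today=None):
    """Select the adult whose birthday comes soonest after 'today'."""
    today = today or date.today()
    def days_until_birthday(person):
        bday = person["birthday"].replace(year=today.year)
        if bday < today:
            bday = bday.replace(year=today.year + 1)
        return (bday - today).days
    return min(household, key=days_until_birthday)

print(random_selection(adults)["name"])
print(next_birthday_rule(adults, today=date(2014, 5, 1))["name"])  # Adult C
```
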
2.1.5 Follow-up

All of these methodologies permit multiple follow-ups with sampled addresses. In-person surveys provide more information on the status of the units and thus allow for more efficient follow-up of eligible units. For example, interviewers are able to look at a unit and decide whether it is a business and/or whether someone is living there. Less information is available for a telephone survey if no one answers the telephone or the individual refuses to answer any questions. However, if the initial call is completed, interviewers are able to determine the status of the unit, its geographic location, and its eligibility for the survey (e.g., business or residential). In both in-person and telephone surveys, if an individual cooperates with the initial set of screening questions (e.g., eligibility; selecting someone in the household), follow-up can be tailored to the particular respondent.

Mail and web surveys provide the least amount of information for follow-up. Both require mailing a request to an address. If nothing is returned, neither the eligibility of the unit nor whether anyone in the unit actually saw the survey request can be determined. Although it is good practice to follow up nonresponses to the initial mailings, the biggest difference from similar follow-ups for in-person and telephone surveys is that the mail follow-up cannot be tailored to the status of the unit. For example, with a telephone survey it is possible to call back and ask for the individual who was selected to be the respondent. For a mail/web survey this is not possible; a general request to the household has to be made because nothing is known about the status of the initial request.

2.1.6 Complex questionnaire

The in-person, telephone, and web-based surveys may be administered using computers. In in-person and telephone surveys, the interviewer reads the questions from a computer and enters the information directly; for a web survey, the respondent performs the same task. In all three cases the computer is programmed to navigate the respondent through the questionnaire. This provides significant flexibility for tailoring the questions and eliminates the need for respondents to understand navigation instructions. Paper-mail surveys, in contrast, require special accommodations to simplify navigation instructions and survey procedures.
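
As a concrete illustration of the automated routing that computer-assisted instruments handle, the sketch below encodes a few items and skip rules in a small routing table and walks a scripted respondent through them. The question wording and routing are invented for illustration and are not items from the ACRP instruments.

```python
# Minimal sketch of skip logic as a computer-assisted instrument (in-person,
# telephone, or web) would execute it. Questions and routing are hypothetical.

QUESTIONS = {
    "Q1": {"text": "In the past 12 months, have you heard aircraft noise at home? (yes/no)",
           "next": lambda ans: "Q2" if ans == "yes" else "Q4"},
    "Q2": {"text": "How annoyed are you by aircraft noise? (0-10)",
           "next": lambda ans: "Q3" if int(ans) >= 8 else "Q4"},
    "Q3": {"text": "Have you ever complained to the airport about noise? (yes/no)",
           "next": lambda ans: "Q4"},
    "Q4": {"text": "How many years have you lived at this address?",
           "next": lambda ans: None},  # end of this abbreviated interview
}

def run_interview(answer_fn, start="Q1"):
    """Walk the routing table; answer_fn supplies a response for each item."""
    answers, item = {}, start
    while item is not None:
        ans = answer_fn(QUESTIONS[item]["text"])
        answers[item] = ans
        item = QUESTIONS[item]["next"](ans)
    return answers

# Example: a scripted respondent instead of interactive input()
scripted = iter(["yes", "9", "no", "12"])
print(run_interview(lambda text: next(scripted)))
```
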

2.1.7 Comparability

Most previous annoyance surveys have been conducted using an in-person, interviewer-administered methodology. As noted in the literature review, telephone and in-person surveys produce similar results from a measurement perspective. This is consistent with the survey literature, which does not find large measurement differences between these two modes (de Leeuw, 2005). Larger differences are generally found when comparing responses from interviewer-administered and self-administered methods, such as a paper-mail or web survey (de Leeuw, 2005). For this reason, a survey conducted in person or by telephone would be most comparable to many of the prior studies. However, there is an increasing trend to move from interviewer-administered surveys to self-administered surveys; paper-mail and web surveys are less expensive to administer and are becoming more common over time.

2.2 Final Design

While an in-person survey is rated best on most criteria, the cost was deemed too high to implement at a national level. The final design adopted for this ACRP Study included parallel telephone and paper-mail surveys. A telephone survey has the advantage of maintaining control over respondent selection and of being comparable with many of the prior noise surveys. The advantages of a mail survey are that it will have better response and coverage rates and that it is less expensive than a telephone survey. The main disadvantage of a mail survey is the possibility that the respondents who are most annoyed will tend to respond. With a mail survey, the respondent is able to review the questionnaire before filling it out; if there are a large number of questions about the airport and annoyance, the respondent is able to factor the topic of the survey into his or her decision to participate.

To minimize the effects of respondent self-selection on the mail survey, a short survey was designed that included 10 substantive questions, one of which was the primary measure used to assess annoyance with airport noise. This makes it difficult, if not impossible, for anyone to detect that the survey was specifically intended to measure annoyance with airport noise. The telephone survey is longer and collects more data on the local airport and the respondent's annoyance; these additional data provide information for analysts interested in trying to explain annoyance levels. By administering both types of surveys, the project can assess whether the levels of annoyance differ between mail and telephone surveys. If the mail survey results are similar to the telephone survey results, including a mail survey (which has a lower cost per response obtained) in a national survey could allow a larger number of airports to be surveyed, with an increased sample size from each.
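
One simple way to check whether annoyance levels differ by mode is to compare the proportions of respondents classified as highly annoyed in the mail and telephone samples, as in the sketch below. The counts are hypothetical and the test is only illustrative; it is not the analysis specified by the study (the study's own mode comparison appears in Section 7), and a full analysis would also account for noise exposure level and the sample design.

```python
# Sketch of a two-proportion z-test comparing the share of highly annoyed
# respondents in the mail and telephone samples. All counts are hypothetical.

from math import erf, sqrt

def two_proportion_z_test(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for H0: the two proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * (1 - Phi(|z|))
    return z, p_value

# Hypothetical: 120 of 600 mail respondents vs. 95 of 550 telephone respondents highly annoyed
z, p = two_proportion_z_test(120, 600, 95, 550)
print(f"z = {z:.2f}, p = {p:.3f}")
```
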
