Web-Based Survey Techniques (2006)

Chapter Six: Case Studies

Suggested Citation:"Chapter Six - Case Studies." National Academies of Sciences, Engineering, and Medicine. 2006. Web-Based Survey Techniques. Washington, DC: The National Academies Press. doi: 10.17226/14028.


This chapter details three case studies describing projects conducted by NJ TRANSIT, Metrolink, and TriMet. These case studies show what can be, and currently is being, done with web-based research by transit agencies. Various themes described in earlier chapters recur here and can be understood in a real-world context.

NJ TRANSIT RAIL ePANEL

The benefits of using web-based longitudinal panels for customer satisfaction studies can be clearly seen in the experience of such a study for NJ TRANSIT's rail customers. The study revealed numerous benefits of this method over cross-sectional studies, including more robust statistics, a better understanding of customer satisfaction, and the ability to analyze customer satisfaction trends. A variety of innovative Internet technologies was implemented, adding value to the study by ensuring data quality and timeliness, reducing respondent burden, lowering random error in respondents' answers, and pairing qualitative data directly with quantitative analysis. Online geocoding of respondents' origins and destinations was another aspect of the survey that provided NJ TRANSIT additional value from the study.

Customer satisfaction studies are conducted by many major organizations, including those that provide transportation services. Typically, such studies are carried out using repeated cross-sectional sampling of customers, and satisfaction scores are compared across these repeated cross sections. Differences in satisfaction scores resulting from perceived changes in service are measured; however, the measurement is confounded in part by differences between the cross-sectional samples. Demographic differences can be accounted for by weighting the samples so that they are equivalent; however, there are significant differences in satisfaction scores between individuals that are not explained by demographic or other easily measured characteristics.
The result is that relatively large samples are required to measure changes in customer satisfaction over time.

Longitudinal panels offer a potentially attractive alternative to repeated cross sections for measuring customer satisfaction. Measuring changes in the satisfaction of the same individuals from one period of time to another eliminates the confounding caused by variations between the different individuals in repeated cross-sectional samples. The result is that the sample sizes required to measure differences in customer satisfaction can be much lower in panel surveys. In addition, panels provide opportunities to directly determine the reasons for those changes.

Longitudinal panels can be administered using a variety of methods. For transportation studies, intercept recruiting is an efficient approach to assembling panels. Although telephone and mail-out/mail-back instruments are commonly used, web-based instruments can be a highly cost-effective alternative for many applications. Web access has increased significantly across the population, and it is possible to construct demographically representative panels from among those who have web access. In addition to their cost-effectiveness, an important advantage of using web instruments with panels is that the time required to complete and analyze data from a survey wave can be dramatically reduced.

NJ TRANSIT's Rail Customer Satisfaction ePanel was designed to be a continuous survey, providing monthly data on customer satisfaction. It used web-based technologies to invite respondents from one of three panels each month and to administer a customer satisfaction survey. The resulting survey data monitor customers' concerns on a monthly and even daily basis, as study data were continually received from respondents throughout each month.
Web-based survey technology allowed great flexibility in obtaining both quantitative customer satisfaction responses, such as typical satisfaction scores, and qualitative responses, such as written answers to open-ended questions used to explain why quantitative scores have changed. This was a critical part of the study, because the reasons for changes in satisfaction scores can be quickly understood when they are paired directly with responses to open-ended questions (see the Drill Down Questions section and Figure 24).

Advanced web-based survey technologies also allowed a number of innovative features that improved data integrity and currency. These features included online geocoding of origin and destination data, automatic updating and querying of train schedule data so that respondents could select only valid trains in their surveys, and full validation of responses to questions. Web-based longitudinal panel survey instruments can also be designed in ways that minimize respondent fatigue. This was accomplished using a number of techniques that required respondents only to confirm that various aspects of their travel had not changed since their previous survey.

Background

The NJ TRANSIT ePanel Customer Satisfaction Study was conducted to provide continuous monthly and quarterly tracking of NJ TRANSIT commuter rail riders' satisfaction along 65 satisfaction measures. These measures had previously been tracked in surveys conducted less than annually, using cross-sectional sampling with handout/handback paper questionnaires.

The ePanel study measured rail customers' satisfaction scores in what NJ TRANSIT calls "functional areas," which included questions related to parking, boarding stations, destination stations, train scheduling, and customer service. The survey also measured "key-driver areas," which include on-time performance, personal security, employee performance, fares, and mechanical reliability. The study provided the ability to segment the customer satisfaction measures by train line, destination market, customer demographics, station, and so on.

NJ TRANSIT's ePanel was designed to answer the following specific questions about commuter rail customers on a continuing basis:

• What are the trends in customer satisfaction, and what factors influence these trends?
• On which train lines within the NJ TRANSIT system is customer satisfaction changing? In what direction are these changes, how big are they, and why are they occurring?
• What are customers' main concerns? Where does NJ TRANSIT need to improve?
• Where are customers satisfied? What performance does NJ TRANSIT need to maintain?

To address these questions, a longitudinal panel study plan was developed in July 2002, driven by a monthly survey that began in September 2002. This survey collected customer satisfaction data every month from one of three separate customer panels, each comprising approximately 4,000 participants.
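The three-panel rotation described above can be sketched as follows. The start month (September 2002) and the one-panel-per-month, four-waves-per-year cadence are from the chapter; the panel labels and ordering are assumptions for illustration.

```python
# Sketch of the three-panel rotation: one panel is invited each month,
# so each panel is surveyed every third month (four waves per year).
# Panel labels and their ordering are illustrative assumptions.

PANELS = ("A", "B", "C")

def panel_for_month(months_since_start: int) -> str:
    """Panel invited in a given month (0 = September 2002)."""
    return PANELS[months_since_start % 3]

def wave_months(panel: str, horizon: int = 12) -> list[int]:
    """Months within the horizon in which this panel is surveyed."""
    offset = PANELS.index(panel)
    return list(range(offset, horizon, 3))
```

Under this rotation, every month yields fresh satisfaction data while any individual panelist is contacted only quarterly, which is exactly the fatigue-reducing property the study relied on.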
Each panel respondent was surveyed four times a year at three-month intervals, giving NJ TRANSIT new monthly customer satisfaction data throughout the year and allowing it to track customer satisfaction trends and customer origin and destination patterns. Respondents were asked to take a survey only once every quarter, reducing respondent fatigue and giving respondents enough time between survey waves to notice service changes.

Web-Based Survey Instrument

The survey used a web-based, multi-paneled, multi-waved customer satisfaction questionnaire with a number of sections. The questionnaire first obtained background information about respondents' current NJ TRANSIT travel; the survey then presented 65 customer satisfaction attributes for respondents to rate. It continued by asking general customer satisfaction questions (e.g., would you recommend NJ TRANSIT to a friend?), determined respondents' origin and destination locations, and ended with additional background and demographic questions.

The 65 customer satisfaction ratings were crucial to determining where NJ TRANSIT was performing well and where improvements would be needed on its rail system. Ratings were on a scale of 0 to 10, with the option to answer "not applicable." Data validation was used for many questions, such as the customer satisfaction questions, to ensure quality data and complete responses. Wording was also customized for each respondent on many of the survey screens. For example, in the screen shown in Figure 20, the question asks about "parking at Woodcliff Lake Station" instead of simply saying "your boarding station." Wording customization makes the questionnaire clearer for respondents and, by extension, improves data quality.

Origin–Destination Data Collection

An important part of the study for NJ TRANSIT was to obtain origin and destination data.
To accomplish this, respondents were asked to geolocate their origin and destination addresses using a point-and-click map, a street address, a business name, or an intersection search. A screen shot of the map search is shown in Figure 21. Regardless of the type of geolocation search used (map, address, business, or intersection), a latitude and longitude was determined for each origin and destination. These were then automatically coded into the proper NJ TRANSIT transportation analysis zones using an online point-in-polygon routine. NJ TRANSIT therefore received immediate, real-time access to fully coded origin–destination data with transportation analysis zones already attached.

FIGURE 20 Example screen showing customer satisfaction attributes rating (NJ TRANSIT).

FIGURE 21 Map search screen in NJ TRANSIT Rail ePanel survey.

Another important function of the survey was determining which train the respondent rode. Respondents were asked the appropriate questions to classify them into four categories: frequent weekday rider, frequent weekend rider, infrequent weekday rider, and infrequent weekend rider. Once the respondent type was known, the survey asked which train they used and displayed only the trains relevant to their station and day of week (Figure 22).

FIGURE 22 Schedule page in NJ TRANSIT Rail ePanel survey.

Anchoring

The differences between a respondent's first survey and their subsequent surveys could be subtle but important, and served three main purposes: (1) to deliver respondents more efficient second, third, and fourth surveys by asking them only to confirm answers from their previous surveys when those answers were unchanged; (2) to use "anchoring" so that respondents knew how they had rated satisfaction measures in the previous survey wave, which helped them make new judgments based on their previous answers; and (3) to ask respondents "drill down" questions that requested a written explanation of rating differences between the previous and current surveys.

Anchoring was a technique used in the second, third, and fourth survey waves to enable respondents to see how they had previously rated the customer satisfaction attributes (Figure 23). Anchoring was used to ensure that a changed answer was in response to a change in service, and not because the respondent had forgotten how they had previously rated the service. Respondents were thus focused on the change in service, reducing the random error in the measurement of this change.

FIGURE 23 Example screen showing "anchoring" functionality: Dotted arrows indicate the rating given in the previous survey wave (NJ TRANSIT).

Drill Down Questions

"Drill downs" are open-ended questions that were asked to determine the reasons for a respondent's change in satisfaction ratings. Drill downs provided the unique longitudinal ability to ask respondents a qualitative question that is directly related to a changed rating score. The differences between the 65 satisfaction scores from each respondent's previous survey and their current survey were calculated, and the 10 largest differences were determined (differences could be both positive and negative; therefore, absolute values were used). If there were ties, enough satisfaction questions to obtain up to 10 were randomly selected. If there were fewer than 10 differences (i.e., if the respondent had not changed their answers to 10 or more questions since their previous survey), then only those differences that did exist for that respondent were shown. Once the 10 questions with the highest absolute differences were determined, respondents were asked, in open-ended comment boxes, why they had changed their answers to these questions (Figure 24). Again, changes could be either positive or negative, as NJ TRANSIT wanted to understand both what was performing well and what needed improvement.

FIGURE 24 Drill down questions screen in the Rail ePanel survey (NJ TRANSIT).
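The drill-down selection logic just described — rank the 65 attributes by absolute change between waves, break ties randomly, and keep up to 10 changed items — can be sketched as below. The function and variable names are illustrative; the report describes the logic, not the code.

```python
import random

# Sketch of the drill-down selection described above: compare a
# respondent's current and previous ratings, rank by absolute change
# (changes can be positive or negative), break ties randomly, and keep
# up to 10 changed items for open-ended follow-up questions.

def select_drill_downs(previous: dict, current: dict, limit: int = 10) -> list:
    """Return up to `limit` attribute keys with the largest rating changes."""
    changed = [k for k in current
               if previous.get(k) is not None and current[k] is not None
               and current[k] != previous[k]]
    # A random secondary sort key breaks ties among equal absolute
    # differences, matching the random tie-breaking in the study.
    changed.sort(key=lambda k: (abs(current[k] - previous[k]), random.random()),
                 reverse=True)
    return changed[:limit]
```

If fewer than 10 answers changed, the returned list is simply shorter, mirroring the behavior described in the text.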

Conclusion

Web-based longitudinal panel studies provide timely, statistically robust, and relevant data for customer satisfaction studies. They can use innovative techniques to minimize respondent fatigue and attrition, and they can provide valuable data to customer-oriented transportation service organizations. NJ TRANSIT was able to implement actions directly in response to the feedback from the ePanel study. Specific actions included impetus for the "back to basics" campaign and ensuring seating availability on overcrowded trains.

METROLINK RIDER POLL, LOS ANGELES, CALIFORNIA

The Southern California Regional Rail Authority's Metrolink Rider Poll comprises Metrolink riders who have volunteered to participate in a longitudinal research panel. The Metrolink Rider Poll, created in 2001, tracks Metrolink customer satisfaction and travel behavior through several survey waves over time, utilizing both web-based and telephone methods of data collection. Participants are recruited through the Metrolink website. To ensure that the panel composition is proportionally representative of Metrolink ridership, onboard survey and customer data are sometimes used to target new and infrequent riders for recruitment.

The purpose of this rider panel was not to replace but to supplement other ongoing research programs, such as the biennial onboard survey. Specifically, Metrolink wanted to utilize several distinct advantages that the web-based research panel design offers.

First, the online longitudinal panel affords Metrolink the opportunity to survey riders who have stopped using its service. Metrolink can therefore examine the reasons why riders decrease or stop using the service and the factors that may contribute to that decision. Additionally, the online panel gives Metrolink access to a constant group of participants to include in focus groups and in studies of specific ridership segments and niche markets.
The online panel also ensures rapid data collection and analysis. This is reflected in Metrolink's decision to conduct a fifth wave of the survey in spring 2005, following the January 2005 derailment and the spring 2005 on-time performance problems, to help determine their impact on ridership decisions. Metrolink also takes advantage of web-based surveys as an efficient way to collect natural language data: comments and opinions expressed in the respondents' own words can be valuable for understanding changes in rider behavior. The anonymity associated with filling out a web-based survey also reduces social desirability bias and allows text analysis to identify underlying factors and associations. Open-ended questions and comment boxes have become part of all Metrolink web-based surveys and help improve the design of future surveys.

Finally, Metrolink uses its longitudinal research panel for the cost-effective implementation of split-sample research designs to test different versions of the survey instrument. Metrolink uses this method to test new wording or formats of survey questions, or to reduce the length of the survey each respondent is asked to complete.

Each wave of the Metrolink Rider Poll consists of a set of tracking questions that monitor changes in usage characteristics and perception over time. Question topics include:

• Frequency of usage,
• Fare media usage,
• Satisfaction ratings (both overall and item satisfaction),
• Loyalty measures, and
• Safety awareness.

Sample screen shots showing survey questions on the Metrolink line used, satisfaction with the service, and a follow-up question comparing the respondent's impression of Metrolink's current service with that of a year ago can be seen in Figure 25. Each survey wave also contains one or more sections with questions related to current issues and areas of interest to other departments within the agency.
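A split-sample design of the kind Metrolink used requires assigning each panel member to one instrument version. One common approach — an assumption here, not Metrolink's documented method — is to hash a stable respondent identifier, so the assignment is uniform across versions yet repeatable across reminder emails and survey waves:

```python
import hashlib

# Sketch of split-sample assignment: each panel member is mapped
# deterministically to one survey version, so repeated contacts always
# serve the same version. The hashing scheme is an illustrative
# assumption, not Metrolink's documented method.

def assign_version(respondent_id: str, versions: int = 2) -> int:
    """Map a respondent to a survey version, uniformly and repeatably."""
    digest = hashlib.sha256(respondent_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % versions
```

Deterministic assignment matters in a longitudinal panel: a respondent who reopens the invitation link must see the same wording variant, or the version comparison is contaminated.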
The 2003 survey wave featured a range of psychographic and attitudinal questions about commuting to support market segmentation and mode choice analysis. The same questions were also used in a survey of non-riders, which allowed Metrolink to better understand the motivations behind mode choice and to contrast riders and non-riders based on their perceptions of commute modes. Another example of the special issues studied in Metrolink's web-based panel surveys is a study of rider preferences for electronic signage. That survey took advantage of the web-based survey's capability to display photographs and illustrations to help respondents evaluate proposed concepts.

FIGURE 25 Sample screenshots from the Metrolink Rider Poll online survey (Southern California Regional Rail Authority).

TRI-COUNTY METROPOLITAN TRANSPORTATION DISTRICT OF OREGON INTERACTIVE MAP STUDY

TriMet, the municipal corporation that provides public transportation to the three counties in the Portland, Oregon, metropolitan area (Figure 26), conducted a "pulse-taking" study in July and August of 2005 on the functionality of the Interactive Map on the TriMet website. The TriMet Interactive Map first went live in August 2003 and is considered an integral part of trip planning on the TriMet website. The 2005 study was intentionally designed to be offered to a small population to acquire voluntary customer feedback on the Interactive Map for planning and directional purposes; the results were not intended to direct a complete redesign of the website or the Interactive Map.

The purpose of the TriMet survey was to gain customer feedback regarding the Interactive Map and to determine whether the map contained any severe flaws that required immediate correction. Consequently, TriMet sought to gather from survey respondents their purpose for visiting the TriMet website, their success in finding what they were looking for, their ease or difficulty in doing so, any problems encountered on the website, their use of the Interactive Map, and their loyalty to and use of TriMet.

The Interactive Map survey was hosted by SurveyMonkey.com, and 210 respondents completed the voluntary survey for TriMet over the period from July 25, 2005, through August 29, 2005 (Figure 27). The survey respondents were recruited by displaying a static web link at the top of the Interactive Map webpage, which led to a pop-up survey for respondents to complete. TriMet originally intended to have the survey pop up automatically when a user of the website exited the Interactive Map page, but was unable to implement this owing to additional technological and logistical constraints. For future surveys, TriMet will maintain the static web link leading to a pop-up survey, to ensure comparable results between studies.

More than half of the study respondents (58%) indicated that they wanted additional features available on the Interactive Map. Of these, approximately two-thirds (62%) were respondents who did not find the information they were seeking while using the Interactive Map. Many respondents requested specific additional features, some of which were unavailable and some of which were present but had been missed.

The TriMet Interactive Map study results indicated that, although not perfect, the map meets the information needs of many visitors to the TriMet website. Even so, customers have not hesitated to suggest improvements to the Interactive Map, some of which are possible within the current system and technology and some of which are not. Of the feasible improvements, TriMet aims to create a simplified map with greater clarity, less color, and more detail. Additionally, TriMet proposes to have clear help indicators to guide website users to the information they seek. Lastly, although the TriMet Interactive Map study did provide an initial look at customers' experience using the Interactive Map, it was recommended that TriMet undertake a more in-depth study.

FIGURE 26 TriMet service area, Portland, Oregon.

FIGURE 27 TriMet Interactive Map screen shot.

TRB's Transit Cooperative Research Program (TCRP) Synthesis 69: Web-Based Survey Techniques explores the current state of the practice for web-based surveys. The report examines successful practice, reviews the technologies necessary to conduct web-based surveys, and includes several case studies and profiles of transit agency use of web-based surveys. The report also focuses on the strengths and limitations of all survey methods.
