CHAPTER SIX

CASE STUDIES

This chapter details three case studies describing projects conducted by NJ TRANSIT, Metrolink, and TriMet. These case studies show what can be done, and what is currently being done, with web-based research by transit agencies. Various themes described in earlier chapters are repeated and can be understood in a real-world context.

NJ TRANSIT RAIL ePANEL

The benefits of using web-based longitudinal panels for customer satisfaction studies can be clearly seen in the experience of such a study for NJ TRANSIT's rail customers. The study revealed numerous benefits of this method over cross-sectional studies, including more robust statistics, better understanding of customer satisfaction, and the ability to analyze customer satisfaction trends. A variety of innovative Internet technologies was implemented, adding value to the study by ensuring data quality and timeliness, reducing respondent burden, lowering random error in respondent answers, and pairing qualitative data with quantitative analysis. Online geocoding of respondents' origins and destinations was another aspect of the survey that provided NJ TRANSIT more value from the study.

Customer satisfaction studies are conducted by many major organizations, including those that provide transportation services. Typically, customer satisfaction studies are carried out using repeated cross-sectional sampling of customers, and satisfaction scores are compared across these repeated cross sections. Differences in satisfaction scores resulting from perceived changes in service are measured; however, the measurement is confounded in part by differences between the cross-sectional samples. Demographic differences can be accounted for by weighting the samples so that they are equivalent; however, there are significant differences in satisfaction scores between individuals that are not explained by demographic or other easily measured characteristics. The result is that relatively large samples are required to measure changes in customer satisfaction over time.

Longitudinal panels offer a potentially attractive alternative to repeated cross sections for measuring customer satisfaction. Measuring changes in satisfaction of the same individuals from one period of time to another eliminates the confounding caused by variations between the different individuals in repeated cross-sectional samples. As a result, the sample sizes required to measure differences in customer satisfaction can be much lower in panel surveys. In addition, panels provide opportunities to directly determine the reasons for those changes.

Longitudinal panels can be administered using a variety of methods. For transportation studies, intercept recruiting is an efficient approach to assembling panels. Although telephone and mail-out/mail-back instruments are commonly used, web-based instruments can be a highly cost-effective alternative for many applications. Web access has increased significantly across the population, and it is possible to construct demographically representative panels from among those who have web access. In addition to their cost-effectiveness, an important advantage of using web instruments with panels is that the time required to complete and analyze data from a survey wave can be dramatically reduced.

NJ TRANSIT's Rail Customer Satisfaction ePanel was designed to be a continuous survey, providing monthly data on customer satisfaction. It used web-based technologies to invite respondents from one of three panels each month and to administer a customer satisfaction survey. The resulting survey data monitor customers' concerns on a monthly and even daily basis, as study data were continually being received from respondents throughout each month.

Web-based survey technology allowed for great flexibility in obtaining both quantitative customer satisfaction responses, such as typical satisfaction scores, and qualitative responses, such as written answers to open-ended questions that are used to explain why quantitative scores have changed. This was a critical part of the study, because the reasons for change in satisfaction scores can be quickly understood when they are paired directly with responses from open-ended questions (see the Drill Down Questions section and Figure 24).

Advanced web-based survey technologies also allowed for a number of innovative features that improved data integrity and currency. These features included online geocoding of origin and destination data, automatic updating and querying of train schedule data so that respondents could select only valid trains in their surveys, and full validation of responses to questions. Web-based longitudinal panel survey instruments can also be designed in ways that minimize respondent fatigue. This was accomplished using a number of techniques that required respondents only to confirm that various aspects of their travel had not changed since their previous survey.
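The statistical point above (a panel needs smaller samples because within-person differences cancel the between-person variation that confounds repeated cross sections) can be illustrated with an entirely hypothetical simulation; the rating distributions below are invented for demonstration and are not NJ TRANSIT data:

```python
import random

random.seed(42)

# Hypothetical simulation: each rider has a stable personal baseline, so the
# same person's scores are correlated across waves. Differencing within-person
# cancels that baseline; two independent cross-sections each carry it in full.
baselines = [random.gauss(7.0, 1.5) for _ in range(5000)]
true_shift = -0.3  # the service change we are trying to detect

wave1 = [b + random.gauss(0, 0.5) for b in baselines]
wave2 = [b + true_shift + random.gauss(0, 0.5) for b in baselines]

def sample_variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

# Panel estimator: variance of within-person rating changes.
panel_var = sample_variance([b - a for a, b in zip(wave1, wave2)])

# Cross-sectional estimator: a difference of two independent means carries
# the full between-person spread of both samples.
cross_var = sample_variance(wave1) + sample_variance(wave2)

# The panel's variance is far smaller, so far fewer respondents are needed
# to detect the same shift at the same confidence level.
print(panel_var < cross_var)  # → True
```

With the parameters above, the cross-sectional variance is roughly an order of magnitude larger than the panel variance, which is the mechanism behind the smaller panel sample sizes described in the text.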
Background

The NJ TRANSIT ePanel Customer Satisfaction Study was conducted to provide continuous monthly and quarterly tracking of NJ TRANSIT commuter rail riders' satisfaction along 65 satisfaction measures. These measures had previously been tracked in surveys conducted less than annually, using cross-sectional sampling with handout/handback paper questionnaires.

The ePanel study measured rail customers' satisfaction scores in what NJ TRANSIT calls "functional areas," which included questions related to parking, boarding stations, destination stations, train scheduling, and customer service. The survey also measured "key-driver areas," which include on-time performance, personal security, employee performance, fares, and mechanical reliability. The study provided the ability to segment the customer satisfaction measures by train line, destination market, customer demographics, station, and other characteristics.

NJ TRANSIT's ePanel was designed to answer the following specific questions about commuter rail customers on a continuing basis:

· What are the trends in customer satisfaction, and what factors influence these trends?
· On which train lines within the NJ TRANSIT system is customer satisfaction changing? In what direction are these changes, how big are the changes, and why are they occurring?
· What are customers' main concerns? Where does NJ TRANSIT need to improve?
· Where are customers satisfied? What performance does NJ TRANSIT need to maintain?

To address these questions, a longitudinal panel study plan was developed in July 2002, driven by a monthly survey that began in September 2002. This survey collected customer satisfaction data every month from one of three separate customer panels, each comprising approximately 4,000 participants. Each panel respondent was surveyed four times a year at three-month intervals, giving NJ TRANSIT new monthly customer satisfaction data throughout the year and allowing the agency to track customer satisfaction trends and customer origin and destination patterns. Respondents were asked to take a survey only once every quarter, reducing respondent fatigue and also giving respondents enough time between survey waves to notice service changes.

Web-Based Survey Instrument

The survey used a web-based, multi-panel, multi-wave customer satisfaction questionnaire with a number of sections. The questionnaire first obtained background information about respondents' current NJ TRANSIT travel; the survey then presented 65 customer satisfaction attributes for respondents to rate. It continued by asking general customer satisfaction questions (e.g., would you recommend NJ TRANSIT to a friend?), determined respondents' origin and destination locations, and ended by asking additional background questions and demographics.

The 65 customer satisfaction ratings were crucial to determining where NJ TRANSIT was performing well and where improvements would be needed on its rail system. Ratings were on a scale of 0 to 10, with the option to answer "not applicable." Data validation was used for many questions, such as the customer satisfaction questions, to ensure quality data and complete responses. Wording was also customized for each respondent on many of the survey screens. For example, in the screen shot in Figure 20, the question asks about "parking at Woodcliff Lake Station" instead of simply saying "your boarding station." Wording customization makes the questionnaire clearer for respondents and by extension improves data quality.

Origin-Destination Data Collection

An important part of the study for NJ TRANSIT was to obtain origin and destination data. To accomplish this, respondents were asked to geolocate their origin and destination addresses by using a point-and-click map, a street address, a business name, or an intersection search. A screen shot of the map search is shown as Figure 21. Regardless of the type of geolocation search used (map, address, business, or intersection), a latitude and longitude for each origin and destination was determined. These were then automatically coded into the proper NJ TRANSIT transportation analysis zones using an online point-in-polygon routine. NJ TRANSIT therefore received immediate, real-time access to fully coded origin-destination data with transportation analysis zones already attached.

Another important function of the survey was determining what train the respondent rode. Respondents were asked the appropriate questions to classify them into four categories: frequent weekday rider, frequent weekend rider, infrequent weekday rider, and infrequent weekend rider. Once the respondent type was known, the survey asked the respondent what train they used, displaying only the relevant trains for their station and day of week (Figure 22).

Anchoring

The differences between a respondent's first survey and their subsequent surveys could be subtle but important, and served three main purposes: (1) to deliver respondents more efficient second, third, and fourth surveys by asking them only to confirm answers from their previous surveys when the answers are unchanged; (2) to use "anchoring" so that respondents knew how they rated satisfaction measures in the previous survey wave, which helped them make new judgments based
FIGURE 20 Example screen showing customer satisfaction attributes rating (NJ TRANSIT).

FIGURE 21 Map search screen in NJ TRANSIT Rail ePanel survey.
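The online point-in-polygon zone coding described under Origin-Destination Data Collection takes a geocoded latitude/longitude and tests it against each transportation analysis zone boundary. A minimal ray-casting sketch follows; the function names and rectangular zones are purely illustrative assumptions, not NJ TRANSIT's actual implementation or zone geometry:

```python
def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: cast a ray east from the point and count edge crossings.

    `polygon` is a list of (lon, lat) vertices; an odd crossing count means
    the point is inside.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Only edges that straddle the point's latitude can cross the ray.
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > lon:
                inside = not inside
    return inside

def assign_zone(lon, lat, zones):
    """Return the name of the first zone whose polygon contains the point."""
    for name, polygon in zones.items():
        if point_in_polygon(lon, lat, polygon):
            return name
    return None  # geocoded point fell outside every zone

# Two made-up rectangular zones, roughly in northern New Jersey:
zones = {
    "zone_a": [(-74.2, 40.7), (-74.0, 40.7), (-74.0, 40.9), (-74.2, 40.9)],
    "zone_b": [(-74.0, 40.7), (-73.8, 40.7), (-73.8, 40.9), (-74.0, 40.9)],
}

print(assign_zone(-74.1, 40.8, zones))  # → zone_a
```

A production geocoding service would typically use a GIS library with a spatial index rather than this linear scan, but the containment test itself is the same idea.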
on their previous answers; and (3) to ask respondents "drill down" questions that requested a written explanation of rating differences between the previous and current survey.

FIGURE 22 Schedule page in NJ TRANSIT Rail ePanel survey.

Anchoring was a technique used in the second, third, and fourth survey waves to enable respondents to see how they previously rated their customer satisfaction attributes (Figure 23). Anchoring was used to ensure that a changed answer was in response to a change in service, and not because the respondent had forgotten how they had previously rated the service. Respondents were focused on the change in service, reducing the random error in the measurement of this change.

FIGURE 23 Example screen showing "anchoring" functionality: Dotted arrows indicate rating given in previous survey wave (NJ TRANSIT).

Drill Down Questions

"Drill downs" are open-ended questions that were asked to determine the reasons for a respondent's change in satisfaction ratings. Drill downs provided the unique longitudinal ability to ask respondents a qualitative question that is directly related to a changed rating score. The differences between the 65 satisfaction scores from each respondent's previous survey and their current survey were calculated, and the 10 largest differences in satisfaction scores were determined (differences could be both positive and negative; therefore, absolute value was used). If there were ties, enough satisfaction questions were randomly selected to obtain up to 10. If there were fewer than 10 differences (i.e., if the respondent did not change their answer on 10 or more questions from their previous survey), then only those differences that did exist for that respondent were shown. Once the 10 questions with the highest absolute differences were determined, respondents were asked why they had changed their answers to these questions, using open-ended comment boxes (Figure 24). Again, changes could be either positive or negative, as NJ TRANSIT wanted to understand both what is performing well and what needs improvement.

FIGURE 24 Drill down questions screen in rail ePanel survey (NJ TRANSIT).
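The drill-down selection logic just described (rank wave-over-wave rating changes by absolute value, break ties randomly, skip unchanged answers, keep at most 10) can be sketched as follows; the function and the attribute ratings are illustrative assumptions, not NJ TRANSIT's actual code or data:

```python
import random

def select_drill_downs(prev_scores, curr_scores, k=10, rng=random):
    """Pick up to k attributes whose ratings changed most since the last wave.

    `prev_scores`/`curr_scores` map attribute -> 0-10 rating. Changed
    attributes are ranked by absolute difference; ties break randomly.
    """
    candidates = []
    for attr, curr in curr_scores.items():
        if attr in prev_scores:
            delta = curr - prev_scores[attr]
            if delta != 0:  # unchanged ratings get no drill-down question
                # Random tiebreaker so tied differences are sampled fairly.
                candidates.append((abs(delta), rng.random(), attr, delta))
    candidates.sort(reverse=True)
    return [(attr, delta) for _, _, attr, delta in candidates[:k]]

prev = {"on-time performance": 8, "fares": 5, "parking": 6, "personal security": 7}
curr = {"on-time performance": 4, "fares": 5, "parking": 8, "personal security": 6}

for attr, delta in select_drill_downs(prev, curr, k=10):
    direction = "improved" if delta > 0 else "declined"
    # Each selected attribute gets an open-ended comment box.
    print(f"Why has your rating of '{attr}' {direction}?")
```

Because differences can be positive or negative, the same routine surfaces both improvements and declines, matching NJ TRANSIT's goal of understanding what is performing well as well as what needs improvement.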