The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
implemented IVR-based trip planning software (HASTINFO from GIRO, Inc.) that helps the agency deal with customers calling in to get assistance with complicated trip planning requests. The software helps PSTA train new CSR recruits as well. (20)

Training, one of the important aspects of call center operations, is needed to ensure good, quality customer service. The director of customer service at SEPTA said, "Many of our customer service reps did not have adequate skills to become good customer service liaisons." (21, p. 102)

In order to test the skills of prospective CSRs, SEPTA adopted computerized tests to screen the applicants. Such tests help the Human Resources Department assess CSR candidates "on a number of topics, including math, reading, writing, grammar, customer service and computer databases." (22, p. 102) This enhanced recruiting process has helped SEPTA bring down average talk time from 30 to 17 min per call. Also, the abandoned call percentage went down from 3.5% to 2%. These changes have improved the efficiency and utilization of call center employees at SEPTA by helping to reduce the number of agents from 57 to 47 in 10 years.

3.1.2.4 Metrics

Metrics refers to a variety of statistics that call centers may track to monitor different aspects of their performance. The review of the general call center literature was useful in identifying a list of metrics utilized, to varying extents, in modern call center operations. (23-25) These metrics were used as a checklist (prompts) in the 25 transit agency telephone interviews conducted for this study. The metrics were organized into two categories, basic and advanced, and are presented in Table 8.

Figure 7 summarizes the results of the transit agency interviews in terms of the number of agencies using the metrics shown in Table 8. Essentially, all of the transit agencies that were interviewed track overall call volume in some fashion (thus, that metric is not included in Figure 7). Figure 7 indicates that, overall, the candidate metrics shown in Table 8 are fairly routinely utilized by a number of transit agencies. Most of the metrics in Table 8 are tracked by about half (between 10 and 16 agencies) of the 25 agencies interviewed. Not unexpectedly, the "basic" metrics are among the most commonly utilized. Only one of the eight candidate metrics--"percent calls not resolved at the first attempt"--was tracked by less than one-third of the agencies interviewed.

In a few cases, agency interviewees identified a metric not included in the candidate list. Two examples are the Washington Metropolitan Area Transit Authority (WMATA) and CATS, both of which track the number of calls answered on their IVR systems. This metric can be helpful in determining the utility of the IVR technology with respect to manually providing information. CATS reported that 90% of their calls are handled by their IVR system. Further, LYNX monitors several measures beyond the aforementioned list, including the attendance of their call center staff, number of calls transferred, number of calls received, and number of complaints handled by each agent per month. LYNX also monitors measures obtained from the 511 system. Denver Regional Transit District (RTD) monitors the amount of time that each operator spends on the phone since they have a specific number of hours (within a daily schedule) allocated for answering customer calls.

The measures shown in Figure 7 provide an overview of transit call center operational performance and assist management in evaluating and adjusting their staff assignments so they may utilize their resources effectively. Many of these statistics are available from technologies such as ACD and CRM. Agencies with limited technology resources utilize

Table 8. Call center metrics.
Basic Metrics
- Average call duration
- Average number of calls in the queue
- Number and percentage of calls abandoned
- Number of calls/inquiries per hour
- Number and percentage of calls answered
- Average delay while waiting in a queue

Advanced Metrics
- Information requested
- Number of agents ready to take calls
- Average number of agents in wrap-up mode
- Average call duration including wrap-up time
- Average time taken to pick up a phone call
- Average time until a call is abandoned
- Not ready time
- Idle time
- Percent of calls not resolved at the first attempt
- Call volume
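The basic metrics in Table 8 are simple aggregates over the call detail records that an ACD or CRM system exports. As an illustration only (the record fields, field names, and sample values below are hypothetical and not drawn from the report or from any agency's data), a minimal sketch of how such metrics could be computed:

```python
from dataclasses import dataclass

@dataclass
class CallRecord:
    queue_wait_s: float  # seconds the caller waited in the queue
    talk_s: float        # seconds of talk time (0 if abandoned)
    abandoned: bool      # caller hung up before being answered
    ivr_only: bool       # handled entirely by the IVR, no agent involved

def basic_metrics(calls, hours_of_service):
    """Aggregate a list of CallRecords into Table 8-style basic metrics."""
    n = len(calls)
    answered = [c for c in calls if not c.abandoned]
    abandoned = [c for c in calls if c.abandoned]
    ivr_handled = [c for c in calls if c.ivr_only]
    return {
        "calls_per_hour": n / hours_of_service,
        "pct_abandoned": 100.0 * len(abandoned) / n,
        "pct_answered": 100.0 * len(answered) / n,
        "avg_call_duration_s": sum(c.talk_s for c in answered) / len(answered),
        "avg_queue_delay_s": sum(c.queue_wait_s for c in calls) / n,
        # IVR containment, the WMATA/CATS-style add-on metric
        "pct_ivr_handled": 100.0 * len(ivr_handled) / n,
    }

# Four illustrative records over one hour of service
calls = [
    CallRecord(queue_wait_s=12.0, talk_s=180.0, abandoned=False, ivr_only=False),
    CallRecord(queue_wait_s=45.0, talk_s=0.0,   abandoned=True,  ivr_only=False),
    CallRecord(queue_wait_s=5.0,  talk_s=60.0,  abandoned=False, ivr_only=True),
    CallRecord(queue_wait_s=30.0, talk_s=240.0, abandoned=False, ivr_only=False),
]
m = basic_metrics(calls, hours_of_service=1.0)
```

On this toy sample the abandonment rate is 25% and the average queue delay is 23 seconds; an agency without ACD reporting could apply the same arithmetic to a manual call log.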