Chapter 4: Steps of Customer-Driven Benchmarking

STEP 4. IDENTIFY BEST PERFORMANCES AND PRACTICES

All the preparation described above leads to the heart of the matter--evaluating the outcomes and resources used by each benchmarking partner to identify best performers and improvement opportunities for each organizational unit. There are many possible approaches to evaluating performance, and this guide describes a few that are useful to maintenance organizations. The guide describes a simple approach to assessing performance and then presents a rigorous procedure capable of simultaneously handling outcomes, inputs, and external factors for large numbers of benchmarking units. But first, some important definitions:

Best performance: a performance such that no other performance could produce higher customer-oriented outcomes in one or more dimensions of measurement with the same resources and under similar conditions or, equivalently, a performance such that no other performance could produce the same customer-oriented outcomes with fewer resources or under worse conditions. There is no single best performance; it depends on the outcomes, inputs, and levels of hardship factors being examined.

Best performer: a performer that produces a best performance.

Frontier of best performances: the boundary formed by the lines connecting the points representing the best performances (see Figure 11).

Improvement opportunity: the gap, in one or more measurement dimensions, between the frontier connecting the best performances and a performance inside (i.e., below) the frontier.

Best practice: a business practice associated with a best performance.
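In practical terms, a best performance is a non-dominated performance: no other unit matches or beats it on every dimension under comparable conditions. As a minimal sketch of that dominance check (Python, with hypothetical unit names, only two dimensions--one outcome to maximize and one resource to minimize--and hardship factors ignored for brevity), the frontier of best performances could be identified like this:

```python
# Minimal dominance check: a unit is a best performer if no other unit is at least
# as good on every dimension and strictly better on at least one.
# Unit names and values are hypothetical, for illustration only.
units = {
    "Unit 1": {"outcome": 8.1, "resource": 540_000},   # outcome: higher is better
    "Unit 2": {"outcome": 7.5, "resource": 240_000},   # resource: lower is better
    "Unit 3": {"outcome": 7.5, "resource": 690_000},
}

def dominates(a, b):
    """True if performance a is at least as good as b everywhere and better somewhere."""
    at_least_as_good = a["outcome"] >= b["outcome"] and a["resource"] <= b["resource"]
    strictly_better = a["outcome"] > b["outcome"] or a["resource"] < b["resource"]
    return at_least_as_good and strictly_better

best_performers = [
    name for name, perf in units.items()
    if not any(dominates(other, perf)
               for other_name, other in units.items() if other_name != name)
]
print(best_performers)  # ['Unit 1', 'Unit 2'] -- these lie on the frontier
```

In this toy example, Units 1 and 2 lie on the frontier of best performances; Unit 3 lies inside it, and its distance to the frontier in either dimension is its improvement opportunity.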

Figure 11. Best Performance

Simplified Benchmarking Procedure

The overriding philosophy of customer-driven benchmarking is that best performers have the highest customer-driven outcomes relative to the resources used, taking into account significant differences in production requirements (outputs) and hardship factors (i.e., factors outside their control). If you are working with just a few benchmarking units--between 7 and 20--visual inspection can provide enough insight to identify the benchmarking units that are best performers and, therefore, sources of best practices. With more than 20 units, visual inspection becomes difficult; if the number of benchmarking units rises much beyond 30--for example, into the hundreds--you will need mathematical and statistical analysis tools such as the data envelopment analysis (DEA) discussed below.

Assuming you have just a small number of benchmarking units, you can analyze their benchmarking data by going through the following steps:

1. Prepare spreadsheet: present the data in a spreadsheet for each outcome, resource, output, and hardship measure for each benchmarking unit.
2. Determine value: examine each measure and establish whether increasing or decreasing values of the measure are better or worse from the standpoint of performance. For example, higher customer satisfaction ratings are better, but higher resource usage is worse.

3. Plot bar graphs: plot a bar graph for each measure so that you can see which are the three or four best-performing benchmarking units when judged according to that measure of performance. The best performers will vary depending upon the selection of the measure. You can obtain this information from the spreadsheet, but the bar graphs help you see more clearly which are the best performers for each measure.

4. Consolidate measures: attempt to consolidate the measures in the spreadsheet you developed under the first step so there are as few as possible--for example, five. Do not exceed seven, because it is well established in psychological research that individuals have difficulty weighing more than seven factors at once. When you consolidate measures, try to do it in such a way that the reduced set of measures provides more insight into the performance of each of the benchmarking units. Also, establish for each new measure whether increasing or decreasing values represent better performance.

5. Prepare a new spreadsheet: build a new spreadsheet that shows, for the reduced set of measures, the outcomes, resource usage, outputs, and hardship factors combined in new ways for each benchmarking unit. Now you can determine the best performers by visual inspection.

6. Identify best performers: for each measure, highlight the three or four best performers. You can do this highlighting using the "cell color fill" feature of the spreadsheet software. Now go down the list of benchmarking units and see which ones have the most important cells highlighted or the most cells highlighted. Since you are concerned with customer-driven benchmarking, you want to identify units that do well in serving their customers as reflected by customer survey information, by a technical measure of performance related to the attributes of roads
that customers care about, or both. Furthermore, in the best of all worlds, it is desirable that the organizations with the highest customer-oriented outcomes also have the lowest resource usage, have the highest production, and achieve this regardless of the level of hardship. Usually you will find that no benchmarking unit satisfies all these criteria simultaneously and that several could be identified as best performers and therefore are potential sources of best-practices information.

Let's go through an example using the data that was obtained from the field test used to validate the procedures in this guide.

Prepare Spreadsheet

The first step is to put all the measurement data for each benchmarking unit in a spreadsheet. Table 4 shows a spreadsheet with groups of outcome, resource, output, and hardship measures.

Table 4. Performance Measures for 12 Districts
(Outcomes: customer satisfaction rating, regain time. Resources: labor, equipment, and material costs. Output: total miles covered for season. Hardship: actual lane miles, number of snow and ice events, average daily VMT.)

| District ID | Customer Satisfaction Rating | Regain Time | Labor Cost | Equipment Cost | Material Cost | Total Miles Covered for Season | Actual Lane Miles | Number of Snow and Ice Events | Average Daily VMT |
|---|---|---|---|---|---|---|---|---|---|
| A | 8.1 | 12.2 | $536,568 | $661,478 | $899,520 | 242,060 | 1,960 | 95 | 4,262,352 |
| B | 8.1 | 34.7 | $420,765 | $437,788 | $666,665 | 214,819 | 1,809 | 95 | 2,315,384 |
| C | 7.9 | 6.4 | $422,308 | $847,359 | $254,430 | 490,051 | 3,933 | 89 | 3,280,673 |
| D | 7.5 | 6.2 | $238,392 | $551,179 | $669,172 | 139,991 | 1,984 | 72 | 3,445,186 |
| E | 7.5 | 4.9 | $686,286 | $862,725 | $527,519 | 141,725 | 2,072 | 72 | 7,908,242 |
| F | 7.5 | 1.09 | $580,406 | $1,278,141 | $632,392 | 277,679 | 3,673 | 63 | 4,850,026 |
| G | 8.2 | 3.4 | $3,426,774 | $6,108,419 | $3,107,224 | 398,279 | 3,751 | 56 | 41,892,999 |
| H | 7.7 | 5.6 | $519,652 | $487,406 | $775,949 | 164,425 | 1,931 | 65 | 4,049,412 |
| I | 7.7 | 5.4 | $645,410 | $786,760 | $477,106 | 109,395 | 1,700 | 65 | 4,964,813 |
| J | 7.7 | 8.2 | $514,695 | $851,307 | $480,502 | 251,281 | 1,931 | 91 | 2,914,743 |
| K | 7.7 | 5.7 | $457,553 | $449,117 | $389,594 | 193,980 | 1,579 | 91 | 2,173,749 |
| L | 7.5 | 43.8 | $261,447 | $386,734 | $203,525 | 267,262 | 3,035 | 74 | 3,601,587 |
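The simplified procedure can also be mechanized in a short script rather than a spreadsheet. As a rough sketch (Python with pandas assumed available; only three of the nine Table 4 measures are carried along to keep it short), the "cell color fill" highlighting of step 6--flagging the three best districts on each measure while respecting whether higher or lower values are better--might look like this:

```python
import pandas as pd

# Subset of Table 4: one outcome to maximize, one outcome to minimize, one resource to minimize.
data = pd.DataFrame(
    {
        "satisfaction": [8.1, 8.1, 7.9, 7.5, 7.5, 7.5, 8.2, 7.7, 7.7, 7.7, 7.7, 7.5],
        "regain_time":  [12.2, 34.7, 6.4, 6.2, 4.9, 1.09, 3.4, 5.6, 5.4, 8.2, 5.7, 43.8],
        "labor_cost":   [536568, 420765, 422308, 238392, 686286, 580406,
                         3426774, 519652, 645410, 514695, 457553, 261447],
    },
    index=list("ABCDEFGHIJKL"),
)

# Direction of each measure: True means higher is better (step 2, "Determine Value").
higher_is_better = {"satisfaction": True, "regain_time": False, "labor_cost": False}

# Flag the three best districts on each measure (step 6, "Identify Best Performers").
top3 = pd.DataFrame({
    col: data[col].rank(ascending=not better, method="min") <= 3
    for col, better in higher_is_better.items()
})
print(top3)                                            # True marks a top-three district
print(top3.sum(axis=1).sort_values(ascending=False))   # districts with the most flagged cells
```

Summing the flags by row gives a quick, first-cut view of which districts show up among the best performers on the most measures; judging which cells matter most (the customer-oriented outcomes) remains a human step.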

Determine Value

The second step in the example is to determine whether increasing or decreasing values of each measure are better.

Outcomes
- Customer satisfaction rating--higher values are better.
- Regain time (the time required to restore bare pavement after a snow storm)--lower values are better.

Resources
- Labor--lower values are better.
- Equipment--lower values are better.
- Material--lower values are better.

Output
- Total miles covered per season--higher values are better, given a certain amount of snow and ice.

Hardship factors
- Lane miles--fewer are better.
- Number of snow and ice events--fewer are better.
- Average daily vehicle-miles traveled (VMT)--more is better because more customers are being served.

Plot Bar Graphs

By graphing how each benchmarking unit performs with regard to each measure, one can obtain a clear picture of which benchmarking units are the best performers when examined from the standpoint of a single dimension of performance.
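If you prefer to script the charts rather than build them with spreadsheet charting tools, a minimal matplotlib sketch works just as well; the snippet below (hypothetical file names; two of the Table 4 measures shown) produces one bar graph per measure:

```python
import matplotlib.pyplot as plt

# Step 3, "Plot Bar Graphs": one chart per measure, districts on the horizontal axis.
# The values are the Table 4 figures; the same loop works for any of the nine measures.
districts = list("ABCDEFGHIJKL")
measures = {
    "Customer Satisfaction Rating": [8.1, 8.1, 7.9, 7.5, 7.5, 7.5, 8.2, 7.7, 7.7, 7.7, 7.7, 7.5],
    "Regain Time": [12.2, 34.7, 6.4, 6.2, 4.9, 1.09, 3.4, 5.6, 5.4, 8.2, 5.7, 43.8],
}

for name, values in measures.items():
    fig, ax = plt.subplots()
    ax.bar(districts, values)
    ax.set_xlabel("Districts")
    ax.set_ylabel(name)
    ax.set_title(name)
    fig.savefig(f"{name.lower().replace(' ', '_')}.png")   # hypothetical output file
    plt.close(fig)
```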

The following bar graphs provide different views of the performance of the benchmarking units, depending on the measure of interest.

[Bar graph: customer satisfaction rating by district, A through L]
Figure 12a. Outcome: Customer Satisfaction
Figure 12a shows that District G achieved the highest level of customer satisfaction. Districts A, B, and C also did well in this regard.

[Bar graph: time to regain bare pavement by district, A through L]
Figure 12b. Outcome: Regain Time
Figure 12b shows that Districts E, G, H, I, and K regained bare pavement in the shortest average time.

[Bar graph: labor costs by district, A through L]
Figure 12c. Resource: Labor
Figure 12c shows each district's labor costs. Districts with the lowest costs were D, L, B, and C. District G is an aberration--its labor costs are many times the costs of the other districts.

[Bar graph: equipment costs by district, A through L]
Figure 12d. Resource: Equipment
Figure 12d shows the equipment costs for each district. Districts with the lowest equipment costs were B, K, L, H, and D. Again, District G is an aberration--its equipment costs are many times the costs of the other districts.

[Bar graph: material costs by district, A through L]
Figure 12e. Resource: Material Costs
Figure 12e shows that Districts C, L, and K have the lowest material costs.

[Bar graph: total miles covered for season by district, A through L]
Figure 12f. Output: Total Miles Covered for Season
Figure 12f shows that Districts C, G, and F accomplished the most snow and ice control during the year, measured in terms of miles. "Total Miles Covered for the Season" equals the total lane miles times the average percentage of lane miles covered per storm event, multiplied by the number of events or storms for the season. Some storms may require going over all the roads numerous times.
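Expressed as a formula, this output measure is simply the product of the three quantities just described. A short sketch using District A's Table 4 figures makes the definition concrete (the per-storm coverage share of 1.3 is back-calculated from those figures and exceeds 1.0 because some storms require more than one pass):

```python
# Total miles covered for the season
#   = lane miles x average share of lane miles covered per storm event x number of events.
lane_miles = 1_960                  # District A, Table 4
avg_share_covered_per_event = 1.3   # >1.0 when roads are treated more than once per storm
snow_and_ice_events = 95            # District A, Table 4

total_miles_covered = lane_miles * avg_share_covered_per_event * snow_and_ice_events
print(f"{total_miles_covered:,.0f}")  # 242,060 -- matches District A's output in Table 4
```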

[Bar graph: actual lane miles by district, A through L]
Figure 12g. Hardship: Actual Lane Miles
Figure 12g presents the number of lane miles in each district that require attention when ice or snow accumulates. Districts C, G, and F have the most lane miles to address.

[Bar graph: number of snow and ice events by district, A through L]
Figure 12h. Hardship Factor: Number of Snow and Ice Events
Figure 12h shows the number of snow and ice events that occurred in each district. The more events, the greater the challenge, everything else being equal. Districts A, B, C, J, and K experienced the most snow and ice events.

[Bar graph: average daily VMT by district, A through L]
Figure 12i. Hardship Factor: Average Daily VMT
Figure 12i presents the level of traffic in each district expressed in terms of average daily VMT. District G has a far greater challenge in serving traffic and operating in traffic than does any other district. Districts E, A, and I are faced with more daily VMT than are the remaining districts.

These bar graphs provide some clarity regarding how well each district performs with regard to each variable and the hardships each faces in delivering winter services to its customers.

Consolidate Measures

The original table (Table 4) presents nine measures, which are too many to absorb and to use to identify best performers. By judiciously combining these measures, it is possible to obtain a clear picture of how well each district is able to serve its customers while managing its resources effectively and contending with hardship factors. The original set of measures can be reduced to five that are useful for identifying best performers and searching for best practices:

1. Customer satisfaction rating (outcome measure);
2. Regain time (outcome measure);

When using DEA to evaluate and compare performances, many units may be on the "frontier" of best performances. The frontier may include 10 to 40 percent of the total number of units. If there are 50 organizational units comparing performances, then as many as 20 units could be determined to be best performers. Practically, 20 is too many units with which to compare processes or business procedures. To start comparing practices, it is best to select a small number of organizations--approximately 2 to 5 of the best-performing units. The issue is then for each organizational unit to determine which of the best-performing units are best for comparing practices.

A simple method usually works well to begin selecting peers with whom to compare practices. For maintenance organizations, this means selecting the peers with best performances who also meet one of the following criteria:

- Represent the largest improvement opportunities,
- Operate in environments that are most similar,
- Have a similar amount or type of roadway feature inventory, or
- Have a similar total resource budget.

The initial selection is not necessarily a final decision. Additional units or alternative units may be selected at any time. Begin peer comparisons with those products, services, or maintenance areas that are most important to your customers and that have the greatest opportunity to impact customer-oriented outcomes. Select one product, service, or maintenance area at a time to begin to develop a set of peers whose "best" practices you may investigate. Note that the peer set will vary as the product, service, or maintenance area changes.

For each outcome or resource measure, given a particular environmental setting, there will be a gap between the best performer and the others. If you are not a best performer, this gap is your improvement opportunity. The gap will represent the potential increase in the outcome you can achieve relative to a best performer.
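For readers who want to see what a DEA computation looks like in code, below is a minimal input-oriented, constant-returns-to-scale DEA sketch using scipy's linear programming routine. It is the generic textbook formulation, not the specific model this guide develops (which also handles hardship factors), and the four units and their input/output values are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Minimal input-oriented, constant-returns DEA sketch (generic CCR model).
# Rows of `inputs`/`outputs` are measures; columns are benchmarking units.
# Values are hypothetical; a real study would use the agreed resource and outcome data.
inputs = np.array([[540_000, 240_000, 690_000, 420_000],    # e.g., labor cost
                   [660_000, 550_000, 860_000, 440_000]])   # e.g., equipment cost
outputs = np.array([[8.1, 7.5, 7.5, 8.1],                   # e.g., satisfaction rating
                    [242_060, 139_991, 141_725, 214_819]])  # e.g., miles covered

n_units = inputs.shape[1]

def efficiency(o):
    """Efficiency of unit o: 1.0 means it lies on the frontier of best performances."""
    # Decision variables: theta (shrink factor on unit o's inputs), then lambda_1..lambda_n.
    c = np.r_[1.0, np.zeros(n_units)]                          # minimize theta
    a_in = np.c_[-inputs[:, [o]], inputs]                      # peer mix uses <= theta * o's inputs
    a_out = np.c_[np.zeros((outputs.shape[0], 1)), -outputs]   # ... and produces >= o's outputs
    res = linprog(c,
                  A_ub=np.vstack([a_in, a_out]),
                  b_ub=np.r_[np.zeros(inputs.shape[0]), -outputs[:, o]],
                  bounds=[(0, None)] * (1 + n_units),
                  method="highs")
    return res.x[0]

for o in range(n_units):
    print(f"unit {o + 1}: efficiency = {efficiency(o):.2f}")
```

A unit whose efficiency score equals 1.0 lies on the frontier; a score below 1.0 says that a composite of the other units could deliver at least the same outcomes using only that fraction of the unit's resources, which is one way of quantifying the improvement opportunity.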

After you have investigated all outcome measures, turn your attention to the resources and compare performances for labor, equipment, materials, and other costs. If you are looking at a type of resource usage, the improvement opportunity and corresponding gap will represent the potential savings in the resource you can achieve relative to the best performer.

To obtain your initial set of peers for purposes of investigating best practices, select the organizational units with the greatest improvement opportunities based on the performance evaluations of all of the products, services, or maintenance areas that you and your partners have evaluated. You can refine your initial set of peers by screening based on the other criteria listed above--for example, by identifying which of the peer set have inventory quantity and budget levels similar to yours.

Geographical proximity and the same political structure are not the best reasons for picking peers. Maintenance organizations typically already know the most about others that are geographically close and that operate under the same type of political jurisdiction or administrative unit. Benchmarking is an opportunity to reach out beyond the typical regional or state relationships and to learn what others do. However, the project team is not suggesting that a unit should be eliminated from the peer group just because it is geographically close. Nor should the intent be to eliminate from the comparison peer group all organizational units that differ from yours in size and operating characteristics. Human nature makes it too easy to justify why an organizational unit cannot be compared with your own. Instead, you want to establish why units that have better performance can be a basis for comparison.

Identifying Best Practices

Once you have settled on a peer set for each product, service, or maintenance area, you are ready to investigate the best practices of the best performers. Investigation of best practices is a critical part of benchmarking. A number of different approaches have been found to be effective; frequently, benchmarking involves all of them. Examples are as follows:

1. Background research: often there is published information available that illuminates the practices of the best
performers. This published information includes research reports, journal articles, conference proceedings, procedural manuals, specifications, regulations, Internet sources, and information from equipment and material vendors. Specific practices of organizations that are known to be top performers, in both the public and private sectors, often have been published and can be found among these sources.

2. Questionnaires: many benchmarking efforts involve the development of a questionnaire that is used to explore the partners' practices in detail. To some extent, the worksheets for recording measurements of outcomes, resources, hardship factors, outputs, and other information serve the function of a questionnaire. However, you should also develop a detailed set of supplementary questions whose answers will shed light on the nature of the best practices of the best performers you wish to investigate. As soon as you know what business processes will be the focus of the best-practice investigation, you should prepare the questionnaire and share it with the partners with whom you plan to exchange information. The questionnaire should address the following types of issues:

- Work methods--including the type of labor (skills and training levels), equipment (type, age, reliability), and materials (type, methods of application) and how these are combined in productive activity.
- Nature and impact of related processes on outcomes and resource usage--for example, setting up and removing work zones, material and equipment requisition, scheduling, daily work reporting, timesheet reporting, budgeting, and resource allocation.
- Policies, procedures, or operating constraints--including regulatory requirements, specifications, or other policies and procedures that affect work methods and results. Are there operating circumstances that require or limit the practices?
- Roles and responsibilities of different levels of management--how do they affect outcomes and resource usage?
- Hardship factors--including weather, terrain, and population density--that are favorable or unfavorable for the practices.
- Cost structures--the costs associated with each resource needed for the practice(s).
- Difficulties in transferring the practice--including major investments in equipment, material, and skill training.
- Critical success factors--that is, the most important procedures or requirements for achieving successful implementation of the practices, including customer requirements.

Figure 21 is an example of part of a questionnaire completed by one of the participants in the field test used to validate the procedures of this guide.

Figure 21. Sample Questionnaire

Business Process Flow Documentation

If you have followed the sequence of steps in this guide, you will have already documented the business processes associated with your practices. Once you have identified best performers whose "best" practices you wish to evaluate, however, you will need to obtain similar documentation from them. Documentation of the practices of best performers should include results from background research, business process flow charts, answers to questionnaires, and results of site visits. It is critically important to understand how each level of each organization that is a best performer contributes to the outcomes and resource usage. Management actions at different levels of the organization will have varying effects on customer-driven outcomes and on resource usage and costs.

Conference Calls, Electronic Information Exchanges, and Video Conferences

It is possible that the background research, initial documentation, and answers to questionnaires are adequate for deciding to adopt a different practice; more frequently, however, additional data and understanding of peer practices will be necessary. The best-performing peers you have selected need to be contacted to gain a more complete understanding of their practices. Communication can occur using conference calls; electronic information exchanges such as e-mail, groupware, and chat rooms; and video conferences. The investigation should include the details of the practices, the circumstances under which the peer uses the practices, how long or how much experience the peer has had with the practices under investigation, the key requirements for implementation success, and any recommendations for other organizations considering the practices. Before such communications begin, the initiating organization should establish objectives for the interchange and describe the questions to be answered.

Site Visits

Many organizations that do benchmarking find that site visits are valuable for understanding a practice of a best performer. Avoid industrial tourism--making site visits simply for the sake of visiting other organizations. Site visits should occur only if there is strong reason to believe that they will add value and both parties are well prepared. Generally, a pair of visitors is desirable for conducting the site visit because two pairs of eyes and ears help capture accurately what is observed. More visitors are usually unnecessary. Here are some guidelines for conducting site visits:

- Work through a specific point of contact to schedule the meeting and line up participants.
- Develop an interview protocol and agenda in advance and share it with the host. Presumably, a questionnaire will have been distributed earlier.
- Have the authority to share information, and make sure your host does, too.
- Be courteous and professional.
- Offer a reciprocal visit.
- Keep to your meeting schedule and finish on time.
- Be sure to thank your host.
- Write up the practices you encountered during or immediately after your visit.

Example of Site Visit in Maintenance Benchmarking

The Kansas City Department of Public Works participated in a municipal public works department benchmarking program with several other cities in North America in order to achieve the following three goals:

1. Improve the quality of service,
2. Reduce the cost of operations, and
3. Improve the satisfaction of customers.

In a structured program facilitated by a consultant, the group of public works departments chose benchmarking partners based upon performance comparisons and documented work processes. Then the benchmarking partners arranged on-site visits to compare practices and seek ideas for improvement opportunities. The visits were a commitment of time, consisting of 2 days of on-site visits and documentation of work flow and work processes. Individuals participating in visits to other departments were trained in benchmarking concepts. Priorities were set for the processes each participant wished to pursue. The total benchmarking activity uncovered 32 specific work process improvements to be included in the Kansas City Department of Public Works operating plan. Some of the changes were implemented immediately, such as instituting quick service bays in all fleet maintenance facilities, while other changes were implemented over a much longer period.

Analyzing the Causes of Superior Performance

Before adopting a best practice, you may wish to understand in more detail the causes of superior performance. You can use a variety of techniques; the following three are explained in turn:

1. Root cause analysis;
2. Correlation, regression, analysis of variance, and other statistical methods; and
3. Design of experiments.

Root Cause Analysis

A straightforward and often helpful method of understanding the underlying reasons for performance, root cause analysis employs a diagram such as Figure 22 to identify the main and deeper root causes contributing to an outcome.
To apply root cause analysis, a group of people knowledgeable about the business process identifies the main categories of potential causes leading to an outcome and then dissects the causes further. The fishbone diagram is well suited for organizing the discussion and displaying the results.

Figure 22. Root Cause Analysis Using Fishbone Diagram

Correlation, Regression, Analysis of Variance, and Other Statistical Methods

A wide variety of statistical techniques can be applied to identify statistically significant factors associated with an outcome. By using correlation, regression, analysis of variance, and other statistical methods, you can often identify factors that correlate with or explain the variation in outcomes and resource usage. You can then make important strides in determining the likelihood that an attribute of a practice will contribute positively to an outcome or to a reduction in resource costs. Commonly applied statistical techniques include the following:

- Correlation coefficients provide measures of the degree to which various variables or factors are correlated.
- Regression involves estimating an equation that involves many variables and that best fits a set of data points.
- Analysis of variance determines the degree to which different variables contribute to the variance of another variable and allows you to analyze the variance within and among groups.
- Factor analysis helps to reduce a set of possible causal factors to a smaller set that explains most of the variation caused by the original set.

To perform various types of statistical analysis, you will need to assemble a data set for all the variables or factors of interest. Depending upon the properties of the data set, different types of statistical analysis will be appropriate. For example, you could make a list of factors potentially contributing to pavement roughness. If a factor is at play in a particular organization or unit, you would give it a value of 1; otherwise, you would give it a value of 0. Thus, if there were 40 organizational units constituting a benchmarking partnership and 20 different factors potentially contributing to pavement roughness, the data set would be a 40 × 20 matrix composed of 1s and 0s. Pavement roughness could then be regressed against each of the 20 factors to determine the significance of each factor.

Before doing such an analysis, you should develop a hypothesis regarding which variables are most likely to be significant. The statistical analysis will allow you to accept or reject your hypothesis. Such analysis provides a great deal of objectivity and helps overcome reliance on hunches and educated guesses regarding which attributes of a process are contributing to an outcome. You will end up with more insight and a stronger foundation for deciding whether to implement a practice.

You will require a person knowledgeable about statistical methods to apply these techniques. Most larger agencies have individuals who can perform correlation analysis and regression, and many also have people with advanced degrees in statistics or related fields. Individual consultants and firms that specialize in statistical analysis are additional sources of expertise.
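As a minimal sketch of the 40 × 20 example just described (Python, statsmodels assumed available, and entirely synthetic data standing in for real condition records), the regression and the significance check of each factor could be set up as follows:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-in for the 40 x 20 matrix of 1s and 0s: 40 organizational units,
# 20 candidate factors, each entry 1 if the factor is at play in that unit, else 0.
n_units, n_factors = 40, 20
factors = rng.integers(0, 2, size=(n_units, n_factors))

# Synthetic pavement roughness: the factor in column 3 is given a real effect; the rest is noise.
roughness = 2.0 + 0.8 * factors[:, 3] + rng.normal(0, 0.3, n_units)

# Regress roughness against the factors and inspect each coefficient's p-value.
model = sm.OLS(roughness, sm.add_constant(factors)).fit()
for i, p in enumerate(model.pvalues[1:]):   # skip the intercept
    marker = "  <-- significant at the 5% level" if p < 0.05 else ""
    print(f"factor {i:2d}: p = {p:.3f}{marker}")
```

The same data set could also feed an analysis of variance or a factor analysis; the point is simply that the hypothesis about which factors matter is tested against the data rather than settled by hunches.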

Design of Experiments

The types of statistical analysis described above use historical data--that is, data concerning results that have already occurred from applying resources in various settings. However, additional insights regarding the variables that contribute to outcomes can be gained by designing experiments and carefully controlling for the different factors of interest, whether they are main effects or interactions among factors. There is a large body of literature on the design of experiments to achieve quality improvements, and design of experiments plays an important role in diagnosing the causes of complex manufacturing problems and other processes.4 You will need expert help to design experiments efficiently in order to root out the factors contributing to outcomes.

MnDOT used an experimental design in constructing a survey instrument to assess the strength of different factors contributing to the value motorists receive from different attributes of roadside vegetation. These attributes are affected by maintenance activities associated with the delivery of MnDOT's "Attractive Roadside" product. Appendix D briefly describes how the experimental design was used to better understand the underlying factors affecting customer preferences for roadside aesthetic features.

Considerations for Changing Practices

Matching best practices to the goals of the initiating organization is critical: some best practices may be excellent, yet inconsistent with an organization's priorities. The first determination is whether the identified best practices of peer organizations are aimed at reducing resource usage and costs or at increasing customer outcomes. If the practices are aimed at reducing resource costs and your organization is primarily concerned with increasing the level of customer outcomes, then these might not be the first practices to spend time implementing. Conversely, if you are satisfied with the level of outcomes being produced, then you will likely be seeking to implement practices that lower resources and costs.

Estimating the Near-Term Impact of Changes

For a selected practice or set of practices, the originating organization needs to calculate the estimated costs of

4 Keki R. Bhote and Adi K. Bhote, World Class Quality: Using Design of Experiments to Make It Happen, Second Edition, American Management Association, New York, 2000.