Suggested Citation:"Chapter 3 - Measurement." National Academies of Sciences, Engineering, and Medicine. 2004. Guide for Customer-Driven Benchmarking of Maintenance Activities. Washington, DC: The National Academies Press. doi: 10.17226/13720.

CHAPTER 3: MEASUREMENT

TYPES OF MEASURES

Central to benchmarking is measurement. Measurement provides an objective way to gauge performance and to identify best performances. In customer-driven benchmarking, you need to apply a set of measures in order to assess how well you address customer desires and satisfaction. If you are not measuring, then you cannot possibly be doing benchmarking, and if you are not applying measures oriented toward how well you are serving your customers, then you are not doing customer-driven benchmarking.

During the last several decades, a system of classifying measures has evolved that helps to focus on customers. In the 1960s and 1970s, most attempts to develop performance management systems, including traditional maintenance management systems, focused on two types of measures: outputs and inputs. These measures are defined as follows.

Outputs

Outputs are a measure of production or accomplishment. In highway maintenance, examples of output measures are lane miles of roadway surfaced, the number of bags of litter picked up, and the number of acres mowed.

Inputs

Inputs are the resources used to deliver a product or service, perform an activity, or undertake a business process. In highway maintenance, the inputs consist of labor, equipment, and materials. The funds needed to pay for these resources may also be considered an input. Under certain circumstances, other productive resources—such as land, water, or air—can be treated as an input.

As illustrated in Figure 5, maintenance agencies focused on productivity use these measures by looking at the ratio of output to various types of inputs. One could measure output per labor hour, per equipment dollar, per quantity of material used, or per dollar of expenditure. One might also examine unit costs, expressed as the cost per unit of output.
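The productivity and unit-cost ratios described above can be illustrated with a short calculation. All quantities here are invented for illustration, not taken from the Guide:

```python
# Invented figures for one mowing activity.
acres_mowed = 120.0            # output
labor_hours = 300.0            # input: labor
total_cost_dollars = 9_000.0   # input: funding

output_per_labor_hour = acres_mowed / labor_hours  # productivity
unit_cost = total_cost_dollars / acres_mowed       # cost per unit of output

print(f"Output per labor hour: {output_per_labor_hour:.2f} acres")
print(f"Unit cost: ${unit_cost:.2f} per acre")
```

The same pattern applies to output per equipment dollar or per quantity of material used; only the denominator changes.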

The trouble with input and output measures is that they are internally focused on the work maintenance personnel do; they are not externally focused on customers. More recently, especially since the enactment of the Government Performance and Results Act of 1993, the focus has increasingly been on outcomes.

Outcomes

Outcomes are the results, effects, or changes that occur due to delivering a product or service, conducting an activity, or carrying out a business process. For example, an outcome of pavement resurfacing might be smoother pavement. An outcome of litter pickup might be cleaner roadsides, and the outcomes of mowing might be increased sight distance at intersections and around curves and, consequently, fewer accidents. Outcomes are more likely to be externally focused and frequently relate to customer preferences, expectations, and satisfaction.

By looking at the ratio of outcomes relative to inputs, one can address the effectiveness of a program in delivering customer-oriented results. Typical measures might be an outcome per labor hour, per equipment hour, or per dollar of expenditure. One might also examine cost effectiveness, which is the cost per unit of outcome achieved. Figure 4 illustrates that as one transitions from using output measures to outcome measures, the emphasis shifts from productivity to effectiveness.

[Figure 5. Product and Service Delivery Processes]
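Cost effectiveness works the same way as a unit cost, but with an outcome rather than an output in the denominator. A hedged sketch, using invented figures and pavement roughness as the outcome measure:

```python
# Invented figures: resurfacing cost and roughness (IRI) before/after.
resurfacing_cost = 250_000.0          # input, in dollars
iri_before, iri_after = 170.0, 95.0   # outcome: roughness, inches/mile

smoothness_gain = iri_before - iri_after
cost_effectiveness = resurfacing_cost / smoothness_gain  # $ per unit of outcome

print(f"Cost effectiveness: ${cost_effectiveness:,.0f} per unit of IRI reduction")
```

A lower figure means more smoothness achieved per dollar, which is the customer-oriented reading of the ratio.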

Some agencies have gone one step further and identified another set of measures: value added.

Value Added

Value-added measures are customer-oriented outcome measures expressed in terms of the value received by the customer. Measures of value added include an increase in customer satisfaction or an increase in economic value from, for example, travel time saved or life-cycle costs avoided. As one transitions from a focus on outcomes to value added, the perspective shifts from effectiveness to the net value added to the customer, which provides the basis for resource allocation in economic terms.

Four Types of Measures for Customer-Driven Benchmarking

Customer-driven benchmarking uses measures similar in type to the ones described above. However, the project team suggests you think about four types of measures:

1. Outcomes—the customer-oriented outcome or value-added measures as defined above.
2. Resources—the same as the inputs defined above (e.g., labor, equipment, materials, and funding).
3. Outputs—measures of levels of production or accomplishment.
4. Hardship factors—factors outside the control of the maintenance organization, such as weather and terrain, that influence the outcomes and the level of resources used.

Relationship of Outcomes, Resources, and Hardship Factors

In the spirit of sound economic analysis, customer-driven benchmarking takes the approach that the proper way to assess performance is to weigh some overall picture of outcomes against some overall picture of the resources expended, while adjusting for factors outside the control of a maintenance organization. There is no attempt to combine outcomes into a single measure of benefits, which would require converting all benefits into dollar terms or applying appropriate weights to each outcome in order to construct an overall performance index.

[Sidebar: Customer-driven benchmarking is like a stool that rests on four types of measures: outcomes, inputs, hardship factors, and outputs.]

Instead, in customer-driven benchmarking, a variety of customer-driven outcome measures are treated as a group but remain separate. Similarly, a variety of resource measures are treated as a group but remain separate. Factors outside the control of the agency—weather, terrain, traffic volumes—are also treated as a group but remain separate. Outputs have a bearing on the analysis because they help establish the level of effort for each benchmarking unit and the comparability of their performances. The idea is to preserve each of these measures while (1) continually bearing in mind the importance of looking at the outcomes achieved relative to the resources used and (2) taking into account hardship factors outside the control of each organizational unit and their level of production.

OUTCOMES

In customer-driven benchmarking, three important kinds of outcomes can be measured:

1. Customer satisfaction,
2. Condition of assets and other attributes of roads, and
3. Value received by the customer.

Customer Satisfaction

Customer satisfaction is a topic addressed in countless books and articles on marketing and market research, as well as in specialized fields such as psychology. Benchmarking is ultimately about making continuous improvements through the identification and adoption of best practices in order to meet or exceed the expectations of the customer. Measuring changes in customer satisfaction over time provides feedback on how well you are doing. An important measurement tool for assessing customer satisfaction is a statistically valid survey administered using random sampling.

Types of Surveys

As you plan to get started with benchmarking, important questions you need to address are as follows:

♦ What role will surveys of customer satisfaction play as a measurement tool?
♦ What types of survey data are currently available?
♦ Should you develop your own survey?
♦ Should you rely on surveys developed by others?

You will also need to address the cost and timing of developing your surveys. If you decide to develop your own surveys, you will want to decide what questions the survey should answer—for example, do you merely want to learn about customer satisfaction with the department's maintenance products and services, or do you also want to learn about customer preferences and expectations, the relative value of those preferences as customers make tradeoffs, and perhaps even what customers are willing to pay?

National Quality Initiative

As mentioned previously, FHWA, AASHTO, the American Public Works Association, and various industry associations are supporting the National Quality Initiative (NQI) in Transportation. The NQI develops and administers, with the assistance of the U.S. DOT, a national survey. In May 1996, the NQI released the results of a scientific random sample of 2,205 households that assessed customer satisfaction and preferences regarding the nation's highway system. Summary data from the survey's categorical questions are accurate within plus or minus 2 percent, with 95 percent confidence.[1]

The NQI survey includes numerous questions that pertain to the outcomes of road maintenance. It is important to recognize that the NQI survey, in attempting to determine customer satisfaction, focuses on important attributes of highways. In the case of maintenance, the key issue is customer satisfaction with the attributes of maintenance products and services—for example, the NQI asks how satisfied survey respondents are with the smoothness of roads.
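The reported precision can be checked with the standard margin-of-error formula for a proportion. Assuming simple random sampling, the NQI sample size of 2,205 yields roughly the plus-or-minus 2 percent figure cited:

```python
import math

# Margin of error for a proportion under simple random sampling,
# using the NQI sample size of 2,205 households at 95% confidence.
n = 2205
z = 1.96   # normal z-score for 95% confidence
p = 0.5    # worst-case proportion, which maximizes the margin

margin = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/- {margin * 100:.1f} percentage points")
```

The worst-case p = 0.5 gives about 2.1 percentage points; questions with lopsided response shares have smaller margins.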
It is not possible to attribute the smoothness of roads solely to maintenance; nonetheless, certain types of road maintenance, such as patching potholes and resurfacing, contribute significantly to the smoothness of roads.

[1] National Quality Initiative Steering Committee, National Highway User Survey, prepared by Coopers & Lybrand and the Opinion Research Corporation, May 1996.

Many administrators and managers in state DOTs have long believed that the driving public placed safety above smooth pavement in order of importance. An important result of the NQI survey is the revelation that road users' preferences are the reverse: they place more importance on road smoothness than on safety. Results such as this have been highly influential in program managers' resource allocation decisions. During the last several years, a number of states have increased their relative expenditures on actions that improve pavement smoothness.

Figure 6 presents the NQI survey questions that are most pertinent to road maintenance. Figures 7a through 7f show the results obtained from the 1996 survey.

Figure 6. Sample NQI Survey Questions

Thinking about the areas we just discussed, how satisfied are you with the following?

A. Traffic Flow
   a. Level of congestion
   b. Toll booth delays
   c. Construction delays
   d. Accident clean-up

B. Pavement Conditions
   e. Smooth ride
   f. Surface appearance
   g. Durability

C. Visual Appeal
   h. Appearance of sound barriers
   i. Landscaping
   j. Design of rest areas
   k. Compatibility with the natural environment

D. Maintenance Response Time
   l. Litter removal
   m. Snow removal
   n. Pavement repairs
   o. Guardrail and barrier repairs
   p. Rest area cleaning

E. Travel Amenities
   q. Number of rest areas or service plazas
   r. Variety of rest areas or plaza services
   s. Number of emergency call boxes and radio advisory stations
   t. Signs for motorist services and attractions
   u. Signs for mileage and destinations

[Figure 7a. Satisfaction with Attributes of Highway System]
[Figure 7b. Satisfaction with Visual Appeal]
[Figure 7c. Satisfaction with Safety Items]

[Figure 7d. Satisfaction with Bridge Conditions]
[Figure 7e. Satisfaction with Travel Amenities]
[Figure 7f. Satisfaction with Pavement Conditions]

Because the NQI survey provides a national baseline of data, many states have incorporated questions from the NQI survey into their own customer satisfaction surveys. This inclusion allows states to compare the results obtained from their own surveys with those obtained nationally. Kentucky, for example, compared the results of customer satisfaction surveys conducted in 1995 and 1996 with the national survey results.[2] Potentially, results could also be compared across states as a simple form of customer-driven benchmarking.

The significance of the NQI survey is that the maintenance-related questions represent a set of widely or commonly recognized measures of customer satisfaction. Having an agreed-upon set of questions for assessing customer satisfaction makes it easier to do benchmarking. Note that the NQI survey instrument was revised in 2000 but contains the same maintenance-related questions that were included in the earlier survey. Comparisons between the results of the 1995 and 2000 surveys can be found in "Moving Ahead: The American Public Speaks on Roadways and Transportation in Communities."

Agency Surveys

An alternative to using results from the NQI survey, or to incorporating NQI survey questions into your own questionnaire, is to develop a survey tailored to your own maintenance products and services and to the issues in your own state, city, county, or bridge, tunnel, and turnpike authority. Many states seek more detailed insight about customer preferences and satisfaction than the NQI survey questions can provide and thus have developed additional or more refined surveys and questions.

In constructing survey questions, you will first need to define products and services and identify their corresponding attributes—steps in the benchmarking process discussed in Part II. Then you will need to develop questions regarding customer preferences and satisfaction corresponding to each attribute.
You will also have to choose a suitable response scale.

[2] Kentucky Transportation Cabinet, The Path, Mid-Year 1999 Report, p. 35.

Appendix B contains tables showing maintenance attributes and corresponding customer outcome measures found in surveys developed and administered by various states. For example, the State of California has a question to assess customer satisfaction with response time to emergency situations. This question pertains to the maintenance product category of "Maintenance Response to Natural Disasters." Respondents (i.e., customers) rate their satisfaction on a scale of 0 to 10, where 10 represents "extremely satisfied" and 0 "extremely dissatisfied." This question is intended to provide the California DOT (Caltrans) with feedback on how well the state responds to maintenance problems associated with mudslides, floods, earthquakes, and so on. Note that a random sample is unlikely to include very many people who have actually been in an emergency situation. The state was probably seeking information regarding public perceptions of the responsiveness of Caltrans to natural disasters, even though respondents were unlikely to have experienced one directly.

The Caltrans survey also included a series of related questions intended to assess customer preferences regarding response time for time-sensitive maintenance activities such as sign repair, traffic delays due to maintenance, and pothole repairs. Respondents were asked to state whether the preferred response time should be within 15 minutes, 30 minutes, 60 minutes, 1 day, or 1 week.

Survey Design and Administration

If you decide to develop your own survey to use as a benchmarking measurement tool, you should go through the standard steps for developing sound surveys:

1. Focus groups,
2. Survey design and pretesting,
3. Coding guide and database design,
4. Sample design,
5. Administration, and
6. Summarization and analysis.

Further guidance on developing and administering surveys is found in Appendix C.
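The final step, summarization and analysis, might look like the following sketch for a 0-to-10 satisfaction question such as the Caltrans example. The ratings below are invented:

```python
# Invented 0-to-10 satisfaction ratings for one survey question.
ratings = [8, 6, 9, 4, 7, 10, 5, 8, 7, 6]

mean = sum(ratings) / len(ratings)
pct_satisfied = 100 * sum(r >= 6 for r in ratings) / len(ratings)

print(f"Mean satisfaction: {mean:.1f} out of 10")
print(f"Share rating 6 or higher: {pct_satisfied:.0f}%")
```

Reporting both a mean and a share above a threshold is a common convention; the threshold of 6 here is an assumption, not something the Guide prescribes.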

Condition of Assets

The second category of outcome measures essential for customer-driven benchmarking consists of condition measures. Condition includes the condition of assets and other attributes of roads. Condition is important to customers from three standpoints.

First, the physical attributes of roads directly affect the experience of road users. Examples of these attributes are pavement smoothness and comfort, ruts and shoulder edge drop-offs, the narrowness of bridges, the brightness of signs at night, obstructions in the roadway, and whether ice is on the road.

Second, virtually every customer of roads pays for the roads either directly or indirectly. The condition of roads is important to customers, if for no other reason than that they do not wish to pay higher gas and property taxes. Responsible stewardship of the roads through proper and timely maintenance preserves the investment in highways and streets and avoids wasting money that could be used for more productive purposes.

Third, condition relates not only to the physical condition of roads but also to the condition that results from maintenance services such as mowing; picking up litter; trimming brush and trees; cleaning ditches; removing drainage system blockages; controlling erosion; cleaning rest areas; and landscaping, including planting wildflowers.

Value Received by the Customer

The third category is value received by the customer. It is important to remember that customers of maintenance "wear three hats," so to speak. One set of customers consists of those who use the roads. This set of customers is primarily concerned with avoiding road user costs such as travel time, vehicle-operating costs, and accident costs. The second set of customers consists of those who pay for the roads; it generally, but not necessarily, consists of those who use the roads.
These customers do not like it if the taxes or fees they pay increase in order to cover extra costs that could have been avoided had the roads been maintained by performing the right treatment at the right time in the right place. In other words, by not deferring needed maintenance, one avoids increased maintenance, rehabilitation, and reconstruction costs in the future.
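The avoided-cost reasoning above is usually made concrete by discounting future costs to present value (Appendix D covers the full method). A minimal sketch, assuming an invented 4 percent discount rate and hypothetical avoided rehabilitation costs:

```python
# Present value of costs avoided by timely maintenance. The 4 percent
# discount rate and the avoided-cost amounts are invented figures;
# Appendix D discusses how such figures are actually developed.
discount_rate = 0.04
avoided = {5: 50_000.0, 10: 120_000.0}  # year -> avoided cost in dollars

present_value = sum(cost / (1 + discount_rate) ** year
                    for year, cost in avoided.items())
print(f"Present value of avoided future costs: ${present_value:,.0f}")
```

If the present value of avoided future costs exceeds the cost of the maintenance that avoids them, deferral wastes money.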

The third set of customers consists of those impacted by the operation of roads. Prominent among this group are adjacent property owners, who can be impacted by what economists call externalities—costs or benefits experienced by others than the producers or consumers of a product. Pollution or changes in property value resulting from the use of and activities occurring on the road or in the right-of-way are examples of externalities. Property owners who are adjacent to roads and experience externalities are also among those who pay for the roads, particularly in cities and counties where property taxes are a major source of road funding.

Economic value to maintenance customers can be conveniently grouped into the following types:

♦ Avoided user costs,
♦ Avoided life-cycle costs, and
♦ Avoided external costs.

Customers are willing to pay to avoid user costs, life-cycle costs, and external costs; hence, willingness to pay is also an important measure of economic value. Appendix D includes a discussion of how to calculate life-cycle costs, user costs, and willingness to pay.

COMMONLY RECOGNIZED MEASURES

A prerequisite for benchmarking of any type, including customer-driven benchmarking, is that the participants agree on the measures that will be used. This is true regardless of whether all the benchmarking participants are within your organization or whether you benchmark with other organizations. Therefore, one of the early tasks in benchmarking is to begin to establish a foundation of agreed-upon measures.

There are several ways to tackle this prerequisite. First, if you are planning to do benchmarking only within your organization, you can establish your own agreed-upon measures. Second, if you are benchmarking with other organizations, you can begin the process of identifying your partners, establishing what you plan to benchmark, and gaining agreement on the measures you will

use. Third, whether you are doing internal or external benchmarking, you can determine whether there is a pre-existing set of commonly recognized customer-driven measures for benchmarking maintenance activities.

Importance and Adopted Measures

The issue of widely agreed-upon measures for benchmarking and other purposes is of such importance that the AASHTO Subcommittee on Maintenance and FHWA sponsored the National Workshop on Commonly Recognized Measures for Maintenance in June 2000 in Scottsdale, Arizona. At the workshop, states agreed to an initial set of "commonly recognized measures" that reflect the outcomes and satisfaction that customers experience from the delivery of maintenance products and services. Table 1 summarizes the measures adopted by the states at the workshop.

In only a few cases was a recommended measure fully defined. In most instances, the workshop participants adopted a type of measure with the expectation that the definition, units of measure, and other aspects of a measurement protocol would be established in the future. The view was expressed that it was not necessary to be overly specific at the workshop; it was sufficient for participants to identify areas of general agreement that commonly recognized measures exist, particularly ones that relate directly or indirectly to the customer.

The adopted commonly recognized measures exist side by side with other performance measures that many states have already developed and generally use for maintenance management and asset management. Over time, however, it is anticipated that an increasing number of states, cities, counties, turnpike authorities, and contractors will apply commonly recognized measures for an increasing number of purposes. The common measures are useful for customer-driven benchmarking, customer-driven asset management systems, performance-based contracting, and public reporting of maintenance performance.
Commonly recognized measures create efficiencies in data collection, measurement systems, and management systems.

[Table 1. Commonly Recognized Measures Adopted by Consensus]

Key Issues in Adopting Agreed-Upon Measures

When adopting benchmarking measures, there are a number of key issues to consider:

♦ Desirable attributes of the measurement scale;
♦ Types of measures to avoid;
♦ Selection of appropriate units;
♦ Segment length;
♦ Repeatability, reliability, and accuracy; and
♦ Protocols.

Type of Measurement Scale Just as you would select an appropriate tool to pound a nail into wood, it is critical to select measures for benchmarking that have the appropriate attributes. The measures need to support objective, repeatable measurement—in some cases, with desirable precision and statistical confidence. To do so, the measures generally need to have a continuous measurement scale, be expressed in units with appropriate resolution, apply to standard lengths or parts of roadway geometry, and be taken under a standard and rigorous protocol. There may also be a need for acceptance testing of data using random sampling. Continuous Measures It is strongly recommended that wherever possible you apply measures with a continuous scale. A continuous scale extends indefinitely from a starting point, and the units of measurement can be divided into equal, arbitrarily small intervals. Examples of continuous scales are as follows: ♦ Extent of bridge deck distress measured in terms of percentage of the deck area affected, ♦ Roughness measured according to the International Roughness Index (IRI), ♦ Shoulder edge drop-off measured in inches or centimeters and arbitrarily small fractions thereof, ♦ Retroreflectivity of signs measured as candelas per foot- meter square foot, ♦ Mean response time to fix a problem, and ♦ Mean time between failures. By using a continuous scale, you remove the subjectivity and difficulty of having to define the meaning of scale intervals other than the units of measurement. The results of a measurement can be of any magnitude from very small to very large. Measurement systems that use continuous, well- established scales are more likely to be repeatable, and there is a basis for establishing the statistical quality of measures to any degree of accuracy and statistical confidence. Chapter 3: Measurement 60 The Virginia DOT contracted for the collection of roadway inventory feature and condition data. 
The contract specified that the data must be accurate within plus or minus 5 percent, with 95 percent confidence. The contractor had to agree to acceptance testing to ensure that the data was of the accuracy and statistical confidence specified in the contract.
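As an illustration of this kind of acceptance testing, the sketch below checks contractor data against independent re-measurements of randomly sampled segments, accepting the data only if the mean relative error, together with its 95 percent confidence interval, fits inside the tolerance. The function name, the sample data, and the normal-approximation formula are illustrative assumptions, not terms of the Virginia DOT contract.

```python
import math
import random

def acceptance_test(contractor, audit, tolerance=0.05, z=1.96):
    """Check that contractor measurements agree with audit re-measurements
    to within +/- `tolerance` (relative error) at ~95% confidence (z=1.96)."""
    errors = [(c - a) / a for c, a in zip(contractor, audit)]
    n = len(errors)
    mean = sum(errors) / n
    sd = math.sqrt(sum((e - mean) ** 2 for e in errors) / (n - 1))
    half_width = z * sd / math.sqrt(n)          # 95% CI half-width on the mean error
    return abs(mean) + half_width <= tolerance  # accept only if the whole CI fits

# Hypothetical audit of 30 randomly sampled roadway segments
random.seed(1)
audit = [random.uniform(50, 150) for _ in range(30)]
contractor = [a * random.uniform(0.98, 1.02) for a in audit]  # small errors
print(acceptance_test(contractor, audit))  # True
```

A real acceptance plan would also fix the sampling frame and sample size in advance, so the contractor cannot dispute how segments were chosen.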

Discrete or Continuous Interval Scale

If you cannot select a continuous scale, the next best type of scale is a discrete scale with constant intervals between steps, otherwise known as a continuous interval scale. Examples of such an interval scale are

♦ 1, 2, 3, 4, 5 and
♦ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.

A discrete interval scale has most of the desirable attributes of a continuous scale. However, one must define what each step in the scale means, and this can be fraught with subjectivity and technical challenges. The following customer rating scale attempts to maintain equal distances between each step of the scale:

♦ 1 = Very dissatisfied;
♦ 2 = Somewhat dissatisfied;
♦ 3 = Satisfied;
♦ 4 = Somewhat more than satisfied; and
♦ 5 = Very satisfied.

A similar type of scale might also be a letter grade—for example, "A, B, C, D, and E." This type of scale has the same strengths and weaknesses as the continuous interval scale if it is equivalent to "1, 2, 3, 4, 5" or some other similar equally spaced discrete scale. Occasionally, a letter scale has a leap in it—for example, A, B, C, D, and F. Usually this type of scale implies that measurement will occur in constant steps up to a point, and thereafter the only measurement of concern is failure.

You are likely to encounter a measurement system that involves probabilistic condition states. The measurement scale is likely to be a discrete scale such as 1, 2, 3, 4, and 5. Probabilistic condition states are used to identify the probability that a maintainable element, such as a bridge deck or pavement surface, will deteriorate from one condition state to another. The distances between steps on the scale are not necessarily even, but are defined by alternative actions that may be considered for maintaining a maintenance element in a particular condition state.
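To make the idea of probabilistic condition states concrete, the sketch below propagates a condition-state distribution forward through a one-year transition matrix. The matrix values are hypothetical; real transition probabilities would be estimated from inspection history.

```python
# Hypothetical one-year transition probabilities for a bridge-deck element.
# Rows/columns are condition states 1 (best) through 4 (worst); entry [i][j]
# is the probability of moving from state i+1 to state j+1 in one year.
TRANSITIONS = [
    [0.90, 0.10, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],   # worst state is absorbing without repair
]

def forecast(distribution, years):
    """Propagate a condition-state distribution forward `years` years."""
    for _ in range(years):
        distribution = [
            sum(distribution[i] * TRANSITIONS[i][j] for i in range(4))
            for j in range(4)
        ]
    return distribution

# All elements start in state 1; see how the inventory ages over 5 years.
print([round(p, 3) for p in forecast([1.0, 0.0, 0.0, 0.0], 5)])
```

Note that the "distance" between states 3 and 4 is defined by what actions become necessary, not by equal scale intervals, which is exactly the point made in the text.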

Binary Measures

Binary measures take on just two values, such as "0 or 1" or "yes or no." The project team recommends avoiding binary scales because they have much less resolution than does a continuous or continuous interval scale, and establishing the definition of each value is likely to be much more subjective. In some cases, however, a binary scale is the only logical choice. Examples are whether a traffic signal is working, a sign is up or knocked down, or a drainage structure is blocked.

Types of Measures to Avoid

Do not choose targets, objectives, or goals as benchmarking measures. Frequently people confuse these points on the measurement scale with the measurement scale itself. A target, goal, or objective may have such importance (for example, a performance target agreed upon by a Chief Administrative Officer and the legislature) that managers may think of little else besides whether the target or goal is being met. The measurement process of benchmarking is not about targets, objectives, or goals; it is about measuring performance along some scale to discern best performers so that benchmarking partners can explore what work methods and business processes lie behind best performances and can adopt or improve upon best practices. You should also avoid choosing measures that represent thresholds for action, such as minimum tolerable conditions (e.g., a warrant to replace a traffic signal). An important exception is a probabilistic condition state that has alternative actions associated with it.

Selection of Appropriate Units

Not only can a measurement scale be too coarse to differentiate performance, but choosing inappropriate units can have the same effect. You may decide to measure litter count per unit of elapsed distance along the roadside.
If you select as your measure litter count per mile, you will get one result; if you select litter count per tenth of a mile, you will get another; and if you select litter count per foot or inch, the quantity of litter you observe may always be close to zero, which is not very useful for benchmarking. This example reveals how important it is to select units that will provide enough resolution to measure the performances of different organization units.

Agreement on Road Segment Length and Geometric Measurements

In reaching an agreement with your benchmarking partners regarding what measures to use, you may find it necessary to define an agreed-upon segment length or other standard measurement procedures pertaining to roadway geometrics so that everyone takes the same measurement in the same manner. Suppose you are measuring guardrail condition. Do you measure the percent of total guardrail damaged over a 1-mile distance, over a tenth of a mile, or over a kilometer? Suppose you are measuring the presence of a type of noxious weed. How will you define the area over which you will take measurements? Perhaps you might agree with your partners that a measure of roadside vegetation management will be sight distance at intersections. How will you define the procedure for measuring sight distance? Do you determine, for example, how many feet from the corner along one side of the intersection you can see a car at an equal distance along the other side of the intersection? In general, you will need to reach prior agreement on how to define segment length, area, and other geometric procedures for different types of measurements.

Repeatability, Reliability, and Accuracy of Measurement

Any measure selected for customer-driven benchmarking needs to be repeatable and reliable. Repeatable means that different people who apply the measure and take a measurement under the same circumstances obtain the same or nearly the same result. Obtaining repeatability usually requires training: each person who takes a measure requires instruction on how to do it.
If equipment is involved, such as a profilometer or a friction meter, it will need to be calibrated and recalibrated from time to time to ensure repeatability.
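One simple way to screen a measure for repeatability is to have several trained raters measure the same segments and compare their readings. The sketch below flags the worst per-segment coefficient of variation; the readings and the 10 percent threshold are illustrative assumptions, not a standard.

```python
import statistics

def repeatability(measurements_by_rater):
    """Given each rater's measurements of the SAME segments, return the
    worst per-segment coefficient of variation (std dev / mean), a rough
    screen for whether a measure is repeatable across people or devices."""
    segments = zip(*measurements_by_rater)      # group readings by segment
    worst = 0.0
    for readings in segments:
        cv = statistics.stdev(readings) / statistics.mean(readings)
        worst = max(worst, cv)
    return worst

# Hypothetical edge drop-off readings (inches) from three trained raters
raters = [
    [1.9, 3.1, 0.8],
    [2.0, 3.0, 0.9],
    [2.1, 3.2, 0.8],
]
print(repeatability(raters) < 0.10)  # True: no segment varies by more than 10%
```

If the screen fails, the remedy suggested in the text applies: more training, clearer instructions, or recalibrated equipment.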

The measure needs to be reliable. The equipment used to take the measure should not break down easily. Excessive measurement deviations should not occur due to normal changes in weather, normal wear and tear, or a switch in the personnel who are taking the measurement.

The measure needs to be accurate, and it is desirable to specify its accuracy. In other words, if you or others take repeated measurements, you should get the same result within some range. This range is often referred to as the "accuracy" or "precision" of the measure and is expressed as "plus or minus" some percentage deviation from the mean score (e.g., plus or minus 5 percent). Because each measurement is a random variable, measurements will fall within the stated accuracy only with some statistical confidence level—for example, 95 percent of the time. Indeed, you should specify what accuracy and statistical confidence you expect of your measurements.

Measurement Protocols

In general, it is a good idea to develop formal protocols for measurement. Protocols exist for taking many different kinds of measurements—for example, the IRI and rutting. A good protocol should set out the purpose, scope, measurement method, data recording procedure, and quality assurance steps and should document references. An outline of a protocol for edge drop-off (taken from the proceedings of the National Workshop on Commonly Recognized Measures) might consist of the following:

1. Purpose. The edge drop-off protocol defines a standard method for estimating and summarizing edge drop-off. The purpose is to produce consistent estimates of edge drop-off.

2. Scope. Applies to estimating edge drop-off on any pavement surface, but does not provide specifications for equipment. Any equipment capable of taking the measurement is acceptable under the protocol. Safety issues in applying the protocol are the responsibility of the organization taking the measurement.

3. Measurement. Each agency should designate the lane(s), shoulders, and direction(s) of travel to be surveyed based on sound engineering principles and management needs. Edge drop-off is an elevation difference between the paved travel lane and shoulder, between a paved shoulder and an unpaved shoulder, or both. Measurements are made longitudinally at maximum intervals of 15 meters (50 feet).

4. Data Recording. Data collection sections should be of a constant length within some prescribed range as determined by the agency. Sample intervals within each data collection section should be of uniform length. Minimum sample section lengths are 30 meters (100 feet). There are five edge drop-off condition levels defined as a function of the length of the edge drop-off and the elevation difference (for example, the edge drop-off condition levels used by the Texas DOT). The minimum data recorded should consist of section identification, the length of the data collection section, the date of collection, and the rating for the section.

5. Quality Assurance. Each agency should develop a quality assurance plan that addresses personnel certification training, accuracy of equipment (including calibration), daily quality control procedures, and periodic and ongoing quality control.

6. Reference Documents. A list should be provided of references associated with the measurement protocol.3

Data Availability, Quality, and Costs

Some types of measures depend on making a calculation—for example, a measure of response to customer demand for control of ice and snow is the ratio of the time it takes to restore pavement to bare condition from the onset of a snowstorm relative to the duration of the snowstorm. To calculate this ratio, you need to track how long it takes from the start of each storm to the point in time when snow removal crews have removed all the snow from the roadway. You also need to calculate how long the snowstorm lasts. Neither of these numbers is trivial to determine. You have to define when a storm starts. Does this mean the storm begins when precipitation starts, or when snow starts to stick to the roads?
Do you measure where snow sticks to the road in one standard place, or along every section of road, taking an average of the times at which snow starts to stick? Similar difficulty exists in trying to define when a storm ends and when pavement has been restored to a bare condition.

Once you settle on the definition of the measure, you need to compile or collect data to calculate it. If the data is unreliable or inaccurate and lacks an appropriate degree of statistical confidence, then the measure will also be unreliable or not accurate enough. Before you finalize the measures you will be using for benchmarking, you need to do a careful assessment of data availability, reliability, and accuracy. In addition, you need to estimate the costs of data collection. If the costs are excessive, you may have to choose another measure or accept a lower level of accuracy and confidence.

You may think that there is too much emphasis here on data and measurement quality. Many important decisions will eventually depend on the accuracy of the measures you collect and the underlying data; however, overemphasizing accuracy has its costs, too. Do not go overboard in trying to be too accurate. The right thing to do may be to start benchmarking and measuring as soon as possible and to gradually improve the quality of your measurements.

A CATALOG OF MEASURES

Appendix B provides a catalog of measures you may want to use for benchmarking. Many of the measures presented are widely used and include those identified as "commonly recognized measures" at the national workshop on the topic. Some types of measures discussed are not yet widely used but are important from the standpoint of their relationship to the customer. As you get started with benchmarking, you will want to select among these and other possible measures. The catalog offers some guidance regarding the pertinence of each measure to the customer and its reliability, accuracy, and ease and cost of application.

3 Proceedings, National Workshop on Commonly Recognized Measures.

Performance measures are presented for the following areas:

♦ Pavements;
♦ Shoulders;
♦ Bridges;
♦ Signs, striping, and markings;
♦ Safety features;
♦ Ice and snow control;
♦ Roadside vegetation;
♦ Drainage;
♦ Litter removal;
♦ Rest areas;
♦ Signals; and
♦ Other electronic devices.

As an example of the material in Appendix B, Table 2 (which is identical to Table B1) shows measures for pavements. Pavements experience different types of deterioration that affect their appearance, riding experience, and structural soundness. Table 2 presents the following information:

♦ Attributes important to the customer that the measure addresses;
♦ The name of the measure (e.g., IRI);
♦ Units of measurement (e.g., inches per mile);
♦ Whether the measure was commonly recognized at the National Workshop on Commonly Recognized Measures for Maintenance;
♦ Whether the measure is repeatable, reliable, and accurate; and
♦ Cost of using the measure and other important issues.

Table 2. Condition Measures for Pavements

| Attribute | Measure | Units | Commonly Recognized by AASHTO? | Repeatable, Reliable, and Accurate? | Cost and Other Issues |
|---|---|---|---|---|---|
| Pavement Smoothness (roughness) | IRI | Inches/mile or m/km | Yes | Well-established procedures and equipment that result in repeatable, reliable, and reasonably accurate results | Low incremental cost for agencies already collecting IRI; moderate to high cost of new data collection effort |
| Pavement Smoothness (customer satisfaction) | NQI or other survey question asking customer satisfaction regarding pavement smoothness | 1–5 response scale | Survey question on pavement smoothness | Standard NQI survey question; not accurate for jurisdictions lower than state, unless separate survey administered | Low cost to use NQI survey results; moderate to high cost to develop and administer your own survey that includes question on pavement smoothness |
| Pavement Smoothness (potholes) | Number of potholes of specified size per unit distance | Number per unit distance | | Potholes are easily observed, but the number per unit distance can be difficult to count. The number of potholes can change rapidly as new ones appear and existing ones are repaired. | High cost to develop a comprehensive, accurate pothole count |
| Pavement Smoothness, Accessibility (blowups) | Number of blowups per unit distance | Number per unit distance | | Blowups are easily observed and easy to count. Blowups occur during the freeze-thaw transition, so new ones can suddenly emerge and affect the reliability of the count. | Seasonal problem that requires moderate measurement cost; motorist call-ins could reduce data collection costs |
| Safety (danger of hydroplaning) | Rutting | Inches | Yes | Well-established, reliable, repeatable, and reasonably accurate measurements using a ruler | Low cost to do for sample sections or if data already exists; high cost to obtain comprehensive coverage if data doesn't exist |

Table 2. (Continued)

| Attribute | Measure | Units | Commonly Recognized by AASHTO? | Repeatable, Reliable, and Accurate? | Cost and Other Issues |
|---|---|---|---|---|---|
| Safety (skid resistance) | Friction | | Yes | Well-established equipment and procedures for reliable, repeatable, and reasonably accurate measures | Low incremental cost if agency already routinely measures; high cost for new measurement program |
| Preservation Characteristic (protection against water damage to structure due to faults) | Faulting | Inches | | Repeatable, reliable, and reasonably accurate measures obtained using a ruler | Low cost to do for sample sections or if data already exists; high cost to obtain comprehensive coverage if data doesn't exist |
| Preservation Characteristic (appearance of deterioration, raveling, water infiltration) | Extent and severity of different types of cracking: alligator, longitudinal, transverse | Percent of area covered or length of cracks and rating of severity on a scale | | Challenge in maintaining consistency among raters; automated distress identification technology not highly accurate | Much lower cost to do for sample sections in comparison to comprehensive network coverage |
| Overall Pavement Condition | Health Index | Some type of index, e.g., from 0–100 | | Requires construction of index reflecting key pavement attributes; each characteristic can be measured with varying degrees of reliability | Low to high cost to develop and apply index, depending upon the availability of data to calculate index components |
| Overall Level of Service | Visual Level of Service Condition Rating | Rating scale of A, B, C, D, or E | | Often visual rating scales combine more than one characteristic, and so it is difficult to portray and isolate condition of different attributes | Mainly useful for communicating to policy makers and general public |
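As an illustration of the Health Index row in Table 2, the sketch below builds a 0–100 index as a weighted average of attribute scores. The weights and scoring breakpoints are hypothetical assumptions for illustration, not a published index.

```python
# One illustrative way to build a 0-100 pavement Health Index: score each
# attribute on 0-100, then take a weighted average. Weights and breakpoints
# below are hypothetical; an agency would calibrate its own.
WEIGHTS = {"roughness": 0.4, "rutting": 0.3, "cracking": 0.3}

def score_roughness(iri_in_per_mi):
    # 60 in/mi or better scores 100; 300 in/mi or worse scores 0
    return max(0.0, min(100.0, 100.0 * (300 - iri_in_per_mi) / (300 - 60)))

def score_rutting(depth_in):
    # 0 in scores 100; 0.75 in or deeper scores 0
    return max(0.0, min(100.0, 100.0 * (0.75 - depth_in) / 0.75))

def score_cracking(pct_area):
    # lose 2 points per percent of cracked area
    return max(0.0, min(100.0, 100.0 - pct_area * 2.0))

def health_index(iri, rut, crack_pct):
    scores = {
        "roughness": score_roughness(iri),
        "rutting": score_rutting(rut),
        "cracking": score_cracking(crack_pct),
    }
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

print(round(health_index(iri=95, rut=0.15, crack_pct=8.0), 1))  # 83.4
```

As the table notes, the overall reliability of such an index is limited by the least reliable of its component measurements.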

RESOURCE MEASURES

The next broad class of measures needed for benchmarking is resources, composed of labor, equipment, and material, as well as financial costs.

Labor

Labor is an important input to the production of maintenance products and services. In benchmarking, you need an overall measure of the quantity of labor that is used to produce a maintenance product or service or undertake an activity. The quantity of labor is measured in terms of person-hours: person-hours equal regular hours plus overtime hours. Try to separate out travel hours (i.e., time spent traveling between the garage and the worksite); some agencies require workers to report travel hours in addition to regular and overtime hours.

Eventually, as you become more deeply involved in benchmarking and want to understand your practices in detail, you will want to distinguish between labor hours of different quality. Measures of quality pertain to training, education, and experience. The productivity of different people is not a measure of quality; productivity is the output of labor that is achieved as a result of the labor hours expended and the quality of that labor.

As you assemble labor data to support initial benchmarking and for subsequent comparison of your own and "best" practices, you should break down your labor hours by categories that distinguish the levels of training, education, and experience of different personnel. You can do this by categorizing labor hours expended into one or more of the following:

♦ Wage class or other class of personnel (e.g., equipment operator or not);
♦ Number of years of experience; or
♦ Documented training or certification to perform certain types of activities or to use certain types of equipment (e.g., herbicide application).
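The person-hours bookkeeping described above might be sketched as follows. The record layout and wage classes are illustrative assumptions, not a standard daily work report format.

```python
from collections import defaultdict

# Hypothetical daily work-report records:
# (wage_class, regular_hrs, overtime_hrs, travel_hrs)
records = [
    ("equipment_operator", 8.0, 2.0, 1.0),
    ("laborer",            8.0, 0.0, 1.0),
    ("equipment_operator", 8.0, 0.0, 0.5),
]

def person_hours_by_class(records):
    """Person-hours = regular + overtime; travel time tracked separately."""
    work = defaultdict(float)
    travel = defaultdict(float)
    for wage_class, regular, overtime, travel_hrs in records:
        work[wage_class] += regular + overtime
        travel[wage_class] += travel_hrs
    return dict(work), dict(travel)

work, travel = person_hours_by_class(records)
print(work)    # {'equipment_operator': 18.0, 'laborer': 8.0}
print(travel)  # {'equipment_operator': 1.5, 'laborer': 1.0}
```

The same grouping could be keyed on years of experience or certifications instead of wage class, matching the categories listed above.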

Key sources of labor data are the agency's maintenance management system and the payroll system. Some agencies might also have a database containing information on the training of each employee.

Equipment

As with labor, you will need an overall measure of the equipment used. Equipment quantity consists of the number of hours each type of equipment is used or some metered measurement of usage—for example, a truck odometer reading. Equipment quality is determined by the type of equipment, its condition, its frequency of breakdown, and its operator requirements, which relate to the ease of operation and the number of operators required. In preparation for analysis of best practices and comparison to your own, try to categorize your equipment along these different dimensions of quality and to measure the usage of each category in hours, by odometer, or both. Information on equipment type and utilization usually can be obtained from a maintenance management system, an equipment management system, a financial management system, or all three.

Material

You will also need a measure of material usage. Material usage can be measured by the physical quantity of each type of material used to deliver a specific maintenance service or product or to undertake a specific activity. Examples of material use are the number of signs and posts, linear feet of guardrail, tons of pothole material, and gallons of crack sealant.

Selection of the proper units to measure material usage requires some care. For example, it might be better to measure sign replacement not by the number of signs replaced, but by the area of the sign facing, which reflects the magnitude and difficulty of putting up or replacing a sign. Alternatively, one could count both the number of signs replaced and the number of signposts. The number of signposts required might be an indicator of the difficulty of replacing certain types of signs.
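The suggestion to measure sign replacement by facing area rather than by count can be illustrated as follows; the sign dimensions are hypothetical.

```python
# Sketch of measuring sign-replacement material by facing area and post
# count rather than by a bare sign count. Dimensions are hypothetical.
signs_replaced = [
    {"width_ft": 2.5, "height_ft": 2.5, "posts": 1},   # small warning sign
    {"width_ft": 8.0, "height_ft": 4.0, "posts": 2},   # large guide sign
]

count = len(signs_replaced)
area_sqft = sum(s["width_ft"] * s["height_ft"] for s in signs_replaced)
posts = sum(s["posts"] for s in signs_replaced)

# By count the two signs look identical; by area the guide sign is ~5x the work
print(count, area_sqft, posts)  # 2 38.25 3
```

A unit that tracks the actual work content (area, posts) supports fairer comparison between units that replace different mixes of signs.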

Various information on materials used can be found in the maintenance management system, the material management system, the financial management system, or all three.

Costs

Another measure of resource utilization is the total dollar cost of the labor, equipment, and material involved in delivering a maintenance product or service. Sometimes, however, it is better to employ measures of the raw labor, equipment, and material inputs instead, because there can be local and regional differences in the unit costs of labor, equipment, and materials. If you use total resource costs, or even the costs of each input to maintenance production, you will not easily be able to distinguish to what degree the physical inputs or variation in the price of inputs are contributing to the outcomes.

If physical measures of labor, equipment, and material resources are not available and only cost data is available, then cost data can be used as a measure of resource utilization. Indeed, one can argue that expressing all resources in financial terms results in convenience of analysis and, in some cases, in a better measure of resource utilization than do separate usage rates for labor, equipment, and materials. Note that if a maintenance cost index that varies by year and part of the country is available, you can use dollars as a measure of resource costs and can normalize the costs by geographical area for any past year covered by the index.

It is important to understand that even if you do not use resource costs when you measure performance, once you have identified best performers and improvement opportunities and begin to analyze the effect of adopting best practices, you will need cost information in order to estimate potential cost savings or the costs of improving certain outcomes.

Variable Costs

Wherever possible, you should distinguish between variable and fixed costs. Variable costs vary with output and include labor, selected equipment costs such as fuel, and material costs.
Variable costs do not include overhead and other fixed costs; therefore, fixed costs should be excluded from your measures of labor, equipment, and material input.
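The cost-index normalization suggested under Costs can be sketched as follows. The index values and regions are hypothetical, standing in for a real maintenance cost index published for each year and part of the country.

```python
# Sketch of normalizing maintenance costs with a cost index that varies by
# year and region. Index values are hypothetical (base = 100).
COST_INDEX = {
    ("northeast", 2002): 104.0,
    ("northeast", 2003): 108.0,
    ("southwest", 2002):  98.0,
    ("southwest", 2003): 101.0,
}

def normalize(cost_dollars, region, year, base=100.0):
    """Convert nominal dollars to base-index dollars so costs from
    different regions and years can be compared directly."""
    return cost_dollars * base / COST_INDEX[(region, year)]

# Two nominally different costs turn out equal in real terms:
print(round(normalize(54_000, "northeast", 2003)))  # 50000
print(round(normalize(49_000, "southwest", 2002)))  # 50000
```

This kind of adjustment is what lets dollar-denominated resource measures be compared across benchmarking partners in different regions and years.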

Fixed Costs and Activity-Based Costing

Fixed costs are those costs that do not vary with output, such as the costs of administration and buildings. Ideally, your agency should have an accounting system that determines fixed and variable costs by maintenance activity and by product and service category. This is known as "activity-based costing." If your agency does not have such an accounting system, eventually you may want to implement activity-based costing to identify your fixed and variable costs by activity, product, and service.

HARDSHIP FACTORS

In addition to outcomes and resources, the third major group of measures needed for customer-driven benchmarking is hardship factors. These are factors outside the influence of maintenance crews. Examples of hardship factors are the following:

♦ Weather,
♦ Terrain,
♦ Traffic,
♦ Absence of shoulders along roads where work is performed,
♦ Average travel distance to work sites, and
♦ On-street parking.

You need to prepare to collect data on these kinds of hardship factors because they will be assessed alongside outcomes and resources used.

Weather

In most states, weather varies considerably from one part of the state to another. Some states have wide extremes in weather that are partly a function of geography. Mountains, plains, deserts, heat island effects of urban areas, and proximity to oceans and large lakes are just a few factors that influence weather. It is desirable to adjust outcomes based on differences in weather from one location to another. Ideally, one should store data on weather conditions present at the time maintenance work is performed. To be more specific, standard daily work reporting should be augmented with weather data—at a minimum, the type and quantity of precipitation that occurred during the day and the high, low, and mean temperatures. The drawback to this further data collection is that it requires additional effort on the part of crew leaders to record the information, which detracts from getting their jobs done.

An alternative to crew leaders recording weather data is to gather data from other sources and to combine it in a database with the accomplishment and resource utilization information reported in daily work reports. There is extensive weather-related information available from the National Weather Service and state meteorological agencies. Weather data includes temperature, precipitation (rain and snow), wind direction, wind speed, humidity, and other information. Weather information is collected at selected sites throughout a state, but not necessarily in every county. Therefore, if you want to benchmark at the county level or at a lower organizational level, you will probably have to interpolate weather data from information collected at existing weather stations, unless maintenance personnel record weather conditions at the time they work.

Another potential source of weather information is the Roadway Weather Information System (RWIS). Most states that experience snow and ice conditions have an RWIS. These systems consist of a set of pavement surface temperature sensors, subsurface sensors, and regular weather sensors (air temperature, wind direction, wind speed, humidity) at various locations along the roadway network. RWIS roadside units continually monitor weather-related pavement conditions and weather conditions. The data is collected and transmitted to a service bureau or to the transportation department that has responsibility for the roads.
RWIS data can also be analyzed and extrapolated to counties, areas, and garages throughout a state.
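If you must interpolate station data to a county or garage location, inverse-distance weighting is one simple approach; the text does not prescribe a method, so this choice, along with the station coordinates and snowfall values below, is an illustrative assumption.

```python
# Sketch of interpolating a weather value (e.g., daily snowfall) to a garage
# location from surrounding weather stations via inverse-distance weighting.
def idw(target, stations, power=2):
    """stations: list of ((x, y), value). Returns the weighted estimate
    at `target`; an exact hit on a station returns its value directly."""
    num = den = 0.0
    for (x, y), value in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return value
        w = 1.0 / d2 ** (power / 2)
        num += w * value
        den += w
    return num / den

stations = [((0.0, 0.0), 4.0), ((10.0, 0.0), 8.0)]   # snowfall in inches
print(round(idw((5.0, 0.0), stations), 3))  # midpoint: simple average, 6.0
```

Real applications would use geographic coordinates with proper distance calculations and would validate the interpolation against any stations held out of the fit.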

Geographic Information

Another hardship factor that affects maintenance productivity and outcomes is terrain. Mountainous and hilly areas are likely to affect maintenance outputs and outcomes differently than flat areas will. Information on terrain is readily available from both government and private-sector data sets. Most state DOTs have access to a geographic information system (GIS) that has information on terrain. However, the information in a digital map often is not adequate for recording the type of terrain or other geographic information that affects maintenance outcomes in different locations; the reason is that a digital map is often a bit map, which is not in a form that allows manipulation of data concerning attributes of the roadway. More useful is roadway attribute data, which describes the type of terrain and other geographic features present where a section of road is located. Most if not all state agencies have a highway database containing this information.

Ideally, terrain data will be included in the attribute database of the GIS and linked to a roadway centerline. It should be possible to transfer terrain data to the database in which you will be keeping information for benchmarking. Then, when work is performed, you can associate terrain and other geographic data with the data used to measure outcomes and resource usage.

Roadway Attributes

Certain roadway attributes affect the productivity and outcomes of maintenance work. For example, the presence of shoulders makes it easier for crews to park their vehicles and work on roadside safety features such as guardrails and signs. In the absence of shoulders, work zones will probably need to be established, which requires blocking off a lane of traffic and takes time that could otherwise be spent performing maintenance work. Data concerning roadway attributes such as shoulders will be found in the agency's roadway feature inventory database.
Every state and most cities and counties will have data on the presence or absence of shoulders along various sections of road.

Frequently, this information will also be available within the agency's GIS. Like terrain data, roadway attribute data will need to be combined with information regarding outcomes and resource usage in order to support benchmarking.

OUTPUT MEASURES

The discussion so far has ignored output measures because they are not focused on what the customer gains from road maintenance. Output measures, as stated above, are used to record maintenance production—for example, the miles of pavement resurfaced per day or the number of feet of guardrail repaired. Even though output measures are not focused on the customer, you will want to add output measures to your set of outcome, resource usage, and hardship measures. There are a number of reasons to do so:

♦ A way to establish comparability. Output measures provide a means to assess the scale of activity of a benchmarking unit and therefore provide a more informed basis for comparing performance. For example, one benchmarking unit may resurface only 10 miles of pavement per year, whereas another may resurface 100 miles. These benchmarking units are not really comparable.

♦ Surrogates for outcome measures. Reliable, repeatable, accurate, and reasonable-cost outcome measures may not be available in some instances. You may want to use an output variable as a proxy for an outcome variable. For example, you may not be able to estimate the degree to which damaged guardrail replacement along a stretch of highway saves lives. Instead, you may simply use the linear feet of damaged guardrail replaced as a proxy for fatalities avoided in the rare event that a vehicle crashes into a previously damaged guardrail.

♦ Utility for productivity measurement. Even though you should remain focused on the customer, it will be important to analyze the productivity of crews and other work units. Output information is essential for analyzing productivity.
You may also want to estimate production functions that predict output as a function of labor, equipment, material, and environmental factors.
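A production function of the kind mentioned above is often given a Cobb-Douglas form; this is one common modeling choice, not something the guide prescribes. The coefficients below are hypothetical stand-ins for values an analyst would estimate by regression on daily work-report data.

```python
# Cobb-Douglas production function sketch: output = A * L^a * E^b * M^c.
# A, a, b, c are hypothetical; they would normally be estimated by fitting
# log(output) against log-inputs from daily work reports.
A, a, b, c = 0.8, 0.5, 0.3, 0.2   # scale factor and elasticities (a+b+c = 1)

def predicted_output(labor_hrs, equip_hrs, material_tons):
    """Predicted output (e.g., lane-miles patched) from resource inputs."""
    return A * labor_hrs ** a * equip_hrs ** b * material_tons ** c

# With elasticities summing to 1 (constant returns to scale),
# doubling every input should double predicted output:
base = predicted_output(40, 10, 5)
doubled = predicted_output(80, 20, 10)
print(round(doubled / base, 6))
```

Hardship factors such as weather or terrain can be added as further explanatory variables when the function is estimated.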

♦ Linkage to outcomes. Some analysts find that the most logical way to establish a measure of certain types of outcomes is to establish a functional relationship between outputs and various types of outcomes. Under this approach, output data is essential to establishing outcomes.

In preparing to benchmark, you will need to assess the role that output information will play in customer-driven benchmarking and related analysis. You will need output data and measures—even if you are focused on outcomes.

TRB's National Cooperative Highway Research Program (NCHRP) Report 511: Guide for Customer-Driven Benchmarking of Maintenance Activities provides guidance on how to evaluate and improve an agency's performance through a process called "customer-driven benchmarking." The objective of benchmarking is to identify, evaluate, and implement best practices by comparing the performance of agencies.
