The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
Volume II: Guide for Performance Measure Identification and Target Setting

Table 3 can also be used as an initial screen to filter out potential measures that may not be feasible or meaningful from an asset management perspective. This formal evaluation approach can save time by letting you focus on the most promising measures. It may also be helpful to complete the evaluation as a group, or to have a few different individuals evaluate independently and then meet to compare and discuss the results. This surfaces a broader set of concerns and draws on a larger base of knowledge relevant to understanding whether a given measure would function well.

Based on the results of your evaluation, you can narrow the set of candidate measures to those you want to move forward with. At this point it is important to clearly document the definition of each measure so that everyone understands what is being measured and how it is to be calculated and reported. In the next section, you will take these measures and look more closely at where and how they will be used in your organization. That activity will likely result in further refinement of the set of measures.

3.3 Integrating Performance Measures into the Organization

In the previous section, individual performance measures were evaluated and selected to fill gaps in coverage and decision support needs. This section takes a more detailed look at how to integrate a set of performance measures into the organization.

Step 1: Engage Stakeholders

Necessary ingredients for a successful performance measurement implementation are:

- Top management support and leadership,
- Stakeholder buy-in and commitment to use the measures,
- Integration of performance measures into existing business processes and decision-making forums, and
- Clear ownership and responsibility for each measure and its associated data and tools.
These ingredients need to be considered during performance measure selection. It is important to have active support from the people who will receive performance measure reports, who will be asked to make decisions in response to the measures, and who will be accountable for results. It is equally important to have buy-in from the people whose cooperation or resources are needed to gather or summarize the measures. Stakeholders should therefore be involved early in the performance measure development process.

Although stakeholders should be given the opportunity to participate in all stages of performance measure identification, evaluation, and implementation, the overall implementation must keep moving at a reasonable pace. A strong leader for the performance management effort is essential to ensure that the effort does not stall or become overwhelmed by the need to address too many issues at one time. The right balance of stakeholder involvement will maximize the chances that the resulting measures are used to improve decision making and are implemented in a timely fashion.

Depending on the types of performance measures being implemented, stakeholders may all be within a single agency, or they may span several agencies: for example, a state DOT and the state's metropolitan planning organizations (MPOs) in the case of congestion measures, or a state DOT, the Governor's Highway Safety Office, state police, and local jurisdictions in the case of safety measures. When stakeholders are dispersed across multiple agencies, anticipate extra effort to allow for sufficient communication to achieve buy-in.

Additional guidelines for working with stakeholders and designing performance measures to achieve buy-in are the following:

- Emphasize the use of performance measures as a means to improve system performance and quality, not as a report card or judgment on productivity or effectiveness.
- Keep the measures focused on the strategic goals and related to the activities of the agencies.
- Keep the measures focused on the customer.
- When multiple agencies are involved and consistency in performance measures can be achieved, work toward identifying mutual interests.
- Depending on the situation, use judgment to determine whether it is best to keep the number of measures limited (to keep things simple and facilitate consensus) or to pursue an expanded set that represents the interests and needs of a diverse set of stakeholders.
Step 2: Tailor Measures to Decisions

The purpose of this step is to examine your candidate performance measures in the context of decision-making processes in your agency and to design forms of the measures that are responsive to the different types and levels of decisions. The more specific you are in thinking through the activities where performance measures would be used, and the units of your organization that would use them, the better.

The activities defined in Section 3.2, Step 4 (and shown in the columns of Table 2) can provide a good starting point for identifying the various decision contexts within which performance measures are to be used. These decision contexts vary by geographic scope (network, corridor, site, or project) and timeframe (from immediate or real time to a 20-year horizon). Some categories of performance measures will be applicable to multiple geographic scopes and timeframes; for example, pavement preservation needs to be tracked at the project, corridor, subnetwork, and statewide levels, and this information is valuable for short-, medium-, and long-range decisions. Where this is the case, it is important to ensure that performance measures are consistent across different levels and timeframes; for example, the performance measures and targets established in the agency's long-range plan should be consistent with the performance criteria used to prioritize projects or activities for the program. This does not mean, however, that the specific form of the measures needs to be identical. In fact, for each type of decision, the specific form of the performance measure should be tailored to the type of decision being made (i.e., geared to the needs of the target audience or user) and defined at a level of sensitivity that allows the impact of decisions to be detected.

Table 4 provides an overview of applications of performance measures at different levels and timeframes. This table can be used as a framework for defining appropriate forms of performance measures for different types and levels of decisions.

An example of a set of related measures for pavement preservation is illustrated in Table 5. Defining groups such as these will help ensure that performance measures can be applied effectively in your agency's decision-making processes and used consistently across decision-making levels. It will also help you identify needed improvements to data access, data management, and analytical tools for automating calculations that translate performance data from one form to another (e.g., "roll-ups") or estimate future values.

Table 4. Performance Measures for Different Types and Levels of Decisions

Network
  Short- or Medium-Term:
  - Summary roll-ups of corridor or subnetwork performance
  - Evaluation of accomplishments versus targets
  - Comparisons of performance under different 3- to 6-year investment scenarios
  - Broad-based tradeoffs among modal, system, location, and program options
  Long-Term:
  - Progress toward long-term, strategic policy objectives
  - Predicted long-term conditions or needs at system or modal level (life-cycle analyses where appropriate)

Corridor
  Short- or Medium-Term:
  - Description of existing conditions to assess connectivity and consistency of corridor level of development by mode
  - Forecasts of performance for different corridor investment options
  - Assessment of options for project packaging and staging (considering coordination of detours and alternative routes and modes of travel)
  - Measures for evaluating alternative types of investments: different modal options; operational versus capacity
  Long-Term:
  - Forecasted 10- to 20-year corridor-level conditions (requires use of management systems, travel demand models, and extrapolation/estimation methods)
  - Impacts of proposed corridor investment on broader systemwide performance (consistent with Title 23 U.S.C. Sections 134 and 135)

Project
  Short- or Medium-Term:
  - Technical information required to design appropriate corrective solution
  - Prioritization criteria for selecting among candidate projects
  - Assistance with detailed project delivery planning (e.g., work zone configuration, detour routes)
  Long-Term:
  - Forecasts of performance and cost suitable for program and budget development
  - Evaluation of a wide range of transportation, environmental, and social performance impacts of major projects with long lead times for project development
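The "roll-up" translation mentioned above, from project-level detail to corridor- and network-level summaries, can be sketched in a few lines. Python is used purely for illustration; the corridor names, section lengths, and PCI values are hypothetical placeholders, not data from this guide.

```python
# Illustrative roll-up of project-level pavement condition (PCI) to
# corridor- and network-level summaries. All data are hypothetical.

sections = [
    # (corridor, length_miles, pci)
    ("I-90", 4.0, 82),
    ("I-90", 2.5, 55),
    ("US-12", 3.0, 68),
    ("US-12", 1.5, 90),
]

def length_weighted_pci(rows):
    """Average PCI weighted by section length."""
    total_length = sum(length for _, length, _ in rows)
    return sum(length * pci for _, length, pci in rows) / total_length

# Corridor-level roll-up: one length-weighted average PCI per corridor.
by_corridor = {}
for corridor, length, pci in sections:
    by_corridor.setdefault(corridor, []).append((corridor, length, pci))
corridor_pci = {name: length_weighted_pci(rows) for name, rows in by_corridor.items()}

# Network-level roll-up: a single length-weighted average over all sections.
network_pci = length_weighted_pci(sections)

print({name: round(value, 1) for name, value in corridor_pci.items()})
print(round(network_pci, 1))
```

In practice the weighting basis (lane-miles, traffic volume, replacement value) is an agency choice; the point is that the detailed project-level data drive every higher-level summary automatically.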

Table 5. Measures for Different Decisions: Pavement Preservation Example

Network
  Short- or Medium-Term:
  - Percent of mileage in poor condition (based on pavement condition index, or PCI) by system category, current and projected given currently programmed projects or expenditures
  - Percent of network resurfaced per year (versus level needed to achieve condition targets)
  Long-Term:
  - Projected percent of mileage in poor condition in 10 years for alternative funding scenarios

Corridor
  Short- or Medium-Term:
  - Average PCI
  Long-Term:
  - Projected average PCI in 10 years

Project
  Short- or Medium-Term:
  - PCI
  - Distress/cracking by type
  - International Roughness Index (IRI)

General guidance on tailoring performance measures to ensure appropriate sensitivity and usefulness at different levels is as follows:

- Use more detailed measures for project-level decisions, which can be translated (ideally in an automated fashion) into less technical and more general measures for use at corridor and network levels.
- To support network-level, short- and medium-term decisions, select measures of performance distribution (e.g., percent in good condition) rather than average performance, since substantial improvements in performance at a project or corridor level will likely have a negligible impact on networkwide averages.
- Use performance measures to identify critical infrastructure deficiencies by establishing a threshold value of a condition index, based on experience or engineering judgment about what level is serious enough to threaten structural integrity, dramatically increase user costs, or result in many customer complaints. An alternative to a condition index is to focus on one or more conditions judged critical to facility performance (e.g., pavement roughness or rut depth for pavement preservation, condition of bridge superstructure and substructure elements for bridge preservation, or congestion level for mobility).
- Economically based thresholds can also be established to signal concern about the planned level and pattern of investment (e.g., for percent of remaining asset value).
- Design measures to reflect the target scope of implemented strategies. For example, to measure impacts of intersection improvements on mobility and operational efficiency, use a measure like "time savings at intersections" rather than a more global measure such as "overall reduction in total network travel time."
- Use normalized indexes of performance measure values (0-1 or 0-100 scale) to facilitate understanding of how performance varies within the range of allowable or achievable values.
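Two of these guidelines, distribution measures and normalized indexes, can be illustrated with a short sketch. The section data, the "poor" threshold of 60, and the rut-depth scale are hypothetical assumptions for illustration only.

```python
# Sketch of a distribution measure (percent of mileage below a "poor"
# threshold) and a normalized 0-100 index. All values are hypothetical.

sections = [
    # (length_miles, pci) with PCI on a 0-100 scale
    (4.0, 82),
    (2.5, 55),
    (3.0, 68),
    (1.5, 41),
]

POOR_PCI_THRESHOLD = 60  # set by engineering judgment; agency-specific

total_miles = sum(length for length, _ in sections)
poor_miles = sum(length for length, pci in sections if pci < POOR_PCI_THRESHOLD)
percent_poor = 100.0 * poor_miles / total_miles

def normalize(value, worst, best):
    """Map a raw measure onto a 0-100 scale, where 100 is best."""
    return 100.0 * (value - worst) / (best - worst)

# Example: express rut depth (hypothetical mm values) as a 0-100 index,
# where 0 mm scores 100 and 12 mm scores 0.
rut_index = normalize(value=4.0, worst=12.0, best=0.0)

print(round(percent_poor, 1))  # share of mileage flagged as poor
print(round(rut_index, 1))
```

Note how fixing one poor section changes `percent_poor` noticeably, whereas a networkwide average PCI would barely move; this is the sensitivity argument for distribution measures at the network level.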

- Use rates (e.g., crashes per 100 million VMT, or incidents per million passenger-miles traveled, or PMT) to facilitate comparison of performance measures across different portions of the network and to allow for meaningful tracking of trends.
- Use ratios (e.g., the ratio of travel time in congested conditions to travel time in free-flow conditions, or the number of fatal accidents divided by total accidents) to put the measures into perspective and to provide useful insights. Both rates and ratios can be helpful in understanding and communicating the extent to which transportation performance can be attributed to the actions of the agency rather than to external trends beyond the agency's control.
- Use measures of agency activity or "output" in order to provide short-term feedback on planned versus actual accomplishment. However, also monitor outcomes (where feasible) for longer-term decisions, and work toward improving your agency's ability to predict the relationship between outputs and outcomes using simple models or more sophisticated analytical tools.
- Institutionalize the process of conducting before-and-after studies in order to distinguish the performance impacts of agency projects.

Step 3: Design Consistent Measures Across Program Areas

Consistency and alignment of criteria for decision making within an organization is a central concept of asset management. Developing a consistent set of performance measures will enable an agency to describe asset condition or service level for engineers, administrators, legislative bodies, and the traveling public. Performance measures that are consistently defined across the program areas responsible for different asset classes and/or functions can be extremely valuable for giving upper-level managers a high-level understanding of performance and for facilitating tradeoff analysis and target setting.
These types of measures can be defined based on the more detailed performance information shown in Appendix A and discussed in Section 3.2.

Implementing consistent measures is often challenging because it requires coordination and agreement across different units of the organization on criteria and methods for performance measurement calculations. Strong upper-management leadership and good communication among agency units are necessary ingredients for successful implementation. It may be helpful to designate a single office with specific responsibility for coordination across different parts of the organization. Alternatively, cross-functional teams can be formed and charged with defining and implementing a consistent package of measures for a given set of agency business functions. Agencywide central performance data repositories, reporting tools, and geographic information system (GIS) tools can also help support implementation of these measures. Uniformity and consistency in data are critical to support tradeoffs across asset classes and across geographic areas.

One straightforward approach to consistency in performance measurement across asset classes would be to define the following three measures for each major class of asset:

1. Percent of assets (based on quantity or value) operating at "desirable" levels,
2. Percent of assets (based on quantity or value) operating at "minimum tolerable" levels, and
3. Percent of assets (based on quantity or value) designated as "high-risk" (for structural failure, operational failure, or hazard to the traveling public), where immediate action or evaluation is needed.
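A minimal sketch of this three-tier approach follows: classify each asset by a condition index and report the percent of total asset value in each tier. The inventory, the index scale, and the threshold values are hypothetical; a real agency would set them by policy and engineering judgment.

```python
# Three-tier classification ("desirable" / "minimum tolerable" /
# "high-risk") with percent of asset value per tier. Data and
# thresholds are hypothetical placeholders.

inventory = [
    # (replacement_value_millions, condition_index on a 0-100 scale)
    (12.0, 88),
    (30.0, 74),
    (8.0, 52),
    (15.0, 31),
]

DESIRABLE_MIN = 70  # at or above: operating at "desirable" levels
TOLERABLE_MIN = 40  # at or above (but below DESIRABLE_MIN): "minimum tolerable"
                    # below TOLERABLE_MIN: "high-risk", immediate action needed

def tier(condition_index):
    if condition_index >= DESIRABLE_MIN:
        return "desirable"
    if condition_index >= TOLERABLE_MIN:
        return "minimum tolerable"
    return "high-risk"

total_value = sum(value for value, _ in inventory)
percent_by_tier = {"desirable": 0.0, "minimum tolerable": 0.0, "high-risk": 0.0}
for value, condition_index in inventory:
    percent_by_tier[tier(condition_index)] += 100.0 * value / total_value

for name, pct in percent_by_tier.items():
    print(f"{name}: {pct:.1f}% of asset value")
```

Because the same three tiers can be computed for pavements, bridges, or signals (each with its own underlying index and thresholds), the results support cross-asset tradeoff discussions without forcing a single condition scale on every asset class.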

Each of these performance measures could be defined based on threshold values for physical condition, congestion levels, crash rates, design features versus standards, and so forth. Other examples of performance measures that could be defined consistently across asset classes include:

- Percent of assets in "good" or "poor" physical condition,
- Percent of assets (based on quantity or value) in a "state of good repair" (defined based on either condition or maintenance records),
- Percent of assets with more than (or less than) X years of remaining life,
- Percent of assets that are beyond X percent of their design life,
- Percent of assets that are at the end of their economic life (i.e., maintenance and rehabilitation cost would equal or exceed the replacement cost),
- Remaining asset value (or related measures such as the ratio of remaining value to replacement value or the ratio of deteriorated value to replacement value),
- Backlog of need (where need is defined based on specified service or condition thresholds),
- Percent of target work completed or programmed (based on asset quantity, dollars, and numbers of projects),
- Customer satisfaction or utility measures derived from customer satisfaction and importance ratings, and
- User costs associated with deficiencies, or benefits associated with correcting the deficiencies (these measures would be based on available user cost models for pavement condition, bridge condition, safety, and congestion).

Step 4: Identify Improvements to Data and Tools

Successful integration of performance measures into your organization's decision-making processes will depend on the quality of the data you use and on the availability of credible analytical methods and tools for predicting performance measure values. These were identified as key screening criteria for performance measure selection in Section 3.2.
The task now is to assess current capabilities and develop a detailed plan of the data- and tool-related work required to implement the measures you have selected. As a result of this exercise, you may find that some of your selected measures are infeasible at present, or that they can be used only in a limited set of contexts (e.g., they can be tracked but not yet predicted). For the measures you intend to pursue, the objective is to identify the specific actions that will need to be taken and to ensure that budget and staff time are available for completing them. The format shown in Table 6 can be used to structure the investigation of data and tool requirements in support of performance measurement.

Table 6. Data and Tool Assessment for Performance Measurement

                                          | Current Method and Owner | Known Issues | Actions Required
Data Sources/Collection Methods           |                          |              |
Data Transformation/Measure Calculation   |                          |              |
Impact Assessment and Forecasting Tools   |                          |              |

This investigation should consider:

- Data collection methods or data sources for all of the inputs required to calculate a given performance measure;
- Methods to transform and process the data for different purposes (e.g., the assumptions and techniques used to calculate asset value from inventory, condition, and financial information); and
- Forecasting and impact assessment tools (automated or manual) for (1) predicting the value of the performance measure that would result from implementing a particular project or program strategy or from investing a given level of resources, and (2) predicting how the value of the measure (or its components) would change over time assuming no action on the part of the agency.

For each of these elements, identify:

- The current methods in use (if any) and the responsible units;
- Known issues, questions, or concerns with respect to data quality, prediction accuracy, reasonableness of tool inputs, and so forth; and
- A list of specific actions needed to ensure data quality and accuracy (both initially and on an ongoing basis), address concerns, and fill gaps in methodologies or tool sets.

In developing the list of required actions, consider the following kinds of activities:

- New data collection efforts.
- Changes in the equipment or methods used in existing efforts to obtain better accuracy, reliability, and/or timeliness. This might include investigating emerging data collection technologies that may provide better information at lower cost.
- Consolidation of existing data collection efforts. For example, instead of separate inspections for different data elements on the highway network, coordinate inspection activity so that all items are collected at one time.

- Changes to data processing procedures to improve data quality and timeliness.
- Collection/reporting of additional supporting data elements in order to better understand factors influencing performance trends that are outside of the agency's control (e.g., vehicle registrations, fuel prices, employment, population/demographics, activities of other agencies, and weather monitoring).
- New formal data quality checking procedures, including improved validation based on specific test criteria (with automation of validation checking where feasible), consistency checks across different data sources, and spot verification of inspection data.
- New initiatives to correct known data quality or consistency problems.
- Establishment of standards across different parts of the agency (or across agencies, where applicable) to ensure consistency in performance measurement calculations and predictions. Such standards could:
  - Identify "official" data sources to be used for performance measures that are calculated using system quantities (e.g., mileage by functional class), VMT, annual average daily traffic (AADT), population, employment, and other items;
  - Establish common geographic and temporal referencing methods to allow integration of performance data from different sources (geographic referencing would include linear referencing, spatial referencing, and zone systems);
  - Establish requirements for documentation of performance data so that data can be properly interpreted and consolidated, particularly for performance measurement efforts involving multiple agencies;
  - Establish parameter values to be used in economically based calculations (e.g., value of time, accident costs, and discount rate);
  - Define methodologies for calculating asset value; and
  - Establish common time horizons and base years for projections.
- Identification of major agency initiatives that are likely to impact data and tools used for performance measurement (e.g., replacement of legacy systems), and planning for smooth transitions in order to maintain or improve capabilities and ensure that tracking of trends will not be disrupted.
- Improvements to (or a new initiative to implement) performance-monitoring systems that consolidate performance information from different sources, automate calculations, and provide reporting and query capabilities suitable for different users. These improvements may include updates or enhancements to existing executive support systems.
- Identification of needs for new analytical tools (or improvements to tools already in place) to calculate or predict performance, including tools for assessing the performance impacts of planned strategies or different investment scenarios.
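Automated validation checks of the kind described above can be sketched briefly. The record layout, field names, limits, and inspection values here are hypothetical assumptions, not criteria prescribed by this guide; an agency would substitute its own test criteria.

```python
# Sketch of automated data quality checks: range tests on individual
# inspection records plus a cross-cycle consistency check.
# All fields, limits, and values are hypothetical.

records = [
    {"section": "A-001", "pci": 87, "iri": 95, "aadt": 14200},
    {"section": "A-002", "pci": 132, "iri": 80, "aadt": 9800},   # PCI out of range
    {"section": "A-003", "pci": 61, "iri": -5, "aadt": 21000},   # IRI out of range
]

# Specific test criteria: (field, minimum, maximum), set by the agency.
RANGE_CHECKS = [("pci", 0, 100), ("iri", 0, 700), ("aadt", 0, 500_000)]

def validate(record):
    """Return human-readable descriptions of any range violations."""
    issues = []
    for field, lo, hi in RANGE_CHECKS:
        value = record[field]
        if not lo <= value <= hi:
            issues.append(f"{record['section']}: {field}={value} outside [{lo}, {hi}]")
    return issues

all_issues = [issue for record in records for issue in validate(record)]

# Consistency check across inspection cycles: condition should not
# improve sharply on a section with no recorded maintenance work.
previous_pci = {"A-001": 85, "A-002": 70, "A-003": 40}
sections_with_work = set()  # sections with work recorded this cycle
for record in records:
    jump = record["pci"] - previous_pci.get(record["section"], record["pci"])
    if jump > 10 and record["section"] not in sections_with_work:
        all_issues.append(f"{record['section']}: PCI rose {jump} points with no recorded work")

for issue in all_issues:
    print(issue)
```

Checks like these are cheap to automate and run on every data delivery, leaving spot verification of inspection data for the cases the automated tests flag.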

Step 5: Design Communication Devices

Effective and timely communication of performance results to stakeholders is critically important. Communication devices need to be tailored to different audiences: external/public, agency executives, line managers, and technical staff. Once measures are selected, carefully consider how each measure will be reported, and ensure that reports match the needs of their intended users. Report formats should be designed to make the measures easily understandable (ideally using graphics). In addition, take steps to ensure the timeliness and dependability of reporting.

Internal communication of performance measures will ideally be well integrated into business planning, budgeting, and management reporting procedures. For example, regularly communicating progress toward targets at quarterly or monthly management meetings can help create a cultural shift toward more performance-based operations. In many transportation agencies, engineering staff will need education on the fundamentals of performance-based management.

Devices that agencies have used successfully for performance reporting include the following:

- Continual reports of performance, such as Washington DOT's accountability website, including the quarterly Gray Notebook: Measures, Markers, and Mileposts (www.wsdot.wa.gov/accountability/default.htm).
- Public report cards. See, for example, Virginia DOT's quarterly report card (http://www.virginiadot.org/infoservice/ctb-qtrlyrpt.asp).
- Dashboards that summarize performance in a concise, easy-to-read diagram. See, for example, the Virginia DOT Project Dashboard (http://dashboard.virginiadot.org).
- Regular performance reports linked to annual or biannual business planning and budgeting activities. See, for example, Minnesota DOT's Departmental-Level Business Plan Measures and Targets (http://www.dot.state.mn.us/dashboards/pdfs/2year.pdf).

Step 6: Document Definitions and Procedures

Credibility is essential to the success of a performance measurement initiative. People's willingness to base decisions on performance results depends on their understanding of how the measures are to be interpreted, and on their confidence that the measures were derived from accurate data, calculated using a sound technical methodology, and quality-checked to ensure that they are free from errors.

Good documentation of the performance measures is necessary to provide a common, detailed understanding of how the measures are defined, how they are calculated, and what steps are taken to ensure accuracy. This detailed documentation should be made available to all of the people responsible for producing the performance data, as well as to the consumers of the data. It can and should also provide the basis for periodic audits of performance measurement accuracy and adherence to defined procedures.