
RESOURCE PAPERS

RESOURCE PAPER

Multimodal Trade-Off Analysis for Planning and Programming

Kimberly Spence, Commonwealth of Virginia
Mary Lynn Tischer, Commonwealth of Virginia

This paper reviews existing methodologies and the state of the practice in multimodal trade-off analysis. Barriers to multimodal trade-off analysis are discussed, the types of methodologies that could be used to make trade-offs are reviewed, the means by which states and regional planning bodies are applying performance measures within the transportation planning process are presented, and finally, the activities performed in Virginia to quantify and compare projects that span transportation modes are described.

Most states and regional planning bodies have transportation performance measures. Many use them to inform planning, and some use them to allocate resources and prioritize projects. However, the amount of money spent on each mode is often determined by law or formula, and individual program categories within modal programs can be predetermined as well. As a result, project prioritization occurs within the program category rather than across categories or modes. For example, transit projects are usually prioritized relative to other transit projects and highway projects are prioritized relative to other highway projects, but the prioritization of a transit project relative to a highway project is not typically considered at the planning stage. Although it is widely recognized that a true picture of system performance and the effective use of limited monies can be obtained only by considering all modal facilities and services on a comparable basis, examples of cross-modal prioritization of potential projects are few.

Virginia began the development of the state's long-range multimodal transportation plan, known as VTrans2025, in 2001. At the direction of the state secretary of transportation, efforts were made to make the plan truly multimodal and not merely a compilation of individual modal plans. A concept for the methodology was developed to translate the policy objectives in the plan into a system for determining multimodal priorities. The concept was well received as progress toward planning and prioritizing multimodal projects at the state level and was viewed as a potential approach to allocating scarce funding for transportation in the future. Planners from Virginia's five modal transportation agencies are continuing to refine the methodology to include consideration of emerging policy issues, such as freight mobility, land use, economic vitality, and quality of life, and will apply the methodology to identify multimodal project priorities as the VTrans2025 plan is updated.

BARRIERS TO MULTIMODAL TRADE-OFF ANALYSIS

Transportation planning is carried out at state, regional, and local levels, with each level addressing the different functionalities of the constituent systems and subsystems. At all levels, deficiencies are identified and various solutions that can be used to address them are evaluated. This process of evaluating potential solutions presents an opportunity to explore the trade-offs of investing in one mode or program over another.

As Lambert noted (Jim Lambert, personal communication), in multiobjective optimization a trade-off refers to a gain in one category of performance at the expense of a loss in another area. However, the investment decision can involve a comparison of desirable solutions. Ideally, this multimodal analysis will involve the prioritization of candidate investments across multiple modes and determination of the better overall investment. In practice, such an analysis is difficult. It becomes even more difficult when the investments are not mutually exclusive and involve a combination of modes (multiobjective combinatorial optimization).

Participants in FHWA's Multimodal Trade-Offs Workshop in October 2005 noted several barriers to multimodal trade-off analysis, including the following:

• Limited flexibility in federal and state funding programs,
• Organization around individual modes,
• Lack of mode-neutral performance measures that facilitate comparisons across modes,
• A lack of data and analytical tools, and
• Politics.

A lack of flexibility in funding programs is often cited as a barrier to investing in multimodal projects. Most federal transportation funding levels are determined by formulas, and state transportation programs tend to follow federal structures. In addition, there is often a need to distribute funds among regions and between urban and rural areas in a state, which can result in additional constraints on funding. Public policy results in the development of program categories, and the legislators who create the programs generally attempt to ensure that the funds are spent to achieve particular goals. Congress wants the states to use bridge funds to fix bridges, Congestion Mitigation and Air Quality Improvement Program (CMAQ) funds to clean the air, and so on. Although there is flexibility to shift some funds from one mode to another or among program categories, the lack of adequate funding overall is generally used as a reason to limit the flexible shifting of monies among the modes and programs. When funding categories are fixed, there is little reason to prioritize projects across modes or program categories.

The organization of state transportation planning functions often mirrors that at the federal level. Transportation planning is typically compartmentalized by mode. Young et al. (2002) suggest that "each modal division tends to define benefits in a way that focuses on that mode's particular strengths." The planning and implementation of multimodal projects are made more difficult by the complex and cumbersome process of coordinating the efforts of multiple departments and agencies. As a result, multimodal plans tend to be an aggregation of individual modal plans rather than plans that result from an integrated analysis of a multimodal transportation system.

One key issue is that performance data are more readily available for some modes than for others. Similarly, data are available at various levels of geographic scale, and it is difficult to obtain consistency statewide. Most states do not collect data at the levels of detail and geographic scale necessary to facilitate comparable evaluations of multiple modes at the long-range planning level. The tools used to evaluate the impacts of transportation at the statewide level tend to be highway oriented and lack sufficient detail for the simultaneous evaluation of improvements to the transit or pedestrian mode.
More and more often, decision makers at all levels identify specific projects for funding without the benefit of an analysis of the trade-offs associated with alternative improvements. This may be an attempt to be more responsive to constituents and streamline what can be a lengthy process. It may also reflect a desire to ensure that each mode and geographic region receives some share of the available funding. A total reliance on performance-based planning and programming processes can reduce this flexibility and can be perceived unfavorably by decision makers. However, such processes can also provide a technical basis from which to defend decisions regarding the allocation of scarce resources.

STATE OF THE ART

Ideally, one would want to compare modes early in the planning process. That can be accomplished by the use of mode-neutral performance measures; a methodology such as benefit–cost analysis that reduces noncomparable impacts to a single ratio; or other approaches, such as goal achievement, which uses comparable metrics even if the measures are not the same. Each approach has a number of variations.

Mode-Neutral Approaches

Mode-neutral approaches facilitate the comparison of competing modes and permit an unbiased assessment of modal alternatives. An example of a mode-neutral measure is person miles of travel, which addresses travel without regard to vehicle type, in contrast to the more often used vehicle miles of travel, which reflects motor vehicle usage. However, it is not always easy to find measures that are not dependent on a particular mode or program category, and not everything can be measured in the same way. For example, accessibility to the automobile may be appropriately captured by automobile ownership, whereas access to transit might be determined by the distance to a transit stop.

Additionally, there is often a different geographic scale to the modal analysis. The ability to take transit decreases as one goes from the local to the regional to the state level. Thus, the use of mode-specific measures may limit the objectives that are addressed.

Cambridge Systematics proposed a conceptual framework for assessing multimodal trade-offs in statewide transportation planning [NCHRP Project 8-36, Task 7 (Cambridge Systematics, Inc., 2001)]. It suggests that there are two dimensions in which trade-offs can be assessed: the vertical dimension, in which trade-offs are evaluated within a single program or mode, and the horizontal dimension, in which trade-offs are evaluated across multiple programs or modes. Program- or mode-specific objectives and criteria would be identified to facilitate the evaluation of trade-offs within the vertical dimension (e.g., the prioritization of maintenance projects). Goals and objectives provide the mechanism by which trade-offs may be assessed in the horizontal dimension (e.g., choosing between a transit project and a highway improvement). This two-dimensional framework recognizes that the same performance objectives generally cannot be applied to every mode or program.

A similar two-dimensional framework for coordinated multimodal and modal prioritization was described by Lambert et al. (2007) and the Virginia DOT (2004). In the latter case, each modal agency advances its project into multimodal consideration when (a) one mode is dependent on another to be successful (e.g., a bus needs a road); (b) the project would substitute for another mode (e.g., a rail line versus a road); (c) the project connects two or more modes (e.g., a road connects to an airport); or (d) the project is multimodal by definition, such as high-occupancy vehicle (HOV) lanes. Once the multimodal decision is made and the appropriate project is determined, it is fed back into each of the modal plans. In this framework, the existing modal plans would not be replaced; however, a separate evaluation process would take place for multimodal priorities.

Benefit–Cost Analyses

One tool often used to compare alternative solutions is benefit–cost analysis, which generates a single ratio of monetized discounted benefits to monetized discounted costs for each project. The ratio of benefits to costs determines the relative value of the project. The analysis involves the addition of all the discounted costs of a project or program, the addition of all the discounted benefits, and then comparison of the costs and benefits to choose the project with the best ratio. The concept is simple and has the advantage of leveling the playing field by converting disparate impacts to a common monetized or efficiency metric. However, in practice, making this conversion is not always straightforward. It can be data intensive if a large number of factors are considered, and it requires consensus on which factors are to be considered and their monetary values. Quantification of these factors often requires nonrepeatable value judgments, and there can be significant variability in the estimates of factors such as environmental and quality-of-life considerations. In addition, as Hill (1973) notes, "When costs and benefits are not available in market prices, the cost–benefit model imputes them as if they were subject to market transactions." In other words, the attribution of some benefits may be arbitrary.
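The arithmetic behind a benefit–cost comparison can be summarized in a few lines. The sketch below is a minimal, hypothetical illustration in Python, not any agency's model; the discount rate, analysis period, and dollar streams are invented for the example, and real applications would monetize many more impact categories.

```python
def present_value(flows, rate):
    """Discount a stream of annual values (year 0, 1, 2, ...) to present value."""
    return sum(v / (1.0 + rate) ** t for t, v in enumerate(flows))

def bc_ratio(benefits, costs, rate=0.04):
    """Benefit-cost ratio: discounted benefits divided by discounted costs."""
    return present_value(benefits, rate) / present_value(costs, rate)

def net_benefit(benefits, costs, rate=0.04):
    """Net benefit: discounted benefits minus discounted costs."""
    return present_value(benefits, rate) - present_value(costs, rate)

# Hypothetical 5-year streams in millions of dollars for two candidate projects.
candidates = {
    "transit project": ([0, 4, 4, 4, 4], [8, 1, 1, 1, 1]),
    "highway project": ([0, 12, 12, 12, 12], [30, 2, 2, 2, 2]),
}
for name, (benefits, costs) in candidates.items():
    print(name, round(bc_ratio(benefits, costs), 2), round(net_benefit(benefits, costs), 1))
```

In this constructed example the smaller project has the higher ratio while the larger one has the higher net benefit, which anticipates the point made next about reporting net benefits alongside the ratio.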
Lambert and Joshi (2006) suggest the use of net benefits, or the difference between benefits and costs, in addition to the benefit–cost ratio. They note that "a high-cost project with a relatively lower benefit-to-cost ratio might be preferred to a low-cost project with a relatively higher benefit-to-cost ratio and that an opportunity to invest more in order to achieve more benefits can be masked by presentation of the benefit-to-cost ratio alone."

Several notable examples of statewide benefit–cost models exist. Virginia's Rail Enhancement Fund benefit–cost model is a project-level analysis tool. It is used to evaluate rail projects and can be used to compare passenger rail and freight rail proposals. The U.S. Department of Transportation's (USDOT's) Highway Economic Requirements System (HERS) and Highway Economic Requirements System–State Version (HERS-ST) models provide national and statewide analysis capabilities. The application of the HERS model at the level necessary to estimate multimodal trade-offs would require a significant data collection effort, however, since it currently addresses only highways. USDOT has also developed a benefit–cost approach to the evaluation of bridge improvements (the National Bridge Investment Analysis System and the National Bridge Inventory) and transit improvements (the Transit Economic Requirements Model and the National Transit Database), but these approaches can be used to evaluate only the projects within their respective programs. To make the transit analysis comparable to the highway analysis in the HERS model, the user would need to have or would need to collect similarly detailed transit data.

Cost-Effectiveness Models

Cost-effectiveness models seek to measure how closely a given project corresponds to a predefined goal, such as performance, in relation to its cost. Objectives or outcomes are identified, and the costs required to achieve each are compared.

Like many multimodal trade-off analysis tools, cost-effectiveness models provide decision makers with useful information about the relative preferability of one solution over another rather than identifying the single best solution. As in benefit–cost analysis, the complex impacts of transportation improvements are reduced to monetary values; however, rather than addressing benefits, cost-effectiveness analysis compares the degree to which goals and objectives are met relative to the cost required to do so. FTA evaluates new transit proposals using a cost-effectiveness index of cost per new transit rider. The Hampton Roads Planning District Commission in Virginia evaluates CMAQ projects across modes on the basis of the cost per ton of emissions reduced. This approach works best when fewer objectives are associated with the decision.

Least-Cost Planning Approaches

Least-cost planning is an approach that identifies the lowest-cost project that meets the performance goal. The definitions and approaches vary. Mozer (1993) suggests that the goal is "to minimize the total societal cost of meeting service needs." Mozer defines societal costs to include "all of the costs associated with constructing and operating a resource over its entire life, including environmental costs such as the health effects of air, noise and water pollution, any waste disposal and demolition cost." Conceptually, more strategies can be considered, and transportation demand management or transit projects can be placed on the same footing as a major highway construction project (Victoria Transport Policy Institute, 2007). Washington State has legislation requiring least-cost planning, and it has been implemented in regional plans. The Puget Sound Regional Council has implemented various approaches built around benefit–cost analysis of system alternatives. It has developed a series of performance measures that it uses to prioritize highway, HOV, and transit (including ferry) projects as part of its Congestion Management Plan.

Multicriteria and Goals Achievement Analyses

Ideally, the analytical tool or process used should permit the analysis of all modes simultaneously so that the trade-offs between solutions for multiple modes can be evaluated adequately (Fontaine and Miller, 2002). In practice, conventional benefit–cost analysis focuses on the evaluation of a single investment scenario at a time. In contrast, a multicriteria analysis evaluates several alternatives over a common set of evaluation objectives and comparatively ranks the alternatives [NCHRP Project 20-92(2)]. The framework begins with objectives and the corresponding indicators, which can be weighted to arrive at a project score and overall ranking (Bristow and Nellthorp, 2000). This type of analysis, also called goals achievement (Hill, 1973), permits the linkage of indicators or metrics to a set of goals or objectives that define a desired outcome. Objectives are associated with metrics that measure the degree to which a given improvement meets broader goals.

Transportation Decision Analysis software (TransDec) is a tool that can be used to quantify the degree to which a project meets performance objectives. The use of TransDec involves the identification of transportation policy goals, objectives, and performance measures; the assignment of a 10-point scale to each objective's measure; the identification of investment alternatives; the attachment of a weight to each of the objectives; normalization of the data; and the identification of project rankings.
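A goals-achievement calculation of this kind reduces to a small amount of arithmetic once scores and weights are in hand. The following Python sketch is a generic illustration of the weight-normalize-rank steps just described; it is not TransDec itself, and the goal names, weights, and scores are hypothetical.

```python
# Hypothetical objective weights (summing to 1.0) and raw 0-10 scores per project.
weights = {"mobility": 0.35, "safety": 0.30, "economy": 0.20, "environment": 0.15}

projects = {
    "transit extension": {"mobility": 8, "safety": 5, "economy": 6, "environment": 9},
    "highway widening":  {"mobility": 9, "safety": 6, "economy": 7, "environment": 3},
    "bike network":      {"mobility": 4, "safety": 7, "economy": 3, "environment": 10},
}

def composite(scores, weights, scale=10.0):
    """Normalize each 0-10 score to 0-1, weight it, and sum to a single index."""
    return sum(weights[obj] * (scores[obj] / scale) for obj in weights)

ranking = sorted(projects, key=lambda p: composite(projects[p], weights), reverse=True)
for p in ranking:
    print(p, round(composite(projects[p], weights), 3))
```

Changing the weight vector (for example, to reflect different stakeholder priorities) can reorder the ranking, which is why the question of how weights are determined, taken up next, matters.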
Various methods can be used to determine the weighting scheme, including the use of expert panels or surveys (Virginia DOT, 2004).

The Multimodal Investment Choice Analysis (MICA) model, which was developed for the Washington State DOT, was a hybrid of the benefit–cost and multicriteria analysis methodologies. The model measures the performance of projects relative to particular metrics and ranks projects on the basis of the weights assigned to the metrics to determine the optimal set of projects for a given funding level and policy scenario. As Young et al. (2002) have noted, "By using the outcome objective score, the user can prioritize spending on projects that may not be the most cost-effective in terms of traditional benefit–cost values but that may address a particular [state] concern." Attempts to use the MICA model for transportation prioritization in Washington State have not been successful to date.

Regional and Project-Level Evaluations

Numerous analysis tools are available for the project-level evaluation of multimodal alternatives. USDOT developed a corridor analysis tool called the Sketch Planning Analysis Spreadsheet Model (SPASM) to help evaluate demand management strategies and multimodal improvements. As a sketch planning tool, SPASM is not well suited for the detailed analysis of multimodal alternatives or for systemwide use. A more robust version of SPASM called the Surface Transportation Efficiency Analysis Model (STEAM) was developed to facilitate systemwide analysis and the detailed evaluation of alternatives. STEAM is typically used with the results of a regional travel demand model to convert benefits and impacts to dollar values to facilitate comparison (DeCorla-Souza and Hunt, 1998).

Microsimulation tools such as VISSIM can be used to model project-level impacts, and although such models permit the evaluation of motorized and nonmotorized traffic, most examples are highway oriented. The Real Accessibility Index is another tool used to measure multimodal accessibility at the community level, but it requires the collection of significant amounts of data. The Highway Economic Analysis Tool (HEAT), an analysis package developed by Cambridge Systematics, has also been used to evaluate the potential economic benefits and costs of highway improvements in an objective, consistent, efficient, and accurate way.

STATE OF THE PRACTICE

Two major products of the transportation planning process are the long-range plan (LRP) and the transportation improvement program (TIP). Most states and regional planning bodies engage in some level of performance-based planning through the development of a vision, goals, and objectives in the long-range plan, and many identify and use performance measures to examine the transportation system and identify areas of deficiency. Few use performance measures to prioritize projects for the program, and those that do generally use them within modal and program categories.

Long-Range Planning

Bremmer et al. (2004) have described a generational model of performance management that reflects three levels of increasing maturity and sophistication in the states' application of performance measures. In each state, the process is evolutionary, and as the planners and decision makers become more experienced with the concepts, they expand to the use of new, nontraditional measures and more integrated planning and programming practices.

States that prepare LRPs based on performance measures include the following:

• Florida identifies multimodal goals and objectives in the Statewide Transportation Plan (STP), but the allocation of resources among the various program categories is determined primarily by formula. To choose projects for programming, it uses a decision-support tool that uses the goals and measures but not in a quantifiable way. Additionally, Florida has been a leader in developing and applying level-of-service (LOS) methodologies for each mode that could be used as a way to compare modes. However, an LOS of C, for example, does not mean the same thing across modes and considers only one factor. Winters et al. (2001) suggest the use of a method, referred to as the slide rule, that makes the LOS ratings more comparable. They also suggest weighting the LOS by volume, cost, corridor, or location.
• The Intermodal Transportation Plan in Idaho identifies performance measures and reports progress toward the achievement of modal plans.
• New Mexico's Good to Great document outlines goals, targets, and performance measures.
• Tennessee uses report cards of measures and targets from the Tennessee DOT's Plan GO.
• Alaska, Arizona, California, Georgia, Idaho, Kentucky, Louisiana, Maine, Maryland, Michigan, Minnesota, Missouri, Montana, Nebraska, North Carolina, North Dakota, Ohio, Oregon, Pennsylvania, Texas, Utah, Virginia, and Washington, among others, develop performance measures as part of their long-range planning processes.

Several states and regions use long-range goals and measures to monitor the condition of the transportation system on a periodic basis. Most performance reports include information about all modes. Examples include but are not limited to the following:

• Alaska, California, and Florida provide performance review reports.
• Maryland publishes the Attainment Report on Transportation System Performance.
• Maine reports on the state of the system using performance measures for each mode.
• In Washington State, the STP outlines transportation goals and objectives for the entire state and provides policy guidance for transportation investments in the areas of preservation, safety, economic vitality, mobility, environmental quality, and health. Washington State's Measures, Markers, and Mileposts (also called the Gray Notebook) contains data on a large number of performance measures and is a notable example of statewide performance measurement.
• Virginia published Virginia's Transportation Performance Report—2006 and updates the report annually.
• The Wilmington, Delaware, Area Planning Council produces a regional progress report to summarize efforts undertaken to fulfill the goals set out in the Regional Transportation Plan (RTP). Performance indicators are identified for each goal and objective to determine which aspects of the plan are moving in the right direction, as well as those that need attention.
• The Metropolitan Washington, D.C., Council of Governments publishes a report on the results of a regional state-of-the-commute survey. The survey documents trends in commuting behavior, such as commute mode shares and distance traveled, and prevalent attitudes about specific transportation services that are available to commuters in the region. The survey also helps examine how other commute alternative programs and marketing efforts are influencing commuting behavior in the region.
• The Metropolitan Transportation Commission in the San Francisco, California, Bay Area reports on the state of the system annually. The report summarizes the performance of the Bay Area transportation system for freeways, local roadways, transit, goods movement, and bicycle and pedestrian travel.
• The Southern California Association of Governments (SCAG) uses its State of the Region report to track on an annual basis the region's progress in achieving the goals in SCAG's Regional Comprehensive Plan and Guide. It uses a set of performance indicators to compare the region's recent performance with its own previous record and that of the other large U.S. metropolitan regions.
• The North Central Texas Council of Governments publishes a report called Transportation: State of the Region to provide a summary of the transportation system's performance in the Dallas–Fort Worth area.

Several states provide the performance measures and describe the system on the Internet. Virginia and Minnesota have dashboards, Missouri reports on the performance of the Missouri DOT and 18 desired outcomes on its Tracker system, and Nebraska uses its performance measures to monitor the system and reports online.

Project Lists

Most state LRPs are vision plans; few include specific projects. However, Arizona's MOVEAZ plan provides a list of projects selected through the use of performance measures. Bundles of smaller projects were evaluated as well, and although alternatives to highways were discussed in the plan, modal assessment was done separately. Utah also provides a list of capacity projects outside the urbanized areas in its 2007 to 2030 Long-Range Transportation Plan.

Regional planning bodies prioritize projects in various ways. The Hampton Roads, Virginia, Planning District Commission uses performance measures to analyze projects within categories for Regional Surface Transportation Program funding. The categories include bicycle and pedestrian, transit and transportation demand management, signal system integration or retiming, and other [intelligent transportation systems (ITS), signage, park-and-ride lots]. The Capital District Transportation Committee (CDTC) in Albany, New York, does much the same by evaluating projects in 17 categories (e.g., highway operations, ITS capital investment, stand-alone goods movement actions, intermodal facility capital investment, and transit). However, CDTC states that the New Visions plan established new CDTC policy regarding planning and investment: transportation investment is based on function and need, not upon facility ownership. This results in an agreement to put all funds [National Highway System (NHS), CMAQ, STP] "on the table"; the best projects are selected according to CDTC investment strategy, and then money is assigned. This is noticeably different from how most MPOs (metropolitan planning organizations) approach the TIP or LRP: normally, federal fund type determines project selection [e.g., NYSDOT (New York State Department of Transportation) owned facilities compete against themselves for NHS funding, and the locally owned facilities compete against each other for STP funding].
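The contrast between fund-type-first and project-first programming that CDTC describes can be made concrete with a small sketch. The Python below is a hypothetical illustration, not CDTC's actual procedure: projects are ranked on a single merit score regardless of fund source, and money is then assigned greedily from whichever eligible fund still has a balance.

```python
# Hypothetical fund balances (millions) and candidate projects with merit scores.
funds = {"NHS": 40.0, "STP": 25.0, "CMAQ": 10.0}

projects = [
    # (name, cost, merit score, funds the project is eligible for)
    ("arterial signal retiming", 6.0, 92, ["CMAQ", "STP"]),
    ("NHS bridge rehab",        30.0, 88, ["NHS"]),
    ("bus rapid transit leg",   18.0, 85, ["CMAQ", "STP"]),
    ("local road widening",     20.0, 70, ["STP"]),
]

selected = []
for name, cost, score, eligible in sorted(projects, key=lambda p: p[2], reverse=True):
    # Take the best projects first, then find any eligible fund that can still pay.
    source = next((f for f in eligible if funds[f] >= cost), None)
    if source:
        funds[source] -= cost
        selected.append((name, source))

print(selected)  # highest-merit projects funded from whatever eligible source remains
```

Under the more common fund-type-first approach, each project would compete only within its own fund, and a lower-scoring project could displace a higher-scoring one simply because of where the money sits.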
The Atlanta, Georgia, Regional Commission bases project selection on criteria that include cost-effectiveness (reductions in the cost of delay and wasted fuel), safety, congestion (intensity, duration, and extent), support for the regional plan, regional equity, and project status. Other factors used to rank projects include environmental, demographic, historic, and land use impacts.

Use of Trade-off Methods in Programming

Many states, for example, Arizona, Arkansas, California, Iowa, Indiana, Louisiana, New Mexico, North Dakota, and Oregon, among others, use the HERS-ST model to identify highway projects. Utah develops cost–benefit ratios using its asset management model. Additionally, the model develops treatment plans and recommended budget splits for the asset groups. These budgets are then applied within each asset management system, and a 10-year list of projects is generated. The projects are then harmonized to ensure that, for example, a road improvement and a bridge project are treated holistically if they are on the same segment. Georgia uses HEAT to determine the costs and benefits of highway projects and to test scenarios. It also identifies build–no-build scenarios in which the build alternative is defined as full funding for each mode.

Most states assume a funding level for each program and mode and then prioritize within those levels. Although few states use performance measures or the benefit–cost methodology to program projects, Montana used a trade-off analysis within categories (i.e., district, system, and type of work). For a project to get funded, it must contribute to the performance goals of the overall transportation system. In Montana's Performance Programming Process (called P3), individual projects are nominated for funding by the districts and must support the overall vision and performance goals established in the STP. Funding levels are tied to performance.

Multimodal Analyses

The status and sophistication of multimodal planning among the states have been the subjects of a number of recent surveys (Transmanagement, Inc., 1998; Peyrebrune, 1999; Fontaine and Miller, 2002; AASHTO, 2006; AASHTO, 2007; Roerden, 2007; Lambert et al., 2006). All show that planning methods that attempt to identify trade-offs between the modes are not well developed.

The planning process in Minnesota focuses on trade-offs within program categories by using metrics such as bridge sufficiency ratings and pavement serviceability ratings. A statewide vision is determined in the Statewide Strategic Plan as well as by the use of performance measures and targets for its implementation. Most goals and measures reflect the characteristics of highways, although one of the 10 goals is to provide cost-effective transportation options for people and freight. In the programming process, each district identifies investment priorities on the basis of the performance measures and targets in the plan.

Oregon uses its STP to provide the framework for prioritizing investments across all modes. Management systems developed for pavement, bridges, congestion, public transportation, safety, and other elements assist with the establishment of investment decisions at the modal level. A prioritization system based on benefit–cost assessment was developed as part of the state's Intermodal Management System. The system permits the analysis of trade-offs in terms of dollar value and system performance between 10 intermodal facility types (i.e., bus station, rail station, air passenger terminal, marine terminal, rail truck facility, grain reload facility, petroleum terminal, truck terminal, air cargo facility, and connector and mainline roadways) (Merkhofer et al., 1997). Washington State compares costs and benefits within each funding program and project type, as noted above. However, the mobility program includes bicycle and HOV improvements.

Several approaches to implementing multicriteria and goal-achievement analysis for project selection are used, including the following:

• Delaware identified 10 factors related to three long-range goals. Roadway projects are scored by using a scale of 5, 3, 0, −3, and −5 and are ranked within pools of similar projects. Transit, bicycle, and pedestrian projects are scored separately. The projects are compared to determine which ones best meet the goals.
• Ohio essentially evaluates highway projects but gives additional points if a project expands connections to water ports, airports, rail, transit, or train facilities; increases unique multimodal aspects; supports reinvestment in an urban core; or helps a city retain existing jobs (i.e., urban revitalization). The capital costs of ITS projects are also evaluated. The total number of points that a project can obtain is 130, of which up to 30 are bonus points.
• In New Jersey, projects are evaluated for inclusion in the state's Capital Investment Strategy (CIS) on the basis of the degree to which the project satisfies the long-range goals of the strategy. The CIS uses specific performance measures to calculate the achievement of the capital program against annual target allocations for each investment objective. Performance measurement and management system data (for bridges, pavement, safety, congestion, etc.) are used to link the projects selected for capital funding to broad program objectives. Performance analyses are developed to evaluate how well the present and the proposed capital programs meet the objectives. Nine program categories are used to evaluate the projects.
• The North Jersey Transportation Planning Authority provides a ranked listing of projects for inclusion in the transportation improvement program using the six goals of the RTP. Numerical scores are assigned on the basis of the degree to which the project satisfies the goals. The maximum total scores that projects can receive are 850 points for transit projects and 825 for all other projects.
• Michigan identified needs by categories (which included multimodal project preservation and multimodal project expansion), identified unmet needs, and evaluated four funding scenarios to determine the best set of projects that should be funded to meet the goals. The scenarios included the same funding shares, the same overall funding level but a reallocation of shares to increase funding for multimodal projects, a 16% increase in funding, and a more significant increase in funding.
• Oregon employs a traditional four-step model to generate nontraditional measures. It estimates access to activity centers on the basis of the number of attractions that are available by automobile and, separately, by transit. The University of Minnesota is engaged in counting attractions that are accessible in Minneapolis–St. Paul. The Northern Virginia Transportation Authority performed a similar analysis, which is described in its TransAction 2030 report.

Berechman and Paaswell (2005) developed transportation and economic development benefits and costs to score several projects in New York City and ranked them by using a goal-achievement matrix.

When a large number of alternatives need to be analyzed, it makes sense to use some process to reduce the number of projects to be evaluated. One way to do this is to evaluate the alternatives on a modal basis and further analyze those that rank the highest. Stuart and Weber (1977) used a travel demand model (and, depending on what was being analyzed, other performance measures) to evaluate the effects of improvements resulting from alternative multimodal service combinations. Multiple computer model runs were used to evaluate the impact of improvements to one mode at a time. The highest-ranking modal projects were then evaluated on the basis of additional criteria. A simple scoring mechanism could also be used to reduce the number of projects to be evaluated. Determination of the high-level impacts of projects on the basis of key criteria could facilitate the identification of project groupings, assuming that the impacts are independent. These impacts could then be analyzed in more detail to determine their collective impact.

These tiered approaches have many variations. For example, Khasnabis et al. (2002) evaluated two approaches. The first was an analytic hierarchy process, in which alternatives were ranked by individual (mostly quantitative) performance measures (e.g., the number of passengers per hour) on the basis of the weighted quantitative score for the measure. They were ranked again on the basis of the number of measures on which they ranked highly. Alternatively, the authors evaluated a simplified goals achievement technique in which the alternative with the highest score for a particular performance measure was assigned a score of 100 and the other alternatives were normalized accordingly. The alternatives were then weighted and ranked. The authors found that the ratings resulting from the two approaches were not significantly different, but they concluded that the former approach had a stronger mathematical basis.

Whether the trade-off analysis attempts to monetize impacts for benefit–cost analysis or assigns scores to capture the degree to which a project meets predetermined goals, the factors are measured differently and the scales have different meanings. It is important for the decision maker to understand the somewhat subjective nature of the comparisons and to apply the results of the trade-off analysis accordingly. The use of a goal-achievement process can be more transparent, as scores and rankings for each measure can be easily summarized and understood.
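The simplified goals achievement normalization that Khasnabis et al. describe is easy to express directly. The Python sketch below is a hypothetical illustration of that rescaling and weighting step, with invented measures and values rather than the authors' data; the inversion used for cost-type measures, where lower is better, is one possible convention and an assumption of this sketch.

```python
# Raw performance of three alternatives on two measures (hypothetical values).
alternatives = {
    "express bus": {"passengers_per_hour": 1200, "cost_per_rider": 4.0},
    "light rail":  {"passengers_per_hour": 2000, "cost_per_rider": 6.5},
    "hov lane":    {"passengers_per_hour": 1500, "cost_per_rider": 3.0},
}
weights = {"passengers_per_hour": 0.6, "cost_per_rider": 0.4}
higher_is_better = {"passengers_per_hour": True, "cost_per_rider": False}

def normalized(measure):
    """Give the best alternative a score of 100 and scale the others relative to it."""
    values = {name: alt[measure] for name, alt in alternatives.items()}
    if higher_is_better[measure]:
        best = max(values.values())
        return {name: 100.0 * v / best for name, v in values.items()}
    best = min(values.values())   # for cost-type measures, the lowest value is best
    return {name: 100.0 * best / v for name, v in values.items()}

scores = {name: 0.0 for name in alternatives}
for measure, weight in weights.items():
    for name, value in normalized(measure).items():
        scores[name] += weight * value

for name in sorted(scores, key=scores.get, reverse=True):
    print(name, round(scores[name], 1))
```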

THE VIRGINIA APPROACH

A major element of Virginia's long-range transportation plan, known as VTrans2025, is the concept of the statewide multimodal corridor. As in Florida, North Carolina, and Pennsylvania, multimodal corridors of statewide interest are identified. The corridors, shown in Figure 1, are major conduits for the movement of passengers and goods, include multiple modes, connect major activity centers or regions, and support the goals of the commonwealth (e.g., tourism and economic prosperity). The purpose of focusing statewide planning around these corridors is to ensure that statewide resources are directed to those corridors that serve statewide needs.

[FIGURE 1 Virginia statewide multimodal corridors.]

The goals identified in VTrans2025 were developed through extensive outreach to the public and stakeholders and serve as the basis for the objective, performance-based criteria used to rate projects. The degree to which projects meet these goals influences funding priorities. The system serves as a decision-support tool by providing a list of investment options for decision makers that is based on objective, performance-based criteria.

Planners from each modal agency worked with representatives of regional planning bodies and others to identify the corridors as well as the multimodal performance measures that would be used to facilitate evaluation of the system and potential improvements. Performance measures were developed for each of the five goals identified in the plan to ensure a link between the long-range vision described in the plan and project identification and prioritization. Deficiencies in the system were identified, and a goals achievement matrix was developed to evaluate quantitatively the projects within the corridors according to the performance measures. Initially, ratings for each measure were based on whether the project had a positive impact on the measure (+1), a negative impact on the measure (−1), or no impact (0). Various weighting systems were also considered to reflect the different policy priorities shown in Table 1. These were defined from the responses to a telephone survey and by an expert panel. The stakeholder feedback weights were based on a survey of 1,200 Virginians that examined public opinions, attitudes, and values about transportation by focusing on alternative visions for the transportation system and the relative importance of the long-range goals.

TABLE 1 Sample Weighting Factors for Transportation Planning Goals

Criterion                                 Expert Panel   Stakeholder Feedback   Average
Safety and security                       30             16                     23
Preservation and management               10             20                     15
Efficient movement of people and goods    30             28                     29
Economic vitality                         15             21                     18
Quality of life                           15             15                     15
Total                                     100            100                    100

TABLE 2 Excerpt of VTrans2025 Goal-Achievement Score Sheet

Goal 3. Facilitate the efficient movement of people and goods and expand choices and improve interconnectivity of all transportation modes (20%).

Factor 3.1. Mobility (33%)
• Objective 3.1.a. Reduce congestion (33%). Measure: Does the project reduce congestion in terms of the volume-to-capacity (V/C) ratio, level of service (LOS), and/or travel time? Score: 1
• Objective 3.1.b. Provide mode/route choice for all people and goods (33%). Measure: Does the project provide mode/route choice for all people and goods? Score: 1
• Objective 3.1.c. Increase capacity for the movement of people and goods (33%). Measure: Does the project increase capacity in terms of tons of freight moved, 20-ft equivalent units (TEUs), and/or person trips? Score: 1

Factor 3.2. Accessibility (33%)
• Objective 3.2.a. Improve access to major activity centers (50%). Measure: Does the project improve access to major activity centers in terms of the number of modes serving the activity center, frequency of service, and/or barriers removed? Score: 1
• Objective 3.2.b. Improve accessibility of transportation services or facilities (50%). Measure: Does the project improve accessibility of transportation services or facilities in terms of the number of mode choices for people and goods in the corridor, the cost per trip, and/or the cost per ton mile? Score: 1

Factor 3.3. System connectivity (33%)
• Objective 3.3.a. Provide seamless connectivity between modes (33%). Measure: Does the project reduce transfer time between modes, reduce travel time to the main line or hub of the network, and/or increase the number of modal connections? Score: 0
• Objective 3.3.b. Provide interconnected networks that facilitate the "complete journey" (e.g., origin to destination and all connections between) (33%). Measure: Does the project provide system continuity? Score: 1

Factor 3.4. Reliability
• Objective 3.4.a. Provide transportation services, facilities, and information that improve predictability and reliability (33%). Measure: Does the project improve on-time performance of modes and/or reduce travel time variability? Score: 0
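A minimal sketch of how +1/0/−1 ratings and the Table 1 goal weights could be combined into a single composite score is shown below. It illustrates the general calculation rather than the VTrans2025 tool itself; the project ratings and the tier cutoffs are invented for the example, while the weights are taken from the expert-panel column of Table 1.

```python
# Expert-panel weights from Table 1 (out of 100).
goal_weights = {
    "safety and security": 30,
    "preservation and management": 10,
    "efficient movement of people and goods": 30,
    "economic vitality": 15,
    "quality of life": 15,
}

# Hypothetical +1 / 0 / -1 ratings of two candidate projects against each goal.
ratings = {
    "corridor transit service": {"safety and security": 1, "preservation and management": 0,
                                 "efficient movement of people and goods": 1,
                                 "economic vitality": 1, "quality of life": 1},
    "interchange rebuild":      {"safety and security": 1, "preservation and management": 1,
                                 "efficient movement of people and goods": 1,
                                 "economic vitality": 0, "quality of life": -1},
}

def composite(project):
    """Weighted sum of the +1/0/-1 goal ratings, on a -100 to +100 scale."""
    return sum(goal_weights[g] * ratings[project][g] for g in goal_weights)

def tier(score):
    """Sort composite scores into priority tiers (cutoffs are illustrative only)."""
    return "immediate" if score >= 70 else "midrange" if score >= 40 else "long-range"

for project in ratings:
    s = composite(project)
    print(project, s, tier(s))
```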

some cases, the relative importance of the goals defined by stakeholder feedback was different from that defined by an expert panel. Equal weighting and an average were also evaluated. Weighted scores were summed to generate a composite score; and projects were sorted into tiers of immediate, midrange, and long- range priorities. Table 2 shows a portion of a sample score sheet. The use of scores of 1, 0, and 1 ignores informa- tion useful for the differentiation of projects because the degree to which a project meets the goal is not consid- ered. However, it can be used as a screening device to reduce to a manageable number the number of projects for which further analysis is required. Although multimodal prioritization was put on hold, the highway programming process benefited from the long- range planning effort. The goals and performance measures developed in the plan were used to prioritize more than 1,000 highway construction projects. Multi- 118 U.S. AND INTERNATIONAL APPROACHES TO PERFORMANCE MEASUREMENT TABLE 3 Virginia Highway Project Prioritization Matrix Planning Factor Planning Objective Measure Goal 1. Provide a transportation system that facilitates the efficient movement of people and goods. Mobility Reduce congestion Current-day LOS Current-day volume-to-capacity ratio Maximize benefits for the greatest Current-day passenger car equivalents (both directions) number of users Goal 2. Provide a safe and secure transportation system. Safety Improve safety for roadway users Crash rate Goal 3. Improve Virginia’s economic vitality and provide access to economic opportunities for all Virginians. Economic development Enhance the movement of goods Average daily volume of trucks throughout the commonwealth Provide transportation investments Local unemployment rate in economically disadvantaged areas Goal 4. Improve quality of life and minimize potential impacts to the environment. Community character and Minimize cultural and environmental Potential environmental or cultural impacts environmental quality impacts Minimize community impacts Use of existing state-owned right-of-way Goal 5. Preserve the existing transportation system and promote efficient system management. System management Encourage access management Interchange spacing/main line adequacy Reduce reliance on single-occupant Inclusion of HOV, bicycle, and/or pedestrian facilities vehicles or provisions for other modes System preservation Minimize long-term maintenance costs Bridge conditions: bridge deficiencies are based on bridge sufficiency ratings Cost-effectiveness Maximize the use of limited highway Cost-effectiveness of the proposed recommendation funding Additional points Multimodalism Support recommendations identified by Highway component of an identified VTRANS multimodal the Virginia Department of investment network (MIN) Transportation (VTRANS)

modal elements were included in the evaluation; for example, improved access to ports, airports, transit, park- and- ride lots, or other intermodal facilities was one of the measures. The evaluation also included truck volumes; consideration of whether accommodations for HOV lanes, bicycles, pedestrians, and other modes were included; and whether the project improved a component of an identified statewide multimodal corridor. The high- way project prioritization matrix is shown in Table 3. The update to the long- range plan will expand the current approach to include measures that reflect the importance of alternate modes, freight mobility, land use, economic vitality, and quality of life. The plan will be financially constrained as well as unconstrained. Vir- ginia will screen the number of projects to be addressed in detail, will apply a performance- based approach using a goal- achievement matrix, and consider project cost in the more detailed evaluation. 119MULTIMODAL TRADE- OFF ANALYSIS FOR PLANNING AND PROGRAMMING Definition LOS is a standard highway performance measure used to indicate congestion and the degree to which the highway facility is meeting the needs of the traveling public. Scores are assigned on the basis of an LOS analysis. Scoring is as follows: LOS A = 0 points, LOS B = 2 points, LOS C = 4 points, LOS D = 6 points, LOS E = 8 points, LOS F = 10 points. For stand-alone interchange improvements, scoring is handled differently. Each stand-alone interchange recommendation begins with 0 points. By using the following criteria, points (maximum of 10 points) are added: (a) substandard interchange design = 3 points, (b) main-line traffic-weaving problem = 2 points, (c) cross-route weaving/congestion problem = 2 points, and (d) traffic backup onto main line during peak hour = 3 points. A roadway’s volume-to-capacity ratio is another, more specific measure of congestion. Scoring is based on a formula used to determine per centile ranges. On the basis of these ranges, recommendations can receive from 0 to 10 points. For stand-alone interchange improvements, the scoring is handled differently. Each stand-alone interchange recommendation begins with 0 points. By using the following criteria, points (maxi- mum of 10 points) are added: (a) substandard interchange design = 3 points, (b) main-line traffic-weaving problem = 2 points, (c) cross-route weaving/congestion problem = 2 points, and (d) traffic backup onto main line during peak hour = 3 points. Current-day passenger car equivalents (both directions). By using a nationally accepted method, heavy trucks are converted into passenger cars. Scoring is based on a logarithmic formula used to define 10 value ranges. On the basis of these ranges, recommendations can receive from 0 to 10 points. Segment crash rates from the HTRIS database. On new location facilities, the crash rate from the parallel or bypassed facility is used. Scoring is based on a logarithmic formula used to define 10 value ranges. On the basis of these ranges, recommendations can receive from 0 to 10 points. The 2003 average daily volume of trucks. Scoring is based on a logarithmic formula used to define 10 value ranges. On the basis of these ranges, recommendations can receive from 0 to 10 points. By using official data from the Virginia Employment Commission, this measure is defined as the maximum 2003 unemployment rate from all jurisdictions affected. Scoring is based on a formula used to determine percentile ranges. 
On the basis of these ranges, recommendations can receive from 0 to 10 points.

Potential environmental or cultural impacts. This measure is based on a spatial analysis of the recommendation's terminus/location and the environmental layers in the geographic information system integrator. Potential impacts fall into seven categories: (a) wetlands, (b) streams, (c) agricultural/forest districts, (d) cultural resources, (e) conservation lands, (f) Virginia Outdoor Foundation easements, and (g) threatened and endangered species. Each recommendation begins with 10 points, and 1.438 points are subtracted for each potential impact.

Use of existing state-owned right-of-way. On the basis of the current facility and the extent of the recommended improvement, this measure is defined as the potential for the improvement to be constructed within the existing state-owned right-of-way. For scoring, yes = 10 points and no = 0 points. Improvements to existing facilities receive full points.

Interchange spacing/main line adequacy. In urban areas, new interchanges should not be within 1 mi of an existing interchange unless a collector–distributor road is included (if not, 0 points). In rural areas, a new interchange should not be within 2 mi of an existing interchange (if not, 0 points). Proposed new interchanges also receive 0 points if they do not include an improvement to the main line and the main line is deficient (LOS F) within the planning horizon (2025).

Inclusion of HOV, bicycle, and/or pedestrian facilities or provisions for other modes. Yes is defined as the inclusion in the recommendation of HOV facilities, bicycle or pedestrian accommodations, park-and-ride lots, bus lanes, rail facilities, or bus pullouts. For scoring, yes = 10 points and no = 0 points.

Bridge conditions. By using bridge sufficiency ratings from the Structure and Bridge Division, this measure is the lowest bridge sufficiency rating (BSR) among all Statewide Planning System segments associated with the recommendation's termini. Scoring is based on BSR ranges: 0 to 20 = 10 points, 21 to 40 = 5 points, 41 to 60 = 3 points, and 61 and over = 0 points.

Cost-effectiveness of the proposed recommendation. Cost-effectiveness is measured with the following formula: the total estimated cost of the improvement divided by the 2025 estimated vehicle miles traveled. Scoring is based on a logarithmic formula that defines 10 value ranges; on the basis of these ranges, recommendations can receive from 0 to 10 points.

Highway component of an identified VTRANS multimodal investment network (MIN). Points are assigned to highway improvements that are components of a MIN, on the basis of the tier to which the MIN is assigned: Tier 3 = 0 points, Tier 2 = 5 points, and Tier 1 = 10 points.
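Several of the scoring rules above are explicit enough to express directly. The short sketch below is a hypothetical illustration, not part of the published Virginia method: it encodes the LOS point scale, the environmental-impact deduction, the bridge sufficiency rating ranges, and the cost-effectiveness ratio. The function names and the floor of zero on the environmental score are assumptions.

```python
# Illustrative encoding of selected scoring rules described above
# (function names and the zero floor are assumptions, not published method).

LOS_POINTS = {"A": 0, "B": 2, "C": 4, "D": 6, "E": 8, "F": 10}

def los_score(los: str) -> int:
    """Points for current-day level of service (LOS A = 0 ... LOS F = 10)."""
    return LOS_POINTS[los.upper()]

def environmental_score(impact_count: int) -> float:
    """Start at 10 points and subtract 1.438 points per potential impact
    (seven categories, from wetlands to threatened and endangered species)."""
    return max(0.0, 10.0 - 1.438 * impact_count)

def bridge_score(lowest_bsr: float) -> int:
    """Points from the lowest bridge sufficiency rating (BSR) on the segment."""
    if lowest_bsr <= 20:
        return 10
    if lowest_bsr <= 40:
        return 5
    if lowest_bsr <= 60:
        return 3
    return 0

def cost_effectiveness_ratio(total_cost: float, vmt_2025: float) -> float:
    """Total estimated improvement cost divided by 2025 estimated VMT; the
    published method then maps this ratio to 0-10 points through logarithmic
    value ranges that are not reproduced here."""
    return total_cost / vmt_2025

# Examples of the point rules in use.
print(los_score("D"))          # 6
print(environmental_score(3))  # 5.686
print(bridge_score(35))        # 5
```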

LESSONS LEARNED FROM THE LITERATURE AND THE VIRGINIA EXPERIENCE

• Get buy-in up front, and again and again. The long-range planning effort took 3 years and involved a substantial public involvement process, a staff-level Technical Committee, and a Policy Committee that included members of the Commonwealth Transportation Board (CTB). Periodic presentations were made at various board and other meetings throughout the state. Ultimately, the CTB members who had also been Policy Committee members were enthusiastic about the use of goals and performance measures and were supportive of their use for project prioritization. However, the board members who had not been involved throughout the process were less enthusiastic. The tools need to be considered credible by decision makers and transparent to stakeholders, and both groups need to buy in up front and throughout the process.

• The process must be simple to understand, implement, and explain. This is critical to ensuring the acceptance and the institutionalization of the process.

• Multimodal trade-offs are doable. Whether one uses a complex methodology, a simplified scoring process, or a nonquantitative approach, it is possible to compare a transit project with a highway project and decide which one is the better investment. Utah's use of cost–benefit ratios to determine asset management budgets, Michigan's approach to integration, and Florida's Decision System Tool are all examples of reasonable approaches to trading off program or modal projects.

• There is greater consideration of multimodal trade-offs than initially meets the eye. Even when the evaluation is made by mode, the way in which the goals are described, the types of performance measures that are used, and the way in which projects can be bundled are indicative of an increasingly multimodal approach to transportation.

• The use of performance measures blurs the differences between the modes. When the goal is mobility and not highway construction per se, the debate is changed.

• Not all projects are multimodal. Sometimes a highway project is just a highway project. Maintenance and operational projects typically involve only one mode, although the importance of a transit project versus that of a highway maintenance project can be estimated once there is agreement on the value of the goal.

• Scale matters. Trade-off analyses can occur at different levels (statewide, regional, or local) as well as at the planning or the programming phase. The data requirements vary considerably, and the tool must fit the scale. A detailed benefit–cost analysis of all potential projects over the course of 20 years would be costly and unnecessary.

• If tools are provided, the states will use them. This is demonstrated by the large number of states that started using the HERS-ST model once FHWA made it available.

• Cost matters. When asked, the public clearly prefers the "Cadillac" to the "Volkswagen"; however, the cost of a project is sometimes so prohibitive that an alternative becomes disqualified on the basis of that criterion alone. Lambert et al. (2007) show project cost by the size of the bubble on a two-dimensional graph.

Whether based on a purely objective project selection system, solely on political judgment, or somewhere in between, decision making for project selection is becoming more closely linked to the planning process that precedes it.
More and more, projects under consideration have resulted from a planning process that considered all modes and are consistent with the overar- ching vision set forth in the long- range plans. However, as Meyer and Miller (2001) note, “attempts to analyti- cally structure the priority- setting process are a useful exercise for both planners and decision- makers, but the final decision will still be based on political judgment.” REFERENCES AASHTO. 2006. Measuring Performance Among State DOTs. AASHTO, Washington, D.C. AASHTO Standing Committee on Quality. 2007. Survey of State DOT Performance Measures. AASHTO, Washington, D.C. Berechman, J., and R. E. Paaswell. 2005. Evaluation, Prioritization and Selection of Transportation Investment Projects in New York City. Kluwer Academic, Dordrecht, Netherlands. Bremmer, D., K. C. Cotton, and B. Hamilton. 2004. Emerging Performance Measurement Responses to Changing Political Pressures at State DOTs: A Practitioners’ Perspective. Revised version, Nov. 16. Planning Website, www.wsdot .wa.gov/planning/. Washington State Department of Transportation, Olympia. Bristow, A. L., and J. Nellthorp. 2000. Transport Project Appraisal in the European Union. Transport Policy, Vol. 7, pp. 51–60. Cambridge Systematics, Inc. 2001. Multimodal Tradeoffs in Statewide Transportation Planning. NCHRP Project 8-36, Task 7. TRB, National Research Council, Washington, D.C. DeCorla- Souza, P., and J. T. Hunt. 1998. Use of the Surface Transportation Efficiency Analysis Model (STEAM) in Evaluating Transportation Alternatives. Institute of Transportation Engineers, Washington, D.C. Fontaine, M., and J. Miller. 2002. Survey of Statewide Multimodal Transportation Planning Practices. Virginia Transportation Research Council, Charlottesville. 120 U.S. AND INTERNATIONAL APPROACHES TO PERFORMANCE MEASUREMENT

Hill, M. 1973. Planning for Multiple Objectives: An Approach to the Evaluation of Transportation Plans, Regional Science Research Institute, Philadelphia, Pa. Khasnabis, S., E. Alsaidi, L. Liu, and R. D. Ellis, 2002. Comparative Study of Two Techniques of Transit Perfor- mance Assessment: AHP and GAT. Journal of Transporta- tion Engineering, Vol. 128, No. 6, Nov./Dec., pp. 499–508. Lambert, J. H., and N. N. Joshi. 2006. Benefits Estimates of Highway Capital Improvements with Uncertain Para- meters. Report VTRC 07-CR4. Virginia Transportation Research Council, Charlottesville. Lambert, J. H., C. A. Pinto, and K. D. Peterson. 2003. Final Contract Report: Extended Comparison Tool for Major Highway Projects. Virginia Transportation Research Council, Charlottesville. Lambert, J. H., K. A. Peterson, and N. N. Joshi. 2006. Synthesis of Quantitative and Qualitative Evidence for Risk- Based Analysis of Highway Projects. Accident Analysis and Prevention, Vol. 38, pp. 925–935. Lambert, J. H., N. N. Joshi, K. D. Peterson, and S. M. Wadie. 2007. Coordination and Diversification of Investments in Multimodal Transportation. Public Works Management and Policy, Vol. 11, pp. 250–265. Merkhofer, M., M. Schwartz, and E. Rothstein. 1997. A Priority System for Multimodal and Intermodal Transpor- tation Planning. In Proc., Sixth TRB Conference on the Application of Transportaiton Planning Methods. Transportation Research Board, Washingotn, D.C. (CD- ROM.) Meyer, M., and E. Miller. 2001. Urban Transportation Planning, 2nd ed., McGraw- Hill Series in Transportation. McGraw- Hill, Boston, Mass., pp. 489–491. Mozer, D. 1993. Least Cost Transport Planning, abridged. International Bicycle Fund, Seattle, Wash. Peyrebrune, H. L. 1999. NCHRP Synthesis of Highway Practice 286: Multimodal Aspects of Statewide Transpor- tation Planning. TRB, National Research Council, Wash- ington, D.C. Roerden, J. 2007. State DOT Strategic Planning Practices Survey. North Carolina Department of Transportation, Raleigh, May 22. Stuart, D. G., and W. D. Weber. 1977. Accommodating Multiple Alternatives in Transportation Planning. In Transportation Research Record 639, TRB, National Research Council, Washington, D.C., pp. 7–13. Texas Transportation Institute. 2003. TransDec, 2.0, User’s Manual. Texas Transportation Institute, College Station. Victoria Transport Policy Institute. 2007. Least- Cost Transportation Planning. TDM Encyclopedia, Updated. Victoria Transport Policy Institute, Victoria, British Columbia, Canada, March 7. www.vtpi.org/tdm/index.php. Virginia Department of Transportation. 2004. VTran2025— Virginia’s Statewide Multimodal Long- Range Trans- portation Plan— Phase Three. Virginia Department of Transportation, Richmond. Winters, P., F. Cleland, E. Mierzejewski, and L. Tucker. 2001. Assessing Level of Service Equally Across Modes. Center for Urban Transportation Research, University of South Florida, Tampa. Young, R., J. Barnes, and G. S. Rutherford. 2002. Multimodal Investment Choice Analysis for Washington State Transportation Projects: Phase I Results. In Transportation Research Record: Journal of the Transportation Research Board, No. 1817, Transportation Research Board of the National Academies, Washington, D.C., pp. 137–142. ADDITIONAL RESOURCES Cambridge Systematics, Inc. 2006. Multimodal Tradeoffs Workshop Final Report. FHWA, U.S. Department of Transportation. www.fhwa.dot.gov/planning/statewide/ mmwkshp.htm# s11. DeCorla- Souza, P. 1998. Benefit–Cost Analysis in Corridor Planning: Lessons from Two Case Studies. 
ITE Journal, Vol. 68, No. 1, Jan. 1, pp 34–36, 42, 44–45. FHWA. 1996. Exploring the Application of Benefit/Cost Methodologies to Transportation Infrastructure De- cision Making. Number 16. FHWA, U.S. Department of Transportation. FHWA and FTA. 2001. Transportation Planning Capacity Building Program Albany, NY: Noteworthy Practices of Capital District Transportation Committee (CDTC). Peer Exchange Report. FHWA and FTA, U.S. Department of Transportation. General Accounting Office. 2001. Highway Infrastructure: FHWA’s Model for Estimating Highway Needs Has Been Modified for State- Level Planning. Report GAO-01-299. General Accounting Office, Washington, D.C. Gomez- Ibanez, J. A., W. B. Tye, and C. Winston. 1999. Essays in Transportation Economics and Policy: A Handbook in Honor of John Myer. Brookings Institution Press, Washington, D.C. Hendren, P. G., and M. D. Meyer. 2006. Peer Exchange Series on State and Metropolitan Transportation Planning Issues, Meeting 2: Non- Traditional Performance Measures. Cambridge Systematics, Inc., Cambridge, Mass. Ishaque, M. M. 2006. Policies for Pedestrian Access: Multi- Modal Analysis Using Micro- Simulation Techniques. Department of Civil and Environmental Engineering, Imperial College, London. Joshi, N., and J. H. Lambert. 2007. Equity Metrics for the Prioritization and Selection of Transportation Projects. IEEE Transactions on Engineering Management, Vol. 54, No. 3. 121MULTIMODAL TRADE- OFF ANALYSIS FOR PLANNING AND PROGRAMMING

Lewis, D. L. Transportation- Related Works. 2007. Literature search by Ken Winter. Research Library, Virginia Department of Transportation, Charlottesville, May 24. Linthicum, A. S. 2007. Multi- Scale Analysis of Risks in Long- Range Planning of Transportation Corridors. Michigan Department of Transportation. 2006. MDOT State Long- Range Transportation Plan, 2005–2030. Michigan Department of Transportation, Lansing. Minnesota Department of Transportation. 2007. District Long Range Plan Guidance. Minnesota Department of Transportation, St. Paul, May 24. Montana Department of Transportation. 2004. Performance Programming Process, A Tool for Making Transportation Investment Decisions. 2004 Update. Montana Department of Transportation, Helena. Nelson, D., D. Shakow, and The Institute for Transportation and the Environment. 1994. Development of a Prototype Least Cost Planning Model and Its Initial Application to the Puget Sound Region. Phase II Report. Global Telematics, Seattle, Wash. http://globaltelematics.com/lcp/ nel5.htm. Neumann, L. 2000. Integration of Intermodal and Multi- modal Considerations into the Planning Process. Proc., Conference for Refocusing Transportation Planning for the 21st Century. FHWA and FTA, U.S. Department of Transportation. New Jersey Department of Transportation. 2007. FY2008– 2012 Statewide Capital Investment Strategy. New Jersey Department of Transportation, Trenton. Niessner, C. W. 2001. Development of a Computer Model for Multimodal, Multicriteria Transportation Investment Analysis. In NCHRP Research Results Digest, No. 258, TRB, National Research Council, Washington, D.C. Ohio Department of Transportation. 2003. Transportation Review Advisory Council Policies for Selecting Major New Capacity Projects. Ohio Department of Transportation, Columbus, Dec. 9. Padgette, R. 2006. Effective Organization of Performance Measurement. NCHRP Project 8-36, Task 47. TRB, National Research Council, Washington, D.C. Patassini, D., and D. Miller. 2005. Beyond Benefit Cost Analysis: Accounting for Non- Market Values in Planning Evaluation. Ashgate Publishing Ltd., Surrey, United Kingdom. Reed, T., J. P. Franklin, D. A. Niemeier, and T. Rufulo. 1998. Benefit–Cost Analysis of Multimodal Projects for State- Wide Prioritization of Capacity Investments. Report UCD- ITS- RR-98-3. Institute of Transportation Studies, University of California, Davis. Richarme, M. 2004. Consumer Decision- Making Models, Strategies, and Theories, Oh My! Decision Analyst, Inc., Arlington, Texas. Steiner, R. L., I. Li, P. Shad, and M. Brown. 2003. Multimodal Tradeoff Analysis in Traffic Impact Studies. Office of Systems Planning, Florida Department of Transportation, Tallahassee. Timms, P. M., A. D. May and S. P. Sheperd, 2002. The Sensi- tivity of Optimal Transportation Strategies to Specification of Objectives. Transportation Research, Part A: Policy and Practice, Vol. 36, No. 5, pp. 383–401. Tippett, J. C., Jr. 2004. 12 Years of Project Evaluation: Apply- ing the Benefits Matrix Mode in Hickory–Newton– Conover, NC, Olympia, IL. FHWA, U.S. Department of Transportation. Transmanagement, Inc. 1998. NCHRP Report 404: Innova- tive Practices for Multimodal Transportation Planning for Freight and Passengers. TRB, National Research Council, Washington, D.C. TRR 639. 1977. Transportation Research Record 639: Transportation System Evaluation Techniques. TRB, National Research Council, Washington, D.C. U.S. Department of Transportation. 2006. 
Status of the Nation’s Highways, Bridges, and Transit: Conditions and Performance. Federal Highway Administration, U.S. Department of Transportation. http://www.fhwa.dot .gov/policy/2006cpr/index.htm. Utah Department of Transportation. 2007. Utah’s Long- Range Transportation Plan, 2007–2030. Utah Depart- ment of Transportation, Salt Lake City. Washington State Department of Transportation. 2004. Washington Transportation Plan Phase 2 Work Plan. Washington State Department of Transportation, Olympia. 122 U.S. AND INTERNATIONAL APPROACHES TO PERFORMANCE MEASUREMENT

123 RESOURCE PAPER Measuring the Value and Impact of Agency Communication with the Public David Kuehn, Federal Highway Administration At the conclusion of the Second TRB Conference on Per- formance Measurement in 2004, Lance Neumann, the conference cochair, observed how performance measure- ment could serve as a communication tool. At that time, however, research gaps included an understanding of how performance measurement influences behavior, methods for the reporting of performance measure- ments, and difficulties with the communication of risk (Turnbull, 2005). This paper relates and builds on the summary conclu- sions from the 2004 conference. It provides examples of subsequent research and transportation agency practices that respond to previously identified gaps. The paper also references research relevant to but not specific to transportation. These examples are intended to reinforce certain points by noting that other industries apply simi- lar approaches. In some cases, the non- transportation- specific examples suggest alternate approaches or fill in the gaps in the literature and, thus, are intended to expand what practitioners in the transportation industry may consider applying to their own circumstances. In the end, the paper attempts to explain the value of public engagement in the development and implementa- tion of performance measurement programs for the public agencies responsible for surface transportation. It also shows progress in each of the three areas identified as research gaps in 2004: assessing the impacts of communi- cation, communication methods, and risk communication. In researching the paper, the author conducted a com- prehensive review of English- language research related to the subject at hand: communicating surface trans- portation agency performance measurement with the public. The author reviewed bibliographies collected by the TRB Performance Measurement Committee; reviewed recent transportation performance measure- ment discussion boards; searched the Transportation Research Information System and the Research in Progress database; and conducted limited Internet searches using the following key words: measures of effectiveness, performance measurement, public involve- ment, public participation, and public opinion. The paper is divided into six parts, each of which out- lines a different concept or provides a set of examples and each of which builds on the previous topic: 1. Why communicate performance measurement? 2. The public, customers, and market segmentation 3. Partnerships: two- way communication and concepts of integration 4. Perceived value of customer communication 5. Assessing the impacts of customer communication 6. Communication methods: the nuts and bolts WHY COMMUNICATE PERFORMANCE MEASUREMENT? Although many of the people reading this paper may have a preconceived notion that they should communi- cate performance with the public (a view that may now also be broadly held by transportation agencies), it still is important to describe the basis for this belief. The research literature suggests the following seven reasons for the communication of performance measure-

ment. (In reality, the communication of performance measurement is done for a mix of one or more of the indicated reasons.) • Legislative direction. The I-95 Corridor Coalition (2005b) conducted a survey of its members about the use of performance measurement. One of the questions asked about communication about performance with legislators; few member agencies responded that they were communicating about performance with legisla- tors. Although it is not commonly noted as a reason for communicating performance (perhaps because it is obvi- ous), Padgette (2006) wrote about the importance of reporting on performance measurement in response to legislative demands. In some cases, this is a direct reflec- tion of a legislative mandate. In other cases, proactive communication with advisory boards and oversight agencies can help guide the types of questions that they may ask the boards and agencies. Communication can clarify and even lead to shared assumptions about realis- tic program outcomes and controls. Emerson and Carl- son (2003), writing about the measurement of environmental conflict resolution programs, similarly note that administrative and legislative bodies are important audiences. • Public awareness. The communication of perfor- mance can educate the public about agency priorities or manage expectations by describing the challenges and external influences that affect transportation programs. Public awareness was a specific component of the design of the annual Metropolitan Atlanta Performance Report prepared by the Georgia Regional Transportation Authority (2007). In regard to nontraditional measures, over which transportation agencies frequently have shared or limited control, Hendren and Meyer (2006) noted the importance of education. Similarly, in the envi- ronmental sector, the Government Accountability Office (2004) found that after assessing conditions and trends, the most frequently cited reason for performance mea- surement among federal, state, and regional organiza- tions was to educate the audience, raise awareness, and communicate complex issues, in descending order. • Support for new revenue. A report for Transport Canada (Transportation Association of Canada, 2006) suggested that performance measurements can provide data to justify program expenditures, support requests for the allocation of additional resources, and support public agency demands for greater accountability as rea- sons for applying performance measurement, at least in regard to communicating with the public. Hendren and Meyer (2006) also noted that demonstrating perfor- mance is important when revenue is sought. Cameron et al. (2003) suggested that the communication of perfor- mance measurement is important for gaining stakeholder trust, particularly when agencies are seeking funding and raising awareness of agency priorities. • Customer feedback. Communication is a two- way street: it allows agencies to gain input and guidance on how and what to communicate about performance as well as provide information about performance. Schaller (2005) noted that communication with customers is one of five reasons that transit agencies conduct surveys. Hendren and Meyer (2006) suggested a shift from a focus on the system to a focus on the customer in non- traditional performance measurement and the impor- tance of customer feedback. Stein and Sloane (2003) wrote about keeping customers informed to demonstrate that agencies are providing transportation services that meet customer needs. • Accountability. 
Padgette (2006) wrote that in sev- eral departments of transportation the senior leadership provides information on performance to the public as a means of reinforcing accountability. Accountability can be considered analogous to legislative reporting for a more general audience. The work of the Virginia Depart- ment of Transportation (DOT) is a good example (and will be described in more detail below) of the importance of reporting performance measurement in public accountability. Hendren and Meyer (2006) similarly noted that accountability and credibility are important issues related to non traditional performance measure- ments, which include measurements of interest to other agencies and public groups, such as measurements related to land use, environment, and quality of life. • Trust building. Cameron et al. (2003) suggested that the communication of performance measurement is important for gaining stakeholder trust. Trust building requires transparency and accountability. The Missouri DOT (2007) identified transparency as an important reason behind communication with the public in its Tracker performance measurement report. An NCHRP report (2004) noted the New Mexico DOT’s commit- ment to an open and public process in the communica- tion of performance in the environmental management area. The Virginia DOT found accountability to be an important element of the communication of perfor- mance (Jones, 2007). • Collaboration. The Missouri Department of Trans- portation (2007) noted that the creation of opportunities for collaboration is another important reason behind communication with the public by use of its Tracker per- formance measurement report. However, Missouri appears to be unusual among agency performance mea- surement programs by naming collaboration as a reason. This issue will be discussed further in the section on partnerships below. 124 U.S. AND INTERNATIONAL APPROACHES TO PERFORMANCE MEASUREMENT

THE PUBLIC, CUSTOMERS, AND MARKET SEGMENTATION

As noted in the section above, agencies reference a variety of audiences when describing the purpose of the communication of performance. In this regard, customers may include any external audience: decision makers, partner agencies, commuters, residents, and visitors. For example, the Florida DOT conducted customer surveys of residents, local officials, visitors, seniors, and commercial drivers (Florida DOT, 2005). The Michigan Transportation Summit provides another example. The Michigan DOT engaged multiple segments of the public and the business community in the development of the department's strategic plan. Schwartz (2006) noted that this effort goes beyond surveying customer satisfaction after goals and measures are developed.

Schwartz (2006) and Stein and Sloane (2003) differentiated between stakeholders, partners, and customers. Stein and Sloane (2003) went on to describe the value of segmenting customers and discussed the societal changes that have led to increased segmentation for transportation on the basis of geography, demographics, travel behavior, and socioeconomics. Schaller (2005) also noted the importance of customer segmentation specific to transit service, including the value of segmentation when one is communicating with different customer groups.

This paper takes a broader view of customers, one that includes both external audiences and, in some instances, audiences internal to large organizations. This seems to be consistent with the approach taken by several state departments of transportation, whereas others (Florida Department of Transportation, 2005) divide customers into multiple segments. Although the methods of communicating performance to different segments may vary, as will be described in the final section of this paper, the importance of communicating performance is similar for every segment.

Private industry frequently uses the American Customer Satisfaction Index (ACSI) to infer the quality of communications with groups in the area of transportation services. ACSI compares customer expectations and perceptions of service quality. The measurements allow a correlation to be made between expectations and perceived quality, which leads to customer satisfaction. Van Ryzin et al. (2004) described how the city of New York applied ACSI to government performance in areas including road smoothness, street cleanliness, subway service, and bus service. New York City is well known for its diversity; the data captured by ACSI allowed the segmentation of the results by geography (each borough of the city), race–ethnicity, and income. In this example, the city of New York was interested in providing city leaders with information about how resident satisfaction correlated with confidence and trust in government services. Overall, road conditions were a strong driver of the overall perceived quality of and public satisfaction with city services. Transit services, on the other hand, appeared to matter more to residents in the outer boroughs and those with lower incomes.

ACSI was not designed to assess public confidence or trust, however. Van Ryzin et al. (2004) noted that about one-sixth of the variation in confidence was captured by ACSI. Public appreciation of agency control (or a lack thereof) and external factors can cloud the results.
The public may not hold an agency accountable for condi- tions or may not attribute outcomes to actions that the agency has taken. This allows the application of ACSI or similar survey instruments to be a useful approach for measuring either changes in service quality or communi- cation that could appreciably modify the expectations of a segment of the population or the general public. PARTNERSHIPS: TWO- WAY COMMUNICATION AND CONCEPTS OF INTEGRATION Surprisingly, it appears that few agencies have consid- ered or embraced the communication of performance measurement for the purpose of seeking cooperation and building partnerships, although at the 2004 TRB Perfor- mance Measurement Conference, Klein (2005) spoke about integrating measurement across agencies, and Joshua (2005) talked about how a metropolitan plan- ning organization (MPO) can use its formal structure, which consists of a policy board and advisory commit- tees, to engage customers in the development of performance measurements. Some examples exist in the area of nontraditional measures, such as environmental measures. Hendren and Meyer (2006), in writing about nontraditional transportation performance measures, noted that the measures may be outside the typical control of trans- portation agencies. It is in these cases (e.g., energy and resource conservation, environmental quality, quality of life, and sustainability) in which partnerships may be of particular importance. The chapter Organizational Environmental Stewardship Practices of Environmental Stewardship Practices, Policies, and Procedures for Road Construction and Maintenance discussed partner- ships and shared reporting between agencies and indus- try in the measurement of environmental mitigations (AASHTO Standing Committee on the Environment, 2004). Likewise, a report of context- sensitive solutions 125MEASURING THE VALUE AND IMPACT OF AGENCY COMMUNICATION WITH THE PUBLIC

(NCHRP, 2004) discussed the collaborative aspect of performance measurement, as performance measurements may be linked to local land use and community needs. They are also linked to land use systems and ecosystems in the environment. This is therefore an additional reason for collaboration.

Groups that may partner with transportation agencies in the area of performance measurement include other transportation agencies, such as public transportation providers and ports. In addition, performance measures in nontraditional areas for transportation fall outside the jurisdiction of transportation agencies (e.g., health). The communication of performance measures in these areas could lead to new collaborations among agencies. Such collaborations may include stakeholders with a narrower interest in the transportation program, such as air quality districts, public and traffic safety organizations, health providers, and land use and environmental regulatory agencies. They may also include nongovernmental organizations and advisory groups.

The Wilmington Area Planning Council (WILMAPCO), the designated metropolitan planning organization for the Wilmington, Delaware, area, provides an example of partnering and coordination. The metropolitan area includes parts of Delaware and Maryland. WILMAPCO developed a long-range transportation plan with performance measurement data and information from multiple agencies (Wilmington Area Planning Council, 2007). The plan meshed the goals of the Delaware Department of Transportation (DelDOT) and the Maryland State Highway Administration (MDSHA) for road and bridge conditions and of Maryland DOT and DelDOT for on-time transit performance (Figure 1).

When one considers how the communication of performance measurement may aid in building partnerships, it may be helpful to picture communication in the shape of an hourglass, with the width being the level of effort or engagement outside the organization and the length of the hourglass being time (Figure 2).1 In the hourglass model, communication about performance measurement with external organizations frequently starts at the top of the hourglass with extensive engagement; communication then decreases (the narrowing of the hourglass) as an organization works internally to develop, implement, or modify a performance measurement program; communication again becomes extensive (at the bottom of the hourglass) as the organization reports and discusses the results. An illustration of this is from the Delaware Valley Regional Planning Commission (DVRPC), which is the designated metropolitan planning organization for the Philadelphia, Pennsylvania–Camden, New Jersey, area. DVRPC used a steering committee to incorporate feedback from external sources into the performance measurement program (Delaware Valley Regional Planning Commission, 2006).

Another example of an agency and stakeholder partnership used for the development of performance measurements is the Sustainable Region Showcase for Greater Vancouver, British Columbia, Canada, which developed diverse measures, including measures related to transit and pedestrian priority, hybrid buses, a greenway, transit villages, goods movement, and household-based marketing (Transportation Association of Canada, 2006).

At the output end of the hourglass is the Smart Commute Initiative (2003) in the greater Toronto, Ontario, Canada, area.

FIGURE 1 WILMAPCO draft long-range transportation plan. The figure compares the percentage of roads in poor condition and the percentage of bridges in need of repair with DelDOT and MDSHA goals, and DTC on-time performance and the percentage of routes meeting performance targets with MDOT and DelDOT goals. (DTC = Delaware Transit Corporation. Source: www.wilmapco.org/RTP/Update.htm.)

1 Zoe Neaderland of DVRPC introduced the author to the analogy.

FIGURE 2 Hourglass of communication, starting with broad input, a narrowing, and then completion with a broad output.
That initiative is a public–private trans- portation demand management organization that used a 126 U.S. AND INTERNATIONAL APPROACHES TO PERFORMANCE MEASUREMENT DTC on-time performance % of routes meeting performance targets Vs. NDOT & DelDOT gcals Vs. DelDOT & MCSHA gcals Vs. DelDOT & MCSHA gcals % of bridges in need of repair % of roads in poor condition FIGURE 1 WILMAPCO draft long- range transportation plan (DTC = Delaware Transit Corporation. Source: www.wilmapco.org/RTP/Update.htm). 1Zoe Neaderland of DVRPC introduced the author to the analogy. FIGURE 2 Hourglass of communi- cation, starting with broad input, a narrowing, and then completion with a broad output.

partnership to increase the dissemination and discussion of regional performance measures. The Smart Commute Initiative included demand strategies that were measur- able and that were developed and implemented across multiple jurisdictions and by both public and private partners. The intent of the initiative was to link the per- formance measures for the system at the regional and local levels. The Smart Commute Initiative also illustrates another way to look at the communication of performance mea- surement: communication may be vertical (between one office and the larger organization or between a local agency and a regional council of governments) or hori- zontal (among local agencies). WILMAPCO demon- strates an example of an agency that uses vertical communication, in which communication was between local agencies and the MPO (WILMAPCO) and between the metropolitan planning organization and the states of Delaware and Maryland. NCHRP (2004) described a case of vertical integration between micromeasures (project level) and macromeasures (agencywide) for context- sensitive design. As an example of horizontal integration, Emerson and Carlson (2003) also noted the use of the benchmarking of measures for environmental conflict resolution programs to demonstrate aggregate outcomes, which required coordination, quality control, and clarity regarding data management. A final example of horizontal integration is from the Baltimore, Mary- land, Neighborhood Indicators Alliance (2006). The alliance reported on indicators such as travel time and mode split by neighborhood to an audience of the gen- eral public and policy makers with the purpose of influ- encing government programs. PERCEIVED VALUE OF CUSTOMER COMMUNICATION Behind the reasons for communicating performance (leg- islative direction, public awareness, etc.), agencies and their employees anticipate some benefit. The Virginia DOT is an example of an agency that found a clear ben- efit in effectively communicating performance. Before adopting current performance measurement practices, the public and the media were skeptical of the Virginia DOT’s performance (Jones, 2007). This led the depart- ment to focus on program delivery and the adoption of new mechanisms for reporting on performance by using a dashboard (Figure 3) (Virginia Department of Trans- portation, 2007). The new focus and the reporting of performance measurements increased the credibility of the department and improved press coverage. The Missouri DOT (2007) found value in communi- cating performance as well. The department measured the percentage of customers who viewed the department as Missouri’s transportation expert, which the depart- ment found to demonstrate its credibility with the public. More interesting was the department’s measurement of the percentage of federal earmarked highway projects on the state highway system. This was designed as a similar indicator of credibility among a much smaller group, the state’s congressional delegation. The Missouri DOT also tracks more typical measures of customer involvement in transportation decision mak- ing as well as the percentage of customers who believed that the department included their views in the transportation decision- making process. Again, these measures illustrate an underlying basis for building pub- lic trust and confidence. The city of Baltimore, Maryland, provides another example. The city developed CitiStat to manage the day- to- day operations of city departments (City of Baltimore, 2007). 
CitiStat employed a database to develop common maps, charts, and graphs showing agency performance. For transportation, performance included snow removal, street light repairs, and curb lane closures. The mayor and other executives meet biweekly to review per- formance. One unexpected result of the system was learning that the city responded to most pothole com- plaints within 48 hours. The mayor announced a public campaign promising responsiveness to pothole com- plaints, which the city was already doing. This led to increased public confidence and trust in city services (Baxandall and Euchner, 2003). A final example is the Canadian Smart Commute Ini- tiative (2003), which included the development of an 127MEASURING THE VALUE AND IMPACT OF AGENCY COMMUNICATION WITH THE PUBLIC FIGURE 3 Detail of the Virginia DOT Dashboard. (Source: dashboard.virginiadot.org/default.aspx.)

assessment tool for the tracking of stakeholder and public engagement with the initiative. The initiative considered benchmarking, regular monitoring, and public reporting as important methods for sustaining program goals. A major legacy to which the Smart Commute Ini- tiative aspires is to firmly establish the value of TDM [transportation demand management] mea- sures in the public’s mind and travel culture to such an extent that there will continue to be widespread municipal and private- sector support to maintain and expand these programs beyond the timeframe of the Showcase Program. Reporting accomplish- ments on an annual basis provides the Smart Com- mute Initiative the opportunity to measure its success at reaching this major goal. (Transport Canada, 2007) Although it is too early to tell if the Smart Commute Initiative was able to build value by discussing perfor- mance with customers, Wang and Wart (2007) provided an interesting and perhaps important consideration about the relationship between trust and public commu- nication. They conducted a national assessment of larger local governments in the United States that identified important intermediate considerations that link public participation and increased trust. Transportation was one of 10 functions and fell in the middle in terms of public involvement in local government, with general land use, recreation, and public safety more frequently being topics of involvement. The most frequent process was program goals and objectives. Wang and Wart (2007) started with considering the assumptions behind linking participation and trust, which they noted is widely accepted in the political sci- ence literature. They then tested five distinct intermedi- ate factors commonly identified. They found that the most important intermediate element in contributing to increased public trust was service competence. Public trust, as defined in their article, is a broad sense that gov- ernment will deliver what is needed, as opposed to satis- faction with a specific action or good or service. Service competence suggests that the public trusts agencies more when agencies can demonstrate that the response for services is consistently well met. They sug- gest that fulfillment and demonstrating the delivery of results are critical to building trust. Wang and Wart (2007) noted that there is a strong correlation between increased public interaction and accountability but that that does not translate into increased trust. They hypothesized that information alone does not change public attitudes or perceptions about government. They also noted that public commu- nication can support the legitimacy of public actions, which is separate from trust. On the basis of their research, transportation agencies may want to be cau- tious about using communication as a means of trust building. ASSESSING IMPACT OF CUSTOMER COMMUNICATION ACSI and methods that use similar means to assess the impact of communication with customers use quadrant analysis, which compares satisfaction with importance (Van Ryzin et al., 2004). FHWA provided an example of quadrant analysis that supports agency performance measurement on the basis of a national survey of travel- ers in 2005. 
In a quadrant analysis, the upper right quadrant shows programs that customers found both satisfactory and important; going clockwise, the next quadrant contains programs that customers found unsatisfactory and important, the next quadrant contains programs that customers found unsatisfactory but unimportant, and the final quadrant contains programs that customers found satisfactory but less important (Table 1).

The fit between agency resources and the combination of customer satisfaction and importance is as important as noting which quadrant an agency program or activity falls into. Accordingly, one method for assessing the impact of customer communication is the ability to match agency resources correctly to the combination of importance and perceived quality (Figure 4). The farther a program falls below the diagonal line shown in Figure 4, the more the public sees the agency as underperforming.

Schwartz (2006) makes a good argument for the value of customer communication. On the basis of a review of cases in state departments of transportation, MPOs, and public transit providers, Schwartz found that engaging with a broad range of stakeholders not only can increase public trust but also can lead to actual changes in programs.

This builds on a presentation at the prior TRB Conference on Performance Measurement that discussed resource allocation and program impact on the basis of customer understanding. This is part of the two-way discussion about performance that agencies have with customers through market research (Halverson, 2005).

TABLE 1 Typical Quadrant Analysis

                          Importance
Overall Grade     Low                      High
High              Secondary strengths      Primary strengths
Low               Potential weaknesses     Critical weaknesses
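A minimal sketch of the Table 1 quadrant assignment is shown below; the 0-to-1 rating scale and the threshold for "high" are assumptions made only for illustration.

```python
# Illustrative quadrant assignment for importance versus perceived quality
# (Table 1); the rating scale and the "high" threshold are assumptions.

def quadrant(importance: float, satisfaction: float, threshold: float = 0.5) -> str:
    """Classify a program or service into one of the four Table 1 cells."""
    high_importance = importance >= threshold
    high_satisfaction = satisfaction >= threshold
    if high_importance and high_satisfaction:
        return "primary strength"
    if high_importance:
        return "critical weakness"
    if high_satisfaction:
        return "secondary strength"
    return "potential weakness"

# Example: a program rated very important but poorly perceived.
print(quadrant(importance=0.8, satisfaction=0.3))  # -> critical weakness
```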

COMMUNICATION METHODS: THE NUTS AND BOLTS

Communication methods are described in this final section not as an afterthought but to reinforce the importance of public communication in implementing performance measures.

When one is considering how to communicate, it is important to return to the idea of audience segmentation. For technical audiences, which may include other transportation agencies, communication should include details. For officials, communication techniques should provide decision support. For the public and the media, the impact of the performance measurement should be apparent. In general, for nontechnical audiences, agencies have used the following methods to communicate performance:

• Simple charts and tables;
• Dashboards, scorecards, and report cards;
• System maps; and
• Narratives.

These methods of communicating agency performance may be included in publications, brochures, executive summaries, and full reports and on posters. They may be reported at meetings and in presentations. They may also be sent to the media or contact lists for different groups of customers, stakeholders, and interested parties.

Besides the method used to report performance to customers, another consideration is the frequency of communication. Padgette (2006) mentioned that the regularity of reporting might be more important than the format. Report cards tend to be annual activities, whereas dashboards and interactive maps can show more frequent and operational measurements (I-95 Corridor Coalition, 2005a). The I-95 Corridor Coalition report (2005a) and the Government Accountability Office (2004) also raised cautions about the time lag of annual reporting.

On the basis of information collected from provincial and territorial transportation agencies in Canada (Transportation Association of Canada, 2006), it appears that performance measurement information in Canada is available to the public mostly through annual reports. An example is Austin, Texas, which has been effective at using a community scorecard to report performance (International City/County Management Association, 2007). The city reports on transportation as well as other municipal functions, using the scorecard for quick comparison across time and across departments.

Although reporting methods were identified as one of the research gaps in 2004, Larson (2005) also mentioned the use of geographic information systems and dashboards to communicate performance to customers, which suggests that the practices existed but were not widely adopted. Since then, Lindley (2005) made the point that data-reporting methods can be complex to the point that customers may not have the knowledge to understand the method; nonetheless, they can understand the importance of the measure if it is communicated well. Reinforcing this concept is a report on environmental indicators (Government Accountability Office, 2004). The report discusses at length the importance of communicating complex concepts such as risk among agencies and to the public and decision makers.

FIGURE 4 Best fit of resources (diagonal line).

One example of a method of reporting that is easy to understand even when complex information is being

related is dashboards. Padgette (2006) found that some departments of transportation used automated data management systems to provide performance informa- tion on dashboards that were accessible to agency lead- ership, legislators, stakeholders, and the general public. Cameron et al. (2003) also mentioned the use of dash- boards to articulate performance to external stakehold- ers (Figure 3). The number of current examples suggests that knowl- edge about the methods used to report performance is more widely applied today than it was in 2004. One exemplary instance is the Kansas DOT, which won a National Partnership for Highway Quality award for Kansas City Scout, which reports on the performance of the road system in the Kansas City, Missouri, metropol- itan area and which is operated by the Kansas and Mis- souri DOTs (Figure 5). The award noted that the department was effective not only at supporting the development of system congestion measures but also at building partnerships with the media so that the public had access to the system measures (National Partnership for Highway Quality, 2006). Similarly, the Smart Commute Initiative (2003) in the Toronto region of Ontario, Canada, included forms of public outreach in the development of strategies for the communication of performance, including a one- day retreat by a public–private working group and a stake- holders’ breakfast to provide initial information. The Smart Commute Initiative also used incentives and awards that led to media coverage and participated in a national information network to promote coordination with external stakeholders. Generally, the use of appropriate methods to create a straightforward message about performance and to reach a specific audience appear to be the keys to communicat- ing with customers and other important external groups. REFERENCES AASHTO Standing Committee on the Environment. 2004. Organizational Environmental Stewardship Practices. In Environmental Stewardship Practices, Policies, and Procedures for Road Construction and Maintenance. NCHRP Project 25-25, Task 04. Venner Consulting and Parsons Brinkerhoff, Washington, D.C. Baltimore Neighborhood Indicators Alliance. 2006. Vital Signs IV: Measuring Baltimore’s Progress Toward Strong Neighborhoods and a Thriving City. Baltimore Neighborhood Indicators Alliance, Jacob France Institute University of Baltimore, Baltimore, Md. Baxandall, P., and C. C. Euchner. 2003. Working Paper 7: Can CitiStat Work in Greater Boston? Rappaport Institute for Greater Boston, National Center for Digital Government, John F. Kennedy School of Government, Harvard University, Cambridge, Mass. Cameron, J., J. Crossett, and C. Secrest. 2003. Strategic Performance Measures for State Departments of Transportation: A Handbook for CEOs. Requested by AASHTO Standing Committee on Quality for the Trans - portation Research Board of the National Academies, Washington, D.C. downloads.transportation.org/Quality- CEOHandbook.pdf. 130 U.S. AND INTERNATIONAL APPROACHES TO PERFORMANCE MEASUREMENT FIGURE 5 Current system operations on Kansas City Scout website. (Source: www.kcscout.net.)

City of Baltimore. 2007. CitiStat Reports and Maps. Baltimore, Maryland. www.ci.baltimore.md.us/news/citi stat/reports.html. Accessed 2007. www.grta.org/news_section/2007_publications/2007_Trasns portation_Map_Re port.pdf. Delaware Valley Regional Planning Commission. 2006. Current Efforts: Tracking Progress Toward 2030. www.dvrpc.org/planning/longrange/indicators/current.htm. Emerson, K., and C. Carlson. 2003. A Collaborative Initiative for Evaluating Environmental and Public Policy Conflict Resolution Programs. In The Promise and Performance of Environmental Conflict Resolution. Resources for the Future Press, Washington, D.C. Florida Department of Transportation. 2005. Customer Satisfaction Survey. www.dot.state.fl.us/planning/cus tomers/. Government Accountability Office. 2004. Environmental Indicators: Better Coordination Is Needed to Develop Environmental Indicator Sets That Inform Decisions. Report GAO-05-52. Government Accountability Office, Washington, D.C. Georgia Regional Transportation Authority. 2007. 2007 Transportation Metropolitan Atlanta Performance Report. Halverson, R. 2005. Tools for Understanding Customers in Making Investment Decisions. In Conference Proceedings 36: Performance Measures to Improve Transportation Systems, Transportation Research Board of the National Academies, Washington, D.C., pp. 54–57. Hendren, P. G., and M. D. Meyer. 2006. Peer Exchange Series on State and Metropolitan Planning Issues Meeting 2: Non- Traditional Performance Measures. NCHRP Project 8-36 (53)(3). Requested by AASHTO Standing Committee on Planning for the Transportation Research Board of the National Academies, Washington, D.C. I-95 Corridor Coalition. 2005a. Current Practices in Performance Measurement of Member Organizations. I- 95 Corridor Coalition, Rockville, Md., Oct. 19. I-95 Corridor Coalition. 2005b. Performance Measures White Paper. I-95 Corridor Coalition, Rockville, Md., April. International City/County Management Association. 2007. ICMA Center for Performance Measurement: Overview. International City/County Management Association, Washington, D.C. www1.icma.org/main/bc.asp?bcid=107 &hsid=1&ssid1=50&ssid2=220&ssid3= 297. Accessed July 29, 2007. Jones, L. 2007. Putting Performance Management to Work at VDOT. Presentation by the Virginia Department of Transportation to AASHTO Standing Committee on Quality, May 24, 2007. quality.transportation.org/ ?siteid=38&c=downloads. Accessed Aug. 15, 2007. Joshua, S. 2005. Involvement of Customers in Performance- Based Management. In Conference Proceedings 36: Performance Measures to Improve Transportation Systems, Transportation Research Board of the National Academies, Washington, D.C., pp. 16–17. Klein, L. 2005. Integrating Performance Measures Across Multiple Jurisdictions. In Conference Proceedings 36: Performance Measures to Improve Transportation Systems, Transportation Research Board of the National Academies, Washington, D.C., pp. 15–16. Larson, M. C. 2005. Organizing for Performance- Based Management. In Conference Proceedings 36: Performance Measures to Improve Transportation Systems, Transportation Research Board of the National Academies, Washington, D.C., pp. 13–15. Lindley, J. 2005. Linking National Performance Measures to External Customers. Conference Proceedings 36: Performance Measures to Improve Transportation Systems, Transportation Research Board of the National Academies, Washington, D.C., pp. 17–18. Missouri Department of Transportation. 2007. MoDOT Tracker: Measures of Performance. 
www.modot.missouri .gov/about/general_info/Tracker.htm. National Partnership for Highway Quality. 2007. Awards and Success Stories, 2006 Make a Difference Award Winners, Kansas. www.nqi.org/awards_2006mad ks.cfm. Accessed July 29, 2007. NCHRP. 2004. NCHRP Web Document 69. Performance Measures for Context- Sensitive Solutions— A Guidebook for State DOTs. NCHRP, Transportation Research Board of the National Academies, Washington, D.C. Padgette, R. 2006. Effective Organization of Performance Measurement. NCHRP 8-36, Task 47. Requested by AASHTO Standing Committee on Planning for the Transportation Research Board of the National Academies, Washington, D.C. planning.transportation .org/?siteid=30& pageid=1399. Schaller, B. 2005. TCRP Report 63: On- Board and Intercept Transit Survey Techniques: A Synthesis of Transit Practice. Transportation Research Board of the National Academies, Washington, D.C. Schwartz, M. 2006. Building Credibility with Customers and Stakeholders. CH2M Hill, Portland, Oreg. Smart Commute Initiative. 2003. A Collaborative Proposal of the Greater Toronto Area and Hamilton to the Urban Transportation Showcase Program. Smart Commute Initiative, Toronto, Ontario, Canada. Stein, K., and R. K. Sloane. 2003. NCHRP Report 487: Using Customer Needs to Drive Transportation Decisions. Transportation Research Board of the National Academies, Washington, D.C. Transportation Association of Canada. 2006. Performance Measures for Road Networks: A Survey of Canadian Use. Transport Canada, Ottawa, Ontario, Canada. Transport Canada. 2007. Urban Transportation Showcase Program. Transport Canada, Ottawa, Ontario, Canada. www.tc.gc.ca/programs/environment/UTSP/showcases.htm. Accessed May 2007. Turnbull, K. F. 2005. Preface. In Conference Proceedings 36: Performance Measures to Improve Transportation 131MEASURING THE VALUE AND IMPACT OF AGENCY COMMUNICATION WITH THE PUBLIC

Systems, Transportation Research Board of the National Academies, Washington, D.C., pp. vii–viii. Van Ryzin, G. G., D. Muzzio, S. Immerwahr, L. Gulick, and E. Martinez. 2004. Drivers and Consequences of Citizen Satisfaction: An Application of the American Customer Satisfaction Index Model to New York City. Public Administration Review, Vol. 64, No. 3, May/June, pp. 331–341. Virginia Department of Transportation. 2007. Dashboard. dashboard.virginiadot.org/default.aspx. Wang, X. H., and M. W. Wart. 2007. When Public Participation in Administration Leads to Trust: An Empirical Assessment of Managers’ Perceptions. Public Administration Review, Vol. 62, No. 2, March/April, pp. 265–278. Wilmington Area Planning Council. 2007. Measuring Our Performance. www.wilmapco.org/RTP/Update.htm. 132 U.S. AND INTERNATIONAL APPROACHES TO PERFORMANCE MEASUREMENT

RESOURCE PAPER

Performance-Based Contracting: A Viable Contract Option?

Sidney Scott III, Trauner Consulting Services, Inc.
Linda Konrath, Trauner Consulting Services, Inc.

"Just tell me what to do; I want to build it and move on."
"Tell me what you want, but don't tell me how to do it."

These statements are typical of what you might hear at a highway construction site anywhere in the country. It is also possible that you heard the same thing in 1970 or before. The first statement reflects the conventional approach to highway construction, which puts the burden on the owner to design, specify, and control the work. The contractors are hired on the basis of the lowest price with the expectation that they will execute the work in accordance with the terms of the contract. Where does the risk lie? It lies mostly with the owner. Where is the innovation? Again, it is mostly from the owner.

The National Highway System (NHS) is not keeping pace with the demands placed on it to move people and goods safely and efficiently. Recent infrastructure report cards indicate that the system is deteriorating and facing increased congestion. Much of America's transportation infrastructure is reaching the end of its design life and needs to be reconstructed. State highway agencies are under pressure to improve highway systems while maintaining traffic in work zones with limited resources. To accomplish this, states are increasingly experimenting with ways to accelerate construction and minimize disruption while maintaining or improving quality. One aspect of this initiative is the use of specifications and contracting strategies to motivate and empower the private sector. The traditional way of doing business, which is to use prescriptive requirements that tell the contractor how to perform the work, does not motivate the contractor to provide more than the minimum or to find creative solutions to save time, minimize disruption, or enhance safety and quality.

The overriding reason for performance-based contracting is to craft a new business model between the owner and the contractor. What will this new model do? It will translate the performance requirements of the owner into language that will allow the contractor to understand, plan, and build the project accordingly. This new model will clearly address product performance requirements, the need to minimize disruption to traffic and the community, and the need to produce facilities with long lives.

Societal changes are driving changes in contracting strategies as well. With dramatic reductions in both the numbers and the experience levels of government inspectors and engineers, highway agencies are reexamining their roles and responsibilities. The complexity of high-speed construction, nighttime construction, and the performance of rehabilitation work under conditions in which traffic remains flowing, all of which the public demands, further stretches the available agency resources. Low-bid contracting is not the best approach for this type of work, as the growing interest in design–build contracting, public–private partnership (PPP) agreements, and long-term warranties and maintenance contracts indicates.

Performance-based contracting is not new. As noted in Figure 1, its roots are in older forms of design–build contracting by use of the integrated master-builder concept. In a sense, today's design–build contracting, PPPs, and other forms of integrated contracts have taken contracting full circle. Performance contracting is a common thread.

If a builder build a house for a man and does not make its construction firm and the house which he has built collapse and cause the death of the owner of the house—that builder shall be put to death.
If a builder build a house for a man and does not make its construction meet the requirements and a wall fall in—that builder shall strengthen the wall at his own expense.
FIGURE 1 Excerpt from the Code of Hammurabi, King of Babylonia, 2200 B.C.

Performance contracting is outcomes based. It works best in a best-value, lump-sum contracting environment rather than in a low-bid, quantity-based, unit-priced contracting environment. It motivates the contractor to focus on outcomes rather than outputs and to be innovative and efficient.

Given the current pressures on the NHS and the promise of performance-based contracting, the question remains: Is performance-based contracting a viable contract option for building and maintaining the U.S. highway system? This paper first examines the state of the practice of performance-based contracting outside the United States. It then looks at the extent to which these practices have taken root in the United States, the limitations to their use, and new initiatives to promote their implementation.

STATE OF THE PRACTICE OUTSIDE THE UNITED STATES

The need to manage and improve an aging highway infrastructure program efficiently and effectively while being confronted with limited public funding and reductions in agency personnel is not a problem unique to the United States but is, instead, a universal issue facing the transportation sector throughout the world. By looking beyond U.S. borders, it can be seen that highway agencies in Europe, Latin America, and elsewhere have responded to this challenge by increasing private-sector involvement in highway construction, operation, and maintenance. In doing so, several of these agencies have gradually moved away from the use of traditional procedural (method) specifications to the greater use of performance-oriented contracts that include functional (end-result- and outcome-based) specifications that capitalize on the expertise of the private sector. For example, a performance contract may specify pavement performance in terms of roughness, rutting, or surface friction. Left unstated is exactly how the contractor is to achieve the performance standard prescribed. This arrangement thus allocates a greater performance risk to the contractor but also creates the opportunity for increased profit margins should contractor-initiated design, process, or technology innovations yield improved efficiencies or cost savings.

On the basis of the experiences of agencies outside the United States in implementing performance-oriented contracting, no single approach can be considered one that is typically used by all agencies. Cultural differences, societal needs, the experience of the road administrators with outsourcing work, and the size and the competence of local contractors, among other issues, all drive the contractual arrangements ultimately established with the private sector. In addition, the nature of the project itself [e.g., new construction versus the maintenance of existing assets and design–build versus design–build–finance–operate (DBFO)] also plays a large role in determining the amount of risk allocated to the private entity and the term for which the private entity is responsible for the asset.

Examples of performance-oriented contracting used outside the United States include design–build and its variants, PPPs, and performance-based maintenance contracts. These techniques have been implemented with various degrees of success; however, the continued interest in and the expansion of these concepts are the best indicators of their long-term viability.

Agencies reporting the highest satisfaction with performance contracting embarked on their programs with an eye toward fostering a culture of trust and partnership with the private sector. This is not an environment that developed overnight; rather, as these agencies grew more comfortable with private-sector involvement in public works, the level of private participation increased. This progression is perhaps best exemplified by the evolution of design–build contracting in the United Kingdom (1). The early design–build contracts let by the United Kingdom's Highways Agency in the mid-1990s did not integrate the designer–builder until after the conclusion of the statutory planning stages, at which point the design was at least 80% fixed. Recognizing that the earlier involvement of the contractor could increase opportunities for innovation, improve risk management, improve constructability, and reduce impacts during construction, the Highways Agency created a new generation of design–build contracts that provided for earlier contractor involvement. Under these contracts with early contractor involvement, the designer–builder is selected, largely on the basis of qualifications, shortly after the identification of the preferred route and well before the start of any statutory planning stages that involve public hearings. After contractor selection, additional design and planning tasks are performed with the input of the entire delivery team to establish a target price for the project from that point forward. Various mechanisms are incorporated throughout the design and construction process so that the contractor may share in the savings achieved and participate in any losses realized when the actual costs are compared with the target price. This scheme is intended to encourage additional innovation and continual improvement throughout the development of the project by the designer–builder.

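The target-price arrangement described above can be illustrated with a simple calculation. The sketch below is a hypothetical example only; the 50/50 share ratio, target price, and out-turn cost are assumptions made for illustration and are not terms of any actual Highways Agency contract.

```python
def pain_gain_share(target_price: float, actual_cost: float,
                    contractor_share: float = 0.5) -> dict:
    """Split the difference between the target price and the actual cost.

    A positive contractor_gain is the contractor's share of a saving;
    a negative value is its share of an overrun ("pain"). The 50/50
    split is an illustrative assumption, not a contractual figure.
    """
    difference = target_price - actual_cost            # saving if positive
    contractor_gain = contractor_share * difference
    agency_gain = (1.0 - contractor_share) * difference
    return {
        "payment_to_contractor": actual_cost + contractor_gain,
        "contractor_gain": contractor_gain,
        "agency_gain": agency_gain,
    }


if __name__ == "__main__":
    # Hypothetical project: 100 million target price, 94 million out-turn cost.
    print(pain_gain_share(target_price=100.0e6, actual_cost=94.0e6))
    # The contractor and the agency each retain 3 million of the 6 million saving.
```

Under such a scheme the contractor's incentive to innovate comes from its share of any saving below the target price, while the agency is protected because overruns are shared in the same way.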
As a natural outgrowth of its design–build program, the United Kingdom extended private-sector involvement to include finance, operation, and maintenance. Through its DBFO contracts, the Highways Agency engages the private sector in the finance, construction, and operation of new roadway facilities (2). Under the terms of these contracts, the agency monitors the performance of the contractor during the construction, operation, and maintenance phases to ensure that contractual obligations are met. A penalty point system assesses points for failure to perform on the basis of specific threshold triggers. Should a specified number of penalty points be assessed, the agency has the right to terminate the contract. As an alternative remedy, the agency also has the right to correct any defaults and invoice the DBFO firm accordingly. To ensure that the road is returned in a condition fit for service, the DBFO contract also includes specific clauses regarding hand back, with a required residual life specified for each element of the project road. For example, at least 85% of the road pavement must have at least a 10-year residual life on hand back.

Similar partnership arrangements between the public and the private sectors for infrastructure construction and operation are increasingly being used throughout Europe. This trend is attributable to the expectation of the higher efficiency and the faster implementation provided by private-sector involvement and the need for private capital to be added to limited public resources. Even Germany, which has historically been more prescriptive than some of its European counterparts, has seen a rise in PPP arrangements since 2000, with German municipalities reporting average efficiency gains of 10% through the use of this type of contracting initiative (3).

Where performance contracting has perhaps taken the greatest hold outside the United States is in the area of performance-based maintenance of existing highway assets. The Highways Agency in the United Kingdom has established managing agent contractor (MAC) contracts under which a service provider (typically, a joint venture between a contractor and a consultant) has single-point responsibility for the management and the maintenance of an area network (4). The MAC contract allows the service provider to design and undertake all projects up to a value of £500,000 ($980,000 in 2006 dollars). The contract also incorporates performance specifications for routine and winter service and includes the requirement for the provider to measure and benchmark performance, with the expectation that the provider will achieve continual improvement. The asset modeling required to determine the interactions and dependencies between routine and periodic maintenance and rehabilitation treatments is undertaken by the provider in collaboration with the agency.

The MAC contracts used in the United Kingdom are typical of the performance-based maintenance contracts used by many national highway agencies faced with staffing shortages (5). These contracts typically include key performance indicators against which the contractor's performance is measured. Typical indicators may include

• The international roughness index, which measures the roughness of the road surface;
• The absence of potholes and the control of cracks and rutting;
• The minimum amount of friction between tires and the road surface;
• The maximum amount of siltation or debris in drainage systems; and
• The retroreflectivity of road signs.

For each performance indicator included in a contract, a response time and, often, a penalty are defined for noncompliance. For example, in Argentina, where the rehabilitation and maintenance of over 14,000 km (approximately 45%) of the national paved roadway network has been contracted out, a penalty of $440 (in 2000 dollars) is applied for each day that a pothole more than 2 cm deep is left open (6). Similarly, in other countries the contractor may receive a bonus payment for exceeding the specified targets.

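The response-time and penalty provisions just described lend themselves to straightforward bookkeeping. The sketch below is an illustrative calculation only: the pothole rate follows the Argentine figure cited above ($440 per day, in 2000 dollars), while the other indicators, rates, and response times are hypothetical assumptions rather than terms of any actual contract.

```python
from datetime import date

# Illustrative penalty schedule for a performance-based maintenance contract.
# The pothole rate follows the Argentine example cited in the text; the other
# indicators and rates are hypothetical assumptions.
PENALTY_PER_DAY = {
    "pothole_open": 440.0,            # pothole deeper than 2 cm left open
    "sign_retroreflectivity": 150.0,  # sign below the minimum retroreflectivity
    "drainage_silted": 100.0,         # drainage above the maximum siltation
}


def accrued_penalty(defects, as_of=None):
    """Sum per-day penalties for defects not corrected within the response time.

    Each defect is a dict giving the indicator name, the date it was recorded,
    the date it was repaired (None if still open), and the contractual
    response time in days.
    """
    as_of = as_of or date.today()
    total = 0.0
    for d in defects:
        closed = d["repaired"] or as_of
        days_open = (closed - d["recorded"]).days
        overdue_days = max(0, days_open - d["response_days"])
        total += overdue_days * PENALTY_PER_DAY[d["indicator"]]
    return total


if __name__ == "__main__":
    defects = [
        {"indicator": "pothole_open", "recorded": date(2007, 5, 1),
         "repaired": date(2007, 5, 10), "response_days": 2},
        {"indicator": "sign_retroreflectivity", "recorded": date(2007, 5, 3),
         "repaired": date(2007, 5, 4), "response_days": 7},
    ]
    print(accrued_penalty(defects))  # 3080.0: seven overdue days at $440 per day
```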
The extent to which such maintenance contracts involve the contractor in the engineering and design of the roadway asset varies from agency to agency. Some agencies are still using a hybrid approach of method- and performance-based specifications, whereas more advanced practitioners hold the contractor accountable for both rehabilitation and routine maintenance. For example, under the Argentine model of areawide performance-based contracts, contractors are responsible for the rehabilitation and subsequent maintenance of a roadway section for a defined period. Rehabilitation (e.g., slurry seal, surface dressing, overlay, and reconstruction) occurs during the first year. This is followed by maintenance activities (e.g., patching, cleaning, and sign renewal) in subsequent years. The contractor carries out the detailed design for all rehabilitation work on the basis of a minimum standard stipulated in the contract.

In keeping with this worldwide move toward performance contracting, the Netherlands National Public Works Department (NNPWD) has experimented with changing its business model considerably to move more toward the use of performance-oriented contracts for construction and maintenance. As described in the 2002 International Scan Tour Report, NNPWD planned to pilot a number of integrated contracts containing design, construction, maintenance planning, and maintenance tasks (7).

With this new approach, the private sector will bear more responsibility and risk in the contract. This will be done on the basis of risk analysis. The contractor will no longer have obligations based on detailed technical prescriptions; instead, these obligations will be based on functional contract requirements describing the desired performance of the work.

In the proposed NNPWD model, there is a relationship between the form of the contracts and the levels of specifications. As shown in Table 1, Level 2 requirements are applicable in maintenance performance, design–build, design–build–maintain (DBM), and PPP contracts. However, when the desired lifetime is longer than the contract time under DBM and PPP contracts, there might be risks that make it necessary to go down to Level 3, in which the contractor must ensure future construction behavior.

TABLE 1 Levels of Requirements to Be Used in Different Forms of Contracts
Specification levels: Level 1, drivers' wishes; Level 2, performance requirement; Level 3, construction behavior; Level 4, materials behavior; Level 5, raw materials and processing.
Traditional: X X
Maintenance performance: X x X X
Design–build: X X Con Con
DBM: X Con Con Con
PPP: X X Con Con Con
Note: X = in these contract types, this will be the first level; x = in many cases, these levels will be used for considerable parts of the project; during the initial preparation of a project, the agency should always start with Level 1 and then move to the appropriate specification level for the contract type; Con = the contractor must translate the contract specifications into instructions for personnel at Level 5 or even lower.

Clearly, performance-based contracting has become a viable contract option outside of the United States, as evidenced by its sustained growth and development, particularly in Europe, where it has become a common business model for some transportation agencies. If broad comparisons can be made between the European and U.S. business models for the transportation industry, they might highlight the characteristics listed in Table 2. This comparison does not represent the universe of contracting in either case; rather, the models represent the norm. The models that some European agencies use are more closely aligned with the U.S. model, whereas some agencies in the United States are actively moving toward the use of practices common in Europe (8).

TABLE 2 Comparison of European and U.S. Business and Contracting Models
European model:
• Large, vertically integrated companies compete for larger integrated service contracts (by using design–build, early contractor involvement, PPP, or other integrated contracts)
• Industry is highly involved with the owner in project development, management, and implementation
• Qualifications-based selection is widely used
• Specifications are more performance based
U.S. model:
• Specialty companies compete for smaller and separate design, construction, maintenance, or other contracts
• The owner retains more control over project development, management, and oversight
• Low-bid contracting is the standard procurement method
• Specifications are largely prescriptive in nature

STATE OF THE PRACTICE IN THE UNITED STATES

Although the viability of performance-based contracting has been proven outside of the United States, barriers to the widespread development of performance-based contracts in the United States stem from the separation of services and the low-bid system ingrained in public-sector construction, the long-standing use of prescriptive or method specifications by the U.S. highway industry, and pressure from the industry to package construction contracts to accommodate smaller, mom-and-pop highway contractors and disadvantaged businesses. Despite these barriers, performance-based contracting has taken root in the United States in recent years because of economic, societal, and organizational pressures within U.S. transportation agencies. U.S. transportation agencies have adopted contracting concepts from Europe and have also developed homegrown approaches. Design–build, warranties, roadway maintenance, and pavement performance specifications are the areas in which performance-based contracting has made inroads.

Design–Build

Design–build project delivery in the United States has been evolving rapidly over the past 10 years. As of January 2006, a report on the effectiveness of design–build contracting prepared for FHWA reported that more than 32 states have used or are considering the use of design–build on federal-aid highway construction projects (9) (Figure 2).

FIGURE 2 States with approved SEP-14 design–build projects, showing the number of approved projects in each state (9).

That report indicated that FHWA had approved more than 400 design–build projects nationwide under Special Experimental Project No. 14 (SEP-14), and a large number of these were still not complete as of the date that the study was published (10). Although high-profile megaprojects such as the Utah I-15 project and the Colorado T-REX project have gained national attention, only about 20% of the SEP-14 projects are more than $50 million (in 2003 dollars) in value. Although design–build represents a small portion of the projects, it is growing rapidly. Approximately 80% of the SEP-14 projects were approved after 1999.

One of the key advantages reported in the design–build effectiveness study was the potential to use performance specifications in design–build contracts to encourage greater innovation by the designer–builder and focus on project performance outcomes rather than conformance with product or prescriptive specifications. The excerpt from the I-15 design–build request for proposal in Figure 3 illustrates this emphasis on performance outcomes.

The Performance Specifications included in Section 6 of the Utah I-15 RFP establish requirements that the Contractor's work shall achieve. They are intended to provide clear requirements for how the finished product is to perform while allowing the Contractor considerable flexibility in selecting the design, means, materials, components, and construction methods used to achieve the specified performance. Additional standards and references are cited within the Performance Specifications under the headings "Standards" and "References" and within the body of the specification. The following distinctions apply. "Standards" constitute a further elaboration of the requirement. "References" constitute advisory or informational material, provided for the Contractor's benefit, that need not be followed, but in some cases provide acceptable solutions already in use by the Department. In most cases, the Standards are cited within the body of the Performance Specifications, and in a few cases, specific parts of References are cited as requirements.
FIGURE 3 Excerpt from the Utah Department of Transportation I-15 design–build request for proposal (2002).

On the basis of the results of the survey, more than half of the specifications for the design–build projects reported in the United States were entirely prescriptive. The remainder of the specifications used some combination of prescriptive and performance-based specifications. Only 3% of the respondents reported that they used purely performance-based specifications. The survey also indicated that traditional design–bid–build contracts incorporated a similar mix of prescriptive and performance-based specifications. The authors concluded that the rate of use of performance specifications has been growing for both delivery approaches but that there is still significant resistance from within owner organizations to relinquish control and to replace prescriptive requirements with performance requirements.

Recommendations for improving design–build programs on the basis of the lessons learned from design–build projects have included the following: (a) overprescription of design details or construction techniques may stifle potential innovation, and (b) to ensure that the contracting agency receives the expected product within budget, clear and concise performance specifications are essential to the success of a design–build contract.

Warranties and Performance Contracting

Warranties are not a new phenomenon in the United States. They have been used for construction projects since the late 1800s. In the United States, one of the earliest providers of warranties on roadway construction was Warren Brothers Paving. From 1890 to 1921, Warren Brothers Paving owned a patent on hot-mix asphalt (HMA) and warranted the material and the workmanship of its HMA pavements for up to 15 years. This practice was commonplace for cities until the early 1900s, when AASHO (now AASHTO) and state highway agencies began to develop and maintain method specifications to promote uniformity in specification use throughout the country.

With the issuance of an FHWA final rule in 1995 that allowed states to use warranties without approval, warranty use in the United States increased dramatically and then leveled off, especially for low-bid contracts. However, interest in warranties combined with nontraditional integrated services delivery in the United States appears to be growing.

The warranted components identified by various NCHRP synthesis studies include (11) the following:

• HMA concrete pavement,
• Portland cement concrete (PCC) pavement,
• Bridge components,
• Bridge painting,
• Chip sealing,
• Intelligent transportation system components,
• Landscaping and irrigation systems,
• Microsurfacing,
• Pavement markings, and
• Roofs.

Warranty provisions are performance based, in the sense that they incorporate performance indicators and thresholds to measure performance over a prescribed warranty period. Performance indicators and thresholds vary considerably among the agencies that have implemented them (Table 3). Warranty performance indicators are distresses, properties, or characteristics of the warranted component that can be measured and that are linked to the performance of the warranted component of the end product. For example, performance indicators for an asphalt pavement may include rutting and cracking. Thresholds are the allowable limits not to be exceeded over the performance period.

TABLE 3 Performance Indicators and Thresholds
Method or segment length: Mississippi, deduct points; Wisconsin, segment = 1/10 mi; Minnesota, segment = 500 ft.
Rutting: Mississippi, >5.0 points and >7.0 points; Wisconsin, ≥0.25 in. and <0.50 in.; Minnesota, ≥0.375 in. and ≥0.50 in.
Transverse cracking: Mississippi, >3.0 points and >5.0 points; Wisconsin, >25 cracks that average 1 in. in width per segment; Minnesota, three cracks per segment.

Practitioners agree that the basic benefits of a warranty are improved performance, the reduced need for inspections, and the potential for cost savings and innovation. However, if a warranty with material- and workmanship-type provisions is used, there is less of a likelihood that cost savings, innovation, and improved performance will be realized. In conjunction with a best-value design–build or an integrated services contract, the more performance oriented the warranty is, the greater the contractor's ability to control the design, material selection, and workmanship so that they meet or exceed the desired outcome (8).

In the larger context of performance contracting, warranties represent a transition between a prescriptive or material and method specification and performance specifications, in the sense that warranty provisions do not include all the factors that contribute to performance. For example, warranty provisions for pavements typically exclude subbase, drainage, and embankment features or other factors related to pavement design or construction methods that may affect performance. Although the scope of the warranted work and the performance indicators may not capture all of the factors that contribute to performance, they provide a tool that can be used to transfer the responsibility for performance to the private sector and ensure that the products of construction will meet the targeted performance thresholds for at least part of the life cycle of that product or component.
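In practice, monitoring a warranty of this kind reduces to comparing the measured distresses on each segment with the contractual thresholds and identifying the segments for which the contractor owes remedial work. The sketch below illustrates such a check; the indicators and threshold values are assumptions loosely patterned on the Minnesota column of Table 3 and do not reproduce any agency's actual warranty provisions.

```python
# Illustrative warranty thresholds, loosely patterned on the Minnesota column
# of Table 3. These values are assumptions for demonstration only.
THRESHOLDS = {
    "rut_depth_in": 0.375,     # rut depth at or above this value triggers remedy
    "transverse_cracks": 3,    # cracks per segment at or above this count
}


def segments_requiring_remedy(measurements):
    """Return the IDs of segments whose measured distresses reach a threshold.

    `measurements` maps a segment ID to its indicator values, for example
    {"seg_01": {"rut_depth_in": 0.40, "transverse_cracks": 1}}.
    """
    flagged = []
    for segment, values in measurements.items():
        if (values.get("rut_depth_in", 0.0) >= THRESHOLDS["rut_depth_in"]
                or values.get("transverse_cracks", 0) >= THRESHOLDS["transverse_cracks"]):
            flagged.append(segment)
    return flagged


if __name__ == "__main__":
    survey = {
        "seg_01": {"rut_depth_in": 0.40, "transverse_cracks": 1},
        "seg_02": {"rut_depth_in": 0.20, "transverse_cracks": 4},
        "seg_03": {"rut_depth_in": 0.10, "transverse_cracks": 0},
    }
    print(segments_requiring_remedy(survey))  # ['seg_01', 'seg_02']
```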
Performance Standards Under a PPP Agreement

A PPP, by definition, is an agreement between a public owner and a private entity to develop, design, build, finance, operate, and maintain a transportation facility or system for a specified service life on the basis of a defined set of agreed-upon performance standards. The term "PPP" encompasses the term "concession," which is used more commonly in Europe, in which a private operator purchases the right to develop, operate, and maintain a transportation facility for a specified number of years in return for a fee (tolls, fees, taxes, or other revenue sources). Typically, the financing is a blend of public and private funds or, in some cases, is wholly financed by the private sector.

The PPP contractor operates and maintains a facility on the basis of an agreed-upon set of performance standards, which apply the concept of a performance threshold to a long-term operation and maintenance period. PPP contracts also include the concept of handback or turnover requirements at the end of the operation and maintenance period (or lease) on the basis of a defined performance service level or residual life.

In parallel with their growing use in Europe, PPP contracts have been applied to a handful of high-profile projects in the United States, but they have recently gained new momentum as transportation system owners struggle to find resources to fund and deliver critical transportation projects. Some of the earliest examples of privately funded PPP projects developed in the United States were the Route 91 Express Lanes in California and the Dulles Greenway in Virginia, both of which were completed in 1995. More recent examples include the Port of Miami, Florida, Tunnel and the Trans-Texas Corridor (12).

The Texas Department of Transportation (TxDOT) established operational and maintenance performance standards that the concessionaire must meet during the operation of the Trans-Texas Corridor. The pavement performance standards define the minimum standards (thresholds) that the concessionaire will be required to meet to operate and maintain the facility. Corrective action must be taken if these thresholds are exceeded. The performance standards included (13) the following:

1. Pavement condition score: measurements and inspections are necessary to derive the pavement condition score (in accordance with TxDOT procedures).
2. Ruts on the main lanes, shoulders, and ramps: depths are measured with an automated device, in compliance with TxDOT standards, and a straightedge is used to measure the rut depths for localized areas.
3. Ride quality: the international roughness index is measured according to TxDOT standard Tex-1001-S (operating inertial profilers and evaluating pavement profiles).
4. Failures: instances of failures exceeding the failure criteria set forth in the TxDOT pavement management information system rater's manual, including potholes, base failures, punch-outs, and jointed concrete pavement failures, are recorded.
5. Edge drop-offs: edge drop-off levels compared with the level of the adjacent surface are measured physically.
6. Skid resistance: the ASTM standard test method for skid resistance testing of paved surfaces at 50 mph (ASTM E274) is performed with a full-scale smooth tire meeting the requirements of ASTM E524.

These PPP performance standards and thresholds are similar to the performance characteristics and thresholds specified for warranty contracts, but they extend the performance period in some cases well beyond the service life of the pavement or component, which would entail major rehabilitation during the operation and maintenance period. They also do not include exclusions that may void the agreement. To achieve these standards, PPP specifications are performance based rather than prescriptive. For example, TxDOT will not, in theory, specify the pavement design and type and will limit its review and approval functions.

Performance-Based Contracting for Maintenance

In traditional maintenance contracting, the owner directs a group of contractors to perform specific tasks. The owner specifies what work will be done and how it will be done. Under this traditional approach, the owner retains complete control over the direction of the work but also retains all of the risk that must be undertaken to achieve the desired system condition. This desired condition is not always defined, which can lead to maintenance by crisis rather than the taking of a programmatic approach to optimization of the condition of the system.

Under a performance-based maintenance contracting system, the owner specifies what it wants to achieve in terms of performance standards, and the contractor selects the methods, materials, and techniques that will best meet the performance standards at a systemwide level. The contractor manages and directs the work, and the owner agency monitors the progress to ensure that it is getting the performance and system conditions that it is paying for.
This arrangement promotes efficiency, the optimization of resources, and innovation and transfers the risk and responsibility for achieving performance goals from the owner to the contractor. Performance-based maintenance contracting is commonplace in Europe, Canada, and elsewhere, but its use in the United States has advanced through the implementation of long-term maintenance contracts by the Virginia DOT, TxDOT, and the District of Columbia. Several other agencies also plan to implement this approach.

The DC Streets project, which has been undertaken over the past 5 years by the District of Columbia DOT and FHWA, is an experimental project that uses federal-aid funds to lengthen the life cycle of the infrastructure and provide better service to the public. The project aims to rehabilitate the condition of the assets to a specified level and maintain them at or above the specified level under a performance-based preservation contracting environment. This $70 million federal-aid project was the first urban, performance-based asset preservation effort of its kind in the United States. This was also the first time that FHWA teamed directly with a city government on a program to preserve its highway infrastructure.

The project entails a private contractor that manages, rehabilitates, and maintains more than 75 mi of the NHS in the District of Columbia. The District's portion of the NHS contains the city's most important and heavily traveled roadways. The DC Streets contract covers all of the NHS roadways, with the exception of those maintained by the National Park Service. The contract includes all transportation infrastructure assets, right-of-way to right-of-way, with the exception of traffic signals. Specifically, the following maintenance categories are included: pavement structures, roadway cleaning, drainage, roadsides, traffic safety (i.e., guiderails, barriers, attenuators, pavement markings, signs, and lighting), roadside cleaning, roadside vegetation, bridges, tunnels, pedestrian bridges, weigh-in-motion stations, and snow and ice control. The contract includes rehabilitation and maintenance, but it excludes reconstruction. The contractor is scored on the basis of various performance criteria on a monthly and an annual basis, as illustrated in Table 4 (14).

TABLE 4 Scores for Year 4.5 Evaluation (14)
Maintenance category: score, maximum score, percent
Pavement structure: 8.6, 9.1, 95
Roadway cleaning: 7.7, 7.3, 106
Drainage: 7.3, 6.8, 107
Roadside (curbs, gutters, sidewalks): 7.6, 6.7, 113
Traffic safety (guardrails, barriers, attenuators): 8.1, 7.3, 111
Roadside cleaning: 8.0, 6.6, 120
Roadside vegetation: 5.8, 6.1, 96
Bridges: 7.3, 8.2, 89
Tunnels: 8.7, 8.7, 101
Traffic safety (pavement markings): 7.1, 6.9, 103
Signs: 6.1, 6.6, 92
Lighting: 6.4, 7.0, 91
Miscellaneous assets (pedestrian bridges, weigh-in-motion): 4.9, 5.5, 90
Snow and ice control: 7.2, 7.2, 100
Total score: 100.8

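The maximum scores in Table 4 sum to 100, so they can be read as category weights and the total score as the simple sum of the category scores. The sketch below reproduces that arithmetic; the reading of the maxima as weights is an inference from the published numbers, and individual percentages may differ slightly from the printed table because the published scores are rounded to one decimal place.

```python
# Category scores and maximum scores transcribed from Table 4.
SCORES = {
    "Pavement structure": (8.6, 9.1),
    "Roadway cleaning": (7.7, 7.3),
    "Drainage": (7.3, 6.8),
    "Roadside (curbs, gutters, sidewalks)": (7.6, 6.7),
    "Traffic safety (guardrails, barriers, attenuators)": (8.1, 7.3),
    "Roadside cleaning": (8.0, 6.6),
    "Roadside vegetation": (5.8, 6.1),
    "Bridges": (7.3, 8.2),
    "Tunnels": (8.7, 8.7),
    "Traffic safety (pavement markings)": (7.1, 6.9),
    "Signs": (6.1, 6.6),
    "Lighting": (6.4, 7.0),
    "Miscellaneous assets (pedestrian bridges, weigh-in-motion)": (4.9, 5.5),
    "Snow and ice control": (7.2, 7.2),
}


def evaluate(scores, flag_below=90.0):
    """Compute the overall total score and flag categories attaining less than
    flag_below percent of their maximum score."""
    total = sum(score for score, _ in scores.values())
    shortfalls = {category: round(100.0 * score / maximum, 1)
                  for category, (score, maximum) in scores.items()
                  if 100.0 * score / maximum < flag_below}
    return round(total, 1), shortfalls


if __name__ == "__main__":
    total, below_target = evaluate(SCORES)
    print("Total score:", total)               # 100.8, matching Table 4
    print("Categories below 90%:", below_target)
    # Flags the bridges and miscellaneous assets categories.
```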
Product Performance Specifications

In the performance specification arena, NCHRP and FHWA research initiatives have resulted in the development of homegrown performance-related specifications for HMA and PCC pavements. These specifications, considered the next generation of quality assurance specifications, are a mix of prescriptive and performance requirements. Prototypes have been piloted on the basis of traditional low-bid highway contracts but continue to evolve and are not widely implemented by the industry. Performance specifications have also been proposed for bridges, landscaping, intelligent transportation system components, and other features. An example of a performance specification framework for bridges might include the considerations described below.

Method or Prescriptive Techniques

Conventional AASHTO and state department of transportation bridge construction specifications generally require the contractor to follow prescriptive specifications that provide the physical configurations of various components made of specific materials. The physical properties of those materials are typically specified by reference to AASHTO or ASTM specifications and are confirmed by agency-controlled acceptance sampling and testing in the field and in the laboratory.

Performance Requirements

The framework for a performance specification for an entire bridge structure may be to construct a bridge that will safely carry traffic for a prescribed period of time on the basis of specific loading requirements, the location of the bridge, geometric constraints, environmental conditions, and specific codes and criteria. Material properties must be based on ASTM or AASHTO standards. The contractor must use work practices that maintain quality, safety, and efficiency and that do not result in any short- or long-term durability or performance impacts on the structure.

Other performance specifications can be developed for components of the bridge. These performance specifications would establish conditions for acceptance at the time of construction, but they also contain some elements of future performance to confirm the structural integrity and functionality of the structure. Examples of component specifications include the following:

• Deck smoothness, friction (safety), noise, and permeability could be specified. The specification could also control deck cracking and concrete spalling. Acceptable performance measures would be developed in each of these categories.
• Concrete spalling for components of the bridge other than the deck could be specified and measured in terms of an acceptable amount of spalling permitted over time.
• There are many examples of warranty specifications for paint that could be explored as well. Rather than specifying a specific painting system, it could be specified that the paint must last x years without repainting. Performance measures may be that, depending on the location on the structure, less than x percent of the structure area is peeling, cracking, rusting, blistering, and so forth.
• The performance of the rebar or prestressing strand exposure could be specified as well.
• The overall appearance and the functionality of expansion dams, joints, and bearings are other elements that could be specified.

Under a performance umbrella, the bridge is still designed according to the same parameters, but the contractor would be granted more freedom to determine the specific design and materials used to meet the performance requirements.

Testing and Confirmation of Performance Requirements

Loading conditions will need to be monitored to determine if the actual loadings are consistent with the design loadings. This can be performed through the use of monitoring devices embedded within the structure. Conventional bridge inspection techniques may be used to monitor the bridge's condition.

Proper maintenance is a key to the life expectancy of a structure. To avoid concerns or claims from the contractor that improper maintenance caused the structure to not meet the performance specifications, the department of transportation may require the contractor to perform the maintenance that it determines is required. This could lead to a warranty that includes both planned and unplanned maintenance.

NEW INITIATIVES

FHWA Highways for LIFE

The FHWA-sponsored Highways for LIFE (HfL) program (LIFE represents longer-lasting highway infrastructure using innovations to accomplish fast construction of efficient and safe highways and bridges) has recently developed a performance-based framework for designated HfL projects. Under the HfL program, performance contracting is defined as an approach by which a private contractor is responsible for achieving a defined set of goals and in which performance goals instead of methods are specified. The performance contracting framework allows agencies to define and communicate to construction contractors specifically what they and FHWA want to achieve. The construction contractors on HfL projects share the risks and rewards as project partners, and the defined performance goals and measurement methodologies provide a basis for the application of incentives and disincentives. For a performance contract to be successful, the contractor must be provided with flexibility on how to perform the work (15).

The purpose of this framework is to provide the states participating in HfL projects with processes and materials that they can use to develop a performance-based solicitation package. The framework also helps to provide a consistent basis of measurement between HfL projects for use at the program level. The framework includes processes and sample materials for

• Performance goals,
• Performance measurement methodologies,
• Best-value awards, and
• Enhanced low-bid awards.

The framework focuses on processes and materials that would be different from those used for a traditional low-bid solicitation process for a nonperformance-based construction contract. The basis of any performance contract is the set of performance goals that defines what the contractor is to achieve under the contract. The development of these goals is time-consuming and needs to be a group activity within the agency. A goal development process is described in Figure 4. The project team followed a similar process to develop a sample set of performance measures for HfL projects.

FIGURE 4 Process for defining performance goals and measures: define and recruit internal stakeholders; hold initial brainstorming sessions; apply the "what makes a good goal" test and refine the goals; determine the performance goal format (pass/fail or multilevel) and write draft goals; organize and categorize the goals; define and refine levels of service; establish the baseline; and test, refine, and finalize the goals and levels of service.
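The distinction that Figure 4 draws between pass/fail and multilevel goal formats carries through to how incentives and disincentives are applied once results are measured. The sketch below illustrates one way such a settlement might be computed; the goals, thresholds, and dollar amounts are entirely hypothetical and are not drawn from the HfL framework documents.

```python
# Hypothetical performance goals in the two formats noted in Figure 4.
# A pass/fail goal pays a single incentive or charges a disincentive; a
# multilevel goal pays according to the band in which the measured value falls.
# All goals, thresholds, and amounts below are illustrative assumptions.
PASS_FAIL_GOALS = {
    # goal: (threshold, incentive if met, disincentive if missed)
    "work_zone_crashes": (0, 50_000, -100_000),
}

MULTILEVEL_GOALS = {
    # goal: ordered list of (upper bound, payment) bands
    "ride_quality_iri_in_per_mi": [(60, 75_000), (75, 25_000), (95, 0),
                                   (float("inf"), -50_000)],
}


def settle(measurements):
    """Return the incentive (+) or disincentive (-) earned for each goal."""
    payments = {}
    for goal, (threshold, incentive, disincentive) in PASS_FAIL_GOALS.items():
        payments[goal] = incentive if measurements[goal] <= threshold else disincentive
    for goal, bands in MULTILEVEL_GOALS.items():
        for upper_bound, payment in bands:
            if measurements[goal] <= upper_bound:
                payments[goal] = payment
                break
    return payments


if __name__ == "__main__":
    measured = {"work_zone_crashes": 0, "ride_quality_iri_in_per_mi": 68.0}
    print(settle(measured))
    # {'work_zone_crashes': 50000, 'ride_quality_iri_in_per_mi': 25000}
```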

Strategic Highway Research Program Project R-07: Performance Specifications for Rapid Highway Renewal

The Strategic Highway Research Program (SHRP II) contains four target areas: safety, reliability, renewal, and capacity. The renewal track looks at improving the aging infrastructure through the use of rapid design and construction methods that would cause minimal disruption and produce long-lived facilities. Project R-07 targets the development of performance specifications.

As recently defined by the FHWA-sponsored Performance Specification Technical Working Group and adopted by SHRP II, a "performance specification" is an umbrella term that more and more describes a family of specification types. An overview of construction specification types is shown in Figure 5. Under the umbrella, one might see end-result specifications, performance-related specifications, performance-based specifications, warranties, and incentive-based specifications. Table 5 compares the various types of performance specifications.

FIGURE 5 Spectrum of construction specifications (Stds = standards; QA = quality assurance; QC = quality control; I/D = incentive/disincentive provisions; A + B = cost-plus-time bidding).

A performance specification attempts to define the performance characteristics of the final product or service and links them to construction, materials, and other items under the contractor's control. Performance characteristics may include end-result items such as pavement smoothness, bridge deck corrosion, and embankment slope stability, but they may also extend to other project performance objectives related to time, quality, safety, cost, or user satisfaction (16).

When the future performance of a product is estimated by using key construction tests and measurements linked to the original design by modeling and life-cycle costs, the specification is described as being performance related or performance based. When the condition of the product is measured after a predetermined time by using measurable parameters, the specification is known as a warranty. The Project R-07 definition of performance specifications expands the concept of performance specifications further to include incentive-based specifications for time, safety, or other measurable performance goals for a project.

The objective of specification writers is to translate the highway agency's intentions into clear, concise, complete, and correct (or unambiguous) instructions for the contractor. Today, more than ever, owners and practitioners recognize that this objective must also allow the contractor to exercise ingenuity in complicated rehabilitation and reconstruction projects. Less prescriptive specifications give the contractor more control and allow the contractor to exercise more creativity to meet project demands.

The overriding reason for the Project R-07 performance specification initiative is to craft a new language for communication between the owner and the contractor. It will translate the performance requirements of the designer into language that will allow the contractor to understand, plan, and build the project accordingly. This new language will address product performance requirements, the need to minimize disruption to traffic and the community, and the need to produce long-lived facilities.

TABLE 5 Performance Specification Types: Reduce Noise by Reducing Tire–Pavement Noise (PCC Pavement)
Key performance drivers under contractor control: Material properties, including the effects of large and coarse aggregates on noise for both micro- and macrotexture issues. Construction practices, including burlap drag, astroturf drag, tining, raking, mix consistency and delivery, and the impact on skid and smoothness if changes are made only for noise reduction.
End result through physical dimension measurements: Transverse and longitudinal tine spacing, depth, and variability, e.g., leave randomized spacing of 16 to 26 mm (approximately 5/8 to 1 in.). The required tine width is 2 to 3 mm (approximately 1/12 to 1/8 in.), and the required tine depth is 3 to 8 mm (approximately 1/8 to 5/16 in.). Included is a sampling plan and measurement technique.
Functional performance measurement (one time only): Noise generated from pavement–tire interaction shall not exceed a decibel value of x when it is measured by y placed z distance from the pavement under live (or controlled) traffic. Included are a sampling plan and percent-within-limits (PWL) analysis. Note that this is a one-time measurement and generally assumes, but does not make explicit, that downstream use will be adequate. May need material, construction, and end result.
Performance-related specification based on prediction of future value: Noise generated from pavement–tire interaction shall not exceed a decibel value of 0.9x when it is measured by y placed z distance from the pavement under live (or controlled) traffic. Included are a sampling plan and PWL analysis. Is based on a model (data) that shows that similarly designed and built pavements become noisier over a 5-year period.
Warranty specification: Noise generated from pavement–tire interaction shall not exceed a decibel value of 1.1x at the end of a 5-year period. An actual periodic measurement schedule is used, and the actual traffic numbers and percentage of trucks are determined. Included are a sampling plan and PWL analysis. Is based on a model (data) that shows that similarly designed and built pavements become noisier over a 5-year period, with corrections for traffic and other factors that influence the value required at the end of operation and the value required at the end of the performance period.
NOTE: Method and end result are both generally included in current specifications. Performance specifications must address the measurement technique, noise, and some level of understanding of the relationship between noise and other distresses, for example, smoothness and skid.
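Several of the specification types in Table 5 call for acceptance by percent-within-limits (PWL) analysis. Standard highway quality assurance practice estimates PWL from a quality index computed from the sample mean and standard deviation and then looks the result up in tabulated values that account for sample size. The sketch below uses a simple normal approximation in place of those tables, so it is illustrative only; the specification limit and the noise readings are assumed values.

```python
import math
from statistics import mean, stdev


def percent_within_limits(samples, upper_limit):
    """Approximate percent within limits (PWL) for a one-sided upper limit.

    A quality index q = (limit - mean) / s is converted to an estimated
    fraction of material below the limit with a normal approximation.
    Tabulated PWL values, which correct for small sample sizes, are used
    in practice; this function is a simplified illustration.
    """
    m, s = mean(samples), stdev(samples)
    q = (upper_limit - m) / s
    return 100.0 * 0.5 * (1.0 + math.erf(q / math.sqrt(2.0)))


if __name__ == "__main__":
    # Hypothetical tire-pavement noise readings (dB) against an assumed
    # specification limit of x = 102 dB.
    readings = [99.1, 100.4, 98.7, 101.2, 100.0]
    print(f"Estimated PWL: {percent_within_limits(readings, 102.0):.1f}%")
```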

SUMMARY: WHERE TO FROM HERE?

Despite the impediments, the use of performance-based contracting is advancing on several fronts in the United States, including design–build, warranties, PPPs, performance-based maintenance contracting, and various performance specification initiatives. The lesson learned from the experience gained to date in both the United States and overseas is that performance-based contracting is more effective when the contractor has greater flexibility, input, or control over factors that affect performance. This tends to move performance-based contracting in the direction of integrated services contracts, including design, construction, maintenance, and operations contracts, in which the use of prescriptive requirements is not economical or practical from a risk management perspective.

Are the contracting practices of Europe a window to the future? To some degree, the U.S. transportation industry is currently experimenting with and adopting many of the performance-based contracting practices that have proven to be viable there. Should the United States simply copy what Europe is doing? This would not be practical without organizational and cultural changes. The United States cannot simply transplant these practices without changing the inherent business model and ingrained organizational culture. Is performance-based contracting a viable contracting option in the United States? Absolutely! The United States has a real-life laboratory to learn about performance specifications and contracts and will develop viable homegrown versions. The United States is learning from Europe's efforts; is experimenting with its own performance contracting and specifications; and will adapt those with the greatest potential to improve performance, accelerate construction, and reduce the life-cycle costs of the U.S. transportation system.

REFERENCES

1. Mathews, T. Highways Agency Procurement Strategy. Nov. 2001. www.highways.gov.uk/business/10838.htm. Accessed June 5, 2007.
2. About DBFO's. Highways Agency, London. www.highways.gov.uk/roads/10908.htm. Accessed June 10, 2007.
3. Grabow, B., M. Reidenbach, M. Rottman, and A. Seidel-Schulze. Public Private Partnership Projects in Germany: A Survey of Current Projects at Federal, State, and Municipal Level. Federal Ministry of Transport, Building, and Housing, Berlin, 2005.
4. Managing Agent Contractor Contract: Contract Guidance Manual. Highways Agency, London, Dec. 2006.
5. Stankevich, N., N. Qureshi, and C. Queiroz. Performance-Based Contracting for Preservation and Improvement of Road Assets. Transport Note No. TN-27. World Bank, Washington, D.C., Sept. 2005.
6. Queiroz, C. Contractual Procedures to Involve the Private Sector in Road Maintenance and Rehabilitation. World Bank, Washington, D.C., Aug. 2000.
7. Contract Administration: Technology and Practice in Europe. Report FHWA-PL-02-033. Office of International Programs, FHWA, U.S. Department of Transportation, Oct. 2002.
8. NCHRP Synthesis of Highway Practice 20-07 (201): Use of Warranties in Highway Construction. Draft Report. Transportation Research Board of the National Academies, Washington, D.C., 2007.
9. FHWA Report to the U.S. Congress on the Effectiveness of Design–Build as Required by TEA-21 Section 1307(f). Report FHWA DTFH61-98-C-00074. FHWA, U.S. Department of Transportation, 2006.
10. SEP-14 Design–Build Project Summaries. FHWA, U.S. Department of Transportation, Jan. 2004. www.fhwa.dot.gov/programadmin/contracts/sep14a.htm.
11. NCHRP Report 451: Guidelines for Warranty, Multi-Parameter, and Best-Value Contracting. TRB, National Research Council, Washington, D.C., 2001.
12. Report to Congress on Public–Private Partnerships. U.S. Department of Transportation, Dec. 2004.
13. Seiders, J. Pavement Performance Standards and Specifications for Concessions. Materials and Pavements Section, Texas Department of Transportation, Austin, Oct. 31, 2006.

14. Robinson, M., E. Raynault, W. Frazer, M. Lakew, S. Rennie, and E. Sheldahl. The DC Streets Performance-Based Asset Preservation Experiment: Current Quantitative Results and Suggestions for Future Contracts. Presented at 85th Annual Meeting of the Transportation Research Board, Washington, D.C., 2006.
15. SAIC Corporation. Performance Contracting Framework Fostered by Highways for LIFE. FHWA, U.S. Department of Transportation, July 2006.
16. Performance Specifications: Strategic Roadmap. FHWA, U.S. Department of Transportation, 2004.
