
Case Studies in Cross-Asset, Multi-Objective Resource Allocation (2019)

Chapter 3 - Case Studies Illustrating Cross-Asset, Multi-Objective Resource Allocation Approaches


This chapter describes a set of four case studies performed to test the implementation of cross-asset resource allocation approaches and document lessons learned for cases in which an agency has implemented such an approach. The case studies were performed for the following agencies:

• Arizona Department of Transportation (ADOT)
• California Department of Transportation (Caltrans)
• Delaware Valley Regional Planning Commission (DVRPC)
• Maryland Department of Transportation (MDOT) and State Highway Administration (MDOT SHA)

Each case study description contains a summary of the scope of the case study, relevant background information, details on the prioritization approach implemented or tested by the agency, and lessons learned.

Case Study 1: Arizona Department of Transportation

Summary

ADOT recently implemented a novel MODA-based process to inform scenario planning as part of the update of its statewide long-range transportation plan (LRTP). The approach used a COTS program-level prioritization tool and a unique survey tool to evaluate the impact of, and support for, different resource allocation strategies, which informed development of ADOT's policy direction on the future allocation of resources among preservation, modernization, and expansion spending areas. As an initial exploration into MODA, ADOT chose not to perform project-level tradeoffs across asset classes.

Background

ADOT owns and maintains a state highway system composed of more than 18,400 lane miles. The department also provides varying levels of support to improve, maintain, and/or operate the state's transit systems, airports, local roads, and bicycle/pedestrian facilities. Together, these elements comprise a state transportation system that serves a population of nearly seven million citizens and supports a growing and diversifying state economy. Like many states, Arizona faces a significant gap between available revenues and the transportation investment needs it has identified to maintain and improve its highway system, and it faces tough tradeoff decisions when it allocates its resources to meet different types of needs.

To both guide its investment decisions and comply with federal and state requirements, ADOT develops statewide LRTP updates about every 5 years.
In February 2018, the Arizona Transportation Board approved ADOT's most recent plan update, entitled "What Moves You Arizona" ("WMYA") 2040, which covers a 25-year planning horizon from 2016 to 2040. The purpose of WMYA 2040 is to provide information and direction to the state, MPOs and councils of government (COGs), and ADOT's other partners about transportation needs, available revenues, investment priorities, and anticipated system performance.

The WMYA 2040 plan builds from its predecessor plan (WMYA 2035), which shifted ADOT's planning approach from a project to a policy focus. Specifically, WMYA 2035 established what is known as the Recommended Investment Choice (RIC). The RIC identifies a desired ADOT allocation of resources between highway preservation, modernization, and expansion spending. At the time WMYA 2035 was developed, ADOT had just begun to establish system performance metrics and had limited data, analysis, and tools to inform tradeoff decisions between different resource allocation strategies. Thus, development of the WMYA 2035 RIC was based largely on qualitative assessments of expected system performance under different resource allocation scenarios known as Alternative Investment Choices (AICs).

For the WMYA 2040 plan update, ADOT decided to establish a new RIC through a more data-driven and performance-informed approach that included both quantitative and qualitative forecasts of future system performance. To do so, ADOT used its COTS project prioritization software to develop and evaluate different AICs based on an assessment of performance tradeoffs across a wide range of system considerations. A consultant team highly familiar with the software developed this analytical process and applied it while working closely with ADOT staff. This effort then informed ADOT's development of the agency's final RIC, which defines how the department intends to allocate nearly $1 billion a year for transportation investment. The following is summarized from the Arizona Long-Range Plan Update, Final Working Paper #5: Recommended Investment Choice (RIC) Development, published in 2017 (1).

Approach Overview

ADOT's approach to conducting a MODA-based scenario analysis centered on developing a series of AICs that represented different perspectives on how ADOT's resources could or should be allocated in the future. The AICs, in effect, served as data points to inform development of the final RIC. The steps for developing the AICs and RIC are briefly described below, with detailed documentation of each step provided in the subsequent sections of this case study.

• Assembling the Building Blocks: To support development of the AICs, ADOT worked from goals and objectives established earlier in the planning process to define a strategic framework of investment areas and associated performance measures.
• Evaluating Current Spending: To provide a baseline for comparison, a "Current Plan AIC" was developed by extrapolating planned spending in the Maricopa Association of Governments (MAG) and Pima Association of Governments (PAG) Metropolitan Transportation Plans (MTPs) and ADOT's most recent 5-year Capital Plans (for Greater Arizona). Note that MAG is the Phoenix-area MPO, PAG is the Tucson-area MPO, and "Greater Arizona" is the rest of the state outside the boundaries of these two MPOs.
• Scenario Analysis: Application of a COTS MODA tool to enable ADOT staff and stakeholders to evaluate the system performance implications of different investment scenarios and develop consensus around a recommended resource allocation approach known as the "Agency AIC."
• Gathering Public Input: Use of an online survey tool to educate citizens about transportation investment tradeoffs and to gain widespread public input about relative priorities and how to spend limited transportation dollars. The results from the survey were then translated into a "Public AIC."
• Developing the Final RIC: The final overall statewide and Greater Arizona RICs were developed through an iterative process. The MAG and PAG RICs were developed based on the MTPs for the two respective regions.

The RIC Building Blocks

The following section describes how earlier elements of the WMYA 2040 plan development effort were brought together and refined to support application of the MODA tool to develop the AICs and RICs.

Needs

Estimates of ADOT's 25-year spending needs were completed in February 2017; their development required significant technical modeling as well as synthesis and extrapolation of various studies and research. The estimates cover needs on Arizona's State Highway System (SHS), including costs for pavement and bridge preservation, modernization (e.g., upgrading highways, safety improvements, and ITS deployment), and expansion (e.g., added capacity, new alignments, and new interchanges). In addition, ADOT maintenance staff estimated optimal spending on operations and maintenance (O&M) for the SHS (e.g., patching potholes, fixing guardrails, mowing, and snow removal) based on their expert opinions about how much additional funding is needed to address shortfalls on the current system, as well as the additional needs created by future expansion of urban highways. These original needs estimates were subsequently refined based on new information, resulting in the final estimates presented in Table 3-1.

Table 3-1. 25-year statewide highway capital needs.

Investment Category               25-Year Need (Millions of Constant $)
Preservation
  Pavement                        $7,902
  Bridge                          $1,334
  Subtotal Preservation           $9,236
Modernization
  Highways                        $4,273
  Bridge                          $400
  Safety                          $1,934
  ITS/Technology                  $3,255
  Subtotal Modernization          $9,862
Expansion
  Existing Highway Expansion      $17,561
  New Roads                       $13,770
  New Bridges                     $403
  New Interchanges                $2,320
  Subtotal Expansion              $34,054
Total 25-Year Highway Needs       $53,152

Revenues

The revenue figure used to support development of the AICs and RIC was derived from the 25-year revenue estimate completed in April 2017. The baseline forecast provides an estimate of funding anticipated to be available for capital spending. After allocations for O&M and support to local transit agencies, the forecast indicates about $23 billion in constant dollars will be available for highway capital spending over the 25-year WMYA 2040 planning horizon ($923 million annually).

Strategic Framework

The WMYA 2040 goals and objectives were established early in the planning effort and served as the strategic framework for AIC and RIC development. As illustrated in Figure 3-1, the framework was based on three system-related goal areas: (1) Mobility, Reliability, Accessibility; (2) Safety; and (3) Preservation. These three areas were expanded to identify specific "Investment Types" and associated "Performance Metrics" that could be used to help quantify future system performance under different allocation schemes. The investment areas were then rolled up into three "Major Investment Categories."

Figure 3-1. Strategic framework for AIC/RIC development (WMYA system goals, investment types, performance metrics, and major investment categories).

Current Plan AIC

To provide perspective for development of AICs and RICs for Greater Arizona, MAG, and PAG, ADOT developed a snapshot of how future resources would be allocated to different investment types if no changes were made to current and planned capital spending of ADOT resources on the SHS. "Investment categories" refer to the three major spending areas of preservation, modernization, and expansion. "Investment types" are more specific types of spending that make up the major investment categories. Table 3-2 shows the resulting annual average allocations for the three regions by major investment category (expansion, modernization, preservation, and O&M) and investment type (safety, bridge, pavement, expansion, technology, accessibility, and O&M), and documents what was used in the RIC development process as the Current Plan AIC. (Note that because the Current Plan AIC is based on shorter-term funding, the annual total after O&M is slightly different than the revenue forecast.)

Table 3-2. Current plan AIC (average annual spending in constant dollars, $ millions; percentages are shares of each region's total).

Investment Type            Statewide        Greater Arizona   MAG            PAG
Safety and Modernization   $96 (9%)         $82 (15%)         $13 (3%)       $2 (2%)
Bridge                     $40 (4%)         $40 (7%)          $0 (0%)        $0 (0%)
Pavement                   $224 (21%)       $218 (40%)        $6 (1%)        $0 (0%)
Expansion                  $482 (45%)       $45 (8%)          $343 (84%)     $93 (78%)
Technology                 $12 (1%)         $3 (1%)           $7 (2%)        $2 (2%)
Accessibility              $67 (6%)         $21 (4%)          $23 (6%)       $23 (19%)
O&M                        $152 (14%)       $138 (25%)        $14 (3%)       $0 (0%)
TOTAL                      $1,073 (100%)    $547 (100%)       $406 (100%)    $120 (100%)

Major Investment Category  Statewide        Greater Arizona   MAG            PAG
Preservation               $264 (25%)       $258 (47%)        $6 (1%)        $0 (0%)
Modernization              $175 (16%)       $106 (19%)        $43 (11%)      $27 (23%)
Expansion                  $482 (45%)       $45 (8%)          $343 (84%)     $93 (78%)
O&M                        $152 (14%)       $138 (25%)        $14 (3%)       $0 (0%)
TOTAL                      $1,073 (100%)    $547 (100%)       $406 (100%)    $120 (100%)

Scenario Planning

The RIC development process included a scenario planning webinar and workshop with ADOT staff and stakeholders. This process incorporated ADOT's COTS project prioritization system to assess what the state's transportation experts think Arizona's highway investment priorities should be, and it included the elements described below.

Performance Curves

The first step in the scenario planning process was to establish performance curves that define anticipated performance outcomes at different spending levels for a specific investment type and to enable scenario planning participants to see how changes in resource allocation strategies potentially affect system performance. This helped participants make more informed decisions regarding tradeoffs between spending on different investment types. The methodological elements employed to develop performance curves for each investment area in support of the WMYA 2040 scenario planning effort are summarized in Table 3-3.

Table 3-3. Investment curve methodology.
• Expansion: Travel demand model runs for different levels of building out projects identified in Corridor Profile Studies were used to develop performance curves based on changes in projected daily hours of delay, average truck speed, and travel time index.
• Preservation: Current system performance data and deterioration curves were used to project the percent of pavement and bridge decking in "poor" condition by system tier (Interstate vs. National Highway System).
• Safety: Percent of direct safety needs met (based on staff opinions).
• Technology: Percent of ITS/technology needs met (based on staff opinions).
• Accessibility: Percent of new/improved interchange needs met (based on staff opinions).
• O&M: Percent of identified O&M needs met (based on staff opinions).

Pairwise Comparison

Prior to the scenario planning workshop, a webinar was conducted to explain the scenario development process and provide directions for identifying the relative priority of the following six investment types through a pairwise comparison survey. Note that a pairwise comparison asks respondents to identify the relative priority between two different options, such as safety versus technology investment (a simplified illustration of turning such responses into weights follows the list below). During the scenario planning exercise, O&M was considered a fourth "Major Investment Category." The six investment types are as follows:

• Expansion,
• Preservation,
• Safety,
• Technology,
• Accessibility, and
• O&M.
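The chapter does not detail how ADOT's COTS tool aggregates pairwise responses into priority weights. The sketch below shows one simple way such a survey could be aggregated (a "win share" tally); the response data and the aggregation rule itself are illustrative assumptions, not ADOT's actual method.

# Illustrative only: ADOT's actual aggregation method is not documented here.
# Each survey response records, for every pair of investment types, which one the
# respondent rated as the higher priority. Weights are the share of comparisons
# each type "wins," normalized to sum to 1. All response data are hypothetical.
from collections import defaultdict

TYPES = ["Expansion", "Preservation", "Safety", "Technology", "Accessibility", "O&M"]

responses = [
    {("Expansion", "Preservation"): "Preservation",
     ("Expansion", "Safety"): "Safety",
     ("Preservation", "Safety"): "Safety"},
    # ... one dict per respondent, one entry per pair actually compared
]

wins = defaultdict(float)
appearances = defaultdict(int)
for response in responses:
    for pair, winner in response.items():
        wins[winner] += 1.0
        for t in pair:
            appearances[t] += 1

raw = {t: wins[t] / appearances[t] if appearances[t] else 0.0 for t in TYPES}
total = sum(raw.values()) or 1.0
weights = {t: round(raw[t] / total, 3) for t in TYPES}
print(weights)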

The results of this survey led to an average priority weighting for each investment type, as illustrated in Figure 3-2. Priorities are reflected by how respondents weighed the importance of different improvement types when asked to allocate funding among them. The optimization module in the agency's project prioritization system then was used to translate the weighting results into a "Baseline Allocation of Resources." The resulting percentages of total funding that would be allocated to each major investment category are shown in Figure 3-3.

Figure 3-2. Average investment type weighting (Preservation 25%, Safety & Modernization 25%, O&M 20%, Technology Deployment 14%, System Expansion 8%, Accessibility 8%).

Figure 3-3. Baseline allocation of resources by major investment category (Expansion 39%, Modernization 21%, Preservation 20%, O&M 20%).
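The optimization step that translated the average weights into the Baseline Allocation of Resources was performed by the COTS tool and is not specified in this report. As a rough illustration of the idea, the sketch below spreads a fixed budget in small increments, each time funding the investment type with the largest weighted marginal gain on its performance curve. The curves and step size are hypothetical placeholders; the weights are loosely based on the averages shown in Figure 3-2.

# Illustrative only: the COTS tool's optimization module is not documented in this
# report. Allocate a fixed budget greedily by weighted marginal performance gain.
import numpy as np

budget, step = 1000.0, 10.0   # $ millions per year, matching the $1 billion exercise

spend_pts = np.array([0, 100, 200, 300, 400, 500])   # spending levels, $ millions
curves = {                                            # hypothetical 0-100 performance scores
    "Preservation":  np.array([20, 55, 75, 85, 90, 92]),
    "Expansion":     np.array([30, 40, 50, 58, 64, 68]),
    "Safety":        np.array([40, 70, 85, 92, 95, 96]),
    "Technology":    np.array([10, 35, 55, 65, 70, 72]),
    "Accessibility": np.array([25, 45, 60, 68, 72, 74]),
    "O&M":           np.array([50, 70, 82, 88, 91, 92]),
}
weights = {"Preservation": 0.25, "Safety": 0.25, "O&M": 0.20,
           "Technology": 0.14, "Expansion": 0.08, "Accessibility": 0.08}

def perf(area, spend):
    """Interpolate the performance curve at a given spending level."""
    return np.interp(spend, spend_pts, curves[area])

allocation = {area: 0.0 for area in curves}
for _ in range(int(budget / step)):
    # Fund the area whose next increment adds the most weighted performance.
    best = max(curves, key=lambda a: weights[a] * (perf(a, allocation[a] + step) - perf(a, allocation[a])))
    allocation[best] += step

print({area: round(amount) for area, amount in allocation.items()})

In the actual exercise, the curves came from travel demand model runs, deterioration models, or percent-of-needs-met estimates (see Table 3-3), and the COTS tool carried out the optimization.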

Scenario Workshop

The results from the pairwise comparison were presented at a scenario planning workshop held in August 2016, which was attended by more than 60 transportation stakeholders from across the state. Attendees were assigned to breakout groups and discussed their reactions to the pairwise results and associated Baseline Allocation of Resources. Each group then developed consensus around a recommended allocation of resources to different investment types and associated forecasts of system performance (summarized in Table 3-4). The workshop average then became known as the "Agency AIC" (since most of those who participated were either ADOT staff or personnel from MPOs and COGs) and became an important data point for development of the final RIC; a roll-up of this AIC to the major investment categories is provided in Table 3-5. Note that, for simplicity, an annual funding level of $1 billion was used for the MODA scenario exercise.

Table 3-4. Scenario planning workshop results ($ millions, annual resource allocations).

Investment Type   Baseline   Group 1   Group 2   Group 3   Workshop Average   Funding Range
Safety            $77        $77       $50       $72       $66                $50 to $77
Bridge            $50        $50       $55       $61       $55                $50 to $61
Pavement          $150       $269      $315      $200      $261               $200 to $315
Expansion         $295       $342      $235      $295      $291               $235 to $342
Technology        $134       $95       $80       $126      $100               $80 to $100
Accessibility     $96        $48       $65       $50       $54                $48 to $65
O&M               $198       $120      $200      $196      $172               $120 to $200
TOTAL             $1,000     $1,000    $1,000    $1,000    $1,000

Table 3-5. Agency AIC ($ millions).

Major Investment Category   Annual Allocation
Preservation                $316
Modernization               $220
Expansion                   $291
O&M                         $173
TOTAL                       $1,000

General Public's Investment Priorities

To gain the general public's input about investment priorities and inform development of the RIC, ADOT conducted an online interactive community engagement effort for two months in fall 2016. The survey used a web-based software tool and asked respondents to provide input on highway investment priorities (see Figure 3-4 for an example). The tool provided illustrative information on the type of improvements (but not performance) that might be implemented to address the different investment type options.

The online survey provided significant input on people's investment priorities. As illustrated in Figure 3-5, the public viewed safety as most important, followed by expansion. However, both O&M and preservation also ranked high and, when considered together, are a high priority for the public. This is reinforced by the results shown in Figure 3-6: given $100 to spend on transportation, the general public would spend nearly half the funding on preservation and O&M alone.

Figure 3-4. Example of survey page.

Figure 3-5. Investment type priorities.

The survey results were then translated into another potential resource allocation strategy (using the same $1 billion per year in available annual funding that was used with the Agency AIC). This new strategy became known as the "Public AIC" and provided an alternative data point to the Current Plan and Agency AICs to inform RIC development (see Tables 3-6 and 3-7).

Figure 3-6. Survey "allocation of funding" results.

Table 3-6. Public AIC by investment type ($ millions).

Table 3-7. Public AIC by major investment category ($ millions).

The WMYA 2040 RIC

Development of the final RIC was heavily informed by the three AICs (summarized in Figure 3-7), which provided important perspectives on both how ADOT's limited resources could be allocated and how different interests view the relative priority of different investment types and associated system performance outcomes.

The ADOT Priority Planning Advisory Committee (PPAC) took the lead role in building from the AICs to develop the final WMYA 2040 RIC that was recommended to the Arizona Transportation Board. The resulting final WMYA 2040 RIC, shown in Figure 3-8 and Figure 3-9, reflects three important elements:

• Strategic Framework Refinement: Because annual O&M spending levels are determined independently by the Arizona Legislature and ADOT does not have the ability to allocate these funds to highway capital spending, it was determined that O&M needs should be excluded from the RIC.
• Asset Management Analysis: An analysis of how pavement and bridge conditions would change over the 25-year planning horizon with different funding levels was conducted to better inform RIC decisions. The analysis found that, without increases to current preservation spending levels, a life-cycle approach to maintaining SHS performance could not be achieved, and the result would be both long-term deterioration of the system and much higher overall costs to preserve the system in the long term. A "life-cycle approach" to system preservation seeks to optimize the timing of pavement and bridge treatments to achieve the lowest overall preservation costs over the life of facilities and reflects the concept that $1 of deferred minor treatments today may require $4 to $5 of major treatments at a later date. As a result, ADOT focused the resources the department controls on preservation, safety, and, to the extent possible, other needed modernization improvements to the existing system.
• Two-Tier RIC Structure: To address differences in how ADOT funding controlled by MAG and PAG would be spent versus how ADOT-controlled funding would be allocated, ADOT created two tiers of RICs. The first tier provides guidance on how ADOT's overall capital funding should be allocated to preservation, modernization, and expansion. The second tier identifies how ADOT funds will be allocated in the MAG and PAG areas (based on their respective MTPs) and in the Greater Arizona area.

Figure 3-7. Summary of AICs.

Figure 3-8. WMYA 2040 RIC.

Figure 3-9. Greater Arizona, MAG, and PAG RICs.

Observations and Considerations

The ADOT MODA-based scenario planning effort was highly successful in leading the department to a more informed decision about how it should allocate its scarce highway resources in the future. Positive outcomes of the initiative included:

• Both stakeholders and staff liked the ability to see the performance tradeoffs in real time as funding was shifted from one investment area to another. For example, as participants shifted funding from preservation to expansion, they could see the projected decline in pavement/bridge conditions in return for anticipated improvements in a travel time index.
• The approach brought people to the table in a way that had not been achieved before and educated them about the implications of criteria weighting, performance measurement, tradeoffs, and target setting.
• The success of the approach created a foundation for using a MODA-based process to implement WMYA 2040 in ADOT's programming process.
• The establishment of a performance curve for mobility (albeit a rudimentary curve based on a few data points) that showed how delay and reliability would change as spending increased or decreased represented a major advancement in MODA capabilities.
• The MODA exercise and public outreach that followed helped build support for tough decisions about limiting expansion spending to avoid future decline in system preservation performance.

As can be expected with an approach that breaks new ground, ADOT staff and stakeholders also identified real or potential issues with the MODA-based scenario planning approach, as well as areas where improvements could be made. These include:

• Many participants struggled with the pairwise comparison, either because they were not comfortable with some of the comparisons (e.g., how can you compare preservation and safety?) or because they did not understand its purpose.
• It is important for participants in the planning process to recognize that the magnitude of performance changes from funding increases and decreases will likely vary considerably across programs, which can be hard for non-transportation professionals to understand.

• There was confusion about what some of the investment categories referred to, and this may have led to inaccuracies in the results. In particular, many did not know the difference between "preservation" and "O&M," which may have led to preservation being underweighted.
• It was difficult to explain how the results of the pairwise comparison translated into the weighting for the various MODA criteria and, in turn, how the weighting translated into baseline spending for the different investment categories.
• Because ADOT had not yet set performance targets for the different goal areas, the thresholds for "good," "fair," and "poor" performance were somewhat arbitrarily set; these thresholds ended up having a strong, unintended impact on people's opinions about how funds should be allocated.
• The effectiveness of scenario planning is affected by the ability to develop meaningful curves, which varies depending on the availability of data and analytical methods to forecast future system performance. In some cases, good outcome-oriented performance curves can be established. In other cases, however, curves may simply need to reflect the percent of identified needs met at a given allocation level and therefore provide less value in terms of tradeoff comparisons.
• ADOT's MODA approach of stopping at the major investment category level did not enable users to consider synergies in spending, such as the benefits to safety or mobility that might come from increased preservation spending.

Lessons Learned

Lessons learned from ADOT's experience that other agencies may wish to consider in implementing their own MODA approaches (whether for scenario analysis or project selection applications) include:

• Make sure criteria category definitions are easy to understand and do not create gaps or overlaps in the types of improvements they address.
• Take great care in setting performance thresholds; they tend to drive the outcomes of MODA analyses.
• While the underlying performance curves that drive the MODA analyses are important and should be as valid as possible, don't let the desire for perfection stand in the way of using a curve that is imperfect but that still advances the rigor of the analysis.
• If possible, look for ways to integrate consideration of investment synergies across performance areas in the analysis.
• Avoid presenting results without supporting information to show how a given set of inputs leads to specific outputs.
• Agencies may not have the internal expertise to configure software tools to conduct this analysis and, if they do not, will need to either create the capacity in-house or use well-informed consultants to support similar efforts.
• Recognize that a MODA process should probably not create final decisions; it should create outputs that are considered with other factors to arrive at final decisions.

Case Study 2: California Department of Transportation

Summary

The California Department of Transportation (Caltrans) is in the process of implementing an improved approach for prioritizing projects in the California State Highway Operation and Protection Program (SHOPP).
The approach developed assigns a monetized benefit to projects across five different goals. The following paragraphs describe Caltrans' efforts to implement the new approach, which was still in testing as of the completion of this report. At that time, Caltrans was testing two approaches: one based on an objective of maximizing overall utility (knapsack algorithms) and another based on maximizing achievement of each objective using DEA.

Background

Caltrans funds repair, preservation, and safety improvements on the California State Highway System (SHS) through the California SHOPP. The SHS is composed of approximately 50,000 lane miles, and the 2018 SHOPP will implement $17.96 billion in projects over 4 years (2). The portfolio of projects is updated every two years and is composed of projects in the following categories:

• Major Damage Restoration,
• Collision Reduction,
• Regulatory Mandates,
• Mobility Improvement,
• Bridge Preservation,
• Roadway Preservation,
• Roadside Preservation, and
• Facility Improvement.

The current process for selecting projects for inclusion in the SHOPP starts with describing and quantifying the rehabilitation and reconstruction needs on the SHS. These needs are detailed in the State Highway System Management Plan (SHSMP), which covers a 10-year period. The Transportation Asset Management Plan (TAMP) then presents the current inventory and condition of SHS assets, after which the fiscal capacity and funding forecast is developed, which in turn identifies how much will be available for each of the four years covered by the SHOPP. Finally, projects are selected that respond to the identified needs, fit within the fiscal constraints, and help achieve the performance targets in the TAMP.

The process currently uses data and tools to select projects that will best meet the agency's performance goals. In addition, Caltrans incorporates input from stakeholders and subject matter experts as well as analysis from individual asset managers. In order to work toward an even more performance- and data-driven approach to project selection and prioritization, Caltrans seeks to develop and implement a MODA-based approach.

The SHOPP is inherently a multi-objective program. All SHOPP projects focus on the existing SHS; however, SHOPP investments are motivated by a range of factors and are intended to accomplish a range of different outcomes. MODA explicitly helps address Caltrans' diverse set of goals and helps determine what investments to make by considering all of the state's goals and objectives. In addition, incorporating consideration of multiple objectives into project selection can improve outcomes. Ultimately, by forecasting the progress that proposed projects are expected to make toward achieving multiple objectives, Caltrans seeks to implement better projects that are designed with those objectives in mind. For instance, a project motivated by the need to improve pavement conditions might incorporate safety improvements, enhancements to facilitate bicycle and pedestrian mobility, and upgrades to drainage systems. Addressing multiple needs with one project helps accomplish more, particularly with respect to objectives that may otherwise be overlooked. Also, this approach may result in increased efficiency by reducing the number of times work is performed at a given location, reducing costly work zones, detours, and closures (3).

Approach Overview

Using MODA, potential projects are evaluated quantitatively in terms of how well they support each of Caltrans' goals. The goals, adapted from the Caltrans 2015–2020 Strategic Plan, are shown in Exhibit 3-1.

Exhibit 3-1. Caltrans goal areas.

Goal 1: Safety
Goal 2: Air Quality and Health
Goal 3: Stewardship and Efficiency
Goal 4: System Performance and Economy
Goal 5: Sustainability and Livability

Each of these goals has one or more specific objectives. Projects are characterized based on the degree to which they achieve each objective using a benefit function. The total benefit of a project is the sum of the benefits calculated for each goal.

The benefit functions are defined such that the values predicted are analogous to the monetized annual benefits a project is expected to yield to Caltrans, road users, and society as a whole if the project is performed. These benefits are summed and compared to a base case in which the project is deferred for one decision period (2 years). The approach thus yields an approximation of annual benefits related to each goal resulting from a project, as depicted in Figure 3-10. This approach facilitates comparisons between goals and leverages models and parameters developed for benefit–cost analysis and/or other applications. Note, however, that the benefit functions are not intended for use in predicting actual total benefits over the life cycle of a project and cannot be used to establish whether or not a project is justified.

Once the benefits are calculated, they can be used to help prioritize projects for funding. A basic approach is to select the projects that maximize overall benefit subject to a budget constraint and/or other constraints. If there is only one budget constraint, projects can be prioritized by decreasing benefit–cost ratio. Another use of the benefit functions is to help communicate the return of a given set of projects. Figure 3-11 exemplifies how the overall benefit resulting from a portfolio of projects can be represented by goal area.

Goals and Objectives

The following sections describe each of the five goal areas outlined above. For each goal, the section describes the objectives for that goal, specific measures recommended for characterizing the objective, and major assumptions/qualifications regarding the measures.

Goal 1: Safety

Safety addresses the importance of reducing property damage, injuries, and fatalities for both motor vehicle occupants and other non-motorized vehicle users of the SHS. Many SHOPP projects incorporate safety upgrades, such as intersection improvements, cable barriers, guardrails, or other features to better protect drivers, cyclists, pedestrians, and/or maintenance workers. Table 3-8 provides the objectives of this goal.

Figure 3-10. Calculating project benefits.

Figure 3-11. Overall benefits of a portfolio of projects.
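The benefit functions themselves are not reproduced in this chapter. As a rough illustration of the aggregation depicted in Figure 3-10, the sketch below treats a project's benefit for each goal as the difference between its monetized annual benefit if built and under the 2-year deferral base case, then sums across goals and divides by cost. The Project structure, helper methods, and all numbers are hypothetical.

# Minimal sketch of the benefit aggregation described above. All numbers and the
# Project structure are hypothetical; the real benefit functions are Caltrans-
# specific models not reproduced here.
from dataclasses import dataclass, field

GOALS = ["Safety", "Air Quality and Health", "Stewardship and Efficiency",
         "System Performance", "Sustainability and Livability"]

@dataclass
class Project:
    name: str
    cost: float                          # $ millions
    # Monetized annual benefits ($ millions/year) by goal if the project is built now...
    build: dict = field(default_factory=dict)
    # ...and under the base case in which it is deferred one decision period (2 years).
    defer: dict = field(default_factory=dict)

    def annual_benefit(self) -> float:
        """Total annual benefit relative to the deferral base case, summed over goals."""
        return sum(self.build.get(g, 0.0) - self.defer.get(g, 0.0) for g in GOALS)

    def benefit_cost_ratio(self) -> float:
        return self.annual_benefit() / self.cost if self.cost else 0.0

# Hypothetical example project
p = Project(
    name="Route 99 pavement rehab",
    cost=12.0,
    build={"Safety": 1.4, "Air Quality and Health": 0.2,
           "Stewardship and Efficiency": 2.5, "System Performance": 0.9},
    defer={"Stewardship and Efficiency": 0.8},
)
print(p.annual_benefit(), p.benefit_cost_ratio())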

Table 3-8. Goal 1 objectives.
• Objective: Minimize property damage, injuries, and fatalities for vehicle users (Vehicle User Safety). Measure: Annual Vehicle User Crash Savings.
• Objective: Minimize property damage, injuries, and fatalities for non-motorized vehicle users (Non-Motorized Vehicle User Safety). Measure: Annual Non-Motorized Vehicle User Crash Savings.

Vehicle user safety benefits are measured in terms of the estimated annual crash cost savings resulting from a project. Crash cost savings are calculated by estimating the reduction in the crash rate for the project. The reduction in the crash rate is multiplied by the average cost of a crash, considering costs associated with injuries and fatalities for motor vehicle occupants, cyclists, and pedestrians, as well as property damage costs.

Some projects may yield supplemental safety benefits for non-motorized vehicle users by reducing their time of exposure to traffic over and above any improvements resulting from lowering the overall crash rate. For instance, a project might include a pedestrian overpass or result in moving equipment away from traffic to yield additional safety benefits. These benefits are measured in terms of estimated annual crash cost savings, similar to the vehicle user safety benefits described above.

Goal 2: Air Quality and Health

The goal of Air Quality and Health is to consider opportunities for reducing negative impacts of traffic-generated air pollution, as well as improving public health through encouraging use of active transportation. SHOPP projects can improve air quality and health through reducing fuel consumption and encouraging use of zero-emission vehicles (ZEVs) or alternatives to automobile travel. Table 3-9 provides the objectives of this goal.

Table 3-9. Goal 2 objectives.
• Objective: Minimize health impacts and damage to the environment from air pollution (Air Quality). Measure: Annual Emissions Reduction Benefit.
• Objective: Maximize health benefits from active transportation (Health). Measure: Annual Active Transportation Health Benefit.

The air quality benefit of a project is measured in terms of its annual emissions reduction. The reduction in emissions is proportional to the reduction in fuel consumption. Such a reduction is estimated if the project encourages increased use of ZEVs, reduces pavement roughness, or includes other features that result in reducing VMT. The estimated reduction in fuel consumption is converted to an annual emissions cost reduction using the approach implemented in the Caltrans benefit–cost analysis tool, Cal-B/C (4). The Cal-B/C models consider costs of carbon monoxide (CO), carbon dioxide (CO2), nitrogen oxides (NOx), particulate matter (PM), sulfur dioxide (SO2), and volatile organic compounds (VOCs).
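Cal-B/C supplies the California-specific emission rates and damage costs used in this calculation; those values are not reproduced in this chapter. The sketch below only illustrates the arithmetic of converting a fuel-consumption reduction into a monetized annual emissions benefit, using placeholder rates and costs.

# Illustrative sketch of monetizing an emissions reduction from reduced fuel use.
# The per-gallon emission rates and per-ton damage costs below are placeholders,
# not Cal-B/C values; Cal-B/C supplies the actual California-specific parameters.
fuel_reduction_gal = 250_000          # hypothetical annual gallons saved by the project

# grams of pollutant per gallon of fuel burned (placeholder values)
grams_per_gallon = {"CO": 60.0, "CO2": 8_900.0, "NOx": 4.0, "PM": 0.1, "SO2": 0.05, "VOC": 5.0}
# monetized damage cost in dollars per metric ton (placeholder values)
cost_per_ton = {"CO": 150.0, "CO2": 50.0, "NOx": 8_000.0, "PM": 350_000.0, "SO2": 45_000.0, "VOC": 2_000.0}

annual_benefit = sum(
    fuel_reduction_gal * grams_per_gallon[p] / 1e6 * cost_per_ton[p]   # grams -> metric tons
    for p in grams_per_gallon
)
print(f"Annual emissions reduction benefit: ${annual_benefit:,.0f}")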

Annual health benefits are predicted in cases in which a project results in increased use of active transportation, specifically walking and cycling. The calculation takes into account the specific quantities of the following facilities, constructed as part of the project, that are associated with increased use of active transportation:

• Bike lanes,
• Multi-use paths, and
• Sidewalks.

Goal 3: Stewardship and Efficiency

Many of the projects in the SHOPP are motivated by a desire to maintain the SHS efficiently and responsibly by restoring assets to a state of good repair, improving resiliency of the system, and reducing risk. Projects that restore bridges, pavements, and/or other assets that are in fair or poor condition to a state of good repair require an up-front investment but help reduce maintenance costs over time. Further, a project that helps reduce the vulnerability of the SHS to risks such as coastal flooding or landslides lowers the probability of costly detours required in the event the road must be closed. Table 3-10 provides the objectives of this goal.

Table 3-10. Goal 3 objectives.
• Objective: Minimize agency cost of maintaining infrastructure (Asset Preservation). Measure: Asset Preservation Benefit.
• Objective: Minimize user cost of asset closure/failure (Detour Reduction). Measure: Annual Vehicle Detour Reduction Benefit.

The benefits of asset preservation are estimated in terms of the savings in life-cycle cost if work is performed to restore asset conditions, relative to the life-cycle cost of deferring needed work. A 2-year deferral period is assumed for the calculation, as the SHOPP is updated every 2 years. The benefit accounts for the fact that if needed work is deferred, there is some possibility that the asset will deteriorate further, resulting in a need for a more costly treatment and/or emergency repairs.

The SHOPP includes a number of projects that reduce potential vehicle detours from road closure by reducing risk. This includes projects that improve bridge clearances (reducing the risk of bridge hits), address scour (which can result in bridge closure if the substructure of the bridge is undermined), and improve resilience to rising sea levels and other effects of climate change. These benefits are quantified by first calculating the potential reduction in service disruptions. This reduction is then used to calculate an annual reduction in detour costs. Hazards Caltrans engineers consider that may cause service disruptions include, but are not limited to, the following:

• Earthquakes,
• Landslides,
• Rockfalls,
• Tidal inundation and tsunamis,
• Floods,
• Bridge scour,
• Wildfires,
• Temperature extremes,
• Bridge overloads,
• Bridge over-height collisions,
• Hazardous truck collisions,
• Vessel collisions,
• Sabotage,
• Advanced deterioration, and
• Steel fatigue.

Note that Caltrans does not have specific algorithms for addressing each type of risk. Instead, project engineers estimate the likelihood and consequences for each risk a project is expected to mitigate.

Goal 4: System Performance

Some SHOPP projects include mobility enhancements that reduce congestion through improvements such as adding ramp meters or passing lanes to an existing facility. Other projects reduce fuel consumption by improving pedestrian/cyclist access, encouraging use of alternative modes of transportation, and/or rehabilitating pavement, reducing its roughness. This goal captures these and other performance benefits. Table 3-11 provides the objectives of this goal.

Table 3-11. Goal 4 objectives.
• Objective: Minimize road user fuel costs (Fuel Savings). Measure: Annual Fuel Savings.
• Objective: Minimize inconvenience to users (Travel Time Savings). Measure: Annual Travel Time Benefit.
• Objective: Minimize disruption to the economy (Freight Improvement). Measure: Freight Corridor Benefit.

Lowering fuel consumption reduces costs to road users and yields additional benefits through reducing emissions. The savings to road users are captured through the fuel savings benefits; additional benefits from reduced emissions are described in the section on Goal 2: Air Quality and Health. As discussed previously, reduced fuel consumption may result from increased use of ZEVs, reduced pavement roughness, or other project features that result in reducing VMT. The annual fuel savings benefit is calculated separately for automobiles and trucks.

A project may reduce road users' travel time by reducing recurring congestion and/or improving system reliability. Benefits of reducing recurring congestion are characterized by the reduction in delay hours, while benefits to reliability are captured through reduction in the buffer time index, a measure of the time road users must budget for travel considering day-to-day variability in travel times.

The objectives outlined above include benefits to freight. However, some projects may yield additional benefits to freight not fully captured in the previous two objectives. For example, a project to rehabilitate a corridor used extensively for truck traffic may generate additional economic activity. In cases where a project includes improvements to a freight corridor, additional freight corridor benefits are approximated by multiplying the fuel and travel time benefits by a specified factor. The factors used for the calculation are based on a separate research effort performed by Caltrans to test various tools for predicting the benefits of freight projects (5).

Goal 5: Sustainability and Livability

Many SHOPP projects include enhancements that minimize the negative impacts of transportation infrastructure and/or enhance livability.
For instance, a project to rehabilitate a highway may include improvements to drainage systems and restoration of wetlands that reduce pollution from runoff. Other projects may include fish passages or wildlife crossings that reduce the impacts of roads on wildlife (and, in the case of wildlife crossings, improve safety). Projects that include pedestrian improvements and bike lanes help promote increased use of non-motorized transportation modes, helping build more sustainable and livable communities. Goal 5 captures these and other benefits associated with sustainability and livability not otherwise represented in the previous goals. Table 3-12 provides the objectives of this goal.

Table 3-12. Goal 5 objectives.
• Objective: Maximize multimodal transportation options (Modal Improvement). Measure: Modal Improvement Benefit.
• Objective: Minimize damage to the environment (Water Quality). Measure: Water Quality Benefit.
• Objective: Maximize improvement to biological-related infrastructure (Biological-Related Improvements). Measure: Biological Benefit.

Modal improvement benefits are predicted in cases where a project results in increased use of transit or non-motorized transportation, specifically walking and cycling. For transit, this objective captures a range of benefits associated with increased use of transit not otherwise captured in the other measures described above. In the case of bicycle and pedestrian facilities, health and system performance benefits are captured in other goals. Thus, the benefits captured for these improvements are those associated with increased use of non-motorized transportation for recreation.

SHOPP projects can improve water quality by including activities to restore or improve wetlands. Annual water quality benefits are estimated based on the average benefit per acre treated for water quality and the acreage of treated land.

Annual benefits are predicted for two specific types of biological improvements included in some SHOPP projects: fish passages and wildlife crossings. Annual biological-related benefits are estimated based on the quantities associated with each type of biological improvement and the average benefit per passing or crossing for each type of improvement.

Optimization Methods and Results

To facilitate an application test of the NCHRP Report 806 tool, Caltrans provided data for 121 sample projects. A simple evaluation spreadsheet was used to perform the calculations for each measure and obtain a raw score for each project. Two different optimization approaches were then used to rank projects: (1) the knapsack approach incorporated in the Cross-Asset Resource Allocation Spreadsheet Tool and (2) DEA.

Knapsack Approach

The data and scores from the evaluation spreadsheet were entered into the Cross-Asset Resource Allocation Tool developed for this project and described in Chapter 5. This tool attempts to select the set of projects to perform that maximizes utility subject to a budget constraint by solving the "knapsack problem." Figures 3-12 through 3-14 show snapshots of the input sheets of this tool. These figures are provided for illustrative purposes and reflect a draft version of the spreadsheet tool, as well as data provided at the time of this writing.
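The spreadsheet tool's internal optimization is not reproduced in this chapter. The sketch below shows the underlying 0/1 knapsack formulation it describes: pick the subset of projects that maximizes total benefit without exceeding the budget. The projects, benefits, and budget are hypothetical, and costs are discretized to whole $ millions for the dynamic program.

# Minimal 0/1 knapsack sketch of the budget-constrained selection described above.
# This is not the spreadsheet tool's actual implementation; all data are hypothetical.
projects = [
    # (name, cost $M, total monetized benefit $M/yr summed across the five goals)
    ("A", 8, 3.1), ("B", 12, 3.9), ("C", 5, 2.2), ("D", 20, 6.5), ("E", 9, 2.0),
]
budget = 30  # $ millions

# dp[b] = (best total benefit achievable with budget b, chosen project indices)
dp = [(0.0, [])] * (budget + 1)
for i, (_, cost, benefit) in enumerate(projects):
    for b in range(budget, cost - 1, -1):
        candidate = dp[b - cost][0] + benefit
        if candidate > dp[b][0]:
            dp[b] = (candidate, dp[b - cost][1] + [i])

best_benefit, chosen = dp[budget]
print("Selected:", [projects[i][0] for i in chosen], "benefit:", best_benefit)

# As the text notes, with a single budget constraint a simple alternative is to
# rank projects by decreasing benefit-cost ratio:
ranked = sorted(projects, key=lambda p: p[2] / p[1], reverse=True)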

Case Studies Illustrating Cross-Asset, Multi-Objective Resource Allocation Approaches 27
For this case study, the five goal areas were entered as the performance measures in the tool, shown in Figure 3-12. A sampling of the project data entered in the "Project Impacts" worksheet of the tool is shown in Figure 3-13. In Figure 3-14, the weights for each of the goals were set at 100%. This is unique because the quantitative methodology used to score projects results in a monetary benefit value that makes the goals easily compared. For this test Caltrans opted not to apply any additional weighting to the goals.
Figure 3-12. Caltrans performance measures.
Figure 3-13. Caltrans project impacts for selected projects and measures.
The results obtained from the spreadsheet tool include a rank of projects that can be sorted by overall score or by overall score divided by cost. A small subset of the ranking by overall score divided by cost is provided in Figure 3-15. The "Scoring" worksheet of the tool also provides the scores for each of the measures with conditional formatting that indicates visually how the projects score in specific areas. Finally, the "Optimization" worksheet of the tool provides a recommended portfolio of projects given a budget that is two-thirds of the overall cost to complete all the projects. The recommendation included building 88 of the 121 sample projects, allocating 97% of the budget. Figure 3-16 shows the budget allocation information and the first several projects that are recommended for inclusion in the portfolio. These projects are ranked by overall score.
Data Envelopment Analysis
In addition to using the spreadsheet Cross-Asset Resource Allocation Tool described in Chapter 5 to rank projects and allocate funding, the team also tested the use of DEA as an alternative method for prioritizing projects. This approach is included in the web-based version of the Cross-Asset Resource Allocation Tool described in Chapter 5.
The testing illustrated that using DEA yields results similar to those obtained using the knapsack approach, with a correlation coefficient of more than 0.90 between the ranks obtained using the two methods.
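For readers unfamiliar with DEA, the following is a minimal sketch of one common formulation, the input-oriented CCR multiplier model, treating each project as a decision-making unit with project cost as the single input and its goal-area benefit scores as outputs. The formulation, the function name, and the sample data are illustrative assumptions; the web-based tool's DEA implementation may use a different variant.

# Illustrative DEA sketch (CCR multiplier model): one input (cost), several
# outputs (goal-area benefit scores). Each project's relative efficiency comes
# from one small linear program. Not the web tool's actual implementation.

import numpy as np
from scipy.optimize import linprog

def dea_relative_efficiency(costs, outputs):
    """costs: length-n array of project costs; outputs: n-by-s array of benefit scores."""
    costs = np.asarray(costs, dtype=float)
    outputs = np.asarray(outputs, dtype=float)
    n, s = outputs.shape
    efficiency = np.zeros(n)
    for o in range(n):
        # Decision variables: s output weights u, then 1 input weight v (all >= 0).
        c = np.concatenate([-outputs[o], [0.0]])                       # maximize u . y_o
        A_eq = np.array([np.concatenate([np.zeros(s), [costs[o]]])])   # v * x_o = 1
        b_eq = np.array([1.0])
        A_ub = np.hstack([outputs, -costs.reshape(-1, 1)])             # u . y_j - v * x_j <= 0, all j
        b_ub = np.zeros(n)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
        efficiency[o] = -res.fun                                       # optimal value is the efficiency
    return efficiency

# Hypothetical data: three projects, two benefit outputs each.
eff = dea_relative_efficiency(costs=[4.6e6, 24.9e6, 8.6e6],
                              outputs=[[10.0, 2.0], [30.0, 1.0], [12.0, 9.0]])

The resulting relative efficiency falls between 0 and 1 and can be used to rank projects in the same way as the score-to-cost ratio; agreement between the two rankings can be checked with a rank correlation such as scipy.stats.spearmanr.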

28 Case Studies in Implementing Cross-Asset, Multi-Objective Resource Allocation Figure 3-16. Sample of optimization results. Figure 3-14. Caltrans measure weights. Figure 3-15. First 10 projects ranked by score/cost.

Case Studies Illustrating Cross-Asset, Multi-Objective Resource Allocation Approaches 29 Lessons Learned The lessons learned from the case study include the following: • It is feasible to develop a quantitative approach for prioritizing asset preservation projects, though more testing and refinement is required to put the approach into production for Caltrans. • Data, models, and techniques developed for benefit–cost analysis are extremely applicable for developing a benefit or utility function. Incorporating these in the approach developed for Caltrans resulted in a more defensible and scalable approach for quantifying project value. • There are different approaches to optimizing project selection given data on the value deliv- ered by a candidate project for a set of different objectives. In a test of 121 sample projects, similar results were obtained optimizing project selection using the knapsack approach and DEA. The testing suggests that either approach is viable for optimizing project selection. Note that at the time of this writing, work to develop the approach at Caltrans is complete. The agency is evaluating options for systems to support the data collection process. In addition, the publication of the results of the Caltrans effort is pending. Case Study 3: Delaware Valley Regional Planning Commission Summary The DVRPC applies an innovative data-driven, MODA-based approach to evaluate Trans- portation Improvement Program (TIP) projects against a comprehensive set of performance criteria. Through this framework, the bi-state MPO has more effectively communicated with their board and ensured the most valuable projects are programmed. Background The DVRPC is the designated MPO for the nine-county Greater Philadelphia Region that includes Bucks, Chester, Delaware, and Montgomery Counties, and the city of Philadelphia in Pennsylvania, and Burlington, Camden, Gloucester, and Mercer Counties in New Jersey. The agency’s mission is to establish and work toward achieving the region’s vision by convening the widest array of partners to inform and facilitate data-driven decision making and by striving to be leaders and innovators, exploring new ideas and creating best practices. Two of DVRPC’s key roles are establishing the region’s long-range metropolitan transporta- tion plan (Connections 2045) and leading bi-annual development of TIPs for the Pennsylvania and New Jersey portions of the MPO region. Connections 2045, adopted in October 2017, establishes a vision for the growth and development of the region and serves as a blueprint for prioritizing transportation funding over the next 28 years in the development of the region’s TIPs. The TIPs, which program projects for the next four years but identify longer horizon projects as well (12 years for Pennsylvania and 10 years for New Jersey), list all projects for which the use of federal funds is intended, along with non-federally funded projects that are regionally significant. Through its role in leading the development of the region’s TIPs, DVRPC staff are respon- sible for allocating limited resources toward keeping the region’s roads, structures, and facilities safe, secure, and in a state of good repair. The DVRPC must address the needs associated with enabling economic growth, providing increased travel choices, reducing congestion, preserving natural resources, and sustaining a high quality of life for system users. To ensure

30 Case Studies in Implementing Cross-Asset, Multi-Objective Resource Allocation
the maximum return on investment for all communities, the DVRPC moved to a data-driven MODA approach for project evaluation and selection beginning with development of the FY 2016 New Jersey TIP and FY 2017 Pennsylvania TIP and has continued to expand and improve upon this process.
Project Selection Approach Overview
The current DVRPC project selection approach, developed in 2014, aims to leverage improved agency data resources and analytical capabilities to objectively evaluate projects based on alignment with LRTP goals. As a result of this process, the DVRPC has streamlined planning partner conversations around candidate projects and provided a stronger justification for final programming decisions, with an aim to balance the region's transportation infrastructure investment needs with available resources. Projects in the TIP and LRTP have become more complex; many projects are now more than simply roadway repaving and may also incorporate preservation, operations, bicycle and pedestrian, safety, signal, and congestion benefits. As a result, the commission is aiming to use evaluation criteria that are universal and multimodal to capture the wide range of benefits offered by today's sophisticated transportation project development. The new project selection process, illustrated in Figure 3-17, was developed in conjunction with the New Jersey and Pennsylvania TIP subcommittees of the DVRPC's Regional Technical Committee (RTC) in order to answer two critical questions for the DVRPC's Governing Board:
1. Are projects located where we want to make investments?
2. How beneficial or effective are candidate projects relative to one another?
Figure 3-17. DVRPC project selection process. (The figure shows five steps: establish project evaluation criteria aligned to agency goals and objectives; weight the relative importance of criteria; collect projects and criteria ratings from member agencies, cities, and counties; prioritize projects based on benefit–cost ratios with considerations for equity and level of support; and provide project rankings and recommendations to the RTC subcommittee(s), with final program approval by the Board. The steps are organized across a quadrennial LRTP cycle and a biennial TIP cycle.)

Case Studies Illustrating Cross-Asset, Multi-Objective Resource Allocation Approaches 31 Development of the new project selection process included an innovative application of MODA principles and strived to use criteria that are multimodal in nature to align with the LRTP’s vision for a more multimodal transportation network and to better reflect the increasingly complex and multimodal nature of projects that are coming into the TIP. This approach consisted of the establishment of project selection criteria, determination of criteria weighting reflective of agency priorities, creation of criteria rating scales for evaluating projects on a level playing field, and ranking of candidate projects based on a combination of their over- all value and cost effectiveness. The implementation of the process involved data collection and evaluation, and culminated in the finalization of a recommended program of projects by the DVRPC Governing Board when the region’s TIPs were adopted. Each of these elements is described below. Establishing Project Evaluation Criteria As a first step in the project evaluation process refinement effort, the RTC identified the following principles to use as potential evaluation criteria: • Alignment with planning goals and objectives, • Differentiation to produce a clear ranking, • Representation of all member counties, • As quantitative as possible, • Measurable using regularly available data, • Relevant for a diverse set of projects, • Comprehensive to cover regional goals, • Simple with concise, non-redundant measures, and • Understandable for any audience. Applying these principles, the RTC conducted several rounds of deliberations to arrive upon the final evaluation criteria, identified in Table 3-13. These criteria were chosen to pro- vide a meaningful assessment of how candidate projects would likely make progress toward improving performance under the following plan goal areas: facility/asset condition, safety, congestion reduction, connectivity between economic development centers, number of users experiencing benefits, economic competitiveness, intermodal facilities, environmental justice, and air quality/green design (6). Criteria Weighting Once the evaluation criteria were determined, the subcommittee of the RTC established criteria weighting that were reflective of agency and stakeholder perspectives of relative impor- tance, as determined through a collaborative, pairwise comparison process enabled by decision support software. Specifically, member agency, county, and city officials were surveyed to determine the directionality and degree of preference between the criteria, which were then translated into a set of priority weights (included for each criterion in Table 3-13). DVRPC staff facilitated this effort and did vote in the actual pairwise comparison. Clear definitions were found essential during the facilitation of stakeholder preferences. By communicating the types of projects that would score higher as a result of their elicited preferences, decision makers were able to make better informed decisions while avoiding common pitfalls (e.g., asking participants to assess the relative importance of “safety” without context can result in biased responses). Development of Criteria Rating Scales In addition to establishing evaluation criteria and associated weightings, the Subcommittee of the RTC needed to develop rating scales to translate a diverse set of criteria metrics onto a

32 Case Studies in Implementing Cross-Asset, Multi-Objective Resource Allocation
level, normalized playing field for comparative analysis. Using such scales, each criterion rating is converted to a raw (i.e., unweighted) value between 0 (least valuable) and 1 (most valuable). As illustrated in Figure 3-18, these rating (or utility) scales take one of two general forms as determined by the underlying criteria rating data and the degree of preference for being on one end of the scale versus the other.
Figure 3-18. Types of criteria rating scales. (The figure contrasts a discrete scale and a continuous scale, each mapping criterion values to ratings from least valuable at 0% to most valuable at 100%.)
For most of the DVRPC criteria, a discrete rating is applied, meaning the scale has a finite number of categorical rating options from which to choose. Each rating is then translated to an exact value between 0 and 1. Alternatively, quantitative data (e.g., VMT) can be measured using a continuous scale, whereby an infinite number of valuations are possible on a continuum between 0 and 1. Scale bounds in such instances are often taken as the maximum and minimum observed data points of the project set.
The resulting unweighted value, read from the scale, can be interpreted as the percent of the possible score that a project will earn relative to the respective criterion weight. For instance, if a scaled value of 0.25 is determined for a project against a criterion with a 24% weight, then this would be interpreted as the project having earned one fourth of the 24 points (i.e., a 6-point contribution to the overall score). A sample calculation is provided in Table 3-14.
Table 3-14. Sample calculation for determining project score contributions.
• Criterion A: scaled value 0.25 out of 1; possible points (i.e., weight) 24 out of 100; score contribution 0.25 × 24 = 6.
• Criterion B: scaled value 0.5 out of 1; possible points (i.e., weight) 76 out of 100; score contribution 0.5 × 76 = 38.
• Project priority score: 6 + 38 = 44.
Project Priority Scoring
The final element of the project prioritization framework was to determine representative scores by combining criteria weights and scaled ratings, and bringing cost into the equation. Scores are calculated in a fairly straightforward manner by first summing the weighted value a candidate project receives for each criterion (scaled criteria rating × criteria weight) and then dividing this score by the total cost of the project so that projects can be compared based on their relative return on investment (ROI).
Table 3-13. DVRPC TIP evaluation criteria and priority weights.
• Facility/Asset Condition (19 possible points): Brings a facility or asset into a state of good repair, extends the useful life of a facility, or removes a functionally obsolete bridge rating.
• Safety (17 possible points): Impacts a safety-critical element for transit or a high-crash road location, or incorporates an FHWA proven safety countermeasure.
• Reduce Congestion (15 possible points): Location in CMP (Congestion Management Process) congested corridors, or appropriate-everywhere CMP strategy; annual traffic (AADT) per lane, and daily transit riders per daily seats.
• Invest in Centers (13 possible points): Location in Connections 2040 Center or Freight Center, or high, medium-high, or medium transit score areas, or connection between two or more key centers.
• Facility/Asset Use (11 possible points): Levels of daily VMT, trucks, and transit ridership.
• Economic Competitiveness (8 possible points): Provides reduced operating/maintenance costs, or is part of an economic development or transit-oriented development (TOD) project.
• Multimodal Bike/Pedestrian (7 possible points): Accounts for bicyclists and pedestrians using the facility; new trails, sidewalks, or bike lanes, and connections to other multimodal facilities.
• Environmental Justice (5 possible points): Benefits census tracts with high indicators of potential disadvantage (IPD, previously known as degrees of disadvantage, or "DOD") communities.
• Air Quality/Green Design (5 possible points): Stresses air quality benefits and incorporates environmentally friendly principles.
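To illustrate the arithmetic, the following is a minimal sketch of the scoring described above: criterion ratings are converted to values between 0 and 1 using a discrete or continuous scale, multiplied by the criterion weights in Table 3-13, summed, and divided by project cost. The discrete rating categories, the continuous scale bounds, and the sample project are hypothetical assumptions for illustration; they are not DVRPC's actual rating definitions.

# Illustrative sketch of the DVRPC priority-scoring arithmetic. The criterion
# weights follow Table 3-13; the rating categories, scale bounds, and sample
# project values are hypothetical.

CRITERIA_WEIGHTS = {
    "facility_asset_condition": 19, "safety": 17, "reduce_congestion": 15,
    "invest_in_centers": 13, "facility_asset_use": 11,
    "economic_competitiveness": 8, "multimodal_bike_ped": 7,
    "environmental_justice": 5, "air_quality_green_design": 5,
}

# Discrete scale: each categorical rating maps to an exact value between 0 and 1.
DISCRETE_SCALE = {"none": 0.0, "low": 0.25, "medium": 0.5, "high": 0.75, "full": 1.0}

def continuous_scale(value, lo, hi):
    """Linear 0-1 scaling between the minimum and maximum observed data points."""
    return 0.0 if hi == lo else max(0.0, min(1.0, (value - lo) / (hi - lo)))

def priority_score(scaled_ratings, cost):
    """scaled_ratings: {criterion: value in [0, 1]}; returns (score, score per dollar)."""
    score = sum(weight * scaled_ratings.get(criterion, 0.0)
                for criterion, weight in CRITERIA_WEIGHTS.items())
    return score, score / cost

# Hypothetical project: full condition benefit, medium safety benefit, and a
# continuous congestion rating scaled against assumed AADT-per-lane bounds.
ratings = {"facility_asset_condition": DISCRETE_SCALE["full"],
           "safety": DISCRETE_SCALE["medium"],
           "reduce_congestion": continuous_scale(14000, lo=2000, hi=20000)}
score, roi = priority_score(ratings, cost=12_500_000)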

Case Studies Illustrating Cross-Asset, Multi-Objective Resource Allocation Approaches 33
Quantitative scores, however, are only one piece of information used when making project decisions. Benefit-cost ratios may not always be accurate due to incomplete/poor data, so other factors such as geographic equity, alignment with the goal of fostering a multimodal program, and level of political support are considered during the final selection alongside the project priority scores and any carried-over projects from previous TIPs.
Project Selection Process Implementation
The following section discusses how the DVRPC applied the MODA-based approach to new projects coming into the program for the New Jersey FY 2016 TIP and the Pennsylvania FY 2017 TIP.
Project Data Collection & Evaluation
To identify candidate projects for the TIP, DVRPC requests applications from member agencies, which are compiled via email using a common template that requests responses to over 80 questions across 15 categories. These categories include general information; cost estimates and project milestones; DVRPC long-range plan consistency; land use; relevance of proposed project; congestion management/reduction and transportation systems management and operations (TSMO) and ITS enhancements; environmental justice; asset management (bridge/pavement); fostering a multimodal system by connecting bike/pedestrian, transit, and highway modes; transportation safety; economic competitiveness; access and freight/aviation; environmental; public outreach; and stakeholder involvement. The responses to these questions

34 Case Studies in Implementing Cross-Asset, Multi-Objective Resource Allocation enable the DVRPC to achieve a thorough understanding of context and specific challenges for each project in addition to gathering data to support application of the evaluation criteria and MODA ranking process. A spreadsheet tool is used to translate accepted applications into the established prioritization process, with projects assigned points primarily based on categorical (discrete) ratings driven by inspection data within project limits and/or project classification information. In cases of projects with specific funding sources, for example, the Transportation Alternatives Program (TAP), the Highway Safety Improvement Program (HSIP), and Congestion Mitigation and Air Quality (CMAQ), more tactical evaluation criteria may be used in conjunction or in place of the general evaluation criteria. To avoid comparing apples and oranges, such as the case of evaluat- ing transit and roadway/bridge projects for facility/asset condition and safety, separate rating scales are applied. Project Ranking and Finalization of Program Once candidate TIP projects have been evaluated using the MODA framework, the scores are blended with expert judgment to create a baseline recommendation of prioritized projects that is provided to the DVRPC Subcommittee of the RTC for discussion. The information elements included in the discussion are (for each project) descriptions, locations, benefit–cost ratio, total benefits, criteria ratings, and costs. Informed by the project rankings, the Subcommittee of the RTC then recommends final project selection that focuses on cost effectiveness, magnitude and diversity of benefits, geo- graphic equity, project readiness and schedule, funding eligibility, and level of political support. This selection, as well as the process and summary of key projects for each goal area, is then made available for public comment. Numerical scores, however, are not publicized given the potential for stakeholders to assume a false level of precision in a project’s value and general lack of context. The numerical score also is not the sole decision-making factor during a project’s selection process. Interactive mapping and other summaries are posted to the website for public review and comment before the TIP is presented to the DVRPC Governing Board for adoption (Figure 3-19). Challenges and Opportunities The DVRPC strives to be a leading organization in developing innovative best practices for decision making. While the agency has made significant progress toward evaluating projects on a “cross-asset” basis, there are ongoing plans for continued maturation. As with many transportation agencies, data availability remains a challenge in terms of coverage, timeliness, and consistency. It is the hope of the DVRPC that by proactively establish- ing a data-driven prioritization process, its members would be incentivized to enhance their data collection efforts, particularly on local networks, toward quantifying a consistent set of performance measures. This is of particular importance for the “softer” performance areas such as “green design.” Despite a low level of data maturity, the inclusion of such areas is intended to encourage sponsoring agencies to develop innovative strategies to more effectively serve agency stakeholders and the public. DVRPC anticipates that as data improves, there will be opportunities for the agency to evolve to more predictive and quantitative project assessments. 
This will be done by calibrating various models and tools to historical before and after performance data, which will then enable the organization to track not only system performance but also the observed efficacy of various types of projects. As new programs are completed, additional data points will be available

Figure 3-19. Screenshot of DVRPC interactive TIP mapping.

36 Case Studies in Implementing Cross-Asset, Multi-Objective Resource Allocation to revise ROI predictions—made at various phases of the project development and delivery process—and validate decision processes. The RTC identified two areas it wanted to more directly analyze in the project evaluation criteria: risk and health. The current set of criteria include safety, use, and condition measures that capture risk to a degree. Bike and pedestrian facilities, air quality, safety, environmental justice, and reduced household expenditures on transportation are all measures that can indi- rectly provide health benefits. In addition, the Subcommittee is interested in a more rigorous economic analysis for projects. DVRPC has tested out the EconWorks tool (a web-based bench- marking tool to assess possible economic impacts early in project decision making) through a SHRP2 grant. However, these analytical tools take time to set up and to analyze each project, highlighting the difficult tradeoff between labor-intensive quantitative techniques and the need to quickly analyze and report back on project evaluation. Since the adoption of final performance rules legislated by MAP-21, the DVRPC has begun revisiting how to best align the described prioritization process with national measure reporting requirements. While some measures may be tracked rather than used for prioritization, the DVRPC does hope to bring a reliability measure to the forefront rather than relying solely on a proxy CMP measure. Other areas being targeted for improvement in future TIP update cycles are equity and recon- figuring measures to more explicitly account for scale. Concerning equity, a more proactive approach of quantifying the share of benefits to environmental justice populations is being evaluated to replace assigning points based on the extent of a project’s equity within IPD communities. One of the downfalls of applying categorical (discrete) ratings is that a false equivalency can be created. For instance, under the current decision process, a roadway project that transitions from a very poor asset to a state-of-good repair would receive the full possible points regardless of the scope of the project (e.g., a project correcting 500k square feet of poor bridge deck area would consequently be considered just as valuable, or result in the same number of points, as a project that corrects 5k of poor bridge deck area). Applying scaling factors may help to offset such unintentional consequences of a discrete rating scale. Currently, the long-range plan uses the TIP benefit criteria to analyze major regional preser- vation (transit and roadway), operational improvement (transit and roadway), and bicycle and pedestrian projects. A separate set of project evaluation criteria is used for roadway and transit expansion projects, meaning that there are three separate analyses that can be used to evaluate different types of projects. DVRPC is working to better align the expansion and TIP benefit criteria so that expansion projects can be analyzed using the TIP benefit criteria plus, which adds one or two more criteria factors as well as land use and CMP screening factors into the evaluation. Lessons Learned Planning and programming is an iterative, living process that requires constant recalibration. The described data-driven, “cross-asset” approach has been a catalyst for the DVRPC in enabling more meaningful conversations around transparent goals and objectives. 
Yet a data-driven approach is only as good as the data that feeds into it. Evaluating all projects across performance areas is a significant undertaking that requires vigilance and commitment. Recognizing the need for balancing rigorous analysis and turnaround time, an emphasis was placed on keeping the process simple. In lieu of performing a detailed analysis for each project, DVRPC defined a set of categories with respect to each performance area. DVRPC then relied on expert judgement to classify projects into categories based on expected outcomes.

Case Studies Illustrating Cross-Asset, Multi-Objective Resource Allocation Approaches 37 Building flexibility into the decision framework was additionally found to be essential. As priorities shift, having the ability to update both weights and the criteria themselves has enabled the DVRPC to dynamically react to shifting goals and stakeholder values while making continued progress in evaluating programs and corresponding tradeoffs. Taking a more objective approach has helped the agency stay laser-focused on making progress toward goals and objectives, which has elevated the DVRPC’s credibility with its stake- holders. As measures and data mature, the DVRPC cross-asset–based approach will only further help ensure better outcomes for their region’s residents and visitors. Leveraging data has enabled the DVRPC to quickly establish a baseline program with minor adjustments for intangible benefits to the region and varying levels of support among agency stakeholders. Blending expert judgment with the data-driven scoring has only further strengthened the process by guiding decision makers based on quantifiable facts and filling in gaps where data may be lacking. As a result of the transparent, MODA-based framework, the DVRPC has been able to effectively and objectively prioritize a diverse set of projects, while focusing benefits toward addressing the most critical needs and obtaining the best estimate of ROI for taxpayer dollars. Case Study 4: Maryland Department of Transportation Summary MDOT is in the process of implementing state legislation for the prioritization of major expansion projects. MDOT staff developed criteria for evaluating projects across the nine goals and 23 measures established in the statute. Ultimately, this prioritization scheme is one of several tools that will be used to select projects for inclusion in the Consolidated Transportation Plan (CTP). The methodology developed to implement the state legislation was also adapted for use in prioritizing highway-specific projects. A subset of the goals and measures was used by the MDOT SHA to prioritize a small sample of highway projects in order to show the potential of applying cross-asset resource allocation approaches to aid in decision making. The NCHRP Report 806 spreadsheet tool was tested with sample MDOT SHA projects as well. The methodology and results of both efforts, by MDOT and SHA, are detailed in this case study. MDOT Chapter 30 Implementation Background The Maryland Open Transportation Investment Decision Act—Application and Evaluation (Senate Bill 307, also referred to as Chapter 30) was enacted on April 11, 2017. The law requires MDOT to develop a project-based scoring system to rank major capital transportation projects being considered for inclusion in the CTP. Major transportation projects are those transit and highway projects for which total cost for all phases is over $5 million and that meet certain criteria based on project activities. The legislation defines the goals and measures MDOT must use to evaluate projects. The specific goal areas for evaluating projects are defined in law as follows: Goal 1: Safety and Security, Goal 2: System Preservation,

38 Case Studies in Implementing Cross-Asset, Multi-Objective Resource Allocation Goal 3: Reducing Congestion and Improving Commute Times, Goal 4: Environmental Stewardship, Goal 5: Community Vitality, Goal 6: Economic Prosperity, Goal 7: Equitable Access to Transportation, Goal 8: Cost Effectiveness and Return on Investment, and Goal 9: Local Priorities. A cross-functional team of state transportation staff and local partners at the Maryland Municipal League (MML) and the Maryland Association of Counties (MACo) developed the Chapter 30 scoring model to meet the statutory requirements of Chapter 30. The Chapter 30 scoring model evaluates projects across the nine goals and 23 measures using a combination of project data, modeling analysis, and qualitative questionnaires. Each major transportation capacity project being considered for funding and inclusion in the CTP is evaluated through the Chapter 30 scoring model and ranked based on score. The project rank is then one of many factors that contribute to the decision of what projects to select for funding and inclusion in the CTP. Establishing Project Evaluation Criteria A series of workshops were held over the course of several months to determine the evalua- tion criteria for all of the measures. Subject matter experts and stakeholders from MDOT and partner agencies and organizations [e.g., SHA, Maryland Transit Administration (MTA), MML, and MACo] participated in the workshops. Where possible, the evaluation criteria rely on quan- titative data and methods. However, in some cases the measures called for a more qualitative approach for evaluation. Table 3-15 provides a list of the goals, measures, and measure defini- tions. The Chapter 30 Transportation Project-Based Scoring Model, Technical Guide prepared by MDOT further describes how each measure is calculated (7). The Maryland Open Transportation Investment Decision Act directs MDOT to establish the weighting metrics for each goal and measure established in the law. MDOT utilized a cross-functional group of transit, highway, and county and local representatives to establish the weighting criteria. The Delphi method was used to allow participants to vote on the weights, discuss differences of opinion, and ultimately reach consensus. Below are the weighting criteria that were determined for the scoring methodology. All eligible major transportation projects regardless of location or type, are evaluated with the same evaluation and weighting criteria. Once submitted for consideration in the Chapter 30 scoring process, the measure score for each project is determined by the project application data, qualitative checklist responses, and forecasted data. Depending on the measure, the score is determined through a combination of quantitative data associated with the project (i.e., crash severity index, asset quantity, travel time savings, etc.) or points assigned based on the evaluation checklist responses in the project application. When qualitative assessment data from the checklists is used to compute a project score for a measure, the points are scaled by project size to distinguish the magnitude of the measure benefit. To obtain measure results on a scale from 0 to 1, each score is divided by the highest project score for the particular measure. This results in one project that has a score of 1 and all other projects scaled accordingly between 0 and 1. As a result, the Chapter 30 scoring model does not predetermine what the highest possible score is for a given measure. 
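As an illustration of the scaling step just described, the minimal sketch below divides each project's raw measure score by the highest score any submitted project received on that measure, so that exactly one project scores 1.0 per measure. The project names, measure identifiers, and raw values are hypothetical; the function is an illustrative stand-in, not the Chapter 30 scoring model's code.

# Illustrative sketch of the Chapter 30 measure scaling: each raw measure score
# is divided by the highest score any project received on that measure.

def scale_measure_scores(raw_scores):
    """raw_scores: {project: {measure: raw_value}}; returns the same shape scaled to [0, 1]."""
    measures = {m for scores in raw_scores.values() for m in scores}
    max_by_measure = {m: max(scores.get(m, 0.0) for scores in raw_scores.values())
                      for m in measures}
    return {project: {m: (value / max_by_measure[m] if max_by_measure[m] > 0 else 0.0)
                      for m, value in scores.items()}
            for project, scores in raw_scores.items()}

# Hypothetical raw scores for two projects on two measures.
raw = {"Project A": {"G1M1": 12.0, "G3M2": 4500.0},
       "Project B": {"G1M1": 30.0, "G3M2": 900.0}}
scaled = scale_measure_scores(raw)   # Project B scores 1.0 on G1M1; Project A scores 1.0 on G3M2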
Following the calculation of the 23 measure scores for each project, the measure scores are multiplied by a set of measure weights. Once the individual goal scores are determined, these are

Case Studies Illustrating Cross-Asset, Multi-Objective Resource Allocation Approaches 39 G4 M1 Emissions Reduction The potential of the project to limit or reduce harmful emissions. The measure quantifies the gallons of fuel projected to be saved by the project. G4 M2 State Resource Impact The degree to which the project avoids impacts on state resources in the project area and adjacent areas. Geospatial analysis is performed to determine the proportion of project area that impacts state resources. Measure ID Name Description G1 M1 Reduction in Fatalities and Injuries The expected reduction in total fatalities and severe injuries in all modes affected by the project. For highway projects, the measure calculates the project’s benefit by combining the severity index value with the number of safety improvements included in the project, prioritizing the most dangerous locations and projects most focused on improving the situation. For transit projects, the number of daily new passengers serves as a proxy for safety as transit travel is consistently safer than highway automobile travel. G1 M2 Complete Streets The extent to which the project implements the Maryland SHA’s Complete Streets policies. This is the degree to which the project aligns with SHA Complete Streets policies by improving bicycle and pedestrian infrastructure. The measure emphasizes projects that meet bicycle/pedestrian demand, especially with regard to improving safety and connectivity of existing facilities. G2 M1 Facility Lifespan The degree to which the project increases the lifespan of the affected facility. G2 M2 Facility Functionality The degree to which the project increases the functionality of the facility. This includes ADA, bridge functional classification, and transit state of good repair. G2 M3 Facility Resiliency The degree to which the project renders the facility more resilient. The measure prioritizes projects that mitigate 100-year flood risk or are not within the 100-year flood plains. G3 M1 Job Accessibility The expected change in cumulative job accessibility within an approximately 60-minute commute for highway projects or transit projects. Geospatial modeling is performed to determine the increased number of accessible jobs within 60 minutes for both highway and transit modes. The measure is not concerned with the total number of jobs accessible, but rather the increased number of jobs to which the project allows access. G3 M2 Travel Time Reliability The degree to which the project has a positive impact on travel time reliability and congestion. The measure quantifies the annual hours of travel time savings yielded by the project across modes. For highway projects, this is estimated through simulating the effects of the project in a travel demand model. G3 M3 Modal Connection The degree to which the project supports connections between different modes of transportation and promotes multiple transportation choices. The measure prioritizes projects that include direct connections to passenger and freight facilities as well as improvement in public and non- motorized transportation. Table 3-15. Goal and measure summary. (continued on next page)

40 Case Studies in Implementing Cross-Asset, Multi-Objective Resource Allocation G7 M2 Low-Income Community Economic Development The projected economic development impact on low-income communities. G8 M1 Travel Time Savings The estimated travel time savings divided by the project cost. G8 M2 Funding Sources The degree to which project leverages additional federal, state, local and private-sector transportation investment. G8 M3 Transportation Alternatives The degree to which the project will increase transportation alternatives and redundancy. G9 M1 Local Priorities The degree to which the project supports local government transportation priorities, as specified in local government priority letters. G5 M1 Walking, Biking, and Transit The degree to which the project is projected to increase the use of walking, biking, and transit. Estimates the project’s contribution to increasing the use of public and non-motorized transportation. G5 M2 Community Access The degree to which the project enhances existing community assets. Estimates the project’s contribution to enhancing community assets such as schools and community centers. G5 M3 Revitalization The degree to which the project furthers the affected community’s and state’s plans for revitalization. G6 M1 Job Accessibility The projected increase in the cumulative job accessibility within an approximately 60-minute commute for projects. Geospatial modeling is performed to determine the increased number of accessible jobs within 60 minutes for both highway and transit modes. The measure is not concerned with the total number of jobs accessible, but rather the increased number of jobs to which the project allows access. G6 M2 Movement of Goods and Services The extent to which the project is projected to enhance access to critical intermodal locations for the movement of goods and services. Estimates the project’s alignment with the freight plan. G6 M3 Economic Development Strategy Support The projected increase in furthering local and state economic development strategies in existing communities. Estimates the project’s impact on economic development by determining the status and expected employment density of planned development in the area of the project. G7 M1 Job Accessibility for Disadvantaged The expected increase in job accessibility for disadvantaged populations within an approximately 60-minute commute for projects. Geospatial modeling is performed to determine the increased number of accessible jobs within 60 minutes for both highway and transit modes. The measure is not concerned with the total number of jobs accessible, but rather the increased number of jobs to which the project allows access. Measure ID Name Description G4 M3 State Environmental Goal Advancement The degree to which the project advances the state’s environmental goals. Projects are prioritized if they promote renewable energy development, asset management, land conservation, green jobs, and reduced pollution of the Chesapeake Bay. Table 3-15. (Continued).

Case Studies Illustrating Cross-Asset, Multi-Objective Resource Allocation Approaches 41 multiplied by the goal weights shown in Figure 3-20, and the weighted goal scores are summed to obtain the project raw score. For instance, Goal 1: Safety and Security has a weight of 19%, thus the score for this goal is multiplied by 0.19 and added together with the values for other goals to obtain the project raw score. The project raw score represents the final evaluation of the project across all the goals and measures. After determining the project raw score, the raw score is divided by the total project cost to obtain the final project score. Dividing the project raw score by the project cost ensures that the financial feasibility of the project is considered in the prioritization process. Given this approach, if two projects yield the same project raw score then the less costly of the two projects will have greater priority. Projects are then ranked based on the final project score, with the high- est scoring project ranked first on the list. Projects with higher scores are determined or expected to deliver the most benefit for the lowest cost. Project Scoring Cycle The annual Chapter 30 cycle begins in January each year when proposing entities coordi- nate with MDOT SHA and the MTA to gather project information and data for applications. Chapter 30 applications must be completed and submitted by proposing entities by March 1 to kick off the evaluation process. In the four months following application submission, MDOT processes applications, validates project information and eligibility, collects necessary technical data, and completes all modeling and forecasting. Beginning in July, MDOT utilizes the model- ing results and technical data to evaluate each project, calculate the scores, and determine the final ranking of projects. (See Figure 3-21.) The final ranking then helps inform the development of the Draft CTP in August. The Draft CTP is made public and presented to the Legislature on September 8. The final project scores and ranking are included in an appendix in the CTP, which is posted on the MDOT website. Between September and November, MDOT conducts CTP Tours throughout Maryland to solicit feedback from local partners on the Draft CTP and to discuss the project scores and ranking. Following the CTP Tours, MDOT evaluates and scores any projects sponsored by the Figure 3-20. Chapter 30 goal weights.

42 Case Studies in Implementing Cross-Asset, Multi-Objective Resource Allocation Secretary of Transportation for consideration in the Final CTP. The Final CTP is published in early January. Details on the final scores and project rank are provided in an appendix to the Final CTP and are also made available on the MDOT website (8). MDOT SHA Project Prioritization Establishing Project Evaluation Criteria Four goal areas were selected to evaluate sample highway projects for MDOT SHA: Goal A: Safety, Goal B: System Preservation, Goal C: Mobility, and Goal D: Environment and Community. The methodology to evaluate projects and use the NCHRP Report 806 spreadsheet tool incorporated seven measures organized by goal areas. These measures were adapted from the goals and measures used for the Chapter 30 implementation described above. Some measures were also adapted from project prioritization methodologies used in other states. An overview of the goals and measures is provided in Table 3-16. Scoring and Prioritization MDOT SHA provided data for 11 sample projects to test the evaluation methodology. Using a simple evaluation spreadsheet to perform the calculations, projects were scored across each of the measures to obtain a raw score. This raw score was then scaled by dividing each measure score by the maximum value for that measure. This put all the scores on a scale from 0 to 1. Weights, as follows, were assigned to each goal with Safety and System Preservation weighted higher than Mobility and Environment and Community: Figure 3-21. Annual chapter 30 scoring cycle.

Case Studies Illustrating Cross-Asset, Multi-Objective Resource Allocation Approaches 43 • Safety: 35%, • System Preservation: 35%, • Mobility: 15%, and • Environment and Community: 15%. For illustrative purposes, these weights were applied to the overall score for each of the 11 sample projects to obtain a final score. Projects were then ranked based on the score divided by the cost. In addition to applying the above weighting to calculate overall project scores, the team also tested use of DEA as an alternative method for prioritizing projects. The application of DEA yields the measure “relative efficiency,” which is used to prioritize in the same manner as score/cost ratio. The projects were ranked according to the score divided by cost ratio as well as the relative efficiency from DEA. These results are summarized in Tables 3-17 and 3-18. Measure ID Name Description Goal A Measure 1 Impact on Reducing Fatalities and Injuries This measure evaluates a project’s contribution to reducing injuries and fatalities on the project corridor. A project’s safety score is calculated based on the road severity index and the number of safety improvements included in the project. The number of safety improvements is determined by counting the improvements the project will make from a list of possible improvements. Goal B Measure 1 Benefit of Maintaining Infrastructure This measure quantified the benefit of performing preservation work on pavements and bridges. Given the quantity of bridge deck area and pavement lane miles in fair and poor condition that will be improved in the project, this measure incorporates the unit cost of preservation and the benefit-cost ratio of performing the work to compute a monetized benefit for preservation. Goal C Measure 1 Travel Time Savings This measure incorporates travel time savings due to improved congestion and reduced delay on the route. The measure also includes travel time savings from work done to reduce detours on bridges. Goal C Measure 2 Fuel Savings Reducing pavement roughness or performing work that reduces VMT will result in a reduction in fuel consumption that reduces fuel costs to the user. In the fuel savings measure, the predicted annual reduction in fuel consumption due to a change in VMT and/or IRI is multiplied by the average cost of fuel to obtain the total annual fuel savings. Goal D Measure 1 Emissions Reduction This measure calculates the reduction in emissions due to a reduction in fuel consumption from a change in VMT and/or IRI. This measure utilizes an approach implemented in the California Department of Transportation (Caltrans) benefit–cost analysis tool, Cal-B/C (4). The approach considers the cost of various pollutants including carbon monoxide (CO), carbon dioxide (CO2), nitrogen oxides (NOx), particulate matter (PM), sulfur dioxide (SO2), and VOC. Goal D Measure 2 Acres of Land Impacted This measure scores projects based on the acres of land negatively impacted by the project. Most projects do not negatively impact land area, but if there is significant land area impact as stated in the Environmental Impact Statement then this measure captures those impacts. Goal D Measure 3 Impact of Increasing Use of Bike/ Pedestrian/ Transit This measure assigns points to the projects based on the plans to increase the use of transit, walking, and biking. Table 3-16. MDOT SHA goals and measures.
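For clarity, the following is a minimal sketch of the evaluation just described: each measure score is divided by the maximum value for that measure across the sample projects, the 35/35/15/15 goal weights are applied, and projects are ranked by score divided by cost. The sample structure, the measure identifiers, and the equal split of a goal's weight among its measures (described later for the spreadsheet tool) are illustrative assumptions rather than the exact SHA calculation.

# Illustrative sketch of the MDOT SHA scoring and ranking. Goal weights:
# Safety 35%, System Preservation 35%, Mobility 15%, Environment and Community 15%.

GOAL_WEIGHTS = {"A": 0.35, "B": 0.35, "C": 0.15, "D": 0.15}
MEASURE_GOALS = {"A1": "A", "B1": "B", "C1": "C", "C2": "C", "D1": "D", "D2": "D", "D3": "D"}

def rank_projects(projects):
    """projects: {project_id: {"cost": total cost, "measures": {measure_id: raw score}}}."""
    # Scale each measure by the maximum value for that measure across all projects.
    max_by_measure = {m: (max(p["measures"].get(m, 0.0) for p in projects.values()) or 1.0)
                      for m in MEASURE_GOALS}
    measures_per_goal = {g: sum(1 for goal in MEASURE_GOALS.values() if goal == g)
                         for g in GOAL_WEIGHTS}
    ranked = []
    for pid, p in projects.items():
        score = 0.0
        for m, goal in MEASURE_GOALS.items():
            scaled = p["measures"].get(m, 0.0) / max_by_measure[m]
            # Assumption: a goal's weight is split equally among its measures.
            score += scaled * GOAL_WEIGHTS[goal] / measures_per_goal[goal]
        ranked.append((pid, score, score / p["cost"]))
    return sorted(ranked, key=lambda item: item[2], reverse=True)   # highest score/cost first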

44 Case Studies in Implementing Cross-Asset, Multi-Objective Resource Allocation Rank ID Project Description Score Cost ($000) Score/Cost (x1M) 1 10 Pavement- Micro-Surface Roadway 37.9 $4,635 8185.9 2 9 Pavement- Mill/Grind Patch, Resurface Roadway 37.9 $6,180 6139.4 3 7 Bridge- Invert Paving and Restoration 24.7 $5,069 4882.6 4 1 Replace SD Bridge (1) 31.7 $8,607 3680.7 5 6 Bridge Congestion Relief 29.8 $24,889 1197.2 6 4 Pavement- Add lane, bicycle compatible shoulder, ADA sidewalk 11.8 $18,109 649.1 7 2 Bridge Replacement 30.9 $51,333 601.1 8 8 Replace SD Bridge (2) 12.2 $21,294 572.5 9 5 Roadway Widening 46.9 $121,211 386.9 10 11 Traffic Congestion Relief 54.7 $151,000 362.5 11 3 Pavement- Reconstruction 16.4 $105,407 155.2 Project Table 3-17. Project ranking by score/cost. Rank ID Project Description Score Cost ($000) Relative Efficiency 1 9 Pavement: Mill/Grind Patch, Resurface Roadway 37.9 $6,180 1.000 2 11 Traffic Congestion Relief 54.7 $151,000 1.000 3 5 Roadway Widening 46.9 $121,211 0.585 4 7 Bridge: Invert Paving and Restoration 24.7 $5,069 0.352 5 4 Pavement: Add lane, bicycle compatible shoulder, ADA sidewalk 11.8 $18,109 0.337 6 6 Bridge Congestion Relief 29.8 $24,889 0.152 7 2 Bridge Replacement 30.9 $51,333 0.075 8 1 Replace SD Bridge (1) 31.7 $8,607 0.051 9 10 Pavement- Micro-Surface Roadway 37.9 $4,635 0.044 10 8 Replace structurally deficient Bridge (2) 12.2 $21,294 0.023 11 3 Pavement: Reconstruction 16.4 $105,407 0.022 Project Table 3-18. Project ranking by relative efficiency. In most cases, the ranks calculated using the two approaches are similar, with two major exceptions. Project 10 is ranked first in the score/cost ranking, but ranked ninth using relative efficiency. On the other hand, Project 11 is ranked tenth based on score/cost, but second based on relative efficiency. Regarding Project 10, this project performs well in one area, Maintaining Infrastructure, but other projects achieve more in this area while also contributing to other areas. Thus, it performs well considering score/cost, which does not consider achievement across goals (except by adding the score), and performs well in DEA, which tends to value this. Regarding Project 11, the opposite is true: This project achieves benefits in all of the different goal areas, and is thus ranked highly in DEA, despite having a lower score/cost ratio than other projects. Cross-Asset Resource Allocation Tool. The data and scores from the evaluation spread- sheet were then used in the Cross-Asset Resource Allocation Tool. The seven measures were first added in the spreadsheet tool on the “Performance Measures” worksheet. Figure 3-22 shows the performance measure and program objective list after all the measures were entered on the worksheet.

Case Studies Illustrating Cross-Asset, Multi-Objective Resource Allocation Approaches 45
The projects and scores from the evaluation spreadsheet were then entered in the "Project Impacts" worksheet of the tool. The tool includes columns for both the "No Build" and "Build" scenarios, indicating the impact with and without the project. In this case, the "No Build" scenario was assumed to be zero and the scores for the SHA projects were entered in the "Build" column.
The tool offers users the option to calculate the weights for the measures through a pairwise comparison survey. Alternatively, users can enter custom weights for measures in the "Custom Weighting" worksheet. This is where the weights for the SHA measures were entered. For goals with more than one measure, the measures were weighted equally. Figure 3-23 shows the weights for each of the seven measures.
After scaling the scores on a scale from 0 to 1, the "Scoring" worksheet shows the overall score and overall score divided by cost for all projects. In addition, this worksheet shows the scores for each measure and provides a visualization of how each project fares. A red color indicates a low score, while a green color indicates a high score, meaning that the project does more to improve that measure. Figure 3-24 shows the projects ranked in terms of overall score divided by cost. This is the same rank obtained from the evaluation spreadsheet and shown in Table 3-17.
Finally, the scores are combined with the budget information to allocate funding to certain projects. The "Optimization" worksheet shows the portfolio of projects that have been allocated for funding. In Figure 3-25, the Overall Budget is given as two-thirds of the total cost of all the projects. The Current Allocation is the sum of the cost of all the projects that are funded in the allocation. The Program Score reflects the score of the projects that are recommended for inclusion, compared with the total possible score if all the projects were funded. Below these values, there is information on the allocation by investment area. Given this budget information, the tool recommended allocating funding to nine of the 11 sample projects. This is shown in Figures 3-25 and 3-26.
Figure 3-22. Performance measures and program objectives list.
Figure 3-23. Measure weights.
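The budget figures reported on the "Optimization" worksheet can be reproduced with simple arithmetic; the sketch below is an illustrative stand-in whose function name, argument names, and data structure are assumptions, not the tool's own code.

# Illustrative sketch of the budget summary shown on the "Optimization"
# worksheet: overall budget (here, two-thirds of the total cost of all
# candidates), current allocation, and program score for a selected portfolio.

def portfolio_summary(projects, selected_ids, budget_fraction=2/3):
    """projects: {project_id: (overall score, cost)}; selected_ids: projects recommended for funding."""
    total_cost = sum(cost for _, cost in projects.values())
    total_score = sum(score for score, _ in projects.values())
    overall_budget = budget_fraction * total_cost
    current_allocation = sum(projects[pid][1] for pid in selected_ids)
    program_score = sum(projects[pid][0] for pid in selected_ids)
    return {
        "overall_budget": overall_budget,
        "current_allocation": current_allocation,
        "budget_used_pct": 100.0 * current_allocation / overall_budget,
        "program_score": program_score,
        "share_of_possible_score_pct": 100.0 * program_score / total_score,
    }

# Usage with hypothetical values: portfolio_summary({"P1": (37.9, 4_635_000),
# "P2": (29.8, 24_889_000)}, selected_ids=["P1"])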

46 Case Studies in Implementing Cross-Asset, Multi-Objective Resource Allocation Figure 3-24. Project ranking by overall score/cost. Figure 3-25. Budget allocation. Figure 3-26. Allocation of funding to projects.

Case Studies Illustrating Cross-Asset, Multi-Objective Resource Allocation Approaches 47 Lastly, the tool shows the performance of the projects selected for funding. The “Build” column in Figure 3-27 indicates the score possible if all the projects were selected for funding. The “Current Performance” column indicates the total score for each measure for the projects that have allocated funding in this portfolio. Lessons Learned Performing this case study in two parts, MDOT’s Chapter 30 implementation and SHA’s highway-related project prioritization, provided unique insight into cross-asset resource allocation techniques and practices. • Selecting performance criteria is a large part of setting up a cross-asset resource allocation approach. This case study illuminated two important considerations when selecting measures. First, the number of measures can impact the practicality and the sustainability of a cross-asset resource allocation approach. The Chapter 30 legislation called for the use of 23 measures in scoring capacity projects. Having this many measures increases the analysis burden and the likelihood of overlap or double counting between measures. While MDOT used a Delphi voting approach to assign weights to the overall goals, performing a pairwise comparison to assign weights to each of the measures is not recommended with so many measures. Second, there are clear benefits to selecting quantitative measures over qualitative measures. Quantita- tive measures, those that can be evaluated based on data, are preferred to qualitative measures, which rely on subjective judgment to assign scores or values. Even better are measures that equate a dollar benefit to performing the project work. These values are easily compared and combined across measures. • This case study provided insight into the value of the Cross-Asset Resource Allocation Spreadsheet Tool, as well as its limitations. The optimization capabilities that allow the user to enter a budget and see which projects are recommended for inclusion in a portfolio are a great resource for DOTs. It is a simple approach to see how changes to the budget impact projects and investment areas. However, the pairwise comparison approach to weighting measures in the tool was found to be quite cumbersome. This prompted the addition of the “Custom Weighting” worksheet that allows users to bypass the pairwise comparison survey and input pre-determined weights. • Finally, this case study provided an opportunity to test DEA as an alternative to weighting goals and ranking by score or score divided by cost. The results of DEA for the 11 sample projects from SHA were reasonable, but indicated that a larger sample size is best suited for DEA. References 1. Arizona Department of Transportation. Arizona Long-Range Transportation Plan Update, Final Working Paper #5: Recommended Investment Choice (RIC) Development. October 2017. https://www.azdot.gov/docs/ default-source/planning/lrtp-wp5-ric-development.pdf?sfvrsn=2. Figure 3-27. Project performance.

48 Case Studies in Implementing Cross-Asset, Multi-Objective Resource Allocation 2. California Department of Transportation. 2018 SHOPP: Fiscal Years 2018-2019 through 2021-2022. March 2018. http://www.dot.ca.gov/hq/transprog/SHOPP/2018_shopp/2018-shopp-adopted-by-ctc.pdf. 3. Spy Pond Partners, LLC. DRAFT Recommended Application of Multi-Objective Decision Analysis (MODA) to the State Highway Operations and Protection Program (SHOPP). Technical report prepared by Spy Pond Partners, LLC, for California Department of Transportation. December 2017. 4. System Metrics Group, Inc. California Life-Cycle Benefit/Cost Analysis Model (Cal-B/C) Technical Supplement to User’s Guide—Volume 3: Traffic Operations Consistency, Network and Corridor Analysis, New Capabilities, and Economic and Parameter Value Updates—Revision 2, 2012. 5. Caltrans Economic Analysis Branch. Managing the Increasing Demand for Freight Infrastructure: Needs Versus Limited Resources: An Effort to Prioritize California’s Freight Projects Using the Strategic Highway Research Program’s Wider Economic Benefits Tool, 2016. 6. Delaware Valley Regional Planning Commission. DVRPC FY2015 Transportation Improvement Program for PA, 2015. https://www.dvrpc.org/Products/15001A/. 7. Maryland Department of Transportation. Chapter 30 Transportation Project-Based Scoring Model, Technical Guide, 2018. 8. Maryland Department of Transportation. Consolidated Transportation Program DRAFT FY2019 to FY2024. http://www.mdot.maryland.gov/newMDOT/Planning/CTP/CTP_19_24_Draft/Documents/CTP_FY2019- 2024.pdf.
