Part 3. Benchmarking Pilot Results

Purpose of the Benchmarking Pilots

The principles developed for this project's guidance were implemented in two pilots of the network approach to benchmarking. These pilots fine-tuned the guidance principles, generated more robust lessons learned, and provided specific examples of how to implement network benchmarking for interested practitioners and agencies. Users can explore the results of these pilots to inform future benchmarking initiatives. The objectives of the pilots included the following:

1. Testing the benchmarking process and principles outlined in the benchmarking guidebook section of this report to aid the research team, project panel, and NCHRP in confirming the accuracy and usefulness of the guidance;
2. Informing the guidebook content by uncovering lessons and tactics for successful benchmarking that the team did not uncover in the research and guidebook development stages of the project; and
3. Serving as a useful reference for anyone seeking to implement a similar benchmarking network.

The pilot reports that follow serve as inspiration for transportation practitioners who might be involved in the day-to-day implementation of benchmarking.

Pilots Summary

Details of the two pilot experiences are outlined below using the benchmarking guidebook's first six steps. A high-level overview of the outcomes at each step and a summary of lessons learned are provided to help orient readers as they explore the detailed content more closely.

For the pilots, the research team adopted a benchmarking network approach as described in the guidebook section of this report and worked to establish networks of DOT practitioners for each of the performance areas chosen for the pilots: wildlife–vehicle collisions and bicycle and pedestrian connectivity. Each network of peers consisted of six agencies that already had strong programs, and therefore interest, in the chosen performance areas. Based on initial project research on other benchmarking initiatives, five to ten participants was determined to be the ideal size for a new network. This size balanced having sufficient performance results and participant ideas for robust discussion and learning with keeping the level of effort for data gathering and other logistics reasonable.

Each network used the first six recommended steps for benchmarking described in the benchmarking guidance. Reflecting the network approach, the pilots recruited peer agencies before defining the approach; independent benchmarking would typically reverse Steps 2 and 3, as discussed in the implementation lessons.

1. Set the Stage
2. Select Peer Agencies
3. Define the Approach
4. Obtain Data
5. Analyze Data
6. Identify Noteworthy Practices

A high-level summary of the pilots' process and outcomes is presented below.

Step 1. Set the Stage

The project team planned a benchmarking network approach for the pilots. After research and discussion with the panel members and workshop participants, the team selected two performance areas for each pilot to pursue (Table 2). However, only one group in each pilot was able to recruit enough participants.

Table 2. Performance Areas Considered for Pilot Development
Environment:
• Wildlife–vehicle collisions
• NEPA timeliness
Nonmotorized:
• Bicycle and pedestrian connectivity (measured via route directness)
• Bicycle and pedestrian miles traveled estimates

A proposed pilot on National Environmental Policy Act (NEPA) document timeliness did not succeed in attracting participants and was dropped. A pilot on bike and pedestrian miles traveled estimates found a few interested practitioners, but the data were not mature enough for full implementation. Full implementation was therefore limited to wildlife–vehicle collisions for the environmental pilot and bicycle and pedestrian connectivity for the nonmotorized pilot. These performance areas were selected based on their relevance to environmental and active transportation practitioners, availability of data, and likely interest among practitioners in benchmarking on those topics.

Wildlife–vehicle collisions—Vehicle collisions with wildlife are a widespread problem that adversely affects animal and human populations. Such collisions directly harm the species being struck, and they can point to wider ecological problems such as a lack of suitable crossing locations to connect habitats. Crashes with larger animals, such as ungulates (large hoofed mammals such as deer, elk, and wild horses), cause billions of dollars in property damage each year, and a rising number of human injuries and fatalities makes this a key public safety concern for some agencies.

Bicycle and pedestrian connectivity—Route directness compares the distance a bicyclist or pedestrian will likely travel versus the straight-line distance to the same location. Directness characterizes networks in terms of how much obstacles impede direct travel. This concept is often used to characterize the accessibility of specific destinations, but it can also be applied at a network level. Route directness is important because bicyclists and pedestrians are highly sensitive to out-of-direction travel. Research cited in the 2012 Mineta Transportation Institute report, Low-Stress Bicycling and Network Connectivity, shows that bicyclists and pedestrians may choose to forgo trips or seek other modes if available routes are indirect, with some sensitivity even if routes are just 25 percent out-of-direction.

Step 2. Select Peer Agencies

The project team reached out to 10 to 12 practitioners in different states who were expected to be interested in each performance area, based on input from subject matter experts who attended a workshop during project development. States that agreed to participate are shown in Table 3.

Table 3. States Participating in the Benchmarking Pilots
Wildlife–Vehicle Collisions: California, Colorado, Montana, Nevada, Utah, Washington
Bicycle and Pedestrian Connectivity: Kansas, Minnesota, New Mexico, Utah, Vermont, Washington

Step 3. Define the Approach

Defining the approach requires getting participants to agree on specific metrics, data sources, definitions, and parameters. The pilot teams accomplished this by holding initial conference calls with committed agencies early in 2018. The final performance metrics, data sources, and parameters decided on for each pilot are shown below.

Wildlife–Vehicle Collisions
Final measure: Percentage reduction in annual wildlife–vehicle collisions after project implementation.
Explanation: Rather than a statewide measure, which participants agreed was not useful to the way wildlife practitioners think about this issue, this measure would focus at the project level. The measure compares the average number of annual wildlife strikes before a project is implemented (this could be an average over any number of years) with the average annual number of strikes in the years after project completion.
Data sources:
• DOT maintenance offices for carcasses
• Crash reports from local law enforcement
• Existing DOT reports on each project that summarized the necessary data
Parameters:
• Both carcasses and collisions would be reviewed to start.
• Data beyond the immediate project area would be included when possible.
• Any animals the DOT designates would be included.

Bicycle and Pedestrian Connectivity
Final measure: Permeability of highway networks, based on a route directness index (RDI).
Explanation: Route directness summarizes the out-of-direction travel required to cross a "barrier" highway. An ideal ratio of 1 indicates that no out-of-direction travel is required, whereas a ratio of 2 indicates that a bicyclist or pedestrian would travel twice the straight-line distance between two points to cross the highway.
Data sources:
• Highway Performance Monitoring System (HPMS) centerline data
• OpenStreetMap (OSM) roadway network data
• Metropolitan planning organization (MPO) boundaries from participants
Parameters:
• All roadways owned by a state DOT were assessed as barriers.
• Routes that required travel along the state highway network or routes with an RDI of 10 or greater were excluded.
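To make the wildlife measure concrete, the short sketch below computes the percentage reduction from before/after annual counts. It is a minimal illustration of the measure's arithmetic, not the pilot's actual tooling, and the sample counts are hypothetical.

```python
def percent_reduction(before_counts, after_counts):
    """Percentage reduction in average annual wildlife strikes
    after project implementation (positive = improvement)."""
    before_avg = sum(before_counts) / len(before_counts)
    after_avg = sum(after_counts) / len(after_counts)
    return 100.0 * (before_avg - after_avg) / before_avg

# Hypothetical annual carcass counts for one project corridor:
# five years before fencing, four years after.
before = [112, 98, 120, 105, 110]
after = [14, 9, 11, 12]
print(f"Reduction: {percent_reduction(before, after):.1f}%")  # ~89.4%
```

Because the before and after windows may cover different numbers of years, the comparison uses annual averages rather than totals, mirroring the definition agreed on by the participants.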

Step 4. Obtain Data

Effective benchmarking depends on access to quality data. Following the two-pronged approach outlined during the project's research phase, the team collected both nationally available data that the benchmarking participants could access on the benchmarking platform and data from each pilot group's participants. Collecting data sometimes took longer than anticipated, pushing back the project timeline.

National Data Collected for the Benchmarking Platform
• Fatalities
• Vehicle miles traveled
• Bridge condition
• Bike and walk commute mode share
• Bike fatalities
• Motor fuel usage

Environmental Performance Data Requested
• Annual crash and carcass counts for each project area, including a distance along the corridor just outside the project to account for "end effects."

Nonmotorized Performance Data Requested
• OSM road network
• HPMS roadway ownership

[Photo courtesy of Josh Richert, Blue Valley Ranch]

Step 5. Analyze Data

Analyzing the data often requires initial cleaning and formatting. Getting data to the correct specification took some back-and-forth between facilitators and participants in both pilots and required technical expertise to generate new data for the bicycle and pedestrian measure. Visualizing the cleaned data in charts and maps (for example, Figure 2 and Figure 3) was essential to productive discussion later in the pilots.

Environmental Performance
Figure 2. Example of visualization for the wildlife pilot (Utah I-15 Wildcat Fencing Project results: annual carcass counts, 1996–2009, for mileposts 112–134 and 102–144, with the project date marked)

Nonmotorized Performance
Figure 3. Demonstration of the analytical process to develop the route directness index (RDI)
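The RDI calculation sketched below shows the core of the analytical process for one crossing: the shortest network path divided by the straight-line distance. This is a simplified illustration on a toy network (hypothetical coordinates and edge lengths, using the networkx library for shortest paths), not the pilot's actual tool chain.

```python
import math
import networkx as nx

def route_directness_index(graph, origin, destination, coords):
    """RDI = shortest network distance / straight-line distance.
    A value of 1 means no out-of-direction travel; the pilot
    excluded crossings with an RDI of 10 or greater."""
    network_dist = nx.shortest_path_length(
        graph, origin, destination, weight="length")
    straight_dist = math.dist(coords[origin], coords[destination])
    return network_dist / straight_dist

# Toy example: two points on opposite sides of a barrier highway.
# Coordinates are in meters; edge lengths approximate path distances.
coords = {"A": (0, 0), "B": (0, 1000), "X": (800, 500)}
G = nx.Graph()
G.add_edge("A", "X", length=950)  # detour to the nearest crossing
G.add_edge("X", "B", length=950)
print(f"RDI: {route_directness_index(G, 'A', 'B', coords):.2f}")  # 1.90
```

In the pilot, this computation would be repeated for many origin-destination pairs spanning each state-owned barrier road, with the RDI distribution summarizing network permeability.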

Step 6. Identify Noteworthy Practices

Data analysis should lead to conversations about how top performers achieve results. Pilot leaders organized conference calls with participants to start these conversations. The facilitators for each pilot developed agendas suited to the interests of participants and the maturity of each measure.

Environmental Performance
The agenda for the final wildlife call focused on practices exchange. It included prepared topics that arose from the data analysis phase, as well as informal discussion about topics of most interest to participants.
• Performance review—The practices sharing call began with a review of the projects and summaries of each agency's results, including visuals.
• Prepared presentation: Mobile carcass reporting—Early in the pilot, a participant mentioned a recent switch from manual carcass reporting to a mobile app. The facilitator asked her to prepare a presentation on the dramatic effect this change had on carcass reporting.
• Scheduled topic (no prepared content): End effects—An end effect occurs when animals enter the roadway in greater numbers at the end of the fencing. As this is a persistent problem for DOTs, the topic was scheduled for an informal discussion among the participants.
• Impromptu topic: Citizen group assistance—One issue that came up organically was leveraging hunters, recreationalists, and other interested citizen groups to help with the labor-intensive tasks of locating carcasses and identifying fencing maintenance needs.

Nonmotorized Performance
Rather than identifying high-performing agencies, the practice exchange for the nonmotorized pilot focused on how to make the connectivity metric most useful. Recommendations that came out of this discussion included the following:
• Further refine the definition of rural and urban land (e.g., improve on the current definition of land inside and outside MPO boundaries);
• Further refine the roadway categories assessed (e.g., consider state-owned access-controlled versus state-owned non-access-controlled roads separately); and
• Further refine the definition of when crossings are needed to present a more nuanced and useful RDI.

Summary of Key Implementation Lessons

The project team learned several implementation lessons from conducting the pilots. Some of these lessons are not surprising, but they all serve as useful reminders and aids for anyone attempting a similar network approach to benchmarking.

The order of benchmarking steps can vary depending on whether a benchmarking network or independent benchmarking approach is used. If an agency is undertaking independent benchmarking, its practitioners will likely define the parameters of what they want to benchmark and determine suitable measures to use before finding appropriate peer agencies. With a benchmarking network approach, however, peers should be recruited to participate in the network before a discussion of specific parameters occurs.

Input by subject matter experts, rather than performance generalists, will assure greater success in all steps of the process. Although a planning and performance generalist may identify performance areas that make sense at a high level, these may not be what the actual subject area practitioners are interested in or have data for. In the pilots, having subject area experts was particularly helpful for identifying appropriate peers and establishing the parameters of the measure to be benchmarked.

In a network, some benchmarking steps are driven by the facilitator, while others require participation by network members. The benchmarking networks reviewed for the research portion of this project all identified a facilitator as a vital component of successful benchmarking. Certain steps in the benchmarking process were found to work best if conducted "behind the scenes" by the facilitator, while other steps required the input of the benchmarking participants. The primary takeaway for anyone serving in a facilitator role is to prepare participants for the steps that require their input. Even in these steps, responsibility for staying on track lies with the facilitator.

Facilitators completed the following steps:
• Step 1—Set the stage
• Step 2—Select peer agencies
• Step 5—Analyze data

Participants provided the substance for these steps:
• Step 3—Define the approach
• Step 4—Obtain data
• Step 6—Identify noteworthy practices

Interest in a given performance area often is not uniform from one state to another. When establishing benchmarking performance areas and identifying peers, a particularly important measure for one set of states may have little relevance to others. This variation in interest was particularly true for this project's pilots, for which rural–urban divides exist. Biking and walking are less feasible outside of dense urban areas, so agencies with large rural networks did not focus on the topic as much as agencies more dominated by urban centers. Environmental topics of concern tend to differ in urban and rural areas, with air quality a concern in urban areas, for example, and wildlife a factor for rural states.

The most effective form of benchmarking will partly be informed by data collection methods. Independent benchmarking is best suited to using data that are already collected and published, because collecting unpublished data goes more smoothly when the relevant agencies are involved in the process. A benchmarking network guided by a strong facilitator is the likeliest path to success when using data that are not readily available to outsiders. Generating the nonmotorized pilot data required significant tool development, for example, which could be challenging to replicate without someone willing to devote sufficient time and resources. Although the pilots were both performed as facilitator-run networks, an in-between scenario might be an independently run benchmarking network, that is, a network of peers without an external facilitator. This in-between scenario could succeed with a similarly in-between data format: noncentralized, but easily available at most agencies.

Do not underestimate the time and effort to collect, quality check, and process data. This part of the project is least under the facilitator's or team leader's control and relies on participation by all benchmarking participants. It is also the step in the benchmarking process most likely to throw off the overall schedule. Facilitators and team leaders should take the variability of this step into account from the beginning, and likewise prepare participants for the effort likely needed to collect all necessary data. Even when data are already collected at participating agencies and primarily need to be centralized, facilitators and team leaders should expect to spend time quality checking and processing the data into comparable formats. Each participating agency will likely have slight variations in the way it collected, defined, or set up its data that require attention before analysis and comparison are possible.
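The schema harmonization this lesson describes often reduces to mapping each agency's column names and units onto one shared layout before comparison. A minimal pandas sketch, with hypothetical agency column names, follows:

```python
import pandas as pd

# Hypothetical per-agency exports with differing column names.
COLUMN_MAP = {
    "carcass_cnt": "carcasses",  # State A's column name
    "Carcasses": "carcasses",    # State B's column name
    "Year": "year",
    "yr": "year",
}

def harmonize(frames):
    """Rename columns to a shared schema and stack agency data."""
    cleaned = []
    for agency, df in frames.items():
        df = df.rename(columns=COLUMN_MAP)
        df["agency"] = agency
        cleaned.append(df[["agency", "year", "carcasses"]])
    return pd.concat(cleaned, ignore_index=True)

frames = {
    "State A": pd.DataFrame({"yr": [2016, 2017], "carcass_cnt": [98, 87]}),
    "State B": pd.DataFrame({"Year": [2016, 2017], "Carcasses": [41, 44]}),
}
print(harmonize(frames))
```

Even a small mapping table like this makes each agency's assumptions explicit, which is where most of the quality-checking effort noted above tends to surface.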

Allow for formal and informal peer interactions. Throughout the process, but especially during scheduled practice exchange calls or meetings, allow for a mix of informal, organic interactions among peer participants, along with programmed presentations and information exchanges. The structured aspects of calls and meetings provide focus and topics for discussion, both to prevent participants from going completely off track and to help when discussion lags. The informal discussions allow topics to surface that the facilitator or team leader has not considered but that are important and interesting to participants. The value of such informal discussions was conveyed to the team in the initial research phase of the project and reinforced during the pilot experience.

"The prepared content was very helpful. The power points and preparation made the topics clear and provided a sound base from which to begin discussions." –Participant survey response

"Having conversations with other practitioners was the most useful part of this process to me." –Participant survey response

Pilot Results

The benchmarking pilots for environmental and nonmotorized performance implemented the first six steps of the benchmarking process outlined in this project's guidebook. Details of their activities and outcomes are presented step by step.

Step 1. Set the Stage

"Mobilize for a benchmarking initiative by identifying a benchmarking performance area, assembling a team, and choosing a leader." –From the Benchmarking Guidebook

Key Actions to Set the Stage
• Identify performance area(s) of concern
• Form a benchmarking team
• Select a benchmarking team leader or facilitator

Performance areas of concern—Often the performance area of concern is obvious. Wanting to improve in a particular area may prompt the decision to benchmark. In other cases, such as during the effort to launch this project's pilots, this step can take several iterations. The broad categories of environmental and nonmotorized performance contain many distinct topics, and the goal of balancing interest to practitioners, reasonably accessible data, and promise for yielding constructive best practices took several discussions. The project team proposed an initial set of performance areas and measures after the research phase. These recommendations received mixed reactions from the panel, prompting the need for more discussion. The team proposed a new round of measures and used the November workshop for in-depth discussions. In the end, workshop participants steered the team to several promising performance areas.

Benchmarking team and leaders—When undertaking independent benchmarking, the benchmarking "team" may consist of a single person who is interested in the topic. An internal benchmarking team might include coworkers who work together on a particular topic or individuals from offices across the organization who work on different aspects of a topic. One or more of these individuals may serve as team leader to take responsibility for coordination and task completion. In a benchmarking network like the ones for this project's pilots, a formalized team leader, often called a facilitator, is a helpful element for success. In the case of the pilots, the project team served in this facilitator role by organizing meetings, soliciting and compiling data, and engaging with benchmarking participants to share noteworthy practices. The larger team included panel members, workshop attendees, and eventually pilot participants.

Environmental Performance

The team relied on NCHRP Report 809: Environmental Performance Measures for State Departments of Transportation for its recommendations on environmental performance measures. One of the core challenges for making environmental performance comparisons is finding measures that can be applied across all state DOTs. NCHRP Report 809 explains that agreement on common environmental measures is challenged by widely varying population patterns, travel habits, ecosystems, climates, and political values.

Building on NCHRP's work, the team initially chose benchmarking areas that could include all DOTs. These were measures of at least some relevance to most DOTs and for which most DOTs would have data. After the workshop discussions, however, it became clear that universal applicability would not be necessary under the benchmarking network approach envisioned for the pilots. Further, such universality made for performance measures that some practitioners were not enthusiastic to benchmark against. The decision was therefore made to go forward with measures that had a reasonable number of agencies committed to the topic, even if the issue was not uniformly valued across agencies. The progression of proposed measures is shown in Table 4.

Table 4. Progression of Recommended Measures for Environmental Benchmarking Pilot
Initial:
• Gasoline consumption per capita
• DOT alternative fuel use
• RAP usage
• NEPA timeliness–EIS
Revised:
• Gasoline consumption per capita
• DOT alternative fuel use
• Stormwater treatment
• NEPA timeliness–EIS/EA/CE
Final:
• Gasoline consumption per capita (independent benchmarking only)
• Wildlife collisions
• NEPA timeliness–EIS/EA/CE

Note: RAP = reclaimed asphalt pavement; EIS = environmental impact statement; EA = environmental assessment; CE = categorical exclusion.

Final environmental performance areas—The research team and workshop participants chose two performance measure areas: a form of NEPA documentation timeliness and wildlife–vehicle collisions. These measures were chosen based on interest among at least a handful of states, expected data availability, and "benchmarkability"—the capacity to share specific strategies for improvement among a small group of interested practitioners. Wildlife collisions, which the team had not initially recommended, was a topic on which one workshop participant had a strong background. His knowledge let him gauge the level of interest in benchmarking this measure among practitioners and ascertain which agencies might be interested in joining. His influence points to the importance of topic area experts for quick success in getting such an initiative off the ground.

NEPA compliance—The NEPA process helps assure environmental protection meets basic standards, so it serves an important function in ensuring transportation agencies remain mindful of environmental impacts. There was notable enthusiasm for this measure area at the benchmarking workshop, as NEPA documentation is a significant element in any initiative to expedite project delivery. Three NEPA documents were considered for this pilot: environmental impact statements (EIS) for large expansion projects and environmental assessments (EAs) and categorical exclusions (CEs) for smaller, more common projects.

Wildlife–vehicle collisions—Vehicle collisions with wildlife are a widespread problem that adversely affects both animal and human populations. Such collisions clearly do damage to the species being struck and can point to wider ecological problems such as a lack of suitable crossing locations to connect habitats. On the human side, crashes with larger animals, such as deer, wild horses, and other hoofed ungulates, cause billions of dollars in property damage each year. A growing number of human injuries and fatalities caused by these collisions also makes wildlife collisions a public safety concern for many transportation agencies.

Nonmotorized Performance

Several categories of potential nonmotorized performance areas were proposed, including demand, network connectivity, safety, and community benefits. The research team initially recommended exploring measures related to demand and safety because data in these areas are more consistent and readily collectable than data for other categories, for which definitions and information-gathering practices vary among states. After an initial review of potential measures, the project panel was concerned that too few states had the necessary bicycle and pedestrian counts to accurately calculate fatality rates and that "miles traveled" may not be the best way to measure walking. At the project's November workshop, therefore, the research team replaced the fatality rate suggestion with a measure related to connectivity.

During the benchmarking workshop, ideas for network connectivity and demand/safety measures were key discussion topics. Workshop participants expressed interest in developing a common definition and understanding of how state roadways both benefit users and act as barriers. While recognizing data limitations, the workshop participants agreed this was the most promising area among the proposed bicycle and pedestrian performance areas.

At the conclusion of the benchmarking workshop, participants agreed to explore potential bicycle and pedestrian measures for network connectivity, demand (user counts), and customer opinion (Table 5). These three ideas were advanced to Step 2, with an understanding that the final pilots would be selected based on practitioner interest. Table 6 lists performance areas that were not used for either pilot and the reasons they were not chosen.

Table 5. Progression of Recommended Performance Measures for Nonmotorized Benchmarking Pilot
Initial:
• Walk and bicycle commute mode share (independent)
• Pedestrian and bicycle fatalities (independent)
• Pedestrian and bicycle fatality rates
• Pedestrian and bicycle miles traveled
Revised:
• Walk and bicycle commute mode share (independent)
• Pedestrian and bicycle fatalities (independent)
• Pedestrian and bicycle facility connectivity
• Pedestrian and bicycle demand (miles traveled)
Final:
• Walk and bicycle commute mode share (independent)
• Pedestrian and bicycle fatalities (independent)
• Pedestrian and bicycle facility connectivity
• Pedestrian and bicycle demand (miles traveled)
• Customer satisfaction

Table 6. Rationale for Dropping Performance Areas for Pilot Implementation
• RAP usage: Participants noted there has been increasing concern at DOTs regarding the longevity of pavement that uses RAP.
• EIS timeliness: Many states have so few large expansion projects that the need to produce EIS documents is growing rare.
• DOT fleet alternative fuel use: Promoting alternative fuel use emerged as a low priority for some rural state participants at the workshop, and concern was expressed that the overall impact of this topic on a DOT's own fleet is small.
• Stormwater management: Practitioners were not confident about data quality and availability for the stormwater measure and agreed it was not a simple enough measure to showcase benchmarking.
• Bicycle and pedestrian community benefits: There was general agreement to avoid community benefits and equity, as these are nuanced topics with varied definitions, and multiple agencies that affect outcomes are often involved.
• Bicycle and pedestrian safety: Although safety is a priority in all states, comparison of collision rates cannot reliably be made because standard user counts do not exist.

Step 1 Takeaways
• Very few measures relating to the environment and nonmotorized performance will have appeal across all 50 states. The smaller scale of benchmarking networks allows states with common interests to discuss less widely applicable performance topics.
• A state's political climate can greatly affect which performance areas it pursues and to which it will devote resources.
• Performance areas that make sense on paper or in theory may not be those of most interest or make the most sense to on-the-ground practitioners.
• Individuals who specialize in an area of interest should attend initial discussions. In many cases the impetus for benchmarking will come from these individuals, so they will naturally be involved, but it can also happen that managers or performance generalists initiate a push for benchmarking, in which case soliciting specialized area expertise will be helpful.

Step 2. Select Peer Agencies

Key Actions to Select Peer Agencies
• Identify relevant peer selection criteria for each performance area
• Collect criteria data or conduct peer research to identify appropriately similar states
• Recruit participants (benchmarking network)

Peer selection criteria—The benchmarking guidebook (Part 1), along with the digital benchmarking platform developed as an online tool to aid in benchmarking (Part 2), suggests using selection criteria to identify states with similar characteristics for a given performance area. For example, when comparing pavement condition, it would make sense to look only at states with similar climates, as extreme cold and freeze–thaw cycling are major factors in pavement condition. The digital platform has a range of such characteristics in its peer selection function to aid in finding the most appropriate benchmarking peers.

In the course of the pilots, however, less reliance was placed on agency characteristics—such as system size, administrative structure, or climate—and more reliance was placed on program-level information, such as the existence of a dedicated topic-specific program or its level of sophistication. This emphasis was particularly true for the wildlife–vehicle collisions pilot. These kinds of criteria differed from those initially envisioned by the team, and their emergence led the team to refine its use of peer selection criteria into two scenarios:

1. Benchmarking a topic with national appeal and implementation, or
2. Benchmarking a niche topic.

In the first scenario, a data-driven approach to finding peers is possible and data availability is a nonissue, so it is practical to benchmark against peers with the greatest similarity on relevant characteristics, as sketched below. Finding peers will rely on gathering data on characteristics to compare all possible agencies. This situation will be most common when undertaking independent benchmarking and for creating benchmarking networks on a national measure.
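Where criteria data exist for all candidate agencies, peer screening can be as simple as ranking states by normalized distance across the selected characteristics. The sketch below is a minimal illustration of that idea with made-up criteria values; it is not the digital platform's actual peer selection algorithm.

```python
def rank_peers(candidates, target, weights):
    """Rank candidate states by weighted, range-normalized distance
    to the target state (smaller distance = more similar)."""
    spans = {}
    for c in weights:
        values = [s[c] for s in candidates.values()] + [target[c]]
        spans[c] = max(values) - min(values) or 1.0
    def distance(state):
        return sum(weights[c] * abs(state[c] - target[c]) / spans[c]
                   for c in weights)
    return sorted(candidates, key=lambda name: distance(candidates[name]))

# Hypothetical criteria: rural lane miles (thousands) and annual
# wildlife-vehicle collisions (thousands).
target = {"rural_lane_miles": 40, "wvc_per_year": 5.0}
candidates = {
    "State A": {"rural_lane_miles": 38, "wvc_per_year": 4.6},
    "State B": {"rural_lane_miles": 12, "wvc_per_year": 1.1},
    "State C": {"rural_lane_miles": 45, "wvc_per_year": 6.2},
}
weights = {"rural_lane_miles": 1.0, "wvc_per_year": 1.0}
print(rank_peers(candidates, target, weights))  # ['State A', 'State C', 'State B']
```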

In the second scenario, that is, when benchmarking a measure for which data are not likely to be uniformly available across states, the mere existence of data or a specific program may be the most important element. In these cases, which will include many benchmarking networks, research and talking with subject matter experts, rather than data collection, will be needed to find peer agencies likely to participate.

Peer research and data collection—Finding benchmarking participants for this project's pilots started with leads generated by workshop attendees during group discussions. The workshop discussion focused on how interested agencies could be reached and which agencies might be most interested in the measures selected. For the nonmotorized pilot, an invited subject matter expert knew of a mailing list of bike and pedestrian coordinators. She was confident this list would target precisely the practitioners the team was looking for and would yield interested participants for all nonmotorized measure areas. For the wildlife collisions measure proposed as part of the environmental pilot, the team conducted further research, starting with states that a subject matter expert at the workshop knew were active in this area. For the NEPA timeliness measure, the team reached out to an FHWA contact and conducted further research.

Recruit participants—The team sent emails to the targeted lists of practitioners who had been identified. The email included overview information about the project and the proposed measures and explained that participants would be expected to

1. Join a kickoff call to discuss measure parameters and definitions,
2. Gather data on agreed-upon measures, and
3. Participate in a noteworthy practices exchange after data collection and comparison were complete.

Thanks to accurate initial information and careful research at the start, these emails resulted in enough participants for at least one measure in each pilot to proceed quickly.

Environmental Performance

Peer States for Wildlife Collisions Measure
For the wildlife–vehicle collisions pilot, the team targeted states that had active wildlife collision–reduction programs, with the expectation that these states would most likely be interested and have the necessary data. The panel's subject matter expert suggested several states that had data collection programs associated with wildlife collision–reduction efforts. In addition, this expert contacted organizers of the International Conference on Ecology & Transportation, which is attended by wildlife–transportation practitioners from many states. The team used participant information supplied by the conference organizers to identify practitioners most likely to be involved in wildlife collision issues. Internet research of the identified practitioners confirmed the most likely contacts and identified additional practitioners.

After email outreach to ten agencies from this search, the team successfully recruited seven participants from six state DOTs (California, Colorado, Montana, Nevada, Utah, and Washington) to participate in the wildlife pilot. In general, the states with wildlife collision–reduction programs share many characteristics: they tend to have large wildlife habitat areas, healthy wildlife populations, and a high incidence of wildlife–vehicle collisions. Wildlife collisions are an issue where thriving wildlife populations meet denser human populations with busier roads. Table 7 lists the peer selection criterion used in this pilot, along with possible criteria that could be used for similar initiatives.

Table 7. Peer Selection Criteria for Wildlife–Vehicle Collision Measures
Criterion used in pilot:
• Existence of a wildlife collision–reduction program at the state DOT
Other possible selection criteria:
• Rural lane miles
• Wildlife population levels
• Types of wildlife present
• Endangered wildlife populations
• Incidence of wildlife–vehicle collisions
• Cost of wildlife–vehicle collisions
• Method of collision documentation
• Types of interventions implemented

Peer States for NEPA Document Timeliness

Workshop attendees expressed interest in a benchmarking initiative targeted to NEPA document time frames, but no panel members had the domain-specific knowledge and connections to champion the topic. The research team therefore reached out to FHWA staff for guidance on the likely availability of data, parameters that made sense for a measure, and state contacts to participate in the pilot.

FHWA suggested that EIS time frames were the most promising measure for data uniformity because all states must use the same start date for the EIS process, whereas CE and EA start dates can, and often do, vary. The team was thus back to seeking a measure of EIS timeliness. The FHWA contact also recommended starting with the larger "NEPA assignment" states—that is, states whose agencies oversee the NEPA compliance process on their own. Because EIS documents are required only for large expansion projects, many states go several years without completing an EIS; the contact reasoned that NEPA assignment agencies tend to be larger and would have a greater number of EIS projects.

One practitioner at a NEPA assignment agency expressed interest in assisting, but another practitioner discouraged the use of EIS documentation because her agency conducted so few EISs, just as the workshop participants had reasoned. When asked about the possibility of using uniform CE or EA start dates for benchmarking, this practitioner expressed interest in joining. No other interest was generated from this outreach, and the initial statements of interest did not progress. The research team initiated another round of outreach to five more state practitioners, but all either declined participation or did not respond. The NEPA-related benchmarking pilot was therefore dropped at this stage. Table 8 lists the peer selection criterion used in this pilot, along with other possible criteria that could be used for similar initiatives.

Table 8. Peer Selection Criteria for NEPA Timeliness Measures

Criterion used in pilot:
• NEPA assignment status

Other possible selection criteria:
• Size of system
• Number of large projects
• Current "begin" definition for EAs

Nonmotorized Performance

State DOTs were invited to participate in all three potential nonmotorized benchmarking pilots. One of the workshop participants offered to solicit participants via a national list serve for state bike and pedestrian coordinators. This list, made available through the participant's topic expertise, provided a targeted opportunity to reach interested practitioners. Because a direct line was available to numerous bike and pedestrian coordinators, the research team did not apply peer selection criteria in recruiting pilot participants: any agency with the necessary data and interest would become a peer. Table 9 shows possible criteria a benchmarking initiative could use in this area.

Table 9. Peer Selection Criteria for Nonmotorized Performance

Possible selection criteria:
• Population served
• Percentage of urban lane miles
• Population density
• Miles of bicycle and pedestrian facilities

Peer States for Connectivity

As anticipated, most states expressed interest in the connectivity pilot. Regardless of land use patterns or state size, there is an increasing expectation that state DOTs plan for all forms of transportation rather than highways alone. The final group included seven bicycle–pedestrian transportation practitioners from six state DOTs: Kansas, Minnesota, New Mexico, Utah, Vermont, and Washington.

Peer States for Demand

Only two states, Washington and Minnesota, expressed a willingness to participate in a demand- or count-related pilot. This group was not convened, as two states were insufficient for network benchmarking. It is worth noting that FHWA released recommended counting methods in 2016 and was expected to release a standard format for data collection in 2018. These standards may increase the number of states willing to dedicate resources to systematic bike and pedestrian counts.

Peer States for Consumer Satisfaction

No state expressed interest in the consumer opinion pilot, so the effort was abandoned.

Step 2 Takeaways

• Selection criteria based on the characteristics of different agencies help reduce a large field of potential benchmarking peers to those most appropriate for a given performance area. However, when benchmarking on a niche topic, peer agencies may simply be those agencies that are willing to participate and have the data needed to compare performance.
• Subject matter experts with deep and specific knowledge can contribute information to the peer selection process that a performance generalist may not have. They can identify interested practitioners, know which agencies likely have a need or interest in the performance area, and are often aware of email lists—useful for recruiting participants—and of conferences or other gatherings that practitioners are likely to attend.
• Topic-area conferences or publications can be a valuable source for identifying peers in a particular performance area, particularly when looking for active participants in a benchmarking network.
• Internet research and data collection on practitioners help focus leads and ensure the right individuals are contacted.

Step 3. Define the Approach

Pick one or more measures related to the selected performance area that are suited to comparing performance among agencies or groups and come to agreement on relevant details and definitions. – From the Guidance

Key Actions to Define the Approach
• Choose specific performance measure(s)
• Define the parameters of the measure(s)
• Determine variables that influence performance comparability
• Identify data sources for performance measures

At first glance, the work before launching data gathering and comparison seems complete: the measure area has been chosen, and peers have been recruited or selected. However, many options and decisions go into fully defining a performance metric, such as whether to normalize data and, if so, by what; whether the metric should apply to all cases or only some; what data sources to use; whether any variations between peers will be acceptable; and definitions for the data. A crucial step in conducting a formal benchmarking initiative—considered the most important by many benchmarking practitioners interviewed for this project—is to reach agreement on these elements.

The research team held conference calls with the committed participants for each performance topic. This step requires input from the benchmarking participants: it cannot be completed by the facilitators on their own. To ensure a fruitful start for the discussion, however, the team proposed various measures and their parameters to the participants. The team did not expect the final measures would stay the same as they were proposed. Rather, the intention was that the participants' expert input would refine the suggestions into a final measure everyone could support.

Environmental Performance

Wildlife–Vehicle Collisions

For the wildlife–vehicle collision measure, the research team held a conference call with practitioners from five of the six interested agencies. The sixth participant was unable to join the call but committed to future aspects of the project. The team had labeled the measure "wildlife–vehicle collisions," but team members considered whether there should be caveats, such as counting only large animals, and whether differences between tracking collision reports and tracking carcasses were important. With these questions in mind, the research team proposed the measures and parameters shown in Table 10 to pilot participants.

Table 10. Measures and Parameters Proposed to Wildlife Collision Participants

Proposed measure: Number of wildlife–vehicle collisions
Proposed parameters: Raw numbers, or per mile of roadway; all wildlife, or only some

Proposed measure: Number of wildlife carcasses inventoried
Proposed parameters: Raw numbers, or per mile of roadway; all wildlife, or only some

Proposed measure: Estimated cost of all reported wildlife–vehicle collisions
Proposed parameters: Agreed-upon categorization and costs for each type of crash

In the course of discussing how to define a statewide animal crash rate measure, it became apparent that most states track only collisions with large animals, such as elk, deer, horses, and bears, but the states differed as to which animals were tracked. It also became clear that most practitioners track both carcass counts and reported collisions and that both inform their tracking. The team learned that the accuracy of carcass counts varied widely based on the habits of the maintenance crew in a particular state or district. In Washington State, for example, the maintenance staff often did not file reports of carcass removals. However, within a month of implementing iPad-enabled mobile reporting, reporting rates for carcass removals increased dramatically in some districts because the process was easier.

On both of these issues—animal applicability and crash report versus carcass count data—no clear consensus was reached as to which made the most sense for a performance measure. Then a question raised by a participant changed the course of the discussion: Why use a statewide measure of the number of wildlife strikes? He explained that this count is not how practitioners in this specialty generally think about the issue and is not usually the kind of data they track. The other participants quickly agreed, noting that even in their most successful years, gains made in particular corridors might not "move the needle" in statewide numbers. This discussion led to a measure the project team had not originally considered: percentage reduction in wildlife strikes on a corridor after project implementation. All the practitioners on the call agreed this was a measure of interest to them and their agencies and stated they could obtain data for one or more projects. Furthermore, because the measure simplifies to a percentage change within each organization, differences between agencies in data collection or definitions would be less of an issue. This shift resulted in data somewhat different from most data on the digital platform, but participants were clear that this measure made the most sense.

NEPA Document Timeliness

The second environmental measure pursued for a pilot, NEPA document completion time, made less progress than the wildlife measure and did not attract enough interested peer states in Step 2. However, some of the discussion concerning an EIS-based versus an EA- or CE-based measure is worth noting.

The argument for using a measure of how long an EIS takes was that start and end dates are uniform across states and data are already reported at the federal level. This was the measure the research team initially proposed and that the FHWA staff interviewed for this project recommended. The objection from practitioners, however, was that because an EIS is completed only for large expansion projects, DOTs complete so few in a year—often none—that the measure would not be meaningful for most states. Projects that require EAs or CEs are far more common, and reduced completion time for these NEPA documents would be of more widespread benefit, as noted by practitioners in both the benchmarking workshop and initial pilot participant outreach.

Although this discussion was not pursued for this NCHRP project, it could be an interesting topic for other agencies to explore. Coming to agreement on how different agencies treat start and end dates could be a particularly useful endeavor for a subset of practitioners.

Nonmotorized Performance

The connectivity pilot was kicked off via a conference call with all participants except the Utah DOT (UDOT). To arrive at a useful and concise measure of network connectivity, the call focused on three topics:
• Method definition
• Data availability
• Data development

Method definition—The project team proposed a route directness index (RDI) measuring network permeability that would answer this question: How easily can a person walking or biking cross high-speed state-owned roadways? The question is answered in terms of out-of-direction travel, a ratio of travel distance along the network to the as-the-crow-flies distance between two points (a simple numeric illustration of this ratio appears at the end of this subsection). A use case can be found in the forthcoming Caltrans District 4 Bike Plan, and more detail on the method is in FHWA's forthcoming Guidebook for Measuring Multimodal Network Connectivity. These data may be summarized at various geographic scales, and the method can consider various categories of origin and destination points, such as "high-quality" crossings or connections to locally designated bike networks. Though measures of route directness have existed for some time, this large-scale application to bike and pedestrian networks is relatively new.

Building on the basic method proposed by the research team, the pilot members indicated the importance of the following:
• Comparing rural versus urban land use;
• Understanding, if possible, the effects of signalized and unsignalized crossing locations; and
• Understanding high- versus low-quality roadway connections.

Data availability—The pilot members provided comments on data availability. The research team proposed using OpenStreetMap (OSM) for initial data comparison across all states. Although national-level roadway network data are available, there is no consistent national source of bike network data or traffic signal locations. After discussing the available data sets and options in some detail, the group determined that a standardized data source was desirable to facilitate comparison. MPO boundaries were chosen to delineate urban and rural land use.

Data development—Based on the direction provided by the pilot members, the research team computed and summarized state highway permeability. Permeability was calculated using OSM roadway data and summarized using MPO boundaries. The plan was to load these data into the benchmarking platform once data development was complete.
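As a minimal numeric illustration of the out-of-direction ratio described above (in Python), consider a crossing that forces an 800-meter trip between two points 500 meters apart as the crow flies. The distances and function name here are invented for illustration and are not pilot data.

# Out-of-direction travel ratio (route directness index, RDI):
# travel distance along the network divided by straight-line distance.
def route_directness_index(network_distance_m, straight_line_distance_m):
    return network_distance_m / straight_line_distance_m

# An 800 m network route between points 500 m apart "as the crow flies":
print(route_directness_index(800.0, 500.0))  # 1.6, i.e., 1.6 times out of the way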

Step 3 Takeaways

• Have a facilitator or participant draft initial measure options to provide a starting point for a group conversation and avoid aimless discussion.
• Final measure parameters may be informed by the data that are available, rather than the other way around. Benchmarking participants may need to begin discussions on measure details with a discussion of data availability.
• Data that are not uniformly defined among practitioners, such as the start dates for EAs and CEs, may slow the progress of a benchmarking initiative. Participants may need either to work around the differences and find meaningful comparisons among disparate data sets or to work toward compatible comparisons by changing some participants' definitions or limiting benchmarking to only those peers whose data match.
• Data collection and maintenance are key foundational issues that all DOTs hoping to learn from their data should invest in.
• Pilot participants seemed more interested in how they could use the digital tool for local planning efforts than for comparison between states. This preference could be related to the fact that state-to-state comparison of biking and walking performance measures has previously been done only by advocates.

Step 4. Obtain Data

Effective benchmarking depends on access to quality data. Data may be easily obtainable from national data sources; gathering data from peer agencies can provide more targeted information. New data-gathering efforts take time to establish. – From the Guidance

Key Actions to Obtain Data
• Check national data sources
• Set up a method to obtain data

Check national data sources—Depending on the metrics selected in earlier steps, data may be easily obtainable from national or already-centralized data sources. Several national data sources providing data on fatalities, bridge condition, commute mode share, and motor fuel usage were incorporated into the project's digital benchmarking platform.

Set up a method to obtain data—Often, the data of most interest to transportation practitioners have not yet been collected and centralized. New data-gathering efforts take time to establish, and they often go through one or more learning cycles before they produce high-quality results. Even when peer agencies have collected appropriate data, centralizing the data for the benchmarking initiative takes work. Two approaches to obtaining data are as follows:

1. Task a third-party facilitator with collecting data—This approach, which was used by the benchmarking initiatives researched for this project, helps ensure more complete data.
2. Rely on members to submit data or complete a survey—This route often results in partial or slower data collection. Without someone responsible for the collective results, other duties can push the data request aside for busy participants. This approach is most workable when someone in the group is tasked with following up on data acquisition.

Because these pilots mimicked a benchmarking network, both the environmental and nonmotorized pilots had facilitators (the project team members) to collect data. The performance measure selected for the environmental pilot, change in wildlife strikes postintervention, was chosen in part because participants had the data available in existing reports. Data collection differed for the nonmotorized pilot, as the group decided to generate estimates for data that did not yet exist. Both approaches produced satisfactory results for most participants in the relatively short pilot period.

Environmental Performance

Data Requested
• Annual crash and carcass counts for the project area, including a distance along the corridor just outside the project limits to account for "end effects"

The wildlife collisions pilot group selected a measure—percentage reduction in annual wildlife–vehicle collisions after project implementation—for which most of the participants already had access to data. Of the six agencies that agreed to participate in the wildlife pilot, five confirmed they would have data to contribute; only California was unable to provide any. Many of the agencies had already developed individual project reports that examined each project's success in reducing wildlife strikes, increasing animal use of crossings, and other related outcomes. Reports from Colorado, Montana, and Washington already contained the necessary data for the pilot performance measure, and these were sent to the facilitator soon after the initial conference call.

Utah did not have a formal report already drafted, but an informal project summary from agency staff close to the project provided much of the relevant information. However, this first data submission did not provide the annual data that would allow for trend analysis. Participants had previously confirmed that carcass data were available from maintenance databases and that police reports for crashes were tracked by agencies. The facilitator received these data within a few weeks of requesting them.

Nevada sent a partially completed report on an ongoing project, but it lacked data in sufficient detail to compare trends across time. The facilitator worked with the agency to obtain the raw data behind its analysis. These data took the longest to receive, however, as a series of high-profile horse crashes resulted in the destruction of two new police vehicles soon after the pilot began. These incidents diverted the participant's attention for several weeks and delayed receipt and analysis of the final data.

The experience from the pilot shows that practitioners dealing with wildlife–vehicle issues are used to thinking about individual mitigation projects in isolation. Collecting data on disparate projects created an opportunity to place these individual projects side by side and to get practitioners thinking about how the data on each could be made comparable. This consideration set the stage for more detailed discussion.

Nonmotorized Performance

Data Requested
• OSM road network
• Highway Performance Monitoring System roadway ownership

Bike and pedestrian data are limited at the national level, and individual state and MPO data are so varied that comparable metrics are a challenge. For this reason, the nonmotorized pilot participants decided to generate the data needed for a measure that held interest for them. Doing so would both provide a chance to try out the benchmarking philosophy proposed in the guidebook and create a new and useful data set.

The pilot team originally planned to use data inputs unique to each state but quickly decided to use nationally available data instead. The team settled on open source and federal government data, which allowed for fairer and more accurate comparisons among the participants; state-level data would always contain differences that would impede state-to-state comparisons.

Roadway Data Used

OSM was selected because it provides a comprehensive data set for the United States, and open source tools allow extraction and processing of its data to create a routable network data set. Routable networks, like the one underlying Google Maps, allow for analysis of the relationships between destinations, a critical capability when assessing performance on transportation networks. One trade-off in using OSM data is the limited roadway attribute information. Although the general type (e.g., highway or residential road) and location of a roadway are known, other critical factors, such as speed limit or the presence of sidewalks, may not be present. Because OSM relies on a volunteer community to populate and keep many map elements up to date, the completeness of information about roadways and buildings largely depends on the size, skill, and dedication of the local OSM community.

After a preliminary review of roadways defined as "motorways" (highways) in the OSM network data, the peer group recommended supplementing OSM with a more authoritative source of roadway ownership data. The team turned to the Highway Performance Monitoring System (HPMS) database maintained by FHWA for this purpose; a brief sketch of this ownership screening follows below. Consequently, although collecting the nationally available OSM data was not itself time consuming, assessing the initial data sources to confirm their appropriateness took considerable time and led to additional data collection (the HPMS data).
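A minimal sketch of the ownership screening in Python, using the geopandas library, is shown below. The file name, the OWNERSHIP field, and the code value for state highway agencies are assumptions to be checked against the FHWA HPMS field manual; only the general pattern (read, filter on ownership, save) reflects the pilot's approach.

import geopandas as gpd

# Sketch: isolate state-owned roadways from a local HPMS extract.
# "hpms_state_extract.shp", the OWNERSHIP field, and the code value 1
# are assumptions; verify against the FHWA HPMS field manual.
hpms = gpd.read_file("hpms_state_extract.shp")
STATE_HIGHWAY_AGENCY = 1  # assumed ownership code for state DOT-owned roads
state_owned = hpms[hpms["OWNERSHIP"] == STATE_HIGHWAY_AGENCY]

# These geometries can then serve as the "barriers" in the permeability analysis.
state_owned.to_file("state_owned_barriers.shp")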

Step 4 Takeaways

• Data gathering and assembly can be time consuming. Facilitators or task leads should expect to follow up several times with participants before having all necessary data, as the facilitators did in the wildlife pilot. In the nonmotorized pilot, assembling final data required a significant investment in consultant time for tool development and assessment of data sources. The investment in tool development to generate the needed data may be challenging outside of an effort funded by NCHRP or another interested agency.
• Even seemingly simple collection tasks can be sidelined by other agency matters. When a high-profile incident occurred at one agency in the environmental pilot, final data collection was set back several weeks. Data collection is the step most outside a facilitator's control, and many things can arise to derail even carefully planned timelines.
• Data developed for benchmarking can be used for multiple purposes. Developing robust data that can be used at multiple scales may increase interest in the process and thus increase the resources available to develop more meaningful measures. Several nonmotorized pilot participants expressed interest in using the generated data for ongoing planning work, as comparative, statewide data of this kind are infrequently available. The benchmarking group expressed interest in creating comparative visuals to understand variations within DOT districts and in applying the data at the corridor, district, and state levels to inform planning efforts. There was also interest in refining the data to further expand their range of uses.
• Depending on the nature of the data, data collection and management may require a technical specialist to format and assemble the data and explain their limitations. Care should be taken to understand the range of desired uses in order to control the complexity of data creation and avoid an allocation of resources that might threaten completion of the benchmarking effort.

Step 5. Analyze Data

Data cleaning is almost always required before the data can be analyzed. If the benchmarking effort involves collecting new data, a formal quality assurance and quality control approach will help pinpoint any patterns of incorrect data. Once data are cleaned, compare the values of selected benchmark measures against peers' values. Visuals are conducive to exploration and comparison. – From the Guidance

Key Actions to Analyze Data
• Clean the data
• Explore the data

Clean the data—Data collection, notation, storage, and other practices vary from state to state. To get data from a variety of sources into a format suitable for comparison, some formatting and cleaning will be necessary, or participants may need to provide additional detail or data in a different format. Quality checks should be performed at this stage. In the pilots, getting data in the correct specification took some back-and-forth between facilitators and participants.

Explore the data—All data for the pilots were put into either chart or map format before group discussions so participants could easily see the results and variation across agencies. Because the wildlife measure was a before-and-after comparison, the facilitators requested measure results by year for incorporation into simple line charts; a sketch of this kind of chart follows below. The nonmotorized measure was calculated using geographic analysis, so maps formed the basis for data exploration. The digital benchmarking platform was developed as a quick resource for creating comparison charts. Such visuals can be created with the national data preloaded by the project team or with new data, from the pilots or other initiatives, that users upload themselves.

Although analyzing data can be the least time-intensive part of the benchmarking process, it is the most important for discovering trends and patterns in the data. Sound analysis leads to identifying the most relevant and useful practices in Step 6.
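As an illustration of the chart format used for the wildlife measure, the following Python sketch plots annual counts with the intervention window shaded. All values are invented placeholders, not pilot data.

import matplotlib.pyplot as plt

# Annual carcass counts with the project construction window shaded.
# Years and counts below are invented placeholders.
years = list(range(2000, 2010))
carcasses = [110, 98, 105, 112, 40, 12, 9, 8, 10, 7]

fig, ax = plt.subplots()
ax.plot(years, carcasses, marker="o")
ax.axvspan(2003.5, 2004.5, alpha=0.3, label="Project construction")
ax.set_xlabel("Year")
ax.set_ylabel("Carcass count")
ax.set_title("Annual carcass counts, project area")
ax.legend()
plt.show()

Marking the intervention period directly on the chart is what allows viewers to attribute the before-and-after change to the project rather than to an unexplained shift in the series.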

Environmental Performance

Data Cleaning and Formatting

Although the chosen performance measure required only a single number for the before-construction period and a single number for the after-construction period, breaking the results down into annual trend data provided the clearest view of the effect of agency interventions on wildlife collisions. The facilitator therefore sought to obtain annual data from all participants.

Most participants in the wildlife pilot sent existing reports that included annual collision data for the project area, though sometimes without an accompanying spreadsheet summarizing the data. When participants did provide spreadsheets, the spreadsheets often included data beyond the immediate project limits or categories of crashes that would not be included in the pilot analysis. The inclusion of this extra information meant filtering was required to get all agencies' data into the same comparable format: annual numbers with matching parameters on crash types and inclusion. This process required asking questions and making decisions on detailed specifications beyond those agreed to in the initial pilot conference call. Answers to these questions ensured data were in the proper format and comparable across agencies. These questions and decisions are detailed in Table 11.

Table 11. Data Questions and Decisions in the Wildlife Pilot

Question: How far beyond the projects' geographic limits should crashes be included?
Decision: A distance of 0.5 miles was selected for this project; 0.2 miles is the minimum accepted distance to capture immediate end effects, where animals simply go around the end of a fence segment and onto the roadway. The research team wanted additional distance to be more conservative.

Question: Which years count as preconstruction and which as postconstruction? Is the construction period included in either of these two periods?
Decision: Results should exclude the construction period. The other periods will include as many before or after years as each agency has quality data for. Agencies should exclude specific years as deemed appropriate.

Question: Should collision or carcass numbers be used?
Decision: Carcasses were selected as the better option, as there are often too few crash records from which to interpret reliable trends. (The Nevada data contained only crash data. Given the severe nature of horse collisions, crash reports are likely a reliable indicator of total collisions, as most of these more serious crashes will be reported to police.)

Question: When carcass data for the same project area are provided by multiple entities, which data set should be used?
Decision: This was an issue for Colorado DOT, which obtained data from two local entities in addition to its own maintenance records. As combining these data was not appropriate, the team chose the data set that appeared to be more comprehensive.

Results Analysis

Summaries of all submitted projects were developed for the conference call presentation, allowing all participants to see and explore each project's summary data, trends, and results. The intervention period is displayed on all charts to help assess the impact of the intervention; without this explanatory element, interpreting the results would be far less meaningful. A sketch of the measure calculation itself follows below. Figures 4–8 summarize performance measure results for Utah, Washington, Montana, Colorado, and Nevada, respectively.
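To make the Table 11 decisions concrete, the following Python sketch computes the measure from hypothetical carcass records. The records, column names, and cutoff year are invented; only the filtering logic (the 0.5-mile buffer and the exclusion of construction years) mirrors the pilot's decisions.

import pandas as pd

# Hypothetical carcass records: one row per carcass, with the distance
# (miles) beyond the project limits at which it was found.
records = pd.DataFrame({
    "year": [2002, 2002, 2003, 2003, 2005, 2006],
    "miles_beyond_project": [0.0, 0.3, 0.0, 0.7, 0.1, 0.0],
})
construction_years = {2004}  # excluded from both periods (Table 11)
first_post_year = 2005       # assumed first complete post-construction year

in_scope = records[records["miles_beyond_project"] <= 0.5]       # 0.5-mile rule
in_scope = in_scope[~in_scope["year"].isin(construction_years)]  # drop construction
annual = in_scope.groupby("year").size()                         # annual counts

pre = annual[annual.index < first_post_year].mean()
post = annual[annual.index >= first_post_year].mean()
print(f"Percentage reduction: {100 * (pre - post) / pre:.1f}%")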

Utah

Performance Measure Result: 91% Reduction

Figure 4. Utah DOT wildlife carcass count results (annual carcass counts for mileposts 112–134 and 102–144, 1996–2009, with the project year marked)

Pre-project annual average: 106.0
Post-project annual average: 9.3

Project—I-15 Wildcat Fencing Project (2004)
• 22 miles of wildlife fence on both sides of I-15 from milepost 112 to 134; wildlife crossings at mileposts 123.5 and 125.7
• Project cost: $2,599,681

Context—Most of the Beaver deer herd migrates across I-15 twice a year in this area. I-15 has experienced increases in traffic numbers and in 24-hour traffic.

Outcomes—After project implementation, carcass counts within the project limits decreased from an average of 106 per year in 1996–2003 to fewer than 10 per year in 2005–2009. An economic analysis showed a payback period of just under 3 years from crashes avoided. In addition, the benefit to the herd was notable, with the herd surpassing the state's population goal. Although other factors, such as habitat restoration and predator control, contributed to the increased herd size, UDOT's population model attributed 4,080 animals in the herd to not being hit on the highways.

Further exploration—To explore whether animal crossings shifted to other parts of the roadway, the project team also looked at the carcass trend for 10 miles in either direction of the project limits, a distance much greater than immediate end effects. The same trend held for this larger geographic span, indicating that a shift did not occur and that the project appeared to have decreased collisions along the corridor overall. This exploration was possible due to the level of detail in the raw data provided to the facilitator.

Washington

Performance Measure Result: 43% Reduction

Figure 5. Washington State DOT wildlife carcass count results (carcass removals in the US 97 project area by year, 2008–2016, with the project marked)

Pre-project annual average: 12.75
Post-project annual average: 7.33

Project—Primarily a project to correct fish barriers in waterways, but with terrestrial wildlife habitat connectivity enhancements including
• Three wildlife guards
• Six jumpouts (places for wildlife to "jump" from the roadway to a safer location if vehicles approach while they are trapped in the fenced area)
• 0.7 miles of fencing on both sides of the highway

Terrestrial wildlife enhancements cost—$308,400 ($3.6 million total project cost including fish barrier correction)

Context—The area contains black-tailed deer, elk, black bear, and cougar.

Outcomes—After project implementation, a notable downward trend in carcasses was observed. However, given the low overall number of data points, longer monitoring would improve confidence in the results.

Montana

Performance Measure Results: 5.2% Reduction (without Control) and 71.4% Reduction (with Control)

Figure 6. Montana DOT wildlife carcass count results (fenced versus control area carcass counts on U.S. 93, with linear trends and the construction period marked)

Pre-project annual average: 6.75
Post-project annual average: 6.40
Control pre-project annual average: 5.00
Control post-project annual average: 16.60

Project—Reconstruction of a 56-mile-long section of U.S. 93 North (2005–2010)
• Wildlife crossing structures at 39 locations
• Approximately 8.71 miles of wildlife exclusion fences on both sides of U.S. 93

Outcome—Montana was the only state to include data for control segments along with the project area data. Montana therefore has two performance measure results, one without and one with the use of control segments (a worked reproduction of both figures follows below).
Without control segments (simple before–after comparison)—A 5.2% reduction in large wild mammal carcasses, from 6.75 to 6.40 per year.
With control segments—A 71.4% reduction relative to what would have happened had these areas not been mitigated, assuming the 232% increase seen on the control segments would have occurred without the project.

Further exploration—A notable facet of the Montana project was the increase in carcasses found along the entire corridor, as shown in the control segment results. A proposed explanation is that the wildlife mitigation efforts were part of a total reconstruction of the corridor that included straightening curves and widening shoulders. These road changes resulted in higher travel speeds, which in turn led to more wildlife collisions.
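The report does not spell out the control-adjustment formula, but both results above can be reproduced from the reported averages under one plausible formulation: scale the project segment's pre-project average by the control segment's trend to estimate what would have happened without mitigation. The short Python sketch below shows this assumed calculation.

# Project segment annual averages (from the results above).
pre, post = 6.75, 6.40
# Control segment annual averages.
control_pre, control_post = 5.00, 16.60

# Simple before-after comparison.
print(f"Without control: {100 * (pre - post) / pre:.1f}% reduction")  # ~5.2%

# Assumed control adjustment: apply the control segment's increase
# (16.60 / 5.00, a 232% rise) to the project segment's pre-project average.
expected_without_project = pre * (control_post / control_pre)  # ~22.4 per year
reduction = (expected_without_project - post) / expected_without_project
print(f"With control: {100 * reduction:.1f}% reduction")  # ~71.4%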

Colorado

Performance Measure Result: 86% Reduction

Figure 7. Colorado DOT wildlife carcass count results

Pre-construction annual average: 56
Post-construction annual average: 8

Project—State Highway 9 Colorado River South Wildlife and Safety Improvement Project
• Two wildlife overpasses, five underpasses, 8-foot-high exclusion fencing, 61 wildlife escape ramps, and 29 deer guards over 10.4 miles
• Phase 1: December 2015; Phase 2: December 2016

Context—Prior to the project, wildlife–vehicle collisions were the most common accident type on this segment of highway, accounting for 60% of all accidents reported to law enforcement.

Outcomes—Local agency carcass reports indicate that across the project area, wildlife collisions decreased 86% post-construction compared with the 5-year pre-construction average. Wildlife–vehicle collisions did not appear to increase beyond the fence ends north and south of the project area: post-construction, Colorado DOT maintenance data recorded only one carcass in the mile south of the project and one in the mile north.

Further analysis—The Colorado DOT report also examined possible effects on nearby roadways to see whether the project pushed animal crossings elsewhere. Ungulate carcass counts were examined on a different highway, U.S. 40, east and west of the project. The 2 years following the project each showed different results, so more data are needed to reach a conclusion on the likelihood of this effect.

Nevada

Performance Measure Result: In Progress

Figure 8. Nevada DOT horse mitigation project crash records (reported crashes on U.S. 50 by year, 2006–2016, with the three project phases marked)

Project—U.S. 50 Horse Mitigation Project
• Phase 1 (2013): One large box-style underpass with approximately 5.3 miles of horse fencing, not including fence end treatments. Cost: $1,200,000
• Phase 2 (fall 2015): 4.0 miles of fencing along U.S. 50. Phase 2 fencing tied into the eastern limits of Phase 1 and included end treatment at the eastern limits. Cost: $750,000
• Phase 3 (2018 construction season): Will fence the center portion of the project area. After Phase 3 is complete, the region should be entirely fenced from Chaves Road to Silver Strike Casino.

Context—Several high-profile horse crashes occurred in Nevada soon after the benchmarking pilot began. Destructive crashes such as these had become increasingly common throughout the state and were the impetus for a series of improvements aimed at the state's wild horse population. Along U.S. 50, between Dayton and Silver Springs, 55 horse-related vehicular crashes were documented between 2006 and 2016 (Figure 9). Due to the extreme safety risks associated with horse–vehicle collisions, Nevada DOT began the three-phase horse fencing project.

Figure 9. Map of Nevada DOT's project area

Nonmotorized Performance

Data Processing and Formatting

For the sake of efficiency, the team scripted the analysis process to calculate the route directness index (RDI) for the nonmotorized connectivity pilot, using freely available open source software packages along with custom code written for this project. The process described here may also be carried out with any GIS software (e.g., ESRI's ArcMap), with results calculated manually or scripted within the programming environment. An example of the results is shown in Figure 10. The RDI was calculated as follows:

1. Defined barriers. All state-owned highways in the HPMS database were downloaded as a polyline shapefile.
2. Downloaded network for routing. OpenStreetMap (OSM) highways (all classes) were downloaded via the Overpass application programming interface.
3. Weighted network for routing. Roadways within the network were weighted according to their OSM highway tags (Table 12). Roadways with tags other than those specified in Table 12 were excluded from the network, removing the potential for routing along limited-access freeways and other large highways unsuitable for bicycling.

Table 12. OSM Highway Tags and Network Distance Weights

OSM Highway Tag | Weight
Cycleway | 0.8
Residential | 1
Unclassified | 1
Tertiary | 1.2
Secondary | 2
Primary | 3

Note: OSM highway tag definitions are available at https://wiki.openstreetmap.org/wiki/Key:highway. Highway types not listed here were assumed to be unsuitable for bicycling and were restricted from routing.

4. Barriers broken into equal-length segments. Barriers were broken into 500-meter-long segments, which provided an analysis of cross-barrier connectivity at 500-meter intervals along each barrier. For a finer-resolution analysis, barriers could be broken into shorter segments; for a coarser analysis, longer segments.

The following steps were computed for each segment:

5. Defined "offset points" to either side of the segment. Points were drawn 500 meters perpendicular from either side of the segment midpoint. These points represent theoretical start and end points of a short trip whose sole purpose is crossing the highway. No assumptions about route safety or quality were made.
6. Connected offset points to the routable network.
   a. Offset points were connected to the routable network at the closest available point along the network.
   b. If no network access point was available within 500 meters of an offset point, routing between the access points was considered unviable.
7. Found the shortest path along the network.
   a. Dijkstra's algorithm was used to find the shortest path between the network access points.
   b. The routing algorithm accounted for roadway weights so that, for example, priority would be given to routing along a residential roadway rather than a primary roadway. This weighting helped simulate the most bikeable path.
8. Calculated the "directness" ratio of network path length to straight-line distance.
   a. Network path length was the overall length of the shortest path calculated in the previous step.
   b. Straight-line distance was calculated between the two network access points.
   c. The ratio between network path length and straight-line distance summarizes the directness of connectivity between the two sides of the barrier. The ratio can be interpreted as the number of "times out of their way" that cyclists or pedestrians would need to travel to cross the barrier.
      i. Low ratio: greater connectivity; direct crossing
      ii. High ratio: less connectivity; indirect crossing
   d. In addition to this "true-distance" ratio, a weighted ratio was calculated with a weighted path length as its numerator. The weighted path length was calculated by breaking the network path into segments based on the types of roadways traversed (e.g., residential or primary), multiplying each segment's length by its associated weight, and re-summing the weighted lengths. If a network path traveled on highly weighted roadways (e.g., primary), the weighted ratio would be higher than if those roadways had lower weights. Thus, a direct path along highly weighted roadways might nonetheless produce a high ratio, indicating that the crossing would be difficult, not necessarily because of its distance but because of its quality for bicycling and walking.
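A condensed Python sketch of steps 5 through 8 for a single barrier segment is shown below, using the networkx library in place of the pilot's custom tooling. The toy network, edge lengths, and straight-line distance are invented, and applying the Table 12 weights as multipliers on edge length is an assumption about how the pilot implemented weighting.

import networkx as nx

# Table 12 weights, applied here as multipliers on edge length (assumed).
OSM_WEIGHTS = {"cycleway": 0.8, "residential": 1.0, "unclassified": 1.0,
               "tertiary": 1.2, "secondary": 2.0, "primary": 3.0}

G = nx.Graph()
# Toy edges: (node, node, length in meters, OSM highway tag).
edges = [("A", "B", 400, "residential"), ("B", "C", 450, "primary"),
         ("A", "D", 300, "residential"), ("D", "C", 700, "cycleway")]
for u, v, length, tag in edges:
    G.add_edge(u, v, length=length, weighted=length * OSM_WEIGHTS[tag])

# "A" and "C" stand in for the network access points nearest the two
# offset points on either side of the barrier (steps 5 and 6).
path = nx.dijkstra_path(G, "A", "C", weight="weighted")  # step 7
network_length = sum(G[u][v]["length"] for u, v in zip(path, path[1:]))
weighted_length = sum(G[u][v]["weighted"] for u, v in zip(path, path[1:]))

straight_line = 600  # assumed straight-line distance between access points (m)
print(f"True-distance ratio: {network_length / straight_line:.2f}")  # step 8c
print(f"Weighted ratio: {weighted_length / straight_line:.2f}")      # step 8d

In this toy network, the weighted routing chooses the residential-plus-cycleway path (1,000 m of travel, weighted length 860) over the physically shorter path that uses a primary roadway (850 m of travel, weighted length 1,750), which is exactly the "most bikeable path" behavior described in step 7b.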

Figure 10. Example of a barrier with three 500-meter segments

Results Analysis

In the preliminary analysis results, state highways were defined by the OSM "motorway" tag. When these results were shared with the peer group, it became evident that a 1:1 correlation between motorways and state-owned roadways did not always exist. Consequently, the analysis was run a second time using HPMS data to identify state-owned roadways. The results of the second analysis, which better represent the state-owned network in each state, are summarized in Table 13 (a sketch of how per-segment results can be summarized by MPO boundary follows after the table). The results are reported at the statewide scale and then by land area within and outside MPO boundaries to better represent the relative intensity of land use.

Table 13. RDI for Peer States in the Nonmotorized Peer Exchange

State | Statewide | Within MPO | Outside MPO
Kansas | 1.63 | 2.15 | 1.57
Minnesota | 2.18 | 2.42 | 2.10
New Mexico | 2.43 | 2.56 | 2.40
Utah | 2.08 | 2.10 | 2.06
Vermont | 2.84 | 2.80 | 2.85
Washington | 2.60 | 2.57 | 2.63

Note: Values are average route directness indices. Crossings that required travel along state roadways or were greater than 1,000% of the straight-line distance were not considered viable and were removed from the analysis.
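The within/outside MPO summarization can be sketched in Python with geopandas as follows. The file and column names are assumptions for illustration; only the spatial-join-then-average pattern reflects the analysis described above.

import geopandas as gpd

# Hypothetical inputs: per-segment RDI results (segment midpoints with an
# "rdi" column) and MPO boundary polygons.
segments = gpd.read_file("segment_midpoints_with_rdi.shp")
mpos = gpd.read_file("mpo_boundaries.shp")

# Tag each segment as inside or outside an MPO boundary.
joined = gpd.sjoin(segments, mpos[["geometry"]], how="left", predicate="within")
joined = joined[~joined.index.duplicated()]  # guard against overlapping polygons
segments["within_mpo"] = joined["index_right"].notna()

# Drop crossings deemed unviable (ratio above 10, per the table note).
viable = segments[segments["rdi"] <= 10]
print(viable.groupby("within_mpo")["rdi"].mean())  # within vs. outside MPO
print("Statewide:", viable["rdi"].mean())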

The following conclusions can be drawn from Table 13:

• State highways in Kansas are the most permeable to bicyclists and pedestrians, and the least out-of-direction travel occurs outside of MPO boundaries. Reducing out-of-direction travel is one consideration in whether a trip will be made on foot or by bicycle.
• Vermont has a similar RDI both inside and outside of MPO boundaries; there is little variation across the state. Without additional information, there is no way to interpret the impact of state highways on pedestrians and bicyclists.
• State highways appear to be less permeable in urbanized areas within MPO boundaries. This is likely true in part because of the number of segments where crossing was impossible outside of MPO boundaries, which resulted in the removal of those segments from the analysis. Removing these segments makes the ratios truer to life, but the ratios do not account for the percentage of the network where crossing is completely impossible because a person on foot or bicycle would have to travel too far out of direction to complete a trip.
• Generally, where a crossing is feasible, state highways require a doubling of the straight-line distance to cross the roadway. Pedestrians and bicyclists are sensitive to required detours, and this additional travel distance has the greatest impact on the shortest trips.

Limitations of viewing the data in Table 13 include the following:

• It is difficult to determine the impact of state highways where no route was available to cross them.
• The analysis does not consider the difference in character between controlled- and uncontrolled-access roadways, which can have different impacts on nonmotorized travel.
• The table does not give the viewer any way to understand how the RDI varies across a state.

A more nuanced (and more complicated) assessment of results is possible when maps, knowledge of state highway character, and a summary of uncrossable state highways are provided. An updated table and maps are included below: Figures 11–16 show permeability results for Kansas, Minnesota, New Mexico, Utah, Vermont, and Washington, respectively, and Table 14 provides a more nuanced analysis of the results from Table 13.

Figure 11. Permeability analysis results—Kansas

Figure 12. Permeability analysis results—Minnesota

Figure 13. Permeability analysis results—New Mexico

Figure 14. Permeability analysis results—Utah

Figure 15. Permeability analysis results—Vermont

Figure 16. Permeability analysis results—Washington

Table 14. Summary of Roadway Segments and Crossing Availability

Table 14 summarizes roadway segments and crossing availability. This information allows an analyst to better understand the impact of locations where no roadway crossing is available. Using the maps and the additional table, the analysis of the results presented in Table 13 is refined below. The original conclusions are shown in bold text, and new insights are shown in italic text.

• State highways in Kansas are the most permeable to bicyclists and pedestrians. The least out-of-direction travel occurs outside of MPO boundaries. This is true. Kansas benefits from a regular grid of section roadways that provides connections at regular intervals across much of the state. Kansas also has the lowest percentage of state highway segments where a crossing is not available. Pedestrians and bicyclists benefit from the regular grid network. One outstanding question is the need for crossings, which is difficult to interpret without a more explicit understanding of potential demand.

• Vermont has a similar RDI both inside and outside of MPO boundaries. There is little variation across the state. Figure 15 shows that crossings are consistently available within MPO boundaries. The map suggests there are many rural areas where the state roadway is probably the only roadway through a given location, and therefore there are no crossings. The results would likely change again if the character of roadways were taken into account. A look at Google Street View, or a conversation with someone familiar with Vermont, reveals that many of the state highways are two-lane roadways that present less of a travel barrier than multilane, access-controlled roadways. The inability to take roadway character into consideration remains, despite the additional information available. Given that pedestrians and bicyclists are affected by roadway character, it is appropriate to include it in an improved connectivity benchmark.

• It appears that state highways are less permeable in urbanized areas within MPO boundaries. This is likely true, in part, due to the number of segments where crossing was impossible outside of MPO boundaries. Although the RDIs inside and outside of MPO boundaries were similar from state to state, the percentage of roadway segments where a crossing was available was considerably higher within urban areas. In this case, the decision to exclude routes where travel is possible but the RDI is greater than 10 (which happens frequently outside MPO boundaries) skews the reported results.

• Generally, when a crossing is feasible, the state highway requires a doubling of the straight-line distance to cross the roadway. Pedestrians and bicyclists are sensitive to required detours, and this additional travel distance has the greatest impact on the shortest trips. This is true, but more robust conclusions can be made with Table 14 and the state maps. For example, it becomes evident that large swaths of the state highway system in Utah lack crossing opportunities and that in Minnesota only a handful of key corridors likely have a substantial effect on the average RDI. In Washington State there is substantial variation in the permeability of the highways in the metropolitan areas: although the average RDI in the Washington State MPOs is 2.57, there are many locations within the urbanized areas where highways create a less substantial crossing barrier. This assessment could be further enhanced by adding a more refined assessment of need or demand for highway crossings.

This complexity in interpreting results indicates that a peer exchange like this pilot provides a good start toward developing a nonmotorized benchmark for comparison across states. The metric could be made easier to understand and interpret through additional discussion with the peer exchange members, refinement of the data, and exploration of alternative display methods.

Step 5 Takeaways

• After data have been obtained, reviewing them for quality and completeness is critical. If review reveals deep flaws, it may be necessary to complete additional processing steps or search for alternative data sources. For example, peers in the nonmotorized pilot expressed concern after reviewing the first round of results, which had been calculated using open source data. Participants reported that these data were not representative of all assets in the state-owned system, so the facilitator switched to HPMS data, a quality-checked and standardized government source, to identify state-owned roadways.
• Even with metrics and parameters defined in previous steps, facilitators should expect to have questions about submitted data, to solicit additional detail from practitioners, and to be asked about specific parameters and the most appropriate summary methods for the data collected. Multiple short conversations with the peer group may be useful to obtain the most complete understanding of results.
• Developing dedicated tools or systems to prepare base data can be time intensive but can produce a more comparable metric. Standardizing the data analysis process from state to state maximizes data quality and minimizes errors. The nonmotorized pilot provides a case study demonstrating the potential benefit of interested peer agencies pooling resources to develop tools from which all parties can benefit.
• Charts and maps are complementary in telling the story, and both types of visuals have a role to play. Select the type of visual best suited to the data and the narrative of performance outcomes, and include data on explanatory variables, such as intervention dates, changes in underlying populations, and trends in related variables. For the wildlife measure results, the annual trend would be meaningless without also showing when the project intervention was completed. For the nonmotorized measure, maps play an important role in interpreting the results.
• When developing new metrics, time devoted to the earlier steps of the process may come at the expense of time and attention for analysis and actual results. In this project, the peer exchange focused on sharing results based on summary units and data sources. That information resulted in the revised analysis presented in this paper; however, the results did not benefit from a group discussion of the analysis.

Step 6. Identify Noteworthy Practices

Data analysis should lead to conversations with top-performing agencies about the ways they achieve results. Analysis of performance data should inspire questions and lead to seeking new information from peer agencies. Information exchanges can take many forms and be as informal or structured as the participants wish. – From the Guidance

Key Actions to Share Noteworthy Practices
• Identify noteworthy practices
• Exchange noteworthy practices with peers

The project team planned two conference calls with each pilot group: one to define the measure and set parameters at the start of the pilot, and a second to exchange noteworthy practices after performance results were analyzed. The second calls took place in late spring 2018, with five to six agencies taking part in each.

Identify noteworthy practices—The facilitators for each pilot were responsible for developing an agenda that included topics arising from the data analysis phase and that would lead to a discussion of practices that have or have not worked for participants.

Exchange noteworthy practices—Facilitators led the calls to ensure there was adequate momentum and content from the agenda, but they left room for discussion of unprogrammed topics that arose from the performance reviews.

After the conference calls, the team surveyed participants to gauge the most helpful elements of the practices exchange. Results confirmed that both planned presentations on initiatives at other agencies and organic discussion were important and valuable: at least one participant flagged each as the element they wished the exchange had included more of. One participant recommended shorter, more frequent calls, each focused on a single topic. Overall, participants felt they understood the benchmarking process and found at least some of the information shared in the final conference call helpful.

Environmental Performance

To ensure there was no “dead air” on the practices exchange conference call, the facilitator prepared an agenda that included a review of participants' performance data, designated topics for group discussion, and a formal presentation on a noteworthy practice by one of the participants.

Performance review—The practices sharing call began with a review of the projects and data each participant had submitted. The facilitator created presentation slides and visuals summarizing each agency's results. Although the facilitator led this portion of the discussion, all participants had opportunities to clarify or explain their project elements and performance results. It was out of this semistructured conversation that an impromptu discussion of citizen group assistance arose (see below).

Prepared presentation: Mobile carcass reporting—During the initial conference call in January, the participant from Washington State DOT mentioned that her agency had recently switched from manual carcass reporting, which most participants were using, to a mobile app that automatically synced with a central database. The impact on reported carcasses was immediate, with districts' reported carcasses increasing by 15% to 80%. Given this success with a new method, the pilot facilitator invited the participant to present on the specifics of the switch. She and a colleague who was closely involved in the app's development and maintenance prepared a formal presentation on the change and the underlying tablet app, which was developed in-house.

Scheduled topic (no prepared content): End effects—A common issue for projects aimed at reducing wildlife–vehicle collisions is the end effect, in which animals deterred by fencing enter the roadway in greater numbers where the fence ends. As one participant explained, if the project cannot abut a rock or steep slope, end effects are unavoidable. This topic came up on the initial conference call and surfaced in many of the reports and analyses of agencies' projects. Because end effects are such a persistent issue across agencies, the topic was included in the agenda for discussion, but without a formal presentation. The resulting discussion brought up trials of a new electrified pavement at several of the participating agencies, with mixed reports on effectiveness. The Montana DOT noted it will be pursuing two formal trials of this pavement in the coming months, which interested other participants. This is an example of relevant information Montana's peers may be able to follow up on to learn more about the pavement before trying it at their own agencies.

Impromptu topic: Citizen group assistance—One issue that came up organically in the group's discussion of performance results was wildlife offices not having enough staff to fully cover labor-intensive work tasks, such as accounting for all carcasses from wildlife–vehicle collisions or identifying facilities and infrastructure that have been damaged and need maintenance. One solution, implemented in various forms at several participating agencies, involves recruiting the help of groups and individuals who are already in the areas near agency right-of-way, such as hunters, recreationalists, and other interested citizen groups. A complement to this citizen engagement approach is developing, or using existing, mobile applications, such as UDOT's Click-n-Fix app, which allows anyone to report DOT issues, including carcasses and maintenance needs (a hypothetical sketch of such a report record follows the list below). Specific uses of such citizen assistance include the following:

• Master hunters—Montana is starting a program that relies on an experienced group of hunters to help log relevant data.
• Concerned citizens—Washington leveraged community groups concerned about elk populations to contribute to maintenance of facilities aimed at funneling elk away from roadways.
• Premium hunting tag work requirements—The Utah Department of Fish and Wildlife runs a program in which hunters must complete work to earn premium hunting tags.
• Adventure scientists—Montana had grant money to pay people bicycling through the state to report carcasses. It also pointed participating bicyclists toward other types of work, such as noting where fences were down or, if they saw an animal inside a wildlife exclusion area, helping to ascertain why the animal had not used a designated facility.
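The mobile reporting tools described above (WSDOT's in-house tablet app and UDOT's Click-n-Fix) are not specified in this report, so the following is a purely hypothetical sketch of the core idea: capture a structured report in the field and serialize it for synchronization with a central database. Every field and function name is an assumption for illustration.

```python
# Hypothetical sketch of a field carcass report synced to a central database.
# Field names, the payload format, and the example values are all invented.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class CarcassReport:
    species: str
    latitude: float
    longitude: float
    route_id: str
    milepost: float
    reported_at: str
    reporter: str  # staff, master hunter, cyclist, or other citizen reporter


def to_sync_payload(report: CarcassReport) -> str:
    """Serialize a report for upload to the central database when online."""
    return json.dumps(asdict(report))


report = CarcassReport(
    species="mule deer",
    latitude=46.59,
    longitude=-112.02,
    route_id="US-12",
    milepost=41.3,
    reported_at=datetime.now(timezone.utc).isoformat(),
    reporter="master hunter",
)
print(to_sync_payload(report))
```

Whatever the exact implementation, structured capture of this kind, rather than manual logging, is the plausible mechanism behind the jump in reported carcasses described above: the tool lowers the cost of reporting rather than changing the number of collisions.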

Nonmotorized Performance

Rather than identifying high-performing agencies, the practice exchange for the nonmotorized pilot focused on how to make the connectivity metric most useful. This focus was due, in part, to the newness of benchmarking bicycle and pedestrian metrics at the state DOT level and to participants' lack of experience comparing their state with others in a data-driven process. Although peers found the metric useful, some additional refinements may make it more so. Recommendations that came out of this discussion include the following:

• Further refine the definition of rural and urban land (e.g., improve on the current use of MPO boundaries);
• Further refine the roadway categories assessed (e.g., consider state-owned, access-controlled roads separately from state-owned, non-access-controlled roads); and
• Further refine the definition of when crossings are needed, to present a more nuanced and useful RDI.

Peer exchange members expressed interest in understanding how their metrics compared with those of other states they consider peers but that were not members of this exchange group. For example, the Vermont DOT was interested in metrics for additional New England states. Peers emphasized that practices of states they consider peers would be of interest, but that similarities in land use patterns and transportation networks are critical to consider when thinking about how such practices might transfer.

There may be some utility in summarizing results by state highway district to facilitate comparison of conditions at a more granular level (a sketch of such a summary follows). Technically, a tool could be built to facilitate comparison at this level, but exploration of that idea was outside the scope of this project.

In addition to the statewide summary metrics, peer exchange members expressed keen interest in using these data at a more granular level for state and other local planning efforts, including a pedestrian plan in Minnesota and an upcoming active transportation plan in Washington. Participants also indicated the data will be useful for internal purposes and planning at multiple levels, including corridor comparison and district-level planning. At the statewide level, there is some interest in using this metric to understand noteworthy practices of representative peers.
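A district-level summary of the kind suggested above could be produced from crossing-level results with a few lines of pandas. The DataFrame columns (`district`, `rdi`) and the choice of statistics are illustrative assumptions, not outputs of the pilot.

```python
# Sketch of summarizing crossing-level RDI results by state highway district.
# Columns and statistics are hypothetical; adjust to the agency's schema.
import pandas as pd


def summarize_by_district(crossings: pd.DataFrame) -> pd.DataFrame:
    """Aggregate crossing-level RDI values to one row per highway district."""
    return (
        crossings.groupby("district")
        .agg(
            n_crossings=("rdi", "size"),
            mean_rdi=("rdi", "mean"),
            median_rdi=("rdi", "median"),
            share_over_2x=("rdi", lambda s: (s > 2).mean()),
        )
        .reset_index()
        .sort_values("mean_rdi", ascending=False)
    )
```

Reporting a median and a share-over-threshold alongside the mean would also address the interpretation problem noted earlier, where a few long detours (or their exclusion) can dominate a statewide average.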

Step 6 Takeaways

• Discussing the data and results as a group provided a more nuanced and robust understanding of relevant practices and performance outcomes. In particular, impromptu topics that the facilitator had not thought of but that are relevant and interesting to practitioners can arise in group discussions. This kind of open discussion led several participants in the wildlife collision pilot to describe how to leverage outside community groups for certain tasks, which other participants on the conference call found helpful.
• Allow time for a range of prepared content and organic discussion: prepared presentations, scheduled topics for group discussion, and topics that surface on their own. Some people prefer more structured content; others find impromptu discussion more valuable. Make room for both preferences within practice exchanges and in the benchmarking process overall.
• If there are many topics for discussion, consider holding a series of shorter gatherings or conference calls that allow focus on each topic, rather than fitting them all into one longer meeting.
