
Developing a Guide for Managing Performance to Enhance Decision-Making (2022)

Chapter: APPENDIX B DETAILED PEER EXCHANGE NOTES

Suggested Citation: "APPENDIX B DETAILED PEER EXCHANGE NOTES." National Academies of Sciences, Engineering, and Medicine. 2022. Developing a Guide for Managing Performance to Enhance Decision-Making. Washington, DC: The National Academies Press. doi: 10.17226/26663.

APPENDIX B DETAILED PEER EXCHANGE NOTES

Peer Exchange 1 — St. Louis

WELCOME & INTRODUCTIONS
The facilitators asked each participant to introduce themselves by providing their name, agency, and role, and to briefly answer "What do you hope to get out of this exchange?" Most attendees were from planning or programming departments, but some focused on data, transit, policy and research, and project development. Participants hoped to learn how to:
• Make performance management activities meaningful now that they've "gone through all the motions."
• Coordinate better with other agencies.
• Alter their institutional processes and frameworks so that they support making progress toward performance targets.
• Use analysis, even when conducted at a rough level of detail, to identify which adjustments have the potential to alter performance.
• Communicate about the adjustments that the agency has identified and/or made.

INTRODUCTION TO THE FRAMEWORK
Joe Crossett and Anna Batista of High Street gave an overview of the work done to date as part of NCHRP Project 02-27, including the motivation behind the project and the goals for the peer exchange. Anna gave a high-level overview of the proposed draft framework (a monitoring and adjustment feedback loop) for the guidebook and solicited feedback from participants. Key takeaways from this session include:
• Continue to refine the guidebook title, headings, and structure to better convey the intended meaning.
• There might not be a clear way to distinguish between what we presented as "long-term" and "short-term" adjustments.
• The concept of improving data contains many elements that the guidebook might need to deconstruct, from data quality to communicating meaningfully with stakeholders and the public.

FEEDBACK ON THE PROJECT TITLE
• The title "Making Targets Matter" is not encompassing enough for what the project is trying to accomplish. The definition of a target varies from agency to agency, and the project is more about optimizing resources than it is about the actual target.
• The subtitle "Managing Performance to Enhance Decision-Making" seems more helpful.

DISCUSSION OF THE PROPOSED STRUCTURE OF THE FRAMEWORK
• MPOs, especially smaller MPOs, depend heavily on State DOTs and transit agencies for performance data. As such, interagency coordination is critical for both monitoring and adjustment.

• Feedback on the list of potential adjustments:
o The long list of potential near-term adjustments is overwhelming.
o Some of the activities listed seem easy on their face but are not in practice (especially on the mid-cycle adjustments list).
o The list of adjustments is useful as a starting point for brainstorming ideas.
• The distinction between short- and long-term adjustments is based on when in the planning process the decision would be made. The guidebook should also distinguish between short- and long-term timeframes of when the adjustment might have a measurable impact. Examples included:
o For safety, regulatory changes could be short-term.
o Introducing new policies and technologies would be outside of an investment cycle.
o Investment allocation and prioritization aren't necessarily long-term; some agencies do a TIP every year.
• The feedback loop needs a mechanism to aid in accountability when allocating funding. Some partners manipulate numbers to get funding for the areas where they believe there is more need (e.g., allowing pavement to deteriorate in order to receive maintenance funding dollars).
• In the peer exchange, the "adjustment" called "adjust the approach" could be more clearly communicated as the culture and workforce component of the framework, where agencies structurally set themselves up to be able to monitor and adjust effectively.
• The "adjust the approach" activities (and "adjust next iteration") don't necessarily influence performance; they are in-house structural processes, in contrast to the project, program, and investment activities that can influence outcomes.

THE NEED FOR DISCUSSING TRADEOFFS AND PROGRAM OPTIMIZATION
• Talk about opportunity costs. The issue is not that agencies are not effectively spending money; it's that the amount of money available is finite.
• Kentucky Transportation Cabinet's maintenance program has been successful at using optimization to address some tradeoffs.
o When they pushed forward a pavement maintenance program (rather than worst-first), it became more effective.
o They also improved cost-effectiveness per mile by combining activities like safety improvements with pavement programs.
o If you've optimized within the program, then yes, you can only move the needle with more money; but start by looking carefully at whether you've optimized the programs first, which is sometimes constrained by public and political pressures.
• The federal legislation gives agencies, particularly MPOs, the leverage to ask for more data and improvements in the data.
o Some transit mechanics are tracking databases for the first time. They had never really looked at reports of trends over time as a resource for deciding on preventive maintenance; they are now learning to be predictive instead of reactive.
• Public or political pressures can make it challenging to optimize a program, even if it would have the greatest impact on targets. Board members and member jurisdictions will have their own priorities. How can an agency persuade individual jurisdictions to act to achieve regional or state targets?
• Innovation is important and can't be forced. When agencies do figure out a more effective strategy, we need to figure out how it could be spread to other agencies or departments.

CHALLENGES WITH DATA, FORECASTING, AND RELATED COMMUNICATIONS
• The "improving data" step under Adjust the Approach could include improving the measures/targets. For example, we don't have good measures/targets for mobility, which isn't as clearly defined as the pavement and bridge measures.
• If you spend X dollars you get X bridges, which is straightforward. It's not as easy to connect spending X dollars with saving X lives.
• "Improving data" should also look at new data. For example, the federal reliability measures only apply to the NHS, and agencies need to add other data if they want to understand performance on the rest of their system.
• Discuss the external factors that push progress toward or away from a target. Agencies need to focus on variables that are inside their control, but they also need a way to explain the narrative for the factors that are not.
• Discuss the public perception component of setting targets, including the identification of strategies for accurately communicating what a target means. For example, although it might be supported by the data, most of the agencies agreed that they cannot set a target of rising fatalities because of the public perception of what that means (i.e., the incorrect interpretation that the agency wants more people to die in traffic accidents). That leads agencies to set a flat or declining target line, despite their knowledge that such a target is unattainable.
• The ability to forecast outcomes is a challenge for many agencies. There are tools to determine the impacts of spending on things like pavement outcomes or bridge condition, but cause and effect is less well understood in the other performance areas. For example, in predicting fatalities, nice weather brings out more motorcycles, increasing the exposure rate, but the analysis data may not capture weather as a variable. Also, agencies can't control people's choices about which vehicles to use. How can they account for these external factors? How should they communicate about them?
• The "improve data" strategy may be misleading because the data might be good but might not be directly connected to goals. "Refine data" may be a better way to word the strategy, because it involves identifying new methods for using existing data, rather than just seeking out new data. "Improve the metric" is another option.

MONITORING
This session focused on monitoring and using system performance data to inform decision-making. Monitoring elements presented included data management, data analysis, coordination and communication, and organization and culture. Key takeaways include:
• Agencies are struggling with the data and analysis but are developing in-house expertise and partnerships to improve.
• Data can be very effective in supporting decision-makers, if communicated effectively.
• Agencies are uncomfortable with the idea of another agency communicating their performance data to the public.

PEER PRESENTATION: DATA ANALYSIS APPROACHES
John Moore of the Kentucky Transportation Cabinet (KYTC) presented on the strengths, weaknesses, and challenges of the agency's data analysis approaches. Key messages included:
• Data partnerships are valuable and may vary for different performance areas. KYTC works with:
o the Kentucky Transportation Center on safety and congestion data analysis; and
o the University of Louisville on pavement deterioration models.
• The data doesn't always capture all the potential contributing factors. For example, safety data:
o is limited for nonmotorized users;
o is provided by law enforcement officials who may not understand the importance of providing each data element; and
o cannot explain every outcome: the difference between a fatality and a serious injury might be due to the distance to a trauma center rather than any roadway characteristic.
• Pavement modeling is more advanced than the other performance areas.
o In addition to the federally required monitoring of the NHS, KYTC also monitors its other major roads, even exceeding the federal criteria.
o KYTC has been collaborating with the University of Louisville to create pavement deterioration models and is hoping to also do tradeoff analyses.
o The pavement deterioration models showed that KYTC's maintenance approach was insufficient. The models enabled KYTC to demonstrate the results under different investment scenarios. The models also produced a graphic demonstrating that current funding levels would lead to significantly higher needs in the future (Figure B-1), which helped lead to an increase in funding for pavement maintenance.

Figure B-1: Graphic illustration of the difference in outcomes of two different funding scenarios
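The kind of scenario comparison behind Figure B-1 can be illustrated with a toy deterioration model. Every number below (the deterioration rate, the benefit per dollar spent, the two budgets) is invented for the sketch; KYTC's actual deterioration models are far more detailed.

```python
# Toy pavement-condition simulation contrasting two funding scenarios, in
# the spirit of Figure B-1. All parameters are hypothetical.

def simulate(budget_per_year: float, years: int = 20,
             condition: float = 85.0) -> list[float]:
    """Track an average condition index (0-100) under a fixed annual budget."""
    history = [condition]
    for _ in range(years):
        condition -= 3.0                    # assumed annual deterioration
        condition += 0.8 * budget_per_year  # assumed benefit per $M spent
        condition = max(0.0, min(100.0, condition))
        history.append(condition)
    return history

current = simulate(budget_per_year=2.0)    # hypothetical current funding ($M/yr)
increased = simulate(budget_per_year=4.0)  # hypothetical increased funding

# The gap between the two trajectories widens over time, which is the kind
# of picture that motivated the funding increase described above.
print(f"year 20 condition: current={current[-1]:.1f}, increased={increased[-1]:.1f}")
```

Even this crude sketch makes the core argument of such graphics visible: under the lower budget the network deteriorates steadily, so deferring funding compounds into much larger future needs.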

• Congestion and reliability are challenging because of differing opinions on how to define an acceptable level.

DISCUSSION OF DATA MONITORING CHALLENGES AND OPPORTUNITIES
The facilitated discussion focused mainly on the challenges of monitoring targets, largely as they relate to in-house data analytics. Key challenges and opportunities discussed include:
• Recruitment and retention of data expertise. Data analysis skills are in high demand in every industry, and highly skilled data analysts can find more lucrative opportunities in the private sector. The participants shared approaches their agencies are using to address this challenge:
o When staff retire, MARC reclassifies the newly open positions as data roles.
o Policy makers at the Metropolitan Council understand that having strong data scientists is a priority. They provided funding to establish positions with market-competitive salaries while also offering recruits interesting analysis questions and the freedom to explore what interests them.
o "Self-training" exercises enable colleagues to gather once a week to participate in free online trainings for programming or data analysis tools.
o Several agencies reported partnering with universities for some of their data needs.
• Communicating data analysis needs. Beyond improving data analysis, agencies expressed a need for their non-data-focused employees to be able to effectively describe or give direction on what they need the analysis to tell them.
• Data management and setting priorities. With limited capacity, how do agencies prioritize?
o Some agencies are not prioritizing but are incrementally addressing different analysis questions as they arise, and much of the data collection seemed to be done on an ad hoc basis.
o The Indianapolis MPO has a data analytics modeling plan, which a consultant helped to create. The plan describes what data the MPO needs to acquire and purchase over the next 5 years. The plan was initiated because the MPO is developing a freight model.
• Communicating about the data with the public. Several agencies had concerns about how to prepare to communicate about data as it becomes public via the federal dashboard; FHWA and others might use different messaging than the agency would prefer. Several agencies noted that they have had conflicts with partner agencies (e.g., a transit agency that serves an MPO's region) about the appropriate level of transparency. Agencies expressed a desire to have a consistent message to avoid public confusion, and they also hoped to have some advance notice so that they could prepare their own responses.

MAKING ADJUSTMENTS: NEAR-TERM STRATEGIES
This session focused on strategies and decisions that transportation agencies can adjust between investment cycles and plan updates. The session featured two peer speakers, Chris Upchurch of the Wichita Area MPO (WAMPO) and Deanna Belden of the Minnesota DOT (MnDOT). The major takeaways from this session are:
• Performance analysis can sometimes tell you what NOT to spend money on.
• Some agencies are finding flexibility within their existing programs.

PEER PRESENTATION: DESIGN/FUNDING FLEXIBILITY WITHIN THE TIP
Chris Upchurch described how WAMPO uses performance analysis to identify potential revisions to project designs after the projects are in the TIP. Chris provided the following three examples of using TIP funding flexibly:
• A suburban arterial went into the TIP as either a 3- or 5-lane project, which gave WAMPO the flexibility to conduct further analysis to identify the design that would be most effective for its goals. WAMPO then analyzed the average delay per trip under the different scenarios and found that the estimated delays were 44 seconds compared to 28 seconds. Because this was not a meaningful difference in delay, WAMPO was able to select the less expensive three-lane design.
• Under state DOT rules, WAMPO is not permitted to carry over any unallocated/unobligated funds into the next year. Chris described two approaches WAMPO has applied to gain some flexibility in obligating these funds:
o WAMPO identified "areas of concern" where forecasted and existing traffic delays indicate a future need to acquire right-of-way for expansion. If WAMPO finds that it will have unobligated funds, it can now apply those funds to acquiring the right-of-way it expects to need.
o With an eye toward its nonmotorized fatalities and serious injuries performance targets, WAMPO has also been looking for opportunities to add bicycle and pedestrian facilities that would improve connectivity in the region. These are often "shovel ready" projects. This approach enabled WAMPO to install a pedestrian bridge across a major floodway, creating a new connection between facilities on either side of a bridge that previously had no safe place for bicycles or pedestrians to cross.

Participants agreed that every agency's funding requirements are different but that all could probably find some flexibility.
• Using a performance-based approach to design can help agencies identify cost-effective designs and avoid "overdesigning" roads to meet a wish list of changes that may not be necessary.
• In Kentucky, the state legislature allocated funding to widen a congested interstate. Through its analysis, KYTC identified that a short merge lane (as opposed to too few travel lanes) created the congestion. KYTC was able to mitigate the congestion by lengthening the merge lane during routine maintenance.

• The participants concluded that it is important to have an up-to-date list of projects and strategies "on the shelf" that will improve performance outcomes. Then, when flexible funding is identified, the agency is ready to go with some effective projects.

PEER PRESENTATION: USE OF COST-BENEFIT ANALYSIS AND SPREADSHEET TOOLS TO COMPARE OPTIONS
Deanna Belden presented on how MnDOT used performance analysis to decide not to accelerate construction of a major highway project. The contractor offered to accelerate project delivery for a price of $15 million, and MnDOT needed to determine whether the benefits of early delivery would exceed the $15 million price tag and justify moving that $15 million away from other projects to fund the acceleration.
• One element MnDOT considered was the expected benefit-cost ratio of acceleration. MnDOT had already calculated the benefit-cost ratio for the original project, and it amended that analysis to estimate the benefit-cost ratio for two different accelerated delivery dates, three different forecasts of economic conditions, and two different discount rates: the federal rate of 7% and MnDOT's usual rate of 1.2%. Discount rates are used in cost-benefit analyses to reflect that future benefits are worth less than benefits accrued today, because of the uncertainty relating to the potential future benefits. Because of the significant difference in discount rates, the analysis using the federal rate showed significantly higher benefit-cost ratios under all scenarios, but MnDOT determined that the discount rate would not be as significant given the short time frame between the planned delivery date and the accelerated delivery date. See Figure B-2.

Figure B-2: Minnesota DOT's benefit-cost ratios for different scenarios
• In addition to the benefit-cost analysis, MnDOT considered how the risks of project delivery might increase by accelerating delivery: the schedule might not be reliable, other projects worth $15 million would need to be delayed, construction quality might suffer, and project coordination would be more challenging.
• To aid in making a decision, MnDOT created a spreadsheet to contain and help synthesize the many factors being weighed. The spreadsheet used conditional formatting to color-code the analysis results (e.g., red for challenges and green for benefits) so that decision-makers could more easily understand the complex spreadsheets. The spreadsheet showed that the accelerated schedule would carry more risks than the existing delivery schedule.
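A back-of-envelope version of this kind of acceleration analysis can be sketched in a few lines. Only the $15 million acceleration price and the 7% and 1.2% discount rates come from the notes; the $20 million-per-year benefit stream, the 20-year horizon, and the one-year acceleration are invented for illustration, and the real MnDOT analysis was far richer (multiple delivery dates and economic forecasts).

```python
# Sketch: is paying $15M to accelerate delivery worth it? The benefit of
# acceleration is the extra present value gained by starting the project's
# benefit stream earlier. All benefit-stream numbers are hypothetical.

def present_value(annual_benefit: float, start_year: int,
                  years: int, rate: float) -> float:
    """Discount a constant annual benefit stream back to today's dollars."""
    return sum(annual_benefit / (1 + rate) ** t
               for t in range(start_year, start_year + years))

def acceleration_bcr(annual_benefit: float, years: int, accel_years: int,
                     rate: float, cost: float) -> float:
    """Benefit-cost ratio of acceleration: present-value gain from starting
    the benefit stream `accel_years` earlier, divided by the extra cost."""
    gain = (present_value(annual_benefit, 1, years, rate)
            - present_value(annual_benefit, 1 + accel_years, years, rate))
    return gain / cost

for rate in (0.07, 0.012):  # federal rate vs. MnDOT's usual rate
    bcr = acceleration_bcr(20e6, 20, 1, rate, 15e6)
    print(f"discount rate {rate:.1%}: acceleration BCR = {bcr:.2f}")
```

With these assumed numbers the ratio comes out higher at the 7% federal rate (shifting benefits earlier matters more when future benefits are discounted heavily) yet stays below 1.0 at either rate, which is the kind of result that supports declining an acceleration offer.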

MAKING ADJUSTMENTS: LONGER-TERM PLANNING AND INVESTMENT DECISIONS
This session focused on how agencies use performance data and analysis to support longer-term investment planning, resource allocation, and funding decisions. Key takeaways from this session included:
• Agencies have been revising their project evaluation criteria to improve performance, but they don't make major changes frequently, to give project sponsors some predictability.
• To improve the quality of proposed projects, agencies offer training and feedback on what makes an effective project.

PEER PRESENTATION: PROJECT EVALUATION CRITERIA
Peter Koeppel of the East–West Gateway Council of Governments (EWGCOG) presented on how the agency revised its project evaluation criteria to incentivize lower-cost, higher-impact projects:
• The agency developed Ten Guiding Principles (e.g., Promote Safety and Security). Many of the principles overlap with the federal performance areas and with the goals of its State DOTs.
• For each of the guiding principles, EWGCOG identified system performance measures (such as the federal measures) and project measures that enable it to score projects based on how well each project will advance the principle.
• EWGCOG identified seven project types, each with its own application and scoring criteria: road, bridge, traffic flow, safety, active transportation, transit, and freight/economic development. Different project types have different scoring criteria because they serve different purposes and regional goals.
• For each principle and each project type, EWGCOG defined measurement objectives and specific metrics to measure performance. For example, active transportation projects and safety projects both support the principle of "Support Neighborhoods and Communities," but they have different metrics. Both applications use the metric of whether the project is in an Environmental Justice area, but active transportation projects can also earn points for providing access to schools and other neighborhood destinations.
• Each project type has a different maximum number of points that can be earned under each principle. For example, under the "Support Neighborhoods and Communities" principle, active transportation projects can score up to 22 points, whereas freight projects can only score up to 4 points. Although active transportation projects can score many points for supporting neighborhoods, they do not earn points under other criteria, such as "Support Quality Job Development." Freight projects, on the other hand, can earn up to 60 points for supporting quality job development.
• Projects can earn up to 100 points for their performance plus an additional 25 for cost and usage (i.e., person miles traveled). The highest-scoring projects across all project types receive funding. If the proposed projects under a given type (e.g., freight) do not score highly enough relative to other project types, then EWGCOG might not fund any freight projects in that cycle.
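The capped, cross-type scoring scheme described above can be sketched as a small scoring function. The point caps for "Support Neighborhoods and Communities" (22 for active transportation, 4 for freight) and "Support Quality Job Development" (60 for freight, 0 for active transportation) come from the notes; the project names, raw scores, and cost/usage points are invented, and the real EWGCOG criteria cover all ten principles and seven project types.

```python
# Sketch of a points-capped project scoring scheme: each project type has a
# maximum score per guiding principle, performance tops out at 100 points,
# cost/usage adds up to 25, and all types are ranked together.

# Maximum points each (project type, principle) pair can earn. Pairs not
# listed here default to 0 in this simplified sketch.
CAPS = {
    ("active_transportation", "support_neighborhoods"): 22,
    ("freight", "support_neighborhoods"): 4,
    ("freight", "quality_job_development"): 60,
    ("active_transportation", "quality_job_development"): 0,
}

def score_project(ptype: str, raw: dict[str, float], cost_usage: float) -> float:
    """Cap each principle's raw score at the type-specific maximum, cap total
    performance at 100, then add the cost/usage component (up to 25)."""
    performance = sum(
        min(points, CAPS.get((ptype, principle), 0))
        for principle, points in raw.items()
    )
    return min(performance, 100) + min(cost_usage, 25)

projects = [
    ("bike_path", "active_transportation",
     {"support_neighborhoods": 30, "quality_job_development": 10}, 20),
    ("freight_spur", "freight",
     {"support_neighborhoods": 10, "quality_job_development": 45}, 15),
]

# Rank all projects together, regardless of type; the top scorers are funded.
ranked = sorted((score_project(pt, raw, cu), name)
                for name, pt, raw, cu in projects)
print(ranked)
```

The per-type caps are the interesting design choice: they let very different project types compete in one ranked pool while steering each type toward the principles it is best positioned to serve.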
• EWGCOG works with partner jurisdictions and agencies to improve their project scores. Applicants can participate in a workshop with EWGCOG staff before submitting projects for consideration; EWGCOG also invites other participants, such as the state DOT and local bike/pedestrian advocacy organizations. If an applicant does not receive funding, EWGCOG offers to meet with them to debrief and discuss how to improve in future rounds. DISCUSSION OF PROJECT EVALUATION METHODS The EWGCOG presentation prompted conversation about similar methods used by other agencies and the challenges of these approaches: • Metropolitan Council has a similar project-selection process, and project applications have become highly competitive. For example, enough road surfacing projects now include bike lanes that road surfacing projects without bike lanes are no longer competitive. The board votes on the rough percentage of funding that goes to each project type. • No single project-selection approach will be applicable to all agencies. Depending on which programs are allocating funding, agencies will be limited in which projects they can select. Chicago Metropolitan Agency for Planning (CMAP) has three funding programs that accept applications on the same general schedule. This gives CMAP the discretion to select a project and then determine which program is best suited to fund it, rather than being bound to selecting projects based on what funding is available. • External factors often influence project prioritization or selection criteria. For example, the Minnesota state legislature established a relatively sound performance-based selection process. However, the resulting project list ranked the politically preferred projects relatively low, so the legislature revised the mandate to improve those projects’ scores. 
• It would be helpful for agencies to have a tool for conducting tradeoff analysis within an individual funding program. For the most part, the funding for each program is fixed and cannot be applied to projects outside the program’s scope. For most agencies, pavement programs are the only ones mature enough to support within-program tradeoff analysis; other programs cannot forecast outcomes well enough to make informed tradeoff decisions. • The participants generally agreed that it is best to keep project-selection criteria and funding allocations as consistent as possible, to avoid problems for partner agencies that are operating under current assumptions. When asked how, and how often, agencies revisit (and revise) their evaluation criteria, participants gave a variety of answers: • Metropolitan Council updates its criteria with each funding cycle, but major changes happen only every 6 years. • East–West Gateway COG also includes minor updates with each cycle, which generally require board approval (unless the changes are minor administrative fixes).
• WAMPO, having just completed a major update, will keep its system fairly consistent for the next 5 years to evaluate how it goes. • CMAP agreed with WAMPO’s approach because it gives partner jurisdictions predictability about the rules. CMAP has told its partners that there won’t be any major changes until the next federal reauthorization bill. IMPROVING THE PROJECT PIPELINE Most of the agencies, especially the MPOs, relied to some extent on partner jurisdictions to propose and implement projects, and they shared information on how to integrate these other agencies into a performance-based project-selection process. • Educating partners on project scoring mechanisms, and on how to improve an individual project’s score, is a valid strategy for increasing progress toward performance targets. o WAMPO hired a consultant to work with partners to identify ways to improve their projects’ scores, and WAMPO received a higher-quality pool of projects than it would have otherwise. o The Indianapolis MPO set aside funding to conduct 50 intersection safety audits but received only two applications. Staff then used fatality and injury data to identify the 82 highest-priority intersections in the region, met with the local agencies to narrow down the list, and asked which intersections the local jurisdictions would be interested in funding. • Local priorities are not always the same as regional priorities. An individual project might not move the needle on a regional target, but it could be very important to the jurisdiction that submitted it. Project-selection criteria focused on regional goals will not necessarily capture these local needs. • When local partners must provide a match to receive funding, lower-income communities may not be able to participate in the process. 
This often results in funding projects in communities that have matching funds, rather than the “best” projects from a regional perspective. One option would be to vary the amount of local match required based on how well the project scores on the regional criteria. APPLICATION DEMONSTRATION High Street gave an overview of the application it has been developing, in order to solicit feedback from participants on how to improve it and make it more relevant to transportation practitioners. Comments included: • There needs to be greater transparency about what data influences the model and how that data affects the forecasts for metrics like bridge condition. One issue with current models is that agencies do not feel they have enough relevant data to create an accurate forecast. By explicitly describing the inputs and assumptions, the application would give agencies a much better idea of whether it is relevant to their purposes. • There needs to be an element of “lag time” in the projection of impacts from increased investments. As it stands, the application suggests that if an agency increases funding
at a set point in time, it will begin to see an increase in performance from that same point in time. In reality, there is a lag, often at least 2 to 4 years, between investment in an asset and the resulting impact on performance. o When considering safety targets, many metrics are based on moving averages (e.g., fatalities are measured as a five-year moving average). An adjustment made now will produce only a minimal change in the final outcome, since it does not affect the prior 5 years of data on which the calculation is based. • It would be helpful to have real research backing the current assumptions in the application. For example, if the forecasts attributed to policy decisions like seat belt laws have legitimate scientific backing, then the application would be highly useful. In general, all assumptions about benefits should be based on real research. • Participants generally agreed that the application would be more beneficial if it described a more universal situation rather than attempting to give results for each individual state. In its current form, the application could be used by uninformed partners or the public to question why a transportation agency is not taking a particular action that could improve performance. o The historical trend aspect of the application is beneficial, so creating a set of generic, representative states based on certain qualities (e.g., geography, population) would be better than a single universal state. WHAT TO DO WHEN ADJUSTMENTS MEET RESISTANCE The final session focused on what triggers resistance to making adjustments and how agencies can counteract that resistance to continue making meaningful progress toward targets. Key takeaways included: • There are no perfect options for using data to measure how every project contributes to regional or local goals. 
To account for these gaps in the data, the decision-making process needs room for qualitative evidence, including public opinion. • Discussions about tradeoffs among goals are most effective when conducted early (as opposed to during project selection), but agencies find it challenging to get full participation from stakeholders until projects are at stake. • Outside parties (e.g., consultants or other outside experts) can play a role as convener and neutral arbiter. Participants described a three-circle Venn diagram with the categories “what data the MPO wants to measure,” “what data is feasible to measure,” and “what data the jurisdictions want to measure.” The overlap among these three categories is often very small. Participants also talked about how quantitative measures might not capture intangible values and priorities of the region and its stakeholders, such as: • Improving economic vitality • Transportation and health • Equity • Connectivity of neighborhoods to jobs and transit • Quality of life or quality of place
• Quality of place as a tool for attracting and retaining talented employees To address this, participants highlighted the difference between “soft selection criteria” and “hard selection criteria.” Some metrics do not need to be quantified in order to be considered valuable in the project-selection process. For example, if there is a lack of public buy-in for developing a certain green space, an agency should not feel compelled to calculate the intrinsic and monetary value of that land as an argument against developing it; instead, the agency should be able to rely on public opinion. In general, participants voiced a need to let anecdotal and qualitative evidence influence decisions. Agencies struggle to communicate (and stakeholders do not understand) how important it is to participate in early discussions about setting targets or goals, which are then used to develop project evaluation criteria. Because many goals may directly conflict with one another (e.g., safety improvements may require traffic calming, but traffic calming slows vehicle speeds), it is important to have active participation in the discussion of these tradeoffs when setting goals and targets. Participants noted that bringing in consultants can help bridge communication gaps. WRAP UP AND NEXT STEPS The final session asked participants to reflect on the key takeaways or insights they would carry away with them. Overall, participants found it helpful to learn that other agencies face challenges similar to their own. Several participants noted the importance of meeting regularly with a performance management workgroup, and one participant described their agency’s approach: • They have two workgroups that meet monthly, one focused on the federal measures and one focused on their other measures. • Agenda items vary but are always topics that require an in-depth one-hour conversation. 
For example, general updates occur via email rather than during the meetings. When safety data came in showing that they were not on track for their target, the meeting focused on discussing the causes and their options for adjustments. An upcoming meeting will focus on how to collaborate on a new transit plan that they need to develop. • Attendees at the meeting may also vary depending on the topic to be discussed. Each division is represented in the meetings, and the division leadership identifies who from the division should be at the table for that month’s discussion. What key questions should the guidebook try to answer? • How to communicate with the public about worsening performance (especially safety); • How to get project applications to focus on performance goals, rather than flashy projects; • How to communicate the differences between agency performance measures/goals and the federal ones;
• How to identify advocates, such as Metropolitan Council’s example of the business community supporting performance-based approaches to argue for transportation investments. What should the guidebook’s key messages be? • “The targets won’t matter if you don’t believe that they can.” Clearly link strategies to changes in outcomes. Focus the guidance on what works. • Focus on making performance measures matter rather than making targets matter. Consider using the project’s subtitle (Managing Performance to Enhance Decision-Making). • Emphasize the incremental process that occurs over the medium and long term, as opposed to short-term targets. • Don’t go deep on organizational structure, but provide some highlights of successful structures. • Rather than telling agencies how to prioritize, describe how performance measures can be used to influence prioritization. • The guidebook can’t cover every approach. Give guiding principles, questions to consider, and potential pitfalls of different approaches. • Provide hypothetical and real-life examples, possibly from different ends of the spectrum. Also provide enough context that readers can understand why an approach worked in that situation. Peer Exchange 2 — Baltimore INTRODUCTION TO THE FRAMEWORK Joe Crossett and Anna Batista of High Street gave an overview of the work done to date as part of NCHRP Project 02-27, including the motivation behind the project and the goals for the peer exchange. Anna gave a high-level overview of the proposed draft framework (a monitoring and adjustment feedback loop) and solicited feedback from participants. Key takeaways included: • Effective communication and cooperative relationships are essential elements of such a framework. 
Participants provided feedback on communication and engagement approaches that might be relevant to the framework: • Delaware’s pavement conditions are tied to the state’s bond rating, so pavement must be kept in good condition to maintain the state’s high bond rating; if pavement condition slips below good, the bond rating will be lowered. New secretaries and boards are educated about this core program, and decisions are made more centrally so that pavement funding doesn’t take away from other needs. • The most effective engagement happens during TIP development, so it is important to figure out the timing of communication. Some MPOs don’t have a great process yet for
understanding the performance landscape, introducing early communication, and figuring out what to communicate. The closer participants got to needing to make a decision, the more attention they paid, and hence the more important the performance measures became. • Safety measures are relatable and clearly communicated; TTTR and congestion measures aren’t as easily understood, but safety is personal. Safety can be a successful and important way to connect operations with performance measures and targets. Relating congestion and safety data connects something that everyone understands (safety) to conversations about operations (e.g., how many accidents happen in congested conditions, and how reducing congestion can increase safety). • MWCOG asked three DOTs (DDOT, MDOT, and VDOT) to attend a meeting each quarter to discuss safety measures. These DOTs now share more information among themselves because of these meetings; MWCOG can’t say whether this has led to any changes, necessarily, but it has led to a new study, and the conversation itself is important. Passive submittal of data for annual performance measure setting did not lead to as much accountability as in-person meetings. The conversation has created accountability from the states to the MPO partners, and the frequency of meetings (quarterly or three times a year) is an important part of that accountability. The relationship among customers and partners matters more than frequency; coordinating three states requires frequent meetings (other partnerships might not need such high frequency). “Having customers and partners and DOTs working in the same conversation makes the targets matter.” • Some indicators were going the wrong way, showing worsened conditions; states were very responsive to that. A regional consensus is developing that reciprocity is important (being able to implement goals and effectively engage with each other to meet targets). 
• The Virginia DOT board adopted performance goals to show how decisions could affect outcomes. Narrative might have been presented before, but the hard data is important. • Anthony Harris (WMATA): the customer-centricity of the measures indicates how much the targets matter. Much of the board consists of elected officials; they spread the word and can advance the narrative pieces that are easily communicated to the public. o WMATA is trying to bring asset management performance targets to a level that can be easily communicated to the public. o On-time performance of trains is only one aspect of customer experience. WMATA is now trying to be more holistic, looking at swipe in/out data. o Storytelling to the public/customer is important. MONITORING AND USING SYSTEM PERFORMANCE DATA TO INFORM DECISIONS Paul Hershkowitz of ICF facilitated the session on monitoring and using system performance data to inform decision-making. This session also featured peer presentations from Anthony Harris (WMATA) about WMATA’s Stat Meetings and from Chuck Imbrogno of the Southwestern Pennsylvania Commission (SPC) about how SPC compares and contrasts data and analysis issues for PM1 and PM2 (e.g., pavement) with PM3 (and other less-established, less-forecastable measures). Key takeaways from this session include: • Data access and management is still a big challenge for many agencies. • Communicating about the data is also still a major challenge. WMATA hosts “Stat meetings” to provide internal performance reports on Key Performance Indicators (KPIs) for each department. Rail Stat meetings were initially held to address the public-facing issue of high-profile fire and smoke track incidents that resulted in safety risks, service impacts, and reputational damage to WMATA. The issue required a lot of internal coordination, and the meetings have proven to be a useful model. More departments and areas have expressed interest in hosting Stat meetings, and they are now also used by the departments managing Bus, Facilities, Internal Business Operations, Metro Police, and Budget. Rail meetings are still the largest, with 20-30 leaders who attend regularly. The Chief Operations Officer often attends different Stat meetings. Stat meetings focus on performance reporting and are separate from the performance improvement process, but solutions are discussed at Stat meetings to start the conversation about how to improve. WMATA uses IBM Cognos to pull data from the work order management systems and to calculate metrics such as mean time to repair, but the tools are not as important as attention to data integrity and data collection processes. SPC works with practitioners and consultants to help determine where to start the process of setting targets. Knowing the people in the conversation has helped (such as knowing people at PennDOT). Having two or three staff working with the NPMRDS database to understand the tool and figure out how to manage it has also helped. 
SPC built a dashboard to convey its targets to the public and is working to develop a narrative from its data through the LRTP process. The facilitated discussion focused on challenges related to data sharing and data consistency, along with specific questions about data use by SPC and WMATA. • Guidance on data analysis that could be used to develop performance metrics. Agencies varied in whether they created a buffer for a margin of error. Some agencies conduct rudimentary forecasting to improve the PM3 target forecasts they will apply in the next round. Other agencies have travel demand models but aren’t using them to develop PM3 performance metrics. States adjust their methodology to be functionally efficient (e.g., 15-minute buckets to eliminate confusion over “bleeding” data). • Data quality and clarity. Agencies struggle with messy data and a lack of resources to perform quality control as data is being collected. They would appreciate guidance for improving data integrity. • Access to data. Consistent, curated datasets that people can access are desired. Some data fields are not tied to an MPO, which can make it difficult to obtain appropriate information. States expressed interest in a data-sharing platform because publicly available pavement data can differ from internal data, and even technical
experts do not know the difference. This creates difficulties, and it is costly to spend time determining which data are correct or identifying the issues with specific datasets. • Communication of data. Data and data governance differ across federal, state, and customer reports. Agencies would prefer messaging that says “this is what this data provides and does not provide,” and they recognize that each audience may need substantial education about the data. They would like a uniform way to talk about performance reporting to a wide spectrum of audiences. MAKING ADJUSTMENTS: NEAR-TERM STRATEGIES Michael Grant of ICF facilitated this session, focusing on strategies and decisions that transportation agencies can adjust between investment cycles and plan updates in order to progress more meaningfully toward a target. Key takeaways from the discussion include: • Smaller strategies, if implemented systemically, can be more effective than major projects. • For most agencies, the “how” of making adjustments revolved around convening the right people to discuss performance and options. VDOT provided an impromptu presentation in lieu of the planned peer presentation, focusing on VDOT’s project prioritization process and performance management. VDOT is not funding any spot projects until 2026; instead, it is focusing on systemic investments that have a greater impact for a smaller cost. VDOT has data showing that systemic investments have led to a 20% improvement in safety outcomes compared to spot improvements. The cost for 20 spot treatments might be $100 million, but a $20 million systemic improvement can have as much effect on safety outcomes. Executive leadership support has helped in cases where safety benefits have been integrated into standard maintenance. Participants presented examples of how their agencies had made near-term adjustments, including: • WMATA has made changes based on the Stat meetings. 
The department managing facilities identified a huge backlog of work orders that had been open for more than 30 days. Highlighting this issue in multiple meetings led to strategies for the team to work on the backlog and move toward the target of reducing it by 50%. WMATA established an asset management office once it realized an organizational shift was needed. • PennDOT has four regional traffic monitoring centers (TMCs) and one statewide TMC, but knowledge of incidents was low. PennDOT determined the critical timeframes for impacts to roadways, and some TMCs increased hours and personnel capacity to improve situational awareness. • MDOT publishes a quarterly accelerator report, which has MDOT customers as its target audience. Some targets have evolved over time, and MDOT currently focuses on those for which a process improvement team has been identified (championed by a senior
manager). A process improvement team is assembled across business units and modes to examine how to improve performance by looking at the data and the story behind the performance. The team is targeted and temporary but drives toward long-term change in a short time period. The team’s senior manager then reports back on progress. • The Port Authority of Allegheny County faces challenges with on-time performance when construction season hits. A group tasked with solving the problem determined that there was no true measurement of detour time caused by construction; it needed to coordinate with other offices to plan and provide the information required to measure impacts and delays. • MPOs reiterated their role as “a forum of cooperative decision-making” among the federal government, state DOTs, and local communities, but MPOs do not usually have a role on the operational side (the shortest term). Smaller MPOs with limited staff are trying to figure out their role and what they can do to support state DOTs. Four MPOs in New Hampshire have joined together to handle performance metrics, out of necessity due to limited staff resources. • Target setting should be an iterative process that recognizes that a missed target may mean the target was set improperly or was the wrong target, not that the work wasn’t being performed. Some goals could use different time frames than others in order to set projects and improvements up for success. • State DOTs discussed the importance of MPOs convening stakeholders to influence short-term adjustments, even though MPOs have limited opportunities to directly operationalize in the short term. MPOs can help with project selection and prioritization but could also provide research and support on the operational strategies that most affect the near term. • Participants mentioned that short-term adjustments to dashboards can be key for the agency if they are updated daily. 
But administrators and other senior managers may not be fully engaged with the dashboards, or the dashboards might not be personalized for their needs. Participants mentioned that identifying the audience is necessary in order to develop a platform. Educating mid-level managers on dashboards can help them understand their baseline and engage in their own performance. A table discussion of short-term strategies focused on the organizational structures and operational changes, including staffing and clear communication between decision-makers and data managers, that would build broad, agencywide support for performance management. Participants stressed that communication and marketing to internal and external stakeholders is important for developing a message about costs and performance management, providing consistent interpretation of targets across levels of government and stakeholders, and fostering greater collaboration. MAKING ADJUSTMENTS: LONGER-TERM PLANNING AND INVESTMENT DECISIONS Michael Grant of ICF facilitated this session, focused on how agencies use performance data and analysis to support longer-term investment planning, resource allocation, and funding
decisions. The planned peer agency presentations were covered only briefly because the presenters were unable to attend the peer exchange at the last minute. The discussion focused on prioritization processes: • Prioritization is often tied to broad regional plan goals and to the prioritization measures of other areas, and it is mostly based on perceived needs, not necessarily on financial data or assessments of projects’ effectiveness. • Some agencies mentioned that they have been trying to evaluate individual projects and different combinations of projects for use in local tax ballot measures. • Agencies face challenges with the precision of measuring important targets and with balancing those measurable targets against ones that are less precise or accurate. For example, pavement performance measures can be very precise and accurate, but reliability forecasts (even when precise) are much less accurate. • Some states have tried to adjust their prioritization processes to control for project cost by comparing it to the size of the project (e.g., right-sizing a project, or dividing the score by project cost). • Some agencies face administrative difficulties in adjusting investments because of legacy systems from past administrations. For example, MDOT has more than 30 capital programs that would be difficult to bring together and reorganize into different buckets. Rather than attempting a complete reorganization, MDOT is starting with three buckets (Safety, Mobility, and Asset Management) and will evaluate whether the new approach is effective. • Agencies were supportive of opportunities to use big data to find efficiencies. • Some agencies noted that they perform a degree of tradeoff analysis using a scenario planning process. Some agencies try to perform tradeoff analysis with both what they know and what they know they do not know (since there is often a backlog of condition assessments that have not been performed at the same rate as for higher-risk assets). 
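The cost-normalization idea discussed above (dividing a project's score by its cost) can be illustrated with a short, hypothetical Python sketch; all project names and figures are invented, and the scoring is deliberately simplified:

```python
# Hypothetical sketch of cost-normalized prioritization. Ranking on raw
# benefit score favors large projects; dividing by cost surfaces smaller,
# cost-effective (often systemic) investments. All figures are invented.

projects = [
    {"name": "corridor_rebuild", "score": 90, "cost_millions": 100},
    {"name": "systemic_signing", "score": 60, "cost_millions": 20},
    {"name": "spot_intersection", "score": 40, "cost_millions": 25},
]

# Ranking by raw score puts the most expensive project first.
by_raw_score = sorted(projects, key=lambda p: p["score"], reverse=True)

# Ranking by score per dollar reorders the list toward cost-effective work.
by_score_per_dollar = sorted(
    projects, key=lambda p: p["score"] / p["cost_millions"], reverse=True
)
```

The two rankings differ precisely where the discussion suggested they would: the low-cost systemic project rises to the top once cost is accounted for.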
APPLICATION DEMONSTRATION Anna Batista and Joe Crossett of High Street gave an overview of the application they have been developing, in order to solicit feedback from participants on how to improve it and make it more relevant to transportation practitioners. Feedback included: • A request for a user/MPO to be able to modify some things that operate behind the scenes but influence the data the model uses (e.g., legislation such as a “hands-free” law that improved safety). • The acculturation and institutionalization of feedback loops and processes for data have been challenges; this tool could help with that. o How is the data kept alive? What is the long-term maintenance? o If the tool proves useful, it might be picked up by a major funding agency (e.g., AASHTO), as have other tools developed as part of NCHRP projects.
• It would be preferable for the model to be expanded so that the user could establish a budget and perform a more robust and complex allocation process (this amount to safety, this amount to bridges, this amount to pavement). • Views were mixed about whether participants preferred the model to be state specific or whether a generic “urban” or “rural” state would be just as helpful. o Spending data sources would not match up across states. Federal funding data sources can lag by 2 years when reported for individual states, and a generic state option might eliminate the effect of the data lag. o Being able to compare to specific peers could strengthen a state’s argument for certain investments, but it could also create an unnecessary political challenge. o Some states admitted that they would not or could not use a tool like this to inform decision-making because it is too generic, but that the visualization has value for decision-makers. • Participants generally agreed that the tool could be more beneficial if it were integrated into other existing tools, such as the AASHTO benchmarking tool. • Agencies expressed interest in a base framework they could use internally by inputting their own data, to be more consistent with other internal decision-making analysis tools and existing forecasting tools. o Agencies suggested providing a comprehensive methodology and some universal baseline data, and making something that is more of a general framework so that decision-makers can compare systemic versus hot-spot investment projects. • A tool like this equips MPOs to hold conversations at the state level; it can also help local districts visualize adjustments. PennDOT has done these sorts of forecasts from state-level safety reports, and every legislative district was given its own report, specific to its jurisdiction, about how enacting certain laws would impact fatalities. 
Laws have been enacted as a result, and PennDOT has seen a lot of positive movement in response. WRAP UP AND NEXT STEPS The primary outcome of the peer exchange was a better understanding of the successes and challenges transportation agencies face in monitoring and adjusting. Highlights of the key takeaways from the peer exchange include: • Uniformity in defining short-term and long-term adjustments and goals is important because it makes the idea accessible, but specificity allows each agency to decide for itself what change it can effect in the short or long term. • The data is a means to an end. It allows agencies to convey their progress and understand needed activities, but the feedback loop is necessary so that data doesn’t become the end of the story. • Why a state is looking at a specific target dictates how monitoring and adjusting plays out and whether the state will adjust its actions.

• Data management barriers change over time as new data is collected and becomes available.

Peer Exchange 3 — Atlanta

PROJECT OVERVIEW

Anna Batista of High Street gave a high-level overview of the work done to date as part of NCHRP Project 02-27, including the motivation behind the project and the goals for the peer exchange. Anna asked participants what it means to make targets matter, and the responses fell into four categories:
• Providing strategic direction to agency decisions, such as when discussing the tradeoffs of different investment scenarios.
• Communicating about how performance aspirations might differ from the performance that is realistically achievable within given conditions.
• Educating stakeholders about what could be achieved with their collaboration, such as with increased funding from the legislature.
• Improving accountability to taxpayers by having to explain why performance did or did not meet the target.

INTRODUCTIONS

The facilitators asked each participant to introduce themselves by providing their name, agency, and role, and to briefly answer “What challenges do you or your agency face when making targets matter?” Most of the attendees were from their agencies’ planning or programming departments, but some also had roles or functions in data analysis and project selection and development. The key takeaways from this section were:
• Agencies often lack the capacity to meaningfully incorporate data into decision-making.
• Most agencies forecast the impacts a project will have on targets, but few have the capacity to look back to assess the actual outcomes.
• While some agencies have centralized performance measurement into their agency structure, others are finding communication to be a significant barrier when different divisions are responsible for different performance measures.
• State DOTs and MPOs have different priorities when setting targets. Statewide goals often don’t reflect regional priorities.
• Political priorities shift as administrations come and go, which can make it difficult to sustain a focus on performance management.

DATA AND DECISION-MAKING

Despite the tremendous amount of data available, forecasting tools include significant levels of uncertainty, especially as technology and the available data continue to evolve.
• Kyung-Hwa of ARC used the analogy of the “data lake” – a vast reservoir of information that is too large to fully process. To help deal with uncertainty in long-term forecasts

and around new technologies, ARC has started focusing more on the very near past and very near future.
• Agencies are unsure whether they are taking enough time to fully consider and analyze data as they establish targets.
• Data quality varies greatly.
• Data is often incomplete, especially injury data. As data sources improve to capture more of the relevant injuries, the number of reported injuries increases, but this may be due to more accurate reporting and tracking rather than an actual decline in safety.
• Decision-makers might not have the right information at the right time in the process.
• With the right data, agencies hope to be able to prove their effectiveness when they ask for more funding.

UNDERSTANDING THE “INFLUENCE PATHWAYS”

There is a disconnect between the amount of time it takes to plan and implement a project and the amount of time in which that project will have a measurable impact on a target. Agencies are often unable to devote staff or resources to evaluate whether projects have been effective at improving performance. Without these evaluations, agencies cannot understand all the factors that influence performance (i.e., an influence pathway).
• Jamie Fischer of the Georgia State Road and Tollway Authority highlighted how it is necessary to get agreement on what these “influence pathways” are so that the agency can select the most effective options. To understand how to influence outcomes, agencies can develop a set of KPIs around which all the relevant contributors can coalesce.
• Agencies struggle to understand what they can influence. They know that they cannot just increase spending but that they need higher quality projects.
• MPOs and DOTs are often not the implementers of plans and programs and are looking for ways to exert influence through other means.
ORGANIZATIONAL STRUCTURE AND DIFFERING PRIORITIES

In addition to limited staff or capabilities, it is difficult to keep established teams involved in investment recommendations when different programs are housed in different offices. There is often a disconnect between targets set at the various levels of government.
• The responsibility for tracking individual performance measures often lives within multiple departments at a single agency. It is difficult to organize continuous communication without a central organizing figure or division. Without this continuous communication, the divisions responsible for setting performance targets may not be the divisions that are involved in investment recommendations.
• West Virginia DOT is currently creating a new Strategic Performance Management Division but has limited staff and is still determining the best approach to reorganizing.
• When setting targets, State DOTs take a statewide perspective while MPOs are more focused on their regional context. This necessitates conversations on data sharing, the aspects of target setting, challenges of predictive methods, and rationales

behind tradeoffs. Both state DOTs and MPOs need the capacity to hold those conversations.
• Sometimes federal or state requirements are not relevant for the MPOs or transit agencies. For example, the federal performance measures assume a four-hour peak period, but many smaller urban areas have much shorter peak periods.
• As new administrations leave and enter office, some performance management activities are set back as career transportation staff educate new partners on the importance of performance management and recruit new leaders as performance management champions.
• Project sponsors are accustomed to previous ways of allocating funding – whether to the loudest voices in the room or by splitting funding by population, etc. – and often resist making changes to the approach.

MAKING TARGETS MATTER TO STAKEHOLDERS

This session focused on how agencies can influence others to care about the targets. Key takeaways from this section include:
• Tie the target to the values of that stakeholder.
o Executives and leaders have personal goals for the agency. Tell the story of how the performance targets relate to those goals.
o Tying employee performance to agency performance (as NCDOT has done) is an innovative strategy to encourage individual ownership over agency goals.
o Identify how your agency’s targets help the other agencies meet theirs, and link project evaluation processes to achieving your targets.
• Executive support is necessary for effective performance management, but efforts to gain that support often come from the bottom up.
• All staff should have some familiarity with data analysis concepts.

PEER PRESENTATION: ORGANIZING AROUND PERFORMANCE MANAGEMENT

The session featured a peer presentation from Gehan Elsayed of West Virginia DOT (WVDOT) on the framework for performance management created using an FHWA SHRP2 grant between 2016 and 2019.
Prior to the grant, multiple divisions at WVDOT were using the same or similar data but there was not much coordination between the divisions. WVDOT convened these divisions in a workshop to develop a performance management process that would fit within the existing data and agency capabilities. Throughout this process, WVDOT kept leadership and stakeholders engaged through fact sheets and data visualization tools to carefully explain and justify targets. This engagement built consensus around targets among WVDOT staff and leadership, as well as among FHWA and MPO partners. The process demonstrated the importance of: • Early and constant coordination and communication • Data management and QA/QC procedures • Defined roles and responsibilities • Staff expertise • Leadership awareness

Following the presentation, participants discussed strategies for successfully engaging agency staff and external partners or stakeholders on performance management. Different approaches may be needed for different audiences. Conversation revolved around how to make targets matter to…

…AGENCY LEADERSHIP
• When new leaders come on board after an election or other turnover:
o Reach out immediately to set up a briefing. Be aggressive and welcoming to get on their schedule.
o Be confident in your message.
o Emphasize the federal requirements.
• Start with the executive director to recruit them as a champion.
• Use messaging and wording that appeal to those leaders and avoid jargon. For example, leaders are more likely to endorse a plan that is “data-driven” rather than one labeled “performance-based” or “performance management.”
• Learn what matters to each of the leaders you need to convince. Then use storytelling approaches to educate them about how the performance targets support their personal goals for the agency.
• Let the executives identify which 3-5 KPIs should guide the agency.
• Simplify communications to suit their busy schedules. Partners and stakeholders do not need to be inundated with the process for setting a target or the calculations behind progress toward that target. Simplified communication methods are effective in communicating agency priorities and actions. When questions arise, staff should be able to explain or defend the data and metric, but the most important part of messaging is that it keeps stakeholders on board with the current course.
o “Elementary School Show and Tell”
o One-pagers

…AGENCY STAFF AT ALL LEVELS
• Leverage executive support to engage everyone else.
• Agency leaders should focus on just a few (3-5) KPIs to direct agency performance.
By reining in the focus from several performance metrics to a few KPIs, other staff can better understand agency priorities and make efforts in their individual roles to achieve those priorities. It gets everyone “pushing on the same side of the boulder.”
• Identify relevant goals that can be used in employee performance evaluations. Start with the executives, who are responsible for the agency’s high-level goals or KPIs. Then identify how each employee can influence those goals or indicators, and establish related goals for each employee to achieve. This empowers each employee to take ownership over what they can influence about agency performance.
o North Carolina DOT uses this approach to integrate performance management into agency culture.
o Participants generally liked this idea.
o Some participants expressed a desire for research into potential downsides. For example, some employees might misreport their data so that the data shows

performance success even though reality does not match (as in The Wire).
• Employees might feel frustrated if they are expected to “control” performance. Focus instead on what employees can “influence.”
• Provide the level of detail that is relevant to that employee. Policy makers are only going to want high-level information (with data available if requested), but data geeks are going to want to understand the data.
• Make sure staff at all levels understand the importance of the data that they are collecting. For example, police officers need to know why it is important to complete crash reports accurately.
• Identify and recruit champions at all levels.
• Use storytelling to convey that we all play a part in achieving the agency’s goals.
o Planners = story tellers.
o Data analysts
• Empower agency staff to understand the data. Build in-house skills rather than always relying on consultants.
o In-house staff understand the agency’s needs and interests whereas new recruits or consultants might not. Staff have a passion for service and for knowledge.
o All staff can have performance management in their wheelhouse without requiring them all to be the expert.
o One participant’s agency encourages weekly lunch and learns so that employees can share what they have learned. Staff feel “joy” in sharing knowledge, which attracts new attendees.
o Showcase skills and products so that employees can feel pride and can learn from their peers.
o Georgia State Road and Tollway Authority has found success in hosting internal trainings on dashboards and other data visualization tools, which created excitement among staff around communicating performance measures.
o Government agencies may not be able to offer salaries that are competitive with other data analysis industries, but the agencies can appeal to analysts who are passionate about their community or the issues they work on.
…EXTERNAL STAKEHOLDERS
• Identify what the other agencies are doing, and tie your efforts to theirs.
• Engage stakeholders early so that they can be a true partner in the process.
• “Kill them with meetings.” A well-planned in-person meeting can be substantially more effective than trying to accomplish things over email.
• Allocate funding based on performance, rather than splitting funds evenly among jurisdictions.
• Identify projects that would significantly improve performance. For each, identify the relevant implementer and recruit them as a project sponsor.

AGENCY ACTIONS TO ACHIEVE TARGETS

How does your agency influence outcomes within and across investment cycles? Key takeaways from this session included:
• Conduct studies and research to identify effective interventions.
• Adjust project-selection approaches. Favor projects that improve multiple performance areas and regional cooperation.
• Identify projects that will serve the agency’s goals, and then recruit implementing agencies to sponsor them.

To kick off the conversation, Jessie Jones of Arkansas DOT (ARDOT) gave examples of how ARDOT was adjusting actions to improve outcomes. Jessie spoke to ARDOT’s efforts to move from a “hot spot” approach to a systemic approach in order to focus on solutions that would have a larger geographic (and per capita) impact than ones meant to address isolated issues. As an example, Jessie described how ARDOT incorporated rumble strips into its pavement preservation program after data collected before and after the addition of rumble strips showed improvements in safety metrics. The steps for achieving this included:
1. Conduct before-and-after analyses
2. Develop a policy
3. Educate others (districts, etc.). Districts appreciate and value the information they receive from headquarters.

This prompted a conversation on other successful examples of adjusting actions to achieve targets. Jamie Fischer of Georgia State Road and Tollway Authority spoke of their approach to using benefit/cost analyses to support project-selection conversations:
1. High benefit-high cost: These projects are projected to have large impacts on agency targets, but the funding may not be available to start the project. The conversation for these projects revolves around identifying the funding sources that can eventually be used to implement the project.
2. High benefit-low cost: These projects are the “low hanging fruit” that agencies can fund and work on while identifying funding for the “high benefit-high cost” projects.
3. Low benefit: Regardless of cost, the conversation for these projects centers around why they score low and whether there are any actions the agency can take to improve the project’s ranking compared to others. Either the project can be left as a low priority, or it can move into one of the other two categories once strategies to increase its benefit are incorporated into the project scope.

Other approaches to identifying and making adjustments included:
• Conduct studies to develop concepts and solutions.
• Modify the agency’s analysis to account for multiple performance areas. For example, modify the analysis of signal timing to account for safety as well as delay.
• Rescope projects to focus on better design rather than just widening roads.

• Develop communication tools that demonstrate the cost-effectiveness of different strategies.
• Adjust how the agency evaluates and prioritizes projects: benefit-cost analysis, indices, rankings among similar project types.
• Understand the scale of the project so that you can evaluate it properly.
• Adopt Planning & Environmental Linkages approaches to advance project delivery, as Georgia DOT has done.
• Train and assist the local agencies and implementers.
• Focus more on providing loans than grants, as Georgia’s SRTA has done with the State Infrastructure Bank funds.
• Award extra points to projects that are coordinated among multiple jurisdictions, as Northern Virginia Transportation Commission (NVTC) has done.
• Develop and maintain good relationships with the local implementing agencies.
• Make tactical adjustments throughout the year, as SRTA does. They track customer complaints by type and conduct root cause analysis of the major complaint types. The biggest issues vary month to month, but they can identify actions to take, such as a maintenance campaign.
• Develop a pilot incentive program. Measure results, including public perceptions of the pilot, and adjust future pilots in response.

PEER PRESENTATION: AN MPO PERSPECTIVE ON ADJUSTMENTS

Hans Haustein presented on how his agency, Metroplan (the MPO for Little Rock, Arkansas), makes adjustments to improve performance outcomes. Although Metroplan (as with many MPOs) does not control much of the project-selection effort in the region, it does control the disbursement of grants for transportation alternatives. Hans presented two of their TAP projects to demonstrate options for selecting projects to fund.
• The first project was a trail project adjacent to an elementary school. Metroplan identified that the street providing vehicular access to the school did not have any pedestrian infrastructure or even shoulders. Hundreds of school-age children lived within walking distance of the school.
By adding pedestrian infrastructure along this roadway segment, they could make a significant improvement in providing safe access to the school. • The second project was a redesign of an arterial along a mixed-use corridor. The arterial had two lanes of vehicular traffic and a center turn lane. The sidewalk infrastructure was incomplete and disjointed, but Metroplan could see “desire lines” in the grass leading to bus stops. Metroplan identified that pedestrian and bicycle crash risk ran high in some portions of the corridor. To address this need, they funded a project to redesign the corridor to add a shared-use path, a vegetated buffer, and a vegetated median to improve safety along the corridor. BUILDING THE FEEDBACK MUSCLE How can an agency “close the loop” -- by looking back, internalizing results, and forecasting future performance? Key takeaways from this discussion included:

• Don’t be afraid of failure. Move forward on imperfect information, and then review how it went to make improvements the next time.
• Dig deep to find the root causes of outcomes. This requires multidisciplinary participation.
• Create a structure around which to have these conversations.

PEER PRESENTATION: REINVESTING TOLL REVENUE TO ENHANCE TRANSIT OPTIONS IN NORTHERN VIRGINIA

Allan Fye from NVTC gave a presentation on NVTC’s Commuter Choice program and how it has evolved to improve performance. The Commuter Choice program uses toll revenue to fund transit projects for the region, which are submitted by local partners and compete with each other for available funding. As this program has matured, NVTC has tweaked the project-selection criteria with each funding cycle to make the data reporting requirements more prescriptive and ensure data analysis between projects is consistent. NVTC recognizes that effective communication on these updates is crucial to maintaining a positive relationship with its partners. A Program Advisory Committee (composed of elected officials and political appointees) works with an NVTC staff working group to increase program transparency by communicating about the prioritization of projects and to work with localities to submit stronger projects. One of the recent changes to the scoring criteria was to simplify each project’s scoring to a single, combined score to more effectively communicate prioritization decisions to stakeholders.

NVTC collects data on performance so that they can see if funds were used effectively, creating a feedback loop between funding and performance data. (1) They invested in a bus. (2) The data showed that it was standing room only. (3) They invested more in that bus line. (4) The data showed that it was still standing room only. Allan’s presentation was followed by a conversation on other examples of agencies using data to implement a feedback loop.
Participant examples included: • MetroPlan Orlando (Florida) conducts a before-after analysis on its signal retiming program, selecting 25-30 intersections a year to collect data on the impacts of the retiming efforts. Recently, they began incorporating more factors into this analysis, such as safety considerations and bicycle/pedestrian access. • Arkansas DOT has just begun tracking before-after data of its safety projects. They hope the data will demonstrate the effectiveness of strategies deployed so that they can decide what programs to continue or eliminate in future years.

• Arkansas DOT has installed cable barriers on all expressways, but they require a lot of ongoing maintenance. The state highway police manage an “E-crash system” that tracks information relating to the cable barriers and other factors. The E-crash data helped the DOT confirm that the maintenance costs are outweighed by the safety benefits of having the cable barriers on expressways. As the DOT expands the cable barriers to other types of roadways, they are continuing to analyze the cost-benefit ratio, which declines as travel speeds decrease.

BREAKOUT GROUPS: CREATE THE FEEDBACK LOOP

After the discussion, participants broke out into groups to discuss and attempt to draw the monitoring and adjustment feedback loop. Key takeaways from this discussion included:
• Find the “root cause” by “continuing to ask why.” This root cause analysis requires participation from a diverse group of stakeholders to discuss the “story” behind the data.
• Accept risk and a “willingness to fail”; “try something.” Agencies need champions to challenge the status quo and mechanisms for revising the feedback loop in innovative ways.
• The feedback loop might need to show (possibly as an interior and exterior circle) the interactions between staff experts and decision-makers. Staff experts analyze data and research to develop options for the decision-makers to consider, and all decision-making flows between the staff experts and the decision-makers.
• Statistical anomalies are sometimes not outliers to be “cleaned” from the data. Sometimes they are opportunities to identify an unknown factor that has a major impact on overall performance.
• The “aha moments” only happen if you cross-pollinate among people with different expertise and knowledge bases. These multidisciplinary “why” sessions are essential. Reflect upon the lessons learned using storytelling approaches, and consider any unintended consequences.
• Meetings need structure – a facilitated process with decision-makers and with those who are empowered to make decisions that affect outcomes.

RESOURCES AND TOOLS: WHAT DO YOU NEED NEXT?

Anna Batista facilitated this session focused on the types of resources and tools available to agencies currently, and what resources and tools practitioners thought they needed next. Key takeaways included:
• Outreach is vital, but simple is better. Whether communicating with stakeholders, presenting dashboards, or creating project-selection criteria, the most important component is to keep the forward-facing aspect as simple as possible. Then provide the option for curious users to dive into specifics.
• Agencies are interested in tools that help them identify ideas for interventions, including databases, case studies, and communication templates for performance management meetings.
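Looping back to the feedback-loop examples above (MetroPlan Orlando's signal retiming reviews and ARDOT's before-after tracking of safety projects), the core of a before-after comparison is a simple exposure-normalized calculation. The sketch below is only an illustration: the corridor names, crash counts, and traffic volumes are hypothetical, not data from any agency mentioned in these notes.

```python
# Hypothetical before-after safety analysis, in the spirit of the
# signal-retiming and safety-project examples above. All corridor names,
# crash counts, and traffic volumes are invented for illustration.

def crash_rate(crashes, mvmt):
    """Crashes per million vehicle-miles traveled (MVMT)."""
    return crashes / mvmt

# (corridor, crashes before, MVMT before, crashes after, MVMT after)
sites = [
    ("Corridor A", 24, 12.0, 15, 12.5),
    ("Corridor B", 18, 8.0, 14, 8.2),
]

for name, cb, vb, ca, va in sites:
    before = crash_rate(cb, vb)
    after = crash_rate(ca, va)
    change = 100 * (after - before) / before
    print(f"{name}: {before:.2f} -> {after:.2f} crashes/MVMT ({change:+.1f}%)")
```

Normalizing by exposure (vehicle-miles) rather than comparing raw counts guards against the reporting effect noted earlier in these notes: more complete data collection can raise raw counts even when underlying safety is improving.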

PEER PRESENTATION: ATLANTA REGIONAL COMMISSION’S DASH TOOL

Kyung-Hwa of ARC presented on ARC’s DASH tool, an interactive, public performance dashboard. Key messages:
• “Science is the background to an effective dashboard, but art is the foreground.” Simplify the presentation of complex data to avoid confusion over how it should be interpreted, while also maintaining a capability for a deep dive into the data for those who want to explore it. Reduce the amount of text on individual pages within the dashboard, but let curious users click on links to access more information and data.
• “A rabbit is not a horse.” Data must be connected to a story to be meaningful. For example, you would not expect different types of projects to perform the same way, just as you would not expect a rabbit to be able to compete with a horse. The rabbit is valuable in its role, and the horse in its own.
• Be vulnerable and open to feedback because that feedback is an opportunity to improve. Get feedback from a lot of internal teams before you even start building, and then keep collaborating and requesting feedback.

The Hillsborough MPO is working with a consultant to develop a dashboard, which was part of the business plan the agency developed a few years ago. The goal is to provide a one-stop shop for interactive tools for understanding the TIP and long-range plan.

SHOWCASE OF POTENTIAL APPLICATIONS

Anna then presented two tools developed by High Street. The first was a prototype tool for forecasting trends based on investment and policy data, which High Street has been developing for this project and presented at the two previous peer exchanges.
• There is some risk in not considering the actual indicator variables. Participants would like to see a more econometric model that considers more input variables.
• Especially with funding, the scale is difficult to understand to begin with and invites misunderstanding in general communication.
• Safety is a tricky metric to forecast because it has many components, and with a single funding lever it is not clear where that funding is going – is it maintenance, is it safety patrol, etc.
• The model needs to show a range of potential futures.
• It’s important to consider anomalies, but especially for smaller data sets it’s more important to look at the core data and ignore the extremes.

The second was a benchmarking tool, available on the AASHTO website, which allowed State DOTs and MPOs to compare their performance on certain metrics with peer states. Participants were especially receptive to the second tool, saying it would be helpful if information could be tied into the historical trend lines. For example, if an agency made an adjustment, either fiscal or policy, it would help other agencies to see the strategy used and the associated impact on the trendline. The tool could become an interventions inspiration database.
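The participants' request that the forecasting model "show a range of potential futures" can be illustrated with a minimal trend forecast. This is a hedged sketch, not the prototype tool's actual method: the fatality figures are hypothetical, the model is a plain least-squares line, and the band is simply two standard deviations of the residuals.

```python
# Minimal sketch of a point forecast plus a range of potential futures.
# All values are hypothetical; a production tool would use an econometric
# model with more input variables, as participants suggested.
import statistics

years = [2015, 2016, 2017, 2018, 2019, 2020]
fatalities = [510, 495, 500, 480, 470, 465]

# Ordinary least-squares fit of fatalities against year.
mean_x = statistics.mean(years)
mean_y = statistics.mean(fatalities)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, fatalities))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

# Spread of the residuals gives a crude uncertainty band.
residuals = [y - (intercept + slope * x) for x, y in zip(years, fatalities)]
spread = statistics.stdev(residuals)

for year in (2021, 2022, 2023):
    point = intercept + slope * year
    print(f"{year}: about {point:.0f} (range {point - 2 * spread:.0f}"
          f" to {point + 2 * spread:.0f})")
```

Presenting the band alongside the point forecast makes the uncertainty visible to decision-makers, which speaks to the concern above that single-number forecasts invite misunderstanding.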

When asked what other tools they would like, the participants generally agreed that the following three types of resources would be very helpful:
• Case studies on successful adjustment strategies would be valuable, especially if they could support a menu of potential strategies for having an impact on certain targets.
• Guidance that incorporates a “user-inspired complexity” – a resource that covers the basics but provides enough links to external sources that a reader could go as far down the rabbit hole as they wished on a certain topic.
• Communication templates that could be used by agencies in meetings with different audiences (elected officials, public, internal, etc.). How do we have conversations that uncover root causes? What is the format for that meeting? Who do I engage? What do we talk about?

ADDITIONAL GUIDANCE RESOURCES

Beyond the tools, participants also discussed what type of guidance documents would be helpful for them in making targets matter. Their responses included:
• A menu of strategies pursued by peer states that could potentially address a similar issue.
• Adapting the TRB benchmarking tool to show when intervention strategies were applied to back up the trends shown on the graphs.
• A discussion template or meeting format that outlines who to engage and how, packaged as a general tool that can be used at a local level. This could take the form of several “facilitation guides” for different audiences.
• Most MPOs support state targets, but it would be good to see case studies on MPOs that have decided to adopt their own targets, understand their reasoning, and identify the ways they were able to exert influence.
• Whatever resources come out of this project, they should have a “user-inspired complexity.” In other words, give a general overview of the challenges in making targets matter and potential strategies to address them, but allow the reader to dig into the strategies by tying them to additional (separate) case studies.
WRAP UP AND NEXT STEPS

Participants reflected on what they had learned during the workshop that they might take back to their agencies. Several of these ideas might also point to additional research needs or tools to develop:
• They no longer feel alone.
• The NVTC Commuter Choice program as a model for iterating on a project-selection process.
• Non-federal measures are important.
• The DASH tool as a model of a communication platform.
• Integrate performance management roles into personnel evaluations of individual staff.
• Feedback loops vary. No one approach will apply to all situations.

• MPOs play an important role as conveners and in helping develop meaningful cooperation among jurisdictions.
• Communicate frequently, even if briefly.
• Develop a benchmarking dashboard.
• Develop a facilitation guide for conversations.
• When evaluating projects, give points for regional cooperation.
• Look back at what we have invested in and what the result was.

Participants provided the following ideas for the research team to consider in developing the guidebook:
• Cannot/should not be too big.
• Provide a clear framework to follow.
• Attract the reader with the first paragraph of each section.
• Provide the option for a reader to dive deeper.
• Keep it simple and user friendly.
• Use graphics.
• Provide steps and inspiration.
• Provide a lesson plan with useful tools
o Facilitation questions (as in FHWA’s PlanWorks tool)
o Provide a directory of tools with links

Peer Exchange 4 — Salt Lake City

PROJECT OVERVIEW AND INTRODUCTIONS

Anna Batista of High Street gave an overview of the work done to date as part of NCHRP Project 02-27 and outlined the purpose of the peer exchange. Anna asked participants what it meant to make targets matter. Participants gave succinct and direct responses: making targets matter is about holding people (and agencies) accountable, driving performance, and building dialogue.

Participants then introduced themselves and discussed difficulties they have encountered in making targets matter. The challenges they shared included:
• Communicating new or nuanced measures. Some agencies have legacy measures that they tracked prior to the existence of the federally mandated measures. It can be difficult to explain the difference between these metrics when communicating with stakeholders. Some federally mandated measures, such as the system reliability measures, are particularly difficult to break down in a simple way.
• Finding effective targets.
Some staff at local agencies want to set targets that will be the easiest to report on, but these might not be the most meaningful to track. This also presents challenges for DOTs and MPOs who want to encourage localities to pursue projects that will increase system performance, but do not want the localities to feel like they are being graded or judged solely based on these targets.

• Connecting performance management and the long-range plan. Long-Range Transportation Plans focus on projects or policies that will have a long-term impact on progress toward targets. For agencies looking to move the needle through actions in the present, this plan is not the logical place to report on progress toward targets.
• Aligning federally mandated measures with local or regional priorities. Some states report on federally mandated measures but use a different set of measures for decision-making functions. Federal measures do not prescribe what actions to take to move the needle on targets, so the state measures give a better understanding of underlying problems.
• Control over setting targets. For MPOs especially, if an agency wants to set a target beyond what its state has set, then it is responsible for collecting the data to measure it. Many MPOs adopt the targets their state sets but may not have the control needed to achieve those targets. For some measures, like safety, states set unattainable targets (e.g., Mission Zero for roadway fatalities).
• Communicating tradeoffs. Some measures conflict with each other — for example, travel time and travel reliability: the system can be very reliable if travel times are consistently slow. Decision-makers, especially elected officials, want to show progress on all targets, but they do not always understand the tradeoffs, both between performance areas and as a result of funding or data limitations.

MAKING TARGETS MATTER TO STAKEHOLDERS

Hannah Twaddell of ICF facilitated this session on influencing others to place importance on agency targets. The session featured a peer presentation from Nick Meltzer, the single staff person of the Corvallis Area MPO (Oregon), on how he makes targets matter to the stakeholders who may have more control over the achievement of targets.
Key takeaways from the discussion included:

• Although external pressure can be used to motivate stakeholders, it is often not very effective.
• Extensive involvement with the community and other stakeholders can sway opinions and move decision-makers to endorse actions that drive success.

Nick succeeded in getting the MPO policy board to voluntarily measure transportation performance. One advantage Nick has as a one-person MPO is direct engagement with stakeholders and elected officials, which allowed for constant communication on the metrics-tracking effort and let him spend six months focused on rebuilding relationships. These decision-makers adopted a directive for GHG and VMT targets, which Nick used to encourage localities to identify what they would be able to measure, the relevance of that data, and the resources they could devote to ongoing tracking. Local officials were initially hesitant to adopt a set of measures before they had an operational plan in place outlining roles and responsibilities for collecting, analyzing, managing, and reporting on data. They accepted the initial concept and are currently working through the operational plan to move toward implementation. By not focusing on the outcome, and instead looking for the localities’ expertise on the best methods

for implementation, Nick was able to facilitate a process that worked toward the decision-makers’ directive while gaining buy-in from the local partners who will implement it.

Nick’s key lessons learned from the experience:

• Seize opportunities when they arise.
• Don’t be attached to a particular outcome.
• Identify dual outcomes and co-benefits.
• Overcommunicate.
• Be willing to do all the work.

This prompted a conversation on whether external pressures motivate stakeholders. Some agencies noted that external pressure is not always effective as a motivating force because it does not encourage ownership over the action stakeholders are being pressured to take.

• As an example, the Utah State Legislature recently passed a bill on affordable housing that dictates the actions MPOs should take in response to identified affordable housing issues. MPOs are not against affordable housing, but there is pushback on the state stepping in and saying, “here’s how you report it and how you need to take action.”
• Nick noted that this pushback against external pressure existed in his example too: city staff were hesitant to implement a directive of elected officials because of the frequent turnover due to elections. Nick was able to give the city staff ownership over the directive by encouraging them to find the best ways to implement it.

To mitigate the perception of external pressure, participants recommended engaging local staff early in a target-setting discussion, allowing them to contribute to the conversation that will set long-range goals and outline scenarios for achieving them. By incorporating local staff’s feedback into the process, agencies can work with their localities to encourage actions that lead to preferred local outcomes.
Participants provided some examples:

• Wasatch Front Regional Council (WFRC) gave an example of a successful case study in local engagement. Roy City is a built-out suburb of Ogden (the third-largest city in Utah) and has a commuter rail station. The station area is surrounded by low-density housing and relatively defunct strip retail. Roy City’s mayor participated extensively in a regional visioning process, which led him to understand and accept the concept of “growth centers.” The city worked locally to redo its downtown plan to enable changes in uses and zoning that support transit-oriented development, supporting both local and regional goals for development.
• Utah DOT coordinated an effort to develop a Memorandum of Agreement (MOA) with MPOs and local jurisdictions when the federal performance measures were released. The MOA was a push to start the target-setting process with all partners on the same page. It outlined what the relationship would look like for data sharing and analysis, and it centralized those functions within UDOT so that all partners knew whom to contact for anything related to performance management. Prior to the federal mandate, each performance measure was handled by a different division within UDOT. Creating a central point of contact made the reporting function of performance management easier.

• The UTA’s Service Choice Program includes a public outreach campaign to identify whether ridership or service coverage was more important to stakeholders in guiding the agency’s decision-making. The campaign involved public polls, workshops to discuss tradeoffs between the two metrics, and scenario planning meetings with community decision-makers to build consensus on local needs. Based on the input received, UTA developed targets that struck a balance between ridership and coverage, rather than focusing on just one or the other.
• Local Streets and Roads Working Group. In the Bay Area, local agencies felt that funds were not being spent equitably and that they were not at the table when strategic decisions were being made. They created this working group to be a single voice for increased investment, building bottom-up buy-in.

AGENCY ACTIONS TO ACHIEVE TARGETS

Paul Hershkowitz of ICF facilitated this session on the actions agencies are taking to influence performance outcomes within and between investment cycles. The key takeaway: effective adjustments require effective communication strategies.

The session began with a peer presentation from Carl Miller of COMPASS (Idaho) on successes the MPO has seen in certain actions to achieve targets. Some of these successes include:

• Rather than continuing to argue about whether or not people biked, COMPASS used the debate as an opportunity to install permanent bicycle and pedestrian counters. Now COMPASS can bring data to these discussions.
• COMPASS developed a strategy for mitigating the aspirational-versus-attainable tension in a public relations context (e.g., the issue faced by agencies that do not want to set a fatality target out of fear it would be perceived as “the agency wants to kill this many people,” but that also do not want to set a Mission Zero target because it is unachievable). Their safety target each year is to achieve a decrease from the previous year.
• The county was continually not meeting its target on farmland preservation, which gave the City of Boise leverage to push for action on the issue. Although COMPASS did not possess direct influence over the relevant policy levers, they continually presented the year-over-year data to the policy boards responsible for approving suburban development projects on farmland. By providing the underlying data, COMPASS gave the decision-makers the impetus to create a formal policy on preserving farmland. COMPASS also created a development checklist so that they can report to the decision-makers on whether or not a proposed development meets the agreed-upon local and regional goals.
• COMPASS changed the way they communicated about TIP achievements.
  o Rather than reporting the amounts spent in each area, they focused instead on what they achieved with those funds. In estimating the impacts of their investments, they could claim “more wins than losses,” which gave them more freedom to talk openly about the losses without feeling like those were “failures.”

  o They also reduced the number of performance measures reported from 60 to a handful, which helped them be more effective with their messaging. “With so many things to report, you fail a lot.”
• COMPASS offers small implementation grants to local partners to institute projects that work toward regional targets, even if the project is small. This method of incentivizing success was widely regarded as innovative, given many funding programs’ punitive measures for not reaching targets (e.g., removing funding).

The participants’ discussion then focused on making adjustments and communicating about those adjustments:

• UDOT has created the UPortal, a website through which the agency can demonstrate how it is achieving its STIP goals. The simpler this information is to digest, the less pushback they get from stakeholders.
• Metropolitan Transportation Commission (MTC) continuously refines its data collection process. They collect the data for most local streets. Improving the data quality is very important.
• A possible downside of reporting good performance is that funders may erroneously believe that you do not need more funding.
  o If people don’t understand what you do, they will come after your funding.
  o One agency has explored showing the “health” of assets rather than whether a target has been met or not.
• Performance cascades from strategic goals to process performance measures, then to tactical performance measures, and finally to individual performance measures.
• Agencies need to communicate that elected officials and governments are making things, and these things can actually be counted and measured. Metrics help to do this.

BUILDING THE FEEDBACK MUSCLE

Hannah Twaddell facilitated this session on how agencies are reflecting on past data and analysis, internalizing results, and forecasting trends to support performance management.
Sui Tan of MTC (California) presented on MTC’s efforts to move from awarding funding to partners based on “worst-first” pavement quality to awarding funding based on the positive efforts partners take to preserve and improve pavement quality. The impetus for this change was to measure and reward preventive maintenance efforts instead of overall pavement quality, with the assumption that preventive maintenance will result in higher pavement quality over the longer term. If MTC instead chose to invest in the jurisdictions with the worst pavement quality, then the jurisdictions that used their awarded funds most efficiently (conducting preventive maintenance and achieving higher pavement quality) would not receive funding to continue doing so, while jurisdictions that have been less strategic with their spending would receive higher levels of funding. The new approach looks at the ratio of actual to recommended preventive maintenance.
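The effect of switching from "worst-first" to ratio-based funding can be illustrated with a minimal sketch. The jurisdiction names, lane-mile figures, and the idea of ranking by the preventive-maintenance ratio are illustrative assumptions for this example, not MTC's actual award formula:

```python
# Illustrative sketch: rank jurisdictions by the ratio of actual to
# recommended preventive maintenance (higher ratio = more proactive).
# All names and figures below are hypothetical, not MTC data.

def pm_ratio(actual_pm_lane_miles: float, recommended_pm_lane_miles: float) -> float:
    """Ratio of preventive maintenance performed to what was recommended."""
    if recommended_pm_lane_miles <= 0:
        return 0.0  # no recommended work: treat as no measurable effort
    return actual_pm_lane_miles / recommended_pm_lane_miles

jurisdictions = [
    {"name": "City A", "actual": 45.0, "recommended": 50.0},  # proactive
    {"name": "City B", "actual": 10.0, "recommended": 40.0},  # deferring work
    {"name": "City C", "actual": 30.0, "recommended": 30.0},  # fully on plan
]

# Under the old "worst-first" approach, City B's deferred maintenance would
# attract funding; under the ratio approach, proactive agencies rank first.
ranked = sorted(
    jurisdictions,
    key=lambda j: pm_ratio(j["actual"], j["recommended"]),
    reverse=True,
)

for j in ranked:
    print(f'{j["name"]}: {pm_ratio(j["actual"], j["recommended"]):.2f}')
```

The ratio rewards effort relative to need, so a small jurisdiction that completes all of its recommended work ranks ahead of a larger one that defers most of it.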

Small Group Exercise — Visualizing the Feedback Loop

Following Sui Tan’s presentation, the participants broke into three groups to discuss and attempt to draw a diagram of an effective feedback loop. The three groups reported back and commented on each other’s feedback loops. For the purposes of this summary, the project team created nicknames for each group’s diagram.

Group 1: “Incorporating Multi-Agency Perspectives” — The first group was composed of members from a DOT, a large MPO, and a small MPO, and structured their loop from a multi-agency perspective. In this perspective, all agencies are addressing the same problem but at different scales. State targets look at a more holistic picture, while MPOs have community-driven targets. State metrics are the basis for decision-making, so there needs to be collaboration to incorporate community goals into the formulas for statewide decisions.

Group 1: Incorporating Multi-Agency Perspectives

Group 2: “Creating a Cadence of Accountability” — The second group created a loop that starts with core goals supported by defensible measures (i.e., what are the leading indicators on the target, and which of these indicators does the agency have influence over?). Once the goals and measures are in place and the agency has begun measuring, the focus shifts to communicating cause and effect to stakeholders. If the proper leading indicators have been identified, then cause and effect should be clear; if not, then the agency needs to revisit its measures to look for additional data options. Engaging stakeholders gives the opportunity to identify potential new measures or data sources to support the core goals. The engagement piece is crucial to creating a “cadence of accountability,” which involves effective communication of performance and validation that stakeholder comments and concerns are fully explored and utilized.

Group 2: Creating a Cadence of Accountability

Group 3: “Encircling the Chaos” — by communicating compelling stories, pinpointing influencers, and adjusting plans intelligently. The third group started their feedback loop with the monitoring component, with the rationale that you cannot set goals or targets until you understand baseline conditions. As part of this monitoring process, agencies produce documents tailored to their audiences. This requires a compelling narrative to keep audiences engaged, which is crucial to soliciting effective feedback. This reporting and feedback help to identify levers of influence that can be used to move the needle toward a certain target. Once an agency is aware of the baseline conditions and the tools at its disposal to influence those conditions, it can set goals and implement plans to attain them. During implementation, the agency returns to monitoring to understand progress toward the goal and any new baseline conditions.

Group 3: Encircling the Chaos

RESOURCES AND TOOLS

Mark Egge of High Street presented a prototype tool for forecasting trends based on investment and policy data, which High Street has been developing for this project and presented at the three previous peer exchanges. He walked through an example state agency forecasting exercise to show how adjusting the levers of policies and funding levels could impact future safety and bridge conditions. Participants were receptive to the tool as a training resource to help educate staff or decision-makers through a hands-on exercise, and they gave the following feedback:

• Clarify that the information is conceptual; keep related conversations simple.
• They appreciated the accuracy of the historical data for each state, which gave them confidence in the accuracy of the forecasts, but suggested considering a generic “state X” instead.
• Scenarios could be helpful for these teaching discussions, which might help the user understand whether the tool is showing reality or is just a vehicle for learning a theory.

• They would like the tool to tell them more about tradeoffs and the policy measures that tend to influence performance.
• They suggested making the tool more flexible in allowing users to set targets. Since it currently allows for tradeoffs in funding decisions, users should also be able to adjust the target if they are unable to find a tradeoff solution that meets all targets. However, when asked how readily practitioners would be able to input their agency’s data to make these more sophisticated adjustments, participants thought the tool might not be widely used because it might be too heavy a data lift.

Mark shifted the conversation to other types of resources that participants would like to see made available:

• Case studies on successful adjustment strategies would be beneficial.
• References on the different datasets that are available and have been successfully used in monitoring progress toward a target.
• Tools for forecasting future performance.
• Since a large component of reporting on performance measures is managing public expectations, participants also noted a desire for successful outreach strategies related to missed targets.
• A framework to help an agency figure out what strategy it needs to implement after it has set its target.

KEY TAKEAWAYS

To close out the day, the participants reflected on their key takeaways, some of which included:

• The primary outcome of the peer exchange was a better understanding of the successes and challenges faced by transportation agencies in monitoring and adjusting targets. The exchange of information and strategies between agencies was of high benefit to all.
• It is not enough to just hear stakeholders; they need to know they have been heard. Effective outreach to the public and to decision-makers needs follow-through. Agencies should be able to report back how comments or ideas from previous engagements have been explored and/or included in the next iteration of targets.
• Simplicity is an effective communication tool. The easier a metric or set of targets is to digest, the more receptive stakeholders will be to it. Agencies do not need to report all the nuances in calculating a metric, or necessarily report every metric to every stakeholder. A tailored approach based on audience is most effective.


Transportation agencies largely have performance structures in place, but these structures alone do not guarantee progress on meeting performance targets.

The TRB National Cooperative Highway Research Program's NCHRP Web-Only Document 317: Developing a Guide for Managing Performance to Enhance Decision-Making provides guidance on how agencies can strengthen their gathering and use of feedback to inform actions and performance activities.

The document is supplemental to NCHRP Research Report 993: Managing Performance to Enhance Decision-Making: Making Targets Matter.
