Suggested Citation:"Chapter 3 - Testing the Tool Prototype." National Academies of Sciences, Engineering, and Medicine. 2015. Guide to Cross-Asset Resource Allocation and the Impact on Transportation System Performance. Washington, DC: The National Academies Press. doi: 10.17226/22177.


CHAPTER 3

Testing the Tool Prototype

3.1 Summary of Testing Opportunities

To develop an implementable framework that advances the state of the practice for performance-based cross-asset resource allocation, the tool prototype was developed and tested with multiple audiences at workshops across the country. Feedback gathered from participants helped the research team understand the ways practitioners could use the framework and tool prototype (detailed in Section 3.3).

3.1.1 Pre-Workshop Activities

Prior to the workshops, the research team conducted a short electronic survey of planned attendees to gauge participants' experience with, and interest in, cross-asset resource allocation. Results indicated that while transportation officials have considerable interest in an analytical tool, workshop attendees had little experience applying a comprehensive, data-driven approach. Most agencies noted that they struggle with the technical challenge of comparing data, hesitate to use a program that operates without a clearly defined mathematical framework, and are not sure how to overcome siloed (or stove-piped) decision making. Additionally, respondents expressed concern about the industry's current ability to develop meaningful performance measures, compare measurement results and projections, and quantify the trade-offs between different resource allocation strategies. At the same time, participants were not highly comfortable with their agencies' current allocation processes. Once the tool was explained, participants expressed strong interest in using it to communicate the consequences of different policies and strategies.

3.1.2 Workshops and Content

From April through September 2014, workshops and presentations were given in Florida, Arizona, New Jersey, Utah, New Mexico, and Indiana to present the framework and test the tool prototype.
The Miami (FL), Scottsdale (AZ), Albuquerque (NM), and Indianapolis (IN) workshop meetings were held in conjunction with Transportation Research Board and AASHTO conferences, providing the opportunity for a broad cross-section of DOT and other transportation professionals to test the tool and comment on the framework. Tests were also conducted for several state DOTs, including those of New Jersey, Utah, North Dakota, Illinois, California, Kansas, and Missouri.

All workshops focused on cross-asset investment planning and were designed to implement the project framework. Asset classes, performance measures, and general investment categories were developed from a sample data set provided by a member of the project panel. Participants were encouraged to explore the cross-asset resource allocation framework and tool

prototype and were invited to provide feedback on the usefulness of the tool in understanding and communicating the impacts of investment decisions on transportation system performance.

The agenda for the workshops included the following:

• Presentation of the NCHRP Project 08-91 framework and tool prototype, including a briefing on the research findings and the mathematics behind the tool.
• Value matrix exercise: Participants used role-play and scenarios to weigh goals and priorities. Breakout group activities were designed as table exercises, where each table worked together as an "agency" to fulfill goals and objectives through the development of a capital program. Each agency reported its final recommended program to all workshop participants.
• Real-time program optimization: Based on the weighting exercise and budgetary constraints, the tool's trade-off analysis capabilities were showcased. Scenario role-play was used to guide the weighting of performance measures and the program optimization and budgeting exercise.

Scenarios used to showcase the tool included:

1. A preservation scenario, where teams were encouraged to prioritize bridge and pavement preservation projects while still meeting moderate mobility goals;
2. An economic growth scenario, where teams were asked to meet political priorities for congestion reduction and job creation; and
3. A "confused legislature" scenario, where politicians gave the teams conflicting direction on where to spend money (congestion relief) while mandating performance targets for bridges.

In all scenarios, teams were given approximately half of the budget needed to meet all performance goals, so trade-offs were critical for final program recommendations, and not all targets could be achieved.
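The workshop exercise described above, in which teams weighted goals and then built a capital program under roughly half of the needed budget, can be sketched as a weighted-scoring selection. The weights, project names, costs, and impact values below are illustrative assumptions, not data from the report, and the greedy benefit-per-dollar heuristic is only one plausible way such a trade-off exercise could be implemented.

```python
# Hypothetical sketch of the workshop trade-off exercise. All figures
# are invented for illustration; the report does not publish the
# sample data set or the tool's internal optimization algorithm.

# Goal weights from a value matrix exercise (sum to 1.0).
weights = {"pavement": 0.35, "bridge": 0.35, "mobility": 0.30}

# Candidate projects: cost ($M) and estimated impact on each measure.
projects = [
    {"name": "Resurface I-10", "cost": 12.0,
     "impact": {"pavement": 8.0, "bridge": 0.0, "mobility": 2.0}},
    {"name": "Bridge deck rehab", "cost": 9.0,
     "impact": {"pavement": 0.0, "bridge": 9.0, "mobility": 1.0}},
    {"name": "Signal retiming", "cost": 2.0,
     "impact": {"pavement": 0.0, "bridge": 0.0, "mobility": 6.0}},
    {"name": "Interchange widening", "cost": 20.0,
     "impact": {"pavement": 1.0, "bridge": 2.0, "mobility": 9.0}},
]

def weighted_score(project):
    """Composite benefit: weighted sum of per-measure impacts."""
    return sum(weights[m] * v for m, v in project["impact"].items())

def build_program(projects, budget):
    """Greedy selection by benefit per dollar until the budget runs out."""
    ranked = sorted(projects, key=lambda p: weighted_score(p) / p["cost"],
                    reverse=True)
    program, remaining = [], budget
    for p in ranked:
        if p["cost"] <= remaining:
            program.append(p["name"])
            remaining -= p["cost"]
    return program

# Roughly half of the ~$43M total need, as in the workshop scenarios.
print(build_program(projects, budget=21.0))
```

A greedy heuristic keeps the sketch short; an exact 0/1 knapsack formulation (or an integer program) would guarantee the best-scoring program under the same budget constraint.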
3.2 Audience

Participants at the workshops and on-site tool prototype testing included NCHRP Project 08-91 Panel members, senior leaders and practitioners from DOTs, FHWA staff, representatives from MPOs, and consultants and other private-sector representatives (Table 2).

Table 2. Tool workshops and testing attendees.

Workshop at 10th Annual Asset Management Conference (Miami, Florida; April 28, 2014): Participants included the NCHRP Project 08-91 Panel as well as agency decision makers, data analysts, programmers, and external communications professionals and consultants; state DOTs, MPOs, and transit agencies were represented.

Utah DOT Tool Testing (Salt Lake City, Utah; June 16, 2014): Three workshops were conducted with the following attendees (25 in total): executive leadership, including the Utah DOT Executive Director and directors of program development, project development, transportation, operations, asset management, program finance, central preconstruction, and Utah DOT regional directors; engineers in areas including traffic operations, traffic management, traffic and safety, pavement management, pavement modeling, bridge modeling, bridge planning, and bridge design; and the six-member Asset Advisory Committee, including pavement, bridge, and finance directors as well as technical staff.

Illinois Tool Testing (Springfield, Illinois; August 26, 2014): A broad range of department leaders and specialists participated in the workshop, including directors, section chiefs, program development managers, bureau chiefs, unit chiefs, squad leaders, and planning analysts. Disciplines represented included bridges, cost estimating, land acquisition, location studies, operations, pavement management, planning, performance and cost support, and programming.

Kansas Tool Testing (Topeka, Kansas; September 8, 2014): Program managers, bureau chiefs, analysts, and engineers representing the following organizational units participated in the workshop: bridge, budget, construction and materials, pavement, performance measures, program and project management, and safety and technology.

Missouri Tool Testing (Jefferson City, Missouri; September 9, 2014): A small group of directors, administrators, specialists, and engineers participated in the workshop. Their areas of expertise included organizational performance, planning, and system analysis.

North Dakota Tool Testing (Bismarck, North Dakota; August 18, 2014): Participants included division directors; planning, programming, and asset management leaders; and bridge specialists. District engineers participated via webinar conferencing.

Workshop at AASHTO SCOP/SCOPM Conference (Scottsdale, Arizona; June 20, 2014): Participants included agency decision makers and data analysts with expertise in some or all of the following areas: pavement, bridge, safety, mobility, transit, programming, construction, and operations; FHWA staff also attended to provide a national perspective with regard to MAP-21.
New Jersey Tool Testing (Trenton, New Jersey; June 25, 2014): A workshop was conducted with the following attendees (seven in total): executive leadership, including the directors of statewide planning, statewide strategies, and capital investment planning and development, and the assistant commissioner for capital investment planning and grant administration; and engineers in areas including pavement and drainage management, project planning, and project management.

Meeting at Western Association of State Highway and Transportation Officials (WASHTO) Conference (Albuquerque, New Mexico; July 15, 2014): Participants included agency decision makers and data analysts from the Western state DOTs and MPOs; FHWA staff also attended to provide a national perspective with regard to TAMP development, where a refocus on starting with bridges and pavements first was recommended.

Meeting at Mid America Association of State Transportation Officials (MAASTO) Conference (Indianapolis, Indiana; July 30, 2014): Participants included agency decision makers and data analysts from the Midwestern state DOTs and MPOs. Participants had expertise in operations and maintenance, pavement, bridge, financial planning, and programming.

3.3 Findings

Both the cross-asset resource allocation framework and the tool prototype were well received by workshop and testing participants. Workshop attendees were active in their breakout group exercises, and the discussion was both lively and informative. Overall, participants indicated that there was significant value in the technical analysis capabilities of the tool prototype as well as the ability to apply it toward supporting and informing decision-maker and stakeholder discussions regarding performance targets, measures, and investment strategies.

The following are highlights of the suggested or possible uses of the tool prototype based on comments received from the participants. These are further explored in Section 4.2.

• Identifying appropriate performance measures: Most participants focused on the performance metrics established by MAP-21 as the baseline for all measures nationwide. However, individual states may want to layer on their own set of metrics, and with multiple metrics layered on top of one another, finding a common system for evaluation poses challenges. Participants doubted that such a common set of standards could realistically be agreed on, giving several examples of metrics that could be at odds across states and systems. For instance, some technical experts struggled with the use of overall condition indices, since these ratings can mask the true nature of disaggregated condition metrics. The tool prototype can help frame these discussions by building consensus among stakeholders.
• Establishing investment program areas: Participants indicated that it may be helpful to categorize investment programs into subclasses. Programs focused on asset management were seen as distinct from those focused on operations or capital projects. Specific regional differences in investment programs were also acknowledged: topography, population, and primary road types can differ by region, and participants suggested that investment programs may need to be aligned in regional categories.
• Evaluating data availability and management systems: Workshop participants noted several practical concerns related to data collection. The ability to generate, gather, store, and analyze data varied greatly across jurisdictions and agencies.
While most collect some data metrics, formats differed, making both intra-agency and inter-agency cross-asset comparisons difficult. A paradigm shift toward collecting post-implementation performance data, which is not normally gathered, is also important for improving future impact assessments. Practical technical issues were identified as barriers to linking such data systems, and interest was high in a system to automate linkages between management systems and the tool prototype.
• Facilitating values discussions through weighting: While deriving weights, participants noted that agency goals and objectives are well publicized and instilled across all levels of the organization, yet there is still wide latitude in interpreting how those values translate to a program. By sitting down together and talking through the importance of one measure over another, participants had revealing conversations about defining performance preferences more closely (e.g., which is more important, the structural health of the pavement or the ride quality that users experience? Are pavements more important than bridges because of the sheer magnitude of investment, or does the larger risk associated with bridges dominate?). Some participants feared that these conversations could favor stronger personalities, but they still appreciated the opportunity to think beyond their silos and make a case for their performance areas. In practice, such weighting discussions could be conducted via a Delphi method to protect against internal biases.
• Prioritizing projects from a system perspective: Many workshop participants noted that their agency has siloed asset management systems in place, particularly for pavements and bridges. They pointed out that these systems could be integrated by using the tool prototype, and decision making could likely be improved through a more holistic approach.
Additionally, because various groups within the agency would be required to participate in broader prioritization processes, a better organizational understanding would likely develop. In this way, agency management systems would generate lists of projects, and the tool prototype would be used to select the best projects across all management systems, which could then be integrated into the STIP or a midterm capital program (for example, a 10-year program). Participants found it encouraging that combining top-down and bottom-up approaches in this way could more directly link common management system outputs.

• Analyzing investment trade-offs: Participants suggested that this tool is not so much a "cross-asset" tool as a "cross-investment" tool. The tool could be expanded to look across modes and can consider performance with regard to operations (e.g., congestion) rather than just physical infrastructure. The ability to quickly evaluate trade-offs among investment areas was found to be powerful in supporting decision makers in finding the right mix of investments.
• Making a case for increased flexibility: Because the tool prototype reflects real-world constraints, participants appreciated being able to run it with and without different policies so as to make a case for additional discretion in decision making (e.g., if the governor says I must do Project X, what are the impacts on system performance? If we could reallocate dedicated funds, what performance benefits could be realized?).

While many saw the usefulness of the tool, there was also genuine concern about implementation. Having developed and tested the tool prototype with audiences across the country, the research team identified the following possible refinements, based on participant feedback, that would help ensure the tool prototype best meets agency needs. The tool prototype as developed cannot accommodate all of these refinements, but any add-on or future deployment might consider the following:

• Simplified interface: A few refinements were suggested to make the mechanics of using the tool prototype more user-friendly. For example, the user currently weights performance measures against each other using a numerical, nine-point comparative scale. Attendees suggested, several times, that a sliding bar or scale between measures might be easier to use for this task and would avoid confusion about how to value relative priorities.
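As an illustration of how a nine-point pairwise comparison scale can be turned into measure weights, the sketch below applies the geometric-mean approximation from the Analytic Hierarchy Process, the family of methods such scales are usually drawn from. The report does not specify the tool prototype's internal weighting algorithm, and the comparison values here are hypothetical.

```python
# Hypothetical AHP-style weighting sketch. comp[i][j] records how
# strongly measure i is preferred over measure j on a 1-9 scale;
# reciprocals fill the lower triangle. Values are invented.
import math

measures = ["pavement", "bridge", "mobility"]

comp = [
    [1.0, 2.0, 4.0],   # pavement vs (pavement, bridge, mobility)
    [1/2, 1.0, 3.0],   # bridge
    [1/4, 1/3, 1.0],   # mobility
]

def ahp_weights(matrix):
    """Geometric mean of each row, normalized to sum to 1."""
    gm = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

w = ahp_weights(comp)
for m, wt in zip(measures, w):
    print(f"{m}: {wt:.3f}")
```

The resulting weights could feed directly into a composite scoring step; a full AHP treatment would also check the comparison matrix for consistency before accepting the weights.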
• Performance measures/outcomes: The tool prototype's output currently provides the values of performance measures that result from implementing certain projects or portfolios of projects. Some attendees expressed a desire to see trend lines, not just points in time, commenting that it would be useful to know whether asset conditions are improving or worsening. It was also noted that the tool should be able to adapt to varying performance targets by functional class. This has been incorporated into the framework but is not reflected in the sample data set provided with the tool.
• Scenario comparison: Participants noted a desire to save each scenario/run within the tool so that subsequent runs can be compared with one another. This accommodation has been built into the tool prototype.

Additional discussions at the workshop meetings focused on risk-based planning, the ability to incorporate economic impacts, longer-term analyses, and data needs. One participant asked what capacity the tool prototype would have to support risk and sensitivity analysis, particularly since MAP-21 requires agencies to factor uncertainty into planning and decision-making processes on the NHS. Risk has been incorporated into the tool by including standard deviations around the budget and performance impacts, which allows users to make decisions with confidence given the likelihood of various outcomes (Figure 12).

After the tool prototype was demonstrated with a preloaded measure for number of jobs created, participants pointed out that economic models (e.g., IMPLAN and REMI) already exist that could supplement traditional silo analysis. Related to economic value, one participant raised the question of how the tool prototype could be used to show the impact of delaying a project (i.e., missing the window of opportunity) with regard to both cost and performance.
This can be accommodated in the tool by defining lagging performance measures such as a life-cycle cost metric. If fully integrated with a management system, the tool prototype could be linked to compare the diminishing value between project alternatives (e.g., rehabilitation versus patch-fix) at any point in time. Sensitivity testing could then be conducted by iterating the project timing or activity type and evaluating the corresponding impacts on performance.
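The risk treatment described above, standard deviations around budget and performance impacts, lends itself to a Monte Carlo sketch like the following. All costs, benefits, and thresholds are hypothetical, and the report does not state that the tool uses simulation; this is only one plausible way to turn those standard deviations into outcome likelihoods.

```python
# Hypothetical Monte Carlo sketch: given a standard deviation around
# each project's cost and performance impact, estimate the likelihood
# that a program stays within budget and meets a performance target.
import random

random.seed(42)

# (expected cost $M, cost std dev, expected benefit, benefit std dev)
program = [
    (12.0, 1.5, 3.4, 0.4),
    (9.0, 1.0, 3.5, 0.5),
    (2.0, 0.3, 1.8, 0.2),
]
BUDGET, TARGET, TRIALS = 25.0, 7.5, 10_000

within_budget = meets_target = 0
for _ in range(TRIALS):
    cost = sum(random.gauss(mu, sd) for mu, sd, _, _ in program)
    benefit = sum(random.gauss(mu, sd) for _, _, mu, sd in program)
    within_budget += cost <= BUDGET
    meets_target += benefit >= TARGET

print(f"P(within budget) = {within_budget / TRIALS:.2f}")
print(f"P(meets target)  = {meets_target / TRIALS:.2f}")
```

Sensitivity testing of project timing or activity type, as described above, would amount to repeating such runs while varying one input at a time and comparing the resulting probabilities.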

In anticipation of implementing the framework with long-range transportation plans, several participants asked how the tool prototype could be adapted to predict performance over time. While this was outside the scope of the project, the tool can be analyzed from the top down or run repeatedly on a year-by-year basis, with manual updating based on which projects were selected each year. To automate this process, a linkage to management systems is suggested. With access to deterioration predictions, life-cycle cost evaluations, and project alternatives, the tool could be enhanced to support long-range optimization, including the ability to set minimum performance levels throughout the planning horizon.

Discussion at workshops and testing also included questions on what performance measures and allocation areas should be included in the tool prototype and on the availability of data to support those measures. The sample data set used in the workshops included safety data, for example, which not all DOTs consider a stand-alone allocation area. The flexibility of the tool allows for customization so that states and agencies can use whichever areas and performance measures work best for their unique circumstances and decision-support needs. It was also noted that the tool prototype depends on data being entered for each project, including the project's impact on various performance measures. Concern was expressed that this type of information does not exist in many agencies, and the tool is only as good as the data entered into it. The research team acknowledged that executive leadership will have to be convinced of the value and reliability of such data collection and noted that as states and MPOs continue to expand their data sets, many mandated by MAP-21, information will become more readily available for use within the tool.
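The year-by-year approach mentioned above for long-range analysis, re-running the selection annually and removing the projects already programmed, might look like the following minimal sketch. Project names, costs, scores, and the annual budget are invented for illustration.

```python
# Hypothetical year-by-year program loop: each year, pick the best
# benefit-per-dollar projects that fit the annual budget, then drop
# them from the candidate pool before the next year's run.

projects = {"A": 10.0, "B": 6.0, "C": 8.0, "D": 5.0}  # cost ($M)
scores = {"A": 4.0, "B": 3.0, "C": 2.5, "D": 1.0}     # weighted benefit

annual_budget = 12.0
plan = {}
for year in range(2025, 2028):
    remaining = annual_budget
    selected = []
    for name in sorted(projects, key=lambda n: scores[n] / projects[n],
                       reverse=True):
        if projects[name] <= remaining:
            selected.append(name)
            remaining -= projects[name]
    for name in selected:
        del projects[name]   # programmed; exclude from future years
    plan[year] = selected

print(plan)
```

A management-system linkage would replace the static scores with updated deterioration and life-cycle cost predictions each year, which is what the suggested automation would add.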
Of course, the application of the framework and tool prototype depends on an agency's organizational structure and performance management maturity. New Jersey, for example, is unique in that it has more flexibility with state and toll dollars, so agency leadership is supportive of the concept of allocating funds that are not pre-dedicated to a specific silo. Utah has expressed interest in an enterprise solution that can accommodate strategic planning, long-range planning, and STIP development. Many states expressed interest in using the top-down tool functionality as a first step toward implementation while they work to understand the benefits of a project across performance types; for example, pavement projects may have safety benefits that are not captured in current data collection processes.


TRB's National Cooperative Highway Research Program (NCHRP) Report 806: Guide to Cross-Asset Resource Allocation and the Impact on Transportation System Performance provides guidance and a spreadsheet tool to help managers apply data-driven techniques to project prioritization, program development, scenario analysis, and target setting. The tool and guidebook are intended to assist managers with analyzing and communicating the performance impacts of investment decisions.

The software is available online only and can be downloaded from TRB's website as an ISO image.


