
Analytical Tools for Asset Management (2005)

Chapter: Section 6 - Testing Process

Suggested Citation:"Section 6 - Testing Process." National Academies of Sciences, Engineering, and Medicine. 2005. Analytical Tools for Asset Management. Washington, DC: The National Academies Press. doi: 10.17226/13851.


SECTION 6  TESTING PROCESS

6.1 INITIAL TESTING OF PROTOTYPES

After prototype AssetManager tools were developed, the research team tested them. For AssetManager NT, a sample scenario was developed using runs from the Pontis bridge management system and the Deighton dTIMS system (provided by the Vermont Agency of Transportation). A sample common performance measure across pavements and bridges (deficiency cost) was calculated for each run and year based on the cost to replace deficient pavements and bridges using average unit costs. The input files were developed by entering management system results (as well as the derived results on deficiency costs) into two Excel worksheets and saving each to CSV format. A system metrics file was developed using data from a sample HPMS file and queries of a sample Pontis database. A scenario was run, and views were created using the what-if tool. The sample scenario dataset "Sample1" provided with AssetManager NT contains the results of this process. AssetManager PT was initially tested using a small set of fabricated data.

The research team revised both tools as a result of this initial prototype testing period. Subsequently, a formal testing process was undertaken at two panel member states: the Montana Department of Transportation (MDT) and the New York State Department of Transportation (NYSDOT). The research team prepared a test plan, including a series of case-oriented test scripts covering each of the steps required to use the tools.

6.2 MDT FIELD TESTING

Montana is a rural state with a land area of 147,000 square miles (fourth largest in the nation) and a population of roughly 900,000 (seventh smallest in the nation). The Montana Department of Transportation is responsible for maintaining more than 10,800 miles of highway and about 2,100 bridges.
To provide the Montana Transportation Commission with guidance on allocation of available transportation funds, MDT established the Performance Programming Process (P3) in 2002. This process develops a performance-based funding distribution plan for systems (e.g., Interstate, NHS, and primary), districts, and types of work (e.g., roadway reconstruction, rehabilitation, resurfacing). Investments for bridges and safety work also are linked to performance objectives. Performance measures have been established for pavement ride quality, bridge condition (e.g., the number of functionally obsolete or structurally deficient bridges), and safety (e.g., the number of correctable crash sites funded for improvement).

The P3 involves a series of tradeoff analyses using MDT's pavement, bridge, congestion, and safety management systems. These analyses compare investment levels to performance outcomes and seek the distribution of funds that yields the best overall performance. The results of P3 do not determine which specific projects are selected; they determine only the distribution of funding to districts and work categories and the overall system performance expectations associated with that distribution.

The AssetManager tools were tested to explore their potential value within the P3 as well as within related efforts to assess needs, screen project nominations, and relate candidate programs of projects to established work mix and performance objectives. Field testing took place February 23 through 25, 2004. During the site visit, the research team followed test scripts for both tools and recorded MDT staff comments and suggestions.

AssetManager NT

Data Preparation

MDT ran the PMS 10 times using the previous year's data set. Budget levels between $50 million and $400 million annually were run for years 2008 through 2012. (Budget levels in 2003 through 2007 were constant in all 10 runs, already reflecting programmed projects.) Each run took 10 to 15 minutes.
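The run setup described above can be sketched as follows. The exact budget levels MDT tested are not listed in this report, so the evenly spaced levels and the fixed 2003 through 2007 amounts below are illustrative assumptions only:

```python
def run_budgets(fixed_first_five, levels):
    """Build the annual budget vector (years 2003-2012) for each PMS run:
    the first 5 years are fixed, reflecting already-programmed projects,
    and the last 5 years carry the budget level being tested."""
    return {level: list(fixed_first_five) + [level] * 5 for level in levels}

# Assumed programmed spending for 2003-2007, in $M per year (hypothetical).
fixed = [210] * 5

# Ten budget levels spread between $50M and $400M (even spacing assumed).
levels = [50 + i * (400 - 50) / 9 for i in range(10)]

schedules = run_budgets(fixed, levels)
```

Each of the ten resulting schedules would drive one PMS run, holding the programmed years constant so that only the 2008 through 2012 investment level varies across runs.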
The research team developed a set of Microsoft Access tables and queries to automate the process of loading spreadsheet outputs from MDT's PMS into AssetManager NT. These queries were used to produce the necessary input files. The research team obtained a copy of the MDT Pontis database and added an option to the Pontis robot that fixed the first 5 years of each run based on programmed projects, with variation in budget levels starting in the 6th year. The Pontis robot was then run on the MDT Pontis database to create the system metrics and bridge scenario input files. The bridge metrics information was then merged with the pavement metrics information into a single system metrics file.
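The final merge step can be sketched as below. This is a sketch only: the real work was done with Access queries, and the actual AssetManager NT system metrics schema is not reproduced in this report, so the column names here are hypothetical.

```python
import csv
import io

def merge_system_metrics(pavement_rows, bridge_rows):
    """Stack pavement and bridge metric rows into one table,
    tagging each row with its asset type (column names assumed)."""
    merged = [{"asset_type": "pavement", **row} for row in pavement_rows]
    merged += [{"asset_type": "bridge", **row} for row in bridge_rows]
    return merged

# Hypothetical per-district metric rows from the two source systems.
pavement = [{"district": "1", "network": "NHS", "lane_miles": "512"}]
bridge = [{"district": "1", "network": "NHS", "deck_area_sqft": "83000"}]

rows = merge_system_metrics(pavement, bridge)

# Write the combined rows out as a single CSV system metrics file.
fields = ["asset_type", "district", "network", "lane_miles", "deck_area_sqft"]
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=fields, restval="")
writer.writeheader()
writer.writerows(rows)
```

The `restval=""` argument fills columns that apply to only one asset type (e.g., lane-miles for bridges) with blanks, keeping a single consistent header across both sources.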

The research team used MDT's standard geographic and network categories and performance measures to set up a configuration in AssetManager NT and then created an initial scenario with the pavement and bridge data. Researchers ran through the test scripts and found and corrected minor bugs in the configuration screens.

In the initial scenario, the research team observed that, when the budget is fixed for the first 5 years and the what-if analysis focuses on the last 5 years, the user still needs to input the average annual budget for the entire 10-year period. Because not all users may understand this need, a second scenario was created in which the first year of the scenario was set to 2008 (instead of 2003) and only the data for 2008 through 2012 were included. This scenario was useful for looking at annual budget levels over the 5-year period of interest; however, it did not allow the entire trend lines from the present to be seen.

On site, MDT staff conducted an additional set of PMS runs (with a different distribution of work across resurfacing, rehabilitation, and reconstruction) and created an additional NT scenario.

Testing Results

The research team demonstrated each of the NT views and walked MDT staff through using each screen. A few bugs were identified and logged. Two possible applications of the NT tool were identified:

• To facilitate the investment versus performance analysis conducted for MDT's P3 and
• To estimate the investment required over a 10-year period to achieve stated performance objectives for the biennial needs analyses.

Staff felt that the tool would definitely be of value for the biennial needs analysis. They felt that the tool would be of some, although limited, value for the P3 analysis because it does not allow scenarios for different work-type mixes to be tested, which is a key requirement of P3 analysis.
However, analysis of work-type mixes is best accomplished within the pavement management system itself rather than in AssetManager NT. In the end, MDT staff felt that, although AssetManager NT would not dramatically cut down the amount of effort required for P3 (such a reduction would require some enhancements to their pavement management system), it could help at the beginning of the process to estimate how much investment would be required for individual districts and network categories to meet the performance targets. Visualizing how sensitive different performance measures are to varying investment levels could potentially be quite helpful. Staff could not be sure how beneficial the tool would be until they actually used it, but they felt it would be worth a try.

Staff made the following comments:

• This tool might be useful for P3 analysis if it could help to reduce the current number of PMS runs that must be done by providing a way to quickly see the impacts of different budget allocations on performance.
• Given that the main challenge in P3 is to allocate a fixed budget across work types, networks, and districts in the best possible way, this tool would be more helpful if it allowed varying allocations of a given budget rather than focusing on varying the budget level.
• This tool appears to be ideally suited for the needs analysis that is performed at MDT every 2 years, which involves estimating the amount of funds needed over a 10-year period to achieve certain performance objectives.
• In general, the process to compare two different sets of resource allocations is awkward (go to the resource allocation screen, set allocations, close that screen, go to the multibudgets view, look at results, close that screen, return to resource allocations, reset values, close, return to multibudgets, compare, and so on). This process is not convenient to use. For this tool to really be used to compare allocations, a split screen is needed so that the user can adjust one side and see the other change. The user also needs an easy way to generate a tabular report of results for different allocations. [These comments were later addressed by adding the allocation view and incorporating the resource allocation settings window within the dashboard view.]
• It is hard to draw definitive conclusions about benefits without actually using the tool as part of the P3 or needs analysis process (i.e., putting it to the real test).

MDT staff suggested several enhancements as a result of the testing process:

• Consider modifying the system to handle the case in which the first few years of a program are fixed (with programmed projects) and variations need to be tested for the last set of years only, but the entire performance trend line still needs to be seen. This case is an extension of the "base year" concept to multiple years (there are expenditure levels in those years; they just happen to be fixed).
• In the budget view, when switching selections on the first tab of the setup, enable automatic population of the budget ranges with ntiles (where n is the number of budgets selected) if any of the numbers for budgets do not fall within the ranges specified. [This comment was later addressed by adding an auto fill button on this screen.]
• Put the name of the scenario on the views.
• For the budget view, include in the report a tabular view of the data shown (i.e., budget level, performance measure, value, year) that is exportable to a spreadsheet. Especially with float-type indicators, it is not easy to read values given the axis scaling and labeling.

• Similarly, for the dashboard view, have a tabular report (also exportable to a spreadsheet) with performance measure, value, network category, geographic category, year, and annual budget.
• On the budget levels tab of the budgeting setup screen, set the tab order to navigate across budget levels directly and not through the colors, thicknesses, and other settings (which are relatively rarely changed). [This comment was later addressed.]
• On the resource allocation screen, have the network categories, geographic categories, and asset types appear in the same order as they were entered in the configuration. [This comment was later addressed.]

Summary Evaluation

MDT staff were asked to rate the tools on a scale from 1 to 10, where 10 is the most positive rating. Staff assigned the following summary ratings to AssetManager NT:

• Potential value of functionality: 7,
• Ease of data preparation: 8–9,
• User interface: 6–7 (staff commented that many "clicks" were needed to accomplish a given task), and
• Reports/Outputs: 8–9.

AssetManager PT

Data Preparation

AssetManager PT was set up with MDT's network, geographic, and project type categories as well as its performance measures. Two data sets were created: the first was assembled on site using a set of proposed pavement preservation projects that were being screened; preparation of the second was begun on site and completed after the visit. The second data set included a more complete set of capital projects from the tentative construction program.

Testing Results

The research team demonstrated each of the AssetManager PT screens and walked MDT staff through using each screen. Bugs were identified and logged.

The applications for the PT tool were initially less clear than those for the NT tool. The research team discussed how the purpose of the tool was to provide a better connection between the network-level analysis done for P3 and the projects that are actually selected.
Staff pointed out that decision-making about specific projects is highly decentralized in Montana. The PT tool, in theory, could be used by a district to help determine which projects to nominate for the program in a given year, but staff were skeptical that districts would perceive the tool as adding value for this process. In the end, the decision was made to focus on how planning staff could use AssetManager PT to better understand the work composition and likely performance implications of the projects that were being nominated and selected. Two specific applications were suggested:

• Screening pavement preservation projects: to help planning staff recommend which pavement preservation projects should be advanced into the program (at the time of the testing process, MDT was screening pavement preservation nominations for 2006) and
• Analyzing the work distribution and performance implications of nominations: to compare the project mix to the P3 recommendations and to explore the likely performance impacts of nominated projects. For this analysis, a "plug" value for the pavement preservation category would be used rather than values for individual pavement preservation projects, because these projects are on a shorter development cycle than other projects.

Data sets were not readily available for loading into the PT tool; multiple data sources (e.g., TCP, nominations, PMS, BMS) needed to be merged. This process could be at least partially automated. In the end, staff felt that if there were a "cookbook" procedure for loading the data, the process would not be overly burdensome. The following issues that occurred with the MDT sources are likely to occur elsewhere as well:

• Project data used for capital programming are not consistently and accurately tied to location referencing and/or cannot be conveniently linked to condition/performance data from the management systems. This gap makes it difficult to derive "before" values of performance measures for projects when the major data source for projects is the capital program.
• Different types of projects are on different time cycles from a budgeting perspective. Smaller preservation projects (e.g., resurfacing projects) are often treated as "plug" line items without specific locations assigned until 1 to 2 years before implementation. Programming decisions about larger capital projects are made further in advance. Obviously, tradeoffs across project categories are not possible when the decisions for each category are made at different points in time.
• Project data used for capital programming may not be at a sufficiently disaggregated level for direct input into AssetManager PT. For example, a single project may include multiple types of work and may span multiple types of assets and geographic and/or network categories. These projects must be split into their component parts and treated as project packages for purposes of AssetManager PT.

• Construction projects are frequently implemented in phases, possibly over a longer time span than the PT tool is intended to cover. In this situation, judgment must be exercised to determine what portion of a project's impacts should be included when only a single phase of the project is being included in the tool.

Because AssetManager PT does not predict deterioration, "before" and "after" average pavement and bridge conditions must be analyzed using the current condition as the "before" case, even though the projects being considered are to be implemented several years out. Therefore, the predicted "after" condition from AssetManager PT cannot readily be compared to a target or PMS projection for the future year when the set of projects being analyzed will actually be completed. The only solution would be to derive projected conditions from PMS simulation results. Unfortunately, obtaining this information would have required, at minimum, a new report or query capability to be added to the PMS, which was not feasible to do in an expedient fashion.

MDT staff suggested several enhancements as a result of the testing process:

• On the program analysis screen, add the capability to deal with only a subset of projects in the automated selection process.
• Add a filtering capability on reports to allow results to be viewed by geographic and network categories. [This suggestion was implemented for the final version of the tool.]
• On the performance report, add summary lines to see overall performance by network category (across geographic categories), by geographic category (across network categories), and then the total across all categories.
• On the performance report, add the capability to compare different scenario results; currently, comparison of two different scenarios is awkward. [This suggestion was implemented for the final version of the tool.]
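One data-preparation issue noted above is that a single programmed project may span multiple asset types or categories and must be split into component parts treated as a project package. A minimal sketch of such a split follows; the field names, project identifier, and cost shares are hypothetical, and AssetManager PT's actual input layout is not reproduced here:

```python
def split_project(project, components):
    """Split one programmed project into per-category child records,
    allocating its cost by the given shares and keeping the parts
    grouped under a common package identifier."""
    children = []
    for i, (asset_type, network, share) in enumerate(components, start=1):
        children.append({
            "project_id": f"{project['id']}-{i}",  # derived child id
            "package": project["id"],              # ties the parts together
            "asset_type": asset_type,
            "network": network,
            "cost": round(project["cost"] * share, 2),
        })
    return children

# A hypothetical project covering both pavement and bridge work.
job = {"id": "STPP-12", "cost": 4_000_000}
parts = split_project(job, [("pavement", "NHS", 0.7), ("bridge", "NHS", 0.3)])
```

Keeping the shared package identifier lets the parts be selected or deselected together, which is the "project package" behavior the bullet above describes.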
Summary Evaluation

Staff assigned the following summary evaluation ratings to AssetManager PT:

• Potential value of functionality: 6–7,
• Ease of data preparation: 8–9 (if a "cookbook" were available), and
• Reports/Outputs: 5–6 (higher if reports included filtering capabilities and summaries).

6.3 NYSDOT FIELD TESTING

New York is a diverse state with a land area of 47,376 square miles and a population of roughly 19 million. The state ranks third in the nation in both total population and urban population and first in the nation in the number of public transit passengers. NYSDOT is responsible for 15,000 miles of highway and roughly 7,500 bridges. Total vehicle-miles of travel in New York State approaches 135 billion, of which 45 percent is on the highway network administered by NYSDOT (14).

NYSDOT's asset management efforts have focused on using the capital program update process as an integrating mechanism across the various "stovepipe" programs for pavement, bridge, congestion/mobility, and safety. Management systems have been developed in-house to provide the capability to simulate needs and relate investment to performance. NYSDOT's program update process makes use of these management systems to establish performance targets for each of these program areas; the regions propose programs of projects to meet the performance targets. Programs are then centrally reviewed to ensure consistency with performance targets as well as to look horizontally across the different program areas.

New York has a program support system/project management information system (PSS/PMIS) in place that tracks candidate projects throughout their life cycles and balances alternate programs against funding sources. The capability to perform what-if analysis to determine the financial impacts of different sets of projects is handled by interfaces with a bridge needs forecasting model and a pavement needs forecasting model.
At the time the testing took place, NYSDOT had developed a prototype of an integrated asset management system that uses a common measure, "excess user costs," for comparing alternative investments and making tradeoffs across different packages of diverse project types. Excess user costs are defined as the incremental costs incurred by users as a result of a facility being in less than ideal operating condition. Three cost components are considered: delay costs (for passengers and freight), accident costs, and vehicle operating costs. This system can be used to compare candidate project proposals based on benefit/cost, where benefits are defined as the decrease in excess user costs attributable to an investment.

Field testing at NYSDOT began in late December 2003 and concluded with a site visit by the research team on March 22, 2004.

AssetManager NT

Data Preparation

NYSDOT staff prepared two data sets for AssetManager NT using their in-house pavement and bridge analysis systems (PNAM and BNAM). The first data set included statewide results (aggregated for all regions and network categories) from five runs using different average annual budget levels over the 10-year period between 1993 and 2003. The second data set included results for four regions and two networks (on and off the NHS). Data for four budget levels were provided. Staff reported that they spent about 30 staff-hours preparing the NT data, including running the analysis systems. However, this process could be further automated to reduce data preparation time if the tool were to be used on a regular basis.

The research team created the initial configuration files for these data sets, loaded the input files, created scenarios, and sent NYSDOT the NT tool with the scenarios for testing.

The field tests used a common set of performance measures for AssetManager NT and PT, which included excess user cost, pavement condition rating, percentage of poor and fair lane-miles, bridge condition rating, and the number of deficient bridges by number and percentage of deck area. In addition, several output measures were used, including the lane-miles of pavement rehabilitation and reconstruction, the lane-miles of pavement preventive maintenance, the number of bridges rehabilitated or replaced, and (for PT only) the number of bridges with maintenance work.

Testing Results

The research team demonstrated each of the AssetManager NT views. Overall, the reaction was very positive; staff felt that this tool could be very useful in exploring investment tradeoffs as part of the development process for the 5-year plan. Staff also expressed interest in exploring how the tool could be used to look at tradeoffs across corridors as well as across assets. Currently, scenario analyses are run by request. AssetManager NT could be used to run and package multiple scenarios for executives so that they could explore variations without having to request additional runs.

NYSDOT staff made the following comments:

• The NYSDOT analysis tools predict results by multiyear funding periods, not annually (interpolation was used to produce annual results). The NT tool can also be used in this manner, which would reduce data preparation requirements.
  Tool documentation should be sure to say that the analysis periods need not be single years.
• NYSDOT's analysis tools also allow different project prioritization criteria to be entered (e.g., worst-first versus minimum life-cycle cost). Different NT scenarios could be created for sets of runs using different criteria to provide a tool for visualizing the performance differences.
• A help file is needed. [A help file was developed in conjunction with the documentation.]

NYSDOT staff suggested the following enhancements:

• Add an optimization feature to find the resource allocation across a set of asset types, geographic categories, and network categories that minimizes or maximizes a single designated performance measure (in NYSDOT's case, minimizing excess user costs).
• Improve the capability to compare results across different NT scenario files.
• For the cross-criteria view, rather than having a slider for the year, have each pane show a trend graph over the scenario time horizon for the selected performance measure. [This comment was later addressed by adding the allocation view.]
• Provide an option to fix an overall budget level and then see how a performance measure changes as the allocation of resources changes across assets (and potentially geographic and network categories as well). [This comment was later addressed by adding the allocation view.]
• Add validation to the create scenario feature to check whether different numbers of runs are entered per asset/geography/network combination. If so, the process should terminate with a message to the user. Currently, the scenario is created but the results are not valid. [Validation was later added in response to this comment; errors are written to a log file.]
• Add the capability to print or export tabular results as opposed to just the graphical views.
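The excess user cost measure and the benefit/cost comparison described earlier in this section can be sketched as follows. The dollar amounts are illustrative placeholders, not NYSDOT figures:

```python
def excess_user_cost(delay, accident, vehicle_operating):
    """Excess user cost: the incremental annual cost incurred by users
    of a facility in less-than-ideal operating condition, summed over
    the three components named in the text."""
    return delay + accident + vehicle_operating

def benefit_cost_ratio(euc_before, euc_after, investment):
    """Benefit is defined as the decrease in excess user costs
    attributable to the investment."""
    return (euc_before - euc_after) / investment

# Illustrative before/after values for one candidate project ($/year).
before = excess_user_cost(delay=1_200_000, accident=400_000,
                          vehicle_operating=150_000)
after = excess_user_cost(delay=700_000, accident=350_000,
                         vehicle_operating=100_000)

ratio = benefit_cost_ratio(before, after, investment=2_000_000)
```

Candidate projects could then be ranked by this ratio, which is the comparison NYSDOT's prototype system is described as supporting.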
Summary Evaluation

Staff assigned the following summary assessment ratings to AssetManager NT:

• Potential value of functionality: 7 (9 if an optimization feature were provided),
• Ease of data preparation: 6 (although producing inputs for multiyear periods would have been easier),
• User interface: 7, and
• Reports/Outputs: 8 (higher if reports included filtering capabilities and summaries).

AssetManager PT

Data Preparation

NYSDOT staff prepared two data sets for the PT tool using information queried from New York's PSS. The PSS stores approved candidate projects, and automated procedures are in place to retrieve system performance information based on candidate project location references. Both PT data sets included 473 projects scheduled for years 2009 and 2010. In the first data set, the projects were identified by region (A or B), and a single budget category was used for all projects. In the second data set, the projects were identified by region (A or B) and network (Interstate, non-Interstate NHS, other state system, or other touring route), and five different budget categories (maintenance, preservation, mobility, safety, and other) were used. Approximately 32 staff-hours, mostly for data cleaning, were required to produce a PT data set. Staff felt this time could be reduced to about 10 staff-hours through further automation of the process.

Testing Results

Because NYSDOT staff had logged many hours working with multiple versions of the tool before the visit, the research team did not run any tests on site. Rather, researchers reviewed each screen with the staff members and asked them for comments and suggestions. NYSDOT staff made the following comments:

• The PT tool provides the capability to explore performance results of different project mixes, which is similar to a planned enhancement to New York's PSS.
• The PT tool also was useful for looking at program balance, i.e., the mix of work by category (e.g., safety, mobility, pavement preservation).
• Their work with the PT tool will likely shape requirements for the PSS tool enhancements (specifically, the capability to represent changes in a system measure as the result of a project).
• The ability to set work targets only for those types of work that are defined by a physical system measure seemed limiting at first. For example, NYSDOT specified system measures for pavement and bridge preservation projects but not for safety and mobility projects. For these latter types of projects, NYSDOT would be more likely to specify a performance measure target (e.g., reduction in excess user costs). Performance targets may be set on the baseline performance screen.

The deterioration issue that was raised in Montana also was discussed in New York: because the PT tool's most likely use is to look at projects being considered for implementation at least 3 years into the future, projected conditions rather than current baseline conditions need to be reflected in the tool if the performance projections are to be compared to targets for a future year. This need increases the complexity of data preparation.
However, even without considering deterioration, the tool is still useful for comparing different project mixes based on relative performance results.

NYSDOT staff suggested that AssetManager PT could be enhanced by adding an option, during data entry of system measures and baseline performance indicators, to have the system calculate aggregate statistics based on entries for individual geographic/network category combinations.

Summary Evaluation

Staff assigned the following summary ratings to AssetManager PT:

• Potential value of functionality: 10,
• Ease of data preparation: 10, and
• Reports/Outputs: 10.


TRB’s National Cooperative Highway Research Program (NCHRP) Report 545: Analytical Tools for Asset Management examines two tools developed to support tradeoff analysis for transportation asset management. The software tools and the accompanying documentation are designed to help state departments of transportation and other transportation agencies identify, evaluate, and recommend investment decisions for managing the agency’s infrastructure assets.

The software tools associated with NCHRP Report 545 are available in an ISO format. Links to instructions on burning an .ISO CD-ROM and the download site for the .ISO CD-ROM are below.

Help on Burning an .ISO CD-ROM Image

Download the NCHRP CRP-CD-57.ISO CD-ROM Image
