Appendix B: ATC “Better Than or Equal To” Assessment

Introduction

Alternative Technical Concepts (ATCs) provide a means for proposers to modify contract requirements for a competitive advantage (Gransberg et al. 2014; Molenaar et al. 2014). While a competitive advantage generally implies that a proposer will lower its price, agencies review ATCs only to confirm that they provide equal or better value for taxpayers’ money, which does not necessarily mean lower construction costs. Thus, effective ATC evaluation procedures should be able to handle trade-offs among cost and non-cost evaluation criteria as well as between conflicting project objectives. Is an ATC that reduces construction duration but increases project cost equal to or better than the original design? What would be a reasonable extra cost to reduce project duration, improve traffic control, and/or reduce environmental impacts? This appendix presents a wide range of approaches currently used by transportation and non-transportation organizations to measure value for money (VfM). The research team argues that formal VfM evaluation approaches could help improve ATC review procedures, increasing consistency and transparency in the assessment of the “equal to or better” standard.

Value for money refers to the optimal use of available resources to achieve the best possible benefits from investments (DFID 2011; Emmi et al. 2011; ICAI 2011; Barnett et al. 2010). NCHRP Synthesis 391: Public Sector Decision Making for Public-Private Partnerships found that VfM assessments are commonly used overseas to justify the use of public-private partnerships (PPPs) for the procurement of infrastructure projects and are becoming increasingly popular among state departments of transportation (DOTs) in the US as a mechanism to establish the business case for a PPP (Buxbaum and Ortiz 2009). In PPP projects, VfM is measured in terms of cost savings through a comparison between estimated PPP costs and the expected costs of a traditional procurement approach. This approach’s inability to handle non-cost factors makes it unsuitable for the evaluation of ATCs. However, the existing literature on VfM assessment includes a wide range of non-transportation approaches that could better meet agencies’ needs to evaluate ATCs effectively.

A more common use of formal VfM assessment methodologies can be found in programs and investments made by international humanitarian aid and non-governmental organizations (NGOs) to ensure that donors’ money is well spent, as well as to promote greater transparency and accountability in their operations (Emmi et al. 2011). These organizations have developed VfM frameworks and assessment methodologies that include both monetary and non-monetary factors. They recognize that “striving for the minimum possible cost does not necessarily maximize value for money. In some cases, spending a little more may well deliver significantly better value” (ICAI 2011). DOTs seem to agree with this statement; however, they still lack the tools to assess ATC non-cost evaluation criteria in an objective manner. For example, the “equal to or better” analysis conducted by the Washington State DOT on ATCs involves the subjective assessment of eleven factors, including roadway system functionality, structural adequacy, safety, aesthetics, environmental impact, and impacts on surrounding and adjacent communities (WSDOT 2010). Proposers are informed about the eleven criteria, but they do not know how each criterion will be evaluated or how decision-makers will make trade-offs among them.

The implementation of standard VfM assessment processes in ATC contracting would make it easier for DOTs to justify ATC accept/reject decisions and would facilitate a more transparent process, decreasing the risk of claims from contractors. DOTs trust that any potential claims “that may arise regarding conducting an ‘apples to apples’ comparison of proposals is resolved by requiring the ATC to meet the ‘equal to or better’ standard” (WSDOT 2010). However, current ATC evaluation methods are based on the expert opinion and judgment of decision-makers (Gransberg, Loulakis, and Gad 2014), making it difficult for DOTs to demonstrate the transparency of the process.

Value for Money Assessment Types

The research team identified three types of VfM assessments:

- Comparative VfM assessment: intended to identify the investment alternatives that offer the best VfM, ensuring an optimal use of available resources.
- Project/program management VfM assessment: intended to ensure that projects/programs are being managed for VfM.
- Demonstrative/auditing VfM assessment: intended to justify investments to stakeholders by demonstrating VfM (does not involve a comparison among investment alternatives).

Given the need to compare ATCs against original contract requirements, a comparative VfM assessment would be the most suitable of the three approaches for determining whether an ATC meets the “equal to or better” standard, with the proviso that “value” must be individually defined for each project. While time savings may offer the greatest value and be the most relevant ATC evaluation criterion for a given project, ATCs in other projects may be aimed at better traffic control strategies or better temporary construction (MnDOT 2015). This means that ATC evaluation protocols should provide a certain level of flexibility to adjust the evaluation process to the needs of each project. The need to define value for each investment offers an additional benefit: it provides a basis for discussion of the factors that would yield the maximum value from a program, consolidating the understanding of the intended objectives among all decision-makers (Emmi et al. 2011).

A review of effective procurement practices currently used by DOTs across the country and intended to maximize value for taxpayers’ money revealed a great similarity between the VfM assessment methodologies used by humanitarian aid organizations and the best-value practices used to select contractors for transportation construction projects. Thus, for the purposes of this research project, best-value contractor selection is also considered a VfM assessment approach and was analyzed by the team to identify suitable practices that could help improve ATC evaluation procedures. Best-value is “a procurement process where price and other key factors [cost and non-cost factors] are considered in the evaluation and selection process [of contractors] to minimize impacts and enhance the long-term performance and value of construction” (Scott et al. 2006).

A review of the information collected through a comprehensive literature review on VfM assessment and best-value selection models allowed the team to identify the primary elements and some methodology requirements that might facilitate effective ATC “equal to or better” decisions. These primary elements and methodology requirements are presented in Table B.1. The primary elements refer to the different components of the ATC assessment process.

First, the agency needs to identify the key areas for comparison between potential ATCs and the original contract requirements (evaluation areas, e.g., cost, time, quality, traffic control, and environmental impact). The agency should then determine the specific criteria to be used to assess each evaluation area (evaluation criteria, e.g., the cost area could be evaluated in terms of contract administration costs, future maintenance costs, and user costs). The rating system refers to the method used to measure the value added to the project by the ATC under each criterion (e.g., an objective methodology to rate each criterion on a worse-equal-better basis in comparison to the original design). Finally, a selection/decision method is required to integrate all criterion ratings into a single indicator that determines whether a given ATC meets the “equal to or better” standard (see the sketch following the Three E’s framework below).

Table B.1. ATC Evaluation – Primary Elements and Methodology Requirements

Primary elements:
• Evaluation Areas
• Evaluation Criteria
• Rating System
• Selection/Decision Method

Methodology requirements:
• Ability to make trade-offs among multiple conflicting objectives (e.g., minimize project costs and minimize project duration).
• Ability to integrate both quantitative and qualitative factors, as well as different units of measurement (e.g., dollars, months, Likert scales).
• Flexibility to tailor assessment methodologies according to the agency’s project-specific objectives (e.g., shortening duration, achieving a certain finish date, minimizing public impact).
• Consistency and transparency in the application of the decision-making process.

Value for Money Assessment Methods and Models

This section describes methodologies and strategies that were considered by the research team for the development of effective ATC evaluation systems. As discussed above, the research team found that DOTs currently use formal VfM assessments to justify the use of PPPs for the procurement of transportation infrastructure projects. However, applying the same approach to the evaluation of ATCs was ruled out because those assessments do not allow the use of non-monetary evaluation criteria. Therefore, this section is mainly focused on VfM assessment approaches implemented by humanitarian aid organizations, under the premise that these approaches can be adapted to meet the needs of DOTs.

Most of the existing VfM evaluation approaches used by humanitarian aid organizations are built upon the widely accepted “Three E’s” framework illustrated in Figure B.1. This framework holds that VfM is the result of balancing three key elements: economy, efficiency, and effectiveness (Emmi et al. 2011; ICAI 2011; Barnett et al. 2010). Each of these three dimensions is defined below (Barnett et al. 2010):

- Economy: efforts to obtain the best-value inputs.
- Efficiency: efforts to maximize outputs with the available inputs.
- Effectiveness: efforts to achieve and measure (in a quantitative and qualitative manner) the intended outcomes.
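To make the four primary elements in Table B.1 concrete, the following minimal sketch shows one way criterion-level worse/equal/better ratings could be rolled up into the single indicator that the selection/decision method must produce. The evaluation areas, criteria, and weights are illustrative assumptions, not values drawn from any cited agency procedure.

    # Illustrative roll-up of Table B.1's primary elements: evaluation areas
    # and criteria, a worse/equal/better rating scale, and a decision method
    # that combines the ratings into one "equal to or better" indicator.
    # All areas, criteria, and weights are hypothetical.

    RATING = {"worse": -1, "equal": 0, "better": +1}

    # (evaluation area, evaluation criterion) -> project-specific weight.
    CRITERION_WEIGHTS = {
        ("cost", "future maintenance costs"): 0.25,
        ("cost", "user costs during construction"): 0.15,
        ("time", "construction duration"): 0.30,
        ("traffic control", "lane closures"): 0.20,
        ("environment", "wetland impact"): 0.10,
    }

    def atc_indicator(ratings):
        """Weighted sum of ratings; 0.0 corresponds to 'equal to' overall."""
        return sum(w * RATING[ratings[c]] for c, w in CRITERION_WEIGHTS.items())

    def meets_standard(ratings):
        """An ATC meets 'equal to or better' if the indicator is non-negative."""
        return atc_indicator(ratings) >= 0.0

    # Example: an ATC that shortens duration but raises maintenance costs.
    atc = {
        ("cost", "future maintenance costs"): "worse",
        ("cost", "user costs during construction"): "better",
        ("time", "construction duration"): "better",
        ("traffic control", "lane closures"): "equal",
        ("environment", "wetland impact"): "equal",
    }
    print(round(atc_indicator(atc), 2), meets_standard(atc))  # 0.2 True

Under this scheme, the project-specific weights are what encode the trade-offs called for in Table B.1; an agency that cannot tolerate degradation in a given area could additionally reject any ATC rated “worse” on that area’s criteria.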

Figure B.1. Three E’s Value for Money Framework (adapted from Emmi et al. 2011)

A study conducted by Fleming (2013) identified six main VfM assessment methods implemented by international development/health aid organizations and classified them into three groups. Table B.2 presents a description of each method as well as a summary of the similarities and differences within each group.

Table B.2. Methods for Evaluating Value for Money (adapted from Fleming 2013)

Group 1
- Cost-Effectiveness Analysis (CE): the evaluation of two or more alternatives, based on their relative costs and outcomes (effects), in reaching a particular goal.
- Cost-Utility Analysis (CU): the evaluation of two or more alternatives by comparing their costs to their utility (a measure of effectiveness developed from the preferences of individuals). This method can be used where monetizing all outcomes is not possible or appropriate.
Similarities and differences: CE and CU are both useful for evaluating investments that aim to reach the same goal in non-monetary terms. The main difference between the two methods is that CU analysis takes beneficiary perspectives into account.

Group 2
- Cost-Benefit Analysis (CB): the evaluation of alternatives by identifying their costs and benefits in money terms and adjusting for time. This method can be used to identify whether a course of action is worthwhile in an absolute sense, that is, whether the benefits outweigh the costs.
- Social Return on Investment (SROI): measures social, environmental, and economic costs and benefits in money terms.
Similarities and differences: CB and SROI evaluate whether an investment is beneficial in an absolute sense. Both monetize outcomes, and both allow for comparison of investments with different objectives or from different sectors. The difference between them is that SROI explicitly measures social, environmental, and economic costs and benefits.

Group 3
- Rank Correlation of Cost vs. Impact (RCCI): allows for the relative measurement of VfM across a portfolio of initiatives.
- Basic Efficiency Resource Analysis (BERA): evaluates complex investments by comparing impact to resources, offering a relative perspective on performance in which the units analyzed are judged in comparison to peer units.
Similarities and differences: RCCI and BERA both evaluate the relative costs and benefits of many investments. The first method ranks and correlates costs and impact, while the second examines relative value by plotting investments on a four-quadrant graph based on costs and impacts.
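To illustrate the Group 1 logic in an ATC setting, the short sketch below computes a simple cost-effectiveness ratio (cost per unit of outcome) for two hypothetical concepts that pursue the same goal; all figures are invented for illustration.

    # Cost-effectiveness analysis (CE) in miniature: when alternatives pursue
    # the same non-monetary goal, compare their cost per unit of outcome.
    # All figures are hypothetical.

    alternatives = {
        "ATC 1": {"extra_cost": 400_000, "months_saved": 2.0},
        "ATC 2": {"extra_cost": 900_000, "months_saved": 3.0},
    }

    for name, a in alternatives.items():
        ratio = a["extra_cost"] / a["months_saved"]  # dollars per month saved
        print(f"{name}: ${ratio:,.0f} per month of schedule saved")

    # ATC 1: $200,000 per month saved; ATC 2: $300,000 per month saved.
    # Under CE, ATC 1 delivers the same kind of outcome at a lower unit cost,
    # even though ATC 2 saves more months in total.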

The following sections present two specific examples of VfM assessment models developed by humanitarian aid organizations, as well as a best-value procurement framework proposed in NCHRP Report 561: Best-Value Procurement Methods for Highway Construction Projects (Scott et al. 2006). These three approaches (the two VfM assessment models and the best-value procurement framework) are variations of the methods listed in Table B.2.

Rating and Weighting Approach – Department for International Development (UK)

The UK Department for International Development (DFID) developed a VfM assessment approach that can be used for comparative or demonstrative/auditing purposes. It is intended to maximize the benefits from development aid programs. This approach uses a standard VfM scoring sheet to evaluate each of the three dimensions of the Three E’s framework (i.e., economy, efficiency, and effectiveness; see Figure B.1). Each dimension is evaluated using the criteria presented in Table B.3 and a 1-to-5 rating scale that indicates how well a given program performs on each criterion (1 = poor performance; 5 = high performance). To improve assessment consistency and transparency, DFID provides decision-makers with a specific description of the expected level of performance at each point on the rating scale. Table B.4 shows the descriptions of the expected performance levels for the two economy criteria at each point on the scale; similar descriptions are provided for the efficiency and effectiveness evaluation criteria. Finally, the required assessment flexibility is achieved by weighting each evaluation criterion on a per-program basis, according to the value assigned to different program aspects.

Table B.3. DFID VfM Assessment Model – Dimensions and Evaluation Criteria (adapted from Barnett et al. 2010)

Economy:
• Procurement
• Unit Costs

Efficiency:
• Productivity Measure
• Risk Analysis and Mitigation

Effectiveness:
• Leverage/Replication
• Theory of Change
• Relevance and Robustness of Indicators
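A minimal sketch of the arithmetic behind this rating-and-weighting approach appears below: each Table B.3 criterion receives a 1-to-5 rating and a per-program weight, and the weighted sum gives an overall VfM score. The weights and ratings are invented for illustration; DFID’s published materials define the scale and criteria, not this exact computation.

    # DFID-style rating and weighting: 1-5 ratings per criterion, weighted on
    # a per-program basis. Weights and ratings below are illustrative.

    weights = {  # per-program weights over the Table B.3 criteria (sum to 1.0)
        "procurement": 0.15,
        "unit costs": 0.15,
        "productivity measure": 0.20,
        "risk analysis and mitigation": 0.10,
        "leverage/replication": 0.10,
        "theory of change": 0.15,
        "relevance and robustness of indicators": 0.15,
    }

    ratings = {  # 1 = poor performance ... 5 = high performance
        "procurement": 4,
        "unit costs": 3,
        "productivity measure": 5,
        "risk analysis and mitigation": 3,
        "leverage/replication": 2,
        "theory of change": 4,
        "relevance and robustness of indicators": 3,
    }

    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to one
    score = sum(weights[c] * ratings[c] for c in weights)
    print(f"Weighted VfM score: {score:.2f} out of 5")  # 3.60 out of 5

Because the weights are set per program, the same scoring sheet can emphasize economy on one program and effectiveness on another, which is the flexibility the approach relies on.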

Table B.4. DFID VfM Assessment Model – Economy Scoring Sheet (adapted from Barnett et al. 2010)

Rating 1
Procurement:
• No discernible use of procurement to manage or reduce costs
Unit Costs:
• Very high cost compared with benchmarked unit cost (BM)
• No mitigating factors identified which explain and justify additional cost
• Cost exceeds BM by a wide margin and represents a poor return

Rating 2
Procurement:
• Some identifiable management of costs through procurement
• Ongoing monitoring of procurement costs not identified
• Little or no assessment of the effect of procurement savings on outputs/outcomes
• Costs are managed through procurement
Unit Costs:
• Cost is above BM
• Few mitigating factors explained which justify additional cost
• Cost exceeds BM and is not delivering adequate returns

Rating 3
Procurement:
• Costs managed and increased economies identified through procurement
• Ongoing monitoring of procurement costs planned
• Risks to outputs/outcomes identified
• Costs are managed and reduced through procurement
Unit Costs:
• Cost comparable with BM
• No additional benefits identified
• Cost is comparable and delivering adequate returns

Rating 4
Procurement:
• Costs reduced, and supported by evidence of savings achieved through better use of procurement
• Ongoing monitoring of procurement costs planned
• Risks to outputs/outcomes identified and assessed
• Costs are managed well, and effective savings found
Unit Costs:
• Cost comparable with BM
• Some additional benefits described and quantified
• Cost is comparable and represents a good return

Rating 5
Procurement:
• Significant cost reductions achieved through better use of procurement, supported by evidence
• Ongoing monitoring of procurement costs planned
• Risks to outputs/outcomes identified, assessed, and minimized
• Costs are significantly reduced and managed to very good effect
Unit Costs:
• Cost is below BM
• Some additional benefits described and quantified
• Cost is lower by a wide margin and represents an excellent return

“Traffic Light” Rating Approach – Independent Commission for Aid Impact (UK)

The Independent Commission for Aid Impact (ICAI) is in charge of ensuring that the UK aid budget is appropriately spent overseas, delivering VfM for UK taxpayers (ICAI 2011). The ICAI approach to assessing VfM uses a “traffic light” scoring system to rate aid programs based on four parameters: objectives, delivery, impact, and learning. This is a demonstrative/auditing approach aimed at producing an overall rating for the program as well as a rating for each of the four parameters. The evaluation criteria for each parameter consist of a series of standard questions to be assessed by decision-makers to produce ratings at the parameter and program level. The standard set of questions under each parameter is presented in Table B.5. The ratings are assigned in the form of colors, as shown in Table B.6; different colors indicate different VfM achievement levels. This is a subjective approach based on the opinion and judgment of decision-makers.
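ICAI assigns its colors by expert judgment rather than by formula. Purely for illustration, the sketch below shows one way the answers to the standard questions (Table B.5, next) could be tallied into the four colors of Table B.6; the thresholds mapping the share of criteria met to a color are assumptions, not ICAI’s rule.

    # Illustrative tally for a traffic-light rating. ICAI rates programs by
    # judgment, not by formula; the thresholds below are assumed for the sketch.

    def traffic_light(criteria_met, criteria_total):
        share = criteria_met / criteria_total
        if share >= 0.9:
            return "Green"        # meets all or almost all of the criteria
        if share >= 0.7:
            return "Green-Amber"  # meets most of the criteria
        if share >= 0.4:
            return "Amber-Red"    # meets some of the criteria
        return "Red"              # meets few of the criteria

    # Example: 14 of the 22 standard questions in Table B.5 judged satisfied.
    print(traffic_light(14, 22))  # Amber-Red (14/22 is about 0.64)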

Table B.5. ICAI VfM Assessment Model – Parameters and Evaluation Criteria (adapted from ICAI 2011)

1 Objectives: what is the program trying to achieve?
1.1 Does the program have clear, relevant and realistic objectives that focus on the desired impact?
1.2 Is there a clear and convincing plan, with evidence and assumptions, to show how the program will work?
1.3 Does the program complement the efforts of government and other aid providers and avoid duplication?
1.4 Are the program’s objectives appropriate to the political, economic, social and environmental context?

2 Delivery: is the delivery chain designed and managed so as to be fit for purpose?
2.1 Is the choice of funding and delivery options appropriate?
2.2 Does program design and roll-out take into account the needs of the intended beneficiaries?
2.3 Is there good governance at all levels to avoid corruption?
2.4 Are resources being leveraged so as to work best with others and maximize impact?
2.5 Do managers ensure the efficiency and effectiveness of the delivery chain?
2.6 Is there a clear view of costs throughout the delivery chain?
2.7 Are risks to the achievement of the objectives identified and managed effectively?
2.8 Is the program delivering against its agreed objectives?
2.9 Are appropriate amendments to objectives made to take account of changing circumstances?

3 Impact: what is the impact on intended beneficiaries?
3.1 Is the program delivering clear, significant and timely benefits for the intended beneficiaries?
3.2 Is the program working holistically alongside other programs?
3.3 Is there a long-term and sustainable impact from the program?
3.4 Is there an appropriate exit strategy involving effective transfer of ownership of the program?
3.5 Is there transparency and accountability to intended beneficiaries, donors and UK taxpayers?

4 Learning: what works and what needs improvement?
4.1 Are there appropriate arrangements for monitoring inputs, processes, outputs, results and impact?
4.2 Is there evidence of innovation and use of global best practice?
4.3 Is there anything currently not being done in respect of the program that should be undertaken?
4.4 Have lessons about the objectives, design and delivery of the program been learned and shared effectively?

Table B.6. ICAI VfM Assessment Model – “Traffic Light” Rating System (adapted from ICAI 2011)

Green: The program meets all or almost all of the criteria for value for money and is performing strongly. Very few or no improvements are needed.
Green-Amber: The program meets most of the criteria for value for money and is performing well. Some improvements should be made.
Amber-Red: The program meets some of the criteria for value for money but is not performing well. Significant improvements should be made.
Red: The program meets few of the criteria for value for money. It is performing poorly. Immediate and major changes need to be made.

Best-Value Award Method – NCHRP Report 561: Best-Value Procurement Methods for Highway Construction Projects

The best-value award method proposed in NCHRP Report 561 uses an award algorithm to evaluate contractors’ qualifications, technical proposals, and price proposals based on a set of predetermined parameters and evaluation criteria.

Table B.7 presents and describes the set of best-value parameters and evaluation criteria proposed in NCHRP Report 561. It should be noted that the proposed criteria include both cost and non-cost factors. These parameters and evaluation criteria are the result of a national transportation agency survey and a review of several best-value contract documents from DOTs across the country.

Table B.7. Best-Value Parameters and Evaluation Criteria (Scott et al. 2006)

Cost
- Initial Capital Cost. Includes: construction and procurement costs (also includes design costs in a DB project). Remarks: sometimes called the “bid” price.

Time
- Schedule. Includes: time to build the project (also includes design time in a DB project). Remarks: sets the contract performance period.

Qualifications and Performance
- Prequalification. Includes: financial and corporate information as well as bonding requirements. Remarks: typically a routine government form used for all contracting opportunities.
- Past Project Performance. Includes: project experience on past projects that are similar to the project at hand; might also include past history of claims and litigation. Remarks: preference is given to offerors with the most relevant experience.
- Key Personnel Experience and Qualifications. Includes: qualifications of key personnel. Remarks: licenses, registrations, and past project experience of individuals.
- Subcontractors Information. Includes: subcontracting plan, including small business utilization. Remarks: often requires that goals for participation by certain types of firms be met.
- Project Management Plan. Includes: plans for logistics, material management, equipment, traffic control, etc. Remarks: often related to schedule constraints.
- Safety Record and/or Plan. Includes: corporate safety record and plans for specific safety hazards. Remarks: often uses the Workers’ Compensation Insurance Modifier as a metric to measure safety record.

Quality
- Quality Management Plans. Includes: typical QA/QC program submitted prior to award. Remarks: may include design QC if bid alternates or DB is used.

Design
- Proposed Design Alternate. Includes: the owner allows the contractor to propose an alternate material or technology for a given feature of work. Remarks: the bid is submitted with and without alternates; the owner decides which alternates will be accepted prior to award.
- Technical Proposal Responsiveness. Includes: proposals are considered responsive if they receive a minimum technical score. Remarks: requires that a measurable standard be developed for each evaluation criterion.
- Environmental Considerations. Includes: plans to prevent and/or mitigate pollution during construction. Remarks: many are required by law and/or regulation.

Tables B.8 and B.9 show four best-value rating systems and seven award algorithms, respectively, identified in NCHRP Report 561. These best-value approaches are currently used in horizontal and vertical construction projects. The rating system to be used on a given project depends on the intended award algorithm; Table B.9 also shows the suitable rating system(s) for each best-value award algorithm.

Table B.8. Best-Value Rating Systems (adapted from Scott et al. 2006)

Satisficing (also called “go/no-go”): The simplest and easiest evaluation system for evaluators and bidders to understand. The evaluation planner must establish a minimum standard for every evaluation criterion against which the proposals can be measured. Each proposal is rated as responsive or nonresponsive based on these minimum standards. This is relatively simple for certain kinds of criteria, such as qualifications standards.

Modified satisficing: These systems recognize that there may be degrees of responsiveness to any given submittal requirement. As a result, the range of possible ratings is expanded to allow an evaluator to rate a given category of a proposal across a variety of degrees. Thus, a proposal that is nearly responsive can be rated accordingly rather than dropped from the competition due to a minor deficiency.

Adjectival rating: These systems use a specific set of adjectives to describe the conformance of an evaluated area within a proposal to the project’s requirements in that area. These systems are an extension of modified satisficing.

Direct point scoring: Evaluators assign points to evaluation criteria based on a predetermined scale or the preference of the evaluator. These rating systems allow for more rating levels.

Table B.9. Best-Value Award Algorithms (adapted from Scott et al. 2006)

Meets Technical Criteria – Low Bid
- Award determination: if T > Tmin, award to Pmin; if T < Tmin, the proposal is nonresponsive (T = technical score; P = project price).
- Rating system: satisficing, with award to the lowest price among responsive proposals.

Adjusted Bid
- Award determination: AB = P/T; award to ABmin (AB = adjusted bid).
- Rating system: direct point scoring, using a numerical analysis based on point scoring, a mathematical combination of price and non-price factors, or a quantitative tradeoff analysis.

Adjusted Score
- Award determination: AS = (T x EE)/P; award to ASmax (AS = adjusted score; EE = engineer’s estimate).
- Rating system: direct point scoring (as above).

Weighted Criteria
- Award determination: TS = W1S1 + W2S2 + … + WiSi + W(i+1)PS; award to TSmax (TS = total score; Wi = weight of factor i; Si = score of factor i; PS = price score).
- Rating system: direct point scoring (as above).

Quantitative Cost-Technical Tradeoff
- Award determination: TIncrement = [(Tj/Ti) - 1] x 100% and PIncrement = [(Pj/Pi) - 1] x 100%. If TIncrement > PIncrement, award to Proposal i; if TIncrement < PIncrement, retain Proposal j for possible award and repeat with Proposal j+1. Repeat the process until TIncrement > PIncrement (T = technical score; P = project price).
- Rating system: direct point scoring (as above).

Fixed-Price – Best Proposal
- Award determination: award to Tmax at the fixed price P (T = technical score; P = project price).
- Rating system: direct point scoring (as above).

Qualitative Cost-Technical Tradeoff
- Award determination: similar to the quantitative tradeoff, only with no quantitative analysis of the difference; award to the proposal that offers the best value in the proposed scope. The evaluation panel reaches consensus as to which proposal is best.
- Rating system: modified satisficing or adjectival rating, using a qualitative tradeoff analysis of cost and technical factors.
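To make the arithmetic of Table B.9 concrete, the sketch below implements three of the quantitative award algorithms (adjusted bid, adjusted score, and weighted criteria) over a set of invented proposals. The formulas follow the table; the price-scoring rule in the weighted-criteria example is an assumption, since agencies define their own.

    # Three award algorithms from Table B.9 applied to hypothetical proposals.
    # T = technical score, P = project price, EE = engineer's estimate.

    proposals = {
        "A": {"T": 85.0, "P": 10_500_000},
        "B": {"T": 92.0, "P": 11_200_000},
        "C": {"T": 78.0, "P": 9_900_000},
    }
    EE = 10_800_000  # engineer's estimate

    # Adjusted Bid: AB = P / T; award to the lowest AB.
    ab = {k: v["P"] / v["T"] for k, v in proposals.items()}
    award_ab = min(ab, key=ab.get)

    # Adjusted Score: AS = (T x EE) / P; award to the highest AS.
    a_score = {k: v["T"] * EE / v["P"] for k, v in proposals.items()}
    award_as = max(a_score, key=a_score.get)

    # Weighted Criteria: TS = W_tech x T + W_price x PS; award to the highest
    # TS. Here PS gives the lowest price 100 points and scales the others down
    # proportionally (an assumed rule, not one prescribed by the report).
    w_tech, w_price = 0.6, 0.4
    p_min = min(v["P"] for v in proposals.values())
    ts = {k: w_tech * v["T"] + w_price * (100.0 * p_min / v["P"])
          for k, v in proposals.items()}
    award_ts = max(ts, key=ts.get)

    print(award_ab, award_as, award_ts)  # B B B for these numbers

All three algorithms happen to reward the same proposal here, but they need not: the weighted-criteria result in particular shifts with the choice of weights, which echoes the point above that the rating system and the award algorithm must be chosen together.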

Summary

The purpose of delving this deeply into the VfM literature was to identify the possible range of VfM analyses that DOTs could use to make the “equal to or better than” decision. The research team recognizes that many of the concepts discussed above would be burdensome for DOTs to implement. The comments received from both the practitioner members of the research team and the industry steering group indicate that conducting a full-blown VfM analysis on the order of those done for PPP projects is unrealistic. Nevertheless, in the litigious environment of the US highway construction sector, the risk of protest is always present, and because the very use of ATCs brings the “apples to apples” and “level playing field” procurement principles into question, it seems prudent to seek objective approaches to demonstrate VfM.
