National Academies Press: OpenBook
« Previous: Chapter 1 - Introduction and Research Approach
Page 18
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 18
Page 19
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 19
Page 20
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 20
Page 21
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 21
Page 22
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 22
Page 23
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 23
Page 24
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 24
Page 25
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 25
Page 26
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 26
Page 27
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 27
Page 28
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 28
Page 29
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 29
Page 30
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 30
Page 31
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 31
Page 32
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 32
Page 33
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 33
Page 34
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 34
Page 35
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 35
Page 36
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 36
Page 37
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 37
Page 38
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 38
Page 39
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 39
Page 40
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 40
Page 41
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 41
Page 42
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 42
Page 43
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 43
Page 44
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 44
Page 45
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 45
Page 46
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 46
Page 47
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 47
Page 48
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 48
Page 49
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 49
Page 50
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 50
Page 51
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 51
Page 52
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 52
Page 53
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 53
Page 54
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 54
Page 55
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 55
Page 56
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 56
Page 57
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 57
Page 58
Suggested Citation:"Chapter 2 - Findings." National Academies of Sciences, Engineering, and Medicine. 2006. Best-Value Procurement Methods for Highway Construction Projects. Washington, DC: The National Academies Press. doi: 10.17226/13982.
×
Page 58

Below is the uncorrected machine-read text of this chapter, intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text of each book. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages.

62.1 State of Practice This chapter examines and analyzes the state of practice of best-value procurement methods in the construction industry found in the literature, project procurement docu- ments, domestic and international interviews, and survey data. It includes regulatory trends, concepts found in the literature and project data, parameters used in the process, a summary of results from a highway sector survey, a com- parison of performance for best-value contracting versus design-bid-build, and case study information to illustrate how best-value procurement has been implemented. A literature review of procurement methods used in the construction industry within the past 15 years is presented in Appendix A. Many of the findings highlight issues and short- comings in the traditional low-bid system and address trends in public sector construction toward the increased use of various best-value procurement methods to improve project perform- ance and enhance end-product quality. The literature draws from all facets of the construction industry in the United States, Europe, Canada, and other countries. It includes perspectives from federal and state contracting agencies, vertical and hori- zontal construction,and analysis of project outcomes correlated to various procurement systems incorporating non-cost factors in the selection process. The development of best-value procurement concepts in the public sector has to some extent borrowed ideas and approaches used to procure products and services in the private sector. Private sector construction owners have long sought to get the best value for dollars expended. For example, a major U.S. corporation with an annual construction budget of $1.5 billion has often used best-value selection with a nego- tiated procurement for industrial projects. Contractor selec- tion is typically based on multiple factors that include cost, schedule, quality management, safety, and technical ability (Dorsey 1995). Best-value procurement practices are increas- ingly being transferred to the public sector where permitted by legislation or when determined to be in the best interests of the agency under both traditional and alternative contracts. NCHRP Report 451, “Guidelines for Warranty, Multi- Parameter, and Best Value Contracting,” provided an intro- ductory framework for best-value procurement in highway construction, and the initial framework set forth in that document has been incorporated into the comprehensive study within this report (Anderson and Russell 2001). Although legislative requirements have traditionally required low bid for construction, more and more state legislatures have passed legislation that allows best-value procurement. The next section highlights some of the recent legislative revisions. 2.2 Legislative and Regulatory Trends Legislation and regulations for public sector construction at the federal and state levels are moving toward greater use of contracting approaches to achieve the best value for dollars expended. The Federal Acquisition Regulation (FAR) Part 9, Contractor Qualifications, includes commentary regarding the reasons for this trend, explaining that the low-bid method fails to serve the public interest by creating the false impres- sion that this will automatically result in the least cost to the owner (FAR 2004). FAR Section 9.103, Policy, describes the importance of setting appropriate responsibility standards whenever a low-bid methodology is used: The award of a contract to a supplier based on lowest evalu- ated price alone can be false economy if there is subsequent default, late deliveries, or other unsatisfactory performance resulting in additional contractual or administrative costs. While it is important that Government purchases be made at the lowest price, this does not require an award to a supplier solely because that supplier submits the lowest offer. A prospective contractor must affirmatively demonstrate its responsibility, including, when necessary, the responsibility of its proposed subcontractors. C H A P T E R 2 Findings

7FAR Part 15, Contracting by Negotiation, establishes a best-value “source selection” process for federal contracts (FAR 2004). This process is also known as “competitive nego- tiation” because negotiations (discussions) are conducted with multiple offerors simultaneously, instead of selecting a single offeror and negotiating with that entity. The source selection process might entail the selection of the lowest- priced technically acceptable proposals or it may consist of a tradeoff between price and other factors—resulting in an award that may not be to the lowest-priced offeror or the high- est technically rated offeror. Regardless of which approach is used, the federal agency’s source selection decision must be made based on a determination that the selected proposer has offered the best value to the government. Many federal and state agencies have implemented various source selection methods and have developed instructions or procedures for development and implementation of these methods. At the federal level, the U.S. Postal Service, the Army, the Navy, the Department of Veterans Affairs, and the Federal Bureau of Prisons have developed procedures and guidelines for source selection contracting applicable to their construction programs (U.S. Postal Service Handbook 2000, Army 2001). Federally imposed procurement requirements are appli- cable to state and local transportation agencies wishing to use federal-aid funds for highway projects. For many years 23 U.S.C. Section 112(b)(3) mandated use of a low-bid pro- curement methodology for most construction contracts, allowing alternative procurement procedures to be used only with special permission from the FHWA through its Special Experimental Project (SEP-14) initiative. Many of the projects authorized under SEP-14 involved use of best-value concepts, and the lessons learned from this program have added to the body of knowledge for best-value procurement in the highway sector (FHWA 1998). In 1998, Congress acknowledged the need to allow an alternative procurement process for design- build projects by enacting revisions to 43 U.S.C. Section 112(b)(3), allowing a best-value process to be used for award of such contracts. FHWA adopted implementing regulations that permit such projects to use a procurement procedure similar to the FAR 15 source selection process and continue to allow agencies to use other procurement processes through the SEP-14 program. A number of states have adopted legislation allowing best- value procurements, often in the context of design-build projects but also allowing best value to be incorporated into construction contract procurements. The ABA has published model legislation and implementing regulations that, if adopted by a state legislature, would allow state and local agencies to incorporate best-value concepts into a competi- tive bidding process and to use competitive negotiations under specified circumstances. Note that one flaw to the Model Code is that it does not provide a model process for procurement of innovative contracts where the nature of the contract does not allow a price competition (although it does allow for the possibility of negotiated contracts for items and services available only from a single source). As a result, agen- cies proposing legislation based on the Model Code may wish to consider including an alternative process for such con- tracts. The Model Code does however provide an excellent prototype for legislation to allow best value to be considered in awarding traditional construction contracts. A copy of Article 3 of the Model Code is included in Appendix B. The Model Code provides for construction contracts to be procured using competitive sealed bidding unless deemed to be impracticable or not advantageous to the owner. The com- petitive sealed bidding process is established by Section 3-202 of the Model Code. Section 3-202(5) requires bids to be eval- uated based on requirements set forth in the Invitations for Bid, and those criteria shall be “objectively measurable, such as discounts, transportation costs and total or life-cycle costs.” The process thus permits the traditional low-bidding process where the owner awards to the responsible bidder that has provided the lowest responsive bid, and also permits agencies to implement a process addressing items that have a cost impact to the owner outside of the contract price. The com- petitive sealed bidding process cannot, however, be the basis for selecting one proposer over another simply because the owner believes the first proposer has offered a better product. If such a result is desired, the owner has the ability to use the competitive sealed proposal process, provided that the owner is able to justify use of such process. The competitive sealed proposal process is described in Section 3-203 of the Model Code. The Model Code intends for this process to be used for design-build projects and for other projects for which competitive sealed bidding is determined to be impracticable or not advantageous to the owner. The competitive sealed proposal process may involve multiple steps, including prequalification, receipt and review of initial propos- als, discussions to ensure that the proposer is fully aware of the owner’s requirements and to advise the proposer of any necessary clarifications regarding its proposal, and receipt and review of final proposals. Award is based on evaluation of final proposals in accordance with the criteria specified in the request for proposals. At the state level, various statutes allow use of best-value procurement for public works construction contracts. Refer to Appendix B for a list of various statutes that may allow DOTs in various states to incorporate best-value elements into procurement of construction contracts. Appendix B also includes excerpts from the FAR and from best-value statutes passed in Colorado, Delaware, and Kentucky. It should be noted that the Colorado and Kentucky laws do not appear applicable to DOT projects, but they may nevertheless be of

interest in developing legislation for DOT projects. The Colorado revised statute provides for “competitive sealed best- value bidding,” using some of the same terminology as the Model Code but offering less flexibility than the Model Code. The Colorado statute permits the procurement officer to “ . . . allow bidders to submit prices for enhancements, options, or alternatives that will result in a product or service to the state having the best-value at the lowest cost,” if a high-level determination has been made that such a process will be advantageous to the state. (The Model Code does not require a special determination to be made before incorporating best-value elements, but does include restrictions regarding the types of items that may be included.) The Colorado statute allows award to a bidder where the total price offered by the bidder, including the prices for enhancements, options, or alternatives, exceeds the total price offered by the other bidders, if it is determined “that the higher total amount provides a contract with the best value at the lowest cost to the state”based on criteria set forth in rules adopted by the procuring agency. The Colorado statute implicitly allows the owner to consider matters such as life-cycle costs in making the selection decision; the Model Code provision provides a much clearer statement regarding the process to be followed. The Delaware Code allows the use of best-value procurement for large public works contracts, with best value determined on the basis of objective criteria that have been communicated to the bidders in the invitation to bid. Delaware agencies can elect to use a best-value procurement process without special findings. However, the Delaware law includes specific require- ments regarding weightings to be assigned to the best-value criteria as follows: 1. Price—must be at least 70% but no more than 90% and 2. Schedule—must be at least 10% but no more than 30%. Under the Delaware Code, a weighted average stated in the invitation to bid must be applied to each criterion according to its importance to each project. The agency must rank the bidder according to the established criteria and award to the highest ranked bidder. Every state agency and school district is required, on a yearly basis, to file a report with every mem- ber of the General Assembly and the Governor that states which projects were bid under best-value procurement and what contractor was awarded each contract. The Delaware legislature’s decision to include specific weightings in the statute could be interpreted as requiring the agency to convert all criteria to numeric ratings even though another evaluation methodology might be more desirable. The logic underlying the requirement to give the bid price at least a 70% weighting and schedule at least a 10% weighting is unclear, and may be problematic for certain projects, for example, those for which long-term operations costs are significant. The Kentucky revised statute provides for award of con- tracts using a competitive sealed bidding process, with the contract awarded “to the responsive and responsible bidder whose bid offers the best-value.”The statute allows significant flexibility to the awarding agency in establishing the best- value criteria and their relative weightings, but makes it clear that the criteria must be objective and quantifiable. The statute includes the following definition of best value: 3. Best value means a procurement in which the decision is based on the primary objective of meeting the specific business requirements and best interests of the Commonwealth. These decisions shall be based on objective and quantifiable criteria that shall include price and that have been communicated to the offerors as set forth in the invitation for bids. In summary, legislation at the federal and state levels is moving toward allowing the use of best-value selection strate- gies. Many states have adopted legislation allowing use of design-build and permitting award to be based on a best-value determination. A number of states have also passed general procurement legislation that would allow best-value concepts to be factored into the selection decision for other construc- tion contracts as well. The best-value concepts, analysis and recommendations presented in this research work have been developed within the framework of legislative approaches currently in place for federal and state agencies. 2.3 Best-Value Contracting Concepts As described in Chapter 1, in a broad sense, the definition of best value may encompass the concepts from and variations of current highway procurement methods, including prequal- ification, post-qualification, A+B bidding, multi-parameter bidding, bid alternates, and extended warranties. The research team conducted more than 50 case studies from all sectors of construction to identify and categorize best- value concepts used in the public sector construction indus- try. These agencies include the U.S. Army Corps of Engineers, the U.S. Air Force, the Highways Agency in England, the National Aeronautics and Space Administration, the Spanish Road Administration, the Swedish Highway Administration, the U.S. Forest Service, and a number of U.S. DOTs. The majority of these case studies involve design-bid-build proj- ects, but some design-build projects have been captured as good examples of best-value procedures. These case studies are presented in summary tables throughout this chapter and in a series of detailed case studies in Appendix D. Table 2.1 provides a summary of the detailed case studies that were used to develop the best-value concepts described in this report. It also presents a systematic approach to identifying and coding best-value parameters. 8

9Table 2.1. Detailed case study index. Case ameters Award Algorithm Evaluation Rating Scales 1. Air Force Base Pedestrian Bridge A.0 + P.1 Qualitative Cost- Technical Tradeoff Adjectival Rating 2. NASA Johnson Space Center Tunnel System A.0 + P.0 + P.1 Qualitative Cost- Technical Tradeoff Adjectival Rating 3. U.S. Army Corps of Engineers Canal A.0 + P.1 + P.2 + P.4 Qualitative Cost- Technical Tradeoff Not stated 4. Swedish Highway Administration Asphalt Paving Bids A.0 + P.1 + P.2 + P.4 + D.0 Weighted Criteria Direct Point Scoring 5. Alaska DOT Interchange A.0 + A.1 + P.0 + P.4 + D.1 Weighted Criteria Direct Point Scoring 6. University of Nebraska Cleanroom B.0 + P.0 + P.2 + P.4 + D.1 Fixed Price—Best Proposal Direct Point Scoring 7. U.S. Army Corps of Engineers Dam A.0 + B.0 + P.1 + P.2 + P.3 + P.4 Qualitative Cost- Technical Tradeoff Satisficing and Adjectival Rating 8. Spanish Road Association Asphaltic Paving and Highway Maintenance A.0 + B.0 + P.1 + P.2 + P.3 + P.4 Weighted Criteria Direct Point Scoring 9. Minnesota DOT Highway A.0 + B.0 + P.0 + P.1 + Q.0 + D.1 Meets Technical Criteria—Low Bid Satisficing 10. Missouri DOT Bridge Seismic Isolation System A.0 + A.1 + B.0 + P.1 + P.3 + Q.0 + D.0 Meets Technical Criteria—Low Bid Satisficing 11. Washington State DOT Interchange A.0 + B.0 + B.2 + P.0 + P.1 + P.2 + P.4 + Q.0 + Q.4 Adjusted Score Direct Point Scoring 12. U.S. Army Corps Air Freight Terminal/Airfield A.0 + B.0 + P.1 + P.2 + P.3 + P.4 + Q.0 + Q.4 + D.0 Meets Technical Criteria—Low Bid Modified Satisficing 13. U.S. Forest Service Highway A.0 + B.0 + B.2 + P.0 + P.1 + P.2 + P.3 + P.4 + Q.4 + D.1 Quantitative Cost- Technical Tradeoff Direct Point Scoring 14. Maine DOT Bridge A.0 + A.1 + B.0 + B.2 + P.0 + P.4 + Q.0 + Q.2 + Q.3 + Q.4 + D.1 Adjusted Bid Direct Point Scoring 15. Sea to Sky Highway Improvement Project: Sunset Beach to Lions Bay A.0 + B.0 + B.2 + Q.3 + Q.4 + P.0 + P.1 + P.2 + P.4 + D.1 Meets Technical Criteria—Low Bid Satisficing 16. RFP Form of the Government of the Ontario A.0 + P.0 + P.2 + D.1 + Q.4 Adjusted Bid Direct Point Scoring 17. RFP Form of the Government of the Yukon A.0 + B.0 + P.1 + P.2 + D.1 + Q.3 Weighted Criteria Direct Point Scoring 18. Model Contract Document in England A.0 + B.2 + P.1 + P.2 + P.3 + D.1 + Q.3 + Q.4 Weighted Criteria Direct Point Scoring 19. Forth Road Bridge Toll Equipment Replacement Project in Scotland A.0 + B.2 + P.1 + P.2 + P.3 + D.1 + Q.3 + Q.4 Weighted Criteria Direct Point Scoring 20. Valuascollege Project in the Netherlands A.0 + P.1 + P.2 + P.4 + Q.3 + Q.4 + D.0 + D.1 Weighted Criteria Adjectival Rating Par Four primary concepts were derived from a review of these case studies. These concepts include parameters, evaluation criteria, rating systems, and award algorithms. Figure 2.1 illustrates how these concepts can be visualized in a best-value system. Defining best-value parameters was not a simple task for project sponsors. It is critical to identify parameters that would actually add value to a project and be defensible to the industry and the public. As a first step, the best-value param- eters must be defined and categorized. These parameters can then be further analyzed to determine which evaluation criteria add value to a project and result in a transparent and defensible procurement system. Inspection of the literature and case studies identifies a number of best-value parameters that can be mixed and matched to create a best-value procurement. Evaluation criteria associated with these general parameters can be combined to create an appropriate best-value definition, eval- uation, and award system. Some of these concepts overlap with multi-parameter bidding practices, but the parameter cate- gories described herein are more comprehensive than those described in previous NCHRP multi-parameter contracting

literature (Anderson and Russell 2001). Each flows out of a combination of the following five major categories coded with a letter designation generally consistent with the literature: • A = Cost • B = Time • P = Qualifications • Q = Quality • D = Design Alternates The first two major categories are relatively standard com- ponents of multi-parameter contracting. However, within these generic categories several options were identified. Under the cost parameter, the options included the following initial capital cost and life-cycle cost: • Cost = A.0 • Life-Cycle Costs = A.1 The time component includes lane rental and traffic con- trol, which are measured in $/unit time. These will be referred to as follows: • Time = B.0 • Lane Rental = B.1 • Traffic Control = B.2 The qualifications parameter has five major options: pre- qualification, past project performance, personnel experi- ence, subcontractor information, and project management. These will be referred to as follows: • Prequalification = P.0 • Past Project Performance = P.1 • Personnel Experience = P.2 • Subcontractor Information = P.3 • Project Management Plans = P.4 Quality has a number of variations on the theme. Some have been proposed as a component of a multi-parameter A + B + Q bid, but it is difficult to convert these concepts to a dollar or time amount in a rational way. These are referred to as follows: • Warranty = Q.0 • Warranty Credit = Q.1 • Quality Parameter Measured with % in Limits = Q.2 • Quality Parameter Using Performance Indicator = Q.3 • Quality Management Plans = Q.4 Design issues can be a critical component of many best-value parameters. This is especially true if agencies are soliciting design alternates. These are referred to as follows: • Design with Bid Alternate = D.0 • Performance Specifications = D.1 Finally, Incentive/Disincentive clauses often seem to be added to the mix of multi-parameter bidding particularly for time and quality parameters. Therefore, the suffix “with I/D” is added to the above generic set to indicate the use of that type of approach to contracting. Thus, a set of potential variations on the theme of best value is created that is equal to the number of combinations that can be developed using two or more of these parameters. For example, the following would be a best-value project that has cost, schedule, prequalification, past project performance, and a quality parameter using performance indicator with incentive/disincentives: • A.0 + B.0 + P.0 + P.1 + Q.3 with I/D The first 14 case studies in Appendix D are presented in ascending order of the number of parameters used in the best-value decision. For example, Case 1 applies only two best-value parameters, cost and past performance, while Case 14 applies a total of eleven best-value parameters from all five categories (cost, time, qualifications, quality, and design alter- nates). Six case studies were added from the international CM scan project. These cases all involved multiple parameters. Unique projects have unique parameters that define the best- value system. While some projects need little more than cost and qualifications to define the best-value system, some require a complex interrelationship of a series of parameters. In addition to the analysis of case study populations and literature review, the research team also conducted an opinion- based survey of its advisory board concerning each of the best-value concepts identified. The advisory board members were asked about their experience with each of the best-value concepts. They were then asked to rate the concepts based on 10 Figure 2.1. Best-value concepts. Best-Value Parameters Best-Value Evaluation Criteria Best-Value Evaluation Rating Systems Best-Value Award Algorithms

11 chances for success and ease of implementation. Appendix E contains a copy of the advisory board survey. The results of the advisory board survey are presented at the end of this chapter, and comments from discussions with the advisory board members have been integrated into the critical analy- sis in this chapter. 2.4 Analysis of Best-Value Concepts A thorough examination of the literature, case studies, and solicitation documents allowed the research team to further define and critically evaluate the best-value concepts and gen- erate a series of advantages and disadvantages for each cate- gory. The next sections will detail that analysis for the major components including parameters, evaluation criteria, rating systems, and award algorithms. Parameters Cost Best-value cost parameters generally include two options: initial capital costs of construction and life-cycle costs incurred after construction is complete.While best-value con- tracting seeks to award a project on a basis of other than low bid alone, cost usually plays an important part, if not the most important part, of the overall decision. In effect, the non-cost parameters are used as a way for the owner to measure the value of qualifications, schedule, quality, and design alternates. These must then be compared with the cost parameters to determine whether an increase in the project’s construction cost is justified by the enhanced value brought to the project by a particular set of non-cost parameters. It may be possible to measure the impact of schedule, quality, and design alter- nates on the project’s post-construction life-cycle cost of oper- ations and maintenance, and thus use the other type of cost parameter as the performance metric to assess the long-term value of a particular proposal. Some of the non-cost parame- ters cannot be measured on either a capital cost or life-cycle cost basis, but the owner will include them based on the owner’s perception of value to the project. Cost parameters’ greatest advantage in the best-value deci- sion is that they are inherently objective. Often, the proposed bid price can be used to determine the contractor’s under- standing of the magnitude of the actual scope of work. Thus, an unrealistically low bid, while appearing to be a real bargain, may in fact result from the bidder’s lack of competence to successfully complete the given project. This may also be cost parameters’ greatest disadvantage in that public owners must have great justification to reject a bid that is unrealistically low. Additionally, public owners usually work with historic cost data such as statewide bid averages, whereas construction contractors work with current cost data obtained from their subcontractors and suppliers. Therefore, the second disad- vantage lies in this disconnect between the owners’ and con- tractors’ estimating systems. Life-cycle cost parameters’ main advantage is that they per- mit the owner to compare the long-term advantages of com- peting proposals using an engineering economic analysis. State agencies will usually have the funding to complete new construction projects because most state statutes require funds to be available for public sector contracts. For federal- aid contracts, the state DOT signs a project agreement certi- fying that state funds will be available for the non-federal share of construction costs [23 U.S.C. Section 106(b)(1)]. However, they all have huge maintenance backlogs due to insufficient operations and maintenance funding (ASCE 2001, Ashley et al. 1998). Thus, it is quite logical for an agency to be willing to pay a marginally higher initial cost in exchange for reduced annual maintenance costs, extended design lives, or both. The difficulty in using life-cycle cost parameters lies in adequately defining the economic analysis and developing a relatively simple set of life-cycle cost input variables. Selecting arbitrary values for such important vari- ables as the discount rate or the analysis period can have unintended consequences on the validity of the output. Thus, an owner who intends to use life-cycle cost parameters should first complete an exhaustive analysis of its algorithm to ensure that it will produce the mathematically unbiased, reliable output needed to truly make a best-value award. Time Best-value time parameters not only include direct contractor-proposed schedule systems such as A+B bidding, but also those methods that use lane rental and traffic control plans to indirectly influence the contractor’s proposed schedule. Best-value time parameters can be objectively assessed based on cost by converting a time saving to user delay costs. However, these conversions are not yet universally accepted. The major advantage of best-value time parameters is allowing the con- tractor to establish a schedule that is complementary to the plan for executing the construction. These parameters also reward a contractor who proposes an aggressive schedule by making the final best-value award on a combination of both price and time, thereby allowing the price to rise as the schedule is reduced. Both lane rental and traffic control systems permit the owner to communicate the need to minimize a project’s impact on the traveling public during construction. These parameters create an incentive toward innovative management of conges- tion in work zones and reductions in detour lengths and times by rewarding the proposal that minimizes impact on traffic flow during construction. Their disadvantage is in the selection of lane rental rates and other factors to price user construction

costs. If the state highway agency is not careful when estab- lishing rates for these variables, a bias can unintentionally be created, sacrificing construction product quality to avoid oner- ous lane rental charges if a planned activity gets behind sched- ule. One alternative (or supplement) to this approach is to consider the proposer’s plan for reduction of traffic impacts as part of the proposal evaluation. Qualifications Best-value qualifications parameters allow the public owner to obtain some of the benefits from the historically accepted practice of a Brooks Act, Qualifications-Based Selec- tion (QBS) used for procurement of design profession con- tracts. The common criticism of the traditional design-bid-build award to the low bidder, whether justified or not, is that any contractor that can produce a bid bond can bid on a project, and anyone who can post a performance bond can perform the contract regardless of past perform- ance and professional qualifications. State agencies often use general past performance and experience criteria in their pre- qualification procedures to determine whether a contractor is qualified to bid. By using specific qualifications parameters in the selection process, the public agency can filter out unqual- ified contractors and can consider the contractor’s past per- formance record, thereby increasing the probability that the project will be completed successfully (Gransberg and Ellicott 1996). However, the key to public sector application of qual- ifications parameters in a bid is the use of these parameters in the selection process. Their application must be justifiable and defensible. Public agencies have used a broad range of evaluation criteria that fall within the best-value qualifications param- eter. The first advantage of using best-value qualification parameters is the ability to restrict competition to contrac- tors who have a proven track record of successfully complet- ing a specific type of highway construction project, ensuring that all bidders will have the technical skills and experience to produce a high quality product. Additionally, by not forc- ing bidders to compete with less qualified contractors, the owner will also receive a bid price that accurately reflects the scope of work and adequately compensates the contractor for assuming the project risk. This reduces the probability of a bid error and its attendant repercussions with respect to quality and timely completion. The third advantage is the ability of the owner to influence the general contractor’s subcontracting plan by elevating the importance of small business participation. Thus, a contractor may increase its potential to win the best-value contract by teaming with small business subcontractors. The final advantage is the ability to review and rate contractor project management plans before the contract is awarded. The disadvantage associated with qualifications parameters mainly concerns the possibility of creating barriers to contrac- tors who wish to participate in the competition but who cannot meet the narrow or unrealistically restrictive qualification requirements. This leads to potential accusations of favoritism, bid protests, and possible political difficulties during construc- tion. However, these concerns can be minimized by making the qualifications parameters match the project’s specific require- ments and ensuring that the best-value award system is pub- lished and totally transparent to industry (Parvin 2000). Quality The major advantage of using best-value quality parameters is the ability to review and rate contractor quality management plans before the contract is awarded. This has the potential to change the whole dynamic of quality management from an adversarial, compliance-based system to a competitive, award- to-the-best-plan system. Coupling this with some form of warranty or performance-based acceptance indicator creates a situation where the focus of the proposal is toward delivering quality. Contractors will have an incentive to deliver the quality as promised if they will likely be judged on this performance in future projects. Some of the case studies actually put an extended warranty pay item in the bid form, thus creating an environment that communicates the owner’s willingness to pay for the desired level of quality. One concern regarding this approach is that various factors may affect the functional abil- ity of the owner to enforce an extended warranty after con- struction is complete.Warranties will be formed with restrictive and exclusionary language that the owner’s facility operators must understand to avoid unintentionally invalidating the war- ranty through some error or omission.However,quality param- eters included in the RFP and enhancements included in the contractor’s proposal would become part of the final construc- tion contract and are therefore enforceable through standard contractual procedures. However, it should be noted that some agencies prefer not to include the proposal as a contract docu- ment, because of concerns that it may not fully meet all RFP requirements. This concern can be addressed through appro- priate contract language making it clear that the RFP prevails in the event that the proposal is noncompliant. If the proposal is not included in the contract documents, it is highly advisable to make sure that any features of the proposal that were the basis of the selection decision are incorporated into the contract, so that the contracting agency obtains the benefit of the bargain. Design Alternates Design criteria are a component of many best-value pro- curements, particularly when highway agencies are soliciting bid alternates under design-bid-build or using a design-build 12

13 delivery method. Design alternates have advantages and disadvantages, depending on the delivery method. The major disadvantage of using best-value design alternate parameters for design-bid-build projects relates to design liability considerations. In design-build, the owner sheds most of the design liability and transfers it to the design-build contractor who becomes a single point of responsibility for both design and construction issues. However, when an owner only allows a narrow amount of contractor-determined design scope, the responsibility for coordinating the contractor- proposed elements of work with the rest of the owner- designed construction project becomes less clear. One advantage of requesting design alternates is that it opens the door to potentially innovative design solutions for a specific design problem. Sometimes the design alternate could be a better material or a more efficient construction process. At other times, it could take advantage of a drop in the cost of a desirable material or system. In both cases, the construction contractor who is aware of the latest developments in materials and technology in its section of the industry will usually be in a better position to turn a design alternate into a timely advantage for a public agency’s project. Highway agencies have experimented with alternate bids for specific materials, construction items, or pavement types with some success and evaluated the value received in terms of life- cycle cost analysis.The State of Missouri experimented with five competitively bid pilot projects in 1996 using portland cement concrete and asphaltic concrete pavement alternates. The spec- ifications for these projects included an adjustment factor added to each asphalt concrete bid to reflect higher future reha- bilitation costs during the chosen 35-year design period. For example, based on historical records, the asphalt pavement would need rehabilitation at 15 and 25 years versus 25 years for concrete. Certain assumptions were made regarding the design life (35-year analysis period), future construction and mainte- nance costs, salvage values,and the discount rate, to calculate the life-cycle costs for each alternative for an equivalent analysis period. Of the five projects let, the low bidders used asphalt for three projects and concrete for two projects (Missouri 1994). The findings reported by Missouri indicated that • Alternate bids were in line with comparable projects and engineering estimates, and provided a savings through increased competition. • The asphalt and concrete industry questioned the assump- tions made regarding the expected design life, maintenance expenditures,pavement thicknesses,and rehabilitation needs to create a level playing field between the two alternates. • The state determined that life-cycle costs for the pavement alternatives need to be further refined to ensure that com- parisons are made on an equivalent basis and all future costs are taken into account. The success of bid alternates depends on the use of proven designs specified by the owner that can be evaluated for life- cycle costs on a reasonably equivalent basis. In summary, public agencies can create a set of potential variations on the theme of best value that is equal to the num- ber of statistical combinations that can be developed using two or more of the above best-value parameters. Looking at the case study projects, it is apparent that agencies have been experimenting with these variations in recent years. The case studies provide examples of agencies applying anywhere from two to eleven best-value parameters to a procurement from any or all of the best-value parameter categories. One conclusion emerging from this experience with best-value parameters is that the owner should customize the parameters for the needs of the given project rather than strive to find a one- size-fits-all standard system. To do otherwise would probably reduce the effectiveness of the project delivery system and create a procurement environment where minimal value, if any, could be accrued. In this vein, public agencies should also keep in mind that in many cases the tried and true design- bid-build and low-bid award system may indeed be the best delivery method for a specific project. Evaluation Criteria After defining the best-value parameters for a project, the agency must create an evaluation and award plan. This eval- uation plan will involve determining best-value evaluation criteria from the previously mentioned parameters, defining evaluation criteria rating systems, and defining a best-value award algorithm. Best-value evaluation criteria include those factors, in addi- tion to price, that add value to the procurement. Evaluation criteria vary on each project as illustrated in the detailed case studies in Appendix D. In addition to the detailed case studies, the research team summarized the best-value and eval- uation criteria from 50 RFPs as shown in Table 2.2, which illus- trates the additional information gleaned from the analysis of best-value RFPs collected during Phase I of this study. Those solicitation documents included both vertical (building) projects and horizontal (transportation/utility) projects.The population concentrated on design-bid-build/best-value RFPs specifically, but as best-value contracting is in its infancy in highway con- struction, the population also looked at design-build projects to find those types of evaluation criteria that would easily be trans- lated to design-bid-build/best-value contracts.The vertical proj- ects were surveyed for the same reason. It can be seen that most of the criteria fit into one of the best-value parameter definitions. Public agencies also must include regulatory evaluation criteria to comply with their local procurement law constraints. Additionally, the team conducted interviews, surveys, and case studies associated with the International Construction

Management Scan. This information is not shown in Table 2.2, but is incorporated into the analysis that follows the table. Looking at Table 2.2, one can see that cost and qualifications criteria are used most in all types of best-value contracts. Cost and qualifications criteria are used most in the international projects as well. Past performance, qualifications of key per- sonnel, and subcontracting/small business plans are the most popular of the qualifications parameter criteria. Of the six international projects reviewed, five used past project per- formance and six considered qualifications of key personnel. In the quality parameter group, evaluation criteria for quality management planning and warranties led the category. In the design parameter, criteria specifying an evaluation of techni- cal proposals were used in the majority of the RFPs. The heavy use of this criterion must be compared with the use of the “proposed design alternates” criterion to understand the amount of design detail the agencies were willing to allow the contractor to apply to the project. Fifteen of the case study projects using the “proposed design alternates” criterion were design-build projects requiring evaluation of proposed alternates. The other 11 projects were design-bid-build proj- ects where the agency asked for a design alternate for evalua- tion. Proposed environmental protection measures were also a popular aspect of design information that public agencies wanted to evaluate. Finally, among the design-related evalua- tion criteria, proposal responsiveness was the preeminent criterion as would be expected. The crux of communicating the requirements and selecting the best-value parameter for a project is in the owner’s devel- opment of definitive evaluation criteria. These criteria articu- late the quality, cost, schedule, and qualifications requirements for a given project. These criteria are the basis of the best-value procurement and become the foundation for the final contract. The evaluation criteria that were identified in the best-value case studies have been placed into four categories: • Management • Schedule • Cost • Design Alternate 14 Evaluation Criteria (1) Parameter Number of Contracts Using Evaluation Criteria (Total = 50) (2) Price Evaluation A.0 42 Low Bid A.0 7 Life-Cycle Cost A.1 2 Project Schedule Evaluation B.0 19 Traffic Maintenance B.2 3 Financial & Bonding Requirements P.0 35 Past Experience/Performance Evaluation P.1 44 Safety Record (or Plan) P.1 25 Current Project Workload P.1 17 Regional Performance Capacity (Political) P.1 4 Key Personnel & Qualifications P.2 41 Utilization of Small Business P.3 30 Subcontractor Evaluation/Plan P.3 29 Management/Organization Plan P.4 31 Construction Warranties Q.0 11 Construction Engineering Inspection Q.2 1 Construction Methods Q.3 1 Quality Management Q.4 27 Proposed Design Alternate & Experience D.0 26 Mix Designs & Alternates D.0 2 Technical Proposal Responsiveness D.1 37 Environmental Protection/Considerations D.1 25 Site Plan D.1 5 Innovation & Aesthetics D.1 5 Site Utilities Plan D.1 D.1 1 Coordination 1 Cultural Sensitivity D.1 1 Incentives/Disincentives 4 Best-Value I/D Table 2.2. Summary evaluation criteria as identified with best-value parameter from case study project population.

15 Each of these evaluation criteria categories corresponds to the parameters discussed in Section 2.4. Keep in mind, how- ever, that the management category includes both qualifica- tions and quality parameters. Management A strong argument can be made that the success of the best-value project depends on the people and organizations that are selected to execute it. This is because a well-qualified construction team with highly experienced team members can probably sort out the post-award technical issues regard- less of the quality and clarity of the technical requirements in the solicitation. Management criteria come in three gen- eral varieties: • Qualifications of the individual personnel • Past performance of the organizations on the best-value team • Plans to execute the project Many public owners include schedule in the management- planning portion of their best-value solicitations, but because it is a unique and overarching feature of the project environ- ment, it will be dealt with individually in the next section. Individual qualifications can generally be placed into to two broad categories. The first category is the professional credentials held by the individuals, that is, personal creden- tials that qualify an individual to perform a specific function on a team. One obvious requirement is proper licensure in the state in which the project will be built. This and certain other qualifications requirements are mandated by law and would have to be met even if not specifically articulated in the solic- itation. However, to avoid potential misunderstandings, it is good practice to publish evaluation criteria that are at least minimally responsive to legal requirements. In certain cases, it may be advisable to include requirements that exceed the minimum legal standards. The next category of qualifications is specific experience requirements. It is critical to the success of a project for the key members of the contractor’s team to have experience building similar projects. However, in developing evaluation criteria for personal experience, owners must not be arbitrary in setting the performance standard. For example, a require- ment for the project superintendent to have 20 years of expe- rience working on a particular type of project or on projects with a particular agency would probably exclude many indi- viduals who would be qualified for the job. In setting the experience requirements, agencies should also keep in mind that seniority requirements will drive up the personnel costs while reducing the competitive field of qualified candi- dates, and that high seniority requirements may exclude individuals who could perform the work competently even though their level of experience may be short of the arbitrary mark set in the solicitation. The past performance of the organizations is a criterion often used in prequalification and in most best-value solicitations—this is understandably the case because one of the reasons owners are interested in a best-value approach is to ensure that they can select the best contractor for the job. However, there are a number of issues associated with this cri- terion, and the contracting authority must carefully consider how to implement it such that it is accurate and unbiased and should evaluate the pros and cons when making the decision to use past performance in the evaluation. The federal gov- ernment and a number of state agencies have for many years maintained a database of contractor evaluations on past proj- ects and often use this resource as a means to measure the contractor’s track record. Despite certain drawbacks, this appears to be the best means of assessing past performance as it allows contractors the opportunity to appeal negative rat- ings. However, this type of system has been accused of being resource intensive, overly subjective or biased, and subject to challenge. Owners that do not have such systems in place may decide to address past performance by asking for evaluations from project owners for similar projects completed by the contractor in the recent past, often asking for specific data relating to schedule, cost, and claims performance on those specific projects. The use of these metrics can be controver- sial due to concerns relating to due process because the con- tractors do not have the opportunity to object to negative ratings and because of concerns regarding the validity of the information obtained. Careful consideration should there- fore be given to a decision to use such a process to ensure that appropriate questions are asked and that the results are both fair to the contractor and useful to the owner. The Ontario Ministry of Transportation (MTO) in Canada has developed a system to rate consultants’ and con- tractors’ past performance, which it began to implement in 2001. The Registry, Appraisal and Qualification System (RAQS) is used to prequalify consultants and contractors and is also used in what would be considered best-value selection in this report (Ministry of Transportation 2004). In addition to measuring financial status, the RAQS uses performance appraisals and infraction reports at the end of each project (no interims) to establish an overall perform- ance rating. The rating is maintained on a 3-year rolling average basis. Penalty adjustments are made for poor performance through an infraction process and contractor performance rating system. The MTO’s use of RAQS has enhanced their prequalification process and has allowed them to completely eliminate performance bonding require- ments for all construction contracts—saving approximately $2 million per year (Minchin and Smith 2001).

The MTO’s use of the performance rating is demon- strated by how they rate consultants to perform construc- tion administration. These consultants are selected on a combination of price, performance, and quality, at assigned percentages of 20%, 50%, and 30%, respectively. The system they have developed for conducting this assessment is called the Consultant Performance and Selection System (CPSS), which yields a Corporate Performance Rating (CPR). The following is a description of the consultant selection process taken from the CPSS Procedures Guide (Ontario Ministry of Transportation 2003; See RAQS website <https://www. raqsa.mto.gov.on.ca/>[viewed July 2004] Corporate Performance Rating • Past performance is measured by a consultant’s CPR, which is the weighted-average of a consultant’s appraisals over the last 3 years. • Appraisals for all types of capital project consultant assign- ments are included to calculate corporate CPR for each con- sultant. The CPR of a consultant firm is calculated by the following equation: Avg. Yr.1 = Average of all appraisals within the most recent 12 months Avg. Yr.2 = Average of all appraisals in 12 months prior to Year 1 Avg. Yr.3 = Average of all appraisals in 12 months prior to Year 2 • The following applies for calculating CPR: – When a consultant assignment is completed, an appraisal will be completed for the prime consultant only. A prime consultant is defined as the party who has signed the legal agreement with the MTO. Appraisals will not apply to subconsultants. – In the case of consortiums or legal partnerships, one over- all performance appraisal rating for the assignment will be completed. This rating will apply to each member of the consortium or partnership. – The MTO’s RAQS automatically calculates CPR on a quar- terly basis, for each consultant, using past performance appraisals (e.g., January 1, April 1, July 1, and October 1). – Only “approved” performance appraisals are included in the CPR calculation. An appraisal is “approved” if the con- sultant signs off the Performance Appraisal Form or does not respond within the 30-day time limit (to request a for- mal review). In case of a request by a consultant for a for- mal review, the appraisal is not considered approved until the completion of the regional manager review stage or the Qualification Committee review stage, depending on how far the consultant chooses to proceed with the review. Matters such as past experience, financial capability, bond- ing capacity, and other measures of financial solvency do not present the same issues as past performance criteria, since CPR 3(Avg.Yr. 1) 2(Avg.Yr. 2) 1(Avg.Yr. 3)= + + 6 they can be more readily determined in an objective manner. Experience requirements can readily be defined with refer- ence to years of experience, number of similar successful proj- ects, or a similar measure (Vacura and Bante 2003). Owners may also establish criteria for past joint performance or expe- rience of the various members of the contractor’s team such as major subcontractors and specialty consultants. The final category of management evaluation criteria that is typically included in a best-value procurement deals with the contractor’s management plans to execute the project. These plans can cover a multitude of issues that are important to the owner. The rule of thumb for deciding which plans to evaluate is to ask for those that cover areas that are critical to project success and will assist the owner in making the best- value award decision. It is a waste of both the owner’s and the proposer’s resources to require that the plan include aspects that are not significant to the project award decision. Thus, a solicitation might only ask for a specific solution to a critical construction safety problem rather than an entire project safety plan. The owner should develop proposal requirements that will enable the competitors to focus their limited resources available for preparation of proposals on submit- ting highly responsive proposals that address the key issues of the given project. The key plans that are addressed in most best-value solici- tations are as follows: • Construction quality management • Safety • Traffic control/congestion management • Environmental protection • Logistics management • Public outreach and information • Small business participation • Other management plans that are important to making the best-value award decision Table 2.3 shows the typical types of management evalua- tion criteria that were found in the case study data collection effort. When comparing these criteria with the associated number of occurrences for each type of criteria in Table 2.2, one finds that public owners currently use a wide range of management evaluation criteria to arrive at a best-value award decision. Thus, those agencies that are considering the implementation of best-value contracting have a broad base of public experience from which to draw. England, Finland, the Netherlands, and Scotland all include quality management plan in their evaluation criteria (Case Studies 18–20). Both Ontario and England have developed annual quality management rating systems that are used for both prequalification and best-value award. 16

17 The Ontario Government requires a quality management plan in the previously mentioned RAQS. To be qualified to bid construction contracts that require a qualification rating, contractors are required to submit a declaration showing that they have a Quality Management System (QMS). A QMS replaces traditional quality control (QC) plans as the “quality component” of the MTO’s qualification requirements. Con- tractors who wish to remain qualified, or become qualified, to bid for major MTO construction contracts must choose one of the following approaches: • Alternative 1: Annual declaration that the QMS meets MTO’s minimum requirements • Alternative 2: Annual declaration that the Company is cer- tified to ISO 9001 quality management standard To assist the qualifications based procurement, the High- ways Agency in England has recently developed the CAT (Highways Agency 2003). The CAT is a system for contrac- tors’ self assessment of their capabilities, which are com- bined with a past performance rating to develop a qualification based score for procurement. The system relies heavily on a company’s strategic management and quality management plans to establish the ratings. The CAT is a very structured qualifications assessment tool that was developed in consultation with industry. The CAT was developed using principles that underpin a number of business excellence models. The CAT considers what companies need to do to be effective. The CAT relies on the following capability attributes: 1. Direction and leadership 2. Strategy and planning 3. People 4. Partnering 5. Processes 6. Internal resources The following website contains detailed information on the implementation of the CAT process and how each of these attributes are scored: http://www.highways.gov.uk/roads/705.aspx Schedule Developing schedule evaluation criteria for the best-value selection is more than just setting a contract completion date. Anything that the owner knows that might have a material impact on the schedule must be disclosed in the solicitation. If the schedule is an item of competition (i.e., the owner allows the offerors to propose the schedule), definitive evalu- ation criteria must be established against which the proposal evaluation panel can rate the various proposals. Schedule cri- teria can be categorized in four general forms: • Completion criteria • Intermediate milestone criteria • Restrictive criteria • Descriptive criteria Developing completion criteria is quite straightforward. If the proposal date is set, the RFPs could simply provide that submittal of a proposal constitutes a commitment to com- plete by the stated date, or they could include a pass/fail requirement, such as the following statement: The proposal shall include a commitment to complete the project no later than [date]. Management Evaluation Criteria (1) Best-Value Parameter (2) Financial & Bonding Requirements P.0 Past Experience/Performance Evaluation P.1 Safety Record (or Plan) P.1 Current Project Workload P.1 Regional Performance Capacity (Political) P.1 Key Personnel & Qualifications P.2 Utilization of Small Business P.3 Subcontractor Evaluation/Plan P.3 Management/Organization Plan P.4 Construction Engineering Inspection Q.2 Construction Methods* Q.3 Quality Management Q.4 * Owner’s specialized means and methods to achieve desired quality levels. Table 2.3. Case study management evaluation criteria.

However, if the owner wants to ask the proposers to consider whether it will be possible to accelerate project mile- stones or project completion and take into account commit- ments to accelerate the schedule as part of the best-value evaluation, the RFPs for proposals will need to communicate the owner’s wishes to the proposers. In addition, the evalua- tion plan and rating system must give schedule an appropri- ate weight among all other rated categories. One way to communicate this concept is as follows: Offerors shall submit their proposed completion date and a critical path schedule that supports a completion no later than (date). Completion before that date is highly desirable, and pro- posals with an early completion will be given preference. Intermediate milestone criteria are called for if the owner needs to control the pace of the project. Often these criteria can be applied to those aspects of the project’s progress that are not completely controlled by either the owner or the con- tractor, such as the need to obtain permits from outside agencies. Another example would be a requirement to com- plete a portion of the project to be placed in service in advance of completion of the entire project or to require cer- tain work to be completed before proceeding with other work, a process commonly called “phased construction.” An example of this type of performance requirement is as follows: The critical path schedule shall show completion of all Phase I construction including receipt of all digging permits by (date). No Phase II work will proceed until Phase I work and permits have been inspected and accepted by the owner. Constraints that would prevent the contractor from being able to complete as fast as possible must be disclosed and are required to be included in the schedule. Items such as work hour restrictions, prohibition on performance of work dur- ing specified periods of time, limitations on work on holidays, and security precautions might all be addressed. The owner may request maintenance of traffic plans as part of the pro- posal and evaluate them in determining best value. An exam- ple of RFP language dealing with noise restrictions follows: The contractor shall minimize the use of construction means and methods that require the production of loud noise levels. The critical path schedule shall highlight in green those activities that routinely produce noise levels in excess of XX decibels. Those activities may not take place during normal business hours of 8:00 a.m. to 5:00 p.m., Monday through Friday nor late at night on any day of the week between the hours of 10:00 p.m. and 6:00 a.m. Additionally, the proposal will contain a calendar that shows those periods in which loud activities will be planned. Those proposals that show the fewest number of days that exceed the prescribed noise limit will be preferred. Descriptive schedule requirements are used to establish a uniform format for the proposal’s schedule-related submit- tals. The underlying concept is to put all proposals on a level playing field and thus facilitate equitable evaluation. In devel- oping these criteria, the owner should seek to minimize the “bells and whistles” on the schedule submittals reducing the submittal requirement to a stark, easy to analyze document. One way to do this is as follows: The critical path schedule shall be displayed as a bar chart with no more than 50 activities. The following major milestones shall be shown on the chart along with their associated completion date: (list of milestones such as major submittal completions, construction phase completions, final acceptance, etc.). Additionally, the owner can include recommendations in the RFP to influence the approach the contractor takes to the scheduling of the project. Table 2.4 lists different approaches to schedule evaluation found in the case study data collection effort. Table 2.2 shows the public agencies that constitute the case study population have frequently used best-value procure- ment to accelerate completion through an A+B formula and have also evaluated different approaches to traffic maintenance. Cost Properly written proposal submittal requirements give the owner an opportunity to obtain cost information from pro- posers allowing the owner to understand the best-value con- tractor’s thought process in developing the proposal and to obtain a competitive break-down of project costs to use later in change order negotiations. Often, cost information required to be included in the proposal can help communi- cate the relative importance of cost in the best-value award decision. Cost information can range from a simple require- ment to provide a lump sum amount to a complex require- ment to provide detailed elements of a build-operate-transfer 18 Schedule Evaluation Criteria (1) Best-Value Parameter (2) Project Schedule Evaluation (A+B) B.0 Project Completion B.0 Traffic Maintenance B.2 Table 2.4. Case study schedule evaluation approaches.

19 financing scheme. Generally, three types of cost information requirements and associated evaluation were found: • Cost limitations • Cost breakdowns • Life-cycle costs Cost limitations include cost constraints applicable to the project as well as cost-related goals for the project. Many solicitations contain only a single cost criterion: the proposed price. The following is a list of typical cost limitation criteria set by the owner: • Maximum price • Target price • Funds available • Public project statutory limits • Type of funding – Multiple fund sources – Fiscal year funding A maximum price criterion is a cost constraint that defines the allowable cost ceiling for the project. This criterion creates a constraint on the technical scope of work. In essence, the proposal must be developed within the limits established by the cost constraint, and the final proposal must not only com- ply with all the technical and schedule performance criteria but it must also be able to be delivered at or below the maxi- mum allowable price to the owner. Thus, if the owner is pro- viding the project design, measures must be taken to ensure that the design is consistent with the budget ceiling. This type of criterion would be used in the fixed price-best proposal best-value award algorithm. The following is an example of this type of requirement: The final firm fixed price shall not exceed $XX,XXX. A target price criterion operates in much the same man- ner as the maximum price criterion but is less restrictive. It conveys the level of overall quality the owner desires using financial rather than technical terms. Target price criteria are often stated as unit prices rather than lump sum amounts. The owner uses these criteria to constrain the proposed design alternates to proper cost levels and to help guide the contractor’s proposal development and to ensure that the proposed solution will be one that fits the owner’s intent. These criteria all serve to make these cost limitations a part of the final contract. For instance, requirements relating to a target price criterion using a lump sum amount would be as follows: The landscaping around bridges, interchanges, and rest areas, including sodding, trees, and plantings shall cost $XX,XXX ± Y% per site. The proposal shall contain a narrative describing the details of the proposed landscape plan for a typical area. Thus, the owner in this example is effectively telling the contractor the price payable for a specific feature of work and asking to be told how much quality will be provided in exchange for that fixed amount of money. Specifically, the contractor will be competing with other bidders to furnish as much landscaping as possible for the target price. Cost breakdown criteria establish a means for the owner to better understand the basis of the contractors’ price propos- als and help establish the foundation on which the cost of change orders and contract modifications will be negotiated. Under typical unit priced contracts for highway construction, this cost breakdown is essentially provided in the bid form. As previously stated, the price proposal is one mechanism that the owner has to evaluate the contractor’s understanding of the scope of work. Typically, the owner will have conducted its own estimate and will use this as a yardstick to measure the quality and completeness of each price proposal. (For federal- aid contracts, the owner’s estimate will be reviewed as part of the price reasonableness analysis conducted for such proj- ects.) The owner may also use cost breakdown criteria to eval- uate the realism and reasonableness of each feature of work’s value. An example of this is shown as follows: The price proposal shall be broken out as shown on the Price Proposal Form. To be deemed responsive, the value of each fea- ture of work shall not fall outside the range of ± 5% of the inde- pendent estimate for that feature of work. If any item does, the contractor will be so informed during discussions and asked to justify its proposed price in greater detail in its final proposal. Best-value procurement also allows an owner to take a longer look at the project’s ultimate costs and consider including life-cycle costs in the evaluation process, in addition to initial capital cost. Life-cycle cost criteria can be addressed through design alternates such as asking the contractor to propose the type of pavement it will use or through require- ments such as pricing of extended construction warranties that “lock in” future costs of maintenance and rehabilitation. Research has shown that the calculation of project life-cycle cost is a relatively straightforward application of engineer- ing economics (FHWA 1993). However, additional work is needed to form the algorithm by which a fair and equitable decision can be made as to the accuracy of the calculation. When using life-cycle cost criteria, the public owner must be aware of the actual ability of the offerors to guarantee a specific life-cycle cost for a given project. With the tools avail- able at this writing, the only means by which an owner can “lock in” a discrete value for annualized life-cycle costs is to award a contract that includes long-term operations and maintenance, long-term maintenance, or a long-term

warranty. Such contracts are not common for highway proj- ects and are most likely to be used for revenue generating projects such as toll roads. Proposals for a Design-Build- Maintain highway contract would include the price for the initial capital improvements, annual maintenance costs, and the costs of capital asset replacement necessary to ensure that the project will meet the specified standards at the end of the maintenance period prior to transfer of maintenance respon- sibility to the owner. The owner would evaluate the technical and price proposals, determine which proposer offered the best value based on the criteria specified in the RFPs, and would award the contract to the proposer offering the best value. To a significant extent, the risk that the actual costs will exceed their contract values is transferred to the contractor. Such contracts typically provide for certain types of costs to be passed back through to the owner; contractors are gener- ally opposed to accepting a total transfer of the risk except in the context of public-private partnerships where the private sector is granted a franchise to collect revenues. This approach has the advantage of tying the best-value contrac- tor financially to the actual success of the project after con- struction is complete. Thus, construction decisions will be made in the context of operability and maintainability rather than merely minimizing construction cost while delivering the specified standard of quality. The other method that an owner can use to ensure a proj- ect’s life-cycle cost after construction completion is using extended warranties, maintenance bonds, or both. The first approach requires the contractor to come back to the project to repair any defects in the project; the second would give the owner the right to call on the bond if project operation and maintenance costs exceed those promised in the winning pro- posal. It should be noted that the current surety market will not support bonds longer than 5 years. Also, as time passes the owner’s ability to call on either a warranty or a maintenance bond will be subject to the defense that the defect was caused by the owner’s failure to maintain or improper use. The owner bears the risk in both cases that the winning contrac- tor or surety may have gone out of business by the time a claim is made. Table 2.5 synopsizes the cost evaluation criteria that were found in the case study population. Life-cycle cost criteria were only found in two of the cases, and eleven cases used extended warranties. Most of the cases used some form of price evaluation beyond comparing low bids. Design Alternates Bidding of design alternates on highway construction proj- ects is not a new concept, but it is not a common practice in the United States. Nevertheless, traditional highway construction projects often contain limited requirements for design alternate com- ponents such as contractor-furnished/DOT-approved asphalt and concrete mix designs within owner established limits that are created as construction submittals, and such projects can be reviewed to determine how to factor design alternates into a best-value procurement. In addition, there is an extensive body of knowledge relating to evaluation of design alterna- tives for design-build projects. The only real difference between use of design alternates for design-bid-build high- way projects and use of design alternates for design-build highway projects lies in the scope of the proposed design work. In the arena of best-value competitive sealed bidding, contractors will only be asked to propose design solutions for a very narrow, discrete portion of the contract scope or a “pre-engineered” component. The amount of design work involved does not affect the process to be followed in evaluat- ing the merits of the design proposal, and as a result, knowl- edge gained in review of design-build proposals should be directly transferable to evaluation of design alternatives in connection with competitive sealed bid procurements. Table 2.6 shows typical design alternate evaluation criteria that were found in the case study population. Best-Value Evaluation Rating Systems Public owners have used a variety of evaluation (scoring or rating) systems. Many are quite sophisticated and some are quite simple. All can generally be categorized into the follow- ing four types of systems (see Figure 2.2): • Satisficing (more commonly called “Go/No-Go”) • Modified Satisficing • Adjectival Rating • Direct Point Score 20 Cost Evaluation Criteria (1) Best-Value Parameter (2) Price Evaluation A.0 Low Bid A.0 Life-Cycle Cost (of alternatives) A.1 Construction Warranties Q.0 Table 2.5. Case study cost evaluation criteria.

21 the number of alternatives to be evaluated. On the other hand, satisficing would not be an appropriate evaluation methodology for alternatives where the project owner wishes to take value-added features into account. Modified Satisficing Modified satisficing recognizes that there may be degrees of responsiveness to any given submittal requirement. As a result, the range of possible ratings is expanded to allow an evaluator to rate a given category of a proposal across a vari- ety of degrees. Thus, a proposal that is nearly responsive can be rated accordingly and not dropped from the competition due to a minor deficiency. Additionally, an offer that exceeds the published criteria can be rewarded by a rating that indi- cates that it exceeded the standard. Modified satisfied systems usually differentiate between minor deficiencies that do not eliminate the offeror from continuing in the competition and major or “fatal” deficiencies that cause the proposal to be immediately rejected. It is important for owners to include the definition of a fatal deficiency and its consequences in the solicitation. The simplest of the forms of modified satisficing that are currently in use is the “red-amber-green” system with the definitions for each rating are as follows: • Green—fully responsive to the evaluation criteria • Amber—not responsive, but deficiency is minor • Red—not responsive due to fatal deficiency Design Alternate Evaluation Criteria (1) Best-Value Parameter (2) Proposed Design Alternate & Experience D.0 Mix Designs & Alternates D.0 Technical Proposal Evaluation D.1 Environmental Protection/Considerations D.1 Site Plan D.1 Innovation & Aesthetics D.1 Site Utilities Plan D.1 D.1Coordination Cultural Sensitivity D.1 Table 2.6. Case study design alternate evaluation criteria. Satisficing Modified Satisficing Adjectival Rating Direct Point Scoring Simple Quick Bimodal Outcome Assessment Accuracy not Critical Complex Requires Analysis Array of Outcomes Assessment Accuracy Critical Figure 2.2. Best-value evaluation rating system continuum. Satisficing Satisficing is the simplest and easiest evaluation system to understand for evaluators and bidders. To use it, the evalua- tion planner must establish a minimum standard for each and every evaluation criterion against which the proposals can be measured. This is relatively simple for certain kinds of criteria such as qualifications standards. Satisficing is often referred to as “Go/No-Go” by the industry. According to U.S. Army Materiel Command, the definition of evaluation standards is “a baseline level of merit used for measuring how well an offeror’s response meets the solicita- tion’s requirements. Standards are usually a statement of the minimum level of compliance with a requirement which must be offered for a proposal to be considered acceptable.” Given these minimal values, the evaluators decide whether or not alternatives are acceptable.Because of its strong intuitive appeal, satisficing has long been used as an assessment technique (Mac- Crimmon 1968). With the satisficing method, it is possible to successively change the minimal requirements and hence to successively reduce the feasible set of alternatives. Numerical information about values is unnecessary, but can be used just as easily if the information happens to come in numerical form. Satisficing is an “all or nothing” process, thus it is not crit- ical to determine an accurate value for alternatives. An alter- native is either acceptable or not acceptable. An alternative that exceeds the minimum would merely be considered acceptable, regardless of the amount of value added. The main advantage of satisficing is that it can be used to reduce

The next step in the modified satisficing evaluation process is to roll-up the individual ratings for each evaluation crite- rion and arrive at an overall rating for each proposal. Table 2.7 shows the approach used in solicitations from two military best-value projects. One notices that both agencies use color coding to make it easier to identify the areas in which a par- ticular proposal offers advantages to the government. The Army distinguishes between proposals that offer advantages to the government and those that offer significant advantages, while the Air Force provides only one category for proposals that exceed the minimum requirements. The reader should note that the examples of modified sat- isficing evaluation systems in Table 2.7 are not examples of the standard for all projects in either of the two military departments. They were pulled from solicitations that were developed specifically for the projects for which they were written. They do furnish excellent examples of how two dif- ferent owners defined the ratings that were used on two typ- ical projects. Additionally, the reader should note that the definitions shown in Table 2.7 were published in the respec- tive RFPs. Thus, the contractors were cognizant of the evalu- ation scheme and could craft their proposals accordingly. It should also be noted that the definition of each rating is clear and offers a standard against which the evaluators can meas- ure each individual proposal. Adjectival Rating Adjectival rating systems use a specific set of adjectives to describe the conformance of an evaluated area within a proposal to the project’s requirements in that area. Adjectival rating systems are an extension of modified satisficing. They recognize that a more descriptive rating system is in order and that the rating system should be continuous rather than dis- crete. Table 2.8 illustrates how one owner developed a series of adjectival criteria to rate different components of a proposal. There are three important elements of an adjectival rating system: • Definitions • Performance indicators • Differentiators Each adjectival rating must have all three. The definition must be both clear and relevant to the specific factor being evaluated. It should portray to the evaluators the essence of what the evaluation plan writer intends to be identified and rated. In the example provided in Table 2.8, the definition provided for “Proposal Risk” indicates that evaluators are to assess and rate the “weaknesses and strengths associated with the proposed approach as it relates to accomplishing the requirements of the solicitation.” Following along with this example, the rating will take the form of one of three adjec- tives, “high,” “moderate,” or “low.” Each of these adjectives is then defined in terms of a performance indicator that is cogent to the factor that is being evaluated. The evaluators will use the indicator as a marker with which to determine the appropriate rating for the evaluated element. Again looking to the example provided in Table 2.8, the performance indi- cators associated with proposal risk include the potential to disrupt the schedule, increase costs, or degrade performance. To assist the evaluators with those proposals that seem to 22 Army Rating Definition Air Force Rating Definition Dark Blue Proposal meets the minimum SOLICITATION requirements for this item and has salient features that offer significant advantages to the Government. Blue Exceeds specified minimum performance or capability requirements in a way beneficial to the Air Force. Purple Proposal meets the minimum SOLICITATION requirements for this item and has salient features that offer advantages to the Government. N/A N/A Green Proposal meets the minimum SOLICITATION requirements for this item. Green Meets specified minimum performance or capability requirements necessary for acceptable contract performance. Yellow Proposal meets most of the minimum requirements for this item, but offers weak area or mimics SOLICITATION language rather than offering understanding of the requirements. Yellow Does not clearly meet some specified minimum performance or capability requirements necessary for acceptable contract performance, but any proposal inadequacies are correctable. Red Proposal meets some but not all the minimum requirements for this item or does not address all required criteria. Red Fails to meet specified minimum performance or capability requirements. Proposals with an unacceptable rating are not awardable. Table 2.7. Modified satisficing examples (USAED, New York 2002; U.S. Air Force 2001).

23 straddle two adjectival grades, differentiators are also pro- vided to further distinguish between the grades. In the exam- ple, a “low” proposal risk is described as one for which difficulties will probably be overcome through normal con- tractor effort and normal government monitoring. In con- trast, a “moderate” proposal risk suggests that special contractor emphasis and close government monitoring will likely be needed to overcome difficulties. Direct Point Scoring Direct point scoring evaluation allows for more rating lev- els and thus may appear to give more precise distinctions of merit. However, point scoring may lend an unjustified air of precision to evaluations, providing an appearance of objec- tivity even though the underlying ratings are inherently sub- jective. Evaluators assign points to evaluation criteria based on some predetermined scale or the preference of the evalu- ator. Case Study 14: Maine DOT Bridge, in Appendix D, illus- trates the direct point scoring system through the use of a percentage defining a raw score definition as follows that is then translated into the final point allocation. Raw Score Definition 0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% Marginal Average Exceptional Table 2.8. Example adjectival rating for three different evaluated areas (U.S. Air Force 2001). Evaluated Area Adjectival Rating Evaluation Plan Definition PROPOSAL RISK Proposal risk relates to the identification and assessment of the risks, weaknesses and strengths associated with the proposed approach as it relates to accomplishing the requirements of the solicitation. High Likely to cause significant disruption of schedule, increased cost, or degradation of performance. Risk may be unacceptable even with special contractor emphasis and close government monitoring. Moderate Can potentially cause some disruption of schedule, increased cost, or degradation of performance. Special contractor emphasis and close government monitoring will probably be able to overcome difficulties. Low Has little potential to cause disruption of schedule, increased cost, or degradation of performance. Normal contractor effort and normal government monitoring will probably be able to overcome difficulties. PERFORMANCE RECORD More recent and relevant performance will have a greater impact on the Performance Confidence Assessment than less recent or relevant effort. A strong record of relevant past performance will be considered more advantageous to the government. Exceptional High Confidence Based on the Offeror’s performance record, essentially no doubt exists that the Offeror will successfully perform the required effort. Very Good Significant Confidence Based on the Offeror’s performance record, little doubt exists that the Offeror will successfully perform the required effort. Satisfactory Confidence Based on the Offeror’s performance record, some doubt exists that the Offeror will successfully perform the required effort. Neutral/Unknown Confidence No performance record identifiable. Marginal Little Confidence Based on the Offeror’s performance record, substantial doubt exists that the Offeror will successfully perform the required effort. Changes to the Offeror’s existing processes may be necessary in order to achieve contract requirements. Unsatisfactory No Confidence Based on the Offeror’s performance record, extreme doubt exists that the Offeror will successfully perform the required effort. RELEVANCY OF PAST PROJECTS Past projects will be compared with the solicitation and those that involved features of work that are similar in size, scope, and technical complexity will be considered relevant. Highly Relevant The magnitude of the effort and the complexities on this contract are essentially what the solicitation requires. Relevant Some dissimilarities in magnitude of the effort and/or complexities exist on this contract, but it contains most of what the solicitation requires. Somewhat Relevant Much less or dissimilar magnitude of effort and complexities exist onthis contract, but it contains some of what the solicitation requires. Not Relevant Performance on this contract contains relatively no similarities to the performance required by the solicitation.

Some agencies use adjectival ratings as the basis of direct point scoring systems. These should still be considered direct point scoring methods, but the adjectival ratings are used to narrow down the scoring to within ranges. Case Study 19: Forth Road Bridge Toll Equipment, in Appendix D, provides a simple example of a direct point scoring system that is based on adjectives shown in Table 2.9. The Washington State DOT I405 Kirkland Stage I HOV Design-Build RFP provides a more detailed direct point scor- ing system that is based on adjectival ratings. • Excellent (90–100%): The Proposal demonstrates an approach that is considered to significantly exceed the RFP requirements/objectives in a beneficial way (providing advantages, benefits, or added value to the Project) and provides a consistently outstanding level of quality. In order for the Proposal to meet the minimum criteria to be considered to be Excellent, it must be determined to have a significant strength and/or a number of strengths and no weaknesses. The minimum score for Excellent is 90%. The greater the significance of the strengths and/or the number of strengths will result in a higher percentage, up to a max- imum of 100%. There is no risk that the Proposer would fail to meet the requirements of the RFP. • Very Good (80–89%): The Proposal demonstrates an approach that is considered to exceed the RFP require- ments/objectives in a beneficial way (providing advantages, benefits, or added value to the Project) and offers a gener- ally better than acceptable quality. In order for the Proposal to meet the minimum criteria to be considered to be Very Good, it must be determined to have strengths and no sig- nificant weaknesses. The minimum score for Very Good is 80%. The greater the significance of the strengths and/or the number of strengths, and the fewer the minor weak- nesses will result in a higher percentage, up to a maximum of 89%. There is very little risk that the Proposer would fail to meet the requirements of the RFP. • Good (70–79%): The Proposal demonstrates an approach that is considered to meet the RFP requirements/objectives and offers an acceptable level of quality. In order for the Proposal to meet the minimum criteria to be considered to be Good, it must be determined to have strength(s), even though minor and/or significant weaknesses exist. The minimum score for Good is 70%. The greater the signifi- cance of the strengths and/or the number of strengths, and the fewer the minor or significant weaknesses will result in a higher percentage, up to a maximum of 79%. The Pro- poser demonstrates a reasonable probability of meeting the requirements of the RFP. • Non-responsive (0–69%): The Proposal demonstrates an approach that contains minor and/or significant weak- nesses and no strengths. The Proposal is considered not to meet the RFP requirements and may be determined to be non-responsive. The direct point scoring system or variations of it are used by many transportation agencies. However, federal agencies do not typically use such a system because the use of numer- ical rating systems in conjunction with specific percentage weightings for the factors requires the source selection authority to convert the decision-making process to a for- mula without knowing what will be offered. Such a process allows virtually no discretion to the selection official. Direct point scoring evaluation is probably the most com- plex best-value evaluation method. One of its weaknesses is the variation that is induced by evaluators who are assigning numerical scores to the same category. Even if the evaluators are restricted to using integers, each individual will have his or her own methodology for arriving at a point score. Thus, it becomes difficult for the owner to ensure that the evalua- tion system is both fair and uniformly applied to all propos- als. Fundamentally, two engineers looking at the same thing can probably agree on whether or not it is satisfactory or unsatisfactory (i.e., an adjectival rating), but getting them to agree on exactly how many points a given category should be awarded will be much more difficult. For the evaluators, this presents a psychological issue rather than a technical issue, which is sometimes dealt with by resolving outlier scores through the use of adjectival ratings that are then converted to numbers. 24 Table 2.9. Direct point scoring example from Case Study 19. Standard Delivery Level Mark Very high standard Proposals likely to exceed all delivery targets 10 Good standard Proposals likely to meet all delivery targets and exceed some delivery targets 8-9 Acceptable standard Workable proposals likely to achieve all or most delivery targets 5-7 Poor standard Significant reservations on service delivery targets but not sufficient to warrant exclusion of bid 1-4 Not acceptable Bid excluded from further consideration 0 Service

25 Direct point scoring evaluation’s greatest strength is the flexibility of the scale on which each proposal is rated. If the owner does not require its evaluation panel to achieve con- sensus, but rather chooses to use an average of the individual scores, direct point scoring in effect becomes an “expert sys- tem” in every sense of the computer-related definition of that term of art. This becomes valuable in those projects where the salient aspects of the project are hard to quantify. Direct point scoring evaluation allows the average numerical ratings to act as the collective expert. However, an averaging approach has the potential for allowing a single evaluator with a bias to affect the outcome. Some agencies eliminate the high and low scores in order to reduce the likelihood of this type of problem. Best-Value Award Algorithms Best-value award algorithms define the steps that owners take to combine the parameters, evaluation criteria, and evalu- ation rating systems into a final award recommendation. Seven best-value award algorithms have been found through a com- prehensive analysis of the literature, case studies, and project procurement documents. Building, water/wastewater, indus- trial, and highway projects from both the public and the private sector were analyzed. The seven algorithms are as follows: • Meets technical criteria—low bid • Adjusted bid • Adjusted score • Weighted criteria • Quantitative cost—technical tradeoff • Qualitative cost—technical tradeoff • Fixed price—best proposal A description of each of these procedures follows. The algorithms are described through formulas and illustrated through generic examples. Case studies illustrating each of the algorithms can be found in Appendix D. Meets Technical Criteria—Low Bid In the meets technical criteria—low-bid algorithm, the final award decision is based on price. Technical proposals are evaluated before any cost proposals are reviewed. The price proposal is opened only if the technical proposal is found to have met the minimum requirements. The tech- nical proposal review can be done on a pass/fail basis or using numerical ratings with a predetermined minimum score required for the proposal to be considered respon- sive. If the proposal does not meet the minimum stan- dards, it is deemed non-responsive and the associated price proposal will not be opened. The price proposals associ- ated with responsive technical proposals are then opened, often publicly, and the contract is awarded to the proposer offering the lowest price. See the following generic algorithm and Table 2.10. Case Studies 9, 10, and 12 in Appendix D also provide examples of the meets technical criteria—low-bid algorithm. Algorithm: If T > Tmin, Award to Pmin If T < Tmin, Non-responsive Tmin = Determination that proposal meets minimum technical requirements P = Project Price Adjusted Bid The adjusted bid algorithm requires use of numerical scor- ing (or adjectival ratings converted to numbers). Price pro- posals are opened after the technical proposals are scored. When the price proposal is opened, the project price is adjusted in some manner by the technical score, typically through the division of price by a technical score from 0–1 or 0–100. The adjusted bid is used only for project award. The contract price will be based on the amount stated in the price proposal. The offeror with the lowest adjusted bid will be awarded the project. See the following generic algorithm and Table 2.11. Case Study 14 in Appendix D also provides an example of the adjusted bid algorithm. Algorithm: AB = P/T Award ABmin AB = Adjusted Bid P = Project Price T = Technical Score Offeror Technical Score (60 maximum) (40 minimum) Price Proposal 1 $1,400,000 2 $1,200,000 3 $1,100,000 4 NR 51 53 44 39 Table 2.10. Meets technical criteria—low-bid example.

Adjusted Score The adjusted score algorithm also requires use of numeri- cal scoring (or adjectival ratings converted to numbers). The price proposals are opened after the technical proposals are scored. The adjusted score is calculated by multiplying the technical score by the total estimated project price and then dividing by the price proposal. Award is made to the offeror with the highest adjusted score. See the following generic algorithm and Table 2.12. Case Study 11 in Appendix D also provides an example of the adjusted score algorithm. Algorithm: AS = (T x EE)/P Award ASmax AS = Adjusted Score T = Technical Score EE = Engineer’s Estimate P = Price Proposal Weighted Criteria The weighted criteria algorithm also requires use of numerical scoring (or adjectival ratings converted to num- bers). The technical proposal and the price proposal are evaluated individually. A weight is assigned to the price and each of the technical evaluation factors. The sum of these val- ues becomes the total score. The offeror with the highest total score is selected. See the following generic algorithm and Table 2.13. Case Studies 4, 5, and 8 in Appendix D also pro- vide examples of the weighted criteria algorithm. Algorithm: TS = W1S1 + W2S2 + ... + WiSi + W(i+1)PS Award TSmax TS = Total Score Wi = Weight of Factor i Si = Score of Factor i PS = Price Score Quantitative Cost-Technical Tradeoff The quantitative cost-technical tradeoff algorithm also requires use of numerical scoring (or adjectival ratings con- verted to numbers). It involves calculating the technical score and the price score increment and then examining the differ- ence between the incremental advantages of each. The incre- ment in the technical score is calculated by dividing the highest technical score by the next highest technical score minus one multiplied by 100%. The increment in price score is calculated by dividing the highest price score by the next highest price score minus one multiplied by 100%. The award is made to the offeror with the lowest price, unless the higher priced offers can be justified through a higher technical value. This justification is made by determining whether the added increment of price is offset by an added increment in techni- cal score. See the following generic algorithm and Table 2.14. Case Study 13 in Appendix D also provides an example of the quantitative cost-technical tradeoff algorithm. Algorithm: Order offers by increasing price proposals TIncrement = [(Tj/Ti) – 1] x 100% 26 Offeror Technical Score Price Proposal Adjusted Bid 1 0.85 $1,200,000 $1,411,765 2 0.95 $1,250,000 $1,315,789 3 0.90 $1,150,000 $1,277,777 4 0.70 $1,100,000 $1,571,429 Table 2.11. Adjusted bid example. Offeror Technical Score* (1,000 maximum) Price Proposal Calculations (Engineer’s Estimate = $10 million) Adjusted Score* 1 930 $10,937,200 930 x 10 6 10,937,200 85 2 890 9,000,000 890 x 10 6 9,000,000 99 3 940 9,600,000 940 x 10 6 9,600,000 98 4 820 8,700,000 820 x 10 6 8,700,000 94 * Note: Technical Score —(Sum of Tec hnical Sco re f or al l evaluation fac tors); Adjuste d Sc ore = (T echnical Score x 1, 00 0, 0 0 0 )/Price Pr oposal ($ ) Table 2.12. Adjusted score example.

27 PIncrement = [(Pj/Pi) – 1] x 100% If TIncrement < Increment, Award Proposali If TIncrement > PIncrement, Retain Proposalj for possible award and repeat with Proposalj+i Repeat Process until TIncrement > PIncrement T = Technical Score P = Price Proposal In this example, because the difference between the low and second low price proposals is 8%, the difference in the weighted scores of the two proposals should be greater than 8% to justify expending the additional increment of cost. In this case, the 33% difference in weighted scores and corre- sponding 8% increase in price indicates that Proposal #2 is a better value than Proposal #1. This is not the case when com- paring Proposal #2 to Proposal #3—the 3% increase in cost is not justified by the 1% increase in technical score. Thus, the best value in this example is Proposal #2. Qualitative Cost-Technical Tradeoff The qualitative cost-technical tradeoff is used by many fed- eral agencies under the FAR. This method relies primarily on the judgment of the selection official to determine the relative advantages offered by the proposals following a review of the evaluation ratings and prices (Army 2001). The final decision consists of an evaluation, comparative analysis, and tradeoff process that often require a subjective judgment on the part of the selecting official. Figure 2.3 depicts the qualitative cost- technical tradeoff algorithm as described in the Army Source Selection Guide (Army 2001). Case Studies 1, 2, and 3 in Appendix D provide examples of the qualitative cost-technical tradeoff algorithm. The tradeoff analysis is not conducted solely with the rat- ings and scores. The selection official must analyze the differ- ences between the competing proposals and make a rational decision based on the facts and circumstances of the specific acquisition. Although different selection officials may not necessarily come to the same conclusion, the same criteria must be met in all cases. Specifically, the decision must • Represent the selection official’s rational and independent judgment, • Be based on a comparative analysis of the proposal, and • Be consistent with the solicitation evaluation factors and subfactors. Fixed Price—Best Proposal The fixed price—best proposal algorithm is based on the premise that the project owner will establish either a maxi- mum price or a fixed price for the project. Each Offeror must submit a technical proposal accompanied by an agreement to perform the work within the specified pricing constraints. The award is based only on the technical proposal evaluation. The offeror that provides the best technical proposal will be selected. See the following generic algorithm and Table 2.15. Case Study 6 in Appendix D also provides an example of the fixed price—best proposal algorithm. Algorithm: Award Tmax, Fixed P T = Technical Rating P = Project Price Offeror Technical Score* (60 maximum) Calculation of Price Score Price Score (40 maximum) Calculation of Total Score Total Score (100 maximum) 1 51 $1,000,000 x 40$1,200,000 33 51 + 33 = 84 2 53 $1,000,000 x 40$1,250,000 32 53 + 32 = 85 3 44 $1,000,000 x 40$1,100,000 36 44 + 36 = 80 4 39 $1,000,000 x 40$1,000,000 40 39 + 40 = 79 * Note: Sum of technical scores for all evaluation factors defined in the technical review evaluation plan. Table 2.13. Weighted criteria example. Proposal Price Weighted Score Price Increment Score Increment 1 .0 M 300 -- -- 2 $4.3 M 400 + 8% + 33% 3 $4.4 M 405 + 3% + 1% $4 Table 2.14. Quantitative cost-technical tradeoff example.

Industry Applications of Best-Value Award Algorithms Table 2.15 illustrates the additional information gleaned from the analysis of best-value RFPs collected during the first phase of this study. The following case study summary is based on the same 50 cases previously presented in Table 2.1 in the best-value parameter section of this chapter. As shown in Table 2.16, it is very simple to classify the various agency best-value methodologies into the seven generic best-value award algorithms proposed in this study. All seven of the best-value award algorithms are represented in the case studies. The generic classification of the award algo- rithms provides a baseline for comparison among agencies.Fig- ure 2.4 depicts the frequency of use for the award algorithms. The qualitative cost-technical tradeoff and the weighted cri- teria algorithms are the most frequently used and make up one- half of the sample population. The adjusted score, adjusted bid, and meets technical criteria–low-bid algorithms are approxi- mately equal in number and constitute 44% of the sample. The quantitative cost-technical tradeoff and the fixed-price–best proposal algorithms represent only 6% of the sample. Comparison of Award Algorithms Ultimately, no matter which algorithm is selected, the owner must have a result that allows it to differentiate a less competent contractor with a low bid from a more highly competent contractor whose proposal adds value to the proj- ect. The next step is to differentiate between those apparently competent and valuable proposals to determine which pro- posal is the optimum combination of price and non-price factors that delineate the true best value. Meets technical criteria–low bid (cost) is defined as any selection process where the eventual award will be made to the lowest priced, fully qualified and/or responsive bidder. This category includes the processes named “equivalent design/low bid” and “meets criteria/low bid” and the FAR method named “fully responsive–lowest price” as well as other variations on this theme. As a general rule, the low-bid approach was preferred on projects where the scope was very tight and clearly defined, and innovation or alternatives were not being sought. This might include highway projects with a specified type of pavement, 28 Offeror Technical Score (100 maximum) 1 91 2 93 3 84 4 79 Table 2.15. Fixed price—best proposal example. Lowest priced proposal is the superior proposal in terms of non-cost proposal Proposals are essentially equal in terms of non-cost factors Conduct tradeoff analysis Award to offeror that represents the best value Award to lowest priced offeror NO NO YES YES Figure 2.3. Decision model of determining the successful offeror using qualitative cost-technical tradeoff (Army 2001).

29 State/Agency Agency Terminology Remarks Best-Value Award Algorithm Alaska DOT Criterion Score Divide Technical Score by Price Adjusted Score Arizona DOT Quality Adjusted Price Ranking Percentage system used to adjust bid price for technical score Adjusted Bid Colorado DOT Pre- 1999 Low Bid, Time Adjusted Multi-parameter bid with qualifications Meets Technical Criteria—Low Bid Colorado DOT Post-1999 Best Value May use weighted criteria to arrive at an adjusted score Adjusted Score Delaware DOT Competitive Proposals Design Alternates, Qualifications, Scheduled, and Price scored Weighted Criteria District of Columbia DPW Best Value Adds owner contract administration costs to price Adjusted Score Florida DOT Adjusted Score May also include time adjustment Adjusted Score Georgia DOT Low Bid, Prequalified Short list by qualifications Meets Technical Criteria—Low Bid Idaho DOT Weighted Selection Cost 51%; Qualifications/Past Experience 49% Weighted Criteria Indiana DOT Low Bid, Fully Qualified Minimum technical score to be found qualified Meets Technical Criteria—Low Bid Maine DOT Overall Value Rating Divide Price by Technical Score Adjusted Bid Mass Highway Best Value Included life-cycle cost criteria Weighted Criteria Michigan DOT Low Composite Score Divide Price by Technical Score Adjusted Bid Minnesota DOT Low Bid, Fully Qualified Short list by qualifications Meets Technical Criteria—Low Bid Missouri DOT Low Bid + Additional Cost Additional costs include life-cycle cost calculation Meets Technical Criteria—Low Bid New Jersey DOT Modified Low Bid Included design costs Meets Technical Criteria—Low Bid North Carolina DOT Quality Adjusted Price Ranking Percentage system used to adjust bid price for technical score Adjusted Bid Ohio DOT Low Bid Includes design costs Meets Technical Criteria—Low Bid Oregon DOT Best Value Combine technical with cost by weights Weighted Criteria South Carolina DOT Low Composite Score Divide Price by Technical Score Adjusted Bid South Dakota DOT Best Value Divide Price by Technical Score Adjusted Bid Utah DOT Best Value Combine technical with cost by weights Weighted Criteria Virginia DOT Two Step Selection Qualifications/Experience in Step 1 and Price and Technical in Step 2 Weighted Criteria Washington DOT High Best-Value Score Divide Technical Score by Price Adjusted Score Alberta, Canada, Ministry of Highways Value Index Divide Technical Score by Price Adjusted Score Alameda Transportation Corridor Agency Lowest Ultimate Cost Add Price to Authority’s Costs Associated with Proposal Meets Technical Criteria –Low Cost Table 2.16. Best-value award algorithm case study summary. geometric design, and minimal ancillary works. It also is used on building projects where the owner has completed most of the design development and the contractor only needs to com- plete the final construction documents. If the “cost” element is added to the selection process, it can also be used for more complex projects where different proposals impact life-cycle costs, right-of-way expense, or other costs incurred by the proj- ect owner. The adjusted bid algorithm is identified by the act of divid- ing the price by some factor related to the technical evaluation. Its thrust is to logically modify the price in a manner that reflects the value of the underlying proposed qualitative factors. Its selection as an award algorithm indicates that price is an impor- tant consideration but that some other aspects of the project must be included in the algorithm to determine best value. This is in effect a unit pricing of quality (Gransberg et al. 1999). Adjusted score is the mathematical reciprocal of adjusted bid. In this case, some function of the technical score is divided by the proposed price to give an index in the units of technical points per dollar. It would follow that the adoption (continued)

30 State/Agency Agency Terminology Remarks Best-Value Award Algorithm City of Reno, Nevada Best Value Qualifications & Past Performance equal to Price Weighted Criteria City of Santa Monica, California RFP Process Requires Guaranteed Maximum Price and life-cycle criteria Qualitative Cost- Technical Tradeoff City of Wheat Ridge, Colorado RFP Process Uses Weighted Criteria approach to arrive at technical score Fixed Price/Best Design District of Columbia Schools Best Value Responsiveness check for qualifications, experience & subcontracting plan Award to lowest, fully responsive bid Meets Technical Criteria—Low Bid Federal Bureau of Prisons Best Value Uses Weighted Criteria approach to arrive at technical score Qualitative Cost- Technical Tradeoff Federal Highway Administration Best Value Adds owner contract administration costs to price. Uses Adjusted Score formula to differentiate between bids Quantitative Cost- Technical Tradeoff Fort Lauderdale County, Florida Selection/Negotiation Requires Guaranteed Maximum Price Weighted Criteria General Services Administration Best Value Uses Weighted Criteria approach to arrive at technical score Qualitative Cost- Technical Tradeoff Los Alamos National Laboratory Best Value Two phase selection Weighted Criteria Maricopa County, Arizona Quality Adjusted Price Ranking Uses Weighted Criteria approach to arrive at technical score. Then computes a “$-value” of technical proposal and subtracts from price Adjusted Bid Naval Facilities Engineering Command Best Value Uses Weighted Criteria approach to arrive at technical score Qualitative Cost- Technical Tradeoff Nashville County, Tennessee Competitive Sealed Proposals Qualifications, Management Plan and Price plus Warranty Adjusted Score National Aeronautics and Space Administration Best Value Uses Weighted Criteria approach to arrive at technical score Qualitative Cost- Technical Tradeoff National Institute of Standards and Technology Best Value Uses Weighted Criteria approach to arrive at technical score Qualitative Cost- Technical Tradeoff National Park Service Best Value Uses “technically acceptable” approach to arrive at technical score Qualitative Cost- Technical Tradeoff Pentagon Renovation Program Office Best Value Uses Weighted Criteria approach to arrive at technical score; includes incentive clauses Qualitative Cost- Technical Tradeoff Seattle Water Department Best Value Uses Weighted Criteria approach to arrive at technical score Cost-Technical Tradeoff University of Colorado Best Value Qualifications/Experience in Step 1 and Price and Technical in Step 2 Weighted Criteria University of Nebraska Best Value Qualifications/Experience in Step 1 and Price and Technical in Step 2 Weighted Criteria Table 2.16. (Continued) of this approach would signal that the owner is less con- cerned about cost than quality. The adjusted score approach seems to work well when overall outcomes can be clearly defined and a number of alternatives exist which could pro- vide the desired outcomes. This could include public build- ings where the owner has some design constraints but is open to innovative solutions within the constraints. It has also been used in highway projects where alternative geo- metric designs and material types are acceptable or water treatment plants where the owner wants to evaluate alterna- tive treatment processes. The definition of weighted criteria is the broadest defini- tion of all best-value algorithms. The weighted criteria algo- rithm is selected when innovation and new technology are to be encouraged or specific types of experience are required to obtain the desired outcome. This approach may also be used when a fast track schedule is required or when constructabil- ity is inherent to the successful execution of the project. The (continued)

31 weighted criteria algorithm has the advantage of distinctly communicating the owner’s perceived requirements for a successful proposal through the weights themselves. For instance, if a project owner is very concerned about the archi- tectural appearance of the project, a disproportionate weight can be given to the evaluation criteria that directly define the ultimate aesthetic appeal. On the other hand, if an owner is concerned that the project’s program might exceed the avail- able budget, price can be given a weight of greater than 50% of the total. Thus, best-value bidders will be encouraged to propose design alternates that will reduce the price or will only cause a minimal price increase. Next, both qualitative and quantitative cost-technical tradeoff are algorithms that include the federally mandated variations of best-value award and those jurisdictions where technical and price must be evaluated separately (USACE 1994, NAVFAC 1996). The qualitative cost-technical best-value algo- rithm could be the most subjective of all the award algorithms. In essence, the owner compares the value of the various features of the technical, schedule, and organization against the pro- posed price, and, using professional judgment, determines if the aspects of a given proposal justify its price and whether the additional positive attributes of a higher bid are worth more than the attributes contained in the low bidder’s proposal. The quantitative cost-technical tradeoff best-value algorithm uses the classic industrial engineering “Defender- Challenger Analysis” (Riggs and West 1986) to structure the comparison of price and all other non-price criteria. This algorithm starts by ranking the proposals from lowest to high- est based on price. Then, it uses an incremental analysis of the percentage increase in price versus the percentage increase in technical score. If the technical incremental increase is greater than the price incremental increase, then the higher priced proposal is preferred. This analysis is continued proposal by proposal until the relative amount by which the score goes up is less than the relative amount by which the price goes up. The best-value proposal is the highest rated proposal with an incre- mental analysis showing that the increase in price is justified by the increase in technical rating. Finally, fixed cost—best proposal is a relatively recent addi- tion to the best-value award discipline. In design-build proj- ects, it is sometimes called “Design-to-Cost.” This method stipulates a fixed or maximum price and uses project scope, qualifications, schedule, and other non-cost factors instead of bid price. This method has the advantage of immediately allowing the owner to determine if the required scope is real- istically achievable within the limits of a tight budget. It also reduces the best-value decision to a fairly straightforward analysis of proposed design alternates and other non-cost fac- tors. Lastly, it truly is responsive to the efficient use of capital by committing virtually all available funding up front and using the quantity and quality of project proposals to deter- mine the most attractive offer. Thus, given the previous discussion, it is now possible to classify each of the existing best-value award algorithms into the proposed seven general categories. It is believed that by doing so, much confusion about the details of the various selection methods can be eliminated. Each algorithm brings strengths and weaknesses to the best- value contract award process. Meets technical criteria–low bid State/Agency Agency Terminology Remarks Best-Value Award Algorithm U.S. Army Corps of Engineers Best Value Uses Weighted Criteria approach to arrive at technical score Qualitative Cost- Technical Tradeoff U.S. Customs Service Best Value Uses Weighted Criteria approach to arrive at technical score. Requires Guaranteed Maximum Price Qualitative Cost- Technical Tradeoff U.S. Department of Energy Best Value Uses Weighted Criteria approach to arrive at technical score Cost-Technical Tradeoff U.S. Forest Service Best Value Uses Adjusted Bid formula to differentiate between bids Quantitative Cost- Technical Tradeoff U.S. Postal Service Best Value Uses Weighted Criteria approach to arrive at technical score Qualitative Cost- Technical Tradeoff Utah Dept. of Natural Resources Value Based Selection Combine technical with cost by weights Weighted Criteria Table 2.16. (Continued) Adjusted Score 14% Qualitative Cost- Technical Tradeoff 26% Meets Technical Criteria – Low Bid 16% Fixed Price - Best Proposal 2% Adjusted Bid 14% Weighted Criteria 24% Quantitative Cost- Technical Tradeoff 4% Figure 2.4. Frequency of use for the award algorithms.

(cost) is by far the simplest and mechanically the closest to the existing design-bid-build/low-bid award process. As such, it is probably the easiest to implement by an agency that has no previous best-value experience. It is also the algorithm that will probably face the least opposition to its use for two rea- sons. First, the concept of short-listing design firms on a qual- ifications basis is well accepted. Therefore, extending that concept to determining a short list of the best qualified con- struction contractors should be fairly direct. Secondly, award- ing to the lowest priced proposal from the list of prequalified firms is not very different from the typical public agency low- bid paradigm, and factoring owner costs into the equation involves a minimal change to that paradigm. This approach would fit into the “competitive sealed bidding” category of the ABA Model Procurement Code even though it allows the owner to consider certain elements in addition to the bid price. It is likely to be acceptable in those states that still require both qualifications-based selection of designers and low-bid award for construction (Wright 1997). The greatest weakness of meets technical criteria—low bid is its focus on price alone, which eliminates one of best-value procurement’s greatest benefits: the ability to compare different construction solu- tions to the same problem. The addition of cost elements helps to solve this problem, allowing a process to be used that is very close to low bid, where the differences between the proposals can be converted into future out-of-pocket expense to the project owner. Adjusted bid and adjusted score, on the other hand, allow competition between varying design alternates, construction management approaches, and contractor qualifications if appropriate for the project. This encourages innovative approaches by industry while preserving the ability to rate the qualifications of the contractors. The major drawback to these methods and other technical score-related algorithms is the reliance on the evaluator’s ability to develop an accu- rate technical score for a proposal. Evaluators often have dif- ficulties translating evaluation criteria into points and end up trying to equate a dollars-per-point for each evaluation decision. Additionally, by mathematically combining the technical and the proposed contract price, there is a poten- tial to create an environment where construction contractors may be tempted to play games with the numbers to increase their adjusted bids or scores. The adjusted bid system appears to be useful on projects where funding is constrained but where some qualitative feature of the project, such as a fast track schedule or external factors such as traffic disruption or innovative environmental protection, is also very impor- tant to the owner. Adjusted bid seems to be most appropri- ate for projects where innovation is encouraged but where a high degree of price competition is desired. Adjusted score is more appropriate where the technical content is more important than the price. Weighted criteria allows significant flexibility to the proj- ect owner in determining the best-value proposal. It preserves the ability to tailor the evaluation plan to the specific needs of each project and rate the qualifications of all bidders. It pro- vides a method for including price as only one of several eval- uation areas and permits the agency to adjust the weights of each rated category as required to meet the needs of the par- ticular project. Its greatest drawback is the complexity of the evaluation planning. To properly implement the weighted criteria algorithm, a great deal of up-front investment in time and human resources must be made during the development of the RFP and its evaluation plan. The cost-technical tradeoff algorithms preserve the owner’s option to award based on a qualitative (possibly, sub- jective) comparison of the value of higher priced proposals or to make the cost-technical tradeoff decision quantitatively as shown in the U.S. Forest Service Highway Case Study. It also furnishes the most robust method with which to make the best-value decision. The use of a cost-technical tradeoff forces the owner to relate the price and the value of the other evaluated factors in a way that highlights the best features of best-value award algorithms. Additionally, the cost-technical tradeoff is mandated for federal projects (FAR 2004). It is probably best used when the owner anticipates a very com- petitive set of proposals submitted by a sophisticated, well-qualified group of competitors. It furnishes an avenue to step back after the evaluation and contemplate the relative desirability of the various combinations of qualifications, design approach, and price. Finally, it should be noted that this discussion assumes the technical evaluation is conducted using a methodology such as the weighted criteria algorithm. The fixed-price best proposal award algorithm is similar to all of the algorithms that assign a technical score to the best- value offer, but the price is fixed for all offerors. This award algorithm should only be considered when the bidding of design alternates is being entertained. This is an excellent sys- tem when owners have a fixed budget and would like to get more construction for their money. However, the engineer’s estimate for the scope of the project must be sound. An esti- mate that is too high may result in offerors adding unneeded scope to win the project. Even worse, an estimate that is too low could result in offerors proposing scopes that do not meet technical standards. The selection of an algorithm requires determining which approach is the most appropriate for the project in question. The goal of best-value contract award is to devise a system that maximizes the probability of selecting a contractor who will successfully complete the project. In many cases, the tried and true low-bid price only method will in many cases be the most appropriate method. Therefore, a careful analysis of the project must be made before deciding on a project award algorithm and its associated criteria. 32

33 The best-value parameters, evaluation criteria, evaluation rating systems, and award algorithms described in this section are a generic synthesis of what the entire design and construc- tion industry defines as best-value procurement. Some of the differences in concepts are due to the agencies that use them and some are due to the nature of the projects themselves. The next sections discuss the results of a survey regarding the use of best-value procurement in the highway construction indus- try and a benchmark comparison of project performance results for best-value contracting with design-bid-build. 2.5 National Transportation Agency Survey Results As outlined in the Research Approach presented in Sec- tion 1.4, the research team developed a survey to obtain information related to the state of practice of best-value pro- curement in the transportation industry. The questionnaire, shown in Appendix C, was designed to identify the current state of practice in the industry and to identify key respon- dents that could provide additional project-related informa- tion for follow-up case studies. It identified transportation agencies that are using or considering the use of a best-value procurement process consistent with the definitions and concepts discussed in the previous section. It also asked respondents to identify any new best-value concepts that may not be reflected in the literature or the team’s database. Lastly, it asked if respondents had specific projects using best-value procurement where the team could obtain case study and performance data for Phase 2 of this study by fol- lowing up with the designated contact person. The survey was e-mailed to AASHTO representatives from each of the 50 state highway agencies and various other affiliated trans- portation organizations. The initial contact list consisted of representatives from the AASHTO Subcommittee on Con- struction and related highway organizations. The survey asked that questions be completed by the personnel respon- sible for procuring and administering the agency’s con- struction program, particularly with regard to alternative contracting methods. The research team received 44 responses, including 41 from transportation agencies. Of the 41 agency representatives responding, 27 respondents answered that the agency had some experience with best-value procurement, two agency representatives responded that the agency had no experience but planned to use best-value procurement in the near future, and 12 respondents indicated that the agency had no experi- ence with best-value procurement. The answers to this ques- tion revealed that among the respondents, the majority (66%) of agencies had experience with some form of best- value procurement. The second question asked respondents to define the par- ticular selection strategy or strategies used among the meth- ods defined in the questionnaire. The following list summarizes and Figure 2.5 depicts the variety of selection strategies used and the frequency of their use: Figure 2.5. Selection strategy used in best-value procurement.

• 10 of 27 used Meets Technical Criteria—Low Bid (37%) • 7 of 27 used A+B (26%) • 6 of 27 used Adjusted Bid (22%) • 6 of 27 used Weighted Criteria (22%) • 3 of 27 used Multi-parameter (11%) • 2 of 27 used Cost-Technical Tradeoff (7%) • 1 of 27 used Adjusted Score (4%) The responses indicated that the best-value selection strat- egy used most often (37%) was meets technical criteria—low bid. Several respondents included A+B bidding and multi- parameter bidding as selection strategies in the “other” cate- gory. If these strategies are assumed to be equivalent as noted in the definition, the multi-parameter strategy was the next most frequently used strategy (31%). This distribution indi- cates that the best-value selection strategies adopted by transportation sector agencies are more closely aligned with the low-bid system compared with the frequency distribu- tion of the award methods of a larger sample of projects, including vertical projects and projects outside of the trans- portation sector, presented in this chapter. The larger sample population presented in Figure 2.4 indicated that the weighted criteria and cost-technical tradeoff strategies were the most frequently used, constituting one-half of the sample population. The third question asked respondents to identify what key criteria were used by the agency in the qualification or selec- tion process. The following list summarizes and Figure 2.6 depicts the key criteria and frequency of their use: • 16 of 25 used Past Performance (64%) • 15 of 25 used Projected Time (60%) • 13 of 25 used Personnel Qualifications (52%) • 11 of 25 used Management Capabilities (44%) • 6 of 25 used Public Interface Plan (24%) • 6 of 25 used Technical Capability/Solutions (24%) • 9 of 25 used other categories (36%) The survey results for the transportation agencies indicate that past performance and projected time are the most fre- quently used criteria followed by qualifications of personnel. In comparison, the larger sample population cited past per- formance and qualifications of key personnel as the most fre- quently used criteria. In the case of transportation agencies, it appears that projected time performance is a more impor- tant criteria, and they have more experience with time as a bid parameter than other commonly used criteria. The fourth question asked respondents to identify a for- mula or algorithm (if applicable) used to combine price and technical criteria. Eleven of 27 (41%) respondents provided a formula or algorithm to combine price and technical criteria. The most frequent algorithm (cited by 4 of 11 respondents) was a multi-parameter formula (A+B) using time as the addi- tional parameter. This result is consistent with the responses to the third question. Other formulas cited were adjusted bid, adjusted score, a prequalification rating formula, and weighted criteria combined with life-cycle cost. The fifth question asked respondents to identify what rela- tive weightings of price and technical factors were used, where 34 Figure 2.6. Key criteria used in the qualification or selection process.

35 applicable. Fifteen of 27 respondents gave relative weightings of price and technical factors. The following list summarizes and Figure 2.7 depicts the distribution: • 1 of 15 used 1/100 and 10/90 (7%) • 1 of 15 used 11/89 and 20/80 (7%) • 2 of 15 used 21/79 and 30/70 (13%) • 8 of 15 used No Relative Weightings of Price vs. Technical Used (53%) • 9 of 15 listed other combinations (60%) The majority of respondents chose the “Other Combina- tions” category. Under this category, the responses ranged from variable (project-specific weightings) to 25/75 price and technical to not applicable (for prequalification). Finally, 16 of 27 (59%) respondents supplied projects using best-value procurement that the research team could follow up with a case study. Twenty-five projects were identified as candidates for further study. Based on these responses, a second questionnaire, also included in Appendix C, was developed and sent out in Phase 2 to obtain more detailed information and performance results for highway projects using best-value procurement. At the time of publication of this report, the additional data gathered was minimal and inconclusive in terms of perform- ance results for traditional design-bid-build projects. This confirmed that highway agency experience with best-value procurement was limited, and that it was primarily used in conjunction with design-build projects. 2.6 Baseline Project Performance Results As part of the investigation into the state of practice of best- value procurement, the research team identified factors that may be included in a best-value procurement that appear to have the greatest measurable impact on actual project performance. This effort started by first adding an additional 500+ projects to the research team’s original 600+ project database to craft a study database of more than 1,100 projects with an aggregate contract value of more than $5 billion. The next step involved separating those projects in the study population into two major groups: those delivered by traditional design-bid-build, low bid, and those delivered using a best-value award method.Next,each major group was divided by type into horizontal projects (high- ways, bridges, runways, etc.) and vertical projects (buildings, water treatment plants, transit stations, etc.) to give the researchers a basis to compare best-value procurement and design-bid-build within the two major types of projects. This was also done to develop a foundation on which to gauge the performance of those vertical projects that were used for the case studies that are a part of the research. In addition to vertical and horizontal design-bid-build projects, the sample contained three types of projects awarded using best-value methods: • A+B bidding • Design-Bid-Build/RFP with award based on bid price and at least one other parameter • Design-Build Figure 2.7. Weighting of price and technical factors.

Table 2.17 shows the breakdown of the types and numbers of projects in each category. It should be noted that a sizable sample of vertical design-build projects was also available. However, they were not included in the analysis because of this project’s emphasis on best-value delivery of design-bid- build projects. Therefore, only the vertical design-bid-build RFP projects were included in the sample population. As show in Figure 2.8, the projects came from 20 different agencies made up of 16 state DOTs, a state turnpike authority, a state port authority, a state transit authority, and a federal military department. The projects were located in 19 different states across the country. The A+B projects were primarily highway construction or rehabilitation jobs. The design-bid- build RFP projects were airfield and marine upgrade and expansion projects. The design-bid-build projects were mainly resurfacing, upgrade, and bridge projects. The geo- graphic dispersion of sample projects is from coast to coast and border to border. Additionally, the preponderance of projects came from state agencies, which helps make the study results more specifically aligned with the highway construc- tion focus of this study. A short explanation of the various project delivery methods follows. Best-Value Contracting The Utah Technology Transfer Center published Best Prac- tices Guide for Innovative Contracting Procedures (UTTC 2001). This guide established an elegant definition for alter- native project delivery methods (i.e., innovative contracting). The following statement is from the guide: Traditional contracting requires that the selection of a con- tractor be based solely on the low bid of a responsive bidder. The equation below identifies the factors that go into a bid for 36 Category Horizontal Vertical Delivery Method DBB Projects Best-Value Projects* A+B Projects DBB/RFP Projects DB Projects DBB/RFP Projects DBB Projects Projects in Database 708 119 77 10 32 20 394 Aggregate Value $3.4 billion $1.1 billion $824 million $140 million $166 million $131 million $273 million * Includes all non-low-bid projects Table 2.17. Sample populations. TxDOT WSDOT UDOT IDOT NDDOT MnDOT CALTRANS IDOT MiDOT IDOT FDOT NYDOT NCDOT VDOT MDOT MassDOT Mass Turnpike MPA MTA NAVFAC NAVFAC NAVFAC NAVFAC Figure 2.8. Locations of public agencies with projects in the database population.

37 a construction project. In traditional contracts, motivations to satisfy the social costs are only met to provide a responsive bid for a project. Innovative contracting procedures, however, place the emphasis on meeting performance criteria for one or more of the social cost variables: Contractor’s Bid Price = CS + CM + CQ+ CT + CO Eq. 1 Contract Costs: CS = Cost or profit for providing the service CM = Cost of providing materials and equipment Social Costs: CQ = Cost of providing a quality service or product CT =Cost of finishing a project on time CO =Cost associated with the risk of other social cost considerations such as legal/administrative, complexity of design, environmental, and safety A contractor approaches their function in the traditional bid- ding process by determining the cost to meet the owner’s respon- sive parameters for CM, CQ, CT, and CO. They also try to minimize the “Contractor’s Bid Price” and maximize profit (CS). The levels the owner sets for these parameters could be met or exceeded by a responsive bidder (UTTC 2001). This approach directly applies to the problem of awarding highway construction contracts on a best-value basis.Using this terminology, the research seeks ways to quantify the “social costs” and combine them with the “contract costs” to arrive at an objective calculation of best value. Each of the following project delivery methods takes on one or more of the social costs and forms a best-value decision-making algorithm to arrive at an objective determination for a construction contract award. Design-Bid-Build Design-bid-build is the traditional method of delivering highway construction projects. Its universal acceptance in public infrastructure project delivery springs from the con- cern that a construction contractor will not adequately safe- guard public health and safety and, therefore, needs the close supervision of a design professional. Thus, the owner retains an engineer on a separate contract to complete the design of the public facility. Once the design is finished, a set of plans, specifications, and contract boilerplate is advertised for bid by the construction industry. Construction contractors submit a price, and the project is awarded to the lowest responsive and responsible bidder. In design-bid-build, responsive means that the bidder has properly completed the required bid forms and posted the requisite bid security (Ellicott 1994). Responsible normally means that the low bidder can post the required performance bond within the established award timeframe (Konchar and Sanvido 1998). By requiring bonds in this method, the owner is in effect relying on the surety industry to filter out unqualified contractors. Many states have laws requiring the registration or prequalification of bidders. While this takes the qualification of contractors one step farther, most requirements merely consist of the submission of a form list- ing the contractor’s business information and are treated as another responsiveness check rather than a critical look at contractor qualifications. The State of Oklahoma recently passed a law authorizing alternative project delivery methods for public buildings (Stamper 2001). This law requires con- tractors to attain individual national certification to qualify for construction management or design-build contracts. The law was designed to create a professional requirement for constructors that mirrors the qualifications-based selection for registered professional engineers and architects. To attain certification, the constructor must employ individuals whose combination of professional education, experience, and a national examination qualify them to perform the duties of a construction professional on a construction management or design-build project. Under the design-bid-build approach, the owner has sepa- rate contracts with the designer and the builder, and therefore assumes constructability risk vis-à-vis the builder. Thus, if a design error is found and must be corrected, the owner must first pay the contractor for the change and then attempt to collect the added cost from the designer. While in theory this should be possible, in practice it is very difficult, because the owner must prove that the designer has liability based on neg- ligence or another legal theory. Cost-Plus-Time Bidding The FHWA recognized cost-plus-time bidding (referred to herein by its more commonly used name, A+B bidding) in its SEP-14 as one desirable means to break from traditional design-bid-build award of highway projects (FHWA 1998). These contracts often include an incentive clause that rewards the contractor for completing the project ahead of schedule and exacts a disincentive in addition to the requirement to pay a liquidated amount for the owner’s administrative costs for completing the project late. The incentive/disincentive clause enforces the spirit of the A+B method by discouraging bidders from deliberately underbidding the time component and by encouraging the selected contractor to finish earlier than the proposed contract time. It rewards the contractor that can most efficiently manage a project by allowing it to win the contract with a bid that is higher but accurately reflects the cost of faster completion. In the UTTC equation, A+B brings the social cost of finishing the project on time (CT) out of the contractor’s bid price and lays it on the table for all to see. A+B contracts are awarded based on a combination of the price for the contract items (A) and the associated cost of time (B) needed to complete the work according to a formula that

calculates an economic cost (the cost to the driving public) per day of work. The price portion is not the only considera- tion in the award. The project is awarded to the contractor with the lowest sum of A+B. The A+B bidding technique is designed to shorten the total contract time by allowing each contractor to “bid” the number of days in which the work can be accomplished. This method of bidding allows the contrac- tor with the best combination of price and estimated time cost per day (time) to attain the bid. This cost-plus-time method of bidding enables the contractor to determine a rea- sonable contract duration required for project completion. Awarding agencies believe that the contractor is often best qualified to determine the length of time necessary to com- plete a project (Bordelon 1998). Various public agencies have used A+B along with financial incentives. Different agencies use different names and different methods to do this. Florida uses the same daily dollar amount of the B portion of the A+B bid as the incentive/disincentive. If the general contractor completes the job early, the contractor earns the daily B por- tion for every day that it beats the target. If the contractor exceeds the allotted number of days, the general contractor is contractually obligated to pay the excess B portion of the work as a disincentive (WSDOT 1997). Design-Bid-Build Request for Proposals Design-bid-build RFP delivers a project by advertising a completed design and asking for proposals on other parame- ters as well as a bid price. The award is usually made on a basis of some formula where price is given a certain percentage weight and the rest of the parameters make the remaining portion. In NCHRP Report 451, a best-value case study, Inter- state 5 Columbia River Bridge in Oregon, was presented as an example of best-value procurement in the highway sector. The best-value non-price parameters included specialized construction experience, qualifications, and project staffing. The award was based on a 50/50 split of technical and price using a cost-technical tradeoff evaluation (Anderson and Russell 2001). The South Carolina DOT uses a formula where price counts for 60% and the remaining parameters make up the remaining 40%. The Naval Facilities Engineering Com- mand’s policy is that price will be roughly equal to all other factors combined (NAVFAC 1996). Because it retains the sep- aration of designer and builder, the design-bid-build RFP cat- egory probably has good potential for immediate acceptance by public owners, consulting engineers, and the highway con- struction industry. The idea of creating project-specific con- structor qualifications rather than general financial qualifications is quite intuitive. This best-value method also allows great flexibility in the inclusion of other parameters such as extended warranties, design alternatives, traffic con- trol planning and public outreach programs as means to add value to a given proposal and justify not awarding to the low bid. In the UTTC equation, CQ (the cost of quality), CO (other social costs), or both may be parameters used to identify best value. Design-Build Design-build RFP development is driven by specific proj- ect requirements, and award procedures are constrained by both legal and policy restrictions (FHWA 1996). Thus, the most important piece of the design-build contract is the eval- uation process. The definition of success is the creation of a fair, consistent evaluation system that has a bias to select the design-builder with the highest probability of successfully completing the project at a higher level of quality than is required by the RFP. By giving design responsibility to the contractor, design-build allows the owner to evaluate the effectiveness of each proposal and is the only method that combines all of the parameters in the UTTC innovative con- tracting equation. The evaluation process for a best-value design-build pro- curement typically has three parts (Molenaar et al. 1999). First, the qualifications of the design-build contractor team must be checked to ensure that the proposed designer-of- record possesses both the requisite registrations and the nec- essary past experience to develop a design that will meet the project’s technical requirements. The design-build process permits something that is not as common in the construction industry: a qualifications check on the construction contrac- tor. The second part of the evaluation is a technical review of the design-build contractor’s proposed design solution. This mainly consists of ensuring that the design is fully responsive to the requirements outlined in the RFP and satisfies the proj- ect’s functional requirements. This portion of the evaluation permits competing technical solutions, such as concrete ver- sus asphalt pavement, to be compared. In addition, the design-builder is allowed to propose a technical solution that it, as an organization, is particularly well qualified to imple- ment and for which it has excellent past history to aid in the accurate estimation of project price. Evaluating the proposed project price for realism and reasonableness is the final step in the process. Project Performance Metrics A series of project performance metrics were created to measure each dataset and allow comparison. As some of the projects in the database did not have both cost and time infor- mation, a decision was made to calculate each metric sepa- rately for those projects in the database that contained the relevant input data. Thus, for each metric the actual number of projects that were used in its calculation will be shown to 38

39 permit the reader to gauge the depth and significance of the output. This technique allows the research team to maximize the information gleaned from the available data. The follow- ing project performance metrics were calculated: • Award growth • Cost growth • Time growth • Construction placement • Average contract value Award growth (AG) is an indicator of the feasibility of awarding a construction project. It is defined by the difference between original contract cost and engineer’s estimate, divided by the engineer’s estimate as shown in Equation 2. Before the advertisement, the engineer’s estimate needs to be calculated. This estimate is done for the owner and indicates how much money the project will require. The owner then obtains financing in this amount, trusting that the project’s actual bid price will be less than the engineer’s estimate.A pos- itive award growth indicates that the owner’s financing is insufficient and obtaining additional funds may cause a delay. This is especially true for public projects where agencies typi- cally have to return to legislative bodies for increases in proj- ect authorization amounts (Gransberg 1999). Negative award growth indicates that the owner may have budgeted money against project requirements that were not realized. This often reduces the annual size of an agency’s annual construction program by obligating available funding that is not used (Gransberg 1999). Award growth is an excellent measure of how well an owner understands the market in which the facil- ities are to be constructed. This metric furnishes a view of the government’s ability to forecast the cost of capital improve- ments. As a project proceeds from concept to completion, the owner’s commitment to actual delivery gets greater and greater. If the owner underestimates the project’s cost in early stages, that owner is liable to be more willing to pay an inflated price for the project as it draws closer to fruition. It is very important that the owner be able to develop a good cost fore- cast immediately after design is complete so that a project that is marginally feasible is not awarded for construction. A high award growth indicates the potential that a public agency will build projects that are economically unjustified merely because a public commitment to project delivery has been made. This metric also measures the efficient use of available funding. If the award growth is negative, then it means that the public agency has needlessly tied up available funding that might have been used on other projects. Eq. 2AwardÄ OriginalÄ ContractÄ Growth (AG) = Amount ($) Engineer’s Estimate ($) Engineer − ’s Estimate ($) Cost growth is the percentage change in cost between the final contract cost and the original contract cost, expressed as a percentage and shown in Equation 3. Cost growth can be positive or negative. When cost growth is positive, there were change orders or claims increasing the cost of the project dur- ing its performance. If cost growth is negative, the original contract cost was possibly overestimated or the actual scope of work was reduced. Eq. 3 Time growth is the percentage change in time between the final contract time and the original contract time, expressed as a percentage. Time growth can also be positive or negative depending on the outcome of the project. In fact, time growth changes as the scope of the project changes. When time growth is positive, it means that the project was performed using more time than specified in the original contract, and therefore, the project finished late. When time growth is neg- ative the project’s time growth was overestimated, that is, the project was completed ahead of schedule. TG is calculated as shown in Equation 4. Eq. 4 Construction placement (CP) is the measure obtained by dividing the final construction cost by the final construction time as shown in Equation 5. Therefore, construction place- ment measures the average rate at which the contractor earns the contract value across the period of a construction con- tract. A high rate of construction placement indicates an effi- cient and effective construction management system. If two contractors performed identical lump sum projects in iden- tical environments, the one that finished first would have incurred the least cost, and this would be indicated by a higher rate of construction placement. The U.S. Army Corps of Engineers uses construction placement as one of its fun- damental project performance parameters and has more than 30 years of experience with its use (USACE 1994). Eq. 5 Next, the non-traditional projects were separated and com- pared by procurement method type using the same set of met- rics. This allows the research team to quantitatively rank the impact of different best-value elements. For instance, com- paring the performance of A+B bidding projects with the per- formance of low-bid projects will allow the research team to measure the impact of permitting the construction contractor rather than the owner to establish the project schedule. The Construction Placement  (CP) FinalÄ Constructi = onÄ Contract Cost ($) FinalÄ Construction Contract Time (days) Time Growth (TG) FinalÄ ContractÄ Original Co = − ntract Time (days) Time (days) Original ContractÄ Time (days) Cost  GrowthÄ (CG) Final Contract OriginalÄ C = − ontract AmountÄ ($)Ä AmountÄ ($) OriginalÄ Contract AmountÄ ($)

performance of design-bid-build RFP projects will give an indication of the impact of including contractors’ qualifica- tions. Finally, the performance of design-build projects will quantify the impact of allowing the contractor to set the level of quality through the details of the design. The results of this analysis are shown in Figures 2.9 through 2.12. In Figure 2.9, one can see that award growth is about the same for horizontal best-value and horizontal design-bid- build projects. This shows that an across the board move to implement best-value contracting for highway projects will probably not adversely affect the efficient use of capital. This observation does not consider the possible positive effect of incorporating life-cycle costs in the evaluation plan. Looking at the three best-value types in the best-value pop- ulation as shown in Figure 2.9, one sees that A+B projects have a slight increase in cost from the engineer’s estimate. This increase is due to the fact that these projects are not gen- erally awarded to the low bidder, and the engineer’s estimates are probably formed using traditional design-bid-build bid tabulations. One would therefore expect to see award growth 40 0.69% 19. 84% -24.83%-2.40% -1.82% -2.19% -17.62% -25.00% -20.00% -15.00% -10.00% -5.00% 0.00% 5.00% 10. 00% 15. 00% 20. 00% Horizonta l Be st Va lue Projects (77) Horizont a l DBB P rojects (339) Horizonta l A+ B Projects (52) Horizonta l DBB /RFP Pr ojects (7) Horizonta l DB P rojects (18) Vertical DBB/RFP Projects (20) Vertical DBB Projects (392) (#) = number of projects in individual sample Figure 2.9. Results of award growth analysis.

41 in A+B projects. The horizontal design-bid-build RFP proj- ects have a large negative award growth. However, the sample is small and this probably represents a statistical anomaly rather than a trend. Therefore, it is discounted. Horizontal design-build projects’ award growth is in line with the total population and the traditional projects. Comparing the hor- izontal award growth numbers to the vertical ones is also quite interesting. The vertical best-value projects had a large negative award growth while the vertical design-bid-build projects had a commensurately large positive award growth. Awarding vertical projects using best-value procurement is a relatively new development (Allen et al. 2002). Therefore, it appears that the owners of public vertical projects have not yet “calibrated” their estimating system to account for this delivery method, hence the large negative award growth. As for the traditional vertical projects, one must remember that architectural and engineered process plant projects are typi- cally more complex in terms of design detail. Therefore, it is reasonable that the owners and their designers would have less accurate pre-award estimates than horizontal owners. 3.70% 5.67% 22% 0.95% 12.63% 6.44% -5.92% -10.00% -5.00% 0.00% 5.00% 10.00% 15.00% 20.00% 25.00% Horizontal Best Value Projects (47) Horizontal DBB Projects (1000) Horizontal A+B Projects (15) Horizontal DBB/RFP Projects (10) Horizontal DB Projects (22) Vertical DBB/RFP Projects (20) Vertical DBB Projects (392) (#) = number of projects in individual sample Figure 2.10. Results of cost growth analysis.

Figure 2.10 shows that horizontal best-value projects have less average cost growth than similar projects delivered by design-bid-build. The best performing projects were A+B projects that actually had a negative actual cost growth. This is an interesting phenomenon. It can be argued that A+B projects are by nature schedule driven. Therefore, it is in the contractor’s best interest to finish the project on time or, if there is an early completion bonus, ahead of schedule. As a result, the incentive to generate change orders may be reduced. Again, no conclusion can be made with regard to the performance of horizontal design-bid-build RFP projects. However, it is interesting to note that while they were awarded at about 25% less than the engineer’s estimate, they were completed at 22% over the original contract price, basically breaking even with the original pre-award estimate. Hori- zontal design-build projects had less than 1% cost growth, and this result tracks with similar results found in the litera- ture (Ellis et al. 1991, Bordelon 1998). One can see from Figure 2.11 that the principal benefit accrued from implementing best-value contracting is 42 14.76% 27% 5.49% 16.51% 52. 3% -3.70% -9.23% -20.00% -10.00% 0.00% 10.00% 20.00% 30.00% 40.00% 50.00% 60.00% Horizontal Best Value Projects (103) Horizontal DBB Projects (507) Horizontal A+B Projects (77) Horizontal DBB/RFP Projects (10) Horizontal DB Projects (16) Vertical DBB/RFP Projects (20) Vertical DBB Projects (392) (#) = number of projects in individual sample Figure 2.11. Results of time growth analysis.

43 a substantial reduction in time growth. This appears to be true for both horizontal and vertical projects. The A+B projects are probably the best example of schedule-driven project delivery and show an average time growth of -9.23%. This validates the previous assertion that creating an incentive to finish early drives the contractor to finish early. Horizontal design-build projects have more than 10% less time growth than traditional projects. This is due to the flexibility and greater control allowed the contractor and to the fact that the owner is no longer liable for delays caused by design errors and omissions when the design responsibility is shifted to the design-build contractor. Thus, any time growth that occurs in these projects is most likely a result of either unforeseen conditions (which neither party can control) or owner-caused increases in proj- ect scope after award. It can be seen in the vertical projects that a substantial reduction in time growth is realized by using best-value award procedures instead of low-bid award. Once again, the design-bid-build RFP horizontal projects buck the trend and, as before, the small sample size makes it impos- sible to infer any trend with regard to these types of projects. $39,635 $19,339 $51,810 $22,174 $8,309 $15,674 $8,331 $0 $10,000 $20,000 $30,000 $40,000 $50,000 $60,000 Horizontal Best Value Projects (78) Horizontal DBB Projects (480) Horizontal A+B Projects (52) Horizontal DBB/RFP Projects (10) Horizontal DB Projects (16) Vertical DBB/RFP Projects (20) Vertical DBB Projects (392) (#) = number of projects in individual sample Figure 2.12. Results of construction placement analysis.

The final performance metric that was calculated is con- struction placement, and the results are shown in Figure 2.12. Based on Equation 5, a larger number is more desirable than a smaller number. This metric measures the efficiency with which the project delivery method is implemented by creating a measure of financial velocity ($/day) at which the contractor earns the full value of the contract amount. It can be seen that implementing best-value procurement effectively doubled the average construction placement for horizontal projects. The A+B projects had 25% more construction placement than the population as a whole. This would be expected because the A+B projects are by definition schedule driven. Only the hor- izontal design-build projects failed to outperform the hori- zontal design-bid-build. However, this outcome is misleading because the design-build project contract period includes the design phase. Therefore, one should expect that overall CP would be lower than those projects that completed construc- tion only. The doubling of CP over traditional low-bid projects also held true for vertical best-value projects. Average contract value is not a performance metric but it must be calculated to allow the research team to put the previ- ous discussion in perspective.Table 2.18 shows that to date pub- lic owners seem to have reserved best-value contracting for their larger projects. In both the horizontal and vertical cases, the average contract amount of the traditional low-bid projects is about an order of magnitude less than the best-value projects. Conclusions from the Project Performance Metrics Analysis A number of conclusions can be drawn from this analysis. It appears that the implementation of best-value contracting on horizontal projects has the potential to accrue both cost and time benefits to the public owner. The experience por- trayed in the database shows that the use of best-value pro- curement reduced both time and cost growth and increased the financial efficiency of the projects. While this is significant in itself, it is even more convincing when one takes into account the results of Table 2.18 that show that these savings were accrued on projects that were on average 10 times as large as the traditional projects. The analysis also shows the A+B projects have the best per- formance as measured by these metrics. This result clearly demonstrates that letting the construction contractor estab- lish the project schedule and implementing an incentive for early completion accrues a direct benefit to the owner. In high- way construction, the user costs of congestion, delay, and acci- dents can reach as high as $250,000 per day on an urban freeway (Walls and Smith 1998). Thus, the use of a project delivery method that creates a bias toward timely completion (and possibly a bonus for early completion) can quickly amor- tize the incrementally higher cost for accelerated completion in a matter of days or weeks when the cost to the traveling pub- lic is factored into the project life-cycle cost equation. Finally, the implementation of best-value contracting does not appear to have a significant impact on bid prices as meas- ured by the change in award growth between best value and design-bid-build. The results show that best-value projects can be awarded in a variety of forms with no apparent nega- tive impact to the public owner’s project delivery process. 2.7 Expert Interviews To further validate the results of these findings, the research team surveyed the 14 members of the industry advisory board to ascertain their opinions of the best-value system. While the sample size is small, the board represents a panel of experts, all of whom have personal experience with imple- menting best-value contracts in highway construction. Thus, the results of this survey act as a “reality check” for the results of the national survey of state highway agencies and federal construction agencies. The board also contained members from the construction contractor community and therefore furnishes a counterpoint to the opinions expressed by the community of owners surveyed in the first group. The survey results are summarized as follows. The responses indicated that the panel had experience with all of the best-value parameters except warranty credits and the two measured quality parameters. They rated cost, schedule, and past performance the most likely to be successful and cost and incentive/disincentive schemes as the easiest to implement. Warranties were rated as least likely to be successful and design 44 * Includes all non-low-bid projects Category Horizontal Vertical Delivery Method Best-Value Projects* DBB Projects A+B Projects DBB/RFP Projects DB Projects DBB/RFP Projects DBB Projects Projects in Database 119 708 77 10 32 20 394 Average Contract Value $13.0 million $2.0 million $15.9 million $17.9 million $6.0 million $6.5 million $1.0 million Table 2.18. Average contract value for the sample population.

45 alternates and traffic control alternates as the most difficult to implement. Table 2.19 contains a summary of the responses. With regard to the best-value evaluation criteria summa- rized in Table 2.20, the advisory panel rated price and sched- ule as most important and having the highest probability of success. Warranties were rated both least important to proj- ect success and least likely to be successfully implemented. The panel rated bid price as easiest to implement and design alternates as the most difficult. Most of the respondents had experience with direct point scoring systems. In Table 2.21, one can see that no trend exists for this component of the best-value contracting system. Advisory board responses for award algorithms appear in Table 2.22. Adjusted bid was the most frequently used best- value award algorithm. Interestingly, adjusted score was ranked higher than adjusted bid with regard to its probability of success. Finally, as would be expected, meets technical criteria—low bid was rated as the easiest to implement and weighted criteria was rated as the most difficult. A written comment came from one construction contrac- tor representative that responded to the survey. It is a good summary for this section: “My overriding comment is that the various criteria are not better or worse but should be most applicable to meet the owner’s need. For example, an owner with limited funds (e.g., issuing bonds based on future toll revenues) may place a higher emphasis on price and strong contract terms while other own- ers may be more concerned with the quality of the finished product and be willing to pay a premium to get that quality (e.g., higher emphasis on subjective quality criteria or on long term warranty or life-cycle pricing).” Greg Henk, Flatiron Structures This statement confirms the findings that the best-value selection and award system will probably be most successful if maximum flexibility is preserved and state highway agen- cies are allowed to customize the selection process to meet the specific needs of each project. Best-Value Parameter Number That Had Used It Average Success Rating (1=none; 5=absolute) Average Ease of Implementation Rating (1=effortless; 5=difficult) Cost = A.0 5.0 4.2 Schedule = B.0 5.0 4.0 2.8 1.2 Lane Rental = B.1 3.0 2.7 3.0 Traffic Control = B.2 1.0 3.0 4.0 2.7Prequalification = P.0 3.0 3.3 Past Project Performance = P.1 1.0 4.0 2.0 Personnel Experience = P.2 3.0 3.0 2.7 3.5 Warranty = Q.0 2.0 2.5 Design with Bid Alternate = D.1 3.0 2.7 4.0 1.5 Incentive/Disincentive clauses 4.0 2.0 Table 2.19. Summary of advisory board responses regarding best-value parameters. Best-Value Evaluation Criteria Average Importance Rating (1=no importance; 5=imperative) Average Success Rating (1=none; 5=absolute) Average Ease of Implementation Rating (1=effortless; 5=difficult) Bid Price 4.8 4.6 1.6 Past Performance 4.0 2.8 3.5 Qualifications of Project Personnel 3.5 2.8 3.4 Management Plan 3.3 3.3 3.0 Life-Cycle Cost 3. 4. 2. 0 2.0 3.0 Schedule 3 4.3 2.3 Warranties 0 2.0 3.0 Technical Design 4.0 3.5 3.5 Design Alternatives 3.0 3.0 4.5 Table 2.20. Summary of advisory board responses regarding best-value evaluation criteria.

2.8 Summary of Findings This chapter has defined the state of the industry for best- value procurement methods. Current trends in practices and legislation are paving the way for widespread use of best- value procurement for highway construction projects. The four key best-value concepts—parameters, evaluation crite- ria, evaluation rating systems, and award algorithms—have been defined in this research and presented in this chapter. The application of these concepts was validated through 50 summary level and 14 detailed best-value case studies from all sectors of public construction both nationally and interna- tionally. The universe of evaluation criteria, rating systems, and award algorithms were defined, categorized, and analyzed in terms of their relative advantages and disadvantages. A highway industry survey was conducted to introduce best- value concepts, gauge the level of experience of highway users, and obtain additional case study and performance data. Lastly, best-value procurement use in the highway industry was benchmarked though a nationwide survey of state trans- portation agencies. Chapter 3 addresses the development of a recommended best-value system, criteria for screening projects, and strate- gies for implementation. Ultimately, the best-value pro- curement system must include appropriate criteria, rating systems, and algorithms tailored to the project to ensure that the best-value system truly adds value to the products of construction. 46 Best-Value Rating Systems Number That Had Used It Average Success Rating (1=none; 5=absolute) Average ease of Implementation Rating (1=effortless; 5=difficult) Satisficing 1 3 3 3 2 Modified Satisficing 2 3.5 3 Adjectival Rating 1 Direct Point Scoring 4 3.5 3.5 TABLE 2.21. Summary of advisory board responses regarding best-value rating systems. Best-Value Award Algorithm Number That Had Used It Average Success Rating (1=none; 5=absolute) Average Ease of Implementation Rating (1=effortless; 5=difficult) Meets Technical Criteria—Low Bid 1.0 2.0 2.0 Adjusted Bid 3.0 3.7 3.3 Adjusted Score 2.0 4.0 3.5 Weighted Criteria 1.0 3.0 4.0 Cost-Technical Tradeoff 0.0 n/a n/a Fixed Cost-Best Proposal 1.0 3.0 3.0 TABLE 2.22. Summary of advisory board responses regarding best-value award algorithms.

Next: Chapter 3 - Interpretation, Applications, and Recommendations for Implementation »
Best-Value Procurement Methods for Highway Construction Projects Get This Book
×
MyNAP members save 10% online.
Login or Register to save!
Download Free PDF

TRB’s National Cooperative Highway Research Program (NCHRP) Report 561: Best-Value Procurement Methods for Highway Construction Projects examines procurement methods, award algorithms, and rating systems for use in awarding best-value highway construction contracts. The report also explores screening criteria for selecting projects for application of best-value procurement, implementation strategies, and a model best-value specification.

  1. ×

    Welcome to OpenBook!

    You're looking at OpenBook, NAP.edu's online reading room since 1999. Based on feedback from you, our users, we've made some improvements that make it easier than ever to read thousands of publications on our website.

    Do you want to take a quick tour of the OpenBook's features?

    No Thanks Take a Tour »
  2. ×

    Show this book's table of contents, where you can jump to any chapter by name.

    « Back Next »
  3. ×

    ...or use these buttons to go back to the previous chapter or skip to the next one.

    « Back Next »
  4. ×

    Jump up to the previous page or down to the next one. Also, you can type in a page number and press Enter to go directly to that page in the book.

    « Back Next »
  5. ×

    To search the entire text of this book, type in your search term here and press Enter.

    « Back Next »
  6. ×

    Share a link to this book page on your preferred social network or via email.

    « Back Next »
  7. ×

    View our suggested citation for this chapter.

    « Back Next »
  8. ×

    Ready to take your reading offline? Click here to buy this book in print or download it as a free PDF, if available.

    « Back Next »
Stay Connected!