Characteristics of Effective External Independent Reviews
The foremost purpose of any peer review is to increase the probability of project success. Historically, well-planned and well-conducted peer reviews have added value to the project and the sponsoring organization. Not submitting projects to an independent peer review, or conducting poorly planned and executed peer reviews, can lead to projects that cost too much, take too long to complete, and do not adequately support the owner’s missions (NRC, 1998).
Chapter 9 of DOE Manual 413.3-1, Project Management for the Acquisition of Capital Assets (DOE, 2003a, p. 9-1), lists the following as key objectives of project reviews:
Ensure readiness to proceed to a subsequent project phase.
Ensure orderly and mutually supportive progress of various project efforts.
Confirm functional integration of project products and efforts of organizational components.
Enable identification and resolution of issues at the earliest time, lowest work level, and lowest cost.
Support event-based decisions.
The manual and Order 413.3 describe various types of reviews that can be used to accomplish these objectives. They include periodic reviews, such as independent project reviews (IPRs) and external independent reviews (EIRs), to assess the status of a project and reviews that address a particular problem or technical issue, such as independent cost or design reviews. Peer review can provide a wealth of information about a project’s progress and performance and can communicate that information to a range of stakeholders.
The purpose of DOE peer reviews is determined by the various offices that initiate and conduct them. For example, the integrated project team (IPT) or project management support office (PMSO) in each of three program secretarial offices (PSOs)—the National Nuclear Security Administration (NNSA), the Office of Environmental Management (EM), and the Office of Science (SC)—may initiate an assessment of a particular technology and use its own staff to analyze the data. IPRs are used to identify and look into critical issues before they become serious problems. In this manner a review serves as a tool for risk management and mitigation.
Alternatively, DOE’s Office of Engineering and Construction Management (OECM) might oversee an EIR conducted by contractors to assess cost or the estimated schedule to support validation of a project’s performance baseline (CD-2). The OECM also oversees EIRs conducted prior to CD-3 for major systems. The main objective of an EIR is to ensure the completeness of front-end project planning and that a project can be managed in accordance with performance baselines. EIRs provide the OECM and senior management with substantive, independent, unbiased assessments of project assumptions and alternatives.
Many individuals and project teams naturally resist reviews of any kind. To overcome this resistance, an organization’s culture needs to be molded to look at a review as a means of adding value, not just as an audit or oversight function. Reviews can do many things, not just corroborate baseline costs and schedule estimates. They can improve overall management processes as well as a specific project’s performance. In addition to ensuring compliance with policies and procedures, they can assure quality.
Maximizing the value of a review requires all the parties involved to collaborate in tailoring the scope of the review to suit the risks and characteristics of a specific project and to satisfy multiple objectives.
DIVERSE STAKEHOLDERS AND BENEFITS
Currently, EIRs are primarily used to validate a project’s performance baseline and to estimate prospects of future performance for various stakeholders, including Congress and the public, DOE’s senior managers, the OECM, the PSOs, the federal project director, and the IPT.
Having projects reviewed by outside, independent experts allows DOE managers to present credible evidence to Congress that project funding requests have been carefully analyzed and are well founded. Because peer reviews help to avoid cost overruns and other problems that reflect poorly on government capabilities, they help to assure the public that sound practices are being followed in carrying out agency missions.
DOE senior managers (deputy secretary, under secretaries, program secretarial officers, and deputy administrators) are collectively responsible for all project acquisitions, program and capital asset execution, and implementation of policy. They make decisions, including critical decisions, and conduct quarterly project performance reviews. They use EIRs to help decide if a project can be included in the DOE’s annual budget submission to Congress or if construction can be started for major systems projects.
The federal project director is accountable to the acquisition executive or PSO for executing a project and assuring it meets cost, schedule, and performance targets. The director leads the IPT and serves as the single point of contact between federal and contractor staff for all matters relating to the project and its performance.
DOE staff, M&O contractors, and EIR contractors appear to agree universally that projects benefit from the effort expended in conducting peer reviews and that this benefit increases with the size, complexity, and inherent risks of the project. There is not necessarily agreement, however, on all aspects of the timing, cost, and comprehensiveness of specific reviews.
REVIEW PLANNING PROCESS AND TAILORING
The EIR process is controlled by the OECM and has been implemented by three outside contractors. These independent contractors are tasked with assembling a team of experts with the knowledge and skills appropriate for the project being reviewed, examining project management and design documents, planning and conducting on-site interviews, and reporting on the readiness of the projects to proceed beyond approval of the project performance baseline (CD-2). The EIR manual developed by the OECM provides general guidance for the process (DOE, 2003b). The IPT is responsible for assembling information required by the EIR team, providing on-site briefings, and, eventually, responding to the EIR recommendations.
An EIR is initiated prior to CD-2, when a project’s front-end planning should be complete and its scope has been judged to be valid and stable. To produce an effective review, the EIR team must understand the information in a large array of complex documents, the decisions they record, the interaction of these decisions, and their impact on the validity of the project baseline. Transferring and acquiring the necessary understanding of the project requires a significant investment of time by the IPT and the EIR team.
It is the committee’s opinion that the current practice of initiating this effort four or five weeks prior to an on-site review and relying on IPT briefings to complete the process is not always efficient or effective. The process could be facilitated by involving some of the external reviewers in IPRs that occur earlier in the project. The objective would be to facilitate the transfer of information across process
“interfaces” when information is handed off from DOE staff to external contractors. By participating in earlier discussions, the external contractors could hear firsthand about potential issues and the discussions surrounding them and thus avoid going over the same ground when the IPT briefs them. It would also provide a cross-check to help ensure that significant information does not “fall through the cracks” when the IPT relays information from IPRs to the EIR contractors. The IPRs, in turn, could benefit from the added perspective of fully independent reviewers, who might be less constrained than in-house staff or M&O contractors in questioning planning assumptions.
DOE projects often involve specialized scientific equipment or innovative technologies with significant risks and implementation challenges. The nonroutine, sometimes unique, nature of DOE projects requires a case-by-case treatment. For this reason, EIRs often call for peer reviewers who have unique expertise. In addition, critical issues change over the course of a project’s development, with planning issues dominating in the early stages and implementation issues in the later stages. As a result, the composition of a review team must be carefully considered to ensure that members have the appropriate expertise.
The nonroutine character of DOE projects is recognized in DOE policies on tailoring, which note that an EIR can be tailored to the size, risk, technological readiness, and complexity of the project being reviewed (DOE, 2000, 2003). Tailoring allows the methodology and approach used to meet EIR requirements to be modified and might involve consolidating decisions, making process steps concurrent, or other changes. Thus, an EIR for a small, routine project might be streamlined and involve a smaller team, although all key review elements must still be addressed.
The committee observed little evidence of effective tailoring for EIRs. Determining how to tailor an EIR requires experience as well as judgment about the value of peer reviews per se. For example, tailoring that involved eliminating the on-site sessions for smaller projects was reported to severely diminish the effectiveness of the EIR. Because much of the value of a peer review lies in the interaction of the participants, it is not surprising that eliminating the on-site review was counterproductive. In contrast, tailoring that was done by a collaboration of the EIR contractor, the IPT, the PSO, and the OECM produced effective reviews. For example, the CD-3 review for the Microsystems and Engineering Science Applications (MESA) project at Sandia National Laboratories conducted by Burns & Roe Enterprises in February 2003 was formulated jointly by the OECM, Burns & Roe, and MESA project personnel.
The timing of peer reviews is also significant. For most projects the practice is to conduct an EIR prior to CD-2 and for major systems another EIR prior to CD-3. However, it may be advantageous to tailor the timing. For quality assurance, some projects could probably benefit from annual reviews when more than 1 year elapses between CDs. Such reviews could provide a fresh look at the project, account for changing circumstances, identify emerging problems, and revisit project assumptions and the information on which decisions were based to ensure they remain valid.
Committee members and DOE staff, M&O contractors, and EIR contractors generally agreed that review team members should be independent of the project team. Having reviewers who did not participate in the planning and execution of the project is vital for objectivity and for increasing confidence in decisions on major, complex projects with significant inherent risks.
What constitutes independence, however, is subject to interpretation. In IPRs, independence is generally considered to be achieved by using federal staff or DOE M&O contractors who are not directly associated with the particular project. They may be personnel who work in the same DOE program or at the same site as the project team, personnel who work in other DOE programs and sites, or personnel who have never worked for DOE or one of its contractors or who no longer do.
EIR contractors are external to DOE and not directly associated with the project. However, because only three contracting firms were used prior to 2006, their ongoing relationship with DOE and its
federal staff might compromise their independence. The addition of two firms, for a total of five, should help to mitigate this concern. In general, the more detached a reviewer is from economic, political, social, or other influences, the more independent he or she can be. The need for objectivity is also recognized by private sector firms such as Chevron-Texaco, whose project development and execution process requires totally independent design review boards (NRC, 2002).
Some disagreement was found among DOE staff and contractors regarding the relative value of IPRs and EIRs. Some federal staff believe that EIRs are more effective than IPRs because the participants are more independent and take greater care to prepare for the reviews and to ensure that project plans and documentation are complete and up to date. Others believe that IPRs are preferable because external reviewers do not always have the required technical expertise.
The committee believes that peer reviews using a mix of internal and external reviewers would best assure independence while providing the necessary expertise.
INTEGRATION OF REVIEWS
Developing a comprehensive plan for EIRs and other peer reviews would give DOE more overall value by creating a continuum of risk management and quality assurance during all phases of project development. Current DOE practices disperse the responsibility for organizing the various types of reviews: contractors manage independent design and technical reviews, PSOs manage IPRs, and OECM manages EIRs. Greater coordination in the planning of reviews could improve the outcomes of all reviews. For example, EIRs and IPRs have been performed simultaneously for some projects. For other projects, they have been scheduled sequentially over time. Conducting reviews simultaneously reduces the time required for briefings and data-gathering interviews. Conducting them sequentially allows DOE to schedule IPRs first to determine if the project is ready for an EIR review to validate the performance baseline. On balance, the committee believes that sequential reviews are preferable because actions can be taken to correct problems identified during the IPR and to ascertain the effectiveness of those corrective actions in a subsequent EIR (the later reviews should be scheduled to allow sufficient time to do these assessments). For all projects, the schedules for EIRs and IPRs should be carefully coordinated to maximize the value of reviews and minimize disruptions to the project.
If EIRs were conducted by the PSOs and the OECM assumed an oversight role, the portfolio of reviews could be expected to become more coordinated, comprehensive, unbiased, and effective.
COST AND VALUE
Workshop participants indicated that an EIR costs about $150,000 on average. The range of costs is significant, however—from about $50,000 for a small project using a three-person team to as much as $1.2 million for an EIR assessment of a large, complex project.
Currently, EIR costs (travel, contractor salaries, and overhead) are funded by the program direction budgets, which are also used to fund other project management activities. This practice puts pressure on DOE program managers to choose between funding EIRs and funding the training of personnel or other activities to improve project management. By contrast, the only IPR costs requiring direct funding are the travel expenses of the review team, because labor costs are absorbed by the team members’ program. Comparing the full costs of EIRs to the partial costs of IPRs results in a bias in favor of IPRs. This bias is reinforced when program managers are required to make trade-offs between EIRs and other important activities.
Analyzing the costs and benefits of EIRs and IPRs is a challenge. A full cost accounting should include labor, travel, and overhead for all review personnel. For EIRs, contract costs can be compared to project costs, but for IPRs, current accounting procedures make only the travel costs easily identifiable. The full costs of federal staff time and overhead cannot be disaggregated.
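The full-cost comparison described above can be sketched in a few lines. All dollar figures, labor hours, rates, and the overhead multiplier below are hypothetical illustrations, not DOE accounting data:

```python
def full_review_cost(labor_hours, hourly_rate, travel, overhead_rate=0.5):
    """Full cost of a review: labor + travel + overhead charged on labor.

    The overhead rate and all inputs are illustrative assumptions;
    actual DOE accounting practices differ.
    """
    labor = labor_hours * hourly_rate
    return labor + travel + labor * overhead_rate

# An EIR is billed at full cost under a contract, so all components appear.
eir_cost = full_review_cost(labor_hours=800, hourly_rate=120, travel=24_000)

# For an IPR, only travel is directly visible in the program budget;
# labor and overhead are absorbed by the reviewers' home programs.
ipr_visible_cost = 12_000
ipr_full_cost = full_review_cost(labor_hours=400, hourly_rate=100, travel=12_000)

# Comparing eir_cost against ipr_visible_cost (rather than ipr_full_cost)
# overstates the relative expense of EIRs -- the bias described in the text.
print(f"EIR full cost:    ${eir_cost:,.0f}")
print(f"IPR visible cost: ${ipr_visible_cost:,.0f}")
print(f"IPR full cost:    ${ipr_full_cost:,.0f}")
```

The point of the sketch is only that the visible IPR cost understates the true cost, so any EIR-versus-IPR comparison must be made on a full-cost basis.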
Quantifying the benefits of peer reviews is also difficult because benefits accrue primarily in the form of problem avoidance and increased confidence in project performance, outcomes that are difficult to quantify. For all these reasons, determining what type of review to conduct and when to conduct it for a particular project based on the costs and benefits is a subjective process.
As already mentioned, the committee sensed agreement among DOE staff and contractors at headquarters and in project offices that EIRs provide value. There was also general agreement that large, complex projects benefit more than small, routine projects. Whether this justifies the cost of EIRs was not clear due to the paucity of easily measured, quantifiable data on costs and benefits.
The committee believes that some indicators of the performance of EIRs should be developed, although care must be taken in doing so. For example, the number of projects that require rebaselining might be an indicator of the effectiveness of the EIR program. However, these data are not readily available and are in any case influenced by factors outside the purview of the EIR: the committee observed situations where EIR team recommendations to change the baseline cost and schedule or to make a security-related change were not followed for budgetary or other reasons. In such cases, using the number of projects requiring rebaselining as an indicator of the value of EIRs could be misleading because the recommendations produced no measurable outcomes.
This is not to say that useful indicators cannot be developed. The report Measuring Performance and Benchmarking Project Management at the Department of Energy (NRC, 2005) provides advice and guidance intended to help DOE develop and implement effective performance measures and an effective benchmarking program for project management. It suggests 30 possible performance measures in four sets: project-level input/process measures; project-level output/outcome measures; program- and department-level input/process measures; and program- and department-level output/outcome measures. The report notes that the “value of an individual performance measure is limited, but combined, the measures provide a robust assessment of the quality of project management for individual projects and programs” (NRC, 2005, p. 1).
The report suggests using corrective actions from IPRs, EIRs and quality assurance reports as a “project-level input/process measure” to assess the resources provided to deliver an individual project and the management of the project against standard procedures. The use of IPRs and EIRs and implementation of corrective action plans is also suggested as a “program- and department-level input/process measure.” Such measures are used to assess the total resources provided for all projects within a program or department and the degree to which program- and department-wide goals for projects and their management are met. The report notes that “Independent reviews can alert project directors to potential problem areas that will need to be resolved as well as provide suggestions for handling these problems” (NRC, 2005, p. 17). The measures would be calculated by determining if an EIR was conducted on the project prior to CD-3 and the percent of EIR issues addressed by corrective actions.
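The two measures suggested above can be computed mechanically once reviews and their findings are recorded. The following is a minimal sketch assuming a simple record structure; the field names and sample data are hypothetical, not taken from NRC (2005) or DOE systems:

```python
def eir_before_cd3(reviews):
    """True if a completed EIR preceded the CD-3 decision for this project."""
    return any(r["type"] == "EIR" and r["phase"] == "pre-CD-3" for r in reviews)

def pct_issues_addressed(issues):
    """Percent of EIR issues closed out by corrective actions."""
    if not issues:
        return 0.0
    closed = sum(1 for i in issues if i["corrective_action_complete"])
    return 100.0 * closed / len(issues)

# Hypothetical record for one project
reviews = [{"type": "IPR", "phase": "pre-CD-2"},
           {"type": "EIR", "phase": "pre-CD-3"}]
issues = [{"id": 1, "corrective_action_complete": True},
          {"id": 2, "corrective_action_complete": True},
          {"id": 3, "corrective_action_complete": False}]

print(eir_before_cd3(reviews))                 # True
print(round(pct_issues_addressed(issues), 1))  # 66.7
```

As the report notes, a single measure of this kind has limited value on its own; it becomes informative when combined with the other project- and program-level measures.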
If a set of performance measures were used, improvements to the EIR process, in conjunction with other management improvements, should be visible in the trends for DOE project management performance across projects and over time.
USING PEER REVIEWS TO IMPROVE PROJECT MANAGEMENT CAPABILITIES
Both EIRs and IPRs provide information for making effective decisions about DOE projects. The data developed, recommendations made, and effective practices identified during reviews can also be used to improve the project management process for a particular project by indicating areas that need closer scrutiny or a different approach by the project management team. EIRs also add discipline to the project management process and provide opportunities to increase control of the outcomes. For small projects undertaken by less experienced personnel, EIRs were reported to provide training and to contribute to the staff’s professional development.
In some instances, especially on smaller projects managed by less experienced personnel, mentoring by individuals on the review panels may be a value-added benefit for the costs and time
invested. EIRs assemble a panel of very experienced and skilled individuals to perform the review and meet with project staff to discuss the content and meaning of various items of project work. Each meeting between the review panel and project personnel is an opportunity to mentor the project team members. If this broader view of peer reviews were to become part of DOE’s management culture, project staff might look forward to the next review as a chance to improve rather than dread it as an audit or an inquisition into their work.
Peer reviews (EIRs and IPRs) can be a primary vehicle for gauging and maturing an organization’s project management capabilities. The committee learned that this had happened during reviews, but the efforts were ad hoc and inconsistent from project to project and from program to program. When EIR contractors are exposed to problems and solutions from multiple projects at multiple sites, they can disseminate these lessons learned through on-site discussions and EIR reports. DOE staff and M&O contractors who participate in IPRs are exposed to methods and procedures from other sites and bring their experience to benefit the personnel associated with the project being reviewed. However, these benefits have not been documented or measured. NNSA reported that an effort had been initiated to capture lessons learned from IPRs of its projects, but this effort does not apply to EIRs, and the committee found no entity other than NNSA that used such information. A clearly defined, systematic process for documenting data from peer reviews, along with a system for cataloging, storing, and disseminating it, is needed; if implemented, it would greatly enhance the benefits of peer reviews. The OECM, in collaboration with the PSOs and the federal project directors, and with input from contractors, should develop criteria and guidelines for such a system.
REFERENCES
DOE (Department of Energy). 2003a. Project Management for the Acquisition of Capital Assets (Manual M 413.3-1). Washington, D.C.: Department of Energy.
DOE. 2003b. External Independent Review Standard Operating Procedures. Washington, D.C.: Department of Energy.
NRC (National Research Council). 1998. Assessing the Need for Independent Project Reviews in the Department of Energy. Washington, D.C.: National Academy Press.
NRC. 1999. Improving Project Management in the Department of Energy. Washington, D.C.: National Academy Press.
NRC. 2002. Proceedings of Industry/Government Forum: The Owner’s Role in Project Management and Preproject Planning. Washington, D.C.: National Academy Press.
NRC. 2005. Measuring Performance and Benchmarking Project Management at the Department of Energy. Washington, D.C.: The National Academies Press.
U.S. Congress. 1997. Committee of Conference on Energy and Water Development. HR 105-271. Washington, D.C.: Government Printing Office.