3
The Benchmarking Process

INTRODUCTION

Management theory and practice have long established a link between effective performance measures and effective management (Drucker, 1995). The effectiveness of any given performance measure depends on how it will be used. For performance measures to have meaning and provide useful information, it is necessary to make comparisons. The comparisons may evaluate progress in achieving given goals or targets, assess trends in performance over time, or weigh the performance of one organization against another (Poister, 2003).

The Government Performance and Results Act of 1993 (GPRA) established the requirement for performance measures to assess how well departments and agencies are achieving their stated goals and objectives. The emphasis of GPRA performance measures is on output and outcome measures at the program level.

Performance measures used as a management tool need to be broadened to include input and process measures. One approach is to use an array or scorecard composed of multiple measures. The Balanced Scorecard is one such approach that assesses an organization and its programs from four different perspectives: customer, employee, process, and finance. “The scorecard creates a holistic model of the strategy that allows all employees to see how they contribute to organizational success…. [It] focuses change efforts. If the right objectives and measures are identified, successful implementation will likely occur.” (Kaplan and Norton, 1996, p. 148)
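To make the scorecard structure concrete, the following is a minimal sketch, in Python, of a balanced scorecard organized around the four perspectives named above; the specific measures, targets, and values are illustrative assumptions, not drawn from the report.

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    """A single performance measure with a target and (optionally) an actual value."""
    name: str
    target: float
    actual: float | None = None

@dataclass
class BalancedScorecard:
    """Measures grouped under the four perspectives named in the text."""
    perspectives: dict[str, list[Measure]] = field(default_factory=dict)

    def add(self, perspective: str, measure: Measure) -> None:
        self.perspectives.setdefault(perspective, []).append(measure)

# Illustrative, hypothetical measures -- not drawn from the report.
card = BalancedScorecard()
card.add("customer", Measure("stakeholder satisfaction (1-5)", target=4.0, actual=4.2))
card.add("employee", Measure("training hours per staff member", target=40.0, actual=32.0))
card.add("process",  Measure("change orders per $1M of work", target=2.0, actual=2.5))
card.add("finance",  Measure("cost variance at completion (%)", target=0.0, actual=-3.1))

for perspective, measures in card.perspectives.items():
    for m in measures:
        print(f"{perspective:>8} | {m.name}: target={m.target}, actual={m.actual}")
```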

The objectives and processes of construction and construction project management create a good environment for the effective use of benchmarking to measure and improve performance. Benchmarking is a core component of continuous improvement programs. As Gregory Watson noted in his Benchmarking Workbook, 12 of the 32 criteria for the Malcolm Baldrige National Quality Award refer to benchmarking as a key component of quality assurance and process improvement (Watson, 1992). The role of benchmarking in process improvement is similar to that of the Six Sigma1 process improvement methodology, which comprises five integrated steps: define, measure, analyze, improve, and control (DMAIC). These steps are also central to the benchmarking process defined in this chapter.

Benchmarking is an integral part of the continuous improvement cycle shown in Figure 3.1 (CII, 2004). Measuring, comparing to competition, and identifying opportunities for improvements are the essence of benchmarking.
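The cycle in Figure 3.1 can be read as a simple loop: measure performance, compare it to a benchmark, identify the largest gaps, and act on them. The sketch below (Python) illustrates that loop using purely hypothetical metric names, values, and improvement assumptions.

```python
# A minimal, self-contained sketch of the measure -> compare -> improve loop
# shown in Figure 3.1. Metric names and values are illustrative assumptions.

actual = {"cost growth (%)": 8.0, "schedule growth (%)": 12.0, "rework (%)": 3.0}
benchmark = {"cost growth (%)": 3.0, "schedule growth (%)": 5.0, "rework (%)": 2.5}

for cycle in range(3):  # each pass represents one improvement cycle
    # Measure and compare: gap = actual minus benchmark (lower is better here)
    gaps = {metric: actual[metric] - benchmark[metric] for metric in actual}
    # Identify opportunities: the metric with the largest gap
    worst_metric, worst_gap = max(gaps.items(), key=lambda item: item[1])
    print(f"Cycle {cycle + 1}: largest gap is {worst_metric} ({worst_gap:+.1f} points)")
    # Improve: assume (hypothetically) that targeted action closes half of the worst gap
    actual[worst_metric] -= worst_gap / 2
```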

1. Six Sigma refers to a body of statistical- and process-based (e.g., process mapping, value stream mapping) methodologies and techniques used as part of a structured approach for solving production and business process problems plagued with variability in execution (Harry and Schroeder, 2000).


FIGURE 3.1 Continuous improvement cycle. SOURCE: CII, 2004.

BENCHMARKING ROADMAP

Many definitions of benchmarking are available. The following definition, from the Construction Industry Institute (CII), illustrates a number of important points.

Benchmarking is the systematic process of measuring one’s performance against recognized leaders for the purpose of determining best practices that lead to superior performance when adapted and utilized. (CII, 1995)

To be successful, benchmarking should be implemented as a structured, systematic process; it will not succeed if applied in an ad hoc, sporadic fashion. In most cases benchmarking is best-practice-oriented and is part of a continuous improvement program that incorporates a feedback process. Benchmarking requires an understanding of what is important to the organization (sometimes called critical success factors) and measurement of performance for these factors. The gap between actual performance and desired performance is then analyzed to identify opportunities for improvement. Root cause analysis usually follows to determine the cause of unsatisfactory performance, and a search for best practices may be used to help address performance problems. Figure 3.2 illustrates the process with a benchmarking roadmap.
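The measurement and gap-analysis steps of the roadmap can be expressed compactly. The sketch below (Python) computes, for a set of hypothetical critical success factors, the gap between actual performance and a benchmark and ranks the factors most in need of root cause analysis; the factor names, values, and "lower is better" orientation are illustrative assumptions.

```python
def performance_gaps(actual: dict[str, float],
                     benchmark: dict[str, float],
                     lower_is_better: set[str]) -> dict[str, float]:
    """Return the gap (positive = worse than benchmark) for each critical success factor."""
    gaps = {}
    for factor, value in actual.items():
        target = benchmark[factor]
        # Orient every gap so that a positive number means "behind the benchmark".
        gaps[factor] = (value - target) if factor in lower_is_better else (target - value)
    return gaps

# Hypothetical critical success factors for one project.
actual = {"cost growth (%)": 9.0, "schedule growth (%)": 14.0, "safety incidents": 1.0}
benchmark = {"cost growth (%)": 4.0, "schedule growth (%)": 6.0, "safety incidents": 0.0}

gaps = performance_gaps(actual, benchmark, lower_is_better=set(actual))
# The largest gaps are the first candidates for root cause analysis.
for factor, gap in sorted(gaps.items(), key=lambda item: item[1], reverse=True):
    print(f"{factor}: {gap:+.1f} behind benchmark")
```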

The roadmap was adapted from the 10-step process introduced by Robert Camp at Xerox. Camp pioneered much of the early work in benchmarking, and some even credit him with the first use of the term “benchmarking.”

EXTERNAL VERSUS INTERNAL BENCHMARKING

Benchmarking can be internal or external. When benchmarking internally, organizations benchmark against their own projects. When benchmarking externally, organizations seek projects from other companies or perhaps, in the case of DOE, from separate program offices for comparative analysis. External benchmarks are generally considered to provide the greater advantage; however, internal benchmarking can be useful where no external benchmarks are available. Internal benchmarks are often the starting point for quantitative process examination. Trends can be identified by examining these data over time, and the impact of performance-improving processes can be assessed. External benchmarks provide the added advantage of comparing against competitors. Without external benchmarks, an organization and its managers may lack an understanding of what constitutes “good” performance.
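The distinction can be illustrated numerically. In the sketch below (Python), an internal benchmark derived from an organization's own completed projects is compared with a hypothetical external benchmark; all values are invented, and the point is only that a project can look good against internal history while remaining well behind external performance.

```python
from statistics import median

# Hypothetical cost growth (%) on the organization's own completed projects.
own_history = [11.0, 9.5, 12.0, 10.0, 8.5]

# Hypothetical external benchmark from an industry data set.
external_benchmark = 4.0

internal_benchmark = median(own_history)   # internal baseline from own projects
new_project = 9.0                          # cost growth on the project being assessed

print(f"Against internal benchmark ({internal_benchmark:.1f}%): "
      f"{new_project - internal_benchmark:+.1f} points")   # looks better than usual
print(f"Against external benchmark ({external_benchmark:.1f}%): "
      f"{new_project - external_benchmark:+.1f} points")   # still well behind industry
```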


FIGURE 3.2 Benchmarking roadmap. SOURCE: Adapted from Camp, 1989.

Application at Various Project Phases and Management Levels

Benchmarking can and should be used at various levels throughout the organization, but if project improvement is the goal, data will typically be entered at the project level. Program- and department-level measures can be provided by roll-ups of the project-level data.
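A roll-up of project-level data can be as simple as grouping and aggregating project records. The sketch below (Python) shows one way program- and department-level measures might be computed from project-level entries; the records, field names, and cost-weighting choice are illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical project-level records: (program, final cost in $M, cost growth %).
projects = [
    ("Program A", 120.0, 6.0),
    ("Program A",  80.0, 2.0),
    ("Program B", 200.0, 9.0),
    ("Program B",  50.0, 1.0),
]

# Program-level roll-up: cost-weighted average cost growth per program.
by_program = defaultdict(lambda: [0.0, 0.0])   # program -> [total cost, weighted growth]
for program, cost, growth in projects:
    by_program[program][0] += cost
    by_program[program][1] += cost * growth

for program, (total_cost, weighted) in by_program.items():
    print(f"{program}: cost-weighted cost growth = {weighted / total_cost:.1f}%")

# Department-level roll-up: the same aggregation over all projects.
total_cost = sum(cost for _, cost, _ in projects)
dept_growth = sum(cost * growth for _, cost, growth in projects) / total_cost
print(f"Department: cost-weighted cost growth = {dept_growth:.1f}%")
```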

Benchmarking can be applied during various phases of a project for different purposes. When applied early, such as at project authorization, it can be used to identify characteristics that may be associated with potential future problems and to flag aspects of project management (e.g., risk management) that need special attention to ensure project success. When applied during project execution, it can serve as a project management tool to guide project decisions. Postproject benchmarking is typically used to assess the performance of a project delivery system and to provide lessons learned and feedback that can be used to establish benchmarks for future comparisons. Most organizations begin with postproject comparisons and progress to the earlier uses as confidence in the benchmarking process builds. Over time, when sufficient data are available, trends can be analyzed to provide insight into the performance of project management systems. Because integrated project team (IPT) members will normally have moved on to new projects, trend analyses of project-level cost and schedule metrics would typically be used at the program and department levels.
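Once several years of completed-project data have accumulated, a simple trend calculation can indicate whether performance is improving at the program or department level. The sketch below (Python) fits a least-squares trend line to hypothetical annual averages of schedule growth.

```python
# Hypothetical annual averages of schedule growth (%) for completed projects.
years  = [2000, 2001, 2002, 2003, 2004]
growth = [14.0, 12.5, 11.0, 11.5, 9.0]

# Ordinary least-squares slope: change in schedule growth per year.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(growth) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, growth))
         / sum((x - mean_x) ** 2 for x in years))

print(f"Trend: {slope:+.2f} percentage points per year")  # negative = improving
```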

Benchmarking needs buy-in at various levels of an organization in order to be successful. Most often, benchmarking is driven from the top. Senior management commitment is critical if resources are to be made available for the process. While benchmarking may succeed with senior management support alone, it is far more likely to succeed if it has the support of middle management and the project team. Furthermore, the project team is far more likely to support the benchmarking initiative if it is understood that the goal is system improvement and not individual or team performance appraisal. The IPT members should be confident that data submitted for benchmarking will not be used for performance appraisals if accurate data are to be obtained.


Validation

The validation of benchmarked data is a critical component of any benchmarking system. Some benchmarking services collect data through a survey instrument and then use an experienced analyst to review them. The project team is interviewed to clarify and resolve issues.

A different approach to validation is to share responsibility between the project team and an outside organization. The project team is responsible for reviewing the data to be submitted to ensure that they accurately reflect the project’s experience. An independent reviewer serves as an honest broker and validates the data by ensuring their completeness and accuracy. The reviewer should be a trained, independent professional with a good understanding of the data to be collected, the measures to be produced, and the project management process used. A rigorous examination of all data is performed by the service provider as a final check. Whatever approach is used, a validation process assists in maintaining consistency across organizations.
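Parts of the validation step can be automated before the human reviews described above. The sketch below (Python) shows the kind of completeness and plausibility checks a project team or independent reviewer might run on a data submission; the required fields and range limits are illustrative assumptions, not requirements of any particular benchmarking service.

```python
REQUIRED_FIELDS = {"project_id", "baseline_cost", "final_cost",
                   "baseline_duration_months", "actual_duration_months"}

def validate_submission(record: dict) -> list[str]:
    """Return a list of issues found in a benchmarking data submission."""
    issues = []
    # Completeness: every required field must be present.
    for field in sorted(REQUIRED_FIELDS - record.keys()):
        issues.append(f"missing field: {field}")
    # Plausibility: flag values outside broad, illustrative limits.
    if record.get("final_cost", 0) <= 0:
        issues.append("final cost must be positive")
    baseline = record.get("baseline_cost")
    final = record.get("final_cost")
    if baseline and final and final > 3 * baseline:
        issues.append("cost growth exceeds 200%; confirm with project team")
    return issues

sample = {"project_id": "X-101", "baseline_cost": 50.0, "final_cost": 180.0,
          "baseline_duration_months": 24}
for issue in validate_submission(sample):
    print("REVIEW:", issue)
```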

IMPLEMENTATION OF BENCHMARKING

Benchmarking processes are not easy to implement, and to be successful an organization must overcome numerous barriers. Some private-sector companies fear that they may lose their competitive advantage by sharing information, and others fear exposure of organizational weakness. Use of an identity-blind process, whereby data are posted without attribution, can mitigate these concerns.

For some organizations, arrogance is a major obstacle: they believe they are already the best, so why benchmark? As the renowned management consultant W. Edwards Deming might have asked of supremely confident organizations that lack performance data and comparisons with other organizations: How do you know? (Watson, 1992). Other organizations are unaware of the value of benchmarking or believe that benchmarking systems do not adequately address their needs. Benchmarking agreements and training increase familiarity with the benchmarking process and can help to reduce these barriers.

One of the greatest barriers to benchmarking is a lack of resources. Most organizations are leaner today than in the past, and dedicating the essential resources can be difficult. For some organizations, project processes and computer systems are not sufficiently developed to easily support benchmarking (CII, 2002). For these organizations the benchmarking process will require more manual intervention and consequently greater resources. As project processes become automated, this barrier should shrink.

Lessons Learned

Lessons learned from past benchmarking efforts can be helpful for an organization embarking on a benchmarking initiative:

  • Senior management buy-in and support are vital to success, but even with this support, generating enthusiasm is difficult (McCabe, 2001).

  • Department- and program-level champions are essential.

  • Even though projects may be unique, the processes are very similar.

  • A strong code of ethics is essential.

  • Benchmarking will be successful only if made an integral part of the project process.

  • Commonly accepted, effective metrics for assessing project performance are needed to gauge the extent to which best practices are used. Input, process, output, and outcome performance measures are all necessary, and all can be implemented.

  • Performance measures should be applied through a structured benchmarking process.

  • Cost-effective, value-added benchmarking can be implemented through standardization of definitions and application of computer-based technologies.


REFERENCES

Camp, Robert C. 1989. Benchmarking: The Search for Industry Best Practices That Lead to Superior Performance. Portland, Ore.: ASQC Quality Press.

CII (Construction Industry Institute). 1995. Construction Industry Institute Data Report. Austin, Tex.: Construction Industry Institute.

CII. 2002. Member Company Survey. Austin, Tex.: Construction Industry Institute.

CII. 2004. Presentation to the Construction Industry Institute Annual Conference, Vancouver, British Columbia, July 2004. Austin, Tex.: Construction Industry Institute.


Drucker, Peter F. 1995. Managing in a Time of Great Change. New York, N.Y.: Penguin Putnam, Inc.


Harry, Mikel, and Richard Schroeder. 2000. Six Sigma: The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations. New York, N.Y.: Currency/Doubleday.

Kaplan, Robert S., and David P. Norton. 1996. The Balanced Scorecard. Boston, Mass.: Harvard Business School Press.

McCabe, Steven. 2001. Benchmarking in Construction. Oxford, U.K.: Blackwell Science, Ltd.

Poister, Theodore H. 2003. Measuring Performance in Public and Nonprofit Organizations. San Francisco, Calif.: Jossey-Bass.


Watson, Gregory H. 1992. The Benchmarking Workbook: Adapting Best Practices for Performance Improvement. Portland, Ore.: Productivity Press.
