The Influence of Organizational Linkages and Measurement Practices on Productivity and Management
D. Scott Sink and George L. Smith, Jr.
There are at least three world views regarding the productivity paradox:
There is no paradox. Information technology (IT) and other improvement interventions are improving performance. Researchers and practitioners simply cannot measure the improvement.
There is a problem but not a paradox. Improvement initiatives are not being driven by rationality or profound knowledge (defined below). They are more like random-walk processes, initiated on the basis of what is in vogue or what is easily available, not on the basis of what makes sense or will work best. Thus, the problem is not necessarily with measurement but with the quality of decision making.
There is a paradox, and addressing it is confounded by the lack of systems thinking, lack of profound knowledge, and inadequate measurement methodology.
We believe the third view is the most accurate.
The productivity paradox has a threefold nature. It is a measurement problem, a management and decision-making problem, and a combined problem of strategy, action, and measurement. In this chapter we present a conceptual model that portrays the organization, its management system, and the planning process in a way we hope will stimulate a thorough reexamination of managerial planning and decision making and some of the fundamental notions regarding performance. We
explore the relationship of the measurement function to performance at the individual, group, and organizational levels in a way that integrates measurement, planning, and managing the organization. We also discuss research issues associated with the use of measurement to support improvement and to address linkage questions. The alignment of strategies, actions, and measures is a key theme in this chapter.1
LINKAGES AND PROFOUND KNOWLEDGE
Many organizational improvements are undertaken without knowledge of cause-and-effect relationships. Some are undertaken because they are the "in" thing to do; others are undertaken because they are believed to be the "right" thing to do. Some improvements, however, are initiated based on evidence that they will improve performance. Sometimes the interventions do improve the performance of certain subsystems, but the improvement cannot be tracked either horizontally or at higher levels of the organization. One expects that the interventions will cause a change in performance because there are linkages among entities within levels as well as between levels of the organization.
Deming (1986, 1991) and Goldratt (Goldratt and Cox, 1986; Goldratt and Fox, 1986) provide good examples of how failure to understand linkages can lead to interventions at one level having neutral or negative outcomes, particularly at the level of the macro system. Defining specifically what linkages are expected is critical in developing a performance measurement methodology. Much evaluation research simplistically assumes that an improvement at one organizational level will automatically cause an improvement at another level. Linkages are far more complex, however. Implementing an intervention in a given entity can cause multiple dimensions of performance to improve within the entity itself and, then, in other entities through linkages.
Deming (1986, 1991) suggests that making an improvement intervention in one entity and projecting positive performance linkages at
the next higher level or within other entities at the same level require profound knowledge. He describes profound knowledge as comprising theory of systems, theory of variation, theory of psychology, and theory of knowledge. It is the blending of wisdom, experience, conceptual and operational understanding, skill, and judgment. We would add that it also includes understanding the organizational system and well-founded beliefs concerning cause-and-effect relationships. This is the crux of the paradox, in our opinion. Many managers involved in performance improvement do not have profound knowledge. Often, improvement interventions are implemented not with the aim of optimizing the larger system's performance, but rather with the aim of maximizing the performance of an individual or a subsystem. It can be argued that lack of systems thinking is at the heart of the productivity paradox.
A SYSTEMS MODEL OF ORGANIZATIONAL PERFORMANCE
A management system comprises three elements: who manages, what is managed, and how managing is done (i.e., the "tools" and methods used to convert data to information). An organizational system is a component of a management system; it is "what" is being managed. An organizational system has upstream systems (suppliers, vendors, other providers); inputs (labor, capital, energy, materials, and information and data); value-adding processes; outputs (goods and/or services); downstream systems (customers); outcomes (profits, customer satisfaction); and up-line systems (parent organizations, hierarchically superior systems). The Management Systems model shown as Figure 6-1, which was adapted from Kurstedt (1986) and Sink and Tuttle (1989), depicts a systems view of an organizational system.
The management team (i.e., "who" does the managing) makes decisions and takes actions aimed at improving the performance of the organizational system. Performance is multidimensional, and ambiguity and inconsistency regarding the criteria for measuring are major problems common to researchers and practitioners involved with the measurement of organizational performance. Operational definitions, which express the performance criteria in measurable terms, are sorely lacking in the literature and in practice. Thus, confusion reigns in the field of measurement as it relates to organizations. Less confusion exists regarding individual-level performance because industrial psychologists have established more specificity in the terminology used. Nonetheless, considerable variance exists among disciplines such as industrial engineering, industrial sociology, industrial psychology, organizational behavior, and human factors engineering when it comes to measurement terminology.
In the absence of accepted operational definitions of performance and its components, the task of measuring and evaluating improvement is difficult. Researchers and practitioners become "wrapped around the axle" because there is no agreed language of performance measurement. They operationalize performance criteria differently because they have not grasped the fundamentals. Yet their hypotheses regarding what will actually improve if an intervention is made (e.g., IT) are crucial to understanding the productivity paradox.
Our ongoing and recently updated review of the literature (in preparation) confirms that there are at least seven interrelated and interdependent performance criteria for an organizational system: (1) effectiveness, (2) efficiency, (3) productivity, (4) quality, (5) quality of work life, (6) innovation, and (7) profitability (profit center) or budgetability (cost center). These seven criteria are substantially inclusive but not necessarily mutually exclusive. They represent level zero in a measurement-breakdown structure. An intervention to improve the performance of an entity may be expected to improve one or more of the seven basic criteria. For example, is it reasonable to expect that a specific IT intervention will increase productivity (output/input), or might it be more reasonable to expect that it will improve the quality of the output, an effect that may be difficult to discern in measurements of productivity? Below we define each of the seven performance criteria. Their integration in the Management Systems model is depicted in Figure 6-2.
As seen in Figure 6-2, effectiveness focuses on the output side of an organizational system. An example of an indicator of effectiveness is
actual output versus expected output. Attributes commonly used to refine the effectiveness criterion are timeliness, quality, quantity, and price/cost (i.e., value). An example of an operational definition is accomplishment of the right things on time, within specifications or expectations. The word right highlights the fact that effectiveness often incorporates elements of judgment, uncertainty, and risk.
Efficiency focuses on the input side of an organizational system. An indicator of efficiency would be resources actually consumed versus resources expected to be consumed. The same four attributes of timeliness, quality, quantity, and cost/price are often used to refine the measurement of efficiency.
Quality is pervasive throughout the organizational system. One can stop short of a thorough operational definition of quality by espousing the overly simplistic but often-cited adages "quality is conformance to requirements" and "quality is making the customer happy." However, one can do business with an operational definition (Deming, 1986). It is a definition from which one can measure. Assigning a number to each element of the Management Systems model shown in Figure 6-1
yields five quality checkpoints (see Figure 6-2). Quality checkpoint 1 (q1) is the selection and management of upstream provider systems; quality checkpoint 2 (q2) is incoming quality assurance; quality checkpoint 3 (q3) is in-process quality management; quality checkpoint 4 (q4) is outgoing quality assurance; and quality checkpoint 5 (q5) is proactive and reactive assurance that the organizational system is meeting or exceeding customer needs, requirements, expectations, specifications, and desires (perhaps even including understanding the customer's latent quality desires). These definitions, at the checkpoint level of specificity, get one closer to an understanding of total quality. The extent to which an organizational system is measuring and managing the performance of quality at each of the five checkpoints, over time, is an indication of whether total quality is being managed. This is not circular reasoning; it is the application of systems thinking to quality management.
Productivity is the relationship between what comes out of the organizational system and what is consumed to create those outputs. It is a set of ratios and indices comparing output to input. Taking a static, snapshot approach to measuring productivity yields ratios that can be analyzed over time (e.g., a run chart). Taking the dynamic approach yields indices (ratios over ratios) that provide rate-of-change information. The definition of productivity is one of the simplest of the seven criteria; operationalizing it is the difficult part. The problem most researchers and practitioners have with measuring productivity is that of capturing all the outputs of an organizational system (inputs are relatively easier to capture). People often allow their emotional and intuitive understanding of productivity to cloud their attempts to measure this criterion.
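The static and dynamic approaches to productivity measurement can be sketched in a few lines of code. The following is a minimal illustration; the quarterly output and labor-hour figures are hypothetical, chosen only to show the mechanics of ratios versus indices.

```python
# Static approach: a productivity ratio (output/input) computed per period,
# suitable for plotting on a run chart and analyzing over time.
outputs = [1200, 1260, 1300, 1380]   # units produced per quarter (hypothetical)
inputs_ = [400, 410, 400, 420]       # labor hours consumed per quarter (hypothetical)

ratios = [o / i for o, i in zip(outputs, inputs_)]

# Dynamic approach: an index (a ratio of ratios) against a base period,
# which conveys rate-of-change information rather than a snapshot.
base = ratios[0]
indices = [r / base for r in ratios]

for q, (r, idx) in enumerate(zip(ratios, indices), start=1):
    print(f"Q{q}: ratio = {r:.3f}, index vs Q1 = {idx:.3f}")
```

An index above 1.0 indicates productivity growth relative to the base period; the hard part in practice, as noted above, is not this arithmetic but capturing all of the organizational system's outputs in the numerator.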
Quality of work life is the affective response of the people in the organizational system to any number of factors, such as their job, pay, benefits, working conditions, coworkers, supervisors, culture, autonomy, and skill variation. Measuring quality of work life suggests that one must measure these affective responses and evaluate changes in them over time. Standard instruments have been devised to do this. However, surrogate indicators, such as turnover and absenteeism, are often used as correlates of quality of work life.
Innovation is the reactive, proactive, creative, and successful response to changes (perceived or otherwise) in the internal and external environments of an organizational system. Innovation can include problem solving or opportunity capturing. The linkage issue is particularly salient for this aspect of performance. Organizations in the United States have traditionally attempted to encourage innovation largely through
individual-based suggestion systems. World-class organizations, however, have developed group processes for sparking quality proposals for improvement. Today, world-class organizations monitor the quantity and quality of team-generated proposals that are developed and implemented. Team-based processes build quality in, reduce reject rates, improve motivation, make it easier to sustain employee involvement in innovation and improvement, and yield much higher payoffs. U.S. levels of performance in this area pale by world-class standards: 1 to 2 proposals per employee per year with a 10 to 50 percent implementation rate, versus 40 to 50 proposals per individual or team per year with an 80 to 90 percent implementation rate. By world class we mean best of best, best in class, the highest level of performance for a given system or process regardless of industry. Clearly, there are world-class performances in North America; however, in key industries, for key processes and systems, and at an aggregate level, North American business and industry have clearly been faltering over the past 30 years (Grayson and O'Dell, 1988; Kotler et al., 1985). This is clearly a linkage issue relative to individual- and group-focused performance improvement processes. Historically, U.S. managers have had difficulty distinguishing between situations in which groups are appropriate and situations in which an individual approach is more applicable. The result has been weakened linkages among individual, group, and organizational performance (Kanter, 1983; Kanter et al., 1992; Lawler, 1986; Mohrman et al., 1989; Weisbord, 1991).
Profitability (relevant for profit-center organizational systems) measures the relationship between revenues and costs. An analogous criterion, budgetability (relevant for cost-center organizational systems), measures the relationship between what the organizational system said it would do and the cost, and what it did and the actual cost.
Figure 6-3 portrays the relationships among the seven criteria. The model is conceptual and provides only rough-order relationships. The approach we advocate is to consider these seven criteria as variables that explain variation in performance. One might benefit from being able to pull certain variables in and out of an analysis to see which ones explain variation in performance for a given organizational system, much as one would do in a multiple regression analysis. In defining and understanding each of the seven criteria, one might attempt to "partial out," as in a stepwise regression analysis, the other six. Lacking precise tools, this is difficult to do. However, once all seven criteria have been defined and are understood individually, the seven together can be used to examine linkage effects.
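The regression analogy above can be made concrete with a small sketch. The data below are synthetic and the criterion scores hypothetical; the point is only to show the mechanic of "pulling a variable out" of the analysis and observing how much variation in overall performance is left unexplained.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40

# Hypothetical scores on three of the seven criteria, one per observation period.
effectiveness = rng.uniform(0, 1, n)
quality = rng.uniform(0, 1, n)
qwl = rng.uniform(0, 1, n)  # quality of work life

# Suppose, for illustration, overall performance is driven mainly by
# effectiveness and quality, with a little noise.
performance = 2.0 * effectiveness + 0.5 * quality + 0.05 * rng.standard_normal(n)

def residual_ss(predictors, y):
    """Fit y on the given predictors (plus an intercept) and return the
    residual sum of squares; smaller means more variation explained."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return float(resid @ resid)

full = residual_ss([effectiveness, quality, qwl], performance)
without_eff = residual_ss([quality, qwl], performance)

# Dropping a criterion that truly drives performance leaves far more
# variation unexplained -- the "partialling out" idea in miniature.
print(f"RSS with all three criteria: {full:.3f}")
print(f"RSS without effectiveness:  {without_eff:.3f}")
```

In a real organizational system the criteria are neither independent nor directly observable as tidy scores, which is why the chapter treats this only as a conceptual model; still, the exercise of asking which criteria explain variation in performance is the same.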
MEASUREMENT, LINKAGES, AND MANAGING ORGANIZATIONAL PERFORMANCE: FOUR GUIDELINES
A management team makes decisions and takes actions aimed at (1) ensuring that the organizational system performs, (2) ensuring that the organizational system continually improves its performance, and (3) responding to problems and crises. To ascertain whether its decisions and actions are working, the management team measures for the purpose of obtaining selected data. Those data are then converted into information, which is portrayed and perceived by the management team.2 The management team formulates or reformulates decisions and actions based on this feedback. We address this cycle, known as the improvement cycle, or the PDCA cycle (plan, do, check, act; see Deming, 1986; Shewhart, 1939), in more detail later in this chapter.
The transformation of performance data into management information requires an understanding of the linkages that relate the activities of individuals, groups, and the organization as a whole. In the process of identifying and then modeling those linkages, one must pay particular attention to the way in which performance is measured and productivity is assessed at the different levels of the organizational hierarchy. The literature and our own experience suggest the following four guidelines for constructing performance measurement systems for organizations. Each of the guidelines is examined in turn below.
An organization's system of performance and productivity measures should be designed to support and complement the organization's mission and objectives. Strategies, actions, and measures should be aligned.
The system of performance measures should reflect the differing needs and perspectives of managers and leaders at various levels of the organization. Measurement systems should be user driven.
Measures of performance should be flexible and dynamic in light of changes within the organization and its operating environment.
Reliance on traditional performance and productivity measures can be problematic because they are unlikely to provide all the information needed to model the relationships across organizational levels, or even to assess organizational performance and productivity completely.
Guideline 1: The Measurement System Should Support the Organization's Mission and Objectives
In the preface to their book The New Performance Challenge, Dixon et al. (1990:5) state, "The goal is to achieve better alignment among the organization's strategies, actions, and measures." The alignment sought by Dixon et al. is especially critical because an organization involves, first and foremost, the coordinated actions of individuals and groups. Understanding the effect measurement has on individuals and groups can provide the organization with a powerful key to unlocking the performance and productivity of sociotechnical systems. The mechanism that controls this effect is contained in the principles of behavior modification.
Measuring performance has a dual effect. According to the principles of behavior modification, measurement not only generates data regarding individual or group performance on a particular task, it can
also help to modify the performance that is being measured. Powerful motivation can be provided by feedback on one's performance, often referred to as knowledge of results (KOR). Feedback is one of the five core job dimensions in job characteristics theory (Hackman and Oldham, 1976). Measuring and feeding back the results of positively regarded behavior can increase its frequency of occurrence, and measuring and reporting unacceptable behavior can decrease its frequency of occurrence (Fitts and Posner, 1968:26–31).
Designers and users of management information systems must realize that performance measures provide KOR and must take positive action to take advantage of their effects. Regardless of whether they do or not, the effects will be present, and the results can be disastrous if inappropriate KOR is given. For example, designers and users of management information systems typically focus their attention on generating business or financial information. But in doing so, their exclusive view is that these measures of performance and productivity are the raw data from which decision makers extract the information needed to perform managerial functions. They ignore the behavioral consequences that accompany measurement. As organizations become more willing to locate operational and tactical decision making closer to the point of production, an interesting phenomenon appears. In an empowered work group, the user of performance data is often also the one whose performance is being measured and portrayed. To the extent that the system of measurement simultaneously reflects and reinforces the personal goals of the individual workers and the operational and tactical objectives of the organization, the system generates data for decisions and motivates individual or work group effectiveness (i.e., the accomplishment of unit objectives) (Akao, 1991; Dixon et al., 1990; Hall et al., 1991; Mali, 1978; Sink and Tuttle, 1989:143–152). When the measurement system is not designed in a way that achieves this positive alignment, the organization's productivity can be sabotaged by its own information system (Goldratt and Cox, 1986; Sink and Tuttle, 1989:143–152).
In an era of increasing lower-level empowerment to make decisions and solve problems, congruency of strategy, actions, and measurement, at all levels of the organization, is paramount. Such congruency will become only more crucial in the future. As organizations deploy quality policy and empower teams at all levels to solve problems and make decisions aimed at improving performance, ensuring alignment is crucial to coordination and cooperation. Without that alignment, local optimization and global suboptimization will be the result. Clearly, effective information and knowledge sharing is key to achieving congruency. Lawler (1986) has argued that sharing information, knowledge, power, and then rewards, in that order, will be key in the future to creating congruency. As the premium on flexibility, adaptability, responsiveness, and innovation increases, sharing information and knowledge to ensure that strategies, actions, and measures are aligned will become more important. The absence of measurement in most office, professional, and technical settings and, where measurement exists, the noncongruence of goals and actions may well be part of the productivity paradox. This is clearly the case in academic settings. The measurement and reward systems, perhaps unwittingly, seek to optimize the performance of the individual faculty member. When a departmental chairperson complains that the faculty members do not think departmentally and urges them to work as a team, it is simply an exhortation. (Deming [1986, 1991] defines an exhortation as a goal that is set in the absence of a method by which to achieve the goal. In this sense, seeking to improve the performance of the department when the measurement system focuses on the individual is merely an exhortation.)
In light of the foregoing discussion, the system of measurements adopted by the organization must be viewed as a part of the total system of performance portrayal and incentives provided by the organization. The system of measurements must be designed as a total system; otherwise, the organization will optimize a subcomponent and often thereby suboptimize the system. This phenomenon seems to be widespread in U.S. organizations (Deming, 1991; Dixon et al., 1990; Hammer and Champy, 1993; Senge, 1990). If organizations fail to do a better job of integrating strategy, action, and measurement, the best that can be said will be that they passed up an opportunity to increase organizational effectiveness. In the worst case, they might elicit or reinforce counterproductive behavior. Case examples of counterproductive performance are fairly frequent in the literature (e.g., Deming, 1991; Kerr, 1975; Senge, 1990) and common in personal experience.
Guideline 2: A Performance Measurement System Should Reflect the Differing Needs and Perspectives of Managers and Leaders at Various Levels of the Organization
A brief set of examples will illustrate the differences in the information needs of decision makers at various levels of the organization. Consider first the operational level, typically the individual level of the organization. Whether on the shop floor, at a retail sales counter, or in a classroom, the operational decisions that must be made are typically for action in the immediate planning horizon. Making a process adjustment, responding to a customer complaint, or finding a new way to explain a particularly difficult concept requires knowledge of the situation at hand and calls for application of expertise in real time. At the work-group level, planning a monthly production schedule, determining the number of additional clerks to call in for the upcoming sales period, or establishing the annual roster of course offerings is a tactical issue and has a longer planning horizon (typically measured in weeks or months). On the other hand, an executive-level or organizational-level decision to pursue a new line of products, establish an advertising campaign to attract a new class of customer, or develop a program of evening courses to serve the needs of nontraditional or working students is a strategic matter and has a planning horizon that can be several years in length. These three levels of decision makers—first-line employees, management teams or work groups, and executives—are typically positioned at the three levels in the organizational hierarchy. Even though these distinctions are being blurred by attempted shifts to self-management, the distinctions still exist.
Not only do the decision makers at various levels of the organization deal with different planning horizons, but more often than not, they need very different and very specific kinds of information to support the decisions for which they are responsible. Finally, their organizational objectives are expressed in differing degrees of specificity, and the type of information they need to determine whether their decisions are moving their particular unit toward the accomplishment of those objectives may differ radically on many dimensions. The following list summarizes some of the attributes performance measures should have and the way in which the measures relate to the organizational hierarchy.
As a basis for motivation and incentives, measures should
allow for disaggregation of outcomes as a result of human effort (controllable factors) versus external (uncontrollable) factors;
be relevant to the desired behaviors;
be comprehensive enough to ensure balanced performance; and
be accepted by those whose performance is being measured.
For assessing and evaluating organizational entities, measures should
be specific to the mission of a given individual or unit; and
be sensitive to the idiosyncrasies of the particular unit entity being controlled, scheduled, or managed.
For strategic planning and policymaking, measures should
provide information regarding change over time so trends can be ascertained and
be sufficiently standardized to allow comparison among entities and also to establish benchmarks for comparison with other organizations.
Regardless of the application, measures should
measure what they are supposed to measure (be valid);
reflect the actual content of the activity measured (be unbiased);
reflect the full range of states of the particular attributes or variables being measured (be representative);
move in the appropriate direction when "things change" and not move when they do not;
be intelligible to the users of the measures;
give the same value when assessed by different people (be verifiable/reliable); and
enhance statistical thinking and avoid errors of attribution.
Guideline 3: Measures of Performance Should Be Flexible and Dynamic
Dixon et al. (1990:vii) also state in the preface to their book that "the solution to the performance measurement problem lies not in creating some new monolithic system of measurement, but in institutionalizing a process for continuously changing measures." Later, they conclude that true global competitiveness requires that organizations establish and continuously redefine goals for all levels of the organization that are consistent with winning customer orders and achieving ever-increasing levels of excellence. McNair et al. (1986:137) concur with Dixon et al. (1990): "Translating the strategic goals of the organization into the performance measurement system provides management with a means to manage change and channel employee behavior. Proactive management suggests that changing measurements and incentives are critical."
For both sets of authors above, the ability of an organization to adapt its system of measurement is a seminal feature of the management information system and a key to success. One obvious reason for continually changing the system of performance measures will become clear below when we discuss material velocity management, in which one attempts to maximize the flow of materials through a manufacturing
facility and minimize in-process inventories. The principles of material velocity management presented in Goldratt and Cox's (1986) highly acclaimed and widely read work The Goal dictate that manufacturing cells produce only in quantities demanded by customer orders, not to the capacity of the cell. The result is that workstations that are not on the critical path or do not represent "bottlenecks" would produce at a "suboptimal" or "unproductive" pace (when viewed individually) so as not to exceed the (system) optimal pace dictated by customer demand. The question raised is whether one seeking to evaluate the introduction of IT would apply similar logic to achieve "information velocity management."
As discussed in The Goal, the measures for success for the more capital-intensive operations in the plant under consideration were altered drastically when management began to think "systems." When the organization's leaders began to think seriously about the success of the plant (the system), they altered their measures. Maximizing the utilization of the most expensive piece of technology was no longer the aim. The aim was optimizing the performance of the system. Even though the setting for The Goal, and hence the example, is manufacturing, the principle also holds for office, professional, and technical settings. The inability to change paradigms and systems of measurement and rewards over time is clearly a key element in the productivity paradox. In retrospect, focusing improvement and measurement efforts on bottleneck elements in the office, professional, and technical settings makes sense.
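The bottleneck logic from The Goal can be sketched in a toy simulation. The two-station line, the capacities, and the demand rate below are hypothetical, and the model is deliberately simple; it shows only why running a non-bottleneck station at full capacity fails to improve system throughput.

```python
# Two stations in series feed customer demand. Station A (non-bottleneck)
# can make 10 units/day; station B (the bottleneck) can finish only 6/day.
# Demand is 6 units/day. All figures are hypothetical.
DAYS = 20
cap_a, cap_b, demand = 10, 6, 6

def run(rate_a):
    """Run station A at the given daily rate for DAYS days.
    Returns (units shipped, final in-process inventory between A and B)."""
    wip = 0        # work-in-process buffer between the stations
    shipped = 0
    for _ in range(DAYS):
        wip += rate_a                 # A pushes work into the buffer
        done = min(wip, cap_b)        # B finishes only what it can
        wip -= done
        shipped += min(done, demand)  # only demanded units count as throughput
    return shipped, wip

full_util = run(cap_a)   # A kept "productive" at full capacity
paced = run(demand)      # A paced to the bottleneck/demand

print("A at full capacity:", full_util)
print("A paced to demand: ", paced)
```

In this sketch both policies ship exactly the same number of units, because shipments are governed by the bottleneck and by demand; the only thing full utilization of station A produces is a growing pile of in-process inventory. Measured individually, station A looks more "productive" at full capacity, which is precisely the measurement trap the text describes.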
Also inherent in the notion of flexible measurement systems is the realization that to be effective organizations must continually redefine their purpose. The successive redefinition of purpose must, in turn, be followed by a review of the system of measures to ensure that the factors being recorded are still indicators of effectiveness and provide the necessary reinforcement to ensure that workers' activities are consistent with the redefined goals. Vaill (1989) suggests that today's managers are managing and leading in "permanent white-water." As such, the balancing of strategy, action, and measurement is fast becoming a prerequisite for survival.
Guideline 4: Reliance on Traditional Measures Can Be Problematic
It follows from guideline 3 that traditional performance measures, which tend to focus directly on financial-related data, can be a problem. Traditional measures also typically stress efficiency as the principal criterion for evaluation. That managers tend to blame the failure
of accounting systems for many of their problems is one symptom of rising discontent with the utility of financial measures.
According to McNair et al. (1986:144), cost accounting traditionally serves three purposes: (1) financial reporting to outside groups (e.g., shareholders, creditors, and regulatory agencies); (2) managerial reporting and cost modeling for planning (e.g., one-time studies to determine pricing, product line evaluations, make-or-buy decisions); and (3) feedback and control of factory operations (e.g., productivity assessment, incentive pay). But as Dixon et al. (1990:118) point out, "For control of factory operations, the traditional accounting measures are too irrelevant due to allocations, too vague due to 'dollarization,' too late due to accounting period delays, and too summarized due to the length of the accounting period." The problem of inappropriate cost-based measures that confronts manufacturers also applies to the service sector and to office, professional, and technical settings.
Of particular concern is the effect created when managers focus on the financial performance and productivity of direct labor. First, as noted in Chapter 5, a great many organizations operate in an environment in which the direct labor component of their products and services is continually shrinking. This is certainly the case in manufacturing, perhaps less so in the service sector. In this regard, one aspect of the productivity paradox seems to stem from assumptions that the introduction of IT would improve direct labor productivity. We question this assumption and argue that IT might improve aspects of performance but not necessarily productivity or even efficiency.
Second, in an era when throughput time (responsiveness) has been widely identified as a key to productivity, management attention should be focused on bottleneck operations. Goldratt and Cox (1986) provide convincing arguments that full utilization of workers and equipment may well be the enemy of organizational productivity. This concept is clearly counterintuitive to managers who are operating on the basis of traditional performance measures.
DESIGNING AND DEVELOPING MODERN MEASUREMENT SYSTEMS
Measurement is inextricably interwoven with the management process. Indeed, the control function implies measurement. Deming (1986); Dixon et al. (1990); Hammer and Champy (1993); Imai (1986); Juran (1988); Kanter (1983, 1989); Kilmann (1989); Mali (1978); Wheeler (1993); and others have argued for systematic efforts to improve the quality of management systems and processes. Deming has gone so far as to state that 85 percent or more of the quality and productivity problems in the United States are caused by management. He further explains that
management is to blame because it "owns" the management systems and the management systems are inadequate.
What is a management system? The model shown in Figure 6-1 provides a viewpoint that can lead the way to developing measurement systems required for world-class competition. Figure 6-4 combines Figures 6-1 and 6-2 and illustrates the components and interfaces of the management system that Deming and others are challenging managers to improve. To reiterate, the management system model comprises three components: (1) who manages (the management team), (2) what is managed (the organizational system), and (3) how managing is done (tools and methods to convert data into information). The management system also involves three interfaces: (1) the interface between decision and action, (2) the interface between information portrayal and information perception, and (3) the interface between measurement and
data. The PDCA cycle is also superimposed on the composite model in Figure 6-4.
The organizational system (e.g., department, work group, section, branch, division, plant, company) has providers, inputs, value-adding processes, outputs, and customers. The five lines passing through the decision-to-action interface represent improvement interventions being made at the five key quality checkpoints in the management system. (Recall that the quality checkpoints are (1) selection and management of providers, (2) incoming quality assurance, (3) in-process quality management, (4) outgoing quality assurance, and (5) proactive and reactive assurance that the organization meets or exceeds its customers' needs, expectations, desires, requirements, and specifications.) If the organizational system manages and measures performance at each of the five quality checkpoints, total quality is managed.
A shortcoming of this model is that it is descriptive, not prescriptive. To overcome this, the measurement activity must be integrated with the planning process. Deming (1986) has suggested that, in actuality, the United States is the most underdeveloped nation in the world—it does so little with so much, particularly its human resources. Americans have spent much of the past two decades searching for quick fixes (Kilmann, 1989)—roaming from one quick fix to another, in almost a "random-walk" process. What is needed is a more comprehensive and integrated set of initiatives aimed at improving overall performance.
Strategic Planning for Performance Improvement
When the goal is continuous performance improvement, no organizational management process is more important than planning. We believe the productivity paradox is as much a planning and action problem as it is a measurement problem. Strategic planning is not done very well in the United States (Sink and Tuttle, 1989). The problem is not so much that the plans are bad, rather that the process leading to the plans is rarely well designed or executed. In addition, strategic plans are not accompanied by commitment. Hence, there is a significant discontinuity among the plan, the planners' expectations, and the actual implementation. To achieve commitment, the planning process must be executed in a way that establishes positive linkages between levels in an organization. The process must (1) involve more people; (2) achieve better balance among the business plans, policies and strategies, and the performance improvement plan; (3) be structured, yet flexible and responsive to user needs and preferences; (4) be led from the top down and implemented from the bottom up; (5) be focused on the process as well as the plan; (6) provide for sharing significant amounts of
information and knowledge; and (7) be alive, comprehensive, and well integrated. (We do not discuss the mechanics of strategic planning for performance improvement here because that is not the thrust of this chapter and has been detailed elsewhere, for example in Sink and Tuttle, 1989.)
Measurement should be viewed as a key step in a strategic management process, not the reverse. Too often, measurement has been viewed as an end in itself. Measurement is a means to an end; the end is survival, made possible by constant improvement and best-of-best class performance. The aims of the organization are to make good products, provide good services, provide stable employment, keep the customer happy, and stay in business. The introduction of IT to increase productivity is an example of an intervention that can be made at the individual, group, or organizational level with the goal of accomplishing these aims. However, IT, like any other intervention, has to be understood in the context of strategy, cause-and-effect relationships, and current performance levels.
Figure 6-5 depicts how performance measurement and continuous improvement are built into a strategic management planning process. Measurement supports and enhances strategic plans aimed at performance improvement. Note that planning corresponds to steps 1 and 2 of the process (Figure 6-5), actions are represented by step 3, and measures are reflected in step 4.
Organizations must institutionalize a process of continuously improving performance, and measurement systems must be an integral component of that effort. In doing this, the linkages issue must be addressed. That is what the planning process illustrated in Figure 6-5 can do for an organization when developed in the recommended fashion. Systematic planning, action, and measurement enhance the probability that there will be congruency across levels and ensure that linkages are positive. The following section focuses specifically on step 4 of the strategic management planning process.
Developing Enhanced Measurement Systems
In this section we describe the information portrayal-to-perception interface, the conversion of data to information, and the measurement-to-data interface. These elements characterize the measurement process within a management system.
Developing measurement systems for world-class competition entails the following: (1) identifying users and their information requirements as they support performance improvement; (2) identifying data requirements for the information needed; and (3) developing collection,
storage, retrieval, processing, and portrayal tools and techniques. Dixon et al. (1990) have identified three phases that organizations are likely to go through on the road to improved performance measurement systems: (1) tinkering with the existing measurement system (e.g., the cost accounting system); (2) cutting the "Gordian knot" between accounting and performance measurement; and (3) embracing change in strategies, actions, and measures.
Building measurement systems to support continuous improvement and address the productivity linkage paradox is a significant departure from the traditional orientation of organizational control. As such, some underlying issues and principles should be noted.
Key Issues, Principles, and Assumptions
Many measurement problems and failings can be traced to attitudes about measurement that are based on paradigms of the past. Listed below are issues, principles, and assumptions associated with the development of measurement for world-class competition, many of which challenge existing paradigms:
The goal is to design, develop, and successfully implement measurement systems that share information and thereby support and enhance continuous performance improvement.
Organizations that learn faster than their competitors have little to fear. Continuous learning must be cultivated through strategies, actions, and measures and must evolve over time (Dixon et al., 1990).
Control-oriented measurement systems often hinder continuous improvement efforts. It is important to distinguish who is doing the controlling to understand this issue fully. The aim is to move toward control and improvement by those doing the work, to build quality in versus inspecting it in. It is the overreliance on external control that is hindering the rate of improvement.
Measurement is often resisted due to fear of negative consequences: Visibility of good performance might lead to diminished resources. Visibility of poor performance might lead, initially, to more resources but eventually to punishment. Visibility of performance might promote catering to crises, excessive measurement, and micromanagement.
Measurement biases and paradigms are dominated by disciplinary (industrial engineering, industrial psychology, accounting, corporate finance, statistics, quality control) and often myopic thinking.
Measurement is complex. Once this is accepted, measurement can become less difficult.
Any measurement system should consist of a vector of performance measures, not a single comprehensive measure. Much of the controversy and lack of acceptance of measurement stems from attempts to make a very complex problem appear too simple (Morris, 1979).
Acceptance of the measurement process is essential to its success as a performance improvement tool. The process by which an organization determines what to measure, how to measure, and how to utilize measures is more important than the actual product of the measurement.
The greater the participation in the process of creating a performance measurement system, the greater the ease of implementing future changes based on performance measurement, and the greater the resulting performance change (Morris, 1979).
Organizations must measure what is strategically important, not just what is easy to measure.
An experimental approach to developing measurement systems must be adopted—fear must be driven out (Deming, 1986).
The arbitrary use of numerical goals, work standards, and quotas must be eliminated (Deming, 1986).
What is needed is a method by which measurement teams and their various "customers" can create and continually modify performance measurement systems suited to their own special needs and circumstances, not a standard set of measurements created by experts or obtained from a "shopping list" and imposed on the organization (Morris et al., 1977).
A performance measurement system must not appear to those involved as simply a passing fad (Morris et al., 1977).
The measurement system must clearly fit into the management process and be acknowledged as decision making and problem solving aimed at supporting performance improvement.
The behavioral consequences, unintended and potentially dysfunctional, of performance measurement must be anticipated and reflected in system design and implementation.
The measurement system must be seen by those whose behaviors and performance are being assessed as being nonmanipulative and nongamed (Morris et al., 1977).
An effective measurement system is built on consistent and well-understood operational definitions for a set of performance criteria.
The unit of analysis/target system of a measure must be defined clearly if measurement is to succeed. A necessary precondition is an input-output analysis, which essentially "models" the system by identifying customers/users, outputs, value-adding processes, inputs, and upstream systems, suppliers, vendors, customers, and so on.
Visibility, ownership, and line of sight must be created for resulting measurement systems in order to ensure effective utilization. Line of sight is a term used to represent understanding and/or visibility for cause-and-effect relationships on the part of the person performing. "To what extent is it clear that if I do this, this will result?" "What is the relationship between my behaviors and my performance?" Visibility often leads to control, and it certainly leads to improved understanding of cause-and-effect relationships (Wheeler, 1993).
The process of measurement must be separated from the process of evaluation. For example, the difference between a control chart and specifications, requirements, and standards must be understood.
The processes from measurement to data, data to information, portrayal to perception, and decisions to actions must be thoroughly understood in the context of performance measurement.
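The separation of measurement from evaluation in the list above can be made concrete with a small numerical sketch. Control limits are computed from the process's own variation (here, the individuals-chart formula associated with Wheeler, the mean plus or minus 2.66 times the average moving range), while specification limits are imposed from outside; the data and specification limits below are entirely hypothetical.

```python
# Sketch: control limits come from the process's own variation (an XmR /
# individuals chart), while specification limits come from the customer.
# The two answer different questions; all numbers here are hypothetical.

measurements = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3, 9.7, 10.1]

mean = sum(measurements) / len(measurements)
moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Individuals-chart limits: mean +/- 2.66 * average moving range
ucl = mean + 2.66 * avg_mr
lcl = mean - 2.66 * avg_mr

# Specification limits are imposed from outside, not computed from the data
usl, lsl = 10.8, 9.2  # hypothetical spec limits

in_control = all(lcl <= x <= ucl for x in measurements)   # measurement question
within_spec = all(lsl <= x <= usl for x in measurements)  # evaluation question

print(f"control limits: ({lcl:.2f}, {ucl:.2f})  spec limits: ({lsl}, {usl})")
print(f"stable process: {in_control}, conforming output: {within_spec}")
```

A process can be stable yet nonconforming, or conforming yet unstable; conflating the two judgments is exactly the confusion the text warns against.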
This rather long list characterizes the "new thinking" about measurement. In many respects, the concepts are consistent with ideas discussed by Deming (1986), Hackman and Oldham (1976), Kanter (1989), Lawler (1986), and others, with regard to the transformation from control-dominant to commitment-oriented organizations. The requirements for developing measurement systems for world-class competition are substantially different from those on which traditional performance measurement systems are based. The productivity paradox is caused, in part, by an inability to deal with these new requirements. To remedy this, the people participating in the process of improving measurement systems must be "masters," that is, they must possess profound knowledge of the new requirements. Further, the design principles for performance measurement systems have been altered substantially. The task of designing management systems, particularly performance measurement systems, has become more complex and challenging. In order to understand linkage issues, to measure their effects, and to predict their impact so that valid performance evaluation can be conducted at various organizational levels, the design of the measurement systems will have to be approached much more systematically.
Identifying Suitable Measures
Designers of a measurement system must be aware of the attributes of numerous possible performance measures. This will ensure that there is a suitable match between the measure selected and the requirements of the measurement system regarding a particular attribute. Four attributes of particular interest are sampling rate, character, precision,
and ease of observation. They give rise to the following measurement issues:
Performance measures differ in appropriate sampling rate. Flying a plane, for example, requires that altimeter readings be available on a continuous, real-time basis. Deciding whether to purchase an additional Boeing 767, however, might require several years of monthly reports on passenger demand.
Performance measures differ in character. Deciding whether parts being produced meet specifications may require only a numerical value from a dial or a red or green signal from a go-no-go gauge. On the other hand, deciding whether to purchase an additional machine for the shop floor or contract out for the additional orders requires data about allocation of overhead or equivalent annual cost estimated from discounted cash flow calculations.
Performance measures differ in precision. Decision makers from different organizational levels typically have different requirements for precision. A decision maker at the organizational level might forecast demand for computing services as part of a long-range planning effort, which would be expressed in thousands or tens-of-thousands of hours per planning period. A scheduler, on the other hand, would require estimates precise to within minutes, or possibly hours.
Performance measures differ in ease of observation. Some phenomena (e.g., sizes, speeds) can be measured directly, whereas phenomena such as comfort and timeliness may require inferred indicators. A matter of great importance in measuring organizational performance stems from the difference between measures of inputs and outputs. In general, input measures are more easily observed. On the other hand, a truly useful system of performance measurement is more likely to focus on output measures.
These issues and examples are not exhaustive, but they are representative of measurement attributes that must be considered in developing the information system that will help address the productivity paradox. They illustrate the basic principle that performance measurements must be uniquely appropriate to the individual, group, or organization in their most elementary attributes.
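As an illustration, a measurement-system designer might record these four attributes explicitly for each candidate measure. The sketch below is hypothetical: the class and field names are ours rather than any standard, and the example measures echo the airplane and passenger-demand cases above.

```python
from dataclasses import dataclass
from enum import Enum

class Character(Enum):
    """The character of a measure, per the discussion in the text."""
    BINARY = "go/no-go signal"
    NUMERIC = "direct reading from an instrument"
    DERIVED = "computed from other data (e.g., discounted cash flow)"

@dataclass
class MeasureAttributes:
    """Attributes a designer records for each candidate measure."""
    name: str
    sampling_rate: str         # e.g., "real time", "monthly", "annual"
    character: Character
    precision: str             # the resolution the user actually needs
    directly_observable: bool  # False -> requires an inferred indicator

altimeter = MeasureAttributes(
    name="altitude", sampling_rate="continuous, real time",
    character=Character.NUMERIC, precision="tens of feet",
    directly_observable=True)

demand = MeasureAttributes(
    name="passenger demand", sampling_rate="monthly, over several years",
    character=Character.DERIVED, precision="thousands of passengers",
    directly_observable=False)

def suits(measure: MeasureAttributes, needed_rate: str) -> bool:
    """Toy mismatch check: does the measure sample as often as the user needs?"""
    return needed_rate in measure.sampling_rate

print(suits(altimeter, "real time"))  # True
print(suits(demand, "real time"))     # False: wrong measure for the decision
```

Recording attributes this way makes mismatches visible before a measure is adopted, rather than after users discover the data cannot support their decisions.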
Key Design Variables
The foregoing issues and principles translate into design variables for developing an organizational performance measurement system. Several aspects of measurement consistently bog down the process, however. The method commonly used by organizations is analogous to buying a tool off the shelf and simply installing it. It is not uncommon, for example, for a data center to buy a software package, install it, generate the reports the package provides, and simply expect the user to figure out how to use the reports. This is the "hammer looking for a nail to pound" approach to measurement. Tinkering with the existing measurement system is another common approach. Systems approaches to designing a performance measurement system are rare.
It will take a systems approach to develop the measurement systems that organizations need. Key design variables, such as the unit of analysis, user purpose, and operational definitions of measures, must be addressed and specified if the measurement system is to be successful. Specifying the unit of analysis entails defining the organizational system for which the performance measurement system is being established. What are the organizational system's boundaries? What are the outputs? What are the inputs? Who are the providers and the customers? What are the value-adding processes? Input-output analysis is a tool designed to provide answers to these questions. Once the questions are answered, the unit of analysis will have been adequately defined. One of the most common mistakes made when developing a performance measurement system is failing to define the system of interest. This is a key element of the linkage issue. For a given unit of analysis, measures are frequently developed outside the context of the larger system. An example would be evaluating the payoff of an IT intervention at the individual or group level without considering linkages to higher organizational levels. In other words, measures should be specific to a given unit of analysis, but the data should be interpreted in the light of other unit-of-analysis perspectives.
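A minimal sketch of input-output analysis as a data structure may make the point concrete. The field names follow the checklist of questions above; the data-center contents are invented for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class UnitOfAnalysis:
    """Input-output analysis: the system must be bounded before it is measured.
    Fields mirror the checklist in the text; contents are hypothetical."""
    name: str
    providers: list = field(default_factory=list)
    inputs: list = field(default_factory=list)
    value_adding_processes: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    customers: list = field(default_factory=list)

    def is_defined(self) -> bool:
        # Measures should not be attached until every element is specified.
        return all([self.providers, self.inputs, self.value_adding_processes,
                    self.outputs, self.customers])

data_center = UnitOfAnalysis(
    name="corporate data center",
    providers=["hardware vendors", "software vendors"],
    inputs=["service requests", "raw transaction data"],
    value_adding_processes=["batch processing", "report generation"],
    outputs=["management reports", "processed transactions"],
    customers=["finance department", "operations managers"])

print(data_center.is_defined())  # True: the unit of analysis is bounded
```

The `is_defined` check captures the chapter's warning in executable form: a unit of analysis with any element left blank is not yet ready to have measures developed for it.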
The users of the measurement system and their purposes also must be defined clearly. Who are the end users of the measurement system? What do they need from measurement to help them improve how they solve problems and make decisions? These questions may seem simple and obvious, yet it is quite common for measures to be specified without these questions ever being addressed. Again, the implication for the linkage issue is significant. Who is trying to confirm that improvement has taken place—the IT vendor, the manager who purchased the IT product, the critic who is against IT and therefore has a hidden agenda, the IT user who is skeptical of the benefits and is resistant to something new, or the analyst who is attempting to understand organizational performance over time? At the heart of the linkage issue and the productivity paradox is this question of user and purpose.
Operational definition of the aspects of performance to be measured is another important design variable. The seven performance criteria
articulated previously are analogous to categories of instruments on an airplane control panel. For example, there are engine performance instruments, spatial location instruments, and communication instruments. In the well-designed measurement system, there will be a hierarchy of "instruments," "indicators," and "gauges." At the highest level in the model proposed herein is the measurement construct called performance. At the level below performance, we have postulated seven performance criteria. It might require a half dozen levels of detail in a measurement system in order to understand and model both system and subsystem performance on these criteria. This is not dissimilar to a multilevel work-breakdown structure for tasks. The difference here is that the breakdown is done for measurement. Once the hierarchy of measures is determined, specific indicators must be established, operational definitions written, the measurement-to-data interface determined, and ultimately, the user interfaces and utilization completed.
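The hierarchy described here can be sketched as a simple tree, with performance at the root, the seven criteria of Sink and Tuttle (1989) below it (effectiveness, efficiency, quality, productivity, quality of work life, innovation, and profitability/budgetability), and specific indicators at the leaves. The indicators shown are hypothetical placeholders, not recommendations.

```python
# A measurement-breakdown structure, analogous to a multilevel
# work-breakdown structure for tasks, but built for measurement.
measurement_hierarchy = {
    "performance": {
        "effectiveness": ["on-time delivery rate"],
        "efficiency": ["resources consumed vs. resources planned"],
        "quality": ["defect rate at each quality checkpoint"],
        "productivity": ["output per unit of input"],
        "quality of work life": ["employee survey index"],
        "innovation": ["improvement suggestions adopted"],
        "profitability/budgetability": ["operating margin vs. budget"],
    }
}

def walk(node, depth=0):
    """Print the hierarchy with indentation, one level per depth step."""
    for name, child in node.items():
        print("  " * depth + name)
        if isinstance(child, dict):
            walk(child, depth + 1)       # a criterion level: recurse
        else:
            for indicator in child:      # leaf level: specific indicators
                print("  " * (depth + 1) + indicator)

walk(measurement_hierarchy)
```

In practice the tree would run several levels deeper, as the text notes, with operational definitions and measurement-to-data interfaces attached at each leaf.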
In the final analysis, however, examining the organization and designing the measurement system and the management information system can only do so much. The strategies employed by the decision makers and the decisions they make ultimately determine the success or failure of the organization.
APPLYING MEASUREMENT TO MANAGEMENT
Goldratt and Cox's (1986) The Goal provides a valuable insight into the relationship of measurement to management. In discussing the optimized production technology (OPT) strategy (also referred to as material velocity management), they emphasized the need to identify and manage bottleneck operations in systems with coupled, or linked, elements. The extraordinarily simple key is that the production level of the bottleneck operation must be managed in light of customer demand. Once this relationship is established, all other production units, in turn, must operate at the pace dictated (pulled) by the bottleneck operation. Idle people or machines are not arbitrarily considered "waste." In fact, nonbottleneck operations can only be understood and evaluated in light of the productivity of the total system. Production at any rate greater than that dictated by the pull of the bottleneck operation simply adds unnecessary inventory.
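Goldratt and Cox's argument can be checked with a few lines of arithmetic. In the hypothetical two-stage line below, running the upstream operation at full capacity ships no more product than pacing it to the bottleneck; it only piles up work in process.

```python
# A minimal numeric sketch of the bottleneck argument: nonbottleneck
# operations running faster than the bottleneck raise inventory without
# raising throughput. Capacities are hypothetical units per day.
def run_line(upstream_rate, bottleneck_rate, days):
    """Two-stage line: upstream feeds a buffer; the bottleneck drains it."""
    buffer_wip, shipped = 0, 0
    for _ in range(days):
        buffer_wip += upstream_rate                   # upstream produces
        processed = min(bottleneck_rate, buffer_wip)  # bottleneck is the limit
        buffer_wip -= processed
        shipped += processed
    return shipped, buffer_wip

# "Fully utilized" upstream (60/day) vs. bottleneck-paced upstream (40/day)
busy = run_line(upstream_rate=60, bottleneck_rate=40, days=20)
paced = run_line(upstream_rate=40, bottleneck_rate=40, days=20)

print(f"fully utilized: shipped={busy[0]}, leftover WIP={busy[1]}")
# fully utilized: shipped=800, leftover WIP=400
print(f"paced to pull:  shipped={paced[0]}, leftover WIP={paced[1]}")
# paced to pull:  shipped=800, leftover WIP=0
```

Both policies ship exactly 800 units in 20 days; the "fully utilized" policy merely accumulates 400 units of work in process, which is what a traditional utilization measure would reward.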
In a subsequent work, The Race, Goldratt and Fox (1986) provide more technical detail. They demonstrate that forcing workers or facilities in nonbottleneck operations to "be productive" not only generates unneeded inventory but also increases costs, reduces product quality, degrades system flexibility, and restricts the ability to respond to customer demands. However, for the purposes of this discussion, a later
section of The Race makes an even more relevant point: "We are not dealing here with a change in the foreman's culture, but a culture controlled by how management measures a foreman's performance" (p. 112; emphasis added).
Why do organizations measure? Many of the references we have cited, and much of our own experience, suggest that traditional managers measure for control. However, world-class organizations measure not for control, but to drive continuous improvement. The management information system and, in particular, the performance measurement and portrayal system that supports it are key to total productivity and world-class competitiveness. Foreign competitors of U.S. industry may not have worked out the theory or the underlying mechanisms, but in material velocity management, for example, they have sensed one of the keys to achieving "the goal." Organizations can put the information system to work to enhance their ability to achieve that goal.
We have discussed how the performance measurement system can provide management with information and a mechanism to reinforce behaviors consistent with organizational goals. To achieve these ends, organizations must become much more sophisticated in designing and implementing portrayal systems that display performance. If the portrayal system encourages local maximization and suboptimization of individual and group performance, the organization's total productivity falls. Local optimization can occur at the individual or group level. Further, there can be horizontal variation in local optimization (one individual is optimized, another suboptimized) and almost infinite variations in next-level performance. This is what makes the linkages issue so complex to model and to analyze. If the system shapes the behavior of the entire work force toward the common goal, total productivity is enhanced.
The foregoing observations suggest that an alternative to precise control of direct labor, with its attendant dysfunctional consequences, can be found in the adoption of an organizational perspective that views measurement as an integral part of the managerial process. A key objective in adopting this new perspective is to establish a system of measures that tracks progress toward achieving the organization's strategic goals. Why haven't these changes occurred?
Goldratt and Cox (1986) see the goal as "making money," but we regard profitability as a means to an end. In the final analysis, Chester Barnard (1938:44) said it most decisively: the only true measure of organizational performance is its "capacity to survive."
CONCLUSION AND NEXT STEPS
In this chapter, we have not addressed organizational linkages and the productivity paradox methodologically or quantitatively. Our aim has been to spark systems thinking about the origins of the paradox. Some elements of the paradox may be explainable, others may not. But we strongly believe the methodological issues are much more tractable if systems principles, theories, and concepts are understood and put into practice. In order to address the productivity paradox and organizational linkages methodologically or quantitatively, several prerequisite issues must be resolved. First, profound knowledge of productivity and quality improvement is necessary to model and predict improvement in an entity. We believe that profound knowledge did not exist for much of the work being evaluated in the literature. Thus, it is impossible to rely on existing research and evaluation work as a basis for verifying that there is, in fact, a paradox. It is just as reasonable to conclude there is no paradox, only the perception of one based on poor measurement and evaluation.
Second, the lag between when an improvement intervention is made in an entity and when actual (predicted) improvement is seen, felt, and measured in the entity or in other entities as a result of linkages must be understood and dealt with methodologically. It is widely accepted that many short-term (tactical) operational improvements have long-term (strategic) consequences. The evaluation research that has been done to date, however, does not appear to be sufficiently longitudinal. Thus, researchers and practitioners may not be waiting long enough to see their beliefs in cause-and-effect relationships come to fruition. The lag between the time a potential improvement is made and when true improvement can be seen, felt, and measured presents a challenge for researchers and practitioners. Macroeconomic methodology does not have enough granularity to address this issue at the level of an organizational system.
Third, a science and methodology of measurement for performance improvement for organizational systems must be developed. Performance must be operationally defined and a theoretical measurement-breakdown structure developed and utilized so that evaluation results are comparable. Defining predicted linkages from entity to entity on the basis of beliefs in or, better yet, knowledge of cause-and-effect relationships is crucial to resolving the apparent paradox.
The viewpoint we expressed in this chapter was threefold: (1) a fixed system of performance and productivity measures cannot meet the informational needs of management in a modern production organization, (2) macro performance cannot be deduced from micro measures, and (3) measurement-driven suboptimization poses a significant threat to organizational productivity. Further, we believe that (1) a common set of measures cannot be used to assess and compare performance and productivity at all levels of the organization; (2) a particular system of measurement cannot serve the organization's needs indefinitely; and (3) total system performance cannot be deduced from measures of individual performance.
Researchers and practitioners should rethink and eventually abandon the strategy of measuring, rewarding, and attempting to maximize the "productivity" of virtually all individuals and production subsystems. Rather, productivity drivers should be pinpointed and useful portrayal mechanisms introduced to ensure that managers and practitioners accomplish the desired end of world-class organizational productivity.
The next steps to be taken include the following:
Investigate whether potentially useful interventions are being forgone as a result of the productivity paradox.
Determine if the reluctance to make performance improvement interventions is due to lack of information. If so, concentrate on evaluating candidate performance improvement interventions from a systems perspective.
Model organizational linkages and analyze the productivity paradox for a selected set of specific examples in an effort to generate tangible theories about cause-and-effect relationships and frame the problem in a manner susceptible to solution.
The paradox of unrealized productivity improvements from IT interventions seems to us an example of incomplete systems thinking and failure to understand the nature of linkages at the individual, group, and organizational levels. Questions that need to be answered include the following:
At which level would one expect performance to improve as a result of IT or any other productivity improvement initiative?
Which aspects of organizational performance will be quantifiable and which will require qualitative assessments?
To what extent have researchers and practitioners clarified what they know versus what they believe or feel about cause-and-effect relationships as they evaluate the linkage issue?
These are central questions that have been stimulated by a systems
perspective and that demand concentrated study as part of the effort to unravel the productivity paradox.
REFERENCES

Akao, Y., ed. 1991. Hoshin Kanri: Policy Deployment for Successful TQM. Cambridge, Mass.: Productivity Press.
Barnard, C.I. 1938. Functions of the Executive. Cambridge, Mass.: Harvard University Press.
Benfari, R. 1991. Understanding Your Management Style: Beyond the Myers-Briggs Type Indicators. Lexington, Mass.: Lexington Books.
Deming, W.E. 1986. Out of the Crisis. Center for Advanced Engineering Study. Cambridge, Mass.: M.I.T. Press.
1991. Four-day workshop. Atlanta, Ga.
Dixon, J.R., J. Nanni, and T.E. Vollmann. 1990. The New Performance Challenge: Measuring Operations for World Class Competition. Homewood, Ill.: Dow Jones-Irwin.
Fitts, P.M., and M.I. Posner. 1968. Human Performance. Belmont, Calif.: Brooks/Cole.
Goldratt, E.M., and J. Cox. 1986. The Goal: A Process of Ongoing Improvement, rev. ed. Croton-on-Hudson, N.Y.: North River Press.
Goldratt, E.M., and R. Fox. 1986. The Race. Croton-on-Hudson, N.Y.: North River Press.
Grayson, C.J., Jr., and C.S. O'Dell. 1988. American Business: A 2-Minute Warning. New York: Free Press.
Greif, M. 1989. The Visual Factory: Building Participation Through Shared Information. Cambridge, Mass.: Productivity Press.
Hackman, J.R., and G.R. Oldham. 1976. Motivation through the design of work: Test of a theory. Organizational Behavior and Human Performance 16:250–279.
Hall, R.W., H.T. Johnson, and P.B.B. Turney. 1991. Measuring Up: Charting Pathways to Manufacturing Excellence. Homewood, Ill.: Business One Irwin.
Hammer, M., and J. Champy. 1993. Reengineering the Corporation: A Manifesto for Business Revolution. New York: Harper Business.
Imai, M. 1986. Kaizen: The Key to Japan's Competitive Success. New York: Random House.
Juran, J.M. 1988. Juran on Planning for Quality. New York: Free Press.
Kanter, R.M. 1983. The Change Masters: Innovation for Productivity in the American Corporation. New York: Simon & Schuster.
Kanter, R.M. 1989. When Giants Learn to Dance: Mastering the Challenges of Strategy, Management, and Careers in the 1990s. New York: Simon & Schuster.
Kanter, R.M., B.A. Stein, and T.D. Jick. 1992. The Challenge of Organizational Change. New York: Free Press.
Kerr, S. 1975. On the folly of rewarding A while hoping for B. Academy of Management Journal 18:769–783.
Kilmann, R.H. 1989. Managing Beyond the Quick-Fix. San Francisco: Jossey-Bass.
Kotler, P., L. Fahey, and S. Jatusripitak. 1985. The New Competition. Englewood Cliffs, N.J.: Prentice-Hall.
Kurstedt, H. 1986. The Industrial Engineer's Systematic Approach to Management. MSM working draft, articles, and responsive systems article. Management Systems Laboratories, Virginia Polytechnic Institute and State University, Blacksburg.
Lawler, E.E. 1986. High Involvement Management. San Francisco: Jossey-Bass.
Mali, P. 1978. Improving Total Productivity. New York: John Wiley & Sons.
McNair, C.J., W. Mosconi, and T. Norris. 1986. Beyond the Bottom Line: Measuring World Class Performance. Homewood, Ill.: Dow Jones-Irwin.
Mohrman, A.M., S.A. Mohrman, G. Ledford, T.G. Cummings, and E.E. Lawler. 1989. Large-Scale Organizational Change. San Francisco: Jossey-Bass.
Morris, W.T. 1979. Implementation Strategies for Industrial Engineers. Out of print; available from Virginia Productivity Center, Virginia Polytechnic Institute and State University, Blacksburg.
Morris, W.T., G.L. Smith, and D.S. Sink. 1977. Productivity Measurement Systems for Administrative Computing and Information Services. Grant No. APR 75-20561. Columbus, Ohio: The Ohio State University Productivity Research Group.
Senge, P.M. 1990. The Fifth Discipline: The Art and Practice of the Learning Organization. New York: Doubleday.
Shewhart, W. 1939. Statistical Method from the Viewpoint of Quality. Washington, D.C.: U.S. Department of Agriculture, Graduate School.
Sink, D.S. 1993. Developing measurement systems for world class competition. In Handbook for Productivity Measurement and Improvement. Cambridge, Mass.: Productivity Press.
Sink, D.S., and T.C. Tuttle. 1989. Planning and Measurement in Your Organization of the Future. Norcross, Ga.: Industrial Engineering and Management Press.
Vaill, P.B. 1989. Managing as a Performing Art. San Francisco: Jossey-Bass.
Weisbord, M.R. 1991. Productive Workplaces: Organizing and Managing for Dignity, Meaning and Community. San Francisco: Jossey-Bass.
Wheeler, D.J. 1993. Understanding Variation: The Key to Managing Chaos. Knoxville, Tenn.: SPC Press.