Measuring, Describing, and Predicting System Performance
When you can measure what you are speaking about, and express it in numbers, you know something about it, . . . (otherwise) your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in thought advanced to the stage of science.
—Lord Kelvin (1824–1907)
Operating a business enterprise in such a way that objectives are achieved in the most efficient and timely fashion requires the optimization of system performance. One is seeking to maximize or minimize multiobjective functions within a specified set of constraints; for example, maximizing profits, or minimizing capital expenditures, defects, or material use per unit of product, within such constraints as fixed total resources, equipment configuration, or product mix.
Determining an optimal strategy for a complex manufacturing system, whether it is for control, investment, or processing, is seldom straightforward. Since the system often consists of elements that respond in a nonlinear way to inputs provided by other elements of the system, one must understand the detailed interactions that exist if one is to optimize the total. Considering this complexity, the operational approach is frequently to decompose the system into supposedly independent subsystems, such as areas, shops, cells, or units, and then to optimize the performance of each subsystem and impose interactions among the subsystems in such a way that an
overall optimum is obtained. The intent is to arrive, through repetitions of this process, at a solution that properly optimizes the original system.
Despite the difficulties that can arise in this procedure, it is important to recognize that there may be few alternatives to its use. The alternatives are limited for the following reasons:
Many manufacturing systems are too large to treat as a single entity.
The interactions among the various subsystems of a manufacturing system are frequently nonlinear.
The behavior of some subsystems cannot be described mathematically.
In formulating “foundations” of manufacturing systems, it is important to understand the value and the limitations of this piecewise approach to determining optimal operating strategies. While the insight that these analyses provide can be of great value in understanding many aspects of the manufacturing system, it is important to recognize the difficulty in determining the value of any given solution. This limitation, however, does not diminish in any way the importance of using models and mathematical constructs to provide insight into the performance of the manufacturing system.
METRICS: QUANTIFYING THE PERFORMANCE OF THE MANUFACTURING ENTERPRISE
Performance evaluation is a process applied throughout the manufacturing enterprise to measure its effectiveness in achieving its goals. Because of the variety, complexity, and interdependencies found in the collection of unit processes and subsystems that define the manufacturing system, appropriate means are needed to describe and quantify rigorously the performance of each activity. Metrics are the mechanisms used in describing and appraising those systems, subsystems, and elements (see Dixon et al., 1990, and Johnson and Kaplan, 1987).
Manufacturers have three basic sources for metrics. First, many are part of general business knowledge and are readily available in the management literature, especially the metrics used to describe and evaluate the financial performance of the firm. A second group might be characterized as industry-specific metrics: these are commonly recognized as appropriate measures of some aspect of the manufacturing system, usually within a single discipline, for instance, metal forming. Finally, there are metrics developed by individual companies to reflect their special circumstances. Metrics developed for these specific contexts can provide the basis for a period of unique competitive advantage, although such metrics are eventually adopted broadly within a particular industry and later pass into the general pool of recognized manufacturing wisdom. For instance, lean production (Womack et al., 1990) and just-in-time inventory control (JIT) both depend upon the development and use of appropriate metrics. In the case of lean production, these metrics are used to measure the amount of resources consumed in the design and production of products, driving toward a minimum. JIT metrics focus on the time between the entry of materials into inventory and their incorporation in products on the production line. Although both of these metrics were refined within the automotive industry, they are becoming widely adopted for manufacturing in general.
Taxonomy for Metrics
An orderly means of classification should provide some initial help in the selection and use of suitable metrics; knowing where to find them, however, offers only a minimal basis for understanding which to use and when their use is appropriate. Cook (in this volume) suggests that a reasonable taxonomy can be developed with a simple division of all metrics into direct and proxy metrics. Direct metrics have no intervening transformation between the measure of a variable and its associated value; the correlation is immediate. For instance, if the number of product defects is indicative of the quality of a process, then a decline in the number of product defects directly indicates that the quality of the process has increased.
Proxy metrics, however, involve transformation of the values of several variables to arrive at the value of the enterprise performance metric. They are often complex aggregations of many, possibly diverse, characteristics that may not directly influence characteristics of interest in the organization. For example, patent activity is a proxy measurement sometimes used as an indicator of the innovativeness of a manufacturer (see Howard and Guile, 1992). Profitability is a proxy metric that a manufacturer may use to gauge performance. However, it is not usually possible to influence profitability by adjusting one or two quantities or characteristics. Therefore, proxy metrics may be better thought of as indicators of change rather than cause-and-effect relationships that are directly manipulable.
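The distinction can be sketched in code. In this hypothetical illustration, the function names, variables, and figures are all assumptions made for illustration, not values from the text: a direct metric reads a single measured variable, while a proxy metric aggregates several diverse quantities.

```python
# Hypothetical sketch of direct vs. proxy metrics.

def direct_quality_metric(defect_count: int) -> int:
    """A direct metric: one measured variable, no intervening transformation.
    A decline in the count immediately indicates improved process quality."""
    return defect_count

def profitability_proxy(revenue: float, materials: float,
                        labor: float, overhead: float) -> float:
    """A proxy metric: an aggregation of several diverse characteristics.
    The weights and terms here are illustrative assumptions, not a real
    accounting formula."""
    return revenue - (materials + labor + overhead)

print(direct_quality_metric(12))                          # one variable, immediate meaning
print(profitability_proxy(1000.0, 400.0, 300.0, 150.0))   # composite indicator
```

Because the proxy is a composite, no single input can be adjusted to control it directly, which is why such metrics serve better as indicators of change than as manipulable cause-and-effect levers.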
The rules and policies that guide decisions at many different levels of the manufacturing company must incorporate the metrics appropriate to each. At the corporation level, proxy metrics are often used for measuring the profitability and customer responsiveness of the entire enterprise. For the subsystems there are additional metrics, such as yield from a series of processes, that reflect the performance of an integrated subsystem of production equipment and its accompanying labor force; once again these are likely to be aggregations of many variables that “indicate” the performance of the subsystems. However, at the component and unit operation level of
the manufacturing firm, we find a set of direct metrics, reflecting the performance of the machine, process, or worker; for instance, the number of defects identified, the productivity of the equipment, or the number of days of employee absence. Marsing (in this volume) notes the importance of formally developing and using statistical metrics for each step of the production process, and describing the range of upper and lower bounds within which the machines and equipment are expected to perform. Then, when the performance exceeds previously defined limits, corrective action can be taken at the operator level by modifying machine settings rather than at the aggregated process level, which is less well understood.
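Marsing's prescription of statistical metrics with upper and lower bounds can be sketched as a simple control check. This is a minimal illustration, not a procedure from the text: the sample data are invented, and the conventional three-sigma limits are an assumption.

```python
import statistics

def control_limits(samples, k=3.0):
    """Upper and lower bounds within which the process is expected to
    perform, set here at the conventional k standard deviations about
    the sample mean."""
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return mean - k * sigma, mean + k * sigma

def out_of_control(samples, new_value, k=3.0):
    """Flag a measurement that exceeds previously defined limits, signaling
    corrective action at the operator level (e.g., adjusting machine
    settings) rather than at the aggregated process level."""
    lo, hi = control_limits(samples, k)
    return not (lo <= new_value <= hi)

# Illustrative baseline measurements from one production step (hypothetical).
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
print(out_of_control(baseline, 10.05))  # within limits
print(out_of_control(baseline, 11.5))   # exceeds limits
```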
Matching Metrics with Goals and Concerns
In the choice of metrics, manufacturers should consider two perspectives: one external and one internal.
First, . . . measure the performance of the organization against that of its competitors and, second, . . . assess the trends in one's performance in order to take appropriate actions to ensure continuous improvement. In the first case, an absolute measure of performance is needed. This is sometimes referred to as “benchmarking,” or measuring oneself against the world leader—the “best-of-the-best” in a product or process arena. In the second, progress over time is the prime concern, that is, how well the organization is achieving continuous improvement in performance. A proper combination of these two measures is critical. Without them an organization cannot properly evaluate its absolute competitive status, nor can it be assured of its ability to remain competitive over a long period of time (Compton et al., in this volume, pp. 107–108).
A particular concern to manufacturers of complex or innovative products incorporating rapidly changing technologies is the difficulty of identifying the correct metrics for important characteristics of the product and associated processes. In the early phases of the product life cycle, gross measures of performance are likely to be appropriate or sufficient, often because a set of clear cause-effect relationships has not yet been established for the new product. However, as the product and associated process mature and become better understood (more scientific foundations, better models), those metrics no longer adequately reflect the advances made in the product's performance or in the production activities required for its manufacture. This indicates the need either to reconsider the continuing validity of the metrics being used or to expand the portfolio of metrics.
When metrics are not readily available to measure the important characteristics, then modification of available metrics or development of new bases for measurement is called for. Hewlett-Packard found it necessary to develop its own “internal” metrics for measuring the effectiveness of their
Appropriate Metrics for Rapidly Changing, High-Technology Products
When General Electric Medical Systems first began manufacturing magnetic resonance imaging (MRI) systems, the number of defects identified in each unit prior to shipment was the primary metric for internal product quality. However, as design and production engineers developed better understanding of the complex interactions among the technologically sophisticated components of the MRI products, they also increased the number of opportunities to detect defects or out-of-specification occurrences in their product. This created a paradox: as the quality of the products delivered to the customers of GE increased, the internal metric used to evaluate the production staff—the number of defects identified and corrected before shipment—indicated falling performance. Clearly a better means of reflecting the increasing quality and competence of the production system was needed.
The length of time that an order spent in the manufacturing facility—its cycle time—and the variance of the cycle time were identified as more representative metrics for the quality of the product and production processes. Cycle time focuses attention on a characteristic that remains consistent throughout successive product generations, and it can incorporate changes in process and product technologies. Therefore, it does not penalize design and production engineers for furthering their understanding of the fundamentals underlying the product and developing mechanisms for managing the idiosyncrasies of its manufacture. The variance of the time that orders spend in the facility is an important indicator of the level of control the production facility exercises over the associated manufacturing processes.
The effectiveness of this metric is illustrated by the significant decreases in product cycle time over successive generations of MRI products. When GE engineers began measuring cycle time, the unit of measure was weeks; current measures are indicated in days, with their goal cycle time measured in number of work-shifts.
SOURCE: Personal communication, 1990, Frank Waltz, Manager of Magnetic Resonance Manufacturing, General Electric Co., Waukesha, Wisconsin.
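A minimal sketch of the cycle-time metric pair that the GE example describes, tracking both the mean and the variance of order cycle times; the order data here are hypothetical, not GE figures.

```python
import statistics

def cycle_time_metrics(cycle_times_days):
    """Mean cycle time gauges speed; variance gauges the level of control
    the facility exercises over its manufacturing processes."""
    return (statistics.mean(cycle_times_days),
            statistics.variance(cycle_times_days))

# Hypothetical cycle times (in days) for two successive product generations.
generation_1 = [21.0, 28.0, 35.0, 24.0, 30.0]   # early generation: weeks-scale
generation_2 = [5.0, 6.0, 4.5, 5.5, 5.0]        # later generation: days-scale

for label, times in (("gen 1", generation_1), ("gen 2", generation_2)):
    mean, var = cycle_time_metrics(times)
    print(f"{label}: mean={mean:.1f} days, variance={var:.2f}")
```

Falling mean reflects faster throughput across generations; falling variance reflects tighter process control.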
cross-functional product development teams (House and Price, 1991). While the “return map” developed by HP contains many metrics that are similar to those discussed in this volume, the metrics have been modified, tweaked, and combined in a manner that reflects the unique measurement needs identified by HP for its collaborative product development efforts. The metrics create a unique internal language, or grammar for communication among
the many different functional groups in the company (Lardner, in this volume). As described by House and Price (1991), the metrics are used to measure, communicate, and understand the “contributions of all team members to product success in terms of time and money . . . (and the return map) includes the critical elements of product development and the return or profits from that investment.”
Of course the survival and success of all manufacturers—even those enjoying dominant market positions and offering the highest quality products—depend on the ability of the company to make a profit for its owners. Therefore, there is a real need for the proper financial metrics to measure, compare, and furnish information for decisions about the management of the enterprise.
Care in selection of appropriate metrics is extremely important because they focus attention on a particular set of variables and thus affect the kind and direction of control taken. Cook (in this volume) points out that U.S. manufacturers are beginning to realize that proxy financial metrics, such as profitability, market share, and return on investment have not been very good for measuring the effectiveness of manufacturing in a global marketplace. He suggests that quality, lead time, flexibility, and innovation may be better reflections of their competitiveness. These goals represent fundamental metrics that can be directly influenced by the enterprise and, in turn, improve future financial performance and resilience to change.
For some time, engineering managers have expressed concern with the difficulty of available methods for justifying capital investments in such areas as flexible manufacturing equipment, increasing product quality through better production process controls, and promoting greater work force involvement through employee education programs. The management accounting community has finally recognized these problems and begun to question the metrics incorporated in the manufacturing operating policies, control parameters, and performance-evaluation criteria that are used to evaluate the return and viability of new projects (Johnson and Kaplan, 1987). Policies based on information derived from aggregate financial reporting data (proxy metrics) offer almost no basis for operational decisions or for evaluation of investments in new technologies (Eccles, 1991). Chew and coauthors (1990) show how a company with 40 plants, all producing basically the same products, missed opportunities for increasing profitable performance because, when comparing the plants, division management focused on the wrong financial metrics. They considered the most effective plants to be those that were most profitable and ignored the special circumstances that caused locations with outstanding productivity to exhibit only good profitability. The transfer of ideas, methods, and technologies from the high-productivity plants would have further increased the profitability of the higher-ranking profit centers.
It must also be kept in mind that financial measures should not, as is so frequently the case, be the only source for metrics. Turnbull and his coauthors (in this volume) suggest that nonfinancial metrics be used to assess plant, business unit, or enterprise performance, and that these in turn will contribute to predicting the expected financial performance of the business. They point out that a wealth of data is likely to be available to the planner, but that finding it requires a systematic search, often of sources that have not commonly been included, such as operating staff and even outside sources. It is important to develop the correct mix of financial and nonfinancial measures to guide the manufacturing organization. Each set of indicators provides a different perspective on the manufacturing system, and one must recognize that changes in one set of metrics may not be reflected by accompanying changes in another set. For example, it may not be easy to quantify the financial returns expected from investment in computerized flexible machining systems or from training the work force in quality function deployment. But nonfinancial metrics that highlight the effect of switching rapidly between a wide variety of production setups will readily indicate the opportunity for a significant improvement in company performance.
Nonfinancial Indicators and Long-Term Profitability
In an important sense, a call for more extensive use of nonfinancial indicators is a call for a return to the operations-based measures that were the origin of management accounting systems. The initial goal of management accounting systems in the nineteenth-century textile firms and railroads was to provide information on the operating efficiency of these organizations. Measures such as conversion cost per yard or pound and the cost per gross-ton-mile provided easy-to-understand targets for operations managers and valuable product cost information for business managers. These measures were designed to help management, not to prepare financial statements. The need to expand summary measures beyond those used to measure the efficiency of conversion reflects the greater complexity of product and process technology in contemporary organizations. But the principle remains the same: to devise short-term performance measures that are consistent with the firm's strategy and its product and process technologies. We need to recognize the inadequacy of any single financial measure, whether earnings per share, net income growth, or ROI, to summarize the economic performance of the enterprise during short periods.
SOURCE: Johnson and Kaplan (1987).
The Application of Metrics as Operational Guidelines
Assuming that the proper metrics have been selected, one must next determine how those metrics can be applied. This implies developing rules or policies to guide the collection of data and to judge the meaning of the values found. Although the desired direction of an effect can usually be defined, qualitative measures alone are not sufficient; a quantitative means of establishing norms for policy parameters is best. For example, Compton and coauthors (in this volume) show that learning curve models can be used to establish expectations for quality metrics and to correlate values of those metrics with specific actions taken for their improvement. In their view
Learning curves are not to be viewed as merely descriptive. They can be, and frequently have been, used as an aid in making predictions, in that early experience in the production of a product can be used to predict future manufacturing costs. [Assuming confidence in the parameters chosen] one can readily predict the costs to produce a unit after some future cumulative production volume has been achieved.
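This predictive use can be illustrated with the classic power-law learning curve, C(n) = C1 · n^(−b). The first-unit cost and the 80 percent learning rate below are illustrative assumptions, not figures from the chapter.

```python
import math

def learning_exponent(learning_rate: float) -> float:
    """An '80 percent curve' means unit cost falls to 80 percent of its
    prior value each time cumulative volume doubles: b = -log2(rate)."""
    return -math.log2(learning_rate)

def unit_cost(first_unit_cost: float, cumulative_units: int, b: float) -> float:
    """Wright-style power law: C(n) = C1 * n**(-b)."""
    return first_unit_cost * cumulative_units ** (-b)

b = learning_exponent(0.80)   # ~0.322 for an 80 percent curve
c1 = 1000.0                   # hypothetical first-unit cost

# Early experience, expressed as fitted parameters, predicts the cost
# to produce a unit after future cumulative production volumes.
for n in (1, 2, 4, 100):
    print(f"unit {n}: predicted cost {unit_cost(c1, n, b):.2f}")
```

In practice C1 and b would be fitted from early production data; confidence in those parameters, as the quotation notes, is what licenses the forward prediction.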
Although these metrics are important for considering the activities within the organization over time, it is at least as important to maintain a vigilant awareness of competitors' capabilities and to adopt or define effective metrics for comparisons. Competitive strengths of the manufacturing firm have become more dependent on the quality of its work force and their ability to incorporate appropriate new technologies in products, processes, and services of the company (see also Prahalad and Hamel, 1990). Compton (in this volume) emphasizes the importance of developing the proper metrics for identifying, measuring, and evaluating these characteristics to ensure the future viability of the company. His metrics for gauging the capabilities of the organization to evaluate technological developments include the level of support for internal research and development; the portion of the R&D budget devoted to long-term projects, exploratory activities, new concepts, and technological innovations; the level of encouragement and support personnel receive to participate in worldwide technical meetings and activities; and the level of investment in technical libraries and information resources.
When assessing the capabilities of the technical work force relative to one's competition, credible indicators include the distribution of professional and advanced degrees as well as involvement in continuing education. Professional awards and participation in outside organizations (for example, as officers, speakers, and lecturers) are indicative of the quality of the employees most responsible for maintaining the technological competencies of the firm (Compton, in this volume).
Although it is necessary to measure performance throughout the manufacturing enterprise, it is also necessary to apply metrics beyond the internal measures discussed above. Edmondson (in this volume) discusses the importance of using metrics that reflect whether the product definition, captured by designers and marketing staff, matches that articulated by customers. Each time there is additional information to share with the customer—presenting the design specifications, demonstrating a prototype—the metric remains the same: “Is this what you had in mind?” Cautioning against shortcuts, Edmondson points out that making use of this metric imposes considerable work on the manufacturer and also, to a lesser extent, on the customer.
Some firms might be tempted to establish metrics that they can generate and test from within the firm itself. . . . Metrics of this sort can give some indication of how well you are meeting your product definition but certainly seem to be a poor substitute for a real, live customer reaction. . . . In the final analysis the customer's reaction to your product has a nearly 100 percent correlation with whether or not your product will sell (Edmondson, in this volume, p. 135).
FOUNDATION: World-class manufacturers recognize the importance of metrics in helping to define the goals and performance expectations for the organization. They adopt or develop appropriate metrics to interpret and describe quantitatively the criteria used to measure the effectiveness of the manufacturing system and its many interrelated components.
MODELS AND LAWS
Laws, in the context of scientific and engineering discovery, provide intellectual foundations and explanations by describing the relationships among the variables and parameters of the phenomena under investigation. These science-based laws also make it possible to predict the consequences of changes in variables based on an understanding of the relationships among them. However, as the number of variables increases and their relationships become more complicated and less well understood, we are less certain of
the effect caused by changes in one or more of the variables. As the relationship of the variables becomes more complex and the phenomena described become less specific, we begin to consider their explanations to be less reliable and more subject to outside forces or dependent on the context or environment. Eventually, the “laws” are considered as models of the situation of interest.
Many of the phenomena of nineteenth-century physics that were identified as laws of nature were, by the mid-twentieth century, spoken of as models of phenomena. Models and modeling continue to be the popular terminology, particularly, as Little observes (in this volume), “in the study of complex systems, social science phenomena, and the management of operations.” Little suggests that the word model connotes the “tentativeness and incompleteness” often appropriate to our descriptions of complex systems “in which there are fewer simple formulas, fewer universal constants, and narrower ranges of application than were achieved in many of the classical ‘laws of nature’.”
The goal then for metrics and models is the identification of manufacturing science-based explanations and foundations—“laws of manufacturing systems”—that could be used to describe and understand current manufacturing systems, predict the consequences of actions, and confidently initiate the actions necessary to achieve organizational goals. Little notes that we are more likely to find a taxonomy or hierarchy of “manufacturing models” that provide various degrees of generic applicability. The bases available for constructing descriptions of phenomena are limited to mathematical expressions, which have no necessary relationship to the real world; physical laws, which require observation of the world and induction about the relationships among observable variables; and empirical descriptions of the world, in which there are fewer simple formulas and only approximate representations of phenomena. For example, the use of elementary queueing theory by Krupka (in this volume) to represent the flow of parts and materials through production equipment and machines in the factory is an instance of mathematics without physical foundation applied to manufacturing systems (for further discussion of models and mathematical formulations applied to manufacturing systems, see Striving for Manufacturing Excellence, 1990).
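Krupka's elementary queueing representation can be illustrated with two standard results: Little's law, L = λW, and the mean flow time of an M/M/1 queue, W = 1/(μ − λ). The arrival and service rates below are hypothetical, chosen only to show how the formulas tie work-in-process to throughput and flow time without any assumption about the physics of the processes.

```python
def wip_from_littles_law(arrival_rate_per_hr: float, flow_time_hr: float) -> float:
    """Little's law: average work-in-process L = lambda * W."""
    return arrival_rate_per_hr * flow_time_hr

def mm1_flow_time(arrival_rate: float, service_rate: float) -> float:
    """Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda).
    Valid only when arrival_rate < service_rate (utilization below 1)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: utilization >= 1")
    return 1.0 / (service_rate - arrival_rate)

lam, mu = 4.0, 5.0              # hypothetical parts/hour arriving and served
w = mm1_flow_time(lam, mu)      # average hours a part spends in the system
print(f"flow time: {w:.2f} h, WIP: {wip_from_littles_law(lam, w):.2f} parts")
```

Note how sharply flow time grows as λ approaches μ; this purely mathematical fact, not any physical law, is what makes such models useful for reasoning about factory flows.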
Models based on observations, such as relating the physical distance between pairs of researchers and the number of “messages” they exchange per week, can be applied to a broad range of engineering and managerial practices—they are generic models. When commenting on the relationship between distance and communication frequency, Little noted that while there does not appear to be a “strictly prescribed functional form or universal
constant, . . . there is definitely a general shape and an experimentally determined range of parameter values.” Moreover, although “the regularity of the curves can be distorted by a variety of special circumstances . . . the basic phenomenon is strong and its understanding is vital for designing buildings and organizing work teams effectively.”
Compton and coauthors (in this volume) propose learning curves for quality as another example of generic model applicability. The learning curves, based on empirical observations, have shown a general relevancy across a diverse range of manufactured products. These models for projecting quality improvement share a similar form with the models developed in the late 1930s to explain the significant decreases in product unit costs as a consequence of accumulated production volume. Several possible reasons can be proposed for their comparable configurations:
Although the specific actions taken to improve quality differ from those taken to reduce unit costs, a striking similarity exists between the two. . . . Both result from conscious actions taken by management and employees to accomplish a common strategic objective for the enterprise. Both combine human commitment and training with technical improvements. Both require extensive knowledge of the processes being employed and the products being produced. Therefore, quality and costs might be expected to share a common representation (Compton et al., in this volume, pp. 110– 111).
Although the applicability of models in the manufacturing enterprise is most commonly thought of in the context of production operations and processes closely associated with the physical manufacture of products, Krupka suggests that models and modeling should be considered in a much broader context. An example is the use of models to discover problems in the subsystems involved in new product introduction and other nonmanufacturing steps, before the start of physical production activities. Krupka notes that nonmanufacturing operations “are often more complex than those encountered on the factory floor.” Moreover
Analysis of such [models] often reveals the presence of steps that add no value or that consist of re-creating—at the risk of introducing errors—information created elsewhere. Eliminating these steps will shorten the system's interval, reduce costs, and often improve quality by reducing opportunities for the introduction of errors (Krupka, in this volume, p. 168).
Another reflection of the intricacies of the systems that operate within the manufacturing enterprise, but may remain hidden until observations are described with empirically derived charts, is evidenced in “decision-expenditure” curves (Bowen, in this volume). The formal and informal linkages and delays that occur as new products and processes are commercialized
result in long feedback loops. These empirical observations suggest the magnitude and value of up-front knowledge when time is of the essence. The startling issue is that often 80–85 percent of the project expenditures are determined during the first 15 percent of the project time. Therefore, a high priority is placed on starting research efforts in the earliest phases of the project, because, in Bowen's view, the people involved in these projects “perceive the actual time of the decisions that triggered the expenditures as occurring very early in the process.”
Complex processes involving numerous variables and elements of subsystems (such as information, technology, human, financial, and marketing) result in longer than anticipated execution consequences and, thus, strongly influence feedback loops in the manufacturing system (Bowen, in this volume, p. 94).
Bowen proposes that the important aspect of these models is the circumstances they suggest rather than the specific values indicated. Furthermore, when looking at the cases involving the “best-of-the-best,” the 15/85 rule does not seem to apply. Bowen points out that in those cases, “the decisions are much more closely linked to the doing and the expenditures,” and that “the feedback and corrections are different in number, timing, and quality.” He suggests the following explanations for the 15/85 trends:
The ineffective working of teams pulled from functional groups,
The lack of standards or a single data set,
The lack of procedures and mechanisms for problem solving and structuring of the solutions, and
Infrastructural issues such as lengthy procedures and justification for obtaining resources—people or capital.
Examination of and attention to these relationships should be helpful in establishing proper goals and expectations and in understanding how they can be facilitated.
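The 15/85 pattern can be made concrete with a small calculation over a hypothetical decision log (the timings and amounts below are invented for illustration): what fraction of total committed expenditure was determined within the first 15 percent of project time?

```python
def committed_fraction(decisions, time_cutoff_fraction, project_duration):
    """Fraction of total expenditure determined by decisions made before
    the given fraction of project time has elapsed.

    decisions: sequence of (decision_time, committed_amount) pairs."""
    cutoff = time_cutoff_fraction * project_duration
    total = sum(amount for _, amount in decisions)
    early = sum(amount for t, amount in decisions if t <= cutoff)
    return early / total

# Hypothetical decision log: (time in months, committed expenditure in $K).
decision_log = [(1, 400), (2, 250), (3, 180), (10, 60), (18, 70), (22, 40)]

frac = committed_fraction(decision_log, 0.15, 24)  # 24-month project
print(f"{frac:.0%} of expenditures determined in first 15% of project time")
```

For the "best-of-the-best" cases Bowen describes, a comparable log would show decisions spread across the project, pulling this fraction well below the 80-85 percent range.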
Modeling and Understanding
The complexity, nonlinearity, and stochastic nature of models are reflected in the number of variables they include, the number of relationships and interdependencies described, and the amount of information that the model generates. The degree of complexity is thus, to a significant extent, a matter of how detailed the model representation is. This complexity will directly influence the comprehension of the model and the acceptability of the results obtained, as well as the difficulty of changing and enhancing the model.
Solberg (in this volume) proposes simplicity as a key to the acceptance and use of models. He emphasizes the importance of the relationship between the credibility of models and user understanding of those models and what they depict.
It is neither necessary nor desirable to build complicated models to deal with complicated situations. Indeed, we should be trying to find a point of view that makes complicated situations seem simple. . . . We must be aware that simple does not mean trivial or obvious. We cannot define relations arbitrarily, make capricious assumptions, or generalize recklessly. . . . Finding the adequate level of detail, the appropriate assumptions, and the elegant formulation is a matter of hard work and inspired wisdom (and perhaps a large dose of luck) (Solberg, in this volume, p. 218).
And when there are several alternative representations of a particular system, Compton, too, advises that although determining the most appropriate model depends upon many factors, such as the data sampling protocol, it is probably best to use “the simplest formulation possible.” But often the simple representation can be discovered only after constructing and examining more complex models to gain additional insight into the problem; when constructing a first model, it may be difficult to determine which variables and relationships define or constrain the performance of the system under study. Therefore, the first model will include many more factors than will ultimately be needed (Pritsker, 1986b).
Of course, the opportunity to compare the results of several representations can offer further assurances that the results of the models are valid. In fact, Little encourages the development of a “validity-check” model:
If the results of running a complex model suggest a particular course of action, it is imperative to know why the model produced those results, that is, what were the key assumptions and parameter values that made things come out as they did. . . . We should have a simple model that uses a few key variables to boil down the essence of why the recommendations make sense (Little, in this volume, p. 186).
Determining Limits for Improvement
Manufacturers must be able to establish realistic goals and to plan for their accomplishment in the face of uncertainty.
The future success of a business will be influenced both by processes over which the business has little control and by those it can affect directly. For a process in the former category, we are interested in forecasting its expected performance over time. For a process in the latter category, we are interested in forecasting its potential performance, based on our understanding of “what could be” and our capacity to act (Turnbull et al., in this volume, p. 226).
If it is not possible to understand and describe adequately the potentials and limitations of the firm's capabilities, goals for improvement will have little rational basis for those who are charged with accomplishing them.
Turnbull and his coauthors suggest that rational bases for improvement are available in the form of limits—theoretical and engineering. Theoretical limits provide “both an outer bound for forecasts of potential process performance and a framework for clarifying the principles that govern the process.” Based on fundamental principles and reasoning, they are numerical estimates of process performance. Engineering limits, on the other hand, “provide numerical estimates of the levels process variables could attain, using known technologies.” The engineering limit for a specific indicator of process performance is intended as a practical estimate of what is achievable without regard to possible adverse effects on other indicators. Although theoretical limits are expected to be universally applicable (within some particular domain), engineering limits take into account the local context and circumstances of a specific production system. Therefore, engineering limits should move closer to the theoretical limits with the introduction of newer production technologies or with changes in the local context (such as organizational changes that promote cooperation between design and manufacturing groups).
Similarly, while continuous improvement is an important foundation of world-class manufacturing, it must be supported by appropriate mechanisms to measure improvement and to define the appropriate limits or goals to prevent excessive and wasteful investment. Knowledge of the theoretical limits can provide a benchmark for expectations of future improvement (see Foster, 1986).
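The relationship between current performance and the two kinds of limits can be made concrete with a small calculation. The sketch below is illustrative only (the function, indicator, and numbers are assumptions, not taken from the chapter); it treats first-pass yield as a “higher is better” indicator and expresses the remaining improvement headroom against each limit:

```python
def limit_gaps(current, engineering_limit, theoretical_limit):
    """Return the remaining headroom toward each limit for a
    'higher is better' performance indicator (e.g., first-pass yield)."""
    if not (current <= engineering_limit <= theoretical_limit):
        raise ValueError("expected current <= engineering <= theoretical")
    return {
        # what known technology could still deliver in this local context
        "engineering_gap": engineering_limit - current,
        # what fundamental principles say is still possible
        "theoretical_gap": theoretical_limit - current,
        # fraction of the theoretical gap that known technology could close
        "attainable_fraction": (engineering_limit - current)
                               / (theoretical_limit - current),
    }

# Hypothetical example: yield is 82%, known technology could reach 95%,
# and the theoretical limit is 100%.
gaps = limit_gaps(current=0.82, engineering_limit=0.95, theoretical_limit=1.00)
```

A small `attainable_fraction` would signal that further gains require new technology or a changed local context rather than incremental improvement, which is the benchmarking role the chapter assigns to theoretical limits.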
Identifying Critical Variables
When the complexity is great enough that a model is needed to understand the relationships, dependencies, and interactions among the variables, it is also likely that one cannot directly identify which variables exert the most control over the performance of the modeled system. In a model of a single-server queue (such as a machine tool with one operator), Krupka (in this volume) illustrates the significance of identifying and focusing on the critical aspects of the system. He draws attention to the sensitivity of the throughput to the variance of the service time and arrival rate of the modeled machine:
Small decreases in the service rate (which effectively shift the system to a higher level of capacity utilization) lead to a large increase in throughput time . . . (and) an increase in variability, either in the arrival or in the service rate, leads to a large increase in throughput time. . . . The prescriptions for reducing throughput time (or manufacturing interval) are the same:
reduce variability in the system and strive to increase the service rate (Krupka, in this volume, p. 170).
Therefore, when models appropriately represent their systems, they can be used to help identify the characteristics that control system performance and thereby provide a basis for systematic improvement.
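Krupka's observations follow directly from standard queueing approximations. As an illustration (not taken from the chapter), Kingman's formula for a single-server G/G/1 station shows how mean queueing delay depends on both utilization and variability:

```python
def kingman_wait(arrival_rate, service_rate, ca2, cs2):
    """Kingman's approximate mean time in queue for a G/G/1 station.

    ca2 and cs2 are the squared coefficients of variation of the
    interarrival and service times (1.0 for exponential/Poisson).
    """
    rho = arrival_rate / service_rate  # capacity utilization
    if rho >= 1.0:
        raise ValueError("utilization must be below 1 for a stable queue")
    return (rho / (1.0 - rho)) * ((ca2 + cs2) / 2.0) * (1.0 / service_rate)

baseline = kingman_wait(0.8, 1.0, ca2=1.0, cs2=1.0)  # 80% utilization
slower   = kingman_wait(0.8, 0.9, ca2=1.0, cs2=1.0)  # 10% slower service
noisier  = kingman_wait(0.8, 1.0, ca2=1.0, cs2=2.0)  # doubled service variability
```

With these (hypothetical) rates, a 10 percent drop in service rate more than doubles the mean delay, and doubling the service-time variability raises it by half, exactly the two levers Krupka identifies: reduce variability and increase the service rate.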
Strategic Planning and Management
Models are useful for considering operational questions that confront the manufacturing enterprise and for exploring, in a timely way, strategic alternatives for the firm. Mize (in this volume, p. 196) comments on the changed context in which manufacturing managers find themselves today:
Managers are rapidly losing many of the planning aids that have allowed them to proceed in an orderly, progressive fashion. In the past, managers could safely assume that tomorrow will be much like today, with only marginal changes. In fact, randomness was often much larger than the average marginal change; thus, the “noise” masked the “signal.” Consequently, many of today's managers know how to manage only on the margin, in a static mode. Today's managers are faced with the fact that change is continuous, pervasive, and often traumatic. . . . A rapidly changing total environment has become the norm, replacing the relatively stable and static environment of the past.
Mize goes on to characterize the challenge of working backward from a desired future state to the present in a way that clearly shows a path of action. He suggests that models will be needed to help most people to deal with the interdependent variables and dynamic changes affecting the necessary day-to-day control to achieve their organization's strategic visions.
Basis for Decisions and Predicting Performance
Models provide a rational basis for predicting the impact of decisions before their implementation by (quantitatively) describing the important elements, interactions, and dependencies. Empirical models comprise valuable knowledge that provides a basis for engineering and managerial practice. Even simple models like the 15/85 rule described by Bowen (in this volume) are useful drivers of improvement and change.
The construction and continued refinement of models also make it easier to evaluate and transfer the assembled know-how from individuals and groups to others in the organization. Lardner suggests that, as a vehicle for capturing and conveying organizational knowledge, models are a “more accurate process than depending . . . on the experience of a few people and what they remember about the past.”
Factories As Human Phenomena
A major difficulty with the topic of future factories is that the mind usually grasps it visually, as a static picture. But a snapshot view of the future factory is at best incomplete. It ignores continuing developments in technology, and it encourages debate about the desirability of specific renderings of technological possibilities, forms unlikely to appear in any event, far less to be influenced by the debate.
However, if we focus on the process of the design of future factories, a topic far more significant than any specific technological possibility, such as the robot, or for that matter any specific picture, such as the totally automated factory, three issues must be considered:
The factory is a human phenomenon. Every step from conception to eventual destruction is for, by, and because of people.
SOURCE: Nadler and Robinson (1983).
Pritsker (in this volume) also draws attention to the use of simulation in manufacturing companies as a mechanism for explaining and distributing complex rules and policies throughout the organization, especially to the operational areas on the factory floor. Using the same data to drive models throughout the enterprise allows shop floor workers to acquire a perspective of operations that is in concert with the goals of the manufacturing system.
Models can be immensely powerful competitive weapons when used to capture particular competencies of the manufacturer and then leveraged throughout the enterprise that developed them. They offer an important means of accomplishing organizational learning as they extend their use well beyond a particular control activity.
Models should also be considered as a basis for evaluating continuous improvement efforts and changes made in the manufacturing system. The predictive capabilities of models are especially important when dealing with uncertainty about the nature of the problems being addressed and about the likely result of any proposed solution. Lardner (in this volume) emphasizes that the complexity, uncertainty, and interdependence of the many elements of the manufacturing system, and the reliance on the experience of individuals, are significant impediments to “good, timely decision making.”
Efforts to discover appropriate mathematical formulations for expressing and predicting performance are important for extending the science of manufacturing. However, much of the complexity and interdependency found in manufacturing systems does not readily lend itself to such rigorous and exact descriptions. A frequently used method for describing and exploring manufacturing systems is simulation:
Manufacturing models analyzed by simulation (simulation models) are developed to study the dynamics of the manufacturing system. Such models are built without having to fit the manufacturing system into a preconceived model structure because the analysis is performed by playing out the logic and relationships included in the model. . . . Of fundamental importance is the building of simulation models iteratively allowing models to be embellished through simple and direct additions (Pritsker, in this volume, p. 205).
Manufacturing organizations offer a rich variety of opportunities for using simulation modeling. For example, it can be used to explain operating procedures, often through animations of the manufacturing system being modeled; to present graphical summaries of large volumes of data generated by the system, including tabulations, statistical estimators, statistical graphs, and sensitivity plots for analysis of manufacturing systems; to rank and select from among design alternatives; to schedule production; to dispatch resources; and to train machine operators, schedulers, and process design engineers.
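A minimal example of the kind of simulation Pritsker describes, a single-machine station modeled job by job, can be written in a few lines. The sketch below is an illustrative assumption (the function name, rates, and job count are not from the chapter); in the iterative spirit Pritsker recommends, it could be embellished with a second machine, a dispatching rule, or statistics collection:

```python
import random

def simulate_single_server(arrival_rate, service_rate, n_jobs, seed=1):
    """Simulate a single-server queue (Poisson arrivals, exponential
    service) job by job; return the mean time a job spends in the system."""
    rng = random.Random(seed)
    arrival = 0.0          # clock tracking job arrivals
    server_free_at = 0.0   # time the machine finishes its current job
    total_time = 0.0
    for _ in range(n_jobs):
        arrival += rng.expovariate(arrival_rate)  # next job arrives
        start = max(arrival, server_free_at)      # queue if machine is busy
        server_free_at = start + rng.expovariate(service_rate)
        total_time += server_free_at - arrival    # waiting plus processing
    return total_time / n_jobs

# At 50% utilization, M/M/1 theory gives a mean time in system of
# 1 / (service_rate - arrival_rate) = 2.0; the estimate converges toward it.
mean_time = simulate_single_server(arrival_rate=0.5, service_rate=1.0,
                                   n_jobs=50_000)
```

Note that the model is built by playing out the arrival and service logic directly, rather than by fitting the system into a closed-form structure, which is the distinction Pritsker draws between simulation and analytic models.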
FOUNDATION: World-class manufacturers seek to describe and understand the interdependency of the many elements of the manufacturing system, to discover new relationships, to explore the consequences of alternative decisions, and to communicate unambiguously within the manufacturing organization and with its customers and suppliers. Models are an important tool to accomplish this goal.