
4—
Shop Floor Production

Introduction

Several generic tasks characterize production, the process through which parts and materials are transformed into final products. These tasks include, among others, the receipt and acknowledgment of orders, the acquisition of materials, the performance of shop floor operations, and the generation of information needed to support continuous improvement. Together, these tasks (when properly done) constitute a qualified production process. Qualifying a production process is a demanding and important task that requires people trained and physically qualified for a given job, machines and process instruments that can be guaranteed to operate within specifications, production capacity that can match the order demand, and the availability of production capacity in the desired time frames.

The information-processing view of a production facility is in essence the same as that for an individual work cell within the facility. Both factories and work cells process orders and turn out products. For a factory, the order usually comes from a customer outside the factory; for a work cell, the order comes from inside the factory. For a factory, the product delivered is a final product that can be sold to an external customer; for a work cell, the product delivered is a partially finished product that goes on to the next work cell, which regards it as a part or a raw material for that next cell. The demands made by factories on suppliers for components are the same as the demands made by an individual process to be carried out by a work cell.

This chapter focuses on those aspects of production related to the scheduling



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





of specific factory activities, control of the activities and the operation of machines on the shop floor, and mechanisms for providing rapid feedback that becomes the basis for near-real-time adjustments in various production activities. Table 4.1 summarizes information technology (IT)-related research needed to advance shop floor and production systems.

Scheduling Factory Activities

Centralized Control

The dominant issues in production planning today are achieving major reductions in manufacturing lead time and major improvements in honoring promised completion times. Better scheduling and planning might well help to reduce the total time required for converting raw materials and parts to finished products.

Today's dominant scheduling paradigms are based on material requirements planning (MRP) and manufacturing resources planning (MRP-II), although other approaches are used from time to time. MRP and MRP-II were developed in the 1970s and 1980s. Originally developed to handle planning purchases of parts required for the products to be manufactured, MRP assumes a plant of infinite capacity. MRP-II goes beyond MRP to take into account inventory, labor, actual machine availability and capacity, routing capabilities, shipping, and purchasing. MRP-II generates a master production schedule that serves as the driver or trigger of shop floor activities. By design, the characteristic time scale of scheduling (in MRP jargon, the "bucket") based on MRP and MRP-II is days or weeks, perhaps even months; depending on the implementation, MRP and MRP-II may or may not be the basis for "to-the-minute" or "work-to" timetables for equipment use and material flows.

Under any circumstances, however, arranging for the moment-to-moment control of factory operations is the job of the shift supervisor. For the shift supervisor, situational awareness is at a premium.
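The bucketed, infinite-capacity netting at the heart of MRP can be sketched in a few lines. This is a minimal illustration rather than a description of any particular MRP product; the lot-sizing rule, quantities, and weekly buckets are hypothetical:

```python
def mrp_plan(gross, on_hand, receipts=None, lot_size=1):
    """Net gross requirements per bucket (e.g., week) against projected
    on-hand inventory, releasing lot-sized planned orders as needed.
    Like MRP, this assumes infinite capacity: orders are never deferred."""
    receipts = receipts or {}
    planned = [0] * len(gross)
    inventory = on_hand
    for t, demand in enumerate(gross):
        inventory += receipts.get(t, 0)        # scheduled receipts arrive
        shortfall = demand - inventory
        if shortfall > 0:                      # net requirement in bucket t
            order = -(-shortfall // lot_size) * lot_size  # round up to lot size
            planned[t] = order
            inventory += order
        inventory -= demand                    # demand consumes inventory
    return planned

# four weekly buckets, 30 units on hand, lot size of 25
print(mrp_plan([10, 40, 10, 40], on_hand=30, lot_size=25))  # [0, 25, 25, 25]
```

In MRP-II this netting would be joined by capacity, labor, and routing checks; the point here is only the bucketed, deterministic character of the calculation.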
He or she must draw on information provided by workers on the previous shift (reflecting matters to which he or she must attend on this shift) and sensors and worker reports of what is happening on the shop floor during this shift (e.g., the condition of various tools, how various processes are operating, who has reported for work, what materials are available), as well as directives from senior management concerning overall objectives. From these sources, the shift supervisor must develop a local plan of operation for the next 8 hours.

In principle, the real-time factory controllers of today operate by periodically receiving a list of jobs that must be completed if the MRP-based scheduling plan is to be followed. For each production job, a process plan is retrieved from a database. Such a plan involves the identification of routings of particular work in progress to specific process cells, scheduling of all activities at both the cell and workstation levels, coordinating those activities across the equipment, monitoring

TABLE 4.1 Research to Advance Shop Floor and Production Systems

Subject Area / Example of Research Needed

Equipment controllers
• Architecture and technology for shop floor equipment and data interfaces
• Open architecture for control systems
• Appropriate operating systems, languages, data structures, and knowledge bases
• Human-machine interfaces to permit people to interact effectively in a modern manufacturing environment
• Design for repairability and the ability to work around equipment crashes, including diagnostic software
• Better real-time control
• Wireless communication

Sensors
• Standardized interface connections
• Manufacturing control architecture

Dynamic (real-time) scheduling
• Dynamic shop floor models with high-speed recomputing time and the ability to handle numerous variables
• Real-time planning and scheduling tools for the flexible factory and the distributed factory
• Techniques to ensure graceful degradation of production operations in the event of local problems
• Tools to facilitate situation assessment and scheduling by the factory manager and operations team
• Multilevel understanding of large-scale systems
• Means for identifying the relevant measures and quantifying the relative performance of alternative systems
• Tools to support the brokering of priorities and obligations among cooperating entities, based on minimizing overall transportation, material handling, inventory, capital, and labor costs

Intelligent routing systems
• Identification of appropriate interfaces among product design, product engineering, manufacturing engineering, and factory floor procedures as they emerge in computer-augmented work groups

TABLE 4.1 Continued

Subject Area / Example of Research Needed

Intelligent routing systems (continued)
• Demonstration of the resilience of intelligent routing systems with respect to the vagaries of factory conditions

Smart parts (automatic routing)
• Practical open standards for recording and communicating data among parts, assemblies, subsystems, and their network of makers and maintainers
• Mechanisms for cost-effective embedding of information and for ensuring access throughout the life of a part

Modeling of manufacturing systems
• Ways to efficiently manage large amounts of related data spread over many machines and locations
• Ways of specifying complex data relationships
• Ways of ensuring interoperability of design process tools
• Better ways of encoding and decoding data
• Improved data retrieval methods, including human interfaces

Rapidly reconfigurable production systems
• Development of agility in the face of rapid change in a number of important product or process variables
• Investigation of the feasibility of developing a reasonably universal product configuration language and methodology
• Assessment of the feasibility of using programming languages to represent manufacturing operations in the same sense that design languages represent designs
• Development of systems that simulate the operation of a given manufacturing configuration under a variety of conditions to optimize configuration

Resource description models
• Development of schemata and models to represent manufacturing resources and their interconnection

Knowledge bases for new process methods
• Development of a robust and flexible system that can model efficiently nearly any process that may be developed

activities, and job tracking. The real-time control system then determines the sequence of activities necessary to execute that process plan. Once determined, the sequence of activities is passed down a control hierarchy that is often organized around equipment (e.g., numerical control machining centers, robot systems, automated ground vehicles, or any other computer-controlled manufacturing equipment and related tooling resources), workstations of interrelated equipment, and cells of interrelated workstations.

In practice, the activities on the shop floor in most MRP or MRP-II installations are orchestrated by an informal, manual, chaotic process that tries to adhere to the assumptions on which a given MRP scheduling plan is based. Such plans are difficult to change when unforeseen contingencies occur (e.g., delays, material defects, unexpected machine downtime). Thus, in an attempt to avoid disrupting the schedule, factory managers expend considerable effort in solving real-time shop floor problems, for example, by arranging for priority delivery of parts at added expense. Nevertheless, considerable replanning and rescheduling are necessary when managers are unable to fix these problems, a not-infrequent occurrence in a manufacturing enterprise.

Effective real-time control of a factory depends on scheduling that remains close to optimal (relative to some performance measure or measures) over a realistic range of uncertainties and over time. With such scheduling, priorities from moment to moment can be balanced against circumstances prevailing in the plant and in the manufacturer's supply chain. Such circumstances might include sudden changes in conditions generated by drifting machine capability, material shortages, worker absenteeism, unplanned downtime, tool breakage, and the arrival of unexpected rush items. Today, managing such sudden changes often requires costly crisis intervention.
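How moment-to-moment priorities might be balanced against prevailing conditions can be suggested with a toy dispatching rule. This is a hypothetical sketch: the job and cell record layouts and the earliest-due-date tie-break are invented for illustration, not drawn from any deployed system:

```python
def next_job_for_cell(cell, jobs, factory_state):
    """Choose a cell's next job from current factory-wide conditions:
    only jobs whose next operation this cell can perform and whose
    material has actually arrived are candidates; among those, take
    the earliest due date."""
    candidates = [
        job for job in jobs
        if job["next_op"] in cell["capabilities"]
        and factory_state["material_ready"].get(job["id"], False)
    ]
    if not candidates:
        return None  # idle rather than start work that would block
    return min(candidates, key=lambda job: job["due"])

cell = {"capabilities": {"mill", "drill"}}
jobs = [
    {"id": "A", "next_op": "mill", "due": 5},
    {"id": "B", "next_op": "drill", "due": 3},
    {"id": "C", "next_op": "paint", "due": 1},  # not this cell's operation
]
state = {"material_ready": {"A": True, "B": True, "C": True}}
print(next_job_for_cell(cell, jobs, state)["id"])  # B
```

A real dynamic scheduler would also weigh setup costs, rush priorities, and downstream congestion; the point is only that the decision can be made at the cell from shared, current state.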
An effective real-time schedule must be flexible enough to absorb minor perturbations in the system and require many fewer "reschedulings" when something goes wrong. Real-time dynamic scheduling with this kind of flexibility would be able to reflect the external priorities of the manufacturing enterprise. Dynamic scheduling would increase the overall utilization of material, labor, and equipment by improving the whole system and modifying the operations at each cell, thus reaching levels unattainable with the best of present methods; for example, dynamic schedulers would account for interactions among receiving, material handling, and cell schedules and for the propagation of changes.

A dynamic scheduler would continually track the status of jobs, cells, tooling, and resource availability. Through network communications, each cell would have access to information pertinent to the shop floor. What should be done next by any particular cell at any particular moment would thus be determinable at that cell and would be based on current conditions throughout the factory.

A follow-on to MRP/MRP-II technology would provide tools for production planning, scheduling, and control with:

• The ability to build and maintain high-quality finite capacity schedules

(both aggregate and resource-based) that realistically account for the major constraints and preferences characterizing the production environment at hand;

• Integration of adaptive planning, scheduling, and control capabilities, thus making it possible to quickly revise a production plan or schedule to account for a wide range of contingencies, to drive automated equipment in real time, and to make trade-offs of local goals in favor of higher-level ones;

• The ability to effectively integrate improvements in a wide range of production planning and scheduling decisions within a given manufacturing site as well as across the supply chain;

• Support for powerful interactive capabilities that allow the user to incrementally manipulate the production schedule at multiple levels of detail and to explore "what if" scenarios in overall factory planning as well as scheduling on the shop floor, and that help the user to identify possible inefficiencies in the production schedule and determine ways to correct them (e.g., adding overtime, shifting personnel around, buying new equipment, modifying raw material reordering policies, and so on);

• Reconfigurable and/or reusable software, making it easy to customize these techniques for a wide variety of manufacturing environments;

• Support for integration with other important functionalities (e.g., process planning, factory layout, accounting, preventive maintenance, and so on);

• Capabilities to deal with the stochastic and sometimes reentrant nature of shop floor events. Within the MRP/MRP-II paradigm, inputs are deterministic estimates of the times for various shop floor operations that are obtained by time and motion studies.
In reality, however, the timing of events is probabilistic rather than deterministic, and the distributions of service times and of interarrival times look like nonstationary exponential probability density functions with long tails.1 Manufacturing operations may also be reentrant, in the sense that the same equipment is often used for performing several different steps at different times during a production process (e.g., photolithography in semiconductor manufacturing). Such nonlinearities introduce added difficulties for the development of new scheduling paradigms; and

• Backward compatibility with MRP/MRP-II technology. While technologists often believe that a clean break with current technology will enable newer technologies to avoid its most basic problems, as a matter of practicality, few manufacturers will be willing to abandon familiar though perhaps flawed systems in favor of new technologies not proven in the factory. Whether this means that the follow-on to MRP/MRP-II will be an entirely new paradigm for

1 Personal communication: Charles Hoover, professor of electrical and mechanical engineering, director of the Manufacturing Engineering Program at Polytechnic University, Brooklyn, New York; July 1994.

technology with translators that bridge the gap to legacy systems or an evolutionary improvement to MRP-II is an open question.

To support effective scheduling, research is needed in the following areas:

• Dynamic shop floor models with fast run times and the ability to handle numerous variables. Factors that affect production scheduling include the following:

  • Are the appropriate resources available at the right time? (Such resources include people, tooling, jigs, fixtures, or other special product or process needs.)
  • Is the equipment up and running?
  • What is the next scheduled downtime? Will people be there to handle this?
  • Is the "downstream" process ready to handle the output?
  • What is the status of any inventory buffers?
  • Can a particular lot be fabricated using factory resources other than the one for which it is planned? If so, how?

All these and other variables have to be entered into the factory model and scheduling system in order for the results to have any benefit. The model could include agents that would take requests from customers (i.e., other agents that are fed information by the agents in question), decompose those requests into information requirements, and then request that information from their suppliers (i.e., still other agents that feed information to these agents). As suppliers respond with estimates of their capability, the agents in question make realistic forecasts of their ability to respond. Once the requested information is in hand, the agents pass status reports to customers.

• Real-time planning and scheduling tools for the flexible factory and the distributed factory.
Such tools would also provide capabilities for integrating scheduling and control reactively (e.g., they would make use of adaptive scheduling techniques based on the severity of the contingency at hand and the time available to repair the schedule, and/or scheduling techniques that could exploit windows of opportunity that occur fortuitously) and for propagating schedule changes throughout the system.

• Techniques to ensure graceful degradation in the event of local problems. Many production operations today are brittle, in that a problem in one crucial location can halt an entire production line. More desirable would be a real-time scheduler that could detect cell problems as they occur and reroute work flows around them while the cell in question was being repaired. If such dynamic rerouting were possible, factory output would be damaged only to

the extent that one cell's contribution to the overall process was no longer available, rather than suffering the functional equivalent of losing the contribution of many cells to overall output. Dynamic rerouting would also enable the production of varying items. Research in this area would address diagnosis of plan failures, plan repair, and planner modification, as well as analysis of material handling.

• Tools to facilitate situation assessment and scheduling by the factory manager and operations team. For the immediate future, management of shop floor operations will be under direct human supervision. Human decision makers will need such tools if they are to perform their jobs effectively. Tools in this area should also provide techniques for analyzing interactive planning and scheduling. Better techniques would help in developing tools to monitor and analyze the operation of an enterprise and identify bottlenecks and opportunities. One important aspect of this problem is avoiding suboptimization by understanding the impact of alternative goals at each workstation and the downstream consequences of changes made. In addition, since such tools are likely to be able to produce scheduling-relevant information in voluminous quantities, good visualization and other human-computer interfaces will be necessary as well.

• Tools to support the brokering of priorities and obligations among cooperating entities, based on minimizing overall transportation, material handling, inventory, capital, and labor costs.
Such tools must also integrate the multiple dimensions in which managers must make decisions, including release decisions (e.g., when to release jobs and orders to the shop floor), reordering decisions (e.g., when to reorder raw materials and/or components, how much, and from whom), sequencing and batching decisions (e.g., grouping orders for similar parts to reduce setup costs), safety stock and safety lead-time decisions (e.g., increasing lead times as a hedge against various sources of uncertainty), overtime and order-promising decisions (e.g., promising delivery dates to potential customers), coordination decisions (e.g., using distributed scheduling and control within a given plant as well as across the supply chain), and material-handling decisions (e.g., when to add capacity or when to change the strategy of the material-handling system).

• Development of new techniques for optimizing production planning and scheduling. Although such work has traditionally been closer to operations research (OR), several successful optimization techniques are better characterized as a mix of techniques drawn from artificial intelligence and OR.2 Learning can conceivably

2 For example, certain scheduling techniques developed at Carnegie Mellon University are based on constraint satisfaction techniques developed in the artificial intelligence literature and also combine various OR-like optimization subroutines.

be used at all levels to enhance the performance of these tools (e.g., to enhance the performance of schedule-optimization techniques, help determine strategies for selecting among multiple reactive control policies, learn important user preferences to facilitate interactive planning and scheduling, learn to identify specific types of inefficiencies in the production schedule, and learn effective coordination strategies with suppliers and subcontractors). A related research problem involves the development of optimization techniques that lend themselves to efficient implementation on parallel hardware.

Decentralized Control

An alternative to top-down scheduling of production activities is a decentralized organizational structure in which activities are initiated on the shop floor itself, without regard for a master factory scheduler. Since it is not known with confidence whether a top-down or a bottom-up approach to initiating and performing production activities is more likely to result in success, it is reasonable to investigate both approaches. At least two aspects of decentralized control seem worth exploring: autonomous agents and work and logistics flow.

Autonomous Agents

The use of autonomous agents may offer some potential as a means to handle complex dynamic environments. Implemented as software objects or collections of objects (perhaps representing physical robotic agents),3 they may provide solutions for manufacturing problems in the areas of planning, monitoring, and control. Agents may be able to take responsibility for shutting down a machine, starting up a program, or sending a message to another agent. Because such agents automate control and interaction functions, they can eliminate activities otherwise performed by people and allow for simpler organizational structures, which in turn can simplify requirements for software development and maintenance.
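The request-decomposition behavior envisioned earlier for scheduling agents can be suggested in miniature. This is a toy sketch: the bill-of-materials layout and the lead-time query are hypothetical stand-ins for real inter-agent messaging:

```python
def quote_lead_time(agent, request, suppliers):
    """An agent decomposes a customer's request into component needs,
    queries its supplier agents for lead-time estimates, and reports a
    forecast back: its own processing time after the slowest component."""
    needs = agent["bill_of_materials"][request]
    estimates = [suppliers[part]["lead_time"] for part in needs]
    return agent["processing_time"] + max(estimates, default=0)

assembler = {"bill_of_materials": {"gearbox": ["gear", "housing"]},
             "processing_time": 2}
suppliers = {"gear": {"lead_time": 5}, "housing": {"lead_time": 3}}
print(quote_lead_time(assembler, "gearbox", suppliers))  # 7
```

In a real system each supplier entry would itself be an agent running the same logic over its own suppliers, so forecasts would propagate up a chain of such queries.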
If autonomous agents are to be successful, better architectures for manufacturing systems involving distributed intelligence will need to be developed. Agents should model real-world behaviors, enable encapsulation, promote flexible distribution of control versus centralized control, capture the inherently

3 The committee recognizes that the precise definition of an agent is a topic of debate in the community. (A significant amount of time was spent engaged in this debate in the AAAI Special Interest Group for Manufacturing (SIGMAN) workshops of 1992 and 1993, and the IJCAI SIGMAN workshop of 1993.) Another definition of an agent is that it is an actor with certain cognitive characteristics, such as motivation and intent.

nondeterministic concurrent nature of the environment, and model the needed level of knowledge or behavioral detail at any instant in time.

A critical consideration for sensible agent behavior is the level of autonomy. The practical deployment of agents in manufacturing requires that they behave sensibly, incorporating an understanding of both global and local goals. The issue of agent autonomy is very relevant to planning, monitoring, and controlling agent behaviors. The purpose of planning behaviors is to develop and plan steps of action. Monitoring behaviors involves evaluating the environment external to agents. Controlling behaviors encompasses modifying behavior or data. The extent of autonomy can range from local autonomy, which requires that agents be capable of initiating their own thread of execution; to consensus (negotiation) autonomy, with each agent adhering to a set of rules or constraints governing how consensus is achieved and ultimately how agents behave; to command-driven autonomy, with agents restricted to simply executing some given request (command) without the benefit of queries or "fact finding" about the state of the global world (the state of agents external to a given agent).

Research is needed to develop tools to find information (such as autonomous agents), tools to distribute information, and stable sets of rules for interacting agents (rules for agent behavior). With regard to the development of autonomous agents, the following research topics appear important:

• Architectures supporting dynamic distribution of intelligence among autonomous agents for planning, monitoring, and control, as well as dynamic levels of autonomy.

• System stability. For sensible decision-making capabilities, autonomous agents must understand how to resolve conflicting goals.
Instabilities may arise when multiple agents interact, although collections of agents may also demonstrate emergent characteristics, that is, modes of stable (and desirable) behavior that arise from the complexity of their interaction. Strategies for resolving conflicting goals may need to be developed.

• Better representations of spatial and temporal dimensions relevant to shop floor operations, rather than simple declarative knowledge.

• Representations supporting multiple perspectives of data and knowledge. Simulation, analysis, and design tools all need to use product information, yet each of these applications has a different perspective on the product data. These data must be consistent and persistent, and translators often fall short.

To date, no autonomous agent yet built has demonstrated sophisticated behavior. Even a task such as searching the Internet for the e-mail address of a particular

individual remains out of reach, although many researchers argue that an agent with this particular kind of capability will soon be available. If the potential of agent technology is to be broadly believed, a convincing demonstration must be created.4

Work and Logistics Flow

A complementary approach to achieving decentralized control is the use of an intelligent routing system that can route a partially finished product to the next available manufacturing cell or station capable of performing the work needed in the next step. In such a system, each manufacturing cell knows the operations that it has been certified to perform and bids on work to signal its availability when it is free. Cells communicate their status to intelligent parts carriers through a communications network. Work flow is thus controlled in a bottom-up manner.

One approach to an intelligent routing system presents acceptable methods (or routes), including one determined to be the best according to specified criteria (e.g., minimum cost, maximum speed, and/or maximum quality). All acceptable methods become functional alternate routings. Whenever an engineering change occurs, the routing is downloaded to all cells that have been authorized by the engineering change to bid for work on the part.

Another major aspect of shop floor management is coordinating the flow of various components to appropriate locations on the shop floor. Today, the transport of parts to these locations is performed by intelligent parts carriers. These carriers are in essence automatically guided vehicles that carry parts to any available workstation or to the preferred workstation (selected, for example, on the basis of its having the shortest queue). The carriers expedite the progress of parts through the plant. Such carriers are most useful in a single facility.
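The bidding behavior sketched above resembles the contract-net protocols studied in distributed AI; a minimal version might look like the following. The cell records and the shortest-queue award criterion are hypothetical choices for illustration:

```python
def award_work(operation, cells):
    """Bottom-up routing: every cell certified for the operation bids its
    current queue length, and the part is routed to the lowest bidder."""
    bids = {name: len(cell["queue"])
            for name, cell in cells.items()
            if operation in cell["certified_ops"]}
    if not bids:
        return None  # no certified cell: escalate to a human planner
    return min(bids, key=bids.get)

cells = {
    "cell_1": {"certified_ops": {"weld"}, "queue": ["p7", "p9"]},
    "cell_2": {"certified_ops": {"weld", "paint"}, "queue": ["p4"]},
    "cell_3": {"certified_ops": {"paint"}, "queue": []},
}
print(award_work("weld", cells))  # cell_2
```

Bids could equally be estimated completion times or costs; the essential feature is that the routing decision emerges from cell-local state rather than a master schedule.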
However, when products require production that crosses building or enterprise borders, "smart parts" will carry the information necessary to supply production instructions and history. (A simple illustration of a smart part is one that signals its location within a factory as it is moved about, perhaps through the use of bar codes or attached radio transmitters.) As manufacturing operations are carried out, instructions embedded in the part can be deleted or marked to indicate completion. Smart parts can also monitor and indicate their own performance. Thus maintenance requests can be triggered if the part senses its own performance to be substandard. Planned maintenance regimens may also be recorded within the part, automatically triggering requests for maintenance.

To realize intelligent routing systems, research is needed to identify appropriate

4 Even if no single agent will be able to demonstrate sophisticated behavior, the possibility remains that a collection of agents may be able to do so.
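A smart part of the kind just described is essentially a small data structure riding with the physical part. One possible shape, with fields and a performance threshold invented purely for illustration:

```python
class SmartPart:
    """A part carrying its own routing instructions and history.
    Completed steps are marked rather than deleted, preserving the
    production history; a maintenance request fires when the part
    senses its own performance dropping below a threshold."""

    def __init__(self, part_id, instructions, performance_threshold=0.8):
        self.part_id = part_id
        self.steps = [{"op": op, "done": False} for op in instructions]
        self.performance_threshold = performance_threshold
        self.history = []  # (operation, station) pairs

    def complete_next(self, station):
        """Mark the next pending instruction complete and log where."""
        step = next(s for s in self.steps if not s["done"])
        step["done"] = True
        self.history.append((step["op"], station))
        return step["op"]

    def needs_maintenance(self, sensed_performance):
        return sensed_performance < self.performance_threshold

part = SmartPart("P-100", ["mill", "drill", "inspect"])
print(part.complete_next("cell_2"))   # mill
print(part.needs_maintenance(0.65))   # True
```

Because the instructions and history travel with the part, any plant that can read the embedded record can continue production, which is the point of crossing building and enterprise borders.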

BOX 4.1 An Example of an Open Architecture for Machine Controllers

The MOSAIC project of the late 1980s (Greenfeld et al., 1989) was one of the first attempts at establishing an open architecture for machine control. MOSAIC provides all of the expected functionality of a conventional "closed" computer numerically controlled (CNC) machine tool. That is, the parts programmer of a CNC machine tool can interact with the MOSAIC system from any automatic program for machine tools (APT)-like CAD/CAM terminal, with the result that the files generated through this interaction are postprocessed into machine-level commands that provide exactly the same functionality as G&M codes. However, MOSAIC adds a computer operating system that serves as a uniform platform on which CNC applications can build, specifically, a real-time version of UNIX. The architecture of this operating system allows a set of independent application programs to be developed. This set of application programs can be developed by any third party and brought into the MOSAIC controller for use by the local programs or system developers.

By contrast, a closed architecture constrains local programmers to work with the predefined set of G&M codes that are supplied with the machinery company's vendor-specific controller (Fanuc, Mazak, and Cincinnati Milacron are today's most often seen controllers). Recently, some of these vendors have also claimed an "open" system; but close examination usually shows that the "openness" extends only to their own expanded library functions, written in local formats. They will not be "open" to arbitrary third-party software developers able, say, to supply new routines for new sensors coming onto the market.
In general, machines that plug into an open system must be equipped with a general-purpose computer environment and bus structure that in a "local" sense will control the axes of motion and the various sensor-based devices and will manage programs and data locally. However, to be part of a "broad" intelligent environment, the machinery must use communications and networks that are universally accepted in the computer culture. The machine should be adaptable to changing environments and tasks, and thus modular, both in its controller's computer configuration and in its mechanical construction. MOSAIC is suggestive of what an open architecture controller might be. Nevertheless, any commercially viable product would have to deal with interactions between the large number of controller features required in such products.

• Advanced manufacturing languages for unit process programming. While existing languages (such as APT and Compac for machining) must be supported, a more flexible language is needed. It should include provisions not only for real-time control, but also for the operation of accessory devices in conjunction with the machining process, a more direct connection to CAD/CAM systems, and a flexible interface for user applications. Such a language might also provide controllers with information defining the work being performed, rather than simply specifying the motion of a point through space (as is the case for most current languages).
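The contrast between work-level and motion-level programming might be sketched as follows. This is a purely illustrative Python model, not part of any real manufacturing language; the feature fields and the lowering step are assumptions. A feature-level description of the work is mechanically "lowered" to the point motions that current APT-like languages express directly.

```python
from dataclasses import dataclass

# Hypothetical feature-level operation: it describes the work to be done
# (a rectangular pocket with a tolerance), not just tool-point motion.
@dataclass
class PocketFeature:
    x: float          # pocket corner, mm
    y: float
    width: float      # square pocket side, mm
    depth: float      # cut depth, mm
    tolerance: float  # allowable deviation, mm

def lower_to_motions(feature: PocketFeature, step: float = 1.0):
    """Expand a feature description into simple point-to-point moves,
    the level at which today's motion-oriented languages operate."""
    moves = []
    y = feature.y
    while y <= feature.y + feature.width:
        moves.append(("LINEAR", feature.x, y, -feature.depth))
        moves.append(("LINEAR", feature.x + feature.width, y, -feature.depth))
        y += step
    return moves

motions = lower_to_motions(PocketFeature(0.0, 0.0, 10.0, 2.0, 0.05))
print(len(motions), motions[0])
```

A controller given the `PocketFeature` itself, rather than only the expanded moves, retains the information (intended geometry, tolerance) needed for in-process adaptation.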

• Appropriate data and knowledge structures. Different needs and different local working environments require different data and knowledge structures and different ways of managing these structures. For example:

  • Special real-time database systems are required for real-time acquisition and control of data, since conventional database systems do not have the responsiveness necessary to support rapid acquisition of data. For example, data may need to be time-stamped, so that events can be synchronized at a later time in a conventional computing environment. In addition, databases for ostensibly different purposes must often couple to each other: databases related to equipment maintenance are different from databases that support lot-scheduling needs, but since schedulers need to know the status of equipment, the equipment maintenance database must be linked to the scheduler.

  • Expert systems today tend to be rigid in the sense that they are not readily interoperable with another knowledge base derived from the expertise of a different expert. Research is needed to find ways to reconcile knowledge bases derived from different experts on the same subject. In addition, knowledge-based technology must be developed that is responsive enough to be used with real-time systems.

  • Databases that pass and manage information about events and activities on the shop floor are needed to inform design engineers as well as shop floor workers handling successive shifts. Design engineers may not understand very well what is actually happening on the shop floor; expert system shells have been useful for presenting operators with theoretical best practices and capturing their immediate reactions, which are then routed instantly to engineers along with other information that explicitly defines the context of an operator's remark.
For shop floor workers, special databases are needed that will track on the fly newly discovered and ad hoc information (e.g., flaws discovered on the midnight shift) for use in the next shift.

  • Databases and knowledge representations that enable decision makers to distinguish between unusual system behavior that nevertheless reflects "reasonable" or "appropriate" behavior and unusual system behavior that reflects some kind of failure. As factory managers become privy to larger amounts of information, such questions will arise more frequently.

• Validation of models underlying controller designs, for example, a controller based on an expert system that interprets sensor input. The relationship between different inputs and different rules in an expert system needs to be well understood if operators are to have confidence in the operation of the controller. Thus, reliable techniques are needed to demonstrate the validity of the underlying expert system model.
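The time-stamping requirement noted above can be illustrated with a minimal sketch. All names and the storage format here are illustrative assumptions, not a real system: events from independent shop floor subsystems are stamped on acquisition, so that streams captured separately can be merged into one chronological log later, in a conventional computing environment.

```python
import heapq
import time

def record(stream, source, payload, t=None):
    """Append a time-stamped event to a subsystem's local stream."""
    stream.append((t if t is not None else time.time(), source, payload))

def synchronize(*streams):
    """Merge independently recorded, time-stamped streams into one
    chronological event log (later, offline synchronization)."""
    return list(heapq.merge(*[sorted(s) for s in streams]))

# Two subsystems record events independently (timestamps fixed for the demo).
machine_log, maintenance_log = [], []
record(machine_log, "spindle", "overload", t=2.0)
record(machine_log, "spindle", "resume", t=5.0)
record(maintenance_log, "tech", "bearing inspected", t=3.5)

for t, source, event in synchronize(machine_log, maintenance_log):
    print(f"t={t:.1f} {source}: {event}")
```

The same merged log is what would let a scheduler correlate equipment-maintenance events with lot-scheduling decisions, as the coupling requirement above suggests.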

• The human-computer interface (HCI). Given the complexity of a factory's information environment, an effective factory HCI must provide displays that give a human being the appropriate level of detail for his or her needs (which may change as more is learned about a given situation). Desirable features for a factory HCI include:

  • A direct, easy-to-use interface into product design databases so that product specifications can be easily reviewed on the plant floor;

  • Support for unusual input/output options necessitated by the factory environment (for example, an operator with both hands full may need to interact by voice, an information display may need to be viewed at a distance, or a very concise set of keystrokes may be needed to save time in user input); and

  • Tools that enable the end user to tailor the interface for individual needs and to standardize the interface across multiple machines. The varying HCIs of differing equipment currently present significant obstacles to training employees and attaining quality in manufacturing.

• Design of shop floor equipment for repairability and the ability to work around equipment "crashes," including diagnostic software. Factories are fragile environments; small changes can cause large disruptions in work flow. Such disruptions can be very expensive; 1 hour of shutdown can cost upwards of several hundred thousand dollars. Thus, problems and disruptions must be diagnosed quickly and fixes implemented promptly. Quick fixes may be applied to bring crashed equipment back up, even as more permanent remedies are studied and adopted. Many capabilities related to prompt diagnosis and repair will be achieved by using sophisticated software, databases, expert systems, and networks. The requirements will be speed and accuracy; a quick but poor solution may be more costly than a slower but more correct one.
Study of the trade-offs between the time for and the completeness of solutions is a worthy area for manufacturing research, as is the design of work flows in factories so that there are no single points of failure. (Today, such failure points include network servers, central databases, and cell controllers.)

• The nature of real-time requirements. Factories operate in real time, and a controller that issues a command too soon or too late may cause inadvertent damage. But real-time computing and control are extraordinarily complex and demanding. Thus, several areas require attention:

  • The role of time in various control situations. Delay times owing to computer operation cannot be predicted on the basis of simple formulas as they are in the case of analog circuits; rather, they are currently a matter of best guess and experiment. Even then, they are subject to considerable statistical variation. Research in this area should seek to

establish the characteristics of the time distribution that will lead to successful control, as characterized by the distribution of response times, especially its mean and its worst-case behavior.

  • Mathematical formalisms for real-time control. Developing appropriate formalisms for representing and controlling devices is key to the development of good real-time systems. Today, classical control theory is the dominant formalism used in real-time control, although modern control theory, state space analysis, fuzzy logic, and neural networks have made inroads to some degree. Controllers of the future are likely to involve all the control techniques, classical, modern, fuzzy logic, and neural networks, in an integrated approach to control.

  • The development of a real-time operating system for manufacturing, suitable for the very high speed control required for unit processing operations. Current general-purpose operating systems do not provide the response time essential for maintaining the speed, accuracy, and safety features needed in future machinery. A real-time operating system for manufacturing should be compatible with industry-standard operating systems such as UNIX or MS-DOS with respect to high-level management, file-system operations, communications, and programming environments. Capabilities for network-based coordination with other tools running on the same real-time operating system will also be necessary for the integration of equipment controlled by separate controllers.

• Wireless communications. Since computing will be used throughout the manufacturing environment, and it is impossible to predict beforehand where these interactions will occur, wireless communications will probably be required.
Wireless communications will make factory reconfiguration easier, faster, and cheaper, since the locations of equipment need not be tied to communications portals, and will be useful in supervision, maintenance, handling of materials, and other activities in which personnel cannot be tied to fixed workstations. But the trade-offs associated with wireless communications in a factory environment are complex, for example:

  • Trade-offs between the capital costs of fixed building wiring with very high bandwidth versus the facility costs of wireless transmitters and receivers that operate with a very restrictive bandwidth;

  • Trade-offs among broadcast power, geographical coverage, and multistation interference; and

  • The potential hostility of the factory environment to such communications (e.g., wireless communications in the presence of radiation-sensitive equipment).

Research will be needed to support a rational decision-making process with respect to these trade-offs. Research will also be needed in the area of the

transmission and reception of complex and varied signals in a sensitive environment. Other relevant research (and standardization) would address such issues as maximizing the use of the limited electromagnetic spectrum and spread-spectrum wireless communications using trellis-type encodings to ensure maximum security, reliability, and noninterference with sensitive manufacturing equipment. Beyond research, standards (for components, configurations, interconnections, and control software) and related assistance (e.g., from Sematech activity for semiconductor manufacturing) are needed to ensure consistency and interoperability of implementations, to coordinate and standardize open systems concepts in control systems, and to motivate system and equipment suppliers to use these mechanisms in their equipment. Additional dimensions of equipment controllers are discussed later in this chapter under "Facilitating Continuous Improvement" for reasons that are made clear in that section.

Sensors

To ensure the optimum performance of production processes, it is necessary to have information on what those processes are actually doing in real time and how those processes are affecting equipment, tooling, work zones, material handling, and workpiece material. Knowledge databases and predictive models of a process and its components provide a baseline of "in-advance" information, but if routine deviations are greater than can be tolerated for production within tolerances, sensors are needed to augment this baseline information for process monitoring and feedback to controllers. In existing systems, estimates of the impact of sensing systems on process performance indicate as much as a sixfold increase in effective operation speed (Eversheim, 1991). Add to that the increased yield due to prevention of defective workpieces, and the impact is more impressive.
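The monitoring-and-feedback role described above can be reduced to a minimal sketch. The gain, tolerance, and proportional-correction scheme here are illustrative assumptions, not a recommended control law: a sensor reading is compared with the predictive-model baseline, and a correction is returned to the controller only when the deviation exceeds tolerance.

```python
def monitor_step(baseline, reading, tolerance, gain=0.5):
    """Compare one sensor reading with the model baseline; return a
    corrective feedback term only when out of tolerance."""
    deviation = reading - baseline
    if abs(deviation) <= tolerance:
        return 0.0                # within tolerance: no action needed
    return -gain * deviation      # simple proportional correction

# Simulated readings drifting away from a 0.0 baseline (units arbitrary).
readings = [0.01, 0.04, 0.12, -0.09]
corrections = [monitor_step(0.0, r, tolerance=0.05) for r in readings]
print(corrections)
```

Only the third and fourth readings trigger feedback; the first two stay within the tolerated deviation that the baseline information alone can absorb.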
Moreover, sensors can also support worker safety, prevention of damage to a machine, prevention of rejected workpieces, prevention of idle time on the machine, better interfacing of transfer mechanisms with equipment, and optimal use of resources. The increasing demands of unit processes have encouraged the development of systems using a variety of sensors. Examples of devices and processes into which sensors will be incorporated include extrusion dies, for control of temperature, metal flow, and surface finish; turning tools, for thermal control to provide maximum life and load; and continuous processes, for transmitting process parameters. Sensors will interface so closely with processes that complicated external interpretation of data will no longer be necessary. Future sensors will provide diagnostics (as many do today) as well as fault tolerance and in some cases self-healing capability. Sensors will provide scalable sample rates for analysis

of process variation, summary-level information, and statistical descriptions of process parameters. Sensors can play an important role in many aspects of production, including:

• Quality control. Many quality control schemes rely on pre- and post-process inspection, that is, catching a mistake before a component enters an individual unit process (when nothing is happening) or when it exits from the subprocess (when the unit process is completed). Extension of quality control to in-process inspection, so that problems can be caught in "real time" and perhaps corrected before much additional processing, is even more dependent on sensor performance and the availability of advanced sensors.

• Workpiece placement. Machining tools depend on information about tolerances, orientation, material characteristics, and assembly sequence. Sensors are needed to assess these characteristics before the partially finished assembly or part is worked on.

• Use of intelligent processing equipment. A unit process could do more or less to a part being fabricated (e.g., heat it more, remove less metal) depending on how the particular part being worked on responded to its treatment. A piece of equipment with location sensors could signal its location as it was moved around on the shop floor.

• High-precision fabrication. High-precision fabrication is characterized by very stringent tolerances on form, dimension, or surface features in the presence of a number of material, tooling, and environmental variables. As a production tool shapes material for a finished (or intermediate) product, the reaction of the material to the tool has an influence on what the tool should do next. For example, a machine that unexpectedly encounters more resistance than anticipated to a given cut may need subsequently to exert more force, and a sensor is needed to close the feedback loop to the tool's controller so that the necessary orders may be given.

• Use of tools.
A sensor-loaded tool moving along with the process could provide information on product and process specification compliance and location. The new active sensors will change the way products are transported, tracked, monitored, and maintained at every stage of their existence, from creation to recycling to disposal. Researchers have developed miniature vibration sensors and accelerometers that, if appropriately built into a machine structure (like the resin concrete structures now in use as machine tool structures), could provide in situ sensing capability for control of machine stability and deflection.

Sensors are key elements in the other enabling technologies of process control and process precision and metrology. A vast array of sensors is commercially available or under development in research laboratories; these sensors support unit processes through an extensive array of sensor technologies ranging

from optical and infrared techniques to high-frequency ultrasonic and acoustic emission.6 Sensor innovation will be necessary in the future, because many characteristics of machining processes today (e.g., very low power consumption) tend to reduce the effectiveness of, or render useless altogether, a large number of traditional sensing methodologies based on force, torque, motor current, or power measurement. Examples of more novel sensors include acoustic sensors to monitor and improve tool performance and surface finish or to monitor the relative location of parts in a finished product; photoelectronic and ultrasonic sensors that enable intelligent processing equipment to position and set functions; and bar code and radio frequency sensors that identify and report on the status and location of parts. In many cases, it is likely that relevant sensor technology will have been first developed for application in other fields.

The first major contribution of information technologies to sensors was the idea of digitized output, which removed analog variation from the outputs. Later, self-calibration and error detection were added to allow self-diagnostics to ensure that the information from the sensor was not detrimental to the process control. Sensors for the future production environment pose a number of challenging information technology research questions:

• Standardization. Today's sensors (and actuators) require customized drivers for their operation, greatly increasing the cost of an otherwise inexpensive sensor or actuator. Architectures and interface standards are needed so that users can select from a catalog of sensors and actuators, each with a certain limited set of parameters specified in the catalog, and plug those selected into a common control system with only minor, automatic configuration. When such architectures and standards are in place, sensors should be able to feed data directly into a process control system.
Sensors connected through such architectures would be linked directly into databases for dynamic updates usable by machine controllers.

• Sensor characterization. An appropriate characterization would describe a sensor or actuator at various levels of abstraction (e.g., as an abstract physical process, as an electrical device producing an output, or as a "smart" device with a certain amount of preprocessing capability, including accuracy corrections, zero-offset correction, and perhaps self-calibrating and self-scaling functions). The method of characterization should be applicable to a wide range of devices, including sensors from simple thermocouples up to image systems, and actuators from simple on-off switches to multiaxis robots.

• Sensor fusion. Many sensing techniques are not individually reliable

6 An excellent review of these sensors as related to machining is provided in Shiraishi (1988; 1989a,b).

enough for process monitoring over the normal range of process operation. Multiple sensors that are collectively effective over the entire operating range may be enablers for effective real-time process monitoring if questions of integration can be resolved. To be effective, a multisensor approach requires more attention to feature extraction, information integration, and decision making in real time.

• Sensors with on-board intelligence. With on-board intelligence, the collection of data might be collapsed into a statistical distribution (for routine data) and specific reports of exceptions to the distribution, thereby reducing bandwidth requirements for reporting sensor data. On-board intelligence might enable a sensor to recognize physical features of a machined part (e.g., color, size, shape) to detect the presence or absence of a component or characteristic of a part. Intelligent sensors also might include capabilities for signal conditioning and processing (reducing the impact of uncertainty in sensor data), and perhaps models relating measured values to monitoring and control variables and strategies for using the information gathered.

Facilitating Continuous Improvement

When applied to the shop floor, continuous improvement refers to the continual monitoring of shop floor practice and how that practice is reflected in the products that result. Whereas traditional shop floor environments emphasize the desirability of procedures that are consistent and hence unchanging, continuous improvement suggests instead the incremental evolution of procedures to make products of ever-higher quality and timeliness. Products are monitored for timeliness and for quality at every stage in production; the resulting information is passed back to process operators with enough detail to suggest corrective actions that may need to be taken to improve the process on the next round.
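The monitor-and-annotate cycle just described can be sketched minimally. The record format and field names here are illustrative assumptions, not an actual shop floor system: a candidate change to standard practice is kept, and recorded as an annotation for the next shift, only if the measured defect rate improves.

```python
# Current standard operating procedure, with a running list of annotations
# (most recent agreements) accumulated shift by shift.
procedure = {"feed_rate": 100, "annotations": []}

def trial_change(procedure, change, defects_before, defects_after, shift):
    """Apply a candidate change; keep it as an annotation only if the
    measured defect rate improved on this round."""
    if defects_after < defects_before:
        procedure.update(change)
        procedure["annotations"].append(
            {"shift": shift, "change": change,
             "rationale": f"defect rate {defects_before} -> {defects_after}"})
        return True
    return False

kept = trial_change(procedure, {"feed_rate": 90}, 0.04, 0.02, shift="midnight")
print(kept, procedure["feed_rate"], len(procedure["annotations"]))
```

The annotation, rather than a rewritten procedure, is what carries the rationale and circumstances of the change forward to the next shift.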
Process operators implement certain changes and then see if their changes have had the desired effect. If so, such changes are recorded as annotations to the then-current operating procedures to effect the improvement. In this manner, continuous improvement relies on standardization of practice, resulting in an empirically relevant point of comparison against which to measure change in practice, rather than standardization (and hence invariance) of procedure.

At the heart of continuous shop floor improvement is the idea of operator-centered control systems that enable operators to take a wide range of actions that may be necessary to improve production quality or speed. Research is needed on several dimensions of operator-centered control systems:

• Capture of most recent agreements. Annotations to standard operating procedures can be described as the most recent agreements (i.e., details of understanding reached on the last shift) between operators and the most

recent version of those operating procedures. Such agreements must be passed on to subsequent shifts if the knowledge is not to be lost. When captured, these agreements should specify the nature of the changes in operating procedures indicated, the rationale for these changes, the changes in output to be expected, and the circumstances under which these changes are to be implemented. A particularly promising approach is the use of expert system shells that allow informed agreements to be captured on the spot using the operator's expertise.

• Effective diagnostic capability. A tool should be able to report its state of health: where the process it implements is weak, what problems are impending, and when action will need to be taken.

• Communication with other shop floor systems. This connection would provide, for example, scheduling that reflects the most recent priorities so that the most important jobs could be done first. A connection to the material-handling system that feeds a tool would inform the operator of unexpected delays in the arrival of materials and permit, for example, maintenance to be performed in that unexpected free time. Information about the tool's output would be available to workstations downstream.

• User-configurable interface. A display (control panel) should be dynamically modifiable in accordance with user preferences and needs. Once a change called for by a new agreement has been implemented, the user should be able to modify the interface to reflect that change.

• Administrative control. Operator-centered tools should account automatically for operator time and attendance, labor distribution, production counts and rates, quality levels, and similar administrative data. Greater accuracy and precision in obtaining these statistics should be possible.

All of these elements are aspects of the equipment controller.
But they are discussed separately from the earlier discussion of equipment controllers in this chapter to underscore their importance in capturing information generated at the back end of the production cycle.

Controlling and Managing Product Configuration

Manufacturers often produce variants of a given product. For example, Intel Corporation produces microprocessor chips that are "upwardly compatible" in variants known as the 386, 486, and Pentium series. Each model also has variants (e.g., those with math coprocessors versus those without, those with power-management features versus those without, and so on). Production operations

that can shift easily and rapidly from producing one variant to another would give manufacturers a needed degree of flexibility in their management. Flexible production of this sort is likely to depend on the implementation of two ideas on different time scales. The first is that of a single production facility whose internal operations can be modified substantially by software-based changes to machine controllers and schedulers.7 The second is that of a manufacturing operation that can draw on globally dispersed units providing specialized expertise, unite them temporarily to carry out a specific production task (e.g., to produce a specific product or product line), and then disband them when the task has been completed.

To effectively control and manage product configuration, research is needed to manage product variations and to check the validity of proposed configurations. At present, companies develop product-specific systems or may use artificial intelligence techniques (also often product-specific themselves) to deal with configuration control. In addition, configuration control and management require a universal product configuration language and methodology that can be used by both marketing and manufacturing personnel to develop valid product configurations.

Specific Research Questions

As in Chapter 3, a variety of research questions are motivated by discussions earlier in this chapter. But the caveat posed in Chapter 1 also applies here: research to fill the gaps in the scientific and engineering knowledge about shop floor processes to be supported by information technology is essential if the promise of IT is to be exploited fully. For example, a deeper basic understanding of the physics of part-tool interactions or of the physics and chemistry that might underlie possible faults in a process will be necessary if IT is to control tools or to help diagnose faults in a production line.
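Checking the validity of proposed configurations, as called for above, might be sketched as a rule set over option combinations. The rules and option names below are illustrative assumptions drawn loosely from the Intel example, not actual product constraints.

```python
# Each rule pairs a human-readable description (usable by marketing as
# well as manufacturing) with a predicate over a configuration dict.
RULES = [
    ("math coprocessor requires 486 or later",
     lambda c: not c.get("coprocessor") or c["model"] in ("486", "Pentium")),
    ("power management unavailable on 386",
     lambda c: not c.get("power_mgmt") or c["model"] != "386"),
]

def validate(config):
    """Return descriptions of all violated rules (empty list = valid)."""
    return [desc for desc, ok in RULES if not ok(config)]

print(validate({"model": "Pentium", "coprocessor": True, "power_mgmt": True}))
print(validate({"model": "386", "coprocessor": True}))
```

A shared rule vocabulary of this sort is one way a single configuration language could serve both the people who propose configurations and the production systems that must build them.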
As always, IT can help immeasurably in exploiting knowledge and information that people have, but by itself it is not a substitute for that knowledge and information.

• How extensive should any factory system be? One hierarchical system

7 Even an idea as simple as changing the order in which variations of a product are manufactured can have significant effects on productivity. For example, an AT&T manufacturing plant was responsible for the production of 80 PBX (private branch exchange) systems each week in a wide range of sizes with many different options. The order of manufacture was determined by the sequence in which orders for these systems were received; this pattern resulted in a large and undesirable variation in flow rates of work through the plant as the result of the many different feature sets needed. By rearranging and thereby improving the manufacturing sequence of these 80 systems, a significant improvement in throughput times from the lines feeding final assembly was achieved with no additional equipment purchases. See Luss et al. (1990), pp. 99-109.

with many embedded applications for specific purposes, or a set of distributed applications with few interactions?

• How complex may the user interaction systems be before the complexity overwhelms the user?

• How are new systems introduced into a factory that contains many older or even obsolete legacy systems? How can an existing, obsolete system be replaced with minimum factory disruption?

• How are decision-making systems transferred into decision-making roles?

• What level(s) of abstraction are appropriate for controlling factories?

• What is the trade-off between the level of detail required and the speed of response?

• What characteristics and configurations of autonomous agents give rise to emergent behaviors, swarm stability, and divergent behavior?

• How will humans interact with factory systems? What will be the nature of the user interfaces?