Simulation Models for Information Sharing and Collaboration
Globalization and technological innovation have become catch phrases for the collection of competitive challenges, changes, and shocks that our economy is beginning to encounter. For manufacturing, broadened competition and technical advances are succinctly reflected in two metrics: time, the profound pressure to respond to the rapid ebb and flow of the market; and complexity, the expanded scope and functionality of products and the new bodies of knowledge that influence products and processes.
All of these changes present major challenges to manufacturers, but they are notably difficult for small firms. Here simulation modeling is presented as a cost-effective way for small firms to acquire the knowledge they will need to become competitive manufacturers in the evolving global economy. This paper is based on research the author conducted at Purdue University and the University of Washington (Heim, 1994).
SMALL MANUFACTURING ENTERPRISES
Manufacturing firms are commonly grouped according to the type and volume of goods they produce, the number of people they employ, and their total annual sales. These criteria tend to be positively correlated. For instance, most aircraft and automobile manufacturers would cluster at one end of the spectrum, whereas most of the companies that design and build production machinery for those large firms would be found at, or near, the opposite end of the scale. Accordingly, we would expect the firms supplying components and machinery to generally have lower sales and fewer employees than their customers, and that is indeed the pattern.1
Another contrast between small manufacturing enterprises (SMEs) and larger firms is the scale and scope of resources they have to address current problems and pursue new opportunities. Small firms vest most of their intellectual capital and technical talent in the management of day-to-day operations, simply keeping the company alive. They have few resources left to develop skills with newer technologies or to thoroughly consider new market opportunities (National Research Council, 1993).
How, then, can we expect small firms to understand and anticipate the performance expected of them when the systems in which they participate—the markets, products, partners, technologies, and processes—are in such flux and so dispersed? How can SMEs affordably access the information and knowledge they need to succeed in the global economy? How will they assimilate that new information and make it an integral part of their organizational knowledge? What tools and mechanisms can we provide to facilitate the learning needed to adopt and implement new methods, standards, technologies, and techniques? And how can we efficiently share critical information and knowledge without compromising the intellectual property rights of the organizations that have created the information?
At a basic level, there are two kinds of processes for sharing information and knowledge: synchronous processes, in which people are the agents for the instantaneous exchange of knowledge and information, and asynchronous processes, in which an intermediary technology carries an encoded representation of the information and knowledge for retrieval at some time in the future. Common synchronous sharing mechanisms include face-to-face meetings, telephone conversations, and videoconferences. Text-based materials, databases, audio and video recordings, and models of various kinds are some of the ways we share information asynchronously. Obviously, computers and electronic means can be used to facilitate both synchronous and asynchronous exchanges. They can also be used, to some extent, to increase the number of people involved in synchronous exchanges.
If we examine these two information-sharing categories closely, it is evident that the critical difference between them is the immediacy of interaction and, therefore, the types of feedback and dialog each accommodates. Dialogs between people, as well as dialogs with other kinds of information agents (such as Internet search engines), are inherently ambiguous. The key difference is that human-to-human communication relies on various resources to detect and repair communication failures, and these mechanisms are generally lacking when people represent only one side of the interaction pair (Suchman, 1987).
Synchronous processes support high interactivity and debate, but do not provide a good format for extended thought and reflection. On the other hand,
asynchronous approaches more readily accommodate protracted investigation and consideration, but do not afford the animated and tightly coupled communications of a synchronous exchange. To share new, complex information, synchronous processes are the preferred method of transfer because they assure that error-free information has been received, and, most importantly, understood in the shortest amount of time. But synchronous processes are also the most expensive, because the ratio between information source and recipients tends to be much lower than with asynchronous methods. Consider, for example, successful telephone conference calls (an example of synchronous communication). If the information to be shared is complex, you can expect more questions (correlated more or less to the number of participants) and you must allocate time for extended explanations. The entire set of questions and responses during the call will be of little value to some because of prior preparation or knowledge of the topic, but all parties must endure the “education” of each member.
For the small manufacturing firm, the objective of sharing information and knowledge is to improve capabilities and performance. In most cases, however, improvements will not happen as a consequence of access, but rather as a result of assimilation—the new information must become an integral part of a firm’s organizational knowledge, and it must be applied effectively. Assimilation of new information or knowledge is accelerated when people are able to actively examine, question, test, and understand its applicability within the context of their own enterprise (Papert, 1980). Therefore, the manner in which we package and provide access to information is critical, because the mechanisms we use must help people learn.
For small manufacturers, the attractiveness of any such mechanism will be strongly influenced by its cost, the resources and expertise needed to use it, and the time it takes to obtain worthwhile results with it. An attractive methodology would most likely be a compromise between a highly coupled, dialog-rich learning environment and a more cost-effective, self-directed, and self-paced method of exploration and discovery. For instance, simulation could be used to model a waste management system that meets the particular needs of a smaller manufacturing firm and conforms to environmental regulatory guidelines.
Flight training, power plant operation, and various other types of simulation models have demonstrated their ability to help people learn by providing a computer environment in which they can experiment without the fear of consequences they would encounter otherwise (De Geus, 1992). But special skills and resources are required to construct simulation models that are applicable to the specific context of a firm. These skills and resources are not found within most small manufacturing firms, and even if a firm has the requisite tools, the cost of developing models can be difficult to justify. One approach might be to have “someone” else build the models and then distribute the models to those with whom we wish to share. In the next section we look at why that may not be effective.
MODELS, MODEL BUILDING, AND LEARNING
Models are typically used in manufacturing to gain a better understanding of the possible interactions and consequences when certain choices are made—that is, direct support for decision making in a computational environment. But many decision makers are not comfortable with models constructed by others; they want to be assured that their ideas and knowledge are represented and indeed reflect the situations for which they are responsible. Furthermore, learning takes place when people discover for themselves the relationships and contradictions between observed behavior and their perceptions of how the world should operate; they benefit from experimentation and testing the scenarios they define. At some level, individuals and organizations must construct their own representations if they are to have confidence in the results obtained (Morecroft, 1992).
But creating comprehensive, illuminating models of complex interacting systems can be expensive. Building and validating models often takes longer than expected and requires special technical skills such as computer programming. And when models are constructed to answer questions of a nonrecurring nature, they are difficult to justify because most of the return on modeling investment must be recouped by the one-time use of the model.
For smaller firms, the adoption of model-based learning and decision making is a kind of Catch-22 situation: They are inexperienced users of sophisticated modeling techniques, so they are unable to justify the type of investments needed to construct and maintain models; yet without those investments they never gain the experience that would make the value of modeling apparent. The modeling approach we discuss presents one response to this pattern. It is a way that small firms can construct models that provide an environment in which users can learn by experimentation. By playing with their models and conducting their own what-if scenarios, they will acquire a better understanding of their particular world and improve the capabilities of their organization by integrating new information and knowledge (Wegner, 1997).
Much of the cost associated with modeling can be attributed to the craftsmanlike approach we take toward model construction and software development in general. In the early 1800s, classical manufacturing was in a similar situation: Products were made by skilled craftsmen, each component fashioned in a cut-to-fit fashion, so product costs were high. Interchangeable component parts ushered in the Industrial Revolution. Because skilled workmen were no longer required for product assembly, the cost of manufactured products was significantly lower, and, accordingly, many more people were able to afford manufactured products.
Model building (and software in general; see Cox, 1996) is still in the craft mode of production. It needs to move from the classical handiwork approach to an industrial-based method of fabrication from interchangeable component parts.
Model “assemblers” would integrate the model component parts in a plug-and-play manner, thus minimizing the time, cost, and expertise required to construct comprehensive models within the context of their organization. We examine a bit more closely how such a component-based modeling approach might work.
The phrase plug and play is often associated with the idea of adding new components to a personal computer and having them interoperate automatically, with no complicated efforts on the part of the user. A somewhat more accessible metaphor to explain plug and play is the common home stereo system.
Although we have to be careful to not push the analogy too far, simulation models of complex systems could be constructed in a manner similar to how we create audio systems for playing and recording music. Industry guidelines dictate standard physical connections and electrical characteristics of stereo components, enabling users to create a wide range of audio systems. System configurations can range from the simple—adequate for listening to broadcast radio, to the complex—able to present rich sound that is difficult to distinguish from a live performance. We can easily add new components to a system, substituting higher-fidelity components where we believe that we gain the most benefit, or replacing multipurpose subsystems (e.g., a combined preamplifier, amplifier, and tuner) with individual components that provide the same functions with greater control and precision (e.g., the ability to adjust frequency curves).
The basic result of each configuration is the same, providing the ability to hear broadcast or recorded music. The difference is in the precision, quality, and fidelity of reproduction, the results of the particular component parts we used. Interoperability of the components is based on well-defined protocols for communication.
Similar standards and protocols are being developed that will allow simulation models to interoperate in much the same way. Recent advances in data communication software, networks, computers, and programming language technologies (National Research Council, 1994) provide an opportunity to develop an open-systems architecture for integrating simulation model component parts. The sections that follow introduce an architecture that could provide a vendor- and language-neutral foundation upon which model builders could construct comprehensive systems models using component models accessible on the Internet.
MODELS AS COMMUNICATING OBJECTS
First of all, where would model assemblers obtain the component parts? We believe that a palette of models, from which enterprise models would be constructed, might come from public institutions such as universities and national labs, or commercial developers and industry-sector groups (e.g., a machine tool
builders association). In some circumstances, the model components would be gathered from various sources and assembled on a single computer system. The scope of the components could range from a comprehensive model that reflects most operational aspects of a piece of production equipment to a mathematical smoothing algorithm that would be one component of a customized forecasting system. In other cases, the proprietary nature of the information would suggest that some component parts would be licensed or used only with permission. For instance, a manufacturing company considering the purchase of new production equipment could examine the consequences of using that equipment in its plant by linking machine tool models provided by various vendors with the company’s current production system model (assembled from component parts obtained in the market). The company could then quickly determine the overall impact on performance (a systems perspective) as well as consider interactions between the new equipment under consideration and machinery it already owns.
Our work on simulation model component parts rests on three fundamental concepts: (1) models are objects, (2) models communicate with one another in client-server relationships by passing messages, and (3) each model is represented by an agent that explains the capabilities of the model and assists with integration of that model. A few words about objects, client-server relationships, and agents will help clarify this approach to component model integration.
Three basic principles define object-oriented programming: encapsulation, the way objects hide implementation and associated data but advertise functionality; message passing, a strict protocol by which objects communicate and request performance of advertised functionality by other objects; and classes and inheritance, a means of organizing the kinds of objects that are created to maximize code reuse and minimize maintenance efforts (Goldberg and Robson, 1983). A significant benefit of object-oriented programming is the reduction in cognitive distance between the world that one wants to represent in the computer and the mechanisms that are available to accomplish that representation. Object-oriented programming does this by preserving the decomposition of the system in the computer code that is created.
For instance, for creating a model of a manufacturing plant, one might want to represent the machines, routings, work in process, the tools and fixtures, the customer orders, and the workers. Using an object-oriented approach, we identify the “things” or entities in the system and the relationships among the various entities—what they do to accomplish the objectives of the system. Our application entities, or objects, might be the customer orders, workers, routings, machines, tools, and tooling fixtures, and work in process. Each of these objects would be represented by a coherent chunk of code that contained both the functionality for that object and the state of the object. The functionality would be
that set of activities that the object would perform if asked, and the state of the object would be the value of all variables describing that particular object. For example, the customer order object would be able to answer questions about its internal state such as “what kind of product are you?” or “what is your due date?” The machine objects might respond to messages such as “begin busy state with this order.” The result of that message would be that the machine object receiving the message would change the value of the internal-state variable representing its busy or idle status.
Unlike procedural approaches to program development, in object-oriented programming the state and implementation of the functionality are hidden. The only way for one piece of code (i.e., object) to change the state of another object is by sending a message requesting that change. The focus is on the objects in the system and the activities they must accomplish. Object-oriented programming languages provide constructs that allow programmers to maintain the relationships between chunks of code and things in the world to be represented. A major milestone of the research is developing the ability to encapsulate individual models so that they might function as objects and exchange messages to accomplish modeling tasks without the user of the model needing to address details of integration and implementation.
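The encapsulation and message-passing ideas above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the class and method names are ours, not taken from any particular modeling system): each entity hides its state, and the only way to change a machine's busy status is to send it the "begin busy state with this order" message.

```python
class Order:
    """A customer order that can answer questions about its own state."""

    def __init__(self, product, due_date):
        self._product = product      # hidden internal state
        self._due_date = due_date

    # Advertised functionality: "what kind of product are you?"
    def product(self):
        return self._product

    # Advertised functionality: "what is your due date?"
    def due_date(self):
        return self._due_date


class Machine:
    """A machine whose busy/idle status changes only via messages."""

    def __init__(self, name):
        self._name = name
        self._busy = False           # internal-state variable
        self._current_order = None

    def begin_busy(self, order):
        # Respond to the message "begin busy state with this order".
        self._busy = True
        self._current_order = order

    def finish(self):
        self._busy = False
        self._current_order = None

    def is_busy(self):
        return self._busy


order = Order(product="gearbox", due_date="2025-06-01")
mill = Machine("mill-01")
mill.begin_busy(order)        # state changes only through a message
print(mill.is_busy())         # True
```

Note that no code outside `Machine` touches `_busy` directly; the decomposition of the plant into orders and machines is preserved in the decomposition of the code.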
The most widespread example of client-server relationships is the World Wide Web. Browser software packages, such as Netscape Navigator and Microsoft Internet Explorer, are the clients, and the applications providing information and data are the servers. We say that the requests come from clients and that the server responds to those requests. In more advanced applications the client or server attribution is likely to be dynamic, based on the context of the communicating program processes: Sometimes a program process will be a server, but in other contexts it may also be a client of another program process.
For instance, it is easy to envision a situation in which our work-in-progress (WIP) order objects, machine objects, and material handling objects could be both clients and servers. An order (client) requests that a machine object (server) perform some transformation; the machine object (client) in turn requests that the material handling system (server) transport the WIP order from its present location. In all cases, servers are not concerned with the source of the request (in object terms, the request is a message) except to know where the results of the request must be returned. Accordingly, the client is not concerned about the manner in which its request is accomplished by the server (the server encapsulates, or hides the manner in which it computationally achieves its activities). This kind of relationship among program processes provides great flexibility for implementation. Because clients have no concern about internal changes to implementation, revisions and improvements to the server side of the relationship
can proceed independently. The server is only responsible for continuing to respond to its previously advertised capabilities (services) in the agreed-upon manner (the protocols for exchange).
This means that we can substitute modeling implementations, even going so far as to move the model to a new platform for higher-speed computation, and the users, the remote clients of that model, will not have to make changes to the manner in which the models interact. Improvements and maintenance may proceed independently of users of the model services.
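The dual client/server roles described above can be made concrete with a short sketch (all interfaces here are hypothetical, chosen only to illustrate the pattern): the machine acts as a server for the order while simultaneously acting as a client of the material handling system, and neither side sees the other's implementation.

```python
class MaterialHandler:
    """Server: how a move is accomplished is hidden from its clients."""

    def transport(self, item, destination):
        return f"{item} moved to {destination}"


class Machine:
    """Server toward orders, client toward the material handler."""

    def __init__(self, name, handler):
        self._name = name
        self._handler = handler   # the machine is a client of the handler

    def process(self, wip_order):
        # First request transport (client role), then do the work.
        move = self._handler.transport(wip_order, self._name)
        return f"{move}; processed at {self._name}"


class WipOrder:
    """Client: sees only the machine's advertised `process` service."""

    def __init__(self, order_id):
        self.order_id = order_id

    def request_processing(self, machine):
        return machine.process(self.order_id)


handler = MaterialHandler()
lathe = Machine("lathe-02", handler)
order = WipOrder("WIP-17")
print(order.request_processing(lathe))
# WIP-17 moved to lathe-02; processed at lathe-02
```

Because `WipOrder` depends only on the `process` interface, the machine's internals (or the material handler behind it) can be reimplemented or moved to another platform without any change on the client side, which is exactly the flexibility the text describes.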
The purpose of model agents is to facilitate the assembly of more complex models. Model agents know what their model object can accomplish, what data they need to perform those actions, and what information the model will provide as it executes. For example, to construct a network model,2 the model agents representing all of the necessary model objects are downloaded. These agents could be commercial products developed and maintained by a for-profit firm, or they could be developed by national labs or university research programs. Obviously, a certain amount of infrastructure would have to be developed to support the creation and distribution of the model objects. If we were to construct a network model of a workcell, some of the object models needed would be models for each of the machines in the cell, a control object to manage the activities of the cell, an order object to schedule release of parts to the cell, and a quality assurance model to reflect cell performance. The agents will configure the interface of their respective model objects and provide the information necessary to configure the network model. The agents also help the user select the appropriate model objects from those available on the network by providing semantic information about the model objects they manage. The builder of the individual models creates each model agent using tools and templates also developed in this project.
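A toy sketch of assembling a workcell model from the component objects listed above may help fix the idea. Everything here is hypothetical (names, dispatch policy), and the quality assurance component is omitted for brevity; the point is only that the cell model is composed from separately authored pieces.

```python
class MachineModel:
    """Component model of one machine in the cell."""

    def __init__(self, name):
        self.name = name
        self.busy = False


class CellController:
    """Control object that manages the activities of the cell."""

    def __init__(self, machines):
        self.machines = machines

    def dispatch(self, part):
        # Naive policy for the sketch: first idle machine takes the part.
        for m in self.machines:
            if not m.busy:
                m.busy = True
                return m.name
        return None   # no capacity available


class OrderReleaser:
    """Order object that schedules release of parts to the cell."""

    def __init__(self, parts):
        self.parts = list(parts)

    def next_part(self):
        return self.parts.pop(0) if self.parts else None


# Assemble the workcell model from its component models.
cell = CellController([MachineModel("mill"), MachineModel("drill")])
releaser = OrderReleaser(["part-A", "part-B"])

assignments = {}
while (part := releaser.next_part()) is not None:
    assignments[part] = cell.dispatch(part)
print(assignments)   # {'part-A': 'mill', 'part-B': 'drill'}
```

In the architecture described in the text, each of these components could come from a different source, with its agent supplying the interface details this sketch hard-codes.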
The agent is created and maintained separately from the model. For instance, there could be several implementations of the same modeling function and all could be represented by the same agent. The agent would help the user select the most appropriate implementation based on execution speed, size of the task to model, and ancillary capabilities (e.g., graphical or animation output).
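The agent's role in selecting among implementations can be sketched as a simple filter-and-rank step. The metadata fields and selection criteria below (task size, runtime, animation output) are illustrative assumptions, not a specification of the actual agent design.

```python
class ModelAgent:
    """Advertises a modeling capability and picks an implementation."""

    def __init__(self, capability, implementations):
        self.capability = capability              # e.g., a machine model
        self.implementations = implementations    # metadata per variant

    def select(self, max_task_size, need_animation=False):
        # Keep only implementations that satisfy the user's constraints...
        candidates = [
            impl for impl in self.implementations
            if impl["max_task_size"] >= max_task_size
            and (impl["animation"] or not need_animation)
        ]
        # ...then prefer the fastest one that remains.
        return min(candidates, key=lambda impl: impl["runtime_s"])["name"]


agent = ModelAgent(
    capability="milling-machine model",
    implementations=[
        {"name": "fast-analytic", "runtime_s": 2,
         "max_task_size": 100, "animation": False},
        {"name": "detailed-3d", "runtime_s": 60,
         "max_task_size": 10000, "animation": True},
    ],
)
print(agent.select(max_task_size=50))                       # fast-analytic
print(agent.select(max_task_size=50, need_animation=True))  # detailed-3d
```

The same agent thus fronts several implementations of one modeling function, matching the user's task to the cheapest implementation that can handle it.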
As the specificity, functionality, and intellectual property content of models from equipment suppliers and other commercial sources grow, retaining control and restricting access will become increasingly important. The information and intellectual property captured in models would be of significant interest and value to competitors. Agents can provide the intellectual property controls and accounting mechanisms needed to give customers who are considering adoption of a vendor's equipment access to that vendor's high-fidelity models. Vendors could charge users
for access to their models and rebate those costs if the equipment modeled was subsequently purchased from them.
Our basis for integrating simulation model component parts then is to create the framework and methodology in which individual models can become message-passing objects that communicate with one another as both clients and servers. Each model has associated with it an agent that describes the capabilities of the model, its constraints, and data needs as well as the data it produces and any coordination requirements. The agents for the models also generate the interface programming logic needed to participate in the distributed modeling activity.
For manufacturing firms, the consequences of competition and technological innovation are reflected in the profound pressure to respond quickly to the rapid ebb and flow of the market, the expanded scope and functionality of products, and the distributed nature of the entire product realization process. Decisions become less intuitive as the complexity of the systems increases, the time to make good decisions is shortened, and, for small firms, the cost of incorrect decisions can be life-threatening. New information must be assimilated, and the organization's knowledge, skills, and expertise must quickly adjust to use that information to advantage.
Models have always been used to reduce the time, cost, and risks associated with decision making, but they can also be an effective mechanism for formally transferring new information. In this paper we have examined object-oriented component parts as a mechanism for constructing comprehensive simulation models that reflect the context of the individual manufacturer. We believe that our approach has particular relevance for small firms that must quickly acquire appropriate information and integrate that information with the unique knowledge, talents, and skills they currently possess while minimizing investment in additional resources.
Cox, B. 1996. Superdistribution: Objects as Property on the Electronic Frontier. Reading, Mass.: Addison-Wesley.
De Geus, A.P. 1992. Modelling to predict or to learn? European Journal of Operational Research 59(1):1–5.
Goldberg, A., and D. Robson. 1983. Smalltalk-80: The Language and Its Implementation. Reading, Mass.: Addison-Wesley.
Heim, J.A. 1994. Integrating distributed models: the architecture of ENVISION. International Journal of Computer Integrated Manufacturing 7(1):47–60.
Morecroft, J.D.W. 1992. Executive knowledge, models and learning. European Journal of Operational Research 59:9–27.
National Research Council. 1993. Learning to Change: Opportunities to Improve the Performance of Smaller Manufacturers. Washington, D.C.: National Academy Press.
National Research Council. 1994. The open data network: achieving the vision of an integrated national information infrastructure. Pp. 43–111 in Realizing the Information Future: The Internet and Beyond. Washington, D.C.: National Academy Press.
Papert, S. 1980. Mindstorms. New York: Basic Books.
Suchman, L.A. 1987. Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge, U.K.: Cambridge University Press.
Wegner, P. 1997. Why interaction is more powerful than algorithms. Communications of the ACM 40(5):80–91.