FUTURE COMPUTING ENVIRONMENTS FOR MICROSIMULATION MODELING 190

the new co-op/company formed to produce memory chips in the U.S., all expect to be cranking out the denser memories within the next year or so."37

• Experimental 16-megabit chips have already been manufactured, although commercial production is not yet assured.

Advances in Secondary Memory

Unfortunately, it is unlikely that there will ever be a sufficient amount of primary memory. Increases in the supply of memory address space available to model designers have historically stimulated interest in increased model complexity and therefore additional demand for memory. The characteristics and costs of secondary memory are thus of interest. Secondary memory is generally measured in terms of capacity in megabytes (MB) and speed of access and transfer. Most secondary memory is either rotating (disk, drum) or sequential (tape), and most can be serially reused (i.e., written and read many times). Some newer forms of secondary memory can be written only once and then read many times, making them suitable for archival purposes. Recently reported secondary memory developments include the following:

• Rotating magnetic disk memory prices continue to fall. Disks with 100-MB capacity are now available for about $1,000, while larger disks such as 600-MB models sell for about $3,500.

• Since the commercial introduction in late 1988 of magneto-optical disk technology by NeXT38 using Canon drives, Sony and Ricoh have announced similar drives. Drives with optical disk platters that have initial capacities of 300 MB per side are now priced at $4,000–$6,000.

• IBM has announced a fixed disk drive with very fast access time and a capacity of 300 MB for its new workstation line.
Advances in Computer Systems Architecture

Since the creation in 1945 of ENIAC, the first programmable electronic digital computer, almost all computing systems have been designed using a von Neumann architecture. This architecture is often referred to as SISD (Single Instruction, Single Data stream) because it is characterized by one processor that executes a single stream of instructions sequentially and operates on a single stream of data.

37 "Microbytes," Byte, Vol. 14, No. 9, September 1989, pp. 17–18.

38 NeXT, Inc., founded by Steven Jobs in 1985, announced the development of its first computer in October 1988. The system is based on a Motorola 68030 main processor chip and specialized input/output chips, the Mach version of the UNIX operating system, and erasable optical disk technology.
Alternative architectures have long held promise for increasing processing throughput at a fraction of the cost that would be required by replicating the SISD architecture. Some of this promise is being realized; for example, vector processors can process multiple streams of data far faster than a single stream could be processed multiple times.39 More recently, architectures containing multiple processors have become more common; such architectures generally support execution of truly simultaneous tasks within one computing system. Operating systems such as UNIX mesh nicely with such architectures, since programs written for UNIX can be structured to spawn subtasks to accomplish their objectives. UNIX-based systems such as those produced by Sequent and Encore provide multiple processors that can increase system throughput almost linearly with the number of processors installed. Static microanalytic simulation exercises, because there is no interaction between micropopulation units during a forward projection in time, are well suited to exploit a multiprocessor architecture, provided that the overhead of disaggregating the task is not large and the system software allows the disaggregation to be done efficiently. Processing of the micropopulation file can be decomposed into multiple independent threads, each of which processes a subset of the original file. The cost of the decomposition consists of creating the independent processes and aggregating the results of each independent thread. Computer networks (i.e., sets of computers linked by high-speed data pathways) are now beginning to be organized to work cooperatively to solve specific tasks.
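The decomposition described above can be sketched in a few lines of modern Python. This is a minimal illustration, not any particular microsimulation system: the record fields, the transition rule (everyone ages one year and income grows 3 percent), and the chunking scheme are all assumptions chosen only to show the split-process-aggregate pattern.

```python
# Sketch: decompose a static microsimulation pass across worker threads,
# then aggregate the partial results. Records and the transition rule are
# illustrative assumptions, not a real model.
from concurrent.futures import ThreadPoolExecutor

def project_record(person):
    # Forward-project one micro-unit one period; no interaction between units,
    # which is what makes the workload embarrassingly parallel.
    return {"age": person["age"] + 1, "income": person["income"] * 1.03}

def project_chunk(chunk):
    # Each independent thread processes its own subset of the file and
    # returns a partial aggregate for the parent to combine.
    projected = [project_record(p) for p in chunk]
    return sum(p["income"] for p in projected), len(projected)

# A synthetic micropopulation file of 1,000 records.
population = [{"age": 30 + i % 50, "income": 20000.0 + 100 * i} for i in range(1000)]

n_workers = 4
chunks = [population[i::n_workers] for i in range(n_workers)]  # disaggregation

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    partials = list(pool.map(project_chunk, chunks))

# Aggregation step: the "cost" side of the decomposition.
total_income = sum(t for t, _ in partials)
total_count = sum(c for _, c in partials)
```

The overhead the text mentions is visible here as the chunking step before the pool runs and the summation over `partials` afterward; the per-record work itself carries no cross-thread communication.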
Such cooperation may take several forms, such as specialization of function, in which two or more dissimilar computers are linked together so that each can perform the part of the processing task for which it has a comparative advantage.40 Other forms of cooperation include separating resource usage among computers41 as well as enlisting multiple computers to process

39 Vector processors process multiple streams by using pipelining hardware, which is analogous to a manufacturing assembly line; the appearance of processing multiple data streams is actually produced by efficient overlapping of parts of instructions. The more than proportional gain in throughput is real nonetheless. Other computing systems, mostly ones with multiple processors and multiple local memories, can truly process multiple data streams with a high degree of parallelism. The Connection Machine, manufactured by Thinking Machines, Inc., is one such architecture.

40 For example, Apollo Computer's NCS (Network Computing System) provides a mechanism whereby subroutine calls can be made between subprogram modules residing on different machines connected by a data communications link. Arguments, other linking information, and results are passed between machines over the network. Although the entire process incurs some overhead cost due to the intersystem communication, the gain accruing from the ability to tailor the resource to the subtask may yield a substantial improvement in overall productivity.

41 An example is provided by Sun Microsystems' NFS (Network File System), which allows a computer running UNIX to graft all or part of another UNIX computer's file system onto its own. Using NFS, a computer with little secondary storage can rely on the more extensive space of another system to store its files in a manner that is essentially transparent. Depending on the pattern of the data flow between the two computers, such an arrangement may be cost-effective for both systems.
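The remote-subroutine pattern that footnote 40 attributes to Apollo's NCS can be sketched with Python's standard-library XML-RPC modules. This is only an analogy under stated assumptions: NCS itself long predates XML-RPC, and the `heavy_compute` routine and loopback address here are invented for illustration. The structure, however, is the same: a call made on one machine is marshaled over a network link, executed remotely, and the result returned.

```python
# Sketch of a remote subroutine call in the spirit of NCS, using Python's
# stdlib xmlrpc. The server would normally run on a different machine; here
# it runs in a background thread on the loopback interface for demonstration.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def heavy_compute(x):
    # Hypothetical subtask for which the "server" machine is better suited.
    return x * x

# Port 0 lets the OS pick a free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(heavy_compute)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The caller sees an ordinary function call; arguments and the result are
# marshaled across the connection, which is the overhead footnote 40 notes.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.heavy_compute(12)
```

As the footnote observes, each such call pays a communication cost, so the pattern pays off only when the remote subtask is substantial relative to the marshaling overhead.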