
FUTURE COMPUTING ENVIRONMENTS FOR MICROSIMULATION MODELING

… the new co-op/company formed to produce memory chips in the U.S., all expect to be cranking out the denser memories within the next year or so [1990]."37

• Experimental 16-megabit chips have already been manufactured, although commercial production is not yet assured.

Advances in Secondary Memory

Unfortunately, it is unlikely that there will ever be a sufficient supply of primary memory. Increases in the memory address space available to model designers have historically invited increased model complexity, and therefore additional demand for memory. The characteristics and costs of secondary memory are thus of interest.

Secondary memory is generally measured in terms of capacity in megabytes (MB) and speed of access and transfer. Most secondary memory is either rotating (disk, drum) or sequential (tape), and most can be serially reused (i.e., written and read many times). Some newer forms of secondary memory can be written only once and then read many times, making them suitable for archival purposes. Secondary memory developments reported recently include the following:

• Rotating magnetic disk memory prices continue to fall. Disks with 100-MB capacity are now available for about $1,000, while larger disks such as 600-MB models are available for about $3,500 (roughly $10 per megabyte for the smaller drives versus about $5.80 per megabyte for the larger).
• Since the commercial introduction in late 1988 of magneto-optical disk technology by NeXT38 using Canon drives, Sony and Ricoh have announced similar drives. Drives with optical disk platters that have initial capacities of 300 MB per side are now priced at $4,000–$6,000.
• IBM has announced a fixed disk drive with very fast access time and a capacity of 300 MB for its new workstation line.

Advances in Computer Systems Architecture

Since the creation in 1945 of ENIAC, the first programmable electronic digital computer, almost all computing systems have been designed using a von Neumann architecture. This architecture is often referred to as SISD (Single Instruction, Single Data stream) because it is characterized by one processor that executes a single stream of instructions sequentially and operates on a single stream of data.
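The sequential SISD pattern can be made concrete with a small sketch. The C fragment below is a hypothetical illustration written for this discussion: simulate_unit stands in for a real per-record model computation, and NUNITS for the size of a micropopulation file; nothing here comes from an actual microsimulation system.

    /* SISD baseline: one processor executes one instruction stream
       over one data stream, visiting each micropopulation unit in turn. */
    #include <stdio.h>

    #define NUNITS 100000            /* hypothetical number of micro units */

    /* Hypothetical stand-in for the per-unit model computation. */
    static double simulate_unit(int record_id) {
        return (double)(record_id % 7);
    }

    int main(void) {
        double total = 0.0;
        for (int i = 0; i < NUNITS; i++)   /* strictly sequential */
            total += simulate_unit(i);
        printf("aggregate result: %.1f\n", total);
        return 0;
    }

The multiprocessor decomposition discussed below parallelizes exactly this loop.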

Alternative architectures have long held promise for increasing processing throughput at a fraction of the cost that would be required by replicating the SISD architecture. Some of this promise is being realized; for example, vector processors can process multiple streams of data much faster than a single stream could be processed multiple times.39 More recently, architectures containing multiple processors have become more common; such architectures generally support execution of truly simultaneous tasks in one computing system. Operating systems such as UNIX mesh nicely with such architectures, since programs written under UNIX can be structured to spawn subtasks to accomplish their objectives. UNIX-based systems such as those produced by Sequent and Encore provide multiple processors that can increase system throughput almost linearly with the number of processors installed.

Static microanalytic simulation exercises, by virtue of there being no interaction between micropopulation units during a forward projection in time, are well suited to exploit a multiprocessor architecture, assuming that the overhead of disaggregating the task is not large and that the system software allows the disaggregation to be done efficiently. Processing of the micropopulation file can be decomposed into multiple independent threads, each of which processes a subset of the original file. The cost of the decomposition is measured in terms of the creation of the independent processes and the aggregation of the results of the independent threads; a minimal sketch of such a decomposition appears below, after the discussion of computer networks.

Computer networks (i.e., sets of computers linked by high-speed data pathways) are now beginning to be organized to work cooperatively on specific tasks. Such cooperation may take several forms, such as specialization of function, in which two or more dissimilar computers are linked so that each performs the part of the processing task for which it has a relative advantage.40 Other forms of cooperation include separating resource usage among computers,41 as well as enlisting multiple computers to process …
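As a minimal sketch of the decomposition just described, continuing the hypothetical simulate_unit example from the SISD sketch above (the worker count, the pipes, and the stand-in computation are all assumptions, not features of any actual model), a driver can fork one UNIX process per processor, give each a disjoint subset of the units, and aggregate the partial results in the parent:

    /* Multiprocessor decomposition: NWORKERS independent UNIX processes,
       each simulating a disjoint subset of the micropopulation, with the
       parent paying the two overhead costs noted in the text: process
       creation and aggregation of partial results. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define NUNITS   100000          /* hypothetical number of micro units */
    #define NWORKERS 4               /* one worker per installed processor */

    /* Hypothetical stand-in for the per-unit model computation. */
    static double simulate_unit(int record_id) {
        return (double)(record_id % 7);
    }

    int main(void) {
        int fd[NWORKERS][2];         /* one pipe per worker for its result */

        for (int w = 0; w < NWORKERS; w++) {
            if (pipe(fd[w]) == -1) { perror("pipe"); exit(1); }
            pid_t pid = fork();      /* cost 1: creating the process */
            if (pid == -1) { perror("fork"); exit(1); }
            if (pid == 0) {          /* child: units w, w+NWORKERS, ... */
                close(fd[w][0]);
                double subtotal = 0.0;
                for (int i = w; i < NUNITS; i += NWORKERS)
                    subtotal += simulate_unit(i);
                if (write(fd[w][1], &subtotal, sizeof subtotal) == -1)
                    _exit(1);
                _exit(0);
            }
            close(fd[w][1]);         /* parent keeps only the read end */
        }

        double total = 0.0;          /* cost 2: aggregating the results */
        for (int w = 0; w < NWORKERS; w++) {
            double subtotal = 0.0;
            if (read(fd[w][0], &subtotal, sizeof subtotal) != (ssize_t)sizeof subtotal)
                perror("read");
            close(fd[w][0]);
            total += subtotal;
        }
        while (wait(NULL) > 0)       /* reap all workers */
            ;
        printf("aggregate result: %.1f\n", total);
        return 0;
    }

Because the units do not interact, the workers need no communication among themselves; throughput can therefore scale almost linearly with NWORKERS, as the text suggests, so long as the fork and aggregation overhead stays small relative to the simulation itself.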
Notes

37 "Microbytes," Byte, Vol. 14, No. 9, September 1989, pp. 17–18.

38 NeXT, Inc., founded by Steven Jobs in 1985, announced its first computer in October 1988. The system is based on a Motorola 68030 main processor chip and specialized input/output chips, the Mach version of the UNIX operating system, and erasable optical disk technology.

39 Vector processors process multiple streams by using pipelining hardware, which is analogous to a manufacturing assembly line; the appearance of processing multiple data streams is actually produced by efficient overlapping of parts of instructions. The more than proportional gain in throughput is real nonetheless. Other computing systems, mostly ones with multiple processors and multiple local memories, can truly process multiple data streams with a high degree of parallelism. The Connection Machine, manufactured by Thinking Machines Corporation, is one such architecture.

40 For example, Apollo Computer's NCS (Network Computing System) provides a mechanism whereby subroutine calls can be made between subprogram modules residing on different machines connected by a data communications link. Arguments, other linking information, and results are passed between machines over the network. Although the entire process incurs some overhead due to the intersystem communication, the gain from tailoring the resource to the subtask may yield a substantial improvement in overall productivity.

41 An example is provided by Sun Microsystems' NFS (Network File System), which allows a computer running UNIX to graft all or part of another UNIX computer's file system onto its own. Using NFS, a computer with little secondary storage can rely on the more extensive space of another system to store its files in a manner that is essentially transparent. Depending on the pattern of data flow between the two computers, such an arrangement may be cost-effective for both systems.
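To make the assembly-line analogy in note 39 concrete, consider the standard pipeline timing model (a textbook approximation, not a formula from the original text). With s pipeline stages taking t time units each, n operations executed one at a time need n·s·t time units, whereas a full pipeline finishes them in (s + n − 1)·t: the first operation takes s·t, and each subsequent operation completes one stage time later. The resulting speedup, n·s·t / ((s + n − 1)·t), approaches s as n grows large, so a five-stage vector pipeline can approach five times scalar throughput even though only one result completes per stage time.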
