address how to choose semantic frameworks, how to ensure model fidelity (Does the model behavior match the system being modeled?), how to recognize and manage emergent behavior, and how to specify and guarantee behavior constraints.” Additional information about this project is available online at <http://ptolemy.eecs.berkeley.edu/~eal/towers/index.html>.
37. The Computer Science and Telecommunications Board initiated a study in early 2000 that will examine a range of possible interactions between computer science and the biological sciences, such as the use of biologically inspired models in the design of IT systems. Additional information is available on the CSTB home page, <www.cstb.org>.
38. By one estimate, based on the ratio of machine lines of code to source lines of code, the productivity of programmers has increased by a factor of ten every 20 years (or 12 percent a year) since the mid-1960s (see Bernstein, 1997).
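The two figures in the estimate can be checked against each other with a little arithmetic; the snippet below (an illustration, not part of the cited estimate) confirms that a tenfold gain every 20 years corresponds to roughly 12 percent compound growth per year:

```python
# A tenfold productivity gain every 20 years implies a compound
# annual growth rate of 10**(1/20) - 1, roughly 12 percent.
annual_rate = 10 ** (1 / 20) - 1
print(f"{annual_rate:.1%}")  # about 12.2% per year

# Conversely, compounding 12 percent a year for 20 years gives
# close to a factor of ten.
print(round(1.12 ** 20, 1))  # about 9.6
```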
39. Problem-ridden federal systems have been associated with personnel who may have less, or less current, training than their counterparts in leading private-sector environments. The association lends credence to the notion that the effectiveness of a process can vary with the people using it. See CSTB (1995a).
40. Reuse was one of the foundations of the industrial revolution. Standard, interchangeable parts used in industrial production can be equated to IT components. The industrial analogue of IT frameworks appeared later but has recently become common: today's automobiles, for example, usually are designed around common platforms that permit different car models to be designed without major new investment in the underbody and drive train.
41. The ability to define, manipulate, and test software interfaces is valuable to any software project. If interfaces could be designed in such a way that software modules could first be tested separately and then assembled with the assurance of correct operation, then large-scale system engineering would become simpler. Much of the theory and engineering practice and many of the tools developed as part of IT research can be applied to these big systems.
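The idea of testing modules separately against an interface and then assembling them can be sketched briefly; the `Storage` interface and the module names below are hypothetical, chosen only to illustrate the principle:

```python
from typing import Optional, Protocol


class Storage(Protocol):
    """An interface contract: any conforming module must provide these."""

    def put(self, key: str, value: str) -> None: ...
    def get(self, key: str) -> Optional[str]: ...


class InMemoryStorage:
    """One module implementing the interface; testable in isolation."""

    def __init__(self) -> None:
        self._data: dict = {}

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)


def cache_lookup(store: Storage, key: str) -> str:
    """A second module written against the interface, not an implementation."""
    value = store.get(key)
    return value if value is not None else "miss"


# Each module can be tested separately, then assembled with some
# assurance that the pieces operate correctly together.
store = InMemoryStorage()
store.put("a", "1")
print(cache_lookup(store, "a"))  # 1
print(cache_lookup(store, "b"))  # miss
```

Because `cache_lookup` depends only on the interface, any other conforming implementation (a database-backed store, say) can be substituted without retesting the lookup logic itself.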
42. An “artifact” in the terminology of experimental computer science and engineering refers to an instance or implementation of one or more computational phenomena, such as hardware, software, or a combination of the two. Artifacts provide researchers with testbeds for direct measurement and experimentation; proving new concepts (i.e., that a particular assembly of components can perform a particular set of functions or meet a particular set of requirements); and demonstrating the existence and feasibility of certain phenomena. See CSTB (1994).
43. For example, when the Defense Department's ARPANET was first built in the 1970s, it used the Network Control Protocol (NCP), which was designed in parallel with the network. Over time, it became apparent that networks built with quite different technologies would need to be interconnected, and users gained experience with the network and with NCP. These two developments provoked research that eventually led to the TCP/IP protocols, which became the standard means by which computers communicate over any network. As the network grew into a large Internet and applications emerged that demand high bandwidth, congestion became a problem. This, too, has led to research into adaptive control algorithms that computers attached to the network must use to detect and mitigate congestion. Even so, the Internet is far from perfect. Research is under way into methods of guaranteeing quality of service for data transmission that could support, for example, robust delivery of digitized voice and video. Extending the Internet to connect mobile computers over radio links is also an area of active research.
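One well-known family of adaptive congestion control algorithms of the kind described above is additive-increase/multiplicative-decrease (AIMD), the rule at the heart of TCP congestion control. A simplified sketch (the window sizes and loss events here are illustrative, not taken from the source):

```python
def aimd_step(cwnd: float, loss_detected: bool,
              increase: float = 1.0, decrease: float = 0.5) -> float:
    """One round of additive-increase/multiplicative-decrease.

    Each sender probes for available bandwidth by growing its
    congestion window linearly, and backs off multiplicatively
    when packet loss signals congestion in the network.
    """
    if loss_detected:
        return max(1.0, cwnd * decrease)  # cut the window in half
    return cwnd + increase                # cautiously probe for more


# Illustrative trace: grow for five loss-free rounds, then react
# to a single loss event.
cwnd = 1.0
for loss in [False] * 5 + [True]:
    cwnd = aimd_step(cwnd, loss)
print(cwnd)  # 3.0 (window grew to 6.0, then was halved)
```

Because every sender backs off sharply on loss but probes only gradually, competing flows converge toward a fair share of the available capacity without any central coordination.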
44. Generally speaking, industry-university IT research collaboration has been constrained by intellectual property protection arrangements, generating enough expressions of concern that CSTB is exploring how to organize a project on that topic.