we encouraged, as much as we could, some kind of theoretical background for this proposal. (Aspray et al., 1996)
The NSF ended up funding the bulk of theoretical work in the field (by 1980 it had supported nearly 400 projects in computational theory), much of it with great success. Although funding for theoretical computer science has declined as a percentage of the NSF budget for computing research (it constituted 7 percent of the budget in 1996, down from 20 percent in 1973), it has grown slightly in real dollars.1 Mission-oriented agencies, such as the National Aeronautics and Space Administration or the Defense Advanced Research Projects Agency, tend not to fund theoretical work directly because of their emphasis on advancing computing technology, but some advances in theory were made as part of their larger research agendas.
State machines are ubiquitous models for describing and implementing various aspects of computing. The body of theory and implementation techniques that has grown up around state machines fosters the rapid and accurate construction and analysis of applications, including compilers, text-search engines, operating systems, communication protocols, and graphical user interfaces.
The idea of a state machine is simple. A system (or subsystem) is characterized by a set of states (or conditions) that it may assume. The system receives a series of inputs that may cause the machine to produce an output or enter a different state, depending on its current state. For example, a simplified state diagram of a telephone activity might identify states such as idle, dial tone, dialing, ringing, and talking, as well as events that cause a shift from one state to another, such as lifting the handset, touching a digit, answering, or hanging up (see Figure 8.1). A finite state machine, such as a telephone, can be in only one of a limited number of states. More powerful state machine models admit a larger, theoretically infinite, number of states.
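The telephone example above can be sketched in a few lines of code. The following is an illustrative sketch only: the particular state and event names, and the transition table itself, are assumptions drawn loosely from the description in the text (the authoritative diagram is Figure 8.1), not a faithful transcription of it.

```python
# A finite state machine is just a table mapping (current state, input event)
# to the next state. States and events here are illustrative assumptions.
TRANSITIONS = {
    ("idle", "lift handset"): "dial tone",
    ("dial tone", "touch digit"): "dialing",
    ("dialing", "touch digit"): "dialing",   # additional digits
    ("dialing", "ringback"): "ringing",
    ("ringing", "answer"): "talking",
    ("dial tone", "hang up"): "idle",
    ("dialing", "hang up"): "idle",
    ("ringing", "hang up"): "idle",
    ("talking", "hang up"): "idle",
}

def step(state, event):
    """Return the next state; events that do not apply leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Drive the machine through one complete call.
state = "idle"
for event in ["lift handset", "touch digit", "ringback", "answer", "hang up"]:
    state = step(state, event)
print(state)  # back to "idle" after hanging up
```

Because the machine is in exactly one of a fixed, small set of states at any moment, its entire behavior is captured by this finite lookup table, which is what makes finite state machines so amenable to analysis.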
The notion of the state machine as a model of all computing was described in Alan Turing's celebrated paper on computability in 1936, before any general-purpose computers had been built. Turing, of Cambridge University, proposed a model that comprised an infinitely long tape and a device that could read from or write to that tape (Turing, 1936). He demonstrated that such a machine could serve as a general-purpose computer. In both academia and industry, related models were proposed and studied during the following two decades, resulting in a definitive 1959 paper by Michael Rabin and Dana Scott of IBM Corporation (Rabin