
ADAPTING TO COMPUTER SCIENCE
Saul Gorn
University of Pennsylvania
Introduction
Although I am not an engineer who adapted himself to computer science but a
mathematician who did so, I am familiar enough with the development, concepts, and
activities of this new discipline to venture an opinion of what must be adapted to in it.
"Computer and information science," known as "informatics" on the European
continent, was born as a distinct discipline barely a generation ago. As a fresh young
discipline, it is an effervescent mixture of formal theory, empirical applications, and
pragmatic design. Mathematics was just such an effervescent mixture in western culture
from the Renaissance to the middle of the twentieth century. It was then that the dynamic
effect of high-speed, electronic, general purpose computers accelerated the generalization of
the meaning of the word "computation." This caused the early computer science to recruit
not only mathematicians but also philosophers (especially logicians), linguists,
psychologists, and economists, as well as physicists and a variety of engineers.
Thus we are, perforce, discussing the changes and adaptations of individuals to
disciplines and especially of people in one discipline to another. As we all know, the very
word "discipline" indicates that there is an initial special effort by an individual to force
himself or herself to change: to adapt one's perceptions to a special way of viewing certain
aspects of the world and one's behavior in order to produce special results. For example,
we are familiar with the enormous prosthetic devices that physicists have added to their
natural sensors and perceptors in order to perceive minute particles and to smash atoms in
order to do so (at, we might add, enormous expense and enormous stretching of
computational activity). We are also familiar with the enormously intricate prosthetic
devices mathematicians added to their computational effectors, the general symbol
manipulators called computers.
The disciplines require us, perhaps less dramatically but always more subtly, to
change the way we understand, how we act, how we act in order to understand, what we
should understand in order to act, etc. The adaptation of a discipline over a period of time
is described by its history; this is sometimes slow, in the human scale, over the millennia,
and sometimes so fast for human beings as to be considered revolutionary, as in our two
examples, or, more crucially, in the scientific revolutions whose structure was studied by
Thomas S. Kuhn (1970).
Because of this concern with adaptability of individuals and disciplines, it is useful
to consider a rough taxonomic subdivision of disciplinary attitudes. Just as the individual's
perception and behavior is a result of the flow of signals, one way from receptors to
perceive and the other of commands to effectors to act, so are there two types of
disciplinary signals; there are descriptions of aspects of the world, presented in declarative
sentences, and there are prescriptions of what ought to be done about those aspects, in
imperative sentences. Theories are logically or empirically organized systems of the
former, and programs are organized systems of the latter. Some knowledge-oriented
disciplines are mainly concerned with producing theories in what we might call their "object
language," reserving their programmatic statements to the discussion of their methodology,
in what we might call their "meta-language"; examples are pure mathematics (where the
methodological language is that of logical programs) and the empirical sciences (where the
methodological language is that of experimental programs). Other disciplines are mainly
"action-oriented" and concerned with programs such as hunting, cooking, manufacturing,
and constructing. Still other disciplines, such as law, medicine, and engineering, have as
their purpose applying knowledge to achieve action. These last must be sensitive to both
the changes in knowledge and the changes in activities; they therefore change more than the
knowledge-oriented and the action-oriented disciplines they are bridging; and if one of
these goes through a revolution, the connecting professional discipline may well follow
suit. Changes in a profession caused by changes in the disciplines bridged form the
"professional change syndrome."
A young discipline, like a young person, is an effervescent mixture of all three of
the above-mentioned types of activity. Until the mid-twentieth century, mathematics and
physics were like this. Computer science is like this now.
As time goes on, the action of perceiving for the discipline may disturb what is
being perceived, especially if the perceiving and perceived elements are of the same order of
magnitude; this was the case with particle physics and quantum mechanics, resulting in
the indeterminacy principle. More generally, the knowledge-oriented and action-oriented
aspects of the young discipline will conflict, putting the discipline through a crisis and
possibly a revolution. This has happened to mathematics. Engineering must adapt to both
the change in mathematics and the change in physics, two examples of the professional
change syndrome in engineering.
Since the success of engineering (or of the engineering aspects of mathematics, physics,
and computer science) is measured by how many problems it solves, there is a further
cause of change, what might be called the "solved problem syndrome." Here the result
may be a standard technique that no longer belongs to the profession but is either relegated
to the discipline that presented the action problem (for example, all of applied
mathematics) or becomes a new separate discipline. In the case of engineering, an old
example of the latter is plumbing. In transportation such results have been the train
"engineers," flight "engineers," etc. Somewhat similar is the engineering application of
x-rays and electromagnetics to medicine, resulting in radiology. The latter is still
changing, having developed CAT scans, PET scans, magnetic resonance techniques, and,
by another application of computer techniques, three-dimensional graphic imaging.
To see how engineers must adapt to computer science, we will first see how the
engineering aspect of mathematics was originally computational. Then we will see how
those computational aspects of mathematics developed to involve not only physical
engineering but also psycho-linguistic aspects. This will lead us to enumerate the areas of
computer science, each of which alone has a formal, an empirical, and a design aspect.
Some of the formal aspects will be novel to engineers, adding to the mathematical
background a computer engineer requires beyond the analytic background all engineers
need; but all the design aspects will of course be engineering-oriented. What is more,
many of these design aspects will be not physical but, rather, psycho-linguistic and, hence,
close enough to symbolic and human activities to be most novel to engineers. These will
require the greatest adaptation.
How Mathematics Separated
into Pure, Applied, and Computational
Because mathematics arose from the shepherd's need to count, the agricultural need
for measurement and astronomical information, and the urban commercial need for both,
calculation and geometric construction techniques are, of course, ancient. That summation
of Pythagorean mathematics, the Elements of Euclid, therefore contained those primitive
programs, the constructions with straight-edge and compass, and those sophisticated
verifications of the correctness of those programs, the formal geometric proofs. Euclid's
students could therefore inscribe in a circle regular polygons with 3, 4, 5, 6, 8, 10, 12, 15,
16, 20, etc., sides, thereby computing lengths considerably more than agriculture
required. Meanwhile the Greek, and later the Roman, worlds made mechanical use of
counting boards (abaci) and counters (calculi) to do their counting and sums.
However, it was not until the eighth century A.D. that the western world could
begin to profit from the Hindu programming of numerical computation. It was then that Al
Khowarizmi in the Arabic world made specific the algorithms of algebra; and it was only
several hundred years later that the Arabs picked up the Hindu numeral notation itself. It
was therefore only in the Renaissance that the numerical symbolism and its resulting
algorithms came to be fully developed. But it already had gone beyond simple numerical
calculation into algebraic and trigonometric calculations for astronomy as well as for
commerce. Even so, the Arabic-Hindu numerical data structures and the corresponding
algorithmic programs for addition and multiplication did not come into common use until
the eighteenth century!
Thus, from the Renaissance until the mid-twentieth century, at a leisurely but slowly
accelerating pace, mathematicians kept expanding their symbolic notations via their
algebraic expressions, number theoretic expressions and functional expressions, and
computed expressions that were formal derivatives or exact solutions in terms of the
elementary functions to differential and other functional equations. They were expanding
their language of descriptive (implicit) and prescriptive (explicitly programmed)
specifications. Solving equational problems meant converting from the first type
(declarative) to the second (imperative) specification language. Note that this was the
reverse of the Euclidean verification of the imperative construction procedure by the
declarative language of formal proofs mentioned before.
At the same time that mathematicians were doing notational and computational
innovation, they were simultaneously developing informal theory and applying both to
physics and astronomy (these were the three types of activity mentioned in the
introduction). To name only a few in this development over the centuries, there were
Vieta, Fermat, Descartes, Pascal, Newton, Leibnitz, Euler, the Bernoullis, Lagrange,
Laplace, Cauchy, and Gauss. Interested not only in pure mathematics (the fundamental
theorem of algebra, number theory, and differential geometry) but also in applied
mathematics (astronomy and physics) and in symbolic invention and new extensions to
computation (the congruence notation in number theory, algorithms for solving systems of
linear equations, the method of least squares, and error statistics), Gauss was a formidable
calculator and programmer. In fact, it was his early developed theory of ruler and compass
construction, his specification of exactly which regular polygons could be so constructed,
and his program for doing it specifically for the regular polygon with 17 sides (when he
was 19 years old) that made him decide to become a mathematician rather than a linguist.
Some of Gauss's predecessors had also been famous as philosophers (Descartes,
Pascal, and Leibnitz, for example) concerned with logic and the theory and practice of
computation. For example, Leibnitz, on the one hand, invented the differential quotient
and integral notations and informally derived the computational rules of the differential and
integral calculus. On the other hand, he described a formal algebra of logical "and," "or,"
and "not" as well as predicted the future formal logical algebra that he called a "universal
characteristic."
In any event, by the end of the eighteenth century the informal concepts of
"infinitesimal," "infinite summation," "continuity," and "differentiability" became
disturbingly paradoxical; and yet they had such fertile applications to physics that
something had to be done to verify what could be verified in the symbolic procedures and
to formalize them so as to safeguard them from paradoxical misuse. The whole nineteenth
century was occupied with this goal.
This kind of problem was not new. The Pythagoreans had such great success in
their reduction of measurements to counting, by the use of fractions, that they were
confounded by the seeming paradox that the diagonal of a square could not be "rationally"
related to the use of its side as a unit. The solution of this early mathematical crisis was the
geometric theory of ratio and proportion and its application to the theory of similar triangles
as given in the tenth book of Euclid; neater construction theories of the "real numbers" had
to wait until the theories of Cantor, Dedekind, and Weierstrass in the latter half of the
nineteenth century.
The Pythagorean failure of the possibility of measuring by simple counting was
only partly solved by the invention of fractions and real numbers. There still remained
logical confusion in language; for example, Zeno's paradox of Achilles and the Tortoise not
only concerned itself with an infinite series having a finite limit, but also muddied the water
by setting up a confusing correlation of a particular linguistic description of the race
between the two with the actual occurrence: the time taken to describe the race exactly had
nothing to do with the time taken to run it. The nineteenth century development of a formal
logic even more severe than those employed by Aristotle and Euclid brought out more
mathematical crises and logical paradoxes.
For example, Boole in the early nineteenth century developed a symbolic calculus
that he called "laws of thought." One such law, called "modus ponens," states that if we
have two declarative sentences, call them p and q, and if we know that the sentence p is
true and also that the sentence of the form "if p is true, then so is q" is also true, then we
can conclude that the sentence q is also true. Boole had a neat algebraic symbolization of
this procedure, an algebraic expression: [p & (p->q)] -> q. This is, however, a functional
expression in a Leibnitzian calculus of two-valued logic; it has the value T (for "true") for
each of the four possible combinations of values T and F (for "false") that p and q can
have. Such a function is called a tautology (i.e., always true). Lewis Carroll at the end of
the century, in a paper called "What the Tortoise Said to Achilles," pointed out that to
achieve the result of modus ponens would require not merely this tautology, call it
A1 = [p & (p->q)] -> q, but also one that states that p, p->q, and A1 together imply q, that is,
A2 = [p & (p->q) & A1] -> q,
and further
A3 = [p & (p->q) & A1 & A2] -> q, etc.
In other words, it takes an infinite number of these tautologies to achieve modus ponens.
The law of thought, modus ponens, is not in the Boolean expression language;
rather, it is a valid procedure in a prescriptive meta-language that talks about the production
of proofs in the Boolean object language. The tautologous symbolic expression had been
designed to symbolize modus ponens, and the symbol was confused with what it
symbolized. The tautology is no more a law of thought than the artist Magritte's picture of
a pipe, which he entitled "this is not a pipe," is a pipe!
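The object-language side of this distinction can be checked mechanically: the expression [p & (p->q)] -> q is a tautology precisely because it evaluates to true under all four assignments. A minimal Python sketch (the function names are my own) enumerates them:

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false
    return (not a) or b

def is_tautology(expr):
    # A two-variable expression is a tautology if it comes out true under
    # all four combinations of truth values for p and q
    return all(expr(p, q) for p, q in product([True, False], repeat=2))

# Boole's symbolization of modus ponens: [p & (p -> q)] -> q
a1 = lambda p, q: implies(p and implies(p, q), q)
print(is_tautology(a1))  # True

# By contrast, the converse implication q -> p is not a tautology
print(is_tautology(lambda p, q: implies(q, p)))  # False
```

Note that the check verifies the tautology A1 inside the object language only; Carroll's regress concerns the separate meta-language rule that licenses detaching q, which no amount of truth-table checking supplies.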
In general, crises in mathematics were signaled by paradoxes. The paradoxes were
usually resolved by finally recognizing that they were either (1) reductio ad absurdum
proofs that something did not exist, (2) reductio ad absurdum proofs that some goal was
impossible to achieve by finite means, or (3) demonstrations that mixing meta-language
with object language caused us to confuse symbols with what was being symbolized. And
even in the reductio ad absurdum solutions, the main problem often was to find out what
was presumed to exist but did not, or why it did not. For example, "the real number √2 is
not rational" or "the number of prime numbers is not finite."
Many of the dramatic advances were due to lifting the level of thinking by meta-
linguistic description, and by generalization at the meta-language level. For example,
algebraic symbolization is a generalization into meta-language of the object language of
arithmetic, and analytic symbolization is a generalization into meta-language of the
algebraic language of arithmetic functions (not all functions are algebraic or even
representable by infinite series of algebraic expressions). But the symbolic procedures
suggested at the meta-language symbol level might not have valid meaning in the operations
at the object language level. The mixture of object language and meta-language is intuitively
powerful but, like most powerful tools, is dangerously capable of misuse. This was why
mathematics went even further than Euclid in carefully formalizing proofs. Archimedes
and Euler intuitively handled infinite series correctly, but too many paradoxical operations,
such as hidden divisions by zero in algebraic meta-language or its equivalent in continuity
and differentiability discussions, made careful logic necessary.
And then the paradoxes even turned up in the logic! Cantor, for example, gave a
formal definition for two sets having the "same number of elements"; it was later used to
give a formal definition of cardinal and ordinal numbers, even for infinite sets. He was
able to show that there were just as many rational numbers between zero and one as there
were natural numbers all told (i.e., that the rational numbers are countably infinite), a
seeming paradox but not really one; and he capped this result with a reductio ad absurdum
proof by his "diagonal method" to show that the number of real numbers between zero and
one is not countably infinite (i.e., has a larger infinity than the number of natural numbers).
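The diagonal method itself can be illustrated finitely. Given any list of n binary sequences, flipping the k-th entry of the k-th sequence yields a sequence that differs from every listed one, so the list cannot be exhaustive; a small Python sketch (the names and sample data are mine):

```python
def diagonalize(rows):
    # Build a sequence that differs from the k-th listed sequence
    # in its k-th position, by flipping each diagonal bit
    return [1 - rows[k][k] for k in range(len(rows))]

claimed_enumeration = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonalize(claimed_enumeration)
print(d)                         # [1, 0, 1, 0]
print(d in claimed_enumeration)  # False: it escapes the list
```

Applied to an alleged enumeration of all infinite binary expansions of the reals between zero and one, the same flip produces a real missing from the enumeration, which is Cantor's reductio.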
All this activity of formalization to avoid paradoxes and contradictions was resulting
in an extension of the Pythagorean program of reducing all mathematics to numbers; it was
all being reduced to a formal theory of sets and then (by Frege and later by Whitehead and
Russell) to formal logic itself, as foreseen by Leibnitz. But the logic itself emerged with
dangerous paradoxes, beyond what Lewis Carroll had recently found in Boole's laws of
thought. All this thinking about thinking and mixing meta-language with object languages
seemed fraught with dangers due to self-referencing: the numbers of sets of numbers, sets
of sets, etc.
In fact, Russell confounded Frege, immediately after Frege's publication, with his
famous paradox of "the set of all sets that are not members of themselves" (is it or is it not
a member of itself?!). And some further paradoxes brought to the surface the
relationships to language. For example, there was the Richard paradox, which first pointed
out that there could be only a countably infinite number of English phrases that specify
functions of one natural number variable and then used Cantor's diagonal argument to
show that this could not be true. And then there was Berry's paradox: "The least natural
number not nameable in fewer than 22 syllables" is hereby named in 21 syllables. Thus,
by the beginning of the twentieth century, a larger and larger portion of mathematical
activity was the axiomatization and presentation of mathematical theory in an even more
rigid formalization than Euclid's Elements. Called "pure mathematics" by Hilbert, it
axiomatized and presented a formal system that would cover all possible mathematical
truths; any properly (i.e., mathematically) stated sentence in such a system would be
formally provable or else formally contradictable. This optimistic program was formally
proved to be unattainable toward the middle of the century; Gödel showed that any
sufficiently rich formal system was either inconsistent or incomplete in the sense that there
would be statements within it that were undecidable (i.e., could neither be proved nor
disproved). His proof was something like Richard's paradox, using a Cantorian diagonal
argument on a formalization of a meta-language. About the same time, logicians were
uncovering a number of undecidable questions and unsolvable problems by developing a
number of equivalent theories of computation.
All through the history of mathematics, the word "computation" had been
expanding its meaning. The formalization process of the nineteenth century accelerated it
by including all the algebras, geometries, and logics. But it was the necessity to formalize
the programming of computations, caused by the inhuman speeds of the recent general
purpose computers, that made computation include all precisely specifiable symbol
manipulation. Now general symbol manipulation includes the pragmatic effects of
deliberate ambiguity caused by shifting interpretation, for example, between object
language and meta-language. General computer programming must therefore include what
mathematical logic programming must forbid. Again, the mathematical study of arithmetic
is independent of the way numbers are represented; the computer program for addition
depends not only on the particular computer type, but also on the number representation
system used. There are so many pragmatic questions in computer theory, use, and design
that the area of computation and the area of mathematics were pulled apart.
For a different reason, mainly what we called the "solved problem syndrome,"
applied mathematics also separated from pure mathematics.
By the middle of the twentieth century, the applications of mathematics had spread
to so many different disciplines that the spread was too vast for any single person's
curriculum. The mathematical applications, once the mathematical aspect of their
disciplines had been precisely specified and handled, were relegated to the discipline of the
application. A few universities were concerned with a unified applied mathematics group
but, by and large, the discipline became restricted to "pure mathematics."
How Computation Broadened its Scope
and Developed a Close Relationship with Engineering,
Logical Theory, and Psycho-linguistics
Mechanical aids for counting became a necessity especially in urban cultures, for
example, around the Mediterranean in Hellenic and Roman times. And from those
beginnings computation, like mathematics generally, developed in three interacting strands:
(1) the development of the physical aids and their psycho-linguistic aspects into
machines; the physical aids development is the obvious engineering aspect, the
psycho-linguistic the more subtle one;
(2) the theoretical aspect, beginning with mathematics as a whole, but narrowing
down to formal proofs, and then to formal logic itself; and
(3) forming a spiral, the psycho-linguistic development of naming and specifying the
data structures (at first, the number representation systems) and the expressions of
the procedures for handling them (their programs or algorithms).
During the interacting development of these three phases in computation, it was the
expressions of the procedures that were turned into machines. For example, as mentioned
before, in Hellenic and Roman times, counting and adding were facilitated by counting
boards with lines on them for placing counters. These static devices, made mobile by
speedy calculating human fingers, were of two designs; the first, a simpler design but more
bulky, had room for 10 counters per line, while the second, a more compact one, had 5
counters per line, and intermediate lines designed for at most 2 counters to indicate the
number of full hands of five fingers that had accumulated on the base lines. The
description of the counting and adding procedures for the bulky counting board was, of
course, much simpler. The two types of Roman and Chinese abaci are further
developments of these placid machines, where the lines are replaced first by slots and then
by wires, and the counters are beaded in or on them vertically instead of being strung out
horizontally; in the more compact type the intermediate lines are vertically above the ones
they relate to.
The Roman symbolization of the total number counted on a board was a compact
description of the picture on a compact board, using I and V for the first line and its
associate, X and L for the second, C and D for the third, etc., and a reversing grammar to
obtain extra compactness, IV for IIII, XL for XXXX, etc. The parsing of these number
representation phrases was fairly complicated in order to achieve compactness and
representation on one line.
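The reversing grammar can nevertheless be parsed in a single left-to-right scan: a symbol standing before a strictly larger one subtracts instead of adds. A Python sketch (assuming the classical seven-symbol alphabet; the function name is mine):

```python
ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def parse_roman(phrase):
    # Left-to-right scan: a symbol is subtracted when the reversing
    # grammar places it before a strictly larger symbol (IV, XL, ...),
    # and added otherwise
    total = 0
    for i, ch in enumerate(phrase):
        value = ROMAN_VALUES[ch]
        if i + 1 < len(phrase) and ROMAN_VALUES[phrase[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

print(parse_roman("XLII"))                     # 42
print(parse_roman("IIII"), parse_roman("IV"))  # 4 4 -- one number, two spellings
```

Even this small parser must look one symbol ahead to decide the sign of each counter, which is exactly the complication the text ascribes to the compact Roman notation.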
The Hindu-Arabic symbolization, especially when a symbol for an empty line,
zero, was introduced, was a compact description of the bulkier abacus. This second
phrase structure language used the position of the numeral to represent the position of the
wire and thereby avoided the necessity of using a larger and larger alphabet for more and
more wires. Moreover, the algorithm for addition in this language had a considerably
simpler description than did that for addition in the Roman numeral phrase structure
language.1
It would seem that it took mankind 1,000 years to decide that the descriptive
language for the bulkier machine was more useful. The algorithm for addition in the Arabic
symbol system has a much simpler description as well as a need for only a finite alphabet.
The 19-year-old Pascal assisted his tax-collecting father by designing a more mobile
mechanical adding machine than the abacus. It simulated in hardware the program for
addition in the Arabic system. And within a generation Leibnitz adapted the same technique
to make the machine do multiplication, albeit with the vigorous help of the human hand
(i.e., some software was still necessary).
The explosion of functional equation-solving techniques in mathematics during the
next 200 years made a more Gauss-like calculator desirable. At the beginning of the
nineteenth century, Charles Babbage felt that all function table construction should be made
mechanically and automatically, and he spent some decades designing a "difference engine"
that would do the job. He was also convinced that a device similar to the cards controlling
a Jacquard loom in textile weaving could be used to allow the descriptions of many
procedures to serve as different programs in one and the same "analytic engine." Although
his mechanization ambitions were defeated because technology was not advanced enough
to standardize the necessary parts, these ambitions were prophetic enough to anticipate
Hollerith's punched card machines at the end of the century and the electromechanical
relay machines of Konrad Zuse, Howard Aiken, and George Stibitz 100 years after his
time. In effect, Babbage had invented the concept of programming general purpose
1Because the natural numbers form a simple chain, the same linguistic expressions, the number namers,
serve the double purpose of being enumerators while the counting proceeds and summarizers when
counting ends. When we want to stress this difference, we use ordinal language instead of cardinals; their
isomorphism is a deep property of finite natural enumeration. This isomorphism is lost both in infinite
cardinals versus ordinals and in "tree names" as against "tree addresses" in ramified structures such as
decision trees, family trees, classification systems, or organization charts.
calculating machines and the idea of programming languages for the specification of how to
run numerical algorithms. His message was kept alive not only by the designers of
machines in the 1940s and '50s of this century in the United States and Britain and on the
European continent, including the Russian mathematicians and engineers following
Tchebycheff, but also by the logicians in the 1930s, '40s, and '50s. These latter developed
a general theory of computation even before electronics took over.
We therefore return to the portion of the history of mathematics where an extension
of the concept of computation was occurring in the nineteenth century. We have already
noted that following Al Khowarizmi a meta-language for arithmetic, namely algebra,
appeared and flourished. The solution of algebraic equations as explicit functions of their
coefficients was sought. Where the solutions were impossible in the number systems
already available, new extensions to the number concept were invented. We have already
remarked on this happening when the Pythagoreans had to deal with that anachronism of
the future, x^2 = 2. Then, through the Renaissance, it resulted in the invention of the next
imaginary class, the negative numbers, yielding the solution to all equations of the form
ax + b = 0, except, of course, for a = 0. Then a unified solution to all quadratic
equations, ax^2 + bx + c = 0, was found as a function of a, b, and c, provided the
invention of numbers was extended to include another imaginary class, first called
imaginary numbers and finally accepted as the "complex numbers"; and the unified solution
called for addition, subtraction, multiplication, division, and, the Pythagorean crisis having
long been understood (?), the operation of taking a square root. The program, or algorithm
for the construction of the solutions, was presented explicitly in an algebraic form. What
could be said about all algebraic equations? Gauss finally proved that complex
numbers would suffice to solve them, but not by an algebraic formula. (We would now
say that "the complex number system is algebraically closed.") By his time the
construction of the solution by addition, subtraction, multiplication, division, and the
extraction of roots had been achieved for degrees 3 and 4, and no more.
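The unified quadratic solution, closed over the complex numbers, renders directly in code; a sketch using Python's cmath module (the function name is mine), respecting the caveat above that the formula assumes a != 0:

```python
import cmath

def solve_quadratic(a, b, c):
    # Roots of a*x^2 + b*x + c = 0: only addition, subtraction,
    # multiplication, division, and one square root are needed,
    # and over the complex numbers that square root always exists
    if a == 0:
        raise ValueError("not quadratic: the formula assumes a != 0")
    root = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(solve_quadratic(1, -3, 2))  # ((2+0j), (1+0j)): real roots
print(solve_quadratic(1, 0, 1))   # (1j, -1j): x^2 + 1 = 0 forces complex numbers
```

Using cmath rather than math is the whole point: the real square root of a negative discriminant fails, while the complex one is total, which is the algebraic closure the text describes.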
It was then that Galois, by studying the effect of permuting the roots of an
equation, discussed the algebraic systems called groups. He found that those and only
those equations were "solvable by radicals" whose Galois group has a certain restricted
structure; and he further found that the general algebraic equation of degree greater than or
equal to five did not have this structure, called, incidentally, "solvability." Thus another
condition for solvability and unsolvability emerged, as had those that prompted the
invention of extensions to the number system, or Gauss's condition for solvability by
straight-edge and compass construction.
Penzias, A. 1989. Ideas and Information: Managing in a High-Tech World. New York:
W. W. Norton.
Pullan, J. M. 1968. The History of the Abacus. London: Hutchinson & Company.
Ralston, A. 1971. Introduction to Programming & Computer Science. New York:
McGraw-Hill.
Randell, B. (ed.). 1975. The Origins of Digital Computers: Selected Papers. New York:
Springer-Verlag.
Shelley, G. B., and T. J. Cashman. 1984. Computer Fundamentals for an Information
Age. Brea, CA: Anaheim Publishing Company.
Wood, D. 1987. Theory of Computation. New York: Harper & Row.
APPENDIX A
COMPUTING AS A DISCIPLINE
The final report of the Task Force on the Core of Computer Science presents a new
intellectual framework for the discipline of computing and a new basis for computing
curricula. This report has been endorsed and approved for release by the ACM Education
Board.
Peter J. Denning (Chairman), Douglas E. Comer, David Gries,
Michael C. Mulder, Allen Tucker, A. Joe Turner, and Paul R. Young
APPENDIX
A DEFINITION OF COMPUTING AS A DISCIPLINE
Computer science and engineering is the
systematic study of algorithmic processes - their
theory, analysis, design, efficiency,
implementation, and application - that describe
and transform information. The fundamental
question underlying all of computing is, What
can be (efficiently) automated? [2,3] This
discipline was born in the early 1940s with the
joining together of algorithm theory,
mathematical logic, and the invention of the
stored-program electronic computer.
The roots of computing extend deeply into
mathematics and engineering. Mathematics
imparts analysis to the field; engineering imparts
design. The discipline embraces its own theory,
experimental method, and engineering, in
contrast with most physical sciences, which are
separate from the engineering disciplines that
apply their findings (e.g., chemistry and chemical
engineering principles). The science and
engineering are inseparable because of the
fundamental interplay between the scientific and
engineering paradigms within the discipline.
For several thousand years, calculation has
been a principal concern of mathematics. Many
models of physical phenomena have been used to
derive equations whose solutions yield
predictions of those phenomena - for example,
calculations of orbital trajectories, weather
forecasts, and fluid flows. Many general methods
for solving such equations have been devised - for
example, algorithms for systems of linear
equations, differential equations, and integrating
functions. For almost the same period,
calculations that aid in the design of mechanical
systems have been a principal concern of
engineering. Examples include algorithms for
evaluating stresses in static objects, calculating
moments of moving objects, and measuring
distances much larger or smaller than our
immediate perception.
One product of the long interaction between
engineering and mathematics has been
mechanical aids for calculating. Some surveyors'
and navigators' instruments date back a thousand
years. Pascal and Leibniz built arithmetic
calculators in the middle 1600s. In the 1830s
Babbage conceived of an "analytical engine" that
could mechanically and without error evaluate
logarithms, trigonometric functions and other
general arithmetic functions. His machine, never
completed, served as an inspiration for later
work. In the 1920s, Bush constructed an
electronic analog computer for solving general
systems of differential equations. In the same
period, electromechanical calculating machines
capable of addition, subtraction, multiplication,
division, and square root computation became
available. The electronic flip-flop provided a
natural bridge from these machines to digital
versions with no moving parts.
Logic is a branch of mathematics concerned
with criteria of validity of inference and formal
principles of reasoning. Since the days of
Euclid, it has been a tool for rigorous
mathematical and scientific argument. In the
19th century a search began for a universal
system of logic that would be free of the
incompletenesses observed in known deductive
systems. In a complete system, it would be
possible to determine mechanically whether any
given statement is either true or false. In 1931,
Godel published his "incompleteness theorem,"
showing that there is no such system. In the late
1930s, Turing explored the idea of a universal
computer that could simulate any step-by-step
procedure of any other computing machine. His
findings were similar to Godel's: some well


defined problems cannot be solved by any
mechanical procedure. Logic is important not
only because of its deep insight into the limits of
automatic calculation, but also because of its
insight that strings of symbols, perhaps encoded
as numbers, can be interpreted both as data and as
programs.
This insight is the key idea that
distinguishes the stored program computer from
calculating machines. The steps of the algorithm
are encoded in a machine representation and stored
in the memory for later decoding and execution
by the processor. The machine code can be
derived mechanically from a higher-level
symbolic form, the programming language.
It is the explicit and intricate intertwining of
the ancient threads of calculation and logical
symbol manipulation, together with the modern
threads of electronics and electronic representation
of information that gave birth to the discipline of
computing.
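The data/program duality just described can be made concrete with a toy stored-program interpreter: the program is ordinary data held in memory, decoded and executed step by step. This is an illustrative sketch only; the miniature instruction set and names here are invented for the example, not those of any historical machine.

```python
# A toy stored-program machine: the program is plain data (a list of
# tuples) held in "memory", decoded and executed one step at a time.
def run(program):
    acc = 0          # single accumulator register
    pc = 0           # program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":           # put a constant in the accumulator
            acc = arg
        elif op == "ADD":          # add a constant to the accumulator
            acc += arg
        elif op == "JUMP_IF_NEG":  # conditional branch to instruction arg
            if acc < 0:
                pc = arg
                continue
        elif op == "HALT":
            break
        pc += 1
    return acc

# Because programs are data, one program can be built or rewritten by another.
prog = [("LOAD", 5), ("ADD", -2), ("JUMP_IF_NEG", 4), ("ADD", 10), ("HALT", 0)]
print(run(prog))  # 5 - 2 = 3, not negative, so +10 -> 13
```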
We identified nine subareas of computing:
1. Algorithms and data structures
2. Programming languages
3. Architecture
4. Numerical and symbolic computation
5. Operating systems
6. Software methodology and engineering
7. Databases and information retrieval
8. Artificial intelligence and robotics
9. Human-computer communication
Each has an underlying unity of subject matter, a
substantial theoretical component, significant
abstractions, and substantial design and
implementation issues. Theory deals with the
underlying mathematical development of the
subarea and includes supporting theory such as
graph theory, combinatorics, or formal
languages. Abstraction (or modelling) deals with
models of potential implementations; the models
suppress detail, while retaining essential features,
and provide means for predicting future behavior.
Design deals with the process of specifying a
problem, deriving requirements and
specifications, iterating and testing prototypes,
and implementing a system. Design includes the
experimental method, which in computing comes
in several styles: measuring programs and
systems, validating hypotheses, and prototyping
to extend abstractions to practice.
Although software methodology is
essentially concerned with design, it also
contains substantial elements of theory and
abstraction. For this reason, we have identified it
as a subarea. On the other hand, parallel and
distributed computation are issues that pervade
all the subareas and all their components (theory,
abstraction, and design); they have been identified
neither as subareas nor as subarea components.
The subsequent numbered sections provide
the details of each subarea in three parts---
theory, abstraction, and design. The boundaries
between theory and abstraction, and between
abstraction and design are necessarily fuzzy; it is
a matter of personal taste where some of the
items go.
Our intention is to provide a guide to the
discipline by showing its main features, not a
detailed map. It is important to remember that
this guide to the discipline is not a plan for a
course or a curriculum; it is merely a framework
in which a curriculum can be designed. It is also
important to remember that this guide to the
discipline is a snapshot of an organism
undergoing constant change. It will require
reevaluation and revision at regular intervals.
1. ALGORITHMS AND DATA
STRUCTURES
This area deals with specific classes of problems
and their efficient solutions. Fundamental
questions include: For given classes of problems,
what are the best algorithms? How much storage
and time do they require? What is the tradeoff
between space and time? What is the worst case
of the best algorithms? How well do algorithms
behave on average? How general are
algorithms-i.e., what classes of problems can
be dealt with by similar methods?
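The space-time tradeoff among these questions can be made concrete: repeated membership queries run in linear time over a plain list, but become effectively constant-time if memory is first spent building a hash-based index. A minimal sketch, with invented data:

```python
# Trading space for time: answering many membership queries.
data = list(range(100000))

# Method 1: no extra space; each query scans the list -> O(n) per query.
def member_scan(x):
    return x in data

# Method 2: spend O(n) extra space once on a hash set -> O(1) expected per query.
index = set(data)
def member_indexed(x):
    return x in index

print(member_scan(99999), member_indexed(99999))  # True True
print(member_scan(-1), member_indexed(-1))        # False False
```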
1.1 Theory
Major elements of theory in the area of
algorithms and data structures are:
1. Computability theory, which defines what
machines can and cannot do.
2. Computational complexity theory, which
tells how to measure the time and space
requirements of computable functions and
relates a problem's size with the best- or
worst-case performance of algorithms that
solve that problem, and provides methods for
proving lower bounds on any possible
solution to a problem.
3. Time and space bounds for algorithms and
classes of algorithms.
4. Levels of intractability: for example, classes
of problems solvable deterministically in
polynomially bounded time (P-problems);
those solvable nondeterministically in
polynomially bounded time (NP-problems);
and those solvable efficiently by parallel
machines (NC-problems).
5. Parallel computation, lower bounds, and
mappings from dataflow requirements of
algorithms into communication paths of
machines.
6. Probabilistic algorithms, which give results
correct with sufficiently high probabilities
much more efficiently (in time and space)
than determinate algorithms that guarantee
their results. Monte Carlo methods.
7. Cryptography.
8. The supporting areas of graph theory,
recursive functions, recurrence relations,
combinatorics, calculus, induction, predicate
and temporal logic, semantics, probability,
and statistics.
1.2 Abstraction
Major elements of abstraction in the area of
algorithms and data structures are:
1. Efficient, optimal algorithms for important
classes of problems and analyses for best,
worst and average performance.
2. Classifications of the effects of control and
data structure on time and space requirements
for various classes of problems.
3. Important classes of techniques such as
divide-and-conquer, greedy algorithms,
dynamic programming, finite state machine
interpreters, and stack machine interpreters.
4. Parallel and distributed algorithms; methods
of partitioning problems into tasks that can
be executed in separate processors.
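As one instance of the divide-and-conquer technique named in item 3, here is a sketch of merge sort, which splits the input, solves the halves recursively, and combines the sorted halves in one linear pass (yielding the familiar O(n log n) bound):

```python
def merge_sort(xs):
    # Divide: a list of length <= 1 is already sorted.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])    # conquer each half recursively
    right = merge_sort(xs[mid:])
    # Combine: merge two sorted halves in a single linear pass.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```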
1.3 Design
Major elements of design and experimentation in
the area of algorithms and data structures are:
1. Selection, implementation, and testing of
algorithms for important classes of problems
such as searching, sorting, random-number
generation, and textual pattern matching.
2. Implementation and testing of general
methods applicable across many classes of
problems, such as hashing, graphs, and
trees.
3. Implementation and testing of distributed
algorithms such as network protocols,
distributed data updates, semaphores,
deadlock detectors and synchronization
methods.
4. Implementation and testing of storage
managers such as garbage collection, buddy
system, lists, tables, and paging.
5. Extensive experimental testing of heuristic
algorithms for combinatorial problems.
6. Cryptographic protocols that permit secure
authentication and secret communication.
2. PROGRAMMING LANGUAGES
This area deals with notations for virtual
machines that execute algorithms, with notations
for algorithms and data, and with efficient
translations from high-level languages into
machine codes. Fundamental questions include:
What are possible organizations of the virtual
machine presented by the language (data types,
operations, control structures, mechanisms for
introducing new types and operations)? How are
these abstractions implemented on computers?
What notation (syntax) can be used effectively
and efficiently to specify what the computer
should do?
2.1 Theory
Major elements of theory in the area of
programming languages are:
1. Formal languages and automata, including
theories of parsing and language translation.
2. Turing machines (base for procedural
languages), Post systems (base for string
processing languages), λ-calculus (base for
functional languages).
3. Formal semantics: methods for defining
mathematical models of computers and the
relationships among the models, language
syntax, and implementation. Primary
methods include denotational, algebraic,
operational and axiomatic semantics.
4. As supporting areas: predicate logic,
temporal logic, modern algebra and
mathematical induction.
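The parsing theory listed above can be illustrated with a small recursive-descent parser and evaluator for a toy grammar of arithmetic over single digits; the grammar and function names are invented for the example:

```python
# Recursive-descent evaluation of the toy grammar:
#   expr   -> term ('+' term)*
#   term   -> factor ('*' factor)*
#   factor -> digit | '(' expr ')'
def parse(s):
    pos = 0

    def peek():
        return s[pos] if pos < len(s) else None

    def expr():
        nonlocal pos
        v = term()
        while peek() == '+':
            pos += 1
            v += term()
        return v

    def term():
        nonlocal pos
        v = factor()
        while peek() == '*':
            pos += 1
            v *= factor()
        return v

    def factor():
        nonlocal pos
        if peek() == '(':
            pos += 1        # consume '('
            v = expr()
            pos += 1        # consume ')'
            return v
        c = s[pos]
        pos += 1
        return int(c)       # a single digit

    return expr()

print(parse("2+3*4"))    # 14: operator precedence comes from the grammar shape
print(parse("(2+3)*4"))  # 20
```

Note that precedence and associativity are encoded purely in the grammar: `term` binds tighter than `expr`, so no precedence table is needed.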
2.2 Abstraction
Major elements of abstraction in the area of
programming languages include:
1. Classification of languages based on their
syntactic and dynamic semantic models; e.g.,
static typing, dynamic typing, functional,
procedural, object-oriented, logic,
specification, message passing, and dataflow.
2. Classification of languages according to
intended application area; e.g., business data
processing, simulation, list processing, and
graphics.


3. Classification of major syntactic and
semantic models for program structure; e.g.,
procedure hierarchies, functional
composition, abstract data types, and
communicating parallel processes.
4. Abstract implementation models for each
major type of language.
5. Methods for parsing, compiling,
interpretation, and code optimization.
6. Methods for automatic generation of parsers,
scanners, compiler components, and
compilers.
2.3 Design
Major elements of design and experimentation in
the area of programming languages are:
1. Specific languages that bring together a
particular abstract machine (semantics) and
syntax to fonn a coherent implementable
whole. Examples: procedural (COBOL,
FORTRAN, ALGOL, Pascal, Ada, C),
functional (LISP), dataflow (SISAL, VAL),
object-oriented (Smalltalk, CLU), logic
(Prolog), strings (SNOBOL), and
concurrency (CSP, Occam, Concurrent
Pascal, Modula-2).
2. Specific implementation methods for
particular classes of languages: run-time
models, static and dynamic execution
methods, type checking, storage and
register allocation, compilers, cross
compilers, and interpreters; systems for
finding parallelism in programs.
3. Programming environments.
4. Parser and scanner generators (e.g., YACC,
LEX), compiler generators.
5. Programs for syntactic and semantic error
checking, profiling, debugging, and tracing.
6. Applications of programming-language
methods to document-processing functions
such as creating tables, graphs, chemical
formulas, spreadsheets, equations, input and
output, and data handling. Other
applications such as statistical processing.
3. ARCHITECTURE
This area deals with methods of organizing
hardware (and associated software) into efficient,
reliable systems. Fundamental questions include:
What are good methods of implementing
processors, memory, and communication in a
machine? How do we design and control large
computational systems and convincingly
demonstrate that they work as intended despite
errors and failures? What types of architectures
can efficiently incorporate many processing
elements that can work concurrently on a
computation? How do we measure performance?
3.1 Theory
Major elements of theory in the area of
architecture are:
1. Boolean algebra.
2. Switching theory.
3. Coding theory.
4. Finite state machine theory.
5. The supporting areas of statistics,
probability, queueing, reliability theory,
discrete mathematics, number theory, and
arithmetic in different number systems.
3.2 Abstraction
Major elements of abstraction in the area of
architecture are:
1. Finite state machine and Boolean algebraic
models of circuits that relate function to
behavior.
2. Other general methods of synthesizing
systems from basic components.
3. Models of circuits and finite state machines
for computing arithmetic functions over
finite fields.
4. Models for data path and control structures.
5. Optimizing instruction sets for various
models and workloads.
6. Hardware reliability: redundancy, error
detection, recovery, and testing.
7. Space, time, and organizational tradeoffs in
the design of VLSI devices.
8. Organization of machines for various
computational models: sequential, dataflow,
list processing, array processing, vector
processing, and message-passing.
9. Identification of design levels; e.g.,
configuration, program, instruction set,
register, and gate.
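The finite state machine models of item 1 can be sketched directly: behavior is given entirely by a transition table. As an illustrative example (invented for this text), a two-state machine that tracks the running parity of a bit stream:

```python
# A finite state machine as a transition table: (state, input) -> next state.
# This machine tracks the parity (even/odd) of the number of 1-bits seen.
TRANSITIONS = {
    ("even", 0): "even", ("even", 1): "odd",
    ("odd", 0): "odd",   ("odd", 1): "even",
}

def run_fsm(bits, state="even"):
    # Feed each input symbol through the table; the final state is the answer.
    for b in bits:
        state = TRANSITIONS[(state, b)]
    return state

print(run_fsm([1, 0, 1, 1]))  # three 1-bits seen -> "odd"
```

The same table-driven pattern underlies hardware controllers, lexical analyzers, and protocol engines: only the table changes.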
3.3 Design
Major elements of design and experimentation in
the area of architecture are:
1. Hardware units for fast computation; e.g.,
arithmetic function units, cache.
2. The so-called von Neumann machine (the
single-instruction-sequence stored program
computer); RISC and CISC
implementations.
3. Efficient methods of storing and recording
information, and detecting and correcting
errors.


4. Specific approaches to responding to errors:
recovery, diagnostics, reconfiguration, and
backup procedures.
5. Computer aided design (CAD) systems and
logic simulations for the design of VLSI
circuits. Production programs for layout,
fault diagnosis. Silicon compilers.
6. Implementing machines in various
computational models; e.g., dataflow, tree,
LISP, hypercube, vector, and
multiprocessor.
7. Supercomputers, such as the Cray and Cyber
machines.
4. NUMERICAL AND SYMBOLIC
COMPUTATION
This area deals with general methods of
efficiently and accurately solving equations
resulting from mathematical models of systems.
Fundamental questions include: How can we
accurately approximate continuous or infinite
processes by finite discrete processes? How do
we cope with the errors arising from these
approximations? How rapidly can a given class
of equations be solved for a given level of
accuracy? How can symbolic manipulations on
equations, such as integration, differentiation, and
reduction to minimal terms, be carried out? How
can the answers to these questions be
incorporated into efficient, reliable, high-quality
mathematical software packages?
4.1 Theory
Major elements of theory in the area of numerical
and symbolic computation are:
1. Number theory.
2. Linear algebra
3. Numerical analysis.
4. Nonlinear dynamics.
5. The supporting areas of calculus, real
analysis, complex analysis, and algebra.
4.2 Abstraction
Major elements of abstraction in the area of
numerical and symbolic computation are:
1. Formulations of physical problems as
models in continuous (and sometimes
discrete) mathematics.
2. Discrete approximations to continuous
problems. In this context, backward error
analysis, error propagation and stability in
the solution of linear and nonlinear systems.
3. Special methods in special cases, such as
Fast Fourier Transform and Poisson solvers.
4. The finite element model for a large class of
problems specifiable by regular meshes and
boundary values. Associated iterative
methods and convergence theory; direct,
implicit, multigrid, rates of convergence.
Parallel solution methods. Automatic grid
refinement during numerical integration.
5. Symbolic integration and differentiation.
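The discrete approximation of a continuous problem described in item 2 can be illustrated with the trapezoid rule: halving the step size shrinks the discretization error roughly fourfold, making the accuracy/cost tradeoff visible. A minimal sketch:

```python
import math

# Approximate the integral of f on [a, b] with n trapezoids.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))      # endpoint values get half weight
    for i in range(1, n):
        total += f(a + i * h)        # interior sample points
    return total * h

# The integral of sin on [0, pi] is exactly 2; the error falls roughly as 1/n^2.
for n in (4, 8, 16):
    approx = trapezoid(math.sin, 0.0, math.pi, n)
    print(n, approx, abs(approx - 2.0))
```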
4.3 Design
Major elements of design and experimentation in
the area of numerical and symbolic computation
are:
1. High-level problem formulation systems
such as CHUM and WEB.
2. Specific libraries and packages for linear
algebra, ordinary differential equations,
statistics, nonlinear equations, and
optimizations; e.g., LINPACK,
EISPACK, ELLPACK.
3. Methods of mapping finite element
algorithms to specific architectures; e.g.,
multigrids on hypercubes.
4. Symbolic manipulators, such as
MACSYMA and REDUCE, capable of
powerful and nonobvious manipulations,
notably differentiations, integrations, and
reductions of expressions to minimal terms.
5. OPERATING SYSTEMS
This area deals with control mechanisms that
allow multiple resources to be efficiently
coordinated in the execution of programs.
Fundamental questions include: What are the
visible objects and permissible operations at each
level in the operation of a computer system? For
each class of resource (objects visible at some
level), what is a minimal set of operations that
permit their effective use? How can interfaces be
organized so that users deal only with abstract
versions of resources and not with physical
details of hardware? What are effective control
strategies for job scheduling, memory
management, communication among concurrent
tasks, reliability, and security? What are the
principles by which systems can be extended in
function by repeated application of a small
number of construction rules? How should
distributed computations be organized so that
many autonomous machines connected by a
communication network can participate in a
computation, with the details of network
protocols, host locations, band-widths, and
resource naming being mostly invisible?


5.1 Theory
Major elements of theory in the area of operating
systems are:
1. Concurrency theory: synchronization,
determinacy, and deadlocks.
2. Scheduling theory, especially processor
scheduling.
3. Program behavior and memory management
theory, including optimal policies for
storage allocation.
4. Performance modeling and analysis.
5. The supporting areas of bin packing,
probability, queueing theory, queueing
networks, communication and information
theory, temporal logic, and cryptography.
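The synchronization problems at the heart of concurrency theory (item 1) appear even in the smallest example: two threads updating a shared counter need mutual exclusion so that the read-modify-write step is atomic. A minimal sketch using Python threads:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:       # critical section: read-modify-write done atomically
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the lock, every increment is preserved; without it, concurrent
# read-modify-write interleavings could silently lose updates.
print(counter)  # 20000
```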
5.2 Abstraction
Major elements of abstraction in the area of
operating systems are:
1. Abstraction principles that permit users to
operate on idealized versions of resources
without concern for physical details (e.g.,
process rather than processor, virtual
memory rather than main-secondary
hierarchy, files rather than disks).
2. Binding of objects perceived at the user
interface to internal computational
structures.
3. Models for important subproblems such as
process management, memory management,
job scheduling, secondary storage
management, and performance analysis.
4. Models for distributed computation; e.g.,
clients and servers, cooperating sequential
processes, message-passing, and remote
procedure calls.
5. Models for secure computing; e.g., access
controls, authentication, and
communication.
6. Networking, including layered protocols,
naming, remote resource usage, help
services, and local network protocols such as
token-passing and shared buses.
5.3 Design
Major elements of design and experimentation in
the area of operating systems are:
1. Prototypes of time sharing systems,
automatic storage allocators, multilevel
schedulers, memory managers, hierarchical
file systems and other important system
components that have served as bases for
commercial systems.
2. Techniques for building operating systems
such as UNIX, Multics, Mach, VMS, and
MS-DOS.
3. Techniques for building libraries of utilities;
e.g., editors, document formatters,
compilers, linkers, and device drivers.
4. Files and file systems.
5. Queueing network modeling and simulation
packages to evaluate performance of real
systems.
6. Network architectures such as ethernet,
FDDI, token ring nets, SNA, and DECNET.
7. Protocol techniques embodied in the
Department of Defense protocol suite
(TCP/IP), virtual circuit protocols, internet
real-time conferencing, and X.25.
6. SOFTWARE METHODOLOGY
AND ENGINEERING
This area deals with the design of programs and
large software systems that meet specifications
and are safe, secure, reliable, and dependable.
Fundamental questions include: What are the
principles behind the development of programs
and programming systems? How does one prove
that a program or system meets its
specifications? How does one develop
specifications that do not omit important cases
and can be analyzed for safety? How do software
systems evolve through different generations?
How can software be designed for
understandability and modifiability?
6.1 Theory
Major elements of theory in the area of software
methodology and tools are:
1. Program verification and proof.
2. Temporal logic.
3. Reliability theory.
4. The supporting areas of predicate calculus,
axiomatic semantics, and cognitive
psychology.
6.2 Abstraction
Major elements of abstraction in the area of
software methodology and tools are:
1. Specification methods, such as predicate
transformers, programming calculi, abstract
data types, and Floyd-Hoare axiomatic
notations.
2. Methodologies such as stepwise refinement,
modular design, modules, separate
compilation, information-hiding, dataflow,
and layers of abstraction.
3. Methods for automating program
development; e.g., text editors, syntax-
directed editors, and screen editors.
4. Methodologies for dependable computing;
e.g., fault tolerance, security, reliability,
recovery, N -version programming,
multiple-way redundancy, and
checkpointing.
5. Software tools and programming
environments.
6. Measurement and evaluation of programs and
systems.
7. Matching problem domains through software
systems to particular machine architectures.
8. Life cycle models of software projects.
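The Floyd-Hoare axiomatic notations of item 1 have an executable approximation: a precondition checked on entry, an invariant maintained through each loop iteration, and a postcondition checked on exit. A sketch of the idea (the function is an invented example, and runtime assertions only check, rather than prove, the specification):

```python
# Integer square root with its specification written as executable
# assertions in the Floyd-Hoare spirit.
def isqrt(n):
    assert n >= 0                              # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
        assert r * r <= n                      # loop invariant: r^2 <= n
    assert r * r <= n < (r + 1) * (r + 1)      # postcondition
    return r

print(isqrt(10))  # 3
```

A verification proves the postcondition holds for every input; the assertions here merely test it on the inputs actually supplied.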
6.3 Design
Major elements of design and experimentation in
the area of software methodology and tools are:
1. Specification languages (e.g., PSL 2, Ina
Jo), configuration management systems
(e.g., in Ada APSE), and revision control
systems (e.g., RCS, SCCS).
2. Syntax directed editors, line editors, screen
editors, and word processing systems.
3. Specific methodologies advocated and used in
practice for software development; e.g.,
HDM and those advocated by Dijkstra,
Jackson, Mills, or Yourdon.
4. Procedures and practices for testing (e.g.,
walk-through, hand simulation, checking of
interfaces between modules, program path
enumerations for test sets, and event
tracing), quality assurance, and project
management.
5. Software tools for program development and
debugging, profiling, text formatting, and
database manipulation.
6. Specification of criteria levels and validation
procedures for secure computing systems,
e.g., Department of Defense.
7. Design of user interfaces.
8. Methods for designing very large systems
that are reliable, fault tolerant, and
dependable.
7. DATABASES AND INFORMATION
RETRIEVAL SYSTEMS
This area deals with the organization of large sets
of persistent, shared data for efficient query and
update. Fundamental questions include: What
modeling concepts should be used to represent
data elements and their relationships? How can
basic operations such as store, locate, match, and
retrieve be combined into effective transactions?
How can these transactions interact effectively
with the user? How can high-level queries be
translated into high-performance programs?
What machine architectures lead to efficient
retrieval and update? How can data be protected
against unauthorized access, disclosure, or
destruction? How can large databases be
protected from inconsistencies due to
simultaneous update? How can protection and
performance be achieved when the data are
distributed among many machines? How can
text be indexed and classified for efficient
retrieval?
7.1 Theory
Major elements of theory in the area of databases
and information retrieval systems are:
1. Relational algebra and relational calculus.
2. Dependency theory.
3. Concurrency theory, especially serializable
transactions, locks, and synchronized
updates of multiple copies.
4. Statistical inference.
5. Sorting and searching.
6. Performance analysis.
7. As supporting theory; cryptography.
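Relational algebra (item 1) has a direct executable reading: a relation is a set of tuples, and selection and projection are small functions over it. An illustrative sketch; the sample relation and attribute names are invented:

```python
# A relation as a list of dicts; selection and projection from relational algebra.
employees = [
    {"name": "Ada",   "dept": "eng", "salary": 120},
    {"name": "Grace", "dept": "eng", "salary": 130},
    {"name": "Alan",  "dept": "ops", "salary": 100},
]

def select(relation, predicate):
    # sigma: keep only the rows satisfying a condition
    return [row for row in relation if predicate(row)]

def project(relation, attrs):
    # pi: keep only the named columns of each row
    return [{a: row[a] for a in attrs} for row in relation]

# "Names of everyone in engineering", composed from the two operators.
eng_names = project(select(employees, lambda r: r["dept"] == "eng"), ["name"])
print(eng_names)  # [{'name': 'Ada'}, {'name': 'Grace'}]
```

A query optimizer's job is essentially to rewrite such compositions (e.g., pushing selections before joins) into cheaper but equivalent plans.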
7.2 Abstraction
Major elements of abstraction in the area of
databases and information retrieval systems are:
1. Models for representing the logical structure
of data and relations among the data
elements, including the relational and entity-
relationship models.
2. Representations of files for fast retrieval,
such as indexes, trees, inversions, and
associative stores.
3. Methods for assuring integrity (consistency)
of the data base under updates, including
concurrent updates of multiple copies.
4. Methods for preventing unauthorized
disclosure or alteration and for minimizing
statistical inference.
5. Languages for posing queries over databases
of different kinds (e.g., hypertext, text,
spatial, pictures, images, rule-sets).
Similarly for information retrieval systems.
6. Models, such as hypertext, which allow
documents to contain text at multiple levels
and to include video, graphics, and voice.
7. Human factors and interface issues.
7.3 Design
Major elements of design in the area of database
and information retrieval systems are:


1. Techniques for designing databases for
relational, hierarchical, network, and
distributed implementations.
2. Techniques for designing database systems
such as INGRES, System R. dBase III, and
DB-2.
3. Techniques for designing information
retrieval systems such as LEXIS, Osiris, and
Medline.
4. Design of secure database systems.
5. Hypertext systems such as NLS, NoteCards,
Intermedia, and Xanadu.
6. Techniques to map large databases to
magnetic disk stores.
7. Techniques for mapping large, read-only
databases onto optical storage media e.g.,
CD/ROM and WORMS.
8. ARTIFICIAL INTELLIGENCE AND
ROBOTICS
This area deals with the modeling of animal and
human (intelligent) behavior. Fundamental
questions include: What are basic models of
behavior and how do we build machines that
simulate them? To what extent is intelligence
described by rule evaluation, inference, deduction,
and pattern computation? What is the ultimate
performance of machines that simulate behavior
by these methods? How are sensory data encoded
so that similar patterns have similar codes? How
are motor codes associated with sensory codes?
What are architectures for learning systems, and
how do those systems represent their knowledge
of the world?
8.1 Theory
Major elements of theory in the area of artificial
intelligence and robotics are:
1. Logic, e.g., monotonic, nonmonotonic, and
fuzzy.
2. Conceptual dependency.
3. Cognition.
4. Syntactic and semantic models for natural
language understanding.
5. Kinematics and dynamics of robot motion
and world models used by robots.
6. The supporting areas of structural
mechanics, graph theory, formal grammars,
linguistics, philosophy, and psychology.
8.2 Abstraction
Major elements of abstraction in the area of
artificial intelligence and robotics are:
1. Knowledge representation (e.g., rules,
frames, logic) and methods of processing
them (e.g., deduction, inference).
2. Models of natural language understanding
and natural language representations,
including phoneme representations; machine
translation.
3. Speech recognition and synthesis,
translation of text to speech.
4. Reasoning and learning models; e.g.,
uncertainty, nonmonotonic logic, Bayesian
inference, beliefs.
5. Heuristic search methods, branch and bound,
control search.
6. Machine architectures that imitate biological
systems, e.g., neural networks,
connectionism, sparse distributed memory.
7. Models of human memory, autonomous
learning, and other elements of robot
systems.
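The heuristic search methods of item 5 can be sketched as greedy best-first search: a priority queue ordered by a heuristic estimate of distance to the goal. The graph and heuristic values below are invented for the example:

```python
import heapq

# Greedy best-first search: always expand the node the heuristic likes best.
def best_first(graph, heuristic, start, goal):
    frontier = [(heuristic[start], start, [start])]  # (estimate, node, path)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph.get(node, []):
            heapq.heappush(frontier, (heuristic[nxt], nxt, path + [nxt]))
    return None  # goal unreachable

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
heuristic = {"A": 3, "B": 2, "C": 1, "D": 0}   # estimated distance to D
print(best_first(graph, heuristic, "A", "D"))  # ['A', 'C', 'D']
```

Unlike A*, this greedy variant ignores path cost already incurred, so it is fast but not guaranteed to find a shortest path.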
8.3 Design
Major elements of design and experimentation in
artificial intelligence and robotics include:
1. Techniques for designing software systems
for logic programming, theorem proving,
and rule evaluation.
2. Techniques for expert systems in narrow
domains (e.g., Mycin, Xcon) and expert
system shells that can be programmed for
new domains.
3. Implementations of logic programming
(e.g., PROLOG).
4. Natural language understanding systems
(e.g., Margie, SHRDLU, and preference
semantics).
5. Implementations of neural networks and
sparse distributed memories.
6. Programs that play checkers, chess, and
other games of strategy.
7. Working speech synthesizers, recognizers.
8. Working robotic machines, static and
mobile.
9. HUMAN-COMPUTER
COMMUNICATION
This area deals with the efficient transfer of
information between humans and machines via
various human-like sensors and motors, and with
information structures that reflect human
conceptualizations. Fundamental questions
include: What are efficient methods of
representing objects and automatically creating
pictures for viewing? What are effective methods
for receiving input or presenting output? How
can the risk of misperception and subsequent


human error be minimized? How can graphics
and other tools be used to understand physical
phenomena through information stored in data
sets?
9.1 Theory
Major elements of theory in human-computer
communication are:
1. Geometry of two and higher dimensions
including analytic, projective, affine, and
computational geometries.
2. Color theory.
3. Cognitive psychology.
4. The supporting areas of Fourier analysis,
linear algebra, graph theory, automata,
physics, and analysis.
9.2 Abstraction
Major elements of abstraction in the area of
human-computer communication are:
1. Algorithms for displaying pictures including
methods for smoothing, shading, hidden
lines, ray tracing, hidden surfaces,
transparent surfaces, shadows, lighting,
edges, color maps, representations by
splines, rendering, texturing, antialiasing,
coherence, fractals, animation, representing
pictures as hierarchies of objects.
2. Models for computer-aided design (CAD).
3. Computer representations of physical
objects.
4. Image processing and enhancement methods.
5. Man-machine communication, including
psychological studies of modes of interaction
that reduce human error and increase human
productivity.
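A tiny instance of item 4's image processing methods: smoothing an image by replacing each pixel with the mean of its 3x3 neighborhood (a box filter). An illustrative sketch; the sample image is invented:

```python
# Box-filter smoothing: each pixel becomes the mean of its 3x3 neighborhood,
# clipped at the image borders.
def smooth(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

# A single bright pixel spreads into its neighbors after smoothing.
img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
print(smooth(img))
```

Real enhancement pipelines use weighted kernels (Gaussian, sharpening, edge detectors), but all follow this same neighborhood-convolution pattern.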
9.3 Design
Major elements of design and experimentation in
the area of human-computer communication are:
1. Implementation of graphics algorithms on
various graphics devices, including vector
and raster displays and a range of hardcopy
devices.
2. Design and implementation of experimental
graphics algorithms for a growing range of
models and phenomena.
3. Proper use of color graphics for displays;
accurate reproduction of colors on displays
and hardcopy devices.
4. Graphics standards (e.g., GKS, PHIGS,
VDI), graphics languages (e.g., PostScript),
and special graphics packages (e.g., MOGLI
for chemistry).
5. Implementation of various user interface
techniques, including direct manipulation on
bitmapped devices and screen techniques for
character devices.
6. Implementation of various shared file
interchange formats for information transfer
between differing systems and machines.
7. Working CAD systems.
8. Working image enhancement systems (e.g.,
at JPL for pictures received from space
probes).
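To give a flavor of the image enhancement systems in item 8: a basic step applied to low-contrast imagery such as raw space-probe frames is a linear contrast stretch, which rescales the occupied gray range onto the full display range. A minimal sketch in Python, assuming for simplicity a flat list of grayscale values:

```python
def contrast_stretch(pixels, out_min=0, out_max=255):
    """Linearly rescale grayscale values so the darkest input maps to
    out_min and the brightest to out_max, a basic enhancement step
    for low-contrast images."""
    lo, hi = min(pixels), max(pixels)
    if lo == hi:                       # flat image: nothing to stretch
        return list(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

# A dim scanline occupying only values 100..140 spreads to 0..255:
print(contrast_stretch([100, 110, 120, 140]))
```

Production systems typically use percentile endpoints rather than the raw min and max, so that a few outlier pixels do not compress the rest of the range.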
ACKNOWLEDGMENTS
Many people generously provided written comments in response to drafts of this report. Although it was
not possible to accommodate every comment in detail, we did take every comment into account in revising
this report. We are grateful to the following people for sending us their comments:
Paul Abrahams
J. Mack Adams
Robert Aiken
Donald Bagert
Alan Biermann
Frank Bonach
Richard Botting
Albert Briggs, Jr.
Judy Brown
Rick Carlson
Thomas Cheatham
Neal Coulter
Steve Cunningham
Verlynda Dobbs
Caroline Eastman
Richard Epstein
Frank Friedman
C. W. Gear
Robert Glass
Nico Habermann
Judy Hankins
Charles Kelemen
Ken Kennedy
Elliot Koffman
Barry Kurtz
Doris Lidtke
Michael Loui
Paul Luker
Susan Merritt
John Motil
J. Paul Myers
Bob Noonan
Alan Perlis
Jesse Poore
Terrence Pratt
Jean Rogers
Jean Sammet
Mary Shaw
J. W. Smith
Dennis Smolarski
Ed Upchurch
Garret White
Gio Wiederhold
Stuart Zweben
References
1. Abelson, H., and Sussman, G. Structure and
Interpretation of Computer Programs. MIT
Press, Cambridge, Mass., 1985.
2. Arden, B., ed. What Can Be Automated?
Report of the NSF Computer Science and
Engineering Research Study (COSERS).
MIT Press, Cambridge, Mass., 1980.
3. Denning, P. What is computer science? Am.
Sci. 73 (Jan.-Feb. 1985), 16-19.
4. Flores, F., and Graves, M. Education.
(Working paper available from Logonet, Inc.,
2200 Powell Street, 11th Floor, Emeryville,
Calif. 94608.)
5. Newell, A., Perlis, A., and Simon, H. What
is computer science? Science 157 (1967), 1373-
1374. Reprinted in Abacus 4 (Summer
1987), 32.
(Copied from Communications of the ACM, January 1989, Volume 32, Number 1, pp. 16-23.)