Grand Challenge Areas
After considerable discussion and debate, the following grand challenge areas were selected:
- Representation and modeling of complex systems,
- Collaborative problem solving,
- Machine learning and adaptive systems,
- Reasoning under uncertainty,
- Virtual worlds (reality), and
- Neurophysiological models of cognition.
Each of the six grand challenge areas is discussed in turn in this chapter. For the most part, these discussions include a description of the grand challenge area; research objectives over the short term (within the next 2 years), midterm (2 to 6 years), and long term (more than 6 years from now); ongoing leading-edge activities and organizations; needed resources delineated in terms of research skills, facilities, cooperative arrangements and opportunities, and level of effort; and opportunities for NRL.
Some subjects listed above may not be included in all discussions. The reader is referred to Table 1.1 and its accompanying discussion in Chapter 1 for an explanation of the cross-linkage of the grand challenge areas to the NRL priority topics.
The panel recommends that NRL give highest priority to investment and applications in the six grand challenge areas.
The panel's informal assessment of industrial interest in the six grand challenge areas is given at the end of this chapter.
Representation and Modeling of Complex Systems
Prior to the advent of the computer, traditional engineered systems (e.g., bridges, airplanes, and ships) were modeled by a mathematical formalism combined with a set of engineering heuristics for how to apply the theory to given situations. Today, we are faced with the design, construction, maintenance, and use of much more complex systems. These systems require problem representations that go beyond systems of equations, and modeling techniques that go beyond closed-form solution, numerical approximation, or numerical simulation. The dimensions of this complexity can be exemplified as follows:
- A system today is a heterogeneous system of subsystems, so that no one representation and modeling paradigm suffices.
- A system is typically controlled by a software-based supervisory system that is usually the single most complex subsystem and is best modeled in detail by discrete, logical models rather than continuous, physical ones.
- One or more human users are part of the overall system and require interactions with the other subsystems to allow for monitoring and control.
The grand challenge is to be able to model entirely in software a range of complex systems. Two specific application milestones that would validate this achievement are the following:
- The ability to design and evaluate in software a sophisticated new weapon system, including carrying platform, sensors, weapons, communications, control systems, and human decision makers. This "prototype before build" capability (smaller-scale examples of which include Boeing's use of computer-aided design (CAD), computer-aided software engineering (CASE), and computer-aided manufacturing (CAM) in the design of the 777) would help the Navy to stay even with rapidly changing threats, technologies, priorities, and budgets.
- The building of a fully autonomous undersea or air vehicle to perform one mission with reasonable generality.
Desirable research objectives in this area are as follows:
- Short term—Integration of discrete, continuous, and symbolic representations of models.
- Midterm—Validation of internal consistency and completeness and against external specifications; representation and explanation of complex models to humans; and computation-constrained, satisfactorily approximate solutions based on dynamic, variable-resolution models controlled by metamodels.
- Long term—Global optimization across complex models; and very high level languages for system design (including decision-support software) and theory of model design.
Ongoing Leading-Edge Activities and Organizations
Types of ongoing research activities in this area and organizations involved are as follows:
- Knowledge representation activities—Stanford University and Carnegie Mellon University,
- Simulation/modeling activities—Bolt, Beranek and Newman Laboratories,
- Software engineering activities—Defense Advanced Research Projects Agency (DARPA) knowledge-based systems application (KBSA) centers, Kestrel Institute, University of Southern California (USC)/ISI, Stanford University, Carnegie Mellon University, and Harvard University, and
- Hardware computer-aided design (CAD) activities—DARPA centers, Stanford University, and Carnegie Mellon University.
The necessary ingredients for research in this area are summarized below:
- Skills—Expert systems, simulation and modeling, automated software design and synthesis, computer-aided design, and human physiology.
- Facilities—Major computational resources to execute and validate complex models: distributed workstations plus parallel computer available over the network could suffice.
- Cooperation opportunities—Multiple groups must collaborate to model complex systems (e.g., a ship) because the expertise never resides totally in one organization. Two or more research groups could collaborate by modeling different subsystems of a problem.
- Level of effort—For small, theoretical tasks, one person could make progress; large, demonstration-oriented tasks would require a group of at least five, for example, one engineering system researcher, one simulation/modeling researcher, one application expert, one knowledge engineer, and one programmer.
Collaborative Problem Solving
Collaboration, or group work, is generally described as a social process used to perform tasks more effectively. These tasks include problem solving, idea generation, decision making, and conflict resolution. Collaboration includes both formal and informal methods. Formal methods can be characterized as structured collaboration sessions with
predetermined agendas and anticipated results. In contrast, informal methods are generally ad hoc in nature and constitute much of the daily practice of performing tasks in organizations (e.g., one seeks help from a colleague on an attribute of a problem one is trying to solve). Many researchers are emphasizing informal collaboration as a model of problem solving in organizations. Quite often, formal methods are used because of the size of the constituency needed to solve the problem, the diversity of opinions on its potential solution, and the anticipation of adversarial interactions. Although formal and informal methods are very different models of interaction, in actual practice in organizations the methods are often intertwined.
Understanding how people solve problems is requisite to determining whether or not technologies can aid in the process. A number of studies comparing problem solving with and without supporting technologies have been made over the past several years. Many of them have found that formal meetings using group decision support technologies improved the productivity of groups, shortened the time to final decisions, and in some cases quantitatively improved the decisions themselves. These studies have mainly considered formal meetings, where participants were schooled in the technologies, and therefore some researchers have questioned the validity of these studies. Sociologists, psychologists, anthropologists, and cognitive scientists have been characterizing the tasks of groups, but several issues have not been studied in depth. For example, most task characterizations do not include group activities that are important to the conduct of tasks but are not directly related to the tasks, such as learning and interpersonal relationship building. In addition, there has been little study of the impact of collaboration technologies on the problem solving task itself (primarily owing to the current state of the technology).
The grand challenge in collaborative problem solving is to provide
- More accurate descriptions of tasks and their interrelationships,
- A broader tool framework to support multiple decision-making models and dimensions,
- Understanding of the impact of group decision support system (GDSS) tools on the decision-making process,
- Understanding of the use of intelligent agents in the decision-making process, and
- Incorporation of corporate memory into the decision-making process.
Desirable research objectives can be summarized as follows:
- Short term—An example of GDSS is Ventana Corporation's Group Systems Tools, originally developed at the University of Arizona and now a commercial product. Numerous other commercial GDSS tools are in use in industry and government. Most of these commercial tools have been developed by using the literature as a guide, even though some of them were originally developed in research environments. At this stage of the technology, most of the systems are based on portrayals of commonly understood activities in organizations. Elements contributing to decision making, such as brainstorming and voting, are encapsulated in these tools, with the primary objective to preserve information and to force structure on the decision-making process.
Decision making has been characterized as having three dimensions: time (synchronous or asynchronous), locale (collocated or distributed), and group size. Research needs to be conducted to verify that these parameters are sufficient, or to add others, in order to describe more accurately the impact of these dimensions on decision-making processes and then reflect that research in the technologies.
- For example, does a set of tools designed for collocated groups function similarly for distributed groups with the addition of video and audio support, and do those additions properly rectify the lack of collocation?
- Midterm—Midterm research is needed to more properly formulate cognitive models of decision making in terms of the true interactions of humans and organizations, including studying the impact of the technologies themselves on the process. The question of the utility of formal processes also needs to be addressed. Researchers in organizational behavior and management should understand the real needs of formal processes of decision making, since these processes were developed to preserve information, force rigor on the process, and collocate stakeholders to ensure unanimity in the decision. Decentralized organizations and the dissolution of hierarchical management structures will have significant impacts on technologies that were designed for hierarchical decision processes.
- Long term—Long-term research implications are anticipated to be quite complex. Questions that could be addressed in long-term research include the following: If decision making is supported by intelligent agents incorporated into the tools, what is the impact on the process (e.g., in biasing of decisions or change of resolution time)? If every element of the process and the results of decisions is preserved (including analysis of all interactions), will decisions ever be made, and if so, will they be more conservative? If elements of interaction can be appropriately modeled and described, will decision making revert to the highest levels of an organization, or, to put it another way, if decisions can be made by machine, will empowerment of the employees become a moot point?
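The three dimensions of decision making noted under the short-term objectives above (time, locale, and group size) can be captured as a simple record, which also makes research questions like the collocation one concrete. This is an illustrative sketch only; the class and field names are hypothetical and not drawn from any GDSS product.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class MeetingContext:
    """One point in the hypothesized design space of group decision making."""
    synchronous: bool  # time dimension: synchronous vs. asynchronous
    collocated: bool   # locale dimension: collocated vs. distributed
    group_size: int


def needs_av_support(ctx: MeetingContext) -> bool:
    """Distributed synchronous groups are the case for which video/audio
    support is hypothesized to substitute for collocation."""
    return ctx.synchronous and not ctx.collocated
```

A researcher could enumerate such contexts to check whether a tool set validated in one region of the space (e.g., collocated, synchronous, small groups) has ever been evaluated in another.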
Ongoing Leading-Edge Activities and Organizations
Current leading-edge research activities in this area are being carried out by the following organizations:
- Georgia Tech and the University of Arizona, Centers for Information Management Research;
- Xerox Palo Alto Research Center;
- Lotus Development Corporation;
- Aarhus University, Denmark;
- University of Michigan;
- Hewlett Packard Laboratories; and
- Indiana University.
A partial list of current major users of collaborative problem solving technology includes the following:
- Army, Information Systems Command;
- Many major corporations, including IBM, BellSouth, and Boeing; and
- Numerous universities, including the University of Arizona, Georgia Tech, Indiana University, and the University of Georgia.
An effective research program in collaborative problem solving would require skills in the following areas: cognitive science, sociology, anthropology, computer science, organizational behavior, and management information systems.
Opportunities for NRL
A number of highly leverageable technologies are applicable to collaborative problem solving, for example, in meetings. Traditional meetings are often frustrating and unproductive because of factors such as lack of group consensus, poor notetaking, and low group participation. Many times, groups become sidetracked. In addition, some participants may have a hidden agenda, while others may be apprehensive about participating
because they are uncertain how their ideas or comments will be received. It has been found that if GDSS, also referred to as Electronic Meeting Systems, is used to apply information technology to the meeting environment, group productivity, efficiency, and satisfaction are improved.7 When this technology is used efficiently, meetings ultimately culminate in an effective decision-making process by the given group.
This technology can be used in any type of decision-making process, including procurement decisions, strategic planning activities, planning and implementing of total quality management (TQM) principles, system and software requirement definition and representation, and war gaming (both adversarial and cooperative).
To take best advantage of such technology opportunities, NRL would benefit from having or developing the following critical resources:
- Ability to conduct large-scale, long-term case studies;
- Ability to incorporate AI research results into collaborative problem-solving technologies; and
- Connections with major research universities working in the area (e.g., through cooperative shared agreements).
Machine Learning and Adaptive Systems
A grand challenge for engineering is to design systems that can adapt to and learn from their operating environments. While learning and adaptation are basic capabilities in biological systems, they have been extremely difficult to implement in synthetic, engineered systems. Only recently have the memory and computational requirements necessary for minimal learning and adaptation become available. These advances have resulted in an explosion of ideas and research in machine learning.
Specific successful implementations of learning and adaptive systems exist in the areas of supervised learning (e.g., the ability to generalize from empirical input/output data), unsupervised learning (e.g., the ability to categorize unlabeled data using generic clustering concepts), and adaptive filtering and control (e.g., where plant structural models are known and plant dynamics need to be estimated).
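For concreteness, the first two settings can be sketched in a few lines of Python. The data, function names, and one-dimensional features here are illustrative only, not drawn from the report; real systems would use richer models and higher-dimensional inputs.

```python
def nearest_centroid_fit(samples):
    """Supervised learning: generalize from labeled (value, label) pairs
    by storing one centroid (mean) per class."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}


def nearest_centroid_predict(centroids, x):
    """Classify a new value by its nearest class centroid."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))


def kmeans_1d(points, k, iterations=20):
    """Unsupervised learning: categorize unlabeled data by iteratively
    refining k cluster centers (one-dimensional k-means)."""
    centers = sorted(points)[:k]  # crude initialization
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(centers[j] - p))
            clusters[nearest].append(p)
        # Recompute each center as its cluster mean; keep empty clusters' centers.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

The supervised sketch needs labels at training time; the unsupervised one discovers structure from the data alone, which is exactly the distinction drawn above.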
In spite of successful applications and a growing body of theory, machine learning and adaptive systems suffer from having poor models of the underlying problems being solved. There are no universal criteria for characterizing one learning problem as being more difficult than another (e.g., in what quantitative measure is learning Japanese easier or more difficult than learning to discriminate sonar signals?). While some measures such as metric entropy have been proposed, they are theoretical constructs that cannot be applied directly to practical problems. The present situation, in which many methods such as neural networks, case-based reasoning, and more traditional statistical methods compete on the basis of hyperbole and performance on toy problems, cannot continue.
The grand challenge in machine learning and adaptive systems is to identify a scientific methodology and theory that characterize classes of practical learning and adaptation problems. These characterizations would then be used to evaluate performance, select solution methods, and eliminate heuristics as much as possible.
Progress in this area would result in better validation techniques and a guarantee of performance in the numerous autonomous applications for which learning and adaptation are currently proposed.
In the three time frames of interest, several research objectives can be suggested:
- Short term—A number of Navy applications are suitable for solution by machine learning and/or adaptive methods. Apply various competing existing learning and adaptation techniques to these applications and build a database of performance results.
- Midterm—Study the performance database and empirically categorize which applications are best solved by which methods. Develop models for learning and adaptation problems that explain the database results. Develop the appropriate mathematical theory to support the models.
- Long term—Apply and correct the mathematical models that characterize learning problems on new applications to verify the theory.
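The performance database proposed in the short- and midterm objectives above could begin as nothing more than accumulated (application, method, score) triples from which the best-performing method per application class is read off. This is a hypothetical sketch; the application names, methods, and scores are invented for illustration.

```python
def best_method_per_application(results):
    """results: iterable of (application, method, score) triples,
    higher score is better.  Returns {application: best method}."""
    best = {}
    for app, method, score in results:
        if app not in best or score > best[app][1]:
            best[app] = (method, score)
    return {app: method for app, (method, _) in best.items()}


# Illustrative entries, not real benchmark data:
results = [
    ("sonar discrimination", "neural network", 0.91),
    ("sonar discrimination", "nearest neighbor", 0.84),
    ("fault diagnosis", "case-based reasoning", 0.78),
    ("fault diagnosis", "neural network", 0.71),
]
```

The midterm modeling task would then be to explain *why* the winning method wins on each application class, rather than merely tabulating the outcome.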
Ongoing Leading-Edge Activities and Organizations
Types of leading-edge research activity and organizations involved include the following:
- Empirical evaluation of classification techniques (MIT Lincoln Laboratory, R. Lippmann),
- Statistical interpretations of neural networks (Brown University, S. Geman),
- Statistical experiments on neural networks (Oxford University, D. Ripley),
- Structural properties of learning problems (AT&T Bell Laboratories, Holmdel, V. Vapnik), and
- Minimal description length learning criterion (IBM San Jose, J. Rissanen).
For any effective research program in machine learning and adaptive systems, the types and combinations of resources required can be summarized as follows:
- Skills—Mathematicians, computer scientists, statisticians, engineers, and cognitive scientists need to cooperate in this effort.
- Facilities—Access to Navy-relevant applications is essential. Significant computer power is needed to solve realistic applications and perform simulations. Such facilities already exist at NRL.
- Cooperative arrangements—Arrangements to share data and approaches with other researchers working on similar empirical investigations are essential. Empirical data can be collected at different sites, providing efforts are coordinated.
- Level of effort—About three senior researchers and six assistants would be required.
Opportunities for NRL
NRL already has a number of efforts in machine learning and adaptive systems. Focusing those activities with the goal of discovering models for learning requires better coordination but not necessarily more resources. Overall, the field of machine learning is full of techniques but short on methodology. NRL can have significant impact on the scientific application of existing techniques on real applications problems.
Reasoning Under Uncertainty
Modern problems, including many that face Navy planners, are increasingly complex. These complexities can be characterized as follows:
- Problems have broad, possibly global, scope;
- There are subtle, time-varying multiple objectives, and different constituencies in the United States and abroad often have different sets of objectives;
- Some of the simplest and most desirable solutions are not attainable within the constraints that will exist; and
- Effective solutions may require involvement of forces and organizations outside of those that have traditionally cooperated.
Methods are available for computer-aided solution of even highly complex problems, if (1) they are governed by stable constraints or rules; (2) they are solvable using well-understood processes and resources, under control of the responsible manager; (3) they are analogous to similar situations within the knowledge or experience of the manager and organization; and (4) they are subject to little or no uncertainty in knowledge of the circumstances.
The grand challenges address aspects of the solving of complex problems, through representation and modeling, collaborative solution, machine learning, and virtual worlds. While few complex problems are without significant uncertainties, those uncertainties are commonly suppressed in the interest of making the problem easier, or, at best, the problem is solved for a range of parameters that are presumed to include the uncertainties to a satisfactory degree. While such methods are suitable when the nature of the solution is invariant to the uncertainty, they are not satisfactory when uncertainty is so great that totally different approaches to solution could be required to meet the potential demands.
Uncertainty takes many forms in solving complex military problems. Usually, knowledge is fragmentary. One may know the classes and counts of ships constituting an enemy task force, but not the particular vessels, their state of readiness, ammunition supplies, and other critical information. On the other hand, even the information one supposedly has in hand regarding enemy resources may be erroneous. Erroneous assessments of one's own resources or of enemy resources have historically been common in military actions. Union General McClellan's belief that Confederate troops near Richmond in 1862 outnumbered his own, though quite incorrect, led to a reluctance to attack and allowed Lee's much smaller army to chase him away from threatening Richmond. Further uncertainty may be imposed by deliberately deceptive actions—anything from decoy weapons to radio reporting of nonexistent activity. Overall, the battle manager must deal with uncertainties in everything from the capabilities of the opposing forces to the objectives of the enemy (or even of some of the forces reporting to him).
Not all of the uncertainty needs to be dealt with at the same time. In military activity, uncertainty prior to action often occurs in understanding the circumstances and objectives of an opponent or in assessing one's own capabilities and limitations. During military actions, a high level of confusion in reports of sightings and actions is not uncommon. The top-level realities of the drama are seldom perceived correctly by the individual players, and even the sequence of events is sometimes unclear from first-hand reports.
Preaction knowledge of the enemy is based on surveillance or military intelligence. Knowledge of one's own resources relies on complete, unbiased reporting of readiness by one's own commanders. During military action, new information arrives through the agency of command, control, and intelligence (C2I) systems.
It is probably easiest to find examples of uncertainty in intelligence activities. Examples abound from earlier wars; for example, the misreporting of cruisers as aircraft carriers (or the converse) was not uncommon in World War II. During the Cold War, continuing intelligence interest in the numbers and capabilities of Soviet intercontinental ballistic missiles (ICBMs), sea-launched ballistic missiles (SLBMs), and other high-technology weapons was thwarted by the secrecy of the Soviet Union.
It has already been established in DOD long-term intelligence activities that rule-based expert systems have utility for weapon system identification using familiar modes of observation. There is much less experience and
little confidence in approaches for reasoning under uncertainty in the near-real-time decisions of military tactical management. There, both the situation itself and reports about it are changing. If battle managers are to make rational decisions, they must reason under uncertainty and with time constraints.
An idealistic way of phrasing the grand challenge in this area is to provide means whereby a military commander can make optimal use of all available valid and timely information. Implied in this objective is not only the ability to recognize uncertainty, but also the ability to use redundancy in data from reliable independent sources to reduce that uncertainty.
This is a multifaceted problem area. The nature of its solutions may, likewise, be manifold. An approach in which a powerful computer suggests a specific action and adds "trust me!" is clearly not acceptable. The knowledge and wisdom of human decision makers must be incorporated in the outcomes. If there are many human experts involved, some form of "calibration" of their opinions and judgment is desirable.
- Short term—In the short term, it is difficult to identify specific objectives. For one thing, the scope of the set of problems is so great that subsets of interest to the corporate Navy need to be defined. One of the possible subareas would be the application of belief theory (of which the Dempster-Shafer approach may be the best known) to characterize the confidence of estimates. This could be used, for example, to calibrate intelligence estimates based on (belief) characteristics of the estimators. It could also be used by a decision maker to determine, in comparison to contemporary or historical norms, if his or her decision is aggressive or nonaggressive, timely (or precipitate or slow), and where its weakest aspects may lie.
For intelligence applications, applied research could address reduction of uncertainty in both automated and human reports. If, for example, the limitations of sensors are properly incorporated in fusing their data, more accurate assessment should be possible.
We have traditionally relied on human interpreters to scan sensor imagery. This too is a case of reasoning under uncertainty, since the objects of interest may be hidden or camouflaged and the image quality may be naturally limited by range, bandwidth, and/or atmospherics. Large differences are seen in human ability to recognize militarily significant objects in infrared imagery. However, whether the next logical step would be to provide automation aids for human interpreters or to scan and analyze imagery electronically is not clear. Although accuracy is uncertainty-limited in this application, credible research must deal with realistic constraints and objects.
A possible set of objectives can be delineated as follows for the midterm and long term:
- Midterm—Devise metrics appropriate to the comparison of traditionally incommensurate information, and devise computer-applicable representation algebra(s) that permit manipulation of currently incommensurate sets of related data so as to reduce uncertainty in the individual data, preferably using methods faster than enumeration.
- Long term—Devise computational methods that permit the combination of incomplete and/or incorrect data from multiple contemporary observations, context description, and historical experience to demonstrably reduce the uncertainty in situations or events described by current data.
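The belief-theory subarea suggested under the short-term objectives above centers on Dempster's rule of combination: mass functions from two independent sources are multiplied subset by subset, and the mass assigned to conflicting (disjoint) conclusions is renormalized away. A minimal sketch follows; the ship-classification frame and mass values are illustrative, not real intelligence estimates.

```python
def combine(m1, m2):
    """Dempster's rule of combination.  m1 and m2 map frozensets
    (subsets of the frame of discernment) to masses summing to 1."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            intersection = a & b
            if intersection:
                combined[intersection] = combined.get(intersection, 0.0) + ma * mb
            else:
                conflict += ma * mb  # disjoint conclusions: conflicting mass
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # Renormalize the surviving mass by the non-conflicting fraction.
    return {s: m / (1.0 - conflict) for s, m in combined.items()}


# Two sources reporting on whether a contact is a cruiser or a carrier;
# mass on the whole frame represents "don't know."
frame = frozenset({"cruiser", "carrier"})
m1 = {frozenset({"cruiser"}): 0.6, frame: 0.4}
m2 = {frozenset({"cruiser"}): 0.5, frozenset({"carrier"}): 0.3, frame: 0.2}
```

Combining these two sources raises belief in "cruiser" above either source alone while retaining some mass on the full frame, which is the kind of calibrated confidence statement the short-term objective calls for.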
Ongoing Leading-Edge Activities and Organizations
Types of activities in this area, and organizations and individuals involved, include the following:
- Information fusion—TRW, and
- Evidence theory—George Mason University, D. Schum.
For an effective research program, the following kinds of resources are needed:
- Mathematics expertise in representational and computational algebras,
- Computer application expertise in numerical analyses and computer problem solving,
- Military strategic or tactical data streams as well as business data suitable for use as source, and
- High-performance computing power.
These resources are available or accessible to NRL by cooperative arrangements.
Virtual Worlds (Reality)
This scientific and engineering area is concerned with the development of knowledge-based multimedia interfaces that support real-time interaction with true three-dimensional input and output devices.
The current state of the art in commercial user interfaces emphasizes the use of two-dimensional windows, with which users interact by using two-dimensional devices such as "mice." Even users of three-dimensional graphics workstations usually view graphics whose projections appear in two-dimensional windows and are manipulated under mouse control. Research in three-dimensional user interfaces addresses the use of interactive three-dimensional graphics, coupled with true three-dimensional stereo displays and three-dimensional interaction devices that monitor the user's movements in three-space, as well as three-dimensional audio and haptic feedback. The goal is to harness the physiological capabilities and training that enable us to perform physical tasks effectively in three dimensions and apply them to develop effective user interfaces for computer-based tasks.
For an information-intensive user interface to be effective for a wide range of situations and users, it must be able to
- Design and present information to people on-the-fly, using multiple output media and
- Understand user input couched in multiple input media. To meet these requirements, a system must
- Generate technical material in real time in each individual output medium (written text, speech, static graphics, and animation),
- Understand technical material in real time in each individual input medium with performance equal to that of a human expert, and
- Coordinate real-time generation and understanding of multimedia interactions with humans, combining multiple input and output media, with performance equal to that of a human expert.
The set of suggested short-term, midterm, and long-term objectives considered in totality is as follows:
- Develop real-time operating system support for highly parallel asynchronous input (from large numbers of three-dimensional trackers) and output (to multiple display modalities).
- Build effective "augmented realities" that enrich the user's existing environment with additional information, merging synthesized material with what the user normally sees, hears, and feels—overlaying or replacing it, as appropriate.
- Develop better hardware that produces high-quality, high-resolution, wide-field displays (graphics, sound, haptic, and temperature) and tracking (hand, body, and eye). For example, there is a key need for "see-through" displays whose images are overlaid on the environment to build "augmented realities." A general-purpose visual display technology must allow differential visual accommodation, corresponding to real and virtual objects at different distances in the same image. It must also be able to perform full visible-surface determination with all objects, both real and virtual. Virtual objects should be able to occlude real objects, and real objects should be able to occlude virtual ones.
- Determine how to map abstract task domains effectively to a three-dimensional environment in which it will be possible to visualize and manipulate objects in the domain.
- Determine how to take advantage of the richness of three-dimensional gesture to reduce reliance on icons to express actions in current user interfaces. For example, rather than moving an item to the "trash can," it may be disposed of by using an appropriate gesture.
- Determine how to ensure that three-dimensional user interfaces will be usable, especially in an environment that supports end-user programming and customization. The problem is that in a world of whole-body computer interaction, there may no longer be any distinction between human factors (as usually understood) and the human factors of computer interfaces. The existing hardware that limits capabilities (and that also limits mistakes) will be gone.
- Apply AI techniques (e.g., interactive knowledge-based generation and understanding) to design virtual worlds for visualization automatically.
- Design high-quality multimedia systems, but only after designing systems that function well in a single medium. Proceeding in this fashion is particularly important with regard to knowledge-based information presentation systems. Improvements are needed in the ability to perform high-quality generation and understanding in individual media. There is much work to be done in generation and understanding of individual media, ranging from those media that have long been explored by AI researchers (e.g., written text and speech) to less well-charted terrain (e.g., graphics, audio, and haptics).
- Develop methods to predict and evaluate presentation quality. The system should be able to predict the quality of a presentation in the course of designing it (and, on the basis of these predictions, to refine the presentation until it is adequate). This requires the ability to evaluate the presentation, estimating how it "will" affect the user (and evaluating the user's response, estimating how it "has" affected the user). The ability to evaluate the presentation makes possible time-quality tradeoffs. For example, in a crisis situation, a timely rush job might be preferred over a later, higher-quality presentation.
- Develop generation and understanding capabilities for temporal media (media in which information content is presented over time in a way that is controlled explicitly by the producer), such as animation, speech, and audio. Issues here include how to phrase information (e.g., for maximal comprehension). For example, the ability must be developed to generate output and understand input that communicates complex temporal relations.
- Develop facilities for coordinated generation and understanding of multiple media. The key challenge is to ensure that the different media reinforce rather than interfere with each other.
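The mutual-occlusion requirement for augmented-reality displays described above reduces, at its core, to a per-pixel depth comparison between the sensed real scene and the rendered virtual scene. The following is a minimal sketch of that idea, assuming a depth map of the real environment is available alongside the camera image; the function and data names are illustrative, not drawn from the report.

```python
import numpy as np

def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Merge a real camera image and a rendered virtual image by depth.

    Wherever the virtual surface is nearer than the sensed real surface,
    the virtual pixel wins (virtual occludes real); otherwise the real
    pixel shows through (real occludes virtual).
    """
    # Boolean mask: True where the virtual object is in front.
    virt_in_front = virt_depth < real_depth
    # Broadcast the mask over the color channels and select per pixel.
    return np.where(virt_in_front[..., None], virt_rgb, real_rgb)

# Tiny 2x2 example: the virtual object is in front at only one pixel.
real_rgb = np.zeros((2, 2, 3))                    # black real scene
virt_rgb = np.ones((2, 2, 3))                     # white virtual object
real_depth = np.full((2, 2), 1.0)                 # real surfaces at 1.0 m
virt_depth = np.array([[0.5, 2.0], [2.0, 2.0]])   # in front only at (0, 0)
img = composite(real_rgb, real_depth, virt_rgb, virt_depth)
```

In practice the hard part is not this comparison but acquiring an accurate, registered real-world depth map in real time, which is why the report calls the display hardware a key need.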
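The time-quality tradeoff described for presentation design is naturally framed as an anytime algorithm: refine a draft until its predicted quality is adequate or the deadline expires, and return the best draft found so far. The sketch below is a hypothetical illustration of that loop; the `refine` and `evaluate` callables stand in for the knowledge-based design and quality-prediction components the report calls for.

```python
import time

def design_presentation(deadline_s, refine, evaluate, initial, threshold=0.9):
    """Anytime presentation design loop.

    Repeatedly refine the draft, keeping the best candidate seen, until
    predicted quality reaches `threshold` or `deadline_s` seconds pass.
    In a crisis, a short deadline yields a timely rush job; a longer one
    yields a higher-quality presentation.
    """
    best, best_q = initial, evaluate(initial)
    start = time.monotonic()
    while best_q < threshold and time.monotonic() - start < deadline_s:
        candidate = refine(best)
        q = evaluate(candidate)
        if q > best_q:
            best, best_q = candidate, q
    return best, best_q

# Toy example: each refinement step raises a numeric "quality" by 0.2.
draft, quality = design_presentation(
    deadline_s=1.0,
    refine=lambda d: d + 1,
    evaluate=lambda d: min(1.0, d * 0.2),
    initial=0)
```

The design choice worth noting is that the loop always holds a usable (if rough) presentation, so the system degrades gracefully under time pressure rather than failing outright.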
Ongoing Leading-Edge Activities and Organizations
Leading groups and their area of activities include the following:
- Brown University—three-dimensional user interfaces (visual displays);
- Columbia University—three-dimensional UI, virtual worlds, multimedia UI (MM-UI);
- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI, Saarbrücken)—MM-UI;
- IBM T.J. Watson—virtual worlds;
- NASA Ames—virtual worlds;
- University of North Carolina—virtual worlds, three-dimensional UI;
- University of Pennsylvania—three-dimensional UI, virtual worlds;
- University of Washington—virtual worlds; and
- Xerox Palo Alto Research Center—three-dimensional UI.
In order to do effective research in this area, the following types of resources are required:
- Skill base—Computer scientists, cognitive scientists, electronic engineers, optoelectronic engineers, and application area specialists.
- Equipment—High-performance, three-dimensional workstations and three-dimensional interaction and display devices (e.g., graphics, sound, and touch).
These resources exist within or are accessible to NRL through cooperative arrangements.
Neurophysiological Models of Cognition
Current approaches to human-computer interfaces are largely based on traditional sensory capabilities, that is, sight, sound, and touch. There is much commercial and research activity exploring those interfaces. This grand challenge goes beyond those ideas and proposes to explore internal neurophysiological representations of knowledge, with the ultimate goal of using such representations for direct low-level computer interactions with the human nervous system. By the same token, understanding the internal representation of knowledge is likely to result in better computer implementation of learning, cognitive, and other intelligent functions.
This challenge is timely. The past two decades have witnessed significant progress in our understanding of the biological mechanisms for memory, learning, and sensory processing. Most of that progress has been at the neuronal level, whereas the correlations with higher-level functions and the representations useful for cognition and intelligence are not yet understood. Accelerating the study of those correlations will allow more direct human-computer interfaces to be implemented.
Notable examples of these ideas are already being explored. Semiconductor chip implants in neuronal tissue are being studied as motor control interfaces (at Dartmouth Medical Center and Stanford University). Moreover, electroencephalographic (EEG) readings of brain activity have allowed researchers to interface thought patterns directly with computer input (at Fujitsu Laboratories in Japan). These are but two examples of researchers attempting to bridge the gap between low-level neural activity and higher-level functionality.
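A concrete sense of how such EEG-based interfaces work can be given by the standard band-power feature: simple brain-wave interfaces typically threshold the power of an EEG channel in a given frequency band (e.g., the 8-12 Hz alpha band, which rises in relaxed, eyes-closed states). The sketch below is illustrative only; the function, parameters, and synthetic signal are assumptions for the example, not details from the report.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Power of a signal within a frequency band [lo, hi] Hz, via the FFT.

    A crude but common EEG feature: the summed squared magnitude of the
    FFT bins falling inside the band, normalized by the window length.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].sum() / len(signal)

# Synthetic 1-second "recording": a strong 10 Hz (alpha) component plus
# a weaker 25 Hz (beta) component, sampled at 256 Hz.
fs = 256
t = np.arange(fs) / fs
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)
alpha = band_power(eeg, fs, 8, 12)    # dominated by the 10 Hz component
beta = band_power(eeg, fs, 13, 30)    # dominated by the 25 Hz component
```

An interface of the kind attributed to Fujitsu could, in principle, map a sustained rise in such a band power to a discrete computer input, though the report does not describe the actual signal processing used.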
Continued progress will require collaboration among neurophysiologists,
computer scientists, electrical engineers, mathematicians, and psychologists. Success will enable the development of more efficient human-computer interfaces that occur at a lower level and the design of better-performing artificial intelligence systems.
The following objectives are suggested for research in this area:
- Short term—Organize an interdisciplinary NRL team to select a specific cognitive function and/or knowledge representation, formulate models for that phenomenon, and design experiments and equipment to test those models.
- Midterm—Perform experiments to validate the hypothesized models; modify and retest the models as needed. Develop novel interface technology in parallel with experimentation.
- Long term—Implement technology to use verified models for low-level human-computer interfaces in Navy applications.
Ongoing Leading-Edge Activities and Organizations
Types of research and institutions and individuals involved include the following:
- Neurophysiological models of the ear (N. Kiang, Massachusetts General Hospital),
- Neural chip implants (J. Rosen, Dartmouth Medical School and White River Junction VA Hospital; G. Kovaks, Stanford University),
- Silicon retina and ear (C. Mead, California Institute of Technology),
- Brain wave-computer interface (Fujitsu Laboratories, Japan), and
- Functional electrical stimulation and cardiomyoplasty (various engineering and medical schools).
For effective research in this area, the following kinds of resources are needed:
- Skills—This research is best carried out by teams whose members are conversant in neuroscience, physiology, signal processing, control theory, computer science, mathematical modeling, and instrumentation.
- Facilities—An effort encompassing all aspects of the challenge would most likely consist of team members who have access to dedicated facilities. Wet laboratories for experimenting with live tissue might be best located in hospitals or medical schools. Modeling and computing facilities can be off-site. Fabrication of interface instruments will probably require machine room capabilities typically found at NRL and other engineering research laboratories.
- Cooperative arrangements—Arrangements to share data and approaches with other researchers working on similar empirical investigations are essential. Empirical data can be collected at different sites, provided that the collection is coordinated.
- Level of effort—Five senior scientists distributed over the technical areas, ten research associates, and five technicians are required for a sustained multiyear effort.
Opportunities for NRL
NRL has significant in-house expertise in neural networks, control, cognitive science, and instrumentation. With unique access to Navy applications, the laboratory can develop its research program in this direction and play a leading role in future opportunities as the field opens up.
Industrial Interest in Grand Challenge Areas
The panel's informal assessment of
industrial interest in the six grand challenge areas is as follows:
- Representation and modeling of complex systems—considerable and increasing interest by industry,
- Collaborative problem solving—considerable and increasing interest by industry,
- Machine learning and adaptive systems—significant interest and investment by industry,
- Reasoning under uncertainty—some interest by industry, but no clear trend apparent,
- Virtual worlds (reality)—significant interest and investment by industry and a growing consumer and commercial market envisioned by industry, and
- Neurophysiological models of cognition—little interest by industry.