2
Push Factors

Progress in science and technology can come from any direction at almost any time, but there is fairly broad agreement that major developments in the next 10 years are most likely to come from within or at the intersection of three broad fields: biological science and engineering; materials science and technology; and computer and information science and technology. Each is characterized by an extremely rapid rate of change of knowledge; each has obvious and wide utility; and each will benefit from advances in the others, so that the potential for synergy among them is particularly great. For example,

  • Sequencing the human genome would not have been possible without the enormous improvements in computational capacity in the last few decades.

  • Those computational advances would not have been possible without improvements in materials and materials processing techniques.

  • What we have learned about interfacial phenomena (physicochemical behavior within a few molecular lengths of the boundary between two phases) in biological systems is contributing to the development of new man-made materials, which, in turn, have allowed us to grow functional biological tissues.

Therefore, in considering “push” factors, the committee has focused on these three fields, identifying the subfields within each that seem particularly ripe for major advances or breakthroughs in the next decade. To ensure that this division into fields does not neglect the strong potential for synergy between the fields, the committee’s discussions and the report pay particular and deliberate attention to the ways in which progress in any one field will benefit from progress in the others.

BIOLOGICAL SCIENCE AND ENGINEERING

Advances in molecular and cellular biology lead the list of changes in the biological sciences if for no other reason than that they have opened up new fields. But how the new understanding of molecular and cellular structures and events is used in health care and agriculture, for example, depends upon other advances as well. In health care, changes are afoot in diagnostics, drug design, tissue and organ growth, and artificial organs, particularly those known as hybrid organs. In agriculture, advances in the understanding of nutrition and pest control, as well as increasing concern about the environment, guide strategies for modifying organisms to increase the value of foods and to decrease the environmental insult that accompanies their growth.

The field of biology is progressing rapidly because the scientific opportunities are many, because advances in materials and in computer and information science, and technologies based on them, have enabled the exploitation of new opportunities in biological research, and because public and private funding has increased quickly. The following sections discuss some of the important areas of advance in biology, focusing especially on those that are enabled by or interact with contributions from nonbiology fields, or that are methodologies enabling a broad range of biological research, or both. These include macromolecule microarrays (or gene chips), synthetic tissues and hybrid organs, and microsensors. The impact of genomics on the genetic modification of plants and animals is also discussed.

Molecular and Cell Biology

Sequencing of the human genome has been nearly completed, setting the stage for the next set of advances: understanding the role of genes in health and disease and using that knowledge to improve screening, diagnosis, and treatment of disease. Since the fundamental structure of DNA has been known for some time, as have methods for identifying genes and their chromosomal location, the breakthroughs that allowed, in the brief span of a few years, the sequencing of the human genome and the genomes of other plant and animal species have been largely in the development of experimental and computational methods for extremely rapid data generation and analysis, as well as in the management of enormous banks of data. DNA sequencing rates doubled roughly every 29 months in the mid-1980s; by 2000 they were doubling every 13 months.
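
To put those doubling times in perspective, a short back-of-the-envelope calculation (illustrative only, not an analysis from this report) converts them into annual and per-decade growth factors; the 13-month figure compounds to roughly a 600-fold increase in throughput over a decade.

    # Growth implied by a sequencing-rate doubling time (illustrative arithmetic only).
    def growth_factor(doubling_time_months, period_months):
        return 2 ** (period_months / doubling_time_months)

    for label, doubling in [("mid-1980s", 29), ("circa 2000", 13)]:
        per_year = growth_factor(doubling, 12)
        per_decade = growth_factor(doubling, 120)
        print(f"{label}: doubling every {doubling} months -> "
              f"{per_year:.2f}x per year, about {per_decade:,.0f}x per decade")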

Sequence data will, however, be only a part of the accelerated flow of information during the next decade, and perhaps not the main part. The new techniques for rapid data generation, storage, and analysis of DNA, proteins, and other molecules and cells are providing the basis for various commercial applications. Entire industries have emerged, perhaps the most notable being the biochip industry, whose diverse technological infrastructure encompasses imaging, materials, and a range of information and computational technologies. This section reviews some of the areas of research that will be contributing to our knowledge of disease and health and to better technologies for diagnosis and treatment.

Macromolecule Microarrays

Macromolecule microarrays, or biochips, are surfaces, usually of a size comparable to a microscope slide, to which an array of extremely small clusters of macromolecules is attached by one of a number of techniques. The small size of each cluster—no more than a microdot—allows thousands of these clusters to be arrayed on a single chip. These chips can then be used for a variety of purposes.

The availability of whole genome sequences has enormously increased the utility of DNA microarrays, and it is now routine to assay simultaneously the expression levels of thousands or tens of thousands of genes on a one-square-centimeter chip and to repeat the assay several times under changed environmental conditions. Such assays (also called expression profiles), each involving tens of thousands of genes, can be performed in a day or less, and they can be automated and run in parallel, allowing the determination in just a few days of expression profiles of dozens of cell types exposed to potentially hundreds of different ligands (drugs, toxins, and so on). Since it may be that a relatively small subset of genes (perhaps as few as several hundred) is of critical importance for understanding the origin of cellular changes induced by external or internal signals, this expression profiling is likely to be used much more frequently in clinical situations than complete genotyping. As a consequence, there is likely to be a much greater focus in the next 10 years on this aspect of genomics—and its related commercial and clinical activity—than on complete genotyping.
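
The computation at the heart of such an assay is simple even at this scale: for each gene, compare its measured expression under a test condition with a reference condition and flag large shifts. The sketch below is purely illustrative; the gene names, expression values, and 2-fold threshold are invented.

    # Illustrative sketch: log2 fold changes from toy expression measurements.
    # Gene names, values, and the 2-fold threshold are hypothetical.
    from math import log2

    reference = {"geneA": 120.0, "geneB": 300.0, "geneC": 55.0, "geneD": 800.0}
    treated   = {"geneA": 480.0, "geneB": 310.0, "geneC": 12.0, "geneD": 790.0}

    for gene, ref_level in reference.items():
        fold = log2(treated[gene] / ref_level)
        status = "changed" if abs(fold) >= 1.0 else "unchanged"   # at least a 2-fold shift
        print(f"{gene}: log2 fold change = {fold:+.2f} ({status})")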

Whether the number of genes causing major diseases is in the hundreds or the thousands, studying gene function and complex cellular behaviors (such as cell cycle transitions, serum effects, and heat shock) in the context of the whole cell or organism may be quite complex and call for large microarrays. It is likely that the way cells interpret extracellular signals is more complex than targeting a single gene and involves hierarchical patterns of protein-protein interactions. For example, defining a ligand for a receptor would be very useful, but transmission of the signal may require specific regulatory factors (such as G proteins) to specify an appropriate biological response.1 The question is whether biochip development will enable such analyses.

1  

G proteins are in receptors on the surface of cells. Their reaction with a hormone or neurotransmitter is a signal that triggers certain intracellular processes, such as gene expression or changes in metabolic pathways.

In addition to DNA microarrays, other kinds of biochips are being developed to serve as miniature chemical laboratories—“labs on a chip”—having a characteristic feature size of tens of microns. While DNA microarrays are used primarily to monitor changes in patterns of gene expression, a lab on a chip can be used to fractionate, mix, or otherwise manipulate microliter quantities of chemicals or small populations of cells. In addition, versatile chip technologies are being developed that can be used to assay for a wide variety of ligands, but they typically involve hundreds rather than thousands of targets.

There is no question that the field is growing extremely rapidly. There are some 200 companies with total gross sales that grew from $272 million in 1998 to $531 million in 2000, and that figure is projected to increase to more than $3.3 billion by 2004.

New Methods of Detection in Microarrays

New methods of detection will play an essential role in expanding the applications of microarrays. Quantum dots are a very promising way of identifying molecules because of their special optical properties. These semiconductor crystals of a few hundred atoms emit a single wavelength, the color of the light depending on the crystal size. In principle, hundreds of millions of combinations of colors and intensities can be achieved. In the near future it should be possible to follow the concentrations of large numbers of molecules simultaneously, because each kind of molecule would, when bound with a quantum dot, cause that dot to give off a distinctive wavelength and thus “label” its presence.
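
The combinatorial claim is easy to check: with n distinguishable emission colors and m resolvable intensity levels per color, roughly m to the power n distinct optical codes are possible. The color and level counts below are assumed for illustration, not taken from the report.

    # Number of distinct optical codes from n emission colors and m intensity levels: m**n.
    # The specific values of n and m are illustrative assumptions.
    for colors, levels in [(6, 10), (8, 10), (10, 10)]:
        print(f"{colors} colors x {levels} intensity levels -> {levels**colors:,} codes")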

Biosensors are another labeling approach, one that makes it possible to dissect protein-protein interactions in cells that are important in understanding signal transduction, drug delivery, and other processes. Biosensors are constructed by labeling two potentially interacting proteins with reagents that produce a signal when they bind. A common approach currently is to use fluorescence resonance energy transfer, in which different fluorescent compounds are attached covalently to two proteins of interest. Individually they each produce a distinctive color emission, but if they bind, the color emission is different.
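
The reason binding produces such a clean signal is the steep distance dependence of the energy transfer, which in the standard Förster picture falls off with the sixth power of the donor-acceptor separation relative to a characteristic radius R0 of a few nanometers. The sketch below assumes R0 = 5 nm purely for illustration.

    # Forster resonance energy transfer efficiency: E = 1 / (1 + (r / R0)**6).
    # R0 of 5 nm is an assumed, typical-order value.
    def fret_efficiency(r_nm, r0_nm=5.0):
        return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

    for r in (2.0, 5.0, 8.0, 12.0):
        print(f"separation {r:>4.1f} nm -> transfer efficiency {fret_efficiency(r):.3f}")

Because the efficiency drops from nearly 1 to a few percent over a span of only a few nanometers, the transfer signal acts as an effective yes/no indicator that the two labeled proteins are in contact.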

Methods of detection that do not involve labeling can also be developed for high-throughput, quantitative monitoring, including surface plasmon resonance and a variant technology based on evanescent waves. The former measures the shift in dielectric constant when molecular targets bind to a surface coated with molecular probes; the latter measures the shift in phase between magnetic and electrical components of a single-mode guided wave as it interacts with probe-target complexes on the surface of the guide. A somewhat more advanced technology is based on optical density changes when a material binds to a surface. Finally, new mass spectrometry technologies, such as matrix-assisted laser desorption ionization mass spectrometry and electrospray ionization, and other non-label-based technologies offer promising high-throughput methods for detecting genetic disease predisposition. Such methods may also help to overcome one of the barriers to the implementation of high-throughput proteomic assays (discussed below).

Medical Impact of Microarrays

The hope, reinforced by recent clinical studies, is that microarrays will be useful for diagnosis and disease stratification. As noted above, it appears that this approach will require not comprehensive genomic characterization but, rather, more limited expression profiles. Normal and diseased cells can show differences in the expression levels of hundreds of genes, indicating that the effects of a disease process can be very complex. However, relatively few genes might be involved in disease etiology. The committee expects that over the next 5 or 10 years the focus will be on perhaps a few dozen diagnostic chips, each containing a relatively small number of genes associated with diseases of major importance.

On the other hand, since proteins are directly involved in cell function— unlike genes, which are more indirectly involved—proteomic technologies will need to be developed to obtain a reliable understanding of function, including pathways and network topologies. For certain applications, such as drug targeting (see below), proteomic technologies may eclipse genomic technologies; for other applications, such as identifying populations at risk for major diseases, genomic technologies are likely to remain important.

Many have suggested that massively parallel assays for proteins will follow quickly from our success with gene analysis. However, the extension of the method to protein analysis is beset with complications. To start with, the set of expressed proteins is several times larger than the number of genes (because of alternative splice variants). Furthermore, proteins can undergo a number of post-translational modifications. In addition, there is an insufficient number of easily produced probes, a greater difficulty (than with DNA) in finding substrates to which they can be attached without denaturing, and problems in achieving sufficiently rapid labeling techniques. Therefore, it is likely that attention will be focused on understanding a limited number of proteins, or polypeptide sequences, that are critically important to disease states. An effort has been made to develop mass antibody approaches to screening proteins expressed in cells, but it is difficult to achieve the immunogen purity on which the specificity of the antibodies depends. Mass spectrometry is an alternative approach that is less expensive than microarray assays. For example, mass spectrometry methods for analyzing tissues are becoming available. The amount of information produced by this approach is tremendous, however, and making sense of it will require concomitant advances in bioinformatics capability.

Both genomic and proteomic assays are likely to have their main diagnostic impact on diseases with well-defined phenotypes that are detectable in cells that are readily accessible. In the near future at least, this is likely to exclude most central nervous system-associated diseases, including widespread psychiatric illnesses such as schizophrenia and bipolar disorders. The present best hope for precise phenotypic characterization of such diseases is functional MRI with a temporal resolution of about 100 milliseconds and a spatial resolution of 1 to 2 millimeters. Continued improvement in the ability to map at high resolution areas associated with language, motor function, memory, and audition and in the ability to detect transient events in small regions will gradually enable the enlargement of recognized disease phenotypes, thus providing better stratified populations for understanding genetic correlates. The correlation of phenotype and genotype, in general, is a major challenge for clinical medicine in this age of genetic information.

Beyond diagnosis and testing, drug target identification is an important application. The challenge is to make the connection between gene expression levels and pathways, pathways and networks, and networks and cell function. For example, an important problem is to find the smallest number of genes needed to convert an abnormal expression pattern into a normal one. This information is obviously also valuable for identifying a disease’s causative agents. Absent a more complete understanding of network topology, this drug target identification cannot be attacked rationally. Progress on network mapping is rapid and will be accelerated as new mathematical tools for inference are developed and applied. However, some companies’ research in this area has been focusing on “toxicogenomics,” that is, finding those patients with a genetic likelihood of having adverse reactions to the drug, thus potentially limiting the liability exposure of the companies. It remains to be seen how strongly market forces or other stimuli will encourage companies to put more effort into using network mapping techniques to improve drug targeting beyond avoiding adverse reactions.
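
Viewed abstractly, finding such a minimal gene set is a combinatorial selection problem. The toy sketch below, in which the gene names and the map from genes to affected components of the expression signature are entirely invented, shows the shape of a simple greedy heuristic; real efforts work on inferred network models rather than plain set cover.

    # Toy greedy heuristic: pick genes until every abnormal signature component is covered.
    # The mapping from genes to the signature components they influence is invented.
    abnormal_signature = {"c1", "c2", "c3", "c4", "c5"}
    influences = {
        "geneA": {"c1", "c2"},
        "geneB": {"c2", "c3", "c4"},
        "geneC": {"c4", "c5"},
        "geneD": {"c1", "c5"},
    }

    uncovered = set(abnormal_signature)
    chosen = []
    while uncovered:
        # greedily take the gene that covers the most remaining components
        best = max(influences, key=lambda g: len(influences[g] & uncovered))
        if not influences[best] & uncovered:
            break  # nothing left that helps
        chosen.append(best)
        uncovered -= influences[best]

    print("genes selected:", chosen)   # e.g., ['geneB', 'geneD']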

Efforts to map inheritable gene sequence variations, called single nucleotide polymorphisms (SNPs), are under way. SNPs vary from individual to individual, and the SNP Consortium, a large public-private venture, and other investigators expect SNPs to act as markers of genes involved in disease and thus permit cost-effective genotyping. A large library of SNPs might enable researchers to understand complex common diseases such as diabetes, asthma, and cardiovascular diseases that involve multiple genes and environmental factors, identify the most appropriate drug therapies, and even predict individual responses to alternative therapies.

Structural Genomics

Communication within a cell relies on molecular recognition—intermolecular interactions. An ability to manipulate interactions by design requires knowledge of the three-dimensional structure of the molecules. Recent estimates indicate that the number of protein domain folds (generic architectures, as opposed to detailed geometric structures) is relatively small—there are fewer than 1300 water-soluble domain folds in the biological universe. Of these, we currently know approximately 500. It should be possible in the next decade, with an intelligent selection of sequences for structure determination, to find representatives of every fold using crystallography and other methods. If this is done, it will have a
pronounced effect on the biotechnology industry and bring benefits to human health. Our increased understanding of drug targets from genomics and proteomics will make structural analysis of intracellular molecules an even more powerful tool for rational drug design. Understanding protein structure will also aid in the development of biosensors (discussed above) and of peptide mimetics, in which small peptides designed to mimic protein-protein interfaces can be used to probe protein interactions in cells and define their functions.

Synthetic-Biologic Interactions

Understanding the interaction between biological materials—cell surfaces and macromolecules—and synthetic materials is key to a number of important advances. The ability of structural or functional materials to contact blood or other tissue without damaging it is critical for artificial and hybrid organ design. The effect of synthetic material composition, form, and surface properties on normal tissue growth is key to generating replacement tissues or guiding tissue growth in situ. Finding substrates that can adsorb macromolecules without denaturing them is a requirement for biochip development. New synthetic material matrices can create opportunities for controlled drug release systems or selective membrane barriers.

Because of enormous progress in materials science, discussed in detail below, new methods are being rapidly developed for designing materials with well-defined bulk and surface properties at size scales from the molecular to the macroscopic. Instrumentation for analyzing surface properties is providing increasingly detailed information on the dynamics of the interactions between biologics and surfaces, which can be fed back into the materials design process. Some of the areas likely to benefit from these developments are described in the next few sections.

Assays

Surfaces are critical to microarray technologies. For example, arrays manufactured by spotting nucleic acid probes must be prepared so as to anchor the probes while not adsorbing the targets; arrays of protein probes must bind tightly, but not so tightly as to alter the secondary or tertiary structure of a probe. Proteins are generally more sensitive to the properties of substrates than are nucleic acids, and they tend to denature at interfaces, a problem whose severity increases as the ratio of surface area to sample volume increases. At present, different systems use different surfaces, which makes quantitative comparison of assays difficult. The accuracy of gene identification, quantitation of gene expression level, and sensitivity of the various assay systems are not yet uniform across biochip sources. However, finding the best surface for a specific application is quickly moving from art to science. Recent progress in the development of novel alternatives to glass and silicon promises that widely applicable substrates will be found. If they are, it will be possible for these assays to be more uniform, facilitating their medical use.

Tissues

The development of methods to grow various tissues is proceeding at a moderate pace. Sophisticated scaffoldings have been developed on which to attach and grow cells. These scaffoldings, frequently made of biodegradable polymers, have very specific geometric forms and are usually impregnated with growth factors or other solutes not only to ensure cell growth but also to control cell orientation and induce patterns of aggregation that mimic natural tissues and promote normal cell function. Epithelial tissue grown this way is already available as a commercial product for the treatment of severe burns, and a great deal of progress has been made in promoting the re-endothelialization of synthetic vascular grafts. There has also been a good deal of laboratory success in the generation of neurons.

These successes have, in almost all instances, involved culturing already differentiated cells. The current experiments with adult stem cells and the recently altered federal policy on the support of embryonic stem cell research promise very rapid progress in the near future in the generation of an increasingly broad range of tissue types, with improvements in both function and longevity of the tissues produced. Progress is likely to depend on our ability to predict the long-term behavior of these tissues from short-term observations of the physical and chemical interactions between cells and the matrices on which they are grown.

Hybrid Organs

Much of the new understanding generated by advances in tissue engineering is leading to progress in the field of hybrid organs, artificial organs created by housing natural cells in devices that allow them to carry out their natural functions while protecting them from the body’s rejection mechanisms. Usually this involves a perm-selective boundary through which chemical signals from the bloodstream can pass, stimulating the cells to produce proteins or other molecules that can pass back into the bloodstream but not allowing the active constituents of the immune system to reach the cells. The technical challenges are the design of the perm-selective membranes, the matrix for the support of the cells, and the system for maintaining them in a viable state.

The last decade has seen major advances in designing artificial pancreases and devices that can carry out at least some functions of the liver. New materials and new understanding of mammalian cell and cell membrane phenomena should accelerate progress in this area in the next decade, and stem cell research may well provide a new, much larger source of cell material for these devices, removing what has been a significant limitation in their design thus far.

Transplantation Immunology

It is likely that the problem of permanent tolerance to transplanted tissue will eventually be mitigated. Advances in immunology and molecular biology lead us to expect substantial progress in this direction during the next decade. In particular, progress in understanding anergy—the process by which cytotoxic cells can be made tolerant to, rather than activated by, specific antigenic signals— offers, for the first time, a realistic hope for effective immunosuppression without continuous drug therapy. In addition, progress in pathway mapping, especially as related to apoptosis, or cell death, opens the possibility of targeting specific cytotoxic T cells (cells that mediate rejection) for programmed death.

Even if the rejection problem is solved, however, the limited availability of human tissue or organs for transplant remains a problem. It is a problem more social than technical, in that organ and tissue donation in the United States (and most other countries) is well below the theoretical level. Therefore, if a significant breakthrough in the number of transplants is to occur in the next decade, there will have to be both a solution to the rejection problem and breakthroughs in tissue engineering that allow the synthesis of a large number of human tissues and organs.

An alternative approach is xenotransplantation, the use of animal cells and organs to replace or assist failing organs in humans. This area is heavily funded by the pharmaceutical industry. However, it is also controversial. Objections come from some medical scientists who fear viral host hopping, from bioethicists who raise conceptual objections to actions that “threaten human identity,” and from animal rights activists, who object to any exploitation of animals. This last objection, or at least the broad public resonance with it, is somewhat mitigated if primates are not the source of the transplants. However, it seems at best uncertain, and at worst unlikely, that the broad range of objections coming from different directions will be overcome in the next decade.

Medical Devices and Instrumentation

The advances in materials science and information technology that are making such a profound difference in molecular and cellular biology and tissue engineering have been equally important for the development of new, experimental measurement techniques and new medical devices. When combined with new sensors that take advantage of nuclear and atomic signals—for example, nuclear magnetic resonance and positron emission tomography—they allow the imaging of chemical interactions and processes, such as inflammation and substrate metabolism. Indeed, recent initiatives of the National Institutes of Health and the National Science Foundation, as well as private foundations, have already produced molecular imaging data that could allow significant insights into active physiologic and pathologic processes at the organ and cell levels.

These capabilities are being rapidly expanded by the development of microelectromechanical systems (MEMS),2 which have made it possible to construct sensors that can be implanted as part of therapeutic or diagnostic devices. Such sensors can also travel within the gastrointestinal or vascular systems, reporting image and/or physiologic data about their environment.

The miniaturization of optical devices and electromechanical systems has also made possible new devices for minimally invasive surgery and for remote tissue manipulation. These are already routinely used, but there is every indication that there will be an enormous expansion in their use in the next decade, with microchips embedded in MEMS systems to create quasi-autonomous micro-surgical tools. Since these technical advances are also likely to result in reduced morbidity and reduced costs for hospitalization, this is one area of medical technology where the market signals are expected to foster further technical developments.

One implication of these developments is that the distinction between diagnostic instrument, or monitor, and therapeutic device is rapidly becoming blurred. Implanted defibrillators combine the function of heart monitor and heart regulator. Deep brain electrical stimulators, now used for the treatment of Parkinson’s disease, can record and analyze signals to determine when they are positioned optimally and then deliver the therapeutic pulses. Microsurgical tools for cataract removal can navigate themselves into position by measuring local tissue characteristics and then perform their surgical function.

The next 10 years are likely to see a great proliferation of such devices. Since, in effect, they reduce the intermediation of the physician in the therapy, they introduce an interesting set of questions about how much information and what set of instructions must be programmed into a device to ensure that the “clinical decisions” the device makes match its “technical skill.”

At the other end of the spectrum, the introduction of electronically mediated connections between the physician and the patient—using various real-time imaging techniques or using imaging and electromechanical devices to help a physician in guiding a scalpel, a laser, or a catheter—means that the physical distance between physician and patient may become immaterial for many purposes. This has led to the introduction of telemedicine: the diagnosis of a patient by a physician hundreds of miles away, using images transmitted by either digital or analog signal, and even robot-mediated remote surgery.

In this area, as well as in therapeutics, investigators sometimes work for or start companies to develop the devices they are testing, creating potential conflicts of interest. The ethical implications of this and other commercially relevant medical research will have to be addressed to protect the interests of patients, researchers, and research institutions and to avoid patient injuries and public backlash.

2  

For further information about MEMS, see the section on microelectromechanical systems later in this chapter.

E-Medicine and Health Care Autonomy

It is not necessary that the remote signal communication involve a physician. A number of companies are working on ways to automate the process by allowing monitors that are implanted in the patient or merely connected to implanted sensors to transmit their signals remotely to computers that can record the information, automatically make adjustments through chips embedded in therapeutic devices implanted in the patient, or advise the patient on changes in drug regimens or on other adjustments that he or she should make. At present, the technology is at an early stage, and there are a number of different approaches to its design and use. What the approaches have in common is that they provide the patient with a certain degree of health care autonomy, which is becoming attractive to more and more patients.

From a technical point of view, one of the greatest challenges in improving the usefulness of these systems is the lack of compatibility among different monitoring systems and the lack of uniformity between these products and large hospital or laboratory information systems. Groups are working on the development of uniform digital data and patient record systems which, if adopted, are likely to stimulate a rapid expansion in the use of these technologies, many with new chip-based diagnostic devices that put the patient more directly in charge of his or her health.

A more immediate force for autonomy is the Internet and the information it provides. Indeed, it has been noted that the most popular sites on the Internet are those that provide health information. Individuals are able to know or to believe they know more about their condition and their options than was previously the case. The Internet changes the nature of the communication between patients and their healthcare providers. Now, patients are much more likely to arrive at a physician’s office armed with information about what they believe to be their problem and how they would like to have it treated.

The main concern here is the reliability of information from Internet sites. In addition to the authentic scientific data and professional opinions that are available, much misinformation and commercial information also find their way to patients. If adequate quality control and standards are instituted, the Web will be a major force for autonomy. Two interesting and important questions face the field in the next decade. Who will take responsibility for validating the accuracy of information on the Web? How will that validation system be implemented? Much of the information is based on federally funded research, and federal agencies have Web sites with authoritative and up-to-date health information.3 But currently there is no mechanism for certifying the contents of nongovernmental Web sites.

3  

MEDLINEplus, the health information Web site of the National Institutes of Health, “has extensive information from the National Institutes of Health and other trusted sources on about 500 diseases and conditions” (http://www.nlm.nih.gov/medlineplus/aboutmedlineplus.html).

Genetically Modified Organisms

Progress in understanding the molecular and cellular biology of plant and animal species has paralleled that in humans. Using many of the same high-throughput techniques and data management systems, the genomes of a number of plant species have already been sequenced.

Some of the uses of the newly available genetic information also parallel the way the information is used in human medicine—determining genetic proclivity for certain diseases, understanding the paths of action of certain diseases, and identifying the patterns of gene expression during development. But applications have gone much further in plants and animals because genetic modification has always been a major activity in agriculture to improve flavor, yield, shelf life, pest resistance, and other characteristics. Genomics merely adds a new set of tools for making those improvements. A number of transgenic crops and animals have already been produced, and in the United States, a large fraction of the planted corn and soybean crops is genetically engineered.

There are three quite different ways in which genomics can be and is being used in new species development. First, rapid genetic assays can be used to quickly monitor the effects of standard cross-breeding, cutting down enormously on the time previously required to grow the cross-bred tissue to a sufficient state of development that its characteristics can be ascertained. These assays can also provide much more information on changes in the genome that are not obvious in easily observed plant characteristics. Some argue that this, in itself, will provide such an improvement in breeding that laboratory modification of plant or animal organisms will not be necessary.

The other two applications of genomics involve the creation of genetically modified organisms (GMOs)—organisms that are modified by introducing genes from other species. In the first of these, the modification is equivalent to that produced by traditional cross-breeding techniques—a new organism whose genetic structure combines desirable features from each of the two parent organisms. The major advantage to this approach is the speed and selectivity with which the new cross-bred species can be created with recombinant DNA technologies. In the second of the recombinant DNA approaches, the organism is modified by the addition of a gene not natural to the species, which may allow the plant to produce a pesticide or tolerate a herbicide, or may alter its nutritional value or its taste or attractiveness as a food.

Social reaction against GMOs has been strong, especially in Europe, and the ultimate determinant of how widely GMO technology will be used in the future may well be political. On the other hand, the commercial potential of GMOs, as well as their value in meeting food needs, particularly in the developing world, and in minimizing some of the environmental consequences of excessive fertilizer, pesticide, and herbicide use, suggests that there will be a strong drive in the next several years to establish their safety. It is likely that this effort will stimulate much greater efforts in modeling ecological systems and will certainly require the development of low-cost techniques for measuring trace concentrations of various organic substances under field conditions.

MATERIALS SCIENCE AND TECHNOLOGY

Our understanding of the relation between, on the one hand, the structure of materials at scales from the molecular to the microscopic and, on the other, their bulk and surface properties at scales that cover an even broader range has been matched by a growing ability to use that knowledge to synthesize new materials for many useful purposes. As a result, materials science and engineering has emerged as one of the most important enabling fields, making possible some of the advances in medical care discussed above, many of the performance improvements in information technology discussed below, and innovations in energy production, transportation, construction, catalysis, and a host of other areas. It seems very likely that developments in materials science will continue to come at a rapid rate in the next decade and will continue to play a vital role in many other fields of science and technology. Progress might be more rapid if public and private investments in materials R&D matched the rate of increase experienced by investments in the biomedical and computer and information science fields. Discussed below are examples of what the committee believes are some of the most promising trends in this field.

Nanotechnology

Although the term nanotechnology is relatively new, developments that have made possible products with smaller and smaller features have been under way since the concepts of microelectronics were first introduced in the early 1960s. Progressively finer-scaled microelectronic components and microelectromechanical devices have been produced using optical lithography to cut and shape superimposed layers of thin films. However, these optical methods have a resolution limit of about 100 nanometers (nm).
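
That figure follows from the standard projection-lithography resolution relation, roughly R = k1 * wavelength / NA, where NA is the numerical aperture of the optics and k1 a process-dependent factor often quoted between about 0.4 and 0.8. The specific NA and k1 values below are assumed, typical-order numbers used only to show why deep-ultraviolet tools of that era bottomed out near 100 nm.

    # Minimum printable feature size for projection lithography: R = k1 * wavelength / NA.
    # Wavelengths are real exposure lines (KrF 248 nm, ArF 193 nm); the k1 and NA values
    # are assumed, typical-order numbers for tools of that period.
    def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.4):
        return k1 * wavelength_nm / numerical_aperture

    for name, wl, na in [("KrF, 248 nm", 248.0, 0.6), ("ArF, 193 nm", 193.0, 0.75)]:
        print(f"{name}, NA {na}: about {min_feature_nm(wl, na):.0f} nm features")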

Nanotechnology aims at fabricating structures with features ranging from 100 nm down to 1 nm. Nanostructures are particularly attractive because physical properties do not scale with material dimensions as the nanoscale range is approached. This means that if the structure, including size, shape, and chemistry, can be controlled in this size range, it will be possible to develop materials exhibiting unique biological, optical, electrical, magnetic, and physical properties.

Self-Assembled Materials

To build devices with features in the nanosize range will require that they self-assemble—atom by atom or molecule by molecule—to form structures that follow a prescribed design pattern. This approach also allows considerably more customization, making it possible to imitate nature. For example, considerable research is under way on hydrogen-bonding interactions similar to those in DNA as orienting forces. Applications have been suggested in which these assembling interactions upgrade low-cost engineering polymers by incorporating a functional component. Self-assembly and controlled three-dimensional architectures will be utilized in molecular electronics, highly specific catalysts, and drug delivery systems.

Quantum Dots

Semiconductor nanocrystals ranging from 1 to 10 nm represent a new class of matter with unique opto-electronic and chemical properties relative to bulk and molecular regimes. Their strongly size-dependent electronic states enable one to tune their fluorescence spectra from the UV to the IR by varying composition and size. In addition, as inorganic materials, their photophysical properties are potentially more robust than those of organic molecules. This property makes them attractive as luminescent tags in biological systems, as emitters in electroluminescent devices, and as refractive index modifiers in polymer composites. The main hurdle to commercializing quantum dots remains an effective method to synthesize them in large volumes at low cost.
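
The size dependence of the emission can be estimated with the widely used effective-mass (Brus) approximation, in which quantum confinement adds a term scaling as 1/R² to the bulk band gap. The sketch below plugs in approximate literature parameters for CdSe as an assumption; it reproduces the blue shift as the radius shrinks, although measured emission wavelengths deviate from this simple model.

    # Effective-mass (Brus) estimate of nanocrystal emission energy versus radius:
    #   E(R) ~ Egap + (hbar^2 pi^2 / (2 R^2)) (1/me + 1/mh) - 1.786 e^2 / (4 pi eps0 epsr R)
    # CdSe parameters below are approximate literature values, used here as assumptions.
    import math

    HBAR = 1.0546e-34      # J*s
    E_CHARGE = 1.602e-19   # C
    EPS0 = 8.854e-12       # F/m
    M0 = 9.109e-31         # kg, free electron mass

    def emission_wavelength_nm(radius_nm, egap_ev=1.74, me=0.13, mh=0.45, epsr=10.6):
        r = radius_nm * 1e-9
        confinement = (HBAR**2 * math.pi**2 / (2 * r**2)) * (1/(me * M0) + 1/(mh * M0))
        coulomb = 1.786 * E_CHARGE**2 / (4 * math.pi * EPS0 * epsr * r)
        energy_ev = egap_ev + (confinement - coulomb) / E_CHARGE
        return 1240.0 / energy_ev       # photon wavelength in nm

    for radius in (1.5, 2.0, 3.0):
        print(f"radius {radius} nm -> roughly {emission_wavelength_nm(radius):.0f} nm emission")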

Nanoparticles

The properties of nanostructured materials are a function not only of their molecular constituents but also of their size and shape. Managing size and shape on the nanoscale allows the creation of a class of materials called nanoparticles. These materials have a unique combination of physical, chemical, electrical, optical, and magnetic properties. For instance, nanoparticles can have much better mechanical properties than bulk solids. Thus, carbon nanotubes, one form of nanoparticle, exhibit a Young’s modulus of about 1000 gigapascals (GPa), compared with 210 GPa for steel, with a critical strain to failure of about 5 percent. This high rigidity and low mass make it possible to fabricate mechanical devices with very high resonant frequencies, which would be very useful in wireless technology applications.
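
A crude Euler-Bernoulli cantilever estimate shows why: the fundamental flexural frequency scales as the square root of stiffness over density, times radius over length squared, so a stiff, light tube only a micron long resonates in the tens of megahertz, and shorter tubes reach far higher. The geometry and density below are assumed illustrative values, and the tube is treated, crudely, as a solid rod.

    # Crude cantilever-beam estimate of a nanotube's fundamental resonant frequency:
    #   f1 = (beta1^2 / 2 pi) * sqrt(E * I / (rho * A)) / L^2, with beta1 * L = 1.875
    # The tube is approximated as a solid rod; E, density, and dimensions are assumptions.
    import math

    def cantilever_f1_hz(radius_m, length_m, youngs_pa=1.0e12, density_kg_m3=1300.0):
        area = math.pi * radius_m**2
        inertia = math.pi * radius_m**4 / 4.0
        return (1.875**2 / (2 * math.pi)) * math.sqrt(
            youngs_pa * inertia / (density_kg_m3 * area)) / length_m**2

    f = cantilever_f1_hz(radius_m=5e-9, length_m=1e-6)
    print(f"estimated fundamental resonance: {f / 1e6:.0f} MHz")   # roughly 40 MHz here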

Carbon nanotubes have other useful properties. They exhibit a very high thermal conductivity—in the range of 3000 to 6000 watts per meter-kelvin. Moreover, the high thermal conductivity is orthotropic, with heat transport primarily along the axis of the tube. Thus, the tubes could be used as “thermal wires,” channeling heat flow in a single direction. Finally, because they have a low coefficient of friction and excellent wear characteristics, carbon nanotubes can also serve as microsleeve bearings in MEMS applications. The controlled nature of their fine structure allows the clearance between the shaft and the sleeve to be made much smaller than the particle size of common bearing contaminants, further improving their performance.

Nanoparticles constructed of other materials are also very promising. For instance, they offer interesting possibilities for significant advances in magnetic storage. Magnetic nanoparticles 10 to 20 nm in size have the potential to increase storage density to 1000 gigabits per square inch (Gbit/in2). In addition, the thermal, electrical, and thermoelectric properties of nanostructured alloys of bismuth can be tailored to achieve a marked increase in the thermoelectric figure of merit, which would make possible the design of high-efficiency, solid-state energy conversion devices.
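
A quick unit conversion (illustrative, not from the report) shows why particles of that size match the target density: at 1000 Gbit/in2 each bit occupies about 645 square nanometers, a patch roughly 25 nm on a side, which is about the footprint of a single 10 to 20 nm grain.

    # Bit-cell size implied by an areal density of 1000 gigabits per square inch.
    import math

    INCH_NM = 2.54e7                       # nanometers per inch
    density_bits_per_in2 = 1000e9
    cell_area_nm2 = INCH_NM**2 / density_bits_per_in2
    print(f"area per bit: {cell_area_nm2:.0f} nm^2 "
          f"(about {math.sqrt(cell_area_nm2):.0f} nm on a side)")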

Hybrid Structures

Combining nanoparticles with other structural components provides an even wider range of possibilities. For example,

  • By using size-graded nanoparticles together with high-pressure isostatic pressing, a marked increase in the density of ceramic materials can be achieved. The reduced porosity leads to enhanced strength, which has already been exploited in zirconia hip prostheses.

  • In a very different biological application, a material with the commercial name Hedrocel is produced by vapor deposition of tantalum on a vitreous carbon matrix. Hedrocel is about 80 percent porous, has an average pore size of 550 microns, and is extremely effective in fusing bone in spinal, hip, and knee reconstruction. It also seems likely that the ability to control both porosity and chemical properties will turn out to be very useful in the synthesis of perm-selective membranes in hybrid organs.

  • Nanotechnology has the potential to provide better surfaces or substrates in bioassays for chemical/biological reactions and for research on the organization of interconnected cells, such as neurons and the endothelial cells that line the circulatory system.

  • The blending of nanoplatelets of montmorillonite—layered silicate clay— with nylon has significantly improved the mechanical properties of the resulting polymer. These clay-filled composites offer better moldability, ductility, and high-temperature properties.

  • Combinations of nanoparticles and electrically conducting polymers make possible surface coatings with characteristics that can be varied with an electrical command. Surfaces that convert from transparent to opaque, change color, and heat or cool have been demonstrated. It is possible to incorporate electrical and optical properties on the surface of composites (coatings or adhesives). These surfaces can be imaged at low cost with inkjet technology, creating many possibilities for new products.

The great potential for enormously expanding the range of achievable materials properties through nanotechnology seems likely to bring about a paradigm shift in the development of materials. Further, it is likely that in the next several years it will become possible to produce anisotropic nanoparticles economically. Coded self-assembly of nanostructures, coupled with established methods of optical lithography, should enable the integration of structures and features with different scales, and processes for assembling these particles into controlled arrays will provide further flexibility in designing a number of novel devices, including a variety of microelectromechanical devices.

Microelectromechanical Systems

The past decade has seen considerable progress in designing microelectromechanical (MEMS) devices, helped by fabrication techniques originally developed for the semiconductor industry. Further progress, however, will depend on solving two classes of problems: those associated with control of and communication with the devices and those related to the surface properties of the materials used in the devices.

The control and communications issues are, in essence, questions of internal and external wiring design, which obviously becomes more difficult as the devices become smaller. It appears that a higher level of integration of the sensors, microprocessor, actuators, receivers, and transmitters in these devices may solve the internal wiring problems and that use of wireless technology may ease or eliminate the need for external wiring. For the latter approach to be successful, however, it will be necessary to achieve miniaturization of transmission and receiving components without an excessive reduction in signal power. This suggests that the ability to achieve high power densities in these components will become an important design focus.

It is a general feature of scaling that the surface-to-mass ratio of any device will increase as the device gets smaller. As a consequence, the properties of the surfaces of MEMS devices take on great importance, either creating opportunities for using the surface in the device’s function or creating problems when the surfaces are incompatible with the medium in which they function. For example, in microfluidics applications, altering the surface energy of the very small channels that make up the device, perhaps by an electric signal, might be sufficient to allow or prevent the flow of a fluid through a particular channel, thus creating a nonmechanical switching system.
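
The scaling itself is elementary: for a cube of side L, surface area grows as L squared while volume, and hence mass, grows as L cubed, so the surface-to-volume ratio grows as 1/L. The short calculation below simply makes the numbers concrete.

    # Surface-to-volume ratio of a cube of side L scales as 6/L.
    for side_um in (1000.0, 100.0, 10.0, 1.0):      # side length in micrometers
        ratio_per_um = 6.0 / side_um                # (6 L^2) / L^3
        print(f"side {side_um:>6.0f} um -> surface/volume = {ratio_per_um:.3f} per um")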

On the other hand, a bio-MEMS device has many of the problems of artificial organs and other implanted devices: the need to avoid damaging the blood or tissue with which the surfaces of the device are in contact and the need to maintain the properties and integrity of its materials of construction over the lifetime of the device if it is implanted. There is a large literature on material-biologic interactions that can help guide bio-MEMS designers, but the need to satisfy simultaneously a set of compatibility criteria and a set of structure/function criteria is likely to be one of the important challenges of the next decade with respect to these devices.

Fuel Cells

Fuel cells offer an efficient, nonpolluting method for generating electricity for both mobile and stationary applications. For the past decade a significant effort has been made to commercialize the proton exchange membrane (PEM) fuel cell, which has the advantage of lower costs and better operating conditions than other types of fuel cells that operate at higher temperatures. PEM fuel cells incorporate three primary subsystems: a fuel source that provides hydrogen to the fuel cell, a fuel cell stack that electrochemically converts the hydrogen into DC electricity, and auxiliary systems that condition the output and regulate the cell. All three subsystems will require significant improvements in cost, size, and durability before widespread applications become feasible.
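
The chemistry underlying the stack is the hydrogen-oxygen reaction, whose standard Gibbs free energy fixes an ideal cell voltage of about 1.23 V; practical cells run lower, which is why many cells are connected in series to form the stack. The calculation below uses standard textbook values and is included only as a reminder of where the stack voltage comes from.

    # Ideal (reversible) voltage of a hydrogen fuel cell: E = -dG / (n * F).
    # Standard-state values for H2 + 1/2 O2 -> H2O(l); real PEM stacks run well below this.
    FARADAY = 96485.0          # C per mole of electrons
    delta_g = -237.1e3         # J per mole of H2, standard Gibbs free energy of reaction
    electrons_per_h2 = 2

    ideal_voltage = -delta_g / (electrons_per_h2 * FARADAY)
    print(f"ideal cell voltage: {ideal_voltage:.2f} V")   # about 1.23 V per cell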

The catalysts used for PEM fuel cells—platinum and platinum alloys—are expensive (about $20,000/kg), and their long-term durability has not been established. Reduction in the amount of platinum required and/or the use of a less expensive catalyst would reduce cost significantly. Research on the effect of lower catalyst loadings and on the stability of catalysts finely dispersed on carbon supports is under way, as is research on the development of non-platinum-based catalysts such as organometallic porphyrins and nonprecious metals.

Because this country does not have a hydrogen distribution and storage infrastructure, at present fuel cells are fed by hydrocarbons that are transformed by a fuel processor into hydrogen gas, CO2, and CO. At temperatures less than 120 °C, the adsorption of CO on the platinum catalyst is significant, and it competes with the oxidation of the hydrogen on the fuel cell anode. To reduce the effect of this CO poisoning, research is under way to develop anode catalyst alloys that are less susceptible to CO poisoning and to add CO cleanup stages to the fuel processor. Direct hydrogen-fed fuel cells would eliminate these problems, but the infrastructure for hydrogen distribution and storage is not in place and will require a tremendous investment to establish.

Current membrane materials, such as perfluorinated sulfonic acid, cost about $500/m2. Because these membranes require a high level of humidification for durability and for proton conduction, the cell stack operating temperature is constrained to 80 to 90 °C. This is a serious constraint. If the operating temperature of the PEM could be increased to 120 to 180 °C, it would be possible to eliminate the CO poisoning problem. Also, at the higher temperatures heat exchangers are more effective and heat recovery for cogeneration applications becomes feasible. However, operation at the higher temperatures would require a membrane material that could conduct protons without the presence of water, a major research challenge in the next several years.

Car manufacturers are aggressively pursuing research to lower the costs of fuel cell systems. Predictions of the availability of fuel-cell-powered automobiles vary widely. Toyota will reportedly introduce a fuel-cell-powered automobile on a limited basis by 2003. The target of U.S. car makers is much later (2010). Currently, they are attempting to reduce the cost of a fuel-cell-powered motor to about $50 per kilowatt (kW), or about $3500 for a 70-kW engine.

Several companies are preparing to market stationary units for homes with prices ranging from $500 to $1500 per kW. If research makes possible the operation of fuel cells (and on-site hydrogen conversion units) at costs of $0.10 per kWh or lower, then distributed generation of electricity will become feasible. Even at the currently anticipated cost of $0.30 per kWh, the stationary home units are expected to be welcomed by those living in remote areas who are not connected to the electrical grid.

Materials for Electromechanical Applications

As noted earlier, advances in information and communication technologies have always depended strongly on developments in materials science. Better sensors, higher bandwidth communication links, finer chip features for more rapid calculation speeds, and storage devices have all relied on novel and improved materials. There is no reason to expect either the importance of this area or the rate of progress in it to diminish over the next decade. Some of the most promising developmental areas are described briefly below.

Photonic Crystals

One-dimensional photonic crystals (dielectric mirrors, optical filters, fiber gratings, and so on) are well-developed technologies used to manage light. In the near future, materials and processes to prepare two- and three-dimensional photonic crystals and methods for integrating these devices on a chip compactly and effectively should be available. These developments will result in improved optical functionality to meet the needs of high-bandwidth communication networks.

Materials with Improved Dielectric Properties

It is likely that more new materials with improved dielectric properties (porous silicon is an example) will be developed and will support the continued miniaturization of integrated circuits. Materials with higher dielectric constants are important for the dielectric layer in capacitors and transistors. Materials with lower dielectric constants will be valuable for insulating the metal interconnects on the chip and on the circuit board more efficiently, decreasing separation requirements and improving transmission speeds.
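
The payoff from a lower dielectric constant can be seen in the RC delay of an interconnect, since the line's capacitance, and therefore its delay, scales directly with the relative permittivity k of the surrounding insulator. The parallel-plate estimate below compares silicon dioxide (k near 3.9) with a generic low-k material (k near 2.7); the copper-line geometry is an assumed, illustrative one.

    # RC delay of a simple interconnect scales with the dielectric constant k of the insulator.
    # Geometry (1 mm copper line, 0.2 um x 0.2 um cross section, 0.2 um dielectric) is assumed.
    EPS0 = 8.854e-12            # F/m
    RHO_CU = 1.7e-8             # ohm*m, resistivity of copper

    length, width, thickness, spacing = 1e-3, 0.2e-6, 0.2e-6, 0.2e-6
    resistance = RHO_CU * length / (width * thickness)

    for name, k in [("SiO2", 3.9), ("low-k", 2.7)]:
        capacitance = k * EPS0 * (length * width) / spacing   # parallel-plate estimate
        print(f"{name:6s} (k={k}): RC delay ~ {resistance * capacitance * 1e12:.1f} ps")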

Organic Electronics

Employing organic materials as active layers in electronic devices will become increasingly important. These materials will allow the fabrication of inexpensive devices using nontraditional semiconductor manufacturing processes. Device drivers, RF-powered circuits, and sensors are all potential applications. Other electronic devices, based on the modulation of the conducting properties of single organic molecules, could significantly advance circuit miniaturization.

Organic/Inorganic Hybrids

The motivation for developing organic/inorganic hybrids is the coupling of the improved processing and superior mechanical properties of the organic material with the unique functional characteristics of the inorganic material to form the hybrid material. Semiconductor applications appear possible in which the charge transport characteristics of the inorganics (because of strong ionic and covalent intermolecular associations) are vastly superior to those of the organics (which generally only have relatively weak van der Waals interactions). Electronic, optical, and magnetic properties that are special or enhanced in organics can also be utilized in the hybrids.

COMPUTER AND INFORMATION SCIENCE AND TECHNOLOGY

Fundamental Drivers

A major driving factor in computer technology has been its continued ability to follow Moore’s law—that is, an approximate doubling of speed every 18 months, with concomitant increases in capacity and decreases in cost—and the committee does not foresee a significant slowing of this trend over the next decade. The economics of computer fabrication (for example, lithography) may be a limiting factor, but it seems unlikely that physics will be one in the next decade. Increases in storage density and decreases in cost are also expected to follow the trends of the last few decades, which currently exceed those predicted by Moore’s law. It is less clear that growth in consumer demand will match the ability to produce faster computers implied by Moore’s law, but consumers will still value a broader set of computer and network applications and more reliable and secure software. Computers will also be increasingly embedded in everyday products. Hence the committee expects continued adoption of computer solutions for an increasingly large number of applications, including those made possible by faster calculation speeds and those made economically practical by decreasing calculation costs.
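The arithmetic behind that growth rate is worth making explicit, since it underlies the roughly 100-fold speed increase cited later in this chapter. A minimal calculation, assuming the 18-month doubling period stated above:

```python
# Compounding "a doubling every 18 months" over a decade.
doubling_period_years = 1.5
years = 10
growth = 2 ** (years / doubling_period_years)
print(f"Speed multiple after {years} years: ~{growth:.0f}x")   # prints ~102x
```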

Communication technology using both copper and optical fiber has seen a comparable rate of change. In communications it is expected that the “last-mile” problem will be solved; that is, the bandwidth limitations and administrative barriers to home Internet connection will not impede deployment. (It was observed that cable connections were provided to most of the nation in the last few decades, so there is no reason to believe that a demand for higher bandwidth connections—or better utilization of cable—cannot be met with already existing hardware. The main question is the extent to which the public actually wants the Internet services available.)

How best to use the limited radio frequency spectrum for mobile computer communications is a challenge for the next decade and will be an important issue for standards as the technology is increasingly used for transnational communication. The problem may be eased or solved in part by technological advances—for example, in protocols and compression—but it is likely to require regulatory action that may, in turn, have a significant technological impact.

System Issues

While the capabilities of computation and communication will continue to increase, the complexity of larger and larger systems may limit the rate at which these increases can be effectively utilized. Incremental advances in current applications can capitalize on the improved technology, but the implementation of new applications made possible by hardware improvements is likely to depend on improvements in software development. Historically, more than 50 percent of the cost of software development has been in testing, debugging, and quality assurance. Reducing those costs will obviously be a high priority and will provide a strong impetus for the further development of the software engineering discipline as well as for the standardization of the software development process. It will also be helped by improvements in automated software production and in self-diagnosis and self-correction. Many would also argue that, as complexity increases, software standards and software interoperability will assume increasing importance. There will also be increasing emphasis on ensuring the security of software as computing becomes increasingly Web-based and thus more vulnerable to malicious hacking and cyberterrorism. Another area of growing emphasis will be the reliability of complex programs used in safety-critical applications, such as air traffic control and nuclear power plant operations.

Ergonomic Issues

The (short) history of widespread software use suggests that human-system interface issues will strongly shape, even limit, the adoption of new applications. Computer languages, operating systems, and applications that were not natural and easy to learn have struggled to gain widespread acceptance and have not always survived the introduction of those that were (although once people have begun using a technology, they are reluctant to switch to a new one even when it offers better usability). (“Natural” in the context of something as artificial as computer systems is something of a misnomer; here it means that the interface builds on actions already familiar to the user from other applications or systems.) The power of the computer can be used to present a relatively simple interface to the user in spite of the underlying complexity of the system. Unless there are major advances in the design of these interfaces, commercial demand will lag the technical capacity for increased complexity or further performance breakthroughs in computers and networks.

The cellular telephone is an example of technology that has been adopted fairly rapidly, in part because the average consumer can operate it despite the sophisticated computation and networking involved in the underlying system. In this case, the user finds the interface familiar, because it builds on interfaces that most have seen before—the ordinary telephone and the menus used in graphical user interfaces of computer operating systems, for example. Even with cellular telephones, however, people have problems using the menus, and efforts continue to make them more usable. The spreadsheet is another example of a technology that achieved widespread use in part because of its functionality but also because its operations are fairly familiar to most people. Most first-time users can quickly grasp data entry procedures and how to perform at least simple arithmetic operations and use common formulas.

The problem is that interfaces built on already-understood technologies (i.e., so-called WIMP interfaces—windows, icons, menus, pointing) to avoid the learning problem inherently pass up opportunities for innovation. It seems likely that new interface modalities will not accept this constraint, even though it is not yet clear what technical approach they might take or how they will meet the requirement of being user-friendly. Voice input and output might become a factor in some applications, because the technology will soon be capable of natural-sounding speech with computing resources that are modest by present-day standards. Voice will not be the only modality on which to focus; others include pressure-sensitive input devices, force-feedback mice, devices that allow people to “type” by tracking eye movements, and wearable computers. The effect such devices have on performance will be an important issue, especially in safety-critical systems, where an error could have catastrophic consequences.

Innovative application of artificial intelligence technology and advances in sensors able to integrate information may greatly improve a system’s ability to interpret data in context. Speech recognition, for example, is improving very rapidly. However, general speech understanding under natural conditions requires a degree of contextual understanding that will not be achievable within at least the coming decade. If speech becomes a major general input modality in that time frame, it will be because a subset of natural language has become an accepted computer interface language.

Visual input will assume more importance in specific contexts, but the problem of general visual recognition will still be difficult even with the 100-fold increases in CPU speed expected during the next decade. Nevertheless, existing capabilities used routinely by the military (for example, for target recognition from moving aircraft) will find some similar specific applications in the civil sector, such as traffic safety applications.

Some of the interface problems, especially contextual speech recognition, may be alleviated if the interface is readily personalized for each user. For applications in a mobile setting, where bandwidth is limited, this will become possible when users routinely carry large amounts of personal data, something that advances in storage technology will make feasible. By providing abundant context and thereby greatly improving voice recognition, these personal data might permit advances in interfaces that would otherwise be impossible, perhaps also enabling less sophisticated users to tap into high-powered computers and networks.

An important part of the solution to the user interface problem will probably lie in building computing and networking into appliances in ways that are invisible to the user. To the extent this occurs, the need to improve human/computer interfaces is obviated.

Greatly improved technology for data transmission and storage naturally raises issues of security and authentication. The committee believes that the technical issues will be solved (that is, the level of security and authenticity appropriate to an application will be affordable) but that privacy issues are not just technology issues: they involve trade-offs and choices. For example, someone carrying his or her complete medical record may want total control over access to it, but if that person arrives at an emergency room unconscious, he or she would probably want the medical team to have relatively unrestricted access. That is not a technology problem but a policy issue. Similarly, network devices will make it possible to identify the location of the user, which the user may not want known, at least without giving permission. Technology will be capable of providing the needed encryption; society must decide what it wants. This issue is discussed in more detail in a later section.

New Drivers

An important question is whether there are drivers on the horizon other than continuing improvement in the speed of computation, the capacity of storage, and the bandwidth of communication links that will give rise to new kinds of applications. The committee has not identified any but is of the view that the convergence of various communication and computation technologies that commingle traditional broadcasting (cable and wireless), telephony, and Internet venues, coupled with improvements in each, may well lead in time to a significant expansion in certain kinds of applications. For example, one might foresee great improvements in the quality (and therefore the attractiveness) of remote, real-time social interaction. Primitive telepresence can be expected to be available within 10 years—an enhancement of videoconferencing that is likely to make it an increasingly attractive substitute for some face-to-face communication. However, improvements of several orders of magnitude in the speed of computation, communication, and display will be needed before telepresence will be an entirely acceptable substitute.

A number of observers have suggested that the integration of computers with traditional broadcasting media could complete the process of establishing a continuum or seamless web of communication possibilities, from one-to-one contacts (the telephone) to one-to-many contacts (broadcasting) to many-to-many contacts (digital networks). Moreover, with this continuum in place, the potential for a number of new types of group interaction and social group structures (electronic polling, electronic town halls, new modes of remote education, quality program support for interest groups, and so on) increases. Although many of these new interactions may ultimately become realities, it is the committee’s belief that, in the near term, entertainment applications such as on-demand movies are most likely to be the major beneficiary of the integrated technologies.

Information Technology and Medicine

One premise in the choice of the three broad fields that the committee considered in describing technological “push” factors was not only that each was a field in which knowledge was expanding rapidly but also that each had widespread impact on the other two and on technologies outside the three fields. That point is made in a number of the examples discussed above. However, the intersection of information technology and biomedicine is of such profound importance that it deserves specific emphasis and even some recapitulation.

Computation has the potential to dramatically speed the acquisition of new knowledge in the biological and medical sciences by making it possible to extract meaning from large data sets. In molecular and cellular biology, the rate at which this happens will depend on further progress in the development and successful marketing of high-throughput assays and imaging technologies that generate data on systems rather than on isolated entities. It also depends on continued advances in the computer science and technology required to organize and disseminate the data—that is, to convert it to useful knowledge. A driver of genomic analysis has been the ability to process the enormous amount of information produced and to identify important results. Computing power is likely to have to increase exponentially to achieve a similar level of proteomic analysis. New systems for classifying and organizing data and for creating knowledge and knowledge bases, rather than databases, are needed.
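As a toy illustration of what "extracting meaning from large data sets" looks like in practice, the sketch below clusters gene-expression profiles so that genes behaving similarly across experimental conditions group together. The data are synthetic and the tool choice (SciPy's hierarchical clustering) is just one common approach among many; nothing here is drawn from the report itself.

```python
# Toy version of data-driven biology: cluster synthetic gene-expression
# profiles so that genes with similar behavior across conditions group
# together.  Both the data and the method choice are illustrative.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_genes, n_conditions = 200, 12

# Synthetic expression matrix: two hidden response patterns plus noise.
pattern_a = np.sin(np.linspace(0, np.pi, n_conditions))   # transient response
pattern_b = np.linspace(1, 0, n_conditions)               # steady decline
expression = np.vstack([
    pattern_a + 0.2 * rng.standard_normal(n_conditions) if i % 2 else
    pattern_b + 0.2 * rng.standard_normal(n_conditions)
    for i in range(n_genes)
])

# Hierarchical clustering on correlation distance, cut into two clusters.
tree = linkage(expression, method="average", metric="correlation")
labels = fcluster(tree, t=2, criterion="maxclust")
print("genes per cluster:", np.bincount(labels)[1:])
```

Real expression studies involve tens of thousands of genes and far noisier measurements, which is why the text emphasizes continued advances both in the instruments that generate such data and in the computing needed to organize it.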

Computational power has also changed the pharmaceutical industry, where for some time it has been providing support for three-dimensional modeling of molecular structures as part of the drug design process. It has also unleashed the power of combinatorial chemistry in the search for new drugs, providing, as in the case of molecular biology, the means for ordering and analyzing huge data sets. Another promising direction is the modeling of organ systems and whole organisms, a field that has been labeled physiomics. Here, computational power provides the ability to capture a vast number of parameters related to the physiology of an individual, to model that individual’s biological responses to a drug or a combination of drugs, and thereby to customize drug therapies to the individual, reducing the uncertainties and side effects that are a feature of current drug use.
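A vastly simplified relative of such models conveys the idea: the one-compartment pharmacokinetic sketch below predicts drug concentration over repeated doses and shows how a single patient-specific parameter (here, the elimination rate) changes the result. All parameter values are illustrative assumptions; real physiomic models couple many such equations across organ systems.

```python
# One-compartment pharmacokinetic model with repeated oral dosing: a toy
# stand-in for the organ- and organism-level models described in the text.
# Doses, rate constants, and volumes are illustrative assumptions only.
import numpy as np

def concentration(t_hours, dose_mg, interval_h, ka, ke, vd_liters):
    """Superpose the absorption/elimination curve contributed by each dose."""
    c = np.zeros_like(t_hours)
    for t_dose in np.arange(0, t_hours.max(), interval_h):
        dt = np.clip(t_hours - t_dose, 0, None)   # time since this dose (0 before it)
        c += (dose_mg * ka / (vd_liters * (ka - ke))) * (
            np.exp(-ke * dt) - np.exp(-ka * dt))
    return c

t = np.linspace(0, 48, 481)
# Two hypothetical patients differing only in how fast they eliminate the drug.
for label, ke in [("typical elimination", 0.20), ("slow elimination", 0.08)]:
    profile = concentration(t, dose_mg=500, interval_h=12, ka=1.0, ke=ke, vd_liters=40)
    print(f"{label}: peak concentration ~ {profile.max():.1f} mg/L")
```

Even this caricature shows why individualized parameters matter: on the same dosing schedule the slower-eliminating patient reaches substantially higher peak concentrations, exactly the kind of difference a physiomic model would aim to predict before the first dose is given.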

Physiomics is not without its challenges and problems. Since the validity of the models it generates and manipulates depends on the accuracy and completeness of our understanding of the biological response functions involved, the move from the clinic or laboratory to the computer terminal must involve a true bridging of the biomedical sciences and information technologies, a partnership between scientists in both disciplines, and the capacity to communicate in both directions.

At the clinical level, information technology has made possible major advances in imaging techniques, and there is every indication that the next decade will see a continuation of that trend. What matters most here, however, is not overwhelming computational power but our ever-increasing ability to miniaturize chips with modest yet adequate computational capacity so that they can be usefully incorporated in implanted sensors or in MEMS devices. Intelligent sensors based on microchips are now commonplace in heart pacing and implanted defibrillation devices, and insulin pumps can be driven by implanted glucose sensors. Microchips are likely to provide the key enabling technology in the next several years by allowing the design of MEMS devices for use in microsurgery, motor control in replacement body parts, in situ drug delivery, and a host of other applications.
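The closed-loop idea behind a sensor-driven insulin pump can be sketched with a toy proportional controller: read the sensor, compare the reading with a target, and adjust delivery. The gains, targets, and the crude "patient" response below are purely illustrative assumptions with no clinical meaning; the point is only the sense-compute-actuate loop that small embedded chips make possible.

```python
# Sense-compute-actuate loop behind sensor-driven implanted devices,
# reduced to a toy proportional controller.  Gains, targets, and the
# "patient" response model are illustrative only, not clinical values.
def insulin_rate(glucose_mg_dl, target=110.0, gain=0.01, basal=0.5, max_rate=3.0):
    """Return an insulin infusion rate (units/hour) for a glucose reading."""
    error = glucose_mg_dl - target
    rate = basal + gain * max(error, 0.0)     # never dose above basal for low readings
    return min(rate, max_rate)                # hard cap as a crude safety limit

# Simulate a few hours of readings from a hypothetical sensor.
glucose = 180.0
for hour in range(6):
    rate = insulin_rate(glucose)
    glucose -= 12.0 * rate                    # crude response of the "patient" model
    glucose += 5.0                            # background drift upward
    print(f"hour {hour}: glucose {glucose:5.1f} mg/dL, insulin {rate:.2f} U/h")
```

Real devices add safety interlocks, sensor-fault detection, and far more sophisticated models, but they fit the same loop.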

Finally, there is the enormous potential for information technology to facilitate the storage and transfer of information about patients among the departments of a health care facility and between facilities. It has already been demonstrated that information technology can reduce errors in drug dispensing; automate patient monitoring, either directly if the patient is in a health care facility or through a simple analog telephone connection if the patient is at home; store and transmit x-rays and other images from one facility to another; and store a patient’s entire medical record on a wallet card. The fact remains, however, that health care has seriously lagged other industries in incorporating information technology into its operations. That, combined with the opportunities that will come in the next few years as computational speeds and communication bandwidths increase, suggests to the committee that there will need to be significant catch-up in this area in the next decade.
