
Session 1
Panel Discussion

Richard Hirsh, National Science Foundation: Peter, you discussed the need for new methods, and new methods can take almost any form. They can be new chemistry investigations, new numerical methods, new implementations of the chemistry and the numerical methods to make codes, and so on. I would like to know what you meant by new methods, and beyond that, I would also like to know, given what Paul said about the capability of machines, what new problems those new methods should be or could be applied to.

Peter Taylor: I guess my immediate answer to the list you gave is all of the above. I do not think we should restrict ourselves at this point. But what I was particularly thinking about was not so much reimplementing the existing methodology we have, which I think has a number of disadvantages, some that I discussed and some that were, I think, clear by example in what John presented, but going back to, say, the Schrödinger equation and looking at other strategies for developing approximate solutions of it quite different from what we have now. So I would like to see an effort across the board on that, starting from looking again at the mathematics and looking at different methods for constructing approximate solutions all the way through. We should not exclude looking at the methods we have now and reimplementing them, but that certainly should not be the only thing we do.

And I think one of the key issues here is that most of the effort we have all put in so far toward developing parallel implementations of quantum chemistry has really addressed taking the methodology we have used for a number of years and reformulating it so it runs on parallel hardware. And the scaling of this, as one criterion of how well you do, is good for some methods and good for some implementations but not so good for others. It is not at all clear that the domain of methods we use to try to construct approximate solutions to the Schrödinger equation necessarily includes the most desirable scaling or methods that might perform very well on a machine, say, with 5,000 or 6,000 or even 10,000 processors. I am glad to see, based on Paul's talk, that we are not necessarily thinking of tens of thousands of processors, but it is clear that if we want to use these machines effectively, we have to be able to run on thousands of processors, and that is a lot tougher than getting something going on, say, a 64-way symmetric multiprocessor. If we are going to do it effectively we need to look at the largest possible variety of methods for constructing approximate solutions to the Schrödinger equation, not focus simply on developing, or trying to develop, scalable implementations of what we have now.

Paul Messina: Peter, I remember from a few years ago, working with a colleague at Caltech, the chemist Aron Kuppermann, that when we were able to provide him with a substantially bigger computing environment, he quickly found that the basis set he was using was, in fact, not adequate, and he had to come up with a more nearly orthogonal basis set to be able to get any type of accuracy. So, would you include in the new-methods examination the need to look also at those aspects where you now have a much bigger system and consequently would have to worry more about the capability of the methods?

Peter Taylor: Oh, yes. I think there is no question about that. This is just another aspect of the sorts of improvements we need to try.

Thom Dunning, Pacific Northwest National Laboratory: Just to make very concrete the kind of suggestion Peter is making: it has become painfully clear over the past decade how slowly basis-set expansions converge. Yes, we can approach the full solution of the Schrödinger equation with the basis-set technology that we have, but it is a very painful process, and it gives rise to some of the very horrible scaling that Peter is talking about. It also gives rise, in schemes like the G2 and G3 methods John is talking about, to some of the large deviations that are observed, simply because the basis sets that are used have not converged for that type of molecule.

John Pople: Yes, there are worries all the time that all the technology we are using, which is based very much on Gaussian functions, may not be the best approach when we come to very large systems and very new technology.

One such possibility to be looked at by chemists is the matter of plane-wave expansions, which solid-state physicists use quite a lot. It is more appropriate for them, perhaps, because a crystal is a periodic system. But nonetheless, one can ask whether you can do better with all the modern techniques of fast Fourier transforms and so forth by placing the molecule in the middle of an empty box and expanding everything in plane waves. That technology could conceivably be a new approach that might eliminate some of the difficulties we have at present.
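
A minimal sketch of the plane-wave idea Pople describes (an illustration added here, not part of the discussion): place a localized function in a periodic box, obtain its plane-wave coefficients with a fast Fourier transform, and check how well a truncated expansion reconstructs it. The grid size, box length, and Gaussian width below are arbitrary choices.

```python
# Expand a localized "orbital" in an empty periodic box in plane waves via FFT,
# then reconstruct it from only the low-frequency components.
import numpy as np

L = 20.0                           # box length (arbitrary units)
n = 256                            # number of real-space grid points
x = np.linspace(0.0, L, n, endpoint=False)
f = np.exp(-((x - L / 2) ** 2))    # a Gaussian centered in the box

c = np.fft.fft(f) / n              # plane-wave coefficients

# Keep only components with |k| <= ncut and transform back.
ncut = 32
k = np.fft.fftfreq(n) * n          # integer wave numbers -n/2 .. n/2-1
c_trunc = c.copy()
c_trunc[np.abs(k) > ncut] = 0.0

f_approx = np.real(np.fft.ifft(c_trunc * n))
print("max reconstruction error with", 2 * ncut + 1, "plane waves:",
      np.max(np.abs(f - f_approx)))
```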

Evelyn Goldfield, Wayne State University: One of the things that I would like to address was in Peter Taylor's talk when he divided the field into three: electronic structure, reaction dynamics, and molecular dynamics. It seems that one of the hurdles to actually using these codes effectively is that there is a step between electronic structure, when you calculate points, and having a usable potential energy surface. Someone has to laboriously fit a potential, which may or may not be an accurate reflection of the ab initio surface. And then another community, the dynamics community, uses that potential and makes some predictions that may or may not compare to experiment. Among the most challenging things that could be produced are codes that actually integrate these three parts of the problem so that the fitting steps are bypassed. Such efforts are actually a hot topic right now. To deal with realistic and interesting systems is really going to be incredibly computationally intensive and I think could effectively make use of any number of processors.

Peter Taylor: I agree. I think this is a defect in what we currently have—that we do not have enough integration between these steps. While there are, for example, dynamics programs that basically call for the electronic energy or something like the gradient of the potential surface on the fly, we do not have enough of this. People are put off a bit, even given the sort of hardware we have been talking about, by the dimensionality of it: the fact that if you are doing something with three or four atoms or something similar, this is already fairly expensive; if you have a larger system, where you are treating explicit solvent molecules, say, the idea that you have to evaluate the energy millions of times starts to become a problem, even if you are talking about trying to do this on thousands of processors.

But, to give you an example of what I think of as new methods in this regard: one way to characterize a potential energy surface is indeed to have millions of energies, which you might fit to a functional form or might calculate on the fly. Another way to characterize a potential energy surface, though, would be to expand it around a single point. Now, potential surfaces are very complicated functions, so if you were to do, say, a Taylor expansion, the order of derivatives you would need at that point to describe the surface adequately would be very, very high. I would submit, though, that it would be nothing like a millionth-order derivative; it would probably be tens of orders. At present we do not know how to compute derivatives of the energy beyond fourth order in a way that is more efficient than evaluating individual energies, but we do know that derivatives up through fourth order can be evaluated a lot more efficiently than lots of individual energies. So if we put effort into understanding how to do higher derivatives of the energy efficiently, and if, say, we need derivatives through 25th order to characterize an entire potential energy surface, and if we could figure out how to calculate that 25th derivative of the energy in an efficient way, then we could do basically one calculation to characterize the whole thing. And that could then be used for the dynamics. That is another example of where we need to think about new methods.
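
A minimal illustration of the point (added here, not part of the discussion): for a one-dimensional model potential, derivatives evaluated at a single point characterize the curve over a useful range once the expansion order is high enough. The Morse parameters below are arbitrary.

```python
# Compare truncated Taylor expansions of a Morse potential about its minimum
# with the exact curve, as the expansion order grows.
import numpy as np
from math import factorial

D, a, r0 = 0.2, 1.0, 1.5              # well depth, range parameter, equilibrium distance

def morse(r):
    return D * (1.0 - np.exp(-a * (r - r0))) ** 2

def morse_deriv(order):
    """n-th derivative at r0, from V = D(e^{-2ax} - 2 e^{-ax} + 1), x = r - r0."""
    if order == 0:
        return 0.0
    return D * ((-2 * a) ** order - 2 * (-a) ** order)

r = np.linspace(r0 - 0.5, r0 + 1.5, 200)
for nmax in (2, 4, 8, 12):
    taylor = sum(morse_deriv(n) * (r - r0) ** n / factorial(n)
                 for n in range(nmax + 1))
    err = np.max(np.abs(taylor - morse(r)))
    print(f"Taylor order {nmax:2d}: max error over the window = {err:.2e}")
```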

John Pople: I think there is an important point here about the interfacing of programs from slightly different areas of chemistry. This is clearly something that is very desirable, and it does bring up the question of the desirability of having source codes public. It is very important, I think, that people should know what the codes do and should be able to carry out some kind of useful interface with other codes. One has to have protection to stop people from getting a code, introducing some bug, and then giving it to somebody else. That clearly would be disastrous. But nonetheless, I think particularly with the development of object-oriented codes we should pay more attention to producing software that can effectively be interfaced with software from some related field so that we can begin to build integrated programs to handle these difficult situations.

Peter Taylor: I absolutely agree.

Paul Messina: Although I certainly did not dwell on this in my presentation, when I was at Caltech, working on the applications we were trying to tackle, we identified early on that the biggest challenge was the integration of codes from the different disciplines. And I think one of the subtexts of ASCI is to be able to do these integrated, multidisciplinary simulations. There are issues in the software itself, and certainly in the methods—different gridding, different approximation techniques—that, if one is not careful, will introduce instabilities into the numerical computation, for example. So that is, indeed, a very important problem that science in general needs to be able to deal with in order to do the very large simulations.

Judith Hempel, University of California, San Francisco: I am really struck by the fact that we have new challenges in the computing engines and the codes and in the experiments that are needed to validate the codes, but it does seem to me that for a mature field—I think we agree that in some ways the field is mature—a lot of the impact is in the applications of these theories, and there are new applications that are coming out. So I would like your input.

For example, in the fields of docking and scoring, if you have a biological molecule you can dock many, many different molecules into the binding cleft and score it. And that is a kind of theory in and of itself. So, do you see any particular challenges for the field as a whole, to drop back, as it were, and to simplify in some ways the theory to approach these very large and actually very important problems?

John Pople: I think the main contribution at the present time to that sort of simulation is using these more fundamental methods to work out the energies of interaction of appropriately modeled pieces. We clearly cannot handle the whole system with these reliable methods, but we can work out the intermolecular potentials of the pieces, find out something about how configurations may change in the enzymatic environment, and so forth. So, it is really a two-stage process: one level of theory provides the underlying potentials, and others then simulate the very large molecular systems using those potentials. So, again, this is an example of integrating the software between different branches of the field.

Judith Hempel: Do you feel that a 1-kilocalorie target is the correct target also for predicting a binding agent?

John Pople: The 1-kilocalorie target is the target that we have for calculating the energy of a bond from scratch, when we do not know the energy and do not know anything about it beforehand. You can do much better than that if you look at the same bond in related circumstances. Even Hartree-Fock theory can work quite well, and can be applied, of course, to very large systems, if you do a comparison between the same set of bonds in one molecule and the same set of bonds arranged in a different fashion in another molecule, what we call isodesmic reactions or isodesmic comparisons. An elementary theory can do well there, much better than 1 kilocalorie.
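
As an added illustration of an isodesmic comparison (not part of the discussion): a bond-separation reaction conserves the number of each formal bond type on the two sides, so systematic basis-set and correlation errors largely cancel. For propane, for example,

    CH3CH2CH3 + CH4 → 2 CH3CH3

has two C-C bonds and twelve C-H bonds on each side, and even a modest level of theory reproduces such a reaction energy far more accurately than it reproduces the atomization energies of the individual molecules.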

Judith Hempel: Right. These numbers, of course, are affected very much by solvation issues, as Peter pointed out, so there is this integration across many theories.

Peter Taylor: I would also like to address a general point that you raised there. There is a story, I believe told about Wigner, who said that rather than have a very accurate solution to the Schrödinger equation for the behavior of electrons in a particular metal, which would necessarily be a very complicated thing, he would sooner have a vivid picture of what the electrons were doing in the metal. I think the two aspects of that statement represent undesirable extremes. A vivid picture is of no use whatsoever if it is a vivid picture of an answer that would be wrong, and having a simple method that lets you get a very nice picture up on the screen that has binding energies that are wrong by 10 kilocalories per mole is really no use to you either. You are not learning anything about nature from that.

This is one of the things Paul emphasized: the need for us to be able to visualize whatever type of calculation we are doing in some way that lets us learn something more about nature from those numbers. I think we learned that the ideal situation would be to have a vivid picture of exactly the right answer. But I would sooner have a vivid picture of nearly the right answer, or actually nearly the right answer and no vivid picture, than I would have a vivid picture of the wrong answer.

Douglas Doren, University of Delaware: It strikes me that the field is, in some ways in terms of our computational needs, at a dividing point. The advances over the last 10 years have made it feasible to get pretty good answers on pretty big systems, so that it is now feasible, for example, to calculate the structure and energetics of some large, inorganic molecules that are actually of a size that a real synthetic chemist would make and be interested in. It is something that we do all the time.

Making something—a method or a machine—that gives us the answer faster is simply going to make these calculations more trivial. There will certainly be some systems where it is important to be able to do very large scale simulations, but the market for those, I think, is getting smaller and smaller. I think for the last 10 years we have been expanding the market for quantum chemistry in general and making it feasible to actually do computational inorganic and organic chemistry in a reasonably useful and reliable way. The market driving the development of methods for extremely large scale solutions, I think, is going to be somewhat smaller. I mean, my synthetic organic chemist friends are not going to start making bigger molecules simply because we are able to calculate them.

There is certainly an important role for these methods. I am interested in them and working on them, but in many ways, fine-grain parallelism and massively parallel systems are going to solve only a subset of all our problems. I will give two examples that I can think of. One, a simple case, is that I have a group of five students. Each student is working on a couple of different problems. They really need separate resources: they need to be able to get all those problems solved all together. Being able to solve a single problem quickly is not their goal. If I could solve each of those problems separately on a workstation, that would be great. Being able to do them sequentially on a massively parallel computer does not necessarily bring my group's set of problems to a solution more quickly.

As another example, I do not think that calculating the 25th derivative of a potential surface at a single point is going to be enough to characterize the whole surface. We are going to need to calculate the potentials and derivatives at lots and lots of points and still have some way of fitting these together. That would work just fine with loosely coupled parallelism.
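
A minimal sketch of the loosely coupled workload Doren describes (an illustration added here, not part of the talk): many independent single-point evaluations farmed out across processes, with the placeholder potential below standing in for a full electronic-structure calculation.

```python
# Evaluate a potential at many independent geometries in parallel.
from multiprocessing import Pool
import numpy as np

def energy(r):
    """Placeholder single-point 'calculation': a Morse-like curve."""
    return 0.2 * (1.0 - np.exp(-(r - 1.5))) ** 2

if __name__ == "__main__":
    geometries = np.linspace(0.8, 4.0, 1000)   # independent single points
    with Pool() as pool:                       # one task per geometry; no coupling
        energies = pool.map(energy, geometries)
    # The (r, E) pairs could now be handed to a fitting step or a dynamics code.
    print(f"computed {len(energies)} single-point energies")
```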

What are your thoughts? Any sense of where the balance is?

Peter Taylor: What I hope that machines like the ASCI-class machines will do is catalyze development of new methods, even if the immediate application of these to new areas or larger systems or more accurate calculations seems limited. The technology that is developed for handling those problems, the higher accuracy, the larger systems, the greater efficiency, will inevitably fit into the programs that are run on desktop machines and let people do chemistry that almost by definition they are not considering today.

An example of this is that if you go back to the earliest days of Gaussian programs in the early 1970s, the capabilities, in essence, were limited to Hartree-Fock calculations. This is partly, I guess, because of what was feasible at the time, but partly because I think up until the end of the 1960s there was not a very clear understanding of the essential role that electron correlation would play in more accurate predictions. In the mid-1970s, due to the work of various people (some here, like John, the group that I was part of in Australia, Rod Bartlett who is here at the back), more efficient methods of treating electron correlation were developed. Most of the chemists I knew who did calculations at all at that time felt that by and large this was unnecessary and could not see why we were mucking around with something that was only relevant to water or methane or hydrogen fluoride. And yet today no self-respecting chemist, not only quantum chemists, not only theoretical chemists, no self-respecting chemist would restrict his or her investigations of something to just the Hartree-Fock level. So, because now we do not see an immediate need for daily use of 5,000 processors and another order-of-magnitude accuracy in our application calculations, I do not think we should conclude from that that the market for these sorts of things, necessarily, is dwindling. I would say it is the technology that comes out of doing those calculations and the way it feeds into what people will use on the desktop in 5 or 10 years time that is the really critical thing.


Thom Dunning: I think there are also applications, as I think Paul could point out in the ASCI program itself, where, if you tackle a particular problem, it turns out that many of these problems are very data hungry. The combustion problem needs a lot of information on a lot of molecules in a lot of reactions so that you are faithfully representing the chemistry that is going on in a combustion process.

You could say we are going to wait until all those 1,000 species and all those 2,000 reactions are characterized in the laboratory. But, in fact, if you do that you are going to be sitting there forever. The only way to get some of that data is to do it computationally, and you may have to be doing a lot of calculations at relatively high accuracy, which will actually require the kinds of machinery being requested in ASCI. And that is the real driver. That is why those machines are necessary to solve their problem: they have such a complicated system that describing its various components really represents computational grand challenges, requiring the generation of a lot of data that is just not available from experiment. Of course, validation against experiment, as John said, is done whenever possible, but there are species like radicals and ions that can be very difficult to study in the laboratory, for which information is absolutely critical to the fidelity of representing that particular physical or chemical process in the total system.

David Dixon, Pacific Northwest National Laboratory: We have really not talked much about the nuclear motion problem, which is one of the things that I think does argue for looking at much larger architectures. It is much more straightforward to do things at zero Kelvin. If you want to look at chemistry at 298 or 500 degrees, you are after nuclear motion. If you want to treat the water dimer correctly you have to put quantum nuclear motion in, and I think that as we start to look at what we can do with the large architectures, we will actually start thinking about how we treat the nuclear motion problem, how we treat coupled rotors, how we treat weakly bound systems. This gets back to the enzyme interaction. That is going to require much larger computer resources than we have today, where we are working on single, accurate points.

If you do not get the zero-point energy of methane right at zero Kelvin, if you just take the calculated zero-point energy and compare it with what is known experimentally, you are off by 0.6 or 0.7 kcal, which is almost all of John's error. So the nuclear motion part of it will be critical to get right, and I think it is going to be one of the arguments for going to much larger architectures to really understand what is going on. I would appreciate your comments on that.

John Pople: Yes, I agree that is a major feature. We are certainly well aware that anharmonicity is one of the things that is not well treated at the present level of theory. It is swallowed up by one of the empirical parameters. And, indeed, one knows that when you come to molecules that are floppy—and most molecules, in biological circumstances for example, are very floppy, with all sorts of wriggling around going on—this is a very important feature. So I fully agree with you that one has somehow to merge the methods that are used for electronic structure with those that are used for handling nuclear motions, and this comes back to the dynamics problem again. There is a lot to be done in interfacing these areas and developing composite programs to handle the whole problem.

Andrew White, Los Alamos National Laboratory: What Paul has talked about, and what quantum chemistry has pointed out, are predictive models, whether it is thermochemistry or the safety of an aging stockpile. It seems to me that in your five-step plan you need a sixth step that somehow quantifies the uncertainty in the systems if you are really looking into the future—maybe some place you cannot do experiments, like stockpile stewardship or climate or nuclear winter or weather or whatever. Can you talk about what the state of the art is with Gaussian or with any other code relative to how you quantify this uncertainty? Maybe there are some lessons for the rest of us in how to put all this together for predictive models?

John Pople: Well, the uncertainty question is one that we have tried to address by validating the method against all the known facts. Now, if you try to do it purely ab initio, then it is rather hard to give an uncertainty. In principle, though not in practice, you can get upper and lower bounds to the energy that you could eventually close up. Bright Wilson used to be an advocate of that, but there are no signs of it becoming practical.

So, the best that we have been able to do, and I do not think anybody else has come up with an alternative suggestion, is to test a theory fully against everything that is known really well from experimental chemistry, and to do statistics on that. Those are the numbers that we have come up with. I think it is going to be very useful to look at the bad cases and say there is something wrong with our theory and it is showing up because these results are bad; that is a useful form of investigation.

But that is the best way that I can think of to describe an uncertainty.

Peter Taylor: Yes, one can perhaps quantify this a bit. The typical total energies of the sorts of molecules in John and Larry's schemes are of the order of hundreds of thousands of kilocalories per mole and one is looking for 1-kilocalorie-per-mole accuracy there. We typically do not do calculations of total energies that are accurate even to tens of kilocalories per mole, and I would claim that we have really no methodology for which it is practical currently to do total energies accurate to 1 kilocalorie per mole.
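
To put rough numbers on that remark (added arithmetic, not the speaker's): a medium-sized organic molecule has a total electronic energy of a few hundred hartrees, on the order of 2 × 10^5 kcal/mol, so 1-kilocalorie-per-mole accuracy in the total energy corresponds to a relative accuracy of roughly 1 / (2 × 10^5) ≈ 5 × 10^-6, that is, parts-per-million accuracy in an absolute quantity.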

There is a very large compensation for error in what we do. We have always known about this. We understand where it comes from in our case—that is, formation of a molecule is a relatively small perturbation on a group of atoms and the systematic errors in the atoms cancel out to a significant degree when you form the molecule. But the field is aware of this, and if we really wanted to have hard, small uncertainties, whether it is going the route of calculating upper and lower bounds or whatever, we would have to work very much harder in the calculation of the total energy itself than we are currently set up to do.

Richard Kayser, National Institute of Standards and Technology: I wanted to point out that we have an effort under way to put together what we call a computational chemistry benchmark database. Right now the database contains about 600 compounds for which we believe we have a good handle on the thermochemistry. In addition to that information, the database will contain the results of many different well-defined calculations based on different levels of theory and different basis sets, with the goal of trying to get a handle on the systematic errors that are inherent in different approaches.
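
A sketch (added here, not from the workshop) of the kind of bookkeeping such a benchmark database enables: given reference values and computed values for a set of compounds, report deviation statistics for a given method. The compound names and numbers below are invented placeholders, not NIST data.

```python
# Compare computed values with benchmark reference values for a small set
# of compounds and report simple error statistics (all numbers invented).
reference = {"H2O": -57.8, "CH4": -17.9, "NH3": -11.0}   # kcal/mol (placeholders)
computed  = {"H2O": -56.9, "CH4": -18.6, "NH3": -10.1}   # kcal/mol (placeholders)

errors = [computed[m] - reference[m] for m in reference]
mad = sum(abs(e) for e in errors) / len(errors)
print(f"mean absolute deviation: {mad:.2f} kcal/mol")
print(f"maximum deviation:       {max(errors, key=abs):+.2f} kcal/mol")
```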

Edwin Wilson, University of Akron: One of the things going on at our institution is calculation of conformations and interactions of polymers. What kind of efforts should be happening in that field and how will that interact with the ASCI program?

John Pople: Well, the problems of conformations and molecular geometries were already fairly well handled many years ago at the Hartree-Fock level. It was found that for organic molecules, even if you used a moderate basis like 6-31G, Hartree-Fock theory normally gave you the right conformation of individual molecules. So I think there has been fair success with quite elementary theory in dealing with that.

There needs to be refinement along the same lines, but I think the point to make is that conformational problems are somewhat simpler than getting the total energy of a bond. But undoubtedly further work is needed.

Peter Taylor: I would have to be a little less sanguine than my colleague is, I think. If you look at large molecules like polymers, one of the things they can do that the sorts of half-dozen carbon chain molecules in small-molecule organic chemistry cannot is that they can fold around on themselves. And one of the things that is key in some of this folding around on itself is relatively weak interactions, say dispersion-type interactions between different parts of the chain. Those dispersion-type interactions are not treated at the Hartree-Fock level at all, and so I think something significantly better than Hartree-Fock may ultimately be required to do reliable predictions of conformations of longer chains where there is substantial folding or coiling or things like this, assuming you want to do something that goes beyond the level of some parametrized model, some empirical type of force field.

John Pople: Yes, I completely agree with that. My points were referring to things like rotation about single bonds and the boat and chair conformations of cyclohexane, which were handled fairly well some years ago.

Edwin Wilson: One also finds that the interaction that occurs at interfaces between different polymers is a fairly important aspect of that area of chemistry.

John Pople: Yes, it is true that there is somewhat of a division among quantum chemists between those who work on individual molecules and those who work on intermolecular forces, and the two sometimes do not quite meet. People interested in intermolecular forces tend to look at the long-range limits, and they do not know how to join up with the people who deal with the strong interactions at closer range.

Peter Taylor: Another integration issue!

Jean Futrell, Pacific Northwest National Laboratory: I am not sure whether this is a showstopper or not, but I would like to inquire about the accuracy of present methods for defining transition states and their energetic properties, frequencies, and so on, and what one can expect from this leap forward in technology.

John Pople: One difficulty is that my suggestion that we test theories by comparing with results known experimentally to great accuracy does not really hold for transition structures. We would, indeed, like to do that, but only one or two energies of activation in the literature are really well known, and those energies are very difficult to reproduce theoretically anyway. So, I can only say that it is a more difficult problem and we have fewer means of testing the reliability of our results. The best we can do is to say that we hope the results are accurate to a given level, but we cannot at present completely test that.

Peter Taylor: I would say this is an excellent example of an area where theory needs to do more to meet the experimentalists on their own turf rather than stopping halfway. “Experimental estimates” of barrier heights are derived from all sorts of assumptions made in the interpretation of experiments. Such assumptions may or may not be warranted and may or may not mean what theoreticians think they mean when they are arriving at their own barrier heights. A far more satisfactory way to deal with the issue of reaction mechanisms, and ultimately kinetics, is for the computational chemist to calculate—this follows, really, from Evie Goldfield's point earlier—the rates of the reaction and compare the results directly with kinetics from experiments, which are not subject to all the varied assumptions made in order to obtain some sort of barrier height.

I think we should not get hung up on the issue of how to calibrate methods for calculating transition states and how to get error bars on the heights of barriers. We should go the next step further, integrate the dynamics into the calculation and then compute what the experimentalists actually measure and compare that with the experiment. That is the way to get reliable calibrations.

Thom Dunning: While there have not been many calculations, there have been some that would certainly indicate that the techniques we currently have available can achieve very high accuracy. It does turn out that calculations of transition states are significantly more challenging than calculations of stable species: the basis sets you have to use are larger and you have to go much closer to convergence to get reliable numbers. But I would say that, for the few systems that have been checked, the best techniques can get errors on the order of tenths of a kilocalorie per mole. The problem there is, as John says, that we really do not have any good information from experiment that pertains directly to what we are calculating. We have to go to the step that Peter is talking about to be able to compare with experiments.
