10
The World Wide Laboratory: Remote and Automated Access to Imaging Instrumentation

Bridget Carragher

and

Clinton S. Potter

University of Illinois at Urbana-Champaign

Introduction

For the past several years we have been involved in the development of remote and automated access to imaging instrumentation for an overall project known as the World Wide Laboratory (WWL).1 There are a number of advantages to providing remote access to imaging instrumentation. First, it provides access to unique and/or expensive instruments without requiring the user to be physically present at the site of the instrument. In addition it provides the opportunity for collaboration and/or consultation with researchers anywhere in the world, thus providing for a network of distributed expertise. Finally, this technology presents unprecedented opportunities for education and training. These opportunities might otherwise be limited only to those institutions with the means to support expensive and unique instruments.

We propose that there are at least six ways in which remote-access technology can be used in practice:

  • Service. In the service mode, a local operator consults with a remotely located principal researcher, who provides the specimen and offers input about the quality of the images and the parameters to be used to acquire data. This mode is extremely important for extending service capabilities at the National Research Resources.
  • Collaboration. In the collaboration mode, the principal researcher using the instrument can consult with other experts from around the world.
  • Education and training. Imaging instruments can be made accessible in the classroom for K-12 education and undergraduate and graduate training.
  • Remote research. The instrument can be used by a remote researcher with minimal local operator intervention.
  • Automated control. The instrument can be used by a remote researcher, and functions normally performed manually by a local operator are performed automatically by a computer system.



1. See <http://wwl.itg.uiuc.edu>.

A sixth mode, intelligent control, extends automated control: an intelligent system can perform the same functions as an operator and can learn from the researcher's experience.

In this paper we discuss our experience with the WWL project and some specific examples that we believe demonstrate the ideas behind a successful collaboratory.2 We address some of the issues involved in providing remote and automated access to instrumentation and its advantages for various categories of users.

WWL: Current Implementations for Service, Collaboration, and Education

Instrumentation currently supported by the World Wide Laboratory includes a transmission electron microscope (Philips CM200), nuclear magnetic resonance imaging spectrometers (Surrey Medical Imaging Systems, Varian, TechMag), and a video light microscope. All of these instruments are accessible through Web browser-based user interfaces.

Remote Access to TEM

JavaScope is a Web-based application designed to operate a Philips CM200 transmission electron microscope (TEM) and to view digital images remotely. JavaScope is written as a client/server application (see Figure 10.1). The JavaScope applet is the client and presents the application interface to the user. JavaScope responds to user actions by sending commands to a camera control server and a microscope control server that run on a UNIX workstation attached to the TEM and CCD camera. These servers control the TEM and digital camera using applications and libraries already developed as part of the emScope library.3 The user interface is shown in Figure 10.2.

As a readily accessible tool for remote consultation and exploratory grid browsing, the basic JavaScope implementation has been successful. JavaScope has been used by our collaborators in California (Research Institute at Scripps Clinic) to control the TEM in Illinois and provide advice as to the worth of acquiring data from a particular specimen. It has also benefited students at the microscope by providing them with a means to consult with a remotely located advisor.

2. M. Hamalainen, S. Hashim, C. Holsapple, Y. Suh, and A. Whinston, "Structured Discourse for Scientific Collaboration: A Framework for Scientific Collaboration Based on Structured Discourse Analysis," Journal of Organizational Computing, 2(1), 1-26, 1992.
3. N. Kisseberth, M. Whittaker, D. Weber, C.S. Potter, and B. Carragher, "emScope: A Toolkit for Control and Automation of a Remote Electron Microscope," J. Struct. Biol., 120, 309-319, 1997.
4. C.S. Potter, Z.-P. Liang, C.D. Gregory, H.D. Morris, and P.C. Lauterbur, "Toward a Neuroscope: A Real-time Imaging System for the Evaluation of Brain Function," Proceedings of the First IEEE International Conference on Image Processing, November 13-16, 1994, Austin, TX, Vol. III, pp. 25-29.

Remote Access to MRI

The second example in the World Wide Laboratory is remote control of a nuclear magnetic resonance (NMR) imaging spectrometer by means of a Web browser. This system evolved from our work in developing a distributed control, acquisition, and processing interface for an NMR imaging spectrometer (4 T, 31-cm bore) with an acquisition console from Surrey Medical Imaging Systems.4 This system,

called NSCOPE, has been interfaced to a UNIX workstation that provides a real-time processing and control capability. The significant features of this system are real-time control of all aspects of acquisition on the magnetic resonance imaging (MRI) system, real-time processing of dynamic MRI data, and distributed processing modules for high-performance computing systems.

Figure 10.1 Architecture underlying the JavaScope user interface.

NSCOPE was initially developed to support functional magnetic resonance imaging and dynamic imaging applications. The modular software architecture of the system has allowed us to adapt it to other user interfaces. For example, the Experimentalist's Virtual Acquisition Console (EVAC) used the NSCOPE system for real-time control and visualization of the MRI system from within an immersive visualization environment.5 This system used voice commands to control the MRI system from within a room-sized virtual environment called a CAVE. A real-time stereo capability that used the flexible processing capabilities of NSCOPE was also developed for EVAC.6

5. C.S. Potter, R. Brady, P. Moran, C. Gregory, B. Carragher, N. Kisseberth, J. Lyding, and J. Lindquist, "EVAC: A Virtual Environment for Control of Remote Imaging Instrumentation," Computer Graphics and Applications, 16(4), 62-66, July 1996.
6. C.D. Gregory, C.S. Potter, and P.C. Lauterbur, "Interactive Stereoscopic Magnetic Resonance Imaging," U.S. Patent No. 5708359, 1998.

A Web-based control interface was added to NSCOPE in early 1996 (Figure 10.3). This allows the MRI system to be controlled from anywhere with Internet access using a standard Web browser. Significant features include real-time acquisition and processing of images from the MRI system;

Figure 10.2 Java applets responsible for camera control (left) and microscope control (right).

Figure 10.3 Web-based control system provides interactive access to all imaging parameters on the MRI system.

a password-protected login system to limit access to authorized users; and a scheduling mechanism to limit permission for use of the acquisition capabilities to specific users at selected times. The current system provides complete control of all instrument acquisition parameters from the Web browser. The browser interface allows users from various domains and levels of expertise to run the MRI system without the need for extensive system-specific training.

Chickscope: A K-12 Education Project Using Remote MRI

The remote MRI system was used in the spring of 1996 in a project called Chickscope,7,8 which demonstrated the feasibility of remotely controlling an MRI device through the World Wide Web. The purpose of the project was to enable students and teachers from 10 classrooms, ranging from kindergarten through high school, to control an MRI system in order to study the maturation of a chicken embryo during its 21-day cycle of development. From classroom computers with access to the Internet, students used Web browsers to control the MRI system and manipulate experimental conditions through a simple online form. Students could generate their own data and then view the resulting images of the chick embryo in real time.

The objectives of the Chickscope project were twofold. First, we sought to make extraordinary hardware, software, and human resources available to the classrooms and study the impact of such a system on K-12 education. Second, we set out to "stress-test" interactive, remote control of the MRI system for further development by scientific researchers. Overall, the Chickscope project was very successful: it was well received by the teachers and students, and there has been a great deal of interest and enthusiasm for repeating the project. It also allowed us to demonstrate that very complex technology could be used effectively by students at all grade levels.
Remote Instrumentation for Service, Collaboration, and Education: Lessons Learned

User Interface

The basic requirements for the World Wide Laboratory in the service, collaboration, and education modes are relatively straightforward. On the user end we need a network connection and a standard Web browser. On the instrument end we need a network connection and interface software to interpret the commands coming in over the network.

There are several advantages to using a Web browser interface. First, because almost everyone knows how to use a Web browser, there is no need for training on a specific user interface. Second, Web browsers are now ubiquitous on all computer systems. Third, there are no special software or hardware requirements. As a result, we can be reasonably sure that a remote user anywhere in the world with access to the Internet will have the tools to run the instruments remotely. Other remote user interfaces use techniques such as remote screen copy (e.g., Timbuktu) or low-level distributed windowing libraries such as X Windows; these systems require specialized software to be installed and maintained on the remote user's computer system.

7. B.C. Bruce, B.O. Carragher, B.M. Damon, M.J. Dawson, J.A. Eurell, C.D. Gregory, P.C. Lauterbur, M.M. Marjanovic, B. Mason-Fossum, H.D. Morris, C.S. Potter, and U. Thakkar, "Chickscope: An Interactive MRI Classroom Curriculum Innovation for K-12," Computers and Education Journal Special Issue on Multimedia, 29(2), 73-87, 1997.
8. See <http://chickscope.beckman.uiuc.edu>.
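The instrument-end requirement described above, a network connection plus interface software that interprets incoming commands, can be illustrated with a minimal sketch. This is not the actual emScope or JavaScope code; the `InstrumentServer` class and its command names are hypothetical, and a real deployment would place this dispatcher behind a network socket or HTTP endpoint.

```python
# Minimal sketch of the instrument-side software: a dispatcher that
# interprets text commands arriving from a remote client and applies them
# to the instrument. The command set here is hypothetical; the real WWL
# servers were built on the emScope library and sat behind network sockets.

class InstrumentServer:
    def __init__(self):
        self.state = {"magnification": 1000}
        # One table entry per operation a remote client may request.
        self.commands = {
            "set_magnification": self.set_magnification,
            "get_magnification": self.get_magnification,
        }

    def set_magnification(self, value):
        self.state["magnification"] = int(value)
        return "OK"

    def get_magnification(self):
        return str(self.state["magnification"])

    def handle(self, line):
        """Interpret one command line, e.g. 'set_magnification 5000'."""
        name, *args = line.split()
        handler = self.commands.get(name)
        if handler is None:
            return "ERROR unknown command"
        return handler(*args)

server = InstrumentServer()
server.handle("set_magnification 5000")
print(server.handle("get_magnification"))  # 5000
```

In the WWL layout, the JavaScope applet plays the client role, generating such commands, with one server per device (camera control, microscope control).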

Modes of Collaboration

Our experience with the World Wide Laboratory indicates that at least three instrument access modes need to be supported:

  • Single user. Allows dedicated access to an imaging system by a single user.
  • Multiple non-cooperating users. Allows several users to access the system simultaneously. The users are not aware of each other; commands from the users are queued, and the data are returned to the requesting user. This mode is useful in education projects like Chickscope, in which several classrooms may be accessing the instrumentation simultaneously.
  • Multiple cooperating users. Allows several users to use an instrument collaboratively through mechanisms for passing instrument control among the users.

WWL: Current Implementations for Research

The above examples demonstrate the use of the World Wide Laboratory architecture for service, collaboration, and education. Using this architecture for remote scientific research, in which reliance on a local instrument operator is minimal or non-existent, poses additional requirements that are discussed below.

Automated/Intelligent Control for Scientific Research

Our group has been extensively involved in a project on remote and automated instrumentation for scientific research. The goal of the project is to acquire very large numbers of good-quality images from a TEM completely unattended by a human operator. The motivation for developing this automated system arises from the field of electron crystallography, in which a TEM is used to study the structure of proteins at moderate to high resolution (5 to 30 angstroms).
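The multiple non-cooperating-users mode described above lends itself to a short sketch: requests are queued in arrival order, executed one at a time on the single shared instrument, and each result is routed back to the user who submitted it. The class and method names below are illustrative, not the WWL implementation.

```python
from collections import deque

# Sketch of the "multiple non-cooperating users" access mode: commands from
# several users are queued and executed one at a time on the shared
# instrument, and each result is returned to the requesting user.

class SharedInstrument:
    def __init__(self, acquire):
        self.acquire = acquire   # callable that performs one acquisition
        self.queue = deque()     # pending (user, params) requests
        self.results = {}        # user -> list of returned data

    def submit(self, user, params):
        # Users are unaware of one another; they simply enqueue requests.
        self.queue.append((user, params))

    def run_pending(self):
        # Drain the queue in arrival order, tagging each result by requester.
        while self.queue:
            user, params = self.queue.popleft()
            self.results.setdefault(user, []).append(self.acquire(params))

mri = SharedInstrument(acquire=lambda p: f"image({p})")
mri.submit("classroom_a", "slice=3")
mri.submit("classroom_b", "slice=7")
mri.run_pending()
print(mri.results["classroom_a"])  # ['image(slice=3)']
```

The cooperating-users mode would add a token-passing layer on top of the same queue, so that only the user currently holding control may submit commands.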
The technique most commonly used to preserve the proteins in the TEM is known as CryoEM, in which the protein is preserved in a very thin layer of vitreous ice.9 The ice is usually suspended over a carbon grid, and the goal of the microscopist is to identify the holes in the grid where the ice is potentially of the right thickness and to acquire a high-magnification image of this area (Figure 10.4).

9. J. Dubochet, M. Adrian, J.-J. Chang, J.-C. Homo, J. Lepault, A. McDowall, and P. Schultz, "Cryo-electron Microscopy of Vitrified Specimens," Quarterly Reviews of Biophysics, 21(2), 129-228, 1988.

A number of practical problems with the CryoEM procedure tend to make it extremely time consuming and tedious for the operator. The first is that producing ice of precisely the right thickness is not straightforward; as a result the ice is quite often either too thick or too thin, and a lot of searching around the grid is required to find suitable areas. Second, because the electron beam is extremely damaging to the specimen and will destroy it after a very short exposure, the grid can be examined only at very low magnification if it is to be searched for any reasonable length of time. The high-magnification image is never examined prior to shooting the micrograph, and this leads, not too surprisingly, to a rather high rejection rate; many of the acquired images are simply thrown away, and only a few turn out to be suitable for further analysis. Finally, because the beam damages the specimen, the micrographs must be acquired with a very small dose of electrons, and the images are very "noisy" as a result. Thus, to determine the protein structure to high resolution, the signal-to-noise ratio must be increased, and this

requires averaging together many images. The end result is that this technique by its nature requires the acquisition of large numbers of micrographs, perhaps thousands or tens of thousands, to achieve high resolution. Manual methods are clearly impractical, and it was with this in mind that we embarked on the project of completely automating the acquisition of large numbers of cryo-electron micrographs.

Figure 10.4 Acquisition of cryo-electron micrographs. A copper grid (a) is covered with a carbon-coated perforated plastic mesh (b). A droplet of buffer containing the protein of interest is applied to the grid, blotted to a thin film, and then rapidly plunged into a liquid cryogen. The protein of interest (c) is preserved in its native form in the vitreous ice.

As a prototype for automated acquisition of TEM images, we have developed a system called Leginon10 to automatically acquire large numbers of acceptable-quality images from specimens of negatively stained catalase, a biological protein that forms crystals. Acquiring good-quality images of this specimen is often used as a test for students taking a course in electron microscopy and thus provides an excellent driver for the research methods that must be developed to solve the general problems of automated image acquisition. Furthermore, because catalase has an ordered crystalline structure, assessment of this order provides us with an objective measure of the quality of the automatically acquired images (Figure 10.5).

Each low-magnification image (Figure 10.5a) is processed to identify large contiguous areas of density by a template-matching method. Image feature metrics (size, mean, variance, centroid) are calculated and stored for each of the identified contiguous regions. These image features are later used in deciding whether a high-magnification image (Figure 10.5b) of the region will be acquired; for example, regions that are too small are rejected.
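The per-region feature metrics listed above (size, mean, variance, centroid) can be computed in a few lines. This is a schematic illustration, not Leginon's code; the template matching that finds the contiguous regions is omitted, and the `accept` helper and its cutoffs are hypothetical.

```python
# Schematic computation of the per-region feature metrics used for target
# selection: size, mean, variance, and centroid. Input is the list of
# (row, col, intensity) pixels belonging to one contiguous region.

def region_features(pixels):
    n = len(pixels)
    mean = sum(v for _, _, v in pixels) / n
    variance = sum((v - mean) ** 2 for _, _, v in pixels) / n
    centroid = (sum(r for r, _, _ in pixels) / n,
                sum(c for _, c, _ in pixels) / n)
    return {"size": n, "mean": mean, "variance": variance, "centroid": centroid}

def accept(features, min_size=2, min_mean=0):
    # Regions that are too small (or too dim) are rejected as targets.
    return features["size"] >= min_size and features["mean"] > min_mean

region = [(0, 0, 10), (0, 1, 10), (1, 0, 10), (1, 1, 10)]
print(region_features(region)["centroid"])  # (0.5, 0.5)
```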
The image quality of each high-magnification image is automatically assessed by calculating the power spectrum (Figure 10.5c), identifying diffraction spots (Figure 10.5d), and measuring the signal-to-noise ratio of each diffraction spot. Currently, the automated system can acquire approximately 1,000 images in a 24-hour period.

10. C.S. Potter, H. Chu, B. Frey, C. Green, N. Kisseberth, T.J. Madden, K.L. Miller, K. Nahrstedt, J. Pulokas, A. Reilein, D. Tcheng, D. Weber, and B. Carragher, "Leginon: A System for Fully Automated Acquisition of 1000 Micrographs a Day," manuscript submitted to Ultramicroscopy, October 1998; see <http://www.itg.uiuc.edu/tech_reports/98-008/>.

In one experiment, we compared the performance of the automated system to that of a human operator. A total of 288 high-magnification images were acquired manually, and 79 percent of these were acceptable as defined by an analysis of the order of the crystal. In comparison, using the same

specimen, the fully automated image-acquisition system was used to acquire 380 images, of which 51 percent were acceptable.

Figure 10.5 Automated acquisition of electron micrographs of negatively stained catalase crystals.

The system can be further improved by adding intelligence to the feature selection criteria. For example, analysis of the results indicated a correlation between average feature intensity and image quality. This feature intensity is related to the thickness of the catalase crystal and indicates that thinner specimens result in more acceptable images. The fully automated target selection criteria were further refined by incorporating an assessment of specimen thickness into the model. By acquiring high-magnification images of only those features that have an average intensity greater than a preset threshold, the percentage of acceptable images can be significantly improved. For example, if the threshold is set to 6,000, the percentage of acceptable images improves from the baseline of 51 percent to 86 percent. Thus, the automated system does as well as or slightly better than a human operator.

Remote Instrumentation for Scientific Research: Lessons Learned

Our experience with the development of the TEM project has shown the necessity of incorporating automation and intelligent algorithms into the data-acquisition system. Developing such a system effectively requires a distributed hardware and software environment. The basic architecture of the Leginon system is illustrated in Figure 10.6. The system has these components:

Instrument interface. Developing an instrument interface requires information from the manufacturer, and distributed control is needed for accessing the instrument over a network. This process is

complicated by the lack of open systems and industry standards. Ideally, the instrument should require minimal human interaction during an automated experiment; for example, the time between refills of cryogens on the TEM should be extended to support overnight runs.

Figure 10.6 Major components of the Leginon system.

Database. It has become clear in the course of the Leginon project that there is a critical need for a database to support the thousands of images that are acquired and the acquisition parameters associated with each image. Incorporating a database would provide improved data management as well as the ability to track acquisition, control, processing, and modeling parameters.

Processing and analysis. Developing intelligent image-acquisition systems requires that the instrument be closely integrated with processing and analysis software packages. There is a need for integration with commercial and community software packages, and these need to support interfaces for distributed access.

Control. A distributed control program must effectively synchronize all of the components of the distributed system and needs to be adaptable to each experiment. Ideally, the control program should be portable between systems and extensible by the end user.

User interface. The user interface must be flexible and suit the needs of the user. The system must be flexible enough to support new technologies such as next-generation Web interfaces and virtual reality.

Additional features that would be desirable for the World Wide Laboratory include audio and video conferencing and real-time updates of system status. These capabilities enhance the remote researcher's understanding of the current state of the instrument. We also believe that scheduling and security are not only desirable but essential for turning this technology into a practical reality.
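The components above can be tied together in a schematic acquisition loop: survey at low magnification, extract features, select targets by the intensity threshold described earlier, and record each high-magnification image with its parameters in a database. Everything below (the function names, the threshold handling, the stand-in microscope) is illustrative, not Leginon's actual API.

```python
# Schematic acquisition loop linking the major components: instrument
# interface, processing/analysis, target selection, and a database of
# images with their acquisition parameters.

MEAN_INTENSITY_THRESHOLD = 6000  # preset cutoff from the target-selection model

def run_session(microscope, analyze, database):
    """Survey at low magnification, pick promising targets, then acquire
    and record a high-magnification image of each selected target."""
    survey = microscope.acquire_low_mag()
    for feature in analyze(survey):
        if feature["mean"] <= MEAN_INTENSITY_THRESHOLD:
            continue  # likely too-thick specimen; skip this target
        image = microscope.acquire_high_mag(feature["centroid"])
        database.append({"target": feature, "image": image})

class FakeMicroscope:
    """Stand-in for the instrument interface during a dry run."""
    def acquire_low_mag(self):
        return "low_mag_survey"
    def acquire_high_mag(self, centroid):
        return f"high_mag@{centroid}"

def fake_analyze(image):
    # Pretend feature extraction found two regions, one below threshold.
    return [{"mean": 7200, "centroid": (10, 12)},
            {"mean": 4100, "centroid": (40, 8)}]

db = []
run_session(FakeMicroscope(), fake_analyze, db)
print(len(db))  # 1
```

Separating the instrument interface behind a small object like this is also what allows the same control loop to run unattended overnight or be driven remotely.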

Conclusions

We have demonstrated that the WWL architecture can be used for service, collaboration, education, and scientific research. The remote instrumentation supported by the WWL is one component of a collaboratory. Although the Chickscope project was not originally intended to do so, we believe in retrospect that it demonstrated all of the components defined for a working collaboratory. Chickscope provided access to remote instrumentation from the classroom and gave students access to distributed expertise. All of the participants actively contributed to and used an image database that is now used by others, and the project served to develop a community composed of students and researchers from a number of different disciplines.

There is increased interest in developing the technology to support remote instrumentation. To improve the acceptance of collaboratories in the general scientific community, we need to demonstrate the impact of this technology in the scientific research environment and systematically evaluate collaboratories' contribution to productivity.

Acknowledgments

The World Wide Laboratory project is a collaboration with several groups at the University of Illinois at Urbana-Champaign, including members of the Beckman Institute for Advanced Science and Technology, the Biomedical Magnetic Resonance Laboratory, and the National Center for Supercomputing Applications. We are grateful for the enthusiastic participation of the many individuals who have contributed to these projects. Financial support and equipment were provided by the National Science Foundation (Grant Nos. 9730056 and 9871103), the IBM Shared University Research program, Informix Inc., and the Lumpkin Foundation.

Discussion

Peter Taylor, San Diego Supercomputer Center: You talked at the end about this issue of cultural adjustment to collaboratories.
High-energy physics and both radio and optical astronomy are areas where people have been running consortia and working collaboratively—and I am distinguishing that from a collaboratory—for a number of years. Do we turn to groups like that to find out how to make that cultural adjustment, or do you think the problems there are sufficiently different from chemistry or biology that there is not much to learn?

Bridget Carragher: In truth I don't know, but I think we could look at examples like that to get some experience of what it is like. But more important than that is to get people involved in using these systems. If people are collecting their data and just doing so much better than they were before, that is going to be picked up by 10 other groups immediately, and there will be a groundswell of support. You know, it is a word-of-mouth thing. Just talking about what a great thing collaborations are does, I think, no good at all in my field. The only thing to do is to demonstrate that it works. I could say, "It works great in high-energy physics," and people will just shrug. It means nothing to them unless you demonstrate it in their own particular paradigm.

William Winter, SUNY-ESF, Syracuse: I would like to point out to the polymer and materials people in the audience that electron crystallography is a very powerful technique in that area, as well as in protein crystallography, and it has been demonstrated by Douglas Dorset in this country and Henri Chanzy in

France, among others. The question would be, Is it really the best use of very expensive instrumentation to have it dedicated online 24 hours a day, seven days a week, for use in elementary school education? I don't know. I am asking you.

Clint Potter: Absolutely not, and I guess one of the reasons we do projects like Chickscope or Bugscope is that it is a technology driver. Putting together the Chickscope project, having second graders bulletproof your system, is a great test of how well your system actually works. I think it also gets back to this idea of cultural training. The second graders have no qualms about collaboratories even if they don't know what one is. They are doing it, and they are not set in their ways of doing research. Maybe it is that generation that is going to be doing the scientific collaborations 20 years from now, or maybe it will come quicker. We do not have our instruments available 24 hours a day; it is for a specific project.

Bridget Carragher: In Bugscope we will have a period of time once a week, maybe, for a few hours of access, which is a good thing. I think outreach and training are very good things, but of course we wouldn't want to do that all the time. One of the lessons learned from the Chickscope project was that the second graders thought this was entirely natural: Why shouldn't they have an MRI? Why shouldn't they be speaking to researchers from all over the world and accessing thousands of images? There was no surprise for them at all—no "Gee, wow."

Clint Potter: They really didn't feel that this was unusual. That was one of the big things that came out of this project. They were talking to the researchers and the scientists, and it was not just a one-way thing. They did not just suck our energy out of the project. It involved having users at the other end who were really dependent on our system being up for their one shot at it during that day, and having questions coming back.
We put together the Chickscope project in about a month and a half from scratch. It was exciting, and there was a lot of energy built up around it because we had these real customers out there.

Sue Fratkin, Southeastern Universities Research Association: Let me also inform all of you, having worked with Chickscope on the Hill, that there is a third element to that project: the members of Congress could see, touch, taste, and feel that kind of an experiment and so could relate to it in terms of funding. They could see that the young kids were relating to it and were doing very well. The congressmen and congresswomen themselves were playing with it because we were online right then and there, and their reaction was worth all the money, because that is where your funding is going to come from. If they can understand it and relate to it, then you have a much better chance of getting your point across.

Clint Potter: We have never had any funding to do collaboratories. We do it because we think it is actually needed, and even an educational project, we thought, was a good way to prove to our own communities that if second graders could run an NMR spectrometer, then maybe a principal investigator could do it as well.

Allen Bard, University of Texas: Getting back to the issue of the collaboratory in the culture of chemistry, I think the model might be centers for instrumentation. Historically there were NMR centers and computer centers, and I think by and large they weren't terrifically successful. A little aspect of the chemistry culture is that if chemists can get it in their own backyard they are going to do it, but a chemist

is not going to go to a center or send samples to centers, and that is really ingrained in the culture. So I think where this, and the model of high-energy physics and radio astronomy and so on, will be most successful is where there are instruments that no one is going to have in his own backyard. I am not so sure about electron microscopes. Maybe very high-end electron microscopes might work, but I think that is where you will get a collaboratory. If I can do the experiment in my own backyard or find somebody to finance it, I will probably do that.

Bridget Carragher: Absolutely. And I would always choose to do that myself. I run a big facility with 12 microscopes in it, but if people have it in their own network they should stay there. That is where all their stuff is. That is where their colleagues, their students, and their notebooks are. But that is really also the idea of running these experiments remotely and automatically: you are just sitting at your desk with all your stuff around you, and you are not going to a center. And certainly really decent electron microscopes these days cost $1.5 million to $2 million each. You are not going to have more than a dozen or so of those in the country. You could have thousands of people doing this. The other idea with running these instruments continuously is that if one costs $2 million, you want it up 7 days a week, 24 hours a day. You do not want that machine sitting idle.