
Currently Skimming:

1 Drowning in Data
Pages 1-24

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 3...
... per year over two decades. Dynamic random access memory chips deliver 40 percent more capacity per dollar each year.
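As a rough illustration of what a sustained 40 percent annual improvement in capacity per dollar compounds to over two decades, the short Python sketch below simply multiplies the rate out; only the rate and the time span come from the text, the rest is arithmetic.

# Compound effect of a 40% per-year improvement in DRAM capacity per dollar.
# Only the 40% rate and the two-decade span come from the excerpt above.
annual_improvement = 0.40
years = 20
factor = (1 + annual_improvement) ** years
print(f"Capacity per dollar grows by roughly {factor:.0f}x over {years} years")
# ~837x: a steady 40%/year rate compounds to nearly three orders of magnitude.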
From page 4...
... FIGURE 1 Comparison of magnetic data storage and competing technologies (magnetic hard disk versus solid-state RAM/Flash, 1980-2000).
From page 5...
... Minimizing head-disk spacing is essential to maximizing areal density, while at the same time, head-disk physical contact must be avoided to prevent interface failure. Tight flying-height control is accomplished through the use of photolithographically defined multilevel air bearings that use combined positive and negative pressure regions to minimize sensitivity to interface velocity, atmospheric pressure, skew angle, and manufacturing tolerances.
From page 6...
... At the row level, the air bearing surface is first lapped to tight flatness specifications, and then lithographically patterned and etched to create air bearing features. Air bearing features are typically 0.12 μm in height.
From page 7...
... SOURCE: IBM. are also seeing a wave of innovation, with the implementation of higher moment materials, more compact coils, and better dimensional control to provide increased write field at higher speed in narrower tracks.
From page 8...
... However, shrinking bit size requires shrinking grain size to maintain adequate signal-to-noise (Figure 5). Choosing materials with higher Ku poses problems in generating sufficiently large write fields (today's disk materials already require fields that cause magnetic saturation of write head poles)
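The signal-to-noise point in this excerpt is commonly argued with a grain-counting model in which SNR scales with the number of grains per bit cell. The sketch below uses that rough model with hypothetical dimensions; the model and all numbers are illustrative assumptions, not values from the text.

import math

def media_snr_db(bit_area_nm2, grain_diameter_nm):
    # Rough grain-counting estimate: SNR (dB) ~ 10 * log10(N),
    # where N is the number of grains in one bit cell (illustrative model).
    grain_area = math.pi * (grain_diameter_nm / 2.0) ** 2
    n_grains = bit_area_nm2 / grain_area
    return 10.0 * math.log10(n_grains)

# Hypothetical dimensions: shrinking the bit cell while keeping grain size fixed
# cuts the grain count and therefore the SNR, which is why grains must shrink too.
print(media_snr_db(bit_area_nm2=100 * 50, grain_diameter_nm=8))  # ~20 dB
print(media_snr_db(bit_area_nm2=50 * 25, grain_diameter_nm=8))   # ~14 dB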
From page 9...
... (b) Choice of materials, underlayers, and sputtering conditions determines grain size.
From page 10...
... Although there are many technical challenges, magnetic data storage is expected to remain a dominant player for years to come. As magnetic data storage reaches its 50th anniversary, it is branching out in new directions.
From page 11...
... This evolution is continuing with the increasing popularity of the Web. For example, many experts believe that there will be a push toward large centralized servers used as permanent data repositories, providing users with easy access to their data from a wide range of devices connected to the Internet.
From page 12...
... market, which is a small niche market with little growth, the commercial market is large and enjoys high growth due to the increasing popularity of the Web and the push toward centralized information servers. Meanwhile, the rate of acceptance of the shared-memory programming model in the HPTC market has been slower than anticipated.
From page 13...
... In reaction to this, many organizations moved back to more centralized servers for data storage and resource-intensive computations. Furthermore, the emergence of the Web has led to numerous centralized information services, with Web browsers being the analog to the simple terminals of the mainframe era.
From page 14...
... Isolating and protecting against hardware and software faults is especially difficult when resources are transparently shared, since the faults can quickly propagate and corrupt other parts of the system. Furthermore, a number of applications require incremental upgrades or replacement of various hardware and software components while the system is running.
From page 15...
... with aggressive next-generation multiprocessor systems, processor speeds will be over 30 times higher, while memory latencies will improve by only a factor of 10. Fortunately, memory bandwidths have improved faster than processor speeds during this period.
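To make the widening processor-memory gap concrete, the arithmetic below applies the 30x and 10x factors from the excerpt to a hypothetical baseline machine; only the ratios come from the text.

# Widening processor-memory gap: only the 30x and 10x factors come from the text;
# the baseline clock rate and memory latency are hypothetical.
base_clock_ghz = 0.2
base_mem_latency_ns = 200.0
cpu_speedup, mem_speedup = 30, 10

stall_cycles_before = base_mem_latency_ns * base_clock_ghz
stall_cycles_after = (base_mem_latency_ns / mem_speedup) * (base_clock_ghz * cpu_speedup)
print(stall_cycles_before, stall_cycles_after)  # 40.0 -> 120.0 cycles per access
# Each memory access costs 3x more processor cycles (30/10), even though latency improved.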
From page 16...
... Fault containment is well understood in distributed systems where communication occurs only through explicit messages, and incoming messages can be checked for consistency. However, the efficient resource sharing enabled by shared-memory servers allows the effect of faults to spread quickly and makes techniques used in distributed systems too expensive given the low latency of communication.
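A minimal sketch of the containment idea in this excerpt, assuming a simple illustrative message format: the receiving node checks each incoming message before applying it, so a faulty sender's messages are dropped rather than allowed to corrupt local state.

def apply_if_consistent(store, expected_seq, msg):
    # Validate an incoming message before applying it to local state.
    # Assumed (illustrative) message format: {"seq": int, "key": str, "value": object}.
    # Malformed or out-of-order messages are dropped, containing the sender's fault.
    if not {"seq", "key", "value"} <= set(msg):
        return expected_seq              # malformed: drop it
    if msg["seq"] != expected_seq:
        return expected_seq              # unexpected sequence number: drop it
    store[msg["key"]] = msg["value"]
    return expected_seq + 1

store, seq = {}, 0
seq = apply_if_consistent(store, seq, {"seq": 0, "key": "x", "value": 1})  # applied
seq = apply_if_consistent(store, seq, {"seq": 7, "key": "x", "value": 9})  # contained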
From page 17...
... This type of solution has yet to appear in commercially available systems, partly because of the lack of sufficient hardware support in current designs and partly because restructuring a commodity operating system for fault containment is a challenge. Achieving Software Scalability and Reliability Through Virtual Machine Monitors A virtual machine monitor is an extra layer of software introduced between the hardware and the operating system.
From page 18...
... Commercial workloads such as databases and Web servers have become the primary target of multiprocessor servers. Furthermore, the reliance on centralized information services has created demand for designs that provide high availability and incremental scalability.
From page 19...
... Washington, D.C.: IEEE Computer Society Press.
From page 20...
... This was cogently displayed in a 1997 operation named "Eligible Receiver," in which a National Security Agency team demonstrated how to break into U.S. Department of Defense and electric power grid systems from the Internet; generate a series of rolling power outages and 911 overloads in Washington, D.C., and other cities; break into unclassified systems at four regional military commands and the National Military Command Center; and gain supervisory-level access to 36 networks, enabling e-mail and telephone service disruptions (NRC, 1999).
From page 21...
... Most work on applying these principles to build survivable data services has attempted to duplicate the TMR approach in networked settings, where it has come to be known as "state machine replication." These systems consist of an ensemble of closely coupled computers that all respond to each client request. Again, the correct answer of the ensemble of servers is determined by voting.
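A minimal sketch of the voting step in state machine replication; replica coordination, request ordering, and failure detection are omitted, and the function and sample data are illustrative assumptions rather than anything from the text.

from collections import Counter

def vote(responses):
    # Majority voting over replica responses to a single client request.
    # Real systems must also ensure every replica processes requests in the
    # same order so that correct replicas produce the same answer.
    if not responses:
        return None
    answer, count = Counter(responses).most_common(1)[0]
    return answer if count > len(responses) // 2 else None

# Hypothetical replica outputs: one of three replicas is faulty or compromised.
print(vote(["balance=100", "balance=100", "balance=37"]))  # -> balance=100
print(vote(["a", "b", "c"]))                               # -> None (no majority)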
From page 22...
... While effective against attacks that an attacker cannot readily duplicate at all servers (e.g., attacks exploiting configuration errors or platform-specific vulnerabilities in particular servers, physical capture, or corruption of a server administrator), these approaches provide little protection against attacks that can penetrate many servers simultaneously at little incremental cost per penetration to the attacker.
From page 23...
... Los Alamitos, Calif.: IEEE Computer Society. Malkhi, D., M
From page 24...
... Moving up the Information Food Chain: The Future of Web Search OREN ETZIONI Go2net, Inc., Seattle, and Department of Computer Science and Engineering, University of Washington, Seattle The World Wide Web is at the very bottom of the Information Food Chain. The Yahoos and AltaVistas of the world are information herbivores, which graze on Web pages and regurgitate them as directories and searchable collections.

