Suggested Citation:"TRANSCRIPT OF PRESENTATION." National Research Council. 2004. Statistical Analysis of Massive Data Streams: Proceedings of a Workshop. Washington, DC: The National Academies Press. doi: 10.17226/11098.


A STREAM PROCESSOR FOR EXTRACTING USAGE INTELLIGENCE FROM HIGH-MOMENTUM INTERNET DATA

TRANSCRIPT OF PRESENTATION

MR. RHODES: This should go flawlessly because of our advance planning. First, I want to tell you how honored I am to be here, and I want to thank the organizers of this conference for putting this setting together. I particularly want to thank Lee Wilkinson for being a great mentor and friend and guiding me along the way. I am going to talk about a stream processor that we have developed at HP that is currently on the market. We sell this product. Before you make any kind of measurements of what we are talking about, I just want to be clear that we should be calibrated. This is not science. This is engineering. Our role, my role, at HP is to develop advanced software. Our statistical sophistication is very low. I am learning and, with the help of Lee Wilkinson, I have learned an immense amount. I hated statistics when I was in college, but now I am really excited about it. So, I am really having fun with it. In isolation, much of the technology you will see here has been written about before in some form. Nonetheless, I think you will find it interesting. The context of this technology is that we develop software for communications service providers—particularly Internet providers, although not exclusively. Those are our customers. We got started as a start-up within HP about five years ago, exclusively focused on the Internet segment, and particularly broadband Internet providers. We are finding that the technology we built is quite extensible to neighboring markets, particularly telephony, mobile, satellite and so forth. Now, the network service providers, as I am sure you know, have some very serious challenges. The first one is making money. The second one is keeping it.
In terms of making money, Marketing 101 or Business 101 would tell you that you need to understand something about your customers. The real irony here is that few Internet service providers do any measurements at all about what their customers are doing. In fact, during the whole dotcom buildup, they were so focused on building infrastructure that they didn't take the time to invest in the systems that would allow them to understand more about customer behavior. That even goes for the ISPs that are part of big telephone companies. Of course, telephone companies have a long history of perusing the call detail records and understanding profiles of their customers. There are some real challenges here, not only understanding your customers, but understanding what the differentiating services are. It is very competitive. What kinds of services are going to make money for you? Another irony is pricing this stuff. It is not simple. It is not simple now, and it will get even more complex, because of this illusion that bandwidth is free. That won't survive. It is not free. So, there have to be some changes and, as I go through the talk a little bit later, I think you will see why pricing is such a challenge, particularly for broadband. Certainly, you want to keep your own subscribers as part of your network, but you are also concerned about abuse, fraud, theft, and other kinds of security breaches.

Now, when you go and talk to these service providers—they own the big networks—what you find is, like in any big organization, they have multiple departments and, of course, the departments don't communicate very well. This is not a surprise. We have the same problem at HP. Nonetheless, the sales people are interested in revenue. So, they are really interested in mediation systems, which collect the data about the usage of their subscribers so that they can bill for it in some way. This is an emerging trend and will continue to be. They are interested not just in bytes, but in what type of traffic it is, and time of day. For instance, they want to be able to track gamers, say, to a local gaming host on the network, because their network bits are cheaper than peering agreements out on the open networks. So, understanding who the people are who are using games and so forth would be of interest to them. Product development—these are the folks who come out with the new services. So, they need to have some sense of, well, is this going to make money or not, what is attractive. Network operations needs an understanding of utilization and performance on a day-by-day basis. They tend to be very focused on servers, on machines, on links, to make sure they are operating properly. Product planning is often in a different department. These are the ones who are interested in future capacity—how can I forecast current behavior forward to understand what to buy and when. Realize that a lot of quality of service, if you can call it that, on the Internet today is accomplished by over-provisioning. So, if I have bodacious amounts of bandwidth, nobody tends to notice. Of course, IP is particularly poor at quality of service, but there is work being done to address that. So, the technology challenges for the service provider—there are many, but here are some of the few key ones. They would like to capture, just once, the data that would serve all these different needs.
It is expensive to capture usage data, and the tendency among vendors such as HP is to go in and say, oh, great, we have this widget. We will just sample your key core routers with SNMP queries and get all this valuable data for you. Of course, every other vendor comes in and wants to do the same thing. So, they end up with 50 different devices querying all their routers and virtually bring the routers down. Economic storage and management of the Internet usage data is a severe problem. Of course, they want the information right away and, of course, it has to scale. So, I am talking about some of my back-of-the-envelope analysis of this problem of data storage and analysis challenges. Starting with what I call a crossover chart. What I did is a very simplistic calculation saying Internet traffic, particularly at the edges, is still doubling about every, say, 12 months. At times it has been faster than that. Over the past several years, it seems to be pretty stable. One of the interesting things is that the traffic in the core of the Internet is not increasing as fast as it is at the edges, and a lot of that has to do with private peering agreements and caching that is going on at the edge, which is kind of interesting. The next thing I plotted was areal density of disk drives. In the disk industry, this is one of their metrics: how many millions of bits per square inch of magnetic surface can they cram onto a disk. That has been doubling about every 15 months. So, it is a little bit slower. Then Moore's law, which doubles about every 18 months. The axes have no numbers on them. They don't need them. It doesn't matter where you originate these curves, you are going to have a crossover. If this continues to grow at this rate, then at some point—choose your measure—the traffic on the Internet is going to exceed some value. I think we can help with this one by better collection strategies and using statistics.

AUDIENCE: I have to admit, I am really confused here by comparing Internet traffic volumes to disk drive densities.

MR. RHODES: It is just a very simplistic assumption. It says that, if I am receiving traffic and I need to store information about that traffic that is proportional to the traffic, I have got to put it someplace.

AUDIENCE: What does it mean that they are equal?

MR. RHODES: I am just saying choose a value. Suppose you can store so many terabytes of data today. If the ability to store their data economically doesn't increase as fast as the traffic and the need to store it, you may have a problem.

AUDIENCE: So, where is the traffic coming from, if people can't store it?

MR. RHODES: That is on your own machines. Remember, the Internet is still growing. There are people joining. Now, the other crossing is Moore's law, which says that if the traffic continues to increase faster than Intel can produce CPUs that keep up with it, or Cisco can produce processors that keep up with it, you just have to add more horsepower.

AUDIENCE: Well, isn't the traffic consumed? If I am watching a video, I consume that traffic, I don't store it.

AUDIENCE: Some people might want to store it.

MR. RHODES: Okay, at the service provider, they are not storing the actual traffic.
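The crossover argument above can be sketched numerically. This is a hedged illustration, not a calculation from the talk: it assumes demand and supply both grow exponentially with the doubling times quoted (traffic about every 12 months, disk areal density about every 15), and asks how long until demand overtakes supply given some starting headroom. The 10x headroom figure below is purely hypothetical.

```python
import math

def crossover_months(headroom, t_double_demand, t_double_supply):
    """Months until demand (doubling every t_double_demand months)
    overtakes supply (doubling every t_double_supply months),
    given that supply starts at `headroom` times demand.

    Solve 2^(t/td) = headroom * 2^(t/ts)
      =>  t = log2(headroom) / (1/td - 1/ts)
    """
    rate_gap = 1.0 / t_double_demand - 1.0 / t_double_supply
    if rate_gap <= 0:
        return math.inf  # supply grows at least as fast; no crossover
    return math.log2(headroom) / rate_gap

# Traffic doubling every 12 months vs. areal density every 15 months:
# even with a hypothetical 10x headroom today, the curves must cross.
months = crossover_months(10.0, 12.0, 15.0)
print(f"crossover in about {months:.0f} months")
```

As the talk notes, the exact starting point only shifts the crossover date; the existence of the crossover follows from the gap between the two doubling rates.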
What they are interested in are the summary records, which are called usage data. The usage data are summaries of flows. At least, that is very common in the service providers. It is a fraction of the actual traffic but, as a fraction, it stays about the same. So, as a service provider, the tendency—and this may seem strange—is to save all of it. Those who have telecom backgrounds sometimes save their call detail records (CDRs) for seven years. Sometimes there are regulatory requirements. With Internet traffic, the number of summary records for a session is far higher—orders of magnitude higher—than for a single phone call. If you make a phone call, one record is produced. If you sit hitting links on the Internet, you are sometimes producing hundreds of sessions, the way these sessions are recorded.

The second graph is also a back-of-the-envelope calculation. This is based on some measurements that we have done, which is the storage required. Now, presume that you wanted to store just these usage records. One of the factors that we have measured on broadband Internet is the number of what we call flows—these are really micro-flows—per second per subscriber in a broadband environment. It is around 0.3 and varies, depending on time of day, from about 0.1 up to about 0.3. Now, you multiply that through times the size of a storage record—and they don't want to store just the flow information, they usually also need to put in information like the subscriber ID and some other key information. You assume a couple hundred bytes per record. All of a sudden, you are talking about petabytes or exabytes of storage, if you want to store it for any kind of period. So, these represent different numbers of subscribers, different scales. The dark red one is about a million subscribers—a service provider we are working with today saw this coming and realized that they had a problem. The other one is also a back-of-the-envelope calculation: time to process this stuff. Say you get it all into a database. You have got to scan it once. That can take a long time. There, I just projected different database systems, depending on how many spindles and how sophisticated you want to get, in terms of how many records per second you can process and how much money you want to spend on it. So, we are talking about years, sometimes, if you wanted to scan the whole thing. So, there is a problem here, and it has to do with inventory—you just have too much inventory of data. Handling it is a severe problem. So, this is a somewhat tongue-in-cheek illustration, somewhat exaggerated to make a point, but a lot of our major customers are very used to having very big data warehouses for all their business data. Data warehouses are tremendous assets. As soon as you start trying to plug these into the kinds of volumes we are talking about, it no longer makes that kind of sense. What we have developed—this is just a shortcut—is a way to capture information on the fly and build not just a single model, but hundreds or thousands of small models of what is going on in the network. Then, we have added the capability of essentially a real-time look-up, where the user, using a navigation scheme, can select what data they want to look at and then look at, for instance, the distribution statistics of that intersection.
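The storage figure above can be reproduced from the numbers quoted in the talk (about 0.3 flows per second per subscriber, a couple hundred bytes per record). A minimal sketch, where the exact 200-byte record size and the one-year retention horizon are illustrative assumptions:

```python
def yearly_flow_storage_bytes(subscribers, flows_per_sec_per_sub=0.3,
                              bytes_per_record=200, days=365):
    """Back-of-the-envelope storage needed to keep every flow record:
    subscribers x flow rate x record size x seconds retained."""
    seconds = days * 86400
    return subscribers * flows_per_sec_per_sub * bytes_per_record * seconds

# One million subscribers, one year of records:
pb = yearly_flow_storage_bytes(1_000_000) / 1e15
print(f"about {pb:.1f} PB/year")  # about 1.9 PB/year
```

So a single million-subscriber provider lands in the petabyte range per year from summary records alone, which is the problem the talk describes.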
This is the product—I promise I am not trying to sell anything, but I just want to say this is the architecture of the product that is the foundation of this. It is called Internet Manager. It is an agent-based technology. These represent software agents. It is these three things together here: encapsulator, rule engine, and a distributed data store. In a large installation, you can have scores to hundreds of these, and the whole idea is putting a lot of intelligence right up close to the source of this high-speed streaming data. We have different encapsulators. These are all plug-ins. The encapsulator is like a driver. It basically converts whatever unique source type or record type these various sources produce to an internal format. Then, this is a rule engine, which I won't talk about. Basically, the flow is generally to the right, although this is somewhat simplistic, so it represents a kind of pipeline. So, these rule engines process rules, and they scale in three dimensions. One is horizontal parallelization, which you have with many agents. The second is the size of the machine you put these on. The third is that you can defer certain rules downstream. So, you can spread your intelligence processing. Now, a lot of times—and this is where we initially got started—we were supplying data in database form, or file form, to various other business systems like rating, billing, reporting operations and so forth. That is how we got started.

Now, to give you an idea of the data, here is an example of one format, of hundreds that we read. This is a NetFlow version 5 record format. You can see all the different types of information that come out. Basically, it is summary information from the headers that you have been hearing about in the previous talks: source and destination addresses, source and destination ports, bytes, packets, and a lot of very valuable information here. It is of a flow. A flow is a group of packets that is matched on source and destination IP address, and sometimes also on source and destination port. So, it is really nailed down to the particular transaction that is going on. So, what do we do with this? Each of our engines can pull in anywhere from around 50,000 to 100,000 records per second. The first task is to collect these and normalize them. The normalized record I like to think of as a vector, which was also spoken of earlier. This is a set of arbitrary attributes. Think of them as columns in a database, but it comes in as a single record and can actually be variable in the number of attributes, and dynamic. Now, once these come in—and we can have multiple streams coming in—usually we know quite a bit about these streams. We might have a stream coming in from an authentication service like DHCP, sometimes RADIUS, sometimes DNS, that basically authenticates a user. So, the service provider knows it is a legitimate subscriber, as well as the usage information coming from the router itself. What we call them is normalized metered events. It is sort of the most atomic information about usage. So, these entities come in just like a record, and they are processed in this rule chain, and a stream processing engine can't have loops. So, no for statements and stuff like that. We basically can't afford it. It travels down, and you can have if-then-else-type statements.
The other interesting thing is, we have a statement where, as the data is processing through, it looks at each of the fields based on what rule it is—and this is all configurable, what rules you put in, up to several hundred. There is an association with a data tree. One of the things this data tree can be used for is sorting. As the NME travels through, decisions are made; there is a natural selection going on based on a certain field. Then we can do simple summing, for instance. So, summing on a variable, or even a group of variables, is very straightforward, doing very much the kind of join that was spoken about earlier. This all occurs in real time. The other use of this data tree—and it doesn't have to be just a tree, it can be a number of different structures—is that each one of these triangles is a structure that can have an arbitrary container, and we can put data in it. So, one of the ways that we do stream correlation in real time is that we effectively have a switch, where we can select information coming from what we call a session correlation source. It will load information into the tree that is used for matching, and then, as the new entities come through, they correlate dynamically to information that you want. For instance, it could be the IP address to a subscriber, or you could do all different kinds of correlation.
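The ideas just described can be sketched as a toy loop-free engine: each normalized metered event (NME) is enriched from a session-correlation table and summed into a tree keyed on one field. All names here (`StreamEngine`, `src_ip`, and so on) are invented for illustration; this is not the product's actual API.

```python
from collections import defaultdict

class SumNode:
    """A tree leaf that accumulates a running sum for one key."""
    def __init__(self):
        self.total = 0

class StreamEngine:
    """Sketch of the described pattern: records flow through a fixed
    rule chain (no loops), a lookup table loaded from a session
    correlation source enriches each record, and a tree keyed on a
    chosen field keeps running sums in real time."""
    def __init__(self, key_field, sum_field):
        self.key_field = key_field
        self.sum_field = sum_field
        self.tree = defaultdict(SumNode)
        self.session_table = {}  # e.g. IP address -> subscriber ID

    def load_correlation(self, mapping):
        """Load matching data from the session correlation source."""
        self.session_table.update(mapping)

    def process(self, nme):
        # Enrich: correlate source IP to a subscriber if known.
        nme = dict(nme)
        nme["subscriber"] = self.session_table.get(nme.get("src_ip"), "unknown")
        # Select and sum: natural selection on the key field.
        self.tree[nme[self.key_field]].total += nme[self.sum_field]
        return nme  # records pass through, so engines can be chained

engine = StreamEngine(key_field="subscriber", sum_field="bytes")
engine.load_correlation({"10.0.0.1": "alice"})
engine.process({"src_ip": "10.0.0.1", "bytes": 500})
engine.process({"src_ip": "10.0.0.1", "bytes": 300})
print(engine.tree["alice"].total)  # 800
```

Because `process` returns the record unchanged apart from enrichment, several such stages can be lined up in sequence, matching the pipeline picture in the talk.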

Now, in any one engine—I am using a symbolic representation of what you just saw, this little triangle of a tree here—you can have multiple ones. So, we can do fan-out. So, you can have a single source, because the same data needs to go to different applications and needs to be processed by different sets of rules. So, you can run them in parallel, going to different applications, or you can put them into sequential streams for more sophisticated rule processing. So, the work that I have been doing has developed what I call capture models. As this data is flying by, I would like to collect more than just a sum. In fact, I would like to capture distributions of these variables or other kinds of characteristics. I think there are lots of things that you can do—Jacobians, I haven't seen the need for that—but there is the opportunity. A capture model can have child models associated with it, but one of the rules of the capture model is that the NME that goes in the left goes out the right, because you can have a series of these in a row. So, you can have multiple of these capture models plugged together. I tend to look at this like a matrix. Inside any of these capture models there is a matrix where you have a number of different variables that you can track. If you are doing binning, then the other axis is the bins. So, instead of doing just simple summing, now you can do sorting of your data, and it feeds right into this capture model. You can put them in layers and do sequential summing. So, you create all these little matrices, and they are not very big—a few kilobytes, the largest eight to ten kilobytes. So, you can have thousands of them. Now, the end-to-end architecture looks something like this, where you may have some pre-staging, for instance, some basic correlation going on.
Then you put it directly into the models. That is one thing our customers are doing, or you can have these models directly on the raw data. So, you can be binning it and making decisions as the data is flying by. What we do, then, is store just the models. Of course, the nice thing about these capture models is that they don't really grow with volume. The number of them is proportional to the size of the business problem that you are trying to deal with. Then, on the right here, you have the clients. This is an example—it is not a great example, but it is one example—of a distribution that we collected. I don't have a good example of truly real-time, but this kind of data can be collected in real time. It represents the usage of subscribers over a 30-day period. This thing is just constantly updating as the data is flying by. Red represents the actual number of subscribers, and the horizontal axis is the amount of their usage. Now, this is broadband Internet. So, you will see, I have a subscriber out here with 23 gigabytes of usage for that period, all the way down to tens or hundreds of bytes. So, there is a huge dynamic range. If you think about it, electric utilities or other types of usage services you might have—very few of them have this kind of wide dynamic range. Now, I fitted this, and it fitted pretty nicely to a log-normal. Plotting this on a linear axis doesn't make a lot of sense. In fact, what we do in the distribution models is logarithmic binning. This data fits that very, very nicely. It is very amenable to binning.
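The logarithmic-binning idea can be sketched in a few lines. This hypothetical `LogBinCapture` keeps a fixed, kilobyte-scale array of counters no matter how much data flows through it; the bins-per-decade and decade-count parameters are illustrative choices, not the product's.

```python
import math

class LogBinCapture:
    """Capture-model sketch: logarithmic bins over a huge dynamic
    range (single bytes up to tens of gigabytes). Memory is fixed
    regardless of volume -- only the counts grow, not the structure."""
    def __init__(self, bins_per_decade=5, max_decades=12):
        self.k = bins_per_decade
        self.counts = [0] * (bins_per_decade * max_decades + 1)

    def add(self, value):
        # Bin index is proportional to log10(value); values below 1
        # byte and above the top decade are clamped to the edge bins.
        idx = 0 if value < 1 else min(int(self.k * math.log10(value)),
                                      len(self.counts) - 1)
        self.counts[idx] += 1

model = LogBinCapture()
for usage_bytes in [150, 2_000, 2_500, 40_000_000, 23_000_000_000]:
    model.add(usage_bytes)
```

With five bins per decade and twelve decades, the whole model is 61 integers — a few hundred bytes — which is why thousands of such models can be kept live at once.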

Now I can see up to 90 percent of my subscribers. There are two plots here. This is the subscribers at a particular usage, and this is the traffic that they create. One of the things it took me a while to figure out is why the right-hand side of this is so noisy. Notice it is jumping around quite a bit. Part of that is not just noise. Part of that is the fact that subscribers only come in unit quantities. So, a subscriber at 10 gigabytes of usage creates big deltas out at the right-hand edge of this. The other reason is the actual binning. So, they may not fall in a particular bin. So, you will see some oscillation, and it is actually easier to see the oscillation between bins on this graph. After reading Bill Cleveland's book, I tried the QQ plot, but I did a reverse QQ plot, because I already have bytes on my X axis, and these are the standard normal quantiles on the left. What is interesting is that the fit on this is very, very good, over about four orders of magnitude. I didn't bother doing any fancier fitting at the top or the bottom of this. The users at the bottom are using more than the model would predict, of course, and at the high end, they are using less. I find, in looking at about a dozen of these from different sites, that the top ones slop around a bit. This is an asymmetry plot, which you read a lot about in the press. Actually, here, it is quantified. You can see, for instance, that 20 percent of the subscribers—the top 20 percent—are using 80 percent of all the traffic. That happens to be the way this distribution fell out. What they don't talk about is that 80 percent of the users are only using 20 percent, which is the obverse of that, which means they have got a real severe pricing and fairness problem, but I won't go into that.
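Under a log-normal fit, the 80/20 split described above follows directly from the standard deviation on the log scale, via the standard partial-expectation identity for the log-normal. A sketch — the sigma value of about 1.68 below is simply the value that reproduces an exact 80/20 split, not a measured one:

```python
from statistics import NormalDist

def traffic_share_of_top(p, sigma):
    """Under log-normal usage with log-scale standard deviation sigma,
    the fraction of total traffic generated by the top fraction p of
    subscribers is  1 - Phi(Phi^{-1}(1 - p) - sigma),
    from the log-normal partial expectation."""
    nd = NormalDist()
    return 1 - nd.cdf(nd.inv_cdf(1 - p) - sigma)

# With sigma around 1.68, the top 20% of subscribers carry ~80% of traffic:
print(round(traffic_share_of_top(0.20, 1.68), 2))
```

The wider the dynamic range (larger sigma), the more extreme the asymmetry — which is exactly the pricing and fairness tension the talk points at.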
Some extensions of this basic technology we are doing now, and actually deploying with one of our customers, is using this kind of technique for security, abuse, fraud and theft. We are doing a lot of learning in how to do this, but I am convinced that, once you have a distribution of a variable and you can normalize it, say, over some longer period of time for the standard population, then you can very quickly see changes in that distribution very quickly. If all of a sudden something pops up, like a fan in, fan out, which is the number of destination IP addresses, or destination ports all of a sudden explodes, then you know someone is scanning ports. These terms mean different things, but in the service provider industry, fraud and theft are different. Theft is when they are losing money. Fraud is only when someone is using your account, because you are still paying. Then, abuse is basically violation of the end user agreement that you signed when you signed up with the service provider. Now, the other thing I am working on is dynamic model configurations, where you can dynamically refocus a model, a collection model, on different variables, different thresholds, what algorithms are actually used and so forth, do that dynamically. That allows you to do what I call drill forward. So, instead of having to drill down always to the history, you see something anomalous. It is likely to come back. This is not like looking for subatomic particles. So, if someone is misbehaving, more likely it will occur again, and you want to zoom in on that and collect more data, and more detailed data. So, that is what I call drill forward.
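One hedged way to sketch the fan-out alarm just described: keep a baseline of distinct-destination counts per time window and flag a window that jumps far outside it. The four-standard-deviation threshold and the floor on the deviation are arbitrary illustrative choices, not values from the talk.

```python
from statistics import mean, stdev

def fanout_alarm(history, current, k=4.0):
    """Flag a window whose distinct-destination count blows past k
    standard deviations of the baseline -- a crude stand-in for
    'the fan-out distribution suddenly changed' (e.g. a port or
    address scan). The floor on the deviation avoids alarms from a
    near-zero-variance baseline."""
    mu, sd = mean(history), stdev(history)
    return current > mu + k * max(sd, 1.0)

baseline = [3, 5, 4, 6, 4, 5, 3, 4]   # distinct destinations per window
print(fanout_alarm(baseline, 5))      # normal browsing
print(fanout_alarm(baseline, 900))    # looks like a scan
```

In practice the baseline would come from the capture models themselves, normalized over a longer period for the whole population, as described above.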

Now, some back-burner stuff, something that is interesting—I haven't found a real business application for this. Now that I have got this multidimensional hypercube, so to speak, of all these collections of models, and each one of these circles can be different kinds of models, it sort of represents, or can represent, the business of what the service provider's customers are doing. I thought it would be kind of interesting to take that and do a reverse transform of it, and then create a random stream of usage events that looks exactly like what your subscribers would look like. It is random, but it has exactly the same distribution behavior as the stuff coming in, and it would be multiple distributions. I figured out the algorithms for this, but I haven't found anybody that needs it yet. One of the paradigm shifts that I find challenging when I talk to service providers is the knee-jerk reaction of, oh, I want to store everything—and it is just prohibitively expensive. I find that I have to be a business consultant and not just a technologist when talking to people. What is the business you are in? Do you really want to keep this stuff for this long? I believe that you have to analyze this high-volume data as a stream, not try to store it first—do it online, in the stream, and reduce it first. Then, consider drilling forward rather than always wanting to drill back into the history. Drill forward for more detailed analysis. We are very interested in collaboration with research laboratories. We have research licenses for this software with qualified laboratories that would like to take advantage of this kind of rule engine. Some of the things that I think would be very interesting are capture model development, for use in other kinds of purposes that I will never even think of.
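The reverse-transform idea mentioned above can be sketched as inverse-CDF sampling from a captured histogram: pick a bin in proportion to its count, then draw uniformly within it. The bin edges and counts below are made up for illustration; the talk does not describe the actual algorithm.

```python
import bisect
import random

def synthetic_usage_stream(bin_edges, counts, n, rng=random.Random(42)):
    """Reverse-transform sketch: given a captured histogram (bin edges
    and counts), emit n random usage values whose distribution matches
    the capture model -- random traffic that 'looks like' the real
    subscriber population."""
    total = sum(counts)
    cum, running = [], 0
    for c in counts:
        running += c
        cum.append(running / total)
    out = []
    for _ in range(n):
        i = bisect.bisect_left(cum, rng.random())   # pick bin by weight
        lo, hi = bin_edges[i], bin_edges[i + 1]
        out.append(rng.uniform(lo, hi))             # uniform within bin
    return out

edges = [0, 1e3, 1e6, 1e9, 1e11]   # bytes: up to KB, MB, GB, tens of GB
counts = [10, 60, 28, 2]           # hypothetical captured counts
sample = synthetic_usage_stream(edges, counts, 1000)
```

Drawing uniformly within each log-scale bin is the simplest choice; a smoother generator could interpolate the fitted log-normal instead.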
Certainly, we need more robust statistical approaches, and visualization work—how to visualize this stuff. The last thing I want to bring up: this is a client. It is not hooked to the network, so you can't see the real-time graphs changing. You can see, this is actual data. You can see, for example, this is a broadband supplier. If I looked at data, say, from one hour, it is pretty noisy and, as you increase the time, by about 30 days, it turns into a real nice shape. What I would be doing here, if I were hooked to the network, is navigating along the different axes of that hypercube, where I am choosing different service plans or user pricing plans and so forth, that the service provider has chosen. The last thing here: I actually took advantage of the fact that I have a distribution of usage for a population and, if I know their pricing function, I can compute the value of the traffic, and do that virtually instantaneously. That is what this does. I am not going to demonstrate it, but basically I can help the product planners for the service provider figure out what the value of the traffic is, without having to go back through millions and millions of records and try to model their whole subscriber base. You have it all here. Thank you very much.

MR. WILKINSON: While Pedro Domingos is setting up, we have time for a question.

MR. CLEVELAND: Lee, I just would ask if you could give some idea of where you have set this up so far.

MR. RHODES: In terms of commercial deployments? We have a number of pilots.

MR. CLEVELAND: Are most of your experiences at the edges, with ADSL and cable?

MR. RHODES: Yes, most of these are at the edges. So, it is ADSL and cable, and we did one backbone—a major backbone service provider—well, they dealt with commercial clients. So, they had a few thousand, but very large pipes. I would say most of our—in fact, our current deployment that we are working on is a very large service provider in Canada.
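The instantaneous traffic-valuation idea mentioned near the end of the talk—applying a pricing function to the captured usage distribution instead of replaying millions of raw records—can be sketched as a single weighted sum over the histogram bins. The tiered plan and the bin values here are entirely hypothetical.

```python
def revenue_from_distribution(bin_mids, counts, price):
    """Value the traffic from the captured usage distribution:
    sum the pricing function at each bin midpoint, weighted by the
    number of subscribers in that bin."""
    return sum(price(mid) * n for mid, n in zip(bin_mids, counts))

GB = 1e9

def tiered_price(usage_bytes):
    """Hypothetical plan: $20 flat up to 5 GB, then $2 per extra GB."""
    extra_gb = max(usage_bytes - 5 * GB, 0) / GB
    return 20.0 + 2.0 * extra_gb

mids = [0.5 * GB, 3 * GB, 10 * GB, 25 * GB]   # bin midpoints (bytes)
counts = [400, 450, 120, 30]                  # subscribers per bin
print(revenue_from_distribution(mids, counts, tiered_price))  # 22400.0
```

Because the distribution is already in memory, a product planner can re-price the whole subscriber base by swapping in a new `price` function — no rescan of the raw records required.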

