Review of the Marine Recreational Information Program (2017)


2

Study Design and Estimation Considerations for the MRIP

INTRODUCTION

Estimation of recreational harvest has become increasingly important with the passage of the Magnuson-Stevens Fishery Conservation and Management Reauthorization Act of 2006. Moreover, the 2006 National Research Council (NRC) report, Review of Recreational Fisheries Survey Methods, raised concerns about the validity of sampling for catch and effort in the Marine Recreational Fisheries Statistics Survey (MRFSS), such as the lack of probability-based sampling in major components of the survey. In response to these concerns and to other recommendations in the 2006 NRC report, the National Marine Fisheries Service (NMFS) redesigned the survey and implemented the Marine Recreational Information Program (MRIP) to provide valid statistical estimates of recreational fisheries effort and catch. The following chapters review the Fishing Effort Survey (FES; Chapter 3) and the Access Point Angler Intercept Survey (APAIS; Chapter 4), two components of the MRIP, and assume technical knowledge of survey sampling concepts that may not be familiar to all readers. The purpose of this chapter is to provide perspective on data collection, sample design, and estimation relevant to the MRIP for readers who are not familiar with statistical methods for survey sampling of recreational fisheries.

CONTACT METHODS

Surveys of recreational fishing to obtain metrics of catch and effort rely on seven possible methods of contacting anglers (Pollock et al., 1994; Jones and Pollock, 2013; see Box 2.1 for discussion of the distinctions between censuses and sample surveys). Anglers can be contacted (1) onsite at public access points, (2) by roving through the water body to seek out anglers, or (3) by aerial surveys (which capture effort only). In these situations, the field agent records the trip, its completion time, and the number of anglers and asks the anglers about the trip duration, species sought, species caught and number, and number released following a scripted questionnaire. Often, the agent can observe and measure the catch.

Alternatively, anglers can be contacted offsite (4) by telephone, (5) by mail, (6) electronically (e.g., web, email), or (7) door to door (a method now rarely used). In a mail survey, anglers receive a questionnaire asking them to report dates, trips, and trip locations, and, in some surveys, the species and numbers of fish caught. Questions about catch are less common because the species and catch numbers must be recalled accurately from months past.

In measuring catch and effort, these contact methods have different strengths and weaknesses. Offsite methods obtain information that is self-reported by the angler and is not independently verified. Onsite methods, most commonly an access point survey, can verify trips and landed catches because they are observed by field agents. However, even onsite methods rely on angler self-reporting of released fish (Groves, 1989; Jones and Pollock, 2013). Released fish can be counted and verified when boats are large enough to carry an observer, such as with a headboat or charter boat. The MRIP relies on contacting anglers onsite at public-access points to obtain measures of catch per unit effort (CPUE) and offsite by mail and by telephone to obtain measures of effort. These measures are then combined to estimate total catch. This approach is used throughout the United States’ Atlantic and Gulf of Mexico coasts.

Since the 2006 report was published, there have been major advances in the public’s use of technology that have the potential to alter the way surveys are done. Although the 2006 report recommended that NMFS explore electronic reporting, the agency has only recently expanded testing of electronic reporting of for-hire logbooks, electronic capture of onsite intercepts, and web-based surveys (Kelly, 2016). For example, Liu and colleagues (2016) undertook a project to determine whether smartphone applications (apps) could be used to estimate recreational red snapper catch in Texas. Anglers reported their catches using the iSnapper app, and some of those app users were also intercepted in a probability-based onsite interview. The total catch was then estimated by a modified mark-recapture method. This approach shows promise, and the committee encourages NMFS to pursue this area of research. However, self-motivated anglers who self-report via apps may not represent the target population, which presents challenges to statistical estimation, a topic discussed further in Chapter 4.
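
To make the mark-recapture idea concrete, here is a minimal sketch using Chapman’s bias-corrected Lincoln-Petersen estimator, with app reports treated as the “marked” sample and the onsite intercepts as the “recapture” sample. This is not the modified estimator used by Liu and colleagues, and all counts are hypothetical.

```python
def chapman_estimate(n_marked: int, n_captured: int, n_recaptured: int) -> float:
    """Chapman's bias-corrected Lincoln-Petersen estimator of population size."""
    return (n_marked + 1) * (n_captured + 1) / (n_recaptured + 1) - 1

app_trips = 400      # trips self-reported through the app ("marked")
intercepted = 250    # trips sampled in the onsite intercept survey ("captured")
overlap = 40         # intercepted trips that were also app-reported ("recaptured")

total_trips = chapman_estimate(app_trips, intercepted, overlap)
mean_catch_per_trip = 2.3            # hypothetical CPUE observed onsite
total_catch = total_trips * mean_catch_per_trip
print(f"estimated trips: {total_trips:.0f}, estimated catch: {total_catch:.0f}")
```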

CHALLENGES WITH DATA COLLECTION

The choice of survey method depends on the time frame in which data are needed and on the funds available to conduct the survey; both timeliness and funding were raised as concerns by state agencies during the committee’s current review of the MRIP. Offsite methods using telephone and mail surveys are generally less expensive than onsite methods because the latter require trained personnel in the field (Groves, 1989; Jones and Pollock, 2013). Some methods, such as telephone surveys, can obtain data quickly, while mail surveys take more time. Both are complicated when the response rate is low because of the potential for nonresponse bias. Nonresponse bias occurs when respondents and nonrespondents differ with respect to the characteristics of interest (see, e.g., Lohr, 2010). For example, if people who caught fish respond while people who caught nothing do not respond because they think their information is not needed, then the estimated CPUE would be higher than the true value. Onsite surveys can cost more per interview, but nonresponse is typically low. The use of electronic tablets for onsite surveys decreases the reporting time and, with added software, can increase data quality (Kelly, 2016).
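
A small simulation makes the mechanism concrete. The population size, catch distribution, and response rates below are hypothetical, chosen only to show how differential nonresponse inflates the estimated CPUE.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population of 10,000 angler trips: 60% catch nothing,
# the rest catch a Poisson(3) number of fish.
n = 10_000
catch = np.where(rng.random(n) < 0.6, 0, rng.poisson(3.0, n))
true_cpue = catch.mean()

# Suppose successful anglers respond at 70% and zero-catch anglers at 30%.
p_respond = np.where(catch > 0, 0.7, 0.3)
responded = rng.random(n) < p_respond

biased_cpue = catch[responded].mean()
print(f"true CPUE {true_cpue:.2f} vs. respondent CPUE {biased_cpue:.2f}")
```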

Surveys are subject to biases beyond that of nonresponse (Groves et al., 2009; Pollock et al., 1994). Offsite surveys are subject to recall bias because of the delay between the fishing trips and the questionnaire, telephone call, or electronic message. Unless anglers keep a log or diary, they may not remember trips or catch accurately. Species identification and number of fish caught can be inaccurately reported and are not verifiable. Direct biological measurements of fish, necessary for estimating length and age relationships, and the extraction of scales or otoliths for aging are not available. Onsite public-access surveys are subject to avidity bias (i.e., avid anglers, who are better at catching fish, might be overrepresented in onsite surveys) and lack coverage of anglers using private access because interviews are generally conducted at public-access sites. The lack of intercept information from most private access means that the use of CPUE requires the strong assumption that catch and effort are equal between anglers using public and private access (Ashford et al., 2010, 2011, 2013). Additionally, the error structures will differ with the type of data collection (e.g., self-administered in mail surveys versus interviewer administered in telephone surveys); this topic is discussed in Chapters 3 and 4 as it pertains to the FES and APAIS, respectively. Note that there are other sources of error, such as item nonresponse in returned questionnaires, that are discussed in subsequent chapters.

SOURCES OF SURVEY ERROR

Surveys are designed to provide estimates for a possibly large number of characteristics of interest. Typically, the interest lies in estimating finite population parameters (e.g., means, percentages, ratios) of the target population, which describe some aspect of the finite population (e.g., total effort). Estimates of these population parameters are calculated from information collected on the sample, which is subject to several types of errors (Groves, 1989). The committee defines the total error of an estimate as the difference between the estimate and the true population value, the latter being unknown. The total error can be expressed as the sum of sampling and nonsampling errors (Groves et al., 2009; Biemer, 2010). Sampling errors occur because the desired information is only observed for a part (sample) of the population.

Nonsampling errors can be divided into four broad groups: (1) coverage errors, (2) nonresponse errors, (3) measurement errors, and (4) processing errors. Coverage error occurs when there is frame imperfection. This includes undercoverage (some units in the target population are not in the sampling frame) and overcoverage (some units are not in the target population but are in the sampling frame). Andrews et al. (2014) suggested that “undercoverage due to unlicensed fishing activity may be as high as 70 percent in some states for certain types of fishing activity” (see discussion of the National Saltwater Angler Registry in Chapter 3).

Nonresponse errors occur because the desired information is only observed for a part of the sample. The committee distinguishes unit nonresponse from item nonresponse. Unit nonresponse is the complete lack of information on a given sample unit. It occurs, for example, when the sampled person either is not at home when a telephone interviewer calls or refuses to participate in the survey. Item nonresponse occurs when the survey responses (items) for a sampled person (unit) are incomplete. The latter occurs, for example, because the sampled person refuses to respond to sensitive items such as fishing location or may not know the answer to some items, or because of edit failures (e.g., incorrect telephone number). Missing values may also be generated when the collected data are invalid or inconsistent.

Misresponse, or measurement error, occurs when the information obtained from the respondent is inaccurate. Measurement errors can be caused by a poorly designed questionnaire or by the inability of the respondent to recall the requested information. Another example of measurement error is digit bias, that is, the tendency of respondents to round reported values upward or downward (e.g., a respondent who catches 7 fish reports either 5 or 10), also known as rounding error (Scholtus, 2011).
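
As a rough illustration, the sketch below (with a hypothetical catch distribution and rounding behavior) shows how reported values heap on multiples of 5 even when the overall mean changes little.

```python
import numpy as np

rng = np.random.default_rng(2)

true_catch = rng.poisson(7.0, 5_000)        # hypothetical true catches
rounds = rng.random(true_catch.size) < 0.5  # half of anglers report a rounded value
reported = np.where(rounds, 5 * np.round(true_catch / 5), true_catch)

print(f"true mean {true_catch.mean():.2f}, reported mean {reported.mean():.2f}")
print(f"reports that are multiples of 5: {(reported % 5 == 0).mean():.0%} "
      f"(vs. {(true_catch % 5 == 0).mean():.0%} in truth)")
```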

Finally, processing consists of all the data-handling activities after data collection and before estimation. Processing errors occur during data coding (the process of assigning a numerical value to a response) and data capture.

SAMPLING FRAMES

Recreational angler surveys use sampling frames to randomly select, with known probabilities of selection, households or fishing sites and times to contact. A sampling frame is a list from which a sample can be selected. To contact households with anglers offsite to determine effort, two approaches have been commonly used in marine fisheries (Jones, 2001). The MRFSS, the MRIP’s predecessor, used a telephone survey that relied on random dialing of noncommercial telephone numbers with a coastal county prefix. In this case, the sampling frame was any household with a landline telephone number with an appropriate coastal prefix. The efficiency of random-digit-dialing telephone surveys declined over time as fewer households had landlines, more individuals switched to cell phones only, caller ID resulted in fewer calls being answered, and telephone numbers became portable. With number portability, a previous coastal county resident might move inland and no longer fish, while someone from a landlocked state who keeps that area code and prefix may move to a coastal county and become an avid angler. Using coastal county prefixes would therefore result in both overcoverage (people who have moved away) and undercoverage (anglers who have moved to the coast) of the target population. Furthermore, because surveys are subject to restrictions on dialing cell phones, the use of telephone surveys has become more problematic (AAPOR, 2016). In its 2006 report, the NRC committee recommended that alternative sampling methods be developed to address these issues of nonresponse and inefficiency. Specifically, the report recommended that NMFS develop a national registry of all marine anglers to serve as a sampling frame consisting of names, telephone numbers (including cell numbers), and addresses. Such a license-based frame would provide a targeted and efficient list for sampling.

Undercoverage for an effort survey such as the FES is managed by reweighting sampling units (e.g., angler trips) with data available from the onsite survey. The onsite survey includes all anglers, both coastal and noncoastal. The noncoastal proportion is used to expand the effort estimate of the offsite survey. This method would be unbiased if the characteristics of the people on the frame were similar to those of the people not on the frame. Furthermore, the reliability of a survey from such a license-based frame declines when exemptions to license requirements are allowed, such as for retirees, military personnel, and people under age 16. Exemptions such as these cause frame undercoverage. There is also frame overcoverage, which occurs when people on the list no longer fish. Undercoverage is likely the greater problem.
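
In stylized form, the expansion amounts to dividing the frame-based effort estimate by the covered share of trips. The numbers below are hypothetical, and the sketch assumes the intercept survey reliably estimates the share of trips taken by anglers not covered by the effort-survey frame.

```python
# Hypothetical effort estimate from the offsite (frame-based) survey.
effort_on_frame = 120_000
# Hypothetical share of intercepted anglers found not to be on the frame.
share_off_frame = 0.25

# Expand the frame-based estimate to cover all anglers.
total_effort = effort_on_frame / (1 - share_off_frame)
print(f"expanded effort: {total_effort:,.0f} trips")  # 160,000
```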

For an intercept survey, the frame consists of a list of public fishing access sites, mainly piers and boat-launching sites, crossed with times of day. Returning anglers are interviewed to determine their total catch for each species. Typically, the design selects access sites for interviewing with probabilities proportional to estimated fishing effort (e.g., site/time combinations with more effort have a higher probability of being sampled), as sketched below. Undercoverage occurs when access sites are excluded, and overcoverage occurs when nonmarine sites are included. When the intercept survey is combined with an offsite survey to determine effort, undercoverage of the effort survey can be addressed using responses to the intercept survey, by determining the percentage of intercepted anglers who were not included on the effort-survey frame. This complementary approach assumes that anglers missed by the intercept survey (e.g., those fishing from private piers) have the same average CPUE as anglers included on the intercept survey, that is, that public- and private-access users have similar fishing patterns; depending on the target species, this assumption can be incorrect (Ashford et al., 2010, 2013).
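
The sketch below shows one simple form of probability-proportional-to-size (PPS) selection of site/time combinations. The site names, activity scores, and with-replacement design are illustrative assumptions, not the actual APAIS design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical frame of site/time combinations with activity scores
# (e.g., from a site register); selection probability is proportional to activity.
frame = {
    ("Pier A", "AM"): 40, ("Pier A", "PM"): 80,
    ("Ramp B", "AM"): 10, ("Ramp B", "PM"): 30,
    ("Ramp C", "AM"): 25, ("Ramp C", "PM"): 15,
}
units = list(frame)
activity = np.array(list(frame.values()), dtype=float)
p = activity / activity.sum()

# Draw 3 assignments with replacement; the Hansen-Hurwitz design weight
# for a unit drawn with per-draw probability p_i in n draws is 1 / (n * p_i).
n_draws = 3
sample_idx = rng.choice(len(units), size=n_draws, replace=True, p=p)
for i in sample_idx:
    print(units[i], f"design weight: {1 / (n_draws * p[i]):.1f}")
```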

INTRODUCTION TO SAMPLE ESTIMATION OF TOTAL CATCH

Although catch and effort can be expanded directly in a few situations, a widely used approach to estimating the total catch of saltwater fish by recreational anglers is to split the problem into two surveys. The importance of having these two surveys is that the FES obtains effort only in the relevant states and, because of the difficulties with self-reporting, does not obtain data on catch, while the APAIS obtains data on anglers with residences outside the coastal states and enables direct observation of catches. First, from the FES one estimates angler effort (Ê), the total number of trips spent saltwater fishing, using a survey of all anglers within the household that asks the respondents for the total number of occasions in a given time period on which they fished in saltwater, either from the shore of their coastal state or from a boat that returned to the shore of their coastal state. Second, from the APAIS one estimates the catch per unit effort (CPUE), the number of fish caught per angler trip on each occasion, and the discard species and discard rate. If both angler effort and CPUE are well estimated for a given species, region, and period, one can calculate the total catch for that species (and region and period; see Box 2.2). Total catch (also termed total harvests in other contexts) and total discards are estimated from two surveys, one offsite and one onsite. Total discards (obtained from the onsite survey) are subsequently used to calculate dead discards (total discards times the discard mortality rate). Stock assessment analysts sum the total catch and the total dead discards to estimate the removals from the fish population.

Using the equations provided in Box 2.2, total removals comprise the following components: (1) fish that are landed whole and can be measured and identified, (2) fish that were filleted or discarded dead, and (3) fish that were discarded alive but subsequently die from capture effects.

For stock assessment, component (3) is obtained by multiplying the number of fish discarded alive by a separately determined discard mortality rate. The equations used by the MRIP to estimate removals, catch, effort, and CPUE are provided in Box 2.2; a stylized numerical version follows.
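
The sketch below follows that structure with hypothetical inputs; Box 2.2 gives the program’s actual equations.

```python
# Stylized removals calculation (all inputs hypothetical).
effort = 160_000          # estimated angler trips, from the effort survey
cpue_landed = 1.8         # landed fish per trip, from the intercept survey
cpue_dead_discard = 0.2   # fish filleted or discarded dead per trip
cpue_released = 0.9       # fish released alive per trip (angler-reported onsite)
release_mortality = 0.15  # separately determined discard mortality rate

landings = effort * cpue_landed                                  # component (1)
dead_discards = effort * cpue_dead_discard                       # component (2)
delayed_mortality = effort * cpue_released * release_mortality   # component (3)

removals = landings + dead_discards + delayed_mortality
print(f"estimated removals: {removals:,.0f} fish")
```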

Bias and Variance of an Estimator

Two important measures are usually considered when assessing the quality of an estimator: bias and variance. For simplicity, the committee assumes that estimates are subject only to sampling errors and that nonsampling errors are either negligible or adjusted for prior to estimation. By sampling errors, the committee means that each sample is just one possible result of many that could have occurred. In practice, only one sample is selected, but many other samples could have been selected from the population. Suppose it were possible to select all possible samples from the target population using the same sampling design. In each sample, an estimate of the characteristic of interest (e.g., total catch) could be computed from the observed data. The bias is then defined as the difference between the average of the estimates produced from all possible samples and the corresponding (unknown) true value for the target population. The population sampling variance is a measure of the variability, about their average, of the estimates that would have been observed had all possible samples been selected from the target population.

An estimator is said to be precise (or efficient) if it exhibits a small variance. Factors affecting the variance include the variance of the target population, the sampling design used to select the sample from the target population, and the sample size. For a given sample design, the variance decreases as the sample size increases. Because it refers to all the possible samples that could be selected from the population, the population variance is typically unknown but can be estimated from the selected sample. Survey statisticians use measures such as the estimated standard error (which is defined as the square root of the estimated variance) or the estimated coefficient of variation (which is defined as the ratio of the estimated standard error to the estimate). Another name for the coefficient of variation is the proportional or relative standard error.
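
The repeated-sampling idea can be made concrete with a small simulation: draw many samples from a synthetic population, expand each one to an estimated total, and examine the spread of those estimates. The population, sample size, and number of replicates below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic population of 50,000 trip catches; the true total is the target.
population = rng.poisson(2.0, 50_000)
true_total = population.sum()

# Draw many simple random samples and expand each sample mean to a total.
n, reps = 500, 2_000
estimates = np.empty(reps)
for r in range(reps):
    sample = rng.choice(population, size=n, replace=False)
    estimates[r] = population.size * sample.mean()

bias = estimates.mean() - true_total          # near zero for this design
se = estimates.std(ddof=1)                    # standard error
cv = se / estimates.mean()                    # coefficient of variation
print(f"bias {bias:,.0f}, standard error {se:,.0f}, CV {cv:.3f}")
```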


Assuming there is no nonsampling error, bias is generally not an issue because survey statisticians typically use unbiased (or approximately unbiased) estimators. Bias is generally caused by the presence of nonsampling errors as discussed above. Depending on the source of the bias (nonresponse error, coverage error, and measurement error), several weighting procedures are used to reduce the bias as much as possible.

Weighting Methodology

The data collected in the field are typically stored in a data file that contains rows corresponding to sample units (e.g., an angler) and columns representing characteristics of interest (e.g., number of trips taken in the past 2 months). The file also includes a column of weights that accounts for the sample design, coverage errors, and nonresponse; these weights constitute the weighting system. Estimates are obtained by applying the weighting system to a characteristic of interest.

The typical weighting process consists of three major stages (see, e.g., Valliant et al., 2013). In Stage 1, each sample unit is assigned a design (or base) weight, defined as the inverse of its inclusion probability in the sample, a characteristic of the sampling design. Stage 2 aims to reduce the potential bias due to unit nonresponse. This bias may be large when respondents and nonrespondents differ with respect to the characteristics of interest, especially if the nonresponse rate is high. The most common way to deal with unit nonresponse is to eliminate the nonrespondents from the data file and to adjust the design weights of the respondents to compensate for their elimination. To that end, the base weights of respondents are multiplied by a nonresponse adjustment factor, defined as the inverse of the estimated response probability. Key to achieving an efficient bias reduction is the availability of powerful auxiliary information, that is, a set of fully observed variables. Finally, in Stage 3, the weights adjusted for nonresponse are further modified so that survey-weighted estimates agree with known population totals available from external sources (e.g., census or administrative data). This process is known as calibration and can be effective at reducing biases due to undercoverage. The resulting weights are often referred to as final weights, and the corresponding weighting system as the final weighting system. A minimal sketch of these three stages follows.
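
In the sketch below, the inclusion probabilities, response probabilities, and control total are invented; real systems estimate response probabilities from auxiliary information and use more elaborate calibration.

```python
import numpy as np

# Stage 1: design (base) weights, the inverse of the inclusion probabilities.
incl_prob = np.array([0.01, 0.01, 0.02, 0.02, 0.05, 0.05])
design_w = 1.0 / incl_prob

# Stage 2: keep respondents and multiply their base weights by the inverse
# of their estimated response probabilities (hypothetical values here).
responded = np.array([True, True, False, True, True, False])
resp_prob = np.array([0.8, 0.8, 0.5, 0.5, 0.6, 0.6])
w = (design_w / resp_prob)[responded]

# Stage 3: calibrate (here a simple ratio adjustment) so the weighted count
# matches a known population total from an external source.
known_pop_total = 400.0
final_w = w * known_pop_total / w.sum()

trips = np.array([3, 0, 5, 2])  # reported trips for the four respondents
print(f"estimated total trips: {(final_w * trips).sum():,.1f}")
```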

In some cases, the weighting process involves an additional stage during which the final weights undergo further modification. Most often, it consists of smoothing or trimming the weights to improve the efficiency of survey estimates. This stage is often encountered when highly variable weights are poorly related to the characteristics of interest. In such cases, the resulting estimators may exhibit a large variance (i.e., low precision). Weight trimming consists of reducing the weight values above a given threshold. These weights are set to the value of that threshold.
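
For example, trimming at a threshold can be written directly. The threshold here is arbitrary; in practice it is often chosen from the weight distribution, and some systems also redistribute the trimmed weight so that totals are preserved.

```python
import numpy as np

weights = np.array([12.0, 30.0, 45.0, 250.0, 18.0])
threshold = 100.0
trimmed = np.minimum(weights, threshold)  # weights above 100 are set to 100
print(trimmed)  # [ 12.  30.  45. 100.  18.]
```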


A point estimate for a given characteristic of interest is readily obtained by applying the final weighting system to the column corresponding to that characteristic. The associated (proportional) standard error also uses the final weights, but with formulas more complex than are appropriate for this report; see, for example, Wolter (2007) for additional material on variance calculations.
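
As an illustration only, the sketch below computes a weighted point estimate and a naive with-replacement bootstrap standard error over respondents. This simple bootstrap ignores complex-design features (stratification, clustering, calibration) and is not a substitute for the design-consistent methods in Wolter (2007).

```python
import numpy as np

rng = np.random.default_rng(5)

# Point estimate: apply the final weights to the column of interest.
final_w = np.array([125.0, 125.0, 100.0, 50.0])
trips = np.array([3, 0, 5, 2])
estimate = (final_w * trips).sum()

# Naive bootstrap: resample respondents with replacement and recompute.
reps, n = 2_000, trips.size
boot = np.empty(reps)
for r in range(reps):
    idx = rng.integers(0, n, size=n)
    boot[r] = (final_w[idx] * trips[idx]).sum()

print(f"estimate {estimate:.0f}, bootstrap SE {boot.std(ddof=1):.0f}")
```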


READ FREE ONLINE

  1. ×

    Welcome to OpenBook!

    You're looking at OpenBook, NAP.edu's online reading room since 1999. Based on feedback from you, our users, we've made some improvements that make it easier than ever to read thousands of publications on our website.

    Do you want to take a quick tour of the OpenBook's features?

    No Thanks Take a Tour »
  2. ×

    Show this book's table of contents, where you can jump to any chapter by name.

    « Back Next »
  3. ×

    ...or use these buttons to go back to the previous chapter or skip to the next one.

    « Back Next »
  4. ×

    Jump up to the previous page or down to the next one. Also, you can type in a page number and press Enter to go directly to that page in the book.

    « Back Next »
  5. ×

    Switch between the Original Pages, where you can read the report as it appeared in print, and Text Pages for the web version, where you can highlight and search the text.

    « Back Next »
  6. ×

    To search the entire text of this book, type in your search term here and press Enter.

    « Back Next »
  7. ×

    Share a link to this book page on your preferred social network or via email.

    « Back Next »
  8. ×

    View our suggested citation for this chapter.

    « Back Next »
  9. ×

    Ready to take your reading offline? Click here to buy this book in print or download it as a free PDF, if available.

    « Back Next »
Stay Connected!