
6
Nonresponse, Imputation, and Estimation

In this chapter, the panel explores several important aspects of survey management and methodology that have critical roles to play in either contributing to or minimizing the error in the estimates. Nonresponse, imputation, and estimation are, in the view of the panel, interrelated. The high level of nonresponse in the Agricultural Resource Management Survey (ARMS) triggers the need for imputation of missing values. Imputation is an important initial step in generating estimates. Each of these processes can be a source of nonsampling error.

UNIT NONRESPONSE

When a sample unit fails to respond to the survey solicitation (unit nonresponse) or fails to complete an item on the questionnaire (item nonresponse), the representativeness of the sample may be diminished, leading to bias. As is typical of large-scale sample surveys, ARMS experiences substantial unit nonresponse. Nonresponse is readily quantifiable, and, possibly as a result, the National Agricultural Statistics Service (NASS) has concluded that unit nonresponse is the survey’s biggest source of nonsampling error.1

In this section we address methods for reporting ARMS nonresponse, consider the nature of both unit and item nonresponse in ARMS, and discuss methods for reducing nonresponse and making nonresponse adjustments in ARMS.

1

Statement by Bob Bass, NASS, September 28, 2006.


TABLE 6-1 ARMS Response Rates, 2002-2006

Survey                               Year    Sample Size   Response Rate (%)a
ARMS screeningb                      2006*    80,000       77.0
                                     2005*    60,000       77.0
                                     2004     73,376       76.6
                                     2003     16,638       74.5
                                     2002     49,156       76.9
ARMS Phase II production practices   2006*     5,500       80.9
                                     2005*     5,500       80.9
                                     2004      4,755       80.6
                                     2003      8,148       79.5
                                     2002      3,421       79.2
ARMS Phase III cost and returns      2006*    30,000       72.0
                                     2005     34,203       70.5
                                     2004     21,075       67.7
                                     2003     33,861       62.8
                                     2002     18,219       74.0

aIncludes operations that responded but were out of scope.

bARMS screening identifies operations that have target commodities for Phase II and for the vegetable chemical use survey.

*Estimated.

SOURCE: National Agricultural Statistics Service, 2005 (updated by the agency).

Response Rates

As might be expected, the response rates reported by NASS vary among ARMS phases and years, since the sample sizes and the target audience vary from year to year. Table 6-1 shows the most recent reported and estimated response rates. Response rates are highest for the Phase II survey and, for the Phase III survey, in agricultural census years. The noncensus-year Phase III response rates are the most troublesome, since they fail to meet the Office of Management and Budget (OMB) threshold of 80 percent, below which the agency must plan for a nonresponse bias analysis (U.S. Office of Management and Budget, 2006a, p. 8).

Moreover, these published response rates tell only part of the story. The rates shown in Table 6-1 are calculated independently for each phase. NASS computes the response rate for each phase as defined by OMB, that is, the percentage of sample units that were accessible and did not refuse the survey. The following formula is used to calculate the response rate for each phase of ARMS:

$$\text{Response Rate}_{k} = \frac{n_{k} - \text{Refusal}_{k} - \text{Inaccessible}_{k}}{n_{k}} \times 100$$

Where: k denotes the specific phase of ARMS (i.e., I, II, or III)

nk is the Phase k sample size

Refusalk is the count of sample units that refused to respond to Phase k of the survey

Inaccessiblek is the count of sample units that were inaccessible (unable to be contacted) during Phase k of the survey
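Below is a minimal computational sketch (ours, not NASS code) of the per-phase response rate just defined. The disposition counts are hypothetical, chosen only so that the result matches the published 2004 Phase III rate of 67.7 percent.

```python
# A sketch of the OMB-style per-phase response rate defined above.
def phase_response_rate(n: int, refusals: int, inaccessibles: int) -> float:
    """Response rate (%) = (n - refusals - inaccessibles) / n * 100."""
    return (n - refusals - inaccessibles) / n * 100.0

# Hypothetical disposition counts for one phase (n matches the 2004
# Phase III sample size in Table 6-1; the splits are invented):
print(round(phase_response_rate(n=21_075, refusals=5_000, inaccessibles=1_806), 1))  # 67.7
```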

These independently computed rates do not take into account nonresponse from the previous phase(s). For certain uses, this is appropriate. For example, to assess the success of enumerators in a given phase, one needs to know the number of completed questionnaires divided by the number of units that were to be contacted. However, for assessing error in the survey estimates, the denominator should also include eligible units from the original sample that would have been contacted if not for their nonresponse in an earlier phase.

Put differently, in a survey with more than one phase, nonresponse typically cumulates from phase to phase. For example, the 2005 Phase II response rate is reported as 75.3 percent. This calculation appropriately reflects the success of the Phase II survey operation in securing responses, but it does not reflect the fact that the sample comes from the Phase I survey, which had its own nonresponse. Thus, the Phase II response rates reported in the table overstate the proportion of the eligible sample that participated in Phase II (and likewise for the Phase III rates).

NASS calculates but does not publish a cumulative response rate for ARMS. In essence, the response rate for a sample component that participates in more than one phase of ARMS is adjusted by that component’s response rates in prior phases to arrive at a cumulative response rate for the component. Each component’s share of the total sample size for the given phase is then used as a weight to composite the component rates into a cumulative response rate for the phase. To complicate matters, the computation takes several different forms, depending on the relationship of the particular component to prior phases. The steps in the computational scheme are shown in Box 6-1.

The cumulative response rate for Phase III is shown in Table 6-2. Following the steps in Box 6-1, it is computed according to the formula given below.


BOX 6-1

Steps To Compute a Cumulative Response Rate for ARMS

Step 1: Adjust each component for prior phases.


Phase III Production Practices and Costs Report—The only traditional Phase III component progressing through Phase II is the PPCR component. Calculate the adjusted Phase III response rate by multiplying the Phase I, Phase II, and Phase III response rates.


Costs and Returns Report (CRR) Components—These components only require adjustment for Phase I. Therefore the adjusted CRR response rates are the product of the corresponding Phase I and Phase III response rates.


CRR NOL (not on list) Component—The ARMS Area component does not participate in prior phases and requires no adjustment.


Step 2: Calculate the proportional weight based on sample size for each component.


Calculate the proportional sample weight by dividing each component’s Phase III sample size by the sum of the sample sizes from all Phase III components.


Step 3: Weight the adjusted response rates.


Calculate the adjusted weighted response rate for each component by multiplying the component’s adjusted response rate (step 1) by its proportional sample weight (step 2).


Step 4: Sum the adjusted weighted response rates.


Sum the adjusted weighted response rates from each component.

$$\text{Cumulative Response Rate}_{\mathrm{III}} = \sum_{i} \left(\frac{n_{3i}}{n_{3T}}\right) R_{1i}\,R_{2i}\,R_{3i}$$

Where: R1i is the Phase I response rate for component i (taken as 1 for the NOL component, which does not pass through Phase I)

R2i is the Phase II response rate for component i (taken as 1 for components that do not pass through Phase II)

R3i is the Phase III response rate for component i

n3i is the Phase III sample size for component i

n3T is the total sample size across all components for Phase III
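A minimal sketch of the Box 6-1 compositing scheme follows. The component rates and sample sizes are invented; following steps 1-4, R2i is set to 1 for components that do not pass through Phase II, and R1i = R2i = 1 for the NOL component.

```python
# Composite a cumulative Phase III response rate from per-component
# rates, weighting each component by its share of the Phase III sample.
def cumulative_rate(components):
    """components: list of (r1, r2, r3, n3) tuples, one per component."""
    n3_total = sum(n3 for *_, n3 in components)
    return sum((n3 / n3_total) * r1 * r2 * r3 for r1, r2, r3, n3 in components)

components = [
    (0.77, 0.81, 0.72, 6_000),   # PPCR: adjusted for Phases I and II
    (0.77, 1.00, 0.70, 26_000),  # CRR list components: Phase I adjustment only
    (1.00, 1.00, 0.68, 2_000),   # CRR NOL (area) component: no adjustment
]
print(f"{100 * cumulative_rate(components):.1f}%")  # 53.1%, cf. Table 6-2
```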


TABLE 6-2 Unadjusted and Cumulative Response Rates for 2005 ARMS Phase III

Disposition     Unadjusted (%)   Cumulative (%)
Response        70.5             51.9
Refusal         23.7              6.2
Inaccessible     5.8              1.3

Properly portraying the true extent of nonresponse requires that NASS routinely make available not only cumulative and unadjusted response rates, but also the information needed to independently compute response rates across phases of the survey. This involves showing the disposition of all cases, particularly how the cases in Phase II and Phase III trace back to Phase I. For example, some cases are not in scope for the survey, being either out of business or not producing the commodity of interest for Phase II. In tracing these case dispositions, NASS should develop a set of categories that reflect the ARMS design but that also take into consideration the categories specified by the American Association for Public Opinion Research (2006) and other sources (see also Hidiroglou et al., 1993). Given their differential selection probabilities, this information should be presented separately for the 15-state oversample and the remainder of the sample.


Recommendation 6.1: NASS should routinely report ARMS case dispositions linked across survey phases to provide the foundation for appropriate response rate calculations for Phases II and III.


We also note that interviews from the 15-state oversample carry smaller weights in the production of national estimates and thus should contribute correspondingly less to the overall response rate. A weighted response rate is therefore the most appropriate one to report. NASS would improve understanding of the extent of nonresponse by reporting weighted response rates for each survey phase that appropriately reflect the nonresponse from the preceding phase(s).
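A minimal sketch of such a design-weighted response rate, using invented weights and dispositions, is shown below: each unit contributes its selection weight rather than a count of one, so lightly weighted oversample units move the rate less than heavily weighted ones.

```python
# Design-weighted response rate: responding units' weights over all
# eligible units' weights. Weights and dispositions are invented.
def weighted_response_rate(weights, responded):
    total = sum(weights)
    answered = sum(w for w, r in zip(weights, responded) if r)
    return 100.0 * answered / total

weights   = [1.5, 1.5, 1.5, 1.5, 8.0, 8.0]   # oversampled units carry small weights
responded = [True, True, True, False, True, False]
print(f"{weighted_response_rate(weights, responded):.1f}%")  # 56.8%, vs. 66.7% unweighted
```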


Recommendation 6.2: All published ARMS response rates for Phases II and III should be calculated to reflect the nonresponse from the preceding phase(s).

Nonresponse Error

As important as it is to get the response rate right, the response rate is not a measure of nonresponse error, despite its often being used as if it were.


Nonresponse error is a joint function of the proportion of nonrespondents and their distinctiveness. Thus response rates measure only the potential for nonresponse error (Groves, 2006).

Although the potential nonresponse error increases as the nonresponse level grows, recent research has found that actual nonresponse bias may sometimes be unaffected by increases in the nonresponse rate. Keeter et al. (2000), Curtin et al. (2000), and Merkle and Edelman (2002) found little, if any, connection between nonresponse rates and nonresponse bias, and Groves (2006) reported only a small association between nonresponse level and nonresponse bias in a meta-analysis of studies that had validation measures. These findings suggest that at least some of the characteristics measured in surveys are either uncorrelated, or only weakly correlated, with the causes of nonresponse (Groves et al., 2004). Other characteristics, of course, will be more strongly associated with nonresponse. It is generally impossible to judge a priori which outcome is more likely to occur.

Unlike unit nonresponse rate, which is a characteristic of a survey, unit nonresponse error is a feature of a survey estimate. The same survey may generate estimates containing widely varying nonresponse errors. For example, the ARMS estimate of farm income could have serious nonresponse bias at the same time that its estimate of pesticide use had no nonresponse error. Moreover, even if the ARMS estimate of the univariate distribution of farm income did have nonresponse bias, this would not mean that the ARMS estimate of the multivariate farm income distribution (say, how income is related to acreage and pesticide use) also had nonresponse bias. Just as there is no necessary connection between the nonresponse errors of different variables in the same survey, there is no necessary connection between the nonresponse errors associated with descriptive uses of a survey (involving a focus on means and totals) and the nonresponse errors associated with analytic uses of the same survey (involving a focus on model coefficients).

As far as we know, attention in ARMS has focused largely on the nonresponse rate, as opposed to nonresponse error. A shift in emphasis is therefore warranted. Research should focus on the nature of the ARMS nonresponse bias. If call-record histories are available, an inexpensive first step is to run simulations of the impact of reductions in response rate from the existing level. That is, estimates from ARMS can be compared with estimates from the same survey deleting the last (and probably most expensive) 5 percent (or 10 percent, or more) of the cases. (If call-record histories are not available, the survey should change its archiving practice for the next round of data collection.)
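The simulation just described might be sketched as follows, under invented data, with call-record order (number of contact attempts) standing in for how hard each case was to obtain.

```python
# Compare the full-sample mean with the mean after dropping the last
# (hardest-to-obtain) 5 percent of respondents. Data are simulated.
import random

random.seed(1)
# (contact_attempts_before_response, reported_value) per respondent
respondents = [(random.randint(1, 12), random.gauss(100, 25)) for _ in range(2_000)]

by_effort = sorted(respondents, key=lambda r: r[0])
cutoff = int(len(by_effort) * 0.95)                 # drop the last 5 percent
full = sum(v for _, v in by_effort) / len(by_effort)
trim = sum(v for _, v in by_effort[:cutoff]) / cutoff
print(f"full-sample mean {full:.1f} vs. trimmed mean {trim:.1f}")
```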

Given sufficient uniformity in the management of fieldwork, these simulations would provide a rough approximation of the impact of various increases in unit nonresponse from the current level. Equally important, the results could provide clues to the effects of various levels of nonresponse on estimation bias. If “last” cases are more similar to actual nonrespondents in ARMS than to other respondents, this procedure could give guidance to field management strategies that would reduce overall nonresponse bias. Another approach might be to use a nonresponse follow-up survey—a sample of nonrespondents—to obtain at least summary information on key characteristics of nonrespondents. Interviewer observations of nonrespondents and capture of detailed reasons given for not participating might also provide valuable insights. In light of the high nonresponse rates in ARMS, the project should be using a variety of methods to understand the nonrespondent population.

Without this kind of information, researchers make survey design changes aimed only at increasing the overall response rate (or stemming the overall decline in response rate). But in terms of data quality, decreasing bias is generally viewed as relatively more critical than increasing statistical efficiency through obtaining more completed interviews, although efficiency is, of course, important. If it is possible to raise the response rate close to 100 percent and cost is no object, then maximizing the response rate is the best strategy. But if this is not possible and if sufficient information is available, then efforts to target certain subgroups are likely to lead to greater bias reduction than those targeted at the entire sample. It is possible that strategies to reduce the overall nonresponse rate could even increase nonresponse bias, if they are disproportionately effective at increasing the response of groups that are already overrepresented. Strategies for reducing nonresponse bias (as opposed to those for reducing the overall nonresponse rate) can be cost-effective when there is information about the composition of the nonrespondents. Although such information is unlikely ever to be detailed enough to enable survey managers to target respondents to reduce nonresponse bias, collection and analysis of such information should be an important and routine part of the survey operation in developing cost-effective survey management protocols.


Recommendation 6.3: The nature of the ARMS nonresponse bias should be a key focus of the research and development program the panel recommends. This research and development program should focus, initially, on understanding the characteristics of nonrespondents.

Methods for Reducing Unit Nonresponse

Despite the preceding discussion, it is very likely that reduction of nonresponse in ARMS should remain a priority, albeit one more informed by study of survey processes and what can be learned about nonrespondents. Because nonresponse at the high rates observed in ARMS introduces substantial uncertainty about the nonrespondent population, the program should also remain mindful of the importance of addressing the concerns of sample members that lead to nonresponse.

All responses to ARMS are voluntary. Thus, establishing interest among respondents and enhancing the survey’s credibility may be important in improving response rates. The U.S. Department of Agriculture (USDA) publishes advertisements in trade journals, and preinterview letters are sent to sample members. The outreach program solicits the active cooperation of commodity interest groups and other private organizations, and, in a special pilot study, a one-hour program was produced and televised nationally in September 2004 and February 2005 to promote the 2004 ARMS. In all, 11 percent of respondents completing the 2004 ARMS Phase III economic enterprise (face-to-face) version indicated that they had seen the program (National Agricultural Statistics Service, 2005, p. 22). Another one-hour program was produced and televised during similar time frames for the 2005 ARMS. The panel thinks that further work along these lines is merited and that it would be enhanced by more rigorous evaluation efforts.

Timing of the phases of the survey may also affect survey cooperation. Part of Phase II occurs during the busy harvest season. Phase III occurs in the spring, typically before the busy planting season, which may impose less of a burden on operators’ schedules. This timing may also be well suited to minimizing recall errors (e.g., requesting information on revenues and expenses in early spring is advantageous because many operators will have recently completed their tax returns). As far as we know, there is no systematic research that establishes how survey timing is related to ARMS data quality. Thus the design and interpretation of the research and development we have recommended on both nonresponse and measurement might consider the issue of survey timing.

The survey has also experimented with the use of incentive programs to increase response (as well as response quality). Incentives research was conducted in 2004 on the ARMS Phase III core questionnaire that is mailed to respondents. The incentive was a $20 ATM debit card. NASS also offered a $20 debit card with the spring phase in 2005 to 16,000 sample units. The overall percentage of households cashing the card was 33-35 percent, or 40 percent of respondents and 3 percent of nonrespondents. Of those who got the card after responding to the survey, the rate was 60 percent. Offering the card led to a 6-7 percent increase in response rates for the mail returns. In addition, in 2006 NASS plans to provide customized data products to some respondents as a postincentive. For sufficiently complete returns, a brief report will be provided comparing the respondent’s responses with the published summary responses for the respondent’s geographic area.


Recommendation 6.4: The research and development program should continue NASS’s work on both public relations and incentives, and it should do so with a focus on nonresponse bias, not simply nonresponse rate.


One far-reaching potential change to improve response has not been employed: making ARMS mandatory. Since 1997, ARMS and the Census of Agriculture have been integrated during census years (years ending in 7 and 2) to reduce respondent burden. ARMS content is reduced during census years to facilitate the integration. There have also been changes in question wording to provide more consistency between the census and ARMS, which has allowed certain census variables to be refreshed annually with ARMS. The response rate is better in census years. This may be because the census is mandatory, and the farms selected for both ARMS and the census in census years may tend to treat ARMS as mandatory.

The heightened response in years in which the content is reduced and respondents may believe the survey to be mandatory affords the possibility of comparing ARMS results in census years with those in noncensus years and, depending on the findings, of considering whether action should be taken to make ARMS mandatory, as the annual Census Bureau economic surveys are. Comparison of response rates within groups in the census and noncensus years may also lead to a greater understanding of potential sources of nonresponse bias.


Recommendation 6.5: The research and development program should analyze whether there are differences in ARMS unit and item nonresponse rates between census and noncensus years, with an eye toward deciding whether making ARMS mandatory would improve data quality.

ITEM NONRESPONSE

According to NASS, item missing data rates vary across the variables measured in ARMS. Table 6-3 shows the distribution of variables in the 2004 Phase III survey by missing data rate. To support this summary table, we were provided a considerable amount of response data by variable (item code). However, response rates by variable and characteristics of respondents were not available due to confidentiality concerns.

TABLE 6-3 Percentage Distribution of Variables by Missing Data Rate, ARMS Phase III, 2004

Refusal/Unknown Rate    Percentage of Variables
10% or more              4.1
5 to 9.99%              11.5
1 to 4.99%              26.0
0 to 0.99%              58.4
All                    100.0

Many of the items with the highest rates of nonresponse involve dollar amounts for such things as assets, income, and debt. There is evidence from other surveys that this problem can be minimized through questionnaire design approaches, such as using respondent-generated intervals rather than point estimates (Press and Tanur, 2000; Juster and Stafford, 1991). These approaches should be a focus of the research and development program the panel recommends.

As noted in Chapter 5, there is concern that the ARMS questionnaire is quite complex and burdensome. One effect may be that respondents might provide false answers in order to avoid additional questions or that interviewers might steer respondents or fill in responses on their own. If such behavior occurs, it could represent an important latent form of nonresponse. This possibility further underscores the need for the ARMS program to do systematic research on how the questionnaires are perceived, how they are administered, and how they are answered. To some degree, such behavior can be detected by including consistency checks within an interview.


Recommendation 6.6: The research and development program should examine how questionnaire design and interviewing changes could reduce item nonresponse.

IMPUTATION

When variables are imputed, originally missing data are replaced with values generated by an assumed model for the nonrespondent data, so that analyses and summarization can more effectively be performed. The most common methods that statistical agencies employ for imputation are based on replacing missing values with reported values from other units in the survey, or mean values for that variable for respondents in a similar group, or values generated by other model-based methods. Imputation is useful, even necessary, to support analysis and summarization, but if it is improperly done, imputation can introduce nonsampling error and attenuate measures of association among variables.

The imputation processes discussed in this section are not to be confused with changing data that are regarded as erroneous. Errors, which are typically discovered using logical comparison programs, usually reflect conflicts between the responses on a record: for example, a record reports no spouse but reports earnings by a spouse; or a record reports that 1,000 hogs in total were removed from the operation in Section C, but also reports that 10,000 hogs were removed under a production contract in Section D. As in these examples, the inconsistencies may reflect simple transcription errors. NASS statisticians are careful about identifying and correcting errors—that is, conflict doesn’t necessarily imply error, and survey forms and notes are closely checked to provide further evidence before any changes are made (in the examples above, other responses, such as age or ethnicity queries for the spouse or payments received for the hogs, give further clues as to the correct entry). Errors may be corrected manually, but the process should not be confused with imputation procedures.

Imputation is a more formal process in ARMS and is a responsibility shared between NASS and the Economic Research Service (ERS). ARMS survey items are divided into two categories: those for which a response must be provided (either by the respondent or by a NASS statistician) and those that can be initially coded as a nonresponse. For a questionnaire to be “complete,” respondents must provide physical data on acreage, production, and farm production expenses. These items are not subject to imputation. For the items that must have a response, a NASS statistician typically uses manual methods to input data when the respondent cannot provide an answer.

About 75 percent of ARMS Phase III survey items fall in the category of items that can be individually refused. Refusals are designated with a minus 1 (−1) entry in the record. Using the 2005 version 1 Cost and Returns Report (CRR) questionnaire as an example, 523 items could be individually refused.

In order to meet mission requirements for developing annual farm business and farm household financial estimates, ERS designates a set of items for which imputed values must replace item refusal codes on completed questionnaires. Some of the imputations are performed by NASS and some are performed by ERS. In general, NASS creates imputations that can be based on mean reported values for farms that fall into the same location, farm size, and farm type categories as the refusal. ERS performs more complex imputations for farm and household financial refusals. The division of labor also reflects program needs: in general, NASS doesn’t produce statistics for farm households, whereas ERS does, so ERS develops the needed farm household imputations.

At the request of ERS, NASS employs model-based imputation for a small subset of the items that can be initially coded as nonresponses. The remaining items stay coded (using −1 as a value) as nonresponses. Again using the 2005 CRR example, ERS asked NASS to replace refusal codes with imputed values for 100 of the 523 refusable items (the “cost of production” ARMS versions 2-4 add many more refusable items, and the total number of refusable items across all versions exceeds 1,400). The NASS imputations are concentrated in three sections: 27 in Section E (other farm income), 15 in Section G (farm assets), and 44 in Section I (farm management and use of time), on which refused items on labor hours allocations (items 25 and 26) are imputed.

In the research databases, items that cannot be refused and items that are imputed by NASS are coded with a “P” prefix along with the cell number2 printed on the response box in the questionnaire. Thus, P508 in 2005 refers to cell number 0508 on the survey (cash or open market sales of hogs or pigs), and the P designation indicates that there are no nonresponse codes for that item. Other items, which could have nonresponse codes in the data delivered to ERS, are coded with an “R” prefix. Thus, R1241 refers to cell number 1241 (the year in which the principal operator began any farm operation), and the R indicates that the survey statistician can enter a −1 if the respondent doesn’t know or refuses to provide an answer.
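The naming convention can be illustrated with a small sketch; the item codes below are taken from the text, and the record values are invented.

```python
# P-prefixed cells cannot carry a nonresponse code; R-prefixed cells
# may hold -1 for a refusal or don't-know.
record = {"P508": 12_500, "R1241": -1}   # values invented

for name, value in record.items():
    refusable = name.startswith("R")
    missing = refusable and value == -1
    print(f"{name}: refusable={refusable}, missing={missing}")
```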

ERS imputes values to replace refusal codes for about 40 of the remaining refusable items. These items are necessary to produce farm business and farm household financial estimates. One item is in farm assets (Section G) and three are in farm debt (Section H). The remainder are in Sections I (farm management and use of time) and J (farm household), with the bulk in Section J. They cover questions on off-farm income earned by the operator household, operator household and family living expenses, nonfarm assets and debt held by the operator household, and operator and spouse characteristics (age, gender, education, and occupation). Those items are redefined as named variables, and the research database includes the named variables as well as the original R-prefixed items. ERS also creates other named variables, some of which may be combinations of survey responses (P- and/or R-cells). For example, “total off-farm income” is a named variable that is a combination of cells, and so is “farm business income.” Some of those named variables may have an imputed R-cell as a component.

To summarize, about 75 percent of the survey items can be coded with refusal codes. Nonresponse is imputed on about 13 percent of those items, 10 percent by NASS and 3 percent by ERS. Items that are subject to imputation by ERS are renamed as “named variables,” with the original data retained in the database as “R-cells.” ERS then creates more named variables, usually from combinations of items, and many of these may have imputed data in them.

For the first rounds of imputation, NASS employs both manual and model-based imputation. Manual imputation is used when a questionnaire is partially completed; it may be completed using the knowledge of the state’s agricultural experts in the field offices, based on whatever information they have about the farm operation and their knowledge of local and market conditions drawn from reported data from other recent surveys or from other sources. Examples of data from other sources are (a) the average amount charged or paid by the corresponding contractor for similar arrangements, as reported by the contractor; (b) corresponding local averages from other government or industry sources; and (c) data from mandatory reporting based on state law or from Farm Service Agency applications.

2

The cell number corresponds to the item code printed inside or adjacent to the response box. The cell number or item code is used for key-entry purposes. After the data are keyed, they are placed in a data file in which fields are given variable names (labels) based on the cell number corresponding to the response box that was the source of the data value.

In its model-based imputations, NASS uses an algorithm that assigns the mean of valid values reported by respondents identified as “donors,” conditional on a small set of categorical variables, including region, state, sales class, farm size, and type of farm. If there are fewer than 10 donors in a group, then a fallback group is used. (In 2005, a fallback group was not necessary for 77.8 percent of missing data.) When appropriate, a relationship between two items is used to improve imputation—for example, categories of reported owned acreage are used to impute the value of owned land.
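A minimal sketch of this kind of donor-group mean imputation, under invented grouping variables and data, might look as follows; the 10-donor threshold is the one described above.

```python
# Conditional-mean imputation with a coarser fallback group when a
# cell has fewer than 10 donors. Grouping variables are invented.
from collections import defaultdict
from statistics import mean

MIN_DONORS = 10

def group_means(rows, key):
    groups = defaultdict(list)
    for r in rows:
        if r["value"] is not None:        # donors are records with reported data
            groups[key(r)].append(r["value"])
    return {k: (mean(v), len(v)) for k, v in groups.items()}

def impute(rows):
    fine = group_means(rows, key=lambda r: (r["state"], r["type"]))
    coarse = group_means(rows, key=lambda r: r["type"])
    for r in rows:
        if r["value"] is None:
            m, n = fine.get((r["state"], r["type"]), (None, 0))
            if n < MIN_DONORS:            # too few donors: fall back
                m, _ = coarse[r["type"]]
            r["value"] = m
    return rows

rows = [{"state": "IA", "type": "hog", "value": float(v)} for v in range(20)]
rows.append({"state": "IA", "type": "hog", "value": None})
print(impute(rows)[-1]["value"])          # 9.5, the fine-cell donor mean
```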

Some variables are more likely to be imputed by NASS than others. The majority of imputed items fall into three groups: farm labor, other farm income, and farm assets. The most commonly imputed data cell is depreciation expenses, which was imputed 2,757 times. In total, NASS imputed 40,253 values (7.6 percent of the total number of nonzero values) in 2005.

ERS’s responsibility for imputation focuses on items identified as important to its mission requirement for the annual development of farm operation and farm operator household financial and structural characteristics. These imputations are also selected on the basis of the financial and structural variables required to create the ERS ARMS Phase III research file. The methodology for imputing these items is discussed below.

ERS can be said to impute data by using “like” information in a broad sense. But determining what constitutes such information is particularly complex for the ARMS Phase III questionnaire, in which questions and the definition of the data items may vary from version to version of the survey.

Sometimes ERS aggregates input variables to the imputations differently from the underlying data, and separate models must be developed for different versions of the questionnaire. For example, off-farm income from the collective category of “disability, retirement, Social Security, unemployment, Veteran’s benefits, other public retirement and public assistance programs” is collected separately for the farm operator, that person’s spouse, and other family members in version 1 of the questionnaire. In all other versions, income in this category is reported for the operator’s household and not separately by type of household member. Because of these differences, imputation of the total amount of this income component takes place in two steps. ERS imputes the missing values of retirement income for each of the operator, spouse, and other household members in version 1 using average values calculated by age and education classifications of the primary operator. ERS also imputes missing values of the total amount of household income in this category in the other versions. Imputation for versions 2-5 is based on data from all versions, since the conditional means needed for imputation require the use of weights that are available only for observations in version 1 and for the set of all observations.

At other times, ERS imports data from other parts of the questionnaire to aid in imputation. For example, to impute the amount of debt for farm operation loans in three categories (short term, real estate with a term over one year, and uses other than real estate with a term over one year), ERS uses information reported in the farm debt and farm expense sections of the survey. On version 1, debt is collected by lender, while on the remaining questionnaire versions it is collected in the three required categories. Debt on version 1 is first combined into the three categories. A category (for example, short-term debt) is classified as a refusal if debt is refused for one or more lenders in that category. When one or more categories of debt are refused, total debt is imputed based on total reported interest expenses (real estate plus nonreal estate). Debt by category is then computed (by farm size and type) for reporting farms. This information is then used to divide total imputed debt into the three required categories.
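Under invented numbers, the two-step debt imputation just described might be sketched as follows: a debt-to-interest ratio among reporting farms supplies the total, and category shares from reporting farms split it.

```python
# Step 1: impute total debt from reported interest expenses.
# Step 2: split the total using category shares from reporting farms.
reporters = [  # (total_interest, debt_by_category) for reporting farms
    (10_000, {"short": 30_000, "real_estate": 120_000, "other": 50_000}),
    (5_000,  {"short": 10_000, "real_estate":  70_000, "other": 20_000}),
]

total_debt = sum(sum(d.values()) for _, d in reporters)
ratio = total_debt / sum(i for i, _ in reporters)   # debt per dollar of interest
shares = {k: sum(d[k] for _, d in reporters) / total_debt for k in reporters[0][1]}

refusal_interest = 8_000                            # reported by the refusing farm
imputed_total = ratio * refusal_interest
print({k: round(s * imputed_total) for k, s in shares.items()})
```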

These complex, individualized, and somewhat judgmental imputation schemes apply to several other key items, which are handled in a special manner due to structural differences in the versions of the questionnaire:

  • Off-farm net income from any other farms, income from any other business, and income from renting farmland to others.

  • Off-farm income from interest, dividends, and other off-farm income.

  • Total proceeds from the sale of farm and nonfarm assets and recognized gain or loss on the sale of capital.

  • The number of persons who lived in the primary household as of December 31.

  • The components of nonfarm assets, nonfarm debt, and consumption expenditures of the operator household.

There are some other potential problems with the imputation practices used in ARMS. Imputation relies on the use of conditional means as estimates of missing values. For survey estimates of simple univariate-level statistics, or such statistics cross-classified by a couple of variables, such an imputation process should be generally adequate. However, estimates of variability in the data will usually be artificially reduced. Except when actual values have an exact relationship with the variables used to condition the mean estimate, the true values of the missing data should have a distribution of values, rather than the simple set of imputation values. Moreover, when more complex multivariate relationships are estimated, this approach to imputation may lead to estimates that are statistically inefficient at best and biased at worst. The reason for these problems is that conditional mean imputation generally cannot condition on a sufficiently large set of variables to maintain relationships between the imputed variables and all variables that might be included as related variables in a multivariate analysis. A more broadly conditioned and explicitly model-based approach to imputation would preserve important relationships. Such an approach would also make it simpler to create multiple imputations, which would provide a straightforward means of estimating the degree of uncertainty in the ARMS estimates that is due to missing data.
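For instance, with multiple imputations in hand, the extra uncertainty due to missing data can be quantified by combining estimates across the completed datasets (Rubin’s rules). A minimal sketch, with invented per-imputation estimates and variances:

```python
# Combine m imputed-data estimates: total variance = mean within-
# imputation variance + (1 + 1/m) * between-imputation variance.
from statistics import mean, variance

estimates = [52.1, 50.8, 53.0, 51.5, 52.4]   # point estimate per completed dataset
within    = [4.0, 4.2, 3.9, 4.1, 4.0]        # sampling variance per dataset

m = len(estimates)
q_bar = mean(estimates)                      # combined point estimate
b = variance(estimates)                      # between-imputation variance
total = mean(within) + (1 + 1 / m) * b       # Rubin's total variance
print(f"estimate {q_bar:.2f}, total variance {total:.2f}")
```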

In judging the adequacy of the current ARMS imputation procedures, the panel refers to the following criteria (Charlton, 2007):

  • Predictive accuracy. The imputation procedure should maximize preservation of true values. That is, it should result in imputed values that are as close as possible to the true values.

  • Ranking accuracy. The imputation procedure should maximize preservation of order in the imputed values. That is, it should result in ordering relationships between imputed values that are the same as (or very similar to) those that hold in the true values.

  • Distributional accuracy. The marginal and higher order distributions of the imputed data values should be essentially the same as the corresponding distributions of the true values.

  • Estimation accuracy. The imputation procedure should lead to unbiased and efficient inferences for parameters of the distribution of the true values (given that these true values are unavailable).

  • Imputation plausibility. The imputation procedure should lead to imputed values that are plausible. In particular, they should be acceptable values as far as the editing procedure is concerned.

In the panel’s view, although the current methodology for imputing with partial nonresponse may satisfy the first criterion for means, totals, and perhaps for ratios, it can lead to poor results when the analyst is interested in distributional properties of the population.

Another concern was brought to the panel’s attention by members of the research community who use the ARMS data. Good statistical practice is to identify when data have been imputed. The Federal Committee on Statistical Methodology has argued that, at a minimum, data users should be able to identify the values that have been imputed and the procedure used in the imputation. This transparency permits users the option of employing in their analyses their own methods to compensate for the missing values (Federal Committee on Statistical Methodology, 2001, pp. 7-8). This is particularly important when the individual record data are made available for research and other purposes. Failure to flag imputed data may affect the reliability of variables derived from the databases, and it will often cause users to overstate the statistical significance of their results.

ARMS fails to identify the imputed data, but it is possible to ferret out the imputed from the nonimputed values with a bit of work. Since the original R-cells are retained on the research database, those with an indicator of −1 are refusals. The associated named variable includes the imputed value, so by comparing R-cells with associated named variables a researcher can identify when an imputation has been made and can also identify the imputed value. To do so effectively, external researchers need to be provided with much more guidance on ERS imputation procedures and a concordance that links named variables to those R-cells that are subject to imputation by ERS. Another option would be to simply flag the imputed values.
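A researcher’s recovery of imputation flags might look like the following sketch; the concordance mapping here is hypothetical, since no published concordance exists.

```python
# Flag imputed values by comparing each R-cell (-1 = refusal) with its
# associated named variable, which carries the imputed value.
concordance = {"R1241": "year_began_farming"}   # hypothetical mapping

record = {"R1241": -1, "year_began_farming": 1987}

flags = {named: record[rcell] == -1             # True: value was imputed
         for rcell, named in concordance.items()}
print(flags)                                    # {'year_began_farming': True}
```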


Recommendation 6.7: NASS and ERS should consider approaches for imputation of missing data that would be appropriate when analyzing the data using multivariate models. Methods for accounting for the variability due to using imputed values should be investigated. Such methods would depend on the imputation approach adopted.


Recommendation 6.8: All missing data that are imputed at any stage in the survey should be flagged as such on files to be used for analysis.

ESTIMATION

The estimation method is one of the determinants of the size of the error in the estimates. Generally speaking, statistical agencies seek to employ estimation methods that result in both the smallest bias and the smallest sampling variance (Statistics Canada, 2003). However, due to its unique design, some of the standard estimation methods are not available for ARMS. For example, use of estimation methods that exploit the correlation over time in periodic surveys, with a large sample overlap between occasions so that data from previous rounds can be used as auxiliary variables (composite estimation), is not possible in ARMS because of the lack of sample overlap.

Because of informational constraints and institutional traditions, the estimation of ARMS statistics at NASS appears to be quite unconventional. ARMS estimation is a carefully choreographed multistage process that involves multiple inputs, including extra-survey inputs.3 This process derives from a long-standing tradition at USDA of developing estimates based on collected information filtered through a board process, which applies the expertise of the NASS and ERS staff selected to serve as board members in the generation of the official estimates. The statistical estimates computed directly from ARMS, which in other government surveys might be considered the final estimates, are treated as “indications” in a larger process of estimation. The process is intended to maximize the use of what is believed to be the best information and to ensure consistency with other estimates published by USDA.

The estimation process at NASS begins rather conventionally with the stages of imputation and editing, followed by estimation (called summary activities). It then takes a path that is distinctive to USDA surveys, proceeding along the following steps:

  1. Traditional nonresponse adjusted summary.

  2. Calibration and preliminary calibrated summary.

  3. Outlier identification and replacement (Outlier Board).

  4. Rerun calibration and summarization incorporating outlier replacement (Final Calibrated Summary).

  5. Establishment of farm production expenditures estimates at the national level (U.S. Farm Production Expenditures Board).

  6. Establishment of farm production expenditures estimates for 5 farm production regions (Regional Board).

  7. Establishment of state-level farm production expenditures (State Board).

  8. Establishment of final estimates for economic class, farm type, and fuel subcomponent expenditures.

Summary. During summary, variances are computed using the delete-a-group jackknife method. The output from the summary is called an “indication” rather than an estimate.
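A minimal sketch of delete-a-group jackknife variance estimation follows; the number of groups (15) and the simplification of not reweighting within replicates are our assumptions, and the data are simulated.

```python
# Split the sample into G random groups, recompute the estimate with
# each group deleted, and assemble the variance from the replicates.
import random

random.seed(7)
G = 15
data = [(random.random() * 10, random.gauss(100, 30)) for _ in range(600)]  # (weight, value)
groups = [i % G for i in range(len(data))]
random.shuffle(groups)

def wmean(pairs):
    return sum(w * y for w, y in pairs) / sum(w for w, _ in pairs)

theta = wmean(data)
replicates = [wmean([d for d, g in zip(data, groups) if g != k]) for k in range(G)]
var = (G - 1) / G * sum((r - theta) ** 2 for r in replicates)
print(f"estimate {theta:.2f}, jackknife SE {var ** 0.5:.2f}")
```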

Two summaries are created: a traditional nonresponse-adjusted summary and a calibrated summary. For the former, traditional nonresponse adjustments (reweighting) are made and direct expansions are computed for all items, including the production items. The expanded number of farms from the ARMS is compared with the NASS official estimates for farm numbers to evaluate the level of representation of the responding sample. Weights are scaled using the official farm number estimates as targets.

3

NASS “Estimation Process” presentation given at the ARMS III Workshop, July 2006; and NASS “Estimation Manual,” 2006.


However, for the calibrated summary, no traditional nonresponse adjustment is applied prior to calibration, since calibration is presumed to adjust for nonresponse as well as coverage.

Traditional nonresponse summary indications and preliminary calibrated summary indications are reviewed by the NASS field offices. When outlier adjustments are recommended, documentation is provided to the National Expenditures Board to use in determining whether to make a change.


Calibration and Preliminary Calibration Summary. The ARMS estimation process actually begins with the calibrated summary. “Calibration” is a general term for a sampling-weight adjustment that forces the estimates of certain item totals based on the sample at one phase of sampling to equal the same totals based on a previous phase or frame (control) data. As employed by statistical agencies around the world, calibration is a common and accepted procedure for adjusting for unit nonresponse and for undercoverage. NASS includes a sample of nonoverlap (NOL) farms in the original sample as discussed on Pages 4-5 and 4-6. This provides a direct measure of list incompleteness for all survey indications.

A calibrated summary has been produced since the 2004 ARMS. A multivariate calibration is performed that uses official estimates from other survey sources (the 14 items listed below) to assess the level of representation of the responding sample. When the calibrated summary was introduced, the decision was made to confound the nonresponse adjustment with the coverage assessment. The term “coverage” may or may not include list incompleteness and can refer to overcoverage as well as undercoverage. In ARMS, the NOL farms address list undercoverage, but they are eligible for calibration adjustments.

At this point, NASS adds another step to meet another goal. In the process of calibration, NASS uses truncated4 linear calibration to meet predetermined official estimates of acreage and production for specific items raised on the farms, called calibration targets (see the list below; a computational sketch of weight calibration follows the list). The ARMS weights are adjusted so that the ARMS survey replicates the control totals for these 14 items. The expenditure estimates are then regenerated with these “calibrated” weights. This calibration process ensures that the ARMS data replicate the official NASS estimates for the 14 selected crop and livestock items set by the board process.

The following official NASS estimates are used as calibration targets:

  1. Corn for grain, harvested acres

  2. Wheat for grain, harvested acres

  3. Soybeans, harvested acres

  4. Cotton, all, harvested acres

  5. Potatoes, harvested acres

  6. Vegetable acres (sum of fresh and processing)

  7. Fruit and nut acres (sum of 28 fruits and nuts)

  8. Broilers raised

  9. Turkeys raised

  10. Eggs produced

  11. Cattle inventory

  12. Pig inventory

  13. Milk production

  14. Number of farms (by 7 economic sizes)

4

Calibrated weights are constrained to be greater than or equal to one, in the belief that each respondent should at least represent itself. Other constraints may be applied as appropriate.
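The computational sketch promised above follows. It is not the NASS algorithm: ARMS solves a multivariate truncated linear calibration over all 14 targets at once, whereas this raking-style loop merely illustrates adjusting weights toward external totals with the footnote 4 lower bound of one. Targets and sample values are invented.

```python
# Iteratively scale weights so weighted sample totals approach the
# external targets, truncating each weight below at 1.
targets = {"corn_acres": 1_000.0, "farms": 120.0}   # invented control totals

sample = [  # each record: base weight and its values on the target items
    {"w": 10.0, "corn_acres": 50.0, "farms": 1.0},
    {"w": 15.0, "corn_acres": 20.0, "farms": 1.0},
    {"w": 12.0, "corn_acres":  0.0, "farms": 1.0},
]

for _ in range(100):                        # iterate to approximate convergence
    for item, target in targets.items():
        total = sum(r["w"] * r[item] for r in sample)
        if total > 0:
            factor = target / total
            for r in sample:
                if r[item] > 0:             # scale only records carrying the item
                    r["w"] = max(1.0, r["w"] * factor)

print([round(r["w"], 2) for r in sample])
print({k: round(sum(r["w"] * r[k] for r in sample), 1) for k in targets})
```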

Outlier Identification and Replacement. The total calibrated expenditures are used to determine objective decision rules for identifying outliers. These are referred to the respective field offices for closer examination. Field offices check for undetected nonsampling errors, review the history of the respondent, and develop a more detailed profile of the operation. This information is used by the Outlier Review Board to decide on a consensus course of action. These actions are posted to the dataset, and the same calibration procedure is rerun with these additional constraints using the 14 external targets listed above.


Review of Final Summary. After calibration, the calibrated summary is rerun. These regenerated indications are delivered to the National Expenditures Board and the field offices. All field offices again review the indications and write comments, which may be presented to the National Expenditures Board. During this time, NASS and ERS staff prepare for the National Expenditures Board.


Preparation of Official Estimates. The National Expenditures Board sets farm production expenditure estimates. The board consists of about 20 NASS and ERS representatives and, since 2006, state board members. In a consensus process, it sets 16 farm production expenditure estimates5 at the U.S. level and sets the stage for subsequent establishment of expenditure levels for farm regions, states, and economic class, farm type, and fuel subcomponents. Later, the board sets estimates for the 5 farm production regions and, supplemented by representatives of the states, sets estimates for the 15 largest agricultural states.

The estimation process has four major inputs: estimates from the previous year; ARMS indications from the current year; other data; and the knowledge of NASS, ERS, and state economists and statisticians. The data other than ARMS that the board uses in its work include the Census of Agriculture, prices paid indexes from the NASS Agricultural Prices Report, Crop Acreage and Production, Livestock Production, ERS farm income, ERS cost of production for various commodities, and data from the National Pork Producers Council, the Federal Reserve, the producer price index (PPI), and the Association of Equipment Manufacturers (AEM). ARMS III staff prepare recommendations prior to the board deliberations.

5

Farm production expenditure estimates are prepared for feed, farm services, rent, agricultural chemicals, fertilizer, lime and soil conditioners, interest, taxes (real estate and property), labor, fuels, farm supplies and repairs, farm improvements and construction, tractors and self-propelled farm machinery, other farm machinery, seeds and plants, and trucks and autos.

The boards rely on two major inputs or “summaries.” One is a direct expansion of the summarized data. The other is the final calibrated summary.


Regional Estimates. After the national estimate for each expenditure item is set, board members set regional-level estimates for the five farm production regions. These regional estimates are constrained to sum up to the National Expenditures Estimates.


State Estimates. The board members then set state-level estimates that sum up to the regional expenditures estimates. Thus, in a cascading effect, the board conducts several reviews to ensure that the totals estimated in the ARMS estimation process are consistent with official production estimates, and that regional and state estimates are consistent with the national totals and are additive.
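One simple way to enforce this additivity (the board process itself is judgmental, so a purely proportional adjustment is only an illustration) is sketched below with invented numbers.

```python
# Proportionally scale state indications so they sum to the regional
# estimate already set by the board.
regional_estimate = 5_000.0
state_indications = {"IA": 2_100.0, "IL": 1_800.0, "MN": 1_400.0}

scale = regional_estimate / sum(state_indications.values())
state_estimates = {s: round(v * scale, 1) for s, v in state_indications.items()}

print(state_estimates)
print(sum(state_estimates.values()))   # sums to the regional estimate (up to float rounding)
```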


Other Estimates. Other values are estimated directly, including the level of aggregate expenditures by item within each size class of gross farm revenue, and within each major type of farm enterprise and fuel subcomponent expenditures (diesel, gas, liquid propane gas, and other fuels).


In this section, we have outlined the several steps in the development of published estimates from ARMS. Overall, the effects of the various adjustments on statistics estimated using ARMS are not clear. In particular, the interventions in ARMS based on board processes introduce changes that are not replicable in the normal sense expected in scientific research. These interventions may well lead to better estimates, or they may simply impose consistency across various key estimates at the cost of disturbing other relationships in the data.


Recommendation 6.9: NASS and ERS should provide more clarification and transparency of the estimation process, specifically the effect of calibrations on the assignment of weights and the resulting estimates.
