the questionnaire, based on commodity. As noted earlier, the ARMS Phase II questionnaire was recently and temporarily integrated with another survey questionnaire for the Conservation Effects Assessment Project (CEAP) for those operations that were selected for both the ARMS and CEAP samples. Phase II uses a single collection mode: face-to-face personal interviews.
The Phase III survey (the costs and returns survey) asks about farm and household economics and farm/farm operator characteristics. It includes several questionnaire versions (a general questionnaire, one to three commodity-specific questionnaires, and a core questionnaire), along with additional sample units added specifically to achieve sufficient reliability for estimates in the 15-state oversample. The sample for this phase is drawn both from the list frame (also the source of sample units for Phases I and II) and from an area frame, using a sample selected from eligible farms identified in the annual June area survey. Every five years the survey is integrated with the Census of Agriculture. It incorporates several modes of data collection: face-to-face interviews, telephone interviews, mailout-mailback with face-to-face follow-up for mail nonrespondents, and, soon, the Internet.
Each phase of the survey operations is a survey design challenge in itself. When combined in one program, they pose an intricate, interwoven series of design challenges that must be addressed holistically. It is useful to consider those challenges in light of sampling and questionnaire design goals, such as minimizing respondent burden, minimizing cost, and ensuring compatibility among the pieces and across time.
In a survey with so many phases and lengthy questionnaires on highly technical topics, the issue of respondent burden is pressing. The response burden for ARMS, as estimated in minutes by the National Agricultural Statistics Service (NASS), varies substantially across the components of each survey phase (Table 4-1). The costs and returns survey (Phase III) is especially burdensome: it contains questions that are difficult for the respondent to answer, often because the data are hard to retrieve or estimate and sometimes because the question is conceptually complex or unclear from the respondent's point of view.
These observations have several implications. One is that the survey suffers both from item nonresponse (missing values for particular variables) and from unit nonresponse (entirely missing observations in at least one phase). Another consequence is that there has been a conscious effort