The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
Survey Research Methods and Exposure Assessment

INTRODUCTION

Survey research has become an integral and routine approach to organizational decisionmaking and scientific estimation. Hardly any product can be marketed or political candidate nominated without survey data on public acceptability. However, the quantity of survey research might have outstripped the scientific quality of survey practice. In many surveys, interviewers with little scientific training do little more than hold conversations with survey respondents. Such surveys are not likely to maintain the methodological standards that are required for scientific validity. Moreover, when surveys appear expensive and sponsors who must pay for them look for ways to reduce costs, methodological standards are usually among the first factors to be sacrificed. That situation probably characterizes most of the survey research associated with exposure-assessment studies. The committee examined many prominent studies in the field and found notable departures from sound survey practice; even the most sophisticated and laudable efforts toward improving survey quality, such as the standardized Environmental Inventory Questionnaire, contain some problematic survey practices.

Properly designed and conducted survey research can provide the precise population estimates needed for exposure assessment relatively inexpensively. The relevant types of information that can be obtained from surveys for exposure assessment include:

- Percentage of the U.S. population or a specific group or community that uses gas stoves or pesticides, or lives in homes with attached garages.
- Numbers of persons who smoke cigarettes, pump gasoline, or apply pesticides on a particular day.
- Amounts of time per day that people spend outdoors, in automobiles, or in the presence of people who smoke.

Such types of data on time use or time-activity patterns have become important in exposure-assessment research and are central to this chapter. The methodological steps in collecting time-activity data and all other data collected by surveys include:

- Choosing samples of persons with random-probability methods from complete and carefully constructed sample frames.
- Selecting the most appropriate mode(s) of data collection, usually face-to-face or telephone interaction or the subjects' own completion of a survey form.
- Reaching a sample of adequate size for statistical analyses.
- Achieving a high response rate from the selected sample.
- Deciding on the most appropriate measurement approach; in exposure assessment, these include personal exposure monitors combined with time-activity diaries (direct), time-activity diaries alone (indirect), and single questions or self-reports (questionnaire).
- Designing survey protocols that are understandable by respondents, usable by interviewers, and appropriate for sample projection.
- Framing specific survey questions in language that is simple, direct, and unambiguous.
- Coding and storing collected information in computer-readable form.
- Analyzing the data with appropriate statistical techniques.
- Deriving statistically valid conclusions from the data.

Specific considerations of coding, analysis, and dissemination of survey data are treated only minimally in this chapter, because those aspects of survey research are familiar and present relatively few problems. What is less familiar and recognized is how much the results of survey studies depend on the quality of collected data, particularly when too little attention has been given to the first seven steps listed above. This chapter concentrates on those steps. The seven methodological steps apply to all data involving human populations and not just to activities labeled as surveys.
For example, data collected in a laboratory or field study that used an inappropriate or inadequate sample might be given undue weight. Table 5.1 provides a general outline of the three topics to be addressed in this chapter: sample selection, measurement approach, and questionnaire framing and wording. This chapter focuses on advanced survey methods for exposure assessment and is not a how-to guide to survey research. Several useful textbooks on survey research methods can be recommended for the latter purpose: Warwick and Lininger (1975), Fowler (1984), Converse and Presser (1986), and the more elementary introduction of Bradburn and Sudman (1988). EPA (1984a,b) has also produced a comprehensive and useful guide for conducting surveys.

TABLE 5.1 Methodological Factors in Exposure-Assessment Surveys

Sample selection
- Probability versus nonprobability method
- Sample frame (target population)
- Equal sampling versus oversampling of groups
- Response rate
- Sample size and related sampling error (precision)
- Other (e.g., sequential sampling and Bayesian estimation)

Measurement approach
- Survey mode (face-to-face, telephone, etc.)
- Direct approach (personal monitor and time-activity diary)
- Indirect approach (time-activity diary alone)
- Questionnaire approach

Questionnaire framing and wording
- Open versus closed questions
- Single versus multiple items
- Long versus short questions
- Explicit versus implicit questions
- Aspects of time

It would be a mistake, however, to assume that any text can make one a survey-research specialist. The elements of survey research and questionnaire construction are subtle arts, currently aided by little scientific guidance and, even more, by professional experience and wisdom. Even in well-established areas of research, each study and study instrument must be shaped by experienced practitioners to meet the study's unique objectives.

Exposure-assessment survey research presents challenging problems because it is heavily involved with obtaining information about time. Personal exposure to pollutants is cumulative and complicated with respect to time, and exposure assessors want to document when an important exposure to a contaminant has occurred. They also want to know the frequency of exposure, its sources, its location, and its contaminant concentrations in air and the

factors that affect them (e.g., ventilation), the breathing rates and other physiological states of the exposed person, and the general health status of the exposed person. The question "How long and how often was the exposure?" raises some difficult estimation problems. At one extreme, one can look at a very short period, such as an hour or a day; such studies require a high degree of control or precision. At the other extreme, one can look at far longer periods, such as a decade or a lifetime; these entail much less precision or control. Survey researchers can ask how often an individual respondent might have been in a situation that was likely to involve significant exposure, but the respondent's memory of such occasions is extremely problematic. Research on respondent memory has revealed a host of biasing factors and ambiguities (Tulving, 1983; Bradburn, 1987; Pierson et al., in press). Thus, there has been increased interest in the more precise measurement of activity during very short periods, such as the day or the hour, for which the time-activity diary is very promising as a measurement option.

Most exposure-assessment studies begin with only a general statement of the issues. To paraphrase a summary from another field:

Some exposures to some contaminants of some sources for some periods at some frequency and concentration in some environments with some ventilation during some activities at some breathing rates have some health effects on some people.

Such a statement might seem at first glance vague or capricious. But it does show that many variables must be taken into account in estimating exposures and effects. It also shows that a few simple guidelines and checklists will not suffice in exposure assessment.
Obtaining necessary information on the temporal and spatial distributions of contaminants of interest and on the population possibly exposed to them requires a highly coordinated multidisciplinary approach. Thus, it is not reasonable to expect a single breakthrough in technique, such as automated diaries or less obtrusive monitors, to address more than part of the overall estimation problem. Once the variables mentioned earlier are considered in the study design and study participants are identified, the central problem is to ensure that the survey instrument used in the study is completed in an appropriate and understandable manner that will result in useful information.

SAMPLE SELECTION

The matter of selecting study participants is more straightforward than survey design, but strict guidelines need to be followed. Statistically proper sample selection is crucial for valid and accurate extrapolation from sample characteristics to general-population characteristics. Sample-selection procedures for survey research on human behavior have a solid scientific and quantitative basis. The mathematical principles involved are rather simple and straightforward, but often neglected in conducting surveys. Selection of samples should have the following characteristics:

- The selection of individuals within a target population or within a specified period must have a random-probability basis. Otherwise, the resulting data will have at best limited generalizability.
- The design of a sample frame involves identifying potential participants and implies a series of rules to be followed in selecting individuals in the target population. Each individual must be identified in the frame and have a known chance of selection.
- The selection process should ensure that all individuals in the frame have an equal chance of selection or specify that some individuals of particular interest (e.g., older people, asthmatics, or people in rural areas) have a greater (but known) chance than others of being selected and oversampled.
- The sample frame should make it possible to calculate a response rate, the proportion of sampled individuals who participate in the study (i.e., provide the required data). That is a crucial gauge of the quality of the sample. If a sample frame specifies that 1,000 persons be selected into the sample, but only 200 or 300 actually participate (respond), the response rate is only 20% or 30%.
Such a low response rate will raise important questions about the generalizability of the data, particularly if those who do respond can be said to have selected themselves into the sample and thus to have biased the sample-selection process. Only careful follow-up methodological studies can determine how serious such response-rate problems can be for exposure assessment.

Assuming that a probability sample has been drawn, that nonresponse bias is negligible, and that measurement error also is negligible, statistical formulas should be used to calculate the sampling error of an estimate based on the sample. Calculated estimates of precision are not appropriate if the response rate is low or if there are substantial measurement errors or problems.
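The bookkeeping just described, a known selection probability from a complete frame and a response rate computed against the full selected sample, can be sketched in a few lines. All names and counts below are hypothetical illustrations, not data from the chapter.

```python
import random

# Hypothetical sample frame: every individual is listed and has a known,
# nonzero chance of selection.
random.seed(1)
frame = [f"person_{i}" for i in range(10_000)]

n = 1_000
sample = random.sample(frame, n)   # equal-probability sample, no replacement
selection_prob = n / len(frame)    # each individual's known chance: 0.10

# Suppose only 300 of the 1,000 selected individuals actually participate.
respondents = sample[:300]         # stand-in for the subset who responded
response_rate = len(respondents) / len(sample)

print(f"selection probability: {selection_prob:.2f}")  # 0.10
print(f"response rate: {response_rate:.0%}")           # 30%
```

A 30% response rate computed this way is the figure that, as the text notes, should trigger scrutiny of who selected themselves into the responding group.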

Target Population

In studying human exposures, it is first necessary to determine whose exposures are to be assessed, i.e., one needs to determine the target population or sampling frame. Depending on the goal of a study, one could select from a variety of target populations, e.g., the U.S. national population, persons who reside in a particular community of interest, or persons who satisfy particular eligibility criteria. For epidemiological studies, one might be interested in a susceptible population, such as asthmatics or school-age children. For compliance studies, one might be interested in the general population of a geographic region.

If the target population is small, it might be possible to take a census and measure every individual in the population. Otherwise, one needs to select a sample from the target population and measure the individuals in the sample. Even when the population is small enough for a census, it might be preferable to use a sample and obtain more detailed information from those measured.

It is crucial in selecting the sample to use an appropriate probability sampling method, so that each individual has a known and nonzero chance of being selected. Depending on the goal of the study and the availability of prior information, one could set the sampling probabilities to be equal or unequal. If a specific subpopulation is of interest, one might oversample from that subpopulation, i.e., use higher sampling probabilities for individuals in it. If it is known or believed that the outcome of interest might be more variable in a subpopulation, one might also oversample from that subpopulation. When the goal of the study is to estimate characteristics of an entire population by combining information obtained from subpopulations, the Neyman allocation technique (Cochran, 1963) can be used to designate sample sizes for subpopulations.
It is always important that each individual have a nonzero sampling probability, if the sample is to be generalizable to the target population.

Samples not based on probability sampling methods are often used in preliminary stages of exposure assessment. For example, investigators might conduct a pilot study of a new instrument on their colleagues or on volunteers. The results of such measurements might be useful for evaluating the instrument or testing overall field procedures, but the data collected usually cannot be generalized to any wider population. The same is true for studies performed with small or unrepresentative samples.
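The Neyman allocation mentioned above can be sketched as follows: a fixed total sample size n is divided among strata in proportion to N_h * S_h, where N_h is the size of stratum h and S_h its standard deviation for the outcome of interest. The stratum sizes and standard deviations below are hypothetical.

```python
# Neyman allocation sketch: allocate n_total across strata in proportion
# to stratum size times stratum standard deviation.
def neyman_allocation(n_total, sizes, std_devs):
    products = [N * S for N, S in zip(sizes, std_devs)]
    total = sum(products)
    return [round(n_total * p / total) for p in products]

# Three hypothetical subpopulations; the second is small but believed to be
# four times as variable, so it is oversampled relative to its population share.
sizes = [50_000, 5_000, 45_000]
std_devs = [1.0, 4.0, 1.0]
print(neyman_allocation(1_000, sizes, std_devs))  # [435, 174, 391]
```

Note that the middle stratum holds 5% of the population but receives about 17% of the sample, illustrating variance-driven oversampling.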

Response Rate

If one is to generalize from a sample to a target population, one must obtain a high proportion of responses from those sampled. A low response rate is probably the source of most of the quality problems associated with survey research. The U.S. Census Bureau and other government agencies can often obtain responses from more than 90% of those eligible. Academic research organizations usually have 65-75% response rates. Most commercial firms seem satisfied with 40-50% response rates. In many surveys that receive wide attention (e.g., in the mass media), effective response rates are less than 20%.

Low response rates warrant careful scrutiny, because they raise the possibility of substantial nonresponse bias. If respondents and nonrespondents are different, the data collected on the respondents are of questionable generalizability to the entire target population. Therefore, each survey should include an evaluation of the nature of nonresponse and the presence of nonresponse bias, particularly if the response rate is much less than 70%.

Nonresponse can have a variety of causes, such as failure to locate potential respondents, refusal to participate, and inability to provide required information (e.g., because of illiteracy or language problems). Perhaps most important, potential respondents might lack interest in the study or topic. In carefully designed surveys, exact percentages of different types of nonresponse are calculated. Some types are unlikely to be associated with the outcome of interest and therefore do not lead to nonresponse bias. Analysis of the causes of nonresponse is useful for identifying the potential for nonresponse bias and for reducing nonresponse. A low response rate by itself does not necessarily lead to nonresponse bias, but it suggests a potential for it, especially in studies that make great demands on survey participants.
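One simple form of this nonresponse accounting can be sketched by comparing response rates across subgroups that are known for everyone in the frame. The urban/rural counts and the inverse-response-rate weights below are illustrative assumptions, not a procedure prescribed by the chapter.

```python
# Hypothetical counts of sampled vs. responding individuals by subgroup.
sampled = {"urban": 600, "rural": 400}
responded = {"urban": 450, "rural": 200}

# Response rate per subgroup, and an inverse-response-rate weight that gives
# respondents in underrepresented groups more influence in estimates.
rates = {g: responded[g] / sampled[g] for g in sampled}
weights = {g: 1.0 / rates[g] for g in sampled}

overall = sum(responded.values()) / sum(sampled.values())
print(f"overall response rate: {overall:.0%}")  # 65%
for g in sampled:
    print(f"{g}: rate {rates[g]:.0%}, weight {weights[g]:.2f}")
```

Here rural respondents (50% response) would each carry twice the weight of a fully responding group, a crude stand-in for the reweighting idea; it rests on the assumption that respondents and nonrespondents within a subgroup are alike.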
To evaluate the presence of nonresponse bias, one needs to compare respondents and nonrespondents. Several survey devices are available, such as collection of a minimal set of data from a sample of nonrespondents with a short and less burdensome questionnaire, offering of monetary incentives to a sample of nonrespondents to persuade them to participate, and comparisons of groups of respondents, e.g., stragglers versus early respondents.

If nonresponse analysis indicates little or no difference between respondents and nonrespondents, one might assume that nonresponse is random and does not cause nonresponse bias; one can then analyze the observed data directly. However, if respondents and nonrespondents do differ (e.g., if urban residents are more likely to respond than rural residents), it will be necessary to adjust for the difference. Given an adequate amount of background information on respondents' residence locations, demographic characteristics, etc.,

it might be possible to generalize from the observed sample to the target population with statistical techniques, such as the reweighting or imputation steps presented by Kalton and Kasprzyk (1986). The techniques generally assume that, if nonrespondents have the same background characteristics as respondents, their exposures (and exposure distributions) are the same. The respondents' exposures are thus used to impute the nonrespondents' exposures.

Sampling Error

Statistical formulas can be used to estimate sampling error. Three factors are involved: the variance in the sampling, the sample size, and the sampling rate (the fraction of the entire population that is sampled). The sampling rate usually can be disregarded if the sample comprises less than 10% of the population from which it is drawn. For a simple random sample taken to estimate a specified population proportion (e.g., smokers or persons exposed to a specified contaminant on a particular day), sampling error is calculated according to the following equation:

    sampling error = [p(1 - p)/n]^(1/2)    (Eq. 5.1)

where p is the proportion of the sample estimated to have the characteristic (e.g., smokers) and n is the sample size. Multiplying the sampling-error value by 1.96 and adding and subtracting the result from p gives a rough 95% confidence interval for the sample estimate; multiplying the sampling-error value by 2.58 and adding and subtracting the result from p gives a rough 99% confidence interval. If the sample represents more than 10% of the total population of interest, the error can be reduced by a factor of (1 - proportion of population sampled)^(1/2). For example, if, in a sample of 600 people, 40% are found to have a characteristic, equation 5.1 leads to an estimated sampling error of 2%. With 99% confidence, the true proportion lies between 35% and 45% (i.e., 40% ± (2% × 2.58)).
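Equation 5.1 and the worked example can be checked in code. This sketch uses the conventional normal multipliers 1.96 (95%) and 2.58 (99%), and applies the finite-population correction only when the sample exceeds 10% of the population.

```python
import math

def sampling_error(p, n, population=None):
    """Eq. 5.1: sqrt(p(1-p)/n), with the finite-population correction
    applied when the sample is more than 10% of the population."""
    se = math.sqrt(p * (1 - p) / n)
    if population is not None and n / population > 0.10:
        se *= math.sqrt(1 - n / population)  # finite-population correction
    return se

# The chapter's example: a sample of 600 people, 40% with the characteristic.
se = sampling_error(0.40, 600)
print(f"sampling error: {se:.1%}")  # 2.0%
print(f"95% CI: {0.40 - 1.96 * se:.1%} to {0.40 + 1.96 * se:.1%}")
print(f"99% CI: {0.40 - 2.58 * se:.1%} to {0.40 + 2.58 * se:.1%}")
```

The 99% interval comes out as roughly 34.8% to 45.2%, matching the text's rounded 35-45% range.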
Use of the equation assumes a random sample, no important response-rate anomalies, no clustering of respondents, and no generalization beyond the original population. The above calculation of the confidence interval assumes a normal distribution for the sample estimate. In practice, the assumptions for equation 5.1 are often not met. In such cases, a correction factor is used. For example, for in-home personal interviewing with high clustering of respondents,

equation 5.1 produces error estimates that are too small and need an appropriate inflation factor.

The size of the sample is important in reducing sampling error, but sample size alone does not ensure sample quality. Determination of a proper sample size depends on the type of inference to be drawn, the sample variance, the degree of precision desired, and the adequacy of the response rate. It is important to note that sampling error varies inversely with the square root of the sample size.

Other Features

Three further features of sample design can increase a survey's effectiveness. Rather than a single large survey, a series of small samples can be drawn sequentially until stability in estimates and inferences is achieved. That can be especially important in personal-monitor studies, which are extremely expensive.

Efficiencies in sampling design also can be achieved by using data from existing time-activity pattern studies (described later). Fairly consistent estimates of time spent outdoors or time spent in travel, for instance, have been published, and the background factors that are associated with larger or smaller portions of time spent in such activities also are fairly predictable (Robinson, 1977; Juster, 1985). For example, day of the week, employment status, and educational level have more to do with time spent away from home than do age, region of the country, or season. That means that well-executed samples of single local areas taken at single times of the year can provide considerable insight into exposure conditions in other locations. In other words, activity patterns in Denver, Cincinnati, and Jackson (Michigan) do not appear very different from one another and closely resemble national patterns of time use.

A third sample-relevant consideration is the use of panel studies that involve multiple periods of observation of the selected respondents.
Especially in exposure studies, one needs exposure readings for individuals over more than a day or a week. Following the same individuals across time provides an essential perspective for future exposure-assessment studies.

MEASUREMENT APPROACHES

The process of exposure takes place across time, and exposure assessment needs to be sensitive to the time element in all data collections. Three measurement approaches can be distinguished in population-based exposure assessment: direct, indirect, and questionnaire. In the direct approach, respondents report in time-use diaries and carry personal exposure monitors that record their exposures for short periods. In the indirect approach, exposure is imputed from the activities that respondents report in time-activity diaries. In the more traditional questionnaire approach, exposure is imputed from responses to self-reported factual questions (e.g., occupation, age of dwelling unit, and fuels used for heating and cooking) or to general questions regarding activities (e.g., frequency of using pesticides). Personal exposure monitors are not used in the indirect approach or the traditional questionnaire approach.

Direct Approach

With the direct approach, human subjects carry personal monitors while they engage in their daily activities. Exposures are thus measured directly and objectively and without the reporting problems associated with questionnaires. However, without some respondent self-report or outside information, there is no way to understand the pollution sources that led to exposures or the reasons why notable exposures were recorded by a monitor. Therefore, one usually needs detailed activity reports, such as those available from a time-activity diary.

The other major problems that arise in personal-monitor studies result from the cumbersome nature of most personal monitors. The use of cumbersome devices can result in high noncompliance rates. Also, the presence of the monitors can reduce the utility of the data that are obtained by affecting the behavior of the respondents. Respondents might be more likely to restrict their activities when they carry equipment. When interacting with other people, subjects might try to conceal the equipment or otherwise wear it improperly.
(Such phenomena can also affect the quality of data that respondents provide in related questionnaires.)

Exposure researchers must take particular care not to overburden the subjects when collecting survey data from them in exposure studies. When time-activity data were collected in early exposure studies, such as the TEAM study conducted in the early 1980s in Denver and Washington (e.g., Hartwell et al., 1984; Johnson, 1984), completion of the diary was one of many requirements for the respondents. Sometimes, neither respondents nor interviewers were adequately prepared for the demands and rigors of completing the time-activity diary. As a result, some diaries contained more than 24 hours' worth of activities, others less than 24 hours' worth. Large periods were not accounted for; e.g., some respondents reported no meals or sleep during 24 hours.

In some diaries, the periods of instruction by interviewers or technicians were recorded as normal daily activity. Activities like child care or shopping were seriously underreported or not reported at all. Nonetheless, the researchers were able to use the diaries to identify certain locations (e.g., parking garages) or activities that involved maximal exposure. The estimates of time that respondents spent in those locations might have been flawed, but the personal exposure data could be merged with more carefully designed and conducted time-activity studies.

Indirect Approach

The indirect approach does not involve the problems and expenses associated with using personal monitors. Instead, one separately measures the activity patterns of a sample of human subjects and merges the data with data on contaminant concentrations found in independent samples of microenvironments (e.g., from fixed-site monitoring) to assess exposure. The main advantage of the indirect approach is that it places less burden on respondents than personal monitoring and is thus able to achieve higher response rates.

Time-activity data can be obtained with several procedures, such as estimation by respondents in the study sample, direct observation, or time-activity diaries. In the estimation approach, people are asked to estimate the amount of time they spent in various activities during a given period, such as the previous year or the previous week. This is usually the least burdensome technique, although its reliability and validity are unknown and highly subject to the frailties of human memory and understanding.

In the observational approach, respondents' activities are monitored by outside observers. That adds a valuable degree of realism and completeness to the data collection, but the presence of an observer is likely to influence a respondent's activities. It is also likely to entail high rates of refusal to participate in a study.
Time-activity diaries have the greatest potential utility of the three procedures. Respondents are asked to describe sequentially all the activities in which they engage during a given period, usually a day or a week. The diary can be filled out concurrently with participation in the activities, as in the Total Human Environmental Exposure Study (THEES) (Lioy et al., 1988), in which respondents were instructed to record all their activities for a 24-hour period. It can also be done by recall, as in the California Air Resources Board (CARB) Activity Study, in which respondents were asked to recall sequentially all their activities of the previous day. More burdensome for respondents than simple estimation, diaries provide a far more informative

task becomes much more difficult, because the time of episodes of peak or unusual exposure cannot be identified. Personal-monitor studies raise sampling problems in addition to time-specification issues. If monitors are not worn at a respondent's breathing height, for example, measured concentrations of contaminants might be higher or lower than those to which the respondent is actually exposed. A respondent with a small passive monitor might cover it with an overcoat or sweater to conceal it or to keep warm.

Time-activity data from representative diary studies can be used to calibrate or adjust personal-monitor data that are unrepresentative with respect to the samples of people who cooperate or activity patterns that were affected by the very presence of monitors. Data from time-activity diary studies can be used as benchmarks to reveal whether special high-risk groups deviate from the rest of the population in activity patterns (e.g., by spending more time in bars, dry-cleaning establishments, or gasoline stations). Diary data can be used to identify population groups whose activity patterns can lead to large exposures and to identify their special demographic features (e.g., younger people or urban residents). As noted in Chapters 2 and 6, time-activity data can be used in modeling general activity or exposure within various microenvironments. Diary data might be less useful for modeling pollutant effects that are difficult to detect over short periods, such as a day or a week. The data can be used to identify high-risk and low-risk populations, without further specification of such factors as sources of pollution, distance from sources, or ventilation and dispersion factors. To be optimally used for such purposes, more detailed diary data, along the lines described earlier, are needed. A well-designed set of traditional questionnaire items might help to identify high-risk populations and to predict exposure.
They could include estimated frequency of exposure to tobacco smoke and to gas ovens in enclosed areas, and distance from those exposure sources. In general, time-diary studies do not indicate large variations in activity or location patterns by such basic variables as marital status, presence of children, region of the country, or season by themselves. But season, age, and day of the week can make a difference in the likelihood of going to work. Going to work outside the home usually implies a change in activity patterns and location. Sex, level of education, and health status (and, less critically, income) are important factors and should be included and controlled for in future monitor studies. The responses to such questions should be validated and calibrated with time-activity diary information. As mentioned previously, time-activity patterns at a single location (e.g., Jackson, Michigan) appear to resemble national patterns closely. In practical terms, the general consistency of time-activity data across geographic areas means that one can gain generalizable insight from personal-monitor and diary

studies conducted at single times of the year in single communities, provided that samples are drawn randomly, so that all segments of the community are represented equally or in proportion to their risk of exposure.

Questionnaire Approach

Almost all studies depend on some kind of traditional questionnaire to identify the background of respondents through factual information, such as their ages, occupations, general life styles, and access to household technology. The framing and wording of any questionnaire have a great effect on the responses to it and the inferences that are drawn from them. In general, the more standardized the question, the better it is for supporting inferences across studies.

The development of the Environmental Inventory Questionnaire (EIQ) (Lebowitz et al., 1989b) constitutes an excellent beginning in improving the quality of questionnaires for exposure-assessment research. It contains seven items on geographic location, eight on housing, nine dealing with occupation and family members, two on smoking in the home, six on home appliances, six related to radon, and six related to organic pollutants. The EIQ's strengths might lie more in the variables that it defines than in its specific question framing and wording. Perhaps that is best illustrated by the questions related to estimated exposure episodes. In the case of tobacco smoke, for example, questions are asked about the amount of smoking in the home on the "most recent weekday" and "most recent weekend day." If interviews and smoking are distributed evenly across the week, that means that Sundays have six times as many chances to be selected as Saturdays, and Fridays three times as many chances as each of the other weekdays. If the family was away one day, estimated exposure will be underrepresented. In the case of heating sources, the EIQ asks about the use of gas ranges, space heaters, etc., "during cold weather" and "during the winter."
Those obviously are subjective terms with no clear referent period. The questions should specify months or temperatures. The response alternatives for some questions include "3+ days a week", "1-2 days a week", and "only in the morning to take the chill off (less than 1 hour)." Those scale inequities (days vs. hours) make it difficult for respondents to answer precisely. Similarly, questions about use of pesticides and herbicides outdoors refer only to episodes longer than 1 hour over the previous 6 months, whereas shorter duration or frequency of use needs to be considered as well.

Finally, the EIQ asks a number of factual questions about number of rooms and construction materials. Those seem straightforward at first, but might be difficult for less-handy respondents or renters (who probably make up a majority of the population) to answer. Even the number of rooms in a house might not be clear. Year of construction and number of stories could entail difficulty for recent occupants (roughly 20% of Americans move every year). Such terms as "attached carport" and "closed completely" are susceptible to misunderstanding. Questions that ask only about the fuel used most often miss important details. Even though the EIQ is a laudable advance in data-collection methods, its questions need to be adapted to and modified for particular study purposes; this could be done most effectively by persons trained in survey techniques.

Factual Questions

When contaminant-concentration measurements are not possible, factual questions can be asked to indicate the presence of exposure-related events or situations. For example, one might ask respondents whether they have been exposed to specific potential pollution sources, such as smokers, gasoline engines, and paints. Although it does not provide much quantitative information, the factual approach can be a useful screening device for revealing the presence of excess exposure. Answers to factual questions can identify smokers or people whose occupations put them at high risk. The exposure models discussed in Chapter 6 usually require data on factual features of a person's life that can also be considered surrogate measurements, such as occupation, year of construction of residence, types of fuel used for heating and cooking, and presence of attached garages. Those types of questions were developed for the EIQ.

QUESTIONNAIRE FRAMING AND WORDING

Questionnaire results are subject to measurement error in the same way as physical and chemical measurements based on personal monitors. Both questionnaires and monitors can produce readings that differ from the actual value of the characteristic being measured.
But a main advantage of exposure data obtained from monitors is the avoidance of many problems inherent in asking respondents questions. The sources of problems that arise in asking questions are endless, and there are few solid scientific guidelines for framing and wording questions. However, a substantial body of research literature and experience has developed in the field of survey research, and recent studies have addressed the issues with the method known as the split ballot (e.g., Schuman and Presser, 1981; Bishop et al., 1982). In this method, one randomly chosen half of a sample group is asked a question in one form and the other half in another form. Unfortunately for exposure assessment, most of the experimental studies have involved questions of opinion, rather than behavior, and there is minimal translation of different ways of asking opinion questions to ways of asking factual and behavioral questions. How one resolves inconsistencies when two forms of a factual question produce different results remains a problem. Some form of validation will eventually be needed, through direct observation, dual measurements, triangulation, or other techniques.

To illustrate the reporting problems encountered with straightforward factual questions, we can consider respondents' reports of whether their homes had basements. Between two periods in one panel study, roughly a 10% inconsistency rate was reported; that is, there was only 90% agreement from one time to another regarding the presence of a basement in the home (Beveridge, 1983). Because any change implies that people either filled in their existing basements or dug out new basements between the study periods, a 10% difference must be a reporting error about a factual matter that should be readily observable and reportable. This example provides a sobering perspective on the ability of survey respondents to report accurately on their own behavior or circumstances. However, responses to time-estimate questions will more likely reflect potential exposure if an activity is more regular or repetitive (e.g., commuting to work versus taking clothes to the dry cleaners). Because a simple decision to word a question one way or another can result in discrepant estimates, it is necessary to conduct split-ballot experiments to identify the magnitude of discrepancies or measurement error.
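A split-ballot experiment of the kind described above can be sketched with simulated data. Everything here is invented for illustration: the two question forms, the assumed "yes" rates, and the sample size are not drawn from the studies cited.

```python
import math
import random

random.seed(0)

def ask(form):
    """Simulated respondent: form A is assumed to elicit 'yes' 30% of the
    time and form B 40% -- purely illustrative response probabilities."""
    p_yes = 0.30 if form == "A" else 0.40
    return random.random() < p_yes

# Randomly assign each respondent one of the two question forms.
n = 2000
sample = ["A" if random.random() < 0.5 else "B" for _ in range(n)]

yes = {"A": 0, "B": 0}
count = {"A": 0, "B": 0}
for form in sample:
    count[form] += 1
    yes[form] += ask(form)

pA = yes["A"] / count["A"]
pB = yes["B"] / count["B"]

# Two-proportion z test quantifies the wording effect (the "discrepancy").
p_pool = (yes["A"] + yes["B"]) / n
se = math.sqrt(p_pool * (1 - p_pool) * (1 / count["A"] + 1 / count["B"]))
z = (pA - pB) / se
print(f"form A: {pA:.3f}  form B: {pB:.3f}  z = {z:.2f}")
```

With real interview data, the same comparison estimates the magnitude of the form effect; a large |z| indicates that the wording itself, not the underlying behavior, is shifting the estimate.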
To illustrate more options that are available, we discuss five below:

Open-end versus closed-end questions.
Single versus multiple questions.
Long versus short questions.
Explicit versus implicit questions.
Long versus short reporting periods.

In contrast with closed-end questions, open-end questions yield responses that are in respondents' own words, provide insights into respondents' frame of reference regarding the questions, are more detailed, and minimize the frames of reference imposed by the researcher. Closed-end questions have the advantages of ease of administration, ease of coding, unambiguity of response, and lower costs. Cost factors invariably favor closed-end questions, which offer respondents no opportunity to describe situations in their own words. Time-activity diary studies, in which respondents describe activities in their own words, provide a much richer and more widely useful data base than studies in which interviewers enter activity information into a limited number of precoded categories. The open-end approach has been a feature of national time-diary studies and the TEAM study; the closed-end approach has been used by Spengler et al. (1985), Lioy (1988), and Leaderer (1990a).

In general, it is assumed that a series of questions provides more complete data than a single question, because a series reminds respondents of instances that they might not otherwise have considered. In exposure research, rather than asking respondents whether they generally use detergents, one could read them a list of the names of the most popular detergents as part of a question series.

Much the same issues arise with regard to question length. In general, the conventional belief has been that shorter questions are better, because they are easier to follow, are less ambiguous, and involve less time (cost) to ask than longer questions. That view is now being reconsidered in light of research showing some benefits of longer questions. Like series of questions, longer questions tend to remind respondents of instances they might not otherwise have considered. Although longer questions can change the level of response, responses might show the same pattern of correlations with age, occupational status, emission source, etc., as shorter questions.

It is usually preferable to ask respondents explicit questions, rather than expect them to retrieve information from memory in response to implicit questions. For example, it is considered easier for respondents to answer the question "Were you in the company of anyone who smoked yesterday?"
than the question Are you in the company of smokers often, sometimes, or never?~ Implicit terms like soften" mean different things to different respondents and require more inference, whereas 'yesterday should have the same meaning to all respondents. Similarly, in the case of long versus short reporting periods, one should expect more accurate reporting about short and recent periods (e.g., yesterday or last week) than about long or more distant periods. Never- theless, experimental data are needed to substantiate the expectation or to calibrate for corrections. Throughout the process of question development and evaluation, one needs to ensure that questions are tailored as closely as possible to respondents' abilities and willingness to answer. Multiple pilot tests with respondent de- briefings or analyses of taped interviews are to be encouraged. Results of split-ballot experiments provide the most persuasive case for asking a question one way rather than another. Because there is no best way to ask a question, a realistic approach is to consider all questions as having their imperfections and problems that require

OCR for page 143
SURVEY RESEARCH METHODS 163 researchers' attention. Moreover, what might improve a question on one criterion might not on another. No questionnaire can eliminate measurement error entirely. Although it is important to estimate the magnitude of measurement error for a question- naire, such estimation does not appear to be a common feature of survey questionnaires. In some situations, it might be possible to validate respon- dents' answers directly. For example, after a telephone interview, interviewers might visit a selected sample of respondents' homes to verify the presence of a basement or a type of insulation. When direct validation is not possible, repeated measurements can be used to estimate the reliability of responses. But reliability dress not address the problem of systematic errors or bias: respondents might well report the same mistake, no matter how the question is framed. IMPROVING SURVEY QUESTIONS There is a major need to develop reliable and valid questions for exposure- assessment research. Failure to appreciate many elementary principles of question format and design is evident in questionnaires done to date, even the praiseworthy EIQ. Building on the results of methodological studies and more refined time-activity diary studies, one should be able to develop a concise inventory of exposure-related questions for a variety of contaminants to identify populations at risk. That should be done in conjunction with per- sonal-monitoring studies, in which both short-term (diary) and long-term (estimate) questions could be asked of the same respondents. How well can one predict cumulative exposure (as measured by a monitor) from the time- activity diary and the estimate questions? Both diary and estimate questions would need to be tailored to capture aspects of exposure not examined exten- sively to date (e.g., breathing rates, distance to source, and ventilation). 
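The repeated-measurement (test-retest) check on reliability discussed above can be illustrated with the basement example. The basement prevalence and per-wave misreporting rate below are assumed figures for a simulation, not data from Beveridge (1983).

```python
import random

random.seed(1)

ERROR_RATE = 0.05   # assumed chance of misreporting in each wave
n = 1000
# Assumed true status; 60% of homes have basements (illustrative figure).
truth = [random.random() < 0.60 for _ in range(n)]

def report(has_basement):
    """One wave's answer: truth never changes, but with probability
    ERROR_RATE the respondent misreports it."""
    return has_basement if random.random() >= ERROR_RATE else not has_basement

wave1 = [report(t) for t in truth]
wave2 = [report(t) for t in truth]

# Between-wave agreement is a simple reliability estimate; with a 5%
# error rate per wave it should land near 0.95**2 + 0.05**2, about 90%.
agreement = sum(a == b for a, b in zip(wave1, wave2)) / n
print(f"between-wave agreement: {agreement:.1%}")
```

Note that agreement measures reliability only: a systematic bias, with respondents making the same mistake in both waves, would leave agreement high while validity stayed low, which is exactly the limitation noted above.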
The advantages of a panel-study design, involving multiple periods of observation of selected respondents, are especially evident, in that long-term behaviors and effects can be assessed or modeled.

The committee proposes the following guidelines on question wording:

Questions with precise time frames (e.g., "2 days per week" or "10 times per year") are clearer to respondents and give more consistent results than questions with imprecise time frames (e.g., "often" or "most of the time").

The narrower the time frame (e.g., "2 hours per week" rather than "3 days per week"), the clearer the question is to respondents and probably the more consistent the results.

Estimate tasks should be broken into manageable subtasks (e.g., if "hours per week" is desired, ask for daily estimates for each day of the week).

Where possible, respondents' time estimates should be forced to sum to a specified total (e.g., 168 hours a week or 52 weeks a year) to increase accuracy and comparability across respondents.

Multiple-scale responses clarify a respondent's task (e.g., "about 3 hours per day" or "about 21 hours a week").

Memory aids can be useful for specific activities, microenvironments, or contaminant sources (e.g., it is sometimes useful to ask respondents to recall the most recent occurrence of the phenomenon of interest).

Throughout, one needs to recognize that survey respondents' concern over and attention to issues are far lower than those of professional researchers. Therefore, the researcher must help the respondent to understand the reasons for the questions and the type of information being sought.

Although researchers should be wary of the lack of precision in respondents' estimates, such estimates can be very useful for relative measurement purposes. For instance, people who estimate that they use pesticides 10-19 days per year might report more related health effects than those who estimate pesticide use at less than 10 days per year, and both groups might show less effect than those who estimate 20-29 days per year. Similarly, those who have worked in high-exposure occupations for 5-9 years might report more related effects than those who have worked in such occupations for less than 5 years. Of course, there is no guarantee that such monotonic relations will be found, and the extent of such statistical correlations needs to be empirically documented.

INCORPORATING SURVEY-RESEARCH METHODS INTO EXPOSURE ASSESSMENT

Exposure assessment should enlist the expertise of multidisciplinary teams of specialists, including survey statisticians and field specialists.
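The earlier guideline that time estimates be forced to sum to a specified total, such as 168 hours a week, can be sketched as a simple proportional rescaling. The activity categories and hours below are invented, and in a real interview one might instead reconcile the discrepancy with the respondent rather than adjust it afterward.

```python
# Hypothetical weekly time estimates (hours) from one respondent; they
# sum to 182, exceeding the 168 hours actually available in a week.
reported = {"home": 100.0, "work": 45.0, "travel": 12.0, "other": 25.0}

TOTAL_HOURS = 168.0  # hours in a week

def force_total(estimates, total=TOTAL_HOURS):
    """Proportionally rescale estimates so they sum to a fixed total."""
    s = sum(estimates.values())
    return {activity: hours * total / s for activity, hours in estimates.items()}

adjusted = force_total(reported)
print(adjusted)
print(sum(adjusted.values()))  # sums to 168 (up to floating-point rounding)
```

The design choice matters: proportional rescaling spreads the over-report evenly across activities, which improves comparability across respondents but assumes no single activity is responsible for the exaggeration.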
Many assessments are being conducted independently by investigators whose interests are restricted to single subjects, such as modeling, monitoring, or time-activity estimation. As a result, current exposure studies are unnecessarily restricted to laboratory studies, epidemiological analyses, personal-monitor studies, or national surveys. Specialists need to increase their communication with one another if more-integrated exposure assessments are to be designed and conducted.

Exposure research would benefit from the involvement of survey statisticians and field specialists, who could help to reduce survey-related errors and the wide variations in survey responses observed in present studies. There seems to be little recognition of the need for probability samples or high response rates, which are as important in small studies as in large ones. A series of small benchmark surveys of normal activity patterns or microenvironments would help to establish a basis for comparison in future exposure assessments by, for instance, defining healthy buildings for studies of the sick-building syndrome.

Application of survey-research methods can also contribute to improving the effectiveness of exposure-study design. It can help to identify the contaminants and microenvironments likely to result in the most important health or nuisance effects. It can also help exposure assessments to target study efforts on chronic or peak exposures on a basis consistent with the biological response time of the contaminant of concern.

SUMMARY

Exposure assessment presents challenging problems for survey research applications because of its intimate involvement with obtaining information about time. Personal exposure to pollutants is cumulative and complicated with respect to time, and exposure assessors want to document when an important exposure to a contaminant has occurred. They also want to know the frequency of exposure, its sources, its locations, and its contaminant concentrations in air and the factors that affect them, the breathing rates and other physiological states of the exposed person, and the general health status of the exposed person. Survey researchers can ask how often an individual respondent might have been in a situation that was likely to involve significant exposure, but the respondent's memory of such occasions is problematic.
Thus, interest has increased in more precise measurement of activity during very short periods, such as the day or the hour, for which the time-activity diary is promising as a measurement option. Sample selection, measurement approach, and questionnaire framing and wording are the general aspects considered in this chapter for collecting time-activity data and all other data collected by surveys.

Statistically proper sample selection is crucial for valid and accurate extrapolation from sample characteristics to general-population characteristics. The mathematical principles involved are rather simple and straightforward, but often are neglected in conducting surveys. Individuals within a target population or within a specified period must be selected on a random probability basis. Each individual must be identified in the sample frame (target population) and have a known chance of selection. All individuals in the frame should have an equal chance of selection or some greater, but known, chance than others of being selected and oversampled. The sample frame should make it possible to calculate the proportion of sampled individuals who participate in the study (i.e., provide the required data). Statistical formulas should be used to calculate the sampling error of an estimate based on the sample.

Three measurement approaches can be distinguished in population-based exposure assessments: direct, indirect, and questionnaire. In the direct approach, respondents report in time-use diaries and carry personal exposure monitors that record their exposures for short periods. In the indirect approach, exposure is imputed from the activities that respondents report in time-activity diaries. In the more traditional questionnaire approach, exposure is imputed from responses to self-reported factual questions or to general questions regarding activities. Time-activity data from representative diary studies can be used to calibrate or adjust personal-monitor data that are unrepresentative for the samples of people who cooperated in the study or for activity patterns that were affected by the presence of monitors. Diary data can be used to identify population groups whose activity patterns can lead to large exposures and to identify their special demographic features. A well-designed set of traditional questionnaire items might also help to identify high-risk populations and to predict exposure.

Almost all studies depend on some kind of traditional questionnaire to identify the background of respondents. The framing and wording of any questionnaire have a great effect on the responses to it and the inferences that are drawn from them. In general, the more standardized the question, the better it is for supporting inferences across studies.
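The sampling-error calculation called for above can be sketched for the simplest case, a proportion estimated from a simple random sample, with an optional finite-population correction. The counts and population size are illustrative only.

```python
import math

def proportion_se(successes, n, population=None):
    """Estimate a proportion and its standard error from a simple random
    sample of size n; apply the finite-population correction when the
    frame size is known."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    if population is not None:
        se *= math.sqrt((population - n) / (population - 1))  # fpc
    return p, se

# Illustrative numbers: 350 of 1,000 sampled households report a gas
# stove, drawn from a frame of 50,000 households.
p, se = proportion_se(350, 1000, population=50_000)
low, high = p - 1.96 * se, p + 1.96 * se
print(f"estimate {p:.3f}, 95% CI ({low:.3f}, {high:.3f})")
```

This simple formula applies only to equal-probability samples; the designs mentioned above in which some individuals have a greater, but known, chance of selection require weighting each respondent by the inverse of that selection probability before computing the estimate and its error.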
Recognizing the general need to develop reliable and valid questions for exposure-assessment research, the committee proposed guidelines to improve question wording. The modification or adaptation of questions for particular study purposes could be done most effectively by persons trained in survey techniques.

The committee offers the following recommendations to foster the development of survey-research methods as effective tools for exposure assessment:

Individual researchers should examine each survey question for its appropriateness for the purpose at hand. A trained survey methodologist can offer sound advice.

More use can be made of less-expensive surveys conducted in communities or in sequential fashion (particularly given the high cost of personal-monitor studies).

Basic research is needed on the reliability (precision) and validity (accuracy) of respondents' answers, especially for estimating frequency or duration of exposure to various pollutants. Well-designed studies in a single community are an inexpensive way to develop more valid estimates.

When the situations of high-exposure readings on personal monitors need to be identified more clearly, researchers should assess the feasibility and usefulness of having trained personnel observe respondents and collect exposure data as unobtrusively as possible. Such a technique would require respondents' consent and selection of observation periods.
