6
Opportunities for Methodologic Advances in Data Analysis

Assessing the effects of environmental agents poses well-recognized challenges. Relatively weak effects call for large samples and often complex designs; measurements of both exposures and outcomes may contain errors; the effect of the agent of interest may be confounded or modified by numerous other factors that may be unmeasured or even unrecognized; and the level of effect may vary with such time-dependent factors as age and time since the exposure began. These methodologic issues have motivated the development of new study designs and analytic methods.

Valid characterization of the effects of environmental agents and assessment of dose-response relationships often require the application of multivariate statistical models to control for confounding and to evaluate interdependence of effects. Models may also be needed to characterize temporal patterns of risk and to evaluate the consequences of errors in the measurement of the independent and dependent variables. These challenges have been partially met by new statistical methods; advances in approaches to longitudinal data analysis have been particularly rapid. Although this chapter emphasizes advances in analytic methods, new epidemiologic approaches and the emergence and formalization of exposure assessment have contributed substantially. This chapter reviews some of the methods, assumptions, and statistical techniques that can be applied to environmental analyses to improve and strengthen inferences about the relation between exposures and health outcomes and considers some of the many opportunities for further methodologic developments. New study designs for assessing ef-
fects of environmental agents were described in the report of the Health Effects Institute Environmental Epidemiology Planning Project (1993). Exposure assessment for studies of environmental agents was addressed in the first volume of this report. Thomas et al. (1993) have considered advances in contending with errors in exposure measures and their consequences.

Introduction

The most common epidemiologic measure of the effect of an environmental agent, the relative risk, is the ratio of the incidence of disease in those exposed to the agent of interest to the incidence in those not exposed. Categorization of subjects by exposure is straightforward in some types of epidemiologic research; for example, workers may be classified as exposed or nonexposed on the basis of personnel records and measurements of workplace contaminants. This classification of subjects into exposed and nonexposed strata is analogous to experimental studies, such as clinical trials, in which exposure status is controlled by the researcher. However, in epidemiologic studies of environmental agents, there may be no population that is entirely nonexposed, and the exposure may vary greatly from person to person in intensity, timing, and duration.

In estimating the relative risk of disease associated with a particular environmental agent, the researcher may need to contend with multiple continuous and discrete variables, including the exposure of interest. Such demanding data are encountered frequently, for example, in studying effects of environmental agents on respiratory health. Lung function, an outcome variable, is continuous, whereas some predictors of interest may be (or be classified as) either discrete or continuous. Typically, this type of analytic problem is approached by modeling the functional relations among variables.
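As a minimal illustration of the relative-risk calculation described above, the sketch below computes the ratio of incidences and an approximate 95% confidence interval on the log scale (the Katz method); the cohort counts are hypothetical, not drawn from any study cited here.

```python
import math

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Relative risk: incidence among the exposed divided by incidence among the unexposed."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    rr = risk_exposed / risk_unexposed
    # Approximate 95% confidence interval on the log scale (Katz method)
    se_log_rr = math.sqrt(
        1 / exposed_cases - 1 / exposed_total
        + 1 / unexposed_cases - 1 / unexposed_total
    )
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, (lo, hi)

# Hypothetical cohort: 30 cases among 1,000 exposed, 15 among 1,000 unexposed
rr, ci = relative_risk(30, 1000, 15, 1000)
print(rr)  # 2.0
```

A wide interval around the point estimate is itself informative: with relatively weak environmental effects, the confidence bounds often matter more than the point estimate.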
In general, a specific class of models is developed to examine functional dependence of outcome on risk factors, and a certain member or members of the class are identified as having adequately good fit to the data, or the class is rejected. For example, a common class is that of linear models, and the investigator may accept all linear models with coefficients in the calculated confidence ranges. The models include an associated distribution of differences between actual observations and those predicted by the models, and careful modeling always includes study of these differences. This approach allows for simultaneous consideration of the effects of multiple risk factors and the description of dose-response relationships for individual agents while controlling for the effects of other variables. Models in current use can accommodate both continuous and discrete risk factors. Of course, inferences about relationships between predictor variables and outcome depend on the assumptions that a model requires regarding
the relationships among the variables, and it is likely that no model is absolutely correct in every detail. Nevertheless, an informative and biologically appropriate model increases the information that can be drawn from the data; a poor model may obscure true relationships between outcome and predictors; and both the efficiency and the validity of inferences suffer if a model is seriously incorrect from either the biologic or the statistical perspective.

Regression models, including linear regression, can be used to examine a specific proposed functional relation between a risk factor and an outcome. If a biologically inappropriate form of the relation is proposed, model findings may be misleading. In the past, linear models have been widely used to assess effects of environmental agents. Analysis of variance (ANOVA) and linear-regression models generally assume that the outcome varies linearly with functions of the risk factors, that the individual observations are statistically independent, and that random differences from the model all have the same distribution, although models are available that relax each of these assumptions. For example, if the outcome seems to be approximately log-normally distributed, the investigator may assume that the natural logarithm of the outcome varies (approximately) linearly with continuous risk factors and that the errors of that model are (approximately) normally and independently distributed on a logarithmic scale. The outcome measures and risk factors are often assumed to be measured without error, although this assumption also can be relaxed. In any case, adherence to the assumptions underlying the chosen statistical model should be tested, because violations can affect considerations of sample size as studies are designed and confidence bounds and hypothesis-testing as data are analyzed.
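The log-transform strategy just described can be sketched in a few lines: simulate an outcome that is log-normal given a continuous risk factor, fit ordinary least squares on the logarithmic scale, and check that the residuals look approximately normal. All parameter values here are assumed for illustration.

```python
import math, random

random.seed(1)

# Simulated exposure x and a log-normally distributed outcome:
# log(y) = 1.0 + 0.5*x + e, with e ~ Normal(0, 0.3)  (assumed values)
n = 2000
x = [random.uniform(0, 1) for _ in range(n)]
y = [math.exp(1.0 + 0.5 * xi + random.gauss(0, 0.3)) for xi in x]

# Fit the linear model on the log scale by ordinary least squares
log_y = [math.log(v) for v in y]
mx = sum(x) / n
my = sum(log_y) / n
slope = sum((xi - yi_mx) * (yi - my) for xi, yi_mx, yi in
            zip(x, [mx] * n, log_y)) / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

# Residuals on the log scale should look approximately normal;
# a crude check is that their skewness is near zero.
resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, log_y)]
s = math.sqrt(sum(r * r for r in resid) / n)
skew = sum((r / s) ** 3 for r in resid) / n
print(round(slope, 2), round(skew, 2))
```

The recovered slope should be near the assumed 0.5, and the residual skewness near zero; a markedly skewed residual distribution would signal that the log transform (or the linear form) is inappropriate.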
Thus, the linear-regression analyses that have often been used in studies of environmental agents typically carry strong underlying assumptions about the distribution of the data and the nature of the relationships being examined. There is a critical tradeoff here: the stronger the assumptions (if they are nearly correct), the more can be learned from a given set of observations, but the greater the risk of a critical failure in one or more of the assumptions. Fortunately, most currently available statistical programs incorporate approaches for testing compatibility with these assumptions, and some techniques allow analysis of data that violate one or more of them. Some of these methods are described below, with examples of their use drawn largely from studies of the health effects of air pollution.

Analysis of Discrete Outcomes

Counts, or discrete data, are generally assumed to follow some version of the Poisson distribution. The Poisson distribution does approach
the normal distribution as the mean of the counts gets large, but the dependence of the variance on the mean remains. The classical Poisson distribution may also understate the variance of the data. Extra-Poisson variability (i.e., greater variability in the counts than expected from a classical Poisson distribution) may exist in count data, and the possibility of such extra-Poisson variation needs to be examined in the modeling process. Less often, variances may be smaller than predicted from the Poisson distribution. Modeling the covariance structure of the data is discussed in more detail in the next section.

Data based on daily diaries, annual symptom questionnaires, and other outcomes considered in some respiratory studies are unlikely to meet all the assumptions that lie behind the common statistical approaches. Binary outcomes, such as the presence or absence of cough, wheeze, or physician-diagnosed asthma, may be modeled as binomial data with logistic or probit regressions. Some investigators have analyzed such data using normal-theory, least-squares regressions. Such approaches make inefficient use of the data, and for rare events the linear model gives greater weight to the extremes than does a logistic or Poisson model. Confidence bounds and significance tests for the effects of environmental exposures may be biased by incorrect application of these or other models.

Analysis of Correlated Data

Measurements related in time or space, such as repeated measurements of the same population at successive times or measurements of persons from nearby geographic areas, are likely to be correlated, and their error terms may not be independent. For such data, the variance is unlikely to be characterized by a single dispersion parameter.
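A first screen for the extra-Poisson variation described above is the variance-to-mean ratio of the counts: under a classical Poisson model it should be close to 1. The daily counts below are hypothetical.

```python
import statistics

# Hypothetical daily symptom counts from a panel diary study
counts = [3, 7, 2, 9, 4, 12, 1, 8, 5, 15, 2, 6, 10, 3, 11]

mean = statistics.mean(counts)
var = statistics.variance(counts)  # sample variance

# Under a classical Poisson model the variance equals the mean,
# so a ratio well above 1 signals extra-Poisson variability.
dispersion = var / mean
print(round(dispersion, 2))
```

A formal analysis would compare the Pearson chi-square statistic from a fitted model with its degrees of freedom, but this simple ratio often reveals the problem before any model is fitted.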
Examples of such correlations include serial correlation (where measurements of the outcome taken at intervals that are short relative to its variation over time are correlated), intraclass or intraindividual correlation (where multiple measurements in the same person over time are likely to show a similar deviation from the mean), and spatial correlation (where measurements in the same or nearby neighborhoods are likely to be correlated). Correlations among data elements from either the same or different study units are common in many settings, and methods for dealing with the correlations are well developed. Similarly, methods exist to deal with nonuniform variance (heteroscedastic distributions). Although such methods are now being used to assess effects of environmental agents, wider recognition of the problem and of ways to deal with it is needed. The origins of the data often suggest the type of variance-covariance structure likely to be found. For convenience, the rest of this section is organized around
three common types of correlation structures arising from different types of epidemiologic data. These types are neither exclusive nor exhaustive, but they illustrate common analytic problems. The first is the serial correlation found in time-series data; it is relevant for longitudinal studies, which are increasingly used to study the effects of pollutants whose concentrations vary over time. The second is correlation between different outcome measures; it is discussed in the context of structural equation modeling. The third is the correlation found within a subject or site; it is discussed in the context of random-effects models, although other models are sometimes used.

Longitudinal Data Analysis and Serial Correlation

Longitudinal studies of the association between temporal variations in pollution and health outcomes have been useful in studying the health effects of outdoor air pollution, and this design may also be informative in other areas of environmental epidemiology. Longitudinal studies within a defined population have several attractive features. First, because they examine fluctuations within a sample, they are less subject than cross-sectional comparisons to several potential problems with confounding. For example, smoking, medical history, access to medical care, and socioeconomic factors are less likely to be serious confounders in a study in which the comparison is internal, i.e., in which the population serves as its own control. Although patterns of disease diagnosis may vary across regions or over long periods, these factors are unlikely to vary from day to day within an area, and any variation is unlikely to correlate with environmental pollution. Potential confounding in these longitudinal studies is limited to time-varying covariates, such as weather and seasonal factors. The potential strengths of this design, along with advances in statistical methods and software, have led to substantial growth in its application.
For example, studies of daily mortality and air pollution have shown associations at concentrations found in many urban regions (Dockery and Pope, 1994). Whether these associations reflect a cause-effect relation is under active study. Data from longitudinal studies also present analytic challenges, however. For example, if the value of the outcome variable under study is higher than average on a particular day, it is likely to be higher than average on the next day as well, even after conditioning on the covariates. This pattern, which affects almost all time series, is referred to as "serial correlation." In studies of disease occurrence, it arises from the persistence over several days of the conditions that alleviate, exacerbate, or depress illness (e.g., epidemics, weather, and allergy seasons) and also from
the natural persistence of disease. It may also arise from the slowness of change in variables that determine effective exposure or act as confounders. As a result, daily observations of most outcomes are not independent. Analyses of such data need to test for lack of independence and to control for it appropriately when it is present.

Methods for Analyzing Serially Correlated Data

For normally (Gaussian) distributed outcomes, well-established methods of analysis can be used to take account of serial correlation. Work on autoregressive models dates from the 1940s; see, for instance, Cochrane and Orcutt (1949). The structure of the covariance is often parameterized in terms of autoregressive parameters, moving-average parameters, and combinations of the two. An autoregressive structure describes a model in which the correlation between the residuals at time i and time i - k declines monotonically as k increases. In a first-order autoregressive structure, for instance, the correlation between today and yesterday is assumed to be r, between today and 2 days earlier to be r^2, and so on. A moving-average structure has correlation out to a fixed lag and zero correlation at any further lag. Combinations of the two can be chosen to fit the pattern of serial correlation observed in the data. In most cases, health and disease variables are likely to show autoregressive patterns, because an abrupt termination of the correlation is not likely. An alternative model, often called state dependence, refers to a Markov-type structure in which the outcome on day i depends on the outcome on day i - 1 but, given the outcome on day i - 1, not on any earlier outcome. For example, the prevalence of an illness with an average duration of a week (e.g., the common cold) will clearly depend on whether the subject had that illness the day before. Such models are described by Muenz and Rubenstein (1985) and were used in analysis of environmental data by Korn and Whittemore (1979).
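The first-order autoregressive pattern described above can be made concrete with a short simulation: generate a series with an assumed autoregressive parameter and confirm that the sample autocorrelation declines roughly geometrically with lag.

```python
import random

random.seed(0)

# Simulate a first-order autoregressive series: y_t = r*y_(t-1) + e_t
r = 0.7   # assumed autoregressive parameter
n = 5000
y = [0.0]
for _ in range(n - 1):
    y.append(r * y[-1] + random.gauss(0, 1))

def autocorr(series, lag):
    """Sample autocorrelation at the given lag."""
    m = sum(series) / len(series)
    c0 = sum((v - m) ** 2 for v in series)
    ck = sum((series[i] - m) * (series[i + lag] - m)
             for i in range(len(series) - lag))
    return ck / c0

# Under a first-order autoregressive structure the lag-k correlation
# should be roughly r**k: about 0.7, 0.49, 0.34 here.
for k in (1, 2, 3):
    print(k, round(autocorr(y, k), 2))
```

A moving-average structure would instead show correlation that cuts off abruptly beyond a fixed lag, which is why inspecting the sample autocorrelations at several lags helps in choosing between the two parameterizations.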
In contrast, incidence data are generally less subject to day-to-day correlation, though they can still be serially correlated (Schwartz et al., 1991), suggesting the covariance model described above. If there are covariates for which statistical modeling is imperfect (e.g., weather), the residuals of the model may also exhibit serial correlation. The presence of a lagged dependent variable in a model with serial correlation in the errors is unattractive because the correlation between the predictor variable and the error term means that usual least-squares regression estimates are biased and inconsistent. In these circumstances, the lagged dependent variable can be "instrumented." Instrumentation is the process of fitting a predictive model to a variable, using all possible predictors (except the hypothesis variable). Then the lagged
predicted value of the outcome is used as an independent variable instead of the actual lagged outcome variable. In either case, long-term patterns in outcomes may be introduced by such slowly varying factors as season. Methods for filtering out such patterns include the use of seasonal dummy variables, seasonal autocorrelation, trigonometric filtering, and moving-average filters. The degree to which such filtering should be done when examining factors that may explain some of the seasonal variation in an outcome is a matter of epidemiologic judgment as to the biologic appropriateness of the alternative strategies.

The application of these methods to serial correlation in normally distributed continuous outcomes is illustrated in a study of air pollution reported by Pope et al. (1991). These investigators used an autoregressive model to examine day-to-day variations in peak expiratory flow, measured with mini-Wright peak-flow meters, in a panel of mildly asthmatic schoolchildren. The inhalable-particle concentration in outdoor air (PM10) was significantly and inversely associated with peak expiratory flow. A followup study in nonsymptomatic children found similar results (Pope and Dockery, 1992). Poisson models for mortality counts, hospital admissions, or emergency-room visits may also exhibit serial correlation, as may daily diaries of binary outcomes, such as the presence or absence of cough or wheeze; these outcomes can be modeled with logistic or probit regressions. Although ad hoc approaches have been used for serially correlated binary data, well-characterized statistical methods for dealing with serially correlated data in Poisson and logistic regressions have been developed (e.g., Gourieroux, 1984; Liang and Zeger, 1986; Zeger and Liang, 1986). These methods have been adopted to study effects of environmental agents (Schwartz et al., 1989; Braun-Fahrlander et al., 1992).
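The moving-average filtering mentioned above can be sketched as follows: subtract a centered running mean from a daily series so that slowly varying seasonal structure is removed while short-term variation is preserved. The series below is idealized and noise-free (an annual sine wave plus a hypothetical five-day episode), so the effect of the filter is easy to see; the 61-day window is an assumed choice.

```python
import math

# Hypothetical two-year daily series: an annual cycle plus a short episode
n = 730
series = [10 + 3 * math.sin(2 * math.pi * t / 365) for t in range(n)]
for t in range(400, 405):
    series[t] += 5.0  # a five-day "pollution episode" we want to preserve

# Centered moving-average filter: subtract a 61-day running mean to remove
# slowly varying seasonal structure while keeping day-to-day variation.
half = 30
filtered = []
for t in range(n):
    lo, hi = max(0, t - half), min(n, t + half + 1)
    trend = sum(series[lo:hi]) / (hi - lo)
    filtered.append(series[t] - trend)

# The seasonal swing (about +/-3) is largely removed; the episode remains.
print(round(max(filtered[400:405]), 1))
```

As the text notes, how aggressively to filter is an epidemiologic judgment: a filter wide enough to remove season may also remove part of a pollution signal that itself varies seasonally.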
Random-Effects Models

The serial-correlation models discussed above are applicable when the correlation between observations close together in time is not zero but decreases toward zero as the time between the observations increases. This pattern would be expected when, for example, the correlation is induced by external factors, such as weather or epidemics. A different pattern of correlation may arise from characteristics of persons. For example, if a child is taller than average at one age, the child is likely to be taller than average at later ages as well, and the correlation is not likely to go to zero even after a long interval. If a study does not need to include the trend of such a factor explicitly in the model, the factor can be treated as a random subject effect. This type of correlation
may also affect other end points of interest for environmental epidemiology, such as lung function. In analyzing data from studies of long-term trends in lung function, for example, this subject-mediated correlation needs to be considered. The period of measurement distinguishes this correlation structure from the serial correlation described previously. If lung function were measured daily, there would undoubtedly be serial correlation in the data, in that each day's measurement would be correlated with those of the preceding and following days. Annual measurements are separated enough in time for this short-term serial correlation to be diminished and for the correlation to be dominated by within-person tracking rather than serial correlation.

In most studies of environmental agents, individual effects are not of interest. Rather, we are interested in the effect of pollution on the entire population or, perhaps, on the most sensitive segment of the population. The analytic strategy for data with repeated measures should recognize that the measurements on each individual are not independent. Subject-specific intercepts are not of interest in themselves, and a large number of degrees of freedom would be used to estimate them. Estimating individual intercepts also is not consistent with the large-sample assumptions needed for estimation, because the number of parameters increases as fast as the sample size. In contrast, a random-effects model uses only a small, fixed number of parameters (sometimes just one) and thus preserves degrees of freedom for more-precise estimation of errors.

Measurements of individuals over time are not the only kind of data that exhibit such correlation. For example, persons who live in the same town tend to be more alike than persons randomly chosen from the whole population. People with similar socioeconomic and ethnic backgrounds often live in the same neighborhoods.
As a result, outcome measures may be more similar between two subjects randomly selected from the same location than between two subjects randomly selected from the population as a whole. This is one source of the "design effect" in stratified clustered-survey designs. Unless all the causes of that similarity are controlled for in analysis of such data, the observations from within each site will be correlated, not independent, and the analysis should account for the correlation. Two general types of correlation structures have been studied extensively. In one, the correlation between any two observations within a study unit is about the same. For example, there may be no reason to expect the correlation among subjects within a neighborhood to vary with index number or with location within the geographic area. Such models with random effects for sites, subjects, or other groupings have been extensively developed and used in environmental epidemiology and are well described by Laird and Ware (1982). In studies of environmental
agents by geographic region, use of random-effects models is critical if exposure data are ecologic. In ecologic studies, individuals' exposures are not directly measured but are estimates assigned to all members of subgroups on the basis of residence location. In air-pollution studies, exposure is often estimated or measured at only a few geographic locations; in studies of hazardous-waste sites and other pollutant sources, exposures measured at one location are used as proxies for all individual exposures in a surrounding area. If indicator variables for the location of each subject were used, differences in pollution exposure would be absorbed by those indicators, so that no effect of exposure could be detected. In contrast, if the tendency for persons living in the same area to be similar is ignored, the standard errors of the regression coefficients are likely to be too small, which may lead to inflation of the apparent level of statistical significance. The random-effects model represents a parsimonious approach to dealing with these design concerns while maintaining the ability to study exposures that are characterized geographically.

Ware et al. (1986) illustrate the use of this technique in analyses of data on lung function and respiratory illness from children living in six US cities. Their approach incorporated a random city effect. They found a significant association between mean covariate-adjusted rates of acute bronchitis and total suspended particle (TSP) concentrations across the cities. No association was found between TSP and pulmonary function. These findings were confirmed in data from additional groups of children in the same cities; those analyses were performed with Poisson regression (Dockery et al., 1989). Similar correlations can affect data from studies that are not based on a clustered design. For example, Cook and Pocock (1983) reported that significant spatial correlation in a community affected study results.
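The within-site similarity described above is often summarized by the intraclass correlation, which can be estimated from a one-way random-effects decomposition of variance. The sketch below uses simulated balanced data with assumed town-level (sd 2) and within-town (sd 4) components, so the true intraclass correlation is 4/20 = 0.2.

```python
import random

random.seed(2)

# Hypothetical clustered data: 20 towns, 15 subjects per town.
# Each town has a random effect (sd 2) on top of within-town noise (sd 4).
k, m = 20, 15
towns = []
for _ in range(k):
    town_effect = random.gauss(0, 2)
    towns.append([50 + town_effect + random.gauss(0, 4) for _ in range(m)])

grand = sum(sum(t) for t in towns) / (k * m)
town_means = [sum(t) / m for t in towns]

# One-way ANOVA mean squares for a balanced design
msb = m * sum((tm - grand) ** 2 for tm in town_means) / (k - 1)
msw = sum((v - tm) ** 2 for t, tm in zip(towns, town_means) for v in t) / (k * (m - 1))

# Intraclass correlation: the share of variance attributable to towns.
# Ignoring it understates standard errors for town-level exposures.
icc = (msb - msw) / (msb + (m - 1) * msw)
print(round(icc, 2))
```

Even a modest intraclass correlation inflates the effective variance of a town-level exposure coefficient by roughly a factor of 1 + (m - 1) * icc, which is one way to see why naive standard errors are too small in ecologic analyses.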
When some added random variability is associated with unknown or unmodeled factors, a hierarchic formulation may be used to create a more-flexible model. The hierarchy assigns additional levels of random variability to the unknown parameters (such as underlying mortality rates or response probabilities). These unknown parameters are constructed according to a random model that allows potentially rich classes of statistical models to be considered. For example, methods for fitting random effects via hierarchic constructions have been discussed by Laird and Ware (1982), Racine-Poon (1985), Tsutakawa (1988), Vacek et al. (1989), Schall (1991), and Zeger and Karim (1991). When the outcome measures are highly variable and the distribution of outcomes is well characterized, the empirical Bayes approach is useful for hierarchic modeling and for random-effects models (Reinsel, 1985). This approach is based on the concept that information about unspecified (or less than fully specified) random parameters may be imputed from
various portions of the model by using the Bayes rule. Conceptually, the approach imposes a distributional assumption on a set of parameters assumed to possess inherent variability. This allows one to "borrow information" from the set as a whole and then "pull back" the more-extreme and less-precise estimates of the parameters, achieving a more-stable portrait of the pattern of variability as a whole. When there is information about errors in the exposure measures, measurement-error models are useful. The empirical Bayes methodology has multiple uses in biomedical applications (Breslow, 1990) and may be of particular value in environmental epidemiology. For parametric models, the methods have been developed in some detail (Morris, 1983) and are known as "parametric empirical Bayes." Kass and Steffey (1989) refer to such a structure as a conditionally independent hierarchic model. The methods have been described for a variety of specific applications: Poisson models (Albert, 1988; Gaver et al., 1990) in which the estimation of mortality rates is at issue (Hui and Berger, 1983; Tsutakawa et al., 1985; Clayton and Kaldor, 1987; Desouza, 1991), particularly as regards geographic clustering or mapping of disease rates (Manton et al., 1989; Merril and Selvin, 1992); binomial-logistic models (Levin, 1986); and normal models (DuMouchel and Harris, 1983) of diseases that may be related to environmental factors.

Modeling Covariance Structures

The primary effect of an agent is not always to change the expected value of a health outcome. Rather, the response to a pollutant may be heterogeneous, the effect of interest may be a modification of the response to other exposures, or the effect may be indirect, through modification of outcomes other than the one of primary interest. These issues are discussed below in order of increasing complexity.

Heterogeneity of Response

Heterogeneity of response to environmental agents is well established.
For example, chamber studies of ozone exposure of exercising young adults identified a sensitive subgroup that had the largest short-term reductions in lung function in response to ozone (McDonnell et al., 1985). These differential responses were reproducible in subsequent challenges of the individual subjects. The degree of sensitivity to ozone did not seem to be associated with preexisting respiratory conditions, and markers predicting enhanced sensitivity have not been identified to date (McDonnell et al., 1985). These laboratory findings are mirrored by field epidemiology studies
that have shown similar short-term reductions in lung function in response to ozone exposure in real-world situations. These data have come from studies of children in summer camps (Spektor et al., 1988; Lioy et al., 1985) and from a study of schoolchildren (Kinney et al., 1989). A reanalysis of the data from the schools and from the camp study of Spektor et al. (1988) showed highly significant heterogeneity of response to ozone (Brunekreef et al., 1991). Short-term exposure to TSP was also associated with short-term reductions in lung function (Dockery et al., 1982; Dassen et al., 1986), but without evidence of heterogeneity of response to TSP (Brunekreef et al., 1991). The analytic method determined whether the variation in regression coefficients across subjects was greater than random, given the standard errors of the subject-specific coefficients.

Similar results were noted in a study of respiratory symptoms. A panel study of asthmatics (Whittemore and Korn, 1980) found an association of exposure to TSP and ozone with increased respiratory symptoms, but no evidence of greater-than-random variability in the TSP regression coefficients across subjects. In contrast, the ozone coefficients showed clear signs of heterogeneity of response. Heterogeneity not only indicates the presence of sensitive subgroups but also affects estimation of the standard errors of regression coefficients, leading to improper hypothesis tests for pollution variables. Korn and Whittemore (1979) proposed a 2-stage method to address this issue. In the above-cited panel study of symptoms in asthmatics, they assumed that each subject's sequence of daily binary responses (with or without the symptom) followed a logistic model. However, instead of a common regression coefficient for air pollution, each subject was assumed to have a possibly unique regression coefficient. Although conceptually attractive, this approach requires sufficient data for asymptotic normality assumptions to hold.
In fact, for consistency and asymptotic normality of the estimates, both the number of subjects and the number of days need to be large. Improvements to the Korn-Whittemore approach were given by Anderson and Aitkin (1985). Population heterogeneity is an important statistical issue, even when there is no heterogeneity in response to the pollutant of interest. For logistic and Poisson models, a fixed relation between the mean and the variance of the distribution is generally assumed. There may be factors that alter the variation in the outcome and produce overdispersion or underdispersion. Either of these tendencies will result in incorrect standard-error estimates for regression coefficients, altering the probabilities of both type I and type II errors. McCullagh and Nelder (1983) discuss methods for estimating the overdispersion parameter in generalized linear models that include the logistic and Poisson regression settings.
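The heterogeneity check described above, whether subject-specific coefficients vary more than their standard errors would predict, can be sketched with an inverse-variance weighted heterogeneity statistic (Cochran's Q, as used in meta-analysis; this is an illustrative stand-in for the 2-stage method, not the cited authors' exact procedure). The coefficients and standard errors below are hypothetical.

```python
# Hypothetical subject-specific pollution coefficients and standard errors,
# e.g. from fitting a separate regression to each panel member.
coefs = [-0.8, -0.1, -1.5, 0.2, -0.9, -2.0, -0.3, -1.1]
ses   = [ 0.4,  0.5,  0.4, 0.6,  0.5,  0.4,  0.5,  0.4]

# Inverse-variance weighted mean coefficient
weights = [1 / s ** 2 for s in ses]
pooled = sum(w * b for w, b in zip(weights, coefs)) / sum(weights)

# Cochran's Q: under homogeneity it is approximately chi-square with
# len(coefs) - 1 degrees of freedom, so values far above that suggest
# real heterogeneity of response rather than sampling noise.
q = sum(w * (b - pooled) ** 2 for w, b in zip(weights, coefs))
df = len(coefs) - 1
print(round(pooled, 2), round(q, 1), df)
```

Here Q is well above its degrees of freedom, the pattern that, in the studies discussed above, signaled genuinely heterogeneous responses to ozone but not to TSP.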
parabolic, exponential, and power curves). Mathematical transformations are also often used. Another alternative to a linear specification is to allow the data to determine the shape to be fitted. One can assume that the expected value of Y varies continuously, but not necessarily linearly, with each x and then use any of a host of statistical methods, generally called nonparametric smoothing, to fit the expected value of Y at each x, assuming only continuity. For a detailed description of these methods, see Chambers et al. (1983) and Hastie and Tibshirani (1990). Examination of smooth plots can suggest appropriate transforms for a linear regression and can also identify thresholds in the data, as well as the shape of other nonlinearities in the dose-response curve.

There are many different smoothing techniques, but all can be considered generalizations of the following paradigm. If the expected value of Y is a continuous function of the independent variable x, then for points in a small symmetric neighborhood around x_i the expected value of Y should be close to its expected value at x_i. If the neighborhood is small, the average of the expected values of Y at all the points in the neighborhood should be approximately the expected value at the center of the neighborhood, i.e., the expected value of Y at x_i. Hence, the average of the observed Ys in the neighborhood is an estimate of the expected value of Y at x_i that does not assume linearity. Because the error terms are random, averaging over multiple Ys allows the errors to cancel one another. The larger the neighborhood, the more error canceling there is, but also the more opportunity for bias if the relation is highly nonlinear within the neighborhood. This running-means approach is the basis for smoothing.
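The running-means paradigm just described can be sketched directly; the quadratic dose-response curve and noise level below are assumed for illustration.

```python
import random

# Running-mean (moving-average) smoother: estimate E[Y | x] at each point
# by averaging the observed Ys in a symmetric neighborhood.
def running_mean(ys, half_width):
    smoothed = []
    n = len(ys)
    for i in range(n):
        lo = max(0, i - half_width)
        hi = min(n, i + half_width + 1)  # neighborhoods are truncated at the ends
        smoothed.append(sum(ys[lo:hi]) / (hi - lo))
    return smoothed

random.seed(3)

# Noisy observations around a nonlinear (quadratic) dose-response curve
xs = [i / 50 for i in range(101)]            # x from 0 to 2
ys = [x * x + random.gauss(0, 0.3) for x in xs]

smooth = running_mean(ys, 7)
# The smoothed value near x = 1 should sit close to the true value of 1.0
print(round(smooth[50], 2))
```

The half-width plays exactly the role described in the text: widening it cancels more error but risks bias where the curve bends, and the truncated neighborhoods at the ends illustrate why the boundary needs special treatment.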
More-sophisticated approaches use weighted averages, with weights that decline with distance from the central point xi, and deal with the variance-bias tradeoff and with the problem at the ends of the distribution, where the neighborhoods are not symmetric. The general approach is called "kernel smoothing." Cubic smoothing-spline estimation is another smoothing approach. Recent simulation studies have shown that most of the modern smoothing approaches produce about the same curve. The critical parameter is the size of the neighborhood used. Some smoothing approaches, such as the supersmoother, use cross-validation to vary the size of the neighborhood. The generalized additive model of Hastie and Tibshirani (1990) represents an alternative approach to nonparametric regression. This approach is equally valid for logistic and Poisson models and, indeed, for the entire family of generalized linear models. The generalized linear model (Nelder and Wedderburn, 1972) is a modeling framework that unifies a range of approaches, including ordinary linear models, logistic regression, Poisson models, and, with modification, the Cox proportional-hazards model. An attractive feature of these models is that nonparametric chi-square tests can be computed for model improvement (reduction in deviance) relative to the assumption that the outcome depends linearly on each of the independent variables. This allows a direct test of the evidence for nonlinearity. Linear and smoothed functions can be mixed, and for the linear functions, estimates of regression coefficients, standard errors, and chi-square tests of the significance of the association are available. For the variables represented by smooth functions, a test of the overall association can be obtained by comparing the improvement in deviance when the term is added with the incremental degrees of freedom used up by the smoothed function. Approximate standard-error bands are available in generalized additive models; however, their properties are not yet fully understood. Bootstrap estimates of the errors in prediction can also be produced. Generalized additive models can be used for hypothesis-testing and model selection. Alternatively, standard regression techniques can be used for model selection, and then the generalized additive model can be applied to the significant variables. Nonparametric regression is particularly important in the study of multifactorial outcomes, where incorrect specification of the form of the relationship between the outcome and important covariates may lead to an incorrect conclusion about the relationship with a risk factor of interest. Efron and Tibshirani (1991) cite an example from a study of a procedure for treating cardiac abnormalities in infants. Use of the generalized additive model identified nonlinearities in the relationship between survival and the child's age and, to a lesser extent, its weight. In that example, the estimated effect of treatment was not substantially affected.
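The deviance-comparison idea can be illustrated with a minimal Gaussian-family sketch (the simulated curve, sample size, and known error variance are all hypothetical; a real generalized additive model would use a spline smoother rather than the polynomial basis used here as a stand-in). The reduction in deviance from adding flexible terms is referred to a chi-square distribution on the extra degrees of freedom:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)

# Hypothetical curved dose-response; the noise standard deviation is
# taken as known so that the Gaussian deviance is just RSS / sigma^2.
n, sigma = 400, 1.0
x = rng.uniform(0, 2, n)
y = x + 1.5 * x ** 2 + rng.normal(0, sigma, n)

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

# Model 1: outcome depends linearly on x.
X1 = np.column_stack([np.ones(n), x])
# Model 2: add flexible terms (polynomial basis standing in for a smoother).
X2 = np.column_stack([np.ones(n), x, x ** 2, x ** 3])

# Reduction in deviance, referred to chi-square on the extra df.
dev_drop = (rss(X1, y) - rss(X2, y)) / sigma ** 2
extra_df = X2.shape[1] - X1.shape[1]
p_value = chi2.sf(dev_drop, extra_df)
print(f"deviance reduction {dev_drop:.1f} on {extra_df} df, p = {p_value:.2g}")
```

A small p-value here is the "direct test of the evidence for nonlinearity" described in the text: the flexible terms explain significantly more deviance than the linear fit alone.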
Robust Methods

To this point, this chapter has discussed methods for analyzing data that are not Gaussian by emphasizing data that clearly come from other distributions. Count data are often thought of as naturally Poisson-distributed, and the presence or absence of a condition as binomially distributed. Data may also be roughly Gaussian but deviate from that distribution enough to give a relatively small number of observations an inordinate influence on some derived estimate, such as a regression coefficient. Such observations are often referred to as outliers, and researchers often omit them from their analyses. This practice raises the issue of which observations, if any, should be deleted. Deletion is an extreme form of weighting: some observations are given no weight at all. The question can therefore be generalized: What weights
should be given to different observations to obtain robust results, that is, results that are not too sensitive to a few observations? Least-squares regression has attractive properties, such as yielding the minimum-variance unbiased estimate, if certain assumptions are met. If the data are not perfectly Gaussian, other estimators may be less variable. Because, as Mosteller and Tukey have pointed out, most real-world data are not really normally distributed, the issue of weighting arises in most situations. A number of robust techniques, such as M-estimates and L-estimates, have been devised to give estimates that are more stable in the face of nonnormality in the data (Hampel et al., 1986). Some of these are available in commercial statistics packages. Application of these procedures may increase confidence that the results of analysis of epidemiologic data are unbiased (Efron and Tibshirani, 1991).

Modeling Exposure

Statistical models can help to improve estimates of the exposures experienced by individuals. This section describes 2 methods that are helpful in improving exposure estimates.

Kriging

Exposure data in environmental-epidemiologic studies are often sparse, irregularly sampled, and not based on measurements coincident with subject locations. For epidemiologic assessment, one needs to provide an exposure value for each study subject. For agents whose concentration varies across space, the methods generally used emphasize measured values near the subject, implicitly acknowledging the spatial continuity, or spatial coherence, of the exposure data. For example, to provide an exposure estimate for an unmeasured location, one could assign to that location the value from the nearest measured location or the average of the few nearest measured values.
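Distance-based weighting schemes of this kind are easy to sketch. In the hypothetical illustration below (monitor coordinates and readings are invented for the example), an exposure estimate at an unmeasured subject location is formed by inverse-distance weighting of nearby monitor readings:

```python
import numpy as np

# Hypothetical monitor readings at known coordinates.
sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
readings = np.array([10.0, 14.0, 12.0, 30.0])

def idw(point, sites, readings, power=2.0):
    """Inverse-distance-weighted estimate at an unmeasured location."""
    d = np.linalg.norm(sites - point, axis=1)
    if np.any(d == 0):                  # exactly on a monitor
        return readings[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * readings) / np.sum(w)

# Exposure estimate for a subject living at (0.5, 0.5): dominated by the
# three nearby monitors, with little weight on the distant one.
est = idw(np.array([0.5, 0.5]), sites, readings)
print(round(est, 2))
```

Note that the weight structure here is arbitrary; kriging, described next, replaces it with weights derived from the estimated spatial covariance of the data.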
A method of interpolation that explicitly acknowledges and models the spatial similarity among measured samples, and hence may be more accurate, is kriging (Journel and Huijbregts, 1978; Cressie, 1991). Kriging is a weighted moving-average interpolation algorithm. For each point to be estimated, nearby samples are assigned weights reflecting their relative importance, and the weighted values (each observed datum times its weight) are then combined to give the value at the new location. More-traditional approaches assume an arbitrary weight structure; for example, observations within a given distance of the point of interest may be given equal weights. Alternatively, the chosen weight may be an inverse function of the separation distance or an inverse function of the
square of the separation distance. The kriging algorithm provides a set of weights that are optimal for minimizing prediction errors. More formally, kriging is linear estimation that minimizes the mean square prediction error subject to unbiasedness conditions. Before using kriging, one typically conducts a structural analysis of the data to identify and remove trends and outliers and to estimate quantitatively the spatial structure, or spatial covariance, of the observed data (i.e., how the correlation among observations varies as a function of the distance between them). The estimated spatial-covariance function is then used in the kriging equations to define optimal weights for averaging observed values near the location where an exposure estimate is needed. The spatial-covariance function used in kriging, known as the variogram, is defined by its functional form and 3 parameters: the sill, the range, and the nugget. The nugget is the (residual) variance among point pairs separated by zero distance; it is analogous to a sampling or replicate variance. The sill is the asymptotic variance of point pairs at infinite separation distance. The range is the separation distance at which the variogram first reaches the sill. Typical functional forms used to describe the change in variance as a function of separation distance include linear, exponential, Gaussian, and spherical models. Using the spatial-covariance function, a set of equations is solved to provide weights for linear interpolation. These weights can be used to provide interpolation estimates and the variances of those estimates, where the estimation variance is a function only of the number and location of samples and is independent of the observed values. The variances of the estimates can be used to review the sampling design and to plan future sampling to provide the most information. Various modifications of the basic kriging procedure can be used to accommodate more-complex aspects of data.
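The kriging equations can be illustrated numerically. In this sketch (site locations, readings, and variogram parameters are all hypothetical), an exponential variogram is plugged into the ordinary-kriging system, which is solved for weights subject to the unbiasedness constraint that the weights sum to one:

```python
import numpy as np

def variogram(h, nugget=0.1, sill=1.0, rng_par=1.5):
    """Exponential variogram: nugget near the origin, levelling off at the sill."""
    g = nugget + (sill - nugget) * (1.0 - np.exp(-h / rng_par))
    return np.where(h == 0, 0.0, g)

def ordinary_kriging(sites, z, point):
    """Solve the ordinary-kriging equations for optimal weights."""
    n = len(z)
    d = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=2)
    # Augmented system enforcing the unbiasedness constraint sum(w) = 1
    # via a Lagrange multiplier in the last row/column.
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, :n] = A[:n, n] = 1.0
    b = np.append(variogram(np.linalg.norm(sites - point, axis=1)), 1.0)
    sol = np.linalg.solve(A, b)
    w = sol[:n]
    return w @ z, w

# Hypothetical monitor network and prediction location.
sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
z = np.array([10.0, 14.0, 12.0, 30.0])
est, w = ordinary_kriging(sites, z, np.array([0.5, 0.5]))
print(round(est, 2), w.sum())
```

In a real analysis the variogram form and its nugget, sill, and range would first be estimated from the data during the structural analysis described above, not assumed as here.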
For example, rather than using a single variogram, one may use a set of variograms if the data are anisotropic. For data that are not normally distributed, one can use nonlinear kriging methods, such as log-kriging or disjunctive kriging (Journel and Huijbregts, 1978). A special case of nonlinear kriging, used to estimate the probability that the value at a location exceeds a specified threshold, is indicator kriging (Journel, 1983). To date, kriging has been little used in epidemiology. Wartenberg et al. (1991) conducted a simulation study showing that, under a set of simple assumptions, kriging marginally outperformed some other methods of interpolation. They applied a modified form of indicator kriging to case-control data to estimate the probability of disease occurrence at the locations of particularly sparse exposure-data sets. Related methods have been used in a variety of epidemiologic contexts. Glick (1979) has used spatial-correlation methods to analyze and
describe cancer-mortality patterns, and Wartenberg and Greenberg (1990) investigated the use of spatial correlations for the study of disease clusters. Cook and Pocock (1983) modeled the spatial correlation of errors in a multiple-regression analysis of mortality patterns. Diggle et al. (1990) used kernel-smoothing methods to derive expected values for assessment of the spatial distribution of cases of laryngeal cancer near a hazardous-waste incinerator. Additional applications include assessment of the spatial pattern of soil contamination near lead smelters (Simpson, 1985).

Modeling Exposure with Additional Exposure Data

Cost often limits the collection of detailed exposure data. A protocol for collecting additional information on a subset of subjects can allow the development of a better predictor of exposure than could be derived from the least-costly data available for all subjects. An example of such an improvement is the use of diaries to record activity patterns. Ostro et al. (1991) assessed the impact of air pollution on persons with asthma living in Denver. They constructed an estimate of exposure by using outdoor monitoring, diary data on time spent outdoors, and a crude estimate of indoor/outdoor ratios of air pollutants. A stronger association was found with this measure than when data from the outdoor monitor alone were used as the exposure measure. More-complicated models are possible. For example, Hasabelnaby et al. (1989) used indoor fine-particle measurements in a subsample of homes to estimate exposure to passive smoking. These measurements were regressed against questionnaire data on maternal and paternal smoking, the amount of smoking in the home, housing characteristics, and other factors. This yielded a predictive model whose independent variables were available for all subjects. Using the predicted exposure for all individuals improved model fit over that found using only questionnaire data on passive-smoke exposure.
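The subsample-prediction strategy can be sketched as follows. All names, coefficients, and data below are hypothetical (this is not the Hasabelnaby et al. model, only the same idea): exposure is measured in a subsample, regressed on questionnaire variables available for everyone, and the fitted model then predicts exposure for the full cohort:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical cohort: questionnaire data available for everyone.
n = 1000
cigs_per_day = rng.poisson(8, n).astype(float)   # reported smoking in home
home_volume = rng.uniform(150, 500, n)           # housing characteristic

# True personal exposure (unobserved) depends on both, plus noise.
true_exposure = 2.0 * cigs_per_day - 0.01 * home_volume + rng.normal(0, 2, n)

# Direct measurements are affordable only in a subsample of homes.
sub = rng.choice(n, size=150, replace=False)
measured = true_exposure[sub] + rng.normal(0, 1, sub.size)

# Fit the prediction model on the subsample ...
X = np.column_stack([np.ones(n), cigs_per_day, home_volume])
beta, *_ = np.linalg.lstsq(X[sub], measured, rcond=None)

# ... then predict exposure for all subjects from questionnaire data alone.
predicted = X @ beta

corr = np.corrcoef(predicted, true_exposure)[0, 1]
print(f"correlation of predicted with true exposure: {corr:.2f}")
```

The predicted exposure tracks true exposure far more closely than any single questionnaire item would, which is the source of the improved model fit described in the text.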
An important caveat related to the use of these methods is that the exposure metric is altered by the decision to use a modeled personal exposure instead of a measured outdoor exposure. The regression coefficients from these approaches cannot be applied directly to exposure data from, e.g., central monitoring sites to forecast effects. Thus, although these methods generally increase the power to detect an effect in the epidemiologic study, they may complicate risk assessment. Reliance on data from central monitoring sites, in contrast, simplifies the risk-assessment process.

Advances in Study Design

The principal observational designs for assessing the effects of environmental agents include the cross-sectional, cohort, and case-control
study designs. Each of these designs has well-characterized strengths and limitations (Rothman, 1986). In addition, the ecologic design is used, as in recent studies of air pollution and mortality. Recently, variants of these designs have been developed that offer increased efficiency in assessing the effects of environmental factors. Case-cohort sampling methods have been a major advance (Prentice, 1986). Methods have been developed for sampling within cohorts that provide unbiased estimates of effect while potentially enhancing feasibility and lowering costs. In the nested case-control design, appropriate characteristics are used to match controls to incident cases. More-intensive exposure characterization may then be possible for the smaller number of cases and controls than for the full cohort. In the case-cohort design, a sample of the total cohort is selected without regard to the characteristics of the cases. These designs are particularly appropriate if the costs of exposure assessment are substantial or if invasive sampling, e.g., phlebotomy, is needed. For example, the nested case-control design might be used to investigate genetic determinants of susceptibility to an environmental agent or used in an industrial setting in which detailed exposure assessment is expensive. Methods have also been proposed for strengthening the case-control design (Thomas et al., 1993). In a 2-stage design, a basic case-control study is conducted with collection of information on exposure and disease variables only; in the second stage, data on other factors, and possibly additional exposure data, are collected on random samples of the 4 groups: exposed cases, exposed controls, nonexposed cases, and nonexposed controls. This design has the potential advantage of reducing costs. In the case-crossover design, the subjects serve as their own controls; this design has been offered as an approach to examining the effects of acute exposures on disease risk.
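The cost-saving logic of sampling within a cohort can be sketched with a simplified risk-set sampler. The cohort data below are simulated and the matching is on time at risk only (a real nested case-control study would also match on confounders and handle censoring carefully):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical cohort: follow-up time and incident-case indicator.
n = 2000
time = rng.exponential(10.0, n)
is_case = rng.random(n) < 0.05          # roughly 5% become incident cases

def nested_case_control(time, is_case, m, rng):
    """For each incident case, sample m controls from the risk set:
    subjects whose follow-up extends at least to the case's event time.
    Expensive exposure assays can then be limited to the sampled sets."""
    n = time.size
    sets = []
    for i in np.flatnonzero(is_case):
        at_risk = np.flatnonzero((time >= time[i]) & (np.arange(n) != i))
        if at_risk.size == 0:
            continue
        k = min(m, at_risk.size)
        controls = rng.choice(at_risk, size=k, replace=False)
        sets.append((i, controls))
    return sets

sets = nested_case_control(time, is_case, m=2, rng=rng)
assayed = set()
for case, controls in sets:
    assayed.add(int(case))
    assayed.update(int(c) for c in controls)
print(len(sets), "matched sets;", len(assayed), "of", n, "subjects need assays")
```

Only the sampled subjects require the expensive exposure measurement, yet conditional-logistic analysis of such sets yields approximately unbiased relative-risk estimates, which is the efficiency gain described above.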
Exposure-Measurement Error

Epidemiologists have long recognized that errors may be inherent in the measurement of both exposure and outcome variables. Recent advances in the area of exposure-measurement error offer new approaches for evaluating the consequences of these errors and making adjustments that take into account the effect of error on estimates of the effects of environmental agents. Thomas et al. (1993) have provided a comprehensive review of these new techniques. An understanding of the consequences of measurement error is particularly relevant to the quantitative estimation of the risk of disease associated with environmental agents for the purpose of policy development. Quantitative risk assessments may use exposure-response relationships derived from epidemiologic studies;
these data may be subject to measurement error that biases exposure-response relationships. Correction of these relationships for error may be appropriate for regulatory policy.

Conclusions

Analysis of data from epidemiologic studies often uses statistical models that make strong assumptions about the distributions of disease and exposure and about the relationship between them. The real world rarely offers pristine and perfect data or justifies strong assumptions about the form of the relationships among data items of interest. Often, environmental exposures may produce relative risks in the range of 1.1-1.3. However, because of widespread exposure to many environmental pollutants, such small relative risks can imply large attributable risks. Improvements in statistical analyses of both multiple exposures and multiple diseases or outcomes will enhance the role of environmental epidemiology in addressing small relative risks. Studies of environmental agents are likely to focus increasingly on multifactorial outcomes for which the exposure of interest accounts for a relatively small proportion of the variation in outcome. Many of the chronic diseases of interest in environmental-epidemiology studies have both multiple stages and multiple causes. Improvements in statistical methods introduced in recent years will enhance the assessment of the contribution of multiple factors to these multiple outcomes. Although much attention has focused on modeling the expected value of the outcome, attention must also be given to modeling the covariance structure of the outcome. Data from studies of environmental factors may have autocorrelated residuals. Ignoring those correlations can give inefficient estimates of the parameters of interest (such as the regression coefficient of pollution) and biased hypothesis tests.
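One classical remedy for autocorrelated residuals, due to Cochrane and Orcutt (1949), can be sketched on simulated data (the series, coefficients, and AR(1) structure below are hypothetical): estimate the residual autocorrelation from an ordinary-least-squares fit, quasi-difference the data, and re-fit:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical daily pollution series with AR(1) errors in the outcome.
n, beta_true, rho_true = 500, 0.5, 0.7
x = rng.normal(0, 1, n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho_true * e[t - 1] + rng.normal(0, 1)
y = 2.0 + beta_true * x + e

def ols(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, y - X @ b

# Step 1: ordinary least squares, ignoring the autocorrelation.
X = np.column_stack([np.ones(n), x])
b_ols, resid = ols(X, y)

# Step 2: estimate rho from the lag-1 autocorrelation of the residuals.
rho_hat = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)

# Step 3: quasi-difference and re-fit (the Cochrane-Orcutt transformation).
y_star = y[1:] - rho_hat * y[:-1]
X_star = X[1:] - rho_hat * X[:-1]
b_co, resid_star = ols(X_star, y_star)

# The transformed residuals should be close to white noise.
rho_after = np.sum(resid_star[1:] * resid_star[:-1]) / np.sum(resid_star[:-1] ** 2)
print(f"rho before {rho_hat:.2f}, after transformation {rho_after:.2f}")
```

The transformed fit yields approximately valid standard errors for the pollution coefficient, where the naive OLS standard errors would be misleadingly small under positive autocorrelation.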
Environmental-exposure data often are not normally distributed, and care needs to be taken to deal with such non-Gaussian data properly. With the small signal-to-noise ratios commonly examined today, the use of techniques appropriate for non-Gaussian distributions becomes critical. There have been considerable advances in the last 30-40 years in techniques for analyzing time-series and cross-sectional data. Additional work needs to be done to improve the ability of cross-sectional analyses to pinpoint risk factors. As Chapters 3 and 4 indicate, many preliminary environmental-epidemiology studies rely on exposure and health-outcome databases that are inadequate. Often, self-reported information forms the basis for a preliminary study. Although some of the databases need to be improved, researchers also need to develop greater sophistication and familiarity with newer methods. To the extent that gradients of exposure can be estimated from existing data sets, the ability to detect associations of exposure and response will be enhanced.

References

Albert, J.H. 1988. Bayesian estimation of Poisson means under a hierarchical log-linear model. Pp. 519-531 in J.M. Bernardo, M.H. DeGroot, D.V. Lindley, and A.F.M. Smith, eds. Bayesian Statistics 3. Oxford: Clarendon Press.
Anderson, D.A., and M. Aitkin. 1985. Variance component models with binary response: interviewer variability. J. Roy. Stat. Soc. B 47:203-210.
Braun-Fahrlander, C., U. Ackermann-Liebrich, J. Schwartz, and H.P. Gnehm. 1992. Air pollution and respiratory symptoms in preschool children. Am. Rev. Respir. Dis. 145:42-47.
Breslow, N. 1990. Biostatisticians and Bayes (with discussion). Statistical Science 5:269-298.
Brunekreef, B., P.L. Kinney, J.H. Ware, D. Dockery, F.E. Speizer, J.D. Spengler, and B.G. Ferris. 1991. Sensitive subgroups and normal variation in pulmonary function response to air pollution episodes. Environ. Health Perspect. 90:189-193.
Carr, G.J., and C.J. Portier. 1992. Dose-response models in quantal response teratology. Biometrics 84.
Chambers, J.M., W.S. Cleveland, B. Kleiner, and P.A. Tukey. 1983. Graphical Methods for Data Analysis. Wadsworth Press.
Clayton, D., and J. Kaldor. 1987. Empirical Bayes estimates of age-standardized relative risks for use in disease mapping. Biometrics 43:671-681.
Cochrane, D., and G.H. Orcutt. 1949. Application of least squares regression to relationships containing autocorrelated error terms. J. Am. Stat. Assoc. 44:32-61.
Cook, D.G., and S.J. Pocock. 1983. Multiple regression in geographical mortality studies with allowance for spatially correlated errors. Biometrics 39:361-371.
Cressie, N.A.C. 1991. Statistics for Spatial Data. New York: Wiley.
Dassen, W., B. Brunekreef, G. Hoek, P. Hofschreuder, B. Staatsen, H. de Groot, E.
Schouten, and K. Biersteker. 1986. Decline in children's pulmonary function during an air pollution episode. J. Air Pollut. Control Assoc. 36:1223-1227.
Davison, A.C., D.V. Hinkley, and E. Schechtman. 1986. Efficient bootstrap simulation. Biometrika 73:555-566.
Desouza, C.M. 1991. An empirical Bayes formulation of cohort models in cancer epidemiology. Stat. Med. 10:1241-1256.
DiCiccio, T.J., and J.P. Romano. 1988. A review of bootstrap confidence intervals. J. Roy. Stat. Soc. B 50:338-354.
DiCiccio, T.J., and J.P. Romano. 1990. Nonparametric confidence limits by resampling methods and least favorable families. Int. Stat. Rev. 58:59-76.
Diggle, P.J., A.C. Gatrell, and A.A. Lovett. 1990. Modeling the prevalence of cancer of the larynx in part of Lancashire: a new methodology for spatial epidemiology. Pp. 35-47 in R.W. Thomas, ed. Spatial Epidemiology. London Papers in Regional Science 21. London: Pion.
Dockery, D.W., and C.A. Pope. 1994. Acute respiratory effects of particulate air pollution. Ann. Rev. Pub. Health 15:107-132.
Dockery, D.W., J.H. Ware, B.G. Ferris, Jr., F.E. Speizer, N.R. Cook, and S.M. Herman. 1982. Change in pulmonary function in children associated with air pollution episodes. J. Air Pollut. Control Assoc. 32:937-942.
Dockery, D.W., F.E. Speizer, D.O. Stram, J.H. Ware, J.D. Spengler, and B.G. Ferris, Jr. 1989. Effects of inhalable particles on respiratory health of children. Am. Rev. Respir. Dis. 139:587-594.
DuMouchel, W.M., and J.E. Harris. 1983. Bayes methods for combining the results of cancer studies in humans and other species (with discussion). J. Am. Stat. Assoc. 77:293-313; Rejoinder:313-315.
Efron, B. 1982. The Jackknife, the Bootstrap, and Other Resampling Plans. Philadelphia: Society for Industrial and Applied Mathematics. 92 pp.
Efron, B., and G. Gong. 1983. A leisurely look at the bootstrap, the jackknife, and cross-validation. Am. Statistician 37:36-48.
Efron, B., and R. Tibshirani. 1991. Statistical data analysis in the computer age. Science 253:390-395.
Faraway, J.J. 1990. Bootstrap selection of bandwidth and confidence bands for nonparametric regression. J. Stat. Comput. Simul. 37:37-44.
Fisher, N.I., and P. Hall. 1991. Bootstrap algorithms for small samples. J. Stat. Planning Inference 27:157-169.
Gaver, D.P., P.A. Jacobs, and I.G. Muircheartaigh. 1990. Regression analysis of hierarchical Poisson-like event rate data: superpopulation model effects on predictions. Commun. Stat. Theo. Methods 19:3779-3797.
Glick, B. 1979. The spatial autocorrelation of cancer mortality. Soc. Sci. Med. [Med. Geogr.] 13D:123-130.
Gourieroux. 1984. Pseudo maximum likelihood. I. Theory. Econometrica.
Hall, P. 1987. On the bootstrap and continuity correction. J. Roy. Stat. Soc. B 49:82-89.
Hampel, F.R., E.M. Ronchetti, P.J. Rousseeuw, and W.A. Stahel. 1986. Robust Statistics: The Approach Based on Influence Functions. New York: Wiley.
Hasabelnaby, N.A., J.H. Ware, and W.A. Fuller. 1989. Indoor air pollution and pulmonary performance: investigating errors in exposure assessment. Stat. Med. 8:1109-1126; discussion 1137-1138.
Hastie, T.J., and R.J. Tibshirani. 1990. Generalized Additive Models. London: Chapman and Hall. 335 pp.
Huet, S., E. Jolivet, and A. Messian. 1990.
Some simulation results about confidence intervals and bootstrap methods in nonlinear regression. Statistics 21:369-432.
Hui, S.L., and J.O. Berger. 1983. Empirical Bayes estimation of rates in longitudinal studies. J. Am. Stat. Assoc. 78:753-760.
Journel, A. 1983. Nonparametric estimation of spatial distributions. Math. Geol. 15:445-468.
Journel, A., and C.J. Huijbregts. 1978. Mining Geostatistics. London: Academic Press.
Kass, R.E., and D. Steffey. 1989. Approximate Bayesian inference in conditionally independent hierarchical models (parametric empirical Bayes models). J. Am. Stat. Assoc. 84:717-726.
Kinney, P.L., J.H. Ware, J.D. Spengler, D.W. Dockery, F.E. Speizer, and B.G. Ferris. 1989. Short-term pulmonary function change in association with ozone levels. Am. Rev. Respir. Dis. 139:56-61.
Konigsberg, L.W., J. Blangero, C.M. Kammerer, and G.E. Mott. 1991. Mixed model segregation analysis of LDL-C concentration with genotype-covariate interaction. Genet. Epidemiol. 8:69-80.
Korn, E.L., and A.S. Whittemore. 1979. Methods for analyzing panel studies of acute health effects of air pollution. Biometrics 35:795-802.
Laird, N.M., and T.A. Louis. 1989. Empirical Bayes confidence intervals for a series of related experiments. Biometrics 47:481-495.
Laird, N.M., and J.H. Ware. 1982. Random-effects models for longitudinal data. Biometrics 38:963-974.
Levin, B. 1986. Empirical Bayes estimation in heterogeneous matched binary samples with systematic aging effects. Pp. 179-194 in J. van Ryzin, ed. Adaptive Statistical Procedures and Related Topics. Hayward, CA: Institute of Mathematical Statistics.
Liang, K.Y., and S.L. Zeger. 1986. Longitudinal data analysis using generalized linear models. Biometrika 73:13-22.
Lioy, P.J., T.A. Vollmuth, and M. Lippmann. 1985. Persistence of peak flow decrement in children following ozone exposures exceeding the National Ambient Air Quality Standard. J. Air Pollut. Control Assoc. 35:1069-1071.
Manton, K.G., M.A. Woodbury, E. Stallard, W.B. Riggan, J.B. Creason, and A.C. Pellom. 1989. Empirical Bayes procedures for stabilizing maps of U.S. cancer mortality rates. J. Am. Stat. Assoc. 84:637-650.
Mapleson, W.W. 1986. The use of GLIM and the bootstrap in assessing a clinical trial of two drugs. Stat. Med. 5:363-374.
McCullagh, P., and J.A. Nelder. 1983. Generalized Linear Models. London: Chapman and Hall.
McDonnell, W.F., R.S. Chapman, M.W. Leigh, G.L. Strope, and A.M. Collier. 1985. Respiratory responses of vigorously exercising children to 0.12 ppm ozone exposure. Am. Rev. Respir. Dis. 132:875-879.
Merrill, D.W., and S. Selvin. 1992. Analyzing geographic clustered response. Proceedings of the American Statistical Association, Section on Statistics and the Environment.
Morris, C.N. 1983. Parametric empirical Bayes inference: theory and applications. J. Am. Stat. Assoc. 78:47-55; Rejoinder:63-65.
Moulton, L.H., and S.L. Zeger. 1989. Analyzing repeated measures on generalized linear models via the bootstrap. Biometrics 45:381-394.
Moulton, L.H., and S.L. Zeger. 1991. Bootstrapping generalized linear models. Comput. Stat. Data Anal. 11:53-63.
Muenz, L.R., and L.V. Rubenstein. 1985. Markov models for covariate dependence of binary sequences. Biometrics 41:91-101.
Nelder, J.A., and R.W. Wedderburn. 1972. Generalized linear models. J. Roy. Stat. Soc. A 135:370-384.
Ostro, B.D., M.J. Lipsett, M.B.
Wiener, and J.C. Selner. 1991. Asthmatic responses to airborne acid aerosols. Am. J. Pub. Health 81:694-702.
Pope, C.A., and D.W. Dockery. 1992. Acute health effects of PM10 pollution on symptomatic and asymptomatic children. Am. Rev. Respir. Dis. 145:1123-1128.
Pope, C.A., D.W. Dockery, J.D. Spengler, and M.E. Raizenne. 1991. Respiratory health and PM10 pollution: a daily time series analysis. Am. Rev. Respir. Dis. 144:668-674.
Prentice, R.L. 1986. On the design of synthetic case-control studies. Biometrics 42:301-310.
Racine-Poon, A. 1985. A Bayesian approach to nonlinear random effects models. Biometrics 41:1015-1023.
Reinsel, G.C. 1985. Mean squared error properties of empirical Bayes estimators in a multivariate random effects general linear model. J. Am. Stat. Assoc. 80:642-650.
Rothe, G. 1989. Bootstrap for generalized linear models. Statistische Hefte 30:17-26.
Rothman, K.J. 1986. Modern Epidemiology. Boston: Little, Brown.
Schall, R. 1991. Estimation in generalized linear models with random effects. Biometrika 78:719-727.
Schwartz, J., D.W. Dockery, J.H. Ware, et al. 1989. Acute effects of acid aerosols on respiratory symptom reporting in children. Air Pollut. Control Assoc. Preprint 89-92.1.
Schwartz, J., D. Wypij, D. Dockery, J. Ware, S. Zeger, J. Spengler, and B.G. Ferris. 1991. Daily diaries of respiratory symptoms and air pollution: methodological issues and results. Environ. Health Perspect. 98:181-187.
Simpson, J.C. 1985. Estimation of spatial patterns and inventories of environmental contaminants using kriging. Pp. 203-242 in J.J. Breen and P.E. Robinson, eds. Environmental Applications of Chemometrics. ACS Symp. Ser. 292.
Spektor, D.M., M. Lippmann, P.J. Lioy, G.D. Thurston, K. Citak, D.J. James, N. Bock, F.E. Speizer, and C. Hayes. 1988. Effects of ambient ozone on respiratory function in active, normal children. Am. Rev. Respir. Dis. 137:313-320.
Thomas, D., D. Stram, and J. Dwyer. 1993. Exposure measurement error: influence on exposure-disease relationships and methods of correction. Ann. Rev. Publ. Health 14:69-93.
Tsutakawa, R.K. 1988. Mixed model for analyzing geographic variability in mortality rates. J. Am. Stat. Assoc. 83:37-42.
Tsutakawa, R.K., G.L. Shoop, and C.J. Marienfeld. 1985. Empirical Bayes estimation of cancer mortality rates. Stat. Med. 4:201-212.
Vacek, P.M., R.M. Mickey, and D.Y. Bell. 1989. Application of a two-stage random effects model to longitudinal pulmonary function data from sarcoidosis patients. Stat. Med. 8:189-200.
Wahrendorf, J., H. Becher, and C.C. Brown. 1987. Bootstrap comparison of non-nested generalized linear models: applications in survival analysis and epidemiology. Appl. Stat. 36:72-81.
Ware, J.H., B.G. Ferris, D.W. Dockery, J.D. Spengler, D.O. Stram, and F.E. Speizer. 1986. Effects of ambient sulfur oxides and suspended particles on respiratory health of preadolescent children. Am. Rev. Respir. Dis. 133:834-842.
Wartenberg, D., and M. Greenberg. 1990. Detecting disease clusters: the importance of statistical power. Am. J. Epidemiol. 132 (Suppl.):156-166.
Wartenberg, D., C. Uchrin, and P. Coogan. 1991. Estimating exposure using kriging: a simulation study. Environ. Health Perspect. 94:75-82.
Whittemore, A.S., and E.L. Korn. 1980. Asthma and air pollution in the Los Angeles area. Am. J. Pub. Health 70:687-696.
Zeger, S.L., and M.R. Karim. 1991.
Generalized linear models with random effects: a Gibbs sampling approach. J. Am. Stat. Assoc. 86:79-86.
Zeger, S.L., and K.Y. Liang. 1986. Longitudinal data analysis for discrete and continuous outcomes. Biometrics 42:121-130.