Energy Efficiency in Buildings: Behavioral Issues (1985)

Chapter: METHODS FOR ANSWERING BEHAVIORAL QUESTIONS

Suggested Citation: "METHODS FOR ANSWERING BEHAVIORAL QUESTIONS." National Research Council. 1985. Energy Efficiency in Buildings: Behavioral Issues. Washington, DC: The National Academies Press. doi: 10.17226/10463.


CHAPTER 2
METHODS FOR ANSWERING BEHAVIORAL QUESTIONS

This chapter gives an overview of the behavioral questions that are relevant for policy about energy efficiency in buildings. It describes the available methods for answering such questions; presents a general strategy for approaching the questions; and discusses the appropriateness of each method, given present knowledge, for addressing behavioral questions about energy information, incentives, standards, and new technology in the buildings sector.

SIX ANALYTIC METHODS

Traditional Energy Demand Models

Energy demand models are analytic tools in which mathematical equations are used to estimate how demand might respond to various policy choices. Such models have considerable appeal as a method of energy policy analysis. They are broad, multipurpose tools that can address a wide range of policy questions and call attention to unanticipated effects of policies on other parts of the energy or economic system. They can give the sort of quantitative answers decision makers want to their questions, and they can often do this quickly. When correctly formulated, models can provide necessary checks of consistency with physical and economic constraints that might otherwise be overlooked in policy analysis. Table 1 briefly describes the major types of energy demand models.

But the models usually used for energy policy analysis have many limitations. A number of general and serious criticisms have been raised by modelers and others (see Ascher, 1978; Brewer, 1983; Freedman, 1981; Freedman, Rothenberg, and Sutch, 1983; Greenberger, Crenson, and Crissey, 1976). In policy analysis, models are most appropriate for anticipating effects of interventions that are quantitative and that operate by processes that are well understood or that have been successfully modeled in similar situations. Often, however, not enough is known to defensibly quantify the variables, or the path of implementation is less straightforward. In such cases, the use of existing models cannot be easily justified. For example, available energy models lack data on variables related to information that consumers receive or act on.

TABLE 1 Major Types of Energy Demand Models [the table text is not recoverable from the scanned page]

To estimate the effectiveness of an information program, a modeler might adjust the price elasticity or a lag coefficient as a proxy for the program's effect, but to do this is to assume the program's effect rather than to estimate it. Models based on the economic theory of information and consumer search (e.g., Hirshleifer and Riley, 1979; Salop and Stiglitz, 1977; Wilde and Schwartz, 1979) can improve the situation as an empirical basis is developed for choosing among the search strategies consumers may plausibly use.
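A minimal numerical sketch (in Python, with invented elasticity values and prices; nothing here is taken from the report) illustrates why such an adjustment assumes rather than estimates a program effect: the "effect" is exactly whatever the analyst builds into the adjusted elasticity.

```python
# Illustrative only: constant-elasticity demand, Q = Q0 * (P / P0) ** eps.
# All numbers below are invented for the example.

def demand(q0: float, p0: float, p: float, eps: float) -> float:
    """Energy demand after a price change, under a constant price elasticity eps."""
    return q0 * (p / p0) ** eps

q0, p0, p1 = 100.0, 0.05, 0.06   # baseline use, baseline price, new price ($/kWh)
eps_base = -0.3                   # hypothetical estimated price elasticity

# A modeler who wants an "information program" in the model might simply
# deepen the elasticity, say to -0.4.  The program's estimated effect is
# then whatever that adjustment implies -- assumed, not measured.
eps_with_program = -0.4

q_base = demand(q0, p0, p1, eps_base)
q_prog = demand(q0, p0, p1, eps_with_program)
print(f"demand without program: {q_base:.1f}")
print(f"demand with assumed program effect: {q_prog:.1f}")
print(f"'program effect' built in by assumption: {q_base - q_prog:.1f}")
```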

Analysis of Existing Data

The Energy Information Administration and the utility companies have extensive data on residential and commercial energy use. Such data are useful for relatively quick and low-cost analysis of relationships that are represented in the data set, such as responses to fuel price changes or to incentives offered in different conservation programs.

Analyses of existing data are limited, of course, by the data available. For studies of appliance efficiency, data can be found on purchases and list prices, but information on costs of production is held by manufacturers as proprietary. Utility data, which accurately report energy use, have limited value because they usually lack information on consumers' incomes, demographic variables, or behavior. And in disaggregated data sets that include information on energy use, data on demographic variables and local weather conditions are not often included. There have also been problems getting access to existing data at the individual level because of concern about privacy. Better data exist for analyzing energy use in the residential sector than in the commercial or industrial sectors; aggregate data are generally more available than disaggregated data; and energy use data are better than data on equipment stocks, with data on attitudinal factors even less adequate.

The value of existing data also depends on its level of aggregation in relation to the question at hand. Data sets that include disaggregated data on residential consumption can be aggregated to compare utility service areas or states in which different programs, incentives, or regulations are in effect. Such comparisons can be valuable if interpretations are made with sufficient care.

Surveys

Surveys of energy users and other relevant populations--manufacturers, lenders, architects, building owners, and so forth--can give information about their initial reactions to new technologies, planned programs and policies, and about responses to programs during implementation. Surveys are particularly good for assessing qualitative variables such as awareness and trust of information or the attractiveness of particular qualities of a new technology or incentive program. They are also valuable for interpreting observational data. Data on miles driven in the family car or money spent on a new energy-efficient home may reflect a variety of behaviors or decision processes, and surveys can help reduce ambiguity. And surveys can ask such questions of a sample that is representative of a population of interest.

But surveys suffer from some generic limitations. Respondents may give socially acceptable rather than accurate responses. Surveys may fail to predict behavior because respondents' answers are based on faulty memories of what they have done or because they are unable to predict what they will do. Unreliability increases when surveys are used to assess responses to a hypothetical situation (e.g., a planned information program) or to predict behaviors that involve many steps before completion (e.g., expensive investments in energy efficiency).

In the federal government, surveys present a practical problem because of the difficulty and delay involved in getting approval from the Office of Management and Budget (OMB) for survey instruments. The requirement for OMB approval, which rests on the rationale of reducing the burden on respondents, has stimulated researchers to develop various alternatives to the usual survey approaches: respondents have been paid, which satisfies concerns about undue burden; surveys have been funded by the National Science Foundation, whose procedures for protecting human subjects satisfy concerns about burden; and data have been collected by utility companies, state governments, or other groups independently of OMB. The Department of Energy (DOE) has also sometimes sponsored analysis and interpretation of such data without needing clearance. DOE can perform a useful function by sponsoring such analysis when there is a need for understanding of national trends or to explain differing success in programs that are superficially similar.

The OMB clearance rule has delayed some surveys, halted others, and promoted creativity among researchers seeking timely answers for policy questions. The net effect of OMB regulation on respondent burden remains unknown. Research has continued under the rule, but it has sometimes been distorted. For example, having research conducted by different organizations in different parts of the country, which can be done without OMB clearance, is likely to result in the collection of noncomparable data. This problem plagued interpretation of the time-of-use electricity pricing experiments of the 1970s (Hill et al., 1979). To the extent that OMB clearance is perceived as an obstacle to be avoided, it becomes more difficult for policy analysts to achieve the careful design and standardization of survey questions that is needed to draw generalizable conclusions from research. A practical approach to standardization within the existing system is for researchers to use or modify survey items that have been laboriously developed by the Energy Information Administration for its Residential Energy Consumption Survey (RECS) and other surveys. A longer-term approach is to get key questions included in ongoing panel surveys such as the RECS. However, this approach is not appropriate for answering questions about particular local programs.

Ethnographic Methods

Detailed, open-ended interviews such as anthropologists conduct when trying to understand foreign cultures are sometimes useful for gaining an initial understanding of behavior when it is not yet clear which behaviors or beliefs are most important to understand. For example, ethnographic interviews revealed that many people think of energy in budget-based units, such as dollars per month, rather than in energy units (Kempton and Montgomery, 1982). This finding was a revelation to some analysts, who were designing information programs on the assumption that physical units would be meaningful to people. The ethnographic approach is also useful for getting a first approximation to the decision processes of individuals or organizations. As understanding of the issues becomes clearer, research can move from ethnographic approaches to more quantitative methods, such as surveys or small-scale experiments.

Focused group discussion is a technique developed by marketers that combines features of both survey and ethnographic methods. A trained leader directs a discussion among ten or so members of a population whose response to a program element or product design is of interest. The participants' comments are used as a rough gauge of the reactions of the group they are presumed to represent. Like ethnography, focused group discussion does not involve representative sampling, and like ethnography, it can give early qualitative indication of people's reactions. Focused group discussion is not as systematic as survey research, and it is not always less expensive, but it can usually collect data faster.

Small-Scale Controlled Experiments

The experimental approach has been generally neglected in energy policy analysis. The best-known exception has been the time-of-use pricing experiments conducted during the 1970s, some of which involved random assignment of households to experimental electricity rates. Experimentation was the method of choice in those studies because there was no empirical basis for modeling the effect of prices based on time of use and because the experimental rates were so far from most energy users' past experience that self-reported intentions could not be relied upon. The same rationale suggests that experiments could provide the most valid answers to many questions about the design of energy information programs and about the marketing and implementation of conservation programs.

The greatest advantage of experiments over other research techniques is their ability to control for large numbers of extraneous variables whose effects make the interpretation of nonexperimental data difficult. This is the situation with most conservation programs; the Residential Conservation Service (RCS) is a good example. Most evaluations have treated RCS as a single, uniform program and have attempted to make summary judgments about the RCS concept. But the variation among nominally identical programs is more striking than the averages (see

Chapter 3), and policy success depends on understanding and replicating program success where it occurs. Many factors that may be responsible for success could easily be the subject of low-cost experimental field trials. For example, a utility company could randomly assign some of its customers to receive telephone marketing of RCS audits, to be contacted as a follow-up to the audits, or to receive audits from community groups. Such procedures are being used in an evaluation of audit follow-up techniques that DOE is sponsoring in collaboration with the Florida Power and Light Company. Or, the effects of marketing efforts by a private company can be compared with identical efforts sponsored by a government agency (Miller and Ford, 1985). Strong inferences can be drawn from such trials if they compare new and existing approaches to program management.

The experimental approach is inexpensive relative to full implementation of a program or policy. In the context of an already planned pilot program, an experiment requires only normal evaluation efforts and the addition of special care in assigning participants to programs and in making data on program participants comparable with data on a suitable comparison group.

The experimental method has had difficulties as a policy tool. Some researchers, unfamiliar with practical policy concerns, have experimented with unrealistic treatments, such as price rebates greater than 100 percent, and produced impractical recommendations as a result (see Stern and Oskamp, 1985). Experimental studies often meet practical opposition from program managers who are eager to get on with their programs and who feel they know enough to act without awaiting the results of formal research. Experiments also face political opposition on the ground that if the policy is a good one, it should be made available to all, not just a small experimental group (for a discussion of such issues, see Mosteller and Mosteller, 1979). Moreover, if experimental subjects believe an experiment to be temporary rather than a permanent change in policy, it may affect their behavior.

An ethical question is sometimes raised about the propriety of experimenting with human populations because participants in some experiments will benefit relative to participants in others. There are often ways to avoid such problems. For some policies (such as utility rate reforms), it is possible to use crossover designs in which participants take turns living with each experimental rate so that all are subject to the same set of incentives. Or a program can be offered to the control group after a delay to minimize the differential benefit. When it is not possible to equalize incentives, it becomes necessary to judge what the public and prospective participants will consider fair. Intuition is not always a reliable guide, and empirical methods can help. An illustration is the approach used successfully in the Wisconsin time-of-use electricity pricing experiment. The state public utility commission, which sponsored the experiment, wanted the rigor of true experimentation, which in this instance required randomly assigning households to different electric rates. To see if it was possible to do this in a way that was ethically acceptable to the public, the research team convened random samples of people to judge the fairness of alternative rate structures for the experiment. The juries, and

eventually the participants themselves, agreed that it would be fair to set rates so that the average household in each group would experience no change in bills if it did not change its times of using electricity. While this meant that households that normally used most of their electricity in peak times would pay more if they did not change their behavior and that other households would pay less, the approach was considered fair (Black, 1979). This jury approach may be applicable to determining the fairness of potentially controversial experimental approaches before conducting the experiment.

Evaluation Research

Evaluations of past and present energy programs are a great untapped source of knowledge--not only about what works but about the reasons for successes and failures. Outcome evaluations quantitatively estimate overall program effects. For example, they may measure rates of participation in a program, sales of a new technology, improvement in the energy efficiency of building shells, or the net energy savings from a policy or program. Careful outcome studies can quantify a program's success and can be used for cost-benefit analysis. Process evaluations examine the way a policy or program is implemented rather than focusing on its final effect. They usually involve surveys, close observations, and interviews of program staff and clients and can offer insight into why a program succeeded or failed that cannot come from an outcome evaluation. When process and outcome evaluations are used together, they can tell which features of a program were responsible for its outcome. By identifying the important factors and relationships in the implementation process, evaluation studies can suggest promising revisions for programs.

Evaluation research can use any of the methods outlined above. The most reliable information comes from explicitly treating programs and policies as experiments from their beginning. To do this, an evaluation plan would include creation of a suitable comparison group, randomly assigned if possible, and careful measurement of effects in all groups. (Full accounts of issues in evaluation research design can be found in texts such as Cook and Campbell, 1979.) When random assignment is not feasible, some quasi-experimental research designs retain many of the advantages of controlled experiments. Whatever the type of research design, however, more can be learned from the experience of a program if an evaluation plan is developed as a program is developed; an evaluation plan tacked on after a program has been operated inevitably produces weaker research because of the inability to measure preprogram conditions and because important questions must be answered from memory or by reference to incomplete archives rather than by observation. Examples of evaluations begun at an early stage include the DOE-sponsored evaluation of the Alliance to Save Energy's low-income mechanical retrofit program and the evaluation of a shared-savings home retrofit program by the government of Hennepin County, Minnesota.
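A minimal sketch of the comparison-group arithmetic behind such designs, with invented consumption figures (none of the numbers come from the programs cited): when energy use is measured in both a participant group and a comparison group before and after a program, a difference-in-differences calculation nets out the changes that would have occurred anyway.

```python
# Hypothetical pre- and post-program average consumption (kWh/month) for a
# participant group and a comparison group; all figures are invented.
pre = {"participants": 1000.0, "comparison": 990.0}
post = {"participants": 900.0, "comparison": 960.0}

change_participants = post["participants"] - pre["participants"]  # -100
change_comparison = post["comparison"] - pre["comparison"]        # -30

# Difference-in-differences: the change among participants, net of the
# change that occurred anyway (prices, weather, general trends) as seen
# in the comparison group.
program_effect = change_participants - change_comparison          # -70
print(f"estimated program effect: {program_effect:+.0f} kWh/month")
```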

Evaluation studies can often be strengthened by using several research methods in concert. For example, surveys are ideal for getting participants' reactions to a program, even one that includes experimental controls. Surveys directly measure responses that can only be inferred from "hard" data on energy savings or participation rates. In process evaluations, open-ended interviews can identify critical features of a program that both researchers and program operators have failed to anticipate. And small-scale experiments with program elements can be very informative as part of an evaluation study even if the overall evaluation does not use an experimental design.

A problem with most of the evaluation studies of incentive and information programs is that they do not illuminate the reasons for a program's success or failure. There has been an emphasis on assessing outcomes but relatively little attention given to qualitative factors in a program's marketing and implementation that can mean the difference between success and failure. Even in the few instances in which process and outcome evaluations have been done of the same program, there has been little effort to tie the two approaches together.

A STRATEGY FOR ASSESSING BEHAVIORAL ISSUES

The success of efforts to conserve energy depends on the decisions of numerous individuals and organizations to produce, market, and adopt energy-efficient technologies. A policy or program that is designed without taking into account all the relevant actors and choices runs a high risk of failure. The risk can be reduced by a strategy that takes the various actors into account from the start and molds the policy or program to increase its acceptability to them. The strategy requires repeated and structured interaction between the developers of the program or policy and those who are its targets. It is best described by an example.

In designing a home energy rating system, one would begin by interviewing potential users to learn what they would like to learn from a rating (see Ackerman et al., 1983, for an example of the approach). The process could begin with relatively open-ended discussions involving groups of bankers, builders, realtors, homeowners, and so forth, to generate a few ideas for types of ratings that might prove acceptable. Then the potential users could be asked to respond to proposed ratings in a focused group discussion or survey format. The purpose at this stage would be to rule out some rating systems as unacceptable so that more careful attention can be given to the remaining candidates. Ratings that pass the screening could be tried in a more realistic setting on a few houses, and user reactions could be reassessed by open-ended interviews or surveys. Potentially attractive ratings can then be tried in the field with experimental controls, using different versions on different homes or in different communities, with follow-up surveys used to assess the reactions of the relevant populations. When a rating system is formally instituted, the same procedure of surveying can be used as part of the process evaluation.

Note that the procedure involves changing research methods as the policy or program moves toward implementation. At each stage, the list of options is narrowed and their presentation is made more realistic.

Data collection moves from open-ended to more tightly controlled methods--from ethnographic interviews and discussions to surveys and then to experiments. At each stage, however, more than one method of research may be appropriate.

Surveys and ethnographic methods are useful at the start for learning what issues concern people, but because people cannot reliably predict their behavior in situations they have not experienced, self-report methods have only limited value for predicting program effects. Surveys are useful as a measurement technique in pilot studies for assessing reactions to alternative versions of a program. However, the experimental method offers the most definitive knowledge of what specific versions of a policy or program work best.

The above strategy is appropriate not only for informational programs such as home energy rating systems, but also for incentive and regulatory programs and for the development of new technologies. Manufacturing firms are well aware that a product's success depends on the reactions of distributors and customers, which is why they have market research departments. Government, however, has sometimes failed to look carefully enough at what is acceptable before promoting policies and technologies. The failure of federal building energy performance standards is traceable to insufficient communication between the federal government and the building industry, and the resulting view in the industry that the standards did not address its legitimate concerns. Similarly, a screw-in fluorescent bulb developed with DOE funds in the 1970s met initial market resistance because DOE had focused on issues of engineering and life-cycle cost and had not given enough attention to the problems of introducing a 7-dollar product into a 50-cent market.

Designing programs and technologies by involving representatives of the potential users has an added advantage. It gives the users early information about the existence of the innovation, simplifying the marketing task later on. Participation also tends to commit people to the version that they helped choose. It follows that it is important to involve individuals or groups that are influential with other members of the target population for the new program, policy, or technology.

USING BEHAVIORAL METHODS TO ANALYZE POLICY ISSUES

This section discusses the role of the different research methods for addressing behavioral questions that arise in policy analyses of energy information, incentives, standards, and technologies. This fourfold classification of policies and programs is somewhat artificial: many incentive programs have informational aspects, standards can affect the use of information, the adoption of new technologies depends on incentives and information, and so forth. Furthermore, there are often synergisms between policy types that make it advantageous for policy makers to deliberately combine them in a single program. Thus, the important behavioral questions for any one policy or program may be found under more than one of the following headings.

The rest of this chapter presents our judgments about methods to use for answering the behavioral questions we identified in Chapter 1 as important in each type of policy or program. The judgments, which are summarized in Tables 2 through 5 below, are conditioned on the present state of knowledge and the current adequacy of analytic tools.

Information and Information Programs

Table 2 summarizes the appropriateness of the six analytic methods identified above for addressing the five key behavioral issues related to information.

How Can a Program be Designed so that the Information it Offers is Used?

The effects of energy information depend not only on its completeness but on its credibility, specificity, comprehensibility, vividness, and other qualities (Stern and Aronson, 1984). For analyzing the effects of such factors, existing data sets are irrelevant and existing quantitative models are almost useless. Currently available models tend to assume information to be complete or at least constant or to subsume its effects under other explanatory concepts, such as elasticity, discount rate, or time lag. To gain understanding for the purpose of designing information, it makes more sense to address the behavioral questions directly, using nonmodeling approaches.

Surveys and ethnographic methods are more promising. Ethnographic interviews can uncover fruitful hypotheses about the way people understand energy use (e.g., Kempton and Montgomery, 1982), and surveys can refine those hypotheses and determine the generality of the responses revealed by ethnographic studies. For example, survey research can identify householders' misconceptions about energy used in their homes and can also estimate the prevalence and magnitude of the misconceptions (Kempton et al., 1984).

Experiments can offer even more definitive knowledge about the role of qualitative factors in energy information. For example, experiments on the importance of sources of information, in which people receive the same information from different sources (e.g., Craig and McCann, 1978; Miller and Ford, 1985), quantify the effect of the source of information on a particular set of behaviors. Such knowledge provides important guidance for program design that cannot come from models and would not be as convincing if it came from surveys of what people believe they would do.

Evaluations of information programs can offer uniquely valuable knowledge from field settings if interviews or surveys are used to determine how information about a program reached people and how they responded to that information. Even more convincing information can come from program evaluations in which experimental controls are used to study some aspect of the information offered.
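As an illustration of how a source experiment quantifies such an effect, the following sketch compares response rates when an identical offer is attributed to two different senders. The counts are invented and the analysis is a standard two-proportion z-test, not a procedure taken from the studies cited.

```python
import math

# Invented counts: households randomly assigned to receive the identical
# audit offer under one of two letterheads.
n_utility, requests_utility = 500, 40        # offer signed by the utility
n_community, requests_community = 500, 65    # offer signed by a community group

p1 = requests_utility / n_utility
p2 = requests_community / n_community

# Standard two-proportion z-test under the pooled null hypothesis
# that the source of the information makes no difference.
p_pool = (requests_utility + requests_community) / (n_utility + n_community)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_utility + 1 / n_community))
z = (p2 - p1) / se

print(f"request rates: {p1:.1%} vs {p2:.1%}, z = {z:.2f}")
# |z| > 1.96 would indicate a source effect significant at the 5% level.
```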

TABLE 2 Appropriateness of Six Analytic Methods for Addressing Behavioral Issues Related to Information [the table text is not recoverable from the scanned page]

How Can a Program be Designed to Spread Information Widely?

There is a body of literature on the diffusion of innovation that is relevant to the spread of energy information (for applications to energy conservation, see Darley and Beniger, 1981; Stern and Aronson, 1984:Chapter 4). To learn more about the spread of information in a particular context, two strategies are appropriate. One is to ask people, using surveys or ethnographic methods, how and from what sources they get their information. The other is to try out different methods of spreading information in a field setting and measure the results. The second strategy gives more reliable results but can involve much more effort. It is easier to collect data in the context of a program evaluation. If an ongoing program uses different ways of spreading information, an evaluation study can readily assess the success of the different methods. An example is an evaluation of the Minnesota Residential Conservation Service program, in which the choice of having energy audits performed either by utility personnel, private contractors, or community groups produced very different rates of requests for audits (Polich, 1984).

How Can the Effects of a Program be Forecast?

Forecasting the effects of information cannot at present be done on the basis of any well-developed theory; the only reasonable approach is to rely on data from past programs and to make judgments about differences and similarities between those programs and the one whose effects are to be predicted. Most government energy information programs have had small effects or none, and the same can be expected from new programs unless they adopt some of the more effective techniques that have been demonstrated in various studies (see Stern and Aronson, 1984:Chapter 4).

How Can the Effects of a Program be Assessed Accurately?

The most effective outcome evaluation is one based on comparison of participants in a program with two kinds of comparison groups: nonparticipants in the program and similar consumers who are not served by the program. Comparison with eligible nonparticipants gives an index of direct effects of the program, although the possibility of self-selection complicates interpretation of the results in most research designs; comparison with consumers not served allows a researcher to identify contagion effects in which a program affects nonparticipants through their indirect knowledge of it. Although each of these comparisons offers valuable information, such quasi-experimental studies are not definitive. (More detailed discussion of evaluation design is presented in Chapter 4; for a more technical and complete discussion of quasi-experimental research methods, see Cook and Campbell, 1979.) It is useful to build some experimental control into a program, for example, by offering information to different clients in different

forms, but evaluation researchers usually arrive on the scene too late to use this approach.

To What Can Program Effects be Attributed?

To answer this question adequately requires a process evaluation in combination with an outcome evaluation. Process evaluations can help explain the results of outcome evaluations, especially when both techniques are applied to the same programs (e.g., Bonneville Power Administration's residential incentive programs, which have been administered in somewhat different ways by the participating utilities). After-the-fact questions to participants can give valuable insight into the reasons for a program's success or failure, but because participation can change the ways people make sense of their experience, self-reports must be interpreted cautiously. The way to be sure of conclusions from a process evaluation is to alter the program based on those conclusions and observe the effects.

Incentive Programs

Table 3 summarizes the appropriateness of the six analytic methods for addressing five key behavioral issues related to incentives for conservation.

How Does Investment Change as a Function of the Size of an Incentive?

Existing models can be useful for estimating the effect of any given size of incentive, but the usual assumption that a smooth curve relates the two variables is open to question. There is evidence to suggest that response may be a nonlinear function of the size of an incentive (Hill and Stern, 1985; Stern, Berry, and Hirst, 1985) and also that size itself may be a less important factor than awareness of the existence of an incentive (Heberlein and Warriner, 1983; see also Chapter 3). Evaluation of these possibilities using existing data is needed to make models more reliable. Surveys offer only weak data on the effect of incentive sizes because people can only compare incentives in hypothetical situations. Experimental methods are a better alternative.

How Does Investment Depend on the Type of Incentive Offered?

The available energy models tend to equate different types of incentive (e.g., loan, rebate, tax credit) on net present value criteria, implicitly assuming that only the size of an incentive matters. But consumers may respond differently as a function of other financial features of incentives: a grant reduces first costs while a long-term loan can prevent negative cash flow. Also, different kinds

TABLE 3 Appropriateness of Six Analytic Methods for Addressing Behavioral Issues Related to Incentives for Conservation [the table text is not recoverable from the scanned page]

of consumers probably have different preferences between types of incentive (see Chapter 3). It is possible to address questions about incentive type by asking consumers directly about their preferences but, the question being hypothetical, responses are only suggestive. A more effective way to address the question is through the comparative analysis of data on consumer responses to programs offering different types of incentives (Hirst, 1984; also see Chapter 3). The most reliable knowledge would come from experiments that offer consumers a choice of incentives of different types but of equal value. This could be readily done in the context of ongoing incentive programs, with the results coming in the form of an ordinary evaluation study.

What Programmatic Factors Affect Consumers' Use of Incentives?

Nonfinancial features of incentive programs, such as the availability of technical assistance, consumer protection features, the credibility of a program's sponsor, or the quality of interaction between clients and program personnel, may be critically important to a program's success (Miller and Ford, 1985; Stern, Berry, and Hirst, 1985; see also Chapter 3). Surveys and open-ended ethnographic approaches are useful for understanding the role of these factors. After an incentive has been offered, surveys of users and nonusers can help illuminate the reasons for their responses. Valuable insights about nonfinancial features of programs have also come from evaluation studies that analyze programs offering a single incentive but administering it in different ways (e.g., Lerman, Bronfman, and Tonn, 1983; Lerman and Bronfman, 1984; Polich, 1984).

The experimental approach can often yield quite precise assessments of nonfinancial factors by manipulating them in the course of conducting a program. For example, a program can give special training to some energy auditors and not others, follow up energy audits with personal contacts for some customers and not others, offer additional promotional services on a random basis, or experiment with other marketing or implementation innovations. This is probably the most practical use of the experimental method in developing incentive programs.
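One way such manipulations could be structured is sketched below as a balanced 2x2 factorial assignment riding on a running program. The factor levels and household identifiers are hypothetical, not drawn from any of the programs cited.

```python
import itertools
import random

random.seed(42)  # fixed seed so the illustrative assignment is reproducible

# Two hypothetical nonfinancial program factors, crossed into a 2x2 design.
training = ["standard auditor", "specially trained auditor"]
follow_up = ["no follow-up", "personal follow-up call"]
cells = list(itertools.product(training, follow_up))

# Balanced random assignment of incoming participants to the four cells.
households = [f"household-{i:03d}" for i in range(1, 13)]
random.shuffle(households)
assignment = {h: cells[i % len(cells)] for i, h in enumerate(households)}

for household, cell in sorted(assignment.items()):
    print(household, "->", cell)

# Comparing audit-to-retrofit conversion rates (or measured savings) across
# the four cells isolates each factor's effect and any interaction.
```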

How Much Investment Would Have Occurred Without the Stimulus of an Incentive?

Program evaluators sometimes use surveys to ask people who have taken advantage of an incentive if they would have made the same investment in the absence of the incentive. Answers to such questions must be interpreted with extreme caution. A more reliable approach is to compare people to whom an incentive was made available with people who did not have the incentive available but who were otherwise similar. This can be done by adding a comparison group to a program evaluation design. Because of self-selection of program participants, a comparison of eligible nonparticipants is less than satisfactory. A comparison group of people who took advantage of the incentive later (e.g., Newcomb, 1984) is an improvement, but there remain problems of comparability (see Chapter 2 and Cook and Campbell, 1979, for fuller discussion of methodological issues). Realistically, "triangulation" on the answer through different methods is probably the best approach.

To What Extent Does an Incentive Increase the Pace of Investment?

Quantitative models are sometimes used for addressing this question, but to be reliable for the purpose they need a stronger empirical base, which requires using other research methods. The best approach is probably through evaluation research, using appropriate comparison groups. Following carefully chosen comparison groups on a yearly basis will indicate when program participants might have made the changes they made if the program had not been available.

Standards

Table 4 summarizes the appropriateness of the six analytic methods for addressing the four behavioral issues related to standards for the energy efficiency of buildings or appliances.

Under What Conditions Does Energy Efficiency Influence Consumers' Purchases?

The direct way to address this question is to ask consumers, using surveys or interviews. Although the results would not be definitive, they would give useful information. Surveys of salespersons, dealers, and manufacturers may also give useful information. The question can be approached differently by calculating implicit discount rates from data on purchases of appliances or other technologies for which standards might be set. High implicit discount rates indicate that energy efficiency is not a major influence on purchases; they do not, however, provide information on the conditions under which efficiency may become more influential.

How Might Alternatives to Standards, Such as Appliance Labels or Energy Ratings for Buildings, Make Energy Efficiency a Prominent Consideration in Purchase Decisions?

The assessment of informational alternatives to standards should use the same methods used for assessing other kinds of energy information (see above). A laboratory approach can also help assess the effects of information on appliance purchases. Consumers could be confronted with a hypothetical purchase decision and be asked to request information one piece at a time until they have enough to make a decision. The question would be whether a label or rating would move energy efficiency information to a higher position in the decision process.
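The implicit-discount-rate calculation mentioned under the first question above can be made concrete with a small sketch. The prices, savings, and equipment life below are invented: the implicit rate is simply the discount rate at which the present value of the efficient model's savings just equals its extra first cost, found here by bisection.

```python
# Invented example: an efficient appliance costs $100 more up front and
# saves $20 per year in energy over a 10-year life.  The implicit discount
# rate is the rate r at which the present value of the savings equals the
# extra first cost; buyers who reject the efficient model behave as if
# their discount rate is at least that high.
extra_first_cost = 100.0
annual_saving = 20.0
life_years = 10

def pv_savings(r: float) -> float:
    """Present value of the annual savings at discount rate r."""
    return sum(annual_saving / (1 + r) ** t for t in range(1, life_years + 1))

# Solve pv_savings(r) == extra_first_cost by bisection; pv_savings falls
# monotonically as r rises, so the bracket shrinks onto the break-even rate.
lo, hi = 1e-6, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if pv_savings(mid) > extra_first_cost:
        lo = mid  # savings still worth more than the extra cost: rate is higher
    else:
        hi = mid
print(f"implicit discount rate: {lo:.1%}")  # about 15% for these numbers
```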

TABLE 4 Appropriateness of Six Analytic Methods for Addressing Behavioral Issues Related to Standards [the table text is not recoverable from the scanned page]

Being hypothetical, this approach has limits: it is better for ruling out alternatives than for deciding on a final label or rating. The effects of ratings and labels are most accurately assessed through field trials that use experimental methods in realistic situations and through evaluations of ongoing programs.

How is the Importance of Energy Efficiency in Purchase Decisions Affected by the Circumstances and Purposes Surrounding the Purchase?

The direct approach to this question is, again, a survey. Useful knowledge can be gained simply by asking homeowners, builders, building owners, or other purchasers what factors they consider in purchasing particular appliances or other technologies. The implicit discount rate approach can also be used to address the question. If the implicit discount rate for air conditioners is about 20 percent and that for water heaters is about 150 percent (Ruderman, Levine, and McMahon, 1984), the difference may be due to circumstances of the purchase: one appliance may be purchased mainly by homeowners for their own use and the other mainly by contractors for resale. Combining data from surveys with analysis of existing data provides a check on the results of each method.

In the Absence of Standards, How do Manufacturers, Builders, and Others Make Choices?

For aggregate forecasts, quantitative modeling is the method of choice. However, existing models need a stronger empirical basis for their assumptions about behavior, particularly the behavior of purchasers: it is clear for appliance purchases that a simple assumption of cost-minimization does not do justice to the complexity of the phenomena (Stern, 1984:Chapter 5). The needed empirical knowledge can come from research on the three previous questions.

Technological Research and Development

Table 5 summarizes the appropriateness of the six analytic methods for addressing the two behavioral issues related to research and development of energy-efficient technologies.

Which Energy-Efficient Building Technologies Are Most Likely to be Readily Accepted in the Market?

Available models are appropriate for estimating the economic costs of producing technologies and the energy saved by adopting them. But acceptance is also influenced by many other factors those models do not address: the prices manufacturers charge for a piece of equipment with a given production cost; the rates of adoption of the new technology as a function of its consumer features; the marketing efforts of manufacturers and dealers; and so forth.

TABLE 5 Appropriateness of Six Analytic Methods for Addressing Behavioral Issues Related to Research and Development of Energy-Efficient Technology

                                            Issue
  Method                        What Factors              Estimating Behavioral
                                Enhance Acceptance?       Component of Energy Savings

  Demand Models                 Somewhat valuable         Potentially valuable
  Analysis of Existing Data     Not useful                Not useful
  Surveys                       Especially valuable       Somewhat valuable
  Ethnographic Methods          Especially valuable       Valuable
  Small-Scale Experimentation   Especially valuable       Especially valuable
  Evaluation Research
    Outcome Evaluation          Not appropriate           Not appropriate
    Process Evaluation          Valuable in technology    Not appropriate
                                transfer programs

Surveys and ethnographic methods are valuable components of a behavioral strategy for developing energy-efficient technology (see above). They are especially useful for identifying design features that would be attractive to potential manufacturers or purchasers. Reactions of those groups to designs or prototypes can help guide choices of design modifications, which can be market tested while still in the prototype phase. As a new technology moves toward implementation, surveys and small-scale experiments become more useful for refining the design, just as they do for policies and programs. Design options can be subjected to experimental trial by users to assess public acceptance in the same way they are subjected to engineering tests of their costs and efficiency of operation. When new technologies are being introduced in conjunction with specific technology transfer efforts, evaluation research is appropriate for assessing those efforts.

How Can Reliable Estimates of Energy Savings be Developed for New Technologies?

New energy-efficient technologies do more than save energy--they also free income that can be spent on other things, some of which also use energy. This issue is amenable to modeling (e.g., Dubin, 1985; Dubin and McFadden, 1984), though data needs are sometimes serious limitations (see Stern, 1984:Chapter 5, for a discussion in the context of energy-efficient appliances). Doubts about the basis for the behavioral assumptions of models leave room for nonmodeling approaches to the problem (discussed in Chapter 6 in the context of home retrofits).

To assess the effect of a new technology on behavior, it is useful to give some consumers a chance to use the technology. Since only a few consumers can be involved in trying prototypes, ethnographic approaches, which gain the deepest insight from the fewest consumers, may be the method of choice for understanding reactions to prototypes. An experimental approach, comparing relevant behaviors before and after adoption of a new technology with behavior of comparable energy users without the technology, becomes useful as more prototypes become available for trial. Data collected in a few small experiments may be enough to validate or refine the assumptions of models, which may then become fully appropriate for forecasting the effects of new technology on behavior.

The framework outlined above can guide research on a wide range of behavioral issues that arise in implementing energy efficiency in buildings. The following chapters look more closely at a few areas of conservation policy, identifying the relevant behavioral issues, reviewing available evidence, and outlining how the issues can be addressed more completely in the future.
