Conclusions and Recommendations

The previous chapters examine several factors that may be important influences on energy demand but that have not been addressed extensively in existing formal models. Some of these, such as consumer mistrust of information, informal social influence, and the marketing of financial incentive programs, are rarely highlighted by modeling efforts and may be difficult to assimilate into existing models. Such factors, it might be said, are blind spots of existing models. Other factors, such as qualitative distinctions among types of financial incentives, are not emphasized by the theoretical frameworks on which most models are based, but are likely to emerge in modeling efforts and could be assimilated into models without very much difficulty. Still other factors, such as the distinction between appliance list prices and transaction prices and the effects of price changes as distinct from price levels, are significant in economic theory and could be incorporated readily into existing models, even though they have not been in the past.

Our analysis shows, in sum, that existing demand models describe the behavioral environment of energy demand only incompletely. Some of the gaps can be filled if models are built from more complete and detailed data, but modeling efforts are likely to overlook systematically some important features of the environment of energy use. Because some important gaps in demand analysis seem unlikely to be filled through modeling efforts, we judge it unrealistic to try to build a single comprehensive analytic framework to answer policy makers' and researchers' many questions about energy demand. As an alternative to seeking such a framework, learning should proceed by developing a portfolio of analytic approaches--some general, others focused on particular policy questions.
With diverse methods of analysis, hypotheses will be generated and tested that any single method might overlook. In addition, one method can sometimes provide a valuable check on the results of another.

This chapter presents the panel's conclusions and recommendations regarding the use of formal models and problem-oriented research in energy demand analysis and regarding the needs for data to inform that analysis. We also discuss an approach to using different analytical approaches in concert in order to improve the quality of energy demand analysis.

THE ROLE OF FORMAL MODELS

Policy analysts sometimes express two erroneous opinions concerning formal energy models: one is that a policy question can be answered simply because it is represented in a model; the other is the obverse--that a question cannot be answered because no model exists to answer it. Both opinions equate energy policy analysis with formal modeling. They reflect an overreliance on formal models that is not justified by the validity of existing models and that is not necessary given the availability of other analytic techniques.

Formal models do have considerable appeal as a means of energy policy analysis. They are broad, multipurpose tools that can address a wide range of policy questions and call attention to unanticipated effects of policies on other parts of the energy or economic system. They can give the sort of quantitative responses decision makers want to their questions, and they can often do this quickly. And when correctly formulated, models can provide necessary checks of consistency with physical and economic constraints that might otherwise be overlooked in a policy analysis. Compared with methods that involve gathering new data, models can save both money and time. They can also evolve, along with the questions that face policy makers.
From a policy maker's viewpoint, models are familiar tools once they have been used, so it is easy to continue to rely on them, sometimes even when they are outdated. In addition, the apparent objectivity of computerized analysis is impressive to some decision makers.

But models have many limitations. As the previous chapters have demonstrated, there is no behavioral knowledge to support assumptions about the values of parameters and about the functional forms of the equations used to represent behavioral relationships. Many models do not
treat their internal uncertainty with sufficient explicitness. And important variables are often excluded from models, sometimes for lack of data and sometimes because modelers' conceptual frameworks do not include them. Another limitation of models that are used for policy analysis and forecasting is that they are usually validated only by matching them to past events, often with numerous post hoc adjustments. This procedure does not justify confidence in a model's ability to predict the future: newer versions of a model may be as likely to need readjustment as the older ones.

The documentation, validation, and maintenance of models have generally been given insufficient attention. A model must first be documented: complete records must be made of the model's contents, its assumptions, and the sources of the parameter estimates and the chosen functional forms of its equations. Only with full documentation can a model's behavioral assumptions be identified for testing. Validation is also essential, and not only when a model is built. In our view, it is important to validate models by testing them regularly against empirical data. This means, for instance, making before-the-fact predictions with a model and comparing them with actual outcomes or comparing a model's results with the results of problem-oriented studies. Such empirical testing may be the only way to build credibility for models, in light of the fact that many models undergo almost continuous revision in their structure, input, and output. This ongoing validation, combined with a commitment to updating the documentation of a model as it evolves, constitutes maintenance--a much neglected part of modeling. When an organization buys expensive capital equipment it usually commits itself to a budget for maintenance. But energy models, which are not as reliable as most equipment, are often expected to work well without maintenance.
The size and complexity of some energy models makes documentation, validation, and maintenance particularly difficult. Although larger models are becoming better able to include feedbacks among parts of the energy system, the burden of documentation and validation increases, sometimes geometrically, with the number of relationships represented. Models that include large numbers of parameters compared to the volume of data available are particularly suspect. Also suspect are large models, including many of the system dynamics variety, whose results are sensitive to the effects of variables whose values and relationships are merely postulated. Given the state of the art, we conclude that more knowledge can be gained by improving the quality of models than by increasing their size. With constant resources, modeling can be improved, on the whole, by sacrificing some comprehensiveness in order to gain quality.

1 This shortcoming of models is due not to the nature of modeling but to the frequent practice of inferring regularities in human behavior from the evidence of past correlations. Other research methods, particularly survey methods and exploratory data analysis, have the same shortcoming when their findings are used uncritically to make projections.

In addition to the above substantive problems, the process of funding for models gives cause for skepticism. When quick answers are in greater demand than documentation and validation, model builders are under pressure to sacrifice quality control. Poorly validated models can be expected to be used more often, and better--and therefore more expensive--models will fail to command the support their higher quality deserves. As a result of all the above factors, when any existing energy demand model gives an answer, that answer is to a large extent taken on faith.

Despite these limitations, models remain popular with policy analysts--so popular that they are sometimes overused or misused. Sometimes models are used to answer factual questions that could be answered almost as easily and much more accurately by other methods.
When an oil shortage threatens, for example, it makes more sense to find out how much consumers are adding to their inventories by surveying a sample of consumers than by estimating behavior from a model. Sometimes models are used to answer policy questions that they are not equipped to address. For instance, most models have difficulty representing efforts to improve information. To estimate the effect of energy-efficiency labels for appliances, a modeler might postulate an effect of the labels on consumers' discount rates and use a model to estimate that effect on purchase behavior. It would make more sense to conduct a field experiment that actually tested the effects of labels.
Models may be quick and inexpensive relative to alternative research methods, but there is no such thing as good, cheap energy policy analysis. If policy analysts are to offer knowledge rather than mere answers, the empirical basis of their analyses must be strengthened. We believe this can be done by making some changes in the way models are developed and by drawing more on knowledge gained by other methodologies. We offer seven conclusions and recommendations about formal energy models and their use.

1. Policy makers should maintain a healthy skepticism about the outputs of formal energy demand models. We do not assert that judgment is necessarily better than existing models. Rather, the point is that models, like judgments, should not be accepted without corroborating empirical evidence. The support of a second model is much less convincing evidence than the support of a field experiment, a good evaluation study, or even a well-conducted survey.

2. The current system dynamics models in use at the U.S. Department of Energy should not be relied upon as heavily as they are for forecasts of energy demand. Forecasts from those models are too dependent on postulated relationships and on judgmental elements incorporated in them to make them consistent with expectations.

3. Resources allocated to modeling should be shifted to ensure adequate documentation, validation, and maintenance.

4. Within the modeling community, more attention should be paid to building models that are better tested and maintained. These efforts are necessary to make demand models more credible.

5. For testing purposes, some versions of some models should be "frozen," archived, and then used from time to time without judgmental readjustments to make forecasts and policy analyses that are then tested against new data and against the findings of studies that use other research methods.
This step should be considered an essential part of the validation process, and is conceptually separate from the normal process of using new data to revise and update models. With this step, the modeling
community can build a track record on the basis of which formal models can be judged.

6. Innovation in modeling should be directed toward decreasing the dependence of model outputs on assumptions about parameters and the functional forms of equations.

7. Sensitivity testing of models should be used to generate hypotheses for empirical research, and resources used for validating models should be devoted in part to carrying out this research. When a model's output is highly sensitive to a parameter whose value is not well established, empirical research should be done to establish the value.

As these conclusions and recommendations make clear, we believe much more emphasis should be given to building the empirical base for energy demand analysis than to further elaboration of formal models based on inadequate data. Other research methods are required to build this empirical base.

THE ROLE OF PROBLEM-ORIENTED RESEARCH

Five types of problem-oriented research are surveys, analyses of existing data, natural experiments, controlled experiments, and evaluation research.

Surveys

National general-purpose surveys can provide invaluable data for problem-oriented research. Because their primary role in energy demand analysis has been to gather the multivariate time-series data essential for much policy analysis, including formal demand modeling, our conclusions and recommendations for these surveys appear in the next section on data collection. Specialized surveys have been responsible for most of the detailed analytical work on the effects of consumer knowledge, attitudes, and beliefs on energy use (e.g., Kempton, Harris, Keith, and Weihl, 1982; Stern, Black, and Elworth, 1982b). Specialized surveys are especially useful for explaining phenomena that appear inconsistent in terms of the variables represented in models. For example, surveys can be used to understand why conservation programs that offer the same financial incentives vary so widely in their levels of consumer acceptance (see Chapter 3; Berry, 1982). Surveys that explore public responses to the marketing and implementation of conservation programs have shown that consumer protection and convenience are two nonfinancial variables that affect consumer response to financial incentives (e.g., Stern, Black, and Elworth, 1981).

But as we have mentioned, surveys also have limitations. One is the unreliability of self-reports of some variables, such as attitudes. There is evidence both of reliability and unreliability in responses to energy surveys (e.g., Beck, Doctors, and Hammond, 1980; Geller, 1981). Two reasonable but unproven hypotheses are that self-reports of major investments are more reliable than self-reports of changes in habits and that reports of past action are more reliable than reports of future action. There is reason to question the worth of self-reports about planned energy-saving actions. Although reported intentions to act are often good predictors of behavior (Ajzen and Fishbein, 1977), the relationship depends, among other things, on the absence of constraints on action. For expensive investments in energy efficiency that involve many steps before completion, behavioral intentions would seem a questionable predictor. We have also noted that inferences drawn from even the most accurate self-reports may not be accurate because of errors in analysts' assumptions relating energy-saving actions to subsequent energy use. And when a policy innovation is being considered, people's predictions of how they will respond to hypothetical situations are not as good a source of information as actual observation. For such situations, small-scale experiments and program evaluation studies can give more useful information, even if their generalizability is unknown.
The conjunction of several small-scale behavioral studies can give more confidence in the conclusions about a new policy than the best-designed national survey of people's intentions.

Analysis of Existing Data

Analysis of existing data can still improve understanding of energy demand. Many interesting data sets collected by government agencies go unanalyzed, in whole or in part. For example, the data from the Residential Energy Consumption Survey (RECS) have been only partly analyzed.
Inadequate information about individual respondents is cited by some researchers as one reason for the lack of detailed analysis, but lack of funding for data analysis is a more serious deterrent. More analysis could also be done on utility companies' data on residential and commercial energy use to give a more solid empirical basis to studies of energy demand. Limited funding for research and the narrow focus given to research questions have limited what could be learned; noncomparability of data across utility companies has made analysis difficult; and access to the data has been a major problem. Concerns about customer privacy and about possible use of data in adversary proceedings make many utility companies unwilling to give researchers access to their files. Regulatory agencies have sometimes forced the release of data when they believe release will serve a public purpose, but as a rule, utility data are not readily available to researchers.

Existing data can be studied in various ways to learn about energy demand and to generate hypotheses. Several techniques of exploratory data analysis (Breiman, Friedman, Olshen, and Stone, 1984; Donoho, Huber, and Thoma, 1981; Fisherkeller, Friedman, and Tukey, 1974; Friedman and Tukey, 1974; Huber, 1981) can be used to examine data sets for regularities and to generate hypotheses to be tested on future data or with additional research. These methods rely heavily on informal graphic techniques and deemphasize formal statistical models or tests of hypotheses.

8. We recommend that some of the resources devoted to energy demand analysis be redirected toward exploratory analysis of existing data. Disaggregated data should be systematically collected on energy use in the commercial and industrial sectors of the economy and on energy prices and equipment stocks.
Specialized surveys relating measured energy use and observed investments in energy efficiency to demographic, institutional, and attitudinal factors are also much needed.

Natural Experiments

Natural experiments can produce a wealth of data that should be analyzed systematically. Ongoing data collection efforts would make this more possible. Analysis of the records of utility companies would allow additional studies. Natural experiments would teach more if a capability were developed to move researchers quickly into the field to study natural experiments in energy demand. Surveys of consumer response to changing utility rates or the recent decrease in inflation rates would have been a way to learn from one class of natural experiments (see Chapter 2); studies of initial responses to the threat of an oil supply cutoff could be a vehicle for learning from another class of natural experiments.

9. We recommend that some of the resources available for energy demand analysis be made available on short notice for field studies of natural experiments that occur when there are rapid changes in the energy environment.

Controlled Experiments

Controlled experiments are an especially valuable tool for assessing the effects of interventions that are nonfinancial in character and for which existing models are particularly inadequate. For example, psychologists have conducted many field experiments on the effects of energy-use feedback (reviewed by Geller, Winett, and Everett, 1982) and smaller numbers of field experiments to assess the effect of nonfinancial factors such as personal commitment (e.g., Pallak, Cook, and Sullivan, 1980), self-monitoring of energy use (e.g., Becker, 1978), and the presentation of energy conservation as a way to save money versus as a way to avoid losing money (Yates, 1982). Financial incentive programs are also appropriate subjects for field experiments (see Chapter 3), both because they have important nonfinancial features and because consumer responses to the incentives themselves are not well understood.
Experimental techniques offer great benefits for policy analysis of conservation programs: controlled field experimentation should be the method of choice for evaluating promising innovations in the implementation of such programs. Conservation programs are complex and contain important elements of promotion and implementation that cannot easily be expressed or analyzed in models. For example, results from Residential Conservation Service (RCS) programs have varied greatly across the utilities that run them, leading to controversy about whether the
national RCS program is worthwhile, and conflicting judgments have been offered to policy makers on the basis of very weak evidence. But many of the likely sources of variation could easily be the subject of experimentation at low cost. A utility company could randomly assign some of its customers to receive telephone marketing efforts or to be contacted as a follow-up to energy audits, or to receive lower-cost audit procedures as controlled alternatives to the procedures the utilities now use. Despite the fact that strong inferences could be drawn from such experiments, conservation programs are almost universally designed and implemented without the controls necessary for identifying low-cost means to improve their chances of success.

10. We recommend that controlled field experimentation be used whenever possible to evaluate promising innovations in policy affecting energy demand.

As we have mentioned, laboratory experiments also are appropriate analytic tools in some circumstances. They are particularly useful in efforts to design energy information so that consumers will notice and understand it (see Chapter 4). It is often feasible to experiment in a laboratory setting with alternative choices about what information to include, what metrics to use to summarize information, and how to design appliance labels, automobile fuel economy guides, utility bill inserts, and so forth. The laboratory approach is much cheaper than field experimentation and can be used to screen out alternatives that would almost certainly fail in field trials.

Evaluation Research

Evaluation research can, at least in principle, allow analysts to learn from what may be the greatest untapped source of information about energy demand--the thousands of energy programs and policies that have been tried during the last decade.
The knowledge that could be gained has great practical value because the success or failure of a conservation program is probably due to more than the sum of the specified features it offers; thus, it is not enough to build a program from single features that have proved effective--even in well-controlled experiments. The experience gained in past programs and policies, if
it can be interpreted, can help identify and possibly harness forces that may be more important than many of those usually considered in formal energy analyses. The best example is the fact that consumer use of incentives for conservation can vary by two orders of magnitude among programs offering the same financial incentive (see Chapter 3). This finding presents a riddle for analysts if they define the programs simply in terms of the financial value of the incentives they offer. The riddle can probably be solved only by carefully examining the ways the different programs are implemented. Such process evaluations, which emphasize qualitative research methods based on close observation and interviews of program staff and clients, can offer the needed insight. Outcome evaluations, which can use many of the research methods discussed in this section, can offer quantitative estimates of program effects. Careful comparisons of outcome studies can also provide estimates of how much difference process factors make.

Although much can be learned from thorough process and outcome evaluation of the experiences of energy programs, we wish to reemphasize that the most reliable information comes from explicitly treating programs and policies as experiments from their beginning. Such an approach requires the creation of a suitable comparison group, randomly assigned if possible, and careful measurement of effects in all groups (fuller accounts of issues in evaluation research design can be found in texts such as Cook and Campbell, 1979). Experimental research methods do not imply, we repeat, rigid constriction of a program's operation for the sake of some notion of scientific rigor. When controlled experiments are not feasible, some quasi-experimental research designs retain many of the advantages of controlled experiments.
Whatever the type of research design, however, more can be learned from the experience of a program if an evaluation plan is developed as a program is developed; an evaluation plan tacked on after a program has been operated inevitably produces weaker research because of the inability to measure preprogram conditions and because important questions must be answered from memory or by reference to incomplete archives rather than by observation.

11. Resources devoted to energy demand analysis should be shifted to favor collection and analysis of empirical
data over further elaboration of models that are poorly supported empirically.

12. Additional efforts should be made to identify and quantify important variables that are now omitted from formal energy models. The most obvious example is the set of marketing and implementation variables that appear to dwarf the effect of financial incentives in energy conservation incentive programs. Evaluation research appears to be the best method for identifying the relevant variables; evaluation research or field experimentation might be useful for estimating their size.

13. Additional analytic effort should be made to incorporate key nonfinancial variables into the process of demand analysis. The effects of marketing and management in energy information programs or of consumer mistrust may be interpreted as changes in discount rate, changes in lag coefficient, or in other ways. It will prove valuable, however, not only to quantify the important nonfinancial factors in energy demand but to improve their conceptualization.

14. The federal government should establish a fund for basic research on decision making relevant to energy efficiency, with grant awards recommended by an outside peer review panel. Such research should include studies of nonfinancial influences on energy demand and studies with only indirect implications for existing government-supported energy programs.

THE ROLE OF DATA COLLECTION

In efforts to model energy demand, data on energy use and on factors that influence it have too often been imputed rather than measured. Energy use is often calculated from data on production, stocks, and imports and then allocated to end uses, sectors of the economy, and geographic regions. Data on energy use by energy-efficient technology are often estimated from engineering models rather than measured in actual operation.
And the nature of consumers' and manufacturers' decisions, program implementation, and other social processes is most often assumed (or ignored). Insufficient knowledge exists to justify relying on imputations or presumptions rather than measured data. Prudence dictates building some national
estimates from disaggregate measurements and surveys, more direct methods that can act as a check on procedures of imputation. It makes sense for such measurement efforts to emphasize major energy uses (e.g., gasoline for automobiles); politically sensitive uses (e.g., home heating, which especially concerns low-income consumers and their advocates); uses for which major fuel switching is possible (e.g., industrial process heat); and uses about which little is known, such as energy use in commercial and public buildings.

The best current example of national data collection on energy demand is the Residential Energy Consumption Survey (RECS) of the Energy Information Administration (EIA), a detailed longitudinal survey of a rotating panel of households that has been a particularly important source of knowledge for demand analysts. Careful thought has gone into the construction of the RECS questionnaires, which have served as a model for some other surveys and could be used more in research by state and local governments and by utility companies.

For several reasons, however, national surveys have not achieved their potential. For example, the initial plan for EIA to survey energy use in nonresidential sectors of the economy has not been followed. Understanding energy demand in the industrial and commercial sectors--the bulk of national energy demand--is obviously critical for national demand analysis, yet the EIA survey of industrial energy use was abruptly discontinued in 1981, and a planned new survey has not yet appeared. The survey of nonresidential buildings has been a sporadic effort and deserves more support. And EIA's data on transportation are restricted to the residential sector. These weaknesses in EIA's surveys should be corrected.

15. Serious and continuing support should be given to EIA surveys that address all major energy-using sectors of the economy, that use a panel design, and that are conducted by experienced and competent data collection organizations.

16. The industrial energy-use survey of the Energy Information Administration should be reinstated to gain essential data on a major segment of national energy demand.

Technical problems have made it difficult for some researchers seeking to use the RECS public data base.
Data tapes are not available for up to 2 years after the data are collected.2 More important, details at the individual level, which analysts often need for microanalysis, are not available from RECS because of concerns about privacy, disclosure, and informed consent. In particular, these concerns have resulted in limiting information available about the specific location of respondents' homes. Without this information, however, researchers cannot take advantage of information available from other sources on such factors as prevailing wind speed and direction, differences in utility rate structures, the exposure of households to local or state conservation programs, or local consumer price indices. Information at the level of three digits of a zip code would allow analysts to assess the effects of local variables more adequately than they now can. The privacy problem might be solved by requesting respondents to release more detailed information to investigators, by relying on smaller surveys in which participants volunteer to release the information needed to answer particular questions, or by allowing the data collection organization to merge a researcher's data set with the RECS data for a subscription fee. EIA has occasionally merged data sets or done additional data analyses on the request of and with funding from other federal agencies.3

17. The Department of Energy should, wherever feasible, cooperate with other federal agencies and the private sector in data collection.

RECS has also failed to include enough detail to be useful to certain specialized groups of researchers. For example, it has not assessed the importance of energy efficiency and other factors in appliance purchases. It has also done little to assess motivational and social-psychological factors in energy demand.
Of course, there are limits to how much a survey can include, and some

2 The delay is due at least in part to the operational difficulty of collecting and checking data from disparate sources. For example, RECS must collect data from households and subsequently from energy suppliers. It can take six months or more simply to collect energy use data from fuel oil dealers.

3 Information from L. Carlson, Energy Information Administration.
potentially important questions will always be left out. We do not offer proposals to restructure the RECS survey, but we do believe it should be improved. And all of EIA's surveys should be designed to obtain the best and most useful data for research and policy analysis.

18. A formal advisory board of energy demand researchers should advise the Energy Information Administration on the contents of its surveys.

19. Continuity is a high priority in data collection. Surveys of energy consumers should repeat items over time and use a panel or rotating panel design.

We also wish to emphasize the occasional need to gather representative national data on energy issues on short notice or at relatively little expense. For example, EIA conducted a survey in fall 1979 of the oil-heated households in its national sample to see if people were having trouble obtaining heating oil in the wake of the oil shortage of that year (Energy Information Administration, 1979). The existence of a well-chosen representative sample for which baseline data were available made it possible to conduct a survey on short notice from which meaningful conclusions could be drawn, and we believe such samples should be maintained.

20. A large national panel for which past data exist, such as the RECS respondents, should be made available for subsampling so quick telephone surveys can be used to help answer immediate policy questions. Such a subsample might be made available to independent researchers who could insert questions on a subscription basis.4

4 We have not addressed legal questions that may arise from selling subscription access to respondents to a federally sponsored survey, particularly to profit-making organizations. The point is not that the RECS survey should necessarily be the vehicle for collecting the data, but that some preexisting national survey would be valuable as background for more focused survey efforts by public or private organizations.
In the residential sector, RECS is the best such survey in existence.
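The quick-survey idea in recommendation 20, screening an existing panel and drawing a rapid telephone subsample, can be sketched as follows. The panel data, the screening rule, and the sample size are invented; a real design would also stratify on region, fuel type, and housing characteristics.

```python
import random

# Illustrative sketch of recommendation 20: from an existing national
# panel, screen for a group of immediate policy interest (here, an
# invented flag for oil-heated households) and draw a quick telephone
# subsample.  Panel contents and sizes are invented for illustration.
panel = [
    {"id": i, "heats_with_oil": (i % 4 == 0)}  # roughly a quarter oil-heated
    for i in range(1, 1001)
]

def quick_subsample(panel, screen, n, seed=0):
    """Screen the panel, then draw up to n respondents without replacement.

    A fixed seed makes the draw reproducible, which matters if the
    subsample must be documented for later reanalysis.
    """
    eligible = [h for h in panel if screen(h)]
    rng = random.Random(seed)
    return rng.sample(eligible, min(n, len(eligible)))

survey = quick_subsample(panel, lambda h: h["heats_with_oil"], n=150)
```

Because baseline data already exist for every panel member, responses from such a subsample can be compared immediately against known household characteristics, which is what made the 1979 heating-oil survey informative on short notice.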
USING VARIOUS RESEARCH METHODS IN CONCERT

Energy policy analysis and related data collection tend to be closely responsive to policy questions: current policy issues drive the development of models, and the requirements of models determine data collection efforts. Immediate policy questions and formal modeling tend to dominate the research enterprise, to the neglect of other methods and of more basic research. The demand for answers to today's questions today has diverted resources from the more basic task: building a knowledge base for answering policy questions more accurately. Instead, emphasis has been given to elaborating formal models even when their assumptions are poorly tested, the necessary data are lacking, and important variables are not included in them.5

We have noted the important place of formal models in energy analysis, and we believe that, because of their great value for forecasting and for identifying effects of policies on disparate parts of the energy system, improving their behavioral foundation is a high priority. Models can also be useful for more narrowly focused policy analyses, though they have been overused in relation to other methods. In this context, models are most appropriate for anticipating effects of interventions that are quantitative and that operate by processes that are well understood or that have been successfully modeled in past similar situations. In the more typical case, however, when the path of implementation is less straightforward (e.g., energy conservation tax credits, regulations, informational efforts), existing models are less useful. They have even less value for analyzing policies that are qualitative in nature or that obviously involve institutional, organizational, or psychological elements (e.g., residential conservation programs).
For such analyses,

5 Models are most often constructed by engineers, operations researchers, and economists, with little consultation with researchers in other disciplines. This lack of breadth is one reason there has been so little effort to model such variables as incomplete information, communication processes, marketing of programs, and decision under uncertainty. Data on these variables are hard to get, but the effort has not seriously been made, and the variables tend, as a result, to drop out of consideration in policy analyses based on formal models.
problem-oriented studies are more likely to offer useful information to policy makers. In short, although good energy models are desired and needed, existing formal models are not yet up to the tasks for which they are used.

Better analysis requires a serious research and data collection effort driven not only by immediate policy concerns but by a desire to improve understanding of energy use and general theories of consumer behavior. Such an effort implies changes in the use of formal models and other research methods. Formal models need input from other research methods, which are especially useful for supplying empirical tests of modeling assumptions and predictions. Researchers should use the various research methods in a complementary fashion, using each to answer the kinds of questions for which it is best suited and, when more than one method is appropriate, using each as a check on the others. As a general strategy, we advocate a combination of research methods as the best way to advance understanding of energy demand.6

Problem-oriented research can combine with models in various ways. The results of some problem-oriented studies raise questions about which variables are most important to consider in formal demand analysis. For example, the data from evaluation studies on the wide disparity of response to a constant financial incentive suggest that something about the implementation of incentive programs (not now represented in models) may be more important than the monetary value of the incentive (a

6 Our discussion of the character of multimethod research on energy demand is not meant to minimize the real institutional barriers to making this a normal part of policy analysis. The people who construct formal models and those who use other research methods often come from different disciplinary backgrounds, belong to different professional associations, and communicate little with each other.
And in policy-making organizations, there is often a similar split between units that do modeling and units that do other research, for example, program evaluation. There are some signs of improved communication, including some interdisciplinary conferences on energy demand issues and the existence of the present study, but the problems of institutionalizing a multimethod approach still loom large.
major focus of modeling efforts). Problem-oriented studies are the only available way to estimate the effects of variables that are not now represented in models. Evaluation research on the implementation of conservation programs is one example; another is research on improving the quality of information available to energy users with feedback (Chapter 4).

Problem-oriented research methods, in concert, can provide empirical help in estimating the parameters of models. For example, to estimate the effect of appliance labels that offer information on energy efficiency, small-scale laboratory experiments might first be used to determine what information is effective on labels and what presentation formats people consider useful. Field experiments in which labels are used in some locations and not in others would be the best way to get a realistic estimate of how much difference the best available labels make. Surveys of appliance purchasers can produce empirically based estimates and act as a check on the findings from the smaller experiments. The results of these problem-oriented research efforts can inform policy about appliance labeling more usefully than can predictions from a model. They may also prove useful to modelers by providing parameter estimates that would not otherwise be available in any empirically supported form. It might be possible, for example, to interpret information on the effect of labels as a change in a discount rate or a lag coefficient. (In a discrete choice model equation such as the one in Appendix A, labels might change the coefficient of response to energy efficiency.)

Sometimes research on qualitative factors such as program implementation or interpersonal communication cannot be used to estimate the parameters of variables in models because the variables are too hard to define and measure precisely.
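The parenthetical discrete-choice interpretation above can be made concrete with a minimal sketch. This is not the model in Appendix A: the two-alternative logit form, the attributes, and every coefficient value below are invented for illustration. The label effect is represented, as the text suggests, as an increase in the coefficient on energy efficiency.

```python
import math

def choice_prob(price_a, eff_a, price_b, eff_b, b_price, b_eff):
    """P(choose appliance A) in a two-alternative logit.

    Utility of each alternative is a linear function of purchase
    price and energy efficiency; coefficients are hypothetical.
    """
    u_a = b_price * price_a + b_eff * eff_a
    u_b = b_price * price_b + b_eff * eff_b
    return math.exp(u_a) / (math.exp(u_a) + math.exp(u_b))

# Appliance A costs more but is more efficient than appliance B.
# An effective label is modeled as raising b_eff (responsiveness to
# efficiency information) while everything else stays fixed.
base = choice_prob(500, 0.9, 400, 0.6, b_price=-0.01, b_eff=4.0)
with_label = choice_prob(500, 0.9, 400, 0.6, b_price=-0.01, b_eff=6.0)
assert with_label > base  # a larger efficiency coefficient favors A
```

In a multimethod design, the field-experiment and survey results described above would supply the empirical estimate of how much b_eff actually shifts, rather than the invented values used here.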
In those instances, however, problem-oriented studies can estimate the range of uncertainty for those parameters and can offer explanations for the variation.

Problem-oriented studies and analyses of existing data can also be used to test models. A model that can predict the results of evaluation research, field experiments, or analysis of RECS data is more likely to be correct than one that cannot. Given the uncertainties in models, it would be wise practice to compare the output and assumptions of models with empirical findings as a way of testing and refining models.

Thus, a multimethod approach would have several implications for energy modeling. It would lead to many small
changes in models--in parameter values and possibly in the ways variables such as price are represented--and it would probably change the variables included in models. And when qualitative factors, such as trust in information, prove important, it might lead to important innovations in the ways models are structured. In all these ways, energy modeling would be improved.

A multimethod approach to research would also affect the conduct of problem-oriented research. Models would help set priorities for other research by identifying unanticipated effects of policies that call for more specific attention and, when a model's output depends critically on the value of a particular parameter and the estimate of that parameter is uncertain, by calling for research or data collection, using other methods, to estimate that value.

The most important change that might arise from a multimethod approach, we hope, would be a shift of emphasis in the way energy demand analysis is conducted. Consumers of energy demand analysis might become less inclined to see in models the distillation of all knowledge about energy demand and more willing to see models realistically, as part of an ongoing process of analysis that relies on many techniques to build understanding.