

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




CHAPTER 4

INFORMATION-BASED HOME RETROFIT PROGRAMS

In Chapter 1 we identified five general behavioral questions whose answers are important to the design and implementation of information-based conservation programs; in Chapter 2 we outlined the roles of six analytic methods in answering those questions. This chapter considers each of those questions in a more specific context, that of home retrofit programs that rely heavily on information, such as Residential Conservation Service (RCS) programs and financial incentive programs that begin with a home energy audit. We show how each question about information programs has been addressed in the past and outline ways each question could be addressed more comprehensively in the future development of such programs.

HOW CAN A PROGRAM BE DESIGNED SO THAT THE INFORMATION IT OFFERS IS USED?

In the United States, the most prominent conservation programs that rely on information use home energy audits to convey that information. For such programs, getting the information used has two aspects: increasing the proportion of eligible households that request energy audits and increasing the rate of retrofit activity among households that receive audits. Most audit-based conservation programs have been based on clear, if implicit, beliefs about what leads households to request energy audits and then to retrofit their homes. The original RCS regulations assumed that if consumers could get low-cost information that was accurate, specific to their own residences, and expressed in terms of economic payback, they would request that information. The regulations further assumed that people would retrofit their homes if an energy audit showed that a retrofit would have a net benefit. Thus, program design followed from some fairly straightforward assumptions about consumer behavior. However, some empirical knowledge about consumer behavior existed when RCS began, and more has been gained since.
The available evidence shows the importance to energy use of principles of behavior in addition to long-term cost minimization (for a review, see Stern and Aronson, 1984). Consumer values such as comfort and esthetics influence people's choices. The opinions and experiences of even ill-informed friends and associates can affect choices more than the statistics of "experts." Consumers sometimes act to express personal values in ways that have significant, if incidental, effects on their levels of energy consumption. And people often act as problem-avoiders, doing nothing to change their habits until a sudden change in energy prices or availability or some other external factor forces them to pay attention to energy use, and then acting without thorough consideration, in the hope that the energy problem will then stop interfering with the more important things in their lives.

These influences on action help explain the ways people respond to energy information. People often do not seek information actively; rather, they use it when it is conveniently available and when something about it attracts their attention. People listen more closely to friends and neighbors than to mass media, and they are more likely to act on information that comes from an organization they trust. People can become committed to a major course of action if they begin with a small step, taken voluntarily. And because people's homes, motives, and life situations vary, they respond differently to the same information.

Knowledge about how people use information suggests a strategy for designing information programs. In order to appeal to potential clients, a program should involve from the outset representatives of groups that understand the client populations, that are trusted, and that communicate regularly with the potential clients, preferably by word of mouth. This strategy has been used, explicitly at times, and with apparent success. For example, when the Bonneville Power Administration initiated its Hood River Conservation Project, a team of social scientists first identified the major social groups that would have to be represented if the program was to get needed community support (Keating and Flynn, 1984).
The program also early established a community advisory group that would ensure that each of the major social groups was involved. Such local groups are valuable for planning programs so that they serve their clients well, for communicating dissatisfactions to program managers, for publicizing a program, and for increasing public confidence in it by making it more responsive to its clients.

Marketing through community groups has had remarkable success. In Minnesota, the RCS program audited only 4 percent of the eligible homes in localities where energy audits were conducted by utility company personnel, but 15 percent in places where local community groups conducted the audits (Polich, 1984). Similarly, by using local groups and direct personal contact, the Tennessee Valley Authority increased the proportion of its audits going to low-income households from 6 to 21 percent (Moulton, 1984).

Knowledge about communication processes has implications for the tactics of energy-audit programs. As we detailed in a previous work (Stern and Aronson, 1984: Chapter 4), an energy auditor is more effective when he or she presents information in clear, understandable, and attention-getting ways. Such presentation implies, among other things, interacting with the householder; presenting information in the form of case studies, preferably of nearby homes; demonstrating energy loss vividly with "smoke sticks," infrared scanners, or other techniques that show how and where heat is lost around windows or through walls; and involving the householder in hands-on activity. A conservation program could use before-and-after thermograms or energy-use feedback information to make energy savings visible to its participants and thus increase the program's credibility. It could offer free low-cost energy-saving devices, such as flow restrictors for shower heads or insulating wraps for water heaters, to induce households to take further action, following the principle of behavioral momentum. The one-stop shopping and consumer protection features that have at times been part of the RCS concept are also consistent with behavioral principles because they make conservation attractive to householders who wish to avoid problems associated with the home improvement industry. A number of local conservation programs have used combinations of the above tactics with great success, in local government programs (e.g., City of Santa Monica, 1985; Fitchburg Office of the Planning Coordinator, 1980), in utility programs (e.g., Moulton, 1984; Olsen and Cluett, 1979), and in programs operated by community groups (e.g., Freedberg and Schumm, 1984; Katz and Morgan, 1983). Several of the above programs have claimed much greater success reaching low-income populations than is usually found in information-based programs. In the most ambitious of these programs, in Hood River, Oregon (see Keating and Flynn, 1984; Peach et al., 1984), and in Santa Monica, California (see City of Santa Monica, 1985), behavioral principles and free installation of energy-conserving equipment are being combined in an effort to reach 100 percent of the local population. Santa Monica canvasses the city door-to-door, offering a brief energy audit and free installation of up to three low-cost energy-saving technologies.
Over the first nine months, about one-third of all homes contacted, representing a socioeconomic cross-section of the community, accepted the program's audits, and energy-saving devices were installed in 97 percent of the homes. In the Hood River project, major conservation retrofits are offered free under a detailed marketing plan designed to overcome barriers of mistrust that exist even for a giveaway program. A community advisory council helps relay community reactions to the program managers. The response to these vigorous efforts offers lessons for the managers of other programs, especially because both the Hood River and Santa Monica programs include serious evaluation plans.

Evaluations are an important element of the systematic analysis that is needed for program managers to learn what behavioral strategies and tactics will be useful in their particular conditions. Evaluations should emphasize process issues in order to identify the features of a program's implementation that are responsible for its outcomes (see below). Small field experiments can provide conclusive knowledge about how specific program elements work in home retrofit programs. In any large utility service area, a recommended program element can be made a part of the program in selected neighborhoods or towns, with other neighborhoods or towns serving as comparison groups (for more discussion of the problem of comparison groups, see below). A program element that looks promising in field trials in one area can be assessed for generalizability by trying it in other parts of the country where there may be reason to expect a different level of effectiveness. Such field trials should assess effects by recording the level of requests for energy audits and the rate of retrofit activity, by measuring energy efficiency more directly, and by assessing actual energy savings. (Energy savings are, in effect, the product of improved energy efficiency and occupant behavior; we discuss ways to separate these components in Chapter 6.) Small field trials need involve only a few hundred households, including control groups. Their results can be communicated to the operators of other home retrofit programs for their information and use.

HOW CAN A PROGRAM BE DESIGNED TO SPREAD INFORMATION WIDELY?

This question was not explicitly asked when RCS was designed. Rather, it was assumed that program clients get information one by one, from official sources (in RCS, from energy auditors). But it has been repeatedly shown that the adoption of innovations (which is what RCS encourages) is also influenced by word-of-mouth communication from friends and associates and by relevant personal experience, and evidence has been accumulating that adoption of energy-saving home retrofits fits the usual pattern. With word-of-mouth communication, information from an energy program can spread faster than energy audits can be accomplished. This possibility of contagion of information raises issues for program evaluation (see below).

Behavioral research offers general guidance for spreading the information from home retrofit programs. It suggests, for example, that delivering energy audits to a group of neighbors can spread the word about retrofits (see Olsen and Cluett [1979] for an account of this practice in the Seattle City Light program).
It suggests that information will spread faster if a retrofit is installed in the home of a family with many personal contacts in the community and if a convincing measure of the energy saved is made available to that family and to others. And it suggests that mentioning the experience of a neighbor in an energy audit can promote retrofit by encouraging program participants to speak to neighbors who have such experience. Small experimental trials conducted in the context of existing retrofit programs could readily evaluate such hypotheses. The results would be assessed by techniques of program evaluation (see below).

HOW CAN THE EFFECTS OF A PROGRAM BE FORECAST?

The initial forecasts of the effects of RCS were based on very limited data or theory. Initially, President Carter set an arbitrary expectation that the program would reach 90 percent of all residential buildings within five years. In its 1979 regulatory analysis of the program, DOE estimated 35 percent penetration within five years, based on an extrapolation of the one-year response rate to a few existing free energy audit programs. It has been argued (Glazer, 1984) that a more accurate forecast could have been made at that time, by using the annual response rate to existing audit programs that charged $15 (the legal maximum charge under RCS). However, forecasting participation rates is more difficult than this implies. Even when the costs of energy audits and other financial features are the same in different programs, rates of audit requests often vary by a factor of five or more (see Chapter 3).

Forecasting energy savings from a retrofit program is even more difficult than forecasting participation. The simplest approach involves multiplying an estimate of participation by an estimate of the average energy savings for each participating residence. But this method of estimation does not take into account that the existence of a retrofit program may have effects on nonparticipants: such an effect might occur by direct communication between participants and nonparticipants, through heightened publicity in local mass media about the importance of retrofit, or in other ways. And program participants vary tremendously in what they do with audit information. The experience of loan and grant programs, which keep records of incentive payments, gives an indication of the proportion of audit recipients who follow the auditors' recommendations. The percentages may vary by a factor of ten or more between utilities offering the same incentives to similar customers (see Chapter 3). Moreover, it is not safe to infer energy savings from records of actions taken. Such predictions have been in error by at least 50 percent more than half the time (e.g., Goldman, 1984; Hirst and Goeltz, 1984). Much of the variance is due to the behavior of installers and building occupants, though it is not clear how much. Some methods for estimating overall energy savings from programs are discussed in the next section, and the problem of assessing the relationship between retrofit activity and energy savings is addressed in Chapter 6.
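The simplest forecasting approach, and the wide ranges of variation reported above, can be illustrated with a small sketch. All figures below are hypothetical, chosen only to mirror the factor-of-five spread in audit requests and the factor-of-ten spread in follow-through described in the text:

```python
# Hypothetical illustration of the naive savings forecast:
# savings = eligible homes x participation rate x follow-through rate
#           x average savings per retrofitted home.
# Rates are (low, high) ranges because audit requests vary ~5x and
# follow-through ~10x across programs with identical financial terms.

def forecast_savings(eligible_homes, participation, follow_through,
                     avg_savings_mwh):
    """Return a (low, high) forecast of annual savings in MWh."""
    lo = eligible_homes * participation[0] * follow_through[0] * avg_savings_mwh
    hi = eligible_homes * participation[1] * follow_through[1] * avg_savings_mwh
    return lo, hi

lo, hi = forecast_savings(
    eligible_homes=100_000,           # invented service-area size
    participation=(0.03, 0.15),       # audit-request range (~5x spread)
    follow_through=(0.05, 0.50),      # retrofit rate among audited (~10x)
    avg_savings_mwh=2.0,              # assumed per-home annual savings
)
print(lo, hi)  # 300.0 15000.0
```

The resulting fifty-fold range between the low and high forecasts shows why a point estimate extrapolated from a single program's response rate is so unreliable.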
HOW CAN THE EFFECTS OF A PROGRAM BE ASSESSED ACCURATELY?

In order to assess program effects accurately, it is necessary to systematically evaluate the outcomes of conservation programs to see what changes they produce in energy efficiency and the energy consumption of the program participants' buildings. Several evaluations of individual RCS programs, collections of programs at the state level, and conservation incentive programs relying on energy audits have calculated participation rates and estimated the energy saved, its economic value, and the cost/benefit ratios of the programs (for reviews, see U.S. Department of Energy, 1984; Hirst, 1984). DOE has also offered national conclusions about RCS from a set of evaluations within states.

Because of the uneven quality and noncomparability of the available analyses, it is premature to draw national conclusions. This caveat is particularly true for the national evaluation of RCS (U.S. Department of Energy, 1984) because of problems in the component statewide analyses and because of the shakiness of the assumptions used to draw national conclusions from them. The national evaluation is based on completed evaluations in only a few states and, because the methodologies and assumptions used in those state evaluations are different in important ways, the evaluations are nearly impossible to compare. For example, some evaluations use utility bills as the measure of energy savings, while others impute energy savings from engineering calculations based on consumers' self-reports of the energy-saving measures they have taken. Still other evaluations merely report the claimed cost of the energy-saving improvements. Some evaluations estimate energy savings by subtracting postaudit consumption from preaudit consumption, while other evaluations match respondents or use various statistical techniques to attempt to control for such variables as conservation by nonparticipants, changes in weather, the ages and educations of household members, energy attitudes, differences in the houses, and household incomes. It is not always clear from the reports which factors were controlled, and there has not been a systematic attempt to learn from the evaluations which of the factors make an important difference. In addition to these noncomparable factors, the DOE evaluation is based on the following assumptions: that the state evaluations are equally valid; that a system of weighting the results reported from a few states accurately represents a national total; that increased comfort in retrofitted homes has zero benefit in cost/benefit calculations; that the existence of RCS has no effect on energy use except among households receiving audits; and that RCS has zero benefit to utility companies or governments. Some of these assumptions are untrue; the rest are implausible. The DOE evaluation also uses a variety of assumptions to impute values for energy savings in states for which no data exist.

The first step to improved program evaluations is adequate data.
Given the present state of knowledge, data should be collected both on actual energy use and on measures adopted, because there is as yet no clear basis for choosing which of these is the better index of total program effects. To tell whether the difference between these indices is due to faulty installation of retrofits, overoptimistic engineering estimates, or occupants' choosing increased comfort levels, data on indoor temperature settings and occupant comfort should also be collected. For such data to be accurate, longitudinal studies and actual measurement of temperature will be necessary. (These measurement problems are discussed in more detail in Chapter 6.) Data on climate, house size, and household sociodemographic variables are also important for assessing the degree to which participant and nonparticipant groups are different. Econometric modeling techniques have sometimes been used to hold constant the relationships of such variables to energy use and estimate the net effect of a retrofit program (e.g., Peat, Marwick, and Partners, 1983).

In addition to adequate data, better methodology is also necessary for improved program evaluation. Two key unanswered questions illustrate this need. First, are energy audits used mainly by people who have already decided to retrofit their homes? Even among households matched on education, income, general energy attitudes, and so forth, those that have just decided to retrofit are probably more likely to use energy audit programs than others. If so, comparing participants and nonparticipants would overestimate the effect of a program by giving it credit for decisions to retrofit that were made independently of the program. Second, does the presence of a retrofit program induce energy savings among nonparticipants, possibly by increasing energy consciousness or through word-of-mouth influence of participants on nonparticipants (contagion)? If so, comparisons between participants and nonparticipants would underestimate a program's effect, especially when a local group of nonparticipants, who may have heard of the program, is used for comparison purposes in an evaluation study. There is no sure way to answer either question by the research methods of surveying participants and nonparticipants, checking utility bills, and modeling household decisions. More sophisticated approaches are needed.

To hold constant the decision to retrofit, one must find a group of households similar in motivation to those that have decided to retrofit but that have not participated in the program. One such comparison group would consist of households that participate in the program in the future. Another would consist of households that have requested but not yet received audits. A program that has a waiting list for energy audits can randomly assign some participants to get audits immediately and use those on the waiting list as a comparison group. To answer the question about contagion, it would be necessary to study a comparison group for whom the program in question is not available, such as households in areas where the program has not been implemented or where a different program exists. It may be easier to study contagion effects when a program aims to promote adoption of a particular piece of equipment, such as a clock thermostat or a water-heater wrap, for which sales can be monitored before and after the program.
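The waiting-list design described above can be sketched in a few lines. The group sizes and the seeded random split are illustrative assumptions, not part of any actual program's procedure:

```python
# Hypothetical sketch of the waiting-list comparison design: households
# that have already requested an audit are randomly split, so the
# "delayed" group shares the "immediate" group's motivation to retrofit
# and self-selection cannot inflate the estimated program effect.
import random

def assign_waiting_list(household_ids, seed=0):
    """Randomly split audit requesters into immediate and delayed groups."""
    rng = random.Random(seed)          # seeded for a reproducible assignment
    ids = list(household_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]      # (immediate audits, waiting-list controls)

immediate, delayed = assign_waiting_list(range(200))
# Post-period energy use of the two groups can then be compared; both
# groups requested audits, so motivation is held roughly constant.
```

Note that this design controls for self-selection but not for contagion, which requires a comparison group outside the program's reach, as discussed above.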
None of these methods offers perfect control for all the plausible alternative explanations of observed results. Therefore, the best way to improve understanding probably involves a combination of methods. If different methods yield similar results under similar conditions, each gains credibility. Table 13 summarizes some alternative comparison groups for evaluations of conservation programs and the advantages and disadvantages of each.

TO WHAT CAN PROGRAM EFFECTS BE ATTRIBUTED?

The question of attributing effects is the province of evaluation research that focuses on the processes by which programs are marketed and implemented. Few evaluations have paid close attention to the effects of particular features of conservation programs, their administration, or their participants, so few evaluations have been useful for attributing the effects of programs to their elements. Most evaluations have aimed to judge programs as a whole against criteria of energy savings or cost-effectiveness, an approach that implicitly and incorrectly assumes that program implementation does not matter. What is needed is an understanding of what makes some versions of retrofit programs effective so that the successes can be duplicated.

[Table 13: Alternative Comparison Groups for Evaluations of Conservation Programs, with the advantages and disadvantages of each. The table text is not recoverable from this machine-read version.]
Some evaluations have looked within programs to identify the causes of their success or failure. Among the factors that have been identified are marketing efforts, strength of consumer protection guarantees, the credibility of the sponsoring organization and its commitment to the program, and efforts to simplify consumer decision making (see Chapter 3). These variables have been identified by asking people for their opinions or judgments--sometimes with open-ended interviews of program participants or managers, sometimes with carefully constructed survey instruments. Surveys are a useful way to gauge the effects of program features, especially in the absence of clear expectations about which program features are most important. But for definitive knowledge about the effects of particular program elements or combinations of elements, it is not sufficient to rely on the impressions of program officials or the self-reports of participants. It is necessary to treat the program elements explicitly as experiments. Ideally, promising program features would be identified from surveys, previous process evaluations, or other exploratory methods. They could then be offered as alternatives to existing practice in ongoing programs. Participants could be randomly selected to receive either the experimental or control program element, and the effects could be assessed in the program's ongoing evaluation research. Two recent examples of the use of controlled experiments involve a test of an enhanced informational component of a time-of-use electricity pricing program (Heberlein and Baumgartner, 1985) and a comparison of marketing techniques for a shared-savings program for residential retrofits (Miller and Ford, 1985). Surveys of participants and program officials can give useful supplementary information about reactions to the program element in question, and the outcome portion of the evaluation would assess the effects on retrofit activity and energy consumption.
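The outcome portion of such a randomized element test can be sketched as a simple comparison of group means. The savings figures below are invented solely for illustration:

```python
# Hypothetical analysis for a randomized program-element trial: compare
# mean metered savings (kWh) between households randomly offered an
# experimental program element and households given standard practice.
from math import sqrt
from statistics import mean, stdev

def difference_in_means(treated, control):
    """Return (effect estimate, approximate standard error)."""
    diff = mean(treated) - mean(control)
    se = sqrt(stdev(treated) ** 2 / len(treated)
              + stdev(control) ** 2 / len(control))
    return diff, se

treated = [820, 760, 905, 640, 780, 850]   # invented kWh savings
control = [610, 550, 700, 480, 590, 640]
effect, se = difference_in_means(treated, control)
# An effect several standard errors from zero is unlikely to be chance;
# a real evaluation would also adjust for weather, house size, and the
# other covariates discussed earlier in the chapter.
```

Because assignment is random, this comparison attributes the difference to the program element itself rather than to self-selection, which is what distinguishes the experimental approach from surveys of officials and participants.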
RECOMMENDATIONS

Behavioral research has identified promising strategies and program elements for information-based conservation programs, and available evidence supports the validity of the behavioral approach. However, from the viewpoint of managers who want to improve participation in their programs, available knowledge is too scanty to offer reliable and specific advice. This situation can be remedied by more thorough efforts to assess behavioral factors in program evaluation research and to measure their effects in field trials. We offer the following specific recommendations for research on information-based home retrofit programs:

1. Continue to implement and carefully evaluate ambitious programs that aim to install all economically justifiable residential retrofits in particular areas (e.g., Hood River and Santa Monica). Such programs are laboratories for learning about the effects of various methods of overcoming resistance to home retrofits. These ambitious programs are in particular need of careful process evaluation to assess the effects of particular marketing and dissemination techniques. We emphasize that resources for collecting and analyzing data on process variables must be reserved for that purpose: too often, past programs that could have provided important lessons for other program managers have not done so because funds intended for evaluation have been used for other purposes as the program neared its end.

2. Conduct and carefully evaluate small-scale programs aimed at low-income housing. Existing information-based programs have failed to reach proportionally as many low-income households as other households, and indications are that more aggressive marketing efforts are needed to achieve success. Programs that have claimed success in the low-income housing sector have usually applied behavioral principles, but because of insufficient funds for evaluating the programs, the effects of particular program elements cannot be separated, so the lessons of their experience are not clear. Evaluation of these efforts is important because what works in a general population has not worked for the low-income population in the past and because low-income programs are among those that have made the most extensive use of behavioral approaches.

3. Conduct small-scale experimental field trials of promising program elements within ongoing conservation programs. A number of promising program elements can be tested rigorously under field conditions simply by making them available at random to a portion of a program's clients. Such experiments are quite inexpensive when included in an ongoing program evaluation. They can provide strong evidence about the effectiveness of an intervention that may generalize to other similar programs. Experiments should be conducted with program elements in three important areas:

a. Marketing. Experiments can determine, for example, how much participation increases when potential clients are contacted door-to-door or when different kinds of organizations perform the outreach tasks.

b. Audit techniques. Programs that train auditors in communication skills and in delivering information in a vivid and personalized manner should be tested experimentally to quantify the effects of these approaches and determine which ones best justify their costs.

c. Postaudit follow-up. Small-scale experiments can test the effects on participation of telephoning households that have received energy audits to ask if they plan to retrofit and of sending participants information documenting their and their neighbors' energy savings and the improved comfort reported by people in the community as a result of having participated in the program.