5
Process for Long-Term Assessment and Program Improvement

INTRODUCTION

There are no ideal assessment processes that can be applied to evaluating programs for advancing technologies. Programs need to be considered individually to determine the metrics that best respond to the specific program and technologies. In general, the metrics need to be replicable and assess both the means (program activities) and the ends (program outcomes) (OECD, 1998). The committee attempted a systematic assessment of the PATH program but found that the data were insufficient, and that an evaluation process needed to be integrated into the program’s management system. The following issues should be considered in designing a long-term assessment process for PATH.

Causality

The development and diffusion of innovation in housing is an evolutionary process that was operating before PATH was created. The challenge in assessing the impact of PATH is therefore to distinguish progress that would have occurred through normal market and economic processes from progress attributable to the PATH program. The limited amount of research and baseline data on innovation in the housing industry compounds the difficulty. Because PATH is structured as a partnership, it is also hard to separate the impact of the program from actions its partners would have taken without PATH; the direct response to the collaborative effort must somehow be assessed. Doing so will require a significant amount of supposition, because it requires defining a counterfactual: what would have happened without the PATH initiative.

Quantitative Versus Qualitative Assessment

It is often easier to define quantitative indicators of a program’s performance (e.g., the number of reports) than to work out what the program has accomplished. To be valid, an assessment should incorporate professional judgment of value. Though innovation is at the heart of economic change, relying on economics-based measures to assess a program to stimulate innovation can be misleading. An individual firm can determine its return on investment or the cost benefit of the direct effort to develop a new product, but it cannot factor in the value of the basic research or communications with end users that made the new technology possible. On a macro scale, counting the number of patents for new technologies that can be used in housing or the number of related articles in journals is helpful for assessing the amount of activity but not for assessing the value of new knowledge or new technologies. Economic, patent, and literature data are all helpful, but additional measures are needed to assess how much effect PATH has on the development and diffusion of technology in housing.

No single metric can assess PATH completely. It is therefore important to identify multiple performance measures that can be attributed directly to PATH and that reflect interim progress toward its goals (Jaffe, 1998; Hatry, 1999). The cost of a detailed evaluation of innovation in the housing industry would be out of proportion to the size of the PATH program if it were used solely for assessment of the program; as noted in previous recommendations, research to develop this information is also needed to help understand the processes of innovation in housing. This information can be used by industry to plan R&D and product diffusion programs, and by PATH to plan a more effective program.

ASSESSMENT FRAMEWORK

A continuous assessment process should do more than provide a scorecard of past activities. Analysis of assessment data can help improve management of the program and design of future activities (Hatry, 1999). In a sound process of assessment and performance measurement, measures are linked to the program’s mission, goals, and objectives (Figure 5.1). Measures should be designed to assess the potential for PATH to accomplish its goals and objectives (see the discussions of PATH goals in Chapter 2 and of the extent to which they have been accomplished in Chapter 4).

FIGURE 5.1 Assessment framework.
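The linkage in Figure 5.1 can be illustrated with a small data structure that ties input, output, and outcome measures to the goals they support. This is only a sketch: the activity names, goal labels, and metric values below are hypothetical placeholders, not PATH’s actual taxonomy.

```python
# Sketch: linking input, output, and outcome measures to program goals.
# All activity names, goal labels, and metric values are hypothetical.

from collections import defaultdict

activities = [
    {"name": "Technology inventory", "goals": ["disseminate information"],
     "input": {"budget_k": 250}, "output": {"technologies_listed": 120},
     "outcome": {"site_visits_k": 40}},
    {"name": "Field demonstrations",
     "goals": ["foster diffusion", "disseminate information"],
     "input": {"budget_k": 900}, "output": {"demo_homes": 15},
     "outcome": {"builder_adoptions": 60}},
]

def rollup(activities, level):
    """Aggregate the measures at one level (input/output/outcome) by goal.

    A single activity may support more than one goal, so its measures
    are counted toward each goal it is linked to."""
    totals = defaultdict(lambda: defaultdict(float))
    for act in activities:
        for goal in act["goals"]:
            for metric, value in act[level].items():
                totals[goal][metric] += value
    return {goal: dict(metrics) for goal, metrics in totals.items()}

print(rollup(activities, "outcome"))
```

Rolling measures up by goal in this way is what makes the annual reporting data aggregable across activities, as the chapter recommends.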

An assessment process should include measures of:

Input: These measures reflect the resources put into the program that ultimately produce programmatic outputs and outcomes.

Activities: These measures monitor the day-to-day activities of the program, addressing such issues as the objectives and procedures used to achieve the anticipated outcomes.

Outputs: These measures reflect the products or services that result directly from the activities.

Outcomes: These measures reflect the effect of activities and outputs on the program’s mission, goals, and objectives.

In the process of formulating a framework for long-term assessment of PATH, its mission, goals, and objective statements were refined and linked to activities, outputs, and outcomes based on the following principles:

PATH’s goals and objectives should be concise and meaningful, and should emphasize activities where desired outcomes can be identified.

There should be a logical link between PATH’s mission, goals, and objectives.

Every program activity should be clearly identified with a goal, although a single activity may support more than one goal or objective.

Outputs, intermediate outcomes, and long-term outcomes should be identified for each major activity. (See Appendix E for suggested evaluation questions.)

ASSESSMENT DATA

Performance assessment needs to be designed into the structure of each activity. Criteria for performance measures should be part of the annual reporting requirements of contracts and grants so that they can be easily aggregated to determine how the results affect the program’s mission and goals. The metrics or performance indicators should be as precise, unbiased, and stable as possible to allow for comparisons across activities; they should also be resistant to manipulation (Jaffe, 1998).

Such assessments should include a measure of outputs as defined in the activity plan and describe how well the specific activity addresses its intended scope, the credibility of the process, the quality of the information generated by the activity, and how well the information has been presented and disseminated. Research should be subject to a peer review process to evaluate the impact and quality of the effort and the written materials for publication. Web pages should also be peer reviewed to assess the accuracy, bias, and completeness of the information presented as a product of the PATH program (NRC, 1999; OECD, 1998). Assessment questions related to the program’s mission and goals should be applied to each activity to assess how much it contributes to reducing barriers, disseminating information, fostering research, increasing the development or diffusion of technologies, and improving housing performance.

Because it is a partnership, assessing the collaborative effort is a critical part of assessing the effectiveness of PATH. This includes evaluating both the program’s communications with its broad range of stakeholders and partner contributions of financial and in-kind support for PATH activities. It is also important to understand partner responses to PATH initiatives to determine what might have happened without the initiatives. This will require an independent body to undertake direct discussions with PATH partners and a skilled analysis of their responses. In addition to activity-based assessment, more general analysis of mass media exposure and surveys of stakeholders can determine how well the program is communicating with its partners and customers.

The committee recognizes that there is a cost to increased performance evaluation; this cost may reduce the quantity of output, but the potential for improved quality should increase PATH’s impact on the outcome measures of innovation and housing performance. Undertaking the studies over a period of 3 to 5 years can reduce the annual budget impact of generating this type of assessment data.

Program Outcome Data

Examples of efforts to measure innovation can be found in surveys undertaken to evaluate innovation for general economic development through Community Innovation Surveys (CORDIS, 2002) based on the Organization for Economic Cooperation and Development (OECD) Oslo Manual (OECD, 1997). Other examples are the Census Bureau’s Manufacturer’s Innovation Survey undertaken for the NSF and similar efforts by Yale and Carnegie Mellon Universities. These enterprise-based surveys ask questions that identify technologies that are new to the firm, new to the industry, or new in other ways (NRC, 1997). By reaching all levels of the housing supply-and-demand chain, innovation surveys can help explain individual roles in innovation as well as gauge the rate of technology development and diffusion. Innovation surveys might gather data on:

Expenditures on activities related to R&D and other innovation processes;

Output of incrementally and radically changed products;

Sources of information relevant to innovation;

Technical collaboration for R&D and technology transfer;

Obstacles to innovation; and

Factors promoting innovation.
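An enterprise-based survey along the lines described above could be tabulated as follows. The response records, category names, and obstacle labels here are invented for illustration; they are not actual survey data or an official survey schema.

```python
# Sketch: tabulating enterprise innovation-survey responses.
# The records below are invented examples, not actual survey data.

from collections import Counter

responses = [
    {"firm": "A", "new_to": "firm", "obstacles": ["code approval", "financing"]},
    {"firm": "B", "new_to": "industry", "obstacles": ["code approval"]},
    {"firm": "C", "new_to": "firm", "obstacles": []},
]

# Share of firms reporting innovations new to the industry (a rough
# proxy for radical, rather than incremental, change).
radical_share = sum(r["new_to"] == "industry" for r in responses) / len(responses)

# Frequency of reported obstacles to innovation across all firms.
obstacle_counts = Counter(o for r in responses for o in r["obstacles"])

print(f"new-to-industry share: {radical_share:.2f}")
print(obstacle_counts.most_common())
```

Even this toy tabulation shows how survey data could separate the new-to-firm/new-to-industry distinction from the obstacle and promoter questions listed above.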
PATH has limited influence on the ultimate impact or market penetration of a technology—this will be determined primarily by its relative cost and performance advantages—but PATH can probably influence the rate of diffusion, or the time required to realize maximum market penetration. The impact of PATH can be evaluated by tracking the rate of adoption of technologies in the PATH inventory or used in demonstration and evaluation projects. Frank M. Bass created a model for assessing change in the rate of diffusion of innovation over time (Figure 5.2). The Bass model, in its idealized form, is represented by an “S” curve that accounts for the influence of mass communication primarily in the early stages of diffusion and the influence of interpersonal communication as it expands and declines over time (Rogers, 1995). This model is idealized because it assumes that market potential is constant over time and ignores possible changes in the nature of the innovation, competing innovations, and other variable market factors such as price, supply, and demand. It is nevertheless a valuable tool for evaluating the diffusion of innovation (Mahajan et al., 1990).

FIGURE 5.2 Bass innovation adoption curve. SOURCE: Payson Center (2001).

Statistical measures need to be coupled with comprehensive interviews to assess personal experience, use of or exposure to PATH activities, and actions that might have occurred without PATH activities. By including questions that determine the influence of PATH activities on innovation processes, the impact of PATH on housing can be inferred. The primary outcome should be evaluated over a longer time frame than specific program activities (NSF, 1997); 3- to 5-year intervals are appropriate. An initial effort will be needed to establish a performance baseline.

FINDINGS AND RECOMMENDATIONS

Finding: Because PATH is a new and evolving program, expert review of the program’s performance and its response to reviews is especially important to its ongoing management. Effective program assessment is essential if the PATH program is to be efficiently managed. The program should be evaluated based on whether the activities it undertakes are likely to help achieve its goals, and on the quantity and quality of the results of these activities. If PATH undertakes the right mix of high-performing activities, then improvement in measures of innovation in the housing industry can be attributed, at least in part, to PATH.

Recommendation: Criteria for PATH program evaluation should be made a part of all grants and contracts. Additional performance measures should be designed to evaluate how the program is affecting innovation by individuals, enterprises, and the housing industry. Performance data should be reviewed independently so that assessment and interpretation of reported performance metrics are unbiased. This review could help analyze data on the results as well as evaluate performance of the program’s strategic planning and management.
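The S-shaped Bass curve discussed above has a standard closed-form solution for the cumulative fraction of the market that has adopted by time t. The sketch below computes that curve; the coefficient values for p (external, mass-media influence) and q (internal, word-of-mouth influence) are illustrative defaults, not estimates from housing data.

```python
# Sketch of the Bass diffusion model's cumulative adoption curve.
# p = coefficient of innovation (mass communication, external influence)
# q = coefficient of imitation (interpersonal communication, internal influence)
# The default values below are illustrative, not estimated from housing data.

import math

def bass_cumulative(t, p=0.03, q=0.38):
    """Fraction of ultimate market potential adopted by time t (closed form)."""
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

# The curve is S-shaped: slow early growth driven mainly by p, rapid
# middle growth driven by q, then saturation near full penetration.
for year in (1, 5, 10, 20):
    print(f"year {year:2d}: {bass_cumulative(year):.2f}")
```

In an assessment, p and q would be estimated by fitting adoption data for technologies in the PATH inventory; a shift toward faster diffusion after a PATH intervention would show up as a change in the fitted coefficients.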

REFERENCES

CORDIS (Community Research and Development Information Service). 2002. The Community Innovation Survey Today. Available on the Web at http://www.cordis.lu/innovation/home.html. Accessed February 24, 2003.

Hatry, H. 1999. Performance Measurement: Getting Results. Washington, D.C.: The Urban Institute Press.

Jaffe, A.B. 1998. Measurement Issues. In Investing in Innovation: Creating a Research and Innovation Policy That Works. Cambridge, Mass.: MIT Press.

Mahajan, V., Muller, E., and Bass, F.M. 1990. New Product Diffusion Models in Marketing: A Review and Directions for Research. Journal of Marketing 54 (1): 1-26.

NRC (National Research Council). 1997. Industrial Research and Innovation Indicators: Report of a Workshop. Washington, D.C.: National Academy Press.

NRC. 1999. Evaluating Federal Research Programs: Research and the Government Performance and Results Act. Washington, D.C.: National Academy Press.

NSF (National Science Foundation). 1997. User-Friendly Handbook for Mixed Method Evaluations. Available on the Web at http://www.ehr.nsf.gov/EHR/REC/pubs/NSF97-153/START.HTM#TOC. Accessed March 15, 2002.

OECD (Organization for Economic Cooperation and Development). 1997. Proposed Guidelines for Collecting and Interpreting Technological Innovation Data: The Oslo Manual. Washington, D.C.: Organization for Economic Cooperation and Development.

OECD. 1998. Policy Evaluation in Innovation and Technology: Towards Best Practices. Washington, D.C.: Organization for Economic Cooperation and Development.

Payson Center. 2001. Diffusion of ICT Innovation for Sustainable Human Development. Available on the Web at http://www.payson.tulane.edu/research/E-DiffInnova/diff-prob.html. Accessed September 18, 2002.

Rogers, E.M. 1995. Diffusion of Innovations. New York: Free Press.
