including youth well-being and development. Successful improvement in the targeted areas of development may be met with increased funding. Failure to meet specified goals may result in the provision of technical assistance to overcome problems or in the loss of funding or autonomy. Such accountability practices are becoming increasingly common in state and local educational assessment efforts, for example (Brown and Corbett, forthcoming). Community-level indicators are rarely used for this purpose, however, because individual programs are not generally expected to have a community-wide impact on youth well-being. Instead, they are held accountable for outcomes among their program participants.

Reflective Practice

Community programs for youth can also use social indicator data to monitor their own success and to guide program refinement (i.e., they can use social indicator data as part of reflective practice). At its most formal, this involves developing a detailed model relating program activities to interim and long-term goals for participating youth (United Way of America, 1999; Gambone, 1998; Weiss, 1995). Ideally, such a model of the links between program activities and youth outcomes is grounded in existing research (where available), in theory, and in the shared beliefs of those running the program. Both program activities and youth outcomes can then be measured and tracked over time. Failure to produce the expected results could indicate inadequate implementation of some part of the program, or it could call into question one or more of the model's underlying assumptions. Practices, the model, or both can then be reevaluated and the program modified accordingly.

In many respects reflective practice functions like program evaluation, even though it lacks the methodological rigor required to draw firm causal inferences about the relations between program activities and youth outcomes (see Chapter 7 for a discussion of evaluation methodologies). However, the level of certainty required to qualify results as scientific knowledge is not needed to produce good guides for responsible program management. And programs can sometimes incorporate practices associated with experimental evaluations into their reflective practice. For example, they can compare social indicator data from their own catchment area with social indicator data collected longitudinally in other comparable catchment areas (simulating a quasi-


