The discussions of what works in developing character pointed to questions about how specific practices and strategies can be effective. The importance of both implementing programs faithfully and evaluating their effectiveness has been documented in many fields,1 and the workshop turned next to ideas from both research and practice on these key elements of character education. Researchers Joseph Durlak of Loyola University and William Trochim of Cornell University discussed implementation and evaluation, respectively. Mike Surbaugh of the Boy Scouts of America and Donald Floyd of the National 4-H Council (retired) offered perspectives from practice.
Durlak focused on the opportunities and challenges in program implementation he has identified in research focused on out-of-school settings, and he began with the simple message that “unless you attend carefully to effective program implementation, you will probably be wasting valuable time, effort, and resources on new programs that are unlikely to be successful.”
Durlak defined implementation as “the ways a program is put into practice and delivered to participants,” and he noted that a strong literature on it has developed over the last 20 years, which he reviewed for the workshop (Durlak, 2016). One point this research has made obvious, in his view, is that “the program you think you are doing almost never turns out to be the program that actually occurs.” His emphasis on the importance of implementation applies not just to programs, but also to practices or strategies, as they are carried out in the real world, he added, and is based on a consistent research finding.

1 See, e.g., Centers for Disease Control and Prevention, http://www.cdc.gov/violenceprevention/pdf/chapter1-a.pdf [December 2016]; http://www.rti4success.org/relatedrti-topics/implementation-evaluation [December 2016]; http://www.nccmt.ca/resources/search/71 [December 2016].
A key reason why programs and practices often look so different from their designs, in Durlak’s view, is the “wide chasm” between the worlds of research and practice. Despite good intentions on both sides, he noted, very few of the many programs and strategies researchers evaluate and find to be effective are adopted in the world of practice. The results researchers quantify, he added, rarely translate into comparable degrees of effectiveness when the ideas are put into practice. There has been little incentive for researchers to offer practitioners concrete assistance in implementing their programs, he added, but a focus on implementation can be a “bridge between research and practice.” Often, he added, when researchers and practitioners collaborate in focusing on effective implementation, programs turn out to be even more effective than researchers had been able to show on their own.
Components of Effective Implementation
There are four components of effective program implementation for which the evidence is particularly strong, Durlak explained:
- Fidelity: the degree to which the major components of the program have been faithfully delivered
- Dosage: how much of the program is delivered
- Quality of delivery: how well or competently the program is conducted
- Adaptation: any changes made to the original program
The research also shows that when attention to the quality of implementation increases, “outcomes improve,” Durlak noted. Conversely, poor implementation can render a program ineffective, he added.
For example, Durlak observed, a meta-analysis of school-based programs designed to improve social and emotional learning (targeting, for example, conduct problems, emotional distress, and antisocial behavior) showed that results were significantly larger in settings where the program was implemented effectively than in settings where it was not (Durlak et al., 2011). Many of the effect sizes were double or more than double for the effectively implemented programs, he noted.
More than 20 factors have been identified as important in implementation, Durlak noted. They reflect not only the influence of the organization and its staff but also that of others involved, operating at the community, provider, program, and organization levels. At the community level, funding, as well as political and administrative pressures or mandates, can have an effect. The degree to which providers perceive that they need a program, their degree of commitment, and the benefits they expect from it all play a role. Also important is the degree to which the program is compatible with the setting and the population it is intended to serve. Similarly, the climate within the organization in which the program is to be implemented, its readiness to take the program on, and the degree to which its leaders and staff share the program’s vision are influential.
Durlak highlighted three of the specific factors that have been shown to be important to effective implementation: professional development, the possibilities for adapting the program or system, and effective leadership at the host organization. Of these, the one he views as most important, indeed “absolutely essential,” is high-quality professional development, including both training and ongoing support. If it is not possible to include this element, he said, the program should not be implemented.
Knowing that no program is ever implemented exactly as designed, he went on, it is also important to be deliberate about the ways in which those implementing a program will deviate from the design and why. Perhaps the population the organization serves has different experiences, needs, or interests, for example. Adaptations can make programs more effective, he noted, but it should be clear who has the authority to make decisions about adaptation and how they will be documented. This is one reason why ongoing support is so important, he added: leaders and staff should have a structure for reviewing possible adaptations and the intended and unintended results they may have. It is also important to note, he added, that adopting any new program will alter the host organization in some way. Guidance and consultation in securing funding, planning for implementation, recruiting and training staff, and meeting other challenges help leaders implement the program effectively. Strong leaders can also help to motivate staff, delegate authority, and negotiate challenges that arise, he added.
Barriers and Supports
Research has also pointed both to barriers to effective implementation and to circumstances or features that foster it. Barriers include competing priorities, both for young people’s time and for the time and resources adults have for addressing their needs. Another is insufficient infrastructure or resources, which might include staff, physical space, and funding. Available leaders or staff might lack sufficient knowledge or skills to implement the program, may have negative attitudes about the program, or may simply be resistant to change. If these barriers are present and cannot be overcome, Durlak suggested, “the program is not going to work.”
However, there are elements that can help organizers facilitate implementation and overcome barriers, Durlak continued. First is effective leadership and clear allocation of responsibility and lines of accountability. Another is flexibility within an organization, so leaders can, for example, reallocate resources or seek external ones in response to unanticipated needs. Coordination across all the entities involved, including local or state agencies or other governmental bodies, is also valuable, Durlak pointed out. Character education programs are usually components of policies intended to affect more than a single program, so the way the policy is implemented is just as important as the implementation of a particular program or practice.
The staff who are implementing a program or practice will need ongoing feedback, Durlak noted, so they can adapt and improve as they proceed. The feedback should be offered in a collaborative way, together with support for assessing the feedback and figuring out how to respond. Programs can use systems for tracking data and efficiently sharing feedback to make this part of the process more systematic, he added. Ongoing research can then build on the feedback by systematically examining what can be learned from the implementation experience in a particular context.
“We don’t have a nationwide structure for implementation,” Durlak pointed out, even though it is important in virtually every sector in which evidence-based interventions are used. The bridge between research and practice Durlak called for requires an implementation system that includes:
- good policies, regulations, and mandates;
- sufficient funding;
- personnel to train and consult with implementing staff; and
- adequate time for assessment, reflection, and making needed improvements.
A few groups have taken on this challenge, Durlak noted, including the Collaborative for Academic, Social, and Emotional Learning (CASEL); Communities That Care; and the Getting to Outcomes program at RAND Corporation.2 This is not a comprehensive list, he noted, and many programs do not have access to the kinds of guidance such organizations offer.
Effective implementation requires collaboration, Durlak concluded. Policy makers and funders, program developers and researchers, leaders and staff at local organizations, trainers and consultants, and the young people who are to receive the services and their families all have a profound interest in the outcome and also have ideas to contribute. Sadly, he noted, these five groups rarely collaborate, and in the United States it is primarily individual foundations that make such collaboration possible when it does occur. Other countries, he pointed out, have institutionalized systems for implementation and evaluation. Effective implementation cannot be rushed, he observed: some programs are quite complicated and require significant organizational change and time to develop. It is only worth pursuing programs, in Durlak’s view, if the resources to support effective implementation are available.

2 See http://www.casel.org [December 2016]; http://www.communitiesthatcare.net [December 2016]; and http://www.rand.org/health/projects/getting-to-outcomes.html [December 2016].
Many character education programs have never been formally evaluated, William Trochim pointed out, and “we still don’t know if and how they work.” Despite the existence of resources such as the What Works Clearinghouse (see Chapter 3), there is very little formal evidence for program designers and practitioners to use, he noted. The reason, in his view, is that “we don’t have the time and resources to invest in evaluation.” Evaluation is often one of the lowest priorities within organizations, and few have staff with the expertise to conduct it soundly, he said.
He collaborated with workshop steering committee member Jennifer Brown Urban to apply findings from recent scholarship linking evaluation in other fields to the character education context (Trochim and Urban, 2016). Their objective was to offer a new way of thinking about evaluation, which Trochim described as an evolutionary systems approach.
An Evolutionary Approach
Trochim began with the connection to the theory of evolution, based on the idea that programs are like organisms. Like a potato, he suggested, a program is one in a population of varieties within the same species. The theories that guide the program—stated or assumed expectations about what the program will do and how—are the essential instructions for the program, akin to the genetic code that instructs an organism how to develop. The inevitable variations in the way the program is implemented each have consequences. In subsequent generations of the program, Trochim said, natural selection determines which of these variations continue and which are discarded.
Trochim did not intend this model only as a metaphor, he noted: he argued that programs evolve by the same rules that govern the biological process of natural selection. To elaborate, he noted that the potato was not
native to Europe, but when a single variety was introduced it soon became a dietary staple, particularly in Ireland. A blight infected this variety during the 1840s, and because Ireland relied so heavily on it, the result was widespread famine. Trochim suggested that there is a parallel danger for character education in saying “we know what works—just do that.”
Natural selection in the context of programs or interventions, he continued, is a process of blind variation and selective retention. Blind variation, he explained, does not mean that those who implement programs are acting without reason, but rather that they make choices without having information to indicate whether they will work or not. At the same time, he added, people in these situations do draw on experience and knowledge in making their decisions, even if it does not come from formal evaluations. Thus they selectively retain particular approaches based on many factors, including personal, social, and sociopolitical influences.
Charles Darwin’s purpose in exploring the way organisms develop, Trochim continued, actually grew out of an interest in artificial selection, the breeding of plants and animals to produce new ones that have desired traits. Trying to develop character using various interventions, he suggested, is another form of artificial selection. “We are breeding ideas and trying to see which ones will survive,” he said.
Evaluation plays a critical role in this endeavor, in Trochim’s view, because it is the basis for judging whether or not an intervention worked and should be selected. It also plays a role in the development of variations to be tested, he explained: “We want a system that generates varieties [of ideas] and also allows us to judge and select them.” Every program has a life cycle—it progresses through phases or stages—and over time programs and theories evolve. Eventually, if one looks at character development programs across time, it is possible to trace the “family tree” of the programs and the theories that guide them.
Programs evolve in the way organisms do in part because their development can be compared to human development, Trochim pointed out. Thus, another key to the approach to evaluation that he and Urban advocate is relational developmental systems theory, which, as Schmid Callina had explained, focuses on the reciprocal influences that individuals and their contexts have on one another (see Chapter 2). Just as individuals have the potential to adapt across the life span, Trochim explained, programs develop and are adapted in the context in which they are implemented. Trochim observed that Darwin never used the phrase “survival of the fittest”; his argument was that it is the species that are best able to respond to changes in their environments that survive, not the strongest or most intelligent.
Thinking about evaluation has increasingly come to include a focus on the idea of systems, the interaction of many parts, Trochim explained. The interactions between the individual organism (or program) and its environment occur in the context of nested systems, and effective evaluation requires understanding of how the relevant systems interact. This is a relatively new application of an ancient idea, Trochim noted. Many philosophers and scholars have developed systems-based theories, or ideas about the way systems observed in nature or society are organized and structured.
Trochim highlighted elements that are particularly relevant to evaluation. One is that a program is a whole that is greater than the sum of its multiple parts. These parts contribute to both static processes, those that tend to operate in linear, predictable ways, and dynamic ones, which are not predictable. Like an individual, he added, a program functions within a nested hierarchy of contexts, from the personal, local, and regional through the national and global. Each stakeholder reflects these contexts in a different way and will have a different perspective on the program, its goals, and its boundaries. For example, practitioners may focus on engaging individual young people, while a funder may focus on broad societal goals.
Applying These Ideas to Character Development Evaluation
Building evaluation into the culture of all the relevant systems is the key to making it effective, Trochim explained, and he reviewed ideas that apply at the organization and program levels.
Organizations need to adopt “evaluative thinking,” Trochim observed, which has been defined as “critical thinking applied in the context of evaluation, motivated by an attitude of inquisitiveness and a belief in the value of evidence, that involves identifying assumptions, posing thoughtful questions, pursuing deeper understanding through reflection and perspective taking, and informing decisions in preparation for action” (Buckley et al., 2015, p. 378).
In practice, he explained, this means having an evaluation policy: a rule, regulation, principle, or norm that the group or organization uses to guide decisions and actions. In practice, he noted, all organizations and groups already have evaluation norms that guide them, but they may be unwritten and informal. The idea, he added, is that as evaluative thinking takes hold, the people in the organization gradually come to incorporate these ideas into their work without necessarily thinking consciously about evaluation. People generally welcome the opportunity and tools to think through what they are doing, he noted. There are numerous methods
organizations can use to build their evaluation capacity (described more fully in Trochim and Urban, 2016).
Trochim, Urban, and their colleagues have developed the Systems Evaluation Protocol (SEP),3 a set of workbooks and other resources available on the Internet that programs and organizations can use to apply systems-based evaluation principles in a way that is tailored to their own needs. The SEP provides a step-by-step guide, Trochim explained, for introducing the ideas that are key to the systems approach at the program level. The SEP engages users in an extensive planning process, he noted, which is the foundation for an effective evaluation system. A key element of this process, for example, is a stakeholder analysis that identifies the key goals and interests that need to be considered in implementing the program. Another step is the development of a logic model that allows staff to visualize the roles of each program component and their causal links to the activities they do each day.
Trochim closed with a reminder that applying the evolutionary, ecological, and developmental systems approach to evaluation means focusing on the interactions between a program and the systems in which it is nested. Doing this requires people to expand their views of what evaluation encompasses and to keep the priority on the goals of the program.
Mike Surbaugh and Donald Floyd offered two perspectives based on their experiences as leaders of national youth organizations.
Surbaugh noted how frequently he and leaders of other organizations like the Boy Scouts talk with adults eager to discuss the profound impact their childhood experience with the organization had on their lives. “We don’t know specifically what it is in our program that leads to character development,” he observed, “but it’s of paramount importance to find out.” Reflecting on his own experience as a Boy Scout, he realized that he did not recognize at the time the long-term benefits the experiences might bring. He noted his appreciation for the guidance the workshop papers offer organizations trying to better understand how they can help young people develop character.
“Character is not proprietary,” he pointed out, and ideas about how to develop it should be “completely open-source.” He said he does not think of the mission of the Boy Scouts as being in competition with another organization, such as the National 4-H, or feel the need to demonstrate that the Boy Scouts program does a “better job” of developing character. It would be a waste of time, in his view, for each organization to focus on identifying its own set of priority traits or competencies, or on debating the right terminology to use in discussing these objectives. “We all know it when we see it,” he noted.

3 See https://core.human.cornell.edu/research/systems/protocol [February 2017].
However, he added, most youth-serving organizations have not been formally evaluated, and “we have little idea how they work.” The Boy Scouts have participated in several studies, he noted, and other large legacy organizations have had the scale and resources to collaborate with researchers to examine questions about their programs and strategies. But the large numbers of small youth-serving organizations, including sports teams, religious youth groups, and school programs, lack resources and tools for evaluation.
What also concerns Surbaugh is that many young people do not have the opportunity to participate in any program or activity that can develop character. Many more kinds of activities are available to families today, he noted, not all of which have benefits for character on their own. Parents do not necessarily have information about the distinctions among the available options for their children and weigh many criteria in choosing. He suggested that parents may focus on the activities their children say they prefer without looking at the long-term picture of how they will be spending their time. For example, many young children enjoy organized sports, which are time consuming and may crowd out other activities, yet many drop out of sports by middle school. At that stage, Surbaugh noted, it can be late to engage a child in a positive youth-development activity that fosters core values.
His hope is that structures that promote character development can be inserted in a range of activities, so that organizations of many kinds can make a difference in young people’s lives.
Floyd, who recently retired from the National 4-H Council, explained that 4-H is a community-owned, nonhierarchical institution focused on positive youth development. In its 115 years, 4-H has emphasized the importance to young people of developing a long-term relationship with a caring adult and having the opportunity to practice real-life skills and leadership in an environment in which it is safe to experiment and fail. Based on his experience leading 4-H, he offered three ideas about why investing in implementation and evaluation—specifically, allocating sufficient funding and other resources to do these things well—is so difficult for organizations of this kind.
First, governing boards, CEOs, and executive teams are “completely ill-equipped” for in-depth conversations about research and evaluation, Floyd suggested. There are nearly 250,000 youth-serving organizations in the United States, and their leaders are likely focusing primarily on marketing, fund-raising, taxes, personnel problems, and the like, rather than evaluation, he noted. People in these positions need simple ways to think about research and evaluation, in Floyd’s view, and few of these are available. Floyd also noted that few governing boards offer seats to representatives
of the populations served by the organization. Involving young people in these roles, he said, “changes the conversation,” refocusing the board’s attention on the organization’s mission.
A second challenge Floyd noted is that leaders are “trained to be competitive.” Collaboration is seldom included in performance goals for people in these roles, Floyd pointed out, and he endorsed Surbaugh’s point that there is no “proprietary” 4-H or Boy Scouts version of good character. Organizations are not well prepared to collaborate, and the reality is that they do have to compete for funding, he added.
Collaboration is not impossible, however. Floyd described Imagine Science, a collaboration among national youth organizations to address the needs of young people who have lacked access to such organizations, with a focus on STEM (science, technology, engineering, and mathematics) education.4 The first thing that the collaborators had to do, Floyd observed, was to assert their authority in their organizations to break some rules. The group identified some principles, he continued, but many were hard to implement. For example, the leaders wanted transparency to be part of the project but they found that they had to give their local affiliates formal permission to collaborate and share resources with other organizations.
Finally, Floyd noted that the rich workshop discussion presented a complicated array of ideas, and he worried about “overcomplicating” the task for leaders and practitioners. 4-H, which began as the idea of a young teacher who had neither a plan nor any resources, has grown to serve young people in 100,000 clubs in the United States and many more across 50 countries, he pointed out. The reason the 4-H idea has spread so far and remained so consistent, he suggested, is that it is simple and time-tested.
In discussion, participants focused on questions about how youth organizations can respond in practical ways to new ideas. Several commented on the difficulties of collaborating across organizations. There has certainly been change, one noted: 25 years ago it would have been astonishing to see a program such as Imagine Science that involved true collaboration across major organizations. Allowing independent researchers the access they need to study their work was also once rare for such organizations, but it is difficult to know how widespread these sorts of changes are, this person added.
Another noted that funders may hold the key. They encourage their grantees to collaborate and to evaluate their work, this person noted, but they seldom offer funding for these purposes. They are in a unique position
to promote collaboration and the sharing of both tested experience and innovative ideas. For example, a funder could work with the grantees under a particular funding portfolio to identify commonalities and facilitate the sharing of ideas. The large national organizations can also play a leadership role, as the discussions of 4-H and the Boy Scouts illustrated, noted another. The true competitors for youth organizations are not other youth organizations, another person commented, but video games, smart phones, and the many other kinds of activities that occupy young people’s time. “It’s the kids who are not involved in any organized youth activity we need to reach,” he added.