Daryl Chubin,1 Catherine Didion,2 Josephine Beoku-Betts,3 and Jann Adams4
“Best practice” is a term often uttered but seldom achieved. “Best” compared to what? Measured how? In one cultural context or discipline, or many? The more realistic and attainable status for a program is often its “promise.”
“Promising programs” is a term popularized in the 2004 report from the public-private initiative known as BEST—Building Engineering and Science Talent. In its report, A Bridge for All, 124 university-based, undergraduate-centered STEM5 programs operating in the United States were reviewed, using the National Science Foundation (NSF) model of employing a panel of experts drawn from a range of relevant disciplines.6 This conceptual discussion is thus anchored by an empirical base that underscores the challenge before us. For we seek to look across cultural and political contexts to ask: What translates beyond disciplines and unique conditions to engender and sustain a promising intervention program?
There are many caveats and concerns to address. Above all, context matters! While effective programs are a universal vehicle for intervening in the status quo, any program is embedded in a particular national and local cultural context. Furthermore, sponsors, program organizers/leaders, and the population they are intended to serve bring different expectations to the programs in which they participate. This is no less the case for programs aimed at women’s educational transitions and their articulation with employment structures and opportunities.
A robust literature on differences in preparation and participation by certain populations (women, members of racial/ethnic groups, and those with other significant characteristics of “difference,” visible or non-apparent) reveals the unevenness, due to residence, education, bias, tradition, and other factors, of what Max Weber called “life chances.”7 Social inequalities give rise to the need for programs serving those who are underserved.
__________________
1 Daryl Chubin, senior advisor, American Association for the Advancement of Science.
2 Catherine Didion, director, Committee on Women in Science, Engineering, and Medicine, the National Academies.
3 Josephine Beoku-Betts, director, Women’s Studies Center at Florida Atlantic University.
4 Jann Adams, associate professor of psychology and associate dean of the Division of Science and Mathematics, Morehouse College.
5 Science, technology, engineering, and mathematics (STEM) is a commonly used acronym in the United States.
6 For details on the methodology and analysis, see BEST, 2004. A Bridge for All: Higher Education Design Principles to Broaden Participation in Science, Technology, Engineering, and Mathematics. San Diego, CA: BEST [online]. Available at http://www.bestworkforce.org/PDFdocs/BEST_BridgeforAll_HighEdFINAL.pdf.
7 For example, on women in science as an underserved population, see National Research Council. 2007. Beyond Bias and Barriers: Fulfilling the Potential of Women in Academic Science and Engineering. Washington, D.C.: The National Academies Press. On minorities in science, see National Research Council. 2011. Expanding Underrepresented Minority Participation: America’s Science and Technology Talent at the Crossroads. Washington, D.C.: The National Academies Press.
Programs—Caveats and Concerns
Even when a program is created, its design may differ from its implementation. Flaws in either harbor implications for the future: Will the program thrive long enough to be institutionalized locally, adapted and transferred to similar populations at other sites, and perhaps scaled to serve different populations and contexts? “Sustainability” is often used as a test of impact. If a program survives beyond its original outside funding and leverages other resources to continue with succeeding cohorts of participants, we tend to call it a “success”; if it cannot, the program fades and is considered a failure.
Of course, such ideal types are rigid and raise many questions, including differing expectations, measures of impact and success, and the role of program leaders and practices. In short, all programs, even the best ones, evolve. Can we capture their “life cycle,” identify similarities and differences in program content and sequence, and trace intentions (design) through behavior (implementation) to evaluation (measurement)?
What does the program do? What is its core character and purpose? Any program should cite evidence that supports whatever is planned as a means of advancing the policy or mission of the organization offering it, and should be tailored to accomplish that goal.
How the program is executed on behalf of the served population is a translation of grand plans into on-the-ground delivery of services.
Following BEST as a template, we can employ a small set of criteria as a “wish list” for judging empirically what a promising program should entail. Those criteria include:
• the form of intervention (typically more than one kind of activity) designed to produce a desired outcome;
• a specified target population;
• a track record of minimally five years of operation;
• evidence of positive outcomes (ideally documented through third-party evaluation or a research study, preferably with a comparison group); and
• findings that inform the operation of similar programs.
If all of these criteria are viewed as requirements for what constitutes a promising program, and not as a menu from which program organizers can pick and choose, then we conclude that few programs qualify. In the BEST population, fewer than 10 percent of the nominated programs passed muster as “promising” or “exemplary” according to the expert review panel. Clearly, funding and leadership are keys to longevity. However, the environment must be receptive to moving a “soft-money” project into an organization’s operating budget. This transition signals that a marginal effort by a few contributes to the mainstream of the organization’s mission, thereby warranting “hard-money” support.
Program Case Studies—Developing and Developed Worlds
To demonstrate variations across cultures, we highlight case studies of two successful programs. One comes from the developing world and one from the developed world.
The Organization for Women in Science for the Developing World (OWSDW, formerly the Third World Organization for Women in Science, or TWOWS) was established in 1989 as an international non-governmental organization. The TWOWS/OWSDW Postgraduate Training Fellowship Program, established in 1998, has funded young women scientists under the age of 40 to pursue postgraduate training in centers of research excellence in the global South.8
The Training Fellowship Program draws on the excellence of host training institutions in the global South. Africa is disproportionately represented among countries applying for and receiving awards, compared to the Asia-Pacific and Arab regions. The dominance of Nigeria among fellowship applicants and recipients suggests that the program has difficulty reaching the full targeted population. A majority of former Fellows now work in university research institutes in Africa, even if not in their home countries. The program has thus generated some South-to-South exchange, stemming to some extent the “brain drain” to the North. Though uneven in impact, it has been particularly successful in launching the careers of women scientists.
Perhaps the most promising gender-conscious program focused on science and engineering faculty in the United States is NSF’s ADVANCE program. Established in 2001, ADVANCE aims to increase diversity in the science, technology, and engineering workforce by increasing the representation and advancement of women in the professoriate. We look at this program not from the NSF perspective but through the eyes of two mature, indeed “graduated,” ADVANCE projects, both initiated in 2001 (one at the University of Michigan, the other at the University of Wisconsin).9 These projects illustrate how an externally funded program can be adapted to change the structures of the institution in which it is implemented, what NSF calls “institutional transformation.” ADVANCE principles have migrated to the core of a few U.S. research universities, while most still struggle to sustain the drive toward transformation.
For example, Michigan’s ADVANCE program produced an increase in the number of tenure-track women hired. Additionally, nine women were appointed to departmental chair positions. Foremost among the key interventions were the committees that managed ADVANCE, composed of 45 senior faculty members and administrators who became “organizational catalysts” for change.10 Sponsorship by NSF lent credibility to the effort as an institutional initiative to improve science research. The target of the intervention was the institutional culture, not the women in the academy. The Wisconsin ADVANCE program focused on institutional “climate.” Eleven “climate indicators” assessed individual faculty perceptions of how respected
__________________
8 For more information on the TWOWS/OWSDW Postgraduate Training Fellowship Program, see: http://owsdw.ictp.it/.
9 Meyerson, D. and M. Tompkins. 2007. “Tempered Radicals as Institutional Change Agents: The Case of Advancing Gender Equity at the University of Michigan.” Harvard J. Law Gender. 30:303-322; and Pribbenow, C.M., J. Sheridan, B. Parker, J. Winchell, D. Benting, K. O’Connell, C. Ford, R. Gunter, and A. Stambach. October 8, 2007. Summative Evaluation Report of WISELI: The Women in Science and Engineering Leadership Institute.
10 Sturm, S. 2006. “The Architecture of Inclusion: Advancing Workplace Equity in Higher Education.” Harvard J. Law Gender. 29:247-289.
they felt by their colleagues, students, staff, and chairs; how included, valued, recognized, or isolated they felt; and whether they believed they “fit” in their departments.11 In short, women faculty perceived the departmental “climate” as more negative than did their male counterparts. The university responded with a series of intervention workshops for chairs and senior faculty that improved awareness and fostered perceptions of a more supportive campus climate.
With our two case studies in mind, we could fashion a matrix of criteria (not presented here) for assessing how promising other programs appear to be. Such a matrix is a research summary that can serve as a scorecard of STEM intervention programs directed at increasing the number, and improving the experience, of participants. Entries in the cells mark progress toward fulfilling each criterion. The criteria would include years of operation (5+), multiple interventions, and evidence of outcomes.
Life Cycle of Programs and Beyond
Structures and practices that endure reflect the life cycle of programs. Those that contribute to achieving institutional goals tend to be retained and made available to all. Such “mainstreaming” signifies, strange as it sounds, the institutionalization of change. Beyond the local institution, such changes should have applicability to other, similar institutions. This process of adaptation is a means of spreading and scaling “what works”; the language of “adoption” would denote a too-linear transfer of knowledge and practice, without integrating cultural changes into a new setting replete with its own traditions.12 Scale-up is difficult work. Organizations tend to resist innovations that come from outside “their own backyard,” that are not of their own invention, or that lack credible campus advocates.
In an effort to move systematically from program context to description to inference, we are engaged in the academic equivalent of converting theory to practice. Documenting program characteristics is but a step toward understanding why interventions have produced differences in participation among any underserved group in science.13 This is where research transcends evaluation and influences stakeholders in sponsor, performer, and policy-making communities to recognize promising programs.
In sum, a program can be conceptualized, rationalized, and described as an organized response to a problem around which people coalesce. How that response is implemented will determine whether it reaches the intended audience and has the intended effects. Through research and evaluation, program leaders learn about the strengths and weaknesses of their interventions: what is working, how to modify activities, magnify impacts, expand reach, and measure with greater precision. The sum of these adjustments creates a program history and, with it, accountability to sponsors and host institutions that respect “promise.”
__________________
11 Sheridan, J., C. Pribbenow, E. Fine, J. Handelsman, and M. Carnes. June 2007. “Climate Change at the University of Wisconsin-Madison: What Changed and Did ADVANCE have an Impact?” Women in Engineering Programs and Advocates Network 2007 Conference Proceedings.
12 Fox, M.F., G. Sonnert, and I. Nikiforova. 2009. “Successful Programs for Undergraduate Women in Science and Engineering: Adapting versus Adopting the Institutional Environment.” Res. Higher Ed. 50:333-353.
13 Chubin, D.E., A.L. DePass, and L. Blockus, eds. 2010. Understanding Interventions That Broaden Participation in Research Careers 2009: Embracing a Breadth of Purpose, Vol. III. Available at www.UnderstandingInterventions.org.