
they felt by their colleagues, students, staff, and chairs; how included, valued, recognized, or isolated they felt; and whether they believed they “fit” in their departments.11 In short, women faculty perceived the departmental “climate” as more negative than did their male counterparts. The university subsequently instituted a series of intervention workshops for chairs and senior faculty that improved both awareness of climate issues and perceptions of a more supportive campus climate.

With our two case studies in mind, we could fashion a matrix of criteria (not presented here) for assessing how promising other programs appear to be. Such a matrix is a research summary that can serve as a kind of scorecard for STEM intervention programs aimed at increasing the number of participants and improving their experience. Entries in the cells mark progress toward fulfilling each criterion. The criteria would include years of operation (5+), multiple interventions, and evidence of outcomes.

Life Cycle of Programs and Beyond

Structures and practices that endure reflect the life cycle of programs. Those that contribute to achieving institutional goals tend to be retained and made available to all. Such “mainstreaming” signifies—strange as it sounds—the institutionalization of change. Beyond the local institution, such changes should be applicable to other, similar institutions. This process of adaptation—the language of “adoption” denotes a too-linear transfer of knowledge and practice that does not integrate these cultural changes into a new setting replete with its own traditions—is a means of spreading and scaling “what works.”12 Scale-up is difficult work. Organizations tend to resist innovations that come from outside “their own backyard,” that are not of their own invention, or that lack credible campus advocates.

In an effort to move systematically from program context to description to inference, we are engaged in the academic equivalent of converting theory to practice. Documenting program characteristics is but a step toward understanding why interventions have produced differences in participation among any underserved group in science.13 This is where research transcends evaluation and influences stakeholders in sponsor, performer, and policy-making communities to recognize promising programs.

In sum, a program can be conceptualized, rationalized, and described as an organized response to a problem around which people coalesce. How they implement the response will determine whether it reaches the intended audience and has the intended effects. Through research and evaluation, program leaders learn about the strengths and weaknesses of their interventions—what is working, how to modify activities, magnify impacts, expand reach, and measure with greater precision. The sum of these adjustments creates a program history and, with it, accountability to the sponsors and host institutions that respect “promise.”

__________________

11 Sheridan, J., C. Pribbenow, E. Fine, J. Handelsman, and M. Carnes. 2007. “Climate Change at the University of Wisconsin-Madison: What Changed and Did ADVANCE Have an Impact?” Women in Engineering Programs and Advocates Network 2007 Conference Proceedings.

12 Fox, M.F., G. Sonnert, and I. Nikiforova. 2007. “Successful Program for Undergraduate Women in Science and Engineering: Adapting versus Adopting the Institutional Environment.” Res. Higher Ed. 50:333-353.

13 Chubin, D.E., A.L. DePass, and L. Blockus, eds. 2010. Understanding Interventions That Broaden Participation in Research Careers 2009: Embracing a Breadth of Purpose, Vol. III. Available at www.UnderstandingInterventions.org.


