and illnesses for all industries for 2005 at more than $160 billion. Continued attention to improving occupational health and safety through research is fully warranted, but such research also requires critical evaluation of its relevance and impact. The core mission of the National Institute for Occupational Safety and Health (NIOSH) is to conduct research to improve and protect the health and safety of workers.
In September 2004, NIOSH contracted with The National Academies to conduct a series of evaluations of individual NIOSH research programs. This set of independent evaluations focused on the relevance and impact of each of eight NIOSH programs in reducing work-related injuries, illnesses, and hazardous exposures. From the outset, NIOSH leadership established program improvement as the primary goal of the evaluations, but the context also included the federal Program Assessment Rating Tool (PART) evaluation process.
The first step in this multiphase effort was the appointment of a committee to develop an evaluation framework that was then used by eight separately appointed evaluation committees to assess NIOSH programs in hearing loss; mining; agriculture, forestry, and fishing; respiratory diseases; personal protective technology; traumatic injury; construction; and health hazard evaluation. Individual reports were produced by each evaluation committee.
At the conclusion of the eight studies, the framework committee held a public workshop in November 2008, “Evaluating NIOSH Programs: Lessons Learned and Next Steps,” at which NIOSH program and senior staff, members of the NIOSH Board of Scientific Counselors, evaluation committee members, and National Academies staff discussed the experience gained in the evaluation process. This report presents the evaluation framework as developed, implemented, and refined over the course of four years and eight program evaluations; the framework may also prove applicable to evaluations of other federal agency research programs. The report has two goals: (1) to summarize the evaluation process and the lessons learned in developing and using the framework and (2) to provide recommendations for future evaluation efforts.
After examining different approaches to program evaluation, the framework committee decided to define the scope and stages of the evaluation process on the basis of a logic model, an approach widely used in program evaluation and planning.
The logic model organizes the program and its efforts into inputs (e.g., budget, staffing, facilities), activities (e.g., research studies, surveillance, exposure measurement), outputs (e.g., reports, publications, conferences, training, patents), and outcomes (e.g., collaborations, policy changes, reductions in injuries and hazard-