ment of programs will produce important information on whether the programs are yielding improvements in health, morale, safety, and resilience. This feedback mechanism will, in turn, allow DHS to calculate the potential cost savings for the department in the form of improved worker productivity, reduced turnover, and possibly reduced health care use (Baicker et al., 2010; CDC, 2011).

This chapter reviews gaps in measurement, evaluation, and reporting related to workforce readiness and resilience in DHS and provides recommendations for improvement. Measurement and evaluation are a critical centerpiece of any program and will be crucial in helping DHS determine whether it is meeting the goals laid out in its WRR strategic plan.

ORGANIZING FRAMEWORK

The measurement and evaluation strategy that DHS puts into place needs to be organized into three broad categories: structure, process, and outcomes (Goetzel and Ozminkowski, 2001). Working backward, DHS first needs to consider the intended outcomes of its human capital programs and how those outcomes can be measured. An effective measurement and evaluation strategy also needs to consider each intervention program’s framework or structure and how the program is delivered to the target population. If, for example, increased workforce readiness and resilience are the goals, DHS needs to ask whether an intervention is evidence-based, adequately resourced, and supported by energetic leadership—that is, structured appropriately to deliver the “dose” necessary to achieve the intended outcome (Goetzel and Ozminkowski, 2001). Further, DHS needs to measure whether the program is well communicated, achieves high engagement rates, and is viewed favorably by its target audience—the various process measures necessary for eventual program success.

Specifically, program structure needs to be assessed in terms of the basic architecture or blueprint of each DHS program—its “inputs.” The evaluation of structure needs to focus on whether the program’s critical components are in place according to plan; that is, an “audit” of program design compliance is needed (Goetzel and Ozminkowski, 2001). Questions asked in a structure assessment would include (1) What is the intervention? (2) What is the expected outcome of the intervention? (3) How is the program delivered to participants? (4) What is the intensity of the program? (5) Are core evidence-based program elements included in the de-



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.