In God we trust, all others bring data.
—The Elements of Statistical Learning (Hastie et al., 2009)
Because of its critical mission, the Department of Homeland Security (DHS) requires a committed workforce that is ready and resilient (healthy, safe, engaged, and productive). However, DHS has a paucity of data and information available to inform the Department on issues related to health, safety, productivity, and quality of life, all of which lead to workforce readiness and resilience (WRR). To be successful in achieving its mission, DHS needs to establish a robust and valid measurement and evaluation infrastructure for diagnosing individual and organizational problems, tracking and monitoring program impacts, and evaluating overall success. DHS needs to abide by the maxim that data drive change and that what is measured is managed.
A state-of-the-art measurement and evaluation system is integral to the development and monitoring of any workplace program (IOM, 2005; Lester, 2013; NIOSH, 2008). That system will coordinate data collection, aggregation, analysis, and reporting. The data emerging from the system will help DHS to understand its needs, choose proper interventions, and examine whether the interventions produce the desired outcomes, including whether and where improvements can be made. DHS needs to support the development and maintenance of high-quality data aggregation and reporting systems and the content expertise necessary to extract intelligence from stand-alone and integrated databases. Effective measurement of programs will produce important information on whether the programs are yielding improvements in health, morale, safety, and resilience. This feedback mechanism will, in turn, allow DHS to calculate the potential cost savings for the department in the form of improved worker productivity, reduced turnover, and possibly reduced health care use (Baicker et al., 2010; CDC, 2011).
This chapter reviews gaps in measurement, evaluation, and reporting related to workforce readiness and resilience in DHS and provides recommendations for improvement. This is a critical centerpiece of any program and will be crucial in helping DHS to determine whether it is meeting the goals laid out in its WRR strategic plan.
The measurement and evaluation strategy that DHS puts into place needs to be organized into three broad categories: structure, process, and outcomes (Goetzel and Ozminkowski, 2001). Working backward, DHS needs first to consider the intended outcomes of its human capital programs and how the outcomes can be measured. An effective measurement and evaluation strategy needs to consider the intervention programs’ framework or structure, and how they are delivered to the target population. If, for example, increased workforce readiness and resilience are the goals, DHS needs to ask whether an intervention is evidence-based, adequately resourced, and supported by energetic leadership—that is, structured appropriately to deliver the “dose” necessary to achieve the outcome highlighted (Goetzel and Ozminkowski, 2001). Further, DHS needs to measure whether the program is well communicated, achieves high engagement rates, and is viewed favorably by its target audience—the various process measures necessary for eventual program success.
Specifically, program structure needs to be assessed in terms of the basic architecture or blueprint of each DHS program—its “inputs.” The evaluation of structure needs to focus on whether the program’s critical components are in place according to plan, that is, an “audit” of program design compliance is needed (Goetzel and Ozminkowski, 2001). Questions asked in a structure assessment would include (1) What is the intervention?, (2) What is the expected outcome of the intervention?, (3) How is the program delivered to participants?, (4) What is the intensity of the program?, (5) Are core evidence-based program elements included in the design?, and (6) Are key measures in place to evaluate the intervention? (Goetzel and Ozminkowski, 2001). Those questions and others relevant to the specific program in question need to be asked and answered routinely. The answers can be delivered in the form of qualitative assessments complemented by administrative reports that are produced regularly to keep program managers current on the operation of the enterprise.
Process evaluation involves asking how well the program is being implemented—whether the execution of the program is progressing according to plan and whether operations and delivery systems are handled smoothly. Questions addressed by process evaluations include (1) How many people participate?, (2) How many are fully engaged in behavior change efforts?, (3) How satisfied are they with how the program is run?, (4) How do they feel about the people in charge?, (5) Which programs are best attended?, and (6) How many individuals successfully complete the programs? (Goetzel and Ozminkowski, 2001).
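For illustration only, several of the process measures named above (participation, engagement, satisfaction) can be computed directly from routine program records. The function, field names, and figures below are hypothetical, not DHS data:

```python
# Illustrative sketch (not a DHS system): computing basic process-evaluation
# metrics from hypothetical program enrollment records.

def process_metrics(eligible, enrolled, completed, satisfaction_scores):
    """Return participation rate, completion rate, and mean satisfaction.

    eligible: number of employees offered the program
    enrolled: number who participated
    completed: number who finished the program
    satisfaction_scores: survey ratings on a 1-5 scale (hypothetical)
    """
    participation_rate = enrolled / eligible if eligible else 0.0
    completion_rate = completed / enrolled if enrolled else 0.0
    mean_satisfaction = (
        sum(satisfaction_scores) / len(satisfaction_scores)
        if satisfaction_scores else 0.0
    )
    return {
        "participation_rate": participation_rate,
        "completion_rate": completion_rate,
        "mean_satisfaction": mean_satisfaction,
    }

metrics = process_metrics(eligible=2000, enrolled=500, completed=350,
                          satisfaction_scores=[4, 5, 3, 4, 4])
print(metrics)  # participation 0.25, completion 0.70, mean satisfaction 4.0
```

Reported regularly in dashboard form, even simple ratios such as these let program managers see whether delivery is on track long before outcome data mature.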
Outcome evaluation is used to round out the measurement and evaluation paradigm. Outcomes evaluation is focused on the extent to which program objectives are achieved within a given period (Goetzel and Ozminkowski, 2001). Essentially, the purpose of an outcomes study is to determine whether the program’s goals and objectives have been satisfied according to plan. Program outcomes generally are in three broad categories: (1) improvements in the health and well-being of workers; (2) cost savings; and (3) performance enhancements. Therefore, key outcomes for DHS would include improved quality of life, cost-effectiveness and cost-benefit of interventions (return on investment or ROI), a more ready and resilient workforce, lower turnover, and higher morale and engagement.
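The return-on-investment outcome mentioned above reduces to a simple ratio of net savings to program cost. The sketch below is a minimal illustration with invented dollar figures, not an actual DHS calculation:

```python
# Illustrative ROI calculation: net savings per dollar spent on a program.
# All dollar amounts are hypothetical.

def roi(program_cost, estimated_savings):
    """Return-on-investment ratio: (savings - cost) / cost."""
    if program_cost <= 0:
        raise ValueError("program cost must be positive")
    return (estimated_savings - program_cost) / program_cost

# A hypothetical $200,000 program credited with $500,000 in reduced
# turnover and absenteeism costs.
print(roi(200_000, 500_000))  # 1.5 -> $1.50 returned per $1 invested
```

In practice, the hard part is not the arithmetic but attributing the savings to the intervention, which is why the outcome evaluation designs discussed later in this chapter matter.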
Figure 4-1 presents a simple model of a workplace program measurement schema. The model shows various components of structure, process, and outcome measures that are important to capture in evaluating a comprehensive workplace intervention.
DHS offers many program activities that are loosely tied to one another but has no standardized mechanism for measuring and evaluating whether those programs work, either individually or in combination. There has been a tendency for DHS and its components to respond impulsively to perceived workforce health or resilience issues. New programs are often created without regard for measurement on the front or back end. Thus, there is no diagnostic analysis to justify specific interventions and no consistent method for evaluating program effects.
FIGURE 4-1 Model of a workplace program measurement schema.
SOURCE: Goetzel, 2013.
For example, DHSTogether was created in 2009 in response to low employee morale, as reported by the Federal Employee Viewpoint Survey (FEVS) (Green and Perkins, 2012; IOM, 2012). DHS and its components have taken steps to understand the root causes of low morale, but the Government Accountability Office (GAO) (2012a, p. 19) found that “DHS and the selected components conducted limited analysis in several areas that is not consistent with OPM [Office of Personnel Management] and Partnership guidance that lays out useful factors for evaluating root causes of morale problems through FEVS analysis.”1 The component agencies also developed action plans to address morale, which are reviewed by the Office of the Chief Human Capital Officer (OCHCO).
1The committee and the Government Accountability Office are referring to root cause analysis at the organizational level because root cause analysis at the individual level would be time and resource intensive to identify and address.
DHS was not able to share the action plans, so the committee cannot comment on whether the plans are actionable and measurable. However, on reviewing the action plans, GAO (2012a) found that
despite having broad performance metrics in place to track and assess DHS employee morale on an agency-wide level, DHS does not have specific metrics within the action plans that are consistently clear and measurable. As a result, DHS’s ability to assess its efforts to address employee morale problems and determine if changes should be made to ensure progress toward achieving its goals is limited.
Another GAO (2012b) report reviewed the DHS Workforce Strategy (DHS, 2011) and found that “although DHS began taking positive steps for managing strategic workforce planning in 2011, DHS officials have not yet taken steps to implement an effective oversight approach for monitoring and evaluating components’ progress in implementing strategic workforce planning.” The analysis identified performance measures that reported on only 2 of the 15 elements in DHS’s strategic workforce planning model (GAO, 2012b). OCHCO, which oversees the data collection relevant to the workforce strategy, informed the committee that the office has faced challenges in collecting the data outlined in the performance measures and therefore has incomplete data at this time.
The committee frequently heard in discussions with and presentations from DHS staff that new programs are often implemented without built-in metrics for accountability and evaluation (Green, 2013; Green and Perkins, 2012). No agency or organization, including DHS, should create programs without regard for baseline measurement or built-in metrics for program evaluation. In the current fiscal climate, programs need to be created to address critical organizational needs and evaluated to ensure that they are meeting those needs. The measurement strategy that DHS uses needs to provide visibility at the highest level of the organization through data-driven reports that draw a straight line to program resource allocations. In short, the metrics that DHS chooses must be linked to its primary mission—protecting the homeland.
During the committee’s information-gathering process, OCHCO was asked to present available data on the DHS workforce. After a panel
discussion with representatives of several offices in OCHCO,2 it became clear that DHS does not have a good understanding of what data are available, how the existing data are currently used around the department, how the data could potentially be used, or which of the collected data are not readily available because of rules and organizational silos. DHS uses few tools and data points to assess the readiness and resilience of its workforce. The department relies heavily on the FEVS to identify workforce health and resilience issues that need to be addressed. The FEVS can be a powerful management tool for identifying strengths and weaknesses, but it is a blunt instrument—it informs leadership about what employees do and do not like but not about why or what the root causes of the problems might be.
The FEVS also does not provide information on other important measures crucial to the success of DHS operations, such as employee health and well-being (physical, mental, social, financial, intellectual, and spiritual), health care use, absenteeism, presenteeism, disability, safety, or retention. DHS does collect data on some of those areas, including absenteeism, disability, occupational injury, and retention (see Box 4-1 for a list of DHS data sources available to OCHCO). However, the data come from a variety of sources (headquarters offices, directorates, and operational components) and are not actively integrated or analyzed for the purpose of measuring workforce readiness or resilience. That makes a current-state analysis difficult. In addition, DHS informed the committee that it does not have, or has not pursued the ability to have, access to data related to
• Health risk assessments (HRAs)3
• Medical costs—health care claims information from OPM is not available to federal agencies
• Disease incidence and prevalence
• Health care utilization
• Biometric or laboratory values
• Performance and presenteeism
• Psychosocial risk factors, such as stress and depression
• Long- and short-term disability
• Workers’ compensation and safety (minimal data available to DHS, but the data are collected by the federal government) (Green, 2013)
2The panel in the February 2013 committee meeting included the human resources specialist for workforce engagement, the department safety and health manager, the program manager of human capital policy and programs, the program manager and policy advisor for the workers compensation program, the personnel research psychologist for human capital policy and programs, and the human capital business systems reporting team lead for Human Resources Information Technology (HRIT) operations, all of whom are housed in OCHCO.
3The US Coast Guard collects such information on active duty personnel.
Summary of DHS-Wide Data Sources Available to the Office of the Chief Human Capital Officer (OCHCO)a
Workforce demographics on age, gender, geographic location, time with agency, time in service, veteran status, job type, pay level (from the National Finance Center [NFC])
Turnover from NFC, including intent to leave (from FEVS); reasons for leaving (exit survey—response rates vary and participation is voluntary; Transportation Security Administration and US Secret Service administer their own)
Sick and annual leave, although detailed information (e.g., types of sick leave) needs to be requested from component agencies and may be subject to privacy considerations
Federal Employee Viewpoint Survey results (from OPM)
Equal Employment Opportunity complaints—iComplaints, 462 report[s] to the Equal Employment Opportunity Commission
Workers’ compensation from the Department of Labor (DOL) Office of Workers’ Compensation, cross-referenced with DHS component systems
Accidents and injuries, including line-of-duty deaths (from DOL, component systems)
Health and safety program quality and implementation (onsite assessments)
OCHCO relies on data calls to DHS component agencies for
• employee assistance program reports
• suicide numbers
• employee relations cases
• cross-referencing for other measures
aAs presented to the committee.
SOURCE: Green, 2013.
The Office of Health Affairs (OHA), which has responsibility for overseeing the DHSTogether program, does not conduct systematic measurement activities related to readiness and resilience. A few of the pilot programs have some basic evaluation built in (such as the new Strong Bonds program), but the evaluation focuses on employee satisfaction. The US Coast Guard (USCG) has been funded by DHSTogether to run a pilot program that will use the Navy Stress Assessment tool,4 which aims to rapidly identify the stress levels of command personnel before stress affects job performance (Green and Perkins, 2012). It is a preventive workforce health tool and is administered to help in the early detection of employees’ stress and stress-related problems. Pilot program managers plan to work with the tool’s Navy creators to modify the tool to meet USCG’s needs (Green and Perkins, 2012). If the program is successful, OHA plans to test the tool in law enforcement component agencies (Green and Perkins, 2012). Although that interagency agreement has been in the works for almost a year, little progress in modifying the tool has been made. Component agencies independently conduct some data-gathering activities—such as focus groups, town halls, and exit surveys—and compile suicide rates, but the efforts vary widely among components, and each component has a different level of interest in and capability for data analysis.
Throughout the agency, there are probably some best practices for data gathering and analysis. For example, during its information gathering the committee heard from David Tumblin, the Director of Human Capital Strategy and Technology in the Immigration and Customs Enforcement (ICE) Human Capital Office, about an evaluation of the 2012 FEVS data to identify the top five drivers of overall satisfaction and dissatisfaction in its component (see Box 4-2). The work that his group is undertaking is impressive, and is a good example of processes and practices that other component agencies could learn from.
How the data available to DHS inform workforce readiness and resilience is unclear. DHS and its component agencies have created many programs focused on improving employee resilience and morale with nothing to connect them, no sense of what is working, and limited rationale for the existence of such programs. Altering root causes to create a ready and resilient workforce is challenging because the factors are often
4The Navy Stress Assessment tool is a joint effort of the Navy’s Operational Stress Control team: Navy Personnel Research, Studies and Technology, and the Defense Equal Opportunity Management Institute.
Use of Federal Employee Viewpoint Survey Data to Identify Top Drivers of Workforce Satisfaction in Immigration and Customs Enforcement
The Office of Human Capital, Strategy and Technology (HCT) in Immigration and Customs Enforcement (ICE) now reviews and deciphers ICE’s responses to the FEVS. Six in 10 ICE employees completed the survey, a total of 11,400 individuals, which enabled ICE to analyze the data in a statistically valid manner. Because the Office of Personnel Management did not release the raw FEVS data, ICE used statistical methods to derive the top five drivers of overall satisfaction and dissatisfaction for the three organizations that make up 90 percent of ICE’s workforce.
The analysis also drew on information gathered from agencywide town hall meetings to inform the questions asked about each driver. Subsequently, HCT held approximately six focus groups in each of the three large organizations within ICE, including departments that had high satisfaction ratings and departments that had low ratings.
Following the focus groups, HCT administered pulse surveys to measure whether employee feedback and data collection are making an impact on overall employee satisfaction.
SOURCE: Tumblin, 2013.
woven into the fabric of the institution, and causal pathways can be complex. Regardless, this type of assessment is essential for ensuring that the programs implemented can make a difference. If evaluation is not built into initial program development, there can be no continuous improvement. DHS has not identified commonalities among the various staff levels, job types, or locations in DHS that can be measured across components. Without a common set of measures, the agency cannot compare these groups. In addition, because the DHS resilience program lacks a vision statement and well-defined objectives, its goal is not clear; without a clear goal, there is no measurement and thus no clear path.
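The kind of driver analysis HCT performed (Box 4-2) can be sketched in a few lines: rank survey items by the strength of their association with overall satisfaction. This is an illustrative reconstruction, not ICE’s actual method; the item names and ratings below are invented:

```python
# Illustrative driver analysis: rank survey items by their correlation with
# overall satisfaction. Items, respondents, and ratings are hypothetical.
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def top_drivers(item_responses, overall, k=5):
    """Return the k items most strongly correlated with overall satisfaction."""
    scores = {item: pearson(vals, overall)
              for item, vals in item_responses.items()}
    return sorted(scores, key=lambda i: abs(scores[i]), reverse=True)[:k]

responses = {  # hypothetical 1-5 ratings from six respondents
    "leadership_communication": [2, 3, 4, 2, 5, 4],
    "pay_fairness":             [3, 3, 3, 3, 3, 3],
    "training_opportunities":   [1, 2, 4, 3, 5, 4],
}
overall = [2, 3, 4, 3, 5, 4]
print(top_drivers(responses, overall, k=2))
# ['training_opportunities', 'leadership_communication']
```

A real analysis would use the full respondent-level data, control for demographics, and prefer regression-based importance measures over simple correlations, but the logic of isolating a handful of actionable drivers is the same.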
Measurement and evaluation needs to be “built in” to any organizational efforts that aim to improve the health, well-being, resilience, morale, and safety of the workforce. The National Institute for Occupational
Safety and Health (NIOSH) Essential Elements of Effective Workplace Programs (2008) framework underscores the need for evaluation of comprehensive worksite programs, practices, and policies and to “hard code” measurement into environmental and individual programs aimed at preventing disease and promoting health and safety. The NIOSH framework identifies four actions related to measurement and evaluation: (1) find and use the right tools; (2) adjust the program as needed; (3) measure and analyze; and (4) learn from experience (see Box 4-3 for a description; NIOSH, 2008). Those actions imply the need to collect baseline data before a program is conceived, to align measurement systems with program objectives, to leverage integrated data to gain a full picture of program effects, to report findings in ways that are easily
NIOSH Total Worker Health, Measurement, and Evaluation Elements
Find and use the right tools. Measure risk from the work environment and baseline health in order to track progress. For example, a Health Risk Appraisal instrument that assesses both individual and work-environment health risk factors can help establish baseline workforce health information, direct environmental and individual interventions, and measure progress over time. Optimal assessment of a program’s effectiveness is achieved through the use of relevant, validated measurement instruments.
Adjust the program as needed. Successful programs reflect an understanding that the interrelationships between work and health are complex. New workplace programs and policies modify complex systems. Uncertainty is inevitable; consequences of change may be unforeseen. Interventions in one part of a complex system are likely to have predictable and unpredictable effects elsewhere. Programs must be evaluated to detect unanticipated effects and adjusted based on analysis of experience.
Measure and analyze. Develop objectives and a selective menu of relevant measurements, recognizing that the total value of a program, particularly one designed to abate chronic diseases, may not be determinable in the short run. Integrate data systems across programs and among vendors. Integrated systems simplify the evaluation system and enable both tracking of results and continual program improvement.
Learn from experience. Adjust or modify programs based on established milestones and on results you have measured and analyzed.
SOURCE: NIOSH, 2008.
understood and actionable, and to make changes in response to the data analyses. The key factors are to determine the data needed for measurement and evaluation, where the data reside, whether new data need to be collected to address information gaps, how the data should be managed, and the frequency of reporting to decision makers. A structure self-assessment tool adapted from the NIOSH Essential Elements guidelines can be found in Appendix E. The tool can be used by WRR and component agencies to identify needed improvements to include in strategic plans. Ideally, this type of evaluation would be conducted through group discussions, which the committee believes would encourage more realistic evaluations than having a single individual complete the survey. Such groups could consist of representatives with responsibility for relevant subjects (such as health, wellness, and human capital) who are assembled by the administrator of WRR and component agency heads.
Basic Questions of Evaluation
A distinction can be made between measurement and evaluation. Measurement implies tracking of data that are essential for running a program effectively, usually derived from administrative systems and presented in the form of “dashboard” reports. Evaluation involves stepping back periodically and asking some fundamental questions, such as, Is the program working, and is it worth the time, effort, and expense? Is DHS getting sufficient value from the program? If yes, maintain or even enhance the program. If no, fine-tune or eliminate it.
In developing an evaluation framework, DHS needs to develop a clear and cogent research agenda that addresses specific questions one at a time. DHS needs to answer the broad question, Is the program successful in meeting its intended goals? That broad question can be broken down into its components that can be tested empirically, that is, verifiable from observation or experiment. For example, a more specific question can be phrased as, Was the readiness and resilience program successful in lowering absenteeism rates of employees at intervention worksites?
To help DHS approach the task of measurement in simple language, the following nine questions are offered as ones that could be asked and answered before an evaluation project is undertaken (Goetzel and Ozminkowski, 2001):
1. What do we want to know? What is the evaluation question that we are attempting to answer? What problem are we trying to solve? (Focusing Question)
2. What will the answer or solution to the problem look like? What do we expect to happen as a consequence of the intervention? How much of a program effect will we see? (Hypothesis)
3. How will we see it? What is the basic design of our study? (Design)
4. How will we collect and record the data? What instruments will we use? What will the research database contain? (Measures)
5. How will we categorize and analyze the data? How will the tables and graphs be constructed? What categories of data will be developed? What statistical techniques will be applied? (Data Analysis and Results)
6. How will we affect the data? Do we have a stake in study outcome? What can we do to minimize bias? What explicit limitations should be stated beforehand? (Limitations)
7. What will we infer from the data? What are the implications for action given alternative study outcomes? (Discussion)
8. What will we find out that we did not already know before we started? What is the “so what?” question being addressed? (Conclusions)
9. What can we do with the information we learn—what are the implications for action? (Implications)
Answering the above questions early in the process of establishing a measurement and evaluation framework will guide the efficient use of resources. It is generally recommended that about 5 to 10 percent of a program’s budget be devoted to measurement and evaluation activities (Frankel and Gage, 2007; Goetzel and Ozminkowski, 2001; International Federation of Red Cross and Red Crescent Societies, 2011; W.K. Kellogg Foundation, 2004; World Health Organization, 2009).
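The 5-10 percent rule of thumb cited above translates directly into a budget set-aside. The sketch below applies it to a hypothetical program budget; the dollar figure is invented:

```python
# Rough sketch of the 5-10 percent evaluation set-aside cited above,
# applied to a hypothetical program budget.

def evaluation_budget(program_budget, low=0.05, high=0.10):
    """Return the recommended (low, high) evaluation set-aside in dollars."""
    return program_budget * low, program_budget * high

low_amt, high_amt = evaluation_budget(1_000_000)
print(f"Set aside ${low_amt:,.0f}-${high_amt:,.0f} for measurement and evaluation")
```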
Developing Logic Models
An organizing structure or logic model is needed in considering key elements of an evaluation framework. The logic model is a theoretical construct that shows how an intervention is likely to influence particular outcomes and presents critical pathways and measures for testing
hypotheses. Figure 4-2 shows a logic model of worksite health promotion that was developed by the Centers for Disease Control and Prevention (CDC). The model highlights the importance of a combination of individual and environmental interventions as part of a comprehensive workplace program. The program as a whole is assessed by using a structural analysis to determine whether the key components of the program are in place.
Merely making the program available to employees is not sufficient for achieving the listed outcomes. Individuals must be aware of the program, participate in it, and be satisfied with component parts and the people running the program. Those are the process variables in the model. The expectation that “if you build it, they will come” does not always hold true. The program must be attractive, accessible, and satisfying for it to be effective.
The last set of model elements focuses on outcomes. Workers exposed to the program must become engaged in it, be motivated to change their health behaviors, and become more knowledgeable about when to use the services available. This, in turn, will lead to adoption of healthy lifestyles, improved physiological measures (such as measures of blood pressure, cholesterol, glucose, and weight), improved psychological health (such as reduced stress and depression), a reduction in risks of common illnesses (such as heart disease and cancer), and finally improved productivity. The model provides a checklist of items that need to be measured and the sequence of events, with arrows indicating the need for structural changes, which lead to process improvements and finally to the desired outcomes.
FIGURE 4-2 Logic model for worksite wellness programs.
SOURCE: Adapted from Soler et al., 2010.
A more complex logic model of workplace programs is shown in Figure 4-3. Using a more detailed schema of causes and effects, the model highlights the various elements that ideally would be measured in evaluating a comprehensive program. As above, this model can be used to establish a checklist of measures to be collected and analyzed to determine program effectiveness. It also introduces the dimension of time so that realistic expectations can be set as to when particular outcomes are expected to occur.
Bringing Together Key Stakeholders
Within DHS, individual decision makers may have different criteria for program success that are based on their expertise and responsibilities in the organization. For example, the personnel director may wish to reduce absenteeism, the human resources professional may wish to reduce turnover and attract the best talent, the medical director may wish to prevent accidents and manage illnesses, and the administrator may wish to improve employee ratings of the agency. It is important to bring key decision makers together so that a consensus opinion can be formed on conceptual and operational definitions of program success. In building consensus in this diverse group, DHS must first understand the reasons that each person has for introducing workplace programs to employees and how each person expects the programs to benefit the department. This process should culminate in the articulation by those assembled of specific, quantifiable, and measurable outcomes of the program and in a reasonable timetable for reporting results back to department leaders.
FIGURE 4-3 Model of wellness program impacts.
SOURCE: Care Continuum Alliance, 2010.
The Importance of Data Integration
In large organizations, data are often collected with disparate systems. However, the ability to connect data in an integrated fashion allows program managers to gain better insight into organizational problems and facilitates holistic and efficient monitoring of programs. For example, at the committee’s second meeting, Major Paul Lester (2013), Director of the US Army Analytics Group Research Facilitation Team, discussed his experiences with and lessons learned from the Army’s work on data integration (see Box 4-4 on requirements for successful data integration). The Army uses a hypermassive database, the Person-Event Data Environment, which includes employee data from throughout the Department of Defense (DoD) (Cornum et al., 2012). From that database, the Army is able to compare results from its Global Assessment Tool (GAT), a measure of resilience and psychological health, with results in other DoD components and look for positive and negative behavioral outcomes (Cornum et al., 2012).5 On the basis of his experience, Lester discussed the need to create a measure that is “common to all” in DHS. To achieve that, he suggested reducing the number of surveys and creating a single survey that would measure variables related to the desired outcome. The survey could have different
5The creation of the architecture for the Army’s system, the Person-Event Data Environment, has cost about $12 million and took approximately 8 years to develop (it was initially built in 2006 and is expected to be fully operational in March 2014) (Lester, 2013).
Requirements for Successful Data Integration
1. Data collection, management, and reporting according to agreed-on protocols and standards.
2. Longitudinal tracking of data among all centers and the agency as a whole.
3. A plan for integration of data from all parts of the organization(s).
4. Data sets converted to an easily used format that is syntactically and semantically comparable.
5. Senior leadership emphasis and participation (mandatory participation if possible).
6. A culture that prioritizes data integration and analysis.
7. Data collection, integration, staging, and analysis platforms.
8. Personnel to undertake the integration.
9. Data analysis team(s).
SOURCES: IOM, 2005; Isaac, 2013; Lester, 2013; NRC, 2010.
versions for specific populations. The GAT has five versions for distinct populations and purposes, which include Army family members, Army civilians, basic trainees, and testing and evaluation. The supplemental GATs are voluntarily completed and analyzed in addition to the Army’s analysis of the mandatory GAT completed by military personnel (active duty, Reserve, and National Guard) (Lester, 2013). Lester stressed the importance of measuring baselines and conducting longitudinal analysis. He also emphasized that to analyze the data and make them useful, the ability to evaluate data effectiveness must be built into programs from the beginning.
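At its core, the data integration described above is record linkage: disparate personnel, leave, and survey files are joined on a common employee identifier so that, for example, resilience scores can be related to leave use. The sketch below is a deliberately minimal illustration of that join; the field names and records are hypothetical, and a production system like the Person-Event Data Environment adds governance, de-identification, and scale far beyond this:

```python
# Illustrative record linkage: merging personnel, leave, and survey data on
# a common employee identifier. All field names and records are hypothetical.

def integrate(*datasets):
    """Merge datasets (lists of dicts keyed by 'emp_id') into one combined
    record per employee."""
    merged = {}
    for dataset in datasets:
        for record in dataset:
            merged.setdefault(record["emp_id"], {}).update(record)
    return merged

personnel = [{"emp_id": 1, "component": "USCG", "tenure_years": 4},
             {"emp_id": 2, "component": "ICE", "tenure_years": 9}]
leave = [{"emp_id": 1, "sick_days": 3}, {"emp_id": 2, "sick_days": 12}]
survey = [{"emp_id": 1, "resilience_score": 4.2}]  # not everyone responds

records = integrate(personnel, leave, survey)
print(records[2])  # employee 2 has personnel and leave data but no survey score
```

Even this toy example surfaces the practical issues Lester described: incomplete coverage (employee 2 has no survey record), the need for a shared identifier across systems, and the value of holding baseline fields so the same records can be tracked longitudinally.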
Another federal government example of successful workforce data integration is the effort of the National Aeronautics and Space Administration (NASA). Faced with budget and staffing cuts, NASA launched a Strategic Human Capital Architecture framework to define the key aspects of human capital management (Partnership for Public Service, 2005). Learning from the past mistake of a decentralized workforce planning system, NASA implemented a “structured, four-step process for integrating strategic workforce planning with every other aspect of the agency’s overall management agenda” (Partnership for Public Service, 2005). The process is executed through the agency’s Human Capital Management website, which makes workforce planning best practices available to NASA managers and employees. The Human Capital
Management website includes a “complete workforce planning guide, as well as detailed workforce analysis reports, analytical tools for examining current and past workforce characteristics, and forecasting tools for determining future workforce trends within NASA and across the entire labor market” (Partnership for Public Service, 2005). In addition to the Human Capital Management website, NASA established a Competency Management System, which allows managers to measure, monitor, and manage employees and external resources and enables them to find NASA employees who have specific expertise needed for particular projects and programs (Partnership for Public Service, 2005). That approach to data collection, integration, and analysis has improved the NASA workforce and helped to increase NASA’s ranking among agencies in the FEVS (NASA was ranked second among all agencies in the 2012 survey) (OPM, 2012).
In a private-sector example, Johnson & Johnson has worked to create a framework to identify and assess environmental, health, and safety (EHS) programs (Isaac, 2013). The framework, the Management Action and Assessment Review System (MAARS), compares current EHS programs with standards set by Johnson & Johnson; there is a built-in external assessment every 3 years (Isaac, 2013). To support MAARS and the organization’s overall commitment to employee health and wellness, Johnson & Johnson tracks utilization of health services (via the electronic medical records of onsite company clinics)6 to monitor the overall health and wellness of employees. The company has also implemented a Global Health Assessment tool that is accessible through the Johnson & Johnson intranet. Through information entered annually by each Johnson & Johnson location, the Global Health Assessment tool produces an action plan, a recommendation, or a congratulatory note on the basis of how well a given location’s goals have been reached that year (Isaac, 2013). The Global Health Assessment tool is used by each location according to a companywide set of standards; this enables the company to have greater centralization of workforce data and a clearer picture of the health of the workforce at large. Both the Global Health Assessment tool and MAARS use employee data in an integrated and statistically based way to help the company to reach EHS program goals (Isaac, 2013).
In any style of program, there is an overall need for a framework from which an organization can develop plans, policies, and procedures
6This type of clinical data is not yet available to federal agencies.
for data integration; and there needs to be emphasis by and participation of senior leadership to achieve successful data integration. Data integration can be highly technical and labor-intensive, so it requires specific technical expertise. Similarly, once data are drawn together, there is a need for multidisciplinary teams that have expertise in data analysis. There also needs to be a governance model that directs data collection, integration, and use (Acquisition Solutions, 2009; IOM, 2005).
Lester recognized that “data integration can be a herculean effort” requiring adequate resources, time commitment, and political will, but the ability to reuse the data for a variety of purposes will yield time and monetary savings in the long run (Lester, 2013). Furthermore, data integration can be done in incremental steps, dividing the work and cost over several years while moving the project forward (Lester, 2013). In his work with the Army, Lester found that “about 60 percent of time and financial resources were spent in planning to collect data, collecting data, cleaning data, and staging data,” whereas the analysis consumed about 40 percent of the time (Lester, 2013).
An integrated data system can drive business strategy by informing organizational decision making (for example, where investments should be made) and can provide accountability, improvement, and surveillance (Pronk, 2005). Applying business analytics to support decisions—through data analysis that allows the production of workforce profiles of health and productivity, hiring and attrition data, and employee perceptions—helps to guide actions in addressing future workforce needs. Metrics can also support accountability. OPM is creating a medical data warehouse that will provide federal agencies with new types of data to work with (OPM, 2010).7
What Should Be Measured?
Tracking too few performance measures makes it difficult to get a snapshot or high-level view of the workforce, whereas tracking too many
7In 2010 OPM began its Health Data Warehouse (HDW) project to collect, maintain, and analyze claims data from federal employee health benefit carriers and pharmacy benefit managers. “One of the goals of the data warehouse is to enable OPM to approach the design and management of federal benefits in a more holistic way. For example, the data warehouse will facilitate OPM’s effort to develop worksite wellness programs to improve Federal workers health and lower costs over time. Determining the best approaches for these programs and their long term return on investment requires careful analysis of the data captured in the data warehouse” (OPM, 2010).
measures can cause an organization to lose sight of which measures contribute to meeting strategic objectives and can add noise to the system. In addition, collecting data can be expensive—both in dollars spent and in staff time spent analyzing and reporting the data. Thus, DHS needs to determine which top 10 measures to track routinely (for examples, see Box 4-5). They should include key structure, process, and outcome measures collected at the organizational and individual levels. Individual data on health, well-being, health care utilization and cost, disability, absence, productivity, safety, engagement, and burnout can be aggregated at the group, component, and organization levels. When identifying data collection points, DHS and its component agencies need to ensure that legitimate criteria for defining achievement are identified so that they know what to look for in meeting identified goals and can justify changing or adding programs.
BOX 4-5
Examples of Measures That Could Be Used Throughout the Department of Homeland Security for Readiness and Resilience
1. Awareness of key health issues (physical, mental, social, financial, and intellectual).
2. Participation and engagement in programs and initiatives.
3. Improved attitudes toward employer and leadership.
4. Behavior change.
5. Risk reduction (of physical and mental disorders).
6. Health care utilization (hospital admissions, emergency room visits, doctor visits, prescription fills).
7. Health care costs (per capita spending).
8. Absenteeism and disability (costs in days and dollars).
9. Safety and worker compensation (incidents and costs).
10. Productivity (presenteeism, performance rating, recruitment, and retention).
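The text notes that individual data can be aggregated at the group, component, and organization levels. The following minimal sketch shows one way such a roll-up could work; the component names and numbers are invented for illustration, not actual DHS data.

```python
# Illustrative sketch only: rolling individual-level measures up to the
# component level. Component names and values are invented.
from collections import defaultdict
from statistics import mean

individual_records = [
    {"component": "TSA",  "engagement": 3.2, "absence_days": 6},
    {"component": "TSA",  "engagement": 3.8, "absence_days": 2},
    {"component": "USCG", "engagement": 4.1, "absence_days": 1},
]

def aggregate(records, group_key):
    """Average every numeric measure within each group defined by group_key."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[group_key]].append(rec)
    return {
        group: {measure: mean(r[measure] for r in recs)
                for measure in recs[0] if measure != group_key}
        for group, recs in groups.items()
    }

by_component = aggregate(individual_records, "component")
# The same function, applied to component-level records, can roll results
# up again to the department level.
```

In practice the aggregation would run inside the department's data warehouse rather than in application code, but the principle—one grouping key per organizational level, the same measures at every level—is the same.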
Organizational measures can be constructed by using climate scales that are widely available and tested, including8 the Health Enhancement Research Organization (HERO) Employee Health Management Best Practice Scorecard (HERO Scorecard), the CDC Worksite Health ScoreCard, the National Business Group on Health WISCORE (Wellness Impact Scorecard), OPM WellCheck, and the Samueli Institute Optimal Healthy Environments in the Workplace Assessment (CDC, 2012; Goetzel and Terry, 2013; HERO, 2012; NBGH, 2013). The Campbell Occupational Survey (COS) could also be useful. The COS addresses the essential factors that define employees’ perceptions of their work and workplace; it measures factors that are valid throughout the hierarchy of positions and across occupational groups. Another useful example is a tool developed for Johnson & Johnson, the Health and Performance Index (HaPI), which taps into individual and organizational health issues. The HaPI combines measures of health, productivity, well-being, leadership effectiveness, and culture of health and program sustainability (it also includes two questions from the Connor-Davidson Resilience Scale). It can provide a scorecard and be used to benchmark against other companies (Isaac, 2013). (See Box 4-6 for excerpts from HaPI.) DHS could use one of those tools and add or subtract specific measures, depending on the goals adopted for WRR or individual needs of components. For example, DHS could add questions from individual or organizational resilience assessment tools (for example, Connor and Davidson, 2003, and Wagnild, 2009).9 The National Prevention, Health Promotion, and Public Health Council (2013) annual report states that OHA, in conjunction with OCHCO, will pilot an HRA tool for OHA employees (about 90 full-time employees); OHA could consider using one of the above tools in the pilot.
Organizational resilience can be thought of as resulting from three overarching factors: growth, competence, and efficacy (Vogus and Sutcliffe, 2007). The committee reviewed the questions in the FEVS and grouped the ones most relevant to WRR into those three categories (see Box 4-7). Because the FEVS is the main source of data available to all component agencies, in the first few years of the WRR strategic plan
8None of these is a “resilience” scale or scorecard, but they do address health, wellness, and organizational factors, which, as discussed in Chapter 2, are protective factors that fall under the umbrella of readiness and resilience.
9These resilience scales are for use at the individual level. There are no widely accepted resilience-specific assessment tools for use at the organizational level.
BOX 4-6
Excerpts from the Johnson & Johnson Health and Performance Index (Excerpts from Section to Be Filled Out by Employees)
About Your Health
• Would you say that in general your health is…?
• In a typical week, what is the average amount of time you spend doing moderate-intensity physical activity?
• How many hours of sleep do you usually get in a 24-hour period?
About Your Job
• My organization cares about my health and well-being.
• My organization’s leaders view the level of employee health and wellbeing as one important indicator of the worksite’s success.
• My organization offers incentives for employees to be healthy.
About Your Work
• During the past 4 weeks, how much did your health problems affect your productivity while you were working?
• At my work, I feel bursting with energy (select on a scale of 0–6).
Excerpts from section to be filled out by the organization (completed by one person in charge of health and human performance at the organization—individuals are encouraged to gather information from others in the organization knowledgeable about each section that follows):
Program Measurement and Evaluation
• Has your health and human performance program been evaluated?
• What issues were evaluated?
• About how many employees were included in the evaluation/s?
• Indicate whether or not in the past 12 months your organization had in place each of the following programs, policies, and environmental supports for a healthy lifestyle.
o Have written goals and objectives for the health and human performance program.
o Provide subsidies for gym or health club memberships.
o Provide coverage for smoking-cessation counseling, medications, or nicotine replacement therapy.
Leading by Example (Indicate the extent to which you Disagree or Agree with the following statements. Please note: consider “leadership” as those in position to influence health and human performance related activities and policies at your organization.)
• Our organization’s health and human performance programs are aligned with our organizational goals
• Our leadership shares information with employees about the effect of employee health on overall organizational success.
SOURCE: Johnson & Johnson, 2013.
BOX 4-7
Federal Employee Viewpoint Survey (FEVS) Questions on Readiness and Resilience
Numbers below are question numbers in the 2012 FEVS
1. I am given a real opportunity to improve my skills in my organization
8. I am constantly looking for ways to do my job better
27. The skill level in my work unit has improved in the past year
43. My supervisor/team leader provides me with opportunities to demonstrate my leadership skills
46. My supervisor/team leader provides me with constructive suggestions to improve my job performance
47. Supervisors/team leaders in my work unit support employee development
48. My supervisor/team leader listens to what I have to say
3. I feel encouraged to come up with new and better ways of doing things
21. My work unit is able to recruit people with the right skills
29. The workforce has the job-relevant knowledge and skills necessary to accomplish organizational goals
36. My organization has prepared employees for potential security threats
39. My agency is successful at accomplishing its mission
2. I have enough information to do my job well
7. When needed I am willing to put in the extra effort to get a job done
9. I have sufficient resources (for example, people, materials, budget) to get my job done
14. Physical conditions (for example, noise level, temperature, lighting, cleanliness in the workplace) allow employees to perform their jobs well
20. The people I work with cooperate to get the job done
26. Employees in my work unit share job knowledge with each other
30. Employees have a feeling of personal empowerment with respect to work processes
58. Managers promote communication among different work units (for example, about projects, goals, needed resources)
59. Managers support collaboration across work units to accomplish work objectives
61. I have a high level of respect for my organization’s senior leaders
62. Senior leaders demonstrate support for Work/Life programs
SOURCE: Questions excerpted from OPM, 2013a.
data from the FEVS can be analyzed at the lowest level available throughout the department (generally by physical location within a component). This more manageable core set of questions (the 2012 FEVS included 98 questions) can be used by all DHS component agencies to provide a benchmark for improvement. Some of the other FEVS questions may also contribute in important ways (for example, questions on trust and respect); DHS will have to align the questions that it decides to use with identified goals and objectives.
The committee suggests a mixed-methods approach to data collection, which is needed for multifaceted change. Qualitative research can play a role in this approach, with the goal of providing timely and useful information that can inform adjustments as needed. Underlying (and hidden) concepts and worker ideas and reactions can be captured from meeting notes and group discussions. These data can be collected at the component agency level and added to the quantitative data collected to obtain a more complete picture of what is working best, under what conditions, and what might need to be changed to arrive at the desired outcomes. Workers can contribute to critical thinking about the overall goals and processes of the change strategies, and this will increase their investment in the change. Together, these data would provide information on the overall organizational climate and on how opportunities align with the intrinsic career motivations and capacities of the workforce.
Measurement and evaluation are critical for ensuring continuous improvement, which is one of the NIOSH essential elements (“learn from experience”). Continuous improvement is the adjustment or modification of programs on the basis of established milestones and results that have been measured and analyzed. It includes a focused effort to make improvement through additional tests of change. One approach is a rapid cycle improvement strategy, such as Plan-Do-Study-Act (PDSA) (Langley et al., 1996, 2009). In that strategy, objectives and predictions about outcomes are identified, and a plan to carry out the remainder of the cycle (who, what, where, and when) is created. As the plan is carried out, data are collected and used to identify potential problems. Time is then set aside to analyze the data and look at outcomes to track progress and identify needed improvements. Finally, the analyses are acted on with needed adjustments of the initial plan that are based on the outcomes. The cycle then begins anew to monitor and improve continuously. Incorporating a method for ongoing surveillance is critical for gaining a comprehensive understanding of underlying drivers of workforce readiness and resilience.
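A rapid-cycle strategy such as PDSA can be thought of as a loop in which each cycle's measured outcome becomes the next cycle's baseline. The sketch below expresses that loop with hypothetical absenteeism numbers; the 10 percent prediction and the 0.5-day-per-cycle effect are invented for illustration, not drawn from any DHS program.

```python
# Illustrative sketch only: a Plan-Do-Study-Act loop over hypothetical
# absenteeism data (average absence days per employee per month).
def run_pdsa(baseline, target, observe, max_cycles=5):
    """Repeat PDSA cycles until the measure reaches the target."""
    value, cycles, log = baseline, 0, []
    while value > target and cycles < max_cycles:
        predicted = value * 0.9            # Plan: predict a 10% improvement
        observed = observe(value)          # Do: carry out the test, collect data
        log.append((predicted, observed))  # Study: compare outcome with prediction
        value = observed                   # Act: adopt the result and cycle again
        cycles += 1
    return value, cycles, log

# Hypothetical intervention that trims 0.5 day per cycle.
final, n_cycles, log = run_pdsa(baseline=6.0, target=4.0, observe=lambda v: v - 0.5)
```

The `log` of predictions versus observations is the "Study" step's raw material: a persistent gap between the two signals that the plan's assumptions, not just its execution, need adjustment.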
Special Considerations Related to Law Enforcement
A substantial number of DHS components (about 50 percent) are involved in law enforcement. Law enforcement personnel are frequently exposed to traumatic events that may lead to pathogenic outcomes, such as acute or posttraumatic stress disorder, that erode individual resilience and readiness. In some departments, traumatic events (often termed critical incidents) are recorded in a database. That allows departments to rank such events in terms of severity and to be better informed about what sorts of action to take if an employee is so exposed. Organizational trust is essential in law enforcement resilience, so it is important for personnel involved in traumatic situations to be informed of the reasons for recording a particular event. An additional method of preventing pathogenic outcomes is to train supervisors to recognize behaviors and symptoms of undue stress or posttraumatic stress disorder. Often, behavioral changes include changes in personality, decreased performance, work absence, increased alcohol use, and poor hygiene. Supervisors trained in this area
can assess the degree of difficulty that an officer is having and take appropriate action to remedy the situation.
Measuring health-improvement processes, understanding their relationship to the well-being of an organization, and designing effective interventions all rely on collecting understandable, valid, timely, accurate, and integrated data. Collecting appropriate data to understand critical issues can also help to bring top leadership support, something that the current resilience program needs. DHS is not using data to diagnose the needs of its workforce and is not measuring progress in addressing those needs. DHS does not collect the necessary data, so it is impossible to tell where resources should be invested (that is, in which specific interventions). A key role that DHS leadership can play is to put forward a measurement and evaluation framework to inform the long-term strategy.
The committee finds that DHS lacks a strategy, a framework, and a common set of metrics that promote, sustain, and monitor employee readiness and resilience, and ultimately program effectiveness. DHS and its components thus lack a comprehensive, consistent, coherent, meaningful top-to-bottom view of the readiness and resilience of the DHS workforce.
Recommendation 6: Develop and implement a measurement and evaluation strategy for continuous improvement of workforce readiness and resilience in the Department of Homeland Security.
The Department of Homeland Security should design and implement an ongoing measurement and evaluation process to inform and improve employee and organizational readiness and resilience initiatives. This will support planning, assessment, execution, evaluation, and continuous quality improvement of the strategic plan. Before the introduction of any new measures or the collection of any new data, DHS should access and analyze existing workforce data.
Characteristics of the evaluation strategy should include
a. A focus on structure, process, and outcome measures.
b. Implementation of a standardized core set of measures to be used DHS-wide.
c. Establishment of a baseline database for diagnostic and prescriptive purposes.
d. Establishment of clear program goals, with associated timelines, that can be tracked and monitored using DHS’s measurement and evaluation system.
e. Ongoing assessment of program implementation, with regular quarterly reports on progress.
f. Use of evidence to inform resource allocation and reallocation.
g. Regular communication and dissemination of findings among components.
h. Submission of an annual measurement and evaluation report to the Secretary (see Recommendation 2).
The committee recognizes that this will require senior executive intervention to make data that have already been collected available for this purpose, that new data collection will be needed, and that all data elements should be integrated. The annual report to the Secretary should include data on the structural, process, and outcome changes outlined in this chapter, for example:
• Structural changes—a composite score of organizational culture scores that uses a subset of FEVS items or an external tool, like the CDC Worksite Health Scorecard (CDC, 2012).
• Process changes—a score that indicates employee engagement, morale, leadership support, and satisfaction with the workplace (FEVS subset of items or a separate survey).
• Outcome changes—with a focus on self-reported health improvements (stress, depression, obesity, smoking, blood pressure, cholesterol, glucose, body mass index, alcohol use, and physical activity), productivity changes (absenteeism, disability, safety, and presenteeism), and medical costs (expenditure per employee per year).
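One simple way to combine the structural, process, and outcome scores above into a single number for the annual report is a weighted average. The sketch below is purely illustrative: the scores, the 0–100 normalization, and the weighting of outcomes at twice the other domains are all assumptions, not committee recommendations.

```python
# Illustrative sketch only: combining structural, process, and outcome
# scores (each normalized to 0-100) into one annual composite.
# All scores and weights are hypothetical.
def composite_score(scores, weights):
    """Weighted average of domain scores."""
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in scores) / total_weight

annual_scores = {
    "structure": 72.0,  # e.g., organizational-culture scorecard result
    "process":   65.0,  # e.g., engagement/morale index
    "outcome":   58.0,  # e.g., health, productivity, and cost index
}
domain_weights = {"structure": 1, "process": 1, "outcome": 2}  # assumed weighting

overall = composite_score(annual_scores, domain_weights)
```

Whatever weighting DHS adopted, the key design choice is to fix it in advance and hold it constant year to year, so that changes in the composite reflect changes in the workforce rather than changes in the formula.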
This chapter has provided a measurement framework. DHS needs to work with components to identify the core measures to be used through-
out DHS on the basis of goals adopted for WRR and in alignment with the mission. Component agencies will need to identify other measures or data points on the basis of their unique needs and goals to complement the core measures identified for use DHS-wide.
The committee recognizes that members of the DHS workforce may view the collection and analysis of some types of data outlined in this chapter in an unfavorable light, and this could potentially further diminish morale. As discussed in Chapter 3, it is imperative that DHS maintain an open conversation with the workforce, especially regarding these efforts. It will be important for DHS to demonstrate genuine concern for the workforce, communicate how building WRR directly benefits employees, and openly discuss and address any concerns that may arise. The committee believes that such open communication will garner workforce support for these efforts and help to embed a culture of resilience within DHS.
As with anything that needs improvement, and measurement and evaluation are no exceptions, leadership needs to be the role model and set the pace for the endeavor to be successful. As noted, programs anchored in measuring problems and achievements know where they are going and are more likely to reach identified goals. Data must be collected and updated in a timely manner, vetted for quality, and made accessible to users. A strategic measurement and evaluation framework needs to be developed that addresses structure, process, and outcome measures; review of data sources; setting of goals; establishment of a timetable; allocation of budget dollars; and the requirement of regular reports with recommendations for program adjustments (or elimination if indicated). DHS is not unique among federal agencies in facing challenges in climate, culture, and performance, and it is certainly not alone in lacking the kind of systematic and data-driven improvement strategies that are discussed in this chapter. Although what the committee recommends will initially be a large undertaking, once in place it will yield important returns to the workforce and can stand as a model system for other federal agencies.
Acquisition Solutions. 2009. Integrated IT governance: Providing agency transparency and visibility. https://www.asigovernment.com/documents/Insights_Integrated%20IT%20Governance.pdf (accessed August 21, 2013).
Baicker, K., D. Cutler, and Z. Song. 2010. Workplace wellness programs can generate savings. Health Affairs (Millwood) 29(2):304–311.
Care Continuum Alliance. 2010. Outcomes guidelines report, vol. 5. Washington, DC: Care Continuum Alliance.
CDC (Centers for Disease Control and Prevention). 2011. Investing in prevention improves productivity and reduces employer costs. http://www.cdc.gov/policy/resources/Investingin_ReducesEmployerCosts.pdf (accessed August 21, 2013).
CDC. 2012. The CDC worksite health scorecard: An assessment tool for employers to prevent heart disease, stroke, and related health conditions. Atlanta, GA: Department of Health and Human Services.
Connor, K. M., and J. R. T. Davidson. 2003. Development of a new resilience scale: The Connor-Davidson Resilience Scale (CD-RISC). Depression and Anxiety 18(2):76–82.
Cornum, R., T. D. Vail, and P. B. Lester. 2012. Special feature—resilience: The result of a totally fit force. Joint Force Quarterly (66):28.
DHS (Department of Homeland Security). 2011. Department of Homeland Security workforce strategy: Fiscal years 2011–2016. Washington, DC: DHS.
Frankel, N., and A. Gage. 2007. M&E fundamentals: A self-guided minicourse. Washington, DC: USAID/MEASURE Evaluation.
GAO (Government Accountability Office). 2012a. Department of Homeland Security: Taking further action to better determine causes of morale problems would assist in targeting action plans. GAO-12-940. Washington, DC: GAO.
GAO. 2012b. DHS strategic workforce planning: Oversight of departmentwide efforts should be strengthened. Washington, DC: GAO.
Goetzel, R. 2013. Intensive training program: How to evaluate health promotion programs. Presentation at the 23rd Annual Art & Science of Health Promotion Conference, March 18–22, 2013.
Goetzel, R. Z., and R. J. Ozminkowski. 2001. Program evaluation. In Health promotion in the workplace, edited by M. P. O’Donnell. Albany, NY: Delmar. Pp. 116–165.
Goetzel, R. Z., and P. E. Terry. 2013. Current state and future directions for organizational health scorecards. American Journal of Health Promotion 27(5):11.
Green, A. 2013. Data resources for resilience. Presentation to the IOM Committee on Department of Homeland Security Workforce Resilience, February 4–5, Washington, DC.
Green, A., and L. Perkins. 2012. DHS workforce resilience: Past, current and future. Presentation to the IOM Committee on Department of Homeland Security Workforce Resilience, Washington, DC.
Hastie, T., R. Tibshirani, and J. Friedman. 2009. The elements of statistical learning. 2nd ed. New York: Springer.
HERO (Health Enhancement Research Organization). 2012. HERO employee health management best practices scorecard in collaboration with Mercer, annual report 2012. New York: HERO and Mercer LLC.
International Federation of Red Cross and Red Crescent Societies. 2011. Project/programme monitoring and evaluation (M&E) guide. Geneva, Switzerland: International Federation of Red Cross and Red Crescent Societies.
IOM (Institute of Medicine). 2005. Integrating employee health: A model program for NASA. Washington, DC: The National Academies Press.
IOM. 2012. Building a resilient workforce: Opportunities for the Department of Homeland Security: Workshop summary. Washington, DC: The National Academies Press.
Isaac, F. 2013. Work, health, and productivity: The Johnson & Johnson Story. Presentation to the IOM Committee on Department of Homeland Security Occupational Health and Operational Medicine Infrastructure. Washington, DC.
Johnson & Johnson. 2013. Health and Performance Index (HaPI). New Brunswick, NJ: Johnson & Johnson.
Langley, G. J., K. M. Nolan, T. W. Nolan, C. L. Norman, and L. P. Provost. 1996. The improvement guide: A practical approach to enhancing organizational performance. San Francisco, CA: Jossey-Bass.
Langley, G. J., R. Moen, K. M. Nolan, T. W. Nolan, C. L. Norman, and L. P. Provost. 2009. The improvement guide: A practical approach to enhancing organizational performance. 2nd ed. Classic icon. San Francisco, CA: Jossey-Bass.
Lester, P. B. 2013. Data integration. Presentation to the IOM Committee on Department of Homeland Security Workforce Resilience, February 4–5, Washington, DC.
National Prevention, Health Promotion, and Public Health Council. 2013. 2013 annual status report. http://www.surgeongeneral.gov/initiatives/prevention/2013-npc-status-report.pdf (accessed August 21, 2013).
NBGH (National Business Group on Health). 2013. WISCORE, the Wellness Impact Scorecard. https://www.businessgrouphealth.org/scorecard_v4/index.cfm?event=using (accessed July 9, 2013).
NIOSH (National Institute for Occupational Safety and Health). 2008. Essential elements of effective workplace programs and policies for improving worker health and wellbeing. Atlanta, GA: CDC.
NRC (National Research Council). 2010. Steps toward large-scale data integration in the sciences: Summary of a workshop. Washington, DC: The National Academies Press.
OPM (Office of Personnel Management). 2010. The health data warehouse. Letter No. 2010-08. Washington, DC: OPM.
OPM. 2012. Agency rankings. http://www.fedview.opm.gov/2012/Reports/Ranking.asp?HCAFF=KM&VW=FULL (accessed June 7, 2013).
OPM. 2013a. Federal viewpoint survey. http://www.fedview.opm.gov (accessed July 9, 2013).
OPM. 2013b. Work/life health and wellness: Well check. http://www.opm.gov/policy-data-oversight/worklife/health-wellness/#url=Well-Check (accessed July 26, 2013).
Partnership for Public Service. 2005. Case study, NASA: Overcoming mission challenges. Washington, DC: Partnership for Public Service.
Pronk, N. 2005. The four faces of measurement. ACSM’S Health & Fitness 9(5):3.
Soler, R. E., K. D. Leeks, S. Razi, D. P. Hopkins, M. Griffith, A. Aten, S. K. Chattopadhyay, S. C. Smith, N. Habarta, R. Z. Goetzel, N. P. Pronk, D. E. Richling, D. R. Bauer, L. R. Buchanan, C. S. Florence, L. Koonin, D. MacLean, A. Rosenthal, D. Matson Koffman, J. V. Grizzell, A. M. Walker, and Task Force on Community Preventive Services. 2010. A systematic review of selected interventions for worksite health promotion: The assessment of health risks with feedback. American Journal of Preventive Medicine 38(2 Suppl):S237–S262.
Tumblin, D. 2013. DHS best practices. Presentation to the IOM Committee on Department of Homeland Security Workforce Resilience, Washington, DC.
Vogus, T. J., and K. M. Sutcliffe. 2007. Organizational resilience: Towards a theory and research agenda. Montreal, QC.
Wagnild, G. 2009. A review of the resilience scale. Journal of Nursing Measurement 17(2):105–113.
WHO (World Health Organization). 2009. A guide to monitoring and evaluation for collaborative TB/HIV activities. Geneva, Switzerland: WHO.
W.K. Kellogg Foundation. 2004. W.K. Kellogg Foundation evaluation handbook. Battle Creek, MI: W.K. Kellogg Foundation.