The Dietary Guidelines Advisory Committee (DGAC) selection process comprises a set of steps designed to help attain specific goals. When implemented in an environment of high-velocity change, a process may not always continue to yield desired results. It will be important for the U.S. Department of Agriculture (USDA) and the U.S. Department of Health and Human Services (HHS) to dynamically improve the DGAC selection process to achieve desired results over time.
Sustained, optimal performance is the product of systematic quality improvement activities. These activities address outcomes such as cycle time, efficiency, defects, duplication, and waste (Deming, 1982; Sehwail and DeYong, 2003; Womack et al., 1990). While these methods and tools were created in manufacturing environments, their use in services and policy is now well established in many sectors, including health care (Berwick et al., 2008; Taner et al., 2007). Quality improvement is indispensable to continuously learning systems. It helps an organization drive toward positive change, and it also contributes to enhanced adoption, use, and trust from a variety of stakeholders. High-performing processes, in turn, help to deeply embed quality improvement in an organization’s management systems and culture.
The field of quality improvement has undergone a transformation as organizations are pressed to make changes in shorter, faster intervals.
The focus has shifted from identifying, fixing, and improving processes in an ad hoc manner, to dynamically learning and adapting. One well-recognized and extensively used approach to quality improvement in the private and public sectors is the Plan-Do-Study-Act cycle. It employs iterative cycles of design, execution, measurement, and evaluation. The first step of the cycle is planning. This involves key stakeholder engagement to help design a clear statement of objectives and develop a detailed implementation plan. Actively seeking input from stakeholders is critical to developing a product relevant to the end users. Also important to the planning phase is development of actionable metrics to evaluate performance and achievement of objectives. The second step of the cycle—“do”—implements the intervention and activates the data collection process, including notation of problems and observations. The context surrounding each change is also documented in this phase. “Study” is the third step of the cycle and involves the timely analysis of data collected to quantify performance against objectives. The final step of the cycle is to “act” on the data-driven insights by identifying the next opportunities for improvement and repeating the cycle.
Ideally, the Dietary Guidelines for Americans (DGA) would engage in a continuous process improvement system, beginning with the DGAC selection process. The DGAC selection process has been modified over time but not as a consequence of a proactive, disciplined quality improvement process. As a result, little data currently exist to evaluate the effectiveness of the DGAC selection process. There are many opportunities to make the DGAC selection process more evidence based. This National Academies of Sciences, Engineering, and Medicine (the National Academies) committee believes the aforementioned attributes of quality improvement are critical to improving the DGAC selection process. A system for continuous quality improvement can have significant benefits, but takes time and commitment to develop.
Recommendation 4. The secretaries of USDA and HHS should adopt a system for continuous process improvement to enhance outcomes and performance of the Dietary Guidelines Advisory Committee selection process.
This National Academies committee sought to base its recommendations on objective science, but found little evidence with which to objectively assess the DGAC selection process. To measure how effective or trustworthy the selection process is, and where opportunities exist to improve it, a concerted effort needs to be made. Actionable measures to evaluate the DGAC selection process need to be created. Data have to be identified and baseline measurements taken. Plans for implementation and evaluation have to be made. A commitment to a culture of change is needed to continuously learn, respond, and adapt.
Of critical importance to adopting a continuous quality improvement system is stakeholder engagement. Stakeholders of the DGA include the general public, the government, industry, and issue-specific advocates. Specific to the DGAC selection process, as discussed in Chapter 4, it will be important to offer interested parties as many opportunities as practicable to provide input. Active stakeholder engagement can help engender trust in a balanced and effective DGAC.
Recognizing that changes to the DGAC selection process will not be immediate, this National Academies committee suggests actions to be taken in the short term, focused at three levels: the overall selection process, the advisory committee’s structure, and the advisory committee itself.
Overall Selection Process
The overall selection process ought to be decomposed and each element evaluated for its current effect on stakeholder trust and perceptions of integrity. A key hypothesis to be tested is that changes made to enhance transparency of the selection process actually result in greater public trust and insight as to how nominees are considered for appointment to the DGAC. The recommendations in Chapter 4, such as the use of a third party to narrow the pool of candidates, addition of public comment periods, and development of strategies to identify and manage biases and conflicts of interest, all ought to be studied for their ability to add value. The criteria for selection also ought to be evaluated and tailored as needed. It will be important to capture both favorable and adverse unintended consequences of such changes and for the process to react accordingly.
The effects of detailed decisions made while designing the selection process also need to be reviewed. For example, what are the most relevant materials to collect during the nominations process? What oversight processes are in place for ensuring implementation of the process is fair and just? Is there a marked advantage to collecting full bias and conflict-of-interest information from all candidates before a slate of members is proposed? These questions and others ought to be prioritized and considered over time.
To test hypotheses, interventions and outcomes first need to be measured and baseline data have to be collected, but trust is a difficult outcome to measure. Success of the DGA relies on the programs and health professionals (e.g., individual dietitians, physicians) responsible for disseminating the guidelines. The definitive measures of trust in the DGA therefore are (1) the degree of these programs’ and health professionals’ familiarity with and buy-in to the guidelines, and ultimately (2) the percentage of the public adhering to the advice. Although these would be complex to measure, they could serve as longer-term measures, building on initial measurements by academic centers and others.
Additionally, this National Academies committee could not develop a litmus test to gauge outcomes midcourse, but a number of intermediate outcomes could be developed. For example, data could be collected through surveys and focus groups using carefully crafted questionnaires asking members of the public whether they believe the process is fair and whether they are confident in its implementation and results. Other, less descriptive assessments, such as the numbers and types of public comments received, are also important to capture. Simulation models could also be built to gauge the potential effect of specific interventions on the efficiency and effectiveness of the process.
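As a concrete illustration of the simulation-model idea, a minimal Monte Carlo sketch could compare how often a multi-step selection process finishes on schedule with and without an intervention. All probabilities, step counts, and deadlines below are invented placeholders, not estimates from any actual DGAC data.

```python
import random

# Hypothetical Monte Carlo sketch: estimate the share of selection cycles
# completed within a deadline, before and after an assumed intervention
# that reduces the chance each step is delayed. All parameters are
# illustrative assumptions only.

def simulate(p_delay_per_step, steps=6, deadline_steps=8,
             trials=10_000, seed=42):
    """Return the fraction of simulated cycles finishing within the deadline."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    on_time = 0
    for _ in range(trials):
        elapsed = 0
        for _ in range(steps):
            elapsed += 1                         # the step itself
            if rng.random() < p_delay_per_step:  # step hits a delay
                elapsed += 1
        if elapsed <= deadline_steps:
            on_time += 1
    return on_time / trials

baseline = simulate(p_delay_per_step=0.30)   # current process (assumed)
improved = simulate(p_delay_per_step=0.15)   # with intervention (assumed)
```

Even a toy model like this forces the assumptions behind an intervention to be stated explicitly, which is itself a useful discipline before committing to a process change.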
Advisory Committee Structure
A second level of evaluation to consider is the advisory committee’s structure. The structure includes the advisory committee’s operating procedures and the roles of members, as well as the effect of biases and conflicts of interest. These factors can all influence whether a wide range of viewpoints and expertise are considered during deliberations.
An assessment of the DGAC’s operating procedures could be warranted. By tradition, the DGAC scientific report has been a consensus document. However, future DGACs may want to discuss the value in allowing members to post an explanation of why they do not agree with a particular conclusion. This could be done in a number of ways, such as issuing minority opinions and publishing unresolved questions with conflicting data as needed. Alternatively, assessments could identify whether voting is an effective way of letting the public know which conclusions were reached and by what margins.
Examples and evidence are also needed to identify the effect of different advisory committee structures. Whether having the advisory committee comprise members of voting and nonvoting status leads to inclusion of a broader range of viewpoints and expertise ought to be studied. As discussed in Chapter 4, the effects of various external inputs to the advisory committee, such as consultants, non-DGAC members on subcommittees, and invited expert speakers, would also be important to review. Future DGACs ought to reflect on lessons learned from assessments of varying committee compositions.
An evaluation of the potential effect of the DGAC’s composition and structure could verify that the process does engender the fair sharing of opinions. To measure the extent to which diverse opinions are considered, it would be beneficial to hear from the members themselves, potentially through interviews or surveys.
The effects of various tools to manage biases and conflicts of interest are also critical and need to be evaluated. Because of the complexity inherent in characterizing and managing conflicts of interest, and the current variation in policies, it is not surprising that comprehensive intervention studies are not available. However, strengthening the evidence base is critical to further understanding of how conflict-of-interest policies affect the development of advisory committees and their subsequent recommendations. A number of areas have been identified where additional research could strengthen and improve management of conflict of interest, including
- identifying relationships and their associated level of risk of conflicts of interest arising,
- characterizing policies that achieve the desired outcome of reducing the risk of bias and reducing the appearance of bias, and
- monitoring any unintended negative consequences of policy implementation in order to continue to allow organizations to manage conflicts in the most effective way possible (IOM, 2009).
This National Academies committee identified several key components of a comprehensive study on conflict of interest when considering additional research to better inform selection processes:
- Clearly define conflicts of interest and the potential types and strata of disqualifying activities.
- Identify the effects of a procedural intervention or strategy to manage biases and conflicts of interest (e.g., the removal of individuals from voting on issues where there may be a conflict of interest).
- Discuss any unique considerations for the specific population or type of guidelines in consideration (e.g., the relative availability of nonconflicted subject-matter experts).
- Describe the specific effect of bias and conflict of interest depending on what is being considered (e.g., a person whose research strongly favors a certain point of view would have more relevant biases when considering an alternative point of view).
- Evaluate findings tied to outcomes of interest, including any reduction in the number of recommendations possibly influenced
by advisory committee members’ conflicts of interest or number of perceived conflicts. A correlated outcome of interest would be an increase in public trust; however, it is important to recognize that trustworthiness is multifactorial and is more than an assessment of real or perceived conflicts of interest.
Advisory Committee Functions
Specific to the DGAC selection process, evaluations related to how the advisory committee functions also ought to be conducted, such as examining the effectiveness of the leadership team and, as applicable, the roles of a chair and vice chair. Potential benefits of facilitators and other outside collaborators could also be reviewed as additional support in balancing perspectives and potential biases throughout advisory committee deliberations. Consultants could bring techniques, methods, and technologies that can help identify biases and mitigate them. More research is also needed on the effect of different potential biases on various recommendations. For example, are there particular types of recommendations that would be more or less susceptible to bias? Different methods to mitigate such biases also need to be assessed to determine how best to drive improvements at the level of the advisory committee.
The DGAC selection process needs to dynamically evolve and improve. A system needs to be developed so improvements are grounded in evidence. Lessons should be learned from each cycle and integrated into future selection processes. Best practices from other advisory committees and bodies of literature should also be incorporated. However, a continuous process improvement system requires a long-term commitment and resources to appropriately collect data and measure change. It will be important to measure not only improvements in the selection process, but also any unintended adverse consequences. Proven continuous process improvements can help improve the integrity of the DGAC selection process and merit the public’s trust.
Berwick, D. M., T. W. Nolan, and J. Whittington. 2008. The triple aim: Care, health, and cost. Health Affairs 27(3):759-769.
Deming, W. E. 1982. Quality, productivity, and competitive position. Cambridge, MA: Massachusetts Institute of Technology.
IOM (Institute of Medicine). 2009. Conflict of interest in medical research, education, and practice. Washington, DC: The National Academies Press.
Sehwail, L., and C. DeYong. 2003. Six sigma in health care. Leadership in Health Services 16(4):1-5.
Taner, M. T., B. Sezen, and J. Antony. 2007. An overview of six sigma applications in healthcare industry. International Journal of Health Care Quality Assurance 20(4):329-340.
Womack, J. P., D. T. Jones, and D. Roos. 1990. The machine that changed the world. New York: Free Press.