are overburdened with duplicating production of systematic reviews. Numerous private sector organizations, such as health plans and technology assessment firms, set their own priorities for assessing evidence, but their research is often duplicative because many parties focus on the same set of emerging technologies and new applications of existing technologies (BCBSA, 2007a; ECRI, 2006; Hayes, 2006). While some duplication may be desirable, and private organizations should be free to set their own research priorities, users of evidence have little basis for deciding which of the available reviews to rely on.
For a program to be feasible it must be able to function in the real world; its processes must be sound, its resources must be adequate over the long term, and its leaders must pay attention to stakeholders. A program must also be attuned to political realities. If the program lacks sufficient public support, it will be neither implemented nor sustained. If the program is not protected from political conflict and funding is withdrawn, the public investment will be wasted and any gains made will be lost. This lesson has been repeated numerous times during the decades of on-and-off federal involvement in research on clinical effectiveness (Congressional Budget Office, 2007). In particular, the committee notes the experience of AHRQ as an example of political pressures that have short-circuited the important beginnings of high-quality clinical effectiveness research in the United States. In the early 1990s, funding for AHRQ was almost eliminated due to stakeholders’ anger over the findings presented in its guideline on interventions for back pain (Gray, 1992; Gray et al., 2003).
Objectivity requires that a program incorporate certain features, such as balanced participation, sound governance, and standards that minimize conflicts of interest and other biases. Objectivity is central to building public confidence in the integrity of an organization. Patients, health professionals, payers, and developers of practice guidelines depend on systematic reviews to know whether the available evidence is valid. They need to be able to trust the Program to reach conclusions that are driven solely by the evidence and never by special interests that stand to benefit materially. The public will not trust a program that lacks adequate protections against bias and conflict of interest.
As the previous chapters have described, there is a growing literature documenting that in comparison with non-industry-sponsored research,