The ACA provided the impetus for the IOM to form a panel to make recommendations about screening and preventive services that “have been shown to be effective for women,” which the Secretary would in turn consider for first-dollar coverage by all new private plans in operation in 2014. However, a remarkably short time frame was provided for the task of reviewing all evidence for preventive services beyond those encompassed by the USPSTF, Bright Futures, and ACIP: the committee’s final report was due barely six months from the time the group was empaneled.
As the Report acknowledges, the lack of time prevented a serious and systematic review of evidence for preventive services. This should in no way reflect poorly on the tireless work of the committee and staff; it merely reflects the fact that the process set forth in the law allotted unrealistic time to such an important and time-intensive undertaking. Where I believe the committee erred was in its zeal to recommend something despite the time constraints and a far-from-perfect methodology.
The Report posits four categories as the basis for the recommendations, ranging from “high quality systematic evidence reviews” (Category I) to potentially self-serving guidelines put forth by professional organizations (Category IV). The categories on their face provide little basis to exclude many preventive services. For example, Category II asks whether there are any “quality” supportive peer-reviewed studies, but there is no clear benchmark for what quality means in this context; many studies published in peer-reviewed journals (even very well respected journals) are of low quality and are not generalizable. The problematic nature of the categories aside, the relative weights applied to each category vis-à-vis the recommendations were not specified, making it impossible to discern which factors were most important in the decision to recommend one service versus another. The categories were combined with expert judgment from members of the committee and supplemented with committee debate to arrive at the recommendations put forth in the Report. Readers of the Report should be clear that the recommendations were made without high quality, systematic evidence of the preventive nature of the services considered. Put differently, evidence that use of the services in question leads to lower rates of disability or disease and increased rates of well-being is generally absent.
The view of this dissent is that the committee’s process for evaluating the evidence lacked transparency and was largely driven by the preferences of the committee’s members. Troublingly, the process tended to produce a mix of objective and subjective determinations filtered through a lens of advocacy. An abiding principle in the evaluation of the evidence and the