The Information Paradox
The treatment of breast cancer is one example of the information paradox in clinical medicine. Relative to years past, a vast array of information about breast cancer is available. Five decades ago, breast cancer was detected from a physical exam, no biopsy was performed, and mastectomy was the recommended treatment for all detected breast cancers (Harrison, 1962). Today, multiple imaging technologies exist for the detection and diagnosis of the disease, including standard x-ray mammography, computed tomography (CT), ultrasound, positron emission tomography (PET), and magnetic resonance imaging (MRI) (IOM, 2001b, 2005). Similarly, traditional biopsies required surgical excision of the area of interest, whereas newer methods, such as fine needle aspiration biopsy and core needle biopsy, allow for less invasive evaluation and may be performed under imaging guidance (Bevers et al., 2009). Once diagnosed, the cancer can be further characterized by its genetic features (such as BRCA1, BRCA2, HER-2, and now multigene tests), in addition to its estrogen and progesterone receptor status. Treatments have developed at a similarly fast pace, with a number of surgical, radiation, chemotherapy, and endocrine therapies now available, along with targeted therapies such as monoclonal antibodies (Kasper and Harrison, 2005; National Comprehensive Cancer Network, 2012). Yet while progress in breast cancer diagnosis and treatment has been swift, the comparative efficacy and safety of these diagnostic technologies and treatments have not been evaluated; these innovations are administered without an adequate evidence base. Likewise, the efficacy of many treatments and the accuracy of many diagnostic technologies are unknown for a given patient with a given condition (IOM, 2008).
The results include widespread variation in patient care, confusion among patients and providers on the best methods for treating a specific disease or condition, and waste due to delivering services that are ineffective or even harmful for the patient.
Many clinical practice guideline recommendations rest on expert opinion, case studies, or standards of care rather than on multiple clinical trials or meta-analyses (Chauhan et al., 2006; IOM, 2008, 2011b; Tricoci et al., 2009). A study of the strength of the current recommendations of the Infectious Diseases Society of America, for example, found that only 14 percent were based on more than one randomized controlled trial, and more than half were based on expert opinion alone (Lee and Vielemeyer, 2011). Another study, examining the joint cardiovascular clinical practice guidelines of the American College of Cardiology and the American Heart Association, found that the current guidelines were based largely on lower levels of evidence or expert opinion (Tricoci et al., 2009).
The inadequacy of the evidence base underlying clinical guidelines carries over into the care actually delivered. Estimates vary on the proportion of clinical decisions in the United States that are adequately informed by formal evidence from clinical research, with some studies suggesting a figure of just 10-20 percent (Darst et al., 2010; IOM, 1985). These results suggest substantial opportunities for improvement in ensuring that the knowledge generated by the clinical research enterprise meets the demands of evidence-based care.