Basic Elements and Building Blocks of an RLHS for Cancer
For a cancer RLHS to meet the promise of evidence-based personalized medicine, it must have a number of basic elements. Some of these building blocks are already in place. This chapter will discuss cancer registries that collect patient data and the computer technology that enables these registries to link to each other and to other datasets in dispersed networks that act in concert to achieve specific tasks. Augmenting registries and computer grids are electronic health records, which could enable rapid electronic reporting and seamless capture of staging and other patient test and treatment information. Electronic networks are able to provide information exchange and feedback to providers. Other key elements of the infrastructure of an active and growing RLHS for cancer discussed in this chapter include the integration of information from clinical trials, comparative effectiveness research, evidence-based clinical practice guidelines, quality metrics, and decision support tools.
Dr. Joseph Lipscomb of Emory University described cancer registries as organized systems that use observational study methods to collect uniform data to evaluate outcomes. Cancer registration is the fundamental method in the United States by which information is systematically collected from various medical facilities about the incidence and types of
cancer, the anatomic location and extent of disease at diagnosis, the kinds of treatment given, and outcomes. Lipscomb noted that disease registries are a core resource for a learning healthcare system. Cancer registries can be used to determine the natural history of disease, determine clinical effectiveness or cost-effectiveness, measure and monitor safety and harm, and evaluate the quality of care. Dr. Robert German of the Centers for Disease Control and Prevention (CDC) noted that state cancer registries, in particular, collect population-based data on cancer incidence, morphology, primary site, stage at diagnosis, planned first course of treatment, and the outcome of treatment and clinical management. These registries collect their information from a number of sources, including hospitals, clinics, physician offices, pathology laboratories, nursing homes, and coroners’ offices. Most cancer data come from hospitals, where highly trained cancer registrars extract data from patients’ medical records and enter them into the registry’s computing software for transfer to central cancer registries. Ms. Sandy Thames of CDC outlined some of the challenges and limitations of using registry data: the time-consuming, labor-intensive process of collecting cancer data; the risk of errors in extraction or transcription; the limited nature of the data set, given the expense of manually collecting and processing large amounts of information; the delay in availability of data; and incomplete reporting and follow-up of cases. Notably, no standards have been implemented for data collection and reporting from non-hospital sources, which thus do not consistently report cases.
Notwithstanding these limitations, a number of state and national programs actively collect and report cancer data, producing extensive surveillance of cancer incidence and mortality in this country, with the cancer data collected differing according to the mandates of the supporting agency. Since the 1970s, the National Cancer Institute’s (NCI’s) Surveillance, Epidemiology, and End Results (SEER) program has collected a non-random, population-based sample of cancer incidence and survival data from a system of high-quality state and local cancer registries (NCI, 2010b). This database currently collects data from about 26 percent of the U.S. population. Since the 1990s, the CDC’s National Program of Cancer Registries (NPCR) has supported statewide, population-based cancer registries in 45 states, the District of Columbia, Puerto Rico, and the U.S. Pacific Island jurisdictions. The NPCR now covers about 96 percent of the U.S. population and provides CDC the means to receive, aggregate, and disseminate cancer data from state and territorial cancer registries for public health surveillance. The SEER program and the
CDC-NPCR program are complementary. SEER routinely collects patient demographics, including ethnicity, and is updated annually, providing a research resource for studying temporal changes in cancer incidence and mortality in segments of the population. SEER data have also been linked to Medicare claims data, producing a data set of over 3 million cases that contains all patients in SEER found to be Medicare eligible.
Another national registry that can provide not only valuable national cancer care data, but also feedback to providers, is the Commission on Cancer (CoC) National Cancer Data Base (NCDB), which has been in operation since 1985. The CoC is a consortium of professional organizations dedicated to reducing morbidity and mortality from cancer through education, standard setting, and monitoring the quality of care. CoC provides accreditation for hospital cancer programs throughout the country that together treat about three-quarters of cancer patients in the United States. The NCDB is not population based, but rather is an aggregation of cancer registry data from approximately 1,800 CoC-accredited institutions. It surveys and aggregates cancer patterns of care and outcomes. The CoC recently has begun using its database to monitor the performance of cancer programs in its member hospitals and provide feedback on measures that can be used as benchmarks.
CoC’s database includes a number of quality management tools available to its accredited programs. These tools include a program that assesses benchmarks related to number of cases, stage at diagnosis, and survival, and applies National Quality Forum (NQF) quality measures for cancer care. Using these measures, the CoC has recently started to notify hospitals that fall in the bottom 10 or 25 percent on quality measures and to help them develop an action plan to correct problems they may have with their data or with their care. “The CoC provides a unique national system for application because not only do we have the data collection infrastructure, but we have an existing structure for feedback and reporting for providers, which is neither the government nor the payers,” pointed out Dr. Stephen Edge, chair of the CoC.
The CoC is currently piloting its Rapid Quality Reporting System (RQRS), a registry-based system that provides more timely tracking of care processes. Providers enter about 50 pieces of data on each patient shortly after diagnosis, and the patients are then tracked against NQF measures for specific cancers. For example, an NQF standard for hormonal therapy of breast cancer is the percentage of female patients with Stage IC through IIIC, estrogen-receptor (ER)-positive or progesterone-receptor (PR)-positive breast cancer who were prescribed tamoxifen or an aromatase inhibitor within the 12-month reporting period. The system provides an up-to-date running record that shows, using color-coded visuals, when providers are giving the accepted standard of care, and warns providers when they are approaching the time limit for such care. For the example NQF standard above, the RQRS will inform physicians when they are approaching the one-year mark after diagnosis, the time limit for giving hormonal therapy to women with breast cancer. “This allows the registry staff the opportunity to say, ‘It has been 11 months since this person was diagnosed with breast cancer and we do not yet have the fact recorded that she got hormonal therapy,’” explained Dr. Edge, adding that such feedback is very likely to prompt follow-up.
This more rapid system is likely to be more effective at improving the quality of care than traditional systems, which may not inform providers about problems in care until three years after they occur, Dr. Edge added. “There’s good evidence that implementing a tracking system actually reduces disparities and systems failures in care,” he said. RQRS does require hospitals to invest in additional support to participate, which is problematic at a time when hospitals are trying to streamline their operations, Dr. Edge noted. RQRS is presently undergoing beta testing in about 70 CoC-approved cancer centers around the country, and the CoC hopes to have it available nationally by the end of 2010, he said.
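The tracking logic behind a system like RQRS can be sketched in a few lines. The sketch below is illustrative only: the field names, the 12-month window, and the one-month alert margin are assumptions modeled on the NQF hormonal-therapy measure described above, not details of the actual RQRS implementation.

```python
from datetime import date, timedelta

# Illustrative sketch of registry-based concordance tracking (not actual RQRS code).
# Each record notes the diagnosis date and whether the expected care was recorded.
WINDOW = timedelta(days=365)       # measure window: within 12 months of diagnosis
ALERT_MARGIN = timedelta(days=30)  # warn when roughly one month of the window remains

def classify(record, today):
    """Return a color-coded status for one patient record."""
    if record["hormonal_therapy_recorded"]:
        return "green"                   # standard care documented
    elapsed = today - record["diagnosis_date"]
    if elapsed > WINDOW:
        return "red"                     # window missed, or care not yet reported
    if elapsed > WINDOW - ALERT_MARGIN:
        return "yellow"                  # approaching the time limit: prompt follow-up
    return "pending"

record = {"diagnosis_date": date(2010, 1, 5), "hormonal_therapy_recorded": False}
print(classify(record, date(2010, 12, 20)))  # about 11.5 months elapsed -> "yellow"
```

The "yellow" state is what lets registry staff make the 11-month phone call Dr. Edge describes, before the window closes rather than years afterward.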
However, even the best registries may not be adequate for addressing key health system questions, such as comparative effectiveness or cost-effectiveness analyses. For example, the SEER program routinely collects abundant information on cancer patients, but this program does not collect information on how patients are treated after their first course of therapy, nor does it document disease recurrence, resources consumed, provider characteristics, or patient-reported outcomes. In addition, disease registries such as SEER are not linked to product registries for specific drugs or devices or to health services registries that document specific clinical procedures, encounters, or hospitalizations. An unrealized ideal, Dr. Lipscomb claimed, would be a population-based disease registry that can serve both as a health services registry and as a product registry.
An advantage to linking registries, according to Dr. Lipscomb, is that one can then acquire more information needed to answer research questions while avoiding the costs and efforts involved in collecting another set of data that duplicates, to some degree, what is already available. “Do not gather new data unless you have to gather new data,” Dr. Lipscomb said. But it is
also the case that multiple sources of information on the same event may permit cross-validation to improve data accuracy, Dr. Lipscomb noted.
Cancer illustrates the state of the art in the creation and application of linked databases to enhance registry data. Dr. German noted that data from a cancer registry can be linked to a number of external data sources, such as a state’s vital statistics records, including death certificates, as well as Social Security Administration data or the National Death Index, the latter to acquire death information about patients who die outside their state of residence. Cancer registries may also link to hospital discharge or medical claims data, such as those of Medicare, Medicaid, or other private insurers.
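In its simplest deterministic form, such linkage is an exact-match join on a shared identifier. The sketch below is a toy illustration only: real registry linkage is typically probabilistic, matches on many fields (name, birth date, Social Security number, address), and operates under strict confidentiality controls, and every identifier and field name here is invented.

```python
# Toy deterministic linkage of registry records to an external death index.
# Real linkages are probabilistic and multi-field; names here are invented.
registry = [
    {"patient_id": "A1", "site": "breast"},
    {"patient_id": "B2", "site": "lung"},
]
death_index = {  # e.g., national death-record lookups keyed on the same identifier
    "B2": {"date_of_death": "2009-03-14", "state": "FL"},  # died outside state of residence
}

linked = []
for rec in registry:
    match = death_index.get(rec["patient_id"])  # exact-match join on the identifier
    linked.append({**rec, **(match or {"date_of_death": None, "state": None})})

print(linked[1]["date_of_death"])  # the lung cancer case links to a death record
```

The point of the join, as in Dr. Lipscomb's remarks that follow, is that information already collected elsewhere is reused rather than gathered again.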
More than 200 studies have been conducted using SEER data linked to Medicare or Medicaid data or to private claims data, including studies that assess health disparities, quality of care, and cost of treatment (NCI, 2009). The SEER-Medicare database contains the more than 3 million cases in SEER that were found to be Medicare eligible, as well as a 5 percent random sample of people residing in SEER areas who have not been diagnosed with cancer to serve as controls in some studies, Dr. Potosky reported. He said that SEER-Medicare data are often used in CER because they provide longitudinal data on a large number of elderly subjects who are generally underrepresented in clinical trials. This dataset also includes patients with serious comorbidities, who would normally be excluded from cancer clinical trials.
The CoC is also exploring linking its national cancer database to administrative data, including physician records, EHRs, and billing data. In Ohio, CoC has a pilot project funded by the CDC that will enable it to link its cancer registry data with private payer claims, including those of United Healthcare and Anthem Blue Cross/Blue Shield, along with data from the Ohio Cancer Incidence and Surveillance System registry. The goal of this pilot project, which Dr. Edge is conducting along with his colleagues, is to define quality of care and identify the degree of completeness of the registry treatment data compared to care identified from claims data. This project has already demonstrated the feasibility of linking private payer claims data to the CoC database, at least in one state for one disease and one modality, and has shown a high level of agreement between the two sources of data in the surgical care of breast cancer patients. Researchers in this project plan to evaluate these same measures for lung cancer and to extend the data-linking model to other states.
State cancer registries are another useful source for researchers trying to learn what is needed to improve cancer care, especially if these registries are
extensive or extensively linked. Dr. Lipscomb discussed several advantages of strong state-based data systems as a practical, more expeditious route to developing a national cancer data system that is also an effective learning healthcare system. He noted the ever-improving quality and capacity of state registries and the strengths of state comprehensive cancer control plans, which increasingly call for better state data systems for surveillance and outcomes assessment. There is also a demonstrated capacity to link cancer registry data at the state level with public and private data sources. Except for the SEER-Medicare database, routinely linking population-based cancer registry data with external administrative or clinical sources to create an integrated multistate or national system starts at the state level and requires collaboration across states, he noted. In particular, the process of accessing and linking confidential data is very much state-centered. Finally, he noted that the state may be the right-sized laboratory for learning: large enough to reflect the complexity of mining multiple linked datasets, yet small enough to avoid the chaos of managing very large systems.
As an example, Dr. Lipscomb showed how the various datasets in the state of Georgia have been linked to answer a number of important cancer-related questions. These registries include the data collected by 15 counties in Georgia that are part of the SEER program and the Georgia Comprehensive Cancer Registry (GCCR), which collects data on cancer incidence for all the state’s counties. GCCR has been linked to Medicare as well as to Medicaid, with Emory University researchers using the latter linked data to evaluate the impact of the Breast and Cervical Cancer Prevention and Treatment Act.1
A new project, “Using Cancer Registry Data and Other Sources to Track Measures of Care in Georgia,” sponsored and funded by the Association of Schools of Public Health (ASPH), CDC, NCI, and the Georgia Cancer Coalition, has just begun linking several sources of state data that researchers will eventually use to evaluate quality of care for patients with breast or colorectal cancer (ASPH, 2009). However, the more immediate goal of this project is to show the feasibility of bilaterally linking multiple data registries, including public registries such as GCCR, SEER, and Centers for Medicare & Medicaid Services (CMS) data on both patients and physicians, and private registries such as those of insurance companies,
and hospital discharge records. The next planned step in this project is the creation of a prototype “Consolidated Georgia Cancer Data Resource” that will link these bilaterally linked data sets with the GCCR at the hub (see Figure 3-1).
Eventually, researchers hope to expand the project to include biomarker data from state and SEER biorepositories and patient-reported outcomes, such as quality-of-life assessments, satisfaction with care, and burden of symptoms. All the datasets linked in this project will be stripped of their patient identification features, so as to preserve patient privacy, and will be subject to rigorous quality checks.
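The de-identification step described above is often implemented by dropping direct identifiers and substituting a keyed pseudonym, so the same patient can still be followed across datasets without exposing identity. The sketch below is a generic illustration, not the Georgia project's actual method; the field names, the HMAC construction, and the arrangement for holding the key are all assumptions.

```python
import hashlib
import hmac

# Minimal de-identification sketch: drop direct identifiers, keep a keyed
# hash so the same patient links across datasets without exposing identity.
# Field names and the HMAC approach are illustrative assumptions.
SECRET_KEY = b"held-by-a-trusted-third-party"  # never distributed with the data
DIRECT_IDENTIFIERS = {"name", "ssn", "address"}

def deidentify(record):
    # Stable pseudonym: keyed hash of an identifier, unreadable without the key
    token = hmac.new(SECRET_KEY, record["ssn"].encode(), hashlib.sha256).hexdigest()
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    clean["link_token"] = token
    return clean

rec = {"name": "Jane Doe", "ssn": "123-45-6789", "address": "...",
       "site": "breast", "stage": "IIA"}
print(sorted(deidentify(rec)))  # identifiers removed, link_token added
```

Because the hash is keyed, the pseudonym is stable across datasets that use the same key, which is what makes linkage possible after the identifiers are stripped.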
Dr. Lipscomb pointed out that researchers could potentially use the data collected in this statewide project for CER, postmarketing regulatory studies, research on quality care assessment, and other studies needed for an effective learning healthcare system. The researchers of this project also hope to eventually demonstrate reduced time lags between receipt of care and data reporting, analysis, and feedback. Two potential vehicles are available for promoting this development: the CoC’s RQRS (since 25 of the 70 beta test sites are in Georgia), and the Georgia Quality Information Exchange, an electronic network that is being established by the Georgia Cancer Coalition. The coalition’s President and Chief Executive Officer (CEO) William Todd noted that the coalition had commissioned the IOM to develop a strategy for measuring progress in cancer control that spans the continuum of cancer care from prevention, early detection, and screening, to diagnosis, staging, treatment, and palliation. Focusing mainly on breast, colorectal, lung, and prostate cancer, the coalition used the 52 measures recommended in the resulting IOM report (IOM, 2005) to plan the development of a statewide, evidence-based cancer quality measurement program (Georgia Cancer Quality Information Exchange), with the aim of improving outcomes, patient-centered care, and adherence to standards, Todd said.
The Georgia Cancer Quality Information Exchange’s initial focus is on using the benchmarks and goals the IOM recommended for its 52 metrics as the foundation for aggregating near-real-time clinical data from all of the state’s CoC-accredited cancer care facilities and linked physician practices, along with public health data from the Georgia Comprehensive Cancer Registry and other sources (Georgia Cancer Coalition, 2009). Providers use computerized tools created by the exchange, such as its “dashboard,” to enter their patient data and see their performance relative to that of their peers in the state. Public reporting of such metrics will occur only at an aggregated level. Researchers, patients, survivors, employers, payers, federal and state agencies, and public health personnel can also use the exchange’s dashboard to assess cancer care trends. Use of the dashboard can reveal weak areas that need improvement, inform ongoing cancer control planning, stimulate process improvements at participating institutions, increase adherence to the most current practice standards, and make patient-centered care and outcomes better and more geographically consistent (Georgia Cancer Coalition, 2009).
The Georgia Cancer Quality Information Exchange, which intends to have the information technology infrastructure to accept data from all providers regardless of their level of automation or technology platform, has engaged six cancer centers around the state as demonstration partners. These pilot projects revealed that many of the centers had not previously measured some of the IOM metrics because they had assumed they would perform well, when in fact the dashboard reports showed they performed below average on such metrics as the timeliness of biopsy after an abnormal mammogram and the adequacy of cancer pain management. The dashboard reports led these cancer centers to alter their operations, which improved these metrics in subsequent dashboard reports, Todd reported. “Physicians thought they were managing pain well, but they really never recorded it. Now they are and there already is a big improvement,” Todd said.
The exchange currently is setting up the statewide infrastructure to provide EHR interfaces and do more rapid reporting. As Todd noted, however, many cancer centers do not have EHRs. For those that do, screening reminders and alerts provided by exchange tools have led to measurable improvements in cancer care. Participation in the exchange has also boosted patient participation in clinical trials threefold, Todd pointed out. He noted that traditional cancer registry data are fairly useless to clinicians, but when such information is “married to some of the clinical information that is captured in this [exchange] system, it becomes useful in daily care.”
The collection of patient-reported outcomes also will expand the usefulness of cancer registries, several speakers noted. Dr. Clancy suggested that more effort be made to gather patient-reported outcomes, not just during treatments but in between or after treatments. Such outcomes should include objective information, such as whether the doctor explained the medical care adequately or how long patients had to wait for treatment. AHRQ is currently working with NCI to develop cancer patient surveys, which should be available in 2011. Todd stressed the need to get patient-reported outcomes as close to diagnosis and treatment as possible, rather
than six months to a year later. “By weaving in the patient-reported outcomes into the movement to get patient information quicker, you could be more effective overall,” he said.
Computer technology will provide the platform for an RLHS. In his presentation, Dr. Chalapathy Neti from IBM provided some perspectives on how information technology can aid physicians in practice. He noted that human cognitive capacity is limited to considering roughly five different facts simultaneously when making clinical decisions. Yet the current explosion in diagnostic information made possible by advances in genetics and imaging provides about 20 times that number of facts, all of which have to be considered when making clinical decisions, and the future portends an exponential increase in data. This information explosion leads to cognitive overload that risks reduced quality of care (see Figure 3-2).
Computerized systems can provide the means to manage complexity. “One of the key things information technology [IT] can do is to take this complexity that is at the point of care, and truly simplify this so that it is manageable with respect to the cognitive capacity of the care provider,” said Dr. Neti. Dr. William Stead, the chief information officer (CIO) of Vanderbilt University Medical Center, added that more can be gained by combining the human’s superior ability to identify patterns with the computer’s ability to work out various aspects of a problem and attend to such details as sending reminders to practitioners about various steps in patient care. Dr. Kenneth Buetow, leader of NCI’s caBIG®, noted that computer grid systems enhance the capability of practitioners, but do not replace them, just like “night vision goggles do not actually see for people, they just make it so that you can see things that are present because they are displayed in a way that makes them clearer—they present the type of information that is necessary.”
Integral to an RLHS are both large- and small-scale computer systems and computer models. Computer grids are networks of computers that are dispersed geographically and work together to carry out various computing tasks involving large amounts of data and complex analyses. Both data storage and analysis are apportioned among the networked computers to accomplish these large, complex tasks. Computer grids enable multiple users to access large amounts of data and conduct their own analyses in real time, because they generally have what is called an open services-oriented
architecture, which allows applications, running as services, to be accessed in a distributed computing environment, whether between multiple systems or across the Internet. This allows different services and diverse applications to be run by local users on open, publicly available platforms with open standards employed by central hosting services. Open platforms are software systems that allow for massive data sharing. Sometimes called cloud computing, such systems represent a new consumption and delivery model for IT services based on the Internet, in which common applications can be accessed from a Web browser.
Such systems are usually “federated,” or virtual, database systems, which provide an alternative to the daunting task of merging several disparate databases. A federated database system is a type of meta-database management system that transparently integrates multiple autonomous database systems into a single federated database. The constituent databases are interconnected via a computer network or grid. “We have to come to the realization that the ‘Holy Grail’ of a centralized data warehouse is not going to happen,” said Dr. Neti. He said that we have to think about federated data structures, meaning that the centralized link will at best be an index to where the data reside (metadata), and about federated analytics, meaning that the data do not go to the place where the analytics sit; rather, the analytics go to the place where the data sit. An advantage of a federated architecture is that it preserves local control over data generated at a particular institution, which is critical for addressing the patient privacy requirements dictated by the Health Insurance Portability and Accountability Act (HIPAA) and state laws, noted Dr. Buetow, who was instrumental in creating the cancer Biomedical Informatics Grid (caBIG®) for the NCI Center for Bioinformatics and Information Technology. There are also advantages to having data reside with the people who have generated, analyzed, and aggregated them and thus have a fuller understanding of the data, he added.
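Dr. Neti's federated-analytics point, that the analytics travel to where the data sit, can be sketched in miniature: a hub holds only metadata about where data reside, ships an aggregate query to each node, and receives counts rather than patient-level records. Everything below (node names, fields) is an invented illustration, not any actual grid's API.

```python
# Toy federated-analytics sketch: the hub keeps only an index (metadata),
# each institution runs the analytic locally and returns only an aggregate,
# so patient-level data never leave local control (cf. the HIPAA concerns above).
local_nodes = {
    "hospital_a": [{"site": "breast", "stage": "II"}, {"site": "lung", "stage": "III"}],
    "hospital_b": [{"site": "breast", "stage": "I"}],
}
hub_index = {"breast": ["hospital_a", "hospital_b"], "lung": ["hospital_a"]}  # metadata only

def count_cases(records, site):
    """The analytic shipped to each node; returns an aggregate, not records."""
    return sum(1 for r in records if r["site"] == site)

def federated_count(site):
    nodes = hub_index.get(site, [])  # the hub resolves where the data reside
    return sum(count_cases(local_nodes[n], site) for n in nodes)

print(federated_count("breast"))  # aggregates from both nodes: 2
```

Only `count_cases` crosses the institutional boundary; the record lists stay where they were generated, which is the property Dr. Buetow identifies as critical for privacy and local control.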
A computer grid requires a high degree of interoperability among all its users, since it must be assumed that there will be proprietary and legacy IT systems at the point of care. This in turn requires a great deal of harmonization and standardization of how data are reported, represented, and integrated in the system, Dr. Neti pointed out. This necessitates the use of open platforms, open standards, and data curating to clean up and standardize data that are “dirty” from both a machine and a human perspective. There also have to be metadata management, identity management, and security to address patient privacy issues; a master patient index that
combines different data on the same patient from different repositories; and data oversight and stewardship.
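A master patient index of the kind just described can be pictured, in miniature, as a mapping from each repository's local identifier to a single global patient identifier, under which records are then combined. The sketch below is a toy illustration; all identifiers and field names are invented.

```python
# Toy master patient index (MPI): map (repository, local_id) pairs to one
# global patient identifier, then combine records across repositories.
# All identifiers here are invented for illustration.
mpi = {
    ("registry", "R-001"): "P1",
    ("claims",   "C-938"): "P1",   # same patient, different local IDs
    ("claims",   "C-412"): "P2",
}

def combine(records):
    """Group records from different repositories under the global patient ID."""
    by_patient = {}
    for repo, local_id, payload in records:
        gid = mpi[(repo, local_id)]
        by_patient.setdefault(gid, {}).update(payload)
    return by_patient

records = [
    ("registry", "R-001", {"stage": "IIB"}),
    ("claims",   "C-938", {"chemo_claim": True}),
    ("claims",   "C-412", {"chemo_claim": False}),
]
merged = combine(records)
print(merged["P1"])  # registry stage and claims data for the same patient
```

In practice, building and maintaining the MPI itself (deciding that "R-001" and "C-938" are the same person) is the hard, actively managed part that Dr. Buetow's remarks below emphasize.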
As Dr. Buetow stressed, “Interoperability doesn’t just happen—these things don’t just self-assemble,” and connecting across complex heterogeneous domains requires active management, especially since, in biomedicine, terms take on different meanings in different contexts. “Just having folks adopt IT systems does not necessarily mean that the information comes together as if by magic”; rather, this depends on a well-thought-out infrastructure, user involvement, and extensive oversight and management. “All these information sources can interact with each other because we pay time up front worrying about how information is represented and how different sources of information can be cross-connected with each other,” he said.
Dr. Buetow pointed out that users should be involved in the creation of a computer grid from the start and throughout the process of tool development, serving as advisers, developers, adopters, and disseminators to ensure functionality. Such participation is worth the investment, he added, since it accrues dividends after development by educating the community and driving more rapid adoption of the grid; community involvement is critically important.
It is also important that the architecture of a computer grid be flexible enough to accommodate changes in biomedicine so that “data can be aggregated and interpreted correctly now and reinterpreted correctly as knowledge changes,” said Dr. Stead. He suggested separating the data from the system such that the raw signal data are recorded and then later tagged with the current interpretation. In that way, “as our knowledge changes, we can rerun our interpretations of it and re-annotate it as we move forward into the future,” Dr. Stead said. Dr. Neti agreed, saying it is important to build infrastructures that allow for augmentation.
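The pattern Dr. Stead describes, raw signal data kept immutable and interpretation stored separately with a knowledge-version tag, can be sketched as follows. The assay name, thresholds, and guideline labels are hypothetical illustrations, not actual clinical rules.

```python
# Sketch of the "separate data from interpretation" pattern: immutable raw
# values, with annotations derived from them and tagged with a knowledge
# version, so re-annotation is just a rerun. Schema and rules are illustrative.
raw = [{"obs_id": 1, "assay": "marker_ratio", "value": 2.1}]  # never rewritten

def annotate(observations, interpret, version):
    """Re-derive annotations from raw data under the current knowledge."""
    return [{"obs_id": o["obs_id"], "version": version, "label": interpret(o)}
            for o in observations]

# Hypothetical earlier guideline: positive if ratio > 2.2
v1 = annotate(raw, lambda o: "positive" if o["value"] > 2.2 else "equivocal",
              "guideline-v1")
# Knowledge changes: a hypothetical later guideline lowers the threshold to 2.0
v2 = annotate(raw, lambda o: "positive" if o["value"] > 2.0 else "equivocal",
              "guideline-v2")

print(v1[0]["label"], v2[0]["label"])  # same raw value, re-annotated
```

The raw observation is never modified; only the derived annotation changes between versions, which is exactly what makes the rerun-and-re-annotate step Dr. Stead describes cheap.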
Recognizing the need to share and conduct large-scale analyses of the abundant data generated in the studies it supports, and to connect and support the cancer community at large, NCI decided in 2003 to create caBIG, a shareable, interoperable information infrastructure that connects cancer researchers and practitioners (NCI, 2010a). To create caBIG, researchers developed standard rules, a unified architecture, and a common language to more easily share information. The caBIG is an open-access, open-development, and open-source federated network. In other words, caBIG is open to all; the planning, testing, validation, and deployment of caBIG tools and infrastructure are open to the entire cancer community. In addition, the underlying software code for the caBIG infrastructure is
available for use and modification, and resources can be controlled locally or integrated across multiple sites. “Our role was to create the cancer knowledge cloud—a cancer information resource that leverages the power of the Internet to bring together all the various sources of information and make it accessible to consumers, practice settings, community hospitals, research hospitals, research institutions, and industry,” Dr. Buetow said. The caBIG was built with the awareness that this rich source of information would enable data aggregations, analyses, decision support, and so forth, “with the true goal of then being able to support the entire life cycle of the learning health system and convert biomedicine into part of the knowledge economy. We wanted to interconnect all the different flavors of data—whether … clinical data, genetic data, or imaging data, and have the rich analytic tools that users could use in their laboratory, their organization, or their institution,” Dr. Buetow noted.
An advantage that caBIG and similar computer grids offer, Dr. Stead pointed out later during the panel discussion, is that they save institutions the costs of developing de novo the technology needed to do complex data analyses. “One of the things that drove the creation of caBIG originally was that all the NCI-designated cancer centers were in the process of collecting molecular data in the form of microarrays. They were all creating new microarray repositories and all the data standards needed with substantial staff and IT investment. Each of them was estimating that it was going to cost somewhere between 1 [million] and 5 million dollars to create each individual system.” Instead, caBIG created the standard framework in which this information could be collected, stored, and analyzed at a cost of only 2 million to 3 million dollars, he said. “There was at least a five- to tenfold savings by having a common framework by which people could collect the information as opposed to regenerating it de novo each time,” Dr. Stead said.
Participants in caBIG currently include 56 NCI-designated cancer centers, 16 community cancer centers, and several cooperative groups. The caBIG is now in what Dr. Buetow calls its “enterprise phase,” which will involve more widespread deployment and adoption as well as international collaborations, with an emphasis on making the grid useful beyond the research setting by bringing in data from community settings. Along with the American Society of Clinical Oncology (ASCO), caBIG is working to develop the standards-based infrastructure for an oncology-specific EHR to enable the collection of patient data from community healthcare settings, such as physician practices and community hospitals. The caBIG has also
joined forces with academia, industry, foundations, insurers, and consumers to form the BIG Health Consortium, whose mission is to demonstrate the feasibility and benefits of personalized medicine. Through a series of projects, with an expanding number of collaborators, BIG Health will bootstrap a new approach in which clinical care, clinical research, and scientific discovery are linked.
One of caBIG’s first enterprise projects, the Athena Breast Cancer Network, aims to integrate diverse breast cancer data, including clinical, genomic, and molecular data, collected from 13 different sites encompassing more than 400,000 women within the University of California system, and to make them accessible to end users. Grid computing will be used to standardize the collection of structured data, integrating clinical and research processes, including molecular profiling, starting at the point of care. Researchers will then use these data to build better models to predict risk and outcomes for low- and high-risk breast cancer patients, models that can be used for tailored screening and prevention strategies. In another project, caBIG is partnering with the Love/Avon Army of Women to build the first online cohort of 1 million women willing to participate in clinical trials. Leveraging Web 2.0 technology, caBIG tools and infrastructure will facilitate the creation of this online breast cancer cohort, which will match clinical researchers with individuals wanting to participate in clinical trials. In addition, Web applications enable the women to access and simultaneously review and edit their personal information. “We’re hoping it will show that you can actually have a consumer-centric, patient-involved model for conducting this next-generation type of research,” said Dr. Buetow. Such capacity for rapid learning is completely feasible and could galvanize the national community.
The CDC is also actively working on standardized electronic data exchange. Ms. Sandy Thames, a public health adviser with the agency, reported that CDC is working on a model electronic reporting project for cancer surveillance, linked to EHRs and harmonized with national health IT efforts at standardization and interoperability. She said CDC is also working on building a concept for how federal agencies, public health systems, providers, and consumers could be connected in a shared environment with a national public computer grid.
COMPARATIVE EFFECTIVENESS RESEARCH
Comparative effectiveness research is an essential element of a RLHS because it provides the evidence for best practices, thereby improving the
quality and consistency of care. As the IOM defines it (IOM, 2009a), and Dr. Harold Sox reported, CER is the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and the population levels. CER involves direct, head-to-head comparisons and has a broad range of stakeholders and beneficiaries, including health services researchers, patients, clinicians, purchasers, and policy makers. In addition, unlike much clinical trials research, CER studies populations representative of clinical practice.
CER tends to focus on patient-centered decision making so that tests or treatments are tailored to the specific characteristics of the patient, Dr. Sox said. As he pointed out, if a randomized controlled clinical trial shows that treatment A has a higher response rate than treatment B, 60 percent versus 50 percent, and if nothing more were known about a patient than that he or she had the condition in question, then treatment A should be preferred, even though many patients still improved on B. Yet some patients might actually have done better on treatment B than on A, and those patients could potentially be identified in advance from both demographic and clinical predictors. This sentiment was echoed by Dr. Clancy, who explained that conducting CER requires investigating not only which type of treatment is the most effective, but which type of treatment is the most effective for specific patients. “The promise of CER is that it will provide information to help doctors and patients make better decisions,” Dr. Sox said.
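Dr. Sox's argument can be made concrete with a small numerical sketch. The 60 percent versus 50 percent overall rates below match his example; the biomarker subgroups and all patient counts are hypothetical, invented purely to show how an overall winner can mask a subgroup that fares better on the other treatment:

```python
# Hypothetical trial data illustrating Dr. Sox's point: treatment A wins
# overall (60% vs. 50% response), yet a predictable subgroup does better on B.
# All subgroup figures are invented for illustration.

# (responders, patients) for each treatment arm, split by a hypothetical biomarker
trial = {
    "biomarker_negative": {"A": (55, 80), "B": (10, 20)},
    "biomarker_positive": {"A": (5, 20),  "B": (40, 80)},
}

def response_rate(responders, patients):
    return responders / patients

# Overall rates pool the subgroups: A = 60/100 = 60%, B = 50/100 = 50%
for rx in ("A", "B"):
    responders = sum(trial[group][rx][0] for group in trial)
    patients = sum(trial[group][rx][1] for group in trial)
    print(f"Treatment {rx} overall: {response_rate(responders, patients):.0%}")

# Within subgroups, the preference reverses for biomarker-positive patients
for group, arms in trial.items():
    a = response_rate(*arms["A"])
    b = response_rate(*arms["B"])
    print(f"{group}: A={a:.0%}, B={b:.0%}, prefer {'A' if a > b else 'B'}")
```

Run as written, the sketch reports treatment A ahead overall (60% vs. 50%) while the biomarker-positive subgroup responds better on B (50% vs. 25%), which is exactly the situation in which a clinical predictor should override the population-level comparison.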
As Dr. Clancy reported, the AHRQ plays an instrumental role in supporting CER. In 2003, the Medicare Prescription Drug, Improvement, and Modernization Act authorized AHRQ to conduct and synthesize research on the effectiveness and comparative effectiveness of healthcare services, broadly defined as relevant to Medicare, Medicaid, and State Children’s Health Insurance Program (SCHIP) beneficiaries. The agency was also charged with disseminating the information it acquires from this research to multiple stakeholders in an understandable form. It is meeting this congressional request with its Effective Health Care (EHC) Program.
After consulting with stakeholders, AHRQ has prioritized its CER research agenda, and cancer is one of its priority conditions. AHRQ has
already completed a number of cancer-related CER studies, including the comparative effectiveness of particle-beam radiation therapies and of stereotactic radiosurgery for extracranial solid tumors. “Sometimes these are systematic reviews, and sometimes we are relying on a network of research contractors that have access to very large datasets with some clinical electronic data,” Dr. Clancy said. She added, “When the facts stop, these reports stop. These are not guidelines per se, they are strictly the evidence,” and it is up to practitioners to decide how best to apply that evidence to their patients.
The AHRQ CER efforts have been expanded as the American Recovery and Reinvestment Act (ARRA) of 2009 included $1.1 billion for CER, of which AHRQ was awarded $300 million, the National Institutes of Health (NIH) $400 million, and the Office of the Secretary of Health and Human Services (HHS) $400 million. A Federal Coordinating Council was also appointed to coordinate CER across the federal government. With this additional funding, AHRQ plans to continue to do its evidence synthesis reviews of current research, but also to do evidence generation—new research with a focus on underrepresented populations. For this research, the agency plans to expand distributed data network models and national patient registries. Part of the allocated funds AHRQ receives will also be used to support training, research, and careers related to CER.
The American Recovery and Reinvestment Act also provided funding for an IOM study to determine the high-priority healthcare conditions and interventions to guide the spending of the portion of the ARRA funds allocated to CER (IOM, 2009a). After gathering stakeholder input, the IOM Committee on CER, which Dr. Sox co-chaired, developed a list of 100 priority topics on which CER should be conducted, which included comparing the effectiveness of dissemination and translation techniques to facilitate the use of CER by patients, clinicians, payers, and others. The IOM top 100 priorities included several high-priority, cancer-related topics, such as comparing the effectiveness of
- Genetic and biomarker testing and usual care in preventing and treating breast, colorectal, prostate, lung, and ovarian cancer and other conditions;
- PET (positron emission tomography), MRI (magnetic resonance imaging), CT (computed tomography), and other imaging technologies in diagnosing, staging, and monitoring patients with cancer;
- Management strategies for localized prostate cancer on survival, recurrence, side effects, quality of life, and costs; and
- Management strategies for ductal carcinoma in situ (DCIS).
In addition, the IOM committee recommended building robust data and information systems to support CER, including clinical and administrative data networks to facilitate better use of data and more efficient ways to collect new data to inform CER. The purposes of CER and the infrastructure needed for a robust CER enterprise are actually a tight fit with the purposes and features of a RLHS. CER results will provide evidence for rapid learning, and a RLHS will provide the pathway for implementing CER. The two are synergistic.
GUIDELINES AND STANDARDS FOR CARE
In an ideal RLHS, evidence collected from point of care, clinical trials, CER, and other studies would be synthesized to create cancer care guidelines specific for various cancers with finer-grained standards specific to patient subtypes. These guidelines and standards of care would be updated continuously and widely distributed to practicing oncologists, who would be monitored and informed of their adherence to the guidelines.
Currently, the most widely recognized standards for cancer care in the United States are the guidelines developed by the National Comprehensive Cancer Network (NCCN). Established in 1995, NCCN is a network of 21 of the nation’s leading cancer centers whose mission is to “positively influence and improve decisions and policies that impact the access to, availability of, and delivery of appropriate and effective cancer care,” said Dr. William McGivney, who is the CEO of NCCN.
NCCN guidelines are developed by 44 multidisciplinary panels, with 20 to 30 disease-specific experts on each panel. “Our guideline panels, reviewers, and our discussions involve probably 1,500 to 1,800 oncologists,” Dr. McGivney said. He added that these free guidelines are widely distributed online and via a variety of media outlets and seminars, and are updated continually, with some updates given within 48 hours of new information surfacing in medical journals or other outlets. Some guidelines are updated as frequently as four to six times a year. NCCN guidelines are used increasingly as a basis for coverage policy. NCCN also produces its own drugs and biologics compendium, which is designed to support decision making regarding the appropriate use of drugs and biologics in patients with cancer.
In addition, NCCN is currently developing chemotherapy templates to
improve the safety and effectiveness of the administration of chemotherapy and biologics in both cancer centers and community settings. These templates are especially designed to aid “community doctors who do not have the time to keep up with the numerous changes in dosing in these regimens, and the addition of new chemotherapeutic regimens, drugs, or biologics,” Dr. McGivney said.
However, as Dr. McGivney pointed out, “It is just not enough to write guidelines. It is important to actually measure whether you follow those guidelines.” The NCCN has been developing several outcomes databases to monitor and benchmark concordance with its guidelines in member institutions. “These databases describe practice patterns and outcomes of care and we feed that information back to our clinicians, our institutions, and our guideline panels,” he said. NCCN databases have been established for breast cancer, non-Hodgkin’s lymphoma, and colorectal, lung, and ovarian cancer. The most developed database in that regard is the breast cancer database, encompassing 52,000 patients from 17 NCCN institutions and 15 community cancer centers.
Each year, NCCN conducts a major analysis of the data it collects and provides participating institutions and physicians with patient-level feedback regarding concordance with the management stipulated by NCCN guidelines. “About half of participating institutions look at every patient that is not concordant,” Dr. McGivney said. The NCCN analysis evaluates concordance and the reasons given for lack of concordance, identifying issues that need to be evaluated, such as variation of care across institutions. As Dr. Paul Wallace of Kaiser Permanente pointed out during a later discussion, it is important to evaluate not only concordance, but also reasons for a lack of concordance. As he noted, physicians must continually assess whether the NCCN guidelines are applicable to their specific patients. If they decide the guidelines do not “fit” their patients, they must be accountable for those decisions. “There’s nothing wrong with being innovative,” Dr. Wallace said, “as long as you are accountable for it.”
NCCN has also developed analysis tools that payers can use to evaluate the quality of care of their patients based on NCCN guidelines. NCCN is also currently working with informatics firms, to develop tools that will enable the integration of NCCN guideline recommendations into EHR systems to facilitate support for physician decision making and more rapid distribution of information to clinicians. This rapid support and feedback are critical, Dr. Edge stressed later during discussion; he said that “we need to change our systems to help doctors, rather than blaming doctors for
doing something that they did five years ago. We need to reengineer the system so that it helps providers and patients.” He suggested that in addition to providing physicians with online access to NCCN guidelines, there be a way for doctors to input their relevant patient data while accessing those guidelines, to immediately assess whether they are following the guidelines properly and to provide point-of-care data that can be used to continually determine the validity of those guidelines. Dr. Edge was critical of systems that do not provide immediate feedback to physicians. Dr. McGivney concluded by noting, “We have a long way to go, but the acceptance of these guidelines by clinicians, patients, and payers has important implications for improving the healthcare system.”
In addition to the more than 100 guidelines put out by NCCN, ASCO’s clinical practice guideline group has developed and distributed a less extensive set of about 20 clinical practice guidelines. ASCO has also begun rapid distribution of provisional clinical opinions to inform oncologists of new developments that affect practice (e.g., the importance of testing metastatic colorectal cancer patients for KRAS gene mutations to predict response to anti-epidermal growth factor receptor antibody therapy).
Recently, ASCO developed its Quality Oncology Practice Initiative (QOPI), an oncologist-led, practice-based voluntary quality improvement initiative. The goal of QOPI is to promote excellence in cancer care by helping oncology practices create a culture of self-examination and improvement, said Dr. Joe Jacobson, an oncologist in practice at North Shore Medical Center and chair of the QOPI Steering Committee. QOPI shows how physicians’ processes of care (but not outcomes) measure up to standard practices stipulated by guidelines, published studies, and expert consensus. Adherence to these standard quality measures and processes, including documentation of care, chemotherapy planning and administration, pain assessment and control, end-of-life care, and symptom and toxicity management, is assessed every six months, allowing progress to be measured. Oncology practices choosing to participate are required to enter a limited number of patient datasets via a secure Web-based application. These data are collected by practice staff two times a year via retrospective chart review and data abstraction. At the close of data collection, practice reports are generated that compare practice-specific results to aggregate data.
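The reporting step Dr. Jacobson describes, comparing practice-specific results to aggregate data, amounts to a straightforward concordance calculation. The sketch below illustrates one plausible form of such a report; the practice names, measure names, and chart counts are invented for illustration and are not actual QOPI data:

```python
# Minimal sketch of a QOPI-style concordance report: for each quality measure,
# compare each practice's concordance rate with the aggregate across practices.
# Practice names, measures, and chart counts are hypothetical.

# {practice: {measure: (concordant_charts, eligible_charts)}}
abstracted = {
    "Practice A": {"pain_assessed": (45, 50), "chemo_plan_documented": (38, 50)},
    "Practice B": {"pain_assessed": (30, 40), "chemo_plan_documented": (39, 40)},
}

def rate(concordant, eligible):
    """Concordance rate for one measure; 0.0 when no eligible charts."""
    return concordant / eligible if eligible else 0.0

measures = {m for charts in abstracted.values() for m in charts}
for measure in sorted(measures):
    agg_num = sum(abstracted[p][measure][0] for p in abstracted)
    agg_den = sum(abstracted[p][measure][1] for p in abstracted)
    aggregate = rate(agg_num, agg_den)
    print(f"{measure}: aggregate {aggregate:.0%}")
    for practice, charts in abstracted.items():
        r = rate(*charts[measure])
        flag = "below aggregate" if r < aggregate else "at/above aggregate"
        print(f"  {practice}: {r:.0%} ({flag})")
```

A real report would cover far more measures (81 in the spring 2009 data collection) and flag specific non-concordant charts for review, but the core comparison of practice-specific rates against the pooled aggregate is the same.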
QOPI began as a pilot program in 2002 involving 23 practices and then was opened to ASCO membership in January 2006. By the spring of 2009, 247 practices throughout the United States were actively participating in QOPI, with more than 18,000 patient charts abstracted for 81 measures of care processes. QOPI has revealed that concordance has been greatest for the treatment of cancers, such as breast and colorectal cancer, for which the evidence base for care is strongest, Dr. Jacobson pointed out.
Ninety-five percent of those physicians participating in QOPI report they do so because they want to know what sort of care they are providing and ways in which they can improve their care. Participation in QOPI also provides physicians with credits toward maintenance of board certification for the practice improvement module or continuing medical education (CME) credits. Some insurers have also promoted QOPI participation by reimbursing oncologists for their costs in participating. Notably, such reimbursements by Blue Cross/Blue Shield of Michigan were linked to a fourfold increase in provider participation in the program.
In the near future, QOPI plans to create registries that collect electronically transmitted patient data prospectively and in real time. An example of such a registry is the prospective breast cancer treatment registry that ASCO is currently creating with support from the Susan G. Komen for the Cure Foundation. This registry uses a Web-enabled application, based on the ASCO Breast Cancer Treatment Plan and Summary template that is provided to patients and other caregivers. De-identified data are entered into a registry in real time, and as the registry evolves, it may enable direct data transfer from EHRs.
QOPI data are also being used for quality improvement in collaborative networks, such as the NCI Community Cancer Centers Program and the Michigan Oncology Quality Consortium, which was created by Michigan Blue Cross/Blue Shield. These networks are using their QOPI data to define their best practices, which are then applied to all participating sites in their network. “The first sea change in oncology is getting oncologists to measure what they do. The next one, which is perhaps more challenging, is to get them to believe that they can improve the care they provide,” Dr. Jacobson said. He added that voluntary QOPI certification is expected to be available in 2010. Such certification will be provided to QOPI participants that achieve a minimum specified score on 28 performance measures and will also include practice site assessments for 35 chemotherapy safety standards established by ASCO in collaboration with the Oncology Nursing Society and other stakeholders.
Dr. Jacobson ended his presentation by noting that as physicians, “all of us have two jobs in life. The first is to provide care, and the second is to
improve care. We should always be thinking about how what we do could be done better.”
DECISION TOOLS AND MODELS
In addition to practice guidelines, oncologists are increasingly relying on computerized decision support tools, models, and tumor-based prognostic assessments to improve their care of cancer patients. Based on cancer registry data, clinical trials, observational studies, and genetic testing, these oncology decision-making aids are well developed for some of the more common cancers such as breast cancer, as Dr. Patricia Ganz of the Jonsson Comprehensive Cancer Center noted, and can play an important role in a RLHS. For example, the Web-based tool Adjuvant! Online© provides estimates of the net benefit to be expected from systemic adjuvant treatment for individual breast cancer patients according to patient-specific characteristics, such as tumor size, grade, number of involved nodes, and hormone receptor and Her-2-neu status (Adjuvant! Inc., 2010; Ravdin et al., 2001).
Using this information, which practitioners input directly online into the computer model, Adjuvant! Online predicts how various adjuvant treatments are likely to affect the risk of relapse and mortality, enabling oncologists and their breast cancer patients to personalize their decision making on whether to pursue adjuvant therapies. The model was developed by actuarial analysis of the San Antonio breast cancer database and SEER data, as well as on estimates of the proportional risk reduction observed in individual randomized breast cancer clinical trials and systematic overviews of randomized adjuvant trials. Dr. Ganz pointed out that “having these kinds of tools to translate these complex scientific discoveries so they can be a part of the patient conversation is absolutely essential.” They also aid physicians, she added, who cannot easily assemble and integrate all the abundant information needed to make treatment decisions. “Most of us do not even know what the background survival rate is for a 70-year-old woman,” she said, let alone the survival statistics of a 70-year-old breast cancer patient with a number of different prognostic variables.
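The core arithmetic behind such a tool can be sketched as applying trial-derived proportional risk reductions to a registry-derived baseline risk. The sketch below is an illustrative simplification, not Adjuvant! Online's actual model, and the baseline risk and reduction figures are invented:

```python
# Illustrative sketch of an Adjuvant!-style calculation: combine a baseline
# relapse risk (as might be estimated from registry data for a patient's
# prognostic profile) with proportional risk reductions from randomized
# adjuvant trials. All numbers are hypothetical.

def risk_after_treatment(baseline_risk, proportional_reductions):
    """Apply each therapy's proportional risk reduction multiplicatively,
    a common simplifying assumption when combining independent therapies."""
    risk = baseline_risk
    for reduction in proportional_reductions:
        risk *= (1.0 - reduction)
    return risk

# Hypothetical patient: 40% baseline 10-year relapse risk without adjuvant
# therapy, considering hormonal therapy (assumed ~30% proportional reduction)
# plus chemotherapy (assumed ~25% proportional reduction).
baseline = 0.40
treated = risk_after_treatment(baseline, [0.30, 0.25])
absolute_benefit = baseline - treated

print(f"Risk without adjuvant therapy: {baseline:.0%}")
print(f"Risk with combined therapy:    {treated:.0%}")
print(f"Absolute benefit:              {absolute_benefit:.1%}")
```

For this invented patient, the projected relapse risk falls from 40 percent to 21 percent, an absolute benefit of 19 percentage points. Presenting the absolute rather than the relative benefit is what makes such output usable in the patient conversation Dr. Ganz describes, since the same proportional reduction translates into a much smaller absolute benefit for a low-risk patient.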
Dr. Abernethy stressed the importance of linking decision support tools and models directly into the information technology system. “What good is a model that we have to type [patients’ data] into—that barrier in itself is going to inhibit use. We need to start building our models into our IT systems because that system already knows that the patient is 37, so why should I have to type it in?” she said. She added that there should be a
process for vetting models, as well as algorithms that enable IT systems to match the most appropriate model to the cancer patient in question.
Dr. Edge pointed out that Adjuvant! Online is just one of several oncology decision support tools, most of which are not widely known, and he and Dr. Ganz agreed that it would be helpful if these tools and models were made available within a specific publicized clearinghouse and their comparative effectiveness was assessed. Dr. Neti raised the question of whether the Food and Drug Administration (FDA) approves decision support tools and ensures that these models are accurate and can be used effectively and appropriately in a clinical setting. Dr. Abernethy responded that the FDA is currently assessing how to evaluate these tools and how to develop an approval process for them. “I should hope that these models, in a rapid-learning world, would be dynamic and that they would grow and learn as the datasets are improving, and that we actually build into those models the iteratively updated process,” she said. “Pandora© learns what kind of music I want, and yet my decision support models cannot learn from my patient populations. I do not know that we have gotten sophisticated enough in our regulatory process yet to understand how we are going to deal with that. That is going to be an important piece to build into the rapid learning system,” she added.
Discussant Dr. Mia Levy also expressed concern that decision support tools may not be updated rapidly enough to enable them to be part of a RLHS. “I use these tools all the time in my practice, but some have been delayed in bringing in Her-2-neu status into the equation, and all these other variables that go into it,” she pointed out. Dr. Ganz agreed and noted that there needs to be a financial investment in making sure these tools and models are updated regularly and used at the bedside. She pointed out that Adjuvant! Online was developed by one researcher, who was not adequately compensated for the time he spent developing it.
Dr. Sharon Murphy of the IOM pointed out that many of the data used to develop decision tools and prognostic assessments are from clinical trials that typically do not enroll elderly patients or those with poor performance status, and thus may not be applicable to all patients. Dr. Ganz pointed out that there is some representation of all patient subsets in the observational data, such as SEER, that are used in the development of decision support tools but added that, ideally, a “rapid learning health system would collect good prospective data at the bedside that would help inform us about these decisions because we have very limited information. The person who makes it to a clinical center to go on a clinical trial is not representative of that universe.”