6 Applying Evidence to Health Care Delivery

Substantial investments have been made in clinical research and development over the last 30 years, resulting in an enormous increase in the medical knowledge base and the availability of many more drugs and devices. Unfortunately, Americans are not reaping the full benefit of these investments. The lag between the discovery of more efficacious forms of treatment and their incorporation into routine patient care is unnecessarily long, in the range of about 15 to 20 years (Balas and Boren, 2000). Even then, adherence of clinical practice to the evidence is highly uneven.

A far more effective infrastructure is needed to apply evidence to health care delivery. Greater emphasis should be placed on systematic approaches to analyzing and synthesizing medical evidence for both clinicians and patients. Many promising private- and public-sector efforts now under way, including the Cochrane Collaboration, the ACP Journal Club, and the Evidence-Based Practice Centers supported by the Agency for Healthcare Research and Quality, represent excellent models and building blocks for a more comprehensive effort.

Yet synthesizing the evidence is only the first step in making knowledge more usable by both clinicians and patients. Many efforts to develop clinical practice guidelines, defined as "systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances," flourished during the 1980s and early 1990s (Institute of Medicine, 1992). Although the translation of evidence into clinical practice guidelines is an important first step, the dissemination of guidelines alone has not been a very effective method of improving clinical practice (Cabana et al., 1999).
146 CROSSING THE QUALITY CHASM

Far more sophisticated clinical decision support systems will be needed to assist clinicians and patients in selecting the best treatment options and delivering safe and effective care. Certain types of clinical decision support applications, most notably preventive service reminder systems and drug dosing systems, have been demonstrated to improve clinical decisions and should be adopted on a widespread basis (Balas et al., 2000; Bates et al., 1999). More complex applications, such as computer-aided diagnosis, are in earlier stages of development (Kassirer, 1994), but the potential for these systems to contribute to evidence-based practice and consumer-oriented care is great.

The spread of the Internet has opened up many new opportunities to make medical evidence more accessible to clinicians and consumers. The efforts of the National Library of Medicine to facilitate access to the medical literature by both consumers and health care professionals and to design Web sites that organize large amounts of information on particular health needs are particularly promising (Lindberg and Humphreys, 1999).

The development of a more effective infrastructure to synthesize and organize evidence around priority conditions and to improve clinician and consumer access to the evidence base through the Internet offers new opportunities to enhance quality measurement and reporting. A stronger and more organized evidence base should facilitate the development of valid and reliable quality measures for priority conditions that can be used for both internal quality improvement and external accountability. Broad-based involvement of private- and public-sector groups and strong leadership from within the medical and other health professions are critical to ensuring the success of this effort.
Recommendation 8: The Secretary of the Department of Health and Human Services should be given the responsibility and necessary resources to establish and maintain a comprehensive program aimed at making scientific evidence more useful and accessible to clinicians and patients. In developing this program, the Secretary should work with federal agencies and in collaboration with professional and health care associations, the academic and research communities, and the National Quality Forum and other organizations involved in quality measurement and accountability.

The infrastructure developed through this public- and private-sector partnership should focus initially on priority conditions (see Chapter 4, Recommendation 5). Its activities should include the following:

• Ongoing analysis and synthesis of the medical evidence
• Delineation of specific practice guidelines
• Enhanced dissemination efforts to communicate evidence and guidelines to the general public and professional communities
• Development of decision support tools to assist clinicians and patients in applying the evidence
• Identification of best practices in the design of care processes
• Development of quality measures for priority conditions

It is critical that leadership from the private sector, both professional and other health care leaders and consumer representatives, be involved in all aspects of this effort to ensure its applicability and acceptability to clinicians and patients.

BACKGROUND

Early definitions of evidence-based medicine or practice emphasized the "conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients" (Sackett et al., 1996). In response to concerns that this definition failed to recognize the importance of other factors in making clinical decisions, more recent definitions explicitly incorporate clinical expertise and patient values into the decision-making process (Lohr et al., 1998). Contemporary definitions also clarify that "evidence" is intended to refer not only to randomized controlled trials, the "gold standard," but also to other types of systematically acquired information.

For purposes of this report, the following definition of evidence-based practice, adapted from Sackett et al. (2000), is used:

Evidence-based practice is the integration of best research evidence with clinical expertise and patient values. Best research evidence refers to clinically relevant research, often from the basic health and medical sciences, but especially from patient-centered clinical research into the accuracy and precision of diagnostic tests (including the clinical examination); the power of prognostic markers; and the efficacy and safety of therapeutic, rehabilitative, and preventive regimens.
Clinical expertise means the ability to use clinical skills and past experience to rapidly identify each patient's unique health state and diagnosis, individual risks and benefits of potential interventions, and personal values and expectations. Patient values refers to the unique preferences, concerns, and expectations that each patient brings to a clinical encounter and that must be integrated into clinical decisions if they are to serve the patient.

Evidence-based practice is not a new concept. One of its earliest proponents was Archie Cochrane, a British epidemiologist who wrote extensively in the 1950s and 1960s about the importance of conducting randomized controlled trials to upgrade the quality of medical evidence (Mechanic, 1998).

Evidence has always contributed to clinical decision making, but the standards for evidence have become more stringent, and the tools for its assembly and analysis have become more powerful and widely available (Davidoff, 1999). Prior to 1950, clinical evidence consisted of case reports, whereas during the latter half of the 20th century, results of about 131,000 randomized controlled
trials of medical interventions were published. Study designs and methods of analysis have also become more sophisticated, and now include decision analysis, systematic review of the literature, meta-analysis, and cost-effectiveness analysis.

Prior to 1990, efforts to incorporate evidence-based decision making into practice encouraged clinicians to follow four steps. According to this approach, when a patient presents a problem for which the decision is not apparent, the clinician should (1) formulate a clear clinical question from that problem, (2) search for the relevant information from the best possible published or unpublished sources, (3) evaluate that evidence for its validity and usefulness, and (4) implement the appropriate findings (Davidoff, 1999).

During the last decade, it has become apparent that this strategy of training and encouraging clinicians to independently find, appraise, and apply the best evidence will not alone lead to major improvements in practice (Guyatt et al., 2000; McColl et al., 1998). The relevant information is widely scattered across the medical literature and of varying quality in terms of methodological rigor (Davidoff, 1999). Advanced study is required to master and apply state-of-the-art approaches to analysis of the literature. The demands and rigors of clinical practice do not allow clinicians the time required to undertake this process on a regular basis. Some have proposed a greater role for specially trained clinical librarians to assist clinicians in framing clinical questions and identifying the relevant literature (Davidoff and Florance, 2000). Many efforts are also under way to make it easier for clinicians and patients to access and interpret the findings of the literature.
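The four-step process described above can be sketched as a toy pipeline. The miniature "literature," the topic-match search, and the keep-only-randomized-trials appraisal rule are all invented placeholders for illustration, not a real retrieval or appraisal system:

```python
# Toy sketch of the four-step evidence-based decision process.
# The mini "literature" and the simple appraisal rule are invented.

LITERATURE = [
    {"topic": "thrombolysis", "design": "RCT", "finding": "reduces mortality"},
    {"topic": "thrombolysis", "design": "case report", "finding": "no clear benefit"},
    {"topic": "aspirin", "design": "RCT", "finding": "reduces recurrence"},
]

def formulate_question(problem):
    # Step 1: turn a presenting problem into a focused, searchable question.
    return problem.lower()

def search_sources(question):
    # Step 2: retrieve potentially relevant reports from available sources.
    return [r for r in LITERATURE if r["topic"] == question]

def appraise(reports):
    # Step 3: keep only methodologically sound evidence (here: randomized trials).
    return [r for r in reports if r["design"] == "RCT"]

def implement(reports):
    # Step 4: apply the appropriate findings to the decision at hand.
    return [r["finding"] for r in reports]

question = formulate_question("Thrombolysis")
print(implement(appraise(search_sources(question))))  # ['reduces mortality']
```

The pipeline makes the text's later point concrete: each step demands time and skill, which is precisely why expecting every clinician to execute it independently does not scale.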
SYNTHESIZING CLINICAL EVIDENCE

The most common approaches to synthesizing and integrating the results of primary studies are the conduct of systematic reviews and the development of evidence-based practice guidelines. Interest in applying both techniques has increased dramatically in the last 15 years (Chalmers and Haynes, 1994; Chalmers and Lau, 1993).

Systematic Reviews

Systematic reviews are scientific investigations that synthesize the results of multiple primary investigations. Conduct of a systematic review to answer a specific clinical question generally involves four steps (Cook et al., 1997):

• Conduct of a comprehensive search for potentially relevant articles, using explicit, reproducible criteria in the selection of articles for review
• Critical appraisal of the scientific soundness of the research designs of the primary studies, including the selection of patients, sample size, and methods of accounting for confounding variables (Cook et al., 1997; Lohr and Carey, 1999)
• Synthesis of data
• Interpretation of results

There are two types of systematic reviews: qualitative and quantitative (Cook et al., 1997). In a qualitative review, the results of primary studies are summarized but not statistically combined. Quantitative reviews, sometimes called meta-analyses, use statistical methods to combine the data and results of two or more studies.

When applied properly, meta-analysis can be a powerful tool for reaching a decision about the efficacy of alternative treatments in a more timely fashion than is possible through the qualitative review of individual studies. A classic example is the case of the efficacy of thrombolysis in treating myocardial infarction (Davidoff, 1999). In a review of 33 randomized controlled trials published between 1959 and 1988 that examined the efficacy of thrombolysis in reducing acute mortality, it was found that most studies "suggested" some benefit of therapy; however, the outcomes varied considerably from one study to another, and for the most part, the studies did not achieve statistical significance (Lau et al., 1992). But through the use of meta-analysis techniques to combine the results of multiple studies (thus increasing the statistical power), it was possible to demonstrate by 1973 that the therapeutic efficacy of thrombolysis was statistically significant at the 0.05 level. Unfortunately, some medical textbooks in the early 1990s still contained statements that thrombolysis was an unproven therapy (Davidoff, 1999).

Systematic reviews are highly variable in their methodological rigor. In a critical evaluation of 50 articles describing a systematic review or meta-analysis of the treatment of asthma, for example, Jadad et al. (2000b) concluded that 40 publications had serious or extensive flaws. Reviews conducted by the Cochrane Collaboration, discussed below, were found to be far more rigorous than those published in peer-reviewed journals.
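The statistical mechanism behind this kind of pooling can be illustrated with a minimal fixed-effect (inverse-variance) meta-analysis on the log odds-ratio scale. The three small "trials" below are invented numbers, not the thrombolysis data discussed above; they are constructed so that each trial alone falls short of significance while the pooled estimate does not:

```python
import math

# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# The three small "trials" are invented 2x2 tables, not real data.

def log_odds_ratio(a, b, c, d):
    """a, b = deaths/survivors (treated); c, d = deaths/survivors (control)."""
    lor = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d   # approximate variance of the log OR
    return lor, var

trials = [(12, 88, 20, 80), (9, 91, 15, 85), (14, 86, 22, 78)]

weights, estimates = [], []
for a, b, c, d in trials:
    lor, var = log_odds_ratio(a, b, c, d)
    weights.append(1 / var)               # more precise trials get more weight
    estimates.append(lor)

pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
se = math.sqrt(1 / sum(weights))          # pooled SE shrinks as trials accrue
z = pooled / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p-value

print(f"pooled OR = {math.exp(pooled):.2f}, z = {z:.2f}, p = {p:.4f}")
```

Because each trial's weight is the inverse of its variance, the pooled standard error is smaller than any single trial's, which is exactly the gain in statistical power the thrombolysis example describes.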
Two organized efforts are directed at conducting systematic reviews or meta-analyses. The first, the Cochrane Collaboration, was started in 1992 in Oxford, England. The second, the Agency for Healthcare Research and Quality's Evidence-Based Practice Centers program, started in 1997 and has resulted in the establishment of 12 centers, located mainly in universities, medical centers, and private research centers, that produce evidence-based reports on specific topics (Agency for Healthcare Research and Quality, 2000b).

The Cochrane Collaboration is an international network of health care professionals, researchers, and consumers that develops and maintains regularly updated reviews of evidence from randomized controlled trials and other research studies (Cochrane Collaboration, 1999). It currently comprises about 50 Collaborative Review Groups, which produce systematic reviews of various prevention and health care issues. The Collaboration maintains the Cochrane Library, a collection of several databases that is updated quarterly and distributed
annually to subscribers on disk, on CD-ROM, and via the Internet. One of the databases, The Cochrane Database of Systematic Reviews, contains Cochrane reviews, and another, The Cochrane Controlled Trials Register, is a bibliographic database of controlled trials. The Database of Abstracts of Reviews of Effectiveness includes structured abstracts of systematic reviews that have been critically appraised by the National Health Services Centre for Reviews and Dissemination in York, England; the American College of Physicians' Journal Club; and the journal Evidence-Based Medicine. The library also includes a registry of bibliographic information on nearly 160,000 controlled trials that provide high-quality evidence on health care outcomes.

The Agency for Healthcare Research and Quality's 12 Evidence-Based Practice Centers conduct systematic, comprehensive analyses and syntheses of the scientific literature on clinical conditions/problems that are common, account for a sizable proportion of resources, and are significant for the Medicare or Medicaid populations (Agency for Healthcare Research and Quality, 2000b). The centers include universities (Duke University, The Johns Hopkins University, McMaster University, Oregon Health Sciences University, the University of California at San Francisco, and Stanford University); research organizations (MetaWorks, the Research Triangle Institute, and the RAND Corporation); and health care organizations and associations (New England Medical Center, and Blue Cross and Blue Shield Association). Since December 1998, evidence reports have been released on the following topics: sleep apnea, traumatic brain injury, alcohol dependence, cervical cytology, urinary tract infection, depression, dysphagia, sinusitis, testosterone suppression, attention deficit/hyperactivity disorder, and atrial fibrillation (Eisenberg, 2000a).
In response to the rapid increase in the volume of and interest in systematic reviews generated by the Cochrane Collaboration, the Evidence-Based Practice Centers, and many other smaller-scale efforts, numerous journals specializing in evidence-based publications have emerged. The first journal devoted exclusively to systematic reviews and meta-analyses was the ACP Journal Club, first published in 1991. There are now a number of evidence-based journals, including Evidence-Based Medicine, Journal of Evidence-Based Health Care, Evidence-Based Cardiovascular Medicine, Evidence-Based Mental Health, and Evidence-Based Nursing, as well as numerous "best-evidence" departments in other journals (Sackett et al., 2000).

One of the most recent evidence-based resources is Clinical Evidence, an "evidence formulary" resulting from a collaborative effort of the British Medical Journal and the American College of Physicians (Godlee et al., 1999). Clinical Evidence is noteworthy because of its focus and organization around common conditions. First published in June 1999, it includes summaries on the prevention and treatment of about 70 such conditions. The summaries are based on systematic reviews and, when these are lacking, individual randomized controlled trials.
Clinical Evidence will be updated periodically, and eventually will lead to a family of products available in electronic and print form.

Practice Guidelines

Clinical practice guidelines can be defined as "systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances" (Institute of Medicine, 1992). Guidelines build on syntheses of the evidence, but go one step further to provide formal conclusions or recommendations about appropriate and necessary care for specific types of patients (Lohr et al., 1998). As a practical tool to influence practice, guidelines have been used in continuing medical education and clinical practice, as well as to make decisions about benefits coverage and medical necessity.

Guidelines have proliferated at a rapid pace during the last decade. During the early 1990s, the Agency for Health Care Policy and Research (now the Agency for Healthcare Research and Quality) sponsored an ambitious program for guideline development, which led to the specification of about 20 guidelines across a wide variety of clinical areas (Agency for Healthcare Research and Quality, 2000a; Perfetto and Stockwell Morris, 1996). The efforts in this area were eventually curtailed in favor of establishing the Evidence-Based Practice Centers in partnership with private-sector organizations (Lohr et al., 1998). Specialty societies, professional groups, health plans, medical centers, utilization review organizations, and others have also developed many practice guidelines.

Guidelines vary greatly in the degree to which they are derived from and consistent with the evidence base, for several reasons. First, as noted above, there is much variability in the quality of systematic reviews, which are the foundation for guidelines. Second, guideline development generally relies on expert panels to arrive at specific clinical conclusions.
Judgment must be exercised in this process because the evidence base is sometimes weak or conflicting, or lacking in the specificity needed to develop recommendations useful for making decisions about individual patients in particular settings (Lohr et al., 1998).

In an effort to organize information on practice guidelines and to identify those having an adequate evidence base, the Agency for Healthcare Research and Quality, in partnership with the American Medical Association and the American Association of Health Plans, has developed a National Guideline Clearinghouse, which became fully operational in 1999 (Eisenberg, 2000a). The Clearinghouse provides online access to a large and growing repository of evidence-based practice guidelines.

Developing and disseminating practice guidelines alone has minimal effect on clinical practice (Cabana et al., 1999; Hayward, 1997; Lomas et al., 1989; Woolf, 1993). But a growing body of evidence indicates that guidelines implemented with patient-specific feedback and/or computer-generated reminders lead to significant improvements (Dowie, 1998; Grimshaw and Russell, 1993). More
recent literature in this area also recognizes the importance of breaking down cultural, financial, organizational, and other barriers, both internal and external to health care organizations, to achieve widespread compliance with evidence-based guidelines (Solberg et al., 2000). To this end, up-front involvement of leaders from the health professions and representatives of patients in the guideline development process would likely help to ensure widespread adoption of the guidelines developed.

USING COMPUTER-BASED CLINICAL DECISION SUPPORT SYSTEMS

Until now, we have believed that the best way to transmit knowledge from its source to its use in patient care is to first load the knowledge into human minds . . . and then expect those minds, at great expense, to apply the knowledge to those who need it. However, there are enormous "voltage drops" along this transmission line for medical knowledge. (Lawrence Weed, 1997)

A clinical decision support system (CDSS) is defined as software that integrates information on the characteristics of individual patients with a computerized knowledge base for the purpose of generating patient-specific assessments or recommendations designed to aid clinicians and/or patients in making clinical decisions.1 Work on such systems has been under way for decades with minimal impact on health care delivery. Interest in CDSSs has grown dramatically during the last decade, however, in part because of the promise such systems hold for assisting clinicians and patients in applying science to practice.

Publications reporting the results of clinical trials evaluating the effectiveness of CDSSs have also increased in number and quality in recent years.
In a systematic review of controlled clinical trials assessing the effects of CDSSs on physician performance and patient outcomes, Hunt and colleagues identified 68 publications during the period 1974 through 1998, with 40 of these having been published in the most recent 6-year period (Hunt et al., 1998; Johnston et al., 1994).

CDSS applications assist clinicians and patients with three types of clinical decisions: preventive and monitoring tasks, prescribing of drugs, and diagnosis and management. Applications in the first category and most applications to date in the second category deal with less complex and frequently occurring clinical decisions. The software required to assist clinicians and patients with these types of decisions can be constructed using relatively simple rule-based logic, often based on practice guidelines (Delaney et al., 1999; Shea et al., 1996). Applications in the third category are far more complex and require more comprehensive

1 This definition is adapted from a physician-oriented definition developed by Hunt et al., 1998.
patient-specific data, access to a much larger repository of up-to-date clinical knowledge, and more sophisticated probabilistic mathematical models.

Use of a CDSS for prevention and monitoring purposes has been shown to improve compliance with guidelines in many clinical areas. In a meta-analysis of 16 randomized controlled trials, computer reminders were found to improve preventive practices for vaccinations, breast cancer screening, colorectal cancer screening, and cardiovascular risk reduction, but not for cervical cancer screening or other preventive services (e.g., glaucoma screening, TB skin test) (Shea et al., 1996). In another meta-analysis of 33 studies of the effect of prompting clinicians, 25 of which used computer-generated prompts, the technique was found to enhance performance significantly in all 16 preventive care procedures studied (Balas et al., 2000). Computer-generated reminder systems targeting patients have also been shown to be effective (Balas et al., 2000; McDowell et al., 1986, 1989).

Computerized prescribing of drugs offers great potential benefit in such areas as dosing calculations and scheduling, drug selection, screening for interactions, and monitoring and documentation of adverse side effects (Schiff and Rucker, 1998). Many studies have been conducted on the use of CDSSs to improve drug dosing, and most (9 out of 15) show some positive effect (Hunt et al., 1998). The use of CDSSs for drug selection, screening for interactions, and monitoring and documentation of adverse side effects is far more limited because these applications generally require the linkage of more comprehensive patient-specific clinical information with the medication knowledge base.
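The linkage just described, joining patient-specific data to a medication knowledge base, can be sketched with a toy rule-based screen of the kind such applications use. Every drug name, interaction, and threshold below is invented for illustration and is not clinical guidance:

```python
# Toy rule-based medication screening: joins a patient's medication list and
# lab data against a small knowledge base. All drug facts here are invented.

INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
}

RENAL_DOSE_LIMITS = {
    # drug -> (minimum creatinine clearance in mL/min, warning text)
    "metformin": (30, "avoid if creatinine clearance < 30 mL/min"),
}

def screen(patient):
    alerts = []
    meds = set(patient["medications"])
    # Pairwise interaction screening against the knowledge base.
    for pair, risk in INTERACTIONS.items():
        if pair <= meds:
            alerts.append(f"interaction {'+'.join(sorted(pair))}: {risk}")
    # Contraindication checks that need patient-specific lab data.
    for drug, (min_crcl, warning) in RENAL_DOSE_LIMITS.items():
        if drug in meds and patient["creatinine_clearance"] < min_crcl:
            alerts.append(f"{drug}: {warning}")
    return alerts

patient = {"medications": ["warfarin", "aspirin", "metformin"],
           "creatinine_clearance": 25}
for alert in screen(patient):
    print(alert)
```

Note that the second rule fires only because the lab value is available alongside the medication list, which is why the text observes that such screening depends on linking comprehensive patient data to the drug knowledge base.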
Although comprehensive medication order entry systems have been implemented in only a limited number of health care settings, the results of several recent studies have demonstrated that these systems reduce medical errors and costs (Bates et al., 1997, 1998, 1999). Computer-assisted disease management programs in areas in which decision making about medications is complex, such as the use of antibiotic and anti-infective agents, also have been shown to have a positive impact on quality and cost reduction (Classen et al., 1992; Evans et al., 1998).

The third category, computer-assisted diagnostic and management aids, is by far the most challenging. These systems require (1) an expansive knowledge base covering the full range of diseases and conditions, (2) detailed patient-specific clinical information (e.g., history, physical examination, laboratory data), and (3) a powerful computational engine that employs some form of probabilistic decision analysis. Interest in computer-assisted diagnosis goes back more than four decades, and yet there have been only a few evaluations of its performance (Kassirer, 1994). In a systematic review of 68 CDSS controlled trials between 1974 and 1998, Hunt and colleagues found only 5 studies (4 of the 5 published before 1990) that assessed the role of CDSSs in diagnosis, only one of which found a benefit from their use (Chase et al., 1983; Hunt et al., 1998; Pozen et al., 1984; Wellwood et al., 1992; Wexler et al., 1975; Wyatt, 1989).
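A minimal version of the probabilistic engine such diagnostic systems require can be sketched with naive Bayes over present/absent findings. All diseases, priors, and conditional probabilities below are invented for illustration only:

```python
# Toy probabilistic diagnostic engine: naive Bayes over clinical findings.
# Every prior and conditional probability below is invented for illustration.

KNOWLEDGE_BASE = {
    # disease: (prior, {finding: P(finding present | disease)})
    "pneumonia":             (0.03, {"fever": 0.85, "cough": 0.90, "chest_pain": 0.40}),
    "myocardial_infarction": (0.01, {"fever": 0.10, "cough": 0.05, "chest_pain": 0.90}),
    "common_cold":           (0.20, {"fever": 0.05, "cough": 0.80, "chest_pain": 0.02}),
}

def rank_diagnoses(findings):
    """findings: dict mapping finding -> True (present) / False (absent)."""
    scores = {}
    for disease, (prior, likelihoods) in KNOWLEDGE_BASE.items():
        score = prior
        for finding, present in findings.items():
            p = likelihoods[finding]
            score *= p if present else (1 - p)   # naive independence assumption
        scores[disease] = score
    total = sum(scores.values())                  # normalize to posteriors
    return sorted(((d, s / total) for d, s in scores.items()),
                  key=lambda x: x[1], reverse=True)

ranked = rank_diagnoses({"fever": True, "cough": True, "chest_pain": False})
print(ranked[0][0])  # -> 'pneumonia'
```

Even this toy shows why the requirements listed above are so demanding: the ranking is only as good as the disease coverage of the knowledge base and the completeness of the patient-specific findings fed into it.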
These early studies generally evaluated how well a computer performed in making or generating plausible diagnoses as compared with the decisions of experts, not the ability of a computer in partnership with a practicing clinician to perform better than the clinician alone (Kassirer, 1994). One recent study compared the performance of practicing clinicians with and without the aid of a diagnostic CDSS, and found among the former a significant improvement in the generation of correct diagnoses in hypothesis lists (Friedman et al., 1999). The study included faculty, residents, and fourth-year medical students; while all three groups performed better with the help of the computer, the magnitude of the improvement was greatest for students and smallest for faculty.

Studies conducted to date do not provide a convincing case in support of CDSS diagnostic tools. Yet it is important to recognize that changes under way in health care and computing will likely result in the development of far superior tools in the near future, for four reasons. First, CDSS diagnostic programs have been limited to date in terms of their clinical knowledge base. The cost of maintaining updated syntheses of the evidence for most conditions and translating these syntheses into decision rules has been prohibitively high for commercial developers of these systems. As discussed above, however, interest in evidence-based practice has led to a logarithmic increase in systematic reviews of the clinical evidence on particular clinical questions, which are available in the public domain.

Second, advances in computer technology, accompanied by dramatic decreases in the cost of hardware and software, have greatly reduced concerns about the computing requirements of CDSS diagnostic systems.
Furthermore, there are early signs of CDSS diagnostic systems becoming available on the Internet, thus further reducing the capital investment and operational costs incurred at the level of a clinical practice (McDonald et al., 1998).

Third, the Internet has opened up new opportunities to address issues related to patient data. As noted, to be effective, CDSS diagnostic systems require detailed, patient-specific clinical information (history, physical results, medications, laboratory test results), which in most health care settings resides in a variety of paper and automated datasets that cannot easily be integrated. Past efforts to develop automated medical record systems have not been very successful because of the lack of common standards for coding data, the absence of a data network connecting the many health care organizations and clinicians involved in patient care, and a number of other factors. The Internet has the potential to overcome many of these barriers to automated patient data. The World Wide Web offers much of the standardization technology needed to combine independent sources of clinical data (McDonald et al., 1998). The willingness of patients and clinicians to use these systems will depend to a great extent on finding ways to adequately address concerns about the confidentiality of personally identifiable clinical information and a host of technical, legal, policy, and organizational issues that currently impede many health applications on the
Internet. But numerous efforts are under way to address these issues as they apply to both the current and the next-generation Internet (Elhanan et al., 1996; National Research Council, 2000).

Fourth, the extraordinary advances achieved in molecular medicine in recent years will further increase the complexity of both the evidence base and the clinical decision-making process, making it imperative that clinicians use computer-aided decision supports. Molecular medicine introduces a huge new body of knowledge that will affect virtually every area of practice, and also opens up the possibility of developing individualized treatments linked to a patient's genetic definition (Rienhoff, 2000). CDSS programs offer the prospect of applying more sophisticated forms of decision analysis to the evaluation of various treatment options, taking into account both the patient's genetic definition and preferences (Lilford et al., 1998).

Given the potential of CDSSs to enhance evidence-based practice and provide greater opportunity for patients to participate in clinical decision making, the committee believes greater public investment in research and development on such systems is warranted. In fiscal year 1999, the Agency for Healthcare Research and Quality began a new initiative, Translating Research into Practice, aimed at implementing evidence-based tools and information in health care settings (Eisenberg, 2000a). The focus of the initiative is on cultivating partnerships between researchers and health care organizations for the conduct of practice-based, patient outcome research in applied settings. In fiscal year 1999, 3-year grants were awarded in support of projects to identify effective approaches to smoking cessation, chlamydia screening of adolescents, diabetes care in medically underserved areas, and treatment of respiratory distress syndrome in preterm infants.
The resources for this program should be expanded to support an applied research and development agenda specific to CDSSs.

MAKING INFORMATION AVAILABLE ON THE INTERNET

The Internet is rapidly becoming the principal vehicle for communication of health information to both consumers and clinicians. It is predicted that 90 percent of households will have Internet access by 2005 to 2010 (Rosenberg, 1999). The number of Americans who use the Internet to retrieve health-related information is estimated to be about 70 million (Cain et al., 2000). The connectivity of health care organizations has also increased. For example, between 1993 and 1997, the percentage of academic medical libraries with Internet connections increased from 72 to 96 percent, and that of community hospital libraries rose from 24 to 72 percent (Lyon et al., 1998).

The volume of health care information available on the Internet is enormous. Estimates of the number of health-related Web sites vary from 10,000 to 100,000 (Benton Foundation, 1999; Eysenbach et al., 1999). A survey conducted by USA Today found that consumers access health-related Web sites to research an illness
or disease (62 percent), seek nutrition and fitness information (20 percent), research drugs and their interactions (12 percent), find a doctor or hospital (4 percent), and look for online medical support groups (2 percent) (USA Today, 1998).

It is easy for a user to be overwhelmed by the volume of information available on the Web. For example, there are some 61,000 Web sites that contain information on breast cancer (Boodman, 1999), and a simple search for "diabetes mellitus" returns more than 40,000 sites (National Research Council, 2000). Information available on the Internet is also of varying quality: some is incorrect, and some is misleading (Achenbach, 1996; Biermann et al., 1999). Several options have been proposed to assist users in distinguishing the good information from the bad. Silberg et al. (1997) have encouraged Web site sponsors to adhere voluntarily to a set of rules including (1) inclusion of information on authors, along with their affiliations and credentials; (2) attribution, including references and sources for all content; (3) disclosure of Web site ownership, sponsorship, advertising, underwriting, commercial funding, and potential conflicts of interest; and (4) dates on which content was posted and updated.

To identify valuable information, users can rely on a number of rating services that review and rate Web sites, but there are problems with many of these rating services as well. In a recent review, Jadad and Gagliardi (1998) identified 47 rating services, of which only 14 provided a description of the criteria used to produce the ratings, and none gave information on interobserver reliability or construct validity.

One of the richest sources of clinical information on the Internet is the National Library of Medicine's (NLM) MEDLINE. MEDLINE contains more than 9 million citations and abstracts of articles drawn mainly from professional journals (Miller et al., 2000).
In June 1997, NLM made MEDLINE available free of charge on the Web, and usage jumped about 10-fold to 75 million searches annually (Lindberg and Humphreys, 1998). When MEDLINE was established, it was assumed that its primary audience would be health care professionals, but it is now recognized that the lay public has a keen interest in accessing the clinical knowledge base as well. It is estimated that about 30 percent of MEDLINE searches are by members of the general public and students, 34 percent by health care professionals, and 36 percent by researchers (Lindberg, 1998). In 1998, NLM added 12 consumer health journals to MEDLINE to increase its coverage of information written for the general public, and also launched MEDLINEplus, a Web site specifically for consumers (Lindberg and Humphreys, 1999). MEDLINEplus is divided into eight sections (e.g., health topics, databases, organizations, clearinghouses), each of which provides links to reputable Web sites maintained by the National Institutes of Health, the Centers for Disease Control and Prevention, the Food and Drug Administration, and professional organizations and associations.
The MEDLINEplus section HealthTopics provides users with access to preformulated MEDLINE searches on common topics, most of which are diseases or conditions. The topics included were identified through an analysis of the most common search terms used on the NLM home page, which revealed that 90 percent or more were for specific diseases, conditions, or other common medical terms (e.g., Viagra, St. John's Wort) (Miller et al., 2000). The HealthTopics list numbers more than 300, with some of the most frequently searched topics being diabetes, shingles, prostate, hypertension, asthma, lupus, fibromyalgia, multiple sclerosis, and cancer.

There are many other sources of filtered evidence-based information as well, including the Cochrane Library discussed above. Access to evidence-based guidelines is provided in the United States by the National Guideline Clearinghouse (sponsored by the Agency for Healthcare Research and Quality), the American Medical Association, and the American Association of Health Plans (Agency for Healthcare Research and Quality et al., 2000), and in Canada by the CPG Infobase (sponsored by the Canadian Medical Association) (Canadian Medical Association, 2000). NOAH (New York Online Access to Health) is a library collaboration for bilingual consumer health information on the Internet (Voge, 1998). Thus many efforts are under way to assist users in accessing useful health care information on the Web. Some believe, however, that much more could be done to achieve a more "powerful and efficient synergy" between the Internet and evidence-based decision making (Jadad et al., 2000a).

DEFINING QUALITY MEASURES

The enhanced interest in and infrastructure to support evidence-based practice have implications for quality measurement, improvement, and accountability (Eisenberg, 2000b).
The use of priority conditions as a framework for organizing the evidence base, as discussed in Chapter 4, may also have implications for external accountability programs. Systematic reviews and practice guidelines provide a strong foundation for the development of a richer set of quality measures focused on medical care processes and outcomes. To date, a good deal of quality measurement for purposes of external accountability has focused on a limited number of "rate-based" indicators: rates of occurrence of desired or undesired events. The National Committee for Quality Assurance, through its Health Plan Employer Data and Information Set, makes comparative quality data available on participating health plans and includes such measures as childhood immunization rates, mammography rates, and the percentage of diabetics who had an annual eye exam (National Committee for Quality Assurance, 1999). The Joint Commission on Accreditation of Healthcare Organizations sponsors the ORYX system for hospitals, which includes measures such as infection rates and postsurgical complication
rates. Syntheses of the evidence base and the development of practice guidelines should contribute to more valid and meaningful quality measurement and reporting.

As systematic reviews, development of practice guidelines, and efforts to disseminate evidence focus increasingly on priority conditions (a unit of analysis that is meaningful to patients and clinicians), so, too, must accountability processes. To date, efforts to make comparative quality data available in the public domain have focused on types of health care organizations, for the most part health plans and hospitals, and, as noted above, measurement of a limited number of discrete quality indicators for these organizations. Numerous efforts are under way, however, to develop comprehensive measurement sets for various conditions and quality reporting mechanisms. These include the efforts of the Foundation for Accountability, the Health Care Financing Administration's peer review organizations, and a variety of collaborations involving leading medical associations and accrediting bodies.

The Foundation for Accountability (2000b) has developed condition-specific measurement guides related to a number of common conditions: adult asthma, alcohol misuse, breast cancer, diabetes, health status under age 65, and major depressive disorders. The Foundation continues to work on child and adolescent health, coronary heart disease, end of life, and HIV/AIDS. In addition, it has created FACCT-ONE, a survey tool designed to gather information directly from patients about important aspects of their health care (Foundation for Accountability, 2000a). The first phase of the survey addresses quality of care for people living with the chronic illnesses of asthma, diabetes, and coronary artery disease.
It assesses performance related to patient education and knowledge, obtaining of essential treatments, access, involvement in care decisions, communication with providers, patient self-management behaviors, coping, symptom control, maintenance of regular activities, and functional status.

Since 1992, the Health Care Financing Administration, through its Peer Review Organizations, has been developing core sets of performance measures for a number of common conditions, including acute myocardial infarction, heart failure, stroke, pneumonia, breast cancer, and diabetes (Health Care Financing Administration, 2000). Comparative performance data for Medicare fee-for-service beneficiaries by state were recently released for each of these conditions (Jencks et al., 2000). Quality-of-care measures for beneficiaries experiencing acute myocardial infarction have been piloted in four states as part of the Cooperative Cardiovascular Project (Ellerbeck et al., 1995; Marciniak et al., 1998).

The Diabetes Quality Improvement Project, a collaborative quality measurement effort involving the American Diabetes Association, the Foundation for Accountability, the Health Care Financing Administration, the National Committee for Quality Assurance, the American Academy of Family Physicians, the American College of Physicians, and the Veterans Administration, has been under way for several years. The project has identified seven accountability measures (i.e.,
hemoglobin A1c tested, poor hemoglobin A1c control, eye exam performed, lipid profile performed, lipids controlled, monitoring for kidney disease, and blood pressure controlled), six of which will be included in the National Committee for Quality Assurance's Year 2000 Health Plan Employer Data and Information Set (Health Care Financing Administration, 1999).

The American Medical Association, working with experts from national medical specialty societies and the quality measurement community, has developed measure sets for physician clinical performance in the areas of adult diabetes, prenatal testing, and chronic stable coronary artery disease. The core measure set for adult diabetes, developed with input from the Iowa Foundation for Medical Care, was approved by the American Medical Association in July 2000, while the other two measure sets are undergoing public review and comment (American Medical Association, 2000).

It will be important for the National Quality Forum, a recently created public-private partnership developed to foster collaboration across public and private oversight organizations, to consider carefully how best to align comparative quality reporting with the developing infrastructure in support of evidence-based practice and consumer-centered health care. The National Quality Forum, a not-for-profit organization established in 1999 with the participation of both public and private purchasers, is currently developing a strategic measurement framework to guide the future development of external quality reporting for purposes of accountability and consumer choice (Kizer, 2000). This activity, now under way, presents a unique opportunity to influence the direction of quality measurement.

REFERENCES

Achenbach, Joel. Reality Check. You Can't Believe Everything You Read. But You'd Better Believe This. Washington Post. E-C01, Dec. 4, 1996.
Agency for Healthcare Research and Quality. 2000a. "Clinical Practice Guidelines Online." Online. Available at http://www.ahcpr.gov/clinic/cpgonline.htm [accessed Jan. 2, 2001].
———. 2000b. "Evidence-based Practice Centers. Synthesizing Scientific Evidence to Improve Quality and Effectiveness in Clinical Care. AHRQ Publication No. 00-P013." Online. Available at http://www.ahcpr.gov/clinic/epc/ [accessed Oct. 11, 2000].
Agency for Healthcare Research and Quality, American Medical Association, and American Association of Health Plans. 2000. "National Guideline Clearinghouse." Online. Available at http://www.guideline.gov [accessed Jan. 2, 2001].
American Medical Association. Adult Diabetes Core Physician Performance Measurement Set. Chicago, IL: American Medical Association, 2000.
Balas, E. Andrew and Suzanne A. Boren. Managing Clinical Knowledge for Health Care Improvement. Yearbook of Medical Informatics. National Library of Medicine, Bethesda, MD:65-70, 2000.
Balas, E. Andrew, Scott Weingarten, Candace T. Garb, et al. Improving Preventive Care by Prompting Physicians. Arch Int Med 160(3):301-8, 2000.
Bates, David W., Lucian L. Leape, David J. Cullen, et al. Effect of Computerized Physician Order Entry and a Team Intervention on Prevention of Serious Medication Errors. JAMA 280(15):1311-6, 1998.
Bates, David W., Nathan Spell, David J. Cullen, et al. The Costs of Adverse Drug Events in Hospitalized Patients. JAMA 277(4):307-11, 1997.
Bates, David W., Jonathan M. Teich, Joshua Lee, et al. The Impact of Computerized Physician Order Entry on Medication Error Prevention. J Am Med Inform Assoc 6(4):313-21, 1999.
Benton Foundation. 1999. "Networking for Better Care: Health Care in the Information Age." Online. Available at http://www.benton.org/Library/health/ [accessed Sept. 18, 2000].
Biermann, J. Sybil, Gregory J. Golladay, Mary Lou V. H. Greenfield, and Laurence H. Baker. Evaluation of Cancer Information on the Internet. Cancer 86(3):381-90, 1999.
Boodman, Sandra G. Medical Web Sites Can Steer You Wrong. Study Finds Erroneous and Misleading Information on Many Pages Dedicated to a Rare Cancer. Washington Post. Health-Z07, Aug. 10, 1999.
Cabana, Michael D., Cynthia S. Rand, Neil R. Powe, et al. Why Don't Physicians Follow Clinical Practice Guidelines? A Framework for Improvement. JAMA 282(15):1458-65, 1999.
Cain, Mary M., Robert Mittman, Jane Sarasohn-Kahn, and Jennifer C. Wayne. Health e-People: The Online Consumer Experience. Oakland, CA: Institute for the Future, California Health Care Foundation, 2000.
Canadian Medical Association. 2000. "CMA Infobase - Clinical Practice Guidelines." Online. Available at http://www.cma.ca/cpgs/index.asp [accessed Jan. 2, 2001].
Chalmers, Iain and Brian Haynes. Systematic Reviews: Reporting, Updating, and Correcting Systematic Reviews of the Effects of Health Care. BMJ 309:862-5, 1994.
Chalmers, T. C. and J. Lau. Meta-Analytic Stimulus for Changes in Clinical Trials. Statistical Methods in Medical Research 2:161-72, 1993.
Chase, Christopher R., Pamela M. Vacek, Tamotsu Shinozaki, et al. Medical Information Management: Improving the Transfer of Research Results to Presurgical Evaluation. Medical Care 21(3):410-24, 1983.
Classen, David C., R. Scott Evans, Stanley L. Pestotnik, et al. The Timing of Prophylactic Administration of Antibiotics and the Risk of Surgical-Wound Infection. N Engl J Med 326(5):281-6, 1992.
Cochrane Collaboration. 1999. "Cochrane Brochure." Online. Available at http://hiru.mcmaster.ca/cochrane/cochrane/cc-broch.htm [accessed Jan. 2, 2001].
Cook, Deborah J., Cynthia D. Mulrow, and R. Brian Haynes. Systematic Reviews: Synthesis of Best Evidence for Clinical Decisions. Ann Int Med 126(5):376-80, 1997.
Davidoff, Frank. In the Teeth of the Evidence. The Curious Case of Evidence-Based Medicine. The Mount Sinai Journal of Medicine 66(2):75-83, 1999.
Davidoff, Frank and Valerie Florance. The Informationist: A New Health Profession? Ann Int Med 132(12):996-8, 2000.
Delaney, Brendan C., David A. Fitzmaurice, Amjid Riaz, and F. D. Richard Hobbs. Changing the Doctor-Patient Relationship: Can Computerised Decision Support Systems Deliver Improved Quality in Primary Care? BMJ 319:1281, 1999.
Dowie, Robin. A Review of Research in the United Kingdom to Evaluate the Implementation of Clinical Guidelines in General Practice. Family Practice 15(5):462-70, 1998.
Eisenberg, John M. Quality Research for Quality Healthcare: The Data Connection. Health Services Research 35:xii-xvii, 2000a.
———. A Research Agenda for Quality. Washington, D.C.: Presentation at the Institute of Medicine Thirtieth Annual Meeting, The National Academies, 2000b.
Elhanan, G., S. A. Socratous, and J. J. Cimino. Integrating DXplain into a Clinical Information System Using the World Wide Web. Proc AMIA Annual Fall Symp:348-52, 1996.
Ellerbeck, Edward F., Stephen F. Jencks, Martha J. Radford, et al. Quality of Care for Medicare Patients With Acute Myocardial Infarction: A Four-State Pilot Study from the Cooperative Cardiovascular Project. JAMA 273(19):1509-14, 1995.
Evans, R. Scott, Stanley L. Pestotnik, David C. Classen, et al. A Computer-Assisted Management Program for Antibiotics and Other Antiinfective Agents. N Engl J Med 338(4):232-8, 1998.
Eysenbach, Gunther, Eun Ryoung Sa, and Thomas L. Diepgen. Shopping Around the Internet Today and Tomorrow: Towards the Millennium of Cybermedicine. BMJ 319:1294, 1999.
Foundation for Accountability. 2000a. "FACCT-ONE: A Tool for Evaluating the Performance of Health Care Organizations." Online. Available at http://www.facct.org/measures/Develop/FACCTONE.htm [accessed Jan. 2, 2001].
———. 2000b. "Supporting Quality-Based Decisions. The FACCT Consumer Information Network, Comparative Information for Better Health Care Decisions." Online. Available at http://www.facct.org/information.html [accessed Jan. 2, 2001].
Friedman, Charles P., Arthur S. Elstein, Fredric M. Wolf, et al. Enhancement of Clinicians' Diagnostic Reasoning by Computer-Based Consultation. JAMA 282(19):1851-6, 1999.
Godlee, Fiona, Richard Smith, and David Goldmann. Clinical Evidence: This Month Sees the Publication of a New Resource for Clinicians. BMJ 318:1570-1, 1999.
Grimshaw, Jeremy M. and Ian T. Russell. Effect of Clinical Guidelines on Medical Practice: A Systematic Review of Rigorous Evaluations. The Lancet 342:1317-22, 1993.
Guyatt, Gordon H., Maureen O. Meade, Roman Z. Jaeschke, et al. Practitioners of Evidence Based Care: Not All Clinicians Need to Appraise Evidence from Scratch but All Need Some Skills. BMJ 320:954-5, 2000.
Hayward, Robert S. A. Clinical Practice Guidelines on Trial. Can Med Assoc J 156:1725-7, 1997.
Health Care Financing Administration. 1999. "Quality of Care - National Projects. Diabetes Quality Improvement Project (DQIP)." Online. Available at http://www.hcfa.gov/quality/3l.htm [accessed Jan. 2, 2001].
———. 2000. "Quality of Care - PRO Priorities. National Clinical Topics (Task 1)." Online. Available at http://www.hcfa.gov/quality/11a.htm [accessed Jan. 2, 2001].
Hunt, Dereck L., R. Brian Haynes, Steven E. Hanna, and Kristina Smith. Effects of Computer-Based Clinical Decision Support Systems on Physician Performance and Patient Outcomes: A Systematic Review. JAMA 280(15):1339-46, 1998.
Institute of Medicine. Guidelines for Clinical Practice: From Development to Use. Marilyn J. Field and Kathleen N. Lohr, eds. Washington, D.C.: National Academy Press, 1992.
Jadad, Alejandro R. and Anna Gagliardi. Rating Health Information on the Internet: Navigating to Knowledge or to Babel? JAMA 279(8):611-4, 1998.
Jadad, Alejandro R., R. Brian Haynes, Dereck Hunt, and George P. Browman. The Internet and Evidence-Based Decision-Making: A Needed Synergy for Efficient Knowledge Management in Health Care. Journal of the Canadian Medical Association 162(3):362-5, 2000a.
Jadad, Alejandro R., Michael Moher, George P. Browman, et al. Systematic Reviews and Meta-Analysis on Treatment of Asthma: Critical Evaluation. BMJ 320(7234):537, 2000b.
Jencks, Stephen F., Timothy Cuerdon, Dale R. Burwen, et al. Quality of Medical Care Delivered to Medicare Beneficiaries: A Profile at State and National Levels. JAMA 284(13):1670-6, 2000.
Johnston, Mary E., Karl B. Langton, R. Brian Haynes, and Alix Mathieu. Effects of Computer-Based Clinical Decision Support Systems on Clinician Performance and Patient Outcome: A Critical Appraisal of Research. Ann Int Med 120:135-42, 1994.
Kassirer, Jerome P. A Report Card on Computer-Assisted Diagnosis: The Grade: C. N Engl J Med 330(25):1824-5, 1994.
Kizer, Kenneth W. The National Quality Forum Enters the Game. International Journal for Quality in Health Care 12(2):85-7, 2000.
Lau, Joseph, Elliott M. Antman, Jeanette Jimenez-Silva, et al. Cumulative Meta-Analysis of the Therapeutic Trials for Myocardial Infarction. N Engl J Med 327(4):248-54, 1992.
Lilford, R. J., S. G. Pauker, D. A. Braunholtz, and Jiri Chard. Getting Research Findings into Practice: Decision Analysis and the Implementation of Research Findings. BMJ 317:405-9, 1998.
Lindberg, Donald A. B. 1998. "Fiscal Year 1999 President's Budget Request for the National Library of Medicine." Online. Available at http://www.nlm.nih.gov/pubs/staffpubs/od/budget99.html [accessed Sept. 18, 2000].
Lindberg, Donald A. B. and Betsy L. Humphreys. Updates Linking Evidence and Experience. Medicine and Health on the Internet: The Good, the Bad, and the Ugly. JAMA 280(15):1303-4, 1998.
———. A Time of Change for Medical Informatics in the USA. Yearbook of Medical Informatics. National Library of Medicine, Bethesda, MD:53-7, 1999.
Lohr, Kathleen N. and Timothy S. Carey. Assessing "Best Evidence": Issues in Grading the Quality of Studies for Systematic Reviews. Journal on Quality Improvement 25(9):470-9, 1999.
Lohr, Kathleen N., Kristen Eleazer, and Josephine Mauskopf. Health Policy Issues and Applications for Evidence-Based Medicine and Clinical Practice Guidelines. Health Policy 46:1-19, 1998.
Lomas, Jonathan, Geoffrey M. Anderson, Karin Domnick-Pierre, et al. Do Practice Guidelines Guide Practice? The Effect of a Consensus Statement on the Practice of Physicians. N Engl J Med 321(19):1306-11, 1989.
Lyon, Becky J., P. Zoë Stavri, D. Colette Hochstein, and Holly Grossetta Nardini. Internet Access in the Libraries of the National Network of Libraries of Medicine. Bull Med Libr Assoc 86(4):486-90, 1998.
Marciniak, Thomas A., Edward F. Ellerbeck, Martha J. Radford, et al. Improving the Quality of Care for Medicare Patients With Acute Myocardial Infarction: Results from the Cooperative Cardiovascular Project. JAMA 279(17):1351-7, 1998.
McColl, Alastair, Helen Smith, Peter White, and Jenny Field. General Practitioners' Perceptions of the Route to Evidence Based Medicine: A Questionnaire Survey. BMJ 316:361-5, 1998.
McDonald, Clement J., J. Marc Overhage, Paul R. Dexter, et al. Canopy Computing: Using the Web in Clinical Practice. JAMA 280(15):1325-9, 1998.
McDowell, Ian, Claire Newell, and Walter Rosser. Comparison of Three Methods of Recalling Patients for Influenza Vaccination. Can Med Assoc J 135:991-7, 1986.
———. A Randomized Trial of Computerized Reminders for Blood Pressure Screening in Primary Care. Medical Care 27(3):297-305, 1989.
Mechanic, David. Bringing Science to Medicine: The Origins of Evidence-Based Practice. Health Affairs 17(6):250-1, 1998.
Miller, Naomi, Eve-Marie Lacroix, and Joyce E. B. Backus. MEDLINEplus: Building and Maintaining the National Library of Medicine's Consumer Health Web Service. Bull Med Libr Assoc 88(1):11-7, 2000.
National Committee for Quality Assurance. Health Plan Employer Data and Information Set, Version 3.0. Washington, D.C.: National Committee for Quality Assurance, 1999.
National Research Council. Networking Health: Prescriptions for the Internet. Washington, D.C.: National Academy Press, 2000.
Perfetto, Eleanor M. and Lisa Stockwell Morris. Agency for Health Care Policy and Research Clinical Practice Guidelines. The Annals of Pharmacotherapy 30:1117-21, 1996.
Pozen, Michael W., Ralph B. D'Agostino, Harry P. Selker, et al. A Predictive Instrument to Improve Coronary-Care-Unit Admission Practices in Acute Ischemic Heart Disease. N Engl J Med 310(20):1273-8, 1984.
Rienhoff, Otto. Retooling Practitioners in the Information Age. Information Technology Strategies from the United States and the European Union: Transferring Research to Practice for Health Care Improvement. E. Andrew Balas, ed. Washington, D.C.: IOS Press, 2000.
Rosenberg, Matt. Popularity of Internet Won't Peak for Years: Not Until Today's Middle-Schoolers Reach Adulthood Will the Technology Really Take Off. Puget Sound Business Journal. May 24, 1999. Online. Available at http://www.bizjournals.com/seattle/stories/1999/05/24/focus9.html [accessed Jan. 22, 2001].
Sackett, David L., William M. C. Rosenberg, J. A. Muir Gray, et al. Evidence Based Medicine: What It Is and What It Isn't. BMJ 312:71-2, 1996.
Sackett, David L., Sharon E. Straus, W. Scott Richardson, et al. Evidence-Based Medicine: How to Practice & Teach EBM. 2nd edition. London, England: Churchill Livingstone, 2000.
Schiff, Gordon D. and T. Donald Rucker. Computerized Prescribing: Building the Electronic Infrastructure for Better Medication Usage. JAMA 279(13):1024-9, 1998.
Shea, Steven, William DuMouchel, and Lisa Bahamonde. A Meta-Analysis of 16 Randomized Controlled Trials to Evaluate Computer-Based Clinical Reminder Systems for Preventive Care in the Ambulatory Setting. J Am Med Inform Assoc 3(6):399-409, 1996.
Silberg, William M., George D. Lundberg, and Robert A. Musacchio. Assessing, Controlling, and Assuring the Quality of Medical Information on the Internet. JAMA 277(15):1244-5, 1997.
Solberg, Leif I., Milo L. Brekke, Charles J. Fazio, et al. Lessons from Experienced Guideline Implementers: Attend to Many Factors and Use Multiple Strategies. Joint Commission Journal on Quality Improvement 26(4):171-88, 2000.
USA Today. Health-Related Activities Conducted Online. Health, July 10, 1998.
Voge, Susan. NOAH-New York Online Access to Health: Library Collaboration for Bilingual Consumer Health Information on the Internet. Bull Med Libr Assoc 86(3):326-34, 1998.
Wellwood, J., S. Johannessen, and D. J. Spiegelhalter. How Does Computer-Aided Diagnosis Improve the Management of Acute Abdominal Pain? Annals of the Royal College of Surgeons of England 74:40-6, 1992.
Wexler, Jerry R., Phillip T. Swender, Walter W. Tunnessen, and Frank A. Oski. Impact of a System of Computer-Assisted Diagnosis: Initial Evaluation of the Hospitalized Patient. Am J Dis Child 129:203-5, 1975.
Woolf, Steven H. Practice Guidelines: A New Reality in Medicine. III. Impact on Patient Care. Arch Int Med 153:2646-55, 1993.
Wyatt, J. R. Lessons Learnt from the Field Trial of ACORN, An Expert System to Advise on Chest Pain. Proceedings of the Sixth World Conference on Medical Informatics, Singapore. 111-5, 1989.