Designing the National Health Care Quality Report
The National Health Care Quality Report (also referred to as the Quality Report) offers an important way to increase awareness of quality issues, the amount of attention that audiences pay to quality, and the degree of involvement in efforts to improve it. To produce a report that achieves those goals, the Agency for Healthcare Research and Quality (AHRQ) should tailor reports to key audiences. This chapter provides an overview of how to produce such a report. It begins with a description of the audiences for the report and the goals that AHRQ should have in reaching each audience. The following sections provide an analysis of how audience needs should influence the presentation of data and the contents of the report. They also contain an examination of other important tasks, such as evaluating the strengths and weaknesses of the report following its release, promoting the report, and evaluating the longer-term outcomes associated with the goals of the Quality Report.
RECOMMENDATION 10: The National Health Care Quality Report should be produced in several versions tailored to key audiences—policy makers, consumers, purchasers, providers, and researchers. It should feature a limited number of key findings and the minimum number of measures needed to support these findings.
The Agency for Healthcare Research and Quality should produce a National Health Care Quality Report that will attract the attention and interest of policy makers, consumers, purchasers, providers, researchers, and other audiences. For some of these audiences, particularly policy makers, the findings should be “actionable.” Currently, health care quality issues are poorly understood and receive little notice. The National Health Care Quality Report can become an important tool to promote a better understanding of health care quality, generate support for improvement, and highlight areas that need special attention.
To accomplish these goals, AHRQ should make the Quality Report relevant, engaging, easy to read, and easy to understand. Producing different reports for different audiences is an important and feasible way to do this. The print versions should be brief, be aimed at key audiences, and summarize key findings. Different versions of the report should be available on a web site tailored to specialized audiences as well as to the general public. While the Quality Report or family of reports should be focused and selective, it should draw on a comprehensive National Quality Report Data Set covering all aspects of quality as discussed in Chapter 4. This annual data set should also be available publicly on the Web in an accessible format to the extent feasible. The committee understands that some of the files will be made available only to researchers, and that other files containing extremely sensitive or identifying information will not be released at all in order to protect confidentiality.
Like the data set, the Quality Report should be produced annually as defined by law (Healthcare Research and Quality Act, 1999). The specific elements of the data set should be relatively stable in order to track changes in quality, although data may not have to be collected every year for every measure. In contrast, measures included in the Quality Report may vary from year to year based on the key findings selected, although some will be repeated from time to time to show changes in specific aspects of quality over time.
The report should not overwhelm either general or specialized audiences with information about health care quality. Instead, the content should be highly selective, relevant to current policy concerns, and fresh from year to year, even while preserving some continuity. Furthermore, the format employed should be designed so that differences across regions or groups and trends in health care quality are easily discernible.
AUDIENCES FOR THE NATIONAL HEALTH CARE QUALITY REPORT
The committee identified several groups of people, or audiences, that should be the focus of the Quality Report. Because these audiences have different roles to play in supporting health care quality, the report must provide them with the kinds of information that meet their particular interests and needs. The main audiences for the report are members of Congress and other policy makers in national and state government, as well as consumers. Other important audiences include purchasers, providers, and researchers. The Quality Report should set specific goals in communicating with these audiences, including the following:
Policy makers. The Quality Report should identify actionable areas of health care quality that deserve attention from policy makers.
Consumers. The key goal with this audience is to raise awareness of important quality issues. Since relatively few consumers will see the actual report in print or on the Web, AHRQ should find ways to encourage the media to give it attention-getting, constructive, and lasting coverage, which will build public interest and understanding.
Purchasers. The Quality Report should identify areas of health care quality that these groups can help to improve, as well as aspects they may want to focus on when evaluating the health plans they offer to their employees.
Providers. Health care providers, including clinicians, will have a special interest in the report findings since many will relate directly to their work. Quality Report findings should strongly encourage all those with a responsibility for providing high-quality health care to address areas in which improvement is seriously needed and to have a sense of personal satisfaction in those areas where progress has been made.
Researchers. To the extent possible, researchers should have access to Quality Report data on the Web to develop new measures, refine existing ones, examine quality of care, and otherwise contribute to the dialogue on health care quality.
Defining the Content of the Quality Report
What should the Quality Report feature? AHRQ should select at most three to five key findings about health care quality for attention in the report. While the report should present enough measures to clearly support these findings, it should aim for only 3 to 5 measures per finding, for a total of 9 to 25 measures overall. The Quality Report should highlight what the nation has achieved, where it has made progress, what needs improvement, and areas in which a high degree of variation exists.
Recent research on cognitive processing suggests that people can process only three to five “pieces” of information at one time (Halford, 1998; Hochhauser, 1999). An understandable temptation is to pack the report with more findings and measures in the hope of highlighting more information. However, this will lead to the ironic result of audiences learning less, rather than more, about the quality of care provided in the United States. When people are overwhelmed by information, they have a hard time differentiating and absorbing what is truly important. Often, even experts will cope with too much information by synthesizing findings. For instance, they may emphasize the importance of a single factor among several that were presented. This factor is usually something that is clear, precise, and understood (Hibbard, 1998; Hsee, 1996; Mellers et al., 1992; Slovic, 1992).
Findings in the Quality Report should be presented in a headline format. The content of the findings, of course, will depend on the evidence—what the measures and data show about the quality of health care delivery. Audience testing can also be used to fine-tune the wording of headlines, as discussed later in this chapter. Some examples of findings in headline format include the following: “Providers are getting patients more involved in their care”; “The nation is paying less attention to the importance of preventing condition X”; “Patients are less likely to wait for care in regions A and B”; and “The nation is giving the dying poorer-quality care now than before.”
Findings could focus on a variety of aspects related to health care quality, including areas that
demonstrate excellence by, for example, meeting clinical standards for treatment of particular health conditions;
need improvement because, for example, they do not meet clinical standards for treatment of particular health conditions;
show a high degree of variation, for example, from one year to the next;
capture trends of improvement or deterioration; and
indicate geographic disparities, for example, by state or region, or disparities across populations.
The guidelines for the National Health Care Quality Report should not be confused with the framework described in Chapter 2. While the report is selective, the framework is designed to ensure that quality measurement and data sources are comprehensive, that is, that enough data are gathered to support measures of the many important aspects of this complex topic. The framework provides a basis for the set of measures from which the Quality Report will draw. It is an analytical rather than a reporting framework.
Also, the Quality Report should not be confused with existing sources of comparative quality information for particular providers or organizations. There are many of these sources, including public- and private-sector organizations, accrediting bodies, national government agencies, state and local government agencies, individual health plans, other health organizations, and free or fee-based web sites (Bates and Gawande, 2000). The focus of most of these reports is on evaluating and comparing the performance of specific providers, institutions, or health plans. In contrast, the Quality Report will focus on the quality of care provided to the people of the United States by the system as a whole, rather
than by a specific entity of the system. It will also provide information at a higher level than many of these reports by focusing on the national level complemented with information at the state level whenever possible.
Presenting Information in the Quality Report
Whatever the content of the Quality Report, it will contain some mix of data-based findings and other information on quality. The following sections present guidelines on how to most effectively present report contents to members of Congress and other key audiences, including consumers, purchasers, and providers. It should be noted that most of these guidelines should make the report more appealing to all audiences. There will be policy makers and other specialists who are so highly engaged in the issue of health care quality that they will read the report and find it useful, almost no matter what form it takes. Others, however, will benefit from efforts to make information on quality more meaningful and interesting. Box 5.1 summarizes these guidelines.
Making the Report Available in Print and on the Web
Making the report available in print and on the Web will allow AHRQ to deliver it in the format preferred by each audience. This does not mean that the content of the report in both media should be the same. Businesses, for example, issue annual reports in print and web versions (see Box 5.2). They make the print versions brief and engaging by presenting an overview of major trends and developments aimed at the general reader. The Web is used to present more detailed information to financial analysts, interested stockholders, and other specialists. However, efforts are made to make web sites appealing to specialists and generalists alike (“Annual Reports,” 2000; “Corporate Annual Reports: Now More Readable, Credible, and Fashionable,” 2000).
In producing the Quality Report, AHRQ should adopt the same practices used by businesses. The print version should be brief, engaging, and targeted at policy makers, consumers, purchasers, providers, and the media. AHRQ should also consider not one, but several, print and web versions for different audiences, that is, a “family of reports.” For example, the one for policy makers should focus on highlighting problem areas in health care quality, while the one for consumers should focus on making the concept of quality understandable and relevant. For print reports, AHRQ should make use of accepted principles for presenting information in this medium (Schriver, 1997; Tufte, 1983; U.S. Securities and Exchange Commission, 1998). AHRQ should also evaluate the need for and feasibility of making the report available in other languages to increase access by the largest non-English-speaking populations in the United States.
BOX 5.1 Guidelines on Presenting Information in the Quality Report
The Web is a flexible enough medium to easily contain versions of the report for both generalists and specialists. For example, the Maryland Health Care Commission has a web site that contains several versions of a report on health care quality aimed at different audiences (see
Box 5.3). As already mentioned, the web site should also contain the measures and the data set on which the reports are based, for use by researchers and other policy specialists. In addition, AHRQ should make use of accepted principles of good web design for the report web site (Nielsen, 1999; Spool et al., 1999; Sun Microsystems, 1999). Including audio and video components in the web-based reports would make them more appealing to a general audience and more accessible to those with limited health literacy (that is, the ability to read, understand, and act on health care information) (American Medical Association, Ad Hoc Committee on Health Literacy for the Council on Scientific Affairs, 1999).
BOX 5.2 New-Style Annual Business Reports: How to Serve Generalists and Specialists
In the past few years, businesses have turned to new-style annual reports aimed at their two major audiences—shareholders and industry analysts. Some businesses produce eye-catching, engaging print reports to satisfy the general information needs of shareholders, placing financial data on the Web for analysts. Others, such as Merck & Co., Inc., have divided their reports into different parts for different audiences. According to Sharyn Bearse, director of corporate communications at Merck, “We found 85 percent of readers are influenced by what they see. If the cover is compelling, they'll open it up. If the call-outs, photos, lay-out and headlines capture their attention, they'll stop and read it” (“Corporate Annual Reports: Now More Readable, Credible, and Fashionable,” 2000:1).
Some other innovations of new-style annual business reports include the following:
Themes. Some annual reports have themes that change from year to year. Performance records, topical issues, and future plans are examples of themes.
Narratives. Reports often present stories about employees or people helped by company products or about the process of developing a new product.
Data presentation. Reports creatively present data by, for example, setting statistics against a background of colorful patterns or photos linked to the theme of the report.
BOX 5.3 Maryland Health Care Commission's Health Maintenance Organization Quality and Performance Reports: Different Versions for Different Audiences
Which report is right for you? The web site for the Maryland Health Care Commission uses this question to guide users to the appropriate version of its HMO Quality and Performance Reports. The reports are available at http://www.mhcc.state.md.us/ in versions tailored for consumers, legislators and other policy makers, and specialists. The site briefly describes their contents, their purposes (e.g., comparison, evaluation, reference), and the kinds of people who might find them most useful.
Consumers can choose an “easy” version that provides basic overviews of managed care health plan benefits and performance ratings or an “interactive” version that allows them to select information only on the health maintenance organizations (HMOs) in which they are interested. The interactive report, which is also designed for use by employers and organizations, makes HMO comparisons easier. Legislators and other policy makers have a “policy-oriented” version that evaluates the strengths and weaknesses of Maryland's HMOs by comparing them to HMOs in the Mid-Atlantic region and elsewhere in the nation.
SOURCE: Maryland Health Care Commission, 2000.
Using Benchmarks or Standards for Comparisons
Reports can summarize or synthesize findings in ways that limit the number actually presented and make the few that are chosen more meaningful to audiences. One of the most effective means is through use of benchmarks or standards. This involves presenting data on performance, processes, outcomes, or other items and comparing them in a straightforward manner to benchmarks established, for example, by what has occurred in previous years or what has been accomplished in similar areas. Data can also be compared to standards that are, for example, set by regulations, clinical guidelines, or expert groups. These
comparisons can be general enough to encompass many discrete findings. At the same time, they can be more meaningful because they are relevant to a greater number of people than a collection of discrete findings, which readers must spend more effort reviewing to find items of interest to them.
Using benchmarks or standards for comparison is an example of “evaluability,” a new concept based on decision research (Hibbard et al., 2000; Hsee, 1996, 1998). The “evaluability principle” asserts that information is more likely to be used when it is presented in a way that makes it easier to map onto an affective (good–bad or value-based) scale. That is, information is more likely to be used when it is easier to distinguish between better and worse options. When information is “evaluable,” the differences among the comparisons are immediately evident to the reader, or at least the patterns in the data are immediately observable. Providing a context for understanding the information (for example, labeling care as “good,” “fair,” or “poor,” rather than just providing comparative numbers) is another way to make the information more evaluable by and meaningful to consumers.
Although providing consumer-oriented information for plan selection is not an objective of the Quality Report, it is useful to note that experiments with consumers show that comparative performance information is more likely to be used and weighted in health plan choices when it is presented in an evaluable format than when the same information is presented with little attention to evaluability (Hibbard et al., 2000).
In showing a comparison of how the 50 states are doing on seven aspects of diabetes care, an evaluable presentation might summarize the information by giving a state a star for each measure that indicates adequate to good care (or some other threshold standard determined clinically or statistically). States with performance ranging from adequate to good in all seven measures of diabetes care would have seven stars and would pop out immediately to the reader and be easy to identify. This would also be true for states that have only one or no stars. Detailed data for all seven measures for each of the 50 states could also be shown, but the stars would provide a clear visual summary of the data.
On the other hand, an example of a less evaluable (but commonly used) presentation approach might be to show a number, representing a performance level, for each of the seven measures for each of the 50 states. It would be much more difficult to identify which states were high performers and which were poor performers in terms of diabetes care from such a data display.
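The star summary described above amounts to a simple threshold-and-count rule: one star per measure at or above a cutoff for adequate care. A minimal sketch of that logic follows; the state names, scores, and the 0.80 cutoff are hypothetical illustrations, not data from the Quality Report.

```python
# Sketch of the "evaluable" star-summary display described above.
# The states, scores, and 0.80 threshold are hypothetical.

THRESHOLD = 0.80  # assumed cutoff for "adequate to good" care

diabetes_measures = {
    # state: performance on seven diabetes-care measures (0.0-1.0)
    "State A": [0.91, 0.85, 0.88, 0.82, 0.90, 0.87, 0.84],
    "State B": [0.75, 0.83, 0.60, 0.79, 0.88, 0.71, 0.66],
    "State C": [0.55, 0.48, 0.62, 0.51, 0.59, 0.44, 0.50],
}

def star_count(scores, threshold=THRESHOLD):
    """One star per measure at or above the threshold."""
    return sum(score >= threshold for score in scores)

for state, scores in diabetes_measures.items():
    stars = star_count(scores)
    print(f"{state}: {'*' * stars or '(no stars)'}  ({stars}/7 measures)")
```

The point of the display is exactly what the text argues: a reader scanning the star column can spot the seven-star and zero-star states at a glance, while the raw seven-number rows require deliberate comparison.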
An important attribute of evaluability is that it appears to operate outside the awareness of the individual. That is, the presentation format influences how people perceive and use information, but they are not conscious of this influence. This has implications for how data formats are tested. Report formats should be made as evaluable as possible, and testing with consumers should focus on how well they understand the information and the labels. Testing can also focus on how well users can discern patterns and easily pick out better or worse options; this is a more reliable indication of the evaluability of a format than consumer preferences for how the data should be presented, which may or may not actually facilitate use of the data.
Choosing Findings That Have Strong Statistical Evidence
Some findings will have strong statistical evidence. For example, compared to others, they may be more robust (that is, consistent when tested with a wider range of assumptions or methodologies); significant at a higher confidence level; or supported by findings in the research literature. When selecting among the many findings that could be included in the Quality Report, those with stronger statistical evidence should be preferred.
Choosing Findings That Are Relevant to Prevailing Policy Concerns
AHRQ should also take into account various considerations that might make some quality topics more relevant to the report than others. These could include news events on certain quality concerns; public interest in particular health conditions; the policy agendas of administrations, congressional leaders, governors, and others; and findings from other government reports (Rushefsky and Patel, 1998).
Adding Salience to the Issue of Health Care Quality
A data-driven report on health care quality could easily be one of those important, but dry, documents that gets little attention. One way to personalize the issue of health care quality would be to spotlight findings that affect many people. Another approach could be to feature information on individuals, institutions, or other familiar focal points that personify a larger aspect of quality (also called narratives). In general, consumers prefer information on practical topics that could or do affect them or people like themselves (Blendon et al., 1998; Lubalin and Harris-Kojetin, 1999; Mennemeyer et al., 1997; Robinson and Brodie, 1997). They respond poorly to abstract, conceptual information (Eddy, 1998; Galvin, 1998; Marshall et al., 2000; Philipchalk, 1972; Yuille and Paivio, 1969). Policy makers and the media are also receptive to this kind of information (Beasley, 1998; Brodie et al., 2001; Graber, 1997; McDonough, 2001; Sharf, 2001).
Narratives can be presented in a variety of ways. These include
using sidebars to highlight stories of people who illustrate statistical trends, for example, presenting the case of a child whose immunization record mirrors national norms;
starting off a statistical presentation of trends in health care quality with an example of a health care provider whose involvement in health care quality is typical; and
featuring case studies of institutions that have improved the quality of their health care delivery.
Including narratives to illustrate information also presented in statistical form can add salience to specific aspects of health care quality and make them more meaningful. Narratives appear to work best when combined with statistics by simultaneously engaging the reader emotionally through stories and analytically through data (Kopfman et al., 1998).
In addition, the report should present selected data at the state level, as well as by relevant population subgroups. In this way, it would make use of smaller units of analysis, which people might be able to identify with more easily. In addition, this would provide members of Congress and state policy makers with the kind of detailed information they need to target quality improvement initiatives.
As already mentioned, AHRQ should not necessarily use the framework's dimensions of components of quality and consumer perspectives on health care needs as categories for reporting. For example, AHRQ may wish to focus on quality health care for families, structuring the report around a handful of the main quality concerns of families. Although these concerns may fall into specific categories of the framework such as “effectiveness” or “staying healthy,” audience research may reveal more meaningful ways to describe them in the report. Regardless of which labels are used for reporting, they should be tested, especially with the audiences that AHRQ believes might find them most relevant.
Making Health Care Quality Actionable
Policy makers, purchasers, and providers need information that will help them identify areas in which they can take effective action. To supply this information, the Quality Report should call attention to problem areas. In addition to developing long-term policy responses, executive and legislative policy makers must be able to identify the kind of incremental solutions that can be achieved within election periods. The Quality Report can help significantly by highlighting issues that lend themselves to feasible policy responses, such as immunization programs, improved access to care for specific groups, and increased appropriations for improved patient safety. In addition, purchasers and providers have responded to areas in which feasible solutions for improvement exist with innovative changes to improve quality (Bentley and Nash, 1998; Epstein, 1996; Erickson et al., 2000; Hannan et al., 1995).
BOX 5.4 Keeping an Annual Report Fresh: AARP's State Profiles on Health Care
AARP (formerly the American Association for Retired Persons) has been producing annual reports on the status of health care since 1990, and its state profiles have changed with the times. The reports have always presented basic information on each state such as demographics, health status data (e.g., morbidity rates), and the use of medical services such as emergency rooms and prenatal care. However, they have also included new data to keep up with new developments. As managed care has grown, the reports have added statistics on coverage, performance, and state oversight activities. As the uninsured have gained more attention, the reports have presented more specific information on those with and without coverage. The reports have also tracked the impact of initiatives such as the State Children's Health Insurance Program (S-CHIP) and the Health Insurance Portability and Accountability Act (HIPAA). In addition, they have responded to rising concern over specific medical conditions, such as the prevalence of obesity, by providing new data.
As the producer of the report, AARP's Public Policy Institute uses different ways to identify new topics. In part, it relies on feedback from state networks, which convey what AARP members, policy makers, and others are interested in. It also responds to new developments in health care and issues in the news. From time to time, it surveys those on its mailing list for feedback. In deciding on content, the institute first looks at whether reliable data are available. The importance and timeliness of the topic are other prominent considerations.
SOURCES: Brangan, 2000; Lamphere et al., 1999; Landsverk, 1999; U.S. Department of Health and Human Services, 1996.
Placing Quality in Positive and Negative Frames
High-quality health care has many benefits, and the Quality Report should explain what they are. As discussed in Chapter 3, the measure set for the report should be balanced so that it can provide a complete picture of the quality of care in both its positive and its negative aspects. However, poor-quality health care has many negative consequences, and the report should also explain what they are. Placing selected information about quality in a negative frame is one way to draw attention because research shows that people are more influenced by negative frames (Hibbard et al., 2000; Kahneman and Tversky, 1984; Tversky and Kahneman, 1981). A negative frame has another advantage: it puts information on quality in a form that the media will find useful since it often highlights negative events or outcomes (Graber, 1997).
Keeping the Quality Report Fresh
AHRQ should guard against an annual report containing little that is new. Although some areas of health care quality may be so important and so changeable that the Quality Report (or “family” of reports) should feature updates on them each year, AHRQ should emphasize findings in different areas, touching on aspects of quality that seem especially relevant in a particular year or bringing to light aspects of quality that deserve greater attention. For example, the report could be kept fresh by spotlighting health conditions that are frequently in the news, featuring information that is especially relevant to consumer concerns, or focusing on new developments in health care policy. See Box 5.2 (on annual business reports) and Box 5.4 (on keeping annual policy reports fresh) for ways in which these publications can remain distinctive from year to year.
AUDIENCE TESTING THE NATIONAL HEALTH CARE QUALITY REPORT
Audience Testing Before Report Releases
It is essential that AHRQ conduct audience research in writing and producing the report: each release needs the kind of specific feedback that can come only from testing it with the kinds of people who are likely to use it (Backer et al., 1992; McGee et al., 1999; Rubin, 1994). Audience testing can be performed in a number of ways, including in-depth or cognitive interviews, focus groups, random sample surveys, and experiments. Each has advantages and disadvantages in terms of the kinds of data produced, the strengths and weaknesses of those data, their expense, the ease of conducting them, and other issues of feasibility (McGee et al., 1999). In conducting testing, AHRQ should keep in mind the unconscious factors that can influence audience reaction to content and format. Testing at different stages of production provides different kinds of feedback. It also saves time and money and provides the kind of evidence needed to create a more effective audience-centered product (U.S. Department of Health and Human Services, 2000). Testing is especially critical for web site development since dissatisfied users are unlikely to visit the site again (Nielsen, 2000; Schriver, 1997).
Before conducting pre-tests of report material, AHRQ should undertake formative audience research that will help guide basic aspects of developing the Quality Report. These basic aspects could include
what the term “health care quality” means to audiences;
how audiences might use information on quality;
which components of quality audiences are most and least interested in; and
how different audiences prefer to receive information on quality.
Formative research can be done in several different ways. In part, it builds on secondary data gathered by other agencies and organizations. In part, it also involves gathering primary data through interviews, focus groups, experiments, and other means designed to provide direct feedback on the Quality Report itself (U.S. Department of Health and Human Services, 2000; Weinreich, 1999).
Pre-testing is conducted while the print and web versions of the report are being designed. For the print version, pre-testing might involve exploring audience preferences on the cover, the order of topics, content, design, graphics, and overall length of the report. Researchers may test prototypes or examples from material that could serve as templates. For the web version, the material to be pre-tested might include the home and first page and navigation tools, in addition to areas also explored for the print version.
Testing is performed on mock-up material developed with the feedback gathered in pre-testing. It can be used to examine whether issues identified in pre-testing have been adequately addressed. It can also be used to identify new issues that have arisen from interpreting pre-testing results.
Evaluative Testing of Report Releases
After each Quality Report (or family of reports) has been released, further audience testing is necessary to improve subsequent versions and gain insights that could be applied to other AHRQ material (Kotler and Andreason, 1996; Kotler and Roberto, 1989; Rossi and Freeman, 1993). In particular, evaluative testing should track the audiences that read the report, what they did or did not like about it, and how they learned about the report. It should also assess the report's impact.
Important questions to ask include the following:
Audience readership. Which audiences read the report? Which segments of which audiences especially used it? Which did not? Why?
Strengths and weaknesses of the report. Which sections did audiences read? Which sections appealed to which audiences? Was the report interesting to read? Was it relevant to audience needs? What would audiences like to see changed?
Distribution of the report. How did audiences learn about the report? How did they receive the report? When did they receive it?
Impact of the report. Did audiences use the report in work they do on health care quality? How? Was it timely? Are there specific policy changes that it helped to bring about? Is public opinion on health care quality different as a result of the report? Is public understanding broader? Are communities or regions developing measurement and reporting systems to track Quality Report indicators at their levels? Did the report affect the efforts of low-performing areas? Did the report lead to local improvement efforts?
Specific computer programs have been developed to evaluate web sites. These automatically gather data on usage, including how often a site is visited, whether it is operating efficiently, and whether users like it. More specific information that can be gathered includes the number of hits and page views overall and by page; user session length overall and by page; user activity by day of week and hour of day; the page used prior to exiting the site; top paths to the site; data files that are downloaded; and server response times, among others (Kotler and Roberto, 1989).
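All of the usage statistics listed above can be derived from ordinary web server access logs. The report does not prescribe any particular tool, so the following Python sketch is an illustration only: it tallies page hits and exit pages (the last page each visitor viewed) from simplified log records, and the log format and field layout are assumptions.

```python
from collections import Counter

def summarize_log(lines):
    """Tally page hits and last-page-before-exit per visitor.

    Assumes simplified access-log records of the form:
    'visitor_ip timestamp path' (one record per line).
    """
    hits = Counter()
    last_page = {}  # most recent page seen for each visitor
    for line in lines:
        ip, _timestamp, path = line.split()
        hits[path] += 1
        last_page[ip] = path
    # The last page recorded for a visitor approximates the exit page.
    exit_pages = Counter(last_page.values())
    return hits, exit_pages

# Hypothetical log records for two visitors.
log = [
    "10.0.0.1 12:00 /index.html",
    "10.0.0.1 12:01 /findings.html",
    "10.0.0.2 12:02 /index.html",
    "10.0.0.2 12:03 /data.html",
    "10.0.0.1 12:04 /data.html",
]
hits, exits = summarize_log(log)
```

Real log-analysis packages add session timeouts, path analysis, and response-time reporting on top of this same counting logic.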
Automated surveys can be supplemented with surveys available for users to complete on the web site. Candidate questions include users' interests (for example, research, policy) and satisfaction (for example, how they liked the site; how they would improve it; and if they were able to find the topics they were interested in). Web-based surveys can also include a text box for unstructured responses that can provide greater insights into how the report and web site could be improved (Nielson, 2000).
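Structured survey answers can be tallied directly, while the unstructured text-box responses are collected for qualitative review. A minimal sketch of that split, using hypothetical survey records whose field names are illustrative only:

```python
from collections import Counter

# Hypothetical web-survey records; the field names are assumptions.
responses = [
    {"interest": "policy", "satisfied": True, "comment": ""},
    {"interest": "research", "satisfied": True, "comment": "Add state-level data."},
    {"interest": "policy", "satisfied": False, "comment": "Hard to find the measures I needed."},
]

# Tally the structured answers.
interest_counts = Counter(r["interest"] for r in responses)
satisfaction_rate = sum(r["satisfied"] for r in responses) / len(responses)

# Set aside non-empty free-text responses for qualitative review.
comments = [r["comment"] for r in responses if r["comment"]]
```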
PROMOTING THE QUALITY REPORT
No matter how great the effort to design a report with audiences in mind, it will not have an impact unless those audiences learn about it in the media in ways that make them want to read it or learn more about it. To do this, it will be important to generate publicity at and between release times to let people know about the report and its significance (Backer et al., 1992; Kotler and Andreason, 1996; Kotler and Roberto, 1989). For an example of media attention to a similar national report on health care quality, see Box 5.5.
To reach national, state, and local consumers, policy makers, and other audiences, the Quality Report must attract attention from many levels of print, broadcast, and electronic media or communication channels. Given the diversity of audiences, multiple communication channels and activities will be required. To get attention from wire services that provide news to many national and local newspapers, AHRQ should study the coverage they have given to other reports and provide them with the information they need to give the Quality Report similar or better treatment. In addition, AHRQ should consult the “daybooks,” compilations used by many wire services to describe the kinds of events and topics that interest them. To identify other print and broadcast outlets, AHRQ should also research how the media have covered other reports and consult media directories for background and contact information. Attention from Internet news sources is important as well. AHRQ should distribute press kits and releases to print and broadcast news outlets, as well as to on-line news services, electronic newsletters, and automatic mailing list servers (Weinreich, 1999).
BOX 5.5 Newspaper Coverage of the National Health Service Performance Indicators
What kind of media coverage might the Quality Report receive? One way to find out is to examine the kind of coverage received by a similar report. The National Health Service's NHS Performance Indicators report for England is comprehensive, containing 49 indicators, including 7 composite measures that summarize 18 discrete measures. It focuses mainly on the quality of care in hospitals.
Coverage of the latest version of the report (July 2000) by eight major daily newspapers revealed the following problems:
With limited time between report releases, AHRQ will find it necessary to prioritize communication channels. Given the need to provide information to policy makers as soon as possible, AHRQ should prioritize those print, broadcast, and electronic sources that are most likely to reach them and their constituents. AHRQ should then turn its efforts to consumers and other key audience segments.
AHRQ must also work to attract media attention to the Quality Report and to quality issues throughout the year. It should not overlook the importance of pegging quality information to the news of the day: prominent quality-related events or crises should make the media more receptive to report-related press releases. Other newsworthy events could include public speeches, celebrity appearances, congressional hearings, and issue conferences on subjects that the report highlights (Corbett and Mori, 1999). AHRQ could also issue report updates that can be summarized in print form and placed on the Web. Data updates should also be placed on the Web. It should be noted that as new reports and data updates become available, it is important to maintain earlier archival versions of both on the web site, with links inserted to updates (Salzmann, 1998; Weinreich, 1999).
Partnerships are another way to distribute the report to targeted audiences. AHRQ should partner with state and local government bodies and with non-governmental organizations to distribute the report and focus attention on it through conferences, conventions, scientific seminars, newsletters, trade publications, workshops, hyperlinks, special events, and other forums and media. Partnerships would have the additional benefit of better targeting interested audiences at minimal cost. They may also attract resources such as guest speakers, opinion articles, journal articles, and video programming (Weinreich, 1999).
An important way to keep the Quality Report in the public eye would be to use the capacity of the Department of Health and Human Services (DHHS) to generate news. The Quality Report should become an integral part of all of the programming, fieldwork, Internet communication, and press activities conducted by DHHS and its agencies, including AHRQ.
Evaluating the Promotion Plan
While AHRQ is creating its plan to distribute the Quality Report, it should also define a plan to evaluate the effectiveness of the distribution and to identify areas that should be improved. In particular, AHRQ should gather data on whether the report or notifications about the release of the report reached targeted audiences. It should also gather data on whether the report and release notifications were delivered in a timely manner. In addition, it should learn whether audiences who read or learned about the report in print, on the Web, or through the media would have preferred to be reached in a different way. AHRQ should also examine whether audiences were satisfied with getting additional information or other follow-up assistance they needed (U.S. Department of Health and Human Services, 2000).
AHRQ could choose many ways to carry out its evaluation, including interviews, focus groups, and surveys. Each has strengths and weaknesses, some of which are described in the section on audience testing methods. Whichever way is chosen, it is important to conduct these evaluations soon after the report is distributed so that memories are fresh and there is time to incorporate these findings into the distribution plan for the next release.
SUMMARY
This chapter contains an outline of the ways in which AHRQ should develop, promote, and evaluate the Quality Report. As explained, the Quality Report should not be a comprehensive document. Instead, it should contain a limited number of findings about quality and a limited number of measures to support those findings. The chapter also contains guidelines that can be used to help select findings for presentation in the report.
As also explained in this chapter, AHRQ should develop, promote, and evaluate the Quality Report for different audiences, especially members of Congress and other policy makers, consumers, and the media. To satisfy the different needs of different audiences in terms of content, accessibility, and other areas, AHRQ should produce the report in print and web formats.
This chapter also contains an overview of how to let audiences know about the report. AHRQ should aggressively promote the Quality Report through the mass media and more specialized channels of communication, such as print and electronic newsletters. AHRQ should also seek to draw attention to the report at release time and between releases. It should employ private- and public-sector partnerships to encourage awareness and use of the report. Finally, as explained, AHRQ should evaluate the way each year's edition of the report was promoted, with the goal of improving its promotion in the following year.
AHRQ faces many challenges in developing, promoting, and evaluating the Quality Report in the ways that this chapter sets forth. In a report on complex, visible, and highly important issues such as national health care quality, there will be inevitable pressure to make it as comprehensive as possible in the hope of reaching as many people on as many subjects as possible. However, a strong body of evidence shows that a selective focus on the most important topics will be a far more effective means of communication. In addition, there will be pressure to save money by cutting back on development and evaluation activities. Here, too, a strong body of evidence shows that resources spent in these areas will make the National Health Care Quality Report a far more effective means of communication.
Annual Reports. 2000. Step-By-Step Graphics 16(2): 104–141.
1999. Health literacy: Report of the Council on Scientific Affairs. Journal of the American Medical Association 281(6): 551–557.
2000. Reporting NHS performance: How did the media perform? British Medical Journal 321: 248.
1992. Designing Health Communication Campaigns: What Works? Newbury Park, Calif.: Sage Publications.
2000. The impact of the Internet on quality measurement. Health Affairs 19(6): 104–114.
1998. Journalists' attitudes toward narrative writing. Newspaper Research Journal 19(1): 78–89.
1998. How Pennsylvania hospitals have responded to publicly released reports on coronary artery bypass graft surgery. Joint Commission Journal on Quality Improvement 24(1): 40–49.
1998. Understanding the managed care backlash. Health Affairs 17(4): 80–94.
2000. Personal communication, September 18. AARP.
2001. Communicating health information through the entertainment media. Health Affairs.
1999. Medicine, media and celebrities: News coverage of breast cancer, 1960–1995. Journalism and Mass Communication Quarterly 76(2): 229–249.
Corporate annual reports: Now more readable, credible, and fashionable. 2000. PR News 56(18).
2000. NHS Performance Indicators. National Health Service (NHS) Executive. Available at: http://www.doh.gov.uk/nhsperformanceindicators.
1998. Performance measurement: Problems and solutions. Health Affairs 17(4): 7–25.
1996. The role of quality measurement in a competitive marketplace. Pp. 207–234 in Strategic Choices for a Changing Health Care System, eds. Stuart Altman and Uwe E. Reinhardt. Chicago: Health Administration Press.
2000. The relationship between managed care insurance and use of lower-mortality hospitals for CABG surgery. Journal of the American Medical Association 283(15): 1976–1982.
1998. Are performance measures relevant? Health Affairs 17(4): 29–31.
1997. Mass Media and American Politics. Washington, D.C.: Congressional Quarterly Press.
1998. Development of processing capacity entails representing more complex relations: Implications for cognitive development. In Working Memory and Thinking, eds. R. H. Logie and K. H. Gilhooly. East Sussex, U.K.: Psychology Press.
1995. The decline in coronary artery bypass graft surgery mortality in New York State. Journal of the American Medical Association 273(3): 209–213.
Healthcare Research and Quality Act. 1999. Statutes at Large. Vol. 113, Sec. 1653.
1998. Use of outcome data by purchasers and consumers: New strategies and new dilemmas. International Journal for Quality in Health Care 10(6): 503–508.
2000. Older Consumers' Skill in Using Comparative Data to Inform Health Plan Choice: A Preliminary Assessment. AARP Public Policy Institute. Available at: http://research.aarp.org/ppi/index.html.
1999. Health maintenance organization report cards: Communication strategies versus consumer abilities. Managed Care Quarterly 7(3): 75–82.
1996. The evaluability hypothesis: An explanation for preference reversals between joint and separate evaluations of alternatives. Organizational Behavior and Human Decision Processes 67(3): 247–257.
1998. Less is better: When low-value options are valued more highly than high-value options. Journal of Behavioral Decision Making 11: 107–121.
1984. Choices, values, and frames. American Psychologist 39: 341–350.
1998. Affective and cognitive reactions to narrative versus statistical evidence organ donation messages. Journal of Applied Communication Research 26(3): 279–300.
1996. Strategic Marketing for Nonprofit Organizations. Upper Saddle River, N.J.: Prentice Hall.
1989. Social Marketing. New York: The Free Press.
1999. Reforming the Health Care System: State Profiles 1999. Washington, D.C.: AARP Public Policy Institute.
1999. Patient race and ethnicity in primary care management of child behavior problems. Medical Care 37(11): 1089–1091.
1999. What do consumers want and need to know in making health care choices? Medical Care Research and Review 56(Supplement 1): 67–102.
2000. The public release of performance data: What do we expect to gain? A review of the evidence. Journal of the American Medical Association 283(14): 1866–1874.
2000. Comparing the Quality of Maryland HMOs. Baltimore. Available at: http://www.mhcc.state.md.us.
2001. Using and misusing anecdote in policy making. Health Affairs 20(1): 207–212.
1999. Making survey results easy to report to consumers. Medical Care 37(3 Supplement): MS32–MS40.
1992. Distributional theories of impression formation. Organizational Behavior and Human Decision Processes 51: 313–343.
1997. Death and reputation: How consumers acted upon HCFA mortality information. Inquiry 34: 117–128.
1999. useit.com: Jakob Nielson's Website (Usable Information Technology) [on-line]. Available at: http://www.useit.com [Dec. 7, 2000].
The Alertbox: Current Issues in Web Usability [on-line]. Available at: http://www.useit.com/alertbox [Jul. 17, 2000].
1972. Thematicity, abstractness, and the long-term recall of connected discourse. Psychonomic Science 27: 361–362.
1997. Understanding the quality challenge for health consumers: The Kaiser/AHCPR Survey. Joint Commission Journal on Quality Improvement 23: 239–244.
1993. Evaluation: A Systematic Approach. Newbury Park, Calif.: Sage Publications.
1994. Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. New York: John Wiley & Sons.
1998. Politics, Power and Policy Making: The Case of Health Reform in the 1990s.
1998. Making the News: A Guide for Nonprofits and Activists. Boulder, Colo.: Westview Press.
1997. Dynamics in Document Design: Creating Text for Readers. New York: John Wiley & Sons.
2001. Out of the closet and into the legislature: Breast cancer stories. Health Affairs 20(1): 213–218.
1992. Perception of risk: Reflections on the psychometric paradigm. Pp. 117–152 in Social Theories of Risk, eds. Sheldon Krimsky and Dominic Golding. Westport, Conn.: Praeger.
1999. Web Site Usability: A Designer's Guide. San Francisco: Morgan Kaufmann Publishers.
1999. Sun Microsystems [on-line]. Available at: http://www.sun.com [Dec. 7, 2000].
1983. The Visual Display of Quantitative Information. Cheshire, Conn.: Graphics Press.
1981. The framing of decisions and the psychology of choice. Science 211(4481): 453–458.
1996. Fact Sheet: Health Insurance Portability and Accountability Act of 1996 [on-line]. Available at: http://www.os.dhhs.gov:80/news/press/1996pres/960821.html [Jan. 15, 1998].
2000. Healthy People 2010. Washington, D.C.: U.S. Government Printing Office.
1998. A Plain English Handbook: How to Create Clear SEC Disclosure Documents. Washington, D.C. Available at: http://www.sec.gov/news/handbook.htm.
1999. Hands-On Social Marketing. Thousand Oaks, Calif.: Sage Publications.
1969. Abstractness and the recall of connected discourse. Journal of Experimental Psychology 82: 467–471.