

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




Procedures for Priority Setting

According to its legislative mandate, the Agency for Health Care Policy and Research's (AHCPR) Office of the Forum for Quality and Effectiveness in Health Care was to develop guidelines for at least three clinical treatments or conditions by January 1, 1991, barely a year after enactment of the legislation. That legislative timetable was "unrealistically short" given that the function was new to the U.S. Department of Health and Human Services, which had no organizational unit, experienced staff, or established procedures with which to start its work on guidelines (IOM, 1990c, p. 98). Not surprisingly, given these circumstances, the Forum's first director and the Agency's first administrator did not develop and employ explicit criteria and formal procedures to select initial topics (S. King, personal communication, June 22, 1994). They did, however, consult with practitioners, other government officials, and consumer or patient groups. A February 1990 meeting with 45 nursing experts was particularly influential in identifying the problems that became the subjects of the first three guidelines (AHCPR, 1990c). The guidelines, which were not published until the first half of 1992, focused on acute pain, prevention of pressure ulcers, and urinary incontinence (AHCPR, 1992a,b,c).

Since its early days, the AHCPR's Forum has moved toward a more formal process for selecting guideline topics. This chapter describes the Forum's current procedures and compares them with those proposed by the AHCPR's Office of Health Technology Assessment (OHTA), which were largely those recommended in the Institute of Medicine's (IOM) 1992 report on priority setting. It then recommends some improvements in Forum procedures. The committee's conclusions about the Forum and OHTA must be tempered by an acknowledgment that the actual workings of the Forum process were not audited

by the committee and that the actual implementation of the OHTA process lies largely in the future.

FORUM AND OHTA PRIORITY-SETTING PROCEDURES

At the direction of Congress, both the Forum and OHTA have adopted formal priority-setting procedures. The legislation reauthorizing AHCPR required the administrator to "develop and publish a methodology for establishing priorities for guideline topics . . . [and] to establish and publish annually in the Federal Register a list of guideline topics under consideration" (P.L. 102-410, Sec. 7). For OHTA, Congress went further, specifying in the same statute the criteria that the unit was to use in selecting topics for technology assessments.

The Forum's Procedures for Topic Selection

In the fall of 1993, AHCPR published a methodology for topic selection in practice guidelines (AHCPR, 1993). As described in the Federal Register and explained to this committee by agency staff, the current approach to developing a list of topics for guideline formulation involves several steps. Figure 3.1 depicts the process in a simplified form (which does not indicate that some steps may occur more than once, in different sequences, or even in parallel). Step 3, which involves the use of expert groups, warrants more comment. Before publishing topics for comment in the Federal Register in September 1993, the Forum organized several multidisciplinary groups of practitioners involved in the care of cardiovascular, infectious disease/immunology, gastrointestinal, musculoskeletal, or neurological conditions or in prenatal care. These groups nominated topics by mail and then met to discuss and agree upon rankings. Topic nominations were also solicited from other sources, including other government agencies. The specific procedures used to determine which nominated topics would be included in the 1993 notice were not described.
OHTA's Procedures for Priority Setting

In April 1994, AHCPR published a notice in the Federal Register asking for public comments on a proposed priority-setting process for OHTA. The notice explained that the approach was based on the model process recommended in a 1992 IOM study, the provisions of the 1992 reauthorization of AHCPR, and the conclusions from a meeting held to solicit comments on the model process.

Activity (Responsible party):
1. Select priority-setting criteria. (Congress, AHCPR Forum)
2. Solicit nominations of topics for guidelines development. (AHCPR Forum)
3. Use expert practitioner groups to nominate and rank additional topics in selected condition areas. (AHCPR Forum, private sector)
4. Screen nominations against priority-setting criteria, taking resources and existing guidelines into account. (AHCPR Forum)
5. Solicit comments on possible topics in the Federal Register. (AHCPR Forum)
6. Review comments and propose list of priority topics. (AHCPR Forum)
7. Designate final topics. (AHCPR Administrator)

FIGURE 3.1 Process for setting priorities for guideline development, Office of the Forum for Quality and Effectiveness in Health Care. SOURCE: Adapted from AHCPR, 1993. Format adapted from IOM, 1992b.

OHTA would use the proposed approach to set priorities until comments had been evaluated. Although the notice did not describe planned procedures in extensive detail, the basic elements depicted in Figure 3.2 generally follow those recommended by IOM. The third step, reducing a larger number of topics to a smaller group for more focused consideration, is not explicit in the April notice but is inferred from the rest of the text. The differences primarily involve the conceptualization

and weighting of selection criteria, issues already discussed in Chapter 2. The April notice says that "special consideration may be given" to requests from other federal agencies, but it does not explicitly describe how these requests will be compared to those from other sources. Because the OHTA process had not been implemented before the committee completed its work, it was not possible to assess how implementation matched the approaches suggested in the 1992 IOM study.

Comparing Procedures

Aside from the criteria used for topic selection, procedures for setting priorities may differ along several dimensions (Eddy, 1989; IOM, 1991, 1992b). These differences include the degree to which a process is ad hoc versus ongoing; is predominately reactive rather than active in generating potential topics; relies largely on implicit rather than explicit procedures and criteria; is primarily internal or provides for external participation at various stages; provides for relatively broad or narrow participation in the process; or employs more data-intensive and quantitative or more subjective strategies for assessing or ranking alternative topics.

The procedures used by the Forum and OHTA are similar in many respects. Clearly, both the Forum and OHTA are engaged in priority setting on an ongoing, periodic basis. They are not primarily reacting on an ad hoc basis to a crisis or controversy. Both organizations now have fairly explicit procedures and criteria for priority setting. The Forum has used and the OHTA plans to use the Federal Register to solicit external nominations of topics and comments on proposed candidates for topics. Thus, both encourage nongovernmental parties to participate in nominating topics, although only organized interest groups with a stake in the process are likely to become involved.
The major difference in the processes is that OHTA, consistent with IOM recommendations, has proposed a more data-intensive and quantitative analytic strategy than that currently defined by the Forum. OHTA has assigned numerical weights to criteria, and it has accepted the use of a formal model; the Forum has done neither. However, this formal model was recommended as a contribution to the decisionmaking process and not as the final arbiter. Another difference between the two organizations is that OHTA has been largely reactive, responding to requests for assessments from the Health Care

Financing Administration (HCFA). Although it has, at least in principle, been given a more active mission since 1992, it is not evident that it will be provided the resources to move beyond its traditional role. The Forum has been mandated by Congress to develop guidelines in certain areas, but it has considerable leeway for active creation of its topic list.

Activity (Responsible party):
1. Select priority-setting criteria and assign weight to each. (Congress, panel)
2. Solicit nominations of topics for technology assessments. (Staff, responding organizations)
3. Reduce a large list of nominees to those on which to obtain the data set needed for priority ranking. (Panels)
4. Obtain data sets for priority ranking. (Staff)
5. For each topic, assign a score for each attribute. (Staff, model)
6. Calculate priority score for each topic and rank topics in order of priority. (Staff, model)
7. Designate final topics. (Administrator)

FIGURE 3.2 Proposed process for setting priorities for technology assessment, Office of Health Technology Assessment, 1994. SOURCE: Adapted from AHCPR, 1994. Format adapted from IOM, 1992b.
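Steps 5 and 6 of the OHTA process in Figure 3.2, in which each topic receives a score on each attribute and the weighted scores are summed and ranked, amount to a simple linear scoring model. The sketch below is a hypothetical illustration only: the criteria, weights, and scores are invented for the example and are not AHCPR's actual values.

```python
# Illustrative sketch of an OHTA-style weighted priority score
# (Figure 3.2, steps 5-6). The criteria, weights, and scores below
# are hypothetical examples, not AHCPR's actual values.

# Step 1: priority-setting criteria and their assigned weights (hypothetical).
WEIGHTS = {"burden_of_illness": 0.4, "cost": 0.3, "practice_variation": 0.3}

# Step 5: each candidate topic is scored on each attribute (hypothetical).
topics = {
    "acute pain":           {"burden_of_illness": 8, "cost": 6, "practice_variation": 7},
    "pressure ulcers":      {"burden_of_illness": 6, "cost": 5, "practice_variation": 8},
    "urinary incontinence": {"burden_of_illness": 5, "cost": 4, "practice_variation": 6},
}

def priority_score(scores: dict) -> float:
    """Step 6: weighted sum of attribute scores for one topic."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

# Rank topics in descending order of priority score.
ranked = sorted(topics, key=lambda t: priority_score(topics[t]), reverse=True)
for topic in ranked:
    print(f"{topic}: {priority_score(topics[topic]):.2f}")
```

Documenting the weights and scores in this explicit form is what allows analysts, as discussed later in this chapter, to test how the final ranking changes when individual elements are varied.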

Other Organizations

Among the organizations discussed in Chapter 1, most use relatively circumscribed procedures for nominating topics; that is, the range of those involved is limited to "members" or "customers" of the organization. The decisionmaking processes vary in the degree to which they involve explicit techniques for securing judgments or information and making final topic selections. Even if the techniques are explicit, the way in which individual criteria are actually applied is usually subjective; that is, expert judgments are employed rather than scores derived from empirical data.

TOWARD COMMON PROCEDURES?

Is it reasonable for priority-setting procedures to differ for the Forum and OHTA? The committee concluded that a move toward common procedures may reasonably wait. After studying the OHTA proposal and the model process set forth in the 1992 IOM report on technology assessment, this committee concluded that the model provided a clear and generally sound framework for an organization willing and able to commit resources for the required data collection and analytic steps. OHTA is, in essence, undertaking a practical test of the procedures, which AHCPR will track and evaluate. Based on its evaluation, the agency can identify strengths and limitations and, as appropriate, identify ways of improving the process. At that point, the agency will be able to consider whether and to what extent the Forum should adopt the OHTA process. To assist in an assessment of OHTA experience (assuming current plans go forward), the agency might consider convening a workshop that would include members of the IOM priority-setting committees and representatives of organizations that also have explicit priority-setting processes. The committee's reservations about the proposed OHTA approach relate primarily to the criteria for topic selection and their application (see Chapter 2).
In implementing and evaluating the OHTA process, agency officials should be cognizant of the arguments for and against the use of formal models. The case for a formal model is that it makes priorities clear and open to debate, focuses collection and assessment of relevant information, reduces the chance of a single issue or criterion dominating choices, and helps insulate decisionmaking from political pressure. The case against such models is that in attempting to model an expert decisionmaking process, one may significantly oversimplify and distort the nature of that process. This is especially true when experts use a large number of interrelated decision criteria. In addition, the construction of the model may reflect particular biases and idiosyncrasies of the developers. Likewise, the data on which the model operates can be manipulated, particularly

when a complex variable (e.g., burden of illness) must be measured using readily available but incomplete information (e.g., mortality data). The more effort required for adequate data collection, the more expensive and time-consuming the process becomes. If a formal model is used as one input to the decisionmaking process (as the 1992 IOM priority-setting report recommended), then its limitations become less worrisome, but much of its benefit as a counterbalance to purely implicit and subjective processes should remain. Furthermore, if OHTA adequately documents its data, estimates, criterion scores, and other calculations, analysts can test the consequences of varying these elements. For example, Chapter 2 discussed some problems with the definition and measurement of topic selection criteria and suggested alternatives.

Until the proposed OHTA procedures are tested and evaluated, the committee believes the Forum should continue to refine its current procedures and, thereby, develop a better basis for comparison with OHTA. The Forum's procedures for topic selection have become considerably more explicit, active, open, and systematic. This is not to say that they cannot be improved. The committee's specific recommendations are discussed next.

PROCESS MODIFICATIONS: IMPROVING THE USE OF EXPERT JUDGMENT

Whether or not future experience warrants the eventual transfer of some or all of the OHTA process to the Forum, the committee recommended that the Forum make some modifications in its current procedures. These recommendations focus on methods for consensus development and obtaining expert judgment.

Systematizing Consensus Development

Many organizations employ some kind of consensus development process for using expert judgments in such areas as interpreting scientific evidence, projecting future consequences of specific actions, and ranking issues or problems in importance (as a prelude to decisionmaking).
A number of strategies for improving these processes have been proposed (see, for example, IOM, 1985, 1990g,d). The clearest use of consensus in the Forum's priority-setting process occurs with the practitioner groups that nominate and rank subsets of topics for guidelines in selected clinical areas. From the committee's review of Forum materials, it appears that the Forum employed a number of standard steps for consensus development. For some if not all areas, staff identified the broad

clinical areas of interest (e.g., prenatal care), selected meeting participants, provided participants with advance information on the area, and used a questionnaire to obtain participant rankings of possible topics. Staff then compiled overall rankings based on individual responses and distributed the initial rankings to participants before the meeting. According to staff, the meetings were not governed by formal protocols, and one problem with the meetings was the tendency of a few outspoken individuals to dominate discussion.

The committee believes that the consensus development procedures now used by the Forum for priority setting could benefit from several largely incremental improvements. These improvements include the use of more formal consensus development tools before and during meetings convened to discuss priorities and the use of more clearly structured questionnaires for obtaining expert judgments before and during group meetings.

The committee did not explicitly estimate the financial or time costs associated with these steps. Some involve mostly one-time costs for developing prototypes for a simple procedure manual. These prototypes could be modified fairly easily for different specific applications. Other steps involve the replacement of less systematic procedures with procedures that are more systematic but probably not more costly. For example, once the agency establishes "procedural guidelines" for the development of consensus during meetings, there appears to be no reason why they should vary much depending on the general clinical focus (e.g., prenatal care or musculoskeletal problems) of an advisory group. Nor should they cost more to apply than less formal procedures.

Premeeting Processes. Although the Forum has used modified versions of the Delphi method for developing consensus in its premeeting processes, the specifics applied are not conveniently documented.
They appear to have been both variable and evolving. Described generally, the Delphi technique is "an interactive survey process that uses controlled feedback [of interim results] to isolated, anonymous (to each other) participants" (IOM, 1985). It was originally developed by the RAND Corporation in its work on national defense needs (Dalkey, 1969). The process, which is based on mailed (and now faxed or electronically mailed) communication, normally obtains anonymous opinions from members of an expert group by formal questionnaire or individual interview; provides participants an opportunity to revise their opinions on the basis of one or more rounds of summarized feedback on initial responses; and devises an overall group response (e.g., topic rankings) by aggregating individual opinions from the final iteration of the survey.
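The aggregation step of a Delphi round, in which anonymous individual rankings are combined into a group ranking that can be fed back to participants, can be sketched as follows. This is only one of several possible aggregation rules (here, mean rank position); the participants' responses and topics are hypothetical.

```python
# Sketch of one Delphi aggregation step: anonymous individual topic
# rankings are combined into a group ranking by mean rank position
# (lower mean position = higher priority). The resulting group ranking
# would be fed back to participants as input to the next survey round.
# Topics and responses are hypothetical.

from statistics import mean

# Each participant submits an ordered list, most to least important.
# All lists are assumed to cover the same set of candidate topics.
responses = [
    ["prenatal care", "low back pain", "otitis media"],
    ["low back pain", "prenatal care", "otitis media"],
    ["prenatal care", "otitis media", "low back pain"],
]

def group_ranking(rankings: list) -> list:
    """Aggregate individual rankings by mean rank position."""
    topics = rankings[0]
    avg_rank = {t: mean(r.index(t) for r in rankings) for t in topics}
    return sorted(topics, key=avg_rank.get)

print(group_ranking(responses))
```

Mean rank is used here for simplicity; a real application might instead report medians or the full distribution of positions so that participants can see the extent of disagreement, which is the point of the controlled feedback.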

The advantages cited for the Delphi technique are several (Linstone and Turoff, 1975; Fink et al., 1984; Olsen, 1982; IOM, 1985). First, the process has a relatively low cost compared to the cost of bringing geographically dispersed participants together for a meeting. Second, participants are less constrained in the time they have to complete questionnaires than they would be in the typical formal meeting. Third, the anonymity of the process means that professional status and personality do not come into play as they do in face-to-face meetings. Fourth, the process allows for a more refined process of judgment than does a one-time mailed questionnaire.

The major limitation of the Delphi method is that it does not provide as full an opportunity for clarification of positions and consideration of conflicting perspectives as does a process in which participants actually meet. Participants in an impersonal, written process may also not engage in the task of making judgments with the same degree of intensity or commitment as those involved in a meeting. The process also tends to take a relatively long time from start to finish compared to the time required to complete a single questionnaire.

The committee recommended that the Forum develop model Delphi or Delphi-like procedures for obtaining expert judgments or rankings by mail, fax, or electronic mail. This model (or models) could be drafted by staff from the Forum or elsewhere in AHCPR and reviewed by recognized experts in the field. Other parts of the agency may have models used for other purposes that could be easily adapted by Forum staff, or an initial draft could be put together by an expert consultant at a relatively low cost. The objectives are to improve the use of consensus processes, standardize procedures, simplify work for staff, and reduce the discontinuities caused by staff turnover.

Meeting Procedures.
If the Forum continues to convene expert groups as part of its priority-setting process, the committee recommended that it experiment with more formal meeting procedures to arrive at group judgments. The objectives are to help participants clarify their thinking and their rationales, to limit the opportunity for a few participants to dominate the process, and to standardize procedures. The committee suggested an adapted version of the nominal group process, which has been used to guide meetings in which rankings or other expert judgments are sought (Delbecq, Van de Ven, and Gustafson, 1975). The approach sketched below assumes a prior Delphi process that would generate a ranked list of potential topics. Depending on its size, the list could be winnowed by consolidating closely similar topics, taking advantage of natural breaks in the rankings, or arbitrarily selecting the top quarter or some other fraction of the topics. Meeting participants would be presented with a winnowed list. Starting with this list, the process would

1. Obtain written rankings from participants silently and in writing (laptop computers and supporting software make the process easier) OR use the rankings from the Delphi process;
2. Exhibit the initial individual rankings in some fashion, for example, by posting them (if anonymity is desired at this stage) or eliciting them through a round robin process;
3. Allow clarification of rankings or rationales through a structured discussion process intended to avoid dominance of the discussion by a few participants;
4. Ask participants to vote again on topics, for example, rank ordering their seven highest-priority topics;
5. Exhibit the rankings and ask participants to discuss the results, for example, any surprises in the rankings;
6. Repeat the preceding steps one or more times (optional); and
7. Conclude with a final request for rankings or responses from individual participants.

The larger the number of participants, the more time-consuming and awkward the process is likely to become. One option is to use a mail-based process for a larger group and the nominal group process for a meeting of a subset of respondents (identified in advance of the Delphi exercise, not on the basis of it).

More Systematic Use of Questionnaires

Whether designed as part of a Delphi process or for some other purpose, questionnaire construction is the subject of many methodology texts, and consulting firms can provide assistance in this complex enterprise. The committee restricted its comments on the use of surveys in priority setting to four issues: (1) clarity, (2) use of questionnaires to estimate costs and outcomes, (3) standardization, and (4) expanded use of written questionnaires.

1. Clarity. Questions should be specific, explicit, and consistent with standard methods for questionnaire construction.
Although questionnaires used by the Forum in the past have the advantage of being brief, they are not, in the committee's view, easily interpretable. (In a round robin process, the meeting chair goes around the meeting table asking one person to identify, for example, his or her highest-priority topic and then asking in sequence the remaining participants to identify their highest-priority topics, excluding topics already mentioned.)

2. Use of questionnaires to estimate costs and outcomes. The agency should consider using questionnaires to obtain expert estimates on priority-setting criteria such as costs and outcomes of care (e.g., morbidity, functional status). These estimates could substitute for or supplement data that are incomplete or of questionable validity.

3. Standardization. For the purposes of priority setting, the committee expects that once a model questionnaire is developed it can be fairly easily adapted and applied to obtain judgments and rankings for many purposes. The model questionnaire should become part of a procedure manual to minimize staff work on repetitious tasks.

4. Expanded use of written questionnaires. If the agency develops more sophisticated but readily analyzed questionnaires, then it might consider using questionnaires only. Unless a face-to-face meeting promises particular additional value, the agency could stretch its limited resources by avoiding the significant expenses associated with meetings.

Appendix F includes two questionnaires for the Forum's consideration. The first is based on one used in advance of a past topic meeting. Its advantage is that it is relatively short and does not require complex subjective estimates. Its disadvantage is that it is not very informative. The second questionnaire, which is more specific, was developed by David Eddy and is being tested at Kaiser Permanente of Southern California. Respondents are asked a series of specific questions to identify a target clinical condition or problem, the relevant patient group and its size, the patient care options to be assessed, the cost of providing each option, and the most important health outcomes (benefits or harms to individual patients) associated with the problem.
Respondents are also asked to estimate the probability that a formal analysis of the available evidence would indicate a preferred intervention and then to estimate for each outcome the consequence (e.g., probability of death) of using that intervention compared with that from current practice or some other alternative. (Analysts calculate the differences in estimated outcomes.) Similar questions are asked about expected costs and the proportion of candidates for care that would likely use the preferred intervention rather than the alternative.

Because the suggested questionnaires rely primarily on closed-ended questions, responses could be easily analyzed using available commercial software. The committee recommended a trial of one of these questionnaires or a similar one so that the Forum can evaluate the cost and practicality of administering it and the utility of its results.
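The analysts' calculation just described, taking a respondent's per-outcome estimates for the preferred intervention and for current practice and computing the differences, can be sketched as follows. All figures are hypothetical illustrations invented for the example, not data from the Kaiser Permanente test.

```python
# Sketch of the analysts' step described above: given one respondent's
# estimated probability of each health outcome under the preferred
# intervention and under current practice, compute the per-outcome
# differences. All outcome names and probabilities are hypothetical.

outcomes = ["death", "major disability", "full recovery"]

# Respondent's estimates: probability of each outcome per patient.
preferred = {"death": 0.02, "major disability": 0.10, "full recovery": 0.88}
current   = {"death": 0.05, "major disability": 0.15, "full recovery": 0.80}

# Difference (preferred minus current); a negative value means the
# preferred intervention reduces that outcome's probability.
differences = {o: round(preferred[o] - current[o], 4) for o in outcomes}
print(differences)
```

Because the questionnaire's closed-ended responses reduce to numbers like these, tabulating them across respondents and topics is a routine task for any statistical or spreadsheet package, which is the point of the committee's observation about commercial software.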

DEVELOPMENT OF A PROCEDURE MANUAL

The Forum should develop a procedure manual or handbook for its priority-setting activities. The purpose of such a manual would be to simplify and regularize the priority-setting process and to allow continuing and new staff to work more efficiently. The procedure manual for priority setting should cover basic activities such as consensus development methods, questionnaire construction and analysis, meeting organization, and criterion measurement and data collection methods. If examples of actual memos, questionnaires, and similar items are used whenever possible, the preparation of a manual should not make heavy demands on staff time. In addition to assisting the Forum, a procedure manual could be a useful aid for other public and private organizations. To the extent that guidelines development becomes both more extensive and decentralized, examples of tested procedures would be a valuable by-product of the Forum's work.