4 RECOMMENDATIONS FOR A PRIORITY-SETTING PROCESS
Pages 57-102

From page 57...
... Second, the process will produce a list of conditions and technologies ranked in order of their importance for assessment. Third, the process provides for broad public participation in assembling a list of candidate conditions but then winnows the list to identify important topics, using data when they are available and consensus judgments when data are unavailable.
From page 58...
... The proposed process includes a quantitative model for calculating a priority score for each candidate topic. In this chapter, the term process is used for the entire priority-setting mechanism; the term model is used for the quantitative portion of that process that combines criterion scores to produce a priority score.
From page 59...
... The final "index" of importance of a topic is its priority score, which is the sum of the seven criterion scores (Si), each multiplied by its criterion weight. This priority score or index is calculated as shown in Equation (1)
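Equation (1) itself is cut off in this excerpt. A plausible reconstruction, consistent with the weighted combination described above and with the logarithmic transformation discussed in Appendix 4.2, is the following (this rendering is an assumption, not the report's verbatim equation):

```latex
% Hypothetical reconstruction of Equation (1): W_i is the weight of
% criterion i and S_i is its criterion score for a candidate topic.
\text{Priority score} \;=\; \sum_{i=1}^{7} W_i \,\ln S_i
\;=\; \ln\!\left(\prod_{i=1}^{7} S_i^{\,W_i}\right)
```

On this reading, summing weighted logarithms of the criterion scores is equivalent to the multiplicative model discussed later in the chapter.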
From page 60...
... Criteria can be both objective and subjective. The panel should also assign to each criterion a weight that reflects its relative importance. The IOM committee proposes and later defines seven criteria: three objective criteria (prevalence, cost, and variation in rates of use) and four subjective criteria (burden of illness, potential of the results of the assessment to change clinical outcomes, potential of the results of the assessment to change costs, and potential of the results of the assessment to inform ethical, legal, and social (ELS)
From page 61...
... [Criterion definitions recovered from a flattened table; the first entry is truncated in the source:]
· Burden of illness: ...between a patient who has the condition and receives conventional treatment and the QALE of a person of the same age who does not have the condition
· Cost: the total direct and induced cost of conventional management per person with the clinical condition
· Variation in rates of use: the coefficient of variation (standard deviation divided by the mean)
· Potential of the results of the assessment to change health outcomes: the expected effect of the results of the assessment on the outcome of illness for patients with the illness
· Potential of the results of the assessment to change costs: the expected effect of the results of the assessment on the cost of illness for patients with the illness
· Potential of the assessment to inform ethical, legal, and social issues: the probability that an assessment comparing two or more technologies will help to inform important ethical, legal, or social issues
O = objective criterion; S = subjective criterion.
From page 62...
... One or more expert panels, which might be subpanels of the broadly representative panel that sets criterion weights, would determine criterion scores for objective criteria, using the data that have been assembled by TA program staff for each condition. Assigning scores for objective criteria will require expertise in epidemiology, clinical medicine, health economics, and statistics when data are missing, incomplete, or conflicting.
From page 63...
... In the second part of this step, TA program staff list the candidate technologies and conditions in the order of their priority scores. According to the model, higher scores will be associated with conditions and technologies of higher priority.
From page 64...
... To complete the priority-setting process, TA program staff would provide the advisory council with definitions of the criteria, a list of the criterion weights, the criterion scores for each candidate topic, and the priority list itself. After review and discussion of this material, the council might take one of several actions: recommend adopting the priority list as a whole; recommend adopting it in part and adjusting the priority rankings in various ways; or reject it outright and request a complete revision for re-review.
From page 65...
... The criteria proposed by the committee address these interests. Weighting Criteria Various approaches can be used to assign criterion weights.
From page 66...
... Finally, TA program staff would be alert to events that affect the characteristics of a technology, clinical condition, or current practice, including the potential to modify patient outcomes. Events that would put a technology or condition on a list of candidates for assessment are · a recent rapid and unexplained change in utilization of a technology; · an issue of compelling public interest; · an issue that is likely to affect health policy decisions; · a topic that has created considerable controversy; · new scientific information about a new application of an existing technology or the development of a new technology for a particular condition or practice; and · a "forcing event," such as a major legal challenge, or any other event that might raise any of a topic's criterion scores.
From page 67...
... TA program staff would use panels to provide subjective rankings on all or a subset of candidate technologies. Only the highest ranking topics would remain for the full ranking process.
From page 68...
... Specifying Alternative Technologies and Clinical Conditions After winnowing the initial list of candidate topics, TA program staff would specify all relevant alternative approaches for care of a given clinical
From page 69...
... Staff Summaries of Clinical Conditions As a first step in assigning priority scores, OHTA staff would conduct a literature search for each candidate condition and technology to summarize for the panels the data they will need to assign a score to each priority-setting criterion. The panels would use the summaries to make subjective judgments; they would use the objective data (e.g., prevalence, costs, variation in practice)
From page 70...
... A formal consensus process provides a good way to perform this estimation. The panel engaged to assign subjective criterion scores would be constituted differently from the panels for creating the "objective criterion scores." The panel should be broadly representative and include a range of health professions as well as users of health care. Each subjective criterion score can be represented by a rating on a scale of 1 to 5 (the length of the scale is arbitrary)
From page 71...
... The first three criteria form a set that estimates the aggregate social burden posed by a candidate clinical condition. The first criterion considers the general population afflicted with the condition, that is, its prevalence.
From page 72...
... In the other, the time horizon is the length of the illness.

Table 4.3 Consistent Units for the Criteria, by Two Time Horizons (One Year and Lifetime)
· Prevalence: one year, prevalence; lifetime, incidence
· Cost: one year, annual cost; lifetime, lifetime cost(a)
· Variations in rates of use: one year, coefficient of variation; lifetime, coefficient of variation
· Burden of illness: one year, change in quality-adjusted life days in the next year as a result of illness; lifetime, change in quality-adjusted life expectancy due to illness(a)
· Potential of the results of an assessment to change health outcomes: one year, expected change in outcomes in the next year as a result of assessment; lifetime, expected change in outcomes over the average patient's lifetime owing to assessments
· Potential of the results of an assessment to change costs: one year, expected change in costs in the next year as a result of assessment; lifetime, expected change in costs over the average patient's lifetime as a result of assessments
· Potential of an assessment to inform ethical, legal, and social issues: one year, expected change in ELS(b) issues in the next year; lifetime, expected change in ELS issues in the next year
(a) Requires a consistent discount rate.
(b) ELS = ethical, legal, and social.
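Footnote (a) indicates that lifetime figures must be discounted at a consistent rate. A minimal, hypothetical illustration of that step (the rate, horizon, and cost stream below are invented; the report does not prescribe a formula):

```python
def present_value(annual_costs, discount_rate=0.03):
    """Discount a stream of expected annual costs to present value.

    annual_costs: expected cost for each future year, year 0 first
    discount_rate: assumed annual discount rate (illustrative only)
    """
    return sum(cost / (1 + discount_rate) ** year
               for year, cost in enumerate(annual_costs))

# Example: $2,000 of expected management costs per year over 10 years
lifetime_cost = present_value([2000] * 10)
print(round(lifetime_cost, 2))  # about 17,572 at a 3 percent rate
```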
From page 73...
... This definition applies to assessments of a clinical condition and to assessments of a technology. Although some data on mortality and morbidity are available, at present these data are seldom obtainable at the level of specificity needed; consequently, the panels will have to assign criterion scores by a subjective estimate of the burden of illness of one
From page 74...
... [Figure 4.2 labels, partially recoverable: QALE for a person without diabetes; A, person without diabetes; B, person with untreated diabetes; C, person with conventional diabetes treatment; D, person with new, beneficial diabetes treatment.] Figure 4.2 Hypothetical example of burden of illness for a person without Type II diabetes and for individuals with untreated diabetes, with conventionally treated diabetes, and with new, beneficial treatment for diabetes. Given a specific QALE for a person without diabetes, the burden of illness is seen here as the difference in quality-adjusted life expectancy for a person with diabetes treated conventionally (not an untreated diabetic)
From page 75...
... technology's application to relevant clinical conditions.
From page 76...
... The data used to develop scores are mortality and morbidity data and health status measures, when available. Data on the loss of quality-adjusted life expectancy from all medical conditions are not sufficient to estimate burden of illness as defined by the IOM committee for all candidate topics; as a result, the panels must use surrogate measures.
From page 77...
... Criterion 3: Cost Definition: Cost is the total direct and induced cost of conventional management per person with the clinical condition. This definition applies to assessments of a clinical condition and to assessments of a technology.
From page 78...
... These "indirect" costs and burdens are likely to occur most often for contagious diseases or for medical conditions that contribute to the occurrence of "accidents," interpersonal violence, and so forth. Costs associated with the suffering of victims of crime, assault, and motor vehicle accidents attributable to the patient are important in assessing the societal importance of a clinical condition.
From page 79...
... For this criterion, TA program staff would assemble data on variations in per-capita use rates across different venues of care. Comparisons of per-capita use rates may be among small geographic areas, among nations, or even among different methods of paying for health care.
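As a brief, hypothetical illustration of the coefficient of variation used for this criterion (the use rates below are invented), the statistic is simply the standard deviation of per-capita use rates divided by their mean:

```python
from statistics import mean, pstdev

# Hypothetical per-capita use rates (procedures per 1,000 persons)
# observed across several small geographic areas.
use_rates = [4.2, 7.9, 3.1, 6.5, 5.3]

coefficient_of_variation = pstdev(use_rates) / mean(use_rates)
print(round(coefficient_of_variation, 2))  # about 0.31
```

A higher coefficient of variation signals greater unexplained variation in practice and, under this criterion, a stronger case for assessment.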
From page 80...
... TA program staff would then count the votes and identify the panel's choice of the conditions or technologies for which the results of the assessments would be most likely and least likely to affect patient outcomes. Subsequently, individual panel members would assign intermediate scale values to the other technologies, and program staff would calculate the mean scale value of each candidate topic.
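A minimal sketch of this aggregation step, assuming hypothetical panelist ratings already collected on the anchored 1-to-5 scale (the topic names and numbers are invented for illustration):

```python
from statistics import mean

# Hypothetical ratings on the 1-to-5 scale for criterion 5, where 1 and 5
# anchor the topics judged least and most likely to have their patient
# outcomes affected by an assessment. Each list holds one rating per panelist.
panel_ratings = {
    "hip replacement": [4, 5, 4, 3, 4],
    "routine chest x-ray": [1, 2, 1, 1, 2],
    "carotid endarterectomy": [3, 4, 3, 3, 2],
}

# Mean scale value per candidate topic, as computed by program staff.
criterion_scores = {topic: mean(r) for topic, r in panel_ratings.items()}
for topic, score in sorted(criterion_scores.items(), key=lambda kv: -kv[1]):
    print(f"{topic}: {score:.2f}")
```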
From page 81...
... Instruction. Criterion scores are assigned using the method described for criterion 5.
From page 82...
... To assign a criterion score, each panel member would consider the ELS issues for each candidate condition or technology and determine a score as follows, depending on his or her response to the issues and questions de scribed above: · a score of 1 corresponds to "no" (i.e., no important ELS issues are likely to be resolved) ; · a score of 5 corresponds to an intense "yes" (i.e., important ELS issues are likely to be resolved)
From page 83...
... Once criterion scores and weights are assembled, the priority score for each condition or technology can be computed by combining the objective and subjective criterion scores. Priority scores for each condition or technology are derived from the data for the objective criteria and the scale scores for the subjective ratings, each adjusted by the weight given to each criterion.
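A minimal computational sketch of this combination step, assuming the weighted-logarithm form described in Appendix 4.2 (all weights, scores, and units below are invented):

```python
import math

# Hypothetical criterion weights reflecting relative importance.
weights = {
    "prevalence": 1.0,
    "burden of illness": 2.0,
    "cost": 1.0,
    "variation in use": 0.5,
    "change in outcomes": 2.0,
    "change in costs": 1.0,
    "ELS issues": 0.5,
}

# Hypothetical criterion scores for one candidate topic: objective criteria
# carry data-derived values; subjective criteria carry 1-to-5 panel ratings.
scores = {
    "prevalence": 2500,        # cases per 100,000 persons
    "burden of illness": 4,    # panel rating, 1-5
    "cost": 12000,             # dollars per person per year
    "variation in use": 0.4,   # coefficient of variation
    "change in outcomes": 5,   # panel rating, 1-5
    "change in costs": 3,      # panel rating, 1-5
    "ELS issues": 2,           # panel rating, 1-5
}

# Weighted sum of log scores; equivalent to the product of the scores
# raised to their weights, i.e., a multiplicative model.
priority_score = sum(w * math.log(scores[c]) for c, w in weights.items())
print(round(priority_score, 2))
```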
From page 85...
... The committee adopted a multiplicative model for priority setting because such models exhibit a number of desirable characteristics in comparison with additive models. In multiplicative models, both the rank order and relative size of the priority scores of various medical interventions are preserved regardless of the scale of measurement of the criterion scores.
From page 86...
... In sum, the model yields a constant relative rank ordering regardless of the units in which the criterion scores are expressed. The same is true for the magnitude of the priority score for a condition or technology relative to all others.
From page 87...
... Indeed, the committee urges that all candidates for assessment be assigned priority scores, even when the staff or panels realize at an early stage in the priority-setting process that the data for an assessment are not available, because a high priority score for a candidate could help to shape the nation's research agenda. This discussion is continued in Chapter 5.
From page 88...
... For example, it could group the priority scores into categories, such as "most important to assess," "very important to assess," and "low priority for assessment." Within these categories, appropriate designations can indicate items that were borderline in terms of the group into which they fell. This form of categorization according to priority score would allow a "softening" of the numerical priority score to prevent the process from being seen as more precise than it actually is.
From page 89...
... For instance, during a first-time

Box 4.1 Events That Might Trigger Reassessment
· A change in the incidence of a disorder (or its prevalence, if the condition is chronic) or in the degree of infectiousness of a biological agent
· A change in professional knowledge or clinical practice, including a recent rapid change in utilization and increased variability in the use of a given technology
· Publication of new information about a technology that suggests a change in its performance or cost
· The introduction of a new competing technology
· A proposal to expand the use of the treatment to populations not included in the original assessment (e.g., expanding breast cancer screening to women aged 40 to 49 when earlier work focused only on women aged 50 and older)
From page 90...
... Ongoing Tracking of Events Related to Previously Assessed Topics Stated Time of Review for First-time Assessments. Both at the time of an initial and of a subsequent assessment, OHTA should explicitly state whether a reassessment is likely to be needed and when it expects that circumstance to occur.
From page 91...
... OHTA currently provides information about its assessments in individual Health Technology Assessment Reports. To document events that might apply to previously assessed topics, the committee strongly recommends that OHTA create a separate catalog of its previous assessments, keep it current, and cross-reference it by conditions and technologies.
From page 92...
... Monitoring the Published Literature on Previously Assessed Topics. The agency should establish a system to monitor the published literature on previously assessed topics, given that up-to-date knowledge of a topic is the foundation for reassessment.
From page 93...
... [Figure residue from a flowchart of the priority-setting process; recoverable labels: Winnow the list of nominations to a workable size; Reassessments; Assign criterion scores, and calculate priority scores.]
From page 94...
... assigning criterion scores to each topic, using objective data for some criteria and a rating scale anchored by low- and high-priority topics for subjective criteria; (6) calcu
From page 95...
... The other four (burden of illness and the likelihood that the results of the assessment will affect health outcomes, costs, and ethical, legal, and social issues) are subjective; they are scored according to ratings on a scale from 1 to 5. The chapter also addressed special aspects of priority setting that apply only to reassessment of previously assessed technologies; these include recognizing events that trigger reassessment (e.g., change in the nature of the condition, in knowledge, in clinical practice)
From page 96...
... TA program staff would then add the budget allocations across ballots. For example, an organization could allocate $4 each to 250 technologies and conditions, $250 each to only 4 technologies, or the full $1,000 to a single technology.
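A minimal sketch of this tallying step, assuming each organization returns a ballot allocating a fixed hypothetical budget (the $1,000 implied by the example above) across candidate topics; all topic names and amounts are invented:

```python
from collections import Counter

# Hypothetical ballots: each organization allocates its budget (in dollars)
# across the candidate technologies and conditions it cares about most.
ballots = [
    {"PSA screening": 600, "lung transplantation": 400},
    {"lung transplantation": 1000},
    {"PSA screening": 250, "hip replacement": 250,
     "lung transplantation": 250, "MRI for low back pain": 250},
]

# Sum the allocations across ballots; larger totals indicate broader support
# and would place a topic higher on the winnowed candidate list.
totals = Counter()
for ballot in ballots:
    totals.update(ballot)

for topic, total in totals.most_common():
    print(f"{topic}: ${total}")
```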
From page 97...
... In the preliminary ranking, one could select the criterion to be used in the initial ranking according to not only the weight assigned in the process but also the costs of data gathering. For example, if the highest-weighted criterion had very high data-gathering costs but the next-highest-weighted criterion had much lower data costs associated with it, one could conduct the initial ranking using the second-highest-weighted criterion instead of the highest-weighted criterion.
From page 98...
... To minimize costs, these activities could be conducted using mail ballots, or (a modern variant) electronic mail.
From page 99...
... The second, ranking on the basis of a subset of the eventual criteria, best preserves the intent of the final priority-setting process but is more data intensive and thus potentially more costly. Organizations engaged in priority setting may also find it useful to use a winnowing process that quite deliberately does not use the same approach as the final process.
From page 100...
... For these reasons, the committee advises the choice of a winnowing technique that reflects the goals of simplicity, avoidance of control by special interests, and low cost.6 APPENDIX 4.2: METHODOLOGIC ISSUES Two key methodologic issues for deriving a formula for the technology assessment priority score are (1) the scale on which each of the criterion scores is expressed and (2)
From page 101...
... The problem of the interaction of the weights and the scale of measurement of the values that determine a criterion score can be avoided by a simple mathematical modification. By using relative importance to determine the criterion weights, the logarithmic transformation provides the same results independent of the scale by which each of the component "scales" is measured.
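A short derivation of this invariance claim, using the notation of the reconstructed Equation (1) above (this sketch is an illustration, not the report's own algebra): rescaling a single criterion by a constant factor, as happens when its units change, shifts every candidate's priority score by the same amount, so the ordering cannot change.

```latex
% If criterion m is rescaled by a factor k for every candidate,
% S_m' = k S_m, then each candidate's weighted-log priority score becomes
\sum_{i=1}^{7} W_i \ln S_i' \;=\; \sum_{i=1}^{7} W_i \ln S_i \;+\; W_m \ln k ,
% an identical additive shift for all candidates. Equivalently, in the
% product form \prod_i S_i^{W_i}, every priority score is multiplied by the
% same factor k^{W_m}, so rank order and relative magnitudes are preserved.
```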
From page 102...
... Using logarithms is an approach that is intended to reflect relative place on a scale of importance. In producing priority scores for each candidate condition or technology, the relative ranking of each procedure will be the same, regardless of how each of the criterion scores is measured.

