5 IMPLEMENTATION ISSUES
Pages 103-114

From page 103...
... This chapter describes how to implement the priority-setting process, suggests a cycle for priority setting, and estimates the resources that would be needed to set priorities for health technology assessment and reassessment.

THE PRIORITY-SETTING CYCLE

The priority-setting cycle comprises the steps listed below, performed according to the time frames indicated.
From page 104...
... SETTING CRITERION WEIGHTS

The criterion weights mentioned above in the priority-setting cycle and examined in Chapter 4 are intended to represent the preferences of society. The committee envisions a broadly constituted panel that would set criterion weights no more often than every 5 years.
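As one illustration of how such a panel's judgments might be pooled, the sketch below averages hypothetical 100-point allocations from panelists and normalizes them into criterion weights. The criterion names, the point-allocation method, and the numbers are assumptions made here for illustration; they are not the committee's elicitation procedure or its weights.

```python
# Illustrative sketch only: one simple way a broadly constituted panel's
# criterion weights might be pooled. Criterion names and the 100-point
# allocation method are assumptions, not the committee's procedure.

CRITERIA = ["prevalence", "cost", "burden_of_illness",
            "variation_in_use", "expected_impact_of_assessment"]

def pool_weights(panelist_allocations):
    """Average each panelist's 100-point allocation and normalize to sum to 1."""
    totals = {c: 0.0 for c in CRITERIA}
    for allocation in panelist_allocations:
        for criterion, points in allocation.items():
            totals[criterion] += points
    grand_total = sum(totals.values())
    return {c: round(totals[c] / grand_total, 3) for c in CRITERIA}

# Two hypothetical panelists, each distributing 100 points across the criteria.
panel = [
    {"prevalence": 25, "cost": 20, "burden_of_illness": 25,
     "variation_in_use": 10, "expected_impact_of_assessment": 20},
    {"prevalence": 15, "cost": 30, "burden_of_illness": 20,
     "variation_in_use": 15, "expected_impact_of_assessment": 20},
]
print(pool_weights(panel))
```

Averaging and renormalizing keeps the published weights on a common scale regardless of how individual panelists distribute their points; other elicitation schemes could feed the same normalization step.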
From page 105...
... Technology Assessment Program Staff Requirements

The committee carefully considered the resource and staffing requirements entailed by the process described in Chapter 4 from two perspectives: the current constraints on OHTA and AHCPR, and the (idealized) goals of a credible, sound, defensible model process.
From page 106...
... During the committee's pilot test of the quantitative model, one full-time-equivalent staff person took a day to assemble the data for one condition; by that metric, over the course of a year, one staff person could probably assemble data for about 200 conditions (a rough check of this arithmetic is sketched below).

· After the panel has generated priority rankings, staff would also conduct informal surveys of other professional and assessment organizations to determine whether any of the conditions and technologies being considered for assessment by OHTA are already being evaluated.
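The throughput estimate above rests on an implicit working-year figure; the minimal check below makes that assumption explicit.

```python
# Rough check of the staffing estimate quoted in the text. The number of
# working days per year is an assumption made here, not a figure from the report.
working_days_per_year = 200   # assumed availability of one full-time-equivalent
days_per_condition = 1        # observed in the committee's pilot test

conditions_per_year = working_days_per_year // days_per_condition
print(conditions_per_year)    # about 200 conditions per staff person per year
```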
From page 107...
... Here, the assignments to subpanels might be more along "expert" lines, with groups for the subjective criteria being more "broadly representative" and those for the objective criteria being more "quantitatively expert." The latter subpanels, for instance, require individuals with quantitative reasoning skills and epidemiologic expertise to adjudicate among conflicting data and estimate prevalence, costs, illness burdens, and practice variations in cases where data are missing.
From page 108...
... It would be appropriate, however, to determine the usefulness, appropriateness, or cost-effectiveness of the results of the process, holding that adoption of the process is evidence of acceptability, feasibility, and generalizability. One can ask whether the process seems reasonable to people who are familiar with priority setting, technology assessment, or the technologies themselves.
From page 109...
... Criteria

Choosing and Changing Criteria

After extended discussion, the committee selected seven criteria by which to implement its principles of priority setting; these were described in Chapter 4. The criteria encompass the current social impact of a condition for which a technology is used, variations in use rates, and the likely changes that an assessment would engender.
From page 110...
... The committee considers these weights merely illustrative and recognizes that a given organization would probably wish to derive its own weights for priority setting.

Availability of Data to Generate Criterion Scores

The priority-setting process recommended by the committee requires the use of data in explicit ways.
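To make the mechanics concrete, the sketch below combines criterion weights and per-candidate criterion scores into a single priority score and ranks two hypothetical condition-technology pairs. It is a minimal sketch assuming a simple weighted sum; the committee's actual model, criteria, and weights are those described in Chapter 4, and every name and number here is invented for illustration.

```python
# Minimal sketch of a weighted scoring model of the kind described in Chapter 4.
# The functional form (a plain weighted sum) and all names and numbers are
# illustrative assumptions, not the committee's model or weights.

def priority_score(weights, scores):
    """Combine per-criterion scores into a single priority score."""
    assert weights.keys() == scores.keys()
    return sum(weights[c] * scores[c] for c in weights)

weights = {"prevalence": 0.20, "cost": 0.20, "burden": 0.25,
           "variation_in_use": 0.15, "expected_impact_of_assessment": 0.20}

candidates = {
    "condition_A_technology_X": {"prevalence": 4, "cost": 5, "burden": 3,
                                 "variation_in_use": 2,
                                 "expected_impact_of_assessment": 4},
    "condition_B_technology_Y": {"prevalence": 2, "cost": 3, "burden": 5,
                                 "variation_in_use": 4,
                                 "expected_impact_of_assessment": 3},
}

ranked = sorted(candidates,
                key=lambda k: priority_score(weights, candidates[k]),
                reverse=True)
for name in ranked:
    print(name, round(priority_score(weights, candidates[name]), 2))
```

Because the weights enter linearly, an organization that derives its own weights, as the committee anticipates, can re-rank the same criterion scores without recomputing them.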
From page 111...
... The data base available to the public would include the weights assigned to each criterion and the objective and subjective criterion scores for each condition and technology to which the quantitative model was applied.
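A record layout of the following kind would be one way to hold that public information; the field names and structure are assumptions made here for illustration, not a specification from the report.

```python
# Sketch of one possible record layout for the public data base described in
# the text; field names and types are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class PriorityRecord:
    condition: str
    technology: str
    objective_scores: dict   # data-derived scores, e.g. prevalence, cost, burden
    subjective_scores: dict  # panel judgments, e.g. expected impact of assessment
    priority_rank: int

@dataclass
class PublicDatabase:
    criterion_weights: dict               # one published weight per criterion
    records: list = field(default_factory=list)   # one PriorityRecord per candidate
```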
From page 112...
... Given that data will always be, in some sense, inadequate, the presence or absence of information affects not whether but how a technology assessment should be done. In some cases, literature synthesis will be possible; in others, AHCPR may decide to fund secondary data analysis or primary data collection.
From page 113...
... In another effort, the federal government funded a separate project to collect primary data over a 3-year period. Both of these approaches are legitimate technology assessments, and both are useful responses to a lack of data.
From page 114...
... OHTA should adopt methods that will enable it to conduct preliminary assessments even when there is not yet adequate evidence on which to base a strong clinical policy recommendation. The committee advocates using decision analysis as a way to identify which missing evidence is most important for decision making and using the results as input to the development of an agenda for empirical research sponsored by AHCPR.
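One common way decision analysis flags the evidence that matters most is one-way sensitivity analysis: vary each uncertain input across its plausible range and see whether the preferred strategy changes. The sketch below applies that idea to a toy two-strategy model; the model structure, strategy names, utilities, and ranges are all invented here for illustration and are not from the report.

```python
# Hedged sketch: one-way sensitivity analysis on a toy two-strategy decision
# model, illustrating how decision analysis can flag which missing evidence
# matters most. All model structure and numbers are invented for illustration.

def expected_utility(strategy, effectiveness, complication_rate):
    """Toy expected utility (e.g., QALYs) for 'treat' vs. 'watchful_waiting'."""
    if strategy == "treat":
        return effectiveness * 0.9 - complication_rate * 0.5
    return 0.4  # assumed baseline utility without the technology

def decision(effectiveness, complication_rate):
    return max(("treat", "watchful_waiting"),
               key=lambda s: expected_utility(s, effectiveness, complication_rate))

# Assumed plausible ranges for the uncertain inputs, and assumed baseline values.
ranges = {
    "effectiveness": [0.3, 0.5, 0.7, 0.9],
    "complication_rate": [0.05, 0.10, 0.20],
}
baseline = {"effectiveness": 0.6, "complication_rate": 0.10}

# A parameter whose plausible range flips the preferred strategy marks the kind
# of missing evidence most important for decision making.
for param, values in ranges.items():
    decisions = set()
    for v in values:
        inputs = dict(baseline, **{param: v})
        decisions.add(decision(**inputs))
    verdict = "flips the decision" if len(decisions) > 1 else "does not change the decision"
    print(f"{param}: {verdict} across its plausible range")
```

Inputs whose plausible ranges flip the decision (here, effectiveness but not the complication rate) are natural candidates for the empirical research agenda the committee describes.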

