
Medicare's Quality Improvement Organization Program: Maximizing Potential (2006)

Chapter: 10 Evaluation of Quality Improvement Achieved by the QIO Program

Suggested Citation:"10 Evaluation of Quality Improvement Achieved by the QIO Program." Institute of Medicine. 2006. Medicare's Quality Improvement Organization Program: Maximizing Potential. Washington, DC: The National Academies Press. doi: 10.17226/11604.


10 Evaluation of Quality Improvement Achieved by the QIO Program

CHAPTER SUMMARY

This chapter addresses how the Quality Improvement Organizations (QIOs) performed on each subtask in the 7th scope of work (SOW) and how the Centers for Medicare and Medicaid Services (CMS) evaluated them, how CMS plans to evaluate the QIOs in the 8th SOW, the impact on quality improvement of a QIO working more intensely with an identified group of participants in a state as compared to the improvement among all providers statewide, and an assessment of provider satisfaction with QIOs.

At the end of the 3-year Quality Improvement Organization (QIO) program contracts, the Centers for Medicare and Medicaid Services (CMS) evaluates the performance of the QIOs on the basis of demonstrated improvements in the quality of care provided in their respective states or jurisdictions. The assessment of a QIO's provision of technical assistance, Task 1 of the contract for the 7th scope of work (SOW), is based on, among other things, its ability to improve clinical quality performance measures for activities in four provider settings: the nursing home, home health, hospital, and physician's office settings. CMS calculates the scores for each care setting. These scores lead to several questions: Is a QIO that successfully shows improvement in one setting likely to have high improvement scores in other settings? Does improvement in one setting lead to improvement in other settings? What patterns of improvement exist?

CMS EVALUATION OF QIO PERFORMANCE ON TECHNICAL ASSISTANCE TASKS

Evaluation of QIOs in 7th SOW

A section of the contract for the 7th SOW (Section J-7) defines CMS's plan for the evaluation of QIO performance and involves intricate formulas.

The evaluation formulas separately consider each subtask of Task 1, technical assistance, which correlates with activities related to each care setting (see Table A.5 in Appendix A). Improvement scores are a function of clinical quality measures and provider satisfaction. CMS weights improvements in clinical quality more heavily than it does those in provider satisfaction in the calculation of overall quality improvement scores for each subtask, as depicted in the following formula:1

overall subtask improvement score = 0.8 (clinical quality score) + 0.2 (satisfaction score)

Section J-7 delineates different equations for the calculation of improvement in clinical quality measures for each care setting (Table 10.1). These clinical quality measures are then individually divided by the target levels of improvement to create a clinical quality score for each setting. Provider satisfaction scores reflect the actual satisfaction improvement rates divided by the target satisfaction improvement rates. Overall subtask improvement scores produce numerical values, which represent the quality improvement achieved by the provider. (See Box 10.1 for an example of scoring.) The scores are a function of change, so minimum and maximum scores did not exist in the 7th SOW (see Chapter 2 for further discussion).

TABLE 10.1 Measures of Clinical Quality

Care Setting          Clinical Quality Score Measure       Calculation
Nursing home          Relative change                      1 − (follow-up/baseline)
Home health           Significant improvement (SI) rate    (number of identified participants with SI)/(total number of providers in state eligible for SI)
Hospital              Reduction in failure rate            (follow-up − baseline)/(1 − baseline)
Physician's office    Reduction in failure rate            (follow-up − baseline)/(1 − baseline)

SOURCE: CMS (2005b).

Adding to the complexity of scoring, the clinical quality score for the nursing home, home health, and physician's office settings includes additional components: identified participant scores and statewide scores. QIOs must work more intensely with a group of providers within the state, called "identified participants." In the 7th SOW, the hospital subtask (Task 1c) did not require identified participants.

1 Evaluation of the hospital subtask of Task 1 (Task 1c) differs by weighting clinical quality 0.75 and satisfaction 0.25.
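To make the arithmetic above concrete, the following Python sketch implements the Table 10.1 measure calculations and the weighted overall subtask score. It is an illustration only: the function names and example numbers are not CMS terminology, and rates are expressed as fractions rather than percentages.

# Illustrative sketch of the 7th SOW scoring arithmetic; names are not CMS terminology.

def relative_change(baseline, follow_up):
    # Nursing home measure: 1 - (follow-up/baseline), where the measure is an
    # adverse-event rate (e.g., percentage of residents with pain).
    return 1 - (follow_up / baseline)

def reduction_in_failure_rate(baseline, follow_up):
    # Hospital and physician's office measure: (follow-up - baseline)/(1 - baseline),
    # where the measure is the proportion of patients receiving appropriate care.
    return (follow_up - baseline) / (1 - baseline)

def significant_improvement_rate(participants_with_si, eligible_providers):
    # Home health measure: share of eligible providers showing significant improvement (SI).
    return participants_with_si / eligible_providers

def overall_subtask_score(clinical_quality_score, satisfaction_score, clinical_weight=0.8):
    # Overall subtask improvement score; the hospital subtask (Task 1c) uses
    # clinical_weight=0.75 per footnote 1.
    return clinical_weight * clinical_quality_score + (1 - clinical_weight) * satisfaction_score

print(relative_change(0.22, 0.11))             # 0.5, the 50 percent relative change used in Box 10.1
print(reduction_in_failure_rate(0.70, 0.82))   # 0.4 for a hypothetical hospital measure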

Because QIOs work more closely with identified participants, improvements made by identified participants are weighted more heavily than statewide improvements in the calculation of clinical quality scores. However, QIOs spend only approximately one-third of the appropriated funds on Task 1 subtasks with identified participants; the remaining two-thirds of the funds are devoted to making improvements statewide (CMS, 2004b). CMS evaluated QIO work on the managed care subtask (Task 1f) on the basis of the judgment of the Project Officer; QIOs either passed or failed this subtask without an official numerical score. A hypothetical example of the scoring system is provided in Box 10.1.

Contract Renewal in the 7th SOW

To merit noncompetitive renewal of contracts for the 8th SOW, CMS required the QIOs to meet the performance criteria on 10 of the 12 subtasks in Tasks 1 to 3 of the contract for the 7th SOW. For Tasks 1a to 1e and Task 2b, the quantifiable subtasks, the QIOs had to score 1.0 or higher to meet the performance criteria. For the subtasks for which a QIO did not meet the performance criteria, the QIO had to meet the following minimums: (1) a score of 0.6 or higher on quantifiable subtasks (Tasks 1a through 1e and Task 2b) and (2) approval of the Project Officer on all nonquantitative tasks (Task 1f, Tasks 2a and 2c, and Tasks 3a through 3c). If these standards were not met, a QIO had to compete for the contract for the 8th SOW (CMS, 2002).

Evaluations in 8th SOW

For the 8th SOW, CMS developed evaluation formulas similar to those used in the 7th SOW. However, as discussed in Chapter 8, the focus of the 8th SOW broadened to include the areas of (1) clinical performance measure results, (2) clinical performance measurement and reporting, (3) systems improvement, (4) process improvement, and (5) organization culture change. CMS weights each of these areas differently, depending on the subtask (see Table A.6 in Appendix A). As in the 7th SOW, the improvements made by identified participants tend to carry greater weight than the improvements made statewide, although this does not hold true for all subtasks. The passing scores vary by subtask. Evaluation of provider satisfaction differs in the 8th SOW, with the addition of a stakeholder survey and a knowledge and perception survey (see the discussion below). All Task 1 subtasks require an 80 percent overall satisfaction and knowledge-perception score to pass. For all subtasks (for Tasks 1 and 3), CMS rates QIO performance as excellent pass, full pass, conditional pass, or not pass. CMS deems scores (see the discussions below for the derivation of the scores) greater than 0.95 as excellent pass, scores between 0.75 and 0.94 as full pass, scores between 0.65 and 0.74 as conditional pass, and any score below 0.65 as not pass. All QIOs must complete the core activities of each subtask to be considered for noncompetitive contract renewal. In addition, eligibility for noncompetitive renewal is generally contingent on the QIO achieving at least one conditional pass; noncompetitive renewal is awarded if the QIO receives a full pass or an excellent pass on seven of the nine subtasks. Upon receipt of a not pass on any subtask, CMS may invite the QIO to the evaluation panel (CMS, 2005c).

BOX 10.1 Hypothetical Scoring Example

Charles Hill Nursing Home has volunteered to work with QualityQuest QIO as part of QualityQuest's identified participant group. QualityQuest would like to work on reducing the incidence of the following four selected quality measures: (1) chronic care residents with pain; (2) chronic care residents with pressure sores; (3) acute care residents with pain; and (4) acute care residents with delirium.

In 2002, the baseline, Charles Hill reported that 22 percent of its chronic care residents reported having pain. In 2005, the follow-up, the nursing home reported that only 11 percent of its chronic care residents reported having pain. For the chronic pain measure, the relative change for Charles Hill is as follows:

Relative change = 1 − (performance at follow-up/performance at the baseline)
                = 1 − (11 percent/22 percent)
                = 0.50 (50 percent relative change)

This calculation is repeated for all four measures. The worst score on a performance measure is dropped, and the scores for the remaining three measures are summed and averaged. This average score is then averaged with (1) those of the other identified participants in the state to create the QIO's identified participant score and (2) those of all the nursing homes in the state for the QIO's statewide score.

QualityQuest Identified Participant Clinical Quality Score

Calculation of Identified Participant Weight
Identified participant nursing homes as a percentage of all nursing homes in the state = 13.8 percent
Target percent participating = 10 percent
Identified participant weight = 0.44 × (percent participating identified participants/target percent participating)
Identified participant weight = 0.44 × (13.8 percent/10 percent) = 0.6

Calculation of Identified Participant Score
Baseline average of four measures: 9 percent
Follow-up average of four measures: 6.9 percent

Identified participant relative change = 23 percent improvement on the four measures
Target level of improvement for identified participants (set by CMS): 8 percent
Identified participant score = weight × (relative change in improvement/target improvement)
Identified participant score = 0.6 × (23 percent/8 percent) = 1.72

QualityQuest Statewide Clinical Quality Score

Calculation of Statewide Weight
Statewide weight = 0.8 − identified participant weight
Statewide weight = 0.8 − 0.6 = 0.2

Calculation of Statewide Score
Baseline average of four measures: 12 percent
Follow-up average of four measures: 11.1 percent
Relative change statewide: 7.5 percent average improvement on the four measures
Target level of improvement statewide (set by CMS): 8 percent
Statewide score = weight × (relative change in improvement/target improvement)
Statewide score = 0.2 × (7.5 percent/8 percent) = 0.19

QualityQuest Satisfaction Score (surveys completed only by identified participants)
Identified participants: 89 percent
More than 80 percent of QualityQuest's identified participants were satisfied with the assistance that they received. Therefore, QualityQuest has passed the satisfaction component and will receive a score of 0.2.

Overall Nursing Home Improvement Score
Overall score = 0.8 (clinical quality score) + 0.2 (satisfaction score)
              = 1.72 + 0.19 + 0.2 = 2.11

QualityQuest has scored above 1 and therefore has passed the nursing home subtask.
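For readers who want to trace the Box 10.1 arithmetic, the short Python sketch below reproduces it step by step. The numbers and rounding follow the box; the variable names are illustrative rather than CMS terminology. Note that the 0.8 clinical-quality weight from the overall formula is already embedded in the identified participant and statewide weights, which sum to 0.8, so the final score is a simple sum of the three components.

# Reproducing the Box 10.1 example for the hypothetical QualityQuest QIO.
# Intermediate values are rounded the way the box rounds them.

ip_weight = round(0.44 * (13.8 / 10), 1)         # identified participant weight: 0.44 * 1.38 -> 0.6
statewide_weight = 0.8 - ip_weight               # 0.2

ip_relative_change = round(1 - (6.9 / 9.0), 2)   # 1 - (follow-up/baseline) -> 0.23 (23 percent)
sw_relative_change = 1 - (11.1 / 12.0)           # 0.075 (7.5 percent)

target = 0.08                                    # CMS-set target level of improvement: 8 percent

ip_score = ip_weight * (ip_relative_change / target)         # 0.6 * (0.23/0.08) = 1.72 (rounded)
sw_score = statewide_weight * (sw_relative_change / target)  # 0.2 * (0.075/0.08) = 0.19 (rounded)

satisfaction_score = 0.2 if 89 >= 80 else 0.0    # 89 percent of identified participants satisfied

overall = ip_score + sw_score + satisfaction_score
print(round(overall, 2))                         # 2.11, which passes the subtask (score above 1)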

IOM ANALYSIS OF TASK 1 PERFORMANCE BY QIOS

The Institute of Medicine (IOM) committee performed the analyses described in this section on the basis of CMS's evaluation scores for the QIOs in Task 1 of the contract for the 7th SOW. The IOM committee collected data from CMS's Dashboard section, located on its internal website, QIONet (see the discussion below and in Chapter 13), as well as other data from CMS, as requested. In the interest of understanding why some states and provider settings showed more improvement on QIO quality measures than others, the committee examined correlations between how the QIOs scored on each clinical quality improvement task and the potential presence of confounding variables, such as performance on other Task 1 subtasks, the spending per beneficiary on that subtask, the QIO contract round, the QIO region, and provider satisfaction.2 The data used to determine correlations for all tasks were current through December 2004 and were obtained from the Dashboard section of CMS's internal website, unless noted otherwise. Table 10.2 summarizes the results.

TABLE 10.2 QIO Results for Task 1 of the 7th SOW

                                                              Correlation of Overall Subtask Score with:
                              Range of     Percentage of     Other      Spending per    QIO      QIO      Provider
Task 1 Subtask                Scoresa      QIOs Passing      Subtasks   Beneficiary     Round    Region   Satisfaction
Nursing home (Task 1a)        1.71–7.37    100               No         No              No       No       No
Home health care (Task 1b)    0.22–2.2     35                No         No              Some     No       No
Hospital (Task 1c)            0.77–3.2     94                No         No              No       Some     No
Physician's office (Task 1d)  −0.13–2.4    56                No         No              No       Some     No
Underserved/rural (Task 1e)   0.6–1.6      94                No         No              No       No       NA
Managed care (Task 1f)        NA           100               No         No              No       No       NA

NOTE: NA = not applicable.
a The range of scores indicates the lowest and the highest scores achieved among all QIOs in the 7th SOW, based on the overall subtask improvement scoring formula given above.

SOURCE: Derived from data collected from CMS's Dashboard section on its internal website.

2 These correlations used QIO scores determined by the CMS evaluations described in Section J-7 of the 7th SOW and discussed earlier in this chapter. Correlations between improvement scores, spending per beneficiary, provider satisfaction, QIO contract round, and region were determined by calculation of the correlation coefficient, r. On a scale from −1 to 1 (with −1 indicating a direct negative correlation, 0 indicating no correlation, and 1 indicating a direct positive correlation), all r values were less than 0.3. QIO round refers to the CMS contract cycle, which is implemented in a three-stage approach separated by 3 months for each stage. QIOs are split into four geographic regions: Boston, Dallas, Kansas City, and Seattle. Differences between QIO improvement scores per task and QIO region were calculated by using a one-way analysis of variance (ANOVA) test, followed by the Scheffé test; significance was considered if the P value was <0.05.

For all clinical quality improvement tasks, the overall improvement scores for each Task 1 subtask were not found to correlate with improvement scores in any of the other Task 1 subtasks. For example, a QIO's success in improving the quality of care provided by nursing homes did not indicate that the QIO would necessarily have success in improving the quality of care provided by hospitals. Also, the IOM committee did not detect any correlations between subtask improvement scores and QIO spending per beneficiary or provider satisfaction. Some association between the overall subtask score and the QIO contract round was shown only for the home health setting, an association examined further in the home health discussion below. The scores for hospitals and physicians' offices showed some association with QIO region.
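The committee's statistical approach, as described in footnote 2, can be sketched roughly as follows in Python. The data shown are invented placeholders (the underlying Dashboard scores are not public), and the Scheffé post hoc test that followed the ANOVA in the committee's analysis is not reproduced here.

# Rough sketch of the footnote 2 methodology: Pearson correlations between subtask
# scores and candidate explanatory variables, and a one-way ANOVA across the four
# CMS regional groupings. All numbers below are invented for illustration.

from scipy.stats import pearsonr, f_oneway

# Hypothetical per-QIO overall scores for one subtask and matched spending figures.
scores = [1.8, 2.4, 0.9, 3.1, 1.2, 2.0]
spending_per_beneficiary = [0.55, 0.71, 0.48, 0.90, 0.52, 0.63]

r, p = pearsonr(scores, spending_per_beneficiary)
print(f"r = {r:.2f}")   # the committee treated |r| < 0.3 as no meaningful correlation

# Hypothetical scores grouped by CMS regional office.
boston = [1.1, 1.4, 1.0]
dallas = [1.9, 2.2, 1.7]
kansas_city = [2.8, 3.1, 2.6]
seattle = [0.9, 1.2, 1.1]
f_stat, p_value = f_oneway(boston, dallas, kansas_city, seattle)
print(f"ANOVA across regions: F = {f_stat:.2f}, P = {p_value:.4f}")   # significant if P < 0.05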

As noted above, CMS calculated the scores of QIO performance in the 7th SOW for each subtask. However, no overall score for all technical assistance subtasks in Task 1 combined was calculated. During the IOM committee's study, the question thus arose of whether the overall high- and low-performing QIOs could be identified. Such an identification would, however, necessitate the derivation of a single composite score that could be used to rate the progress in quality improvement that each QIO had made in all four health care settings. Although the identification of high and low performers in a single care setting was possible (this was done by the Best Practices Special Study for the hospital setting) (CMS, 2005a), this was not the case when an attempt was made to include the scores for all care settings.

The IOM committee used multiple methods in its attempt to identify the overall high and low performers, but no clear method for the categorization of the QIOs on the basis of CMS scores could be devised. For instance, the scores for all four settings could be added together; however, if a QIO scored 5.4 in the nursing home setting but −0.1 in the physician's office setting, there would be no way to tell that performance might have worsened in physicians' offices during the 7th SOW. The IOM committee encountered other challenges, such as how to ensure that all subtasks received equal weight and how to set the cutoff between high and low performance. The lack of a correlation among tasks and wide variations in performance, compounded by the complexity of the QIO program, thus made the committee unable to identify the high- and low-performing QIOs for Task 1 as a whole in a valid manner.

Although QIOs in general achieved improvements in measures that were calculated at the baseline (usually at the beginning of the 7th SOW) and that were then remeasured at the end of the 7th SOW, many of these improvements were not statistically significant. Many other factors may have affected provider performance, such as quality interventions from other organizations, but CMS did not document these factors. The evaluations in the 7th SOW focused on specific measures, subtasks, and individual QIO performance; they do not demonstrate the actual impact of the QIO program or attribute improvements to QIO interventions, but they do show changes in specific quality measures in each state.

Nursing Homes

7th SOW

Under Task 1a of the 7th SOW, each QIO worked to improve quality-of-care measures for identified participants as well as for all nursing homes statewide and developed a plan for this work as one of its four deliverables (see Table A.7 in Appendix A). For evaluation purposes, CMS used the reduction in failure rate to define "improvement." CMS based the evaluation of success on three components. First, the QIO had to demonstrate at least an 8 percent improvement statewide on three to five QIO-selected clinical measures (see Table A.3a in Appendix A).

Second, the QIO had to demonstrate an 8 percent improvement on these measures for the identified participants. Finally, according to provider surveys conducted by CMS, at least 80 percent of nursing homes, whether they were identified participants or not, had to report an adequate level of satisfaction with the work of the QIOs. CMS rolled these three components into an overall score for Task 1a, weighting statewide improvement less heavily than improvement among the identified participants. Clinical improvement constituted 80 percent of the score, with provider satisfaction levels counting for 20 percent. QIOs that scored a value of 1 or greater passed the task; a score of less than 1 was considered failing (CMS, 2002).

For Task 1a, the IOM committee found no correlations between overall improvement scores and QIO spending per beneficiary, nursing home satisfaction rates, the QIO contract round, or the QIO region. In the 7th SOW, all QIOs achieved a passing score on this subtask (for the scoring formula, see Table A.5 in Appendix A), with improvement scores ranging from a low of 1.71 to a high of 7.37. Data from CMS's Dashboard show that the identified participants had greater levels of improvement across all nursing home measures, with an average relative change of 46.5 percent, in comparison with the statewide average relative change of 16.7 percent. As of December 2004, all QIOs had recruited more than the required 10 percent of identified participants, with one QIO involving up to 100 percent of all nursing home providers in the state (CMS, 2004a). Nationwide, during the 7th SOW, half of all nursing homes participated with their QIOs, with various levels of involvement in multiple interventions (personal communication, Y. Harris, December 28, 2004).

8th SOW

In the 8th SOW, the QIOs must provide 10 deliverables (see Table A.7 in Appendix A) for the nursing home task. A notable deliverable for evaluation is that QIOs set their own statewide targets for clinical improvement for all measures in this task. The identified participants must set their own personal targets for clinical improvement (CMS, 2005c).

As described earlier in this chapter, evaluation of QIO performance in the 8th SOW will build upon the components of the evaluation of performance in the 7th SOW (clinical quality and provider satisfaction) to include the following, where applicable: clinical performance measure results, process improvement, and organization culture change (see Table A.6 of Appendix A). As in the 7th SOW, CMS will weight the identified participant scores more heavily than the statewide scores, and the most emphasis will be placed on improvements in the clinical performance measure results. The total score for this subtask is 1.1 points, but the total possible score is 1.3 points, as the process improvement activities are worth 0.2 point of extra credit.

However, the QIOs must meet core standards in the following areas: clinical performance measure results and organization culture change, as well as the satisfaction and knowledge-perception criteria. A score of 80 percent or higher on the satisfaction and knowledge-perception activities adds 0.1 point to the overall subtask score (this is discussed later in this chapter) (CMS, 2005c).

Home Health

7th SOW

The three QIO deliverables for Task 1b in the 7th SOW included the QIO training of home health agencies on the Outcome-Based Quality Improvement System (OBQI) (see Table A.7 in Appendix A). The QIOs had to demonstrate improvement in OBQI measures for the set of identified participants. The evaluation did not include a measure of statewide improvement. CMS based the evaluation of success on two components: (1) a statistically significant improvement in at least one indicator by 30 percent of the identified participants and (2) satisfaction with QIO performance by at least 80 percent of the participating home health agencies. Completion of the home health task required a score of 1 point or higher to pass (see Table A.5 in Appendix A) (CMS, 2002).

As in Task 1a, the IOM committee found no correlation between home health improvement scores and spending on that task per beneficiary, provider satisfaction, or QIO region. The QIOs in the first contract round, however, appeared to have higher evaluation scores than the QIOs in the second and third contract rounds, suggesting that the length of time invested in change may influence improvement in this subtask. Work in this provider setting was new in the 7th SOW, and 35 percent of the QIOs achieved a passing score. However, the contracts for the 65 percent of the QIOs that failed (scoring below 1.0) were not put up for bid on the basis of this failure, as most scored between 0.6 and 1.0. As of December 2004, the scores for the home health subtask ranged from 0.22 to 2.2 points (CMS, 2004a). These are not final scores for the 7th SOW, as the results for the second and third rounds were not available as of this writing. An average of 22 percent of the identified participants in all QIOs attained significant improvement, which was less than the target of 30 percent. Only 17 QIOs reported that more than the required number of identified participants had significant improvement; one state reported that 73 percent of all home health providers in the state had significant improvements.

8th SOW

In the 8th SOW, the QIOs are responsible for providing nine deliverables (see Table A.7 in Appendix A), including lists of identified participants, the selection of one Outcome and Assessment Information Set (OASIS) measure (see Chapter 8 for a description of OASIS measures) for statewide improvement, and the home health agencies' plans to reduce acute care hospitalizations. Unlike under the 7th SOW, the QIOs will be evaluated on the basis of the improvements both statewide and among the identified participants. The target reduction in failure rates varies by measure and is prescribed by CMS for this task, unlike in the nursing home setting, in which each QIO is allowed to define its own targets. The targets for statewide reductions in failure rates for home health settings are lower than the targets for identified participant reductions (Table 10.3). The targets for identified participants were derived in order to match the 75th percentile of the 7th SOW. Selection of statewide targets included consideration of the rates for identified participants as well as the complexity of the measures.

TABLE 10.3 Task 1b Target Reduction Rates in the 8th SOW

                                                   Target Statewide    Target Identified Participant
OASIS Publicly Reported Measures                   RFRa (percent)      RFR (percent)
Improvement in bathing                             14                  34
Improvement in transferring                        8                   31
Improvement in ambulation and locomotion           9                   20
Improvement in management of oral medications      8                   18
Improvement in pain interfering with activity      11                  41
Improvement in status of surgical wounds           6                   38
Improvement in dyspnea                             17                  41
Improvement in urinary incontinence                9                   34
Provision of any emergent careb
Acute care hospitalization                         30                  50
Discharge to community                             10                  35

a RFR = reduction in failure rate. Identified participants are excluded from calculation of the statewide reduction in failure rate.
b This measure will not be used in the 8th SOW.

SOURCE: CMS (2005c).
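As a rough illustration of how a QIO or an agency might check one of these targets, the Python sketch below computes a reduction in failure rate for a single OASIS measure and compares it against the Table 10.3 thresholds. The helper functions, the example agency numbers, and the assumption that a target is met when the reduction equals or exceeds the threshold are all illustrative assumptions, not CMS specifications.

# Illustrative check of one OASIS measure against the Table 10.3 targets.
# Target values come from the table; the comparison rule and the example numbers
# are assumptions for illustration.

TARGETS = {  # measure: (statewide RFR target, identified participant RFR target)
    "Improvement in bathing": (0.14, 0.34),
    "Improvement in transferring": (0.08, 0.31),
}

def reduction_in_failure_rate(baseline, follow_up):
    # Measures here are success rates, so the failure rate is 1 minus the rate.
    return (follow_up - baseline) / (1 - baseline)

def meets_target(baseline, follow_up, measure, identified_participant):
    statewide_target, ip_target = TARGETS[measure]
    target = ip_target if identified_participant else statewide_target
    return reduction_in_failure_rate(baseline, follow_up) >= target

# An identified participant whose bathing measure improves from 55 to 72 percent:
print(meets_target(0.55, 0.72, "Improvement in bathing", identified_participant=True))  # True (RFR ~ 0.38)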

The evaluation plan for the 8th SOW weights clinical performance measures more heavily than the other components of evaluation for this task, with particular emphasis on acute care hospitalization. In addition to improving clinical performance measures statewide, the QIOs must improve immunization assessment processes statewide. As in Task 1a, the QIOs must achieve the targets set for the following core activities: improvements by identified participants in one selected OASIS measure, acute care hospitalization, and telehealth. Statewide improvements must be met for clinical performance for acute care hospitalization, immunization surveys, and the satisfaction and knowledge-perception standards. The total score for this subtask is 1.0 point; 0.27 point of partial credit and extra credit is available, making the total possible score 1.27 points. QIOs that achieve the satisfaction and knowledge-perception standards receive 0.1 point (CMS, 2005c).

Hospitals

7th SOW

The evaluation plan for Task 1c of the 7th SOW based success on statewide improvements in quality-of-care measures and hospital satisfaction with QIO performance. The only deliverable for Task 1c was a list of contact information for every hospital in the state (see Table A.7 in Appendix A). CMS calculated a combined topic average based on improvements on the quality indicators in each of the four topic areas. The QIO had to demonstrate at least an 8 percent reduction in failure rate for a combined topic average statewide. Additionally, a provider satisfaction survey had to indicate that at least 80 percent of the hospitals in the state or jurisdiction were satisfied with the activities of the QIO. CMS determined scores above 1.0 for this task to be passing (CMS, 2002).

The IOM committee found that the hospital measure improvement rates did not correlate with spending on this task per beneficiary, the QIO contract round, or provider satisfaction. An evaluation of hospital scores by QIO region did detect differences among regions (P < 0.05). This difference was driven by higher scores in the Kansas City region compared with those in both the Seattle and the Boston regions; the score for the Dallas region was not significantly different from that for any other region. On the basis of data from Dashboard, 94 percent of the QIOs appear to have achieved a passing score on the hospital subtask in the 7th SOW as of the end of 2004 (CMS, 2004a). At that time, the scores on this subtask ranged from 0.77 to 3.2 points.

8th SOW

For the 8th SOW, CMS subdivided hospital work into two subtasks: Task 1c1 focuses on hospitals in general, whereas Task 1c2 focuses on critical access hospitals. Successful performance on Task 1c1 requires the provision of six deliverables (see Table A.7 in Appendix A), including the implementation of systems improvement interventions, such as computerized provider order entry, bar coding, or telehealth. The eight deliverables for Task 1c2 include the submission of quality improvement measures, interventions, and change models as well as a safety culture survey (CMS, 2005c).

CMS bases its evaluations of these subtasks on both statewide and identified participant improvements. Statewide improvement carries more weight than identified participant improvement in the evaluation of Task 1c1; in Task 1c2, identified participant improvement is weighted more heavily than statewide improvement. Statewide, the core activities focus on the reporting of Hospital Quality Alliance measures (see Table A.3a in Appendix A) for both Tasks 1c1 and 1c2. For Task 1c1, identified participant work focuses on improving clinical performance measure results, processes for surgical care, and the use of electronic clinical information systems. For Task 1c2, identified participants must address the culture of safety in the critical access hospital, as well as the implementation of electronic systems. Task 1c1 is evaluated out of a total score of 1.1 points, with 0.2 point of extra credit and partial credit available, for a possible score of 1.3 points. The total possible score for Task 1c2 is 1.35 points. Meeting the satisfaction and knowledge-perception standards adds 0.1 point to the overall subtask score (CMS, 2005c).

Physician Office and Physician Practice

7th SOW

Task 1d had two deliverables, including a listing of all identified participants along with each participant's Unique Physician Identification Number3 submitted via PARTner (see Chapter 13 for a description of PARTner). CMS used three criteria to judge a QIO's success in Task 1d. First, the QIO had to demonstrate an 8 percent overall improvement statewide on quality-of-care measures for diabetes, cancer, and immunizations using a combined topic average. The score included weighting of the Health Plan Employer Data and Information Set (HEDIS) data so that Medicare+Choice beneficiaries and fee-for-service beneficiaries could be considered equally.

3 Medicare assigns a Unique Physician Identification Number to each provider or practitioner who participates in the Medicare program.

The second criterion for the successful completion of Task 1d was achieving at least an 8 percent improvement on measures related to diabetes and cancer screening for the identified participant group. Finally, provider surveys had to yield at least an 80 percent satisfaction rate among the identified participants. QIOs scoring 1.0 point and above passed the physician's office task (CMS, 2002).

For this task, the IOM committee found no correlations between evaluation scores and QIO spending per beneficiary, the QIO contract round, or physician's office satisfaction. In testing for variations among QIO regions, the committee detected significant differences (P < 0.05) in physician's office scores, driven by the higher average scores in the Boston region compared with those in the Seattle region. Differences in performance among other regions were not significant. As of this writing, 56 percent of the QIOs appear to have achieved a passing score on this subtask. As with the home health care analysis, this does not imply that 44 percent of the QIO contracts were not automatically renewed, as many could have scored above 0.6 point. The scores ranged from −0.13 to 2.4 points. Physicians' offices volunteering to be identified participants tended to have higher reductions in failure rates than physicians' offices statewide (with average reductions of 12.1 and 6.1 percent, respectively). As of December 2004, only two QIOs did not include the required 5 percent of active primary care physicians, and one QIO included up to 16 percent of all physicians' offices in the state (CMS, 2004a).

8th SOW

In the 8th SOW, CMS separated work with physicians' practices into three groups: physician practice (Task 1d1), physician practice for underserved populations (Task 1d2), and physician practice and pharmacy: Part D prescription drug benefit (Task 1d3). Task 1d1 calls for 12 deliverables, including documentation that assistance was provided to Medicare Advantage plans, documentation of support for the Physician Voluntary Reporting Program, implementation of electronic health records, and lists of identified participant groups. The core activities for this subtask are systems improvement by the identified participant groups and satisfaction and knowledge-perception surveys. Clinical performance measures are evaluated by the Project Officer and have been broadened to include the provision of statewide support for the Physician Voluntary Reporting Program, prevention and disease-based care processes, Medicare Advantage plans, End-Stage Renal Disease Networks, and offices participating in the Medicare Management Demonstration Project. The areas of focus for the identified participant groups are clinical performance measurement and reporting, process improvement, and systems improvement.

The total score for Task 1d1 is 1.2 points; partial credit is offered for the systems improvement dimension of the subtask. The QIOs must achieve an 80 percent rate on the satisfaction and knowledge-perception surveys for 0.1 point (CMS, 2005c).

The four deliverables for Task 1d2 mainly consist of cultural and linguistic competencies, addressed primarily by the identified participant group. The identified participant activities for systems improvement and process improvement are the core activities for this subtask. Clinical measures focus on statewide improvements in immunization rates, mammography rates, and diabetes measures for underserved populations. CMS weights statewide improvements less heavily than improvements among the identified participants. The total score is 1.0 point; the satisfaction and knowledge-perception surveys are core activities that can add 0.1 point to the overall subtask score (CMS, 2005c).

Task 1d3 (physician practice and pharmacy: Part D prescription drug benefit) has nine deliverables, including assessments of electronic prescribing feasibility, the baseline performance on Task 1d3 measures, and comprehensive responses to beneficiary complaints about prescription medications. Evaluation of performance on this subtask will be determined solely on the basis of the identified participants' activities and will be performed by the Project Officer and the Task 1d3 Government Task Leader. Meeting the satisfaction and knowledge-perception survey requirement is necessary to pass this subtask (discussed further later in this chapter) (CMS, 2005c).

Underserved and Rural Beneficiaries

7th SOW

For Task 1e of the 7th SOW, CMS required the QIOs to demonstrate a reduction in a chosen disparity in a defined population from the baseline to remeasurement through the completion of three deliverables. CMS compared the improvements made by an identified group with those made by a reference group of beneficiaries; the difference in these improvements between the two groups had to be smaller at the remeasurement than at the baseline. (This is a significant distinction, because a concomitant improvement in the reference group can affect the difference in improvement between the two groups.) QIOs could earn extra points by fully documenting the description of the intervention group, the relationship between the disparity and the chosen intervention, and the effectiveness of the intervention (CMS, 2002). CMS considered a score of 1.0 point to be passing, whereas the maximum obtainable score was 1.6 points.
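The Task 1e comparison described above (the gap between the reference group and the identified group must be smaller at remeasurement than at baseline) can be expressed in a few lines of Python. The rates below are invented purely for illustration; this is not CMS's scoring code.

# Sketch of the Task 1e disparity comparison: the gap between the reference group
# and the identified (disparity) group must narrow between baseline and remeasurement.
# All rates are invented for illustration.

def disparity_reduced(ref_baseline, ident_baseline, ref_followup, ident_followup):
    baseline_gap = ref_baseline - ident_baseline
    followup_gap = ref_followup - ident_followup
    return followup_gap < baseline_gap

# Example: the identified group improves from 40 to 55 percent on a screening measure,
# while the reference group also improves (70 to 78 percent); the gap still narrows.
print(disparity_reduced(0.70, 0.40, 0.78, 0.55))   # gap shrinks from 0.30 to 0.23 -> True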

The IOM committee found no correlations between overall Task 1e scores and QIO spending per beneficiary on this subtask or the QIO contract round. In total, 94 percent of the QIOs passed this subtask; the scores ranged from 0.6 to 1.6 points (personal communication, J. Kelly, July 5, 2005).

8th SOW

In the 8th SOW, CMS has incorporated the underserved and rural beneficiary focus of the 7th SOW into selected subtasks of Task 1 (CMS, 2005c).

Managed Care Organizations

7th SOW

Successful performance of Task 1f required two deliverables, including submission of a plan of action to incorporate Medicare+Choice organizations into all other quality improvement tasks. Under Task 1f, the Project Officer considered whether the QIO demonstrated adequate initiative to include Medicare+Choice organizations in Tasks 1a to 1e. CMS also considered the Medicare+Choice organizations' satisfaction with their interactions with QIOs, with an expected minimum satisfaction level of 80 percent. Additionally, CMS looked to Medicare+Choice Quality Review organizations or other accreditation organizations to determine whether the Medicare+Choice organizations achieved demonstrable success in their Quality Assessment and Performance Improvement projects. For the overall evaluation of a QIO's success on Task 1f, the Project Officer gave equal weight to Quality Assessment and Performance Improvement projects (see Chapter 8) undertaken in conjunction with QIO technical assistance and achievement of an 80 percent satisfaction level (CMS, 2002). All QIOs with Medicare+Choice organizations in their states passed Task 1f.

8th SOW

QIO work with Medicare Advantage plans (formerly Medicare+Choice organizations) is incorporated into the appropriate settings during the 8th SOW. CMS did not separate out this work as a distinct task (CMS, 2005c).

IMPACT OF INTENSE QIO ASSISTANCE

In the 7th SOW, subsets of providers in the nursing home, home health care, and physician's office settings volunteered to work more closely with the QIO than other providers in the state; the QIO program recognizes these providers as "identified participants."

CMS's analyses of QIO interventions in the 7th SOW attempted to determine whether the identified participants achieved greater improvements than the nonidentified participants; one goal of such determinations is to show whether more intense QIO interactions yield a better quality of care. CMS shared preliminary results in public forums, such as the American Health Quality Association Technical Meeting in February 2005. The results suggested that the identified participants scored higher than the nonidentified participants (Rollow, 2004).

Upon initial analysis of these data, the IOM committee noted that the results are limited by many confounders, including the following: potential biases between identified participants and other providers, lack of information about other quality improvement interventions in which the providers participated, limited knowledge of the QIOs' statewide quality improvement efforts, and the impacts of these other interventions. In addition, the methods and time frames for determining the impacts of the intensity of work with the QIOs are not consistent across care settings (i.e., inconsistent remeasurement periods) (personal communication, W. C. Rollow, July 8, 2005).

CMS plans to publish a study with more complete data that support these findings; an advance summary of that study was provided to the committee. The study assesses the level of intensity of provider work with the QIOs by separating the providers into groups (Table 10.4). Providers working more intensely with QIOs appear to have achieved greater improvements on measures than those that did not. However, the statistical significance of these improvements could not be determined from this summary. CMS concluded that QIO assistance is, in general, valuable for improving quality (CMS, 2005b).

TABLE 10.4 Summary of CMS Evaluation of the 7th SOW, by Care Setting

Levels of intensity
  Nursing home: nonidentified participants, identified participants, and select identified participantsa
  Home health: nonidentified participants, identified participants, and select identified participants
  Hospital: NAb
  Physician's office: nonidentified participants and identified participants
Number of measures improved
  Nursing home: 4/6 measures improved
  Home health: 10/11 measures improved
  Hospital: 17/18 measures improved
  Physician's office: 2/4 measures improved
Number of providers or beneficiaries (approximate)
  Nursing home: 13,000 providers
  Home health: 6,000 providers
  Hospital: 3,700 providers
  Physician's office: 1.7 million beneficiaries
Measurement periodc
  Nursing home: Q2 2002–Q2 2004
  Home health: April 2002–January 2005
  Hospital: 2000–Q4 2004
  Physician's office: 2001–2004

a Select identified participants are identified participants focusing on improving a specific quality measure.
b NA = hospital data were unavailable for this summary.
c Q2 = second quarter; Q4 = fourth quarter.

SOURCE: CMS (2005b).

PROVIDER SATISFACTION

7th SOW

In the 7th SOW, the scores on which QIO performance was based included dimensions of provider satisfaction with their interactions with QIOs as well as the relative improvement on various performance measures. QIOs needed to attain an 80 percent rate of provider satisfaction for Tasks 1a, 1b, 1c, 1d, and 1f, which contributed to 20 percent of the QIO's overall score on each subtask (see Table A.5 in Appendix A). CMS contracted with Westat to survey all the identified participants; CMS also included a selection of nonidentified participants from the nursing home and home health settings in these surveys. The surveys yielded a response rate of 90 percent from 21,710 providers nationwide. Westat sent the surveys by mail, the Internet, and telephone, and the provider representative who served as the main point of contact with the QIO completed the survey.

Questions ranged from QIO-to-provider communications processes (e.g., "How satisfied or dissatisfied were you with the one-to-one e-mail communication?" and "How satisfied or dissatisfied were you with the timeliness of the QIO's response to your question or request for assistance?") to the content and outcome of QIO technical assistance (e.g., "When implementing our quality improvement projects, we used the information received from this QIO" and "Medicare beneficiaries are now better served by our organization thanks to the assistance we received from this QIO") (Westat, 2005:58–59).

The surveys covered the following six topic areas: access to health care, timeliness of response, information dissemination, technical support, professionalism and courtesy, and overall responsiveness to needs. The scores incorporated the aggregated survey responses, with each topic area weighted equally. Westat determined satisfaction on the basis of a range of choices: very satisfied, somewhat satisfied, neither satisfied nor dissatisfied, somewhat dissatisfied, and very dissatisfied. When applicable, Westat used comparable choices of strongly agree, somewhat agree, neither agree nor disagree, somewhat disagree, and strongly disagree.

Westat included responses of very satisfied and somewhat satisfied in the calculation of the satisfaction rate. Table 10.5 shows the providers' overall satisfaction with QIOs, listed by subtask.

TABLE 10.5 Provider Satisfaction with QIOs

Task and Survey Respondent                              Provider Satisfaction (percent)
Task 1a  Nursing home identified participants           91
         Nursing home nonidentified participants        75
Task 1b  Home health identified participants            94
         Home health nonidentified participants         81
Task 1c  Hospital                                       90
Task 1d  Physician's office                             85
Task 1e  Underserved and rural                          NAa
Task 1f  Medicare+Choice                                93

a NA = not applicable.

SOURCE: Westat (2005).

Westat also performed an analysis of "key drivers" of satisfaction. That analysis determined that 93 percent of all providers responded that they were satisfied with both the ease of contacting QIOs and the timeliness of the response. Satisfaction with the provision of technical support varied depending on the type of assistance and ranged from a low of 84 percent (telephone conference calls) to 93 percent (training workshops, one-on-one telephone calls, and one-to-one e-mails). The overall value of QIO assistance was not equal across all types of providers and differed by participant status, as depicted in Table 10.6. The report suggests that perceived usefulness may predict higher overall satisfaction with the QIOs (Westat, 2005).

TABLE 10.6 Value of QIO Assistance

Task and Survey Respondent                              Overall Value (percent)
Task 1a  Nursing home identified participants           81
         Nursing home nonidentified participants        60
Task 1b  Home health identified participants            88
         Home health nonidentified participants         70
Task 1c  Hospital                                       77
Task 1d  Physician's office                             72
Task 1e  Underserved and rural                          NAa
Task 1f  Medicare+Choice                                74

a NA = not applicable.

SOURCE: Westat (2005).
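The tallying rule described above (counting "very satisfied" and "somewhat satisfied" responses toward the rate) is simple to express directly. The Python sketch below uses invented responses; it illustrates the calculation only and is not Westat's survey-processing code.

# Sketch of how an overall satisfaction rate like those in Table 10.5 is tallied:
# "very satisfied" and "somewhat satisfied" responses count toward the rate.
# The responses listed here are invented for illustration.

responses = [
    "very satisfied", "somewhat satisfied", "neither satisfied nor dissatisfied",
    "very satisfied", "somewhat dissatisfied", "very satisfied",
]

satisfied = {"very satisfied", "somewhat satisfied"}
rate = sum(r in satisfied for r in responses) / len(responses)
print(f"{rate:.0%} satisfied")   # 67% for these invented responses; 80 percent was the passing bar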

8th SOW

In the 8th SOW, the QIO program will continue to evaluate the satisfaction of providers as a component of Task 1 subtasks. However, a new topic will be added to the survey, asking providers about their knowledge and perception of both CMS and the QIO with which they work. Under the umbrella of satisfaction surveys, an independent contractor will also survey beneficiaries about their interactions with the QIO program. Previously, the only beneficiary satisfaction surveys were those administered by the QIOs themselves as part of their case review activities (see Chapter 12). In the 8th SOW, QIOs achieving at least an 80 percent score on the satisfaction and knowledge-perception surveys (core activities for all parts of Task 1) will add 0.1 point to their subtask scores. Task 1d3 is the only exception, for which 80 percent satisfaction is required to pass (CMS, 2005c).

A stakeholder survey and a QIO survey will also be introduced in the 8th SOW. The stakeholder survey will be conducted twice during the 3-year contract period and will measure how key health care system stakeholders view the QIOs and CMS. The QIO survey will allow the QIOs to voice their satisfaction or dissatisfaction with CMS's operation and management of the overall QIO program (personal communication, M. G. Wang, July 6, 2005). The QIOs will also be surveyed on their views toward the QIO Support Centers (QIOSCs), which will contribute to the formal performance evaluation of each QIOSC (personal communication, J. Taylor, April 29, 2005).

Although the IOM committee believes that surveys can be valuable sources of information for determination of the impact of the QIO program, as discussed in Chapter 5, the surveys must be designed and administered in a fair and clear manner with specific, actionable questions. In addition, analyses should be conducted to discern the characteristics of QIOs receiving high and low satisfaction ratings. Also, in keeping with the transparency of public reporting expected of providers, it would be appropriate to make public the various satisfaction scores of each QIO.

PROGRAMWIDE EVALUATION OF IMPACT

As shown throughout this chapter, evaluations of quality impacts have focused on specific measures, subtasks, and individual QIOs. Some data are presented for the nation as a whole but are based on summaries of changes in individual states.

The program has not assessed the impact of the entire 7th SOW (including the combination of technical assistance activities with the other requirements related to the protection of beneficiaries and program integrity). Although some QIOs have mentioned that the performance of case reviews sometimes brings quality improvement issues to light, these issues have not been documented in a manner available for evaluation, nor has CMS assessed the impacts of program spending outside the core contracts, which accounts for almost one-third of the funds apportioned.

SUMMARY

This chapter has discussed issues related to evaluation of the impact of quality improvement in the QIO program. The following are some of the main themes of this chapter, which are reflected in the findings and conclusions presented in Chapter 2:

· CMS evaluates QIO performance on the basis of a number of provider quality measures and deliverables provided for each task. Objective data that measure quality improvements are limited to what is in the evaluation scores.

· The method by which QIOs are evaluated is detailed and complicated, does not reflect any program priorities, and thus neither reveals patterns of effective technical assistance nor helps the QIOs prioritize how best to approach the provision of technical assistance. This complicated method of evaluation is made even more complex in the 8th SOW. The levels of expectation for the various measures have implications for answering the question of whether QIOs improve quality, but because of the intricacies of CMS's evaluation formulas, the IOM committee was not able to determine whether the levels of expected improvement for each setting were adequate.

· The data used for evaluation may have selection bias. For example, QIOs may not recruit identified participants randomly; instead, identified participants may be selected on the basis of their ability to achieve the greatest improvement, thus yielding higher scores.

· Evaluation formulas do not reflect the differences between the monies spent on a particular subtask and the weight given to calculation of identified participant and statewide scores.

· Because identified participants volunteer to work with the QIO, they may be biased toward improving quality. This is important, as CMS weights identified participant improvements more heavily than statewide improvements.

On the basis of the QIO contract performance evaluations:

· In general, providers in all settings seem to be improving quality on most of the performance measures on which they are evaluated.

· The IOM committee was not able to identify correlations with substantial implications between subtasks, QIO spending per beneficiary, contract round, region, or level of provider satisfaction.

· However, a small but growing amount of evidence indicates that providers who work intensely with their QIOs achieve higher levels of improvement, are more satisfied with QIO assistance, and value QIO assistance more than those who do not.

· The lack of an overall program evaluation limits the IOM committee's ability to draw conclusions about the overall impact that the QIOs have had on quality.

REFERENCES

CMS (Centers for Medicare and Medicaid Services). 2002. 7th Statement of Work (SOW). [Online]. Available: http://www.cms.hhs.gov/qio [accessed April 9, 2005].
CMS. 2004a. QualityNet Dashboard. [Online]. [accessed January 2005].
CMS. 2004b. QualityNet Dashboard. [Online]. [accessed August 26, 2004].
CMS. 2005a. Best Practice Methods Special Study: Report of First Year Scan of QIO Inpatient Practice. Baltimore, MD: Centers for Medicare and Medicaid Services.
CMS. 2005b. Evaluation of the 7th Scope of Work: Special Advance Summary Report Submitted to the Institute of Medicine. September. Unpublished. Baltimore, MD: Centers for Medicare and Medicaid Services.
CMS. 2005c. 8th Statement of Work (SOW), Version #080105-1. [Online]. Available: http://www.cms.hhs.gov/qio [accessed November 4, 2005].
Rollow WC. 2004. Presentation to IOM Committee: Evaluating the HCQIP Program. Unpublished. October 4. Washington, DC: Institute of Medicine.
Westat. 2005. Survey for Provider Satisfaction with Quality Improvement Organizations. Unpublished. July 1. Rockville, MD: Westat.
