The Patient Protection and Affordable Care Act (ACA) has set the stage for transformation of the health care system. This transformation encompasses change both in what the nation wants from health care and in how care is paid for. New care delivery systems and payment reforms require measures for tracking the performance of the health care system. Quality measures are among the critical tools available to health care providers and organizations during the process of transformation and improvement (Conway and Clancy, 2009); they also play a critical role in the implementation and monitoring of innovative interventions and programs. This chapter begins by defining what constitutes a good quality measure. It then reviews the process for measure development and endorsement, as well as the existing landscape of quality measures for treatment of mental health and substance use (MH/SU) disorders. Next, the chapter details a framework for the development of quality measures—structural, process, and outcome measures—for psychosocial interventions, including the advantages, disadvantages, opportunities, and challenges associated with each. The final section presents the committee’s recommendations on quality measurement.
The Institute of Medicine (IOM) defines quality of care as “the degree to which health care services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge” (IOM, 1990, p. 21). Quality measures are tools for
quantifying a component or aspect of health care and comparing it against an evidence-based criterion (NQMC, 2014).
Quality measures are used at multiple levels of the health care system—clinicians, practices, clinics, organizations, and health plans—and for multiple purposes, including clinical care, quality improvement, and accountability. At the patient level, quality measures can address the patient experience of care and issues that are important to the patient’s treatment plan. At the care team or clinician level, quality measures can be used to assess the effectiveness and efficiency of care and inform quality improvement efforts. At the organization level (such as a health plan or delivery system), quality measures can address how well the organization supports effective care delivery—for example, by being used to assess the availability of trained staff. At the policy level, quality measures can be used to assess the effect of policies, regulations, or payment methodologies in supporting effective care. And at the level of the clinician or care team and organization, quality measures often are used for accountability purposes—for example, through public reporting to support consumer or purchaser decision making or as the basis for payment or other nonfinancial incentives (such as preferential network status).
Quality measures can address structure, process, and outcomes (Donabedian, 1980). Structure measures assess the capacity of organizations and providers to provide effective/evidence-based care likely to achieve favorable outcomes. Structure measures typically include features related to the presence of policies and procedures, personnel, physical plant, and information technology capacity and functionality. Process measures are used to assess how well a health care service provided to a patient adheres to recommendations for clinical practice based on evidence or consensus. Process measures may also be used to assess accessibility of services. Health outcomes are the “effects of care on the health status of patients and populations,” which include the patient’s improved health knowledge, health-related behavior, and satisfaction with care in addition to specific relevant health measures (Donabedian, 1988).
Various organizations have defined desirable criteria for quality measures. These criteria address such questions as importance (e.g., whether the condition or topic is common or costly and whether it has a large impact on outcomes), the evidence base or rationale supporting the measure, the scientific soundness of the measure (e.g., whether it provides valid and reliable results), the feasibility of and effort required for reporting, and the degree to which the information provided is useful to a variety of stakeholders (McGlynn, 1998; NQF, 2014c; NQMC, 2014). As an example, Box 5-1 lists the National Quality Forum’s (NQF’s) criteria for evaluating quality measures.

BOX 5-1
NQF Criteria for Evaluation of Quality Measures

- Importance to measure and report—measures should address the aspects of care with the greatest potential for driving improvements; if measures are not important, the other criteria are less meaningful (must-pass)
- Scientific acceptability of measure properties—the goal is to enable valid conclusions about quality; if measures are not reliable and valid, there is a risk of misclassification and improper interpretation (must-pass)
- Feasibility—ideally, administering the measures should impose as little burden as possible; if administration is not feasible, alternative approaches should be considered
- Usability and use—the goal is for endorsed measures to be usable for decisions related to accountability and improvement
- Harmonization and selection of best-in-class—the steward attests that the measure’s specifications have been standardized with those of related measures having the same focus and that issues with competing measures have been considered and addressed

SOURCE: Burstin, 2014.

To illustrate, some of the most widely used quality measures address care for diabetes, including control of blood sugar and annual testing to detect complications that can lead to blindness, renal failure, and amputations. These measures are considered important because diabetes is a common and costly disease, and because there is strong evidence that maintaining glycemic control can minimize the disease’s complications and that early identification of these complications can lessen further deterioration (Vinik and Vinik, 2003). Furthermore, the information needed to report these measures can be captured reliably and validly from existing data in administrative claims, laboratory results, and medical records, making the measures feasible and scientifically sound. Multiple stakeholders also can use the measures for targeting quality improvement efforts and for engaging patients in self-care.
The process for developing quality measures includes specific efforts to address each of these criteria. Key steps include evaluating the impact of the quality concern and the evidence for the likely effectiveness of specific interventions or actions by the health care system to address the concern, specifying in detail how the measure is to be calculated, and testing the measure (see Figure 5-1) (Byron et al., 2014; CMS, n.d.). Input from multiple stakeholders throughout the process is considered essential (Byron et al., 2014; NQF, 2014a). Stakeholders include consumers (whose care is the focus of measurement and who will use quality information to inform their decisions), experts in the topic area of the measures, those who will implement the measures (government, purchasers), and those who will be evaluated by the measures (providers, health plans). Input may be obtained through ongoing advice from a multistakeholder panel, targeted solicitation of input from key stakeholders, or broad input from a public comment period. While consumers have served as stakeholders advising on measure concepts in some settings, consumer participation on measure development teams has been limited.

FIGURE 5-1 The development process for quality measures.
SOURCE: Adapted from Byron et al., 2014.
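The step of specifying in detail how to calculate a measure typically means defining a denominator (the eligible population) and a numerator (those whose care met the standard). The following sketch is purely illustrative; the record fields and the HbA1c testing example are simplified assumptions, not an NQF measure specification:

```python
from dataclasses import dataclass

# Illustrative only: a rate-based quality measure is typically specified as a
# denominator (eligible population) and a numerator (those meeting the
# standard of care). All field names below are invented for this sketch.

@dataclass
class PatientRecord:
    patient_id: str
    has_diabetes: bool            # denominator eligibility
    hba1c_tested_this_year: bool  # numerator criterion

def measure_rate(records):
    """Percentage of patients with diabetes (denominator) who received
    an annual HbA1c test (numerator)."""
    denominator = [r for r in records if r.has_diabetes]
    if not denominator:
        return None  # measure is not reportable with an empty denominator
    numerator = [r for r in denominator if r.hba1c_tested_this_year]
    return 100.0 * len(numerator) / len(denominator)

records = [
    PatientRecord("A", True, True),
    PatientRecord("B", True, False),
    PatientRecord("C", False, False),  # not in the denominator
]
print(measure_rate(records))  # 50.0
```

A real specification also defines exclusions, data sources, and any risk adjustment; the testing step then examines whether the resulting rates are reliable and valid.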
A large number of quality measures have been developed by accrediting organizations such as the Joint Commission (for hospitals) and the National Committee for Quality Assurance (NCQA, for health plans). Physician groups also have developed measures; examples include the Physician Consortium for Performance Improvement, convened by the American Medical Association, and specialty societies such as the American College of Surgeons and the American Society for Clinical Oncology. Recently, the federal government has assumed a large role in measure development to support implementation of the ACA. Agencies of the U.S. Department of Health and Human Services (HHS) have contracted with a variety of organizations for the development of new measures (e.g., for the Centers for Medicare & Medicaid Services’ [CMS’s] electronic health records [EHRs] incentive program or for inpatient psychiatric facilities). Additionally, the Children’s Health Insurance Program Reauthorization Act (CHIPRA) called
for an unprecedented investment in pediatric quality measures, and many measures addressing mental health conditions are in development through that effort (AHRQ, 2010).
Given the growth in quality measurement efforts and the number of quality measures, CMS has worked to coordinate these efforts so as to avoid undue burden or mixed signals and ensure that measures are useful for multiple stakeholders (Frank, 2014; Ling, 2014). Two mechanisms supporting the rationalization of measurement and the reduction of duplication are (1) the use of a multistakeholder consensus-based process for endorsing measures, and (2) prioritization of measures for public programs.
Currently, HHS contracts with NQF, an independent, nonprofit consensus-based entity, to prioritize, endorse, and maintain valid quality performance measures. To implement its endorsement process, NQF issues calls for measures in specific content areas and convenes multistakeholder committees to review candidate measures against the criteria listed earlier in Box 5-1. The committees’ recommendations are posted for public comment, and final recommendations are made by NQF’s governing committee (NQF, 2014a). Endorsement lasts 3 years, but annual updates are required, and measures can be reevaluated when new, competing measures are proposed.
The second mechanism—prioritization of measures for public programs—is formally incorporated in the ACA. The Measures Application Partnership (MAP), convened by NQF, provides multistakeholder input prior to federal rulemaking on measures to be used in federal public reporting and performance-based payment programs. In particular, the role of the MAP is to align measures used in public and private programs and to prioritize areas for new measure development (NQF, 2014b).
To date, quality measures are lacking for key areas of MH/SU treatment. Of the 55 nationally endorsed measures related to MH/SU, just 2 address a psychosocial intervention (both dealing with intervention for substance use) (see Table 5-1). An international review of quality measures in mental health similarly revealed a lack of measures for psychosocial interventions: fewer than 10 percent of the identified measures were considered applicable to such interventions (Fisher et al., 2013). The small number of nationally endorsed quality measures addressing MH/SU reflects both limitations in the evidence base for which treatments are effective at achieving improvements in patient outcomes and the challenges of obtaining from existing clinical data the detailed information necessary to support quality measurement (Byron et al., 2014; Kilbourne et al., 2010; Pincus et al., 2011).

TABLE 5-1 Nationally Endorsed Quality Measures Related to MH/SU

|Measure||NQF #||Type|
|Depression Response at Six Months—Progress Toward Remission||1884||Outcome|
|Depression Response at Twelve Months—Progress Toward Remission||1885||Outcome|
|Depression Remission at Six Months||0711||Outcome|
|Depression Remission at Twelve Months||0710||Outcome|
|Inpatient Consumer Survey (ICS) (consumer evaluation of inpatient behavioral health care services)||0726||Outcome|
|Pediatric Symptom Checklist (PSC)||0722||Outcome|
|Controlling High Blood Pressure for People with Serious Mental Illness||2602||Outcome|
|Diabetes Care for People with Serious Mental Illness: Blood Pressure Control (<140/90 mm Hg)||2606||Outcome|
|Diabetes Care for People with Serious Mental Illness: Hemoglobin A1c (HbA1c) Poor Control (>9.0%)||2607||Outcome|
|Diabetes Care for People with Serious Mental Illness: Hemoglobin A1c (HbA1c) Control (<8.0%)||2608||Outcome|
|Promoting Healthy Development Survey (PHDS)||0011||Outcome|
|Experience of Care and Health Outcomes (ECHO) Survey (behavioral health, managed care versions)||0008||Outcome|
|Adult Current Smoking Prevalence||2020||Outcome^a|
|Depression Utilization of the PHQ-9 Tool||0712||Process|
|Antidepressant Medication Management (AMM)||0105||Process|
|Adult Major Depressive Disorder (MDD): Suicide Risk Assessment||0104||Process|
|Child and Adolescent Major Depressive Disorder: Diagnostic Evaluation||1364||Process|
|Child and Adolescent Major Depressive Disorder: Suicide Risk Assessment||1365||Process|
|Developmental Screening in the First Three Years of Life||1448||Process|
|SUB-1 Alcohol Use Screening||1661||Process|
|SUB-2 Alcohol Use Brief Intervention Provided or Offered and SUB-2a Alcohol Use Brief Intervention||1663||Process|
|SUB-3 Alcohol and Other Drug Use Disorder Treatment Provided or Offered at Discharge and SUB-3a Alcohol and Other Drug Use Disorder Treatment at Discharge||1664||Process|
|Adherence to Antipsychotic Medications for Individuals with Schizophrenia||1879||Process|
|Adherence to Mood Stabilizers for Individuals with Bipolar I Disorder||1880||Process|
|Antipsychotic Use in Persons with Dementia||2111||Process|
|HBIPS-1 Admission Screening||1922||Process|
|Follow-up After Hospitalization for Schizophrenia (7- and 30-day)||1937||Process|
|HBIPS-5 Patients Discharged on Multiple Antipsychotic Medications with Appropriate Justification||0560||Process|
|HBIPS-6 Post-Discharge Continuing Care Plan Created||0557||Process|
|HBIPS-7 Post-Discharge Continuing Care Plan Transmitted to Next Level of Care Provider Upon Discharge||0558||Process|
|HBIPS-2 Hours of Physical Restraint Use||0640||Process|
|HBIPS-3 Hours of Seclusion Use||0641||Process|
|Cardiovascular Health Screening for People with Schizophrenia or Bipolar Disorder Who Are Prescribed Antipsychotic Medications||1927||Process|
|Diabetes Screening for People with Schizophrenia or Bipolar Disorder Who Are Using Antipsychotic Medications (SSD)||1932||Process|
|Cardiovascular Monitoring for People with Cardiovascular Disease and Schizophrenia (SMC)||1933||Process|
|Diabetes Monitoring for People with Diabetes and Schizophrenia (SMD)||1934||Process|
|Substance Use Screening and Intervention Composite||2597||Process|
|Antipsychotic Use in Children Under 5 Years Old||2337||Process|
|Alcohol Screening and Follow-up for People with Serious Mental Illness||2599||Process|
|Tobacco Use Screening and Follow-up for People with Serious Mental Illness or Alcohol or Other Drug Dependence||2600||Process|
|Body Mass Index Screening and Follow-up for People with Serious Mental Illness||2601||Process|
|Diabetes Care for People with Serious Mental Illness: Hemoglobin A1c (HbA1c) Testing||2603||Process|
|Diabetes Care for People with Serious Mental Illness: Medical Attention for Nephropathy||2604||Process|
|Follow-up After Discharge from the Emergency Department for Mental Health or Alcohol or Other Drug Dependence||2605||Process|
|Diabetes Care for People with Serious Mental Illness: Eye Exam||2609||Process|
|Initiation and Engagement of Alcohol and Other Drug Dependence Treatment (IET)||0004||Process|
|Preventive Care and Screening: Unhealthy Alcohol Use: Screening and Brief Counseling||2152||Process|
|Preventive Care and Screening: Screening for Clinical Depression and Follow-up Plan||0418||Process|
|Follow-up Care for Children Prescribed ADHD Medication (ADD)||0108||Process|
|Depression Assessment Conducted||0518||Process|
|Follow-up After Hospitalization for Mental Illness (FUH)||0576||Process|
|Developmental Screening Using a Parent Completed Screening Tool (Parent report, Children 0-5)||1385||Process|
|TOB-1 Tobacco Use Screening||1651||Process|
|TOB-2 Tobacco Use Treatment Provided or Offered and the Subset Measure TOB-2a Tobacco Use Treatment||1654||Process|
|TOB-3 Tobacco Use Treatment Provided or Offered at Discharge and the Subset Measure TOB-3a Tobacco Use Treatment at Discharge||1656||Process|
^a Please note that NQF identifies #2020 as a structure measure.
SOURCE: NQF Quality Positioning System (NQF, 2015).
Most of the endorsed measures listed in Table 5-1 are used to evaluate processes of care. Of the 13 outcome measures, 4 are focused on depression. The endorsed measures address care in inpatient and outpatient settings, and several address screening and care coordination. Few address patient-centeredness.
While the NQF endorsement process focuses on performance measures for assessing processes and outcomes of care, measures used for accreditation or certification purposes often articulate expectations for structural capabilities and how those resources are used. However, these structural
measures do not currently address in detail the infrastructure needed to implement evidence-based psychosocial interventions. Examples are provided in Table 5-2 for clinical practices and hospitals.
|Source||Measure or Standard||Description|
|Chinman et al., 2003||Competency Assessment Instrument (CAI), Community Resources Scale||The CAI measures 15 competencies needed to provide high-quality care for those with severe and persistent mental illness. The Community Resources scale on the CAI is defined as “refers clients to local employment, self-help, and other rehabilitation programs” (Chinman et al., 2003).|
|State of New York||Standards for Health Homes||“The health home provider is accountable for engaging and retaining health home enrollees in care; coordinating and arranging for the provision of services; supporting adherence to treatment recommendations; and monitoring and evaluating a patient’s needs, including prevention, wellness, medical, specialist, and behavioral health treatment, care transitions, and social and community services where appropriate through the creation of an individual plan of care” (New York State Health Department, 2012).|
|NCQA||The Medical Home System Survey (MHSS) (NQF #1909)||The MHSS is used to assess the degree to which an individual primary care practice or provider has in place the structures and processes of an evidence-based patient-centered medical home. The survey comprises six composite measures, each used to assess a particular domain of the patient-centered medical home:|
|Composite 1: Enhance access and continuity
Composite 2: Identify and manage patient populations
Composite 3: Plan and manage care
Composite 4: Provide self-care support and community resources
Composite 5: Track and coordinate care
Composite 6: Measure and improve performance (NQF, 2011)
|American Nurses Association||Skill mix (registered nurse [RN], licensed vocational/practical nurse [LVN/LPN], unlicensed assistive personnel [UAP], and contract personnel) (NQF #0204)||
NSC-12.1—Percentage of total productive nursing hours worked by RNs (employee and contract) with direct patient care responsibilities by hospital unit
NSC-12.2—Percentage of total productive nursing hours worked by LPNs/LVNs (employee and contract) with direct patient care responsibilities by hospital unit
NSC-12.3—Percentage of total productive nursing hours worked by UAP (employee and contract) with direct patient care responsibilities by hospital unit
NSC-12.4—Percentage of total productive nursing hours worked by contract or agency staff (RNs, LPNs/LVNs, and UAP) with direct patient care responsibilities by hospital unit
Note that the skill mix of the nursing staff (NSC-12.1, NSC-12.2, and NSC-12.3) represents the proportions of total productive nursing hours worked by each type of nursing staff (RN, LPN/LVN, and UAP); NSC-12.4 is reported as a separate rate. The measure addresses the structure of care in acute care hospital units (NQF, 2009).
To guide the consideration of opportunities to develop quality measures for psychosocial interventions, the committee built on prior work by Brown and colleagues (2014). The discussion here is organized according to the Donabedian model for measuring quality, which uses the categories of structure, process, and outcomes (Donabedian, 1980). The following sections consider opportunities and challenges for each of these types of measures.
“Structural components have a propensity to influence the process of care . . . changes in the process of care, including variations in quality, will influence the outcomes of care, broadly defined. Hence, structural effects on outcomes are mediated through process.”
—Donabedian, 1980, p. 84
Appropriately developed and applied structure measures form the basis for establishing a systematic framework for quality measurement and improvement. Thus, structure measures are viewed as necessary to ensure that key process concepts of care can actually be implemented in a way that conforms to the evidence base linking those concepts to key outcomes (both the achievement of positive outcomes and the avoidance of negative outcomes). Importantly, structure measures generally indicate the potential for these concepts to be applied effectively and to result in the desired outcomes; they are not used to assess whether these capacities are actually implemented in accordance with existing evidence or whether desired outcomes are achieved. They can, however, be used to assess whether the organization/provider has the capabilities necessary to monitor, improve, and report on the implementation of key processes and achievement of desired outcomes.
Structure measures typically are embodied in requirements for federal programs (e.g., requirements for health plans participating in CMS’s Comprehensive Primary Care Initiative [CMS, 2015a]), for independent accreditation programs (such as the Joint Commission’s accreditation for hospitals [Joint Commission, 2015]), or for NCQA’s recognition program for patient-centered medical homes (NCQA, 2015). Structure measures are applied as well in the accreditation programs for training programs for health care providers (e.g., that of the Accreditation Council for Graduate Medical Education [ACGME]). Certification and credentialing programs also apply what are essentially structure measures for assessing whether individual providers meet standards indicating that they have the knowledge, skills, proficiency, and capacity to provide evidence-based care. Typically, accreditation processes rely on documentation submitted by organizations/providers, augmented by on-site audits, including consumer or staff interviews. Certification programs also rely on information submitted by providers, as well as written, computer-based, or oral examinations, and, increasingly, on observations of actual practice (including assessment of fidelity to a level of competency). In addition, accreditation programs often include requirements for reporting of processes and outcomes (e.g., the Joint Commission’s core measures, reporting under the United Kingdom’s Improving Access to Psychological Therapies program).
The committee envisions important opportunities to develop and apply structure measures as part of a systematic, comprehensive, and balanced strategy for enhancing the quality of psychosocial interventions. Structure measures can be used to assess providers’ training and capacity to offer evidence-based psychosocial interventions. They provide guidance on infrastructure development and best practices. They support credentialing and payment, thereby allowing purchasers and health plans to select clinics or provider organizations that are equipped to furnish evidence-based psychosocial interventions. Finally, they can support consumers in selecting providers with expertise in interventions specific to their condition or adapted to their cultural expectations (Brown et al., 2014). A framework for leveraging these structural concepts to develop quality measures for psychosocial interventions might include the following:
- Population needs assessment—Determination of the array of services/interventions to be provided based on identification and characterization of the needs of the population served by the organization, including clinical (i.e., general/preventive health, mental health, and substance use) and psychosocial needs and recovery perspectives (see IOM, 2008) (through either direct provision of services or referral arrangements with other providers). Needs assessment can also consider the diversity of the population in terms of race/ethnicity, culture, sexual identity, disability, and other factors that may affect care needs and opportunities to address disparities.
- Adoption of evidence-based practices—Development and use of internal clinical pathways (including standardized assessment of key patient-centered, recovery-oriented clinical outcomes and processes) that are based on guidelines meeting the IOM standards (or other well-established evidence); that conform to a framework for systematic, longitudinal, coordinated, measurement-based, stepped care (i.e., measurement-based care) (Harding et al., 2011); and that provide a menu of available options for the provision of evidence-based psychosocial interventions.
- Health information technology—Utilization of health information technology (including EHRs) with functionalities that include the creation of registries for the implementation of a monitoring and reporting system, for use both at the point of care and for quality improvement and accountability reporting.
- Quality improvement—Establishment of an ongoing, accountable structure/committee and activities for systematically monitoring
data related to quality and safety and implementing strategies for improvement. The committee might include substantive representation from the consumer population served, as well as providers and key leaders of the organization.
- Training and credentialing—Establishment of hiring, training, and credentialing policies to ensure that clinicians meet specific standards for fidelity in the delivery of the psychosocial (or other) interventions they provide to consumers. These policies might be augmented by the provision of ongoing case-based supervision of providers.
- Access and outcome measurement—Implementation of policies and procedures to ensure that the array of strategies, systems, and services established in the items above is, in fact, addressing the needs of key populations. For example, adequate consumer access to evidence-based interventions might be ensured through policies on hours of clinic/clinician availability, maintenance of an adequate workforce, monitoring of wait times, and assessment of consumer perspectives. Strategies for enhancing health literacy, utilizing shared decision-making tools, and providing peer support might also be implemented.
Implementing this framework would require the development of a set of measures for evaluating each structural concept. The measures noted in Table 5-3 might be part of that set but would not be the sole measures applicable to that concept.
|Measure Concept||Examples of Existing or Proposed Measures Potentially Applicable to This Concept||Data Sources|
|Capability for delivering evidence-based psychotherapy||Hiring, training, and supervision of staff||Documentation submitted by provider|
|Capability for measuring outcomes||Presence of registry with functionality for tracking and outcome assessment||Documentation submitted by provider, reports|
|Infrastructure for quality improvement||Involvement of consumers in quality improvement||On-site audits, including consumer or staff interviews|
SOURCE: Adapted from Brown et al., 2014.
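To illustrate how the structural concepts above might be operationalized for internal monitoring, the following sketch treats each concept as a yes/no capability and summarizes an organization's gaps. This is a hypothetical rendering of the committee's framework as assumed identifiers, not an accreditation standard or scoring rule:

```python
# Illustrative sketch: structural capabilities as a simple checklist used to
# summarize an organization's readiness to deliver evidence-based
# psychosocial interventions. Concept names follow the framework in the
# text; representing them as booleans and fractions is an assumption.

STRUCTURAL_CONCEPTS = [
    "population_needs_assessment",
    "evidence_based_clinical_pathways",
    "health_information_technology",
    "quality_improvement_committee",
    "training_and_credentialing",
    "access_and_outcome_measurement",
]

def readiness(capabilities):
    """capabilities: dict mapping concept -> bool (documented and verified).
    Returns (fraction of concepts met, list of gaps) for targeting
    quality-improvement effort."""
    gaps = [c for c in STRUCTURAL_CONCEPTS if not capabilities.get(c, False)]
    met = len(STRUCTURAL_CONCEPTS) - len(gaps)
    return met / len(STRUCTURAL_CONCEPTS), gaps

score, gaps = readiness({
    "population_needs_assessment": True,
    "evidence_based_clinical_pathways": True,
    "health_information_technology": False,  # e.g., no registry in place
    "quality_improvement_committee": True,
    "training_and_credentialing": True,
    "access_and_outcome_measurement": True,
})
print(round(score, 2), gaps)  # 0.83 ['health_information_technology']
```

In practice, each item would be verified through documentation and audits rather than self-report, as the accreditation processes described earlier suggest.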
A number of challenges must be considered in exploiting the opportunities for developing and implementing structure measures described above:
- While there is strong face validity for these concepts, and most of them are key components of evidence-based chronic care models, they have not been formally tested individually or together.
- Resources would be needed to support both the documentation and the verification of structures.
- Clinical organizations providing care for MH/SU disorders have less well developed information systems than does general health care, and they are excluded from the incentive programs of the Health Information Technology for Economic and Clinical Health (HITECH) Act (CMS, 2015b). Developing the health information technology and other capacities necessary to meet the structural criteria discussed above will require additional resources.
- The infrastructure for clinician training, competency assessment, and certification in evidence-based psychosocial interventions is neither well developed nor standardized at the local or national level. For MH/SU clinical organizations to implement their own clinician training and credentialing programs would be highly inefficient.
- Many providers of care for MH/SU disorders work in solo or small practices and lack access to the infrastructure assumed for the concepts discussed above. There would need to be a substantial restructuring of the practice environment and shift of incentives to encourage providers to link with organizations that could provide this infrastructure support. Incentive strategies would need to go beyond those associated with reimbursement (perhaps involving licensure and certification), because a significant proportion of providers of MH/SU care do not accept insurance (Bishop et al., 2014).
“[Measuring the process of care] is justified by the assumption that . . . what is now known to be ‘good’ medical care has been applied. . . . The estimates of quality that one obtains are less stable and less final than those that derive from the measurement of outcomes. They may,
however, be more relevant to the question at hand: whether medicine is properly practiced.”
—Donabedian, 2005, p. 694
Ideally, process measures are selected in areas in which scientific studies, whether randomized controlled trials or observational studies, have established an association between the provision of particular services and the probability of achieving desired outcomes (McGlynn, 1998). Examples include the association between receipt of guideline-concordant care and better clinical depression outcomes in routine practice settings (Fortney et al., 2001) and the association between engagement in substance abuse treatment and decreased criminal justice involvement (Garnick et al., 2007). Process measures that track access to services or encounters with MH/SU care delivery systems for which evidence of impact on outcomes is lacking may still be useful as measures of service utilization or access to care. Process measures that can be captured through existing data from administrative claims or medical records (e.g., filled prescriptions, laboratory tests and their results) have traditionally been appealing because they take advantage of existing data. The field of quality measurement, however, at least with regard to accountability measures, is shifting to outcomes and eschewing process measures unless they are proximal to outcomes. Process measures nonetheless remain important for improvement activities.
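As a concrete illustration of a claims-based process measure, the sketch below computes a follow-up-after-hospitalization rate from synthetic discharge and visit records, in the spirit of the FUH measure listed in Table 5-1. The record layout and eligibility logic are simplified assumptions for illustration, not the endorsed specification:

```python
from datetime import date

# Illustrative sketch of a claims-based process measure: the percentage of
# discharges followed by an outpatient mental health visit within a given
# window. Real specifications add exclusions, continuous-enrollment rules,
# and specific procedure/diagnosis codes; those are omitted here.

def followup_rate(discharges, visits, window_days=7):
    """discharges: list of (patient_id, discharge_date);
    visits: list of (patient_id, visit_date) outpatient MH visits.
    Returns the percent of discharges with a visit within window_days,
    or None if there are no eligible discharges."""
    if not discharges:
        return None
    met = 0
    for patient_id, d_date in discharges:
        if any(v_id == patient_id and 0 < (v_date - d_date).days <= window_days
               for v_id, v_date in visits):
            met += 1
    return 100.0 * met / len(discharges)

discharges = [("p1", date(2015, 3, 2)), ("p2", date(2015, 3, 10))]
visits = [("p1", date(2015, 3, 6)),   # 4 days after discharge: counts
          ("p2", date(2015, 4, 1))]   # 22 days after: outside 7-day window
print(followup_rate(discharges, visits))  # 50.0
```

Running the same records with `window_days=30` captures both visits, mirroring the 7- and 30-day variants that paired measures often report.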
The committee sees important opportunities to develop and apply process measures as part of a systematic, comprehensive, and balanced strategy for enhancing the quality of psychosocial interventions. Defining the processes of care associated with evidence-based psychosocial interventions is complicated. However, effective and efficient measures focused on the delivery of evidence-based psychosocial interventions are important opportunities for supporting the targeting and application of improvement strategies (Brown et al., 2014), and currently used data sources offer several opportunities to track the processes of care (see Table 5-4):
- Monitoring the delivery of psychosocial interventions as a measure of access to these services—There is growing concern about the underutilization of psychotherapy in the treatment of MH/SU disorders. Tracking the use of psychotherapy through claims data is one approach to monitoring its delivery. Claims data could be used to determine whether psychotherapy was used at all for persons with certain conditions and to better understand patterns of utilization related to timing and duration (Brown et al., 2014). Examples of strategies for assessing access include patient surveys and internal waiting list data. Because patient surveys may not provide immediate feedback on availability of services, approaches using simulated patients or “mystery shoppers” to contact providers and assess appointment availability have also been used (Steinman et al., 2012).
- Tracking the content of evidence-based psychosocial interventions—Better understanding the content of encounters for MH/SU disorders and whether evidence-based psychosocial interventions are actually provided is essential for tracking the delivery of such interventions.
  - Claims data could be used for this purpose if enhanced procedure codes were developed. More specific procedure codes could be used to capture the content and targets of psychosocial interventions, particularly if aligned with ongoing international and national efforts focused on establishing a common terminology and classification system for psychosocial interventions. These codes could be tied to structure measures related to provider credentialing. Such descriptive billing codes could relate to specific psychotherapeutic processes, and the use of such codes could be restricted to providers who have demonstrated competency, such as through credentialing (Brown et al., 2014).
  - As EHRs become more widely adopted in the delivery of MH/SU services, incorporating structured fields on the content of psychosocial interventions could facilitate better documentation and easier extraction of data for constructing quality measures. Computerized extraction of content information from medical notes is another potential approach (Brown et al., 2014). A common terminology and classification system for psychotherapy could provide the basis for coding and documenting the content of care.
  - Clinical registries are another potential opportunity for tracking care and could enable efficiency in implementation, allow standardized reporting, and support coordination across providers and systems.
- Consumer reports on the content of psychosocial interventions—Information on consumers’ experiences with care is collected routinely by health plans and provider organizations. Several existing surveys query consumers about their experiences with the delivery of MH/SU services, although they do not focus on the specific content of psychotherapy. These types of surveys could be used to gather such information. It may also be possible to link this information to clinical outcomes and client satisfaction (Brown et al., 2014). Such measures could give consumers an opportunity to assess the delivery of care and serve as a means of engaging clinicians in discussions about treatment.

| Measure Concept | Examples of Existing or Proposed Measures | Data Sources |
| --- | --- | --- |
| Access/frequency of visits | Psychotherapy visits among people with depression | Claims |
| Documentation of evidence-based psychosocial interventions | Receipt of adequate number of encounters/content of cognitive-behavioral therapy among people with posttraumatic stress disorder | Medical records or electronic health records |
| Consumer- and provider-reported content of psychotherapy | Use of peer support among people with schizophrenia; completion of recommended course of psychotherapy | Surveys of patients or providers |

SOURCE: Adapted from Brown et al., 2014.
- Provider reports on the content of care—Such reports hold some promise. One survey asked providers to rate the frequency with which they delivered each psychotherapy element over the course of treatment (Hepner et al., 2010).
A number of challenges need to be considered in the design of process measures, many related to the nature of the data source itself. Claims, EHRs, and consumer surveys all pose challenges as data sources for these measures.
Claims, while readily available, exist for the purpose of payment, not tracking the content of treatment. Procedure codes used for billing lack detail on the content of psychotherapy; the codes have broad labels such as “individual psychotherapy” and “group psychotherapy” (APA, 2013). A further complication is that state Medicaid programs have developed their own psychotherapy billing codes, and these, too, provide no detail on the content of the psychotherapy (Brown et al., 2014). A key issue, discussed in Chapter 3, is the lack of a common terminology for the various components and forms of psychosocial interventions. Such a terminology would need to be instantiated in a standardized intervention classification system like the American Medical Association’s (AMA’s) Current Procedural Terminology (CPT). The potential harmonization between the AMA CPT codes and the World Health Organization’s (WHO’s) International Classification of Health Interventions might be an opportunity for developing an approach for more useful coding of psychosocial interventions (Tu et al., 2014).
Still, billing practices vary widely, which poses a challenge to making valid comparisons across providers. Even if appropriate billing codes reflecting content could be developed, it is uncertain whether they would actually be applied in a valid manner without an audit process. As the health care system moves away from fee-for-service payment and toward bundled payment approaches, the use of such codes for billing may become less likely.
Clinical records, including EHRs and registries, have the potential to enable tracking of the receipt of evidence-based care, provided that the necessary data elements are available electronically. Clinical data registries also could be useful for tracking the processes and outcomes of care for MH/SU conditions. However, current EHRs and registries do not contain fields capturing psychosocial health or specific psychotherapy content (Glasgow et al., 2012). Detailed information on therapy sessions in EHRs could also heighten confidentiality concerns for both consumers and providers. More important, the recording of specific psychotherapies or the content of psychotherapy would represent a major change in documentation practice, and this additional burden might not be well accepted. Efforts to lessen the burden of documentation would have to be weighed against the need to ensure that reports are meaningful. Concern also has been raised about measures that allow providers to simply “check the box,” with little opportunity to verify the content of the report.
Consumer surveys need to be capable of detecting variations in the delivery of the specific content of psychotherapeutic treatment. However, research on substance use treatment and multisystemic therapy suggests that consumers may not be valid reporters on the content of psychosocial interventions they receive (Chapman et al., 2013; Schoenwald et al., 2009), although data on consumer reports of cognitive-behavioral therapy are promising (Miranda et al., 2010). Consumers may have difficulty recalling therapy sessions, the elements of psychotherapy may change during the course of treatment, and there are burdens and costs associated with data collection (Brown et al., 2014). Finally, consumers may not be interested in providing feedback, making the collection of sufficient information to make reliable comparisons across providers a challenge.
The validity of provider reporting on the content of psychotherapy is not well established. Providers tend to overestimate their delivery of treatment content, especially if a measure is linked to performance appraisals or payment (Schoenwald et al., 2011). Similarly, providers overestimate their ability to follow treatment protocols compared with the assessments of independent raters (Chapman et al., 2013). Another disadvantage is that providers may have difficulty recalling therapy sessions; the best time to query them may be immediately following a session (Brown et al., 2014).
Finally, measures for assessing the delivery of psychosocial interventions would ideally require detailed information on patient characteristics (e.g., diagnosis, severity) and the intervention (e.g., timing, content) to make it possible to determine the degree to which the intervention was implemented in accordance with the clinical trials demonstrating its effectiveness.
Given the above challenges, process measures that address access to services may be ready for implementation in the short term, while those addressing the content of care may require more detailed study and be better suited to supporting quality improvement efforts.
“Outcomes do have . . . the advantage of reflecting all contributions to care, including those of the patient. But this advantage is also a handicap, since it is not possible to say precisely what went wrong unless the antecedent process is scrutinized.”
—Donabedian, 1988, p. 1746
Of all quality measures, outcome measures have the greatest potential value for patients, families, clinicians, and payers because they indicate whether patients have improved or reached their highest level of function and whether full symptom or disease remission has been achieved. One of the earliest and most widely used conceptual models of health care outcomes, described by Wilson and Cleary (1995), integrates concepts of biomedical patient outcomes and quality-of-life measures. Wilson and Cleary identify five domains that are influenced by characteristics of both the patient and the environment: (1) biological and physiological variables, (2) symptoms, (3) functional status, (4) general health perceptions, and (5) overall quality of life. This model encompasses the interaction and causal linkages among clinical, biological, environmental, and societal variables that influence an individual’s health status. Subsequent models of health care outcomes encompass economic dimensions as well, including direct and indirect costs; resource utilization; disability; and outcomes external to the health care system, such as employment, absenteeism, incarceration, and legal charges (Velentgas et al., 2013). Other models add consumer experiences with care (Lebow, 1983; Williams, 1994) and measures reflecting full recovery from mental health disorders (Deegan, 1988; Scheyett et al., 2013).
Patient-reported outcome measures are appealing because they can be used to monitor patient progress, guide clinical decision making, and engage consumers in care. Patient-reported outcomes shift the focus from the content of the intervention to its results; quality measures that evaluate outcomes overcome the limitations of structure and process measures. Outcome measures also offer a means of making care more patient-centered by permitting consumers to report directly on their symptoms and functioning. And the measures provide tangible feedback that consumers can use for self-monitoring and for making treatment decisions.
Importantly, outcome measures can be used to identify patients who are not responding to treatment or may require treatment modifications, as well as to gauge individual provider and system performance and to identify opportunities for quality improvement (Brown et al., 2014).
Patient-reported outcomes are integral to measurement-based care (Harding et al., 2011; Hermann, 2005), which is predicated on the use of brief, standardized, specific assessment measures for target symptoms or behaviors that guide a patient-centered action plan. Without standardized measurement, the provider’s appraisal of the patient’s symptom remission may result in suboptimal care or only partial remission (Sullivan, 2008). While measurement cannot replace clinical judgment, standardized measurement at each visit or at periodic intervals regarding specific target symptoms informs both provider and patient about relative progress toward symptom resolution and restoration of a full level of function and quality of life. Measurement-based care helps both provider and patient modify and evaluate the plan of care to achieve full symptom remission and support full or the highest level of recovery from an MH/SU disorder.
The committee sees important opportunities to develop and apply quality measures based on patient-reported outcomes as part of a systematic, comprehensive, and balanced strategy for enhancing the quality of psychosocial interventions. Priority domains for these quality measures include symptom reduction/remission, functional status, patient/consumer perceptions of care, and recovery outcomes.
Symptom reduction/remission There are a number of widely used, brief, standardized measures for target symptoms. They include the Patient Health Questionnaire (PHQ)-9¹ (Kroenke et al., 2001), the Generalized Anxiety Disorder (GAD)-7² (Spitzer et al., 2006), and the Adult ADHD Self-Report Scale (ASRS) (Wolraich et al., 2003).
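To illustrate how such brief standardized scales translate item ratings into actionable scores, the following is a minimal sketch of PHQ-9 scoring; the function names are illustrative, while the 0–27 score range and severity bands follow Kroenke et al. (2001).

```python
# Minimal sketch of scoring the PHQ-9 depression scale.
# Each of the nine items is rated 0-3, giving a total of 0-27;
# the severity bands follow Kroenke et al. (2001).

def phq9_total(item_scores):
    """Sum the nine 0-3 item ratings into a total score."""
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 requires nine item scores, each rated 0-3")
    return sum(item_scores)

def phq9_severity(total):
    """Map a total score to the conventional severity band."""
    if total < 5:
        return "minimal"
    if total < 10:
        return "mild"
    if total < 15:
        return "moderate"
    if total < 20:
        return "moderately severe"
    return "severe"
```

A score below 5 is the cutoff conventionally used to define remission, which is why the same instrument can anchor both routine symptom monitoring and outcome-based quality measures.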
Functional status Functional status commonly refers to both the ability to perform and the actual performance of activities or tasks that are important for independent living and crucial to the fulfillment of relevant roles within an individual’s life circumstances (IOM, 1991). Functional ability refers to an individual’s actual or potential capacity to perform activities and tasks that one normally expects of an adult (IOM, 1991). Functional status refers to an individual’s actual performance of activities and tasks associated with current life roles (IOM, 1991). There exist a variety of functional assessment measures tailored for different populations or for condition-specific assessments using different functional domains of health. Examples include the Older Americans Resources and Services (OARS) scale (Fillenbaum and Smyer, 1981), the Functional Assessment Rating Scale (FARS) (Ward et al., 2006), and the 36-Item Short Form Health Survey (SF-36) (McDowell, 2006; Ware, 2014). For measurement of general health, well-being, and level of function, a variety of tools are available, including both the SF-36, a proprietary instrument with similar public domain versions (RAND 36-Item Health Survey [RAND-36], Veterans RAND 12-Item Health Survey [VR-12]), and the Patient Reported Outcomes Measurement Information System (PROMIS) tools (NIH, 2014). The PROMIS tools, developed through research funded by the National Institutes of Health (NIH) and in the public domain, are garnering interest because they are psychometrically sound and address key domains of physical, mental, and social functioning (Bevans et al., 2014).
When selecting functional assessment measures, one needs to be mindful of their intended use, value for clinical assessment or research, established validity and reliability, and floor and ceiling effects. This last consideration is important when evaluating functional ability in patients who may be at their highest level of the measure with little to no variability; patients at the lowest level of functioning will likewise have little variability. Change in function may not be feasible in many chronic disorders, with maintenance of functional status or prevention of further decline being the optimal possible outcome (Richmond et al., 2004).
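The floor and ceiling check described above reduces to a simple calculation: the share of respondents at the scale's minimum or maximum score. A minimal sketch follows; the 15 percent flag is a commonly cited rule of thumb rather than a fixed standard, and the function name is illustrative.

```python
# Sketch of a floor/ceiling-effect check for a functional assessment scale:
# the proportion of respondents at the scale's minimum or maximum score.
# The 15 percent threshold is a commonly cited rule of thumb, not a standard.

def floor_ceiling(scores, scale_min, scale_max, threshold=0.15):
    """Return floor/ceiling proportions and whether each exceeds the threshold."""
    n = len(scores)
    floor = sum(s == scale_min for s in scores) / n
    ceiling = sum(s == scale_max for s in scores) / n
    return {
        "floor": floor,
        "ceiling": ceiling,
        "floor_effect": floor > threshold,
        "ceiling_effect": ceiling > threshold,
    }
```

A scale showing a large ceiling effect in a high-functioning population cannot register further improvement, which is exactly the measurement problem the paragraph above describes.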
2 Patient Health Questionnaire (PHQ) Screeners. See http://www.phqscreeners.com/pdfs/03_GAD-7/English.pdf (accessed June 22, 2015).
Patient/consumer perceptions of care Information on patients’ perceptions of care enables comparisons across providers, programs, and facilities, and can help identify gaps in service quality across systems and promote effective quality improvement strategies. Dimensions of patient perceptions of care include (1) access to care, (2) shared decision making, (3) communication, (4) respect for the individual and other aspects of culturally and linguistically appropriate care, and (5) overall ratings and willingness to recommend to others. The most widely used tools for assessing patient experiences of care include the Consumer Assessment of Healthcare Providers and Systems (CAHPS) instruments for hospitals, health plans, and providers, as well as the Experience of Care and Health Outcomes (ECHO) survey, which is used to assess care in behavioral health settings (AHRQ, 2015a,b). The Mental Health Statistics Improvement Program (MHSIP) is a model consumer survey initiated in 1976 with state and federal funding (from HHS) to support the development of data standards for evaluating public mental health systems. It has evolved over the past 38 years, and the University of Washington now conducts the 32-item online Adult Consumer Satisfaction Survey (ACS) and the 26-item Youth and Family Satisfaction Survey (YFS). These two surveys are used to assess general satisfaction with services, the appropriateness and quality of services, participation in treatment goals, perception of access to services, and perceived outcomes (UW, 2013). These MHSIP surveys, used by 55 states and territories in the United States, provide a “mental health care report card” for consumers, state and federal agencies, legislative bodies, and third-party payers. Positive perceptions of care are associated with higher rates of service utilization and improved outcomes, including health status and health-related quality of life (Anhang Price et al., 2014).
Recovery outcomes Recovery increasingly is recognized as an important outcome, particularly from a consumer perspective. Research shows that people with serious mental illnesses can and do recover from those illnesses (Harding et al., 1987; Harrow et al., 2012). Personal recovery is associated with symptom reduction, fewer psychiatric hospitalizations, and improved residential stability (SAMHSA, 2011). Still, only recently has recovery become an overarching aim of mental health service systems (Slade et al., 2008).
Recovery is viewed as a process of change through which individuals improve their health and wellness, live a self-directed life, and strive to achieve their full potential (SAMHSA, 2011). As Deegan (1988, p. 1) notes, recovery is “to live, work, and love in a community in which one makes a significant contribution.” The Substance Abuse and Mental Health Services Administration (SAMHSA) has identified four dimensions that support a life in recovery: (1) health, with an individual making informed health choices that support physical and emotional well-being; (2) home, where an individual has a stable, safe place to live; (3) purpose, with an individual engaging in meaningful daily activities (e.g., job, school, volunteering); and (4) community, wherein an individual builds relationships and social networks that provide support (SAMHSA, 2011).
Measure developers have made different assumptions regarding the underlying mechanisms of recovery and included different domains in their recovery outcome measures (Scheyett et al., 2013). Several instruments—including the Consumer Recovery Outcomes System (Bloom and Miller, 2004), the Recovery Assessment Scale (RAS) (Corrigan et al., 1999; Salzer and Brusilovskiy, 2014), and the Recovery Process Inventory (Jerrell et al., 2006)—have strong psychometric properties. The RAS in particular has been used in the United States with good results. It is based on five domains: (1) confidence/hope, (2) willingness to ask for help, (3) goal and success orientation, (4) reliance on others, and (5) no domination by symptoms (Corrigan et al., 1999; Salzer and Brusilovskiy, 2014).
Quality measures based on patient-reported outcomes It is important to distinguish between the patient-reported outcome measures discussed above and the quality measures that are based on them. Table 5-5 summarizes opportunities for measuring the quality of psychosocial interventions using patient-reported outcome measures. Quality measures based on patient-reported outcome measures typically define a specific population at risk, a time period for observation, and an expected change or improvement in outcome score. For example, the CMS EHR incentive program (“Meaningful Use”) includes a quality measure (NQF #0710) assessing remission of symptoms among people with a diagnosis of depression or dysthymia at 12 months following a visit with elevated symptoms as scored using the PHQ-9 (CMS, 2015c,d).
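The anatomy of such a measure (a population at risk, an observation period, and an expected score change) can be sketched in code. This is a deliberately simplified illustration in the spirit of NQF #0710, not the official specification, which adds age, diagnosis, and timing-window criteria omitted here; the field names are hypothetical.

```python
# Simplified sketch of a remission-rate quality measure in the spirit of
# NQF #0710: of patients whose index PHQ-9 score exceeds 9, what proportion
# scores below 5 at the 12-month follow-up? Real specifications add age,
# diagnosis, and timing-window criteria omitted here; field names are illustrative.

def depression_remission_rate(patients):
    """patients: iterable of dicts with 'index_phq9' and 'followup_phq9'
    ('followup_phq9' is None when no 12-month score was recorded)."""
    # Denominator: the population at risk (elevated symptoms at the index visit).
    denominator = [p for p in patients if p["index_phq9"] > 9]
    if not denominator:
        return None  # measure undefined for an empty population
    # Numerator: those with a recorded follow-up score in the remission range.
    numerator = [
        p for p in denominator
        if p["followup_phq9"] is not None and p["followup_phq9"] < 5
    ]
    return len(numerator) / len(denominator)
```

Note that patients with no recorded follow-up score count against the rate, which is one way such measures create an incentive for routine outcome monitoring.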
Brief patient-reported or clinician-administered scales with sound psychometrics that are in the public domain could be widely adopted by health care providers and agencies. Wide-scale adoption of these scales or their mandated use by payers for reimbursement would advance understanding of best practices that yield optimal clinical outcomes. Another key opportunity is giving MH/SU providers incentives to use standardized clinical outcome reporting through either EHRs or other clinical databases.
A number of challenges are entailed in measuring MH/SU outcomes. These involve (1) determination of which measures and which outcomes to use; (2) accountability and the lack of a standardized methodology for risk adjustment related to complexity, risk profile, and comorbidities; (3) the lack of a cohesive and comprehensive plan requiring the use of standardized MH/SU outcome measures as part of routine care; and (4) the difficulty of extracting data and the lack of electronic health information.

| Measure Concept | Examples of Existing Patient-Reported Outcome Measures | Examples of Existing or Potential Quality Measures Using Patient-Reported Outcome Measures |
| --- | --- | --- |
| Recovery | Recovery Assessment Scale (RAS) | Consumers with serious and persistent mental illness who improve by x% on the RAS |
| Patient experiences of care | Experience of Care and Health Outcomes (ECHO), Consumer Assessment of Healthcare Providers and Systems (CAHPS), Mental Health Statistics Improvement Program (MHSIP) | Proportion of clients of mental health clinics who report participation in treatment decision making |
| Reduction/remission of symptoms | Patient Health Questionnaire (PHQ)-9 | Depression remission among patients with major depression and an elevated symptom score |
| Functioning/well-being | 36-Item Short Form Health Survey (SF-36), Patient Reported Outcomes Measurement Information System (PROMIS)-29 | Improvement in social functioning among consumers enrolled in managed care |

SOURCE: Adapted from Brown et al., 2014.
Determination of which measures and which outcomes Without a universally accepted set of outcome measures, clinicians and payers cannot readily compare individual patient outcomes, clinician or provider outcomes, agency outcomes, or population-wide outcomes. Few nationally endorsed measures address outcomes of care, and these few measures address only two domains—symptoms and consumer experiences. Among the NQF-endorsed outcome measures are two assessing depression symptom response, two addressing depression symptom remission, and one addressing consumer experiences with behavioral health services.³ Thus, there exists a gap in available outcome measures for the other major MH/SU disorders, as well as for quality of life and full recovery.

3 See Table 5-1 for information on outcome measures NQF #1884, #1885, #0710, #0711, and #0726.
The focus on symptom response/remission measures also does not take into account the fact that consumers with an MH/SU disorder often have multiple comorbid conditions. They also rarely receive only one psychosocial intervention, more often receiving a combination of services, such as medication management and one or more psychosocial interventions, making assessment of overall response to MH/SU services appealing. Outcome measures look at overall impact on the consumer and are particularly relevant for psychosocial interventions that have multifactorial, person-centered dimensions.
The large number of tools available for assessing diverse outcomes makes comparisons across organizations and populations highly challenging. In the CMS EHR incentive program, specification of quality measures that use patient-reported outcomes requires specific code sets (CMS, 2015b). Use of measures in the public domain can reduce the burden on health information technology vendors and providers. Consensus on tools for certain topics (e.g., the PHQ-9 for monitoring depression symptoms) allows for relative ease of implementation; however, other tools are preferred for specific populations. An initiative called PROsetta Stone is under way to link the PROMIS scales with other measures commonly used to assess patient-reported outcomes (Choi et al., 2012). In addition, efforts to develop a credible national indicator for subjective well-being that reflects “how people experience and evaluate their lives and specific domains and activities in their lives” (NRC, 2013, p. 15) have led to several advances that may be worth considering for quality measurement.
Accountability and the lack of a standardized methodology for risk adjustment Because outcomes can be influenced by myriad factors related to the person’s illness, resources, and history as well as treatment, the opportunity for a clinician or organization to influence outcomes may be limited. Determining the appropriate level of accountability for outcome measures is important since health plans or larger entities may have more opportunities for influencing outcomes and because the risk may be spread across a broader population.
Valid risk adjustment plays a critical role in the successful use of outcome measures by making it possible to avoid disincentives to care for the most complex and severely ill patients. Yet while risk adjustment models have been developed for a variety of medical disorders and surgical procedures, they are less well developed for MH/SU disorders (Ettner et al., 1998). A review of the risk adjustment literature identified 36 articles that included 72 models of utilization, 74 models of cost expenditures, and 15 models of clinical outcomes (Hermann et al., 2007). An average of 6.7 percent of the variance in these areas was explained by models using diagnostic and sociodemographic data, while an average of 22.8 percent of the variance was explained by models using more detailed clinical and quality-of-life data (Hermann et al., 2007). Risk adjustment models based on administrative or claims data explained less than one-third of the variance explained by models that included clinical assessment or medical records data (Hermann et al., 2007). Consensus on a reasonable number of clinical outcome and quality indicators is needed among payers, regulators, and behavioral health organizations to enable the development of risk adjustment models that can account for the interactions among different risk factors.
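The "variance explained" statistic cited in these comparisons is the familiar R² of a fitted model; richer clinical predictors raise it because residual error shrinks. A minimal single-predictor sketch follows, assuming ordinary least squares as the model form; real risk adjustment models use many predictors and often logistic rather than linear regression.

```python
# Sketch of how "variance explained" (R^2) is computed when comparing
# risk-adjustment models. A single-predictor ordinary-least-squares fit is
# shown for illustration; production models use many predictors.

def fit_simple_ols(x, y):
    """Ordinary least squares with one predictor: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

def r_squared(y, y_pred):
    """Proportion of outcome variance explained by the model's predictions."""
    mean_y = sum(y) / len(y)
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, y_pred))
    return 1 - ss_res / ss_tot
```

A model whose predictions equal the outcome mean explains none of the variance (R² = 0), which is the baseline against which the 6.7 percent and 22.8 percent figures above are interpreted.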
The lack of a cohesive and comprehensive plan requiring the use of standardized MH/SU outcome measures Comprehensive approaches such as the MHSIP could serve as a model for standardizing measures for MH/SU disorders; however, even that program does not extend to outcomes other than consumer satisfaction, nor does it cover individuals or care outside of the public sector. Efforts to encourage the use of outcome measurement need to be carried out at multiple levels and to involve multiple stakeholders. Consumers need to be encouraged to track their own recovery; clinicians to monitor patient responses and alter treatment strategies based on those responses; and organizations to use this information for quality improvement, network management, and accountability.
Difficulty of extracting data and lack of electronic health information Even if a basic set of outcome measures were universally endorsed, the information obtained would remain fragmented absent agencies and payers committed to developing the infrastructure needed to collect the data for the measures. Aggregating valid data on clinical outcomes is a time-consuming and costly endeavor. Currently, electronic health information that links health care across different providers and agencies is lacking. Even in self-contained systems such as a health maintenance organization (HMO), where electronic data entry can be designed for linkages across providers and levels of care within the system, it can be difficult to obtain consistently valid data (Strong et al., 1997).
As with structure and process measures, improved measurement of clinical outcomes will benefit from the universal adoption of EHRs. Universal use of EHRs will make it possible to link health care and health outcomes across different providers and agencies over time, compare clinical outcomes associated with different treatment approaches, and develop risk adjustment models through assessment of a large national dataset.
The Donabedian framework of structure, process, and outcome measures offers an excellent model for developing measures with which to assess the quality of psychosocial interventions. However, few rigorous quality measures are available for assessing whether individuals have access to or benefit from evidence-based psychosocial interventions. The factors contributing to the lack of attention to quality measurement in this area are common to MH/SU disorders in general and point to the same problems identified by the IOM in its report on MH/SU disorders (IOM, 2006). Despite the diverse players in the quality field, strategic leadership and responsibility are lacking for MH/SU care quality in general and for psychosocial interventions in particular. Furthermore, the involvement of consumers in the development and implementation of quality measures is limited in the MH/SU arena.
Systems for accountability and improvement need to focus on improving outcomes for individuals regardless of modality of treatment, yet the infrastructure for measurement and improvement of psychosocial interventions (at both the national level for measure development and the local level for measure implementation and reporting) is lacking. As a result of the lack of standardized reporting of clinical detail and variations in coding, the most widely used data systems for quality reporting fail to capture critical information needed for assessing psychosocial interventions (IOM, 2014). There has as yet been no strategic leadership to harness the potential for addressing this gap through the nation’s historic investment in health information technology.
Current quality measures are insufficient to drive improvement in psychosocial interventions. NCQA’s annual report on health care quality in managed care plans highlights the lack of improvement in several existing MH/SU quality measures and declining performance for other measures, some of which are summarized in Table 5-6 (NCQA, 2014). While there is enthusiasm for incorporating quality measures based on patient-reported outcome measures, there is no consensus on which outcomes should take priority and what tools are practical and feasible for use in guiding ongoing clinical care, as well as monitoring the performance of the health care system, with respect to treatment for MH/SU disorders.
The entity designated by HHS to assume this responsibility and leadership role needs to ensure coordination among all relevant agencies across the federal government—such as CMS, SAMHSA, the National Institute on Drug Abuse (NIDA), the National Institute of Mental Health (NIMH), the National Institute on Alcohol Abuse and Alcoholism (NIAAA), the Agency for Healthcare Research and Quality (AHRQ), the Health Resources and Services Administration (HRSA), the U.S. Department of Veterans Affairs (VA), and the U.S. Department of Defense (DoD)—in order to make sufficient resources available and avoid duplication of effort. Also essential is coordination with relevant nongovernmental organizations, such as NQF, NCQA, and the Patient-Centered Outcomes Research Institute (PCORI), as well as professional associations and private payers, to support widespread adoption of the measures developed in multipayer efforts. The designated entity needs to be responsible for using a multistakeholder process to develop strategies for identifying measure gaps, establishing priorities for measure development, and determining mechanisms for evaluating the impact of measurement activities. In these efforts, representation and consideration of the multiple disciplines involved in the delivery of behavioral health care treatment are essential. Consumer/family involvement needs to encompass participation in multistakeholder panels that guide measure development; efforts to garner broad input, such as focus groups; and specific efforts to obtain input on how to present the findings of quality measurement in ways that are meaningful to consumers/families.

| Example Psychosocial Intervention | Structure | Process | Outcome |
| --- | --- | --- | --- |
| Assertive Community Treatment | Care manager training and caseload | Fidelity assessment using the Dartmouth Assertive Community Treatment Scale (DACTS) | Percentage of patients with housing instability at initiation of treatment who are in stable housing at 6 months |
| Cognitive-Behavioral Therapy | Clinicians certified through competency-based training and assessment | Fidelity assessed through electronic health record documentation and periodic review of audiotaped sessions using a standardized assessment tool | Percentage of patients with depression who are in remission at 6 months as assessed by the Patient Health Questionnaire (PHQ)-9 |
In the short term, structure measures that set expectations for the infrastructure needed to support outcome measurement and the delivery of evidence-based psychosocial interventions need to be a priority, to establish the capacity for expanded routine clinical use of outcome measures. A second priority is the development of process measures that can be used to assess access to care (in light of concerns about the populations gaining access to MH/SU care under the ACA and the limited availability of specialty care and evidence-based services). Other process measures addressing the content of care can be used for hypothesis generation and testing with regard to quality improvement. The measurement strategy needs to take into account how performance measures can be used to support patient care in real time, as well as the quality improvement efforts of care teams, organizations, plans, and states, and needs to encompass efforts to assess the impact of policies concerning the application of quality measures at the local, state, and federal levels. HHS is best positioned to lead efforts to gain consensus on the priority of developing and applying patient-reported outcome measures for use in quality assessment and on validating such measures for gap areas such as recovery. Standardized and validated patient-reported outcome measures are necessary for performance measurement.
The committee drew the following conclusion about the development of approaches to measure quality of psychosocial interventions:
Approaches applied in other areas of health care can be applied in care for mental health and substance use disorders to develop reliable, valid, and feasible quality measures for both improvement and accountability purposes.
Recommendation 5-1. Conduct research to contribute to the development, validation, and application of quality measures. Federal, state, and private research funders and payers should establish a coordinated effort to invest in research to develop measures for assessing the structure, process, and outcomes of care, giving priority to
- measurement of access and outcomes;
- development and testing of quality measures, encompassing patient-reported outcomes in combination with clinical decision support and clinical workflow improvements;
- evaluation and improvement of the reliability and validity of measures;
- processes to capture key data that could be used for risk stratification or adjustment (e.g., severity, social support, housing);
- attention to documentation of treatment adjustment (e.g., what steps are taken when patients are not improving); and
- establishment of structures that support monitoring and improvement.
Recommendation 5-2. Develop and continuously update a portfolio of measures with which to assess the structure, process, and outcomes
of care. The U.S. Department of Health and Human Services (HHS) should designate a locus of responsibility and leadership for the development of quality measures related to mental health and substance use disorders, with particular emphasis on filling the gaps in measures that address psychosocial interventions. HHS should support and promote the development of a balanced portfolio of measures for assessing the structure, process, and outcomes of care, giving priority to measuring access and outcomes and establishing structures that support the monitoring and improvement of access and outcomes.
Recommendation 5-3. Support the use of health information technology for quality measurement and improvement of psychosocial interventions. Federal, state, and private payers should support investments in the development of new and the improvement of existing data and coding systems to support quality measurement and improvement of psychosocial interventions. Specific efforts are needed to encourage broader use of health information technology and the development of data systems for tracking individuals’ care and its outcomes over time and across settings. Registries used in other specialty care, such as bariatric treatment, could serve as a model. In addition, the U.S. Department of Health and Human Services should lead efforts involving organizations responsible for coding systems to improve standard code sets for electronic and administrative data (such as Current Procedural Terminology [CPT] and Systematized Nomenclature of Medicine [SNOMED]) to allow the capture of process and outcome data needed to evaluate mental health/substance use care in general and psychosocial interventions in particular. This effort will be facilitated by the identification of the elements of psychosocial interventions and development of a common terminology as proposed under Recommendation 3-1. Electronic and administrative data should include methods for coding disorder severity and other confounding and mitigating factors to enable the development and application of risk adjustment approaches, as well as methods for documenting the use of evidence-based treatment approaches.
AHRQ (Agency for Healthcare Research and Quality). 2010. National healthcare disparities report. http://www.ahrq.gov/research/findings/nhqrdr/nhdr11/nhdr11.pdf (accessed February 17, 2015).
_____. 2015a. CAHPS surveys and tools to advance patient-centered care. https://cahps.ahrq.gov (accessed June 15, 2015).
_____. 2015b. Experience of Care and Health Outcomes (ECHO). https://www.cahps.ahrq.gov/surveys-guidance/echo/index.html (accessed June 15, 2015).
Anhang Price, R., M. N. Elliott, A. M. Zaslavsky, R. D. Hays, W. G. Lehrman, L. Rybowski, S. Edgman-Levitan, and P. D. Cleary. 2014. Examining the role of patient experience surveys in measuring health care quality. Medical Care Research and Review 71(5):522-554.
APA (American Psychological Association). 2013. Psychotherapy CPT codes for psychologists. http://www.apapracticecentral.org/reimbursement/billing/psychotherapy-codes.pdf (accessed January 28, 2015).
Bevans, M., A. Ross, and D. Cella. 2014. Patient-Reported Outcomes Measurement Information System (PROMIS): Efficient, standardized tools to measure self-reported health and quality of life. Nursing Outlook 62(5):339-345.
Bishop, T. F., M. J. Press, S. Keyhani, and H. A. Pincus. 2014. Acceptance of insurance by psychiatrists and the implications for access to mental health care. JAMA Psychiatry 71(2):176-181.
Bloom, B. L., and A. Miller. 2004. The Consumer Recovery Outcomes System (CROS 3.0): Assessing clinical status and progress in persons with severe and persistent mental illness. http://www.crosllc.com/CROS3.0manuscript-090204.pdf (accessed December 15, 2014).
Brown, J., S. H. Scholle, and M. Azur. 2014. Strategies for measuring the quality of psychotherapy: A white paper to inform measure development and implementation. Report submitted to the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services. Contract number HHSP23320095642WC and task order number HHSP 23320100019WI. Washington, DC: Mathematica Policy Research. http://aspe.hhs.gov/daltcp/reports/2014/QualPsy.cfm (accessed July 20, 2015).
Burstin, H. 2014. Issues in quality measurement: The NQF perspective. Presentation to Committee on Developing Evidence-Based Standards for Psychosocial Interventions for Mental Disorders. Workshop on Approaches to Quality Measurement, May 19, Washington, DC. http://iom.edu/~/media/Files/ActivityFiles/MentalHealth/PsychosocialInterventions/WSI/HelenBurstin.pdf (accessed December 18, 2014).
Byron, S. C., W. Gardner, L. C. Kleinman, R. Mangione-Smith, J. Moon, R. Sachdeva, M. A. Schuster, G. L. Freed, G. Smith, and S. H. Scholle. 2014. Developing measures for pediatric quality: Methods and experiences of the CHIPRA pediatric quality measures program grantee. Academic Pediatrics 14(5):S27-S32.
Chapman, J. E., M. R. McCart, E. J. Letourneau, and A. J. Sheidow. 2013. Comparison of youth, caregiver, therapist, trained, and treatment expert raters of therapist adherence to a substance abuse treatment protocol. Journal of Consulting and Clinical Psychology 81(4):674-680.
Chinman, M., A. S. Young, M. Rowe, S. Forquer, E. Knight, and A. Miller. 2003. An instrument to assess competencies of providers treating severe mental illness. Mental Health Services Research 5(2):97-108.
Choi, S. W., T. Podrabsky, N. McKinney, B. D. Schalet, K. F. Cook, and D. Cella. 2012. PROSetta Stone™ analysis report: A Rosetta Stone for patient-reported outcomes. Vol. 1. Chicago, IL: Department of Medical Social Sciences, Feinberg School of Medicine, Northwestern University. http://www.prosettastone.org/Pages/default.aspx (accessed June 16, 2015).
CMS (Centers for Medicare & Medicaid Services). 2015a. Comprehensive primary care initiative eCQM user manual—Version 4. http://innovation.cms.gov/Files/x/cpci-ecqm-manual.pdf (accessed June 12, 2015).
_____. 2015b. An introduction to the Medicare EHR Incentive Program for eligible professionals. https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/downloads/Beginners_Guide.pdf (accessed June 15, 2015).
_____. 2015c. Measure: Depression remission at twelve months. https://ecqi.healthit.gov/ep/2014-measures-2015-update/depression-remission-twelve-months (accessed June 15, 2015).
_____. 2015d. Annual update of 2014 eligible hospitals and eligible professionals Electronic Clinical Quality Measures (eCQMs). http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/eCQM_TechNotes2015.pdf (accessed June 15, 2015).
_____. n.d. CMS measures management system blueprint (the Blueprint) v 11.0. http://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/MMS/MeasuresManagementSystemBlueprint.html (accessed October 1, 2014).
Conway, P. H., and C. Clancy. 2009. Transformation of health care at the front line. Journal of the American Medical Association 301(7):763-765.
Corrigan, P. W., D. Giffort, F. Rashid, M. Leary, and I. Okeke. 1999. Recovery as a psychological construct. Community Mental Health Journal 35(3):231-239.
Deegan, P. E. 1988. Recovery: The lived experience of rehabilitation. Journal of Psychosocial Rehabilitation 11(4):11-19.
Donabedian, A. 1980. Explorations in quality assessment and monitoring. Vol. I. Ann Arbor, MI: Health Administration Press.
_____. 1988. The quality of care. How can it be assessed? Journal of the American Medical Association 260(12):1743-1748.
_____. 2005. Evaluating the quality of medical care. 1966. Milbank Quarterly 83(4):691-729.
Ettner, S. L., R. G. Frank, T. G. McGuire, J. P. Newhouse, and E. H. Notman. 1998. Risk adjustment of mental health and substance abuse payments. Inquiry 35(2):223-239.
Fillenbaum, G. G., and M. A. Smyer. 1981. The development, validity, and reliability of the OARS multidimensional functional assessment questionnaire. Journal of Gerontology 36(4):428-434.
Fisher, C. E., B. Spaeth-Rublee, and H. A. Pincus. 2013. Developing mental health-care quality indicators: Toward a common framework. International Journal for Quality in Health Care 25(1):75-80.
Fortney, J., K. Rost, M. Zhang, and J. Pyne. 2001. The relationship between quality and outcomes in routine depression care. Psychiatric Services 52(1):56-62.
Frank, R. 2014. Presentation to Committee on Developing Evidence-Based Standards for Psychosocial Interventions for Mental Disorders. Workshop on Approaches to Quality Improvement, July 24, Washington, DC.
Garnick, D. W., C. M. Horgan, M. T. Lee, L. Panas, G. A. Ritter, S. Davis, T. Leeper, R. Moore, and M. Reynolds. 2007. Are Washington Circle performance measures associated with decreased criminal activity following treatment? Journal of Substance Abuse Treatment 33(4):341-352.
Glasgow, R. E., R. M. Kaplan, J. K. Ockene, E. B. Fisher, and K. M. Emmons. 2012. Patient-reported measures of psychosocial issues and health behavior should be added to electronic health records. Health Affairs (Millwood) 31(3):497-504.
Harding, C. M., G. W. Brooks, T. Ashikaga, J. S. Strauss, and A. Breier. 1987. The Vermont longitudinal study of persons with severe mental illness. II: Long-term outcome of subjects who retrospectively met DSM-III criteria for schizophrenia. American Journal of Psychiatry 144(6):727-735.
Harding, K. J., A. J. Rush, M. Arbuckle, M. H. Trivedi, and H. A. Pincus. 2011. Measurement-based care in psychiatric practice: A policy framework for implementation. Journal of Clinical Psychiatry 72(8):1136-1143.
Harrow, M., T. H. Jobe, and R. N. Faull. 2012. Do all schizophrenia patients need antipsychotic treatment continuously throughout their lifetime? A 20-year longitudinal study. Psychological Medicine 42(10):2145-2155.
Hepner, K. A., F. Azocar, G. L. Greenwood, J. Miranda, and M. A. Burnam. 2010. Development of a clinician report measure to assess psychotherapy for depression in usual care settings. Administration and Policy in Mental Health 37(3):221-229.
Hermann, R. C. 2005. Improving mental healthcare: A guide to measurement-based quality improvement. Washington, DC: American Psychiatric Press, Inc.
Hermann, R. C., C. K. Rollins, and J. A. Chan. 2007. Risk-adjusting outcomes of mental health and substance-related care: A review of the literature. Harvard Review of Psychiatry 15(2):52-69.
IOM (Institute of Medicine). 1990. Medicare: A strategy for quality assurance. Vol. 1, edited by K. N. Lohr. Washington, DC: National Academy Press.
_____. 1991. Disability concepts revisited: Implications for prevention. In Disability in America: Toward a national agenda for prevention, edited by A. M. Pope and A. R. Tarlov. Washington, DC: National Academy Press. Pp. 309-327.
_____. 2006. Improving the quality of care for mental and substance use conditions. Washington, DC: The National Academies Press.
_____. 2008. Cancer care for the whole patient: Meeting psychological health needs. Washington, DC: The National Academies Press.
_____. 2014. Capturing social and behavioral domains and measures in electronic health records: Phase 2. Washington, DC: The National Academies Press.
Jerrell, J. M., V. C. Cousins, and K. M. Roberts. 2006. Psychometrics of the recovery process inventory. Journal of Behavioral Health Services & Research 33(4):464-473.
Joint Commission. 2015. Hospital accreditation. http://www.jointcommission.org/accreditation/hospitals.aspx (accessed June 12, 2015).
Kilbourne, A., D. Keyser, and H. A. Pincus. 2010. Challenges and opportunities in measuring the quality of mental health care. Canadian Journal of Psychiatry 55(9):549-557.
Kroenke, K., R. L. Spitzer, and J. B. W. Williams. 2001. The PHQ-9: Validity of a brief depression severity measure. Journal of General Internal Medicine 16(9):606-613.
Lebow, J. L. 1983. Research assessing consumer satisfaction with mental health treatment: A review of findings. Evaluation and Program Planning 6:211-236.
Ling, S. M. 2014. Broad issues in quality measurement: The CMS perspective. Presentation to Committee on Developing Evidence-Based Standards for Psychosocial Interventions for Mental Disorders. Workshop on Approaches to Quality Measurement, May 19, Washington, DC. https://www.iom.edu/~/media/Files/Activity%20Files/MentalHealth/PsychosocialInterventions/WSI/Shari%20Ling.pdf (accessed December 18, 2014).
McDowell, I. 2006. Measuring health: A guide to rating scales and questionnaires, 3rd ed. New York: Oxford University Press.
McGlynn, E. A. 1998. Choosing and evaluating clinical performance measures. Joint Commission Journal on Quality Improvement 24(9):470-479.
Miranda, J., K. A. Hepner, F. Azocar, G. Greenwood, V. Ngo, and M. A. Burnam. 2010. Development of a patient-report measure of psychotherapy for depression. Administration and Policy in Mental Health and Mental Health Services Research 37(3):245-253.
NCQA (National Committee for Quality Assurance). 2014. State of health care. http://www.ncqa.org/ReportCards/HealthPlans/StateofHealthCareQuality.aspx (accessed June 15, 2015).
_____. 2015. Patient-centered medical home recognition. http://www.ncqa.org/Programs/Recognition/Practices/PatientCenteredMedicalHomePCMH.aspx (accessed June 15, 2015).
New York State Department of Health. 2012. NYS health home provider qualification standards for chronic medical and behavioral health patient populations. https://www.health.ny.gov/health_care/medicaid/program/medicaid_health_homes/provider_qualification_standards.htm (accessed January 27, 2015).
NIH (National Institutes of Health). 2014. PROMIS overview. http://www.nihpromis.org/about/overview (accessed January 28, 2015).
NQF (National Quality Forum). 2009. Nursing staff skill mix. NQF #0204. http://www.qualityforum.org/QPS/0204 (accessed January 27, 2015).
_____. 2011. Medical Home System Survey (MHSS). NQF #1909. http://www.qualityforum.org/QPS/1909 (accessed January 27, 2015).
_____. 2014a. Consensus development process. http://www.qualityforum.org/Measuring_Performance/Consensus_Development_Process.aspx (accessed April 15, 2015).
_____. 2014b. Measure applications partnership. http://www.qualityforum.org/map (accessed January 27, 2015).
_____. 2014c. Measure evaluation criteria. http://www.qualityforum.org/docs/measure_evaluation_criteria.aspx (accessed April 27, 2015).
_____. 2015. Quality positioning system. http://www.qualityforum.org/QPS/QPSTool.aspx (accessed June 15, 2015).
NQMC (National Quality Measure Clearinghouse). 2014. Tutorials on quality measures: Desirable attributes of a quality measure. http://www.qualitymeasures.ahrq.gov (accessed November 6, 2014).
NRC (National Research Council). 2013. Subjective well-being: Measuring happiness, suffering, and other dimensions of experience. Washington, DC: The National Academies Press.
Pincus, H. A., B. Spaeth-Rublee, and K. E. Watkins. 2011. Analysis & commentary: The case for measuring quality in mental health and substance abuse care. Health Affairs (Millwood) 30(4):730-736.
Richmond, T., S. T. Tang, L. Tulman, J. Fawcett, and R. McCorkle. 2004. Measuring function. In Instruments for clinical health-care research, 3rd ed., edited by M. Frank-Stromborg and S. J. Olsen. Sudbury, MA: Jones & Barlett. Pp. 83-99.
Salzer, M. S., and E. Brusilovskiy. 2014. Advancing recovery science: Reliability and validity properties of the Recovery Assessment Scale. Psychiatric Services 65(4):442-453.
SAMHSA (Substance Abuse and Mental Health Services Administration). 2011. SAMHSA’s working definition of recovery. https://store.samhsa.gov/shin/content/PEP12-RECDEF/PEP12-RECDEF.pdf (accessed January 8, 2015).
Scheyett, A., J. DeLuca, and C. Morgan. 2013. Recovery in severe mental illnesses: A literature review of recovery measures. Social Work Research 37(3):286-303.
Schoenwald, S. K., J. E. Chapman, A. J. Sheidow, and R. E. Carter. 2009. Long-term youth criminal outcomes in MST transport: The impact of therapist adherence and organizational climate and structure. Journal of Clinical Child & Adolescent Psychology 38(1):91-105.
Schoenwald, S. K., A. F. Garland, M. A. Southam-Gerow, B. F. Chorpita, and J. E. Chapman. 2011. Adherence measurement in treatments for disruptive behavior disorders: Pursuing clear vision through varied lenses. Clinical Psychology (New York) 18(4):331-341.
Slade, M., M. Amering, and L. Oades. 2008. Recovery: An international perspective. Epidemiologia e Psichiatria Sociale 17(2):128-137.
Spitzer, R. L., K. Kroenke, J. B. W. Williams, and B. Löwe. 2006. A brief measure for assessing generalized anxiety disorder: The GAD-7. Archives of Internal Medicine 166(10):1092-1097.
Steinman, K. J., K. Kelleher, A. E. Dembe, T. M. Wickizer, and T. Hemming. 2012. The use of a “mystery shopper” methodology to evaluate children’s access to psychiatric services. Journal of Behavioral Health Services & Research 39(3):305-313.
Strong, D. M., Y. W. Lee, and R. Y. Wang. 1997. Data quality in context. Communications of the ACM 40(5):103-110.
Sullivan, G. 2008. Complacent care and the quality gap. Psychiatric Services 59(12):1367-1367.
Tu, S. W., C. Nyulas, M. Tierney, A. Syed, R. Musacchio, and T. B. Üstün. 2014. A content model for health interventions. Presented at WHO—Family of International Classifications Network Annual Meeting 2014, October 11-17, Barcelona, Spain.
UW (University of Washington). 2013. Mental Health Statistics Improvement Program (MHSIP) surveys. Seattle, WA: University of Washington Department of Psychiatry and Behavioral Sciences, Public Behavioral Health and Justice Policy. https://depts.washington.edu/pbhjp/projects-programs/page/mental-health-statistics-improvement-program-adult-consumer-survey-acs (accessed June 15, 2015).
Velentgas, P., N. A. Dreyer, and A. W. Wu. 2013. Outcome definition and measurement. In Developing a protocol for observational comparative effectiveness research: A user’s guide, Ch. 6, edited by P. Velentgas, N. A. Dreyer, P. Nourjah, S. R. Smith, and M. M. Torchia. Rockville, MD: AHRQ. Pp. 71-92.
Vinik, A. I., and E. Vinik. 2003. Prevention of the complications of diabetes. American Journal of Managed Care 9(Suppl. 3):S63-S80; quiz S81-S84.
Ward, J. C., M. G. Dow, K. Penner, T. Saunders, and S. Halls. 2006. Manual for using the Functional Assessment Rating Scale (FARS). http://outcomes.fmhi.usf.edu/FARSUserManual2006.pdf (accessed September 26, 2014).
Ware, J. E. 2014. SF-36 health survey update. http://www.sf-36.org/tools/sf36.shtml (accessed January 28, 2015).
Williams, B. 1994. Patient satisfaction: A valid concept? Social Science and Medicine 38:509-516.
Wilson, I. B., and P. D. Cleary. 1995. Linking clinical variables with health-related quality of life. A conceptual model of patient outcomes. Journal of the American Medical Association 273:59-65.
Wolraich, M. L., W. Lambert, M. A. Doffing, L. Bickman, T. Simmons, and K. Worley. 2003. Psychometric properties of the Vanderbilt ADHD diagnostic parent rating scale in a referred population. Journal of Pediatric Psychology 28(8):559-568.