Previous chapters have addressed the quality of psychosocial interventions in terms of the various types, their efficacy, the potential elements they contain, approaches for assessing the efficacy of these interventions and their elements, the effectiveness of the interventions in actual clinical settings, and the development of guidelines and quality measures to influence and monitor clinical practice. However, these considerations are by themselves insufficient to improve quality. As noted in the Institute of Medicine’s (IOM’s) Quality Chasm report addressing mental health and substance use conditions (IOM, 2006), a comprehensive quality framework must consider properties beyond the interventions delivered; it must consider the context in which they are delivered. This context includes characteristics of the consumer, the qualifications of the provider, the clinic or specific setting in which care is rendered, the health system or organization in which the setting is embedded, and the regulatory and financial conditions under which it operates. Stakeholders in each of these areas can manipulate levers that shape the quality of a psychosocial intervention; shortfalls in the context of an intervention and in the manipulation of those levers can render a highly efficacious intervention unhelpful or even harmful (for examples of these levers, see Table 6-1).
Evidence-based psychosocial interventions and meaningful measurement tools are key drivers of quality improvement in the delivery of services for persons with mental health and substance use disorders; however, they will not lead to improvements in quality unless they are used appropriately and applied in a system or organization that is equipped to implement change. This chapter examines the array of levers that can be used by
various categories of stakeholders to enhance the quality of psychosocial interventions. The discussion is based on the premise that engaging the perspectives and leveraging the opportunities of multiple stakeholders can best accomplish overall system improvement.
The chapter is organized around five categories of stakeholders:
- Consumers—Whether called consumers, clients, or patients, these are the people for whose benefit psychosocial interventions are intended. Consumers and their family members have much to say about and contribute to what these interventions look like and when and how they are used. Indeed, as discussed in earlier chapters, there is growing evidence of consumers’ value as active participants in the development, quality measurement, and quality monitoring of psychosocial interventions, as well as in shared decision making in their own recovery process.
- Providers—The term is used broadly to include clinicians, rehabilitation counselors, community-based agents who intervene on behalf of individuals in need of psychosocial interventions, peer specialists, and any other professionals who deliver these interventions.
- Clinical settings/provider organizations—This term is used broadly to include clinics, practices, large health systems, medical homes, community settings, schools, jails, and other sites where psychosocial interventions are rendered. In clinical settings, quality and quality improvement are affected by some of the same factors as those that affect clinicians, but also by the practice culture, the adequacy of team-based care, clinic workflow, leadership for change and quality improvement, and clinic-level implementation efforts.
- Health plans and purchasers—These stakeholders (both public and private) work at the supraclinical level, structuring provider payment, provider networks, benefit design, and utilization management.
- Regulators—These include organizations that accredit, certify, and license providers of behavioral health services, including psychosocial interventions. This category can also include organizational regulators, which can ensure that programs are producing clinicians capable of rendering high-quality interventions or that clinics are organized to optimize and ensure the quality of the care delivered.
The levers available to each of these categories of stakeholders are summarized in Table 6-1 and discussed in detail in the following sections. A growing body of research shows the need for deliberate and strategic efforts on the part of all of these stakeholders to ensure that evidence-based psychosocial interventions are adopted, sustained, and delivered effectively in a variety of service delivery settings (Powell et al., 2012; Proctor et al., 2009).

[TABLE 6-1 Stakeholders, Levers for Influencing Quality of Care, and Examples]
Substantive consumer participation—the formal involvement of consumers in the design, implementation, and evaluation of interventions—is known to improve the outcomes of psychosocial interventions (Delman, 2007; Taylor et al., 2009). The unique experience and perspective of consumers also make their active involvement essential to quality management and improvement for psychosocial interventions (Linhorst et al., 2005). To be meaningful, the participation must be sustained over time and focused on crucial elements of the program (Barbato et al., 2014). Roles for consumers include involvement in evaluation, training, management, and service provision, as well as active participation in their own care, such as through shared decision making, self-management programs, and patient-centered medical homes. As noted in Chapter 2, participatory action research (PAR) methods engage consumers. The PAR process necessarily includes resources and training for consumer participants and cross-training among stakeholders (Delman and Lincoln, 2009).
Evidence supports the important role of consumers in program evaluation (Barbato et al., 2014; Drake et al., 2010; Hibbard, 2013). Consumers have been involved at all levels of evaluation, from evaluation design to data collection (Delman, 2007). At the design level, consumer participation helps organizations understand clients’ views and expectations for mental health care (Linhorst and Eckert, 2002), and ensures that outcomes meaningful to consumers are included in evaluations and that data are collected in a way that is acceptable to and understood by consumers (Barbato et al., 2014). Further, Clark and colleagues (1999) found that mental health consumers often feel free to talk openly to consumer interviewers, thus providing more honest and in-depth data than can otherwise be obtained. Personal interviews maximize consumer response rates overall and in populations frequently excluded from evaluation (e.g., homeless persons) (Barbato et al., 2014).
Consumers can be valuable members of the workforce training team. The active involvement of consumers in the education and training of health
care professionals has been increasing largely because of recognition that patients have unique expertise derived from their experience of illness, treatment, and related socioeconomic detriments (Towle et al., 2010). Consumer participation in clinician training has led to trainees having a more positive attitude toward people with severe mental illness, valuing them as a knowledge resource, reconsidering stereotypes and assumptions about consumers, and improving their communication skills (Taylor et al., 2009; Towle and Godolphin, 2013; Turnbull et al., 2013). Likewise, training has been shown to be effective when consumers play a significant role in developing the format and content of the training (Towle and Godolphin, 2013).
Participation in Governance
Consumer participation in decisions about a provider organization’s policy direction and management supports the development of psychosocial interventions that meet the needs of consumers (Grant, 2010; Taylor et al., 2009). Consumers’ increasing assumption of decision-making roles in provider organizations and governmental bodies has resulted in innovations that have improved the quality of care (e.g., peer support services) (Allen et al., 2010). Consumer participation in managing services directly informs organizations about consumer needs and has been strongly associated with consumers’ having information about service quality and how to access services (Omeni et al., 2014).
Consumer councils are common, and can be effective in involving clients in formal policy reviews and performance improvement projects (Taylor et al., 2009). Consumer council involvement provides staff with a better understanding of consumers’ views and expectations, increases clients’ involvement in service improvement, and can impact management decisions (Linhorst et al., 2005). Clients are more likely to participate when their program (e.g., group homes, hospitals) encourages their independence and involvement in decision making (Taylor et al., 2009).
By actively participating in discussions within treatment teams and with staff more generally, consumers bring a lived experience that can round out a more clinical view, improving the treatment decision-making process. Consumers take on a wide variety of service delivery roles as peer support workers, a general term applying to people with a lived experience of mental illness who are empathetic and provide direct emotional support for a consumer. Operating in these roles, peers can play an important part in quality management and transformation (Drake et al., 2010). In August 2007, the Centers for Medicare & Medicaid Services (CMS) issued a letter
to state Medicaid directors designating peer support as a billable service and outlining the minimum requirements that should be addressed for this role (CMS, 2007).
Shared Decision Making and Decision Support Systems for Psychosocial Interventions
Shared decision making is a collaborative process through which patients and their providers make health care decisions together, taking into account patients’ values and preferences and the best scientific evidence and patient data available (Drake et al., 2010). Key to this process are training individual providers in effective communication and supporting clients in openly expressing their service preferences.
Shared decision making has been found to be most effective when computer-based decision support systems are in place to assist providers in implementing clinical guidelines and clients in expressing treatment preferences and making informed decisions (Goscha and Rapp, 2014). These systems provide tailored assessments and evidence-based treatment recommendations for providers to consider based on patient information that is entered through an electronic health record (EHR) system (Deegan, 2010). On the consumer side, a software package elicits information from patients, at times guided by peer specialists, and prints out their goals and preferences in relation to their expressed needs and diagnosis. These systems also provide structural support to both consumers and clinicians in the care planning process—for example, through reminders for overdue services and screenings, recommendations for evidence-based psychosocial interventions, and recommendations for health behavior changes.
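The structural supports described above — overdue-service reminders and guideline-based treatment recommendations — amount to simple rules evaluated against patient records. The sketch below is purely illustrative: the record fields, screening interval, and diagnosis-to-intervention mapping are hypothetical and are not drawn from any actual EHR or decision support product.

```python
from datetime import date, timedelta

# Hypothetical patient record; all field names and values are illustrative.
patient = {
    "diagnosis": "major depressive disorder",
    "last_phq9_screen": date(2024, 1, 10),
    "current_interventions": [],
}

def overdue_screening_reminders(record, today, interval_days=90):
    """Flag standardized screenings that are past their recommended interval."""
    reminders = []
    if today - record["last_phq9_screen"] > timedelta(days=interval_days):
        reminders.append("PHQ-9 depression screening is overdue")
    return reminders

def guideline_recommendations(record):
    """Suggest evidence-based psychosocial interventions by diagnosis.

    The mapping below is a placeholder for a clinical guideline, not an
    actual one.
    """
    guideline = {
        "major depressive disorder": ["cognitive-behavioral therapy",
                                      "interpersonal psychotherapy"],
    }
    suggested = guideline.get(record["diagnosis"], [])
    # Recommend only interventions the patient is not already receiving.
    return [i for i in suggested if i not in record["current_interventions"]]

print(overdue_screening_reminders(patient, today=date(2024, 6, 1)))
print(guideline_recommendations(patient))
```

In a real system, rules like these would be driven by clinical guidelines and EHR data rather than hard-coded dictionaries, and the consumer-facing side would capture the goals and preferences elicited by the software described above.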
Behavioral health providers bring commitment and training to their work. Many, if not most, efforts to improve the quality of psychosocial interventions have focused on providers, reflecting their key role in helping clients achieve recovery and quality of life. Provider-focused efforts to improve quality of care include dissemination of treatment information, such as through manuals and guidelines; various forms of training, coaching, expert consultation, peer support, and supervision; fidelity checks; and provider profiling and feedback on performance. The Cochrane Effective Practice and Organisation of Care (EPOC) Group has conducted systematic reviews documenting the effectiveness of various provider-focused strategies for quality improvement, such as printed educational materials (12 randomized controlled trials [RCTs], 11 nonrandomized studies), educational meetings (81 RCTs), educational outreach (69 RCTs), local opinion
leaders (18 RCTs), audit and feedback (118 RCTs), computerized reminders (28 RCTs), and tailored implementation (26 RCTs) (Cochrane, 2015; Grimshaw et al., 2012). Research on the implementation of evidence-based psychosocial interventions has focused overwhelmingly on strategies that entail monitoring fidelity (also referred to as adherence and compliance) and assessing provider attitudes toward or satisfaction with the interventions (Powell et al., 2014). Other clinician-level factors that can influence and improve quality include competence, motivation, and access to diagnostic and decision-making tools. Importantly, as noted above with regard to consumers, providers actively working in clinical settings should be engaged in the quality improvement culture and the design and application of these levers.
Provider Education and Training
The delivery of quality mental health care requires a workforce adequately trained in the knowledge and skills needed for delivering evidence-based psychosocial interventions. For almost two decades, federal reports have emphasized the shortage of professionals who are trained to deliver evidence-based interventions (HHS, 1999; NIMH, 2006). Quality improvement of behavioral health care is thwarted by low awareness of evidence-based practices among providers (Brown et al., 2008), likely a result of the relatively low percentage of graduate training programs that require didactic or clinical supervision in evidence-based practices (Bledsoe et al., 2007; Weissman et al., 2006).
Several reviews have focused on the efficacy of different educational techniques used to train providers in evidence-based psychosocial treatments (e.g., Beidas and Kendall, 2010; Herschell et al., 2010; Rakovshik and McManus, 2010). While passive approaches (e.g., single-session workshops and distribution of treatment manuals) may increase providers’ knowledge and even predispose them to adopt a treatment, such approaches do little to produce behavior change (Davis and Davis, 2009; Herschell et al., 2010). In contrast, effective education and training often involve multifaceted strategies, including a treatment manual, multiple days of intensive workshop training, expert consultation, live or taped review of client sessions, supervisor trainings, booster sessions, and the completion of one or more training cases (Herschell et al., 2010). Leaders in the field of provider training also have suggested that training should be dynamic and active and address a wide range of learning styles (Davis and Davis, 2009); utilize behavioral rehearsal (Beidas et al., 2014); and include ongoing supervision, consultation, and feedback (Beidas and Kendall, 2010; Rakovshik and McManus, 2010). The effectiveness of training is dependent as well on such factors as workshop length, opportunity to practice skills, and trainer expertise. One
issue limiting the utility of training as a lever for quality improvement is that training in psychosocial treatment often is proprietary, with training fees beyond the reach of many service organizations, particularly those serving safety net populations.
A number of studies have found that after learning a new intervention, clinicians do not use the intervention quickly or frequently enough to maintain skills in its delivery over time (Cross et al., 2014, 2015). Because there are no agreed-upon standards for postgraduate training methods and assessment of skill acquisition beyond a brief knowledge-based quiz, continuing education activities and postgraduate training and certification programs vary widely in content and method. Long-term effects of training also are dependent on the amount of posttraining support that is available. Checklists, introduced in the practice setting to prompt the delivery of treatment protocols, have been shown to be moderately successful in increasing providers’ implementation of research-based practice recommendations (Albrecht et al., 2013).
Training programs can apply state-of-the-art adult learning practices at multiple levels (i.e., as part of degree-granting programs, postgraduate programs, and continuing education) to ensure that trainees are indeed adept at evidence-based psychosocial interventions. Considerable evidence supports models that include skill-building opportunities through observation of experts and practice with standardized cases (Chun and Takanishi, 2009; Cross et al., 2007; Matthieu et al., 2008; Wyman et al., 2008), access to expert consultation after training (Mancini et al., 2009), and ongoing peer support (Austin et al., 2006) to sustain skill sets. Two examples of postgraduate training in psychosocial interventions are the United Kingdom’s Improving Access to Psychological Therapies (IAPT) program and the Veterans Health Administration’s (VHA’s) National Evidence-Based Psychotherapy Dissemination and Implementation Model.
In the early 2000s the United Kingdom’s National Health Service (NHS) invested considerable funds in improving the mental health and well-being of U.K. citizens. As part of those efforts, the IAPT program, an independent, nongovernmental body consisting of experts in the various evidence-based psychotherapies, was created to prepare the workforce to provide evidence-based treatments for a variety of behavioral health problems. Although the program initially focused on training in cognitive-behavioral therapy, it has since added training in other interventions, including interpersonal psychotherapy; brief dynamic therapy; eye movement desensitization and reprocessing; mindfulness-based cognitive therapy; and family interventions for parenting, eating disorders, and psychosis (UCL, 2015). Two types of clinicians are trained: low-intensity therapists, who work with consumers suffering from mild to moderate depression and anxiety, and high-intensity therapists, who provide face-to-face psychotherapy for more severe illnesses
or complex cases. The competencies and curricula for training in these models were developed jointly by NHS and professional organizations that historically have been involved in training clinicians in these practices.
Regardless of the intervention model, high-intensity therapists undergo 1 year of training, which consists of 2 days of coursework and 3 days of clinical service each week (NHS, 2015). Therapists in training are assigned cases involving the conditions for which the treatments are indicated, are supervised weekly, and provide videotapes of their therapeutic encounters that are rated by experts. Trainees must demonstrate competence in the interventions to be certified as high-intensity therapists. Low-intensity therapists undergo a similar training process, but need undergo only 8 months of training (Layard and Clark, 2014). Although not without its initial detractors, this training program has been highly successful. As of 2012, it had resulted in 3,400 new clinicians being capable of providing evidence-based interventions in the United Kingdom, which has translated into 1.134 million people being treated for mental health problems, two-thirds demonstrating “reliable” recovery, and 45 percent showing full remission (Department of Health, 2012). The IAPT creators note that intervention expert involvement and buy-in are critical to the success of the model.
In the United States, the VHA’s National Evidence-Based Psychotherapy Dissemination and Implementation Model is an example of successful postgraduate training in evidence-based practices (see Box 6-1). The VHA also has achieved nationwide implementation of contingency management, an evidence-based treatment for substance abuse, through targeted trainings and ongoing implementation support (Petry et al., 2014). Like the United Kingdom, the VHA has been able to demonstrate enhanced quality of care provided to veterans, with clinicians showing improved clinical competencies and self-efficacy and greater appreciation for evidence-based treatments (Karlin and Cross, 2014b). These changes in practice also have led to improved clinical outcomes in patient populations (Karlin and Cross, 2014b). Since embarking on providing training and support in the delivery of evidence-based psychosocial interventions, the VHA has seen positive effects in suicidal ideation, posttraumatic stress disorder, and depression in veterans seeking care (Watts et al., 2014).
Although the efforts of the United Kingdom and the VHA to effect these changes in practice have resulted in positive outcomes, they were not without their problems. In the United Kingdom, an initial barrier to the IAPT program was having stakeholders agree to a national curriculum tied to practice guidelines. This problem was solved by actively involving professional organizations in detailing the competencies required and in creating tools with which to measure those competencies. Both the U.K. and VHA systems also suffer from long wait times to access treatment, largely because of the limited workforce equipped to provide evidence-based care. However, studies have shown that wait times in the VHA are not substantially longer than those in other health services settings (Brandenburg et al., 2015).

BOX 6-1 The VHA’s National Evidence-Based Psychotherapy Dissemination and Implementation Model

In 2007, the VHA created and deployed a competency-based training program to train existing psychologists and social workers in evidence-based psychotherapies for mental health problems such as posttraumatic stress disorder and depression, and to ensure that therapists’ competencies and skill levels would be maintained over time. The model consists of participation in an in-person workshop in which actual clinical skills are taught and practiced. The workshop is followed by 6 months of ongoing telephone consultation with experts in the evidence-based practices, as well as long-term local support to ensure sustained skills. By the end of fiscal year 2012, training had been provided to 6,400 VHA behavioral health clinicians (Karlin and Cross, 2014b). The program focused initially on cognitive-behavioral therapy but more recently has expanded to cover other evidence-based psychotherapies as well.

The process begins when regional mental health directors select providers to participate in a training organized by the VHA Central Office. In the skill-building workshop, trainers assess the providers’ skills using standardized and validated competency checklists. The providers are then instructed to identify cases with which to practice the new intervention and receive weekly telephone-based support from an expert. Providers are given clinical tools, such as manuals, videos demonstrating the practices, and patient education tools. Once the ongoing support has been completed, the providers are offered virtual office hours, when experts are available to provide consultation on challenging cases. The long-term local support consists of peer consultation, available through groups called communities of practice, to foster organizational change and support the implementation of the new practices (Ranmuthugala et al., 2011a,b).

The program has shown positive training outcomes, such as increased clinical competencies, enhanced self-efficacy, and improved knowledge and attitudes. The program also has led to moderate to large improvements in patient outcomes (Karlin and Cross, 2014b).
Behavioral health settings vary widely in organizational readiness and capacity for quality improvement (Aarons et al., 2012; Emmons et al.,
2012). Moreover, community settings for behavioral health care differ greatly from the controlled research settings where psychosocial treatments are developed and tested. Emerging evidence that effectiveness often declines markedly when interventions are moved from research to real-world settings (Schoenwald and Hoagwood, 2001) signals the need to address important ecological issues when designing and testing psychosocial treatments. Several advances in implementation science—such as hybrid research designs (Curran et al., 2012), principles of “designing for dissemination” (Brownson et al., 2012), and monitoring and ongoing adaptation to enhance quality (Chambers et al., 2013; Zayas et al., 2012)—offer promising ways to better fit psychosocial interventions to the real-world contexts in which behavioral health care is delivered.
A variety of organizational levers can enhance the quality of behavioral health care. Evidence-based care is facilitated by innovation champions within an organization and clear leadership support for quality analysis and improvement (Brown et al., 2008; Simpson and House, 2002). The implementation of evidence-based practices also is enhanced by management support for innovation, the availability of adequate financial resources, and a learning orientation within the organization (Klein and Knight, 2005). A particular leadership style—transformational leadership—is associated with a climate supportive of innovation and the adoption of evidence-based practices (Aarons and Sommerfeld, 2012). In a program for people with schizophrenia, for example, the implementation of evidence-based care was facilitated by a number of organization-level factors, including champions, provider incentives, intensive provider education, the addition of care managers, and information systems (Brown et al., 2008).
The Availability, Responsiveness, and Continuity (ARC) model is an example of a manualized multicomponent, team-based organizational strategy for quality care (Glisson and Schoenwald, 2005; Glisson et al., 2010). Designed to improve the organizational context in which services are provided, this model has been found effective in a wide range of mental health, health, and social service settings. Quality improvement collaboratives, including the Institute for Healthcare Improvement’s Breakthrough Series Collaborative Model (Ebert et al., 2012; IHI, 2003), have proven helpful to organizations in implementing interventions for physical health conditions (Pearson et al., 2005). Further research is needed to determine the effectiveness of these collaboratives for the implementation of evidence-based care for behavioral health conditions. Specially designed technical support centers external to a given organization also can support quality improvement. External facilitation, used within the VHA to implement evidence-based psychotherapies, has been found to be effective, low-cost, feasible, and scalable (Kauth et al., 2005). Likewise, the Children’s Bureau within the U.S. Department of Health and Human Services (HHS) funds
five regional Implementation Centers within its Training and Technical Assistance Network to help states and tribes improve the quality of child welfare services, including, in some cases, the implementation of evidence-based programs (ACF, n.d.).
One clear challenge faced by organizations is the cost of quality improvement efforts, above and beyond those costs associated with the delivery of psychosocial treatment itself. The adoption of new treatments and quality improvement entail costs, such as those for training, consultation, and supervision; fidelity monitoring; and infrastructure changes associated with embedding standardized assessments into routine forms and databases. Most community-based settings operate under reimbursement mechanisms that rarely cover the costs of implementing new interventions (Raghavan, 2012). Raghavan and colleagues (2008) characterize these system antecedents or requisites for evidence-based care as the “policy ecology of implementation.” The implementation of evidence-based practices requires, at the organizational level, policies that provide for the added marginal costs of treatments and support the learning of new treatments at the organizational and provider levels. Saldana and colleagues (2014) developed a tool for calculating implementation costs; the “COIN” tool provides a feasible template for mapping costs onto observable activities and examining important differences in implementation strategies for an evidence-based practice. One psychosocial intervention for behavioral problems among youth, for example, cost more to implement through a team-based approach than through individual provider implementation “as usual,” although the team-based approach was more efficient in terms of time to implementation and expenditure of staff hours. Further research is needed to identify cost-effective implementation strategies, and at the payor or regulatory level, policies are needed to leverage contractual mechanisms, utilize provider and organizational profiling, and support outcome assessment (Raghavan et al., 2008).
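The point that implementation carries costs above and beyond treatment delivery can be made concrete with a simple cost roll-up over the activity categories named above. This is only a sketch of the general idea of mapping costs onto observable implementation activities — it is not the Saldana and colleagues tool, and every dollar figure below is invented for illustration.

```python
# Illustrative year-one costs of adopting a new evidence-based intervention;
# all dollar figures are hypothetical.
implementation_costs = {
    "training workshops":     12000,
    "expert consultation":     8000,
    "clinical supervision":   15000,
    "fidelity monitoring":     6000,
    "infrastructure changes":  4000,  # e.g., embedding assessments in forms/databases
}

treatment_delivery_cost_per_client = 900  # hypothetical reimbursable cost
clients_served_in_first_year = 120

implementation_total = sum(implementation_costs.values())
delivery_total = treatment_delivery_cost_per_client * clients_served_in_first_year

# Implementation adds a marginal cost per client that reimbursement
# mechanisms rarely cover (Raghavan, 2012).
marginal_cost_per_client = implementation_total / clients_served_in_first_year

print(f"Implementation total: ${implementation_total:,}")
print(f"Reimbursable delivery total: ${delivery_total:,}")
print(f"Uncovered added cost per client in year one: ${marginal_cost_per_client:,.2f}")
```

Even under these made-up numbers, the pattern the text describes is visible: the organization bears a substantial per-client cost that sits outside the reimbursement stream for treatment delivery itself.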
Finally, although the assessment of barriers to implementation is important, the field would benefit from rigorous study of the implementation processes and specific strategies that lead to sustained adoption and delivery of evidence-based interventions.
Purchasers (including private employers and the government, in the case of insurance programs such as Medicare and Medicaid) and health plans have a number of levers available for encouraging quality improvement for psychosocial interventions. These levers include strategies targeting primarily consumers, such as enrollee benefit design, and those targeting
primarily providers, such as utilization management, patient registries, provider payment methods, and provider profiling.
Enrollee Benefit Design
Benefit design is a key strategy used by purchasers and plans to influence the use of health care services, including psychosocial interventions. By affecting the quantity and types of services used, benefit design also can affect the quality of care (Choudhry et al., 2010).
A large literature dating back more than 40 years documents that health care utilization levels tend to be lower when individuals face high out-of-pocket costs. The RAND Health Insurance Experiment, an RCT of the impact of cost sharing on health care utilization and spending conducted in the 1970s and 1980s, found that use of health care services declined sharply as cost-sharing requirements increased (Manning et al., 1988); other nationally representative surveys have yielded similar findings (Horgan, 1985, 1986). Use of ambulatory mental health services was about twice as responsive to the out-of-pocket cost faced by an enrollee as the use of ambulatory general medical services (Manning et al., 1988). More recent studies, conducted after the introduction of managed care, likewise have documented lower use of behavioral health services associated with higher cost-sharing levels (Rice and Morrison, 1994). Benefit design also can distort treatment decision making if different types of services are covered at differing levels of generosity. For example, if a plan requires much lower cost sharing for pharmacological treatments than for psychosocial interventions, individuals may be more likely to seek only the former.
Because of the relationship between cost sharing and service use, the recent movement toward high-deductible health plans, which require enrollees to pay a large deductible (anywhere from $1,000 to $5,000 or higher) before the plan covers any health care expenses, could cause some individuals to reduce or altogether forego their use of beneficial evidence-based psychosocial treatments for mental health and substance use disorders (Kullgren et al., 2010). The same is true for the shift on the part of some health plans from requiring enrollees to make flat copayments to requiring coinsurance (i.e., paying a percentage of the fee for a service) (Choudhry et al., 2010). In contrast, value-based insurance designs, which involve tailoring cost-sharing requirements to the cost-effectiveness of a given service in an effort to improve the value of care delivered (i.e., lower cost sharing for higher-value services), could result in more appropriate use of evidence-based psychosocial treatments (Eldridge and Korda, 2011).
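The stakes of these benefit-design choices can be illustrated with simple out-of-pocket arithmetic. The session fee, copayment, coinsurance rate, and deductible below are all assumed values chosen for illustration, not figures from any actual plan.

```python
def out_of_pocket(session_fee, n_sessions, design):
    """Enrollee's total out-of-pocket cost for a course of psychotherapy
    under three stylized benefit designs (all parameters hypothetical)."""
    total_charges = session_fee * n_sessions
    if design["type"] == "copay":
        return design["copay"] * n_sessions
    if design["type"] == "coinsurance":
        return design["rate"] * total_charges
    if design["type"] == "high_deductible":
        # Enrollee pays everything up to the deductible, nothing after
        # (ignoring post-deductible cost sharing for simplicity).
        return min(total_charges, design["deductible"])
    raise ValueError("unknown design")

fee, sessions = 150, 12  # e.g., a 12-session course of therapy at $150/session

print(out_of_pocket(fee, sessions, {"type": "copay", "copay": 20}))                   # 240
print(out_of_pocket(fee, sessions, {"type": "coinsurance", "rate": 0.2}))             # 360.0
print(out_of_pocket(fee, sessions, {"type": "high_deductible", "deductible": 2000}))  # 1800
```

Under these assumed numbers, the same course of treatment costs the enrollee $240, $360, or $1,800 depending solely on benefit design — the mechanism through which, as the literature above documents, design choices shape whether beneficial psychosocial treatments are used at all.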
Plans use a variety of utilization management techniques to influence the use of health care services by members. A plan’s goals for these techniques include controlling growth in health care spending and improving the quality of care—for example, by discouraging treatment overuse or misuse. Common utilization management techniques include prior authorization requirements, concurrent review, and fail-first policies (i.e., requiring an enrollee to “fail” on a lower-cost therapy before obtaining approval for coverage of a higher-cost therapy). These review processes can be burdensome for clinicians, and may encourage them to provide alternative treatments that are not subject to these techniques.
A large literature documents decreases in the use of health care services associated with utilization management techniques, with some studies suggesting that the quality of care could be adversely affected for some individuals (Newhouse et al., 1993). In the case of mental health–related prescription drugs, for example, the implementation of prior authorization requirements has been associated with reductions in the use of medications subject to prior authorization and lower medication expenditures, but also with reduced medication compliance and sometimes higher overall health care expenditures (e.g., Adams et al., 2009; Law et al., 2008; Lu et al., 2010; Motheral et al., 2004; Zhang et al., 2009). Similarly, the use of fail-first policies for prescription drugs (sometimes referred to as “step therapy”) has been associated with lower prescription drug expenditures (e.g., Farley et al., 2008; Mark et al., 2010); however, one study of a fail-first policy for antidepressant medications found that adoption of this policy was associated with an increase in mental health–related inpatient admissions, outpatient visits, and emergency room visits for antidepressant users in affected plans (Mark et al., 2010). Thus, the use of these tools can have both intended and unintended outcomes, and these outcomes can be linked to quality of care. However, a carefully constructed utilization management strategy could serve to improve the quality of psychosocial interventions if it resulted in more appropriate use of these interventions among those most likely to benefit from them. On the other hand, as with benefit design, the differential application of utilization management across treatment modalities could affect treatment decision making (i.e., individuals might be less likely to use services subject to stricter utilization management).
Selective contracting and network management are another set of utilization management tools used by plans that can influence the provision of psychosocial interventions. Plans typically form exclusive provider networks, contracting with a subset of providers in the area. Under this approach, plans generally provide more generous coverage for services delivered by network providers than for those delivered by providers outside the network. As
a result, plans often can negotiate lower fees in exchange for the patient volume that will likely result from being part of the plan’s network. To ensure the availability of evidence-based psychosocial interventions, a plan’s provider network must include adequate numbers of providers with skills in delivering these interventions who are accepting new patients. Importantly, plans will need tools with which to determine the competence of network providers in delivering evidence-based treatments. Network adequacy has been raised as a concern in the context of new insurance plans offered on the state-based health insurance exchanges under the Patient Protection and Affordable Care Act (Bixby, 1998).
As discussed in Chapter 1, the Mental Health Parity and Addiction Equity Act requires parity in coverage for behavioral health and general medical services. Parity is required in both quantitative treatment limitations (e.g., copays, coinsurance, inpatient day limits, outpatient visit limits) and nonquantitative treatment limitations, including the use of utilization management techniques by plans. Thus, plans are prohibited from using more restrictive utilization management for mental health and substance use services than for similar types of general medical services. However, the regulations would not govern differential use of utilization management techniques across different mental health/substance use treatment modalities (e.g., drugs versus psychosocial treatments).
The methods used to pay health care providers for the services they deliver influence the types, quantity, and quality of care received by consumers. Historically, providers typically were paid on a fee-for-service (FFS) basis, with no explicit incentives for performance or quality of care. FFS payment creates incentives for the delivery of more services, as each service brings additional reimbursement, but does not encourage the coordination of care or a focus on quality improvement. Since their introduction more than 20 years ago, managed behavioral health care carve-outs—a dominant method of financing mental health/substance use care whereby specialty benefits for this care are separated from the rest of health care benefits and managed by a specialty managed care vendor—also have shaped the financing and delivery of behavioral health services. These arrangements allow the application of specialty management techniques for behavioral health care and help protect a pool of funds for behavioral health services (because a separate budget and contract are established just for these services). By definition, however, carve-out contracts increase fragmentation in service delivery and distort clinical decision making to some extent. For example, risk-based carve-out contracts have traditionally excluded psychiatric medications, giving carve-out organizations an incentive to encourage the use
of medications over psychosocial interventions when the two types of interventions could otherwise be viewed as substitutable (Huskamp, 1999).
Over the past several years, two trends have been emerging: (1) a move away from FFS payment toward bundled payment arrangements, a form of risk-based payment under which providers face some level of financial risk for the health care expenditures of a given patient population; and (2) increasing use of pay-for-performance (P4P) approaches in provider contracts.
Instead of reimbursing each provider individually for every service delivered to a patient under an FFS model, bundled payment models involve fixed payments for bundles of related services. The bundle of services can be defined relatively narrowly (e.g., all physician and nonphysician services delivered during a particular inpatient stay) or more broadly, with the broadest bundle including all services provided to an individual over the course of a year (i.e., a global budget). The current Medicare accountable care organization (ACO) demonstration programs fall somewhere in the middle of this continuum, including almost all services in the bundle but placing the large provider organizations that serve as ACOs at only limited—not full—financial risk for total health care spending.
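The contrast between FFS and bundled payment can be reduced to a simple revenue calculation. The sketch below is purely illustrative; the episode's services, fees, and bundle amount are hypothetical and not drawn from any actual payment schedule.

```python
# Illustrative contrast between fee-for-service (FFS) revenue and a fixed
# bundled payment for one episode of care. All amounts are hypothetical.

def ffs_revenue(services):
    """FFS: each delivered service generates its own fee, so provider
    revenue grows with the volume of services delivered."""
    return sum(fee for _name, fee in services)

def bundled_revenue(services, bundle_payment):
    """Bundled payment: a single fixed payment covers every service in
    the episode, so delivering more services adds cost but no revenue."""
    return bundle_payment

# A hypothetical episode of behavioral health care:
episode = [
    ("intake assessment",      200.0),
    ("psychotherapy session",  150.0),
    ("psychotherapy session",  150.0),
    ("medication visit",       100.0),
]

if __name__ == "__main__":
    print(ffs_revenue(episode))             # 600.0 -- rises with each service
    print(bundled_revenue(episode, 550.0))  # 550.0 -- fixed regardless of volume
```

Because the bundle is fixed, every additional service delivered within it reduces the provider's margin; this is the source of both the efficiency incentive and the stinting concern discussed below.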
Bundled payment arrangements create incentives for efficiency in the delivery of all services included in the bundle and for greater coordination of care, in addition to providing incentives to substitute services not included in the bundle (and thus reimbursed outside of the bundled payment) where possible. These arrangements also raise concerns about stinting and poor quality of care to the extent that maintaining or improving quality can be costly. In the case of psychosocial interventions, there is concern that provider organizations operating under a global full risk payment contract, with strong incentives for efficiency in service delivery, could reduce the delivery of effective psychosocial interventions, since for many of these interventions quality is difficult to measure and payment systems offer no incentive for its provision (Mechanic, 2012).
Both public and private purchasers and plans also have embraced P4P approaches to encouraging quality improvement. Under P4P, clinicians or provider organizations receive bonuses if they meet or exceed certain quality thresholds that are specified in provider contracts. While the literature on P4P strategies suggests that they often result in improved quality as
measured by the metrics used, the improvements often are relatively small in magnitude and may be somewhat narrowly focused on the clinical areas that are targeted through the measures (Colla et al., 2012; Mullen et al., 2010; Werner et al., 2013; Wilensky, 2011).
Risk-based payment models currently in use for Medicare and some commercial payers include a P4P component, with a set of performance metrics and associated financial incentives. The P4P components are included in the risk-based contracts in an effort to ensure that quality of care is maintained or improved in the face of greater provider financial risk for expenditures. Given such financial risk, provider organizations may be more likely to discourage the use of treatments with no associated quality metrics, or to be less focused on ensuring the quality of those treatments, relative to treatments for which financial incentives are included in the contract. This concern underscores the importance of incorporating validated quality metrics for psychosocial treatments in P4P systems. For any metrics based on outcome measures, it will be important for the P4P methodology to account for differences in patient case mix to counteract incentives for selection behavior on the part of clinicians and provider organizations.
Provider Profiling and Public Reporting
The collection of data and issuance of periodic reports to providers on their performance relative to that of other providers in their practice setting, provider group, or overall plan or payer has been carried out in the medical arena for many years. Provider profiling is based on the premise that giving providers feedback that compares their performance with that of others will motivate them to improve in areas in which they may be underperforming. This is one strategy that could be incorporated into a quality improvement system adopted by providers, plans, and purchasers in an effort to improve the quality of psychosocial interventions.
Evidence on the effectiveness of profiling in the medical arena has been mixed. A review by the Cochrane Collaboration found evidence of improvement in clinical standards (Jamtvedt et al., 2006), although a later study found mixed evidence that provider profiling served as a catalyst for quality improvement activities (Fung et al., 2008).
An extension of provider profiling is the public reporting of information from provider profiles. Public reporting systems, such as Medicare’s Nursing Home Compare and New York State’s reporting system for cardiovascular disease providers, can include information at the organization level (e.g., hospital, group practice) or at the individual clinician level. In theory, public reporting can improve quality of care in two primary ways. First, by providing consumers and family members with information on the quality of care delivered by different clinicians or provider organizations,
public reporting can facilitate consumer selection of high-quality providers. Second, public reporting of quality metrics can encourage individual clinicians and provider organizations to engage in efforts to improve the quality of care, both to protect their reputation and to attract new patients.
A literature review on public reporting of quality measures conducted by the Agency for Healthcare Research and Quality (AHRQ, 2012), however, found little or no effect of public reporting on provider selection by consumers and family members. Consumers often said they were unaware of the publicly reported data when making provider selection decisions, or that they found the reports confusing or lacking in key information needed for making a decision (AHRQ, 2012). On the other hand, the review found evidence of a positive effect of public reporting systems on the behavior of clinicians and provider organizations, including improvements in quality measures over time among profiled providers, increased focus on quality improvement activities, evidence that some surgeons with the worst outcomes left surgical practice, and hospitals offering new services in response to public reporting (AHRQ, 2012). The review also found that the impacts of public reporting appeared to be greater in more competitive versus less competitive health care markets (AHRQ, 2012).
As with P4P systems, provider profiling and public reporting systems must account for differences in patient case mix to counteract incentives for selection behavior on the part of clinicians and provider organizations.
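One common way profiling systems adjust for case mix is indirect standardization: a provider's observed outcome count is compared with the count expected given each patient's predicted risk. The sketch below is purely illustrative; the patient-level predicted risks would in practice come from a separate statistical model, and the numbers here are invented.

```python
# Minimal sketch of indirect standardization, a common case-mix adjustment
# in provider profiling. All patient outcomes and risk scores are hypothetical.

def observed_expected_ratio(outcomes, predicted_risks):
    """Return the observed-to-expected (O/E) ratio for one provider.
    O/E > 1 means worse-than-expected outcomes given the case mix;
    O/E < 1 means better than expected."""
    observed = sum(outcomes)          # adverse outcomes that actually occurred
    expected = sum(predicted_risks)   # count predicted from patient risk scores
    return observed / expected

if __name__ == "__main__":
    # A provider treating sicker-than-average patients: 2 adverse outcomes
    # in 4 patients (a raw rate of 50%), but the model expected 2.2 ...
    outcomes = [1, 0, 1, 0]
    risks    = [0.6, 0.5, 0.7, 0.4]
    # ... so the adjusted O/E ratio is below 1: better than expected.
    print(observed_expected_ratio(outcomes, risks))  # ~0.909
```

Without this adjustment, a provider who accepts sicker patients would look worse on raw outcome rates, creating exactly the selection incentive the text describes.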
In the United States, professional organizations (e.g., the American Psychiatric Association, the American Psychological Association, the Council on Social Work Education) and associated accreditation and certification organizations (e.g., the Accreditation Council for Graduate Medical Education, the American Board of Psychiatry and Neurology) and state licensing and accreditation agencies determine the competencies that professional schools are required to teach their students, and evaluate the success of the schools based on a set of predetermined standards. For example, the American Psychological Association accredits graduate programs and clinical internships based on each program’s ability to document graduation rates, the percentage of graduates who become licensed, and whether the program teaches basic core competencies (APA, n.d.). In its new accreditation standards, still in the public comment stage, the American Psychological Association calls on doctoral training programs to focus on “empirically supported intervention procedures.” Likewise, the 2008 accreditation standards of the Council on Social Work Education require that social work trainees “employ evidence-based interventions.” The Accreditation Council for Graduate Medical Education and the American Board of Psychiatry and Neurology require,
as a condition of accreditation, that residents be trained in cognitive-behavioral therapy and that they be able to summarize the evidence base for that therapy; the same requirements now apply as well to psychodynamic psychotherapy and supportive psychotherapies (ACGME and ABPN, 2013). Nonetheless, these efforts by professional and accrediting bodies are nascent; even when these bodies require that students, residents, and fellows be trained in evidence-based practices, programs are given little guidance as to which practices are indeed evidence based, what models of training are most effective, or how the acquisition of core competencies should be assessed. As a result, accredited training programs vary considerably in the degree to which they offer training in evidence-based practices. If professional and accrediting organizations are to exert greater leadership in ensuring effective training in evidence-based practices, they will need to reach consensus on the competencies needed to implement those practices and on the best means of determining that a training program is successfully preparing its students in their delivery. This approach has been used successfully in training models developed by the U.K. Improving Access to Psychological Therapies (IAPT) program and the Veterans Health Administration (VHA). In the United States, professional organizations and intervention authors and experts could work together to create a competence framework, as well as ensure that the training methods are effective and that those trained can demonstrate competence.
At the postgraduate and continuing education level, providers are required in many states to accrue continuing education credits to maintain licensure. Providers are known to value training in evidence-based practices that accords with their clients’ needs, that offers continuing education opportunities, and that is advanced beyond the “beginning level” (Powell et al., 2013). Continuing education as required by state licensing or professional certification organizations thus can be used as a lever for quality improvement. As with professional schools, state professional organizations may need to determine whether a continuing education activity meets quality standards for adult learning and establish clear guidance on what competencies may need to be renewed.
A growing body of research demonstrates the effectiveness of quality improvement efforts focused on each of the stakeholders discussed in this chapter. Yet growing evidence suggests that multifaceted implementation strategies targeting multiple levels of service provision—consumers, providers, organizations, payers, and regulators—are most effective. For example, effective implementation of assertive community treatment was shown to require multilevel, coordinated efforts on the part of state mental health authorities, senior program administrators, and program
staff (Proctor et al., 2009). High-fidelity implementation of the therapy was facilitated by dedicated billing mechanisms, technical assistance centers, and program monitoring (Mancini et al., 2009). Yet while some studies testing comprehensive or blended strategies have shown positive effects (Forsner et al., 2010; Glisson et al., 2010), the same is true for more narrowly focused strategies (Herschell et al., 2010; Lochman et al., 2009). With more than 60 different implementation strategies being reported in the literature (Powell et al., 2012), encompassing planning, training, financing, restructuring, management, and policy approaches, research is needed to identify the most effective, efficient, and parsimonious approaches. The National Institutes of Health (NIH) has designated as a priority the effort to “identify, develop, and refine effective and efficient methods, structures, and strategies to disseminate and implement” innovations in health care (NIH, 2009).
Implementation research on psychosocial interventions is a particular need (Goldner et al., 2011; Herschell et al., 2010). For instance, a scoping review of the published literature focused on implementation research in mental health identified 22 RCTs, only 2 of which tested psychosocial interventions in mental health settings (Goldner et al., 2011). This finding stands in contrast to the broader field of health care, in which the number of RCTs testing implementation strategies dwarfs the number in mental health and social service settings. This differential led Landsverk and colleagues (2011) to conclude that the field of mental health has lagged behind other disciplines in building an evidence base for implementation.
This chapter and the report as a whole have described the need to consider quality not as a binary, static characteristic but as existing within a complex context and as part of a cycle of actions leading to the implementation of quality improvement by the multiple stakeholders involved in the delivery of care for mental health and substance use disorders. These stakeholders—from consumers who receive psychosocial interventions; to the providers who render the interventions; to their clinics and the organizations in which the clinics are embedded; to payers, regulators, and policy makers—each have levers, incentives, and other means by which they can move the system toward higher quality. These contextual factors and levers interact with one another in complex ways, and the means by which their effects occur are not yet fully understood. Much of the evidence surrounding the use of these levers to improve quality is weak but promising, and needs to be augmented with further research.
The committee drew the following conclusion about improving the quality of psychosocial interventions:
Multiple stakeholders should apply levers, incentives, and other means to create learning health systems that continually progress toward higher quality (as recommended in previous IOM Quality Chasm reports).
Recommendation 6-1. Adopt a system for quality improvement. Purchasers, plans, and providers should adopt systems for measuring, monitoring, and improving quality for psychosocial interventions. These systems should be aligned across multiple levels. They should include structure, process, and outcome measures and a combination of financial and nonfinancial incentives to ensure accountability and encourage continuous quality improvement for providers and the organizations in which they practice. Quality improvement systems also should include measures of clinician core competencies in the delivery of evidence-based psychosocial interventions. Public reporting systems, provider profiling, pay-for-performance, and other accountability approaches that include outcome measures should account for differences in patient case mix (e.g., using risk adjustment methods) to counteract incentives for selection behavior on the part of clinicians and provider organizations, especially those operating under risk-based payment.
Recommendation 6-2. Support quality improvement at multiple levels using multiple levers. Purchasers, health care insurers, providers, consumers, and professional organizations should pursue strategies designed to support the implementation and continuous quality improvement of evidence-based psychosocial interventions at the provider, clinical organization, and health system levels.
- The infrastructure to support high-quality treatment includes ongoing provider training, consumer and family education, supervision, consultation, and leadership to enhance organizational culture and foster a climate for continuously learning health care systems. Other core aspects of infrastructure for the implementation and quality improvement of evidence-based psychosocial interventions include the use of registries, electronic health records, and computer-based decision support systems for providers and consumers, as well as technology-supported technical assistance and training.
- This infrastructure could be fostered by a nonprofit organization, supported and funded through a public–private partnership (e.g., the Institute for Healthcare Improvement), that would provide technical assistance to support provider organizations and clinicians in quality improvement efforts.
Recommendation 6-3. Conduct research to design and evaluate strategies that can influence the quality of psychosocial interventions. Research is needed to inform the design and evaluation of policies, organizational levers, and implementation/dissemination strategies that can improve the quality of psychosocial interventions and health outcomes. Potential supporters of this research include federal, state, and private entities.
- Policies should be assessed at the patient, provider, clinical organization/system, payer, purchaser, and population levels.
- Examples might include research to develop and assess the impact of benefit design changes and utilization management tools, new models of payment and delivery, systems for public reporting of quality information, and new approaches for training in psychosocial interventions.
Aarons, G. A., and D. H. Sommerfeld. 2012. Leadership, innovation climate, and attitudes toward evidence-based practice during a statewide implementation. Journal of the American Academy of Child and Adolescent Psychiatry 51(4):423-431.
Aarons, G., J. Horowitz, L. Dlugosz, and M. Ehrhart. 2012. The role of organizational processes in dissemination and implementation research. In Dissemination and implementation research in health: Translating science to practice, edited by R. C. Brownson, G. A. Colditz, and E. K. Proctor. New York: Oxford University Press. Pp. 128-153.
ACF (Administration for Children and Families). n.d. Training and technical assistance. http://www.acf.hhs.gov/programs/cb/assistance (accessed January 1, 2015).
ACGME (Accreditation Council for Graduate Medical Education) and ABPN (American Board of Psychiatry and Neurology). 2013. The psychiatry milestone project. http://acgme.org/acgmeweb/Portals/0/PDFs/Milestones/PsychiatryMilestones.pdf (accessed June 17, 2015).
Adams, A. S., F. Zhang, R. F. LeCates, A. J. Graves, D. Ross-Degnan, D. Gilden, T. J. McLaughlin, C. Lu, C. M. Trinacty, and S. B. Soumerai. 2009. Prior authorization for antidepressants in Medicaid: Effects among disabled dual enrollees. Archives of Internal Medicine 169(8):750-756.
AHRQ (Agency for Healthcare Research and Quality). 2012. Closing the quality gap series: Public reporting as a quality improvement strategy. http://www.effectivehealthcare.ahrq.gov/search-for-guides-reviews-and-reports/?pageaction=displayproduct&productID=1198 (accessed June 17, 2015).
Albrecht, L., M. Archibald, D. Arseneau, and S. D. Scott. 2013. Development of a checklist to assess the quality of reporting of knowledge translation interventions using the Work-group for Intervention Development and Evaluation Research (WIDER) recommendations. Implementation Science 8(52).
Allen, J., A. Q. Radke, and J. Parks. 2010. Consumer involvement with state mental health authorities. Alexandria, VA: National Association of Consumer/Survivor Mental Health Administrators.
APA (American Psychological Association). n.d. Understanding APA accreditation. http://www.apa.org/ed/accreditation/about (accessed June 17, 2015).
Austin, Z., A. Marini, N. MacLeod Glover, and D. Tabak. 2006. Peer-mentoring workshop for continuous professional development. American Journal of Pharmaceutical Education 70(5):117.
Barbato, A., B. D’Avanzo, V. D’Anza, E. Montorfano, M. Savio, and C. G. Corbascio. 2014. Involvement of users and relatives in mental health service evaluation. The Journal of Nervous and Mental Disease 202(6):479-486.
Beidas, R. S., and P. C. Kendall. 2010. Training therapists in evidence-based practice: A critical review of studies from a systems contextual perspective. Clinical Psychology: Science and Practice 17:1-30.
Beidas, R. S., W. Cross, and S. Dorsey. 2014. Show me, don’t tell me: Behavioral rehearsal as a training and analogue fidelity tool. Cognitive and Behavioral Practice 21(1):1-11.
Bixby, T. D. 1998. Network adequacy: The regulation of HMO’s network of health care providers. Missouri Law Review 63:397.
Bledsoe, S. E., M. M. Weissman, E. J. Mullen, K. Ponniah, M. Gameroff, H. Verdeli, L. Mufson, H. Fitterling, and P. Wickramaratne. 2007. Empirically supported psychotherapy in social work training programs: Does the definition of evidence matter? Research on Social Work Practice 17:449-455.
Brandenburg, L., P. Gabow, G. Steele, J. Toussaint, and B. Tyson. 2015. Innovation and best practices in health care scheduling. Discussion paper. Washington, DC: Institute of Medicine. http://www.iom.edu/schedulingbestpractices (accessed June 15, 2015).
Brown, A. H., A. N. Cohen, M. J. Chinman, C. Kessler, and A. S. Young. 2008. EQUIP: Implementing chronic care principles and applying formative evaluation methods to improve care for schizophrenia: QUERI Series. Implementation Science 3:9.
Brownson, R. C., J. A. Jacobs, R. G. Tabak, C. M. Hoehner, and K. A. Stamatakis. 2012. Designing for dissemination among public health researchers: Findings from a national survey in the United States. American Journal of Public Health 103(9):1693-1699.
Chambers, D. A., R. E. Glasgoe, and K. C. Stange. 2013. The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implementation Science 8:117.
Choudhry, N. K., M. B. Rosenthal, and A. Milstein. 2010. Assessing the evidence for value-based insurance design. Health Affairs 29(11):1988-1994.
Chun, M. B., and D. M. Takanishi, Jr. 2009. The need for a standardized evaluation method to assess efficacy of cultural competence initiatives in medical education and residency programs. Hawaii Medical Journal 68(1):2-6.
Clark, C. C., E. A. Scott, K. M. Boydell, and P. Goering. 1999. Effects of client interviewers on client-reported satisfaction with mental health services. Psychiatric Services 50(7):961-963.
CMS (Centers for Medicare & Medicaid Services). 2007. Letter to state Medicaid directors. SMDL #07-011. August 15, 2007. http://downloads.cms.gov/cmsgov/archiveddownloads/SMDL/downloads/SMD081507A.pdf (accessed June 16, 2015).
Cochrane. 2015. Cochrane effective practice and organisation of care group: Our reviews. http://epoc.cochrane.org/our-reviews (accessed June 18, 2015).
Colla, C. H., D. E. Wennberg, E. Meara, J. S. Skinner, D. Gottlieb, V. A. Lewis, C. M. Snyder, and E. S. Fisher. 2012. Spending differences associated with the Medicare Physician Group Practice Demonstration. Journal of the American Medical Association 308(10):1015-1023.
Cross, W., M. M. Matthieu, J. Cerel, and K. L. Knox. 2007. Proximate outcomes of gatekeeper training for suicide prevention in the workplace. Suicide and Life-Threatening Behavior 37(6):659-670.
Cross, W. F., A. R. Pisani, K. Schmeelk-Cone, Y. Xia, X. Tu, M. McMahon, J. L. Munfakh, and M. Gould. 2014. Measuring trainer fidelity in the transfer of suicide prevention training. Crisis 35(3):202-212.
Cross, W., J. West, P. A. Wyman, K. Schmeelk-Cone, Y. Xia, X. Tu, M. Teisl, C. H. Brown, and M. Forgatch. 2015. Observational measures of implementer fidelity for a school-based preventive intervention: Development, reliability, and validity. Prevention Science: The Official Journal of the Society for Prevention Research 16(1):122-132.
Curran, G. M., M. Bauer, B. Mittman, J. M. Pyne, and C. Stetler. 2012. Effectiveness-implementation hybrid designs: Combining elements of clinical effectiveness and implementation research to enhance public health impact. Medical Care 50(3):217-226.
Davis, D. A., and N. Davis. 2009. Educational interventions. In Knowledge translation in health care: Moving from evidence to practice, edited by S. Straus, J. Tetroe, and I. D. Graham. Oxford, England: Wiley-Blackwell. Pp. 113-123.
Deegan, P. E. 2010. A web application to support recovery and shared decision making in psychiatric medication clinics. Psychiatric Rehabilitation Journal 34(1):23-28.
Delman, J. 2007. Consumer-driven and conducted survey research in action. In Towards best practices for surveying persons with disabilities, Vol. 1, edited by T. Kroll, D. Keer, P. Placek, J. Cyril, and G. Hendershot. Hauppauge, NY: Nova Publishers. Pp. 71-87.
Delman, J., and A. Lincoln. 2009. Service users as paid research workers: Principles for active involvement and good practice guidance. In Handbook of service user involvement in mental health research, edited by J. Wallcraft, B. Schrank, and M. Amering, New York: John Wiley & Sons, Ltd. Pp. 139-151.
Department of Health (U.K.). 2012. IAPT three-year report: The first million patients. http://www.iapt.nhs.uk/silo/files/iapt-3-year-report.pdf (accessed June 17, 2015).
Drake, R. E., P. E. Deegan, and C. Rapp. 2010. The promise of shared decision making in mental health. Psychiatric Rehabilitation Journal 34(1):7-13.
Ebert, L., L. Amaya-Jackson, J. M. Markiewicz, C. Kisiel, and J. A. Fairbank. 2012. Use of the breakthrough series collaborative to support broad and sustained use of evidence-based trauma treatment for children in community practice settings. Administration and Policy in Mental Health 39(3):187-199.
Eldridge, G. N., and H. Korda. 2011. Value-based purchasing: The evidence. American Journal of Managed Care 17(8):e310-e313.
Emmons, K. M., B. Weiner, M. E. Fernandez, and S. P. Tu. 2012. Systems antecedents for dissemination and implementation: A review and analysis of measures. Health Education and Behavior 39(1):87-105.
Farley, J. F., R. R. Cline, J. C. Schommer, R. S. Hadsall, and J. A. Nyman. 2008. Retrospective assessment of Medicaid step-therapy prior authorization policy for atypical antipsychotic medications. Clinical Therapeutics 30(8):1524-1539.
Forsner, T., A. A. Wistedt, M. Brommels, I. Janszky, A. P. de Leon, and Y. Forsell. 2010. Supported local implementation of clinical guidelines in psychiatry: A two year follow-up. Implementation Science 5:1-11.
Fung, C. H., Y. W. Lim, S. Mattke, C. Damberg, and P. G. Shekelle. 2008. Systematic review: The evidence that publishing patient care performance data improves quality of care. Annals of Internal Medicine 148(2):111-123.
Glisson, C., and S. K. Schoenwald. 2005. The ARC organizational and community intervention strategy for implementing evidence-based children’s mental health treatments. Mental Health Services Research 7(4):243-259.
Glisson, C., S. K. Schoenwald, A. Hemmelgarn, P. Green, D. Dukes, K. S. Armstrong, and J. E. Chapman. 2010. Randomized trial of MST and ARC in a two-level evidence-based treatment implementation strategy. Journal of Consulting and Clinical Psychology 78(4):537-550.
Goldner, E. M., V. Jeffries, D. Bilsker, E. Jenkins, M. Menear, and L. Petermann. 2011. Knowledge translation in mental health: A scoping review. Healthcare Policy 7:83-98.
Goscha, R., and C. Rapp. 2014. Exploring the experiences of client involvement in medication decisions using a shared decision making model: Results of a qualitative study. Community Mental Health Journal 1-8.
Grant, J. G. 2010. Embracing an emerging structure in community mental health services: Hope, respect, and affection. Qualitative Social Work 9(1):53-72.
Grimshaw, J. M., M. P. Eccles, J. N. Lavis, S. J. Hill, and J. E. Squires. 2012. Knowledge translation of research findings. Implementation Science 7:1-17.
Herschell, A. D., D. J. Kolko, B. L. Baumann, and A. C. Davis. 2010. The role of therapist training in the implementation of psychosocial treatments: A review and critique with recommendations. Clinical Psychology Review 30:448-466.
HHS (U.S. Department of Health and Human Services). 1999. Mental health: A report of the Surgeon General. Rockville, MD: HHS, Substance Abuse and Mental Health Services Administration, Center for Mental Health Services, National Institutes of Health, National Institute of Mental Health.
Hibbard, J. H., and J. Greene. 2013. What the evidence shows about patient activation: Better health outcomes and care experiences; fewer data on costs. Health Affairs 32(2):207-214.
Horgan, C. M. 1985. Specialty and general ambulatory mental health services: A comparison of utilization and expenditures. Archives of General Psychiatry 42:565-572.
_____. 1986. The demand for ambulatory mental health services from specialty providers. Health Services Research 21(2):291-319.
Huskamp, H. A. 1999. Episodes of mental health and substance abuse treatment under a managed behavioral health care carve-out. Inquiry 36(2):147-161.
IHI (Institute for Healthcare Improvement). 2003. The breakthrough series: IHI’s collaborative model for achieving breakthrough improvement. http://www.ihi.org/resources/Pages/IHIWhitePapers/TheBreakthroughSeriesIHIsCollaborativeModelforAchievingBreakthroughImprovement.aspx (accessed June 17, 2015).
IOM (Institute of Medicine). 2006. Improving the quality of care for mental and substance use conditions. Washington, DC: The National Academies Press.
Jamtvedt, G., J. M. Young, D. T. Kristoffersen, M. A. O’Brien, and A. D. Oxman. 2006. Audit and feedback: Effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews (2):CD000259.
Karlin, B. E., and G. Cross. 2014a. Enhancing access, fidelity, and outcomes in the national dissemination of evidence-based psychotherapies. American Psychologist 69(7):709-711.
_____. 2014b. From the laboratory to the therapy room: National dissemination and implementation of evidence-based psychotherapies in the U.S. Department of Veterans Affairs health care system. American Psychologist 69(1):19-33.
Kauth, M. R., G. Sullivan, and K. L. Henderson. 2005. Supporting clinicians in the development of best practice innovations in education. Psychiatric Services 56(7):786-788.
Klein, K. J., and A. P. Knight. 2005. Innovation implementation: Overcoming the challenge. Current Directions in Psychological Science 14(5):243-246.
Kullgren, J. T., A. A. Galbraith, V. L. Hinrichsen, I. Miroshnik, R. B. Penfold, M. B. Rosenthal, B. E. Landon, and T. A. Lieu. 2010. Health care use and decision making among lower-income families in high-deductible health plans. Archives of Internal Medicine 170(21):1918-1925.
Landsverk, J., C. H. Brown, J. Rolls Reutz, L. A. Palinkas, and S. M. Horwitz. 2011. Design elements in implementation research: A structured review of child welfare and child mental health studies. Administration and Policy in Mental Health 38:54-63.
Law, M. R., D. Ross-Degnan, and S. B. Soumerai. 2008. Effect of prior authorization of second-generation antipsychotic agents on pharmacy utilization and reimbursements. Psychiatric Services 59(5):540-546.
Layard, R., and D. M. Clark. 2014. Thrive: How better mental health care transforms lives and saves money. Princeton, NJ: Princeton University Press.
Linhorst, D. M., and A. Eckert. 2002. Involving people with severe mental illness in evaluation and performance improvement. Evaluation & The Health Professions 25(3):284-301.
Linhorst, D. M., A. Eckert, and G. Hamilton. 2005. Promoting participation in organizational decision making by clients with severe mental illness. Social Work 50(1):21-30.
Lochman, J. E., N. P. Powell, C. L. Boxmeyer, L. Qu, K. C. Wells, and M. Windle. 2009. Implementation of a school-based prevention program: Effects of counselor and school characteristics. Professional Psychology: Research and Practice 40(5):476.
Lu, C. Y., S. B. Soumerai, D. Ross-Degnan, F. Zhang, and A. S. Adams. 2010. Unintended impacts of a Medicaid prior authorization policy on access to medications for bipolar illness. Medical Care 48(1):4-9.
Mancini, A. D., L. L. Moser, R. Whitley, G. J. McHugo, G. R. Bond, M. T. Finnerty, and B. J. Burns. 2009. Assertive community treatment: Facilitators and barriers to implementation in routine mental health settings. Psychiatric Services 60(2):189-195.
Manning, W. G., J. P. Newhouse, N. Duan, E. B. Keeler, B. Benjamin, A. Leibowitz, and M. S. Marquis. 1988. Health insurance and the demand for medical care: Evidence from a randomized experiment. Report R-3476-HHS. Santa Monica, CA: RAND Corporation.
Mark, T. L., T. M. Gibson, K. McGuigan, and B. C. Chu. 2010. The effects of antidepressant step therapy protocols on pharmaceutical and medical utilization and expenditures. American Journal of Psychiatry 167(10):1202-1209.
Matthieu, M. M., W. Cross, A. R. Batres, C. M. Flora, and K. L. Knox. 2008. Evaluation of gatekeeper training for suicide prevention in veterans. Archives of Suicide Research 12(2):148-154.
Mechanic, D. 2012. Seizing opportunities under the Affordable Care Act for transforming the mental and behavioral health system. Health Affairs 31(2):376-382.
Motheral, B. R., R. Henderson, and E. R. Cox. 2004. Plan-sponsor savings and member experience with point-of-service prescription step therapy. American Journal of Managed Care 10:457-464.
Mullen, K. J., R. G. Frank, and M. B. Rosenthal. 2010. Can you get what you pay for? Pay-for-performance and the quality of healthcare providers. The RAND Journal of Economics 41(1):64-91.
Newhouse, J. P., and the Insurance Experiment Group. 1993. Free for all? Lessons from the RAND health insurance experiment. Cambridge, MA: Harvard University Press.
NHS (U.K. National Health Service). 2015. High intensity cognitive behavioural therapy workers. http://www.iapt.nhs.uk/workforce/high-intensity (accessed June 16, 2015).
NIH (National Institutes of Health). 2009. Dissemination and implementation research in health (R03). Funding Opportunity Announcement PAR-10-039. http://grants.nih.gov/grants/guide/pa-files/PAR-10-039.html (accessed June 17, 2015).
NIMH (National Institute of Mental Health). 2006. The road ahead: Research partnerships to transform services. http://www.nimh.nih.gov/about/advisory-boards-and-groups/namhc/reports/road-ahead_33869.pdf (accessed June 16, 2015).
Omeni, E., M. Barnes, D. MacDonald, M. Crawford, and D. Rose. 2014. Service user involvement: Impact and participation: A survey of service user and staff perspectives. BMC Health Services Research 14(1):491.
Pearson, M. L., S. Wu, J. Schaefer, A. E. Bonomi, S. M. Shortell, P. J. Mendel, J. A. Marsteller, T. A. Louis, M. Rosen, and E. B. Keeler. 2005. Assessing the implementation of the chronic care model in quality improvement collaboratives. Health Services Research 40(4):978-996.
Petry, N. M., D. DePhilippis, C. J. Rash, M. Drapkin, and J. R. McKay. 2014. Nationwide dissemination of contingency management: The Veterans Administration Initiative. The American Journal on Addictions 23:205-210.
Powell, B. J., J. C. McMillen, E. K. Proctor, C. R. Carpenter, R. T. Griffey, A. C. Bunger, J. E. Glass, and J. L. York. 2012. A compilation of strategies for implementing clinical innovations in health and mental health. Medical Care Research and Review 69(2):123-157.
Powell, B. J., J. C. McMillen, K. M. Hawley, and E. K. Proctor. 2013. Mental health clinicians’ motivation to invest in training: Results from a practice-based research network survey. Psychiatric Services 64(8):816-818.
Powell, B. J., E. K. Proctor, and J. E. Glass. 2014. A systematic review of strategies for implementing empirically supported mental health interventions. Research on Social Work Practice 24(2):192-212.
Proctor, E. K., J. Landsverk, G. Aarons, D. Chambers, C. Glisson, and B. Mittman. 2009. Implementation research in mental health services: An emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health and Mental Health Services Research 36:24-34.
Raghavan, R. 2012. The role of economic evaluation in dissemination and implementation research. In Dissemination and implementation research in health, Ch. 5, edited by R. C. Brownson, G. A. Colditz, and E. K. Proctor. New York: Oxford University Press. Pp. 94-113.
Raghavan, R., C. L. Bright, and A. L. Shadoin. 2008. Toward a policy ecology of implementation of evidence-based practices in public mental health settings. Implementation Science 3:26.
Rakovshik, S. G., and F. McManus. 2010. Establishing evidence-based training in cognitive behavioral therapy: A review of current empirical findings and theoretical guidance. Clinical Psychology Review 30:496-516.
Ranmuthugala, G., F. C. Cunningham, J. J. Plumb, J. Long, A. Georgiou, J. I. Westbrook, and J. Braithwaite. 2011a. A realist evaluation of the role of communities of practice in changing healthcare practice. Implementation Science 6:49.
Ranmuthugala, G., J. J. Plumb, F. C. Cunningham, A. Georgiou, J. I. Westbrook, and J. Braithwaite. 2011b. How and why are communities of practice established in the healthcare sector? A systematic review of the literature. BMC Health Services Research 11:273.
Rice, T., and K. R. Morrison. 1994. Patient cost sharing for medical services: A review of the literature and implications for health care reform. Medical Care Research and Review 51(3):235-287.
Saldana, L., P. Chamberlain, W. D. Bradford, M. Campbell, and J. Landsverk. 2014. The Cost of Implementing New Strategies (COINS): A method for mapping implementation resources using the stages of implementation completion. Children and Youth Services Review 39:177-182.
Schoenwald, S. K., and K. Hoagwood. 2001. Effectiveness, transportability, and dissemination of interventions: What matters when? Psychiatric Services 52(9):1190-1197.
Simpson, E. L., and A. O. House. 2002. Involving service users in delivery and evaluation of mental health services: Systematic review. British Medical Journal 325:1265-1271.
Taylor, T. L., H. Killaspy, C. Wright, P. Turton, S. White, T. W. Kallert, M. Schuster, J. A. Cervilla, P. Brangier, J. Raboch, L. Kalisová, G. Onchev, H. Dimitrov, R. Mezzina, K. Wolf, D. Wiersma, E. Visser, A. Kiejna, P. Piotrowski, D. Ploumpidis, F. Gonidakis, J. Caldas-de-Almeida, G. Cardoso, and M. B. King. 2009. A systematic review of the international published literature relating to quality of institutional care for people with longer term mental health problems. BMC Psychiatry 9(1):55.
Towle, A., and W. Godolphin. 2013. Patients as educators: Interprofessional learning for patient-centred care. Medical Teacher 35(3):219-225.
Towle, A., L. Bainbridge, W. Godolphin, A. Katz, C. Kline, B. Lown, I. Madularu, P. Solomon, and J. Thistlethwaite. 2010. Active patient involvement in the education of health professionals. Medical Education 44(1):64-74.
Turnbull, P., and F. Weeley. 2013. Service user involvement: Inspiring student nurses to make a difference to patient care. Nurse Education in Practice 13(5):454-458.
UCL (University College London). 2015. UCL competence frameworks for the delivery of effective psychological interventions. https://www.ucl.ac.uk/pals/research/cehp/research-groups/core/competence-frameworks (accessed June 16, 2015).
Watts, B. V., B. Shiner, L. Zubkoff, E. Carpenter-Song, J. M. Ronconi, and C. M. Coldwell. 2014. Implementation of evidence-based psychotherapies for posttraumatic stress disorder in VA specialty clinics. Psychiatric Services 65(5):648-653.
Weissman, M. M., H. Verdeli, M. J. Gameroff, S. E. Bledsoe, K. Betts, L. Mufson, H. Fitterling, and P. Wickramaratne. 2006. National survey of psychotherapy training in psychiatry, psychology, and social work. Archives of General Psychiatry 63(8):925-934.
Werner, R. M., R. T. Konetzka, and D. Polsky. 2013. The effect of pay-for-performance in nursing homes: Evidence from state Medicaid programs. Health Services Research 48(4):1393-1414.
Wilensky, G. R. 2011. ACO regs, round 1. Healthcare Financial Management: Journal of the Healthcare Financial Management Association 65(5):30, 32.
Wyman, P. A., C. H. Brown, J. Inman, W. Cross, K. Schmeelk-Cone, J. Guo, and J. B. Pena. 2008. Randomized trial of a gatekeeper program for suicide prevention: 1-year impact on secondary school staff. Journal of Consulting and Clinical Psychology 76(1):104-115.
Zayas, L. H., J. L. Bellamy, and E. K. Proctor. 2012. Considering the multiple service contexts in cultural adaptations of evidence-based practices. In Dissemination and implementation research in health, edited by R. C. Brownson, G. A. Colditz, and E. K. Proctor. New York: Oxford University Press. Pp. 483-497.
Zhang, Y., A. S. Adams, D. Ross-Degnan, F. Zhang, and S. B. Soumerai. 2009. Effects of prior authorization on medication discontinuation among Medicaid beneficiaries with bipolar disorder. Psychiatric Services 60(4):520-527.