Strengthening the Evidence Base and Quality Improvement Infrastructure
Despite substantial evidence documenting the efficacy of numerous treatments for mental and substance-use problems and illnesses, mental and/or substance-use (M/SU) health care (like all health care) often is not consistent with this evidence base. Further, in the absence of evidence on how best to treat some M/SU conditions, treatment for the same condition often varies inappropriately from provider to provider. Moreover, medication errors and the use of restraints and seclusion threaten patient safety, while many individuals with serious symptoms of M/SU illnesses receive no treatment despite having health insurance and geographic access to health care. Finally, although we know about risk factors for the development of some M/SU illnesses, the health care system fails to apply this knowledge in prevention initiatives. As a result, large numbers of people who are at risk of developing M/SU illnesses go on to do so, even as those with existing illnesses cannot always count on receiving safe and effective care.
Remedies for these problems are the same as those for general health care: identifying and disseminating effective practices, providing decision support for clinicians at the point of care delivery, measuring the extent to which effective practices are applied, and incorporating measurement results into ongoing quality improvement activities. For multiple reasons, however, the infrastructure to support these activities is less well developed for M/SU than for general health
care. Clinical assessment and treatment practices (especially psychosocial interventions) have not been standardized and classified for inclusion in the administrative datasets widely used to analyze variations in care and other quality-related issues in general health care. Initiatives to disseminate advances in evidence-based care often fail to use effective strategies and available resources. The development of performance measures for M/SU health care has not received sufficient attention in the private sector, and efforts in the public sector have not yet achieved widespread consensus. Finally, the understanding and use of modern quality improvement methods have not yet permeated the day-to-day operations of organizations and individual clinicians delivering M/SU services—both those in the general health care sector and those providing specialty M/SU health care.
The committee recommends a five-part strategy to build this infrastructure and improve the safety and effectiveness of M/SU health care: (1) a more coordinated strategy for filling the gaps in the evidence base; (2) a stronger, more coordinated, and evidence-based approach to disseminating evidence to clinicians; (3) improved diagnostic and assessment strategies; (4) a stronger infrastructure for measuring and reporting the quality of M/SU health care; and (5) support for quality improvement practices at the locus of health care.
PROBLEMS IN THE QUALITY OF CARE
As in general health care, there is ample evidence of problems in the quality of care for mental and/or substance-use (M/SU) problems and illnesses. These problems include (1) failure to provide care consistent with existing scientific evidence, (2) variations in care that occur when clear evidence on effective care is lacking, (3) failure to provide any treatment for an M/SU illness or to address the risk factors associated with the development of these illnesses, and (4) unsafe care.
Failure to Provide Care Consistent with Scientific Evidence
Numerous studies document the discrepancy between M/SU care that is known to be effective and the care that is actually delivered. An extensive review of peer-reviewed studies published from 1992 through 2000 and indexed in Medline, the Cochrane Collaboration, and related sources, assessing rates of adherence to specific clinical practice guidelines for treating diverse M/SU clinical conditions (including alcohol withdrawal, bipolar disorder, depression, panic disorder, psychosis, schizophrenia, and substance abuse),
found that of the 21 cross-sectional studies showing unequivocal results, only 24 percent documented adequate adherence to the aspect(s) of the practice guidelines under study. Of 5 pre/post studies, only 2 showed adequate adherence rates. When these two groups of naturalistic studies were combined, only 27 percent demonstrated adequate rates of adherence. Better adherence was observed in 6 of the 9 controlled trials reviewed1 (Bauer, 2002). Subsequent studies have continued to document clinicians’ departures from evidence-based practice guidelines for conditions as varied as attention deficit hyperactivity disorder (ADHD) (Rushton et al., 2004), anxiety disorders (Stein et al., 2004), conduct disorders in children (Zima et al., 2005), comorbid mental and substance-use illnesses (Watkins et al., 2001), depression in adults (Simon et al., 2001) and children (Richardson et al., 2004), opioid dependence (D’Aunno and Pollack, 2002), use of illicit drugs (Friedmann et al., 2001), and schizophrenia (Buchanan et al., 2002).
As in general health care, M/SU care received by members of racial and ethnic minorities is even less consistent with standards for effective care than that received by nonminority members. Two nationally representative studies found that members of ethnic minorities were less likely to receive appropriate care for depression or anxiety than were white Americans (Wang et al., 2000; Young et al., 2001). Likewise, facilities dispensing methadone for the treatment of opioid dependence that have a greater percentage of African American patients have been shown to be more likely to dispense low and ineffective doses (D’Aunno and Pollack, 2002).
A 1999 comparison of the performance of 67.7 percent of the nation’s health maintenance organizations (HMOs) on five measures of the quality of mental health care2 and nine measures3 of the quality of general health care found that the HMOs delivered mental health care in accordance with standards of care on average 48 percent of the time, compared with an average of 69 percent for the nine general health care measures (Druss et al., 2002). In a landmark study of the quality of a wide variety of health care received by U.S. citizens, individuals with many different types of illnesses received guideline-concordant care about 50 percent of the time, whereas those with alcohol dependence received care consistent with scientific knowledge only about 10.5 percent of the time (McGlynn et al., 2003).
This failure to provide care consistent with evidence also is manifest in the failure to offer ongoing care for substance dependence consistent with the condition’s chronic nature. Historically, drug dependence has been conceptualized as a disease, a bad habit, or a sin (Musto, 1973). Despite significant differences among these perspectives, all are based on a view that often persists today: that some limited (often very limited) amount, duration, and/or intensity of therapies, medications, and services should be adequate to cause patients with a drug dependence illness to “learn their lesson,” “achieve insight,” and especially “change their ways.” The expectation is that once patients have achieved that insight or learned that lesson, they will be ready for discharge from treatment and will continue as recovered for a substantial period of time. This view has led to the universally applied convention of evaluating the outcomes of treatment through measurement of patient performance 6–12 (or more) months following discharge from treatment (see Finney et al., 1996; Gerstein and Harwood, 1990; Gossop et al., 2001; Hubbard et al., 1989; McLellan et al., 1993a,b; Project MATCH Research Group, 1997; Simpson et al., 1997, 1999).
In fact, however, most alcohol- and drug-dependent patients relapse following cessation of treatment (IOM, 1998; McLellan, 2002). In general, about 50–60 percent of patients begin reusing within 6 months of treatment cessation, regardless of the type of discharge, patient characteristics, or the particular substance(s) used (IOM, 1998; McKay et al., 1999, 2004; McLellan, 2002). It is increasingly apparent that patients with more chronic forms of substance-use illnesses require and do well with appropriately tailored continuing care and monitoring (McKay, 2005; McLellan et al., 2000). Indeed, accumulating evidence suggests that many cases of substance-use illness are best treated with the same type and level of ongoing clinical support as other chronic illnesses, such as cardiovascular disease and diabetes (McLellan et al., 2000).
Variations in Care Due to a Lack of Evidence
Variations in health care are driven by a variety of factors—some appropriate and therapeutic, others not. Appropriate variations in care result when clinicians tailor therapeutic regimens to patients’ unique clinical conditions, in consultation with patients about their expressed preferences and values. Undesirable variations reflect departures from widely accepted evidence-based standards of care (as described above) due to provider preferences, traditions, ignorance of evidence-based standards, or administrative or financial constraints. Variations also result from inconsistencies in diagnosis (described later in this chapter) and from the absence of widely accepted standards of care. Variations in the absence of clinical practice guidelines have been documented, for example, in the use of seclusion and
restraint (Busch and Shore, 2000), patterns of prescribing psychotropic medications for preschoolers and older children (Rawal et al., 2004; Zito et al., 2000), the use of combinations of antipsychotics (Miller and Craig, 2002), and inpatient care lengths of stay (Harman et al., 2003). A 1999–2000 cross-sectional study of the care of children and adolescents at residential treatment centers in four states, for instance, found that 42.9 percent of youths receiving antipsychotic medications had no history of or current psychosis and were thus receiving those medications for “off-label” purposes. Significant regional differences in the prescription of antipsychotic drugs were found across the four states and were associated with the presence of attention deficit/impulsivity, substance use, the duration of symptoms, danger to others, sexually abusive behavior, elopement, and crime/delinquency. The use of antipsychotic medications to treat aggression and conduct disorders has been reported in the clinical literature and identified as an off-label use. Yet positive outcomes for their use in children to treat attention deficit/impulsivity disorders are not well documented and raise concerns, as do the widespread use of antipsychotics for off-label purposes generally and the regional variations in this practice (Rawal et al., 2004).
There is historical evidence that race and ethnicity account for some of these variations. African Americans have been more likely to receive antipsychotics across the diagnostic spectrum, even without indications for their use (Strickland et al., 1991), and more likely than whites to receive these medications “PRN” (as needed) and in higher doses (Chung et al., 1995; Strakowski et al., 1993).
Failure to Treat and Prevent
Failure to Treat
More than a decade ago, the 1990–1992 National Comorbidity Study documented the high proportion of individuals with symptoms of serious mental illness who failed to receive any treatment for their condition (Wang et al., 2002). Since that time, progress has been made. Recent studies have shown improvements in access to and receipt of care for those with the most severe mental illnesses (Kessler et al., 2005; Mechanic and Bilder, 2004). And although the prevalence of M/SU illnesses has remained the same over the past decade, a greater proportion of all non-aged adults with M/SU problems and illnesses have received treatment. Between 1990 and 1992, 20.3 percent of individuals with a mental “disorder” received treatment; between 2001 and 2003 this proportion was 32.9 percent (Kessler et al., 2005). Improvements also have been noted in the access to care for children with these illnesses (Glied and Cuellar, 2003).
Despite this progress, however, the same reports showing improved access to care for some reveal that many others who need treatment still do not receive it (Mechanic and Bilder, 2004); this is especially true for ethnic minorities (Kessler et al., 2005). Between 2001 and 2003, fewer than half (40.5 percent) of individuals with symptoms of a serious mental illness received treatment (Kessler et al., 2005), and there is evidence of a decline in access for those with less severe mental illnesses (Kessler et al., 2005; Mechanic and Bilder, 2004). Findings of recent studies similarly reaffirm the continuing failure to treat substance-use problems and illnesses (Watkins et al., 2001).
These failures to treat persist, even when individuals are receiving some type of health care and have financial and geographic access to care. For example, data for 1998–2001 from a seven-site longitudinal study of 1,088 youths in residential, outpatient, and inpatient treatment for drug use show that 43 percent of the youths reported receiving no mental health services in the 3 months after being admitted, despite having severe mental health problems at the time of admission. At three sites where mental health services were provided at no additional charge, rates of service receipt for those with severe mental illnesses were 6 percent, 28 percent, and 79 percent, respectively. In contrast, rates of receipt of care for comorbid general health problems among these youths ranged from 64 to 71 percent (Jaycox et al., 2003). Results of the 2003 National Survey on Drug Use and Health document a similar failure to treat adults. Data from another national survey conducted in 1997–1998 reveal that among persons with probable comorbid mental and substance-use disorders who received treatment for one of these conditions, fewer than a third (28.6 percent) received treatment for the other (Watkins et al., 2001).
Reasons for the failure to treat M/SU illnesses have not been fully determined, but the finding of low treatment rates in the presence of access to services and no additional cost to the patient indicates that access and ability to pay are not always the only contributing factors. This point is confirmed by responses of civilian, noninstitutionalized adults aged 18 and older to the 2003 National Survey on Drug Use and Health, which separately captured information on mental and substance-use problems and illnesses. These respondents reported the following reasons for not receiving mental health treatment that they believed they needed: cost/insurance issues (45.1 percent), did not feel the need for treatment at the time/could handle the problem without treatment (40.6 percent), did not know where to go for services (22.9 percent), stigma (22.8 percent), did not have time (18.1 percent), believed treatment would not help (10.3 percent), fear of being committed/having to take medication (7.2 percent), and other access barriers (3.7 percent). Reasons given by respondents who felt they needed treatment for a substance-use problem but did not receive it were somewhat
different: not ready to stop using (41.2 percent), cost or insurance barriers (33.2 percent), stigma (19.6 percent), did not feel the need for treatment (at the time) or could handle the problem without treatment (17.2 percent), access barriers other than cost (12.3 percent), did not know where to go for treatment (8.7 percent), believed treatment would not help (6.3 percent), and did not have time (5.3 percent) (SAMHSA, 2004a).
Other studies of factors that influence consumers’ entry into alcohol and drug treatment have found that individuals with alcohol or drug problems who do not experience recovery on their own typically do not go into treatment until their problems become severe or until social circumstances, such as workplace problems or criminal offenses, send them there. In a 2001 nationally representative survey of individuals in recovery from alcohol or drug illnesses and their families, 60 percent reported that denial of addiction or refusal to admit the severity of the problem was the greatest barrier to their recovery. Embarrassment or shame was the second most frequently cited obstacle (Peter D. Hart Research Associates, Inc., 2001). Factors that drive these individuals to seek help vary over the course of their alcohol or drug use. Early on, these factors include adverse social consequences in the workplace, criminal convictions, or serious disturbances in interpersonal relationships. As substance use progresses over time, health problems related to use are associated with seeking treatment (Satre et al., 2004; Weisner and Matzger, 2002).
Individuals who are members of ethnic minorities face additional obstacles to receiving needed mental health services (DHHS, 2001). Despite roughly similar levels of need, ethnic minorities are less likely to receive mental health care than are white Americans. Blacks, for example, are only 50 percent as likely to receive psychiatric treatment as whites when both receive a diagnosis of the same severity (Kessler et al., 2005). Latino children also have higher rates of unmet need relative to other children (Kataoka et al., 2002). Access to mental health services may be restricted for ethnic minorities for multiple reasons—for example, because they are more apt to be uninsured (Brown et al., 2000), because ethnic minority providers and/or providers with appropriate language capabilities are often unavailable, and because they may have less trust in the health care system (LaVeist et al., 2000).
Failure to Prevent
Sometimes failure to provide care occurs at the level of the health system, rather than at the patient–provider level. The United States, like other developed countries, has structures and mechanisms in place to address threats to the public’s health that arise from both external environmental conditions and an individual’s personal health practices. An earlier
Institute of Medicine report on reducing risks for mental disorders (IOM, 1994) notes that prevention activities for many general health conditions take place even when the etiology of an illness and how to prevent it are not fully understood. Examples are primary prevention of cancer and heart disease, for which the public health system has targeted known risk factors (e.g., diet, exercise, lipid levels, smoking) despite the lack of such knowledge. This risk reduction model of prevention targets the risk factors known to be associated with an illness or injury. By contrast, despite scientific evidence on risk factors associated with some mental illnesses (predominantly in children and adolescents) and effective interventions to mitigate these factors (see, e.g., Beardslee et al., 2003; Hollon et al., 2002; Mojtabai et al., 2003), this evidence has not yet been widely applied in practice (Davis, 2002), and the prevalence4 of M/SU problems and illnesses does not appear to have declined over the past decade (Kessler et al., 2005).
Although there is not yet clear evidence to support preventive interventions for specific diagnoses (e.g., ADHD, anxiety, or depression), risk factors have been identified that have been helpful for developing broad, school-based preventive programs that generally target “behavior problems.” This prevention literature for children focuses largely on two areas: (1) risk factors for conduct problems, serious disruptive behaviors, and violence, and testing of interventions aimed at preventing the onset of those problems (Kazdin, 2003; Patterson et al., 1989, 1993; Webster-Stratton and Hammond, 1997, 1999); and (2) prevention of depression among adolescents (Clarke et al., 1995; Lewinsohn, 1987) or children (Beardslee et al., 1996, 1997; Podorefsky et al., 2001). The U.S. Surgeon General’s report on youth violence also clearly sets forth the evidence for prevention of violent behavior (Office of the Surgeon General, 2001).
Unsafe Care
As with the quality of M/SU health care overall, less is known about errors in or injuries due to M/SU treatment services than is the case for general health care (Bates et al., 2003; Moos, 2005). This is especially true for errors that occur in outpatient settings, where the greatest proportion of treatment for individuals with M/SU problems and illnesses is provided. Some mental health “interventions” have been found to be harmful only after being put into use; examples are organized visits to jails and prisons by children or adolescents intended to deter future delinquency (sometimes known as “scared straight” programs) (Petrosino et al., 2005) and rebirthing therapy (Lilienfeld et al., 2003). Others, such as critical incident stress debriefing, have been found to be potentially harmful (Rose et al., 2005). Most data on threats to safety have been collected on medication errors and on the use of seclusion and restraint in mental health care. Errors or injuries from treatment for substance-use problems and illnesses have not yet received substantial attention. Although an estimated 7–15 percent of patients who receive psychosocial treatment for substance use may be worse off after treatment, a conceptual model to help distinguish the iatrogenic effects of the intervention from other factors that can cause worsening of substance-use problems (e.g., social isolation) has only recently been proposed (Moos, 2005).
A Medline search for articles published between 1996 and 2003 on medication errors (one of the most common types of health care errors) in psychiatric treatment revealed that relatively few data are available, with only a handful of studies of adverse drug events in inpatient psychiatric settings. Although studies of adverse drug events in general hospitals have yielded data on errors involving psychotropic drugs, less is known about medication errors in psychiatric hospitals and psychiatric units of general hospitals. Moreover, as recently as 2002, terms such as “adverse drug events,” “medication errors,” and “adverse drug reactions” were not even listed as key search words in several widely read psychiatric journals (Grasso et al., 2003b). Errors committed in substance-use treatment also have received little attention.
What is known from the few published studies gives cause for concern. A retrospective, multidisciplinary review of the charts of 31 randomly selected patients in a state psychiatric hospital discharged during a 4 1/2-month study period detected 2,194 medication errors during these patients’ entire 1,448 inpatient days.5 Of the 2,194 errors, 19 percent were rated as having the potential to cause minor harm, 23 percent moderate harm, and 58 percent severe harm (Grasso et al., 2003a). Another 12-month study of all long-term residents of 18 community-based nursing homes in Massachusetts found that psychoactive medications (antipsychotics, antidepressants, and sedatives/hypnotics) were among the most common medications
associated with preventable adverse drug events, and neuropsychiatric events were the most common type of preventable adverse drug events (Gurwitz et al., 2000).
With respect to ambulatory care, additional safety concerns have been raised about the practice of long-term treatment with combinations of antipsychotic medications, other than in cases in which successive trials of monotherapy with different drugs have failed. The use of combinations of antipsychotic medications continues despite (1) the absence of evidence to support the practice, (2) the lack of evidence to inform clinicians about how to adjust dosages in the face of increased symptoms or side effects, and (3) increased risks to the patient from problematic side effects and failure to adhere to treatment (Miller and Craig, 2002). Similarly, experts in children’s mental health care have expressed concern about the growing use of atypical antipsychotics to treat aggression in children and adolescents, given the limited basic and clinical research supporting the rationale, efficacy, and safety of using these agents for this purpose (Patel et al., 2005).
Seclusion and Restraint
Use of seclusion and restraint, while necessary in some emergency situations to prevent harm to a patient or others, also is associated with substantial psychological and physical harm to patients (GAO, 1999). The federal government estimates that each year approximately 150 individuals in the United States die as the direct result of these practices (SAMHSA, 2004b). In 1998, the death of an 11-year-old boy while he was secluded and restrained in a psychiatric hospital focused national attention on the risks to patients when these approaches are used. A follow-up report by the U.S. General Accounting Office (GAO, now the Government Accountability Office) confirmed the danger of improper use of seclusion and restraint and called attention to inadequate monitoring and reporting of their use, inconsistent and insufficient standards for their use and reporting by licensing and accreditation bodies, and widespread failure to employ strategies that can prevent their use and reduce the risk of related injuries. Children experience higher rates of seclusion and restraint than adults and are at greater risk of injury from their use (GAO, 1999).
Consumers and their advocates, professional associations, provider organizations, and the federal government recommend substantial reductions in the use of seclusion and restraint (American Association of Community Psychiatrists, 2003; NAMI, 2003; NASMHPD, 1999, 2005; SAMHSA, 2004b). GAO found that these practices can be greatly reduced through strong management commitment and leadership, defined principles and policies regarding when and how they may be used, a requirement to report their use, staff training in their safe use and alternative approaches, and
oversight and monitoring (GAO, 1999).6 Several initiatives incorporating these practices have greatly reduced the use of seclusion and restraint (American Psychiatric Association et al., 2003; Hennessy, 2002), some achieving near elimination of the practices. Pennsylvania’s state psychiatric hospital system, for example, which called attention to the use of seclusion and restraint as an indicator of “treatment failure,” sharply decreased their use from 107.9 hours per 1,000 patient days in 1993 to 2.72 hours per 1,000 patient days in 2000 through quality improvement initiatives in all state psychiatric hospitals (Smith et al., 2005).
Use of seclusion and restraint continues, however, despite a Cochrane Collaboration finding that “few other forms of treatment which are applied to patients with various psychiatric diagnoses are so lacking in basic information about their proper use and efficacy” (Sailas and Fenton, 2005:4). As a result, seclusion and restraint are frequently applied without clear indications for their use (Finke, 2001) and can lead to death (Denogean, 2003; Schnaars, 2003), physical harm (Mohr et al., 2003), or severe psychological trauma (Pflueger, 2002).7 Individuals admitted to inpatient psychiatric care often have a history of sexual or other physical abuse (Goodman et al., 1997; Mueser et al., 2002). Being physically overpowered, restrained, or placed in a locked room may have many features in common with the abuse experienced earlier by these individuals.
Heightened Safety Concerns and Need for Multiple Actions
The limited information on the safety of M/SU health care is of particular concern because some of the unique features of M/SU illnesses and their treatments could make patients less able to detect and avoid errors and more vulnerable to errors and adverse events when they occur. For example, the stigma experienced by individuals with M/SU illnesses may make them less willing to report errors and adverse events and less likely to be believed when they do so. The symptoms of some severe illnesses, such as major depression or schizophrenia, when not alleviated by therapy, also could interfere with a patient’s ability to detect and report medication errors.
The departures from scientific knowledge, variations in care, failures to treat and prevent, and unsafe practices discussed above have multiple causes. These include (1) gaps in the evidence base, (2) problems in disseminating existing evidence to clinicians and ensuring its uptake, (3) greater subjectivity in diagnosing mental problems and illnesses relative to general health conditions, (4) a less-well-developed infrastructure for measuring and reporting the quality of M/SU health care, and (5) inadequate adoption of quality improvement practices at the locus of M/SU care delivery. The following sections of this chapter present evidence on these issues and describe actions that can be taken to address them, specifically by:
Improving the production of evidence.
Improving diagnosis and assessment.
Using evidence-based practices and untapped resources to better disseminate the evidence.
Strengthening the quality measurement and reporting infrastructure.
Applying quality improvement methods at the locus of care.
Related issues of improved care coordination, use of information technology, implications of a more diverse workforce, and creation of incentives in the marketplace to support this five-part strategy are addressed in succeeding chapters.
IMPROVING THE PRODUCTION OF EVIDENCE
Gaps in the Evidence Base
Over the past two decades, there has been an impressive increase in the number and quality of studies on M/SU problems, illnesses, and therapies for both children (Burns and Hoagwood, 2004, 2005; Pappadopulos et al., 2004; Weisz, 2004) and adults (IOM, 1997; Johnson et al., 2000). Nonetheless, gaps remain in our knowledge of how to treat some M/SU conditions, how to care simultaneously for multiple comorbidities, how to care for some population subgroups, and which evidence-based therapies are better than others or best of all (see Box 4-1).
Such gaps in knowledge mean that evidence-based clinical practice guidelines are unavailable for many M/SU problems and illnesses.
The Efficacy–Effectiveness Gap
In addition to the above gaps in knowledge of efficacious therapies, there has been more research on the efficacy of specific treatments than on the effectiveness of these treatments when delivered in usual settings of care; in the presence of comorbid conditions, social stressors, and varying degrees of social support; and when administered by service providers without specialized education in their use (DHHS, 1999; Essock et al., 2003; Kazdin, 2004). For example, while numerous clinical efficacy studies have documented that psychostimulant medications reduce the core symptoms of ADHD, accumulating evidence suggests that this drug treatment is much less effective as currently delivered in routine community settings (Lefever et al., 2003). For people with severe mental illnesses and many substance-use problems and illnesses, how well the clinical aspects of treatment work is often closely related to such factors as housing, income support, and employment-related activities. This complicates considerations regarding effectiveness and has resulted in calls for improved research efforts (discussed below) that can provide information on both the effectiveness and efficacy of interventions (Carroll and Rounsaville, 2003; Tunis et al., 2003; Wolff, 2000).
BOX 4-1 Examples of Gaps in the Evidence Base
Therapies for children and older adults. Knowledge about how best to care for individuals at both ends of the age continuum is limited, including how to incorporate effective treatment for the most prevalent disorders of childhood (i.e., anxiety, ADHD, depression, conduct disorders) into routine care (Hoagwood et al., 2001; Stein, 2002), the effect of multiple medications on children’s outcomes, and the comparative efficacy of different therapies for severe conditions (e.g., bipolar disorder, childhood depression) (Kane et al., 2003). Evidence is also needed on how to better care for older adults with comorbid conditions and for the frail elderly in usual settings of care (Borson et al., 2001).
Treatment of multiple conditions. Despite the high frequency of comorbid mental, substance-use, and general illnesses (see Chapters 1 and 5), there is a substantial lack of knowledge about effective treatment for individuals with complex comorbidity (Kessler, 2004).
Posttraumatic stress disorder/acute stress disorder. Better evidence is needed about effective treatment for posttraumatic stress disorder (PTSD) and acute stress disorder (ASD), e.g., how best to combine pharmacotherapy and psychotherapy, how to relieve specific symptoms such as insomnia or nightmares, and how to treat these conditions in the presence of other medications. Moreover, although cognitive and behavioral therapies have demonstrated efficacy in treating victims of sexual assault, interpersonal violence, and industrial or vehicular accidents, their effectiveness in treating PTSD or ASD in combat veterans or victims of mass violence requires further study (Work Group on ASD and PTSD, 2004).
Psychotic illnesses. Questions remain about which antipsychotic medication should be the first line of therapy, what constitutes a sufficient trial period for a new medication to determine whether it is effective, and how to handle a poor response to the initially prescribed medication (Kane et al., 2003). Moreover, multiple antipsychotic medications are used in combination despite a lack of evidence about their combined efficacy and about how to manage their dosing when increased symptoms or side effects occur (Miller and Craig, 2002).
Amphetamine or marijuana dependence. No medications have yet been found effective in treating these dependencies.
Cocaine dependence. No medications are currently approved by the U.S. Food and Drug Administration to treat this dependency.
Relative effectiveness of different treatments. Multiple therapies are used to treat the same illness. For example, more than 550 psychotherapies are currently in use for children and adolescents, with little helpful information for clinicians or consumers about their comparative effectiveness (Kazdin, 2000, 2004). As in other areas of health care, the federal government’s drug approval rules provide little incentive for head-to-head clinical trials (Pincus, 2003), and there is a lack of substantial capital investment in developing and testing psychosocial interventions.
Although the knowledge gaps discussed above also exist for general health care, some of the tools and strategies used to build the evidence base
Therapies for other population subgroups. Ethnic and cultural minorities are largely missing from efficacy studies for many treatments (DHHS, 2001) in spite of growing evidence that drug dosages may vary by ethnic status (Lin et al., 1997). Few of these studies had the power necessary to examine the impact of care on specific minorities.
in general health care are less frequently utilized in M/SU health care. Research on M/SU health care needs to make greater use of these approaches to generating evidence on effective therapies.
Filling the Gaps in the Evidence Base
As is the case for general health care, federal agencies, philanthropic organizations, and other private-sector entities undertake many efforts to identify priority areas in M/SU health care in need of evidence, fund and conduct research, and support systematic reviews of research findings to identify evidence-based therapies. A strategy for coordinating these various efforts is articulated in Chapter 9. However, the large number of gaps in the evidence base for M/SU health care also requires that all sources of valid and reliable information be used to produce as much evidence as quickly, comprehensively, and accurately as possible. Three sources of information have been underutilized: (1) studies other than randomized controlled trials, (2) administrative datasets that often exist electronically, and (3) patients and their ability to report changes in their symptoms and well-being (outcomes of care). Steps can be taken to make better use of each of these sources.
Studies Other Than Randomized Controlled Trials
While well-designed randomized controlled trials are recognized as the gold standard for generating sound clinical evidence, experts note that the sheer number of possible pharmacological and nonpharmacological treatments for many M/SU illnesses makes relying solely on such studies to identify evidence-based care infeasible (Essock et al., 2003). Others add that some features of mental health care make the use of randomized controlled trials methodologically problematic as well. For example, in studies of the effectiveness of psychotherapy, the therapist and the patient cannot be blinded to the intervention, delivery of a placebo psychotherapeutic intervention is difficult to conceptualize, and standardization of the intervention is problematic because therapists must respond to what happens in a psychotherapy session as it unfolds (Tanenbaum, 2003). For such reasons, the behavioral and social sciences have often used quasi-experimental as well as qualitative research designs (National Academy of Sciences, undated), practices that are sometimes a source of contention.
Some assert that quasi-experimental studies often are more useful than randomized controlled trials in generating practical information on how to provide effective mental health interventions in some clinical areas (Essock et al., 2003). Consistent with this assertion, the U.S. Preventive Services Task Force notes that a well-designed cohort study may be more compelling than a poorly designed or weakly powered randomized controlled trial (Harris et al., 2001). Observational studies also have been identified as a valid source of evidence that is useful in determining aspects of better quality of care (West et al., 2002). However, others note the comparative weakness of these study designs in controlling for bias and other sources of error and exclude them from systematic reviews of evidence for the determination of evidence-based practices.
A discussion of variations in study design and their implications for systematic reviews of evidence is beyond the scope of this report; many researchers and methodologists are considering strategies for addressing these difficult issues (Wolff, 2000). As this study was under way, the National Research Council had established a planning committee to oversee the development of a broad, multiyear effort—the Standards of Evidence–Strategic Planning Initiative—to identify critical issues affecting the quality and utility of research in the behavioral and social sciences and education (National Academy of Sciences, undated). The committee believes such
discussions are critical to strengthening the appropriate use of all of the above types of research in building the evidence base on effective treatments for M/SU illnesses.
Better Capture of Mental and Substance-Use Health Care Data in Administrative Datasets
In general health care, routinely collected administrative data (e.g., claims or encounter data) that are generally produced each time a patient is admitted to a hospital or makes a visit to an ambulatory health care provider are widely used for health services research, epidemiologic studies, and quality assessment and improvement initiatives (Iezzoni, 1997; Zhan and Miller, 2003). While these datasets have limitations with respect to their completeness, accuracy, and level of detail (AHRQ, 2004a; Iezzoni, 1997), administrative data remain a preferred and routinely used source of information for multiple quality-related purposes because they are readily available, inexpensive, and computer readable (AHRQ, 2004b; Zhan and Miller, 2003). For example, analysis of administrative data revealed the now well-known and sizable variations that exist in clinical care within the United States, an analysis that continues today (Mullan, 2004; Wennberg, 1999). Consequently, administrative data are used to produce a variety of clinical quality indicators for hospital care (AHRQ, 2004b), underpin many of the quality measures found in the National Committee for Quality Assurance’s (NCQA) Healthplan Employer Data and Information Set (HEDIS) performance measures (NCQA, 2004a), and are the data source for the Agency for Healthcare Research and Quality’s (AHRQ) new patient safety indicators (Zhan and Miller, 2003). Because of their utility, administrative data are viewed as a mainstay of health services research on quality of care (Iezzoni, 1997) and are likely to become even more so as the National Health Information Infrastructure is developed (see Chapter 6).
These inpatient and outpatient datasets typically contain standardized information on each individual’s diagnosis (using International Classification of Diseases [ICD] codes) and on the specific therapies and procedures performed for that diagnosis (using the American Medical Association’s [AMA] Current Procedural Terminology [CPT] codes, the Centers for Medicare and Medicaid Services’ [CMS] Healthcare Common Procedure Coding System [HCPCS] for outpatient care, and ICD, ninth revision, Clinical Modification [ICD-9-CM] procedure codes for inpatient care). However, these codes are less useful at present for the study of M/SU care than for the study of general health care for several reasons. Psychotherapy codes are few and imprecise and differ across inpatient and outpatient settings. Codes for other psychosocial services generally are absent, as are codes for the use of restraints. And the new CPT II codes for use in performance measurement, a significant development, do not yet include codes for measuring the quality of M/SU health care.
CPT codes. CPT psychotherapy codes generally do not indicate what specific type of psychotherapy was provided, only that psychotherapy in general was provided and how long the session lasted. The 2005 CPT codes (AMA, 2004a) include only two main categories of codes for psychotherapy:
“Insight Oriented, Behavior Modifying and/or Supportive Psychotherapy” in an office or other outpatient facility, approximately 20 to 30, or 45 to 50, or 75 to 80 minutes face-to-face with the patient (codes 90804, 90806, and 90808 respectively) without or with (codes 90805, 90807, or 90809) accompanying medical evaluation and management services.
“Interactive Psychotherapy” which consists of individual psychotherapy, interactive, using play equipment, physical devices, language interpreter, or other mechanism of non-verbal communication, in an office or outpatient facility for approximately 20 to 30, or 45 to 50, or 75 to 80 minutes face-to-face with the patient (codes 90810, 90812, and 90814, respectively). These codes are typically used for children or others who have not yet developed or who have lost language communication skills.
A similar number of codes exist for these same services when provided in an inpatient hospital, partial hospital, or residential care facility. Six other codes for psychoanalysis and group, family, and interactive psychotherapy exist, as well as 10 codes for “Other Psychiatric Services or Procedures,” such as electroconvulsive treatments, hypnotherapy, and biofeedback. With the exception of a code for psychoanalysis, none of these codes identify the specific type of psychotherapy administered (e.g., cognitive therapy, behavior modification, cognitive behavioral therapy, interpersonal therapy, dialectical behavioral therapy, prolonged exposure therapy for individuals suffering from posttraumatic stress disorder, Gestalt therapy, movement/dance/art therapy, humanistic therapy, existential therapy, eye movement desensitization therapy, primal therapy, person-centered therapy, multisystemic therapy, and the many variants of these). Nor are there procedure codes for the use of diagnostic or behavioral assessment instruments. Other evidence-based psychotherapies, as well as psychosocial interventions such as family psychoeducation, multisystemic therapy, illness self-management programs, and assertive community treatment also do not have designated CPT codes. Moreover, a recent initiative of the AMA and the CPT Editorial Panel to develop codes for performance measurement (CPT II codes) and emerging technologies, services, and procedures (CPT III codes) has not yet adequately addressed M/SU health care.
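The coarseness described above can be made concrete with a small sketch. The mapping below encodes only what the report says the 2005 outpatient “Insight Oriented, Behavior Modifying and/or Supportive Psychotherapy” codes (90804 through 90809) capture; the point is what an administrative record can and cannot reveal about a session. This is an illustration, not a rendering of the official code set.

```python
# Illustrative sketch (not the official code set): the six outpatient
# insight-oriented psychotherapy CPT codes described above, keyed by code.
# The record captures session length and whether medical evaluation and
# management accompanied the session -- but never the type of psychotherapy.
CPT_PSYCHOTHERAPY = {
    "90804": {"minutes": "20-30", "with_e_and_m": False},
    "90805": {"minutes": "20-30", "with_e_and_m": True},
    "90806": {"minutes": "45-50", "with_e_and_m": False},
    "90807": {"minutes": "45-50", "with_e_and_m": True},
    "90808": {"minutes": "75-80", "with_e_and_m": False},
    "90809": {"minutes": "75-80", "with_e_and_m": True},
}

def describe(code: str) -> str:
    """Return everything an administrative record reveals about a session."""
    entry = CPT_PSYCHOTHERAPY.get(code)
    if entry is None:
        return f"code {code}: not an outpatient insight-oriented psychotherapy code"
    em = "with" if entry["with_e_and_m"] else "without"
    # The specific therapy delivered (e.g., cognitive behavioral vs.
    # interpersonal) is simply not recoverable from the code.
    return (f"code {code}: psychotherapy, {entry['minutes']} minutes, "
            f"{em} medical evaluation and management; therapy type unknown")

print(describe("90806"))
```

A researcher querying claims coded this way can stratify by session length or setting, but never by the therapy actually delivered, which is the “black box” problem discussed below.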
The new CPT II codes are optional codes to support nationally established performance measures by allowing the electronic capture of information that otherwise would have to be obtained through medical record abstraction or chart review. The growing use of administrative data for research purposes also instigated their development. The CPT II codes currently address specific types of patient management (e.g., prenatal care); patient history-taking activities (e.g., assessment of tobacco use, anginal symptoms and level of activity); physical examination processes (e.g., measurement of blood pressure); and therapeutic, preventive, or other interventions (e.g., counseling or intervention for cessation of tobacco use and prescription of certain medications).8 CPT III codes for new and emerging technologies include a new code for online medical evaluation service using the Internet or similar electronic communications network (AMA, 2004a). NCQA is proposing to use the new CPT II codes for the first time in HEDIS 2006 to capture data on blood pressure (≤140/90 mm Hg or >140/90), prenatal and postpartum care, beta-blocker treatment after heart attacks, diabetes care, and cholesterol management after a cardiovascular event (NCQA, 2005).
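The mechanics of a claims-based performance measure like the blood-pressure example above can be sketched in a few lines. The codes and field layout here are invented placeholders, not real CPT II values; the sketch only shows why result-level codes let a rate be computed from claims rather than from chart abstraction.

```python
# Hypothetical sketch: if claims carry a result-level code (placeholder
# strings below, NOT real CPT II codes), a blood-pressure control rate
# can be computed directly from administrative data.
CONTROLLED = "BP-CONTROLLED"      # placeholder for "reading <=140/90 recorded"
UNCONTROLLED = "BP-UNCONTROLLED"  # placeholder for "reading >140/90 recorded"

def bp_control_rate(claims):
    """Fraction of patients whose most recent coded reading was controlled."""
    latest = {}
    # Sort by date so later readings overwrite earlier ones per patient.
    for patient_id, date, code in sorted(claims, key=lambda c: c[1]):
        if code in (CONTROLLED, UNCONTROLLED):
            latest[patient_id] = code
    if not latest:
        return 0.0
    return sum(1 for c in latest.values() if c == CONTROLLED) / len(latest)

claims = [
    ("p1", "2005-01-10", CONTROLLED),
    ("p2", "2005-02-03", UNCONTROLLED),
    ("p2", "2005-06-09", CONTROLLED),   # p2's later reading is controlled
    ("p3", "2005-03-14", UNCONTROLLED),
]
print(bp_control_rate(claims))  # 2 of 3 patients controlled
```

Without such codes, the same measure requires pulling and reading each patient’s chart, which is the cost and labor burden the CPT II initiative is meant to remove.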
Category II codes are reviewed by a Performance Measures Advisory Group (PMAG) made up of performance measurement experts representing AHRQ, CMS, the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), NCQA, and the AMA’s Physician Consortium for Performance Improvement (the Consortium). The PMAG may seek additional expertise and/or input as necessary from other national health care organizations, including national medical specialty societies, other national health care professional associations, accrediting bodies, and federal regulatory agencies, and will consider code proposals submitted by national regulatory agencies, accrediting bodies, national professional and medical specialty societies, and other organizations (AMA, 2004a).
ICD-9 procedure codes. ICD-9 procedure codes for inpatient care are somewhat more detailed than the CPT codes with respect to psychotherapy. For example, they include a separate code for behavior therapies such as “aversion therapy, behavior modification, desensitization therapy, extinction therapy, relaxation therapy, and token economy.” They do not, however, include a code for use of restraints in psychiatric care, although there are two codes for use of “isolation.” Similarly, ICD-9-E codes, used to classify external events or circumstances that can cause injury or other adverse events, do not include a specific code for injuries obtained during the application or use of restraints, in contrast with the codes provided for a variety of other “misadventures to patients during surgical and medical care,” ranging from errors caused by a surgical operation (Code E870.0) to errors caused by administration of an enema (Code E870.7) (AMA, 2004b).
As a result, when psychotherapy is delivered to a patient and paid for by insurers, it is essentially a “black box.” In child and adolescent therapy alone, for example, it is conservatively estimated that, even if one omits various combinations of treatments and variants of treatments that are not substantially different, there are more than 550 psychotherapies in use (Kazdin, 2000). Because of their lack of specificity, however, administrative data currently cannot document the extent of variation in therapeutic practice and trends over time as they have for general health care. More-detailed therapy codes, type-of-provider codes,9 and codes that use consistent terminology across inpatient and outpatient settings could help in measuring the use and variation in use of the many hundreds of types of psychotherapy. Moreover, if the type of psychotherapy were routinely captured in administrative data and combined with data on patients’ reports regarding the results of their care (as are currently obtained in some consumer surveys), such information could assist in evaluating the effectiveness of different therapies in the field, in contrast to evaluation of their efficacy in experimentally controlled settings (see above and the discussion of outcome data below). The absence of detailed administrative data linked to patient outcomes makes it difficult to discern the relative effectiveness of different therapies or whether, as some assert, the therapist’s relationship with the client matters as much as or more than the type of therapy provided (Levant, 2004; Norcross, 2002). Moreover, performance measurement and improvement would be facilitated by this type of administrative data. Performance measures based on administrative data, such as claims data, are more likely to be used than measures based on more costly or labor-intensive sources of data, such as medical records or patient surveys (Hermann et al., 2000).
Following the issuance of regulations implementing the administrative simplification provisions of the Health Insurance Portability and Accountability Act (HIPAA), the Substance Abuse and Mental Health Services Administration (SAMHSA), the National Association of State Mental Health Program Directors, Inc. (NASMHPD), and the National Association of State Alcohol and Drug Abuse Directors, Inc. (NASADAD) took steps to identify some additional procedure codes to capture the range of treatment services provided (see http://hipaa.samhsa.gov). Similar, expanded efforts in coordination with public- and private-sector experts in coding, evidence-based practices, and use of administrative datasets could help substantially in building the evidence base on the effectiveness of different M/SU treatments.
Collection of Outcome Data from Patients
Patients are increasingly recognized as valid judges of the quality of their health care (Iezzoni, 1997); this applies equally to general and M/SU health care. In addition to reporting on their experiences with care delivery processes—such as the extent to which they were able to participate in decisions about their own care and gain skill in the self-management of their illness—consumers can provide information on the effectiveness of treatment in reducing symptoms and improving functioning (Hibbard, 2003). Moreover, “the shift toward patient-centered care has meant that a broader range of outcomes from the patient’s perspective needs to be measured in order to understand the true benefits and risks of healthcare interventions.” (emphasis added) (Stanton, 2002:2) Patient questionnaires that ask about the extent to which patients’ symptoms have been reduced as a result of treatment are already being used to measure outcomes for treatment of general medical conditions such as benign prostatic hypertrophy and cataracts. These questionnaires have been found to yield accurate and reliable information on the extent of improvement in symptoms, providing detailed and sensitive measures of treatment effectiveness from the patient’s perspective. For example, the VF-14, a 14-item questionnaire on eyesight, asks patients about the amount of difficulty they experience in pursuing usual daily activities, such as driving and reading fine print. Many insurers (including Medicare) require that the results of the VF-14 be reported as part of claims payment. The questionnaire also is required by the National Eye Institute to test the benefits of new technologies and procedures for cataract patients (Stanton, 2002).
Such consumer surveys may be an even more appropriate and valuable source of data on the outcomes of M/SU health care than on those of general health care. Laboratory tests or other physical measures, such as blood glucose levels, blood pressure, and forced expiratory lung volume, can measure outcomes of general health care accurately and easily. In contrast, fewer laboratory or other physical examination findings can measure whether mental illness or drug dependence is remitting. Thus patients are likely to be the best source of information on the extent to which their symptoms are abating and functioning is improved.
Patient reports of symptoms and functioning (outcomes of care) can readily be gathered using several clinically feasible, valid, and reliable questionnaires, such as the Behavior and Symptom Identification Scale (BASIS-32) (Eisen et al., 1999, 2004) and the Patient Health Questionnaire (PHQ-9) (Lowe et al., 2004). Alternatively, clinicians can assess response to treatment systematically and reliably by obtaining information from the patient, combined with other data, and following up over time by using such instruments as the Global Assessment of Functioning (GAF), the Brief Psychiatric Rating Scale (BPRS), and the Health of the Nation Outcome Scales (HoNOS) (VA Technology Assessment Program, 2002). In the alcohol and drug field, instruments such as the Addiction Severity Index (ASI), the Global Appraisal of Individual Needs (GAIN), and the Project MATCH Form 90 are widely used to measure function. In addition, patient surveys used for quality measurement purposes, such as the Experience of Care and Health Outcomes (ECHO) Survey (Anonymous, 2001) and the Mental Health Statistical Improvement Project (MHSIP) surveys, include questions on patients’ perceptions of their improvements in functioning.
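To illustrate how such an instrument yields a quantitative, comparable outcome, the sketch below scores the PHQ-9: nine self-rated items, each 0 to 3, summed to a 0 to 27 total. The severity bands used are the commonly cited cutpoints; anyone applying the instrument should confirm them against its official scoring guide.

```python
# Minimal sketch of patient-reported outcome scoring for the PHQ-9.
def phq9_score(item_ratings):
    """Sum nine 0-3 self-ratings into a PHQ-9 total score (0-27)."""
    if len(item_ratings) != 9 or any(r not in (0, 1, 2, 3) for r in item_ratings):
        raise ValueError("PHQ-9 requires nine item ratings, each 0-3")
    return sum(item_ratings)

def phq9_severity(total):
    """Map a total score to a commonly cited severity band."""
    for cutoff, label in ((20, "severe"), (15, "moderately severe"),
                          (10, "moderate"), (5, "mild")):
        if total >= cutoff:
            return label
    return "none-minimal"

# A patient rated at intake and again after a course of treatment:
baseline = phq9_score([2, 2, 1, 2, 1, 1, 2, 1, 0])   # total 12
followup = phq9_score([1, 1, 0, 1, 0, 0, 1, 0, 0])   # total 4
print(phq9_severity(baseline), "->", phq9_severity(followup))
```

Because the score is a simple sum on a fixed scale, repeated administrations give a change measure that can be compared across patients, clinicians, and settings.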
If the more detailed administrative data on treatment described above were linked to patient reports of improvement in clinical symptoms and other outcomes, additional evidence could be generated on what treatments and treatment approaches are more effective than others in usual settings of care. For example, the annual Medicare Current Beneficiary Survey asks aged and disabled Medicare beneficiaries living in the community and in institutions to answer questions about many aspects of their health and health care, including their health status and ability to function. These patient self-report data are often combined with Medicare claims and expenditure data to answer a variety of questions about Medicare-covered services (CMS, 2004), such as whether particular services improve beneficiaries’ functional status (Hadley et al., 2000) and what effects variations in Medicare spending have on the delivery of care and patient outcomes (Fisher et al., 2003). In addition, the analysis of administrative data and patient outcomes can be used to facilitate experimental research by identifying target population groups that are using therapies or medications of interest and have experienced either treatment failures, partial symptom abatement, or more complete recovery (Miller and Craig, 2002). In the Veterans Health Administration (VHA), linking outcome data on patients treated for posttraumatic stress disorder with administrative data showed that long-term, intensive inpatient treatment was not more effective than short-term treatment and cost $18,000 more per patient per year (Fontana and Rosenheck, 1997; Rosenheck and Fontana, 2001). In 1999, the VHA mandated that all mental health inpatients be rated at discharge using the GAF instrument, and that all outpatients be similarly rated at least once every 90 days during active treatment. 
The agency now includes GAF outcome measures in its National Mental Health Program Performance Monitoring System (Greenberg and Rosenheck, 2005) (see the discussion in Appendix C).
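The linkage the preceding paragraphs describe can be sketched concretely. Everything below is hypothetical: the therapy-type codes, the field names, and the data; the point is only the mechanics of joining administrative records to patient-reported symptom scores to compare effectiveness in routine care.

```python
# Illustrative only: IF administrative records carried a specific
# therapy-type code AND could be linked to patient-reported outcomes,
# mean symptom improvement could be compared across therapies.
# All codes, names, and numbers here are invented for the sketch.
from statistics import mean

claims = {  # patient ID -> therapy-type code from (hypothetical) detailed coding
    "p1": "CBT", "p2": "CBT", "p3": "IPT", "p4": "IPT",
}
outcomes = {  # patient ID -> (baseline symptom score, follow-up score)
    "p1": (18, 9), "p2": (14, 8), "p3": (16, 12), "p4": (15, 13),
}

def mean_improvement_by_therapy(claims, outcomes):
    """Link the two datasets on patient ID; average score reduction per code."""
    by_code = {}
    for pid, code in claims.items():
        if pid in outcomes:                      # the linkage step
            before, after = outcomes[pid]
            by_code.setdefault(code, []).append(before - after)
    return {code: mean(diffs) for code, diffs in by_code.items()}

print(mean_improvement_by_therapy(claims, outcomes))
```

Today the claims side of this join carries only generic psychotherapy codes, so the grouping key that makes the comparison informative does not exist; that is the gap the more detailed coding discussed above would close.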
How Mechanisms for Analyzing the Evidence Can Be Strengthened and Coordinated
As evidence is generated, systematic analysis is essential to translate it into clinically useful practice guidelines and other clinician decision-support tools. Many organizations and initiatives in the United States are performing such analyses for M/SU health care. However, there is often little coordination of those efforts. Moreover, although the practice of evidence-based care is widely endorsed, there is not yet a shared understanding in M/SU health care (as is also the case in general health care [Steinberg and Luce, 2005]) of what constitutes a finding that a given practice is evidence-based. Views differ about the acceptability of various forms of evidence, what level of evidence is necessary for a practice to be recommended or endorsed as evidence-based (Tanenbaum, 2003), and whether knowledge of evidence-based care for a population can be adapted to meet each individual’s unique needs (Tanenbaum, 2005).
This lack of consensus prompted a call from Congress in 1999 for AHRQ to identify and describe sound methods for rating the strength of scientific evidence. AHRQ found several acceptable systems that address the essential considerations of (1) the aggregate quality ratings for individual studies; (2) the quantity of studies (number of studies, magnitude of observed effects, and sample size or power); and (3) consistency, or the extent to which similar and different study designs yield similar findings (West et al., 2002). However, AHRQ’s findings, while helpful, do not resolve debates about whether a given intervention is evidence-based. Most evidence reviewers acknowledge that many interventions have varying degrees of evidence in their favor, ultimately necessitating a judgment as to whether the evidence supports recommending their use.
This judgment can often differ according to the entity conducting the evidence review but may be more susceptible to variation in M/SU than in general health care for several reasons. First, a greater number of organizations are involved in making determinations with regard to evidence-based practices in M/SU health care. As Chapter 7 attests, a greater number of professions (e.g., physicians, psychologists, counselors, marriage and family therapists) with their diverse traditions and training are involved in independently diagnosing and treating M/SU conditions than is the case for general health care. Their professional organizations are increasingly conducting evidence reviews and promulgating their own practice guidelines. Moreover, because M/SU problems and illnesses are addressed not only by the health care system, but also by the welfare, justice, and education systems, organizations and disciplines involved in these latter systems also are dedicating resources to evaluating the evidence and identifying evidence-based M/SU health care practices (see the Department of Justice’s What Works initiative in Table
4-1). Second, the biological and social sciences often have employed different types of research designs, with resulting differences in the types of empirical evidence produced. Because M/SU health care involves both medical and psychosocial issues and professions that have their historical origins in either the biological or social sciences, reviews are conducted by entities with different origins and research traditions and sometimes produce different types of empirical evidence and judgments about their meaning. Table 4-1 lists some of the leading organizations or initiatives that conduct evidence reviews of M/SU health care services and make determinations with regard to effective practices.
The commitment of these and other organizations to promoting the delivery of evidence-based care is to be applauded. “Reinvention” has been identified as a key ingredient in ensuring acceptance of new concepts and necessary change (Greenhalgh et al., 2004). At the same time, however, variations in review and rating methodologies can result in different practice guidelines for treating the same condition and a lack of consensus on what guidelines are best (Manderscheid et al., 2001). The lack of coordination and consensus across the multiple existing review efforts also contributes to significant confusion about what constitutes “evidence-based” health care for mental and substance-use conditions (Ganju, 2004). These variations, and the sometimes duplicative coverage of the same topics, create challenges to the promotion of evidence-based care. Moreover, the lack of coordination among these initiatives means there are fewer resources available for other quality improvement activities.
There is also a contrast between the evidence review infrastructure for psychotherapies and that for drug safety and efficacy, as well as for how new treatments and therapies are deployed. The U.S. Food and Drug Administration (FDA) oversees the development, delivery, and dissemination of safe and effective medication therapies by subjecting new medications to a safety review before they are released into the marketplace for use by consumers. FDA review mechanisms also assess the strength of the evidence for the effectiveness of certain drugs prior to their release. Medications cannot be distributed or advertised to the public unless they have been approved by the FDA. However, no such safety and efficacy reviews are required for psychotherapies. As a consequence, those seeking psychotherapy cannot always be confident that the treatment they are receiving has met any standards for safe and effective care. In one extreme example, this situation resulted in the death of a 10-year-old child who was subjected to “rebirthing therapy” (Associated Press, 2005), a practice subsequently discredited (Lilienfeld et al., 2003). Moreover, while many new therapies in general health care, such as surgical procedures not involving a new medical device, can be used without an FDA-type review, individual patients for whom such therapies are used generally receive information about the evidence for
TABLE 4-1 Organizations and Initiatives Conducting Systematic Evidence Reviews in M/SU Health Care
The Cochrane Collaboration
The standard setter for evidence-based reviews, its Database of Systematic Reviews and other products are the output of over 50 international Collaborative Review Groups (CRGs), which follow detailed procedures contained in a 234-page handbook. CRGs review primarily randomized controlled trials (Alderson et al., 2004). The Cochrane Collaboration maintains four CRGs related to M/SU illnesses: the Depression, Anxiety and Neurosis Group; the Developmental, Psychosocial, and Learning Problems Group; the Drug and Alcohol Group; and the Schizophrenia Group, which together have produced over 100 evidence reports for these areas (The Cochrane Collaboration, 2004).
The U.S. Preventive Services Task Force
Congressionally mandated, it is the “gold standard” for reviewing preventive services in the United States (AHRQ, 2002–2003). Its standardized methodology has been adopted by others, including the Veterans Health Administration and Department of Defense. Because of its focus on prevention, its evidence reviews are limited to screening practices, counseling interventions, and other preventive interventions delivered in primary care settings (Harris et al., 2001). To date, the task force’s recommendations pertaining to M/SU illnesses have addressed screening and/or counseling in primary care settings for alcohol misuse by adults, depression in adults, and suicide risk in the general population (Harris et al., 2001).
National Registry of Evidence-based Programs and Practices (NREPP)
The Substance Abuse and Mental Health Services Administration’s (SAMHSA) rating and classification system for M/SU prevention and treatment interventions designates evidence-based programs and practices as “model,” “effective” or “promising.” As of June 2005, NREPP listed more than 50 model, 30 effective, and 50 promising programs. In contrast to the evaluation of generic practice interventions (e.g., screening, cognitive behavioral therapy), as is the focus of the Cochrane Collaboration and the U.S. Preventive Services Task Force, the majority of NREPP’s reviews to date have evaluated specific “brand-name” programs for prevention (e.g., the Keep A Clear Mind drug education program), but it also reviews generic practices such as multisystemic therapy and cognitive behavioral treatments. NREPP
also differs in that its reviews evaluate the evidence accompanying an entity’s application for review, whereas Cochrane, AHRQ EPC, and Campbell reviews (described below) involve an independent search for all evidence on a particular generic intervention. Originally developed to evaluate substance-use prevention interventions, NREPP has expanded the scope of its reviews to include both prevention and treatment of all mental and addictive disorders (SAMHSA, 2005). In a Federal Register notice in August 2005, SAMHSA solicited formal public comment on NREPP’s review processes and criteria (SAMHSA, 2005).
Agency for Healthcare Research and Quality’s (AHRQ) Evidence-based Practice Centers (EPCs)
Through AHRQ, the United States funds 13 EPCs that address topics particularly relevant to the Medicare and Medicaid programs. One EPC specializes in technology assessments for the Centers for Medicare and Medicaid Services; another supports the work of the U.S. Preventive Services Task Force. EPC reviews are developed from comprehensive syntheses and analyses of the scientific literature and can include meta-analyses and cost analyses. EPCs also provide technical assistance to stakeholders to help translate the reports into quality improvement tools, curricula, and policy. EPCs are located predominantly in academic research centers. Of the 123 EPC evidence reports listed on AHRQ’s website as of November 2004, 4 addressed M/SU health care: the diagnosis of ADHD, the treatment of ADHD, pharmacotherapy for alcohol dependence, and new drug therapies for depression—all published in 1999 (AHRQ, undated).
Veterans Health Administration (VHA)
VHA performs systematic reviews of health care technologies through its national Technology Assessment Program (VATAP) and develops clinical practice guidelines. VATAP reviews devices, drugs, procedures, and organizational and supportive systems used in health care; its reviews in M/SU health care have focused on outcome measurement in mental health services (Department of Veterans Affairs, 2004). Practice guidelines have addressed major depression, psychoses, posttraumatic stress disorder, and substance use.
Department of Justice’s (DOJ) Federal Collaboration on What Works
Like the efforts of NREPP, DOJ’s What Works initiative aims to develop and apply consistent federal standards to determine what constitutes evidence-based programs. In conjunction with the U.S. Department of Education, SAMHSA, the National Institute on Drug Abuse, and the National Institute on Alcohol
Abuse and Alcoholism, as well as selected private organizations, DOJ in 2004 convened the Federal Collaboration on What Works, which spawned a working group whose early efforts focused on the development of a framework for assessing the evidence for program effectiveness. This Hierarchical Classification Framework for Program Effectiveness is intended to be applied initially to programs relevant to the mission of the Office of Justice Programs (i.e., primarily prevention, intervention, supervision, and treatment of drug abuse, juvenile delinquency, and adult crime), but the working group has identified it as potentially contributing to the development of a common standard of program effectiveness for use throughout the federal government (Department of Justice, 2005).
The Campbell Collaboration
Created in 2000 as a sibling of the Cochrane Collaboration, the Campbell Collaboration conducts systematic reviews of evidence in the fields of education, criminal justice, and social welfare. Its systematic reviews are carried out in accordance with explicit review protocols published in the Campbell Database of Systematic Reviews and are subject to comment and criticisms from users of that database. As of March 1, 2005, seven completed systematic reviews were listed on its website, along with an additional 35 registered titles or protocols for forthcoming reviews. Because the education, criminal justice, and social welfare systems play key roles in the funding and delivery of M/SU treatment services, there is some expected overlap between Cochrane and Campbell reviews, and seven completed Campbell reviews are also registered as Cochrane reviews. To address this overlap, the Cochrane and Campbell Collaborations are pursuing coordination of their activities, including joint registration of methods groups, as well as links with other conveners and members of Cochrane and Campbell methods groups and with the steering group representatives of both organizations (The Campbell Collaboration, undated).
Some states conduct or sponsor their own evidence reviews. For example, in 1999 Hawaii created a panel to review the efficacy and effectiveness of treatments for a range of child and adolescent mental health conditions (Chorpita et al., 2002). Using methods and rating criteria adapted from those of the American
their potential advantages and risks through the informed consent process. If a new treatment in general health care is considered experimental, review by an institutional review board is required. Psychotherapies are unique in this regard in that a given therapist may offer a new therapeutic approach without its undergoing a safety or effectiveness review and without having to inform the patient about the extent to which its safety and effectiveness have been established.
The committee concludes that a more comprehensive, systematic, and coordinated approach is needed to describe, assess, and classify M/SU treatments and practices according to the level of evidence that supports their use. Better coordination of current national and international review activities, as well as coordination of those efforts with the evidence review activities that underlie the guideline development process of many organizations, could prevent redundancy and waste, produce more evidence reviews on a timelier basis, and avoid conflicting interpretations of the data for clinicians and consumers. The organizations engaged in these activities are natural partners for building a more comprehensive, coordinated, and systematic review network. Many of these same organizations are also involved with the dissemination of their review findings in the form of practice guidelines and other clinical decision-support tools.
IMPROVING DIAGNOSIS AND ASSESSMENT
The production of evidence will be less fruitful if it is not accompanied by accurate diagnosis and comprehensive longitudinal assessment. Because having a mental illness or alcohol- or other substance-use diagnosis is a leading risk factor for suicide (Maris, 2002), failure to diagnose these conditions can be lethal. An inaccurate diagnosis also can lead to ineffective treatment and even harmful outcomes. Yet individuals with the same symptoms presenting to different mental health clinicians can receive different diagnoses. For example, variations have been documented in the extent to which depression is diagnosed in individuals with similar symptoms by both psychiatrists (Kramer et al., 2000) and primary care providers (Mojtabai, 2002) and in the extent to which ADHD is diagnosed within different communities (Lefever et al., 2003). Recently, the diagnosis of bipolar illness in children, especially preschoolers, has been the subject of considerable controversy among psychiatrists (McClellan, 2005). For many conditions, significant discrepancies have been observed between diagnoses generated from structured interviews for research purposes and those resulting from clinician judgments (Lewczyk et al., 2003) or from diagnostic tools developed for clinical purposes (Eaton et al., 2000).
In children, diagnoses may have an even greater range of variability because diagnostic manifestations change over the course of development.
Moreover, clinicians are greatly dependent upon parents’ perceptions of the nature of the presenting problems. Parents may differ, for example, in the extent to which they perceive very active behavior as problematic versus being “all boy,” or view a quiet and introspective child as being “shy” versus having a “social disorder.” Subjectivity in diagnosis also is manifest in the variable diagnoses received by white patients and individuals who are members of ethnic minorities. African American patients with manic-depressive illness, for example, have been found to be at higher risk for being misdiagnosed as having schizophrenia than are whites (Bell and Mehta, 1980, 1981; Mukherjee et al., 1983). Such racial differences have tended to disappear when structured interviews rather than clinical diagnoses are used (Adebimpe, 1994; Simon and Fleiss, 1973), suggesting the existence of differences in clinician assessment by patient ethnicity.
A number of factors account for variations in diagnosis of M/SU illnesses. Foremost, in contrast with general health conditions, relatively few laboratory, imaging, or other physical measures can detect the presence of a mental illness or substance dependence.10 Accurate diagnosis relies instead upon descriptive methods whereby patients or their caregivers inform clinicians about symptoms, and clinicians apply their expert judgment to determine whether diagnostic criteria for a condition are met. Moreover, individual clinicians vary in the breadth, depth, and theoretical basis of their training (see Chapter 7). Because diagnosis requires a subjective interpretation of reported symptoms, these variations result in inconsistency and unreliability in how individuals are diagnosed. Administrative rules and financial incentives can also influence diagnostic practices.
Criteria for reliably diagnosing M/SU problems and illnesses are found in the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM), which has been a highly significant milestone in the diagnosis and treatment of mental and substance-use problems and illnesses and is now in the text revision of its fourth edition (DSM-IV-TR). However, adherence to these criteria is not uniform. Fully 56 percent of primary care physicians in Michigan surveyed in 2002 reported that they did not use DSM criteria to diagnose ADHD (Rushton et al., 2004). This may be because DSM-IV is not easy to use in primary care settings, in part because of its focus on specialty care, its length, and its complexity (Pincus, 2003).
Several different approaches have been undertaken to improve the accuracy of diagnosis of M/SU illnesses. System-level interventions, such as routine screening, have been shown to help (Gilbody et al., 2001; Rollman et al., 2001). Structured diagnostic interview instruments have also been developed to reduce variability in information gathering and biases that can inadvertently influence individual clinicians’ decision making. While these
instruments have demonstrated reasonable reliability, their clinical feasibility and accuracy in routine practice are not well established (Lewczyk et al., 2003). Other initiatives have provided clinicians with education and guidelines to improve their recognition and treatment of mental illnesses (Lin et al., 2001; Thompson et al., 2000).
The committee concludes that multiple strategies are needed to improve diagnostic accuracy in M/SU health care. First, existing evidence-based diagnostic tools and assessment practices should be identified and applied in practice, just as must be done for evidence-based treatment. More age-appropriate diagnostic instruments also should be developed that are reliable and practicable for routine use, and information about these tools should be included in initiatives to better disseminate evidence-based practices. Further, clinicians should be encouraged to employ standardized clinical assessment instruments to measure target symptoms consistently and systematically, and document results over the course of treatment (American Psychiatric Association Task Force for the Handbook of Psychiatric Measures, 2000).
As discussed earlier in this chapter, however, even when evidence-based practices are known, their adoption by all relevant practitioners—in both general and M/SU health care—is too slow. Accordingly, many public and private organizations are actively engaged in efforts to strengthen the dissemination and uptake of effective clinical practices. Yet these activities themselves are not always consistent with the evidence on effective dissemination and uptake of new knowledge. Improving the effectiveness of dissemination activities is thus the next essential step in improving the effectiveness of M/SU health care.
BETTER DISSEMINATION OF THE EVIDENCE
Research has been under way for many years, in health care as well as other fields of study, to identify the multiple contributors to successful dissemination and adoption of new practices and innovations by their targeted users. An extensive and systematic review of empirical evidence and related theoretical literature from multiple disciplines (Greenhalgh et al., 2004) identified the following key factors in successful dissemination and adoption of innovations: (1) the characteristics of the innovation itself, (2) the characteristics of the individuals targeted to adopt it, (3) sources of communication and influence regarding the innovation, (4) structural and cultural characteristics of organizations targeted to adopt it, (5) external influences on targeted individuals or organizations, (6) organizations’ uptake processes, and (7) the linkages among these six factors (see Box 4-2).
Although some of the factors affecting the adoption of new practices (e.g., characteristics of individual adopters) may not be very amenable to
[BOX 4-2 Key Factors in the Successful Dissemination and Adoption of Innovations. For each of the seven factors identified in the review (characteristics of the innovation itself, characteristics of individual adopters, sources of communication and influence, external influences, structural and cultural characteristics of potential organizational adopters, the uptake process, and the linkages among these components), the box lists the conditions under which an innovation is more likely to be adopted. SOURCE: Greenhalgh et al., 2004.]
external change, others are. For example, the sources of communication and influence used in dissemination of information can be chosen. While many initiatives are now under way to disseminate evidence-based M/SU practices, these initiatives are generally being undertaken by specialty M/SU organizations, as opposed to those associated with general health care. Evidence indicates that integrating the dissemination of evidence-based M/SU health care practices into the scope and initiatives of mainstream general health care dissemination activities is essential to reaching the vast numbers of general health care clinicians who now treat M/SU problems and illnesses and have an essential role in ensuring the early detection, appropriate treatment, and referral of these conditions.
Key Dissemination Efforts
Substance Abuse and Mental Health Services Administration
As part of its Science to Service Initiative, SAMHSA has multiple activities under way to disseminate information on evidence-based practices, promote the incorporation of such practices into general and M/SU health care, and facilitate feedback from the field to guide research. For example, SAMHSA’s Center for Mental Health Services is developing six “tool kits” addressing Illness Management and Recovery, Medication Management, Assertive Community Treatment, Family Psychoeducation, Supported Employment, and Integrated Dual Diagnosis Treatment for Co-Occurring Disorders. The kits include information sheets for all stakeholder groups, introductory videos, practice demonstration videos, and workbooks or manuals for practitioners. The tool kits will be finalized through a national demonstration project to be completed at the end of 2005 (SAMHSA, undated-a). SAMHSA also funds the Center for Mental Health Quality and Accountability of the National Association of State Mental Health Program Directors (NASMHPD) Research Institute (NRI) to provide an overview of evidence-based practices to the association’s constituents and other stakeholders (NASMHPD Research Institute, undated).
SAMHSA’s dissemination mechanisms for substance-use prevention and treatment include Treatment Improvement Protocols, the National Addiction Technology Transfer Centers (ATTC) Network (1 national and 13 regional centers), the Network for the Improvement of Addiction Treatment, and the Centers for the Application of Prevention Technology. Further, SAMHSA’s State Systems Development Program—an enhanced technical assistance program involving conferences and workshops, development of training materials and knowledge transfer manuals, and on-site consultation—assists states with the administration and implementation of Substance Abuse Prevention and Treatment Block Grant activities. The
program’s Treatment Improvement Exchange, the hub for the full range of its technical assistance services, also facilitates and promotes information exchange between SAMHSA’s Center for Substance Abuse Treatment (CSAT) and state and local alcohol and substance abuse agencies. These activities include information development and dissemination; state, regional, and national conferences; and on-site expert consultation (SAMHSA, undated-c). In addition, SAMHSA is partnering with the National Institute of Mental Health (NIMH) and the National Institute on Drug Abuse (NIDA) to jointly fund planning activities and research on the adoption of evidence-based practices by state M/SU agencies.
National Institutes of Health
NIMH is partnering with SAMHSA to promote and support the dissemination of evidence-based mental health treatment practices and their adoption by state mental health systems through Bridging Science and Service grants to states (NIMH, 2004). NIDA and CSAT have a similar joint initiative—the NIDA/SAMHSA-ATTC Blending Initiative—which encourages the use of evidence-based treatments by professionals in the drug abuse field. NIDA has identified specific research practices (e.g., motivational interviewing) as ready for use by the field at large. Blending teams comprising staff from CSAT’s ATTC network and NIDA researchers then develop strategic dissemination plans for the adoption and implementation of these practices (NIDA, 2005).
In addition, NIMH, NIDA, and the National Institute on Alcohol Abuse and Alcoholism (NIAAA) have multiple publication, interpersonal, electronic media, and other initiatives to help disseminate information on evidence-based practices. For example, NIDA’s Office of Science Policy and Communications, responsible for research dissemination activities, produces a number of periodical publications (e.g., NIDA Notes, Perspectives), as well as topic-specific publications. NIDA’s Principles of Drug Addiction Treatment: A Research-based Guide, for example, is a synthesis of the treatment research organized into 13 key principles, questions and answers, and a listing of some programs for which a strong evidence base exists.11
Veterans Health Administration
VHA’s clinical practice guidelines initiative (described in Table 4-1) also identifies, disseminates, and promotes the adoption of evidence-based practices. Practice guidelines resulting from evidence reviews are frequently
displayed in clinical flowcharts that offer decision support to VHA clinicians (VHA, 2005). VHA’s Quality Enhancement Research Initiative facilitates the translation of research findings into routine care by (1) conducting research to fill gaps in knowledge about what constitutes best treatment practices, (2) undertaking demonstration projects that implement already known best practices, (3) identifying enhancements to VHA’s information systems, and (4) conducting research and demonstration projects to accelerate the uptake of evidence-based practices (Fischer et al., 2000). The initiative includes projects in mental health (schizophrenia and depression) and substance-use illnesses (improving the quality of methadone maintenance therapy).
As discussed above, many professional bodies are actively engaged in dissemination activities. These activities are often connected with their development and distribution of practice guidelines.
Underused Sources of Communication and Influence
The dissemination activities described above are conducted by organizations that generally are perceived as specialty M/SU organizations and thus may be most likely to communicate and have influence with specialty M/SU health care providers. As described in Chapter 7, however, primary care providers deliver a substantial portion of mental health services and are a critical source for the detection of M/SU conditions, referral, and subsequent treatment. Other non–M/SU specialty providers also have key roles to play in detection, treatment, and referral. Yet data show that these general health care providers need to adopt evidence-based practices to better detect, treat, and appropriately refer individuals in need of M/SU health care. Thus it is important that dissemination of the evidence on effective M/SU health care reach all providers, not just those specializing in M/SU care.
However, the key current dissemination efforts described above may be less likely to influence primary care providers and other non–M/SU specialty clinicians. Research on the effective dissemination of innovations described above (Box 4-2) shows that individuals’ and organizations’ adoption of new practices is greatly influenced by their social networks. Successful dissemination occurs most easily among individuals with similar educational, professional, and cultural backgrounds. Opinion leaders within a field also strongly influence the dissemination and uptake of innovations. Formal dissemination programs will be more successful if they are aware of and address potential adopters’ needs and perspectives, and tailor their
dissemination strategies to the demographic, structural, and cultural characteristics of different subgroups (Greenhalgh et al., 2004). To this end, resources routinely tapped by general and other non–M/SU specialty health care practitioners and policy makers should be used to help disseminate evidence on effective detection and treatment of M/SU illnesses. In short, M/SU health care needs to be better addressed in evidence dissemination efforts that are routinely employed to address providers of general health promotion and disease and disability prevention and treatment. The U.S. Centers for Disease Control and Prevention (CDC) and AHRQ’s Division of User Liaison and Research Translation (formerly called the User Liaison Program) are two highly regarded organizations with expertise in knowledge dissemination that can be utilized more fully for this purpose.
Centers for Disease Control and Prevention
CDC’s mission is “to promote health and quality of life by preventing and controlling disease, injury, and disability” (CDC, 2005a:1). It does so by serving as “the principal agency in the United States government for protecting the health and safety of all Americans and for providing essential human services, especially for those people who are least able to help themselves” (CDC, 2005b:1). Despite this mandate, CDC’s substantial and highly regarded expertise in these areas, and the large contribution of M/SU illnesses to morbidity, disability, and injury (see Chapter 1), M/SU illnesses could be better represented in CDC’s organizational structures, programs, and initiatives.
CDC encompasses multiple centers, institutes, and offices (CDC, 2005c) (see Box 4-3). Of these, the National Center for Chronic Disease Prevention and Health Promotion might reasonably be expected to address M/SU problems and illnesses, given their substantial contribution to chronic disease and general health problems. Yet the listing of chronic disease programs on the center’s website includes arthritis, cancer, diabetes, epilepsy, global health, healthy aging, healthy youth, heart disease and stroke, nutrition and physical activity, oral health, a block grant program to implement national objectives contained in the Healthy People report, prevention research programs, elimination of racial disparities, pregnancy-related illnesses, tobacco use, and an initiative for uninsured women (addressing high blood pressure and cholesterol, nutrition and weight management, physical inactivity, and tobacco use)—but not M/SU illnesses. Another key initiative of the center—Steps to a HealthierUS—is designed to advance the goal of helping Americans live longer, better, and healthier lives through 5-year cooperative agreements with states, cities, and tribal entities to implement chronic disease prevention efforts focused on reducing the burden of diabetes, overweight, obesity, and asthma and three related risk factors—
physical inactivity, poor nutrition, and tobacco use (CDC, 2005d). The prevention and treatment of M/SU illnesses are not mentioned in these and similar CDC initiatives.
Moreover, the CDC website providing an overview of chronic illness (http://www.cdc.gov/nccdphp/overview.htm)12 fails to list any M/SU problems or illnesses among the Leading Causes of Disability among Persons Aged 15 Years or Older, United States (although the source for the data cited is dated 1991–1992). This omission is in spite of the evidence presented in Chapter 1 and acknowledged in the President’s New Freedom Commission report that mental illnesses rank first among conditions that cause disability in the United States (New Freedom Commission on Mental Health, 2003).
Instead of being included explicitly in these and other structures or formal initiatives, mental health is addressed in CDC through a Mental Health Work Group that is not part of any of the agency’s formal centers, programs, or offices and has no formal budget allocation, personnel positions, or other dedicated administrative support. “Staff members participating in this work group do so voluntarily as an add-on to their other CDC responsibilities because of their commitment to advancing the field of mental health within the context of the overall mission of CDC” (CDC, 2005e:1). Although CDC has undertaken important work on alcohol use (see, for example, http://www.cdc.gov/alcohol/about.htm) and alcohol and drug use among youth (see http://www.cdc.gov/HealthyYouth/alcoholdrug/index.htm), M/SU health care could benefit greatly from a larger commitment of CDC resources and expertise.
Agency for Healthcare Research and Quality’s User Liaison Program
For more than 22 years, AHRQ’s User Liaison Program (ULP) has focused on bringing information on science-based health care services to policy makers at the state and local levels, including the staff of governors’ offices, state legislators and their staffs, and executive branch agency heads such as Medicaid and public health directors, to help them develop more effective policies and programs. The ULP historically has relied on workshops, seminars, and conferences to provide this information, but in the past few years has also been conducting audio and web conferencing. The ULP has addressed a wide variety of topics identified through regular formal and informal mechanisms, including biennial needs assessment meetings across the country, conference calls with stakeholders, and portions of workshops devoted to audience feedback regarding topics to be addressed
each year. As of 2004, the ULP’s mission had been expanded to encompass a wider range of knowledge transfer activities (e.g., technical assistance, distance learning, electronic and face-to-face networking, web and teleconferencing). Its target audience has also been expanded to include providers and purchasers in addition to policy makers. To better carry out these new mandates, AHRQ has revised the ULP to focus on long-term knowledge transfer strategies for a few critical health care issues. As of December 2004, these issues were (1) developing high-reliability organizations, (2) care management, (3) purchaser–provider synergies for improving health care quality, and (4) decreasing disparities. This change in direction means that the ULP will likely not offer specific disease-focused programs in the future. Rather, multiple clinical areas of concern can be addressed within the four targeted issues identified above.13 M/SU health care policy makers, administrators, and clinicians ought to be targeted as part of ULP activities.
Conclusions and Recommendation
The committee concludes that dissemination strategies for effective M/SU treatment innovations should use the sources of communication and influence that are highly regarded in general health care in addition to those so regarded in M/SU health care. Moreover, organizations that are especially influential with private-sector providers and other policy makers and purchasers because of their past relationships should be included in a coordinated strategy. For example, with its new focus on policy makers and purchasers, as well as clinicians, AHRQ’s ULP could be an instrument for bringing M/SU health care to the attention of these key leaders.
Recommendation 4-1. To better build and disseminate the evidence base, the Department of Health and Human Services (DHHS) should strengthen, coordinate, and consolidate the synthesis and dissemination of evidence on effective M/SU treatments and services by the Substance Abuse and Mental Health Services Administration; the National Institute of Mental Health; the National Institute on Drug Abuse; the National Institute on Alcohol Abuse and Alcoholism; the National Institute of Child Health and Human Development; the Agency for Healthcare Research and Quality; the Department of Justice; the Department of Veterans Affairs; the Department of Defense; the Department of Education; the Centers for Disease Control and Prevention; the Centers for Medicare and Medicaid Services; the Administration for Children, Youth, and Families; states; professional associations; and other private-sector entities.
13 Personal communication with Steve Seitz, User Liaison Program, Agency for Healthcare Research and Quality, December 9, 2004.
To implement this recommendation, DHHS should charge or create one or more entities to:
Describe and categorize available M/SU preventive, diagnostic, and therapeutic interventions (including screening, diagnostic, and symptom-monitoring tools), and develop individual procedure codes and definitions for these interventions and tools for their use in administrative datasets approved under the Health Insurance Portability and Accountability Act.
Assemble the scientific evidence on the efficacy and effectiveness of these interventions, including their use in varied age and ethnic groups; use a well-established approach to rate the strength of this evidence, and categorize the interventions accordingly; and recommend or endorse guidelines for the use of the evidence-based interventions for specific M/SU problems and illnesses.
Substantially expand efforts to attain widespread adoption of evidence-based practices through the use of evidence-based approaches to knowledge dissemination and uptake. Dissemination strategies should always include entities that are commonly viewed as knowledge experts by general health care providers and makers of public policy, including the Centers for Disease Control and Prevention, the Agency for Healthcare Research and Quality, the Centers for Medicare and Medicaid Services, the Office of Minority Health, and professional associations and health care organizations.
The committee calls attention to three important considerations involved in implementing this recommendation. First, implementing this recommendation will require a long-term commitment on the part of DHHS. An ongoing process accommodating changes in the science base over time will be necessary to synthesize the evidence base; assess interventions based on the strength of their scientific evidence; and develop and continually update a reliable categorization and coding scheme for individual M/SU prevention, screening, assessment, psychotherapy, psychosocial, and other treatment interventions. Given fiscal constraints, and in an effort to mainstream M/SU health care, the committee recommends that DHHS make use of public- and private-sector structures and processes already in place that synthesize evidence, develop procedure codes such as the HCPCS codes and CPT codes for administrative datasets, develop performance measures and measurement approaches for the public and private sectors, and carry out
related activities. To marshal the substantial expertise and resources of these entities and assist them in dedicating additional resources to M/SU health care, DHHS will need to provide them with formal support and financial and nonfinancial resources to enable and sustain these activities until they are firmly in place.
The committee also notes that a wide variety of M/SU health care interventions are important to the effective treatment of M/SU conditions and need to be included in the recommended evidence review, coding, and performance measurement initiatives. Beyond traditional psychotherapy, these initiatives should encompass screening and diagnostic questionnaires and assessment tools with practical utility in routine primary and specialty care settings (as opposed to tools used for research purposes); other clinically practicable tools used to monitor symptoms and patient outcomes; and the range of psychosocial services with proven effectiveness, such as family psychoeducation, illness self-management, and assertive community treatment. In addition to procedure codes, codes should be developed that indicate the type of clinician providing care (e.g., psychiatrist, psychologist, marriage and family therapist, or counselor).
Finally, the committee reaffirms its view that the development of more precise procedure and provider codes is a critical pathway to improvements in quality. The development of an analytic database comparable to that which exists for general health is critical to informing our understanding of factors that influence utilization of care, variations in care, and the relationship between health outcomes and various types of treatments. Such information also will provide transparency as to what health care purchasers are paying for and what consumers are actually receiving. As these codes are developed, the federal government should require their use in all federally mandated and supported administrative data collection activities.
In addition, as discussed above, the committee believes that the collection of outcome data can both inform clinical care at the point of care delivery and contribute to the development of evidence on effective treatments. It therefore makes the following recommendation:
Recommendation 4-2. Clinicians and organizations providing M/SU services should:
Increase their use of valid and reliable patient questionnaires or other patient-assessment instruments that are feasible for routine use to assess the progress and outcomes of treatment systematically and reliably.
Use measures of the processes and outcomes of care to continuously improve the quality of the care provided.
The committee points out that this recommendation refers to general health care providers who offer M/SU health care, as well as to specialty M/SU health care providers.
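The first bullet of Recommendation 4-2 — routine, systematic use of valid outcome instruments — can be illustrated with a minimal sketch. The questionnaire scale, class names, and response threshold below are illustrative assumptions, not part of the committee's recommendation; the ≥50%-reduction convention is one common rule of thumb for treatment response, not a prescribed standard.

```python
from dataclasses import dataclass, field

@dataclass
class OutcomeTracker:
    """Tracks questionnaire scores (e.g., a PHQ-9-style 0-27 depression
    scale) across visits. Names and thresholds are illustrative only."""
    scores: dict = field(default_factory=dict)  # patient_id -> scores in visit order

    def record(self, patient_id: str, score: int) -> None:
        self.scores.setdefault(patient_id, []).append(score)

    def change_from_baseline(self, patient_id: str) -> int:
        # Negative values indicate symptom improvement.
        s = self.scores[patient_id]
        return s[-1] - s[0]

    def responded(self, patient_id: str, reduction: float = 0.5) -> bool:
        # An illustrative convention: >=50% score reduction from baseline.
        s = self.scores[patient_id]
        return s[0] > 0 and (s[0] - s[-1]) / s[0] >= reduction

tracker = OutcomeTracker()
for visit_score in [18, 14, 9, 7]:
    tracker.record("patient-a", visit_score)
print(tracker.change_from_baseline("patient-a"))  # -11
print(tracker.responded("patient-a"))             # True
```

Even this toy structure shows how routinely collected scores can serve both purposes the committee identifies: informing care at the point of delivery (the per-visit trajectory) and, when aggregated, contributing evidence on treatment effectiveness.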
STRENGTHENING THE QUALITY MEASUREMENT14 AND REPORTING INFRASTRUCTURE
A frequently stated maxim across many industries is, “You can’t improve what you can’t measure.” This holds true in health care. Measuring the quality of care provided by individuals, organizations, and health plans and reporting back the results is linked both conceptually and empirically to reductions in variations in care and increases in the delivery of effective care (Berwick et al., 2003; Jha et al., 2003). However, this successful strategy has not yet seen widespread application in M/SU health care. Less measurement of the safety, effectiveness, and timeliness of M/SU health care has taken place than is the case for general health care (AHRQ, 2003; Garnick et al., 2002). In 1998, the President’s Advisory Commission on Consumer Protection and Quality in the Health Care Industry identified mental health care as an aspect of health care not well addressed by existing quality measures and measure sets (The President’s Advisory Commission on Consumer Protection and Quality in the Health Care Industry, 1998). Five years later, the first National Healthcare Quality Report published by DHHS continued to identify mental illness as a clinical area lacking “broadly accepted” and “widely used” measures of quality. Of 107 measures of the effectiveness of health care, only 7 addressed mental health: 3 the treatment of depression in adults, 1 suicide, and 3 management of delirium and confusion in nursing homes and home health. None addressed the quality of care for substance-use problems and illnesses. The only measure that pertained to children was that for suicide (AHRQ, 2003). No additional measures of the quality of mental health care were included in the second annual report published in 2004, and measures of the quality of substance-use care remained absent (AHRQ, 2004a).
This lack of measurement is not caused by a lack of organizations and initiatives developing measures of M/SU health care quality. A National Inventory of Mental Health Quality Measures, funded by AHRQ, NIMH, SAMHSA, and The Evaluation Center@HSRI (The Human Services Research Institute) identified more than 100 measures of the processes of M/SU health care developed by government agencies, researchers, clinician/professional organizations, accreditors, health systems/facilities, employer purchasers, consumer coalitions, and commercial organizations (Hermann et al., 2004). A significant number of outcome measurement instruments also have been identified by VHA (VA Technology Assessment Program, 2002). The failure of mainstream health care quality measurement and improvement efforts to incorporate a greater number of M/SU quality measures is due in part to the separation of M/SU and general health care, as discussed in Chapters 2 and 5. Because of this separation, many M/SU health care advocates, professional associations, and other organizations have undertaken efforts to develop and apply measures of the quality of M/SU health care. However, a major factor inhibiting both mainstream and specialty efforts is the lack of a quality measurement and reporting infrastructure addressing M/SU health care.
Necessary Components of a Quality Measurement and Reporting Infrastructure
Effectively measuring quality and reporting results to providers, consumers, and oversight organizations requires structures, resources, and expertise to perform several related functions:
Conceptualizing the aspects of care to be measured.
Translating the quality-of-care measurement concepts into performance measure specifications.
Pilot testing the performance measure specifications to determine their validity, reliability, feasibility, and cost.
Ensuring calculation of the performance measures and their submission to a performance measure repository.
Auditing to ensure that the performance measures have been calculated accurately and in accordance with specifications.
Analyzing and displaying the performance measures in a format or formats suitable for understanding by the multiple intended audience(s), such as consumers, health care delivery entities, purchasers, and quality oversight organizations.
Maintaining the effectiveness of individual performance measures and performance measure sets and policies over time.
These seven functions are currently performed to varying degrees for M/SU health care by multiple organizations—again often separately from general health care, but in this case the separation also exists across the public and private health care sectors. The result is the rudiments of a
quality measurement and reporting infrastructure, but with some redundancy and gaps in the measures, measurement functions, and entities whose performance is being measured, and without a coordinated approach that maximizes the efficiency and effectiveness of the various efforts. What is needed is one or more infrastructures that perform these seven functions for the four different levels of health care delivery: (1) individual clinicians or groups of clinicians; (2) health care organizations, such as inpatient facilities; (3) health plans; and (4) public health systems (national, state, and local). Below we discuss, for each of the seven functions, special issues related to the delivery of M/SU health care that should influence the implementation of that function and the development of a quality measurement and reporting infrastructure for M/SU health care.
Conceptualizing the Aspects of Care to Be Measured
Because of the large number of existing process and outcome quality indicators and measures, the multiple populations of interest (e.g., children; older adults; individuals with less-frequent but severe and chronic mental illnesses, such as schizophrenia; and inpatients), the different units of analysis (clinicians; inpatient and outpatient organizations; health plans; and local, state, and national systems), and the importance of not overburdening the clinicians and organizations that will produce the measures, a framework is needed for identifying a finite number (often termed a “core” set) of valid, reliable, effective, and efficient measures that can best serve the multiple interested parties and purposes. The best-documented example of such a framework is that of the Strategic Framework Board, which designed a National Quality Measurement and Reporting System (NQMRS) for U.S. health care overall to guide such efforts as those of the National Quality Forum (McGlynn, 2003).
Within M/SU health care, multiple organizations and initiatives also have put forth frameworks or core measure sets, using different approaches to identify aspects of care delivery to be measured and select measures of the structures, processes, and outcomes of M/SU care. These initiatives include the Forum on Performance Measures in Behavioral Health and Related Service Systems (Teague et al., 2004), the Mental Health Statistics Improvement Program Quality Report (Ganju et al., 2004), the Center for Quality Assessment and Improvement in Mental Health (Hermann and Palmer, 2002; Hermann et al., 2004), the Behavioral Healthcare Performance Measurement System for inpatient care of the NRI, the Outcomes Roundtable for Children and Families (Doucette, 2003), and the Washington Circle Group (McCorry et al., 2000) (all of which are convened and/or funded by SAMHSA), as well as the previous efforts of the American College of Mental Health Administrators Accreditation Workgroup (ACMHA,
2001) and the American Managed Behavioral Health Association. The federal government also has adopted a framework through its State Outcomes Measurement and Management System (described below) (SAMHSA, undated-b). These efforts are in addition to performance measure sets that address health care overall and include some M/SU performance measures, such as NCQA’s HEDIS and measures used by VHA (see Appendix C).
All of these efforts have tackled two enduring and related problems that are encountered in all performance measurement efforts: (1) the tension between having measures of high validity, reliability, and ease of calculation and having a broader set of measures that is more representative of the populations and conditions of interest; and (2) the difficulty of achieving consensus on the measure set across all stakeholders (Hermann and Palmer, 2002). In addition to these problems, conceptualizing a framework for M/SU health care is more complex than doing so for general health care for the reasons discussed below.
More-diverse stakeholders The larger number of disciplines licensed to diagnose and treat M/SU problems and illnesses relative to those licensed to diagnose and treat general health conditions potentially requires the involvement of a greater number of stakeholder groups in a consensus process. Moreover, as discussed earlier, M/SU health care involves both specialty and general medical providers. In addition, the involvement of the education, juvenile and criminal justice, and child welfare systems as payers and providers of M/SU services means performance measures selected for M/SU health care must be determined with input from these stakeholders, who are not typically involved in general health care. Consumer advocates also have been very active in shaping the delivery of M/SU health care, again with implications for the numbers and diversity of stakeholders in a consensus process.
Difference between the public and private sectors Although general health care is delivered in both the public and private sectors, in M/SU health care the public sector serves a population with a clinical profile much different from that of the population served by the private sector—most often those with severe and chronic illnesses. Thus, measures that may be meaningful to private-sector stakeholders may be less useful to those in the public sector. In NCQA’s HEDIS measures for general health care, for example, some measures15 are designated for calculation for Medicaid populations but not for privately insured populations (NCQA, 2004b). This practice may need to be employed more widely for M/SU health care. Even measures appropriate for multiple populations may need to be reported separately.
Different types of evidence As discussed earlier, M/SU health care has often relied on evidence generated by quasi-experimental studies rather than randomized controlled trials. Some performance measures that are deemed valid by M/SU stakeholders may therefore be less credible to performance measurement stakeholders in the general health care sector.
Unclear locus of accountability The separation of the delivery of M/SU and general health care discussed earlier impairs performance measurement in two ways. First, it can create confusion as to whether a given performance measure can be used because it is unclear to whom the measure should apply. There is confusion about the entity accountable for care quality when care can be delivered through multiple delivery arrangements (e.g., primary or specialty care, general or carve-out health plans, school-based programs). For example, the HEDIS performance measures addressing M/SU health care apply to general health plans seeking accreditation, but not to managed behavioral health care organizations.16
Another problem caused by the separation of M/SU and general health care, as well as by the separation of mental and substance-use care, relates to access to data. To produce many performance measures, data on the patient’s entire illness—from detection through ongoing treatment—are needed. When patients are served by entities separate from their general health care plan or from each other, such as carved-out managed behavioral health plans, employee assistance programs, school-based health care services, and child welfare agencies, the ability to link necessary data is impaired, making many performance measures infeasible (Bethell, 2004; Garnick et al., 2002). Moreover, the voluntary support sector is not typically viewed as formal treatment despite the fact that self-help groups such as Alcoholics Anonymous and other types of peer counseling play an important role in recovery for many individuals with M/SU illnesses. Indeed, the voluntary support sector has been characterized by a lack of data and, in some cases, a commitment to anonymity (Horgan and Garnick, 2005).
As articulated in a paper on performance measurement for child and adolescent M/SU health care that was commissioned by the committee (Bethell, 2004:30):
Perhaps one of the most significant findings … is the lack of coordination in the field among the many actors engaged in measurement development in the area of mental and behavioral health care for children and adolescents. It seems new activities evolve daily with no coordinating center to ensure activities address priority needs and strategic goals as
reflected in the Crossing the Quality Chasm reform model. The lack of coordination is especially evident between efforts occurring primarily from the vantage point of the medical arena (e.g., Medicaid, health plan and pediatric practice-based measurement) and those taking place in the more community-based, public health mental health arena (state mental health agencies, community-based clinics, etc.). Ironically, this lack of coordination on the measurement front exactly mirrors the very frustrating lack of coordination between the medical and psychiatric-based mental health services also experienced by families with children with mental and behavioral health care problems.
Translating Quality-of-Care Measurement Concepts into Performance Measure Specifications
Some quality measures address structural and qualitative characteristics of care providers and require a “yes/no” answer. The Leapfrog Group’s measure of whether inpatient facilities use computerized physician order entry exemplifies such a structural measure. Most quality measures in use today, however, measure processes of care and require a numerical calculation of the rate at which an appropriate activity is performed for a defined population. These calculations require detailed instructions for calculating the numerators and denominators of the rates to guarantee the accuracy and reliability of the measures. The instructions specify, for example, data sources to be used to calculate a measure, rules for including and excluding some individuals from the rate, time frames for data capture, and sampling strategy if sampling is used. Translating measurement concepts into quality measures also requires detailed knowledge of multiple data sources, including health plan enrollment and encounter data, inpatient and outpatient claims data, pharmacy and laboratory databases, administrative data coding sets, and patient surveys, as well as knowledge of the capabilities of organizations’ information systems and of the appropriateness of and techniques for case-mix adjustment.
Appreciation of and knowledge in all these areas are not universal. As a result, many entities that put forth intended quality measures are actually putting forth quality measure concepts, as opposed to well-developed measures with accompanying specifications for their calculation. A comprehensive 1999–2000 search for and review of mental health performance measures developed in the United States that met a minimum threshold of development (i.e., had a specific numerator and denominator, a designated data source, and an ostensible relationship to quality) found that half of the first 86 measures reviewed were insufficiently developed for implementation, and few measures had been tested for reliability or validity (Hermann et al., 2000). A quality measurement infrastructure for M/SU health care will need to have ongoing formal structures and processes to translate
quality measurement concepts into measures that are ready for deployment. NCQA, for example, conducts this translation activity using both internal staff and a formal structure of measurement advisory panels that provide clinical and technical expert knowledge, ad hoc expert panels, and a technical advisory group (NCQA, 2004b).
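As a hedged illustration of what such specifications involve, the sketch below computes a hypothetical rate-style process measure loosely modeled on follow-up-after-hospitalization measures. All field names, the enrollment rule, and the 30-day window are assumptions for illustration, not an actual measure specification or claims-data schema.

```python
from datetime import date

# Hypothetical discharge records; field names are illustrative only.
discharges = [
    {"member": "m1", "discharge": date(2005, 3, 1), "continuously_enrolled": True},
    {"member": "m2", "discharge": date(2005, 3, 10), "continuously_enrolled": True},
    {"member": "m3", "discharge": date(2005, 3, 15), "continuously_enrolled": False},
]
followup_visits = {"m1": [date(2005, 3, 20)], "m2": [date(2005, 5, 1)], "m3": []}

def followup_rate(discharges, visits, window_days=30):
    """Rate-style process measure: share of eligible discharges with an
    outpatient follow-up visit within `window_days` of discharge.
    The denominator rule (here, continuous enrollment) stands in for the
    inclusion/exclusion criteria a real specification must spell out."""
    denominator = [d for d in discharges if d["continuously_enrolled"]]
    numerator = [
        d for d in denominator
        if any(0 <= (v - d["discharge"]).days <= window_days
               for v in visits.get(d["member"], []))
    ]
    return len(numerator) / len(denominator) if denominator else None

print(followup_rate(discharges, followup_visits))  # 0.5
```

Note how much of the work lies in the denominator and time-window rules rather than in the arithmetic itself; it is precisely these details that distinguish a deployable measure from a measure concept.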
Pilot Testing the Performance Measure Specifications
Frequently, measures that appear to be theoretically sound are operationally complex, very costly to produce, or unreliable and invalid for reasons not apparent during their design. For example, with respect to M/SU quality measures, the fact that a population is covered by both a general health plan and a separate employee assistance program or carved-out behavioral health plan means that the clinical data required to calculate a measure may be in the possession of multiple separate organizations and difficult to access and link. Stigma and discriminatory benefit designs (discussed in Chapter 3) also mean that many individuals choose to or must pay for M/SU services out of pocket; in such cases, no claim record is produced, so that a major data source for the calculation of quality measures is lacking. For the same reasons, providers sometimes deliver an M/SU service but code it as a general medical problem. Because of these impediments to accurate and reliable measurement of the quality of M/SU health care, new quality measures almost always require some type of pilot testing before being implemented and used for decision making (Garnick et al., 2002). For example, prior to NCQA’s incorporation of quality measures addressing health plans’ treatment of alcohol and other drug problems into the HEDIS measurement process, these measures were pilot tested by six health care organizations that delivered services to approximately 5 million people so as to evaluate the measures’ feasibility and quality improvement potential (Hon, 2003).
Ensuring Calculation and Submission of the Performance Measures
Successful quality measurement initiatives in general health care have taken place under one of two conditions: (1) a critical mass of influential supporters is committed to either requiring or carrying out the calculation and submission of measures (e.g., HEDIS), or (2) there is an ongoing commitment of sufficient resources to enable the analysis of quality measures, making them so useful that those calculating and submitting them do so voluntarily (e.g., NRI’s Behavioral Healthcare Performance Measurement System and AHRQ’s Healthcare Cost and Utilization Project [H-CUP] Quality Indicators).
For example, the success of the HEDIS performance measures dataset can be traced to its initiation by a small but committed and influential
group of employers and health plans. These employers, who purchased health care for their employees, were seeking meaningful data to require of their contracting health plans. The health plans wished to reduce costly variations in the data they were required to submit to multiple purchasers. This critical mass of employer-purchasers and health plans ensured the calculation and submission of the HEDIS measures while they were still in a preliminary state, which subsequently attracted other influential supporters. CMS, for example, now requires health plans participating in the Medicare program to submit data on HEDIS measures. Many state Medicaid agencies also require the submission of HEDIS or HEDIS-like measures. In contrast, submission of the Behavioral Healthcare Performance Measurement System inpatient hospital measures to NASMHPD or NRI is not required, but facilities that choose to do so may use those measures to fulfill accreditation reporting requirements.
Auditing to Ensure That Performance Measures Have Been Calculated Accurately and in Accordance with Specifications
Reported measures may not accurately represent an individual’s or organization’s performance. Information systems and internal data recording conventions used by individual clinicians and health care organizations vary greatly. Data also may not be collected or stored in ways that facilitate collection of a measure as requested. When measures further require data to be linked across organizations, there may be incompatible data formats. All these factors can introduce error, as can less-than-scrupulous adherence to a measure’s specifications. Because the reporting of quality measures to external bodies for public disclosure to consumers, for use in financial reimbursement strategies to reward best performance, or in response to other quality oversight requirements can have significant consequences for the entity being measured, it is important for the accuracy of the reported measures to be verified. This is typically accomplished through systematic audits of the measures’ calculation. NCQA, for example, has developed standardized auditing procedures for use in verifying the integrity of the calculation of HEDIS measures (NCQA, undated).
Analyzing and Displaying the Performance Measures in Suitable Formats
Ensuring that quality measures are useful for multiple audiences requires analytic and communication capabilities that can respond to the sometimes differing needs of consumers, health care providers (both individual clinicians and organizations), purchasers, and quality oversight organizations. For example, while clinicians and health care organizations may want numerous, detailed data on their performance on individual
procedures and a variety of individual treatments, strong evidence shows that consumers can attend to a limited number of variables when making decisions such as which clinician or health plan to select (Office of Technology Assessment, 1988). Thus, in addition to providing detailed performance measures, there is a need to aggregate such measures into a smaller set. Data also need to show real differences in performance to help consumers select among care providers. In addition, health care delivery entities require benchmarking data so they can compare their performance with that of others in their field. Purchasers and quality oversight organizations also need comparative information for incentivizing and rewarding best performance. Risk adjustment of performance measures may sometimes be necessary, especially when reporting measures of patient outcomes as opposed to measures of the processes of care delivery.
Maintaining the Effectiveness of Performance Measures and Measure Sets and Policies
Individual performance measures and their deployment require ongoing maintenance. Performance measures’ specifications often change over time; for example, administrative coding systems may change, health care entities calculating the measures may discover issues not anticipated in the original specifications, and health care delivery systems themselves change. Also, some measures need to be retired as priorities shift over time and as new, needed measures are developed. For example, a comprehensive review of mental health performance measures found several gaps in the available set of M/SU measures. First, only a handful of adequately developed process-of-care measures exist for children, older adults, individuals with prevalent but not severe mental illnesses (e.g., anxiety disorders, dysthymia, or personality disorders), and individuals with dual mental health and substance-use disorders. The review further documented a lack of measures assessing the content of psychotherapy; instead, measures focused on whether psychotherapy was provided and how frequently.17 Finally, there were fewer process-of-care measures for substance-use problems and illnesses than for mental illnesses (Hermann et al., 2000).
There is also a need for performance measure deployment policies and practices that guard against the unintended consequences of measuring only a small portion of the care that is delivered. Because it is not possible to measure everything, and because how an entity performs on one measure does not indicate how it will perform on another or in an area not measured
(Brook et al., 1996), focusing on only a small set of performance measures may have the unintended consequence of drawing quality improvement resources away from care delivery practices that are not in the measurement set. Periodic rotation of the measures to be calculated may therefore be needed, especially as new performance measures are developed.
Need for Public–Private Leadership and Partnership to Create a Quality Measurement and Reporting Infrastructure
Ensuring the existence of a quality measurement and reporting infrastructure that is responsive to the issues outlined above requires leadership. The committee also notes that, as with successful efforts in performance measurement in general health care, leadership is required from a critical mass of influential stakeholders; no one entity has sufficient influence or control over the vast array of M/SU providers and delivery systems or command over the many diverse technical and other resources needed to develop, test, ensure reporting of, audit, analyze, display, and continuously improve a set of M/SU health care performance measures for the nation. Moreover, although much M/SU health care is delivered in the public sector, many individuals also receive care in the private sector, often from general as opposed to specialty M/SU providers. And the many clinicians providing M/SU health care receive both public and private reimbursement. To ensure that these providers (both general and M/SU) are not required to report different quality measures to different purchasers, or measures that are purportedly the same but calculated in different ways, public- and private-sector purchasers must agree on a common set of quality measures and specifications for their reporting.
The committee acknowledges the primary leadership role played by the public sector to date in developing M/SU performance measures. While the private sector has exhibited strong leadership in the development of performance measures and measurement initiatives for general health care, leadership in M/SU performance measurement has come primarily from the public sector, most notably from SAMHSA and the Department of Veterans Affairs (DVA). For example, the successful efforts of the Washington Circle Group to identify a set of performance measures for substance-use health care and the subsequent inclusion of these measures in HEDIS came about as a result of SAMHSA’s convening and nurturing these efforts. All of the efforts to conceptualize and define a comprehensive set of performance measures in mental health described above also have occurred under the auspices of SAMHSA.
Given its role in stimulating and supporting the existing M/SU health care performance measurement initiatives, together with the fact that government funding (much of it federal) is the source of 76 percent of funding for substance-use health care and 63 percent of funding for mental health services (Mark et al., 2005) (see Chapter 8), the federal government can be a prime mover in creating consensus across the public and private sectors on standard sets of measures of the quality of M/SU health care. It can do so by (1) partnering more strongly in initiatives located in the private sector, (2) requiring the submission of jointly agreed-upon public- and private-sector measures in state grants and directly administered programs, and (3) continuing its historical efforts to develop and test new performance measures. However, the public sector alone cannot achieve a performance measurement and reporting system for M/SU health care. Private-sector initiatives to build the components of the performance measurement and reporting infrastructure for health care overall need to reach out to M/SU communities to ensure their strong participation in these initiatives.
Considering currently available resources, influence, and expertise, the committee believes a partnership of public and private leaders is needed to build a quality measurement and reporting infrastructure for M/SU health care. The committee further believes that this infrastructure should build on existing structures. It should also aim to achieve maximal consistency and integration of public and private performance measurement and reporting efforts, as well as the efforts of M/SU and general health care.
Establishing Collaborative Public- and Private-Sector Efforts
There is ample precedent for collaborative public–private quality measurement efforts, as is seen in the agreement reached by public-sector (i.e., Medicaid) and private-sector (private insurance) purchasers and other stakeholders on the reporting of standardized measures of child health care in HEDIS, in the endorsement of a wide variety of performance measures by both the public and private sectors through the National Quality Forum, and in the agreement reached by the public and private sectors on a common set of performance measures for inpatient psychiatric care through a partnership among NASMHPD, NRI, the National Association of Psychiatric Health Systems (NAPHS), the American Psychiatric Association, and JCAHO. The core measures developed by NASMHPD, NRI, and NAPHS have been accepted by JCAHO as meeting its ORYX© reporting requirements for accredited inpatient psychiatric facilities (Ghinassi, 2004).
DHHS could further its collaboration with the private sector by participating more strongly in general health care and private-sector performance measurement initiatives. For example, while VHA and DHHS’s CMS and AHRQ have liaison positions on NCQA’s policy-making Committee on Performance Measurement, SAMHSA has no such position. Similarly, the National Quality Forum, a private, not-for-profit, open-membership organization that endorses consensus-based national standards for measurement and public reporting of health care performance data, involves more than 250 public- and private-sector consumer, purchaser, provider, health plan, research, and quality improvement members in its consensus process for endorsing performance measures for multiple types of inpatient and outpatient health care. The forum has begun to address the quality of M/SU health care by convening a workshop to identify evidence-based practices for substance-use treatment and a workshop on behavioral health funded in part by DVA (Kizer, 2005; National Quality Forum, 2004). Continued involvement and support of SAMHSA and DVA in this and other national performance measurement and reporting initiatives for general health care, along with their encouragement of other M/SU organizations to participate, would help bring the resources of the private sector to bear on M/SU performance measurement and achieve consistency across the public and private sectors—both of which would facilitate the creation of a performance measurement and reporting infrastructure for M/SU health care.
An additional benefit is that M/SU health care would be able to participate on the ground floor in quality measurement initiatives, such as the development of new CPT II codes to capture outcome and otherwise non-reimbursed process-of-care measures in administrative datasets. As described earlier in this chapter, this advance has taken place through a Performance Measures Advisory Group comprising representatives of AHRQ, CMS, JCAHO, NCQA, and the AMA’s Physician Consortium for Performance Improvement (AMA, 2004a). Had representatives of M/SU health care been a part of this effort and the precursor efforts of the constituent agencies, improvements in M/SU performance measurement might have occurred alongside the development of CPT II codes for general health care. While the federal government can take action to become more involved in such private-sector initiatives, these private initiatives must also take action to ensure strong representation of M/SU health care providers and delivery systems (both public and private).
Requiring Submission of Jointly Agreed-Upon Public- and Private-Sector Measures in Public and Publicly Funded Programs
The federal government also can do much to promote the collection and reporting of M/SU quality measures in both the private and public sectors. This is illustrated by the inclusion of measures developed for the Medicare and Medicaid programs in HEDIS and their subsequent application to privately enrolled populations. Both SAMHSA and DVA have initiatives under way to measure the performance of their M/SU programs that can contribute to the development and use of M/SU performance measures in the private sector.
SAMHSA is beginning to require performance measurement and reporting in all its grant programs for substance-use prevention and treatment and mental health as part of its National Outcome Measures initiative. This initiative aims to measure 10 outcomes of care: (1) abstinence from substance use and decreased mental illness symptomatology, (2) increased/retained employment or return to/stay in school, (3) decreased criminal justice involvement, (4) increased stability in housing, (5) increased access to services, (6) increased retention in treatment for substance abuse and reduced utilization of inpatient psychiatric care, (7) increased social supports/social connectedness, (8) clients’ perception of care, (9) cost-effectiveness, and (10) use of evidence-based practices. While several of the actual measures (e.g., for evidence-based substance-use practices) are still being developed, SAMHSA achieved a major milestone in this initiative when, in 2004, it reached agreement with a representative body of states on the measures to be reported in 2005, on measures that required developmental work, and on a plan for preparing all states to report fully on the measures by the end of fiscal year 2007. SAMHSA’s State Outcomes Measurement and Management System will support the expansion of state data collection efforts to meet the requirements of the agreed-upon National Outcome Measures (SAMHSA, undated-b).
DVA similarly has a National Mental Health Program Performance Monitoring System, which uses internal VHA performance measures to evaluate the work of the VA’s 21 Veterans Integrated Services Networks (VISNs) and the medical centers within each of these networks. Many of these measures address the quality of M/SU health care, including the new outcome measures of each patient’s functional status (Greenberg and Rosenheck, 2005).
While these performance measurement initiatives are noteworthy, their benefits could be even greater if the information obtained by the federal government were shared with the private sector as part of formal public and private collaboration.
Continuing Public-Sector Efforts to Develop, Test, and Implement New Performance Measures
While DHHS and DVA are reaching out to become an integral part of private-sector performance measurement and reporting initiatives, they should not discontinue their internal efforts to develop, test, and implement performance measures, for several reasons. First, SAMHSA and DVA are the primary payers for much of the M/SU health care provided in the United
States. They have an obligation to move forward to ensure that the quality of the care they secure and provide to their beneficiaries is as good as it can be. Second, there is not yet an agreed-upon National Quality Measurement and Reporting System in place. Until such a system begins to take shape, SAMHSA and DVA need to develop as much expertise as possible in quality measurement and reporting so they can be strong partners in the system’s development and implementation. Finally, SAMHSA and DVA will be more attractive partners if they bring to the table both experience and influence in shaping the quality measurement activities of a large portion of the marketplace, as has the Medicare program.
APPLYING QUALITY IMPROVEMENT METHODS AT THE LOCUS OF CARE
Measuring and reporting on quality will not by themselves achieve improvements in care (Berwick et al., 2003). Because quality improvement is, at its heart, a change initiative, it requires that quality measurement be linked with activities at the locus of care to effect change and that understanding and use of these change (quality improvement) techniques be woven into the day-to-day operations of health care organizations and provider practices.
Although a systematic review and analysis of quality improvement strategies reveals remarkably little information about the most effective ways to secure the consistent incorporation of research findings into routine clinical practice (Shojania et al., 2004), many published reports of successful quality improvement initiatives clearly show that it is possible for organizations to change the quality of their health care for the better (Shojania and Grimshaw, 2005), just as it is possible to increase the quality of other industries’ products (Deming, 1986). While susceptibility to successful change is in part a function of intrinsic characteristics of individuals (Berwick, 2003), the types of activities that organizations and clinicians need to undertake to achieve and sustain quality improvement can be surmised from research on and studies of organizational change (Shojania and Grimshaw, 2005). A large body of research and other published work on organizational change, for example, consistently calls attention to five predominantly human resource management practices (and one other organizational practice) that are key to successful change implementation: (1) ongoing communication about the desired change with those who are to effect it; (2) training in the new practice; (3) worker involvement in designing the change process; (4) sustained attention to progress in making the change; (5) use of mechanisms for measurement, feedback, and redesign; and (6) functioning as a learning organization. All of these practices require the exercise of effective leadership (IOM, 2004).
These practices are illustrated in some of the leading quality improvement initiatives in health care, including those of VHA (Jha et al., 2003) and the Institute for Healthcare Improvement (http://www.ihi.org/ihi/programs). Most recently, they have been employed by some of the smallest and least resource-rich health care providers—providers of substance-use treatment services—through the Network for the Improvement of Addiction Treatment (NIATx) (see Box 4-4).
More-widespread application of quality improvement techniques would be facilitated by similar initiatives in mental health care, in additional substance-use treatment sites, and in provider sites offering combined M/SU treatment. Such initiatives could undertake research on, demonstration of, and dissemination of quality improvement strategies across the more diverse clinicians, organizations, and systems delivering M/SU health care.
NIATx is a partnership between The Robert Wood Johnson Foundation’s “Paths to Recovery” program and the Center for Substance Abuse Treatment’s “Strengthening Treatment Access and Retention” program. The mission of NIATx is to help providers learn approaches that make more efficient use of their treatment capacity and produce improvements in care delivery that affect access to and retention in addiction treatment.
Of the millions of Americans who need substance-use treatment, only a small minority receive it, and fifty percent of those who do leave treatment before its benefits can be realized. While finances and psychological readiness explain some of this deficit, the issue that often keeps clients from treatment is the way services are delivered. Systems engineering, process improvement, and innovative uses of technology have been shown in other industries to dramatically improve the quality and efficiency of service delivery processes; NIATx brings these resources to substance-use treatment. The National Program Office at the University of Wisconsin’s Industrial and Systems Engineering Department provides coaching; phone and face-to-face educational sessions; a process improvement website and other communications to the field; and administrative support.
NIATx aims to reduce waiting time, reduce the percentage of no-shows for treatment, reduce the percentage of clients who leave treatment early, and increase the number of clients admitted to treatment through three initiatives.
The Treatment Provider Initiative. The 39 treatment agencies (including 9 mental health agencies with addiction services) in 25 states that participate in NIATx are demonstrating the potential of process improvement to help treatment providers improve nine work processes that influence treatment access and retention: (1) the first contact a client has with the treatment agency, (2) the intake and assessment process, (3) the process by which clients are transferred between levels of care, (4) paperwork burden, (5) client and employee scheduling, (6) support systems (e.g., day care) that can help clients stay in treatment, (7) processes for reaching out to clients and referral agencies, (8) techniques for engaging clients, and (9) strategies to improve the agency’s financial condition.
The Single State Agency Initiative. While the Provider Initiative demonstrates the potential to substantially improve access and retention, the state initiative tests the potential of Single State Agencies to improve their own work processes and to widely disseminate improvements (such as those identified in the Provider Initiative) across all treatment agencies in each of five states.
The Innovation Initiative. The innovation initiative examines ways to take full advantage of the technologies (e.g., consumer health informatics, virtual reality simulation, sensors, computer-mediated communication) currently or soon to be available to enhance the efficiency and effectiveness of addiction prevention and treatment.
NIATx members have demonstrated that work processes can be improved, which in turn improves the quality of care clients receive, as well as the fiscal health of treatment agencies. Within the first 18 months of the initiative, members reported improvements in each of the four project aims. Thirty-seven change projects resulted in an average reduction of 51 percent in waiting times between first contact and first treatment session. Twenty-eight change projects produced an average reduction in no-show rates of 41 percent. Twenty-three change projects produced an average increase of 56 percent in admissions, while 39 change projects produced improvements in continuation averaging 39 percent. The extent to which those improvements can be sustained and diffused to other parts of the organization is now being examined, and early results are encouraging.
A PUBLIC–PRIVATE STRATEGY FOR QUALITY MEASUREMENT AND IMPROVEMENT
To address the need for strengthened quality measurement and improvement and the application of quality improvement at the locus of care, the committee recommends a public–private collaborative strategy.
Recommendation 4-3. To measure quality better, DHHS, in partnership with the private sector, should charge and financially support an entity similar to the National Quality Forum to convene government regulators, accrediting organizations, consumer representatives, providers, and purchasers exercising leadership in quality-based purchasing for the purpose of reaching consensus on and implementing a common, continuously improving set of M/SU health care quality measures for providers, organizations, and systems of care. Participants in this consortium should commit to:
Requiring the reporting and submission of the quality measures to a performance measure repository or repositories.
Requiring validation of the measures for accuracy and adherence to specifications.
Ensuring the analysis and display of measurement results in formats understandable by multiple audiences, including consumers, those reporting the measures, purchasers, and quality oversight organizations.
Establishing models for the use of the measures for benchmarking and quality improvement purposes at sites of care delivery.
Performing continuing review of the measures’ effectiveness in improving care.
Recommendation 4-4. To increase quality improvement capacity, DHHS, in collaboration with other government agencies, states, philanthropic organizations, and professional associations, should create or charge one or more entities as national or regional resources to test, disseminate knowledge about, and provide technical assistance and leadership on quality improvement practices for M/SU health care in public- and private-sector settings.
Recommendation 4-5. Public and private sponsors of research on M/SU and general health care should include the following in their research funding priorities:
Development of reliable screening, diagnostic, and monitoring instruments that can validly assess response to treatment and that are practicable for routine use. These instruments should include a set of M/SU “vital signs”: a brief set of indicators—measurable at the patient level and suitable for screening and early identification of problems and illnesses and for repeated administration during and following treatment—to monitor symptoms and functional status. The indicators should be accompanied by a specified standardized approach for routine collection and reporting as part of regular health care. Instruments should be age- and culturally appropriate.
Refinement and improvement of these instruments, procedures for categorizing M/SU interventions, and methods for providing public information on the effectiveness of those interventions.
Development of strategies to reduce the administrative burden of quality monitoring systems and to increase their effectiveness in improving quality.
ACMHA (American College of Mental Health Administration). 2001. A Proposed Consensus Set of Indicators for Behavioral Health. Pittsburgh, PA: ACMHA. [Online]. Available: http://www.acmha.org/publications/acmha_20.pdf [accessed March 18, 2005].
Adebimpe VR. 1994. Race, racism, and epidemiological surveys. Hospital and Community Psychiatry 45(1):27–31.
AHRQ (Agency for Healthcare Research and Quality). 2002–2003. U.S. Preventive Services Task Force Ratings: Strength of Recommendations and Quality of Evidence. Guide to Clinical Preventive Services. Rockville, MD: AHRQ. [Online]. Available: http://www.ahrq.gov/clinic/3rduspstf/ratings.htm [accessed February 28, 2005].
AHRQ. 2003. National Healthcare Quality Report. Rockville, MD: U.S. Department of Health and Human Services.
AHRQ. 2004a. 2004 National Healthcare Quality Report. AHRQ Publication Number: 05-0013. Rockville, MD: U.S. Department of Health and Human Services. [Online]. Available: http://www.qualitytools.ahrq.gov/qualityreport/documents/nhrq2004.pdf [accessed July 22, 2005].
AHRQ. 2004b. AHRQ Quality Indicators—Guide to Inpatient Quality Indicators: Quality of Care in Hospitals—Volume, Mortality, and Utilization. AHRQ Publication Number: 02-RO204 (June 2002), Revision 4 (December 22, 2004). Rockville, MD: AHRQ. [Online]. Available: http://www.qualityindicators.ahrq.gov/downloads/iqi/iqi_guide_rev4.pdf [accessed February 25, 2005].
AHRQ. undated. EPC Evidence Reports. [Online]. Available: http://www.ahrq.gov/clinic/epcindex.htm [accessed November 26, 2004].
Alderson P, Green S, Higgins J, eds. 2004. Cochrane Reviewers’ Handbook 4.2.2 [Updated March 2004]. Chichester, UK: John Wiley & Sons. The Cochrane Library.
AMA (American Medical Association). 2004a. Current Procedural Terminology: CPT 2005 2nd ed. Chicago, IL: AMA Press.
AMA. 2004b. Hospital ICD-9-CM 2005, Volumes 1, 2, & 3 Compact. Chicago, IL: AMA Press.
American Association of Community Psychiatrists. 2003. AACP Guidelines for Recovery Oriented Services. [Online]. Available: http://www.comm.psych.pitt.edu/finds/ROSMenu.html [accessed February 18, 2005].
American Psychiatric Association Task Force for the Handbook of Psychiatric Measures. 2000. Handbook of Psychiatric Measures. Washington, DC: American Psychiatric Association.
American Psychiatric Association, American Psychiatric Nurses Association, National Association of Psychiatric Health Systems. 2003. Learning from Each Other: Success Stories and Ideas for Reducing Restraint/Seclusion in Behavioral Health. [Online]. Available: http://www.psych.org/psych_pract/treatg/pg/LearningfromEachOther.pdf [accessed February 20, 2005].
American Psychological Association. undated. A Guide to Beneficial Psychotherapy: Empirically Supported Treatments. [Online]. Available: http://www.apa.org/divisions/div12/rev_est/index.html [accessed March 7, 2005].
Anonymous. 2001. ECHO Experience of Care and Health Outcomes Survey. [Online]. Available: http://www.hcp.med.harvard.edu/echo/home.html [accessed March 18, 2005].
Associated Press. 2005, February 14. Colorado Supreme Court refuses to hear therapist’s appeal in “rebirthing” death. State and Regional. Denver, CO: Summit Daily News.
Bates DW, Shore MF, Gibson R, Bosk C. 2003. Examining the evidence. Psychiatric Services 54(12):1–5.
Bauer MS. 2002. A review of quantitative studies of adherence to mental health clinical practice guidelines. Harvard Review of Psychiatry 10(3):138–153.
Beardslee WR, Wright EJ, Rothberg PC, Salt P, Versage E. 1996. Response of families to two preventive intervention strategies: Long-term differences in behavior and attitude change. Journal of the American Academy of Child and Adolescent Psychiatry 35(6):774–782.
Beardslee WR, Wright EJ, Salt P, Drezner K, Gladstone TR, Versage EM, Rothberg PC. 1997. Examination of children’s responses to two preventive intervention strategies over time. Journal of the American Academy of Child and Adolescent Psychiatry 36(2):196–204.
Beardslee WR, Gladstone TR, Wright EJ, Cooper AB. 2003. A family-based approach to the prevention of depressive symptoms in children at risk: Evidence of parental and child change. Pediatrics 112(2):119–131.
Bell CC, Mehta H. 1980. The misdiagnosis of black patients with manic depressive illness. Journal of the National Medical Association 73(2):141–145.
Bell CC, Mehta H. 1981. Misdiagnosis of black patients with manic depressive illness: Second in a series. Journal of the National Medical Association 73(2):101–107.
Berwick DM. 2003. Dissemination innovations in health care. Journal of the American Medical Association 289(15):1969–1975.
Berwick DM, James B, Coye MJ. 2003. Connections between quality measurement and improvement. Medical Care 41(1):Supplement I-30–I-38.
Bethell C. 2004. Taking the Next Step to Improve the Quality of Child and Adolescent Mental and Behavioral Health Care Services. Paper commissioned by the Institute of Medicine Committee on Crossing the Quality Chasm: Adaptation to Mental Health and Addictive Disorders.
Borson S, Bartels SJ, Colenda CC, Gottlieb G. 2001. Geriatric mental health services research: Strategic plan for an aging population. American Journal of Geriatric Psychiatry 9(3): 191–204.
Brook RH, McGlynn EA, Cleary PD. 1996. Quality of health care. Part 2: Measuring quality of care. New England Journal of Medicine 335(13):966–969.
Brown ER, Ojeda VD, Wyn R, Levan R. 2000. Racial and Ethnic Disparities in Access to Health Insurance and Health Care. Los Angeles, CA: UCLA Center for Health Policy Research and The Henry J. Kaiser Family Foundation. [Online]. Available: http://www.kff.org/uninsured/loader.cfm?url=/commonspot/security/getfile.cfm&PageID=13443 [accessed July 10, 2005].
Buchanan RW, Kreyenbuhl J, Zito JM, Lehman A. 2002. The schizophrenia PORT pharmacological treatment recommendations: Conformance and implications for symptoms and functional outcome. Schizophrenia Bulletin 28(1):63–73.
Burns BJ, Hoagwood K, eds. 2004. Evidence-based practices Part I: A research update. Child and Adolescent Psychiatric Clinics of North America 13(4).
Burns BJ, Hoagwood K, eds. 2005. Evidence-based practices Part II: Effecting change. Child and Adolescent Psychiatric Clinics of North America 14(2).
Busch AB, Shore MF. 2000. Seclusion and restraint: A review of recent literature. Harvard Review of Psychiatry 8(5):261–270.
Carroll KM, Rounsaville BJ. 2003. Bridging the gap: A hybrid model to link efficacy and effectiveness research in substance abuse treatment. Psychiatric Services 54(3):333–339.
CDC (Centers for Disease Control and Prevention). 2005a. United States Department of Health and Human Services Centers for Disease Control and Prevention. [Online]. Available: http://www.cdc.gov/about/mission.htm [accessed October 10, 2005].
CDC. 2005b. United States Department of Health and Human Services Centers for Disease Control and Prevention. [Online]. Available: http://www.cdc.gov/about/default.htm [accessed October 10, 2005].
CDC. 2005c. United States Department of Health and Human Services Centers for Disease Control and Prevention. [Online]. Available: http://www.cdc.gov/about/cio.htm [accessed October 10, 2005].
CDC. 2005d. Chronic Disease Prevention: United States Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion. [Online]. Available: http://www.cdc.gov/nccdphp [accessed October 10, 2005].
CDC. 2005e. About the Mental Health Work Group. [Online]. Available: http://www.cdc.gov/mentalhealth/about.htm [accessed September 20, 2005].
Chorpita BF, Yim LM, Dankervoet JC, Arensdorf A, Amundsen MJ, McGee C, Serrano A, Yates A, Burns JA, Morelli P. 2002. Toward large-scale implementation of empirically supported treatments for children: A review and observations by the Hawaii Empirical Basis to Services Task Force. Clinical Psychology: Science and Practice 9(2):165–190.
Chung H, Mahler JC, Kakuma T. 1995. Racial differences in treatment of psychiatric inpatients. Psychiatric Services 46(6):586–591.
Clarke GN, Hawkins W, Murphy M, Sheeber LB, Lewisohn PM, Seeley JR. 1995. Targeted prevention of unipolar depressive disorder in an at-risk sample of high school adolescents: A randomized trial of a group cognitive intervention. Journal of the American Academy of Child and Adolescent Psychiatry 34(3):312–321.
CMS (Centers for Medicare and Medicaid Services). 2004. Medicare Current Beneficiary Survey: Survey Overview. [Online]. Available: http://www.cms.hhs.gov/MCBS/Overview.asp [accessed February 23, 2005].
D’Aunno T, Pollack HA. 2002. Changes in methadone treatment practices: Results from a national panel study, 1988–2000. Journal of the American Medical Association 288(7):850–856.
Davis NJ. 2002. The promotion of mental health and the prevention of mental and behavioral disorders: Surely the time is right. International Journal of Emergency Mental Health 4(1):3–29.
Deming WE. 1986. Out of the Crisis. Cambridge, MA: Massachusetts Institute of Technology, Center for Advanced Engineering Study.
Denogean AT. 2003, October 18. No charges in death of woman at Kino. Tucson Citizen. p. 1A. [Arizona].
Department of Justice. 2005. The What Works Repository—Working Group of the Federal Collaboration on What Works. Washington, DC: Community Capacity Development Office, Office of Justice Programs, U.S. Department of Justice.
Department of Veterans Affairs. 2004. VA Technology Assessment Program (VATAP). [Online]. Available: http://www.va.gov/vatap/publications.htm [accessed March 7, 2005].
DHHS (U.S. Department of Health and Human Services). 1999. Mental Health: A Report of the Surgeon General. Rockville, MD: DHHS. Substance Abuse and Mental Health Services Administration, Center for Mental Health Services, and National Institutes of Health, National Institute of Mental Health.
DHHS. 2001. Mental Health: Culture, Race, and Ethnicity—A Supplement to Mental Health: A Report of the Surgeon General. Rockville, MD: DHHS.
Doucette A. 2003. Outcomes Roundtable for Children and Families Performance Measurement Survey: Summary of Findings. Paper presented at a conference, Advancing Mental and Behavioral Health Care Quality Measurement and Improvement for Children and Adolescents. Baltimore, MD, March 30, 2004.
Druss BG, Miller CL, Rosenheck RA, Shih SC, Bost JE. 2002. Mental health care quality under managed care in the United States: A view from the Health Employer Data and Information Set (HEDIS). American Journal of Psychiatry 159(5):860–862.
Eaton WW, Neufeld K, Chen L, Cai G. 2000. A comparison of self-report and clinical diagnostic interviews for depression: Diagnostic interview schedule and schedules for clinical assessment in neuropsychiatry in the Baltimore Epidemiologic Catchment Area Follow-up. Archives of General Psychiatry 57(3):217–222.
Eisen SV, Normand SL, Belanger AJ, Spiro A, Esch D. 2004. The Revised Behavior and Symptom Identification Scale (BASIS-R): Reliability and validity. Medical Care 42(12):1230–1241.
Eisen SV, Wilcox M, Leff HS, Schaefer E, Culhane MA. 1999. Assessing behavioral health outcomes in outpatient programs: Reliability and validity of the BASIS-32. Journal of Behavioral Health Services and Research 26(1):5–17.
Essock SM, Drake RE, Frank RG, McGuire TG. 2003. Randomized controlled trials in evidence-based mental health care: Getting the right answer to the right question. Schizophrenia Bulletin 29(1):115–123.
Finke L. 2001. The use of seclusion is not evidence-based practice. Journal of Child and Adolescent Psychiatric Nursing 14(4):186–189.
Finney JW, Hahn AC, Moos RH. 1996. The effectiveness of inpatient and outpatient treatment for alcohol abuse: The need to focus on mediators and moderators of setting effects. Addiction 91(12):1773–1796.
Fischer EP, Marder SR, Smith GR, Owen RR, Rubenstein L, Hedrick SC, Curran GM. 2000. Quality Enhancement Research Initiative in Mental Health. Medical Care 38(6 Supplement 1):I70–I81.
Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder E. 2003. The implications of regional variations in Medicare spending. Part 2: Health outcomes and satisfaction with care. Annals of Internal Medicine 138(4):288–298.
Fontana A, Rosenheck RA. 1997. Effectiveness and cost of the inpatient treatment of posttraumatic stress disorder: Comparison of three models of treatment. American Journal of Psychiatry 154(6):758–765.
Friedmann PD, McCullough D, Saitz R. 2001. Screening and intervention for illicit drug abuse: A national survey of primary care physicians and psychiatrists. Archives of Internal Medicine 161(2):248–251.
Ganju V. 2003. Implementation of evidence-based practices in state mental health systems: Implications for research and effectiveness studies. Schizophrenia Bulletin 29(1): 125–131.
Ganju V. 2004. Quality and Accountability: An Agenda for Public Mental Health Systems. A paper developed for the Institute of Medicine Meeting on Crossing the Quality Chasm—An Adaptation to Mental Health and Addictive Disorders. Washington, DC. Available from the Institute of Medicine.
Ganju V, Smith ME, Adams N, Allen J Jr, Bible J, Danforth M, Davis S, Dumont J, Gibson G, Gonzalez O, Greenberg P, Hall LL, Hopkins C, Koch RJ, Kupfer D, Lutterman T, Manderscheid R, Onken S, Osher T, Stange JL, Wieman D. 2004. The MHSIP Quality Report: The Next Generation of Mental Health Performance Measures. Rockville, MD: SAMHSA.
GAO (General Accounting Office). 1999. Mental Health: Improper Restraint or Seclusion Use Places People at Risk. GAO/HEHS-99-176. Washington, DC: GAO. [Online]. Available: http://www.gao.gov/archive/1999/he99176.pdf [accessed October 10, 2005].
Garnick DW, Lee MT, Chalk M, Gastfriend D, Horgan CM, McCorry F, McLellan AT, Merrick EL. 2002. Establishing the feasibility of performance measures for alcohol and other drugs. Journal of Substance Abuse Treatment 23(4):375–385.
Gerstein D, Harwood H, eds. 1990. Treating Drug Problems. Volume 1. Washington, DC: National Academy Press.
Ghinassi FA. 2004. Testimony before the Institute of Medicine Committee on Crossing the Quality Chasm: Adaptation to Mental Health and Addictive Disorders. Washington, DC. July 14, 2004. Available from the Institute of Medicine.
Gilbody S, House A, Sheldon T. 2001. Routinely administered questionnaires for depression and anxiety: Systematic review. British Medical Journal 322(7283):406–409.
Glied S, Cuellar AE. 2003. Trends and issues in child and adolescent mental health. Health Affairs 22(5):39–50.
Goodman LA, Rosenberg SD, Mueser KT, Drake RE. 1997. Physical and sexual assault history in women with serious mental illness: Prevalence, correlates, treatment, and future research directions. Schizophrenia Bulletin 23(4):685–696.
Gossop M, Marsden J, Stewart D, Treacy S. 2001. Outcomes after methadone maintenance and methadone reduction treatments: Two year follow-up results from the National Treatment Outcome Research Study. Drug and Alcohol Dependence 62(3):255–264.
Grasso BC, Genest R, Jordan CW, Bates DW. 2003a. Use of chart and record reviews to detect medication errors in a state psychiatric hospital. Psychiatric Services 54(5): 677–681.
Grasso BC, Rothschild JM, Genest R, Bates DW. 2003b. What do we know about medication errors in inpatient psychiatry? Joint Commission Journal on Quality and Safety 29(8): 391–400.
Greenberg G, Rosenheck R. 2005. Department of Veterans Affairs National Mental Health Program Performance Monitoring System: Fiscal Year 2004 Report. West Haven, CT: Northeast Program Evaluation Center (182), VA Connecticut Healthcare System. [Online]. Available: http://nepec.org/NMHPPMS/default.htm [accessed May 30, 2005].
Greenhalgh T, Robert G, MacFarlane F, Bate P, Kyriakidou O. 2004. Diffusion of innovations in service organizations: Systematic review and recommendations. The Milbank Quarterly 82(4):581–629.
Gurwitz JH, Field TS, Avorn J, McCormick D, Jain S, Eckler M, Benser M, Edmondson AC, Bates DW. 2000. Incidence and preventability of adverse drug events in nursing homes. American Journal of Medicine 109(2):87–94.
Hadley J, Rabin D, Epstein A, Stein S, Rimes C. 2000. Post hospitalization home health care use and changes in functional status in a Medicare population. Medical Care 38(5):494–507.
Harman JS, Manning WG, Lurie N, Christianson JB. 2003. Association between interruptions in Medicaid coverage and use of inpatient psychiatric services. Psychiatric Services 54(7):999–1005.
Harris RP, Helfand M, Woolf SH, Lohr KN, Mulrow CD, Teutsch SM, Atkins D. 2001. Current methods of the U.S. Preventive Services Task Force: A review of the process. American Journal of Preventive Medicine 20(3S):21–35.
Harrison PA, Beebe TJ, Park E. 2001. The Adolescent Health Review: A brief, multidimensional screening instrument. Journal of Adolescent Health 29(2):131–139.
Hawaii Department of Health. 2004. Evidence Based Services Committee 2004 Biennial Report—Summary of Effective Interventions for Youth With Behavioral and Emotional Needs. [Online]. Available: http://www.state.hi.us/health/mental-health/camhd/library/pdf/ebs/ebso11.pdf [accessed March 8, 2005].
Hennessy R. 2002. Focus on the states: Hospitals in Florida, Georgia and Utah drastically reduce seclusion and restraint. Networks (1–2):10–11. [Online]. Available: http://www.nasmhpd.org/general_files/publications/ntac_pubs/networks/SummerFall2002.pdf [accessed February 20, 2005].
Hermann RC, Palmer RH. 2002. Common ground: A framework for selecting core quality measures for mental health and substance abuse care. Psychiatric Services 53(3):281–287.
Hermann RC, Leff HS, Palmer RH, Yang D, Teller T, Provost S, Jakubiak C, Chan J. 2000. Quality measures for mental health care: Results from a national inventory. Medical Care Research and Review 57(Supplement 2):136–154.
Hermann RC, Palmer H, Leff S, Schwartz M, Provost S, Chan J, Chiu WT, Lagodmos G. 2004. Achieving consensus across diverse stakeholders on quality measures for mental healthcare. Medical Care 42(12):1246–1253.
Hibbard JH. 2003. Engaging health care consumers to improve the quality of care. Medical Care 41(Supplement 1):I-61–I-70.
Hoagwood K, Burns BJ, Kiser L, Ringeisen H, Schoenwald SK. 2001. Evidence-based practice in child and adolescent mental health services. Psychiatric Services 52(9):1179–1189.
Hollon SD, Munoz RF, Barlow DH, Beardslee WR, Bell CC, Bernal G, Clarke GN, Franciosi LP, Kazdin AE, Kohn L, Linehan MM, Markowitz JC, Miklowitz DJ, Persons JB, Niederehe G, Sommers D. 2002. Psychosocial intervention development for the prevention and treatment of depression: Promoting innovation and increasing access. Biological Psychiatry 52(6):610–630.
Hon J. 2003. Using Performance Measurement to Improve the Quality of Alcohol Treatment. Washington, DC: The George Washington University Medical Center.
Horgan C, Garnick D. 2005. The Quality of Care for Adults with Mental and Addictive Disorders: Issues in Performance Measurement. Paper commissioned by the Institute of Medicine Committee on Crossing the Quality Chasm: Adaptation to Mental Health and Addictive Disorders. Available from the Institute of Medicine.
Hubbard RL, Marsden ME, Rachal JV, Harwood HJ, Cavanaugh ER, Ginzburg HM. 1989. Drug Abuse Treatment: A National Study of Effectiveness. Chapel Hill, NC: University of North Carolina Press.
Iezzoni LI. 1997. Data sources and implications: Administrative databases. In: Iezzoni LI, ed. Risk Adjustment for Measuring Healthcare Outcomes 2nd ed. Chicago, IL: Health Administration Press. Pp. 169–242.
IOM (Institute of Medicine). 1994. Mrazek PJ, Haggerty RJ, eds. Reducing Risks for Mental Disorders: Frontiers for Preventive Intervention Research. Washington, DC: National Academy Press.
IOM. 1997. Dispelling the Myths about Addiction: Strategies to Increase Understanding and Strengthen Research. Washington, DC: National Academy Press.
IOM. 1998. Bridging the Gap Between Practice and Research: Forging Partnerships with Community-based Drug and Alcohol Treatment. Washington, DC: National Academy Press.
IOM. 2004. Page A, ed. Keeping Patients Safe: Transforming the Work Environment of Nurses. Washington, DC: The National Academies Press.
Jaycox LH, Morral AR, Juvonen J. 2003. Mental health and medical problems and service use among adolescent substance users. Journal of the American Academy of Child and Adolescent Psychiatry 42(6):701–709.
Jha AK, Perlin JB, Kizer KW, Dudley RA. 2003. Effect of the transformation of the Veterans Affairs Health Care System on the quality of care. New England Journal of Medicine 348(22):2218–2227.
Johnson R, Chutuape M, Strain E, Walsh S, Stitzer M, Bigelow G. 2000. A comparison of levomethadyl acetate, buprenorphine, and methadone for opioid dependence. New England Journal of Medicine 343(18):1290–1297.
Kane JM, Leucht S, Carpenter D, Docherty JP. 2003. Optimizing pharmacologic treatment of psychotic disorders. Journal of Clinical Psychiatry 64(Supplement 12):5–19.
Kataoka SH, Zhang L, Wells KB. 2002. Unmet need for mental health care among U.S. children: Variation by ethnicity and insurance status. American Journal of Psychiatry 159(9):1548–1555.
Kazdin A. 2000. Psychotherapy for Children and Adolescents: Directions for Research and Practice. New York: Oxford University Press.
Kazdin AE. 2003. Evidence-Based Psychotherapies for Children and Adolescents. New York: Guilford Press.
Kazdin AE. 2004. Evidence-based treatments: Challenges and priorities for practice and research. Child and Adolescent Psychiatric Clinics of North America 13(4):923–940.
Kessler RC. 2004. Impact of substance abuse on the diagnosis, course, and treatment of mood disorders: The epidemiology of dual diagnosis. Biological Psychiatry 56:730–737.
Kessler RC, Demler O, Frank RG, Olfson M, Pincus HA, Walters EE, Wang P, Wells KB, Zaslavsky AM. 2005. Prevalence and treatment of mental disorders, 1990 to 2003. New England Journal of Medicine 352(24):2515–2523.
Kimberly J, Quinn R. 1984. Managing Organizational Transitions. Homewood, IL: Dow Jones—Irwin.
Kizer K. 2005. Conducting a dissonant symphony. Modern Healthcare 34(14):20.
Kramer TL, Daniels AS, Zieman GL, Williams C, Dewan N. 2000. Psychiatric practice variations in the diagnosis and treatment of major depression. Psychiatric Services 51(3):336–340.
LaVeist TA, Diala C, Jarrett NC. 2000. Social status and perceived discrimination: Who experiences discrimination in the health care system, how and why? In: Hogue C, Hargraves M, Scott-Collins K, eds. Minority Health in America—Findings and Policy Implications from the Commonwealth Fund Minority Health Survey. Baltimore, MD: Johns Hopkins University Press. Pp. 194–208.
Lefever G, Arcona A, Antonuccio D. 2003. ADHD among American schoolchildren: Evidence of overdiagnosis and overuse of medication. The Scientific Review of Mental Health Practice 2(1). [Online]. Available: http://www.srmph.org/0201-adhd.html [accessed November 2, 2004].
Levant RF. 2004. The empirically validated treatments movement: A practitioner/educator perspective. Clinical Psychology: Science and Practice 11(2):219–224.
Lewczyk CM, Garland AF, Hurlburt MS, Gearity J, Hough RL. 2003. Comparing DISC-IV and clinician diagnoses among youths receiving public mental health services. Journal of the American Academy of Child and Adolescent Psychiatry 42(3):349–356.
Lewinsohn PM. 1987. The coping-with-depression course. In: Munoz RF, ed. Depression Prevention: Research Directions. Washington, DC: Hemisphere Publishing Corporation. Pp. 159–170.
Lilienfeld SO, Lynn SJ, Lohr JM, eds. 2003. Science and Pseudoscience in Clinical Psychology. New York: Guilford Press.
Lin EH, Simon GE, Katzelnick DJ, Pearson SD. 2001. Does physician education on depression management improve treatment in primary care? Journal of General Internal Medicine 16(9):614–619.
Lin KM, Anderson D, Poland RE. 1997. Ethnic and cultural considerations in psychopharmacology. In: Dunner D, ed. Current Psychiatric Therapy II. Philadelphia, PA: W.B. Saunders. Pp. 75–81.
Lowe B, Unutzer J, Callahan CM, Perkins A, Kroenke K. 2004. Monitoring depression treatment outcomes with the Patient Health Questionnaire-9. Medical Care 42(12):1194–1201.
Manderscheid RW, Henderson MJ, Brown DY. 2001. Status of national accountability efforts at the millennium. In: Manderscheid RW, Henderson MJ, eds. Mental Health, United States, 2000. DHHS Publication Number: (SMA) 01-3537. Washington, DC: U.S. Government Printing Office. Pp. 43–52.
Maris RW. 2002. Suicide. The Lancet 360(9329):319–326.
Mark TL, Coffey RM, Vandivort-Warren R, Harwood HJ, King EC, the MHSA Spending Estimates Team. 2005. U.S. spending for mental health and substance abuse treatment, 1991–2001. Health Affairs Web Exclusive W5-133–W5-142.
McClellan J. 2005. Commentary: Treatment guidelines for child and adolescent bipolar disorder. Journal of the American Academy of Child and Adolescent Psychiatry 44(3):236–239.
McCorry F, Garnick DW, Bartlett J, Cotter F, Chalk M. 2000. Developing performance measures for alcohol and other drug services in managed care plans. Joint Commission Journal on Quality Improvement 26(11):633–643.
McGlynn EA. 2003. Introduction and overview of the conceptual framework for a national quality measurement and reporting system. Medical Care 41(Supplement 1):I-1–I-7.
McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, Kerr EA. 2003. The quality of health care delivered to adults in the United States. New England Journal of Medicine 348(26):2635–2645.
McKay J. 2005. Is there a case for extended interventions for alcohol and drug use disorders? Addiction 100(11):1594–1610.
McKay JR, Alterman AI, Cacciola JS, Rutherford MR, O’Brien CP, Koppenhaver J, Shepard D. 1999. Continuing care for cocaine dependence: Comprehensive 2-year outcomes. Journal of Consulting and Clinical Psychology 67(3):420–427.
McKay JR, Lynch KG, Shepard DS, Ratichek S, Morrison R, Koppenhaver J, Pettinati HM. 2004. The effectiveness of telephone-based continuing care in the clinical management of alcohol and cocaine use disorders: 12 month outcomes. Journal of Consulting and Clinical Psychology 72(6):967–979.
McLellan AT. 2002. Contemporary drug abuse treatment: A review of the evidence base. In: Investing in Drug Abuse Treatment: A Discussion Paper for Policy Makers. New York: United Nations Press.
McLellan AT, Grissom G, Brill P, Durell J, Metzger DS, O’Brien CP. 1993a. Private substance abuse treatments: Are some programs more effective than others? Journal of Substance Abuse Treatment 10(3):243–254.
McLellan AT, Arndt IO, Woody GE, Metzger D. 1993b. Psychosocial services in substance abuse treatment: A dose-ranging study of psychosocial services. Journal of the American Medical Association 269(15):1953–1959.
McLellan AT, Lewis DL, O’Brien CP, Kleber HD. 2000. Drug dependence, a chronic medical illness: Implications for treatment, insurance and outcomes evaluation. Journal of the American Medical Association 284(13):1689–1695.
Mechanic D, Bilder S. 2004. Treatment of people with mental illness: A decade-long perspective. Health Affairs 23(4):84–95.
Miller AL, Craig CS. 2002. Combination antipsychotics: Pros, cons, and questions. Schizophrenia Bulletin 28(1):105–109.
Mohr WK, Petti TA, Mohr B. 2003. Adverse effects associated with physical restraint. Canadian Journal of Psychiatry 48(5):330–337.
Mojtabai R. 2002. Diagnosing depression and prescribing antidepressants by primary care physicians: The impact of practice style variations. Mental Health Services Research 4(2):109–118.
Mojtabai R, Malaspina D, Susser E. 2003. The concept of population prevention: Application to schizophrenia. Schizophrenia Bulletin 29(4):791–801.
Moos RH. 2005. Iatrogenic effects of psychosocial interventions for substance use disorders: Prevalence, predictors, prevention. Addiction 100(5):595–604.
Mueser KT, Rosenberg SD, Goodman LA, Trumbetta SL. 2002. Trauma, PTSD, and the course of severe mental illness: An interactive model. Schizophrenia Research 53 (1–2):123–143.
Mukherjee S, Shukla S, Woodle J, Rosen AM, Olarte S. 1983. Misdiagnosis of schizophrenia in bipolar patients: A multiethnic comparison. American Journal of Psychiatry 140(12):1571–1574.
Mullan F. 2004. Wrestling with variation: An interview with Jack Wennberg. Health Affairs. [Online]. Available: http://content.healthaffairs.org/cgi/content/full/healthaff.var.73/DC2 [accessed February 25, 2005].
Musto DF. 1973. The American Disease: The Origins of Narcotic Control. New Haven, CT: Yale University Press.
NAMI (National Alliance for the Mentally Ill). 2003. Seclusion and Restraint: Task Force Report. Arlington, VA: NAMI. [Online]. Available: http://www.nami.org/content/NavigationMenu/Inform_Yourself/About_Public_Policy/Policy_Research_Institute/seclusion_and_restraints.pdf [accessed February 20, 2005].
NASMHPD (National Association of State Mental Health Program Directors, Inc.). 1999. Position Statement on Seclusion and Restraint. [Online]. Available: http://www.nasmhpd.org/general_files/position_statement/posses1.html [accessed February 20, 2005].
NASMHPD. 2005. NASMHPD Position Statement on Services and Supports to Trauma Survivors. [Online]. Available: http://www.nasmhpd.org/general_files/position_statement/NASMHPD%20TRAUMA%20Position%20statementFinal.pdf [accessed February 20, 2005].
NASMHPD Research Institute. undated. NRI Center for Mental Health Quality and Accountability: Evidence-Based Practices. [Online]. Available: http://ebp.networkofcare.net/index.cfm?pageName=index [accessed March 7, 2005].
National Academy of Sciences. undated. Standards of Evidence: Strategic Planning Initiative. [Online]. Available: http://www7.nationalacademies.org/dbasse/Standards%20of%20Evidence%20Description.html [accessed March 2, 2005].
National Quality Forum. 2004. National Quality Forum Home. [Online]. Available: http://www.qualityforum.org [accessed March 21, 2005].
NCQA (National Committee for Quality Assurance). 2004a. HEDIS 2005 Technical Specifications. Washington, DC: NCQA.
NCQA. 2004b. HEDIS 2005 Technical Specifications Volume 2. Washington, DC: NCQA.
NCQA. 2005. Draft Document for HEDIS 2006 Public Comment. [Online]. Available: http://www.ncqa.org/Programs/HEDIS/Public%20Comments/overview/pdf with notation [accessed March 18, 2005].
NCQA. undated. NCQA HEDIS Compliance Audit Program. [Online]. Available: http://www.ncqa.org/programs/hedis/audit/auditex.htm [accessed June 8, 2005].
New Freedom Commission on Mental Health. 2003. Achieving the Promise: Transforming Mental Health Care in America. Final Report. DHHS Publication Number SMA-03-3832. Rockville, MD: U.S. Department of Health and Human Services.
NIDA (National Institute on Drug Abuse). 2005. NIDA/SAMHSA-ATTC Blending Initiative. [Online]. Available: http://www.drugabuse.gov/CTN/whatisblending.html [accessed June 6, 2005].
NIMH (National Institute of Mental Health). 2004. State Implementation of Evidence-Based Practices II: Bridging Science and Service RFA MH-05-004. [Online]. Available: http://grants.nih.gov/grants/guide/rfa-files/RFA-MH-05-004.html [accessed June 7, 2005].
Norcross JC, ed. 2002. Psychotherapy Relationships That Work: Therapist Contributions and Responsiveness to Patients. New York: Oxford University Press.
Office of Technology Assessment. 1988. The Quality of Medical Care: Information for Consumers. OTA-H-386. Washington, DC: U.S. Government Printing Office.
Office of the Surgeon General. 2001. Youth Violence: A Report of the Surgeon General. Washington, DC: U.S. Department of Health and Human Services.
Pappadopulos EA, Guelzow BT, Wong C, Ortega M, Jensen PS. 2004. A review of the growing evidence base for pediatric psychopharmacology. In: Burns BJ, Hoagwood KE, eds. Evidence-Based Practice Part I: Research Update. Child and Adolescent Psychiatric Clinics of North America 13(4):817–856.
Patel NC, Crismon ML, Hoagwood K, Jensen PS. 2005. Unanswered questions regarding atypical antipsychotic use in aggressive children and adolescents. Journal of Child and Adolescent Psychopharmacology 15(2):270–284.
Patterson GR, DeBaryshe BD, Ramsey E. 1989. A developmental perspective on antisocial behavior. American Psychologist 44(2):329–335.
Patterson GR, Dishion TJ, Chamberlain P. 1993. Outcomes and methodological issues relating to treatment of antisocial children. In: Giles TR, ed. Handbook of Effective Psychotherapy. New York: Plenum Press. Pp. 43–88.
Peter D. Hart Research Associates, Inc. 2001. The Face of Recovery. Washington, DC: Peter D. Hart Research Associates, Inc.
Petrosino A, Turpin-Petrosino C, Buehler J. 2005. “Scared Straight” and other juvenile awareness programs for preventing juvenile delinquency. Cochrane Developmental, Psychosocial and Learning Problems Group. The Cochrane Database of Systematic Reviews 4.
Pflueger W. 2002. Consumer view: Restraint is not therapeutic. Networks (1–2):7. [Online]. Available: http://www.nasmhpd.org/general_files/publications/ntac_pubs/networks/SummerFall2002.pdf [accessed February 20, 2005].
Pincus HA. 2003. The future of behavioral health and primary care: Drowning in the mainstream or left on the bank? Psychosomatics 44(1):1–11.
Podorefsky DL, McDonald-Dowdell M, Beardslee WR. 2001. Adaptation of preventive interventions for a low-income, culturally diverse community. Journal of the American Academy of Child & Adolescent Psychiatry 40(8):879–886.
Project MATCH Research Group. 1997. Matching alcoholism treatments to client heterogeneity: Project MATCH post treatment drinking outcomes. Journal of Studies on Alcohol 58(1):7–29.
Rawal PH, Lyons JS, MacIntyre II JC, Hunter JC. 2004. Regional variation and clinical indicators of antipsychotic use in residential treatment: A four state comparison. The Journal of Behavioral Health Services & Research 31(2):178–188.
Richardson LP, Di Giuseppe D, Christakis DA, McCauley E, Katon W. 2004. Quality of care for Medicaid-covered youth treated with antidepressant therapy. Archives of General Psychiatry 61(5):475–480.
Rollman BL, Hanusa BH, Gilbert T, Lowe HJ, Kapoor WN, Schulberg HC. 2001. The electronic medical record. A randomized trial of its impact on primary care physicians’ initial management of major depression. Archives of Internal Medicine 161(2):189–197.
Rose S, Bisson J, Churchill R, Wessely S. 2005. Psychological debriefing for preventing post traumatic stress disorder. The Cochrane Database of Systematic Reviews 2.
Rosenheck RA, Fontana A. 2001. Impact of efforts to reduce inpatient costs on clinical effectiveness: Treatment of Post Traumatic Stress Disorder in the Department of Veterans Affairs. Medical Care 39(2):168–180.
Rushton JL, Fant K, Clark SJ. 2004. Use of practice guidelines in the primary care of children with Attention-Deficit Hyperactivity Disorder. Pediatrics 114(1):e23–e28. [Online]. Available: http://www.pediatrics.aappublications.org/cgi/reprint/114/1/e23 [accessed on September 1, 2005].
Sailas E, Fenton M. 2005. Seclusion and restraint for people with serious mental illness. The Cochrane Database of Systematic Reviews 1.
SAMHSA (Substance Abuse and Mental Health Services Administration). 2004a. Results from the 2003 National Survey on Drug Use and Health: National Findings. DHHS Publication Number SMA 04-3964. Rockville, MD: SAMHSA.
SAMHSA. 2004b. SAMHSA Action Plan: Seclusion and Restraint—Fiscal Years 2004 and 2005. [Online]. Available: http://www.samhsa.gov/Matrix/SAP_seclusion.aspx [accessed February 20, 2005].
SAMHSA. 2005. SAMHSA’s National Registry of Evidence-Based Programs and Practices (NREPP). [Online]. Available: http://www.modelprograms.samhsa.gov/template.cfm?page=nreppover [accessed June 6, 2005].
SAMHSA. undated-a. About Evidence-Based Practices: Shaping Mental Health Services toward Recovery. [Online]. Available: http://mentalhealth.samhsa.gov/cmhs/communitysupport/toolkits/about.asp [accessed March 6, 2005].
SAMHSA. undated-b. Fiscal Year 2006 Justification of Estimates for Appropriations Committees. [Online]. Available: http://www.samhsa.gov/budget/FY2006/FY2006Budget.doc [accessed March 26, 2005].
SAMHSA. undated-c. Report to Congress on the Prevention and Treatment of Co-Occurring Substance Abuse Disorders and Mental Disorders. [Online]. Available: http://www.samhsa.gov/reports/congress2002/CoOccurringRpt.pdf [accessed April 25, 2004].
Satre DD, Knight BG, Dickson-Fuhrmann E, Jarvik LF. 2004. Substance abuse treatment initiation among older adults in the GET SMART program: Effects of depression and cognitive status. Aging & Mental Health 8(4):346–354.
Schnaars C. 2003, April 13. Tape called strong evidence in boy’s death. Daily Press. Local News. p. C1. Newport News, Virginia.
Shojania KG, Grimshaw JM. 2005. Evidence-based quality improvement: The state of the science. Health Affairs 24(1):138–150.
Shojania KG, McDonald KM, Wachter RM, Owens DK. 2004. Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies, Volume 1—Series Overview and Methodology. AHRQ Publication Number: 04-0051-1. Rockville, MD: Agency for Healthcare Research and Quality.
Simon GE, Von Korff M, Rutter CM, Peterson DA. 2001. Treatment processes and outcomes for managed care patients receiving new antidepressant prescriptions from psychiatrists and primary care physicians. Archives of General Psychiatry 58(4):395–401.
Simon R, Fleiss J, Gurland B, Stiller P, Sharpe L. 1973. Depression and schizophrenia in hospitalized black and white mental patients. Archives of General Psychiatry 28(4):509–512.
Simpson DD, Joe GW, Brown BS. 1997. Treatment retention and follow-up outcomes in the Drug Abuse Treatment Outcome Study (DATOS). Psychology of Addictive Behaviors 11(4):294–301.
Simpson DD, Joe GW, Fletcher BW, Hubbard RL, Anglin MD. 1999. A national evaluation of treatment outcomes for cocaine dependence. Archives of General Psychiatry 56(6):507–514.
Smith G, Davis R, Bixler E, Lin H, Altenor A, Altenor R, Hardenstein B, Kopchik G. 2005. Special section on seclusion and restraint: Pennsylvania state hospital system’s seclusion and restraint reduction program: Timeline of change. Psychiatric Services 56:1115–1122.
Stanton M. 2002. Expanding Patient-Centered Care to Empower Patients and Assist Providers. AHRQ Publication Number: 02-0024. Rockville, MD: Agency for Healthcare Research and Quality. [Online]. Available: http://www.ahrq.gov/qual/ptcareria.pdf [accessed February 8, 2005].
Stein M. 2002. The role of attention-deficit/hyperactivity disorder diagnostic and treatment guidelines in changing physician practices. Pediatric Annals 31(8):496–504.
Stein MB, Sherbourne CD, Craske MG, Means-Christensen A, Bystritsky A, Katon W, Sullivan G, Roy-Byrne PP. 2004. Quality of care for primary care patients with anxiety disorders. American Journal of Psychiatry 161(12):2230–2237.
Steinberg EP, Luce BR. 2005. Evidence based? Caveat emptor! Health Affairs 24(1):80–92.
Strakowski SM, Shelton RC, Kolbrener ML. 1993. The effects of race and comorbidity on clinical diagnosis in patients with psychosis. Journal of Clinical Psychiatry 54(3):96–102.
Strickland TL, Ranganath V, Lin K, Poland RE, Mendoza R, Smith MW. 1991. Psychopharmacologic considerations in the treatment of Black American populations. Psychopharmacology Bulletin 27(4):441–448.
Tanenbaum S. 2003. Evidence-based practice in mental health: Practical weaknesses meet political strengths. Journal of Evaluation in Clinical Practice 9(2):287–301.
Tanenbaum SJ. 2005. Evidence-based practice as mental health policy: Three controversies and a caveat. Health Affairs 24(1):163–173.
Teague GB, Trabin T, Ray C. 2004. Toward common performance indicators and measures for accountability in behavioral health care. In: Roberts AR, Yeager K, eds. Evidence-Based Practice Manual: Research and Outcome Measures in Health and Human Services. New York: Oxford University Press. Pp. 46–61.
The Campbell Collaboration. undated. The Campbell Collaboration. [Online]. Available: http://www.campbellcollaboration.org/index.html [accessed December 3, 2004].
The Cochrane Collaboration. 2004. What Is The Cochrane Collaboration? [Online]. Available: http://www.cochrane.org/docs/descrip.htm [accessed December 3, 2004].
The President’s Advisory Commission on Consumer Protection and Quality in the Health Care Industry. 1998. Quality First: Better Health Care for All Americans. Washington, DC: U.S. Government Printing Office.
Thompson C, Kinmonth A, Stevens L, Peveler R, Stevens A, Ostler K, Pickering R, Baker N, Hensen A, Preece J, Cooper D, Campbell M. 2000. Effects of a clinical-practice guideline and practice-based education on detection and outcome of depression in primary care: Hampshire Depression Project randomized controlled trial. Lancet 355(9199):185–191.
Tunis SR, Stryer DB, Clancy CM. 2003. Practical clinical trials: Increasing the value of clinical research for decision-making in clinical and health policy. Journal of the American Medical Association 290(12):1624–1632.
VA Technology Assessment Program. 2002. Outcome Measurement—Mental Health Overview: Final Report. [Online]. Available: http://www.va.gov/vatap [accessed February 28, 2005].
VHA (Veterans Health Administration). 2005. Clinical Practice Guidelines. [Online]. Available: http://www.oqp.med.va.gov/cpg/cpg.htm [accessed February 28, 2005].
Wang PS, Berglund P, Kessler RC. 2000. Recent care of common mental disorders in the United States: Prevalence and conformance with evidence-based recommendations. Journal of General Internal Medicine 15(5):284–292.
Wang P, Demler O, Kessler RC. 2002. Adequacy of treatment for serious mental illness in the United States. American Journal of Public Health 92(1):92–98.
Watkins KE, Burnam A, Kung F-Y, Paddock S. 2001. A national survey of care for persons with co-occurring mental and substance use disorders. Psychiatric Services 52(8):1062–1068.
Webster-Stratton C, Hammond M. 1997. Treating children with early-onset conduct problems: A comparison of child and parent training interventions. Journal of Consulting and Clinical Psychology 65(1):93–109.
Webster-Stratton C, Hammond M. 1999. Marital conflict management skills, parenting style, and early-onset conduct problems: Processes and pathways. Journal of Child Psychology and Psychiatry, and Allied Disciplines 40(6):917–927.
Weisner C, Matzger H. 2002. A prospective study of the factors influencing entry to alcohol and drug treatment. Journal of Behavioral Health Services & Research 29(2):126–137.
Weisz JR. 2004. Psychotherapy for Children and Adolescents: Evidence-Based Treatments and Case Examples. Cambridge, MA: Cambridge University Press.
Wennberg JE, ed. 1999. The Dartmouth Atlas of Health Care in the United States. [Online]. Available: http://www.dartmouthatlas.org/pdffiles/99atlas.pdf [accessed November 24, 2004].
West S, King V, Carey TS, Lohr K, McKoy N, Sutton S, Lux L. 2002. Systems to Rate the Strength of Scientific Evidence. AHRQ Publication No. 02-E016. Rockville, MD: Agency for Healthcare Research and Quality.
Wolff N. 2000. Using randomized controlled trials to evaluate socially complex services: Problems, challenges, and recommendations. Journal of Mental Health Policy and Economics 3(2):97–109.
Work Group on ASD and PTSD. 2004. Practice Guideline for the Treatment of Patients with Acute Stress Disorder and Posttraumatic Stress Disorder. [Online]. Available: http://www.psych.org/psych_prac/treatg/pg/PTSD-PG-PartsA-B-CNew.pdf [accessed June 6, 2005].
Young AS, Klap R, Sherbourne C, Wells KB. 2001. The quality of care for depressive and anxiety disorders in the United States. Archives of General Psychiatry 58(1):55–61.
Zhan C, Miller MR. 2003. Administrative data based patient safety research: A critical review. Quality & Safety in Health Care 12(Supplement II):ii58–ii63.
Zima BT, Hurlburt MS, Knapp P, Ladd H, Tang L, Duan N, Wallace P, Rosenblatt A, Landsverk J, Wells KB. 2005. Quality of publicly-funded outpatient specialty mental health care for common childhood psychiatric disorders in California. Journal of the American Academy of Child & Adolescent Psychiatry 44(2):130–144.
Zito JM, Safer DJ, dosReis S, Gardner JF, Boles M, Lynch F. 2000. Trends in the prescribing of psychotropic medications to preschoolers. Journal of the American Medical Association 283(8):1025–1030.