4

Metrics

Summary: The importance of measuring impacts from interprofessional education resonated with Forum member and workshop planning committee co-chair Scott Reeves of the University of California, San Francisco. Reeves, who has devoted much of his career to studying the impact of interprofessional education (IPE), asserted, “If we want to understand culture and begin to develop robust metrics, we need to go in there and we need to study it.” In essence, implementers of IPE need to be clear about the purpose of their work so that researchers can confidently analyze whether or not a program is successful. According to Reeves, having robust measurements of the effectiveness of IPE allows programs to be compared and conclusions to be drawn. This assertion was echoed by other participants at the workshop and forms the foundation for this chapter on developing metrics to advance interprofessional education and collaborative care.

EMBRACING A COMMON PARLANCE

Without clear conceptualizations of what is being investigated and without a common understanding of what various terms mean, researchers studying IPE face a variety of problems, Reeves said. He also noted that throughout the workshop participants had inaccurately used some words interchangeably. For example, he said, despite how some participants had used the words, “assessment” is not the same as “evaluation.” And although “interprofessional” had been defined early in the workshop,



Copyright © National Academy of Sciences. All rights reserved.


participants continued to mix their terms and offer examples of interdisciplinary and multidisciplinary education and care. Reeves emphasized that one must be clear about the terminology and concepts or the entire research methodology becomes flawed.

Assessment Versus Evaluation

Reeves made a useful distinction between assessment and evaluation. Assessment is done to determine the level of understanding by a learner, while evaluation is a tool to determine how well a program or an educator teaching a course is conveying messages. For assessment, he said, there needs to be a meaningful analysis of how the individual learns, not just in the short term but in the long term as well. For evaluation, thoughtful consideration is needed to determine how well the program is conveying the desired messages and information.

Interprofessional, Interdisciplinary, or Multidisciplinary

The terms “interprofessional” and “interdisciplinary” are often used interchangeably in the literature, but at the workshop most speakers and participants used the word “interprofessional.” This is not surprising, said Reeves, given that the workshop title included the term “interprofessional education.” These terms imply an integrative, collaborative approach to education or practice, he said. On the other hand, “multidisciplinary” simply means several fields, areas of expertise, or disciplines coming together without integrating the services (Reeves et al., 2010).

MEASUREMENT PRACTICES IN IPE

According to Forum member Eric Holmboe of the American Board of Internal Medicine (ABIM), two overarching themes arise when one discusses measurement practices in IPE: the need for competency-based models and the need for a more robust evidence base. Although work is under way to fill the gaps in the evidence base, serious obstacles remain because of uncertainty about what to measure and how to measure it.
Currently, Holmboe said, there are differences of opinion regarding what the unit of analysis should be when measuring various aspects of IPE (i.e., the individual, the program, or the institution) and where such an assessment should start. One Forum member suggested that, regardless of whether the analysis is of the faculty, the curriculum, the patient, or the community, the tools do exist, but the analysis needs to be broken out in a way that allows those tools to be applied.

Analyzing Program Design

A number of workshop participants proposed starting with the desired results and working backward to determine the best ways to educate students. However, Holmboe said, this design goes against most health professional education models, which typically start with the student and work forward. Holmboe added that working this way also means educators have to predict what future practice will entail and attempt to prepare health professional students to fit within that model. The World Health Organization (WHO) International Classification of Functioning, Disability and Health framework, presented by workshop speaker Stefanus Snyman in Chapter 2, may be a useful tool for envisioning such a practice, he said.

Purposeful IPE Research and Program Design

As the leader of the small group on IPE assessment, Holmboe reported to the wider audience the group’s contention that before initiating any assessment, its purpose should be clarified. If the purpose is to drive improvements and feedback, for instance, tools could be built that have catalytic effects that impel future learning to improve health and to drive education. One example of this is the Kaiser Permanente care teams in Colorado. A care team includes physicians, clinical pharmacists, nurses, and medical assistants. A main focus of the teams’ care since 2008 has been hypertension control. During that time the percentage of members who kept their hypertension under control rose from 61 to 83 percent, the latter figure being roughly 10 to 30 percent above the national average.
As workshop speaker Dennis Helling, executive director of pharmacy operations and therapeutics at Kaiser Permanente, said, “We are a team-based, fully integrated delivery system, with an electronic medical record that is a great site for IPE.” And, he added, the pharmacy operations section is taking full advantage of this IPE opportunity by engaging its students in meaningful work as part of these well-functioning teams.

Despite the accepted benefits of student exposure to well-functioning teams like those at Kaiser Permanente Colorado, it has not been possible to directly measure the effects of interprofessional education on health. As Holmboe said, to assess IPE well, researchers will likely need to embrace more complex measurement strategies that require developmental expertise as well as a knowledge of methodology and program evaluation. It is possible, he said, that a combination of approaches and tools that includes both qualitative and quantitative methods will be required.

Holmboe speculated that the argument against a complex approach to analysis would be that it is easier to use the reductionist model of measuring small pieces of IPE. The problem, as he sees it, is that such a simplification

inevitably leads to a loss of information, bringing into question the meaning and the value of the assessment.

Speaker Mark Earnest of the University of Colorado agreed and then elaborated on the issue. To assess collaboration effectively, he said, one needs measurements that are valid and reliable. He added that, to be valid and reliable, the data need to be multi-source (that is, not just from a single person), to occur over multiple points in time across multiple settings, and to be measured against a standardized rubric. This is quite difficult to accomplish, Earnest said, although ABIM is working on developing such a model. Holmboe, who is from ABIM, pointed to the realist evaluation strategy of Ray Pawson and Nick Tilley and also to Michael Quinn Patton’s developmental evaluation as approaches that might provide insights into how IPE could be assessed and evaluated (Pawson and Tilley, 1997; Patton, 2011).

In thinking through the various models to assess his students’ ability to work collaboratively, Earnest said that he studied the pros and cons of various educational models. More details are provided in Box 4-1.

Self-Directed Assessment

Assessment is something that all health professionals need to do to remain relevant within a field, but, Holmboe said, most often the assessor is not the person who would benefit most from the assessment. Thus he suggested that organizations should increasingly move to self-directed assessments. However, he said, this would have ramifications for the measurements of professional collaborative relationships. “When you ask an audience if they collaborate well, everybody puts their hands up, because nobody wants to say they’re a bad collaborator.” Thus one issue is whether self-assessment is biased and, if it is, how that would impact interprofessional assessment.
One participant from the breakout group on assessment suggested using newer technologies to track self-assessments in a more structured manner. This might include portfolios, blogs, or electronic applications installed on mobile devices, such as iPhones and iPads, which could be sources of information for measuring the effectiveness of IPE applications. In fact, the participant said, some IPE programs are already using blogs within portfolios that capture what happens over time, particularly from a developmental perspective.

Forum and planning committee member Jan De Maeseneer of Ghent University in Belgium commented that the IPE instructors at Ghent University require students to maintain a portfolio of written and electronic reflections that begin with their first year and continue throughout their 6 years at the university. The reason for having students include their clinical

experiences in the portfolio, he said, is to encourage them to internalize the need for lifelong continuous professional development.

ASSESSMENT TOOLS

Eric Holmboe, in his presentation about the breakout group he led, talked about the need for faculty who are competent in IPE. “A general problem for all of concept-based education,” he said, “is that we have a faculty workforce across all the health professions who were not trained in the very system we are trying to create.” Based on the discussions of his small group, Holmboe commented that many faculties are struggling, so it will be necessary to offer many co-learning activities around assessment as well as education.

Despite the challenges to measuring competencies among learners, a number of presenters at the workshop did report the existence of fairly robust tools for assessing learners and evaluating programs at their institutions. The tools reported by the presenters are described below, organized by the universities at which the various IPE measurement methods are used.

Curtin University

At Curtin University in Australia, faculty have developed the Interprofessional Capability Assessment Tool (ICAT), illustrated in Figure 4-1. Drawn from models developed at Sheffield Hallam University and the University of Toronto, the ICAT assesses students within four domains: communication, professionalism, collaborative practice, and client-centered service and care. Students, faculty, and field preceptors all complete the ICAT form to provide students with feedback on the development of their interprofessional capabilities.

University of Colorado

Earnest, the IPE director from the University of Colorado, reported using an assessment program from Purdue University called the Comprehensive Assessment for Team-Member Effectiveness (CATME). With this tool, self- and peer-assessment information is gathered to determine how successfully each member contributed to the team’s performance.
The CATME report provides no individually attributed assessments, only group-level feedback created by aggregating the data from the individual responses. The eventual goal is to be able to compare these outcomes with team performance scores gathered from other interprofessional activities in order to measure the students’ interprofessional growth over time.
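Aggregation of this kind can be illustrated with a short sketch. The code below is not CATME’s actual algorithm; the function name, data shape, and 1-to-5 rating scale are assumptions chosen for illustration. It shows one way individual peer ratings might be rolled up into anonymized, team-level feedback so that no single rater’s responses are exposed.

```python
# Illustrative sketch only -- NOT CATME's actual algorithm.
# Individual peer ratings (1-5 scale) are aggregated into anonymized
# team-level feedback: each member sees only a peer mean, never a
# single identifiable rater's score.
from statistics import mean

def team_feedback(ratings):
    """ratings: {rater: {ratee: score}}.
    Returns each member's mean score from peers (self-ratings excluded)
    and the overall team mean."""
    members = sorted(ratings)
    peer_means = {
        m: mean(scores[m] for rater, scores in ratings.items()
                if rater != m and m in scores)
        for m in members
    }
    return {"member_means": peer_means,
            "team_mean": mean(peer_means.values())}

# Hypothetical three-person team; each row is one rater's form.
ratings = {
    "ann": {"ann": 5, "ben": 4, "cam": 3},
    "ben": {"ann": 4, "ben": 5, "cam": 4},
    "cam": {"ann": 5, "ben": 3, "cam": 5},
}
result = team_feedback(ratings)
```

Because the self-rating is dropped and only means are reported, the output is the kind of group-level feedback the paragraph above describes, while still leaving self-ratings available separately for studying self-assessment bias.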

BOX 4-1
Mark Earnest, M.D., Ph.D.
University of Colorado

When designing the interprofessional experience for students at the University of Colorado, Mark Earnest and colleagues studied the pros and cons of various educational models. They were particularly interested in finding a model that could assess student learning. Through their research they considered the following models:

• Traditional model of the facilitated discussion
• Group projects
• Problem-based learning
• Michaelsen’s team-based learning model

In the traditional model of the facilitated discussion, students participate in a planned “experience” and read literature to more fully understand the experience. They then come back to the university and discuss what they learned, typically with a faculty preceptor who serves as the referee. The goal is to engage all learners in speaking and active listening. In this model, the group does not necessarily have to make a decision, but if it does, the stakes are fairly small.

The group projects model requires students to work together in completing a term paper. For example, each student may write a paragraph, and in the final product all the paragraphs are assembled. But this is not teamwork or collaboration, Earnest said, and, generally, the students do not feel invested in the product in part

because they do not believe the paper is read with sufficient attention. Furthermore, evaluating each student’s contribution to the term paper is difficult.

Problem-based learning has a strong methodological foundation, but measuring the contribution of individual students is still difficult. Measuring or comparing the performance of one student team against another is difficult, as is finding problems that are amenable to this learning method and that all students embrace and are equally ready for.

Michaelsen’s team-based learning model had a number of valuable components, but, as with the other models, much of the student work ultimately cannot be assessed. One team’s outcome can be qualitatively compared with that of another team, but an individual team’s performance is not measurable.

Given the limitations of each of the models, Earnest and his colleagues devised a new model with a set of principles for what they considered optimal conditions for learning about teamwork. One condition was the requirement that the team be the unit of learning and the unit of work. With the method that Earnest and colleagues developed, the team’s goal is important enough to them that they do not need a faculty preceptor. This situation more closely emulates real work environments, where there are no referees and team members need to work out challenges among themselves.

Borrowing from team- and problem-based learning models, Earnest’s model has student teams receive an activity that requires group problem solving and collaboration for successful completion. Unlike the case with the group term paper, this activity cannot be easily or efficiently accomplished by single individuals or by individuals working in parallel. In addition, the team performance is measurable, so that at the end of the learning activity, members can compare how they did in a standardized, objective way and find out how well their team performed compared to other teams.
Teams with better collaboration receive higher scores. In this model, an activity begins with roughly eight teams gathering in a room with a single facilitator who keeps time and directs the learning experience. The teams work in parallel to solve a multidimensional clinical puzzle in which they identify potential harms and process errors. Teams are given an hour to complete the task. At the end of that time, their work is done, and each team receives a score that is posted at the front of the room. This is followed by a debriefing that focuses on what each team did to accomplish the activity and how the team got to its answer.

Through this team-based, competitive activity, educators at the University of Colorado hope to create a language and a set of experiences that students can translate into clinical settings and that will provide them with a richer and more sophisticated understanding of how to collaborate effectively.

FIGURE 4-1 Interprofessional education capability framework—and the ICAT.
SOURCE: Brewer and Jones, in press.

University of Virginia

Faculty of the University of Virginia (UVA) IPE program are also interested in longitudinal assessment of student learning, said Valentina Brashers, the UVA presenter at the workshop. Their tool, the Interprofessional Teamwork Objective Structured Clinical Examination, assesses students’ pre- and post-clinical/clerkship outcomes in order to better understand student learning before and after completing four IPE simulation experiences, which are done in the same year. Students are also assessed following each individual simulation experience, she added. Using the Collaborative Behaviors Observational Assessment Tool, faculty can track student achievement of competencies corresponding to a specific simulation activity. Another assessment tool used at UVA is the Team Skills Scale. According to Brashers, this tool was developed by Hepburn and colleagues (1996) to assess self-perceived team skills in the preclinical education phase.

Brashers also said that researchers from UVA are looking into how well participants in the Continuing Interprofessional Education (CIE) Program

follow through on expressed commitments to change. In this Commitment to Change model, CIE participants are asked to fill out a “commitment to change” form before leaving the premises; UVA staff follow up with each participant 3 and 6 months later to ask whether the participant made the intended change. Although the results from this activity at UVA are still pending, Brashers said, studies have shown that health providers who make such commitments are more likely to change their behavior than those who do not make the commitments (Wakefield et al., 2003; Fjortoft, 2007).

University of Missouri

The University of Missouri’s IPE presenter, Carla Dyer, reported how a measure of safety—decreasing hospital patient falls—has been used as the endpoint for assessing student-based interprofessional interventions in an attempt to link IPE to patient outcomes. Using patient interviews to assess student success, the research group found that despite the lack of evidence demonstrating a significant impact on patient falls—which may have been an artifact of the small sample size—93 percent of patients reported that the students’ interventions had value. Furthermore, through pre- and post-intervention testing of the participating medical and nursing students, faculty did find that the students had significantly greater confidence in assessing and intervening with at-risk patients after participating in the interventions.

Department of Veterans Affairs

One of the evaluation tools used by the Department of Veterans Affairs (VA), reported on by Kathryn Rugen at the workshop, is the VA Learner Perception Survey. According to Rugen, this tool was modified specifically for use in primary care to include attributes of the PACT (Patient Aligned Care Teams) model of patient-centered, team-based interprofessional care. This revised survey was piloted in 2012.
Rugen said that preliminary analysis showed that the trainees within the centers of excellence were reporting higher satisfaction rates, although further assessments (which are forthcoming) are needed to confirm these preliminary results.

Linköping University

At Linköping University in Sweden, Margaretha Wilhelmsson and colleagues were interested in knowing whether certain personal attributes indicated a readiness for interprofessional learning. According to Wilhelmsson, who represented the university’s IPE program at the workshop, they studied approximately 700 medical and nursing students from programs across

Sweden. Using the Readiness for Interprofessional Learning instrument, they found that women and those enrolled in nursing programs displayed earlier readiness for interprofessional learning. The study included only nursing and medical students, but it does indicate that some students may be more ready than others to work collaboratively. Such increased readiness could lead to greater success in interprofessional education and collaborations (Wilhelmsson et al., 2011).

EVALUATING IPE TO INTERPROFESSIONAL PRACTICE

In her summary remarks, Forum member Gillian Barclay from the Aetna Foundation said that activities are under way to measure “care coordination” in the United States. For example, she pointed out that in 2010 the National Quality Forum published Preferred Practices and Performance Measures for Measuring and Reporting Care Coordination (NQF, 2010), and that same year the Agency for Healthcare Research and Quality produced the Care Coordination Measures Atlas (AHRQ, 2012). The following year the National Committee for Quality Assurance made the Care Coordination Process Measures available, in addition to similar measurement publications already available from other organizations. Despite these laudable efforts to measure care coordination activities, however, no organizations are attempting to measure linkages between IPE and interprofessional practice (IPP). As Barclay said, “It is a bit disturbing because the assumption is made that care can be coordinated without really figuring out if people have competencies and skills to work together as a team. It is not as simple as just putting people there and having them coordinate care.” In addition, she added, many of the indicators used to measure outcomes in care coordination come from the clinical environment, such as the 30-day readmission rate and the time spent in a waiting room.
Barclay then challenged the audience to go beyond the walls of the clinical environment and use IPE-to-IPP indicators that measure outcomes in population health.

Although Forum member Brenda Zierler from the University of Washington agreed with Barclay, she added that, from a clinical perspective, there may be difficulties with linking patient outcomes to IPE training events in the simulation lab or classroom for pre-licensure students. One reason for this is that students are trained together in team-based activities and then placed in clinical sites one student at a time. Another issue is the inability of high-functioning clinical teams to articulate team competencies to students. This issue was also raised by Matthew Wynia of the American Medical Association, who found in a study with colleagues that team members do not always see what they do as transferable, teachable, or something that others could adopt and learn (Mitchell et al., 2012). As a result, there are potential teachers and role models of team care that go

untapped because these individuals do not recognize that their activities are teachable.

Key Messages Raised by Individual Speakers

• Implementers of IPE need to be clear about the purpose of their work so researchers can confidently analyze whether or not a program is successful. (Reeves)
• Uncertainty over how to measure IPE creates obstacles to developing competency-based models and an evidence base for IPE. (Holmboe)
• A complex, multi-sourced approach to assessment and evaluation is needed to distill the meaning and value of IPE. (Earnest and Holmboe)
• Tools for assessing interprofessional learning are being developed and refined. (Brashers, Dyer, Earnest, Forman, and Rugen)

REFERENCES

AHRQ (Agency for Healthcare Research and Quality). 2012. Patient centered medical home resource center. http://www.pcmh.ahrq.gov/portal/server.pt/community/pcmh__home/1483/pcmh_defining_the_pcmh_v2 (accessed March 4, 2013).
Brewer, M., and S. Jones. In press. An interprofessional practice capability framework focusing on safe, high quality client centred health service. Journal of Allied Health.
Fjortoft, N. 2007. The effectiveness of commitment to change statements on improving practice behaviors following continuing pharmacy education. American Journal of Pharmacy Education 71(6):112.
Hepburn, K., R. A. Tsukuda, and C. Fasser. 1996. Team skills scale. In G. D. Heinemann and A. M. Zeiss, eds., Team performance in health care: Assessment and development. New York: Kluwer Academic/Plenum Publishers.
Mitchell, P., M. Wynia, R. Golden, B. McNellis, S. Okun, C. E. Webb, V. Rohrbach, and I. von Kohorn. 2012. Core principles and values of effective team-based care. Discussion Paper, Institute of Medicine, Washington, DC. http://iom.edu/Global/Perspectives/2012/TeamBasedCare.aspx (accessed March 12, 2013).
NQF (National Quality Forum). 2010. Preferred practices and performance measures for measuring and reporting care coordination: A consensus report. Washington, DC: NQF.
Patton, M. 2011. Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York: Guilford Press.
Pawson, R., and N. Tilley. 1997. Realist evaluation. London, UK: Sage.
Reeves, S., S. Lewin, S. Espin, and M. Zwarenstein. 2010. Interprofessional teamwork for health and social care. Oxford, UK: Wiley-Blackwell.

Wakefield, J., C. P. Herbert, M. Maclure, C. Dormuth, J. M. Wright, J. Legare, P. Brett-MacLean, and J. Premi. 2003. Commitment to change statements can predict actual change in practice. Journal of Continuing Education in the Health Professions 23(2):81–92.
Wilhelmsson, M., S. Ponzer, L. O. Dahlgren, T. Timpka, and T. Faresjö. 2011. Are female students in general and nursing students more ready for teamwork and interprofessional collaboration in healthcare? BMC Medical Education 11:15.