The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
1 Setting the Stage

Key Messages

• Both summative and formative assessments are critical components of a competency-based system. (Holmboe, Norcini)
• Understanding why the assessment is being conducted and how the purpose aligns with the desired outcomes is key to undertaking an assessment. (Holmboe, Norcini)
• By combining a demonstration of knowledge with acquisition of skills, and testing for an ability to apply both knowledge and skills in new situations, a message is sent to learners that knowledge, skills, application, and ability are all important elements of their education. (Holmboe, Norcini)
• Too little time is spent on formative assessment. (Holmboe, Norcini)
• There is a need for greater faculty development in the area of assessment. (Aschenbrener, Bezuidenhout, Holmboe, Norcini, Sewankambo)
• Although self-assessment is a useful tool, most individuals are not good at it. (Baker, Holmboe, Norcini, Reeves)
• Regardless of how well learners are trained, dangerous situations leading to medical errors will persist if there is no support from the larger organizational structures emphasizing the need for a culture of safety. (Finnegan, Gaines, Malone, Palsdottir, Talbott)

In setting the stage for the entire workshop, John Norcini, from the Foundation for Advancement of International Medical Education and Research (FAIMER), described assessment as a powerful tool for directing learning by signaling what is important for a learner to know and understand. In this way, he said, assessments can motivate learners to acquire greater knowledge and skills in order to demonstrate that learning has occurred. Summative assessments measure achievement, while formative assessments focus on the learning process and on whether the activities the learners engaged in helped them to better understand and demonstrate competency. As such, both summative and formative assessments are critical components of a competency-based system.
A competency-based model directs learning based on the intended outcomes for a learner (Harris et al., 2010; Sullivan, 1995) in the particular context where the training takes place. Although it is outcome oriented, competency-based education also relies on continuous and frequent assessments in the course of obtaining specific competencies (Holmboe et al., 2010).

THE PURPOSE OF ASSESSMENT

According to Norcini, assessment involves testing, measuring, collecting and combining information, and providing feedback (Norcini et al., 2011). Understanding

why the assessment is being conducted and how the purpose aligns with the desired outcomes is key to undertaking an assessment. Norcini posed a list of potential purposes of assessment in health professional education, which might include some or all of the following:

• Enhance learning by pointing out flaws in a skill or errors in knowledge.
• Ensure safety by demonstrating that learning has occurred.
• Guide learning in a particular direction outlined by the assessment questions or methods.
• Motivate learners to seek greater knowledge in a particular area.
• Provide feedback to the educator or trainer that benchmarks the progress of the learner.

Highlighting the fourth bullet, Norcini emphasized that a purpose of assessment is to "create learning." In order to learn, one needs to be able to retrieve and use the information taken in. To underscore this point, Norcini cited an example involving students who took a test three times and ultimately scored better on that test than students who read a relevant article three times (Roediger and Karpicke, 2006). This is known as the "testing effect," whereby it is believed that tests can actually enhance retention even when those tests are given without any feedback. Norcini described the hypothesis behind the testing effect: assessments create learning because they force not only retrieval but also application of information, and they signal to students what is important and what should be emphasized in their studies and experiential learning.

Afaf Meleis, the Forum co-chair, questioned whether there is a danger in using assessments that direct studying toward the assessment tool rather than opening new ways of critical thinking. Norcini responded in the affirmative, saying that because the risk is always present, the assessment tool must be carefully selected. Historically, tests have been designed around fact memorization.
Roughly 20 to 25 years ago, the standardized patient was introduced into assessments, moving beyond the simple memorization–regurgitation model. By combining a demonstration of knowledge with acquisition of skills, and testing for an ability to apply both knowledge and skills in new situations, a message is sent to learners that knowledge, skills, application, and ability are all important elements of their education.

Assessment Outcomes and Criteria

As might be expected, said Norcini, the most important outcome of an assessment differs based on one's perspective. Students are concerned about being able to demonstrate their competence; educators and educational institutions are interested in producing competent health professionals who are accountable; and regulatory bodies are mainly focused on accountability and maintenance of professional competence. Users of the health system are also concerned that health professionals be accountable and competent, but in addition, they want to know whether providers are being efficient with their resources. Desired outcomes of an assessment differ not only based on perspective, as noted above, but also based on the context within which the assessment is being conducted.

And although there are certain characteristics of a good assessment, Norcini emphasized that no single set of criteria applies equally to all assessment situations. Despite all this diversity in the reasons for conducting assessments and the settings within which they are conducted, Norcini reported on how participants at the Ottawa Conference were able to come together to produce a unified set of seven criteria needed for a good assessment (Norcini et al., 2011). These conference participants also explored how the criteria might be modified based on the purpose of the assessment and the stakeholder(s) using it. The criteria were presented to the Forum members for discussion at the workshop and can be found in Table 1-1.

In considering the criteria outlined by Norcini, Global Forum co-chair Jordan Cohen asked if it is possible to use these principles of assessment for assessing how well teams function and work interprofessionally. Norcini responded with a resounding affirmation that the principles apply regardless of the assessment situation, although the challenges increase dramatically. This, he said, is a growing area of research. For example, the 360-degree assessment is one way to measure teams, and there is considerable work under way in using simulation to assess health professional teams.

Assessment as a Catalyst for Learning

Warren Newton, representing the American Board of Family Medicine, asked about Norcini's use of the term catalyzing learning. Norcini responded that it is one thing to tell a student what is important to learn and another thing to provide students with feedback, based on the assessment, that drives their learning. The latter is a much more specific way of signaling what is important, and it is used to create learning among students. Newton then asked another question about the costs of assessment activities relative to other kinds of activities.
He pointed out that many of the Forum members manage both faculties and clinical systems; this prompted the question, how much time should be spent on assessment as part of the overall teaching role? Norcini responded by looking at the types of assessments, saying that far too much time is often devoted to summative assessment and too little time is spent on formative assessment; he added that formative assessment is the piece that drives learning and the part that is integrated with learning. Furthermore, assessments can be done relatively efficiently, especially if the assessors collaborate with partners across the institution. Norcini believes there could be greater sharing of resources across institutions, which would lead to better and more efficient assessments. Another advantage is the cost savings that can be achieved by spreading fixed costs, which typically represent the largest expenses associated with assessments, across institutions.

Assessment's Impact on Patients and Society

Forum member and workshop co-chair Eric Holmboe from the American Board of Internal Medicine (ABIM) moderated the question-and-answer session with John Norcini and brought up assessment from a public perspective. He asked the audience what the return on investment would be if assessment were not in place, that is, if health professionals who are insufficiently prepared were licensed and allowed to practice throughout a 30-year career. The cost to society would be much less if time was spent,

particularly on the formative side, to make sure health professionals acquire the competence needed to be effective. Holmboe went on to say that assessors often look at the short-term costs and the time costs without recognizing that not putting in sufficient effort comes at a heavy cost over time. And there has not been a strong, concerted effort to embed assessment into daily activities, such as bedside rounds, which can be a form of observation and assessment that could be more effectively exploited. There are also a number of multisource tools that are relatively low tech and involve a series of observations; what is lacking in these tools, however, is a way to make them sufficiently reliable so appropriate judgments and inferences can be extracted.

TABLE 1-1 Criteria Needed for a Good Assessment, Produced at the Ottawa Conference

• Validity or coherence: There is a body of evidence that "hangs together" and supports the use of a test for a particular purpose. (Validity is a property of the inferences drawn from a test, not of the test itself; it is a matter of degree; it requires the ongoing collection of data.)
• Reliability or reproducibility: Scores of examinees would be the same if they were retested. (Forms include test–retest reliability, alternate-form reliability, split-half reliability, and the reliability index.)
• Equivalence: Different versions of an assessment yield equivalent scores or decisions. (A challenge for assessment in the workplace.)
• Educational effect: The test motivates those who take it to prepare in a fashion that has educational benefit. (How do students prepare for the test?)
• Catalytic effect: The assessment provides results and feedback in a fashion that enhances learning. (A requirement for formative assessment.)
• Feasibility: The test is practical, realistic, and sensible, given the circumstances and context.
• Acceptability: Stakeholders find the assessment process and results to be credible.

SOURCE: Norcini et al., 2011.
Forum and workshop planning committee member Patricia Hinton Walker, from the Uniformed Services University of the Health Sciences, followed Holmboe's lead in asking about including the public on the health team and how an assessment might be conducted that includes not just patients but students as well. Norcini responded again by

emphasizing the value of multisource feedback for team assessments, as well as other opportunities, such as ethics panels that can make use of the patient's competence in a particular area. He went on to say that the assessment process would lack validity if patients were not involved in the assessment. But in follow-up, Walker commented that students are somewhat separated from patients and families. Norcini pointed out that this is an area of keen interest among researchers in the United Kingdom, who are incorporating patients into the education of all health care providers through family interviews. Holmboe also brought up longitudinal integrated clerkships (LICs), in which students are assigned a group of patients and a family to follow over all 4 years of their training. It is the families who play a major role in the assessment and feedback process of the trainees, said Holmboe. Although it is a resource-intensive model, there are data from Australia, Canada, South Africa, and the United States looking into using LICs as an organizing principle (Hirsh et al., 2012; Norris et al., 2009). The Commonwealth Medical School in Scranton has moved to an entirely LIC-based model, so every student at Commonwealth will be in an LIC-type model for their entire medical education.

Hinton Walker also wanted to know Holmboe's and Norcini's views on "high-stakes assessments." In Holmboe's opinion, there needs to be some form of public accountability through a summative assessment (Norcini agreed). At the ABIM, where Holmboe works, he views the certification exam as part of the board's public accountability as well as an act of professionalism. But for him, the bigger issue is the inclusion of more formative assessments during training and education rather than relying so much on summative examinations.
The only addition Norcini made to Holmboe's comments was that he sees formative assessment as a mechanism for addressing trainee errors at a much earlier stage than waiting until the end for a summative assessment.

Jacob Buck from the University of Maryland School of Social Work, who joined the workshop as a participant, asked what the target of the assessment should be: is it to have healthier individuals and populations, or is it to graduate smarter health providers? In response, Norcini unpacked the goal of the assessment. If the goal is to take better care of patients, then the focus would be on demonstration of skills in a practice environment and likely not on a multiple-choice test. In his opinion, the triple aim of improving health and care at lower costs may be the desired outcome of education, so an assessment could be designed to achieve that goal. Forum member Pamela Jeffries from Johns Hopkins University did not disagree, but she asked how one might measure interprofessional education (IPE) in the practice environment while patients are involved. Holmboe responded that this gets at some of the complexities of assessing a learner's acquisition of experiential learning. Holmboe also raised the complexity of finding training sites where high-quality interprofessional care can be experienced so that learners can be assessed against a gold standard. It is not surprising that learners who do not experience high-quality interprofessional care are not well prepared to work in these environments. Jeffries suggested that interprofessional clinical simulations could help bridge the gap for learners who are not trained through an embedded IPE clinical or related work experience.

STRUCTURE AND IMPLEMENTATION OF ASSESSMENT

Looking at assessment through a different lens, Forum member Bjorg Palsdottir, who represents the Belgian organization Training for Health Equity Network (THEnet), wanted to know more about who is doing the assessing and how that person might prepare to undertake this role. Norcini acknowledged the need for greater faculty development in this area because health professionals are not trained in education or assessment. Aschenbrener agreed but also felt that the shortage of modern clinical practice sites in which to embed the learner is another major impediment. In her opinion, it is the clinical sites that need greater scrutiny and that, if pushed toward modernization through assessment, could be the lever for greater, more relevant faculty development.

According to Holmboe, measuring practice characteristics unfortunately remains difficult, although the tools are improving, particularly with the introduction of the Patient-Centered Medical Home (PCMH). For example, the National Committee for Quality Assurance (NCQA) developed the NCQA 2011 Medical Home Assessment Tool, which providers and staff can use to assess how their practice operates compared with PCMH 2011 standards (Ingram and Primary Care Development Corporation, 2011). This tool looks mostly at structure and process, said Holmboe, but researchers are beginning to embed outcomes into the assessment, which might make it a good starting place for measuring practice characteristics that could then be applied in education. Another example Holmboe described is the Dartmouth Microsystem Improvement Curriculum (DMIC), a set of tools that incorporates success characteristics associated with high-functioning practices (The Dartmouth Institute, 2013). It uses action learning to instruct providers on how to assess and improve a clinical work environment in order to ultimately provide better patient care.
The Idealized Design of Clinical Office Practices (IDCOP) from the Institute for Healthcare Improvement is yet another tool (Institute for Healthcare Improvement, 2014). It attempts to demonstrate that, through appropriate clinical office practice redesign, performance improvements can be achieved that respond to patients' needs and desires. Goals of the IDCOP model are better clinical outcomes, lower costs, higher satisfaction, and improved efficiency (Institute for Healthcare Improvement, 2000). Holmboe acknowledged that these examples are clinically oriented, and he would be interested to learn about other models (although no other models were offered by the participants).

The Global Forum co-chair Afaf Meleis, from the University of Pennsylvania School of Nursing, asked how one might assess the social mission of health professional learners and design a tool that assesses cultural competence. Neither Norcini nor Holmboe knew of any good models for assessing either of these areas, but Holmboe repeated that work within social accountability and professionalism can be assessed only if learners actually experience a work environment that has role models in these areas, and it is the responsibility of the professionals to create these opportunities. Norcini agreed with Meleis, saying that cultural competence is a critical issue to assess. He added that it is absolutely essential that assessors scrutinize the methods used and the results obtained to ensure no one is disadvantaged for cultural reasons. Meleis encouraged Norcini to add a multicultural perspective to his list of criteria needed for a good assessment.

Forum member Beverly Malone from the National League for Nursing questioned the role of peer assessment in formative and summative assessments, given the inherent challenges associated with this type of assessment. Norcini responded that peer assessments are underutilized, particularly when it comes to the assessment of teachers, although a set of measures that includes peer assessment is being developed for assessing teachers. Norcini added that another way to assess teachers is to look at the outcomes of their students. Holmboe pointed out that one of the risks of using student outcomes to assess educators arises when the experiences are not well designed, so that interactions with peers, patients, or others are brief or casual. Attempting to assess learners' knowledge, skills, or ability in these types of brief and casual encounters is simply not useful, said Holmboe.

The next question changed the focus of the conversation from the learner to the patient: a patient encounter is a one-time event, so what methodologies are in place to ensure equivalence when incorporating the patient's very particular set of experiences? Norcini admitted that there are biases, so, in order to counter them, he samples the patient population of a provider as broadly as possible to include different patients on different occasions. In his opinion, there are at least three reasons for including patients in the assessment of providers:

1. Patients are reluctant to criticize their provider, so when they do, the provider has a major issue that should be addressed.
2. Patients can be used to compare providers with their colleagues.
3. Patient feedback makes a major difference in provider performance.
Another comment made during this question-and-answer session was a personal example from Forum member Joanna Cain, representing the American Congress of Obstetricians and Gynecologists and the American Board of Obstetrics and Gynecology, who described how her colleagues in the operating room (OR) use a time-efficient model of formative assessment. In their model, every operation ends with a "60-second" gathering of the team to discuss what did and did not go well. Holmboe applauded their use of formative assessment, but he cautioned against using time limitations as an excuse for not engaging in a complete assessment process. In his view, assessment is a professional obligation that demonstrates the return on investment. With that caveat, Holmboe reported that multiple 2- to 3-minute shared observations can be a rich source of information, and more opportunities for such assessments would be useful. In fact, as the OR example showed, quick assessments are attractive to many health professionals who keep busy schedules. Quick assessments can also drive culture as colleagues observe the value in this form of individual and peer assessment, information sharing, and team building.

In hearing the previous discussion, Jordan Cohen commented that self-reflection is a potentially important tool. Norcini half agreed: although self-assessment is a useful tool, most individuals are not good at it. Holmboe added that self-directed assessment, defined by Eva and Regehr (2011) as a global judgment of one's ability in a particular domain, is limited in just the way Norcini described. The real value is found when self-assessors seek comments and feedback from others, especially those outside their own profession or discipline (Sargeant, 2008). But despite the valuable information this form of assessment can provide, it is not used as often as other forms of assessment.

MAKING ASSESSMENT MEANINGFUL

Following the orienting discussion, Forum members engaged in interprofessional table discussions to delve more deeply into the value of formative and summative assessments. Each table in the room included Forum members, a health professional student representative, and a user of the health care system. The purpose of engaging students and patient representatives was to enrich the discussions at each table by infusing different perspectives into the conversations. Students identified by members of the Forum were invited to attend the workshop and represented the fields of social work, public health, medicine, nursing, pharmacy, and speech, language, and hearing. Forum member and workshop co-chair Darla Coffey from the Council on Social Work Education led the session. Coffey suggested that communication might be a focus of the discussions about assessment. One person from each group was designated to present a summary of the discussions that took place at his or her table to the entire group. The results of these discussions can be found in Table 1-2 (value of summative assessments) and Table 1-3 (value of formative assessments).
The responses were informed by group discussion and should not be construed as consensus.

TABLE 1-2 Summative Assessment Discussion Question: From the Perspective of Assessment of Learning, What Do You Think Makes a Good Assessment Tool/Measure?a

Underappreciated elements of a good assessment, with a description of each element and the workshop participant who raised it:

• Knowing the context (Carol Aschenbrener): Who the communication is with, who it is between, and for what purpose.
• Standardized metrics (Patricia Hinton Walker): Include assessment of mutual respect, empathy, compassion, and professionalism across the different professions.
• Standardized tools (Nelson Sewankambo): In direct observation assessments.
• Safety (Meg Gaines): Use clinical simulation to assess safety, but be cognizant of embedded biases.
• Hawthorne effect with assessments in simulation (Scott Reeves): People act differently knowing their performance is being watched.
• Identify the educational goals (Carol Aschenbrener): Align assessments with current educational goals.

a This table presents opportunities discussed by one or more workshop participants. During the workshop, all participants engaged in active discussions about opportunities. In some cases, participants expressed differing opinions. Because this is a summary of workshop comments and is not meant to provide consensus recommendations, the workshop rapporteur endeavored to include all opportunities discussed by workshop participants as presented by the group leaders, who were informed by the group discussions. This table and its content should be attributed to the rapporteur of this summary as informed by the workshop.

TABLE 1-3 Formative Assessment Discussion Question: From the Perspective of Assessment for Learning, What Do You Think Makes a Good Assessment Tool/Measure?a

Underappreciated elements of a good assessment, with a description of each element and the workshop participant who raised it:

• Role models in the practice environment (Bjorg Palsdottir): The hidden curriculum can undo all education.
• Safety (Susan Skochelak): Assess communication for safety rather than personality.
• Informed self-reflection (Eric Holmboe): Seek feedback from peers to inform self-reflection.
• Feedback (Cathi Grus): Needs to be clear, directive, and timely, and to assess team and individual contributions.
• Nonverbal communication (Cathi Grus): Assess beyond spoken communication.
• Bedside manner (Connie Mercer): Assess for empathy.

NOTE: Connie Mercer participated in a table discussion as a user of the health care system.
a This table presents opportunities discussed by one or more workshop participants. During the workshop, all participants engaged in active discussions about opportunities. In some cases, participants expressed differing opinions. Because this is a summary of workshop comments and is not meant to provide consensus recommendations, the workshop rapporteur endeavored to include all opportunities discussed by workshop participants as presented by the group leaders, who were informed by the group discussions. This table and its content should be attributed to the rapporteur of this summary as informed by the workshop.

In addition to the points listed in Tables 1-2 and 1-3, Richard Talbott, representing the Association of Schools of the Allied Health Professions, brought up the challenges, rooted in fear of reprisal, associated with assessing supervisors or others who may possess greater power than the assessor. He believes that the first goal within communication is to dismantle the power structure so anyone can feel comfortable speaking up.
In this type of setting, individuals may feel more comfortable giving honest assessments. This would include patients and caretakers, and it would create positive role models for learners to emulate. Bjorg Palsdottir then discussed the hidden curriculum and how negative role models have the ability to imprint negative experiences on learners regardless of the educational training received in the classroom. This comment was underscored by yet another Forum member, who cited an example of an aggressive attending physician. The program director confronted the physician about his aggression by emphasizing the risk to safety, saying, "If you are intimidating people, you are not a safe practitioner." One needs to understand how to navigate the potentially delicate situations created by uneven power structures when one is challenging the hierarchy, said the Forum member. It takes practice, but it can be done. Workshop planning committee member Meg

Gaines from the University of Wisconsin Law School took this point a step further, saying that it was an ethical imperative to speak up.

This topic resonated with the Forum's public health representative, John Finnegan from the Association of Schools and Programs of Public Health (ASPPH), who was reminded of the 2005 Joint Commission report that cited communication failures as the leading root cause of medical errors (Joint Commission Resources Inc., 2005). This does not mean the wrong information was always transmitted; rather, often nothing was said, due to a fear of retribution. Regardless of how well learners are trained, said Finnegan, dangerous situations leading to medical errors will persist if there is no support from the larger organizational structures emphasizing the need for a culture of safety.

Workshop co-chair Darla Coffey then asked the members, the students, and the patient representatives to consider how assessments could be a catalyst for change in the educational and health care systems. Much of the discussion revolved around the idea of better integrating education and practice; Forum member George Thibault from the Josiah Macy Jr. Foundation was a vocal advocate for rethinking health professional education and practice as one system. The Forum representative from pharmacy, Lucinda Maine, thought this could possibly be accomplished within her field by improving the assessment skills of its volunteer instructors and preceptors. In her view, this would make it easier to suggest changes in practice environments that could strengthen relationships along the continuum from education to practice. But, said Forum and planning committee member Carol Aschenbrener from the Association of American Medical Colleges, for there to be any benefits to health professional education, assessments need to be reviewed at least annually for their alignment with the predetermined educational goals and the set level of student achievement.
The representative from the Association of American Veterinary Medical Colleges, Chris Olsen, felt that for assessment to drive change, it would need to be part of the expectation; too often, assessments are carried out without taking the critical last step of using the information to drive change. Individual participants at the workshop provided their thoughts on how assessments in the context of education could drive changes in the practice environment. For example, Lucy Mac Gabhann suggested that in a community setting, student assessment might influence policy. And Forum member Jan De Maeseneer from Ghent University in Belgium thought that students exposed to resource-constrained neighborhoods would develop a sensitivity to social inequalities in health.

However, others expressed doubt that assessments could effect change when the organizational culture is based on hierarchy and imbalances in power structures that are perpetuated through the hidden curriculum and role modeling. Beverly Malone from the National League for Nursing (NLN) pointed out that such a culture puts patients at risk when open and honest communication is avoided out of a fear of reprisal. John Finnegan fervently agreed, saying that communication in an organizational setting is strongly influenced by that culture, and no matter how much one tries to educate around it, the larger organizational framework will prevail. That must change, he said; there has to be a safe culture in which communication is not feared in order for assessment to drive change in education and practice. Yet another view was expressed by George Thibault, who pushed for health professions education and health care delivery to be taken as one unit with one goal. In this way, the impact of assessments is
considered on both education and practice simultaneously. The educational reforms are informed by the delivery changes, and the delivery changes are informed by the education changes. If education and practice continue to be dichotomized, he said, valuable learning opportunities across the continuum will be missed. Workshop planning committee member Cathi Grus from the American Psychological Association commented on the opportunity for learning from assessments that are bidirectional. To her, such learning meant engaging patients in the design of the feedback provided to students, which could send a powerful message to the learner about what is important to the end user of the health system. What is important, said Grus, is that everyone involved understands the goals of the assessment in order to maximize its impact.

REFERENCES

Eva, K. W., and G. Regehr. 2011. Exploring the divergence between self-assessment and self-monitoring. Advances in Health Sciences Education 16(3):311-329.

Harris, P., L. Snell, M. Talbot, and R. M. Harden. 2010. Competency-based medical education: Implications for undergraduate programs. Medical Teacher 32(8):646-650.

Hirsh, D., E. Gaufberg, B. Ogur, P. Cohen, E. Krupat, M. Cox, S. Pelletier, and D. Bor. 2012. Educational outcomes of the Harvard Medical School-Cambridge Integrated Clerkship: A way forward for medical education. Academic Medicine 87(5):643-650.

IHI (Institute for Healthcare Improvement). 2000. Idealized design of clinical office practices. Boston, MA.

IHI. 2014. Idealized design of the clinical office practice (IDCOP): Overview. http://www.ihi.org/offerings/Initiatives/PastStrategicInitiatives/IDCOP/Pages/default.aspx (accessed January 6, 2014).

Ingram, D. J., and Primary Care Development Corporation. 2011. NCQA 2011 Medical Home Assessment Tool. http://www.pcdc.org/resources/patient-centered-medical-home/pcdc-pcmh/pcdc-pcmh-resources/PCDC-PCMH/ncqa-2011-medical-home.html (accessed January 6, 2014).

Joint Commission Resources Inc. 2005. The Joint Commission guide to improving staff communication. Oakbrook Terrace, IL: Joint Commission Resources.

Norcini, J., B. B. Anderson, V. Burch, M. J. Costa, R. Duvivier, R. Galbraith, R. Hays, A. Kent, V. Perrott, and T. Roberts. 2011. Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 conference. Medical Teacher 33(3):206-214.

Norris, T. E., D. C. Schaad, D. DeWitt, B. Ogur, and D. D. Hunt. 2009. Longitudinal integrated clerkships for medical students: An innovation adopted by medical schools in Australia, Canada, South Africa, and the United States. Academic Medicine 84(7):902-907.

Roediger, H. L., and J. D. Karpicke. 2006. The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science 1(3):181-210.

Sargeant, J. 2008. Toward a common understanding of self-assessment. Journal of Continuing Education in the Health Professions 28(1):1-4.

Sullivan, R. S. 1995. The competency-based approach to training. Washington, DC: U.S. Agency for International Development.

The Dartmouth Institute. 2013. Dartmouth microsystem improvement curriculum: Microsystem action learning series. http://clinicalmicrosystem.org/materials/curriculum/ (accessed January 6, 2014).

PREPUBLICATION COPY: UNCORRECTED PROOFS