The conceptual bases and empirical findings of the previous chapters indicate that eight intrapersonal competencies show some evidence of predicting success in higher education. In addition, as discussed further in Chapter 5, some intra- and interpersonal competencies have been identified as desired outcomes for 2- and 4-year college graduates. In this report, the committee recommends further research to better understand each group of competencies and their relationships to students’ college success, research that reflects one vital use of assessment. In addition, if research confirms these relationships, assessments of these competencies can provide data useful for informing college decision making and buttressing improvements in teaching, learning, and co-curricular support services. Such uses of assessment are the focus of this chapter. Following a brief discussion of the growth of assessment in higher education, the chapter describes the current and potential future uses of assessments of intra- and interpersonal competencies. The third section details the potential users of such assessments—the various stakeholders who may use and apply the resulting data on student competencies. The fourth section focuses on how these data can be used to support improvement and explores factors that facilitate or inhibit such uses of the data. Next is a set of cases illustrating how some colleges and universities are already using assessments of intra- and interpersonal competencies to enhance college readiness and success. The chapter ends with conclusions and recommendations.
In response to demands for accountability and improvement in higher education, many colleges, universities, and fields of study have begun to identify specific learning outcomes for all college graduates and to assess students’ attainment of these outcomes. Two recent surveys provide insights into this trend. Kuh and colleagues (2014) report findings from a survey of provosts at 1,200 regionally accredited undergraduate institutions, including a mix of 2- and 4-year public and private institutions; the survey received a 43 percent response rate. Large majorities of respondents reported that their institution had identified specific student learning outcomes and that assessment of these outcomes had increased since 2009. Hart Research Associates (2016) reports on an online survey of chief academic officers at 1,001 member institutions of the Association of American Colleges and Universities (AAC&U). The 325 respondents were representative of the association’s membership, including 2- and 4-year public and private institutions ranging from regional state colleges to research-intensive universities. Their responses were similar to those reported by Kuh and colleagues (2014). A large majority (85%) of responding institutions had established a set of common learning outcomes for all undergraduates, up from 78 percent in 2008. The learning outcomes included cognitive competencies (e.g., inquiry and analysis), intrapersonal competencies (e.g., lifelong learning), and interpersonal competencies (e.g., oral communication skills). The proportion of AAC&U member institutions that reported assessing these learning outcomes across the curriculum had grown to 89 percent, from 72 percent in 2009.
Interest in assessing postsecondary students’ learning outcomes continues to grow, driven partly by concerns of policy makers and the broader public about college effectiveness (Arum and Roksa, 2011; Carey, 2014; Guttenplan, 2014; Kaminer, 2013). These concerns often include questions about whether the outcomes of higher education justify its costs (Bennett and Wilezol, 2013). Assessment and program evaluation aimed at improving student learning have been a focal point of higher education policy since the 1980s and today constitute a movement of notable strength (Astin, 2012; Ewell, 2008).
The growing implementation of assessment tools and measures not only reflects postsecondary institutions’ growing interest in—and responsibility for—demonstrating the attainment of student learning outcomes, but also gives rise to a growing body of evidence on which to base institutional decision making. Furthermore, transparency in the documentation of learning outcomes has naturally fueled interest in how such outcomes can be improved and in what role data can and should play in improvement efforts.
Researchers describe a number of ways in which student assessment data can be used in higher education. Rowntree (2015), for example, distinguishes among the following uses: selection, maintaining standards (i.e., quality control), student motivation, feedback to students, feedback to instructors, and preparation for life. Falchikov (2013) discusses assessment for measurement, procedure, inquiry, accountability, and quality control. The broader literature contains several similar conceptualizations, reinforcing the idea that assessment in higher education serves a range of uses, goals, and stakeholders (e.g., Hughes, 2014; Lambert and Lines, 2013). This section examines four major, interrelated uses of assessment:
- selection and placement of individual students;
- formative improvement of local educational processes, practices, and programs;
- research and evaluation supporting knowledge generation; and
- summative accountability to external audiences.
Attention to two of these uses—formative improvement and accountability—is justified by their prominence in the higher education literature (Ewell, 2002, 2008, 2009). According to Ewell (2002), these two purposes entail distinct values and actors: improvement focuses on internal stakeholders (faculty, staff, administrators, and students within one institution) using data to change educational practices, while accountability focuses on documenting the level of institutional effectiveness for external audiences (accreditors, policy makers). The tension between these two uses is apparent in a recent survey on assessment of student learning outcomes (Kuh et al., 2014). Provosts from a national sample of public and private 2- and 4-year institutions indicated that their greatest worry related to assessment was that external accountability mandates stretched limited assessment resources and oriented institutional conversations about assessment toward compliance rather than improvement.
Nevertheless, improvement and accountability uses of assessment can be mutually reinforcing. Research shows that accountability serves to signal important educational goals and motivate action to improve teaching and learning (Dougherty and Reddy, 2011; National Research Council, 2011), so that assessments initially used for accountability may also serve to stimulate improvement. Based on the recent survey of provosts, Kuh and colleagues (2014) reached a similar conclusion, stating that assessment of student learning outcomes “is no longer primarily an act of compliance but—more appropriately and promisingly—is driven by a balance of compliance and institutional desire to improve” (p. 5).
Beyond these two uses, assessments for college selection and placement typically meet the needs of internal stakeholders (e.g., admissions officers, developmental education instructors) (see, for example, Atkinson and Geiser, 2009; Linn, 2009; Zwick, 2007). Finally, assessments for evaluation and research are designed to build generalizable knowledge for wider audiences about the nature of the competencies (e.g., Center for Advanced Research on Language Acquisition, 2016), their role in college success, and effective programs and strategies for supporting them.
Selection and Placement of Individual Students
College admissions officers use letters of recommendation as informal representations of intra- and interpersonal competencies, and formal measures also have been developed for this purpose. At both Oregon State University and DePaul University, the admissions process includes assessments of a set of competencies that includes planning and goal setting (behaviors related to conscientiousness) and positive self-concept (related to academic self-efficacy) (Sedlacek, 2006, 2011). Applicants to Oregon State University are required to submit six short essays related to these competencies (Oregon State University, n.d.) along with transcripts and standardized test scores, while applicants to DePaul University may choose not to submit standardized test scores and instead write short essays focusing on the same set of competencies1 (Cortes et al., 2014; Sedlacek, 2011). Over a 5-year period, Tufts University experimented with using an optional assessment of four clusters of cognitive, intrapersonal, and interpersonal competencies (one cluster reflected prosocial goals and values; Sternberg, 2010). In a final example, some graduate programs used scores from the Personal Potential Index (PPI) (Kyllonen, 2008), formerly offered by the Educational Testing Service (ETS), when considering candidates for admission.
Augmenting existing admissions tests, which focus primarily on cognitive knowledge and skills and which show disparities in scores across different racial/ethnic and socioeconomic groups, with such assessments of students’ intra- and interpersonal competencies could potentially improve admission decisions and reduce differences in scores across subgroups (see Klieger et al., 2014). However, research to date has not conclusively demonstrated that the use of assessments of intra- and interpersonal competencies can reduce racial disparities in selection (Foldes et al., 2008).
Assessments designed to measure intra- and interpersonal competencies also can be used to inform decisions about the selection and placement of students after they have been admitted. This use of assessment has grown in recent years, as researchers and test developers have created a variety of instruments targeting competencies that are thought, based on correlational research, to be related to student success and may or may not be malleable in response to interventions (e.g., Cengage Learning, 2015; Noel-Levitz, Inc., 2011; Pickering et al., 1992). These instruments typically assess a range of competencies, including a few of those identified by the committee, along with other competencies.

1 At both universities, the essays are scored based on a rubric.
Some institutions are using these new instruments to identify incoming students who are at risk of dropping out. The University of North Texas, for instance, required all incoming students in fall 2008 and 2009 to complete the ACT Student Readiness Inventory, now known as ACT Engage (ACT, Inc., 2012). The instrument measures some of the eight competencies identified by the committee (e.g., behaviors related to conscientiousness, aspects of sense of belonging) along with many other competencies (Le et al., 2005). The university used the results to select at-risk students for intervention, inviting them to a one-on-one meeting with a counselor to discuss the test results and, more important, to establish a relationship with the student and refer her or him to campus resources. An unpublished quasi-experimental study of the approach showed promising results: at the end of the first semester, 74 percent of the treatment group remained in good academic standing, compared with 63 percent of the control group (Tampke, 2011).
Allen and colleagues (2009) conducted a study illustrating the potential benefits of assessing intra- and interpersonal competencies. The authors modeled the effects on student persistence of assessing incoming students’ intra- and interpersonal competencies for the purpose of selecting at-risk students for intervention. They drew on prior studies that found that scores on assessments of certain competencies were correlated with indicators of college success (Robbins et al., 2004, 2006). They also drew on a prior meta-analysis of the effects of interventions on retention, clustering the interventions into four categories as follows: academic skill (r = 0.15), self-management (r = 0.29), socialization (r = 0.11), and first-year experience (r = 0.10) (Robbins et al., 2009).2 Using a “typical” institution with a first-year academic failure rate of 24 percent and a first-year dropout rate of 32 percent, they modeled different scenarios of the proportion of students identified as at risk and the effectiveness of the intervention to estimate the proportion of students who would be saved from dropping out by assessing these competencies. Positing that the effectiveness of an intervention potentially could be increased by targeting it to the students who need it most, as indicated by assessments of students’ intra- and interpersonal competencies, the researchers created scenarios in which the effect size of the intervention increased by 0 percent, 10 percent, and 20 percent as a result of the assessment. At the low end, the additional number of students saved by assessment of intra- and interpersonal competencies in addition to traditional academic predictors was 1.5 per 5,000, assuming that 10 percent of students were identified as “at risk” and received an intervention with an effect size of 0.20, and that the assessment produced no increase in effect size. At the high end, 140.5 students per 5,000 were saved, assuming that 50 percent of students were identified as “at risk” by the assessment and received an intervention with an effect size of 0.80, and that the assessment increased the effect size by 20 percent. The authors conclude that all of the factors that influence the practical benefits of measuring these competencies will vary across institutions, making it difficult for individual institutions to draw firm conclusions. Thus they suggest that colleges and universities conduct their own local research studies to understand the potential benefits.

2 This meta-analysis included interventions using a variety of research designs, in contrast to the committee’s approach of focusing only on experimental intervention studies with random assignment. Robbins and colleagues (2009) note three limitations of the study: (1) in several instances, they had to impute missing meta-analytic effects; (2) they calculated difference scores from different study designs on a common metric; and (3) they did not take into account institutional characteristics, system-level factors, and other variables that might influence student academic performance and retention.
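The general shape of this kind of scenario modeling can be illustrated with a brief back-of-the-envelope calculation. The function below is a hypothetical sketch, not the authors’ actual statistical model (which was considerably more elaborate), so it does not reproduce the published figures of 1.5 and 140.5 students saved; it simply shows how cohort size, the share of students flagged as at risk, the dropout rate among flagged students, and intervention effectiveness combine. All parameter names and the simple multiplicative model are assumptions for illustration only.

```python
def students_saved(n_students, pct_flagged, dropout_rate,
                   intervention_effect, assessment_boost):
    """Estimate additional retained students under a simple scenario.

    Hypothetical illustration only; parameter names and the
    multiplicative model are assumptions, not Allen et al.'s model.

    n_students:          size of the incoming cohort
    pct_flagged:         share of students flagged as at risk
    dropout_rate:        assumed dropout rate among flagged students
    intervention_effect: assumed proportional reduction in dropout
                         among treated students
    assessment_boost:    relative gain in effectiveness from targeting
                         the intervention using the assessment (e.g., 0.2)
    """
    treated = n_students * pct_flagged
    effect = intervention_effect * (1 + assessment_boost)
    return treated * dropout_rate * effect

# Low-end and high-end scenarios analogous in structure to those in the
# text: a cohort of 5,000, with an assumed 32 percent dropout rate among
# flagged students.
low = students_saved(5000, 0.10, 0.32, 0.20, 0.0)
high = students_saved(5000, 0.50, 0.32, 0.80, 0.2)
print(low, high)
```

Even this toy version makes the authors’ central point visible: the estimated benefit swings by orders of magnitude depending on local parameters, which is why they recommend institution-specific studies.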
Formative Improvement of Educational Processes, Practices, and Programs
Formative improvement is a process by which individuals (such as faculty and students), organizations (such as departments, counseling centers, and student affairs offices), or whole institutions use assessment data to drive strategic change efforts (Suskie, 2009). This improvement process is internal to the institution, engaging stakeholders who use assessment and/or evaluation data in real time to monitor and improve educational processes, practices, and programs in pursuit of desired goals. Local stakeholders often carry out a formative evaluation cycle that includes planning, gathering and interpreting evidence from student assessments, along with other evaluation evidence, and using the evidence to inform educational improvement (Banta and Blaich, 2011; Maki, 2010). Kuh and colleagues (2014) report that institutions frequently assess student learning outcomes for internal improvement purposes, including an institutional commitment to improving, faculty or staff interest in improving student learning, president and/or governing board direction, and concerns about the effectiveness and value of education.
One example of an improvement-oriented assessment process is that of Alverno College in Milwaukee, Wisconsin, which engaged administrators, faculty, and students in conversations over the course of several years to reimagine its curriculum. The college then adopted a unique curriculum focused on developing eight abilities, including interpersonal competencies, such as communication and social interaction; intrapersonal competencies, such as aesthetic engagement; and cognitive competencies, such as problem solving (Alverno College, 2016). Students’ development of these competencies within academic disciplines is assessed by both the faculty and the students themselves and recorded in a diagnostic digital portfolio. Although these assessment data ultimately are used for both formative and summative purposes (including graduation), the Alverno assessment process began as an effort to improve student learning on campus. Accordingly, Alverno uses assessment data in an ongoing process to improve educational practices and increase student learning, including learning in these intra- and interpersonal domains (Mentkowski et al., 2000).
These formative improvement processes often occur within regular learning cycles and are focused on determining what strategies work well in specific contexts for specific students and on what can be changed to work better. As an ongoing process concurrent with the implementation of instructional approaches or support services, assessment used for formative improvement contrasts with assessment for summative purposes, which is used to gauge the overall effectiveness of a course or program at its end or in comparison with other options. As Ewell (2008, p. 9) describes, formative “assessment is accomplished directly by practitioners (faculty and administrators) acting within the parameters of the teaching and learning process.” In addition, students can act directly as agents of their own formative improvement. Seal and colleagues (2015), for example, report on an intra- and interpersonal assessment designed for use by college students in their own self-development of four dimensions of competence: (1) self-awareness, (2) consideration of others, (3) connection to others, and (4) influence orientation. As noted above, improvement-oriented assessment processes focus on progress within a particular institution, program, or course or on the growth of a particular individual (Suskie, 2009), and are internal to the institution or program.
Research and Evaluation Supporting Knowledge Generation
In contrast to formative improvement, which typically involves institutions collecting student assessment data, analyzing the data, and applying them internally, the third major use of assessment—research and evaluation—focuses on collecting and analyzing data to generate knowledge for a wider audience. Academic researchers and/or institutional research officers collect student assessment data throughout or at the end of a course or program. They study these data using more sophisticated study designs and analysis methods than those typically applied for formative improvement purposes. Researchers who assess student learning outcomes ask such questions as, “How well are people and programs performing?” “What are the best practices and programs to implement for student success?” and “What generalizable knowledge can be developed to share with others?” Such research and evaluation can provide the faculty, staff, and students of colleges and universities with vital information to inform the design or selection and implementation of new strategies and programs that support students’ success in higher education.
Although assessment of intra- and interpersonal competencies for research and evaluation purposes is growing, the available research to date is limited, as noted in Chapter 2, and there have been calls for greater rigor. In a review of research on co-curricular interventions to develop motivation and other competencies thought to support and retain students in science, technology, engineering, and mathematics (STEM), for example, Estrada (2013) calls for the use of stronger research designs. Estrada suggests use of longitudinal designs and, when possible, randomized controlled trials to investigate the relationship between interventions focused on these competencies and student outcomes (e.g., cumulative grade point averages [GPAs], graduation rates). Further, as noted in Chapter 2 and as will also be reinforced in Chapter 5, additional research is needed to demonstrate more clearly which intra- and interpersonal competencies contribute most strongly to students’ persistence and success in college. And as noted in Chapter 3, further research and development of assessments also is needed to define these competencies more clearly and measure them more accurately and to guide assessment users in drawing valid inferences from the resulting data.
Summative Accountability to External Audiences
The committee defines accountability as a summative process through which a person, program, or institution is judged against some standard in a way that is comparable across individuals, programs, and/or institutions (Ewell, 2008; Suskie, 2009). In higher education, accountability processes often are developed and conducted by central bodies (such as accrediting agencies or federal or state government agencies) for such purposes as benchmarking of relative institutional performance; accreditation or certification; and decision making about such matters as rewards, sanctions, and funding (Alexander, 2000; Bender and Schuh, 2002; Burke, 2005; Dougherty and Hong, 2005). Many states, for example, have adopted performance-based accountability systems under which they allocate a portion of higher education funding to each institution based on measures of its success on such outcomes as graduation and retention (Dougherty and Reddy, 2011).
Most accountability occurs at the institutional level. Colleges and universities, for instance, are accountable to regional accreditation boards (e.g., the Middle States Commission on Higher Education) that are approved by the U.S. Department of Education (2016) to measure and approve institutional quality. The accreditation process is high stakes, used in determining whether students attending the institution may receive federal financial aid. According to the Middle States Commission on Higher Education (2015, p. 10), Standard 5 for Accreditation,
Assessment of student learning and achievement demonstrates that the institution’s students have accomplished educational goals consistent with their program of study, degree level, the institution’s mission, and appropriate expectations for institutions of higher education.
In the K-12 context, state educational content standards and performance benchmarks guide the interpretation of student assessment data. In higher education, by contrast, accrediting agencies allow institutions to self-define their own standards for student learning based on their unique missions. The Middle States Commission on Higher Education (2015, p. 10), for example, requires institutions to demonstrate
consideration and use of assessment results for the improvement of educational effectiveness . . . [for such uses as] improving key indicators of student success, such as retention, graduation, transfer, and placement rates.
Disciplinary accrediting bodies also use assessment for accountability at the program level. The Accreditation Board for Engineering and Technology (ABET), for example, has established student learning outcomes as part of its accreditation process. In engineering, programs are required to demonstrate that students have “an ability to function on multi-disciplinary teams” (Accreditation Board for Engineering and Technology, 2015, p. 3). This requirement has spurred research and development of assessments of teamwork competencies, as discussed in Chapter 5.
An individual college or university may also hold individual faculty members or counselors accountable for the progress of students under their care. This accountability may take the form of monitoring the extent to which an educator’s students or advisees pass courses, make regular progress toward a degree, or even improve specific competencies—although such approaches can encourage faculty resistance to assessment (Kuh et al., 2014; see further discussion below).
For students, accountability may take the form of graduation standards that require certain levels of credits, achievement, and performance on assessments of intra- and interpersonal competencies. In response to the ABET accreditation requirements, some undergraduate engineering students are currently receiving grades and course credit based on assessments of such competencies as ethics and teamwork (see Chapter 5 for further discussion). This form of accountability is also in place at a few institutions that have targeted the development of intra- and interpersonal competencies as key goals. Returning to the example of Alverno College, the college created a pervasive culture of improvement-oriented assessment on campus (including assessment of cognitive, intrapersonal, and interpersonal competencies) that was sufficient to fulfill accreditation goals (at the institution level) and graduation requirements (for individual students). Assessments of these competencies could be used more broadly by other colleges and universities to evaluate effectiveness at the institutional or individual level, to the extent that the assessments accord with institutional missions and the quality of their measures is adequate.
Stakeholders in higher education may focus on different cognitive, intrapersonal, or interpersonal competencies when planning to assess them for different purposes. Competencies assessed at the institutional level for accountability purposes need to reflect the broad mission and learning outcomes of the institution, but faculty members would place higher priority on assessing competencies that advance the learning goals of their specific courses (Suskie, 2009). Assessments of these competencies with respect to a single course may be idiosyncratic to that course, but institutional measures must be common to or at least possible to aggregate across courses, departments, and schools. The challenge of such aggregation of the assessment data for purposes of program or institutional accountability is a potential barrier to wider assessment of intra- and interpersonal competencies (Kuh et al., 2014).
An additional challenge to the use of intra- and interpersonal competency assessments for accountability is the possibility of unintended consequences from high-stakes assessments that may not take appropriate account of the experience of underrepresented student groups. When considering data from assessments of student engagement and effort as a measure of institutional effectiveness, for example, Dowd and colleagues (2011) observe that the assessments may unintentionally benefit campuses with greater percentages of racial majority students. Such assessments typically do not account for contextual pressures faced by racial/ethnic groups that are in the minority at predominantly white campuses. For example, institutions participating in the voluntary National Survey of Student Engagement (NSSE) ask students to report on their level of engagement in various “best practice” learning and personal development activities provided on campus. However, Dowd and colleagues (2011, p. 19) note that
engagement benchmarks are based on indicators of educational “best practices” without consideration of the racialized “bad practices” that minoritized students experience as harmful to their self-worth.
In essence, underrepresented minority students who experience a hostile climate may have to exert “intercultural effort” that detracts from their ability to engage in educational “best practices” (Dowd et al., 2011). Dowd and colleagues (2011) propose instead that measuring other constructs that account for the campus climate for students of color, such as sense of belonging, may better represent the educational experiences of these students and therefore reward institutions that serve and support them. In sum, any assessment of intra- and interpersonal competencies for accountability purposes must pay particular attention to equity concerns that may arise both in the constructs themselves and in their measurement.
Evidentiary Demands of Different Assessment Uses
It is important to recognize differences among the evidentiary demands of the four purposes of assessment discussed above. As noted in Chapter 3, the higher the stakes associated with the use of an assessment (e.g., for purposes of selection or accountability), the stronger the evidence must be that the results are valid for that purpose (Borden and Young, 2008; Dougherty and Hong, 2005; McCormick and McClenney, 2012; Suskie, 2009). If the assessment results will influence important decisions that will have critical consequences for individuals or institutions, there must be strong evidence that the assessment supports valid, reliable, and fair inferences to inform decision making. The evidence standards may be lower for other purposes, such as formative improvement or placement, where the consequences are less serious, particularly when multiple sources of evidence are in play and decisions are not as permanent or binding (see further discussion in Chapter 3). In a study of an assessment for admitted students, for example, Markle and colleagues (2013a) observe that the instrument’s intended purposes—selection of students for additional support and intervention after admission—are relatively low-stakes.
Furthermore, as discussed in the previous chapter, threats to validity can vary with an assessment’s purpose. For example, individuals may have little reason to fake their responses in a research setting, but a strong incentive to do so when taking a high-stakes test for college admission or other accountability purposes. Consequently, it is important to establish the validity of an assessment for each of its potential purposes. For formative improvement purposes, for example, evidence is needed to establish that a particular construct is relevant to improvement (e.g., sense of belonging or growth mindset matters in a particular context), that the construct is well measured (the assessment is valid, reliable, and fair), and that the construct is malleable by institutions.
A variety of higher education stakeholder groups potentially could benefit from access to high-quality data on competencies that are related to college success.3 The focus here is on six different groups that could be interested in the results of assessments of the eight competencies identified in Chapter 2 or other competencies shown to be related to college success. The groups have different needs, priorities, and expectations: families, K-12 educators, college and university faculty, college admissions and student affairs staff, college and university leaders and administrators, and policy makers and state and national regulators. In each case, this section briefly describes their chief concerns and how information from competency assessments might be relevant to them.
In the assessment context, students and their parents and relatives are concerned mainly about individual students’ preparation for college (i.e., development of the necessary skills and competencies) and admission to college (i.e., identification of the right institutions, demonstration of the needed skills and competencies). After admission, this group also is concerned about retention, graduation, and future employment. The common focus of families in all of these cases is individual-level performance, that is, the status or progress of a particular individual. Their interests encompass both one-time events (selection and/or placement, getting good grades) and longer-term development and support. In the former case, if higher education institutions were able to clearly articulate a set of desired student competencies based on research demonstrating their relationship to college success, then measures of individual status would assist families and students in demonstrating competencies needed for successful admission to these institutions. In the latter case, assessment information might help families and students focus on and develop the competencies students would need for admission and for their development and success in college.
Currently, many families support their students’ participation in high school sports, clubs, and leadership activities partly because college admissions officers consider such activities to be important indicators of an applicant’s intra- and interpersonal competencies.4 Students also demonstrate those competencies through statements of intent and biographical information submitted as part of their college applications. In an example of more structured assessment of one such competency, Harackiewicz and colleagues (2012) conducted an experimental study of high school students and their families in which parents were provided a brochure and access to a website outlining the importance of STEM education. Students whose parents received the information enrolled, on average, in about one more STEM class in their last 2 years of high school relative to students whose parents did not receive it. In this example, the parents supported their students’ perception of the “utility value” of STEM (i.e., their belief that science and mathematics are useful in everyday life and for a variety of careers) as a way to encourage their interest and persistence in STEM. The researchers assessed the participating students’ perception of utility value through a survey asking such questions as “In general, how useful is what you learned in math classes?” If assessments of this or other competencies were widely available, families might welcome the resulting information for use in college preparation and admission. Families also could use this information about their students’ intra- and/or interpersonal competencies in evaluating choices among institutions, selecting degree programs and specific courses and instructors, enhancing academic performance, identifying strategies that lead to persistence, and focusing on competencies that may lead to career and life success after graduation.

3 For an example of K-12 assessment stakeholders, see http://www.cal.org/flad/tutorial/impact/5stakeholdersmap.html [July 2016].

4 For example, see https://bigfuture.collegeboard.org/get-started/outside-the-classroom/extracurriculars-matter-to-you-and-to-colleges [July 2016].
K-12 teachers, counselors, and school and school district administrators may be interested in information about students’ intra- and interpersonal competencies at both the individual and aggregate levels. They may use this information to inform both short-cycle individual actions (e.g., guiding students in college selection) and longer-cycle programmatic changes, such as programs or strategies for helping students improve on these competencies. These stakeholders can use assessment information specifically to focus K-12 programs on (1) developing cognitive, intrapersonal, and interpersonal competencies; (2) improving the efficacy of their college and career counseling efforts; (3) identifying competencies that need further development in individual students or student groups; and (4) documenting the value of their diplomas in preparing students for future college and workforce success (Bialik et al., 2016; Dilley et al., 2015; Plucker et al., 2015; Trilling and Fadel, 2009). The growing number of schools that are currently focusing on “21st-century” competencies are already using assessments of these competencies for all four of these purposes, as demonstrated in a recent review of schools identified by the Partnership for 21st Century Learning as “exemplar schools” (Brown, 2014). In another example, a federally funded program operating across school districts in southern Texas used the ACT Engage assessment, described earlier in this chapter,
to identify traditionally underrepresented students low in the competencies thought to be related to college success for the purpose of targeting extra support to enhance their college readiness.
College and University Faculty
College and university faculty members are responsible for delivering and improving the education provided to students, both individually (through advising and mentoring) and in the aggregate (through teaching and program and course design). Thus, they are concerned both about formative improvement within ongoing courses and research and evaluation that can inform more fundamental redesign of teaching and curriculum materials. They could potentially use competency assessment data to help design activities to foster the development of intra- and interpersonal competencies in the context of academic disciplines, to improve the efficacy of mentoring and advising efforts, to monitor and improve program design, and to evaluate and enhance instructional strategies and activities. For example, faculty members have used assessments of mathematics self-efficacy to understand how teaching and learning activities can be redesigned to foster self-efficacy and potentially enhance achievement (e.g., Hall and Ponton, 2005; Peters and Hortecamp, 2010). In another example of potential formative use, a professor could choose a competency that is relevant to a specific course, assess students’ level of that competency at the beginning of the course, and plan for how to use the assessment data to modify the teaching approaches used in the course later in the semester to enhance that competency (Suskie, 2009). In addition, based on the suggestive evidence that academic self-efficacy and positive future self are related to college success (see Chapter 2), a faculty member might assess science self-efficacy and identity (i.e., imagining one’s future self as a scientist) for his or her advisees or students and then target mentoring activities or research experiences as a way to increase these competencies, with the goal of retaining the students in science fields. 
This potential use of assessment data is suggested by research demonstrating that mentoring and research experiences can foster students’ feelings of self-efficacy in science, identification with science, and commitment to science careers (Chemers et al., 2011; Estrada et al., 2011).
Faculty members also may assess these competencies for program and student accountability purposes. In response to the ABET accreditation requirement that programs demonstrate students’ attainment of intra- and interpersonal competencies including ethics and teamwork, engineering administrators and faculty members have begun to assess these competencies (Lattuca et al., 2006; see Use Case 3 later in this chapter). Some undergraduate engineering faculty already use data from assessments of ethical reasoning to assign student grades and award academic credit (National
Academies of Sciences, Engineering, and Medicine, 2016b; see Chapter 5 for further discussion).
Based on the national survey of provosts described above, Kuh and colleagues (2014) concluded that more faculty involvement is essential to sustain progress in assessment of student learning outcomes (including intra- and interpersonal outcomes). They found agreement among administrators, rank-and-file faculty members, and assessment scholars that faculty engagement in implementing assessment and interpreting the resulting data is essential to improve teaching and learning and enhance institutional effectiveness.
College Admissions and Student Affairs Staff
College admissions and student affairs staff focus their efforts on individual students or applicants, both in making important admission and placement decisions and in providing ongoing individual support and development. Student affairs staff also may be concerned with programmatic decisions that affect all students, selected subgroups of students (e.g., freshmen, at-risk students), or self-selected student groups (e.g., interest groups, student government representatives). As noted earlier in this chapter, both groups already are using assessments of various intra- and interpersonal competencies. At Oregon State University, for example, admissions staff members consider data on intra- and interpersonal competencies when selecting students for admission, while student affairs staff use the data for academic advising, student services, and on- and off-campus referrals (Sedlacek, 2011).
College admissions staff could use data from assessments of competencies clearly related to college success for purposes of selection or gatekeeping, while student affairs staff could use the data primarily for improvement and research and evaluation purposes. Although central administrators, admissions personnel, and faculty could clearly benefit from using such data to guide and improve college programs and instructional approaches, many may need to be convinced about the efficacy of these competencies in increasing college students’ success.
Student affairs officers in colleges and universities already deal extensively with students’ development of intra- and interpersonal competencies and could use high-quality assessment data for purposes of program evaluation, design of support services, and related activities. As noted above, a few universities are already administering tests of self-efficacy, motivation, and related competencies to incoming students in order to select at-risk students for intervention (Fain, 2015). Illustrating the potential value of such efforts, Lent and colleagues (2003) studied 328 students in introductory engineering courses and found that creating an environment that provides support
and removes barriers is associated with higher self-efficacy and (indirectly) with increased intent to persist in engineering.
College and University Leaders and Administrators
College administrators, such as presidents, deans, and department heads, are responsible for delivering and improving the education provided to students in the aggregate. Whereas college faculty care about short-cycle improvements in ongoing courses and course redesign to enhance student development and content knowledge, administrators are likely to be more concerned with long-term development of students’ intra- and interpersonal competencies and the relationship of these competencies to retention, graduation, and success in careers in the aggregate.5 In addition, assessment information on students’ intra- and interpersonal competencies could be useful to administrators for the evaluation and improvement of degree programs, courses, instructors, co-curricular activities, and equity. Administrators at the University of Nevada, Las Vegas, for example, used data from the Multi-Institutional Study of Leadership to examine student perceptions of the campus climate by race, and then used the results to develop new student programs designed to reinforce positive perceptions and reduce negative ones (Early and Blevins, n.d.).
Policy Makers and State and National Regulators
Policy makers with potential interest in using data on students’ intra- and interpersonal competencies include legislators, boards of education, accreditation boards, and disciplinary societies. Although this stakeholder category is quite diverse, all of these groups desire information about the quality of institutions of higher education. They ask such questions about these institutions as “Are the academic programs ‘good enough’?” “Are student graduation rates at expected levels?” “What can be done to improve quality (e.g., how can courses or programs be improved)?” and “What factors influence student success in the aggregate and for specific groups, and how are departments and programs working to address these factors?”
As noted above, accreditors are charged with ensuring institutional quality (U.S. Department of Education, 2016), and as part of this process, some are beginning to demand that colleges demonstrate that their students
5 Although evidence of differing administrator and faculty perspectives on assessment of intra- and interpersonal competencies is limited, a substantial literature exists on the differing perspectives of these two groups in general and with respect to other aspects of higher education, such as technology use (Campbell and Slaughter, 1999; Ehrenberg, 1999; Palm, 2006; Seidman, 1985; Stark et al., 1997).
are developing competencies beyond those that are purely academic (see, for example, Spurlin et al., 2008). In fact, the primary driver of additional assessment in higher education has been demands from accreditation agencies (Kuh et al., 2014). Further, state systems, accreditors, and professional associations and related consortia may be able to use measures of intra- and interpersonal competencies to improve institutions and advance system goals.
Examples of Uses of Assessment Data by Specific Stakeholders
Table 4-1 makes explicit the kinds of questions that different stakeholders could address through different uses of data on intra- and interpersonal competencies. This table is intended to illustrate but not to limit the range of possibilities for the use of such data. Note also that, although the table is organized by stakeholder group, the committee argues below that the use of data for institutional change and improvement is likely to be most effective when different stakeholders act in concert across levels, drawing on multiple sources of assessment data (Chatterji, 2005; Dowd and Tong, 2007).
Moreover, while this report focuses on the development and assessment of student competencies, students are not the only appropriate focus of assessment, evaluation, and change. Rather, intra- and interpersonal competencies develop and exist in interaction with local context that may support or diminish them in various ways. It is critical, for example, that programs and strategies designed to support students’ sense of belonging take account of the college context and climate. In essence, it is necessary not only to assess how students feel about the climate (whether they feel they belong), but also to evaluate whether the environment poses structural or normative barriers that inhibit a sense of belonging, particularly for underrepresented minority students (Hurtado et al., 1998). Similarly, the intra- and interpersonal competencies of faculty members, counselors, instructors, and other college staff influence students’ success. Data from assessments of these practitioners’ competencies could complement student data to further understanding of the relationship between the two and support the formulation of plans for improvement (Bensimon, 2007; Dowd, 2015).
Many researchers argue that uses of assessment and evaluation data for formative improvement are most important for improving college coursework and programs and increasing student success (Suskie, 2009). The Formative Improvement entries in Table 4-1 give examples of the kinds of formative questions that could be answered by specific stakeholders if they had access to assessment data on students’ intra- and interpersonal competencies. The first of these entries poses improvement questions that an individual student might ask (e.g., “How well am I improving my intra- and interpersonal competencies during college?”). The answers to these questions might lead the student to seek out counseling or advice from faculty or other mentors, participate in support services or extracurricular activities, consider new career options, or take other actions to meet his or her individual goals.

TABLE 4-1 Example Questions about Intra- and Interpersonal Competencies That Are Relevant to Different Stakeholders and Different Purposes

Students/Families
Selection and Placement: (Parents) To what extent does my child have the necessary competencies for success at this particular college/in this major?
Formative Improvement: (Students) How well am I improving my intra- and interpersonal competencies during college, and what do I need to continue to improve upon? What adjustments can I make in my intra- and interpersonal competencies so that I can succeed in college?
Research and Evaluation: (Researchers) How does family support during college influence students’ development of intra- and interpersonal competencies?
Accountability: (Students) Are there ways to document my learning of intra- and interpersonal competencies for job search portfolios and graduate school applications?

K-12 Educators
Selection and Placement: (High school counselors) To what extent does this student have the intra- and interpersonal competencies necessary for college success? Do all of our students have access to programming that fosters the development of intra- and interpersonal competencies?
Formative Improvement: (Grade 12 teachers) What can I learn about my students’ intra- and interpersonal competencies at the beginning of the year that will enable me to help them develop those competencies by graduation?
Research and Evaluation: (Teachers) Does a specific curricular intervention support students in developing the intra- and interpersonal competencies necessary for college success?
Accountability: (K-12 administrators) What level of intra- and interpersonal competencies do students need to develop during high school in order to be successful in college? How can we document that students from our school achieve this level of competency?

College/University Faculty
Selection and Placement: To what extent do students have the intra- and interpersonal competencies necessary for success in my course?
Formative Improvement: How are my students improving their intra- and interpersonal competencies in addition to learning course content in my class? If my students are not improving in these competencies by midsemester, can I help them in new ways to improve by the end of the term?
Research and Evaluation: Does explicit teaching of a certain competency lead to improved outcomes for underrepresented students in my course? Does explicit teaching of a certain competency lead to student success (grades, completion, content learning) in my course?
Accountability: How can I document students’ changes in these competencies during the course for the purposes of program accreditation? To what extent does my course contribute to the broader goals of my unit (program, school, college) regarding these competencies? Does attention to intra- and interpersonal competencies in my instruction improve the degree to which underrepresented students achieve program goals?

College and University Leaders and Administrators
Selection and Placement: What proportion of our students are deficient in these intra- and interpersonal competencies when they arrive on campus? What proportion of our underrepresented students are deficient in these competencies when they matriculate?
Formative Improvement: What extra support can we provide to students who are lower on these intra- and interpersonal competencies to help ensure that they are retained at and graduate from this college?
Research and Evaluation: Does proficiency in certain intra- and interpersonal competencies lead to student success at our institution? Does proficiency in certain intra- and interpersonal competencies lead to improved outcomes for underrepresented student groups attending our institution?
Accountability: To what extent do students improve in these competencies during their undergraduate career? Can we document this improvement for the purposes of institutional accreditation?

College Admissions and Student Affairs Staff
Selection and Placement: (Admissions staff) To what extent do incoming students have the necessary intra- and interpersonal competencies for success at this particular college?
Formative Improvement: (Student affairs staff) Can resident assistants provide extra support for students who are lower in certain intra- and interpersonal competencies? What aspects of resident assistant support are most helpful for creating a sense of belonging for underrepresented student groups?
Research and Evaluation: (Student affairs researchers) Which student affairs interventions are effective in helping students improve upon these competencies across institutional contexts?
Accountability: To what extent does the student affairs division contribute to students’ gaining intra- and interpersonal competencies that could be documented as meeting accreditation standards? Do all students have access to programming designed to improve intra- and interpersonal competencies? To what extent are all students accessing these interventions?

Policy Makers and Regulators
Selection and Placement: Not applicable
Formative Improvement: Do institutions have the necessary resources to assess students’ intra- and interpersonal competencies throughout the student experience? Do institutions have the necessary resources to address any deficiencies in students’ intra- and interpersonal competencies that are identified during the semester?
Research and Evaluation: (State consortia) How do state systems compare in terms of how students fare in these competencies? Are some state systems better than others at fostering specific intra- and interpersonal competencies among underrepresented student groups? If so, are they accomplishing this in replicable ways? (Disciplinary accreditors) Does proficiency in certain competencies lead to increased student success in disciplinary outcomes?
Accountability: (Regional accreditors) Can we include assessments of intra- and interpersonal competencies as a way for colleges to demonstrate that their students have developed proficiency in certain areas prior to graduation? (Disciplinary accreditors) What is the level of proficiency in intra- and interpersonal competencies that students in our discipline achieve prior to graduation? Is this level sufficient for the skills needed in related careers?
The middle rows of Table 4-1 illustrate the kinds of improvement questions that faculty and staff in different roles could answer with data from intra- and interpersonal competency assessments. For example, student affairs staff might use the data to determine the need for special supports or services, to focus on interventions that can strengthen specific competencies, or to assess whether existing strategies are having their intended effects or need improvement. Faculty, working alone or in disciplinary departments, might use the data to improve instructional strategies by integrating competency development into coursework. In general, policy makers may be less concerned with specific improvement decisions with respect to individual courses or strategies within their institutions, but their policies—for example, investment of available resources, incentives—can directly influence faculty and staff engagement in and commitment to formative improvement. Policy makers’ direct concerns are more salient in other rows of the table, such as in taking action through policy and practice to ensure that their institutions are accredited.
To this point, this chapter has described the intersection of assessment uses and stakeholders as important potential contexts for assessment practices that can support student success in higher education. The discussion has emphasized the distinctions among these contexts and provided questions that illustrate how the assessment of specific intra- or interpersonal competencies can provide important information to people (such as students and faculty) who need that information for specific purposes (such as course planning, course selection, counseling, and accreditation), ultimately improving higher education and STEM retention and success.
Using Assessment Data to Serve Multiple Purposes
As noted above, assessments of intra- and interpersonal competencies may be able to serve the needs of both internal and external stakeholders simultaneously. Assessments undertaken for accountability purposes—to document success to external stakeholders, such as accreditors—can generate useful information that internal institutional stakeholders can apply to guide improvement efforts. At the K-12 level, for example, new federal legislation (the Every Student Succeeds Act, Public Law 114-95) highlights the role of intra- and interpersonal competencies as indicators of school
quality in accountability systems. Theorists contend that, to the extent that improving these competencies is a valued outcome of elementary and secondary education, using results from assessments of these competencies as an indicator of school quality can lead directly to school improvement (e.g., Darling-Hammond et al., 2014).
At the higher education level, institutions’ use of assessment data for purposes of internal institutional improvement can support accreditation and accountability functions. Institutions that have used assessment data internally to strengthen teaching approaches and student learning outcomes have then drawn on these improvements to document for accreditors that they focus sufficiently on learning (Mentkowski et al., 2000). Institutions that effectively weave assessment of learning into their institutional culture begin with using assessment for improvement and end with using it for accreditation.
Moreover, external accountability pressures on institutions of higher education can fuel internal improvement processes. The learning outcomes assessment movement, which evolved in response to state and federal accountability demands (Campbell, 2015), provides an example. As discussed in Chapter 5, this movement has incorporated intra- and interpersonal competencies along with academic skills and knowledge as valued goals of higher education (Association of American Colleges and Universities, 2007). Likewise, the ABET accreditation requirement that undergraduate engineering programs demonstrate students’ acquisition of teamwork, ethics, and other intra- and interpersonal competencies has catalyzed an array of improvement efforts in instruction, curriculum design, internship programs, and assessment (e.g., Lattuca et al., 2006; National Academies of Sciences, Engineering, and Medicine, 2016b).
Despite the possibilities for cross-fertilization between the improvement and accountability paradigms, however, research suggests that the improvement paradigm results in broader buy-in from institutions of higher education and is more successful in promoting change than externally imposed accountability (Blaich and Wise, 2010; Ewell, 2008; Kuh et al., 2014; Suskie, 2009). Ewell (2008) and Blaich and Wise (2010) warn of difficulties in using data derived from an accountability framework for institutional change. For example, externally imposed assessment processes (such as accreditation and system-level accountability efforts) often elicit lower faculty investment in understanding or applying the resulting data as compared with processes focused on improvement. Change processes that are externally imposed also tend to focus on standardization across contexts instead of situating assessments within individual institutions, departments, and courses, which have their own educational goals and norms that are important for creating change on college campuses. In essence, it may be more efficacious to use improvement data to document a
culture of evidence-based improvement for accountability than to use accountability data to garner the buy-in necessary for systematic improvement in a college setting.
Barriers to Use of Assessment Data for Improvement
The committee found a number of examples of institutions using student assessment data for improvement purposes, but the prevailing reality is less encouraging. In working with dozens of colleges and universities that participated in the Wabash National Study of Liberal Arts Education, Blaich and Wise (2010) observed that “although many campuses gather mounds of evidence, few use it to get better at promoting student learning” (p. 77). Kuh and colleagues (2014, p. 4) similarly observe that “although more assessment evidence is now available, its use is not nearly as pervasive as it must be to guide institutional actions toward improving student outcomes.”
One barrier to the use of assessment data to support improvement is that some faculty, staff, and administrators are unfamiliar with basic principles of educational assessment. They may lack access to and familiarity with high-quality data systems, and they have little experience with using assessment data—or other types of data—to inform and improve teaching and student support practices. This group of stakeholders may be sophisticated in other forms of research but less familiar with the contexts for use of assessment data in higher education. As a result, they will need a great deal of support, including training in how to interpret and use the data, when provided with assessment data on students’ competencies. This challenge is illustrated by a recent national survey in which provosts at 2- and 4-year institutions were asked about assessment of student learning outcomes (Kuh et al., 2014). When asked how the use of assessment could be advanced at their institutions, 64 percent called for more professional development for faculty, 63 percent said they wished that more faculty were using assessment results, and 56 percent responded that additional financial or staff resources were key. One provost at a master’s degree–granting institution commented (Kuh et al., 2014, p. 28): “Many faculty struggle with determining how to conduct a proper assessment and then how to use the results.” Additionally, some stakeholders may be resistant to the use of data from assessments of students’ competencies given norms, assumptions, and values about assessment (Blaich and Wise, 2010). Provosts reported, for example, that progress on assessment of student learning outcomes may be slowed by faculty members’ worry that the results will be used in performance reviews (Kuh et al., 2014).
That different stakeholders have different information needs and may be interested in different constructs, measurement methods, and questions (Campbell, 2015; Ewell, 2008) also can be a barrier to effective use of assessment data. In efforts to promote institutional improvement, it is rarely the case that one stakeholder group can opt for one assessment for one purpose. Instead, the incorporation of any new assessments, including those of intra- and interpersonal competencies, will have to be negotiated and coordinated among many stakeholders, each with different interests and priorities. Higher education leaders and administrators will likely need to work collectively with faculty, student affairs staff, and institutional research and assessment experts to (1) identify the competencies most germane to the overall institutional mission, (2) agree on measures that will be acceptable to all stakeholders in terms of quality and practicality, (3) collect the data, and then (4) allocate resources and develop processes for changing practices across curricula and support structures (Dowd and Tong, 2007). Institutions will need to plan ahead to build campus support and capacity across multiple stakeholders (e.g., administrative leaders, faculty, student affairs staff, and students) (Blaich and Wise, 2010). Focusing specifically on assessment of intra- and interpersonal competencies among entering undergraduates, Allen and colleagues (2009) call for such an approach. They argue that the value of measuring these student competencies depends on the effectiveness of institutional support programs for high-risk students and recommend research into integrated systems for identifying and intervening with such students.
Supporting Use of Assessment for Institutional Improvement
The committee’s review of research in higher education and organizational change indicates that any change in higher education, including increased use of assessments (whether of one or more of the eight competencies identified in Chapter 2 or other cognitive, intrapersonal, or interpersonal competencies), requires careful planning and consideration of multiple factors, as discussed below.
Motivating Stakeholders to Assess Intra- and Interpersonal Competencies
A robust literature base on organizational change culled from several disciplines (e.g., economics, sociology, psychology, anthropology, business, and education) suggests that organizations can play a role in the motivation of individuals. Further, there is a growing body of literature that examines organizational change specifically on college campuses (e.g., Bess and Dee, 2008; Birnbaum, 1989; Kezar, 2001). These organizational change and motivation theories may be relevant as institutions consider how to help faculty implement and use assessments of intra- and interpersonal competencies. Although this literature base is robust and varied, certain theories
have been directly mirrored in what practitioners in higher education have done to foster assessment use on college campuses.
Vroom’s (1964) landmark expectancy theory, drawn from management literature, proposes that incentives can motivate individual behaviors under certain conditions: when individuals can expect to achieve high performance (expectancy), when they can depend on a reward for achieving high performance (instrumentality), and when they care sufficiently about the reward (valence). Other economic theories describe how financial incentives can motivate desired behaviors but can also produce unintended behaviors when the incentives are tied to the wrong measures (Gibbons, 1998; Williamson, 1975, 1985). Beyond incentives, anthropological theories applied to organizations (broadly) and colleges (specifically) have found that institutional and disciplinary cultures play a strong role in individuals’ adopting certain behaviors (Bess and Dee, 2008; Schein, 1992).
Certain strategies that practitioners have used to implement assessment practices successfully on different college campuses appear to mirror the organizational change and motivational theories. For example, Kuh and colleagues (2014) observe that faculty may not have sufficient understanding of assessment practices to expect a payoff for these behaviors (i.e., expectancy in Vroom’s theory). Given that assessment practices often are not tied to reward structures in higher education, faculty also may not see instrumentality in practicing assessment. Thus if there were substantial evidence linking the eight intra- and interpersonal competencies described in this report to college success, college administrators could look to incentives, such as changing reward structures, to motivate faculty to adopt such assessment practices. However, the survey by Kuh and colleagues (2014) also revealed that one barrier to faculty use of assessments was faculty members’ concern that assessment results would be linked to evaluation. Therefore, any incentives created (including use of assessment in reward structures) would need to be carefully balanced to address such faculty concerns.
Integrating Assessment with Institutional Culture
Beyond incentives, the literature on higher education assessment suggests that embedding a culture of assessment throughout the fabric of the institution may foster buy-in from faculty (Dowd and Tong, 2007; Dwyer et al., 2006). Kuh and colleagues (2014) argue that “culture, climate, context, and language all matter deeply.” Long-standing cultures and more immediate climates that emphasize innovation, improvement, and evidence-based decision making all contribute to the use of assessment results by faculty. For example, institutions might link assessment use to already established internal procedures. Given the competing priorities in faculty roles, if assessment
practices are integrated into the work of the institution (e.g., through teaching and learning centers) and integrated into the established commitments of faculty (e.g., in curricular reform initiatives), faculty may view assessment practices as more integral and less burdensome. This literature and other research suggest that college and university leaders and administrators are unlikely to accomplish the goal of using assessment data to support long-term improvement without the active involvement and commitment of multiple stakeholder groups.
A burgeoning literature describing the conditions under which assessment processes lead to institutional improvement in higher education underscores the importance of cross-stakeholder coordination and collaboration. Although this literature is based on research on the use of assessments to measure cognitive competencies, its principal findings appear to be relevant to the assessment of intra- and interpersonal competencies as well. Effecting change in higher education requires acting on several institutional levels concurrently toward similar goals, triangulating multiple forms of data, and using results in feedback loops serving the needs of participating stakeholders (Dowd and Tong, 2007; Ewell, 2008).
The effective collaboration of multiple stakeholders, including students, appears to be key to the success of improvement efforts: when the values and goals of the institution and stakeholders are incorporated and when relevant stakeholders actively support the assessment and improvement efforts, greater institutional improvement is observed (Blaich and Wise, 2010; Cistone and Bashford, 2002; Dowd and Tong, 2007; Suskie, 2009). Local structures and processes that involve stakeholders in developing and agreeing on assessment goals, purposes, and procedures and that provide time for stakeholders to fully discuss the assessment results can help build ownership and understanding and combat resistance. The College at Brockport example described later in this chapter illustrates this process. A multistakeholder committee took time to fully understand the theory behind prosocial leadership, select appropriate measurement instruments, and ultimately make changes to the institution’s educational programming based on the assessment data.
The inclusion of practitioners in the process appears to be particularly important, as they are the ones who ultimately will be responsible for change in the institution or program (Dowd and Tong, 2007; Dowd et al., 2011; Middaugh, 2009; Welsh and Metcalf, 2003). To promote a culture of assessment use, especially for improvement, Baker (2012) recommends the following elements: assessment champions, central committees or other capacity to guide effective assessment and use of its results, disciplinary departments or programs that can serve as centers of assessment excellence, and institutional support for improving faculty and staff assessment practices.
Some researchers have suggested that institutions conduct advance planning for how assessment data, once collected and analyzed, will translate into and support subsequent change and improvement. They also propose that institutions commit themselves fully to the goals of their assessment. Beyond simply disseminating assessment results, they argue, institutional leaders should set aside appropriate resources for the development of pathways that will facilitate discussion of the data and the formulation of options for actually using the data to change practices and improve student competencies (Blaich and Wise, 2010; Cistone and Bashford, 2002).
As noted earlier, a recent survey of the status of assessment in U.S. colleges and universities found increasing attention to the assessment of student learning outcomes (Kuh et al., 2014) and progress in supporting the use of assessment. Responding provosts cited explicit institutional policy on assessing student learning, faculty engagement and involvement in assessment, and increased centralized capacity for assessment work as the most prevalent and important supports for assessment. Relatively less prevalent were student participation in assessment activities and significant involvement of student affairs staff. Moreover, nearly two-thirds of the responding provosts identified as pressing needs more professional development in assessment for faculty and more faculty using assessment results. Case studies, for example, reveal the help faculty and staff may need in mapping the connections between their priority questions and available assessment data (Blaich and Wise, 2010).
The use cases presented in this section illustrate how a variety of institutions have taken a systematic approach to effective use of data from currently available assessments of intra- and interpersonal competencies for different purposes in local contexts. The use cases also offer ideas for potential additional uses of assessment data.
Use Case 1: Assessing Competencies for Placement and Intervention
The University of New Mexico is a flagship research university that enrolls many low-income, first-generation students, some of whom are underprepared for college. College leaders wanted to be able to determine which admitted students would need the most support to succeed and graduate. They recognized that, in addition to academic skills, a student’s level of certain intra- and interpersonal competencies could be an important indicator of whether the student would be at risk for dropping out. To identify and provide support for the most at-risk students, administrators decided to
supplement information from traditional academic measures (high school GPA, SAT, ACT) with SuccessNavigator, an ETS test that measures four clusters of academic and intra- and interpersonal competencies: tools and strategies for academic success, commitment, self-management, and social support. As noted in Chapter 3, the instrument measures a few of the competencies identified in Chapter 2, along with a range of other competencies. For example, the “tools and strategies” cluster includes measures of organization, a behavior related to conscientiousness, and “self-management” includes measures of academic self-efficacy. Correlational analyses of data gathered from 2- and 4-year institutions showed that test scores predicted GPA, persistence, and course grades, even after controlling for standardized test scores and high school GPA (Markle et al., 2013a).
In fall 2015, University of New Mexico administrators required all incoming students who were first-generation students, scholarship recipients, STEM majors, or athletes (approximately 1,500 in total) to take the test (Fain, 2015). The university decided to include athletes because they must balance the time demands of training and competition with their academic work, with high stakes attached—the possibility of losing their scholarship if they fall behind in their classes. The office of student affairs used the test results to offer students identified as at risk extra academic supports, such as regular meetings with tutors or academic advisers. The student affairs office also shared the test results with advisers for their use in recommending specific courses for particular students and developing and managing the “success plans” that some students were required to create. Based on individual students’ assessment scores, student affairs staff required some students to meet monthly with their advisers to review their success plans and others to meet with their tutors regularly to check on their grades (Fain, 2015).
This use case demonstrates the connection among administrative priorities, institutional context, and the role of intra- and interpersonal competencies within broader student-success initiatives. The University of New Mexico’s mission to serve a diverse student body with varying degrees of preparation for college provides the backdrop for how the assessment was used—in this case, for student risk assessment and selection for special support and advising programs, with the goal of improving retention and graduation rates. Extending beyond that use, the University of New Mexico or other universities could use this or a similar test for another purpose—evaluation. In this case, the university would randomly assign new students to an experimental or a control group. The experimental group would take the test and be assigned to special support and advising services, while the control group would not take the test or be assigned to these services. After the first semester or the first year, researchers would gather evidence of success for both groups (e.g., GPA, fraction returning for the second semester
or year) and analyze the results to evaluate the effectiveness of the intervention (including the use of the assessment).
In a related example, Iowa Western Community College used the same test to inform placement of students in developmental courses (Fain, 2015). Although 80 percent of Iowa Western students were placed in developmental math classes, college leaders hypothesized that students with strong motivation and related intrapersonal competencies might succeed in college-level math courses despite their low placement test scores. Using this assessment to complement placement tests and high school GPAs allowed the college to bypass remediation for these students (Fain, 2015). Similarly, an observational study of 3,647 students at four campuses within a large, urban community college system found no statistically significant difference in passage rates for students who were placed in college-level mathematics courses based on their academic placement scores alone and those whose placement was accelerated based on their SuccessNavigator test scores (Rikoon et al., 2014). (Accelerated students were within 1 standard deviation below the cutoff score recommended for college-level course placement, and comparison students were those just above the cutoff score.)
Use Case 2: Accountability Driving Assessment for Improvement at Pennsylvania State University
External accountability pressures can sometimes catalyze the use of assessment data for improvement in courses and programs of study. In 1996, ABET introduced the new Engineering Criteria 2000 (EC2000), requiring undergraduate engineering programs to demonstrate students’ progress toward specific learning outcomes, including intra- and interpersonal competencies (e.g., the ability to work in interdisciplinary teams). In 2002, the Pennsylvania State University (PSU) engineering college hired a team of researchers (Lattuca et al., 2006) to evaluate the effects of the new criteria on student learning outcomes and educational and organizational policies and practices. The authors used a pre-post design to gather information from 1994 program graduates (before the new criteria were in place) and 2004 graduates (after the new criteria were in place); they also gathered information from administrators, faculty, and employers using one-time surveys that asked about perceptions of change following implementation of the new criteria. Based on these various sources of evidence, the authors found that engineering programs placed greater emphasis on learning outcomes (also referred to as engineering professional skills) and active learning, rather than simply lecturing, than they had prior to the new accreditation requirements. Surveys also identified high levels of faculty support for continuous improvement. More than 75 percent of department chairs estimated that the majority of their faculty members supported continuous
improvement efforts, and more than 60 percent of chairs reported moderate to strong support for the assessment of student learning. Faculty corroborated this finding: nearly 90 percent of the faculty respondents reported some personal effort in assessment, and more than half reported moderate to significant levels of personal effort. For the most part, moreover, faculty members did not perceive their assessment efforts to be overly burdensome; nearly 70 percent described their level of effort as “about right.”
These changes in teaching practices and curriculum improvements appeared to influence student learning outcomes positively. Compared with their 1994 counterparts, and after taking differences in graduates’ and institutional characteristics into account, 2004 graduates reported the following:
- more active engagement in their own learning,
- more interaction with instructors,
- more instructor feedback on their work,
- more time spent studying abroad,
- more international travel,
- more involvement in engineering design competitions, and
- more emphasis in their programs on openness to diverse ideas and people.
Although they tended to be small, 7 of 10 statistically significant differences between pre- and post-EC2000 graduates persisted even after adjusting for an array of graduate and institutional characteristics.
Use Case 3: Assessing Prosocial Values for Improvement at the College at Brockport6
In 2009, the College at Brockport, a college in the State University of New York (SUNY) system, began discussing how to develop student leaders on campus. Brockport is a residential, public 4-year institution with approximately 7,000 students. College leaders convened a committee of faculty, students, and staff to develop a certificate program in leadership. To understand leadership on the campus, the committee decided to use the Social Change Model of leadership development, focusing extensively on prosocial goals and values. The committee then decided to collect data so they could better understand the educational practices that facilitated prosocial leadership development on this specific campus. As the basis for its assessment, the committee chose to participate in a national survey, the
6 See http://leadershipstudy.net/reports-publications/#campus-spotlight-series [August 2016].
Multi-Institutional Study of Leadership, mentioned earlier, that uses the Social Change Model of leadership development.
Analysis of the survey results led administrators to believe that several activities, including community service, internships, mentoring relationships, and attendance at leadership conferences, facilitated the growth and development of student leadership on their campus. As a result, they developed a new leadership program structured around these existing activities and allocated appropriate resources to allow more students to engage in the new program, including the component activities. Beyond using the campus-specific survey data to develop the leadership program, they used the national data from this same survey to benchmark the program’s success against that of other programs. They found that students involved in their program participated relatively more in “high-impact practices” in leadership development. The institution viewed this as a great success and integrated the leadership program into its strategic plan.
This use case demonstrates how leaders in higher education can use assessment data in multiple ways to inform multiple stakeholders for several interrelated purposes. The data collected were initially used largely for formative purposes, for improvement—specifically by the faculty and student affairs staff who were developing and administering the new leadership program. These stakeholders were interested primarily in data that could help them identify those conditions and educational opportunities that facilitated leadership development on Brockport’s campus. To these individuals, the context of the campus mattered in understanding how the survey results would influence the development of specific supports for student leadership experiences. Secondarily, as noted above, central administrators used the data to benchmark the success of the program against that of other programs nationally in a summative way for integration into broader strategic plans. This benchmarking capability is a benefit of using assessment data that apply locally but link to a national sample. This case also highlights how colleges see intra- and interpersonal competencies themselves as important goals for students. In essence, Brockport did not assess prosocial leadership development only because it leads to graduation, but because it is of value for Brockport’s students. This view resonates with the outcomes movement in higher education discussed in Chapter 5, which focuses on developing such competencies as teamwork, ethical responsibility, and leadership as critical outcomes for all college students.
Addressing its charge to prioritize the uses of assessments of intra- and interpersonal competencies, the committee reviewed research on how higher education institutions are using assessments of cognitive, intraper-
sonal, and interpersonal competencies. Based on this review, it identified four major uses in higher education:
- selection and placement of individual students;
- formative improvement of local educational processes, practices, and programs;
- research and evaluation supporting knowledge generation; and
- summative accountability evaluation of programs and institutions.
Assessments of intra- and interpersonal competencies for these four purposes are carried out by a variety of stakeholders, including families, K-12 schools, faculty members, college administrators, accreditors, and state and federal policy makers. To understand how these stakeholders presently and potentially could use data resulting from these assessments, the committee reviewed relevant higher education literature and reports on current practice.
Assessment Processes Supporting Student Success
Individual stakeholders in higher education have differing needs for data resulting from assessments of intra- and interpersonal competencies at different levels of aggregation, depending on the immediacy of those needs, the purposes to be served by the data, and their assessment-related knowledge and skills (Blaich and Wise, 2010; Dowd and Tong, 2007; Ewell, 2008). These variations in uses of the data necessitate different measures, different levels of evidence, and different kinds of buy-in for the assessment process and its uses. It is important to consider these contextual aspects of the assessment process when implementing an intra- and interpersonal competency assessment in practice.
Conclusion: Assessments of intra- and interpersonal competencies in higher education are most valuable for supporting student success when their selection, design, analysis, and interpretation are guided by stakeholder information needs, intended uses, and users.
RECOMMENDATION 11: Leaders in higher education should select, design, analyze, and interpret data from assessments of intra- and interpersonal competencies based on stakeholder information needs, intended uses, and users.
The research literature contains convincing evidence that institutions of higher education can benefit from using assessments for both institutional improvement and accountability purposes, and these uses can ultimately
be mutually reinforcing. However, assessment processes that emphasize improvement tend to garner more institutional support, including faculty buy-in, relative to those emphasizing accountability (Dowd and Tong, 2007; Ewell, 2008). Indeed, some administrators are concerned that external accountability mandates may focus institutional conversations about assessment on bottom-line compliance rather than institutional improvement, especially given limited assessment resources (Kuh et al., 2014). College stakeholders also tend to be more receptive to assessment processes when they are internally derived, sensitive to specific institutional and disciplinary contexts, and driven by a belief that the assessment can serve the goal of improving student learning outcomes. Therefore, institutional improvement requires planning for needed resources and putting systems in place to support moving assessments from data collection to improvement processes.
Conclusion: Assessments are more likely to be implemented and used by stakeholders to improve student success when they are motivated by internal institutional improvement purposes than when they are motivated by accountability purposes.
Research has highlighted the need for multiple stakeholders across levels (i.e., staff, students, faculty members, administrators) to work together in an assessment process to effect pervasive change on a college campus (Dowd and Tong, 2007). The example of the College at Brockport described above illustrates how assessment results can be used to catalyze an improvement process in which multiple stakeholders work together toward a shared goal (in this case, improving students’ leadership abilities). Complementary local- and institutional-level applications of the assessment data made it possible to incorporate the leadership goals that motivated the process into broader institutional strategic initiatives, ensuring that the improvements realized were pervasive across the institution. In the University of New Mexico example, advisers and student affairs staff used assessment data individually with students to tailor support services, while central administrators saw the data as useful for broader strategic initiatives aimed at retaining diverse and underprepared students to graduation.
Conclusion: Assessments are more likely to contribute to student retention and completion if efforts to use their results involve stakeholders at multiple levels of the organization (e.g., student support services, faculty, diversity officers, administrators) as opposed to involving individual stakeholders acting alone.
Support for Stakeholders’ Assessment Capacity
Administrators and faculty in institutions of higher education may not have specialized training or expertise in educational assessment with regard to instrument design and selection, test administration, data analysis, or the best uses of assessment data. Although some stakeholders on campus, such as institutional researchers and assessment experts, can help educate the broader campus community about assessment, they may not be sufficiently familiar with the specific issues involved in assessing intra- and interpersonal competencies. Therefore, training targeted at specific stakeholders may be necessary for the full value of these assessments to be realized. In addition, although data on intra- and interpersonal competencies can potentially add substantial value to efforts to enhance the success of underrepresented groups, faculty may not be familiar with this particular use of the data.
Conclusion: Some stakeholders in higher education will require support and training to develop the knowledge and skills needed to select, use, and interpret data from assessments to improve student success in higher education. Such training can also help stakeholders understand how these assessments can contribute to the success of underrepresented student groups in particular and how to engage stakeholders who are resistant to assessment in general.
Research has yielded preliminary evidence of the importance of the eight competencies identified in Chapter 2 to success in college, and case studies of the use of cognitive assessment data by institutions of higher education for purposes of institutional and instructional improvement also are widely available (e.g., Astin, 2012; Blaich and Wise, 2010; Borden and Young, 2008). By contrast, evidence on how data from assessments of the eight competencies or other intra- or interpersonal competencies can be used for these purposes is relatively sparse. As additional assessments of these competencies take place on college campuses, they may yield a more robust understanding of how such assessments can lead to improvement within specific institutional, disciplinary, and student contexts.
Conclusion: Limited evidence is available from an organizational science perspective on how stakeholders in higher education can use data on intra- and interpersonal competencies for improvement and evaluation purposes.
RECOMMENDATION 12: To broaden understanding of how assessments of intra- and interpersonal competencies can lead to greater student retention and success, institutions of higher education should study and report on their use of these assessments for improvement purposes (e.g., enhancing student support services, developing underrepresented students’ sense of belonging, improving courses, identifying effective programs).