Chapter 2 of this report focuses on competencies showing some evidence of a relationship to undergraduate persistence and success, as measured by such indicators as persistence from year to year, grade point average (GPA), and graduation. This chapter responds to the growing interest among higher education policy makers in assessing and developing intra- and interpersonal competencies to prepare students for professional employment and community and family life after college. It identifies and defines a set of such college outcomes, examines their relationship to college completion, and considers the opportunities and challenges related to assessing them.
As discussed in Chapter 4, demands for accountability and improvement in higher education have led many colleges, universities, and fields of study to identify specific learning outcomes that their graduates should achieve and to assess students’ development of these outcomes. As part of this trend, higher education leaders have identified intra- and interpersonal competencies as important college outcomes.
Frameworks of College Outcomes
The current interest in learning outcomes builds on work done over the past two decades to expand the long-term goals of higher education
to include capacities that graduates will need for success in life and work. The Accreditation Board for Engineering and Technology (ABET), the accrediting organization that oversees engineering, computer science, and engineering technology programs, pioneered these efforts in 1996 when it adopted the Engineering Criteria 2000 (Accreditation Board for Engineering and Technology, 1996). Responding to employer concerns about graduates’ lack of professional skills, ABET moved from basing its accreditation on inputs (e.g., courses offered, student supports) to basing it on learning outcomes (Lattuca et al., 2006). These criteria, which are still in effect today (Accreditation Board for Engineering and Technology, 2015), specify 11 learning outcomes and require programs to assess and demonstrate students’ progress toward each. The outcomes include the following intra- and interpersonal competencies:
- an ability to function on multidisciplinary teams,
- an understanding of professional and ethical responsibility,
- an ability to communicate effectively, and
- a recognition of the need for and an ability to engage in lifelong learning.
Several other individuals and organizations have since developed frameworks of college outcomes. Oswald and colleagues (2004) analyzed themes across a wide range of college mission statements and stated institutional objectives to identify 12 dimensions of college success, organized around three high-level categories:
- intellectual behaviors;
- interpersonal behaviors (e.g., communicating and dealing effectively with others, multicultural appreciation); and
- intrapersonal behaviors (e.g., ethics, career orientation, perseverance).
Around the same time, in a separate effort, leaders from 15 community colleges agreed on an initial framework of student competencies for the knowledge economy that included a set of interpersonal skills and a set of personal skills (Miles and Wilson, 2004).
Building on and extending these efforts, in 2005 the Association of American Colleges and Universities (AAC&U) launched the Liberal Education and America’s Promise (LEAP) initiative to define more clearly and promote a 21st-century liberal education for all students, in both 2- and 4-year institutions, regardless of their field of study. Based on dialogue with employers and colleges and universities, analysis of reports from the business community and accreditation agencies, and research on how people learn (National Research Council, 2000), the initiative developed a framework of 16 “essential learning outcomes.” This framework is organized around four high-level dimensions (Association of American Colleges and Universities, 2007):
- knowledge of human cultures and the physical and natural world,
- intellectual and practical skills,
- personal and social responsibility, and
- integrative and applied learning.
As defined by the LEAP initiative, a 21st-century liberal education also includes three strategies to help students attain these learning outcomes: (1) high-impact educational practices (e.g., first-year programs, collaborative assignments, service learning); (2) authentic assessments (discussed further below); and (3) students’ signature work (e.g., capstone, internship, field work) (Schneider, 2015b).
By incorporating both acquisition of content knowledge and skills and application of learning, the AAC&U (2007) vision of 21st-century higher education is similar to the definition of “21st-century competencies” in a prior study related to the present one (National Research Council, 2012b, pp. 5-6):
Through deeper learning . . . the individual develops expertise in a particular domain of knowledge and/or performance. The product of deeper learning is transferable knowledge, including content knowledge in a domain and knowledge of how, why, and when to apply this knowledge to answer questions and solve problems. We refer to this blend of both knowledge and skills as “21st-century competencies.”
Markle and colleagues (2013b) synthesized several of these frameworks to identify seven critical domains of competence for college graduates. Most recently, the Lumina Foundation (2015) built on the AAC&U framework (Association of American Colleges and Universities, 2007) to develop the Degree Qualifications Profile, outlining what graduates should know and be able to do at the associate’s, bachelor’s, and master’s levels. A beta version of that framework was released in 2011, and it was revised based on feedback from more than 400 2- and 4-year colleges and universities, four of the seven regional accrediting agencies, and several higher education associations. The framework is organized around five high-level dimensions similar to those in the AAC&U framework (Association of American Colleges and Universities, 2007):
- specialized knowledge,
- broad and integrative knowledge,
- applied and collaborative learning,
- civic and global learning, and
- intellectual skills.
Identifying and Defining Key Outcomes
The committee reviewed all of these frameworks and other reports on the goals of higher education, searching for those intra- and interpersonal competencies that appeared most frequently. Through this process, the committee identified the following six competencies for college graduates:
- ethics,
- lifelong learning/career orientation,
- intercultural/diversity competence,
- civic engagement/citizenship,
- communication, and
- teamwork.
These six competencies are summarized within selected outcomes frameworks in Table 5-1, and they are defined briefly below.
Ethics

The reports and frameworks examined by the committee offer several practical definitions of the ethics competencies to be developed by 2- and 4-year institutions (see Table B-1); the authors of these reports set aside the centuries of debate among philosophers and religious leaders about the meaning of ethics and how to promote ethical behavior. For example, AAC&U (2007) includes “ethical reasoning and action” as an essential learning outcome within the “personal and social responsibility” dimension, whereas the Lumina Foundation (2015, p. 17) includes “ethical reasoning” within the “intellectual skills” dimension, as follows:
Ethical reasoning thus refers to the judicious and self-reflective application of ethical principles and codes of conduct resident in cultures, professions, occupations, economic behavior and social relationships to making decisions and taking action.
Disciplinary accrediting organizations view ethics as an important component of preparing students to work within the discipline. The guidelines of the American Chemical Society (2015, p. 17), for example, state:
Ethics should be an intentional part of the instruction in a chemistry program. Students should be trained in the responsible treatment of data, proper citation of others’ work, and the standards related to plagiarism and the publication of scientific results.
Lifelong Learning/Career Orientation
The AAC&U’s personal and social responsibility dimension (Association of American Colleges and Universities, 2007) includes “foundations and skills for lifelong learning,” whereas Oswald and colleagues’ (2004) intrapersonal dimension of college success includes “career orientation.” The League for Innovation identifies “learning to learn” as a key outcome for community college graduates in the knowledge economy (Miles and Wilson, 2004). The ABET accreditation criteria for undergraduate engineering programs include “ability to engage in lifelong learning.” In a similar vein, Markle and colleagues’ (2013b) synthesis of frameworks of higher education outcomes identifies “self-directed learning” as a key competency within the “life skills” category.
Intercultural/Diversity Competence

The frameworks and reports examined by the committee use a cluster of related terms to refer to intercultural competence (see Table B-1). AAC&U (2007), for example, identifies “intercultural competence and knowledge” as an essential learning outcome, while Oswald and colleagues (2004) identify “multicultural tolerance and appreciation” as one of the 12 dimensions of college success, falling within the “interpersonal behaviors” category. Oswald and colleagues (2004, p. 189) define multicultural interpersonal behaviors as follows: “Showing openness, tolerance, and interest in a diversity of individuals (e.g., by culture, ethnicity, or gender). Actively participating in, contributing to, and influencing a multicultural environment.” Within the “intellectual skills” dimension of the Lumina Foundation (2015) framework, engaging diverse perspectives is identified as a key outcome for 2- and 4-year graduates (see Table B-1), with a note that it is also relevant to two other dimensions—“applied and collaborative learning” and “civic and global learning.” Finally, the Educational Testing Service (ETS) is currently conducting research to define more clearly “intercultural competency and diversity” and to develop an assessment framework for this competency (Griffith et al., 2016).
TABLE 5-1 Intra- and Interpersonal Competencies within Selected Outcomes Frameworks
| Competency | Accreditation Board for Engineering and Technology (2014) | Oswald et al. (2004) | Association of American Colleges and Universities (2007) | Markle et al. (2013b) | Lumina Foundation (2015) |
| --- | --- | --- | --- | --- | --- |
| Ethics | Professional and ethical responsibility | Ethics, within intrapersonal behaviors | Ethical reasoning and action, within personal and social responsibility | Ethics and integrity, within citizenship | Ethical reasoning, within intellectual skills |
| Lifelong Learning/Career Orientation | A recognition of the need for and an ability to engage in lifelong learning | Career orientation and perseverance, within intrapersonal behaviors; continuous learning, within intellectual behaviors | Foundations and skills for lifelong learning, within personal and social responsibility | Self-directed learning, within life skills | Refers to students’ learning and engagement throughout their academic careers and beyond, within broad and integrative knowledge |
| Intercultural/Diversity Competence |  | Multicultural appreciation, within interpersonal behaviors | Intercultural knowledge and competence, within personal and social responsibility | Respect for others, within citizenship | Civic and global learning; engaging diverse perspectives |
| Civic Engagement/Citizenship |  | Citizenship, within interpersonal behaviors | Civic knowledge and engagement—local and global, within personal and social responsibility | Citizenship | Civic and global learning |
| Communication | The ability to communicate effectively | Communicating and dealing well with others, within interpersonal behaviors | Written and oral communication, within intellectual and practical skills | Effective communication | Communicative fluency, within intellectual skills |
| Teamwork | Ability to function on multidisciplinary teams | Leadership (showing skills in a group), within interpersonal behaviors | Teamwork and problem solving, within intellectual and practical skills | Teamwork | Applied and collaborative learning |
Civic Engagement/Citizenship

The frameworks reviewed by the committee use various terms for civic engagement and citizenship, including “civic knowledge and engagement—local and global”; “social responsibility, citizenship, and involvement”; and simply “citizenship” (see Table B-1). Adding to these earlier, brief definitions, Torney-Purta and colleagues (2015) conducted an extensive review of existing frameworks, definitions, and assessments of civic-related constructs in higher education to develop a more comprehensive framework for assessing this complex competency. The proposed framework divides civic learning into two broad domains: civic competency (civic knowledge, analytic skills, and participatory and involvement skills) and civic engagement (motivations, attitudes and efficacy, and democratic norms).
Communication

Most frameworks of college learning outcomes include oral and written communication. Research suggests that this competency involves both cognitive and interpersonal skills as the individual receives and interprets messages from others and formulates appropriate responses (Levy and Murnane, 2004; National Research Council, 2012b). The frameworks reviewed by the committee highlight various dimensions of communication as critical for 2- and 4-year graduates, including “oral communication” (Association of American Colleges and Universities, 2007) and “the ability to communicate effectively” (Accreditation Board for Engineering and Technology, 2015). Miles and Wilson (2004) define communication skills simply as reading, writing, speaking, and listening, whereas the American Chemical Society (2015, p. 17) states:
Effective communication is vital to all professional chemists. Speech and English composition courses alone rarely give students sufficient experience in oral and written communication of technical information. The chemistry curriculum should include critically evaluated writing and speaking opportunities so students learn to present information in a clear and organized manner, write well-organized and concise reports in a scientifically appropriate style, and use relevant technology in their communications. Because chemistry is a global enterprise, knowledge of one or more foreign languages or an international experience can be a valuable asset to chemistry students and add greatly to a student’s ability to communicate with other chemists worldwide.
The Degree Qualifications Profile of the Lumina Foundation (2015, p. 18) describes “communicative fluency” as follows:
The use of messages to achieve shared understanding of meaning depends on effective use of language, intentional engagement of audience, cogent and coherent iteration and negotiation with others, and skillful translation across multiple expressive modes and formulations, including digital strategies and platforms.
Teamwork

Teamwork, a complex competency involving communication skills, is frequently identified as a critical outcome for college graduates. The AAC&U (Association of American Colleges and Universities, 2014) teamwork rubric defines teamwork as encompassing five types of behaviors under the control of the individual team member: (1) contributes to team meetings, (2) facilitates the contributions of team members, (3) obtains individual contributions outside team meetings, (4) fosters constructive team climate, and (5) responds to conflict. The rubric describes four levels of performance along each of the five dimensions. Markle and colleagues identify four slightly different dimensions of teamwork skills and behaviors, which are shared across several competency frameworks (Markle et al., 2013b, p. 15): (1) fulfill roles within a team; (2) treat group members with respect; (3) motivate group members; and (4) possess leadership skills. As discussed further below, defining and measuring teamwork is challenging, and further research is needed to define the concept more clearly and to develop valid, reliable measures.
When examining these six competencies in light of its charge, the committee considered whether and to what extent any of them might also be related to traditional measures of success during college, such as GPA, persistence from year to year, and/or graduation. The committee reasoned that perhaps these six competencies have their effects on eventual career outcomes in part by contributing to the academic success of college students. An initial search of the literature produced little evidence related to this conjecture, one way or the other. As discussed later in this chapter, further research is needed to define more clearly and assess the competencies that have been identified as valued outcomes of college and to explore possible areas of conceptual and empirical overlap with the focal competencies identified in Chapter 2.
The committee conducted a literature search to identify publications examining possible relationships between the six college outcomes discussed above and college success (see Appendix A). The search yielded little rigorous research: other than the published work by Bowman (2014), the committee found no high-quality research on the association between these six competencies and college success.
The committee was nevertheless able to find a few isolated studies relevant to the questions of interest. Teamwork, for example, is one of the six focal competencies for college graduates, and Fortenberry and colleagues (2007) provide evidence from a single institution that engaging students in team projects increased their persistence in engineering. An emerging body of “discipline-based education research” suggests that carefully designed group learning activities can support learning in science, technology, engineering, and mathematics (STEM) disciplines (National Research Council, 2012a), but this research focuses on acquisition of STEM concepts and skills and does not assess teamwork competencies. Further, while instructors may assign team projects, they are often unprepared to facilitate students’ development of teamwork competencies (Borrego et al., 2013).
In the future, conceptual and technological advances in assessment, research, and instructional design may lead to improvements in teaching and assessing teamwork. To the extent that explicit teaching and grading of teamwork result, data may become available for future studies to examine in depth the relationships between teamwork competencies and GPA, persistence, and graduation. Similarly, as discussed further below, undergraduate engineering programs are beginning to teach and grade ethics explicitly. Here, too, future studies may yield a better understanding of the relationships between ethics and GPA, persistence, and graduation.
Given the lack of published research on the possible contributions of the six outcome competencies to college persistence and success, the committee commissioned two original data analyses. At the committee’s December 2015 workshop, education researcher Nicholas Bowman shared his study (Bowman, 2014), which focused on a construct closely related to intercultural competence—openness to diversity and challenge. Bowman found that openness to diversity and challenge was statistically significantly related to college experiences and, critically, to first-year GPA. It was also a marginally statistically significant predictor of first-to-second-year student retention. Bowman’s (2014) study was based on analysis of data from the Wabash Study of Liberal Arts Education. To address its questions, the committee commissioned Bowman to conduct further analysis of these data. At the same workshop, economist David Deming presented his academic research drawing on the National Longitudinal Survey of Youth (NLSY) to investigate the labor market rewards for social skills (Deming, 2015). The committee commissioned Deming to conduct further analysis of those data.
Commissioned Analysis No. 1
The committee asked Bowman to draw once again on the Wabash Study data (a sample of 8,475 students) to reanalyze the relationships among openness to diversity and challenge, college experiences, and student success and to identify possible differences in these relationships for various student groups (e.g., by gender and underrepresented minority status). In addition to openness to diversity (i.e., intercultural competency), the committee asked him to analyze (both overall and for different subgroups) the relationships among the following three competencies, college experiences, and student success:

- ethics (moral reasoning),
- civic engagement/citizenship, and
- teamwork.
Bowman examined four of the six competencies discussed in this chapter, omitting lifelong learning/career orientation and communication. He used measures of the four competencies available in the Wabash data (see Box 5-1).
Bowman (2016) reports that with the exception of ethics, all of the competencies he examined (i.e., teamwork, intercultural/diversity competence, and civic engagement/citizenship) were statistically significant predictors of college engagement. These competencies were all measured at college entrance, while college engagement (experiences) was measured near the end of the first year, ruling out the possibility that college engagement influenced the status of the competencies (see Box 5-1). His results for GPA were quite different, however: only ethics was a statistically significant predictor of GPA in years 1 and 4 (r = 0.103-0.108, p < .001), while the effects of the other competencies were not consistently statistically significant in both years. Bowman suggests that ethics is likely a cognitive competency, which would account for its ability to predict grades but not engagement. Finally, the only consistently significant predictor of retention was civic engagement (r = 0.207, p < .001; r = 0.073, p < .05; and r = 0.115, p < .001 in years 2, 3, and 4, respectively). In the paper describing his analysis and results, Bowman (2016) acknowledges that there is no particular theoretically grounded explanation for why civic engagement might predict retention.
Bowman also estimated these equations across a variety of subgroups, including race/ethnicity, sex, first-generation status, and standardized test scores. While he found some significant effects of different competencies on different outcomes, he observed no clear or consistent pattern in the findings.
In his presentation to the committee, Bowman remarked that even given his extensive familiarity with the higher education literature, he could find no strong empirical basis on which to develop hypotheses or otherwise ground his analyses. These comments reinforced the committee’s perception that research is lacking on the possible relationships between college outcomes and competencies related to college success.
Commissioned Analysis No. 2
To explore its questions about the possible roles of teamwork and communication in college success, the committee commissioned Deming to extend his research on social and cognitive skills (Deming, 2015), which draws on data from the NLSY. In that study, Deming used a social skills index with two components (see Box 5-2):
- data from two sociability items (How sociable are you now? How sociable were you at age 6?—extremely shy/somewhat shy/somewhat outgoing/extremely outgoing); and
- data on the number of clubs in high school and (yes/no) participation in team sports in high school.
The committee asked Deming to reexamine these data to explore whether social skills (as measured by the same social skills index) might be related to college success, both among the general student population and for different subgroups of students.
Deming reported to the committee that NLSY respondents with greater levels of social skills had significantly greater levels of completed schooling. A 1 standard deviation (SD) increase in social skills was correlated with an increase of about 0.78 years of completed education (p < .05). Adding controls first for race, sex, and age, and second for geography (census division, metro area, and urbanicity fixed effects) changed the correlations only slightly (r = 0.775 and r = 0.882, p < .05). Including a measure of cognitive skills (the Armed Forces Qualification Test [AFQT]) reduced the size of the association by somewhat over 50 percent, but it remained relatively large and statistically significant (r = 0.376, p < .05). Relative to the AFQT measure of cognitive skills, however, social skills had only about a fourth of the effect on years of schooling. When the attainment of a bachelor’s degree was the dependent variable, Deming’s results generally replicated these findings (r = 0.101-0.046 as the same controls were added, p < .05). Deming found no differences between men and women in the effects of social skills on years of schooling and receipt of a bachelor’s degree, and small and mixed effects for race. Like Bowman, Deming reported to the committee that he was unaware of any rigorous prior research on the questions of interest to the committee.
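The attenuation pattern Deming describes, in which the estimated effect of social skills shrinks once a correlated measure of cognitive skills is added, can be illustrated with a brief ordinary least squares sketch. This is a minimal illustration on synthetic data; the variable names, coefficients, and sample below are hypothetical and are not drawn from the NLSY.

```python
import numpy as np

# Hypothetical data-generating process: social and cognitive skills are
# correlated, and both contribute to years of completed schooling.
rng = np.random.default_rng(0)
n = 5000

cognitive = rng.normal(size=n)                    # AFQT-like score
social = 0.5 * cognitive + rng.normal(size=n)     # correlated with cognition
social = (social - social.mean()) / social.std()  # standardize to SD units
schooling = 13.0 + 0.4 * social + 0.8 * cognitive + rng.normal(size=n)

def ols_coefs(y, predictors):
    """OLS coefficients for y on the given predictors (intercept first)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_raw = ols_coefs(schooling, [social])[1]              # no controls
b_ctrl = ols_coefs(schooling, [social, cognitive])[1]  # cognitive control

print(f"social-skill coefficient, no controls:        {b_raw:.2f}")
print(f"social-skill coefficient, cognitive control:  {b_ctrl:.2f}")
# The raw coefficient absorbs part of the cognitive-skill effect, so it
# shrinks once cognition is controlled for, while remaining positive.
```

Because the uncontrolled regression attributes some of the cognitive-skill effect to the correlated social-skill measure, the first coefficient overstates the direct association; adding the control recovers a value near the true one, mirroring the roughly 50 percent attenuation reported above.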
The committee noted the limitations of Deming’s social skills index for measuring the focal competencies identified in this chapter—teamwork and communication—and raised questions about the two components of the index. When Deming estimated two additional models, each using one of the two different components of the social skills index, he found that the same patterns held for both models (see Box 5-2).
Findings from the Literature Review and Commissioned Analyses
In summary, little high-quality research has been conducted to date on the relationship between the six college outcomes discussed in this chapter and college success. The commissioned analyses by Bowman and Deming, like the few published studies discussed above, provide some evidence that these outcomes contribute to college success, but their findings should be interpreted as suggestive rather than definitive.
As noted earlier, colleges and universities are beginning to assess a broader set of student learning outcomes, including intra- and interpersonal outcomes, and are using a variety of methods for those assessments. Provosts responding to a survey in 2014 indicated they were using a wider range of assessment instruments than they had in 2009 (Kuh et al., 2014). National surveys, such as the National Survey of Student Engagement, were the most popular assessment instrument (used by 85% of respondents), followed by rubrics (69%) and classroom-based assessments that are aggregated to the institutional level (66%). Other methods included alumni surveys, incoming student placement exams, locally developed surveys, capstone projects or papers, and locally developed knowledge and skill measures. Similarly, Hart Research Associates (2016) found that the proportion of AAC&U member institutions assessing learning outcomes had increased from 6 years earlier, both in general education and more broadly at the institutional level. The respondents reported using a variety of measurement tools; prime among them for general education was the use of rubrics applied to samples of student work and capstone projects. Forty-two percent of those assessing outcomes in general education said they used AAC&U’s Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics. Among those using locally created rubrics, well over half (58%) reported that the VALUE rubrics informed the development of these local rubrics.
Described below are selected examples of current efforts to measure the six competencies identified earlier. Some of these efforts focus on one specific competency, while others are more general in nature, designed to assess multiple competencies. As examples, they are not fully representative of all such efforts.
VALUE Rubrics

VALUE rubrics are available for all six competencies discussed in this chapter. The rubrics were designed to measure authentic student work so that colleges and universities could determine whether and how well students were attaining the 16 essential learning outcomes. In 2007-2009, teams of faculty and administrators from a range of institutional types located across the country analyzed and synthesized existing campus rubrics and college mission statements, consulted experts in relevant fields, and obtained feedback from faculty to develop the initial set of VALUE rubrics (Maki, 2015). Other faculty at more than 100 campuses tested the use of the rubrics for scoring local student work samples in three rounds of drafting, campus testing, and revision. At 12 campuses, the rubrics were used to evaluate e-portfolios (Maki, 2015). Reflecting AAC&U’s (2007) emphasis on the application as well as acquisition of knowledge, the rubrics were developed for use in assessing a variety of student products, ranging from traditional papers to signature projects (such as capstone projects, service learning, internships, and other applied activities).
Each rubric includes a definition of the competency, framing language, a glossary, and benchmarks for assessing five dimensions of the competency at four levels, from benchmark (level 1) to capstone (level 4) (see Table 5-2). All are available at the AAC&U Website.1
Maki (2015) proposes that the process used to develop the VALUE rubrics—drafting and validation by faculty who are closest to student learning and outcomes assessment, together with pilot testing—helps ensure the face and content validity of these measures. In addition, a test of interrater reliability showed relative convergence in the scores of 44 faculty members from across disciplines who had independently scored student work online using the critical thinking, integrative learning, and civic engagement VALUE rubrics. By December 2015, the VALUE rubrics had been accessed by about 42,000 individuals from more than 2,800 colleges and universities (Schneider, 2015b). They have also been used across institutions, beginning in 2013 when seven colleges and universities in Massachusetts jointly tested protocols for collecting and scoring samples of student work (Maki, 2015). Faculty submitted 350 student work samples representing written communication, quantitative literacy, and critical thinking. To establish interrater agreement, the samples were distributed for independent scoring, followed by group discussion to build shared understanding of performance levels for each competency. The results of individual scoring of institutional samples were entered into a spreadsheet that identified remaining areas of disagreement between the two scorers assigned to evaluate each piece of work—areas that would need to be addressed in future scoring sessions. This group of institutions then offered regional workshops to support a larger pilot project.

TABLE 5-2 Sample VALUE Rubric: Foundations and Skills for Lifelong Learning

| Selected Dimension | Capstone Level |
| --- | --- |
| Initiative | Completes required work, generates and pursues opportunities to expand knowledge, skills, and abilities. |
| Reflection | Reviews prior learning (past experiences inside and outside of the classroom) in depth to reveal significantly changed perspectives about educational and life experiences, which provide foundation for expanded knowledge, growth, and maturity over time. |

SOURCE: Schneider (2015b). Reprinted with permission.
Building on the Massachusetts initiative, the State Higher Education Officers Association and AAC&U developed the Multi-State Collaborative to Advance Learning Outcomes Assessment, including state higher education systems from 12 states. In its initial phase, the collaboration engaged faculty raters in examining student work from 68 2- and 4-year colleges and universities in nine state systems. The Bill & Melinda Gates Foundation then provided funding to expand this collaborative model in a demonstration/implementation year beginning in September 2015. With this grant, the collaborative plans to develop a methodology for ensuring a high level of reliability and validity of the results obtained using the VALUE rubrics, test the process for identifying and gathering samples of student work, and develop a national VALUE database for benchmarking student learning. The committee looks forward to these developments, including public release of information on the reliability and validity of the rubrics.
Assessing Ethics in Engineering
In the two decades since the ABET Engineering Criteria 2000 document was released, engineering education experts, faculty members, and researchers have developed a variety of methods for assessing the intra- and interpersonal competencies required by these accreditation standards (e.g., Shuman et al., 2005). Some recent efforts are described in a National Academies of Sciences, Engineering, and Medicine (2016b) report, Infusing Ethics into the Development of Engineers. Based on submissions from 44 programs, the committee that developed that report identified 25 exemplary programs that included some type of assessment of student learning outcomes. Although these programs reported using a wide variety of informal and formal assessment methods, they did not report on the reliability or validity of any of these methods. Thus, the insights offered by the exemplary programs are commendable, but empirical work on these programs is sorely needed. That said, several programs assessed outcomes based at least partly on student feedback, whether through comments or emails, more formal student rating systems (e.g., the IDEA Student Ratings of Instruction system)2, or faculty-designed questionnaires.
Faculty at Texas State University, a Hispanic-serving institution, and the University of Texas at Tyler, whose student population is 60 percent women, developed two modular courses on ethical, health, and safety issues related to nanotechnology for undergraduates in engineering and engineering technology. The modules, developed in collaboration with industry, were infused into nontechnical introductory courses and more technical courses from the sophomore through senior years. Student learning outcomes were assessed through in-class assignments and separate interval and end-of-term assessments, focused on student understanding, engagement, and satisfaction. Assessment also included student ratings of the courses (on a five-point scale, from poor to excellent) and a series of focus groups. Student retention was high, and student feedback was used to revise the modules further.

2 According to the IDEA Website, the IDEA system is designed to improve teaching and includes carefully formulated questions designed to elicit students' thoughts about their own learning, rather than simply opinions about the instructor.
At Northeastern University, faculty developed an engineering ethics program including courses in engineering students’ second and fourth years, which drew on faculty members’ professional engineering experiences and integrated students’ internship learning experiences. Assessment included students’ anonymous evaluations of the junior-year course and their scores on an independent test—the Fundamentals of Engineering (FE) examination, a required assessment for certification as a licensed professional engineer. The FE exam includes questions on the topic of ethics and business practices, and the test administrator provides a separate score on this topic. Over the period 2005-2013, a group of about 400 Northeastern students who had completed the ethics program scored 4.7 percent higher than the national average on the ethics and business practices section of the test.
Two other exemplary programs used the Defining Issues Test-2 (DIT-2)3 to assess student learning. The DIT-2, a selected-response assessment, presents students with five scenarios to assess their understanding of ethical issues. One program reported using a newly developed moral reasoning instrument, the Engineering Ethical Reasoning Instrument (Zoltowski et al., 2013). In a separate publication, the developers of that instrument reported that it was still in the scale and construct validation stages (Zoltowski et al., 2013).
Assessing Teamwork

Employers invariably identify teamwork as one of the most important competencies required of 2- and 4-year college graduates, but both conceptualizing and measuring this construct in the classroom environment can be difficult. The intra- and interpersonal aspects of teamwork are intimately related to a team’s task and environment, team processes (e.g., member attitudes, interactions, communication patterns), and team outcomes (products or services created by the team, along with outcomes for the team members) (Kozlowski and Ilgen, 2006). In undergraduate education, group and team learning activities are growing, but instruction and assessment typically have focused on only one aspect of team outcomes—the team’s product, such as a paper, presentation, or video (Britton et al., 2015; Hughes and Jones, 2011). Such assessment taps cognitive knowledge and skills but
provides little information on team processes, including the development of interpersonal teamwork competencies.
In an important early study focused on team processes, Stevens and Campion (1994) conducted a review of the literature on teams to develop a comprehensive taxonomy of individual teamwork knowledge, skills, and attitudes (KSAs). The taxonomy includes two major factors: interpersonal KSAs, comprising conflict resolution, collaborative problem solving, and communication, and self-management KSAs, comprising goal setting/performance management, planning, and task coordination. Stevens and Campion (1999) then developed the Teamwork KSA test to measure these constructs. The test items present hypothetical teamwork scenarios along with alternative responses for selection by the test taker. This test has been widely used for employee selection and has also been incorporated into research in higher education.
Chen and colleagues (2004), for example, used the Teamwork KSA test as a focus for the development and evaluation of a semester-long teamwork training course for undergraduates that included both didactic reading and lecture and experiential learning in teams. A quasi-experimental evaluation showed that the course had a statistically significant effect on increasing the levels of students’ teamwork knowledge and skills, but not on improving teamwork-related attitudes and self-efficacy. More recently, Bedwell and colleagues (2014) incorporated elements of Stevens and Campion’s (1999) taxonomy into a broader taxonomy of interpersonal skills, and these authors propose approaches for integrating the development and assessment of interpersonal skills into the master of business administration curriculum.
Although Stevens and Campion’s (1999) taxonomy and test have informed the assessment of teamwork in higher education, O’Neill and colleagues (2012, p. 1) recently identified statistical limitations of the test. Reviewing the literature, they found “. . . an average criterion validity of .20 for the Teamwork-KSA Test,” which they characterize as low, although a more thorough examination of the criterion measures tied to the dimensions of the taxonomy, along with a comparison of teamwork skills measures, would be needed for a more solid assessment of validity. Moreover, O’Neill and colleagues (2012) were unable to locate any research on the test’s item properties, factor structure, or subscale reliabilities.
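The criterion validity figure quoted by O’Neill and colleagues is, at bottom, a Pearson correlation between test scores and some external criterion of team performance. A minimal sketch of that computation, using entirely hypothetical scores and ratings (the variable names and values are invented for illustration), might look like this:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: Teamwork KSA-style test scores and instructor-rated
# team performance for eight students (illustrative values only)
test_scores = [62, 71, 55, 80, 66, 74, 58, 69]
performance = [3.1, 3.4, 2.9, 3.6, 3.0, 3.2, 3.3, 3.1]
criterion_validity = pearson_r(test_scores, performance)
```

A coefficient of .20, as reported for the Teamwork-KSA Test, means the test explains only about 4 percent of the variance in the criterion, which is why O’Neill and colleagues characterize it as low.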
Researchers have moved away from paper-and-pencil tests of teamwork and toward the use of team member ratings, which may reduce biases associated with self-report ratings and better represent dynamic team processes. These team member rating approaches are based on definitions of teamwork that focus on individual contributions to team success, as discussed further below, and draw on the literature on performance management in organizations (Ohland et al., 2012). They offer several advantages for as-
sessing and developing teamwork relative to other measurement methods. Team members are well positioned to evaluate their own and their peers’ contributions to a team, and they can learn about teamwork through the process of rating themselves and their peers based on research-based definitions of individual contributions to team success (Ohland et al., 2012). Studies of business students have found that requiring student team members to rate themselves and their peers reduces “social loafing” (doing little or no work while other team members carry out the task) and is associated with higher perceived grade fairness and more positive attitudes toward teamwork (Aggarwal and O’Brien, 2008; Chapman and van Auken, 2001; Erez et al., 2002, cited in Ohland et al., 2012).
At the same time, however, self- and peer-rating approaches also face challenges. For example, in a study of “team-based learning” (TBL), an instructional strategy used in medical education, Thompson and colleagues (2007) found that many students were resistant to the peer evaluation that was used to inform a portion of their course grades. One medical school dropped the peer evaluation component of TBL because of student hostility, and in other schools, students gamed the rating system by giving every team member the same rating. When rating themselves, students may have an inflated sense of their own contributions to the team and unrealistic expectations of their teammates’ contributions (Ohland et al., 2012). Another challenge is that these rating approaches typically focus on a single team experience, but a student who displays strong teamwork competencies in a particular team may or may not be able to transfer those competencies to other team contexts. Additional research is needed on the consistencies and inconsistencies of students’ teamwork and team performance across teams, tasks, and time.
An additional challenge is the potential for various forms of bias to influence peer ratings. Haynes and Heilman (2013) report on a series of studies examining how women and men allocated credit for joint success in performing a task. Women gave more credit to their male teammates and took less credit themselves unless their role in bringing about the successful performance was irrefutably clear, or they were given explicit information about their likely competence before completing the task. However, women did not credit themselves less when their teammate was female. Collectively, the studies showed that women working in gender-diverse teams tended to devalue their contributions to the collaborative work.
In a separate study, supported by the National Science Foundation, Joshi (2014) assembled and analyzed data across more than 60 science and engineering research teams. Results from this analysis indicated that recognition and utilization of the expertise of male and female scientists and engineers were influenced by the gender and gender identification of the rater, the team’s gender composition, and female faculty representa-
tion in the discipline in which the teams were embedded. Relative to male team members, female team members evaluated the expertise of highly educated female and male team members more positively. Male team members who identified strongly with their gender evaluated highly educated female colleagues more negatively than less educated female colleagues. In male-dominated teams, the expertise of highly educated men was used to a greater extent than was the case in teams dominated by women. Finally, teams with a greater proportion of highly educated women were significantly more productive (in terms of research publications) in disciplines with greater female faculty representation.
Such challenges notwithstanding, several investigators have developed rating systems for assessing teamwork. Loughry and colleagues (2007) reviewed the literature on organizational teams to identify ways in which an individual can contribute to a team and translated these findings into a large pool of potential test items. After surveying students, they reduced the pool to 29 specific types of contributions clustered into five broad categories: contributing to the team’s work; interacting with teammates; keeping the team on track; expecting quality; and having relevant knowledge, skills, and abilities.
The authors then created the Comprehensive Assessment of Team Member Effectiveness (CATME) instrument in both a long version with 87 items and a short version with 33 items. Students rate their peers using Likert scales. Even the short version, however, requires students to read 33 items and rate each of their teammates on each item. This represents a nontrivial burden on students, as well as on the instructor who tries to draw inferences from the large number of ratings. To address this problem, Ohland and colleagues (2012), with support from the National Science Foundation, developed CATME-B, a web-based instrument that collects and analyzes confidential self- and peer-evaluation data. Instead of Likert rating scales, these authors developed behaviorally oriented rating scales that students can use to identify three levels of performance across the five categories of team member contributions cited above. Three tests of the instrument demonstrated psychometric characteristics equivalent to those of the much longer initial version of CATME (Loughry et al., 2007), high convergence with another peer-evaluation scale created by Van Duzer and McMartin (2000), and a statistically significant relationship with final course grades in a course requiring a high level of team interaction.
SPARK—the Self and Peer Assessment Resource Kit (Freeman and McKenzie, 2002)—is another system for peer and self-evaluations of teamwork competencies. It provides a template that allows faculty to customize the evaluation criteria according to specific disciplines or project goals.
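Peer-rating systems such as SPARK and CATME typically reduce a matrix of self- and peer ratings to a single weighting factor per student. The scheme below is a deliberately simplified illustration of that idea, not the actual SPARK or CATME computation: each member’s factor is the mean rating received divided by the team-wide mean, so factors above 1.0 flag above-average contributors.

```python
def peer_weighting(ratings):
    """ratings[i][j] is the rating member i gives member j (self-ratings on
    the diagonal). Returns one weighting factor per team member."""
    n = len(ratings)
    # Mean rating each member receives, averaged over all raters
    received = [sum(ratings[i][j] for i in range(n)) / n for j in range(n)]
    team_mean = sum(received) / n
    # Normalize so the factors average to 1.0 across the team
    return [r / team_mean for r in received]

# Hypothetical 3-person team, ratings on a 1-5 scale
ratings = [
    [4, 5, 3],   # member 0's ratings of members 0, 1, 2
    [4, 5, 2],
    [5, 4, 3],
]
factors = peer_weighting(ratings)
```

Some instructors use such factors to adjust individual grades on a shared team product, which is one reason inflated self-ratings and uniform "gamed" ratings, discussed above, are a practical concern.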
More recently, Kulturel-Konak and colleagues (2014) introduced PEAR—the Peer Evaluation and Assessment Resource—also developed
with support from the National Science Foundation. In addition to simplifying the collection and analysis of student ratings, the instrument is intended to measure students’ developmental progression in teamwork over the course of the undergraduate years. The PEAR assessment framework is based on the model of domain learning, which posits that the development of expertise in a domain proceeds in three progressive stages: acclimation, competency, and proficiency. The model also proposes that the nature of domain knowledge, strategic processing abilities, and interests differ across these three stages. Thus, the Web-based tool incorporates three types of rubric items (knowledge, interest, strategic processing), mapped against the three developmental stages of the model of domain learning. PEAR also differs from CATME in its flexibility, as it allows instructors to create their own custom rubrics. A pilot study demonstrated the feasibility of the framework and data-gathering instrument. According to Kulturel-Konak and colleagues (2014), the instrument was in the alpha testing stage, to be followed by evaluation of the reliability of the instrument and the validity of the framework.
Thus, although progress is being made in assessing teamwork, assessment of this competency is impeded by the lack of a robust conceptual model of teamwork processes and outcomes that is directly tied to an assessment framework. The teamwork literature to date shows a wide variety of models and definitions of teamwork competencies and a wide variety of new assessment instruments (e.g., online self- and peer-rating systems). Teamwork researchers continue to grapple with new and different frameworks and applications. In developing a teamwork assessment for midwifery students, for example, Hastie and colleagues (2014) chose not to build on CATME-B because the web-based measure could not be altered or customized, and because they viewed the CATME items as not well defined and even likely to impede students’ understanding of expected teamwork skills and behaviors. Instead, the authors created a new rubric based on a revision of the VALUE teamwork rubric (Association of American Colleges and Universities, 2007). Most recently, Britton and colleagues (2015) drew on a taxonomy of teamwork dimensions derived from surveys used in health care settings (Valentine et al., 2012), along with the rubric created by Hastie and colleagues (2014), to develop yet another rubric for self- and peer evaluations of teamwork—the Team-Q. Evaluating this new tool, Britton and colleagues (2015) found that it had high internal consistency, interrater reliability was within an acceptable range, and factor analyses provided evidence of convergent and discriminant validity. The authors also obtained preliminary evidence that teamwork skills improved over time when taught and assessed.
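The internal consistency that Britton and colleagues report for the Team-Q is conventionally quantified with Cronbach’s alpha, which compares the variance of individual rubric items with the variance of respondents’ total scores. A minimal sketch of the statistic, applied to hypothetical rubric ratings (not Team-Q data), is:

```python
def cronbach_alpha(items):
    """items[k] holds all respondents' scores on rubric item k."""
    k = len(items)

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Sum of per-item variances vs. variance of respondents' total scores
    sum_item_vars = sum(var(item) for item in items)
    totals = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - sum_item_vars / var(totals))

# Hypothetical teamwork ratings: 4 rubric items, 6 respondents (1-5 scale)
items = [
    [4, 3, 5, 2, 4, 3],
    [4, 4, 5, 2, 3, 3],
    [5, 3, 4, 1, 4, 2],
    [4, 3, 5, 2, 4, 4],
]
alpha = cronbach_alpha(items)
```

Alpha near 1.0 indicates that the rubric items move together and plausibly tap a single underlying construct; "high internal consistency" in the Team-Q evaluation refers to a value in that upper range.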
As noted above, undergraduates increasingly are required to complete team projects and engage in group learning activities. However, the devel-
opment of expertise in any domain, including teamwork, requires feedback as well as practice (National Research Council, 2000, 2012b), so simply working in teams without direct instruction, assessment, and feedback will not necessarily develop students’ teamwork competencies. The current lack of valid, reliable assessments of teamwork impedes the development of effective instructional approaches for teaching teamwork along with subject matter content during group learning activities (Kulturel-Konak et al., 2014; National Research Council, 2015a).
The Engineering Professional Skills Assessment
With support from the National Science Foundation, Zhang and colleagues (2015) developed and tested a new performance assessment designed to measure five engineering professional skills identified by ABET as critical student learning outcomes: (1) understanding of professional and ethical responsibility; (2) ability to communicate effectively; (3) broad understanding of the impact of engineering solutions in global, economic, environmental, and cultural/social contexts; (4) recognition of the need for, and the ability to engage in, lifelong learning; and (5) knowledge of contemporary issues. Unlike the other assessment examples discussed in this section, this assessment focuses on measuring group rather than individual performance. It uses a prompting scenario to present students with a contemporary engineering issue lacking a clear-cut solution and an analytical scoring rubric with five dimensions corresponding to the skills cited above. Groups of four to six students were instructed to engage in a 45-minute discussion of the issue presented by the scenario, and trained raters used the scoring rubric to assess each group’s performance on each of the five skills. The raters did not rate individual students but rather the groups as a whole.
In a small, exploratory study, Zhang and colleagues (2015) collected and analyzed data from 20 discussion groups at three engineering colleges to determine whether the scenario used affected performance scores. Although based on a small sample, the findings tentatively suggested that scores on the assessment discriminated among student groups and that using different scenarios appeared to have a minimal effect on student group scores. In addition, student groups varied in their measured proficiency across the five ABET outcomes, suggesting the need for a stronger emphasis on developing these outcomes in the undergraduate engineering curriculum.
Assessing Civic Competency and Engagement
In response to widespread interest in developing students’ civic competency and engagement (also referred to as citizenship), ETS convened a
research team to define this construct more clearly and to develop an assessment framework. As discussed earlier, Torney-Purta and colleagues (2015) conducted an extensive review of the literature on defining and assessing civic engagement to identify challenges and opportunities for designing and implementing assessments of this complex competency. They identified two dimensions of this competency: (1) participatory and involvement skills; and (2) civic engagement, which consists of motivations, attitudes, and efficacy, democratic norms and values, and participation and activities. They also considered which item formats and task types would be most likely to ensure fair and reliable scoring for a future assessment of civic competency and engagement. Those efforts were part of a suite of higher education assessments in various stages of research and development at ETS, referred to as HEIghten™. Reflecting higher education leaders’ growing interest in assessing intra- and interpersonal competencies, the suite includes two of the six competencies that are the focus of this chapter—civic engagement/citizenship and intercultural/diversity competence. Following on the publication and dissemination of the Torney-Purta et al. (2015) study, the ETS team (Liu et al., 2015) planned the following further development activities: prototyping and cognitive interviews, feedback from user audiences, revision of the test blueprint, item writing and test development, pilot study, validation studies, and operational testing.
The intra- and interpersonal competencies of ethics, lifelong learning/career orientation, intercultural/diversity competence, civic engagement/citizenship, communication, and teamwork have been identified as valued outcomes of college education. Although it might seem intuitive that these competencies would predict academic success, there is little evidence to date that these desired outcomes actually develop during college or that they contribute to persistence, GPA, and graduation. There simply are too many large gaps in the research literature and in the available data to say with any certainty whether these competencies matter for students’ success in college.
Conclusion: To date, only limited research has been conducted on the intra- and interpersonal competencies that have been identified as important learning outcomes for college graduates. Therefore, little is known about whether and under what conditions these competencies are related to persistence and success in college.
This gap in the research comes at a time when educational policy makers are pursuing two potentially complementary aims related to intra- and
interpersonal competencies: (1) increasing students’ persistence to graduation, and (2) developing students’ competencies for life and work after graduation. Research is needed to explore whether, and to what extent, these aims are complementary.
RECOMMENDATION 13: Federal agencies and foundations should invest in research examining whether, and under what conditions, the intra- and interpersonal competencies identified as outcomes for college graduates may also be related to students’ persistence and success in college.
The six competencies discussed in this chapter have the potential to significantly broaden understanding of the noncognitive determinants of student postsecondary success. The committee focused on these particular competencies because all of them have been identified by blue ribbon panels and top educational researchers as desirable outcomes of higher education. Many observers view these six competencies as critical to the workplace success of the next generation of college graduates, and indeed a growing body of research shows that collectively, these competencies are valuable in the labor market and other aspects of life (e.g., Deming, 2015).
Very little is known about the empirical associations between the six competencies that are the focus of this chapter and students’ successful progress through college. This paucity of knowledge opens up a host of research opportunities. The committee believes a sensible research agenda will require going well beyond calculation of correlation coefficients between, on the one hand, ethics, lifelong learning/career orientation, intercultural/diversity competence, civic engagement/citizenship, communication, and teamwork and, on the other hand, college completion, GPA, and other indicators of performance. Three particular issues concerning such a research agenda are worth noting.
First, the state of measurement of most of these competencies is still markedly underdeveloped, as is the case for the eight competencies identified in Chapter 2. In contrast with the century-long history of measuring cognitive competencies, researchers have only in the past decade devoted a concerted effort to defining intra- and interpersonal competencies and designing and testing scales and items appropriate to their assessment. Furthermore, as discussed in Chapter 3, many new and rapidly evolving psychometric and technological advances are now available to assist in the
measurement of these complex competencies. These advances open up new opportunities to conduct much more test development work, reinforcing the committee’s recommendation for such work (Recommendation 8).
Theoretical and Conceptual Issues
Second, much theoretical and conceptual work remains to be done before statistical analysis of these competencies is undertaken to explore potential areas of conceptual overlap between college outcomes and predictors of college persistence. This research would examine such questions as the following:
- What are the theoretical reasons (if any) to expect that civic engagement, for example, is empirically (even causally) associated with GPA and/or college persistence?
- What theoretical models or frameworks would guide investigation of this potential relationship?
The committee notes that some researchers have found ethical behavior to be associated with both conscientiousness (e.g., Gensler, 1996; Kalshoven et al., 2011) and responsibility, which, in turn, is related to conscientiousness (Jackson and Roberts, 2015). In another example, the college outcomes of intercultural/diversity competency and civic engagement/citizenship discussed in this chapter appear to overlap conceptually with the competency of prosocial values and goals discussed in Chapter 2 (Wolniak et al., 2012). A process-focused elaboration of these theoretical frameworks—and a concomitant measurement framework to capture these processes—will be required if research on these competencies and postsecondary success is to proceed in a cumulative and informative manner.
Finally, the committee endorses a multimethods research agenda aimed at better understanding the role of these competencies in student success. The commissioned analyses of Bowman and Deming each demonstrate the potential of survey-based regression analysis to begin to explore these questions. Additionally, other researchers might favor studies of particular institutions, and might adopt experimental, quasi-experimental, experience-sampling, field research, multiobserver, or qualitative case study methods rather than relying heavily on surveys. The committee believes the state of the literature is such that investment in the development of a conceptual framework of intra- and interpersonal competencies tied to a multiyear, multi-institutional research agenda holds particular promise.