2
Governing Principles of Good Metrics

An important first step in creating metrics for evaluating teaching in engineering schools is to develop principles that ensure that the metrics will be widely accepted and sustainable and that they will actually provide valid assessments of the educational impact of faculty on students. One of the main principles should be that what is valued is rewarded, and what is rewarded is valued.


For far too long, many have accepted the notion that teaching effectiveness cannot be evaluated as objectively as research contributions (for which output quantity, frequency of citation, and confidential letters attesting to quality and impact are frequently employed; England, 1996). Some have internalized this notion and made it part of the value system in engineering education: namely, that teaching is less important and less scholarly than research. Promoters of metrics for evaluating teaching must be sensitive to these long-held, very strong convictions and recognize that introducing metrics will represent a major cultural change.


The principles listed below are common to the development of any new system in an organization and can guide the creation of metrics for evaluating teaching:

  • The evaluation system must be compatible with the overall mission, goals, and structure of the institution, because engineering colleges reside within universities and the evaluation of engineering faculty for promotion and tenure will ultimately be conducted by university committees. If metrics are created in isolation, engineering faculty might be judged by one set of criteria in the engineering context and a different set in the context of promotion at the university level. Ideally, therefore, engineering schools should approach their respective institutions to initiate a university-wide discussion of improved metrics for evaluating teaching.

  • The proper locus for developing an effective evaluation system should be the deans and department chairs, or their equivalents. These administrative levels can provide the necessary connections between the institutional administration and the individual faculty members. Deans and department heads can also assist in allocating resources for the design and implementation of an evaluation system that is in concert with the institutional mission, goals, and structure.

  • To ensure acceptance of the evaluation system, faculty members should be integrally involved in its creation (i.e., faculty must believe in the fairness and utility of the evaluation process), and to secure that buy-in they must take part in the discussions from the beginning. Moreover, the discussions themselves, by providing a forum where faculty from different departments can discuss the characteristics and methods of effective teaching, will begin to break down the view of teaching as an isolated activity and reposition it as a collegial one, further legitimizing its value.



Copyright © National Academy of Sciences. All rights reserved.



  • The evaluation system should reflect the complexity of teaching, which includes course design; implementation and delivery of the course; assessment and mechanisms for continuous improvement; and recognition of different learning styles and levels of student ability. Teaching is both a science and an art, and doing it well requires a knowledge base and skills that are usually not well addressed in disciplinary doctoral programs.

  • In the end, the discussion participants must reach consensus on the fundamental elements of effective teaching. Most important, learning[1] should be a key component of any definition, because the outcome of effective teaching is always learning. Other elements include design (e.g., the alignment of clearly articulated objectives/outcomes,[2] assessments,[3] and instructional activities[4]) and implementation (e.g., clear explanations, frequent and constructive feedback, illustrative examples).

  • An evaluation of teaching should include both formative feedback to assist individual improvement and summative evaluation to measure progress toward institutional goals.[5] An evaluation system must identify areas for improvement and provide both opportunities and support for making those improvements. While we believe that faculty evaluation and faculty development should not be programmatically linked (they should not be housed in the same entity or done by the same people), linking the two conceptually sends a clear message that the institution supports faculty growth, which happens only when faculty receive ongoing and constructive feedback.

  • The evaluation system must be flexible enough to encompass various institutional missions, disciplines, audiences, goals, and teaching methodologies. It should also accommodate people on different "tracks" (e.g., some universities have adopted teaching tracks as some faculty gravitate toward expanded teaching roles at different points in their careers). Finally, the system should be flexible enough to acknowledge, encourage, and reward educational experimentation and attempts at educational innovation; a flexible system enables instructors to try new things without worrying that they might be penalized if the outcomes are not immediately positive.

[1] In the context of this report, learning is defined as the knowledge, skills, and abilities, as well as the attitudes, students have acquired by the end of a course or program of study.
[2] Objectives/outcomes are descriptions of what students should be able to do at the end of the course (e.g., analyze, use, apply, critique, construct).
[3] Assessments are tasks that provide feedback to the instructor and the student on the student's level of knowledge and skills. Assessments should be varied, frequent, and relevant.
[4] Instruction includes providing contexts and activities that encourage meaningful engagement by students in learning (e.g., targeted practice).
[5] A formative assessment is typically defined as an ongoing assessment intended to improve performance, in this case faculty teaching (and hence student learning). A summative assessment, typically conducted at the end of instruction (e.g., of a semester or program), is used to determine overall success.

  • Evaluations should be based on multiple sources of information, multiple methods of gathering data, and information from multiple points in time.[6] The evidence collected should be reliable (i.e., consistent and accurate), valid (i.e., measuring what it is intended to measure), and fair (i.e., reflecting the complexity of the educator's achievements and accomplishments).

  • Collecting and analyzing data of this sort often demands skills that may need to be developed further among faculty and administrators. A good way to learn these skills is to enlist the help of colleagues on campus who have expertise in, for example, survey design, qualitative interviewing, or educational outcomes research.

  • A sustainable evaluation system must not impose a burdensome implementation on faculty or administrators. At the same time, it is important to guard against sacrificing the fairness, validity, accuracy, and reliability of the evaluation system in trying to make it as easy to use as possible.

  • The evaluation system itself should be evaluated periodically to determine whether it is effective. These periodic reviews should be part of the development plan, ensuring that evaluations provide both formative feedback that leads to improvements in teaching and data adequate for judging the quality of teaching.

If the system is successful, all stakeholders will recognize that it provides accurate and valuable information that meets the needs of various groups and creates a culture of assessment that drives improvements in teaching and learning. They will also agree that assessment is not done to faculty but by faculty and for faculty, and that it supports continuous improvement in the quality of education. If stakeholders internalize the principles listed above for developing metrics, they will naturally support a culture of assessment.

[6] Both direct and indirect measures should be used. Direct measures (e.g., exams, projects, assignments) show evidence of students' knowledge and skills. Indirect measures (e.g., teaching evaluations) reflect students' perceptions of teaching effectiveness and employers' and alumni's perceptions of how well the program prepares students for their jobs.