Suggested Citation:"3 ASSUMPTIONS." National Academy of Engineering. 2009. Developing Metrics for Assessing Engineering Instruction: What Gets Measured Is What Gets Improved. Washington, DC: The National Academies Press. doi: 10.17226/12636.

3 ASSUMPTIONS

The basic, critical assumption underlying this report is that a well-developed, meaningful mechanism for evaluating instructional effectiveness will improve both teaching and learning. This assumption is based on the common understanding that faculty (like most individuals) respond in accordance with how their efforts are rewarded. As stated earlier, the perception is that the current system for evaluating faculty for promotion and tenure is heavily weighted in favor of research (scholarly and creative activities), with relatively low weight given to teaching. This imbalance reflects the fact that in "the market" of higher education, effective teaching, unlike research, is not rewarded with advancement and prestige. Another reason for the imbalance may be that the methods used to evaluate teaching effectiveness are not well developed or widely understood and, in most cases, have not been adopted at the institutional level. Under these circumstances, administrators may be understandably reluctant to give significant weight to an assessment whose validity and accuracy are uncertain, or even suspect.

Another significant underlying assumption is that all faculty members are capable of improving their teaching. Just as researchers must constantly update their knowledge and methodologies, instructors should continue to "update" their teaching practices based on developments in learning and pedagogy and on feedback on their teaching skills. A closely linked assumption is that many faculty members are intrinsically motivated to improve their teaching; they may therefore welcome feedback, both formative and summative, if they believe it will improve their teaching effectiveness.
Of course, the committee is aware that priorities among the demands on faculty for research, service, and personal life, as well as teaching, differ among types of universities, from university to university within a type, from department to department, from individual to individual, and even from time to time. Some may question whether all, or even most, engineering educators have an intrinsic desire to improve their teaching. Certainly, the responses of some faculty members to teaching evaluations seem to exhibit more cynicism than intrinsic motivation. However, faculty members are typically high achievers who are concerned with how they rank in comparison with similarly evaluated peers. Therefore, we assume that when faculty members feel that the information they receive from teaching evaluations is appropriately informative, they will use it to improve their teaching. The crucial factor, then, is that faculty members must believe that an evaluation system is appropriately informative. Although some faculty may appear not to welcome feedback on their teaching, they are likely reacting within the context of current promotion, tenure, and evaluation systems. A performance evaluation must be perceived as accurate and fair if the individuals being evaluated are to welcome the experience and try to improve their performance by changing their teaching practice. Of course, even a system perceived to be "unfair" may still lead to changes in behavior, provided the outcome of the evaluation is sufficiently threatening. However, we are more interested in developing an evaluation system that motivates change because it is fair and informative rather than because it is threatening.

Whereas the accuracy of such instruments is broadly understood and does not warrant in-depth discussion here, the issue of fairness requires closer definition. The perception of fairness cannot be separated from the egocentrism of the person being evaluated. A study by Paese, Lind, and Kanfer (1988) found that pre-decision input from those who will be judged in an evaluation process leads them to judge the system procedurally fair. However, many other investigators have demonstrated that, even for those who have had input into developing the process, perceptions of fairness are linked, consciously or not, to an individual's interests and needs (Van Prooijen, 2007). Thus a sense of fairness is significantly affected by whether individuals believe they may benefit from an action or, even more important, whether they will be disadvantaged by it. All individuals, then, even those who had input into the development of an evaluation process, may initially or eventually consider the system unfair, depending on how it influences decisions that affect them.

To implement a more effective and valuable assessment program, we might adapt to instruction a practice commonly used to increase competence in the evaluation of research proposals and journal articles: systematically engaging graduate students and junior faculty in evaluating the various types and aspects of teaching effectiveness. Their reviews of teaching would then be evaluated by senior faculty as a way of providing valuable feedback and constructive criticism on the quality and comprehensiveness of the reviews.
The time and effort of graduate students and junior faculty pay off by raising their understanding of the research, teaching, and reporting process as a whole. At the same time, their efforts ensure that future cadres of effective reviewers and researchers will be available. Similar efforts could be made to increase competency in instructional evaluation by enlisting senior faculty with teaching expertise, together with graduate students and junior faculty, to develop the latter's capabilities as evaluators of instructional effectiveness. Such an investment would apply the approach used to foster continuous improvement in research techniques through the advising and mentoring of graduate students and junior faculty, not only ensuring that more, and more capable, individuals had experience in assessing instructional effectiveness, but also creating a large cadre of faculty with exposure to the concepts of instructional design and delivery and a better understanding of the field of instructional research.

Our final assumption is that administrators and campus reviewers will do their jobs fairly and objectively, including making appropriate assignments, communicating university and program expectations, and using the data collected from evaluations to make fair and accurate judgments of performance, both to encourage professional development and to inform job-advancement decisions. This assumption presumes a great deal of trust and requires some further explanation. The ultimate goal of evaluating teaching is to provide feedback to individuals (in both formative and summative formats) as a basis for gauging their effectiveness in meeting institutional and program expectations and then continuously improving their teaching performance to satisfy their intrinsic desire for excellence. To accomplish this goal, the individuals being evaluated must depend on a team of people to gather and analyze data in a way that they trust will produce accurate and fair results. As Lencioni (2002) points out, no team can function effectively without trust. In university settings, administrators cannot create an environment of trust by themselves, but they can be crucial players in maintaining it. To engender trust in the teaching evaluation process, administrators and campus reviewers should do the following:

1. Assign faculty to teach only in areas in which they have, or can readily develop, the expertise to teach at an appropriate level.

2. Ensure that an evaluation of an individual's teaching performance is considered in the correct context, including expected outcomes for student learning, the level of students in the course, whether the course is required or elective, the size of the classes, the nature of the available facilities, and the instructor's past experience in the teaching situation.

3. Use complex social data, such as teaching evaluations, in accordance with well-documented social science practices that have established appropriate interpretations and limitations for deriving results.

4. Show that they are using the evaluation process to develop and advance faculty members fairly.
