Suggested Citation:"5 Assessment and Evaluation of Ethics Education and Mentoring." National Academy of Engineering. 2009. Ethics Education and Scientific and Engineering Research: What's Been Learned? What Should Be Done? Summary of a Workshop. Washington, DC: The National Academies Press. doi: 10.17226/12695.

5 Assessment and Evaluation of Ethics Education and Mentoring

The following background questions provided a context for Session III, Outreach and Assessment: Are relevant and important materials and techniques reaching the appropriate audiences? Who are the appropriate audiences, and are there useful feedback loops from them to the developers of materials, techniques, and guidance? Are the audiences able to adapt or adopt these resources? What efforts might improve access, use, and feedback and improvement? What kinds of assessment have been developed, make sense, or should be encouraged for the future? What have we learned, and what do we need to learn?

Felice Levine, executive director, American Educational Research Association (AERA), moderated this session. Speakers were Melissa Anderson, professor, Department of Educational Policy and Administration, University of Minnesota, Minneapolis; Daniel Denecke, head of the Best Practices and Publications Program, Council of Graduate Schools; and Joseph Whittaker, dean, School of Computer, Mathematical and Natural Sciences, Morgan State University. The respondents were NAS member W. Carl Lineberger, professor, Department of Chemistry and Biochemistry, University of Colorado, Boulder; and Charles Huff, professor, Psychology Department, St. Olaf College. (Brian Schrag, executive secretary, Association for Practical and Professional Ethics, had also been scheduled to make a presentation but was unable to attend.)

One of the speakers in Session I, Michael Mumford, University of Oklahoma, also addressed the issue of assessment in reviewing the work of his research team, which compared results from its “sensemaking” training with other kinds of ethics training. Using a case-based pre/post measure, the team found that interactive “sensemaking” instruction had more positive results than some other approaches. Mumford reported that an evaluation of research-ethics courses at a number of research-intensive universities showed that instruction given as part of regular classes that did not include interactive activities was generally not effective. In some cases, he said, this kind of instruction even had negative impacts on ethical decision making in four areas of research conduct: data management, the conduct of a study, professional practices, and business practices.

Melissa Anderson, University of Minnesota, Minneapolis, reported on her research team’s survey of more than 7,000 early- and mid-career NIH-funded scientists. Very few of the survey respondents reported that they had engaged in any fabrication, falsification, or plagiarism in the three years prior to taking the survey, but many indicated engaging in questionable research practices. A majority of mid-career scientists reported that they had cut corners or made inappropriate use of funds in those years. For both early- and mid-career scientists, the research indicated significant associations between these questionable practices and environmental factors, such as competitiveness, counter-norms (e.g., secrecy and self-interestedness), and perceived injustices in the research environment.

The survey results also indicated limited positive influence of ethics education on research behaviors, whether the instruction had been given in a separate course or was combined with other research training. The self-reports from early-career NIH-funded scientists even indicated a negative relationship between separate ethics instruction and good data-handling practices. In addition, the results indicated that the influence of mentoring depended on the type of mentoring. Mentoring focused on research ethics, good practice, and personal assistance was associated with a decrease in questionable behavior, but mentoring for survival (or how to get ahead in your field) was associated with an increase in questionable behavior.

Anderson recommended that laboratories and other research locations adopt a principle of “collective openness” that would require participants to encourage “anybody at any time [to] ask questions about any . . . work or how it is done . . . [and] raise questions so that mistakes, oversights, and misbehavior will . . . be caught.” Operating in accordance with this principle, she argued, would ensure that research behavior could “stand up to scrutiny” and meet the standards of “scientific integrity.” Readers can find citations to this work at http://cehd.umn.edu/EdPA/People/Anderson.html.

The next speaker, Daniel Denecke of CGS, reported that the 10 universities that participated in the first CGS project on ethics research (funded by NIH) found assessment to be a major challenge, largely because of the difficulty of finding or developing measures of student learning. Denecke said assessments should also measure the institutional climate for integrity (which might explain differences between faculty and student perceptions) and the effectiveness of curricular reforms. The 10 participating universities assessed the effectiveness of efforts to get faculty buy-in rather than student learning.

The eight universities that participated in the second CGS project (funded by NSF) had some features in common, such as online modules, but they also developed their own activities and, especially, their own assessment strategies. Although a comparative assessment for these universities would have been helpful, Denecke said, the short lifespan of the project and the diversity of approaches had made that impossible. He then described a new project that will have three layers of assessment. Measures of student learning will be left to the institutions, but the other two measures will be based on common instruments, one to assess student and faculty perceptions of cultural changes in their institutions and one to assess how well practices put in place for the project worked during the project and afterward, and to identify mid-course adjustments.

At various times in discussions throughout the meeting, workshop participants remarked that assessments of ethics instruction and mentoring were at an early stage of development and that determining and adopting appropriate, consistent measures of success would not be easy. Even measures of student satisfaction and pre/post-test achievement differentials, which are relatively easy to obtain, do not show whether the right things are being measured or whether students can call on what they have learned later, when needed. In addition, many assessment instruments have not been validated, and instructional methods may not always be appropriate for the target audience.

In the general discussion following this session, participants identified areas in need of further research, such as multi-level assessment that would include both individual outcomes and institutional changes over the short and long term. Among the commonly accepted, or at least usable, measures, the group named measures of broad-based faculty and departmental involvement at the institutional level, and measures of improvements in reasoning ability and other skills and knowledge at the individual level.

Some discussion participants noted that new, expanded, or revised programs offered by professional societies and accreditation bodies could provide another kind of measure. Felice Levine of the American Educational Research Association suggested that questions might be embedded in ongoing periodic research surveys; for example, NSF could add an ethics question to its graduate student/postdoctoral survey. Several participants suggested that compliance officers in industry and academia might be asked to describe their experiences with different approaches to ethics education and to identify needs for further research.

The group was generally encouraged that attempts at assessment were being made and that the need for assessment had been recognized, if only in response to the new requirements of funding agencies, such as NSF. Many participants noted the urgent need for better assessment tools and a “menu” of choices to guide principal investigators who want to incorporate ethics training into their research programs, including assessments of training programs and “train-the-trainers” programs to determine their consistency and effectiveness. Some members of one discussion group had floated the idea of national standards or certification but did not have time to pursue the idea in detail. Charles Huff of St. Olaf College also mentioned a variety of available measurement tools that might be adapted to ethics education, ranging from tests of personality, to tests of the recognition of ethical issues and knowledge of approaches to their resolution, to organizational ethical climate scales. See http://www.stolaf.edu/people/huff/info/Papers/Good.Computing.P1.doc.
