Best Practices in Assessment of Research and Development Organizations (2012)

Appendix C

Validating the Assessment

An organizational assessment results, fundamentally, in a set of predictions based on sampling of the characteristics of an organization. The predictions may be about the current characteristics of the organization’s wider activities, staff, or processes based on the set examined; or they may be about the organization’s future relevance or impact based on observed trends. This report identifies guidelines that may be considered and possible measurement methods applicable to the key characteristics of a research and development (R&D) organization. Some measures and criteria may be quantitative, and others may be qualitative, including anecdotal evidence. Just as an organization’s activities can be assessed, so too can the assessment itself be assessed with respect to the validity of its measurement of quality, preparedness (management), and impact.

DEFINITION OF VALIDITY

Validity is the extent to which an assessment measures what it claims to measure. It is vital for an assessment to be valid in order for the results to be applied and interpreted accurately. Validity is not determined by a single statistic, but by a set of parameters that demonstrate the relationship between the assessment and that which it is intended to measure. There are four types of validity—content validity, criterion-related validity, construct validity, and face validity.

Content Validity

Content validity signifies that the items constituting an assessment represent the entire range of possible items that the assessment is intended to address. Individual assessment questions may be drawn from a large pool of items that cover a broad range of topics. For example, to achieve adequate content validity, the projects assessed should be shown, by some clearly defined selection strategy, to represent the wider pool of projects to which the conclusions of the assessment are intended to apply; the same holds for surveys of an organization’s customers.

In some instances when an assessment measures a characteristic that is difficult to define, expert judges may rate the relevance of items under consideration for the assessment. Items that are rated as strongly relevant by multiple judges may be included in the final assessment.
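The report does not prescribe a particular statistic for summarizing such expert judgments, but one widely used option is Lawshe's content validity ratio, which scores each candidate item by the share of judges rating it essential. The sketch below is illustrative only; the item names, judge ratings, and retention threshold are hypothetical.

```python
# Illustrative sketch: aggregating expert judges' relevance ratings into a
# content validity ratio (CVR) per item, following Lawshe's formula
# CVR = (n_e - N/2) / (N/2), where n_e is the number of judges rating an
# item essential and N is the total number of judges. The items, ratings,
# and retention threshold are hypothetical assumptions, not from the report.

def content_validity_ratio(essential_votes: int, n_judges: int) -> float:
    """Lawshe's CVR: -1 when no judge rates the item essential, +1 when all do."""
    half = n_judges / 2
    return (essential_votes - half) / half

# 1 = judge rates the item as strongly relevant/essential, 0 = not essential.
ratings = {
    "publication_quality": [1, 1, 1, 1, 0],
    "facility_adequacy":   [1, 1, 0, 1, 1],
    "staff_morale":        [1, 0, 0, 1, 0],
}

THRESHOLD = 0.5  # illustrative cutoff for retaining an item in the final assessment
for item, votes in ratings.items():
    cvr = content_validity_ratio(sum(votes), len(votes))
    decision = "retain" if cvr >= THRESHOLD else "review"
    print(f"{item:22s} CVR = {cvr:+.2f} -> {decision}")
```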

Criterion-related Validity

An assessment is said to have criterion-related validity when it has demonstrated its effectiveness in predicting criteria or indicators of the characteristics it intends to assess. There are two types of criterion-related validity: concurrent validity and predictive validity.

Concurrent validity is examined when the criterion measures are obtained at the same time as the assessment. This indicates the extent to which an assessment’s measures accurately estimate the organization’s or project’s current state with respect to the criterion. For example, an assessment that measures current levels of customer satisfaction would be said to have concurrent validity if its results agree with the levels of satisfaction actually experienced by the organization’s customers at the time of the assessment. Predictive validity refers to the extent to which the predictions yielded by an assessment turn out to be correct at some specified time in the future. For example, if an assessment yields the prediction that a certain avenue of research will yield a certain outcome, and that avenue is pursued, the accomplishment of the predicted outcome enhances the predictive validity of the assessment.
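Both forms of criterion-related validity can be summarized as the degree of agreement between the assessment's results and an external criterion measure, obtained either at the same time (concurrent) or later (predictive). The sketch below illustrates this with a simple Pearson correlation; the data, variable names, and choice of statistic are assumptions for illustration, not recommendations drawn from the report.

```python
# Illustrative sketch of criterion-related validity as a correlation between
# assessment scores and an external criterion. All data are hypothetical.
import math

def pearson(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Concurrent validity: the assessment's satisfaction scores vs. an independent
# customer-satisfaction survey collected at the same time.
assessment_scores = [3.8, 4.2, 2.9, 4.6, 3.1]
survey_scores     = [3.6, 4.4, 3.0, 4.5, 3.3]
print(f"concurrent validity coefficient: {pearson(assessment_scores, survey_scores):.2f}")

# Predictive validity: ratings of proposed research avenues vs. the outcomes
# observed after the work was actually pursued.
panel_ratings  = [2.0, 4.5, 3.5, 4.0]
later_outcomes = [1.8, 4.7, 3.2, 4.4]
print(f"predictive validity coefficient: {pearson(panel_ratings, later_outcomes):.2f}")
```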

Construct Validity

An assessment has construct validity if the measures on the items assessed correlate well with measures of the same items performed by other assessment methods. For example, if quantitative measures of research productivity (e.g., papers published) correlate well with subjective measures (e.g., expert rating of the productivity of the research), this supports the construct validity of the assessment.
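One way to quantify such convergence is to correlate the two sets of measures directly. The minimal sketch below assumes hypothetical per-group data and uses the correlation function from the Python standard library (available in Python 3.10 and later); it is an illustration rather than a prescribed method.

```python
# Minimal sketch (hypothetical data): checking construct validity by
# correlating an objective productivity measure with independent expert
# ratings of the same research groups. Requires Python 3.10+.
from statistics import correlation

papers_published = [12, 34, 8, 21, 17]        # objective count per research group
expert_ratings   = [2.5, 4.8, 2.0, 3.9, 3.4]  # panel rating of the same groups, 1-5 scale

r = correlation(papers_published, expert_ratings)
print(f"construct validity (convergence of the two methods): r = {r:.2f}")
```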

Face Validity

Face validity is the extent to which the participants in the assessment agree that it appears to be designed to measure what is intended to be measured. For example, if an assessment survey contains many questions perceived as irrelevant by the participants, its face validity will be low.

RELIABILITY OF THE ASSESSMENT

The validity of an assessment instrument depends on its reliability. Types of reliability include inter-rater reliability, test-retest reliability, and parallel-forms reliability. Inter-rater reliability is the extent to which multiple raters of a given item agree; for example, consensus among the members of a peer review committee indicates good inter-rater reliability. Test-retest reliability is the extent of agreement among repeated assessments of an item that has not changed between the assessments. Parallel-forms reliability is gauged by creating two versions of an assessment from a common pool of items covering the same content and randomly dividing the items between the two forms. The two forms are then administered together, and the correlation of their results indicates the parallel-forms reliability.
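As an illustration of how two of these reliability checks might be quantified, the sketch below computes chance-corrected agreement between two raters (Cohen's kappa) and a test-retest correlation for unchanged projects. The reviewers, projects, and scores are hypothetical, and neither statistic is mandated by the report.

```python
# Hypothetical sketch of two reliability checks: inter-rater agreement
# between two reviewers (Cohen's kappa) and test-retest reliability as the
# correlation between two assessment cycles of unchanged projects.
import math
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

def pearson(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Inter-rater reliability: two peer reviewers scoring the same six projects.
reviewer_1 = ["high", "high", "medium", "low", "medium", "high"]
reviewer_2 = ["high", "medium", "medium", "low", "medium", "high"]
print(f"inter-rater reliability (kappa): {cohens_kappa(reviewer_1, reviewer_2):.2f}")

# Test-retest reliability: the same projects scored in two assessment cycles
# during which the projects themselves did not change.
first_cycle  = [4.1, 3.2, 2.8, 4.5, 3.9]
second_cycle = [4.0, 3.4, 2.9, 4.4, 3.7]
print(f"test-retest reliability (r):     {pearson(first_cycle, second_cycle):.2f}")
```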

EFFICIENCY AND IMPACT OF THE ASSESSMENT

Efficiency and impact are also key aspects of an effective assessment. Factors related to the efficient conduct of an assessment include its cost in terms of money and time, burdens perceived by those being assessed, and timeliness of reported findings. Factors relating to the impact of an assessment include the extent to which the recipients of the assessment implement the advice provided in the assessment, the extent to which the assessment findings are distributed to those who should receive them, and the content of the feedback from those who receive the findings.

