6

Summative Evaluation

This chapter addresses the following key study question:

Key Question #4. To what extent are the final outputs from NIDRR grants of high quality?

The chapter answers this central question and also provides an assessment of the methods used by the committee to conduct the summative evaluation. The scope and methods used to conduct the External Evaluation of the National Institute on Disability and Rehabilitation Research (NIDRR) and its Grantees were described earlier in Chapter 2. The first section of this chapter elaborates on the methods to evaluate the quality of outputs. The second section describes the results of the assessment of grant outputs and provides recommendations for improving the quality of outputs. The final section presents the committee’s self-assessment of the methods and recommendations for future evaluations.

SUMMARY OF METHODS DEVELOPED FOR ASSESSING THE QUALITY OF OUTPUTS

The methods and procedures developed by the committee for assessing the quality of outputs involved first determining the criteria and dimensions to be used for the assessment. Second, a questionnaire was developed to assist grantees in nominating outputs for review and to elicit supplemental descriptive information about those outputs. Third, a sampling plan was



developed for selecting grantees who would be invited to participate in the evaluation. Fourth, the committee and staff worked with grantees who agreed to participate in the evaluation, and gathered and cataloged the outputs and supplemental information submitted for the committee’s review. Finally, the committee assessed the outputs through an expert review process.

Development of Quality Criteria

A key element of the summative evaluation was the response to NIDRR’s request to develop criteria for assessing the quality of its grantees’ outputs. In developing these criteria, the committee drew on its own research expertise, recommendations of the external advisory group convened by NIDRR in the course of planning this evaluation (National Institute on Disability and Rehabilitation Research, 2008), and methods used in other National Research Council (NRC) and international studies that have evaluated federal research programs (Bernstein et al., 2007; Chien et al., 2009; Ismail et al., 2010; National Research Council and Institute of Medicine, 2007; Panel on Return on Investment in Health Research, 2009; Wooding and Starkey, 2010; Wooding et al., 2009).

Quality Criteria

Four criteria were developed for the evaluation of grantee outputs: (1) technical quality, (2) advancement of knowledge or the field, (3) likely or demonstrated impact, and (4) dissemination.

Technical quality The technical quality of outputs was assessed using dimensions that included the application of standards of science and technology, appropriate methodology (quantitative or qualitative design and analysis), and the degree of accessibility and usability.
Advancement of knowledge or the field (e.g., research, practice, or policy as relevant) The dimensions used to assess this criterion included scientific advancement of methods, tools, and theory; the development of new information or technologies; closing of an identified gap; and use of methods and approaches that were innovative or novel.

Likely or demonstrated impact This criterion was used to assess the likely or demonstrated impact of outputs on science (impact, citations), consumers (health, quality of life, and participation for people with disabilities), provider practice, health and social systems, social and health policy, or the private sector/commercialization.

Dissemination Dimensions of dissemination assessed included the identification and tailoring of materials for reaching different audience/user types; collaboration with audience/users in identifying content and medium needs/preferences; delivery of information through multiple media types and sources for optimal reach and accessibility; evaluation of dissemination efforts and impacts; and commercialization/patenting of devices, if applicable.

Scale Developed for Rating the Criteria

For the output ratings, the quality scale used by the committee was substantively different from the opinion scale used in the evaluation reported in earlier chapters for surveys of stakeholder organizations and peer reviewers. A 7-point scale was used to rate the criteria at varying levels of quality, where 1 indicated poor quality, 4 indicated good quality, and 7 indicated excellent quality. The committee deliberated at length in determining what the midpoint score (4) would represent on the quality scale and decided that the midpoint should be “meeting expectations for good quality.” This midpoint anchor description was operationalized for assessing the technical quality of publications, which made up 70 percent of the outputs reviewed. For publications, a rating of 4 was generally assigned to journal articles that were published in peer-reviewed journals based on the fact that they had already been peer reviewed and met the scientific standards of their respective fields of research or development.
However, articles could be rated higher or lower than 4 if, after review, their quality was determined to be higher or lower than “good.” For other output categories (tools, devices, or informational products), there was no such common way to operationalize the midpoint anchor, but the committee applied its expert judgment in determining ratings for these other outputs relative to the standard applied to the publications category.

Box 6-1 provides examples of quality indicators considered by committee members in determining scores for each criterion. These examples are not intended to be exhaustive but to illustrate the attributes of outputs that were considered in their review. In rating the outputs, committee members drew on their scientific expertise to consider the outputs’ quality with respect to the dimensions within each criterion (see the above discussion). (More information on the review procedures is presented later in this section; the review procedures guide and output rating sheet used by committee members are included in Appendix B.)

BOX 6-1
Examples of Quality Indicators Considered in Determining Output Scores

Technical Quality
• Strength of literature review and framing of issues
• Competence of design, considering the research question and other parameters of the study
• Quality of measurement planning and description
• Analytic methods and interpretation; degree to which recommendations for change are drawn clearly from the analysis
• Description of feasibility, usability, accessibility, and consumer satisfaction testing

Advancement of Knowledge or the Field
• Degree to which a groundbreaking and innovative approach is presented
• Application of a formal test of a hypothesis regarding a technique used widely in the field to improve practice
• Level of advancement and improvement of current classification systems
• Usefulness of descriptive base of information about factors associated with a condition
• Novelty of ways of studying a condition that can be applied to the development of new models, training, or research

Likely or Demonstrated Impact
• Degree to which the output is well cited or has promise to be (for newer articles)
• Potential to improve the lives of persons with disabilities by increasing accessibility
• Possibly transformative clinical and policy implications
• Potential for building capacity, lowering costs, commercialization, etc.
• Influence on the direction of research, use in the field, or capacity of the field

Dissemination
• Method and scope of dissemination
• Description of the evidence of dissemination (e.g., numbers distributed to different audiences)
• Level of strategic dissemination to target audiences when needed
• Evidence of reaching the target audience
• Degree to which dissemination used appropriate multiple media outlets, such as webinars, television coverage, Senate testimony, websites, DVDs, and/or social network sites

SOURCE: Generated by the committee.

Grantee Questionnaire

NIDRR supplied the committee with information gathered from grantees in their Annual Performance Reports (APRs) (Research Triangle International, 2009). Grantees are required to complete an APR annually to report on their progress. At the end of a grant, they must complete a final report. To supplement the APR information provided by NIDRR, the committee developed a grantee questionnaire (see Appendix B). The first part of the questionnaire asked grantees to list all projects under the grant and nominate the top two outputs from each project that reflected their grant’s best achievements. The questionnaire specified that outputs were to be drawn from the four categories defined in NIDRR’s APR (Research Triangle International, 2009):

• publications (e.g., research reports and other publications in peer-reviewed and nonpeer-reviewed publications);
• tools, measures, and intervention protocols (e.g., instruments or processes created to acquire quantitative or qualitative information, knowledge, or data on a specific disability or rehabilitation issue, or to provide a rehabilitative intervention);
• technology products and devices (e.g., industry standards/guidelines, software/netware, inventions, patents/licenses/patent disclosures, working prototypes, product(s) evaluated or field tested, product(s) transferred to industry for potential commercialization, product(s) in the marketplace); and
• informational products (e.g., training manuals or curricula, fact sheets, newsletters, audiovisual materials, marketing tools, educational aids, websites or other Internet sites produced in conjunction with research and development, training, dissemination, knowledge translation, and consumer involvement activities).

The instructions for the questionnaire indicated that the committee would prefer to review one publication and one other type of output for each project within their grants, but that grantees could select two publications if that was the only type of output for a project.
The questionnaire asked the grantees to submit the actual outputs for the committee’s review. If the output was a website, a tool, or a technology device that had to be demonstrated, grantees were asked to provide descriptive information, pictures, or links to websites for the committee’s direct review.

The second part of the questionnaire included a series of questions designed to elicit more in-depth description of an output when needed and to provide supplemental information on the output’s technical quality, how it advanced knowledge or practice, its likely or demonstrated impact, and how it was disseminated. This type of information, needed for a comprehensive assessment of the output, would not always be apparent in reviewing the output in isolation. For technical quality, grantees were asked to describe examples, such as the approach or method used in an output’s development; relevant peer recognition; receipt of a patent, approval by the Food and Drug Administration, or use of the output in standards development; and evidence of the output’s usability and accessibility. For advancement of knowledge or the field, grantees were asked to discuss the importance of the original question or issue and describe how the output advanced knowledge in such arenas as making discoveries; providing new information; establishing theories, measures, and methods; closing gaps in the knowledge base; and developing new interventions, products, technology, and environmental adaptations. For likely or demonstrated impact, grantees were instructed to describe the output’s potential or actual impact on science, people with disabilities, provider practice, health and social systems, social and health policy, the private sector/commercialization, capacity building, and any other relevant arenas. Under dissemination, grantees were asked to describe the stage and scope (e.g., local, regional, national) of dissemination efforts, specific dissemination activities, any identification and tailoring of materials for particular audiences, efforts to collaborate with particular audiences or user communities to identify content and medium needs and preferences, and the delivery of information through multiple media types. Grantees were also asked to provide information from evaluations of their dissemination efforts and impacts that they may have conducted (e.g., results of audience feedback or satisfaction surveys).

The committee piloted the questionnaire on one NIDRR grant that had ended in 2008 and was outside the sampling pool (described below). Subgroups of the committee assessed five outputs of this grant, which consisted of two publications, an assessment package, a working prototype, and a fact sheet; discussed results; and adapted the questionnaire by collapsing some of the dimensions from an original set of six criteria into the four final criteria.1

1 An original criterion on output usability was collapsed into the final technical quality criterion. Another original criterion on consumer and audience involvement was restructured as dimensions of the other criteria. For example, the technical quality criterion now includes a dimension on “evidence of usability and accessibility”; the impact criterion includes a dimension on “impact on people with disabilities”; and the dissemination criterion includes a dimension on “tailoring materials to audiences” and “collaboration with users.”

To supplement the grantee questionnaire in assessing the likely impact of published articles, the committee used such sources as Scopus and the Web of Science to determine the journal impact factor and the number of citations of a particular article.

Sampling

NIDRR provided the committee with a data set of grantee information that consisted of all grants ending in years 2006 to 2010 (N = 248). Included in that data set was extensive information on all of the outputs produced by
all NIDRR grantees, which NIDRR routinely collects. The committee sampled from that larger data set with no involvement of NIDRR staff in which grants were selected. The committee was directed by its charge to draw a sample of 30 grants ending in 2009 that was representative of NIDRR’s 14 program mechanisms. As shown in Table 6-1, there were 107 grants that ended in 2009. As displayed in the table, however, a number of program mechanisms did not have at least 2 grants ending in 2009: Burn Model System (BMS), Spinal Cord Injury Model System (SCIMS), Traumatic Brain Injury Model System (TBIMS), Disability and Business Technical Assistance Center (DBTAC), Knowledge Translation (KT), Advanced Rehabilitation Research Training (ARRT), and Section 21.

Because the BMS, SCIMS, and TBIMS program mechanisms support some of NIDRR’s flagship programs, the committee adjusted the sampling pool to ensure that these grants would be included in the sample. The committee went back to the most recent year in which at least two grants under these program mechanisms ended, which was 2008 for BMS (N = 5) and TBIMS (N = 9, with 1 in 2009 and 8 in 2008) and 2007 for SCIMS (N = 9), and included these grants in the pool. The DBTAC, KT, ARRT, and Section 21 program mechanisms were excluded from the pool for this first evaluation cycle. Small Business Innovation Research I (SBIR-I) grants also were excluded from the sampling pool because they do not produce “outputs” and therefore did not align with the evaluation parameter of reviewing two outputs for each project within a grant. After these adjustments, the total pool consisted of 111 grants across nine NIDRR program mechanisms, shown in the highlighted cells of Table 6-1. The older grants included in the evaluation may have had an advantage over the grants ending in 2009 because of the additional time for their outputs to have had an impact.

TABLE 6-1 Number of NIDRR Grants Ending in 2007 to 2009, with Grants Included in Sampling Pool Highlighted

Program Mechanism                                        2007  2008  2009
Burn Model System (BMS)                                    0     5     0
Traumatic Brain Injury Model System (TBIMS)                7     8     1
Spinal Cord Injury Model System (SCIMS)                    9     0     0
Rehabilitation Engineering Research Center (RERC)          0     0     8
Rehabilitation Research and Training Center (RRTC)         0     0    10
Disability and Rehabilitation Research Project-
  General (DRRP)                                           0     0    14
Field Initiated Project (FIP)                              0     0    36
Small Business Innovation Research I (SBIR-I)              0     0    16
Small Business Innovation Research II (SBIR-II)            0     0     8
Disability and Business Technical Assistance Center
  (DBTAC)                                                  0     1     0
Knowledge Translation (KT)                                 0     0     0
Advanced Rehabilitation Research Training (ARRT)           0     0     1
Switzer Fellowship                                         0     0    12
Section 21                                                 0     1     1
Total Grants in Years Ending in 2007, 2008, 2009          16    15   107
Total Grants Included in Sample (N = 111)                  9    13    89

SOURCE: Generated by the committee based on data from the NIDRR grantee database.

From this pool of 111 grants, 30 grants (27 percent) were randomly selected for review in the following way. To balance the desire for the sample of grants to represent the nine program mechanisms included in the pool, the committee stratified the sampling at the program-mechanism level as a proportion of all grants in the sampling pool. For example, there were 36 Field Initiated Project (FIP) grants in the sampling pool, as shown in Table 6-1, representing 32 percent of all of the grants in the sampling pool (N = 111); therefore, 32 percent of the 30 grants in the sample (N = 10) should be FIP. The 36 FIP grants in the sampling pool were numbered 1 through 36, and 10 FIP grants were randomly selected using a website that generated random numbers. A table in the next section shows the number of grants included in the sample by program mechanism.

Once the proposed evaluation methods had been approved by the Institutional Review Board of the National Academies, the sample of 30 grants was drawn, and invitations to participate were sent to the principal investigators of those grants.
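The stratified, proportional draw described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the committee's actual procedure: the mechanism abbreviations and pool counts come from Table 6-1, but the rounding rule (largest remainder) and the random-number source are assumptions, since the chapter does not specify how fractional quotas were resolved.

```python
import math
import random

# Sampling pool by program mechanism (nine mechanisms, N = 111;
# counts from the highlighted cells of Table 6-1).
POOL = {
    "BMS": 5, "TBIMS": 9, "SCIMS": 9, "RERC": 8, "RRTC": 10,
    "DRRP": 14, "FIP": 36, "SBIR-II": 8, "Switzer": 12,
}

def stratified_sample(pool, sample_size, seed=None):
    """Allocate the sample across strata in proportion to pool size,
    then draw grants at random within each stratum."""
    rng = random.Random(seed)
    total = sum(pool.values())
    quotas = {m: sample_size * n / total for m, n in pool.items()}
    alloc = {m: math.floor(q) for m, q in quotas.items()}
    # Largest-remainder rounding (an assumption) so the per-stratum
    # counts sum exactly to sample_size.
    shortfall = sample_size - sum(alloc.values())
    for m in sorted(quotas, key=lambda m: quotas[m] - alloc[m],
                    reverse=True)[:shortfall]:
        alloc[m] += 1
    # Number the grants in each stratum 1..n and draw without
    # replacement, mirroring the committee's use of generated
    # random numbers.
    return {m: sorted(rng.sample(range(1, pool[m] + 1), k))
            for m, k in alloc.items()}

sample = stratified_sample(POOL, 30, seed=2009)
```

Run on the Table 6-1 counts, the allocation reproduces the chapter's FIP example (36/111 of the pool, hence 10 of the 30 sampled grants); the counts for the other mechanisms depend on the rounding rule, which the chapter leaves unstated.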
The principal investigators were fully informed about the methods to be used in the evaluation and what would be required of them. Of the original 30 grantees invited, 2 declined because they did not have time to fulfill the evaluation requirements and 1 because of a change in institutions. Three additional grants were then randomly selected from the pool to bring the final sample to 30. The committee acknowledges that bias from self-selection could have caused the final sample of 30 grantees that participated in the evaluation to be less representative of the larger population of grants.

Compiling Outputs to Be Reviewed and Number of Outputs Reviewed

The questionnaire described above was sent to the 30 grantees who agreed to participate in the study. As noted, the principal investigators of the grants included in the sample were given written instructions for submitting their outputs for the evaluation and providing supplemental information about the outputs. Committee staff worked with the grantees to clarify the instructions and to encourage them to submit their output packages. Because some grants had ended several years before the evaluation (2007 and 2008 for the Model System grants), some grantees had difficulty submitting materials because the principal investigators had changed institutions or departments within the same university or had other competing priorities during the time period of our review. Staff accommodated these principal investigators by providing additional time to submit their materials and in five cases by assisting them in completing the questionnaires through telephone interviews. Two grantees did not respond to the supplemental questionnaire.

As described above, grantees received questionnaires on which they were asked to list each project under their grant and identify two outputs per project to be reviewed by the committee. They were asked to identify the “top” two outputs per project that reflected their grant’s best achievements. As noted, to permit assessment of outputs beyond journal publications, grantees were asked to offer at least one non-journal publication output per project if such outputs were available. The number of projects for each grant varied by size, from 1 for small FIP to 10 for larger center grants.

A total of 156 outputs were submitted for review across the 30 grants in the sample. Eight outputs were considered highly related to other outputs, and they were reviewed together with those other outputs. This occurred when one output was a derivative or different expression of another and when the principal investigator responses to criteria questions were basically the same. Therefore, the total number of outputs for analysis was 148. Table 6-2 presents the number of grants included in the sample by program mechanism and the types of outputs that were reviewed.
To place the outputs reviewed into the larger context of the outputs produced by grantees in the sampling pool of 111 grants, Table 6-2 also shows that the proportions of publications and other outputs (tools, technology, and informational products) reviewed by the committee were relatively close to the proportions of the various output types produced by grantees in the larger sampling pool. The proportion of publications reviewed was somewhat lower at 70 percent (versus 76 percent in the sampling pool), and the proportion of informational products reviewed was somewhat higher at 18 percent (versus 11 percent in the sampling pool).

Review Process

The committee members, whose expertise encompasses social sciences, rehabilitation medicine, engineering, evaluation, and knowledge translation, were divided into three groups of five members each. The subgroups were organized to ensure that outputs would be reviewed by a group of individuals with the collective expertise necessary to judge their quality.

TABLE 6-2 Number of Grants and Distribution of Outputs Reviewed by Program Mechanism

NIDRR Grant Category and                                          Informational
Program Mechanism                 Grants  Publications  Tools  Technology  Products  Total
Model System Grants
  Burn Model System (BMS)            2        12          2        0          4      18 (12%)
  Traumatic Brain Injury Model
    System (TBIMS)                   2        12          0        0          2      14 (10%)
  Spinal Cord Injury Model
    System (SCIMS)                   2        11          0        0          0      11 (7%)
Center Grants
  Rehabilitation Research and
    Training Center (RRTC)           3        16          0        0         12      28 (19%)
  Rehabilitation Engineering
    Research Center (RERC)           2        16          2        5          3      26 (18%)
Research and Development Grants
  Disability and Rehabilitation
    Research Project-General
    (DRRP)                           4        13          4        0          5      22 (15%)
  Field Initiated Project (FIP)     10        17          1        3          1      22 (15%)
  Small Business Innovation
    Research II (SBIR-II)            2         1          0        1          0       2 (1%)
Training Grants
  Switzer Fellowship                 3         5          0        0          0       5 (3%)
Total and Proportion of Output
  Types in Sample                   30   103 (70%)    9 (6%)   9 (6%)   27 (18%)    148
Total and Proportion of Output
  Types in Sampling Pool           111  1,060 (76%) 101 (7%)  84 (6%)  148 (11%)  1,393

SOURCE: Generated by the committee based on data from the grantee questionnaire and NIDRR grantee database.
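The sample-versus-pool percentages in Table 6-2 follow directly from the raw counts in its bottom rows. A quick check (variable and function names are ours, for illustration) reproduces the 70 versus 76 percent figures for publications and 18 versus 11 percent for informational products:

```python
# Output counts by type, from the two total rows of Table 6-2.
SAMPLE_COUNTS = {"publications": 103, "tools": 9,
                 "technology": 9, "informational": 27}    # total 148
POOL_COUNTS = {"publications": 1060, "tools": 101,
               "technology": 84, "informational": 148}    # total 1,393

def percentages(counts):
    """Each output type's share of the total, rounded to whole percent."""
    total = sum(counts.values())
    return {k: round(100 * v / total) for k, v in counts.items()}

# percentages(SAMPLE_COUNTS) gives publications 70 and informational 18;
# percentages(POOL_COUNTS) gives publications 76 and informational 11,
# matching the proportions reported in the text.
```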

The subgroups were convened on three occasions: in October 2010, December 2010, and February 2011. Because of the relatively short time period available to conduct the reviews, grants were scheduled for review according to size, with the smaller grants being invited first (e.g., FIP, Switzer, SBIR-II) and the larger grants (DRRP, Model System grants, center grants) being invited to participate in the later rounds. The rationale for this approach was that the smaller grants had fewer outputs and would require less preparation time for the review than the larger grants, which had many projects and more outputs so that more preparation time was required. Therefore, the content of the grants tended to be mixed during each round of reviews, necessitating a corresponding mix of expertise in each subgroup. As noted, however, efforts were made to match the expertise of the reviewers in each subgroup with the outputs they would be reviewing (e.g., technology outputs were assigned to a subgroup with engineering expertise). The review procedures are described in detail in Box 6-2.

The committee’s expert review involved consideration and assessment of the multiple quality dimensions of the outputs, a process that has been recommended as a valid method for evaluating the relevance and quality of federal research programs (National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, 1999). The seven-point rating scale was used to describe the results of the output assessment more precisely in terms of the varying levels of quality. During the reviews, the committee members frequently discussed how they were applying the criteria and interpreting the anchors of the rating scale so they could calibrate their ratings. In addition, brief narrative statements were written summarizing the rationale for the subgroups’ ratings of each output.
These statements were reviewed after the ratings had been completed to identify attributes that particularly characterized the varying levels of quality and were helpful in further exemplifying the dimensions of the criteria.

Although the final scores used to report results of the output assessment were based on subgroups’ consensus scores, the committee conducted an interrater reliability analysis of their initial independent ratings (i.e., raw scores before the subgroup discussions) to determine the extent to which the individual committee members were using and interpreting the scale in the same way. The interrater reliability analysis was conducted, using methods suggested by MacLennan (1993), for more than two raters with ordinal data. This method calculates an intraclass correlation coefficient (ICC) that represents an average correlation among raters.

The interrater reliability analyses were run on 15 grants for which at least 3 outputs were reviewed by the subgroups. The analyses could not be run with fewer than 3 outputs, and only 15 grants had 3 or more outputs reviewed. The ratings compared were the individual committee members’ raw scores (before discussion) on each of the criteria. According to Yaffee
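The chapter does not reproduce MacLennan's (1993) formulas, so the following is only a generic sketch of a one-way random-effects intraclass correlation for k raters scoring n outputs, computed from the one-way ANOVA mean squares; the committee's exact computation may have differed.

```python
from statistics import mean

def icc_oneway(scores):
    """One-way random-effects ICC(1,1) for n targets rated by k raters.

    `scores` is a list of rows, one per rated output; each row holds the
    k raters' raw (pre-discussion) ratings on one criterion.
    """
    n = len(scores)       # number of outputs (targets)
    k = len(scores[0])    # number of raters
    grand = mean(v for row in scores for v in row)
    row_means = [mean(row) for row in scores]
    # Between-target and within-target mean squares from one-way ANOVA.
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((v - m) ** 2
                    for row, m in zip(scores, row_means)
                    for v in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Five raters in full agreement on every output yield an ICC of 1.0, and disagreement within outputs pulls the coefficient toward 0; the requirement of at least 3 outputs per grant corresponds to needing n - 1 ≥ 2 degrees of freedom between targets for a stable estimate.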

These reports have the potential to supply relevant information for evaluations. However, the quality of this information varied across the text reports describing the tools, technologies, and informational products reviewed by the committee. Only half contained substantive descriptive information.

Recommendation 6-5: NIDRR should consider revising its APR to better capture information needed to routinely evaluate the quality and impacts of outputs, grants, or program mechanisms. The agency might consider efforts such as consolidating existing data elements or adding new elements to capture the quality criteria and dimensions used in the present summative evaluation.

According to NIDRR management, the agency’s APR system has stabilized in recent years following periods of changing and improving it to make the data more usable for grantees, for grant monitoring, and for agency performance reporting. The agency currently is in the process of adding a new “accomplishments” module to the APR that will focus on the external use and adoption of NIDRR-funded outputs. In this new module, NIDRR will consolidate some data elements that are already being collected and add new ones. For up to five outputs that have been used or adopted by persons or groups external to the grant during the reporting period, grantees will be asked to provide information for each output on who adopted it (in 16 categories, such as researchers, practitioners, and service providers); how the output is being used or adopted by the target audience; the source of the evidence; and whether and how the output may be contributing to changes in policy, practice, system capacity, or other impact areas. These efforts to improve the APR will address the quality criteria used in the present evaluation for assessing the advancement of knowledge or practice and the likely or demonstrated impact of outputs.
For the technical quality criterion, the current APR system collects data on whether articles were published in peer-reviewed journals. For the technical quality of outputs other than publications, Recommendation 6-4 provides examples of ways to operationalize dimensions of accessibility and usability, such as providing evidence of testing the psychometrics of measurement instruments; assessing the usability features of informational products; and documenting the results of research and development tests of technology products that relate to human factors, ergonomics, universal design, product reliability, and safety. The APR system currently asks for information on how outputs were validated, but data elements that relate to such testing might be further specified in the system.

The APR system might also be modified to capture evidence on the quality criterion of dissemination of outputs through such data elements as target audiences for dissemination activities; media types; number of outputs disseminated; and reach of dissemination, such as number of hits on websites.

Recommendation 6-6: NIDRR should investigate ways to work with grantees to ensure the completeness and consistency of information provided in the APR.

The committee fully appreciates the need to minimize the data collection burden on grantees and acknowledges the challenges and feasibility issues related to modifying the APR system while at the same time providing continuity in the system. The committee believes that embedding evaluation data collection processes into existing processes would lead to greater efficiencies and reduce grantee burden while enhancing NIDRR’s ability to evaluate quality and impact. The committee acknowledges that the suggested refinements would have to be undertaken in the context of a larger assessment of the APR system as part of NIDRR’s ongoing initiatives to improve the system.

More immediately, grantees should be made aware that, in addition to being a data source for assessing individual grant performance, APRs could be a valuable data source for NIDRR’s program evaluation purposes. Project officers could provide technical assistance, working with individual grantees on focusing their APRs more on the details of their findings that move their projects forward and lead to changes and improvements in policy, practice, behavior, and system capacity.

In closing, the committee developed and implemented a quantifiable expert review process that can serve as a foundation for future evaluations of the quality of outputs of NIDRR grantees. If future evaluations of output quality are conducted, the methods developed by the committee should be implemented with refinements to strengthen the design, validity, and reliability of the process.
While assessing grantee outputs is valuable, the committee believes that even greater value would come from assessing outputs in the context of a more comprehensive grant-level and program mechanism-level evaluation, which could yield broader implications for the value of grants, their impact, and future directions for NIDRR.
ANNEX 6-1
JOURNAL IMPACT FACTORS OF PUBLISHED ARTICLES FROM GRANTS IN THE ORIGINAL POOL FROM WHICH THE SAMPLE OF 30 GRANTS WAS DRAWN (N = 111 GRANTS, 631 ARTICLES)

TABLE A6-1

SJR^a   ISI^b   Number of Times Articles Were Published in This Journal^c   Journal Name
0.160   2.254   75   Archives of Physical Medicine and Rehabilitation
0.124   1.442   32   Journal of Spinal Cord Medicine
0.170   2.779   28   Journal of Head Trauma Rehabilitation
0.109   1.563   27   Journal of Burn Care & Research
0.097   1.489   24   Disability and Rehabilitation
0.099   1.224   17   Topics in Stroke Rehabilitation
0.125   1.762   16   American Journal of Physical Medicine & Rehabilitation
0.077   1.676   15   Rehabilitation Psychology
0.118   1.75    10   Brain Injury
NA      NA      10   Hearing Loss
0.099   1.592   9    NeuroRehabilitation
0.061   NA      8    Assistive Technology
0.120   1.718   7    Burns
0.125   1.708   7    Journal of Rehabilitation Research and Development
0.102   1.364   7    Physical Medicine and Rehabilitation Clinics of North America
0.031   0.484   7    Research and Practice for Persons with Severe Disabilities
0.14    NA      7    Trends in Amplification
0.083   0.969   6    American Journal of Hospice & Palliative Medicine
NA      NA      6    Assistive Technology Outcomes and Benefits
0.263   2.302   6    Muscle & Nerve
0.04    0.74    6    Rehabilitation Counseling Bulletin
0.132   1.534   6    Respiratory Care
NA      0.71    5    Exceptionality
0.033   NA      5    Journal of Vocational Rehabilitation
0.374   4.23    5    Multiple Sclerosis
0.041   NA      5    Topics in Spinal Cord Injury Rehabilitation
0.099   2.075   4    Clinical Neuropsychologist
0.615   6.254   4    Critical Care Medicine
0.254   2.777   4    General Hospital Psychiatry
0.424   3.792   4    Health Affairs
0.123   1.805   4    Journal of Clinical and Experimental Neuropsychology
0.684   8.017   4    Neurology
0.37    3.203   4    Shock
0.731   5.756   4    Stroke
0.202   2.374   3    Annals of Biomedical Engineering
0.032   NA      3    Career Development for Exceptional Individuals
0.276   3.436   3    Developmental Disabilities Research Reviews
0.052   2.271   3    Exceptional Children
NA      NA      3    Exceptional Parent
0.137   1.782   3    IEEE Transactions on Biomedical Engineering
0.068   1.842   3    International Journal of Clinical and Experimental Hypnosis
0.04    NA      3    Journal of Disability Policy Studies
0.048   0.694   3    Journal of Early Intervention
0.115   1.596   3    Journal of Intellectual Disability Research
0.357   3.426   3    Journal of Neurotrauma
0.032   0.222   3    Journal of Rehabilitation
0.503   5.391   3    Pediatrics
0.354   3.974   3    Psychosomatic Medicine
0.19    2.51    3    Spine
0.057   1.215   3    Violence Against Women
NA      NA      2    ADVANCE for Directors in Rehabilitation
0.118   2.018   2    American Journal of Speech-Language Pathology
1.415   10.746  2    Annals of Neurology
0.151   2.304   2    Archives of Clinical Neuropsychology
0.778   7.108   2    Archives of Neurology
0.1     2.109   2    Body Image
0.047   1.13    2    Children and Youth Services Review
0.073   1.384   2    Developmental Neurorehabilitation
0.044   NA      2    Disability and Health Journal
0.053   NA      2    Disability and Rehabilitation Assistive Technology
0.032   0.466   2    Education and Training in Developmental Disabilities
NA      NA      2    E-Medicine
NA      NA      2    Giornale Italiano delle Disabilità [Italian Journal on Disability]
0.038   0.6     2    Infants and Young Children
NA      NA      2    Inside MS
NA      NA      2    International Journal of Telerehabilitation
NA      NA      2    Journal of Burn Care and Rehabilitation
0.026   0.174   2    Journal of Developmental Disabilities
NA      NA      2    Journal of Mine Action
0.432   4.791   2    Journal of Neurology Neurosurgery and Psychiatry
0.079   1.644   2    Journal of the Acoustical Society of America
NA      NA      2    Journal of Visual Impairment & Blindness
NA      NA      2    Journal on Developmental Disabilities
1.977   21.659  2    Lancet Neurology
0.291   4.106   2    Medicine and Science in Sports and Exercise
NA      NA      2    MS in Focus
0.118   1.731   2    Neuropsychological Rehabilitation
0.547   5.355   2    Pain
0.189   2.645   2    Physical Therapy
0.4     3.134   2    Progress in Brain Research
0.052   0.634   2    Prosthetics and Orthotics International
NA      NA      2    Rehabilitation Education
0.04    0.561   2    Remedial and Special Education
0.161   1.826   2    Spinal Cord
NA      NA      2    TASH Connections
0.191   2.195   1    Academic Radiology
NA      NA      1    ADVANCE for Managers of Respiratory Care
NA      NA      1    AER-Journal of Research and Practice in Visual Impairment and Blindness
0.038   0.694   1    American Annals of the Deaf
0.462   5.115   1    American Journal of Medicine
1.024   5.224   1    American Journal of Pathology
0.072   NA      1    American Occupational Therapy Association
0.293   3.397   1    Amyotrophic Lateral Sclerosis
0.249   3.984   1    Annals of Behavioral Medicine
0.822   7.474   1    Annals of Surgery
1.158   8.435   1    Arthritis & Rheumatism-Arthritis Care & Research
NA      NA      1    Assessment for Effective Instruction
0.039   0.835   1    Behaviour & Information Technology
0.187   3.162   1    Brain and Language
NA      NA      1    Brain Injury Professional
0.61    6.519   1    Chest
0.118   NA      1    Chronic Illness
0.143   2.036   1    Clinical Biomechanics
0.16    1.687   1    Clinical Rheumatology
0.034   NA      1    Communication Disorders Quarterly
0.091   1.763   1    Composites Part B-Engineering
0.214   2.093   1    Connective Tissue Research
0.394   4.595   1    Critical Care
0.57    5.021   1    Current Opinion in Neurology
0.183   1.81    1    Current Treatment Options in Neurology
NA      1.803   1    Cyberpsychology & Behavior
NA      NA      1    Datum, Medical College of Wisconsin
NA      NA      1    Design Principles and Practices: An International Journal
0.193   2.257   1    Ear and Hearing
0.268   3.825   1    Exercise and Sport Sciences Reviews
0.032   NA      1    Generations
0.031   NA      1    Hearing Journal
NA      NA      1    Hearing Review
NA      NA      1    Hospital News (nationally syndicated)
2.011   8.058   1    Human Molecular Genetics
0.074   2.828   1    IEEE Engineering in Medicine and Biology Magazine
0.134   2.182   1    IEEE Transactions on Neural Systems and Rehabilitation Engineering
0.124   3.176   1    IEEE Transactions on Power Electronics
0.048   1.327   1    Intellectual and Developmental Disabilities
0.044   1.192   1    Interacting with Computers
NA      NA      1    International Journal of Advances in Rheumatology
0.035   NA      1    International Journal of Disability, Development, and Education
0.160   2.029   1    International Journal of Geriatric Psychiatry
0.147   2.244   1    International Journal of Medical Informatics
NA      NA      1    International Journal of MS Care
0.097   1.055   1    International Journal of Psychiatry in Medicine
0.03    NA      1    International Journal of Web Engineering and Technology
0.039   1.15    1    Internet Research
0.03    0.351   1    Intervention in School and Clinic
NA      NA      1    Journal for Vocational Special Needs Education
0.053   NA      1    Journal of Aging and Social Policy
NA      NA      1    Journal of Applied Rehabilitation Counseling
0.226   3.169   1    Journal of Behavioral Nutrition and Physical Activity
0.19    3.044   1    Journal of Biomedical Materials Research Part A
0.136   1.415   1    Journal of Cardiopulmonary Rehabilitation and Prevention
0.088   1.612   1    Journal of Clinical Psychology
0.122   1.506   1    Journal of Clinical Psychology in Medical Settings
0.107   1.433   1    Journal of Communication Disorders
0.04    NA      1    Journal of Disability Policy
0.116   1.5     1    Journal of Health Communication
0.065   1.354   1    Journal of Interpersonal Violence
0.063   1.849   1    Journal of Marriage and the Family
0.353   2.628   1    Journal of Neural Engineering
0.112   1.805   1    Journal of Occupational Rehabilitation
0.279   2.976   1    Journal of Orthopaedic Research
0.401   4.851   1    Journal of Pain
0.25    2.64    1    Journal of Pain and Symptom Management
0.03    0.959   1    Journal of Policy and Practice in Intellectual Disabilities
0.057   1.943   1    Journal of Positive Behavior Interventions
0.065   NA      1    Journal of Positive Psychology
0.059   1.09    1    Journal of Primary Prevention
0.039   NA      1    Journal of Prosthetics and Orthotics
0.224   2.842   1    Journal of Psychosomatic Research
0.042   1.343   1    Journal of Special Education
0.397   3.913   1    Journal of The American Geriatrics Society
0.209   2.91    1    Journal of The International Neuropsychological Society
0.155   3.129   1    Journal of Trauma-Injury Infection and Critical Care
NA      NA      1    Journal of Usability Studies
NA      NA      1    Journal of Vocational Special Needs Education
0.03    0.3     1    Journal of Web Engineering
NA      NA      1    Kansas Public Policy Journal
NA      NA      1    L’Audition Revue D’Informations Techniques et Scientifiques
NA      NA      1    Learning Disabilities Quarterly
0.033   NA      1    Lecture Notes in Computer Science
0.398   3.183   1    Medical Care
0.335   2.195   1    Medical Care Research and Review
NA      NA      1    Mississippi Brain Injury Association Newsletter
0.039   NA      1    Missouri Medicine
NA      NA      1    Momentum
NA      NA      1    MS Exchange
0.619   5.932   1    Neuroimage
0.408   2.764   1    Neuromuscular Disorders
0.314   3.772   1    Neurorehabilitation Neural Repair
NA      NA      1    Novartis Foundation Symposium
NA      NA      1    O&P Business News
0.027   NA      1    OT Practice
NA      NA      1    Paraplegia News
0.226   2.672   1    Pediatric Critical Care Medicine
NA      NA      1    Perspectives on Neurophysiology and Neurogenic Speech and Language Disorders
0.029   NA      1    Physical & Occupational Therapy in Geriatrics
0.026   NA      1    Planning
NA      NA      1    Principal Leadership
NA      NA      1    Proceedings of IEEE Virtual Rehabilitation
0.08    1.376   1    Psychiatric Rehabilitation Journal
0.145   2.388   1    Psychiatric Services
0.128   2.589   1    Psychological Assessment
0.039   0.753   1    Psychology in the Schools
0.056   0.615   1    Rehabilitation Nursing
NA      NA      1    Rehabilitation Outlook
0.429   4.744   1    Seminars in Arthritis and Rheumatism
0.049   NA      1    Seminars in Hearing
0.062   1.048   1    Social Work
0.034   NA      1    Technology and Disability
0.081   1.297   1    Telemedicine and E-Health
0.027   NA      1    The ASHA Leader
NA      NA      1    The Journal of Special Children Education (Korea)
NA      NA      1    The Judge’s Journal
NA      NA      1    The RT News, Newsletter of AER Division 11
0.036   NA      1    Universal Access in the Information Society
NA      NA      1    US-ISPO Highlights
0.1     1.287   1    Women’s Health Issues
0.35    3.443   1    Wound Repair and Regeneration
0.026   NA      1    Young Exceptional Children

Total Published Articles Reviewed: 631

NOTE: NA = journal not tracked by Scopus or Web of Science in the 2010 databases.
^a SJR (Scopus): The SJR database for 2010 was used. Available: http://www.scopus.com/source/eval.url [August 29, 2011].
^b ISI (Web of Science): These data were obtained by searching two Web of Knowledge JCR databases (editions): (1) JCR Science Edition 2010 and (2) JCR Social Sciences Edition 2010. Available: http://admin-apps.webofknowledge.com/JCR/JCR?RQ=HOME [August 29, 2011].
^c The values in this column were obtained from a data set provided by NIDRR of all reported publications as of July 2010.
SOURCE: Generated by the committee based on data from Scopus (http://www.scopus.com) and Web of Science (http://apps.webofknowledge.com) (membership required for both).
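Summary statistics over a table like A6-1 reduce to arithmetic on (impact factor, article count) pairs: for example, an article-weighted mean impact factor across tracked journals, and the share of articles appearing in journals not tracked by the citation databases. The sketch below uses a three-journal excerpt from the table (`None` standing in for NA); it illustrates the arithmetic only and is not the committee's analysis.

```python
# (journal name, ISI impact factor or None for NA, number of articles),
# excerpted from Table A6-1
rows = [
    ("Archives of Physical Medicine and Rehabilitation", 2.254, 75),
    ("Journal of Spinal Cord Medicine", 1.442, 32),
    ("Hearing Loss", None, 10),  # journal not tracked in 2010 databases
]

# Article-weighted mean impact factor over journals that have one
tracked = [(factor, count) for _, factor, count in rows if factor is not None]
weighted_mean = (sum(factor * count for factor, count in tracked)
                 / sum(count for _, count in tracked))

# Share of articles published in untracked (NA) journals
total = sum(count for _, _, count in rows)
untracked_share = sum(count for _, factor, count in rows if factor is None) / total

print(round(weighted_mean, 3), round(untracked_share, 3))
```

Weighting by article count keeps high-volume outlets from being diluted by journals with a single article, and reporting the NA share separately avoids silently treating untracked journals as having an impact factor of zero.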