community living, and health and function (i.e., long-term outcome arenas). NIDRR holds itself accountable primarily for the generation of knowledge in the short-term outcome arena, and it is this arena that was the focus of the committee’s external evaluation.

The committee examined how NIDRR’s grant funding is prioritized for these investment areas, the processes used for reviewing and selecting grants, and the quality of the research and development outputs, as depicted in the conceptual framework in Figure 2-1. The committee developed this framework to guide the evaluation effort. The boxes labeled Q1 to Q5 (i.e., NIDRR’s process and summative evaluation questions 1 to 5; see Chapter 1) were the direct foci of the evaluation. The figure also includes other inputs, contextual factors, and implementation considerations because they are likely to influence the processes and short-term outcomes. The figure shows that the measurable elements of the short-term outcomes are what NIDRR considers to be the array of grant outputs (Q4) generated by grantees, which are expected to inform and generate new projects (Q5). Also shown are the expected long-term outcomes, which include an expanded knowledge base; improved programs and policy; and reduced disparities for people with disabilities in employment, participation and community living, and health and function. However, these long-term outcomes were beyond the scope of the committee’s evaluation.

In summary, the scope of the evaluation encompassed key NIDRR processes of priority setting, peer review, and grant management (process evaluation) and the quality of grantee outputs (summative evaluation). It is important to note that the summative evaluation did not extend to assessing the overall performance of individual grants or NIDRR portfolios (e.g., Did grants achieve their proposed objectives? Did the various research and development portfolios operate as intended to produce the expected results?). Although capacity building is a major thrust of NIDRR’s center and training grants, the present evaluation also did not include assessment of outputs related to capacity building (e.g., the number of trainees moving into research positions), which would have required methods different from those used for this study.

Definition of “Quality”

The evaluation focused on the quality of NIDRR’s priority-setting, peer review, and grant management processes and on the quality of the outputs generated by grants. A review of the literature on evaluation of federal research programs reveals that the term “quality” is operationalized in a variety of ways. For example, the National Research Council (NRC) and Institute of Medicine (IOM) (2007) developed a framework and suggested

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.