
Evaluation of the Sea Grant Program Review Process (2006)


3 Critique of the Periodic Assessment Process

Chapter 2 described the evolution of the procedures currently used to assess the performance of individual Sea Grant programs. As mentioned, the program assessment process is dominated by its periodic elements: the quadrennial visit of a Program Assessment Team (PAT), followed by the National Sea Grant Office (NSGO) Final Evaluation (FE) review. This chapter presents a critique of this periodic portion of the assessment process. The first section discusses the guidance documents on which the review procedures are based and carried out. The second section critiques the primary element--the onsite review carried out by the PATs. The third section examines the FE process, carried out by NSGO staff during an intensive review of the programs that make up the most recently reviewed PAT cohort (7 to 8 programs in a given year), which results in a final evaluation letter from the National Director to each individual Sea Grant program director. The final section considers the assessment process as a whole and its use in assigning merit and bonus funding, and proposes a realignment of functions intended to strengthen the program overall.

GUIDANCE DOCUMENTS

The periodic assessment process follows instructions provided in two guidance documents: the PAT Manual (NSGO, 2005a) and the Policy Memorandum (NSGO, 2005b). The PAT Manual provides detailed instructions on conducting the PAT visit. The Policy Memorandum outlines the structure and function of the FE process, including details on how funds are allocated based on program scores derived from the PAT visits and reviewed during the FE.

These two key documents do not appear to be as well known among the relevant parties as they should be. Observation of the PAT site visits taking place in 2005 made it clear that while the individual Sea Grant program directors were familiar with the PAT Manual and policy memoranda, some other staff involved with preparing background documents and briefing reports for the PAT had not seen the PAT Manual even by the end of the PAT visits. There also appears to be significant confusion about the FE process, despite the fact that the relevant policy memos (available in the administrative information portion of the NSGO web site at http://www.seagrant.noaa.gov/other/admininfo.html) answer the vast majority of the most frequently posed questions. Thus the most frequently raised concerns do not appear to reflect a lack of specificity or availability of these documents, but rather a lack of familiarity with them. The NSGO needs to disseminate the contents of the documents more actively and broadly, through a process that involves active and personal explanation of the periodic program assessment process to staff as well as directors of the individual Sea Grant programs. The individual program directors, in turn, should disseminate the contents of these documents, particularly the PAT Manual, to their staffs and all others who will be taking part in the review. The result would be a more satisfying PAT site visit for all concerned.

The more detailed of the two documents, the PAT Manual, identifies the review criteria, the benchmarks used to describe the expected level of performance in a particular area (such as program organization and management), and the indicators used to help assess the outcomes or impacts of the individual program against the benchmarks (see Appendix G). These set the standard for performance and provide a basis for rating the individual Sea Grant programs against established expectations. The specific wording of these items has evolved over time, under intense scrutiny and regular feedback from PAT members, individual Sea Grant program directors, the National Sea Grant Review Panel (NSGRP), and program officers.

Throughout the history of the process, there have been four main criteria for assessment, reflecting the breadth of the activities for which each program is responsible: (1) organizing and managing the program[1] (20 percent); (2) connecting Sea Grant with users (20 percent); (3) effective and aggressive long-range planning (10 percent); and (4) producing significant results (50 percent).

[1] This criterion, originally named "Organizing and Managing for Success" in Cycle 1, was renamed in Cycle 2.

The weight given to each criterion has remained constant throughout Cycle 1 and Cycle 2, although in Cycle 2 each of the four criterion categories is subdivided into two or more sub-criteria with individual weightings--14 sub-criteria in all. The 4 major criteria are well balanced between evaluation of the potential to perform and performance itself, but focus extensively on how the program performs at a local level (this aspect will be revisited in Chapter 4).

For each of these criteria one or more benchmarks are provided. A benchmark is a description of what constitutes acceptable performance. For example, the sub-criterion "Institutional Setting and Support" accounts for 4 percent of the overall score and appears under the criterion category "Organizing and Managing the Program." The "expected performance benchmark" is:

    The program is located at a high enough level within the university to enable it to operate effectively within the institution and externally with all sponsors, partners, and constituents. The institution provides the support necessary for the Sea Grant program to operate efficiently as a statewide program (NSGO, 2004a).

The internal complexity of each benchmark leaves room for the evaluators (PAT members) to weigh the different elements appropriately for the program in question. The evaluators are also asked to take into account indicators of performance and a list of "suggested considerations." Asking knowledgeable evaluators to incorporate such diverse sets of information into an overall score is a standard part of assessment processes in research organizations. In particular, using quantitative indicators to inform qualitative judgments, as the Sea Grant evaluation process does, is widely considered the best use of performance criteria. The current Sea Grant benchmarks have variable formats and sometimes mix management and results concepts in the same benchmark (e.g., under "effective and integrated program components" the list of expected performance benchmarks includes "research results are consistently reported in peer-reviewed publications"), but they are by and large quite well done and are consistent with the goal of assessing, and thus guiding, the performance of individual Sea Grant programs.

The use of performance criteria to underpin subjective evaluations is treated in Appendix B of the PAT Manual. Much of that treatment is general in nature, defining and recommending the use of performance criteria to inform the review process and to contribute reliably to comparability among different PATs. This is followed by a list of possible indicators related to the four broad criteria on which the overall review process is based. See Box 3.1 for the list of indicators in the 2005 PAT Manual.

Box 3.1 Indicators of Performance Organized in 4 Categories
(Reprinted from the 2005 PAT Manual [NSGO, 2005a])

1. INDICATORS FOR PROGRAM MANAGEMENT

Managing the Program--Response to previous PAT recommendations; Management Team composition and responsibilities; Percentage of time Director and staff devote to SG (FTEs [Full Time Equivalents]); Advisory Board membership and function (expertise, meeting schedule, recommendations, meeting agendas, attendance, diversity, and turnover); Staff structure, interactions, and physical location in state

Institutional Setting--Setting of the program within the university or consortium organization and reporting structure; Program infrastructure (space, equipment, available resources)

Project Selection--Process to develop RFP [Request for Proposals] priorities; Preproposals and proposals submitted, and institutions represented/institutions available in state; Review process, including composition of panels; RFP distribution; External peer review (numbers and quality), ratings/scoring analysis, quality of feedback to PIs; Conflict-of-interest policy and practice; Time from submission to decision; Technology support for submission and review process; Feedback from PIs and/or institutions

Recruiting and Focusing the Best Talent Available--New vs. continuing projects and PIs [Principal Investigators]; Recruitment of PIs/institutions; Relative success of home institution; Success in national competitions; Regional/multi-program projects; Multi-investigator projects; Leveraged funding in projects

Institutional Program Components--Integration of outreach and research program elements; Core federal and matching funds (last 8 years) and distribution among program elements; Leveraged funding from partners (NOAA, other federal, state, and local) for the program; National competition funding (NSIs [National Strategic Initiatives], pass-through awards); Additional program funding through grants, contracts, and development activities; Leveraged funding from partners (NOAA, other federal, state, and local) for PIs

2. INDICATORS FOR CONNECTING WITH USERS

Constituent Involvement in Planning--Local business and stakeholder needs surveys; User feedback (mechanisms and tracking)

Contact with Appropriate User Communities--Leadership by staff on boards and committees; Informational meetings/training sessions held and number of participants; Individual consultations with clients/users; Involvement with industry (number of businesses aided); Demographics of contacts and efforts; Requests for information

Partnerships--Effective local, regional, and national interactions/collaborations, including with NOAA programs

Implementation--Number, list, and diversity of products produced (print, audio, video, web, etc.); Internal evaluation processes for products and programs; Staff and product awards; Targeted audience and evaluation for all products; Media interest (calls, "experts quoted," press clippings); Use of products for public education (classroom enhancement, curriculum development); Relationship of products to other SG program elements; Numbers of teachers and/or students using Sea Grant materials in curriculum

3. INDICATORS FOR PLANNING

Planning Process (Input)--Stakeholder and staff involvement (numbers and duration) and integration of input into planning; Transparent priority-setting process; Endorsement by Advisory Board; Acknowledgement by university; Ongoing monitoring of plan and reassessment of priorities

Plan Quality (Goals, Objectives, etc.)--Short- to long-term functional and management goals established; Demonstrated link from state to national priorities

Plan Implementation (Strategy and Tactics)--Distribution of investment effort to meet strategic plan priorities; Identification of short- to long-term benchmarks; Work plan developed for integration of program elements; Program development and rapid-response procedures and strategies to meet emerging issues; Evaluation process

4. INDICATORS FOR ACHIEVING SIGNIFICANT RESULTS

Contributions to Science and Engineering--Number and list of publications (journal articles, book chapters, reports, etc.); Invention disclosures and patents; Technologies and tools developed; Theories or approaches accepted widely; Number and list of presentations by PIs; Citation analysis for selected projects

Contributions to Education--Numbers of graduate and undergraduate students supported, including fellowships and internships; Sponsorship of education programs and target-audience participation; Changes in behavior of target audiences; Numbers of theses completed; Tracking of graduate students after Sea Grant support

Socioeconomic Impact--Descriptions of the most important impacts; Positive environmental impacts and economic benefits resulting from changes in behavior of individuals, businesses, and institutions; Businesses and jobs developed after contact; Best management practices developed in response to extension involvement

Success in Achieving Planned Program Outcomes--Self-assessment
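The relationship among criteria, sub-criteria, and weights described above can be made concrete with a small data structure. This is an illustrative sketch only: the four top-level weights (20, 20, 10, and 50 percent) and the 4 percent "Institutional Setting and Support" sub-criterion are taken from the text, while the remaining sub-criterion names and weights are hypothetical placeholders, not the actual 14 sub-criteria.

```python
# Sketch of the evaluation hierarchy: four criteria with fixed weights,
# each divided into weighted sub-criteria (Cycle 2 uses 14 in all).
# Only the top-level weights and the 4 percent "Institutional Setting and
# Support" sub-criterion come from the text; the rest are placeholders.
criteria = {
    "Organizing and Managing the Program": {
        "weight": 0.20,
        "sub_criteria": {
            "Institutional Setting and Support": 0.04,
            "Other management sub-criteria (placeholder)": 0.16,
        },
    },
    "Connecting Sea Grant with Users": {
        "weight": 0.20,
        "sub_criteria": {"User-connection sub-criteria (placeholder)": 0.20},
    },
    "Effective and Aggressive Long-Range Planning": {
        "weight": 0.10,
        "sub_criteria": {"Planning sub-criteria (placeholder)": 0.10},
    },
    "Producing Significant Results": {
        "weight": 0.50,
        "sub_criteria": {"Results sub-criteria (placeholder)": 0.50},
    },
}

# Sanity checks: sub-criterion weights roll up to their parent criterion,
# and the criterion weights sum to 100 percent.
for name, c in criteria.items():
    assert abs(sum(c["sub_criteria"].values()) - c["weight"]) < 1e-9, name
assert abs(sum(c["weight"] for c in criteria.values()) - 1.0) < 1e-9
print("criterion and sub-criterion weights are consistent")
```

A representation along these lines makes it easy to verify that the sub-criterion weights roll up to their parent criterion and that the criterion weights together account for the full score.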

Under each of the four criteria, the required and suggested indicators are organized into three to five sub-groups. The large number of indicators (approximately 100) attests to the complexity of the review, but the organization of indicators into sub-groups provides a useful framework for understanding the most valued characteristics of an individual Sea Grant program.

An essential contribution of the study of performance criteria is to improve the efficiency of activities such as the Sea Grant review process. Under certain circumstances, careful analysis may show that an approach with 5 criteria leads to as reliable a result as one using 10 criteria, or that 50 indicators are as useful as 100. In the present case, the argument is made below for reducing the number of criteria from the current 14 to a significantly smaller number, while adding a criterion that would assess activities that strengthen the ability of programs to cooperate on regional- or national-scale issues.

Determining the most appropriate number of indicators is not simple. Reducing the number of indicators might make the reviewers' task easier, but shortening the list might be a disservice to the individual Sea Grant program directors who must prepare briefing materials. The director's task is to anticipate and provide answers to questions that the reviewers might logically raise. The indicators listed in the 2005 PAT Manual (see Box 3.1) all appear to represent relevant questions that could reasonably be expected to come up during the review. Because no one wants to be caught off guard during a review, these indicators aid in preparation.

While the instructions for both Cycle 1 and Cycle 2 asked reviewers to give four levels of rating, the labels and instructions varied somewhat between the cycles (see the example score sheets in Chapter 2, Box 2.3 and Table 2.2). In the current instructions, the reviewers are asked to assign one of four ratings: needs improvement, meets benchmark, exceeds benchmark, or highest performance. Some description is provided for each rating level, although there can be considerable subjectivity involved in distinguishing between the "exceeds benchmark" level (described as "in general goes beyond") and the "highest performance" level (described as "goes well beyond and is outstanding in all areas"). In addition, the definitions of some benchmarks include superlative language (e.g., exceptional talent) that would make it difficult to distinguish benchmark performance from the "highest level." Further fine-tuning of the rating instructions is possible and advisable, but no grading system will ever eliminate subjectivity entirely.

The earliest set of instructions to PATs had one benchmark for each of three criteria and four benchmarks for a fourth criterion. Evaluators provided just four ratings, one for each performance criterion.

In the PAT manuals for Cycle 2 there are 14 sub-criteria,[2] because each of the original four major criteria used in Cycle 1 was subdivided into at least two sub-criteria. Evaluators provide a rating for each of the 14 sub-criteria. Each criterion still carries its own weighting, which now ranges from 2 percent to 25 percent, and the final score is the sum of the products of the 14 ratings and their corresponding weights. This subdivision into 14 weighted sub-criteria was not recommended by any of the major committees that have examined the process. Nor is there evidence to suggest that 14 weighted sub-criteria provide a more accurate assessment of program performance than a smaller number of criteria.

The 14 weighted sub-criteria may also increase the perception that individual Sea Grant programs now have to "teach to the test," that is, that the very specific criteria skew behavior. Performance measurement systems always orient behavior, and good ones are carefully balanced to ensure that all the kinds of behavior that are actually important are included. Consideration should be given to reducing the number of weighted criteria in the future, but implementation should be postponed until the beginning of the next cycle of program review (the current review cycle will conclude in late 2006). With only 4 to 6 broader criteria, weighted to reflect a balance between the production of meaningful results; outreach and education; and planning, organization, management, and coordination among programs, the PATs would be able to form more holistic judgments of overall program performance.

All parties involved in the review process have been concerned with how PATs made up of different groups of volunteers could rate different programs in consistent ways (e.g., would the same actions in two programs receive different grades if evaluated by different visiting PATs?). In an effort to characterize the problem, a simple statistic was calculated to measure overlap among PATs over the course of a four-year cycle. For each cycle, the proportion of pairs of PATs that shared at least one member was calculated. Although this statistic could be calculated for any time period, the statistic for overlap within a given cycle is the most relevant, given that a program is ultimately ranked against all 29 of its partners. The results for both Cycle 1 and the partially completed Cycle 2 show a low proportion of overlap, 0.24 and 0.30, respectively. In addition, overlaps of more than one person were rare: the average numbers of shared members were 0.26 and 0.35 for Cycles 1 and 2, respectively.

[2] The term criterion has been used differently at various points in the evolution of the evaluation standards, but currently refers to these fourteen areas.
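The overlap statistic described above can be illustrated with a short sketch that, given a set of PAT rosters for one cycle, computes the proportion of PAT pairs sharing at least one member and the average number of shared members per pair. The team names and memberships below are hypothetical placeholders, not actual PAT rosters.

```python
from itertools import combinations

# Hypothetical PAT rosters for one review cycle (names are placeholders,
# not actual PAT members).
pats = {
    "PAT-A": {"Adams", "Baker", "Chen"},
    "PAT-B": {"Chen", "Davis", "Evans"},
    "PAT-C": {"Fox", "Garcia", "Hill"},
    "PAT-D": {"Adams", "Ivers", "Jones"},
}

pairs = list(combinations(pats.values(), 2))

# Proportion of PAT pairs that share at least one member.
sharing = sum(1 for a, b in pairs if a & b)
proportion_overlap = sharing / len(pairs)

# Average number of shared members across all PAT pairs.
avg_shared = sum(len(a & b) for a, b in pairs) / len(pairs)

print(f"Proportion of pairs sharing a member: {proportion_overlap:.2f}")
print(f"Average shared members per pair:      {avg_shared:.2f}")
```

Applied to the actual rosters, this is the kind of calculation that yields the reported 0.24 and 0.30 (proportion of overlapping pairs) and 0.26 and 0.35 (average shared members) for Cycles 1 and 2.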

Thus, although recent efforts to improve the reliability of PAT reviews by increasing the overlap among PAT membership appear to have had some effect, the actual overlap is still relatively low.

The Sea Grant program assessment process has taken several steps to achieve reliability in ratings. First, the NSGRP--a standing committee from which PAT chairs are selected--is represented in the FE process and provides continuity and broad assistance in PAT guidance and training, including work on assigning grades in consistent ways across PATs. Second, the NSGO tries to have some overlap in the membership of PATs, so that someone is present at the PAT who can make comparisons across at least two programs between Cycle 1 and Cycle 2; this does not, however, address reliability among reviews within a cycle. Third, the benchmarks are designed to provide a standardized comparison point for each of the four rated criteria, and both PATs and NSGO staff use the same criteria, sub-criteria, benchmarks, indicators, and rating instructions in their evaluations. Finally, the last day of the FE is devoted to comparing grades across programs and adjusting them to reflect differences in performance consistently. This final step, though necessary, underscores the importance of the NSGO being well positioned to independently and credibly evaluate the individual programs across the breadth of the entire program.

The April 8, 2005, Revised Policy Memorandum on NSGO Final Evaluation and Merit Funding (NSGO, 2005b) from the National Director to the individual directors moves significantly toward the goal of improving the transparency of these processes. It carefully describes the information that is considered in the FE, the procedures by which the process is carried out, and the ways in which this review differs from and parallels the PAT process. It also describes in detail the manner in which the merit and bonus decisions are made, but it does not specify how the performance criteria categories and relative standings are defined in terms of the resulting numerical scores.[3]

[3] The description of how qualitative ratings in the FE are converted into numerical values can be found in NSGO, 2005b, also included here as Appendix E.

PROGRAM ASSESSMENT TEAM VISIT

Currently, the site visit by the PAT is the defining event of the periodic review process. The concepts of program review and accreditation are well established in the academic community and among granting agencies.

The one aspect that distinguishes these events from most similar activities is the element of competition. Most reviews of ongoing programs are carried out to determine whether the program is doing well against some set of mutually agreed upon goals. While this is true for the PAT visit and report, an additional element of competition was formally introduced in response to the National Sea Grant College Act Amendments of 2002 (P.L. 107-299).

The competitive process is directly affected by differences among the personnel on the various PATs. While NSGRP activities, the guidance documents, and previsit training are all conscientiously applied, further improvement would result from measures that facilitate the overlap of personnel among several review teams. Overlap is essential both within cycles and between cycles, as discussed earlier in this chapter.

Many of the visits have required four or five days of project review, field trips, and program presentations, raising concerns about the financial cost and the demands such efforts place on PAT members, reducing their ability or desire to serve on more than one PAT. Shortening the PAT visit could save the expense and time devoted to preparation and conduct, as well as the expense of clientele and principal investigator appearances before the PAT. Because Sea Grant is a partnership between the NSGO and the institution, the PAT visits are often designed to satisfy the host institution's requirement for periodic external review of academic programs. Consequently, the desire to shorten the PAT visit should be tempered by the need to be responsive to individual program needs. The PATs need to understand the individual Sea Grant program's manifold dimensions.

In March 2005, the NSGO added a new section to the PAT Manual (NSGO, 2005a) entitled "PAT Preparation, Structure and Cost Control." In this section, the NSGO suggests ways to minimize the costs of the PAT visit without reducing the PAT's effectiveness. The content of that section is summarized below:

· Field trips should be used sparingly; when appropriate, sessions with formal presentations can substitute for field trips.
· Expensive venues should be avoided.
· Expensive social events and dinners are not expected.
· Receptions can be combined with poster sessions.
· The quality of a briefing book depends on its content, not on glossy publications.[4]
· Use of CD-ROMs for auxiliary materials is encouraged.
· Conference calls and web and video conferencing should be used where appropriate to reduce travel expenses and to engage important community leaders who may not be able to attend in person.

[4] The PAT Manual (NSGO, 2005a) includes an appendix on "Guidelines for Program Assessment Briefing Books" that recommends brevity in briefing book preparation.

The NSGO should be commended for encouraging cost reduction and reduced fanfare in its 2005 PAT Manual. Putting a program in its best light is achieved more effectively by providing an easily digested amount of well-focused, content-rich material. Thus, another way to reduce the preparation time for the site visit is to give the Program Officer, the individual Sea Grant director, and the PAT chair some flexibility in deciding how to organize the visit. It might help, for example, to highlight certain issues or activities while still applying the performance criteria consistently. The success of shorter and more focused PAT site reviews will depend, in part, on increased engagement and continuous oversight by the Program Officer and on the ability to identify and focus on important program areas, as discussed and recommended in Chapter 4 of this report.

More efficient and shorter PAT site visits could allow the NSGO to conduct site visits to half of the programs in one year and to the other half the following year. This might make it easier for PAT members to participate in several site visits and provide better comparisons among programs. At the end of two years, all programs could be compared more effectively and ratings of program performance would be more comparable.

FINAL EVALUATION PROCESS

The FE and merit and bonus funding process is introduced in Chapter 2 of this report and is described in the National Director's memoranda of April 22, 1999, and April 8, 2005 (see Appendixes D and E). The FE process has been a source of frustration for some individual Sea Grant program directors, who characterize the FE as "lacking transparency" or as a "smoke-filled room" event in which program scores are changed for reasons that are unknown or not understood by the individual Sea Grant program directors. A significant cause of this perception appears to be poor communication in several areas. In one exchange of letters between an individual Sea Grant program director and the National Director, it was clear that the Sea Grant program director was not aware of the 1999 Policy Memorandum describing the FE process. Prompted primarily by the introduction of the rating and ranking process mandated in the 2002 reauthorization (P.L. 107-299) and by the implementation of this new process, the 2005 Policy Memorandum was written in an attempt to clarify the FE process. The NSGO sent out successive drafts in 2004 for comment and made significant revisions based on the comments received.

However, because the final 2005 Policy Memorandum was not available until after the FE week (a 5-day meeting usually held in February), the degree to which it will clarify the process and reduce tensions is not yet known.

The letters that the NSGO director sends to the individual Sea Grant program directors at the conclusion of the FE process may also contribute to the perception of a lack of transparency. Although in many respects these letters are quite similar to the letter sent to the individual Sea Grant directors after the PAT report, they differ in one important way. In the early portions of Cycle 2, the comments in the final letter were compressed from the 14 criteria used by the PAT into the four larger categories and did not include a final score. This issue was addressed by a procedural change in 2004, which led to the practice of including the final score in the FE letter.

Differing perspectives and program obligations of the NSGO and the individual Sea Grant directors, as well as insufficient communication and program liaison, appear to contribute to a tension that fuels the perception of a lack of transparency and a misunderstanding of the role of the FE. These tensions are understandable given a national program that is implemented by state and local directors and staff who are passionate about their work. Several actions discussed and recommended in this report, such as better NSGO communication with individual programs, increased program officer engagement, and more integrated strategic planning, could help to improve operational trust and respect among all program levels, thereby facilitating efforts to further improve the program and enhance its standing within the community.

Credibility of PAT and FE Scoring Process

In Cycle 2, the number of criteria was increased to 14 from the 4 used in Cycle 1. However, the 14 sub-criteria were simply subdivisions of the 4 major criteria used in Cycle 1; thus, the distribution of differences between the FE and the PAT among the 4 broad categories can still be assessed. In the 8 reviews of the first year of Cycle 2 there were 2 disagreements in "Significance of Results" and 1 in "Connecting with Users" (together carrying 70 percent of the ranking weight), as opposed to 9 disagreements in "Organizing and Managing the Program" and 11 in "Effective and Aggressive Long Range Planning" (recall that there were multiple sub-criteria under each of the 4 criteria). This distribution was similar to the differences seen in Cycle 1 and implies that, in spite of the involvement of members of the NSGRP in both the PAT and FE processes as a communication link, there is often not a common view of program performance under these criteria.

In year two of Cycle 2, the procedure changed from requiring a simple majority to requiring a two-thirds majority for an FE rating to differ from the PAT rating, and the number of disagreements dropped substantially. There were nearly 3 changes per program in year 1 (2.875, a weighted average over the 14 criteria, reported on a four-point scale as they have been since 1998) and only just over 1 change per program in year 2 (1.14, also a weighted average over the 14 criteria, reported on a four-point scale). The distribution changed as well, with half of the disagreements falling in the "Significance of Results" category. While this drop is correlated with the change in voting rule, the relationship is not necessarily causal. To fully understand the significance of this correlation, one would need to know how many changes were proposed but failed to win a two-thirds majority, or how many changes were not proposed because they were unlikely to win a two-thirds majority.

While there were many differences between the PAT score and the FE score in both cycles, these differences were not predominantly either positive or negative.[5] The mean overall score difference in Cycle 1 was 0.0047 and the mean overall score difference in Cycle 2 was 0.0093. Because the mean overall score difference includes both positive and negative differences, it does not provide a good representation of the typical difference between the PAT and FE scores for individual programs. The mean absolute overall score difference is indicative of the typical magnitude of differences between PAT and FE scores; in Cycle 1 the mean absolute overall score difference was 0.1530, and in Cycle 2 it was 0.0827.[6]

Given its responsibility for managing the overall program, the NSGO should have greater say when disagreements occur between opinions developed by the PAT over the span of a few days and opinions developed by the NSGO over several years. Conversely, the independent perspective provided by the PAT should be useful to the NSGO when determining which action, if any, to take to address poor performance in these areas.

[5] The numerical score is calculated from the numeric equivalent of the four possible ratings, 1 being the highest and 4 the lowest, in each of the weighted criteria. Thus, 1.00 is a perfect score and larger numbers represent poorer performance.

[6] The differences in mean score difference and mean absolute score difference between Cycles 1 and 2 are not statistically significant at the 5 percent significance level of a two-tailed hypothesis test.
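The arithmetic behind these comparisons, described in footnotes [5] and [6], can be sketched as follows: each final score is the weighted sum of criterion ratings on the 1 (best) to 4 (worst) scale, and the PAT-FE comparison can be summarized either as a mean difference (positive and negative changes cancel) or as a mean absolute difference (they do not). The weights and ratings below are hypothetical placeholders using the four broad criteria rather than the 14 sub-criteria; they are not NSGO data.

```python
# Hypothetical criterion weights (must sum to 1.0) and ratings on the
# 1 (best) to 4 (worst) scale used by the PAT and the FE.
weights = {"results": 0.50, "users": 0.20, "management": 0.20, "planning": 0.10}

def weighted_score(ratings, weights):
    """Final score = sum of (rating x weight) over all criteria."""
    return sum(ratings[c] * w for c, w in weights.items())

# PAT and FE ratings for three hypothetical programs.
pat = [
    {"results": 1, "users": 2, "management": 1, "planning": 2},
    {"results": 2, "users": 1, "management": 2, "planning": 3},
    {"results": 1, "users": 1, "management": 2, "planning": 1},
]
fe = [
    {"results": 1, "users": 2, "management": 2, "planning": 2},
    {"results": 2, "users": 1, "management": 1, "planning": 3},
    {"results": 1, "users": 1, "management": 2, "planning": 1},
]

diffs = [weighted_score(p, weights) - weighted_score(f, weights)
         for p, f in zip(pat, fe)]

mean_diff = sum(diffs) / len(diffs)                       # cancellation allowed
mean_abs_diff = sum(abs(d) for d in diffs) / len(diffs)   # typical magnitude

print(f"PAT-FE differences:       {[round(d, 4) for d in diffs]}")
print(f"Mean difference:          {mean_diff:+.4f}")
print(f"Mean absolute difference: {mean_abs_diff:.4f}")
```

In this toy example the mean difference is essentially zero even though two of the three programs moved by 0.2, which is why the mean absolute difference is the better indicator of the typical size of FE adjustments.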

Some have suggested that larger programs fare better in the FE process than smaller ones. Figure 3.1 plots the difference in score between the PAT and FE ratings against program funding, a proxy for program size.

FIGURE 3.1 Base funding (a proxy for program size) vs. difference (change) in overall score during the NSGO Final Evaluation review. Category changers (individual program scores that are circled) are the seven programs whose categorization changed (i.e., the change in score moved the program either to a higher or lower category within the 5 categories set up by congressional legislation). Four programs improved their categorization and 3 lost ground (data from NSGO).

The distribution of positive and negative differences does not indicate that smaller programs are more likely to receive worse scores in the FE.[7] Similarly, there has been concern that program officers with long tenure with particular programs might have undue influence on the FE portion of the review (see Figure 3.2). It appears that this concern was unfounded in Cycle 1. In Cycle 2, all programs that received worse scores in the FE (negative PAT-FE) had NSGO program officers with less than 2.5 years with those individual programs. Conversely, all Cycle 2 scores that improved (positive PAT-FE) relative to the PAT score had NSGO program officers with more than 2.5 years with those individual programs. Although all the differences between PAT and FE scores for Cycle 2 were small (< 0.2), two of the changes were statistically significant at the 5 percent significance level.

[7] During Cycle 1, the correlation between base funding (2000-2002 average) and changes to the PAT score was -0.108. During Cycle 2, the correlation between base funding (2003) and changes to the PAT score was -0.052. Neither of these correlations is significantly different from zero at the 5 percent significance level of a two-tailed hypothesis test.

FIGURE 3.2 Continuity of PO service with a particular program vs. the difference in overall score assigned by the PAT vs. the FE. Positive values indicate a better ranking being assigned in the FE (data from NSGO).

One of the stated advantages of the FE is that simultaneous consideration of 7 or 8 programs provides an opportunity to compensate for variations in the way that different PATs score program performance (i.e., simultaneous consideration of multiple programs helps to address concerns about reliability). To address concerns about reliability and consistency, NSGO staff members would benefit from professional development training in performance evaluation. In addition, an outside expert in performance evaluation could be included in the FE.

IMPROVING THE VALUE OF ASSESSMENT

As noted in Chapter 2, the review process produces a numerical score that is used in allocating merit and bonus funds. Based on testimony and on evaluations by committee members expert in this field, there is consensus that the criteria set forth as the basis of the review process are appropriate to the goal of improving individual Sea Grant programs. The qualitative ratings for individual criteria are translated into numerical scores and arithmetically weighted to yield a single numerical final score. This section addresses the use of the resulting numerical scores for:

· Determining whether there have been improvements in the individual Sea Grant programs,
· Allocation of merit and bonus funds,
· Identification of potential biases, and
· Broad management of the program.

Improvement

The question of whether the assessment process produces improvement in individual Sea Grant programs can be judged only for the 15 programs that have been through two review cycles. Based on FE scores, the number of individual Sea Grant programs in Category 1 (scores better than 1.5) increased from 7 to 9 between Cycles 1 and 2, the number in Category 2 (1.5 to 2) remained at 5, and the number in Category 3 decreased to 1. Four programs improved their categorization and 2 lost ground. The average ranking number over the entire 15 improved only slightly--from 1.55 to 1.49. Although there was not great improvement, the fact that nearly half of the programs were already in the highest category (scores of 1 to 1.5) implies that there was not much latitude for a major numerical change. In addition, given the changes in criteria and benchmarks made during and between cycles, it is not apparent that such relatively small changes in score reflect actual changes in program performance.

The multivariate regression analysis described in Appendix F included a variable to reflect differences in average FE scores between Cycle 1 and Cycle 2 while controlling for the influence of other explanatory variables. The results of that analysis suggest that the average difference in scores between the two cycles is not significantly different from zero. Thus, there was no statistical improvement in average program score following the implementation of the changes specified in the 2002 reauthorization (P.L. 107-299).

Because the majority of the individual Sea Grant programs receive scores in the "Highest Performance" and "Exceeds Benchmark" categories (Categories 1 and 2, respectively), it seems appropriate to ask whether the benchmarks are sufficiently ambitious. If the benchmarks were designed to reflect annually updated, quantitative measures of the significance and impact of research, outreach, and education activities, it would be easier to compare program performance with that of other programs and with the program's own past performance.

The criterion with the most variable results from Cycle 1 to Cycle 2 was "Effective and Aggressive Long Range Planning," with six Sea Grant programs improving and seven downgraded--not a clear indication that the first round led to significant learning. This apparent lack of program change, i.e., the adoption of effective long-range plans, may be remedied if the NSGO takes steps, as recommended here, to work with individual Sea Grant programs to develop and adopt strategic plans.

The NSGO should work with the individual programs to generate an agreed-upon strategic plan (recommendations of this report can be found in the last sections of Chapters 3, 4, and 5). Adoption and implementation of a strategic plan by the NSGO and the individual program would remove the need for a benchmark for the plan itself--establishing the plan would be a joint responsibility. The plan would then be the standard against which the effectiveness of execution would be judged.

Distribution of Merit and Bonus Funds

The practice of awarding "merit" and "bonus" funds based on performance began in 1998, when the NSGO began to emphasize the importance of the new program review process by providing financial rewards for programs that excelled against the performance benchmarks. The NSGO created three funding categories into which programs were placed based on the scores achieved by each program through the review process. Programs ranked in the two best-performing categories (programs with the lowest scores) in the NSGO's scoring system were awarded additional (on top of base funds) or "merit" funding for the period until their next review. This basic practice continues to this day, with some refinements. Merit funding was intended to reward program performance (based on criteria), rather than competition among programs; it was intended to stimulate improved performance by individual Sea Grant programs.

However, in 2002 Congress mandated creation of five sharply defined categories into which the individual Sea Grant programs were to be placed. Congress required "no less than 5 categories, with each of the 2 best-performing categories containing no more than 25 per cent of the programs" (P.L. 107-299, section 3[b][A][ii]). Some consequences of this mandate, which put programs in competition against each other, are at odds with the natural trend and intent of the original merit funding process, which was to encourage improvement in all program scores and thereby ultimately aggregate all programs into one category. The NSGO responded to the mandate by retaining the three existing categories and subdividing Category 1 (programs of the highest rank) into three sections (1a, 1b, 1c; score ranges 1.0-1.5, 1.5-2.0, and 2.0-2.5, respectively), the first and second of which contain just under 25 percent of all programs. The distribution of scores as of the end of the second year of Cycle 2 is shown in Figure 3.3. The best possible score is 1.0.

FIGURE 3.3 Distribution of individual program scores in 2005. Each program is scored on a 4-point scale, with 1.0 being the best possible score and 4.0 the worst. Merit categories are 1.0-1.5, 1.5-2.0, and 2.0-2.5. Data from NSGO.

Although this adheres to the letter of the legislation, the close numerical spacing of adjacent rankings in Category 1 creates two stepwise discontinuities in the bonus assignment process, in which a small difference in score (e.g., between 1.17 and 1.19, or between 1.26 and 1.29) results in a significant difference in reward, while adjacent differences of similar size have no effect (Figure 3.4). For perspective, the discussion of the FE process notes that the mean magnitude of changes between the PAT and FE scores was about 0.1. A small difference in scoring during the FE may thus have substantial impacts on program funding even though the absolute differences in performance are small.

An alternative to division into discrete categories would be to reward the top 50 percent of the programs on a sliding scale, so that there would be no large steps but rather consecutive small ones. Although there still would be uncertainties in scores at this level of aggregation, a more logical approach would be to reward each program with a bonus increment proportional to the difference in score between adjacent programs. This is equivalent to computing the bonus in proportion to the difference between any given score in the top half and that of the program at the 50 percent mark in the ranked sequence. The resulting smoothed distribution, based on 2005 data, is shown in Figure 3.4.
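A minimal sketch of the proposed sliding-scale allocation follows. It assumes, for illustration, a $1 million bonus pool (the figure cited in Figure 3.4) and a hypothetical set of program scores on the 1-to-4 scale; each program in the top half receives a share proportional to how much better (numerically lower) its score is than that of the program at the 50 percent mark in the ranked sequence.

```python
def proportional_bonus(scores, pool):
    """Allocate a bonus pool across the top half of programs.

    Each top-half program's share is proportional to how much better
    (lower) its score is than the score of the program at the 50 percent
    mark of the ranked sequence.  Scores use the 1 (best) to 4 (worst) scale.
    """
    ranked = sorted(scores)                 # best (lowest) scores first
    cutoff = ranked[len(ranked) // 2]       # score at the 50 percent mark
    margins = [cutoff - s for s in ranked if s < cutoff]
    total_margin = sum(margins)
    return [pool * m / total_margin for m in margins]

# Hypothetical scores for a small set of programs (not NSGO data).
scores = [1.10, 1.17, 1.19, 1.26, 1.29, 1.41, 1.55, 1.70, 1.88, 2.05]
awards = proportional_bonus(scores, pool=1_000_000)

for score, award in zip(sorted(scores), awards):   # only the top half is awarded
    print(f"score {score:.2f}: ${award:,.0f}")
```

Under such a scheme, neighboring scores receive nearly equal awards, removing the large step between the seventh- and eighth-ranked programs that the current 2:1 category split produces.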

FIGURE 3.4 Current and proposed bonus distributions. Based on the scores of the 29 individual Sea Grant programs, 14 of the 29 programs scored high enough to receive bonus funds from a $1 million pool. Dark gray bars reflect a 2:1 funding ratio between the top seven ranked programs and programs ranked 8 through 14 (note the significant difference between the funds awarded to programs 7 and 8). Light gray bars show a proposed distribution based on the proportion by which each score exceeds the score of the 15th program. Amounts are in thousands of dollars. Data from NSGO.

Potential Biases

The results of a multivariate analysis of NSGO data, described in detail in Appendix F, show that the FE scores are not biased as a result of program officer seniority, program funding levels, program maturity, order of review within a cycle, or between Cycles 1 and 2. There is, however, statistically significant evidence that program officer continuity with the individual Sea Grant program is inversely related to the FE score.

Looking at the scores in relation to the continuity of NSGO program officer experience with a particular program, Figure 3.5 shows this effect in a simple form: all of the poor scores (greater than 2) occur with NSGO program officers with no more than 2 years of service with that individual Sea Grant program.

FIGURE 3.5 Continuity of program officer (PO) service with a particular program vs. the FE score. All programs with scores greater than 1.8 are associated with program officers with tenures of less than 3 years. Data from NSGO.

Considering both cycles, the correlation between NSGO program officer continuity with a particular Sea Grant program and the FE score assigned to that Sea Grant program is -0.37. That is, individual programs that have a longer history of interaction with their NSGO program officer are, on average, assigned lower (better) FE scores.[8]

[8] The probability that a correlation of -0.37 would have been observed if the true correlation were greater than or equal to zero is 0.0066. The coefficients reported in Appendix F are ordinary least squares regression coefficients, not simple correlation coefficients. The multivariate model apportions the observed variation in scores across several different variables simultaneously and thus does not map back to the simple correlation coefficient. In contrast, the simple regression in Appendix F does map back to the simple correlation coefficient: -0.077 = -0.371(0.393)/(1.901), where 1.901 is the standard deviation of program officer (PO) continuity and 0.393 is the standard deviation of the FE scores.
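As a companion to footnote [8], the sketch below computes a Pearson correlation from paired observations and shows how the slope of a simple regression of FE score on program officer continuity maps back to that correlation (slope = r x standard deviation of FE scores / standard deviation of continuity). The paired values are hypothetical placeholders, not NSGO data; only the mapping formula mirrors the footnote.

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical (PO continuity in years, FE score) pairs -- not NSGO data.
continuity = [0.5, 1.0, 2.0, 3.0, 4.5, 6.0, 7.5, 9.0]
fe_scores  = [2.3, 2.0, 1.9, 1.7, 1.6, 1.5, 1.3, 1.2]

r = pearson_r(continuity, fe_scores)
# For a simple regression of FE score on continuity, the slope maps back to r:
#   slope = r * stdev(FE scores) / stdev(continuity)
slope = r * stdev(fe_scores) / stdev(continuity)

print(f"correlation r:      {r:.3f}")
print(f"regression slope b: {slope:.3f}  (change in FE score per year of continuity)")
```

Plugging in the report's values (r = -0.371, standard deviations of 0.393 and 1.901) gives the slope of about -0.077 noted in the footnote, that is, FE scores roughly 0.08 point lower (better) per additional year of program officer continuity, before controlling for other variables.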

Two implications can be drawn from these findings. The first concerns the strength of the evaluation system and methodology: the final scores are not influenced by the characteristics of the program officer or by the characteristics of the program itself, its funding, or its maturity. Second, while there is evidence that program officer continuity is associated with better FE scores, this relationship should not necessarily be viewed as cause and effect; instead, it could be suggestive of the importance of linkages and feedback between the NSGO and the individual Sea Grant programs. The value of robust support by, and interaction with, skilled program officers must be balanced against the tendency for program officers to lose perspective as they develop longstanding relationships with individual SG programs. Rather than suggesting that scores could be improved by increasing the length of time that NSGO program officers are assigned to particular programs, the statistical finding highlights the importance of ensuring a close and ongoing working relationship between each individual Sea Grant program and the NSGO.

Broad Program Management

Although much of the discussion that took place during open sessions involving individual Sea Grant program directors focused on the use of quantitative scores for competitive ranking of the individual Sea Grant programs, it is important to consider the broader question of the role of the current review process in improving the individual programs and the National Sea Grant College Program (National Program) in other ways. Considerable effort goes into the periodic review process, yet it often appears to be used simply within the narrow confines of assigning merit and bonus funds. Given the effort involved, the outcomes should be used more widely for program management.

Unfortunately, the dissection of the review into 14 sub-criteria robs the process of an opportunity to take a holistic approach that would enhance its broader application. The PAT and FE discussions become discussions of individual criteria. Roughly as much time was spent in the 2005 FE on a criterion worth 4 percent of the total score as was spent on the research and outreach topics that constitute major contributions (20+ percent) to the total Sea Grant program.

The use of program ratings to rank for competitive funding can have unintended and counterproductive consequences. While competition encourages programs to improve, it can reduce the incentive for individual Sea Grant programs to cooperate with one another or to work productively with the NSGO on regional activities. This effect was brought up repeatedly in testimony at public meetings by individual Sea Grant program directors, who stated that they were somewhat reluctant to share their ideas with each other for fear of "helping the competition." Sharing and networking have traditionally been important positive elements of the NSGCP and have helped to weave the current 30 individual Sea Grant programs (not including 3 programs in development) into a single NSGCP.

It is essential that the review process evaluate the manner in which individual programs contribute to the whole. Introduction of an explicit criterion for performance in this area (discussed in the next section) would remedy this shortcoming and improve the effectiveness of the National Program as a whole.

COLLABORATION AMONG INDIVIDUAL SEA GRANT PROGRAMS

In 2004, Admiral James D. Watkins, Chair of the U.S. Commission on Ocean Policy (USCOP), stated in the letter transmitting the Commission's final report, An Ocean Blueprint for the 21st Century, to the President of the United States that the USCOP concluded that the following action was essential:

    . . . a new national ocean policy framework must be established to improve federal coordination and effectiveness. An important part of this new framework is strengthening support for state, territorial, tribal, and local efforts to identify and resolve issues at the regional level (USCOP, 2004).

Although the Commission's findings were nonbinding, the heavy emphasis placed on coordination and effectiveness at local, regional, and national scales is striking. Furthermore, in response to the USCOP's report, the emphasis placed on facilitating regional collaboration was adopted in the formal White House response, the U.S. Ocean Action Plan (Council on Environmental Quality, 2005). The U.S. Ocean Action Plan identified three high-priority actions to address the USCOP's call for "enhancing ocean leadership and coordination." In addition to "codifying the existence of NOAA within the Department of Commerce by passage of an organic act" and "establishing a cabinet-level federal ocean, coastal, and Great Lakes coordinating entity," the Bush administration called for greater effort to support "voluntary regional collaboration." In particular, the U.S. Ocean Action Plan underscores support for ". . . enhanced coordination and [the Plan] strongly values the local input that is essential in managing and protecting our nation's ocean, coastal, and Great Lakes resources."

Existing programs such as Sea Grant, which emphasize local and federal collaboration, would seem to be natural candidates to play leading roles in efforts to address well-recognized and emerging marine policy challenges at regional scales. If Sea Grant can demonstrate an ability to foster regional collaboration, one would expect that ability to be recognized and utilized.

Although some Sea Grant programs are already collaborating at various scales to address issues of high regional interest (such as the Chesapeake Bay area), it appears that these collaborations are driven largely by regional constituencies that interact with multiple Sea Grant programs. Thus it is not apparent that sufficient attention is given in the current review process to systematically identifying opportunities for regional collaboration.

Furthermore, during open session discussions with individual Sea Grant program directors, the assertion was made that the newly enacted Congressional directive to rate and rank programs for the purpose of distributing merit and bonus funds had, to some degree, a chilling effect on program-to-program collaboration. While this assertion is difficult to verify, there is reason to believe that the requirement to rate and rank programs has strained the relationship between the individual programs and the NSGO itself. Collaboration is an essential part of integrating the individual Sea Grant programs into a successful National Program. Barriers to effective communication and collaboration among the individual programs could realistically reduce the impact of advances made in various parts of the overall network. Because network building is an important function, it might be advisable to augment the original four criteria with a fifth criterion that assesses the extent to which an individual Sea Grant program contributes to network cohesiveness. Including this additional criterion would ensure that activities in support of the overall network are evaluated in the review process; however, it would only provide insight into one component of the network (i.e., how individual programs contribute to the overall program). To develop a fuller understanding of how the network is functioning as a whole, greater attention should also be focused on determining how well the NSGO is fostering collaboration at a variety of scales, including supporting collaborative efforts of individual programs.

FINDINGS AND RECOMMENDATIONS REGARDING THE PERIODIC ASSESSMENT PROCESS

The majority of the individual Sea Grant programs receive scores in the "Highest Performance" and "Exceeds Benchmark" categories; it therefore seems appropriate to ask whether the benchmarks are sufficiently ambitious. The Director of the National Sea Grant College Program, working with the National Sea Grant Review Panel, should carefully review the present benchmarks and indicators to ensure that they are sufficiently ambitious and reflect characteristics deemed of high priority for the program as a whole.

The evaluation criteria currently used do not adequately emphasize the importance of network building among individual programs and how such activities help to link the local and regional efforts into an effective nationwide program. Some aspects of the current program evaluation process and ranking appear to have fostered an increase in competition and lowered the level of cooperation between individual Sea Grant programs. This tendency is not consistent with efforts to build a cooperative nationwide effort, as encouraged by NOAA guidance documents (33 U.S.C. 1123).9 Explicit consideration of cooperative and collaborative activities between programs should be included in the program evaluation process, and programs should be rewarded for these kinds of activities.

9 Title 33, Section 1123 of the U.S. Code states that directors shall "encourage and promote coordination and cooperation between the research, education and outreach programs of the administration and those of academic institutions." See Appendix H.

Concomitantly, there is no evidence that the use of 14 weighted sub-criteria in Cycle 2 in place of the 4 criteria in Cycle 1 has improved the review process. On the contrary, the introduction of criteria weighted at small percentages (less than 5 percent) works against taking a holistic view of the individual programs and makes the process less efficient. The Director of the National Sea Grant College Program, under supervision of the Secretary of Commerce and in consultation with the National Sea Grant Review Panel and the individual Sea Grant programs, should substantially reduce the overall number of scored sub-criteria by combining various existing criteria, while adding cooperative, network-building activities as an explicitly evaluated, highly valued criterion. Benchmarks and indicators for this network-building criterion will need to be carefully constructed so that geographically isolated programs are not inappropriately penalized. However, the steps taken to make such allowances should not undermine the importance of this criterion for the vast majority of individual Sea Grant programs.

Steps taken by the NSGO and the NSGRP to improve consistency in grading are laudable; while it is not possible to attain perfect reliability in a system that values and depends on professional judgments, further actions could be taken to generate improvements in this area. The Director of the National Sea Grant College Program, working with the National Sea Grant Review Panel, should engage independent expertise to refine the benchmarks and grading instructions to conform to professional methods and standards for reliability and to refine the training materials used to prepare individuals involved in the evaluation process, in a manner consistent with the recommendations made in this report.

While the PAT site visit is a central element of the periodic review, it appears that in some instances it has expanded unnecessarily in terms of time and cost. Reducing the duration of the site visits would decrease the expenditure of time and funds and allow more overlap of reviewers, with a corresponding increase in the reliability of the results. In the absence of standards set by the NSGO, individual Sea Grant program directors tend to expand their presentations to match those of other programs. The National Sea Grant Office and the National Sea Grant Review Panel should reduce the effort and costs required to prepare for and conduct a Program Assessment Team site review by providing specific limits on the amount and kind of preparatory material to be provided to the Program Assessment Team and by limiting the site visit to no more than three days, including the time to draft the preliminary report and meet with program directors and institutional representatives.

The perceived lack of transparency in the FE process has been mitigated by issuance of the 2005 version of the NSGO memorandum describing this phase of the review process. However, unlike the PAT reports, the FE ratings are not transmitted to the programs; this contributes to a remaining lack of transparency in the FE rating and eliminates a useful opportunity for the NSGO to explain to the individual programs why the views of the NSGO (as reflected in the FE) and of the PAT differ.

The "Revised Policy Memorandum on NSGO Final Evaluation and Merit Funding" (NSGO, 2005b) from the NSGCP director moves significantly toward the goal of improving the transparency of these processes (see Appendix E). A few shortcomings remain, particularly the lack of a description of how the qualitative ratings of the FE are converted into numerical values and how the merit categories and relative standings are defined in terms of the resulting numerical scores. Greater clarity is needed in the communication of ratings and rankings of programs. The Director of the National Sea Grant College Program should communicate the results of the FE (annual NSGO Final Evaluation) directly to individual Sea Grant program directors. This communication should include the final rating score received by that program (as begun in 2004) and document any substantial difference between the conclusions reached during the annual evaluation and the most recent periodic review. Furthermore, the Director of the National Sea Grant College Program should communicate the implications of the annual evaluation for the rating and ranking process used to determine a program's eligibility for, or receipt of, merit or bonus funding.

The diverse score changes for "long-range planning" among the programs that have been reviewed twice show that the long-range planning concept has not been well defined and communicated by the NSGO or well implemented by the individual Sea Grant programs. Existence of an appropriate long-range plan shortly after a program is reviewed is essential as a road map for the subsequent interval and as a yardstick against which a program can be measured each year and at the forthcoming PAT review. The National Sea Grant Office, in consultation with the National Sea Grant Review Panel and individual Sea Grant programs, should establish regular procedures (separate from annual and periodic performance evaluation) for working with each individual Sea Grant program to create and adopt an appropriately ambitious strategic plan, with goals and objectives against which the program would be evaluated at the next program evaluation period.

Scoring uncertainties arising from the diversity of programs under review and from differences in how different PATs interpret the benchmarks mean that the stepwise score changes at the 25 percent and 50 percent marks are not defined well enough to justify the abrupt changes in bonus funding at those boundaries. For example, in 2004, 15 programs received bonus funds. An alternative to distributing bonus funds based simply on whether a program falls into the top or bottom half of the rankings would be to reward the top 50 percent on a sliding scale, so that instead of large steps in the award of bonus funds there would be a gradation of awards. This would reduce the potential for very small differences in scores being converted into large differences in the amount of bonus awarded. The Director of the National Sea Grant College Program, under supervision of the Secretary of Commerce, should revise the calculation of bonus funding allocation relative to program rank to ensure that small differences in program rank do not result in large differences in bonus funding, while preserving or even enhancing the ability to competitively award bonus funds as required by the National Sea Grant College Act Amendments of 2002 (P.L. 107-299). Several approaches for accomplishing this seem worthy of consideration. One approach would be to reward each program in the upper half in proportion to the difference between its score and the score of the program at the 50 percent mark (the median score). The resulting smoothed distribution is shown in Figure 3.4. Another possible alternative would be to smooth the distribution based on the relative standings of those programs in the top half relative to the middle program. This second approach is less attractive given that the relative standings are themselves derived from the program scores. Neither approach would totally eliminate differences in bonus funding between programs that have statistically similar scores, but either approach would significantly reduce the potential for two programs with statistically similar scores to receive markedly different bonus awards. Both approaches would appear to satisfy the congressional desire to see bonus funding distributed based on performance (P.L. 107-299).
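To make the first, score-proportional approach concrete, the minimal sketch below allocates a hypothetical bonus pool among programs in the upper half in proportion to how far each program's score exceeds the median. The program names, scores, and pool size are invented for illustration only; the report does not prescribe a specific formula or implementation.

```python
# Illustrative sketch only: distributes a hypothetical bonus pool among
# programs in the top half of the rankings in proportion to the amount by
# which each program's score exceeds the median score. All values invented.
from statistics import median


def sliding_scale_bonus(scores: dict[str, float], pool: float) -> dict[str, float]:
    """Allocate `pool` across programs scoring above the median, in
    proportion to (score - median). Programs at or below the median
    receive nothing, mirroring the top-50-percent eligibility rule."""
    med = median(scores.values())
    margins = {name: s - med for name, s in scores.items() if s > med}
    total_margin = sum(margins.values())
    if total_margin == 0:
        return {name: 0.0 for name in scores}
    return {name: pool * margins.get(name, 0.0) / total_margin for name in scores}


if __name__ == "__main__":
    # Hypothetical program scores, not actual Sea Grant ratings.
    scores = {"A": 92.0, "B": 90.5, "C": 88.0, "D": 84.0, "E": 80.0, "F": 76.0}
    for program, award in sliding_scale_bonus(scores, pool=1_000_000).items():
        print(f"{program}: ${award:,.0f}")
```

Under a scheme of this kind, two programs whose scores differ by a fraction of a point receive nearly identical awards, whereas the current two-bin rule can convert that same fractional difference into the full difference between receiving a bonus and receiving none. A small discontinuity remains at the median itself, consistent with the observation above that neither approach entirely eliminates boundary effects.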

RETHINKING THE PROGRAM ASSESSMENT PROCESS

Many of the changes proposed above are intended to address the challenges to effective program assessment that stem from the desire to rate and rank the individual Sea Grant programs for the purposes of determining which programs qualify for bonus funding and to support efforts to distribute funds in a competitive manner. As discussed in Chapter 2, in response to congressional desire to see a greater level of oversight and competition in the program, the purpose of assessment within the Sea Grant program became twofold. First, and more traditionally, assessment is used to identify weaknesses or opportunities for growth in the individual Sea Grant programs and possible mechanisms to address them. Second, and more recently, assessment is used to reward programs for achievement (i.e., to rate and rank programs in order to distribute bonus funds competitively).

Steps proposed to further strengthen the assessment process for the purposes of establishing a more credible and reliable rating and ranking system (including greater overlap among PAT teams, more uniform PAT visits and briefing materials, shortened PAT visits to allow completion of the PAT reviews in a shorter period, etc.) may be difficult to achieve fully and would likely reduce the value of assessment for the purpose of exploring areas of growth or mechanisms for accomplishing it. Thus, it would seem appropriate to explore an alternative structure for assessment within the Sea Grant program, one that fundamentally embraces the two purposes of assessment by developing two separate mechanisms, each tailored to a single purpose. Designing an effective dual-mode assessment process would require that one mode support the annual rate-and-rank process, while the second mode would nurture the program by evaluating the National Program in its entirety (i.e., all the individual programs as well as the NSGO) at least once every 4 years.

Such a change in approach would allow external peer reviewers to move beyond simple ratings to consider broader issues, such as providing an independent check on individual programs and on the evaluation process overall. Broader issues may include identifying areas for growth or improvement and mechanisms for achieving them, exploring ways to strengthen the individual programs' institutional relationships, examining the nature of each individual program's relationship with the NSGO, and assessing the effectiveness and credibility of the annual evaluation (to support findings about the "state" of the individual programs as well as the network overall). The implications of such a change will be further explored in Chapter 4.

Next: 4 Program Oversight and Management »