3 Critique of the Periodic Assessment Process
Pages 43-68



From page 43...
... The third section examines the FE process, carried out by NSGO staff during an intensive review of the programs that make up the most recently reviewed PAT cohort (7 to 8 programs in a given year), which results in a final evaluation letter from the National Director to each individual Sea Grant program director.
From page 44...
... The NSGO needs to disseminate the contents of the documents more actively and broadly, through a process of direct, personal explanation of the periodic program assessment process to the staff and directors of the individual Sea Grant programs. The individual program directors should, in turn, disseminate the contents of these documents, particularly the PAT Manual, to their staffs and to all others who will take part in the review.
From page 45...
... , but are by and large quite well done and are consistent with the goal of assessing, and thus guiding, performance of individual Sea Grant programs. The use of performance criteria in underpinning subjective evaluations is treated in Appendix B of the PAT Manual.
From page 46...
... and distribution among program elements; Leveraged funding from partners (NOAA, other Federal, State and local) for the program; National competition funding (NSIs [National Strategic Initiatives]
From page 47...
... -- Distribution of investment effort to meet strategic plan priorities; Identification of short- to long-term benchmarks; Work plan developed for integration of program elements; Program development and rapid response procedures and strategies to meet emerging issues; Evaluation process

4. INDICATORS FOR ACHIEVING SIGNIFICANT RESULTS

Contributions to Science and Engineering -- Number and list of publications (journal articles, book chapters, reports, etc.)
From page 48...
... attests to the complexity of the review, but the organization of indicators into sub-groups provides a useful framework for understanding the most valued characteristics of an individual Sea Grant program. An essential contribution made by the study of performance criteria is to improve the efficiency of activities such as the Sea Grant review process.
From page 49...
... Nor is there evidence to suggest that 14 weighted sub-criteria provide a more accurate assessment of program performance than a smaller number of criteria. The 14 weighted sub-criteria may also increase the perception that individual Sea Grant programs now have to "teach to the test," that is, that the very specific criteria skew behavior.
From page 50...
... It also describes in detail the manner in which the merit and bonus decisions are made but does not specify how the performance criteria categories and relative standings are defined in terms of the resulting numerical scores.3

PROGRAM ASSESSMENT TEAM VISIT

Currently, the site visit by the PAT is the defining event of the periodic review process. The concepts of program review and accreditation are well established in the academic community and among granting agencies.
From page 51...
... The PATs need to understand the individual Sea Grant program's manifold dimensions. In March 2005, the NSGO added a new section to the PAT Manual (NSGO, 2005a)
From page 52...
... . The FE process has been the subject of frustration for some individual Sea Grant program directors who characterize the FE as "lacking transparency" or as a "smoke-filled room" event, where program scores are changed for reasons that are unknown or not understood by the individual Sea Grant program directors.
From page 53...
... , the degree to which it will clarify the process and reduce tensions is not yet known. The letters that the NSGO director sends to the individual Sea Grant program directors at the conclusion of the FE process may also contribute to the perception of a lack of transparency.
From page 54...
... Because the mean overall score difference includes positive and negative differences, it does not provide a good representation of the typical difference between the PAT and FE score for individual programs. The mean absolute overall score difference is indicative of the typical magnitude of differences in PAT versus FE scores; in Cycle 1, the mean absolute overall score difference was 0.1530 and the mean absolute overall score difference in Cycle 2 was 0.0827.6 Given its responsibility for managing the overall program, the NSGO should have greater say when disagreements occur between opinions developed by the PAT over the span of a few days and opinions developed by the NSGO over several years.
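The distinction drawn above can be sketched numerically: the signed mean cancels offsetting positive and negative PAT-minus-FE differences, while the absolute mean reflects the typical magnitude of disagreement. The per-program differences below are invented for illustration, not the report's data:

```python
# Hypothetical PAT-minus-FE score differences for five programs
# (illustrative only; not the report's actual data).
pat_minus_fe = [0.2, -0.15, 0.1, -0.2, 0.05]

# Signed mean: positives and negatives cancel, masking real disagreement.
mean_signed = sum(pat_minus_fe) / len(pat_minus_fe)

# Absolute mean: typical magnitude of PAT/FE disagreement per program.
mean_absolute = sum(abs(d) for d in pat_minus_fe) / len(pat_minus_fe)

print(f"mean signed difference:   {mean_signed:+.3f}")
print(f"mean absolute difference: {mean_absolute:.3f}")
```

Here the signed mean is exactly zero even though every program's PAT and FE scores differ, which is why the report treats the absolute mean as the better summary.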
From page 55...
... In Cycle 2, all programs that received worse scores in the FE (negative PAT-FE) had NSGO program officers with less than 2.5 years with those individual programs.
From page 56...
... This section addresses the use of the resulting numerical scores for:

· Determining whether there have been improvements in the individual Sea Grant programs,
From page 57...
... . Because the majority of the individual Sea Grant programs receive scores in the "Highest Performance" and "Exceeds Benchmark" categories (categories 1 and 2, respectively)
From page 58...
... , rather than competition among programs. It was intended to stimulate improved performance by individual Sea Grant programs. However, in 2002 Congress mandated creation of five sharply defined categories into which the individual Sea Grant programs were to be placed.
From page 59...
... This small difference in scoring during the FE may have substantial impacts on program funding even though the absolute differences in performance are small. An alternative to division into discrete categories would be to reward the top 50 percent of the programs on a sliding scale so that there would be no large steps, but rather consecutive small ones.
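The sliding-scale alternative described above can be sketched as follows. The pool size and program count are assumptions taken from the report's $1 million, 29-program example; the linear weighting scheme is one illustrative choice, not the report's prescription:

```python
# Sketch of a sliding-scale bonus allocation: the top 50 percent of ranked
# programs share the pool with linearly decreasing weights, so adjacent
# ranks differ only by one small, constant step (no large cliffs).
POOL = 1_000_000            # assumed bonus pool
N_PROGRAMS = 29             # program count from the report's example
n_funded = N_PROGRAMS // 2  # top 50 percent receive bonuses (14 programs)

# Weight n_funded units for rank 1, decreasing by 1 unit per rank.
weights = list(range(n_funded, 0, -1))
total = sum(weights)
awards = [POOL * w / total for w in weights]

print(f"rank 1 award:       ${awards[0]:,.0f}")
print(f"rank {n_funded} award:      ${awards[-1]:,.0f}")
print(f"step between ranks: ${awards[0] - awards[1]:,.0f}")
```

Because every adjacent pair of ranks differs by the same small increment, a tiny scoring difference during the FE moves a program's bonus only slightly, rather than across a large category boundary.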
From page 60...
... Based on the scores of the 29 individual Sea Grant programs, 14 of the 29 programs scored high enough to receive bonus funds from a $1 million pool. Dark gray bars reflect a 2:1 funding ratio between the top seven ranked programs and programs ranked 8 through 14 (note the significant difference between funds awarded to programs 7 and 8)
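The 2:1 split described above implies concrete per-program amounts: with 7 programs receiving two shares each and 7 receiving one share each, the $1 million pool divides into 21 equal units. A minimal sketch:

```python
# Per-program amounts under the 2:1 funding ratio: top 7 programs get
# two units each, programs 8-14 get one unit each, from a $1M pool.
POOL = 1_000_000
top, lower = 7, 7

# 7 programs x 2 units + 7 programs x 1 unit = 21 units total.
unit = POOL / (top * 2 + lower * 1)
top_award, lower_award = 2 * unit, unit

print(f"programs 1-7:  ${top_award:,.2f} each")
print(f"programs 8-14: ${lower_award:,.2f} each")
print(f"drop from program 7 to program 8: ${top_award - lower_award:,.2f}")
```

The roughly $47,600 drop between programs 7 and 8 is the "significant difference" the figure caption points to, even though the scores of those two programs may be nearly identical.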
From page 61...
... in a simple form: all of the poor scores (greater than 2) occur with NSGO program officers with no more than 2 years of service with that individual Sea Grant program.
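The pattern stated above can be expressed as a simple grouping check. The records below are invented purely for illustration (the report's underlying data are not reproduced here); the check mirrors the claim that every poor score coincides with short program-officer tenure:

```python
# Hypothetical (program officer tenure, score) records; scores above 2
# are "poor" per the text. These values are illustrative, not real data.
records = [
    {"tenure_years": 1.0, "score": 2.3},
    {"tenure_years": 1.5, "score": 2.6},
    {"tenure_years": 3.0, "score": 1.4},
    {"tenure_years": 4.5, "score": 1.1},
    {"tenure_years": 2.0, "score": 2.1},
]

# Select the poor scores and test the tenure pattern the report describes.
poor = [r for r in records if r["score"] > 2]
pattern_holds = all(r["tenure_years"] <= 2 for r in poor)

print(f"poor scores: {len(poor)}; all with <= 2 years of service: {pattern_holds}")
```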
From page 62...
... Broad Program Management

Although much of the discussion during open sessions with individual Sea Grant program directors focused on the use of quantitative scores for competitive ranking of the programs, it is important to consider the broader question of the current review process's role in improving the individual programs and the National Sea Grant College Program (National Program) in other ways.
From page 63...
... would remedy this shortcoming and improve the effectiveness of the National Program as a whole.

COLLABORATION AMONG INDIVIDUAL SEA GRANT PROGRAMS

In 2004, Admiral James D
From page 64...
... FINDINGS AND RECOMMENDATIONS REGARDING THE PERIODIC ASSESSMENT PROCESS

The majority of the individual Sea Grant programs receive scores in the "Highest Performance" and "Exceeds Benchmark" categories; thus, it seems appropriate to ask whether the benchmarks are sufficiently ambitious. The Director of the National Sea Grant College Program, working with the National Sea Grant Review Panel, should carefully review the present benchmarks and indicators to ensure that they are sufficiently ambitious and reflect characteristics deemed of high priority for the program as a whole.
From page 65...
... work against taking a holistic view of the individual programs and create a less efficient process. The Director of the National Sea Grant College Program, under supervision of the Secretary of Commerce and in consultation with the National Sea Grant Review Panel and the individual Sea Grant programs, should substantially reduce the overall number of scored sub-criteria by combining various existing criteria, while adding cooperative, network-building activities as an explicitly evaluated, highly valued criterion.
From page 66...
... Lacking standards set by the NSGO, individual Sea Grant program directors tend to expand their presentations to match those of other programs. The National Sea Grant Office and National Sea Grant Review Panel should reduce the effort and costs required to prepare for and conduct a Program Assessment Team site review by setting specific limits on the amount and kind of preparatory material to be provided to the Program Assessment Team, and by limiting the site visit to no more than three days, including the time to draft the preliminary report and meet with program directors and institutional representatives.
From page 67...
... for working with the individual Sea Grant program to create and adopt an appropriately ambitious strategic plan, with goals and objectives against which the program would be evaluated at the next program evaluation period. There are scoring uncertainties arising from the diversity of programs being reviewed and the differences in interpretation of benchmarks by different PATs such that the stepwise score changes at the 25 percent and 50 percent marks are not defined adequately to justify the abrupt bonus changes at those boundaries.
From page 68...
... First, and more traditionally, assessment is used to identify weaknesses or opportunities for growth in the individual Sea Grant programs and possible mechanisms to address them. Second, and more recently, assessment is used to reward programs for achievement (i.e., rate and rank programs in order to pass out bonus funds competitively)

