
International Education and Foreign Languages: Keys to Securing America's Future (2007)

Chapter: 11 Monitoring, Evaluation, and Continuous Improvement

Suggested Citation: National Research Council. 2007. "11 Monitoring, Evaluation, and Continuous Improvement." International Education and Foreign Languages: Keys to Securing America's Future. Washington, DC: The National Academies Press. doi: 10.17226/11841.


11 Monitoring, Evaluation, and Continuous Improvement

The extent to which programs can meet their objectives is dependent in part on how they are managed at the national level. The preceding several chapters outlined the evidence available to characterize the performance of the Title VI and Fulbright-Hays (Title VI/FH) programs in each of the eight key areas identified by Congress. However, neither the data available in the Evaluation of Exchange, Language, International and Area Studies (EELIAS) database nor the few available evaluation studies enable an assessment of the relative value of each of the 14 component programs. Program monitoring and performance measurement efforts have largely been conducted as a top-down enterprise accompanied by little consultation with grantees. Yet program performance is also partially dependent on the extent to which universities and other grantees share program goals and work collaboratively to accomplish those goals. Although there are a few examples of mechanisms that facilitate coordination and identification of promising practices across programs (see the description in Chapter 9 of CIBERweb), and the center programs have established an effective process for involvement in national policy making, there is no established process to facilitate continual assessment and improvement.

This chapter discusses the status of program monitoring and program evaluation activities and outlines ways that the U.S. Department of Education (ED) should advance its efforts in these areas. It also presents the limitations of current collaborations with grantees and a recommended approach to continuous improvement that would complement and potentially enhance the department's efforts to ensure the effectiveness of the Title VI/FH programs.

PROGRAM MONITORING

At ED, program monitoring is carried out by program officers in the International Education Programs Service (IEPS) who are assigned to a particular program(s) or, in the case of the National Resource Center (NRC) program, a particular world area. Unlike in the past, when financial and project (programmatic) monitoring was conducted by separate staff, program officers are currently responsible for both functions, although their work focuses primarily on financial monitoring. At the same time, the number of programs that staff are required to administer has increased over time. In recent years, full-time-equivalent staffing levels have decreased, dropping from 23 in FY 2003 to 21 in FY 2006, despite congressional concerns articulated in the FY 2003 budget.[1] Limited staff combined with limited travel resources and departmental policies related to travel approval have severely limited the ability of staff to visit grantees.

ED has increased its emphasis on project-level evaluation, in part to address the limits on its ability to conduct on-site reviews. For example, the number of points awarded for evaluation plans has been increased in the past several NRC competitions, and it is now the criterion with the highest number of points (25 of 165). In addition, ED is now requiring NRCs to include a professional evaluator as well as a peer evaluator in their evaluation plans. The effectiveness of this new emphasis on evaluation is as yet unknown.

Like most federal programs, the Title VI/FH programs require their grantees to submit annual reports, which are used to review the progress of individual grantees and to assist efforts to evaluate the performance of the programs overall. For most of the programs' history, annual reports were submitted in paper format. In recent years, as discussed below, IEPS shifted from a paper to an electronic, web-based system (EELIAS) for collecting grantee information.[2]

In 1978, the General Accounting Office recommended that Title VI/FH programs share evaluative data with grantees to provide information about accomplishments and guidance for possible improvements. At the request of the President's Commission on Foreign Language and International Studies in 1979, ED staff prepared an analysis of how NRCs use their funds based on reports submitted by grantees (Scheider, 1979). Other staff-generated memos to the NRC and Foreign Language and Area Studies (FLAS) Fellowships directors distributed in the mid-1990s provide analytic syntheses of data from annual reports on such criteria as average number of degrees awarded, career choice, and distribution of degrees by discipline (Schneider, 1995). The committee was told that data had been compiled from reports at the end of every funding cycle since the 1970s. Other reports on program performance, such as those mentioned above, seem to have disappeared with the transition to an electronic system.

[1] Congress stated: "The conferees are disappointed that the Department has not fully addressed the staffing needs of the Title VI and Fulbright-Hays international education programs" (House Report 108-010).
[2] Shortly before release of the committee's report, the system was revised and released as the International Resource Information System.

BOX 11-1
Categories of Performance Measures

Inputs: Resources used (e.g., amount of funding).
Activities: Types of work performed (e.g., number of publications or workshops, number of students who study abroad).
Outputs: Results from program activities (e.g., number of students enrolled).
Outcomes: An accomplishment attributable to program outputs (e.g., improvement in language proficiency, job placement).
Impact: Achievement of broad social objectives (e.g., increased global understanding).

SOURCE: Adapted from Joyner (2006a).

Performance Measures

The Government Performance and Results Act (GPRA) requires all federal programs to report measures of program performance, with several types of possible measures (see Box 11-1). Until recently, ED had performance measures for the NRC and FLAS programs only, which were used to report performance on all the Title VI domestic programs. After an Office of Management and Budget (OMB) review under the Program Assessment Rating Tool (PART) process that resulted in a rating of "results not demonstrated," the program developed specific performance measures for each of the 14 programs (see Table 11-1).[3] There are now two performance measures and one efficiency measure for most programs, with an emphasis on measures intended to indicate outcomes, as preferred by OMB. The measures were recently approved by OMB.

[3] OMB conducts periodic program reviews using PART and assigns one of five ratings as a result of the process: (1) effective, (2) moderately effective, (3) adequate, (4) ineffective, and (5) results not demonstrated.
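The distinctions in Box 11-1 are easiest to see when applied to concrete metrics. The minimal Python sketch below does that; the metric-to-category pairings extend the box's examples and are illustrative, not ED's own coding.

```python
from enum import Enum

class MeasureCategory(Enum):
    """The five GPRA measure categories summarized in Box 11-1."""
    INPUT = "resources used"
    ACTIVITY = "types of work performed"
    OUTPUT = "results from program activities"
    OUTCOME = "accomplishment attributable to program outputs"
    IMPACT = "achievement of broad social objectives"

# Hypothetical metrics tagged with their category, echoing the
# examples in Box 11-1; the pairings are illustrative only.
EXAMPLE_METRICS = {
    "annual appropriation (dollars)": MeasureCategory.INPUT,
    "workshops held": MeasureCategory.ACTIVITY,
    "students enrolled in language courses": MeasureCategory.OUTPUT,
    "gain in language proficiency": MeasureCategory.OUTCOME,
    "increased global understanding": MeasureCategory.IMPACT,
}

for metric, category in EXAMPLE_METRICS.items():
    print(f"{category.name:<8} | {metric}")
```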

TABLE 11-1  Planned Title VI/Fulbright-Hays Performance Measures

NRC
  Performance measures: (1) Percentage of critical languages taught, as reflected in the list of critical languages referenced in the Title VI program statute. (2) Percentage of employed master's and doctoral degree graduates in fields of postsecondary education or government.
  Efficiency measure: Cost per master's or doctoral degree graduate employed in fields of postsecondary education or government.

FLAS
  Performance measures: (1) Average language competency score of FLAS fellowship recipients at the end of one full year of instruction will be at least 1.20 levels higher than their average score at the beginning of the year. (2) Percentage of employed master's and doctoral degree graduates in fields of postsecondary education or government.
  Efficiency measure: Cost per fellow increasing average language competency by at least one level.

CIBE
  Performance measures: (1) Percentage of graduates of a Ph.D. or master's, including MBA, program with significant international business concentration at the postsecondary institution who are employed in business-related fields, including teaching in a business school. (2) Percentage of projects reported and validated as of high quality or successfully completed.
  Efficiency measure: Cost per master's, including MBA, degree recipient or Ph.D. graduate employed in business-related fields, including teaching at a business school.

IRS
  Performance measures: (1) Number of outreach activities that result in adoption or further dissemination within a year, divided by the total number of IRS outreach activities conducted in the current reporting period. (2) Percentage of projects reported and validated as of high quality or successfully completed.
  Efficiency measure: Cost per high-quality, successfully completed project.

LRC
  Performance measures: (1) Number of outreach activities that result in adoption or further dissemination within a year, divided by the total number of LRC outreach activities conducted in the current reporting period. (2) Percentage of projects reported and validated as of high quality or successfully completed.
  Efficiency measure: Cost per high-quality, successfully completed project.

UISFL
  Performance measures: (1) Percentage of critical languages addressed/covered by foreign language major, minor, or certificate programs created or enhanced; or by language courses created or enhanced; or by faculty or instructor positions created with UISFL or matching funds in the reporting period. (2) Percentage of projects reported and validated as of high quality or successfully completed.
  Efficiency measure: Cost per high-quality, successfully completed project.

TABLE 11-1  Continued

BIE
  Performance measures: (1) Number of outreach activities that result in adoption or further dissemination within a year, divided by the total number of BIE outreach activities conducted in the current reporting period. (2) Percentage of projects reported and validated as of high quality or successfully completed.
  Efficiency measure: Cost per high-quality, successfully completed project.

TICFIA
  Performance measures: (1) Percentage of projects reported and validated as of high quality or successfully completed. (2) N/A.
  Efficiency measure: Cost per high-quality, successfully completed project.

AORC
  Performance measures: (1) Percentage of projects reported and validated as of high quality or successfully completed. (2) Percentage of scholars who indicated they were "highly satisfied" with the services the center provided.
  Efficiency measure: Cost per high-quality, successfully completed project.

DDRA
  Performance measures: (1) Average fellow increases language competency by at least 0.75 level. (2) Percentage of projects reported and validated as of high quality or successfully completed.
  Efficiency measure: Cost per grantee increasing language competency by at least one level in one area (or all three).

FRA
  Performance measures: (1) Average fellow increases language competency by at least 0.50 level. (2) Percentage of projects reported and validated as of high quality or successfully completed.
  Efficiency measure: Cost per grantee increasing language competency by at least one level in one area (or all three).

GPA
  Performance measures: (1) Average fellow increases language competency by at least 0.50 level. (2) Percentage of projects reported and validated as of high quality or successfully completed.
  Efficiency measure: Cost per grantee increasing language competency by at least one level in one area (or all three).

SA
  Performance measures: (1) Percentage of projects reported and validated as of high quality or successfully completed. (2) N/A.
  Efficiency measure: Cost per high-quality, successfully completed project.

IIPP
  Performance measures: (1) Percentage of employed IIPP graduates in government or international service. (2) Percentage of IIPP participants who complete a master's degree within six years of enrolling in the program.
  Efficiency measure: Cost per IIPP graduate employed in government or international service.
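Several of the measures in Table 11-1 reduce to simple ratios or differences of means. As a minimal sketch, assuming a hypothetical record layout rather than the actual reporting schema, the following Python code computes the FLAS competency-gain measure and the outreach adoption measure used for IRS, LRC, and BIE:

```python
from statistics import mean

# Hypothetical grantee records; the field names are illustrative and
# do not reflect the actual EELIAS/IRIS reporting schema.
flas_fellows = [
    {"score_start": 1.0, "score_end": 2.5},
    {"score_start": 2.0, "score_end": 3.0},
]
outreach_activities = [
    {"adopted_within_year": True},
    {"adopted_within_year": False},
    {"adopted_within_year": True},
]

# FLAS measure: the average end-of-year competency score should be at
# least 1.20 levels above the average score at the beginning of the year.
avg_gain = (mean(f["score_end"] for f in flas_fellows)
            - mean(f["score_start"] for f in flas_fellows))
flas_target_met = avg_gain >= 1.20

# IRS/LRC/BIE outreach measure: activities resulting in adoption or
# further dissemination within a year, divided by all outreach
# activities conducted in the reporting period.
adoption_rate = (sum(a["adopted_within_year"] for a in outreach_activities)
                 / len(outreach_activities))

print(f"FLAS average gain: {avg_gain:.2f} levels (target met: {flas_target_met})")
print(f"Outreach adoption rate: {adoption_rate:.0%}")
```

The arithmetic is trivial; as the chapter notes, the harder questions are whether the underlying self-reported data are consistent across grantees and whether these quantities reflect the programs' actual goals.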

The measures are based on a combination of quantitative (e.g., graduate placements, language courses, self-reported language proficiency) and narrative (e.g., accomplishments) information provided by grantees to ED via its web-based grantee reporting system.

The performance measures identified appear to have been developed with little or no input from the universities or other organizations that receive funding. Although designed to measure outcomes as much as possible, little thought seems to have been given to the goals of the programs or how best to measure performance against those goals. Several universities expressed concern about the measures for which they are being held accountable, pointing to several issues: (1) monitoring the percentage of languages taught overlooks the level of language taught and might create a disincentive to offer advanced-level language courses; (2) placements in fields other than academia and government are reasonable outcomes not reflected in the placement measures (see Chapter 6 for further discussion of this issue); and (3) the two measures used do not reflect the emphasis placed on outreach.

EELIAS Database

The EELIAS system, reportedly developed with input from the field, appears to have been designed to capture all potentially useful data, using the NRC program as the model (see Box 11-2 for background information). It is not clear, however, whether full consideration was given to how the data would be used for program monitoring—that is, the specific performance measures to be used and how the required data would be used to report those measures. In practice, the system has been used by ED exclusively for two purposes: (1) to enable project officers to review the performance of individual grantees and make decisions about annual continuation grants for multiyear grants and (2) to report performance measures. The project officer review uses primarily narrative information and appears to be a one-way review, with no feedback to the grantee.[4]

The purpose of the extensive quantifiable data collected that are not used for performance measures is also unclear. No information has been given to grantees on how their performance compares with that of other grantees, and no other aggregate reports have been produced. The system is widely viewed by universities as a burdensome, time-consuming requirement, not a resource. Many universities pointed out that the time required to collect and then enter the required information is significant, particularly in relation to the relatively small amount of money received and given that they do not get any summary data or other feedback.[5]

[4] The committee was told that a continuation award has never been denied as a result of this review.
[5] In fact, several universities reported that they were not aware that they could access their own data, and certainly did not know how to do so once it had been submitted.

BOX 11-2
Evolution of ED's Grant Monitoring Database

The Evaluation of Exchange, Language, International and Area Studies (EELIAS) database was developed by the National Foreign Language Center under a grant from the Title VI International Research and Studies program. Developed over a period of several years, it was intended to be used as a grant monitoring tool and to provide data for ED to report program performance to the Office of Management and Budget, as required by the Government Performance and Results Act. It was developed as an online grantee reporting system intended to provide information on the language, international, and area studies components of the Title VI/FH programs. National Resource Centers (NRC) and Foreign Language and Area Studies (FLAS) grantees were the first to report information into the system, beginning in the early 2000s. Other programs were phased in over time, and an ED contractor has been entering historical grantee information. Multiyear grantees submit both annual and final reports.

According to both ED staff and the committee's own review, the data are most reliable for the NRC and FLAS programs. In fact, there is some thought among other grantees that the system was designed for those programs and imposed onto their programs. Technical issues with the design of the system, as well as incompleteness of data, make it difficult to use aggregate data for program evaluation purposes. ED worked with a contractor to develop a revised system designed to correct technical issues. The revised and renamed International Resource Information System should address some of the database's technical limitations and make grantee-level information more readily accessible. However, it is unclear whether and, if so, how the new system will result in improved program-level or aggregate information or public access to data used in reporting performance measures, which are necessary to ensure transparency.

The department has grappled with the same technical issues encountered by the committee: lack of internal controls, frequent use of open-ended questions and an "other" category, limited data verification, and missing data (see Appendix B, Box B-1). These issues limited our ability to use much of the EELIAS data for program-level review. The committee notes that the system was designed under a grant rather than a contract. Grants do not typically allow significant feedback from the sponsoring agency and are not the ideal mechanism for developing a program monitoring tool.
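To make "internal controls" and "data verification" concrete: a reporting system can reject or flag questionable records at entry rather than accepting free text. The Python sketch below illustrates the idea; the field names, the language list, and the validation rules are hypothetical and are not drawn from the actual EELIAS schema.

```python
# Hypothetical controlled vocabulary and required fields; not the
# actual EELIAS schema.
KNOWN_LANGUAGES = {"Arabic", "Chinese", "Russian", "Swahili"}
REQUIRED_FIELDS = ("grantee", "language", "enrollment")

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in record]
    if "language" in record and record["language"] not in KNOWN_LANGUAGES:
        problems.append(f"unrecognized language: {record['language']!r}")
    if "enrollment" in record and not isinstance(record["enrollment"], int):
        problems.append("enrollment must be an integer")
    return problems

# A free-text "Other" entry and a string enrollment are both flagged
# at entry rather than silently entering the database.
print(validate_record({"grantee": "U1", "language": "Other", "enrollment": "12"}))
```

Constraining entries such as languages to a controlled vocabulary, rather than an open-ended "other" category, is what makes later aggregation across grantees feasible.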

Nonetheless, ED made the decision to implement the system across all Title VI/FH programs. Perhaps recognizing the limitations posed by the grant mechanism, ED is now managing the system with the assistance of a contractor, which enables substantially greater control over the product.

Concurrent with the committee's deliberations, ED was in the midst of a systematic review of the database, with the assistance of its contractor. ED is aware of many of the issues encountered by the committee. In fact, it independently shared observations about the system's limitations and its proposed solutions. Near the end of the committee's deliberations, ED published in the Federal Register proposed system changes that will reportedly fix most if not all of the technical issues with the system we have identified (U.S. Department of Education, 2006d). Select grantees were asked to review the navigability of the new system, including a prototype for a new user interface.

However, it appears that use of the redesigned system, which was renamed the International Resource Information System when launched as a "new" system (see http://www.ieps-iris.org) shortly before release of the committee's report, will continue to focus on individual grant monitoring and reporting performance measures. The redesign will make access to select system components more readily available, but it will not make basic data available. It also does not make data used for performance measures readily accessible. In addition, the redesign does not appear to address questions related to the purpose and use of the required data.

Data Transparency

At the time of this writing, although ED reported that it has considered open web-based access to the database, a web-based interface with additional query potential (e.g., languages taught, graduate placements), and production of additional summary reports, a decision had not been made about whether to make data public and, if so, which data. Several universities made the observation that open access to the data would reveal any differences in the way universities are reporting data and highlight missing data, informally encouraging data continuity. Greater system transparency would be likely to facilitate a vested interest on the part of grantees in the timeliness and comparability of data.
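As an illustration of the query potential mentioned above, the short Python sketch below computes one aggregate of the kind an open interface might expose, graduate placements by sector; the records and field names are invented for the example.

```python
from collections import Counter

# Invented placement records; the sectors echo the placement debate
# discussed in Chapter 6, but the data and field names are hypothetical.
placements = [
    {"grantee": "University A", "sector": "higher education"},
    {"grantee": "University A", "sector": "government"},
    {"grantee": "University B", "sector": "private sector"},
    {"grantee": "University B", "sector": "government"},
]

by_sector = Counter(p["sector"] for p in placements)
total = sum(by_sector.values())
for sector, count in by_sector.most_common():
    print(f"{sector:<16} {count:>3} ({count / total:.0%})")
```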

The committee is aware of only three efforts (two of which were requests for the same data) by organizations other than the system developer to access the data.[6] The Social Science Research Council planned to undertake an evaluation of Middle East NRCs funded by the International Research and Studies Program (IRS) using EELIAS data and was required to submit a Freedom of Information Act request to access the database. However, such issues as the difficulty of analyzing responses to open-ended questions related to languages taught and type of instructor hampered the ability of the research team to use the data to answer their initial questions (Browne, 2006). In addition, Steven Heydemann of Georgetown University and Martin Kramer of the Washington Institute for Near East Policy submitted separate requests to ED and received a spreadsheet with NRC placement data for 2002 (see the discussion in Chapter 6).[7] Although using the same data, they came to very different conclusions about the program's success in producing graduates who enter government service, based on their starting assumptions. Had these data been publicly available, a more informed public discourse might have been possible.

If the planned improvements to the database are well implemented, the system may offer the potential to provide better information to the field. Well-designed grantee data systems can be useful program monitoring tools to help track trends, collect data on outputs, and ensure that funds are used for their intended purposes. When the information collected is viewed as useful to those reporting it, and information is in fact returned to them, it is likely to be of higher quality.

It is important to note that the planned improvements to the system are aimed at fixing technical glitches and improving user navigability. Consideration was not given to whether the data currently collected should be collected at all or to how they would be used for program monitoring or improvement purposes. Basic functionality will not change; that is, data elements are not being increased or decreased. ED might be well served by convening grantees to discuss the data currently collected and how the collection can be modified to address mutual objectives (see below for further discussion). ED should undertake a systematic evaluation involving grantees to ensure that the changes made through the launch of its redesigned system have been implemented effectively, that the data are fully usable and understandable, and that the expected value of the data is clear. In comments shared with the committee, Brustein (2006) reinforced this and related points, claiming that there is insufficient dissemination of project outcomes and best practices, as well as a lack of accessible and searchable databases.

[6] The system developers conducted two related studies that used EELIAS data to review language enrollments: Brecht and Rivers (2000) and Brecht et al. (2007).
[7] Martin Kramer requested the data to verify information included in an article published by Steven Heydemann based in part on the placement data.

PROGRAM EVALUATION

Although well-designed grantee data systems are vital program monitoring tools, they are rarely adequate to assess the effectiveness of programs in terms of outcomes and impacts or to understand why identified trends are occurring.

These more detailed and nuanced assessments require well-designed program evaluations that include information on both what is achieved and how it is achieved.

Few Title VI/FH program evaluations have been conducted in general; even fewer have focused on program outcomes. Of the evaluations that have been conducted (see Appendix B, Attachment B-1), most have involved surveys of grantees only. Although control groups are infeasible, almost none have had a comparison group. Many have been funded by the IRS Program via grants designed and implemented with little ED input. Although IRS proposals are reviewed by peer review panels, and some of the criteria seem to be aimed at ensuring a quality evaluation project (e.g., 15 of 100 points for an evaluation plan, 10 points for adequacy of method and scope of project, 10 points for knowledge to be gained), grants are initiated and managed by the principal investigator. Grants are typically not the ideal mechanism for federal program evaluation, given the relative independence given to grantees. Contracts, which allow more specification of research questions and methodology and greater involvement in the research design and implementation, would be a preferable mechanism.

Only one recent evaluation—the survey of fellowship recipients that included the FLAS and Doctoral Dissertation Research Abroad (DDRA) programs in addition to two other ED fellowship programs—was conducted under an ED contract that enabled regular, ongoing review and feedback by the department. This survey was funded by the Office of Planning, Evaluation and Policy Development (OPEPD). It is also the only evaluation that included more than one Title VI program.

As Gabara (2006) reported to the committee, more research should be done on national needs, and evaluations of programs should look at their accomplishments over the past 15 years. Given that the programs are now firmly established, more periodic, well-designed evaluations with clearly articulated outcomes are warranted—perhaps every four to five years. This would make available up-to-date program evaluation information for PART reviews and for reauthorization deliberations.[8] Christian (2006) reported that there seems to be some dissatisfaction with the way projects are evaluated (regarding the indicators used), as well as some thought that it is difficult to measure success for some of them, or that they cannot all be evaluated in the same way. Program evaluations, ideally with input from the field, will have to carefully consider the expectations of the programs and the use of best practices in achieving those goals.

[8] Under PART, each program is to be reviewed at least every five years. The Higher Education Act is supposed to be reauthorized on a six-year cycle.

CONTINUOUS IMPROVEMENT

Modern governance theory suggests that educational programs are more likely to improve if goals are clear, participants have a voice in defining objectives and internalizing the goals in their own systems, quantitative indicators of process are available and made public, and efforts are made to identify and duplicate promising practices (see, for example, Liebman and Sabel, 2003). Neither ED nor the established mechanisms to communicate across grantees have made this a priority. In fact, universities (the primary Title VI/FH grantees) have been little involved in ED's efforts to monitor performance or identify strategic directions, although there have been some recent attempts to improve in this area.

ED has historically convened a meeting in Washington, DC, of all grantees for each program after each competition, but it has not held national meetings in the years between the multiyear award cycles and has met separately with each program. In September 2006, ED convened all NRC and Language Resource Centers (LRC) grantees at the same event, the first such meeting involving leaders of both programs. The agenda included not only presentations by IEPS staff about grant reporting requirements, but also panels of invited experts and grantees, who shared their knowledge of such complex topics as assessing language acquisition and evaluating the impact of study abroad. This event also included an open town hall meeting for feedback from the field to IEPS staff. ED plans to convene more information-sharing and technical assistance meetings in 2007: the first will focus on language acquisition, and the second will be about outreach. This appears to represent a distinct shift from earlier years and an attempt to actively engage the field in relevant issues.

Each of the major programs has established an independent mechanism for communication across projects. Each program has a dues-paying organization of program directors: NRCs have the Council of NRC Directors, LRCs have the Council of Directors of National Foreign Language Resource Centers, and Centers for International Business Education and Research (CIBER) have the Association for International Business and Research. At a minimum, each group gets together (without ED staff present) in conjunction with the ED-sponsored grantee meeting for its program. The Council of NRC Directors organized a major conference at the University of California, Los Angeles, in 1997 prior to renewal of the Higher Education Act. The conference, which included grantees of other Title VI programs as well as NRC directors, was reported to have a major influence on Congress and to have led to creation of the Technological Innovation and Cooperation for Foreign Information Access (TICFIA) program (Ruther, 2003). American Overseas Research Centers (AORC) have created the Council of American Overseas Research Centers.

The LRCs and the CIBERs also have websites (the content of the CIBER website is significantly greater than that of the LRC site; see Chapter 9) and collect and synthesize performance data based on the data submitted via EELIAS. IEPS staff have actively encouraged the creation and content of the CIBER website and have supported a shared web portal for the Business and International Education (BIE) program, hosted by Grand Valley State University (2006). The committee was told that the Council of NRC Directors agreed in September 2006 to establish a web presence, but that it is on hold pending a decision by ED regarding what EELIAS information it might make available via the web.

The director of each council or association is also a member of the Coalition for International Education (CIE), whose membership includes a range of other national organizations concerned with higher education, foreign language, or international education issues. CIE aims to build consensus within the higher education community on policies affecting the programs and advocates for grantees in the annual federal budgeting and appropriations process. CIE convened a national conference in 2003, with support from ED and the Ford Foundation, for Title VI grantees to discuss demand for global education after the terrorist attacks of September 11, 2001, and to prepare for reauthorization of the Higher Education Act. The upcoming 50th anniversary of Title VI in 2008 is likely to provide another opportunity for a reflective view of the programs.

In addition to sharing information through these national associations, grantees report that they discuss common issues and opportunities for collaboration through informal meetings at professional associations, such as the Asia Studies Association and the Joint National Committee on Languages.

These efforts have an important role to play. However, few of them are aimed at collaborations across programs or between ED and grantees, efforts that would encourage a partnership approach to the programs. They are also not being implemented in a coordinated, systematic manner that would facilitate accomplishment of common goals. ED appears to recognize the value of collaboration, as illustrated in its introduction of an invitational priority related to collaboration in the NRC and LRC 2005 competitions. Curiously, however, the NRC priority included collaboration with the LRC, AORC, and CIBER programs but did not mention other NRCs. In addition, the collaborations are aimed at interactions between grantees, with little department involvement and no attention to the way the program is managed at a national level or to the content of collaborations.

Several National Academies reports in the health care arena have called for a collaborative approach to performance measurement that embraces an interactive process involving the public and private sectors in the development of program goals, performance measures consistent with those goals, and evaluation of the measures (see National Academy of Sciences, National Academy of Engineering, Institute of Medicine, 2007; Institute of Medicine, 1997).

The recent report of the secretary of education's Commission on Higher Education (U.S. Department of Education, 2006a) reinforced a similar idea when it called for a change to a system based on performance, emphasizing that higher education institutions need to "embrace and implement serious accountability measures."

A collaborative approach in which ED and the universities work together to develop performance measures should increase the programs' effectiveness and encourage continuous improvement. Universities are more likely to internalize performance measures if they have had a voice in their creation. Such measures can also help universities learn from each other. Once specific performance measures are identified, universities can compare their performance on these measures with that of their peers. This could lead to the identification of promising practices that they may be able to adapt. For example, why is a particular university or program particularly good at increasing language proficiency? How has a given program been able to establish linkages with its college of education or local education authority? Why are graduates of a particular program getting significant numbers of government jobs? If universities find that these results are brought about by practices that are transferable, they are likely to improve their own performance. Systems to encourage and support measurement of performance and sharing of information concerning promising and transferable approaches could improve the overall performance of the Title VI/FH programs.

AWARD TRANSPARENCY

At the most basic level, successful grant applications are a source of information about best practice; at a minimum, they represent proposals that expert review panels rated as most responsive to application criteria. During site visits and the committee's meeting with directors of newly awarded NRCs, multiple project staff mentioned the potential usefulness of applications as a source of information about what it takes to put together a successful program. Feedback on applications was also cited as a potentially useful source of information about program strengths and needed program improvements, even for successful applicants.

At the same time, the committee heard repeated concerns about the difficulty of accessing successful grant applications. According to ED staff and grantees, applications can only be viewed (but not removed or copied) in the ED offices in Washington, DC. This limits public access generally and creates a clear disadvantage for anyone not based in the area. Limiting access to successful applications may limit unsuccessful applicants' ability to identify shortcomings and develop more competitive proposals in future competitions, thus limiting competition.

In addition, since these are federal programs, the process used to make Title VI/FH awards and the results of the competition should be readily available to the public. Information technology makes this easy to do.

The committee also heard specific criticisms about the grant selection process, which has moved to electronic review. In the past, reviewers met face-to-face to discuss their reviews and their planned point assignment for each criterion. Ratings are now done electronically, with a teleconference held for reviewers to discuss their ratings. Critics claim that this change has made it difficult to deliberate and has reduced substantive comment on applications. Wiley and Schneider (2006) state that in the case of IRS grants, for example, the effectiveness of the program has been negatively affected by the grant selection procedure. The committee notes, however, that the electronic review process helps minimize travel costs for reviewers and is designed to increase review efficiency. Curiously, while electronic review should make sharing of review results with applicants a quick process, ED prints and mails paper copies of the review.

CONCLUSIONS AND RECOMMENDATIONS

Monitoring, evaluation, and continuous improvement efforts at ED have been affected by staffing limitations, the availability of data, resources, and program leadership. The committee was told of several initiatives under way that will improve recent efforts, including systematic annual grant review processes, redesign of the data system, and plans for future grantee meetings focused on issues important to the field. The grantees themselves have also implemented mechanisms to improve collaboration within and among programs. Nonetheless, there is much room for continued improvement, both within ED and between ED and its grantees.

Program Monitoring

The limitations and burden of the ED data system are widely recognized, including by ED. The recently implemented redesign appears to be aimed at addressing the technical issues encountered by the committee. However, the usefulness of the system will not be fully realized unless the grantees gain some advantage from the demanding data entry required. Universities should be able to compare their performance with that of other similar grantees. The redesigned system should be used more aggressively to report program performance beyond the narrow criteria reflected in the performance indicators, and it should include the monitoring of trends.

Finally, data should be publicly available to facilitate open, public discourse about the program and its accomplishments.

Conclusion: The original online data reporting system for Title VI and Fulbright-Hays programs (EELIAS) is inadequate, is difficult to use, and has significant consistency problems as well as a lack of transparency in the data collected.

Recommendation 11.1: The Department of Education should ensure that its new data system, the International Resource Information System, provides greater standardization, allows comparison across years and across programs, and provides information to all grantees and to the public.

Program Evaluation

Few program evaluations have been conducted; those that are available have been generated mainly by interested researchers, conducted as grants, and covered a single program. Even the ED-funded evaluation of FLAS and DDRA recipients did not emanate from the program office.

Conclusion: At the present time, limited information is available to rigorously assess the outcomes and impacts of the Title VI and Fulbright-Hays programs, and the nature of the funding (partial funding of a larger set of activities) makes it difficult to assess outcomes and impacts.

Recommendation 11.2: The Department of Education should commission independent outcome and impact evaluations of all programs every 4 to 5 years.

Well-designed program evaluations would be a more reliable approach to determining program outcomes and impacts than use of performance measures. There are several options available to fund evaluations. First, IEPS could develop evaluations in collaboration with OPEPD, as it did for the survey of fellowship programs. Another alternative would be to fund evaluations using the 1 percent of the appropriation available for program evaluation, national outreach, and information dissemination. However, these are the same resources currently being used for redesign and maintenance of the IEPS data system, and to date they have not been used for program evaluation. Although program evaluations have been funded through IRS in the past, this mechanism provides little opportunity for national-level direction or guidance.

Continuous Improvement

To date, neither the Title VI/FH programs nor the universities that make up the bulk of their grantees have established a process that facilitates continuous improvement. ED has by and large implemented its reporting system and performance measures using a top-down approach, with little buy-in from grantees or consideration of data collection costs, and no clear rationale for the measures chosen. Networks created by grantees provide a framework for interactions across grantees, but there is large variation in the role of the networks, and they function largely independently of ED. There is little activity that supports interactive governance involving collaborative specification of program goals, development of performance measures, or assessment of the extent to which goals are accomplished.

Recommendation 11.3: The Department of Education should work with universities to create a system of continuous improvement for Title VI and Fulbright-Hays programs. The system would help develop performance indicators and other improvement tools and should include networks of similar centers (National Resource Centers, Language Resource Centers, Centers for International Business Education and Research) and university officials with overall responsibilities in language, area, and international studies.

While the system could build on existing networks, the process developed should include senior university officials with university-wide decision-making authority that affects language, area, and international studies. Such officials will have the ability to speak on behalf of the broader pressures and opportunities affecting the programs and to reflect on how discussions regarding the Title VI/FH programs would affect, or be affected by, the larger university system. The system should also capitalize on existing expertise in universities on given world areas or languages. As part of this process, ED might consider convening individuals with established credentials in foreign languages and cultures in major world areas to build on their successes.

Award Transparency

The most basic example of a successful practice is a successful grant application. Yet unlike some other federal agencies, ED has not made applications readily available. This lack of access hampers competition and thwarts public access to information about a federally funded program. Although a successful application does not necessarily indicate that a project will be successful or well implemented, sharing successful applications provides one public resource on which to build other efforts aimed at continuous improvement.

Conclusion: Sharing successful grant applications could improve future competitions and contribute to a continual improvement process.

Recommendation 11.4: The Department of Education should make its award selection process more transparent, including making successful applications publicly available via the Internet.

ED has been moving all of its grant competitions to the web-based http://resource-grants.gov. Although this web-based system is not currently used to make applications publicly available, it has the capacity to do so. It is difficult to imagine an applicant who does not prepare his or her application in electronic form. Submitting it electronically is a logical next step. ED could establish a process for making the applications publicly available, while protecting any necessary financial or other information.

An alternative model for an electronic system, FastLane, is used by all programs at the National Science Foundation (NSF).[9] FastLane can be used to conduct virtually all business with NSF, including submitting grant proposals, reviewing grant proposals, and determining the status of one's proposal, including comments from reviewers. It does not make unsuccessful applications or rankings or comments about individual proposals available to the public or to anyone other than the applicant. All successful applications, however, are readily available.

Of course, none of the recommendations in this chapter can be effectively implemented without an institutional structure in the department that supports innovation, recognizes the importance of strong program leadership, and encourages program change. The next chapter turns to a discussion of these issues.

[9] See https://www.fastlane.nsf.gov/fastlane.jsp.
