Appraisal of OERI
Although the Office of Educational Research and Improvement (and its predecessor, the National Institute of Education) has contributed much to education research and development (see Chapter 2), several pervasive problems have impaired the agency's effectiveness. Some of these problems span the various offices and activities; others span the agency's history. Several problems are inherent in the nature of education and education research; others are amenable to policy and administrative correction.
MISSION AND OBJECTIVES
OERI's legislated mission statement asks the agency to be almost everything for everybody. Since funding has always been less than needed, the agency tends to spread its resources thinly among many endeavors. OERI has regularly responded to pressures to do so—sometimes readily and sometimes reluctantly—after confrontations with irate interest groups, the executive branch, and members of Congress. The result is manifest throughout the agency: from 1980 to 1991 the number of centers more than doubled while the budget for them (in constant dollars) decreased by 21 percent; the field-initiated research program now provides grants for a maximum of 18 months, with virtually no chance of a renewal award; and the FIRST program funds more than 100 school-based reform efforts that are supposed to be of national significance, but only for 1- to 3-year periods and only for $50,000 to $150,000. An official of a business organization wrote to the committee: "What is needed is a vision and mission articulating direction, priorities and goals. In business it is called a strategic plan."
OERI's mission statement does not state that the agency's governance, operations, and activities should involve a working partnership among researchers, schools, employers, families, and policy makers. Although the agency has made efforts to involve these groups in its work, the enthusiasm for doing so waxes and wanes over time. Some assistant secretaries have deliberately sought to involve the various groups; others have seemed almost oblivious.
It is doubtful that the problems with OERI's mission statement have, by themselves, seriously compromised the agency. Rather, these problems, in concert with several others (discussed below), have seriously affected OERI.
Continuing Controversy About Education
Controversy has surrounded federally funded education R&D ever since the Office of Education was founded in 1867. There have been conflicts over the appropriateness of federal activity in education R&D, the agendas pursued, the specific activities undertaken, the distribution of funds among various potential performers, and the utility of the enterprise (Kaestle, 1991; Sproull et al., 1978). The controversy has variously involved the Congress, the President, the education community, researchers, federal administrative agencies, and factions within OERI and NIE.
Some controversy results from the sheer numbers of potential users of education R&D: 535 members of Congress; the administrators of several federal agencies; 50 governors, state legislatures, and state departments of education; hundreds of intermediate state agencies; numerous education associations; 15,000 school districts; 83,000 public schools and 26,000 private schools; almost 3 million teachers; and 32 million parents of school-age children. These potential users place varying and sometimes conflicting demands on the enterprise.
Education itself has been a battleground of interest groups since at least the 1870s (Peterson and Rabe, 1983; Ravitch, 1974), and the conflicts naturally spill over onto research issues. There are several reasons for those conflicts. Americans put great faith in education as a means to upward mobility and a good life: they believe it can make a difference, they expect it to, and they are upset when it does not achieve that goal. Americans also hold deep-seated and differing values about both the goals and the means of education. When research findings or innovative programs contradict values and beliefs, the results are often dismissed, and the enterprise that produced them is criticized. For instance, an elementary school mathematics curriculum that encourages children to explore, conjecture, and challenge is considered by many scientists as an important investment for the future of science, but some parents see it as lacking discipline and encouraging disrespect.
Research has repeatedly been caught up in disputes over the value of intelligence and standardized achievement testing, the relative advantages of phonics and whole-language approaches to teaching reading, the need for school desegregation, the effects of mainstreaming handicapped students, the merit of bilingual education, and the importance of small class sizes. Few groups welcome a study, however well designed, that might undermine their beliefs, their prerogatives, or their jobs.
Widespread disagreement about education and education R&D was also apparent in the comments the committee received with respect to this study. Many suggestions were offered for improving education R&D, but only one, improving linkages with practice, was offered by more than 25 percent of the people we heard from. Most people think improvements are needed; few agree on what those improvements should be.
However extensive the problems caused by these disagreements and conflicts, they make education research all the more important. Consider the observation of Albert Shanker (1988), president of the American Federation of Teachers:
We believe that in an enterprise such as education, which is often fraught with conflicting values, opinions and politics, research is the best hope we have of distinguishing between fads and facts, prejudices and informed judgments, habits and insights. Without systematic inquiry, development, and testing, we will continue to have the same babble of arguments and practices concerning what works or ought to work. Without good research, we will continue on an endless cycle of mistakes and the loss of successful insights and discoveries. Without good research, there will continue to be an endless reinvention of mousetraps, the same rehashing of controversies, and, in the end, the same faltering school system.
A History of "Politicization"
The National Institute of Education was born in the midst of political maneuvering. It was proposed by President Nixon, a Republican, at a time when he was simultaneously proposing cuts in federal funding for many social and education programs to a Democratically controlled Congress (Sproull et al., 1978). Political conservatives, wanting to limit federal involvement in the nation's life, were generally against the institute. So were many liberals, who were unwilling to trade federal support of local school programs for education research. Senator Warren Magnuson, a powerful member of the Senate Appropriations Committee and chair of the subcommittee
responsible for the Department of Health, Education, and Welfare, was angered by Nixon's proposals to cut $3 billion from that department and sought to exact revenge through NIE.
Ever since, there has been a widespread perception that NIE and OERI have been inappropriately and dysfunctionally politicized. The examples of politicization, however, vary markedly depending on who is citing them. Members of Congress and their staffs frequently charge that the administration's ideological and political agendas have skewed the appointment of top administrators, the selection of topics to be studied, the determination of how the topics are to be studied, the awarding of contracts, and the editing of reports and the timing of their release. For instance, it is claimed that there was little research on the educational effects of dual-earner families during the Carter administration (for fear that the results might impair women's employment opportunities, which that administration supported) and little research on women's equity issues during the Reagan administration (because excellence, rather than equity, was that administration's focus). In turn, members of the administration frequently charge that Congress has politicized the research by favoring constituency desires over substantive merit, by imposing large set-asides for the laboratories and centers, by mandating specific centers and studies, by limiting the focus of some congressionally mandated studies (such as the exclusion of student achievement from the 1980s Chapter I study), by pushing other pet projects with threats against OERI's appropriations, and by making "big cases" over trivial complaints from constituents.
Staff within OERI frequently repeat the above charges and contribute additional ones. They allege that in the early 1980s the agency hired a number of people who were politically connected but not qualified to do the programmatic work. Many staff also allege that the report clearance process in the Department of Education has been used against OERI reports that fail to support the department's positions on various issues.
Some researchers complain that those who hold views unpopular with the members of proposal review panels are precluded from funding, and that various interest groups have distorted OERI's agenda. Organizations of professional educators frequently complain that OERI (and NIE) has been the pawn of the researchers and ignored the needs of practitioners. Education writers complain of political coloring of research reports, especially those on issues of major concern to the administration.
The perceptions of politicization are not limited to people in Washington. One state department of education official wrote to our committee:
To us, the fundamental problem has been political. The Congress and sundry administrations have routinely been at odds over what should be researched. Hence, there has been minimal funding for research except the
rather diffuse, short-term agendas ... Unless there is a fundamental structural change to obviate this nonproductive arrangement, progress is unlikely.
Most of these charges are difficult to investigate. But given their sheer number, some are probably true.
While Congress was pushing down the ceiling of NIE's budget during the mid-1970s, three groups could have been of help to the agency. The main education associations of teachers, administrators, and school board members actively supported some proposals for federal assistance to education, but were little interested in education research at that time. The American Educational Research Association, the primary organization of education researchers, had little inclination or skill in marshalling political support. The third group, the Council for Educational Development and Research (CEDaR), was an association of the laboratories and some centers. It presented the case for institutional R&D, mobilizing satisfied client school districts and state agencies to support its cause, and succeeded in convincing Congress to establish floors under the appropriations for those institutions. When the ceiling on NIE's budget caved in, the floors sagged but did not collapse, and almost everything else was flattened.
Ever since, there has been feuding between university researchers who want to do field-initiated research and CEDaR. The former variously accuse CEDaR of single-handedly determining the budget for the entire agency through its minions in Congress, of destroying support for field-initiated research, and of securing "pork" for its member institutions. However, the above-cited events are not consistent with these accusations. The budget for the laboratories and centers declined by 35 percent (in constant dollars) between 1973 and 1979. There also is ample historical evidence that Congress was uneasy about field-initiated research by the mid-1960s, well before the formation of CEDaR (see Chapter 3—"History"). The numerous reviews of the laboratories and centers over the past two decades have almost uniformly suggested that institutions such as these are needed, and though their general performance has not been as high as hoped, it has not been universally dismal. CEDaR responds to its critics by accusing university researchers of failing to present a credible case for the support of education R&D and of weakening the entire enterprise with criticisms of CEDaR, the laboratories, and sometimes the centers.
Thus, for almost three decades, charges of politicization have swirled around NIE and OERI. Many people view the agency as politicized, and that perception inevitably affects the credibility of its work. The diversity of the allegations, however, does suggest that these charges are partly a function of the dissension that often accompanies education. Over and over again, what one group views as leadership, other groups view as politicization.
GOVERNANCE
The National Institute of Education was created as a separate agency within the Department of Health, Education and Welfare (HEW) to consolidate education research and development activities, to give responsibility for managing these activities to professional scholars, and to provide higher status for the work. Its director, appointed by the President with the advice and consent of the Senate, reported to the Assistant Secretary for Education, who was also in charge of the Office of Education and, starting in 1974, the newly established National Center for Education Statistics.
A National Council on Educational Research was created by legislation to set overall policies for NIE. The House bill had provided for an advisory council, but the Senate bill and the resulting legislation called for a policy-making council, apparently because a few Senators sought to ensure NIE's independence from the Assistant Secretary (Sproull et al., 1978). The 15 members were appointed by the President, with the advice and consent of the Senate, and served 3-year staggered terms. The first council was composed of five university presidents or chancellors, two university professors, three businessmen, one state superintendent of instruction, one state education administrator, one school district superintendent, one school principal, and one graduate student. It was chaired by Pat Haggerty, chairman of the board of Texas Instruments. The council was to prescribe the director's powers and duties, advise the assistant secretary and the director on program development, recommend improved methods of collecting and disseminating educational research findings, conduct studies necessary to fulfill its own functions, and submit annual reports on the Institute's activities and on education and educational research in general (Public Law 92–318, 1972).
In the 1976 reauthorization of NIE, Congress specified that the council was to be "broadly representative of the general public; of the education professions, including practitioners and researchers; and of the various fields of education, including preschool, elementary and secondary, postsecondary, continuing, vocational, special, and compensatory education" (Public Law 94–482).
With the creation of the Department of Education, OERI was established as an umbrella organization to house NIE, the National Center for Education Statistics (NCES), Library Programs, and some discretionary and dissemination activities. The National Council on Educational Research was retained but, as before, had authority only over NIE.
In 1985 the Secretary of Education reorganized OERI, converting the semiautonomous units into line offices and replacing the policy-making council with a National Advisory Council on Educational Research and Improvement. This was done to reallocate authority for these activities to the secretary, minimize duplication, improve coordination, expand links with practitioners, and generally improve the quality of the work (Bennett, 1985). The 1986 reauthorization of OERI supported these changes (Public Law 99–498). The council was appointed exactly as before, was composed of essentially the same types of representatives, and its members served the same 3-year staggered terms. The responsibilities were also essentially the same, with two exceptions: the council no longer was to prescribe the duties of the head of NIE (or OERI), and in all other areas it was to provide advice to the Secretary of Education and the Assistant Secretary of OERI, rather than determine policy.
A few years later Congress modified the governance of the National Center for Education Statistics (NCES), providing it with a commissioner serving a 4-year term, appointed by the President with the advice and consent of the Senate. Congress also offered NCES options for independent staffing and procurement authority, but the Secretary of Education chose not to implement them (see Chapter 3—NCES).
It is not clear which governance structure—the policy-making council of NIE or the advisory council of OERI—has been more effective. There is widespread agreement that the advisory council has not been influential within OERI or outside it. The policy-making council, under both Republican and Democratic administrations in the 1970s, was generally considered competent and hardworking, but it was unable to help NIE gain the support of educators, the public, or Congress. In the early 1980s it was considered less distinguished, more politicized, and even less effective in securing support. Most researchers believe that NIE conducted more good research and achieved more important progress in knowledge in the 1970s than did OERI in the 1980s, but this judgment may be skewed by the larger constant-dollar budgets for NIE in the 1970s and by the fact that the contributions of research often take a decade to become obvious, making OERI's contributions less discernible at this time.
The two most prominent federal research agencies—the National Science Foundation (NSF) and the National Institutes of Health (NIH)—have different governance structures, although in actual practice the differences are not great. NSF has a policy-making board, the National Science Board, which comprises persons eminent in the fields of "science, engineering, agriculture, education, research management or public affairs." The members are appointed by the President, with the advice of the board and with the advice and consent of the Senate, and serve 6-year staggered terms. Former directors of NSF believe the board has expanded their power, not diminished it, by forging some consensus among scientists, building public and congressional support for the agency, and buffering it from dysfunctional politicization.
At NIH, each institute has an advisory council of 12–18 members, appointed by the Secretary of Health and Human Services in consultation with the institute director, who serve 4-year terms. Two-thirds are selected from "among the leading representatives of the health and scientific disciplines (including not less than two individuals who are leaders in the fields of public health and the behavioral or social sciences) relevant to the activities of the [institute]." The remaining one-third are selected "from the general public and shall include leaders in fields of public policy, law, health policy, economics, and management." One of the important functions of these councils is to engage the communities of researchers and practitioners in discussions of R&D needs and to build agreement as to what should be the priorities. Though these councils are not policy-making, there is a tradition of their advice being heeded by the institute administrators.
There are advantages and disadvantages to both policy-making boards and top-level advisory councils. Policy-making bodies are more prestigious and, all other things equal, generally more powerful. For that reason, distinguished and accomplished people, especially those still interested in having an impact, are more likely to accept appointments to a policy-making board and to take their responsibilities seriously.
However, a policy-making board raises the issue of accountability. After being appointed, it does not have to answer to anyone, particularly if the members are appointed for fixed terms: some people believe this gives the board the necessary independence to stand above the fray of constituent politics and to do what is best for the nation; others believe it encourages elitism and irresponsibility.
A policy-making board, unlike a top-level advisory council, appears to impinge on the normal executive branch prerogative of proposing federal programs and activities, but the actual effect is modest since the President is still free to submit whatever budget proposals he or she chooses and to sign or veto legislation. Some fear that a policy-making board can create a dangerous imbalance in the powers of the President relative to the Congress. This could happen only if the President submits budgets contrary to the policies of the board, and Congress implements the proposals of the board, and even then the President would still retain the veto power.
As a consequence, a policy-making board within the federal government does not usurp the power of a vigilant President or Congress, but rather becomes a third party that informs and may influence both. Its impact ultimately depends on decisions by the executive and legislative branches. Thus, in actual practice, a policy-making board has little more power than an advisory council, only that which comes from its greater stature and ability to attract more prominent members. And if a board runs amok, the President and Congress can thwart or abolish it through the normal legislative process.
This analysis presumes the policy-making board focuses mostly on agenda-setting activities. If it becomes involved in determining the administrative procedures of the agency, problems could go unchecked by the President or Congress, because such matters are not easily visible to either branch and are time-consuming to counter with legislation.
In addition to policy-making and advisory bodies, there is another option—to use neither. Critics of both types of bodies point out that they can become highly politicized and captured by special interests, are unlikely to recommend bold new approaches, and inevitably require funds for support staff and travel that could be used for other purposes. There is some merit in all of these points, but there are countervailing considerations. Whatever the risks of a group becoming politicized or captured, those risks are smaller than with one or two political appointees whose jobs depend on their decisions. A diverse and accomplished group of people, appointed with a balance of power between the executive and legislative branches of government, and whose jobs do not depend on their decisions, is balanced and buffered in a way no one assistant secretary can be. Group governance may militate against bold redirections of an agency, but that does not mean the agency would be precluded from funding such work. And the costs of boards and advisory councils for an agency the size of OERI are usually considerably less than 1 percent of the budget; a board or council does not have to improve the productivity of the agency by much to recoup its cost.
Another key issue in the governance of OERI has been the roles played by the top administrators. Seven of the ten former top administrators of OERI and NIE held their positions for less than 2 years. The rapid turnover appears to be due to several factors. As mentioned above, education and education R&D have historically been very contentious, and this conflict takes a toll on top administrators. The job requires considerable skill as a researcher, manager, and politician, and many of the appointees have been inexperienced in one or two of those areas. The declining budgets have alienated agency staff, minimized the discretion of top administrators, and contributed to an impression of failure. In addition, the assistant secretary of any highly visible government agency must respond to the diverse and sometimes conflicting demands of the President, Congress, and the public.
Although assistant secretaries throughout the federal government do not last longer, on the average, than the heads of OERI and NIE, tenure in major federal research agencies has been much longer. As noted in Chapter 3, during the 13-year life of NIE there were six Senate-confirmed directors; since 1985, when NIE was dissolved into OERI, the agency has had three confirmed assistant secretaries. That total of nine over 19 years compares with six directors at NSF, five at NIH, and three at the Agriculture Research Service.
Each new director or assistant secretary of OERI (and NIE) has sought
to make his or her mark by pursuing a distinctive agenda, but most have not remained long enough to enact more than a small portion of it. For example, in 1978 the NIE director identified complex learning skills as a priority area: she commissioned papers to identify key questions for further research, convened a conference at which the papers were reviewed by researchers and practitioners, and then organized the first grant competition in that area of research. Of the more than 90 proposals that were received and reviewed by panels of experts, 30 were judged worthy of being funded. But then, in 1981, a newly appointed director—who did not regard complex learning skills as a priority area—decided not to fund any of the proposals.
Most of the directors and assistant secretaries do remain long enough to reorganize the agency. It is not clear whether the reorganizations are due to a persisting belief that there are structural solutions to the problems of educational research or due to the lack of opportunities for discretion in other areas of managing the agency.
Surely one of the big surprises to new board or council members and incoming assistant secretaries is how little discretionary funding is available for establishing new programs. Of OERI's $300–400 million budgets, an assistant secretary is likely to have substantial discretion over only $5–10 million. Most of the $46 million in the FIRST office is for specified purposes, except for part of the $26 million in the Funds for Innovation in Education, which is essentially the discretionary fund of the Secretary of Education. The $142 million in Library Programs is set by specifications in the authorizations. The $45 million set-aside for the centers and laboratories is authorized with broad language that potentially allows considerable discretion, but the centers and laboratories operate under 5-year contracts or cooperative agreements, and the appropriations reports occasionally include directives for these institutions. NCES operates with a long list of projects that have been negotiated with Congress.
New ideas, of course, can be proposed for future budgets, but there is an 18-month lead time, competing proposals from other agencies, and budget ceilings. Newly authorized programs often have to wait another year for their first appropriation. By then most assistant secretaries have left their positions, and unless there is a strong board or council to follow through, there is little chance of the agency providing leadership.
A few federal agencies have top administrative officers with a fixed term of appointment. The director of NSF is appointed for a 6-year term. The commissioner of the Bureau of Labor Statistics is appointed for a 4-year term, and there is a long history of reappointments. The commissioner of NCES was recently given a 4-year term. The head of Congress's Office of Technology Assessment has a 6-year term, and the head of the General Accounting Office has a 12-year term. None of these agencies is perfectly analogous with OERI, but in each case the terms of office have been used to
encourage sustained professional management of research and to minimize the opportunities for politicization. In those federal research agencies without terms of office, there is usually a tradition of long tenures by the top administrators: tradition can be effective, but it cannot be legislated.
The most common terms of office are 4-year and 6-year periods. The former allows each President to make an appointment. The latter does not, and the timing of the appointment, relative to the President's term, fluctuates.
There are some potential problems with terms of office. They reduce accountability to the executive branch and do not necessarily increase accountability to whatever board or council may exist. In addition, an incompetent can linger on for several years, although a precaution against this would be to allow removal by the President with the advice and consent of the Senate. In actual practice, the problems apparently have been few, and the practice has grown slowly over the years.
Another governance issue of growing concern is the appropriate position of NCES. Through the 1970s and most of the 1980s it was a relatively small entity with an annual budget of about $15–30 million (in 1990 constant dollars). In fiscal 1990 the budget was $40 million, in 1991 it was $60 million, and large increases are projected for the next few years. Some people believe that NCES will soon become the tail that wags the dog, although others predict that NCES may soon implode from understaffing (see Chapter 3). It has been suggested to the committee that NCES be taken out of OERI and placed in tandem with it under an assistant secretary. The advocates of this suggestion fear that the politicization and instability in OERI will corrode NCES. That is possible, but there is little assurance of less politicization or more stability in the tandem structure: in both cases, NCES would answer to a political appointee. The proposed structure was used for NIE and NCES during the early 1980s—years that are widely thought to be the darkest for both—though not necessarily because of the structure.
FUNDING
The funding history of OERI and NIE has been a bruising downhill slide that has inevitably exacted a heavy toll on the agency. As indicated above, new directors have quickly been criticized and demoralized. Careful agenda setting became futile; "quick fixes" replaced thoughtful investments; and few sustained research and development activities could be maintained. Resources were spread so thinly that mediocrity was virtually assured. Individual researchers, with less political clout than institutions, were squeezed out. Agency staff focused on required administrative functions and survival strategies rather than on fulfilling the agency's substantive mission. Top-flight personnel often shunned working in the agency.
Dramatic budget cuts have forced OERI (and NIE) to do less—much less. Basic research, aimed at discovering new phenomena, is barely funded. In 1973 the average center budget was $6.0 million (in 1990 dollars); it is now $0.9 million. The average laboratory budget was $6.4 million (in 1990 dollars); it is now $3.0 million. The centers and laboratories formerly developed major innovations—such as Student Team Learning and the Comprehensive School Mathematics Program—but they rarely can do such work now. The laboratories used to do considerable work directly with schools and teachers, but they now do more work with state agencies and improvement assistance organizations. In the 1970s NIE provided support for the graduate training of minority and women researchers, but there was very little such support in the 1980s. In the 1970s NIE funded research and demonstrations on how to effectively disseminate innovations and change schools, but in the 1980s OERI funded little such work.
Each of OERI's programs of research is now generally limited to a dozen senior researchers affiliated with a single R&D center. There is little money for special projects to supplement the work of the centers. As a consequence, virtually all of OERI's support for research on a given topic is committed to the small number of researchers affiliated with one center. Other researchers studying those topics are almost precluded from OERI support for 5 years, until the center competes again for support.
Two critical types of R&D activity have been severely underfunded at OERI. First, the agency invests very little in field-initiated research. For several years prior to 1986, OERI did not fund any field-initiated research. Now only $1.3 million, or 2 percent of its R&D budget, is for this purpose. Field-initiated research is research whose topics and methods are suggested by scholars around the country, rather than in response to requests by an agency for specific work. It harvests the insight, creativity, and initiative of researchers widely dispersed across the country, and it has been a major contributor to knowledge and technology in all fields of science. NIH invests 56 percent of its R&D budget in field-initiated research and NSF devotes 94 percent (data are not available for the Agriculture Research Service). As best we can determine, it has been congressional action that has constrained field-initiated research at OERI by imposing set-asides on virtually all of the agency's primary appropriation and specifying very low levels of support for this work.
The second underfunded critical type of R&D activity is basic research. Basic research in education is aimed at expanding understanding of the fundamental aspects of human development, learning, teaching, schools, and their environmental contexts; such research generates new views of what exists and new visions of the achievable. (In contrast, applied research is aimed at expanding knowledge of the means to achieve specific objectives.) In 1989, the last year for which data are available, only $1.9 million of OERI's R&D budget was allocated to basic research—just 5.5 percent. By comparison, at the Agriculture Research Service, 46.6 percent of the R&D budget is allocated to basic research; at the National Institutes of Health, 59.8 percent; and at the National Science Foundation, 93.5 percent. Overall, the federal government, excluding the Department of Defense, invests 40.6 percent of its R&D budget in basic research (National Science Foundation, 1991a).
In the late 1970s a National Research Council report (1977b), Fundamental Research and the Process of Education, recommended that the federal government "increase ... the proportion of the federal investment in education research and development designated for fundamental research" and that NIE "take immediate steps to implement a policy of strong support for fundamental research relevant to education." Soon after that report, support for basic research at NIE increased substantially for a few years (Timpane, 1982). During the early years of the 1980s, the entire Department of Education invested about 11 percent of its R&D budget in basic research; since 1986 it has spent only about 2 percent (National Science Foundation, 1991a).
Basic research has been slighted at OERI primarily because Congress, teachers, administrators, and the administration have repeatedly urged that the agency quickly solve the pressing problems in schools. Since basic research seldom yields practical applications in less than a decade, the agency has responded to demands for solutions by focusing on applied research, development, and dissemination activities. This is an understandable short-term response, but it is akin to eating one's seed corn. As indicated in Chapter 2, several of today's most promising innovations in education have been heavily influenced by findings from basic research in cognitive science—work that was conducted not only by education researchers but by investigators in several of the behavioral and social sciences. Basic research in computer science and mathematics was also critical to some of the described innovations.
COORDINATION AND COOPERATION
It was hoped that the creation of NIE in 1972 would serve to improve coordination among several programs that had been inherited from the Office of Education—centers, laboratories, ERIC, career education model development, experimental schools, researcher training, field-initiated research, and dissemination activities (National Institute of Education, 1973a). NIE did move, during its early years, to coordinate work on several key topics. Most noticeably, it engaged in major efforts, with distinguished outside scholars, to plan programs of research in high-priority areas that coordinated the work of centers, field-initiated studies, and various special
projects. Nevertheless, agenda setting in federally funded education research has generally been erratic and unsystematic. There has been limited coordination of efforts and communication of findings among the various units in OERI and NIE, among the various offices in the Department of Education, and among the several other federal agencies that support education R&D activities. The committee was frequently told that there is a need for OERI's activities to be better coordinated with those of other federal, state, and local agencies.
Each office in OERI generally prepares its budget materials with little advance program planning or consultation among the offices. A few years ago there was an agreement to develop a consultative process, but it was never implemented. Though budget proposals are requested at about the same time each year, there is little advance planning, and each office rushes to prepare its budget documents. Staff do sometimes seek out counterparts in other offices for help in planning studies, reviewing draft reports, and participating on proposal review panels, but this is usually done informally and on the basis of personal contacts, primarily among those in the Office of Research, Programs for the Improvement of Practice, and NCES. The FIRST Office and Library Programs are more isolated.
For more than two decades the national R&D centers and regional laboratories have had partly overlapping responsibilities of research, development, demonstration, and technical assistance—with the centers emphasizing research and the laboratories mostly engaged in the latter three. There was some cooperation in the preparation of the 1985 and 1990 competitions for laboratory and center awards. After the awards were granted, however, there was little follow-up cooperation. Although the centers do research that could be of use in the laboratories' development and technical assistance work, the laboratories seldom work closely with the centers. Conversely, although the laboratories have extensive contacts with state departments of education and local school districts, the centers seldom seek their advice about the needs of those organizations. NCES regularly queries the laboratories and centers when planning its data collection activities, but the Office of Research does not regularly contact the laboratories when planning its activities; the Programs for the Improvement of Practice also does not regularly communicate with the centers when planning its activities, although there have been some recent efforts to correct this situation. After the 1990 competition for laboratories and centers, OERI did not circulate copies of the winners' proposals to the other winning institutions until requested to do so at a joint meeting of the laboratory and center directors.
The lack of coordination of the laboratories and centers is long-standing. One new center director noted to the committee:
the relationship of Centers to Labs ... is a matter that puzzles us and continues to do so after the recent meeting in Washington of Center Directors and Lab heads. When we asked about how this relationship works, we received polite smiles. Apparently there is some history here that people would rather not get into. We remain uncertain about how we [a new Center] are to relate with Labs or even whether we are to relate to them at all.
A few instances of long-term cooperation between a center and a laboratory have produced notable results. The Learning Research and Development Center and Research for Better Schools, both located in Pennsylvania, collaborated on the development of individually prescribed instruction. The Far West Laboratory and the Center on Teaching at Stanford developed a series of minicourses widely used for in-service teacher development.
There is also a lack of coordination between the dissemination and technical assistance work of the National Diffusion Network (NDN) and that of the laboratories. The NDN state facilitators generally know the needs and wants of teachers and principals in their respective states better than the laboratories, but the laboratories seldom benefit from this knowledge. NDN state facilitators are still primarily involved in disseminating innovations, although it is now well understood that innovations alone, without broader reform in the schools, seldom have substantial and lasting effects. The laboratories are increasingly assisting districts and schools with systemic reform, but without regular input from the NDN facilitators.
Some of the lack of coordination has been due to internal administrative action or inaction, but some has been a result of external forces. The diversity of interest groups concerned with education and continuing conflicts over mission (Sproull et al., 1978:219) have prevented the agency from focusing its limited resources on a few high-priority matters. For instance, in 1989 the Office of Research prepared several options for the 1990 competition for the centers and recommended one that proposed using the limited available funding for only five substantial centers. The Under Secretary of the Department of Education selected an option with 12 centers, and after receiving comments from the public and Congress, OERI held a competition for 17 centers.
Within the Department of Education there are two units in addition to OERI that conduct substantial research and evaluation: the Office of Planning, Budget, and Evaluation (OPBE) and the Office of Special Education and Rehabilitation Services. Historically, the three have operated with little coordination in planning, little sharing of information on the work each is sponsoring, and little knowledge of what each has learned. A letter to the committee from a researcher suggested: "There needs to be much closer linkage between research and major educational programs supported by the
federal government—by taking small steps to investigate and validate specific program procedures before large and costly efforts, that are difficult to 'fine tune' are put in place."
The lack of cooperation has been exacerbated by the 1970 "Cranston Amendment" (Public Law 91–230), which prevents other offices in the Department of Education from having OERI do work for them—but does not prohibit other departments, such as the Departments of Labor and Agriculture, from doing so. This amendment was precipitated by the department's prior commingling of some funds from different appropriations, which was perceived as an attempt to circumvent the intent of Congress.
There is also very limited coordination and cooperation between the special technical assistance dissemination efforts of the various program offices in the department—such as the Chapter 1 Technical Assistance Centers, the Regional Drug-Free Centers, and the Education Evaluation Bilingual Assistance Centers—and those of the laboratories and centers, with the exception of ERIC, which receives materials from many of these sources. The problem has recently been recognized by the department's regional inspector general (Nestlehutt, 1991:1–2), who identified 43 different programs that "provide technical assistance, assist in program evaluation, perform training and research, and disseminate data and information ... [and] evidence uncoordinated growth that fails to ensure an effective and efficient issue of Department resources." An inventory conducted by OERI found 12 separate technical assistance programs, supporting 135 projects or institutions at an annual cost of $90 million. Means for coordinating these various activities are currently being considered.
As noted above, the National Science Foundation, the Department of Defense, and the Department of Health and Human Services also fund considerable education R&D. There has generally been little communication among those agencies and little monitoring of their activities. When the National Education Goals Panel recently wanted to know how much the federal government spends on education R&D, the Office of Management and Budget had to do a special survey (National Education Goals Panel, 1991). That survey collected data on funding levels, but not on the nature of the work undertaken. As a letter to the committee noted: "At the present time OERI has no capability to even know what research is being undertaken outside the purview of the agency, much less to develop educational applications for it." One assistant secretary of OERI did establish collaborations with a few other federal agencies concerned with education issues, but these were achieved only after some effort, and it remains to be seen whether they will continue in the future.
For at least 20 years there have been limited attempts to coordinate related activities in different federal agencies. There is a Federal Interagency Committee on Education with a mandate to coordinate all education activities sponsored by federal agencies. For a while it monitored early childhood education research closely, but it no longer does so. There is also a Federal Coordinating Council for Science, Engineering, and Technology (FCCSET) that has a Committee on Education and Human Resources. It recently conducted a detailed survey of federal agency activities in mathematics and science education, including research within that domain.
Most knowledgeable observers suggest that despite the appeal of attempts to coordinate related activities in several federal agencies, the forces working against such efforts are strong (Kaestle, 1991). The agencies are competitors for federal funding, each is reluctant to give up its prerogatives, and the task of coordination is usually assigned to a mid-level staff member. Unless a high official is leading the effort, it appears to have little effect. Observers also note that some duplication is not a waste of resources because it allows replication. Yet it is clear that funders could benefit from knowing what their counterparts are supporting, and that users would benefit from access to the best information and assistance available from all sources.
The Office of Technology Assessment's recent report (1988:167–168, 171, 184) on R&D for technological applications for education provides a good discussion of the importance of sustained efforts in R&D work:
The Department of Education has had an off and on love affair with technology. Where research support has been consistent, as in support of children's television programming in the late 1960s through the 1970s, or long term as in support for technology in special education, important milestones were reached. These are exceptions. Most research projects did not have opportunities to proceed from laboratory research through to development of products and processes, much less to testing in the classroom, with real students and teachers.
In the 1970s, the Department supported quite a few projects lasting 5 or more years ... During the 1980s few projects received comparable long-term support.
... [The 1987–88 plans] fall short of focused, long-term commitments called for by the National Governors' Association, the National Task Force on Educational Technology, and the National School Boards Association .... Significant improvements in education can be made if sustained support is made available for development of new tools for teaching and learning. The private sector, while a contributor to this effort, does not have the primary responsibility or appropriate vision for making this a priority. States and localities do not have the capacity.
OERI's overall record of support for sustained efforts is somewhat better than this indictment, but hardly exemplary. Five of the current ten laboratories have existed since 1967. Their responsibilities and modes of operation, however, have been altered somewhat by OERI and NIE over the years. For instance, in the 1960s and 1970s, they undertook mostly applied research and development work. In the late 1970s and early 1980s they did considerable work with schools and teachers. In the mid-1980s they were directed to work more with state agencies and improvement assistance organizations. Then in 1990 that mandate was relaxed.
Only 2 of the current 25 OERI centers are direct descendants of the 10 that existed in 1973: the Center on Student Learning at Pittsburgh's Learning Research and Development Center and the Center on Assessment, Evaluation, and Testing at UCLA's Center for the Study of Evaluation. Of the 12 centers that existed in the early 1980s, 6 were eliminated when their contracts expired in 1985, and 3 were awarded to new bidders. Of the 13 centers whose contracts expired in 1990, 3 were terminated, and 3 were awarded to new bidders.
Despite the rapid changes in leadership and frequent reorganizations in OERI and NIE, underlying stability has sometimes been maintained. For instance, Virginia Koehler Richardson managed a program on research in teaching at NIE for a decade. She notes (quoted in Kaestle, 1991:20):
To propel a research agenda through all that change ... requires some real stability on the part of the people that are there ... and different kinds of rhetoric ... changing wording, incorporating some of the newer notions, and just ensuring that people felt that research on teaching was a worthwhile endeavor ...
It was never easy, and when the agency was threatened with extinction in the early 1980s, "burnout" and "exhaustion" drove her to academia.
Instability often results in mediocrity. Most of the research-based innovations that are currently available to educators provide only modest improvements, partly because of the complexity of human learning and behavior, but also partly because these innovations are seldom subject to successive iterations of research, development, and testing aimed at strengthening effects, assuring effectiveness in a wide range of settings, enhancing market appeal, and minimizing costs. Funding for such work is rarely available, and universities often do not consider the second and subsequent iterations to be scholarly work.
One notable exception is the repeated cycles of work used in developing Student Team Learning, then Cooperative Integrated Reading and Composition, and finally Success for All at two successive R&D centers at Johns Hopkins—the Center on Social Organization of Schools and the Center on Elementary and Middle Schools. Each program built on and extended the
prior one (see Chapter 2). The first of these became one of the most widely adopted of all NDN programs; the second also gained PEP certification and is being disseminated through NDN; the third is now in the demonstration stage. Despite these accomplishments, the centers that produced them were closed down by NIE and OERI in 1985 and 1990.
The need for several iterations of research, development, and testing is not peculiar to education innovations. It is common in all fields of science and technology. It took almost 50 years of research and development to achieve a satisfactory vaccine for polio. The story of flight involves repeated cycles of research, development, and failure; then research, development, and short-lived successes; followed by research, development, and unacceptably expensive successes; and, finally, decades later, research, development, and commercial success. There has never been that kind of focused persistence in education R&D.
After interviewing many of the former top officials in OERI and NIE, Kaestle (1991:19–20) tried to articulate how to balance the need for stability with the need for response to emerging issues and opportunities:
Good stability means supporting some carefully selected, sustained work on subjects of central importance, where answers are not likely to be forthcoming quickly or cheaply. Good stability is having ongoing committees of the best scientific people as a balance against fads and politics and as a way to build credibility for the accumulating knowledge base. On the other hand, good instability means having the capacity to respond to new leaders' interests and philosophies and to shifts in national concerns, to be able to weed out weak work after a period of evaluation, and to be ready to grab innovative ideas and push new insights when it seems warranted. To make these decisions is difficult; to structure agencies to accommodate both values is even more difficult.
One of the most difficult problems is how to foster the right kind of stability and the right amount of change. The issue would be simpler with more money, but it would not vanish. It is difficult, our narrators indicated, to create conditions in which you have the conditions for sustained work and professional control at the same time you allow for new players, responsiveness to changing public concerns, support for renegade scholars and paradigm-busters, and the capacity to terminate outdated or incompetent work.
A state department of education official wrote to the committee, supporting the need for sustained efforts: "Educational innovation is difficult and risky. We need a stable system of R&D programs so that risk is not only tolerated, but also valued and encouraged."
As the nation moves from innovation to comprehensive reform, the need for sustained efforts becomes even more important. As Elmore and McLaughlin (1988) have observed:
Reform of the basic conditions of teaching and learning in schools requires "steady work".... Lags in implementation and performance are a central fact of reform ... the time it takes for reforms to mature into changes in resource allocation, organization, and practice is substantially longer than the electoral system that determine changes in policy.
QUALITY ASSURANCE AND ACCUMULATION OF RESULTS
OERI has a checkered history with respect to quality assurance. As noted in Chapter 3, OERI has permitted practitioners to evaluate the technical merit of research proposals when they often have no training or experience suitable for the task, and it has allowed researchers to pass judgment on the programmatic merit of applied research and development proposals even when they have little of the expertise needed for doing so. The Program Effectiveness Panel (PEP) often judges programs as "proven exemplary" on the basis of evaluations in only a handful of sites, on only a few outcomes, and with no follow-up assessment 1 or 2 years after termination of the "treatment." PEP also does not examine possible disadvantages of applicant programs.
There have been several "reviews" of the centers and the laboratories, but never a comprehensive evaluation of either set of institutions that examined their products, services, and impact. Although the centers and laboratories develop innovative programs and processes, they are not required to have their effectiveness appraised by PEP or any other independent authority.
OERI's internal report review processes have varied by office and over time. Sometimes external and internal peer review have been required; at other times only the approval of the office head or the assistant secretary has been needed. In addition, many staff claim, with a few others disagreeing, that the Department of Education's clearance procedures sometimes result in politically motivated changes to OERI reports or delays in their release.
Quality assurance is essential in any research agency, not only for the sake of valid results, but also for credibility and support from Congress and other users. Weiss (1980) has noted that just as good research can enlighten policy makers, invalid results can "endarken" them, misinforming policy decisions. In addition, if policy makers become aware of low-quality work, support for the research enterprise is likely to suffer.
OERI supports reviews and syntheses of the R&D literature by several means. The centers and laboratories conduct reviews—sometimes to provide summaries to scholars and practitioners, and sometimes to inform their future work. The centers usually publish their reviews in journals, and the laboratories often print and distribute theirs to the state and local education agencies with whom they work. The ERIC clearinghouses commission and
publish reviews on various topics, usually oriented towards informing teachers, administrators, and policy makers of the practical implications of various bodies of research. And OERI staff have prepared some major reviews of the literature which have been published as agency reports.
OERI has made little use of another mechanism that several federal agencies employ to synthesize and judge the available evidence on a given topic: the expert committee, assembled to examine the evidence on a disputed topic and determine what is established and what is not yet known.
The Environmental Protection Agency (EPA) has a standing Science Advisory Board that reviews and judges the scientific bases of all proposed standards and regulations of the agency. A National Research Council study (1977a) found that the board had helped the agency improve the accuracy and reliability of its scientific determinations and provided political legitimization for the agency. Though the board languished during the early 1980s, it has since become a significant actor in EPA decision making; it has 67 members, 14 staff, and produced 43 reports in 1988 (Jasanoff, 1990).
Since the 1960s the Food and Drug Administration (FDA) has used numerous committees to review the evidence on disputed matters, finding them valuable because of the technical expertise provided and the social legitimation that comes from using independent scientists (Jasanoff, 1990). The FDA has also contracted with a professional association to review the safety of certain food additives. The latter of these two undertakings involved an expert panel that examined and discussed the available literature on the topic, prepared an interim report that was released publicly, invited written comments and held an open meeting for responses, and then prepared the final report (Federation of American Societies for Experimental Biology, 1977, 1985).
Since 1977 the Office of Medical Applications of Research at the NIH has held almost 100 consensus development conferences to evaluate the effectiveness and safety of medical interventions and to improve the translation of biomedical research results into knowledge that can be used effectively by health care providers. The topics of the conferences are those that have provoked controversy, for which considerable scientific evidence is available, and that are or should be of significant interest to health care practitioners. Considerable preparation work is done before the conference, but the expert panel meets for only 2.5 days with a public audience. An interim report is read to the audience on the morning of the third day, comments are received, and then final revisions are made by the panel.
OERI usually refers to "dissemination"; NIH often refers to "technology assessment and dissemination." Several of NIH's major health promotion campaigns over the past decade, such as those for breast cancer screening and the National Cholesterol Education Program, have been partly based
on the findings of NIH's consensus development conferences. Almost all the conferences are now reported in the Journal of the American Medical Association, and heavy media coverage has accompanied release of the reports.
Consensus processes are difficult, time-consuming, and expensive. If they are not done well, they can easily fall into disrepute. An evaluation of NIH's early conferences in 1979 and 1980 found they elicited substantial media coverage, but usually had only a small effect on clinical practice. NIH subsequently made several changes to improve the impact of conferences on practice. A recent Institute of Medicine report (1990) reviewed the NIH consensus development process and recommended more input from practitioners during the planning of the conferences, more thorough preparation for the conferences, experimentation with new means of facilitating the group decision making, and adequate financial support.
Although OERI and NIE have supported reviews of R&D literature, they have rarely summarized and synthesized what has been accomplished under their own funding. There are several obvious questions to ask when judging the work of OERI and NIE: What has been learned? What has been developed? What has been disseminated? What has been changed? There are thousands of final reports from projects, laboratories, and centers, but hardly any efforts to aggregate them across providers and over years. As a result, the agency is not able to explain its accomplishments to Congress, professional associations, the public, or itself. Two useful exceptions are unpublished staff papers summarizing what had been learned from NIE-sponsored studies on assessment between 1977 and 1983 (Rudner et al., n.d.) and on teaching and learning between 1978 and 1982 (Wirtenberg, 1982). A recent staff publication by Fox (1990) discussed how education research has been used in education policy making, but it does not clearly indicate which of the cited examples are based on work sponsored by OERI or NIE. Ironically, none of the three documents is indexed in ERIC or available in OERI's library.
OERI and NIE have not created a depository of their own institutional memory. As a consequence, future staff, researchers, and practitioners will have no way of easily learning from the agency's past successes and failures. The learning communities of the future will either have to plow through millions of pages of documents or proceed without the benefits of some lessons learned during the 1970s and 1980s.
LINKAGES WITH PRACTICE
Contrary to popular perception, OERI and NIE have worked hard and creatively at establishing good linkages between research and practice. They have published hundreds of documents for practitioners, including Increasing Achievement of At-Risk Students at Each Grade Level; Women and Mathematics: Research Perspectives for Change; Science Education Programs that Work; Good Secondary Schools—What Makes Them Tick; Violent Schools, Safe Schools; Dealing with Dropouts: The Urban Superintendents' Call to Action; and Profiles of Successful Drug Prevention Programs. OERI and NIE have long mandated that the centers and laboratories engage in developments that make research useful to teachers and administrators. They have administered ERIC, a state-of-the-art document search service that is widely used throughout the country. They have operated the Program Effectiveness Panel (PEP) and the National Diffusion Network (NDN). And they have funded regional laboratories to provide states, districts, and schools with a range of services aimed at improving education.
In the 1970s NIE funded several innovative efforts aimed at enhancing dissemination and building the capacity of local education agencies to benefit from education R&D work (Hutchins, 1989). The Research Development and Utilization Program had "linker" agents to help districts identify potentially useful R&D products and agencies that could help in the adoption process (Louis et al., 1981). The Research and Development Exchange, begun in 1976, had all the laboratories and one center work collaboratively on dissemination and technical assistance efforts (Hutchins, 1989). The Local Problem Solving Program assisted schools in developing the ability to adapt existing innovations to their particular needs, develop their own innovations, and undertake change in a systematic manner (Hutchins, 1989). The Experimental Schools Program provided substantial funding to a few local school districts that agreed to undertake locally initiated comprehensive change (Doyle, 1976; Herriott and Gross, 1979). And the State Capacity Building Program funded state education agencies to develop stronger links between research and practice (Louis et al., 1984). In the 1980s OERI increasingly involved teachers, administrators, and policy makers in setting research agendas and reviewing proposals, in hopes that the research would better address their needs.
Some of these innovative efforts failed; others succeeded but were terminated because of declining budgets. Despite much experimentation and hard work, linkages between education research and practice have generally been weak. Of all the suggestions made to the committee about how to enhance the utility of education research and development, improving linkages with practitioners was cited most frequently.
Why have there been persistent problems in linkages? Some of the answers have to do with the nature of research and development and how NIE and OERI have tried to make it available to practitioners—these can be thought of as supply-side problems. Some of the answers have to do with the nature of the teaching profession and schools—these can be thought of as demand-side problems. As one person who wrote the committee noted:
we need a much closer working relationship between researchers and practitioners. As researchers become more involved in the issues of practice and the problems of change, they develop more valid information for program improvement; as educational professionals engage in the community of inquiry, they become better prepared to design the schools and educational programs of the future.
On the supply side, several aspects of education R&D and dissemination activities have made them difficult to use or otherwise unattractive to teachers and administrators. Research reports are difficult to read and interpret, many innovative programs have been either ambiguous or overly rigid, and many allegedly effective programs have had only modest effects.
Original research reports are unlikely to be of much use to teachers. There are often many studies on a given issue, and teachers do not have time to locate and read them (Cox et al., 1985; Rackliffe, 1988; Sawyer, 1987). The reports are usually highly technical and difficult for non-researchers to read (Billups and Rauth, 1987; Rackliffe, 1988). According to many teachers and administrators who met with the committee, the research often seems out of touch with the reality of schools. Studies frequently appear to contradict one another, and interpreting the entire body of research on a topic can be a complex intellectual task (Glass, 1976; Hunter and Schmidt, 1990; Light and Pillemer, 1984). A researcher who works closely with practitioners claimed to the committee:
While the quality and salience of educational research to practice has improved greatly in the last few years, much of it remains irrelevant to anything other than the advancement of the researcher.
Many of the innovations presented to teachers and administrators have been "fuzzy," lacking in a clear rationale, without specific procedures, and without convincing evidence as to their effectiveness. Others have been so specific—in an attempt to be "teacher proof"—that they have demeaned the teachers and undermined their talents and skills. Help in adopting an innovation has usually involved printed materials and 2–3 days of training, with little or no follow-up support after the teachers return to their classrooms. Educators have found many research-based innovations to be "ivory-towerish," implausible, incompatible with their concerns, not easily adapted to their local conditions, and not well tested in the field (Louis et al., 1984).
Most of the innovations disseminated over the past two decades appear to produce only modest results, even under the best of circumstances. And there is a long history of results being less favorable once a program is disseminated beyond the initial demonstration sites (Berman and McLaughlin, 1977).
One potential path for education research to influence teaching is through the curriculum materials used in classrooms. However, the entire Department of Education is prohibited by legislation (Public Law 96–88) from "direction, supervision, or control over the curriculum of any educational institution." This has generally been interpreted as precluding the funding of curriculum development efforts unless specifically mandated by Congress, as was the case for a model drug prevention curriculum. As a consequence, OERI's potential impact on curriculum materials must come through the utilization of the agency's R&D by commercial publishers, but OERI maintains few links with them.
Publishers are rarely consulted when OERI is developing research agendas, and special efforts are not made to provide publishers with the results of OERI's work. One exception is OERI's reading center, which has held several meetings for commercial publishers of reading instruction materials to convey the implications of reading research findings. In the 1970s NIE had a Publishers Alert Service for several years, but dropped it because of limited impact. Publishers wanted to be involved at an earlier stage of development than most centers and laboratories preferred; they perceived the school districts to be not particularly interested in innovative approaches; and they had little interest in "thin market" materials such as those designed for high school principals or for students in a specific state (BCMA Associates, 1977).
Most teachers, principals, school superintendents, school boards, and chief state school officers are members of at least one of six major professional associations. Many read their associations' publications, and some attend their annual meetings and special conferences. OERI's linkages with these key professional associations have varied over the years. There has often been an exchange of publications; OERI sometimes has displays and makes presentations at the associations' meetings; and association officials and members serve on the ERIC clearinghouse advisory committees and occasionally on other advisory committees and on proposal review panels. Associations have submitted proposals to OERI, and some have been funded. And a few of the assistant secretaries have held special meetings for association representatives, for the exchange of ideas and to keep them informed of OERI's activities.
Some observers believe that OERI could make more use of the professional associations to assess the research needs of teachers and administrators, to bring the talents of practitioners into the federally funded R&D process, and to disseminate R&D results to practitioners. Other observers are less sanguine about a role for professional associations, pointing out that two of them are unions, and the others—like most professional associations—sometimes put the interests of their members ahead of what might be best for students.
One association representative suggested to the committee: "Associations like ours can be helpful in ... dissemination of center/lab findings to
practitioners ... since we already have networks in place." The associations could also be used in several other ways. For instance, OERI could ask to schedule sessions at the professional meetings to solicit input on research priorities from the practitioners, to cull their craft knowledge for researchers, and to introduce them to interpreting and using research.
The laboratories, which formerly focused much of their work on individual schools, have moved during the past 6 years to work more with state agencies and improvement assistance organizations. This has been partly because budget declines necessitated reducing the number of contacts, partly because of some congressional complaints that OERI was "meddling in the affairs of local schools," and partly because of a 1985 formal directive from OERI (subsequently relaxed in 1990).
Although Congress determines OERI's budgets, there has been little effort by the agency to incorporate the needs of Congress into its agenda setting. Congress's most obvious need is for timely information in relation to the reauthorization of existing education programs, but OERI has rarely proposed studies for that purpose. The 1985 and 1990 solicitations for centers and laboratories did not direct the institutions to consult with Congress or the congressional research agencies about R&D needs. The specialists in those congressional agencies who do contact the centers or laboratories for help have usually been pleased with the assistance they received. Even they, however, complained that they had no easy way of knowing what the centers and laboratories had done and were doing.
On the demand side of the research-practice linkage problem, there are several constraints. Parents often do not insist on improvements to schools, teachers have been immersed in traditional instructional approaches, schools of education seldom prepare teachers to use research to change schools, teachers work under schedules that leave little time for anything but their immediate responsibilities, and significant improvement of schools requires considerable leadership and coordination.
Recent national surveys find that about 65–69 percent of adults think public schools in the nation deserve a "C" or lower grade, but only 26–27 percent of parents think their children's public schools deserve such grades (Elam, 1990; Elam et al., 1991). That perception may blunt the pressure for substantial improvements in the nation's schools.
Unlike members of other professions, teachers are exposed to the traditional methods of their craft for 12 years before they start college—all through their elementary and secondary schooling (D. Cohen, 1987). Before they begin their professional education, they have expectations about how teaching should be done, which makes it difficult for them to consider dramatically different approaches.
The first potential link between research and teaching is during this professional education, but it generally provides little explicit introduction
to the contributions of research, where to find it, how to read and interpret it, and how it might be used in teaching. Most of the nation's teachers are educated in regional and state universities at which the faculty have heavy teaching loads and seldom engage in research (Goodlad, 1990; Howey and Zimpher, 1989). Schools of education generally do not strongly emphasize the integration of theory, research, and practice; nor do they focus on the knowledge and skills needed to change schools (Fullan, 1991). They prepare teachers to work in "top-down" organizations with limited opportunities for professional decision making. OERI has had few direct links with these institutions, other than mailing publications and notices to them and using their faculty occasionally when planning research or reviewing proposals. A few OERI-funded centers, however, have established closer relations with the schools of education.
Teachers' work in schools usually involves nonstop instructional schedules throughout the day, with course planning and paper grading carried out after school hours. Teachers spend most of their day isolated from other adults and sources of professional support (Goodlad, 1984; Huberman, 1984; Lortie, 1975). They have little time or assistance for reflecting on their practice, seeking research-based knowledge, examining new options, and preparing to try them out. A researcher who has worked extensively with schools wrote to the committee:
Teachers are socialized into a subculture where publicly expressed questions, the essence of research, are seen as threats to authority or signs of weakness... to create a demand, a market for research information ... requires the creation of a culture in the schools that values scholarship.
Most of teachers' work is done "on the fly": under conditions of uncertainty, enveloped in the press of classroom events, and subject to innumerable contingencies (Brophy and Good, 1974; Jackson, 1968). When teachers are given new curriculum materials, they adapt them rather than adopt them (Berman and McLaughlin, 1975, 1977; Cohen and Ball, 1990; Porter and Brophy, 1988). Teachers need to exercise good judgment and be expert problem solvers, but they receive little preparation for either (Darling-Hammond et al., 1983; Griffin and Barnes, 1986; Hopkins, 1990; Schon, 1983). One writer to the committee noted:
The universities formulate the constructs of the research agenda and expect schools to carry them out. This has never worked, never will. The constructs have no bearing on the realities of the classroom.
This is an overstatement, but it accurately reflects the views of many practitioners.
Good teaching is a complex activity that does not lend itself to compulsory or mechanical adoption of innovations. New curricula, teaching approaches, and technology can be valuable, but only in the hands of enthusiastic and committed teachers. "Significant educational change consists of changes in beliefs, teaching style, and materials, which can come about only through a process of personal development in a social context" (Fullan, 1991). Collegiality among teachers and direct involvement in improvement efforts are important to successful change (Fullan, 1991).
The implementation of a single innovation seldom substantially improves schooling. Effective reform of schools requires coordinated changes across subject matter, grades, and management practices. Any one teacher's efforts will be of limited significance unless supported widely by the other teachers and the administrators. That needed support is far from assured. Teachers, like others, hold diverse values and beliefs about education. There are few incentives and rewards for outstanding performance in most school systems. And accurately assessing that performance is very difficult (Murnane and Cohen, 1986), because the ultimate goals of education—preparation for the workplace, good citizenship, and personal fulfillment—are multidimensional, diffuse, and long term.
These supply-side and demand-side barriers to linking research and practice create a difficult challenge that could not previously have been comprehensively addressed by OERI or NIE. An understanding of the challenge has evolved only during the past decade (McLaughlin, 1990; Turnbull, 1991). And OERI has never had the funding necessary for confronting it.