COMMISSIONED PAPERS

Knowledge Development Under the Youth Employment and Demonstration Projects Act, 1977-1981

Richard F. Elmore

Richard F. Elmore is at the Graduate School of Public Affairs, University of Washington.

INTRODUCTION

In July and August of 1977, Congress passed and President Carter signed the Youth Employment and Demonstration Projects Act (YEDPA). The law substantially increased authorizations for two existing youth employment programs, the Job Corps and the Summer Youth Employment Program (SYEP). It added three new programs, Youth Community Conservation and Improvement Projects (YCCIP), the Youth Employment and Training Programs (YETP), and the Young Adult Conservation Corps (YACC). It also authorized a large-scale demonstration of strategies designed to encourage high-risk youths to stay in school, using guaranteed work as an incentive--the Youth Incentive Entitlement Pilot Projects (YIEPP). (Table 1 summarizes the target groups and activities included within these programs.) In the fiscal year immediately prior to the passage of YEDPA, federal outlays for youth employment programs were about $955 million (Hahn, 1979). Over the next four fiscal years, 1978-1981, about $8 billion was spent on programs and about $500 million on research and development addressed to youth employment, serving about 1.5 million youths annually (see Tables 2 and 3). YEDPA was administered by a newly created Office of Youth Programs (OYP), which was located in the Employment and Training Administration (ETA) of the U.S. Department of Labor (DOL) and which relied on a large number of independent contractors, as well as state, local, and other federal agencies.

Included in the legislation authorizing YEDPA was a broad charge "to explore methods of dealing with the structural unemployment problems of the Nation's youth" and "to test the relative efficacy of different ways of dealing with these problems in different local contexts." This charge was backed by substantial discretionary authority and money, granted to the secretary of labor and delegated to the Office of Youth Programs, to conduct research, demonstration, and evaluation activities around the structural unemployment problems of youths. This effort was described with the arresting phrase "knowledge development."

TABLE 1  Federal Youth Employment Programs, 1977-1981

Job Corps
  Target group: 16-21; economically disadvantaged
  Activity: Residential centers; education, skill training, work experience, counseling, health care; stipends; centers administered in cooperation with other federal agencies, state and local governments, and nonprofit organizations

Summer Youth Employment Program (SYEP)
  Target group: 14-21; economically disadvantaged
  Activity: Work in public or private, nonprofit agencies; some educational enrichment

Youth Community Conservation and Improvement Projects (YCCIP)
  Target group: 16-21; unemployed; preference to economically disadvantaged
  Activity: Work in community-planned public projects

Youth Incentive Entitlement Pilot Projects (YIEPP)
  Target group: 16-19; economically disadvantaged; 17 selected sites--7 large cities, 10 smaller cities
  Activity: Two-year demonstration; guarantee of minimum-wage, part-time work during the school year, full-time work during the summer; contingent on satisfactory performance in school and work

Young Adult Conservation Corps (YACC)
  Target group: 16-23; unemployed
  Activity: Work for up to 1 year on conservation, public projects; by statute, 70 percent administered through interagency agreements with the Departments of Interior and Agriculture, 30 percent through formula to states

Youth Employment and Training Programs (YETP)
  Target group: 14-21; at least 85 percent economically disadvantaged
  Activity: Classroom or on-the-job training, work experience, pre-employment skills; administered through local prime sponsors; 22 percent set-aside for cooperative programs with local educational agencies

TABLE 2  Outlays for Federal Youth Employment Programs, Fiscal 1978-1981 (in millions of dollars)

Program     1978   1979   1980   1981
Job Corps    280    380    470    465
SYEP         670    660    721    769
YCCIP         61    103    122      0
YIEPP       March 1978-August 1980: 218
YACC         139    273    234    174
YETP         294    556    695    719

SOURCE: Data from U.S. Department of Labor (1980a, 1981, 1982).

The youth employment knowledge development effort was remarkable in many respects: It was one of the largest short-term investments in social research and development ever undertaken by the federal government. Its scale and complexity dwarfed any research and development effort undertaken by the Department of Labor before or since. It generated a large body of research, evaluation, and practical knowledge, which is only now being sorted, assessed, and assimilated into policy and practice. It coincided with a sharp surge in political attention to youth employment problems, creating many opportunities for connecting research with policy. And it galvanized a broad-based national constituency of policy makers, researchers, and local practitioners, at least for a short time, around the problems of youth employment. These features argue for a retrospective look at the process.

This paper analyzes the conduct of the knowledge development effort, from its origins in congressional decision making, to its design, implementation, and results. The paper addresses five main questions: (1) What congressional and executive expectations shaped the knowledge development process? (2) How was the knowledge development process designed? (3) How was the process organized and managed? (4) How has the process influenced policy and practice in youth employment? (5) What lessons can be learned from the process that might shape future large-scale research and development efforts in social policy?

Because the knowledge development effort was vast and the scope of this analysis is modest, the answers to these questions are necessarily tentative and incomplete.

But, even with these limitations, the paper provides an occasion to examine the broader consequences of large-scale investments in social research and development. It also provides a political and organizational complement to more methodologically oriented reviews of the evidence.

TABLE 3  Participation in Federal Youth Employment Programs, Fiscal 1978-1981 (headcounts)

Program          1978      1979      1980      1981
Job Corps      72,000    85,000   103,800   114,400
SYEP        1,009,300   888,000   800,000   774,000
YCCIP          28,700    38,500    43,000    38,400
YIEPP       March 1978-August 1980: 72,000
YACC           51,900    67,200    68,000    68,000
YETP          359,200   413,600   463,000   393,700

SOURCE: Data from U.S. Department of Labor (1980a, 1981, 1982).

The YEDPA knowledge development effort raises questions that have run through federal investments in social research and development since at least the mid-1960s. Prior to that time--during the New Deal, for example--the unspoken assumption was that public money spent on remediation of social problems was effective simply by having been spent. In the 1930s, the National Youth Administration's employment programs and the Civilian Conservation Corps were assumed to have accomplished their purposes when federal money was transferred through public employment to unemployed young people. In the 1960s, this view began to shift markedly. It was no longer adequate justification for public expenditures simply to pass money from the government to individuals who needed it; public expenditures had, in some way, to contribute to the long-term solution of basic structural problems in society--poverty, unemployment, crime, delinquency, and the like. Some argued that even this was not sufficient justification, proposing instead that expenditures be based on comparisons of their net returns to society, rather than just on their relationship to social problems. This shift in perspective coincided with a marked increase in federal social expenditures and at least a five-fold increase in federal expenditures on social research and development (Rein and White, 1977).

The dominant theme of social research and development in the 1960s and 1970s was the seemingly straightforward matter of "finding out what works" (Rivlin, 1971). The dominant analytic model was a combination of experimental method and economic analysis. Experimental method, which in its most rigorous form prescribes pre- and post-measurement coupled with random assignment of subjects to treatment and control groups, would provide the means of disentangling the effects of social programs from the effects of various other factors.

Economic analysis would provide the means of attributing value to program effects and of assessing their net social benefit. As the custodian of expertise and money in this effort, "the federal government should take the leadership in organizing, funding, and evaluating systematic experiments with various ways of delivering education, health, and other social services . . . trying out new methods in various places under various conditions" (Rivlin, 1971:86-87). The evidence emerging from these systematic experiments would inform public policy decisions. The underlying assumptions were that (1) knowledge of effects and net benefits was a key determinant of public policy decisions; (2) systematic knowledge, when marshaled in support of decisions, would be used by policy makers; and (3) better knowledge meant better decisions and more value to society from social expenditures.

This analytic model produced some notable successes--well-conceived and well-implemented experiments in income maintenance, health insurance, and housing subsidies--and some notable embarrassments--vaguely conceived and erratically implemented experiments in educational vouchers, compensatory education, and educational performance contracting, for example. The analytic model also began to find its way into federal evaluation requirements that accompanied categorical grants and federally sponsored research and demonstration activities under the umbrella of operating programs. In education, for example, a national Dissemination Review Panel was established to review evaluations of exemplary programs for methodological rigor and results and to "validate" those programs for broad-scale dissemination. Parallel patterns developed in delinquency prevention, employment, and mental health.

This analytic model had no sooner become a fixture of federal policy than experience began to surface a number of problems:

Timing. Social experiments and evaluations, especially the well-conceived ones, took so long to produce results that they usually answered questions no longer being asked by policy makers. The demands of the political process were out of synch with what the experimental method could produce.

The Nature of the Treatment. Most of the notable successes with social experimentation were with policies involving relatively simple cash transfers (income maintenance, health insurance, and housing subsidies). Most of the notable failures were with policies involving complex changes in existing organizations, the creation of new organizations, or the delivery of highly individualized services.

Organizational Complexity. The large-scale accumulation of knowledge about social problems turned out to require orchestrating competing political demands, marshaling money and expertise behind policy questions, and constructing organizations to deliver services and do research. The skills necessary for these activities were more akin to the skills of management than the skills of scientific inquiry. People who have the required management skills do not necessarily have the skills, interest, or commitment to scientific inquiry, and vice versa.

Implementation. Complex interventions have to be implemented before they can be tested. Implementation requires skill and commitment from people whose main interest is in delivering services, not in finding out what works. Implementation also requires organizational and administrative capacity--people and institutions who are ready to apply their practical knowledge to the problems raised by policy makers. These practical concerns often came as a shock to social scientists whose main concerns were methodological and theoretical.

Variability and Robustness. The cash-transfer experiments seemed to produce findings that were robust from one setting to another, if not from one experiment to the next. Evaluations and experiments requiring complex organizational solutions and individualized services produced findings that were highly sensitive to setting--differences among sites were consistently greater than differences among treatments across sites.

Small, Ambiguous Effects. The social interventions of the mid-1960s were justified politically in rhetoric that suggested broad-scale reform of society. The actual results of experiments and program evaluations showed small, often inconclusive effects. The interventions worked with some groups and not others; the effects were sometimes so small as to be of questionable practical significance; important sources of variation (site-to-site differences, for example) were often not included in the design of evaluations; and effects often did not persist over time.

Methodological Uncertainty. Better-designed, more rigorous, more analytically sophisticated experiments and evaluations did not reduce the uncertainty and conflict surrounding policy decisions. Indeed, they often aggravated it. Serious discussions of important policy questions often got sidetracked into arcane debates over methodological decisions, analytic assumptions, and statistical techniques, leaving the intended consumers of the results confused. The most frequent conclusions of policy research were recommendations for more research. The research community seemed reluctant to apply the same benefit-cost calculus to its own work that it applied to social policy.

Conflict Between Science and Practice. As the application of social science to social policy making proceeded, the breach widened between people who delivered services, on the one hand, and the people who conducted experiments and evaluations, on the other. Practitioners argued that the quantitative findings of rigorous research and evaluation were too abstract to be of any practical use, too insensitive to practical problems, and that experimentation and evaluation were expensive ornaments hung on social programs for the benefit of social scientists. Social scientists argued that, without a scientific basis, practice could not be justified to the public and that resistance to systematic analysis stemmed from the professional's usual hostility to external scrutiny.

It was impossible to engage in large-scale policy research, experimentation, or evaluation in the 1970s--or the 1980s, for that matter--without confronting these problems in one form or another.

They were part of the baggage of systematic inquiry in the service of policy making.

Out of these misgivings there began to emerge a different, more tempered view of the connection among systematic inquiry, policy, and practice. The utility of experimental method and economic analysis came to be defined in narrower terms. Rigorous social experimentation required relatively strong theory, analytic skill, time, money, and organizational capacity--conditions that could not be met in all instances. Social scientists began to acknowledge an explicit tradeoff between internal validity (the ability to distinguish treatment effects) and external validity (the ability to generalize effects beyond an experiment). The degree of experimental control required for a precise estimate of effects was to some degree inconsistent with the ability to transfer the treatment from an experimental setting to a practical operating environment. Policy analysts began to speak with more respect about the "ordinary knowledge" (Lindblom and Cohen, 1979), or practical understanding, necessary to make complex decisions and to get from an analytic result to a prescription for action. Views about the relationship among systematic inquiry, policy, and operating decisions became more elaborate and less hard edged.

Systematic inquiry, even when it met rigorous methodological standards, was rarely brought to bear on clearly specified decisions--legislative, budgetary, or administrative. But systematic inquiry did have longer-term, more diffuse effects on the conventional wisdom that policy makers used to define problems, on the way organizations were structured, on the directions administrators pushed their organizations, and on the way practitioners handled day-to-day problems in providing services. The shifts, in other words, were less a repudiation of the experimental/analytic model and more a domestication of it to political, organizational, and operating realities.

The Department of Labor had, by the mid-1970s, accumulated considerable capacity and experience in economic analysis and evaluation, although its experience with large-scale experimentation was more limited. The department's analytic functions in the employment and training area were the responsibility of the Office of the Assistant Secretary for Policy Evaluation and Research (ASPER) and, within the Employment and Training Administration, the Office of Policy Evaluation and Research (OPER). The Policy Evaluation and Research budget of DOL was consistently around $35 million a year between 1976 and 1980; over $20 million was in earmarked appropriations and about $15 million in discretionary funds (apart from YEDPA). The varied collection of state and local government agencies and community-based organizations that delivered employment and training services under the Comprehensive Employment and Training Act (CETA) had become acclimated to a relatively high level of federally required evaluation, but no less resistant to its costs and inconveniences. An array of external research and evaluation organizations had developed around DOL-sponsored evaluations, as well as a large array of university-based research organizations. Not much of this capacity for research and analysis, however, was focused specifically on youth employment--a matter that would become important with the passage of YEDPA.

The youth employment knowledge development effort commenced at a time, then, when federal investment in analysis, research, and evaluation related to employment had been relatively high for a number of years, when methodological and analytic sophistication were on the rise, when major uncertainties were surfacing about the role of systematic inquiry in the service of policy making, and when the administrative structure for employment programs had become acclimated to, if not totally accepting of, evaluation. The uncertainties that characterized policy analysis, research, and evaluation generally at that time were necessarily part of any effort to apply systematic inquiry to the youth employment problem. Among the questions growing out of this larger context are the following:

What constitutes "useful" knowledge? Is the utility of knowledge, and hence the value gained from investment in systematic inquiry, to be judged in strictly methodological and quantitative terms--that is, are "useful" results measures of specific impacts to which no alternative causal explanation can be offered under the methodological conventions of social science? Or is the utility of results more a matter of practical use--that is, "useful" results are those that are perceived to be helpful in solving political, administrative, and practical problems?

What should be the relationship between the delivery of services and the discovery of effects? Is it the primary responsibility of government agencies to deliver services to people, consistent with the political decisions of elected officials? Or is their primary responsibility to "find out what works," consistent with the economic criterion of positive net benefit? Is it possible to accommodate the delivery of services and the measurement of effects within a single organizational structure? Should the delivery of services be constrained by the methodological conditions necessary to identify effects, or should methodological conditions be constrained by the practical necessities of delivering services?

What are the political and organizational correlates of successful accumulation of knowledge? If the accumulation of knowledge about social problems requires orchestrating competing political demands, marshaling money and expertise behind policy questions, and constructing organizations to deliver services and do research, then how do we distinguish between better and worse ways of doing these things?

What payoffs should we expect from large-scale research, demonstration, and evaluation activities? Should the payoffs be on the order of "resolutions" to the problem of youth unemployment? Or is it sufficient to offer solutions, on the order of "ways to reduce the high school dropout rate" or "ways to impart employment skills," that offer constructive solutions to practical problems, but little hope of solving the overall problem?

The analysis that follows is divided into five main sections: (1) expectations about knowledge development on the part of Congress and the executive branch; (2) design of the knowledge development effort; (3) organization and management of the effort; (4) influence of the effort on policy and program; and (5) guidance for the future that might be gained from the knowledge development effort.

POLITICAL EXPECTATIONS

Shortly after the inauguration of President Carter in January 1977, several representatives of the new administration were summoned to a meeting on the Senate side of the Capitol Building. The Carter appointees were Bill Spring, from the White House Domestic Policy Staff; Ray Marshall, secretary of labor; Ernest Green, assistant secretary for employment and training; and Nik Edes, deputy undersecretary for legislative and intergovernmental affairs at the Labor Department. From the Senate was assembled a rare array of senior members. Among the Democrats were Henry Jackson, Washington; Hubert Humphrey, Minnesota; Edward Kennedy, Massachusetts; Alan Cranston, California; Harrison Williams, New Jersey; Gaylord Nelson, Wisconsin; and Jennings Randolph, West Virginia. Among the Republicans were Jacob Javits, New York, and Robert Stafford, Vermont.

According to Nik Edes and Richard Johnson, then staff director of Nelson's Senate Employment Subcommittee, the Senators delivered a simple message to the Carter appointees: A youth employment bill would be introduced in the Senate immediately, with or without administration support. The administration could collaborate or be left behind.

The reasons for the pressure were political. According to Johnson, "There were youth proposals coming from all over the Senate; they could have gone to [the] public works, interior, or labor [committees]. Javits sensed that the whole thing was about to come apart. Whichever committee got to the floor first with a proposal would get credit. He decided it was time to call a meeting. He told the administration, 'We're about to produce a piece of legislation. If you want in, now is the time'." Spring, recently transplanted from the Senate Labor and Public Welfare Committee staff to the White House, recalls, "It was a rescue effort by the Labor and Public Welfare Committee to maintain its jurisdiction. Javits [ranking member of the Labor and Public Welfare Committee] saw that Jackson [chair of the Interior Committee] and Randolph [chair of the Public Works Committee] were about to move, and understood that if something didn't happen quickly they were going to lose it."

Within weeks, Edes, Johnson, and Spring had drafted a proposal incorporating the special programs of the key Senators. This particular selection of staff was not accidental. Edes and Spring, representing the Carter administration, had only weeks before been members of the Senate staff--Edes working for Senator Williams, chair of Labor and Public Welfare, and Spring working for Senator Nelson, chair of the Employment Subcommittee. According to Spring, "It was the old Senate staff pulling together around an issue. There really wasn't an administration position, because the Carter White House hadn't gotten organized."

Congressional Perspective

According to Johnson, "As far as the Senators were concerned, the ideas were simple. You needed to have better cooperation between the schools and the employment and training system. You needed to do something about dropouts. You needed to provide opportunities for kids in trouble with the schools to do useful, productive work and prepare themselves for employment." Jackson, Humphrey, and Randolph came of age politically in the New Deal. Their ideas about what young people needed were consistent with the New Deal view of employment in the service of conservation and public works. YACC and YCCIP were manifestations of this view. Nelson and Williams had a large political stake in maintaining their committee's jurisdiction over employment policy and assuring that the federal employment and training structure provided adequate access for youths. YETP was the solution to that problem. Javits's special interest was in the connection between the schools and the employment and training system. On the strength of Javits's interest, a provision was drafted requiring that a proportion of YETP funds (originally 15 percent, later 22 percent) be allocated to projects jointly involving local education agencies (LEAs) and CETA prime sponsors.

For the Carter administration the top domestic priority was dealing with persistent inflation and rising unemployment. Youth employment, per se, was not part of their early agenda. As one congressional staff member said, "They didn't have any hip-pocket proposals on youth employment coming out of the transition, so it was relatively easy for them to buy into whatever the Senate had to offer." On January 31, 1977, Carter proposed a $20 billion emergency economic stimulus package, composed of supplemental budget requests for fiscal 1977, to cover the 18-month period from April 1977 to September 1978. The package contained an $8 billion addition to public service employment (PSE), $4 billion for public works jobs, over $5.5 billion in aid for local governments, and $1.5 billion for unspecified youth employment programs.

The administration's original intent was to implement its youth program administratively, without new legislative authority. Senate aide Richard Johnson said, "We told them, 'You can't do that on Capitol Hill, legislators want to pass legislation and get some visibility'." So on March 9, the administration followed with a youth employment message, containing a proposal that had been worked out jointly with the Senate. It requested authority for three new youth programs--YACC, YCCIP, YETP; it provided a set-aside for joint school-prime sponsor projects; and it provided that half the YETP funds would be distributed by formula to prime sponsors and the other half used to fund "innovative and experimental" programs at the discretion of the secretary of labor or his designee.

Explaining the purpose of the discretionary funding, Richard Johnson argued, "We [the Senate] had always been inclined to put rather generous discretionary funding into the employment programs because we recognized that the formulas for distributing money sometimes resulted in problems getting the money to the right constituencies." For the Senate, in other words, discretionary funding was a way of adjusting formula-funded programs to special geographical, political, or organizational interests.

In retrospect, according to DOL's Nik Edes, "leaving the House out of the early negotiations was a major tactical error." The administration's affinity for working with the Senate was understandable. Two key actors for the administration, Edes and Spring, were former Senate staff. Also, according to Spring, "The House got left out because the internal politics of the Senate were so delicate we were afraid we'd lose the whole thing if we tried to accommodate the interests of House members too."

When the House got wind of a Senate-administration proposal, they decided to move on their own youth employment bill. The late Carl Perkins, chair of the House Education and Labor Committee, and Augustus Hawkins, newly designated chair of the Employment Subcommittee, introduced the administration's youth employment bill on April 6 and then developed an alternative proposal. According to a member of the House staff, "The White House didn't have a lot of experience in these things; they said, 'Wait a minute, you can't develop your own bill; we already have a bill in.' We went ahead with our own proposal."

The Senate and House approaches differed in several respects. First, whereas the Senate proposal created new programs focused on youths, the House proposal amended Title III of CETA, a general grant of discretionary authority to the secretary of labor for research and demonstration projects. The Senate saw itself as initiating new, more or less permanent, youth employment programs. The House, by contrast, saw itself as initiating demonstration projects that would form the basis for later, more permanent programs. In a House staff member's words, "our philosophy was 'let a thousand flowers bloom,' and then come back after we'd had some experience and decide what was promising."

Another key Senate-House difference was the House's Youth Incentive Entitlement Pilot Projects (YIEPP). YIEPP was important politically to the House proposal because it originated from the Republican side. According to Nat Semple, minority staff to the House Employment Subcommittee, the idea had its origins long before 1977. "Marvin Esch [former Republican member from Michigan] liked to think in big terms. He had kids of his own and he was concerned about how to get kids to stay in school and how to get schools to respond to kids who might not be the greatest academically. We had a long discussion one evening after work over hamburgers and beer at the Hawk and Dove [a Capitol Hill eatery] and I started sketching out the ideas for entitlement on the tablecloth. The problem was how to get the Republicans to buy off on some kind of a job guarantee. We struck on the idea of a contract. The kids would have to agree to stay in school in return for a job. There would be rules and structure. We weren't offering something for nothing. The whole idea was basically very conservative: keep kids in school, promote the work ethic, make kids contract for benefits, etc. It had a lot of appeal to the Republicans." Esch ran for the Senate from Michigan in 1976 and lost. Semple took the entitlement proposal to Ronald Sarasin, moderate Republican from Connecticut, and Sarasin agreed to sponsor it.

[. . .]
Useful Knowledge

Running through the knowledge development process is a tension between knowledge acquired through social science and knowledge based on practical insight--a tension between science and ordinary knowledge. When House members asked the Department of Labor to "find out what works," they stated their concerns as a potpourri of questions and problems. Some of those questions, such as how to solve structural unemployment among young people, implied sophisticated, long-term research. Others--the effects of specific training and job-search activities, for example--implied shorter-term project evaluations. The congressional mandate did not take account of the vast differences in those questions, nor the time and resources required to answer them. The questions specified by the House were, from a research perspective, exceedingly vague. They provided little guidance for what the Congress meant by "finding out what works." Assuming that Congress meant rigorous research when it said "find out what works" probably overstates the sophistication of Congress's concern. Congress was more interested in generating a variety of practical activities addressed to youth employment than in setting the conditions for rigorous social research. On this score the House and Senate agreed. The objective was to launch a wide variety of activities and see if they could survive administratively and politically. In the words of a House staff member, "finding out what works" meant "let a thousand flowers bloom," not the conduct of rigorous research. When members of Congress said "find out what works," they had in mind nothing more complicated than demonstrating whether new programs could be instituted administratively and whether young people could find useful work in the process of participating in them. Larger, more sophisticated research questions were embedded in this basic concern, but were not central to Congress's thinking. With certain routine qualifications, the answer to the questions posed by Congress, after three years of research and demonstration, was "yes." Knowledge of this kind is far from trivial, even though it does not meet many social scientists' theoretical or methodological standards.

Congress had other important items on its agenda beyond finding out what works. Distributive politics--by age, by region, by constituency group, by federal agency, and by level of government--was Congress's major concern. The legislative language and history of YEDPA manifested far more attention to the distribution of money among competing interests than it did to discovering solutions to youth unemployment. Making the CETA system more responsive to the problems of youths was another agenda item. By targeting youths for special concern, Congress was, in effect, telling the Department of Labor and the CETA system that they had not paid adequate attention to the problems of youths. Still another agenda item was using federal funds to make the schools and the employment and training system work more closely together. From the point of view of certain members, the gap between schools and CETA-funded organizations was inexcusable and should be closed. Each of these items brought with it a collection of problems that Taggart and his staff had to solve in the implementation of YEDPA.

Failing to address these items would have meant failing to respond to the manifest concerns of Congress.

One can argue that Congress was irresponsibly vague, that it failed to provide the necessary guidance in structuring a research agenda, and that it undermined the possibility of finding out what works by loading too many other items on the agenda. But these arguments all miss an essential point: Congressional action requires coalitions; coalition politics requires vagueness and multiple agenda items. In some instances, as YIEPP illustrates, the demands of coalition politics and the demands of rigorous research are not incompatible. One cannot expect them to be compatible in all, or even most, instances. Ordinary knowledge of politics, in other words, should shape our sense of what we can feasibly expect of Congress in setting the initial conditions of large-scale research on social problems.

Ordinary knowledge of administration also played an important role in the knowledge development process. Federal employment and training programs are administered through units of state and local government, which are in some senses autonomous, but which also assume the delegated authority of the federal government to make contracts for the delivery of services. When a shift in policy creates new demands on that system, these units are entitled to ask a host of practical questions about the consequences of those demands. How should new programs be meshed with existing delivery structures? How should the competing demands of services for youths and adults be sorted out administratively, organizationally, and politically at the local level? If local employment training programs are supposed to be coordinated with local educational systems, what is acceptable evidence of coordination and how have other jurisdictions responded to the requirement? If young people are to be given clear expectations of performance as a condition for participation in employment programs, what constitutes satisfactory performance and what happens to those young people who do not meet expectations?

Again, these questions are relatively far removed from the conventional social science questions about Treatment A and Treatment B, but they describe knowledge that plays an important role in addressing Congress's concerns about whether new programs can be made to work administratively. Moreover, since the administrative structure is composed not just of functionaries working under contract to the federal government, but also of governors, mayors, legislators, council members, and the like, who are elected officials in their own right, these people are entitled to answers.

If we probe far enough into the administrative structure, we eventually reach the people who call employers to ask if they would be willing to hire a young person, who teach reading, multiplication, and long division to 18-year-old dropouts, who try to find housing for a young man who is sleeping in his car, and who try to find child care for a young woman who is about to leave the welfare rolls and start working as an orderly in a nursing home. These people ask a different order of question. If we add another section to our remedial General Equivalency Diploma course, who will we get to teach it?

If we are expected to get rid of kids whose attendance and academic performance are poor, how do we keep our enrollment at a high enough level to meet our placement objectives? Is there a way to combine the teaching of elementary math with training in the use of common measurement tools? Is it okay to send a bright, but poor and neglected, kid to the academic program at the local community college rather than to a job placement--will it count against our placement results? These questions are also somewhat removed from the Treatment A versus Treatment B questions of social scientists. But if someone cannot answer these questions, it is highly unlikely that the designs set in motion by Congress will be translated into employment for disadvantaged young people, or that the application of research methods to employment programs will yield information useful to policy makers.

What constitutes "useful knowledge," then, depends on where you stand in the complex system of relationships that operates on the youth employment problem. From this premise, three conclusions follow: First, only a small part of what the system as a whole regards as useful knowledge meets the social scientist's definition of useful knowledge. Second, ordinary knowledge, in the form of answers to practical questions about whether things can be done, is a precondition for more sophisticated forms of knowledge, like that resulting from social experiments. And third, if political and administrative systems fail to accumulate ordinary knowledge, they will, with absolute certainty, fail to accumulate scientific knowledge. The notion that social problem-solving requires the faithful application of social science methods to policy decisions, then, is not so much wrong as it is incomplete. Social science deals in a kind of knowledge that is derivative of, and dependent on, other kinds of knowledge. Failing to distinguish between ordinary knowledge and scientific knowledge, and failing to understand the role that ordinary knowledge plays in the creation of scientific knowledge, is the single largest problem with social science in the service of policy making.

As an exercise in the creation and codification of ordinary knowledge, the knowledge development process was a qualified success--at least in the eyes of people who regard ordinary knowledge as important. As an exercise in the application of social science methods to the problem of youth employment, it was less successful, but by no means a complete failure. Whatever its other defects, the knowledge development process did reflect, in its design and execution, the distinction between ordinary knowledge and scientific knowledge. Taggart observed that the application of social science methods to early YEDPA projects was "researching ineffectuality, not intervention." He observed later that, for all its defects, the knowledge development effort produced more social science on employment questions than any previous federal intervention. His understanding of the limits of the existing delivery system led him to take a skeptical view of the possibilities for experimentation and to focus on creating the prior conditions for scientific knowledge. On the one hand, this focus resulted in what seemed, from the point of view of social science, a disproportionate investment in activities that did not produce "results" in the form of clear treatment-control comparisons.

On the other hand, the focus seems far more troubling to social scientists than it does to other actors in the process, including the Congress, which authorized the program to start with. At a minimum, then, it seems that future large-scale employment research and demonstration projects should begin with a frank acknowledgment that experimentation is the final stage of some larger effort to codify ordinary knowledge, not the first step in finding out what works. Doing research and demonstration projects involves a large-scale investment over a long period of time in creating a conventional wisdom, translating it into structures and beliefs and behavior, and then (after a fashion) subjecting it to some sort of rigorous empirical test.

Beyond this minimum condition, it seems reasonable to promote actively the notion that different levels of knowledge are required to mount large-scale research and demonstration projects, and that doing research is only one way of gathering the necessary knowledge. Simple expedients are often the most effective, like practitioners' workshops, regularly scheduled congressional visits to pilot projects, and head-to-head discussions among administrators, practitioners, and researchers. All of these, and more, occurred in the knowledge development process. Whether they are understood as legitimate parts of knowledge development in retrospect is problematical. When the results of the knowledge development process are culled for "hard" conclusions about what works, these parts of the process are often lost.

Delivering Services and Discovering Effects

Another important tension running through the knowledge development process is that between delivering services to constituents and tracing the long-term benefits of those services for disadvantaged youths and for society at large. Most descriptions of YEDPA begin with the statement that its purpose was to find out what works in getting high-risk, disadvantaged youths into the labor market. As we have seen, this is not so much an inaccurate reading of the intent of Congress as it is an incomplete one. Certain members of the House had a genuine interest in finding out what works, but that interest was also rooted in a politically motivated desire to restrain the Senate's enthusiasm for spinning out new programs. Most key Senators thought they knew what to do and saw YEDPA as the vehicle for doing it. The compromise between the House and Senate incorporated both the House's tentativity and the Senate's commitment to specific solutions. More importantly, though, Congress's charge to DOL made clear that the new resources were to be deployed to support the network of constituencies that had grown up around employment training programs. If DOL failed in that mission, the issue of "what works?" would be moot, since there would be no political constituency to support youth programs in the next round of congressional debate. While finding out what works was an important purpose of YEDPA, delivering services to political constituencies, state and local jurisdictions, employment training organizations, and disadvantaged youths was instrumental to that purpose.

Research and development without a political constituency is of little practical use to elected policy makers.

Most of the money spent on knowledge development was not spent on research. It was spent on providing jobs, training, and education to disadvantaged youths. Most of the decisions about which organizations would receive YEDPA discretionary funds were not based on the proven research capacity of those organizations, or even the expected payoff of the funds in research results. In fact, most organizational recipients were chosen on the basis of the constituencies they represented. Within the vast collection of projects that knowledge development comprised were a limited number of projects chosen explicitly for their research value--some on the basis of congressional intent, some on the basis of OYP's policy research agenda. It was in this limited array of projects that the research payoff of knowledge development was to occur. One can argue about whether the research agenda was well formulated, whether the right projects were chosen and developed in the right ways, whether the proportion of constituency-based projects was too large, or whether the right organizations were represented in the constituency-based projects. But it is difficult to argue with the fact that most of what goes on in research and development activities of the scale represented by YEDPA consists of delivering services to constituents, not doing research. It is also difficult to argue with the fact that creating political constituencies is an important part of the process of getting from research to a change in policy.

This intimate connection between delivering constituent services and discovering effects did not elude Congress, nor did it elude Taggart when he deployed YEDPA discretionary money. It did, however, seem to elude many of the social scientists and policy analysts who criticized the knowledge development effort. The confusion between "advocacy" and "research" troubled some, as did the raggle-taggle quality of the research in many of the demonstration projects. Anxious to show that social science could deliver clear, policy-relevant guidance, they failed to see that the delivery of services was driving research, not vice versa.

There is a vicious paradox in the use of social science rhetoric to justify social intervention. YEDPA is described as an attempt to find out what works, when in fact it was an attempt to deliver services to constituents while at the same time finding out what works. Because many people, even the politically sophisticated who presumably know better, accept that the primary purpose was to find out what works, the "mere" delivery of services becomes tainted. It is not enough to get the money out to the right people and to get the right organizations involved in searching for solutions to the problems of disadvantaged youths. If the delivery of services does not add significant new knowledge to social science, or provide solutions to the problem of structural unemployment, it is a failure. Anything short of significant new social science knowledge is just pork barrel. There is nothing wrong with aspiring to significant new social science knowledge, or to long-term solutions to structural unemployment. The problem occurs when, aspiring to these things, we conclude that merely providing jobs, training, and education to disadvantaged youths, and merely building a professional constituency with an interest in providing those services means that policies have failed.

When this happens, the gulf between science and politics widens irreparably. The fact that we find it easy to discredit interventions that merely deliver services, but difficult to find scientifically valid solutions to chronic social problems, may mean that we have gotten too sophisticated in using the rhetoric of social science to justify social intervention. Until the "solutions" come along, we may simply need to do a better job of delivering services. Rather than arguing that large-scale social interventions will result in solutions to chronic problems, we may want to say that, while we are working on the chronic problems, we intend to see that some number of disadvantaged young people get access to jobs, training, and education. If we fail at the more ambitious task of finding scientifically valid solutions, we have at least succeeded in delivering services and at creating a constituency committed to the search for solutions.

In practical terms, researchers and policy makers alike should moderate their use of social science rhetoric to justify social intervention. Finding out what works, in the scientific sense, requires a long-term investment in practical knowledge as well as research. If that investment is not possible, then we should not expect to find solutions to chronic social problems. In the meantime, merely delivering services may be the best we can do.

Political and Organizational Correlates

Laurence Lynn (1981b), in his book Managing the Public's Business, argues that the alleged failures of public management are as much a result of poorly framed policies as they are of incompetent administrators. The initial conditions set for public servants often make their success unlikely. There is probably no better illustration of this argument than YEDPA. DOL was given four new youth programs to implement. It was directed to expand two existing programs dramatically, and it was given a large amount of discretionary money to find out what works for disadvantaged youths--all with a 1-year authorization. The programs were reauthorized in 1978, but by that time the Carter administration had launched the Vice President's Task Force on Youth Employment, with instructions to produce a new youth employment policy by the following year. The pressure mounted within the administration to produce results that simply were not there. By 1980, as the YEDPA research and demonstration agenda was beginning to produce results, the presidential election brought a reversal of the mandate under which YEDPA was launched. Each of these events can be explained by the logic of electoral politics. Electoral politics is what makes policy research possible. But, against this background, it should surprise no one that the results of YEDPA knowledge development fell short of expectations.

In a practical sense, there was little anyone in DOL could do to control the volume or the pace of the political demands they were operating under. No secretary of labor in his right mind would tell the leading members of the U.S. Senate on both sides of the aisle that they should scale back their ambitions.

DOL faced the choice of participating in the authorization of YEDPA and sharing in the credit, or not participating and getting the program anyway. Nor would a sensible secretary discourage the President from making his department's program a central domestic initiative in the next campaign. There were, strictly speaking, no solutions to the problem of external demands on YEDPA, only adaptations. These adaptations carried a high cost, both to the delivery system and the production of useful knowledge.

The political lesson from YEDPA is relatively clear, although probably not very helpful. The scale of the enterprise was incompatible with the pace of external demands. A research and demonstration effort, without the complex structure of operating programs, could have produced modest, short-term results within the amount of time available. A number of new operating programs could have been launched, with limited payoff in terms of new research and development. But both demands together were incompatible with the time and institutional capacity available. It is instructive that the entitlement demonstration, the one piece of the knowledge development effort that had a relatively clear mandate, a finite research agenda, and a considerable amount of institutional research capacity behind it, came the closest to meeting congressional and executive expectations. It is also instructive that the Job Corps, the federal youth program with the greatest institutional maturity, the longest history of trial and error (in both the political and experimental sense), and the most sustained evaluation, is the example that most policy makers reach for when they try to define successful employment policy. The more diffuse the mandate, the more complex the research agenda, and the less well-defined and mature the institutional capacity of the delivery system, the more difficult it is to deliver services and do research on them. The fact that the knowledge development effort produced as much as it did is testimony to the ability of many people to operate under heavy expectations and unreasonable time constraints.

On the organizational side, two main facts stand out: the lack of capacity within DOL to manage an effort of the scale required by the YEDPA mandate, and the lack of explicit consideration of organizational alternatives to the one finally chosen. The lack of capacity is as much a commentary on the nature of federally managed programs as it is on the qualifications of DOL/OYP staff. There were limits to how much research expertise one could expect people with essentially programmatic backgrounds to bring to their jobs. But even with the best-qualified federal staff, running a large-scale federal research and development program is an exercise in indirect management. The programs are administered by people whose main interest is in delivering services; the research and evaluation are done by people whose main interest is in developing and executing studies. The job of the federal administrator, in this set of relationships, is to mediate conflicting interests and to use financial and regulatory incentives to get other people to do their jobs. As Taggart can testify, this is devilishly difficult work for which few people are equipped by experience or training.
The more complex the system of administrative relationships, the more skill required to manage it, and the less uniform one can expect the results to be.

In other words, "lack of capacity" can mean both lack of qualified staff and lack of direct control. Taggart's administrative strategy for dealing with limited capacity was to create capacity in other organizations and manage them from the center. It was well suited to OYP's capacity, in both senses of the term. But it had the weakness of all such strategies--it was vulnerable to variability at the periphery. Some external alliances worked well, because they were well organized and well staffed; others did not. If there are too many cases of the latter, the system becomes difficult to manage from the center. The solutions to this problem lie either in working on a much smaller scale--an alternative not really available under YEDPA--or in generating more capacity on the periphery--something that takes time to do.

The lack of an explicit consideration of organizational alternatives to the one that evolved is not unusual in federal agencies. No one in the executive branch specializes in thinking about alternative ways to organize complicated undertakings. DOL and other executive actors with an interest in YEDPA were preoccupied with larger issues at the beginning of the effort. Taggart was not the sort either to pose alternatives or to stand back and wait while others did. He did what he considered necessary: he consolidated program operations, research, and evaluation in OYP. From Taggart's point of view this was the best solution. It is not clear, however, that it was the best solution from the point of view of DOL, Congress, or the executive branch. Neither is it clear that any of the alternatives for dispersing YEDPA authority among other DOL units would have worked any better. The lesson is not that there was a better way to organize knowledge development. The lesson is, rather, that the decision of how to organize such an effort is probably the most important high-level executive decision that cabinet-rank officials face. It merits careful analysis. It did not get that analysis in this instance.

Payoffs

A few conclusions about the expected payoffs of large-scale research and development efforts like YEDPA follow from this analysis. The first is that, especially when solutions to chronic social problems involve changes in existing institutions or the creation of new ones, ordinary knowledge is a prior condition to the creation of scientific knowledge. Administrators and practitioners need to know what to do, or what to do differently, in the most practical sense, before they can begin to act in systematically different ways. Legislators need to know whether programs can be administered and whether benefits can be delivered, before they can make judgments about whether broader social problems can be solved. Social science methods, by themselves, do not deliver this knowledge. Investing in useful knowledge, then, entails investing as much in simple information, practical intelligence, and networks of communication as in research and evaluation. Second, there is a serious danger in justifying new policies on the basis that they will increase our knowledge of how to solve chronic problems, rather than merely delivering services to constituencies and individuals.

problems turn out to be resistant to social science inquiry, as they usually do, the failure of research discredits the delivery of services. Third, there is little anyone can do to limit the effect of shifts in the political environment on large-scale research and demonstration efforts; but if the complexity of the enterprise is inconsistent with the time constraints imposed by shifting political priorities, the blame for failures should be shared equally by elected officials and administrators. Fourth, one element of large-scale research and development efforts that is subject to executive control is their organization. Initial decisions about how to organize large-scale efforts should be subjected to explicit analysis and high-level executive scrutiny: What capacity is required? What organizations have the required capacity? What capacity needs to be developed? What incentives are available for mobilizing that capacity?

ACKNOWLEDGMENTS

I would like to acknowledge the assistance of Charles Betsey, study director for the Committee on Youth Employment Programs, and Robinson Hollister, chair of the committee, in preparing this paper. Members of the committee provided useful comments and criticism at an early stage of the research. Special thanks also to Bernard Anderson, Gordon Berlin, Michael Borus, Seymour Brandwein, Erik Butler, David Cohen, Fred Fischer, Tom Glynn, Andrew Hahn, George Iden, Arnold Packer, Dan Saks, Nat Semple, and Bill Spring for comments on earlier drafts, though they bear no responsibility for the final product. This paper is based in part on a series of interviews with participants in YEDPA program development and policy-making processes. Their willingness to discuss events and share insights contributed greatly to the paper.

REFERENCES

Borus, M.
1984 Why Do We Keep Inventing Square Wheels? What We Know and Don't Know About Remedial Employment and Training Programs for High School Dropouts. Unpublished paper. Manpower Demonstration Research Corporation, New York City.

Butler, E., and J. Darr
1979 Lessons from Experience: An Interim Review of the Youth Employment and Demonstration Projects Act of 1977. Center for Public Service. Waltham, Mass.: Brandeis University.

Hahn, A.
1979 Taking stock of YEDPA: the federal youth employment initiatives, Part I. Youth and Society 2(2):237-261.

Hahn, A., and R. Lerman
1983 The CETA Youth Employment Record. Washington, D.C.: U.S. Department of Labor.

Hargrove, E., and G. Dean
1980 The bureaucratic politics of evaluation: a case study of the Department of Labor. Public Administration Review 40(March/April):150-159.

Lindblom, C., and D. Cohen
1979 Usable Knowledge: Social Science and Social Problem Solving. New Haven, Conn.: Yale University Press.

Lowry, J.H., and Associates
1979-1980 Determining the Viability of Intermediary Non-Profit Corporations for Youth Programming. 4 vols. Chicago, Ill.: James H. Lowry and Associates.

Lynn, L.
1981a The President as Policymaker: Jimmy Carter and Welfare Reform. Philadelphia, Pa.: Temple University Press.
1981b Managing the Public's Business. New York: Basic Books.

National Council on Employment Policy
1983 Back to Basics Under JTPA. Washington, D.C.: National Council on Employment Policy.
n.d. Investing in America's Future. Alexandria, Va.: Remediation and Training Institute.

Peters, T., and R. Waterman
1982 In Search of Excellence. New York: Harper and Row.

Rein, M., and S. White
1977 Policy research: belief and doubt. Policy Analysis 3:239-271.

Rivlin, A.
1971 Systematic Thinking for Social Action. Washington, D.C.: The Brookings Institution.

Salamon, L.
1981 Rethinking public management: third-party government and the changing forms of government action. Public Policy 24(3):255-276.

Taggart, R.
1980 Youth Knowledge Development: The Process and Product. Unpublished paper.
1981 A Fisherman's Guide: An Assessment of Training and Remediation Strategies. Kalamazoo, Mich.: Upjohn Institute for Employment Research.

U.S. Congress
1977 Conference Report 95-456. June 22, 1977.

U.S. Congress, House of Representatives, Committee on Education and Labor
1977 Youth Employment Innovative Demonstration Projects Act of 1977. Report 95-314. June 22, 1977.

U.S. Congress, Senate, Committee on Human Resources
1977 Youth Employment Act. Report 95-173. May 16, 1977.

U.S. Department of Labor
1978 Employment and Training Report of the President. Washington, D.C.: U.S. Department of Labor.
1979 Employment and Training Report of the President. Washington, D.C.: U.S. Department of Labor.

1980a Employment and Training Report of the President. Washington, D.C.: U.S. Department of Labor.
1980b Knowledge Development Activities, Fiscal Years 1978-1979. Office of Youth Programs, Youth Knowledge Development Report 1.2. Washington, D.C.: U.S. Department of Labor.
1980c Knowledge Development Agenda. Office of Youth Programs, Youth Knowledge Development Report 1.1. Washington, D.C.: U.S. Department of Labor.
1980e Proceedings of an Overview Conference. Office of Youth Programs, Youth Knowledge Development Report 1.3. Washington, D.C.: U.S. Department of Labor.
1981 Employment and Training Report of the President. Washington, D.C.: U.S. Department of Labor.
1982 Employment and Training Report of the President. Washington, D.C.: U.S. Department of Labor.