The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
4  Procedures Used in Evaluating the Effectiveness of YEDPA Programs

An explicit purpose of YEDPA was the establishment of a variety of programs to explore alternative means of dealing with youth employment problems. Implicit in the legislation was the concept that programs be evaluated to determine their relative effectiveness, or "what works best for whom." To date, the only effort to provide a systematic evaluation of the products of the YEDPA research effort is that conducted by Hahn and Lerman (1983) of Brandeis University. Their assessment yielded what are described as "representative" findings from the YEDPA evaluations, based on a review of the significant findings, where significance was defined "in terms of the reliability of the research reviewed and the importance of the policies addressed by the findings" (Hahn and Lerman, 1983:47). The results of the Brandeis evaluation focused attention on the studies undertaken under YEDPA and led, at least indirectly, to this committee's creation.

SOURCES OF DATA AND CRITERIA USED IN SELECTING REPORTS

The major source of information that the committee used for assessing the effectiveness of the YEDPA programs was the research and evaluation reports commissioned by the Office of Youth Programs (OYP) as part of its discretionary knowledge development effort. A collection of over 400 reports, compiled by the Employment and Training Administration (ETA), had been forwarded to us for review. We also searched the available literature beyond the reports generated as part of the YEDPA process (see, for example, the discussion of national data bases in Chapter 9) and consulted people who had experience with youth programs and related research.
Even so, with the exception of studies of two regular CETA programs (the Job Corps and the Summer Youth Employment Program) and Supported Work, programs that predated YEDPA, we relied almost exclusively on the reports of particular youth demonstration projects carried out under YEDPA to assess the effectiveness of youth programs.

We first screened the documents obtained for review to identify reports meeting two criteria: (1) that the report be on a youth employment program that had actually been implemented and was in operation during the YEDPA period (roughly 1977 to 1981) and (2) that the report contain quantitative data on the effectiveness of that program, be it at any stage from implementation to program completion or follow-up. As a result of this screening, we eliminated about 170 reports from further consideration. These were reports on subjects not specifically related to any implemented programs, for example, technical assistance guides, conference proceedings, and program plans and descriptions of a general nature. Some of these reports met the first condition, that is, they reported on youth programs actually implemented, but were not evaluations of program effectiveness. Some others purported to be effectiveness evaluations, but were so lacking in data on program outcomes that they could not be evaluated by the committee.

About 200 reports on youth projects met the initial screening criteria. For each project the reports variously described program implementation, in-program effects, and outcome evaluations. These project reports were then classified according to program type and the target groups served (the classification framework is described later in this chapter). The most comprehensive report on each project was then subjected to a second screening to identify reports of sufficient scientific merit to be reviewed in more depth by the committee as the basis for judging the effectiveness of YEDPA programs. Reports were screened using the following criteria:

1. that there be preprogram and postprogram measurement of major program objectives;

2. that comparable comparison group data be presented; and

3. that initial sample sizes and response rates for participant and control groups be sufficient, preprogram and postprogram, to allow usual standards of statistical significance to be applied to measured program effects and to alleviate concern about attrition bias.
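The two-stage screening just described amounts to a pair of filters over the pool of reports. The sketch below is illustrative only: the committee screened reports by hand, not by script, and the field names and numeric thresholds here are invented, since the text states no numeric cutoffs.

```python
# Illustrative sketch of the two-stage report screening. All fields and
# thresholds are hypothetical; the committee's actual screening was manual.
from dataclasses import dataclass

@dataclass
class Report:
    implemented_1977_81: bool  # program actually operated during YEDPA
    has_outcome_data: bool     # quantitative effectiveness data present
    pre_post_measures: bool    # criterion 1: pre/post measurement of objectives
    comparison_group: bool     # criterion 2: comparable comparison group data
    sample_size: int           # criterion 3: adequate initial sample
    response_rate: float       # criterion 3: adequate response rate

def passes_initial_screen(r: Report) -> bool:
    # Stage 1: implemented program plus quantitative effectiveness data.
    return r.implemented_1977_81 and r.has_outcome_data

def passes_merit_screen(r: Report, min_n: int = 100, min_rr: float = 0.7) -> bool:
    # Stage 2: the three scientific-merit criteria; cutoffs are placeholders.
    return (r.pre_post_measures and r.comparison_group
            and r.sample_size >= min_n and r.response_rate >= min_rr)

reports = [
    Report(True, True, True, True, 250, 0.85),  # would reach in-depth review
    Report(True, False, False, False, 0, 0.0),  # dropped at the initial screen
]
reviewed = [r for r in reports
            if passes_initial_screen(r) and passes_merit_screen(r)]
```

In this hypothetical pool, only the first report survives both screens, mirroring the winnowing from over 400 reports to the set reviewed in depth.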
Subcommittees defined by the four major program types reviewed in depth the project reports that met the three criteria and assessed their effectiveness. Reports of interest for the information they provided on program implementation, whether or not they met the effectiveness criteria, were included in the implementation review reported in Chapter 2. A list of all reports considered by the committee for the effectiveness and implementation reviews appears in Appendix B.

COMPARISON OF PROJECTS REVIEWED AND PROJECTS EXCLUDED FROM REVIEW

The projects included in the effectiveness review were selected on the basis of the scientific merit of their research reports. The projects that form the basis of this review, therefore, may not necessarily be representative of all of the various youth employment projects operated under YEDPA, either discretionary or formula funded.

Table 4.1 shows the major YEDPA discretionary demonstration projects by subpart and funding level and the disposition of reports on their effectiveness in our review process.

TABLE 4.1 [Table body not recoverable from the OCR text: major YEDPA discretionary demonstration projects, by subpart, funding level, and disposition of reports in the review process.]

The 61 projects shown (column 1) are those funded in fiscal 1978 and 1979 at amounts of $200,000 or above, and they include all of the major demonstration projects operated under YEDPA through fiscal 1981. Of these 61 projects, 17 (column 2) could not be reviewed for their effectiveness because reports on their program impacts either were not commissioned or were not produced. Nine of the 17 projects were operated under interagency agreement and accounted for $34.5 million in funding. Two community conservation projects (operated by the Department of Housing and Urban Development) alone accounted for $21.7 million in funding.

The committee screened reports on all 44 projects for which reports were available (columns 3, 4, and 5). Of these projects, eight were excluded at the initial screening stage due to lack of effectiveness data (column 3). Two of these projects alone accounted for $30 million in funding; both were managed by intermediary organizations created to administer, operate, and evaluate these demonstrations. Of the remaining 36 projects (columns 4 and 5), 20 upon further review did not meet established criteria for comparison groups (preprogram to postprogram), sample coverage, and attrition, and they were excluded from further consideration (column 4). Five of these projects, accounting for $15 million in funding, were operated under interagency agreements, including the Youth Community Services Demonstration and the Career Intern Program. Also excluded was the Vocational Exploration Demonstration, a major project funded at $8.7 million. In addition to the projects shown in Table 4.1, we reviewed and excluded five other demonstration projects not included in the 1978-1979 funding. None of these represented a major budget amount. Our review indicated that reports on 16 projects (column 5) met the established criteria, and they were therefore included in our review of YEDPA program effectiveness.
These 16 projects represent about 63 percent of YEDPA discretionary funding, including the entitlement program (YIEPP). The projects include YIEPP ($240.2 million), Ventures in Community Improvement ($10.8 million), the Youth Career Development Project for School-to-Work Transition ($9.6 million), three Summer Career Exploration Projects ($6.8 million), the Public Versus Private Sector Jobs Demonstration ($7.6 million), and the Service Mix Alternatives Demonstration ($5.3 million). In addition to the discretionary projects listed in this table, we also included in our review the two largest regular CETA youth programs, the Job Corps and the Summer Youth Employment Program, as well as the youth portion of the Supported Work demonstration project. Also included, but not shown in this table, were nine other discretionary projects not included in the 1978-1979 funding. These additional projects bring the total number of projects included in our review to 28, representing every subpart of YEDPA (with the exception of the Young Adult Conservation Corps), including the Job Corps and the summer youth program.

OVERALL FRAMEWORK FOR EVALUATION

Our framework for evaluating the effectiveness of YEDPA programs draws together three major dimensions: program goals, program types, and target groups. We organized our review primarily according to program type, noting, in addition, the target groups served and assessing the degree to which the programs affected each of the given program goals (when measurements bearing on each were provided). Therefore, the discussion of program effectiveness appearing in Chapters 5 through 8 is presented largely in terms of program types. In the sections that follow we discuss each of these dimensions briefly.

Program Goals

The YEDPA legislation states a variety of goals for youth programs (see Elmore in this volume). Goals or outcomes can be divided into intermediate goals and long-run or ultimate goals. The long-run goals of different employment and training programs are generally similar; most, if not all, programs intend to effect longer-term improvement in participants' employment stability, earnings, family stability, and so forth. Intermediate program goals, such as increased educational attainment, work experience, knowledge of and attitudes about the workplace, and short-run increases in employment and earnings, vary across programs. Long-run program goals are of ultimate interest from both social and policy perspectives. Intermediate goals, while not usually ends in themselves, may serve as indicators of long-run outcomes to the extent that they are expected to affect longer-term goals. Ultimately, whether intermediate goals are reliable indicators of longer-range outcomes is an empirical question.

Note: This formulation of program goals relies heavily on Barth (1972). As will be discussed in detail in subsequent chapters, YEDPA discretionary projects devoted substantial resources to the collection of data on measures of intermediate goals, much of it (such as attitude measures) subjective in nature.

Program Types

Under YEDPA an attempt was made to ensure that a wide array of program types were tested, covering in one fashion or another most of the concepts about appropriate program types that would emerge from a systematic analysis of goals (U.S. Department of Labor, 1980b). However, even the documents describing the knowledge development effort do not provide a categorization of program types that lends itself readily to a classification scheme useful for evaluating the effectiveness of YEDPA programs. Others who have reviewed youth programs have used different classifications from those used in the knowledge development documents (e.g., Hahn and Lerman, 1983; Rock et al., 1982; Congressional Budget Office, 1982; see Table 4.2).

TABLE 4.2 Youth Program Classifications

PIQ service categories(a): testing, assessment, and employability plan development; counseling (personal and career); other preemployment (world-of-work, basic skills, job search); vocational exploration and job rotation; remedial education, GED, and ESL; classroom vocational skills training; on-the-job training; work experience; support services (transportation, child care)(b); and placement and job development. [The paired classification columns, with entries such as "labor market preparation (career development)," "intensive skills training (out-of-school, e.g., Job Corps)," "temporary jobs program," and "job placement program," are not reliably recoverable from the OCR text.]

NOTE: This chart compares the program classifications used by Hahn and Lerman (1983) and the committee. The first column presents the 10 service activities from the Process Information Questionnaires (PIQ) of the Standard Assessment System (SAS), and the columns to the right indicate how Hahn and Lerman and CYEP classify each PIQ service category for assessment purposes. The CBO (1982) in its analysis of youth programs classified program activities as demand side (i.e., to increase employment demand for the target group), supply side (i.e., to increase employability of youths through training, education, and employment experience), or transition (i.e., improving the exchange of labor through job search and placement).

(a) Other authors have used classifications based on amounts of time in various PIQ categories for analysis of youth programs (Cole et al., 1982; Fuller and Nelson, 1982; Rock et al., 1982).
(b) Not classified.

None of the classifications of program types developed and utilized by others seemed appropriate for our task. These classifications were often based on combinations of specific program activities and services

(e.g., work experience, on-the-job training, classroom training, skills training, counseling, and participant assessment), on the one hand, and client group characteristics, on the other. On examination, we found that most YEDPA programs provided combinations of services to a mix of client groups. Thus, reviews based on classifications with fine breakdowns of service type and client group were forced to discuss a given program repeatedly under different classifications of services (e.g., Hahn and Lerman, 1983).

In designing the classifications of programs for this review we sought to minimize complexity without obscuring essential differences between programs. To this end, we chose four broad program types defined on the basis of intermediate goals. Each program evaluation report was placed in only one type, according to its intermediate goal:

1. Occupational skills training: to equip youths with specific occupational skills and knowledge as a prerequisite either to further training or to job placement in that occupational field. (Examples include both on-the-job and classroom training in such fields as welding, drafting, carpentry, health, and computer occupations.)

2. Labor market preparation: to improve attitudes, knowledge, and basic skills as preparation for entering employment. This category encompasses such programs as career exploration and world-of-work orientation and programs designed to enhance youths' general educational level and skills, thereby improving their future career possibilities. (Examples of the latter are basic--remedial--education and GED programs.)

3. Temporary jobs: to provide youths with employment and general work experience in temporary subsidized jobs, either full time or part time. (Examples of such programs include work experience programs and the Summer Youth Employment Program.)

4. Job placement: to place youths in unsubsidized jobs. Services provided may include job search assistance, placement, and follow-up activities.
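The typing scheme above is a single-label classification keyed on a report's intermediate goal. A minimal sketch, assuming invented goal labels (the committee's actual assignments were made by judgment, not by rule):

```python
# Illustrative only: one evaluation report, one program type, keyed on the
# report's intermediate goal. The goal-label strings are hypothetical.
from enum import Enum

class ProgramType(Enum):
    OCCUPATIONAL_SKILLS_TRAINING = "occupational skills training"
    LABOR_MARKET_PREPARATION = "labor market preparation"
    TEMPORARY_JOBS = "temporary jobs"
    JOB_PLACEMENT = "job placement"

# Stand-in goal labels mapped to the four broad program types.
GOAL_TO_TYPE = {
    "specific occupational skills": ProgramType.OCCUPATIONAL_SKILLS_TRAINING,
    "work attitudes and basic skills": ProgramType.LABOR_MARKET_PREPARATION,
    "subsidized work experience": ProgramType.TEMPORARY_JOBS,
    "unsubsidized placement": ProgramType.JOB_PLACEMENT,
}

def classify(intermediate_goal: str) -> ProgramType:
    """Single-label assignment: each report falls in exactly one type."""
    return GOAL_TO_TYPE[intermediate_goal]
```

Using exactly one label per report is what lets each program be discussed once, rather than repeatedly under fine-grained service categories as in earlier reviews.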
Target Groups

At the outset, our evaluative framework cross-classified programs by the four broad program types just described and by the target groups served, as classified by school status and age. School status distinguished in-school youths from out-of-school youths, the latter being further subdivided into those who had graduated from high school and those who had dropped out. The age groups, defined to correspond roughly with grade level, were 14-15, 16-18, and 19-21. The racial, ethnic, and sex composition of program participants was also indicated.

It was our hope that this specification of target groups, cross-classified with program types, would allow us to address the question of what works best for whom. In practice, while we did take note of the details of participant target groups, we found it was not possible to carry out separate analysis according to all of these target group categories. This was

primarily due to the fact that most of the programs contained mixes of participants from the different categories and few of the evaluation reports on which our assessments were based provided separate outcome analyses for different categories of participants conforming to our detailed classification. As a result, while our reviews of program effectiveness provide as much detail as the source material allows, our summary conclusions distinguish only between in-school and out-of-school youths, cross-classified by program type.

LIMITATIONS OF THIS REVIEW

Our ability to draw firm conclusions about the effectiveness of youth employment and training programs was constrained by two conditions that affected the implementation of YEDPA and particularly the conduct of research. First, YEDPA programs and research were mounted in considerable haste in a period in which many other employment and training and research efforts were ongoing, so that both program and research resources were stretched very thin. Second, in 1981, less than 3 years after their quick start-up and troubled implementation, many programs and evaluation efforts were abruptly halted with the change of administration.

As a consequence of these factors, most of the data on which evaluations were based, with a few exceptions, were gathered at a stage at which programs had not yet become stabilized. As a further consequence, relatively few program evaluations provide data for long postprogram periods; virtually all of the YEDPA project evaluations had postprogram follow-ups of only 3 to 8 months. Only two evaluations had as much as a 3-year follow-up, and both of those were of pre-YEDPA programs (the Job Corps and Supported Work). Further limiting our ability to draw firm conclusions were the serious problems many YEDPA researchers apparently had in creating reasonable comparison groups and preventing sample attrition over waves of the data collection.
These problems sharply reduced the number of studies that could be reviewed and put in question the reliability of the results of several others.

A final limitation of this review concerns the very magnitude of YEDPA and CETA programs from 1977 through 1981. It has been estimated that in 1978 as much as one-half of all jobs held by black teenagers during the summer were in the summer program and as much as two-fifths of jobs held in 1979 were in government employment and training programs (Crane and Ellwood, 1984; Elmore, in this volume). Thus, even when comparison groups were reasonably created, there may well have been substantial amounts of employment and training among the comparison group members. To the degree this program participation is undetected in the evaluation data, the participant-comparison contrasts will underestimate the impact of these programs.

We have attempted to test the individual YEDPA research reports against reasonable standards of scientific quality with respect to both the data collected and the methods used to measure program effects. The reports that met such standards were not necessarily evenly distributed over the range of operational youth programs or target groups being served. Thus, there are issues with respect to the role and effectiveness of youth employment and training programs that we could not address due to a dearth of reliable evidence. In addition, the quality of the available evidence varies, sometimes supporting strong conclusions, sometimes merely suggesting the direction of program effects.

Our assessments of the effectiveness of youth programs derive from examining published evaluation reports on these efforts rather than from our own evaluation of the programs themselves. Since it is possible that poorly executed or poorly presented research reflects unfairly on the programs being examined, it is important that we clearly distinguish between the quality of the research and the (possibly unobserved) quality of the programs. While we have attempted to avoid drawing inferences about program effectiveness on the basis of research quality, and are fairly confident that we have been successful in doing so, we caution the reader to bear in mind that a determination of either effectiveness or ineffectiveness requires credible evidence. Lack of evidence on effectiveness is not synonymous with lack of effectiveness.

In our evaluation of the effectiveness of youth employment and training programs, issues related to the adequacy of the evidence often overshadowed those related to the policy or practical significance of the magnitudes of reported effects. The question of the reliability of estimated effects is logically prior to a consideration of their policy importance. Consequently, when results fail the test of reliability (in an evaluative or statistical sense), further discussion of their implications for policy is rendered moot. Because many of the reports we reviewed did not provide reliable estimates of program effects, we often could not address the issue of the policy significance of the findings.
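The underestimation from comparison-group participation noted above can be made concrete with simple arithmetic. This sketch is not from the report; it assumes, for illustration only, that "contaminated" comparison members gain the full program effect:

```python
# Back-of-the-envelope illustration (not from the report): if a fraction c of
# the comparison group also receives employment and training services with
# the same effect, the measured participant-comparison contrast shrinks by
# that fraction relative to the true effect.
def measured_contrast(true_effect: float, contamination: float) -> float:
    participant_mean_gain = true_effect
    comparison_mean_gain = contamination * true_effect  # contaminated members gain too
    return participant_mean_gain - comparison_mean_gain

# A true 10-point employment gain, with two-fifths of comparisons also served:
print(measured_contrast(10.0, 0.4))  # -> 6.0
```

Under these stylized assumptions, the evaluation would report only 6 points of a true 10-point effect, which is the sense in which undetected comparison-group participation biases the contrasts toward zero.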