The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.

A P P E N D I X  E

The Rating of Worker Functions and Worker Traits

PAMELA S. CAIN and BERT F. GREEN, JR.

In the course of producing the DOT, jobs and occupations were rated for a variety of characteristics, called worker functions and worker traits. These ratings and the procedures by which they were assigned are described in chapter 6. Because of the widespread and varied use made of these ratings both inside and outside the U.S. Employment Service, it is especially important that they be accurate, that is, that they measure what they purport to measure. The ratings assigned to DOT occupations, like all such ratings, are subject to various influences, some of which are legitimate bases of variation and some of which are not. An occupation might be rated differently on a given characteristic not only because it actually requires different levels or amounts of the characteristic in question but also because of the particular circumstances in which the ratings were made, characteristics of the raters, specific features of the occupation itself, etc. Such ratings invariably entail some measurement error; they reflect, to some extent, characteristics other than those they are supposed to measure. There are several reasons to suspect that the ratings of DOT occupations for worker functions and worker traits are subject to error. First, the factors that the DOT scales purport to measure are vague and ambiguously defined. It is not readily apparent what they are intended to measure, i.e., what the "true" scores of the phenomenon being rated should be. Worker functions, for example, are said to "express the total level of complexity of the job-worker situation" (U.S. Department of Labor, 1972:5), but

316 WORK, JOBS, AND OCCUPATIONS

"complexity" is never defined or specified further. Sidney Fine, who was instrumental in developing worker functions, has also written that they reflect skill estimates (Fine, 1968a:374) and worker autonomy, i.e., the extent to which workers are engaged in "prescribed versus discretionary duties" (Fine, 1968b:7). The reliability of the ratings is also called into question by the extremely high correlations (of the order of .90) between some of them and measures of the social status or prestige of occupations. This concern has been voiced about general education development (GED) by several researchers, notably Siegel (1971) and Duncan et al. (1972).1 Concern about the reliability of the DOT factors arises for other reasons as well. Analysts reported difficulty in assigning scores on certain factors, especially specific vocational preparation (SVP) and aptitudes. Reasons cited for this were the ambiguity of the factors and the inadequacy of instructions contained in the Handbook for Analyzing Jobs (U.S. Department of Labor, 1972). Furthermore, production of the fourth edition DOT was highly decentralized. Analysts were spread across 10 field centers and 1 special project, and there was reportedly little communication or coordination of effort among them, nor were their activities closely supervised or standardized by the national office. In order to assess the impact of several potential sources of variation in these ratings, we carried out an experimental study to (1) determine the overall level of reliability for selected worker functions and traits and (2) identify significant bases of variation in or influences on the ratings.
In the latter regard we investigated whether the ratings were influenced by (1) analysts' field center affiliation, (2) the type of occupation being rated, i.e., whether in service or manufacturing, (3) the general education development level of the occupation, (4) the particular job description (one of two) of the occupation being rated, and (5) the particular analyst making the rating. The interactions of these various influences were also taken into account in the design and analysis of the study. The specific effects, along with their labels and a brief description of each, are given in Table E-1.

STUDY DESIGN

With the assistance of national office personnel we asked six experienced job analysts at each field center, each with at least 6 months' training and experience, to rate one of two sets of job descriptions. If more than six

1 If an occupation's social standing is indeed dependent on its functional requirements, as some theorists, notably Davis and Moore (1945), have argued, then it could be argued alternatively that correlations of this magnitude are evidence of the validity of the worker functions.

Rating DOT Worker Functions and Worker Traits

TABLE E-1 Sources of Variation in Ratings of Occupational Characteristics

    Source  Label      Description of Effect
     1.     T          type of occupation
     2.     G          level of general educational development (GED)
     3.     TG         interaction of job type and GED
     4.     J(TG)      jobs nested within the interaction of job type and GED
     5.     C          center
     6.     CT         interaction of center and job type
     7.     CG         interaction of center and GED
     8.     CTG        interaction of center with interaction of job type and GED
     9.     CJ(TG)     interaction of center and jobs nested within interaction of job type and GED
    10.     DJ(TG)     interaction of description and jobs nested within the interaction of job type and GED
    11.     CDJ(TG)    interaction of center with interaction of description and jobs nested within interaction of job type and GED
    12.     R(CD)      raters nested within the interaction of centers and description
    13.     RT(CD)     interaction of raters and job types nested within interaction of centers and description
    14.     RG(CD)     interaction of raters and GED nested within interaction of centers and description
    15.     RTG(CD)    interaction of raters with interaction of job type and GED nested within interaction of centers and description
    16.     RJ(TGCD)   residual

LEGEND: T, one of two types of occupation: service versus manufacturing; G, one of four levels of GED; J, one of three DOT occupations within eight categories of job type by GED; C, one of seven field centers; D, one of two job descriptions for a given DOT occupation; R, one of 42 individual occupational analysts.

experienced analysts were available at a given center, we chose six at random to participate in the study. Three centers with fewer than six experienced analysts (Florida, Texas, and Utah) were eliminated from the analysis, although they did participate in the actual ratings task. Analysts at the Arizona special project participated in a pretest of the ratings task. Each set of job descriptions represented 24 distinct DOT occupations.
To select occupations and job descriptions, we created two types of jobs: (1) "service," which consisted of base title occupations in the clerical and sales and service categories of the DOT, and (2) "manufacturing," which consisted of base title occupations in the DOT categories of processing, machine trades, benchwork, and structural occupations. Preliminary analysis established that the variation in ratings over all occupations is

approximately the same in these two categories (the standard deviation of GED for service occupations is .784 versus .880 for manufacturing occupations; the range of GED is 1-6). This equivalence offered some measure of confidence that we could make valid comparisons between the reliabilities of the two categories. Within these two broad categories of occupations, titles were stratified by four levels of GED. A set of base title occupations was then selected at random within each of the eight combinations of job type (2) by GED (4). The source files of these occupations were inspected in order to locate titles with two adequate job descriptions.2 Descriptions were judged adequate if items 4 (job summary) and 15 (description of tasks) of the job analysis schedule had been completed according to instructions in the Handbook for Analyzing Jobs (U.S. Department of Labor, 1972). Thus the description had to contain information on the purpose and nature of the job; the significant involvement of workers with data, people, and things; the level of such involvement; and a detailed description of job tasks with an indication as to the amount of time spent on each. If fewer than two acceptable descriptions were available for an occupation, we eliminated it and proceeded to the next randomly selected occupation in the set. If more than two acceptable descriptions were available for an occupation, two of the descriptions were chosen at random. In this way, two job descriptions for each of three base title occupations were selected for eight combinations of job type by GED. (It might be noted in passing that we had to go through 92 DOT codes in order to obtain the necessary two descriptions for each of 24 occupations, yet another indication of the poor quality of the DOT source data.)
Fifteen occupations (16 percent of the total number of codes we inspected) were eliminated because we could not match the code we had obtained from the DOT summary tape (provided by the national office) to a code in the source data. In most such cases one of the worker function codes on the tape was one point lower than it was in the source data. The systematic nature of the discrepancy resulted from some last-minute changes in occupational codes prior to publication of the DOT that were apparently not incorporated on the summary tape. The results are based on the ratings of 42 analysts at 7 field centers. Each analyst rated 24 job descriptions taken verbatim from job analysis schedules. Each job description was rated with respect to worker functions (DATA, PEOPLE, and THINGS); training times (the reasoning, math, and language components of GED, plus SVP); all six physical capacities; and all

2 The source materials for the fourth edition DOT are housed at the North Carolina field center. We wish to express our gratitude to the staff there for the assistance we received in choosing job descriptions for our study.

seven environmental conditions. Each description was thus rated on 20 separate factors. The ratings task and the rating form used closely approximated the ratings made in the normal course of job analysis for the DOT, although analysts were unable to observe the jobs directly, as they would usually do. The rating task was administered to the 42 raters at their respective centers on June 11, 1979, under controlled conditions. Analysts worked in conference rooms rather than at their desks and were proctored by the field center supervisor or a designated assistant. There was no time limit, and analysts were instructed to work at their normal pace. Analysts were also instructed not to consult the DOT or one another while making the ratings. Ratings were assigned according to procedures contained in the Handbook for Analyzing Jobs. Raters were free to consult the Handbook for additional instruction or bench marks, if needed. Supervisors were not requested to keep track of the time required to complete the ratings, but according to informal reports most analysts finished in about 4 hours. On the last page of the questionnaire, analysts were invited to comment on the ratings task. Eighteen of the 42 raters did so. Almost every comment noted that the descriptions contained insufficient information to rate jobs for physical capacities and environmental conditions. Some analysts noted the same difficulty for SVP. Despite this difficulty, analysts completed almost all of the ratings, and there were few missing data. Of the total of 20,160 ratings (42 raters rating each of 24 jobs for 20 factors), only 21 were not made. For these, missing data were replaced with sample means. The amount of missing data is so small that this replacement procedure should have a negligible effect on our estimates.
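The mean-substitution step just described can be sketched as follows. This is a minimal illustration with hypothetical data; the study's actual 42 x 24 x 20 rating array is not reproduced here.

```python
def impute_with_means(ratings):
    """Replace missing entries (None) in each column (factor) with that
    column's mean over the observed values, as was done for the 21
    missing ratings in the study."""
    n_cols = len(ratings[0])
    col_means = []
    for j in range(n_cols):
        observed = [row[j] for row in ratings if row[j] is not None]
        col_means.append(sum(observed) / len(observed))
    return [
        [row[j] if row[j] is not None else col_means[j] for j in range(n_cols)]
        for row in ratings
    ]

# Three raters (rows), two factors (columns); one rating is missing.
filled = impute_with_means([[1, 2], [3, None], [5, 4]])
```

With so few missing entries, this substitution leaves the column means, and hence the variance-component estimates, essentially unchanged.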
RESULTS

An analysis of variance technique is used to calculate the reliability of the ratings for the worker functions (DATA, PEOPLE, and THINGS), GED, SVP, STRENGTH, and LOCATION factors. For a discussion of the rationale for and use of the analysis of variance to calculate reliabilities, see, for example, Lindquist (1953). Generally, the advantage of this method over other methods is that it enables the user to disentangle the effects of separate influences on the ratings and hence to estimate the amount of error due to each source. Complete results from the analysis of variance are presented in Tables E-2, E-3, and E-4. (These tables are not discussed but are provided for the interested reader.) Table E-5 presents three estimates of the reliability of each rating, making different assumptions for each about what constitutes "error

[TABLE E-2 (page 320): complete analysis of variance results; the table values are not recoverable from the OCR text.]

[TABLE E-3 (page 321): complete analysis of variance results; the table values are not recoverable from the OCR text.]

[TABLE E-4 (page 322): complete analysis of variance results; the table values are not recoverable from the OCR text.]

[TABLE E-5 (page 323): variance components and reliability estimates for each rating; the table values are not recoverable from the OCR text.]

variance." Reliabilities are calculated from variance components estimated according to procedures in the work of Green and Tukey (1960). The variance components shown in the body of Table E-5 are the proportions of variation in a given characteristic due to particular effects. Variance components were calculated only for effects that were statistically significant at the 1-percent level of probability. Comparing all the analyses, we found that in most of them a standard pattern emerged in which the effects related to analysts' field center affiliation (effects C through CJ(TG)) were nonsignificant. Thus variance components were not calculated for these effects. The nonsignificance of field center effects is a substantively important finding. It is also somewhat unanticipated, given the lack of coordination among field centers. What it means is that ratings do not vary according to the particular features of field centers. Reliabilities are calculated across all 24 occupations. Each reliability represents the proportion of total variation due to true sources. In all the analyses the effects of occupation, type of job (manufacturing versus service), and the general education development level of the job (T through J(TG)) are considered to be true or valid sources of variation in the ratings. In all the analyses the residual (RJ(TGCD)) is assumed to be random or error variance. As noted, however, we made alternative assumptions about what other effects constituted error. In calculating the first set of reliability estimates (labeled "minimum") we considered variation due to the particular description being rated (DJ(TG)) and variation due to the assorted rater effects (CDJ(TG) through RTG(CD)) to be error, in addition to the residual. This set of reliabilities, the most stringent lower-bound estimate, gives us a sense of the reliabilities that would be obtained if each occupation were rated by one rater on the basis of only one description.
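Each reliability estimate is thus simply the share of total variance assigned to sources treated as "true"; the three sets of estimates differ only in which effects are counted on that side. A minimal sketch with hypothetical variance components (the study's actual components are in Table E-5; the effect labels follow Table E-1):

```python
# Hypothetical variance components, expressed as proportions of total variance.
components = {
    "T": 0.05, "G": 0.30, "TG": 0.05, "J(TG)": 0.25,  # occupation effects
    "DJ(TG)": 0.10,   # job-description effect
    "raters": 0.15,   # pooled rater effects (CDJ(TG) through RTG(CD))
    "residual": 0.10, # RJ(TGCD)
}

def reliability(components, true_effects):
    """Proportion of total variation due to the effects treated as 'true'."""
    return sum(components[e] for e in true_effects) / sum(components.values())

occupation = ["T", "G", "TG", "J(TG)"]
r_minimum = reliability(components, occupation)
r_medium = reliability(components, occupation + ["DJ(TG)"])
r_maximum = reliability(components, occupation + ["DJ(TG)", "raters"])
```

By construction the three estimates can only increase as more effects are reclassified from error to true variance, which is the pattern the text goes on to describe.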
Under the second assumption, variation due to different descriptions is considered to be true or valid, and only rater effects, in addition to the residual, are considered to be error. These estimates of reliability (labeled "medium") can be interpreted as the reliabilities that would be obtained if each occupation were rated by one rater on the basis of two job descriptions. The third set of reliabilities (labeled "maximum") relaxes the assumptions about error even more. In these estimates, only the residual effect is considered to be error; the differences between raters and field centers are taken as valid sources of variation. The difference between reliabilities in the first and second set of estimates indicates the contribution of the job description effect per se to the total variation in the ratings. Similarly, the difference between the

second and third reliability estimates indicates the contribution of the rater effects per se. Turning to the results in Table E-5, because of the presence of significant, sometimes relatively large, job description and rater effects, we note that the three sets of estimates often differ considerably from one another. The impact of the job description effect is best seen by comparing the first and second sets of reliability estimates for each factor. While differences between the two sets average .08, they range from .01 (DATA) to .21 (THINGS), an indication that the ratings on some factors are more sensitive than others to particular features of the job description. The effect of job description is relatively small for DATA, GED-MATH and GED-LANGUAGE, SVP, and LOCATION. It has a larger impact on the remaining ratings, especially those for PEOPLE, THINGS, GED-REASON, and STRENGTH. Comparison between the second and third reliability estimates reveals large rater effects on all the ratings. The effect is especially large for THINGS (a difference of .19), GED-MATH (.24), GED-LANGUAGE (.19), and STRENGTH (.19). Across characteristics the reliabilities also vary greatly. Under the most stringent assumptions (r(minimum)), reliabilities range from a low of .25 for THINGS to a high of .84 for DATA. The second set of estimates probably embodies the most realistic assumptions about what constitutes error. These reliabilities are not especially high, ranging from .46 for THINGS to .85 for DATA. Under the most relaxed assumption, reliabilities (r(maximum)) are up to fairly acceptable levels, in the high .80s and low .90s for all of the ratings except THINGS, STRENGTH, and LOCATION. It should be kept in mind, however, that in these estimates rater variation is considered to be true variance, hardly a tenable assumption.
These estimates, in fact, are only useful insofar as they enable us to calculate the magnitude of variation due to raters. The especially low reliabilities of the THINGS and STRENGTH scales may well result from insufficient information in the descriptions being rated. Of the 18 analysts who made comments at the end of the study, most noted that the descriptions contained insufficient information to rate jobs for physical capacities and environmental conditions. Although a similar difficulty was not reported for the THINGS factor, the scale used to rate THINGS is almost completely dominated by functions that deal with the relation of the worker to machines (five of its eight levels). Thus the lower reliabilities on THINGS might be due to the difficulty of assigning ratings to occupations with tasks in which machines are unimportant. Overall, the reliabilities are low enough to cause concern. The large effects of job description (the difference between the medium and minimum estimates) reveal that for each of the characteristics there is

considerable diversity in the description of jobs classified within an occupation. Certainly there is more than would be assumed from a reading of the Definition Writer's Manual (U.S. Department of Labor, 1974) or from the fact that, typically, only a small number of jobs are analyzed for each occupation (see chapter 7). Moreover, although there is no significant difference between ratings across field centers, there are significant differences across analysts within field centers. Thus ratings are substantially affected by the idiosyncrasies of individual analysts. The implications of these results are twofold. If a reliable rating is desired of a given characteristic for a given occupation, it will be necessary both to use more raters and more descriptions per occupation and to average the sets of ratings thus obtained. The number of raters and descriptions needed to achieve a desired level of reliability can be estimated from the results presented here using the general Spearman-Brown formula (see, for example, Allen and Yen, 1979). Thus, starting with an initial r(medium) of .80 (for example, SVP), a reliability of .89 can be achieved by increasing the number of raters to two; if three raters are used, a reliability of .93 can be obtained. Substituting jobs for raters and using the same procedures, with r(minimum) as the base, we find that by having the raters rate two job descriptions per occupation the reliability of SVP will increase from .76 to .86; by having raters rate three job descriptions, a reliability of .90 can be obtained. Therefore for all of the factors, both the number of raters and the number of jobs rated per occupation will need to be increased somewhat in order to achieve satisfactory levels of reliability. The increase needed will be relatively smaller for those factors with higher initial reliability.
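The projections above use the general Spearman-Brown formula, r_k = k*r / (1 + (k - 1)*r), where r is the single-rater (or single-description) reliability and k is the number of raters or descriptions averaged. A short sketch reproducing the SVP examples from the text:

```python
def spearman_brown(r, k):
    """Projected reliability of the average of k parallel raters (or
    descriptions), given a single-rater reliability r."""
    return k * r / (1 + (k - 1) * r)

# r(medium) for SVP is .80; averaging two raters projects to about .89.
two_raters = round(spearman_brown(0.80, 2), 2)   # 0.89
# r(minimum) for SVP is .76; two and three job descriptions project to
# about .86 and .90, respectively.
two_descriptions = round(spearman_brown(0.76, 2), 2)   # 0.86
three_descriptions = round(spearman_brown(0.76, 3), 2)
```

Because the formula has diminishing returns in k, factors with higher initial reliability need fewer additional raters or descriptions, as the text notes.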
In a second analysis of these ratings we calculated reliabilities separately for the two types of jobs, service and manufacturing, in order to see whether the ratings were less reliable for the service category. We reasoned that they might be because the scales were developed during a historical period in which manufacturing jobs predominated. The scales might as a result be better suited to the rating of manufacturing jobs. Furthermore, because most occupations contained in the DOT are in manufacturing industries, analysts are presumably more practiced in rating such occupations. The reliabilities by job type, service versus manufacturing, are presented in Table E-6. These reliability estimates were calculated using the same set of assumptions about error that were used in the previous analysis. For all the characteristics, with only one exception (STRENGTH), all three estimates of reliability are lower for the service occupations than they are for manufacturing. These results suggest that particular attention should be paid to the rating of service occupations in order to bring their reliabilities up to par

Rating DOT Worker Functions and Worker Traits

TABLE E-6 Estimated Reliabilities, by Type of Occupation a

    Characteristic b     Service   Manufacturing
    DATA
      r (minimum)        .694      .880
      r (medium)         .727      .889
      r (maximum)        .798      .918
    PEOPLE
      r (minimum)        .666      .908
      r (medium)         .795      .933
      r (maximum)        .830      .972
    THINGS
      r (minimum)        .107      .186
      r (medium)         .329      .406
      r (maximum)        .632      .637
    GED-REASON
      r (minimum)        .652      .694
      r (medium)         .717      .794
      r (maximum)        .792      .888
    GED-MATH
      r (minimum)        .422      .629
      r (medium)         .431      .682
      r (maximum)        .771      .878
    GED-LANGUAGE
      r (minimum)        .552      .690
      r (medium)         .609      .739
      r (maximum)        .853      .862
    SVP
      r (minimum)        .724      .768
      r (medium)         .739      .834
      r (maximum)        .873      .925
    STRENGTH
      r (minimum)        .435      .138
      r (medium)         .594      .495
      r (maximum)        .724      .705

a Reliabilities are calculated under three different assumptions about sources of error. See text for explanation.
b Reliabilities for the LOCATION factor could not be calculated separately for service and manufacturing occupations because there was no variation on this factor for the manufacturing occupations.

with those for manufacturing occupations. Although the addition of more raters and descriptions would raise the reliabilities for service occupations, the results of this analysis also suggest that other steps need to be taken. Additional training and practice in the rating of service occupations may be needed, or perhaps better guidelines and bench marks in the Handbook instructions. More fundamentally, the scales used to rate occupations for these characteristics may need to be adapted to the unique features of service jobs. Analysis of the ratings of the remaining physical demands and environmental conditions requires a different approach. These variables are dichotomous and take on only one of two values, signifying either the presence or the absence of a given characteristic. To assess the reliability or consistency of ratings on these factors, two types of analyses were conducted. First, for each characteristic the modal or most frequently occurring rating was determined for each of the 24 DOT occupations. Consensus among raters was then calculated as the proportion of raters giving the modal response. If all raters agreed that a given characteristic was present, the proportion is 1.00, indicating perfect consensus. Table E-7 presents estimates of consensus obtained in this way. The average consensus across jobs (last row of the table) varies considerably from scale to scale. Ratings are least consistent for TALK (.84) and SEE (.68). Except for these ratings, however, the overall proportion of agreement is quite high, at least .87 for NOISE, with a high of .96 for CLIMB. A second feature of these results is that the poorest consensus among raters (lowest proportions) occurs disproportionately for occupations in the service category (top half of table). These results echo the finding that reliabilities are lower for service than for manufacturing occupations.
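The consensus measure described above, the proportion of raters giving the modal response, can be sketched as follows (hypothetical ratings; the study's actual values are in Table E-7):

```python
from collections import Counter

def modal_consensus(ratings):
    """Proportion of raters giving the modal (most frequent) rating.
    A value of 1.0 indicates perfect consensus among raters."""
    most_common_count = Counter(ratings).most_common(1)[0][1]
    return most_common_count / len(ratings)

# 21 raters rate one dichotomous characteristic (1 = present, 0 = absent)
# for one occupation; 18 of 21 give the modal response.
example = [1] * 18 + [0] * 3
consensus = modal_consensus(example)   # 18/21, about .857
```

Averaging this proportion over the 24 occupations for a given characteristic yields the per-scale consensus figures (e.g., .84 for TALK) quoted in the text.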
A proportion of less than .80 (boldface in the table) occurs in 29 percent of the 144 rater-by-job combinations for the service jobs but in only 17 percent of the 144 combinations for manufacturing jobs. To assess the consistency of individual raters in rating each factor, we calculated the correlation across all jobs between the rating of each rater and the average rating of all other raters. Since half of the raters rated the first set of job descriptions for the 24 occupations and half rated the second set, the two groups of raters were analyzed separately. Table E-8 gives the correlations of each rater with the average of the other 20 raters in his or her set. For raters who had no variance on the characteristic in question across all jobs (that is, raters who rated all jobs the same way on a given characteristic), this correlation could not be calculated. These ratings are denoted by asterisks in the table. Results indicate that there is little problem with the consistency of

ratings for CLIMB, TALK, and HAZARDS, as witnessed by the predominance of correlations of .80 and above. The low correlations for COLD, HEAT, WET, and ATMOSPHR are a result of the infrequency of a positive rating and do not necessarily reflect inconsistency. The low correlations for STOOP, REACH, SEE, and NOISE, on the other hand, are indicative of inconsistency among the raters, since these characteristics occur sufficiently often to compute a meaningful correlation. Generally, these results suggest that in order to achieve a greater degree of consistency among raters, given the amount of information available in the description, ratings on all these dichotomous variables should be established by pooling the judgment of at least three or four raters (see the technical note at the end of this appendix). For the variables with the lowest degree of consistency, 8 or 10 raters would be needed to achieve stable and consistent responses. As mentioned previously, however, many analysts felt that the descriptions contained insufficient information with which to assign these particular ratings. Perhaps if additional information were incorporated into the description, higher levels of consistency would be achieved with the same, or only a slightly larger, number of raters.

TECHNICAL NOTE

More precise estimates of the number of raters needed to increase alpha reliability to desired levels can be obtained using the following procedures: Coefficient alpha (a), the reliability (homogeneity) of a sum or average of k homogeneous items or raters, is given by

    alpha = [k / (k - 1)] * [1 - (sum of sigma_i^2) / sigma_T^2]

where sigma_i^2 is the variance of the ith item and sigma_T^2 is the variance of the sum of the k items. If we let c-bar be the average intercovariance, c-bar = [sum over i not equal to j of sigma_ij] / [k(k - 1)]. If we also let v-bar be the average variance, then alpha can be written as

    alpha = k^2 c-bar / [k(k - 1) c-bar + k v-bar]

where sigma_T^2 = k(k - 1) c-bar + k v-bar.
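The two expressions for alpha above are algebraically identical, since the variance of the sum decomposes into the k item variances plus the k(k - 1) intercovariances. A brief sketch verifying this on hypothetical ratings (3 raters scoring 4 jobs):

```python
def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def covariance(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def alpha_from_items(items):
    """alpha = (k/(k-1)) * (1 - sum of item variances / variance of total)."""
    k = len(items)
    totals = [sum(job_scores) for job_scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(variance(it) for it in items)
                            / variance(totals))

def alpha_from_components(k, c_bar, v_bar):
    """Equivalent form: alpha = k^2 c / (k(k-1)c + kv)."""
    return (k * k * c_bar) / (k * (k - 1) * c_bar + k * v_bar)

# Hypothetical ratings: 3 raters (rows) x 4 jobs (columns).
items = [[1, 2, 3, 4], [2, 2, 4, 5], [1, 3, 3, 5]]
k = len(items)
c_bar = mean([covariance(items[i], items[j])
              for i in range(k) for j in range(k) if i != j])
v_bar = mean([variance(it) for it in items])
# alpha_from_items(items) and alpha_from_components(k, c_bar, v_bar)
# give the same value.
```

The second form is the one used in the note: given estimates of the average intercovariance and average variance, it projects alpha for any number of raters k.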

[Pages 330-333 contain the appendix tables, including Table E-8, the correlation of each rater's ratings with the average of the other 20 raters in his or her set (asterisks denote raters with no variance on the characteristic in question). The table entries could not be recovered from the machine-read text.]
It follows that

\[
\alpha = \frac{k(c/v)}{1 + (k-1)(c/v)} = \frac{kr}{1 + (k-1)r},
\]

where r = c/v is the so-called intraclass coefficient of correlation (see Stanley, 1971:398). That is, the logic of alpha is exactly the same as the logic of the Spearman-Brown formula, with r, the average interrater reliability, being stepped up, via Spearman-Brown, to alpha, the reliability of the average of k raters. Thus, to find r from alpha, we use formula 4.8 from Allen and Yen (1979), in our notation:

\[
r = \frac{\alpha}{k - (k-1)\alpha}.
\]
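Both directions of this Spearman-Brown relation are easy to check numerically. A minimal Python sketch (the function names are mine, not the appendix's; the numbers are illustrative):

```python
def stepped_up_alpha(r, k):
    """Spearman-Brown: reliability of the average of k raters,
    given the average interrater correlation r."""
    return k * r / (1 + (k - 1) * r)

def interrater_r(alpha, k):
    """The inverse: average interrater correlation implied by an
    alpha obtained from k raters."""
    return alpha / (k - (k - 1) * alpha)

# Round trip: stepping r up to alpha and back recovers r.
alpha = stepped_up_alpha(0.09, 21)   # 21*0.09 / (1 + 20*0.09) = 0.675
r = interrater_r(alpha, 21)          # recovers 0.09
print(alpha, r)
```

The round trip confirms that the two formulas are exact inverses of each other, so either quantity can be recovered from the other given k.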
For example, for the 21 raters in a set and an obtained alpha of 0.67,

\[
r = \frac{0.67}{21 - 20(0.67)} \approx 0.09.
\]

The number of raters we would need to raise \(\alpha\) from 0.67 to 0.80 is then

\[
k = \left(\frac{0.8}{1 - 0.8}\right)\left(\frac{1 - 0.09}{0.09}\right) \approx 40.
\]
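This worked example can be reproduced in a few lines of Python. The function name is mine; the numbers (21 raters, alpha of 0.67, target of 0.80) are the appendix's:

```python
def raters_needed(target_alpha, r):
    """Spearman-Brown solved for k: the number of raters needed for the
    averaged rating to reach target_alpha, given the average
    interrater correlation r."""
    return (target_alpha / (1 - target_alpha)) * ((1 - r) / r)

# Interrater correlation implied by alpha = 0.67 with 21 raters:
r = 0.67 / (21 - 20 * 0.67)   # about 0.088, rounded to 0.09 in the text

# With the rounded r = 0.09, reaching alpha = 0.80 takes about 40 raters:
k = raters_needed(0.80, 0.09)
print(round(k))  # -> 40
```

Note that the answer is sensitive to rounding r: carrying the unrounded r (about 0.088) through the same formula gives roughly 41 raters rather than 40.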