Research on Future Skill Demands: A Workshop Summary

5
Promising New Data and Research Methods

Moderator Christopher Sager (University of Central Florida) introduced the speakers, promising a lively discussion of new data and research methods for analyzing changes over time in workplace skill demands.

FEASIBILITY OF USING O*NET TO STUDY SKILL CHANGES

Industrial/organizational psychologist Suzanne Tsacoumis provided an overview of the Occupational Information Network (O*NET) database and described its potential for researching recent changes in skill demands and projecting future changes (Tsacoumis, 2007a). She explained that, during the 1930s, the U.S. Department of Labor (DOL) created the Dictionary of Occupational Titles (DOT), a compilation of occupational data from trained job analysts who observed and interviewed workers. The DOT was updated several times over the following decades, and the 1991 edition included information on about 12,000 occupations. A DOL advisory panel, convened in 1990 in response to increasing criticism of this large, expensive, and inflexible job catalogue (e.g., National Research Council, 1980), recommended creating an electronic database that would collect more information and rely primarily on surveys of job holders to update information on jobs. This recommendation led to the creation of O*NET, which uses a common language to describe many occupations, facilitating comparisons and analysis.
Overview of the O*NET Database

Tsacoumis described two core elements of O*NET—the content model and the occupational taxonomy. The content model, based on extensive research on job analysis (Peterson, Mumford, Borman, Jeanneret, and Fleishman, 1999), organizes job information into six broad categories (see Figure 5-1). There are three types of information related to the individual worker: (1) characteristics, such as abilities the worker brings to the job; (2) requirements for entry into the occupation, including skills, knowledge, and education; and (3) experience required for entry, including training, skills, and licensing. The content model also includes three types of information related to the job: (1) occupational requirements, such as what work activities are performed; (2) workforce characteristics, including information on projected demand for the occupation; and (3) occupation-specific information, including tasks and technology. Tsacoumis explained that within each of these six broad categories there is a wealth of additional information and descriptors (Tsacoumis, 2007b).

FIGURE 5-1 The O*NET content model. SOURCE: National Center for O*NET Development (2007). Reprinted with permission.

Although the O*NET occupational information was originally structured according to the Occupational Employment Statistics classification system of the Bureau of Labor Statistics (BLS), an Office of Management and Budget directive in the year 2000 led to a reorganization of the information to align with the Standard Occupational Classification (SOC) system.1 The current taxonomy, known as O*NET-SOC 2006, includes 812 SOC occupations for which data are being gathered.

The National Center for O*NET Development, which develops and maintains the database, obtains most of the data related to these occupations from job incumbents. The center regularly surveys workers in targeted occupations to obtain information on job tasks, skills, and the education and training required to enter the occupation. However, the center obtains information on abilities from a different source—trained job analysts, who review the other types of updated information provided by the job incumbents and rate the ability levels required for various jobs. This approach was selected because "the constructs that are represented by … abilities are … harder for some people to understand," Tsacoumis explained. All ratings, by job analysts and by job incumbents, are carefully analyzed in terms of reliability, inter-rater agreement, and standard errors of the mean in order to evaluate and improve data quality.

Tsacoumis said a "rigorous sampling activity," including random sampling of businesses and of employees in those businesses, is used to identify job incumbents. The sampling procedures and questionnaires were developed on the basis of an extensive review of the literature and were pilot-tested for effectiveness. As a result, they have had a high response rate, with 70 percent of businesses and 66 percent of employees in those businesses agreeing to complete the online surveys.

Tsacoumis explained that several different O*NET databases have been developed over the past decade. When the content model was initially created, in the mid-1990s, it was populated with the only source of job information available at that time—data collected by job analysts for inclusion in the DOT. She said this was called the analyst database.
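The kinds of quality checks Tsacoumis mentioned can be illustrated with a minimal sketch. The statistics below (mean, standard error of the mean, and a crude pairwise agreement index) are illustrative stand-ins, not O*NET's published procedures:

```python
# A minimal sketch (not O*NET's actual procedure) of rating quality
# checks: mean rating, standard error of the mean, and a simple
# inter-rater agreement index for one descriptor.
import statistics

def rating_quality(ratings):
    """Summarize a set of 1-7 level ratings from independent raters."""
    n = len(ratings)
    mean = statistics.mean(ratings)
    sem = statistics.stdev(ratings) / n ** 0.5 if n > 1 else 0.0
    # Share of rater pairs within one scale point of each other
    # (a crude agreement proxy, not a published O*NET statistic).
    pairs = [(a, b) for i, a in enumerate(ratings) for b in ratings[i + 1:]]
    agreement = sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)
    return mean, sem, agreement

mean, sem, agreement = rating_quality([4, 5, 4, 4, 6])
```

High agreement and a small standard error would suggest the descriptor is being rated consistently; low values would flag it for review.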
However, since 1998, most information has been obtained from job incumbents. This information is updated and released to the public in a new O*NET database every six months. Currently, she said, O*NET-SOC 11.0 includes updated information on 680 occupations.

Uses of the O*NET Database

Tsacoumis said that many individuals and organizations use O*NET for a variety of purposes. Students and educators use it to understand what occupations are available and to plan for future careers. Job seekers use it to access information on demand for various occupations and the types of

1 Tsacoumis explained that analysts developed a comprehensive system of crosswalks, allowing comparison of information from the O*NET-SOC 2006 taxonomy with information from the DOT and other occupational information systems (Tsacoumis, 2007b).
skills, knowledge, abilities, and education required for entry into those occupations. However, she said, the question of the day is how O*NET can be used to assess current and future skill demands.

First, she said, some organizations are already using O*NET to project future skill demands. The Projections Managing Partnership, a collaboration that includes DOL and several other organizations, has linked O*NET data with BLS occupational projections in order to project future skill demands and potential skill gaps in different states. Using this approach, the state of Illinois recently projected potential shortages of 15 skills in the year 2012; the largest projected skill shortages are in reading comprehension, active listening, speaking, and writing (Ginsburg and Robinson, 2006). Other organizations have linked skills information from O*NET with data from BLS and the Census Bureau to analyze the supply of and demand for skills in local labor markets.

Second, Tsacoumis suggested that researchers may want to consider using the various categories of O*NET information, not just the information on skills, to study changes in skill demands. For example, information on generalized work activities or abilities may be valuable for some studies. She outlined several options for using data from O*NET to investigate changes in skill demand over time. One option, she said, would be to compare information from the earlier analyst database with information from the updated O*NET-SOC 11.0 database. These longitudinal data could be used to assess changes in skill demands in one or more of the 680 occupations in the current database, and the data could be analyzed in a variety of ways. A second option would be to couple O*NET information with BLS occupational employment survey data to identify rapidly growing occupations and examine the skills, abilities, or other characteristics of these occupations.
PROJECTING THE IMPACT OF COMPUTERS ON WORK TO 2030

Economist Stuart Elliott (National Research Council) described an approach to projecting the impact of computers on work, as well as a pilot application of the approach. He emphasized that his goal was not "to claim that the results are right," but "to get help from all of you" about how the approach could be improved in the future (Elliott, 2007a).

Elliott explained that his focus on the year 2030 resulted from his interest in rapidly evolving computer technology and in K-12 education. Based on his expectations that major changes in education might be needed to respond to rapidly developing computer capabilities and that it would take at least a decade to implement such changes in schools, he said it was important to "shift your focus into the future." Referring to an earlier presentation (see Chapter 2), Elliott said that David Autor had focused on examining technological changes that occurred in the past. Elliott advocated looking "forward rather than backward" in order to avoid missing possible "substantial changes" in work "until it is too late" to make the required changes in education. He emphasized that it is time to seriously examine the possibility "of computers being able to do effectively all human skills."

Elliott described the two main elements of his proposed approach. First, he examined computer capabilities through the lens of human skills. He said that O*NET provides a taxonomy of human skills, serving as a guide for examining the computer science literature in order to compare human and computer skills. Second, he used the current research literature in computer science to predict future computer capabilities, by assuming that ideas now being discussed in the research would be developed in the next decade and widely diffused in the following decade (Elliott, 2007b).

Describing his pilot application of the approach, Elliott noted that, among the many sets of descriptors included in the O*NET database (O*NET Online, 2007), he focused on abilities. He examined 52 different dimensions of ability included in O*NET and discarded 30, based on his belief that computers are already better than humans along those dimensions. Elliott explained that he then grouped the remaining abilities into four categories: (a) language, (b) reasoning, (c) vision, and (d) movement. He said that the questionnaires used to collect information from job incumbents for the O*NET database include "anchoring tasks," which help to define seven levels of ability in these categories. For example, on the language abilities scale, reading a street sign is rated at 2, a low level; writing a letter of recommendation or giving multistep instructions is rated at 4, a medium level; and the high-level task of giving a lecture on a technical subject is rated at 6.
Next, Elliott said, he looked for recent articles in the journal Artificial Intelligence that described active research programs in enough detail to be compared with the abilities ratings in O*NET. For example, research currently under way in the area of computer language abilities aims to provide customer service for sales and repairs (Barbuceanu, Fox, Hong, Lallement, and Zhongdong, 2004) and to describe the movement of cars in a traffic video (Nagel, 2004). Based on his expectation that these projects will be developed over the next decade and widely deployed in the following decade, he placed expected computer language abilities in 2030 at 4, a medium level, on the 7-point O*NET scale. After applying the same steps to the other groups of abilities, he estimated that computer reasoning ability would be at level 5, vision at level 3, and movement at level 3 by the year 2030. Cautioning that “you are not supposed to believe any of this,” because it is just a pilot effort, Elliott went on to describe the third step, in which he compared the projected ratings of computer abilities with the current levels of human abilities in various occupations included in the O*NET database.
For example, O*NET data indicate that, on average, top executives have an ability level of 5 in language, 5 in reasoning, 4 in vision, and 2 in movement (see Table 4-1). Because the language and vision abilities are higher than those projected for computers in 2030, he said, he does not project a major impact on top executives. However, he did project that computers would be capable of substituting for most K-12 teachers, whose abilities are currently rated on the O*NET ability scales at 4 in language, 4 in reasoning, 2 in vision, and 2 in movement, and for most workers in food service, retail service, and other occupations. His overall projection is that computer abilities could substitute for human abilities in occupations that currently employ 60 percent of the national workforce (see Table 5-1).

Elliott explained that, although he did not mean to suggest that there would be no teachers in 2030, he does think it is possible that teachers at current ability levels would be replaced over time with teachers possessing higher levels of ability. He also noted that his pilot analysis focused narrowly on cognitive abilities, when, in reality, teachers also play a social and emotional role in children's development. Elliott suggested that a similar approach could be applied by examining O*NET measures of social and emotional skills and examining current research aimed at engaging computers in social and emotional reasoning and interaction.

Elliott concluded that the real bottom line of his analysis is that projecting future computer abilities is a very important part of any effort to project future skill demands. He argued that it is possible to project future computer and human abilities in a systematic way and that his preliminary results from the pilot analysis suggest that such a systematic approach would be worthwhile.
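The dimension-by-dimension comparison at the heart of this step can be sketched in a few lines. The levels come from the figures reported above; the function and occupation names are illustrative, and this is a simplification of the pilot analysis, not Elliott's actual procedure:

```python
# Toy illustration (not Elliott's actual code): an occupation is
# flagged as vulnerable only if the projected 2030 computer level
# meets or exceeds the occupation's required level on every dimension.
PROJECTED_2030 = {"language": 4, "reasoning": 5, "vision": 3, "movement": 3}

def vulnerable(occupation_levels, computer_levels=PROJECTED_2030):
    return all(computer_levels[dim] >= level
               for dim, level in occupation_levels.items())

# Ability profiles reported in the text (O*NET 1-7 level scales).
top_executives = {"language": 5, "reasoning": 5, "vision": 4, "movement": 2}
k12_teachers = {"language": 4, "reasoning": 4, "vision": 2, "movement": 2}
```

Under this rule, top executives are not flagged (their language and vision levels exceed the 2030 projections) while K-12 teachers are, matching the projections described in the text.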
RESPONSE TO TWO PRESENTATIONS

Sociologist Kenneth Spenner (Duke University) commented on the two presentations described above.

Response: Feasibility of Using O*NET to Study Skill Changes

Spenner concurred with what he described as the most important part of the paper by Tsacoumis—her conjecture that O*NET could be used to research changing workplace skills over time. While agreeing with her that O*NET could be used to "produce a full map of both the content and compositional shifts in the United States economy,"2 Spenner identified six key methodological challenges.

2 See Spenner (1983) for a discussion of "content" shifts (changes in the skill required in individual jobs) and "compositional" shifts (changes in the mix of occupations in the national economy).
TABLE 5-1 Projected Displacement by 2030 in Major Occupational Groups

Major Occupational Group | Percentage of Total Employment | Percentage Displaced Within Group
11-0000 Management | 6 | 41
13-0000 Business and financial operations | 3 | 32
15-0000 Computer and mathematical science | 2 | 21
17-0000 Architecture and engineering | 2 | 11
19-0000 Life, physical, and social science | 1 | 10
21-0000 Community and social services | 2 | 36
23-0000 Legal | 1 | 6
25-0000 Education, training, and library | 6 | 74
27-0000 Arts, design, entertainment, sports, and media | 2 | 50
29-0000 Health care practitioners and technical | 5 | 10
31-0000 Health care support | 2 | 29
33-0000 Protective service | 2 | 16
35-0000 Food preparation and serving related | 8 | 88
37-0000 Building and grounds cleaning and maintenance | 4 | 78
39-0000 Personal care and service | 3 | 81
41-0000 Sales and related | 11 | 93
43-0000 Office and administrative support | 17 | 90
45-0000 Farming, fishing, and forestry | 1 | 43
47-0000 Construction and extraction | 5 | 39
49-0000 Installation, maintenance, and repair | 4 | 12
51-0000 Production | 7 | 53
53-0000 Transportation and material moving | 7 | 64
TOTAL | 100 | 60

NOTE: This table projects the portion of 2004 employment vulnerable to displacement by computers given the current skill sets used in each major occupational group. It does not reflect changes to employment that might occur from restructuring occupations toward higher level skills.
SOURCE: Elliott (2007b).
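The TOTAL row can be reproduced from the body of the table: the overall figure is the employment-weighted average of the within-group displacement percentages. Because the published shares are rounded to whole percents, the recomputed value lands near, rather than exactly on, the published 60 percent:

```python
# Recomputing the bottom line of Table 5-1 as an employment-weighted
# average. Values are the rounded percentages printed in the table,
# listed in the table's row order.
groups = [  # (share of total employment, percent displaced within group)
    (6, 41), (3, 32), (2, 21), (2, 11), (1, 10), (2, 36), (1, 6),
    (6, 74), (2, 50), (5, 10), (2, 29), (2, 16), (8, 88), (4, 78),
    (3, 81), (11, 93), (17, 90), (1, 43), (5, 39), (4, 12), (7, 53),
    (7, 64),
]
total_share = sum(share for share, _ in groups)  # 101 due to rounding
overall = sum(share * pct for share, pct in groups) / total_share
```

With the rounded inputs the result is roughly 60 percent, consistent with the table's TOTAL row.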
First, he asked whether DOL would retain all the data from different time periods and make them accessible to researchers.

Second, noting that earlier data were obtained from job analysts (as the pilot version of O*NET was populated with information from the DOT) while more recent data have been provided by job incumbents, Spenner asked about the validity and reliability of comparisons between these two types of job information.

Third, he said that the occupational classification system used to organize the data had changed more than once during the first seven years of populating O*NET. Crosswalks between the systems "can be extremely messy or indeed even inadequate" if, for example, an occupation at one point in time splits into multiple occupations at another point in time.

Fourth, Spenner asked whether there were any "built-in dependencies" in the ways a single job is analyzed at different points in time. He noted that, if job incumbents or analysts were allowed to see earlier ratings and simply update them, this would be "a potential serious methodological limitation," as happened when the third edition of the DOT was used to update the fourth edition (Cain and Treiman, 1981).

Fifth, he asked which of the "200-plus items that only an industrial-organizational psychologist could love" included in the O*NET database would provide the best measures of skill. Spenner said that Tsacoumis (2007b) had acknowledged the continuing debate among industrial-organizational psychologists about the quality of such O*NET descriptors as skills, knowledge, and generalized work activities.

Finally, noting that a planned O*NET data file on new and emerging occupations would be "a fascinating source of new data," Spenner encouraged also creating a data file on dying occupations.
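The crosswalk problem Spenner raised in his third point is easy to see in miniature. In the hypothetical mapping below (all codes invented for illustration), one old occupation splits into two newer ones, so a skill score recorded once in the old system has no single later counterpart:

```python
# Hypothetical crosswalk between two occupational classification
# systems; all codes are invented, not real DOT or O*NET-SOC codes.
OLD_TO_NEW = {
    "OLD-100": ["NEW-210"],             # clean one-to-one match
    "OLD-200": ["NEW-310", "NEW-320"],  # occupation split in two
}

def new_codes(old_code):
    """Return the successor occupation codes, if any."""
    return OLD_TO_NEW.get(old_code, [])

# Splits are what make longitudinal comparison "messy": any skill
# score attached to OLD-200 cannot be matched to a single later code.
ambiguous = [old for old, new in OLD_TO_NEW.items() if len(new) > 1]
```

Occupations that merge, split, or disappear between taxonomies all force analytic decisions that a simple lookup cannot resolve.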
Response: Projecting the Impact of Computers on Work in 2030

Spenner described Elliott's paper as "fascinating" and encouraged the audience to read it. He said that Elliott had introduced a new approach to projecting skill demands by examining the detailed literature on a specific technology and projecting changes under a set of assumptions to "signal possible levels of skill change in the economy." Saying he expected that Elliott's "astounding" prediction that computers might displace about 60 percent of the workforce "both raises some eyebrows and generates some discussion," Spenner said he would focus on some major methodological issues and "absolutely heroic assumptions" that would need to be resolved in order to move the approach beyond the pilot version.

First, Spenner questioned the assumption that the exponential increase in computer processing power observed over the past century (Moravec, 1999) would continue unabated over the next few decades. He said that his colleagues in computer science had mentioned that such factors as
computer memory, input-output transfer speeds, power demands, and even heat might constrain the rate of future advances. Spenner suggested that a revised methodology could model different scenarios for the rate of improvement in computer processing power.

Second, Spenner asked whether the sample of articles Elliott had reviewed is representative of the current state of knowledge about artificial intelligence, noting that it is possible to expand the sample. He suggested inviting a group of artificial intelligence experts to independently review the sample of articles Elliott had selected to assess their usefulness as a basis from which to "extrapolate computing technology developments."

Third, Spenner asked whether the pilot method had used the right measures of skill within the O*NET taxonomy. He suggested inviting expert industrial-organizational psychologists to evaluate the decision to use the ability descriptors rather than the skills and knowledge descriptors or other measures included in O*NET.

Fourth, Spenner suggested that Elliott had used "dead reckoning" when reviewing a group of articles describing artificial intelligence research projects and then rating the computer ability levels in terms of the O*NET ability scales. He joked that Elliott might be "perfectly valid and reliable" now, but that he was worried about what might happen if Elliott decided to shift fields. To address this problem, Spenner suggested inviting expert job analysts to score and rate the selected articles about artificial intelligence "as though they were scoring and rating an occupation."

Fifth, Spenner described weaknesses in the model of technological and occupational change in the pilot study.
In this model, he said, technology is adopted "swiftly, smoothly, and efficiently." However, studies of the history of adoption of other technologies, ranging from railroads to electricity to flexible manufacturing systems, illuminate a more "jerky, discontinuous, and problematic" process (Cyert and Mowery, 1988; Granovetter and McGuire, 1998).

Spenner also said that, in the pilot model, technology affects jobs in only one way—by destroying them. He said the model does not allow for "compositional shifts" (changes in the national mix of occupations), although previous research suggested that such compositional shifts may have a greater impact on overall national skill demands than "content shifts" (changes in the skills required within jobs) (Spenner, 1983). To illustrate this weakness, Spenner said that, if the model had been applied 30 years ago, it would have correctly predicted that the introduction of digital telephone switching systems would eliminate telephone operators—a direct content shift. However, the model would have totally missed the compositional shifts that took place as digital switching technology enabled an "explosion" of new industries and occupations built around 800 numbers, including jobs in telephone call centers and telemarketing.

In another criticism, Spenner said that Elliott's model of technological change
excludes production functions, switching costs, opposition by groups of workers, and "any effects from the organizational environment." At times, Spenner said, the model seemed to have "annihilated all economists and sociologists," leaving only industrial-organizational psychologists.

Finally, Spenner observed that the model assumes that all change driven by computer technology upgrades skill demands and eliminates occupations. "To my knowledge," he said, "this would be the first known example of a technological change in the past 200 or more years that had that specific signature."

Spenner concluded that, despite all of these reservations, he believes that the pilot model has "great promise" and that it would be worthwhile to improve it in some of the ways he had suggested.

A NEW SURVEY OF WORKPLACE SKILLS, TECHNOLOGY, AND MANAGEMENT PRACTICES

Sociologist Michael Handel (Northeastern University) said he had just finished collecting data in the first wave of a new survey focusing on skills, technology, and management practices, known as the STAMP survey (Handel, 2007a, 2007b). He explained that the survey was motivated by research on several important policy questions, including the growth of wage inequality; the employment and earnings of less skilled workers; racial and ethnic inequalities in the labor market; and transitions from welfare to work. In addition, Handel noted that there is continuing concern about the quality of education, particularly in terms of its role in supporting U.S. international competitiveness. Finally, he said that journalists and the public are interested in how work is changing. He asserted that press accounts and some government reports, such as that of the Secretary of Labor's Commission on Achieving Necessary Skills (U.S. Department of Labor, 1991), assume that "change is unprecedented. It is rapid.
It is ubiquitous and it is accelerating."

To inform all of these concerns and assumptions about the workplace, Handel said, the STAMP survey focuses on four key questions:

1. How many jobs require what levels of various skills, computer use, and participation in employee involvement practices? In other words, what is the skill profile of American jobs?
2. How are skill requirements, technology, computer use, and employee involvement related to each other?
3. What are the effects of skill requirements, computer use, and employee involvement on wages, working conditions, and other job characteristics (e.g., work intensity, layoffs, job satisfaction)?
4. What are the trends in types and levels of skill requirements, technology use, and employee involvement practices, in their interrelationships, and in the relationships between those three variables and wages, working conditions, and other outcomes?

Handel said that researchers had long recognized a "data gap" regarding workplace skills and had periodically called for improved measures, as early as 1983 (Spenner, 1983) and as recently as 2002 (U.S. Department of Health and Human Services, 2002). Noting that the last comprehensive national survey of the quality of work was conducted in 1977 (Quinn and Staines, 1979), Handel said that the STAMP survey tries to fill this gap.

Turning to the sample and survey administration, Handel described STAMP as a random-digit-dial telephone survey of employed wage and salary workers in the United States who are at least 18 years old. The first wave of the survey was conducted, using English and Spanish questionnaires, in 2005. The sample size is slightly over 2,300, and the survey uses a refreshed panel design similar to the one used in the Quality of Employment surveys in the 1970s (Quinn and Staines, 1979). Handel explained that he hoped to reinterview as many people as possible three years after the initial survey administration, and he will also interview a new, smaller sample to address expected attrition from the original panel. The second wave of data will be designed to be "fully representative of the labor force at that time," to allow models of career growth and trend analyses.

Handel then presented an outline of the survey content (Box 5-1), noting that it addresses cognitive skills that would be of interest to educators—such as reading, writing, and mathematics—and also interpersonal job tasks and physical job tasks. The survey also addresses questions about worker autonomy and includes an extensive battery of items on computer and noncomputer technology.
In addition, he explained, the survey addresses employee involvement, job downgrading, and job satisfaction—all topics that have been part of a public debate about the availability of "good jobs" versus "bad jobs." To address all of these topics, the survey includes 166 questions and takes about 28 minutes to complete.

Handel said the measurement philosophy of the survey is to obtain individual-level data. He explained that, because he was not satisfied with existing measures of skills, he tried to write questions focusing on objective behavior and to make both the questions and the response options intuitively meaningful to respondents. The questions were designed to cover the three survey domains of skills, technology, and employee involvement, aiming to capture the full range and levels of complexity in these domains. As an example, Handel presented the questions about mathematics, which include "filters," such as "Do you use mathematics on your job in any way?" Those who respond affirmatively are asked first about the use of addition, subtraction, multiplication, and division and then about the use of any more complex forms of mathematics, such as algebra, geometry, trigonometry, statistics, and calculus. Handel noted that these questions, as well as the survey questions on reading and writing, refer to "concrete behaviors, concrete tasks that generalize across occupations."

BOX 5-1
Content of the STAMP Survey

Skill and Task Requirements
- Cognitive skills: math, reading, writing, documents; problem-solving; education and training requirements
- Interpersonal job tasks
- Physical job tasks

Supervision, Autonomy, Authority
- Closeness of supervision, autonomy, repetitiveness
- Supervisory responsibilities over others
- Decision-making authority over organizational policies

Computer and Other Technology
- Machinery and electronic equipment: mechanical and electronics knowledge; set-up, maintenance, and repair; equipment and tool programming
- Computers: frequency of use; use of 14 specific applications; use of advanced program features, specific and new software; training times; complexity of computer skills required; adequacy of respondents' computer skills; computer experience of nonusers in prior jobs

Employee Involvement
- Job rotation, cross-training, pay for skill
- Formal quality control program
- Team activity levels, responsibilities, and decision-making authority
- Bonus and stock compensation

Job Downgrading
- Downsizing, outsourcing, technological displacement
- Promotion opportunity
- Work load, pace, and stress
- Reductions in pay and retirement and health benefits
- Strike activity

Job Satisfaction

SOURCE: Handel (2007a).

Handel then presented preliminary survey results. Although 94 percent of respondents reported using at least simple levels of mathematics at work, only 22 percent said they use more complex mathematics, and only 5 percent said they use calculus. Similarly, a high percentage of people reported writing at work, but only about one-fourth write documents as a regular part of their workday.

Turning to results in the domain of computers and technology, Handel said that 16 percent of respondents indicated that they had been introduced to new software that took more than a few days to learn, and only about 12 percent indicated that they used macros or formulas when working with spreadsheets. He explained that the questions had been specifically designed to measure the level of complexity in respondents' use of computers.

In the domain of employee involvement and management practices, Handel said that about a quarter of respondents reported belonging to a work team. However, because research shows that the word "team" may have a variety of meanings (Appelbaum, Bailey, Berg, and Kalleberg, 2000), the survey included a battery of items to elicit information about team activities. The responses indicate that about 17 percent of the sample participated in a team that had responsibility for quality improvement.

Handel ended by identifying several possible extensions of the survey. He noted that, if the survey were repeated, "it could be a social indicator for monitoring trends." In addition, survey results (from employees) could be linked with data from employers or with test score data.
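The filter structure Handel described can be sketched schematically. The field names and branching below are invented for illustration; STAMP's actual instrument wording and coding differ:

```python
# Schematic sketch of a filtered survey module (invented field names,
# not STAMP's actual instrument). A "no" on the gate question skips
# all of the follow-up items, as Handel described.
ADVANCED_FORMS = ["algebra", "geometry", "trigonometry",
                  "statistics", "calculus"]

def math_module(answers):
    """Walk one respondent's answers through the filter logic."""
    record = {"uses_math": answers.get("uses_math", False)}
    if not record["uses_math"]:
        return record  # filtered out: no follow-up questions asked
    record["basic_arithmetic"] = answers.get("basic_arithmetic", False)
    record["complex_math"] = answers.get("complex_math", False)
    if record["complex_math"]:
        record["forms"] = [f for f in answers.get("forms", [])
                           if f in ADVANCED_FORMS]
    return record

nonuser = math_module({"uses_math": False})
engineer = math_module({"uses_math": True, "basic_arithmetic": True,
                        "complex_math": True,
                        "forms": ["algebra", "calculus"]})
```

Filters of this kind keep interview time short for respondents whose jobs involve no mathematics while still capturing complexity for those whose jobs do.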
For example, data from the National Assessment of Adult Literacy (Kutner, Greenberg, and Baer, 2006) could be calibrated against workers' reported use of reading, writing, and mathematics on the job from the STAMP survey.

Response

Sociologist Arne Kalleberg (University of North Carolina, Chapel Hill) commented that Handel's paper not only "very nicely" frames debates about the quality of work, but also provides measures that "permit us to … assess many of the hypotheses" about work proposed by research in a variety of disciplines. Kalleberg said that the "well-designed" survey provides nationally representative data on such questions as what proportion of the workforce uses different levels of mathematics. He predicted that the survey results would provide the basis for many types of analysis, including correlations between work activities and the demographic characteristics of workers.

Kalleberg suggested that the survey results could be used to examine
Autor’s ideas about the extent to which different groups of occupations may be automated in the future (see Chapter 2). Another possibility would be to compare the survey findings with O*NET, for example by examining the correlations between the survey results and O*NET job descriptors. However, he noted that the survey measures cognitive skills, such as mathematics, reading, and writing, in more detail than social and emotional skills. Referring to the argument that social and caring skills required in jobs filled mostly by women are not fully recognized or rewarded because these skills are viewed as natural feminine qualities (Gatta, Boushey, and Appelbaum, 2007a), Kalleberg warned that, to the extent that this argument is correct, that bias would be built into Handel’s survey. While commending the coverage and strength of the survey, Kalleberg also identified several limitations. First, he noted that, to date, the survey has been conducted only at a single point in time, using a cross-sectional research design. Second, he said the survey was small, especially in comparison to the regular Current Population Survey used by BLS in developing employment projections. Third, he observed that the survey focused on “workers’ perceptions,” which are useful for describing job characteristics, but less useful for obtaining information about the firm or organization. Kalleberg said that a survey of this type might be most valuable when used in conjunction with data from other sources. Fourth, Kalleberg predicted that, when Handel submitted papers to journals, he would be asked questions about causality, such as, “Does skill cause wages or do wages cause skill?” To make causal arguments, Kalleberg suggested drawing on qualitative data. Stating that the “real value” of the survey lies in the trend analysis, Kalleberg said he would have taken a different approach than Handel had.
Kalleberg stated that survey researchers always face trade-offs between using earlier items to allow a trend analysis and developing better items in order to more accurately measure the phenomenon being studied. Kalleberg then compared example items focusing on worker autonomy. The STAMP survey item asks, “How much freedom do you have to decide how to do your job in your own way, rather than following a fixed procedure or supervised instructions?” with responses ranging from zero (no freedom) to 10 (complete freedom). The 1977 Quality of Employment survey asked about autonomy as follows: “I have the freedom to decide what I do on my job,” with responses from one (strongly disagree) to five (strongly agree). While acknowledging that Handel’s new item was “better” than the earlier survey item, Kalleberg said that it was not “that much better,” pointing out that changing the item made it impossible to examine trends in worker autonomy going back 30 years. Kalleberg said he would have chosen to use the earlier question rather than creating a new “optimal question.” Kalleberg then commented on the possibility of conducting future
waves of the STAMP survey to monitor long-term trends in workplace skills, technology, and management practices. While acknowledging that the second wave of the survey is probably already under way, he advised Handel to wait longer than three years. Kalleberg suggested that three years might not be enough time to assess change over time in workplace skills, and especially to analyze and revise the survey. He questioned whether the research design used in the Quality of Employment survey, which surveyed a refreshed panel of respondents in 1973 and again in 1977 (Quinn and Staines, 1979), had yielded very useful information and whether it is an appropriate model for the STAMP survey. Kalleberg advised Handel to take three or four more years to publish, and obtain reactions to, findings from the first wave of the survey and to use the feedback to improve the next wave of the survey.

DISCUSSION

Moderator Sager asked Elliott whether it would be possible to retrospectively apply his method to assess how accurately it might have forecast past changes in technology and employment. He also suggested that the method might be more successful at predicting that an occupation was going to “experience turmoil” than at predicting that an occupation would be completely “annihilated.” Elliott responded that Spenner’s use of the term “annihilated” suggested that there would be 60 percent unemployment as a result of new computer capabilities, but this did not accurately describe his findings. Elliott emphasized that his analysis focuses on the distribution of skills, predicting that computers would eliminate some portion of this distribution and that employment would then be clustered in other parts of the skill distribution.
Stating that he is not concerned with which occupational titles are attached to those skill distributions, he agreed with Sager, saying that his method could be described as predicting that an occupation “whose current skill cluster is likely to be completely replaced by computers” would either “go through some transformation … or be obliterated.” Commenting on the idea of applying the method historically, he said it would be possible to go back in time and predict how such technologies as mechanical calculators and engines would replace human skills. However, he said he is not sure how to approach an important element of such a retrospective study—searching for clear examples in which technology was capable of replacing human skills but did not do so. Sager suggested that teachers, managers, and service workers who interact directly with other individuals in “a dyadic relationship” might be difficult to replace with computers. He argued that a manager may be able to motivate an employee because of a human, or emotional, commitment,
not by perfectly applying management by objectives or other management techniques. Elliott responded that it is important to “think carefully” before assuming that interpersonal human relationships could not be replaced by computers. He pointed to the example of the Eliza computer program, created by Joseph Weizenbaum of the Massachusetts Institute of Technology in 1966 (Weizenbaum, 1966). Although the program was based on scripts and simply reflected what someone said to it, Elliott said, many people reacted “in a strongly emotional way to having what felt like an intimate interaction with the computer.” Elliott said that he could not answer a question about why technology had not yet succeeded in improving educational outcomes. He said that research is currently under way to assess which technologies have positively affected student learning, but he cautioned that this research focuses on technologies developed 10 to 15 years earlier, while his paper looked ahead to educational technology that may become available in the future. David Autor made two comments on Elliott’s paper. First, he asked why Elliott had proposed that computers would have an absolute advantage over humans within a certain time frame and had not considered the comparative advantage of human labor. He noted that, in a standard economic model, there are gains from trade between two individuals, even when one has an absolute advantage in multiple activities. He argued that it is important to think about activities in which human labor is “specifically appropriate” and computers act as a complement, rather than a substitute. Autor challenged the proposition that the nation would “run out of jobs,” arguing that modern American society has an “endless ability to create activities for ourselves and do things that other people value.” Finally, he said that he found the model very mechanistic in its assumptions.
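The gains-from-trade logic Autor invokes can be made concrete with a small numerical sketch in the spirit of the standard Ricardian comparative advantage model. All production rates and prices below are hypothetical, chosen only to illustrate the argument; they do not come from the workshop.

```python
# Hypothetical numbers illustrating Autor's point: even when one producer
# (the "computer") is absolutely better at both tasks, there are still
# gains from specializing along comparative advantage.

# Units produced per hour of tasks A and B (invented rates).
computer = {"A": 10, "B": 8}
human = {"A": 2, "B": 4}

# Opportunity cost of one unit of A, measured in forgone units of B.
cost_computer = computer["B"] / computer["A"]  # 0.8 B per unit of A
cost_human = human["B"] / human["A"]           # 2.0 B per unit of A
# The computer's opportunity cost for A is lower, so it holds the
# comparative advantage in A; the human holds it in B.

hours = 10  # each party works 10 hours

# No specialization: each splits time evenly between the two tasks.
a_split = (computer["A"] + human["A"]) * hours / 2   # 60 units of A
b_split = (computer["B"] + human["B"]) * hours / 2   # 60 units of B

# Specialization: the computer produces only A, the human only B.
a_spec = computer["A"] * hours   # 100 units of A
b_spec = human["B"] * hours      # 40 units of B

# At equal prices for A and B, total output value rises from 120 to 140,
# even though the computer was absolutely better at both tasks.
print(f"no specialization: A={a_split:.0f}, B={b_split:.0f}, value={a_split + b_split:.0f}")
print(f"specialization   : A={a_spec:.0f}, B={b_spec:.0f}, value={a_spec + b_spec:.0f}")
```

This is the sense in which gains from trade survive absolute advantage; Elliott's counterargument, below, is that the model breaks down in an end state where human labor is not needed at all.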
Elliott responded that he is not predicting the disappearance of jobs, at least in the short run, but that the comparative advantage model does not apply to the “end state” he predicts. He said that, in the short run, as long as people have an absolute advantage in some tasks, then companies will engage them for those tasks. The long-term situation, in which computers “are effectively able to do everything … better than people” and the economy could operate without humans, is not the same as an international trade situation, he argued. In the trade situation, he said, although the people in the other country might not be absolutely better in any of their skills, their need for goods and services would drive them to continue finding ways to arrange trade. Elliott asserted that, in the future, when computers have the “absolute advantage on all skills” and no human labor is required, there will be no need to “trade” or consider comparative advantage between human and computer skills. Regarding the argument that people will create new jobs, Elliott said
that it is not applicable within the framework of his approach. He explained that his framework focuses on the cognitive processes people use to perform jobs, and human cognitive processes are limited. He said that it was not possible to imagine other tasks or occupations that humans might take on after computers are capable of carrying out all cognitive tasks that humans currently perform. Autor suggested that Elliott’s analogy is that humans are something like horses, which no longer have any advantage relative to cars and now live in country clubs. Unlike horses, however, humans are “residual claimants”3 on the activities they formerly carried out and so are unlikely to be “used as pet food.” Autor said that, if this analogy is correct, the U.S. economy should have already eliminated human employment, since farming and manufacturing have already been heavily automated, but this is “absolutely contrary to what has occurred,” as the national economy experiences full employment. Finally, Autor suggested that it would be worthwhile to continue to discuss the absolute advantage of humans and computers in another venue, and Elliott agreed. In response to a question, Tsacoumis said that the concept of “competencies” is an important part of the discussion of future work. Sager explained that industrial/organizational psychologists consider individual characteristics, such as abilities, interests, and personality, as somewhat basic, one-dimensional constructs. These individual characteristics combine with experience, training, and feedback in the development of skills, such as the skills needed to operate a cash register or to interact effectively with customers. Sager said that competencies are larger, multidimensional constructs that include skills, behaviors, and sometimes personality characteristics (e.g., Marrelli, Tondora, and Hoge, 2005).
Sager said that defining competencies through competency modeling studies might help companies communicate their values to employees, but that competencies might be less useful for social scientists, because they are such multidimensional concepts. Helen Ladd (Duke University) asked Elliott who would program the computers if computers were to take over teachers’ skills, and Elliott responded that researchers in computer science are currently finding answers to that question. Ladd said that software is not yet capable of responding to the variation in what children need to know and be able to do. Elliott responded that his model indicates that teachers currently use cognitive abilities that “lie within research that is being done in computer science” and that software programs using the full range of teachers’ abilities would be developed over the coming two decades.

3 In legal terminology, residual claimants are those with a claim on any remaining income of a bankrupt firm and may include shareholders, employees, and creditors (see Black, 1999).
Turning to Handel, Ladd said that, as she thinks about how to improve high school education in North Carolina, she sees a danger in his survey results. She said some people might conclude, based on his finding that only 19 percent of the workforce uses algebra, that it was no longer necessary to teach algebra to most high school students. She suggested that emphasizing the larger educational goal of helping people “to think, using lots of different tools or approaches” might be more important than focusing on one or more particular school subjects. Handel responded that his main objective had been to provide a “reality check” to speculations about the skills required for work. Acknowledging that advanced knowledge of school subjects opens “avenues of opportunity” for people, Handel agreed with Ladd that the goal of education should be to teach people to think. He concluded that he was not arguing for less education “but perhaps for more reason in the debates over education.” Sager concluded the session with several observations. First, he said that future-oriented job analysis methods face a trade-off between the length of the time horizon and the strength of the inferences. Some techniques—such as Elliott’s pilot approach—look further out on the time line but support “relatively weak inferences,” he said, while other techniques—such as using the O*NET data to analyze skill changes over time—might be used to project changes much closer in time, with stronger inferences. Emphasizing the importance of understanding these trade-offs, he said he would place the papers on future skill demands in knowledge work and service work somewhere in the middle. He said that the authors of those papers had identified some “forward-looking” occupations and industries, but that he did not know whether these examples were representative of future work more generally.
Second, he cautioned that, although large-scale efforts to analyze many jobs are valuable for assessing changes in workplace skill demands, such efforts always face a trade-off between parsimony and verisimilitude. A very simple, parsimonious model of job skills may be powerful but not provide “a rich representation of reality,” whereas a more complex model, including detailed job information on many occupations, may be expensive and cumbersome to maintain. Sager suggested that the developers of O*NET tried to change the DOT to increase verisimilitude, without making the system too large to maintain. He observed that “it remains to be seen” whether the O*NET designers made the right choices and whether the government will continue to pay for additional data collections to maintain the large O*NET database. Third, Sager said that, in comparison to the DOT, the O*NET provides “a much richer opportunity to assess its reliability and validity.” He cited as an example the ratings that job analysts make of the types and levels of abilities for various occupations. One way to assess reliability is to compare different job analysts’ ratings of a particular ability, such as reading, required for a particular occupation, to see how much they agree. Another approach is to compare different job analysts’ ratings of the 52 O*NET abilities for a particular occupation, to see how much they agree about which abilities are most and least important. In addition, it is possible to compare ratings of a particular ability across different jobs. Sager noted that all of these types of analysis can be done and have been done (Peterson, Mumford, Borman, Jeanneret, and Fleishman, 1999; see also Tsacoumis, 2007b). Overall, he observed, the O*NET developers had “done fairly well” in addressing the reliability and validity of judgments about job skills and abilities, unlike the DOT, in which many ratings were made based on the “judgment of one analyst [in] one circumstance.”
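The reliability checks Sager describes can be illustrated with a small sketch. The ratings below are invented for illustration only; actual O*NET analysts rate the importance and level of 52 abilities on standardized scales, and published analyses use more sophisticated reliability statistics than a simple correlation.

```python
# Hypothetical sketch of two of the inter-rater agreement checks described
# above: agreement on an occupation's ability profile, and agreement on a
# single ability across occupations. All ratings are invented.

def pearson(x, y):
    """Pearson correlation between two equal-length rating lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Two analysts rate the importance (1-5) of five abilities for one occupation.
analyst_1 = [5, 4, 2, 3, 1]
analyst_2 = [5, 3, 2, 4, 1]

# Check 1: do the analysts agree on which abilities matter most for this job?
r_profile = pearson(analyst_1, analyst_2)

# The same analysts rate one ability (say, reading) across six occupations.
reading_1 = [4, 5, 2, 3, 5, 1]
reading_2 = [4, 4, 2, 3, 5, 2]

# Check 2: do they agree on how the ability varies across jobs?
r_across = pearson(reading_1, reading_2)

print(f"within-occupation agreement : r = {r_profile:.2f}")
print(f"across-occupation agreement : r = {r_across:.2f}")
```

High correlations on both checks would suggest, as Sager notes of O*NET, that different analysts converge on similar judgments about which abilities matter and how much.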