In order to examine the ways in which IT is changing work, the committee first considers the current and emerging states of technological capabilities and their applications.
Changes to the technological landscape arise from two quite different forces. The first is technology creation: the combination of fundamental capabilities enabled by advances in foundational science and engineering research to yield a new functionality. The second force is technology diffusion: the adoption of these technologies in new products and services and their emergence in new markets over time.1,2
For example, consider the invention of tools such as the Internet, the mobile phone, home wireless networks, computer algorithms that recognize faces, or self-driving vehicles. Although technology for high-speed Internet connectivity has been available for decades, the diffusion of high-speed Internet connectivity to all corners of the Earth is still under way, as are its impacts on the workforce. Similarly, although technology for detecting faces in images has been available since at least the 1990s, it is only over the past 5 years that this technology has been deployed widely in cameras that now automatically detect and adjust camera focus for faces. Technology for self-driving vehicles is at an even earlier stage today, but large research and development (R&D) investments in this area
1 E.M. Rogers, 1995, Diffusion of Innovations, 4th ed., The Free Press, New York.
2 M. Cain and R. Mittman, 2002, Diffusion of Innovation in Health Care, California Healthcare Foundation, Oakland, Calif.
suggest it will mature and diffuse over the coming years, with potentially major impacts on the workforce. The rate of diffusion of technology is itself influenced by many forces, including technology maturity, cost, demand, competitive pressures, societal acceptance and norms, government policies and regulations, safety requirements, resistance by entrenched interests, and the inventiveness of entrepreneurs in creating and marketing products. Given that the diffusion of technology from its birth to widespread adoption can take many years, one can often project changes to the technological landscape by anticipating the continued development and diffusion of technologies that already exist in research laboratories or in leading-edge firms and products.3 In this sense, the research prototypes and early products of today anticipate technologies that may become widespread tomorrow.
This section characterizes recent trends in technological capabilities and technology adoption and identifies possible changes to the technological landscape over the coming years, with an eye to technologies most relevant to the workforce.
Perhaps the most obvious ongoing technology trend is the widespread use of computers, digital and online data, and the communication infrastructure of the Internet. The practice of moving services and data onto computers and online is generally referred to as “digitization.” This trend, already decades old, has affected nearly all aspects of our lives, and there are still significant opportunities for more widespread adoption. Individuals routinely see the impact of this digital infrastructure, for example, in automated teller machines (ATMs), online retail services such as Amazon, personalized advertising informed by mining traces of our personal digital lives, navigation services available in cars and on smartphones, and free video Internet calls. Business enterprises and their internal operations have been revolutionized by new computer systems that capture, organize, optimize, and partly automate business processes. Health care is also changing due to the incorporation of computing technologies, although more slowly than expected; despite sluggish penetration, computing systems are expected to have strong potential for enhancing the quality and efficiency of care.4
3 W.J. Abernathy and J.M. Utterback, 1986, Patterns of industrial innovation, pp. 257-264 in Product Design and Technological Innovation, Open University Press, Milton Keynes, UK.
Dissemination of news and opinions worldwide has also been transformed, with today’s IT and communications infrastructure superseding much of the 20th-century system of print newspapers and hard-copy mail. Online publications, e-mail, text messages, Twitter, and websites are targeted to many specialized interests, resulting in nearly instantaneous dissemination of news and opinions and a world where more people than ever before have a platform for their opinions (see Box 2.2). However, access to the necessary resources, such as high-speed Internet, is not equal among all populations. For example, a 2015 issue brief from the President’s Council of Economic Advisers highlights this “digital divide,” noting that 2013 rates of household Internet access correlate with education level of the head of household and that members of underrepresented minority groups have lower access rates. Geography also plays a significant role in determining access.5
Education has also been impacted by digitization, with increasing access to online courses, including video lectures; experts who can answer specific questions through online discussion boards such as Quora.com; and early technologies for customizing courses to individual students based on the digital trace of their performance to date—not to mention the trove of digital knowledge to be explored by learners.
This digitization of nearly every aspect of our lives has important impacts on the workforce. It has changed the nature of individual jobs, decreasing the need for some, empowering others, and creating yet others. It has created opportunities to work more productively at home using video conferencing and online business processes and has led to greater expectations that workers will be available evenings and weekends. It has changed how we find jobs: many job seekers now rely on online sites such as Monster.com or Indeed.com, and freelancers use online services such as Upwork.com or HourlyNerd.com to locate short-term jobs.
Today, most jobs involve some interaction with IT systems, driving a general need for the workforce at large to be informed about or trained on these systems—and to possess general fluency with IT. This also means
4 B. Chaudhry, J. Wang, S. Wu, M. Maglione, W. Mojica, E. Roth, S.C. Morton, and P.G. Shekelle, 2006, Systematic review: Impact of health information technology on quality, efficiency, and costs of medical care, Annals Internal Medicine 144:742-752; D. Blumenthal and J.P. Glaser, 2007, Information technology comes to medicine, New England Journal of Medicine 356:2527-2534.
5 The White House, 2015, “Mapping the Digital Divide,” Council of Economic Advisers Issue Brief, July, https://www.whitehouse.gov/sites/default/files/wh_digital_divide_issue_brief.pdf.
that people on the job increasingly encounter, and are influenced by, the problems of IT. Because of the centrality of IT, workers and businesses can develop a dependency on systems working “seamlessly” to get core work done.
Computing Power and Networking
The increasing use of digital technologies has been enabled by foundational advances in computing power and networked connectivity. Over the last five decades there has been tremendous progress in computing capacity, in line with the famous Moore’s Law, the observation that available computing power doubles roughly every 18 months. While this prediction has held remarkably well since 1965, the ability to increase power
by further miniaturizing components of electronic devices will ultimately hit a fundamental limit imposed by the optical limits on the resolution of photolithography, and ultimately by the sizes of atoms. Nonetheless, there has been major progress in the use of new parallel architectures—rather than reduction in component size—to grow computing power, enabling growth to continue to keep pace with Moore’s Law. For example, graphical processing units, or GPUs, have enabled a new family of massively parallel architectures that have gained significant popularity for machine-learning and big data applications, as discussed in the section “Advancing Technological Capabilities” below. Given current trends, shown in Figure 2.1, computing power and networking capabilities are expected to continue to advance at least over the coming decade.6 Research laboratories continue to pursue new approaches, such as quantum computing, which have not yet been proven practical, but which hold the potential for significant future improvements in computing power for some tasks. In addition to advances in computer processing hardware, access to computer networks has extended to a large portion of the population. By 2015, more than 84 percent of the U.S. adult
6 M. Galloy, 2013, “GPU vs. CPU performance data,” MichaelGalloy.com, http://michaelgalloy.com/2013/06/11/cpu-vs-gpu-performance.html.
population had access to the Internet.7 Internet bandwidth has grown by approximately 50 percent every year over the last two decades.8 Wireless connectivity has become faster and more pervasive through 3G and 4G (third- and fourth-generation) wireless protocols, while wired network speeds have also improved. By 2014, typical end-user Internet speeds reached 100 megabits per second; Google has introduced gigabit-per-second access in metropolitan areas across the United States, with companies such as AT&T and Comcast beginning to provide similar service levels. One user’s documented evolution of Internet bandwidth since 1983 is illustrated in Figure 2.2.
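The compounding implied by these two growth rates can be made concrete with a short calculation. The sketch below is illustrative only; the 18-month doubling and 50-percent-per-year figures are taken from the text above, and the function names are our own:

```python
# Rough compound-growth arithmetic for the trends described above:
# Moore's Law: computing power doubles roughly every 18 months.
# Nielsen's Law: end-user bandwidth grows roughly 50 percent per year.

def moore_factor(years):
    """Growth factor in computing power after `years`, assuming
    a doubling every 18 months (1.5 years)."""
    return 2 ** (years / 1.5)

def nielsen_factor(years):
    """Growth factor in bandwidth after `years`, assuming
    50 percent growth per year."""
    return 1.5 ** years

if __name__ == "__main__":
    # Over one decade, computing power grows notably faster than bandwidth.
    print(round(moore_factor(10)))    # 102
    print(round(nielsen_factor(10)))  # 58
```

The gap between the two curves is one reason bandwidth, rather than raw processing power, has often been the binding constraint on distributed applications.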
The Global Positioning System (GPS), an accurate satellite-based method for identifying geographic coordinates, has been an important
7 A. Perrin and M. Duggan, 2015, “Americans’ Internet Access: 2000-2015: As Internet Use Nears Saturation for Some Groups, a Look at Patterns of Adoption,” Pew Research Center, http://www.pewInternet.org/2015/06/26/americans-Internet-access-2000-2015/.
8 J. Nielsen, 1998, “Nielsen’s Law of Internet Bandwidth,” Nielsen Norman Group, last modified 2014, https://www.nngroup.com/articles/law-of-bandwidth/.
enabler of mobile computing applications. GPS dates to the 1970s, when it was developed for use by the U.S. military, with an intentionally degraded signal made available to consumers, a policy known as “selective availability.”9,10 In 2000, this intentional degradation was turned off, enabling consumer-level GPS positioning accurate to approximately 10 to 15 meters; its accuracy has since improved. This capability is now at the heart of mobile computing applications such as location-aware Internet search, real-time traffic directions, and “find my friends” social networking tools. Another benefit of GPS technology is the direct transmission of highly accurate timing signals to computing systems, allowing more effective and cost-efficient synchronization of network activities and work processes.
9 Federal Aviation Administration, “Satellite Navigation—GPS—Policy—Selective Availability,” last modified November 13, 2014, https://www.faa.gov/about/office_org/headquarters_offices/ato/service_units/techops/navservices/gnss/gps/policy/availability/.
10 W. Reynish, 2000, “The Real Reason Selective Availability was Turned Off,” Aviation Today Magazine Online, http://www.aviationtoday.com/av/issue/feature/The-Real-ReasonSelective-Availability-Was-Turned-Off_12739.html#.VrOZmDYrLMU.
American corporations have spent billions of dollars on digitizing their major processes and operations, investing in a variety of large-scale systems, such as enterprise resource planning, supply chain management, customer relationship management, human resource management, and electronic health records (EHRs). These systems can cost tens or hundreds of millions of dollars to implement and are often deployed over a period of several years. The biggest costs are in process redesign, often led by consulting firms. In addition, employee training and user documentation, manuals, and support documentation must be updated and maintained. As a result, many work processes have been significantly redesigned, or “reengineered,” as some authors have called it,11 boosting productivity and, in many cases, reducing labor requirements.
Few jobs have been untouched by the need to interact with IT systems, which also means that more and more of the workforce at large needs to be informed about or trained on IT systems. In most cases, the costs of business process redesign and employee training, including on-the-job learning, vastly exceed the direct costs of IT hardware and software. These costs have been described as investments in organizational capital and human capital since they are expected to yield benefits over many years. Thus, they add to the intangible asset base of companies and the nation, even if they are often unaccounted for on balance sheets.12
Mobile and Remote Systems
The broad impact of mobile-based IT on the workforce is due to both hardware and software system evolutions. The history of enterprise-capable mobile hardware technologies dates to the early 1990s, with the advent of systems such as the IBM Simon and the Palm Pilot. In addition to the computing power of the mobile device itself, its utility as a component of an enterprise workplace environment has expanded with wireless network bandwidth capabilities. Such capabilities have expanded from 12.2 kilobits per second (Global System for Mobile Communications standard) for the first compatible mobile devices in 1993, through roughly four orders of magnitude of growth, to the long-term evolution, or LTE, standard of 128 megabits per second in 2013.13
11 M. Hammer and J. Champy, 2006, Reengineering the Corporation: A Manifesto for Business Revolution, HarperCollins, New York.
12 L. Hitt, S. Yang, and E. Brynjolfsson, 2002, Intangible assets: Computers and organizational capital, Brookings Papers on Economic Activity 1:137-199.
Combined with the availability of mobile hardware capabilities, enterprise-based mobile communications have enabled the widespread distribution of data, tasks, and workers across a wide range of organizational settings. Since the invention of e-mail in the early 1970s, organizational growth of electronic communications has enabled increasing distribution of work and information exchange.14 “Enterprise e-mail” can be seen as celebrating its 25th anniversary in 2016-2017, dating to the release of Lotus Notes in 1991 and the Microsoft Exchange Client in 1993. The number of e-mail accounts worldwide grew 16-fold, from 25 million to 400 million, between 1996 and 1999. By 2013, business e-mail accounts had exceeded 900 million; although these represented only 24 percent of e-mail accounts, traffic to business accounts constituted a majority of e-mail traffic. Further, a majority of organizational communications were conducted via e-mail.15 Internet-based infrastructure in the workplace has also shifted telephony use, driving large-scale implementation of voice over Internet Protocol (VoIP) as well as digital voice messaging. However, there are limited data regarding the effects of VoIP versus older telephony systems on organizational productivity.
Another result of the shift to Internet-based IT systems has been the growth of videoconferencing as a productivity tool.16 Videoconferencing has enabled the geographical distribution of project work via meetings that may integrate computer presentations, face-to-face exchanges, and data sharing. The use of these forms of data and information exchange in organizations is affected by combinations of context, task urgency, and bandwidth; although studies of these aspects of organizational data sharing date to the 1990s, it is only with today’s high-speed Internet infrastructure that a majority of survey respondents report daily or weekly videoconferencing.17,18 Mobile computing, increased Internet bandwidth and infrastructure support, and cloud-based data storage can also support the growing role of flexible “hoteling” or “touchdown” spaces, which
15 S. Radicati, 2013, Email Statistics Report, 2013-2017, The Radicati Group, http://www.radicati.com/wp/wp-content/uploads/2013/04/Email-Statistics-Report-2013-2017-ExecutiveSummary.pdf.
16 J. Kruger, 2013, “New Research Finds Use of Videoconferencing Growing As an Enterprise Productivity Tool,” IMCCA (blog), http://www.imcca.org/news/new-research-findsuse-of-videoconferencing-growing-as-an-enterprise-productivity-tool.
17 B.S. Caldwell, S. Uang, and L.H. Taha, 1995, Appropriateness of communications media use in organizations: Situation requirements and media characteristics, Behaviour and Information Technology 14(4):199-207.
18 J. Kruger, 2013, “New Research Finds Use of Videoconferencing Growing As an Enterprise Productivity Tool,” IMCCA (blog), http://www.imcca.org/news/new-research-findsuse-of-videoconferencing-growing-as-an-enterprise-productivity-tool.
limit the number of fixed offices necessary for a workforce of a given size. This reduction changes the real estate footprint associated with the size of an organization.19 Together, these effects of enterprise software, IT infrastructure, and mobile computing devices substantially alter the traditional relationships between an organization’s size and workplace elements such as location, work hours, and the distribution of members of work teams.
Educational Tools and Platforms
Traditional models of higher education and training have been steadily augmented by technology for years, from the introduction of overhead projectors to current video streaming and real-time remote-meeting technologies such as Google Hangouts or Skype. IT tools such as Webex, BlueJeans, GotoMeeting, Piazza, and Blackboard can be used by college faculty to record and distribute course content, often with asynchronous file exchange and chat features, to remotely located students.
With the general availability of high-speed networks to people’s homes, universities can now stream lectures to students across the world, and students can communicate with instructors and each other via the network. This new mode of online education with many students is called a massive open online course (MOOC). In a typical MOOC, class video lectures are prerecorded and often include associated exercises that are carried out by students in isolation or in very small groups. Tests may be given over the Internet, and in many cases evaluations of tests and exercises are carried out through peer evaluation, which also provides students with a broader perspective. The ability to teach without the use of a physical classroom allows for enrollment of much larger classes, with some MOOC classes having as many as 80,000 students enrolled. Although student completion rates can be as low as 10 percent, even at that rate 8,000 motivated students may still complete such a class. Companies such as Coursera and Udacity, along with many universities, are now experimenting with a wide range of variations on this MOOC model, including methods for tuning individual course delivery for students by automatically tracking their course progress (and with different models for tracking meritorious performance and issuing certification). While these innovative educational tools have stimulated much excitement, it is also important to understand exactly who is participating in and benefiting from online courses. Although estimates are difficult to confirm, many of the participants in Coursera and edX courses are those who already have
19 K. Lazar and S. Long, 2014, “Downtown Office Market Starts to See Effects of Evolving Workspace Needs,” Shepard Schwartz & Harris LLP, http://www.ssh-cpa.com/newsroominsights-chicago-office-market-hoteling.html.
college degrees, and who may be participating for supplementary “ad hoc” or “just-in-time” learning activities. Studies suggest that students enroll in MOOCs for different reasons, with different engagement levels, and with varying capabilities for success. The MOOC environment is challenging for learners who are not already self-directed.20 In addition, access to the necessary resources, such as high-speed Internet, is not equal among all populations. For example, a 2015 issue brief from the President’s Council of Economic Advisers highlights this “digital divide,” noting that 2013 rates of household Internet access correlate with education level of the head of household, and that members of underrepresented minority groups have lower access rates. Geography also plays a significant role in determining access.21
At the same time, there are limits to what can be learned through remote online tools, particularly in fields that rely on intensive apprenticeship and significant hands-on, embodied competency. The important informal dimensions of learning through mentorship, observation, and participation may require different mechanisms. In addition to creating opportunities for learning, this technology may also change the nature of work for teachers and others in education-related professions. While the jury is still out on the final impact of these online methods for education and on which types of students they will most benefit, there is no question that they provide new access mechanisms for lifelong education, just-in-time training for workers seeking to qualify for new jobs, and educational materials for many who would not otherwise have access to them.
Peer-to-Peer Exchanges and Matching and Reputation Systems
Advances in IT have led to new online peer-to-peer exchange networks through which resource holders or distributors can easily connect with resource seekers: eBay and Airbnb are examples of companies that have capitalized on these new platforms. One requirement for success for any given application of these peer-to-peer resource-exchange systems is the development of trust between the parties, who are often unknown to one another in advance. Although technology for matching providers to seekers, and for establishing sufficient trust to support the transaction,
20 Department for Business, Innovation, and Skills, 2013, “The Maturing of the MOOC,” BIS Research Paper #130, https://core.ac.uk/download/pdf/18491288.pdf; D. Fisher, 2012, “Warming up to MOOC’s,” Chronicle of Higher Education, http://chronicle.com/blogs/profhacker/warming-up-to-moocs/44022; A. Kirshner, 2012, A Pioneer in online education tries a MOOC, Chronicle of Higher Education 59(6):21-22.
21 The White House, 2015, “Mapping the Digital Divide,” Council of Economic Advisers Issue Brief, July, https://www.whitehouse.gov/sites/default/files/wh_digital_divide_issue_brief.pdf.
has led to success for companies such as those listed above, this technology remains at an early stage.
Reputation systems, which enable providers and seekers to voluntarily rate one another after a transaction, are widely used to establish the trustworthiness of participants. However, these reputation systems rely on voluntary investment of time and energy to provide ratings and may therefore be gamed or simply skewed toward participants with strong views and available time to participate, providing potentially inaccurate or at least unrepresentative data.
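The skew problem described above can be illustrated with a small sketch. The data and the smoothing approach below are not from the report; the "Bayesian average," a common aggregation technique, simply shows one way platforms damp the influence of a few voluntary raters:

```python
# Illustrative sketch (hypothetical data): a raw average over a few
# voluntary ratings can be unrepresentative. A "Bayesian average" pulls
# sparse ratings toward a global prior until enough reviews accumulate.

def raw_average(ratings):
    """Plain mean of the observed ratings."""
    return sum(ratings) / len(ratings)

def bayesian_average(ratings, prior_mean=3.0, prior_weight=10):
    """Blend observed ratings with a prior rating of `prior_mean`,
    weighted as if `prior_weight` phantom ratings had been cast."""
    return (prior_mean * prior_weight + sum(ratings)) / (prior_weight + len(ratings))

# Suppose only the two most motivated (and angriest) customers rated.
two_angry_reviews = [1, 1]
print(raw_average(two_angry_reviews))                 # 1.0
print(round(bayesian_average(two_angry_reviews), 2))  # 2.67
```

With only two ratings, the smoothed score stays near the prior, reflecting that little is actually known; as genuine ratings accumulate, the observed data dominate the prior.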
When used in the evaluation of individual workers, such reputation systems can impact workers’ future salary potential and their ability to retain jobs. Similar reputation systems are also used in more traditional companies to obtain feedback on employees who interface with customers (e.g., customer calls are often greeted with the question “Do you agree to take a brief survey after you have completed your call?”). Given that the use of online peer-to-peer networks has grown very visibly over the past 10 years and that this trend is still in its early stages, it can be expected to continue to diffuse into new applications. Its eventual spread may be determined in part by improvements in the technology for establishing accurate reputations.
The Internet of Things
The Internet of Things is a term introduced to capture the growing connectivity of many diverse devices to the Internet. As Wikipedia defines it, “the Internet of things is the network of physical objects—devices, vehicles, buildings and other items—embedded with electronics, sensors, and network connectivity that enables these objects to collect and exchange data.”22 Already, many devices, from thermostats to home alarms, communicate via the Internet and provide phone apps to interact with them (e.g., to adjust temperature before the user arrives home). But many more devices, from transit buses to refrigerators, already make use of computer processors and are beginning to have Internet connectivity. In addition, radio-frequency identification (RFID) technology provides a low-cost method to identify and track any physical item without use of battery power and has been widely used, for example, to track items during shipping. The significance of the Internet of Things is that it will further accelerate the trend toward digitization of everything, making it possible for the Internet to serve as a communication tool for capturing, sharing, and acting on even more digital information. While the full impact of this trend is not
yet certain, experiments are already under way in cities such as Songdo, South Korea, and Santander, Spain, to explore the potential for city-wide connectivity of devices to improve aspects of city life, from the logistics of finding a parking space to improvement of air quality.
Cloud Computing
One important trend in the use of networking and computing is the growth of companies and services that offer disk storage and computing as a service over the Internet. For example, companies like Dropbox and Box offer the ability to store data in the cloud (i.e., on their servers via the Internet). Other companies such as Amazon and Google offer cloud-computing platforms in which users can rent time on very large computing clusters, accessible over the Internet, instead of purchasing their own hardware to run computationally intensive jobs. Beyond providing cloud access to raw storage and computing power, these and other IT companies, such as Microsoft, IBM, and Salesforce, now provide entire services over the Internet (software as a service). For example, Salesforce offers a cloud-based customer relationship management service used by many companies.
Cloud-based solutions have the advantage that users pay for only the storage or computation they need, without investing in their own computing infrastructure. Cloud-based services also provide a convenient way to outsource certain tasks, and software can be deployed nearly instantly across the entire Internet, with no need to physically transport equipment or people. In addition, data sharing and other forms of collaboration become easier via the cloud (e.g., Google Docs provides a cloud-based document editor in which multiple users can simultaneously edit a document and view changes being made by all parties in real time), augmenting workers’ capabilities.
The paradigm shift to digitization enabled by development of the Internet and advances in computational power, networking speed, and data capture and storage has been transforming society for decades. New and compelling uses of these technologies, enabled by enhanced connectivity and computing power, continue to emerge.
Advancing Technological Capabilities
Today, many concerned with the impact of technology on the workforce have turned their attention to the progress of technologies that perform functions commonly thought of as “human”—and thus present new opportunities for automating work functions traditionally carried
out by people, which could have the effect of eliminating jobs or changing the skills requirements and tasks associated with certain jobs. Many of these areas are still in the research and development phases. In the section below, the committee discusses the progress of these technologies.
Artificial Intelligence
Artificial intelligence, or AI, refers to principles and applications of computer algorithms that attempt to mimic various aspects of human intelligence. Although the term was coined in the 1950s, it took decades of research before aspects of AI research reached the point of significant commercial impact.
By the mid-1990s, practical, commercial AI-based systems for automating or assisting in a variety of human decision-making tasks had been developed and were being used in fraud detection and in the configuration of computer systems.23,24 While early AI systems were typically constructed manually—that is, with programmers writing computer-interpretable rules to define a computer-based decision process—there has been a shift toward AI systems based on machine learning methods—that is, algorithms that infer their own decision-making rules from training data—by harnessing large data sets. For example, fraud-detection strategies are now developed automatically by machine learning algorithms that analyze millions of historical credit card transactions.
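The contrast between hand-written rules and rules inferred from data can be sketched in a few lines. The example below is a toy, not a production fraud system: it "learns" a one-feature decision stump, choosing the transaction-amount threshold that best separates hypothetical historical fraud cases from legitimate ones, rather than having a programmer hard-code that threshold:

```python
# Toy illustration of rule inference from training data (invented data,
# not a real fraud-detection method). A decision stump picks the amount
# threshold that classifies the most historical transactions correctly.

def learn_threshold(amounts, labels):
    """Try each observed amount as a candidate threshold; keep the one
    maximizing training accuracy for the rule `amount >= threshold ->
    fraud` (label 1 = fraud, 0 = legitimate)."""
    best_t, best_correct = None, -1
    for t in sorted(set(amounts)):
        correct = sum((a >= t) == bool(y) for a, y in zip(amounts, labels))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Hypothetical "historical transactions": dollar amount, fraud label.
amounts = [12, 25, 40, 60, 900, 1200, 1500, 2000]
labels  = [0,  0,  0,  0,  1,   1,    1,    1]

threshold = learn_threshold(amounts, labels)
print(threshold)            # 900 -- inferred from the data, not hand-coded
print(1100 >= threshold)    # True: a new $1,100 transaction is flagged
```

Real systems replace the single feature and exhaustive threshold search with many features and statistical learning algorithms, but the principle is the same: the decision rule comes from the data rather than from a programmer.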
The increasing use of machine learning, along with other innovations, has produced significant progress in a variety of AI subfields, including computer vision, speech recognition, robot control, automated translation between languages, and automated decision-making.25 These advances in AI component technologies have in turn produced a number of highly visible AI systems over the past decade, including the following:
- Intelligent agents such as Apple’s Siri, Google Now, Microsoft’s Cortana, and Amazon’s Echo. These AI systems combine speech recognition, background knowledge about the user, mixed-initiative interaction with users, and a variety of specific apps to perform useful tasks. These systems demonstrate the ability to combine spoken natural-language interaction with a range of intelligent services and electronic commerce.
23 S. Zoldi, 2016, “Four Analytic Breakthroughs That Are Driving Smarter Decisions,” Fico Blog, May 26, http://www.fico.com/en/blogs/analytics-optimization/four-analyticbreakthroughs-that-are-driving-smarter-decisions/.
24 J. McDermott, 1980, “R1: An Expert in the Computer Systems Domain,” Carnegie Mellon University, https://www.aaai.org/Papers/AAAI/1980/AAAI80-076.pdf.
25 S. Russell and P. Norvig, 2010, Artificial Intelligence: A Modern Approach, 3rd ed., Prentice Hall, Upper Saddle River, N.J.
- Self-driving vehicles. Multiple universities and companies have now demonstrated self-driving vehicles. For example, in 2015 the automotive company Tesla released software that allows its customers to put their automobile into self-driving mode on public highways, and Uber has recently begun testing self-driving cars on the streets of Pittsburgh.26 This demonstrates that computer perception and control—in particular, computer vision and self-steering—have reached an important threshold of practical reliability.
- AI and robotic systems that sense and act within the physical world. An example is Nest’s intelligent thermostat, which learns to customize individual buildings to their occupants’ routines.
- AI systems capable of answering many factual questions. IBM’s Watson system defeated the world Jeopardy! champion in 2011,27 and Wolfram|Alpha28 provides a similar broad-scope resource for answering diverse factual questions. Note that Jeopardy! requires the contestant to answer unforeseen questions about a very diverse set of topics. Watson’s win demonstrates that computers can achieve human-level competence at answering diverse factual questions while using huge volumes of unstructured text as underlying sources of knowledge.
- AI game-playing systems that defeat humans at chess, backgammon, and Go. In recent years, AI systems for game playing have defeated the top human players in each of these games. Most recently, in 2016, Google’s AlphaGo system defeated the world champion Go player in a best-of-five match by a score of four to one. These achievements demonstrate the capability of machine learning to automatically discover complex problem-solving strategies by training on millions of games in which the computer plays against different variants of itself (see Figure 2.3). However, it is important to realize that this strategy-learning approach is applicable only to problems for which near-perfect simulations are feasible. For example, games can be simulated perfectly, but the effect of a bicycle hitting a rock cannot be.
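Learning a game strategy purely from self-play can be demonstrated at toy scale. The sketch below is not AlphaGo's method (which combines deep networks with Monte Carlo tree search); it is a tabular agent that improves by playing the simple game of Nim against itself. From a pile of N objects, players alternately remove one to three; whoever takes the last object wins, and the known optimal strategy is to leave the opponent a multiple of four:

```python
# Toy self-play learner for Nim (illustrative only). A single shared
# value table Q plays both sides; each move's value is updated toward
# +1 for an immediate win, or minus the opponent's best value otherwise.

import random

random.seed(0)
MAX_PILE = 10
# Q[n][a]: estimated value, for the player to move, of taking `a`
# objects from a pile of `n`.
Q = {n: {a: 0.0 for a in range(1, min(3, n) + 1)} for n in range(1, MAX_PILE + 1)}

def update(n, a, alpha=0.5):
    """Negamax-style temporal-difference update: the opponent is the
    same agent, so its best reply is read from the shared table."""
    rest = n - a
    target = 1.0 if rest == 0 else -max(Q[rest].values())
    Q[n][a] += alpha * (target - Q[n][a])

for _ in range(20000):                 # self-play episodes
    n = random.randint(1, MAX_PILE)
    while n > 0:
        if random.random() < 0.3:      # explore occasionally...
            a = random.choice(list(Q[n]))
        else:                          # ...otherwise exploit current knowledge
            a = max(Q[n], key=Q[n].get)
        update(n, a)
        n -= a                         # the "opponent" (same agent) moves next

best = {n: max(Q[n], key=Q[n].get) for n in Q}
print(best[5], best[6], best[7])       # 1 2 3: each leaves a multiple of 4
```

No strategy is programmed in; the multiple-of-four rule emerges from repeated games against itself. The same requirement noted above applies: this works only because Nim, like Go, can be simulated perfectly.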
These visible AI advances illustrate the growing competence of AI technology. The committee examines related technologies in the following sections.
26 Reuters, 2016, “Uber Debuts Self-Driving Cars in Pittsburgh,” Fortune, September 14, http://fortune.com/2016/09/14/uber-self-driving-cars-pittsburgh/.
27 D.A. Ferrucci, 2012, “Introduction to ‘This is Watson,’” IBM Journal of Research and Development 56(3.4):1:1-1:15.
28 Wolfram|Alpha is a computational knowledge engine that was developed by Wolfram Research. See Wolfram|Alpha, 2016, “About Wolfram|Alpha,” Wolfram|Alpha, https://www.wolframalpha.com/about.html, accessed May 27, 2016.
Machine Learning and Big Data
One of the most important drivers of AI advances over the past two decades has been machine learning: computer algorithms that automatically improve their competence through “experience.” This experience is often in the form of historical data, which the machine-learning algorithm analyzes in order to detect patterns or regularities that can be extrapolated to future cases. For example, given experience in the form of a historical database of medical records, machine-learning algorithms are now able to predict which future patients are likely to respond to which treatments. Given experience in the form of speech signals from a specific individual, machine-learning algorithms now automatically improve their ability to understand the accent of that particular individual. Given experience observing which movies a user watches online, machine-learning algorithms now automatically improve their ability to recommend additional movies of interest. In many cases, including the above examples, the abstracted machine-learning problem is to learn some classification function from training data consisting of input-output pairs for that function. For example, a classification mapping each patient to a recommended
treatment may be generated from automated analysis of historical data describing patients and their successful treatments.
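The input-output learning setup described above can be illustrated with a deliberately minimal sketch: a nearest-neighbor classifier written in plain Python. The patient records and feature choices below are entirely hypothetical; real systems train far more sophisticated models on much larger databases.

```python
# Minimal sketch: learning a classification function from input-output pairs.
# The "experience" is a toy, invented set of (features, outcome) records.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_predict(training_pairs, query, k=3):
    """Predict the label of `query` by majority vote of the k nearest examples."""
    nearest = sorted(training_pairs, key=lambda pair: euclidean(pair[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Hypothetical records: (age, systolic blood pressure) -> treatment outcome.
history = [
    ((35, 120), "responded"),
    ((42, 130), "responded"),
    ((68, 160), "did_not_respond"),
    ((75, 155), "did_not_respond"),
    ((50, 140), "responded"),
]

print(knn_predict(history, (40, 125)))  # extrapolates to an unseen patient
```

The point of the sketch is the pattern, not the algorithm: the program’s competence comes from the historical (input, output) pairs rather than from hand-written rules, so adding more records improves its predictions without changing the code.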
A set of machine-learning algorithms called deep neural networks has had a major impact in recent years. These complex networks of threshold elements, trained to fit the training data, are able to discover useful abstract representations of complex data. Over the past decade, deep learning has helped to advance the state of the art in computer vision, speech recognition, and other areas, especially in tasks that involve complex perceptual or sensor data.29 For example, Xu et al. have trained a deep network to generate text captions for photographic images. While this and other deep network algorithms are still limited, their ability to train on millions of examples to generate models with billions of learned parameters has led to major improvements across many applications, such as robotics, information extraction from text documents, and prediction of customer behavior. Over the coming decade, these and other machine-learning algorithms are likely to advance further, and new applications of existing algorithms remain to be explored.
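As an illustration of the “networks of threshold elements” mentioned above, the sketch below wires three logistic units into a two-layer network that computes XOR, a function no single threshold element can represent. For brevity, the weights are set by hand; in deep learning they would instead be fit to training data, typically by gradient descent over millions of examples.

```python
import math

# Illustrative sketch of a tiny two-layer network of logistic "threshold
# elements". The hand-set weights compute XOR; real deep networks learn
# such weights from data rather than having them written in.

def unit(inputs, weights, bias):
    """One threshold element: weighted sum squashed through a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def xor_net(x1, x2):
    h1 = unit([x1, x2], [10, 10], -5)    # hidden unit: roughly OR
    h2 = unit([x1, x2], [10, 10], -15)   # hidden unit: roughly AND
    out = unit([h1, h2], [10, -10], -5)  # output: OR and not AND
    return round(out)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))
```

The hidden layer is what gives the network its representational power: the two hidden units compute intermediate abstractions (OR, AND) that make the output linearly separable, the same mechanism that, at far greater depth and scale, lets deep networks discover abstract representations of images and speech.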
While algorithm development is one driver of progress in machine learning, another major driver is the growth of the online data that fuel machine-learning systems. Companies now capture growing volumes of data about their customers in order to learn to better serve and market to them. Companies have also moved an increasing fraction of their routine work flows online, thereby capturing new data that might be used to learn decision-making rules to partially automate these routine work flows. New sensors are appearing in many contexts, from cameras mounted on streetlights to pulse-sensing watches worn by individuals. Building on technological advances in wireless sensor networks and the Internet of Things, many of these data are now available in real time across the network, making it possible to embed intelligent systems in critical infrastructure. A few such examples include (1) urban mobility, with companies such as Waze providing real-time route advice, and Lyft and Uber using the Internet to match passengers to drivers; (2) smart homes and accommodation systems, with companies like Nest providing home automation and Airbnb providing significant competition to the traditional hotel and short-term rental market; (3) automated agriculture, in which weather, water, and soil data are used to automatically control farming practices; (4) the electric power grid, in which consumer behavior can be learned in real time, making it possible to accurately schedule heterogeneous distributed energy resources, such as solar and wind; and (5) assistive devices, such as robotic wheelchairs
29 Y. LeCun, Y. Bengio, and G. Hinton, 2015, Deep learning, Nature Magazine 521:436-444, doi:10.1038/nature14539.
and new robotic platforms like the PR2,30 to support the design and development of a wide range of personal assistive tasks.
Over the coming decade, the impact of machine learning, big data capture and analysis, and data science is likely to grow as the diversity and volume of online data sets continues to grow, new types of sensors are designed to acquire new types of data, more companies learn how to collect and use online data to optimize profits, and the supply shortage of technical experts in this area is ameliorated by the growing number of college students studying this subject.
At the same time, there are several constraints on the rate and types of progress that can be expected. For example, although the volume of online data will almost certainly grow, limits on access to data will likely constrain their potential uses. Data access will be bounded by personal privacy concerns, by the willingness of the companies that own much of these data to share them, and by government regulations, such as those under the Health Insurance Portability and Accountability Act of 1996, which govern access to medical data. In addition, while some data are in the form of highly structured databases, many are in the form of unstructured video, audio, and text that are much less interpretable by computers, despite recent progress. Other technical issues must also be addressed, such as incompatibilities in data schemas across different databases, differences in the temporal and spatial grain size of data, and differences in the data distributions sampled by different databases. Growing research in machine learning and data science is actively addressing these issues, but many of them remain unsolved.
Robotics

Robotics is a field at the intersection of mechanical engineering, electrical engineering, and computer science that “deals with the design, construction, operation, and use of robots, as well as computer systems for their control, sensory feedback, and information processing.”31 In general, a robot is a mechanical machine that uses sensors to gather information about the world it operates in and a computer program to guide its actions. The academic discipline of robotics centers on the study, development, and deployment of electromechanical systems that sense and interact in the physical world, guided by a computer program or
simpler electronics. The concept of robotics can be broadened to include embedded sensing and actuation systems, referred to broadly as “cyberphysical systems.”
Real-world application of robots dates back to 1961, when George Devol and Joseph Engelberger’s Unimate system was deployed at General Motors for the handling of die-cast metal.32 Use of robots to automate physical tasks provides benefits such as quality, repeatability, and power, and can remove humans from dangerous tasks. In their early days, robots were used predominantly in automotive manufacturing, where their initial introduction helped to ensure consistent quality over time and a reduction in defects.
Since then, the field has seen tremendous technical progress. Early robot systems had high mechanical precision but were not programmable; they executed a fixed sequence of actions to perform a task. By 1974, the first microprocessor-controlled robot had been introduced. Today’s robots use many types of sensors, and some can be programmed directly from human demonstration.
The International Organization for Standardization defines a robot as an “actuated mechanism programmable in two or more axes with a degree of autonomy, moving within its environment, to perform intended tasks.”33 The standard also distinguishes between industrial and service robots.
This formal definition allows consistent capture of sales and inventory statistics across sectors, regions, and nations. An annual report issued by the International Federation of Robotics34 captures the sales and inventory of both industrial and service robotics in most countries and includes a breakdown across use cases.
The industrial robotics market today has annual sales in excess of $10 billion, or more than $30 billion including installation costs and sales of accessories. Annual sales of industrial robots had grown to 230,000 units by 2014, with close to 25 percent of sales originating in China. Five countries (China, the United States, Japan, the Republic of Korea, and Germany) account for 70 percent of global sales. In the United States, close to 32,000 robots were sold during 2014; see Figure 2.4 for worldwide industry trends over time.35
As of 2014, the top use of robots remained as automotive manufacturing, which accounts for 42 percent of all applications, with electron-
32 S.Y. Nof, 1999, Handbook of Industrial Robotics, Volume 1, Wiley & Sons, Hoboken, N.J., doi:10.1002/9780470172506.
33 ISO Standard 8373.
The number of robots shipped in the United States had a compound annual growth rate of 11 percent between 2009 and 2015.37 Recently there has been a stronger move toward the use of robot technology to enable increased flexibility and customization of products. For example, automobile manufacturer Audi can now produce 10^31 different car configurations, customizable to consumer preference for features such as color, wooden panels, audio systems, navigation systems, safety options, and more. At the same time, the lifetime of some products is getting shorter. For example, cell phone models typically have a lifetime of 12 months or less. This requires a manufacturing line to be available for production of multiple product types to allow for capitalization of infrastructure, again resulting in a requirement for flexibility and reprogrammability in robotic assembly.
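The compound annual growth rate cited above can be made concrete with a small worked example. Only the 11 percent rate and the 2009-2015 window (six compounding years) come from the text; the baseline shipment figure is invented.

```python
# Worked example of compound annual growth rate (CAGR): 11 percent
# compounded over the six years from 2009 to 2015 implies total growth
# of roughly 1.87x. The baseline shipment number is hypothetical.

def cagr(initial, final, years):
    """Annualized growth rate implied by initial and final values."""
    return (final / initial) ** (1 / years) - 1

shipments_2009 = 10_000                       # hypothetical baseline
shipments_2015 = shipments_2009 * 1.11 ** 6   # six years at 11 percent
print(round(shipments_2015))                              # 18704 units
print(round(cagr(shipments_2009, shipments_2015, 6), 2))  # 0.11
```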
A service robot is a robot that “performs useful tasks for humans or equipment, excluding industrial automation applications.”38 The service robotics market is divided into professional and consumer applications. The professional applications include cleaning, material handling, surveillance, rehabilitation, surgery, logistics, and construction, as well as defense applications. The market is still small (24,207 units with sales of $3.77 billion in 2014) compared to the industrial robotics market, but it is seeing significant annual growth, with current growth rates on par with those of industrial robots. The biggest market in this segment is robots used in minimally invasive surgery. The service robotics market is expected to grow significantly beyond the industrial market, since it includes subsectors such as driverless cars, unmanned aerial vehicles (sometimes referred to as drones), and entertainment robots. Recent industry predictions indicate that first-generation driverless cars will be available by 2020, and by 2030 such cars are likely to be offered as a service. More than 3 million unmanned aerial vehicles have already been sold, and their growth is pre-
38 ISO Standard 8373.
The consumer market for robots includes household robots and entertainment and leisure robots. This includes domestic service robots, automated wheelchairs, personal mobility assistance robots, and pet-exercising robots. Autonomous pool-cleaning, rain-gutter-cleaning, and carpet-cleaning robots41 are sold commercially. Hospital robots that deliver supplies are also emerging.42 While approximately 4.7 million service robots for personal and domestic use were sold in 2014, they accounted for only $2.2 billion in sales because of their lower cost per unit.
In addition to the already fielded applications of robotics, university and corporate research is under way in nearly all aspects of component technologies. Research on computer vision, sound perception, and other modalities for perception; reinforcement-learning algorithms to give robots the ability to improve through experience; and natural-language interaction with robots are active areas discussed in the following sections. There is also research to explore styles of interaction between robots and people, such as work on building robots from more pliable materials to avoid accidental harm to people; research on styles of conversation between robots and people to produce effective communication; human instruction of robots; and robots’ explanation of their actions.
Computer Perception: Vision and Speech
Over the last 15 years, tremendous progress has been made in computer perception, especially in the areas of computer vision and speech recognition.43 Computer vision is widely used today in a range of applications, including fingerprint recognition at safety barriers, high-speed processing of handwritten addresses on letters by the U.S. Postal Service, reading of checks deposited at ATMs or via cell phone cameras, and recognition of individual faces in personal online photo albums. Even 10 years ago, recognition of hundreds of different objects in images was impossible, whereas now systems can classify images of 1,000 different
39 International Federation of Robotics, 2015, “Industrial Robot Statistics.”
40 T. Flanigan, 2017, “Forget Taxis; Dubai Wants to Fly You Around in Passenger Drones,” February 16, http://mashable.com/2017/02/16/taxi-dubai-passenger-drones/#HpoK4G4xPmqO.
41 iRobot Corporation, 2016, “Roomba 98: The Power to Change the Way You Clean,” http://www.irobot.com/For-the-Home/Vacuum-Cleaning/Roomba.aspx, accessed March 31, 2016.
43 X. Huang, J. Baker and R. Reddy, 2014, A historical perspective of speech recognition, Communications of the ACM 57(1):94-103, doi:10.1145/2500887.
objects with 62 percent average precision.44 Between 2010 and 2014 alone, the error rate in image classification for one major test set of images, the ImageNet set, was reduced from 28 percent to under 8 percent (see Figure 2.6).45 At the end of 2015, multiple image recognition systems reported reaching human-level performance of approximately 4 percent error rates on the ImageNet challenge, which is widely used to evaluate image classification.46 Much of this recent improvement has been driven
44 O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A.C. Berg, and L. Fei-Fei., 2015, “ImageNet Large Scale Visual Recognition Challenge,” January 30, http://arxiv.org/pdf/1409.0575v3.pdf.
46 See, for example, R.C. Johnson, 2015, “Microsoft, Google Beat Humans at Image Recognition,” EE Times, February 18, http://www.eetimes.com/document.asp?doc_id=1325712; K. He, X. Zhang, S. Ren, and J. Sun, 2015, Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification, pp. 1026-1034 in Proceedings of the IEEE International Conference on Computer Vision, https://arxiv.org/pdf/1502.01852.pdf; and R. Eckel, 2015, “Microsoft Researchers’ Algorithm Sets ImageNet Challenge Milestone,” Microsoft Research Blog, February 10, https://www.microsoft.com/en-us/research/blog/microsoft-researchers-algorithm-sets-imagenet-challenge-milestone/.
by applying deep network machine learning algorithms to larger training sets of images. Computer vision algorithms are now also capable of tracking people, cars, and other objects in video streams as well as analyzing static images.
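The ImageNet error rates quoted above are conventionally top-5 classification error: a prediction counts as correct if the true label appears among the model’s five highest-scoring classes. The sketch below computes that metric over made-up scores for two images.

```python
# Sketch of top-5 error, the metric behind the ImageNet figures above.
# Class scores and true labels here are invented for illustration.

def top5_error(predictions, truths):
    """predictions: list of {class: score} dicts; truths: true labels."""
    errors = 0
    for scores, truth in zip(predictions, truths):
        top5 = sorted(scores, key=scores.get, reverse=True)[:5]
        if truth not in top5:
            errors += 1
    return errors / len(truths)

preds = [
    {"cat": 0.7, "dog": 0.1, "fox": 0.08, "car": 0.06, "cup": 0.04, "bus": 0.02},
    {"bus": 0.5, "car": 0.3, "van": 0.1, "cat": 0.05, "dog": 0.03, "cup": 0.02},
]
print(top5_error(preds, ["cat", "cup"]))  # second truth misses the top 5 -> 0.5
```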
Much of the progress in computer vision has been independent of its use in robotics. Recently, vision technology has started to see applications in robotics, in particular for design of “smart” (driverless) cars and for the sorting of goods by supply-chain companies such as Amazon.
Computer vision on video and live imagery is also making progress, with advances in identifying objects and recognizing intentions in videos,47 and in employing machine vision for perceptual tasks in open-world robotics applications such as autonomous and semi-autonomous driving.48
Similar progress has been made in the area of speech perception, which is now widely used in phone-based customer service systems and to input commands to mobile phones and other devices. As recently as the turn of the 21st century, it was impossible to achieve speech recognition accuracies sufficient to support such applications (see Figures 2.7-2.8 for an illustration of historical progress in automated speech recognition). As in the case of computer vision, much of the recent progress in speech-to-text systems has been due to the use of deep network machine-learning algorithms. Microsoft reported in October 2016 that it had reached a 5.9 percent word error rate, on par with human transcribers, on the Switchboard transcription task.49
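The 5.9 percent figure above is a word error rate (WER): the word-level edit distance (substitutions, insertions, and deletions) between the recognizer’s transcript and a human reference, divided by the number of reference words. A minimal implementation, with invented sentences:

```python
# Word error rate via classic dynamic-programming edit distance over words.

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on a mat"))  # 1 error / 6 words
```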
Natural Language Processing
Natural language processing refers to computer-based analysis of natural language (language written and spoken by humans) in useful ways. Common applications include search engines, spam filters that decide whether incoming e-mail is unwanted, systems that automatically extract mentions of people, places, organizations, and events from news articles,
47 J. Liu, J. Luo, and M. Shah, 2009, “Recognizing realistic actions from videos ‘in the wild,’” IEEE Computer Vision and Pattern Recognition, doi: 10.1109/CVPR.2009.5206744; R. Poppe, 2010, A survey on vision-based human action recognition, Image and Vision Computing 28(6):976-990.
48 E. Ohn-Bar and M.M. Trivedi, 2016, Are all objects equal? Deep spatio-temporal importance prediction in driving videos, Pattern Recognition, in press, http://www.sciencedirect.com/science/article/pii/S0031320316302424.
49 A. Linn, 2016, “Historic Achievement: Microsoft Researchers Reach Human Parity in Conversational Speech Recognition,” October 18, https://blogs.microsoft.com/next/2016/10/18/historic-achievement-microsoft-researchers-reach-human-parity-conversational-speech-recognition/#sm.000vjpv5z169ewp105s11ee1nrzuq.
systems for automatic translation of text from one language to another, and question-answering systems that respond to questions posed in natural language. Recent years have seen strong progress in the ability to extract structured factual information from unstructured text, although computer methods that understand the full meaning of text are far from realized. Here again, progress has been driven largely by machine learning applied to large text data sets. Due to intense competition in the area of search engines and related problems, corporate investments in research and development are large in this area, suggesting rapid progress in the future. As this technology improves, research assistants and paralegals may rely increasingly on support from computers.
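A toy sketch of the kind of structured extraction described above, using only simple surface patterns (capitalized multi-word spans as candidate names, four-digit strings as candidate years). Real systems instead use models learned from large annotated corpora, and the example sentence below is invented.

```python
import re

# Toy "structured fact extraction" from unstructured text using surface
# patterns only; production systems use learned models instead.

def extract(text):
    names = re.findall(r"\b(?:[A-Z][a-z]+ )+[A-Z][a-z]+\b", text)  # multi-word capitalized spans
    years = re.findall(r"\b\d{4}\b", text)                         # four-digit year candidates
    return {"names": names, "years": years}

sentence = "Ada Lovelace met Charles Babbage in London in 1833."
print(extract(sentence))
```

Even this crude version hints at why full text understanding is hard: “London” is missed because single capitalized words are ambiguous, and any four-digit number would be mistaken for a year; resolving such ambiguities is exactly what the learned models are for.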
One widely recognized example of intelligent text processing is the IBM Watson family of applications, whose methods were originally designed to compete on Jeopardy!. Watson systems operate by interpreting natural language questions and then performing inference over a huge collection of text and other types of data to identify candidate answers and rank them to produce a final answer. This technology has many other potential uses, and IBM is now applying it to medical applications using large collections of medical text. Given that the rate of publication of new medical results outpaces the ability of doctors to read journal articles, decision support systems such as Watson have potentially game-changing consequences for augmenting human capabilities in fields that require knowledge-based decision-making.
While computers have remarkable speed and processing capabilities, humans still outperform computers in certain tasks and contexts. New models of human engagement have focused on how best to leverage the strengths of humans and computers for optimal completion of a given task. Such thinking is apparent in emerging modalities of work, including distributed or crowdsourced labor. Methods for supporting and optimizing complementary engagements of people and machines enable a mix of human and machine initiatives or contributions for addressing tasks and solving problems. Such work includes developing context- and problem-specific models of machine and human competencies, methods for recognizing the state of a solution and the efforts of machines and people, and means for coordinating the contributions of people and machines.50
Recent research efforts in this area have focused on applying machine learning methods to enable systems to learn how to best combine the
50 E. Horvitz, 2007, “Reflections on Challenges and Promises of Mixed-Initiative Interaction,” AAAI Magazine 28, Special Issue on Mixed-Initiative Assistants.
intellect and physical abilities of people and machines.51,52 While these approaches are still largely deployed in research prototypes, they illustrate a model where systems leverage the complementary skills of people and machines to complete cognitive and physical tasks. This suggests that new types of work may arise with roles that rely on uniquely human skills.
Complementary computing and mixed-initiative solutions also extend to collaborations between robotic systems and people in the physical world. For example, efforts have been under way to develop collaborative robotic systems in surgery to allow robotic surgical systems to work hand in hand with human surgeons. Promising prototypes and research to date have considered technologies to recognize and understand the actions and intentions of human surgeons and coordinate activities between robotic and human surgeons.53,54,55
Directions in research and development on complementary computing systems show how machine competencies can be joined with the intellect and physical prowess of people, and they highlight the likelihood that technical advances will bring to the fore new roles and types of work for people in joint human-machine problem solving, where people bring critical, uniquely human contributions into the mix. However, the types and nature of those contributions and the new potential roles for rewarding work remain unclear.
Many automated tasks require machines to interface with humans. For example, some online retailers have highly automated warehouses that use robots to bring items for a retail order from their storage shelves
51 E. Kamar, S. Hacker, and E. Horvitz, 2012, “Combining Human and Machine Intelligence in Large-Scale Crowdsourcing,” International Conference on Autonomous Agents and Multiagent Systems, June 4-8, 2012, Valencia, Spain; E. Horvitz and T. Paek, 2007, Complementary computing: Policies for transferring callers from dialog systems to human receptionists, User Modeling and User-Adapted Interaction 17(1):159-182, doi: 10.1007/s11257-006-9026-1.
52 D. Shahaf and E. Horvitz, 2010, “Generalized Task Markets for Human and Machine Computation,” Association for the Advancement of Artificial Intelligence, http://research.microsoft.com/en-us/um/people/horvitz/generalized_task_markets_Shahaf_Horvitz_2010.pdf.
53 C.E. Reiley, H.C. Lin, B. Varadarajan, B. Vagvolgyi, S. Khudanpur, D.D. Yuh, and G.D. Hager, 2008, Automatic recognition of surgical motions using statistical modeling for capturing variability, Studies in Health Technology and Informatics 132: 396-401.
54 A. Shademan, R.S. Decker, J.D. Opfermann, S. Leonard, A. Krieger, and P.C.W. Kim, 2016, Supervised autonomous robotic soft tissue surgery, Science Translational Medicine 8(337):337ra64, doi: 10.1126/scitranslmed.aad9398.
55 N. Padoy and G.D. Hager, 2011, “Human-Machine Collaborative Surgery Using Learned Models,” in 2011 IEEE International Conference on Robotics and Automation (ICRA), doi:10.1109/ICRA.2011.5980250.
to a human worker, who then packs and loads them onto a truck. This interface between machine and human workers requires, for example, that the rate of goods provided by the robot to the human matches the varying rate at which humans can process the workload, and that the robot provide a “failsoft”56 mechanism for the human to take control in the event that it makes an error. In recent years, advances have been made in designing systems that couple humans with automation: techniques have been developed to enhance situational awareness and to build predictive models of human behavior in different contexts.57,58 Nevertheless, significant work remains to be done in the development of core scientific and engineering principles for designing such human-in-the-loop systems.
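The rate-matching and “failsoft” requirements described above can be sketched abstractly. This is not any actual warehouse system, just an illustration of a robot that backs off when the human worker’s buffer is full or when the human has taken control.

```python
from collections import deque

# Illustrative human-in-the-loop interface: the robot delivers an item only
# when the human packer has capacity, and a failsoft override flag lets the
# human pause automation and take control entirely.

class PackingStation:
    def __init__(self, capacity=3):
        self.buffer = deque()
        self.capacity = capacity
        self.human_override = False  # failsoft: human takes control

    def robot_deliver(self, item):
        """Robot side: respect the human's pace and the override flag."""
        if self.human_override or len(self.buffer) >= self.capacity:
            return False             # back off; do not flood the human
        self.buffer.append(item)
        return True

    def human_pack(self):
        """Human side: take the next item, if any, at the human's own pace."""
        return self.buffer.popleft() if self.buffer else None

station = PackingStation(capacity=2)
print(station.robot_deliver("book"))  # True
print(station.robot_deliver("lamp"))  # True
print(station.robot_deliver("mug"))   # False: buffer full, robot waits
station.human_override = True         # failsoft: human takes over
print(station.robot_deliver("pen"))   # False while override is active
```

The design point is that the automation’s throughput is governed by the human’s state rather than the reverse, which is one simple instance of the human-in-the-loop principles the text notes are still being developed.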
Overall, the committee expects the rapid pace of IT advances to continue or accelerate due to (1) continuing advances in AI algorithms and in underlying computational hardware that allows continuing scale-up at reduced cost; (2) continuing growth in the diversity and volume of online data, which, coupled with machine-learning software, is driving many AI advances; and (3) increasing investments by industry in research and development in AI and other parts of IT. Although it is impossible to predict future capabilities perfectly, certain ongoing technology trends make the following workforce-relevant developments likely over the coming decade.
- Mobile robots. Over the next decade, it is anticipated that self-driving vehicles, which have been demonstrated and are already in limited commercial use (e.g., the Tesla self-driving mode), will mature and become more widespread, with possibly significant impacts on employment in the transportation sector, such as decreased demand for drivers. Analogous development and deployment of self-flying aerial vehicles is anticipated, if government regulations allow.
- Assembly-line automation. Further technical progress in automating assembly lines is expected, including diffusion into lower-volume manufacturing as flexibility and reprogrammability improve. Progress
56 A “failsoft” is a mode that a particular piece of software enters in the event of disruption that enables retention of some (though generally degraded) level of service, to avoid otherwise substantive failure.
57 M.R. Endsley, 2000, “Theoretical underpinnings of situation awareness: A critical review,” pp. 3-28 in Situation Awareness Analysis and Measurement (M. R. Endsley and D.J. Garland, eds.), LEA, Mahwah.
58 T.G. Dietterich and E.J. Horvitz, 2015, Rise of concerns about AI: Reflections and directions, Communications of the ACM 58(10):38-40, doi:10.1145/2770869.
in robotics toward manipulation of soft and inconsistent materials could lead to increased automation in the manufacture of apparel, leather goods, and commodity furniture, which could devalue the labor endowments of some of the poorer countries in the world and potentially lead to some (minimal) reshoring of these industries. Beyond robotic automation, 3D printing (also known as additive manufacturing) is also likely to progress and to impact specialized, low-volume manufacturing. Automated assembly, coupled with automated transportation systems, is expected to have serious impacts not only on manufacturing (including partial reshoring) but also on the full supply chain, from mine to customer. This could result in decreased demand for workers and the shifting of work tasks in this sector.
- Computer perception of speech, video, and other sensory data. It is likely that computer competence in perceptual tasks, including speech recognition, computer vision, and interpretation of nonspeech sounds, will advance, potentially leading to significantly improved abilities in several areas, such as listening and image processing. This could augment or replace human functions for a wide range of jobs, such as security guard and policing jobs. It could also lead to a generation of new products, such as intelligent light bulbs that “see” and “hear” what is occurring in their field of view and use this capability to offer assistance.
- Automatic translation between languages by computers. Automatic translation is already in use, although it is imperfect (e.g., Skype now offers an automatic translation service for its calls). This technology might advance over the coming decade to the point of providing widespread, high-reliability, real-time translating telephones.
- Text reading by computers. The ability of computers to interpret and extract information from unstructured text documents (e.g., extracting mentions of specific people, companies, and events) has advanced significantly over the past decade, but computer reading skills still fall far short of human competence. This gap is likely to narrow over the coming decade, with potentially significant impacts on automating knowledge-worker jobs such as paralegal researchers and news reporters.
- Work flow automation. Businesses, governments, and other organizations are increasingly using computers for conducting routine business, generating a great deal of online data with which to train systems to automate or semi-automate routine work flows. New companies such as Clara Labs and x.ai now offer an online meeting-scheduling service—a service that might initially be performed by remote humans, and that might become increasingly automated by applying machine learning to the large quantities of scheduling data they acquire over time. Semi-automation of routine work flow may reduce the need for clerical staff, even if it does not automate these jobs fully. Systems such as those developed within IBM’s Watson suite have started to generate decision support in the medical field. This practice is likely to be expanded to a large number of related fields, such as intelligence gathering, equipment maintenance, and business decision support systems.
These areas are likely to advance significantly in the coming years at a level that will impact the workforce.
Additional advances are also possible that, while less likely, would have major impacts on the workforce. For example, if advances are made in privacy-preserving machine-learning methods, which would use data while guaranteeing the preservation of individual privacy, the variety of data mining and machine-learning applications that reach the market would increase dramatically, including, for example, medical applications that are currently avoided because of privacy concerns. If it becomes possible for computers to learn how to accomplish tasks through instruction from their users, this could have a truly dramatic effect: it would change the number of effective computer programmers from today’s short supply to billions of people, enabling each worker to custom instruct their system on how best to assist them. If technology for text analysis reaches the point of human-level reading by computers, the impact would also be dramatic, as computers can scale to read the entire Web and would be better-read than any person by a factor of millions.
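One concrete, well-studied example of privacy-preserving analysis is the Laplace mechanism from differential privacy, sketched below: calibrated noise is added to an aggregate statistic so that no single individual’s record can noticeably change the released value. The patient data, predicate, and parameter choices here are all illustrative.

```python
import math
import random

# Hedged sketch of the Laplace mechanism from differential privacy:
# release a count plus Laplace(1/epsilon) noise (a count has sensitivity 1,
# i.e., one person's record changes it by at most 1). Toy data only.

def private_count(records, predicate, epsilon, rng):
    """Noisy count of records satisfying `predicate`, via inverse-CDF sampling."""
    true_count = sum(1 for r in records if predicate(r))
    u = rng.random() - 0.5
    noise = -(1 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

rng = random.Random(0)
patients = [{"age": a} for a in (34, 61, 70, 45, 80)]
noisy = private_count(patients, lambda p: p["age"] > 60, epsilon=0.5, rng=rng)
print(round(noisy, 1))  # noisy answer; the true count is 3
```

Smaller epsilon means more noise and stronger privacy; the released value is useful only in aggregate, which is precisely the trade-off that would let data mining proceed on otherwise restricted data such as medical records.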
The committee also notes the possibility of unanticipated, disruptive changes in the technology landscape—that is, rapid, broad, or deep changes with significant impact on society. First, a major and unanticipated scientific or engineering breakthrough could accelerate the creation or deployment of a new technology, with concomitant disruptions to the workforce, either positive or negative. Of primary concern are disruptions that lead to the displacement or unsettling of workers, industries, or economies that are unprepared to adapt. Examples might include an unexpected breakthrough in AI algorithms that enables the straightforward automation of a type of knowledge work. History provides examples of disruptive inventions, such as the horseless carriage, where the need for physical production resulted in a slow diffusion into society. While today’s software innovations can spread worldwide rapidly by being downloaded onto mobile devices, the development, testing, and integration of usable software from fundamental algorithmic advances takes time, as does the integration of new software into businesses. The nature of the corresponding disruption to the workforce would be a product not only of the new technological capability but also of how those in power choose to make use of it, driven largely by market factors. Second, it is also possible that an existing and ubiquitous technology could undergo a catastrophic failure or collapse—for example, due to emergence of a flaw, collapse of infrastructure as a result of man-made or
natural disaster, depletion of the physical resources required to build or run a given technology or product, the sudden imposition of regulatory controls or limitations, or a sudden and widespread loss of trust in a given technology. Such regressive disruptions could remove the affordances of a given technology from the workplace, requiring workers and businesses to get by without the tools to which they are accustomed or forcing human workers to perform previously automated tasks for which they may not have been trained. Other examples include large-scale disruption to the power grid, depletion of critical materials required for building microchips or other components, or loss of confidence in a pivotal software system or service after a hacking takedown or other security incident.
Information technology will continue to transform the way we work, as well as other aspects of our lives. To summarize:
- The impact of IT is pervasive and has already touched nearly all aspects of our personal and work lives. IT has eliminated and created jobs, but more frequently it has transformed jobs and the way they are performed. It has transformed business practices as companies have moved routine operations online, where they can be better tracked and partly automated (e.g., supply chain management or customer relationship management). Similarly, it has transformed our personal lives as we have moved our calendars, mail, photographs, and shopping online, again making computer support feasible for these core aspects of our lives, including a significant fraction of our social lives. IT is beginning to change the nature of education, as video courses become increasingly available over the Internet, and is changing the nature of freelance work, as peer-to-peer networking allows just-in-time matching of customers to resource providers.
- Much of the impact of IT has been driven by hardware advances, especially the spread of the Internet and of inexpensive computing power. Networking has moved from hard-wired to wireless at the same time that the Internet has spread worldwide. The Internet of Things refers to a recent trend in which many sensor-equipped physical devices increasingly communicate via the Internet, suggesting that we are moving toward a world in which the Internet serves as a worldwide communications network connecting a diverse array of people, institutions, and physical artifacts, from buildings to vehicles. This underlying network of computing and sensing devices provides the substrate for rapid deployment of new technology. The sensors used in driverless
cars were impractically expensive a decade ago, yet today similar sensors are found in some video game consoles. Mobile phones have driven rapid cost reductions in GPS chips, high-resolution compact cameras, motion sensors, and touch-sensitive and fingerprint-sensing hardware.
- The impact of IT is also driven in large part by software advances, especially in AI and machine learning. These software advances have yielded reliable speech-recognition systems that are now used routinely on smartphones, image recognition systems capable of recognizing specific individuals in photographs, and the first commercial self-driving cars. Machine learning algorithms now mine the exploding volume of online data to capture regularities that enable them to automate or semi-automate many knowledge-intensive decisions, from deciding which credit card transactions to approve to deciding which X-ray images contain evidence of tumors. As increasing volumes of data and decisions come online, the potential applications of this technology will grow as well. These software advances are enabled and amplified by the hardware advances discussed above.
- The impact of technology on the workforce follows from both the invention of new technologies and the diffusion and maturation of existing technologies. For example, although the Internet was invented and deployed decades ago, its impact continues to grow as it diffuses geographically around the globe and as its technology matures (e.g., augmenting early hard-wired Internet connections with wireless Internet connections). In seeking to anticipate future technology trends and their impact on the workforce, it is helpful to consider the likely diffusion and maturation of technologies that already exist in nascent form in research laboratories and forward-leaning companies (e.g., self-driving vehicles, which currently represent only a tiny fraction of vehicles on the road, are likely to diffuse and mature enough to have a significant impact on employment in the transportation sector).
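The kind of data-driven decision automation summarized above, such as scoring credit card transactions for approval, can be illustrated with a minimal learned classifier. The features, training data, and function names below are hypothetical and invented for this sketch; real systems use far richer features and models, but the underlying idea of fitting a decision rule to labeled historical data is the same.

```python
import math

def train_logistic(samples, labels, lr=0.1, epochs=500):
    # Fit a logistic-regression decision rule with plain gradient descent,
    # updating the weights after each labeled example.
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of fraud
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    # Return the model's fraud probability for a new transaction.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical training data: [amount in $1000s, is_foreign_merchant],
# labeled 1 if the transaction was later confirmed fraudulent.
X = [[0.1, 0], [0.2, 0], [0.3, 0], [5.0, 1], [4.0, 1], [6.0, 1]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
print(predict(w, b, [0.15, 0]))  # small domestic purchase: low fraud score
print(predict(w, b, [5.5, 1]))   # large foreign purchase: high fraud score
```

Once trained, the decision rule can score every incoming transaction automatically, which is how such systems scale to volumes of decisions no human staff could review.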