The third day of the workshop included two panel sessions on how new and emerging technologies will interact with and shape the social landscape. In the first session, Sanjay Tripathi, Vice President of Growth Initiatives and Strategic Partnerships at Watson IoT at IBM, discussed the internet of people and things, and his presentation was followed by an open discussion moderated by Tilak Agerwala, IBM emeritus. The second session, on artificial intelligence, featured three presentations, by John Markoff, Research Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University; Dario Gil, Chief Operating Officer of IBM Research and Vice President of AI and Quantum Computing at IBM; and Melvin Greer, Chief Data Scientist Americas at Intel Corporation. Following the presentations, Markoff moderated a discussion with his fellow panelists.
The world is being rapidly digitized, said Sanjay Tripathi, as evidenced by the exponential advances in microprocessor design and software that now powers phones, cars, utilities, health care, personal fitness, and nearly every other aspect of life in a technologically developed country such as the United States and those of Europe and many parts of Asia. Sensors, microprocessors, and miniaturized networked communication devices are filling homes and factories, workplaces, and recreation spots, bringing with them potential improvements in quality of life, as well as many of the ethical, privacy, and security issues that have been discussed at the workshop, he said. What this explosion of devices, the so-called Internet of Things (IoT), is producing is a torrent of data that has shifted the view of technology from “Is it affordable?” to “What can all this data be used for?” and that, said Tripathi, is where machine learning and artificial intelligence will prove to be important. “We are going from automating work to really understanding how the world works and driving better outcomes,” he said.
One transformation that has come with this shift from automating the world to understanding it is a dramatic change in value creation, Tripathi said. Hardware and software, or goods and services, are now lower-margin items. Today,
knowledge creation and insights have taken over as the new high-value items, and the data needed to create new knowledge and derive insights are available in multimedia format, from consumer, manufacturing, and enterprise IoT, genomics, text, structured sources, and others. Data, said Tripathi, is the new source of value.
IBM’s position on data responsibility, he explained, is that any organization that collects, stores, manages, or processes data is obliged to handle it responsibly, where responsibility entails issues regarding ownership, data flows, access, security and trust, and the role of artificial intelligence. IBM’s view is that its clients own their data and the insights derived from those data, and that the owners of data, not governments, should decide where data are stored and processed. In addition, it should be up to the owners of the data, and not, for example, a company that stores those data in a backup center, to decide whether those data should be provided to a government agency.
Regarding data security, Tripathi said that IBM opposes any effort to weaken or limit the effectiveness of the commercial encryption technologies that are essential to modern business, and it does not put “backdoors” into its products or provide source code or encryption keys to any government agency to access client data. In addition, the company supports the use of internationally accepted encryption standards and algorithms rather than those developed and mandated by individual governments. IBM’s position on artificial intelligence and big data is that artificial intelligence cannot and will not replace human decision making, judgment, intuition, or ethical choices. Moreover, companies using artificial intelligence must be able to explain what went into their algorithm’s recommendations, and if they cannot or will not, their systems should not be on the market. In fact, IBM supports transparency and data governance policies that will ensure people will be able to understand how an artificial intelligence system came to a given conclusion or recommendation.
When it comes to deriving value from the data produced by the Internet of Things, Tripathi said it will require a diverse set of participants working together, including those developing the applications, the devices and networks, and the data storage and analysis platforms. The analytic platforms and software needed to derive useful information from IoT data will need to be designed for specific applications, he said, and often require the participation of multiple parties, which can create challenges when it comes to securing intellectual property and deciding who should reap the associated financial rewards.
Tripathi then discussed some examples of projects involving IoT-derived data, starting with the International Technology Alliance Project, which is a set of projects sponsored by the United Kingdom Ministry of Defence and the U.S. Army Research Laboratory. These collaborative projects, involving academia, industry, and government, were intended to identify “things” based on their acoustic signatures, such as identifying the make of a car based on the sound it makes as it passes by a sensor, and to develop technologies to store data efficiently. This project did not involve sensitive data, nor any data from a human, and the intent was for data from the two governments to be shared freely with all members of the consortium.
However, a few government entities raised concerns about unencumbered data sharing, and the academic and corporate researchers involved pushed back against the need for any restrictions on data sharing. After protracted negotiations, three contracts were created: one for open data, one for U.S. entities stating that data from the Army Research Laboratory would be kept in one location and shared only with authorization, and a third with the same restrictions for United Kingdom data and participants. Tripathi joked that data he could have acquired by sitting on a street corner with a tape recorder still had not arrived by the time the project ended and the final report was written. On a more serious note, he said that IBM researchers in the United States and United Kingdom were unable to access each other’s data. One lesson from this outcome, he said, is that the value of data is often in the eyes of the beholder. Another is that, while a partner may say they will be good data stewards and have internal compliance and training, it is important to understand the partner’s true capabilities and track record.
A more successful project, called Green Horizons, involved a collaboration between IBM and China to reduce particulate air pollution in Beijing, where levels of potentially hazardous 2.5-micron particles exceeded the World Health Organization standard by 9-fold, with spikes of up to 50-fold. China contracted with IBM to model complex physical and chemical processes related to the generation of these particles and to identify a variety of means to reduce atmospheric particulate matter, such as idling certain factories on certain days. Weather is a key component of the generation of these particles, and while IBM owns the digital assets of The Weather Channel, it, like any other private company, is prohibited by law from making weather forecasts in China. As a result, getting the historical data to create and test the models took an extensive effort, even though the Chinese government was a formal partner in this project. In the end, IBM used data from Indonesia to develop and test the models, which have since been deployed in Beijing.
The third example involved service personalization based on personal data captured by electronic devices and other means. Tripathi noted that in India and the Middle East, advertising appears on mobile phones as people travel, something that people in the United States would likely revolt over. “That consumers are okay with us mining their data from where they are, where they go, who they call, and leveraging that to do real-time targeted ads just floored me,” he said. In the Middle East, for example, people are comfortable with coupons being pushed to them for a nearby store as an enticement to shop there, which points to cultural differences in how people view what most Americans would consider sensitive personal data. Once again, he said, the value of data is in the eyes of the beholder.
Another project aimed to use real-time monitoring and alerts from sensors installed in worker safety equipment to improve worker safety and reduce the number of lives lost on the job in the United States. Tripathi noted that while the sensor technology to enable this project is not new, what is new is the interconnectedness of these devices and the IoT element. For example, there are sensors in hard hats that can tell if someone is wearing them or if ambient noise levels are too loud. The idea was to produce a dashboard that a site manager or foreman
could use to see where the people on a construction site are and whether they are okay and following mandated safety precautions. One challenge this project faced is that some of these sensors measure personal information, such as heart rate or temperature, and the data they generate are regulated by the Food and Drug Administration. Another challenge was that some employees and worker unions expressed concerns that the data would be used to track performance and productivity, so even though this type of system would benefit workers, there was a trust issue that acted as a barrier to adoption. The solution was to require every worker to opt into the system.
The last project he discussed aimed to develop a remote monitoring and connectivity solution that would serve as a postoperative recovery facility for the home. A clear business and medical case argues for people recovering from certain types of surgery to do so in their homes instead of in the hospital, said Tripathi, but hospitals want to make sure that patients are following directives at home so that they will not have to return to the hospital. Hospitals already have sensor data aggregated from patients who have had similar surgeries and good or bad postoperative recovery situations, but the issue is that data from an individual patient are covered by the Health Insurance Portability and Accountability Act, which made it difficult to get personal data from live postsurgical patients to test the system.
In summary, Tripathi made three points that need to be considered when entering into a multinational, collaborative partnership involving data. The first was that it is important to consider the purpose for which the data will be used. In that regard, broad use rights are typically not granted, which is not usually a problem for product development because products tend to have a specific use in mind. However, research typically has multiple objectives and has a discovery aspect that can often veer off into new areas. Second, there is significant diversity in the types of data and in the value that consumers and businesses ascribe to them. There are thing data and people data, consumer data and business data, and open data and protected government data, each of which needs to be treated differently. In addition, he said, there are national and cultural differences in the value ascribed to data.
His final point was that the entities involved vary in their sophistication when it comes to data matters, and thus in their ability to comply with voluntary measures and to follow directives, statutes, and regulations. “It is incumbent on everybody to understand the abilities of other parties to be able to be good stewards of the data,” he said in conclusion.
or a car, to look at the risk and benefit for Internet of Things applications, and of the need to balance the tension between wanting to share and provide access and the trust required to enable sharing. It was noted that the consequences of misplaced trust for the data source are growing, in financial costs, loss of reputation, and loss of personal privacy, as well as health and safety risks.
Dario Gil started the workshop’s final panel session by stating that machine learning and artificial intelligence are foundational to the future of all professions, a message that college students, based on their soaring enrollment in introductory machine learning classes, seem to have gotten. He explained that machine learning is a subcategory of artificial intelligence, neural networks are a subcategory of machine learning, and deep learning is a subcategory of neural networks. Work on artificial neural networks began in the 1940s, but the deep learning explosion, as Gil called it, occurred in 2012, when large labeled data sets merged with deep neural networks running on powerful graphical processing unit hardware accelerators to produce “the largest decrease in the error rate in history” when a computer system classified visual images. Within 3 years, in fact, deep neural networks had a lower error rate than humans at classifying visual images for specific domains. Today, he added, artificial intelligence works in specialized applications including language translation, speech transcription, natural language processing, and facial recognition, but there is still significant work ahead to go from narrow applications of artificial intelligence in a single domain to broader applications that cross domains and are less of a black box.
There are three concepts related to the evolution of artificial intelligence: explainability, security, and ethics. Gil said that explainability refers to the current black box nature of deep learning and noted that artificial intelligence developers are putting an increasing emphasis on model transparency. “It is not enough to say ‘it kind of works empirically’ and not have a theory behind it and understanding,” said Gil. Rather, he said, artificial intelligence algorithms need to be understandable to both developers and users so that they have some explanation for the results produced by an artificial intelligence system.
Security, he continued, is a key area of artificial intelligence as it relates to the fragility of the models. As Gil explained, there are cases where tiny perturbations in the input data can alter the output dramatically. As a result, it is possible to corrupt a model by poisoning the training data, which would have serious consequences if the model were managing a self-driving car, for example, or were used to recognize specific individuals on a no-fly list.
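The fragility Gil describes can be made concrete with a toy sketch. The tiny linear classifier below, with invented weights and inputs chosen purely for illustration, flips its prediction when each input feature is nudged by at most 0.1 in the direction that most hurts the score, in the spirit of gradient-sign adversarial perturbations:

```python
import numpy as np

# Toy linear classifier: predict class 1 when w.x + b > 0.
# Weights and inputs are invented for illustration only.
w = np.array([0.9, -0.4, 0.2])
b = 0.05

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.1, 0.3, 0.2])      # a benign input, scored at 0.06 -> class 1

# Adversarial nudge: step each feature against the sign of its weight,
# staying within a small per-feature budget eps.
eps = 0.1
x_adv = x - eps * np.sign(w)       # new score is 0.06 - eps * sum(|w|) = -0.09

print(predict(x), predict(x_adv))  # prints "1 0": the tiny change flips the class
```

The perturbation changes no feature by more than 0.1, yet the decision reverses, which is the sense in which such models are fragile to small, targeted input changes.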
Regarding ethics and artificial intelligence, Gil shared five main areas of dialogue. The first has to do with “the singularity,” or the fear that artificial intelligence will surpass human intelligence and then evolve beyond the ability to control it. The second deals with concerns that the data sets and algorithms shaping the future of artificial intelligence are not representative of the global population and therefore raise the prospect that artificial intelligence will be biased and work
against diversity and inclusion. The third area is focused on how to best align artificial intelligence with human codes and values. The fourth revolves around ways to make AI more transparent. The fifth deals with whether we should fear technology-driven unemployment as artificial intelligence and robots get more capable.
Regarding the issue of bias, diversity, and inclusion, Gil recounted a recent incident where a researcher from the MIT Media Lab contacted IBM about a test she had run on artificial intelligence facial recognition systems developed by Microsoft, Face++ Cognitive Services, and IBM. Using photographs of parliamentarians from all over the world to test the accuracy of these services, she found that the error rate for females with dark complexions was between 23 and 36 percent for the three systems, compared to less than 1 percent for white males. “This was not an instance of malicious intent or an error associated with the algorithm,” said Gil. “It has to do with the data sets used to train the algorithm, and the common data sets in the public domain that everybody uses for training have heavy bias segmentation.” Once aware of the problem, Gil’s colleagues at IBM, working with the researcher and her colleagues at the MIT Media Lab, engaged in a focused effort to fix this problem and earlier this year released a new training set that reduced the error by a factor of 10.
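The core of the analysis Gil recounts is a disaggregated evaluation: measuring a classifier's error rate separately for each demographic subgroup rather than only in aggregate, so that disparities hidden by an overall average become visible. A minimal sketch of that bookkeeping, using invented records rather than the actual study data, might look like this:

```python
from collections import defaultdict

# Invented evaluation records: (subgroup, prediction_was_correct).
# In the study Gil described, subgroups were defined by gender and skin tone.
results = [
    ("darker_female", False), ("darker_female", False), ("darker_female", True),
    ("lighter_male", True), ("lighter_male", True), ("lighter_male", True),
]

totals = defaultdict(lambda: [0, 0])   # subgroup -> [errors, count]
for group, correct in results:
    totals[group][0] += int(not correct)
    totals[group][1] += 1

for group, (errors, count) in sorted(totals.items()):
    print(f"{group}: error rate {errors / count:.0%}")
```

An aggregate error rate over all six records would be 33 percent, masking the fact that every error falls in one subgroup; reporting per-group rates is what exposed the 23 to 36 percent versus sub-1-percent gap Gil cited.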
Gil said this example is illustrative of the efforts that the artificial intelligence field will need to make to gain the public’s trust in these systems, similar to the so-called white hat hackers who attempt to find security flaws in software before bad actors do and then notify the owners of the susceptible code. “We have got to build a culture where we correct systems, improve the data sets, and have the methodology to do this,” said Gil. Toward that end, IBM issued a white paper in 2016 declaring its policies on how to build and test artificial intelligence systems (Banavar, 2016) and a set of principles in January 2017 (IBM THINK, 2017). These principles state that the purpose of artificial intelligence is to augment, not replace, human intelligence; that there should be trust and transparency in the development and deployment of artificial intelligence systems and in the way data are handled; and that there is a responsibility for education to support workforce evolution. He noted in closing that IBM has committed $240 million over 10 years to support the MIT-IBM Watson Artificial Intelligence Lab to jointly create the future of artificial intelligence.
As an introduction to his remarks, Melvin Greer said that Intel over the previous 18 months had decided to pivot and become a data company, and in particular, to focus on developing artificial intelligence–based solutions. He noted that one of his colleagues had coined the phrase “artificial intelligence is the new electricity,” a slogan that provides a framework for why it is so important to get the principles and ethics of artificial intelligence sorted out. The analogy he used was that without the lightbulb, the foundational capabilities associated with the discovery of electricity and development of a ubiquitous electrical distribution system would have been muted. Ultimately, realizing the promise of electricity required the efforts of many researchers and engineers, as well as government. He sees a similar situation with artificial intelligence, where it will take the efforts of
many, including government, to develop the applications and guiding principles to turn promise into reality.
This workshop, he said, plays a role in that effort by promoting the discussions that are needed to develop a set of principles that define the nature of free will, moral responsibility, self-deception, consciousness, and personal identity. “If we ascribe these to things that are nonhuman, what does that mean and how are we going to adjudicate the perspective that each and every data scientist offers as we try and build this working framework?” asked Greer. Many people, he said, advocate for an independent watchdog to decide how artificial intelligences should operate, and others are trying to figure out whether an ethical discussion around thinking machines is even possible or reasonable. From his observations, Greer believes that most people see these discussions around ethics as a counterbalance against and not a promoter of innovation. “That is unfortunate because in the human condition, we do not think of ethics as a negative to innovation,” said Greer.
Intel, he noted, is in a unique position to engage in the discussion about ethics and artificial intelligence because 98 percent of the world’s cloud computing capabilities run on Intel hardware or software, giving it a large window on how researchers are using data. What he is seeing is that industry is forming its own ethics groups to guide the development of artificial intelligence and that companies are establishing their own ethics boards as a means of gaining the public’s trust that they are developing artificial intelligence in an ethically responsible manner. “We will not get the benefits of artificial intelligence and all promise associated with it if people believe that it has all of the same biases, all of the same foibles, all the same ways of thinking that humans do today,” said Greer.
As an example, Greer said he presumes that even if autonomous driving could cut the number of people who die in car accidents from 50,000 a year to 5, some people might continue to distrust autonomous driving because it has been shown that machines can make a mistake. Similarly, he said, as people come to understand how fragile these systems are and how small changes in the input data can significantly shift the insights and inferences that come from the use of artificial intelligence, they may limit the sharing of their data.
Greer said that there are groups such as the IEEE that have initiatives on the ethics of autonomous and intelligent systems, and one organization, Data for Democracy, is partnering with Bloomberg and BrightHive to develop a code of ethics by data scientists for data scientists through a community-driven approach.1 Regarding the latter, Greer encouraged the workshop participants to join this effort. He also noted that the European Convention on Human Rights, since early 1998, has been thinking in earnest about what it means to be human and what innate human rights are associated with privacy of their data and privacy in general. He added that the U.S. National Science and Technology Council, in 2016,
issued a strategic plan for the development of artificial intelligence that also codifies the importance of understanding and addressing the ethical, legal, and societal implications of artificial intelligence.
John Markoff began his comments with a brief history lesson about the origins of the field of artificial intelligence, which he said happened in two laboratories on opposite ends of the Stanford University campus. At one end, John McCarthy coined the term artificial intelligence and claimed in a 1962 proposal to the Defense Advanced Research Projects Agency that he would create a set of technologies that would ultimately replace the human being. On the other side of campus, Douglas Engelbart invented the term intelligence augmentation to reflect the philosophy that this new form of computing would extend rather than replace humans. In thinking about these two philosophies, Markoff realized they lead to a paradox. “If you extend the human, you also need fewer humans.”
In Silicon Valley, where venture capitalists have largely switched from investing in social media to investing in artificial intelligence, it is fashionable to say that artificial intelligence will destroy jobs, a camp to which Markoff said he belonged at one time. To point out the fallacy of this notion, Markoff noted the oft-cited statistic that 140,000 employees at Kodak lost their jobs because of 13 programmers at Instagram. The problem with this out-of-context statistic is that Instagram could not have come into existence without the modern internet, which has created somewhere between 2.5 and 4 million jobs. Today, he said, he has become convinced that demography trumps technology, and that aging societies will need both humans and robots powered by artificial intelligence to meet the care needs of the elderly. In fact, rather than asking the experts when there will be a self-driving car, he now asks when there will be a robot that can safely give an aging human a shower, which he argues is the harder problem.
Markoff said he is optimistic about the development of responsible artificial intelligence because at least three chief executive officers of major players in the field—Google, Microsoft, and IBM—have committed their companies to engage in and develop human-centered artificial intelligence. One thinker in the field, Ben Shneiderman at the University of Maryland, argues against autonomy for machines on ethical grounds, which Markoff explained results from the idea that separating code from humans separates humans from responsibility when something goes wrong. As an example, Markoff recounted how Microsoft developed an autonomous conversational agent that soon turned into a sexist, misogynistic, racist program and had to be shut down. At the same time, researchers in China created a similar program that became a big success among young Chinese users who saw it as a way to be alone with their thoughts and have “someone” to talk to about them. “A very different framing than we might think about,” said Markoff.
In closing, Markoff wondered if the current generation of scientists and technologists who are designing these technologies have taken society back 100 years, when philosophers were arguing that modern society would be so complex that inevitably it would be run by engineers. “We may be back there, and we hope they have the right set of values regarding whether we are going to design humans into or out of the future.”
Markoff started the conversation with Gil and Greer by asking them what effect the General Data Protection Regulation (GDPR) was having on the commercial as well as research and development efforts of their respective companies. Gil replied that the first effect was to trigger a massive effort within IBM to comply with all aspects of GDPR, both for the data it holds from European citizens and on what the company is doing on behalf of its clients. This compliance activity has generated an opportunity for the company to offer its expertise on GDPR to others as a service. Markoff asked if GDPR’s Article 22, which states that “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly affects him or her,” had a chilling effect on the company’s artificial intelligence work in Europe, and Gil replied that the European Union has issued a clarification that Article 22 was not meant to outlaw artificial intelligence. Gil said he thinks Article 22 has more to do with the explainability of algorithms. Greer, however, believes the issues over Article 22 have not yet been settled and that there is still discussion as to the intent of this GDPR provision.
Intel has focused first on GDPR compliance across its own and its partners’ global operations. Intel, said Greer, sees GDPR as an important differentiator and is working to deploy the provisions of GDPR in every environment in which it works, regardless of whether an activity touches a citizen of the European Union. “We intend to use that as a competitive advantage in our ability to market and sell,” said Greer. Gil added that IBM is taking the same approach.
Greer noted that there are some countries that are hypersensitive to privacy and others that are relatively desensitized to privacy, which he said points to why the discussion on innovation is so important. “If we cannot convince people that we can innovate with privacy, there will be some fundamental changes that people may not be prepared for,” said Greer. While Article 22 presents some issues to resolve, the GDPR provision that may be most impactful today deals with the right to be forgotten. This provision, said Greer, could mean that there will be a significant drawdown in the amount of data available to support clinical trials and experimentation in other areas, such as education. At the same time, it creates the opportunity to open the conversation with the public about privacy and the use of data to make inferences and create insights, which Greer said will become increasingly important as the percentage of data science deliverables generated by citizens, as opposed to formally trained data scientists, continues to grow.
Markoff then turned to the issue of explainability. His understanding, he said, is that the field is at the point of making early progress in adding transparency to deep learning systems. His question to Gil was when he thinks IBM would be able to deploy a technology that would explain the behavior of an artificial intelligence–based system to the satisfaction of a European regulatory body. Gil replied that the company is making progress toward transparency with its development of visualization tools that can show the sensitivity of a particular network
structure to a specific input. Such tools could help answer the question of whether a system is understandable and produces consistent results. Another focus is on developing confidence intervals for a given result, which is proving important in the company’s work with health care professionals.
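One simple way to probe the kind of input sensitivity Gil mentions, shown here as an illustrative sketch rather than IBM's actual tooling, is to estimate how much a model's output moves when each input feature is nudged slightly (a finite-difference gradient). The stand-in model and its coefficients below are invented for the example:

```python
import numpy as np

# A stand-in "model": any function from an input vector to a scalar score.
# The coefficients are invented; a real system would wrap a trained network.
def model(x):
    return float(np.tanh(0.8 * x[0] - 0.3 * x[1] + 0.05 * x[2]))

def sensitivity(f, x, h=1e-5):
    """Finite-difference estimate of df/dx_i for each input feature i."""
    grads = np.zeros_like(x)
    for i in range(len(x)):
        bumped = x.copy()
        bumped[i] += h
        grads[i] = (f(bumped) - f(x)) / h
    return grads

x = np.array([0.2, -0.1, 0.4])
print(sensitivity(model, x))   # feature 0 dominates; feature 2 barely matters
```

Plotting such per-feature sensitivities over many inputs is one basis for the visualization tools described above: a feature whose sensitivity is consistently large, or wildly inconsistent across similar inputs, flags where the model's behavior needs scrutiny.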
Greer added that Intel is working with all its partners, including IBM, to try to solve the transparency problem. Greer explained that transparency does not only apply to an algorithm. Even if an algorithm is completely transparent, the data set may include missing, erroneous, and deceitful data, which imposes a need for additional efforts to ensure that the outcome of the model was justifiable and probabilistically correct. One approach the company is taking is to create application-specific integrated circuits that are married to specific deep learning algorithms, providing a mechanism for tuning, verifying, and evaluating algorithms within a silicon architecture.
Markoff then asked Gil and Greer to comment on whether fictional models of artificial intelligence—such as HAL in the movie 2001: A Space Odyssey, or the sentient artificial intelligence program in the movie Her—get in the way of a clear view of where this technology is headed. Gil said that these fictional representations do have an effect and can distort the conversation, though the effect can vary depending on the culture. In Japan, for example, robots and artificial intelligence are largely seen as benign and, in the context of an aging population, a way of maintaining a standard of living and economic productivity, a representation in popular culture that is much different than the Terminator, for example. In the United States, he added, the view of robots and artificial intelligence is more dystopian. To Gil, what is more problematic is the potential to codify bias and disparity in algorithms to determine credit scores, for example. “We have got to address that first,” said Gil.
The next question Markoff posed was whether the time was coming where the decision would have to be made as to whether to augment call center operators or replace them with completely automated call centers, for example. “I could see a situation where your designers want to augment people and maybe your customer would want the lowest cost solution, which would be no humans in the loop,” said Markoff. “Are you on the cusp of those kinds of decisions?” Gil said that IBM takes the position that artificial intelligence will impact every job. The challenge is to identify those parts of a job that can be replaced entirely, those that can be augmented, and those that will not be touched at all. This is not a question related solely to artificial intelligence, he said, because it is a question already being asked and answered with automation. His opinion, which he believes is IBM’s position, is that the goal of replacing all human labor is the wrong design point because work has intrinsic meaning for humans. “Therefore, we have to design technology that is compatible with that human goal, and we need to then make technology that enhances your ability to perform your work, to live better, and to be richer from it,” said Gil. “If we design it the other way—a magical technology that could replace our work—that would be the essence of social chaos and revolution, and that will not be a good design principle.”
Greer said that while it is easier to talk about the destructive effect of new technologies on jobs, Intel is focused on where the new jobs will be and is working to identify the top new jobs that do not exist today but will in the next 5 years. “This is one reason why our Artificial Intelligence University program is so important, because we are providing insight to what those jobs are so that we can incentivize and help administrators build curriculum for those types of jobs,” said Greer. One such job, he predicted, will be the personal privacy manager, whom people will hire to manage the privacy aspects of their digital self. This job will be needed because the digital self will become nearly indistinguishable from the physical self, said Greer. In addition, he said, it is important to think about jobs that require a capability that is uniquely human, such as mindfulness, empathy, or emotional intelligence, that cannot be programmed. “If we can provide new jobs that incorporate these key skills, what history tells us is that the destructive component would be relatively small, and the constructive component would be huge,” said Greer.
Returning to Markoff’s question, Gil said the decision of whether to augment or replace humans will be devilishly complicated and will require the participation of many stakeholders. He predicted that these decisions will be aided by experimentation and by data on what works and what does not, and on how technologies fit into workflows, incentives, and even collective bargaining agreements. Greer added that he sees a steady desensitization to the risks of artificial intelligence happening.
Markoff then posed a scenario in which IBM and Intel were hired by a Russian troll factory to more effectively influence the next U.S. election without human intervention. Gil replied that IBM would never accept such a contract, but that crafting the type of two- to three-line messages that appear on social media is within the capability of what a narrow form of artificial intelligence can generate today. Markoff then asked if there were technologies that could serve as an “anti-bot” to counter this type of attack, and Greer said Intel is developing artificial intelligence capabilities that can identify bias and deceit in artificial intelligence offerings. “We are deconstructing a cleansing, tagging, classification, and labeling model that would help us to figure out how data is being used and internalized to create a deceptive position or view,” said Greer.
“More significantly, we are engaging a number of social scientists and ethics experts that are trying to identify ways that people would exploit the system so that we can create an automated response.” Such a response, he said, could be something rudimentary, similar to a spam filter, but would more likely be something more advanced, such as a cyber kill chain, which he explained is the set of steps that allows a cyber analyst to understand everything that must happen for a threat to successfully exploit a vulnerability. The goal, he added, is not to interrupt the attack but to let it progress to the point that one understands what it looks like and can create a countermeasure for future instances. Intel is also working to identify how a distributed ledger or blockchain-type mechanism could allow for peer-to-peer sharing of data, Greer added.
The final question Markoff asked was what the effect of the end of Moore’s law will be on computational ability. Gil replied that it has become accepted in computer science that the performance of the transistors that go into integrated circuits is no longer improving and that the ability to increase the density of transistors also has limits. The answer, he said, is going to be specialization. “Rather than a general-purpose computer that can perform all programs and all tasks, we will also have very specialized chips that are designed to tackle particular tasks,” Gil said. This change is already happening with artificial intelligence, and it is producing a yearly doubling in machine learning and deep learning application performance, a rate he sees continuing for at least another decade. The result, he said, is that the cost of building a deep learning model will fall by a factor of 1,000 over the next 10 years, a statement with which Greer agreed.
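As a simple arithmetic check (not part of the workshop discussion itself), Gil’s two figures are consistent with each other: a doubling in performance every year compounds to roughly a thousandfold change over a decade.

```python
# Yearly doubling compounded over a decade: 2^10 = 1024,
# which matches the projected factor-of-1,000 drop in the
# cost of building a deep learning model over 10 years.
years = 10
annual_factor = 2  # performance doubles each year
cumulative_factor = annual_factor ** years
print(cumulative_factor)  # 1024
```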
The next great advance in computational power, said Gil, will come with quantum computing, an entirely new paradigm whose enormous potential extends beyond artificial intelligence and that will have profound implications for how scientists model nature, simulate chemistry, perform optimizations, and do machine learning. According to Gil, quantum computers are already working, though they are at about the stage classical computing had reached in the 1950s. Greer predicted that it will be another 10 years or so before quantum computing is used to solve general-purpose problems in society.
Responding to a question about whether IBM and Intel are working with stakeholders other than large businesses on their artificial intelligence projects, Greer said that Intel has a community-based Smart Cities program that is working on the public safety, health, and transportation needs of urban populations. Intel also has more than 100 university partners in its Artificial Intelligence University program, including five historically black colleges and universities that are just starting to develop data science programs, and has either acquired or invested in dozens of small companies and startups. Gil said IBM has taken a similar approach of engaging a wide range of stakeholders, which is important because of the breadth of clients it has worldwide, from large corporations and governments to high schools and education centers. Both Greer and Gil noted, though, that artificial intelligence has become a buzzword, and many information technology or analytics projects are tagged as “artificial intelligence” because of the cachet associated with the term.
Greer answered a question about the jobs that will not exist in 10 years by indicating that the jobs in danger of being impacted are those that involve a repetitive process, such as writing a standard contract or reading a typical x-ray. He added, though, that most every job will be disrupted to some extent by artificial intelligence, and Gil said that just because something can be done by a computer does not mean that it will be, if humans give intrinsic value to that activity or profession. To make that point, he noted that computers can outplay humans in chess, but to his knowledge no one has gone to watch two computers play chess against each other.
Markoff also noted that the rate of change in many occupations may be slow enough that humans adapt without much disruption. For example, electronic discovery is making inroads in the legal profession, but today it accounts for a single-digit percentage of all billable hours. Similarly, while autonomous vehicles may soon prove capable of safely getting from one place to another, the capital cost of replacing every automobile, bus, and truck with an autonomous vehicle would be astronomical, so any transition is likely to be slow. “People make these predictions and the horizon on which they make the predictions is alarmist, but they are not grounded in fact,” said Gil. In addition, predictions about the future of jobs are often wrong, as was the case when the ATM was invented. At the time, forecasts held that there would be fewer bank tellers in the future, but over the next 25 years, the number of tellers increased because the cost of opening small branches went down. Greer added, though, that if even one-tenth of 1 percent of jobs targeted by artificial intelligence were going to be affected in the first 10-year timeframe, society should have a fundamental conversation about the future of work. In his mind, the most pressing problem today is not the future of work but dealing with the issues of trust and transparency.
A workshop participant asked the panelists how less developed countries are going to afford these technologies, and Greer said that he expects these nations to leapfrog technologies, as many did when going from no telephone system straight to cell phones instead of first deploying land lines. Natural disasters, while devastating, offer opportunities—and usually an influx of cash—to replace old technologies with new ones, and he said examples of this process are occurring in the developing world.