
Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril (2019)


4
POTENTIAL TRADE-OFFS AND UNINTENDED CONSEQUENCES OF ARTIFICIAL INTELLIGENCE

Jonathan Chen, Stanford University; Andrew Beam, Harvard University; Suchi Saria, Johns Hopkins University; and Eneida A. Mendonça, Regenstrief Institute

INTRODUCTION

Chapter 3 highlights the vast potential for artificial intelligence (AI)-driven solutions to systematically improve the efficiency, efficacy, and equity of health and medicine. Although we optimistically look forward to this future, we address fears over potential unintended (but predictable) consequences of an AI future in human health, with key considerations about how to recognize and mitigate credible risks.

This chapter reviews how hype cycles can promote interest in the short term but inadvertently impede progress when disillusionment sets in from unmet expectations, as in the AI Winters discussed in Chapter 2. We further explore the potential harms of poorly implemented AI systems, including misleading models, bias, and vulnerability to adversarial actors, all of which warrant an intentional process for validation and monitoring. We round out this chapter with a discussion of the implications of using technological automation to improve health care efficiency and access to care, even as we expect AI to redefine job roles and potentially exacerbate existing inequities without dedicated investments in human workforce development.

HYPE VERSUS HOPE

One of the greatest near-term risks in the current development of AI tools in medicine is not that they will cause serious unintended harm, but that they simply cannot meet the incredible expectations stoked by excessive hype. Indeed, so-called AI technologies such as deep learning and machine learning are riding atop the peak of inflated expectations on the Gartner Hype Cycle, which tracks the relative maturity of emerging technologies (Chen and Asch, 2017; Panetta, 2017) (see Figure 4-1). Without an appreciation for both the capabilities and limitations of AI technology in medicine, we will predictably crash into a “trough of disillusionment.” The greatest risk of all may be a backlash that impedes real progress toward using AI tools to improve human lives.

FIGURE 4-1 | Gartner Hype Cycle.
SOURCE: Gartner Hype Cycle HD, Gartner, Inc. 2017.

Over the past decade, several factors have led to increasing interest in and escalating hype of AI. There have been legitimate discontinuous leaps in computational capacity, electronic data availability (e.g., ImageNet [Russakovsky et al., 2015] and digitization of medical records), and perception capability (e.g., image recognition [Krizhevsky et al., 2017]). Just as algorithms can now automatically name the breed of a dog in a photo and generate a caption of a “dog catching a frisbee” (Vinyals et al., 2017), we are seeing automated recognition of malignant skin lesions (Esteva et al., 2017) and pathology specimens (Ehteshami et al., 2017). Such functionality is incredible but can easily lead one to mistakenly assume that the computer “knows” what skin cancer is and that a surgical excision is being considered. It is expected that an intelligent human who can recognize an object in a photo can also naturally understand and explain the context of what they are seeing, but the narrow, applied AI algorithms atop the current hype cycle have no such general comprehension. Instead, these algorithms are each designed to complete specific tasks, such as answering well-formed multiple-choice questions.

With Moore’s law of exponential growth in computing power, the question arises whether it is reasonable to expect that machines will soon possess greater computational power than human brains (Saracco, 2018). This comparison may not even make sense given the fundamentally different architectures of computer processors and biological brains, because computers can already exceed human brains by measures of pure storage and speed (Fischetti, 2011). Does this mean that humans are headed toward a technological singularity (Shanahan, 2015; Vinge, 1993) that will spawn fully autonomous AI systems that continually self-improve beyond the confines of human control? Roy Amara, co-founder of the Institute for the Future, reminds us that “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run” (Ridley, 2017). Among other reasons to doubt this, intelligence is not simply a function of computing power. Increasing computing speed and storage makes a better calculator, but not a better thinker. For the near future at least, this leaves us with fundamental design and concept issues in (general) AI research that have remained unresolved for decades (e.g., common sense, framing, abstract reasoning, creativity; Brooks, 2017).

Explicit advertising hyperbole may be one of the most direct triggers of the unintended consequences of hype. While such promotion is important to drive interest and motivate progress, it becomes counterproductive in excess. Hyperbolic marketing of AI systems that will “outthink cancer” (Brown, 2017) can ultimately set the field back when confronted by the hard realities of attempting to deliver changes in actual patient lives (Ross and Swetlitz, 2017). Modern advances do reflect important progress in AI software and data, but can shortsightedly discount the “hardware” of a health care delivery system (people, policies, and processes) needed to actually execute care. Limited AI systems can fail to provide insights to clinicians beyond what they already knew, undercutting many hopes for early warning systems and for screening asymptomatic patients for rare diseases (Butterfield, 2018). Ongoing research has a tendency to promote the latest technology as a cure-all (Marcus, 2018), even though there is a “regression to regression” in which well-worn methods backed by a good data source can be as useful as, or more useful than, “advanced” AI methods in many applications (Razavian et al., 2015).

A combination of technical and subject domain expertise is needed to recognize the credible potential of AI systems and avoid the backlash that will come from overselling them. Yet, there is no need for pessimism if our benchmark is improving on the current state of human health. Algorithms and AI systems cannot provide “guarantees of fairness, equitability, or even veracity” (Beam and Kohane, 2018), but no humans can either. The “Superhuman Human Fallacy” (Kohane, 2017) is to dismiss computerized systems (or humans) that do not achieve an unrealizable standard of perfection or improve on the best-performing human. For example, accidents attributed to self-driving cars receive outsized media attention even though they occur far less frequently than accidents attributed to human-driven cars (Felton, 2018). Yet, the potential outsized impact of automated technologies reasonably makes us demand a higher standard of reliability (Stewart, 2019), even if the necessary degree is unclear and waiting for perfection may itself cost lives in opportunity cost (Kalra and Groves, 2017). In health care, it is possible to determine where even imperfect AI clinical augmentation can improve care and reduce practice variation. For example, gaps exist now where humans commonly misjudge the accuracy of screening tests for rare diagnoses (Manrai et al., 2014), grossly overestimate patient life expectancy (Christakis and Lamont, 2000; Glare et al., 2003), and deliver care of widely varied intensity in the last 6 months of life (Barnato et al., 2007; Dartmouth Atlas Project, 2018). There is no need to overhype the potential of AI in medicine when there is ample opportunity (as reviewed in Chapter 3) to address existing issues with undesirable variability, crippling costs, and impaired access to quality care (DOJ and FTC, 2015).
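
To see how easily screening-test accuracy is misjudged, consider a back-of-the-envelope Bayes calculation in the spirit of the survey question studied by Manrai et al. (2014); the prevalence, sensitivity, and false-positive rate below are illustrative assumptions rather than figures quoted from the study.

```python
# Illustrative Bayes calculation: positive predictive value (PPV) of a
# screening test for a rare disease. Assumed numbers: prevalence 1 in
# 1,000, a 5% false-positive rate, and (for simplicity) perfect sensitivity.
prevalence = 1 / 1000
sensitivity = 1.0           # assume the test catches every true case
false_positive_rate = 0.05  # 5% of healthy people test positive

true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * false_positive_rate

ppv = true_positives / (true_positives + false_positives)
print(f"PPV = {ppv:.3f}")   # ~0.02: a positive result indicates disease only
                            # ~2% of the time, far below typical estimates
```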

To find opportunities for automated predictive systems, stakeholders should consider where important decisions hinge upon humans making predictions with a clear outcome (Bates et al., 2014; Kleinberg et al., 2016). Though human intuition is powerful, it is inevitably variable without a support system. One could identify scarce interventions that are known to be valuable and use AI tools to assist in identifying the patients most likely to benefit. For example, an intensive outpatient care team need not attend to everyone, but can be targeted to only those patients whom AI systems predict to be at high risk of morbidity (Zulman et al., 2017). In addition, there are numerous opportunities to deploy AI workflow support that helps humans rapidly answer questions or complete repetitive information tasks (e.g., documentation, scheduling, and other back-office administration).
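
As a minimal sketch of this targeting pattern, the snippet below trains a toy risk model and enrolls only the highest-risk patients up to a fixed program capacity; the features, labels, model choice, and capacity are hypothetical stand-ins, not a validated clinical workflow.

```python
# Hypothetical sketch: target a scarce intervention (e.g., an intensive
# outpatient care team) at the patients a model predicts to be highest risk,
# rather than at everyone. All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 10))  # stand-in for patient features
y_train = rng.binomial(1, 0.1, 5000)   # stand-in for observed morbidity

model = LogisticRegression().fit(X_train, y_train)

X_panel = rng.normal(size=(1000, 10))  # current patient panel
risk = model.predict_proba(X_panel)[:, 1]

capacity = 50  # assumed: the care team can only manage 50 patients
target_ids = np.argsort(risk)[::-1][:capacity]
print("Enroll patients:", target_ids[:10], "...")
```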

HOW COULD IMPROPER AI HURT PATIENTS AND THE HEALTH SYSTEM?

The evolution of AI techniques applied to medical use cases parallels improvements in processing power and storage costs (Deo, 2015) and the exponential increase in health data generated by scientific and clinical systems (e.g., electronic health records [EHRs], picture archiving and communication systems, and -omics), if not directly by patients (e.g., mobile sensors and social media interactions). Most of the conceptual foundations of AI are not new, but these combined advances can finally translate theoretical models into usable technologies.


This will mark a fundamental change in the expectations for the next generation of physicians (Silver et al., 2018). Though there is much upside in the potential for the use of AI systems to improve health and health care, like all technologies, implementation does not come without certain risks. This section outlines some ways in which AI in health care may cause harm in unintended ways.

Correlation or Causation? Prediction Versus Action?

Poorly constructed or interpreted models from observational data can harm patients. Incredible advances in learning algorithms are now toppling world-class human professionals in games such as chess and go (Silver et al., 2018), poker (Brown and Sandholm, 2018), and even complex real-time strategy games (AlphaStar Team, 2019). The key distinction is that these games can be reliably simulated, with clear outcomes of success and failure. Such simulations allow algorithms to generate a virtually unlimited amount of data and experiments. In contrast, accurate simulations of novel medical care with predictable outcomes may well be impossible, meaning medical data collection requires high-cost, high-stakes experiments on real people. Moreover, high-fidelity, reliably measured outcomes are not always achievable, because AI systems are constrained to learning from available observational health data.

The implementation of EHRs and other health information systems has provided scientists with rich longitudinal, multidimensional, and detailed records about an individual’s health data. However, these data are noisy and biased because they are produced for different purposes in the process of documenting care. Health care data scientists must be careful to apply the right types of modeling approaches based on the characteristics and limitations of the underlying data.

Correlation can be sufficient for diagnosing problems and predicting outcomes in certain cases. In most scenarios, however, patients and clinicians are not interested in just predicting outcomes given “usual care” or following a “natural history.” Often, the whole point of paying attention to health data is to intervene to change the expected outcomes.

Predictive models already help decision makers assess patient risk. However, methods that primarily learn associations between inputs and outputs can be unreliable, if not overtly dangerous, when used to drive medical decisions (Schulam and Saria, 2017). There are three common reasons why this is the case. First, the performance of association-based models tends to be susceptible to even minor deviations between the development and implementation datasets. The learned associations may memorize dataset-specific patterns that do not generalize when the tool is moved to new environments where these patterns no longer hold (Subbaswamy et al., 2019). A common example of this phenomenon is a shift in provider practice with the introduction of new medical evidence, technology, and epidemiology. If a tool relies heavily on a practice pattern to be predictive, then as practice changes, the tool is no longer valid (Schulam and Saria, 2017). Second, such algorithms cannot correct for biases due to feedback loops that are introduced when learning continuously over time (Schulam and Saria, 2017). In particular, if the implementation of an AI system changes patient exposures, interventions, and outcomes (often as intended), it can cause data shifts, or changes in the distribution of the data, that degrade performance. Finally, it may be tempting to treat the proposed predictors as factors one can manipulate to change outcomes, but such causal interpretations are often misleading.
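
The first failure mode can be demonstrated in a few lines. The following sketch, with entirely synthetic data and an invented practice-pattern proxy feature (a test ordered at an unusual hour), trains a model that leans on that proxy and then evaluates it after practice changes; the numbers are illustrative, but the degradation pattern is the point.

```python
# Minimal simulation of dataset shift: a model leans on a practice-pattern
# proxy that predicts the outcome in the development hospital but becomes
# uninformative after practice changes. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def make_data(n, proxy_noise):
    illness = rng.normal(size=n)  # true physiology (unobserved directly)
    # proxy: whether a test was ordered; tracks illness only when noise is low
    proxy = (illness + rng.normal(scale=proxy_noise, size=n) > 0).astype(float)
    outcome = (illness + rng.normal(scale=1.0, size=n) > 1).astype(int)
    noisy_vitals = illness + rng.normal(scale=2.0, size=n)
    return np.column_stack([noisy_vitals, proxy]), outcome

X_dev, y_dev = make_data(10000, proxy_noise=0.1)   # proxy tracks illness
model = LogisticRegression().fit(X_dev, y_dev)

X_new, y_new = make_data(10000, proxy_noise=10.0)  # practice changed: proxy ~ noise
print("dev AUC:", roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1]))
print("new AUC:", roc_auc_score(y_new, model.predict_proba(X_new)[:, 1]))
```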

Consider, for instance, the finding discussed by Caruana et al. (2015) regarding risk of death among those who develop pneumonia. Their goal was to build a model that predicts the risk of death for a hospitalized individual with pneumonia so that those at high risk could be treated and those at low risk could be safely sent home. The supervised learning model counterintuitively learned that pneumonia patients who also have asthma are less likely to die than pneumonia patients without asthma. They traced the result back to an existing policy that patients who have asthma and pneumonia should be directly admitted to the intensive care unit, therefore receiving more aggressive treatment that in turn improved their prognosis (Cabitza et al., 2017). The health care system and research team noticed this confounded finding, but had such a model been deployed to assess risk, sicker patients might have been triaged to a lower level of care, putting them at greater risk. In this example, the association-based algorithm learned risk conditioned on a triage policy in the development dataset that persisted in the implementation environment. However, as providers begin to rely on these types of tools, practice patterns deviate (a phenomenon called practice policy shift) from those observed in the development data. This shift hurts the validity and reliability of the tool (Brown and Sandholm, 2018).
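
A tiny synthetic simulation can reproduce this trap. In the sketch below, with all numbers invented for illustration, asthma truly raises the risk of death, but a hypothetical admission policy routes asthmatic patients to the ICU, where aggressive treatment lowers their observed mortality; a purely associational model then learns a "protective" asthma coefficient.

```python
# Synthetic illustration of the Caruana et al. (2015) pneumonia finding:
# asthma raises true risk, but an ICU-admission policy lowers observed
# mortality for asthmatic patients, so an associational model learns that
# asthma "protects" -- a dangerous conclusion if used for triage.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 100000
asthma = rng.binomial(1, 0.15, n)
icu = asthma                             # policy: asthma -> direct ICU admit
base_risk = 0.10 + 0.10 * asthma         # asthma truly increases risk
treated_risk = base_risk - 0.15 * icu    # ICU care sharply cuts risk
death = rng.binomial(1, np.clip(treated_risk, 0, 1))

model = LogisticRegression().fit(asthma.reshape(-1, 1), death)
print("learned asthma coefficient:", model.coef_[0][0])  # negative: "protective"
```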

In another example, researchers observed that the time at which a lab value is measured can often be more predictive than the value itself (Agniel et al., 2018). For instance, the fact that a hospital test was ordered at 2:00 a.m. was more predictive of patient outcomes than the actual results of the test, because a test ordered at such an unusual hour implies an emergency. Similarly, a mortality prediction model may learn that patients visited by the chaplain have an increased risk of death (Chen and Altman, 2014; Choi et al., 2015).

Finally, a prostate screening test can be determined to be “protective” of near-term mortality, not because the actual test does anything, but because patients who receive that screening test are those who are already fairly healthy and have a longer life expectancy (Agniel et al., 2018). A model based on associations may very well be learning about the way local clinical operations run but not generalize well when moving across hospitals or units with different practice patterns (Schulam and Saria, 2017). More broadly, both humans and predictive models can fail to generalize from training to implementation environments because of many different types of dataset shift—shift in dataset characteristics over time, in practice patterns, or across populations—posing a threat to model reliability and to the safety of downstream decisions made in practice (Subbaswamy and Saria, 2018). Recent works have proposed proactive learning techniques that are less susceptible to dataset shift because these algorithms proactively correct for likely shifts in the data (Schulam and Saria, 2017; Subbaswamy et al., 2019).

Rather than learning a model once, an alternative approach is to update models over time so that they continuously adapt to local and recent data. Such adaptive algorithms offer constant vigilance by monitoring for changing behavior. However, this may exacerbate disparities if only well-resourced institutions can deploy the expertise to do so. In addition, regulation and law, as reviewed in Chapter 7, face significant challenges in addressing approval and certification for continuously evolving systems.
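
One hedged way to operationalize that vigilance, whether or not the model itself is updated, is routine statistical monitoring of model inputs. The sketch below compares a training-time feature distribution against recent production data using a two-sample Kolmogorov-Smirnov test; the feature, sample sizes, and alert threshold are illustrative assumptions.

```python
# Sketch of drift monitoring: compare the distribution of a model input in
# recent production data against the training data and flag divergence.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_feature = rng.normal(loc=0.0, size=5000)   # e.g., a lab value at training time
recent_feature = rng.normal(loc=0.4, size=5000)  # the same lab value this month

stat, p_value = ks_2samp(train_feature, recent_feature)
if p_value < 0.01:  # threshold is an assumption; tune per feature and volume
    print(f"Drift alert: KS={stat:.3f}, p={p_value:.2e} -- consider re-validation")
```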

Rule-based systems are explicitly authored by human knowledge engineers, who encode their understanding of an application domain into a computing inference engine. These systems are generally more explicit and interpretable in their intent, making them easier to audit for safety and reliability. On the other hand, they take less advantage of relationships that can be automatically inferred through data-driven models and are therefore often less accurate. Integrating domain knowledge within learning-based frameworks, and combining these with methods for measuring and proactively eliminating bias, provides a promising path forward (Subbaswamy and Saria, 2018). Much of the literature on predictive modeling is based on black box models that memorize associations. Increases in model complexity can reduce both the interpretability of a model and the ability of the user to respond to its predictions in practical ways (Obermeyer and Emanuel, 2016). As a result, these models are susceptible to unreliability, leading to harmful suggestions. Evaluating for reliability and actionability is key to developing models that have the potential to affect health outcomes. These issues are at the core of the tension between “black box” and “interpretable” algorithms, the latter affording end users some explanation for why certain predictions are favored.

Training reliable models depends on training datasets that are representative of the population in which the model will be applied. Learning from real-world data—where insights can be drawn from patients similar to a given index patient—has the benefit of leading to more relevant inferences, but it is important to characterize populations for which there are inadequate data to support robust conclusions. For example, a tool may show acceptable performance on average across the individuals captured within a dataset but may perform poorly for specific subpopulations because the algorithm has not had enough data to learn from.


In genetic testing, minority groups can be disproportionately adversely affected when recommendations are made based on data that do not adequately represent them (Manrai et al., 2016). Test-time auditing tools that can identify individuals for whom the model predictions are likely to be unreliable can reduce the likelihood of incorrect decision making due to model bias (Schulam and Saria, 2017).
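
A routine subgroup audit makes this failure mode visible. In the hedged sketch below, synthetic scores are informative for a large majority group and nearly random for a small minority group; the overall AUC looks acceptable while the subgroup AUC exposes the gap. The group labels and metric are placeholders for whatever strata and measures a real audit would use.

```python
# Sketch of a subgroup audit: overall performance can mask poor performance
# in a subpopulation the model rarely saw during training.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 2000
group = rng.choice(["majority", "minority"], size=n, p=[0.95, 0.05])
y_true = rng.binomial(1, 0.2, n)
# Simulated scores: informative for the majority, near-random for the minority
y_score = np.where(group == "majority",
                   y_true * 0.5 + rng.uniform(size=n) * 0.5,
                   rng.uniform(size=n))

print("overall AUC:", round(roc_auc_score(y_true, y_score), 3))
for g in ["majority", "minority"]:
    mask = group == g
    print(g, "AUC:", round(roc_auc_score(y_true[mask], y_score[mask]), 3))
```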

Amplification or Exacerbation?

AI systems will generally make people more efficient at what they are already doing, whether that is good or bad. Bias is not inherently undesirable: the whole point of learning from (clinical) practice is the underlying assumption that human experts make nonrandom decisions biased toward achieving desirable effects. Machine learning relying on observational data will generally have an amplifying effect on our existing behavior, regardless of whether that behavior is beneficial or only exacerbates existing societal biases. For instance, Google Photos, an app that uses machine learning technology to organize images, incorrectly identified people with darker skin tones as “gorillas,” an animal that has historically been used as a racial slur (Lee, 2018). Another study found that machine translation systems were biased against women because of the way women were described in the data used to train the systems (Prates et al., 2018). In another example, Amazon developed a hiring algorithm based on its prior hiring practices, which recapitulated existing biases against women (Dastin, 2018). Although some of these algorithms were revised or discontinued, the underlying issues will remain significant problems, requiring constant vigilance as well as algorithm surveillance and maintenance to detect and address them (see Chapter 6). The need for continuous assessment of the ongoing safety of systems is discussed in Chapter 7, including a call for significant changes in regulatory compliance. Societal biases reflected in health care data may be amplified as automated systems drive more decisions, as further addressed in Chapters 5 and 6.

AI Systems Transparency

Transparency is a key theme underlying deeper issues related to privacy and consent or notification for patient data use, and to potential concerns on the part of patients and clinicians around being subject to algorithmically driven decisions. Consistent progress will only be feasible if health care consumers and health care systems are mutually recognized as trusted data partners.

As discussed in detail in Chapter 7, tensions exist between the desire for robust data aggregation to facilitate the development and validation of novel AI models and the need to protect consumer privacy as well as demonstrate respect for consumer preferences through informed consent or notification procedures. However, a lack of transparency about data use and privacy practices risks fostering a situation without clear consent, in which patient data are used in ways that patients do not understand, realize, or accept. Current consent practices for the use of EHRs and claims data are generally based on models focused on the Health Insurance Portability and Accountability Act (HIPAA) privacy rules, and some argue that HIPAA needs updating (Cohen and Mello, 2018). The progressive integration of other sources of patient-related data (e.g., genetic information, social determinants of health) and facilitated access to highly granular and multidimensional data are changing the protections provided by these traditional mechanisms. For instance, with more data available, reidentification becomes easier to perform (Cohen and Mello, 2019). As discussed further in Chapter 7, regulations need to be updated, and consent processes will need to be more informative about these added risks. It is crucial to educate patients about the value of having their data used to help advance science and care, while also being explicit about the potential risks of data misuse or unintended negative effects.

In addition to issues related to data use transparency, peer and community review of publications that describe AI tools, with dissemination of code and source data, is necessary to support scientific reproducibility and validation. The risks of “stealth research” (Ioannidis, 2015), where claims regarding important, high-stakes medical advancements are made outside of the peer-reviewed scientific literature, are too great. While there will be claims of commercial concerns for proprietary intellectual property, and even controversial concerns over “research parasites” (Longo and Drazen, 2016), some minimal level of transparency must be expected. Before clinical acceptance of systems can be expected, peer-reviewed publication of model performance and sources of training data should be expected, just as population descriptions are in randomized controlled trials. This is necessary to clarify the representativeness of any model and the populations to which it can reasonably be expected to apply.

For review of AI model development and validation, different models of accountability can be considered, such as the development of review agencies for automated AI and other systems in medicine. If not through existing structures such as the U.S. Food and Drug Administration (FDA) or Clinical Laboratory Improvement Amendments (CLIA), these can be modeled after the National Transportation Safety Board. In the latter case, such an agency has no direct enforcement authority, but in the event of any adverse event that could harm people, full disclosure of all data and information to the review board is required to ensure that the community can learn from mistakes. Refer to Chapter 7 for additional reading related to current and necessary policies and regulations in the use of AI systems in health care.


Cybersecurity Vulnerabilities Due to AI Automation

Vulnerabilities

Most of this chapter focuses on the side effects of nonmalicious actors using ethically neutral AI technology. Chapter 1 discusses some of the challenges in the ethical uses of health care AI tools. However, it is also important to consider how increasing automation opens new risks for bad actors to directly induce harm, such as through overt fraud. E-mail gave us new ways to communicate and increased productivity, but it also enabled new forms of fraud through spam and phishing. Likewise, new health care technology may open up new streams for fraud and abuse. After the widespread adoption of digital health records, data breaches resulting in the release of millions of individuals’ private medical information have become commonplace (Patil and Seshadri, 2014). These breaches will likely increase in an era when our demand for health data exceeds its supply in the public sector (Jiang and Bai, 2019; Perakslis, 2014). Health care systems are increasingly vigilant, but ongoing attacks demonstrate that safeguarding against a quickly evolving threat landscape remains exceedingly difficult (Ehrenfeld, 2017). The risk to personal data safety will continue to increase as AI becomes mainstream and commercialized. Engaging the public on how and when their secondary data are being used will be crucial to preventing public backlash as we have seen with the Facebook–Cambridge Analytica data scandal (Cadwalladr and Graham-Harrison, 2018). A recent study also indicates that hospital size and academic environment could be associated with increased risk for breaches, calling for better data breach statistics (Fabbri et al., 2017).

Health care data will not be the only target for attackers; the AI systems themselves will become the subject of assault and manipulation. FDA has already approved several AI systems for clinical use, some of which can operate without the oversight of a physician. In parallel, the health care economy in the United States is projected to represent 20 percent of the gross domestic product by 2025 (Papanicolas et al., 2018), making automated medical AI systems a natural target for manipulation as they drive decisions that move billions of dollars through the health care system.

Though recent advances in AI have made impressive progress on clinical tasks, the fact remains that these systems as currently conceived are exceptionally brittle, making them easy to mislead and manipulate with seemingly slight variations in input. Medical images that have small but intentionally crafted modifications (imperceptible to the human eye) can be used to create error in the diagnoses that an AI system provides (Finlayson et al., 2018, 2019). Such attacks allow the attacker to exert arbitrary control over the AI model by modifying the input provided to the system. Figure 4-2 demonstrates how such an attack may be carried out.

FIGURE 4-2 | Construction of an “adversarial example.” Left: An unaltered fundus image of a healthy retina. The AI system (bottom left) correctly identifies it as a healthy eye. Middle: Adversarial “noise” that is constructed with knowledge of the AI system is added to the original image. Right: Resulting adversarial image that superimposes the original image and the adversarial noise. Though the original image is indistinguishable from the adversarial example to human eyes, the AI system has now changed the diagnosis to diabetic retinopathy with essentially 100 percent confidence.
SOURCE: Image was provided by Samuel Finlayson.
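
For readers interested in the mechanics, below is a minimal sketch of a targeted fast gradient sign method (FGSM) attack, the family of techniques behind examples like the one in Figure 4-2. The untrained model, random image, and epsilon value are stand-ins for illustration; a real attack would target a deployed diagnostic classifier.

```python
# Minimal sketch of a targeted FGSM adversarial attack: nudge each pixel a
# tiny step in the direction that makes the model favor the attacker's label.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(              # stand-in for an image classifier
    torch.nn.Flatten(), torch.nn.Linear(224 * 224 * 3, 2))
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in fundus image
target = torch.tensor([1])                # attacker's desired label, e.g. "retinopathy"

loss = F.cross_entropy(model(image), target)
loss.backward()

epsilon = 0.003                           # perturbation kept imperceptibly small
adversarial = (image - epsilon * image.grad.sign()).clamp(0, 1).detach()
print("new prediction:", model(adversarial).softmax(dim=1))
```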

These kinds of attacks can give potential adversaries an opportunity to manipulate the health care system. For instance, suppose that AI has become ubiquitous in the health care system and payers require that an AI system evaluate and confirm an imaging-based diagnosis before a reimbursement is granted. Under such a system, a motivated provider could be incentivized to modify “borderline” cases to allow them to perform a procedure in pursuit of reimbursement. These kinds of attacks could be conducted on a larger scale where similar large financial gains are at stake. Consider that clinical trials that are based on imaging endpoints (e.g., tumor burden in X-rays) will likely be evaluated by AI systems in the future to ensure “objectivity.” Any entity could intentionally ensure a positive result by making small and untraceable adversarial changes to the image, which would cause the AI system to think that tumor burden had been reduced. It is unlikely that the hypothetical scenarios discussed above will happen in the near term, but they are presented as cautionary examples to encourage a proactive dialogue and to highlight limitations of current AI technology.

Adversarial Defenses

There are roughly two broad classes of possible defenses: infrastructural and algorithmic (Qiu et al., 2019; Yuan et al., 2018). Infrastructural defenses prevent image tampering or detect if it has occurred. For instance, an image hash, also known as a “digital fingerprint,” could be generated and stored by the device as soon as an image is created. The hash would then be used to determine if the image had been altered in any way, because any modification would result in a new hash. This would require an update to hospital information technology (IT) infrastructure, which has historically been very difficult. However, a set of standards similar to ones for laboratories such as CLIA could be established to ensure that the medical imaging pipeline is secure.
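
A minimal sketch of this fingerprinting idea, using a standard SHA-256 digest and a hypothetical file name:

```python
# Sketch of the "digital fingerprint" idea: hash the image bytes at
# acquisition time and re-check the hash before an AI system reads the
# image. Any tampering, adversarial or otherwise, changes the digest.
import hashlib

def fingerprint(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# At acquisition (e.g., on the imaging device), store alongside the image:
# original_digest = fingerprint("fundus_001.dcm")   # hypothetical file name
#
# Before model inference, verify integrity:
# assert fingerprint("fundus_001.dcm") == original_digest, "image altered!"
```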

Algorithmic defenses against adversarial attacks are a very active area of research within the broader machine learning community (Qiu et al., 2019; Yuan et al., 2018). As of yet, there are no defenses that have proven to be 100 percent effective, and new defenses are often broken almost as quickly as they are proposed. However, there have been successful defenses in specific domains or on specific datasets. On the handwritten digit dataset known as MNIST, several approaches have proven to be robust to adversarial attacks while retaining high levels of predictive accuracy (Kannan et al., 2018; Madry et al., 2017). It remains to be seen if some specific property of medical imaging (such as low levels of pose variance or restricted color spectrum) could be leveraged to improve robustness to adversarial attacks, but this is likely a fruitful direction for research in this area.
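
As a hedged illustration of one such defense, the sketch below compresses adversarial training in the style of Madry et al. (2017) to its core loop: perturb each batch against the current model, then train on the perturbed examples. Real defenses use multi-step attacks and careful tuning; the random tensors here merely stand in for MNIST batches.

```python
# Core loop of adversarial training: generate an FGSM perturbation of each
# batch against the current model, then take a gradient step on the
# perturbed batch so the model learns to resist such perturbations.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
epsilon = 0.1  # attack budget; an illustrative assumption

def adversarial_step(x, y):
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # untargeted attack: move *up* the loss gradient
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

for _ in range(100):                      # stand-in for real MNIST batches
    x = torch.rand(64, 1, 28, 28)
    y = torch.randint(0, 10, (64,))
    x_adv = adversarial_step(x, y)
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```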

Both types of defenses, infrastructural and algorithmic, highlight the need for interdisciplinary teams of computer scientists, health care workers, and consumer representatives at every stage of design and implementation of these systems. Because these AI systems represent a new type of IT infrastructure, they must be treated as such and continually probed for possible security vulnerabilities. This will necessarily require deep collaborations between health care IT experts, computer scientists, the traditional health care workforce, and those the algorithm is designed to affect.

HOW COULD AI RESHAPE MEDICINE AND HEALTH IN UNINTENDED WAYS?

The examples in this chapter largely revolve around clinical cases and risks, but the implications reach far beyond, to all of the application domains explored in Chapter 3; analogous applications and risks are foreseeable in public health, consumer health, population health, and risk management. Operational and administrative use cases may be more viable early targets, with much more forgiving risk profiles for unintended harm because no high-stakes medical decisions depend on them. Even then, automated AI systems will have far-reaching implications for patient populations, health systems, and the workforce in terms of the efficiency and equity of delivering against the unmet and unlimited demands for health care.


Future of Employment and Displacement

“It’s just completely obvious that in five years deep learning is going to do better than radiologists. It might be 10 years,” according to Geoffrey Hinton, a pioneer in artificial neural network research (Mukherjee, 2017). How should health care systems respond to the statement by Sun Microsystems co-founder Vinod Khosla that “Machines will replace 80 percent of doctors in a health care future that will be driven by entrepreneurs, not medical professionals” (Clark, 2012)? With the advancing capabilities of AI, and a history of prior large-scale workforce disruptions through technology advances, it seems reasonable to posit that entire job categories may be replaced by automation (see Figure 4-3), including some of the most common (e.g., retail clerks and drivers) (Desjardins, 2017; Frey and Osborne, 2013).

Are job losses in medicine a credible consequence of advancing AI? In 1968, Warner Slack commented that “any doctor that can be replaced by a machine should be replaced by a machine” (deBronkart and Sands, 2018). This sentiment is often misinterpreted as an argument for replacing people with computer systems, when it is meant to emphasize the value a good human adds that a computer system does not. If one’s job is restricted to relaying information and answering well-structured, verifiable multiple-choice questions, then it is likely those tasks should be automated and the job eliminated. Most clinical jobs and patient needs require much more cognitive adaptability, problem solving, and communication skill than a computer can muster. Anxiety over job losses due to AI and automation is likely exaggerated, but advancing technology will almost certainly change roles as certain tasks are automated. A conceivable future could eliminate manual tasks such as checking patient vital signs (especially with self-monitoring devices), collecting laboratory specimens, preparing medications for pickup, transcribing clinical documentation, completing prior authorization forms, scheduling appointments, collecting standard history elements, and making routine diagnoses. Rather than eliminating jobs, however, industrialization and technology typically yield net productivity gains to society, with increased labor demand elsewhere, such as in software, technical, support, and related services work. Even within the same job category, many assumed that automated teller machines would eliminate the need for bank tellers. Instead, the efficiencies gained enabled the expansion of branches and even greater demand for tellers who could focus on higher cognitive tasks (e.g., interacting with customers rather than simply counting money) (Pethokoukis, 2016). Health care is already the fastest growing and now largest employment sector in the nation (outstripping retail), but most of that growth is not in clinical professionals such as doctors and nurses, but rather in home care support and administrative staff (Thompson, 2018).


Filling the Gap for Human Expertise, the Scarcest Health Care Resource

Besides using AI automation to tackle obvious targets such as repetitive administrative tasks (clinical documentation, scheduling, etc.), it is more important to consider the most valuable and limited resource in medicine: access to, and time with, a competent professional clinician. More than 25 million people in the United States alone have deficient access to medical specialty care (Woolhandler and Himmelstein, 2017). For everyone to receive the level of medical care that insured metropolitan populations receive, the United States is already short more than 30,000 doctors. With growing and aging populations, the demand for physicians continually outpaces supply, with shortfalls projected to reach as much as 100,000 physicians in the United States alone by 2030 (Markit, 2017) (see Figure 4-4). The scarcity of available expertise runs even deeper in international and rural settings, where populations may not be able to reach even basic health care without prolonged travel. This pent-up and escalating demand for health care services should direct advances in telemedicine and AI automation to ultimately increase access and fill these shortfalls. At the same time, we should not feel satisfied with broad dissemination of lower quality services that may only widen inequity between affluent urban centers, with their ready access to multiple tiers of service, and remote rural populations with more limited choices.

Instead of trying to replace medical workers, the coming era of AI automation can instead be directed toward enabling a broader reach of the workforce to do more good for more people, given a constrained set of scarce resources.

Net Gains, Unequal Pains

Even with the optimistic perspective that increasing automation through AI technology will be net beneficial in the end, the intervening processes of displacement can be painful and disruptive and can widen existing inequality (Acemoglu and Restrepo, 2018). Automation reflects a movement of production from labor to capital. This tends to mean an unequal distribution of benefits, as productivity increasingly accrues to those holding the capital while the labor force (wage workers) is progressively constrained into a narrower set of tasks, not sharing as much in the control or growth of overall income (Acemoglu and Restrepo, 2018).

Everyone is enriched when something needed (e.g., food or medical care) becomes less expensive to produce through automation (Herrendorf et al., 2009). The response to such technological shocks can be slow and painful, however, with costly reallocation and retraining of workers. This can be particularly challenging when there is a mismatch between new technology and workforce skills. Such disruptive changes tend to be harder on small (usually under-represented) groups who are already on the margins, amplifying existing inequity. Those who can adapt well to different economies and structures are likely those who already have better resources, education, and socioeconomic stability. Advances in health care AI technologies may well create more jobs (e.g., software development, health care system analysts) than are eliminated (e.g., data entry, scribing, scheduling), but those in the jobs that are easy to automate are unlikely to be the ones able to easily acquire the skills needed for the new jobs created. The fallout from a growing mismatch between the skill sets employers demand and the training employees receive is reflected in the finding that only one in four employees feel they are getting the training needed to adapt to an AI-driven world (Giacomelli and Shukla, 2018).

FIGURE 4-4 | Projected total physician shortfall range, 2015–2030.
SOURCE: LaPointe, 2017.

Although the above example is on the individual worker level, even at the system level, we are likely to see increasing disparities. Well-resourced academic medical centers may be in a position to build and deploy adaptive learning AI systems, whereas smaller health care systems that care for the majority of the population are unlikely to have the resources to assemble the on-site expertise and data infrastructure needed for more than out-of-the-box systems that are subject to all of the modeling risks previously discussed.

While it is important to measure total and average improvement in human outcomes, it is equally important to also measure equitable distribution of such benefits (e.g., Gini index [Gastwirth, 1972]). By analogy to a “food apartheid,” if we only optimize for production of total calories per acre (Haspel, 2015), all can get fatter with more empty calories, but the poor are less likely to access actual nutrition (Brones, 2018). If high-tech health care is only available and used by those already plugged in socioeconomically, such advances may inadvertently reinforce a “health care apartheid” (Thadaney and Verghese, 2019).

New Roles in an AI Future

Prior advances in technology and automation have resulted in transitions of jobs from agricultural to manufacturing to service. Where will medical workers go when even service jobs are automated? Most of the near-term changes discussed are largely applied AI in terms of data analytics for prediction, decision making, logistics, and pattern recognition. These remain unlikely to displace many human skills such as complex reasoning, judgment, analogy-based learning, abstract problem solving, physical interactions, empathy, communication, counseling, and implicit observation. There will thus likely be a shift in health care toward jobs that require direct physical (human) interaction, which are not so easily automated. The advent of the AI era will even require the creation of new job roles (Wilson et al., 2017), including

  • Trainers: Teaching AI systems how to perform will require deliberate effort to evaluate and stress test them. AI systems can automate tasks and find patterns in data, but still require humans to provide meaning, purpose, and direction.
  • Explainers: Advancing AI algorithms often have a “black box” nature, making suggestions without clear explanations, requiring humans versed in both the technical and application domains to explain how such algorithms can be trusted to drive practical decisions.
  • Sustainers: The intelligence needs of human endeavors will continually evolve, preventing the advent of “completed” AI systems. Humans must continue to maintain, interpret, and monitor the behavior and unintended consequences of AI systems.

Deskilling of the Health Care Workforce

Even if health-related jobs are not replaced by AI, deskilling (“skill rot”) is a risk of over-reliance on computer-based systems (Cabitza et al., 2017). While clinicians may not be totally displaced, the fear is that they may lose “core competencies” considered vital to medical practice. In light of the rapid advancement of AI capabilities in reading X-rays (Beam and Kohane, 2016; Gulshan et al., 2016), will radiologists of the future be able to perform this task without the aid of a computer? The very notion of a core competency is an evolving one that professionals will need to adapt as technology changes roles (Jha and Topol, 2016).

As the skills needed in imaging-based specialties change rapidly, radiologists and pathologists “must be willing to be displaced to avoid being replaced” (Jha and Topol, 2016). Jha and Topol (2016) articulate a future in which physicians in these specialties no longer operate as image readers as they currently do, but have evolved to be “information specialists” that manage complex AI systems and integrate the various pieces of information they might provide. Indeed, they argue that radiology and pathology will be affected by AI in such a similar manner that these specialties might be merged under the unified banner of information specialists, to more accurately reflect the skills needed by these physicians in the AI-enabled future. While this may be extreme due to the significantly different clinical information required in the two disciplines, it highlights that this era of health care is likely to be substantially disrupted and transformed.

Need for Education and Workforce Development

Advancing technologies in health care can bring substantial societal benefits, but will require significant training or retraining of the workforce for roles that emphasize where humans and machines have different strengths. The Industrial Revolution illustrated the paradox of overall technological advance and productivity growth, which first passed through a period of stagnated wages, reduced share to laborers, expanding poverty, and harsh living conditions (Mokyr, 1990). An overall beneficial shift only occurred after mass schooling and other investments in human capital to expand skills of the workforce. Such adjustments are impeded if the educational system is not able to provide the newly relevant skills.

A graceful transition into the AI era of health care that minimizes the unintended consequences of displacement will require deliberate redesign of training programs. This ranges from support for a core basis of primary education in science, technology, engineering, and math literacy in the broader population to continuing professional education in the face of a changing environment. Any professional’s job changes over time as technology and systems evolve. While complete replacement of health-related jobs by AI computer systems is unlikely, a lack of adaptation will result in a growing skill set mismatch, decreases in efficiency, and increasing cost of care delivery. In the face of the escalating complexity of medicine and the computerization of data, medical training institutions already acknowledge that emphasizing rote memorization and repetition of information is suboptimal in an information age, requiring large-scale transformation. Health care workers in the AI future will need to learn how to use and interact with information systems, with foundational education in information retrieval and synthesis, statistics and evidence-based medicine appraisal, and interpretation of predictive models in terms of diagnostic performance measures. Institutional organizations (e.g., the National Institutes of Health, health care systems, professional organizations, universities, and medical schools) should shift focus from skills that are easily replaced by AI automation to education and workforce development programs that prepare people for work in the AI future, emphasizing science, technology, engineering, medicine, and data science skills as well as the human skills that are hard to replace with a computer. Along with the retraining required to effectively integrate AI with existing roles, new roles will be created as well (e.g., trainers, explainers, sustainers), creating the need to develop and implement training programs to address them.

Moravec’s paradox notes that “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility” (Moravec, 2018). Accordingly, clinicians will need to be selected for, and emphasize training in, the more distinctly “human” skills of counseling, physical examination, communication, management, and coordination.

AI System Augmentation of Human Tasks

Anxieties over the potential for automated AI systems to replace jobs rest on a false dichotomy. Humans and machines can each excel in distinct ways that the other cannot, meaning that the two combined can accomplish what neither could do alone. In one example pitting a deep learning algorithm against an expert pathologist in identifying metastatic breast cancer, the high accuracy of the algorithm was impressive enough, but more compelling was that combining the algorithm with the human expert outperformed both (Wang et al., 2016).
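
A toy simulation illustrates why such combination can help when human and algorithm errors are imperfectly correlated. The simple score averaging below is an assumption for illustration, not the combination method used in the study.

```python
# Hedged sketch of human-machine combination: when an expert and a model
# make independent errors, averaging their scores improves discrimination.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 1000
y = rng.binomial(1, 0.3, n)                       # ground-truth metastasis
model_score = np.clip(y * 0.6 + rng.normal(0.2, 0.25, n), 0, 1)
human_score = np.clip(y * 0.6 + rng.normal(0.2, 0.25, n), 0, 1)
combined = 0.5 * model_score + 0.5 * human_score  # simple unweighted average

for name, s in [("model", model_score), ("human", human_score),
                ("combined", combined)]:
    print(name, "AUC:", round(roc_auc_score(y, s), 3))
```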

HOW WILL AI TRANSFORM PATIENT, PROVIDER, AND COMPUTER INTERACTIONS?

The progressive digitization of U.S. medicine underwent a massive shift in just the last decade with the rapid adoption of EHRs spurred by the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 (HHS, 2017). This transformation creates much of the digital infrastructure that will make AI in medicine possible, but the pace of change was so rapid that we may not yet have achieved the maturity to effectively benefit from new technology without compromising core values of the profession. Advancing AI systems will depend on massive data streams for their power, but even the relatively basic billing processes, quality reporting, and business analytics that current EHRs support are burning out a generation of clinical professionals through increased electronic workflow requirements (Downing et al., 2018; Hill et al., 2013; Verghese, 2018).

As AI medical guidance systems driven by automated sensors increasingly direct medical care, there is concern that a result will be greater separation of patients from clinicians by digital intermediaries (Gawande, 2018). The future may see patients asking for advice and receiving direction from automated chatbots (Miner et al., 2016, 2017) while doctors and patients attentively analyze and recommend treatments for “iPatient” avatars that represent the data of their patients but are not the physical human beings (Verghese, 2008).

WHAT WILL HAPPEN TO ACCEPTANCE, TRUST, AND LIABILITY IN A HUMAN AND MACHINE AI FUTURE?

Information retrieval systems will increase democratization of medical knowledge, likely to the point where fully automated systems, chatbots, or intelligent agents are able to triage and dispense information and give health advice to patients (Olson, 2018). Less clear is how this disrupts conventions of who and what to trust. Widespread distribution of information comes with a respective risk of circulating misinformation in digital filter bubbles and echo chambers (Cashin-Garbutt, 2017).

Who is sued when something goes wrong, but all there is to point at is a faceless automation backed by a nebulous bureaucracy? Regulatory and guidance frameworks (see Chapter 7) must adapt, or leave us in an ethically ambiguous space (Victory, 2018).

HOW WILL HEALTH CARE PROVIDER ROLES BE CONCEPTUALIZED?

The classical ideal of a clinician evokes an image of a professional laying his or her stethoscope on patients for skillful examination, fulfilling a bonding and healing role. The gap between this image and reality may only widen further with advancing AI technology in medicine. The patient’s ability to tell his or her story to a live person could change in a world of voice-recognition software and AI chatbots. This may actually allow patients to be more honest in their medical interactions (Borzykowski, 2016), but could diminish one of the most effective therapeutic interventions: simply feeling that you are being listened to by an attentive and empathetic human being. In an AI-enabled world, the role of the clinician will likely move progressively toward manager, coordinator, and counselor, challenging the classical perception of what the role is and what should be counted among one’s core competencies. Digitization of medicine is intended to improve care delivery, particularly at the population level, but these benefits may not be felt on the frontlines of care. Instead, it can turn clinical professionals into data entry clerks, feeding data-hungry machines (optimized for billing incentives rather than clinical care). This may escalate as AI tools demand even more data, amid a policy climate imposing ever more documentation requirements to evaluate and monitor metrics of health care quality.

The transition to more IT solutions, computerized data collection, and algorithmic feedback should ultimately improve the consistency, quality, and efficiency of patient care. However, will the measurable gains necessarily outweigh the loss of harder-to-quantify human qualities of medicine? Will reliance on technology-driven test interpretations and care recommendations, rather than human clinical assessment, interpretation, and management, lead to different types of medical errors? These are provocative questions, but acknowledging them as public concerns and addressing them is important from a societal perspective.

More optimistically, perhaps advancing AI technologies can instead enhance human relationships. Multiple companies are exploring remote and automated approaches to "auto-scribe" clinical encounters (Cashin-Garbutt, 2017), allowing patient interactions to focus on direct care instead of note-taking and data entry. Though such promise is tantalizing, bad actors could exploit such passive monitoring, whether through unintended consequences or overt action, intruding on confidential physician–patient conversations in ways that make either party unwilling to discuss important issues. Health care AI developments may be better suited in the near term to back-office administrative tasks (e.g., coding, prior authorization, supply chain management, and scheduling). Rather than developing patches like scribes for mundane administrative tasks, a holistic system redesign may be needed to reorient incentives and eliminate low-value tasks altogether. Otherwise, AI systems may simply automate low-value tasks more efficiently, further entrenching them in the culture rather than facilitating their elimination.

WHY SHOULD THIS TIME BE ANY DIFFERENT?

A special article in the New England Journal of Medicine proclaimed that

Rapid advances in the information sciences, coupled with the political commitment to broad extensions of health care, promise to bring about basic changes in the structure of medical practice. Computing science will probably exert its major effects by augmenting and, in some cases, largely replacing the intellectual functions of the physician. (Schwartz, 1970)

This was published in 1970. Will excitement over the current wave of AI technology merely trigger the next AI Winter? Why should this time be any different? General AI systems will remain elusive for the foreseeable future, but there are credible reasons to expect that narrow, applied AI systems will still transform many areas of medicine and health in the next decade. Although many foundational concepts for AI systems were developed decades ago, the key ingredient has only now become available: data. Digitization of medical records, aggregated Internet crowdsourcing, and patient-generated data streams provide the critical fuel that powers modern AI systems. Even in the unlikely event that no further major technological breakthroughs follow, the coming decades will be busy translating existing technological advances (e.g., image recognition, machine translation, voice recognition, predictive modeling) into practical solutions for increasingly complex problems in health.

KEY CONSIDERATIONS

Though this chapter highlights potential risks and unintended consequences of the developing AI future of medicine, it should not be read as pessimism or discouragement of progress. Complexity and challenges in health care are only escalating (IOM, 2013), as is global competition in AI technology (Metz, 2019). "If we don't change direction soon, we'll end up where we're going" (Horne, 2016). Doing nothing has its own risks and costs in the form of missed opportunities. Leaders can integrate the key considerations outlined below to develop strategies and thinking around the effective use of AI.

Viewed through a medical ethics framework (Gillon, 1994), these considerations are guided by four principles:

  • Beneficence: Use AI systems to do good, recognizing that neglecting their use can itself be a harmful missed opportunity.
  • Non-maleficence: Avoid unintentional harm, whether from misinterpreting poorly constructed models or from the overt actions of bad actors.
  • Autonomy: Respect individual decisions and participation, including as they pertain to transparency in personal data collection and the applicability of AI-driven decisions.
  • Justice: Act on the basis of fair adjudication between competing claims, so that AI systems reduce rather than exacerbate existing disparities in access to quality health resources.

The review in this chapter seeks to soften any crash into a trough of disillusionment over the unintended consequences of health care AI, so that we may quickly move on to the slope of enlightenment that follows in the hype cycle (Chen and Asch, 2017; see Figure 4-1), where we effectively use all information and data sources to improve our collective health. To that end, we offer the following considerations:

  1. Beware of marketing hype, but recognize real opportunities. There is no need to over-hype the potential of AI in health care when there is ample opportunity (as reviewed in Chapter 3) to address existing issues from undesirable variability, to crippling costs, to impaired access to quality care.
  2. Seek out robust evaluations of model performance, utility, vulnerabilities, and bias. Developers must carefully probe models for unreliable behavior due to shifts in population, practice patterns, or other characteristics that do not generalize from the development environment to the deployment environment. Even within a contained deployment environment, it is important to measure the robustness of machine learning approaches to shifts in the real-world data-generating processes, and to sustain efforts to address the underlying human practices and culture from which the algorithms are learning (see the subgroup audit sketched after this list).
  3. Deliberately allocate effort to identify, mitigate, and correct biases in decision-making tools. Computers and algorithms are effective at learning the statistical structure, patterns, organization, and rules in complex data sources, but they do not offer meaning, purpose, or a sense of justice or fairness. Recognize that algorithms trained on biased datasets will likely just amplify those biases (Rajkomar et al., 2018; Zou and Schiebinger, 2018); the audit sketched after this list applies here as well.
  4. Demand transparency in data collection and algorithm evaluation processes. The trade-offs between innovation and safety and between progress and regulation are complex, but transparency should be demanded along the way, as more thoroughly explored in Chapter 7.
  5. Develop AI systems with adversaries (bad actors) in mind. Take inspiration from the cybersecurity industry's arms race between "white hats" and "black hats." Deep collaborations among "white hat" health care IT experts, computer scientists, and the traditional health care workforce are needed to sniff out system vulnerabilities and fortify them before "black hat" bad actors identify and exploit vulnerabilities in live systems (Symantec, 2019); a minimal adversarial probe is sketched after this list.
  6. Prioritize education reform and workforce development. A graceful transition into the AI era of medicine that minimizes displacement will require deliberate redesign of training programs and workforce development toward roles that emphasize the strengths humans have that computers do not.
  7. Identify synergy rather than replacement. Humans and machines each excel in ways the other cannot, so the two combined can accomplish what neither could alone. Rather than replacement, consider applications where AI systems can relieve limited access to a scarce resource (e.g., clinical expertise).
  8. Use AI systems to engage, rather than stifle, uniquely human abilities. AI-based automation of mundane administrative tasks and health-related operations can improve system efficiency, giving patients and clinicians more time for what humans do better (e.g., relationship building, information elicitation, counseling, and management). As explored in Chapter 6, avoid systems that disrupt human workflows.
  9. Use automated systems to reach patients where existing health systems do not. Even though there is unique value in an in-person clinician–patient interaction, more than 90 percent of a patient's life is spent outside a hospital or doctor's office. Automated systems and remote care frameworks (e.g., telemedicine and self-monitoring) can attend to, guide, and build relationships with patients to monitor chronic health issues, reaching many who were previously not engaged at all.
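
To make considerations 2 and 3 concrete, the sketch below illustrates one simple audit: computing a model's discrimination (area under the receiver operating characteristic curve, or AUROC) separately within each subgroup or care site of a cohort, so that performance drops under population shift or subgroup bias become visible. This is a minimal illustration in Python, not a method prescribed in this chapter; the column names, toy cohort, and site labels are hypothetical.

    import numpy as np
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    def subgroup_auroc(cohort, score_col, label_col, group_col):
        """AUROC of a risk score computed separately within each subgroup."""
        rows = []
        for group, sub in cohort.groupby(group_col):
            if sub[label_col].nunique() < 2:
                continue  # AUROC is undefined when only one outcome class is present
            rows.append((group, len(sub),
                         roc_auc_score(sub[label_col], sub[score_col])))
        return pd.DataFrame(rows, columns=[group_col, "n", "auroc"])

    # Hypothetical cohort: model risk scores, observed outcomes, and the
    # site where each patient was seen (development versus deployment).
    rng = np.random.default_rng(0)
    cohort = pd.DataFrame({
        "risk_score": rng.random(1000),
        "outcome": rng.integers(0, 2, 1000),
        "site": rng.choice(["dev_hospital", "deploy_clinic"], 1000),
    })
    print(subgroup_auroc(cohort, "risk_score", "outcome", "site"))

A large AUROC gap between sites or demographic groups is a signal to investigate dataset shift or label bias before and during deployment, in the spirit of Rajkomar et al. (2018) and Subbaswamy et al. (2019).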
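
Consideration 5 can likewise be made concrete with the kind of probe a "white hat" team might run against its own models. The fast gradient sign method nudges an input in the direction that most increases the model's loss, often flipping the prediction with an imperceptible perturbation (Madry et al., 2017; Finlayson et al., 2019). The sketch below is a minimal illustration assuming a PyTorch classifier; the toy model, input shapes, and epsilon are placeholders, not components of any deployed medical system.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fgsm_probe(model, x, y, epsilon=0.01):
        """Return a copy of x perturbed one signed-gradient step up the loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step where the loss increases fastest, clamped to the valid pixel range.
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Toy stand-in for a medical image classifier (hypothetical).
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(8, 1, 28, 28)    # batch of images scaled to [0, 1]
    y = torch.randint(0, 10, (8,))  # placeholder labels
    x_adv = fgsm_probe(model, x, y)
    flipped = (model(x).argmax(dim=1) != model(x_adv).argmax(dim=1)).sum().item()
    print(f"{flipped} of 8 predictions changed by a small perturbation")

Routine probes of this kind, paired with defenses such as adversarial training (Madry et al., 2017), raise the cost of attack but do not remove the need for ongoing monitoring.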

REFERENCES

Acemoglu, D., and P. Restrepo. 2018. Artificial intelligence, automation and work. Working Paper 24196. National Bureau of Economic Research. https://doi.org/10.3386/w24196.

Agniel, D., I. S. Kohane, and G. M. Weber. 2018. Biases in electronic health record data due to processes within the healthcare system: Retrospective observational study. BMJ 361:k1479.

AlphaStar Team. 2019. AlphaStar: Mastering the real-time strategy game StarCraft II. DeepMind. https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii (accessed November 12, 2019).

Barnato, A. E., M. B. Herndon, D. L. Anthony, P. M. Gallagher, J. S. Skinner, J. P. W. Bynum, and E. S. Fisher. 2007. Are regional variations in end-of-life care intensity explained by patient preferences? A study of the US Medicare population. Medical Care 45(5):386–393.

Bates, D. W., S. Saria, L. Ohno-Machado, A. Shah, and G. Escobar. 2014. Big data in health care: Using analytics to identify and manage high-risk and high-cost patients. Health Affairs 33:1123–1131.

Beam, A. L., and I. S. Kohane. 2016. Translating artificial intelligence into clinical care. JAMA 316:2368–2369.

Beam, A. L., and I. S. Kohane. 2018. Big data and machine learning in health care. JAMA 319:1317–1318.

Borzykowski, B. 2016. Truth be told, we’re more honest with robots. BBC WorkLife. https://www.bbc.com/worklife/article/20160412-truth-be-told-were-more-honest-with-robots (accessed November 12, 2019).

Brones, A. 2018. Food apartheid: The root of the problem with America’s groceries. The Guardian. https://www.theguardian.com/society/2018/may/15/food-apartheid-food-deserts-racism-inequality-america-karen-washington-interview (accessed November 12, 2019).

Brooks, R. 2017. The seven deadly sins of AI predictions. MIT Technology Review. https://www.technologyreview.com/s/609048/the-seven-deadly-sins-of-ai-predictions (accessed November 12, 2019).

Brown, J. 2017. Why everyone is hating on IBM Watson—including the people who helped make it. Gizmodo. https://gizmodo.com/why-everyone-is-hating-on-watson-including-the-people-w-1797510888 (accessed November 12, 2019).

Brown, N., and T. Sandholm. 2018. Superhuman AI for heads-up no-limit poker: Libratus beats top professionals. Science 359:418–424.

Butterfield, S. 2018. Let the computer figure it out. ACP Hospitalist. https://acphospitalist.org/archives/2018/01/machine-learning-computer-figure-out.htm (accessed November 12, 2019).

Cabitza, F., R. Rasoini, and G. F. Gensini. 2017. Unintended consequences of machine learning in medicine. JAMA 318(6):517–518.

Cadwalladr, C., and E. Graham-Harrison. 2018. Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election (accessed December 7, 2019).

Caruana, R., P. Koch, Y. Lou, M. Sturm, J. Gehrke, and N. Elhadad. 2015. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM. Pp. 1721–1730. https://doi.org/10.1145/2783258.2788613.

Cashin-Garbutt, A. 2017. Could smartglass rehumanize the physician patient relationship? News-Medical.net. https://www.news-medical.net/news/20170307/Could-smartglass-rehumanize-the-physician-patient-relationship.aspx (accessed November 12, 2019).

Chen, J. H., and R. B. Altman. 2014. Automated physician order recommendations and outcome predictions by data-mining electronic medical records. In AMIA Summits on Translational Science Proceedings. Pp. 206–210.

Chen, J., and S. Asch. 2017. Machine learning and prediction in medicine—Beyond the peak of inflated expectations. New England Journal of Medicine 376:2507–2509.

Choi, P. J., F. A. Curlin, and C. E. Cox. 2015. "The patient is dying, please call the chaplain": The activities of chaplains in one medical center's intensive care units. Journal of Pain and Symptom Management 50:501–506.

Christakis, N. A., and E. B. Lamont. 2000. Extent and determinants of error in physicians’ prognoses in terminally ill patients: Prospective cohort study. Western Journal of Medicine 172:310–313.

Clark, L. 2012. Machines will replace 80 percent of doctors. Wired. https://www.wired.co.uk/article/doctors-replaced-with-machines (accessed December 13, 2019).

Cohen, G., and M. Mello. 2018. HIPAA and protecting health information in the 21st century. JAMA 320(3):231–232.

Cohen, G., and M. Mello. 2019. Big data, big tech, and protecting patient privacy. JAMA 322(12):1141–1142.

Dartmouth Atlas Project. 2018. Dartmouth Atlas of Health Care. http://www.dartmouthatlas.org (accessed November 12, 2019).

Dastin, J. 2018. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G (accessed on November 12, 2019).

deBronkart, D., and D. Sands. 2018. Warner Slack: “Patients are the most underused resource.” BMJ 362:k3194.

Deo, R. C. 2015. Machine learning in medicine. Circulation 132:1920–1930.

Desjardins, J. 2017. Visualizing the jobs lost to automation. Visual Capitalist. http://www.visualcapitalist.com/visualizing-jobs-lost-automation/?link=mktw (accessed November 12, 2019).

DOJ and FTC (U.S. Department of Justice and Federal Trade Commission). 2015. Executive summary [updated]. Improving health care: A dose of competition. https://www.justice.gov/atr/executive-summary (accessed February 11, 2019).

Downing, N. L., D. W. Bates, and C. A. Longhurst. 2018. Physician burnout in the electronic health record era: Are we ignoring the real cause? Annals of Internal Medicine 169:50–51.

Ehrenfeld, J. M. 2017. WannaCry, cybersecurity and health information technology: A time to act. Journal of Medical Systems 41:104.

Ehteshami, B. B., M. Veta, P. J. van Diest, B. van Ginneken, N. Karssemeijer, G. Litjens, J. A. W. M. van der Laak, the CAMELYON16 Consortium, M. Hermsen, Q. F. Manson, M. Balkenhol, O. Geessink, N. Stathonikos, M. C. Van Dijk, P. Bult, F. Beca, A. H. Beck, D. Wang, A. Khosla, R. Gargeya, H. Irshad, A. Zhong, Q. Dou, Q. Li, H. Chen, H. J. Lin, P. A. Heng, C. Hab, E. Bruni, Q. Wong, U. Halici, M. U. Oner, R. Cetin-Atalay, M. Berseth, V. Khvatkov, A. Vylegzhanin, O. Kraus, M. Shaban, N. Rajpoot, R. Awan, K. Sirinukunwattana, T. Qaiser, Y. W. Tsang, D. Tellez, J. Annuscheit, P. Hufnagl, M. Valkonen, K. Kartasalo, L. Latonen, P. Ruusuvuoiri, K. Kiimatainen, S. Albargouni, B. Mungal, A. George, S. Demirci, N. Navab, S. Watanabe, S. Seno, Y. Takenaka, H. Matsuda, H. Ahmady Phoulady, V. Kovalev, A. Kalinovsky, V. Liauchuk, G. Bueno, M. M. Fernandez-Carrobles, I. Serrano, O. Deniz, D. Racoceanu, and R. Venancio. 2017. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 318:2199–2210.

Esteva, A., B. Kuprel, R. A. Novoa, J. Ko, S. M. Sweter, H. M. Blau, and S. Thrun. 2017. Dermatologist-level classification of skin cancer with deep neural networks. Nature 542:115–118.

Fabbri, D., M. E. Frisse, and B. Malin. 2017. The need for better data breach statistics. JAMA Internal Medicine 177(11):1696.

Felton, R. 2018. The problem isn’t media coverage of semi-autonomous car crashes. Jalopnik. https://jalopnik.com/the-problem-isn-t-media-coverage-of-semi-autonomous-car-1826048294 (accessed November 12, 2019).

Finlayson, S. G., H. W. Chung, I. S. Kohane, and A. L. Beam. 2018. Adversarial attacks against medical deep learning systems. arXiv.org. https://arxiv.org/abs/1804.05296 (accessed November 12, 2019).

Finlayson, S. G., J. D. Bowers, J. Ito, J. L. Zittrain, A. L. Beam, and I. S. Kohane. 2019. Adversarial attacks on medical machine learning. Science 363:1287–1289.

Fischetti, M. 2011. Computers versus brains. Scientific American. https://www.scientificamerican.com/article/computers-vs-brains (accessed November 12, 2019).

Frey, C., and M. Osborne. 2013. The future of employment: How susceptible are jobs to computerisation? Working Paper. Oxford Martin Programme on Technology and Employment.

Gastwirth, J. L. 1972. The estimation of the Lorenz curve and Gini index. Review of Economics and Statistics 54:306–316.

Gawande, A. 2018. Why doctors hate their computers. The New Yorker. https://www.newyorker.com/magazine/2018/11/12/why-doctors-hate-their-computers (accessed November 12, 2019).

Giacomelli, G., and P. Shukla. 2018. How AI can combat labor displacement. InformationWeek. https://www.informationweek.com/big-data/how-ai-can-combat-labor-displacement-/a/d-id/1331997 (accessed November 12, 2019).

Gillon, R. 1994. Medical ethics: Four principles plus attention to scope. BMJ 309:184–188.

Glare, P., K. Virik, M. Jones, M. Hudson, S. Eychmuller, J. Simes, and N. Christakis. 2003. A systematic review of physicians’ survival predictions in terminally ill cancer patients. BMJ 327:195–198.

Gulshan, V., L. Peng, M. Coram, M. C. Stumpe, D. Wu, A. Narayanaswamy, S. Venugopalan, K. Widner, T. Madams, J. Cuadros, R. Kim, R. Raman, P. C. Nelson, J. L. Mega, and D. R. Webster. 2016. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316:2402–2410.

Haspel, T. 2015. In defense of corn, the world’s most important food crop. The Washington Post. https://www.washingtonpost.com/lifestyle/food/in-defense-of-corn-the-worlds-most-important-food-crop/2015/07/12/78d86530-25a8-11e5-b77f-eb13a215f593_story.html (accessed November 12, 2019).

Herrendorf, B., R. Rogerson, and A. Valentinyi. 2009. Two perspectives on preferences and structural transformation. Working Paper 15416, National Bureau of Economic Research. https://doi.org/10.3386/w15416.

HHS (U.S. Department of Health and Human Services). 2017. HITECH Act enforcement interim final rule. https://www.hhs.gov/hipaa/for-professionals/special-topics/hitech-act-enforcement-interim-final-rule/index.html (accessed December 26, 2018).

Hill, R. G., Jr., L. M. Sears, and S. W. Melanson. 2013. 4000 Clicks: A productivity analysis of electronic medical records in a community hospital ED. American Journal of Emergency Medicine 31:1591–1594.

Horne, F. 2016. If we don’t change direction soon, we’ll end up where we’re going. Healthcare Management Forum 29:59–62.

Ioannidis, J. P. A. 2015. Stealth research. JAMA 313:663.

IOM (Institute of Medicine). 2013. Best care at lower cost: The path to continuously learning health care in America. Washington, DC: The National Academies Press. https://doi.org/10.17226/13444.

Jha, S., and E. J. Topol. 2016. Adapting to artificial intelligence: Radiologists and pathologists as information specialists. JAMA 316:2353–2354.

Jiang, J. X., and G. Bai. 2019. Types of information compromised in breaches of protected health information. Annals of Internal Medicine. https://doi.org/10.7326/M19-1759 (accessed November 12, 2019).

Kalra, N., and D. G. Groves. 2017. The enemy of good: Estimating the cost of waiting for nearly perfect autonomous vehicles. Santa Monica, CA: RAND Corporation. https://www.rand.org/pubs/research_reports/RR2150.html (accessed November 12, 2019).

Kannan, H., A. Kurakin, and I. Goodfellow. 2018. Adversarial logit pairing. arXiv.org. https://arxiv.org/abs/1803.06373 (accessed November 12, 2019).

Kleinberg, J., J. Ludwig, and S. Mullainathan. 2016. A guide to solving social problems with machine learning. Harvard Business Review. https://hbr.org/2016/12/a-guide-to-solving-social-problems-with-machine-learning (accessed November 12, 2019).

Kohane, I. 2017. What my 90-year-old mom taught me about the future of AI in health care. CommonHealth. https://www.wbur.org/commonhealth/2017/06/16/managing-mom-weight-algorithm.

Krizhevsky, A., I. Sutskever, and G. E. Hinton. 2017. ImageNet classification with deep convolutional neural networks. Communications of the ACM 60:84–90.

LaPointe, J. 2017. Physician shortage projected to grow to 104k providers by 2030. RevCycle Intelligence. https://revcycleintelligence.com/news/physician-shortage-projected-to-grow-to-104k-providers-by-2030 (accessed December 13, 2019).

Lee, N. T. 2018. Detecting racial bias in algorithms and machine learning. Journal of Information, Communication and Ethics in Society 16:252–260.

Longo, D. L., and J. M. Drazen. 2016. Data sharing. New England Journal of Medicine 374:276–277.

Madry, A., A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. 2017. Towards deep learning models resistant to adversarial attacks. arXiv.org. https://arxiv.org/abs/1706.06083 (accessed November 12, 2019).

Manrai, A. K., G. Bhatia, J. Strymish, I. S. Kohane, and S. H. Jain. 2014. Medicine’s uncomfortable relationship with math: Calculating positive predictive value. JAMA Internal Medicine 174:991–993.

Manrai, A. K., B. H. Funke, H. L. Rehm, M. S. Olesen, B. A. Maron, P. Szolovits, D. M. Margulies, J. Loscalzo, and I. S. Kohane. 2016. Genetic misdiagnoses and the potential for health disparities. New England Journal of Medicine 375:655–665.

Marcus, G. 2018. Deep learning: A critical appraisal. arXiv.org. https://arxiv.org/abs/1801.00631 (accessed November 13, 2019).

Markit, I. 2017. The complexities of physician supply and demand: Projections from 2015 to 2030. Final report. Washington, DC: Association of American Medical Colleges. https://aamc-black.global.ssl.fastly.net/production/media/filer_public/a5/c3/a5c3d565-14ec-48fb-974b-99fafaeecb00/aamc_projections_update_2017.pdf (accessed November 12, 2019).

Metz, C. 2019. A. I. shows promise assisting physicians. The New York Times. https://www.nytimes.com/2019/02/11/health/artificial-intelligence-medical-diagnosis.html (accessed November 12, 2019).

Miner, A. S., A. Milstein, S. Schueller, R. Hegde, C. Mangurian, and E. Linos. 2016. Smartphone-based conversational agents and responses to questions about mental health, interpersonal violence, and physical health. JAMA Internal Medicine 176:619–625.

Miner, A. S., A. Milstein, and J. T. Hancock. 2017. Talking to machines about personal mental health problems. JAMA 318:1217–1218.

Mokyr, J. 1990. The lever of riches: Technological creativity and economic progress. New York: Oxford University Press.

Moravec, H. 1990. Mind children: The future of robot and human intelligence. Cambridge, MA: Harvard University Press. http://www.hup.harvard.edu/catalog.php?isbn=9780674576186 (accessed November 12, 2019).

Mukherjee, S. 2017. A. I. versus M. D. The New Yorker. https://www.newyorker.com/magazine/2017/04/03/ai-versus-md (accessed November 12, 2019).

Obermeyer, Z., and E. J. Emanuel. 2016. Predicting the future—Big data, machine learning, and clinical medicine. New England Journal of Medicine 375:1216–1219.

Olson, P. 2018. This health startup won big government deals—but inside, doctors flagged problems. Forbes Magazine. https://www.forbes.com/sites/parmyolson/2018/12/17/this-health-startup-won-big-government-dealsbut-inside-doctors-flagged-problems/#5d4ddbeeabba (accessed November 12, 2019).

Panetta, K. 2017. Top trends in the Gartner Hype Cycle for emerging technologies, 2017. Gartner. https://www.gartner.com/smarterwithgartner/top-trends-in-the-gartner-hype-cycle-for-emerging-technologies-2017 (accessed December 13, 2019).

Papanicolas, I., L. R. Woskie, and A. K. Jha. 2018. Health care spending in the United States and other high-income countries. JAMA 319:1024–1039.

Patil, H. K., and R. Seshadri. 2014. Big data security and privacy issues in healthcare. In 2014 IEEE International Congress on Big Data. Pp. 762–765. https://doi.org/10.1109/BigData.Congress.2014.112.

Perakslis, E. D. 2014. Cybersecurity in health care. New England Journal of Medicine 371:395–397.

Pethokoukis, J. 2016. What the story of ATMs and bank tellers reveals about the “rise of the robots” and jobs. AEIdeas Blog. https://www.aei.org/publication/what-atms-bank-tellers-rise-robots-and-jobs (accessed November 12, 2019).

Prates, M. O. R., P. H. C. Avelar, and L. Lamb. 2018. Assessing gender bias in machine translation—A case study with Google Translate. arXiv.org. https://arxiv.org/abs/1809.02208 (accessed November 12, 2019).

Qiu, S., Q. Liu, S. Zhou, and C. Wu. 2019. Review of artificial intelligence adversarial attack and defense technologies. Applied Sciences 9(5):909.

Rajkomar, A., M. Hardt, M. D. Howell, G. Corrado, and M. H. Chin. 2018. Ensuring fairness in machine learning to advance health equity. Annals of Internal Medicine 169:866–872.

Razavian, N., S. Blecker, A. M. Schmidt, A. Smith-McLallen, S. Nigam, and D. Sontag. 2015. Population-level prediction of type 2 diabetes from claims data and analysis of risk factors. Big Data 3:277–287.

Ridley, M. 2017. Amara’s Law. MattRidleyOnline. http://www.rationaloptimist.com/blog/amaras-law (accessed December 13, 2019).

Ross, C., and I. Swetlitz. 2017. IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close. STAT. https://www.statnews.com/2017/09/05/watson-ibm-cancer (accessed November 12, 2019).

Russakovsky, O., J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. 2015. ImageNet large scale visual recognition challenge. arXiv.org. https://arxiv.org/abs/1409.0575 (accessed November 12, 2019).

Saracco, R. 2018. What is the computational power of our brain? EIT Digital. https://www.eitdigital.eu/newsroom/blog/article/what-is-the-computational-power-of-our-brain (accessed November 12, 2019).

Schulam, P., and S. Saria. 2017. Reliable decision support using counterfactual models. Advances in Neural Information Processing Systems 30:1697–1708.

Schwartz, W. B. 1970. Medicine and the computer. The promise and problems of change. New England Journal of Medicine 283:1257–1264.

Shanahan, M. 2015. The technological singularity. Cambridge, MA: MIT Press. https://mitpress.mit.edu/books/technological-singularity (accessed November 12, 2019).

Silver, D., T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, T. Lillicrap, K. Simonyan, and D. Hassabis. 2018. A general reinforcement learning algorithm that masters chess, Shogi, and Go through self-play. Science 362:1140–1144.

Stewart, E. 2019. Self-driving cars have to be safer than regular cars. The question is how much. Vox. https://www.vox.com/recode/2019/5/17/18564501/self-driving-car-morals-safety-tesla-waymo (accessed November 12, 2019).

Subbaswamy, A., and S. Saria. 2018. Counterfactual normalization: Proactively addressing dataset shift and improving reliability using causal mechanisms. arXiv.org. https://arxiv.org/abs/1808.03253 (accessed November 13, 2019).

Subbaswamy, A., P. Schulam, and S. Saria. 2019. Preventing failures due to dataset shift: Learning predictive models that transport. Proceedings of Machine Learning Research 89:3118–3127.

Symantec. 2019. What is the difference between black, white and grey hat hackers? Norton Security Center. https://us.norton.com/internetsecurity-emerging-threats-what-is-the-difference-between-black-white-and-grey-hat-hackers.html (accessed November 12, 2019).

Thadaney, S., and A. Verghese. 2019. Humans and AI, not humans versus AI. Stanford Medicine. http://medicine.stanford.edu/2019-report/humans-and-ai.html (accessed November 12, 2019).

Thompson, D. 2018. Health care just became the U.S.’s largest employer. The Atlantic. https://www.theatlantic.com/business/archive/2018/01/health-care-america-jobs/550079 (accessed on November 12, 2019).

Verghese, A. 2008. Culture shock—Patient as icon, icon as patient. New England Journal of Medicine 359:2748–2751.

Verghese, A. 2018. How tech can turn doctors into clerical workers. The New York Times. https://www.nytimes.com/interactive/2018/05/16/magazine/health-issue-what-we-lose-with-data-driven-medicine.html (accessed November 12, 2019).

Victory, J. 2018. What did journalists overlook about the Apple Watch "heart monitor" feature? HealthNewsReview.org. https://www.healthnewsreview.org/2018/09/what-did-journalists-overlook-about-the-apple-watch-heart-monitor-feature (accessed November 13, 2019).

Vinge, V. 1993. The coming technological singularity: How to survive in the post-human era. National Aeronautics and Space Administration. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19940022856.pdf (accessed November 12, 2019).

Vinyals, O., A. Toshev, S. Bengio, and D. Erhan. 2017. Show and tell: Lessons learned from the 2015 MSCOCO image captioning challenge. IEEE Transactions on Pattern Analysis and Machine Intelligence 39:652–663.

Wang, D., A. Khosla, R. Gargeya, H. Irshad, and A. H. Beck. 2016. Deep learning for identifying metastatic breast cancer. arXiv.org. https://arxiv.org/abs/1606.05718 (accessed November 13, 2019).

Wilson, H. J., P. R. Daugherty, and N. Morini-Bianzino. 2017. The jobs that artificial intelligence will create. MIT Sloan Management Review. https://sloanreview.mit.edu/article/will-ai-create-as-many-jobs-as-it-eliminates (accessed November 12, 2019).

Woolhandler, S., and D. U. Himmelstein. 2017. The relationship of health insurance and mortality: Is lack of insurance deadly? Annals of Internal Medicine 167:424–431.

Yuan, X., H. Pan, Q. Zhu, and X. Li. 2018. Adversarial examples: Attacks and defenses for deep learning. arXiv.org. https://arxiv.org/pdf/1712.07107.pdf (accessed November 12, 2019).

Zou, J., and L. Schiebinger. 2018. Design AI so that it’s fair. Nature 559:324–326. https://www.nature.com/magazine-assets/d41586-018-05707-8/d41586-018-05707-8.pdf (accessed November 12, 2019).

Zulman, D. M., C. P. Chee, S. C. Ezeji-Okoye, J. G. Shaw, T. H. Holmes, J. S. Kahn, and S. M. Asch. 2017. Effect of an intensive outpatient program to augment primary care for high-need veterans affairs patients: A randomized clinical trial. JAMA Internal Medicine 177:166–175.

Suggested citation for Chapter 4: Chen, J., A. Beam, S. Saria, and E. A. Mendonça. 2020. Potential trade-offs and unintended consequences of artificial intelligence. In Artificial intelligence in health care: The hope, the hype, the promise, the peril. Washington, DC: National Academy of Medicine.
