
1
ARTIFICIAL INTELLIGENCE IN HEALTH CARE: THE HOPE, THE HYPE, THE PROMISE, THE PERIL

Michael Matheny, Vanderbilt University Medical Center and U.S. Department of Veterans Affairs; Sonoo Thadaney Israni, Stanford University; Danielle Whicher, National Academy of Medicine; and Mahnoor Ahmed, National Academy of Medicine

INTRODUCTION

Health care in the United States, historically focused on encounter-based care and treating illness as it arises rather than preventing it, is now undergoing a sweeping transformation toward a more population health–based approach. This transformation is being driven by a series of changes in reimbursement, including multiple eras of managed care and capitated population management and increased reimbursement for value-based care and prevention, all of which attempt to manage the overall health of the patient beyond the treatment of illness (ASTHO, 2019; CMS, 2019; Kissam et al., 2019; Mendelson et al., 2017). Even so, U.S. health care expenditures continue to rise without corresponding gains in key health outcomes when compared with those of many similar countries (see Figure 1-1).

To assess where and how artificial intelligence (AI) may provide opportunities for improvement, it is important to understand the current context of and drivers for change in health care. AI is likely to promote automation and to provide context-relevant information synthesis and recommendations, through a variety of tools and in many settings, to patients, “fRamilies” (friends and family who serve as unpaid caregivers), and the clinical team. AI developers and stakeholders should prioritize ethical data collection and use, and should support data and information visualization through the use of AI (Israni and Verghese, 2019).

FIGURE 1-1 | Life expectancy gains and increased health spending, selected high-income countries, 1995–2015.
SOURCE: Figure redrawn from OECD, 2017, Health at a Glance 2017: OECD Indicators, OECD Publishing, Paris, https://doi.org/10.1787/health_glance-2017-en.

Technology innovations and funding are driven by business criteria such as profit, efficiency, and return on investment, and it is important to explore how these criteria will influence AI–health care development, evaluation, and implementation. This reality is further challenged by U.S. public and government views of health and health care, which oscillate between health care as a social good and health care as an economic commodity (Aggarwal et al., 2010; Feldstein, 2012; Rosenthal, 2017). These considerations are likely to drive some clear use cases in health care business operations: AI tools can be used to reduce cost and gain efficiencies by refocusing human labor on more complex tasks; to identify workflow optimization strategies; to reduce medical waste (failure of care delivery, failure of care coordination, overtreatment or low-value care, pricing failure, fraud and abuse, and administrative complexity); and to automate highly repetitive business and workflow processes (Becker’s Healthcare, 2018) by using reliably captured and structured data (Bauchner and Fontanarosa, 2019). When implementing these tools, it is critical to be thoughtful, equitable, and inclusive to avoid adverse events and unintended consequences. This requires ensuring that AI tools align with the preferences of their users and of the populations they are intended to serve, and that the tools do not further exacerbate historical inequities in access and outcomes (Baras and Baker, 2019).

Driven by a shift toward reimbursement and incentives that support a population health management approach rather than a fee-for-service approach, innovations in AI technologies are likely to improve patient outcomes via applications, workflows, interventions, and support for distributed health care delivery outside a traditional brick-and-mortar, encounter-based paradigm. The challenges of data accuracy and privacy protection will depend on whether AI technologies are regulated as medical devices or classified as entertainment applications. These consumer-facing tools are likely to support fundamental changes in interactions between health care professionals and patients and their caregivers. Tools such as single-lead electrocardiogram (ECG) surveillance or continuous blood glucose monitors will transform how health data are generated and utilized. They offer the opportunity to incorporate social determinants of health (SDoH) to identify patient populations for targeted interventions to improve outcomes and reduce health care utilization (Lee and Korba, 2017). Because SDoH interventions are labor-intensive, they scale poorly; AI may reduce the cost of utilizing SDoH data and provide an efficient means of prioritizing scarce clinical resources to address SDoH (Basu and Narayanaswamy, 2019; Seligman et al., 2017).

All of this presumes building solutions for health care challenges that will truly benefit from technology, rather than succumbing to technochauvinism, the belief that technology is always the best solution (Broussard, 2018).

These topics are explored through subsequent chapters. This first chapter sets the stage by providing an overview of the development process and structure of this publication; defining key terms and concepts discussed throughout the remaining chapters; and describing several overarching considerations related to AI systems’ reliance on data and issues related to trust, equity, and inclusion, which are critical to advancing appropriate use of AI tools in health care settings.

NATIONAL ACADEMY OF MEDICINE

Given the current national focus on AI and its potential utility for improving health and health care in the United States, the National Academy of Medicine (NAM) Leadership Consortium: Collaboration for a Value & Science-Driven Learning Health System (Leadership Consortium)—through its Digital Health Learning Collaborative (DHLC)—brought together experts to explore opportunities, issues, and concerns related to the expanded application of AI in health and health care settings (NAM, 2019a,b).

NAM LEADERSHIP CONSORTIUM: COLLABORATION FOR A VALUE & SCIENCE-DRIVEN LEARNING HEALTH SYSTEM

Broadly, the NAM Leadership Consortium convenes national experts and executive-level leaders from key stakeholder sectors for collaborative activities to foster progress toward a continuously learning health system in which science, informatics, incentives, and culture are aligned for enduring improvement and innovation; best practices are seamlessly embedded in the care process; patients and families are active participants in all elements; and new knowledge is captured as an integral by-product of the care experience. Priorities for achieving this vision include advancing the development of a fully interoperable digital infrastructure, the application of new clinical research approaches, and a culture of transparency on outcomes and cost.

The NAM Leadership Consortium serves as a forum for facilitating collaborative assessment and action around issues central to achieving the vision of a continuously learning health system. To address the challenges of improving both evidence development and evidence application, as well as improving the capacity to advance progress on each of those dimensions, Leadership Consortium members (all leaders in their fields) work with their colleagues to identify the issues that are not being adequately addressed, the nature of the barriers and possible solutions, and the priorities for action. They then marshal the resources of the sectors represented in the Leadership Consortium to sustain public–private cooperation for change.

DIGITAL HEALTH LEARNING COLLABORATIVE

The work of the NAM Leadership Consortium falls into four strategic action domains—informatics, evidence, financing, and culture—and each domain has a dedicated innovation collaborative that works to facilitate progress in that area. This Special Publication was developed under the auspices of the DHLC. Co-chaired by Jonathan Perlin from the Hospital Corporation of America and Reed Tuckson from Tuckson Health Connections, the DHLC provides a venue for joint activities that can accelerate progress in the area of health informatics and toward the digital infrastructure necessary for continuous improvement and innovation in health and health care.

PUBLICATION GENESIS

In 2017, the DHLC identified issues around the development, deployment, and use of AI as being of central importance to facilitating continuous improvement and innovation in health and health care. To consider the nature, elements, applications, state of play, key challenges, and implications of AI in health and health care, as well as ways in which the NAM might enhance collaborative progress, the DHLC convened a meeting at the National Academy of Sciences (NAS) building in Washington, DC, on November 30, 2017. Participants included AI experts from across the United States representing different stakeholder groups within the health care ecosystem, including health system representatives; academics; practicing clinicians; representatives from technology companies; electronic health record (EHR) vendors; nonprofit organizations; payer representatives; and representatives from U.S. federal organizations, including the National Institutes of Health, the National Science Foundation, the U.S. Department of Defense, the U.S. Department of Veterans Affairs (VA), and the U.S. Food and Drug Administration. The agenda and participant list for this workshop are included as Appendix B.

Participants generated a list of practical challenges to the advancement and application of AI to improve health and health care (see Table 1-1). To begin to address these challenges, meeting participants recommended that the DHLC establish a working group on AI in health and health care. Formed in February 2018, the working group is co-chaired by Michael Matheny of the Vanderbilt University Medical Center and the VA and Sonoo Thadaney Israni of Stanford University. The group’s charge was to accelerate the appropriate development, adoption, and use of valid, reliable, and sustainable AI models for transforming progress in health and health care. To advance this charge, members determined that they would work collaboratively to develop a reference document for model developers, clinical implementers, clinical users, and regulators and policy makers to:

  • understand the strengths and limitations of AI;
  • promote the use of these methods and technologies within the health care system; and
  • highlight areas of future work needed in research, implementation science, and regulatory bodies to facilitate broad use of AI to improve health and health care.

TABLE 1-1 | Practical Challenges to the Advancement and Application of Artificial Intelligence Tools in Clinical Settings Identified During the November 30, 2017, Digital Health Learning Collaborative Meeting

Challenge | Description
Workflow integration | Understand the technical, cognitive, social, and political factors in play and the incentives affecting the integration of artificial intelligence (AI) into health care workflows.
Enhanced explainability and interpretability | To promote integration of AI into health care workflows, consider what needs to be explained and the approaches for ensuring understanding by all members of the health care team.
Workforce education | Promote educational programs to inform clinicians about AI/machine learning approaches and to develop an adequate workforce.
Oversight and regulation | Consider the appropriate regulatory mechanisms for AI/machine learning and approaches for evaluating algorithms and their impact.
Problem identification and prioritization | Catalog the different areas of health care and public health where AI/machine learning could make a difference, focusing on intervention-driven AI.
Clinician and patient engagement | Understand the appropriate approaches for involving consumers and clinicians in AI/machine learning prioritization, development, and integration, and the potential impact of AI/machine learning algorithms on the patient–provider relationship.
Data quality and access | Promote data quality, access, and sharing, as well as the use of both structured and unstructured data and the integration of non-clinical data, as critical to developing effective AI tools.

PUBLICATION WORKFLOW

Authors were drawn from among the meeting participants according to expertise and interest, and each chapter was drafted with guidance from the NAM and the editors, with monthly publication meetings at which all authors were invited to participate and update the group. Author biographies can be found in Appendix C.

As an initial step, the authors, the NAM staff, and the co-chairs developed the scope and content focus of each of the chapters based on discussion at the initial in-person meeting. Subsequently, the authors for each chapter drafted chapter outlines from this guidance. Outlines were shared with the other authors, the NAM staff, and the working group co-chairs to ensure consistency in the level of detail and formatting. Differences and potential overlap were discussed before the authors proceeded to draft each chapter. The working group co-chairs and the NAM staff drafted content for Chapters 1 and 8, and were responsible for managing the monthly meetings and editing the content of all chapters.

After all chapters were drafted, the resulting publication was discussed at a meeting that brought together working group members and external experts at the NAS building in Washington, DC, on January 16, 2019. The goal of the meeting was to receive feedback on the draft publication to improve its utility to the field. Following the meeting, the chapter authors refined and added content to address suggestions from meeting participants. To improve consistency in voice and style across authors, an external editor was hired to review and edit the publication in its entirety before the document was sent out for external review. Finally, 10 external expert reviewers agreed to review the publication and provide critiques and recommendations for further improvement of the content. The working group co-chairs and the NAM staff reviewed all feedback and added recommendations and edits, which were sent to chapter authors for consideration for incorporation. Final edits following chapter author re-submissions were resolved by the co-chairs and the NAM staff. The resulting publication represents the ideas shared at both meetings and the efforts of the working group.

IMPORTANT DEFINITIONS

Throughout the publication, authors use foundational terms and concepts related to AI and its subcomponents. To establish a common understanding, this section describes key definitions for some of these terms and concepts.

U.S. Health Care

This publication relies on preexisting knowledge and a general understanding of the U.S. health care domain. Because space here is limited, a table of key reference materials that provide the relevant health care context is included in Appendix A. The list is a convenient, though not comprehensive, sample of well-regarded reference materials and selected publications written for a general audience.

Artificial Intelligence

The term “artificial intelligence” (AI) has a range of meanings, from specific forms of AI, such as machine learning, to the hypothetical AI that meets criteria for consciousness and sentience. Unlike much of the popular press, this publication does not address the hypothetical and focuses instead on the current and near-future uses and applications of AI.

A formal definition of AI starts with the Oxford English Dictionary: “The capacity of computers or other machines to exhibit or simulate intelligent behavior; the field of study concerned with this,” or Merriam-Webster online: “1: a branch of computer science dealing with the simulation of intelligent behavior in computers, 2: the capability of a machine to imitate intelligent human behavior.” More nuanced definitions of AI might also consider what type of goal the AI is attempting to achieve and how it is pursuing that goal. In general, AI systems range from those that attempt to accurately model human reasoning to solve a problem, to those that ignore human reasoning and exclusively use large volumes of data to generate a framework to answer the question(s) of interest, to those that attempt to incorporate elements of human reasoning but do not require accurate modeling of human processes. Figure 1-2 includes a hierarchical representation of AI technologies (Mills, 2015).

FIGURE 1-2 | A summary of the domains of artificial intelligence.
SOURCE: Adapted with permission from a figure in Mills, M. 2015. Artificial Intelligence in Law—The State of Play in 2015? Legal IT Insider. https://www.legaltechnology.com/latest-news/artificial-intelligence-in-law-the-stateof-play-in-2015.

Machine learning is a family of statistical and mathematical modeling techniques that uses a variety of approaches to automatically learn and improve the prediction of a target state, without explicit programming (e.g., Boolean rules) (Witten et al., 2016). Different methods, such as Bayesian networks, random forests, deep learning, and artificial neural networks, use different assumptions and mathematical frameworks for how data are ingested and how learning occurs within the algorithm. Regression analyses, such as linear and logistic regression, are also considered machine learning methods, although many users of these algorithms distinguish them from commonly defined machine learning methods (e.g., random forests, Bayesian networks). Although “AI” is the term more frequently used for marketing purposes, “machine learning” is often the more accurate description of the underlying methods and is widely used in industry. One way to categorize machine learning algorithms is by how they learn from the data (as shown in Figure 1-3): the subcategories are unsupervised learning, supervised learning, and reinforcement learning. These frameworks are discussed in greater detail in Chapter 5.

FIGURE 1-3 | A summary of the most common methods and applications for training machine learning algorithms.
SOURCE: Reprinted with permission from Isazi Consulting, 2015. http://www.isaziconsulting.co.za/machinelearning.html.
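
To make these categories concrete, the following is a minimal, illustrative sketch of supervised learning, the most common paradigm in clinical prediction: a model is fit to labeled examples and evaluated on held-out data. The data are synthetic, and the use of the scikit-learn library and a random forest are assumptions made for illustration rather than a recommendation.

```python
# Minimal supervised learning sketch with synthetic data (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))                      # stand-ins for clinical features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)                      # "supervised": learns from labeled examples
probs = model.predict_proba(X_test)[:, 1]
print("Held-out AUROC:", round(roc_auc_score(y_test, probs), 3))
```

Unsupervised methods, by contrast, would operate on the features alone (e.g., clustering patients), and reinforcement learning would learn from sequential feedback rather than a fixed labeled dataset.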

Natural language processing (NLP) enables computers to understand and organize human language (Manning and Schütze, 1999). NLP must model aspects of human reasoning because it seeks to represent the meaning behind written and spoken language in a computable, interpretable, and accurate way. NLP faces a higher bar than some other AI domains because context, interpretation, and nuance carry information that is needed for correct understanding. NLP incorporates rule-based and data-based learning systems, and many of the internal components of NLP systems are themselves machine learning algorithms with pre-defined inputs and outputs, sometimes operating under additional constraints. Examples of NLP applications include assessment of cancer disease progression and response to therapy from radiology reports (Kehl et al., 2019) and identification of post-operative complications from routine EHR documentation (Murff et al., 2011).
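
As a deliberately simplified illustration of the rule-based end of the NLP spectrum, the sketch below flags clinical notes that mention a post-operative complication while checking for simple negation cues; real systems, including the cited applications, rely on far richer linguistic context and, increasingly, on machine learning.

```python
# Rule-based sketch: flag notes mentioning a surgical site infection,
# with a crude check for negated mentions (illustrative only).
import re

notes = [
    "Post-op day 2: febrile, wound erythema concerning for surgical site infection.",
    "Post-op day 3: afebrile, no evidence of surgical site infection.",
]

mention = re.compile(r"surgical site infection", re.IGNORECASE)
negated = re.compile(r"\b(no|denies|without)\b[^.]*surgical site infection", re.IGNORECASE)

for note in notes:
    flagged = bool(mention.search(note)) and not negated.search(note)
    print("complication flagged:", flagged, "|", note)
```

Even this toy example shows why context raises the bar: a single negation word reverses the meaning of an otherwise identical phrase.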

Speech algorithms digitize audio recordings into computable data elements and convert text into human speech (Chung et al., 2018). This field is closely connected with NLP, with the added complexity of intonation and syllable emphasis impacting meaning. This complicates both inbound and outbound speech interpretation and generation. For examples of how deep learning neural networks have been applied to this field, see a recent systematic review of this topic (Nassif, 2019).

Expert systems are a set of computer algorithms that seek to emulate the decision-making capacity of human experts (Feigenbaum, 1992; Jackson, 1998; Leondes, 2002; Shortliffe and Buchanan, 1975). These systems rely largely on a complex set of Boolean and deterministic rules. An expert system is divided into a knowledge base, which encodes the domain logic, and an inference engine, which applies the knowledge base to data presented to the system to provide recommendations or deduce new facts. Examples of this are some of the clinical decision support tools (Hoffman et al., 2016) being developed within the Clinical Pharmacogenetics Implementation Consortium, which is promoting the use of knowledge bases such as PharmGKB to provide personalized recommendations for medication use in patients based on genetic data results (CPIC, 2019; PharmGKB, 2019).
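
The division into a knowledge base and an inference engine can be illustrated with a minimal sketch; the rules below are hypothetical simplifications written for illustration and are not actual CPIC or PharmGKB content.

```python
# Minimal expert-system sketch: deterministic rules (knowledge base) applied
# by a simple inference engine to patient data (hypothetical rules).
from typing import Callable, Dict, List

Rule = Callable[[Dict], List[str]]

def cyp2c19_clopidogrel_rule(patient: Dict) -> List[str]:
    if patient.get("CYP2C19") == "poor metabolizer" and "clopidogrel" in patient.get("medications", []):
        return ["Consider an alternative antiplatelet agent."]
    return []

def renal_metformin_rule(patient: Dict) -> List[str]:
    if patient.get("eGFR", 100) < 30 and "metformin" in patient.get("medications", []):
        return ["Review metformin use given reduced renal function."]
    return []

KNOWLEDGE_BASE: List[Rule] = [cyp2c19_clopidogrel_rule, renal_metformin_rule]

def inference_engine(patient: Dict) -> List[str]:
    """Apply every rule in the knowledge base and collect recommendations."""
    recommendations: List[str] = []
    for rule in KNOWLEDGE_BASE:
        recommendations.extend(rule(patient))
    return recommendations

print(inference_engine({"CYP2C19": "poor metabolizer",
                        "medications": ["clopidogrel"], "eGFR": 85}))
```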

Automated planning and scheduling systems produce optimized strategies for action sequences (such as clinic scheduling), which are typically executed by intelligent agents in a virtual environment or by physical robots designed to automate a task (Ghallab et al., 2004). These systems are defined by complex parameter spaces that require high-dimensional calculations.
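
Production planners search high-dimensional spaces of possible action sequences; the sketch below is a much simpler greedy stand-in that only illustrates the input and output shape of a scheduling problem (appointment requests in, an assignment of slots out).

```python
# Greedy scheduling sketch: assign each appointment request to the earliest
# open clinic slot (a simplified stand-in for automated planning).
from datetime import datetime, timedelta

slots = [datetime(2019, 11, 4, 9, 0) + timedelta(minutes=30 * i) for i in range(8)]
requests = ["patient A", "patient B", "patient C"]

available = list(slots)
schedule = {}
for patient in requests:
    if available:
        schedule[patient] = available.pop(0)     # earliest remaining slot

for patient, slot in schedule.items():
    print(patient, "->", slot.strftime("%Y-%m-%d %H:%M"))
```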

Computer vision focuses on how algorithms interpret, synthesize, and generate inference from digital images or videos. It seeks to automate or provide human cognitive support for tasks anchored in the human visual system (Sonka et al., 2008). This field leverages multiple disciplines, including geometry, physics, statistics, and learning theory (Forsyth and Ponce, 2003). One example is deploying a computer vision tool in the intensive care unit to monitor patient mobility (Yeung et al., 2019), because patient mobility is key for patient recovery from severe illness and can drive downstream interventions.


AI and Human Intelligence

Combining human intelligence and AI into augmented intelligence focuses on a supportive or assistive role for the algorithms, emphasizing that these technologies are designed to enhance human processing, cognition, and work, rather than replace it. William Ross Ashby originally popularized the term “amplifying intelligence,” which transformed into “augmented intelligence” (Ashby, 1964). These terms are gaining popularity because “artificial intelligence” has been burdened with meaning by marketing hype, popular culture, and science fiction—possibly impeding a reasoned and balanced discourse.

AI SYSTEMS’ RELIANCE ON DATA

Data are critical for delivering evidence-based health care and for developing any AI algorithm. Without data, the underlying characteristics of the process and its outcomes are unknown. This has been a gap in health care for many years, but key trends in the past decade (such as commodity wearable technologies) have transformed health care into a heterogeneous, data-rich environment (Schulte and Fry, 2019). It is now common in health and health care for massive amounts of data to be generated about an individual from a variety of sources, such as claims data, genetic information, radiology images, intensive care unit surveillance, EHR care documentation, and medical device sensing and surveillance. The reasons for these trends include the scaling of computational capacity through decreases in the cost of technology; widespread adoption of EHRs promoted by the Health Information Technology for Economic and Clinical Health (HITECH) Act; precipitous decreases in the cost of genetic sample processing (Wetterstrand, 2019); and increasing integration of medical- and consumer-grade sensors. U.S. consumers used approximately 3 petabytes of Internet data every minute of the day in 2018, generating possible health-connected data with each use (DOMO, 2019). There are more than 300,000 health applications in app stores, with more than 200 added each day and an overall doubling of these applications since 2015 (Aitken et al., 2017).

Data Aggregation

The accumulation of medical and consumer data has resulted in patients, caregivers, and health care professionals being responsible for aggregating, synthesizing, and interpreting data far beyond human cognitive and decision-making capacities. Figure 1-4 illustrates the projected exponential accumulation of data relative to the limits of human cognition for health care decision making (IOM, 2008).

FIGURE 1-4 | Growth in facts affecting provider decisions versus human cognitive capacity.
SOURCES: NRC, 2009; presentation by William Stead at IOM meeting on October 8, 2007, titled “Growth in Facts Affecting Provider Decisions Versus Human Cognitive Capacity.”

The growth in data generation, and in the need for data synthesis that exceeds human capacity, has surpassed prior estimates; the projection shown most likely underestimates the magnitude of the current data milieu.

AI algorithms require large volumes of training data to achieve performance levels sufficient for “success” (Shrott, 2017; Sun et al., 2017), and there are multiple frameworks and standards in place to promote data aggregation for AI use. These include standardized data representations that manage both data at rest1 and data in motion.2 For data at rest, mature common data models (CDMs),3 such as the Observational Medical Outcomes Partnership (OMOP), Informatics for Integrating Biology & the Bedside (i2b2), the Patient-Centered Clinical Research Network (PCORnet), and Sentinel, are increasingly providing a backbone to format, clean, harmonize, and standardize data that can then be used for the training of AI algorithms (Rosenbloom et al., 2017). Some of these CDMs (e.g., OMOP) are also international in focus, which may support compatibility and portability of some AI algorithms across countries. Some health care systems have invested in the infrastructure for developing and maintaining at least one CDM through funded initiatives (OHDSI, 2019; Ohno-Machado et al., 2014). Many others have adopted one of these CDMs as a cornerstone of their clinical data warehouse infrastructure to help support operations, quality improvement, and research. This improves the quality and volume of data that are computable and usable for AI in the United States, and promotes transparency and reproducibility (Hripcsak et al., 2015). It is also important for semantic meaning to be mapped to structured representations, such as Logical Observation Identifiers Names and Codes (LOINC) and the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM), because these CDMs leverage such standardized representations.

___________________

1 Data at rest: Data stored in a persistent structure, such as a database or a file system, and not in active use.

2 Data in motion: Data that are being transported from one computer system to another or between applications on the same computer.

3 A common data model is a standardized, modular, extensible collection of data schemas designed to make data easier to build, use, and analyze. Data from many sources are transformed into the model, which allows experts to make informed decisions about data representation and allows users to easily reuse the data.

For data in motion, Health Level Seven (HL7) Fast Healthcare Interoperability Resources (FHIR) is emerging as an open standard for helping data and AI algorithm outputs flow between applications and to the end user, managing the critical interconnections among consumers, EHRs, and population health management tools; many of the large EHR vendors provide support for this standard (Khalilia et al., 2015). Another technology being explored extensively in health care is the use of blockchain to store, transport, and secure patient records (Agbo et al., 2019). Made popular by the bitcoin implementation of this technology, blockchain has a number of potential benefits, including (1) immutability and traceability, which allow patients to send records without fear of tampering; (2) cryptographic protection of all records; (3) the ability to add new medical records within the encryption process; and (4) stronger patient control over access.
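
As a sketch of what data in motion looks like over FHIR’s REST interface, the following retrieves recent blood glucose Observations for one patient; the server base URL and patient identifier are placeholders, and the example assumes a server that supports the standard Observation search parameters (LOINC code 2339-0 denotes blood glucose).

```python
# Sketch: query a FHIR server for a patient's recent glucose Observations.
# The endpoint and patient ID are placeholders for illustration.
import requests

FHIR_BASE = "https://fhir.example.org/r4"        # placeholder FHIR endpoint
params = {
    "patient": "example-patient-id",             # placeholder patient identifier
    "code": "http://loinc.org|2339-0",           # LOINC code for blood glucose
    "_sort": "-date",
    "_count": 5,
}

response = requests.get(f"{FHIR_BASE}/Observation", params=params, timeout=30)
response.raise_for_status()
bundle = response.json()                          # FHIR search results arrive as a Bundle

for entry in bundle.get("entry", []):
    obs = entry["resource"]
    quantity = obs.get("valueQuantity", {})
    print(obs.get("effectiveDateTime"), quantity.get("value"), quantity.get("unit"))
```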

However, there are still many instances where the standardization, interoperability, and scale of data aggregation and transfers are not achieved in practice. Health information exchanges (HIEs), with appropriate permissions, are one method by which data may be aggregated and used for AI algorithm training and validation. Public health agencies and EHRs extensively support data exchange protocols that provide the technical capacity for electronic data sharing. However, because of a variety of barriers, health care professionals and patients are frequently unable to electronically request patient records from an outside facility after care is delivered (Lye et al., 2018; Ross, 2018). Most of today’s health data silos and assets reside in individual organizations, and current incentives leave little motivation for much needed collaboration and sharing. A recent review of the legal barriers to the operation and use of HIEs found that legislation in the past 10 years has lowered barriers to use, and points to economic incentives as the most significant current challenge (Mello et al., 2018).

Data access across health care systems, particularly data on staffing, costs and charges, and reimbursements, is critical for private health insurers and the U.S. health care delivery market. But, given the sensitive nature of this information, it is not shared easily or at all. Many institutions, particularly the larger integrated health care delivery networks, are developing internal AI and analytics to help support business decisions; however, in order to effectively influence U.S. health care expenditures, AI algorithms need access to larger and more population-representative data, which is possible through improved data sharing, transparency, and standardization (Green, 2019; Schulte, 2017). The U.S. government has been driving this movement toward access through ongoing efforts to prevent data blocking and through implementation of a key provision in the 21st Century Cures Act that aims to promote price transparency (ONC, 2019). While further transparency may provide AI with additional opportunities, the U.S. health care system still has a long way to go in addressing the myriad issues preventing widespread data sharing and standardization. This disadvantages U.S. research and innovation compared with that of other countries.

A key challenge for data integration is the lack of definitive laws and regulations for the secondary use of routinely collected patient health care data. Many of the laws and regulations around data ownership and sharing are country-specific and based on evolving cultural expectations and norms. In 2018, a number of countries promoted personal information protection guidance, moving from laws to specifications. The European Union has a rigorous regulatory infrastructure prioritizing personal privacy, detailed in the General Data Protection Regulation, which went into effect on May 25, 2018 (European Commission, 2018). In the People’s Republic of China, a non-binding but comprehensive set of guidelines was released in the Personal Information Security Specification (Shi et al., 2019). Great Britain’s National Health Service allows national-level data aggregation for care delivery and research. However, even in more monolithic data environments, reuse of these data for AI is justifiably scrutinized. For instance, in 2018, the British House of Lords report on AI criticized the sharing of identifiable patient data with a profit-motivated Silicon Valley company (House of Lords Select Committee on Artificial Intelligence, 2018, Chapter 2).

Variation in laws and regulations is in part a result of differing and evolving perceptions of appropriate approaches or frameworks for health data ownership, stewardship, and control. There is also a lack of agreement on who should be able to profit from data-sharing activities. In the United States today, health care data that are fully de-identified may be reused for other purposes without explicit consent. However, there is disagreement over what constitutes sufficiently de-identified data, as exemplified by a 2019 lawsuit against a Google–University of Chicago partnership to develop AI tools to predict medical diseases (Wakabayashi, 2019). Patients may not realize that their data could be monetized via AI tools for the financial benefit of various organizations, including the organization that collected the data and the AI developers. If these issues are not sufficiently addressed, we run the risk of an ethical conundrum, where patient-provided data assets are used for monetary gain, without explicit consent or compensation.

This could be similar to the story of Henrietta Lacks’s biological tissue, in which no consent was obtained to culture her cells (as was the practice in 1951), nor were she or the Lacks family compensated for their monetization (Skloot, 2011). There is a need to address and clarify current regulations, legislation, and patient expectations when patient data are used for building profit-motivated products or for research (refer to Chapter 7).

The United States lacks a national unique patient identifier, which could greatly reduce the error rates of de-duplication during data aggregation. Several probabilistic patient linkage tools are currently attempting to fill this gap (Kho et al., 2015; Ong et al., 2014, 2017). While there is evidence that AI algorithms can overcome noise from erroneous linkage and duplication of patient records through use of large volumes of data, the extent to which these problems may impact algorithm accuracy and bias remains an open question.
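
To make the idea of probabilistic linkage concrete, the following sketch (not drawn from the cited tools) scores candidate record pairs by weighted string similarity on name and date of birth and applies a match threshold; the weights and threshold are illustrative, whereas production tools estimate them from data.

```python
# Probabilistic record-linkage sketch: weighted similarity plus a threshold.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec1: dict, rec2: dict) -> float:
    # Illustrative weights; real linkage tools learn field weights from data.
    return 0.6 * similarity(rec1["name"], rec2["name"]) + 0.4 * similarity(rec1["dob"], rec2["dob"])

reference = {"name": "Jon Smith", "dob": "1970-01-02"}
candidates = [
    {"name": "John Smith", "dob": "1970-01-02"},
    {"name": "Jane Doe", "dob": "1985-07-30"},
]

for candidate in candidates:
    score = match_score(reference, candidate)
    print(candidate["name"], round(score, 2), "match" if score > 0.85 else "non-match")
```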

Cloud computing, which places physical computational resources in widespread locations, sometimes across international boundaries, is another particularly challenging issue. As data managers attempt to maintain compliance with many local and national laws, regulations, and legal frameworks, lapses in cloud environments can result in disastrous cybersecurity breaches (Kommerskollegium, 2012).

Finally, to make AI truly revolutionary, it is critical to consider the power of linking clinical and claims data with data beyond the narrow, traditional care setting by capturing the social determinants of health as well as other patient-generated data. This could include utilizing social media datasets to inform the medical team of the social determinants that operate in each community. It could also include developing publicly available datasets of health-related factors such as neighborhood walkability, food deserts, air quality, aquatic environments, environmental monitoring, and new areas not yet explored.

Data Bias

In addition to the issues associated with data aggregation, selecting an appropriate AI training data source is critical because training data influence the output observations, interpretations, and recommendations. If the training data are systematically biased due to, for example, under-representation of individuals of a particular gender, race, age, or sexual orientation, those biases will be modeled, propagated, and scaled in the resulting algorithm. The same is true for human biases (intentional and not) operating in the environment, workflow, and outcomes from which the data were collected. Similarly, social science research subject samples are disproportionately U.S. university undergraduates who are Western, educated, industrialized, rich, and democratic (WEIRD), and this data bias is carried through the behavioral sciences used as the basis for developing algorithms that explain or predict human behaviors (Downey, 2010; Sullivan, 2010).

Bias can also be present in genetic data, where the majority of sequenced DNA comes from people of European descent (Bustamante et al., 2011; Popejoy and Fullerton, 2016; Stanford Engineering, 2019). Training AI on data resources with these biases risks producing models that generalize inaccurately to populations that are not well represented. An apt and provocative term for such biased algorithms is “weapons of math destruction” (O’Neil, 2017). In her book of the same title, Cathy O’Neil outlines the damage that biased algorithms have caused in criminal justice sentencing, human resources and hiring, education, and other systems. If potential biases in training data are not addressed, they further propagate and scale historical inequities and discrimination.
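
As a simple illustration of why representation matters, the sketch below (using synthetic data rather than any dataset discussed above) trains a model on a sample in which one group is heavily under-represented and in which the feature–outcome relationship differs by group, then compares held-out accuracy across groups.

```python
# Subgroup audit sketch: compare accuracy by demographic group on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 2000
group = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])    # group B is under-represented
x = rng.normal(size=(n, 3))
coef = np.where(group == "A", 1.0, -1.0)                # relationship differs by group
y = ((coef * x[:, 0] + rng.normal(scale=0.5, size=n)) > 0).astype(int)

train = rng.random(n) < 0.7
model = LogisticRegression().fit(x[train], y[train])
pred = model.predict(x[~train])

for g in ("A", "B"):
    mask = group[~train] == g
    print("Group", g, "accuracy:", round(accuracy_score(y[~train][mask], pred[mask]), 2))
```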

PROMOTING TRUST, EQUITY, AND INCLUSION IN HEALTH CARE AI

Trust, equity, and inclusion need to be prioritized in the health care AI development and deployment processes (Vayena et al., 2018). Throughout this publication, various chapters address topics related to the ethical, equitable, and transparent deployment of AI. In addition, a growing number of codes of ethics, frameworks, and guidelines describe many of the relevant ethical issues (see Table 1-2 for a representative, although not comprehensive, list).

Judy Estrin proposes implementing AI through the lens of human rights values and outlines the anticipated friction, offering thought-provoking questions through which to navigate dilemmas (see Figure 1-5).

Building on the above, we briefly describe several key considerations to ensure the ethical, equitable, and inclusive development and deployment of health care AI.

Diversity in AI Teams

To promote the development of impactful and equitable AI tools, it is important to ensure diversity—of gender, culture, race, age, ability, ethnicity, sexual orientation, socioeconomic status, privilege, etc.—among AI developers. An April 2019 AI Now Institute report documents the lack of diversity in the field, describing this as “a moment of reckoning.” The report further notes that the “diversity disaster” has led to “flawed systems that exacerbate . . . gender and racial biases” (West et al., 2019). Consider the fact that the “Apple HealthKit, which enabled specialized tracking, such as selenium and copper intake, . . . neglected to include a women’s menstrual cycle tracker until iOS 9” (Reiley, 2016). The development team reportedly did not include any women.


TABLE 1-2 | Relevant Ethical Codes, Frameworks, and Guidelines

Guiding Codes and Frameworks | Reference
ACM Code of Ethics and Professional Conduct | Gotterbarn, D. W., B. Brinkman, C. Flick, M. S. Kirkpatrick, K. Miller, K. Vazansky, and M. J. Wolf. 2018. ACM code of ethics and professional conduct. https://www.acm.org/binaries/content/assets/about/acm-code-of-ethics-and-professional-conduct.pdf.
Artificial Intelligence at Google: Our Principles | Google. 2018. Artificial intelligence at Google: Our principles. Google AI. https://ai.google/principles.
Ethical OS: Risk Mitigation Checklist | Institute for the Future and Omidyar Network. 2018. Ethical OS: Risk Mitigation Checklist. https://ethicalos.org/wp-content/uploads/2018/08/EthicalOS_Check-List_080618.pdf.
DeepMind Ethics & Society Team | DeepMind. 2020. DeepMind Ethics & Society Team. https://deepmind.com/about/ethics-and-society.
Partnership on AI Tenets | Partnership on AI. 2018. Partnership on AI tenets. https://www.partnershiponai.org/tenets.
AI Now Report 2018 | Whittaker, M., K. Crawford, R. Dobbe, G. Fried, E. Kaziunas, V. Mathur, S. M. West, R. Richardson, J. Schultz, and O. Schwartz. 2018. AI Now Report 2018. AI Now Institute at New York University. https://stanford.app.box.com/s/xmb2cj3e7gsz5vmus0viadt9p3kreekk.
The Trouble with Algorithmic Decisions | Zarsky, T. 2016. The trouble with algorithmic decisions: An analytic road map to examine efficiency and fairness in automated and opaque decision making. Science, Technology, & Human Values 41(1):118–132.
Executive Office of the President | Munoz, C., M. Smith, and D. J. Patil. 2016. Big data: A report on algorithmic systems, opportunity, and civil rights. Executive Office of the President. https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/2016_0504_data_discrimination.pdf.
Addressing Ethical Challenges in Machine Learning | Vayena, E., A. Blasimme, and I. G. Cohen. 2018. Machine learning in medicine: Addressing ethical challenges. PLoS Medicine 15(11):e1002689. Figure 1.3.
Do No Harm: A Roadmap for Responsible Machine Learning in Health Care | Wiens, J., S. Saria, M. Sendak, M. Ghassemi, V. X. Liu, F. Doshi-Velez, K. Jung, K. Heller, D. Kale, M. Saeed, P. N. Ossorio, S. Thadaney-Israni, and A. Goldenberg. 2019. Do no harm: A roadmap for responsible machine learning for health care. Nature Medicine 25(9):1337–1340.

In addition, it is imperative that AI development and validation teams include end-user representatives who are likely to be most familiar with the issues associated with frontline implementation and who are knowledgeable about potential biases that may be incorporated into the data.

When developing, validating, and implementing AI tools that aim to promote behavior change to address chronic conditions such as obesity, heart disease, and diabetes, it is critical to engage behavioral scientists to ensure that the tools account for behavioral theory and principles of promoting change (see Chapter 6 for additional information on AI implementation). AI products that rely too heavily on reminders (e.g., “Remember to exercise 30 minutes today!”) and positive reinforcement through social approval (e.g., “Good job!” or “You did it!”) to effect change are unlikely to be successful. Decades of research show that behavioral change requires knowledge of the impact of health behaviors as well as a willingness to forgo short-term, concrete reinforcements (e.g., calorie-dense foods) in order to achieve longer-term, more abstract goals (e.g., “healthy weight”). This rich area of research stretches from early conceptual paradigms (Abraham and Sheeran, 2007; Prochaska and Velicer, 1997; Rosenstock, 1974) to more recent literature that has applied behavioral principles in developing digital tools to prevent and manage chronic illnesses in the short and long term (Sepah et al., 2017). The recent melding of behavioral science with digital tools is especially exciting, giving rise to companies such as Omada Health, Vida Health, and Livongo, which are deploying digital tools to enhance physical and mental health.

FIGURE 1-5 | Framework for implementing artificial intelligence through the lens of human rights values.
SOURCE: Reprinted with permission from Judy Estrin. Based on a slide Estrin shared at The Future of Human-Centered AI: Governance Innovation and Protection of Human Rights Conference, Stanford University, April 16, 2019.

AI-powered platforms make it easier to fractionalize and link users and providers, creating a new “uberization”4 in health care and a gig economy (Parikh, 2017) in which on-demand workers and contractors take on the risk of erratic employment and the financial risk of health insurance costs. This includes Uber and Lyft drivers, Task Rabbit temporary workers, nurses, physician assistants, and even physicians. Health care and education experienced the fastest growth of gig workers over the past decade, and the continuing trend forces questions related to a moral economy that explores the future of work and workers, guest workers, and more5 (British Medical Association, 2018). Many of these on-demand workers also have personal health issues that impact their lives (Bajwa et al., 2018). Thus, it is judicious to involve political and social scientists to examine and plan for the societal impacts of AI in health care.

___________________

4 According to the online Cambridge Dictionary, uberization is the act or process of changing the market for a service by introducing a different way of buying or using it, especially using mobile technology.

5 A 2018 survey showed that 7.7 percent of UK medical workers who are EU citizens would leave the United Kingdom for other regions if the United Kingdom withdrew from the European Union, as it was scheduled to do in 2019.

Problem Identification and Equitable Implementation

Health care AI tools have the capability to impact trust in the health care system on a national scale, especially if these tools lead to worse outcomes for some patients or result in increasing inequities. Ensuring that these tools address, or at least do not exacerbate, existing inequities will require thoughtful prioritization of a national agenda that is not driven purely by profit, but instead by an understanding of the important drivers of health care costs, quality, and access.

As a starting point, system leaders must identify key areas in which there are known needs where AI tools can be helpful, where they can help address existing inequities, and where implementation will result in improved outcomes for all patients. These areas must also have an organizational structure in place that addresses other ethical issues, such as patient–provider relationships, patient privacy, transparency, notification, and consent, as well as technical development, validation, implementation, and maintenance of AI tools within an ever evolving learning health care system.

The implementation of health care AI tools requires that information technologists, data scientists, ethicists and lawyers, clinicians, patients, and clinical teams and organizations collaborate and prioritize governance structures and processes. These teams will need a macro understanding of the data flows, transformations, incentives, levers, and frameworks for algorithm development and validation, as well as knowledge of ongoing changes required post-implementation (see Chapter 5).

When developing and implementing these tools, it may be tempting to ignore or delay consideration of the legal and ethical organizational structures needed to govern privacy, transparency, and consent. However, there are substantial risks in disregarding these considerations, as witnessed in inappropriate data uses and breaches, inappropriate results derived from training data, and algorithms that reproduce and scale prejudice via the underlying historically biased data (O’Neil, 2017). There must also be an understanding of the ethical, legal, and regulatory structures that are relevant to the approval, use, and deployment of AI tools, without which there will be liability exposure, unintended consequences, and limitations (see Chapter 7).

There are substantial infrastructure costs for internally developed AI health care solutions, and these costs will likely deter smaller health care delivery systems from being early adopters. Deploying AI tools requires careful evaluation of performance and maintenance. If health care AI tools prove effective for cost reduction, patient satisfaction, and patient outcomes, and are implemented as a competitive edge, resource-constrained systems that do not deploy these tools could be left at a disadvantage. Thus, clear guidance is needed on best practices for assessing and interpreting the opportunities and costs of implementing AI tools. Best practices should be driven by an implementation science research agenda and should engage stakeholders to lower the cost and complexity of AI technologies. This is particularly important for smaller health care systems, many of which operate in rural and resource-constrained environments.

Post-implementation, the health care systems and stakeholders will need to carefully monitor the impact of AI tools to ensure that they meet intended goals and do not exacerbate inequities.

Impact of AI on the Patient–Provider Relationship

The well-intentioned introduction of EHRs and the HITECH Act incentives contributed to converting physicians into data-entry clerks, worsening physician burnout, and reducing patient satisfaction (Verghese, 2018). A fundamental issue in ensuring that health care AI tools do not worsen that burden is the potential impact of AI on the patient–provider relationship. This could include further degradation of empathic interactions as well as a mismatch between existing and needed skills in the workforce. Throughout this publication, we emphasize the power of AI to augment rather than replace human intelligence, because

the desirable attributes of humans who choose the path of caring for others include, in addition to scientific knowledge, the capacity to love, to have empathy, to care and express caring, to be generous, to be brave in advocating for others, to do no harm, and to work for the greater good and advocate for justice. How might AI help clinicians nurture and protect these qualities? This type of challenge is rarely discussed or considered at conferences on AI and medicine, perhaps because it is viewed as messy and hard to define. But, if the goal is for AI to emulate the best qualities of human intelligence, it is precisely the territory that cannot be avoided. (Israni and Verghese, 2019)

As discussed in Chapter 4, the U.S. health care system can draw important lessons from the aviation industry, the history of which includes many examples of automation addressing small challenges, but also occasionally creating extraordinary disasters. The 2009 plane crash of an Air France flight from Rio to Paris showed the potential

unintended consequence of designing airplanes that anyone can fly: anyone can take you up on the offer. Beyond the degradation of basic skills of people who may once have been competent pilots, the fourth-generation jets have enabled people who probably never had the skills to begin with and should not have been in the cockpit. As a result, the mental makeup of airline pilots has changed. (Langewiesche, 2014)

More recently, disasters with Boeing’s 737 Max caused by software issues offer another caution: Competent pilots’ complaints about next-generation planes were not given sufficient review (Sharpe and Robison, 2019).

Finally, just because technology makes it possible to deploy a particular solution does not mean it is appropriate to do so. Recently, a doctor in California used a robot with a video-link screen to tell a patient that he was going to die. After a social media and public relations disaster, the hospital apologized, stating, “We don’t support or encourage the use of technology to replace the personal interactions between our patients and their care teams—we understand how important this is for all concerned, and regret that we fell short of the family’s expectations” (BBC News, 2019). Technochauvinism in AI will only further complicate an already complex and overburdened health care system.

In summary, health care is a complex field that incorporates genetics, physiology, pharmacology, biology, and other related sciences with the social, human, and cultural experience of managing health. Health care is both a science and an art, and challenges the notion that simple and elegant formulas will be able to explain significant portions of health care delivery and outcomes (Toon, 2012).

PUBLICATION ORGANIZATION

This publication is structured around several distinct topic areas, each covered in a separate chapter and independently authored by the listed expert team. Figure 1-6 shows the relationships among the chapters.

Each chapter is intended to stand alone and represents the views of its authors. To allow readers to read each chapter independently, some material is repeated, with cross-references to other chapters where appropriate. Each chapter opens with a summary of its key content and concludes with a set of key considerations for improving the development, adoption, and use of AI in health care.

FIGURE 1-6 | Chapter relationship.

Chapter 2 examines the history of AI, using examples from other industries, and summarizes the growth, maturity, and adoption of AI in health care. The chapter also describes the central importance of AI to the realization of the learning health care system.

Chapter 3 describes the potential utility of AI for improving health care delivery and discusses the near-term opportunities and potential gains from the use of AI in health care settings. The chapter also explores the promise of AI as envisioned by key stakeholder groups, including patients and families, the clinical care team, population and public health program managers, health care business and administrative professionals, and research and development professionals.

Chapter 4 considers some of the unintended consequences of AI in health care work processes, culture, equity, patient–provider relationships, and workforce composition and skills, and offers approaches for mitigating the risks.

Chapter 5 covers the technical processes and best practices for developing and validating AI models, including choices related to data, variables, model complexity, learning approach, and setup, as well as the selection of metrics for model performance.

Chapter 6 considers the key issues and best practices for deploying AI models in clinical settings, including the software development process, the integration of models in health care settings, the application of implementation science, and approaches for model maintenance and surveillance over time.

Chapter 7 summarizes key laws applicable to AI that may be applied in health care, describes the regulatory requirements imposed on AI systems designed for health care applications, and discusses legal and policy issues related to privacy and patient data.

The final chapter builds on and summarizes key themes across the publication and describes critical next steps for moving the field forward equitably and responsibly.

REFERENCES

Abraham, C., and P. Sheeran. 2007. The health belief model. In Cambridge handbook of psychology, health and medicine, edited by S. Ayers, A. Baum, C. McManus, S. Newman, K. Wallston, J. Weinman, and R. West, 2nd ed., pp. 97–102. Cambridge, UK: Cambridge University Press.

Agbo, C. C., Q. H. Mahmoud, and J. M. Eklund. 2019. Blockchain technology in healthcare: A systematic review. Healthcare 7(2):E56.

Aggarwal, N. K., M. Rowe, and M. A. Sernyak. 2010. Is health care a right or a commodity? Implementing mental health reform in a recession. Psychiatric Services 61(11):1144–1145.

Aitken, M., B. Clancy, and D. Nass. 2017. The growing value of digital health: Evidence and impact on human health and the healthcare system. https://www.iqvia.com/institute/reports/the-growing-value-of-digital-health (accessed May 12, 2020).

Ashby, W. R. 1964. An introduction to cybernetics. London: Methuen and Co. Ltd.

ASTHO (Association of State and Territorial Health Officials). 2019. Medicaid and public health partnership learning series. http://www.astho.org/Health-Systems-Transformation/Medicaid-and-Public-Health-Partnerships/Learning-Series/Managed-Care (accessed May 12, 2020).

Bajwa, U., D. Gastaldo, E. D. Ruggiero, and L. Knorr. 2018. The health of workers in the global gig economy. Global Health 14(1):124.

Baras, J. D., and L. C. Baker. 2009. Magnetic resonance imaging and low back pain care for Medicare patients. Health Affairs (Millwood) 28(6):w1133–w1140.

Basu, S., and R. Narayanaswamy. 2019. A prediction model for uncontrolled type 2 diabetes mellitus incorporating area-level social determinants of health. Medical Care 57(8):592–600.

Bauchner, H., and P. B. Fontanarosa. 2019. Waste in the US health care system. JAMA 322(15):1463–1464.

BBC News. 2019. Man told he’s going to die by doctor on video-link robot. March 9. https://www.bbc.com/news/world-us-canada-47510038 (accessed May 12, 2020).

Becker’s Healthcare. 2018. AI with an ROI: Why revenue cycle automation may be the most practical use of AI. https://www.beckershospitalreview.com/artificial-intelligence/ai-with-an-roi-why-revenue-cycle-automation-may-be-the-most-practical-use-of-ai.html (accessed May 12, 2020).

British Medical Association. 2018. Almost a fifth of EU doctors have made plans to leave UK following Brexit vote. December 6. https://psmag.com/series/the-future-of-work-and-workers (accessed May 12, 2020).

Broussard, M. 2018. Artificial unintelligence: How computers misunderstand the world. Cambridge, MA: MIT Press.

Bustamante, C. D., F. M. De La Vega, and E. G. Burchard. 2011. Genomics for the world. Nature 475:163–165.

Chung, Y. A., Y. Wang, W. N. Hsu, Y. Zhang, and R. Skerry-Ryan. 2018. Semi-supervised training for improving data efficiency in end-to-end speech synthesis. arXiv.org.

CMS (Centers for Medicare & Medicaid Services). 2019. Medicare and Medicaid Programs; Patient Protection and Affordable Care Act; Interoperability and Patient Access for Medicare Advantage Organization and Medicaid Managed Care Plans, State Medicaid Agencies, CHIP Agencies and CHIP Managed Care Entities, Issuers of Qualified Health Plans in the Federally-Facilitated Exchanges and Health Care Providers. Proposed rule. Federal Register 84(42):7610–7680.

CPIC (Clinical Pharmacogenetics Implementation Consortium). 2019. What is CPIC? https://cpicpgx.org (accessed May 12, 2020).

DeepMind Ethics and Society. 2019. DeepMind ethics & society principles. https://deepmind.com/applied/deepmind-ethics-society/principles (accessed May 12, 2020).

DOMO. 2019. Data never sleeps 6.0. https://www.domo.com/learn/data-never-sleeps-6 (accessed May 12, 2020).

Downey, G. 2010. We agree it’s WEIRD, but is it WEIRD enough? Neuroanthropology. July 10. https://neuroanthropology.net/2010/07/10/we-agree-its-weird-but-is-it-weird-enough (accessed May 12, 2020).

European Commission. 2018. 2018 reform of EU data protection rules. https://www.tc260.org.cn/upload/2019-02-01/1549013548750042566.pdf (accessed May 12, 2020).

Feigenbaum, E. 1992. Expert systems: Principles and practice. In The encyclopedia of computer science and engineering. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.34.9207&rep=rep1&type=pdf (accessed May 12, 2020).

Feldstein, P. J. 2012. Health care economics. Clifton Park, NY: Cengage Learning.

Forsyth, D. A., and J. Ponce. 2003. Computer vision: A modern approach. Upper Saddle River, NJ: Prentice Hall.

Ghallab, M., D. S. Nau, and P. Traverso. 2004. Automated planning: Theory and practice. San Francisco, CA: Elsevier.

Google. 2018. Artificial intelligence at Google: Our principles. https://ai.google/principles (accessed May 12, 2020).

Gotterbarn, D. W., B. Brinkman, C. Flick, M. S. Kirkpatrick, K. Miller, K. Vazansky, and M. J. Wolf. 2018. ACM code of ethics and professional conduct. https://www.acm.org/binaries/content/assets/about/acm-code-of-ethics-and-professional-conduct.pdf (accessed May 12, 2020).

Green, D. 2019. Commentary: Data-sharing can transform care, so let’s get connected. Modern Healthcare. https://www.modernhealthcare.com/opinion-editorial/commentary-data-sharing-can-transform-care-so-lets-get-connected (accessed May 12, 2020).

Hoffman, J. M., H. M. Dunnenberger, J. K. Hicks, M. W. Carillo, R. R. Freimuth, M. S. Williams, T. E. Klein, and J. F. Peterson. 2016. Developing knowledge resources to support precision medicine: Principles from the Clinical Pharmacogenetics Implementation Consortium (CPIC). Journal of the American Medical Informatics Association 23:796–801.

Hripcsak, G., J. D. Duke, N. H. Shah, C. G. Reich, V. Huser, M. J. Schuemie, M. A. Suchard, R. W. Park, I. C. K. Wong, P. R. Rijnbeek, J. van der Lei, N. Pratt, N. Norén, Y-C. Li, P. E. Stang, D. Madigan, and P. B. Ryan. 2015. Observational Health Data Sciences and Informatics (OHDSI): Opportunities for observational researchers. Studies in Health Technology and Informatics 216:574–578. https://tmu.pure.elsevier.com/en/publications/observational-health-data-sciences-and-informatics-ohdsi-opportun (accessed May 12, 2020).

House of Lords Select Committee on Artificial Intelligence. 2018. AI in the UK: Ready, willing and able? House of Lords, April 16.

Institute for the Future and Omidyar Network. 2018. Ethical OS: Risk mitigation checklist. https://ethicalos.org/wp-content/uploads/2018/08/EthicalOS_Check-List_080618.pdf (accessed May 12, 2020).

IOM (Institute of Medicine). 2008. Evidence-based medicine and the changing nature of health care: 2007 IOM annual meeting summary. Washington, DC: The National Academies Press. https://doi.org/10.17226/12041.

Isazi Consulting. 2015. What is machine learning? http://www.isaziconsulting.co.za/machinelearning.html.

Israni, S. T., and A. Verghese. 2019. Humanizing artificial intelligence. JAMA 321(1):29–30.

Jackson, P. 1998. Introduction to expert systems. Boston, MA: Addison-Wesley Longman Publishing Co., Inc.

Kehl, K. L., H. Elmarakeby, M. Nishino, E. M. Van Allen, E. M. Lepisto, M. J. Hassett, B. E. Johnson, and D. Schrag. 2019. Assessment of deep natural language processing in ascertaining oncologic outcomes from radiology reports. JAMA Oncology. Epub ahead of print. doi: 10.1001/jamaoncol.2019.1800.

Khalilia, M., M. Choi, A. Henderson, S. Iyengar, M. Braunstein, and J. Sun. 2015. Clinical predictive modeling development and deployment through FHIR web services. AMIA Annual Symposium Proceedings 717–726.

Kho, A. N., J. P. Cashy, K. L. Jackson, A. R. Pah, S. Goel, J. Boehnke, J. E. Humphries, S. D. Kominers, B. N. Hota, S. A. Sims, B. A. Malin, D. D. French, T. L. Walunas, D. O. Meltzer, E. O. Kaleba, R. C. Jones, and W. L. Galanter. 2015. Design and implementation of a privacy preserving electronic health record linkage tool in Chicago. Journal of the American Medical Informatics Association 22(5):1072–1080.

Kissam, S. M., H. Beil, C. Cousart, L. M. Greenwald, and J. T. Lloyd. 2019. States encouraging value-based payment: Lessons from CMS’s state innovation models initiative. The Milbank Quarterly 97(2):506–542.

Kommerskollegium National Board of Trade. 2012. How borderless is the cloud? https://www.wto.org/english/tratop_e/serv_e/wkshop_june13_e/how_borderless_cloud_e.pdf (accessed May 12, 2020).

Langewiesche, W. 2014. The human factor. Vanity Fair. September. https://www.vanityfair.com/news/business/2014/10/air-france-flight-447-crash (accessed May 12, 2020).

Lee, J., and C. Korba. 2017. Social determinants of health: How are hospitals and health systems investing in and addressing social needs? Deloitte Center for Health Solutions. https://www2.deloitte.com/content/dam/Deloitte/us/Documents/life-sciences-health-care/us-lshc-addressing-social-determinants-of-health.pdf (accessed May 12, 2020).

Leondes, C. T. 2002. Expert systems: The technology of knowledge management and decision making for the 21st century. San Diego, CA: Academic Press.

Levi, M. 2018 (April). Towards a new moral economy: A thought piece. https://casbs.stanford.edu/sites/g/files/sbiybj9596/f/levi-thought-piece-april-20184.pdf.

Lye, C. T., H. P. Forman, R. Gao, J. G. Daniel, A. L. Hsiao, M. K. Mann, D. deBronkart, H. O. Campos, and H. M. Krumholz. 2018. Assessment of US hospital compliance with regulations for patients’ requests for medical records. JAMA Network Open 1(6):e183014.

Manning, C. D., and H. Schütze. 1999. Foundations of statistical natural language processing. Cambridge, MA: MIT Press.

Mello, M. M., J. Adler-Milstein, K. L. Ding, and L. Savage. 2018. Legal barriers to the growth of health information exchange—Boulders or pebbles? The Milbank Quarterly 96(1):110–143.

Mendelson, A., K. Kondo, C. Damberg, A. Low, M. Motuapuaka, M. Freeman, M. O’Neil, R. Relevo, and D. Kansagara. 2017. The effects of pay-for-performance programs on health, healthcare use, and processes of care: A systematic review. Annals of Internal Medicine 166(5):341–355.

Mills, M. 2015. Artificial intelligence in law—The state of play in 2015? https://www.legaltechnology.com/latest-news/artificial-intelligence-in-law-the-state-of-play-in-2015 (accessed May 12, 2020).

Munoz, C., M. Smith, and D. J. Patil. 2016. Big data: A report on algorithmic systems, opportunity, and civil rights. Executive Office of the President. https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/2016_0504_data_discrimination.pdf (accessed May 12, 2020).

Murff, H. J., F. Fitzhenry, M. E. Matheny, N. Gentry, K. L. Kotter, K. Crimin, R. S. Dittus, A. K. Rosen, P. L. Elkin, S. H. Brown, and T. Speroff. 2011. Automated identification of postoperative complications within an electronic medical record using natural language processing. JAMA 306:848–855.

NAM (National Academy of Medicine). 2019a. Leadership Consortium for a Value & Science-Driven Health System. https://nam.edu/programs/value-science-driven-health-care (accessed May 12, 2020).

NAM. 2019b. Digital learning. https://nam.edu/programs/value-science-driven-health-care/digital-learning (accessed May 12, 2020).

Nassif, A. B., I. Shahin, I. Attilli, M. Azzeh, and K. Shaalan. 2019. Speech recognition using deep neural networks: A systematic review. IEEE Access 7:19143–19165. https://ieeexplore.ieee.org/document/8632885.

NRC (National Research Council). 2009. Computational technology for effective health care: Immediate steps and strategic directions. Washington, DC: The National Academies Press. https://doi.org/10.17226/12572.

OECD (Organisation for Economic Co-operation and Development). 2017. Health at a glance 2017. https://www.oecd-ilibrary.org/content/publication/health_glance-2017-en (accessed May 12, 2020).

OHDSI (Observational Health Data Sciences and Informatics). 2019. Home. https://ohdsi.org (accessed May 12, 2020).

Ohno-Machado, L., Z. Agha, D. S. Bell, L. Dahm, M. E. Day, J. N. Doctor, D. Gabriel, M. K. Kahlon, K. K. Kim, M. Hogarth, M. E. Matheny, D. Meeker, J. R. Nebeker, and pSCANNER team. 2014. pSCANNER: Patient-centered scalable national network for effectiveness research. Journal of the American Medical Informatics Association 21(4):621–626.

ONC (The Office of the National Coordinator for Health Information Technology). 2019. 21st Century Cures Act: Interoperability, information blocking, and the ONC Health IT Certification Program. Final Rule. Federal Register 84(42):7424.

O’Neil, C. 2017. Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Broadway Books.

Ong, T. C., M. V. Mannino, L. M. Schilling, and M. G. Kahn. 2014. Improving record linkage performance in the presence of missing linkage data. Journal of Biomedical Informatics 52:43–54.

Ong, T., R. Pradhananga, E. Holve, and M. G. Kahn. 2017. A framework for classification of electronic health data extraction-transformation-loading challenges in data network participation. Generating Evidence and Methods to Improve Patient Outcomes (eGEMS) 5(1):10.

Parikh, R. 2017. Should doctors play along with the uberization of health care? Slate. https://slate.com/technology/2017/06/should-doctors-play-along-with-the-uberization-of-health-care.html (accessed May 12, 2020).

Partnership on AI. 2018. Partnership on AI tenets. https://www.partnershiponai.org/tenets.

Patient-Centered Clinical Research Network. 2019. Data driven. https://pcornet.org/data-driven-common-model (accessed May 12, 2020).

PharmGKB. 2019. Home. https://www.pharmgkb.org (accessed May 12, 2020).

Popejoy, A. B., and S. M. Fullerton. 2016. Genomics is failing on diversity. Nature 538(7624):161–164.

Prochaska, J. O., and W. F. Velicer. 1997. The transtheoretical model of health behavior change. American Journal of Health Promotion 12(1):38–48.

Reiley, C. E. 2016. When bias in product design means life or death. Medium. https://medium.com/@robot_MD/when-bias-in-product-design-means-life-or-death-ea3d16e3ddb2 (accessed May 12, 2020).

Rosenbloom, S. T., R. J. Carroll, J. L. Warner, M. E. Matheny, and J. C. Denny. 2017. Representing knowledge consistently across health systems. Yearbook of Medical Informatics 26(1):139–147.

Rosenstock, I. 1974. Historical origins of the health belief model. Health Education & Behavior 2(4):328–335.

Rosenthal, E. 2017. An American sickness. New York: Penguin.

Ross, C. 2018. The government wants to free your health data. Will that unleash innovation? STAT. https://www.statnews.com/2018/03/29/government-health-data-innovation (accessed May 12, 2020).

Schulte, D. 2017. 4 ways artificial intelligence can bend health care’s cost curve. https://www.bizjournals.com/bizjournals/how-to/technology/2017/07/4-ways-artificial-intelligencecan-bend-healthcare.html (accessed May 12, 2020).

Schulte, F., and E. Fry. 2019. Death by 1,000 clicks: Where electronic health records went wrong. Kaiser Health News & Fortune (Joint Collaboration). https://khn.org/news/death-by-a-thousand-clicks (accessed May 12, 2020).

Seligman, B., S. Tuljapurkar, and D. Rehkopf. 2017. Machine learning approaches to the social determinants of health in the health and retirement study. SSM-Population Health 4:95–99.

Sepah, S. C., L. Jiang, R. J. Ellis, K. McDermott, and A. L. Peters. 2017. Engagement and outcomes in a digital Diabetes Prevention Program: 3-year update. BMJ Open Diabetes Research and Care 5:e000422. doi: 10.1136/bmjdrc-2017-000422.

Sharpe, A., and P. Robison. 2019. Pilots flagged software problems on Boeing jets besides Max. Bloomberg. https://www.bloomberg.com/news/articles/2019-06-27/boeing-pilots-flagged-software-problems-on-jets-besides-the-max (accessed May 12, 2020).

Shi, M., S. Sacks, Q. Chen, and G. Webster. 2019. Translation: China’s personal information security specification. New America. https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-chinas-personal-information-security-specification (accessed May 12, 2020).

Shortliffe, E. H., and B. G. Buchanan. 1975. A model of inexact reasoning in medicine. Mathematical Biosciences 23(3–4):351–379. doi: 10.1016/0025-5564(75)90047-4.

Shrott, R. 2017. Deep learning specialization by Andrew Ng—21 lessons learned. Medium. https://towardsdatascience.com/deep-learning-specialization-by-andrew-ng-21-lessons-learned-15ffaaef627c (accessed May 12, 2020).

Skloot, R. 2011. The immortal life of Henrietta Lacks. New York: Broadway Books.

Sonka, M., V. Hlavac, and R. Boyle. 2008. Image processing, analysis, and machine vision, 4th ed. Boston, MA: Cengage Learning.

Stanford Engineering. 2019. Carlos Bustamante: Genomics has a diversity problem. https://engineering.stanford.edu/magazine/article/carlos-bustamante-genomics-has-diversity-problem (accessed May 12, 2020).

Sullivan, A. 2010. Western, educated, industrialized, rich, and democratic. The Daily Dish. October 4. https://www.theatlantic.com/daily-dish/archive/2010/10/western-educated-industrialized-rich-and-democratic/181667 (accessed May 12, 2020).

Sun, C., A. Shrivastava, S. Singh, and A. Gupta. 2017. Revisiting unreasonable effectiveness of data in deep learning era. https://arxiv.org/pdf/1707.02968.pdf (accessed May 12, 2020).

Toon, P. 2012. Health care is both a science and an art. British Journal of General Practice 62(601):434.

Vayena, E., A. Blasimme, and I. G. Cohen. 2018. Machine learning in medicine: Addressing ethical challenges. PLoS Medicine 15(11):e1002689.

Verghese, A. 2018. How tech can turn doctors into clerical workers. The New York Times, May 16. https://www.nytimes.com/interactive/2018/05/16/magazine/health-issuewhat-we-lose-with-data-driven-medicine.html (accessed May 12, 2020).

Wakabayashi, D. 2019. Google and the University of Chicago are sued over data sharing. The New York Times. https://www.nytimes.com/2019/06/26/technology/google-university-chicago-data-sharing-lawsuit.html (accessed May 12, 2020).

West, S. M., M. Whittaker, and K. Crawford. 2019. Gender, race and power in AI. AI Now Institute. https://ainowinstitute.org/discriminatingsystems.html (accessed May 12, 2020).

Wetterstrand, K. A. 2019. DNA sequencing costs: Data from the NHGRI Genome Sequencing Program (GSP). https://www.genome.gov/sequencingcostsdata (accessed May 12, 2020).

Whittaker, M., K. Crawford, R. Dobbe, G. Fried, E. Kaziunas, V. Mathur, S. M. West, R. Richardson, J. Schultz, and O. Schwartz. 2018. AI Now report 2018. AI Now Institute at New York University. https://stanford.app.box.com/s/xmb2cj3e7gsz5vmus0viadt9p3kreekk (accessed May 12, 2020).

Wiens, J., S. Saria, M. Sendak, M. Ghassemi, V. X. Liu, F. Doshi-Velez, K. Jung, K. Heller, D. Kale, M. Saeed, P. N. Ossorio, S. Thadaney-Israni, and A. Goldenberg. 2019. Do no harm: A roadmap for responsible machine learning for health care. Nature Medicine 25(9):1337–1340.

Witten, I. H., E. Frank, M. A. Hall, and C. J. Pal. 2016. Data mining: Practical machine learning tools and techniques. Burlington, MA: Morgan Kaufmann.

Yeung, S., F. Rinaldo, J. Jopling, B. Liu, R. Mehra, N. L. Downing, M. Guo, G. M. Bianconi, A. Alahi, J. Lee, B. Campbell, K. Deru, W. Beninati, L. Fei-Fei, and A. Milstein. 2019. A computer vision system for deep-learning based detection of patient mobilization activities in the ICU. NPJ Digital Medicine 2:11. doi: 10.1038/s41746-019-0087-z.

Zarsky, T. 2016. The trouble with algorithmic decisions: An analytic road map to examine efficiency and fairness in automated and opaque decision making. Science, Technology, & Human Values 41(1):118–132. doi: 10.1177/0162243915605575.

Suggested citation for Chapter 1: Matheny, M., S. Thadaney Israni, D. Whicher, and M. Ahmed. 2019. Artificial intelligence in health care: The hope, the hype, the promise, the peril. In Artificial intelligence in health care: The hope, the hype, the promise, the peril. Washington, DC: National Academy of Medicine.
