
Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril (2019)

Suggested Citation:"8 Artificial Intelligence in Health Care: Hope Not Hype, Promise Not Peril." National Academy of Medicine. 2019. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. Washington, DC: The National Academies Press. doi: 10.17226/27111.

8
ARTIFICIAL INTELLIGENCE IN HEALTH CARE: HOPE NOT HYPE, PROMISE NOT PERIL

Michael Matheny, Vanderbilt University Medical Center and U.S. Department of Veterans Affairs; Sonoo Thadaney Israni, Stanford University; Danielle Whicher, National Academy of Medicine; and Mahnoor Ahmed, National Academy of Medicine

INTRODUCTION

Health care delivery in the United States, and globally, continues to face significant challenges from the increasing breadth and depth of data and knowledge generation. This publication focuses on artificial intelligence (AI) designed to improve health and health care, the explosion of electronic health data, the significant advances in data analytics, and mounting pressures to reduce health care costs while improving health care equity, access, and outcomes. AI tools could potentially address known challenges in health care delivery and achieve the vision of a continuously learning health system, accounting for personalized needs and preferences. The ongoing challenge is to ensure the appropriate and equitable development and implementation of health care AI. The term AI is inclusive of machine learning, natural language processing, expert systems, optimization, robotics, speech, and vision (see Chapter 1), and the terms AI tools, AI systems, and AI applications are used interchangeably.

While there have been a number of promising examples of AI applications in health care (see Chapter 3), it is judicious to proceed with caution to avoid another AI winter (see Chapter 2) or the further exacerbation of health care disparities. AI tools are only as good as the data used to develop and maintain them, and there are many limitations with current data sources (see Chapters 1, 3, 4, and 5). Moreover, there is a real risk of increasing current inequities and distrust (see Chapters 1 and 4) if AI tools are developed and deployed without thoughtful preemptive planning, self-governance, trust-building, transparency, appropriate levels of automation and augmentation (see Chapters 3, 4, and 5), and regulatory oversight (see Chapters 4, 5, 6, and 7).

This publication synthesizes the major literature to date, in both the academic and general press, to create a reference document for health care AI model developers, clinical teams, patients, “fRamilies,” and regulators and policy makers to:

  1. identify the current and near-term uses of AI within and outside the traditional health care systems (see Chapters 2 and 3);
  2. highlight the challenges and limitations (see Chapter 4) and the best practices for development, adoption, and maintenance of AI tools (see Chapters 5 and 6);
  3. understand the legal and regulatory landscape (see Chapter 7);
  4. ensure equity, inclusion, and a human rights lens for this work; and
  5. outline priorities for the field.

The authors of the eight chapters are experts convened by the National Academy of Medicine’s Digital Health Learning Collaborative to explore the field of AI and its applications in health and health care, consider approaches for addressing existing challenges, and identify future directions and opportunities.

This final chapter synthesizes the challenges and priorities of the previous chapters, highlights current best practices, and identifies key priorities for the field.

SUMMARY OF CHALLENGES AND KEY PRIORITIES

This section summarizes the key findings and priorities of the prior chapters without providing the underlying evidence or more detailed background. Please refer to the referenced chapters for details.

Promote Data Access, Standardization, and Reporting of Data Quality, While Minimizing Data Bias

It is widely accepted that the successful development of an AI system requires high-quality, population-representative, and diverse data (Shrott, 2017; Sun et al., 2017). Figure 8-1 outlines a standardized pathway for the collection and integration of multiple data sources into a common data model, which efficiently feeds the transformation to a feature space for AI algorithm training. However, some of the standardization tools, data quality assessments, and curation methodologies this pathway requires do not yet exist. Interoperability is critical at all layers, including across the multivendor electronic health record and ancillary components of a health care system, between different health care systems, and with consumer health applications. It is also important to recognize that the data requirements for training AI differ from those for its downstream use. Some initiatives do exist and are driving the health care community in the direction of interoperability and data standardization, but they have yet to see widespread use (HL7, 2018; Indiana Health Information Exchange, 2019; NITRD et al., 2019; OHDSI, 2019).

Methods to assess data validity and reproducibility are often ad hoc. Ultimately, for AI models to be trusted, the semantics and provenance of the data used to derive them must be fully transparent, unambiguously communicated, and available, for validation at least, to an independent vetting agent. This is a distinct element of transparency, and the conflation of data transparency with algorithmic transparency complicates the AI ecosystem’s discourse. We suggest a clear separation of these topics. One example of a principles declaration that promotes data robustness and quality is the FAIR (findability, accessibility, interoperability, and reusability) Principles (Wilkinson et al., 2016).

These principles, put forth by molecular biology and bioinformatics researchers, are not easily formalized or implemented. However, for health care AI to mature, a similar set of principles should be developed and widely adopted.

The health care community should continue to advocate for policy, regulatory, and legislative mechanisms that improve the ease of data aggregation. These would include (but are not limited to) a national patient health care identifier and mechanisms to responsibly bring together data from multiple sources. The debate should focus on the thoughtful and responsible ability of large-scale health care data resources to serve as a public good and the implications of that ability. Discussions around wider and more representative data access should be carefully balanced by stronger outreach, education, and consensus building with the public and patients in order to address where and how their data can be reused for AI research, data monetization, and other secondary uses; which entities can reuse their data; and what safeguards need to be in place. In a recent commentary, Glenn Cohen and Michelle Mello propose that “it is timely to reexamine the adequacy of the Health Insurance Portability and Accountability Act (HIPAA), the nation’s most important legal safeguard against unauthorized disclosure and use of health information. Is HIPAA up to the task of protecting health information in the 21st century?” (Cohen and Mello, 2018).

When entities bring data sources together, they face ethical, business, legislative, and technical hurdles. There is a need for novel solutions that allow for robust data aggregation while promoting transparency and respecting patient privacy and preferences.


Prioritize Equitable and Inclusive Health Care

In addition, these solutions need to be equitable to avoid a potential conundrum (see Chapters 1 and 4) in which patients, especially those who are the least AI-savvy, are unaware of how their data are monetized. “That which is measured, improves,” opined Karl Pearson, famed statistician and founder of mathematical statistics. Therefore, prioritizing equity and inclusion should be a clearly stated goal when developing and deploying AI in health care. It is imperative for developers and implementers to consider the data used to develop AI tools and unpack the underlying biases in that data. It is also essential to consider how the tool should be deployed, and whether the range of deployment environments could impact equity and inclusivity. There are widely recognized inequities in health outcomes due to the social determinants of health (BARHII, 2015) and the perverse incentives in existing health care systems (Rosenthal, 2017).

Unfortunately, consumer-facing technologies have often exacerbated historical inequities in other fields, and the digital divide continues to be a reality for wearables deployment and the data-hungry plans they require, even if the initial cost of the device is subsidized. As Cathy O’Neil reported in Weapons of Math Destruction, AI and related sciences can exacerbate inequity on a monumental scale. The impact of a single biased human is far less than that of a global or national AI (O’Neil, 2017).

Data transparency is key to ensuring that AI adopters can assess the underlying data for biases and consider whether the data are representative of the population in which the AI tool will be deployed. The United States has some population-representative datasets, such as national claims data, and high levels of data capture in certain markets (such as the Indiana Health Information Exchange). But in many instances AI is being developed with data that are not population-representative, and while there are efforts to link health care data to social determinants of health, environmental, and social media data to obtain a comprehensive profile of a person, this is not routine. Nor are there ethical or legal frameworks for doing so. It is imperative that we develop and standardize approaches for evaluating and reporting on data quality and representativeness. It is equally vital that we ensure and report on the diversity of gender, race, age, and other human characteristics of AI development teams to benefit from their much-needed diverse knowledge and life experiences (see Chapters 1 and 5).

Executing and delivering on equity and inclusion will require a new governance framework. Current self-governance efforts by technology companies are plagued with numerous struggles and failures, Google’s April 2019 Ethics Board dissolution being one recent example (Piper, 2019). Mark Latonero suggests, “In order for AI to benefit the common good, at the very least its design and deployment should avoid harms to fundamental human values. International human rights provide a robust and global formulation of those values” (Latonero, 2018).

For objective governance, a new neutral agency or a committee within an existing governmental or nongovernmental entity, supported by a range of stakeholders, could own and manage the review of health care AI products and services while protecting developers’ intellectual property rights. One example of this type of solution is the New Model for Industry-Academic Partnerships, which developed a framework for academic access to industry (Facebook) data sources: The group with full access to the data is separate from the group doing the publishing, but both are academic, independent, and trusted. The group with full access executes the analytics and verifies the data, understands the underlying policies and issues, and delivers the analysis to a separate group who publishes the results but does not have open access to the data (Social Science Research Council, 2019). To ensure partisan neutrality, the project is funded by ideologically diverse supporters, including the Laura and John Arnold Foundation, the Democracy Fund, the William and Flora Hewlett Foundation, the John S. and James L. Knight Foundation, the Charles Koch Foundation, the Omidyar Network, and the Alfred P. Sloan Foundation. Research projects use this framework when researchers use Facebook social media data for election impact analysis, and Facebook provides the data required for the research but does not have the right to review or approve the research findings prior to publication.

Perhaps the best way to ensure that equity and inclusion are foundational components of a thriving health care system is to add them as a dimension to the quadruple aim, expanding it to a Quintuple Aim for health and health care: better health, improved care experience, clinician well-being, lower cost, and health equity throughout (see Figure 8-2).

FIGURE 8-2 | The Quintuple Aim to ensure equity and inclusion are stated and measured goals when designing and deploying health care interventions.

Promote a Spectrum of Transparency-Based Trust, Based on Considerations of Accuracy, Risk, and Liability

A key challenge to the acceptance and widespread use of AI is the tension between data and algorithmic transparency, accuracy, perceived risk, and tort liability. One of the priorities identified in this publication is the need to present each health care AI tool along with the spectrum of transparency related to the potential harms and context of its use. Evaluating and addressing appropriate transparency in each subdomain of data, algorithms, and performance, and systematically reporting it, must be a priority. In addition, health system leaders must understand the return on investment and the risks and benefits of adoption, including the risks of adverse events post-implementation; and informatics implementers must understand the culture and workflows where AI tools will be used so the algorithms can be adjusted to reflect their needs. All stakeholders should prioritize equity and inclusion, requiring transparency on how AI tools are monitored and updated. Many of these are shared, not siloed, responsibilities.

In all cases, the transparency of the underlying data used for AI model generation should be endorsed. While granular, patient-level data should not be publicly shared, publishing information on the data sources from which they were aggregated; how the data were transformed; data quality issues; inclusion and exclusion criteria that were applied to generate the cohort; summary statistics of demographics; and relevant data features in each source should be conventional practice. This information could be a supporting document and would tremendously improve the current understanding of and trust in AI tools.

The need for algorithmic transparency is largely dependent on the use context. For applications that have immediate clinical impact on patient quality of life or health outcomes, the baseline requirement for transparency is high. However, the level of transparency could be different depending on the (1) known precision accuracy of the AI; (2) clarity of recommended actions to end users; (3) risk to the patient or target; and (4) legal liability. For example, if an AI tool has high-precision accuracy and low risk, provides clear recommendations to the end user, and is unlikely to impose legal liability on the institution, manufacturer, or end user, then the need for complete algorithmic transparency is likely to be lower. See Figure 8-3 for additional details on the relationships of transparency and these axes within different conceptual domains.

FIGURE 8-3 | Summary of relationships between requirements for transparency and the three axes of patient risk, user trust, and algorithm performance within three key domains: data transparency, algorithmic transparency, and product/output transparency.
NOTE: While not comprehensive, these examples illustrate how different users and use cases require different levels of transparency in each of these three domains.

Focus of Near-Term Health Care AI: Augmented Intelligence Versus Full Automation

Although some AI applications for health care business operations are likely to be poised for full automation, most of the near-term dialogue around AI in health care should focus on promoting, developing, and evaluating tools that support human cognition rather than replacing it. Popular culture and marketing have overloaded the term “AI” to the point where it implies the replacement of human labor, and as a result, other terms have emerged to distinguish AI that supports human cognition. Augmented intelligence refers to such supportive AI, and it is the term the authors of this chapter endorse.


The opportunity for augmenting human cognition is vast, from supporting clinicians with less training in performing tasks currently limited to specialists to filtering out normal or low-acuity clinical cases so specialists can work at the top of their licensure. Additionally, AI could help humans reduce medical error due to cognitive limits, inattention, micro-aggression, or fatigue. In the case of surgery, it might offer capabilities that are not humanly possible.

Opportunities exist for automating some business processes, and greater automation is possible as the field matures in accuracy and trust. But it would not be prudent to deploy fully automated AI tools that could produce inaccurate results when the public has an understandably low tolerance for error and health care AI lacks needed regulation and legislation. Doing so would risk a third AI winter, or the trough of disillusionment seen in the Gartner Hype Cycle (see Chapter 4).

Differential levels of automation are even more relevant to consumer health applications because they are likely to have more automation components, but are regulated as entertainment applications, and their standards and quality controls are much more variable. The quandaries here are perhaps even more dire given consumer health applications’ widespread use and the difficulties of tracking and surveilling potential harms that could result from their use in the absence of expert oversight.

Develop Appropriate Professional Health Training and Educational Programs to Support Health Care AI

Stanford University’s Curt Langlotz offered the following question and answer: “Will AI ever replace radiologists? I say the answer is no—but radiologists who use AI will replace radiologists who don’t” (Stanford University, 2017).

In order to sustain and nurture health care AI, we need a sweeping, comprehensive expansion of relevant professional health education focused on data science, AI, medicine, humanism, ethics, and health care. This expansion must be multidisciplinary and engage AI developers, implementers, health care system leadership, frontline clinical teams, ethicists, humanists, and patients and “fRamilies,” because each brings essential expertise. AI progress is contingent on knowledgeable decision makers who can balance the relative ease of implementing newly developed AI solutions against the need to understand their validity and influence on care.

To begin addressing these challenges, universities such as the Massachusetts Institute of Technology, Harvard, Stanford, and The University of Texas have added new courses focused on embedding ethics into the technology development process.


Mehran Sahami, a Stanford University computer science faculty member who formerly worked at Google as a senior research scientist said, “Technology is not neutral. The choices that get made in building technology then have social ramifications” (Singer, 2018).

Health care professionals have continuing education requirements as part of their scope of practice; we suggest that new AI-focused continuing education curricula be developed and delivered. Some important topics that should be covered are how to (1) assess the need, validity, and applicability of AI algorithms in clinical care; (2) understand algorithmic performance and the impact on downstream clinical use; (3) navigate medical liability and the ways in which AI tools may impact individual and institutional liability and medical error; (4) advocate for standardization and appropriate transparency for a given use case; (5) discuss emerging AI technologies, their use, and their dependence on patient data with patients and “fRamilies,” and consider their effects on the patient–clinician relationship; (6) ensure the Quintuple Aim of equity and inclusion when measuring impact; and (7) know when and how to bring in AI experts for consults. As the field evolves, the nature and emphasis of these topics will change, necessitating periodic review and updating.

Professional health education should incorporate how to critically evaluate the utility and risk of these AI tools in clinical practice. Curricula should provide an understanding of how AI tools are developed, the criteria and considerations for the use of AI tools, how best to engage and use such tools while prioritizing patient needs, and when human oversight is needed.

For health care system leadership and AI implementers, it is important to have training on the importance and lenses of the multiple disciplines that must be brought together to evaluate, deploy, and maintain AI in health care.

Current clinical training programs bear the weight of growing scientific knowledge within a static time window of training. We recognize the impracticality of each clinician or team being an expert on all things health care–AI related. Instead, we propose that each team have a basic and relevant understanding as described and add an AI consult when and where needed. Such consults could be done virtually, supporting the team effort and group decision making, and costing less than if they were done onsite. Regional or content-expert AI consults could be leveraged across many health care systems. One example of such regional consults is the National Institutes of Health–funded Undiagnosed Diseases Network (UDN), which seeks “to improve and accelerate diagnosis of rare and undiagnosed conditions” (NIH, 2019). The UDN uses both basic and clinical research to improve the level of diagnosis and uncover the underlying disease mechanisms associated with these conditions. National (or global) efforts like this can support the building and deployment of responsible AI solutions for health care.


It is necessary to develop retraining programs to target job categories that are likely to be the most susceptible to a shift in desired skill sets with AI deployment. It is unlikely that many health care jobs will be lost, but skill and knowledge mismatches are to be expected (see Chapter 4).

Articulate Success Factors for the Development, Adoption, and Maintenance of AI in Health Care

In order to implement AI tools in health care settings with sustained success, it is important that system leadership, AI developers, AI implementers, regulators, humanists, and patients and “fRamilies” collaboratively build a shared understanding and expectations. The success factors for development, adoption, and maintenance of AI tools will need clarity, acknowledging that practices will differ depending on the physical, psychological, or legal risk to the end user, the adoption setting, the level of augmentation versus automation, and other considerations. Dissonance between levels of success and users’ expectations of impact and utility is likely to create harm and disillusionment. Below, we summarize the key components that must be wrangled.

The global health care AI community must develop integrated best-practice frameworks for AI implementation and maintenance, balancing ethical inclusivity, software development, implementation science, and human–computer interaction. These frameworks should be developed within the context of the learning health care system and can be tied to various targets and objectives. Earlier chapters provide summaries and considerations for both technical development (see Chapter 5) and health care system implementation (see Chapter 6). However, the AI implementation and deployment domain is still in a nascent stage, and health systems should maintain appropriate skepticism about the advertised benefits of health care AI.

It is important to approach health care AI as one of many tools for supporting the health and well-being of patients. Thus, AI should be deployed to address real problems that need solving, and only among those problems in which a simpler or more basic solution is inadequate. The complexity of AI has a very real cost to health care delivery environments.

Health care AI could go beyond the current limited, biology-focused research to address individual patient and communal needs. The current medical enterprise is largely focused on the tip of the iceberg (i.e., human biology), lacking meaningful and usable access to relevant patient contexts such as social determinants of health and psychosocial risk factors. AI solutions have the potential (with appropriate consent) to link personal and public data for truly personalized health care.


The April 2019 collaborative effort by UnitedHealthcare and the American Medical Association to create nearly two dozen International Classification of Diseases, Tenth Revision, codes to better incorporate social determinants of health into health care delivery is a laudable and responsible step in the right direction (Commins, 2019).

AI should be considered where scale is important and resources are insufficient for current needs. Some of these environments include complex patients with multiple comorbid conditions, such as chronic disease sufferers and the elderly, or low-resource settings. In innovative telehealth settings such as disaster relief and rural areas, where resources are limited and access is difficult, AI solutions can power triage and the automatic allocation of resources. Current mobile technology allows for critical imaging at the local site, and the U.S. Department of Veterans Affairs has operationalized a robust telehealth program that serves its very diverse population (VA, 2016).

We strongly suggest that a robust and mature underlying information technology governance strategy be in place within health care delivery systems prior to embarking on substantial AI deployment and integration. The needs for on- or offsite hardware infrastructure, change management, inclusive stakeholder engagement, and safety monitoring all require substantial established resources. Systems that do not possess these infrastructure components should develop them before significant AI deployment.

Balancing Regulation and Legislation for Health Care Innovation

The regulatory and legislative considerations for AI use in consumer and professional health care domains are documented in Chapter 7. AI applications have great potential to improve patient health but could also pose significant risks, such as inappropriate patient risk assessment, treatment recommendations, privacy breaches, and other harms (Evans and Whicher, 2018). Overall, the field is advancing rapidly, with a constant evolution of access to data, aggregation of data, new developments in AI methods, and expansions of how and where AI is added to patient health and health care delivery. Regulators should remain flexible, but the potential for lagging legislation remains an issue.

In alignment with recent congressional and U.S. Food and Drug Administration developments and guidance, we suggest a graduated approach to the regulation of AI based on the level of patient risk, the level of AI autonomy, and how static or dynamic certain AI tools are likely to be. To the extent that machine learning–based models continuously learn from new data, regulators should adopt postmarket surveillance mechanisms to ensure continuing (and ideally, improving) high-quality performance. Liability accrued within various contexts of AI deployment will continue to be a developing area as regulators, courts, and the insurance industry weigh in. Understanding regulation and liability is essential to evaluating risks and benefits.

FIGURE 8-4 | Relationship between regulation and risk.

The linkages between innovation, safety, progress, and regulation are complex. Regulators should engage in collaborative efforts with stakeholders and experts to continuously evaluate deployed clinical AI for effectiveness and safety based on real-world data. Throughout that process, transparency can help deliver well-vetted solutions. To enable both AI development and oversight, governmental agencies should invest in infrastructure that promotes wider data collection and access to data resources for building AI solutions, within a framework of equity and data protection (see Figure 8-4).

The Global Conundrum

The United States and many other nations prioritize human rights values and are appropriately measured and thoughtful in supporting data collection, AI development, and AI deployment. Other nations, with China and Russia being prime examples, have different priorities. The current AI arms race in all fields, including and beyond health care, creates a complex and, some argue, untenable geopolitical state of affairs (Apps, 2019). Others counter that it is not an arms race at all, because the interdependencies and interconnections among nations are needed to support research and innovation. Regardless, Kai-Fu Lee outlines China's competitive edge in AI in his 2018 book AI Superpowers: China, Silicon Valley, and the New World Order (Lee, 2018). Putin has also outlined a national AI strategy, and in February 2019, the White House issued an Executive Order on Maintaining American Leadership in Artificial Intelligence (White House, 2019). The downstream implications of this AI arms race in health care raise questions and conundrums that this publication does not cover; we acknowledge they are numerous and warrant investigation.

CONCLUSIONS

AI is poised to make transformative and disruptive advances in health care and could improve the lives of patients, their "fRamilies," and health care professionals. However, we cannot start with an AI hammer in hand and view every problem as the proverbial nail. In balancing the need for thoughtful, inclusive health care AI that plans for, actively manages, and reduces potential unintended consequences against the pull of marketing hype, we should be guided by the adage "haste makes waste" (Sample, 2019). The wisest course for AI is to start with real problems in health care that need solving; explore the best solutions for each problem by engaging relevant stakeholders, frontline users, and patients and their "fRamilies," considering both AI and non-AI options; and implement and scale the solutions that meet the Quintuple Aim.

In 21 Lessons for the 21st Century, Yuval Noah Harari writes, “Humans were always far better at inventing tools than using them wisely” (Harari, 2018, p. 7).

It is up to us, the stakeholders, experts, and users of these technologies, to ensure that they are used in an equitable and appropriate fashion to uphold the human values that inspired their creation—that is, better health and wellness for all.

REFERENCES

Apps, P. 2019. Are China, Russia winning the AI arms race? Reuters. https://www.reuters.com/article/apps-ai/column-are-china-russia-winning-the-ai-arms-race-idINKCN1PA08Y (accessed May 13, 2020).

BARNII (Bay Area Regional Health Inequities Initiative). 2015. Framework. http://barhii.org/framework (accessed May 13, 2020).

Cohen, G., and M. Mello. 2018. HIPAA and protecting health information in the 21st century. JAMA 320(3):231–232. https://doi.org/10.1001/jama.2018.5630.

Commins, J. 2019. UnitedHealthcare, AMA push new ICD-10 codes for social determinants of health. HealthLeaders. https://www.healthleadersmedia.com/clinical-care/unitedhealthcare-ama-push-new-icd-10-codes-social-determinants-health (accessed May 13, 2020).

Evans, E. L., and D. Whicher. 2018. What should oversight of clinical decision support systems look like? AMA Journal of Ethics 20(9):857–863.


Harari, Y. N. 2018. 21 Lessons for the 21st century. New York: Random House.

HL7 (Health Level Seven). 2018. Fast healthcare interoperability resources. https://www.hl7.org/fhir (accessed May 13, 2020).

Indiana Health Information Exchange. 2019. https://www.ihie.org (accessed May 13, 2020).

Latonero, M. 2018. Governing artificial intelligence: Upholding human rights & dignity. Data & Society, October.

Lee, K. F. 2018. AI superpowers: China, Silicon Valley, and the new world order. New York: Houghton Mifflin Harcourt.

NIH (National Institutes of Health). 2019. Undiagnosed diseases network. https://commonfund.nih.gov/diseases (accessed May 13, 2020).

NITRD (Networking and Information Technology Research and Development), NCO (National Coordination Office), and NSF (National Science Foundation). 2019. Notice of workshop on artificial intelligence & wireless spectrum: Opportunities and challenges. Notice of workshop. Federal Register 84(145):36625–36626.

OHDSI (Observational Health Data Sciences and Informatics). 2019. https://ohdsi.org.

O’Neil, C. 2017. Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Broadway Books.

Piper, K. 2019. Exclusive: Google cancels AI ethics board in response to outcry. Vox. https://www.vox.com/future-perfect/2019/4/4/18295933/google-cancels-ai-ethics-board?_hsenc=p2ANqtz-81fkloDdAtmNyGvd-pgT9QxeQEtzEGXeQCEi6Kr1BXZ5cLT8AFGx7wh_24vigoA-QP9p0CLTRvbpnI85nEsONPzEvwUQ&_hsmi=71485114 (accessed May 13, 2020).

Rosenthal, E. 2017. An American sickness: How healthcare became big business and how you can take it back. London, UK: Penguin Press.

Sample, I. 2019. Scientists call for global moratorium on gene editing of embryos. The Guardian. https://www.theguardian.com/science/2019/mar/13/scientists-call-for-global-moratorium-on-crispr-gene-editing?utm_term=RWRpdG9yaWFsX0d1YXJkaWFuVG9kYXlVUy0xOTAzMTQ%3D&utm_source=esp&utm_medium=Email&utm_campaign=GuardianTodayUS&CMP=GTUS_email (accessed May 13, 2020).

Shrott, R. 2017. Deep learning specialization by Andrew Ng—21 lessons learned. Medium. https://towardsdatascience.com/deep-learning-specialization-by-andrew-ng-21-lessons-learned-15ffaaef627c (accessed May 13, 2020).

Singer, N. 2018. Tech’s ethical “dark side”: Harvard, Stanford and others want to address it. The New York Times. https://www.nytimes.com/2018/02/12/business/computer-science-ethics-courses.html (accessed May 13, 2020).


Social Science Research Council. 2019. Social data initiative: Overview: SSRC. https://www.ssrc.org/programs/view/social-data-initiative/#overview (accessed May 13, 2020).

Stanford University. 2017. RSNA 2017: Rads who use AI will replace rads who don’t. Center for Artificial Intelligence in Medicine & Imaging. https://aimi.stanford.edu/about/news/rsna-2017-rads-who-use-ai-will-replace-rads-who-don-t (accessed May 13, 2020).

Sun, C., A. Shrivastava, S. Singh, and A. Gupta. 2017. Revisiting unreasonable effectiveness of data in deep learning era. arXiv. https://arxiv.org/pdf/1707.02968.pdf (accessed May 13, 2020).

VA (U.S. Department of Veterans Affairs). 2016. Veteran population projections 2017–2037. https://www.va.gov/vetdata/docs/Demographics/New_Vetpop_Model/Vetpop_Infographic_Final31.pdf (accessed May 13, 2020).

White House. 2019. Executive Order on Maintaining American Leadership in Artificial Intelligence. Executive orders: Infrastructure & technology. https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence (accessed May 13, 2020).

Wilkinson, M. D., M. Dumontier, I. J. Aalbersberg, G. Appleton, M. Axton, A. Baak, N. Blomberg, J. W. Boiten, L. B. da Silva Santos, P. E. Bourne, and J. Bouwman. 2016. The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data 3.

Suggested citation for Chapter 8: Matheny, M., S. Thadaney Israni, D. Whicher, and M. Ahmed. 2020. Artificial intelligence in health care: Hope not hype, promise not peril. In Artificial intelligence in health care: The hope, the hype, the promise, the peril. Washington, DC: National Academy of Medicine.

