
2

Ethically Leveraging Digital Technology for Health

The use of digital health technologies, artificial intelligence (AI), and machine learning in biomedical research and clinical care was discussed during the first two panel sessions. A range of ethical concerns can emerge in the development and implementation of new science and technologies, said Bernard Lo of The Greenwall Foundation and moderator of the sessions. Deborah Estrin, an associate dean and the Robert V. Tishman ’37 Professor at Cornell NYC Tech, provided an overview of the digital health technology landscape, and Michelle Mello, a professor of law and medicine at Stanford University, discussed ethical issues associated with emerging digital health technologies. Suchi Saria, the John C. Malone Assistant Professor in the Department of Computer Science and the Department of Health Policy and Management at Johns Hopkins University, reviewed the state of AI and machine learning in biomedical research, and Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin, discussed ethical issues associated with the use of machine learning, including deep learning neural networks, in health care.

DEVELOPING, TESTING, AND INTEGRATING DIGITAL TECHNOLOGIES INTO RESEARCH AND CLINICAL CARE

Overview of the Digital Health Technology Landscape

Estrin said current and emerging digital technologies are increasingly being used in self-care, clinical care, and biomedical research across four main categories: wearables, mobile applications (apps), conversational agents, and digital biomarkers. Moreover, technologies such as mobile phones have been used to support care delivery for more than a decade (e.g., by community health workers in resource-limited settings).

Wearables for Biometrics and Behavior

Wearables are mobile devices that measure and track biometric data such as heart rate, activity, temperature, or stages of sleep, Estrin said. Some examples of wearables include activity and sleep trackers and smart watches by Fitbit, Garmin, Apple, and Oura, to name a few. She noted that the availability and usability of wearables have increased dramatically since the early days of actigraphy (noninvasive monitoring of cycles of movement and rest). Even if current wearables do not meet clinical standards, she said, they can track trends; most wearables are used in association with a companion mobile app that provides the wearer access to data summaries.

The increasing ability of machine learning algorithms to interpret the data collected by wearables is enhancing the utility of those data for individuals in self-care decision making as well as for use in guiding clinical care and informing research. For example, Estrin suggested, a wearable might help an individual better understand how exercise, diet, and alcohol consumption contribute to his or her poor sleep patterns; the clinician might use the data to evaluate the effectiveness of interventions to reduce the impacts of poor sleep quality on cognition or metabolism; and the data can help inform research on interventions to improve sleep quality.

Mobile Apps

There are also stand-alone mobile apps that are used independently of a wearable device. These apps focus on interacting with the patient for self-care, for clinical engagement (e.g., to encourage adherence to a treatment plan), or for research purposes. Estrin briefly described four categories:

  • Symptom Trackers—This category of mobile app allows individuals to enter symptoms and see how they change over time. One example, Estrin said, is the Memorial Sloan Kettering Cancer Center symptom tracker. Using the mobile app, patients recovering from surgery and undergoing treatment can track their symptoms and plot their data against expected results. This interactive approach allows patients to see their progress and better evaluate if they are progressing sufficiently to avoid an unnecessary emergency room visit.
  • Access to Clinical Health Records—Mobile apps are also used to provide individuals with access to their clinical health records. Estrin said that Apple HealthKit and Android CommonHealth are developer platforms that take advantage of data interoperability standards, such as Fast Healthcare Interoperability Resources (FHIR), to provide access to electronic health records (EHRs). App developers can use these platforms to create apps that allow users to access and share their clinical health information securely (a minimal sketch of a FHIR request follows this list).
  • Health Behavior Apps—Another category is health behavior apps that provide coaching and guidance for individuals on choosing healthy behaviors. Examples include diabetes prevention programs such as Omada, the Noom app for weight loss, and the Livongo apps, which support health goals across several conditions. Some health behavior apps have been shown to have a positive effect on behavior, Estrin said, but many others have not been vetted or tested.
  • Behavioral Health Apps—The final category, which involves behavioral health, is different from health behavior apps because of the focus on mental health support, Estrin said. PTSD Coach, developed by the U.S. Department of Veterans Affairs, was an early example of a behavioral health app, which provides “in-the-moment support” based on clinical guidelines. Other examples of behavioral health apps include Talkspace, LARK, and HealthRhythms.
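
To make the interoperability layer concrete, the sketch below shows roughly how an app might query a FHIR server for a patient’s vital-sign observations. This is a minimal illustration, not any platform’s actual integration code: the endpoint URL, patient ID, and access token are hypothetical placeholders, and a real app would first complete the platform’s SMART-on-FHIR authorization flow.

```python
# Minimal sketch: reading vital-sign observations from a FHIR R4 server.
# The base URL, patient ID, and token are hypothetical placeholders.
import requests

FHIR_BASE = "https://fhir.example-hospital.org/r4"   # hypothetical endpoint
PATIENT_ID = "12345"                                 # hypothetical patient
headers = {
    "Authorization": "Bearer <access-token>",        # obtained via OAuth in practice
    "Accept": "application/fhir+json",
}

resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": PATIENT_ID, "category": "vital-signs", "_count": 50},
    headers=headers,
)
resp.raise_for_status()

# A FHIR search returns a Bundle; each entry wraps one Observation resource.
for entry in resp.json().get("entry", []):
    obs = entry["resource"]
    code = obs["code"].get("text", "")
    value = obs.get("valueQuantity", {})
    print(code, value.get("value"), value.get("unit"))
```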

Conversational Agents

Conversational agents are chatbots and voice agents, many of which can be accessed via digital assistants such as Google Home and Alexa, that are programmed to hold a conversation in a manner similar to a human. Examples of emerging health-specific conversational agents include Sugarpod for diabetes, Kids MD by Boston Children’s Hospital, and other chatbots for use by patients, nurses, and home health aides. Some conversational agents are entirely automated, and others provide details to a human provider or coach, Estrin said, but starting with an automated interaction to address more routine concerns allows providers to better meet and manage client needs.


Digital Biomarkers

Digital traces (i.e., records of online activity) are also being explored as digital biomarkers, Estrin said. For example, she said, researchers have collected data for mood analysis from social media interactions1 and others have used individual Internet search data as indicators of health status.2 Another example is an institutional review board (IRB)-approved retrospective study by Northwell Health of the Internet searches done by individuals prior to their first hospital admission for a psychotic episode.3 Individuals in the study consented to sharing their Google search history (via Google’s Takeout data download service), which researchers analyze for temporal patterns of online searching, location data, and other online activity associated with serious mental illness. Such research seeks to build specific models for how such data can inform care at the population and individual levels.
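
As a rough illustration of how a digital trace becomes a candidate biomarker, the sketch below computes one simple temporal feature (late-night search activity per week) from a search history export. The file layout and field names are assumptions for illustration only; they are not the schema of Google’s Takeout service or the Northwell study’s actual pipeline.

```python
# Sketch: deriving a temporal feature (late-night searches per week)
# from a Takeout-style search history export. The file layout and field
# names ("query", "time") are assumptions, not Google's exact schema.
import json
from collections import Counter
from datetime import datetime

with open("search_history.json") as f:
    events = json.load(f)  # assumed: list of {"query": ..., "time": ISO-8601}

late_night_by_week = Counter()
for e in events:
    # Normalize a trailing "Z" so fromisoformat() accepts UTC timestamps.
    t = datetime.fromisoformat(e["time"].replace("Z", "+00:00"))
    if t.hour < 5:  # searches between midnight and 5 a.m.
        late_night_by_week[t.strftime("%G-W%V")] += 1  # ISO year-week key

for week, n in sorted(late_night_by_week.items()):
    print(week, n)
```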

Risks and Concerns Related to Digital Technologies

Potential ethical risks and concerns associated with the use of digital technologies in research and clinical care include privacy exposure when these technologies are used for health-related surveillance, questions about how data are used, and transparency around AI-assisted agents, Estrin said. How data should be controlled depends on the context of use, Estrin explained, and laws and system architectures addressing how data are shared for surveillance need to take that context into account. Contextual integrity allows for a more nuanced view of privacy issues than traditional dichotomies (Nissenbaum, 2018); it exposes the risks associated with how an individual’s data flow and how they are used. Accepting standard consumer-app terms of service for health-related apps without scrutiny might allow the app provider to sell a user’s health data. There are concerns that health data should be protected differently in order to prevent their use in discriminatory ways related to insurance coverage, employment, credit, or dating, for example; this may require legal, as well as technical, protections, Estrin said. Another concern is transparency with regard to whether an individual using a digital health technology is interacting with an AI agent (i.e., a “softbot” or software robot) or a human agent. A question for consideration is whether or when patients have a right to know if they are interacting with a person or an AI-mediated agent, she said.

___________________

1 For more information, see Saha et al., 2017.

2 For more information, see White et al., 2018.

3 For more information, see Kirschenbaum et al., 2019.


ETHICAL ISSUES FOR EMERGING DIGITAL TECHNOLOGIES

The increasing use of individuals’ data traces in novel ways for both research and clinical care challenges the norms of human subjects research ethics and existing privacy laws, Mello said. Existing research ethics concerns have been heightened by the advent of new digital technologies, she said, and new concerns have also emerged as the use of digital technologies has expanded (summarized in Box 2-1).

Existing Concerns Compounded by Digital Technologies

Purpose and Repurpose

Existing concerns about purpose and repurpose center on the informed consent process and the extent to which data and biospecimens generated for one purpose may be used for other purposes without securing fresh consent. These concerns now encompass data generated by digital technologies, including whether such data can be shared or sold for research purposes. The digital data of interest for research might include data from user interactions with apps and websites and clinical data generated by digital technologies in the care setting (e.g., ambient listening devices such as surgical black boxes).4 Data mining raises additional concerns since the research is often not hypothesis-driven but exploratory. It is also possible that unrelated datasets might be linked for research or clinical purposes. As highlighted by a legal complaint filed by a patient in 2019 against Google and The University of Chicago, EHR data collected for clinical purposes may be transferred to private companies for the purpose of developing new commercial products,5 and even with direct identifiers removed they are potentially re-identifiable through linkages to other data (e.g., linking smartphone geolocation data to the EHR data could reveal which clinics a patient has visited, when, and for what purpose) (Cohen and Mello, 2019). Once a patient is re-identified, Mello said, the EHR data could potentially be linked to other data such as social media and online browsing activity.
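
A toy sketch of the linkage risk described above: when two datasets share quasi-identifiers, an exact join can re-attach names to “de-identified” clinical rows. All data, column names, and values here are synthetic placeholders chosen only to make the mechanism visible.

```python
# Sketch: why records with direct identifiers removed can still be
# re-identified by linkage. Both frames are synthetic; "zip3" and
# "visit_date" stand in for quasi-identifiers.
import pandas as pd

deidentified_ehr = pd.DataFrame({
    "zip3": ["606", "606", "941"],
    "visit_date": ["2019-03-02", "2019-04-18", "2019-03-02"],
    "clinic": ["oncology", "oncology", "psychiatry"],
})

# Auxiliary data, e.g., smartphone geolocation traces tied to a named user.
geolocation = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "zip3": ["606", "941"],
    "visit_date": ["2019-04-18", "2019-03-02"],
})

# An exact join on the shared quasi-identifiers re-attaches identities
# to the clinical rows, revealing which clinic each person visited.
linked = geolocation.merge(deidentified_ehr, on=["zip3", "visit_date"])
print(linked)
```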

The three main solutions that have generally been used to address concerns about purpose and repurpose have been de-identification, waiver of consent, and blanket consent, Mello said, adding that there are issues with each approach. De-identification is “infinitely harder” for digital data than for tissue specimens. Consent waivers, granted when an IRB determines that the research meets certain requirements and therefore some or all consent elements can be waived, are a practical solution in the sense that securing fresh consent is often impracticable, Mello said, but they are not a principled solution to the problem of informed consent for repurposed digital information.6 Blanket consent might be a more transparent solution, she continued, but it arguably is not meaningful consent if researchers cannot explain to participants the potential range of uses of their data and the potential for future data linkages. The field needs to think deliberately about the issue of informed consent for repurposed digital information, Mello said, and there may be real limits to using transparency as a strategy given the challenges with adequately describing what participants are consenting to and the lack of choice that many users of digital technologies have about accepting the terms of use.

___________________

4 Surgical black boxes can record a range of data during surgical procedures, including videos of the procedure, conversations in the room, and ambient conditions for the purpose of identifying intraoperative errors that may have led to adverse events.

5 For more information on Dinerstein v. Google, see https://edelson.com/wp-content/uploads/2016/05/Dinerstein-Google-DKT-001-Complaint.pdf (accessed April 20, 2020).

6 A waiver of informed consent (45 CFR 46.116) can be granted by an IRB if research involves minimal risk to participants, if research cannot be conducted practically without a waiver, if the waiver does not negatively affect the rights of the participant, and if participants will be provided additional information about their participation following the study (when applicable). Blanket consent refers to a research participant consenting to all uses of their data with no restrictions.

Context Transgressions

Individual expectations of privacy vary depending on the context, Mello said, reiterating the point made by Estrin. Expectations are influenced by the relationship one has with whoever is receiving one’s information and by how one expects that information to be used (Nissenbaum, 2011; Sharon, 2016; Solove, 2020). Furthermore, she said, empirical research has found that willingness to provide one’s information varies significantly depending on whether that information is expected to be used for noncommercial or commercial purposes. For example, how a person feels about one of his or her doctors sharing very sensitive clinical information with other health care providers (e.g., to coordinate care) can be very different from how that person feels about a social media platform (e.g., Facebook) sharing much less sensitive information about him or her with other entities for commercial purposes.

The problem of transgressions of context is related to the problem of purpose and repurpose, but it is distinct, Mello said. Historical examples of context transgressions include the case of Henrietta Lacks7 and the case of Moore v. Regents of University of California,8 both of which involved an individual’s property rights, or lack thereof, in relation to commercial products derived from the person’s biospecimens.9 For rapidly exchanged digital information, the potential for transgressions of context is very high, Mello said—in particular, via the shift in context from noncommercial to commercial uses of data. A current example is health care organizations transferring large volumes of EHR data to technology companies for use in developing commercial products and services.

Addressing potential context transgressions has generally involved clearly disclosing that individuals do not have any rights to a share of the profits from technologies developed from their biospecimens, Mello said, or removing any information identifying the individual, or both. Alternatively, commercial and noncommercial context transgressions could be avoided by simply not sharing information, but Mello said this strategy is neither feasible nor desirable because needed products and services stem from data sharing. Another approach could be to eliminate the expectation of privacy altogether and make individuals aware that they are relinquishing control of their information in exchange for a variety of current and future benefits (e.g., free and low-cost services, development of precision medicine technologies). This approach conflicts with current privacy laws and human subjects protections, she said, and would shift the data sharing model from one of individual control over data to one of group deliberation and benefit sharing.

___________________

7 For more information, see https://www.hopkinsmedicine.org/henriettalacks/upholding-the-highest-bioethical-standards.html (accessed April 20, 2020).

8 For more information, see https://law.justia.com/cases/california/supreme-court/3d/51/120.html (accessed April 20, 2020).

9 In each case, cancer cells collected from patients Henrietta Lacks and John Moore in the course of their clinical care were used to develop cell lines that were later commercialized, without the patients’ knowledge or consent.

Corporate Involvement

For-profit corporations, including pharmaceutical companies and others, have long been involved in biomedical research, Mello said, and concern about the influence that corporations have on research persists. Digital technology companies have now emerged as dominant forces in biomedical product development. When they are not partnering with academic researchers or government, digital technology companies operate outside the ambit of structures that traditionally have provided ethical oversight of biomedical research (e.g., IRBs), Mello said, and comparable ethics mechanisms are largely absent in the industrial sector. Furthermore, digital technology companies have developed sufficient analytic capacity that they no longer need to interact with academic biomedical researchers for anything except to acquire patient data. The need for that interaction is also declining since digital product developers can often obtain health information directly from consumers or from direct-to-consumer companies. Corporate involvement is essential for product development, she said, but there are many issues yet to be addressed.

Incidental Research Subjects

Incidental research subjects are individuals who have not consented to be research participants but who have inadvertently come under the observation of researchers by association with others who are sharing data. Incidental sharing of information is a concern in the field of genetics, for example, where one person’s genomic data can reveal information about family members. The digital version of the problem is much broader, Mello said. For example, digital technologies such as ambient listening devices collect all conversations, not just those of the device owner, and digital traces such as social media posts can sweep in information about other identifiable individuals (e.g., geolocation data). The problem of incidental research subjects is not addressed by the current model of individual control of data through end user license agreements or informed consent.

Emerging Issues for Digital Technologies

The Scale of Data Collection

Mobile devices, ambient listening devices, and other passive data-collection technologies have the capability to collect vast amounts of data with minimal cost and effort, Mello said. There are benefits to this scale of data collection, but there are also concerns. Individual privacy is one such concern, but addressing this concern can raise other issues. For example, allowing surgical patients to opt out of having black box data collected during their procedures could impact quality improvement efforts. Data quality is also a concern, as mobile app users can “fudge” their data in ways that are not generally possible in clinical trials. There are also potential social consequences, such as health care providers stigmatizing or discriminating against noncompliant patients whose behaviors are detected through passive data collection.

The End of Anonymity

The de-identification of data is now recognized to be a temporary state, Mello said. Advances in computer science (e.g., hashing techniques, triangulation across datasets) have made it possible to re-identify individuals in anonymized datasets by linking previously unlinked data. Human research protections are based on the concept that de-identified individual patient data do not present a privacy risk and that, therefore, transfers of de-identified data do not require oversight. The increasing potential for re-identification calls for a reassessment of this thinking, she suggested.

The Ethical Adolescence of Data Science

Traditional training in science and medicine imparts a set of cultural scientific norms and ethical commitments that may not yet be embedded in the training of computer scientists, Mello said. Digital technology companies currently have a high degree of freedom to self-regulate, yet they may lack a fully formed ethics framework to guide their work. Privacy laws do apply to some degree, though perhaps not to the extent people may think, she added. (The Health Insurance Portability and Accountability Act, for example, does not apply to companies that are not providing health care or supporting health care operations.) There is a need to “establish this profession as a distinct moral community,” she said, pointing to the work of Metcalf (2014) and Hand (2018). The field of computer science has developed initial codes of ethics, which she said are a starting point, but more attention is needed.

Next Steps

Some of the ethical concerns associated with emerging digital technologies are new, Mello said, but many are long-standing concerns applied in a new context and with new implications. These ethical concerns cannot be adequately addressed within the existing regulatory system, she concluded. In addition, efforts to address these concerns need to engage people of younger generations and to take into consideration their perspectives on privacy and tradeoffs.

USING ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING IN RESEARCH AND CLINICAL CARE

Artificial Intelligence, Machine Learning, and Bias

The future of AI, Saria said, is in augmenting health care providers’ capabilities to more efficiently offer higher-quality care. This includes, for example, reducing diagnostic errors, recognizing diagnoses or complications earlier, targeting therapies more precisely, and avoiding adverse events. Ideally, AI would increase the efficiency of care without increasing the burden on providers.

There has been much discussion and concern about bias in AI algorithms, Saria said. Addressing these concerns requires understanding the different underlying problems, but that understanding is hindered by the lack of a taxonomy of bias. Saria discussed six types of errors that can introduce bias, beginning with facial recognition algorithms as an example.

Inadequate Data

Saria presented a study by Buolamwini and Gebru (2018) that found that the performance of three different facial recognition algorithms in determining gender varied by skin tone. In particular, the algorithms frequently misclassified the gender of darker-skinned females, while the genders of lighter-skinned females and both darker- and lighter-skinned males were classified with much greater accuracy. This weakness, Saria said, is a result of inadequate data. In this case, the underrepresentation of data from specific subpopulations can be addressed by augmenting the data or correcting the algorithms to account for the underrepresentation. Understanding the weakness allows for corrections to be made, but a lack of awareness of the weakness can lead to consequences downstream whose exact nature will depend on how these algorithms are used (e.g., for crime investigation, surveillance, employment).
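
A minimal sketch of the kind of disaggregated evaluation that surfaces this weakness: rather than a single overall accuracy number, performance is reported per subgroup. The arrays below are placeholders for illustration; they do not reproduce Buolamwini and Gebru’s actual benchmark or models.

```python
# Sketch: disaggregated evaluation in the spirit of Buolamwini and
# Gebru (2018). Report accuracy per subgroup instead of one overall
# score; y_true, y_pred, and subgroup are placeholder arrays.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 0, 1])
subgroup = np.array(["darker_f", "darker_f", "darker_f", "lighter_m",
                     "lighter_m", "darker_f", "lighter_m", "lighter_m"])

for g in np.unique(subgroup):
    mask = subgroup == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    # Reporting n alongside accuracy flags underrepresented groups.
    print(f"{g}: accuracy={acc:.2f}, n={mask.sum()}")
```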

Asking Bad Questions

Another type of error that can lead to bias is what Saria described as “bad questions.” As an example, she described the facial personality profiling offered by a startup technology company. The company claims to use machine learning algorithms for facial analysis to determine traits such as IQ, personality, and career prospects (e.g., whether a person might be a professional poker player or a terrorist). These questions cannot be answered using current observational datasets, Saria said, and no experimental or interventional datasets exist that could answer them. Furthermore, the algorithm is not learning true causal relationships; it is simply mimicking patterns already present in the dataset.

Lack of Robustness to Dataset Shift10

Error can also be introduced when an algorithm is not robust to dataset shift. To illustrate, Saria described the training and use of a deep learning algorithm for detecting pneumonia in chest X-rays (Zech et al., 2018). The algorithm performed well when used by the same hospital from which the training data were obtained. However, the diagnostic performance deteriorated when the algorithm was then used by a different hospital. This lack of robustness when analyzing datasets from another site, Saria said, was found to be related to style features of the X-rays that varied by institution (e.g., inlaid text or metal tokens visible on the images). This potential source of bias could be corrected by adjusting the algorithm to account for those style features that are not generalizable across datasets.
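
One simple way to probe for this kind of site-specific confounding is a “leakage check”: if a classifier can predict which hospital an image came from using the same features intended for diagnosis, those features are entangled with site. The sketch below runs that check on synthetic data; the injected artifact stands in for style features such as inlaid text or metal tokens.

```python
# Sketch: a leakage check for site-specific style features. A site-
# prediction AUC near 0.5 suggests little leakage; near 1.0 suggests
# the features are confounded with site. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))   # image-derived features (placeholder)
site = rng.integers(0, 2, size=200)     # hospital A vs. hospital B

# Inject a site-specific artifact into one feature, standing in for
# institution-dependent styling (e.g., inlaid text on the X-ray).
features[site == 1, 0] += 2.0

auc = cross_val_score(LogisticRegression(), features, site,
                      cv=5, scoring="roc_auc").mean()
print(f"site-prediction AUC: {auc:.2f}")
```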

Evolving Health Care Practice

Provider practice patterns evolve over time, Saria said, and if predictive algorithms are not robust to this type of dataset shift, this can lead to false alerts. As an example, an algorithm for the early detection of sepsis based its predictions on the laboratory tests being ordered by providers and, in particular, on whether a measurement of serum lactate level was ordered. The model was trained on data from 2011 through 2013 and performed well when tested in 2014, she said. In 2015, however, predictive performance deteriorated significantly, which Saria explained was associated with a new Centers for Medicare & Medicaid Services requirement for public reporting of sepsis outcomes. As a result of the new regulation, health care institutions increased sepsis surveillance considerably, and laboratory testing for serum lactate levels increased correspondingly. Because the algorithm was not robust to this dataset shift, there were more false alerts.
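
A minimal sketch of one way such a shift could be caught before it degrades alerts: monitor the ordering rate of a key input (here, serum lactate) during deployment against its training-era rate and flag a statistically significant change. The counts and threshold below are illustrative assumptions, not values from the system Saria described.

```python
# Sketch: flagging dataset shift in an input's ordering rate. Compares
# the share of encounters with a lactate order in a deployment window
# against the training-era rate; all numbers are illustrative.
from scipy.stats import binomtest

train_rate = 0.12                    # training-era ordering rate (assumed)
recent_orders, recent_n = 410, 2000  # deployment-window counts (assumed)

result = binomtest(recent_orders, recent_n, p=train_rate)
print(f"recent rate: {recent_orders / recent_n:.3f} vs. training {train_rate:.3f}")
if result.pvalue < 0.01:
    print("ordering behavior has shifted; recalibrate or retrain before trusting alerts")
```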

___________________

10 Dataset shift is a condition that occurs when data inputs and outputs differ between the training and testing stages. When this occurs, researchers are unable to make generalizations that may allow them to predict events that could occur (Quiñonero-Candela et al., 2009; Subbaswamy et al., 2020).


Model Blind Spots

A small perturbation to an input can expose “blind spots” that lead an algorithm to become “confidently wrong,” Saria said. She described a well-known example in which an image, correctly identified with confidence by an algorithm as a panda, was minimally perturbed with carefully crafted noise. Although the change was imperceptible to the human eye and the image appeared to be the same panda, the algorithm determined with high certainty that the image was now a gibbon (Goodfellow et al., 2015). It is important to understand how a learning algorithm is performing so that errors can be addressed, she said.
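
The panda example comes from the fast gradient sign method, which nudges an input in the direction that most increases the model’s loss. The sketch below applies the same idea to a toy logistic model so the effect is easy to verify by hand; the weights, input, and step size are illustrative, and the step is scaled up for a three-dimensional toy where “imperceptible” has no literal meaning.

```python
# Sketch of the fast gradient sign method (Goodfellow et al., 2015) on
# a toy logistic model: a small perturbation aligned with the loss
# gradient flips a confident prediction. All values are placeholders.
import numpy as np

w, b = np.array([2.0, -3.0, 1.0]), 0.1   # toy "trained" model
x = np.array([0.4, -0.2, 0.3])           # input confidently classified as class 1

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))

# For cross-entropy loss with label 1, the input gradient is (p - 1) * w;
# FGSM steps in the sign of that gradient.
eps = 0.4
grad = (predict(x) - 1.0) * w
x_adv = x + eps * np.sign(grad)

print(f"original confidence: {predict(x):.3f}")      # ~0.86 for class 1
print(f"adversarial confidence: {predict(x_adv):.3f}")  # drops below 0.5
```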

Human Error in Design

Human error can also lead to bias in models, Saria said. A recent study uncovered bias in an algorithm designed by Optum that is widely used to identify higher-risk patients in need of better care management (Obermeyer et al., 2019). The algorithm was designed to use billing and insurance payment data to predict illness so that high-cost patients could be assigned case managers to help them more proactively manage their health conditions. However, the study found that the high users of health care identified by the algorithm tended to be white, with black individuals using health care less frequently. This resulted in health systems unknowingly offering more care to those already accessing care and thereby further widening the disparities in care.
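
A sketch of the kind of audit Obermeyer and colleagues performed: hold the risk score fixed and compare an independent measure of health across groups. The data below are synthetic and constructed to mimic the published pattern (equal scores, unequal illness); they are not the actual Optum data.

```python
# Sketch: auditing a proxy label. At the same predicted risk score,
# compare an independent health measure (active chronic conditions)
# across groups. Synthetic data built to mimic Obermeyer et al. (2019).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "group": rng.choice(["black", "white"], size=n),
    "risk_score": rng.uniform(0, 1, size=n),
})
# Synthetic illustration of the finding: at equal scores, black patients
# are sicker, because cost understates their health needs.
df["chronic_conditions"] = (4 * df["risk_score"]
                            + np.where(df["group"] == "black", 1.2, 0.0)
                            + rng.normal(0, 0.5, size=n))

df["score_decile"] = pd.qcut(df["risk_score"], 10, labels=False)
audit = df.groupby(["score_decile", "group"])["chronic_conditions"].mean().unstack()
print(audit)  # higher illness at the same score signals a biased proxy label
```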

Addressing Algorithm Biases

A common element across these scenarios, Saria said, is that the errors are generally fixable if the source of the error is known. Changing human behavior is difficult, she said, but point-of-care algorithms, corrected for the sources of bias discussed, can provide “real-time nudges” to influence health care provider decision making.


In developing AI for health care, there is a need for safe and reliable machine learning, Saria said, suggesting that the field could draw from engineering disciplines, which focus on both understanding how a system should behave and then ensuring that it behaves that way. There is excitement about the use of AI in the health care field and interest in downloading and deploying tools, she said, but the underlying “engineering” principles critical for building safe and reliable systems are currently often overlooked (i.e., understanding how these tools work, determining if they are working, and guaranteeing that they continue to work as expected). She described the three pillars of safe and reliable machine learning as failure prevention, failure identification and reliability monitoring, and maintenance. Engineering health care algorithms for safety and reliability involves ensuring that algorithms are more robust to sources of bias (e.g., dataset shift), are able to detect errors (e.g., inadequate data) and identify scenarios or cases that may be outliers in real-time (test-time monitoring), and are updated as needed when shifts or drifts are detected. She referred participants to her tutorial on safe and reliable machine learning for further information (Saria and Subbaswamy, 2019). In closing, Saria suggested that algorithms should be developed, deployed, and monitored post-deployment with the same rigor as prescription drugs (see Coravos et al., 2019).
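
As one concrete instance of test-time monitoring, the sketch below flags inputs that lie far from the training distribution before the model’s output is trusted. A Mahalanobis distance threshold is a deliberately simple stand-in for the more sophisticated methods covered in Saria’s tutorial; the data and threshold are placeholders.

```python
# Sketch: simple test-time outlier monitoring. Inputs far from the
# training distribution (by Mahalanobis distance) are routed to human
# review instead of being scored automatically. Data are placeholders.
import numpy as np

rng = np.random.default_rng(2)
train = rng.normal(size=(1000, 5))   # training feature matrix (placeholder)
mu = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

def is_outlier(x, threshold=5.0):
    d = np.sqrt((x - mu) @ cov_inv @ (x - mu))
    return d > threshold

x_new = np.array([0.1, -0.3, 0.2, 0.0, 8.0])  # one feature far out of range
print(is_outlier(x_new))  # True: flag for review rather than auto-acting
```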

ETHICAL ISSUES IN MACHINE LEARNING AND DEEP LEARNING NEURAL NETWORKS

Sharing Health Care Data

Many of the ethical issues associated with machine learning involve concerns about data sharing, Ossorio said. There is governance in place for the sharing of research data, and she said that clinical trial participants are becoming increasingly well informed about the ways their data might be shared. However, there is less governance of the sharing of clinical care data. At the federal level there have been efforts to collect and use clinical care data for quality analysis purposes. The training of algorithms requires large amounts of data, which is why, for example, developers such as Alphabet and Microsoft seek to acquire millions of medical images and accompanying medical data from hospital picture archiving and communication systems.

Unlike the case with data collected as part of clinical research, patients interacting with the health care system do not expect their clinical care data to be shared (beyond what is needed to facilitate and coordinate their care). The commercial use of health data currently operates under a very different set of norms, professional commitments, and economic commitments than the clinical research enterprise, Ossorio said, reflecting earlier comments from Mello. While pharmaceutical companies are subject to regulations that protect research participants and the future users of their products, there is not yet comparable oversight of developers of AI for health care. Data are being transferred from the health care context, where the norm is to put the interests of the patient at the center of decision making, to a different context that is not patient-centered.

Price and Cohen (2019) have examined the sharing of health data for machine learning; their analysis, Ossorio said, discusses expanding the “circles of trust” to include entities that develop AI. Whether this should or could be done, given that the norms that govern these types of commercial entities (e.g., Google, Microsoft) are very different from the norms governing clinical research and health care, remains an open question, she said. For example, the norm for development by these types of commercial entities is often to deploy a technology as quickly as possible and then make corrections and updates based on additional data collected while the product is being used in the marketplace. This approach might be acceptable when developing apps that, for example, recommend books or movies for the user. In the health arena, however, drugs and devices generally require premarket assessment of safety and efficacy, she said.

Developing and Implementing Responsible Artificial Intelligence for Health Care

Based on her experience, Ossorio said that many of the companies developing AI do not fully understand the scope or context of health care data. For example, in the case of a machine learning algorithm to aid in the interpretation of clinical laboratory test results, to improve that algorithm after deployment one would need data about how the clinical laboratory is using that test as well as patient clinical outcomes data. However, patient outcomes data generally reside with the health care provider (outcomes data are not usually maintained by the testing laboratory), and the outcomes of interest might appear over the long term. In addition, not understanding the context in which the data were generated can result in the development of an algorithm that is inherently biased or lacks clinical utility.

Another concern, Ossorio continued, is the common perception that those who are expert in developing algorithms can do so using any type of data and that simply providing them with access to volumes of health data will transform the practice of medicine. Collaboration among subject matter experts and algorithm developers is essential for the development and assessment of safe, reliable, useful tools, she said. There is also a need for standards to help ensure the responsible implementation of machine learning algorithms in health care practice (Wiens et al., 2019).


Regulating Machine Learning Algorithms

Algorithms should be held to rigorous standards that are similar to those necessary for the development of drugs, Ossorio said. Most algorithms are not regulated, and those that have been subject to regulation by the U.S. Food and Drug Administration (FDA) thus far have been treated as medical devices. Medical devices generally do not need to meet the same standard of evidence required of drugs before authorization for marketing. The validation of algorithms requires sharing of both code and datasets, Ossorio continued, and researchers are also being encouraged by journals to share code. Because some algorithms are in fact medical devices, Ossorio said, the data used for validation need to be shared according to current regulations and guidelines and need to be labeled as being for research purposes only.

FDA is currently considering how to regulate machine learning algorithms for health care. Algorithms that are cleared or approved by FDA as medical devices are trained, tested, and then locked down prior to implementation, Ossorio said. The challenge now, she said, is how to regulate unlocked or partially unlocked machine learning algorithms that might change over time, perhaps in unpredictable ways.

DISCUSSION

Ethics Training in Data Science and Artificial Intelligence

Are there efforts, Lo asked, to incorporate a discussion of ethical issues into the training of data scientists and AI researchers? Individuals in data science have learned norms and behaviors in the context of the companies they work for and the incentive structures they are presented with, Estrin said, and these do not translate to the health care context. Corrections will require a combination of professional ethics and law, she said. Saria agreed and expressed optimism that positive, corrective action is occurring. All of the leading conferences in the data science field now have discussions of ethics, bias, transparency, and fairness on the agenda, she said, and there are also meetings entirely devoted to these topics. Ossorio was also optimistic and said that, in her experience, data scientists are very interested in discussing ethical issues. Because of this interest she was asked to develop an ethics class for data scientists at her institution, which she said has been popular and is now required for many students in biomedical engineering, biostatistics, public health, and bioinformatics. The curriculum is built around case studies, and she said that engineers and computer scientists have skills in problem solving that translate to solving bioethical problems. A new presidential initiative at Stanford University to provide ethics training to students in computer science has also been very well received, Mello added.

Education on ethical issues in data science is increasing, Estrin said, but the growing interest in ethics in data science should be supported and should also align with laws, regulations, and a shift in incentive structures to help ensure that ethical products can reach the marketplace. Mello said a given field will go through three stages of ethical maturation: recognizing that there are ethical issues, developing a framework for solving those problems, and gaining traction and leadership buy-in so that those who are trained in ethics are supported in taking ethical actions. The field of data science is currently in the first stage, she said, and is just beginning to enter the second.

Transparency and Data Sharing

Transparency in the Absence of Choice

What is the value of transparency, Lo asked, if patients have no choice but to accept sharing of their data as an aspect of receiving services? The health care system where he receives his care, he said, is negotiating the transfer of patient data to a company for algorithm development and validation. Patients do not have a choice about whether their data will be shared, other than to choose not to receive care.

People often feel exploited when they do not feel they have a real choice about sharing their data and do not see a clear benefit of giving up their information, Ossorio said. For example, people might feel they have no choice but to use social media to be informed about work-related information or to stay in touch with family and therefore have no real choice about submitting to the collection and use of their data by the social media websites. They do not perceive that they have made a rational tradeoff of providing information to receive benefits. Transparency about data sharing, even in the absence of choice, she suggested, is better than no transparency because it allows people to engage in political activity to help shape laws and norms. Transparency is also important for building trust. However, transparency is not a solution to deeper ethical problems.

Data Governance

Transparency without data governance is also insufficient, Ossorio said. How data are transferred to the commercial context is important, and license agreements should not lead individuals to relinquish all control. There is a distinction, Estrin said, between institutions selling data and relinquishing responsibility and institutions collaborating with companies as institutions or providers and bringing their own norms to that collaborative process. This idea, currently part of work being done by her colleagues at Cornell NYC Tech, may be an interesting way to think about data sharing, she added. Many academic institutions have initiated collaborations with for-profit companies, Ossorio said, but full collaboration is often not possible, as the digital technology developer is often interested in using the data for an area of research in which the data-sharing department or institution has no interest or expertise. A challenge, she said, is to define the data governance approach that would be appropriate for the middle ground between a simple transfer of data and a full research collaboration.

Allowing Patients to Consent or Opt Out of Data Sharing

Should patients be able to opt out of the sharing of their health care data for secondary purposes, Lo asked, and how might that affect datasets and researchers’ ability to develop and validate digital technologies? Individuals should be given the choice to opt out of data sharing, Saria said, adding that having some patients decline to share should not create technical problems for researchers. Institutional infrastructure is the main barrier to implementing an opt-out choice for patients, she said. In her personal experience, she has been asked to consent to or decline the sharing of her health data. Whether patients like and trust their providers can influence their decisions about sharing their health data.

There are generational differences in culture and norms that affect the acceptance of information sharing, Saria observed. Generations that grew up using the Internet tend to be more skeptical of what they read online, while older generations are more likely to believe that what they read online is true. Furthermore, she said, she and many others who grew up with the Internet understand and accept that they are receiving services of value to them in exchange for websites collecting and using their user data. Informed consent is important, but there is also a need for education to ensure that people understand what consenting means for them, she said. Many people do perceive data sharing to be exploitative and do not understand or consider the benefits and costs of sharing or not sharing one’s data. Institutional leaders who resist providing data to for-profit technology developers at no cost are often less concerned about protecting patient privacy than about forgoing a potential revenue stream, Saria said. Instead, they should be thinking about how patients might benefit from more efficient and transparent use of their data.

Estrin agreed that opting out of data sharing should be allowed; however, she said, simply allowing opt-outs is not sufficient, and institutions still need to behave responsibly and establish ethical norms, independent of patient choice. Just because a technology or service is provided to consumers at no monetary cost does not necessarily mean it is acceptable, she added. In many cases it is difficult to avoid a free technology or service because it has become part of the digital technology infrastructure, and in a capitalist economy consumers cannot vote with their purchasing power when something is already free.

Unlike patients in integrated health systems, many people do not have the ability to transfer their health data from one provider to another, Estrin noted. Solutions, such as HealthKit (Apple) and CommonHealth (Android), have emerged to allow patients to download their own clinical and health data and to share it across apps and providers. A challenge, she said, is defining which apps or other data users are allowed access to an individual’s data. It has been suggested, Estrin said, that consenting to share one’s data should be included in the standard terms of service for apps, which advocates say would support frictionless development and innovation by startup companies. However, she said, there is empirical evidence that this type of consent is not sufficient. One option under discussion is that a health system could approve the use of apps that provide some oversight of data sharing and use (i.e., apps that do not sell or reuse patients’ data).

Effects of Machine Learning and Artificial Intelligence–Based Tools on Clinician Practice

Is it possible, one workshop participant asked, that physicians might become dependent on digital technology–based interventions that propose interpretations and solutions, and could that dependence degrade provider expertise?

Physician integration with technology is not a new problem, Mello said, and providers have long used different types of decision-support tools (e.g., clinical practice guidelines, automated decision support within the EHR). Questions have been raised as to whether standards are needed for how physicians should interact with digital technologies, she continued, and whether codes of conduct in the medical professions need to address this explicitly. Clinical providers are currently using laboratory tests that employ algorithms for race correction of results (e.g., the calculation of estimated glomerular filtration rate adjusted for race), said a workshop participant. Existing race-based corrections in medicine need to be examined along with new and emerging algorithms, the participant added.

Clinicians and clinical laboratory professionals need training on how to properly use algorithms, Ossorio said. Before that can happen, the algorithms need to be studied to better understand their characteristics and role in practice (e.g., generalizability, indications, contraindications). These types of studies, however, are not incentivized by the current regulatory system for medical devices, she said.

To what extent, a workshop participant asked, should patients be made aware that provider decisions are being assisted by AI? Providers do not generally discuss with patients the specific resources they use in the course of practice, Mello said, and it is not clear that a patient encounter needs to include discussion of any algorithms used by the provider.

Structural Inequalities in Datasets Used for Algorithm Development

There are structural inequalities embedded in the data being used to develop and train machine learning algorithms, said Dorothy Roberts, the George A. Weiss University Professor of Law and Sociology and the director of the Penn Program on Race, Science, and Society at the University of Pennsylvania, and these can result in the outcomes of predictive analytics being biased (racially biased in particular). Predictive policing, which uses arrest data to predict who in a community is likely to commit crimes in the future, is an example, she said. Discriminatory law enforcement practices (e.g., racially biased stop-and-frisk programs, policing efforts focused on African American neighborhoods) result in racially skewed arrest data that then lead algorithms to predict that those who have the characteristics of black people are likely to commit crimes in the future, she said. There are similar examples in medicine of existing structural inequalities being perpetuated by algorithms, Roberts continued, such as the study of the Optum algorithm discussed by Saria. In that case, an algorithm designed to identify high-risk users of health care in need of additional services was trained using payment data. In choosing health care costs as the training signal, the developers of the algorithm did not take into account the fact that less money is spent caring for black patients, who are often sicker, she said.

Greater collaboration is needed, Roberts said, but that collaboration needs to extend beyond medical professionals and algorithm developers. Collaborations also need to include sociologists and others who understand structural inequality in society and who can recognize errors in datasets that could lead to bias, a point with which Saria agreed.

Structural inequality patterns reflected in datasets can be due to social inequalities that exist outside of the health care system, inequalities in access to health care (e.g., insurance coverage, proximity to providers), and inequalities that have been created within the health care system, Ossorio said. Machine learning algorithms could be helpful in identifying inequalities so that they can be addressed, she suggested; however, assessing the performance of commercial algorithms can be hampered by the fact that these products are frequently licensed—often with restrictions on how they can be studied—rather than sold outright. In some cases, for example, the data used for development are considered a trade secret and are not disclosed.

Potential Research Questions for Funding

Are there research topics in the areas of bioethics, data science, computer science, and digital technology development that should be funded for study? This was the next question Lo posed.

Views on Health Data Sharing and Privacy

Research is needed to better understand how patients would respond if given the choice to opt out of having their clinical data shared with digital technology companies, said Benjamin Wilfond, the director of the Treuman Katz Center for Pediatric Bioethics at Seattle Children’s Hospital and Research Institute and a professor in and the chief of the Division of Bioethics and Palliative Care in the Department of Pediatrics at the University of Washington School of Medicine. Mello agreed that how people think through a choice to opt out could be better understood. Studies have used administrative data to assess how many people opt out of programs such as electronic health information exchanges, but these studies do not differentiate between those who have made an informed decision to not opt out (i.e., to participate) and those who simply take no action and participate by default. The role of education in understanding the benefits and risks of participation versus opting out could be studied, she said. It would also be helpful to understand the higher rate of opting out among certain racial and ethnic groups and how the health care enterprise can build trust with these communities.

When presented with the choice to opt out, most people will not do anything, Estrin said, so a better question might be how people respond to the choice to opt in (i.e., asking patients to share their data). Most patients presented with an opt-out choice do not fully understand what they are being asked to decide, Saria added. In particular, they do not understand the potential ramifications of not participating (e.g., products of value to them that might not be developed). There is an initiative in the United Kingdom to educate the public about the benefits and risks of sharing or not sharing health data, she said, and this could be a good initiative to replicate in the United States in order to help individuals move from a general fear of data sharing to an understanding of the good that can result.11 Investment should go beyond informed consent research to studies of better ways to use data to improve people’s lives, she added.

___________________

11 For more information about the United Kingdom’s Understanding Patient Data Initiative, see https://understandingpatientdata.org.uk (accessed April 21, 2020).

Improving Stakeholder Literacy

Pilot studies could be conducted to explore alternative approaches to individual informed consent, Mello said. Some institutions, for example, have established data use committees to evaluate the proposed uses of health data. Studies could be undertaken to identify the benefits and drawbacks of this approach, compare how the decisions made by the data use committee align with what individuals would choose for themselves, and assess the extent to which committee deliberations reflect the views of minority communities. Understanding intergenerational shifts in perceptions of privacy is another area in need of further research, Mello said. This includes understanding different views on the acceptability of trade-offs (e.g., sharing personal information in return for receiving goods and services at low or no cost). Privacy rules being established now might not be relevant for the next generation, she added. Research could be done, Lo said, to assess patients’ understanding of their options regarding data sharing, to identify effective approaches for informing them of their options, and to determine if educating patients about their options changes their behavior.

Research is also needed on how to improve stakeholder literacy, said Camille Nebeker, an associate professor in the University of California, San Diego, School of Medicine. This includes, for example, ensuring that research participants have an adequate understanding of research, data, and technology; that researchers have sufficient literacy in data management; and that students in technology fields gain literacy in ethics. This is an important area for research, Estrin agreed. Developing ethics training programs for computer scientists and educational materials for consumers should not be difficult, Mello said; the challenge is gaining and holding the attention of consumers who are already bombarded with opportunities to consider information and make decisions about data sharing. Ossorio said that an educational approach being developed at Duke provides information about an algorithm in the form of prescribing information (e.g., recommended use, contraindications). This approach quickly and concisely communicates the most important information about an algorithm to users. Research could be done to understand the impact of this and other types of educational interventions on outcomes of interest, Lo suggested.
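
To make the prescribing-information idea concrete, such a label might be represented as a small structured object that travels with an algorithm and renders a short summary for users. The sketch below is a minimal illustration in Python; the field names and the sepsis example are assumptions made for this sketch, not the actual format being developed at Duke.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelLabel:
    """A hypothetical 'prescribing information' label for a clinical algorithm.

    The fields are illustrative assumptions, modeled loosely on drug labeling;
    they do not represent the actual format discussed at the workshop.
    """
    name: str
    intended_use: str            # the clinical question the model was built to answer
    training_population: str     # who the training data represent
    contraindications: List[str] = field(default_factory=list)  # settings where use is inappropriate
    known_limitations: List[str] = field(default_factory=list)  # e.g., subgroups with degraded performance

    def summary(self) -> str:
        """Render the label as a short, human-readable block for end users."""
        lines = [
            f"MODEL: {self.name}",
            f"RECOMMENDED USE: {self.intended_use}",
            f"TRAINED ON: {self.training_population}",
            "CONTRAINDICATIONS: " + ("; ".join(self.contraindications) or "none listed"),
            "KNOWN LIMITATIONS: " + ("; ".join(self.known_limitations) or "none listed"),
        ]
        return "\n".join(lines)

# Example: a label for a hypothetical sepsis risk model.
label = ModelLabel(
    name="Sepsis risk score v2 (hypothetical)",
    intended_use="Flag adult inpatients at elevated short-term risk of sepsis",
    training_population="Adult inpatients at a single academic medical center, 2015-2019",
    contraindications=["Pediatric patients", "Outpatient settings"],
    known_limitations=["Not validated on patients with missing vital-sign data"],
)
print(label.summary())
```

Keeping such a label machine readable as well as human readable would also let institutions index and audit labels across the many algorithms they deploy.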

Assessing Algorithms for Bias and Fairness

Developing metrics and tests that can measure whether an algorithm is biased is another area for research, Saria said. Studies could explore different scenarios in which bias might be present; these scenarios could then inform the design of tests and metrics for assessing the likelihood of bias. Automated approaches to detecting, diagnosing, and correcting bias are needed, she explained, because access to proprietary code and datasets might not be provided, and significant time and resources are needed to conduct in-depth analyses. Metrics are also needed for assessing the datasets used for algorithm training, Ossorio agreed, and she noted the importance of understanding the impact of data cleaning on the fairness of datasets. Researchers at the University of Wisconsin have written algorithms that can assess the fairness of other algorithms and can provide input during algorithm training to increase fairness, Ossorio said. This approach is more challenging in the health care context than in many other contexts, she added. There is value in getting researchers and scholars to collaborate in considering different theories of fairness, how they apply in a given context, why one theory might be chosen over another, and how the theories can be built into a software product, she said.
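
To illustrate what such metrics can look like in practice, the sketch below computes two group-fairness quantities that are common in the literature: the demographic parity gap (the spread in positive prediction rates across groups) and the equal opportunity gap (the spread in true positive rates). These are generic examples, not the specific metrics or the University of Wisconsin tools discussed by the panelists; notably, they require only a model’s predictions, not its code, which matters when proprietary access is restricted.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def group_fairness_gaps(
    y_true: List[int], y_pred: List[int], group: List[str]
) -> Tuple[float, float, Dict[str, Dict[str, float]]]:
    """Compute two common group-fairness metrics for a binary classifier.

    Returns the demographic parity gap (max-min positive prediction rate
    across groups), the equal opportunity gap (max-min true positive rate),
    and the per-group rates used to compute them.
    """
    pos, n = defaultdict(int), defaultdict(int)          # positive predictions / group sizes
    tp, actual_pos = defaultdict(int), defaultdict(int)  # true positives / actual positives
    for yt, yp, g in zip(y_true, y_pred, group):
        n[g] += 1
        pos[g] += yp
        actual_pos[g] += yt
        tp[g] += yt and yp
    rates = {
        g: {
            "positive_rate": pos[g] / n[g],
            "tpr": tp[g] / actual_pos[g] if actual_pos[g] else float("nan"),
        }
        for g in n
    }
    pr = [r["positive_rate"] for r in rates.values()]
    tprs = [r["tpr"] for r in rates.values() if r["tpr"] == r["tpr"]]  # drop NaNs
    return max(pr) - min(pr), max(tprs) - min(tprs), rates

# Toy example with two groups; in practice these would come from a held-out audit set.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
dp_gap, eo_gap, rates = group_fairness_gaps(y_true, y_pred, group)
print(f"demographic parity gap: {dp_gap:.2f}, equal opportunity gap: {eo_gap:.2f}")
```

An automated audit of the kind Saria describes might run checks like these across many subgroup definitions and flag gaps above a chosen threshold; which gap matters most depends on the theory of fairness one adopts, the question Ossorio raises above.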

It is also important to learn from the cases of algorithms that did not perform as expected, Estrin said. One option, she suggested, would be to work backward to see how the implementation of regulations, laws, or incentives might have altered the outcomes (e.g., prevented the biased outcomes). In the case of the Optum algorithm discussed by Saria, for example, the company was seeking to optimize patient care in order to control costs. The research question in this case, Estrin said, could be: What laws and regulations might have allowed for this optimization function while ensuring ethical outcomes?

Moving Forward

In closing, the panelists reiterated the need for funding to support broad interdisciplinary research in the areas of bioethics and digital technology development. Potential ethical issues need to be addressed up front, Mello said, before digital technologies are released for use, while Estrin underscored the need to understand the incentive structures that currently drive digital technology development and deployment.

