In the previous chapters, the committee addressed why returning results provides value to participants and scientific stakeholders, what research results could be returned, and the timing of returning individual research results. This chapter focuses on the “how.” As discussed earlier in this report, the return of individual research results is a natural progression in the push for increasing transparency in the research enterprise, and the committee envisions a future in which participants have greater access to their individual research results. The committee acknowledges, however, that expanding the return of research results places new demands on the research enterprise, including developing the needed expertise on study teams and assembling the resources required to offer and return individual research results appropriately. Inconsistent practices will need to be addressed in order to minimize the risk of harm from the return of results; an evidence base will be needed to support the development of best practices for returning results; and those best practices will need to be disseminated and broadly implemented in order to prevent inequities. Recognizing that it will take time to fully implement best practices for the return of results—and that in the immediate term this will be an aspirational target—the committee sees opportunity for incremental progress. In the beginning, a number of relatively simple measures (“low-hanging fruit”) could be implemented in ongoing and near-term studies without prohibitive investments of time or resources. These early steps have the potential to help the research enterprise begin to develop an evidence base for the return of results and will be important when working toward the committee’s vision of a broad return of research results, as discussed in the previous chapters.
In this chapter the committee provides some concrete strategies for advancing practices for offering and returning results, including setting appropriate expectations for participants (for example, in the consent process) and incorporating established principles for effective communication into the return-of-results process. The chapter also discusses how the appropriate return of individual research results requires investment and careful forethought regarding the necessary contextualizing information, takeaway messages, and disclaimers. To return research results effectively will require research stakeholders to consider how to communicate in ways that are appropriate for participants with different needs, resources, and backgrounds. Returning research results can be done (and it can be done well), up-front investments can be scalable, and the development of best practices over time will improve the consistency and quality of the process of returning individual research results.
Given the complexity and uncertainty often inherent in research results, research teams would benefit from guidance on how to accomplish the challenging task of accurately communicating research results to individual participants. Investigators will need to understand how to effectively enable understanding and simultaneously communicate how to use individual research results when appropriate and how to caution against overuse. Importantly, previous experiences with returning results in health care and research settings can inform future best practice and guidance development by helping pinpoint what is effective and what is not with different groups of participants. In addition, principles for the disclosure of risks and benefits in the informed consent process will need to be adapted for use in best practices for the return of results.
Learning from the Return of Clinical Test Results: Opportunities and Limitations
The health care enterprise has considerable experience with the generation, interpretation, and return of clinical test results. In most clinical contexts, the flow of information passes through a clinician before reaching the patient. The clinician’s role, therefore, has been one part gatekeeper and one part interpreter. It is important to note, however, that information technology is increasingly changing this pattern. In many health care systems, patients can access laboratory test results directly through patient portals to electronic record systems, thereby reviewing these data without a clinician present to explain the results and their significance (AHA, 2016). Furthermore, health systems vary in the degree to which clinicians are required to review or annotate results before they are released to patients. Direct-to-consumer testing represents another model for the direct
return of results to an individual, one in which a clinician may not even know that a test has been conducted until the patient presents the result report to their physician (O’Connor, 2016).
While the health care delivery experience may offer lessons for the return of research results, this is not to say that best practices for communication are always (or even usually) applied in clinical practice. Research indicates that the current level of information provided with clinical test results may be insufficient to enable patients to understand their meaning (O’Kane et al., 2015). Clinical biomarker results, for example, are generally returned in numerical or tabular form with a standard reference range. However, recent evidence suggests that many patients struggle to determine whether a result is inside or outside of the standard reference range, which is the most basic form of understanding needed for meaningful use (Zikmund-Fisher et al., 2014). Sometimes (but not always) results are also accompanied by an interpretive statement from the ordering clinician, but the language used in such statements may vary across clinicians and situations. Despite this, there are situations in which clinical results are returned with additional contextual information addressing the purpose of the test and the meaning of the information it generates. For example, in clinical genetics patients are often given substantial contextual information (e.g., counseling, the meaning of a negative result, clear statements of the known impact of particular mutations) to help them understand their results (Haga et al., 2014). The same practices may be appropriate for research-based genetic testing, although research results may be associated with greater uncertainty, which may require further clarification.
Challenges in communicating clinical test results and other medical information effectively may stem, in part, from gaps in health literacy1 and other forms of literacy, such as graph literacy and health numeracy.2 In 2006 the National Center for Education Statistics released a National Assessment of Adult Literacy and found that “the majority of adults (53 percent) had intermediate health literacy while about 22 percent had basic and 14 percent had below basic health literacy” (National Center for Education Statistics, 2006, p. v). Extensive research shows that low health literacy, poor numeracy, poor graphical literacy (Joint Commission, 2007), and language barriers all impede an individual’s ability to interpret and use information such as test result communications (Rodríguez et al., 2013; Zikmund-Fisher et al., 2014, 2017). This underscores the need to recognize the limitations that poor literacy may impose on comprehension and emphasizes the importance of clear communication in the provision of health information (Joint Commission, 2007), including clinical and research test results. To address literacy
1 Health literacy is “the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions” (IOM, 2004, p. 20).
2 Health numeracy is “the degree to which individuals have the capacity to access, process, interpret, communicate, and act on numerical, quantitative, graphical, biostatistical, and probabilistic health information needed to make effective health decisions” (Golbeck et al., 2005, p. 375).
and numeracy barriers, such information needs to be provided in a format and with content that is accessible to the target audience (Parker et al., 2016). This may entail
- creating materials in users’ primary languages and considering language-based sources of misunderstanding to address language barriers,
- creating materials that reflect participants’ preferences regarding terminology,
- using plain language to overcome low literacy (CDC, 2016; IOM, 2014), and
- using evidence-based formats that facilitate understanding of quantitative information by those with low numeracy and graphical literacy (IOM, 2014).
In addition, the National Academies of Sciences, Engineering, and Medicine’s Roundtable on Health Literacy published a perspective on health literacy and precision medicine, which concluded that “participant input into the crafting of clear, navigable, and useful messages and processes” is a hard-learned lesson from the field of health literacy (Parker et al., 2016, p. 3). While those in the field of health care have acknowledged these gaps in practice that inhibit patient understanding and have made strides to correct them, there are still areas where the processes of clinical test return and messaging can be improved. Research has the opportunity to learn from both the good and the bad in clinical test return. Doing so will allow the research enterprise to shape the return of research results into a practice that most fully benefits the participant without unduly burdening the investigator. However, research sponsors and funding agencies will first need to support an assessment of which practices are effective and how to apply them in a research context.
CONCLUSION: Many existing practices in the return of clinical results are potentially applicable to the return of individual research results, but they will need to be critically evaluated before they are adopted in the return-of-research-results context.
Learning from Current Practices in Return of Individual Research Results
Research results differ substantially from clinical test results in a number of ways, which limits the degree to which clinical experience can offer guidance on the return of research results. Most notably, research results are often associated with a greater degree of uncertainty as a result of incomplete scientific knowledge, and the uncertainties present at the level of individual results are even larger than the uncertainties present in aggregate results. However, as research continues, quality management systems are adopted by research laboratories, and evidence accumulates, the uncertainty in research test results can be reduced.
When patients’ results are returned by the treating clinicians or clinical laboratories, the results are often accompanied by well-established population distributions or reference ranges3 that enable interpretation by the patient and clinician (Medscape, 2014). Expected reference ranges for clinical tests (e.g., blood counts) are known because the results are generated by standardized procedures used across broad populations of patients which allow for the establishment of normal result ranges for different patient characteristics, such as age or gender. In contrast, because of the significant variability in practices used in research settings, a result may need to be accompanied by documentation on what was actually done or not done in order to evaluate its meaning (and potential value or actionability). Moreover, reference information (e.g., standard ranges) for research results is often unavailable, non-representative, or unreliable for understanding whether a result is normal or abnormal and for guiding decision making. As discussed in more detail later in this chapter, research teams will need to think carefully about what reference information is available and potentially valuable for use in communicating with participants about the meaning of their individual results.
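The distinction drawn above—between results backed by a validated reference range and results that lack one—can be made concrete in code. The following Python sketch is purely illustrative (the function, class, and range values are hypothetical, not drawn from any study described in this chapter); it shows how a results-reporting pipeline might refuse to label a value normal or abnormal when no validated reference range exists:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ReferenceRange:
    """A lower and upper limit for a lab value, with a note on its provenance."""
    low: float
    high: float
    source: str  # e.g., "validated clinical range" or "study cohort only"


def interpret_result(value: float, ref: Optional[ReferenceRange]) -> str:
    """Classify a numeric result against a reference range, declining to
    over-interpret when no validated range exists (a common situation for
    research assays)."""
    if ref is None:
        return ("no validated reference range is available; this result "
                "cannot be classified as normal or abnormal")
    if value < ref.low:
        return f"below the reference range ({ref.low}-{ref.high}; {ref.source})"
    if value > ref.high:
        return f"above the reference range ({ref.low}-{ref.high}; {ref.source})"
    return f"within the reference range ({ref.low}-{ref.high}; {ref.source})"
```

The design choice worth noting is the explicit `None` branch: rather than silently falling back to a study-cohort distribution, the sketch surfaces the absence of reference information, mirroring the chapter’s point that such limitations are themselves part of what must be communicated.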
Uncertainty is difficult to communicate, particularly when it relates to something that is already probabilistic in nature, such as genetic-related risk; therefore, uncertainty is often ignored (Han et al., 2011). Conveying uncertainty effectively is a critical part of the return of research results; otherwise, investigators risk participants placing too much or too little trust in the results. As discussed in more detail later in this chapter, attention needs to be paid to providing reference information that enables participants (and, in some cases, their treating physicians) to interpret and understand the potential (or lack thereof) for using the research results.
Although the return of individual results is not currently widespread among research studies, certain investigators are already returning research results to individual participants. This is particularly true in the fields of genetics and environmental health (discussed in the sections “Returning Individual Genetic Research Results” and “Environmental Health and the Return of Individual Research Results”). These fields’ experiences with the return of research results may be valuable in the development of best practices and guidance for other types of research results.
Returning Individual Genetic Research Results
In the field of genetics, some research investigators and direct-to-consumer (DTC) companies have been using and exploring methods for returning individual
3 “A reference range is a set of values that includes upper and lower limits of a lab test based on a group of otherwise healthy people. The values in between those limits may depend on such factors as age, sex, and specimen type (blood, urine, spinal fluid, etc.) and can also be influenced by circumstantial situations such as fasting and exercise. These intervals are thought of as normal ranges or limits” (American Association for Clinical Chemistry, 2017).
results for years. Numerous surveys have been done to assess customer comprehension and interpretation and the psychological effects on customers of receiving their genetic results. While usability research has helped to mitigate these concerns, the possibility that customers will not fully comprehend or will misunderstand results remains a worry. For example, the Food and Drug Administration (FDA) decision summaries for 23andMe carrier screening and genetic health risk tests include special controls that describe not only the criteria for user comprehension studies and the required performance on comprehension assessments, but also the specific language that must be included when reporting results to the lay user to convey the likelihood that a particular positive test was in fact positive (FDA, 2015, 2017a,b). These studies find that consumers may overrate their ability to interpret test results, which may help explain why consumers are not likely to consult health professionals for assistance with test interpretation, even when such services are made available (e.g., genetic counseling offered via telephone) (Roberts and Ostergren, 2013). One important conclusion from studies evaluating consumer comprehension of DTC genome testing is that
there may not be a one-size-fits-all approach to communicating genetic test information. Greater tailoring of the presentation of personal genetic testing information based on individual characteristics and type of test result may be needed—especially when results are not delivered in a clinical setting or via a trained health care professional. (Ostergren et al., 2015, p. 9)
In the 1990s, when the link between BRCA and breast and ovarian cancer was being established (prior to the development of a clinical test), a group at the University of Michigan developed a process for returning results to family members involved in a linkage study.4 The process involved pre-counseling education and assessment, during which the risks and benefits of receiving results were explained and informed consent was obtained, and also a post-testing disclosure of results with clinical counseling by a multidisciplinary team (Biesecker et al., 1993).
Similarly, a survey of investigators who planned to return genetic research results found that the investigators frequently used more than one method for return, with the results most commonly returned using a genetic counselor or other trained professional (Heaney et al., 2010). The genetic counseling community is a rich source of expertise and experience in explaining laboratory test results to individuals. These professionals have skills and an understanding of genetic disorders combined with an education in laboratory methods that allows them to communicate effectively about test results, accuracy, interpretation, and
4 “Genetic linkage study: A genetic linkage study is a family-based method used to map a trait to a genomic location by demonstrating co-segregation of the disease with genetic markers of known chromosomal location; locations identified are more likely to contain a causal genetic variant. This technique is particularly useful for the identification of genes that are inherited in a Mendelian fashion” (Nature.com, 2018).
limitations (what the test results do and do not mean) (Doyle et al., 2016; Miller et al., 2014; Patch and Middleton, 2018). In addition, these professionals focus on tailoring the return of complex information so as to respect the cultural, religious, and ethnic beliefs of the participants (Warren, 2011; Weil, 2001). It may be useful to engage genetic counselors once discussions progress to the design and implementation of return-of-results communication plans. Other methods used by investigators to return results included telephone, mail, in-person meetings, referral to a physician, and e-mail. While some investigators were more inclined to return results if they had a medical degree and were able to provide detailed information to the participant in the context of the participant’s personal health care, other investigators found that it was not always necessary to use a care provider to return results and interact with the participant.
A number of studies have emphasized the importance of the relationship between researchers and clinicians. For example, in the Framingham Heart Study, results are given to the treating physician, who interprets the results for the participant.5 Geisinger Health System places genetic results in the electronic health records (EHRs) and notifies the primary care physician, who then discusses the results with the patient.6 Additionally, a study returning results for genome sequences associated with pancreatic cancer emphasized that the ideal scenario for return would be one in which a close relationship existed between researchers and clinicians in order to enable full communication among investigators, clinical teams, and the participant (Johns et al., 2014).
However, this level of face-to-face communication with the input of a physician is not always possible, nor always necessary. Wendy Chung, the Kennedy Family Professor of Pediatrics and Medicine at the Columbia University College of Physicians and Surgeons, has discussed the variety of methods used by her team to return research results in their studies of the genetic basis of human diseases (Wynn et al., 2017).7 The communication methods employed included giving participants the option of receiving results with a genetic counselor present to enable in-depth interpretation and contextualization of the genetic results or providing participants their nucleotide sequence data in a BAM file,8 leaving interpretation up to the participant (perhaps through the use of outside interpretive services the participant could pay for) (Wynn et al., 2017). In providing a BAM file to the
5 Testimony of Joanne Murabito of the Framingham Heart Study at the public meeting of the Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories on September 6, 2017.
6 Testimony of Adam Buchanan of Geisinger Health System at the public meeting of the Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories on September 6, 2017.
7 Testimony of Wendy Chung of Columbia University at the public meeting of the Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories on September 6, 2017.
8 The BAM format is a binary format for storing sequence data (University of Michigan Center for Statistical Genetics, 2013).
participants, Chung said, she was not concerned that they would not understand the results; rather, she was concerned about perpetuating health disparities, because providing only the sequence data could put participants who could not afford outside analysis services at a disadvantage.9 However, Chung did caution against providing participants a VCF10 file containing a list of their genetic variants because the genetics community has not reached consensus on what many variants mean, so providing these files could lead to misunderstanding on the part of the participants.11 Similarly, Jessica Langbaum of the Banner Alzheimer’s Institute described options for returning genetic results, including in-person counseling, telemedicine, and Web modules. She said that the field is still struggling to determine what delivery modalities are available, scalable, and most appropriate and that further work needs to be done.12
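The concern about handing participants raw variant files can be illustrated concretely. The following Python sketch parses the fixed leading fields of a VCF data line (per the VCF 4.x specification: CHROM, POS, ID, REF, ALT); the example line is fabricated for illustration and does not come from any study discussed here. What the parse makes visible is that the file encodes only where a variant is and which alleles are present; it carries no statement of what the variant means for health, which is precisely the interpretive gap described above:

```python
def parse_vcf_line(line: str) -> dict:
    """Parse the first five fixed, tab-separated fields of a VCF data line.

    A VCF record locates a variant (chromosome, position) and names its
    reference and alternate alleles, but contains no clinical interpretation.
    """
    chrom, pos, vid, ref, alt = line.rstrip("\n").split("\t")[:5]
    return {"chrom": chrom, "pos": int(pos), "id": vid, "ref": ref, "alt": alt}


# Fabricated example record (not from any real dataset):
record = parse_vcf_line("17\t43044295\t.\tG\tA\t50\tPASS\t.")
```

Even a correctly parsed record like this one tells a lay reader nothing about disease risk; interpretation requires external annotation and expert judgment, which is why Chung distinguished between releasing raw data and releasing variant lists.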
The various practices discussed above ultimately demonstrate that the return of results involves varying types of data, can be done using a wide range of methods, and can be tailored to the nature of the research being conducted. This heterogeneity represents a significant challenge to the design of return-of-results processes, particularly when participants’ varying preferences must be incorporated. There is both value in adjusting the format or language of communication according to participant preferences and evidence that what participants say they want is not always what will maximize their comprehension. Because the trade-offs may differ across situations, the committee suggests that investigators consider incorporating participant preferences, but it has not specified exactly how that should be done.
Environmental Health and the Return of Individual Research Results
The return of research results from environmental health biomonitoring13 studies is well established both in the literature and by guidelines proposed by expert groups (Brody et al., 2014; Dunagan et al., 2013; Exley et al., 2015; Haines et al., 2011; Judge et al., 2016; Morello-Frosch et al., 2009; Quigley, 2012). The return of results in this field is done because the research participants generally have a significant interest in learning their individual research results for their
9 Testimony of Wendy Chung of Columbia University at the public meeting of the Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories on September 6, 2017.
10 The variant call format (VCF) is a generic format for storing DNA polymorphism data such as single nucleotide polymorphisms, insertions, deletions, and structural variants, together with rich annotations (Danecek et al., 2011, p. 2156).
11 Testimony of Wendy Chung of Columbia University at the public meeting of the Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories on September 6, 2017.
12 Testimony of Jessica B. Langbaum of the Banner Alzheimer’s Institute at the public meeting of the Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories on September 6, 2017.
own use and safety (Brody et al., 2014). A key consideration in determining how best to report results in an environmental monitoring study is whether a known clinical range or action level has been established for the analyte being assessed (a point we reinforce later in this chapter). Where a clinical or preclinical effect is known, this knowledge allows better guidance to be provided to participants, particularly in terms of follow-up. As is the case with exposure to lead or arsenic, acceptable blood levels and public health procedures are defined with the goal of mitigating future exposure (CDC, 2018; WHO, 2017), although it is not uncommon for such guidance to change over time. For example, the Maternal–Infant Research of Environmental Chemicals study used predetermined guidelines to define its return and communication strategy, specifically, whether a result exceeded normal levels and might be associated with a health risk (Haines et al., 2011). However, it is not uncommon for a chemical, pesticide, or other environmental contaminant to lack reference-range information (i.e., to be an analyte that is not well characterized in a population) or to have differing reference ranges or other biases in the underlying datasets that complicate interpretation (NRC, 2006). Therefore, determining the meaning and clinical interpretation of such test results can be a challenge, and “reference ranges do not provide conclusions on safety or risk. Presenting that fact and other limitations is an essential aspect of communicating reference-range information to individuals, the general public, and organizational decision-makers” (NRC, 2006, p. 151).
The return of research results with unknown clinical significance is also practiced in environmental health research. In 1999 the Household Exposure Study, which focused on identifying 89 endocrine-disrupting compounds, grappled with questions of whether the results (both from biomonitoring and environmental samples) should be returned to participants, including those results with unknown clinical meaning. Ultimately, after consideration of ethical guidelines and in consultation with community members, investigators allowed participants to access their individual and household results (Brody et al., 2007, 2014; Dunagan et al., 2013). Similarly, in 2004 the University of Michigan Dioxin Exposure Study, which conducted tests for the presence of 29 dioxins, furans, and polychlorinated biphenyls in participants’ blood, household dust, and residential property soil, also gave participants the option to choose whether they would receive the results from each of their samples (Garabrant et al., 2009). This option was provided for two key reasons. First, regulations were not available for the dioxin content of household dust, nor were medical guidelines available for the interpretation of serum dioxin levels at the time. Second, the researchers were aware that the disclosure of soil levels to property owners could cause those participants financial harm by affecting their property values.
In general, this literature concludes that the unknown should not dissuade investigators from returning results with uncertain meaning because “what little evidence we have suggests that a globally uncertainty-averse public is a myth; responses [to receiving uncertain information] vary widely across the population”
(NRC, 2006, p. 207). This variability does, however, emphasize the need to return information with input from the community or study population, as results can often have community-wide implications or health risks.
As the committee heard in discussions with environmental health researchers, those participating in environmental exposure studies frequently want to know their results because they are the ones carrying the products of these exposures in their bodies.14 For this reason, investigators in this field may feel a greater need to return such results. Such studies also frequently take place in communities where several households are affected and, therefore, the results of the study will likely be translatable to many in the community. To this end, investigators may use community partnerships in the design of communication plans. In a study by Erin Haynes of the University of Cincinnati, community engagement was used to develop the methods of communication used in the return of results (Haynes et al., 2016). Working together, the study team and community members developed easy-to-read graphics and written materials tailored to the reading level of the recipients as well as a comparison to help in the interpretation of their results (i.e., comparing a recipient’s results with those from other studies or for other children). The research team found that including community input in the development of its dissemination plans helped them translate biological data into a format that was usable by the target audience. Haynes et al. concluded that “scientists should include community partners from the target population in the development of research and data disclosure strategies in order to enhance the quality of research, to support the rights of the study participants to know their individual results, and to increase environmental health literacy” (Haynes et al., 2016, p. A26). See Box 5-1 for select engagement and communication practices for the return of research results in environmental health.
CONCLUSION: Current research projects that return research results to individual participants use a variety of practices that have been tailored to reflect differences in study goals, populations, types of results, and other factors.
Applying Principles for Effective Communication to the Return of Research Results
Applying existing principles for clear communication represents a concrete strategy for improving the quality of return-of-results practices. While the body of evidence is still small, these issues have begun to be examined in health communication and environmental health studies. California law, for example, requires that biomonitoring results be made available to participants, and the state has
14 Testimony of Nicholas Newman of the University of Cincinnati at the public session of the Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories on October 24, 2017.
conducted usability testing for content, allowing others to benefit from this work (Biomonitoring California, 2018; Brown-Williams and Morello-Frosch, 2011). More empirical testing is needed to guide stakeholders, but work is already occurring in this arena (as discussed above). IRBs and investigators would benefit from using best practices and reviewing the literature outside their field; biomedical scientists, for example, can draw on the existing guidance in environmental monitoring when developing their return-of-results communication plans. IRBs need not rely on intuition when evidence-based guidance exists and participant and community input can inform plans. The key principles in communication that have been identified include (1) taking audience characteristics and needs into consideration and (2) having a clearly defined communication objective (i.e., what cognitive, emotional, motivational, or behavioral outcomes should ideally result from the communication) (Haga et al., 2014; Nelson et al., 2009; Schiavo, 2014).
Consideration of audience characteristics and needs includes taking into account how much background knowledge a research participant has (i.e., what he or she knows about a particular disease or condition, about research, etc.) and what kinds of experiences the participant has had in the past. Research studies need to approach all participants and every community with respect and cultural humility. Doing so supports the development of trust between researchers and participants, and such trust is especially important given the known history of exploitation of racial and ethnic minorities and individuals with intellectual disabilities (Carlson, 2013; Corbie-Smith et al., 2002; Yancey et al., 2006). Because different stakeholders will have varied perspectives and preferences, those differences need to be considered and weighed. It may be necessary to design separate return-of-results communication plans for different stakeholder groups, since something designed for one audience is likely to be non-optimal for other audiences. As a result, a one-size-fits-all approach will rarely be effective in results communication.
Research studies are designed to produce generalizable information applicable to a broad population, and the results have meaning for multiple users, from the participants who contributed to the study to the investigators who ran it. An individual result, however, can be interpreted as a characteristic of that participant rather than as an aggregate finding about a broad population, which may make it relevant or meaningful to family members, a physical community, or a demographic group and which has implications for the communication approach. For example, the discovery of a genetic variant in a participant provides information about that individual participant's future disease risk but, if the variant is heritable, the discovery may also offer information about family members' risks and lead to generalizations about a group's risk. Similarly, an environmental exposure result may be relevant not only to the participant, but potentially to others who share that environment (e.g., family members, neighbors, coworkers).
Using layered presentations of information is a key communication approach for meeting different needs. For example, many communications should start with a clear and concise summary of the primary points that is designed to be
maximally understandable to all users. However, providing access to more detailed information (which may be more difficult to understand) is often beneficial for users with greater personal interest, literacy, or numeracy skills. When participants have different informational baselines and literacy levels, research teams will need to consider how much background information to provide to each audience. For some people, “less is more,” while for others, “more is more” (Arcia et al., 2016).
When returning results to participants, investigators need a clearly defined communication objective and should consider what specific change in knowledge, beliefs, motivation, or behavior is intended. The objectives should reflect the participants' needs more than the investigators' needs, and the communication should be focused on just one or a few objectives. As a general rule, the more one attempts to convey in a communication, the less effective that communication is likely to be (Heath and Heath, 2007).
Good design practices can significantly improve people's ability to overcome communication barriers. For example, the CDC's Clear Communication Index identifies key characteristics that aid people's understanding of information. These include the use of materials translated into the recipient's primary language, the use of plain language with minimal jargon, the use of good visual design principles, and the use of evidence-based visual displays of data (CDC, 2016; Kosslyn, 2006; Plain Language Action and Information Network, 2018; Tufte, 2001). These practices represent minimum standards that all results communications (including clinical results) should achieve. As such, they should be included as part of training initiatives for investigators and clinicians as the research enterprise works to build the necessary expertise for effective return of results.
In returning research results to participants, investigators should set participants' expectations up front (Tarrant et al., 2015). This will require investigators to plan early in the study process for when and how results will be returned, both so that participant preferences can be incorporated into the study design and so that participant expectations for the return of results can be addressed during the initial consent process. Addressing expectations during the initial consent process not only helps build trust between researchers and participants but also gives participants the information they need to decide whether to participate in the study.
Consent is more than just telling a participant what he or she should expect and ensuring participant comprehension. Consent design also prepares investigators for the role of administering consent; this requires investigators to establish a strategy for how consent will be administered (including the use of educational materials) (Nusbaum et al., 2017). Consent may be a one-time event or an ongoing process, particularly if results will be returned at intermittent times over the course of the study. In particular, the traditional consent occurring only
at the time of enrollment may not always be sufficient (Appelbaum et al., 2014). There are several key issues related to the return of research results that investigators need to convey clearly to participants, regardless of the model of consent used. These include (1) what will be returned to research participants and how it will be returned, (2) the appropriate reference information and communication formats to enable understanding, and (3) the benefits and harms that may occur.
First, during the consent process, investigators will need to be clear about what individual results will be offered to participants or what individual research results participants can access upon request;15 when participants can expect results; the conditions under which researchers will alert participants of the availability of results; and how and when results will be communicated to participants (Fernandez et al., 2012; Simon et al., 2011). The Multi-Regional Clinical Trials collaborative has developed a toolkit that provides guidance for informed consent documents and processes for the return of general as well as genomic research results (MRCT Center, 2017b). In planning the consent process, investigators will also need to consider whether participants have a right to request and receive their results under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) access right (i.e., when research laboratories operate as part of a HIPAA-covered entity, discussed further in Chapter 6). It is not clear how many participants (or patients) are aware of their HIPAA access rights, but researchers and institutions have an obligation to disclose participants' right to access research results under HIPAA, when applicable. Information about access rights should not be buried in the consent form; rather, this pathway for accessing results, when applicable, should be made clear during the consent process. The consent should also explain how results will be returned in response to a request under HIPAA. The HIPAA access right may grant access to raw data, but it does not require that participants receive a tailored message as might be expected in a clinical care setting. Still, while HIPAA does not require the investigator to provide interpretation, whenever results are returned the goal should be to provide them in a way that is useful.
Second, due to the variability in research results and the frequent lack of clear reference information, participants may need help in determining whether they want the results and, if so, what the results might look like. To further shape participant expectations and guide decision making during the consent process regarding which results, if any, they would like to receive, it may be helpful to provide participants with examples of what results may look like (NHGRI, 2018) and
15 As discussed in Chapter 3, what research results will be offered will depend on the analytical and clinical validity, the value to the participant, and how feasible it is for the investigators to return the results. These considerations will be weighed in determining what to return and the timing for return. Timing will be especially relevant in longitudinal studies or trials where information may need to be withheld to support study design objectives. Additionally, if blinding is required in a clinical trial, results may not be able to be returned as they are generated because it may jeopardize the scientific integrity of the study (MRCT Center, 2017a).
how they may experience possible outcomes of their choice (Hibbard and Peters, 2003). Concrete examples can help people consider how they would feel or what they would do based on specific findings (versus whether they want to know “their results” in general) (Kim et al., 2017), which may be particularly helpful when addressing the risks and benefits of receiving results. While descriptions of possible outcomes are important, case studies or hypothetical narratives may also be useful to enable participants to anticipate not just what the possible results might be but also their potential implications (medical, emotional, or otherwise) (Shaffer et al., 2013). The examples that investigators provide of the types of results that might be returned would not be based on the participants’ data, but rather would be derived from previous research that used similar assays; this will give participants a sense of what the information will look like upon return. Having participants engage in a brief values clarification exercise may help them determine what they care about and hence whether the receipt of different types of test results might confer benefits or risks to them (Fagerlin et al., 2013; Holly et al., 2016).
Third, receiving either clinical or research test results can produce both benefits and harms, and it is critical to address these during the consent process. The benefits of receiving results may include the identification of treatable disorders, enhanced life planning, or increased knowledge about oneself. Certain types of results may have immediate practical benefits, and participants should be informed of the conditions under which researchers will alert them of the availability of urgent results. However, framing the possible value of information in a purely positive manner (overly focusing on benefits in relation to risks) is ethically inappropriate. The risks associated with the return of results can take the form of participant anxieties and fears or the misuse of research results in a medical context, leading to inappropriate medical or personal actions. In addition, results that are not actionable may cause emotional or other sorts of distress (Zikmund-Fisher, 2017). Investigators will need to consider both the benefits and the risks prospectively, but in some circumstances they may not know in advance which tests will be done; therefore, they may not always be able to offer participants a great deal of specificity when describing the potential benefits and risks (Appelbaum et al., 2014). To adequately address the potential harms from the return of research results, investigators will need to acknowledge the uncertainty in research and the possibility that non-useful information will be generated. Furthermore, in addition to sometimes lacking usefulness, research results may also sometimes be incorrect. For example, a research test may generate a false-positive16 or false-negative17 result, either of which can cause emotional, physical, or financial harm. Alternatively, the understanding of the science behind the result may change, thereby affecting
the meaning of the result for the participant. The emotional or psychological harms that may be associated with return should be discussed with participants during the consent process and again later in the study process, when investigators are actively returning results.
The design of the consent process should take into account that participants' desires and willingness to take on risk may change over time and that the meaning of the results may change over time. As a result, participants ought to be given the opportunity to decide whether they want to receive their research results when those results eventually become available. Even if a participant consented to receiving results at the start of a study, he or she should have the opportunity to refuse (or accept) results once they are available. To accomplish this, investigators may need to consider, when planning their studies, models of flexible consent that include the return of research results. One option beyond a one-time consent is a staged model of consent (Bunnik et al., 2013). Staged consent means that investigators "obtain consent in stages, with brief mention of [incidental findings] at the time of initial consent, but with more detailed consent obtained if and when reportable results are found" (Appelbaum et al., 2014, p. 6). The flexibility of staged consent models must, however, be weighed against the fact that participants who are re-contacted for further consent may infer (accurately or inaccurately) the type of result that has been found (i.e., positive or negative, good or bad) simply because of the new contact.
Models of Consent
Current consent processes are not standardized and are frequently inadequate to ensure understanding on the part of all participants. In fact, some research suggests that clinicians rarely meet even the minimum standards for disclosure necessary for the purposes of obtaining true consent (Hall et al., 2012). Unfortunately, many investigators do not have appropriate training in consent practices. Furthermore, they can (much like participants) be susceptible to therapeutic misconception and may, as a result, convey biased messages to participants (Larson et al., 2009).
In selecting a consent model and administering consent, investigators may want to consider how technology can facilitate the consent process. For example, technology can be a particularly helpful way to incorporate the principles of health literacy (as discussed previously). Health literacy has a strong impact on what individuals understand and how they use information related to health care and decision making. As such, investigators would benefit from capitalizing on best practices in health-literate informed consent (see Box 5-2). "The challenge is finding practical, non-onerous ways to respect persons' choices that have minimal negative effects on the science. Information technology may provide new opportunities to implement informed consent with minimal intrusion" (Grady et al., 2017, p. 857). For example, technology-assisted consent, such as Apple's ResearchKit framework for mobile devices, can take a layered approach in which the formal consent document is augmented by a visual, animated sequence that helps users better understand the consent contents (ResearchKit, 2017).
Additionally, video-aided consent, like that used in the ADAPTABLE trial, can contribute to participant understanding (ADAPTABLE Aspirin Study, 2018; Grady et al., 2017). Tele-consent is another method, one that enables researchers to hold a remote video conference with prospective research participants. With tele-consent, investigators create a display that interactively guides participants in real time through a consent form, which the participants then electronically sign (Welch et al., 2016). However, while electronic methods of consent may offer advantages for the return of research results, including convenience, varied approaches (e.g., multimedia interactive formats) for increasing understanding of the information, and the possibility of structured assessments of that understanding, there are also a number of challenges that need to be considered
(Welch et al., 2016). These challenges include the fact that many people do not read terms of agreement on computers and mobile devices, that there is a dearth of evidence regarding the advantages and disadvantages of electronic methods for promoting understanding of information, and that, because there are no face-to-face visits, verifying the identity of the individual giving consent may be difficult (Grady et al., 2017; NPR, 2014).
In addition to meeting the communication needs of participants through health-literate consent, investigators and IRBs will need to consider the trade-offs among consent models and formats, whether traditional paper or electronic. See Table 5-1 for an example of how the advantages and disadvantages of consent models were assessed for the return of secondary findings (also referred to as "incidental findings"). The committee considers secondary findings to be results that the investigator can anticipate, and considerations similar to those presented in the table apply to any anticipated result, whether or not it is the primary aim of the study or test. Fully assessing the models of consent and closing gaps in communication during consent, particularly with the added considerations that accompany returning results, will require training for investigators and clinicians. Such training will take concerted effort, but it has the potential to enhance benefits, minimize harm, and build trust in the research enterprise.
CONCLUSION: Details regarding the return of individual research results to participants are currently only addressed during the consent processes on an ad hoc basis, creating inconsistency across studies and institutions and inadequately setting participant expectations.
CONCLUSION: How the return of individual research results is, or is not, addressed in the consent process affects participant expectations.
CONCLUSION: The heterogeneity of research study designs and populations means that different consent processes will be appropriate in different situations, but regardless of the type of consent process, clear communication appropriate to varying levels of health literacy is essential.
| MODEL NAME | POTENTIAL ADVANTAGES | POTENTIAL DISADVANTAGES |
| 1. Traditional Consent | | |
| 2. Staged Consent | | |
| 3. Mandatory Return | | |
SOURCE: Adapted from Appelbaum et al., 2014.
Once test results have been generated and the decision has been made to return them to research participants, investigators and institutions need to ensure that the results are delivered in an appropriate manner that achieves the communication goals and meets participants' needs. Optimal communication methods need to be determined on a study-by-study basis, both because the goals of each study are different and because the research team will need to take into account context-dependent considerations, such as the type of research results (and their associated uncertainty) and the characteristics of the participants. As discussed above, participants with low health literacy, low numeracy, low graph literacy, or limited English proficiency are likely to have more difficulty interpreting the results and understanding what kinds of actions may be appropriate in response (Perzynski et al., 2013). Consequently, the processes for returning individual research results must either (1) use a "universal precautions" approach (Brega et al., 2015), which assumes that all research participants may have difficulty comprehending the information and promotes communication in ways that anyone can understand, or (2) include tailored approaches to meet the information needs of the research participants who wish to have more detailed information. (Box 5-3 highlights FDA experience with communication.)
Facilitating Understanding of the Meaning and Limitations of Results Through Reference Information
Having access to information is not the same as being able to understand and use that information. In particular, studies in both the consumer product marketing and medical decision-making fields have shown that people find it difficult to interpret unfamiliar data in the absence of relevant reference standards (Hsee, 1996; Zikmund-Fisher et al., 2004). As a result, hard-to-evaluate information is often ignored or not used in decision making. Many recipients of clinical test results are unable to interpret them because of a lack of familiarity with test characteristics or the possible range of test outcomes. Furthermore, even when recipients know what the result is, they may not understand its practical meaning (in terms of whether concern or action is appropriate) (O’Kane et al., 2015).
In sharing individual research results with participants (especially when results are offered as part of a return-of-results plan), research teams need to communicate not just what research or test was done, but why and how it was done. Laboratory tests used in research studies are especially likely to produce hard-to-evaluate data because the tests may be novel, their analytic validity may be unknown or still being established, or their clinical validity may be unknown (see Chapter 3 for more details). To improve the meaningfulness of research test results, especially those that are difficult to understand or that are generated from uncommon tests, research teams therefore need to provide clear cues regarding (1) how much participants should trust the result and (2) what the result means or what is not known about its meaning. To make it easier for participants to understand results, investigators need to pay attention to what reference information (e.g., standard reference ranges, comparative risks, or categorization information) is needed or appropriate for each type of result communication. The information provided with the result may shape recipients' understanding and actions even more than the result itself. In some cases, results may need to be accompanied by multiple types of reference information (when available) to enable participant understanding.
To be clear, providing reference information for a result is not the same as providing personalized interpretation, such as clinical guidance. Clinical guidance requires integrating a research test result into the participant’s individual circumstances (e.g., known medical conditions, family history). While such integration is sometimes expected in certain study contexts, investigators may not be clinicians or may not be familiar with the specific health of the participant, in which case providing clinical guidance would not be appropriate. Additionally, clinical guidance may be labor intensive, requiring investigators to tailor the research results and reference information to each individual participant’s circumstances. Reference information, however, is a function of the test and the circumstances of the study but not of the individual. Consequently, providing reference information is scalable: investigators can more easily return results to a large number of participants because, in general, the reference information is applicable to all of them or to all similar participants receiving the same test. Emphasizing the identification and communication of appropriate reference standards is hence a cost-effective way of improving return-of-results communications.
Relevant reference information may be well established and standardized or may be unknown. For example, environmental contaminants such as radon and arsenic have established action thresholds or other benchmarks set by the Environmental Protection Agency. Similarly, standard clinical tests have established reference ranges (often interpreted as the range of normal values) and sometimes even pre-defined critical values (i.e., values high or low enough that a laboratory is obligated to immediately notify treating clinicians about the result to minimize associated risks; an example would be an elevated glucose level). In a genetics context, the impact of having a known BRCA1 mutation on lifetime breast cancer and
ovarian cancer risk is relatively well established when family history is also known (Paul and Paul, 2014). In other cases, reference information exists but no standard guidance is available; that is, the available reference information cannot be generalized to a population. Alternatively, reference information may be poorly understood or completely unknown in research contexts. For example, safe or dangerous levels for a particular toxin or novel biomarker may not have been established. Dose–response relationships may be unknown or difficult to estimate for particular populations. Even relative standards, such as percentiles compared to reference distributions, may be unavailable or incorrectly used if no previous studies exist or if previous studies involved different populations, such as different racial or ethnic groups (Holland and Palaniappan, 2012). Genetic variants often have no clear significance or correlations with health outcomes, and in many cases the prevalence of the variants in different populations is unknown (Caswell-Jin et al., 2017; Saulsberry and Terry, 2013).
The more that is unknown about reference standards for a particular result, the more that the participant and either the investigator or the individual performing the communication should have a two-way communication to clarify “what this result means for me.” Clarification of meaning via dialogue is important not merely to improve participant understanding, but also to prevent an inaccurate interpretation or over-interpretation of results. When reference standards for a result are not known, investigators should weigh the benefits and risks of return and consider whether a return of aggregate results only would be more appropriate than a return of individual results. Regardless of whether aggregate or individual results are returned, the fact that reference information does not exist should be explicitly communicated to participants.
When developing a return-of-results plan, one explicit step should be the identification of appropriate reference information to be provided to participants. The reference information varies by the nature and type of results generated and by how informative the result is to the participant. Box 5-4 summarizes the kinds of reference information that may be appropriate to provide to participants, given the types of results that laboratories generate. Laboratory results are of two distinct types—continuous (e.g., biomarker levels that may vary across a continuous range of possible values) or binary (e.g., presence/absence of genetic variant or marker). In the clinical laboratory, these types of results are referred to as quantitative and qualitative results.
Continuous or Quantitative Results
When communicating continuous results, providing relative standards to which an individual result can be compared (e.g., a second data point for comparison or an observed distribution) can provide a certain degree of meaning (i.e., that the current result is higher or lower). However, relative standards may not sufficiently convey whether action should be taken, say, whether a participant
should consult a physician. If, for example, a study has measured blood levels of a specific pesticide, then returning the individual result along with the range of values obtained for the other study participants will not indicate whether an individual is at risk of harm from exposure to that pesticide. Nor does it convey whether the pesticide is known to pose a health risk and, if so, at what dose. For instance, if an entire community has been exposed, an exposure level that is average relative to other community members may nonetheless represent a significant risk.
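To make the limitation of relative standards concrete, the following sketch (with hypothetical pesticide levels invented for illustration, not drawn from any real study) computes a participant's percentile rank within a study cohort; the rank locates the result within the study distribution but, by itself, says nothing about absolute harm.

```python
# Percentile rank of one participant's result within the study's observed
# distribution -- a purely relative standard. All values are hypothetical.

def percentile_rank(value, cohort_values):
    """Percentage of cohort results at or below this participant's result."""
    at_or_below = sum(1 for v in cohort_values if v <= value)
    return 100.0 * at_or_below / len(cohort_values)

# Hypothetical blood pesticide levels (ng/mL) in a uniformly exposed community
cohort = [12.0, 14.5, 15.0, 15.2, 16.1, 17.0, 18.3, 19.9, 21.4, 25.0]
participant_level = 15.2

rank = percentile_rank(participant_level, cohort)
print(f"Result is at the {rank:.0f}th percentile of the study cohort.")
# A mid-range rank says nothing about absolute harm: if the whole community is
# exposed above a health-based threshold, this result may still signal risk.
```

A participant told only "you are near the middle of the study group" has no way to know whether the entire group is above or below a level of health concern.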
Because relative standards provide only limited and potentially misleading meaning, it is generally preferable to provide absolute reference information (just as absolute risk communication is generally preferred over relative risk communication), though the committee acknowledges that this will not always be possible (Dunagan et al., 2013; Trevena et al., 2013). The absolute reference standard commonly provided with clinical test results is a standard or normal range, which in principle allows recipients to determine whether their results are normal when compared with the general population.18 In practice, however, many people with lower literacy and numeracy skills have significant difficulty determining whether a result is inside or outside of a standard range (Zikmund-Fisher et al., 2014). Furthermore, in many research contexts the substance being measured either should not normally be present or else normal ranges are unknown. The absolute meaning of continuous results can be communicated by binning results into easy-to-evaluate categories (e.g., high, moderate, or low risk) or noting whether a result falls within or outside of a target range; by mapping a result onto a dose–response curve; or by reporting whether the result falls above or below a harm, alert, or action threshold (Peters et al., 2009). For the latter method, marking the individual result and the harm threshold on a visual display of the range can be an intuitive way to convey this information (Zikmund-Fisher et al., 2018). Care should be taken, however, to ensure that important variations in meaning are not obscured by the categorization process and that people do not interpret below-threshold results or those categorized as "low risk" to mean zero risk of harm.
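The threshold-based categorization described above can be illustrated with a brief sketch; the target range and action threshold here are hypothetical placeholders, since real action levels would come from regulatory or clinical sources.

```python
# Mapping a continuous result onto easy-to-evaluate categories relative to a
# target range and an action threshold. The ranges here are hypothetical
# placeholders; actual action levels come from regulatory or clinical sources.

def categorize(result, target_low, target_high, action_threshold):
    """Return a plain-language category; labels should not imply zero risk."""
    if result >= action_threshold:
        return "above action threshold"
    if target_low <= result <= target_high:
        return "within target range"
    return "outside target range but below action threshold"

# Hypothetical biomarker: target range 2.0-8.0 units, action threshold 15.0
for level in (5.0, 11.0, 20.0):
    print(f"{level:>5.1f} units: {categorize(level, 2.0, 8.0, 15.0)}")
```

Note that the lowest category is deliberately worded as "below action threshold" rather than "no risk," consistent with the caution above about below-threshold results being misread as zero risk.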
Another critical challenge that arises when communicating continuous results involves conveying the degree of imprecision in an estimate and the corresponding uncertainties related to interpretation. Test results that are presented as point estimates without measures of variability and reliability fail to convey the uncertainty of the results (Pocock and Hughes, 1990). Therefore, people tend to assume that the value they receive from a test is both precise and accurate,19
18 “Typically, reference values or reference intervals are established for each laboratory test to delineate the range of values that would usually be encountered in a ‘healthy’ population” (Boyd, 2010, p. 84).
19 “A test method is said to be accurate when it measures what it is supposed to measure. This means it is able to measure the true amount or concentration of a substance in a sample. . . . A test method is said to be precise when repeated determinations (analyses) on the same sample give similar results. When a test method is precise, the amount of random variation is small. The test method can be trusted because results are reliably reproduced time after time. . . . A test method can be precise (reli
when in fact the true level may be higher or lower. The degree of uncertainty directly relates to the likelihood of misinterpretation of the meaning of the result. For example, if the value of a result is close to some reference value, people may overinterpret what is actually an unreliable difference because of the inherent error in the estimated value.
The limits of accuracy for point estimates can be communicated through confidence intervals, error bars, or standard errors. Even when such measures are provided, however, people often do not understand their meaning (Dieckmann et al., 2012). People tend to interpret uncertainty in such a way as to be favorable to their preferences or worldviews—the so-called “motivated evaluation” (Dieckmann et al., 2017). The use of plain language can help research participants better understand the limitations related to the validity of the test result and the implications in terms of whether the data should be relied on for decision making. For example, while many people may not be familiar with the term “95 percent confidence intervals,” the extent of uncertainty can be conveyed by discussing minimum and maximum levels or best and worst case scenarios (i.e., “the value might be as high as X or as low as Y”). However, including a description of capture probability (e.g., a 90 percent confidence interval) increases the likelihood that people interpret the distribution of values within that range as more normally distributed rather than uniformly distributed (Dieckmann et al., 2015). Further research is clearly needed to determine optimal language for expressing value uncertainty in different situations.
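The plain-language framing suggested above ("the value might be as high as X or as low as Y") can be generated mechanically from a point estimate and its standard error. The sketch below assumes an approximately normal estimate, so that 1.96 standard errors spans roughly a 95 percent interval; the numbers are hypothetical.

```python
# Rendering a point estimate and its standard error as the plain-language
# "as high as X or as low as Y" framing. Assumes an approximately normal
# estimate (1.96 SE ~ a 95 percent interval). All numbers are hypothetical.

def plain_language_interval(estimate, standard_error, z=1.96):
    low = estimate - z * standard_error
    high = estimate + z * standard_error
    return (f"Your measured level was about {estimate:.1f}, but it might be "
            f"as high as {high:.1f} or as low as {low:.1f}.")

print(plain_language_interval(12.0, 1.5))
```

Whether this phrasing, a best/worst-case framing, or an explicit capture probability communicates the uncertainty most faithfully is exactly the open research question noted above.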
Binary or Qualitative Results
Despite the seemingly simple nature of binary results (i.e., the characteristic is either present or not, and the test result is either accurate or not), meaningful communication of this type of test result remains challenging. The prevalence of the characteristic or finding, either in the study population or in an external reference population, can be reported with the result. Prevalence rates and pretest probability information are of high value in determining the likelihood that a test result represents a true positive rather than a false positive, or a true negative rather than a false negative. In many research circumstances, the prevalence of the target characteristic may be uncertain, as may be the sensitivity and specificity of the test, all of which are relevant to an estimate of positive and negative predictive values (as discussed in Chapter 3). Prior knowledge, or lack of knowledge, of prevalence and of test sensitivity and specificity will be relevant to a decision about whether results should be returned and to what degree confirmatory testing is recommended.
“A test can be precise (reliably reproducible in what it measures) without being accurate (actually measuring what it is supposed to measure), or vice versa” (Lab Tests Online–AU, 2018).
In other cases, the question is not so much whether a result is accurate as whether it is meaningful. An example would be a test that identifies a genetic variant. In such cases, prevalence rates have limited value in guiding recipient perceptions or actions (Zikmund-Fisher, 2013), especially once repeat testing has confirmed a finding. For example, how common or uncommon a particular genetic variant is in the general population should not affect what an individual might want to do about a valid and true result. Recipients should not use prevalence rates as a proxy for how serious a finding is or whether action is needed, since common characteristics may sometimes have limited risk impact and rare conditions can sometimes have an enormous impact on an individual’s risk. For binary results that are indicators of a disease (or other condition), penetrance information (i.e., information about the extent to which a particular gene is expressed in those carrying it) and relative risk statistics (i.e., information about the risk of the disease in people with the characteristic relative to the risk in those without it) are more useful than prevalence rates for helping recipients understand the meaning of their results. Furthermore, guidance documents for risk communication recommend communicating absolute risk reduction or risk increase whenever possible (Trevena et al., 2013; Zikmund-Fisher, 2013).
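The recommendation to communicate absolute rather than only relative risk can be made concrete with a small sketch (all figures hypothetical): the same relative risk implies very different absolute changes depending on the baseline risk.

```python
# Illustrative only: why "twice the risk" can mean very different things.

def risk_summary(baseline_risk, relative_risk):
    """Return (risk in exposed group, absolute risk increase)."""
    exposed_risk = baseline_risk * relative_risk
    absolute_increase = exposed_risk - baseline_risk
    return exposed_risk, absolute_increase

# A relative risk of 2.0 applied to a common vs. a rare baseline:
common = risk_summary(0.10, 2.0)    # 10% -> 20%: +10 percentage points
rare = risk_summary(0.0001, 2.0)    # 0.01% -> 0.02%: +0.01 percentage points
print(common, rare)
```

Reporting only “your risk is doubled” hides this distinction; stating the absolute change lets a recipient judge whether the finding warrants action.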
The meaning of binary results is most clear when they are classified into a specific action category (e.g., someone with a particular biomarker should consider a specific intervention) or at least a risk category (e.g., labeling as normal), although care must be taken to avoid misinterpretation of such labels (Marteau et al., 2001). However, classifying binary results into a specific action category is not always possible, particularly in the research context, both because disease is often multifactorial and because the scientific understanding of how binary risk factors (e.g., genetic markers) are associated with outcomes is often highly incomplete (Coulehan, 1979). For example, it may be difficult to communicate to research participants how much or how little effect a particular genetic marker may have on the incidence or severity of a condition—and, accordingly, whether an intervention or other action is appropriate. In such cases, as discussed below, the areas of uncertainty should be explicitly communicated to the recipient.
With binary results, the primary concern when trying to communicate issues of reliability is false certainty—that is, people often fail to consider the chance that the finding is wrong. The idea that a test may produce false-positive or false-negative results can be hard to understand. Consequently, recipients are likely to act on the assumption that the result they have received is accurate (Garcia-Retamero and Hoffrage, 2013; Kelman et al., 2016). Explicit statements that emphasize the potential for inaccuracies of all types (e.g., sample swaps, false positives, or false negatives) can help to offset this tendency, though their effectiveness is likely to be imperfect. Note that once a result is known, it is appropriate to communicate in plain language only the false-negative or the false-positive rate, whichever is relevant, since the other rate does not affect that particular participant and discussing it is likely to add confusion. However, concrete visual presentations of risk (e.g., icon array displays) may be needed to support a participant’s understanding of how likely it is that the returned binary result is in fact the opposite result (Garcia-Retamero and Hoffrage, 2013; Trevena et al., 2013).
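The icon arrays cited above can be approximated even in plain text. This sketch is a simplified stand-in for graphical tools such as the University of Michigan’s Icon Array; it renders “N out of 100” as a grid so a recipient can see, rather than compute, how often a result of a given kind is wrong. The specific 3-in-100 figure is invented for illustration.

```python
# Text-based stand-in for a graphical icon array: a 10x10 grid of icons.

def text_icon_array(affected, total=100, per_row=10, hit="X", miss="."):
    """Render 'affected out of total' as rows of icons."""
    icons = [hit] * affected + [miss] * (total - affected)
    rows = [" ".join(icons[i:i + per_row]) for i in range(0, total, per_row)]
    return "\n".join(rows)

# Hypothetical: 3 in 100 chance that a returned positive is a false positive
print(text_icon_array(3))
```

Real deployments would use tested graphical arrays, but the principle is the same: a part-to-whole display anchors the frequency in a concrete population rather than an abstract percentage.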
CONCLUSION: The meaning of a test result is determined by what the result is compared against. The ability of individual participants to understand and make use of research results depends on the provision of relevant reference information that clarifies what is known or unknown about the meaning of the specific result. For some individuals with limited health literacy and numeracy, a reference range alone may not be sufficient.
CONCLUSION: The state of scientific knowledge about a particular test determines the types of reference information that are available and can be provided to research participants when returning individual research results. When the context for a test result is well established and standardized, there is a strong presumption that this reference information will be provided. When the context is unknown or uncertain, however, being clear about how little is known is essential to participant understanding.
Communicating Key Takeaways, Including the Actionability of Individual Research Results
When returning results to participants, a single, clear takeaway message is important. Being given information without knowing whether or how it should be acted upon can be disconcerting and potentially emotionally harmful to participants (Shani et al., 2008). Consistent with the ethical principles of beneficence and non-maleficence, research teams have some obligation to minimize and mitigate such potential harms. When results are being offered to participants, the most straightforward way to provide a single, clear takeaway message is a concise statement of why the results are being returned together with a clear summary of their meaning based on the research team’s knowledge of the test at the time of return. Given that scientific knowledge is constantly evolving, especially in the understanding of research results, investigators should state the date on which the message was generated and how likely it is that the interpretation of the result might change in the future. In addition, given the evidence discussed above of substantial language and literacy barriers to comprehension, the importance of providing action steps (if appropriate) clearly and in plain language cannot be overstated.
The takeaway message can vary depending on the state of knowledge regarding the test result and its implications. When the meaning is uncertain (i.e., the investigators do not know how to interpret the result), this uncertainty and the fact that no action can be recommended are themselves the takeaway message. Such a message of no recommended action needs to be stated explicitly to prevent people from making inaccurate assumptions. In some cases, the meaning of the result may be known but imply no action. An example of such a result would be the return of “normal” results from clinical testing that was conducted in the course of a research study. However, determining the appropriate takeaway message is not always so straightforward, such as when genetic testing identifies a variant of unknown significance. A communication with no recommended action can be particularly difficult because people may not believe that researchers would return a result without wanting the participant to take any further action; there is also the potential “emotional burden, concern, or worry of knowing that there is nothing [the participant] could do about it” (Hyams et al., 2016, p. 5). Providing such information can have both positive effects (e.g., drawing a participant’s attention to a particular disease risk) and negative effects (e.g., inducing anxiety or motivating the pursuit of unnecessary screening tests). In other cases, the result may indicate the need for possible or even highly encouraged action.
When participants will need to carefully consider a potential action (e.g., because of trade-offs), the more that a communication can identify both why participants should consider actions and why they might not want to do so, the more useful the communication will be. In addition, if a result implies an action that is highly encouraged, acknowledging the potential barriers or challenges to undertaking these actions is beneficial by helping to frame realistic expectations and prepare participants to overcome those barriers, when appropriate.
Guiding principles for the design of return-of-results procedures parallel the best practices for consent procedures and support the importance of providing key takeaway messages. Best practices need not be developed at the level of the individual investigator alone; changes in community, federal, or industry practices may be needed to develop better guidance for how the research community should approach these situations. To address the fact that research participants often struggle to make sense of consent documents, the 2018 revisions to the Common Rule mandate that consent documents begin with a “key information” section containing a “concise and focused” description of the research that summarizes the project information most important to potential subjects in deciding whether to participate (Federal Register, 2017). Similar remedies (i.e., requiring concise and focused descriptions of the findings and their implications) should be applied in the context of returning research results.
CONCLUSION: Individual research results need to be communicated with a clear takeaway message that includes a statement of actionability (or lack thereof).
Communicating Caveats and Uncertainties
Previous chapters discussed multiple reasons why research results often have substantial variance or potential for error, which limits their interpretation and usability for an individual participant. Even after accounting for the quality of laboratory procedures, research results may vary in their level of certainty and their potential to guide personal action. For example, a cholesterol level obtained in a research study is likely to provide a research participant with readily interpreted information about cardiac risk (assuming that appropriate laboratory quality measures were in place), while other research results may reflect evolving knowledge that carries substantial uncertainty. A study might, for instance, discover an association between a biomarker and a particular health risk, with an unknown effect size and no information to guide actions to reduce risk.
Most research participants, however, are unlikely to think about these threats to validity and interpretability. Hence, research results are prone to misinterpretation (e.g., confusing a research result with an established clinical test result) or misuse. As a result, it may be necessary to include a formal caveat or warning statement in return-of-results communications. Depending on the context, such statements may address
- uncertain standards,
- uncertain interpretation,
- an elevated potential for error in the result, and
- the fact that the result may not be the participant’s result (e.g., in the case of a sample swap or mislabeling).
For example, appropriate disclosure to the participant might include the caveats that the level of risk is still unknown and that no actions to reduce risk are known. Researchers might also include information about plans for future research to study these questions.
Investigators are not accustomed to identifying the full list of threats to validity, uncertainties, and caveats applicable to their studies. In fact, incentives in both the funding application process and the research publication process minimize attention to such threats. Consequently, investigators need both guidance (e.g., a list of key questions that should be asked) and incentives (e.g., explicit consideration in IRB review) to carry out this task. The Multi-Regional Clinical Trials Center toolkit
includes a checklist to guide IRBs and other ethics committees in reviewing plans for the return of research results (MRCT Center, 2017b).
Because people tend to assume that any test results they receive are both precise and accurate, providing information that conveys the uncertainty of a result is critical, particularly since the potential for error increases in research contexts. Furthermore, given that understanding and adjusting for uncertainty is psychologically difficult, it is reasonable to believe that, on average, the potential for over-interpretation of results and under-consideration of uncertainties is likely to be greater in practice than the reverse. The committee is already advocating for the return of results in novel circumstances, including (under certain conditions) when reliability is lower than it is for clinical results. As a result, the committee believes it is prudent to err on the side of promoting recipient attention to caveats and uncertainties. An outcome in which participants feel a need to confirm important results before acting on them would be appropriate in many situations. When there is a significant risk of therapeutic misconception,21 a disclaimer distinguishing a research result from a clinical result is particularly critical.
Since clarity and concreteness are critical, caveats, cautions, or warnings that accompany the return of results need to be written in plain language. For example, many users will not understand or react to a statement that a test has “low validity.” Instead, statements should describe specific potential risks in simple terms, e.g., by making statements such as “Your result might be wrong,” “Your true results may be higher or lower than what is shown,” and “It is even possible that this result may not be yours.” Similarly, uncertainties about the meaning of the result could be stated as plainly as “We do not know what your results mean” and “We cannot recommend any actions for you to take.”
As caveats and warning statements are developed and used for the first time, they will need to be reviewed by the appropriate individuals (or groups) and tested for understanding and efficacy. Engagement with target populations is essential both for identifying which caveats are most critical to communicate and for determining the optimal methods for communication. Research has demonstrated that warnings can be used successfully to communicate benefits and risks, but only when they are specifically designed for the target audience (Andrews, 2011). Work in the environmental exposure field offers some useful models and templates. The Association of Public Health Laboratories and Biomonitoring California offer models for communicating environmental exposure information to participants (Association of Public Health Laboratories, 2012; Biomonitoring California, 2018), and Biomonitoring California prototypes have undergone usability testing
21 “Therapeutic misconception (TM) was first described in the 1980s, when it was noticed that some research subjects ‘fail[ed] to appreciate the distinction between the imperatives of clinical research and of ordinary treatment.’ People who manifest TM often express incorrect beliefs about the degree to which their treatment will be individualized to meet their specific needs; the likelihood of benefit from participation in the study; and the goals of the researchers in conducting the project” (Appelbaum et al., 2012, p. 2).
by Health Research for Action researchers (Health Research for Action, 2011). Additionally, FDA has repeatedly explored whether and how results should be provided directly to consumers through advisory panels and workshops that have asked experts and lay users about their preferences, explored the risks of return, and developed mitigations for those risks (FDA, 2010, 2016).
Once effective warning statements have been developed by investigators in a variety of research fields, the research community would benefit from sharing templates and examples to avoid unnecessary duplication of effort, while still allowing adaptation to the needs of a given context-specific communication.
CONCLUSION: Research participants may fail to understand the degree to which research results may have substantially greater uncertainties than clinical results. Little evidence exists to guide best practices for communicating warnings and qualifiers that address potential inaccuracies or potential variance in interpretation.
Identifying the Appropriate Communication Modality
Different types of communication may be appropriate in different contexts. The communication methods commonly used for returning results include
- in-person discussion,
- phone- or video-conference–based discussion,
- electronic delivery (e.g., through secure portals, including those tethered to EHRs), and
- mailing of printed materials.
Other reports have described a number of different factors that go into the selection of an appropriate communication method for returning individual research results (Fitzpatrick-Lewis et al., 2010; MRCT Center, 2017a), and the committee recommends that study teams use available guidance. For example, the Multi-Regional Clinical Trials collaborative has developed toolkits that support the return of individual as well as aggregate results and provide guidance for investigators, sponsors, and ethics review committees throughout the study life cycle from planning through study completion (MRCT Center, 2017b). As discussed above, ideally participants should be queried on their preferred communication method early in studies in which results are to be returned, and investigators should take participants’ preferences into account. However, given that the potential cost, required infrastructure, and expertise will vary from study to study, the choice of how results will be communicated reflects a cost–benefit trade-off that needs to be evaluated for each study.
Delivering results in person maximizes the ability of the investigator to provide clarification, answer participant questions, and assess and address potential confusion or emotional responses. In some cases, resources are needed to support the inclusion of specialized expertise in the return process, for example, when genetic counselors assist an investigator in returning results to participants. This return strategy is therefore the most time- and resource-intensive. Wendy Chung estimated that returning results for a large study using a team of genetic counselors cost approximately $250 per participant.22 Because of the time and resources required to plan for in-person return, this strategy is not well suited to scenarios where the results to be returned are time sensitive. Additionally, the return of results via a genetic counselor may lead some individuals to decline participation because of the time commitment of counseling sessions, as Chung and colleagues encountered (Wynn, 2016).
The return of results via phone has many of the advantages of in-person return, including opportunities for clarification, participant questions, and addressing emotional responses, although it is less personal. This method can be carried out quickly when the return is time sensitive and the participant must be reached promptly. The costs of return by phone, like those of in-person delivery, remain high because of the time and expertise required.
Many patients are familiar with using electronic portals, which are commonly used for delivering clinical laboratory or other medical results (Giardina et al., 2015). These portals can be used to provide documents detailing results to participants as well as links to additional educational resources. In some instances, research results could be tethered to an existing patient portal or EHR, such as when a research participant is also a patient receiving clinical care within the institution. Although such portals typically feature a secure two-way e-mail communication option, there are a number of potential disadvantages, including the lack of the synchronous communication offered by a phone or in-person return and the fact that portals are less likely to be used by racial and ethnic minority and rural populations and those with limited health literacy or technology proficiency (Sarkar et al., 2010, 2011; Sharit et al., 2014). Furthermore, including research results in a patient’s EHR may affect what is included in that patient’s designated record set. Investigators in environmental health have tested other digital methods to return personalized results and engage participants in the research (Boronow et al., 2017). Establishing and using a portal has some initial and maintenance costs, but it is more easily scalable than in-person delivery, with only a marginal cost for the addition of many participants.
The return of results by mail is most useful in scenarios where researchers are returning non-urgent, reference communications and may be particularly effective for accessing individuals in remote locations, like some tribal areas where telecommunications access is limited, unreliable, or unavailable.23 While mail is an
22 Testimony of Wendy Chung of Columbia University at the public meeting of the Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories on September 6, 2017.
23 Testimony of John Molina of Native Health at the public session of the Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories on December 11, 2017.
inexpensive method for return, communication by mail has a number of shortcomings, especially a lack of opportunity for dialogue and limitations in what can be communicated in a paper-based, visual format. Certified mail can be used to help prevent sensitive participant information from being received by someone other than the participant.
Data visualization is an effective tool for helping people understand their health data (see an example in Box 5-5), and many tools have been created to assist with the development of appropriate data visualizations. For example, the Data Viz Project by Ferdio is a website that organizes visualizations by function (e.g., comparison, part-to-whole, correlation) to make it easier to select the right visualization for a particular communication goal (Data Viz Project, 2018). Resources also are available to help in choosing the most effective type of chart (e.g., the Extreme Presentation Method; see Abela, 2018). Developed by the Risk Science Center and Center for Bioethics and Social Sciences in Medicine at the University of Michigan, Icon Array provides open-source icon arrays for communicating risk (University of Michigan Risk Science Center, 2018). Electronic Infographics for Community Engagement, Education, and Empowerment (EnTICE3) is open-source software that allows a user to create tailored messages and visualization outputs that are responsive to overlapping participant characteristics such as language, age, and level of health literacy (Arcia et al., 2015; Unertl et al., 2016). This software has been used during participatory design sessions to create a communication style guide tailored to inform and engage the target community. Such communications also can be used to stimulate health-motivating behaviors, for example, by offering comparisons to national rates of depression (Bevans et al., 2014) or providing dietary standards, associated risks, or recommendations for preventative action (NASEM, 2017). Under the Precision in Symptom Self-Management Center at Columbia University, EnTICE3 is being expanded beyond its original use to support biomarker result reporting, including cytokines, ancestry informative markers, and genetic mutations.24
As with any tool, visualization for returning research results must be well matched to the communication goal and data type (Arcia et al., 2013, 2018). No single visualization is ideal for all situations (Torsvik et al., 2013). Visual simplicity is also valuable, as visual embellishments (e.g., three-dimensional charts) tend to inhibit user comprehension (Tufte, 2001). A variety of authors have argued against three-dimensional graphs on both conceptual grounds (e.g., three-dimensional bars are more difficult to visually align with an axis to determine the level shown) and empirical grounds. In particular, while three-dimensional graphics may attract attention, they tend to perform worse in accuracy, which is perhaps the most critical dimension in the application to return of research test results (Fausset et al., 2008). Nor are more technologically advanced displays necessarily better: in at least some situations, interactive or animated data visualizations can be
24 Personal communication with Suzanne Bakken of Columbia University.
counterproductive, actually hurting an individual’s ability to process the underlying data (Torsvik et al., 2013; Trevena et al., 2013; Zikmund-Fisher et al., 2012). Additionally, the Health Level Seven standard for infobuttons supports context-aware retrieval (Health Level Seven, 2018), which is increasingly being used in clinical research and can be added to a variety of electronic communication methods (portal, designated website, e-mail, etc.) to link to additional context-specific explanatory content and resources, including those that are visual or interactive.
In many situations, a multimodal approach to returning individual results will be beneficial (e.g., delivering results via mail or electronic portal and then following up with a phone discussion or in-person meeting to offer participants a chance to ask questions and seek clarification). Consequently, health care standards that support the integration of additional sources of information into EHRs and tethered patient portals provide a foundation for multimodal approaches. Beyond infobuttons, a National Academy of Medicine Genomics and Precision Health Roundtable Action Collaborative, DIGITizE: Displaying and Integrating Genetic Information Through the EHR, has specified a set of standards including Fast Healthcare Interoperability Resources (FHIR), Substitutable Medical Applications and Reusable Technologies (SMART) on FHIR, SMART on FHIR Genomics, and Clinical Decision Support (CDS) Hooks (see Box 5-6).
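As a rough illustration of what the FHIR standard mentioned above makes possible, a single laboratory result with a reference range and a plain-language takeaway could be represented as an Observation resource along the following lines. The values, wording, and structure shown are hypothetical and unvalidated; a production system would conform to an agreed FHIR profile and terminology bindings.

```python
# Hypothetical sketch of a FHIR Observation resource (as a Python dict)
# carrying a research result plus the plain-language note and research
# disclaimer this chapter recommends. Illustrative only.

observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{"coding": [{
        "system": "http://terminology.hl7.org/CodeSystem/observation-category",
        "code": "laboratory"}]}],
    "code": {"coding": [{
        "system": "http://loinc.org",
        "code": "2093-3",  # LOINC code for total cholesterol in serum/plasma
        "display": "Cholesterol [Mass/volume] in Serum or Plasma"}]},
    "valueQuantity": {"value": 210, "unit": "mg/dL"},
    "referenceRange": [{"high": {"value": 200, "unit": "mg/dL"}}],
    # Plain-language takeaway with a research-vs-clinical disclaimer:
    "note": [{"text": ("Your cholesterol (210 mg/dL) is slightly above the "
                       "typical reference range. This is a research result; "
                       "please confirm it with a clinical test before "
                       "acting on it.")}],
}

print(observation["code"]["coding"][0]["code"])
# -> 2093-3
```

Because the resource is standard FHIR, the same payload could flow into an EHR-tethered portal, a SMART on FHIR app, or a designated results website, supporting the multimodal delivery discussed above.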
CONCLUSION: Research results can be returned through a variety of communication methods that are matched to participants’ needs and the context of the research results.
CONCLUSION: The appropriate use of visualizations can help achieve the communication goal for the return of research results.
CONCLUSION: Existing and emerging technical standards for the exchange of health data are available and relevant to support the return of research results at scale through electronic systems such as EHRs and secure portals.
The return of individual research results is a relatively new process for the research enterprise. To communicate effectively, the research community will need to develop a learning system in which processes for returning research results are continuously evaluated for benefits and harms in order to support the development of best practices over time. The committee notes that research to study the impact of returning individual research results is already under way, but more work will be required to generate best practices (Genomes 2 People, 2018; Miller
et al., 2008; MRCT Center, 2017a; Wynn et al., 2017). As best practices are identified, systems for translating that knowledge into practice will be needed. Given that most investigators are not currently trained in communication and may not be able to contextualize the meaning of a result, training will be critical if the return of results is expected for research on human biospecimens. Communication is a skill that needs to be developed over time, and what matters is the communicator’s ability to contextualize information and respond to questions by participants. In fact, the individual tasked with addressing participant expectations of return and communicating the results may not be the person with the most advanced expertise in the test itself (i.e., the principal investigator or someone on the research team) but rather may be a trained community member, a communication expert at an institution, or another individual adept in communication.
In developing training for current and future investigators, stakeholders will need to consider different methods of communication. Specifically, guidance is needed regarding what training should be expected for face-to-face interactions, phone interactions, or communication through patient portals, e-mail, or mail. Communicating the meaning of data in plain language will likely require different approaches, depending on the method used to communicate. Investigators will need assistance in determining which methods are most appropriate for their study.
These new communication tasks will, of course, have financial implications. The more context and interpretation that must be provided for a specific result (perhaps because of the potential harms associated with returning it), the higher the likely cost. To this end, future research into communicating results will need to address whether additional expertise should be included and factored into grant applications, under what circumstances face-to-face communication is needed and by whom, and which methods for return are appropriate for different types of research and groups of participants. As discussed in Chapter 3, institutions may be able to assist research teams by developing the infrastructure required for the return of results, which could include infrastructure that gives investigators access to core communication expertise. As the return of individual research results becomes more widely practiced, including research communication cores in institutional development grants may be considered; such cores would give investigators access to experts and a standardized mechanism for communication while avoiding the potential costs of study-by-study assessments.
CONCLUSION: Ensuring effective return of research results requires developing skills and expertise among research teams as well as access to the resources, training, and relevant expertise needed to achieve good quality communication outcomes.
When it comes to funding empirical research on the return of individual research results, the National Institutes of Health (NIH) is the obvious, and likely primary, sponsor for such an endeavor. However, this should not be NIH’s task alone. The return of research results is becoming part of the research enterprise, it is a global endeavor, and all sponsors of research using human biospecimens should devote resources to addressing the needs of investigators and participants by funding empirical research on the practice. More unified guidance on the practice of return will help prevent dramatic variability in practice between institutions and aid IRBs in making informed decisions. Funding agencies have a responsibility to ensure that the processes for return are both feasible and implemented appropriately.