The workshop opened with Cynthia Baur, director of the Herschel S. Horowitz Center for Health Literacy at the University of Maryland School of Public Health, discussing the findings of the literature review she and her colleagues conducted, as well as the methodology they used to identify studies to include in the review (see Appendix C for the full paper). She began her presentation with what she called the “big idea” in the review paper: Many people are looking seriously at communities in need and trying to do something to help those communities get better information, build their skills, and become more health literate. “That being said, we know that good intentions are not enough,” said Baur, “and we are here to discuss the science, the rigor of the design, and whether those were the right interventions.” Although many of the studies she and her colleagues identified may have had quality or methodological issues, she hoped the commissioned paper provided a sense of the richness of what they found in the literature.
After acknowledging her three co-authors—Lourdes Martinez from the Centers for Disease Control and Prevention (CDC), librarian Nedelina Tchangalova from the University of Maryland, and Don Rubin from the University of Georgia—as well as two other colleagues from CDC—Tom Chapel and Dogan Eroglu—who were instrumental in choosing an evaluation framework, she listed the three questions that shaped the review as the review team began looking at the literature. The first question—What does it mean to take community seriously?—aimed to understand how community was identified in a study beyond merely the location. “We know that community has a much richer and broader meaning than just a location, but for many of the interventions, that was indeed the starting place,” explained Baur.

1 This section is based on the presentation by Cynthia Baur, director of the Herschel S. Horowitz Center for Health Literacy at the University of Maryland School of Public Health, and the statements are not endorsed or verified by the National Academies of Sciences, Engineering, and Medicine.
The second question—What distinguishes a health literacy intervention from an educational intervention?—was important because the team had to sort through a rich literature of educational interventions to find studies that specifically tried to grapple with health literacy. The third question—What is or could be the “value added” element of doing a health literacy intervention with a community?—aimed to identify what health literacy brought to the many community interventions that focused on knowledge and building skills related to specific health concerns or health topics. CDC, she noted, has made huge investments in community-based work, and the health literacy issues were often not obvious to the funders, program planners, and evaluators. In Baur’s opinion, there is valuable discussion to be had about what health literacy can bring to the work conducted under the rubric of community intervention. “How do we, as a field, help articulate that and increase the rigor and robustness of that work?” she asked.
With regard to the findings of the review, Baur chose to describe one of the two projects that had the greatest large-scale effects. The study she discussed, published in 2013, used a fotonovela developed at the Columbia School of Social Work (Hernandez and Organista, 2013) as the intervention vehicle. The study’s intended outcomes were to increase literacy about depression, decrease the stigma associated with depression, and increase help seeking, knowledge, and beneficial behaviors. The investigators developed the fotonovela using social-cognitive theory, culture-centric narratives, and a definition of health literacy espoused by David Baker at Northwestern University (Baker, 2006). One group of participants received the fotonovela as the health literacy tool while a control group engaged in a group discussion about family communication issues led by promotoras2 at a local community clinic for Latinas. The investigators designed the study as a pre- and posttest randomized experiment to test depression knowledge. The study included a health literacy score, measured using the Short Test of Functional Health Literacy in Adults (S-TOFHLA), but did not report a pre- and posttest assessment of health literacy. The results of the experiment showed a large effect on stigma and intent to seek treatment and a moderate effect on self-efficacy to seek treatment for depression.

2 A promotora is a community health worker who is trained to provide basic health education within her community. More information is available at https://mhpsalud.org/programs/who-are-promotoresas-chws (accessed January 11, 2018).
Discussing the broader findings and conclusions from the entire body of work that she and her colleagues examined, Baur said they found a wide range of interventions reporting a mix of qualitative and quantitative evidence, with a heavy emphasis on qualitative results. The papers mentioned the terms “health literacy” and “community” frequently, though often without defining those terms. One finding that Baur said emerged organically from the review was a pattern in the way that interventions used health literacy. Researchers, for example, selected people for the intervention based on either an assumed or documented low level of health literacy. They tried to measure something about health literacy in their evaluations using tools such as the Rapid Estimate of Adult Literacy in Medicine (REALM), S-TOFHLA, the Newest Vital Sign (Powers et al., 2010), a single question—do you feel comfortable with forms? (Chew et al., 2008)—or instruments they developed themselves. The studies used health literacy to help shape the actual intervention and it was often listed as an outcome. “This is something we labeled as a finding because we saw it consistently as a pattern across the interventions,” said Baur. She also noted that the overall quality of the evidence in the publications could have been better.
One of the challenges to conducting this review was defining community, and Baur noted that the definition was a “moving target.” She and her colleagues decided to use a published definition (McLeroy et al., 2003) that gave community four meanings: setting, the target population, a resource or asset, and an agent where the community is engaged in whatever issue it is being asked to consider. Setting and target population were the most frequent uses of the term “community.” For example, publications would mention community hospitals, community clinics, or community agencies without defining the community element, or they would state that recruitment occurred in a community or that there were community participants without defining the boundaries of the community being studied. Community could also be defined by language, shared experience, or a combination of factors. “Communities were very challenging, and given our charge to find and talk about community-based health literacy interventions, we had to grapple with this concept throughout the review process,” said Baur.
When describing the methods used to conduct the review, Baur noted the important role the team’s professional research librarian played in maintaining the rigor of what she called a “scoping review.” The purpose of a scoping review is to identify a broad area that provides an overall picture of a field of study while allowing for different study designs and mapping key concepts, sources, and types of evidence. A systematic review, by contrast, uses much tighter definitions about the types of studies to consider and focuses more on analyzing the quality of the evidence in the identified studies. The scoping review, Baur added, could serve as a prelude for conducting a systematic review at some point.
The professional librarian constructed and executed the search strategy, which included 14 commercial library databases and the Web-based “gray” literature. “Health literacy” and “community” were key search terms, with health literacy defined as both how people find, process, understand, and communicate about health information and services to protect and promote their health, and how organizations and systems support or hinder people in these activities. A community-based health literacy intervention was defined as any purposeful, organized activity to help a group of people find, understand, use, or communicate about health information, services, or issues for themselves or their communities.
For an evaluation framework, Baur and her colleagues used a modified version of the CDC Best Practices Framework, which looks at applied public health interventions and evaluates them based on five dimensions: effectiveness, reach, feasibility, sustainability, and transferability (Spencer et al., 2013). She noted that they found some information about effectiveness and reach in the papers they identified, but very little about feasibility, sustainability, and transferability, which are factors that would be important for scaling and spreading an intervention. They did multiple rounds of review, starting with titles and abstracts before moving to a full-text review, and then developed a list of reporting categories to organize the results (see Box 2-1).
Table 2-1 lists the inclusion and exclusion criteria Baur and her colleagues used to initially identify 2,402 papers. They reduced the list to 140 papers for full-text review and 74 papers for inclusion in their analysis. Review articles, she noted, were a source of additional references to check, but the review articles themselves were excluded from the analysis. She explained that the full-text review revealed frequent superficial uses of health literacy in abstracts, but little or no uses of health literacy in the actual intervention. As she noted earlier, only two papers—the one she described on depression and a second one about a cancer screening program—reported large-scale effects.
In discussing the results, Baur said there were quite a few studies on health literacy skill and capacity building, mostly from outside of the United States, and a few on information seeking, mostly led by librarians who were trying to build people’s information-seeking skills. Several studies had the specific goal of creating culturally sensitive or culturally targeted interventions, while studies in other categories included cultural sensitivity as an aspect. A handful of studies were about adult learners enrolled in formal programs offered through adult literacy organizations or at a community college, and other studies focused on Head Start programs. Some studies were based in schools, although Baur said the number was fewer than expected, perhaps because schools are complicated institutions and not always easy to use as intervention sites.

SOURCE: Adapted from a presentation by Cynthia Baur at the Community-Based Health Literacy Interventions workshop on July 19, 2017.
Baur and her colleagues decided to include mental health as a category, even though previous reviews did not, because of the sheer number of interventions focused on mental health. “We decided that if there are researchers or practitioners in other fields invoking the term ‘health literacy’ or talking about what they do as health literacy, even if it does not mean the same thing to the people in this room today, it is still worth considering what they mean, what we mean, and how to engage with them,” said Baur. The only paper on a policy or a system change included in the review came from Sweden, she said, and it was supposed to have implications for health literacy, but was not on a policy specific to health literacy. Reported outcomes in the 74 papers are listed in Box 2-2.
Four themes emerged from the review, said Baur. There was a strong community engagement component in all of the papers, which included formative research to identify community needs or to address health literacy issues in a community. The studies all had a health literacy intervention component, and a few papers made a link between health knowledge and health literacy. The commissioned paper also discussed the limitations of the evaluation framework used in the studies, which she said might speak to the standards of different journals and the level of detail they expect investigators to report. To that point, she said it would be worth discussing what level of detail should be reported in papers on health literacy interventions. “I think there is more information that people should be providing to all of us about what they are doing, not just for the sake of transparency, but also to understand how these effects they are reporting are happening,” said Baur. “Much of that was really behind the curtain.”
Referring back to the three questions that framed the review process, Baur said she and her colleagues have not developed a hard and fast definition of community that they would be willing to defend. Community, however, has to move beyond just the setting and target audience if the goal is to take community engagement seriously and move beyond the 100- to 300-person studies that were typical in the papers they reviewed. She also noted that more discussion is needed about the distinction between health literacy and health education, and not just at a definitional level. “It is more productive to think about [that distinction] in the light of the question about what do we bring to the table,” said Baur. “How do we take what we have learned over the past 20 to 30 years about health literacy interventions and make that meaningful to people thinking they are doing health literacy interventions?” Her final comment was that many of the projects she and her colleagues reviewed would have had a bigger impact if they had looked at health literacy.
Rima Rudd from the Harvard T.H. Chan School of Public Health opened the discussion by asking Baur to explain why her team’s review identified 74 papers describing community-based health literacy interventions while another group, headed by Don Nutbeam at the University of Sydney, found only seven articles for a review his team conducted 1 year earlier. Baur replied that she and her colleagues posed that question to the workshop planning committee when they first started their review process and asked for direction on how narrowly the planning committee wanted her team to define the field.
The Nutbeam group’s framework, explained Baur, focused on functional, interactive, and critical health literacy and used health literacy as the outcome. Her team, taking direction from the planning committee, expanded its search criteria to include studies where health literacy was included at any point of an intervention. She noted that the seven papers Nutbeam identified used the terms “community” and “population” interchangeably. Her team, by contrast, used the term “community” exclusively and defined community as having some element of groupness as a means of distinguishing what they called individual and non-individual interventions. As an example of the former, she said that some studies reported recruiting from a population, delivering the intervention to individuals, and then aggregating and reporting the data in some manner. “For us, that was an exclusion criterion if an intervention functioned that way,” said Baur. “There had to be some kind of group dynamic and some explanation of why the community was chosen.” One element missing from the majority of the papers was an explanation of how interactions among the community members affected outcomes.
Michael Paasche-Orlow from Boston University complimented Baur for acknowledging the complexity of the term “community” and wondered if her team’s definition of the word could have been even broader to include studies that are doing community-based research, but never use the term “community.” Baur agreed that might be the case, but in the end, she and her colleagues had to set some operating rules. “Any method is going to choose some things and not choose others,” said Baur. “We tried to be as transparent as possible about our methods.” She added that her group took the same approach with the term “health literacy.” Frequently, she said, papers would specifically mention health literacy in the title or abstract, but a closer reading of the paper failed to find the health literacy piece of the study.
Paasche-Orlow then asked Baur to comment on the observation that, in defining their communities, the 74 studies consistently focused on vulnerable or poor populations. “If these things work, why do we not use them on rich people?” he asked. Baur replied that the groups conducting those studies were looking at communities of greatest need. “If we are thinking about resource allocation, you could argue that resources are being allocated appropriately because they are going to the communities with the greatest need,” said Baur. She explained that none of the 74 studies defined communities by socioeconomic status and none looked at whether the intervention was more or less successful in high versus medium versus low socioeconomic communities. Bernard Rosof from the Quality HealthCare Advisory Group and Northwell Health remarked that perhaps all communities could benefit from health literacy interventions, to which Baur responded, “They may need them, but who needs them more?” Most of the papers, she said, opened with statements concerning the high needs of the communities on which their work focused.
William Elwood from the National Institutes of Health commented that epidemiology is not the sole driver of health literacy. What people know is also a driver, and often what people know is inaccurate. To that point, he asked Baur why the search criteria did not include knowledge as an outcome. Baur replied that her group did regard knowledge as an important outcome and treated it as an inclusion criterion.
Jennifer Dillaha from the Arkansas Department of Health said she agreed with Baur’s decision to make a distinction between health education and health literacy because in her opinion, health literacy enables people to gain knowledge more easily from health education interventions. She then asked Baur if she considered whether interventions designed for specific chronic diseases are more about health education than health literacy. Baur said she and her colleagues struggled with that question when trying to decide whether to include studies focusing on mental health and nutrition. “We thought it was better to bring that body of work to you and ask what you think rather than exclude them,” she said. “How do we feel about people parsing this and talking about diabetes literacy and nutrition literacy and cancer literacy and all these variants, [including] mental health literacy? Do we think that is a good thing? Do we need to be partnering with them? Do they mean the same thing? If they mean something different, how do we engage them in what we mean?”
John Gardenier, recently retired from the National Center for Health Statistics, commented that this discussion was missing attention to affect or motivation. “People may have high or low health literacy, but if they have no motivation to apply health literacy to their own or to their community situation, it does not seem to be likely that there will be a strong positive outcome,” said Gardenier. He also noted that people are defined by the parameters of a community, not necessarily by identifying with that community, and that a degree of affective identification with a community would have a great deal to do with the efficacy of a community intervention. Considering his second point reminded Baur of one study that provoked a great deal of discussion among her team. This dementia intervention involved three Aboriginal communities in Australia and the paper described how the intervention led the community to engage in discussion on a subject it had previously found difficult and to better understand what was causing some community members to act strangely. Clearly, said Baur, this intervention had a positive effect in the community, but she was unclear on how it was relevant to the type of effects the planning committee had asked her team to identify in their review.
Donney John from NOVA Scripts Central remarked that while sustainability was one of the five criteria for the review framework, Baur’s team did not find much information about sustainability in the reviewed papers. He asked Baur if she and her colleagues came across any programs that successfully scaled their interventions or were unable to sustain them. Baur replied that one or two papers may have had a sustainability component, but most of the investigators were conducting traditional pilot studies that looked at outcomes.
John then asked whether incorporating studies focused on providers, who were excluded from this review, might have found more work on sustainability given that providers might get reimbursed for this type of activity, which could lead to sustainability. Baur responded that doing so would have required deciding whether to count every counseling intervention as a community intervention. For example, there was a diabetes intervention (Handley et al., 2016) that trained promotoras and community health workers to do individual follow-ups with people. Her team questioned whether to count a one-on-one counseling intervention as a community intervention and decided that the community aspect of the study involved recruitment, not the intervention itself. Similarly, they found three studies that focused on training students, such as dental and pharmacy students, but decided there was no community component to these interventions.
Turkan Gardenier from Pragmatica suggested adding the term “comprehensibility” to any measure of value added. Baur replied that this was why they included content in the four-part structure of the review. She added that studies were addressing comprehension issues with the way in which the content used in their interventions was designed. Baur noted that the commissioned paper listed several excluded publications in a section of the review called papers of interest because these publications talked about reporting the results of their interventions back to the community.
Elwood noted that a new rule will go into effect on January 1, 2018, that will require any human subjects study receiving National Institutes of Health funding to list all social and behavioral interventions and, eventually, all study protocols on their ClinicalTrials.gov posting. These requirements, he said, create a great route for disseminating information. “I think we are about to see a tremendous expansion of the availability of information and a much broader and also deepened discussion about research, its results, and diffusing it across all levels of the American people, not just the research community,” said Elwood.
Ruth Parker from the Emory University School of Medicine said she knows of interventions that she would call “community-based” and related to health literacy that did not make the list of 74 publications Baur and her colleagues reviewed. “That makes me worried,” said Parker, because the terms “health literacy” and “community” were not in the abstracts or titles of those papers. Parker also expressed concern that interventions that targeted specific conditions such as diabetes or mental illness may have been left out of the commissioned paper because the titles and abstracts did not contain those two specific phrases. Baur replied that the authors had to have some way of distinguishing a health literacy intervention from other types of interventions. Parker then asked Baur if she knew the impact factors of the journals that published the reviewed papers, and Baur replied that she and her colleagues did not track impact factor.