The workshop’s third session featured a presentation on the issues and challenges in evaluating community-based health literacy interventions by Andrew Pleasant, senior advisor on Health Literacy Interventions, Research, and Evaluation at Health Literacy Media. Following that presentation, Cindy Brach, senior health policy researcher at AHRQ, moderated a discussion with three evaluators: Oscar Espinosa, senior associate at Community Science; Anil Thota, associate director for systematic review science at CDC; and Sherrie Flynt Wallington, assistant professor of oncology at the Georgetown University Medical Center’s Lombardi Comprehensive Cancer Center (see Box 4-1). To guide the discussion, Brach used questions developed from input that the workshop planning committee solicited from people who registered for the workshop.
In Pleasant’s opinion, ethical health literacy community programming requires looking at effectiveness and efficiency, addressing equity, and evaluating the program. Ethics, he said, refers to a set of principles of proper conduct based on a theory or belief system of moral values, and effectiveness answers the question of whether a desired outcome occurred. Efficiency, which he said is rarely considered in relation to a health literacy program, is the ratio of effective output to the total input in any system, or the cost per unit of a desired outcome. Equity relates to the issues of fairness and justice and to the elimination of social, environmental, and health inequities, and evaluation is required to ensure that all of the above occurred and to advance the field. Pleasant said that he had found some beauty in the logic of combining all of these because being ethical requires being efficient, effective, and equitable, while proving effectiveness, efficiency, and equity requires evaluation. “Essentially, evaluation is an ethical approach to everything everybody in this room and in health literacy does,” said Pleasant.

1 This section is based on the presentation by Andrew Pleasant, senior advisor on Health Literacy Interventions, Research, and Evaluation at Health Literacy Media, and the statements are not endorsed or verified by the National Academies of Sciences, Engineering, and Medicine.
Evaluation has many possible goals (see Box 4-2), including providing evidence of success or failure, justifying a program to funders and management, and developing best practices and plans for program improvement. Regardless of the ultimate goals of an evaluation, in Pleasant’s opinion the overarching purpose should be to produce valid and reliable evidence that can be used to make something better. In fact, said Pleasant, usability is an important aspect of good evaluation.
What evidence is not, said Pleasant, is a set of morals, casual conversations, a brainstorming session, continuing the status quo, or arguments in defense of past actions. Pleasant continued by saying that evidence is also not an opinion or value, a new idea that seems good, or something only a
person with a Ph.D. can collect, even though many people treat evidence in that manner. One thing that differentiates his approach to evaluation, said Pleasant, is that he puts it firmly in the realm of research. “There are people who will argue strenuously that evaluation research and research are two different animals, and I could not disagree more,” he said. “If evaluation does not live up to the definition of good research, then it is a bad evaluation. It is as simple as that.”
In Pleasant’s view an important yet often neglected piece of evaluation regards ownership of the data. “The data are not ours,” said Pleasant. “They belong to the people who are fortunate enough to be able to work within a community. They let me use it, but it is their property.” In his opinion, if he does not give evaluation data back to the community in a form they can use, he has not lived up to his part of the social contract. “If you do not think that social contract exists, then you better rethink your relationship to a community because it does,” he added.
Pleasant presented a research impact framework that indicates areas of possible evaluation across all research and that, in his view, is absolutely relevant to health literacy (Kuruvilla et al., 2006). This framework focuses on research-related impacts, policy impacts, service impacts, and societal impacts. He noted that other frameworks are also effective, in that they help the evaluator to think through what to include in the evaluation metrics.
When designing a community-based health literacy intervention and accompanying evaluation framework, the number one rule is to engage community members early and often because they have the most to teach anyone developing or evaluating a program. “If you do engage people—
not patients, not participants, but people with names—you will naturally follow a path that means you are talking to them about their entire lives,” said Pleasant. “You are not launching a program for people with type 2 diabetes. You are launching a program for people who happen to have type 2 diabetes and a whole bunch of other stuff going on in their lives that is going to get in the way of your intervention if you only focus on their type 2 diabetes, and do not talk to who they are in their entirety and do not include that entirety in your evaluation.” The reason for that, he added, is that an evaluation will find people for whom the intervention did not work, and without addressing their whole lives, it will be impossible to know why it did not work.
Given that a major goal for addressing health literacy in communities is to prevent chronic disease, it is important to remember that prevention at large also means taking care of people who already have the disease. Prevention, said Pleasant, is not just about stopping disease from developing; it is also about reversing the course of disease and preventing the worst outcomes.
Following the simple logic of engaging people early and often, including their whole lives, and preventing chronic disease will lead to a robust level of health literacy, remarked Pleasant. Although these steps may not seem to be about evaluation, in Pleasant’s view they are because engaging people early and often means conducting formative research and evaluation in a community, tailoring materials to meet the needs of community members, and engaging in participatory research and evaluation. Pleasant noted that including the whole lives of community members in an evaluation forces one to think about who to involve, when and where to ask questions, how and what to ask community members, and perhaps most importantly, who should ask the questions. In many of the communities where Pleasant works, “the last thing you want is a tall white guy with a pen and paper asking people questions, because that means trouble to many people in this world,” he said. “If you do not embrace that reality and figure out who is the right person to be asking the questions then you are going to miss the boat because people are going to tell the tall white guy what they always tell the tall white guy.”
Continuing with his discussion of the logic model, Pleasant said that thinking about prevention in evaluation terms reinforces the absolute need for an objective indicator in the evaluation. While satisfaction, understanding, and self-efficacy are terrific metrics, said Pleasant, those indicators cannot compare to measuring a drop in blood pressure or body mass index given that the goal is to help people create health in their lives. Regarding when to evaluate, Pleasant said that it is a mistake to wait to start an evaluation until the intervention is finished. The best practice, he said, is
to conduct formative, baseline, and process evaluation, and to evaluate outcomes and sustainability.
Turning to an example of interventions that he has launched in communities over the past decade, Pleasant described the Life Enhancement Program. This integrative health program, offered in partnership with a local health care provider, covers all of the health determinants in a person’s life over a minimum of 40 hours of contact time. The program includes one-on-one sessions, grocery store tours, group sessions, and hands-on practical exercises. The program, tailored to each community based on a formative evaluation, has at least 120 learning objectives. The logic model holds that improving health literacy will lead to behavioral changes. In turn this will improve individual health status, which in the aggregate will improve public health (see Figure 4-1).
In terms of evaluation, Pleasant said there are two approaches. One approach starts with the Calgary Charter’s definition of health literacy (Coleman et al., 2009), which includes five steps: find, understand, evaluate, communicate, and use (see Figure 4-2). Communicate, said Pleasant, is incredibly important at the community level because communication—measured by how many people each participant has shared their messages with—turns into a diffusion of innovations. He noted that each participant, on average, shares information with 15 others in the community. What he has not done yet is to find those 15 people and ask if they shared their new knowledge with others. Self-efficacy, he explained, is a good surrogate for the evaluate phase because when people have gained confidence in their ability to act on their increased health knowledge, they have, in essence, evaluated themselves. Use, he said, is the indicator of behavioral changes, and participants in this program realized statistically significant improvements in blood pressure indicators, depression, stress, civic engagement, and other positive health behaviors.
The second approach to evaluation uses the Calgary Charter health literacy scale to assess the same five steps of find, understand, evaluate, communicate, and use (see Table 4-1). Using this approach also produces about the same results, said Pleasant. “People are self-reporting fairly accurately when you compare their self-reports to in fact a more ‘objective’ indicator of the steps of health literacy,” said Pleasant. “You can do it both ways.”
Theater for Health, another community-based health literacy intervention that Pleasant has launched, uses theater to change behaviors around health issues. The first launch, in Peru, focused on household hygiene and followed the process of identifying local partners, conducting formative research, and learning about the community. Pleasant and his collaborators conducted a 2-week workshop in Lima with the actors and scientists on the team to teach the actors about the science and the scientists about the art of acting so they could work together on stage. The program consisted
| Frequency of Engaging in the Following Tasks | Pre Average | Post Average | Percent Difference |
| --- | --- | --- | --- |
| Find/look for health information | 2.7 | 2.8 | 7% |
| Understand information about your health | 2.9 | 3.2 | 9% |
| Evaluate how health information relates to your life (e.g., determine if and how information is relevant to your life) | 2.8 | 3.1 | 10% |
| Communicate about your health to others | 2.5 | 2.9 | 14% |
| Act on information about your health | 2.7 | 3.1 | 14% |
SOURCE: Adapted from a presentation by Andrew Pleasant at the Community-Based Health Literacy Interventions workshop on July 19, 2017.
of 12 episodes presented over 11 weeks, bookended by health festivals. Pleasant’s team collected process data from 20 randomly selected participants after each episode, asking questions about what the participants thought about the performance, what they expected to happen the following week, and if they were going to start making changes in their lives. After the 12 episodes, Pleasant and his colleagues collected data from a random sample of the community. One difference between this evaluation and the one for the Life Enhancement Program was that there was no guarantee that the people surveyed after the performances had actually been to the performances, and in fact, some had not.
After a quick review of the Theater for Health’s logic model (see Figure 4-3), Pleasant briefly discussed some of the evaluation data (see Figure 4-4). More than 97 percent of the randomly surveyed people in the community did go to the performances, though some 7 percent admitted that they did not pay attention. “That tells you they are telling the truth,” he explained. “In your evaluation, it is nice to have some checkpoints like this because the tendency to tell an interviewer what they want is strong in human relationships.” To build some internal validation into an evaluation, Pleasant suggested asking the same thing in two different ways to see if the responses are approximately the same. “If they are
significantly different, then something is going on,” he said. “It might be that people lied. It might be that they did not understand. It might be that your interviewer was poorly trained. There are many causes of that, but you need to check in on that.” He noted that knowledge about household hygiene in the community, not just among those who attended the performances, increased by 65 percent, and more than half of those surveyed reported that their family had made hygiene behavior changes during the performances.
Pleasant’s most recent intervention, the Healthy Table program, is a 17.5-hour program consisting of seven sessions with demonstrations on how to cook healthy food on a budget. Though he has not yet completed the evaluation, he has received reports that it is benefiting participants. “This is the smallest intervention, and we are still seeing that two people have reduced their hypertension medications because they reduced their salt intake,” said Pleasant.
| Program | Dose: Contact Hours | Response: During the last month, how many days were you prevented from doing normal activities because of physical problems or mental health? |
| --- | --- | --- |
| Life Enhancement Program | 40 minimum | 41% decrease |
| Theater for Health | 30 maximum | 25% decrease |
| Healthy Table | ~17.5 | 11% decrease |
SOURCE: Adapted from a presentation by Andrew Pleasant at the Community-Based Health Literacy Interventions workshop on July 19, 2017.
The point of discussing these three programs, he said, is to raise the issue of dose–response, or how much intervention is enough. While all three interventions produced a significant decrease in one common metric—how many days over the past month a participant was prevented from doing normal activities because of physical problems or mental health—there was clearly a dose–response, with the biggest decrease associated with the most expensive and time-consuming intervention (see Table 4-2).
The message to funders is clear, said Pleasant. “Health literacy interventions can be expensive, but if they are evaluated well, they are worth it,” he said. For example, for 100 Life Enhancement Program participants, the improved health status achieved in the first year following the intervention comes at a cost between $376,400 and $570,500 less than that of other interventions producing similar health gains. “This is what evaluation can and should teach us about what to do. It is not just outcomes or satisfaction, but how we change the health system and inform policy,” said Pleasant in closing.
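The savings figures above are the aggregate numbers Pleasant cited for a 100-participant cohort; the per-participant breakdown in the short sketch below is our own illustration of the arithmetic, not a figure reported in the presentation.

```python
# Aggregate first-year cost savings Pleasant cited for 100 Life Enhancement
# Program participants, relative to other interventions with similar gains.
savings_low, savings_high = 376_400, 570_500
participants = 100

# Per-participant savings range (illustrative breakdown, not from the talk).
per_person_low = savings_low / participants
per_person_high = savings_high / participants

print(f"Savings per participant: ${per_person_low:,.0f} to ${per_person_high:,.0f}")
```

On these numbers, the intervention saves roughly $3,764 to $5,705 per participant in the first year, which is the kind of efficiency metric (cost per unit of desired outcome) Pleasant argued evaluations should capture.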
Given that the randomized controlled trial (RCT) cannot be the gold standard for evaluating community health literacy interventions, Brach began the discussion by asking the panelists to discuss what they would consider to be the gold standard for such interventions and to provide an example of such an evaluation. Thota, commenting from the perspective of someone who conducts systematic reviews, noted first that the RCT is, in many cases, not ethical when working in a community on a public health issue. As an example, he pointed out that it would be unethical to conduct an RCT to see if seatbelts save lives.

2 This section is based on the comments by Oscar Espinosa, senior associate at Community Science; Anil Thota, associate director for systematic review science at the Centers for Disease Control and Prevention; and Sherrie Flynt Wallington, assistant professor of oncology at the Georgetown University Medical Center’s Lombardi Comprehensive Cancer Center, and the statements are not endorsed or verified by the National Academies of Sciences, Engineering, and Medicine.
As far as a gold standard is concerned, Thota said that the Community Preventive Services Task Force, which has spent the past 20 years developing evaluation methods to use when an RCT is inappropriate, has put the whole concept of a gold standard on the back burner. “You think about methods. You think about transparency. You think about making everything very clear as to how it is that you did the synthesis,” he said.
When his team looks at a study within a Community Guide review, they consider two aspects: the suitability of the study design itself to enable a sound decision based on the data it would generate, and the quality with which the investigators executed the study. The study design, for example, might include a contemporaneous control group, that is, data from a comparable community over the same period as the intervention, or it might be cross-sectional. “Rather than say there is no evidence, we say this is the evidence we have,” said Thota. For execution quality, he and his colleagues look at a study’s internal validity and the epidemiological concepts involved, as well as its external validity, or how much the study contributes to the generalizability or applicability of the findings.
Wallington began her comments with a story about her first experience as an independent investigator with a National Cancer Institute Career Award. Her intention was to conduct a community-based RCT health communication intervention in select communities in Washington, DC, and to go beyond merely implementing the study to learning what the communities really think. She realized that conducting a neighborhood-level health literacy evaluation was not possible with the funding she had, so she started talking to people in the community to make connections and identify champions in the community. However, when she got into the communities in Southeast Washington, DC, particularly in Ward 8, where there are high rates of breast, cervical, and prostate cancer, she started hearing other priorities and had to adjust her plans to reflect those priorities.
In her opinion, there is a deficit of good exemplars of robust, community-level health literacy interventions, though there are examples of good program evaluation. Two of her favorites are the evaluations of the text4baby program and the Head Start program that Herman discussed in the previous session. Wallington said that she would like to see more mixed methods and multilevel study designs for community- and neighborhood-level interventions, and she applauded national funders such as the National Institutes of Health for emphasizing mixed methods research that combines qualitative and quantitative research. She also noted that journals are starting to publish qualitative studies.
Next, Brach asked the panelists for their ideas on some of the methods, data sources, and tools that investigators can use creatively to bring more rigor to the evaluation of community health literacy interventions. Espinosa, picking up on Pleasant’s emphasis on execution as a key aspect of good evaluation, said that execution and engagement of the community will determine if a study is usable. One lesson he has learned from working in communities is to recruit people from within the community who are doing evaluation work or other types of research and to make them part of the research team. “That is a key strategy,” said Espinosa. “It increases the validity of your findings tremendously.”
In terms of tools, Espinosa said that the key is good data collection and determining if the intervention led to actions that will eventually change health outcomes. Once again, he said, engaging community members is the key to getting quality data to demonstrate effectiveness at promoting better health outcomes. For example, when getting ready to conduct a pilot study, Espinosa has community members review the survey instrument to make sure it is something that people in that community would be willing to answer. “When you go into a community to do research to collect data, you are extracting something valuable from community members,” said Espinosa. As far as specific validated instruments, he recommended looking at models such as the Kirkpatrick model for evaluating training programs (Kirkpatrick, 1959; Kirkpatrick and Kirkpatrick, 2016) that go beyond looking at satisfaction. This particular model, he explained, looks at four levels: Was training information delivered in an effective way to the participants? Did participants understand the information? Can they connect what they have learned to an action? What were the results of the training?
Thota’s group looks at outcomes and study design, but also at applicability, and it brings in subject matter experts, preferably from the community, as often as possible to contribute to its study reviews. “We look at trying to get at the context setting, the populations, what is happening with equity, and what is happening with socioeconomic status,” he said. While that information is not always available, that is the framework with which he and his team move forward because even when the task forces develop recommendations, people want to know if they will work in their communities. “Applicability is a step in that direction,” said Thota. Above all, though, rigor is critical, both for the evaluation and the systematic review. “We map out everything that was done from A to Z,” said Thota, with the goal of enabling people to look at the study and see if it applies to their specific situation or whether there is something they can do differently to make it germane to their communities. He added that the more context he can see in a study and the more context he and his colleagues can synthesize in the systematic reviews, the more useful it will be.
Wallington said that one of the things she has found most useful as
a resource is the community advisory board that exists at most academic institutions. She then said that what most academics think of as community is not how communities envision themselves, and so it is important that community advisory boards not only include stakeholders from the health department, but also people who live in underserved communities and who know what works in their communities. She cautioned, however, that working with community advisory boards is a long-term process that involves investing in the community. “Communities have said, do not just call us when you have a grant due in 30 days and you need a letter of support,” said Wallington.
One useful step that the Georgetown University Medical School community advisory board took was to create a community listserv that researchers such as herself can tap into when pretesting tools, for example. “When we need things pretested, we do cognitive interviews,” she said. “We are not developing tools and taking these tools into the community and then having the community tell us this is not going to work.” In one study designed to increase mammography screening among African Americans and Hispanics, Wallington and her colleagues developed a short one-page pre- and posttest instrument and training modules. When she pretested the instrument, some of the women did not know the word mammogram, so she had to redesign the instrument to use plain language and images to explain what a mammogram was without scaring the women by using words such as “smashing” and “pressing.” After redesigning and retesting the instrument, her team sent the survey into the community and got a better than 85 percent response rate.
When Brach asked the panelists about the appropriate role for community engagement and if that differs for health literacy work as opposed to other community-based research, Espinosa said that he has been told by community members not to treat them like stepchildren. Engaging communities requires committing to a long-term relationship. “For true community engagement, it is all about execution,” he said, referring to who gets invited to be on the community advisory board and who gets to chair it. Community, he said, is not just a zip code. “It is about relationships that individuals have with each other. That is what really creates a community,” said Espinosa. One of his strategies is to recruit members of the evaluation team from the community benefit organizations working with the community of interest and the universities that are currently doing research there. That approach reduces the learning curve and it helps establish trust in the community, which he said is critical for measuring the intent of community members to act on the information delivered to them. “If they trust you, they are more likely to do what you are training them to do or what you are training them to help understand,” said Espinosa. He noted that another reason to involve members of the community in the evaluation process is
that it puts money into the community. “You are hiring somebody from the community, and putting dollars back into it gives you a lot of credibility,” said Espinosa.
As an example of how to better involve the community in a project, Espinosa said that he has been working with a health care system that holds community health events in a parking lot at the hospital. His slight tweak was to suggest that the hospital should partner with an entity in the community, such as a church or community health center, and provide money for travel/lodging stipends for patients. “That made all the difference,” he said. “It is those little decisions that you make throughout the evaluation that make a huge impact at the end and make the data usable.”
Brach, highlighting what she had heard so far, said that the important role for the community is to help choose measures that will be meaningful to them, and she asked Wallington to comment on that. Wallington said that her first step in working with a community is to identify a liaison or champion in the community, which is usually someone who is well respected. In Southeast Washington, DC, for example, a woman named Miss Betty helps run a community garden and everybody in that community knows her. “You are not going to get anything done if it does not go through Miss Betty,” said Wallington. “She is the eyes and ears of that community.” It is with Miss Betty’s help that she identifies people in the community who help her gain entry into the community, and working with her sends a signal to the community that Wallington is a person who the community can trust. “Communities have a good reason why they do not want to trust researchers or even participate in research,” she said. “You cannot just walk into these communities.”
Many communities across the United States are establishing boards that will vet research being done in the community, said Wallington, and going through those boards to get approval for a study is essential for success. The lay people on these boards will assess if a study will help the community and meet the priorities of the community, and it is critical for researchers to convince the board members that they are not there to do research about the community, but with the community. Her strategy before starting a study is to spend time there, going into the small businesses and getting to know the people, hiring a caterer from the community for the informational meetings she holds, and putting money into the community. Her team, for example, has hired two people from the public housing communities behind the Washington, DC, baseball stadium, and there is now a Lombardi Cancer Center research office in Southeast Washington, which is a medically underserved area. “When the community can see that you are actually investing and you are not doing what is referred to as helicopter research, where researchers come in, collect all these data, then run and
write papers and get more funding, that is what helps you engage the community,” said Wallington.
Brach then asked the panelists to comment on how researchers conducting evaluations of community health literacy interventions can ensure that individuals with limited health literacy benefit from the intervention, as opposed to just looking at the entire community. Thota said that when his group examines health equity in community guide reviews of early education interventions targeted to historically underserved populations, they look at applicability for specific populations in that setting and whether the researchers have stratified results by the different participants. Thota noted that as a result of listening to the workshop discussions, he will look closely at how to look at health literacy more explicitly in his group’s reviews. “I think this group can help us think more about how we collect the data,” he said.
Wallington, getting back to Brach’s request, said that looking specifically at whether an intervention benefits those individuals with low health literacy has to be done intentionally, not haphazardly. It is important, she said, to establish a baseline that will help establish a target for those individuals. She said that she likes the Institute of Medicine’s report on leading indicators for the Healthy People 2020 initiative (IOM, 2011) as a source of benchmarks. At the individual level, possible metrics could be improved knowledge or adherence to a specific regimen. For example, in her work on human papillomavirus vaccinations, she looks at initial uptake and whether parents bring their child back for the second required dose. Formative data, cognitive interviews, and even motivational interviewing can be good resources for establishing criteria, metrics, and baseline measures, she said. Wallington also pointed to the importance of actually asking people if they believe the intervention is helping them.
Referring to the fact that some members of the workshop’s audience come from underresourced organizations that are trying to do health literacy work, Brach asked the panelists for ideas on how those organizations can approach evaluation with limited resources or connect with qualified evaluators. “They are out there doing this work and want to produce the evidence,” said Brach. Espinosa suggested identifying the community benefit organizations in those communities and seeing if they have evaluation capacity, and finding the universities that are already working in the community. He also said that there are many volunteers who can be convinced to work on an evaluation project. As far as getting qualified and experienced evaluators, his go-to source is the American Evaluation Association,3 which maintains a database of experienced evaluators, some of whom will occasionally work pro bono. Another strategy he uses is to break down an
evaluation into pieces and identify who in the community might be able to help with specific pieces at no or minimal cost.
Wallington reiterated Pleasant’s message that evaluation has to start at a project’s planning stage. One of her mentors at Georgetown told her to always include evaluation in the budget of a grant. It may not get funded, but if it is not included then there is no chance of it being funded. Echoing Thota’s suggestion to find the universities working in a community, Wallington noted that schools of nursing and psychology departments do great work and have graduate students who need projects. She also recommended contacting the Annie E. Casey Foundation,4 which has started a national evaluation training program primarily for junior faculty and requires every Fellow who goes through the program to have a project to evaluate. “If you do not have a lot of funding, contact Annie E. Casey to see if your organization could be one of those project sites,” said Wallington.
Another issue the attendees wanted the panel to address was how they can find out what evaluations have produced and how they can use those results. Thota said CDC’s Community Guide website5 has more than 250 recommendations based on reviews of community interventions that span the health spectrum. The website also includes methods and processes, as well as contact information for the dissemination and implementation team, which provides technical consultations on implementation. He noted that the task force also identifies evidence gaps in its reviews, with the goal of spurring future evaluation research to fill those gaps.
Brach noted that the paper commissioned for this workshop can also serve as a resource. Wallington added that in the cancer area, the National Cancer Institute’s Cancer Control P.L.A.N.E.T. portal6 provides access to data and resources that can help planners, program staff, and researchers to design, implement, and evaluate evidence-based cancer control programs. It contains study protocols as well as promotional and evaluation tools, examples of pre- and posttest surveys, and trial designs.
Brach’s final request to the panel was to talk about what happens if an evaluation shows that an intervention does not work. Espinosa replied that failure is very important. “I have certainly learned how to do better at engaging communities,” he said, referring to lessons he has learned from failed projects. An evaluation can provide information on how to fine-tune an intervention, such as redesigning materials so they are more understandable, changing the way information is presented, or rethinking the community engagement and recruitment strategies.
Thota seconded Espinosa’s remarks and then expressed his frustration about researchers not publishing what does not work. In his opinion, researchers doing good evaluations should not be afraid of failure because it will only make the next study better. Reporting what does not work also prevents others from repeating the same mistakes. Wallington agreed with Thota and called on journal editors to embrace publications that talk about what does not work in addition to what does work. She also said it is important when an intervention does not work to go back to the community and discuss the results with community members and get their feedback on why they think the intervention may have been unsuccessful.
Willis started the open discussion by noting that one of the key principles of community-based participatory research (CBPR) is equitable power sharing, and she asked the panelists, given the historical context of researchers coming into communities, how they go about power sharing in their work. As an example, she said that she tries to develop a balanced Memorandum of Understanding before starting a project.
Wallington replied that one of the most important aspects of CBPR and power sharing is transparency. She recounted one case in which a community would not agree to work with her until she disclosed the salaries of everyone involved in the project, and her advisor agreed with that proviso. Another thing she does is include money for the community in her grant proposals, which reviewers seem to appreciate. These days, she said, many national funders are looking to see how grantees are vested in being equitable, and part of equitability is demonstrating that the communities benefit financially. She also includes community members as co-authors on publications, and noted that constructing community advisory boards composed largely of community members is another way of demonstrating equitability. Wallington said one of the biggest compliments she received was when a community member called the main switchboard at Georgetown and asked for “Sherrie Wallington.” When the operator asked if the caller meant Dr. Wallington, the caller replied, “No, she is just Sherrie to us.” That, said Wallington, meant the community saw her as one of them. “When you can get to that point, I think that you are on the road to making sure that you can do this type of research in an equitable way,” she said.
Alvarado-Little noted that some communities, such as an indigenous community from Oaxaca, Mexico, that she works with in upstate New York, do not fall under the community advisory board model and have a different hierarchy or power-sharing structure. She asked the panelists for their thoughts about how to conduct evaluations and include members from those types of communities at the beginning of the project. Wallington suggested that a town hall model might be appropriate for that
type of community. Another approach she uses is to have a community liaison go into public housing communities to do peer-to-peer training and provide feedback on what people in the community feel will work for them. Espinosa said community health workers, particularly those associated with federally qualified health centers (FQHCs), can be a great resource. He suggested talking to them and providing a stipend for their time in exchange for knowledge about the community and the state of the community. Community health workers can also help recruit participants for studies, particularly individuals who may be hard to reach, such as recent immigrants who do not speak much English. Such approaches, he said, are particularly important when trying to reach individuals with the lowest literacy.
Paasche-Orlow asked the panelists how they use the term “community.” Community, responded Thota, can be anything—people with a particular condition or a geopolitical community, for example. Espinosa said that when it comes to implementing interventions, researchers usually identify a community by zip code, but once they start to find out more about that community, they find that the defining element is more about relationships. “It is people that have something in common with somebody else,” said Espinosa. For much of the work he does, community is the service population for a community health center, and even then, he turns to community health workers to define who should participate in an intervention. Wallington agreed with what Espinosa said about relationships and added that there are three important words that go along with relationships: sharing, equitability, and respect.
Brach, whose background is in cultural competence as well as health literacy, said she thinks of communities in terms of ethnic communities and looks for the leaders of those communities. However, she said, there is often tension in such communities over who the legitimate leaders are. She asked the panelists if they had any advice on how to identify those leaders or gatekeepers. Wallington said the most important thing is to know the community, and that requires getting out of the office and going to where the people are. For example, she goes to health fairs, even though research says that health fairs are not very effective. The reason she goes is because that is where the community will be and that is where community members can see her and talk to her, which is how she starts building trust. “You may start with one person, but as people start to see you in the community, then other people will start to reveal themselves to you as key leaders in the community,” Wallington explained.
Virginia Brown from University of Maryland Extensions commented that in her work with agriculturalists, financial educators, nutritionists, and horticulturalists, she has learned that the set of languages and tools is sometimes transferable and that it is possible to borrow from other
fields to make research and evaluation more impactful. She then asked the panelists if they could suggest tools and frameworks that health literacy researchers could borrow from other fields to enable reaching out to different communities. Wallington, a social scientist in the communication field, said that she and her colleagues have always borrowed from other fields, and one that she particularly respects and has learned valuable lessons from is medical anthropology. She also recommended looking at some of the models used in psychology research and combining them with tried-and-true frameworks such as the Health Belief Model. The work she is doing now is based on a social–ecological model that looks at multilevel aspects of the problem she is studying.
Baur said that the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework and the CDC best practices framework include valuable questions that researchers can use in health literacy work. As the practice section editor of the new journal Health Literacy Research and Practice, she encouraged everyone attending the workshop to consider writing up their community-based health literacy interventions and submitting them to the best practices section of the journal.
John asked the panelists where patients or community members who are not going to read peer-reviewed journals can find the results of an intervention. For example, there was a drop in the immunization rate in the Somali community in Minnesota that was triggered by an increase in the number of Somali children diagnosed with autism and the resulting barrage of misinformation from the antivaccination movement. John remarked that the valid research debunking the link between vaccination and autism is all in peer-reviewed journals, which nobody in that community was going to read, let alone understand. “Where do you suggest real information to the real public be presented?” asked John.
Thota said that whether an intervention works or not, there always needs to be some kind of translational mechanism in place to get the information back to the community in a form they can understand. Wallington said her group publishes what they call a lay health compendium, in which they summarize all of their research studies in a short booklet written in lay language. Community liaisons then distribute the booklet to strategic places in the community, such as grocery stores, mom-and-pop shops, and schools. Her group also sends out a monthly newsletter; publishes short vignettes in ethnic, minority, and community newspapers; and uses a listserv to send out information to the public health community. She and her colleagues hold catered town meetings in the community at least two to three times per year, and her community advisory board has a standing meeting three times per year at which she talks about both results and future studies. She also holds conference calls if she needs to discuss something with the board. Within public housing communities, she explained, there are people who
live onsite who can help disseminate information. Wallington noted that as a result of the strong relationship she has developed with her community, she often gets calls from the board or community members requesting help with a problem that may have nothing to do with her area of expertise, but she will connect the caller with a colleague who can help address the issue at hand.