Important Points Made by the Speakers
- The need for training and mentoring, ongoing reflection and reflexivity, collecting data to the point of saturation, and ensuring the accuracy of the data collection process are principles of qualitative methods that are particularly relevant for evaluating large-scale programs.
- Openness to listening and learning is an instrument of discovery in qualitative research.
- Relationships within a data team are valuable when team members challenge each other or critique interpretations of the data.
- Delineating a program’s sphere of control, sphere of influence, and sphere of interest is important in understanding a program’s role as a change agent.
- Stories about exceptional results can provide insights into the factors contributing to those results.
The use of qualitative methods in evaluating large-scale initiatives can help evaluators understand not only whether something works but how and why it works. This subject was covered in several of the full-panel discussions, but one concurrent session aimed to take a deeper look at the use of qualitative methods for evaluation.
In the concurrent session on qualitative methods, Sharon Knight, professor of health education in the College of Health and Human Performance at East Carolina University, used the PEPFAR evaluation as an example of how to ensure the qualitative aspects of a large-scale, complex evaluation are as rigorous and credible as possible. She noted that there is no consensus in the literature as to what makes a rigorous qualitative evaluation design but that a few concepts appear repeatedly, such as the need for training and mentoring for those on the evaluation team who are not familiar with qualitative methods; ongoing reflection and reflexivity throughout the data collection and analysis processes; the concept of saturation, or collecting data to the point of redundancy; and ensuring the accuracy of the data collection process.
Qualitative approaches are also called naturalistic inquiry, Knight said, because they are field-based, nonmanipulative, and noncontrolled. Qualitative researchers go into situations with a mindset of appreciating what is already there. “In fact, you want to make every effort not to change the environment, and certainly not the participants.” Knight cautioned against a tendency to drift toward explaining qualitative data with quantitative language out of a desire to signify importance numerically. One example is the tendency to judge something important because many people said it, discounting the one or few who may have offered a more nuanced viewpoint. One of the premises of qualitative research and evaluation is the appreciation of multiple views of reality and different perspectives. “Even if you have an n of one with a particular perspective that differs from everyone else’s, that perspective deserves to be honored. It’s not a situation where you have to throw it away because it’s an outlier. Instead, you try to understand it, and certainly not ignore it,” said Knight.
In a qualitative evaluation, she added, it is important to remember that the evaluator is an instrument of discovery that needs to listen, learn, and ask more. “Openness is a stance that we as evaluators should embrace,” Knight stated, “and open-ended questions are the kind of questions we all strive for because it makes it more likely for the participant to be able to tell us the stories that really interest us.” In the same vein, she said that everything is data in this kind of study, but that nothing becomes data unless it is documented and becomes part of the record. It is ethically important to ensure that each individual has given their consent before their words are captured or documented to be used as data. Knight added that in the PEPFAR evaluation, project leadership reinforced these ideas continually.
For the IOM’s evaluation of PEPFAR, both staff and committee members were trained in qualitative methods. Formal training occurred in a 1-week workshop with the IOM staff on the evaluation team. “Team training,” said Knight, “has to begin with thinking about the qualitative assumptions, the method, the approach, the paradigm, and the worldview that qualitative evaluation invites and demands if someone is going to be able to engage in it fully.” Formal training for the committee included informational presentations at committee meetings and a first-day in-country reorientation on process and tools. Role modeling, continuous mentoring, and ongoing discussions provided opportunities for informal training. Every member of the IOM staff also received a copy of Qualitative Research and Evaluation Methods by Michael Quinn Patton (Patton, 2002), which Knight said proved useful for answering questions that arose during the evaluation.
The evaluation team used purposeful sampling for the selection of countries to visit and whom to interview within the countries based on a list of considerations determined by data requirements. Except when prevented by diplomatic protocol, interviews were arranged directly by the evaluation team and not by PEPFAR staff. Knight noted that the team conducted almost 400 interviews in total, including individuals with direct experience with PEPFAR in 13 countries and individuals involved in headquarters management within PEPFAR as well as in the global response to HIV. On country visits, team members received a country visit toolkit comprising a daily agenda for each team member, the interview field note format, a post-interview debriefing form, interview guides, an informed consent script, a guide to evaluation topics, and interview team roles and responsibilities. To support self-awareness through reflection and reflexivity, team members were encouraged to keep a personal journal and to discuss as a team any issues that could affect interviewing and listening skills as well as the interpretation of the data.
The overarching qualitative evaluation question was, “What is PEPFAR’s contribution to the global HIV/AIDS response?” From that starting point, 10 questions emerged covering various aspects of four key evaluation areas: PEPFAR operations, implementation, effects, and transition to sustainability. Each question was then broken into smaller open-ended questions, and subsets of interview questions were selected based on the interviewee. For example, on the subject of knowledge management, the questions were designed to identify how knowledge and information were managed in order to monitor the activities and effects of the program. For the interviewer, this question led to an open-ended request for interviewees to describe the data they collect related to HIV/AIDS programs. If needed, the interviewers could use prompts such as “How do you manage the data you collect?” or “How do you use the data you collect?” Knight said these prompts were merely memory triggers and were not meant to direct the interviewee’s response.
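The layered structure Knight described, an overarching question broken into open-ended questions with optional prompts, lends itself to a simple representation. The sketch below is purely illustrative; the class names, the sample role label, and the selection of fields are hypothetical rather than drawn from the actual PEPFAR instruments:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InterviewQuestion:
    """One open-ended question with optional memory-trigger prompts."""
    topic: str
    question: str
    prompts: List[str] = field(default_factory=list)  # used only if needed

@dataclass
class InterviewGuide:
    """The subset of questions selected for a particular interviewee."""
    interviewee_role: str
    questions: List[InterviewQuestion]

# Hypothetical entry mirroring the knowledge-management example above.
km = InterviewQuestion(
    topic="knowledge management",
    question="Describe the data you collect related to HIV/AIDS programs.",
    prompts=[
        "How do you manage the data you collect?",
        "How do you use the data you collect?",
    ],
)

guide = InterviewGuide(interviewee_role="program data manager", questions=[km])
```

Keeping the prompts separate from the core question, as in this sketch, reflects Knight’s point that prompts are memory triggers for the interviewer rather than part of the scripted request.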
The accuracy of the data collection process was ensured at a number of levels. Knight explained that for in-depth qualitative interviews the PEPFAR evaluation team used end-of-interview summaries with the participants as an immediate assurance that they had been heard accurately. The team then conducted debriefings after each interview, at 1–2 day intervals, and at the end of each week in the field. Interview notes were reconciled among at least two team members, and audio recordings and transcripts were also used for some interviews. On a broader level, for the PEPFAR evaluation a database was used to log all interviews, and an audit trail tracked ongoing design decisions, data collection, and data analysis.
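As a rough illustration of the record keeping Knight described, the sketch below models an interview log and an append-only audit trail. It is a hypothetical Python structure, not the database the PEPFAR team actually used; all field names and the sample entry are invented for illustration:

```python
import datetime
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class InterviewRecord:
    """One logged interview, with note reconciliation tracked explicitly."""
    interview_id: str
    country: str
    date: datetime.date
    interviewers: List[str]
    notes_reconciled_by: List[str] = field(default_factory=list)
    audio_recorded: bool = False

class AuditTrail:
    """Append-only record of design, data collection, and analysis decisions."""

    def __init__(self) -> None:
        self._entries: List[Tuple[datetime.datetime, str]] = []

    def record(self, decision: str) -> None:
        # Entries are only ever appended, so the full history is preserved.
        self._entries.append((datetime.datetime.now(), decision))

    def entries(self) -> List[Tuple[datetime.datetime, str]]:
        return list(self._entries)  # return a copy; callers cannot alter history

trail = AuditTrail()
trail.record("Added follow-up prompt on data use after week 1 debriefing")
```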
In closing, Knight said that the team members found that relationships within the team were valuable when challenging each other on issues relating to data acquisition or when providing critiques of ongoing data interpretation.
Qualitative methods can reframe, explore different perspectives, and facilitate, said Anastasia Catsambas, president of EnCompass LLC. That last item—facilitate—is particularly important in the types of evaluations that she conducts, because they tend to be highly participatory. “People think participatory means just sitting around and participating, but participatory to us means very structured, very deliberate activities,” she said. These activities have agendas, including structured discussions that get biases on the table and create interactions that lead to learning, which can be documented and incorporated into the evaluation.
Turning to the issue of reframing, Catsambas discussed the use of outcome mapping, which she and colleagues used to help the Saving Newborn Lives program reframe its understanding of its role as a catalyst or change agent. Outcome mapping, she said, starts by examining a program’s activities in terms of its sphere of control—the things it controls and for which it is accountable. It then moves to the sphere of influence, which examines how the program influences changes in the behavior and actions of partners and stakeholders. Finally, this type of analysis looks at the sphere of interest, which for this program would be changes in the supply and demand for newborn care and newborn health.
Using the framework helped the Saving Newborn Lives program to think about priorities and where it would add the greatest value. For example, the program had been criticized for conducting research, and the framework freed it to accept that research was not where its competitive advantage lay. The end result was that Saving Newborn Lives changed its emphasis from acting as a catalyst for introducing newborn health activities—which involved research, translating evidence into advocacy, launching information campaigns, trying to influence policy, and similar activities—to focusing on its catalytic role in scaling up newborn care. With its new emphasis, the program is focused on building newborn health into the maternal-child continuum, mobilizing communities, and promoting the sharing of evidence and the spread of newborn health practices to engage a wider audience. Catsambas said that the program seized on this new approach and implemented it before the evaluation was complete.
To illustrate the role of qualitative methods in exploring perspectives, Catsambas discussed an evaluation her company, EnCompass, did of PEPFAR’s activities in the eastern Caribbean region. From the start, she and her team observed that the U.S. government and the 13 countries had different perspectives on why the program was not achieving the desired effects. Both perspectives were real, she explained, and when the parties received the draft report, each contended that the other side was better represented in the evaluation. Through a process called appreciative inquiry, the two sides came to realize that they needed to stop thinking about the evaluation and instead focus on how they would work together more effectively in the future. Evaluators acting as facilitators need to stay appreciative, affirm all realities, be respectfully honest, and be open to conclusions different from those they reached originally, she explained.
Catsambas explained the idea behind appreciative inquiry. “Appreciative inquiry starts with the premise that something works, even if it’s the exception,” she said, “and it starts by identifying an affirmative topic of excellence that we want to inquire about.” Next comes the inquiry phase, which uses facilitated dyad interviews and group interpretation of the shared data. “It is basically storytelling, but it’s not static like storytelling,” Catsambas said. “What you want to do is to see what does this system look like at its best dynamic?”
The inquiry phase is followed by a visioning process, a design phase, and then an innovation phase, which she characterized as the hardest step in appreciative inquiry. “The innovation phase is where you are really talking about the design components of the future, and this is where you get a lot of great ideas for indicators,” said Catsambas.
The final step is implementation. The process develops cultural competence, thereby contributing to implementation, because people tell stories in their own language with their own pictures. It also preserves everyone’s voice, which increases both participation and buy-in while promoting a whole-systems view of the issues.
During the discussion that followed her presentation, Catsambas commented that storytelling is an important component of qualitative data gathering because it represents someone’s expert experience. As an example, she said that when using appreciative inquiry methods, it is possible to solicit stories about exceptional results and get valuable details on what factors contributed to those results.
The discussion session focused largely on the nitty-gritty of conducting interviews and analyzing responses. In response to a question about translation issues, Knight emphasized the importance of sharing key points with an interviewee to make sure a conversation was captured accurately. She also noted that transcripts can be very difficult to use because of translation challenges. In her project, interviews were conducted by two or three people, with one person taking notes and the others checking those notes afterward.
Several people in the session said they do not use audio recordings, for these and other reasons. However, others remarked on the value of a recording, even if it is used only to check notes. Some points can be derived only from repeated readings of a transcript. Also, even conscientious interviewers can tire and lose information if they rely solely on notes. One participant pointed to the value of tablets, on which notes can be written and stored electronically, which can also facilitate analysis.
In addition, several workshop participants discussed whether it is better to bring in people from outside a country to conduct interviews or to hire local people. Both options have advantages and disadvantages. Local people may have less training in qualitative research but are often much more versed in the nuances of a setting. Training local people to conduct interviews also builds capacity within a country, because they then become a source of interviewing expertise. At the same time, local evaluation associations are increasing in number and can be a source of trained interviewers. Another possibility is for local people to conduct the interviews while outside researchers do the analysis.
Qualitative research can be complicated by the fact that some interviewees are more observant than others and some interviewers are more capable of eliciting useful responses. One participant noted that this variability is one reason it can be so valuable to have groups of interviewers talk with groups of interviewees. Though such data gathering, similar to focus groups, raises additional issues, it gives people a chance to hear each other and to augment or amend what is said.
Participants also discussed the value of open-ended questions, which can produce information unlikely to surface with more focused questions. However, the responses can be more complex and time-consuming to analyze. Software for qualitative research can help with such analyses, one participant observed, even with complex responses.
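Dedicated qualitative analysis packages offer far more than this, but a minimal sketch can illustrate the basic mechanics of coding open-ended responses. Everything here, including the codebook, the keywords, and the sample responses, is hypothetical:

```python
from collections import Counter
from typing import Dict, List, Set

# Hypothetical codebook: analytic codes mapped to indicative keywords.
CODEBOOK: Dict[str, List[str]] = {
    "data_use": ["report", "dashboard", "decision"],
    "training": ["mentor", "workshop", "training"],
    "barriers": ["delay", "shortage", "difficult"],
}

def code_response(response: str) -> Set[str]:
    """Tag one open-ended response with every code whose keywords appear."""
    text = response.lower()
    return {code for code, words in CODEBOOK.items()
            if any(word in text for word in words)}

def code_frequencies(responses: List[str]) -> Counter:
    """Count code occurrences across responses.

    Frequencies only guide attention: as Knight noted, a perspective
    voiced by a single participant still deserves analysis.
    """
    counts: Counter = Counter()
    for response in responses:
        counts.update(code_response(response))
    return counts

print(code_frequencies([
    "We review the dashboard monthly to guide decisions.",
    "Staff shortages delay reporting.",
]))
```

Keyword matching of this kind is only a starting point; the human judgment emphasized throughout the session, particularly about outlier perspectives, remains central to interpreting the results.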