Evaluation Design for Complex Global Initiatives: Workshop Summary (2014)

Suggested Citation:"6 Applying Qualitative Methods to Evaluation on a Large Scale." Institute of Medicine. 2014. Evaluation Design for Complex Global Initiatives: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/18739.

6 Applying Qualitative Methods to Evaluation on a Large Scale

Important Points Made by the Speakers

  • The need for training and mentoring, ongoing reflection and reflexivity, collecting data to the point of saturation, and ensuring the accuracy of the data collection process are principles of qualitative methods that are particularly relevant for evaluating large-scale programs.
  • Openness to listening and learning is an instrument of discovery in qualitative research.
  • Relationships within a data team are valuable when challenging each other or providing critiques of data interpretation.
  • Delineating a program’s sphere of control, sphere of influence, and sphere of interest is important in understanding a program’s role as a change agent.
  • Stories about exceptional results can provide insights into the factors contributing to those results.

The use of qualitative methods in evaluating large-scale initiatives can help evaluators understand not only whether something works but how and why it works. This subject was covered in several of the full-panel discussions, but one concurrent session aimed to take a deeper look at the use of qualitative methods for evaluation.

RIGOR AND CREDIBILITY IN QUALITATIVE DESIGN

In the concurrent session on qualitative methods, Sharon Knight, professor of health education in the College of Health and Human Performance at East Carolina University, used the PEPFAR evaluation as an example of how to ensure the qualitative aspects of a large-scale, complex evaluation are as rigorous and credible as possible. She noted that there is no consensus in the literature as to what makes a rigorous qualitative evaluation design but that a few concepts appear repeatedly, such as the need for training and mentoring for those on the evaluation team who are not familiar with qualitative methods; ongoing reflection and reflexivity throughout the data collection and analysis processes; the concept of saturation, or collecting data to the point of redundancy; and ensuring the accuracy of the data collection process.
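
The saturation criterion Knight describes, collecting data until it becomes redundant, can be made operational in analysis tooling. The sketch below is a hypothetical illustration, not part of the PEPFAR evaluation; the code names, the lookback window, and the stopping rule are all assumptions:

```python
def reached_saturation(interviews, lookback=3):
    """Return True when the most recent `lookback` interviews
    contributed no codes not already seen earlier, i.e., data
    collection has become redundant."""
    seen = set()
    new_counts = []
    for codes in interviews:  # each item: the set of codes from one interview
        fresh = set(codes) - seen
        new_counts.append(len(fresh))
        seen |= fresh
    if len(interviews) < lookback:
        return False
    return all(n == 0 for n in new_counts[-lookback:])

# Illustrative history: the last three interviews add no new codes
history = [{"access", "stigma"}, {"stigma", "cost"}, {"cost"},
           {"access"}, {"stigma"}]
print(reached_saturation(history))  # True
```

A rule like this is only a heuristic aid; the decision to stop collecting data remains a judgment made through the team's ongoing reflection.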

Qualitative approaches are also called naturalistic inquiry, Knight said, because they are field-based, nonmanipulative, and noncontrolled. Qualitative researchers go into situations with a mindset of appreciating what is already there. “In fact, you want to make every effort not to change the environment, and certainly not the participants.” Knight cautioned against a tendency to drift toward trying to explain qualitative data with quantitative language because of the desire to understand or signify something as important numerically. One example of this is the tendency to believe something is important because many people said it rather than one or a few who may have had a more nuanced viewpoint. One of the premises of qualitative research and evaluation is the appreciation of multiple views of reality and different perspectives. “Even if you have an n of one with a particular perspective that differs from everyone else’s, that perspective deserves to be honored. It’s not a situation where you have to throw it away because it’s an outlier. Instead, you try to understand it, and certainly not ignore it,” said Knight.

In a qualitative evaluation, she added, it is important to remember that the evaluator is an instrument of discovery that needs to listen, learn, and ask more. “Openness is a stance that we as evaluators should embrace,” Knight stated, “and open-ended questions are the kind of questions we all strive for because it makes it more likely for the participant to be able to tell us the stories that really interest us.” In the same vein, she said that everything is data in this kind of study, but that nothing becomes data unless it is documented and becomes part of the record. It is ethically important to ensure that each individual has given their consent before their words are captured or documented to be used as data. Knight added that in the PEPFAR evaluation, project leadership reinforced these ideas continually.

For the IOM’s evaluation of PEPFAR, both staff and committee members were trained in qualitative methods. Formal training occurred in a 1-week workshop with the IOM staff on the evaluation team. “Team training,” said Knight, “has to begin with thinking about the qualitative assumptions, the method, the approach, the paradigm, and the worldview that qualitative evaluation invites and demands if someone is going to be able to engage in it fully.” Formal training for the committee included informational presentations at committee meetings and a first-day in-country reorientation on process and tools. Role modeling, continuous mentoring, and ongoing discussions provided opportunities for informal training. Every member of the IOM staff also received a copy of Qualitative Research and Evaluation Methods by Michael Quinn Patton (Patton, 2002), which Knight said proved useful for answering questions that arose during the evaluation.

The evaluation team used purposeful sampling for the selection of countries to visit and whom to interview within the countries based on a list of considerations determined by data requirements. Except when prevented by diplomatic protocol, interviews were arranged directly by the evaluation team and not by PEPFAR staff. Knight noted that the team conducted almost 400 interviews in total, including individuals with direct experience with PEPFAR in 13 countries and individuals involved in headquarters management within PEPFAR as well as in the global response to HIV. On country visits, team members received a country visit toolkit comprising a daily agenda for each team member, the interview field note format, a post-interview debriefing form, interview guides, an informed consent script, a guide to evaluation topics, and interview team roles and responsibilities. To support self-awareness through reflection and reflexivity, team members were encouraged to keep a personal journal and to discuss as a team any issues that could affect interviewing and listening skills as well as the interpretation of the data.

The overarching qualitative evaluation question was, “What is PEPFAR’s contribution to the global HIV/AIDS response?” From that starting point, 10 questions emerged covering various aspects of four key evaluation areas: PEPFAR operations, implementation, effects, and transition to sustainability. Each question was broken down into smaller open-ended questions, and subsets of interview questions were selected based on the interviewee. For example, on the subject of knowledge management, the questions were designed to identify how knowledge and information were managed in order to monitor the activities and effects of the program. For the interviewer, this question led to an open-ended request for interviewees to describe the data they collect related to HIV/AIDS programs. If needed, the interviewers could use prompts such as “How do you manage the data you collect?” or “How do you use the data you collect?” Knight said these prompts were merely memory triggers and were not meant to direct the interviewee’s response.
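
The structure Knight describes, an opening open-ended question with optional prompts per topic, selected according to who is being interviewed, can be represented as simple data. A minimal sketch; the topic names, roles, and second topic are hypothetical illustrations, not the evaluation's actual guide:

```python
# Hypothetical interview guide: each topic has one open-ended opening
# question, optional memory-trigger prompts, and the interviewee roles
# it applies to.
GUIDE = {
    "knowledge_management": {
        "opening": "Please describe the data you collect related to "
                   "HIV/AIDS programs.",
        "prompts": [  # memory triggers only, not meant to direct responses
            "How do you manage the data you collect?",
            "How do you use the data you collect?",
        ],
        "applies_to": {"program_staff", "ministry_official"},
    },
    "transition": {  # invented second topic for illustration
        "opening": "Tell me about planning for long-term sustainability.",
        "prompts": [],
        "applies_to": {"ministry_official"},
    },
}

def questions_for(role):
    """Select the subset of guide topics relevant to one interviewee."""
    return [topic for topic, spec in GUIDE.items()
            if role in spec["applies_to"]]

print(questions_for("program_staff"))  # ['knowledge_management']
```

Encoding the guide as data rather than free text makes it easy to generate the per-interviewee question subsets the team used.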

The accuracy of the data collection process was ensured at a number of levels. Knight explained that for in-depth qualitative interviews the PEPFAR evaluation team used end-of-interview summaries with the participants as an immediate assurance that they had been heard accurately. The team then conducted debriefings after each interview, at 1–2 day intervals, and at the end of each week in the field. Interview notes were reconciled among at least two team members, and audio recordings and transcripts were also used for some interviews. On a broader level, for the PEPFAR evaluation a database was used to log all interviews, and an audit trail tracked ongoing design decisions, data collection, and data analysis.
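
The interview log and audit trail can be kept as append-only records so that design decisions, data collection, and analysis steps stay traceable. The sketch below is an illustration under assumed field names (`when`, `kind`, `detail`); the actual PEPFAR database schema is not described in the workshop summary:

```python
import datetime

audit_trail = []  # append-only: entries are never edited or deleted

def log_event(kind, detail):
    """Record one design decision, interview, or analysis step
    with a UTC timestamp."""
    audit_trail.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "kind": kind,    # e.g. "interview", "design_decision", "analysis"
        "detail": detail,
    })

# Hypothetical usage
log_event("interview", "Country A, day 2; notes reconciled by two staff")
log_event("design_decision", "Added post-interview debriefing question")
print(len(audit_trail))  # 2
```

Keeping the trail append-only is what makes it an audit trail: later readers can reconstruct not just what was decided but when, and in what order.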

In closing, Knight said that the team members found that relationships within the team were valuable when challenging each other on issues relating to data acquisition or when providing critiques of ongoing data interpretation.

THE VALUE OF QUALITATIVE METHODS

Qualitative methods can reframe, explore different perspectives, and facilitate, said Anastasia Catsambas, president of EnCompass LLC. That last item—facilitate—is particularly important in the types of evaluations that she conducts, because they tend to be highly participatory. “People think participatory means just sitting around and participating, but participatory to us means very structured, very deliberate activities,” she said. These activities have agendas, including structured discussions that get biases on the table and create interactions that lead to learning, which can be documented and incorporated into the evaluation.

Turning to the issue of reframing, Catsambas discussed the use of outcome mapping, which she and colleagues used to help the Saving Newborn Lives program reframe its understanding of its role as a catalyst or change agent. Outcome mapping, she said, starts by examining a program’s activities in terms of its sphere of control—the things it controls and for which it is accountable. It then moves to the sphere of influence, which examines how the program influences changes in the behavior and actions of partners and stakeholders. Finally, this type of analysis looks at the sphere of interest, which for this program would be changes in the supply and demand for newborn care and newborn health.
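
The three spheres of outcome mapping can be expressed as a simple classification of observed results. This is a hypothetical sketch: the classification rule and the example results are invented for illustration, not taken from the Saving Newborn Lives analysis:

```python
# Outcome mapping distinguishes what a program controls, what it
# influences, and what it is ultimately interested in changing.
SPHERES = {
    "control":   "activities and outputs the program is accountable for",
    "influence": "behavior changes in partners and stakeholders",
    "interest":  "changes in supply of and demand for newborn care",
}

def classify(result, directly_managed, via_partners):
    """Place one observed result into a sphere (illustrative rule:
    managed results -> control, partner-mediated -> influence,
    everything else -> interest)."""
    if result in directly_managed:
        return "control"
    if result in via_partners:
        return "influence"
    return "interest"

managed = {"training workshops delivered"}
partnered = {"ministry adopts newborn care guideline"}
print(classify("ministry adopts newborn care guideline",
               managed, partnered))  # influence
```

In practice the boundaries between spheres are drawn through discussion with the program, not mechanically, but making the categories explicit is what lets a program reason about where its value added is greatest.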

Using the framework helped the Saving Newborn Lives program to think in terms of priorities and where it would get the biggest impact in terms of value added. For example, the program had been criticized for conducting research, and the framework freed it to accept that research was not where its competitive advantage lay. The end result was that Saving Newborn Lives changed its emphasis from acting as a catalyst for introducing newborn health activities—which involved research, translating evidence into advocacy, launching information campaigns, trying to influence policy, and similar activities—to one that focuses on its catalytic role in scaling up newborn care. With its new emphasis, the program is focused on building newborn health into the maternal-child continuum, mobilizing communities, and promoting the sharing of evidence and the spread of newborn health practices to engage a wider audience. Catsambas said that the program seized on this new approach and implemented it before the evaluation was complete.

To illustrate the role of qualitative methods in exploring perspectives, Catsambas discussed an evaluation her company, EnCompass, did of PEPFAR’s activities in the eastern Caribbean region. From the start, she and her team observed that the U.S. government and the 13 countries had different perspectives on why the program was not achieving the desired effects. Both perspectives were real, she explained, and when the parties received the draft report, each contended that the other side was represented better in the evaluation. Through a process called appreciative inquiry, the two sides came to realize that they needed to stop thinking about the evaluation and instead focus on how they would work together more effectively in the future. Evaluators acting as facilitators, she explained, need to stay appreciative, affirm all realities, be respectfully honest, and be open to conclusions different from those they originally reached.

Catsambas explained the idea behind appreciative inquiry. “Appreciative inquiry starts with the premise that something works, even if it’s the exception,” she said, “and it starts by identifying an affirmative topic of excellence that we want to inquire about.” Next comes the inquiry phase, which uses facilitated dyad interviews and group interpretation of the shared data. “It is basically storytelling, but it’s not static like storytelling,” Catsambas said. “What you want to do is to see: what does this system look like at its best?”

The inquiry phase is followed by a visioning process, a design phase, and then an innovation phase, which she characterized as the hardest step in appreciative inquiry. “The innovation phase is where you are really talking about the design components of the future, and this is where you get a lot of great ideas for indicators,” said Catsambas.

The final step is implementation. The process develops cultural competence, thereby contributing to implementation, because people tell stories in their own language with their own pictures. It also preserves everyone’s voice, which increases both participation and buy-in while promoting a whole-systems view of the issues.
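
The sequence Catsambas outlines can be written down as an ordered pipeline. The phase names below follow her description; the helper function is purely illustrative:

```python
# Appreciative inquiry phases, in the order Catsambas described them.
AI_PHASES = [
    "affirmative_topic",  # identify a topic of excellence to inquire about
    "inquiry",            # facilitated dyad interviews, group interpretation
    "visioning",
    "design",
    "innovation",         # the hardest step; yields ideas for indicators
    "implementation",
]

def next_phase(current):
    """Return the phase that follows `current`, or None at the end."""
    i = AI_PHASES.index(current)
    return AI_PHASES[i + 1] if i + 1 < len(AI_PHASES) else None

print(next_phase("design"))  # innovation
```

The ordering matters: the innovation phase can only produce useful indicator ideas because the visioning and design phases before it have already articulated what the system looks like at its best.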

During the discussion that followed her presentation, Catsambas commented that storytelling is an important component of qualitative data gathering because it represents someone’s expert experience. As an example, she said that when using appreciative inquiry methods, it is possible to solicit stories about exceptional results and get valuable details on what factors contributed to those results.

OTHER TOPICS RAISED IN DISCUSSION

The discussion session focused largely on the nitty-gritty of conducting interviews and analyzing responses. In response to a question about translation, Knight emphasized the importance of sharing key points with an interviewee to make sure a conversation was captured accurately. She also noted that transcripts can be hard to use because of translation problems. In her project, two or three people conducted each interview, with one person taking notes and the others checking those notes afterward.

Several people in the session stated they do not use audio recordings for these and other reasons. However, others remarked on the value of a recording, even if it is used just to check on notes. Some points can only be derived from repeated reading of a transcript. Also, even conscientious interviewers can get tired and lose information if they are relying solely on notes. One participant pointed to the value of tablets with which notes can be written and stored electronically, which also can facilitate analysis.

In addition, several workshop participants discussed whether it is better to bring in people from outside a country to conduct interviews or to hire local people. Both options have advantages and disadvantages. Local people may be less trained in qualitative research but are often much more attuned to the nuances of a setting. Training local people to conduct interviews also builds capacity within a country, since they then become a source of interviewing expertise. At the same time, local evaluation associations are increasing in number and can be a source of trained interviewers. Another option is for local people to conduct the interviews while outside researchers do the analysis.

Qualitative research can be complicated by the fact that some interviewees are more observant than others, and some interviewers are more capable of eliciting useful responses. One participant noted that this indicates why it can be so valuable to have groups of interviewers talk to groups of interviewees. Though such data gathering, similar to focus groups, raises additional issues, it gives people a chance to hear each other and augment or amend what is said.

Participants also discussed the value of open-ended questions, which can produce information unlikely to surface with more focused questions. However, the responses can be more complex and time consuming to analyze. Software for qualitative research can help with such analyses, one participant observed, even with complex responses.
