The workshop’s fifth session presented an initiative to develop a framework to standardize measurement and reporting across private-sector initiatives to improve access to noncommunicable disease treatment and care. The presentation by Peter Rockers and Veronika Wirtz from Boston University focused on the decision-making process for the framework’s design and how it is being applied. Following the presentation, the workshop participants engaged in a discussion with the presenters, moderated by John Monahan from Georgetown University.
Rockers began the presentation with a comment about the proliferation of public–private partnerships (PPPs) in recent years and the worry that they may not have achieved their desired impacts. In his opinion, this is where measurement can benefit global health PPPs. “There is the opportunity that measurement provides to identify those programs that do have the greatest impact and start to invest more in them,” he said.
The framework that he and Wirtz presented was developed as part of their work with the AA initiative that Danielle Rollmann described in the previous workshop session. Rockers reminded the workshop that AA had many partners involved in multiple programs taking place at the same time; the framework’s unit of analysis is the individual program. In addition to developing the measurement framework, Rockers and Wirtz’s role in AA includes three other primary activities: creating the Access Observatory reporting system, building the partners’ capacity for measurement, and providing direct measurement support to specific programs.
Rockers said that just as it was important for the partners to be transparent about their principles, so too was it important for the Boston University team, at the beginning of its engagement with the project, to clearly articulate its principles as academics and independent evaluators. The first of these principles was transparency as partners, which meant building a system that would be fully transparent regarding the information and data that the partners collect and report, as well as being transparent in the team’s own relationship with AA. Toward the latter, the Boston University team posted the master service agreement it signed as an independent evaluator on its website for every partner to see.
A second principle was the need to be flexible while maintaining consistency. Flexibility was important, said Rockers, because of the heterogeneity across the different programs operating under the AA umbrella. At the same time, the framework had to be consistent to enable synthesis across the programs. The third principle was to be practical while maintaining rigor. Any framework, said Rockers, is only as valuable as its usefulness in the field, but at the same time, the Boston University team was committed to bringing rigor to measurement and assessment activities.
The framework that Rockers, Wirtz, and their collaborators developed has three main components. The first is a taxonomy of 11 strategies to develop a simplified approach to categorizing the hundreds of different programs in AA. The 11 strategies within the framework’s taxonomy are community awareness and linkage to care, health service strengthening, health service delivery, supply chain, financing, regulation, manufacturing, product development research, licensing agreements, pricing scheme, and medicine donation. Rockers noted that many programs use multiple strategies. A logic model for each strategy laid out the pathways through which program activities aimed to achieve the intended outcomes and impacts, and each concept in each logic model had a corresponding indicator with a clear definition. These indicators enabled the partners, program designers, and implementers to collect and report standardized data.
The Access Observatory mentioned earlier is a public website that complements the framework and fulfills the Boston University team’s transparency principle, said Rockers. It houses AA program descriptions, collected data, and the methodologies for the data collection. He noted that everyone will be able to access all of the information the partners are collecting on these programs to compare and synthesize across programs. From his and Wirtz’s perspective, the Access Observatory will be the vehicle for generating a body of evidence across the various strategies and programs to determine which ones are working best and which ones are not meeting their goals and to start to move the entire initiative toward greater investments in those strategies that are most cost-effective.
Wirtz then described the process by which the Boston University team developed the framework, which included two points in the process when the team received formal feedback from corporate partners, the World Bank, and UICC. The first feedback received at 5 months, she said, helped the team clarify terminology and descriptions of the metrics. The second feedback opportunity, regarding the forms used to report into the Access Observatory, occurred several months later.
To Wirtz, the most interesting part of the development process was the tensions that arose and the opportunities and challenges those tensions created to strengthen interactions among the partners. The sources of tension included commercial versus social aims, practicality versus rigor, and confidentiality versus transparency. For example, the programs in AA often had both commercial and social aims, and the tension between the two was explicit in some of the training activities, when corporate partners questioned how measuring social aims would benefit their objectives. The tension between confidentiality and transparency can be seen in the pharmaceutical sector, as a pharmaceutical company may want to report issues but is unable to because of regulatory restrictions. Similarly, the tension was apparent when the Boston University team had to negotiate with the university’s legal team to post the master service agreement on the Access Observatory.
Having a shared language enabled effective communication with and among the partners when addressing these challenges. Developing a shared language required careful listening, said Wirtz, to gain familiarity with how the various partners used informal language. She and her collaborators and the partners went through a collective and iterative process to develop the shared language and terminology and agreement on concepts. As an example, Wirtz said that some of the corporate partners said they use the term patient journey, and the Boston University team had to first understand what the term meant, translate it into words all partners could understand, and then find an adequate place for that concept in the framework.
Turning to the two dimensions of governance—transparency and accountability—that Michael Reich discussed in his opening presentation, Wirtz said the framework addresses transparency to the public regarding the scope of the program activities and the social impact of those programs through the Access Observatory. However, the mechanism by which measurement will address accountability is still a work in progress. “It is important because measurement for measurement’s sake is not what we want,” said Wirtz. “We want measurement to result in actionable progress and strategies in making these programs better.” Her final point was that measurement requires commitment from the global health community. Achieving better measurements, she said, requires public investments, and the return on those investments would be transparency, accountability, and shared learning.
John Monahan asked Wirtz and Rockers about how many people they and their colleagues had to speak with to develop the shared language and how they knew when they had succeeded in developing it. Wirtz said she could not identify exactly how many people the Boston University team spoke with, but she noted that they spoke with representatives from all 23 corporate partners, the World Bank, UICC, and the metrics groups. Developing the language was an iterative process, and even now, that process continues. An important part of the process, she said, was to document these discussions and iterations. Rockers added that the public health literature also contributed to the development of the common language, and the team is now immersed in the business literature to further develop the shared language.
Rollmann, who is engaged in the metrics efforts of AA, remarked that one of the requests of the Boston University team was to develop a framework to measure the aggregate results of diverse programs. She noted that there are a range of companies within the AA initiative, and while one company may have questioned Boston University about the need for measurement of social aims, there are others that design programs with social aims in mind and regularly publish results. That difference, she said, stems from the companies’ diversity of experience. The companies vary in both size and level of experience in designing and implementing programs that support health system strengthening to advance noncommunicable disease care and treatment.
Brenda Colatrella asked Rockers and Wirtz to further describe the debate about practicality versus rigor and who makes the ultimate decision about what is practical. Rockers replied that the point about practicality versus rigor is one that comes up in every conversation he and his colleagues have with the corporate partners. From his perspective, learning what is practical is a process and is not self-evident. The hope is that the process of instituting measurements within the corporate partners will evolve over time regarding the capacities that can be built and the resources that can be made available. While his expectations are modest, he believes that companies will report on the scope of program activities to start, with a few instances of more rigorous evaluation. “The companies that are at the point where they are ready to invest in that kind of evaluation are the ones that have a history of understanding the value of that kind of evidence,” said Rockers.
Wirtz added that the Boston University team had extensive interactions with companies on their current data collection processes and what would be feasible in those contexts. She and her colleagues then offered advice and support on what could be feasible in those specific contexts. “Having the right balance is important and requires an intensive listening exercise to understand what is done and how it is done and then with our expertise in data collection to think about what could be done and what resources are available,” said Wirtz.
Robert Bollinger asked how the team optimizes data quality when the sources of data are so diverse and range in quality. Rockers said that because the Access Observatory carries his team’s name, the team is responsible for ensuring that the data are of high enough quality to release publicly. However, the team cannot visit every project site to validate the data, so its approach is to be as transparent as possible in reporting on the processes each program used to collect them. That information is captured on a form that each program completes, documenting the source of every indicator it reports. In fact, he said, part of what his team has been instilling in the programs and partners is a commitment to clearly understand where the data come from and how they were collected. That can be an issue because the nongovernmental partner often collects the data.
Hanna Kettler from the Bill & Melinda Gates Foundation applauded AA’s embrace of impact measurement as a core part of its activities, particularly given the diversity of the programs within the initiative. She asked if the companies or programs are collecting the data or if there has been an investment in building capacity to do evaluation at the program level. Wirtz replied that the Boston University team and AA have started an initiative to involve other institutions that are interested in evaluation. In fact, one of her team’s aims is to be a convener for bringing together interested institutions and building evaluation capacity in the global health area in general.