Informing the Selection of Leading Health Indicators for Healthy People 2030: Proceedings of a Workshop–in Brief

January 2020

Experts from the health measurement and population health fields gathered on May 28, 2019, in Washington, DC, at a workshop organized by the National Academies of Sciences, Engineering, and Medicine for the Committee on Informing the Selection of Leading Health Indicators for Healthy People 2030 (hereafter, "the committee"). The workshop presentations and discussion aimed to help inform the committee's task, which is to (1) advise on the criteria for selecting Healthy People 2030's Leading Health Indicators (LHIs), and (2) propose a slate of LHIs for the Healthy People Federal Interagency Workgroup (FIW) to consider in finalizing the Healthy People 2030 (HP2030) plan.1 Committee chair George Isham of HealthPartners Institute welcomed guests and attendees and described the committee's process before introducing Don Wright, Deputy Assistant Secretary for Health, to give prefatory remarks. Wright described the history of the Healthy People program in the U.S. Department of Health and Human Services (HHS) as a decadal initiative to measure and track the nation's disease prevention and health promotion efforts, and the purpose of the LHIs as distilling what is most important to the nation in the health domain. Wright noted that the Healthy People LHIs are intended to be supported by available, high-quality data; describe the state of population health and galvanize efforts to improve it; and facilitate collaboration at all levels of government and across sectors. After giving an overview of the committee's tasks and the HHS timeline for finalizing the HP2030 plan, Wright concluded by taking questions from the committee. Paula Lantz of the University of Michigan asked Wright about the key benefits of the Healthy People process.
Wright pointed to Healthy People's ability to facilitate shared priority setting and mobilization of resources among the many stakeholders invested in the nation's health. Marthe Gold of The New York Academy of Medicine asked Wright about HHS's efforts to engage non-traditional health sectors (e.g., transportation) in the HP2030 process. Wright mentioned several efforts to involve other sectors of the federal government in the HP2030 process, in and beyond the work of the FIW. Isham asked Wright to address the apparent tension between long-standing Healthy People objectives emphasizing disease prevention and the newer emphasis on the upstream factors that influence health in HP2030's framework. Wright explained that Healthy People's emphasis on well-being has evolved in parallel with an increased appreciation in public health of the social determinants of health (SDOHs), and he stated that HHS plans to build on this thinking in HP2030 priorities over the coming year. Lastly, following a question by Gold about the origin of data sources used for HP2030, Wright confirmed that Healthy People relies on non-federal in addition to federal data.

1 More information about the study Informing the Selection of Leading Health Indicators for Healthy People 2030, including the two consensus study reports, one released on August 6, 2019, and the other anticipated in early 2020, is available at http://nationalacademies.org/hmd/Activities/PublicHealth/LeadingHealthIndicatorsForHealthyPeople2030.aspx (accessed September 5, 2019).
The remainder of the meeting focused on four main sessions on the topics of:

1. Perspectives on the purpose and use of the LHIs (or a small, high-level set of indicators for the nation more broadly), with consideration for both national and community needs;
2. Data sources for objectives and the LHIs;
3. Harmonizing with other national metrics sets; and
4. Measuring health equity: insights for the LHIs.

Each session opened with context-setting remarks from a moderator, in each case a member or chair of the Secretary's Advisory Committee on National Health Promotion and Disease Prevention Objectives for 2030 (SAC), which was established by the Secretary of HHS to provide advice on the HP2030 effort. Each panel's presentations were followed by a moderated question-and-answer session. The structure of the workshop has been used to organize this Proceedings of a Workshop–in Brief.

PANEL 1: PERSPECTIVES ON THE PURPOSE AND USE OF LHIS

Therese Richmond from the University of Pennsylvania and chair of the SAC's Subcommittee on LHIs, Subcommittee on Approaches, and Subcommittee on Objectives Review opened the panel by reflecting on lessons gleaned from the previous decade's effort, Healthy People 2020, and their significance to the HP2030 process. Richmond observed the mixed outcomes of Healthy People 2020 and its particular shortcomings in reaching stated goals for all population subgroups.
Looking ahead, Richmond acknowledged both opportunities and key challenges inherent in the HP2030 framework's emphases on well-being, eliminating (not merely reducing) health disparities, and targeting upstream SDOHs more robustly.2 Richmond reiterated a recommendation of the SAC that HP2030 objectives include 10 cross-cutting core objectives that explicitly target SDOHs and well-being and "directly address structural and systematic prejudices and discrimination through law, policy, and practices." Richmond then introduced Anita Chandra of RAND Corporation, who described her research to integrate the science of well-being with government operations and measurement efforts at both the local level (with the city government of Santa Monica, California) and the national level (with the Robert Wood Johnson Foundation's Culture of Health Action Framework). In both cases, Chandra highlighted the usefulness of civic well-being frameworks to engage stakeholders in communities' well-being efforts, as such frameworks point the way for needed changes and promote cross-sector solutions to make health a shared value. Chandra closed with several proposals for the HP2030 LHI selection process, namely, including measures of subjective well-being to broaden monitoring frameworks' sensitivity; thinking beyond "clinical social factors" to identify and explicitly measure what drives health at a structural level; and insisting on community and civic measures of well-being to capture what policies and environments support thriving communities and people (see Box 1).

BOX 1
APPROACHING WELL-BEING MEASUREMENT
From the presentation of Anita Chandra on May 28, 2019

• Health monitoring efforts often capture clinical or otherwise narrow understandings of health and its immediate social factors, while missing structural and systemic drivers of health in the broader sense of overall well-being.
• To address this gap, considerable capacity exists for well-being measurement within a social determinants of health framework to capture not just day-to-day emotions or traditional measures of individual health but also life meaning and life satisfaction, civic engagement, and political participation and governance outside of traditional health realms.
• Integrating the measurement and science of well-being with government budgeting and planning represents a significant growth area for the health and well-being sector at the local and national levels.
• Sets of well-being measures need to both capture and catalyze stakeholders' willingness to invest in equitable health and well-being for people and communities.

2 The HP2030 framework describes the initiative's vision, mission, foundational principles, and overarching goals, and its development included a round of public comments responding to a draft version. See https://www.healthypeople.gov/2020/About-Healthy-People/Development-Healthy-People-2030/Framework (accessed September 9, 2019).
Next, Bobby Milstein of ReThink Health reminded the audience of the 1988 Institute of Medicine report that found that public health was in disarray, and he cautioned against perpetuating that organizational disarray by defining problems and addressing solutions in a fragmented and disconnected way. He offered five design principles to consider when crafting indicators based on a broad, multi-sector understanding of the system that produces health and well-being in regions across the United States. The principles are:

1. Counter disarray in the system with a big-picture view that shows broad patterns and relationships;
2. Differentiate personal health and well-being from vital conditions (e.g., humane housing, lifelong learning) and urgent services (e.g., homeless services, unemployment and food services);
3. Use data to tell a larger story (e.g., local successes despite worrying national trends);
4. Inspire effort to fulfill widely shared norms; and
5. Celebrate success and confront unfinished work.

Milstein summarized a framework, already endorsed by the SAC for HP2030, featuring a portfolio of "urgent services" that anyone under adversity might need temporarily to regain their best possible health and well-being, as well as the "vital conditions" that all people depend on all of the time to reach their full potential (see Figure 1). With this practical, systems-level framework, Milstein argued, it becomes considerably easier to select, monitor, and interpret indicators of overall system performance (e.g., self-rated health) in the context of major assets (e.g., high school graduation) and prevailing threats (e.g., household poverty), both of which affect demand and supply dynamics across the entire portfolio. Milstein added that this portfolio, and the indicators associated with it, provides a portrait of current investments that would help to anticipate the future trajectory of population health and well-being.
FIGURE 1 A practical portfolio: the vital conditions and the urgent services. SOURCES: Presentation of B. Milstein on May 28, 2019, and ReThink Health, 2018 (https://www.rethinkhealth.org/wp-content/uploads/2018/10/RTH-WellBeingPortfolio_InstructionsSummary_10222018.pdf, accessed November 4, 2019).
Soma Saha (Stout) of the Institute for Healthcare Improvement and Carley Riley of the Cincinnati Children's Hospital Medical Center, both also representing the 100 Million Healthier Lives initiative, presented the Well-being in the Nation (WIN) report (developed by a public–private partnership to inform the National Committee on Vital and Health Statistics). Saha and Riley shared the goal of and lessons learned from WIN's health metrics design process. The process was intended to develop a measurement ecosystem (a parsimonious set of core measures and an expandable menu) through a Delphi process with 100 organizations. The resulting ecosystem includes "core measures with well-being of people, well-being of places, and equity," along with "a set of leading indicators with 12 domains and associated domains related to determinants of health with all upstream, midstream, and downstream considered." The process also involved co-designing metrics with community stakeholders (e.g., people with lived experience); harvesting and testing measures with communities during the process; and "road-testing" to ensure the resonance and usefulness of a "living library" of measures. Saha and Riley underscored the need for a measurement system that can accommodate rapidly proliferating data streams and equip action-ready stakeholders across multiple sectors with the data they need to act, working backward from key health outcomes with a conceptual model of change that can help decision makers identify upstream drivers of health and well-being and the leading indicators needed to capture them.
Saha and Riley suggested that in addition to implementing such leading indicators, monitoring frameworks could be strengthened by incorporating metrics that have not previously been included in national frameworks in the United States, such as Cantril's Ladder (the Cantril Self-Anchoring Striving Scale). The Cantril scale asks the respondent to rank personal well-being on a ladder with rungs from 0 to 10, with 0 being "the worst possible life for you" and 10 being "the best possible life for you,"3 to efficiently capture individuals' objective and subjective well-being.

Following these presentations, Isham moderated a question-and-answer session with the panel. Jonathan Skinner of Dartmouth College asked Milstein how and why to practically measure and interpret the utilization of urgent services, as opposed to measuring other upstream drivers whose increase or decrease is more clearly desirable or not. Milstein replied that metrics ought to assess a health system's popular basic services by tracking both access and utilization and the upstream factors that drive their demand. Longitudinal trends in utilization and a theoretical model including upstream factors can be used to interpret a change in urgent services utilization as good or bad. Next, Ebony Boulware of Duke University asked where the biggest concentrations and gaps of well-being data may be found. Chandra responded that most well-being data are local and that expanding national well-being datasets, particularly to include data on subjective life self-evaluation and civic involvement, is a key monitoring priority. Riley noted that local uptake of well-being initiatives is nonetheless still vital, and that signaling engagement to community partners and disseminating data to them are critical components of national–local programming coordination.
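The 0-to-10 ladder item described above is straightforward to operationalize in analysis code. A minimal sketch follows; the thriving/struggling/suffering cut points follow Gallup's published convention for the Cantril scale and are an assumption here, not something specified at the workshop.

```python
# Hypothetical sketch: scoring a Cantril-style ladder item.
# The 0-10 anchors come from the scale itself; the category cut points
# (thriving, struggling, suffering) follow Gallup's convention and are
# an assumption for illustration.

def classify_wellbeing(current: int, future: int) -> str:
    """Classify a respondent from two 0-10 ladder ratings:
    life today (current) and expected life in five years (future)."""
    for rating in (current, future):
        if not 0 <= rating <= 10:
            raise ValueError("ladder ratings must be between 0 and 10")
    if current >= 7 and future >= 8:
        return "thriving"
    if current <= 4 and future <= 4:
        return "suffering"
    return "struggling"

print(classify_wellbeing(8, 9))  # thriving
```

A monitoring framework would then report the share of respondents in each category, alongside the raw 0-10 distribution.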
Darcy Phelan-Emrick of the Baltimore City Health Department followed up by asking about the extent to which Cantril's Ladder is used in national health metrics instruments in the United States or abroad. Riley noted that Cantril's Ladder has not been used formally in public U.S. surveys; however, it has been extensively validated by Gallup for use in the United States and has been integrated into other countries' well-being metrics activities. Milstein and Richmond both endorsed the idea of U.S. measurement instruments adopting Cantril's Ladder, which, Chandra added, is already used by the Organisation for Economic Co-operation and Development and as part of monitoring efforts in several states. Sheri Johnson of the University of Wisconsin asked whether subjective well-being measures might inspire complacency if they exceed objective measures of well-being. Chandra acknowledged this possibility while highlighting it as a reason to measure surroundings and socio-environmental context as well as individual well-being. Saha also suggested such discrepancies ought to provoke more objective analysis; however, she asserted, and Riley and Milstein agreed, that such discrepancies may reflect resiliency and even a normative good, rather than a mere inconsistency in the data to be resolved. Next, Gold asked what constitutes an effective evidence base for the HP2030 objectives. Milstein mentioned practice-based knowledge about what it takes to assure the conditions people need to thrive, noting that conventional intervention research has its inherent biases and limitations. Saha advised the committee to consider deviating from "tried-and-true" indicators when a new or non-health-sector indicator might better inform HP2030 progress. Chandra emphasized that identifying effective evidence requires identifying and tracking the upstream drivers behind health indicators of interest.
Lastly, Isham asked what opportunities exist to track disparities by non-geographical subgroups. Saha noted health care delivery systems could see intervention opportunities when stratifying by administrative factors like Medicaid enrollment. Riley suggested that differences in metrics among socio-demographic groups could become standalone metrics, while Chandra disagreed. She noted that the structural issues behind inequities such as chronic violence distinguish them from mere differences between two subgroups' outcomes. Rather, measuring such disparities according to the upstream drivers that engender them is more accurate and effective than simply analyzing data by race/ethnicity, sex, and so forth alone. Riley agreed with Chandra's comment and advocated for the inclusion of both outcome measures and upstream driver measures.

3 See, for example, https://news.gallup.com/poll/122453/understanding-gallup-uses-cantril-scale.aspx (accessed November 4, 2019).

PANEL 2: DATA SOURCES FOR OBJECTIVES AND LHIS

Edward Sondik, former director of the National Center for Health Statistics (NCHS) and chair of the Data Subcommittee of the SAC, moderated the meeting's second panel, opening with introductory remarks on considerations of data to be used for LHIs in HP2030. Acknowledging HP2030's challenge in integrating national, state, and local efforts, Sondik asked the committee to consider how data from these different levels relate to each other, and how aggregated local data relate to national data. Sondik underscored that local processes are unique and often defined by practice and monitoring gaps. If data gaps are addressed with non-traditional data sources, strategic data partnerships could potentially help evaluate data quality, accommodate new data streams, and unleash the full potential of data, he added. Following these remarks, Sondik introduced Ali Mokdad of the Institute for Health Metrics and Evaluation (IHME) at the University of Washington. Mokdad explained IHME's role coordinating the Global Burden of Disease (GBD) effort, which is the work of an international consortium that measures "epidemiological levels and trends worldwide."4 GBD reports several usual metrics, including mortality, prevalence, and incidence, as well as disability measures: in short, what is killing and ailing a population.
To support this work, IHME utilizes data from many sources while evaluating them for quality, moving away from subjective measures (e.g., the Grading of Recommendations, Assessment, Development, and Evaluations [GRADE] approach, or the World Cancer Research Fund criteria) toward a transparent evidence score for each association, rated by p-value. Mokdad shared how the GBD platform can be used for different use cases and clients across sectors, and also highlighted GBD's hierarchy of mutually exclusive and exhaustive causes of disease and injury, which together describe the state of global health at a granular level. One example illustrated IHME's capabilities and emphasis on tracking progress in nations' health improvements through its Sustainable Development Goals (SDGs) and Healthcare Access and Quality (HAQ) indices. Notably, IHME offers both national and subnational HAQ profiles for all countries, highlighting the profound variation in health outcomes below the national level for some countries. Mokdad demonstrated IHME's work to model key developments based on GBD data, including changes in disease expenditure for various health services, nations' future health scenarios, leading causes of DALYs (disability-adjusted life years) over time and their relative change in prevalence, and life expectancy estimates at census-tract levels. In his closing remarks, Mokdad offered several key IHME findings about U.S. health outcomes, including the United States' poor outcomes, especially for women, compared with other high-income countries. Mokdad suggested the committee consider prioritizing metrics that are easily communicated and compatible with dynamic health scenarios, as these will illustrate where the country's health is headed to a broad array of stakeholders. He called for tracking disparities in health and risk factors at the local level, as key disparities emerge and are addressed at that level.
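The DALY metric at the center of GBD reporting combines fatal and nonfatal burden. A simplified sketch of the arithmetic follows, using the standard decomposition DALY = YLL + YLD; the input numbers are invented for illustration, and GBD's actual estimation pipeline is far richer than this.

```python
# Sketch of the DALY arithmetic behind the GBD metrics described above.
# All numeric inputs are invented for illustration.

def yll(deaths: float, life_expectancy_at_death: float) -> float:
    """Years of life lost: deaths times remaining standard life expectancy."""
    return deaths * life_expectancy_at_death

def yld(prevalent_cases: float, disability_weight: float) -> float:
    """Years lived with disability (prevalence-based, as GBD has used since 2010)."""
    return prevalent_cases * disability_weight

def daly(deaths, life_expectancy_at_death, prevalent_cases, disability_weight):
    """Disability-adjusted life years: fatal plus nonfatal burden."""
    return yll(deaths, life_expectancy_at_death) + yld(prevalent_cases, disability_weight)

# 1,000 deaths at a standard remaining life expectancy of 30 years,
# plus 50,000 prevalent cases with a disability weight of 0.2:
print(daly(1_000, 30, 50_000, 0.2))  # 40000.0
```

One DALY represents one lost year of healthy life, which is what allows fatal and nonfatal conditions to be ranked on a single scale.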
Sondik next introduced Amy O'Hara of the Massive Data Institute at Georgetown University. O'Hara's presentation focused on data access and governance, challenging the committee to consider where data are coming from and how they might be accessed, prepared, and analyzed to support HP2030's monitoring activities. Where and how such data are collected, O'Hara noted, are vital to understanding a source's validity, its consistency and reliability over time, and its susceptibility to distortion by inequities in Internet access or other access issues. Similarly, when or how often data are collected shapes the nature of the comparisons for which the data can be used (e.g., comparing outcomes in two different counties at a fixed point in time, in contrast with a comparison over several years). O'Hara also provided a brief overview of considerations involved in integrating data from different sources or from different communities, states, or other levels of population. She emphasized the need for a common schema and a clear, well-documented process that can be replicated. O'Hara focused the next portion of her remarks on reviewing several kinds of data sources. She noted the wealth of public-sector surveys from HHS, NCHS, and the Census Bureau while recognizing the bureaucratic barriers that often exist to accessing such data if no operational precedent exists for doing so. Although O'Hara observed that public data sources are typically more transparent, she urged the committee to consider the utility of private-sector surveys and especially administrative data (e.g., from state Supplemental Nutrition Assistance Program or Temporary Assistance for Needy Families agency datasets) for substantiating HP2030 objectives and informing LHI selection. O'Hara also pointed to place-based, community-generated, and potential future (e.g., Internet of Things) data as sources to be considered for possible relevance to HP2030 or future health monitoring efforts.
More broadly, accessing these overlooked datasets calls for a wider effort to set norms and to develop standards and practices that facilitate large-scale data collection across all health-related sectors. To conclude, O'Hara challenged the committee to emphasize less what data sources are immediately available and instead to focus on what needs measuring and what data sources could yield effective indicators to accomplish that task.

4 See http://www.healthdata.org/gbd/about (accessed November 4, 2019).

Following O'Hara's presentation, Isham moderated a question-and-answer session with the committee. Skinner opened by asking Mokdad how IHME handles small-area estimates with an appropriate amount of data imputation. Mokdad replied that data from other sources, such as the Census or neighboring counties or past years, are used to establish covariates for small areas, and the small-area methodology is subsequently validated by sampling from large counties (e.g., Los Angeles). The GBD methodology uses imputation when linking data between national health surveys for poorly established indicators. Next, Gold asked how the committee should address feasibility when considering whether a proposed LHI could be supported by interventions. Sondik highlighted this as an important and difficult question because of over-reliance on significance testing to evaluate intervention effectiveness. He urged caution when predicting whether interventions will be successful when brought to new contexts. Isham asked about the ultimate aim of HP2030's core objectives, and whether the reduction in core objectives (from roughly 1,200 to some 350 in the draft form, and ultimately up to approximately 600) still allows sufficient scope to adequately inform and drive change in America's health system, per HP2030's ambitions. Given that the decision to reduce the number of core objectives has already been made, Sondik advised identifying the top priorities the select group of indicators should address. This is inherently a value judgment, Sondik noted, and would need to focus on the key questions about the quality and effectiveness of the U.S.
health system and the key opportunities to intervene (e.g., on the high maternal mortality rates). O'Hara echoed that the list of core objectives should provide information critical to decision making and managing quality, not merely add data. Any monitoring effort, O'Hara added, must be understood in terms of how the data at hand are generated, not merely how they are managed and analyzed. Mokdad noted that objectives must be specifically defined to enable good indicators, but added that some objectives backed by poor measures may merit inclusion because they stimulate a demand for better data.

PANEL 3: HARMONIZING WITH OTHER NATIONAL METRICS SETS

Dushanka Kleinman of the University of Maryland and a co-chair of the SAC introduced the panel with brief remarks. Kleinman acknowledged the importance of harmonization across multiple federal efforts, citing especially the need for "aligning ways in which to assess and monitor clinical research" and other activities. She also noted that not only must the LHIs be understandable and actionable to a broad set of stakeholders, but the HP2030 effort must actively engage these stakeholders in new ways. This active stakeholder engagement includes identifying and incorporating stakeholders from non-health sectors that can support health and well-being. Kleinman also echoed earlier speakers' emphasis on the importance of and need for community-level data, and for linkages between those community data and national datasets. Lastly, Kleinman encouraged the committee to consider the data partnership infrastructure: how could HP2030 harness new data in innovative, broad ways with other government, business, and nonprofit entities? Kleinman introduced Tom Eckstein of Arundel Metrics, which oversees the annual America's Health Rankings (AHR) study.
AHR quantifies and ranks population health metrics for states and specific population subgroups, with the aim of providing sound science and longitudinal comparisons to inform cross-sector dialogue about population health. Eckstein explained that the model AHR uses compiles data on health behaviors, community and environmental conditions, policy, and clinical care to create aggregate, rankable health outcomes. In doing so, AHR prioritizes indicators that effectively enable communicating, comparing, and acting on data with a solid evidence base. He also mentioned the premium AHR places on indicators that are compatible with combined measures. Eckstein closed by describing some of the key challenges that AHR faces. These include how to maintain an indicator list that is both parsimonious and adequately comprehensive; how to categorize and communicate measures in accurate and user-friendly ways; and how to manage changes over time in data streams' availability, variability and soundness, and terminology.

Following Eckstein's remarks, Kleinman introduced Kristen Lewis of the Social Science Research Council's Measure of America (MOA) initiative. MOA aims to provide sound, user-friendly tools for understanding well-being and inequity in America. MOA's conceptual framework centers on the Human Development Index (HDI), which is intended to capture the well-being of people better than traditional economic measures do. HDI emphasizes a limited set of indicators, namely life expectancy at birth, educational attainment and school enrollment, and median earnings, to allow comparability, granularity, and effective advocacy and communication. To further support communication with stakeholders, MOA also emphasizes conveying its data in ways that make them accessible and understandable to a range of readers, highlighting take-home messages, key differences between subgroups, and the high-level upstream drivers of health (such as public policies and societal investments).
MOA works closely with community partners to disseminate these findings and to help communities apply the HDI framework for setting new well-being goals and designing programs to reach them.
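The HDI construction described above, combining health, education, and earnings indicators into one comparable score, can be sketched simply: each indicator is rescaled onto a common 0-10 range between fixed goalposts, then averaged. This is a simplified illustration only; the goalpost values below are invented and do not reproduce MOA's actual formula or weights.

```python
# Simplified sketch of an HDI-style composite: rescale each indicator onto
# 0-10 between fixed goalposts, then take the unweighted mean.
# Goalpost values are invented for illustration, not MOA's actual ones.

def scale(value: float, lo: float, hi: float) -> float:
    """Rescale an indicator onto 0-10 between fixed goalposts lo and hi."""
    return 10 * (value - lo) / (hi - lo)

def hd_index(life_expectancy: float, education_score: float,
             median_earnings: float) -> float:
    health = scale(life_expectancy, lo=66, hi=90)
    education = education_score  # assume already expressed on a 0-10 scale
    income = scale(median_earnings, lo=15_000, hi=65_000)
    return round((health + education + income) / 3, 2)

print(hd_index(life_expectancy=79.0, education_score=5.0, median_earnings=35_000))
```

Because every component lives on the same 0-10 scale, the composite can be compared across counties or subgroups, and a weak component (say, earnings) remains visible when the components are reported alongside the index.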
Lewis underscored that adopting this active dissemination and community partnership role is key for a metrics set, such as HP2030, to catalyze real change. Lewis concluded with several other lessons learned from MOA. She observed that combining distinct datasets can reinforce key messages for stakeholders, who are more likely to act when presented with local health disparities data. Lewis also counseled that sharing data with both policy makers and community partners requires an explicit surfacing of the structural causes of health inequities behind those findings for the sake of fairness.

Marjory Givens, a scientist at the University of Wisconsin and the County Health Rankings and Roadmaps (CHR) program, was the third and final presenter on the panel. Givens presented on CHR's work and lessons learned. CHR collects and ranks health outcomes data from state and local sources to provide informative, actionable comparisons of health and well-being across America's counties through 35 ranked measures and more than 37 additional measures (see Figure 2 for the CHR framework). In doing so, CHR supports population health by engaging communities in population health initiatives, identifying root causes of health and illness, supporting new policy and decision making, and offering monitoring and evaluation capabilities. Givens described how CHR selects its measures according to stated program goals and community needs; for example, whether the measures correspond to feasible and equitable health interventions, or whether they are easily communicated in conjunction with other metrics initiatives. CHR also considers technical feasibility issues: Are potential measures validated and reliable? Are these measures regularly collected at the county level, for nearly all counties, with minimal lag period? Are these measures rankable? Givens closed with several questions challenging the committee to think similarly critically about the LHIs.
Should LHIs be scaled by geography and/or disaggregatedâand if so, how? Should data sources be uniform across geographic scale, or should they be adjusted for factors like rurality? How can we measure equity and include it as an explicit norm in metrics approaches? Most of all, what is progress in shaping the health of the nation? Following the panel presentations, Isham moderated discussion between the committee members and the speak- ers. Johnson started by asking which audiences are most critical to engage with HP2030âs LHIs. Lewis responded that the target audience depends on the level/sector that the LHI is meant to motivate, but it may be worthwhile to organize sets of indicatorsâor even distinct productsâfor specific target audiences. Gold next asked how important it is to have data at state and local levels amid the struggle to keep a parsimonious set of objectives that link national and subnational data. All three panelists affirmed the imperative for local-level data, suggesting that tracking equity is most effective at the local lev- el. Skinner asked next how conflicts between different, combined datasetsâ reported measures, whether directly measured or estimated, for a given population should be resolved. The panelists remarked that while combining datasets require clear transparency in reporting when conflicts occur, these instances are a much smaller issue compared to media and end-usersâ lack of data literacy. Addressing this question of the publicâs data literacy, Lantz asked Lewis whether the public FIGURE 2 County Health Rankings framework. SOURCES: Presentation of M. Givens on May 28, 2019; University of Wisconsin Population Health Institute, County Health Rankings & Roadmaps 2017. www.countyhealthrankings.org. 7
accurately understands life expectancy as MOA reports it. Lewis stated that the public's understanding of life expectancy reporting depends on the extent to which the public can grasp take-home messages and intervene accordingly. Lantz also asked whether fertility statistics are included in these metrics for demographic purposes. Both Eckstein and Lewis responded that AHR and MOA, respectively, do not include fertility statistics because they are not strictly rankable from best rate to worst; however, both measure sets include data on access to family planning services, and AHR tracks the proportion of intended pregnancies by state. MOA also includes additional data on the use of reproductive services and rates of intimate partner violence. After that exchange, Isham closed by asking which non-geographic disparities may be overlooked and would need to be considered when parceling population data to inform equity in HP2030's objectives. Lewis noted that MOA explored non-geographic subgrouping, namely by race and ethnicity, exercise amount, diet, and insurance status, when consulting with Santa Barbara's Cottage Health. Givens noted that focusing on administrative factors such as insurance status is worthwhile, especially given their relevance to policy design, but doing so might encourage over-reliance on traditional health care-based solutions.

PANEL 4: MEASURING HEALTH EQUITY

Moderator Nico Pronk, president of HealthPartners Institute and co-chair of the SAC, opened the last panel of the meeting with several comments. He reminded the committee that health equity is a central and cross-cutting piece of the HP2030 framework, as is the emphasis on overall well-being.
Considering that the LHIs derived from core objectives will only guide HP2030 insofar as they reflect the vision the HP2030 framework provides, Pronk highlighted the SAC's finding, presented in its seventh report, that many of the FIW-proposed objectives did not align with this larger vision. However, Pronk noted, this provides an opportunity to think more broadly about the approach to LHI selection. He proposed that in addition to adequately reflecting the HP2030 framework, the LHIs should allow disaggregation and clearly spotlight vulnerable populations' health needs. Moreover, the LHIs must balance a focus on disease prevention with one on disease prevalence and allow meaningful health comparisons with other countries. Pronk noted that this may require loosening the restriction that the LHIs be derived from the FIW-proposed list of core objectives so that the LHIs can adequately capture the vision of HP2030. Pronk introduced Brian Smedley of the National Collaborative for Health Equity. Smedley presented on the Health Opportunity and Equity (HOPE) initiative, which tracks health equity through a set of focused data and tools in an opportunity-focused narrative well suited to communicating in a changing political landscape. Smedley differentiated HOPE from other equity measurement tools by its capacity for disaggregating data (by race and ethnicity, income, and education, at both national and state levels) and by its benchmarking approach. In contrast to the traditional, white-normative method of benchmarking, which compares other subgroups' health outcomes to typically better-off white outcomes, HOPE creates benchmarks using the average outcomes of the high-earning, college-educated population in the five top-performing states.
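HOPE's actual computations were not detailed in the presentation summary; purely as a rough illustration, the benchmarking and distance-to-goal ideas described above might be sketched as follows. Every function name, rate, and population figure here is invented for the example and is not drawn from HOPE itself.

```python
# Illustrative sketch only: the real HOPE methodology is more involved,
# and all inputs below are hypothetical.

def hope_benchmark(reference_rates):
    """Average 'good health' rate of the advantaged reference group
    across the five top-performing states."""
    top_five = sorted(reference_rates, reverse=True)[:5]
    return sum(top_five) / len(top_five)

def distance_to_goal(subgroup_rate, subgroup_population, benchmark):
    """Number and share of people in a subgroup who would need to reach
    good health for the subgroup to meet the benchmark."""
    share = max(benchmark - subgroup_rate, 0.0)
    people = round(share * subgroup_population)
    return people, share

# Hypothetical inputs: reference-group rates in seven states, and one
# subgroup of 1,000,000 people with a 70 percent good-health rate.
reference_rates = [0.92, 0.90, 0.89, 0.88, 0.87, 0.80, 0.75]
benchmark = hope_benchmark(reference_rates)
people, share = distance_to_goal(0.70, 1_000_000, benchmark)
print(f"benchmark={benchmark:.3f}, people={people}, share={share:.1%}")
```

Reporting both the count and the share mirrors the pairing of absolute and relative measures that Smedley described.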
Utilizing both current performance and goals for a given subpopulation, HOPE also reports a distance-to-goal measure (i.e., the number and share of people whose health status or conditions for health would need to improve for the subpopulation to meet the HOPE goal). Smedley concluded by demonstrating several HOPE state profile tools that visualize ranked health outcomes and distance-to-goal measures by race/ethnicity for a given state. These visualizations and Smedley's key findings underscored the profound health improvements states and the nation would experience if all people enjoyed the same advantages as high-earning college graduates. Smedley also presented slides from co-presenter Steven Woolf, who was unable to attend the meeting. Woolf's slides examined the body of research on population attributable risk (i.e., the proportion of cases that would be averted if a risk factor, such as systemic inequities, were eliminated). Smedley briefly touched on the statistical calculations used to compute such risk before describing several landmark examples of these estimates in the literature, meant to illustrate attributable risk as an alternate way of conceptualizing and measuring health equity. Examples included McGinnis and Foege's work on the so-called "actual causes of death," attributing 400,000 deaths per year to tobacco use and another 300,000 annual deaths to poor diet and inactivity.5 Elsewhere, Smedley cited work from Woolf and colleagues estimating that nearly 900,000 excess African American deaths occurred between 1991 and 2000 because of racial inequities.6 Smedley closed by highlighting the comparative value of such estimates, noting that between 1991 and 2000 closing inequities in educational attainment could have averted nearly 10 times as many excess deaths as making technological advances.7

5 McGinnis, J., and W. Foege. 1993. Actual causes of death in the United States. Journal of the American Medical Association 270(18):2207-2212. https://www.ncbi.nlm.nih.gov/pubmed/8411605 (accessed September 25, 2019).
6 Woolf, S. H., R. E. Johnson, G. E. Fryer, Jr., G. Rust, and D. Satcher. 2004. The health impact of resolving racial disparities: An analysis of US mortality data. American Journal of Public Health 94(12):2078-2081. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1448594 (accessed September 5, 2019).
7 Ibid.

Sarah Treuhaft of PolicyLink next presented the National Equity Atlas (NEA), which PolicyLink develops in
collaboration with the University of Southern California's Program for Environmental and Regional Equity. Treuhaft explained that NEA provides data on measures of population health, well-being, and inclusion at the national level, for all 50 states, and for the 150 largest metropolitan areas and 100 largest cities in the United States. NEA disaggregates these data by race/ethnicity, gender, nativity, and income to inform community action. NEA specializes in disaggregating data at the local level; monitoring with unique indicators, such as working poverty and the economic benefits of racial equity in income; implementing a community-tested and -approved equity framework; and disseminating data to shape the policy-making process. To this last point, Treuhaft cited NEA's success in encouraging the advent of a One Fairfax Equity Policy in Fairfax County, Virginia, and the Office of Equity, Diversity, and Inclusion in Rhode Island. In closing, Treuhaft underscored the need to disaggregate indicators to highlight inequities. Specifically, she told the committee that examining race alongside other inequities requires both a holistic, SDOH-informed framework and data disaggregation to see which races are over- or under-represented. Additionally, she encouraged choosing pertinent, comparative metrics to illustrate key and overlooked or poorly measured inequities. These could include measuring the racial wealth gap as a standalone indicator, or using more accurate measures of economic insecurity than the federal poverty level, such as 200 percent of the federal poverty level (see Figure 3 for the distribution of economic insecurity by race and ethnicity in 2015). Isham moderated the question-and-answer session.

FIGURE 3 Total U.S. population and economically insecure population by race/ethnicity, 2015. SOURCE: Presentation of S. Treuhaft, May 28, 2019. Reprinted with permission from PolicyLink. Copyright 2018.
Gilbert Gee of the University of California, Los Angeles, asked how to capture so many inequities with the most parsimonious set of indicators and objectives. Both Smedley and Treuhaft remarked that racial inequity and its enduring impacts on a changing population must be prioritized first when monitoring inequities. Smedley further recommended that HP2030 set as a data infrastructure goal developing a nuanced system for disaggregating data on multiple inequities simultaneously. Afterward, several questions arose about HOPE's benchmarking methods. Johnson asked whether benchmarking against high-earning, college-educated subpopulations' outcomes may effectively lend itself to victim-blaming critiques and/or the health comparisons to wealthy whites that HOPE purportedly aims to avoid. Smedley acknowledged this possibility and pointed to the need for effective messaging that dissuades victim blaming while objectively indicating that this high-earning, college-educated group tends to enjoy the excellent health outcomes that should be attainable by everyone. Skinner also asked what benchmarking strategies might avoid artifactual spikes in equity when benchmark groups start experiencing declining outcomes. Treuhaft advised simply disaggregating outcomes over time by race/ethnicity to examine subgroups' trends independent of any benchmark group. Smedley noted that HOPE uses both count data and distance-to-goal measures (that is, both absolute and relative measures) in these instances to limit such artifactual changes. Isham followed up on Smedley's comment by asking whether HOPE's distance-to-goal measures
are standalone indicators. Smedley clarified that distance-to-goal measures are a communications-oriented detail, not themselves part of HOPE's set of metrics. Lastly, Gold asked how to operationalize objectives that track policies (e.g., related to housing) so that HP2030 can address place-related structural inequities. Smedley highlighted the poignancy and flexibility of a measure examining the proportion of families in low-poverty neighborhoods (i.e., census tracts where fewer than 20 percent of the population is below the poverty line). Treuhaft suggested that embedding policy frameworks, such as PolicyLink's All-In Cities or others, might support operationalizing the measurement of inequities at the local level. Isham opened the floor for public comment and final questions from the committee members. Kleinman observed that the Panel 4 question-and-answer session underscored why ongoing learning and the development of equity measurements are vital. Better surveillance and data collection are needed to address critical inequities, especially the least studied, such as the health outcomes of lesbian, gay, bisexual, and transgender populations. Phelan-Emrick shared an anecdote regarding how community members think about health measures. During her tenure with the Baltimore City Health Department, Phelan-Emrick noted, she learned that residents may come to mistrust the Health Department's messaging about improvements in life expectancy, for example, when the agency's communication about improvements to life expectancy contrasts with some residents' lived experience of illness and death in their social circles. Saha commented on the last panel's discussion. She learned from her experience with 100 Million Healthier Lives that disaggregating data by race multiplies other inequities.
Instead of disaggregation, then, one feasible alternative measure could be the "opportunity for years of potential life gained," per Woolf's and Treuhaft's presentations. Such an indicator allows measuring individual and additive improvements without onerous additional data collection, and it avoids the "zero-sum thinking" of benchmarking against a better-off subgroup. Wayne Jonas of the Samueli Foundation made a final comment, challenging attendees to think of ways to link the aims of achieving health and well-being outcomes with day-to-day health care delivery goals. Once the public comment session concluded, Isham thanked the presenters, committee members, and attendees for their time, described the next steps for the committee, and adjourned the meeting.

DISCLAIMER: This Proceedings of a Workshop—in Brief was prepared by Andrew Koltun as a factual summary of what occurred at the meeting. The statements made are those of the rapporteur or individual workshop participants and do not necessarily represent the views of all workshop participants; the planning committee; or the National Academies of Sciences, Engineering, and Medicine. *The National Academies of Sciences, Engineering, and Medicine's planning committees are solely responsible for organizing the workshop, identifying topics, and choosing speakers. The responsibility for the published Proceedings of a Workshop—in Brief rests with the rapporteur and the institution.

REVIEWERS: To ensure that it meets institutional standards for quality and objectivity, this Proceedings of a Workshop—in Brief was reviewed by Erica Russell, United Way of the National Capital Area, and Wayne Jonas, Samueli Foundation. Lauren Shern, National Academies of Sciences, Engineering, and Medicine, served as the review coordinator.

SPONSORS: This workshop was supported by the U.S. Department of Health and Human Services' Program Support Center.
For additional information regarding the workshop, visit http://nationalacademies.org/hmd/Activities/PublicHealth/LeadingHealthIndicatorsForHealthyPeople2030.aspx.

Suggested citation: National Academies of Sciences, Engineering, and Medicine. 2020. Informing the selection of Leading Health Indicators for Healthy People 2030: Proceedings of a workshop—in brief. Washington, DC: The National Academies Press. https://doi.org/10.17226/25654.

Health and Medicine Division

Copyright 2020 by the National Academy of Sciences. All rights reserved.