Previous reports from the National Academies of Sciences, Engineering, and Medicine (NRC, 2010b, 2012a, 2012b) highlighted the central role that infrastructural, institutional, and workforce capabilities will play in advancing weather and climate modeling and forecasting capacity in the coming decades. Specifically, the reports recognized the importance of (1) aligning modeling research and development with trends in computing and (2) creating professional incentive structures and workforce pipelines to ensure investment in pivotal yet currently underrepresented activities, such as model development, moving research to operational systems, and meeting decision-maker needs.
Many of the barriers identified in these previous reports for weather forecasting and climate modeling are common to subseasonal to seasonal (S2S) prediction. Thus, realizing the full potential of Earth system forecasts on S2S timescales will require overcoming many of the same challenges faced by weather and climate modeling. This chapter describes two core capacity-building elements required for the success of an advanced S2S forecasting capability, building on, and sometimes reiterating, findings and recommendations issued in previous reports (NRC, 2010b, 2012a, 2012b): (1) building S2S cyberinfrastructure capacity and (2) building the S2S workforce.
This section reviews the risks and opportunities posed to the current S2S computational and data infrastructure by changes in technology, as well as the growing cyberinfrastructure requirements to support S2S forecasting. Although the challenges posed by S2S prediction systems are similar to those faced by weather or climate modeling systems, the data and processing requirements of S2S prediction systems will likely test current cyberinfrastructure capacity to at least as great an extent. An expansion of cyberinfrastructure and human capital will be necessary to realize the potential of S2S forecasting.
Several factors drive the growing demand for cyberinfrastructure. Data assimilation, which integrates observational data with models, will be a major driver of the growth in computational and storage infrastructure needed to enable significant improvements in S2S forecasting. As detailed for climate models generally (NRC, 2012b), future S2S models will require increased computational capacity because of the scientific need for higher spatial and temporal resolutions (e.g., for resolving clouds, ocean eddies, and orographic processes; see Chapter 5).
Typical data volumes from the output of S2S prediction models are discussed in Box 7.1. On the observing side, input volumes of more than 1 billion scalar values per forecast cycle into the data assimilation component will be typical (see Chapter 5, sections on routine observations and data assimilation). Increases in observations and changes in model configurations will drive a greater than 1,000-fold increase in data volume and transport relative to what the S2S community handles today. Finally, the analysis phase is multi-purpose and computationally significant: it must produce the first-guess fields for the next prediction run and prepare numerous products for forecasting, decision-making, and research. All of these developments are essential for more accurate, reliable, and useful S2S forecasts. Combining these factors into an example, improving model resolution from 100 km to 25 km and doubling the number of vertical levels as well as model complexity, while running 100 ensemble members, could easily result in a 1,000-fold increase in computational costs compared to today. Thus, the S2S modeling enterprise fundamentally relies on sustained, dramatic improvements in supercomputing capabilities and needs to position itself strategically to exploit them fully.
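The arithmetic behind this example can be sketched in a few lines. The scaling factors below are illustrative assumptions (including a notional baseline ensemble of 25 members for today's systems), not measured costs:

```python
# Illustrative back-of-the-envelope estimate of the computational-cost
# increase described above. All factors are assumptions for the example.

def cost_factor(dx_old_km, dx_new_km, level_factor, complexity_factor,
                ensemble_factor):
    """Rough relative cost of a model configuration change.

    Quartering the horizontal grid spacing multiplies the number of grid
    columns by 16 and (via the time-step stability constraint) roughly
    quadruples the number of time steps.
    """
    horizontal = (dx_old_km / dx_new_km) ** 2  # more grid columns
    timestep = dx_old_km / dx_new_km           # shorter time step
    return horizontal * timestep * level_factor * complexity_factor * ensemble_factor

# 100 km -> 25 km, 2x vertical levels, 2x complexity, and a 100-member
# ensemble relative to an assumed 25-member baseline:
factor = cost_factor(100, 25, level_factor=2, complexity_factor=2,
                     ensemble_factor=100 / 25)
print(f"relative cost: ~{factor:.0f}x")  # ~1024x
```

Under these assumptions the combined factor is about 1,000, consistent with the estimate in the text.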
Finding 7.1: Needed advances in S2S forecast models (e.g., higher resolutions, increased complexity) require dramatically increased computing capacities (perhaps 1,000x) and similar advances in related storage and data transport capacities.
The backdrop for this increase in computational requirements is a disruptive time in the broader landscape of computing systems and programming models. All indications are that increases in computing performance through the next decade will arrive not in the form of faster chips, but in the form of slightly slower chips holding many more computational elements (ASCAC, 2015; NRC, 2012b). Exploiting these new many-core chips will require not only refactoring the existing parallelism to effectively take advantage of their architectures, but also finding additional parallelism throughout S2S applications. As highlighted in NRC (2012a) for climate modeling, this can be achieved in three primary ways: (1) add parallelism by scaling out the problem (increasing the horizontal resolution does this, but at the expense of shortening the model time step); (2) exploit parallelism that is already present but has not been used, for example by introducing task parallelism through overlapping certain physics calculations or by finding shared-memory parallelism (e.g., Open Multi-Processing [OpenMP]); or (3) develop new algorithms with more inherent parallelism, such as the effort to create so-called parallel-in-time (PinT) algorithms (e.g., Cotter and Shipton, 2012). All three efforts will require much higher levels of collaboration among computer scientists, software engineers, applied mathematicians, and S2S scientists.
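Option (2) can be illustrated schematically. In a real model the overlapped physics calculations would be expressed as OpenMP tasks or threads in Fortran or C; the sketch below is only a Python analogy with toy stand-ins for two independent parameterizations:

```python
# Toy illustration of task parallelism: two independent "physics"
# computations on the same state are overlapped instead of run serially.
# The tendency functions are placeholders, not real parameterizations.
from concurrent.futures import ThreadPoolExecutor

def radiation_tendency(state):
    # placeholder for an expensive radiation calculation
    return [0.1 * t for t in state]

def microphysics_tendency(state):
    # placeholder for an expensive microphysics calculation
    return [0.01 * (t - 273.15) for t in state]

def step(state, dt=1.0):
    # The two tendencies do not depend on each other, so they can be
    # computed concurrently and combined afterwards.
    with ThreadPoolExecutor(max_workers=2) as pool:
        rad = pool.submit(radiation_tendency, state)
        mic = pool.submit(microphysics_tendency, state)
        return [t + dt * (r + m)
                for t, r, m in zip(state, rad.result(), mic.result())]

temperatures = [280.0, 290.0, 300.0]  # toy column of temperatures (K)
print(step(temperatures))
```

The point of the pattern is that no new science is required: the parallelism was always present in the dependency structure of the code, but a serial implementation never exposed it to the hardware.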
Finding 7.2: The transition to new computing hardware and software through the next decade will not involve faster processing elements, but rather more elements with considerably more complex embodiments of concurrency. This transition will be highly disruptive.
As with computing infrastructure, the hierarchy of storage devices, including cache, memory, disk, and tape, as well as the virtual memory and file-system abstractions that overlay them, will also undergo a dramatic, transformative change in the coming years (NRC, 2012b). As with climate modeling, these changes will require an assessment of most data storage elements (both memory and disk) of S2S applications to fully leverage the storage and memory hierarchy of emerging computer architectures. The work of identifying the elements of the code that can or should be addressed is in itself a
daunting task. Technologies such as solid-state devices (SSDs), 3-D “stacked” memory, and non-volatile memory (NVM) have been, or will soon be, introduced into planned compute and storage systems. These and other innovations will alter and blur the price points, sizes, and performance characteristics of the traditional storage hierarchy. Further out, hybrid devices such as memristors and other processor-in-memory (PIM) technologies will begin to blur even the distinction between memory and computing itself. Adapting the modeling systems and managing and optimizing the use of this increasingly complex storage hierarchy will be fundamental to realizing the full potential of supercomputing investments.
To fully realize the potential of these new computing formats and technologies, the S2S field will need a new breed of software engineers and modelers, as well as data scientists. Training of existing software engineers, along with workforce development to produce a pipeline of adequately trained engineers and scientists, will be required. Universities will need to play a larger role in developing this next generation of computational and data scientists (see section below on Building Capacity in the S2S Modeling and Prediction Workforce).
Finding 7.3: Future storage technologies will be more complex and varied than today; leveraging these technological innovations will require numerous software changes and will likely be highly disruptive.
S2S Application Challenges
For climate models generally, increasing numbers of processing elements combined with deep and complex memory hierarchies will continue to push the limits of application code design and parallel programming standards and will create a challenging environment for high-performance-computing (HPC) application programmers (NRC, 2012b). Current S2S applications already struggle to take full advantage of modern supercomputing systems (with efficiencies typically below 5 percent) (Roe and Wilkie, 2015; Wilkie, 2015). S2S applications possess several special characteristics that make them particularly challenging relative to current and, even more so, expected HPC architectures:
- S2S applications require long simulations compared with traditional numerical weather prediction simulations. This limits the resolution, the inherent number of parallel degrees of freedom, and therefore scalability. Similar concerns accompany certain data assimilation algorithms, such as 4D variational methods, which have limited scalability relative to ensemble approaches (NRC, 2008).
- S2S applications are large and complex, with many component models. Both the Community Earth System Model (CESM) and the Climate Forecast System (CFS), for example, have more than 1.5 million lines of source code. Characteristics typical of many algorithms in S2S applications—large numbers of variables (e.g., from increased model complexity) and/or irregular memory access patterns (e.g., unstructured grids and some advection schemes)—do not map well onto memory systems with deep cache hierarchies, wide cache lines, and decreasing amounts of memory per processing element. The introduction of vector capabilities into many-core processors creates challenges for the “branchy” physics codes1 typical of S2S applications.
- S2S phenomena are representative of chaotic systems that are sensitive to initial conditions (see Chapters 4 and 5). For this reason, developers currently require bit-for-bit reproducibility (i.e., providing the same output when provided with the same input across different runs [Arteaga et al., 2014]) for testing and verification of model results. This restriction is a limiting factor in fully leveraging the optimization capabilities of compilers and elemental math libraries. In the future, this bit-for-bit requirement may become untenable when issues of fault resilience, and architectures with extreme levels of concurrency and complexity, further erode reproducibility (NRC, 2012b; Palmer, 2015). The possibility of irreproducible computation presents a fundamental challenge to the present methodology for the testing, verification, and validation of S2S model results. If architectural or software infrastructure changes, or compiler optimization, nudges the answers even by a minute amount, then there is no way to prove whether the change has pushed the system into a different climate state other than computing the climatology of long control runs (usually 100 years, to account for slow climate processes). This requirement is restrictive and represents a considerable barrier to the development, testing, and optimization cycle. However, given the computational power that will be used for daily, multimember, long-lead S2S forecasts, which in some cases may involve daily reforecasts as well, the computation of a 100-year climatological simulation does not seem formidable even in the development cycle. There is evolving research into the use of imprecise computing (in which reproducibility is not elevated to the level of a requirement) to address some of these issues (Palmer, 2015).
One alternative being explored to reduce this cost is to run statistical tests on single-ensemble members for consistency with the parent distribution over much shorter periods (Baker et al., 2015). However,
1 “Branchy” refers to physics codes that include a lot of if-then statements, thus involving significantly more computing time.
having a mode in which S2S models can deliver bit-for-bit reproducibility, on computers able to support it, is essential for the efficient development and debugging of such models. The S2S modeling community may well need to adapt to a world where reruns of experiments are the same only in a statistical sense. As with climate models, such adaptation would entail profound changes in methodology and would be an important research challenge for this decade (NRC, 2012b). A possible resolution of this issue is a compromise in which exact reproducibility is available for model development and testing but abandoned for large-scale operational computations that involve many ensemble members and stochastic parameterization and forcing.
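The core idea behind such statistical-consistency testing can be sketched very simply. The published CESM test of Baker et al. (2015) is considerably more sophisticated (it examines many model variables jointly); the sketch below, with invented diagnostic values, shows only the essential shift from bit-for-bit comparison to a distributional check:

```python
# Minimal sketch of ensemble-based consistency testing: instead of
# requiring bit-for-bit identical output, ask whether a candidate run is
# statistically indistinguishable from an accepted control ensemble.
# (The actual Baker et al. (2015) test is more sophisticated; this shows
# only the core idea, with made-up numbers.)
import statistics

def is_consistent(control_values, candidate_value, z_threshold=3.0):
    """Flag the candidate as inconsistent if its test statistic lies more
    than z_threshold standard deviations from the control-ensemble mean."""
    mean = statistics.fmean(control_values)
    sd = statistics.stdev(control_values)
    z = abs(candidate_value - mean) / sd
    return z <= z_threshold

# Global-mean diagnostic (e.g., surface temperature, K) from accepted runs:
control = [288.01, 288.03, 287.99, 288.02, 288.00, 287.98, 288.04, 288.01]

print(is_consistent(control, 288.02))  # within the distribution
print(is_consistent(control, 288.90))  # far outside: likely a real change
```

A run that fails such a test after a compiler or hardware change would then trigger the expensive long-control-run comparison described above; a run that passes need not.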
Finding 7.4: S2S models are not taking full advantage of current computing architectures, and improving their performance will likely require new algorithms with better data locality, as well as significant refactoring of existing ones for more parallelism.
Shared Software Infrastructure Components
Similar to the climate modeling community (NRC, 2012a), a renewed and aggressive commitment to shared software infrastructure components across the S2S community could be an efficient way to navigate likely transitions in computing and storage infrastructure and to overcome the poor efficiencies of current applications. The transition will likely be more disruptive than the transition from shared-memory vector to distributed-memory parallel systems that started in the late 1990s. Indeed, conventional wisdom in the HPC community (see Zwieflhofer, and Takahara and Parks) is that the next-generation conversion will be significantly more complex and unpredictable than previous changes, given the absence of a clear technology path, programming model, and performance analysis tools.
The S2S modeling community is seeing the natural evolution of software component adoption (e.g., re-gridding from the Earth System Modeling Framework [ESMF] used by CESM, and the National Center for Atmospheric Research’s [NCAR’s] Parallel I/O [PIO] library used by others). The committee believes that the community is at the point where the benefits of an integrative modeling environment (across models and organizations) outweigh the costs of developing the tools to enable such an environment (e.g., the Common Infrastructure for Modeling the Environment [CIME] at NCAR, ESMF at NOAA and the Navy) and the cost of moving to them. With the experience, successes, and lessons learned in the past decade, the forecasting community is positioned to accelerate the development and adoption of an integrative modeling strategy.
So far, few software components have been broadly adopted as standards, because modeling centers that initially invested in one solution have had insufficient funding and incentives to switch to another. The vector-to-parallel disruption led to widespread adoption of coupler technologies at the scale of individual institutions. The forecast modeling community can conceive of a common integrative modeling environment that includes a set of component elements that could be subscribed to by all major U.S. forecast modeling groups, supports a hierarchy of models with component-wise interchangeability, and supports development of high-performance implementations that enable forecast models of unprecedented resolution and complexity to be adapted efficiently to new architectural platforms. The U.S. Global Change Research Program’s Interagency Group on Integrative Modeling (IGIM)2 has begun work to better coordinate the country’s climate modeling efforts (USGCRP IGIM, 2015); such coordination would likely benefit S2S forecasting efforts as well. Concurrently, the National Earth System Prediction Capability (ESPC)—an agreement among the National Oceanic and Atmospheric Administration (NOAA), the Department of Defense (DOD), the National Aeronautics and Space Administration (NASA), the Department of Energy (DOE), and the National Science Foundation (NSF) to work on weather to subseasonal timescales—has adopted a standardized version of ESMF and has proposed common standards for implementing physics parameterizations in atmospheric models.3 ESPC and IGIM are exploring the potential for more commonality as their efforts move forward. Adopting joint standards between IGIM and ESPC will be especially important as the community moves toward seamless prediction, as discussed in Box 5.2.
Finding 7.5: An integrative modeling environment presents an appealing option for addressing the large uncertainty about the evolution of hardware and programming models over the next two decades.
Data Storage, Transfer, and Workflow for S2S Prediction
In addition to the supercomputer/storage infrastructure and the forecasting models, a key element of the forecasting workflow is the data cyberinfrastructure, including the storage, transfer, analysis, and visualization workflows associated with big data sets. The data cyberinfrastructure for end-to-end forecasting workflows may ultimately pose an even larger challenge than the computing challenges confronting S2S prediction. The data elements include several that assimilate the large quantities
2http://www.globalchange.gov/about/iwgs/igim-resources, accessed January 27, 2016.
of operational data with model simulation data and facilitate the data analysis, visualization, and overall workflow for all of these elements.
Remote sensing systems (e.g., satellites, radars, instrumented aircraft, and drones), along with conventional and automated in situ measurements in both the atmosphere and ocean, will produce more than 1 billion scalar values per forecast cycle (see Chapter 5). Transport and preparation of these data for model assimilation is a challenge. Networks must have the necessary carrying capacity with minimal latency, and computing and storage must be available for processing the data into model-ready quantities (e.g., sea surface temperature in kelvins).
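The "model-ready" preparation step amounts to quality control plus standardization of units, as in the small sketch below. The gross-error bounds and fill value here are illustrative assumptions, not operational values:

```python
# Illustrative sketch of preparing raw observations for assimilation:
# basic quality control plus unit conversion (sea surface temperature
# reported in degrees Celsius converted to kelvins). The QC bounds and
# the -999.0 fill value are assumptions for the example.

def prepare_sst_obs(raw_obs_celsius, min_c=-2.0, max_c=40.0):
    """Return model-ready SST values in kelvins, discarding obviously
    bad reports (e.g., sensor fill values or physically impossible SSTs)."""
    ready = []
    for value in raw_obs_celsius:
        if min_c <= value <= max_c:        # gross-error check
            ready.append(value + 273.15)   # degrees Celsius -> kelvins
    return ready

raw = [18.2, 25.7, -999.0, 31.4]  # -999.0 is a fill value
print(prepare_sst_obs(raw))       # fill value is dropped; rest in kelvins
```

At a billion scalar values per cycle, the challenge is not the conversion logic itself but running such screening at scale, close to the data, before network transport.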
Model Simulation Data
The data sharing and management infrastructure benefits from a “network effect” (where the value of a network grows faster than linearly as more nodes are added; see, e.g., Church and Gandal, 1992; Katz and Shapiro, 1985). It involves development of operational infrastructure for petabyte-scale (and soon exabyte-scale; see Overpeck et al., 2011) distributed data stores. The S2S project (described in Chapter 6) has begun efforts to archive and share data from multiple operational S2S forecasting systems, but this effort is still growing and is underutilized by the research community (see Finding 6.2). Fundamentally, this effort should be operationalized and extended to provide these vital functions.
Finding 7.6: Researchers do not currently have a good solution for processing and analyzing S2S data that are federated across many institutions. A dedicated and enhanced data-intensive cyberinfrastructure will be required to enable the distributed S2S community to access the enormous data sets generated from both simulation and observations.
Data Analysis Workflow
S2S data-intensive applications and workflows are likely to face data analysis challenges of scale and scope similar to those faced by the Coupled Model Intercomparison Projects (CMIP). The CMIPs have observed that, because storage systems—as part of an integrated data-intensive computing environment—have not kept up with advances
in computing, they have become a bottleneck and therefore a ripe target for enhancements. These lessons from CMIP efforts serve as a bellwether of what the S2S prediction community can expect. In addition, the demand for data storage, analysis, and distribution resources will grow as models move to finer resolutions, incorporate more complexity, and serve the needs of an increasingly diverse and sophisticated set of users. In response, data-centric workflows, like the applications themselves, must become more parallel and use storage infrastructure more efficiently. The community should also consider reductions of data volume achievable through both lossless and lossy compression of data sets, as well as a shift away from the paradigm of “store now, analyze later” toward mechanisms that allow model output to be analyzed on the fly and rerun as needed.
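The lossless-versus-lossy trade-off can be illustrated with standard-library tools. Real archives use purpose-built compressors; rounding to a fixed precision before lossless compression, as below, is a simplistic stand-in for lossy schemes, used only to show the volume trade-off on synthetic "model output":

```python
# Small illustration of lossless vs. lossy data reduction for model
# output: a smooth synthetic field plus small-amplitude noise.
import random
import struct
import zlib

random.seed(0)
field = [288.0 + 0.01 * (i % 50) + random.uniform(-1e-4, 1e-4)
         for i in range(10_000)]

def pack(values):
    # serialize as 64-bit floats, as a model history file might
    return struct.pack(f"{len(values)}d", *values)

raw_bytes = len(pack(field))
lossless = zlib.compress(pack(field), level=9)

# Lossy stand-in: keep ~0.01 K precision, then compress. The random
# low-order bits (which defeat lossless compression) are discarded.
quantized = [round(v, 2) for v in field]
lossy = zlib.compress(pack(quantized), level=9)

print(raw_bytes, len(lossless), len(lossy))
```

Because the quantized field no longer carries incompressible noise in its low-order bits, the lossy stream is far smaller; the open scientific question, of course, is which bits of real model output are safely discardable.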
There is an increasing need to use, access, and manipulate large volumes of remotely stored data, which places new demands on infrastructure and requires systematic planning and investment at the national level.
Finding 7.7: New approaches to data-centric workflow software that incorporate parallelism, remote analysis, and data compression will be required to meet the demands of the S2S forecasting community.
Moving Forward with Building Capacity for S2S Cyberinfrastructure
Advances in S2S forecast models will require dramatically increased computing capacities, but the transition to new computing hardware and software during the next decade will be highly disruptive, given the increasing concurrency of new HPC systems. In addition, future storage technologies will become more complex and varied. S2S models are not taking full advantage of current computing architectures, and improving their performance to leverage the coming technology innovations will require numerous software changes and will likely be highly disruptive.
At this time, the many emerging architectures do not adhere to a common programming model. Although new ways to express parallelism may well hold the key to progress, from the point of view of the software developers of large and complex scientific applications, the transition path is not clear (NRC, 2012b). Assessments undertaken by the Defense Advanced Research Projects Agency (DARPA) and the DOE (e.g., DOE, 2008; Kogge et al., 2008) indicate profound uncertainty about how one might program a future system that may encompass many-core chips, coprocessors and accelerators, and unprecedented core counts requiring the management of tens of millions of concurrent threads; this challenge will grow to 1 billion threads by the end of this decade. The President’s Council of Advisors on Science and Technology (PCAST) has called for the nation to “undertake a substantial and sustained program of fundamental research on hardware, architectures, algorithms and software with the potential for enabling game-changing advances in high-performance computing” (PCAST, 2010). The prevalent programming model for parallel systems today is based on the Message Passing Interface (MPI; Lusk and Yelick, 2007), shared-memory directives (e.g., OpenMP [Chandra et al., 2001]), or a hybrid of both. The adaptation of the MPI/OpenMP paradigm to exascale architectures is an area of active research.
The weather and climate forecasting community has never retreated from experimenting with leading-edge systems and programming approaches to achieve required levels of performance. The current HPC architectural landscape, however, is particularly challenging because it is not clear what direction future hardware and software paradigms will follow. System co-design is collaborative by nature, involving the end-user/developer community as well as the private sector (e.g., the Coral system4).
It is evident that more resources are needed to make the progress necessary to prepare S2S applications for next generation supercomputers. In light of these challenges, the committee recommends that a national plan and investment strategy be developed to take better advantage of current hardware and software and to meet the challenges in the evolution of new hardware and software for all components of the prediction process.
Recommendation O: Develop a national plan and investment strategy for S2S prediction to take better advantage of current hardware and software and to meet the challenges in the evolution of new hardware and software for all stages of the prediction process, including data assimilation, operation of high-resolution coupled Earth system models, and storage and management of results.
- Redesign and recode S2S models and data assimilation systems so that they will be capable of exploiting current and future massively parallel computational capabilities; this will require a significant and long-term investment in computer scientists, software engineers, applied mathematicians, and statistics researchers in partnership with the S2S researchers.
4http://energy.gov/articles/department-energy-awards-425-million-next-generation-supercomputingtechnologies, accessed January 27, 2016.
- Increase efforts to achieve an integrated modeling environment using the opportunity of S2S and seamless prediction to bring operational agency groups (e.g., the Earth System Prediction Capability [ESPC]) and integrated modeling efforts (e.g., the Interagency Group on Integrative Modeling [IGIM]) together to create common software infrastructure and standards for component interfaces.
- Provide larger and dedicated supercomputing and storage resources.
- Resolve the emerging challenges around S2S big data, including development and deployment of integrated data-intensive cyberinfrastructure, utilization of efficient data-centric workflows, reduction of stored data volumes, and deployment of data serving and analysis capabilities for users outside the research/operational community.
- Further develop techniques for high-volume data processing and in-line data volume reduction.
- Continue to develop dynamical model cores that take advantage of new computer technology.
Building Capacity in the S2S Modeling and Prediction Workforce
The current workforce of S2S model developers is insufficient to meet the growing need for S2S model development (Jakob, 2010). Most modeling centers have only a small number of people directly involved in model development. It is difficult to quantify the number of S2S model developers in the United States because a systematic study of the modeling workforce has never been done. Many of the challenges in maintaining a robust S2S model development workforce are similar to those faced in climate model development. As such, much of this section draws heavily on previous work on climate modeling (NRC, 2012b).
Current Challenges in the S2S Model Development Workforce
The development and use of comprehensive S2S models in the United States require a large number of talented individuals in a diverse set of disciplines. The critical point is that development of atmospheric and environmental prediction models, for S2S and other ranges, must become an interdisciplinary effort involving scientists, software engineers, and applied mathematicians (NRC, 2008). As described for climate models, these areas of expertise include the following (NRC, 2012b):
- scientists engaged in understanding the S2S prediction system, leading to the development of new parameterizations and other model improvements
(distinct cadres of scientists are often needed for various model components, such as the ocean or terrestrial ecosystem models);
- scientists engaged in using the models for well-designed numerical experiments and conducting extensive diagnostics of the models to better understand their behavior, ultimately leading to model products and to scientific insights that provide the impetus and context for model improvements;
- scientists studying the regional details provided by the archived results from global model simulations and related downscaling efforts, and how these vary across various models;
- support scientists and programmers to conduct extensive sets of numerical simulations in support of various scientific programs and to ensure their scientific integrity;
- software engineers, applied mathematicians, and scientists that straddle these areas to explore fundamental new algorithms and approaches that can fully utilize new generations of computing and storage architectures;
- software engineers to create efficient, parallelizable, and portable underlying codes, including the development and use of common software components;
- data scientists to understand and manage complex workflows and to facilitate easy and open access to model output through modern technologies;
- hardware and software engineers to maintain the high-end computing facilities that underpin the modeling enterprise; and
- interpreters to translate model output for decision-makers.
From the limited data available (NRC, 2012b), it appears that the level of human resources available for S2S modeling has not kept pace with the demands for increasing realism and comprehensiveness of the models. Data on the numbers of students involved in S2S model development do not exist, but proxy data and anecdotal evidence (NRC, 2012b) suggest that the pipeline for S2S model developers is not growing in a robust fashion.
These considerations suggest that the development of S2S and other predictive models must increasingly become a community endeavor involving the operational centers and the academic community. To be effective, there must be mechanisms to encourage interchange of personnel and talent, either as long-term collaborators or as shorter-term visitors. For example, students might well perform their dissertation research in an operational center under the collaborative supervision of center scientists and faculty members in their academic institution.
In addition to not having sufficient human resources, many of the skills needed by the S2S workforce are yet to be developed (e.g., new algorithms, tight coupling between
the understanding of the science and the software requirements), which places an even greater imperative on maintaining a robust pipeline of early-career scientists who are involved in model development. This will become more critical with the next generation of supercomputers (see section above on Building Capacity for S2S Cyberinfrastructure), and serious efforts will be required to bridge the gap between scientists and the software engineering and numerical algorithms skills needed to utilize this new hardware. These gaps in the necessary workforce skills require significant attention and could be significant impediments to progress in S2S forecasting.
Finding 7.8: From the limited data available, it seems that the cadre of trained S2S modelers is not growing robustly in the United States and is not keeping pace with the needs of this rapidly evolving field.
Current Challenges in the S2S Applications Workforce
Some programs train students to work at the interface of climate science and society (e.g., Columbia University’s Master’s program in Climate and Society5), and these could be a valuable resource to the S2S enterprise. However, as demands for S2S products continue to grow, there is also likely to be a shortage of the interdisciplinary researchers needed to improve connections between S2S forecasts and their use. These include interdisciplinary researchers in boundary organizations and other interdisciplinary research centers, product development specialists in the private sector, and agency operations personnel with training or expertise in S2S predictability. They also include social and behavioral researchers capable of examining decision processes to identify barriers to use and to improve the flow of information between physical scientists and users.
The challenges of connecting information production to use are discussed in Chapter 3. Here, the focus is on the skills needed to enable those connections. The potential scale of use dwarfs the current production of people trained in interdisciplinary research or research in the social and behavioral sciences focused on using weather or climate information in decision-making. Weather and climate information is not well integrated into traditional academic disciplines that produce many of the agency personnel who may use S2S information, such as staff at water management agencies or large agricultural businesses. In addition, relatively few academic institutions offer interdisciplinary degrees that include physical, social, and behavioral sciences focused on issues related to weather or climate.
Finding 7.9: Interdisciplinary academic programs and centers lack the capacity to meet growing needs for research and applications necessary to maximize the use of S2S information. Few academic programs include weather or climate as a component of training the future workforce.
Building a More Robust S2S Workforce
S2S model development is a challenging job. It involves synthesizing deep and broad knowledge, working across the interface between science and computing, and working well in a team. Thus it is important to hire, train, and retain the most talented people available in this field. There are often insufficient incentives to attract promising young people to S2S model development: early-career computer programmers may have more lucrative career opportunities elsewhere, and early-career scientists may choose to work with S2S model output to examine scientific questions, or pursue other strategies that allow them to publish more journal articles, rather than work on model development. One way to counter this bias would be an enhanced recognition and reward system for writing S2S model computer code and for producing modeling data sets, including the recognition of such effort through stronger requirements for citation and co-authorship, both within modeling institutions and by academic users and collaborators. This is a nontrivial challenge, as discussed in NRC (2012b). S2S modeling groups could also compete by offering relatively stable career tracks and the opportunity for stimulating cross-disciplinary interactions with a variety of scientists.
Modeling centers outside the United States, such as the European Centre for Medium-Range Weather Forecasts (ECMWF), have attempted to attract and retain more people in S2S model development work by appointing model developers to 5-year terms, longer than typical research grant cycles in the United States (3 years). ECMWF offers strong incentives to attract top scientists to model development, such as access to excellent facilities, excellent tools (e.g., what some regard as the most advanced numerical weather prediction model in the world), and high, tax-free salaries. Furthermore, the inclusion of highly reputed scientists within a limited staff (150 staff members and 80 consultants) fosters a stimulating environment in which delivering end-use forecasting products and conducting cutting-edge scientific research are valued and directly coupled.
Beyond the specific model developer needs of the S2S enterprise, there is an additional need for people who work at the component interfaces. As examined in this report, many of the challenges in the S2S realm arise from the linkages of the model
components. Therefore the overall S2S forecasting endeavor would benefit from paying particular attention to recruiting and rewarding scientists who can work across specific disciplines of Earth science to improve our ability to forecast the behavior of the Earth system as a whole.
Attention to workforce development is also needed to ensure that forecasts are as useful as possible to decision-makers. As discussed in Chapter 3, similar to weather forecasts and climate projections, most decision-makers are likely to acquire S2S information via an intermediary. A number of avenues exist for decision-makers to interact with experts working on S2S forecasting, through so-called “boundary organizations” and other interdisciplinary entities. Boundary organizations exist within the public sector (e.g., NOAA’s Regional Integrated Sciences and Assessments program actively engages decision-makers through tailored products, educational programs, and efforts to co-produce climate products and services), academia (e.g., Columbia University’s International Research Institute for Climate and Society), and the private sector. Looking forward, continued growth of both the private sector and the array of products and services in the public sector is required to meet the growing demand for services on S2S timescales. In light of similar trends related to information on climate timescales, a recent NRC report (NRC, 2012b) recommended the formation of training programs for climate model interpreters—people who are trained in both physical and social sciences related to climate, weather, and decision-making, and who can facilitate two-way coproduction of knowledge. There is a similar need for such training programs at S2S timescales.
A possible concrete step forward would be a series of workshops to explore how to feature S2S in more undergraduate and graduate curricula, how to identify and connect with organizations that can support this effort (e.g., the National Science Teachers Association), and how to interact with the private sector to better understand what skills are needed. Other entities, such as the American Meteorological Society (AMS) or NSF, may play a role in some of this coordination.
Forecasting work at all of these timescales—weather, S2S, and climate—involves the prediction of outcomes that people use to make important decisions and is therefore judged in very public ways. Predicted outcomes are validated (or not) on a continuous basis. The fact that S2S connects very strongly to managing environmental risks could be drawn upon more heavily to entrain talented and mission-driven young people into the field.
In looking across the numerous challenges facing the S2S workforce, the committee recommends that the nation pursue a collection of actions to examine the S2S workforce, remove barriers that exist across the entire workforce pipeline, and develop mechanisms to improve and sustain the workforce.
Recommendation P: Pursue a collection of actions to address workforce development that removes barriers that exist across the entire workforce pipeline and increases the diversity of scientists and engineers involved in advancing S2S forecasting and the component and coupled systems.
- Gather quantitative information about the workforce requirements and expertise base needed to support S2S modeling, in order to more fully develop such a training program and workforce pipeline.
- Improve incentives and funding to support existing professionals and to attract new professionals to the S2S research community, especially in model development and improvement, and for those who bridge scientific disciplines and/or work at component interfaces.
- Expand interdisciplinary programs to train a more robust workforce to be employed in boundary organizations that work between S2S model developers and the users of forecasts.
- Integrate basic meteorology and climatology into academic disciplines, such as business and engineering, to improve the capacity within operational agencies and businesses to create new opportunities for the use of S2S information.
- Provide more graduate and postgraduate training opportunities, enhanced professional recognition and career advancement, and adequate incentives to encourage top students in relevant scientific and computer programming disciplines to choose S2S model development and research as a career.