resolution feasible. Storage and usability of large model data sets will also be key considerations. As discussed below, the committee recommends a two-pronged approach: continued use and upgrading of dedicated computing resources at the existing modeling centers, complemented by an intensive research program on efficient implementation of high-resolution climate models on architectures requiring extreme concurrency (as also called out in Recommendation 10.2). This section also discusses the pros and cons of a more radical step: establishment of a new national climate-specific computing facility of higher performance than any current U.S. climate modeling institution can afford to maintain.
Existing climate modeling centers typically use computing resources that are largely dedicated to their institution. These resources are a crucial underpinning of the development and use of climate models: they provide the flexibility required to support fast-turnaround model testing and innovative, risky model development activities, while also supplying the computing capability for institution- or agency-specific goals (such as simulations in support of assessments). This approach has proven extremely useful in the past, and this mode of operation and support needs to be maintained, with the largely dedicated facilities refreshed on an ongoing basis. They represent a substantial national investment: the Committee estimates that the cost of maintaining a computing system of the class of Gaea (dedicated almost exclusively to GFDL climate modeling) or NCAR’s Yellowstone system (for which climate modeling is one major priority) exceeds $30 million per year, including purchase, maintenance, power, and human support, and assuming a 3-year replacement cycle.
However, as noted previously, this arrangement of dedicated climate computing assets does not currently provide the critical mass of computing needed for breakthrough, innovative modeling activities that require the largest possible computational capability. Examples of such activities include ultra-high-resolution climate model simulations for the study of regional climates and extremes, the use of eddy-resolving ocean models to study critical ocean issues such as the oceanic uptake of heat and carbon and their feedback on the climate system, and global cloud-resolving modeling to better understand the interaction of atmospheric convection and climate. The machines associated with individual institutions are well suited to their more targeted goals, but not necessarily to such breakthrough calculations. For climate models such as CESM, the most computationally intensive simulations are therefore performed on the largest supercomputing systems (e.g., those maintained by DOE), which serve a much broader scientific community than climate modeling. This strategy is attractive because it leverages costly external national resources and allows the climate modeling community to experiment with a wider class of computer architectures than it could afford internally.