BOX 10.1 GLOSSARY OF SELECTED TERMS USED IN THIS CHAPTER

Computational infrastructure: the software basis for building, configuring, running, and analyzing climate models on a global network of computers and data archives. See Edwards (2010) for an in-depth discussion of “infrastructure” and software as infrastructure.

Refactoring: rewriting a piece of software to change its internal structure without altering its external behavior. This is often undertaken to improve the efficiency, maintainability, or ease of use of legacy software.

Core: an element of computer hardware that can execute computational instructions. Some current computers bundle several such “cores” onto a single chip, yielding “multicore” systems (typically 8–16 cores per chip) and “many-core” systems (tens of cores per chip).

Node: an object on a network. In the context of high-performance computing (HPC) architecture, a node is a unit within a distributed-memory supercomputer that communicates with other nodes using network messaging protocols, i.e., “message passing.” It is the smallest entity within the cluster that can work as an independent computational resource, with its own operating system and device drivers. Within a node there may be several, or even many, integrated but distinct computational units (“cores”) that communicate through finer-grained, shared-memory mechanisms (“threading”); see the sketch following this box.

Concurrency: simultaneous execution of a number of possibly interacting instruction streams on the same computer.

Flops: floating-point operations per second, a unit of computational hardware performance. Prefixed by the usual metric modifiers for orders of magnitude; a petaflop is 10¹⁵ flops and an exaflop is 10¹⁸ flops.

Exascale: computers operating in the exaflop range, coupled to storage in the exabyte range.

Threads: streams of instructions executing on a processor, usually concurrently with one another in a parallel context.
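The division of labor described under “node,” “core,” “concurrency,” and “threads” can be made concrete with a minimal hybrid sketch (an illustration, not drawn from this report, assuming a standard MPI library and OpenMP support): one MPI rank per node communicates by message passing, while OpenMP threads run concurrently on the cores within each node.

/* Minimal hybrid sketch: message passing between nodes (MPI) plus
 * threading across the cores within a node (OpenMP).
 * Compile with, e.g., mpicc -fopenmp hybrid.c -o hybrid
 * and launch one rank per node, e.g., mpirun -np 4 ./hybrid */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Coarse-grained concurrency: one MPI rank per node. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local = 0.0;

    /* Fine-grained concurrency: threads share memory across a node's cores. */
    #pragma omp parallel reduction(+:local)
    {
        local += 1.0;   /* each thread contributes to a node-local sum */
    }

    /* Message passing combines node-local results into a global sum. */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks = %d, threads per rank = %d, global sum = %g\n",
               nranks, omp_get_max_threads(), global);

    MPI_Finalize();
    return 0;
}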

 

This level of computing power allows global resolutions of 50 km. Each additional 10-fold increase in resolution leads to a more than 1,000-fold increase in operation count, before any additional complexity is considered. As recent history has demonstrated, the testing, debugging, and evaluation required for any given model version (e.g., participation in formal model evaluation processes such as the Coupled Model Intercomparison Project, Phase 5 [CMIP5]) demand ever-greater amounts of computing time as models grow more complex. Overall, the climate modeling enterprise relies on sustained improvements in supercomputing capabilities and must position itself strategically to exploit them fully.
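The more-than-1,000-fold figure follows from the standard scaling argument (a sketch, assuming an explicit time-stepping scheme whose step size is limited by a CFL-type stability constraint, and ignoring vertical refinement):

\[
\text{cost} \;\propto\; \left(\frac{1}{\Delta x}\right)^{2} \times \frac{1}{\Delta t},
\qquad \Delta t \propto \Delta x
\;\Longrightarrow\;
\text{cost} \propto \Delta x^{-3}.
\]

Reducing the horizontal grid spacing Δx by a factor of 10 therefore multiplies the operation count by at least 10³ = 1,000; refining the vertical grid or further shortening the time step pushes the factor higher.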

Finding 10.1: As climate models advance (higher resolutions, increased complexity, etc.), they will require increased computing capacities.


