
B
Input from Workshop Invitees

Each workshop invitee[1] was asked to give a quick overview from their disciplinary perspective to spark discussion in their session and to provide an extended abstract that would capture their thoughts for the workshop record. Most drafted their pieces prior to the workshop, but a few modified or wrote their pieces near the end of the workshop. The content and format of these pieces vary. Some give a brief summary of the state of the model parameterizations in which they have expertise (i.e., as defined by the six discussion sessions of the agenda shown in Appendix D). Some discuss how parameterizations might be improved or which aspects require focused efforts for improvement. And some address fundamental problems in the atmospheric and climate communities that hinder advancements in modeling.

[1] Because Isaac Held and Dennis Hartmann each gave an overarching presentation, the committee did not ask them for a brief summary. Therefore, this appendix includes only the summaries from the other 10 invitees.

ATMOSPHERIC GRAVITY WAVE EFFECTS IN GLOBAL MODELS

Joan Alexander, Colorado Research Associates

Gravity waves are a major mode of geophysical variability on scales of order 10–1000 km in stably stratified fluids. They span a wide range of frequencies and vertical scales as well. We have a simple linear theory that accurately describes wave propagation, which has been a valuable aid in the interpretation of observations and which forms the basis for all parameterizations of gravity wave effects in global models.

Gravity wave effects on the global scale include (1) mean-flow forcing effects, which are currently parameterized and are considered the most important effect below the stratopause; (2) mixing effects, which are weak in the stratosphere compared to existing model numerical diffusion but are important at higher altitudes; (3) energy dissipation and direct heating, which are only important in the upper atmosphere; and (4) cloud and heterogeneous chemistry interactions, which are important in conditions that are otherwise marginal for ice cloud formation, with impacts on ozone chemistry, stratospheric dehydration, potential radiative feedbacks, and convective cloud initiation, but which are not currently parameterized.

There are a variety of global-scale mean-flow forcing effects that are currently treated via gravity wave parameterization. These include (1) providing a drag force on the jets in the upper troposphere and lower stratosphere; (2) alleviating the winter cold-pole problem common in GCMs; (3) enabling an earlier, more realistic stratospheric springtime transition to summer easterly winds; (4) providing roughly half of the total wave-driven forcing for the quasi-biennial oscillation in the equatorial lower stratosphere zonal winds; (5) providing a drag force on the middle atmosphere jets, with an accompanying reversal of the wind direction and of the radiative equilibrium temperature gradient at the mesopause; (6) forcing the semiannual oscillation in winds near the stratopause and mesopause; and (7) modifying planetary waves and tides in the middle atmosphere in a variety of ways, including both amplification and reduction of the amplitudes and vertical wavelengths of the planetary-scale waves.

Arguably the primary way that gravity waves influence tropospheric climate is via their effects on planetary wave propagation. Mechanistic model studies show that planetary wave refraction and vacillation cycles are very sensitive to winter stratosphere wave drag via the mean flow. Monthly-mean latitude-height wind distributions are generally used to assess the fitness of a particular gravity wave parameterization in model tuning, yet mechanistic model studies show that slight variations in gravity wave drag can give very similar mean-wind distributions while giving very different stratospheric warming frequencies because of this wind sensitivity. Without any wave drag, the winter polar jet can get stuck in a perpetual cold phase that excludes planetary waves entirely.

The basic components of a gravity wave parameterization are (1) the source definition, (2) the momentum flux spectrum emanating from the source, and (3) the flux dissipation with height. Sources and dissipation are the nonlinear parts of the problem that parameterizations struggle to describe. We expect the dissipation to be controlled by basic instability mechanisms. There is currently some disagreement about how to describe this portion of the problem, but there is hope that small differences in these descriptions will be relatively negligible compared to the high sensitivity that parameterizations have (and should have) to the characteristics of the waves emanating from the sources. The key is thus defining the momentum flux spectrum as a function of at least two propagation properties of the waves and their propagation directions. This is an enormous observational challenge, particularly since we are dealing with highly local and intermittent phenomena, and the information is needed on a global scale.
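To make these components concrete, the sketch below (not the formulation of any particular model) illustrates the single-column logic that many schemes share: launch a discrete momentum flux spectrum at a source level, propagate each spectral element upward under linear theory, cap its flux with a Lindzen-type saturation bound, and deposit the flux divergence on the mean flow. The launch spectrum, the 100-km horizontal wavelength, and the saturation form are illustrative assumptions.

```python
import numpy as np

def gw_drag_column(z, u, rho, N, c_phase, flux0, k_h=2.0 * np.pi / 100.0e3):
    """Illustrative single-column gravity wave drag with Lindzen-type saturation.

    z, u, rho, N : 1-D profiles (bottom to top) of height (m), zonal wind (m/s),
                   density (kg/m^3), and buoyancy frequency (1/s).
    c_phase      : phase speeds (m/s) of the launched spectral elements.
    flux0        : momentum flux magnitude (Pa) carried by each element at launch.
    k_h          : horizontal wavenumber (1/m); a 100-km wavelength is assumed.

    Returns the zonal acceleration du/dt (m/s^2) implied by the flux deposition.
    """
    nz = z.size
    dudt = np.zeros(nz)
    for c, f_launch in zip(c_phase, flux0):
        sgn = 1.0 if c >= u[0] else -1.0        # sign of the carried momentum
        flux = np.zeros(nz)
        flux[0] = sgn * f_launch
        for i in range(1, nz):
            if (c - u[i]) * (c - u[i - 1]) <= 0.0:
                flux[i] = 0.0                    # critical level: deposit everything
            else:
                # Saturation bound on the flux a monochromatic wave can carry.
                f_sat = rho[i] * k_h * abs(c - u[i]) ** 3 / (2.0 * N[i])
                flux[i] = sgn * min(abs(flux[i - 1]), f_sat)
        # The mean-flow forcing is the vertical divergence of the momentum flux.
        dudt[1:] += -np.diff(flux) / (rho[1:] * np.diff(z))
    return dudt
```

Because the deposited momentum drags the wind toward each element's phase speed, the launch spectrum (the source problem emphasized above) determines where the forcing appears and in which direction it acts.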

Ideally, we need to understand what controls the properties of the momentum flux spectrum from different sources. This is an active area of research involving observational, theoretical, and modeling studies. The observational studies include local observations from radar, lidar, and radiosondes as well as airborne and satellite measurements. This problem has not risen to a sufficient level of importance in the minds of the funding agencies to warrant a focused observational campaign or satellite instrument, but data sources designed for other purposes are used to attack the observational challenge with some continuing success.

The important sources include (1) flow over topography, (2) convection and fronts, and (3) jet stream instability and adjustment processes. Note that mountain waves are parameterized with zero phase speed, so they can only slow the winds. A full spectrum of waves is needed to model a circulation like the quasi-biennial oscillation (QBO). Mountain wave sources are geographically limited, mainly to the northern hemisphere. Convection is a very important source in the tropics and summer seasons. Different gravity wave parameterization methods all use linear theory to describe the wave propagation with height, but they differ in their descriptions of the wave sources and the wave dissipation with height. Note that the GISS model uses a unique wave source description that, unlike any other model, allows feedbacks on the sources from changes in the climate. This could be a factor that leads to its greater AO (Arctic Oscillation) sensitivity to greenhouse gas changes.

Can we find sufficient observational constraints for the parameterization of gravity wave effects? Many of the questions from the committee focused on this issue. As described above, there is a wealth of data that detect gravity waves, but it is a rare set of data that can provide sufficient information simultaneously on the wave momentum flux as well as the propagation properties of the waves, and no such data exist on the global scale. As with other physical processes, we must use models to aid in the interpretation of datasets, recognizing the observational limitations, to infer the wave properties needed to constrain parameterizations. I and others in the field are engaged in these kinds of research studies.

It is likely that such global observations will help constrain parameterizations but that the real advances in parameterization will come in the development of realistic source parameterizations, based on models tightly constrained by observations. The research community is actively working toward this end. Forecast and climate modeling centers are interested in raising their model tops into the middle atmosphere, and this will require the addition of realistic gravity wave parameterizations. These are too poorly constrained at this time. Parameterization developments are behind the model needs but will eventually be sufficient, probably before the needs are considered top priority.

LAND-ATMOSPHERE INTERACTIONS: HYDROLOGIC CYCLES

Alan Betts, Atmospheric Research

I believe major progress has been made in the past few years in understanding and evaluating the coupling of physical processes over land in global models (Betts, 2004; Betts and Viterbo, 2005; Koster et al., 2004; Lawrence and Slingo, 2004). Consider the basic NASA question about the functioning of the global water and energy cycle: What are the effects of clouds and surface hydrologic processes on Earth’s climate? We have struggled for more than a decade to address this, yet understanding the fully coupled system, especially the complexity of the cloud interactions, has remained elusive.

It now appears that these climate interactions over land are within reach, as they can be diagnosed in models and in data; with this understanding, the development of an Earth modeling system to represent them is now possible.

Let me comment on questions raised prior to and during the workshop:

  1. Are we losing the ability to make essential improvements in model physics because we are more concerned with fine-tuning existing representations? These are different exercises. New physics means the ability to step back and understand what is wrong. This needs a long-range plan and excellent scientific oversight and actually hiring people tasked to do it! It is easier for understaffed modeling centers to fine-tune.

  2. Are we losing the ability (and perhaps the will) to make critical but often arduous tests of model physics against observations? The real-world link is critical, and too many student projects get lost in virtual reality.

  3. Is the emphasis on quantifying model uncertainty diluting efforts to improve model physics? Yes, quantifying uncertainty, when model interactions between processes are poorly understood, is an illusion.

  4. What is the status of and what are the major errors associated with the parameterization of physical processes in atmosphere-land-ocean (A-L-O) models ranging from local-daily scales to regional-decadal scales? What effects do these errors have on model output compared to other sources of error? The primary source of error in the global water and energy cycle, which is at the core of Earth’s climate system, is in the tropics, where the dynamics, clouds, and physics are all tightly coupled. Over land it involves the coupling of many processes both at the land surface and in the atmosphere. Energy is transferred by the phase changes of water, both at the surface and in the atmosphere, and clouds interact tightly with both the short- and long-wave radiation fields. Precipitation, atmospheric dynamics, and the radiation fields are tightly coupled. At the surface over land, the precipitation, surface hydrometeorology and vegetation, and surface and boundary layer fluxes are also tightly coupled. Only with global models can we simulate all the interactions involved, and many processes are parameterized because the range of time and space scales involved is so broad that explicit simulation of them all is not possible. The consequence of this is that the model system must be evaluated carefully to see if the coupling between, say, cloud albedo, boundary layer depth, surface fluxes, and evaporative fraction is properly represented (Betts, 2004; Betts and Viterbo, 2005).

  5. How can model parameterizations be improved to represent the essential physics in A-L-O models? How can these parameterizations be tested rigorously, and what supporting infrastructure is needed to do so? A supporting diagnostic infrastructure is essential, but good frameworks exist; one was part of the ERA-40 system (Kållberg et al., 2004). Models can be evaluated

  • locally at “points” such as flux towers where we now have as many as 10 years of data;

  • on river basin scales, where precipitation and streamflow constrain the water budget, by evaluating the way in which observables and processes are coupled in models and data (three time scales are easily verifiable: diurnal, diurnally averaged, and seasonal); and

  • in data assimilation, where the fit of the model to the data is an indicator of the accuracy with which the modeling system fits reality.

  6. What is the appropriate balance between the efforts being directed toward improving physical parameterizations and efforts directed toward other model development and application activities? A good modeling center needs both. The paradigm is straightforward:

  • Quantify model errors on a range of time and space scales.

  • Identify links/causes in either data assimilation or representation of physical processes, using data as a guide.

  • Pursue basic research on new representations of physics, in parallel with pragmatic improvements, again tied to data.

  • Complete this development cycle in three to four years.

This appears simple, but very few centers actually complete this cycle (e.g., ECMWF) because it requires science-driven (as opposed to institutional) management and adequate resources or efficient utilization of resources.


LAND-ATMOSPHERE INTERACTIONS: CARBON CYCLES

Scott Denning, Colorado State University

Land-atmosphere interactions associated with the carbon cycle can be decomposed into two classes according to the time scales over which they act. Fast (ecophysiological) processes dominate carbon exchanges on time scales of seconds to years and are strongly coupled to exchanges of water and energy; slow (ecological) processes are responsible for time-mean sources and sinks of atmospheric CO2 over scales of decades to centuries. Fast processes are important to model because they help us get exchanges of energy and water right and are relatively well understood. Slow processes dominate uncertainty in future atmospheric CO2 and are poorly represented in current models. The slow processes are the “climate” of the carbon cycle and are inextricably linked to the rest of the climate system.

An important strategy in carbon cycle science is to observe variations in carbon compounds in the atmosphere and ocean and use them to understand underlying processes that govern sources and sinks. The trouble is that the fast processes dominate the observations, but the slow processes dominate the future behavior of the Earth system. We use models of the fast processes to “see through” high-frequency variability in the observations and thereby test mechanistic hypotheses about the slow processes. How do we model both? How do we know in what ways we’re wrong and how to do better?

Fast carbon cycle processes are the “weather” of the carbon cycle and are driven by radiation, temperature, and precipitation. They include photosynthesis (conversion of atmospheric CO2 to organic matter), autotrophic respiration, and decomposition of organic matter back into CO2. They are observed hourly by micrometeorological methods at a network of well over 200 sites around the world. Stomatal physiology enables vegetated plant canopies to modulate the Bowen ratio of surface energy exchange and strongly couples exchanges of carbon at the land surface to exchanges of energy and water. Diurnal and seasonal cycles of atmospheric CO2 are largely explained by these processes and provide strong constraints on their parameterization in climate models. Interannual variability in the fluxes is less well understood and parameterized, especially at regional and larger scales.

Important unresolved issues in the parameterization of photosynthesis include (1) representation of physiological stress due to dry soil and dry air; (2) canopy radiative transfer and the effects of direct versus diffuse light on photosynthesis and transpiration; (3) heterogeneity and nonlinear response to drying within a grid cell; and (4) management effects such as urban and suburban development, crop fertilization, irrigation, and harvest. Respiration and decomposition are fast processes that provide first-order links to the slow processes. Decomposition is typically parameterized as being directly proportional to the sizes of a set of pools of organic matter (e.g., dead wood, leaf litter, soil carbon), which must be initialized everywhere in a coupled model. The initialization problem is compounded by the parameterized sensitivities of decomposition to temperature and moisture, which appear not to be universal and are less well constrained by observations than the corresponding influences on photosynthesis. Incorrect parameterization of the initial carbon pools and of the environmental dependence of respiration rates can then hide errors in models of the slower carbon flux processes that control future atmospheric CO2.
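As a concrete illustration of the pool formulation just described, the sketch below steps a small set of first-order pools forward in time. The pool names, base turnover rates, Q10 value, and moisture function are illustrative assumptions, not values taken from any particular model.

```python
import numpy as np

# Illustrative pools and base turnover rates (1/yr); real models carry more pools.
BASE_RATES = {"dead_wood": 0.05, "leaf_litter": 1.0, "soil_carbon": 0.02}

def q10_response(t_celsius, q10=2.0, t_ref=15.0):
    """Q10-style temperature scaling of decomposition (Q10 = 2 assumed)."""
    return q10 ** ((t_celsius - t_ref) / 10.0)

def moisture_response(soil_wetness):
    """Crude moisture limitation: 0 when dry, 1 when saturated (assumed linear)."""
    return np.clip(soil_wetness, 0.0, 1.0)

def step_pools(carbon, litter_input, t_celsius, soil_wetness, dt_years=1.0 / 365.0):
    """Advance pool sizes (kg C m^-2) one step; return new pools and respired CO2.

    Decomposition of each pool is directly proportional to its size, scaled by
    the temperature and moisture response functions, as described in the text.
    """
    scale = q10_response(t_celsius) * moisture_response(soil_wetness)
    new_carbon, respired = {}, 0.0
    for name, k_base in BASE_RATES.items():
        decay = k_base * scale * carbon[name] * dt_years
        new_carbon[name] = carbon[name] + litter_input.get(name, 0.0) * dt_years - decay
        respired += decay
    return new_carbon, respired
```

The dictionary of pool sizes must be supplied everywhere at the start of a run, which is the initialization problem noted above; a biased initial pool or temperature response can compensate for, and thus hide, errors in the slower processes.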

Eddy covariance measurements of surface fluxes of heat, moisture, and CO2 are tremendously valuable for elucidating the dependence of land-atmosphere exchanges on radiation, temperature, and precipitation (soil moisture). The network of tower sites has unfortunately been oversold as a way to directly observe the time-mean source or sink of carbon, which is probably the weakest aspect of their measurements. The very small footprints observed by this method severely limit their utility for quantifying slow processes that control the carbon balance.

Slow ecosystem processes that must be included in fully coupled climate models include (1) competition for resources and space among plant functional types and the related disturbance/succession/recovery dynamics in ecosystems; (2) biogeochemical cycling in soils and other organic matter; (3) intentional and inadvertent fertilization; and (4) the responses of ecosystems to changes in atmospheric composition and climate. These processes are more difficult to observe, or rather to extrapolate to large spatial scales from limited observations, than their fast counterparts.

To test mechanistic models of changes in slow carbon cycle processes, parameterizations in climate models must be evaluated against observations at spatially-aggregated scales. It is necessary but not sufficient that these parameterizations be evaluated against local data. For example, forward modeling of carbon storage and biomass over time scales of decades following forest fire or harvest must be compared to biometric inventory measurements (such as are available for over 100,000 plots within the United States). It is quite likely, however, that a parameterization of ecosystem biogeochemistry, plant competition, and succession could reproduce the broad statistics of these observations and still fail to capture variations at larger spatial scales due to poorly constrained extrapolation. There is also a need, therefore, for predictions made by forward models of these slow processes to be compared quantitatively to integral properties of the coupled system such as changes in atmospheric carbon gases. This is analogous to comparison of predictions made by cloud models to observable quantities at larger than climate model grid scales, or the comparison of spatially-integrated predictions of runoff to discharge from large river basins. This is a generic requirement for subgrid-scale physical parameterizations in climate models, not a particular requirement for carbon cycle science.

There is an emerging consensus in the carbon science community that diagnostic modeling and data assimilation provide a framework for leveraging
large-scale observations to better constrain parameterization of both fast and slow processes that modulate the interaction between carbon and the physical climate system. One example of such an approach is the use of transport inversions to estimate average carbon fluxes at regional/monthly scales and to interpret the results to constrain parameterizations of slow ecosystem controls on the budget. This requires treatment of unresolved time and space variations in the fluxes, especially to the extent they are coupled to or covary with transport processes (e.g., through rectifier effects). Excellent parameterization of fast processes and specification of highly-resolved spatial variability must be included in such calculations, or errors in the space-time patterns are unavoidably aliased into errors in the time-mean, regionally-integrated fluxes through aggregation error. Other aspects of the physical problem that are highly relevant for this exercise include the parameterization of clouds and planetary boundary layer processes in transport models used for inversions of atmospheric CO2 (as they also show up in other parameterization problems).
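For readers unfamiliar with the machinery, the transport inversion mentioned above is, in its simplest linear Gaussian form, the standard Bayesian update sketched below; the transport operator H, prior fluxes, and covariance matrices are placeholders that a real application would take from a transport model and an observing network.

```python
import numpy as np

def bayesian_flux_inversion(H, c_obs, s_prior, B, R):
    """Schematic linear inversion for regional/monthly surface fluxes.

    H       : (n_obs, n_flux) transport operator mapping fluxes to concentrations.
    c_obs   : observed concentration anomalies (n_obs,).
    s_prior : prior flux estimate (n_flux,).
    B, R    : prior-flux and observation error covariance matrices.
    Returns the posterior flux estimate and its error covariance.
    """
    B_inv = np.linalg.inv(B)
    R_inv = np.linalg.inv(R)
    # Standard Gaussian/linear posterior: covariance first, then the mean update.
    A = np.linalg.inv(H.T @ R_inv @ H + B_inv)
    s_post = s_prior + A @ H.T @ R_inv @ (c_obs - H @ s_prior)
    return s_post, A
```

Errors in H (including the boundary layer and cloud transport errors noted above) or in the assumed within-region flux patterns enter the residual c_obs - H @ s_prior directly, which is how space-time pattern errors alias into the retrieved time-mean regional fluxes.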

Cultural issues that hamper progress in carbon cycle parameterization include the traditional divides between observationalists and modelers, as well as perhaps more distinctive divides between modelers of fast versus slow processes. The design of field experiments, modeling activities, and data assimilation/inverse modeling efforts to address these issues remains a high priority and a difficult problem to tackle.

CONVECTION IN COUPLED ATMOSPHERE-LAND-OCEAN MODELS

Leo Donner, Princeton University

Thirty years after the initial publication of a cumulus parameterization including a cloud submodel (Arakawa and Schubert, 1974), the representation of convection in atmospheric general circulation models (AGCMs) at major climate research institutions is problematic. In general, these centers are at best employing the method of Arakawa and Schubert without further advances, and in many cases the methods used are not even at the level of Arakawa and Schubert. This is despite significant new observational knowledge of convection during that period and substantial success in modeling convective systems using high-resolution models, which can resolve the largest individual deep convective elements and many aspects of organized convective systems.

Cumulus parameterizations in major-center AGCMs generally continue to use only convective mass fluxes as cumulus submodels. Momentum transport is often treated crudely, if at all. Over the past several decades there has been only limited research on closure for cumulus parameterizations, despite its central importance and despite observational evidence of problems with current approaches, especially at subdiurnal time scales (Donner and Phillips, 2003).


Treatments of the interactions between deep convective towers and mesoscale circulations, between convection and boundary layers, and between deep and shallow convection are absent or extremely limited. In many cases this is despite compelling observational evidence of the importance of these interactions; for example, observational evidence of the importance of mesoscale circulations associated with deep convection has existed for at least 20 years.

The sociological reasons for this situation probably can be found by examining the role of convection in AGCM development at major modeling centers. This development is strongly driven by the goals of reducing biases in climate simulations and reducing uncertainty in climate sensitivity (to anthropogenic changes in atmospheric composition). Convection (like most other physical processes) tends to be viewed as a tool for reducing these biases and uncertainties. Since it is often unclear which physical processes are responsible for particular biases and sensitivity uncertainties, there has been a serious lack of focus on improving the fundamental physical soundness of convective parameterizations and an emphasis on tuning physical parameterizations to produce realistic climate simulations.

Recent experience at major modeling centers suggests that the tuning approach with current convective parameterizations may be reaching limits as to its usefulness in attacking model biases. The unphysical nature of the tuning process is also increasingly apparent and unsatisfactory to its practitioners. It is also clear that tuning to past or present climate conditions may not capture future climate change and is thus of limited use as a means of reducing uncertainty in climate sensitivity.

There are possibilities for advancing current parameterization capabilities using observations and high-resolution model results, but this will require enhanced efforts. A particularly promising avenue is emerging with current computational advances. This approach is multi-scale embedding of convection-resolving models in AGCMs (Randall et al., 2003). This is a major conceptual advance on past methods, which have required that central controls on the problem be based on closure assumptions, whose very existence has never been established. The embedded convection-resolving models draw on well-established dynamics of convection and rationalize many of the choices on how to treat issues related to cloud submodels. They have an enormous potential to indicate outstanding research issues that must be addressed, for example, the roles of microphysics and smaller-scale circulations.


PHYSICS OF AIR-SURFACE INTERACTIONS AND COUPLING TO OCEAN-ATMOSPHERE BOUNDARY LAYER PROCESSES

Chris Fairall, National Oceanic and Atmospheric Administration

My remarks are limited to consideration of parameterizations of fluxes at the air-sea interface. The fluxes of interest include the traditional meteorological forcing (momentum, sensible heat, and latent heat), precipitation, and net solar and infrared radiative fluxes. For climate purposes we must expand our considerations to include fluxes of trace gases (e.g., CO2, DMS, ozone) and particles. Oceanographic and meteorological energy and buoyancy forcings are made up of different weightings of these basic fluxes.

Precipitation is a critical variable, but in the global climate modeling context it is not parameterized in terms of surface variables. Radiative fluxes at the surface do involve surface variables, but the principal source of variability (and uncertainty) is clouds. Although surface radiative flux parameterizations do exist, from a climate model point of view the surface radiative flux is again viewed as part of the entire atmospheric column problem.

For turbulent fluxes (meteorological, gas, and particle), the bulk flux model, in which fluxes are computed as the product of wind speed, the sea-air contrast, and a semi-empirical transfer coefficient, is essentially universally used in GCMs (and in higher-resolution models). In the last decade, advances in ship-based measurement technologies and in physically based formulations of bulk models have resulted in major progress. Meteorological transfer coefficients are now known, on average, to about 5 percent for wind speeds from 0 to 20 m s⁻¹. In the last five years, direct covariance measurements of CO2 flux from ships have reduced the uncertainty in CO2 transfer significantly but have also illuminated the importance of (presumably) wave-breaking processes. Computations of global mean oceanic CO2 flux show large sensitivity (a factor of 2) to the choice among simple wind-speed-based transfer formulations.
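In its simplest form, the bulk model just described is the product of a transfer coefficient, the wind speed, and the sea-air contrast, as in the sketch below; the constant coefficients and nominal air properties are illustrative stand-ins for the stability- and sea-state-dependent values that operational algorithms compute.

```python
RHO_AIR = 1.2    # air density, kg/m^3 (nominal)
CP_AIR = 1004.0  # specific heat of air, J/(kg K)
LV = 2.5e6       # latent heat of vaporization, J/kg

def bulk_fluxes(wind, t_sea, t_air, q_sea, q_air, cd=1.2e-3, ch=1.2e-3, ce=1.2e-3):
    """Bulk aerodynamic surface fluxes with constant (illustrative) coefficients.

    wind         : wind speed at a reference height (m/s)
    t_sea, t_air : sea surface and air temperature (K)
    q_sea, q_air : saturation specific humidity at the SST and air specific humidity
    Returns (wind stress in N/m^2, sensible heat in W/m^2, latent heat in W/m^2).
    """
    tau = RHO_AIR * cd * wind * wind                      # momentum flux
    shf = RHO_AIR * CP_AIR * ch * wind * (t_sea - t_air)  # sensible heat flux
    lhf = RHO_AIR * LV * ce * wind * (q_sea - q_air)      # latent heat flux
    return tau, shf, lhf
```

Most of the roughly 5 percent uncertainty quoted above lives in how cd, ch, and ce vary with stability, wind speed, and sea state; gas transfer coefficients have an analogous form but are far less certain.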

Direct measurement of particle fluxes is still exploratory, and interpretation of such measurements is uncertain and particle size dependent. Small-particle (radius r < 1 micron) fluxes can be determined with reasonable accuracy using direct covariance measurement, while large-particle (r > 10 micron) fluxes can be effectively determined from mean concentrations. A number of parameterization issues for surface fluxes are listed below:

  • Representation in GCM

    • Except for P, most observations are point time averages

    • Concept of gustiness sufficient?

    • Mesoscale variable? Precipitation, convective mass flux, …

  • Strong winds

    • General question of turbulent fluxes, flow separation, wave momentum input

    • Sea spray influence

  • Waves

    • Stress vector versus wind vector (two-dimensional wave spectrum)

    • z0 (roughness length) versus wave age and wave height

  • Breaking waves

    • Gas and particle fluxes

    • Distribution of stress and TKE in ocean mixed layer

  • Gas fluxes

    • Bubbles

    • Surfactants (physical versus chemical effects)

    • Extend models to chemical reactions

  • Particle fluxes

    • Interpretation of measurements

    • Source versus deposition

The central theme is that the parameterizations are in good shape in regimes where we have good observations. Note that wave processes (on the oceanic and atmospheric sides of the interface) dominate this list.

Existing research programs in the United States provide a good venue for attacking many of these issues, but there are major gaps. The international Surface Ocean Lower Atmosphere Study (SOLAS) is the showcase program for fundamental research (measurement and modeling) on many of these topics. Research on oceanic gas transfer suffers from fragmented sources of support, but the large, highly organized U.S. Carbon Cycle program has (in my opinion) too much emphasis on observations to constrain oceanic PCO2. Unfortunately, the U.S. SOLAS program has lost momentum, and there is at present no U.S. agency funding lead. The withdrawal of the Office of Naval Research as a major player in funding ocean wave and particle flux research has also hurt. Because the gas transfer, particle, and wave problems are all tied together, it makes sense to address these problems in a holistic fashion, which requires an initiative such as SOLAS.

Overall, the U.S. research effort in this area has major strengths in top scientists and some unique infrastructure. There are clear concerns about the aging workforce and uncertainties about the training of the next generation of flux scientists (this is part of a general pattern of a declining population of students interested in getting their hands dirty). There is another issue of concern to me: the recent trend of changing the emphasis of the national government laboratories toward performance measures, deliverables, and products and away from strategic technology development. This is partly based on the failed concept that operational segments of NOAA, the Department of Energy, etc., can simply order up from private industry the technology they think they need. The problem is that funds tend to move down agency stovepipes, but fundamental advances in technology and science cannot be confined to the same stovepipe going back up.

OCEAN MIXING

Raffaele Ferrari, Massachusetts Institute of Technology

Much of the influence of the ocean on climate involves the oceanic uptake and transport of scalar properties such as heat, fresh water, carbon, fluorocarbons (CFCs), and the like. On daily to decadal time scales, this uptake and transport is controlled by upper ocean processes occurring in the surface mixed layer and the wind-driven gyres. The turbulent mixing in the surface mixed layer sets the rate at which properties are exchanged between the ocean and the atmosphere. The wind-driven gyres transport these properties meridionally once the surface waters are subducted into the interior. In this session we will discuss the key subgrid-scale processes that need to be parameterized to properly simulate the uptake and transport of properties in ocean models used for climate studies.

In present climate models, the ocean horizontal grid resolution is O(100) km or larger, and the vertical grid resolution is tens to a hundred meters. At this resolution the subgrid ocean processes that need to be parameterized can be divided into two categories:

  • mesoscale eddy fluxes due to balanced motions generated through instabilities of the mean circulation and

  • microscale turbulent processes due to unbalanced turbulent motions such as breaking internal waves, shear instabilities, double diffusion, or boundary layer mixing near the surface and bottom.

More powerful computers may decrease these scales to a marginal mesoscale eddy resolution of O(25) km in the next 10 years, but horizontal grids of better than O(10) km are needed to adequately resolve the fluxes produced by mesoscale motions. Even marginal mesoscale eddy resolution, sometimes called “eddy-permitting” resolution, requires some parameterization of the missing eddy transports. This has elicited a large literature in the last 10 years on parameterization schemes for mesoscale eddies in the oceanic interior and microscale turbulence in the surface mixed layer. This has not always been the case. One of the most significant problems with early ocean models was the high level of microscale turbulent mixing inherent in the numerics. These high levels of mixing affected the heat transport, stability, and variability of simulated climate in coupled models. Thus the emphasis in model development was on hydrodynamic codes. New numerical methods now allow scientists to construct models, which can operate with far smaller levels of spurious mixing.


Although research continues on improving the hydrodynamic codes, it is clear that modern ocean models suffer more from errors in the parameterization of subgrid-scale motions than from errors in the simulation of resolved hydrodynamic processes.

Parameterizations of mesoscale processes in oceanic general circulation models represent the adiabatic release of potential energy by baroclinic instability as well as the stirring and mixing of material tracers along isopycnal surfaces (Gent and McWilliams, 1990). These quasi-adiabatic conservation properties have led to a series of dramatic improvements in oceanic models. However, close to the boundaries, eddy fluxes develop a diabatic component, both because of the vigorous microscale turbulence in boundary layers and because eddy motions are constrained to follow the topography or the upper surface, while density surfaces can and often do intersect the boundaries. The dynamics of these diabatic near-boundary fluxes are not well understood, and there is as yet no standard parameterization. Recently a Climate Process and Modeling Team (CPT) has been funded to develop new approaches to mesoscale eddy parameterizations at the ocean boundaries, based on better dynamical understanding and analysis of available observations.
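For reference, the interior form of the Gent and McWilliams (1990) scheme can be written in terms of the isopycnal slope S; the eddy transfer coefficient kappa is what must be specified, and the near-boundary difficulty described above arises exactly where the assumptions behind this form (density surfaces not intersecting the boundary, weak diabatic effects) fail. Schematically,

\[
\mathbf{S} = -\frac{\nabla_{h}\rho}{\partial\rho/\partial z}, \qquad
\mathbf{u}^{*} = \frac{\partial}{\partial z}\left(\kappa\,\mathbf{S}\right), \qquad
w^{*} = -\nabla_{h}\cdot\left(\kappa\,\mathbf{S}\right),
\]

so tracers are advected by the resolved velocity plus this eddy-induced velocity while being diffused along, rather than across, isopycnal surfaces.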

The physics of microscale turbulence in the oceanic boundary layers is the subject of a vast literature and parameterizations exist. Less is known about microscale turbulence in the ocean interior, and parameterizations are very rudimentary. The difference in development between the two fields has historical and practical reasons. The boundary layer problem benefited from the similarities with the well-developed corresponding atmospheric problem. Furthermore, surface boundary layers are fairly accessible and observations are available to test the proposed parameterization schemes. The situation is opposite for microscale turbulence in the ocean interior. The physics is very different from the atmospheric case, mostly because of the lack of radiative processes. Observations are very sparse and do not allow careful testing of numerical schemes. Ray Schmitt gives a comprehensive review of the progress being made on these issues. However, the conclusion is that more research is needed if we are to quantify the effects of interior microscale turbulence on the ocean’s climate and develop appropriate closure schemes to reproduce those effects.

REPRESENTATIONS OF DIAPYCNAL MIXING IN OCEAN MODELS

Raymond Schmitt, Woods Hole Oceanographic Institution

Although it is recognized that the diapycnal mixing coefficient for heat, salt, and tracers is much less than the isopycnal mixing rate, fluxes may actually be larger because vertical gradients are so much larger than horizontal gradients. In addition, it is well established that diapycnal mixing is essential for maintenance
of the meridional heat flux in the ocean; only the water mass transformations connected with diapycnal mixing can allow new dense water to enter the ocean depths. It has been conventional to assign a uniform diapycnal mixing coefficient to all of the ocean interior, tuning its value to yield a reasonable thermohaline circulation. However, recent observations have revealed tremendous dynamic range in the rates of turbulent vertical mixing, with mixing coefficients varying by four orders of magnitude within a small region of an ocean basin. Fortunately, there are strong indications that tides, topography, and internal wave dynamics control the rate of interior ocean mixing, so significant progress toward its parameterization appears quite feasible. Model runs show dramatic differences in circulation and meridional heat flux when a spatially variable mixing rate is introduced.
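To fix notation, the quantity at issue is the diapycnal diffusivity kappa in the vertical advective-diffusive balance, which microstructure observations estimate from the turbulence dissipation rate epsilon through the Osborn relation with a mixing efficiency of about 0.2:

\[
w\,\frac{\partial T}{\partial z} \;\approx\; \frac{\partial}{\partial z}\!\left(\kappa\,\frac{\partial T}{\partial z}\right),
\qquad
\kappa \;\approx\; \Gamma\,\frac{\varepsilon}{N^{2}}, \quad \Gamma \approx 0.2 .
\]

The four-orders-of-magnitude range quoted above is a range in kappa: values near 10⁻⁵ m² s⁻¹ are typical of the quiet thermocline, with far larger values over rough topography and in other energetic regions.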

Tracer release experiments have provided convincing evidence of the spatial variations in turbulent mixing rate and have also shown that in some regions the rates of mixing of heat and salt are different due to double-diffusive convection. In the main thermocline of the tropical Atlantic, salt and tracers mix at a high rate of 1 cm² s⁻¹, which appears to be twice the rate for heat. This region is dominated by strong thermohaline staircases, the dynamics of which are poorly understood. The inverse case of diffusive convection, with heat mixing faster than salt, appears to dominate fluxes in large portions of the Arctic, which must influence the rate at which oceanic heat is made available for melting sea ice. Both forms of double diffusion occur in fine-scale intrusions and serve to provide a diapycnal flux from isopycnal stirring processes. Much work remains to properly parameterize double-diffusive mixing, with theoretical, experimental, observational, and numerical approaches needed.

These and other ocean mixing processes are discussed in a white paper titled “Coupling Process and Model Studies of Ocean Mixing to Improve Climate Models—A Pilot Climate Process Modeling and Science Team” (Schopf et al., 2003). Issues treated include equatorial upper ocean mixing, surface boundary layer processes, entrainment in gravity currents, mixing in the Southern Ocean, geography of internal wave mixing, and deep convection and restratification. The purpose was to promote Climate Process and Modeling Teams (CPTs) for U.S. CLIVAR to focus on these mixing topics. However, only two topics were initiated at very modest levels, and no consideration was given to field work. There is a serious shortfall in funding for field work, as the Office of Naval Research has ceased to be a significant funder of turbulence studies, NOAA never was, and NSF is under tremendous budgetary pressures. The situation is reaching a critical stage, as capabilities to perform crucial turbulence measurements and tracer release experiments may soon be lost.


CLOUD PROCESSES

Steve Sherwood, Yale University

Clouds have been recognized as the key source of divergence in model climate sensitivity. Particularly important are the height of middle- and upper-level cloud tops (affecting planetary emission to space), the amount of thin cirrus (ditto), the water content of all clouds (affecting planetary albedo), and the areal coverage of low clouds (ditto). Convincing theories do not exist for predicting most of these characteristics from first principles, since they ultimately depend on details of convective transport and microphysical behavior (although the altitude of the highest cloud tops in the tropics may be constrained by radiative cooling, as recently suggested by Hartmann and Larson, 2002).

A literature search reveals that attention to the treatment and role of clouds in GCMs really took off at the beginning of the 1990s, at around the time their importance to climate change uncertainty became widely recognized. Hot topics at that time were cloud feedbacks and attempts to relate observed cloud variations to the local thermodynamic state. This work did not lead to resolution of the main problems, since observed cloud variations are overwhelmingly controlled by dynamics, to a degree that a very precise understanding of dynamics would be needed to tease out the small but persistent impacts of thermodynamic or other subtle factors that might come into play in climate change. More recent work has shifted emphasis somewhat to microphysical forcing (e.g., aerosols) of clouds, chemical roles of clouds (e.g., processing of atmospheric sulfate), and underobserved but potentially important cloud types (thin cirrus and contrails).

In GCMs the cloud and convective physics are customarily separated. The cloud scheme must predict six variables: cloud fraction, liquid and ice concentration, liquid and ice effective radius, and the single-scatter asymmetry parameter (which, for the ice, depends on shape). These variables are needed primarily for radiation. Prediction has proceeded through three phases: initially (1960s to 1970s), all were specified according to observations. Later (1980s to 1990s), models started diagnosing variable cloud cover and water content based on (primarily) the local relative humidity and (typically) temperature and/or height, respectively. This diagnosis was often unconnected to what the cumulus parameterization was doing, leading to the possibility of vigorous convection, but no clouds, in a grid cell (some parameterizations allowed for convection-dependent diagnosis rules making a very crude connection). Now, most models carry total cloud water (and sometimes either cloud cover or higher moments of the cloud water distribution) as prognostic variables, with better consistency between the convective and cloud schemes. Many (reasonable) ad hoc assumptions are required. One next step is super-parameterization, in which a cloud-resolving model (CRM) is run inside each global grid column.
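As a concrete example of the second-phase diagnosis described above, a widely used Sundqvist-style form ties cloud fraction to how far the grid-mean relative humidity sits above a critical threshold; the threshold below is an illustrative, tunable number, not the value of any specific model.

```python
import numpy as np

def diagnostic_cloud_fraction(rel_humidity, rh_crit=0.8):
    """Sundqvist-style cloud fraction from grid-mean relative humidity.

    rel_humidity : grid-box mean relative humidity (0-1)
    rh_crit      : critical humidity at which cloud begins to form (assumed 0.8)
    Cloud fraction rises from 0 at rh_crit to 1 at saturation.
    """
    rh = np.clip(rel_humidity, 0.0, 1.0)
    frac = 1.0 - np.sqrt(np.clip((1.0 - rh) / (1.0 - rh_crit), 0.0, 1.0))
    return np.where(rh > rh_crit, frac, 0.0)
```

The "vigorous convection but no clouds" pathology mentioned above arises because nothing in such a diagnosis knows what the cumulus scheme just did in the same grid box.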


The choice of how to diagnose cloud water content illustrates the climate problem in a nutshell. Observations show that clouds at higher altitudes (lower temperatures) contain less water, roughly in accord with the decrease in saturation water vapor concentration. One can, among other options, choose to diagnose water in a model either from local temperature or height. This choice was considered by Hack et al. (1998) in the NCAR CCM2, who altered the (height-dependent) default scheme by making it temperature dependent. The resulting model cloud climatology changed relatively little compared with model data discrepancies. However, as pointed out by Somerville and Remer (1984), the assumption of temperature-dependent water content leads to a strong negative feedback on climate change due to the increase in global cloud albedo that accompanies global warming. Thus, the climate sensitivity is sensitive to a parameterization choice that cannot be justified one way or the other on empirical grounds but must be defended on the basis of basic system understanding. Though this type of problem motivates increasing the complexity of schemes, I believe that a well-supported argument in favor of a particular diagnostic parameterization may ultimately be better than a more sophisticated parameterization that pushes the uncertainty back to lower-level constants and unsupported assumptions. On the other hand, it may prove essential to carry cloud water in the model and perhaps other things too (TKE, for example).
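Written schematically, with both functional forms assumed here purely for illustration, the two choices amount to tying the diagnosed cloud water content either to temperature through the saturation specific humidity or to height through a prescribed scale height:

\[
l_{c} \;\propto\; q_{\mathrm{sat}}(T)
\qquad \text{versus} \qquad
l_{c}(z) \;=\; l_{0}\,e^{-z/h_{l}} .
\]

Under warming, the first form thickens and brightens clouds at a given level, which is the negative feedback of Somerville and Remer (1984) noted above; the second does not, even though both can be tuned to produce similar present-day cloud climatologies.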

In general, the ability to predict behavior in a situation not previously experienced is commensurate with understanding. The above example demonstrates that basic understanding is essential to climate prediction. We do not have enough previous examples of climate change (in fact, any, in the case of anthropogenic climate change) on which to proceed otherwise. One can be more confident that one understands the system if one can find elegant models (those with a high a priori probability of being correct) that are nonetheless powerful in explaining previous observations. In Bayesian terminology, the posterior probability of a model being correct is proportional to the product of the prior probability (i.e., elegance as judged by an expert) and the likelihood function (i.e., ability to explain evidence). This leads us to the necessity of developing simple (elegant) models as part of a hierarchy, as advocated by Hoskins (1983).
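In symbols, with M a candidate model and D the observational evidence, this is simply Bayes' rule,

\[
P(M \mid D) \;=\; \frac{P(D \mid M)\,P(M)}{P(D)} \;\propto\; P(D \mid M)\,P(M),
\]

with the prior P(M) playing the role of elegance and the likelihood P(D | M) the ability to explain previous observations.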

In my view the scientific method has somewhat fallen by the wayside as a guide to climate researchers. The key to progress is the formulation of testable and understandable hypotheses (simpler models), which can then be tested by comparing the behavior that they predict should occur in a comprehensive model with the actual behavior of such a model. It is not necessary to understand the comprehensive model per se, only to document that its behavior is consistent (or not) with the simpler theory. This can also be called “modeling models,” except that the goal is not to understand the model being modeled (as might be implied by that choice of words) but to test the validity of the simpler model in the face of complexities that were not considered in its formulation.
To the extent that the simple models can serve as parameterizations, this process also provides the basis for GCM development.

A perusal of the literature on clouds and GCMs reveals that exploring model sensitivity and making predictions account for nearly all GCM-related research, especially recently, with hypothesis testing or offering new explanations for observations occurring only rarely. The problem with the more popular types of GCM study is that they depend heavily on the fidelity of the GCM, rendering their value questionable given the manifold weaknesses of present-day GCMs (parameter sensitivity in a model does not prove sensitivity in the real world to the corresponding process). The hypothesis-testing and observation-explaining paradigms are inherently more robust since they connect the model to something else.

Due to the complexity of the cloud problem, progress will depend not only on a complexity hierarchy of models but also on a scale hierarchy in which the behavior of microphysical parcel models, eddy- and cloud-resolving, regional, and global models are connected. A first principles microphysical calculation of hydrometeor growth in a single deep convective updraft is roughly as complicated as a climate calculation with a GCM. But this is not the only problem. Cloud microphysics suffers from significant gaps in our basic understanding, including our inability to agree on (or in some cases even invent) explanations of the following: anomalous ice particle concentrations observed in some clouds, precipitation from very shallow cumuli, cumulus electrification, and persistent supersaturations of water vapor with respect to ice near the tropopause even within cirrus clouds. Thus, we are not simply limited by computational power. Reinvigoration of cloud and ice microphysics is desperately needed; experimental and field observations are far too few (and satellite information too limited) to resolve the questions. On the positive side I believe we are poised, through CRMs with relatively detailed microphysics, to learn much in the not-too-distant future about how previously neglected microphysical degrees of freedom may be affecting macroscopic convective behavior. This may require a new look at convective parameterization.

The key question, assuming reasonable hypotheses can be advanced for the unresolved fundamentals, will be how to connect the scale hierarchy of models usefully. A promising new strategy is super-parameterization. This will be too expensive to do except in research mode, but fully coupling adjacent models in the scale hierarchy may be the secret to unlocking workable strategies for developing good parameterizations. The necessity of this approach becomes even greater if, as many fear, universally correct parameterizations for convective and cloud processes do not exist and specific ones must be tailored to each GCM.

Bringing the philosophical and physical issues together, we are faced with the following questions:

  1. Do universal parameterizations exist for convective and cloud processes?

  2. How much microphysical detail is enough? How do we tell?

  3. What microphysical mechanisms and/or degrees of freedom are important to weather and climate? How do we discover them?

  4. How can we develop sensible, testable hypotheses to aid us in making sense out of terabytes of model output and/or satellite data? How important is this?

CHALLENGES IN RATIONAL CLIMATE SCIENCE

Bjorn Stevens, University of California, Los Angeles

The challenge in representing physical processes in coupled atmosphere-land-ocean models is foremost a political, not a scientific, one. Our understanding of numerical methods and physical processes far outstrips our ability to systematically implement, test, and evaluate representations of this understanding in large-scale models. Moreover, work of the latter type has little reward for those evaluated using the norms of academia. Building large-scale models is a social enterprise that involves the cooperation and interaction of many communities. It also requires an infrastructure to which we appear unwilling to commit. Imagine if state-of-the-art numerical weather prediction (NWP) models, such as have been developed by the United Kingdom’s Met Office, or the European Centre for Medium-Range Weather Forecasts, had been developed using our social model for the development of climate models. It is not a coincidence that the most successful NWP models have been produced by centers with well-defined goals, staff with long-term support who are evaluated based on their ability to meet these goals, and fertile interactions with the broader scientific community. The central challenge in representing physical processes in A-L-O models is to overcome this structural deficit and develop the institutions capable of harvesting the immense, but often unstructured, insights of the broader academic community.

Cultural challenges are also evident, in particular how to attract talented and innovative people to the field. There is a perception, which resonates with my experience at the University, that analytically gifted people are not attracted in sufficient numbers to the meteorological and oceanographic sciences. This is a profound problem; to address it we need to capture the public imagination by highlighting the mystery and majesty of our field and forge strategic alliances with those more recognizable areas toward which more analytically talented students tend to gravitate (i.e., math and physics). One way of recapturing our imagination is by collectively recognizing challenges, or outstanding problems, and devoting resources and prestige toward their resolution. I am fond of the model of the Clay prize in mathematics, for which the community partakes in a
collective act of question stating. Overall, however, progress on this front is bound to be incremental.

From a physical perspective, the challenge is how to exploit technological advances in an attempt to improve the physical basis for the representation of physical processes in A-L-O models. An indispensable strategy is to use observations to identify regimes (i.e., recurrent patterns), simulations to understand them, and observations to confirm our understanding of such regimes[2]. The use of numerical simulation invariably involves the solution of equations whose fidelity to the physical system is questionable, usually because limited numbers of degrees of freedom require the simulation to represent the aggregate effect of some number of unresolved processes (e.g., small-scale turbulence, microphysical, radiative, chemical, or biological processes). Thus simulation is necessarily approximate, and its use requires judgments. Such judgments are typically rendered in one of two ways:

  • a priori: wherein the simulations are evaluated based on the fidelity of the underlying approximations being used. Here one recalls the rich literature, primarily within the engineering community, dealing with the representation of unresolved scales in large-eddy simulation (LES). Many believe that A-L-O models ultimately must be rationalized in a similar way.

  • a posteriori: here the simulations are evaluated based on their ability to represent benchmark flows or suggest new phenomena that are subsequently found in nature. An underappreciated branch of a posteriori testing is what one might call discovery.
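
To make the flavor of an a priori test concrete, the sketch below filters a synthetic one-dimensional field and compares the resulting "exact" subfilter scalar flux against a simple gradient-diffusion model constructed from the resolved field alone. Everything in it (the synthetic signal, the filter width, and the model constants) is an illustrative assumption rather than anything taken from the workshop material; genuine a priori studies use DNS or dedicated field measurements.

```python
# A priori test of a gradient-diffusion closure for the subfilter scalar flux,
# in one dimension with synthetic fields.  All constants and the synthetic
# signal are illustrative assumptions, not values taken from this report.
import numpy as np

rng = np.random.default_rng(0)
n = 2048
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]

def synthetic_field(rng, n_modes=200):
    """Random-phase Fourier series with a crude power-law amplitude decay."""
    f = np.zeros(n)
    for k in range(1, n_modes):
        f += k ** (-5.0 / 6.0) * np.cos(k * x + rng.uniform(0.0, 2.0 * np.pi))
    return f

u = synthetic_field(rng)            # stand-in "velocity"
c = synthetic_field(rng)            # stand-in "scalar" (e.g., temperature)

def box_filter(f, width):
    """Periodic top-hat filter of the given width in grid points."""
    kernel = np.ones(width) / width
    return np.convolve(np.tile(f, 3), kernel, mode="same")[n:2 * n]

width = 33                          # filter width in grid points (illustrative)
delta = width * dx

u_f, c_f = box_filter(u, width), box_filter(c, width)

# "Exact" subfilter flux, computable only because the unfiltered fields are known:
flux_exact = box_filter(u * c, width) - u_f * c_f

# Gradient-diffusion model built from the resolved (filtered) fields alone:
Cs, Pr_t = 0.17, 0.7                # illustrative model constants
nu_t = (Cs * delta) ** 2 * np.abs(np.gradient(u_f, dx))
flux_model = -(nu_t / Pr_t) * np.gradient(c_f, dx)

# A priori skill measure: pointwise correlation of exact and modeled fluxes.
corr = np.corrcoef(flux_exact, flux_model)[0, 1]
print(f"a priori correlation, exact vs. modeled subfilter flux: {corr:+.3f}")
```

The printed correlation coefficient is the kind of pointwise skill measure an a priori study reports; an a posteriori test, by contrast, would run the closure inside a simulation and compare the resulting flow statistics against a benchmark.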

Neither strategy can be used in isolation. To the extent that the underlying equations are approximate, a posteriori tests will always be the ultimate measure of the fidelity of particular aggregation hypotheses. To the extent that one deviates from any given benchmark regime, a priori statements of accuracy become increasingly important. Both strategies provide new and fertile territory for observation and experiment, although for many flows only one or the other strategy avails itself. Because of its interplay with both strategies, LES of atmospheric boundary layer flows is unique among the many types of flow simulation. Typically its equation sets are the best justified, as they often retain the actual flow as a limit, and the flows it simulates often encompass scales that are likely to allow benchmark solutions to be directly observed or perhaps reproduced in the laboratory (e.g., wind tunnel tests of flow over terrain, the development of dry convective boundary layers, relatively homogeneous cloud-topped boundary layer flows such as are evident in stratocumulus regimes or the trades). Because it naturally distills many of the questions and strategies that might be used to rationalize other forms of simulation, it provides a useful lens through which to address the broader challenge stated above.

2  Although much is made of Moore's law and its implications for simulation, the impact of this exponential has been nearly as profound on observational technology, especially remote sensing. Remote sensing's ability to resolve flow features in multiple dimensions and over a variety of scales complements these recognized attributes of simulation.

In studies of cloud-topped boundary layers, LES has been used to great effect to refine our understanding of important cloud regimes. We are only beginning to use observations to test this understanding: the recent DYCOMS-II field experiment is an example of an a posteriori test in the stratocumulus regime, and the pending RICO field program is designed to do the same for the trade cumulus regime. DYCOMS-II has demonstrated that LES is profoundly sensitive to the representation of small-scale turbulence near phase boundaries and in the presence of intense stratification, and in so doing it raises the profile of such questions for the observational and experimental communities. Nonetheless, the interplay between observation and simulation continues to be extraordinarily fruitful for studies of the atmospheric boundary layer. Given a sufficiently nurturing political and cultural environment, A-L-O models should begin reaping these fruits in the coming decade.
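
The bulk quantities against which such a posteriori comparisons are made are themselves simple to state. As one worked example, the entrainment rate of a quasi-steady stratocumulus-topped boundary layer is commonly diagnosed from the mixed-layer budget dz_i/dt = w_e - D*z_i. The short sketch below evaluates it with placeholder numbers chosen only to be of roughly the right magnitude; they are assumptions for illustration, not DYCOMS-II results.

```python
# Mixed-layer entrainment diagnostic:  dz_i/dt = w_e - D*z_i,  so
#   w_e = dz_i/dt + D*z_i.
# The numbers below are placeholders of roughly the right order of magnitude
# for subtropical stratocumulus; they are NOT observed DYCOMS-II values.
z_i = 850.0        # inversion (boundary layer) height [m]
dzi_dt = 1.0e-4    # observed deepening rate of the layer [m/s]
D = 3.75e-6        # large-scale horizontal divergence [1/s]; subsidence = -D*z_i

w_e = dzi_dt + D * z_i
print(f"diagnosed entrainment rate w_e = {w_e * 1000.0:.2f} mm/s")
```

In practice the diagnosed w_e is only as good as the assumed large-scale divergence, which is one reason entrainment remains a demanding target for both models and observations.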

As a supplement to this note I also offer two articles, one published and one in press. The first gives a more philosophical overview of the use of LES and its relation to other, more traditional ways of doing science; the second presents the example of a posteriori testing referred to above: (1) B. Stevens and D. H. Lenschow, 2001, "Observations, experiment and large-eddy simulation," Bull. Amer. Meteorol. Soc. 82:283-294, and (2) B. Stevens et al., 2004, "Observations of nocturnal stratocumulus as represented by large-eddy simulation," Mon. Wea. Rev. (in press).

INTERROGATION AND PARAMETERIZATION OF ATMOSPHERIC AND OCEANIC BOUNDARY LAYER PROCESSES

Peter Sullivan, National Center for Atmospheric Research

The atmospheric and oceanic boundary layers (ABL and OBL) are shallow but critical components of geophysical flows. In these thin layers turbulent mixing promotes the exchange of momentum and scalars between the atmosphere and land surfaces and couples the atmosphere and ocean at the air-sea interface. The ABL and OBL respond to large-scale forcing by generating three-dimensional, time-dependent turbulent motions. They are rich in structure, with the dominant turbulent coherent structures varying with stratification: large-scale thermal plumes fill the ABL under unstable stratification; elongated streaky structures and hairpin vortices dominate near-wall flow dynamics in neutral flows; and intermittent small-scale structures control mixing in the stable regime. In the marine boundary layers, surface gravity waves are critical components. They are visible signatures of coupling between the atmosphere and ocean, and they promote global mixing in the OBL through the formation of Langmuir circulations and through wave breaking. Progress in understanding boundary layer mechanics is being made through field observations, laboratory studies, and numerical simulations. High-Re (Reynolds number) large-eddy simulation is one of the important tools employed in current boundary layer research.
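
One conventional way to make the stratification dependence described above quantitative is the Obukhov length, L = -u_*^3 θ_v / (κ g (w'θ_v')_s), whose sign and magnitude separate the convective, near-neutral, and stable regimes. The sketch below evaluates it for three assumed sets of surface values; the numbers are illustrative, not measurements.

```python
# Classify the near-surface stability regime via the Obukhov length
#   L = -u_*^3 * theta_v / (kappa * g * w'theta_v'),
# a standard scale for the stratification dependence described above.
# All flux and scale values here are assumed, for illustration only.
KAPPA = 0.4          # von Karman constant
G = 9.81             # gravitational acceleration [m/s^2]

def obukhov_length(u_star, theta_v, wtheta_v):
    """Obukhov length [m]; wtheta_v is the surface virtual-heat flux [K m/s]."""
    return -u_star ** 3 * theta_v / (KAPPA * G * wtheta_v)

cases = {
    "convective (daytime over land)": (0.3, 300.0, 0.20),
    "near-neutral (windy, weak flux)": (0.6, 290.0, 0.005),
    "stable (clear nocturnal)": (0.15, 285.0, -0.02),
}
for name, (u_star, theta_v, wtheta_v) in cases.items():
    L = obukhov_length(u_star, theta_v, wtheta_v)
    print(f"{name:32s}  L = {L:9.1f} m")
```

Small negative L corresponds to the plume-dominated convective regime, large |L| to near-neutral shear-driven flow, and small positive L to the stable case.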

The enormous spectrum of scale interactions in high-Re turbulent flows and our inability to solve the governing dynamical equations require that empirical parameterizations be developed for boundary layer flows. In large-scale climate and mesoscale codes these parameterizations are by necessity ensemble-average (or Reynolds-average) closures that represent bulk features of the ABL and OBL such as boundary layer depth, surface fluxes, and entrainment rates. In general, bulk boundary layer parameterizations are not well tested and can break down catastrophically for certain flows, stable boundary layers being a prime example. The stable ABL parameterization is a leading suspect in the poor performance of large-scale numerical weather prediction models in cold regions. Also, new boundary layer dynamics, such as the combined action of Langmuir circulations and wave breaking in the OBL, can lead to nonlinear behavior. Thus boundary layer closures need to be tested over a wide set of mixed flow regimes.
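
The surface fluxes referred to above are typically supplied by bulk aerodynamic formulas of the form tau = rho C_D U^2 and H = rho c_p C_H U (theta_s - theta_a), which are exactly the kind of ensemble-average closure in question. The sketch below uses constant exchange coefficients purely for illustration; operational schemes make C_D and C_H functions of stability and surface roughness, and it is in the stable limit that such formulations are least trustworthy.

```python
# Bulk aerodynamic estimates of surface momentum and sensible-heat flux,
#   tau = rho * C_D * U^2,        H = rho * c_p * C_H * U * (theta_s - theta_a).
# Constant exchange coefficients are assumed here purely for illustration;
# real schemes make C_D and C_H depend on stability and roughness.
RHO = 1.2        # air density [kg/m^3]
CP = 1004.0      # specific heat of air at constant pressure [J/(kg K)]

def bulk_fluxes(U, theta_s, theta_a, C_D=1.2e-3, C_H=1.1e-3):
    tau = RHO * C_D * U ** 2                      # momentum flux [N/m^2]
    H = RHO * CP * C_H * U * (theta_s - theta_a)  # sensible heat flux [W/m^2]
    return tau, H

tau, H = bulk_fluxes(U=8.0, theta_s=290.0, theta_a=288.5)
print(f"tau = {tau:.3f} N/m^2,  H = {H:.1f} W/m^2")
```

Making the exchange coefficients depend on a stability measure such as the Obukhov length is where most of the practical difficulty, and most of the stable-regime trouble, enters.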

LES is not immune from the subgrid-scale parameterization problem. However, by design LES explicitly computes the large-scale, most energetic turbulent motions and uses a subgrid-scale closure for the small scales. In well-resolved regions of a turbulent flow the subgrid-scale motions are small, and LES solutions are generally insensitive to the details of the closure. However, in flows with laminar-to-turbulent transition, strong stable stratification, or near-solid boundaries, the subgrid motions can become large, and their impact on LES solutions is generally poorly understood. Hence a deeper understanding of subgrid-scale parameterizations and their interactions with the resolved scales is also required for LES. Improving LES parameterizations requires moving away from traditional eddy-viscosity approaches. Novel field campaigns such as the Horizontal Array Turbulence Study (HATS), theoretical approaches, and direct numerical simulation can all provide insight and databases for evaluating new subgrid-scale closures for LES. This research avenue is clearly exciting, as it employs a beautiful blend of theory, experimentation, and numerical modeling.
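
The traditional eddy-viscosity approach mentioned above is typified by the Smagorinsky closure, nu_t = (C_s Δ)^2 |S| with |S| = sqrt(2 S_ij S_ij). The sketch below evaluates it at a single grid point from an assumed resolved velocity-gradient tensor; the tensor entries, the constant, and the grid scale are illustrative assumptions only.

```python
# Smagorinsky subgrid eddy viscosity  nu_t = (C_s * Delta)^2 * |S|,
# with |S| = sqrt(2 S_ij S_ij), evaluated at one grid point from an assumed
# (trace-free) resolved velocity-gradient tensor.  All numbers are illustrative.
import numpy as np

Cs = 0.17                      # Smagorinsky constant (typical literature value)
delta = 50.0                   # grid/filter scale [m]

# Assumed resolved velocity-gradient tensor du_i/dx_j at one grid point [1/s]
grad_u = np.array([[ 2.0e-3, -1.0e-3,  0.5e-3],
                   [ 0.5e-3,  1.0e-3, -2.0e-3],
                   [-0.5e-3,  1.5e-3, -3.0e-3]])

S = 0.5 * (grad_u + grad_u.T)             # resolved strain-rate tensor
S_mag = np.sqrt(2.0 * np.sum(S * S))      # |S| = sqrt(2 S_ij S_ij)
nu_t = (Cs * delta) ** 2 * S_mag          # subgrid eddy viscosity [m^2/s]
tau = -2.0 * nu_t * S                     # modeled deviatoric SGS stress / rho,
                                          # which is what feeds back to the flow
print(f"|S| = {S_mag:.2e} 1/s,  nu_t = {nu_t:.2f} m^2/s")
```

Closures of this kind tie the subgrid stress rigidly to the local resolved strain, which is precisely the assumption that a priori studies such as HATS are designed to probe.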

