From Research to Reward

A NATIONAL ACADEMY OF SCIENCES SERIES
ABOUT SCIENTIFIC DISCOVERY AND HUMAN BENEFIT

Weather Warning: How Physics, Data, and Computers Combine to Make Better Prediction Possible


About 50 people were watching Christmas movies in a theater in Van Wert, a town in northwestern Ohio, at 3 o’clock in the afternoon on November 10, 2002. Absorbed in the movies, they had no idea that a powerful tornado was heading straight for their town. Twenty-eight minutes later, winds in excess of 200 miles per hour tore the roof and walls to pieces. The twister pitched a Chevrolet sedan onto the plush blue front-row seats where children had been watching “The Santa Clause 2.”

The Van Wert tornado as captured on video by an Ohio state trooper.

Fortunately, by then the audience was out of harm’s way, sheltering between the sturdy cinderblock walls of the building’s lobby and restrooms—the only part of the structure that survived the storm. The theater manager, Scott Shaffer, had led everyone to safety. Decades of improvements in the science of weather forecasting provided the information he needed to take quick action.

Thirty years earlier, three scientists at the National Oceanic and Atmospheric Administration had discovered a telltale pattern of winds inside storms that are about to produce a tornado. This “tornadic vortex signature” was evident in Doppler weather radar scans of the Van Wert storm. Unlike weather radars installed before about 1985, which revealed only a storm’s location and intensity, Doppler radars reveal the speed and direction of the winds inside storms, and the National Weather Service equipped all 159 of its stations with the technology by the mid-1990s. That upgrade helped give Van Wert its 28 minutes of warning that November day in 2002. According to one study, installing those Doppler radars prevented 79 deaths and 1,050 injuries in the late 1990s, casualties that probably would have occurred had only the older radars been in use.

Van Wert County’s recently installed warning system, which relayed alerts based on the new radars, supplied stores and other businesses with alarms that sounded whenever a tornado warning was issued. When Shaffer heard the theater’s alarm start blaring at 3:02 p.m., he knew a tornado was likely and that he had to act fast.

Despite the extensive damage, no lives were lost when a tornado struck the Van Wert movie theater in Ohio.

Line graph showing improvements in the average National Hurricane Center track forecast for tropical storms and hurricanes by decade.

So what saved the day in Van Wert was a virtuous cycle: basic physics research informs applied science, which in turn yields better information for the public, which then bolsters public support for research. That cycle has run throughout the past 100 years, steadily improving the quality and usefulness of weather forecasts. Since the 1960s, the accuracy of forecasts for 3 to 10 days ahead has improved by about 1 day per decade, so that today’s 6-day forecast is as accurate as the 5-day forecast of 10 years ago. Compounded over five decades, that means a forecast of next week’s weather in 2019 is about as accurate as a forecast for the next day was in 1969.

Sometimes that kind of progress has saved lives directly, as it did in Van Wert. In other cases, the benefits are indirect: The better forecasts become, the better the odds that forecasters and government officials can succeed in the difficult task of persuading the public to protect itself from weather-related risk. Trusting forecasts results in better planning for travel or outdoor activities and greater compliance with evacuation plans during weather emergencies. In the long term, it can also lead to improved zoning and planning in cities and towns so future development minimizes foreseeable harm from weather events.

Achieving today’s level of accuracy in weather forecasting took decades of work, combining three critical factors: research insights from physics; improvements in satellites, radar, and many other technologies for collecting weather data; and advances in computing. “All three of these things have to march forward in parallel,” explains William B. Gail, Chief Technology Officer at the Global Weather Corporation in Boulder, Colorado. “The physics, the observations, and the computation.” All three contributed to the modern forecast’s basic tool: the model—“a replica, in a computer, of our atmosphere,” says Gail.

For nearly 100 years, scientists have been using equations to represent changes in the weather over time. Since they can’t track every molecule of air at every second, they instead divide the atmosphere from the ground up to the edge of space into a grid of imaginary cubes. “If you were on the Moon looking down at the Earth and the grid was displayed, it would look like a window screen,” says Lance F. Bosart, Distinguished Professor in the Department of Atmospheric and Environmental Sciences at the State University of New York at Albany. “At each intersection point between the horizontal and vertical lines on the screen, you’d have data—temperature, rainfall, wind, and so on,” says Bosart.

Plugging those data into equations that represent laws of nature (such as the conservation of mass, the laws of thermodynamics, and Newton’s laws of motion), then using computers to calculate the atmosphere’s evolution according to the data and equations, is called numerical weather prediction (NWP). This method of weather forecasting is one of the 20th century’s greatest achievements in computer science, not only in its technical and scientific scope but also in its practical benefits to humanity. The complexity of the computations NWP tackles in global weather prediction is akin to the challenges of simulating the human brain or understanding the evolution of the early Universe.
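In miniature, the procedure looks something like the Python sketch below. It is a deliberately toy example, not a real NWP model (which solves many coupled three-dimensional equations for momentum, mass, energy, and moisture): it stores a temperature field on a one-dimensional grid and steps it forward in time using a discretized version of a single physical law, advection by a constant wind. All of the numbers are assumptions chosen for illustration.

```python
import numpy as np

# A toy illustration of the NWP idea: store a field on a grid, then
# step it forward in time with a discretized physical law. Here the
# "law" is 1-D advection of temperature by a constant wind,
#   dT/dt = -u * dT/dx,
# solved with an upwind finite-difference scheme on a periodic grid.
# All numbers are assumed for illustration.

nx = 100                 # number of grid points
dx = 10_000.0            # grid spacing in meters (10 km, as in the text)
u = 10.0                 # wind speed in m/s (assumed)
dt = 600.0               # time step in s; u*dt/dx = 0.6 < 1 keeps it stable

x = np.arange(nx) * dx
T = 280.0 + 5.0 * np.exp(-((x - 300_000.0) / 50_000.0) ** 2)  # a warm blob

for _ in range(144):     # 144 steps of 600 s = a 24-hour "forecast"
    # upwind difference: each point looks upstream, where the wind comes from
    T = T - u * dt / dx * (T - np.roll(T, 1))

print(f"warmest point is now at x = {x[T.argmax()] / 1000:.0f} km")
```

Real forecast models do the same thing at vastly larger scale: millions of grid points, dozens of variables per point, and far more sophisticated numerical schemes.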

Edward Lorenz


Using NWP, meteorologists generate a prediction of what the weather measurements will be at a future time: an hour, a day, or a week ahead. But that first forecast is only the beginning. For one thing, the cubes of the grid are large, 6.2 miles (10 kilometers) on a side in many models, which isn’t a high enough resolution to show everything that is happening inside each cube. So if there is a thunderstorm in a single cube, Gail explains, measurements can’t necessarily capture all of the ways the top of the storm behaves differently from the bottom.

This is important because even small differences in measurements have big consequences for the future. For example, whether a hurricane batters a city or gives it a wide berth depends on slight differences in temperature, humidity, wind, and other measurements at various points in and around the storm. It is no accident that it was a meteorologist, Edward Lorenz, who coined the term “butterfly effect.” The phrase poetically imagines a butterfly flapping its wings in Brazil eventually setting off a tornado in Texas. (Lorenz began work on the idea when he noticed a vast change in the outcome of a weather model after he rounded one variable from 0.506127 to 0.506.)
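Lorenz’s observation is easy to reproduce with his later, simplified three-variable model, now known as the Lorenz-63 system. The sketch below is purely illustrative: it borrows the rounded value from the anecdote as one starting coordinate, chooses the other numbers arbitrarily, and shows how two runs that begin almost identically drift completely apart.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (illustrative)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

# Two runs whose starting points differ only by rounding one number,
# echoing Lorenz's 0.506127 -> 0.506; the other coordinates are arbitrary.
a = np.array([0.506127, 1.0, 1.0])
b = np.array([0.506, 1.0, 1.0])

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 600 == 0:
        print(f"t = {step * 0.01:4.0f}: separation = {np.linalg.norm(a - b):9.4f}")
```

After enough model time the two trajectories are no more alike than two randomly chosen states of the system, which is exactly why small measurement errors put a hard limit on how far ahead the weather can be predicted.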

To combat uncertainty and the butterfly effect, forecasters run their models repeatedly, varying their starting assumptions. “You might change the temperature at each grid point by a minute amount,” Bosart says. This “ensemble” approach yields a range of possible futures. So, for instance, the National Centers for Environmental Prediction ensemble will run a model 20 times. “That's like a horse race with 20 horses,” Bosart says. “Each horse represents a slightly different model solution. At the start of the race, all the horses are in the same position. At the end of the race, they’re dispersed. When you begin each of 20 model runs with a slightly different initial condition and project them out to 16 days, you construct an envelope of possibilities from which you can compute a probability distribution of equally likely weather forecast scenarios. Each of the 20 model runs coalesces around a different forecast solution and the difference between them grows with time.” A close horse race, in which a group of the model runs is closely packed, suggests forecasters can have high confidence in a forecast. If the model outcomes are spread far apart, the forecast has high uncertainty.
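The same toy system can stand in for the ensemble method Bosart describes. In the hedged sketch below, only the 20 members and the idea of minute perturbations come from the text; the base state, perturbation size, and forecast lengths are assumptions. Each member starts from a slightly different initial condition, and the spread among the members at the end of each run plays the role of forecast uncertainty.

```python
import numpy as np

rng = np.random.default_rng(42)

def lorenz_run(state, n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz-63 system forward with forward Euler."""
    x, y, z = state
    for _ in range(n_steps):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
    return np.array([x, y, z])

best_guess = np.array([1.0, 1.0, 20.0])           # "today's" analysis (assumed)
members = [best_guess + rng.normal(0.0, 1e-3, 3)  # 20 perturbed starting points
           for _ in range(20)]

for n_steps in (500, 1500, 3000):                 # short, medium, long forecasts
    finals = np.array([lorenz_run(m, n_steps) for m in members])
    spread = finals.std(axis=0).mean()            # small spread = high confidence
    print(f"{n_steps:5d} steps: ensemble spread = {spread:7.3f}")
```

A tight cluster of final states is the closely packed horse race: high confidence. A wide scatter signals a forecast to treat with caution.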

The need to run models again and again, at ever-finer resolution, is why computing power is so important. “When you increase your resolution from, say, 10 kilometers to 5 kilometers, your computer workload goes up by an order of magnitude,” says Bosart. “That’s why you keep needing new and faster supercomputers.” Another reason for forecasting’s steady progress against uncertainty is the constant increase, during the past 50 years, in the amount and quality of information that forecasters can use. Today’s models get their data from a rich array of observation technologies, from weather balloons to commercial airliners to satellites. This makes it possible to draw fine distinctions that were impossible until quite recently.
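Bosart’s order-of-magnitude figure follows from simple counting, as this back-of-envelope sketch shows. It assumes, as is typical, that the time step must shrink along with the grid spacing to keep the computation stable; real models differ in the details.

```python
# Why halving grid spacing costs roughly an order of magnitude: twice as
# many points east-west, twice as many north-south, and (to stay
# numerically stable) roughly half the time step, so the work grows by
# about 2 * 2 * 2 = 8x -- more if vertical resolution increases too.

def relative_cost(dx_old_km, dx_new_km, refine_vertical=False):
    r = dx_old_km / dx_new_km      # refinement factor; 10 km -> 5 km gives r = 2
    cost = r ** 3                  # x points * y points * number of time steps
    if refine_vertical:
        cost *= r                  # optional extra factor for vertical levels
    return cost

print(relative_cost(10, 5))                        # 8.0
print(relative_cost(10, 5, refine_vertical=True))  # 16.0
```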


In the winter of 2017, for example, Mitchel Volk, Deputy Director of Research and Analytics in the Medical Division at the New York City Department of Sanitation, had snow forecasts for the New York City area that specified precisely where bands of heavier snowfall would occur within a larger storm. The finer-grained forecast told him which parts of the city would get larger amounts of snow than others—information that helps the Department of Sanitation plan the most efficient possible snow clearing and salt spreading. This is an example of how constant progress leads to constant improvement. It was something Volk could do in 2017 that he couldn’t do the year before.

As forecasts become more accurate, they become useful for more than preventing disasters like the one that almost befell the moviegoers in Van Wert. Today, many weather forecasters work for private-sector companies ranging from major electric utilities, where they predict the availability of renewable energy resources like wind and solar, to food manufacturer Mars, Inc., where they forecast cocoa production. At FedEx, weather forecasters predict winds, rain, and lightning that affect deliveries, and at Walmart they answer questions ranging from “When is it time to start stocking patio furniture?” to “Which areas will need post-hurricane food and repair supplies?” “Prediction is really getting into other sectors like services and finance, where there are huge potential benefits as we improve,” says Jeffrey K. Lazo, Director of the Collaborative Program on the Societal and Economic Benefits of Weather Information at the National Center for Atmospheric Research in Boulder, Colorado. “Even to sectors that historically haven’t been paid as much attention by forecasters, like retail and real estate.”

The virtuous cycle that has made forecasting better will continue—if research in basic science, the technology of data collection, and computation continues to receive the necessary support. “When you go to higher resolutions in a model,” Gail says, “you can introduce new physics. For example, you can begin to model the detailed states of water and energy within a thunderstorm rather than resorting to averages over the entire storm cell.” More and better data are enabling more accurate and comprehensive models, which are generating new scientific questions. “You may not notice it from one year to the next,” says Bosart. “But when you look back 5 or 10 years, our skill level has noticeably improved.”

Continued advances in weather prediction remain necessary. Not every severe weather event ends as happily as the one the moviegoers in Van Wert survived in 2002. As long as there are tornadoes, hurricanes, flash floods, and other life-threatening weather events, the ability to escape their destructive power will depend on reliable forecasts that afford the precious moments needed to heed dire warnings.

Doppler radar depicting Hurricane Harvey, which struck Texas in August 2017.




This article was written by David Berreby for From Research to Reward, a series produced by the National Academy of Sciences. This and other articles in the series can be found at www.nasonline.org/r2r. The Academy, located in Washington, DC, is a society of distinguished scholars dedicated to the use of science and technology for the public welfare. For more than 150 years, it has provided independent, objective scientific advice to the nation.


Please direct comments or questions about this series to Cortney Sloan at csloan@nas.edu.

Photo and Illustration Credits:
Van Wert theater: NOAA; forecast accuracy line graph: adapted from a graphic by NOAA’s National Hurricane Center; Edward Lorenz: Copyright University Corporation for Atmospheric Research (UCAR). By Curt Zukoski, licensed under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) License, via OpenSky; Hurricane Harvey: NOAA/National Weather Service.

© by the National Academy of Sciences. All rights reserved. These materials may be reposted and reproduced solely for non-commercial educational use without the written permission of the National Academy of Sciences.