It’s supposed to be showery and windy today, according to the National Weather Service, with up to a foot of new snow in the Cascades. But then these are the people who believe that, in 2013, this is a perfectly acceptable graphic (see right) for a government agency to produce. (It’s good that “impacts” is in yellow, all caps, and underlined, but you notice right away that it still needs a <blink> tag.)
University of Washington meteorologist Cliff Mass has bigger fish to fry with the NWS than yellow type on a red background; he's produced a series of posts on what he calls a "weather prediction gap." The European Centre for Medium-Range Weather Forecasts (ECMWF), he argues, is turning out verifiably better predictions of U.S. weather than the U.S. Global Forecast System (GFS). Hurricane Sandy? The European model nailed it.
In his latest post, Mass discusses "numerical weather prediction," or forecasts based on running mathematical models of the atmosphere, which in his mind is the future of meteorology. High-resolution modeling can pick out the chance of freak storms developing, but the process is supercomputer-time intensive, i.e., expensive. Ironically, far more processing time is currently being given over to climate change research (ironic, that is, given the lack of effect the data generated is having on policy).
If it's good enough for climate change, suggests Mass, with all the uncertainty of looking decades ahead, surely it's worth funding a high-resolution look at the next 72 hours. As it is, this de facto outsourcing of meteorological computing means we're spending more and more money on weather prediction "products" from abroad, which advances those efforts at the expense of our own.
But even that is not the end of the long pilgrimage toward accurate prediction. People remain a stumbling block when it comes to probabilistic forecasts, which is what the models produce. (A model might literally be run 100, even 1,000 different times to create a percent-chance view of the weather.)
Mass references a UW study (pdf), led by psychology professor Susan Joslyn, that found people struggle with what the probability in a "75 percent chance of rain" actually applies to. Respondents could easily be led to believe it referred to the amount of time it might rain, or to the portion of a given area it might rain over, instead of what's really meant: under these atmospheric conditions, it would rain 75 times out of 100.
It's a statement that makes much more sense once you realize that weather prediction happens on a computer these days; it's not a guesstimate. Variables are plugged in and tested, and 75 times out of 100, the model rains. (In Seattle, that's probably true of 75 percent of forecasts. No, we kid.)
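If you want to see that arithmetic in miniature, here's a back-of-the-envelope Python sketch of the ensemble idea. Everything in it is made up for illustration: the "model," its saturation threshold, and the starting numbers are toys, nothing like a real NWP system. But the probability-of-precipitation bookkeeping is the same: perturb the starting conditions a little each run, run the model many times, count how often it rains.

```python
import random

def toy_model(humidity, lift):
    """A stand-in for a weather model (hypothetical): it 'rains' if the
    perturbed atmosphere crosses a made-up saturation threshold."""
    return humidity + lift > 1.0

def probability_of_rain(base_humidity, base_lift, runs=1000):
    """Run the toy model many times with slightly perturbed inputs and
    return the fraction of runs that produced rain."""
    rainy = 0
    for _ in range(runs):
        # Nudge the initial conditions to reflect observational uncertainty.
        humidity = base_humidity + random.gauss(0, 0.05)
        lift = base_lift + random.gauss(0, 0.05)
        if toy_model(humidity, lift):
            rainy += 1
    return rainy / runs

# Under these (invented) atmospheric conditions, roughly 75% of runs rain.
print(f"Chance of rain: {probability_of_rain(0.75, 0.30):.0%}")
```

The percentage that falls out is a frequency over model runs, not over hours in the day or square miles of the city, which is exactly the distinction Joslyn's subjects kept missing.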
Write the study authors: “If the user misinterprets the probabilistic forecast as deterministic and no precipitation is observed, it could be regarded as a false alarm, reducing trust in subsequent forecasts.” For a dramatic portrayal of this exact dynamic, we refer you to “Coffee’s for Meteorologists Only.”
Tomorrow, by the way, ought to be dry and cloudy.