One of the things Brad DeLong convinced me of during our talk yesterday was that I should be following Nate Silver (which I now am). I've always had only a vague interest in electoral politics, so I've gotten my Nate Silver secondhand, from other people who link to him. No longer.
This is an interesting recent post discussing improvement in weather forecasting over the last twenty years. Nate writes:
"[If] prediction is the truest way to put our information to the test, we have not scored well. In November 2007, economists in the Survey of Professional Forecasters — examining some 45,000 economic-data series — foresaw less than a 1-in-500 chance of an economic meltdown as severe as the one that would begin one month later. Attempts to predict earthquakes have continued to envisage disasters that never happened and failed to prepare us for those, like the 2011 disaster in Japan, that did."
But there is an exception...
"Weather forecasts are much better than they were 10 or 20 years ago. A quarter-century ago, for instance, the average error in a hurricane forecast, made three days in advance of landfall, was about 350 miles. That meant that if you had a hurricane sitting in the Gulf of Mexico, it might just as easily hit Houston or Tallahassee, Fla. — essentially the entire Gulf Coast was in play, making evacuation and planning all but impossible."
My first thought in reading this is to ask whether Nate is comparing apples and oranges. The simplest question is whether predicting a financial crisis a year out is the same as predicting a hurricane 350 miles out. I don't know; maybe it is. But that's fairly important information! Everything has its own time horizon of precision.
Forecasters who look at hurricane paths are really assessing unconditional probabilities. We have this hurricane. What will it do? I don't think (although I'm no meteorologist) there are any large exogenous events that determine it. The problem at hand is characterizing and predicting the behavior of a given hurricane.
This is not true of macroeconomic forecasts. These forecasts are highly conditional on other exogenous events, such as whether a systemically connected investment bank will fail or whether Congress will manage to pass a rescue package on September 29th (when it failed) or October 3rd (when it actually passed). That sort of thing can make a big difference. So it seems like we're dealing with two different animals here, and talking about the quality of a forecast may be less meaningful than thinking about the forecastability of the thing being forecast.
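One way to see the point: a conditional forecast is really a mixture over exogenous scenarios, so the headline probability can swing wildly with events outside the model. A minimal sketch of the arithmetic (the scenario names and all the numbers here are made up for illustration, not actual forecast data):

```python
# Law of total probability: P(crisis) = sum over scenarios s of
# P(crisis | s) * P(s). The numbers below are purely illustrative.
p_scenario = {"rescue package passes": 0.7, "rescue package fails": 0.3}
p_crisis_given = {"rescue package passes": 0.05, "rescue package fails": 0.60}

p_crisis = sum(p_scenario[s] * p_crisis_given[s] for s in p_scenario)
print(round(p_crisis, 3))  # 0.7*0.05 + 0.3*0.60 = 0.215
```

Shift the odds that Congress acts and the "forecast" changes substantially even though nothing about the forecaster's skill has changed — which is why judging the forecast in isolation can mislead.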
He also provides an explanation for how weather forecasting has improved:
"But there are literally countless other areas in which weather models fail in more subtle ways and rely on human correction. Perhaps the computer tends to be too conservative on forecasting nighttime rainfalls in Seattle when there’s a low-pressure system in Puget Sound. Perhaps it doesn’t know that the fog in Acadia National Park in Maine will clear up by sunrise if the wind is blowing in one direction but can linger until midmorning if it’s coming from another. These are the sorts of distinctions that forecasters glean over time as they learn to work around potential flaws in the computer’s forecasting model, in the way that a skilled pool player can adjust to the dead spots on the table at his local bar."
This is an excellent point, but surely this can't be the whole story. Weather forecasters have always had this sort of tacit knowledge. It's not like meteorologists have suddenly become more aware of these local facts that they can bring to bear on forecasts.
I'm guessing it has a lot more to do with tremendous advances in the availability of data and the ability to simulate weather events. And again that's something macroeconomists don't have. Meteorologists have several hurricanes a year to work from and a lot of real time data. Macroeconomists don't have that.