Wednesday, September 12, 2012

AU lunch seminar: uncertainty vs. disagreement

One frustrating thing about my Sloan grant is that the data is very confidential, so it has to be kept on a non-networked computer in a locked office at AU. That means that to work with it I have to actually go downtown, an hour commute that I usually avoid (the primary breadwinner in the house has a stronger claim to the car than the grad student). But a nice thing about the arrangement is that I'm putting in three full days a week of face time at the department, and since I'm around midday on Wednesdays I can attend the lunch seminars.


So I'm going to try to do blog posts on those every week now.


I also don't actually spend all my time reading, learning, or working on the stuff I blog about. The blog is in many ways an opportunity for me to get into stuff that is not part of my day-to-day work. So these posts are going to be an opportunity for me to discuss some of the things that I spend the rest of my day mulling over.

*****

Today we heard Robert Rich of the New York Fed present his paper, "The Measurement and Behavior of Uncertainty: Evidence from the ECB Survey of Professional Forecasters". His task was essentially to test the proxy measures of uncertainty used in the literature against an ECB survey that asks forecasters for both point estimates and probability distributions for expectations about the future behavior of key macro variables.

A lot of studies that talk about the impact of uncertainty on the economy use the variance of the point estimates of surveyed forecasters to determine the level of uncertainty. So if point estimates are highly dispersed among the forecasters, researchers have considered that an indication of uncertainty about what's going on.

What we're really interested in is the forecasters' own subjective assessments of the precision of those estimates. When you use dispersion in the point estimates as a proxy, you're just assuming that dispersion in the first moment across forecasters correlates with broader density functions for the individual forecasters (which is what we care about). But that need not be the case at all. Imagine person A predicts that some variable will take the value x a year from now and person B predicts 1.05x, each prediction drawn from a subjective density function with variance V. What Rich calls "disagreement" here (dispersion in the point estimates) is 5%: person B's prediction is 5% higher than person A's (in the paper, of course, he uses other dispersion measures). If dispersion of the point estimates is a good proxy for uncertainty, then V should increase as the dispersion of the point estimates increases.
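
Here's a minimal numerical sketch of that point (made-up numbers, not anything from the actual survey): disagreement and average subjective uncertainty are computed from different things, and nothing forces them to move together.

```python
import numpy as np

def disagreement(points):
    """Cross-forecaster dispersion of the point estimates (std. dev.)."""
    return np.std(points)

def avg_uncertainty(variances):
    """Average variance of the forecasters' own subjective densities."""
    return np.mean(variances)

# Scenario 1: forecasters disagree a lot, but each one is individually confident.
points_1 = [2.0, 2.5, 1.5, 2.8, 1.2]     # dispersed point forecasts
vars_1 = [0.01] * 5                       # tight individual densities (small V)

# Scenario 2: forecasters agree closely, but each one is individually unsure.
points_2 = [2.0, 2.02, 1.98, 2.01, 1.99]
vars_2 = [1.0] * 5                        # wide individual densities (large V)

for label, p, v in [("high disagreement, low uncertainty", points_1, vars_1),
                    ("low disagreement, high uncertainty", points_2, vars_2)]:
    print(f"{label}: disagreement = {disagreement(p):.3f}, "
          f"average uncertainty = {avg_uncertainty(v):.3f}")
```

In the first scenario the proxy screams uncertainty that no individual forecaster actually feels; in the second it misses uncertainty that every forecaster feels.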

But does it in practice?

Rich tests the common proxy with an ECB forecaster survey that asks forecasters to (1) provide a point estimate of important macro variables and (2) fill out a histogram that attaches probabilities to various intervals around that point estimate. This allows him to directly compare uncertainty to point dispersion.

He uses a couple of estimators that are common in the literature, including an interquartile range, variance measures, and errors relative to realized values. You can go to the paper for specific findings and some great graphics, but the general conclusion is that dispersion of the point estimates doesn't provide a good proxy.
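
For concreteness, here's roughly how you'd pull uncertainty measures out of a single forecaster's histogram response. This is a sketch with made-up bins and probabilities; the paper's exact conventions (bin midpoints, open-ended bins, and so on) will differ.

```python
import numpy as np

def histogram_moments(bin_edges, probs):
    """Mean and variance of a forecast histogram, treating each bin's
    probability as massed at the bin midpoint (a common simplification)."""
    edges = np.asarray(bin_edges, dtype=float)
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()          # normalize, in case answers don't sum to 1
    mids = (edges[:-1] + edges[1:]) / 2
    mean = np.sum(probs * mids)
    var = np.sum(probs * (mids - mean) ** 2)
    return mean, var

def histogram_iqr(bin_edges, probs):
    """Interquartile range implied by the histogram, interpolating the CDF
    linearly within bins."""
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()
    cdf = np.concatenate([[0.0], np.cumsum(probs)])
    q25, q75 = np.interp([0.25, 0.75], cdf, bin_edges)
    return q75 - q25

# A hypothetical response: probabilities that next year's inflation falls in
# each half-point bin from 0% to 3%.
edges = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
answer = [0.05, 0.10, 0.30, 0.35, 0.15, 0.05]

mean, var = histogram_moments(edges, answer)
print(f"mean = {mean:.2f}, variance = {var:.3f}, "
      f"IQR = {histogram_iqr(edges, answer):.2f}")
```

The variance and IQR of that histogram are direct reads on the forecaster's own uncertainty, which is exactly what the point-dispersion proxy is supposed to stand in for.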

What does this mean? Take uncertainty studies that rely on this proxy with a grain of salt - it can be very tough to get at.

I had a couple reactions, none of which I shared during the seminar but which I'll present here:

1. First, of course, forecaster uncertainty and economic agent uncertainty are two different things. Presumably we really care about economic agent uncertainty. So we should care about this only insofar as the properties of forecasters' point estimate dispersion are similar to the properties for other actors. I still think the paper is important for talking about economic agents, for a couple of reasons. First, big institutional investors as well as central banks and governments use forecasts and employ forecasters! These people's job is to provide expectations to big institutional actors! So that's probably pretty safe. What would be more interesting is to see if the point dispersion of forecasts by workers and families has the same properties. Perhaps there's a psychological literature on this.

2. I've gone back and forth on this one, but another thing to remember is that forecasters use models. So this could be telling us more about the various models in use than about real properties of subjective expectations and uncertainty. I was going to suggest running the same regressions as in the paper, but as quantile regressions (I sketch what I mean after this list). The idea is that maybe you have a class of models used by forecasters that are bullish and a class of models that are bearish. Rich finds that uncertainty is not related to disagreement for the whole sample, but perhaps on the sub-sample of bearish models and the sub-sample of bullish models (which may have real differences in specification), disagreement might be a reasonable proxy for uncertainty. It still wouldn't mean the studies that use the proxy are perfect, but it seems important to know.

3. My econometrics professor raised the question of whether the regressions should be conditioned on economic circumstances. Rich was pretty adamant that they should remain unconditioned, because you don't condition proxies when you throw them in a model - the hope is that they approximate whatever you're looking at. I think I probably agree with Rich, but I'm still thinking about this one. It depends on what you're interested in, I guess. If you're interested in assessing the proxy as it's actually used, Rich is probably right. If you're interested in understanding the properties of the proxy itself, there might be some value in conditioning the regression on the state of the economy.

4. Obviously this is a little different from the way the Austrian and Keynesian blogosphere talks about uncertainty in the Keynes-Knight sense. I don't think that means you should just disregard the paper if that's what you're interested in. When we talk about Keynesian or Knightian uncertainty we're often concerned with much more specific circumstances that may arise, not a well-defined aggregate. There's really no Keynesian or Knightian uncertainty to speak of when it comes to inflation or GDP growth. First, it's all well defined: whatever inflation or GDP growth turn out to be, they're going to be a real number with a probability of 1. It's not like there are any "unknown unknowns" to consider, as there might be for an entrepreneur interested in consumer demand many years from now. And for the more Keynesian flavor of this "fundamental uncertainty," the story usually is that you rely on heuristics when you can't confidently assess a probability. Well, heuristics can show up in the probability density data that Rich has! So there isn't really a problem. The question is, does the hard-to-mathematically-characterize Keynesian or Knightian uncertainty on the part of entrepreneurs influence the easy-to-mathematically-characterize uncertainty data that Rich has? That's an empirical question, and I'd think that would make the density data that Rich has more dispersed, although I'd be open to hearing counter-arguments. Ultimately, they are somewhat different questions but I don't think that makes Rich's question an unimportant one.
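
To make the suggestion in point 2 concrete, the exercise I had in mind looks something like this. It's a sketch on simulated data: `disagreement` and `uncertainty` are stand-ins for the paper's actual measures, the data-generating process is invented, and the quantile cutoffs are arbitrary.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-ins for the paper's measures: one row per forecaster-date,
# with the cross-section disagreement at that date and that forecaster's
# histogram-based uncertainty. Purely illustrative data.
rng = np.random.default_rng(1)
n = 500
disagreement = rng.gamma(2.0, 0.25, n)
# Noise that grows with disagreement, so the slope differs across quantiles.
uncertainty = 0.5 + 0.1 * disagreement + rng.normal(0.0, 0.2, n) * (1 + disagreement)
df = pd.DataFrame({"disagreement": disagreement, "uncertainty": uncertainty})

# Compare the slope on disagreement at the tails of the uncertainty
# distribution with the slope at the median.
for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("uncertainty ~ disagreement", df).fit(q=q)
    print(f"q = {q}: slope on disagreement = {fit.params['disagreement']:.3f}")
```

If the bullish and bearish model classes really do behave differently, the slopes at the extreme quantiles would diverge from the median slope even when the full-sample relationship washes out.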


3 comments:

  1. Having taken a look at the paper, I found only one reference to Daniel Ellsberg's term "ambiguity", and that is by Allan Meltzer and a co-author. I think that Rich's paper would have been greatly improved had Daniel Ellsberg's dissertation, Risk, Ambiguity, and Decision, been cited. However, I have to respond to the following...

    The question is, does the hard-to-mathematically-characterize Keynesian or Knightian uncertainty on the part of entrepreneurs influence the easy-to-mathematically-characterize uncertainty data that Rich has? That's an empirical question, and I'd think that would make the density data that Rich has more dispersed, although I'd be open to hearing counter-arguments. Ultimately, they are somewhat different questions but I don't think that makes Rich's question an unimportant one.

    According to Dr. Michael Emmett Brady, uncertainty is best measured by the interval estimate approach to probability, also called "imprecise probability". The use of two numbers instead of one allows for a better model of uncertainty.

    However, at the same time, Daniel Kuehn, don't buy too much into the Post Keynesian definition of uncertainty as complete ignorance, which is actually a special case of the weight of the evidence.

    For more information, I highly recommend reading the works of Dr. Michael Emmett Brady.

    http://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=1033456

    1. Right, but interval estimation is sort of similar to what Rich is getting at here, isn't it? It's not exactly the same - he's using a dispersion measure of the histogram, but they communicate essentially the same things.

      In fact, in addition to the dispersion measure he uses an IQR measure, which is essentially interval estimation (you're reporting the range of values covered by the interquartile range rather than a single point estimate).

      My only point is that when we're dealing with a well defined variable of interest, like inflation or GDP growth, less confidence in your probability assessments should translate to broader density functions, right?

      Another way of putting it is this: yes, uncertainty and risk are two different things. But for a well-behaved outcome, what is the impact of uncertainty on a subjective probability distribution? It should broaden that distribution, no? Unless you're using a heuristic (which Keynes thought people would use). Either way, it will be picked up in the density functions that Rich is looking at.

    2. You would have to ask Dr. Michael Emmett Brady about the criteria for using John Maynard Keynes's interval estimate approach to probability, but if my memory serves me right, the logical approach to probability (of which Keynes's probability theory is a part; other thinkers in the logical tradition include Carnap and Jaynes) is less restrictive in its conditions of use than frequency interpretations of probability. In fact, I shall have to consult Dr. Michael Emmett Brady on this...


All anonymous comments will be deleted. Consistent pseudonyms are fine.