Friday, November 2, 2012

Ambiguity aversion vs. risk aversion

Blue Aurora responds on Ellsberg with something that some other commenters brought up as well: "Part of the point of the Ellsberg paradox, if I have this down right, is that people are in theory supposed to favour both options equally 50-50. Instead, there is ambiguity aversion, Daniel. Not risk aversion. There's an important difference."

I think we should be careful not to confuse semantic differences with substantive differences when people talk about this. I had to google "ambiguity aversion". I've never studied decision theory or the literature around this. But it sounds like exactly what I was talking about in my post. It's uncertainty around the probability itself, rather than just around the outcome of the event we're discussing (in this case the ball draw).

This is very important. This is Keynesian/Knightian uncertainty and it's impressionistic and volatile and can cause a lot of problems when you don't consider it.

But I'm not sure how it's supposed to overthrow SEU except by the semantic rules of the people with an interest in overthrowing SEU.

I have had a lot more statistics than I have had decision theory (I'm guessing this is true of most economists), and in statistics we think about randomness in the actual outcome of an event (the ball draw) as well as sampling error: uncertainty about a particular likelihood we've assessed.

When I think of subjective expected utility, I think of any kind of uncertainty around a choice - both uncertainty about the outcome and any uncertainty about our model. This seems natural. I don't know why you wouldn't look at it this way. But if you wanted to segregate those two kinds of uncertainties for some reason, what kind of utility theory would you come up with?

You'd probably come up with something like the way I think about SEU! You'd say "we need to consider both of these types of uncertainty".

Yes, I agree.

I really don't see what damage is done to SEU here. You're just observing that simple applications to simple models (where there is no uncertainty about the probabilities) are not how you should apply these ideas to more complex cases (where there is uncertainty about the probabilities). I agree with that. But I can't think of any assumption or conclusion of SEU that's been overthrown here. We've just demonstrated that in the case raised by Ellsberg there is a good and a bad way to apply SEU.
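To make the "good way to apply SEU" concrete, here's a minimal sketch of my own (made-up 0/1 utilities, not anything from the literature): uncertainty about the probability itself is just a second layer that gets averaged over.

```python
def expected_utility(p_win, u_win=1.0, u_lose=0.0):
    """Expected utility of a binary gamble with a known win probability."""
    return p_win * u_win + (1 - p_win) * u_lose

# Simple case: the probability of the ball draw is known precisely.
eu_precise = expected_utility(0.5)

# Complex case: the probability itself is uncertain - say it is 0.25 or
# 0.75 with equal weight. SEU just averages over that second layer too.
prior = {0.25: 0.5, 0.75: 0.5}
eu_uncertain = sum(w * expected_utility(p) for p, w in prior.items())

print(eu_precise, eu_uncertain)  # both 0.5
```

Because expected utility is linear in the probabilities, the two layers collapse to the same number - which, to be fair, is exactly the feature the commenters below are pointing at.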


  1. Subjective Expected Utility only covers risk and models it via linear and additive equations. A more comprehensive decision theory would resolve the paradoxes plaguing S.E.U. theory by being more dynamic - dealing with non-linearity and non-additivity. Subjective Expected Utility is paraded as a normative theory - a prescription for how to make decisions. The trouble is, it only works when you have a complete set of information. (Or as Dr. Michael Emmett Brady would put it, when the weight of evidence is at unity: W = 1.)

    In practice, we don't have a complete set of information; complete information is the special case. Ellsberg's decision-makers aren't being irrational - they are being rational in the sense of avoiding uncertainty. (The Prospect Theory of Amos Tversky and Daniel Kahneman would claim that the agents are being irrational by not adhering to S.E.U., when they're not.)

  2. The potential problem with SEU is precisely that it lumps the two kinds of uncertainty together - if my prior is p=0.5, SEU doesn't allow me to distinguish whether these are precise odds (say, estimated from a large dataset), or whether they're uncertain themselves (say, the true probability can be anything from p=0 to p=1, and I weigh all possibilities equally, which averages to p=0.5).

    It seems to me that your argument is semantic, i.e. your definition of SEU is more encompassing. But these things are actually defined pretty precisely in decision theory, so SEU is equivalent to preferences over gambles which satisfy certain axioms. Other models, like those of ambiguity aversion, can often be shown to result from different or weaker sets of axioms (this is not my field either, so sorry I can't be more precise).
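The lumping-together point can be made concrete with a small sketch (my own illustration, not part of the original comment): a precise p=0.5 and a uniform prior over p have the same mean, so a linear expected-utility evaluation cannot tell them apart, even though the ambiguity is very different.

```python
import statistics

# Belief 1: the odds are precise - p is known to be 0.5.
precise = [0.5]

# Belief 2: the true probability could be anything from 0 to 1, all
# weighed equally (approximated here by a uniform grid of 101 points).
ambiguous = [i / 100 for i in range(101)]

# Both beliefs average to p = 0.5, so a linear expected-utility
# calculation assigns the same value to any bet under either one...
print(statistics.mean(precise), statistics.mean(ambiguous))

# ...but the spread over p itself - the ambiguity - differs sharply.
print(statistics.pvariance(precise), statistics.pvariance(ambiguous))
```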

    1. Well, like I said, I can't speak for decision theory. But economics says that people maximize expected utility and that they are risk averse.

      I don't see how Ellsberg's paradox presents a problem to this understanding of utility. Obviously it presents a problem for treating imprecise odds like precise odds.

      I don't know what you or Blue Aurora are thinking of, but I have never seen SEU presented as assuming that all probabilities are precise. I don't think that's some kind of defining axiom of SEU, from anything I've ever read. I think it's something that people who like to raise these issues impute to SEU.

    2. SEU is not some rule set in stone - it's a hypothesis about the type of preferences people have when facing uncertain outcomes, and it's perfectly possible that their preferences are actually described better by something different (but they would still be rational in the sense of respecting transitivity etc.). SEU can be thought of as a particular "functional form" of preferences under uncertainty, and there may be other, more general forms.

      Anyway, all this doesn't mean that I'm against using expected utility - it's tractable and useful, and whether we need to account for deviations from it will always depend on the particular application.

  3. "When I think of subjective expected utility, I think of any kind of uncertainty around a choice - both uncertainty about the outcome and any uncertainty about our model."

    Ugh, this is painful, Daniel.

    This really has nothing to do with the uncertainty around the probabilities of different outcomes in a gamble. It is about the axioms of choice under uncertainty.

    Go back to your PhD micro textbook. You will see there a series of axioms that are proposed for an individual's preference relation when he is making choices under uncertainty. Once we go looking for a utility function that is consistent with these axioms, we find that this utility function is **linear** in the effective probabilities on the outcomes, mainly because of the independence axiom (I'm pretty sure Mas-Colell includes a proof). I mean, this is how we DEFINE the "expected utility property" (that the utility of the gamble g will be the expected value of the utilities of the outcomes: U(g) = [p1*U(a1)] + [p2*U(a2)] + ..., where the a's are the outcomes).

    Now, you can argue that people are not given objective probabilities for each outcome, like you are arguing, and that's fine. Expected Utility Theory can handle that. For example, you could try to work with subjective probabilities.

    But the big deal about the Ellsberg Paradox is not that the probabilities are subjective - it is that there is NO set of probabilities that justifies the choices people make in his experiment (at least not if we must assume their utility functions are linear in probabilities).

    One way to get around this problem inside the Expected Utility framework is to try and drop the independence axiom. That's what Schmeidler (1989) did.

    Or you could just give up on the framework entirely and switch to something else. Like Arrow's state-preference model.

    Either way, adjusting to the Ellsberg paradox requires either switching models or fundamentally changing the axioms of choice.

    In other words, it will take a lot more than just broadening your definition of SEU.
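The "NO set of probabilities" claim can be checked by brute force. A sketch of my own, using the standard three-colour urn (30 red balls, 60 black-or-yellow in unknown proportion; winning pays utility 1, losing pays 0, so the expected utility of a bet is just its win probability):

```python
P_RED = 1 / 3  # 30 of the 90 balls are known to be red

def rationalizes(p_black):
    """Does assigning probability p_black to 'black' justify both modal choices?"""
    p_yellow = 1 - P_RED - p_black
    # Modal choice 1: bet on red rather than black  =>  EU(red) > EU(black)
    red_over_black = P_RED > p_black
    # Modal choice 2: bet on black-or-yellow rather than red-or-yellow
    black_yellow_over_red_yellow = p_black + p_yellow > P_RED + p_yellow
    return red_over_black and black_yellow_over_red_yellow

# Sweep every possible p_black from 0 to 2/3: nothing works, because the
# first choice needs p_black < 1/3 and the second needs p_black > 1/3.
candidates = [k / 1000 * (2 / 3) for k in range(1001)]
print(any(rationalizes(p) for p in candidates))  # False
```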

    1. Thank you Wayne for putting the point more solidly. Non-linearity and non-additivity need to be taken into account when modeling decision-making processes.

    2. Well I agree it's painful, Wayne.

      Yes, when SEU is applied to situations where there is a precise probability of an outcome, it's treated in the way you describe.

      That's not the situation Ellsberg proposes. Why are you thinking it ought to be applied in that situation?

      If you want to argue "most treatments of SEU don't talk about this case", I'd agree with you. But don't tell me it's a violation of SEU. There's no assumption of SEU (that I've ever come across - feel free to show me if there is) that says that all probabilities are always precisely known.

      In the definition you provide p1, p2, etc. are subjective but known. You're right - the subjectivity is not the problem at all. In Ellsberg's case the problem is that they aren't known.

  4. It's a violation of SEU if you define SEU functions to be consistent with the typical axioms of choice under uncertainty. As I mentioned, some models have been presented relatively recently to show that you can potentially relax some of these axioms of choice (independence) to correct for that violation. But it is hard to argue that the implications of the Ellsberg paradox are just a matter of semantics.

    At least if you care about defining SEU functions precisely. If you just prefer not to attach a precise meaning to the SEU function and only use the term to mean "it's like a function representing the good feelings you might get from the things you don't even know will happen, bro," then you're right. The Ellsberg Paradox probably isn't that big of a deal for you. It really would be about semantics.
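For what it's worth, one of the relaxed models alluded to in this thread can be sketched in a few lines. This is a toy version of my own of maxmin expected utility in the Gilboa-Schmeidler spirit (same urn and 0/1 utilities as Ellsberg's setup): evaluate each bet by its worst expected utility over a whole set of priors, and the "paradoxical" choices come out as optimal.

```python
P_RED = 1 / 3  # 30 of the 90 balls are red

# The agent entertains every possible split of the 60 black/yellow balls,
# rather than committing to a single prior.
priors = [(k / 60 * (2 / 3), (60 - k) / 60 * (2 / 3)) for k in range(61)]

def maxmin_eu(win_events):
    """Worst-case win probability over all priors (win pays 1, lose pays 0)."""
    def win_prob(p_black, p_yellow):
        probs = {"red": P_RED, "black": p_black, "yellow": p_yellow}
        return sum(probs[e] for e in win_events)
    return min(win_prob(pb, py) for pb, py in priors)

# The modal Ellsberg pattern - red over black, AND black-or-yellow over
# red-or-yellow - is impossible under SEU but natural under maxmin.
print(maxmin_eu({"red"}) > maxmin_eu({"black"}))                      # True
print(maxmin_eu({"black", "yellow"}) > maxmin_eu({"red", "yellow"}))  # True
```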

  5. I guess the issue here is this: does it matter if economists reason in terms of concepts that give rise to paradoxes, or are otherwise nonsense, when one tries to model them mathematically?

    It seems to me that a generation of economists were comfortable with arguments of the form:
    * as long as there are no crashes, the following theories are valid ...
    * as long as these theories are valid, there will be no crashes.
    Therefore, there will be no more crashes.

    It would now seem to me a good idea if we tried to go one step beyond what we thought was necessary in the precision of our reasoning. This has long been good practice in other fields. (Please note, I am advocating a more logical use of mathematics, not an incontinent use of 'mathematical' formulae.)


All anonymous comments will be deleted. Consistent pseudonyms are fine.