Comments on "Facts & other stubborn things: Ambiguity aversion vs. risk aversion"

---

I guess the issue here is this: does it matter if economists reason in terms of concepts that give rise to paradoxes, or are otherwise nonsense, when one tries to model them mathematically?

It seems to me that a generation of economists was comfortable with arguments of the form:

* as long as there are no crashes, the following theories are valid ...
* as long as these theories are valid, there will be no crashes.

Therefore, there will be no more crashes.

It would now seem to me a good idea to try to go one step beyond what we thought was necessary in the precision of our reasoning. This has long been good practice in other fields. (Please note: I am advocating a more logical use of mathematics, not an incontinent use of 'mathematical' formulae.)

-- Anonymous, 2012-11-03

---

It's a violation of SEU if you define SEU functions to be consistent with the typical axioms of choice under uncertainty. As I mentioned, some models have been presented relatively recently showing that you can relax some of these axioms (notably independence) to accommodate the violation. But it is hard to argue that the implications of the Ellsberg paradox are just a matter of semantics.

At least if you care about defining SEU functions precisely.
If you prefer not to attach precise meaning to the SEU function, and only use the term to mean "it's like a function representing the good feelings you might get from the things you don't even know will happen, bro", then you're right: the Ellsberg paradox probably isn't that big of a deal for you. It really would be about semantics.

-- Wayne, 2012-11-02

---
Well, I agree it's painful, Wayne.

Yes, when SEU is applied to situations where there is a precise probability of an outcome, it's treated in the way you describe.

That's not the situation Ellsberg proposes. Why are you thinking it ought to be applied in that situation?

If you want to argue "most treatments of SEU don't talk about this case", I'd agree with you. But don't tell me it's a violation of SEU. There's no assumption of SEU (that I've ever come across; feel free to show me one if there is) that says all probabilities are always precisely known.

In the definition you provide, p1, p2, etc. are subjective but known. You're right: the subjectivity is not the problem at all. In Ellsberg's case the problem is that they aren't known.

-- Daniel Kuehn (http://www.factsandotherstubbornthings.blogspot.com), 2012-11-02

---

Thank you, Wayne, for putting the point more solidly. Non-linearity and non-additivity need to be taken into account when modeling decision-making processes.

-- Blue Aurora, 2012-11-02

---

"When I think of subjective expected utility, I think of any kind of uncertainty around a choice - both uncertainty about the outcome and any uncertainty about our model."

Ugh, this is painful, Daniel.

This really has nothing to do with the uncertainty around the probabilities of different outcomes in a gamble. It is about the axioms of choice under uncertainty.

Go back to your PhD micro textbook.
You will see there a series of axioms proposed for an individual's preference relation when he is making choices under uncertainty. Once we go looking for a utility function consistent with these axioms, we find that this utility function is **linear** in the effective probabilities of the outcomes, mainly because of the independence axiom (I'm pretty sure Mas-Colell includes a proof). I mean, this is how we DEFINE the "expected utility property": the utility of a gamble g is the expected value of the utilities of its outcomes, U(g) = [p1*U(a1)] + [p2*U(a2)] + ..., where the a's are the outcomes.

Now, you can argue that people are not given objective probabilities for each outcome, like you are arguing, and that's fine. Expected utility theory can handle that; for example, you could work with subjective probabilities.

But the big deal about the Ellsberg paradox is not that the probabilities are subjective; it is that there is NO set of probabilities that justifies the choices people make in his experiment (at least not if we assume their utility functions are linear in probabilities).

One way to get around this problem inside the expected utility framework is to try to drop the independence axiom. That's what Schmeidler (1989) did: http://www.jstor.org/stable/1911053

Or you could just give up on the framework entirely and switch to something else, like Arrow's state-preference model.

Either way, adjusting to the Ellsberg paradox requires either switching models or fundamentally changing the axioms of choice.

In other words, it will take a lot more than just broadening your definition of SEU.

-- Wayne, 2012-11-02
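The "NO set of probabilities" claim can be checked directly. A minimal sketch in Python, assuming the standard three-color version of Ellsberg's experiment (30 red balls, 60 black-or-yellow in unknown proportion; the function names and the grid search are my own illustration, not from the thread): the typical pattern is to prefer betting on red over black, but black-or-yellow over red-or-yellow. The script searches for any subjective probability of black consistent with both strict preferences under linear-in-probability expected utility and finds none.

```python
# Ellsberg three-color urn: 30 red balls, 60 black-or-yellow in unknown mix.
# P(red) = 1/3 is known; q = subjective P(black), so P(yellow) = 2/3 - q.
# Payoffs: $100 if the bet wins, $0 otherwise; normalize U(100)=1, U(0)=0,
# so expected utility is linear in the probabilities, as in the thread.

def eu_red(q):              # bet A: win on red
    return 1/3

def eu_black(q):            # bet B: win on black
    return q

def eu_red_or_yellow(q):    # bet C: win on red or yellow
    return 1/3 + (2/3 - q)

def eu_black_or_yellow(q):  # bet D: win on black or yellow
    return q + (2/3 - q)    # = 2/3, whatever q is

# Grid-search every candidate subjective probability q in [0, 2/3].
candidates = [i / 1000 for i in range(668)]
consistent = [q for q in candidates
              if eu_red(q) > eu_black(q)                        # A preferred to B
              and eu_black_or_yellow(q) > eu_red_or_yellow(q)]  # D preferred to C

print(consistent)  # -> []  no single q rationalizes both strict preferences
```

A > B requires q < 1/3, while D > C requires q > 1/3, so the list is empty; that contradiction, not subjectivity, is the paradox.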
---

SEU is not some rule set in stone; it's a hypothesis about the type of preferences people have when facing uncertain outcomes, and it's perfectly possible that their preferences are actually better described by something different (while still being rational in the sense of respecting transitivity, etc.). SEU can be thought of as a particular "functional form" of preferences under uncertainty, and there may be other, more general forms.

Anyway, none of this means that I'm against using expected utility: it's tractable and useful, and whether we need to account for deviations from it will always depend on the particular application.

-- ivansml, 2012-11-02

---

Well, like I said, I can't speak for decision theory. But economics says that people maximize expected utility and that they are risk averse.

I don't see how Ellsberg's paradox presents a problem for this understanding of utility. Obviously it presents a problem for treating imprecise odds like precise odds.

I don't know what you or Blue Aurora are thinking in terms of, but I have never seen SEU presented as assuming that all probabilities are precise. I don't think that's some kind of defining axiom of SEU, from anything I've ever read.
I think it's something that people who like to raise these issues impute to SEU.

-- Daniel Kuehn (http://www.factsandotherstubbornthings.blogspot.com), 2012-11-02

---

The potential problem with SEU is precisely that it lumps the two kinds of uncertainty together: if my prior is p = 0.5, SEU doesn't allow me to distinguish whether these are precise odds (say, estimated from a large dataset) or whether they are themselves uncertain (say, the true probability could be anything from p = 0 to p = 1, and I weigh all possibilities equally, which averages to p = 0.5).

It seems to me that your argument is semantic, i.e. your definition of SEU is more encompassing. But these things are actually defined quite precisely in decision theory, so SEU is equivalent to preferences over gambles that satisfy certain axioms. Other models, like those of ambiguity aversion, can often be shown to result from different or weaker sets of axioms (this is not my field either, so sorry I can't be more precise).

-- ivansml, 2012-11-02

---

Subjective Expected Utility only covers risk, and models it via linear and additive equations. A more comprehensive decision theory would resolve the paradoxes plaguing S.E.U. theory by being more dynamic: dealing with non-linearity and non-additivity. Subjective Expected Utility is paraded as a normative theory, a way to make decisions. The trouble is, it only works when you have a **complete set of information**. (Or, as Dr. Michael Emmett Brady would put it, when the weight of evidence is at unity: W = 1.)

In practice, we don't have a complete set of information; complete information is the special case.
Ellsberg's decision-makers aren't being irrational; they are being rational in the sense of avoiding uncertainty. (The prospect theory of Amos Tversky and Daniel Kahneman would claim that the agents are being irrational by not adhering to S.E.U., when they're not.)

-- Blue Aurora, 2012-11-02
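The thread's contrast between risk aversion and ambiguity aversion can be made concrete with one of the models alluded to above. A minimal sketch, assuming the Gilboa-Schmeidler "multiple priors" (maxmin) criterion, a close relative of the Schmeidler (1989) model cited earlier; the urn numbers, names, and grid of priors are my own illustration: the agent ranks each bet by its worst-case expected utility over a set of candidate probabilities, and the typical Ellsberg choices, which no single prior can rationalize, come out as rational ambiguity avoidance.

```python
# Maxmin expected utility over a set of priors (Gilboa-Schmeidler style).
# Three-color urn: P(red) = 1/3 is known; P(black) = q ranges over [0, 2/3].
# The agent evaluates each bet at its WORST-case expected utility.

PRIORS = [i / 99 for i in range(67)]  # candidate values of q, 0 .. 2/3

def maxmin_eu(bet):
    """Worst-case expected utility of a bet over all candidate priors."""
    return min(bet(q) for q in PRIORS)

# Win probability of each bet as a function of q, with U(win)=1, U(lose)=0.
bet_red             = lambda q: 1/3
bet_black           = lambda q: q
bet_red_or_yellow   = lambda q: 1/3 + (2/3 - q)
bet_black_or_yellow = lambda q: 2/3

# The typical Ellsberg pattern follows from the worst-case ranking:
assert maxmin_eu(bet_red) > maxmin_eu(bet_black)                      # red over black
assert maxmin_eu(bet_black_or_yellow) > maxmin_eu(bet_red_or_yellow)  # D over C
print("maxmin EU rationalizes both Ellsberg choices")
```

Under expected utility with any single prior the two strict preferences are contradictory; the worst-case rule escapes the contradiction because each bet may be evaluated at a different prior, which is one way of formalizing "rational in the sense of avoiding uncertainty".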