Even this was problematic for one of his commenters:
"The problem isn’t the ‘rationality’, but the completeness and rigid transitivity that economics has to assign to an agent when trying to glean insight into the workings of an economy."

I agree that completeness especially is unrealistic (transitivity, as far as I can tell, actually isn't that bad), but honestly, what changes by assuming it? The biggest problem in a world with transitivity but without completeness is that you just don't have the basis for a choice. The math doesn't work; you can't define a maximal set. But we could imagine some kind of algorithm for dealing with a world where preferences actually aren't complete (i.e., the world we live in). A couple come to mind. First, when you make pairwise comparisons, you could simply skip the cases where preferences aren't defined. Second, you could flip a coin (since transitivity holds, I don't think this should cause too much trouble - the biggest problem is its randomness).
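Just to make those two algorithms concrete, here's a rough Python sketch. Everything in it - the `prefers` function, the rule that preferences are foggy on low-value bundles - is an illustrative assumption of mine, not anyone's actual model:

```python
import random

def choose(bundles, prefers, rng=random.Random(0), coin_flip=False):
    """Pick a bundle by pairwise comparison under an incomplete relation.

    `prefers(a, b)` returns True if a is preferred to b, False if b is
    preferred to a, and None when the comparison is undefined.
    """
    best = bundles[0]
    for candidate in bundles[1:]:
        verdict = prefers(candidate, best)
        if verdict is None:
            # Undefined pair: either skip the comparison entirely,
            # or settle it with a coin flip.
            if coin_flip and rng.random() < 0.5:
                best = candidate
        elif verdict:
            best = candidate
    return best

# Hypothetical example: the agent is foggy about low-value bundles only.
values = {"a": 10, "b": 7, "c": 2, "d": 1}

def prefers(x, y):
    if values[x] < 3 and values[y] < 3:
        return None  # both bundles are low-value: preference undefined
    return values[x] > values[y]

choose(["c", "d", "b", "a"], prefers)  # -> "a"
```

Notice that because the undefined pairs sit among the low-value bundles, both strategies land on the same choice a straight maximization would.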
Let's say you had a bunch of agents with mostly complete preferences and you ran some simulations with these sorts of rules. So we're not doing the optimization that requires the rationality assumptions Unlearningecon is uncomfortable with; we're doing more of an agent-based approach. I don't do this sort of modeling, but I'm guessing you get about the same results (particularly if you skip the undefined elements and the undefined elements are not maximal elements... which makes sense - why would your preferences be undefined on bundles you highly value? Agents ought to be foggiest about relatively low-value bundles).
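I can't resist sketching what such a simulation might look like. All the specifics - the fog rule, the parameter values - are hypothetical choices of mine, purely to illustrate the logic:

```python
import random

def simulate(n_agents=1000, n_bundles=20, fog_share=0.5, seed=1):
    """Each agent picks from random bundles by pairwise comparison,
    skipping pairs where preferences are undefined. Preferences are
    undefined only when both bundles fall in the bottom `fog_share`
    of values (a hypothetical rule). Returns the share of agents whose
    pick matches what full optimization would give."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_agents):
        values = [rng.random() for _ in range(n_bundles)]
        cutoff = sorted(values)[int(fog_share * n_bundles)]
        best = 0
        for j in range(1, n_bundles):
            if values[j] < cutoff and values[best] < cutoff:
                continue  # undefined pair: skip the comparison
            if values[j] > values[best]:
                best = j
        hits += (best == max(range(n_bundles), key=values.__getitem__))
    return hits / n_agents

simulate()  # -> 1.0
```

The hit rate here is exactly 1.0, and that's the point: because the fog only covers non-maximal bundles, every comparison involving the best bundle is defined, so skipping the undefined pairs changes nothing. Put the fog on the high-value bundles instead and the agents would start missing the optimum.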
A more realistic approach is to assume that it takes cognitive resources to produce completeness in the preference relation. Indeed, we could even think about a production function for preference relations and some way of assessing the expected benefits of expending cognitive resources. If we think higher-value elements might have undefined preference relations, this could be especially important. Think, for example, of grocery shopping that requires you to plan a week's worth of meals given a huge variety of products. There are many high-value bundles, and you exert some effort in figuring out the best one, but there's a point where you quit expending cognitive resources and just make the purchase. Again, I'm guessing running this simulation approximates the results from more naïve optimization problems that go in assuming completeness - though of course it won't hit them perfectly.
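Here's a rough sketch of that shopping story. The stopping rule, the fact that the shopper's "expected gain" is proxied by the average value of the bundles she hasn't compared yet, and all the names are my own assumptions for illustration:

```python
import random

def shop(values, cost_per_comparison, rng=random.Random(0)):
    """Satisficing sketch: each pairwise comparison costs cognitive effort.
    The shopper keeps comparing bundles while the expected gain from one
    more comparison exceeds its cost, then stops and buys.

    For the sketch we cheat and let her prior about the unexamined
    bundles equal their true average value."""
    unseen = list(range(len(values)))
    rng.shuffle(unseen)
    best = unseen.pop()  # start from an arbitrary bundle
    spent = 0.0
    while unseen:
        remaining = [values[i] for i in unseen]
        expected_gain = max(0.0, sum(remaining) / len(remaining) - values[best])
        if expected_gain < cost_per_comparison:
            break  # quit expending cognitive resources and just buy
        candidate = unseen.pop()
        spent += cost_per_comparison
        if values[candidate] > values[best]:
            best = candidate
    return best, spent
```

With zero comparison cost the shopper searches everything and recovers the full-optimization answer; with a high enough cost she buys the first bundle she lays eyes on. In between, she approximates the naïve optimum, which is all my conjecture requires.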
I don't run agent-based models, so I'm just thinking through how these seem like they'd work. Is there any reason to temper my optimism? If we were to relax these assumptions, how would things really go so wrong that adding them back - for mathematical convenience - is a dangerous move? I just can't think of a reason for pessimism here. The unrealism we're dealing with seems to primarily pose problems for the math, not for the conclusions. Thoughts?