Wednesday, August 7, 2013

The importance of identifying your models (with some thoughts on psychology vs. economics)

Bob Murphy is on the case - in this case, with a military suicides study. He writes:
"In the car I heard this short (less than 4 minutes) clip on a new study that supposedly debunks the popular idea that military deployments in Iraq and Afghanistan are behind the increase in suicides among members of the military. The NPR guy summarizing the study said, “Long deployments did not increase the risk of suicide,” and then they quoted one of the authors (I think) who said, “The strongest predictor is mental health” including depression and alcoholism.

I am really hoping this study didn’t do what I fear it might have, namely, run a huge regression analysis with “Length of deployment” as one of the independent variables and “alcoholism” and “depression” as other ones.

If you don’t see why that would be a really dubious approach, imagine if I ran a regression and then announced, “A lot of people think clinical depression is a good predictor of suicide. But nope, once you control for people holding a noose, a gun, or sleeping pills, clinical depression actually doesn’t have much explanatory power at all.”"
I haven't read the study, but some commenters who have suggest that this is essentially what was done (it sounds like a competing hazards model, which is basically a regression with time-to-failure as the dependent variable and some adjusted distributional assumptions to account for the form of the dependent variable and its state-dependence).
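Murphy's noose-and-sleeping-pills example is the classic "bad control" problem: if deployment raises suicide risk *through* depression, then putting depression on the right-hand side absorbs deployment's effect. Here's a minimal simulation of that mechanism - all the numbers, variable names, and effect sizes are made up for illustration, and this is an ordinary least-squares sketch rather than anything resembling the study's actual hazard model:

```python
# Toy illustration of the mediator / "bad control" problem.
# Assumed causal chain (invented for this sketch):
#   deployed -> depression -> risk
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

deployed = rng.binomial(1, 0.5, n).astype(float)
# depression is partly caused by deployment (the mediator)
depression = 0.8 * deployed + rng.normal(0.0, 1.0, n)
# suicide risk operates entirely through depression in this toy model
risk = 0.5 * depression + rng.normal(0.0, 1.0, n)

def ols(y, *regressors):
    """OLS via least squares; returns coefficients after an intercept."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Regress risk on deployment alone: deployment looks important,
# picking up its full effect through depression (about 0.8 * 0.5 = 0.4).
b_naive = ols(risk, deployed)[1]

# "Control for" the mediator: deployment's coefficient collapses
# toward zero, even though deployment is a genuine upstream cause.
b_controlled = ols(risk, deployed, depression)[1]

print(f"deployment coefficient, no controls:   {b_naive:.3f}")
print(f"deployment coefficient, w/ depression: {b_controlled:.3f}")
```

The point isn't that the second regression is computed wrong - it correctly estimates the *direct* effect holding depression fixed. The problem is reading that near-zero coefficient as "deployment doesn't matter," when in this setup deployment is exactly what's driving the depression.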

I've talked a lot with a psychologist friend of mine about different approaches to empirical work in psychology and economics, and I think this is an excellent example. Psychologists often have experimental control over what they're looking at. Even when samples are self-selected to some degree, they can still experimentally assign treatment. So generally they don't worry much about model identification - dealing with endogeneity and simultaneity. In some cases this is not a big deal; in many cases it's a huge deal. This isn't to say psychologists are bad statisticians in any sense. They just have different blind spots than economists.

Economists have blind spots too, and they have come out clearly in talking with this psychologist friend. One of the bigger ones we discuss is measurement theory. A lot of the quantities we work with are measured pretty well - employment, hours, wages, prices, production, etc. - so economists never really got that concerned about thinking deeply about measurement problems. What's there to think about? A notable exception is index theory, but who studies that anymore? Maybe a few people working on the CPI at BLS. In contrast, psychological concepts are a lot harder to measure, and so psychologists think a lot about the metrics they use. When economists wander into difficult-to-measure concept areas, I'm sure our work looks as problematic as the work of psychologists when they wander into non-experimental data.

The solution is, in many cases, a difficult one - interdisciplinary work. This is often looked down on by economists, and I imagine similar prejudices exist in other fields. But it would make a real difference in a lot of studies. My psychologist friend and I had plans to do some interdisciplinary work on occupational studies, but lots of other life events, data hold-ups, and projects have gotten in the way on both our ends. Maybe some time in the future.


  1. I agree with you that the communication between economics and other disciplines isn't very good - as a matter of fact, you could make the argument that interdisciplinary communication is very poor in general, full-stop.

    (Out of curiosity though, Daniel...does your friend have any view on S.E.U. decision theory?)

    1. I doubt it. She doesn't do work that would come into contact with that.

    2. So she hasn't even read any published scholarship on the research experiments done in decision theory that involve the Allais Paradox, or the Ellsberg Paradox, or the contributions of Daniel Kahneman and Amos Tversky?

  2. Hey Daniel, this is Sean from Bob's comment section.

    I get that it's problematic that they used the depression/etc variables in the same hazard model, but the additional point I was making was that looking solely at the deployment numbers, it goes against the popular myth: soldiers with more deployments/longer time deployed have a smaller rate of suicide compared to their non-deployed age- and sex-peers. I fail to see how the model screwed it up that much. If they had said 'Oh yes, there is positive correlation, but it is small compared to depression', I would agree with you. But they did not.

    The study isn't that definitive, I think, as I pointed out in my follow-up. They didn't cover the recent uptrend, and they do not have the higher suicide rates by service seen in other measures. They were also dealing with only 80 suicides, the great majority of which came from never-deployed personnel.

