"In the car I heard this short (less than 4 minutes) clip on a new study that supposedly debunks the popular idea that military deployments in Iraq and Afghanistan are behind the increase in suicides among members of the military. The NPR guy summarizing the study said, “Long deployments did not increase the risk of suicide,” and then they quoted one of the authors (I think) who said, “The strongest predictor is mental health,” including depression and alcoholism. I haven't read the study but some commenters who have read it suggest that this is essentially what was done (it sounds like a competing hazards model, which is basically a regression with time-to-failure as the dependent variable and some adjusted distributional assumptions to account for the form of the dependent variable and its state-dependence).
I am really hoping this study didn’t do what I fear it might have, namely, run a huge regression analysis with “Length of deployment” as one of the independent variables and “alcoholism” and “depression” as other ones.
If you don’t see why that would be a really dubious approach, imagine if I ran a regression and then announced, “A lot of people think clinical depression is a good predictor of suicide. But nope, once you control for people holding a noose, a gun, or sleeping pills, clinical depression actually doesn’t have much explanatory power at all.”"
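The noose-and-sleeping-pills point is easy to demonstrate with a toy simulation. This is a hypothetical illustration, not anything from the actual study: assume deployment raises suicide risk entirely *through* depression (an extreme case for clarity). A regression that "controls for" the mediator will report a deployment coefficient near zero even though deployment causes the whole effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: deployment raises depression,
# and depression raises suicide risk. Deployment has no direct effect
# except through depression.
deployment = rng.binomial(1, 0.5, n).astype(float)
depression = 0.5 * deployment + rng.normal(0, 1, n)
suicide_risk = 1.0 * depression + rng.normal(0, 1, n)

# Total effect of deployment (mediator NOT controlled for):
X1 = np.column_stack([np.ones(n), deployment])
b1 = np.linalg.lstsq(X1, suicide_risk, rcond=None)[0]

# "Controlled" effect after conditioning on the mediator:
X2 = np.column_stack([np.ones(n), deployment, depression])
b2 = np.linalg.lstsq(X2, suicide_risk, rcond=None)[0]

print(f"deployment coefficient without depression control: {b1[1]:.2f}")  # ~0.5
print(f"deployment coefficient with depression control:    {b2[1]:.2f}")  # ~0.0
```

Both regressions are "correct" as descriptions of conditional means; only the first answers the policy question about deployments.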
I've talked a lot with a psychologist friend of mine about different approaches to empirical work in psychology and economics, and I think this is an excellent example. Psychologists often have experimental control over what they're looking at. Even when samples are self-selected to some degree, they can still experimentally assign treatment. So generally they don't worry much about model identification - dealing with endogeneity and simultaneity. In some cases that's not a big deal. In many cases it's a huge deal. This isn't to say psychologists are bad statisticians in any sense. They just have different blind spots than economists.
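Why experimental assignment lets you skip the identification headache can also be shown in a few lines. This is a generic made-up example (no connection to the suicide study): an unobserved confounder drives both treatment take-up and the outcome, so the observational estimate is biased, while randomizing the same treatment recovers the true effect of 1.0:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Unobserved confounder affects both who takes the treatment and the outcome.
confounder = rng.normal(0, 1, n)

# Observational world: high-confounder people select into treatment.
treat_obs = (confounder + rng.normal(0, 1, n) > 0).astype(float)
y_obs = 1.0 * treat_obs + 1.0 * confounder + rng.normal(0, 1, n)

# Experimental world: same outcome equation, but treatment is randomized.
treat_exp = rng.binomial(1, 0.5, n).astype(float)
y_exp = 1.0 * treat_exp + 1.0 * confounder + rng.normal(0, 1, n)

def ols_slope(x, y):
    """Simple bivariate OLS slope."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print(f"observational estimate: {ols_slope(treat_obs, y_obs):.2f}")  # biased well above 1.0
print(f"experimental estimate:  {ols_slope(treat_exp, y_exp):.2f}")  # ~1.0
```

When assignment is randomized, the confounder is independent of treatment by construction, so a naive regression is fine; with self-selection it isn't, and that's the whole identification problem.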
Economists have blind spots too, and they've come out clearly in talking with this psychologist friend. One of the bigger ones we discuss is measurement theory. A lot of the quantities we work with are measured pretty well - employment, hours, wages, prices, production, etc. So economists never really had to think deeply about measurement problems. What's there to think about? A notable exception is index theory, but who studies that anymore? Maybe a few people working on the CPI at BLS. In contrast, psychological concepts are a lot harder to measure, so psychologists think a lot about the metrics they use. When economists wander into difficult-to-measure concept areas, I'm sure our work looks as problematic as the work of psychologists when they wander into non-experimental data work.
The solution is, in many cases, a difficult one - interdisciplinary work. This is often looked down on by economists, and I imagine similar prejudices exist in other fields. But it would make a real difference in a lot of studies. My psychologist friend and I had plans to do some interdisciplinary work on some occupational studies stuff, but lots of other life events, data hold-ups, and projects have gotten in the way on both our ends. Maybe some time in the future.