Thursday, September 15, 2011

Karl Smith, Ryan Murphy, and Empirical Economics

Karl Smith had a good response to Russ Roberts's repeated "scientism" accusations yesterday. This part was especially good:

"After all no one has – in my jargon – taken a ruler to the sun. No one has actually trekked from the earth to the sun with a tape measure to get its distance.

Indeed no one has even been to the sun or even out of earth’s orbit. People confidently mock those who say the sun is not at the center of the solar system but has anyone been outside the solar system to look down and check? Certainly not.

All of this is based on measurement and inference. And people trust the measurements and inferences of physical scientists even when they make wild conclusions based on highly technical derivations, complex models and slight differences in obscure measurements.

Two guys screw together a few half-silvered mirrors and all of a sudden the passage of time is just a fancy illusion. Is that more convoluted than Donohue and Levitt?"

The simple argument is that making inferences is what all scientists do, and it's strange that Russ finds that so controversial. But clearly economics is different from physics (as it is different from every other science, and as all sciences are different from each other). Ryan Murphy seems particularly concerned with that difference. He responds to Smith:

"The ratio of content to vitriol in Roberts’ original post is a bit low (though I’d be a hypocrite for being too harsh on someone for their vitriol), but it is rather obvious that Smith has no idea why anyone would believe that the problems in econometrics fundamentally differ from the problems of measurement in physics [I don't find it obvious that Smith has no idea about this - so perhaps Murphy could clarify]. Actually, let me rephrase- when straightforward distributions aren’t applicable in physics, they actually use the correct distributions instead of assuming asymptotics and doing other stupid things. Dr. Smith, if you look at the data in economics, you see fat tails. When you have fat tails, even according to mainstream figures like Peter Kennedy, you can’t do meaningful hypothesis testing. Which means coefficient estimates are bullshit. Actually, I’ll just cite him directly for simplicity’s sake.



The consequences of non-normality of the fat-tailed kind, implying infinite variance, are quite serious, since hypothesis testing and interval estimation cannot be undertaken meaningfully.

Found in a footnote on page 63 of Kennedy, Peter. 1998. A Guide to Econometrics. Fourth Edition. Cambridge, MA: The MIT Press."

I have Kennedy's fifth edition (this appears on page 70 there) and maybe the book changed since the fourth edition, but if it hasn't then Murphy has failed to quote the next line where Kennedy references an entire chapter on robust specifications and non-parametric approaches to use in these situations. In other words - precisely what physicists do that Murphy claims economists don't do.
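
(A quick illustrative aside, since the fat-tails point is easy to see concretely. Here is a minimal simulation sketch - entirely my own, not from Kennedy, Smith, or Murphy, with made-up data - showing why the usual mean-and-standard-error machinery breaks down under infinite variance while a robust statistic like the median does not.)

    # Illustrative sketch (mine, not Kennedy's): with fat tails and infinite
    # variance the sample mean never settles down, so t-statistics built on it
    # are meaningless; a robust statistic like the median still behaves.
    import numpy as np

    rng = np.random.default_rng(0)
    for n in (100, 10_000, 1_000_000):
        cauchy = rng.standard_cauchy(n)    # fat-tailed: mean and variance undefined
        normal = rng.standard_normal(n)    # thin-tailed benchmark
        print(f"n={n:>9,}  Cauchy mean={np.mean(cauchy):8.2f}  "
              f"Cauchy median={np.median(cauchy):6.3f}  "
              f"Normal mean={np.mean(normal):6.3f}")

The Cauchy mean bounces around no matter how much data you pile up, while the median settles right down. That is all Kennedy's footnote is saying - and the robust and non-parametric approaches in the chapter he points to are alternatives of exactly this kind.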

The whole physics vs. economics game always seems strange to me. Economics is most obviously like biology, not physics - and for obvious reasons. We are studying the social behavior of a primate. When we study the social behavior of any other primate species besides Homo sapiens we usually call those scientists "biologists", but in the case of this one species we've chosen to call them "economists". Fair enough - there's good reason for the different nomenclature (we're not just primates, after all - we're pretty special). But for all intents and purposes we are biologists. The physics that non-physicists usually have the opportunity to interact with (admittedly only a subset of physics) is fairly stable, mechanical, deterministic, etc. It's not like the complex system of the economy or the other subjects of biological research. I think Smith's point is exactly right - scientists do inference. It's legitimate to give a physics example to make that point. But otherwise physics is a poor standard to measure economics against, because it's too different. Russ Roberts has a weird imbalance in the way he usually approaches these questions: he agrees with me that economics is most like biology, but for some reason he assesses the empirical bona fides of economics by comparing it to physics. Why? I don't know (I've asked him, and while he's responded to other questions and comments of mine he's never answered that one).

Murphy also highlights the real empirical hurdle of economics: endogeneity. Exactly right. Because of endogeneity we have to tackle the problem of measurement and inference differently from physics. Endogeneity is what makes empirical economics so interesting (to me at least). Endogeneity is a reason to be skeptical of results - the disposition of a scientist is always skeptical. But Murphy shouldn't confuse the obligation of having a skeptical eye with an obligation to consider something illegitimate (which is often what Roberts mistakenly does).
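
(For readers who haven't run one of these, here is a bare-bones sketch of what instrumenting for endogeneity actually involves mechanically - a two-stage least squares exercise on simulated data. Every variable name and coefficient is invented for illustration; this is not a reproduction of any of the studies discussed here.)

    # Bare-bones two-stage least squares on simulated data, purely illustrative.
    # x is endogenous (correlated with the error u); z is an instrument that
    # moves x but is unrelated to u. Every name and number here is invented.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50_000
    z = rng.standard_normal(n)                          # instrument
    u = rng.standard_normal(n)                          # structural error
    x = 0.8 * z + 0.6 * u + rng.standard_normal(n)      # endogenous regressor
    y = 1.0 + 2.0 * x + u                               # true effect of x is 2.0

    def slope(w, v):
        # OLS slope from regressing v on w (intercept handled by de-meaning)
        return np.cov(w, v, bias=True)[0, 1] / np.var(w)

    naive_ols = slope(x, y)          # biased upward: picks up corr(x, u)
    x_hat = slope(z, x) * z          # stage 1: the part of x explained by z
    iv_2sls = slope(x_hat, y)        # stage 2: regress y on the fitted x

    print(f"true = 2.00   naive OLS = {naive_ols:.2f}   IV/2SLS = {iv_2sls:.2f}")

The only thing standing between the naive number and the right one is the instrument - whether z really is unrelated to the error term - which is why the fight over these studies is really a fight over instrument quality, and why skepticism is warranted but blanket dismissal isn't.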

Smith ends with more great thoughts, this time from Robin Hanson:

"They key difference, I think, is that more interested parties see themselves as losing if the public listens to economists, and these parties therefore dispute economists in public. Such interested parties also influence individual economists, and so weaken within-economics consensus. In contrast, few care enough about what physicists say to dispute them in public."

I do think this is the primary difference, and this bothers me about the way Russ Roberts approaches these questions. Let's take two examples - the big macroeconometric models and the impact analyses that instrument for endogeneity problems.

I personally accept the sort of parameterized macroeconometric modeling that is done by IHS Global Insight, Moody's Analytics, etc. as having limits, but as completely legitimate. It's just modeling the impact of policy given what we know from similar past policies. Russ Roberts repeats Arnold Kling's argument about the illegitimacy of these approaches whenever someone comes out with a justification of fiscal stimulus using these models. Fair enough - not everyone is going to agree with me on the methodology. But when Heritage produces a criticism of the stimulus using the exact same methodology, and when that analysis makes a huge splash in the media and the blogosphere, does Russ Roberts make a peep? Does he raise the same objections he did when the same model produced pro-stimulus results? No. Russ should be raising hell about that, but he doesn't. It's the exact same methodology. Not only is it the exact same methodology - it's the same damn model (IHS Global Insight, in that case) with a different set of assumptions. The only explanation for the difference in Russ's response that I can see is that one supports stimulus and one doesn't.

The same goes for studies that instrument for stimulus. I like these because, like most people, I see endogeneity as the primary problem for multiplier estimates. They are hard, but that's part of life - and that's also what makes them interesting to work on. Russ, of course, doesn't like these studies. He's called them a "great place for faith based econometrics", and has downplayed their value in discussions with Ed Leamer on EconTalk. And, predictably, Russ has criticized Romer's estimates, which use this sort of IV approach and find a reasonably sized fiscal multiplier. You would think, then, that when Robert Barro uses the same sort of approach and finds no evidence for a multiplier (and publishes this in the WSJ - his was not a low-profile study) Russ would be equally critical of Barro. If you think that, you'd think wrong. To my knowledge he never mentioned it, and when his co-blogger Don Boudreaux favorably cited Barro's estimate Russ didn't take the opportunity to make the same critique of Barro that he did of Romer.

I would like to come to a different conclusion, but I simply can't. When macroeconometric modeling comes out with a pro-stimulus result Russ criticizes it, but when the exact same approach comes out with an anti-stimulus result he doesn't. When instruments come out with a pro-stimulus result Russ criticizes it, but when the exact same approach comes out with an anti-stimulus result he doesn't. What else am I supposed to conclude here? It's exactly what Robin Hanson says: "more interested parties see themselves as losing if the public listens to economists, and these parties therefore dispute economists in public". Most of what Russ says about econometrics I now consider to be a politically motivated statement rather than a scientifically motivated one, because I simply can't find much scientific consistency in the way he talks about it.

Notice that the two studies I always praise here are Barro's and Romer's. Notice that when I raised questions about the Heritage numbers, it was never because I objected to their use of the IHS Global Insight model - I've always considered that to be a good decision for Heritage. Notice that the studies I don't like are the cross-state studies, and I criticize those studies on here whether they come up with pro-stimulus results or anti-stimulus results. This isn't a political game, much as some people like to treat it as one. The empirical work is hard, and we need to work to keep doing it better. But if you stake out a methodological position you need to apply it consistently. Too much criticism of empirical economics seems to me to be grounded in politics rather than science.

20 comments:

  1. What Barro and Romer papers are you referring to btw? I came in a little

    also citing Donohue and Levitt kind of blows Smith's point out of the water.

  2. In non-parametrics and robust methods, the only way to get confidence intervals is bootstrapping, which is another way of saying you can't get real confidence intervals. Non-parametric methods theoretically give you a consistent estimate, but without confidence intervals it is very questionable whether what you are doing means anything.

    I also emphasize "theoretically," since most of the applied work uses semi-parametric methods because non-parametric methods are too hard. The people in economics who are actually doing any of this properly are generally associated with the Santa Fe Institute, and their crap looks weird, because if you do this properly, you will get weird results.

    There are rare times when I don't believe endogeneity is important. When we're not studying something like a natural experiment, our priors *should* be against the econometric technique being legitimate. This certainly applies to the model Roberts was making fun of. Give me any nontrivial macro model, and I can give you a half dozen possible sources of endogeneity, and the only response to them will be "ignorability" or "we don't have data for that," neither of which are scientific explanations.

  3. The obvious problem is that a lot of economists seem to confuse modeling with theorizing.

  4. Stravinsky -
    Dan Klein makes a big deal of that distinction, but I don't really see that it matters all that much. The difference seems to be more a matter of degree and semantics than anything else, but I could be wrong. What are you concerned with here, exactly?

  5. What did Dan Klein say about it?

    It's just a matter of personal observation based on my examination of the economic literature. Economists seem to think that constructing a model with words like "rationality" and "stochastic" thrown in constitutes saying something about reality. Models are a parallel universe. Physicists and biologists work their asses off to demonstrate the relevance of their parallel universes to the one we inhabit. Economists seem less interested in such a practice.

  6. This is where Daniel Klein writes about it in Econ Journal Watch: http://econjwatch.org/articles/model-building-versus-theorizing-the-paucity-of-theory-in-the-journal-of-economic-theory

    This is an excellent response by John Quiggin, which makes my point that these concerns are largely semantics:
    http://econjwatch.org/articles/why-should-we-care-what-klein-and-romero-say-about-the-journal-of-economic-theory

  7. re: "Physicists and biologists work their asses off to demonstrate the relevance of their parallel universes to the one we inhabit. Economists seem less interested in such a practice."

    This is nothing like my experience. I'm not sure exactly what gives you this impression.

    I think modeling plays a somewhat different role in sciences like economics and biology than it does in physics. With such a complex subject of study what we're ultimately doing is trying to present mechanisms that may be operating and demonstrating that they are useful for describing what happens in the world. There's no presumption that they are anything other than an abstraction, or that they explain everything. At least from what I've read and who I've heard there's no such presumption. But they can usefully model aspects of the way the world works.

    Physicists, we think, have perhaps a somewhat better prospect of actually parameterizing laws rather than modeling mechanisms. So it is certainly true that economists and physicists approach modeling and theorizing differently, but I think economists and biologists approach these problems in much the same way, and my experience is that they aren't naive in the way you imply.

    Where did you get that impression?

  8. "But they can usefully model aspects of the way the world works."

    It's the "usefully" there that economist always assume but rarely seem interested in proving. Furthermore, it's less a matter of naivete so much as laziness, I would guess.

    The impression comes from reading through the archives of economic journals on JSTOR, the greatest institution known to man. "Modeling" papers are very different from "This is how things work in the real world" papers, with no attempt to bridge the gap between the two. Modelers and theorists in economics do not communicate.

    I don't speak very often to economists, so maybe they're aware of this problem. But they don't address it in the literature.

    I'd be happy to dredge up some examples if you're willing to pay me ~$100.

  9. re: ""Modeling" papers are very different from "This is how things work in the real world" papers"

    If you mean these don't always get done in the same twenty pages you're right. So? One needs context and empirical work to make those judgements. There were a lot of Keynesian models floating around. You got variations and tweaks. You got lots of data and tests together, you got general assessments of how well it fit the data and then you got a broad based critique like the Lucas critique. Then you had lots of different RBC and New Keynesian models and the same process - and you had the emergence of new modeling strategies like search theory to again - match the models to the real world. And now you've got other big-picture critiques, many of which are inspired by the current crisis.

    You seem to be disappointed that the entire scientific project isn't done immediately in a single twenty page paper.

    Do you know of any discipline that delivers that? Why would you want/need that?

    You've presented an interesting model here, but it doesn't seem useful for explaining the real world.

  10. Macro - the eternal whipping boy - is especially good at this.

    Pick up any good macro textbook from intermediate undergraduate on up. Any textbook where they start teaching models rather than just intro material. Look at how they assess models after they present the model. They always assess it on the basis of "this model was an advance in characterizing this observed phenomenon, but it could not account for this other observed phenomenon".

  11. Most modeling that economists do that gets published in journals is never discussed, weighed against theory, or incorporated into it. So it's a waste of time and money. There's an oversupply. And economists choose which models to assess based on arbitrary and subjective judgements. It's a very inefficient process, and I cannot have much confidence in it. Which is not a strike against macro or econometrics per se, but I would need to believe in an economist before I believe in his theory. I don't need to trust Stephen Hawking to trust his theory of physics as long as I can trust physics generally.

  12. Yes, we're all clear by now on your explanation. What we're still short on is evidence or a response to my evidence.

    Think of the last several Nobels - Pissarides, Mortensen, and Diamond; Ostrom and Williamson; Krugman. Each and every one of them has been given an award for generating a model that explains reality better than previous models which worked well for their time, but weren't perfect in explaining reality.

    Can you stop just repeating the claim and start giving reasons why you think this is how the discipline works? Because I've asked you several times now, and it's becoming increasingly obvious that you're just expressing a prejudice.

  13. Whoops, my comment was left incomplete. What Barro and Romer papers are you talking about? You've mentioned them a couple of times, but I've apparently come late to the party and I'm not 100% sure which ones you are talking about.

  14. Sorry - I understood your meaning but I got caught up with Stravinsky. I'm thinking of Romer's tax paper with her husband and Barro's military spending paper (I think that one is out in the QJE now???).

  15. blah that military spending paper is terrible!* The theory may be reasonable, but WWII dominates the results. Others who have used military dummies in time series to measure fiscal effects have avoided WWII because of the way price controls, rationing, and general government crowding out distort the result.



    *he's got a co-author, right?

  16. I'm not saying the paper presents a usable estimate of the multiplier - I'm saying it recognizes and addresses the endogeneity problem in a more convincing way than, for example, the interstate comparisons or the essentially pre-post stuff that John Taylor has been doing.

    My biggest concern with Barro is that what he produces is an estimate of the multiplier averaged across the business cycle. Big deal. What we need is a multiplier during a period of a recession.

    But what I appreciate about Barro is that he understands the empirical task at hand and he understands how it ought to be dealt with a lot better than a lot of the punditry and bloggers out there. That's not to say I would uncritically repeat his estimates.

  17. re: "*hes got a co-author, right?"

    Yes - Charles Redlick. Unfortunately I doubt I'm the only person out there that fails to give him credit in casual references.

  18. *But what I appreciate about Barro is that he understands the empirical task at hand and he understands how it ought to be dealt with a lot better than a lot of the punditry and bloggers out there. That's not to say I would uncritically repeat his estimates. *

    Antagonistic part of my reply: Barro is the punditry and bloggers, though. That paper basically seemed to be done to prove the back-of-the-envelope calculation he did for (I think it was) a WSJ op-ed. Like I said, he basically follows other time series analyses (Ramey and Shapiro, 1998) but then adds in WWII precisely to get the result he wants.


    *My biggest concern with Barro is that what he produces is an estimate of the multiplier averaged across the business cycle. Big deal. What we need is a multiplier during a period of a recession.*

    Nerd part of my reply: I was actually gonna bring this up in regard to Romer and Romer. I like that paper, but it suffers from the problem all VARs have, which is that impulse response functions are by definition the reaction of variables to exogenous shocks, which are essentially "timeless". (I'll be guilty of this too by the time my diss is done.)

    I've posted about this before, but I really like the Auerbach and Gorodnichenko (2011) paper, which tries to estimate a VAR across the business cycle.

    http://elsa.berkeley.edu/~auerbach/measuringtheoutput.pdf

  19. Robin Hanson went after Russ Roberts for his (incomplete) skepticism here:
    http://www.overcomingbias.com/2007/10/if-not-data-wha.html
    Later on they had a podcast where Roberts got on the couch and Hanson acted as his bias therapist.

  20. Daniel,

    Your profession should have the economics version of thunderdome - two economists enter, one economist leaves. ;)

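A postscript on the bootstrapping point raised in comment 2: whatever you think of whether bootstrap intervals count as "real" confidence intervals, the procedure itself is simple enough to state concretely. A minimal percentile-bootstrap sketch (made-up data, purely illustrative, not tied to any study discussed above):

    # Minimal percentile bootstrap for a median, purely illustrative, made-up data.
    # Resample with replacement, recompute the statistic each time, and read the
    # interval off the quantiles of the recomputed values.
    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.standard_cauchy(500)          # a fat-tailed sample, invented
    boot_medians = np.array([
        np.median(rng.choice(data, size=data.size, replace=True))
        for _ in range(5_000)
    ])
    lo, hi = np.percentile(boot_medians, [2.5, 97.5])
    print(f"sample median = {np.median(data):.3f}   95% bootstrap interval = ({lo:.3f}, {hi:.3f})")

Whether that interval deserves to be called a confidence interval when the underlying distribution is this badly behaved is exactly the disagreement above - but the mechanics themselves are not mysterious.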

All anonymous comments will be deleted. Consistent pseudonyms are fine.