His long-awaited post is here (well, long-awaited by me because I've been discussing this a lot with him lately!). David Henderson's summary is here.
The post comes in two parts. First, Bob argues that even if what I'll call the quasi-experimental studies (Dube's work and related papers) are right, it does not mean raising the minimum wage is a good idea; he presents familiar, and I think strong, arguments that the increase to $10.10 is not "modest" by the standards of those studies. Second, he argues that even if overall employment doesn't decline, employment for disadvantaged workers might.
Both I think are essentially right, although I probably wouldn't make the point about a modest increase quite as strongly as Bob does. As I wrote up recently, the proposed nominal increase is certainly larger than what is typically studied in this literature, but that is only half of the equation. You also have to consider changes in labor demand, and productivity statistics strongly suggest that the wage at which the minimum would be binding has been rising faster than the minimum wage itself for quite a while. Set aside the fact that labor productivity as measured by the BLS isn't exactly marginal productivity: if one serves as a decent proxy for the other, the faster growth of productivity than the real minimum wage, decade after decade, seems notable if we are talking about whether the increase is a "modest" one or not. So I think Bob makes an important point here, but it's only half of the point that ought to be made.
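To make that comparison concrete, here is a minimal sketch of the growth-rate arithmetic I have in mind. The series below are hypothetical placeholders (in practice you'd use the BLS productivity index and a CPI-deflated minimum wage series), so treat this as an illustration of the calculation rather than a result.

```python
# Sketch: compare compound annual growth of labor productivity with the real minimum
# wage. The numbers below are hypothetical placeholders, not actual BLS/CPI data.

def cagr(series):
    """Compound annual growth rate of an annual series."""
    years = len(series) - 1
    return (series[-1] / series[0]) ** (1.0 / years) - 1.0

productivity_index = [100.0, 102.3, 104.7, 107.0, 109.6]   # hypothetical index values
real_min_wage = [7.25, 7.11, 6.99, 6.90, 6.82]             # hypothetical deflated dollars

print(f"productivity growth:      {cagr(productivity_index):.2%} per year")
print(f"real minimum wage growth: {cagr(real_min_wage):.2%} per year")
print(f"gap: {cagr(productivity_index) - cagr(real_min_wage):.2%} per year")
```

The point is just that the relevant benchmark for "modest" is the gap between those two growth rates, not the nominal increase alone.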
The second half discusses whether the quasi-experimental studies should be trusted at all. We've been over this ground a lot recently. You know I think that:
1. The contiguous county sample is ESSENTIAL.
2. Meer and West have a strong critique (although it's hard to think of when it would actually apply in the real world), but...
3. Dube seems to have done precisely what I thought would be the right response: using trends from the pre-period (a rough sketch of the kind of specification I mean follows this list).
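Here is that rough sketch, in Python with statsmodels. The variable names (lemp, lmw, county, pair, period, t) and the data file are hypothetical placeholders, and for simplicity the county-specific trends here run over the whole sample rather than being fit only to the pre-period, so this illustrates the idea rather than reproducing anyone's actual estimates.

```python
# Sketch of a contiguous-county-pair design: log employment on the log minimum wage,
# with county fixed effects and pair-by-period effects so identification comes from
# comparisons across a shared state border. All names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("county_pair_panel.csv")  # hypothetical panel: lemp, lmw, county, pair, period, t

# Baseline pair design
baseline = smf.ols("lemp ~ lmw + C(county) + C(pair):C(period)", data=df).fit()

# Add county-specific linear time trends, in the spirit of the response to Meer and West
# (Dube's version fits them to the pre-period; here they run over the full sample).
with_trends = smf.ols("lemp ~ lmw + C(county) + C(pair):C(period) + C(county):t",
                      data=df).fit()

print("elasticity, baseline:   ", round(baseline.params["lmw"], 3))
print("elasticity, with trends:", round(with_trends.params["lmw"], 3))
```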
What's nice is that Bob's piece brings up Neumark and Salas, which I haven't discussed here. They show that alternative time trend specifications reverse some of these results (although I don't know if this is with a contiguous counties sample - Bob, do you know?). I don't know the paper well - my one question is whether there is an overfitting issue: basically, the Meer and West critique could very conceivably apply to Neumark and Salas as well.
That's just thinking off the top of my head - curious what you all think of Neumark and Salas.
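To put the overfitting worry slightly more concretely, here is a rough sketch of the kind of check I'm imagining: re-estimate the same elasticity while letting the county-specific trends get progressively more flexible, and watch whether the estimate only moves once the trends are flexible enough to absorb the policy variation itself. Again, the file and variable names are hypothetical placeholders.

```python
# Sketch: how sensitive is the estimated minimum wage elasticity to increasingly
# flexible county-specific time trends? All names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("county_pair_panel.csv")  # hypothetical panel: lemp, lmw, county, period, t

for degree in (0, 1, 2, 3):
    trend_terms = "".join(f" + C(county):I(t**{k})" for k in range(1, degree + 1))
    formula = "lemp ~ lmw + C(county) + C(period)" + trend_terms
    fit = smf.ols(formula, data=df).fit()
    print(f"trend polynomial of degree {degree}: elasticity = {fit.params['lmw']:.3f}")
```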
I think Bob is right to treat these as open questions (both the scientific question and the policy question), but I think on the scientific question the quasi-experimental literature is quite strong. From my perspective, identification and eliminating bias in the estimate are the primary concerns - so the contiguous county studies carry a lot of weight with me.
Regarding your point #1, how effective do you think contiguous county analysis is at its stated objective? That is, do you believe that there is enough economic similarity between, say, Dallas and Tarrant county that either county functions as a good control group for the other? This strikes me as a rather strong conclusion, given the significant differences I have personally observed between various contiguous counties. Can you explain why you, personally, believe contiguous counties are valid control groups for economic analyses?
Do you think Arlington County, Virginia is a better comparison than Tarrant? Or what about Sheboygan, Wisconsin? Because if you reject Dube et al., that is what you're going back to - assuming that any county is an equally good match to other counties. The contiguous counties approach doesn't give you a perfect estimator, but it vastly improves on what was done before.
No, contiguous counties are absolutely not perfect control groups (I've noted this here in talking about why you'd want to include time trends: http://factsandotherstubbornthings.blogspot.com/2014/01/thinking-about-specifications-of-dube_18.html). But they are much better than fixed effects models with no matching.
If you keep improving the identification of the model and the estimate keeps moving in the direction of no effect, that tells you something about the policy.
I only claim, in other words, that Tarrant is a far better comparison to Dallas than Sheboygan, and it gets even better when you include time trends.
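For what it's worth, here is a rough sketch of how one could check that claim directly, by comparing how closely each candidate control county tracks Dallas over a pre-period. The file, column names, cutoff date, and county labels are all hypothetical placeholders.

```python
# Sketch: does Tarrant track Dallas more closely than Sheboygan does over a pre-period?
# All file, column, and county labels here are hypothetical placeholders.
import pandas as pd

panel = pd.read_csv("county_employment.csv")             # hypothetical: date, county, emp
pre = panel[panel["date"] < "2007-01-01"]                 # hypothetical pre-period cutoff

growth = (pre.pivot(index="date", columns="county", values="emp")
             .pct_change()
             .dropna())

print("Dallas vs Tarrant:  ", round(growth["Dallas"].corr(growth["Tarrant"]), 2))
print("Dallas vs Sheboygan:", round(growth["Dallas"].corr(growth["Sheboygan"]), 2))
```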
Surely we could improve these estimates even further by including a lot more time-variant county-specific information, but right now this is the gold standard.
If you want to argue that Sheboygan is just as good as Tarrant, be my guest, but I don't want to argue that.
I shouldn't act like time trends are the only time-variant county-level controls. Of course they have others. But given the data they've got, not as many as one might like.
Still, it seems untenable to prefer the fixed effects approach to an imperfect DID approach.