Recently, Hagedorn, Manovskii, and Mitman (hereafter HMM) released an NBER working paper on the impact of unemployment insurance benefit extensions on employment. I find it interesting enough to note here for two reasons: (1.) it uses the really nice county-border comparison approach that Dube, Lester, and Reich (2010) used, which I like, and (2.) it gets some unusually high impact results. They conclude that eliminating the benefit extension created 1.8 million jobs, despite the fact that only 1.3 million people had their benefits cut.
The relationship between unemployment insurance and employment (or unemployment duration, or any number of other outcomes you might be interested in) is one of those things that's fairly straightforward as a first approximation but gets a little more complicated as you think about it.* Unemployment insurance should reduce labor supply and therefore increase unemployment and reduce employment. It's not the sign of the result that's surprising anyone here; it's the magnitude.
HMM test the impact of the unemployment extension with what is essentially a cross-border difference-in-differences (DID). When Congress did not reauthorize the benefit extension in 2013, the actual reduction in benefits varied across states because of variation in benefit generosity at the state level. So in effect different states experienced different shocks, and variation in those shocks is used to identify the impact of UI extensions. What's interesting about HMM is that they use border counties as a comparison group to account for unobserved state-level heterogeneity, which should be less variable across counties that border each other. This follows in the tradition of Card and Krueger, and of Dube, Lester, and Reich, in the minimum wage literature. Dube, Lester, and Reich are a step ahead because they use county pairs as the unit of analysis (rather than counties), which allows them to control for some county-level time trends that a border county dataset alone can't get at, but it's essentially the same idea.
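For anyone who wants to see what that looks like in practice, here is a minimal sketch of a county-pair dosage regression in that spirit. This is my own illustration in Python, not HMM's actual setup - the file, the column names (emp_growth, benefit_cut, post, and the identifiers), and the exact specification are all assumptions on my part.

```python
# Illustrative border-county-pair difference-in-differences, in the spirit of
# Dube, Lester, and Reich (2010). Data and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per border county x quarter x pair it belongs to, with:
#   emp_growth  - county employment growth (outcome)
#   benefit_cut - size of the UI benefit reduction the county's state saw
#                 when the extension lapsed (the treatment "dosage")
#   post        - 1 for quarters after the extension expired
#   county_id, state_id, pair_id, quarter - identifiers
df = pd.read_csv("border_county_pairs.csv")  # hypothetical file

# Pair-by-quarter fixed effects absorb anything common to both counties in a
# pair in a given quarter, so the coefficient on benefit_cut:post is
# identified from within-pair differences in the size of the cut, before
# versus after the expiration. A fuller specification could add county fixed
# effects and county-level controls.
model = smf.ols(
    "emp_growth ~ benefit_cut + benefit_cut:post + C(pair_id):C(quarter)",
    data=df,
)
# Cluster standard errors at the state level, since the shock varies by state.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["state_id"]})
print(result.params["benefit_cut:post"])
```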
So the design, I think, is great. Mike Konczal does not agree with me on that. He considers the "gold standard" in this literature to be the papers that use non-recipients as the control group. This seems odd to me - non-recipients would have very different characteristics than recipients, so what you'd want to do is include recipients in both the treatment and control groups and vary the dosage of the shock, which is what HMM do. Konczal also thinks it's a liability that HMM look at the entire labor market, although I don't get this complaint either. It's not that the studies he cites are bad studies. When you're using non-experimental designs you want a range of estimates from a range of different approaches to try to understand what's driving the results and zero in on what the actual result probably is. But it's not at all clear to me why Konczal's preferred studies are the "gold standard".
I think a much better criticism is offered by Dean Baker, who focuses on the data rather than the study design. HMM use the CPS and the LAUS. The CPS is a particularly odd choice for looking at counties because of how its sampling is done. The LAUS combines the CPS, the CES, and unemployment insurance data, and in that sense is probably somewhat stronger. But Baker makes the point that the more appropriate choice is the CES, which is establishment-based (the CPS is household-based). Since unemployment insurance is determined by the state of the employer and not the address of the employee, the CES will more accurately reflect the labor market response to changes in UI. When Baker does a quick run at the results with CES data, it looks like they're reversed.
So I'm torn here. I like the design a lot, contra Konczal, and that should give us confidence in the results. But the results don't seem to be robust to the choice of data, and that ought to be investigated further.
*There are at least two wrinkles worth noting. First, in a depressed economy, putting money into the hands of an unemployed person is going to have a positive impact on demand, which may blunt the negative impact on labor market outcomes. Second, as a welfare matter, we may like UI extensions even if they do increase unemployment. I love Martin Baily's old line on this - he said "unemployment may increase as a result of UI, but it matters less". We certainly shouldn't worry about doing harm to recipients: if a recipient were hurt by a UI extension, they wouldn't take it. To borrow an old trope, "nobody was holding a gun to his head and making him take UI". Revealed preference and all that. One of the important reasons we think UI is good is that it allows people to hold out for better job matches rather than jumping at the first job that comes along because they need to feed their families. So in that sense we could have an increase in unemployment but an improvement in the efficiency of the labor market.
"Unemployment insurance should reduce labor supply and therefore increase unemployment and reduce employment."
This is not the story that HMM tell, though. Indeed they can't tell it, since so many of their 1.8 million aren't on UI or even in the labor force during 2014. This is a story about wage setting from the employer side. In a sense, it's a story about internal devaluation; wages finally came down enough to hire more people. Is that really what we observed in 2014?
I don't think it's right to say it's not the story they tell, but it's certainly not the only story they tell. In any case, it's the story I'm telling here - it ought to be the dominant effect of UI (although my asterisk is important for why we still might want to do it).
Ultimately this sort of estimate isn't going to arbitrate between different theories of what's going on. I think the design here is a good one - I would not privilege the others over it. But I think the points about the data are important, and it's reassuring that when you use better data a result with a more sensible magnitude seems to hold.