My thoughts are with Baltimore tonight. The last time the governor
called the National Guard out to respond to riots, my dad was able to get
out to his grandfather's farm in Baltimore County. Not everyone has
that option, then or now. That governor was Spiro Agnew, fwiw. Soon
afterward he got the VP slot on Nixon's ticket for his tough law-and-order
stance. The constitution my great-grandfather's convention drafted failed
at the polls shortly after too, in part because it was perceived as too liberal
and as reapportioning too much power to Baltimore City and the DC suburbs.
I've come across mixed reactions, but some delegates think racial
tensions and the riot killed it.
Will Hogan be a VP? Probably not. And
chances are that this will accelerate some reforms. But the riots sure
didn't put Baltimore on a strong trajectory in 1968, and they're nothing
but bad news now. I wish both the cops and the rioters well. I've already
seen some celebration of harsh crackdowns, though. Hopefully this
settles down quickly, because escalation of any sort will not be for the
better. Citizens' responsibility is to fight for justice and speak truth
to power. The police's and National Guard's responsibility is to protect and
serve. We need a great deal of both, I think.
Tuesday, April 28, 2015
Thursday, April 23, 2015
David Henderson on Barbara Bergmann and the wage gap
Posted by dkuehn at 2:08 PM
I wanted to highlight this post by David Henderson on the late Barbara Bergmann, much of which is a discussion of her writing on the wage gap. He promises more to come.
One of the things I like about this post is that it shows how both sides have to be careful with wage regression interpretations when talking about the wage gap. David's criticisms of Barbara are very much along the same lines as my criticisms of guys like Mark Perry. (I am not deeply knowledgeable about what she's written on the subject, so to a certain extent I'm taking David's word for it, but it's a very common way of talking about these things.)
While I'm posting more on her, I'll also point out that Taylor & Francis is providing free access to a special issue of Feminist Economics on Barbara Bergmann this month. So download those PDFs before it's too late!
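To make the interpretation point concrete, here's a minimal simulated sketch (the numbers and the "occupation" channel are entirely made up for illustration - this is not anything from Bergmann's or Henderson's actual work). When a control is itself partly a channel through which the gap operates, the raw and "adjusted" gaps both measure something real, but neither is automatically "the" discrimination effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Simulated workers: 'female' affects occupational sorting AND wages directly,
# so occupation is partly a channel of the gap, not just a confounder.
female = rng.integers(0, 2, n)
occupation = (rng.random(n) < 0.3 + 0.3 * female).astype(float)  # 1 = lower-paying occ.
log_wage = 3.0 - 0.10 * female - 0.20 * occupation + rng.normal(0, 0.3, n)

def ols(y, X_cols):
    """OLS with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y))] + list(X_cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

raw_gap = ols(log_wage, [female])[1]                    # total gap, channels included
adjusted_gap = ols(log_wage, [female, occupation])[1]   # gap net of occupation

print(f"raw gap:      {raw_gap:.3f}")   # ~ -0.16 (direct -0.10 plus sorting channel -0.06)
print(f"adjusted gap: {adjusted_gap:.3f}")  # ~ -0.10 (direct effect only)
```

Here the adjusted gap recovers only the direct effect, while the raw gap also picks up the sorting channel. Which one you want depends entirely on the question you're asking - which is exactly why both sides have to be careful.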
Tuesday, April 7, 2015
Does anybody have experience with a distributed lag model for panel data?
Posted by dkuehn at 9:19 PM
Does anybody have experience with a distributed lag model for a panel dataset? I'm getting this odd result where I'm trying a bunch of different lag lengths, and no matter what I run, the two longest lags have much bigger coefficients than the rest. So when I run with six lags, lags five and six have big coefficients, but when I run with sixteen lags, fifteen and sixteen do. I feel like this has to indicate something about the data structure and the model - it can't be real for that pattern to show up at every lag length. I'm just not sure what it indicates.
If it matters - I'm looking at the size of apprenticeship programs in an unbalanced panel, with lags of the unemployment rate as the regressors. No lags of the dependent variable.
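In case the setup itself is the issue, here's a minimal sketch of how I'm building the lags (simulated data and hypothetical variable names, not my actual apprenticeship data). Two mechanical things worth flagging: lags have to be shifted within panel units so values never cross unit boundaries, and in an unbalanced panel each additional lag drops the earliest observations of every unit, so different lag lengths are estimated on different samples:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical unbalanced panel: states enter in different years.
rows = []
for state in range(20):
    start = int(rng.integers(1990, 2000))
    for year in range(start, 2015):
        rows.append((state, year, rng.normal(6, 2)))  # unemployment rate
df = pd.DataFrame(rows, columns=["state", "year", "unemp"])
df = df.sort_values(["state", "year"]).reset_index(drop=True)

# Outcome depends on current unemployment and its first two lags.
df["apprentices"] = (
    100 + 3 * df["unemp"]
    + 2 * df.groupby("state")["unemp"].shift(1)
    + 1 * df.groupby("state")["unemp"].shift(2)
    + rng.normal(0, 1, len(df))
)

# Build lags WITHIN state so lagged values never cross panel-unit boundaries.
n_lags = 4
for k in range(1, n_lags + 1):
    df[f"unemp_l{k}"] = df.groupby("state")["unemp"].shift(k)

est = df.dropna()  # each extra lag drops the earliest years of every state
X = np.column_stack([np.ones(len(est)), est["unemp"]]
                    + [est[f"unemp_l{k}"] for k in range(1, n_lags + 1)])
beta, *_ = np.linalg.lstsq(X, est["apprentices"], rcond=None)
print(dict(zip(["const", "l0", "l1", "l2", "l3", "l4"], beta.round(2))))
```

With simulated data where the true lag structure is known, the estimates land on the true coefficients and the longest lags are near zero - so whatever is happening in the real data isn't baked into this construction itself.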
Data adjustments - not a conspiracy, just a part of empirical work in economics
Posted by dkuehn at 1:07 PM
I got an email today announcing an Urban seminar, and the abstract reminded me of some of the Piketty debates around Bob Murphy and Phil Magness's paper and subsequent discussions. Here it is:
"ABSTRACT: The 2014 Current Population Survey, Annual Social and Economic Supplement (CPS-ASEC) introduced major changes to the income questions. The questions were introduced in a split-sample design—with 3/8 of the sample asked the new questions and 5/8 asked the traditional questions. Census Bureau analysis of the 3/8 and 5/8 samples finds large increases in retirement, disability, and asset income and modest increases in Social Security and public assistance benefits under the new questions. However, despite the additional income, poverty rates are higher for children and the elderly in the sample asked the new questions. In this brownbag, we discuss the changes to the survey, the effects of the changes on retirement and other income, and describe how compositional differences among families with children in the 3/8 and 5/8 samples may explain the unexpectedly higher poverty rates in the 3/8 sample. The discussion has practical as well as theoretical importance, as researchers will have a choice of datasets to choose from when analyzing the 2014 CPS-ASEC data—the 3/8 sample weighted to national totals, the 5/8 sample weighted to national totals, a combined sample, and possibly also an additional file prepared by the Census Bureau that imputes certain income data to the 5/8 sample based on responses in the 3/8 sample."
The CPS is typically not used to address inequality for all sorts of reasons, including the nature of the questions, coverage, and top-coding. But it still has income questions, and note that the recent redesign changes asset income reports. Of course, if we were to use the CPS to think about some of Piketty's research questions, this change would be important. Moreover, if you wanted a consistent series from the CPS you would have to adjust the data: either move down the newer half of the series or (probably preferably, if the redesign represents an improvement) move up the older half of the series. The split samples discussed in the abstract exist precisely so that you can work out the sort of adjustment that would be appropriate.
This is what Piketty is doing too when he harmonizes several of the wealth inequality series, and he uses years when the data series overlap to develop the adjustment factors. The figure Murphy and Magness like to call the "Frankenstein graph" suggests that certain blocks of the series come from different datasets, but in reality Piketty is typically taking data from several datasets to provide a harmonized estimate (for example, combining the Kopczuk and Saez data and the SCF data). This is how you'd want to merge several datasets, and it's generally not "pivoting" between datasets or "overstating" them as Murphy and Magness put it.
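To make the overlap-based harmonization concrete, here's a minimal sketch with made-up numbers (not Piketty's actual series or adjustment factors): use the years where two series overlap to develop an adjustment factor, scale the older series to the newer one's level, and splice:

```python
import pandas as pd

# Hypothetical illustration - made-up index values, not Piketty's data.
# The two series overlap in 1995-2000, with the newer one at a higher level.
old = pd.Series({y: 20.0 + 0.1 * (y - 1980) for y in range(1980, 2001)})
new = pd.Series({y: 23.0 + 0.1 * (y - 1980) for y in range(1995, 2015)})

# Use the overlap years to develop an adjustment factor...
overlap = old.index.intersection(new.index)
factor = (new.loc[overlap] / old.loc[overlap]).mean()

# ...then move the older half of the series up to the newer series' level
# and splice, preferring the newer data wherever both exist.
old_only = old.index.difference(new.index)
spliced = pd.concat([(old * factor).loc[old_only], new]).sort_index()
print(spliced.loc[[1980, 1995, 2014]].round(2))
```

A ratio adjustment is just one choice - an additive shift from the overlap years would work too. Either way the decision is explicit and criticizable, which is the point.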
Anyone can criticize these sorts of data decisions, but they're a normal part of empirical work. If your criticism is just that the data decisions produce the conclusion that Piketty draws, that's not a very reasonable criticism. It's entirely circular: Piketty's conclusions are bad because his data decisions are bad. How do you know his data decisions are bad? Because they correspond to his conclusions!