Monday, January 2, 2012

Ability Bias and Education

Bryan Caplan has a good post up on ability bias in estimates of the returns to education.

The problem of ability bias is basically that high-ability people do better in the labor market, and high-ability people also pursue more schooling. So when you try to identify the relationship between schooling and labor market performance, you will overestimate the effect of schooling if you don't account for ability.
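To make the direction of the bias concrete, here is the textbook omitted-variable-bias algebra (a generic sketch, not anything drawn from Caplan's post or from particular data). Suppose log wages depend on both schooling S and ability A, but the regression leaves ability out:

```latex
% True model:
\ln w_i = \alpha + \beta S_i + \gamma A_i + \varepsilon_i
% Short regression that omits ability:
\ln w_i = a + b\,S_i + u_i
% The OLS slope of the short regression converges to
\operatorname{plim}\hat{b} \;=\; \beta \;+\; \gamma\,\frac{\operatorname{Cov}(S_i, A_i)}{\operatorname{Var}(S_i)}
```

With ability paying off in the labor market (gamma > 0) and higher-ability people getting more schooling (Cov(S, A) > 0), the short-regression estimate overstates the true return beta.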

The traditional way labor economists have dealt with this has been to find a measure of education that is uncorrelated with ability, usually from a natural experiment, an actual experiment, or an instrument. I haven't talked about this stuff in a while, but long-time readers know I'm skeptical of a lot of instrumental variables (which shows how bleak the macroeconometric estimation prospects are, since I'm willing to accept instrumental variables in multiplier estimates!). Indeed, my skepticism is rooted in precisely this ability bias debate (David Jaeger, a co-author of the linked paper, was my labor economics and econometrics professor).
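For readers who haven't followed these debates, here is a bare-bones sketch of what "an instrument" means mechanically: two-stage least squares on simulated data. Everything in it is invented for illustration, and the instrument is valid by construction, which is exactly the part that is so hard to defend with real data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50_000

ability = rng.normal(size=n)                         # unobserved by the econometrician
z = rng.normal(size=n)                               # instrument: shifts schooling, unrelated to ability
school = 12 + ability + 0.7 * z + rng.normal(size=n)
log_wage = 1.0 + 0.08 * school + 0.10 * ability + rng.normal(scale=0.5, size=n)

# Stage 1: project schooling on the instrument.
school_hat = sm.OLS(school, sm.add_constant(z)).fit().fittedvalues

# Stage 2: regress wages on predicted schooling (point estimate only;
# these second-stage standard errors are not the correct 2SLS standard errors).
iv = sm.OLS(log_wage, sm.add_constant(school_hat)).fit()
naive = sm.OLS(log_wage, sm.add_constant(school)).fit()

print("true return: 0.08")
print("naive OLS:  ", round(naive.params[1], 3))   # biased upward by ability
print("manual 2SLS:", round(iv.params[1], 3))      # close to 0.08
```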

Caplan offers a relatively simple solution - just control for ability!
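Mechanically, the fix is about as simple as it sounds. Here is a minimal simulated sketch (all parameter values invented) of what adding a measured-ability control does to the schooling coefficient, assuming for the moment that the measure really does capture exogenous ability:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000

ability = rng.normal(size=n)                       # suppose we actually observe this
school = 12 + 2 * ability + rng.normal(size=n)     # high-ability people get more schooling
log_wage = 1.0 + 0.08 * school + 0.15 * ability + rng.normal(scale=0.5, size=n)

# Naive OLS: schooling only (suffers from ability bias).
naive = sm.OLS(log_wage, sm.add_constant(school)).fit()

# Caplan's suggestion: just add measured ability as a control.
controlled = sm.OLS(log_wage, sm.add_constant(np.column_stack([school, ability]))).fit()

print("true return to a year of schooling: 0.08")
print("naive estimate:          ", round(naive.params[1], 3))       # biased upward
print("controlling for ability: ", round(controlled.params[1], 3))  # close to 0.08
```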

I think this solution is a lot better than people realize, but I think the problems with it need to be taken seriously too. He writes, "Despite their mighty debunking efforts, labor economists almost never test for ability bias in the most obvious way: Measure ability, then re-estimate the return to education after controlling for measured ability." Perhaps it's true that they "almost never" do this, but I certainly have done it! I did it first in this Urban Institute paper, Vulnerability, Risk, and the Transition to Adulthood (2011), and again in a paper that Marla McDaniel and I are going to submit this week to the Review of Black Political Economy on the differential benefits of a high school diploma for black and white youth (in which, Bryan Caplan will be happy to know, we discuss the signalling role of a diploma).

Now, we didn't control for ability in order to estimate the bias in a standard OLS estimate of the effect of education. We just knew we needed to account for ability bias, and we had no interest in developing some crazy IV scheme to get the paper accepted into a more prestigious journal (in fact, our first submission to a more prestigious journal was rejected, and the endogeneity concern was cited... I think their point was weak, but nevertheless...).

So I am definitely in agreement with Caplan on the value of this approach. He seems to suggest that people don't do this as much as they should because it tends to produce the result that education isn't as beneficial as you might think. Maybe, but I don't think it's anything nearly so devious. I think there are both good reasons and bad reasons for ignoring this route.


*****

Good reasons for not using this approach more often:

1. Not a lot of data sets with the labor market information we need have intelligence or ability information.

2. We are not psychologists or educational specialists - we know very little about what "abilities" are important for the labor market. We don't even know much about how to define and talk about these "abilities". We do know something about uncorrelated measures and pseudo-randomization. Labor economists are probably wise to stick to what they know. Of course, that's no excuse not to co-author with a psychologist.

3. These ability measures are likely to be endogenous. Bryan Caplan just throws out suggestions like "just control for ability in the NLSY" without giving readers more background on what he's talking about. We used the NLSY in both of the papers I discussed above. Bryan is referring to the ASVAB test that's administered as a part of the NLSY. Let's set aside for a minute the fact that my psychologist sister-in-law has told me on many occasions that psychologists do not consider the ASVAB a valid test of intelligence or ability. It also simply reintroduces the very endogeneity problems we're trying to get away from. The ASVAB is administered in the first round of the survey (this is a longitudinal dataset, which is what's so nice about it). In the first round, the youth in question are between the ages of 12 and 16. In other words, they've already been in school for 6 to 10 years. And the ASVAB sections that are most used in the NLSY (math and verbal ability) are not like an IQ test - they are much more knowledge-based, like the SAT or the GRE. So those years of schooling are absolutely going to improve the ability score.

That's a big deal for what Bryan is proposing here. He suggests that the impact of education drops by something like 40% when you control for ability. But if this ability score is itself partly produced by high-quality elementary education, then a big chunk of what gets attributed to ability may actually be caused by education.
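Here is a small simulation of that mechanism (every number in it is invented for illustration, and the "score" variable is just a stand-in for an ASVAB-style test): when the test score is partly produced by schooling itself, controlling for it doesn't merely strip out ability bias - it strips out part of education's own effect.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 100_000

iq = rng.normal(size=n)                             # innate ability (unobserved)
years = 12 + iq + rng.normal(size=n)                # schooling; high-iq kids get more of it
# A knowledge-based test taken after years of schooling: partly iq, partly what school taught you.
score = iq + 0.5 * (years - 12) + rng.normal(scale=0.3, size=n)
log_wage = 1.0 + 0.08 * years + 0.10 * iq + rng.normal(scale=0.5, size=n)

def schooling_slope(controls=None):
    X = years if controls is None else np.column_stack([years] + controls)
    return sm.OLS(log_wage, sm.add_constant(X)).fit().params[1]

print("true return to schooling:               0.08")
print("naive OLS (ability bias):              ", round(schooling_slope(), 3))
print("controlling for true innate ability:   ", round(schooling_slope([iq]), 3))
print("controlling for the school-made score: ", round(schooling_slope([score]), 3))
# The last estimate falls below the true return: part of the "ability" score
# was produced by schooling, so the adjustment takes credit away from education.
```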

Bad reasons for not using this approach more often:

1. I'm going to get a little more suspicious here, like Bryan. The Card/Krueger/Angrist axis in labor economics is powerful and seductive. Labor economists really love tricky identification strategies (I certainly do, even though I'm more suspicious of IV models). Labor economists also have a comparative advantage in doing these things. They lose that comparative advantage if you start simply controlling for a measure that (1) they don't understand as well, and (2) anyone with an undergraduate econometrics education can include and interpret.

1 comment:

  1. What statistical distribution was used for these studies? Just curious.
