Tuesday, May 14, 2013

My tied-for-favorite psychologist comments on IQ in the Richwine thread

(I can't say "favorite" or I'd get in trouble with my sister-in-law)

Dr. J, a good friend with expertise in psychometrics, commented on the Richwine thread. In broad strokes this was my impression as well, but she's got a lot more detail:
"A few points here (a little late, and also WRT your prior post on this issue):

On the question of whether IQ is highly determined by genetics (I defer to the geneticists on this): I believe current research shows it's about 50% genetics, 50% environment (most of the latter being non-normative social experience, that is, not the parents).

IQ is messy, messy, messy. It is a pain to operationalize. Many times, IQ is defined as "general intelligence" -- but what is intelligence? Well, whatever intelligence tests measure.

Other hitches: IQ tests were primarily, if not entirely, developed in the Western world, America in particular. How can we -- unless we are narcissistic egocentrists -- assume that what our society considers intelligence really generalizes across cultures? And indeed, cross-cultural research pretty routinely reveals that different cultures prioritize different aspects of the human experience; many cultures have words for concepts that are not prominent/discussed in American society. So we might expect cross-cultural differences in scores on IQ tests; that doesn't necessarily mean that people differ on "intelligence", broadly construed. It just means it's ridiculous to expect measurement invariance of a Western concept across all cultures.

Also, IQ tests are typically standardized around a mean of 100. Constantly re-standardizing test scores makes it difficult to look at changes over time.

Finally, we know that tests -- especially IQ tests -- measure both "intelligence" and, to a large extent, the ability to take tests. This is why, even within the US, we have sub-group differences in IQ: different groups have different access to the experiences necessary to develop testing competence. Now, we are working to develop culturally ungrounded IQ tests (such as the Hannon-Daneman test or the Siena reasoning test), and these tests do help mitigate/decrease sub-group differences in American samples. But, you can see . . . this is a hot mess.

I actually think IQ is one of the trickiest areas of inquiry in psychology, because in normal everyday conversation we all think we have a sense of what it means, but psychometrically it's fairly problematic. Which does not mean we can ignore it -- it's a great concept and a lot of great work has been done on it! -- but it does mean we have to discuss it in as many forums as possible!"
I reiterate my advice to economists: by all means throw anything even resembling an intelligence test on the right-hand side of the equation if you've got it. It's important heterogeneity worth controlling for. But then you're probably best served ignoring that coefficient and leaving the interpretation to others, because it's questionable whether we really have the chops to talk about it.
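
One quick illustration of Dr. J's standardization point, since it trips people up: IQ is norm-referenced, so each norming sample gets re-centered on a mean of 100 (SD 15), and any drift in raw performance over time disappears from the reported scale. A minimal sketch in Python -- the raw scores below are made up, not drawn from any real test:

    import statistics

    def standardize_iq(raw_scores, mean_iq=100, sd_iq=15):
        # Norm-reference raw scores against this sample: mean 100, SD 15.
        mu = statistics.mean(raw_scores)
        sigma = statistics.stdev(raw_scores)
        return [round(mean_iq + sd_iq * (x - mu) / sigma) for x in raw_scores]

    # Hypothetical raw scores from two norming cohorts; the later cohort
    # answers more items correctly on average.
    cohort_1980 = [38, 42, 45, 47, 50, 53, 55, 58, 62]
    cohort_2010 = [44, 48, 51, 53, 56, 59, 61, 64, 68]

    # Each cohort is standardized against itself, so both come out centered
    # on 100 -- the raw improvement is invisible on the reported IQ scale.
    print(standardize_iq(cohort_1980))
    print(standardize_iq(cohort_2010))

Any real change in performance has to be recovered from the raw scores across norming samples, which is exactly the comparison that constant re-standardization makes awkward.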

14 comments:

  1. However, you should know that he takes a heterodox perspective on intelligence (he is like an Austrian of personality psychology).

    The standard IQ concept has enormous predictive power. It surely measures something important.

    Replies
    1. Are you referring to Dr. J? She, from what I understand, takes a fairly mainstream view of IQ: she does not deny that the standard IQ concept has enormous predictive power, and as far as I know she does not deny that it measures something important.

      The questions are:
      1. Does predictive power map onto an accurate description of what is going on inside brains?
      2. Is IQ culturally contingent?
      3. Might it be "something important" but not the only thing that's important -- might a more meaningful understanding of human intelligence be more multi-faceted?

      Those two assertions of yours, in other words, can be true without getting to the heart of the issue.

      In the future - please don't comment here anonymously.

    2. When we talk about 'validity' (or, if you will, the truthiness of a measure), there are a number of different aspects to consider. One, which you mention, is predictive validity: does measure x predict things that we expect construct X (which measure x is intended to capture) to predict? However, this is only one of many aspects of validity, including content validity (does measure x sample from the entire content of construct X without sampling from the content of other constructs?), construct validity (sometimes this includes content validity, depending on what you read; generally, are you measuring what you think you're measuring?), and even consequential validity (the social consequences -- intentional and otherwise -- of test interpretations). If you want to get into these aspects of validity, Messick has some great -- although dense -- articles on it that I had my psychometrics students read.

      Anyway, IQ tests generally (not all, of course, but many of the mainstream tests) are high on predictive validity and, to some extent, construct validity. Their larger problems, in my view, are with content validity (contamination with the construct of 'testing ability') and consequential validity (adverse impact in selection due to potentially erroneous black-white group differences, for example). Ideally, of course, a measure that we can be confident in, in terms of what it is measuring, would be solid on all of these forms of validity.

      Importantly, predictive validity itself is not very elucidating without the other forms of validity. This is why we rarely use biodata in selection anymore -- even if something predicts other things highly, that doesn't mean we know what it is measuring!

      Dan, to your 2nd question, I actually did some lit reviews last night and couldn't find a single comprehensive study of cross-cultural measurement invariance on IQ. Although, I did find a paper that was basically asking "seriously, why haven't we tested cross-cultural measurement invariance on IQ tests?" And really, I have to agree with that question.
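
      To make the predictive-validity point above concrete: two quite different measures can correlate with an outcome about equally well, so a decent validity coefficient on its own doesn't tell you what a test is capturing. A small simulation sketch -- every weight and noise level here is made up purely for illustration:

          import random
          import statistics

          def corr(x, y):
              # Pearson correlation.
              mx, my = statistics.mean(x), statistics.mean(y)
              sx, sy = statistics.stdev(x), statistics.stdev(y)
              n = len(x)
              return sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((n - 1) * sx * sy)

          random.seed(1)
          n = 5000
          ability = [random.gauss(0, 1) for _ in range(n)]     # the construct we care about
          test_skill = [random.gauss(0, 1) for _ in range(n)]  # comfort with testing formats

          # Outcome driven mostly by ability, a bit by test-taking skill.
          outcome = [0.6 * a + 0.2 * t + random.gauss(0, 0.7)
                     for a, t in zip(ability, test_skill)]

          # Measure X: a relatively pure ability measure.
          measure_x = [a + random.gauss(0, 0.8) for a in ability]
          # Measure Y: heavily contaminated with test-taking skill.
          measure_y = [0.55 * a + 0.9 * t + random.gauss(0, 0.35)
                       for a, t in zip(ability, test_skill)]

          # Both validity coefficients land near .5, yet X and Y capture very
          # different mixtures of ability and testing skill.
          print(round(corr(measure_x, outcome), 2), round(corr(measure_y, outcome), 2))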

    3. Hi Dan!

      #1. Depends on what you mean by "inside brains." As far as I know, there's no conclusive evidence as far as neuroimaging is concerned -- maybe a few more wrinkles in this sulcus, a slightly bigger gyrus, and other quasi-meaningful correlations. As far as I know, there aren't any classification algorithms that have been able to predict who has a high IQ vs. a low one. My money is on us not getting that far, because the way that people solve problems is far from universal and lends itself to an individual-differences approach.

      #2. As Dr. J has said, the jury is still largely out. My guess, though, is: yes, absolutely. This debate is highlighted in two good books: Murray and Herrnstein's THE BELL CURVE (1994) and Gould's THE MISMEASURE OF MAN (1996). As you can imagine, it's a tricky and messy situation, but the gist is that even in the U.S. certain cultures are more likely to do well on IQ tests than others. And because there's plenty of data showing that people from individualist and collectivist cultures differ even at a basic perceptual level, it wouldn't surprise me if these societal differences could bias responses even to basic visual-pattern questions on tests.

      #3. This is another big debate in psychology. Some camps think that we can view intelligence as one latent variable (Spearman's g); others say that it neatly divides into "crystallized" and "fluid" intelligence. Others think that intelligence is more appropriately divided into near-infinite sub-categories (e.g., a music intelligence, a math intelligence, a bodily-kinesthetic intelligence, and so on).

      Now, I am far from an expert in any of this stuff. It's really not my area, so take the above with a grain (or 7) of salt.

  2. The heritability residual is often called "non-shared environment", but we really have no idea what it is. It could be entirely random noise.
    http://www.wiringthebrain.com/2009/06/nature-nurture-and-noise.html
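
    For readers wondering where decompositions like "about 50% genetics" and the non-shared environment residual come from, the classic back-of-the-envelope version is Falconer's twin-study formula. A minimal sketch -- the twin correlations below are purely illustrative, not estimates from any real study:

        def falconer_ace(r_mz, r_dz):
            # Falconer's rough decomposition from identical (MZ) and
            # fraternal (DZ) twin correlations on a trait.
            a2 = 2 * (r_mz - r_dz)  # additive genetic variance ("heritability")
            c2 = 2 * r_dz - r_mz    # shared (family) environment
            e2 = 1 - r_mz           # non-shared environment, measurement error, noise
            return a2, c2, e2

        # Illustrative correlations only.
        print(falconer_ace(r_mz=0.75, r_dz=0.50))  # -> (0.5, 0.25, 0.25)

    The arithmetic makes the linked post's point visible: the E term is just whatever is left over, so "non-shared environment" bundles genuine environmental differences together with plain noise.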

    I confess I'm not a psychometrician and don't know about Hannon-Daneman or Siena. What makes them more culturally neutral than Raven's/Cattell? At any rate, there have been many attempts to create new tests that don't rely on 'g' or have some other undesirable property, and, like those, I predict HD & Siena will give results similar to the previous unsatisfactory tests.

    Replies
    1. Great point on the non-shared environment issue!

      Re: HD & Siena. Both are in the realm of fluid rather than crystallized intelligence, and both help remove cultural bias (at least within US samples; they haven't been tried internationally yet) by using nonsense words in the reasoning problems. By doing so, they ensure that everyone coming into the test has the same familiarity with the nouns/concepts -- none! Both have also been construct validated (HD published; Siena prepping the manuscript) against a variety of other IQ tests -- including Raven's, crystallized, fluid, and spatial intelligence tests -- and both are definitely measuring fluid intelligence. Siena also mitigates adverse impact. Definitely not unsatisfactory in the sense of construct validity, according to research to date.

      What "undesirable properties" besides a lack of construct validity are you thinking of?

  3. 1) Are we overstating the predictive power of IQ tests, since the groups that do well on them generally have the potential to do well due to their life circumstances?

    2) I agree with your conclusion. Certain economists should stay within their realm of expertise.

  4. Also, I should add: I am SO not against IQ tests. I want to enthusiastically reiterate that there has been a LOT of great work in this area. But there are messy problems left to deal with, chief amongst them (in my mind) being generalizability across cultures and contamination with testing ability. I only grumble when I feel like someone says "well, we're doing good enough" -- come on now, science is never about good enough! It's always about doing better :)

  5. It's always amazing to see how the establishment reacts when it catches one of its priests in the act of crimethink.

  6. Speaking of psychometrics, IQ tests, and whatnot... I wonder what Daniel Kuehn's friend, Dr. J, has to say about these book reviews on Amazon.com:

    http://www.amazon.com/review/R27FREXO4EQ1S3

    http://www.amazon.com/review/R27C5ZFG64NECB

    http://www.amazon.com/review/R21THYUNZ9HE0T

    Replies
    1. Interesting, thank you for sharing! The reviewer obviously doesn't like multiple choice tests! I do agree that IQ tests are contaminated with "testing ability". Part of this is of course the MC format of most tests. Part is also that tests are administered in a read/write format, as some research suggests that black-white mean differences, for example, lessen when questions are read aloud to applicants. Unfortunately, the question becomes: what alternative formats are practically feasible? While flawed, MC tests are rather easy to score/process for a large number of test-takers. Given our enormous reliance on different forms of IQ tests for evaluation and selection into colleges/work/etc., dropping the MC format could pose a major challenge! Personally, when I have to use tests in my classes, I always use a combination of different test forms (MC, fill-in-the-blank, short response/essays) and check for things like adverse impact. But even with fairly small classes (40 or fewer students per class), grading these fairly can become cumbersome. Of course, I am 100% behind a re-envisioning of evaluation and selection processes, but there is (unsurprisingly) a lot of inertia in this area.

      I do not know much about how lead exposure influences IQ, so I can't comment much on that, but it makes perfect sense that environmental contaminants would influence development adversely.

      Having not read the books this person is reviewing (mostly I'm an article reader), I can't comment on them specifically. But, I do have to say that one thing we tend to do (I'm guilty of it as well) is clump all IQ tests together as though they all measure the same thing. IQ tests actually spread across a pretty broad spectrum, so I'm not sure that all IQ tests -- or even all MC IQ tests -- fail to measure anything higher order. But the reviewer's point about the contamination of the IQ construct with the testing format still stands.
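
      For reference, the standard screening check for the adverse impact mentioned above is the "four-fifths rule": a selection procedure gets flagged when any group's selection rate falls below 80% of the highest group's rate. A minimal sketch with made-up numbers:

          def adverse_impact_check(selected, applicants):
              # Four-fifths (80%) rule: flag groups whose selection rate is
              # below 0.8 times the highest group's selection rate.
              rates = {g: selected[g] / applicants[g] for g in applicants}
              top = max(rates.values())
              return {g: (r, r / top, r / top < 0.8) for g, r in rates.items()}

          # Hypothetical applicant pool screened with a cognitive test cutoff.
          applicants = {"group_a": 200, "group_b": 150}
          selected = {"group_a": 80, "group_b": 36}

          results = adverse_impact_check(selected, applicants)
          for group, (rate, ratio, flagged) in results.items():
              print(group, round(rate, 2), round(ratio, 2), "FLAG" if flagged else "ok")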

    2. If you don't mind me asking, Dr. J, what are your research interests and specialties? (I'm assuming, rightly or wrongly, that you are involved in academia in some way like Mr. D. P. Kuehn is.)

      Also, what is your view on the concept of "multiple intelligences"? I find it to be an interesting idea, even though I don't know much about it, but the issue of practicality is another matter. The reviewer actually has published an article on this matter in a scholarly journal many years ago.

      http://www.tandfonline.com/doi/abs/10.1080/00207239308710860

      (Just for the record, yes, I have corresponded with the reviewer of the books in the three links I provided to you before. Part of the reason he wrote this review, I think, is his educational background - he was essentially a triple major in undergrad, finishing the requirements for Mathematics, Philosophy, and Economics - and his experience in scholarly publication.)

      But what do you make of his argument that intelligence cannot be classified as "general intelligence" and placed into the form of a statistical point estimate? I haven't reviewed the literature on this matter enough, so I can't comment or take sides, but his argument seems interesting.


      Also, did you have another major besides Psychology in undergrad, or no? (I've noticed that, for some reason, even after the advent of Daniel Kahneman and Amos Tversky, psychologists with strong training in mathematics are uncommon. Please don't take offence at this, and if I've painted you with the same brush when it doesn't apply, I apologise.)


      Lastly, would you have any interest in decision theory (especially under ambiguity) and experimental testing of decision-making?

  7. My background is in psychology and statistics/measurement. I have a PhD in organizational psychology and a graduate certificate in stats and measurement (plus lots of statistical consulting and teaching). I largely conduct research on personal/professional development and diversity (including bias). So my experience with intelligence research is mostly in adverse impact in selection decisions; pretty much just where intelligence runs up against race/gender and job selection.

    I'm a proponent of multiple intelligences, but that doesn't preclude the possibility that many of those intelligences are related (in the sense of a higher-order factor structure, for example). I find it a little overly simplistic to assume there is only one way that people can be intelligent. After all, I've met a large number of people who might not be considered 'intelligent' in the sense of crystallized intelligence but are nonetheless brilliant in other ways. I think, because at least some models of multiple intelligences have higher-order factor structures, the "g" factor ends up being the easy way to summarize the information. Of course, I don't specialize in IQ, so I'm willing to be wrong!

    You are correct -- many psychologists do not have strong quant training these days, which is a shame, because psychologists pioneered a ton of statistical tests early on -- but not much in the past 20 or so years. That's something I would personally love to help change, but that might be setting an inordinately high bar.

    I certainly like decision making -- especially in terms of bias (I'm working on a theoretical paper which will hopefully provide some clarity to the bias literature) -- although I am not an expert in decision theory in particular. That being said, I love learning new things and expanding my skillset/framework!

  8. Well Dr. J, I realised too late that there was one part of my post I should have edited before sending my comment:

    "The reviewer actually has published an article on this matter in a scholarly journal many years ago.

    http://www.tandfonline.com/doi/abs/10.1080/00207239308710860
    "

    That should be:

    "On the issue of lead contamination affecting IQ levels, the reviewer has actually published an article...

    That stated, regarding decision theory... just for everyone's reference, the contributions of Kahneman and Tversky can be considered part of decision theory as a whole. Decision theory is a multidisciplinary/interdisciplinary field - there is no single dominant scholarly journal, nor is there a single dominant academic department for it. Contributions to decision theory have been made - apart from people in psychology and economics - by philosophers, mathematicians, and statisticians. One can find contributions to what would be considered "decision theory" in outlets as different as Erkenntnis, The Quarterly Journal of Economics, the Journal of the American Statistical Association, the Journal of Experimental Psychology, and Discrete Applied Mathematics.

    The reviewer told me recently actually that he isn't opposed to the idea of general intelligence as a concept - he simply believes that representing it as a point estimate, though it may make things more tractable, also makes things arbitrary. He told me that he believes that IQ tests might be better off using lower and upper bounds to measure general intelligence. What do you make of what he recommends?
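
    For what it's worth, one standard psychometric route to that kind of lower and upper bound is the standard error of measurement, which converts a test's reliability into a confidence band around the observed score. A minimal sketch -- the reliability figure here is an assumption for illustration, not a property of any particular test:

        import math

        def iq_band(observed, reliability, sd=15, z=1.96):
            # Band around an observed IQ score using the standard error of
            # measurement: SEM = SD * sqrt(1 - reliability).
            sem = sd * math.sqrt(1 - reliability)
            return observed - z * sem, observed + z * sem

        low, high = iq_band(observed=112, reliability=0.90)
        print(round(low), round(high))  # roughly 103 and 121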

    BTW, he also has worked with an academic psychologist before: a fellow by the name of Howard B. Lee, an Associate Professor of Psychology at California State University at Northridge.

    http://www.csun.edu/csbs/departments/psychology/faculty/lee.html

    Regarding the paper you're working on...Dr. J, are you working on a survey of the literature on decision-making and bias, or is it going to be a totally original, theoretical contribution?

    If it's going to be an original contribution, then it might be best if you read the seminal contributions of three mathematicians to decision theory - Frank P. Ramsey, Bruno de Finetti, and Leonard J. Savage.

    http://www.brunodefinetti.it/Opere/probabilismo.pdf

    http://econpapers.repec.org/bookchap/hayhetcha/ramsey1926.htm

    http://books.google.com/books?id=zSv6dBWneMEC

    Then, read Maurice Allais's 1953 critique and Daniel Ellsberg's seminal 1961 article in the QJE, please.

    http://www.jstor.org/stable/1907921

    http://www.jstor.org/stable/1884324

    Then you should be able to work from there. That stated, Dr. J, if you wish to take this correspondence somewhere more private, feel free to ask your old friend, D. P. Kuehn, for my e-mail address. I feel uncomfortable giving it out here, but I wouldn't be opposed to keeping your identity secret if you decide to engage in correspondence with me via e-mail.


All anonymous comments will be deleted. Consistent pseudonyms are fine.