Intelligence Really Does Predict Job Performance: A Long-Needed Reply to Richardson and Norgate

Submission status
Reviewing

Submission Editor
Noah Carl

Authors
Peter Zimmer
Emil O. W. Kirkegaard

Title
Intelligence Really Does Predict Job Performance: A Long-Needed Reply to Richardson and Norgate

Abstract

One commonly studied aspect of the importance of IQ is its validity in predicting job performance. Previous research on this subject has yielded impressive results, regularly finding operational validities for general mental ability exceeding 0.50. In 2015, Ken Richardson and Sarah Norgate criticized the research on the relationship between IQ and job performance, arguing that its true validity is virtually nil. Their assessment of this topic has attracted little criticism since its publication, despite the crux of their arguments being undermined by readily available empirical evidence and thirty years of replications to the contrary. This article replies to their main criticisms, including the construct validity of IQ tests and supervisory ratings, the validity of the Hunter-Schmidt meta-analytic methods, and possible psychological confounders.

Keywords
intelligence, IQ, cognitive ability, g-factor, job performance, meta-analysis, general mental ability, predictive validity, industrial-organizational psychology


Reviewers ( 0 / 0 / 2 )
Reviewer 2: Accept
Reviewer 3: Accept

Tue 15 Feb 2022 03:24

Reviewer

This manuscript is a very thorough and complete rebuttal to the claims of the 2015 Richardson & Norgate paper. Richardson & Norgate’s main claim is that the well-established correlation between IQ and job performance is rather dubious and should be treated with far more caution than is currently accepted among scholars. Their contention rests on four main points:

 

  1. The construct validity of IQ tests is not solid.
  2. Job performance is measured by supervisor ratings, which are inaccurate and prone to various prejudices and biases.
  3. Meta-analysis is not a panacea, and its conclusions cannot be drawn as firmly as is widely accepted.
  4. Non-cognitive factors, such as motivation, anxiety, and emotional intelligence, are supposed to play more important roles.

 

In other words, Richardson and Norgate’s strategy is to cast doubt on the concept of IQ, the objectivity of job-performance ratings, and the reliability of meta-analysis, and finally to suggest other factors affecting job performance.

 

While this reviewer is not an expert on the IQ–job performance literature or the methodology of meta-analysis, none of Richardson & Norgate’s arguments sounds persuasive enough to cast doubt on the well-established results from Hunter and Hunter (1984) and Hunter (1986), among others. The authors carefully address the fallacies in the above reasoning, with much wider coverage of the relevant literature and more rigorous logic.
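
For readers unfamiliar with the corrections under dispute, the following is a minimal editorial sketch (not the authors' code; the input values are illustrative assumptions in the spirit of Hunter & Schmidt's worked examples) of how an observed predictor–criterion correlation is converted into an operational validity by correcting for criterion unreliability and direct range restriction:

    import math

    def operational_validity(r_obs, r_yy, u):
        """Hunter-Schmidt-style correction of an observed validity.

        r_obs: observed predictor-criterion correlation in the restricted (incumbent) sample
        r_yy:  criterion reliability (e.g., interrater reliability of supervisor ratings)
        u:     ratio of restricted to unrestricted predictor SD (u < 1)
        Assumes direct (Thorndike Case II) range restriction on the predictor.
        """
        r_dis = r_obs / math.sqrt(r_yy)   # disattenuate for criterion unreliability
        U = 1.0 / u                       # inverse range-restriction ratio
        # Thorndike Case II correction for direct range restriction
        return U * r_dis / math.sqrt(1 + (U**2 - 1) * r_dis**2)

    # Illustrative inputs only: observed r = .25, supervisor-rating
    # reliability ~ .52, incumbent-to-applicant SD ratio ~ .67
    print(round(operational_validity(0.25, 0.52, 0.67), 2))  # ~0.48

The algebra itself is not what is in dispute; the debate between Richardson & Norgate and the mainstream literature is over whether reliability and range-restriction inputs of this size are justified.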

 

In particular, this reviewer personally found that the suggestion of emotional intelligence as a relevant factor in job performance instead of IQ reads like the work of an amateur or a journalist, since EQ has long been shown in academia to be a mere mixture index derived from IQ and some personality traits.

 

The manuscript is well written at this point. Let me just offer two comments and two further suggestions of somewhat relevant literature on this topic.

 

1. P.20. About the description “. However, a more telling reason why Richardson & Norgate are incorrect about this is that conscientiousness, the willingness to do tasks thoroughly, actually has greater validity in lower complexity jobs, as shown by Le et al. (2010) and further discussed by Wilmot & Ones (2019)”.

 

Le et al. (2010) actually confirmed their hypothesis that “: The level of Conscientiousness at which its relationship with task performance disappears (i.e., the inflection point) is determined by job complexity such that the inflection point for more complex jobs occurs at higher levels of Conscientiousness than the inflection point for less complex jobs”. This result is shown in Figure 1 of their paper, and apparently conscientiousness plays a more important role in more complex jobs. Please double-check the paper.

 

2. Wilmot & Ones’ (2019) paper in PNAS is missing from the reference section.

 

3. In the economics literature, IQ has been shown to be more important for income than years of education. Altonji & Pierret (2001) estimate that a 1 SD difference in years of education yields a 12% return on annual earnings when a person first gets a job. However, this return declines steadily to zero, and IQ instead becomes the more important factor, showing a 13% return after 13 years of work experience.

 

4. It is widely known that 1 IQ point leads to a 1–3% increase in income (e.g., 2.5% in Dalliard, 2016; 3.1% in Jencks, 1972; 1.4% in Zax & Rees, 2002). Also, IQ and income are reported to correlate to some degree (e.g., 0.37 in Dalliard, 2016; 0.39 in Nyborg and Jensen, 2001; 0.297 in Zagorsky, 2007). There are certainly numerous papers of this kind. Given that these data are based not on a specific occupation but on all kinds of occupations in the real world, it is highly unlikely that IQ is not substantially correlated with job performance.
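
As a rough editorial consistency check (my arithmetic, not a figure from the cited papers): at a 2% gain per IQ point, a one-SD (15-point) advantage compounds to about 1.02^15 ≈ 1.35, i.e., roughly 35% higher income per SD of IQ, which is the right order of magnitude to sit alongside the reported IQ–income correlations of roughly 0.3–0.4.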

 

Altonji, J. G., & Pierret, C. R. (2001). Employer learning and statistical discrimination. Quarterly Journal of Economics, 116(1), 313–350.

Dalliard. (2016, January 31). IQ and permanent income: Sizing up the “IQ paradox”. Human Varieties. https://humanvarieties.org/2016/01/31/iq-and-permanent-income-sizing-up-the-iq-paradox/

Jencks, C. (1972). Inequality. Penguin.

Nyborg, H., & Jensen, A. R. (2001). Occupation and income related to psychometric g. Intelligence, 29(1), 45–55.

Zagorsky, J. L. (2007). Do you have to be smart to be rich? The impact of IQ on wealth, income and financial distress. Intelligence, 35(5), 489–501. https://doi.org/10.1016/j.intell.2007.02.003

Zax, J. S., & Rees, D. I. (2002). IQ, academic performance, environment, and earnings. The Review of Economics and Statistics, 84(4), 600–616.

 

 

Author | Admin
Thanks for the review. I'm sorry this revision took so long; I had slightly forgotten about this paper.

1. P.20. About the description “. However, a more telling reason why Richardson & Norgate are incorrect about this is that conscientiousness, the willingness to do tasks thoroughly, actually has greater validity in lower complexity jobs, as shown by Le et al. (2010) and further discussed by Wilmot & Ones (2019)”.

 

Le et al. (2010) actually confirmed their hypothesis that “: The level of Conscientiousness at which its relationship with task performance disappears (i.e., the inflection point) is determined by job complexity such that the inflection point for more complex jobs occurs at higher levels of Conscientiousness than the inflection point for less complex jobs”. This result is shown in Figure 1 of their paper, and apparently conscientiousness plays a more important role in more complex jobs. Please double-check the paper.

 

Good point. I am not sure about the cause of this error. Their Figure 1 supports the opposite of what we wrote. However, the later and more thorough 2019 paper fits; in its abstract it says: "Finally, we discover that performance effects of C are weaker in high-complexity versus low- to moderate-complexity occupations." We have removed the mention of the Le et al. (2010) study.

 

2. Wilmot & Ones’ (2019) paper in PNAS is missing from the reference section.

 

We have added it. I think the manuscript was written without the use of a reference manager, so this was an oversight.

 

3. In the economics literature, IQ has been shown to be more important for income than years of education. Altonji & Pierret (2001) estimate that a 1 SD difference in years of education yields a 12% return on annual earnings when a person first gets a job. However, this return declines steadily to zero, and IQ instead becomes the more important factor, showing a 13% return after 13 years of work experience.

 

This sounds like it is consistent with a signaling value of education, in line with Bryan Caplan's review in his book The Case Against Education: Why the Education System Is a Waste of Time and Money. In fact, he cites this study quite a number of times. We added a paragraph about this.

 

4. It is widely known that 1 IQ point leads to a 1–3% increase in income (e.g., 2.5% in Dalliard, 2016; 3.1% in Jencks, 1972; 1.4% in Zax & Rees, 2002). Also, IQ and income are reported to correlate to some degree (e.g., 0.37 in Dalliard, 2016; 0.39 in Nyborg and Jensen, 2001; 0.297 in Zagorsky, 2007). There are certainly numerous papers of this kind. Given that these data are based not on a specific occupation but on all kinds of occupations in the real world, it is highly unlikely that IQ is not substantially correlated with job performance.

 

We added a bit about this, citing the new Marks 2022 study, which is on point. The full paragraph now reads:

Richardson & Norgate argue the relationship is partially due to the fact that the mental processes required for intelligence tests are taught in the modern curriculum. However, longitudinal studies by Watkins, Lei, & Canivez (2007) and Watkins & Styck (2017) both found that a model where g causes educational achievement fit best compared to the reverse. Similarly, while education has been shown to raise IQ scores (Ritchie & Tucker-Drob, 2018), Ritchie, Bates, and Deary (2015) found that the effect of years of education on IQ was only on specific skills rather than on g (i.e., schooling improved some broad abilities, but not general intelligence). These studies imply that the relationship between IQ and educational achievement is driven by individual differences in general mental ability. Another important criticism of Richardson & Norgate’s theory is that g and education have discriminant validity. Lubinski & Humphreys (1997) and Lubinski (2009), for example, showed that g is a better predictor of health outcomes than education (also see Gottfredson [2004] for a critical review of this topic). Gensowski, Heckman, & Savelyev (2011) found IQ predicted income beyond its relationship with education. If intelligence were just a proxy for social class or educational background, it is a mystery why employers would keep paying smarter people more money if they were not also more productive. Indeed, numerous studies find correlations between income and intelligence, including when controlling for parental social status (Marks, 2022). Also in the economics literature, Altonji & Pierret (2001) found that education predicted income well at the beginning of a person’s career, but as people’s careers progressed, intelligence became the better predictor. This finding is in line with a model where employers use education as a proxy for intelligence and other traits, but over time they learn an employee’s characteristics, so they no longer need the education proxy as much. Much more detail about signaling vs. human capital models of the value of education can be found in Bryan Caplan’s book-length treatment (Caplan, 2018).

 

Bot

Authors have updated the submission to version #3

Reviewer

In their manuscript “Intelligence really does predict job performance: A long-needed reply to Richardson and Norgate”, the authors subject the publication “Does IQ really predict job performance?” by Richardson and Norgate (2015) to a strict examination. Richardson and Norgate argued that the association between IQ test results and job performance must be viewed skeptically, due to problems in the construct and predictive validity of IQ tests, supervisory ratings, and meta-analytic procedures. Here, the authors go through the individual arguments step by step and show that these are mostly insufficiently justified, or even refuted.

Overall, the manuscript is well written both in terms of content and form. I have to admit that I looked at the form but did not carry out a detailed review here, because I am not familiar with the journal's guidelines for manuscript design. However, I found inconsistencies in citation style, e.g., “Gignac, Vernon, Wickett, 2003” but “Gensowski, Heckman, & Savelyev (2011)”, and unusual formatting of statistical outputs, e.g., r not written in italics.

The lines of argument are coherent and backed up with sufficient sources. However, I would like to add one comment to the chapter “Predictive Validity of IQ Tests”. 

The authors mention that Richardson and Norgate (2015) argued there is an increase in the correlations between IQ and educational achievement with age, but gave “no reason why this should support the point that the correlation between IQ and educational achievement is built into the tests.” If I understood correctly, the argument of Richardson and Norgate (2015) is that at higher school levels, children become more trained (in their words, “learned”) in the skills and knowledge required by IQ tests, as IQ tests were assembled around the school curriculum. Interestingly, Richardson and Norgate (2015) merely cited Sternberg et al. (2001), who argued that “(t)his increase may be due to the greater overlap in content between the two kinds of tests at higher levels, which in turn may reflect the greater overlap in the skills measured by the two kinds of tests at higher age” and reported that correlations also increased when composite measures of school achievement were analyzed. Furthermore, Sternberg et al. (2001) in turn merely cited McGrew and Knopik (1993), who did not examine the relationship between IQ tests and school achievement, but rather “writing achievement”. Even if writing achievement is used as a proxy for school achievement, the multiple regression analysis done by McGrew and Knopik (1993) suggests that the age-related increase is limited to the “Comprehension Knowledge cluster” (vocabulary knowledge) and did not appear in the “Fluid Reasoning cluster”, where a slight decrease in correlations was even observed (perhaps explained by Cattell’s investment theory). Additionally, the regression coefficients for the “Comprehension Knowledge cluster” continue to increase after school age. Thus, in my opinion, the literature directly and indirectly cited by Richardson and Norgate (2015) contradicts their learning hypothesis.

The paper by Richardson and Norgate (2015) is unfortunately very widely read and is also cited by university students. At the same time, the prediction of job performance by IQ is one of the fundamental arguments for IQ testing. So I see this submission as important work, and I am grateful to the authors for taking up this topic.

The following paper may be of interest to the authors: “When reasoning abilities required by a job exceeded worker abilities, workers reported more health conditions and were more likely to be retired versus working. Fewer health conditions were reported when reasoning abilities exceeded reasoning job demands.” (Beier et al., 2020).

References

Beier, M. E., Torres, W. J., Fisher, G. G., & Wallace, L. E. (2020). Age and job fit: The relationship between demands–ability fit and retirement and health. Journal of Occupational Health Psychology, 25(4), 227–243. https://doi.org/10.1037/ocp0000164
McGrew, K. S., & Knopik, S. N. (1993). The relationship between the WJ-R Gf-Gc cognitive clusters and writing achievement across the life-span. School Psychology Review, 22(4), 687–695. https://doi.org/10.1080/02796015.1993.12085684 
Richardson, K., & Norgate, S. H. (2015). Does IQ really predict job performance? Applied Developmental Science, 19(3), 153–169. https://doi.org/10.1080/10888691.2014.983635
Sternberg, R. J., Grigorenko, E. L., & Bundy, D. A. (2001).  The predictive value of IQ. Merrill-Palmer Quarterly, 47(1), 1–41. https://www.jstor.org/stable/23093686
 

Author | Admin
Replying to Reviewer 3

In their manuscript “Intelligence really does predict job performance: A long-needed reply to Richardson and Norgate”, the authors subject the publication “Does IQ really predict job performance?” by Richardson and Norgate (2015) to a strict examination. Richardson and Norgate argued that the association between IQ test results and job performance must be viewed skeptically, due to problems in the construct and predictive validity of IQ tests, supervisory ratings, and meta-analytic procedures. Here, the authors go through the individual arguments step by step and show that these are mostly insufficiently justified, or even refuted.

Overall, the manuscript is well written both in terms of content and form. I have to admit that I looked at the form but did not carry out a detailed review here, because I am not familiar with the journal's guidelines for manuscript design. However, I found inconsistencies in citation style, e.g., “Gignac, Vernon, Wickett, 2003” but “Gensowski, Heckman, & Savelyev (2011)”, and unusual formatting of statistical outputs, e.g., r not written in italics.

The typesetting process will take care of such things. It is not necessary for reviewers to spend time on formatting.

The lines of argument are coherent and backed up with sufficient sources. However, I would like to add one comment to the chapter “Predictive Validity of IQ Tests”. 

The authors mention that Richardson and Norgate (2015) argued there is an increase in the correlations between IQ and educational achievement with age, but gave “no reason why this should support the point that the correlation between IQ and educational achievement is built into the tests.” If I understood correctly, the argument of Richardson and Norgate (2015) is that at higher school levels, children become more trained (in their words, “learned”) in the skills and knowledge required by IQ tests, as IQ tests were assembled around the school curriculum. Interestingly, Richardson and Norgate (2015) merely cited Sternberg et al. (2001), who argued that “(t)his increase may be due to the greater overlap in content between the two kinds of tests at higher levels, which in turn may reflect the greater overlap in the skills measured by the two kinds of tests at higher age” and reported that correlations also increased when composite measures of school achievement were analyzed. Furthermore, Sternberg et al. (2001) in turn merely cited McGrew and Knopik (1993), who did not examine the relationship between IQ tests and school achievement, but rather “writing achievement”. Even if writing achievement is used as a proxy for school achievement, the multiple regression analysis done by McGrew and Knopik (1993) suggests that the age-related increase is limited to the “Comprehension Knowledge cluster” (vocabulary knowledge) and did not appear in the “Fluid Reasoning cluster”, where a slight decrease in correlations was even observed (perhaps explained by Cattell’s investment theory). Additionally, the regression coefficients for the “Comprehension Knowledge cluster” continue to increase after school age. Thus, in my opinion, the literature directly and indirectly cited by Richardson and Norgate (2015) contradicts their learning hypothesis.

Generally speaking, correlations between intelligence tests and grades or scholastic tests decrease at higher educational levels, but this is due to selection factors (which reduce the variance). Their claim is entirely false because there are many intelligence tests that have no school-related content; indeed, schools do not teach children how to solve Raven's matrices at all (not any school I went to), yet Raven's correlates strongly with school learning. Furthermore, many early childhood tests have no school content (e.g., Piagetian tests), but they still measure the same thing as the ordinary batteries that include crystallized tests (which will have some school-related material, e.g., general knowledge).

 

I have added this paragraph:

    Richardson & Norgate argue the relationship is partially due to the fact that the mental processes required for intelligence tests are taught in the modern curriculum. This is not actually true for many early childhood tests, such as Piagetian tests. Yet factor analysis of such tests shows that they measure the same thing as ordinary intelligence tests (Lasker, 2022). In discussing age-related changes in the relationship between intelligence test scores and scholastic tests, Richardson & Norgate argue that the increase seen with age fits with their model. That may be true in the abstract, but if one checks the citation, it goes to Sternberg et al. (2001) and then to McGrew and Knopik (1993) for the actual results. However, interpretation of this study is difficult because the authors adopted a correlated factors model, i.e., no general factor, so it is difficult to say whether the intelligence score increased its correlation with their achievement tests (two tests of writing) with age. The multiple R (square root of the more common R²) increased with age, but is this because g becomes more correlated with writing ability, or because non-g group factors increase their importance at later ages? Analysis of their reported regression coefficients (in their Table 3) suggests that it isn’t g’s correlation with the writing tests that is increasing: it is mainly the crystallized ability (gc) that increases its relation to the writing tests over time. The other ability factors, such as long-term memory (glr) and fluid ability (gf), show no increases. One simple interpretation, then, is that as children accumulate school knowledge with age, this leads to an increasing overlap between the gc factor and the writing tests. As such, there is no need to invoke the interpretation proposed by Richardson & Norgate. It would be informative to reanalyze this study using modern structural equation methods.
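
As a note on the decomposition relied on above (standard regression algebra, not a result from McGrew and Knopik): in a standardized multiple regression, R² = Σ βᵢrᵢ, where the βᵢ are the standardized coefficients and the rᵢ the zero-order predictor–criterion correlations, so each product βᵢrᵢ can be read as one factor's contribution to R². It is this decomposition that licenses attributing the age-related rise in multiple R to the gc term specifically, rather than to a general rise across all ability factors.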

 

R3:

The paper by Richardson and Norgate (2015) is unfortunately very widely read and is also cited by university students. At the same time, the prediction of job performance by IQ is one of the fundamental arguments for IQ testing. So I see this submission as important work, and I am grateful to the authors for taking up this topic.

The following paper may be of interest to the authors: “When reasoning abilities required by a job exceeded worker abilities, workers reported more health conditions and were more likely to be retired versus working. Fewer health conditions were reported when reasoning abilities exceeded reasoning job demands.” (Beier et al., 2020).

 

I skimmed Beier et al. It is a very complicated methods paper. I don't see why it would be particularly relevant here.

Bot

Authors have updated the submission to version #4

Reviewer

Replying to Emil O. W. Kirkegaard

 

You need a period after the phrase "if they were not also more productive."

Everything I have suggested has been well-addressed, and I am happy to say the manuscript is ready to be published. I am delighted to be part of this publication process. Thank you. 

 

 

 

 


Bot

Authors have updated the submission to version #5

Author | Admin

Added the missing Figure 1. Not sure why it went missing to begin with. It's at the end.

Reviewer

I agree with the authors' answers and the changes made. I have no other reservations and I think that the paper can be accepted as it is now.

Bot

The submission was accepted for publication.

Bot

Authors have updated the submission to version #7

Bot

Authors have updated the submission to version #8

Bot

Authors have updated the submission to version #9

Bot

Authors have updated the submission to version #10

Bot

Authors have updated the submission to version #11