
[ODP] The personal Jensen coefficient does not predict grades beyond its association with g
Admin
Journal:
Open Differential Psychology

Title:
The personal Jensen coefficient does not predict grades beyond its association with g

Authors:
Emil O. W. Kirkegaard

Abstract:
General intelligence (g) is known to predict grades at all educational levels. A Jensen coefficient is the correlation of subtests' g-loading with a vector of interest. I hypothesized that the personal Jensen coefficient from the subjects' subtest scores might predict grade average beyond g. I used an open dataset to test this. The results showed that the personal Jensen coefficient did not seem to have predictive power beyond g (partial correlation = -.02).

Keywords:
intelligence, Jensen effect, method of correlated vectors, g-loading, grade point average, educational achievement

PDF.
All project files.
Can't you use the difference between the g score and the IQ score (g - IQ)? I guess it's very similar to the MCV but may produce slightly different results.
The correlation between IQ - g and GPA should be negative.
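
Roughly what I have in mind, as a sketch (object names are borrowed from the project code discussed below; the exact scaling is up to you):

#sketch of the suggested metric: gap between the g-weighted score and the plain average
g.score = as.numeric(DF.paf$scores) #g-weighted factor scores per person
unit.mean = rowMeans(DF.complete.Z[, 1:7]) #unit-weighted (simple) average of the 7 subtests
g.adv = as.numeric(scale(g.score)) - as.numeric(scale(unit.mean)) #standardize both before differencing
cor(g.adv, DF.complete.Z[, 8]) #association with GPA (8th column)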
Admin
If my memory is correct, "unit-weighted average" is just a simple average such as (a+b+c+d)/4. In general, I hate reading these terms. I know a lot of examples where authors use different names to say the same thing. It's very confusing. By the same token, I'm not sure I would recommend using the term "Jensen coefficient". I don't even like "Jensen effect". As I said, adding more and more new terms can be very exhausting for readers.

However, it's correlation with GPA is weak.


I think it should be "its correlation", no?

Your syntax may have a problem, because at some point it's said that the omega() function needs the GPArotation package, which you didn't have in your list.

More important: can you explain this code to me? I don't understand it (but I see that DF.complete.Z is the "subset" of the data).

Jensen.coef = as.vector(rep(0, nrow(DF.complete.Z)))

for (case in 1:nrow(DF.complete.Z)){
  cor = cor(as.numeric(DF.paf$loadings), as.numeric(DF.complete.Z[case,1:7]))
  Jensen.coef[case] = cor
}
Admin
MH,

If my memory is correct, "unit-weighted average" is just a simple average such as (a+b+c+d)/4. In general, I hate reading these terms. I know a lot of examples where authors use different names to say the same thing. It's very confusing. By the same token, I'm not sure I would recommend using the term "Jensen coefficient". I don't even like "Jensen effect". As I said, adding more and more new terms can be very exhausting for readers.


Unit-weighted average just means that all the items have the same weight, i.e. a normal average. This is just to make it clear that one can weight averages in non-unit fashion as is done with factor scores.

It's linguistically inconvenient to use either "(anti)Jensen effect" or the fully spelled out "MCV correlation with g-loadings". "Jensen coef." is pretty short and avoids the problems of the first.

I think it should be "its correlation", no ?


You were looking at an older revision. The newest one already fixes that. :)

Your syntax may have a problem, because at some point it's said that the omega() function needs the GPArotation package, which you didn't have in your list.


The omega() function loads the required libraries itself. One does not need to load them beforehand. They need to be installed, though.
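
For example, a minimal sketch (assuming the 7 subtests are in the first 7 columns):

#GPArotation only needs to be installed; omega() pulls it in on demand
#install.packages(c("psych", "GPArotation"))
library(psych)
om = omega(DF.complete.Z[, 1:7]) #hierarchical omega / Schmid-Leiman on the 7 subtests
om$omega_h #general factor saturation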

More important: can you explain this code to me? I don't understand it (but I see that DF.complete.Z is the "subset" of the data).


Sure, here is a more annotated version.

#Personal Jensen coefficient
Jensen.coef = as.vector(rep(0, nrow(DF.complete.Z))) #set up an empty vector for the personal Jensen coefs, same length as the others

for (case in 1:nrow(DF.complete.Z)){ #loop over every row index of the dataset; this is needed since we refer to cases by index
  cor = cor(as.numeric(DF.paf$loadings), as.numeric(DF.complete.Z[case,1:7])) #calculate the personal Jensen coef. for this case
  Jensen.coef[case] = cor #insert it into the vector we created above
}

DF.complete.Z["Jensen.coef"] = Jensen.coef #put the results into the dataframe


-

I added a new revision with some changes as well as the metric Piffer suggested.
https://osf.io/gb3cy/
Admin
I want to be sure. The Jensen coefficient is calculated as follows: you take each person's z-scores on all 7 subtests and the g-loadings for these subtests as calculated for the entire group, and you correlate the z-scores and g-loadings for each person to get the Jensen coefficient. Am I correct?

I don't see anything wrong in the article, and I want to approve, but I need to be sure about the above question.
Admin
Yes.

DF.paf is the factor analysis object. Using $loadings gets you the subtests' g-loadings, and $scores gets you the g-weighted scores per case. The DF.complete.Z object is the data.frame ("DF") with only complete data (hence "complete") which has been standardized (hence "Z"). The first 7 columns are the subtest scores (hence "1:7"), the 8th col is the GPA.

A pity the metric doesn't work. We can try in project TALENT too, if that dataset has grades or some other criterion variable.
Admin
It's ok. I approve.

EDIT: it's really optional, but I think the article will gain much by explaining the implication of a positive correlation between the Jensen coef. and other variables (g scores, GPA, full IQ, etc.) and how it differs from the g-loading. Most people do not know what the Jensen coef. is. It's also new to me.
Admin
Jensen coef. is not new, it's just a new name. It is just to avoid awkward language when the ... eh (anti)Jensen effect goes in different directions or neither.

Positive Jensen coef. = Jensen effect.
Negative Jensen coef. = AntiJensen effect.
Null Jensen coef. = no Jensen effect.

-

I ran one more analysis, since people always call for research into the predictive power of non-g: the partial correlations between the subtests and GPA with the g factor partialled out. They were all pretty small.

ravenscore  lretot  nsetot  voctot  hfitot  vantot  aritot
     -0.09   -0.06   -0.08    0.05    0.07    0.12    0.01

The largest is .12. N=289, so this has p=0.045. Likely a fluke.
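
In case anyone wants to check the numbers, this is roughly how it can be computed (a sketch, not necessarily the exact code in the project; it assumes the layout described above, subtests in columns 1-7 and GPA in column 8):

g.scores = as.numeric(DF.paf$scores) #g-weighted factor scores per case
partials = sapply(1:7, function(i) { #partial correlation of each subtest with GPA, g partialled out
  res.sub = resid(lm(DF.complete.Z[, i] ~ g.scores)) #subtest with g removed
  res.gpa = resid(lm(DF.complete.Z[, 8] ~ g.scores)) #GPA with g removed
  cor(res.sub, res.gpa)
})
round(partials, 2)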
Admin
New revision (#6) with the above results added in a new section. Nothing else changed.

https://osf.io/gb3cy/
New revision (#6) with the above results added in a new section. Nothing else changed.

https://osf.io/gb3cy/


"The personal Jensen coefficient correlates moderately with both g factor scores (.35) and the unit-mean (.23)."

Given SLDR, shouldn't the personal Jensen coefficient negatively correlate with g-scores? Or doesn't SLDR work this way? If it does, then your sample is, on this account, problematic - and you should note this.
Admin
I don't see why SLDR should predict that. The correlations of the personal Jensen coef. with g scores and with the unit-mean simply show that the smarter people tend to get their higher scores on the more g-loaded tests. The measure Piffer came up with (g score minus unit-mean) shows the same behavior.
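
A toy case with made-up numbers shows the pattern:

loadings = c(.8, .7, .6, .5, .4, .3, .2) #hypothetical g-loadings
z = c(1.2, 1.0, .6, .2, -.1, -.4, -.6) #hypothetical subtest z-scores, best scores on the most g-loaded tests
cor(loadings, z) #personal Jensen coef. is positive
weighted.mean(z, loadings) > mean(z) #and the g-weighted score exceeds the unit-weighted one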
Admin
The g score is higher than the full scale IQ score if the Jensen effect (correlation with g-loadings) is positive, but the g score is lower than the full scale IQ if the Jensen effect is negative. Now, what SLODR says is that the strength of the subtests' intercorrelations is lower at higher IQ levels. I don't remember it saying that the correlation with g-loadings becomes negative. But perhaps it's possible that the correlation with g-loadings is lower when IQ levels go up.
I don't see why SLDR should predict that. The correlations of the personal Jensen coef. with g scores and with the unit-mean simply show that the smarter people tend to get their higher scores on the more g-loaded tests. The measure Piffer came up with (g score minus unit-mean) shows the same behavior.


It would depend on how one conceptualized the mechanism behind SLDR. I was imagining that the population-level correlations were lower at higher IQs because, on the individual level, g was less potent. If this were the case, I would expect an individual-level anti-Jensen effect at higher IQs. I thought Armstrong suggested this. But I agree that this need not be the mechanism.

I like this paper. It was well written, simple and to the point. An interesting idea was explored.

I approve.
1) "correlation of subtests' g-loading"

g-loadingS

2) "Dutch students"

Specify, "university students"

3) "Perhaps this is because it is a student dataset with an above average level of g. According to the ability differentiation hypothesis, the higher the level of g, the weaker the g factor."

If the university students are selected based on g, there's also range restriction which reduces g variance.

4) "A conceptually similar measure is the g minus unit-mean metric (g advantage). This value is positive when the person has his highest scores on the more g-loaded subtests, and lower than the opposite is the case."

Rephrase the "lower than the opposite is the case." Also, with an increasing number of tests, the correlation between equally weighted and g-weighted scores approaches 1 because only the g variance tends to cumulate into composite scores regardless of weights used. See p. 103 in Jensen's g factor book. Accordingly, the correlation between g scores and unweighted scores in your data is 0.99, and the g advantage has no validity independently of g scores.

5) "They do not seem to have any unique predictive power for GPA beyond their association with g. Multiple regression gave a similar result (results not shown)."

What's the point of using MR here? It's superfluous with the partial correlation.

6) "Verbal analogies has a p value of .04 (N=289, two-tailed)"

The other p-values were >0.05, right? You should mention that.

7) "I ran the partial correlations with GPA and g partialled out"

Rephrase. You ran correlations between GPA and subtests, with g scores partialled out.

8) As a general point, the g factor is a between-individuals variable whereas your personal Jensen coefficient is a within-individual variable. You cannot easily generalize from individual differences processes to within-person processes, so your entire analysis is a bit suspect. Peter Molenaar has written about this a lot.
Admin
Thank you for the criticism, Dalliard. I will update the revision with the fixes. As for the last point, the personal Jensen coef. depends both on the between-individual data and the within-individual data: the first is needed to find the g-loadings, the second to calculate the effect. I think it was an interesting idea, but it didn't pan out. In the spirit of preventing publication bias in the scientific literature, I decided to write up the results for a paper instead of just thinking "hm, I guess that didn't work" or blogging about it.
Admin
1) "correlation of subtests' g-loading"

g-loadingS


Fixed.

2) "Dutch students"

Specify, "university students"


Fixed.

3) "Perhaps this is because it is a student dataset with an above average level of g. According to the ability differentiation hypothesis, the higher the level of g, the weaker the g factor."

If the university students are selected based on g, there's also range restriction which reduces g variance.


Added: "Alternatively, one may think of it as range restriction of g, so that it is relatively smaller compared to the other sources of variance in the cognitive data."

4) "A conceptually similar measure is the g minus unit-mean metric (g advantage). This value is positive when the person has his highest scores on the more g-loaded subtests, and lower than the opposite is the case."

Rephrase the "lower than the opposite is the case." Also, with an increasing number of tests, the correlation between equally weighted and g-weighted scores approaches 1 because only the g variance tends to cumulate into composite scores regardless of weights used. See p. 103 in Jensen's g factor book. Accordingly, the correlation between g scores and unweighted scores in your data is 0.99, and the g advantage has no validity independently of g scores.


Changed to "This value is positive when the person has his highest scores on the more g-loaded subtests, and lower when the opposite is the case."

I agree regarding the comment. The g factor scores had no incremental validity above unit-weighted scores in accordance with statistical theory.
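
A quick simulation illustrates this (a sketch with arbitrary parameters, not the study data):

set.seed(1)
g = rnorm(1000) #common factor
loadings = seq(.3, .8, length.out = 7) #hypothetical g-loadings
subtests = sapply(loadings, function(l) l*g + rnorm(1000, sd = sqrt(1 - l^2))) #7 simulated subtests
cor(rowMeans(subtests), subtests %*% loadings) #unit-weighted vs. g-weighted composite, already close to 1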

5) "They do not seem to have any unique predictive power for GPA beyond their association with g. Multiple regression gave a similar result (results not shown)."

What's the point of using MR here? It's superfluous with the partial correlation.


I have found that sometimes MR and partial correlations give markedly different results. For this reason I often test both to make sure it isn't some strange statistical fuck-up.

6) "Verbal analogies has a p value of .04 (N=289, two-tailed)"

The other p-values were >0.05, right? You should mention that.


Yes. Changed to "All the partial correlations were weak. Three were in the wrong direction. Only verbal analogies has a p-value below .05 (.04) (N=289, two-tailed) but since I tested 7 subtests and there is no adjustment made for multiple comparisons, it may be a fluke."
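
For what it's worth, a quick Bonferroni check backs this up (a sketch):

p.adjust(.04, method = "bonferroni", n = 7) #adjusted for the 7 subtests tested: .28, clearly non-significant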

7) "I ran the partial correlations with GPA and g partialled out"

Rephrase. You ran correlations between GPA and subtests, with g scores partialled out.


Changed to: "Do the Jensen coefficient and g adv. explain unique parts of the variance of GPA? To test this, I ran the partial correlations between both measures and GPA controlling for g scores."

---

The new revision can be found at OSF: https://osf.io/gb3cy/ (revision #7, dated 24th Oct).

Additionally, I reran all the analyses on my laptop. All analyses produced the same results as before (analytic reproducibility).
"A Jensen coefficient is the correlation between a subtests’ g-loadings and a vector of interest."

"I hypothesized that the personal Jensen coefficient from a subjects’ subtest scores will predict grade point average beyond g."

Rewrite: "Alternatively, one may think of it as range restriction of g, so that it is relatively smaller compared to the other sources of variance in the cognitive data" What is "it" referencing?

Was there no way to correct for subtest reliability? Could you mention that you were unable to do this, despite this being standard methodology?
Admin
Thank you for reviewing, Chuck.

"A Jensen coefficient is the correlation between a subtests’ g-loadings and a vector of interest."


Presumably, you copied this from the abstract. However, the text is better as it is; your proposed version is nonsensical.

"I hypothesized that the personal Jensen coefficient from a subjects’ subtest scores will predict grade point average beyond g."


Both additions are unnecessary. The first one makes it worse.

Rewrite: "Alternatively, one may think of it as range restriction of g, so that it is relatively smaller compared to the other sources of variance in the cognitive data" What is "it" referencing?


Changed it to "Alternatively, one may think of it as range restriction of g, so that the g variance is relatively smaller compared to the other sources of variance in the cognitive data."

Was there no way to correct for subtest reliability? Could you mention that you were unable to do this, despite this being standard methodology?


Subtest reliability is not reported by Wicherts et al. As far as I can tell, these are uncommon tests, not commonly used ones, so it will be harder to find reliability data. For the RAPM one can find data, but correcting one subtest without the others is bad for MCV.
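
For reference, the standard correction would look something like this, had the reliabilities been available (a sketch; the r.xx values below are made up purely for illustration):

r.xx = c(.85, .80, .75, .80, .70, .75, .80) #hypothetical subtest reliabilities (not reported by Wicherts et al.)
loadings.c = as.numeric(DF.paf$loadings) / sqrt(r.xx) #disattenuate the g-loadings before running MCV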

-

I didn't upload the new version yet as there may be more mickey mouse edits necessary.
1) "finding the correlation between subtests' g-loading and each person's scores on the subtests" --> g-loadingS

2) Rephrase: "I ran the partial correlations with GPA and g partialled out."

3) I don't think corrections for measurement error make sense here because even if you had reliability data you could not correct the individual scores because the amount of error differs between individuals. Correcting the g-loadings might add as much error as it removes.

4) I think there should be more reflection on the intra- versus inter-individual differences problem, but I approve publication in any case.