
[ODP] Reassessment of Jewish Cognitive Ability
Hello, I would like to submit the attached manuscript for review. Thank you, Curtis Dunkel
Admin
The results suggest that parental fluency in Hebrew or Yiddish is a valid measure of Jewish within-group differences and further research using the measure and the Project Talent data file is proscribed.


Proscribed means "to forbid, especially by law," but that doesn't seem right. Is the author suggesting that no further analyses should be made based on these data? Perhaps you meant "prescribed"? :)

The cognitive profile of Jews is odd. They are basically like smart women with their low spatial ability. Even the politically correct, generally anti-hereditarian Richard Nisbett thinks it is due to genetics (see Nisbett, 2009, footnote 173):

Before leaving the topic of Jewish IQ, I should note that there is an anomaly concerning Jewish intelligence. The major random samples of Americans having large numbers of Jewish participants show that whereas verbal and mathematical IQ run 10 to 15 points above the non-Jewish average, scores on tests requiring spatial-relations ability (ability to mentally manipulate objects in two- and three-dimensional space) are about 10 points below the non-Jewish average (Flynn, 1991a). This is an absolutely enormous discrepancy and I know of no ethnic group that comes close to having this 20 to 25-point difference among Jews. I do not for a minute doubt that the discrepancy is real. I know half a dozen Jews who are at the top of their fields who are as likely to turn in the wrong direction as in the right direction when leaving a restaurant. The single ethnic difference that I believe is likely to have a genetic basis is the relative Jewish incapacity for spatial reasoning. I have no theory about why this should be the case, but I note that it casts an interesting light on the Jews' wandering in the desert for forty years!


My girlfriend suggested that it is mediated by a different testosterone level, assuming that testosterone boosts spatial ability, which seems plausible.

The evidence is not so strong, it seems. There appears to be a nonlinear relationship. Some cites:

Silverman, Irwin, et al. "Testosterone levels and spatial ability in men." Psychoneuroendocrinology 24.8 (1999): 813-822.

Gouchie, Catherine, and Doreen Kimura. "The relationship between testosterone levels and cognitive ability patterns." Psychoneuroendocrinology 16.4 (1991): 323-334.

Shute, Valerie J., et al. "The relationship between androgen levels and human spatial abilities." Bulletin of the Psychonomic Society 21.6 (1983): 465-468.

Sixteen tests of cognitive ability were administered to participants. The scores from the full base year sample on the sixteen tests were submitted to an Exploratory Factor Analysis using Principal Axis Factoring. The first unrotated factor, with an Eigenvalue of 7.71 and accounting for 48.19% of the variance among scales, was used to compute g. The individual tests with their factor loadings are as follows: Abstract Reasoning (.70), Advanced Math (.52), Arithmetic Reasoning (.78), Creativity (.72), Disguised Words (.66), English Total (.78), High School Math (.77), Information (.88), Mechanical Reasoning (.65), Memory for Sentences (.33), Memory for Words (.54), Reading Comprehension (.86), Visualization in Two Dimensions (.49), Visualization in Three Dimensions (.61), Vocabulary (.85), Word Functions in Sentences (.72).


A table would better show these data.

If one assumes that the data are interval, the correlation between parental fluency in Hebrew or Yiddish and g was r(10,578) = .10.


Can you calculate the average ability levels of the children by their parents' fluency levels? This way the reader will know how large r=.10 is in IQ points (µ=100, SD=15).

Table 4 presents the scores on the individual cognitive tests by White Jewish versus White gentile status and by myopic versus non-myopic status. For both Jews and gentiles the pattern is the same: myopia was associated with higher scores, with the exceptions of mechanical reasoning and two-dimensional visualization.


This would better be presented visually. One could do the same with all the tables.

I don't have any particular objections to publication, just the above comments. It's a fine paper.

The data are public, yes? They need to be linked to or attached before the paper can be published, as this journal has mandatory data sharing.
I believe "proscribed" can also mean "commanded".

When conducting MCV, you should correct for artifacts such as unreliability of the vectors (te Nijenhuis et al., 2014 have a full list of such corrections).

Also, given the lower spatial ability amongst Jewish populations, one would expect spatial tests to have higher g-loadings (Michael Woodley pointed this out in his review of te Nijenhuis et al., 2014). An analysis should be conducted to confirm or disconfirm this.
Admin
When conducting MCV, you should correct for artifacts such as unreliability of the vectors (te Nijenhuis et al., 2014 have a full list of such corrections).

Also, given the lower spatial ability amongst Jewish populations, one would expect spatial tests to have higher g-loadings (Michael Woodley pointed this out in his review of te Nijenhuis et al., 2014). An analysis should be conducted to confirm or disconfirm this.


For this to be possible, however, the dataset must have IQ data at several waves. Then he can correlate the vectors and estimate the reliability. I'm sure, given the description, that this set of tests is not common.

I don't think it will change the result, however. As he hypothesized, the positive correlation appeared only for Hebrew/Yiddish fluency, not for the other languages, where you have a negative r, and in one group you have a null r. Even correction for vector unreliability would not change the null r for that group. Nearly all of the correlations are very high, i.e., correction for the artifacts listed by Schmidt/Hunter would increase the r, but not by much.

I'm more concerned, however, about the subtest reliability. Is there no data on subtest reliability for that battery? Jensen (1998) always recommends doing this, because a positive correlation between g-loadings and other variables of interest can be "faked" by differential subtest reliability. For example, if you have 6 tests with rtt=0.80 and 6 others with rtt=0.60, you might obtain incorrect effect sizes (i.e., correlations) because g-loadings are stronger for more reliable subtests. In general, however, IQ tests are highly reliable, and their subtests too, and their reliabilities don't differ that much across subtests. So, hypothetically, you can assume you won't have any trouble with MCV. But having the subtest reliability correction done is better than not.

Is it possible to conduct such an analysis? If not, I would still like to validate the publication of this study. I believe it is really worth it. (Although I recommend adding a little note about the impossibility of doing the subtest reliability correction, assuming of course that you can't do it.)

P.S. (If you can do and plan on doing the subtest reliability correction, there are 2 ways to do it: either by dividing each column by the SQRT of the reliability, or by correlating the two variables with subtest reliability partialed out, using the method of partial correlation. Jensen seems to believe the latter method is better.)
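
As a rough illustration, here is a minimal sketch of the two corrections in Python; the vectors below are made up for illustration and nothing here comes from the Project Talent battery:

[code]
# Minimal sketch of the two corrections described above, on made-up vectors.
import numpy as np

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

def partial_corr(x, y, z):
    # First-order partial correlation of x and y with z partialed out.
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

# Hypothetical vectors: g-loadings, group differences, subtest reliabilities.
g_load = np.array([0.80, 0.75, 0.70, 0.65, 0.55, 0.45])
diffs  = np.array([0.60, 0.55, 0.40, 0.50, 0.30, 0.20])
rel    = np.array([0.90, 0.85, 0.80, 0.80, 0.70, 0.65])

# Way 1: divide each column by the SQRT of the reliability, then correlate.
r_sqrt = pearson(g_load / np.sqrt(rel), diffs / np.sqrt(rel))

# Way 2: correlate the two vectors with the reliability vector partialed out.
r_part = partial_corr(g_load, diffs, rel)

print(round(pearson(g_load, diffs), 3), round(r_sqrt, 3), round(r_part, 3))
[/code]
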
I'm more concerned, however, about the subtest reliability. Is there no data on subtest reliability for that battery? Jensen (1998) always recommends doing this, because a positive correlation between g-loadings and other variables of interest can be "faked" by differential subtest reliability.


As an alternative, communalities could be used. The problem is that doing so overcorrects and tends to diminish correlations.

Because reliability coefficients of these tests have not been determined directly in a comparable subject sample, each test's communality (i.e. the proportion of its total variance accounted for by the common factors) is used as a lower-bound estimate of the test's reliability. Partialling out the vector of communalities (as surrogate reliability coefficients) is an extremely stringent procedure, because generally the largest proportion of the communalities is contributed by the PC1, so some part of the PC1 vector's correlation with d is removed in the partial correlation, thereby tending to work against the outcome predicted by Spearman's hypothesis. If the rs is nonsignificant (p > 0.10, 2-tailed test), the partial correlation is not computed. Controlling variation in test reliabilities (estimated by the communalities), however, seems preferable to no control whatsoever. These results are shown in Table 2. (Jensen and Nyborg, 2000).
Admin
Thanks. I had almost forgotten about this, but I remember I have never been convinced by this method: it can reverse your correlation, it behaves unexpectedly, etc. Perhaps, as you say, it overcorrects. The problem is that Nyborg/Jensen state it's better than no correction at all. But why so? I want the explanation. I remember I asked Nyborg himself, but he told me that he's definitely done with psychometrics. In the end, I don't really know what to think about it.
Admin
I'll ask Nyborg to comment on this, since I know him personally.
The estimates of the subtest reliabilities are available. I can include them in the table Emil recommended along with the factor loadings. To recalculate correcting for reliability, MengHu wrote, "by correlating the two variables with subtest reliability partialed out, using the method of partial correlation."

Is it as simple as running a partial correlation between the factor loadings and the fluency-score correlations while controlling for the subtests' reliabilities?

Thanks, Curt
Admin
The estimates of the subtest reliabilities are available. I can include them in the table Emil recommended along with the factor loadings. To recalculate correcting for reliability, MengHu wrote, "by correlating the two variables with subtest reliability partialed out, using the method of partial correlation."

Is it as simple as running a partial correlation between the factor loadings and the fluency-score correlations while controlling for the subtests' reliabilities?

Thanks, Curt


Yes, I think so.

The alternative method is to correct for unreliability as done in a psychometric meta-analysis. See: https://www.goodreads.com/book/show/895784.Methods_of_Meta_Analysis

You can find the book for free here: http://gen.lib.rus.ec/book/index.php?md5=D43B07F375C98403D672D186F91C3407&open=0

The really short version is that you use the formula found on Wikipedia here (https://en.wikipedia.org/wiki/Correction_for_attenuation) if you have the reliability of both measurements, and then you run the correlation with subtest g-loadings on the corrected correlations.
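
As a minimal sketch, the formula amounts to this; the reliabilities in the example are made-up placeholders, not values from the battery:

[code]
# Spearman's correction for attenuation (the Wikipedia formula linked above).
import math

def disattenuate(r_obs, rel_x, rel_y):
    # Observed correlation divided by the square root of the product of the reliabilities.
    return r_obs / math.sqrt(rel_x * rel_y)

# e.g. an observed correlation of .10 with hypothetical reliabilities of .85 and .90
print(disattenuate(0.10, 0.85, 0.90))  # about 0.114
[/code]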

Furthermore, one can correct for measurement error in g-loadings; that is, the g-loadings are not always estimated correctly by latent variable methods (principal components, factor analysis, maximum likelihood estimation, etc.), and this might bias findings. The expert in this area is Jan te Nijenhuis, a Dutch psychologist. You can contact him at nijen631 [removethis_replacewith@] planet.nl.
Is it as simple as running a partial correlation between the factor loadings and the fluency-score correlations while controlling for the subtests' reliabilities?
Thanks, Curt


I would just use Jensen's basic method. See p. 589 of The g Factor. One thing that Jensen notes elsewhere is to use the square root of the reliability (see also KJ Kan (2012)) -- so just divide both vectors by this.
Admin
You are not the only one to dig around in the Project TALENT dataset, even though it is from 1960! This shows the usefulness of publishing the data.

https://www.researchgate.net/publication/261066207_Linear_and_nonlinear_associations_between_general_intelligence_and_personality_in_Project_TALENT
Admin
I have used partial correlation only a few times. But I remember it does not give you similar results. Try this (some WISC data).

Test                   Reliability   BW gap   g-loading
Vocabulary                    0.89    0.834      0.8020
Information                   0.85    0.854      0.7780
Comprehension                 0.78    0.793      0.6820
Similarities                  0.82    0.769      0.7680
Arithmetic                    0.79    0.609      0.6600
Picture Completion            0.77    0.697      0.5320
Picture Arrangement           0.74    0.746      0.5160
Block Design                  0.85    0.895      0.6060
Coding                        0.70    0.448      0.3540
Digit Span                    0.74    0.263      0.4520
Object Assembly               0.70    0.792      0.4960
Mazes (Rushton 1999)          0.72    0.729      0.3750

1) Uncorrected r = 0.568
2) corrected by SQRT of 2 vectors = 0.462
3) Partialing out reliability vector = 0.238
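
For what it's worth, a short sketch along these lines, using ordinary Pearson and first-order partial correlations on the values listed above, gives roughly the same three figures:

[code]
# Sketch reproducing the three figures above from the listed WISC values.
import numpy as np

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

rel = np.array([0.89, 0.85, 0.78, 0.82, 0.79, 0.77, 0.74, 0.85, 0.70, 0.74, 0.70, 0.72])
bw  = np.array([0.834, 0.854, 0.793, 0.769, 0.609, 0.697, 0.746, 0.895, 0.448, 0.263, 0.792, 0.729])
g   = np.array([0.802, 0.778, 0.682, 0.768, 0.660, 0.532, 0.516, 0.606, 0.354, 0.452, 0.496, 0.375])

# 1) Uncorrected correlation of the BW gap vector with the g-loading vector.
r_raw = pearson(bw, g)                                  # ~0.568

# 2) Both vectors divided by the square root of the reliabilities.
r_sqrt = pearson(bw / np.sqrt(rel), g / np.sqrt(rel))   # ~0.462

# 3) Reliability vector partialed out (first-order partial correlation).
r_bw_rel, r_g_rel = pearson(bw, rel), pearson(g, rel)
r_part = (r_raw - r_bw_rel * r_g_rel) / np.sqrt((1 - r_bw_rel**2) * (1 - r_g_rel**2))  # ~0.238

print(round(r_raw, 3), round(r_sqrt, 3), round(r_part, 3))
[/code]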

I also used the data I have displayed in this post. It's the MISTRA data given by Johnson/Bouchard (2011). I chose the columns "genetic", "reliability", and "g" in the XLS. Here are the results.

1) not corrected
r(h2*g)=0.525

2) corrected with SQRT
r(h2*g)=0.520

3) corrected with partial corr
r(h2*g)=0.489

I never used the 3rd method in the past, for one obvious reason: no one else did. They always divide the 2 vectors by the SQRT of the reliabilities and then correlate them. To be honest, I don't know which one is best. Jensen only says "it's more definitive", but the sentence is mysteriously phrased and I don't know what he means.
Admin
Perhaps we need to do a simulation study to figure out which method is best. But if reviewers don't know which is the best, then surely the paper cannot be faulted for not employing that correction.
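
A rough sketch of what such a simulation could look like is below. The data-generating model (each observed vector element = sqrt(reliability) × true value + noise) is only one of several possible choices, and every number in it is arbitrary:

[code]
# Rough sketch of a simulation comparing the two corrections.
# The data-generating model here is an assumption, not an established standard.
import numpy as np

rng = np.random.default_rng(1)
n_tests, n_sims, true_r = 12, 5000, 0.60

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

sums = {"uncorrected": 0.0, "sqrt": 0.0, "partial": 0.0}

for _ in range(n_sims):
    # True g-loading and group-difference vectors correlated at true_r.
    cov = [[1.0, true_r], [true_r, 1.0]]
    latent = rng.multivariate_normal([0.0, 0.0], cov, size=n_tests)
    g_true, d_true = latent[:, 0], latent[:, 1]

    # Subtest reliabilities varying across tests; attenuated "observed" vectors.
    rel = rng.uniform(0.6, 0.9, size=n_tests)
    g_obs = np.sqrt(rel) * g_true + np.sqrt(1 - rel) * rng.normal(size=n_tests)
    d_obs = np.sqrt(rel) * d_true + np.sqrt(1 - rel) * rng.normal(size=n_tests)

    r_raw = pearson(g_obs, d_obs)
    r_sqrt = pearson(g_obs / np.sqrt(rel), d_obs / np.sqrt(rel))
    r_gr, r_dr = pearson(g_obs, rel), pearson(d_obs, rel)
    r_part = (r_raw - r_gr * r_dr) / np.sqrt((1 - r_gr**2) * (1 - r_dr**2))

    sums["uncorrected"] += r_raw
    sums["sqrt"] += r_sqrt
    sums["partial"] += r_part

for name, total in sums.items():
    print(name, round(total / n_sims, 3))  # compare the recovered means with true_r = 0.60
[/code]
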
Thank you all for the suggestions. Here is a list of changes I made to the original manuscript.

List of changes:

1. In the abstract I changed “proscribed” to “prescribed”.

2. Given the ideas concerning T level and spatial reasoning, and the results where myopia is positively associated with g but negatively with mechanical reasoning, it would be expected that T level would be inversely associated with myopia. But the opposite may be the case: high T, high degree of myopia.

Chen et al. (2011). Polymorphisms in steroidogenesis genes, sex steroid levels, and high myopia in the Taiwanese population. Molecular Vision.

3. I wasn’t sure how to best present the information on how to access the data. I added a section after the references with the information.

7 Data Access

To access the data follow this link, doi:10.3886/ICPSR33341.v2, and download the “combined classes” file. One will need to be or become a member of the Inter-university Consortium for Political and Social Research.

I am not able to directly share the data.

4. Create a table for the factor loadings.

I included this in a new table, Table 2.

5. “This would better be presented visually. One could do the same with all the tables.”

Is the suggestion to change the tables to figures? I find it easier to get information from tables. I would prefer to keep tables. Is that okay?

6. Can you calculate the average ability levels of the children by their parents' fluency levels? This way the reader will know how large r=.10 is in IQ points (µ=100, SD=15).

Hmm…The scores are factor scores of the z-scores of the tests using the SPSS command of “save factor scores” and “regression”. My guess is one could get a fairly accurate estimate of IQ score, but I hesitate to make this conversion and suggest that the estimated IQ score holds some water. On the other hand, I could be very explicit about the estimate just being an estimate. What do you think?

7. Jensen Effects correcting for reliability.

The suggestion was made by Philbrick to correct for unreliability when testing for Jensen Effects. Chuck and MengHu suggested two different ways the correction could be performed. It doesn't look like anyone knows which is the preferred method at this point (see Emil's suggestion), so I just did both analyses. These can be seen in Table 2. Consistent with MengHu, I found that the partial correlation method results in a more substantial reduction in the correlation.
[hr]
You are not the only one to dig around in the Project TALENT dataset, even though it is from 1960! This shows the usefulness of publishing the data.

https://www.researchgate.net/publication/261066207_Linear_and_nonlinear_associations_between_general_intelligence_and_personality_in_Project_TALENT


Yes, I am familiar with the article. In fact, I have been working on a short response demonstrating the effect of controlling for the GFP.

In terms of Project Talent, Chuck Reeve uses these data quite a bit, and while there are numerous issues with the dataset, the scope of the sample is remarkable.
Admin
Papers cannot be published if the data are not made public. However, the data are public right here: http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/33341?q=talent&searchSource=icpsr-landing

Is the suggestion to change the tables to figures? I find it easier to get information from tables. I would prefer to keep tables. Is that okay?


Yes, but if you prefer the other, that's fine. I mostly wanted you to make a table with the g-loadings, which you did (Table 2).

1. In the abstract I changed “proscribed” to “prescribed”.


It is still "proscribed".

I would like to thank Emil O.W.Kirkegaard for his assistance in preparing the manuscript.


Missing spaces.

6. Can you calculate the average ability levels of the children by their parents' fluency levels? This way the reader will know how large r=.10 is in IQ points (µ=100, SD=15).

Hmm…The scores are factor scores of the z-scores of the tests using the SPSS command of “save factor scores” and “regression”. My guess is one could get a fairly accurate estimate of IQ score, but I hesitate to make this conversion and suggest that the estimated IQ score holds some water. On the other hand, I could be very explicit about the estimate just being an estimate. What do you think?


I said it badly. What I meant is that you should estimate the IQs for each of the levels of parental fluency in Hebrew/Yiddish. In your Table 1, you have the mean g for each level of fluency, except no fluency.

You note below the table:
For the full White only sample, M = .40, SD = .88.


Is this with or without those who were identified as Jewish?

What we can assume is that those whose parents do not speak H/Y have a mean IQ of about 100 (they are European non-Jews). From the dataset, you can calculate the average g of that group. I'm not sure if that's what .40 is above, or if that is for all those who self-identified as "white" including those whose parents speak H/Y.

When you have the non-Jew European mean g and the SD of that group, you can compare this with the mean g of the 5 groups of Jewish children. With this you can calculate the effect size (d) between the non-Jew Europeans and the Jew categories. When one has that, one can calculate the IQs by setting the European non-Jews to 100 and setting the Jewish children to 100+d*15.
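
Concretely, something like the sketch below; the fluency-group means are placeholders, and whether M = .40, SD = .88 is the right reference group is exactly the question above:

[code]
# Sketch of the conversion: anchor the non-Jewish European reference group at IQ 100
# and place each parental-fluency group at 100 + d*15.
# The reference M/SD and the group means below are placeholders, not results.
ref_mean_g, ref_sd_g = 0.40, 0.88   # stand-in reference values (full White-only sample)

fluency_means = {            # hypothetical mean g for each parental fluency level
    "a few words": 0.55,
    "fairly well": 0.70,
    "fluently":    0.85,
}

for level, mean_g in fluency_means.items():
    d = (mean_g - ref_mean_g) / ref_sd_g   # effect size vs. the reference group
    iq = 100 + d * 15
    print(level, round(d, 2), round(iq, 1))
[/code]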

--

I can ask Jan if he wants to review the use of corrections. He is the leading expert in this area, perhaps along with Michael Woodley.
Papers cannot be published if the data are not made public. However, the data are public right here: http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/33341?q=talent&searchSource=icpsr-landing

Is the suggestion to change the tables to figures? I find it easier to get information from tables. I would prefer to keep tables. Is that okay?


Yes, but if you prefer the other, that's fine. I mostly wanted you to make a table with the g-loadings, which you did (Table 2).

1. In the abstract I changed “proscribed” to “prescribed”.


It is still "proscribed".

I would like to thank Emil O.W.Kirkegaard for his assistance in preparing the manuscript.


Missing spaces.

6. Can you calculate the average ability levels of the children by their parents' fluency levels? This way the reader will know how large r=.10 is in IQ points (µ=100, SD=15).

Hmm…The scores are factor scores of the z-scores of the tests using the SPSS command of “save factor scores” and “regression”. My guess is one could get a fairly accurate estimate of IQ score, but I hesitate to make this conversion and suggest that the estimated IQ score holds some water. On the other hand, I could be very explicit about the estimate just being an estimate. What do you think?


I said it badly. What I meant is that you should estimate the IQs for each of the levels of parental fluency in Hebrew/Yiddish. In your Table 1, you have the mean g for each level of fluency, except no fluency.

You note below the table:
For the full White only sample, M = .40, SD = .88.


Is this with or without those who were identified as Jewish?

What we can assume is that those whose parents do not speak H/Y have a mean IQ of about 100 (they are European non-Jews). From the dataset, you can calculate the average g of that group. I'm not sure if that's what .40 is above, or if that is for all those who self-identified as "white" including those whose parents speak H/Y.

When you have the non-Jew European mean g and the SD of that group, you can compare this with the mean g of the 5 groups of Jewish children. With this you can calculate the effect size (d) between the non-Jew Europeans and the Jew categories. When one has that, one can calculate the IQs by setting the European non-Jews to 100 and setting the Jewish children to 100+d*15.

--

I can ask Jan if he wants to review the use of corrections. He is the leading expert in this area, perhaps along with Michael Woodley.


Emil, it gets a bit tricky due to the data file. The initial wave of the data collection did not include a question about race/ethnicity. A follow-up wave of data collection did (but it was poorly worded to boot). And then the answer to the question was retroactively put into the first wave.

Here are the frequencies for the race/ethnicity question (note the large number of "unknowns"):

RACE
                                       Frequency   Percent   Valid Percent   Cumulative Percent
Valid    0                                  4722       1.3             1.3                  1.3
         White or Caucasian               147355      39.1            39.2                 40.5
         Black or Negro or Afro Am          6533       1.7             1.7                 42.2
         Oriental                            999        .3              .3                 42.5
         American Indian                     239        .1              .1                 42.6
         Mexican American                    323        .1              .1                 42.6
         Puerto Rican American                37        .0              .0                 42.7
         Eskimo                                1        .0              .0                 42.7
         Cuban                                 1        .0              .0                 42.7
         Unknown (or conflctng resp)      215339      57.1            57.3                100.0
         Total                            375549      99.6           100.0
Missing  System                             1467        .4
Total                                     377016     100.0


Beaver (2013) showed that intelligence is inversely associated with attrition, meaning that those we have data from are going to represent individuals higher in intelligence. So I really hesitate to move away from the relative values within the data.
Admin
It is customary to upload one's datafiles (e.g. SPSS or excel). Can you upload yours?
I am afraid I cannot. The data isn't mine to share. However, as you indicated and I tried to convey at the end of the manuscript, to access the data one simply has to go to the ICPSR, register, and download the base-year data (follow-up data is not as easily available).
Admin
If the data cannot be shared, the paper cannot be published in this journal. It is a mandatory data sharing journal. If you have the data, then surely you can upload it somewhere anonymously.