MCV can be used to test if differences in some variable X are primarily driven by g (or the other way round, but let's assume the direction is g->X) OR by some other source of variance. These other sources are non-g factor variance, test specificity, and test unreliability, all of which are uncorrelated with g by definition. The vector of reliabilities can be partialed out (which usually has no substantial effect on the MCV correlation because the amount of random error is unrelated to sources of reliable variance), which means that only non-g factor variances and test specificities "compete" with g as explanations. If the differences in X are strongly dependent on g, there will be a large positive correlation. If, on the other hand, the differences in X are primarily driven by any of the non-g sources of variance, there will be a large negative correlation. If the correlation is close to zero, the differences in X cannot be attributed primarily to g, non-g factors, or specificities.
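For concreteness, here is a minimal sketch of the computation described above, in Python (all vectors are invented for illustration, not real battery data):

```python
import numpy as np

def partial_r(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

# Invented vectors for a hypothetical 6-subtest battery (illustration only):
g_loadings    = np.array([0.80, 0.72, 0.65, 0.58, 0.50, 0.45])  # subtest g-loadings
d_values      = np.array([1.05, 0.95, 0.80, 0.70, 0.55, 0.50])  # group differences (d)
reliabilities = np.array([0.92, 0.88, 0.85, 0.83, 0.80, 0.78])  # subtest reliabilities

r_mcv = np.corrcoef(g_loadings, d_values)[0, 1]           # raw MCV correlation
r_mcv_p = partial_r(g_loadings, d_values, reliabilities)  # reliabilities partialed out

print(f"r(g, d) = {r_mcv:.2f}; with reliability partialed out: {r_mcv_p:.2f}")
```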
The requirement that MCV be able to decompose the non-g variance is arbitrary. MCV tests whether some variable is associated with g or not, without specifying what the "not" is. This is a limitation but it is by no means a fatal limitation. To say that MCV is useless because it (usually) cannot specify the non-g sources of variance is analogous to saying that behavioral genetics is useless because it cannot pinpoint specific genes and environments. Certainly some have made this argument against behavioral genetics, but I don't think it makes any more sense than your criticism of MCV. Also, Dolan's criticism that MCV produces false positives (and false negatives) can be easily overcome with meta-analyses.
It is possible to use non-g factor loadings in MCV to examine what, if not g, drives the correlation (e.g., Jensen, 1998, p. 146). In practice, it may be difficult to collate enough data for meta-analysis with non-g loadings.
MCV is not a perfect or sophisticated method, but it is a remarkably powerful one. Dalliard's comment above is 100 percent correct. If this paper is to be at all thorough in its discussion of the race-IQ data, it must discuss MCV. There is no way around it.
I can't explain this in a more basic way.
I will not decipher this for you again. In that case, I will never have my answer, and it will be difficult for me to give my approval until some caution concerning MCV is made explicit in the article, or until you prove me wrong about MCV.
Offer a plausible non-g explanation for the B/W SH and I will see if I can test it using MCV.
Have you read Dolan's work carefully? In my comments I said that in the non-g (competing) models, depending on the best-fitting model, you can have verbal+performance+memory factors explaining the totality of the B-W gap, or you can have, e.g., verbal+performance alone explaining the totality of the B-W gap. In the g model, if the strong SH is true, only g explains the totality of the B-W gap; if not, you must have g plus one or several first-order factors explaining the totality of the B-W gap. I say it again: you should look at Dolan (2000), Table 8.
Is this discussion really relevant to the review of Dalliard's paper?
Dalliard says the evidence for g is strong, but it is based mainly on a method that is weak, in the sense that the models are not clearly specified and thus can't really be tested. We don't know what the first-order factor model is here. And he makes that point several times; see pp. 16-18 here. He mentions it often enough for me to care about it. This is not nit-picking. For instance, nit-picking would be to say that Dalliard's analysis of Add Health (in Table 1, second row) is flawed (which is true) because he looks at individual variables, measured with error, and not even "summed" into a "battery" of questionnaire items. Rushton said many times in his early work that questionnaire variables are quite unreliable, and without summing them, you can easily get group differences close to zero. But in his Table 1, Dalliard also cited many studies with variables that seemed to be quite reliable. So this would not change his conclusion, and therefore I said nothing about it.
I know some of you want to push me to accept it quickly and then publish it without "wasting time". I could give my approval right now, because even with such a fatal error concerning MCV, I can tell it's a very good piece. But if I approve it, it also means I endorse Dalliard's opinion about MCV. I don't want to lie to myself.
MCV can be used to test if differences in some variable X are primarily driven by g (or the other way round, but let's assume the direction is g->X) OR by some other source of variance. These other sources are non-g factor variance, test specificity, and test unreliability, all of which are uncorrelated with g by definition. The vector of reliabilities can be partialed out (which usually has no substantial effect on the MCV correlation because the amount of random error is unrelated to sources of reliable variance), which means that only non-g factor variances and test specificities "compete" with g as explanations. If the differences in X are strongly dependent on g, there will be a large positive correlation. If, on the other hand, the differences in X are primarily driven by any of the non-g sources of variance, there will be a large negative correlation. If the correlation is close to zero, the differences in X cannot be attributed primarily to g, non-g factors, or specificities.
I understand that argument. In The g Factor, there are even some analyses on that (ch. 12, p. 380). I once even prepared a blog article in which I was going to challenge Dolan's critique of MCV, but I deleted it myself after spending time reading more of Dolan's work. I came to the conclusion that the stupid one here wasn't him but me. The reason: MCV can't identify the non-g model. This is what Dolan said, and I concur with his argumentation.
Now indeed, what are these non-g sources? In MGCFA, you know what they are: first-order factors such as memory, performance, verbal, etc. You can calculate the magnitude of the score gap owing to each of them, with or without g. I say it again: I don't see this in MCV. You seem to admit MCV can't test the first-order factor model vs the g model. That's troublesome because, as I said, it means MCV can't evaluate the most relevant models. We know that a first-order g fits badly; we don't want it, so MCV is of no help. If you remember Panizzon (2014), he said most studies examining the correlated group factors model vs the second-order g model show stronger evidence for the better fit of the former, not the latter. This is what you should be testing. In MCV you don't even know what you're testing against the g model. Test specificity, you say? But in MGCFA, you have the "specificity", better called residuals, for the individual subtests and also for the first-order group factors. In MCV, you have only the "residuals" of the individual subtests.
And if some people would argue that g can be tested against non-g factors, as in multiple regression (with dominance and relative weight analysis), then even if we accept that, it is not MCV anymore.
The requirement that MCV be able to decompose the non-g variance is arbitrary. MCV tests whether some variable is associated with g or not, without specifying what the "not" is. This is a limitation but it is by no means a fatal limitation. To say that MCV is useless because it (usually) cannot specify the non-g sources of variance is analogous to saying that behavioral genetics is useless because it cannot pinpoint specific genes and environments. Certainly some have made this argument against behavioral genetics, but I don't think it makes any more sense than your criticism of MCV. Also, Dolan's criticism that MCV produces false positives (and false negatives) can be easily overcome with meta-analyses.
What do you mean by arbitrary? If you think it's useless, I shall disagree once more. It's the most relevant information. I explained why here.
Also, meta-analysis actually corrects for nothing here. Most of the meta-analyses by te Nijenhuis involve roughly similar test batteries (e.g., WPPSI, WISC, WISC, WISC, WAIS, WAIS, WAIS). Each of them has problems of psychometric sampling bias (i.e., an unbalanced composition of tests). Ideally, you should run meta-analyses on different kinds of batteries, all without any problem of psychometric sampling bias (or error). This is rarely the case because the Wechsler tests are simply over-used. Then the only advantage of meta-analysis is the correction for range restriction and measurement error. But I thought MGCFA could deal with the latter. As for the former, perhaps the MGCFA decomposition of g/non-g is subject to this bias (if, for example, range restriction of g-loadings reduces the score gap), but I don't see why the model fit indices would be. What remains, then, where MCV is superior to MGCFA, is sampling error. This is a tiny advantage, and not even really so, because in MGCFA you can easily have large samples, as in Dolan (2000) on the B-W difference, or in his study (2006) of sex differences in Spain. The problem of sampling error is not linear; in fact, it seems to follow a logarithmic curve. Starting from an extremely small N, an increase in N is very meaningful, but its benefit becomes smaller and smaller the larger N gets.
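To put rough numbers on those diminishing returns, here is a minimal sketch using the standard large-sample approximation for the sampling error of a correlation (the value r = .5 is arbitrary):

```python
import numpy as np

# Large-sample approximation for the sampling error of a correlation:
# SE = (1 - r^2) / sqrt(N - 1). The gain from adding observations
# shrinks roughly as 1/sqrt(N) -- diminishing returns.
r = 0.5  # arbitrary illustrative value
for n in [10, 50, 100, 500, 1000, 5000]:
    se = (1 - r**2) / np.sqrt(n - 1)
    print(f"N = {n:5d}   SE = {se:.3f}")
```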
The most important thing is this sentence of yours, "whether some variable is associated with g or not, without specifying what the 'not' is", which proves you agree with me on this point. This is what I was saying all this time. In MGCFA you know exactly what the competing models are. Not in MCV. Now, how can you test model 1 vs model 2 when you know 1 but not 2? If you can't define 2, it's a dead end.
And the comparison with behavioral genetics does not even sound correct to me, because it's not the purpose of genetic model fitting to find the genes. Also, a more meaningful comparison would be to say that MGCFA can partition shared and non-shared environment and then test the shared environment against the genetic factor (AE vs CE models), whereas in MCV you have only genetic + environment (non-partitioned) and thus you can't evaluate AE vs CE models.
It is possible to use non-g factor loadings in MCV to examine what, if not g, drives the correlation (e.g., Jensen, 1998, p. 146). In practice, it may be difficult to collate enough data for meta-analysis with non-g loadings.
A non-g factor loading is not equivalent to a first-order factor model. And what is its form? First-order correlated factors? Or first-order uncorrelated factors, as in the bi-factor model? Like I said, it's difficult to tell. "Non-g source" is too vague, and you agree with me on that now, but you still believe we can test the g model vs the non-g model even if we do not know what these sources are?
MCV is not a perfect or sophisticated method, but it is a remarkably powerful one.
Try to answer these questions. Can MCV test different models? What would the first-order factor model vs the second-order g model look like? If it can't, can you really claim MCV supports the g model when it cannot test the g model against non-g models? Can you even compare the g and non-g models if MCV cannot specify what the non-g model is, as Dalliard and I implied? If you can't show other models are inferior to the g model, you can't win the debate. That's the purpose of model-fitting analyses.
If, say, the B-W difference operated through the intermediary of the first-order factor of spatial ability, then MCV (d x spatial loading) would produce a positive result. MCV (d x g) would produce mixed results depending on the g-loadings of the spatial subtests (on the Wechsler, that would mean a strongly negative result). Regardless, unless the battery were spatially tilted, MCV would not yield a positive result, though it would take some further investigation to discover the actual source of the B-W gap.
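A minimal numerical sketch of this hypothetical (the loadings below are invented): when d is made proportional to the spatial loadings, r(d x spatial) is positive by construction, while r(d x g) depends on how the g-loadings happen to covary with the spatial loadings in the battery:

```python
import numpy as np

# Invented loadings for a hypothetical 6-subtest battery. The group
# difference d is assumed to run entirely through the spatial factor,
# so d is set proportional to each subtest's spatial loading.
g_load       = np.array([0.75, 0.70, 0.68, 0.62, 0.55, 0.50])
spatial_load = np.array([0.10, 0.65, 0.05, 0.70, 0.60, 0.05])
d = 1.0 * spatial_load  # gap mediated solely by spatial ability

r_d_spatial = np.corrcoef(d, spatial_load)[0, 1]  # = 1.0 by construction
r_d_g       = np.corrcoef(d, g_load)[0, 1]        # sign and size depend on how
                                                  # g- and spatial loadings covary
print(f"r(d, spatial) = {r_d_spatial:.2f}, r(d, g) = {r_d_g:.2f}")
```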
Philbrick, do you approve the paper? Emil, Davide, and I already do so.
MH, what exactly do you want D to add? Summarize briefly. Can it be added in a lengthy footnote or do you wish for a whole new section? To be honest, if you demand a major change, your vote will probably simply be bypassed.
If, say, the B-W difference operated through the intermediary of the first-order factor of spatial ability, then MCV (d x spatial loading) would produce a positive result. MCV (d x g) would produce mixed results depending on the g-loadings of the spatial subtests (on the Wechsler, that would mean a strongly negative result). Regardless, unless the battery were spatially tilted, MCV would not yield a positive result, though it would take some further investigation to discover the actual source of the B-W gap.
There are generally 3 or 4, or more, group factors in most IQ batteries. What we need to know is how many of them can account for the group difference (and, for each of them, to what extent) above what is already accounted for by g.
Also, your idea of r(d*spatial_loading) is nonsense, by the way. Just try to imagine: 10 subtests subjected to EFA. This yields verbal+spatial+memory (let's say you also subjected the correlation matrix to parallel analysis and it suggests 3 factors to be retained, so that your conclusion is strong). Suppose the spatial factor has 3 subtests, which evidently load on this factor of spatial ability (we will assume unidimensionality in all 10 subtests of the battery for simplicity). Then, your spatial loading vector implies that you correlate the 7 other subtests with this spatial factor to create your spatial loading vector, doesn't it? This reduces the number of subtests available for MCV. But wait, there is more. What do you think about the verbal factor (4 subtests), which will be correlated with the other (non-verbal) subtests? You are then going to create another vector of verbal loadings using a different set of subtests than what you used previously for the spatial loading vector. So now, how can you test the following weak version(s) of Spearman's hypothesis using MCV (see the sketch after the list below)?
1. r(d*g) + r(d*spatial) + r(d*verbal)
2. r(d*g) + r(d*spatial) + r(d*memory)
3. r(d*g) + r(d*verbal) + r(d*memory)
4. r(d*g) + r(d*verbal)
5. r(d*g) + r(d*memory)
6. r(d*g) + r(d*spatial)
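A minimal sketch, with invented loadings, of the problem described above: each non-g loading vector is computed over a different subset of subtests, so the listed combinations have no common set of observations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 10-subtest battery: indices 0-3 verbal, 4-6 spatial, 7-9 memory.
d      = rng.uniform(0.3, 1.1, 10)   # illustrative group differences (invented)
g_load = rng.uniform(0.4, 0.8, 10)   # g-loadings for all 10 subtests (invented)

# Per the construction above, the spatial-loading vector comes from
# correlating the 7 non-spatial subtests with the spatial factor, and the
# verbal-loading vector from the 6 non-verbal subtests: each non-g vector
# lives on a different subset of the battery.
spatial_subset = [0, 1, 2, 3, 7, 8, 9]   # the 7 non-spatial subtests
verbal_subset  = [4, 5, 6, 7, 8, 9]      # the 6 non-verbal subtests
spatial_load = rng.uniform(0.1, 0.6, len(spatial_subset))
verbal_load  = rng.uniform(0.1, 0.6, len(verbal_subset))

r_d_g       = np.corrcoef(d, g_load)[0, 1]                        # 10 subtests
r_d_spatial = np.corrcoef(d[spatial_subset], spatial_load)[0, 1]  # 7 subtests
r_d_verbal  = np.corrcoef(d[verbal_subset], verbal_load)[0, 1]    # 6 subtests

# The three correlations are computed over three different subtest sets
# (10 vs 7 vs 6 observations), so there is no single, well-defined set of
# observations on which a combination like r(d*g) + r(d*spatial) +
# r(d*verbal) could be jointly tested.
print(f"r(d,g) = {r_d_g:.2f}, r(d,spatial) = {r_d_spatial:.2f}, "
      f"r(d,verbal) = {r_d_verbal:.2f}")
```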
Like I said, MCV is hopeless. There is no possible salvation for this miserable technique. Insofar as you're merely looking at how strongly x and y are correlated, MCV is fine. If you're going to test your hypothesis against alternatives, you are headed for serious trouble.
MH, what exactly do you want D to add? Summarize briefly
As I said, if you're not going to clarify your argumentation, it's a disappointment. I consider it important enough. That debate is not useless, as some would imply.
But to answer the question: I was already thinking about that last night, because this discussion has been going on for a while, and the more I comment, the more I believe you'll stick with MCV no matter what, so we have to find some sort of arrangement. To recap, I have 3 disagreements with what Dalliard said in the article:
(1). Multivariate genetic analyses support the g model (though g was not modeled).
(2). SH is strongly supported with regard to group differences (citing MCV research).
(3). Wicherts (2004) proves FE gains and BW gap to be unrelated.
For (1), I want Dalliard to specify that the correlation of (genetic) first-order factors suggests, indeed, the existence of g, but that it needs to be modeled explicitly, as in Panizzon (2014), before drawing a definitive conclusion. Dalliard can cite that study, of course (and I recommend it).
For (2), I want Dalliard to specify that MCV does not make clear what the non-g model is (the model that MCV is supposed to evaluate against the g model). Insofar as he agrees with me on that, I think he will not see it as problematic.
For (3), I still believe the argument is flawed because the MI violations in Wicherts probably account for around 0% of the FE gains. On the other hand, I think he is very confident in Wicherts, not in me, so I will not insist. But I would like him to add the Ang et al. (2010) study, for the reasons I suggested in earlier comments.
Ideally, I would prefer all 3 changes to be made, although by far the most important point is (2). So even if only (2) is modified according to what I said, I will give my approval.
That sounds reasonable.
Dalliard, can you make the requested changes?
I approve the paper, yes.
That gives 4 approvals; however, I would prefer if we could get 5. :)
For (1), I want Dalliard to specify that the correlation of (genetic) first-order factors suggests, indeed, the existence of g, but that it needs to be modeled explicitly, as in Panizzon (2014), before drawing a definitive conclusion. Dalliard can cite that study, of course (and I recommend it).
I added a reference to Panizzon et al. (2014) and a brief explanation of their findings. See p. 18. I see no reason to explain this in more detail as my paper is not about defending g -- that the g model is correct is assumed in the paper, otherwise many arguments in it don't make sense.
For (2), I want Dalliard to specify that MCV does not make clear what the non-g model is (the model that MCV is supposed to evaluate against the g model). Insofar as he agrees with me on that, I think he will not see it as problematic.
I added a discussion of MCV. See pages 16-17.
For (3), I still believe the argument is flawed because the MI violations in Wicherts probably account for around 0% of the FE gains. On the other hand, I think he is very confident in Wicherts, not in me, so I will not insist. But I would like him to add the Ang et al. (2010) study, for the reasons I suggested in earlier comments.
Whether or not MI violations can account for the Flynn gaps in Wicherts's paper is irrelevant. Wicherts shows that the causal processes behind the b-w gap and the Flynn effect cannot be the same, and I cite him to that effect. I added a reference to Ang et al. (2010) (see page 26), but I cite it to make a different point than you.
Given that four reviewers have already accepted the paper, you can take it or leave it. I'm not going to change it anymore.
I have added the publication date and made a PDF. If the author can confirm this one is right, then I will proceed with publication.
You have some indenting problems; could you fix those? It should only take a minute.
p. 4 "However ..."
p. 5 "However..."
p. 5 "It is..."
p. 13 "Kaplan's..."
p. 14 "Kaplan..."
p. 16 "Research..."
etc.
If you mean the extra long spaces, they are on purpose. It is because the author uses the "justified" alignment option. I personally hate it, but we have a policy of allowing authors to choose stuff like that.
I have added the publication date and made a PDF. If the author can confirm this one is right, then I will proceed with publication.
The formatting's off in that file. Use this one instead.
Very well.
http://openpsych.net/ODP/2014/08/the-elusive-x-factor-a-critique-of-j-m-kaplans-model-of-race-and-iq/
If you can verify this is right, then I will move this thread.
There's a hyphen missing in the abstract: "methodology for testing for Xfactors." Also, date formatting should be the same for "August 10" and "August 25th." Other than that, it's ok.
My mistake. Fixed.
Moving...
Here's a new version of the article with an initial added to my name so as to get Google Scholar to list the article.
Emil, don't use that version above. The formatting is messed up, and I don't have the program to render it properly now. I'll post a proper version tomorrow.