(2014-Aug-13, 19:03:06)Dalliard Wrote: The genetic correlations indicate that the same genes explain most of the heritability of seemingly unrelated different abilities, e.g. verbal and perceptual ability. This is consistent with the g model. It does not prove the validity of the model, just increases its plausibility. Modelling g explicitly would not prove the g model, either, just potentially increase its plausibility even more.

Normally, a better fit increases the likelihood of a model, and that is why it is selected over the alternatives. Of course, as you say, CFA modeling is not aimed at "confirming" a model, because a theory cannot be proven in science. We can only reject theories, not confirm them; to the extent that we continuously fail to disconfirm a given theory, it becomes more and more plausible, but that still is not proof. So any method taking a "hypothesis-testing" approach should only aim to disprove models (or the theories underlying them), which it can do when our model fits worse than an alternative model. In the case of g vs. non-g, it is not clear at all what to conclude. Because the g model supports the weak version of Spearman's hypothesis, I accept the idea that the g model is superior to the non-g model; but without a superior fit for the g model, I cannot conclude that the evidence is strong. It remains only weak, meager evidence in favor of g.

(2014-Aug-13, 19:03:06)Dalliard Wrote: You will never have a single test that will determine what the correct model is. There are always alternative models in CFA that fit equally well. You will have to look at the big picture, all the evidence.

Alternative models, if I'm not mistaken, are models that are mathematically equivalent but conceptually different, e.g., models obtained by reversing path arrows. When two models have the same df, you expect equal fit. See below.

Quote:https://groups.google.com/forum/#!msg/la...zpGsl_YesJ

This is one of the quirks of CFA/SEM. In theory, this second-order model should provide exactly the same fit as the (correlated) three-factor model, since the number of free parameters is the same. But it often fails. Sometimes, adding std.lv=TRUE may help, but not always.

By equal fit, I mean "exactly" the same. In Dolan (2000), the models are not the same, and the df weren't the same either; the model fits are very similar, but not "exactly" equal. Still, this is sufficient to conclude that there is no proof for or against the g model.
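To see why the quoted poster expects exactly equal fit, here is a quick degrees-of-freedom count in Python. The setup is hypothetical (9 indicators, 3 per factor, factor variances fixed to 1 as with lavaan's std.lv=TRUE); the point is only that replacing the 3 factor covariances with 3 second-order loadings leaves the free-parameter count, and hence the df, unchanged:

```python
def cfa_df(n_indicators, free_params):
    """Model df = distinct observed moments minus free parameters."""
    moments = n_indicators * (n_indicators + 1) // 2
    return moments - free_params

# Correlated three-factor model (factor variances fixed to 1):
#   9 loadings + 9 residual variances + 3 factor covariances
correlated = 9 + 9 + 3

# Second-order model: the 3 factor covariances are replaced by
# 3 second-order loadings (g variance fixed to 1, first-order
# disturbances constrained so each first-order factor keeps unit variance)
second_order = 9 + 9 + 3

print(cfa_df(9, correlated), cfa_df(9, second_order))  # 24 24
```

With three first-order factors the second-order structure is just-identified at the factor level, which is why the two models are equivalent on paper; the quoted post notes that estimation quirks can still make the fits diverge in practice.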

With regard to the other methods, I agree that their results accord more with g models than not. Unfortunately, Dolan and others believe these methods (e.g., MCV, PCA) are weak, and they think we should give more weight to MG-CFA.

Quote:My point in citing Wicherts on the Flynn vs. b-w gap is that the causal processes behind these gaps are different, as indicated by MI analyses.

Yes, except that his analysis is flawed: it cannot show what he wanted it to show. Again, I could have agreed if loading invariance were violated, but it isn't. Only the intercepts are biased, and the bias is two-sided, so it tends to cancel out. Imagine, for example, that the BW difference is somewhat biased (with cancellation at the total test score), but the bias is modest (as assessed by a modest decrement in model fit). Now take another test with the same groups and find a strong violation of MI at the intercept level, yet the same mean score difference. Why? Because the biases are stronger: instead of -1 IQ point (against blacks) on subtests 1-4 and -1 IQ point (against whites) on subtests 5-8, you now have -5 points on subtests 1-4 and -5 points on subtests 5-8. Given this, the total score difference is the same. You cannot say, as Wicherts claimed, that the IQ difference is biased merely because there are larger IQ losses for either group on some or all subtests. Indeed, it makes no sense to speak of "bias" without mentioning the direction of bias when the pattern is two-sided like this.

The purpose of his analysis was to show that the BW gap cannot be explained by psychometric bias while the Flynn gain can, hence his conclusion that the two are unrelated. And yet it is untrue that FE gains can be explained by psychometric bias: if they could, the FE gains would have vanished once the bias was accounted for, and I don't see that.

---

EDIT

After reading Piffer's comment, I think I agree with him. It's not a question of how much variance these genes explain (assuming R² is an effect size, that is...), but simply of showing that we now know of such genes.