Back to [Archive] Post-review discussions

[ODP] Crime, income and employment among immigrant groups in Norway and Finland
"Specificity" is preferable to "specificness". Also, the predictive value of variables differs between studies quite a bit, and it would be nice to address that. I am not competent to assess the statistical techniques used in this paper.
Admin
I asked Wicherts to review the paper, but he declined due to time constraints. I will ask Meisenberg too, but I was looking to get a non-hereditarian on the review board.
P1 = predictor 1, P2 = predictor 2, etc.
V1 = outcome var 1, V2 = outcome var 2, etc.


I understand the labels, but I quoted the sentence "An interaction would be that a given predictor P1 is better at predicting variable V1 than P2, but that P2 is better at predicting V2 than P1" for another reason: I don't understand its meaning. By saying V1 and then V2, you have two different regression models in mind.

Originally, the question was asked by Dalliard :

3) "Are some predictors just generally better at predicting than others, or is there an interaction effect between predictor and variables?"

Not sure what you mean by interaction here. The question is whether any of the predictors have unique predictive power.


What you describe is whether the inclusion of interaction terms affects the relative strength of your independent variables within the same regression, not across two different ones.

Another thing I don't understand: an interaction between predictors is meant to answer the question of whether adding an interaction term such as P1*P2, with or without polynomial terms (P1 + P2 + P2^2 + P2^3 + P1*P2 + P1*P2^2 + P1*P2^3), changes your coefficients. If the slopes are curvilinear rather than linear, the addition of an interaction term will fit the data better. In general, when an interaction is meaningful, the main effects (i.e., P1 and P2) will be attenuated. Even if one of the two predictors is more attenuated than the other, I don't think that is relevant here. The interpretation of the main effects becomes totally different once you add interactions: with an interaction, P1 and P2 are the effects net of the interaction, but the interaction term itself includes and confounds the effects of both.

I remember that several months ago I attempted a regression with Wordsum as the dependent variable and race + SES as the independent variables. With the race*SES interaction included, the coefficient of race was near zero. A plot of the predicted values from the model revealed that at very low SES levels the black-white gap in Wordsum was negligible, but that it increased considerably as SES increased. In such a situation, how can we say that race has become less important?

You cannot say that SES is more important than race just because the interaction term nullifies the main effect of race, because the interaction term confounds the two effects. (When I say "more important", I am of course talking about the direct effects of the independent variables.)
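The phenomenon described above can be sketched in a few lines. This is a minimal illustration with simulated data (not the Wordsum/GSS data discussed here); the variable names and effect sizes are hypothetical, chosen only so that the outcome contains a pure interaction effect:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
race = rng.integers(0, 2, n).astype(float)  # hypothetical binary group indicator
ses = rng.normal(2.0, 1.0, n)               # hypothetical SES score (mean 2)
# Simulated outcome: the group gap grows with SES (a pure interaction effect)
y = 2.0 * ses + 1.0 * race * ses + rng.normal(0.0, 1.0, n)

def ols(X, y):
    """Ordinary least squares coefficients via numpy's least-squares solver."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
b_main = ols(np.column_stack([ones, race, ses]), y)             # no interaction
b_int = ols(np.column_stack([ones, race, ses, race * ses]), y)  # with race*SES

# Without the interaction term, the race coefficient absorbs the group gap;
# once the interaction is added, the race main effect collapses toward zero.
print("race coef, main-effects model:", round(b_main[1], 2))
print("race coef, interaction model: ", round(b_int[1], 2))
```

In the main-effects-only fit the race coefficient is clearly nonzero, while in the interaction fit it is near zero, even though the group gap has not gone away; it has merely been moved into the interaction term, which is exactly the confounding of effects described above.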
Admin


Meng Hu,

Here's a very simple scenario:

Imagine we have three predictors and 5 outcome variables.

Suppose we obtain their prediction vectors, i.e. the correlations between each predictor and each of the outcome variables.


vec.a = c(.1, .2, .3, .4, .5) # prediction vector a
vec.b = c(.15, .15, .4, .6, .8) # prediction vector b
vec.c = c(0.26, -1.12, -0.33, -0.34, 0.06) # prediction vector c
DF.vec = cbind(vec.a, vec.b, vec.c) # combine vectors into a matrix
DF.vec.cor = cor(DF.vec) # correlation matrix of the vectors
round(DF.vec.cor, 2)


Which gives:

      vec.a vec.b vec.c
vec.a  1.00  0.97  0.11
vec.b  0.97  1.00  0.33
vec.c  0.11  0.33  1.00


So we see that the correlation between a and b is very high: they function in the same way, though they may not be equally strong predictors. However, c is clearly very different and has low correlations with the other two. This means there is a predictor x outcome variable interaction.

I am only talking about correlations, not regression models. I am not talking about adding interaction variables (e.g. a*b) to regression models.

I am also not talking about predicting unique parts of the variance in multiple regression.

Apparently, the term has confused some readers. What term would you prefer me to use? Perhaps just talk about testing the generality vs. specificity of the predictors' predictive power?

---

Also, Dalliard's point made in the review of the International S factor paper about the use of variables that have not been reversed holds here as well. If one reverses them so that predictors always predict something better with a positive value, the correlations will get smaller.

I have used the data as given by the sources and not biased them in any way. Reversing them arguably makes the results less interpretable e.g. using Islam prevalence to predict low-crime as opposed to high crime.

However, suppose one really wants to minimize correlations. Doing it consistently to make as many correlations positive as possible, the new mean abs. correlations are .54 (Norwegian datasets) and .90 (Danish). The Danish dataset is much better since it has much less sampling error (all 25 vars have near N=70). So even arguably biasing the results against the hypothesis yields a strong positive outcome.
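The sensitivity to reversal can be illustrated with the toy vectors from the pedagogical example earlier in the thread (not the actual Norwegian data). Reversing an outcome variable flips the sign of its entry in every prediction vector, which can move the vector intercorrelations in either direction:

```python
import numpy as np

# Toy prediction vectors from the earlier example
vec_a = np.array([0.10, 0.20, 0.30, 0.40, 0.50])
vec_b = np.array([0.15, 0.15, 0.40, 0.60, 0.80])

r_before = np.corrcoef(vec_a, vec_b)[0, 1]

# Reverse outcome variable 5: its entry flips sign in both prediction vectors
vec_a_rev = vec_a.copy(); vec_a_rev[4] *= -1
vec_b_rev = vec_b.copy(); vec_b_rev[4] *= -1
r_after = np.corrcoef(vec_a_rev, vec_b_rev)[0, 1]

print(round(r_before, 2), round(r_after, 2))  # the vector correlation changes
```

With these particular numbers the correlation happens to rise slightly; with real data and a consistent positive-direction coding the correlations can instead shrink, which is the scenario discussed above.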
Admin
Someone has created the Open Science Framework. It is possible to create projects there and have them host the files. This is faster than using the forum to upload files and potentially saves me from having to upgrade the server for more space.

I have created a repository for this project: https://osf.io/emfag/
Admin
I asked Meisenberg to review the paper.

The data and results seem basically sound. The correlations between predictors look unusually high. I am more used to correlations of about 0.7 between “development indicators” such as IQ and lgGDP, but it seems you base these correlations only on those countries that have sent sufficiently large migrant groups, therefore sample sizes are small. Mainly, there are many little ways in which this paper can be improved. Especially, you have to make sure that your style is clear so that readers can follow it without irrelevant cognitive effort. I am attaching the file with lots of sticky notes that have specific suggestions on how the writing can be improved.

Gerhard


His further comments are here: https://osf.io/7jaxm/
Admin
Replying in the order of the notes in the PDF.

Do you mean variables pertaining to the country of origin, or the destination country? This should be made clear in this sentence.


Made it more clear.

These two sentences are linguistically suboptimal. Better: This is because part of the reason...is that the people living there possess behavioral traits that are unfavorable for the generation of wealth. When they move..., they will still possess these traits,.... Also, does the hypothesis specify the mechanism of behavioral transfer (genetic or cultural), or is it agnostic about mechanisms? But perhaps you put this in the discussion section.


Sentences seem fine to me. Added a note about the agnosticism of the spatial transferability hypothesis about the cause of the stability of traits.

Does this last paragraph refer only to tertiary educational attainment? In that case it should not be a separate paragraph. Also, does the website define what specifically landbakgrunn means? For example, how would it list someone with Norwegian citizenship and a Moroccan mother and Norwegian father? One of the central questions in migration research is in what way first-, second-, and third-generation immigrants differ from each other and from the natives of the host country.


Joined it with the last paragraph.

Added a footnote about the meaning of "landbakgrunn". I could not find any clarification on their website. It is probably a legalistic definition akin to Denmark. There is no information about immigrant generation. Presumably, they are mostly 1st gen. immigrants.

For simple correlations we don't really have predictors, only correlates. "Predictor" implies causality, which we cannot infer from correlation alone. In multiple regression the term "predictor" should be avoided as well ("independent variable" is the better term), because you can only say that the model predicts the outcome (the "dependent variable" in a statistical sense), again without proving causality. Check this throughout the following text.


"Predictor" does not imply causality; it is simply another term for "independent variable", which is far longer. Added a footnote about this.

Compare: https://en.wikipedia.org/wiki/Dependent_and_independent_variables#Statistics_synonyms

This would be expressed more clearly if you state that you need 1. a sufficiently large sample of countries so that country comparisons can produce statistically significant results, and 2. a sufficiently large number of individuals representing each country to reduce random sampling errors, and that there is a tradeoff between these two requirements.


I think it is fine the way it is. I don't care that much about statistical significance. I am interested in effect sizes. Standard statistical significance tests are not suited for grouped data anyway.

Observations about the results in this table: The correlations of crime and education with IQ and lgGDP that you describe in Norway and Finland are very similar to those at the country level worldwide: Crime, and especially violent crime, correlates more with IQ than with lgGDP. In regression models, lgGDP is usually not a significant predictor of crime, but IQ and racial diversity are the important independent predictors. Education (measured as educational degrees or average years in school) correlates more with lgGDP than with IQ, most likely because rich countries can afford extensive school systems. You may want to discuss your results on this background, if you don't do it in your discussion section already.


I'd rather not. The crime variables have small samples, which may affect their strength. The Danish dataset is better suited for this kind of comparison because it has a larger sample of countries and is both age-controlled (by age groups) and sex-controlled (men only). See the previous paper: http://openpsych.net/ODP/2014/05/educational-attainment-income-use-of-social-benefits-crime-rate-and-the-general-socioeconomic-factor-among-71-immmigrant-groups-in-denmark/

Better: Correlations of 1 indicate perfect prediction (in the statistical sense), and 0 means no relationship.


This is not right. See the pedagogical example above: http://openpsych.net/forum/showthread.php?tid=136&pid=1704#pid1704

These correlations are not predictor x outcome variable correlations, they are predictor vector intercorrelations.

This could be expressed much more clearly. What you seem to mean is collinearity, which means correlations among the independent variables in a regression model. Better write that you want to determine the correlations among those country-level variables that predict migrant outcomes in Norway and Finland. Also, in the table you should indicate the number of countries on which these correlations are based. You mention N = 9 in the text, but it is easier for readers when they don't have to search the text when trying to make sense of the table.


How would you rewrite it? It seems clear to me. I am waiting for the input of Dalliard and Meng Hu about what terminology they prefer me to use, since it seems that the word "interaction" causes misunderstandings.

The table does not concern collinearity as such (one could assess that with variance inflation factors). The paper does not include discussion of multiple regression results, as that seemed uninteresting to me.
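For readers unfamiliar with the measure: a variance inflation factor is 1/(1 - R^2) from regressing each independent variable on all the others. A minimal sketch in Python, using simulated predictors (not the paper's data):

```python
import numpy as np

def vif(X):
    """Variance inflation factors: 1 / (1 - R^2) from regressing
    each column of X on all the remaining columns."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = x1 + rng.normal(scale=0.3, size=500)  # strongly collinear with x1
x3 = rng.normal(size=500)                  # independent of the others
factors = vif(np.column_stack([x1, x2, x3]))
# x1 and x2 get large VIFs; x3 stays near 1
print(np.round(factors, 1))
```

This mirrors the situation described for the predictors here: the four highly intercorrelated ones would show large VIFs and thus be of little use together in multiple regression, while a weakly correlated one (like Islam) would stay near 1.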

Added information about sample size to table captions for both predictor vector analyses.

In this place, you can delete "or fewer".


Deleted.

Syntax of this sentence.


Syntax seems fine to me.

To reduce cognitive effort needed to read and understand this, it would be better to write something more like "I want to know how socioeconomic outcomes of migrants in Norway (measured as S factor scores) relate to country-level variables measured in the migrants' countries of origin."


Added "country-level" to the sentence. The term "S factor score" is used throughout the paper and does not need further explanation. However, changed the abstract to the non-abbreviated term, as well as the section (4) where the international S scores are introduced. Note that section 5 also mentions the abbreviation.

There are similar differences for lgGDP, IQ and Altinok. But none of them are very large, and because of the limited number of countries they may be chance findings.


There are not. E.g. IQ x local S for the imputed Danish dataset is .54; for Norway it is .59. You must mean the smaller datasets. Note that the IQ predictor's strength decreases with increasing sample size, while the Islam one doesn't change much; the others tend to increase. The differences in the smaller samples are probably statistical artifacts.

Here, make clear whether you mean the S factor calculated for immigrant groups in the host country (calculated from income, crime rate etc), or for the countries of origin (calculated from lgGDP, national IQ etc).


Made it more clear.

----

An updated draft can be found in the paper repository. https://osf.io/g2fsr/
"Are some predictors just generally better at predicting than others, or is there an interaction effect between predictor and variables? An example of an interaction effect would be that Islam is better than IQ as predicting crime, while IQ is better at predicting educational attainment."

Interaction in regression analysis means that the predictor variables interact non-additively. For example, if the relation between predictor A and outcome variable Y varies at different levels of predictor B, then there's A x B interaction. Interaction means that aside from the additive main effects of the predictors there are interactive effects between them. To say that there is an interaction between predictor and outcome variables is a misuse of terminology.

You should include also predictor intercorrelations in Table 2 (below the diagonal) because reporting just the correlations between the "prediction vectors" is confusing.
Admin
Predictor intercorrelations can be seen in the supplementary material. E.g. here: https://osf.io/3752j/ Rownames are missing due to the way the export function works. They are in the same order as the colnames. So e.g. IQ x Altinok is .91. The 4 non-Islam predictors have high intercorrelations and so are not much use together in MR. Islam does not correlate highly, so it can be combined with one of them in MR.

I will work on a version that fixes the confusion with nonstandard use of "interaction".


So add the predictor correlations to the paper. They are essential for interpreting the results.
Thanks for the clarification, Emil. In fact, it wasn't just the word "interaction" but also the words "predictor" and "outcome" that had confused me. Generally, people refer to regression when they use these terms. If you had used the word interaction alone, I would not have thought about regressions.

I'm unsure about what terminology would fit best.
Admin
Dalliard and Meng Hu,

The new version has:
- Some slight language changes
- A paragraph discussing predictor intercorrelations
- A section in the Appendix with the predictor intercorrelations
- Reworded all the instances of "interaction"

PDF is here: https://osf.io/g2fsr/
You know I have already approved, but I just wanted to say that I think the description below is clear.

Are some predictors just generally better at predicting than others, or is there specificity such that while predictor A may be better at predicting outcome X, predictor B is better at predicting outcome Y? An example of this would be that Islam is better than IQ as predicting crime, while IQ is better at predicting educational attainment.
Admin
MH,

Some of the approvals were given when the paper was much shorter and less comprehensive. Even though there is no policy against this, it seems clearly wrong for authors to first submit a simple paper, then gain reviewers' approval, then drastically change the paper and claim that reviewers have already approved it.

Dalliard is a harsh critic, so I want to get his approval. I tried to get both Wicherts and Flynn to review the paper, but both declined (Wicherts cited lack of time due to his presence on other editorial boards, Flynn claimed lack of expertise). The journal ought to have at least some reviewers hostile to genetic models of group differences, but who to invite?

---

Approvals:
Piffer, early version
P Frost, early version
Meng Hu, early version
Meng Hu, indirect approval, later version
Chuck, later version

Since 4 approvals are necessary, and 2 are given for the later version, I will wait.
Trust me, if there were really something definitely wrong, I would say it. As you already know, my only problem is with your application of imputation (only three imputations, no mention of the % of missing cases per variable, and the sentence that seems to suggest that imputation can deal with the problem of data "not missing at random", which is probably not true in most cases). But since the imputation provides very similar results to the other methods, I don't think I can reject it.

Concerning the last reviewer, maybe try Kevin Beaver. He has published (for instance, see here) on the topic of criminality between racial groups such as blacks and whites.


You're not really testing a genetic model here -- so that shouldn't matter in this instance. To begin to test one, this way, you would need to decompose associations by migrant generations.
Admin
You know as well as I do that people who are against the genetic model tend to be against... everything else too.

Most data isn't broken down by generation. 3rd generations are beginning to emerge in DK. The statistics agency follows them closely to see if they perform better than 2nd gen.


I'll open a separate thread to discuss the matter; the review section of your paper really isn't the place to do so. Can you try to secure approval for this paper?
1) "Islam correlates around weak to moderately with the others (-.14 to -.43, mean -.29)."

--> "correlates weakly to moderately"

2) "that Islam is better than IQ as predicting crime,"

--> "the prevalence of Islam predicts crime better than (national) IQ"

Other than that, the paper is OK and I approve it for publication.
Admin
Fixed both. New revision: https://osf.io/g2fsr/

This makes 3 approvals for the extended study. So I need approval from either Piffer or Peter Frost.