[ODP] Crime, income and employment among immigrant groups in Norway and Finland
I have a good knowledge of imputation. If you want, I can guide you toward the best methods. Not all imputations are equivalent; some are adequate only for some types of data. What data are you using? Is it the file named "dataset.csv" that you uploaded here? Because I'm not sure I can recommend the use of imputation. I have explained that here. Your variables must be highly correlated (at least around 0.40 or so). The ratio of subjects to variables (entered into the imputation procedure) should not fall below 10:1. Alternatively, Hardt et al. (2012) recommend a maximum ratio of 1:3 for variables (with or without auxiliaries) against complete cases; that is, for 60 people having complete data, up to 20 variables could be used. Auxiliary variables are those that can serve as substitutes because of their high correlation, such as identical variables measured at different points in time (i.e., repeated measures). If the % of missing cases is too high, as in your variables ViolentCrimeNorway, LarcenyNorway, ViolentCrimeFinland, and LarcenyFinland, I can tell you'll have big trouble.

Besides, the superiority of imputation over complete-case analysis is not restricted to FA, but extends to all kinds of analyses. For example, in that case, multiple regression is inefficient, and then nearly 100% of such analyses published in various other journals would be wrong, because they almost never apply imputation, either because the authors don't know a thing about it or because it's time-consuming. A minimum of 5 imputations is recommended, but the number can be higher depending on the features of your data. With 5 imputations, you need to run the analysis five times, once on each imputed data set, and then average the results; as is recommended, you should also provide the standard error, CI, or standard deviation, to let readers know how much the estimates vary over the imputations. If your estimates vary too much, that may be a problem, a signal that your estimate is not stable and that maybe you'll need more imputations: 10, 20, 30, etc. But repeating the analysis 30 times with 30 data sets is something researchers don't want to do. And I understand that...
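To make the "run the analysis m times, then average and report the variation" step concrete, here is a minimal sketch of pooling with Rubin's rules (Python, for illustration only; the function name and the example numbers are hypothetical, not from the paper):

```python
from statistics import mean, stdev
from math import sqrt

def pool_estimates(estimates, variances):
    """Pool one parameter across m imputed data sets (Rubin's rules).

    estimates: per-imputation point estimates (length m)
    variances: per-imputation squared standard errors (length m)
    """
    m = len(estimates)
    q_bar = mean(estimates)                 # pooled point estimate
    w_bar = mean(variances)                 # within-imputation variance
    b = stdev(estimates) ** 2               # between-imputation variance
    t = w_bar + (1 + 1 / m) * b             # total variance
    return q_bar, sqrt(t)                   # pooled estimate and its SE

# Example: the same regression slope estimated in m = 5 imputed data sets.
betas = [0.42, 0.45, 0.40, 0.44, 0.43]
ses = [0.10, 0.11, 0.10, 0.12, 0.10]
beta_pooled, se_pooled = pool_estimates(betas, [s ** 2 for s in ses])
```

If the between-imputation variance b is large relative to the within-imputation variance, the estimates are unstable across imputations, which is exactly the warning sign described above that more imputations are needed.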

Personally, I prefer maximum likelihood (ML) estimation, because multiple imputation (MI) gives me a headache about choosing which kind of MI is appropriate for the data at hand. If you have AMOS, or R, you can easily use ML.

*****

In the last sentence of your paper, "All datasets and source code is available in the supplementary materials.", it's optional, but I recommend you add "on the OpenPsych Forum".
If someone wants to use multiple imputation (MI), I have nothing against it. But (s)he seriously needs to read a lot about MI. I'm serious. When I read about it, it really gave me a headache. And I have also read some data analysts (Paul D. Allison, maybe?) saying the same thing: if you are not very sure about what you are doing, then don't do it. A lot of researchers don't know the proper way to use MI, and some have reported that others used a sub-optimal imputation option.

This said, can you tell me the meaning of this syntax?

#impute
DF.norway.miss.1.impute = mi(DF.norway.miss.1, n.iter=200) #imputes, needs more iterations
DF.norway.miss.1.imputed = mi.data.frame(DF.norway.miss.1.impute, m = 3)
DF.norway.miss.2.impute = mi(DF.norway.miss.2, n.iter=200) #imputes, needs more iterations
DF.norway.miss.2.imputed = mi.data.frame(DF.norway.miss.2.impute, m = 3)

Why are there norway.miss.1 and norway.miss.2? Does that mean you use 2 data imputations? I ask because normally, in the literature, the number of imputed data sets is called "m". And in your data, m=3. So, does that mean you use 3*2=6 imputations? [edit: no, I understand now. You had 2 data sets, one with N=18 and the other with N=26, so in fact you use 3 imputations]

Also, can you tell me the % of missing values per variable? Normally, the more missing values you have, the more imputations you need.
You should make explicit how many imputations you used (m=3, or more). And remember that the more missing values you have, the more imputations you need.
http://www.statisticalhorizons.com/more-imputations
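The per-variable missingness is easy to compute and report. A small sketch (Python; the data and column names are made up, purely to show the kind of summary meant here):

```python
# Hypothetical data: None marks a missing value.
data = {
    "ViolentCrimeNorway": [1.2, None, 0.8, None, 1.1],
    "LarcenyNorway":      [3.4, 2.9, None, 3.1, 3.0],
}

def pct_missing(values):
    """Percentage of missing (None) entries in one variable."""
    return 100.0 * sum(v is None for v in values) / len(values)

summary = {name: pct_missing(col) for name, col in data.items()}
# Per the rule of thumb discussed at the link above, the number of
# imputations m should grow with the fraction of incomplete cases.
```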
OK, I managed to keep it at 8 pages by dropping some of the scatter plots.

New version is attached as well as updated supplementary material.

You didn't make the corrections regarding the intro [url=bio-ecological]which I noted previously[/url].
The essence of imputation is that you can never get exactly the same result twice. You only need to note this in the paper, as you did. But next time, I recommend you use more imputations. Ideally, the number of imputations should be a function of the % of missing values.
I remember the first time I used imputation; it was in AMOS. I requested several data sets, one by one, and when working with each of them, I got identical results. I discovered afterwards that I was using the "regression imputation" option, whereas the recommended option would have been "stochastic regression imputation" (even though AMOS is a bad tool for creating imputed data sets). With the latter option, you cannot get identical data sets. And all data analysts will tell you not to work with imputation that is not "stochastic". In the former case, there is no random component (i.e., no error term), and you will under-estimate the standard errors. When I said it's not possible to get identical data sets, I was referring to imputation with a random component, as is usually recommended.
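To make the distinction concrete, here is a sketch of the two options on simulated data (Python, illustration only): deterministic regression imputation fills in the same fitted value every time, while the stochastic variant adds a random residual draw, so repeated imputations differ:

```python
import random
random.seed(1)

# Simulated bivariate data; y is missing for two cases.
x_obs = [1.0, 2.0, 3.0, 4.0]
y_obs = [1.1, 2.0, 2.9, 4.2]     # complete cases used to fit the line
x_mis = [2.5, 3.5]               # cases whose y is missing

# Fit y = a + b*x by least squares on the complete cases.
n = len(x_obs)
mx = sum(x_obs) / n
my = sum(y_obs) / n
b = sum((x - mx) * (y - my) for x, y in zip(x_obs, y_obs)) / \
    sum((x - mx) ** 2 for x in x_obs)
a = my - b * mx
resid_sd = (sum((y - (a + b * x)) ** 2
                for x, y in zip(x_obs, y_obs)) / (n - 2)) ** 0.5

# Deterministic regression imputation: identical on every run, no error term.
det = [a + b * x for x in x_mis]

# Stochastic regression imputation: add a draw from the residual distribution,
# so two runs (or two imputed data sets) will not be identical.
sto = [a + b * x + random.gauss(0.0, resid_sd) for x in x_mis]
```

Because the deterministic version puts every imputed point exactly on the regression line, it understates the residual variance, which is why the standard errors come out too small, as noted above.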
1) "Recent studies show that criminality and other useful socioeconomic traits"

I don't think criminality is a "useful socioeconomic trait." Perhaps "important social and economic characteristics"?

2) Use the equal or greater than sign (≥) rather than >=.

3) "Are some predictors just generally better at predicting than others, or is there an interaction effect between predictor and variables?"

Not sure what you mean by interaction here. The question is whether any of the predictors have unique predictive power.

4) "using multiple imputation8 to impute data to cases with 1 or fewer missing values"

How is it possible to have fewer than 1 missing value?

5) "Table 4 shows description statistics"

Descriptive statistics.

6) "the squared multiple correlation of regression the first factor on the original variables"

Word missing or something.

"Factor analytic methods require that there are no missing values. The easiest and most common way to deal with this is to limit the data to the subset with complete cases. This however produces biased results if the data are not missing completely at random ...For the above reasons, I used three methods for handling missing cases"

One of the MI assumptions is that the data are MAR. Why would the possibility of MNAR then be a reason to use it (as opposed to deletion), as your wording suggests?
An interaction would be that a given predictor P1 is better at predicting variable V1 than P2 is, but that P2 is better at predicting V2 than P1. A predictor x outcome variable interaction. This would show up as correlations below |1| between the prediction-correlation vectors. However, surprisingly, they were all close or close-ish to 1.
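A toy sketch of what this means (Python; the numbers are made up, not from the paper): each predictor gets a vector of its correlations with the outcome variables, and a correlation near |1| between two such vectors means the two predictors rank the outcomes the same way, i.e. no predictor x outcome interaction:

```python
import math

def pearson(u, v):
    """Plain Pearson correlation between two equal-length vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

# Hypothetical prediction-correlation vectors: how strongly predictors P1 and
# P2 each correlate with four outcome variables.
p1 = [0.60, 0.55, 0.70, 0.50]
p2 = [0.61, 0.56, 0.71, 0.51]   # parallel profile (constant offset): no interaction

r = pearson(p1, p2)             # comes out at 1, since the profiles are parallel
```

A value of r well below 1 would instead indicate that the predictors differ in which outcomes they predict best, i.e. the interaction described above.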

Perhaps someone here can translate this for me? I don't understand the entire output.
> DF.Denmark.predict.cor
           IQ Altinok Islam logGDP S.score
IQ       1.00    0.99 -0.96   0.98    0.99
Altinok  0.99    1.00 -0.94   0.98    0.98
Islam   -0.96   -0.94  1.00  -0.94   -0.96
logGDP   0.98    0.98 -0.94   1.00    0.99
S.score  0.99    0.98 -0.96   0.99    1.00