
[ODP] Increasing inequality in general intelligence and socioeconomic status as a res
2.3. “This has repeatedly been found for so many years that it hardly bears repeating”. Too informal?
8. It should be made more clear in the text that the data reported in fig.7 refer to simulation and not actual data. Also explain that you could not find this kind of data in the military Danish draft data, because it would have been useful to test your model against the actual data.
Add these explanations before or after this paragraph: "In our modeled scenario, we decided to examine IQs 130 and 70, which are usually used as the thresholds for intellectually gifted and disabled, respectively. Figure 7 shows just this, along with the ratio of disabled per gifted using the no-gains model."
Admin
Dear Piffer,

2.3. “This has repeatedly been found for so many years that it hardly bears repeating”. Too informal?


No rules against that.

8. It should be made more clear in the text that the data reported in fig.7 refer to simulation and not actual data. Also explain that you could not find this kind of data in the military Danish draft data, because it would have been useful to test your model against the actual data.


It is already clear. Both the text referring to the figure and the figure caption mention it.

Figure 7 shows just this, along with the ratio of disabled per gifted using the no-gains model.

...

Figure 7: Percents of disabled and gifted individuals and their ratio, Denmark 1980-2014. Results based on the no-gains model.


How would you like us to make it more clear? In the figure title? I have added "Simulation results based on census data" to the figure title.

I don't see how anyone could be confused, since the paper is about modeling and there is no mention of any actual data aside from the one army study.

Draft updated to version 9.
Dear Piffer,

2.3. “This has repeatedly been found for so many years that it hardly bears repeating”. Too informal?


No rules against that.


Just a suggestion; some readers (especially hard-nosed academics) may find it too informal. Personally I don't care.

How would you like us to make it more clear? In the figure title? I have added "Simulation results based on census data" to the figure title.

I don't see how anyone could be confused, since the paper is about modeling and there is no mention of any actual data aside from the one army study.

Draft updated to version 9.


Modeling has two phases, development (theory) and testing (empirical), so it can also entail testing the model against real data (the empirical part), unless you're writing a purely theoretical paper, which does not seem to be the case. Be that as it may, I think it's fine now since you added "Simulation results based on census data" to the figure title, which makes it clearer to the reader.
I approve the paper as it is, my review is done.
I wanted to make sense of the R syntax, but it has exhausted my patience. Honestly, if R programmers expect anyone to care about reviewers examining the syntax given by the researchers, they should make R simpler to understand (e.g., the use of multiple [], {} and () is exceedingly exhausting for my little brain). Now, unless a reviewer is a statistician and a programmer, and is willing to spare some time, he will not look at it. But that's beyond my expertise. I understand R, but only simple code.

However, I understand the description of the models you use. Still, I don't get several things. You say:

Then, for each year, we calculated the composite population using the population data and their IQs (using the same data as in Section 3). The plot of the results is shown in Figure 6.


But in Figure 6, the legend reads:

Figure 6: Change in mean IQ and SD over time in Denmark modeled from population data by country of origin and national IQs.


Why do I find it odd here? Given the above paragraph, you use the actual data and calculate the means/SDs per year, so it's the observed data. In the legend of Figure 6 it says "modeled", as if you had used some statistical modeling, while you have only made a descriptive analysis. Correlation and Cohen's d, for example, are this kind of descriptive analysis. At the beginning of section 4, you say that the model presented in figure 6 did not assume g gains for immigrants. As I said, it's odd because in the description you merely said you calculated the means and SD over time. Do you mean "no g gain model" because your descriptive analysis does not incorporate IQ gains for immigrants? I ask this question based on what I understand (not much) of your syntax below:

# IQ vector - no gains
IQ.vector = unlist((DF["IQ"]-100)/15) #standardized IQs
names(IQ.vector) = rownames(DF) #set names again
#IQ.vector = c(-2,2) #for testing purposes

#IQ vectors for gains
for (case in 1:length(IQ.vector)){ #loop over each IQ
  if (IQ.vector[case] < -0.18666667){ #is it lower than DK?
    diff.to.DK = (-0.18666667-IQ.vector[case])
    IQ.vector[case] = IQ.vector[case]+diff.to.DK*.75 #change this value for the other scenarios
  }
}


And, more important: when one wants to examine which model has the better "fit" to the data (I put "fit" in quotes because there are no fit indices), you need to compare the differing models (presented in Table 2) with what is observed in the actual data. Why I'm perplexed is that in section 5 you have plotted the scenario of no g gain. In section 6 you have listed in Table 2 the expected outcomes for the different scenarios. In section 7 you talk about the military Danish data, but I don't see the link between section 7 and sections 5-6. In section 7 you only mentioned that the SD of the immigrants is higher than that of the natives. I don't see how it helps to evaluate whether the "no g gain" model is better than the others.

One other question here:

The cause of the larger than expected SD is perplexing. The fact that some non-Danes are classified as 'Danish origin' means that the SD should be smaller than modeled, not larger.


Why smaller? I'm not sure I get the idea. Also, when you say "larger than expected SD": you also said that your model predicts an 11.3% higher SD, but the actual data show 14.2% higher. But these values are still close, no? Also, what do you mean by "Using the model to predict this value using 2003 data, gives 11.3% which is not too far off (estimated SD's 15.01 for 'western' and 16.70 for 'non-western')."? Which model exactly?
Admin
MH,

I wanted to make sense of the R syntax, but it has exhausted my patience. Honestly, if R programmers expect anyone to care about reviewers examining the syntax given by the researchers, they should make R simpler to understand (e.g., the use of multiple [], {} and () is exceedingly exhausting for my little brain). Now, unless a reviewer is a statistician and a programmer, and is willing to spare some time, he will not look at it. But that's beyond my expertise. I understand R, but only simple code.


The code is extensively commented, so if one knows R and statistics, one can follow it.

[], {} and () are not the same. [] chooses subsets/values by index. E.g. if values = c(5,2,7), then values[1] is 5, values[2] is 2 and values[3] is 7. {} is used in control flow. () is used for calculations when the order of operations is important. [size=xx-small](To make it worse, [[]] is how one selects an item from a list in R.)[/size]
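To see them side by side, here is a minimal snippet you can paste into R (the values are just the ones from the example above, plus a made-up list):

values = c(5, 2, 7)       # a numeric vector
values[2]                 # [] selects by index: returns 2
for (v in values) {       # {} encloses the body of control flow (for/if/function)
  print((v - 100) / 15)   # () sets the order of operations and wraps function arguments
}
lst = list(IQ = 90, name = "X")
lst[["IQ"]]               # [[]] extracts a single element from a list: returns 90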

Why do I find it odd here? Given the above paragraph, you use the actual data and calculate the means/SDs per year, so it's the observed data. In the legend of Figure 6 it says "modeled", as if you had used some statistical modeling, while you have only made a descriptive analysis. Correlation and Cohen's d, for example, are this kind of descriptive analysis. At the beginning of section 4, you say that the model presented in figure 6 did not assume g gains for immigrants. As I said, it's odd because in the description you merely said you calculated the means and SD over time. Do you mean "no g gain model" because your descriptive analysis does not incorporate IQ gains for immigrants? I ask this question based on what I understand (not much) of your syntax below:


I don't know what you don't understand. These are the same. Modeling is a broad term and does not imply any fancy stuff like fit indexes (as in latent trait modeling, confirmatory factor analysis etc.). It just means one is calculating based on a model of how reality works.

Wikipedia has a nice description:

Modeling and simulation (M&S) is getting information about how something will behave without actually testing it in real life. For instance, if we wanted to design a race car, but weren't sure what type of spoiler would improve traction the most, we would be able to use a computer simulation of the car to estimate the effect of different spoiler shapes on the coefficient of friction in a turn. We're getting useful insights about different decisions we could make for the car without actually building the car.


Section 4 has no reference to Figure 6. You must mean Section 5. [size=xx-small](It is not "did not assume g gains for immigrants", it is "assumed no g gains for immigrants". These are different.)[/size]

No gains model is the one where there are no immigrant gains, yes.

What do you understand about the syntax? The loop goes over each IQ in the IQ vector (the list of IQs for each country of origin). Then it checks if it is lower than the Danish IQ. If it is, it calculates the difference to the Danish IQ and adds a fraction of that to the IQ in the vector. In the code you quote, the fraction is .75, so it is the 75% gains model. One merely changes that value to calculate a different model (well, technically, it is one overall model with 4 parameter values, but it is not important for present purposes).
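If it helps, here is a rough sketch (not the actual code in the paper) that wraps the same loop in a function, so the gain fraction is an explicit argument; the function name is mine:

# Groups below the Danish mean close a given fraction of their gap to Denmark;
# fraction = 0 reproduces the no-gains model, 0.25/0.50/0.75 the gains scenarios.
adjust_gains = function(iq, dk = -0.18666667, fraction = 0.75) {
  for (case in 1:length(iq)) {   # loop over each standardized IQ
    if (iq[case] < dk) {         # is it lower than the Danish value?
      iq[case] = iq[case] + (dk - iq[case]) * fraction
    }
  }
  iq                             # return the adjusted vector
}
# e.g. adjust_gains(IQ.vector, fraction = 0.50) gives the 50% gains scenario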

And, more important: when one wants to examine which model has the better "fit" to the data (I put "fit" in quotes because there are no fit indices), you need to compare the differing models (presented in Table 2) with what is observed in the actual data. Why I'm perplexed is that in section 5 you have plotted the scenario of no g gain. In section 6 you have listed in Table 2 the expected outcomes for the different scenarios. In section 7 you talk about the military Danish data, but I don't see the link between section 7 and sections 5-6. In section 7 you only mentioned that the SD of the immigrants is higher than that of the natives. I don't see how it helps to evaluate whether the "no g gain" model is better than the others.


There is no data to compare against except for the army study. As you can see, the army study found that immigrants had a higher SD than predicted by the no gains model. Since the other models give a smaller SD for the immigrants, the best fitting model is the no gains one.

Why smaller? I'm not sure I get the idea. Also, when you say "larger than expected SD": you also said that your model predicts an 11.3% higher SD, but the actual data show 14.2% higher. But these values are still close, no? Also, what do you mean by "Using the model to predict this value using 2003 data, gives 11.3% which is not too far off (estimated SD's 15.01 for 'western' and 16.70 for 'non-western')."? Which model exactly?


No gains model is the default/primary. It is the one used unless otherwise stated.

When non-Danes are classified as 'Danish', this increases the raw score SD for the 'Danish' group. Since this value is in the denominator, the ratio decreases, i.e. a smaller %.
I understand R much better without curly brackets because I don't know these things (I read your link, but I understand nothing at all). And my main problem with your model with gains is this:

for (case in 1:length

I notice you use it quite often, but I don't see what "for" is, what "in 1" is, and why you have ":" just after it. And why you would need "length" too.

When I said "At the beginning of section 4, you say that the model presented in figure 6 did not assume g gains for immigrants" there was indeed a mistake. It was section 6, not 4: "The critical reader will have noticed that the model assumes that there are no immigrant changes in g.".

Modeling and simulation (M&S) is getting information about how something will behave without actually testing it in real life.


I agree with that statement. But when you merely calculate means and say it's modeling, I'm confused. Or perhaps my definition is too narrow.

When you say "Since the other models give a smaller SD for the immigrants, the best fitting model is the no gains one", I wonder what it is you are referring to. In Table 2 you mentioned these other models, but that was about the magnitude of the increase in SD over time, not whether the immigrant SD is higher than the SD of the natives. I suppose you didn't refer to this, but the claim that "the other models give a smaller SD for the immigrants" compared to the no g gain model is not made explicit in your text.

When non-Danes are classified as 'Danish', this increases the raw score SD for the 'Danish' group. Since this value is in the denominator, the ratio decreases, i.e. a smaller %.


Tell me if I'm wrong. Your model did not take into account that (some of the) non-Danes were misclassified as Danish, and so, the no g gain model gives an over-estimated SD.
Admin
I understand R much better without curly brackets because I don't know these things (I read your link, but I understand nothing at all).


What you need is to read an introduction to programming in R. Read this: http://health.adelaide.edu.au/psychology/ccs/teaching/lsr/

You cannot really utilize programming without understanding simple control flow.


I notice you use it quite often, but I don't see what "for" is, what "in 1" is, and why you have ":" just after it. And why you would need "length" too.


You cut off the code in the example. Don't do that. Here it is:

#IQ vectors for gains
for (case in 1:length(IQ.vector)){ #loop over each IQ
  if (IQ.vector[case] < -0.18666667){ #is it lower than DK?
    diff.to.DK = (-0.18666667-IQ.vector[case])
    IQ.vector[case] = IQ.vector[case]+diff.to.DK*.75 #change this value for the other scenarios
  }
}


In R, typing any integer, a colon, and any new integer creates a vector of numbers. Very useful. Just do e.g. 1:10 in R and see.

You appear not to understand loops. This is not the place for me to lecture you on the basics of programming. It is more effective if you use a textbook.

If there is a command you don't understand, just look it up. Type ?command (e.g. ?length) in R and it will open help.
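For what it's worth, a few lines you can paste into the console to see each piece in isolation:

1:10                  # the colon builds the integer sequence 1 2 3 ... 10
length(c(5, 2, 7))    # length() returns the number of elements, here 3
for (case in 1:3) {   # 'for' walks 'case' through each value of the vector 1:3
  print(case)         # prints 1, then 2, then 3
}
# ?length             # uncomment to open the help page for length()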

I agree with that statement. But when you merely calculate means and say it's modeling, I'm confused. Or perhaps my definition is too narrow.

When you say "Since the other models give a smaller SD for the immigrants, the best fitting model is the no gains one", I wonder what it is you are referring to. In Table 2 you mentioned these other models, but that was about the magnitude of the increase in SD over time, not whether the immigrant SD is higher than the SD of the natives. I suppose you didn't refer to this, but the claim that "the other models give a smaller SD for the immigrants" compared to the no g gain model is not made explicit in your text.


Your definition is too narrow, yes.

I don't understand why you don't understand it. Read the paper again perhaps? The immigrant SD is of course higher, since the immigrants are composed of many groups. Any composite population of standard normal distributions with different means has a larger SD than 1 (also stated in the paper).
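A quick simulation illustrates the point (the subgroup means and weights here are arbitrary, not the paper's values):

set.seed(1)
# two equally large subgroups, each standard normal within-group but with different means
composite = c(rnorm(5e5, mean = -0.5, sd = 1),
              rnorm(5e5, mean =  0.5, sd = 1))
sd(composite)   # about 1.12, i.e. larger than the within-group SD of 1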

Immigrant 'non-western' SD was ~14% larger than 'Western' SD, but the no-gains model only predicts it to be ~11% larger. The other models fare even worse. There is something more going on, perhaps differential selection for g between countries. This would increase the SD.

Tell me if I'm wrong. Your model did not take into account that (some of the) non-Danes were misclassified as Danish, and so, the no g gain model gives an over-estimated SD.


No. It gives an underestimated SD ratio between the groups.

Look, if you have: (non-western SD/western SD) and you increase western SD due to misclassification, the ratio becomes smaller.
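With the rough numbers from the army comparison (plus one hypothetical inflated value) the point is easy to check:

sd_nonwestern = 16.70                 # estimated 'non-western' SD (from the paper)
sd_western    = 15.01                 # estimated 'Western' SD (from the paper)
sd_nonwestern / sd_western            # ~1.11, i.e. ~11% larger
sd_western_inflated = 15.60           # hypothetical: misclassified non-Danes inflate the Western SD
sd_nonwestern / sd_western_inflated   # ~1.07, the ratio shrinks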
You assume I didn't read these books and the help commands. But I already did; I told you that before. I repeat: these weren't helpful.

Immigrant 'non-western' SD was ~14% larger than 'Western' SD, but the no-gains model only predicts it to be ~11% larger. The other models fare even worse. There is something more going on, perhaps differential selection for g between countries. This would increase the SD.


That's what I said. But I also said it is not clearly stated in your article. If the no-gains model predicts 11% larger, why not mention the % for the other models?

Tell me if I'm wrong. Your model did not take into account that (some of the) non-Danes were misclassified as Danish, and so, the no g gain model gives an over-estimated SD.


No. It gives an underestimated SD ratio between the groups.

Look, if you have: (non-western SD/western SD) and you increase western SD due to misclassification, the ratio becomes smaller.


I understand the second sentence, but you misread my comment. I said your no gain model did not take into account racial misclassification. So logically, the no gain model predicts higher SD for immigrants than what was observed in the data, because misclassification underestimates the ratio nonwestern/western SD.

EDIT:

Concerning what's modeling and what is not: I think you should probably say in your article, about the graph in Figure 6, something like it "corresponds to the scenario we expect under the no g gain model". As I said, Figure 6 is not really "modeled" because it's descriptive, but I can accept that it corresponds to a model of yours.

Generally, what I think is a model is something close to a "prediction" such as in regression. In this analysis, you predict the individual's outcome, not based on individual's characteristics but on group characteristics. Regression is usually understood as an aggregation method. When you hold constant several independent variables, they are held constant for the values of the entire group. We can't control an individual's characteristics. Only group characteristics.

So in my opinion, a "descriptive" stats is not a model, but can correspond to a model you have in mind.
Admin
MH,

That's what I said. But I also said it is not clearly stated in your article. If the no-gains model predicts 11% larger, why not mention the % for the other models?


We did not calculate these values.

I understand the second sentence, but you misread my comment. I said your no gain model did not take into account racial misclassification. So logically, the no gain model predicts higher SD for immigrants than what was observed in the data, because misclassification underestimates the ratio nonwestern/western SD.


The data shows that the SD is larger than the no gains model predicts.

This smells of another language confusion. The data are biased towards a higher ratio, yet the model that produces the highest predicted NW/W-ratio still underpredicts it. I.e. it is likely that something else is going on.

EDIT:

Concerning what's modeling and what is not: I think you should probably say in your article, about the graph in Figure 6, something like it "corresponds to the scenario we expect under the no g gain model". As I said, Figure 6 is not really "modeled" because it's descriptive, but I can accept that it corresponds to a model of yours.

Generally, what I think is a model is something close to a "prediction" such as in regression. In this analysis, you predict the individual's outcome, not based on individual's characteristics but on group characteristics. Regression is usually understood as an aggregation method. When you hold constant several independent variables, they are held constant for the values of the entire group. We can't control an individual's characteristics. Only group characteristics.

So in my opinion, a "descriptive" stats is not a model, but can correspond to a model you have in mind.


Your definition of "model" is idiosyncratic. I have already supplied quotes supporting my use of "model".
You need to understand first what modeling is. By this, it is understood how many elements are needed to explain the data. By elements, I mean "variables", e.g., interaction terms, squared and/or cubic terms, and maybe additional variables. See here for a pictorial illustration. The purpose is to look for the most parsimonious model. If you have the main effect and squared term of age, and then you decide to add its cubic term as well, you will compare the two nested models by, say, a chi-squared test, and may discover that the two models don't differ significantly (I dislike p-values, but it's just for the sake of our argument). You can then conclude that the cubic term is not necessary and that a model with a squared effect of age is sufficient to explain the observed data.
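For illustration, a minimal sketch of the kind of nested comparison I mean, with made-up data (the variable names are hypothetical):

set.seed(1)
d = data.frame(age = runif(200, 20, 60))
d$y = 2 * d$age - 0.02 * d$age^2 + rnorm(200, sd = 5)    # made-up quadratic relationship
m_squared = lm(y ~ age + I(age^2), data = d)             # main effect + squared term
m_cubic   = lm(y ~ age + I(age^2) + I(age^3), data = d)  # adds the cubic term
anova(m_squared, m_cubic)  # compares the nested models (an F test here; a chi-squared /
                           # likelihood-ratio test for likelihood-based models)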

When you fit a statistical model, you are actually comparing the models to the observed data and evaluating which one has the best approximation (fit) to the data. Modeling makes no sense when you have no point of comparison, because a statistical model serves to predict the observed data.

Thus, you cannot say "we have modeled the trend of..." when you have, e.g., just computed the means and SDs. That is descriptive statistics. It is not a statistical model, even though it can describe your model. These are two different things, as I keep saying.

Further, Andy Field (2009, pp. 32-33), "Discovering Statistics Using SPSS", has a nice description of what a model is, certainly much better than the wiki you have cited.

We saw in the previous chapter that scientists are interested in discovering something about a phenomenon that we assume actually exists (a ‘real-world’ phenomenon). These real-world phenomena can be anything from the behaviour of interest rates in the economic market to the behaviour of undergraduates at the end-of-exam party. Whatever the phenomenon we desire to explain, we collect data from the real world to test our hypotheses about the phenomenon. Testing these hypotheses involves building statistical models of the phenomenon of interest.

The reason for building statistical models of real-world data is best explained by analogy. Imagine an engineer wishes to build a bridge across a river. That engineer would be pretty daft if she just built any old bridge, because the chances are that it would fall down. Instead, an engineer collects data from the real world: she looks at bridges in the real world and sees what materials they are made from, what structures they use and so on (she might even collect data about whether these bridges are damaged). She then uses this information to construct a model. She builds a scaled-down version of the real-world bridge because it is impractical, not to mention expensive, to build the actual bridge itself. The model may differ from reality in several ways – it will be smaller for a start – but the engineer will try to build a model that best fits the situation of interest based on the data available. Once the model has been built, it can be used to predict things about the real world: for example, the engineer might test whether the bridge can withstand strong winds by placing the model in a wind tunnel. It seems obvious that it is important that the model is an accurate representation of the real world. Social scientists do much the same thing as engineers: they build models of real-world processes in an attempt to predict how these processes operate under certain conditions (see Jane Superbrain Box 2.1 below). We don’t have direct access to the processes, so we collect data that represent the processes and then use these data to build statistical models (we reduce the process to a statistical model). We then use this statistical model to make predictions about the real-world phenomenon. Just like the engineer, we want our models to be as accurate as possible so that we can be confident that the predictions we make are also accurate. However, unlike engineers we don’t have access to the real-world situation and so we can only ever infer things about psychological, societal, biological or economic processes based upon the models we build. If we want our inferences to be accurate then the statistical model we build must represent the data collected (the observed data) as closely as possible. The degree to which a statistical model represents the data collected is known as the fit of the model.

Figure 2.2 illustrates the kinds of models that an engineer might build to represent the real-world bridge that she wants to create. The first model (a) is an excellent representation of the real-world situation and is said to be a good fit (i.e. there are a few small differences but the model is basically a very good replica of reality). If this model is used to make predictions about the real world, then the engineer can be confident that these predictions will be very accurate, because the model so closely resembles reality. So, if the model collapses in a strong wind, then there is a good chance that the real bridge would collapse also. The second model (b) has some similarities to the real world: the model includes some of the basic structural features, but there are some big differences from the real-world bridge (namely the absence of one of the supporting towers). This is what we might term a moderate fit (i.e. there are some differences between the model and the data but there are also some great similarities). If the engineer uses this model to make predictions about the real world then these predictions may be inaccurate and possibly catastrophic (e.g. the model predicts that the bridge will collapse in a strong wind, causing the real bridge to be closed down, creating 100-mile tailbacks with everyone stranded in the snow; all of which was unnecessary because the real bridge was perfectly safe – the model was a bad representation of reality). We can have some confidence, but not complete confidence, in predictions from this model. The final model (c) is completely different to the real-world situation; it bears no structural similarities to the real bridge and is a poor fit (in fact, it might more accurately be described as an abysmal fit). As such, any predictions based on this model are likely to be completely inaccurate. Extending this analogy to the social sciences we can say that it is important when we fit a statistical model to a set of data that this model fits the data well. If our model is a poor fit of the observed data then the predictions we make from it will be equally poor.


I have highlighted the important passages.
Admin
You are again arguing for some narrow definition of model. This is not the only way the word is used in science.

One does not need actual comparison data for modeling. In many cases, such data are not actually available... which is also why one is doing the modeling in the first place. In this case, there are population data available and one data point from the army study, which the model results can be and are compared to in the study. So, by your narrow definition, it is still modeling.

The model* also makes predictions for data not yet publicly available, i.e. what the mean IQ should be in the immigrant population.

* ... or models, depending whether you want to call it 1 model with 4 parameters, or 4 models. We talk about them as 4 models in the paper, but it is perhaps better to call it one model with 4 parameters.
I thought I had insisted enough on this. I did not say model; I said statistical model. There's a huge difference between these terms. Your Figure 6 is said to be modeled. As I told you before, and as illustrated by the quote from Field (2009), a statistical model, by definition, implies several assumptions (e.g., constraints). But if what you do in Figure 6 is a simple computation of means, there is no assumption made here (at least, I don't see any). There are no parameters constrained to be zero, or equal to another parameter, etc.

or models, depending whether you want to call it 1 model with 4 parameters, or 4 models. We talk about them as 4 models in the paper, but it is perhaps better to call it one model with 4 parameters.


In models with weak/medium/strong gains, perhaps I can accept the use of statistical modeling, because you're predicting the IQ of the immigrants given some assumed values of IQ gains. You made an assumption here. But in the no g gain model (Figure 6) there is no such thing. It's only descriptive because it's your observed data. The models with weak/medium/strong gains are not your observed data.
Admin
It is not a statistical model in the sense that is used for e.g. latent variable modeling/structural equation modeling.

The four parameters make assumptions, namely that each parameter is what it is stated to be. The assumption of the no-gains model being that... there are no gains due to environment.
1) As I said earlier, there should be some data on the countries of origin of the immigrant population. Most readers have no idea who actually moves to Denmark. At the very least, there should be basic information like "x% of the immigrant population is of non-European origin and y% of European origin as of 2014." Generally, non-European immigration would be expected to increase inequality more, given that IQ levels are relatively uniform across Europe.

2) Sample sizes should be indicated in Table 1, at least mention in the caption that Ns range from x to y countries.

3) "Then, for each year, we calculated the composite population using the population data and their IQs"

This could be expressed more clearly, e.g., "for each year, we estimated the composite IQ distribution by modeling the effect on Danish IQ of changes in the composition of the population, based on the national IQs of the immigrants' countries of origin."

4) "one where there are large gains, one with medium gains and one with small gains. Concretely, we modeled these as the immigrants closing the g gap to the IQ of Denmark by 25%, 50% and 75% respectively"

The percentages should be presented from large to small as in the preceding sentence.

5) Language should be improved -- for example:

a) "We think the immigration to western countries leads to a policy conundrum"

We think that immigration to Western countries...

b) 'western' should be capitalized throughout

c) "Immigration will lead to higher socioeconomic inequality in the countries"

Immigration will cause higher socioeconomic inequality in Western countries

d) "There are two parts of the spatial transferability hypothesis."

... two parts TO the spatial...

e) "Comparing with cognitive data from the military draft"

Comparison with cognitive data...

6) If these issues are dealt with, I approve publication.
Admin
Dalliard,

Thanks for the review.

1) As I said earlier, there should be some data on the countries of origin of the immigrant population. Most readers have no idea who actually moves to Denmark. At the very least, there should be basic information like "x% of the immigrant population is of non-European origin and y% of European origin as of 2014." Generally, non-European immigration would be expected to increase inequality more, given that IQ levels are relatively uniform across Europe.


We have added a table to a new subsection in 1.1 that gives the top 10 countries by 10-year intervals, as well as their relative percentages.

2) Sample sizes should be indicated in Table 1, at least mention in the caption that Ns range from x to y countries.


Added "Sample sizes range from 119 to 154 with a mean of 130" to the caption.

3) "Then, for each year, we calculated the composite population using the population data and their IQs"

This could be expressed more clearly, e.g., "for each year, we estimated the composite IQ distribution by modeling the effect on Danish IQ of changes in the composition of the population, based on the national IQs of the immigrants' countries of origin."


The section now reads:

Then, for each year, we estimated the composite IQ distribution by modeling the effect on Danish IQ of changes in the composition of the population, based on the national IQs of the immigrants' countries of origin (using the same national IQ data as previously). The plot of the results is shown in Figure 6.
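For concreteness, here is a stripped-down sketch (not the paper's actual code) of what that composite calculation amounts to for a single year, assuming a data frame pop with made-up columns n (population count by origin group) and iq (standardized national IQ, within-group SD of 1):

pop = data.frame(n  = c(5000000, 200000, 100000),   # illustrative counts only
                 iq = c(0, -0.5, -1))                # illustrative standardized national IQs
w = pop$n / sum(pop$n)                               # population weights
composite_mean = sum(w * pop$iq)
composite_sd   = sqrt(sum(w * (1 + pop$iq^2)) - composite_mean^2)  # mixture variance formula
c(composite_mean, composite_sd)                      # composite mean and SD for that year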


4) "one where there are large gains, one with medium gains and one with small gains. Concretely, we modeled these as the immigrants closing the g gap to the IQ of Denmark by 25%, 50% and 75% respectively"

The percentages should be presented from large to small as in the preceding sentence.


Fixed.

5) Language should be improved -- for example:

a) "We think the immigration to western countries leads to a policy conundrum"

We think that immigration to Western countries...


Changed to:
We think that immigration as it is happening right now to Western countries leads to a policy conundrum for some policy makers. Our argument is as follows:


b) 'western' should be capitalized throughout


Done.

c) "Immigration will lead to higher socioeconomic inequality in the countries"

Immigration will cause higher socioeconomic inequality in Western countries


Fixed.

d) "There are two parts of the spatial transferability hypothesis."

... two parts TO the spatial...


Changed to: The spatial transferability hypothesis has two parts.

e) "Comparing with cognitive data from the military draft"

Comparison with cognitive data...


Fixed.

---

I will have a native-speaker friend of mine read it through and see if she can find more instances where the language can be improved.

A new draft is available, version 10.
The assumption of the no-gains model being that... there are no gains due to environment.


I said that the calculation made in your Figure 6 is just your observed data. And I have strongly insisted on observed data. The other models (those with IQ gains) are not your actual data. This is what is usually meant by assumption. Modeling is not descriptive statistics and is not your observed data. So the description under Figure 6 should read "Figure 6: Change in mean IQ and SD over time in Denmark calculated from population data by country of origin and national IQs.". Or you can use "computed". At least, if you don't say "modeled", it's fine.
Admin
Again, you are using an idiosyncratic narrow definition of what a model is. It does not make sense to alter the wording to fit that usage.

We do have a language rewrite on the way, so hopefully it will be somewhat better. It is hard for non-natives to write completely fluently, even for someone who speaks a closely related language (Danish and English are both Germanic languages, and there has also been exchange between them, both recently and in Norse times).
Again, you are using an idiosyncratic narrow definition of what a model is. It does not make sense to alter the wording to fit that usage.


That's a handwaving argument. In fact, it's not even an argument at all. For this to be an argument, such an affirmation should have been accompanied by some citations, which are absent from your post. By saying that my definition is an "idiosyncratic narrow definition" you're showing that you obviously don't know what a model is. Because the definition I have given you is actually not "my" definition; it's a logical conclusion anyone can derive from what statisticians write. So, affirming that I'm distorting the definition of a statistical model proves that you don't understand the manner in which statisticians use the term "statistical model".

Anyone who reads enough papers on this matter will notice that statisticians (and even non-statisticians) quite often employ a sentence like this: "the models are fitted against the data". That's the perfect occasion to ask you this question: why do you think they say "models are fitted against the data"? The answer is obvious. They make a distinction between the statistical models (unobservables) and the observed data (observables).

In your paper, what you have is:

model0 = observed data
model1 ≠ observed data
model2 ≠ observed data
model3 ≠ observed data

Thus, while models 1-3 (IQ gains) can be statistically tested against each other and against the data, this is not the case for model0 (no IQ gains). Models 1-3 can be said to be approximations of the observed data, but not model0. Thus, model0 violates the definition of a statistical model. By definition, a statistical model can be "statistically tested" with respect to the data. That is the purpose of a statistical model, i.e., to know how well a given model approximates the data. And a model (e.g., model0) which is equivalent to the observed data cannot be "statistically tested", because model0 = data. No one can say, for example, that model0 has better model fit than models 1-3, even if it's the most accurate description of your data (which is not difficult, because model0 = data). Any model can fail the statistical test when it is inconsistent with the data; and the possibility of failure applies to models 1-3 but not to model0, because, once again, model0 = data.

Have you ever heard the following saying from the statistician George E. P. Box?

Essentially, all models are wrong, but some are useful.


And I have seen several economists quoting him in order to make clear what a model is. What this sentence reveals is that a model necessarily incorporates a degree of inexactness. That's what I meant earlier by approximations. It is only when models are approximations that they can be compared and tested against each other. As others have asked, how can we not compare models?

If you don't trust my words, perhaps you will trust the words of others. Models are expressed as equations and understood as approximations with regard to the data. For instance:

Nachtigall et al. 2003 p. 4
(Why) Should We Use SEM? Pros and Cons of Structural Equation Modeling

Jeffrey M. Wooldridge 2012 pp. 3-5
Introductory Econometrics: A Modern Approach

Konishi & Kitagawa 2008 p. 4
Information Criteria and Statistical Modeling (Springer Series in Statistics)

Rex B. Kline 2011 pp. 8, 16
Principles and Practice of Structural Equation Modeling

Sheldon M. Ross 2010 p. 540
Introductory Statistics (3rd edition)

Marloes Maathuis 2012
1. Role of statistical models

Model is by definition a simplification of (a complex) reality.


Anu Maria 1997
Introduction to Modeling and Simulation

Modeling is the process of producing a model; a model is a representation of the construction and working of some system of interest. A model is similar to but simpler than the system it represents. One purpose of a model is to enable the analyst to predict the effect of changes to the system. On the one hand, a model should be a close approximation to the real system and incorporate most of its salient features. On the other hand, it should not be so complex that it is impossible to understand and experiment with it


Galit Shmueli 2010
To Explain or to Predict?

Exploratory data analysis (EDA) is a key initial step in both explanatory and predictive modeling. It consists of summarizing the data numerically and graphically, reducing their dimension, and “preparing” for the more formal modeling step.

...

2.6.1 Validation. In explanatory modeling, validation consists of two parts: model validation validates that f adequately represents F, and model fit validates that fˆ fits the data {X, Y}. In contrast, validation in predictive modeling is focused on generalization, which is the ability of fˆ to predict new data {Xnew,Ynew}.

...

The top priority in terms of model performance in explanatory modeling is assessing explanatory power ... In contrast, in predictive modeling, the focus is on predictive accuracy or predictive power, which refer to the performance of fˆ on new data.


Cosma Shalizi 2011
Evaluating Statistical Models

Using a model to summarize old data, or to predict new data, doesn't commit us to assuming that the model describes the process which generates the data. But we often want to do that, because we want to interpret parts of the model as aspects of the real world. We think that in neighborhoods where people have more money, they spend more on houses - perhaps each extra $1000 in income translates into an extra $4020 in house prices. Used this way, statistical models become stories about how the data were generated. If they are accurate, we should be able to use them to simulate that process, to step through it and produce something that looks, probabilistically, just like the actual data. This is often what people have in mind when they talk about scientific models, rather than just statistical ones.

An example: if you want to predict where in the night sky the planets will be, you can actually do very well with a model where the Earth is at the center of the universe, and the Sun and everything else revolve around it. You can even estimate, from data, how fast Mars (for example) goes around the Earth, or where, in this model, it should be tonight. But, since the Earth is not at the center of the solar system, those parameters don't actually refer to anything in reality. They are just mathematical fictions. On the other hand, we can also predict where the planets will appear in the sky using models where all the planets orbit the Sun, and the parameters of the orbit of Mars in that model do refer to reality.


SAS/STAT(R) 9.2 User's Guide, Second Edition

Obviously, the model must be "correct" to the extent that it sufficiently describes the data-generating mechanism


Topics in Statistical Data Analysis: Revealing Facts From Data

The following figure illustrates the statistical thinking process based on data in constructing statistical models for decision making under uncertainties.


Mueller & Hancock 2007
Best Practices in Structural Equation Modeling

A central issue addressed by SEM is how to assess the fit between observed data and the hypothesized model, ideally operationalized as an evaluation of the degree of discrepancy between the true population covariance matrix and that implied by the model's structural and nonstructural parameters. As the population parameter values are seldom known, the difference between an observed, sample-based covariance matrix and that implied by parameter estimates must serve to approximate the population discrepancy.


Kenneth A. Bollen 1989 pp. 68, 72
Structural Equations with Latent Variables

Model-reality consistency is a more "slippery" issue. Here the question is whether the model mirrors real-world processes. For instance, does an econometric model of the U.S. economy really correspond to the behavior of the economy? Fully assessing model-reality consistency is not possible since it presupposes perfect knowledge of the "real" world with which to evaluate the model. In practice, we imperfectly evaluate model-reality consistency in several ways. One is comparing the predictions implied by a model to those observed in a context different from the data that supply the model parameter estimates. For instance, we might check the realism of an econometric model by contrasting its predictions of inflation rates to those observed in the future. If we are fortunate enough to be able to manipulate variables in the model, we can do so and see if the model correctly predicts the consequences. Or, we can examine the assumptions and relations embedded in a model and debate their validity based on other experiences or insights.

It is tempting to use model-data consistency as proof of model-reality consistency, but we could be misled by so doing. The problem lies in the asymmetric link between these two consistency checks. If a model is consistent with reality, then the data should be consistent with the model. But, If the data are consistent with a model, this does not imply that the model corresponds to reality.

[...]

In sum, structural equation models face the same restrictions as other empirical methodologies. We can only reject a model - we can never prove a model to be valid. A good model-to-data fit does not mean that we have the true model.


The last paragraph helps to better understand why models are not actual data. Since all models are "wrong", so to speak, the best fitting model is not proof that this model is the true model, as they are all approximations.

And finally, the best one is this blog article:

The True Meaning Of Statistical Models

Briggs (2014) has nicely summarized the essence of a typical statistical model: "Why substitute perfectly good reality with a model?", "Because a statistical model is only interested in quantifying the uncertainty in some observable, given clearly stated evidence", "Every model (causal or statistical or combination) implies (logically implies) a prediction". This could not illustrate better all I have said earlier. A statistical model is an approximation, and thus is different from descriptive statistics. Unfortunately, your so-called statistical model of no gains has no uncertainty in it.

I repeat, the description in your figure 6 definitely needs to be rewritten.
Admin
I don't think we will reach agreement on this semantic issue.

I have asked Ken Kura to review the paper (as author-chosen reviewer). He has some criticism as well. I asked him to post it here on the forum for others to see (he sent it to my email). We will be revising the paper according to his criticism.

If we can get Dalliard, Kura, Meisenberg, and Fuerst or Piffer, then we will have 4 approvals.
Admin
Here's the new version, #11. It now has a proper introduction (about ½ page), another paragraph for results for immigrants only over time (requested by Kura), another paragraph in the discussion, a table with information about the largest countries of origin requested by Dalliard, and a lot of language edits thanks to Laird Shaw.

https://osf.io/dei73/