Journal: Open Differential Psychology
Authors: Bryan J. Pesta
Title: And the Next President of the United States is: Predicted by Race-adjusted State IQ and Well-being
Abstract: I report U.S. state-level relationships between measures of IQ, race, well-being (e.g., income, health), and the results of the 2016 U.S. presidential election. Based on prior research (Pesta & McDaniel, 2014), I predicted that IQ and race would be relatively unrelated to election results in bivariate analysis. Instead, a mutual suppression effect was expected, such that IQ would more strongly predict election outcomes when controlling for race, and vice versa. The predicted pattern appeared; so too did mutual suppression effects between racial composition and most but not all measures of state well-being (i.e., religiosity, crime, education, health, and income). The suppression patterns consistently revealed that after adjusting for state racial composition, blue states were smarter and more prosperous than red states. I conclude that conservatism is inversely co-linear with IQ.
Length: ~5465 words, ~26 pages.
I thought this paper was accepted at Intelligence, but it was rejected after the second revise and resubmit. If appropriate, I'm willing to share feedback from prior reviewers. At any rate, please consider this for publication here.
Sincerely,
Bryan
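For readers unfamiliar with the mutual suppression pattern the abstract describes, here is a minimal sketch in Python using synthetic data (not the paper's state data): the bivariate IQ-vote correlation comes out near zero, but IQ's standardized coefficient turns clearly negative once the race variable enters the regression.
[code]
import numpy as np

rng = np.random.default_rng(0)
n = 50  # one synthetic case per "state"

# Synthetic variables: the race variable relates positively to both IQ and
# the vote share, while IQ has an independent negative effect on the vote.
race = rng.normal(size=n)
iq = 0.7 * race + rng.normal(scale=0.7, size=n)
vote = 0.8 * race - 0.5 * iq + rng.normal(scale=0.5, size=n)

def z(x):
    return (x - x.mean()) / x.std()

iq_z, race_z, vote_z = z(iq), z(race), z(vote)

# Bivariate correlation: near zero, because race masks the IQ effect.
print("r(IQ, vote) =", round(np.corrcoef(iq_z, vote_z)[0, 1], 2))

# Standardized betas from the two-predictor regression: the IQ coefficient
# is now clearly negative -- the suppression pattern.
X = np.column_stack([np.ones(n), iq_z, race_z])
betas, *_ = np.linalg.lstsq(X, vote_z, rcond=None)
print("beta_IQ =", round(betas[1], 2), "beta_race =", round(betas[2], 2))
[/code]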
Hi Bryan,
I recall this paper. Note that the result of this paper does not hold when one analyses counties (n=3100), so I would recommend rewriting this to be more weakly worded. Otherwise, it would get immediately disproved by the county paper.
Aside from that, I'll read your paper and get back to you with my thoughts.
Hi Emil,
Thanks for the comment. This wasn't something prior reviewers brought up, but I had thought about how "reliable" my results were, and therefore how strongly or weakly I could discuss them.
It occurred to me that I'm not trying to estimate population values from some sample. I have the population. There are no more cases to add, and so this study is completely descriptive versus inferential. In fact, I'm not sure my use of significance testing and p-values is even appropriate (I included them as interpretational aids in that these are what journal readers expect to see).
County-level analysis producing different results is interesting, but it does not invalidate the results I found when using U.S. states as the unit of analysis. In other words, the numbers are the numbers. The population parameters I reported are what they are. A remaining puzzle, then, is why the results differ across units of analysis.
I could be wrong about this, and am interested in what others here think.
Bryan
Bryan,
Please find attached my commented version of your paper.
Note that your study files must be uploaded to a repository. Currently, the data are entirely missing. As before, I recommend that you make a user at OSF, create a repository and upload the files.
The paper has no figures. Perhaps you can find a way to visualize your main suppression finding?
https://osf.io/
Hi Emil,
Thanks for your review. I'm in somewhat of a dilemma here. I probably shouldn't have submitted this paper until mid-August, as then I'd have the proper time to deal with responses to reviews. I'm in the middle of coordinating and then running an event that ends August 18. It's now apparent that I'll have little time to deal with this paper till then.
Given that, should I retract, or would it be ok to leave it unattended for a month?
If I did retract, could I resubmit later?
Thank you for your consideration,
Bryan
You can leave it unattended for some time. I myself do this. We don't actually have any rule in place about a time-out effect. However, we did once move a paper to the 'abandoned' category after >1 year without any author replies despite multiple contact attempts.
Emil,
I have some time now to work on this, and I hope to have a response to your review soon. For now, though, I've attached the data. I'm happy to host this somewhere else as well, if that's required.
Bryan
Emil,
Here is my reply to your review.
Best,
Bryan
Hi Bryan,
I'll reply in thread since that's easier to follow. Anything not commented on was ok.
Notes. I started by replying to your Word comments in Word, but that was hard to follow. So, I decided to do my replies here as an outline. I will upload the entire revised paper once I know what changes other reviewers suggest. Also, I am now rewriting various sections of the paper (and changing the title) to sound less dramatic / conclusive about the results.
Sounds good.
2. Actually, it’s about how the states vote in the presidential elections cf. first past the post. The states don’t have to be very skewed politically for this to happen. I’m not sure I completely understand your first point here. I agree it’s the states—via how its residents vote—that are either blue or red. If it helps, here is the dictionary definition of a blue state: A US state that predominantly votes for or supports the Democratic Party. Please let me know if I’m not addressing your concern.
My point is that a state can be consistently blue/red while the actual voter margin is fairly close to 50%.
https://en.wikipedia.org/wiki/Red_states_and_blue_states
Example: Montana is listed as a consistent red state (4/4 last elections to R). However, the average voter margin is only 3-10%, meaning that results like 45-55% and 48.5-51.5% are seen. Quite small difference in actual voter behavior.
This red/blue state terminology is an artifact of the bizarre voting system (FPTP).
4. Results which were also found by Kirkegaard 2015a, b. Now cited.
Don't have to cite all my work, just some suggestions. Unethical for reviewers to require submissions to cite their own work. Cite it if you think it is relevant.
5. Very old citations [on suppression]. Are there any newer reviewers? When I originally submitted this to a different journal, I relied on Pesta and McDaniel (2014) as the background citation on suppression effects. A reviewer there wanted more details about these effects, including mention of some classic papers. Thus, my paper evolved to discussion of that. This is an example of the rabbit hole one goes down when resubmitting a manuscript to a different journal. I hope you won’t ask me to go back up it!
Haha. Yes. This is alright.
6. I think this section should clarify matters of unidimensional vs. multi-dimensional measures of politics vs. self-identification with labels/parties. Messy findings result from messy conceptualization. Noah’s studies used a 2-dimensional conceptualization. Good point. This was a hard section to write, and I will work on making this distinction in the revision.
The dimensionality of political preferences is not very well researched I'm afraid. Some references of interest:
- http://journals.sagepub.com/doi/10.1177/1465116512436995
- http://journals.sagepub.com/doi/abs/10.1177/1465116511434618
US research on the topic is particularly annoying in that it is almost always reduced to a single left-right/liberal-conservative dimension. Another artifact of the FPTP system I guess.
8. Is this with or without excluding proportions to third parties? It matters in some states. The non-perfect correlation here means that the non-main parties were not excluded. Does it change results if they are? In my data, (100% - %Trump - %Clinton) = %Third Party votes. Across the 50 states, this mean / residual value was 5.97 (SD = 3.54). But, it cannot be allocated to some unified “independent” vote. In California, for example, although 93.4% of residents voted for either Trump or Clinton, the remaining percentages were scattered across several other, distinct candidates / parties. Specifically, 3.4% voted Libertarian; 2.00% voted Green, 0.28% voted “independent,” and 1.00% of votes were coded as “others” (https://en.wikipedia.org/wiki/United_States_presidential_election,_2016).
In my data, the third party vote correlated nominally with percent Clinton (r = -.28, p = .05). It also correlated strongly with percent Black (r = -.52), and with Health (r = .44). The percent Clinton correlation, however, was attenuated to .11 when controlling for percent Black. Thus, I don’t see much value in adding data about third party votes to my manuscript.
Maybe add a brief note to a robustness section. Good practice to have a robustness section that briefly summarizes what happens under various alternative method choices. Don't want to end up like this!
http://www.nature.com/news/crowdsourced-research-many-hands-make-tight-work-1.18508
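For such a robustness note, the attenuation described above can be reported as a first-order partial correlation. A minimal sketch in Python; the Clinton-by-%Black correlation is not reported in the thread, so r_clinton_black below is a hypothetical placeholder, not a real value.
[code]
from math import sqrt

def partial_r(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

r_third_clinton = -0.28  # %third party with %Clinton (reported above)
r_third_black = -0.52    # %third party with %Black (reported above)
r_clinton_black = 0.20   # hypothetical placeholder; not reported in the thread

# x = %third party, y = %Clinton, z = %Black
print(round(partial_r(r_third_clinton, r_third_black, r_clinton_black), 2))
[/code]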
12. Unclear if adjusted for overfitting [page 13]. I don’t think the analyses I’m running are “inferential,” in that I’m not trying to estimate population values from some sample. Instead, I have data for an entire population—the 50 U.S. states. Statistical techniques used to make a sample more representative of a population don’t seem relevant / appropriate here.
Nonetheless, my understanding is that overfitting is a concern with small sample sizes. I agree that N = 50 is small, but each case is quite “stable” in that it is an aggregate number based on the voting patterns of millions of people in each state. In sum, I don’t think overfitting is a concern here, but I will again defer to those with more knowledge of the topic.
The discussion of this is more philosophical. I lean towards the other view, but it's a judgment call.
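If a brief note on overfitting is wanted, the simplest thing to report is adjusted R² next to R²; with N = 50 and only two or three predictors the penalty is small. A minimal sketch in Python (the R² value is a placeholder for illustration, not a figure from the paper):
[code]
def adjusted_r2(r2, n, k):
    """Adjusted R^2 for n cases and k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Placeholder R^2, purely for illustration.
print(round(adjusted_r2(0.60, n=50, k=2), 3))  # ~0.583 with two predictors
print(round(adjusted_r2(0.60, n=50, k=5), 3))  # ~0.555 with five predictors
[/code]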
14. Why [is well-being the common dimension]? What about reverse causation? What about common cause? Good question, and I now mention these other possibilities in the discussion.
In my thinking, we know that scores on seemingly distinct mental tests nonetheless correlate. We explain this by appeal to a latent trait common to scores on all mental tests. I’ve applied the same idea to the seemingly distinct social, political, and economic variables that nonetheless correlate by state. My explanation appeals to a general factor named well-being. It includes IQ, whereas your S factor does not.
I think of S as a formative factor, not a reflective one though. It's a useful index of social well-being, but it's not a cause or much of a thing at all.
15a. A finding that parallels the usual finding of more left-wing politics in cities, and cities are more prosperous. Maybe try a control for population density? Thank you. The correlations for state population density are .07 (IQ), .50 (% Trump), .32 (% White), and -.37 (% Black or Hispanic). However, in a regression with IQ, % White, and population density predicting % Trump, the IQ suppression effect still occurred (i.e., IQ’s Beta was still -.563).
I was going to add a section reporting all this, but when % Black or Hispanic is instead the race variable, the IQ suppression result goes away (IQ’s Beta was -.241, n/s). Given the lack of consistency (and the many additional analyses previous reviewers asked me to add), I chose not to present these data in the revision.
Seems like it is worth mentioning in a robustness section.
15b. It would be wise to note the effect sizes here are quite tiny. To derive it, you need the standard deviation of state-level IQ, which is quite small: 2.7 IQ. The IQ / voting pattern effect sizes are small only initially, because they are being suppressed by race. When percent White is controlled, for example, the effect size for IQ now predicting percent Trump is -.65.
Standardized metrics are tricky. A std. beta of .65 at the state level and at the individual level is not the same in terms of IQ at the individual level. It depends on the variance. As I mentioned, the state-level IQ SD is only 2.7 (18% of the individual-level one). When you find a std. beta of .65 at the state level, the associated difference in state IQ is about .65 SD, i.e. .65 * 2.7 ≈ 1.8 IQ points. Very small effect size.
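A minimal sketch of the conversion Emil describes, using the numbers quoted above:
[code]
beta_std = 0.65    # standardized beta for IQ predicting %Trump, controlling %White
sd_state_iq = 2.7  # state-level IQ standard deviation quoted above

# A .65 SD difference in state IQ, expressed in raw IQ points:
print(round(beta_std * sd_state_iq, 1))  # ~1.8 IQ points
[/code]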
Hi Emil,
Like you, I didn't paste below again points I think we've reached consensus on (or I assume require no further action on my part).
2. Actually, it’s about how the states vote in the presidential elections cf. first past the post. The states don’t have to be very skewed politically for this to happen. I’m not sure I completely understand your first point here. I agree it’s the states—via how its residents vote—that are either blue or red. If it helps, here is the dictionary definition of a blue state: A US state that predominantly votes for or supports the Democratic Party. Please let me know if I’m not addressing your concern.
Emil: My point is that a state can be consistently blue/red while the actual voter margin is fairly close to 50%.
https://en.wikipedia.org/wiki/Red_states_and_blue_states
Example: Montana is listed as a consistent red state (4/4 last elections to R). However, the average voter margin is only 3-10%, meaning that results like 45-55% and 48.5-51.5% are seen. Quite small difference in actual voter behavior.
This red/blue state terminology is an artifact of the bizarre voting system (FPTP).
I agree that using "blue" or "red" implies an all-or-none situation. But, in some sense, the labels are just conveniences. One can say "red" versus "a state that had relatively higher percentages of votes cast for Trump" every time a red state is referenced. Note, though, that all the analyses here use the full range of percentages cast for Trump (i.e., I ran regressions, rather than first categorizing states as red or blue and then running tests on those groups).
That said, I tried researching what constitutes a small versus a landslide win in a presidential election. I found very mixed results. For example:
"One generally agreed upon measure of a landslide election is when the winning candidate beats his opponent or opponents by at least 15 percentage points in a popular vote count. Under that scenario a landslide would occur when the winning candidate in a two-way election receives 58 percent of the vote, leaving his opponent with 42 percent.
There are variations of the 15-point landslide definition. The online political news source Politico has defined a landslide election as being one in which the winning candidate beats his opponent by at least 10 percentage points, for example. And the well-known political blogger Nate Silver, of The New York Times, has defined a landslide district as being one in which a presidential vote margin deviated by at least 20 percentage points from the national result."
So, there is at least one definition of "landslide" here that falls within your (high-end) definition of "average voter margin".
Finally, I compared the N = 14 states with 42% or less Trump votes to the 13 with 58% or more. It basically produced the same results as those reported for all 50 states.
Emil: The dimensionality of political preferences is not very well researched I'm afraid. Some references of interest:
- http://journals.sagepub.com/doi/10.1177/1465116512436995
- http://journals.sagepub.com/doi/abs/10.1177/1465116511434618
US research on the topic is particularly annoying in that it is almost always reduced to a single left-right/liberal-conservative dimension. Another artifact of the FPTP system I guess.
Thank you, I will check these out.
8. Is this with or without excluding proportions to third parties? It matters in some states. The non-perfect correlation here means that the non-main parties were not excluded. Does it change results if they are? In my data, (100% - %Trump - %Clinton) = %Third Party votes. Across the 50 states, this mean / residual value was 5.97 (SD = 3.54). But, it cannot be allocated to some unified “independent” vote. In California, for example, although 93.4% of residents voted for either Trump or Clinton, the remaining percentages were scattered across several other, distinct candidates / parties. Specifically, 3.4% voted Libertarian; 2.00% voted Green, 0.28% voted “independent,” and 1.00% of votes were coded as “others” (https://en.wikipedia.org/wiki/United_States_presidential_election,_2016).
In my data, the third party vote correlated nominally with percent Clinton (r = -.28, p = .05). It also correlated strongly with percent Black (r = -.52), and with Health (r = .44). The percent Clinton correlation, however, was attenuated to .11 when controlling for percent Black. Thus, I don’t see much value in adding data about third party votes to my manuscript.
Emil: Maybe add a brief note to a robustness section. Good practice to have a robustness section that briefly summarizes what happens under various alternative method choices. Don't want to end up like this!
http://www.nature.com/news/crowdsourced-research-many-hands-make-tight-work-1.18508
Will do-- as mentioned below too.
14. Why [is well-being the common dimension]? What about reverse causation? What about common cause? Good question, and I now mention these other possibilities in the discussion.
In my thinking, we know that scores on seemingly distinct mental tests nonetheless correlate. We explain this by appeal to a latent trait common to scores on all mental tests. I’ve applied the same idea to the seemingly distinct social, political, and economic variables that nonetheless correlate by state. My explanation appeals to a general factor named well-being. It includes IQ, whereas your S factor does not.
Emil: I think of S as a formative factor, not a reflective one though. It's a useful index of social well-being, but it's not a cause or much of a thing at all.
I guess it's what's causing the intercorrelations among all the different variables that leads me to my view of it.
15a. A finding that parallels the usual finding of more left-wing politics in cities, and cities are more prosperous. Maybe try a control for population density? Thank you. The correlations for state population density are .07 (IQ), .50 (% Trump), .32 (% White), and -.37 (% Black or Hispanic). However, in a regression with IQ, % White, and population density predicting % Trump, the IQ suppression effect still occurred (i.e., IQ’s Beta was still -.563).
I was going to add a section reporting all this, but when % Black or Hispanic is instead the race variable, the IQ suppression result goes away (IQ’s Beta was -.241, n/s). Given the lack of consistency (and the many additional analyses previous reviewers asked me to add), I chose not to present these data in the revision.
Emil: Seems like it is worth mentioning in a robustness section.
Will do.
15b. It would be wise to note the effect sizes here are quite tiny. To derive it, you need the standard deviation of state-level IQ, which is quite small: 2.7 IQ. The IQ / voting pattern effect sizes are small only initially, because they are being suppressed by race. When percent White is controlled, for example, the effect size for IQ now predicting percent Trump is -.65.
Emil: Standardized metrics are tricky. A std. beta of .65 at the state level and at the individual level is not the same in terms of IQ at the individual level. It depends on the variance. As I mentioned, the state-level IQ SD is only 2.7 (18% of the individual-level one). When you find a std. beta of .65 at the state level, the associated difference in state IQ is about .65 SD, i.e. .65 * 2.7 ≈ 1.8 IQ points. Very small effect size.
I agree 1.8 IQ points is small, assuming the typical SD of 15 for IQ tests.
But, I'm perhaps missing something: wouldn't moving 1.8 IQ points in an IQ distribution with a SD of 2.7 be a large effect? I tried researching whether a standardized beta is a measure of effect size, but got mixed results...
Bryan: I guess it's what's causing the intercorrelations among all the different variables that leads me to my view of it.
This is the reflective interpretation. I don't think it is very plausible. But I don't think we need to debate that here. See e.g. https://www.rasch.org/rmt/rmt221d.htm
Bryan: I agree 1.8 IQ points is small, assuming the typical SD of 15 for IQ tests.
But, I'm perhaps missing something: wouldn't moving 1.8 IQ points in an IQ distribution with a SD of 2.7 be a large effect? I tried researching whether a standardized beta is a measure of effect size, but got mixed results...
Depends what you mean. It is large in the standardized sense at the state level. It is quite small at the individual level. Generally, I think it is more sensible to use the individual-level norms for interpretation. So, while you do find evidence that Republican-majority states have lower IQs controlling for various factors, this effect is quite small. This is what would be expected, because the individual-level literature on the topic finds only minor and inconsistent IQ differences by preferred party and by conservative/liberal self-placement (depending on sample, measure, control variables, etc.).
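The two readings Emil distinguishes can be made explicit by expressing the same 1.8-point difference against the two reference SDs:
[code]
iq_difference = 1.8  # IQ-point difference derived above (.65 * 2.7)
sd_state = 2.7       # state-level IQ SD
sd_individual = 15   # conventional individual-level IQ SD

print("state-level standardized effect:     ", round(iq_difference / sd_state, 2))       # ~0.67 (large)
print("individual-level standardized effect:", round(iq_difference / sd_individual, 2))  # 0.12 (small)
[/code]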
I don't see any updated version of the submission. As per standard practice, please create a repository on OSF for it and place (and update) the submission there. OSF is easy to use and free.
"Controlling for percent White (or percent Black or Hispanic), blue states were smarter than red states. [...]. Controlling for race, blue states had even higher levels of global well-being, and health, and even lower levels of crime and religiosity."
You should have gone beyond the data and asked what those differences mean. If we look at whites who voted for Trump, they tended to be those who have borne the brunt of globalization, either through outsourcing of manufacturing to low-wage countries like China (rust belt states) or through insourcing of low-wage labor into industries which, by their very nature, cannot be relocated overseas (construction, landscaping, agribusiness, food processing, and most service jobs). In other words, the traditional working class voted for Trump. In contrast, the upper class and upper middle class tended to vote for Clinton.
Of course, even people in higher-up jobs will eventually suffer the effects of globalization. This is already happening in programming and other high-tech jobs. The global marketplace will tend to level incomes around the world, and there's going to be a lot more levelling down than levelling up. It will also tend to redistribute income from labor to capital. An unemployed American doesn't have the option of relocating to a country with a higher GDP per capita and less unemployment. For one thing, those countries are becoming fewer and fewer. For another, he or she cannot easily emigrate to such countries either legally (not enough money or skills, sorry) or illegally (the U.S. is not recognized as a refugee-producing country). Meanwhile, owners of capital have much more freedom to move their money from one country to another.
So I disagree with the inference that Americans with higher IQs support globalism because it is a better political choice. It's better in the short term for them because they don't suffer the negative effects of globalism. In fact, they benefit by getting cheaper manufactured goods (made in low-wage countries) and cheaper services (maids, restaurant help, landscapers, etc.). But it's not a better choice for Americans in general.
Thanks, Peter,
I will try to address your comments soon-- I need some time to think about them.
Emil:
I’m pretty much ready to upload my revision and host everything at OSF, but first I have a comment, and then a question.
1. My comment regards how to interpret the size of the key effect here (i.e., the -.65 beta for IQ predicting Trump when controlling White). I still think it’s a large effect. As you note, it’s only 1.8 IQ points because the state SD is 2.7 (not 15). I guess I still don’t understand why we should impute the 1.8 points to individual IQ scores with a SD of 15, given that the paper uses only state-level data.
2. Robustness tests. Frankly, I don’t know how to do these, nor do I have (or have ever used) a stats package like R. Here is a plot for the key effect in my paper (IQ predicting Trump when White is in the equation). I don’t see data points bunched up on the left side relative to the right side. I’m not sure what to do next, so any guidance is appreciated.
Bryan
p.s. The biggest outlier is Vermont, for some reason.
ETA, I guess this BBS doesn't like SPSS output tables, so I've attached it as a file.
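On the robustness question: one simple check that doesn't require anything beyond an exported spreadsheet is to drop one state at a time and refit the key regression, then look at how much the IQ coefficient moves (this would also flag Vermont, the outlier mentioned above). A minimal sketch in Python; the file name and column names (state, iq, pct_white, pct_trump) are hypothetical placeholders for whatever the data file actually uses.
[code]
import numpy as np
import pandas as pd

df = pd.read_csv("state_data.csv")  # hypothetical file and column names
cols = ["iq", "pct_white", "pct_trump"]

def std_beta_iq(d):
    """Standardized beta for IQ predicting %Trump, controlling %White."""
    zd = (d[cols] - d[cols].mean()) / d[cols].std()
    X = np.column_stack([np.ones(len(zd)), zd["iq"], zd["pct_white"]])
    betas, *_ = np.linalg.lstsq(X, zd["pct_trump"], rcond=None)
    return betas[1]

full = std_beta_iq(df)
leave_one_out = {s: std_beta_iq(df[df["state"] != s]) for s in df["state"]}

print("full-sample IQ beta:", round(full, 3))
print("leave-one-state-out range:",
      round(min(leave_one_out.values()), 3), "to",
      round(max(leave_one_out.values()), 3))
[/code]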
Attached is a revision that addresses, I think, all Emil's concerns.
All stuff is also posted at OSF:
http://osf.io/twkem
Bryan
Bryan,
Looks good to me. Will be interesting to see whether this suppression effect replicates at the county level. I didn't check.
I approve.
This version of the paper is further improved from the one that I reviewed previously for Intelligence. I have no additional recommendations and suggest acceptance.
Thank you for the review-- I think you've seen this four times now!
The final version (I think) is attached.
Bryan
p.s. Emil, didn't you know someone who does the formatting on these to make them look good / standardized?