
[OQSPS] Explaining Terrorism Threat Level Across Western Countries
Admin
When I get a chance I will pick a couple of random countries and compare RofP data to the Global Terrorism Database to see if the numbers match.


Please note that the website itself admits that its numbers may well be an undercount.

You now have three measures of roughly the same outcome, Islamic terrorism. You report their intercorrelations early on and I was surprised that you did not factor analyze them to produce a single measure that should have less idiosyncratic variance.


I would prefer not to include an additional factor analysis of the three measures in the paper. I believe it is interesting enough to report the associations and conditional models separately for each one. But feel free to use my data to run the analysis yourself.
Admin
At least, do the analysis and report whether the results were similar. The reader may be wondering: "Hmm, if the author used a composite measure, he might get stronger results; shame he didn't do that." You don't have to add another full page of results, just a paragraph summarizing the results with the composite.
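
For example, something like this would do (a minimal sketch using the psych package; here d_west stands for the Western-countries data frame with the paper's variable names, and the exact model on the last line is just for illustration):

library(psych)

#the three threat measures
v_terror = c("fco", "casualties_per_cap_log", "arrests_per_cap_log")

#extract one common factor; its scores should have less idiosyncratic variance
fa_terror = fa(d_west[v_terror], nfactors = 1)
d_west$terror_g = as.vector(fa_terror$scores)

#rerun a key model with the composite as the outcome
summary(lm(terror_g ~ muslim15_log + mildeaths_log, data = d_west))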

--

Furthermore, you note that you got non-robust/fragile results with some predictors. I think this is not true. It seems to me that you are being misled by NHST thinking. The betas are always positive and in fact vary fairly little (given the small sample size), but they do bounce across the p < alpha line. This is not a lack of robustness. A lack of robustness would be the beta bouncing between positive and negative values (or near-zero values). But this isn't what we see. The two you call non-robust have betas of 0.21–0.46 and 0.33–0.52, i.e. ranges of .25 and .19. Fairly small, given the small Ns. Compare with one of those you call robust: 0.45–0.72, a range of .27. No difference at all in robustness.

Please read e.g.: http://www.phil.vt.edu/dmayo/personal_website/Schmidt_StatSigTesting.pdf
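
To see how much of this bouncing is expected from sampling error alone, consider a quick simulation (hypothetical numbers: a true standardized beta of 0.40 and N = 26, roughly the smaller subsample size):

set.seed(1)
n = 26; beta_true = 0.40
sims = replicate(2000, {
  x = rnorm(n)
  y = beta_true * x + rnorm(n, sd = sqrt(1 - beta_true^2))
  fit = summary(lm(y ~ x))
  c(beta = fit$coefficients["x", "Estimate"],
    p = fit$coefficients["x", "Pr(>|t|)"])
})
mean(sims["beta", ]) #averages near the true 0.40
mean(sims["p", ] < .05) #well below 1: p crosses alpha in many samples

The estimates cluster around 0.40 while the p-value crosses the alpha line in a large share of samples; the former is robustness, the latter is just limited power.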
Admin
New version of the paper.

I have included an additional Appendix which reports the OLS models using a principal component of Islamist terrorism as the dependent variable. I have changed the example of Nigeria to Azerbaijan. And I have altered the language in the Abstract and Conclusion to reflect Emil's point about significance levels.
Admin
Progress has been made, but, sorry, I am still not satisfied.

while military intervention in the Middle East has a fairly strong relationship with the first (d = 0.69–1.63, r = .40; β = 0.39–0.45) and second measure (d = 0.53–1.46, r = .41; β = 0.28–0.46), but a less consistent relationship with the third (d = 0.39–2.37, r = .30; β = 0.33–0.52).


Both hypotheses received some support from the analyses. Percentage of Muslims in the population (logged) had a strong association with the first and third measures of terrorist threat, but a somewhat weaker association with the second. Military intervention in the Middle East has a fairly strong relationship with the first and second measure, but a less consistent relationship with the third: although all three measures of military intervention were significant in the multiple regression models, only part of anti-ISIS military coalition had significant bivariate association with log of 1 + number of arrests for religious terrorism.


What the author seems to be talking about is the consistency of the effect. But as can be seen, the effect is 100% consistent in direction and fairly consistent in size. What is not consistent is whether the p-value falls below alpha, but of what interest is this? Given the effect size and the sample size, even if there is a true effect, we would not expect a p-value < alpha in every case.
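
To put a number on it (a back-of-the-envelope check; I am assuming the pwr package here): power for r = .30, about the weakest association reported, at N = 26 is only about a third, so some p-values above alpha are exactly what one should expect even if the effect is real.

library(pwr)
#power to detect r = .30 at alpha = .05 with N = 26
pwr.r.test(n = 26, r = 0.30, sig.level = 0.05) #power comes out around .3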

Let me explain why I think it is a problem calling this a lack of consistency by quoting the classic Schmidt and Hunter (1984) paper:
http://onlinelibrary.wiley.com/doi/10.1111/j.1744-6570.1984.tb01453.x/abstract

By the end of World War II, hundreds of validation studies had been conducted to determine the extent to which ability tests predict performance on different jobs. When these studies were compiled by Ghiselli (1966), he found marked variation in validity coefficients even across sets of studies in which very similar tests were used to predict performance on seemingly similar jobs. Ghiselli concluded that most of this variance was apparently due to specific factors that determined the nature of job performance in particular settings. This situational specificity hypothesis of employment test validities predated Ghiselli’s research; the effect of Ghiselli’s findings was to provide seemingly strong support for that hypothesis.

The situational specificity hypothesis, as it has existed historically, makes two predictions, one well known and obvious and one that is less well known and more subtle. The well known and obvious prediction is that there will be substantial variation in validities across settings and organizations for the same and similar tests and jobs, and this variation will be real. Research in validity generalization has challenged this prediction by demonstrating that most, and in many cases all, of the variance in validity coefficients across studies is due to statistical and measurement artifacts, principally sampling error. The alternative hypothesis tested has been that situational specificity of job performance is either nonexistent or has little or no effect on validity coefficients of ability tests used in employment, and that, as a consequence, employment test validities are generalizable. In recent years, research evidence has accumulated indicating that the validities of numerous employment selection procedures are indeed generalizable across settings, organizations, and time periods. For summaries of the research on validity generalization, see Schmidt and Hunter (1981), Hunter and Schmidt (1983), and Linn and Dunbar (1982).


In other words, using language like this ("less consistent") invites the error that plagued the earlier studies of test validity for personnel selection: attributing variation caused by sampling error to moderator effects where none existed.

So, I would be happy if the author either: 1) clarified that it was only the p-value that was not always < alpha, as expected from sampling error, OR 2) removed the mention of the lack of consistency of the p-values.

---

I have yet to re-do my analytic replication because I am swamped with work from another project. I will get it done.
Admin
Here's the output (including echoed commands) of my analytic replication. I went over it together with Noah. The only thing that did not replicate was the confidence intervals for the Cohen's d values. Perhaps Stata calculates these in some other way. It is of minor importance, however.
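
(My guess, and it is only a guess, is that the two programs use different approximations for the interval. One common large-sample version, for comparison:)

#a common large-sample CI for Cohen's d; this is just one convention,
#and programs differ (e.g. noncentral t methods), which would explain the mismatch
d_ci = function(d, n1, n2, level = .95) {
  se = sqrt((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2)))
  z = qnorm(1 - (1 - level) / 2)
  c(lower = d - z * se, upper = d + z * se)
}
d_ci(-0.69, 7, 21) #e.g. the d for threat level by any military deaths (groups of 7 and 21)

Neither program need match this exactly, which is rather the point.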

I will have Noah put my R code in the repository.

So I am satisfied with the replicability of this paper.


> source('Z:/Projects/Noah Carl terrorism 2016 paper/R/start.R', echo=TRUE)

> ### ANALYTIC REPLICATIONS OF NOAH CARL'S Explaining Terrorism Threat Level Across Western Countries
>
>
> # libs --------------------------------- .... [TRUNCATED]

> p_load(XLConnect, kirkegaard, psych, weights, magrittr, effsize, lsr, compute.es, MASS)

> # data --------------------------------------------------------------------
>
> d_main = readWorksheetFromFile(file = "data/Terrorism Threat.xlsx", .... [TRUNCATED]

> rownames(d_main) = d_main$country

> # missing pop data --------------------------------------------------------
> #if pop data is missing, use eu pop data
> d_main$pop10 = d_main[c("po ..." ... [TRUNCATED]

> # derived variables ------------------------------------------
>
> #dummies
> d_main$west = as.logical(d_main$west)

> d_main$europe = as.logical(d_main$europe)

> #muslim 2015 estimation
> d_main$muslim15 = d_main$muslim10 + .25 * (d_main$muslim30 - d_main$muslim10)

> d_main$muslim15_log = log10(d_main$muslim15 + 1)

> #winzor turkey's value
> d_main["Turkey", "muslim15"] = 19.075

> ## outcome transformations
>
> #casualties
> d_main$casualties = d_main$deaths + d_main$injuries

> d_main$casualties_per_cap = (d_main$casualties / d_main$pop10) * 1e6

> d_main$casualties_per_cap_log = log10(d_main$casualties_per_cap + 1)

> #arrests outcome
> d_main$arrests = df_func(d_main, pattern = "rear", func = function(row, na.rm) {
+ if(all(is.na(row))) return(NA)
+ sum(row, .... [TRUNCATED]

> d_main$arrests_per_cap = (d_main$arrests / d_main$popeu10) * 1e6

> d_main$arrests_per_cap_log = log10(d_main$arrests_per_cap + 1)

> #var names
> v_terror_measures = c("fco", "casualties_per_cap_log", "arrests_per_cap_log")

> #military deaths
> d_main$mildeaths_log = log10(1 + d_main$mildeaths)

> #log gdp
> d_main$gdp_log = log10(d_main$gdpcap14)

> d_main$gdpeu_log = log10(d_main$gdpeu14)

> #factors
> d_main$any_num = d_main$any

> d_main$any = as.factor(d_main$any)

> d_main$part_num = d_main$part

> d_main$part = as.factor(d_main$part)

> # subsets -----------------------------------------------------------------
>
> d_west = d_main[d_main$west, ]

> d_euro = d_main[d_main$europe, ]

> d_eu = d_main[d_main$eu %>% as.logical(), ]

> d_eu_sub = d_main[d_main$eu %>% as.logical() & !rownames(d_main) %in% c("United Kingdom", "Croatia"), ]

> d_euwest = d_main[d_main$eu | d_main$west, ]

> d_oecd = d_main[!is.na(d_main$unemp14), ]

> d_euro_oecd = d_main[!is.na(d_main$unemp14) & d_main$europe, ]
> source('Z:/Projects/Noah Carl terrorism 2016 paper/R/analyses.R', echo=TRUE)

> # section 2.1 -------------------------------------------------------------
>
> #threat level
> psych::describe(d_west$fco)
vars n mean sd median trimmed mad min max range skew kurtosis se
1 1 28 2.43 1.14 2.5 2.42 1.48 1 4 3 0.02 -1.48 0.21

> #casualities
> psych::describe(d_west$casualties_per_cap)
vars n mean sd median trimmed mad min max range skew kurtosis se
1 1 28 4.11 9.7 0.2 1.93 0.3 0 44.11 44.11 2.86 8.15 1.83

> #arrests
> psych::describe(d_eu_sub$arrests_per_cap) #notice all data sample
vars n mean sd median trimmed mad min max range skew kurtosis se
1 1 26 1.98 2.62 0.95 1.53 1.41 0 11.12 11.12 1.86 3.36 0.51

> #intercors
> cor.test(d_euwest$fco, d_euwest$casualties_per_cap_log)

Pearson's product-moment correlation

data: d_euwest$fco and d_euwest$casualties_per_cap_log
t = 4.9694, df = 33, p-value = 2.019e-05
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.4104835 0.8107262
sample estimates:
cor
0.6542357


> cor.test(d_euwest$fco, d_euwest$arrests_per_cap_log)

Pearson's product-moment correlation

data: d_euwest$fco and d_euwest$arrests_per_cap_log
t = 5.2074, df = 24, p-value = 2.46e-05
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.4750103 0.8701950
sample estimates:
cor
0.7283488


> cor.test(d_euwest$arrests_per_cap_log, d_euwest$casualties_per_cap_log)

Pearson's product-moment correlation

data: d_euwest$arrests_per_cap_log and d_euwest$casualties_per_cap_log
t = 5.1142, df = 24, p-value = 3.114e-05
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.4647961 0.8669761
sample estimates:
cor
0.7221351


> #scatterplots
> GG_scatter(d_euwest, x_var = "casualties_per_cap_log", y_var = "fco")
Loading required package: multilevel
Loading required package: nlme

Attaching package: ‘psychometric’

The following object is masked from ‘package:ggplot2’:

alpha

The following object is masked from ‘package:psych’:

alpha


> GG_scatter(d_euwest, x_var = "arrests_per_cap_log", y_var = "fco")

> GG_scatter(d_euwest, x_var = "arrests_per_cap_log", y_var = "casualties_per_cap_log")

> # section 2.2 -------------------------------------------------------------
>
> #muslims
> psych::describe(d_west$muslim15)
vars n mean sd median trimmed mad min max range skew kurtosis se
1 1 28 3.12 2.58 2.52 3 3.45 0 8.2 8.2 0.31 -1.38 0.49

> psych::describe(d_eu_sub$muslim15)
vars n mean sd median trimmed mad min max range skew kurtosis se
1 1 26 3.88 5.15 2.35 2.92 3.34 0 22.7 22.7 2.05 4.54 1.01

> #western deaths
> table(d_west$any)

0 1
7 21

> table(d_west$any) %>% prop.table()

0 1
0.25 0.75

> psych::describe(d_west$mildeaths)
vars n mean sd median trimmed mad min max range skew kurtosis se
1 1 28 293.86 1295.96 11 29.83 16.31 0 6878 6878 4.68 20.8 244.91

> #western isis
> table(d_west$part)

0 1
21 7

> table(d_west$part) %>% prop.table()

0 1
0.75 0.25

> #eu subsample deaths
> table(d_eu_sub$any)

0 1
7 19

> table(d_eu_sub$any) %>% prop.table()

0 1
0.2692308 0.7307692

> psych::describe(d_eu_sub$mildeaths)
vars n mean sd median trimmed mad min max range skew kurtosis se
1 1 26 20.77 27.19 6.5 16.95 9.64 0 86 86 1.08 -0.23 5.33

> #eu sub against isis
> table(d_eu_sub$part)

0 1
22 4

> table(d_eu_sub$part) %>% prop.table()

0 1
0.8461538 0.1538462

> # section 3.1 ---------------------------------------------------------------------
> cor.test(d_west$fco, d_west$muslim15_log)

Pearson's product-moment correlation

data: d_west$fco and d_west$muslim15_log
t = 4.755, df = 26, p-value = 6.417e-05
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.4143524 0.8410761
sample estimates:
cor
0.6820019


> lm("fco ~ poly(muslim15_log, 2)", data = d_west) %>% summary()

Call:
lm(formula = "fco ~ poly(muslim15_log, 2)", data = d_west)

Residuals:
Min 1Q Median 3Q Max
-2.21732 -0.21609 -0.08645 0.65262 1.51373

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.4286 0.1625 14.948 5.67e-14 ***
poly(muslim15_log, 2)1 4.0265 0.8597 4.684 8.46e-05 ***
poly(muslim15_log, 2)2 -0.4081 0.8597 -0.475 0.639
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.8597 on 25 degrees of freedom
Multiple R-squared: 0.4699, Adjusted R-squared: 0.4275
F-statistic: 11.08 on 2 and 25 DF, p-value: 0.0003585


> cohen.d(d = d_west$fco, f = d_west$any)

Cohen's d

d estimate: -0.6888248 (medium)
95 percent confidence interval:
inf sup
-1.6418342 0.2641846

> cor.test(d_west$fco, d_west$mildeaths_log)

Pearson's product-moment correlation

data: d_west$fco and d_west$mildeaths_log
t = 2.1992, df = 26, p-value = 0.03696
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.02693856 0.67010334
sample estimates:
cor
0.3960353


> lm("fco ~ poly(mildeaths_log, 2)", data = d_west) %>% summary()

Call:
lm(formula = "fco ~ poly(mildeaths_log, 2)", data = d_west)

Residuals:
Min 1Q Median 3Q Max
-1.76252 -0.87979 0.09177 0.88862 1.96319

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.4286 0.2048 11.856 9.29e-12 ***
poly(mildeaths_log, 2)1 2.3382 1.0839 2.157 0.0408 *
poly(mildeaths_log, 2)2 -0.1427 1.0839 -0.132 0.8963
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.084 on 25 degrees of freedom
Multiple R-squared: 0.1574, Adjusted R-squared: 0.09002
F-statistic: 2.336 on 2 and 25 DF, p-value: 0.1175


> SMD_matrix(d_west$fco, d_west$part)

Attaching package: ‘plyr’

The following objects are masked from ‘package:Hmisc’:

is.discrete, summarize

0 1
0 NA -1.632013
1 -1.632013 NA

> cohen.d(d = d_west$fco, f = d_west$part)

Cohen's d

d estimate: -1.632013 (large)
95 percent confidence interval:
inf sup
-2.6807130 -0.5833124

> #scatterplots
> GG_scatter(d_west, x_var = "muslim15_log", y_var = "fco")

> GG_scatter(d_euwest, x_var = "mildeaths_log", y_var = "fco")

> #table 1
> d_betas = MOD_APSLM(dependent = "fco", predictors = c("muslim15_log", "any_num", "mildeaths_log", "part_num", "gdp_log", "unemp14", "ineq ..." ... [TRUNCATED]

Attaching package: ‘gtools’

The following object is masked from ‘package:psych’:

logit


> #the models we want
> d_betas[c(8:10, 108, 112, 113), ]
muslim15_log any_num mildeaths_log part_num gdp_log unemp14 ineq0911 AIC BIC r2 r2.adj. R R.adj. N
8 0.72 0.37 NA NA NA NA NA 60.84 66.17 0.60 0.57 0.77 0.75 28
9 0.68 NA 0.39 NA NA NA NA 59.51 64.84 0.62 0.59 0.79 0.77 28
10 0.53 NA NA 0.39 NA NA NA 61.39 66.72 0.59 0.56 0.77 0.75 28
108 0.59 0.43 NA NA 0.28 0.33 0.19 54.85 64.18 0.74 0.68 0.86 0.82 28
112 0.58 NA 0.45 NA 0.19 0.37 0.00 57.86 67.19 0.71 0.64 0.84 0.80 28
113 0.47 NA NA 0.41 0.11 0.30 0.16 57.95 67.28 0.71 0.64 0.84 0.80 28

> # section 3.2 -------------------------------------------------------------
>
> cor.test(d_west$casualties_per_cap_log, d_west$muslim15_log)

Pearson's product-moment correlation

data: d_west$casualties_per_cap_log and d_west$muslim15_log
t = 2.4505, df = 26, p-value = 0.0213
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.07166936 0.69407601
sample estimates:
cor
0.4331643


> lm("casualties_per_cap_log ~ poly(muslim15_log, 2)", data = d_west) %>% summary()

Call:
lm(formula = "casualties_per_cap_log ~ poly(muslim15_log, 2)",
data = d_west)

Residuals:
Min 1Q Median 3Q Max
-0.52171 -0.27188 -0.08152 -0.00535 1.32899

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.32293 0.08716 3.705 0.00105 **
poly(muslim15_log, 2)1 1.11005 0.46121 2.407 0.02381 *
poly(muslim15_log, 2)2 0.13051 0.46121 0.283 0.77952
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.4612 on 25 degrees of freedom
Multiple R-squared: 0.1902, Adjusted R-squared: 0.1254
F-statistic: 2.936 on 2 and 25 DF, p-value: 0.07154


> cohen.d(d = d_west$casualties_per_cap_log, f = d_west$any)

Cohen's d

d estimate: -0.533525 (medium)
95 percent confidence interval:
inf sup
-1.4777792 0.4107291

> cor.test(d_west$casualties_per_cap_log, d_west$mildeaths_log)

Pearson's product-moment correlation

data: d_west$casualties_per_cap_log and d_west$mildeaths_log
t = 2.2741, df = 26, p-value = 0.03145
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.04036461 0.67744253
sample estimates:
cor
0.4073078


> lm("casualties_per_cap_log ~ poly(mildeaths_log, 2)", data = d_west) %>% summary()

Call:
lm(formula = "casualties_per_cap_log ~ poly(mildeaths_log, 2)",
data = d_west)

Residuals:
Min 1Q Median 3Q Max
-0.38671 -0.23202 -0.18360 -0.00793 1.30405

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.3229 0.0859 3.759 0.000917 ***
poly(mildeaths_log, 2)1 1.0438 0.4546 2.296 0.030316 *
poly(mildeaths_log, 2)2 0.5588 0.4546 1.229 0.230409
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.4546 on 25 degrees of freedom
Multiple R-squared: 0.2134, Adjusted R-squared: 0.1505
F-statistic: 3.392 on 2 and 25 DF, p-value: 0.04973


> cohen.d(d = d_west$casualties_per_cap_log, f = d_west$part)

Cohen's d

d estimate: -1.455832 (large)
95 percent confidence interval:
inf sup
-2.4815849 -0.4300796

> #scatterplots
> GG_scatter(d_west, x_var = "muslim15_log", y_var = "casualties_per_cap_log")

> GG_scatter(d_euwest, x_var = "mildeaths_log", y_var = "casualties_per_cap_log")

> #table 2
> d_betas = MOD_APSLM(dependent = "casualties_per_cap_log", predictors = c("muslim15_log", "any_num", "mildeaths_log", "part_num", "gdp_log ..." ... [TRUNCATED]

> #the models we want
> d_betas[c(8:10, 108, 112, 113), ]
muslim15_log any_num mildeaths_log part_num gdp_log unemp14 ineq0911 AIC BIC r2 r2.adj. R R.adj. N
8 0.46 0.28 NA NA NA NA NA 77.82 83.15 0.27 0.21 0.51 0.45 28
9 0.43 NA 0.40 NA NA NA NA 74.35 79.67 0.35 0.30 0.59 0.55 28
10 0.26 NA NA 0.45 NA NA NA 74.06 79.39 0.36 0.31 0.60 0.55 28
108 0.37 0.30 NA NA 0.19 0.21 0.22 79.56 88.89 0.37 0.23 0.61 0.47 28
112 0.35 NA 0.44 NA 0.16 0.30 0.01 77.56 86.89 0.41 0.28 0.64 0.53 28
113 0.21 NA NA 0.46 0.09 0.24 0.15 75.91 85.24 0.45 0.32 0.67 0.57 28

> # section 3.3 -------------------------------------------------------------
>
> cor.test(d_eu_sub$arrests_per_cap_log, d_eu_sub$muslim15_log)

Pearson's product-moment correlation

data: d_eu_sub$arrests_per_cap_log and d_eu_sub$muslim15_log
t = 4.1268, df = 24, p-value = 0.0003823
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.3423411 0.8255832
sample estimates:
cor
0.64426


> lm("arrests_per_cap_log ~ poly(muslim15_log, 2)", data = d_eu_sub) %>% summary()

Call:
lm(formula = "arrests_per_cap_log ~ poly(muslim15_log, 2)", data = d_eu_sub)

Residuals:
Min 1Q Median 3Q Max
-0.51482 -0.14481 -0.07429 0.18269 0.49859

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.3516 0.0495 7.104 3.09e-07 ***
poly(muslim15_log, 2)1 1.0233 0.2524 4.054 0.000492 ***
poly(muslim15_log, 2)2 -0.1017 0.2524 -0.403 0.690613
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.2524 on 23 degrees of freedom
Multiple R-squared: 0.4192, Adjusted R-squared: 0.3687
F-statistic: 8.299 on 2 and 23 DF, p-value: 0.001934


> cohen.d(d = d_eu_sub$arrests_per_cap_log, f = d_eu_sub$any)

Cohen's d

d estimate: -0.3896577 (small)
95 percent confidence interval:
inf sup
-1.3471069 0.5677916

> cor.test(d_eu_sub$arrests_per_cap_log, d_eu_sub$mildeaths_log)

Pearson's product-moment correlation

data: d_eu_sub$arrests_per_cap_log and d_eu_sub$mildeaths_log
t = 1.5137, df = 24, p-value = 0.1432
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
-0.1040370 0.6125213
sample estimates:
cor
0.2952116


> lm("arrests_per_cap_log ~ poly(mildeaths_log, 2)", data = d_eu_sub) %>% summary()

Call:
lm(formula = "arrests_per_cap_log ~ poly(mildeaths_log, 2)",
data = d_eu_sub)

Residuals:
Min 1Q Median 3Q Max
-0.51633 -0.25960 0.00144 0.19440 0.62486

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.35164 0.06083 5.781 6.88e-06 ***
poly(mildeaths_log, 2)1 0.46890 0.31017 1.512 0.144
poly(mildeaths_log, 2)2 0.30038 0.31017 0.968 0.343
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.3102 on 23 degrees of freedom
Multiple R-squared: 0.1229, Adjusted R-squared: 0.04665
F-statistic: 1.612 on 2 and 23 DF, p-value: 0.2213


> cohen.d(d = d_eu_sub$arrests_per_cap_log, f = d_eu_sub$part)

Cohen's d

d estimate: -2.365502 (large)
95 percent confidence interval:
inf sup
-3.7444042 -0.9865992

> #scatterplots
> GG_scatter(d_eu_sub, x_var = "muslim15_log", y_var = "arrests_per_cap_log")

> GG_scatter(d_eu_sub, x_var = "mildeaths_log", y_var = "arrests_per_cap_log")

> #table 3
> d_betas = MOD_APSLM(dependent = "arrests_per_cap_log", predictors = c("muslim15_log", "any_num", "mildeaths_log", "part_num", "gdpeu_log" .... [TRUNCATED]

> #the models we want
> d_betas[c(8:10, 108, 112, 113), ]
muslim15_log any_num mildeaths_log part_num gdpeu_log uneu14 ineu14 AIC BIC r2 r2.adj. R R.adj. N
8 0.72 0.34 NA NA NA NA NA 61.33 66.37 0.53 0.49 0.73 0.70 26
9 0.66 NA 0.33 NA NA NA NA 61.57 66.60 0.52 0.48 0.72 0.69 26
10 0.46 NA NA 0.49 NA NA NA 55.81 60.85 0.62 0.58 0.79 0.76 26
108 0.69 0.38 NA NA 0.05 0.13 -0.19 65.42 74.22 0.56 0.45 0.75 0.67 26
112 0.63 NA 0.36 NA 0.01 0.13 -0.21 65.64 74.45 0.56 0.45 0.75 0.67 26
113 0.45 NA NA 0.52 -0.10 0.11 -0.06 60.46 69.26 0.64 0.55 0.80 0.74 26

> # appendix B --------------------------------------------------------------
>
> #table B1
> d_betas = MOD_APSLM(dependent = "fco", predictors = c(" ..." ... [TRUNCATED]

> #the models we want
> d_betas[c(8:10, 108, 112, 113), ]
muslim15_log any_num mildeaths_log part_num gdp_log unemp14 ineq0911 AIC BIC r2 r2.adj. R R.adj. N
8 0.68 0.26 NA NA NA NA NA 73.39 79.49 0.59 0.56 0.77 0.75 34
9 0.69 NA 0.33 NA NA NA NA 69.47 75.57 0.63 0.61 0.80 0.78 34
10 0.57 NA NA 0.33 NA NA NA 71.53 77.64 0.61 0.58 0.78 0.76 34
108 0.55 0.36 NA NA 0.40 0.32 0.37 62.19 72.88 0.75 0.71 0.87 0.84 34
112 0.61 NA 0.31 NA 0.27 0.28 0.23 65.17 75.86 0.73 0.68 0.85 0.82 34
113 0.48 NA NA 0.35 0.27 0.33 0.22 65.63 76.31 0.72 0.68 0.85 0.82 34

> #table B2
> d_betas = MOD_APSLM(dependent = "fco", predictors = c("muslim15_log", "any_num", "mildeaths_log", "part_num", "gdp_log", "unemp14", "ine ..." ... [TRUNCATED]

> #the models we want
> d_betas[c(8:10, 108, 112, 113), ]
muslim15_log any_num mildeaths_log part_num gdp_log unemp14 ineq0911 AIC BIC r2 r2.adj. R R.adj. N
8 0.76 0.37 NA NA NA NA NA 48.48 53.19 0.67 0.64 0.82 0.80 24
9 0.71 NA 0.34 NA NA NA NA 49.77 54.48 0.65 0.62 0.81 0.79 24
10 0.60 NA NA 0.26 NA NA NA 53.82 58.53 0.59 0.55 0.77 0.74 24
108 0.65 0.43 NA NA 0.21 0.32 0.14 44.05 52.30 0.79 0.73 0.89 0.85 24
112 0.61 NA 0.41 NA 0.20 0.38 0.01 47.38 55.62 0.75 0.69 0.87 0.83 24
113 0.50 NA NA 0.34 0.09 0.29 0.15 51.42 59.67 0.71 0.63 0.84 0.79 24

> # appendix C --------------------------------------------------------------
>
> #table C1
> d_betas = MOD_APSLM(dependent = "casualties_per_cap_log ..." ... [TRUNCATED]

> #the models we want
> d_betas[c(8:10, 108, 112, 113), ]
muslim15_log any_num mildeaths_log part_num gdp_log unemp14 ineq0911 AIC BIC r2 r2.adj. R R.adj. N
8 0.59 0.20 NA NA NA NA NA 84.94 91.04 0.42 0.38 0.65 0.62 34
9 0.60 NA 0.17 NA NA NA NA 85.50 91.61 0.41 0.37 0.64 0.61 34
10 0.56 NA NA 0.14 NA NA NA 86.18 92.29 0.40 0.36 0.63 0.60 34
108 0.54 0.27 NA NA 0.12 0.07 0.32 86.25 96.93 0.49 0.40 0.70 0.64 34
112 0.59 NA 0.16 NA 0.03 0.04 0.22 88.72 99.40 0.46 0.36 0.68 0.60 34
113 0.54 NA NA 0.13 0.04 0.06 0.22 89.50 100.19 0.44 0.34 0.67 0.59 34

> #table C2
> d_betas = MOD_APSLM(dependent = "casualties_per_cap_log", predictors = c("muslim15_log", "any_num", "mildeaths_log", "part_num", "gdp_lo ..." ... [TRUNCATED]

> #the models we want
> d_betas[c(8:10, 108, 112, 113), ]
muslim15_log any_num mildeaths_log part_num gdp_log unemp14 ineq0911 AIC BIC r2 r2.adj. R R.adj. N
8 0.53 0.29 NA NA NA NA NA 65.18 69.89 0.34 0.28 0.58 0.52 24
9 0.48 NA 0.31 NA NA NA NA 64.83 69.55 0.35 0.29 0.59 0.53 24
10 0.31 NA NA 0.38 NA NA NA 64.24 68.95 0.36 0.30 0.60 0.55 24
108 0.47 0.32 NA NA 0.11 0.19 0.13 68.94 77.18 0.40 0.23 0.63 0.48 24
112 0.42 NA 0.34 NA 0.11 0.25 0.02 68.92 77.16 0.40 0.23 0.63 0.48 24
113 0.23 NA NA 0.45 0.07 0.22 0.14 66.95 75.20 0.45 0.29 0.67 0.54 24

> # appendix D --------------------------------------------------------------
>
> #general terrorism score
> pca_terror = principal(d_eu_sub[v_terror .... [TRUNCATED]

> d_eu_sub$pca_terror = pca_terror$scores %>% as.vector()

> #table D1
> d_betas = MOD_APSLM(dependent = "pca_terror", predictors = c("muslim15_log", "any_num", "mildeaths_log", "part_num", "gdpeu_log", "uneu1 ..." ... [TRUNCATED]

> #the models we want
> d_betas[c(8:10, 108, 112, 113), ]
muslim15_log any_num mildeaths_log part_num gdpeu_log uneu14 ineu14 AIC BIC r2 r2.adj. R R.adj. N
8 0.77 0.34 NA NA NA NA NA 57.86 62.89 0.59 0.55 0.77 0.74 26
9 0.71 NA 0.31 NA NA NA NA 58.80 63.84 0.57 0.53 0.76 0.73 26
10 0.54 NA NA 0.39 NA NA NA 56.44 61.47 0.61 0.57 0.78 0.76 26
108 0.67 0.43 NA NA 0.24 0.28 -0.10 58.10 66.90 0.67 0.58 0.82 0.76 26
112 0.60 NA 0.38 NA 0.19 0.27 -0.13 60.26 69.07 0.64 0.55 0.80 0.74 26
113 0.45 NA NA 0.45 0.07 0.24 0.01 59.15 67.95 0.65 0.57 0.81 0.75 26

> # extra -------------------------------------------------------------------
>
> #factor analysis
> fa_terror = fa(d_west[v_terror_measures])

> d_west$fa_terror = fa_terror$scores %>% as.vector()

> #lasso
> d_betas_lasso = MOD_LASSO(df = d_west, dependent = "fa_terror", predictors = c("muslim15_log", "any_num", "mildeaths_log", "part_num", "gdp ..." ... [TRUNCATED]
Loading required package: Matrix
Loading required package: foreach
foreach: simple, scalable parallel programming from Revolution Analytics
Use Revolution R for scalability, fault tolerance and more.
http://www.revolutionanalytics.com
Loaded glmnet 2.0-5

Missing data removed.
Data standardized.

> MOD_summarize_models(d_betas_lasso)
muslim15_log any_num mildeaths_log part_num gdp_log unemp14 ineq0911
mean 0.055 0 0 0.014 0 0 0
median 0.060 0 0 0.000 0 0 0
sd 0.053 0 0 0.024 0 0 0
mad 0.089 0 0 0.000 0 0 0
fraction_zeroNA 0.412 1 1 0.648 1 1 1
There were 50 or more warnings (use warnings() to see the first 50)
Admin
Thanks to Emil for the replication. New versions have been uploaded.
Admin
I approve of publication.
1. I think that the Conclusion is correct that, of the three measures of Islamist terrorism, the casualties per capita measure is the most valid. As the manuscript mentions, analyses with the measure of terror threat level can be "somewhat tautological" (p. 4) if percentage Muslim informs the terror level; the third measure can also be biased if arrests are influenced by percentage Muslim. If the casualties per capita measure is the most valid, it might be best to discuss that measure first.

2. The number of casualties per capita is an important measure to predict for assessing "the risk of Islamist terrorism" (p. 3), but it might be worth also predicting the number of attacks as a different measure of risk.

3. I'm not sure why the interpolated 2015 estimate for percentage Muslim is used, for at least some models. For example, casualties sum across years from 2001 to 2016, so the 2010 estimate closer to the middle of the time period would appear to be a better predictor than the interpolated 2015 estimate that is near the end of the time period.

4. Page 7 of the manuscript indicates that there are four of 26 EU countries identified as part of the anti-ISIS military coalition, but the Excel file indicates five ("part" variable: Belgium, Denmark, France, Netherlands, and the UK) of 28 EU countries ("eu" variable).

5. Suggested edits:

* page 9 "of any military deaths in in Iraq" [double "in"]

* page 8: "it was significant (p = 0.896)" [not significant?]

* page 10 "it was significant (p = 0.230)" [not significant?]

* page 12 "it was significant (p = 0.343)" [not significant?]
Admin
Thanks to L.J. Zigerell for the review. New versions have been uploaded.

it might be worth also predicting the number of attacks as a different measure of risk.


I have added a new section with Islamist terrorist attacks per capita as the dependent variable.

If the casualties per capita measure is the most valid, it might be best to discuss that measure first.


The two sections on Islamist terrorist attacks and casualties from Islamist terrorism now come first.

I'm not sure why the interpolated 2015 estimate for percentage Muslim is used.


I now use percentage Muslim in 2010 in all models.
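
(For reference, the dropped 2015 figure was a simple linear interpolation between the 2010 estimate and the 2030 projection, as in the replication script above:)

#2015 lies a quarter of the way from 2010 to 2030
muslim15 = muslim10 + 0.25 * (muslim30 - muslim10)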

Page 7 of the manuscript indicates that there are four of 26 EU countries identified as part of the anti-ISIS military coalition, but the Excel file indicates five


This is because the UK, one of the countries that is part of the anti-ISIS military coalition, is missing on the variable arrests for religiously motivated terrorism. Note that in the most recent version of the paper, Germany also has a 1 on part of anti-ISIS military coalition. I re-checked the Wikipedia page, and it seems the German parliament voted to provide military aid in late 2015.

Typos have been corrected.
Thanks for the additional analyses, changes, and the explanation.

I approve the submission for publication.

L.J
Admin
I would like to assign John Fuerst as an external reviewer, since he has already reviewed the paper.
I checked, if somewhat haphazardly, the "Religion of Peace" data. It seems to be reliable. Granting so, I don't see any unresolved problems with the paper, and so I approve publication.
Admin
We have three approvals per the journal requirement:
1. Emil Kirkegaard
2. LJ Zigerell
3. John Fuerst "Chuck" (external)

So, before publication, what remains to be done is for the author to make a final version with the right dates (and any last check for errors), upload it to OSF, notify me and then I will publish it.
Admin
Noah has finished the publication version and updated the OSF. It has been published:

http://openpsych.net/OQSPS/2016/06/an-analysis-of-islamist-terrorism-across-western-countries/

Moving thread...
Admin
Following the spate of Islamist terrorist attacks over the summer, I have added a post-publication supplement to the paper's OSF page.