
No Fair Sex in Academia: Is Hiring to Editorial Boards Gender Biased?

Submission status
Accepted

Submission Editor
Noah Carl

Authors
George Francis
Emil O. W. Kirkegaard

Title
No Fair Sex in Academia: Evidence of Discrimination in Hiring to Editorial Boards

Abstract

The editorial boards of academic journals overrepresent men, even relative to their proportion in university faculties. We test whether this sex disparity is caused by anti-female bias, reasoning that anti-female discrimination would require women to have a higher research output than men to overcome the bias against them. We collect a dataset of the research output and sex of 4,319 academics on the editorial boards of 120 journals within four social science disciplines: Anthropology, Psychology, Political Science and Economics. Using a transformation of the h-index as our indicator of research output, we find male research output to be 0.35 standard deviations (p < 0.001) above female research output. However, the gap falls to 0.13 standard deviations (p < 0.001) when years publishing is controlled for. Our results are replicated with alternative dependent variables and with robust regression. We followed up our research with a survey of 231 academics, asking about their attitudes towards discrimination in hiring to editorial boards. Although two-thirds of academics supported no bias, for every academic who supported discrimination in favour of men, 11 supported discrimination in favour of women. Our results are consistent with the hypothesis that academics and journal editors are biased in favour of women, rather than against them.

 

Keywords
gender, sex, discrimination, academia

Supplemental materials link
https://osf.io/9ckdt/

Reviewers ( 0 / 0 / 2 )
Reviewer 1: Accept
Reviewer 2: Accept
Public Note
Supplementary files and code will be added at a later date.

Mon 12 Jul 2021 19:41

Author | Admin

Thank you for your quick response. I've made edits in line with your suggestions. We have also provided an explanation to allay your concerns regarding Table 6.  

 

34 -> “Only 2% of the individuals considered to be ‘eminent’ in science, before 1950”

Fixed

 

 

137 These sentences are confusing if the reader is not fully aware that you’re referring to the sex of the actual board members specifically – if it refers to academics in general, the conclusions would be the opposite. So to avoid confusion you might want to be overexplicit, like “Thus if women on boards have a higher academic output, despite their lower variance in IQ, we can be confident that there is anti-female bias for admission to the board. We can also say that the larger the sex difference in favour of men on boards, the lower the likelihood of anti-female bias and the higher the likelihood of anti-male bias for admission to the board. So if men on editorial boards have a higher academic output than women we can be confident that there is no anti-female bias for admitting board members.”

 

I’ve edited this section now to be more explicit that I’m referring to editorial board members:

It must be noted that a sex difference in the academic output of editorial board members can only be an indicator, not proof of sex bias. As mentioned, the variance in intelligence is higher amongst males, and their average also seems to be somewhat higher. This would cause men, on editorial boards, to have a higher academic output even if there was no bias. Thus if women have a higher academic output, despite their lower variance in IQ, we can be confident that there is anti-female bias. We can also say that the larger the sex difference in favour of men, the lower the likelihood of anti-female bias and the higher the likelihood of anti-male bias. So if men have a higher academic output than women we can be confident that there is no extreme anti-female bias.

 

272 -> “scaled into standard deviation units as Z-scores, according to” or even better  

271 -> “were first log10 transformed and then Z-transformed into standard deviation units within each academic discipline”

Fixed
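To make the procedure concrete, here is a minimal sketch of that transformation in Python, assuming a pandas DataFrame with hypothetical columns h_index and discipline (an illustration only, not our actual analysis code):

```python
import numpy as np
import pandas as pd

# Hypothetical example data; column names and values are illustrative.
df = pd.DataFrame({
    "h_index": [3, 12, 25, 7, 40, 18],
    "discipline": ["Psychology", "Psychology", "Psychology",
                   "Economics", "Economics", "Economics"],
})

# Step 1: log10-transform the h-index.
df["log_h"] = np.log10(df["h_index"])

# Step 2: Z-transform (mean 0, SD 1) within each academic discipline.
df["h_z"] = df.groupby("discipline")["log_h"].transform(
    lambda x: (x - x.mean()) / x.std()
)
```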

 

423 Very confusing to look at a graph with a distribution mean of 0 and the caption reads “Distributions of Log10 Transformed h-Index”. Please add that the data are z-transformed.

New caption: “Distributions of Log10 then Z-Transformed h-Index of female and male editorial board members”

 

 

Table 6. Still not clear to me what the function of the numbers 1-12 is; before, I thought it was to identify the same as the horizontal indices. The F values still seem much too high. I ran several MRAs with similar data, and got for example R^2 = 0.048, F = 4.66, p = .0013 and R^2 = 0.086, F = 5.016, p = .0017. So regardless of higher or lower R^2, F is always much lower than your values. I’m not saying you’re wrong, but please make sure you’re not.

 

The row is labelled with model numbers: there is a number above each model, which lets us refer to specific models in the text. I hope you consider this reasonable.

In the models with low R^2s (e.g. 0.03) we have only one explanatory variable (k = 1) and sample sizes of around 1,000. Using the formula below, F = (0.03/1)/(0.97/998) ≈ 31. The R^2 is reasonable (sex alone shouldn’t explain much of the variation) and the sample size is right, so an F value of around 31 is what we should expect.

Now take the even higher F values from our models with two variables. R^2 is around 0.5 (sex and years publishing plausibly explain something like 50% of the variation), giving F = (0.5/2)/(0.5/997) ≈ 498.5.

Given that our sample sizes are large and our R^2 values are reasonable, this calibration exercise shows that the high F values are exactly what the formula predicts.

F = (R^2/k) / ((1 - R^2)/(n - k - 1))
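For anyone wishing to check, both worked examples follow directly from this formula, for instance (illustrative only, not our analysis code):

```python
def f_stat(r2, k, n):
    """F statistic for a regression with R^2 = r2, k predictors and n observations."""
    return (r2 / k) / ((1 - r2) / (n - k - 1))

print(round(f_stat(0.03, 1, 1000), 1))  # ~30.9, the one-predictor example
print(round(f_stat(0.50, 2, 1000), 1))  # 498.5, the two-predictor example
```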

 

 

600 One cannot see the x-axis in the PDF – obscured by the “Note”

Fixed

 

 

600 I assume you mean “For questions regarding age and sex preference, lower scores indicate”

But that is the opposite of “we labelled the right end of responses “They should favor females above their academic accomplishments” and the left the same but for males”, no?

 

Yep, the caption was the wrong way around. This is fixed.

 

735 “uncertain about the reasons for this, but suggest that (1) older scholars have had more time…” Mustn’t this explanation also include sex somehow?

Changed to “In regression results, we found that controlling for years publishing reduces the male advantage in research output, implying men in our sample have been publishing for longer. We are uncertain about the reasons for this, but suggest that (1) older scholars have had more time to publish papers, (2) younger cohorts of scholars are worse than older ones and (3) journals could have a pro-old age bias.”  

 

 

Reviewer
OK, looks great. I think it's ready to publish.

 

Reviewer
Replying to Forum Bot

Authors have updated the submission to version #9

The authors have largely responded to my concerns, especially regarding clarity of exposition and adjusting their manuscript to account for obviously erroneous data collection.

The only thing unaddressed is that I still don't quite understand how the disciplines have significant main effects. As I noted in my initial review: "I may have misunderstood how the authors normalize their dependent variables, but if they normalize at the journal-level (and journals are nested within fields), why are there main effects for different fields in columns (9–12) in Tables 6 and 7? (This might be a misunderstanding on my part of what exactly the authors are doing.)"

If this is clarified/explained to me, then I would be happy to accept the paper.

Author | Admin
Replying to Reviewer 2
Replying to Forum Bot


Apologies for missing an explanation of this. In the original manuscript we standardised by journal, but because some journal editorial boards are small we changed to standardising by academic discipline. This didn't appear to affect our results, so we kept the new method. To account for possible journal effects, we ran another set of models in Table 10 of the appendix using dummy variables to control for the effect of each journal.

When we run a regression with the discipline dummies as our only explanatory variables, the coefficients are trivial (e.g. 1.12e-15) and the p-values are equal to 1. This implies our standardisation was done properly. An image of this regression result has been uploaded to our supplementary files as proof; it's called "Only Dummy Regression.png".

The discipline dummies sometimes have significant effects once we control for sex and years publishing. In these models, the discipline dummies (plus the constant) can be interpreted as the standardised h-index of a male author with 0 years of publishing within discipline x. So although we standardised the average h-index of each discipline to mean 0, that does not mean the expected standardised h-index of such a hypothetical individual would also be 0. In our main results, the dummies are only ever significant when years publishing is controlled for, which implies that the effect of experience, or the lack of it, differs between disciplines.

The different sex ratios of the disciplines also cause the dummies to have non-zero coefficients. Imagine that men perform better than women, that this difference is the same across disciplines, and that discipline 1 has mostly men while discipline 2 has mostly women. The mean h-index of each discipline, which we set to zero, will then be determined partly by discipline effects and partly by the sex ratio. In this hypothetical, an average man in discipline 1 has the same raw h-index as an average man in discipline 2, but because the standardisation has been affected by the sex ratios, the man in discipline 2 ends up with a higher standardised h-index owing to the many women in his discipline. This means the discipline dummies should have non-zero coefficients when sex is controlled for.
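To make this concrete, here is a small illustrative simulation (made-up numbers, not our data or analysis code): h-indices are standardised within each discipline, a dummies-only regression then gives coefficients of essentially zero, but once sex is controlled for the discipline dummy becomes non-zero because the two disciplines have different sex ratios.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# Hypothetical setup: discipline d1 is mostly men, d2 mostly women,
# and men score 0.5 SD higher on average in both disciplines.
discipline = rng.choice(["d1", "d2"], size=n)
male = (rng.random(n) < np.where(discipline == "d1", 0.8, 0.2)).astype(int)
h = 0.5 * male + rng.normal(size=n)

df = pd.DataFrame({"male": male, "discipline": discipline, "h": h})

# Standardise within each discipline, as described above.
df["h_z"] = df.groupby("discipline")["h"].transform(lambda x: (x - x.mean()) / x.std())

# Dummies-only regression: coefficient ~0 (the standardisation worked).
print(smf.ols("h_z ~ C(discipline)", data=df).fit().params)

# Controlling for sex: the discipline dummy is now non-zero, because the
# within-discipline means that were set to zero were partly driven by sex ratios.
print(smf.ols("h_z ~ male + C(discipline)", data=df).fit().params)
```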

 

 

Author | Admin
Replying to George Francis
Replying to Reviewer 2
Replying to Forum Bot


I replied within an hour of Reviewer 2's last comment. I am concerned that Reviewer 2 may not have noticed the email update because I replied so quickly, so I'm replying again to notify them, just in case. I hope that is OK.

Reviewer

Thanks for the reply.

I also think the paper is ready to publish now. 

Bot

The submission was accepted for publication.

Bot

Authors have updated the submission to version #11

Bot

Authors have updated the submission to version #12
