Thursday, June 1, 2017

Confessions of an Associate Editor

For the last year and a half I've been an Associate Editor (AE) at the journal Cognition. I joined up because Cognition is the journal closest to my core interests; I've published nine papers there, more than in any other outlet by a long shot. Cognition has been important historically and continues to publish influential papers today. I was also excited about a new initiative by Steve Sloman (the Editor in Chief, or EIC) to require authors to post raw data. Finally, I joined knowing that Cognition is currently an Elsevier journal. I – perhaps naively – hoped that, like Glossa, Cognition could leave Elsevier (which has a very bad reputation, to say the least) and go open access. I'm stepping down as an AE in the fall because of family constraints and other commitments, so I wanted to take the opportunity to reflect on the experience and some lessons I've learned.

Be kind to your local editor. Editing is hard work done by ordinary academics, and it's work they do over and above all the typical commitments of non-editor academics. I was (and am) slow as an editor, and I feel very guilty about it. The guilt over not moving faster has been the hardest aspect of the job; often when I am doing some other work, I will be distracted by my slipping editorial responsibilities.1 Yet if I keep on top of them, I feel that I'm neglecting my lab or my teaching. As a result, I have major empathy now for other editors – and massive respect for the faster ones. Also, whenever someone complains about slow editors on Twitter, my first thought is "cut them some slack!"

Make data open (and share code too, while you're at it)! I was excited by Sloman's initiative for data openness when I first read about it. I'm still excited about it: It's the right thing to do. Data sharing is a prerequisite for ensuring the reproducibility of results in papers, and it enables reuse of data for folks doing meta-analysis, computational modeling, and other forms of synthetic theoretical work. It's also very useful for replication – students in my graduate class do replications of published papers and often learn a tremendous amount about the paradigm and analyses of the original experiment by looking at posted data when they are available. But sharing data is not enough. Tom Hardwicke, a postdoc in my lab and in the METRICS center at Stanford, is currently doing a study of the computational reproducibility of results published in Cognition – data are still coming in, but our first impression is that for a good number of papers it is difficult to reproduce the findings from the raw data and the written description of the analyses. Cognition and other journals can do much more to facilitate the posting of analytic code.
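
To make concrete what facilitating code sharing could look like, here is a minimal sketch in Python of the kind of self-contained analysis script a journal might ask authors to archive alongside raw data. Everything in it (the data, the exclusion rule, the test) is invented for illustration; the point is that the exclusions and the primary analysis live in one runnable file rather than only in prose.

    # Hypothetical reproducibility sketch; all data and criteria are invented.
    import pandas as pd
    from scipy import stats

    # In a real archive this would start from pd.read_csv("raw_data.csv");
    # toy data are inlined here so the sketch runs on its own.
    df = pd.DataFrame({
        "subject":   [1, 1, 2, 2, 3, 3],
        "condition": ["congruent", "incongruent"] * 3,
        "rt":        [412, 488, 395, 521, 2950, 3010],
    })

    # The exclusion rule is stated in code, not just in prose:
    # drop trials slower than 2500 ms (a made-up criterion).
    df = df[df.rt < 2500]

    # Primary comparison, reduced to a simple t-test to keep the sketch short.
    t, p = stats.ttest_ind(df[df.condition == "congruent"].rt,
                           df[df.condition == "incongruent"].rt)
    print(f"t = {t:.2f}, p = {p:.3f}, trials analyzed = {len(df)}")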

Open access is harder than it looks. I care deeply about open access – as both an ethical priority and a personal convenience. And the journal publishing model is broken. At the same time, my experiences have convinced me that it is no small thing to switch a major journal to a truly OA model. I could spend an entire blogpost on this issue alone (and maybe I will later), but the key issue here is money: where it comes from and where it goes. Running Cognition is a costly affair in its current form. There is an EIC, two senior AEs, and nine other AEs. All receive small but not insignificant stipends. There is also a part-time editorial assistant and an editorial software platform. I don't know most of these costs, but my guess is that replicating this system as-is – without any of the legal, marketing, and other infrastructure – would cost a minimum of $150,000 USD/year (and probably closer to $200k or more, depending on software).

On the Glossa "journal flip" model, a transition to open access for Cognition would require either massive increases in revenue through publication fees (minimally a mandatory $1000 fee or more likely a substantially higher cost with a waiver program), big cost reductions (from free software, decreased stipends, no editorial help), or likely both. (Glossa had lower volume and much lower costs to begin with, and so doesn't have to charge fees, thanks to support from the Open Library of the Humanities, which is funded by library membership fees.) A flip for Cognition would also require a new home in perpetuity, something that's difficult for most non-institutional players to guarantee. And that's without resolving the issue of Elsevier's ownership of the Cognition name and archives. None of these problems are insoluble, but all are difficult – and all are accompanied by real risks of fragmentation of the journal community and possibly failure of the enterprise altogether.2
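
To show where a number like that $1000 comes from, here is the back-of-envelope arithmetic as a few lines of Python. The cost figure is my guess from above, and the annual article count is a hypothetical round number chosen for illustration, not an official statistic.

    # Back-of-envelope fee estimate; both inputs are guesses, not official figures.
    editorial_costs = 175_000  # USD/year, midpoint of the $150-200k guess above
    articles_per_year = 175    # hypothetical annual volume for a large journal

    fee = editorial_costs / articles_per_year
    print(f"break-even fee: ${fee:,.0f} per article")
    # -> break-even fee: $1,000 per article. A waiver program for authors
    # without funds would push the mandatory fee for everyone else higher.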

So my view now is that although OA is a priority, the way to achieve it is not through pressure on journals. The journals themselves don't have control over the relevant funds. Instead, progress is likely to come from other directions. First, green OA using preprint archives like PsyArXiv is a reality. If you don't already deposit your preprints there, start! Second, talk to your funders – they can mandate open access much more easily than journals can (as with NIH's highly successful PubMed Central archiving policy), or even take on the job of hosting journals (e.g., Wellcome Open Research). Finally, talk to your institution's librarians – an alternative model for Cognition or other journals would simply be for all articles to become paid, gold OA publications, with libraries paying publication fees rather than access fees. What unites all of these proposals, though, is the focus on actors other than the journals themselves.

Triple-blind the review process! Cognition is a single-blind review journal, like most in psychology.3 I have talked to many advocates for open review, and I understand the arguments for publicly associating reviewers and editors with a paper post-publication; I'm agnostic on this issue. But I don't buy the arguments for unblinding during the review process. Here's the fundamental question: Does the pressure toward civility in unblinded review outweigh the bias introduced by knowing author and reviewer identities? My answer is no.

Bias is pervasive – on the basis of gender, career status, and general reputation, among other dimensions. Double-blinding in review reduces bias (e.g., one recent study on gender bias, others cited here). It's not that most reviewers or editors are actively asking for men to be added as authors. Instead, the cases I worry about are the subtle differences between "accept with revisions" and "revise and resubmit" – small differences in perceived competence can have a big effect on these kinds of threshold-based decisions (the simulation below makes this concrete). I try very hard to consider these biases in my own reviewing and editing (and have also declined to edit many manuscripts due to direct and indirect conflicts of interest). But I would much prefer to have my hands tied as a reviewer and an editor so that I don't know for certain whose work I'm reading!4 Triple-blinding would be very simple from a software perspective – I don't see why more journals don't do it (see Simine Vazire's arguments as well).5
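
As an illustration of the threshold point, here is a toy simulation with invented parameters (it is not data from any journal): when evaluations are noisy and normally distributed, even a small halo in perceived competence meaningfully changes how many papers clear a fixed cutoff.

    # Toy model of a threshold-based editorial decision; all numbers invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    cutoff = 1.0  # perceived quality needed for the more favorable decision
    halo = 0.2    # small bump in perceived competence, in SD units

    perceived = rng.normal(0.0, 1.0, n)  # perceived quality without bias
    favored = perceived + halo           # the same papers, with a small halo

    print(f"clear the cutoff, unbiased: {np.mean(perceived > cutoff):.1%}")
    print(f"clear the cutoff, favored:  {np.mean(favored > cutoff):.1%}")
    # A 0.2 SD halo moves the pass rate from about 16% to about 21%,
    # a roughly one-third relative increase from a "subtle" difference.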

Learn to read like an editor. One major effect of the AE position has been that I've read many more papers in the last 18 months than I had previously. The question I ask about each paper is whether the work it reports represents a substantial empirical and/or theoretical contribution to a question of interest. Early on, I mentioned to Steve how slowly I was reading, and he gave me a great piece of advice that I still think about all the time: as an editor, you read for the critical flaw in a paper. For me, reading for the flaw means determining early on whether there are issues of design, analysis, implementation, or theoretical interpretation that would keep a paper from meeting the journal's publication bar. It's an ungenerous way of reading, but it's also been very helpful for honing my skill at quickly seeing the primary issue with a manuscript. Which leads me to my view on peer review.

Peer review is a probabilistic filter. There are a number of studies, older and more recent, suggesting that reviewers tend not to agree in their judgments about papers. Some people find these results truly damning and suggest that we should abolish the system entirely. I am less negative about the process, especially when it comes to judging the correctness/appropriateness of manuscripts.

First, most studies of peer review deal with relatively high-quality manuscripts in which good researchers are honestly trying to make progress. These are hard to judge. My experience is that there are also a sizable number of manuscripts that are just inappropriate – under-reported, unconnected with previous literature, using wacky methods – and these would be rejected by almost any set of reviewers, with high agreement. So (I would guess) agreement numbers are a function of the sample; the toy simulation below makes this concrete. Second, my job as an editor is not to count votes but to see whether any reviewer spotted a critical flaw of the type described above. If the conclusions are valid, then it's a matter of taste and the journal's editorial mission whether the manuscript should be published in that journal. Quite often my conclusion is that a manuscript is interesting and useful but should be published somewhere other than Cognition.6 Synthesizing these points, my view is that peer review acts as a probabilistic filter on the literature. It filters out truly inappropriate work with relatively high accuracy, but as work gets better, the level of noise gets higher. It doesn't do well with technical problems in statistics, but in my view these are best dealt with at the level of code/data archiving (see above).
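
Here is a toy simulation of that sampling point, again with invented parameters: two reviewers rate each manuscript with independent noise, and merely adding a block of clearly inappropriate submissions to the pool raises their raw agreement, even though the reviewers themselves haven't changed.

    # Toy model: reviewer agreement depends on the manuscript pool.
    import numpy as np

    rng = np.random.default_rng(1)

    def agreement(quality, noise_sd=1.0, threshold=0.0):
        # Fraction of manuscripts where two noisy reviewers make the same call.
        r1 = quality + rng.normal(0.0, noise_sd, quality.size)
        r2 = quality + rng.normal(0.0, noise_sd, quality.size)
        return np.mean((r1 > threshold) == (r2 > threshold))

    n = 50_000
    borderline = rng.normal(0.0, 0.5, n)         # hard, near-threshold papers
    clearly_bad = rng.normal(-3.0, 0.5, n // 4)  # obviously inappropriate ones
    mixed = np.concatenate([borderline, clearly_bad])

    print(f"agreement, borderline papers only: {agreement(borderline):.1%}")
    print(f"agreement, after adding bad papers: {agreement(mixed):.1%}")
    # Agreement rises in the mixed pool: the easy rejections are the
    # ones nearly any pair of reviewers will agree on.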

In contrast, I see evaluation for "impact" as much more subjective and less likely to be a real contribution. To the extent that we should be assessing papers for impact, I would assign that job to the editor, with the understanding that it is a subjective judgment of editorial "taste" – what the editor wants the journal to publish or represent – rather than some kind of semi-objective scientific assessment. One corollary is that this view is most consistent with some version of the overlay journal model, where reviews are for accuracy and editorial taste is imposed after the fact as editors collect interesting or high-quality papers for dissemination. But until we establish a viable system of this type with widespread buy-in, we are stuck with what we've got.

Conclusions. I've learned an incredible amount working as an AE, and I'm very thankful for the opportunity to be part of the journal. It's given me much greater insight into the journal publication system – its flaws as well as some of the reasons for them. Perhaps this experience has been part of my transformation into a crotchety old conservative, but I hope not. I still deeply believe in open access and reform of the publication process; I just feel more aware of some of the issues that have prevented these changes up until now.

Thanks to Dave Barner, Tom Hardwicke, and Steve Sloman for helpful comments. Made some edits on 6/2 to clarify that Glossa has no fees.

1 Cognition uses EES for managing manuscripts (Elsevier is moving to Evise, but we haven't gotten there yet!). EES is really terrible. It's bloated, complicated, and slow. Every time I want to email someone, I have to click six or seven times, with a substantial click-to-click lag. I would guess that being able to handle correspondence for manuscripts using lighter-weight, standard tools (e.g., Gmail plus some email templates) would speed up my response times substantially. But you need to have some software system for tracking and production of manuscripts, so figuring out how to make this more of a wrapper around standard tools would be a huge win (e.g., Freshdesk for editing). I suspect that one of the big cost challenges for OA journals is finding a good, cheap, lightweight platform.
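
As a sketch of what "lighter-weight, standard tools" might mean in practice (the template text and field names are hypothetical, not anything Cognition actually uses):

    # Hypothetical email template for editor correspondence; all fields invented.
    from string import Template

    REVIEW_REQUEST = Template(
        "Dear $reviewer,\n\n"
        'I am handling the manuscript "$title" (ID $ms_id) and would be\n'
        "grateful if you could review it within $weeks weeks. Please let\n"
        "me know either way.\n\nBest,\n$editor"
    )

    print(REVIEW_REQUEST.substitute(
        reviewer="Dr. Smith", title="A Hypothetical Study",
        ms_id="COG-0000", weeks=4, editor="The Handling Editor",
    ))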

2 For this reason, I encourage people not to boycott reviewing for journals from publishers they don't like (though it's fine with me if they don't submit). If you won't review for Elsevier but an author submits work for which you're the best reviewer, the cost is borne by the community. Your review will improve the literature – either by keeping a flawed paper from being published or, ideally, by providing important feedback that improves it. Boycotts of submissions can be very useful tools for changing publishing practices, but reviewing boycotts seem to me mostly to hurt editors, authors, and consumers of scholarship. (You could argue that if enough people boycott reviewing, then editors will quit – that's maybe true, but it's a pretty slow and indirect mechanism for change. I don't think anyone tracks review declinations in the way that all journals live and die by their submission/rejection rates.)

3 With the exception of many developmental journals, which are often double-blind, a feature that I like.

4 Many people say that blinding doesn't work because they can guess authors accurately. Sure, you can sometimes guess the group a paper comes from. But A) this isn't always true, so you still debias statistically for the papers you can't guess (which in my experience are many). And B) how differently will you treat a paper if it's first-authored by the senior, prize-winning advisor vs. by their junior PhD student? I've caught myself forgiving sloppiness on the part of a well-known, more senior first author when I would have attributed it to a lack of knowledge in an unknown or junior first author. Why not take away that possibility?

5 I recognize that there's a prima facie conflict between green OA preprint archiving and double/triple-blind review. There are a number of possible solutions to this issue, including 1) simply asking reviewers not to google the manuscript (possibly dubious, though maybe better than nothing), 2) posting preprints after acceptance but before publication (though this reduces the utility of preprint posting), or 3) blind preprint escrow, where preprints are posted anonymously until they are published (though this is not ideal for preprint citation). Honestly, I'm not sure how to reconcile this conflict between openness and bias reduction.

6 I recognize that this part of the process leads to much duplication of effort (not to mention that multiple evaluations lead to a higher error rate in filtering). I would be in favor of having reviews "follow" papers from journal to journal, although there are clearly some issues there with malicious reviewers who try to "block" papers. (I've seen *very* few of these in my experience at Cognition.)

3 comments:

  1. You describe the move to a Glossa-type open access model as involving "either massive increases in revenue through publication fees (minimally a mandatory $1000 fee or more likely a substantially higher cost with a waiver program), big cost reductions (from free software, decreased stipends, no editorial help), or likely both." But it should be pointed out that Glossa actually has NO publication fee.

    Whether the needs of Cognition are different enough that the Glossa no-fee model is not viable is an important question, but please don't gloss over (sorry) that the Glossa model has no publication fee. Glossa is funded by the Open Library of the Humanities, to which over 200 university libraries contribute a membership fee.

    More broadly, you have a lot of good and wide-ranging points in this post!

    Replies
    1. We have started an information resource, psyOA.org, to assist those interested in flipping their journals with information about publishing costs and legal issues.

    2. Hi Alex, thanks for reading and commenting! Sorry if I was unclear that Glossa charges no fees. I made a quick edit above that hopefully resolves the issue.

      Let me try to clarify my argument - it's not that smaller/lower-cost journals should not flip. They should! But for larger journals like Cognition, flipping would require substantial changes to how things are done (giving up all stipends, editorial assistant, etc.) OR would require publication fees. These are hard choices to make.
