Monday, March 2, 2020

Advice on reviewing

(Several people I work with have recently asked me for reviewing advice, so I thought I'd share my thoughts more broadly.)

Peer review – organized scrutiny of scientific work prior to publication – is a critical part of our current scientific ecosystem. But I have heard many of the peer review horror stories out there and experienced some myself. The peer review ecosystem could be improved – better tracking and sharing of peer review, better credit assignment, fairer allocation of review requests, better online systems for editors and reviewers, to name a few.*

Should we have peer review at all? In my view, peer review is primarily a filter that limits the amount of truly terrible work that appears in reputable journals (e.g., society publications, high-ranked international outlets). Don't get me wrong: plenty of incorrect, irreproducible, and unreplicable science still appears in print! But there are certain minimal standards that peer review enforces – published work typically ends up conforming to the standards of its field, even if those standards themselves could be improved. Without peer review, more of this terrible work would appear and we would have even fewer cues for distinguishing the good from the bad.** To paraphrase Churchill's quip about democracy: it's the worst solution to the problem of quality control in science – except for all the others!

So all in all I'm an advocate for peer review.

But for an early career researcher (especially a grad student or postdoc), getting involved poses some tradeoffs. On the one hand, there are several positives. Being a reviewer helps you:
  • learn about other new work in the field by engaging with it deeply, 
  • calibrate your judgment to that of the editor and other reviewers, and
  • get credit from editors (and occasionally authors and other readers, in the case of open review) for contributing.
But it also can be time-consuming, especially at first. How do you decide when to review and when not to review? Here's my advice. 
On average, try to review about 2.5x as many papers as you submit as first author. Try to do those reviews at the places you publish and want to publish. Be efficient with your reviewing.
I'll explain each part here in a bit more depth.

1. On average, not right now. As my wife is fond of saying, we have seasons of giving. You don't have to do everything at once! This means, first, that you should try not to have more than a few reviews out at a time. Otherwise it gets very overwhelming. So try to space things out: don't feel like you have to review continuously. Take a break from time to time, especially if family or career circumstances mean you have a lot on your plate. I did a ton of one-off reviewing for several years, then did a bunch of editorial service, got burned out – related confessional blogpost here – then took a breather, and now am back doing a mix of editing and reviewing.

2. Review at the population replacement rate. Most papers have 2–3 reviewers. So if everyone reviews 2–3 papers for each first-authored paper they submit, then we should have as many reviews coming into the system as going out. But again, this doesn't have to be all at once! If you haven't submitted anything yet from your PhD, doing a lot of reviewing is not usually a great idea. I tend to suggest focusing on your own work until then. This is also not a hard and fast rule, and it's great to be generous with reviewing if you have the curiosity and capacity. If you're submitting one paper this year, I think it's fine – maybe even good – to review more than two or three papers. But I wouldn't necessarily review ten unless you really want to.

3. Review at places you (want to) publish. Peer review is an important part of socialization into a scientific community. It's one way our communities develop norms as to statistical or methodological standards. A lot has been said about the ways these norms are occasionally negative (e.g., requiring HARKing – "hypothesizing after the results are known"). Plenty of this socialization is good, though. For example, my recent reviewers have required more breadth in the cited literature, required more reproducible code, asked for additional studies, and many other steps that have made my and my collaborators' work better.*** By participating in specific communities' review, you learn what they want from their contributors. You also have a chance to show editors your thoughtfulness and judgment. (This isn't a big motivator but it's not nothing.) So choosing outlets carefully helps you give back to the scholarly community you want to be part of, and it also helps you learn about how that community works.

4. Spend time on reviews, but not too much time. My first review ever (as a grad student) was eight pages long. I included information on every typo in the paper. I'm sure there was useful feedback in there, but as an editor, these kinds of over-the-top reviews don't actually help that much. And as an author, they are a pain – they are either "writing for the author" or nitpicking specific wording decisions. Authors should get some autonomy in what they write, provided the underlying research is sound. The advice I received from my advisor (after he had a nice chuckle about the length of my review) was: summarize the paper in no more than a paragraph, provide a small handful of major points that are critical to your evaluation of the paper, and if you feel it's appropriate, make a recommendation.**** Then you can list a few minor points that are helpful to the authors but don't themselves make or break the paper.

Writing a review like this takes time, but not too much time. I recommend reading the paper through soon after you get it, making a few notes, thinking it over, and then coming back and writing the review as you reread. That way you can form an opinion and then check it. It's hard to say how long this process should take – everyone is different, and the process gets way faster with experience. But if a normal-length paper is taking more than 3–5 hours to review, I think that's probably too much, unless you are really taking time to check a specific calculation or analysis.

Finally, what do you do if a particular reviewing opportunity just isn't right? Don't be afraid to say no. Editors are people too, and they will totally understand if you tell them how many reviews you already have outstanding or share that you are on leave or otherwise occupied (finishing your thesis, for example). Editors generally are totally fine with a quick and helpful decline response, especially when you name other people who you think are qualified.***** You can always say "happy to help next time!"

* I won't talk about blinding vs. not blinding here, though I did share some thoughts elsewhere.
** In some fields, there aren't huge incentives for publishing random nonsense. Theoretical physics comes to mind – you can upload random junk to arXiv but it's not a huge deal, in the sense that it's just more spam that needs to be filtered out. In contrast, in biomedicine or even in psychology, publication in a strong journal can lead to positive commercial consequences. So we need significant filtering to prevent unscrupulous researchers from taking advantage of this route.
*** They also of course misunderstood simple points; got the stats wrong; asked me to cite their own work; and said trenchant stuff about my writing that made me feel bad for days. Criticism is always a mixed bag.
**** Some people say that reviewers should assess but not recommend. But most journals make you choose your recommendation from a dropdown menu, so I don't know what that really means. I think that if you have a clear recommendation, you should state it in the review and argue for it. E.g., "for this paper to be acceptable, the authors would need to do X, Y, and Z."
***** It's especially helpful to decline by suggesting early-career experts, since most editors think of the same prominent researchers for reviews in a particular domain and then have trouble generating a broader reviewer pool for areas they don't know as well.