Posts Tagged ‘reviewing process’

Figures in manuscripts

Tuesday, May 4th, 2010

Drugmonkey:

Yet manuscript review is still stuck in the dark ages. Most journal submission procedures I am familiar with still require the figures to be separate documents from the text. The figures are then appended to the back of the file when the online submission engine creates the final pdf.

Why? Why do we do this? Why not allow the authors to format the manuscript in a pdf with the figures inserted as the authors feel best? If necessary high-resolution figures could be required to be appended and the publisher could even require a parallel figure-free copy of the manuscript text for their own typesetting purposes.

Couldn't agree more!  When reviewing, I would prefer a version of the manuscript as close as possible to the formatting it will finally appear in. I realize that isn't necessarily easy, since authors use different word processing systems and not all of them can produce a template close to the final form. But at the very least we could get the figures placed in the text near where they are referred to, so I don't have to check three different places every time a figure is mentioned (the main text, the sheet with the figure legends, and the actual figure).

In defence of author-pays business models

Saturday, May 1st, 2010

"Science in the open" has a very nice piece on the "author pays" publishing model.

There are several good points in the post, but in particular I want to address this one:

The more insidious claim made is that there is a link between this supposed light touch review and the author pays models; that there is pressure on those who make the publication decision to publish as much as possible. Let me put this as simply as possible. The decision whether to publish is mine as an Academic Editor and mine alone. I have never so much as discussed my decision on a paper with the professional staff at PLoS and I have never received any payment whatsoever from PLoS (with the possible exception of two lunches and one night’s accommodation for a PLoS meeting I attended – and I missed the drinks reception…). If I ever perceived pressure to accept or was offered inducements to accept papers I would resign immediately and publicly as an AE

I am an AE at PLoS ONE myself, and that is work I do for free.  Like any other editorial or reviewing job. It is part of academic life and basically just expected of scientists. I have never been paid to review or to serve as editor, so I really have no interest in making money for any publisher. The financial gain for me is exactly the same if I accept or reject a paper.  I get exactly nothing in either case.

If the author-pays model were a scheme to earn money from papers that cannot be published elsewhere, it seems a bit dumb to leave the decision of whether a paper should be published with people who have nothing to gain from accepting papers over rejecting them.

As a side note, you usually have author fees at most high-tier journals as well.  They will charge you both to publish the papers and then to read them.

Another point that is rarely raised is that the author pays model is much more widely used than people generally admit. Page charges and colour charges for many disciplines are of the same order as Open Access publication charges. The Journal of Biological Chemistry has been charging page rates for years while increasing publication volume. Author fees of one sort or another are very common right across the biological and medical sciences literature. And it is not new. Bill Hooker’s analysis (here and here) of these hidden charges bears reading.

I guess one of the reasons this discussion pops up again and again is PLoS ONE's approach to papers. That is just PLoS ONE; the other PLoS journals are a very different story, by the way.  At ONE the philosophy is to publish anything that is considered sound science. There is a lot of science that is solid enough, but where the results are not that groundbreaking.  Such papers are hard to get published, but that doesn't make them bad science.

A prime example is negative results. The reason we have to worry about publication bias at all is that we are more likely to publish positive results than negative ones.  We need to know about the negative results as well, but they are usually very hard to get published.

At PLoS ONE, and also BMC Research Notes where I'm also an AE, negative results are welcome.

This doesn't mean that we will accept any paper that is submitted.  Not at all. The methods used must be of the same standard as required everywhere else, and the statistics just as solid. The impact of the discoveries is just not a criterion.

If you don't believe me, go to the journal and read some of the papers.  I think you will find that they are usually of the same quality as you would find elsewhere.

This week in the blogs...

Sunday, February 1st, 2009

As I promised last week, I plan to post, at the end of each week, a list of the posts I've enjoyed during that week.

It is, of course, going to be a very subjective selection and is going to reflect what my interests have been the past week as much as what has been going on in the blogs the past week.

This week, that means that it will be a bit programming heavy with no genetics or bioinformatics.  Not that I haven't been thinking about that this week, but it's mainly been my own research and that is more the topic for separate posts (when I get around to it).

I've just recently started reading Michael Nielsen's blog, with much enjoyment, so there are a few favorite "open science" links as well.

Anyway, here goes...

Programming

R graphics

Open science and blogs in science

Statistics

Reviewing


Supplemental material

Thursday, January 8th, 2009

In this post John Hawks complains about the important information that is left out of papers and hidden in supplemental material.

I mention this, because Asger Hobolth and I were just discussing this yesterday.

In the olden days, ten years ago, I would simply put the two papers side by side and find the discrepancies. But nooooo, we can't do that any more. Now, all the relevant parameters from one of the papers (you guessed it, the one published by the Nature Publishing Group) are hidden away in a supplement.

You'd think that might not be so bad, since I have the supplement. But I have to keep tracking the cross references to the paper to find out where the methods apply. It's a pain in the neck. Nobody else ever seems to complain. But that's because they simply don't read the papers! AAARGGGH!

Trust me, we complain!

Asger has just spent the last week trying to reproduce a result from a paper, only to find out that a lot of crucial information was left out of both the paper and the supplemental material (which contained the data, but not the filtering that was done on the data). He had to get that information from one of the authors.

Personally, I spent a couple of weeks in December trying to reconstruct a method hidden deep in the supplemental material of a Nature paper -- on a project closely related to the rest of John's post, by the way -- but never managed to reproduce the results from the paper.  I got close, but never quite there.

It might be taking it a bit too far to say that people don't complain because they don't read the papers, but I think very few people read the supplemental material.  At least not in any kind of detail.

I know that I only read the supplemental material for very few of the papers I read; only those where I want to reconstruct a method, or where I don't really believe the results and want to see how the data actually support them.

Sadly, very often the supplemental information doesn't help much there either.

Is the supplemental material even reviewed?


On review quality...

Monday, June 16th, 2008

Following up on the last post, and an older one, I'm going to rant a little bit about the reviews I've gotten on two papers recently.

I'm not complaining about the reviews I've gotten for the Bioinformatics paper I mentioned in the previous post.  Those are detailed, thoughtful, relevant, and all reasonable.  There my only problem is that I have a page limit that keeps me from addressing all the comments.

What I am a bit miffed about is two papers submitted to BMC Bioinformatics.  Do not take this as a criticism of that journal, though; I have also gotten nice reviews there.  I have another paper submitted there that is getting nice reviews (in the sense that there are lots of suggestions to consider, not that they are just positive). Not so for the last two papers.

First of all, the review reports are very short.  Maybe 15-20 sentences.  Secondly, there isn't really any constructive criticism. Not surprising with fewer than 20 sentences, of course. Thirdly, and this is the most annoying part, they haven't made any decision!

The "positive" reviews are just summaries of the paper (essentially paraphrasing the abstract).  The "negative" reviews are saying things like: "I do not really like this / I do not find it interesting" or "other people are doing something similar".

Of course reviewers are permitted to not like a paper and to not find it interesting.  But they shouldn't base their decision on that; they should base it on whether the results are novel and sound.  If they think that the results are too small an increment on existing work -- and there will always be similar work out there if I submit to BMC; if it were truly novel I would go for higher impact -- then they should say so, justify it, and reject the paper!

Telling me that they do not find the results interesting, and then telling me to resubmit is just crazy! How can I make any improvements if that is all the criticism I get?

If I resubmit, the paper will end up with the same reviewers, and they still won't like it.

The form letter from the editor just asks us to resubmit and include a cover letter "addressing the reviewer concerns".  That is of no help at all!  "To make the paper more interesting, we have included a Dilbert strip and a picture of a clown."  Is that going to work?  I doubt it.

This is really pissing me off.

If, as a reviewer, you do not have any constructive criticism -- good or bad -- just make your decision and let us get on with our lives.  If the paper is rejected, it would probably also be rejected after a resubmission, but at least then I know, and I can decide whether to abandon the paper or try somewhere else.

It is not just the reviewers that are the problem here, though.  In a situation like this, I think the editor bears a lot of the responsibility.  The final decision is his, so he should get involved at some point.  By now, he should have a good idea of whether the papers will be accepted or rejected.  After all, no additional experiments or improvements were suggested, so the content of the papers is not going to change.

As a side note, BMC isn't that bad in this regard.  We once had a paper at European J of Hum Gen in review for more than a year, where each iteration consisted of very minor changes but the form letter kept telling us that no decision had been made yet.

We all want our papers to be as good as we can get them, so if you have made your decision then let us know!  If the paper gets rejected, we won't waste any more time on it, and if it gets accepted we will still address reasonable comments to improve the final version.

You are no more "unable to make a decision" at this point than you will be after a resubmission, if the reviews do not ask for any actual additional work!

Grrr!