

Rejected Papers Get More Citations When Eventually Published

timothy posted more than 2 years ago | from the sir-do-you-know-how-fast-your-neutrino-was-going? dept.

The Media 73

scibri writes "In a study of more than 80,000 bioscience papers, researchers have illuminated the usually hidden flows of papers from journal to journal before publication. Surprisingly, they found that papers published after having first been rejected elsewhere receive significantly more citations on average than ones accepted on first submission. There were a few other surprises as well...Nature and Science publish more papers that were initially rejected elsewhere than lower-impact journals do. So there is apparently some reason to be patient with your paper's critics — they will do you good in the end."


Surprisingly? (4, Interesting)

Anonymous Coward | more than 2 years ago | (#41637179)

Not at all. Papers that were previously rejected benefit from additional, careful revisions by their authors, so they end up being of higher quality than they would have been.

Re:Surprisingly? (1)

reve_etrange (2377702) | more than 2 years ago | (#41638595)

There are several levels of "rejection" and "acceptance."

Typically, a paper may receive "reject", "reject and resubmit," "accept with revisions" or "accept." All except the last would require major revisions, and the last would still receive minor changes and proofreading. TFA refers to the first category: papers which cannot be resubmitted to the same journal.

I don't have access to the full paper so I can't say if these conclusions apply all or in part to papers resubmitted to the same journal ("reject and resubmit").

Re:Surprisingly? (5, Insightful)

Arker (91948) | more than 2 years ago | (#41638687)

Not at all. Papers that were previously rejected benefit from additional, careful revisions by their authors, so they end up being of higher quality than they would have been.

That's the conclusion the journals would like you to reach, and it may explain some of the effect. It seems to me the larger effect is simply that papers which break new ground tend to be controversial to the old guard, and thus accumulate rejections; but if and when they are finally published, they also tend to accumulate more citations, as even the old guard then has to cite them in its rebuttals.

Re:Surprisingly? (3, Insightful)

Sir_Sri (199544) | more than 2 years ago | (#41639841)

Reviews, in my experience in physics and computer science anyway, tend to be more about process than results. Did your work cite whomever the reviewer knows about in your field (usually not)? Is the process you used for your work valid, or, more importantly, is it clear that it is valid? Are your results understandable to someone who didn't do your particular experiment? Is your paper clear enough that an expert in the field, though not in your particular research, can build on it? Admittedly I might be biased because I'm bad at explaining myself.

Reviewers aren't out to screw submitters for the fun of it, because they don't want to be screwed themselves, and the whole point of research is to find new stuff. But internally, when you're preparing a paper, your supervisor or your grad students know what the fuck you're talking about, while someone on the outside can say something to the effect of 'this makes no sense'. 'This makes no sense', by the way, doesn't mean the work isn't valid, just that you might have done a terrible job writing about it within whatever constraints your target journal has.

Journals are also becoming a bit overspecialized, and when you do something really high impact that isn't very narrow, it's hard to know where to put it. My particular corner of the academic universe is broadly under the field of game development, but we really combine work in AI, computational social science, strategic studies and economics, and an AI journal may look at our work and feel (correctly) that it's not enough of an AI problem for them, the computational social science people will say the same thing, and so on. The big dogs of Nature and Science publish things that can be cross-disciplinary and that aren't (and shouldn't be) pigeonholed into a particular basket. The place I'm at has, or had at least, some really stellar computer vision and computer algebra researchers, but odds are, if you aren't specifically in those fields, you couldn't care less what they do. That can be good work with few citations just because it's really, really important to one very tiny problem. A journal rejecting you because you aren't doing enough pathfinding for their pathfinding edition doesn't mean you didn't do good work, it just means the work you did isn't as applicable to what they're doing.

I expect as time goes on we'll see more of this. New researchers aren't as biased by picking up a physical journal and reading it; they do internet and database searches (which are becoming much easier and much higher quality) and glue together concepts from multiple journals, but then their work may not really fit with what the previous ones did. It can be broadly interesting and broadly useful, but it doesn't belong to any of the feeder disciplines.

If you want a good example, take quantum computing. The quantum mechanics side of the research fits in easily a dozen different journals (e.g. MRI, laser spectroscopy, semiconductors, etc.), but very little of the work is meaningfully original; it's a new application of an old problem. The theory-of-computing side probably only belongs in one or two theory of computation journals, but again, it's re-examining some of the fundamentals of computing theory in a different way, and there's not much to do after the first guy does it. Together, though, they are very interesting.

Re:Surprisingly? (0)

Anonymous Coward | more than 2 years ago | (#41640037)

My experience is mainly with physics journals. While what you say about the reviews being more about the process than the results tends to be true for a lot of journals, some journals have requirements about the results being important to a general physics audience (e.g. PRL) or require some notability to the results. In those cases, I've had papers rejected with reviews along the lines of, "Great paper, but not relevant enough to the general reader," or "If you answer these couple of questions it would be fine from a research standpoint, but the results are too incremental for this journal; try sending it to this other journal."

Re:Surprisingly? (1)

Xylantiel (177496) | more than 2 years ago | (#41641331)

My question is: why is this not just a demonstration that citation optimization works as expected? They say that resubmission is dominated by a flow from high-impact to low-impact journals, i.e. people submit their paper to the highest-impact journal they think they might be able to get it into, and then resubmit to lower ones until it gets accepted. This means the scientists are using resubmissions to actively attempt to increase their citation count by getting their paper into the highest-impact journal possible. So of course citation rates are higher for resubmitted papers, because resubmitted papers are the ones being subjected to this optimization process.
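This selection effect is easy to see in a toy simulation (everything below — the thresholds, the noise model, the "optimism" offset — is a made-up illustration, not the study's actual model): if authors start at the highest rung they think they might clear, then the papers that arrive at a mid-tier journal after a rejection above are, on average, better than the ones submitted to it directly.

```python
import random
from statistics import mean

random.seed(42)
THRESHOLDS = [3.0, 2.0, 1.0]        # referee bar per rung, top journal first

def run_gauntlet(quality, start):
    """Submit down the ladder from rung `start`; return (rung accepted at, rejections)."""
    for rung in range(start, len(THRESHOLDS)):
        referee_score = quality + random.gauss(0, 0.6)   # noisy review
        if referee_score >= THRESHOLDS[rung]:
            return rung, rung - start
    return None, 0                   # never accepted anywhere

direct, resubmitted = [], []         # paper qualities landing at the middle journal
for _ in range(100_000):
    q = random.uniform(0.5, 3.5)
    # authors start at the highest rung they think they might clear (optimism +0.5)
    start = next(r for r, t in enumerate(THRESHOLDS) if q + 0.5 >= t)
    rung, rejections = run_gauntlet(q, start)
    if rung == 1:                    # accepted at the middle journal
        (resubmitted if rejections else direct).append(q)

print(f"submitted directly:    mean quality {mean(direct):.2f}")
print(f"rejected from the top: mean quality {mean(resubmitted):.2f}")
```

With these invented numbers, the previously-rejected group ends up with a clearly higher mean quality at the same journal, which is the citation pattern the study reports — no revision-improves-the-paper mechanism needed.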

While this is majorly labor-intensive on the reviewer side, it is a decently useful practice in a large field like biology, since it will help ensure that "important" results receive a lot of visibility. High-impact journals are nominally high-impact because the community treats them that way. More people read & cite them, so people try to get their stuff in there first and then go to other journals.

I would also note that this study itself is an example of the distorting side-effects this process can have. These guys don't give the above very reasonable explanation for their results, and precisely because of their poor analysis, it is an "interesting puzzle" and gets into a high-impact journal (Science in this case). Of course, if they'd just done their analysis properly, it would be a completely uninteresting result and not make it into the high-impact journal. Insane, isn't it? That's scientific publication today. Science and Nature tend to contain things that are speculative, inconclusive, or plain wrong, because knowing the answer is boring.

By the way, Science and Nature are run more like magazines than journals. Most journals don't have an "interesting" cut, just "useful to others, not done before, and done properly". Both Science and Nature will actually relax the "done properly" part if it is "interesting" enough. This isn't necessarily bad science; it just means that the proposal of a hypothesis and its falsification are published separately. Nominally this is good for high-profile questions. The problem is that many people don't realize this, and often the proposal of the hypothesis is much "higher-impact" than its falsification!

Re:Surprisingly? (0)

Anonymous Coward | more than 2 years ago | (#41642303)

The thing with the resubmission process and trying for a higher-impact journal is that it isn't done for every paper. When you have results that are publishable but you know are not something major, you don't waste a lot of effort going for the higher-level journals. Either way you might push a little, knowing there is a chance of rejection, but as you get more experience, you get an idea where each paper should go. So in some sense there is going to be "bias" just from the authors: they will push harder on papers they think are more important, whereas with simpler ones they will "play it safe" because they want to get them out the door and move on to other things.

Missed steps... (1, Interesting)

raydobbs (99133) | more than 2 years ago | (#41637195)

People often miss steps when they work on academic papers - such as proofreading and copy-editing. Remembering to cite sources can be a good reason in and of itself to have the paper evaluated by a copy editor or proofreader. Just like a novel released for commercial gain, you need to put your best effort forward to get accepted the FIRST time. Failing that, it looks like you can get the establishment to do that for you... assuming you don't mind being the laughing stock the first time around.

Re:Missed steps... (1, Interesting)

WaywardGeek (1480513) | more than 2 years ago | (#41637907)

I'm bitter about having ALL of my submitted papers (about 9) rejected, other than those where I was invited to present. You forgot to list the MOST important factor: which professor you list as an author, regardless of whether he contributed or not.

Now, my writing does comparatively suck, and I've never had the patience to do all the leg work as you're suggesting. I don't get paid to write papers after all. Instead I just find out where algorithms can be improved and work on that. In a sane world, publishing algorithmic breakthroughs wouldn't require sucking up to a famous prof. So, my companies patent the stuff, and it's valuable to them, but sharing ideas is what conferences and journals should be all about. They suck.

Re:Missed steps... (2)

blueg3 (192743) | more than 2 years ago | (#41639329)

Now, my writing does comparatively suck, and I've never had the patience to do all the leg work as you're suggesting.

So, your writing is bad and you don't have the patience for proofreading or copyediting, but you're surprised -- or rather, have come up with a near-conspiratorial excuse for the fact -- that your submissions to journals whose purpose is, ostensibly, to communicate the results of your work to others so that they may learn from it have been rejected? ...

Perhaps there's a simpler explanation here.

Re:Missed steps... (1)

TheRaven64 (641858) | more than 2 years ago | (#41640025)

I was about to post the same thing. Add to that, not all journals are at the same level and so something rejected from one might have been accepted at another. Picking the correct venue is important. Adding professors' names? A lot of journals require anonymous submissions, so pick one of these if you think you're being victimised. Or it could be that the work just isn't novel enough: I've not found a vaguely respectable journal that has as low standards as the US patent office...

Re:Missed steps... (0)

Anonymous Coward | more than 2 years ago | (#41640051)

The worst battles I've seen over author lists have been internal group politics that wouldn't affect whether the paper gets rejected or not (and more often than not it's someone wanting their name off the paper, because they didn't have enough time to think about the conclusions to be sure they haven't missed a contradiction). I am not sure how the publisher or reviewer would even know such a person was involved, unless that particular person complained to the publisher, which usually happens after publication and has to be a serious enough stink that most won't bother unless they really care. Citations, on the other hand: there almost always seems to be a comment about missing citations, but 90+% of the time that is trivial to fix.

Don't publish... Just, blog... (1)

jopsen (885607) | more than 2 years ago | (#41640281)

Seriously, I think conferences and journals are overrated unless you're really into theoretical computer science. These places are old-fashioned; proofreading and formal correctness are important there. They also often require copyright assignment to greedy publishers, who will put your work behind a paywall.
To be fair, though, I doubt it's a matter of whether or not there's a professor's name next to yours. There are a lot of countermeasures against discrimination, so maybe it's just that your writing does "comparatively suck".

Anyway, if you want to communicate interesting things, blog about them. Most real-world developers don't have access to the journals anyway. Very few people will pay $50 for a paper based on a poorly written abstract. So if you do want to publish, make sure your publication is worth the $50 it'll be sold for.
- Don't worry, you won't get the money though :)

Re:Don't publish... Just, blog... (1)

WaywardGeek (1480513) | more than 2 years ago | (#41641065)

That's the thing... I wouldn't get any money. I'd have to do all the effort on my own time, as there's not any time allocated for writing papers at work. On the other hand, I used to get $2K for each patent, and a couple of weeks to write them. Papers in my field, EDA, are almost exclusively published by universities, while the vast majority of advances are made in industry, and generally kept trade secret or patented.

I'm simply suggesting that the reason the best papers are first rejected is that most of the best ideas don't come from universities.

Re:Don't publish... Just, blog... (0)

Anonymous Coward | more than 2 years ago | (#41642421)

I'm simply suggesting that the reason the best papers are first rejected is that most of the best ideas don't come from universities.

That seems to be a really big extrapolation. And at the least it could be highly field-specific. In some areas related to plasma physics, industrial papers are revered simply because companies don't typically publish their stuff. Quite a few times I've had someone come to my office excited because "Company Foo published some results", while it is rarer to see that excitement over university publications. I've seen this excitement from journals and conferences too, who are constantly trying to get some of those companies to publish.

Re:Missed steps... (1)

interkin3tic (1469267) | more than 2 years ago | (#41638379)

Also more substantive stuff, like the reviewers suggesting an experiment you didn't think of. Or you realize the paper isn't going to get anywhere without some more work that you were loath to do, and maybe that pays off more than you expect.

Peer review doing its job (5, Insightful)

Anonymous Coward | more than 2 years ago | (#41637237)

Peer review *does* work. Yes, part of its job is to filter out the poor papers that don't deserve publication. That's the obvious part. But I've gotten plenty of papers back with comments like "deserves publication, but X, Y, and Z need to be fixed". Or even "rejected, but if X were addressed, should be reconsidered", and so on. So, you go off and do X, Y, and Z, resubmit, and you've got a better paper because you've addressed the critical comments. Good papers are ones that incorporate constructive criticism, so it makes sense those might eventually get cited more. Also, if it's a paper that was rejected somewhere, then it might be something controversial that people want to argue about. So, publish a paper that makes a claim some people don't agree with (hence the rejection), and those critics will publish their own paper slagging the original one. Putting it another way, in order to say someone else's paper is full of crap, you have to cite it, and if a lot of people are saying it's crap, then you'll get a lot of citations :-)

Peer review isn't perfect, but the described pattern makes sense. What I'm surprised at is their ability to statistically detect these patterns given all the other variables involved, but I guess a sample size of 80000 helps.

Re:Peer review doing its job (2)

WaywardGeek (1480513) | more than 2 years ago | (#41637967)

My problem with this system is that only people who get paid to publish papers wind up getting heard. For example, the entire EDA industry continues to use interconnect delay estimation algorithms that have been obsolete for 15 years, because a paper I tried to publish on how to do it right was rejected. Sure, it would be a better paper if I put in the work you talk about, but I don't get paid for that. I just get paid to deliver better implemented solutions. You could read patents I have in the area, owned by QuickLogic, but good luck finding a sensible paper in the field.

Re:Peer review doing its job (0)

Anonymous Coward | more than 2 years ago | (#41640069)

Of course, if you don't want to or are unable to put the effort into making a paper in such a style and format, the journal is not going to accept it. However, there are a crap ton of options for releasing work in addition to that. I see enough commercial white papers get cited that were not peer reviewed, especially if self-contained enough to be evaluated by the reader. Additionally, there are usually journals with lower barriers to entry, and there are always industry conferences, many of which have almost no barrier to entry for presentations and high visibility.

Re:Peer review doing its job (4, Informative)

ThreeKelvin (2024342) | more than 2 years ago | (#41640171)

Communication skills matter in science!

It doesn't matter that you have invented the greatest algorithm since quicksort if you can't or won't tell other people about it. If you can't convince other people how great your work is, they won't use it, and therefore you won't have contributed to the field. When you die the knowledge disappears, and you might as well never have invented the algorithm in the first place.

Therefore, it is important to convince your audience that:
- Your algorithm gets the job done. (Proofs)
- Your algorithm is better than and/or different from existing algorithms. (An extensive literature search, so that you can compare your algorithm to existing ones)

Just reporting your algorithm together with a "this is how I do it" doesn't cut it. We researchers don't have the time to examine every claim somebody makes about something in our field.
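As a concrete illustration of those two bullet points, here is the minimal shape of such a comparison — prove correctness against a trusted reference, then time the contender against a baseline on identical inputs. The algorithms below are generic stand-ins, not anyone's actual contribution:

```python
import random
import timeit

def insertion_sort(a):
    """Baseline algorithm: O(n^2) comparison sort."""
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    """Contender: O(n log n) sort standing in for the 'new' algorithm."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

random.seed(0)
data = [random.random() for _ in range(1500)]

# Step 1: correctness against a trusted reference implementation ("proofs").
assert insertion_sort(data) == sorted(data)
assert merge_sort(data) == sorted(data)

# Step 2: head-to-head timing on identical inputs (the comparison reviewers want).
for fn in (insertion_sort, merge_sort):
    print(f"{fn.__name__}: {timeit.timeit(lambda: fn(data), number=3):.3f}s")
```

A reviewer who sees both the correctness check and the like-for-like benchmark can evaluate the claim without re-deriving anything, which is exactly the point.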

Re:Peer review doing its job (2)

WaywardGeek (1480513) | more than 2 years ago | (#41641157)

You are quite right. One thing I've learned over the years is that I should have learned to write well in college, rather than trying to pick it up while being paid to program. I've written SBIR grant proposals (all were funded), various statement-of-work proposals, and a number of patents. It was damned hard for me, but it had to be done. I'm steering my 10-year-old son towards taking writing more seriously, and hopefully he won't have the same writing handicap I had when entering the workforce.

I will dispute one point. In EDA at least, results are what count. Once algorithms are written and benchmarks performed showing yours beats other well-known algorithms in some area, IMO, you have data worth sharing. This is not how it works today. I put my own hobby research on the web, but as I've said, I gave up on dealing with the PITA journals ages ago. Here's a great algorithm for better speech frequency analysis. [vinux-project.org] Here's a better speech speedup algorithm for > 2X. [vinux-project.org] Promoting algorithms I develop for free is also painful. Getting sonic into Debian was not fun at all, though it seems to have been adopted in Android and several TTS engines almost magically. I believe it's now even in the Android Audible client, which is now far superior to the iOS client for high speed. I can't get either algorithm linked to on Wikipedia because my web pages don't pass their test as a credible source.

How are you supposed to share great ideas?

Re:Peer review doing its job (0)

Anonymous Coward | more than 2 years ago | (#41641997)

the entire EDA industry continues to use interconnect delay estimation algorithms that have been obsolete for 15 years, because a paper I tried to publish on how to do it right was rejected.

The entire industry refuses to use an apparently better algorithm because your paper was rejected for publication?

Is this really what you mean?

Re:Peer review doing its job (1)

WaywardGeek (1480513) | more than 2 years ago | (#41646647)

Yes. Industry still uses mathematical estimation techniques similar to AWE (asymptotic waveform evaluation), but they are considerably inferior. We used a far better algorithm in this patent [google.com] . Since then, we've advanced well beyond this algorithm, but the fact is, backwards trapezoid can be computed exactly for the future point in time very fast in a near RCL tree. A 100-point actual simulation, taking into account nonlinear effects at the driver as well as nonlinear capacitance, is simultaneously the most accurate (better than SPICE because of its target error) and the fastest approach. Universities studying this topic are in the dark.
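For readers unfamiliar with the term, "backwards trapezoid" refers to the implicit trapezoidal integration rule. The patented flow isn't public in this thread, so the sketch below only shows the basic idea for a single linear RC stage, checked against the exact exponential step response; all component values are arbitrary:

```python
import math

def rc_step_response(R, C, vin, h, steps):
    """Implicit trapezoidal integration of C*dv/dt = (vin - v)/R, with v(0) = 0."""
    a = h / (2 * R * C)
    v, out = 0.0, [0.0]
    for _ in range(steps):
        # The trapezoid rule is implicit, but the linear ODE lets us solve
        # exactly for the future point v_next in closed form:
        v = ((1 - a) * v + 2 * a * vin) / (1 + a)
        out.append(v)
    return out

R, C, vin, h = 1e3, 1e-12, 1.0, 1e-11   # 1 kOhm, 1 pF, 10 ps timestep (arbitrary)
sim = rc_step_response(R, C, vin, h, 500)
exact = [vin * (1 - math.exp(-n * h / (R * C))) for n in range(len(sim))]
err = max(abs(s - e) for s, e in zip(sim, exact))
print(f"max deviation from the analytic solution: {err:.2e}")
```

Because the ODE is linear, the implicit update has a closed-form solution for the next voltage, which is why the method can be both stable and very fast for RC-dominated networks.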

Re:Peer review doing its job (0)

Anonymous Coward | more than 2 years ago | (#41643205)

It may turn out that way, but there's nothing in the submission guidelines saying you have to be paid to publish papers in order to submit them and eventually get them accepted and published. Papers get evaluated on their content, not on whether the authors are employed to write them. There can be many reasons for rejecting a paper that have nothing to do with the quality of the new scientific work that has been done. It can be the presentation, both in a scientific sense and in more mundane things like the writing and formatting. For example, it's normal to have to spend a lot of time explaining not just your own work, but also how it fits into previously-published work. You have to put it in context. I just had a paper come back from review, and I had to completely re-write the introduction and background because, at about the time I submitted it, a new volume came out with a bunch of papers on the subject that negated a lot of my summary of previous work. The reviewers were more in tune with that stuff and already knew about it. It was a lot of work to get up to speed on the new material and incorporate it into the revisions. And I've gotten the editor's comments back and there is still one thing to fix. That will make two rounds of review. But at least they accepted all the other changes.

I guess that's the problem you're describing. It can be a *long* grind through submission-review-revision-resubmission-review2... etc. 6 months or more is pretty common. It is indeed difficult to set aside the time necessary to get through all that. It takes persistence, and there are plenty of situations where it may not be worth it. For an algorithm, it may be good enough to just put up the documentation and code on a web page, and call it a day.

Re:Peer review doing its job (1)

WaywardGeek (1480513) | more than 2 years ago | (#41646675)

The process you're describing is only acceptable to people who get paid while going through that process. Tell me of one example of someone who just wanted to do the right thing by informing the world, who actually went through that process. Someone currently alive.

Another possible mechanism for this (5, Insightful)

Brett Buck (811747) | more than 2 years ago | (#41637245)

I haven't done a lot of publishing in the open literature, but many times the papers that fly through the vetting process with little effort are on topics that are somewhat straightforward or trivial, and would thus not be as likely to be useful as a citation. An interesting topic raises many more questions and is more likely to require multiple tries to get through review, but ultimately it is more useful and more likely to get a citation.


Re:Another possible mechanism for this (5, Insightful)

bmacs27 (1314285) | more than 2 years ago | (#41637379)

I do publish. I think you are closer to the truth than the summary. A related model is that people tend to only try high impact (reach/likely to reject) journals when they are very confident they have something interesting and of high quality. We have papers for which we "turn the crank" (submit to lower impact journals with little resistance in review) or "run the gauntlet" (begin with a journal from which it will almost certainly be rejected, and continue down the chain until it finally sticks). Usually the latter are better papers to begin with. I bet Nature and Science are accepting papers that are rejected by one another, not by lower impact journals. People don't get rejected by low impact journals, and resubmit to Nature. That would be batty.

Re:Another possible mechanism for this (1)

Brett Buck (811747) | more than 2 years ago | (#41638919)

I also publish/produce/review/approve *many* papers and other documents, just not in public. But I see the uninteresting papers are mostly pencil-whipped, and the interesting papers are picked to pieces, and go back through the cycle many times before approval. I expect the same dynamic is in play for the unclassified world.

Re:Another possible mechanism for this (1)

sFurbo (1361249) | more than 2 years ago | (#41640419)

People don't get rejected by low impact journals, and resubmit to Nature. That would be batty.

A professor at my university suggested doing just that. After all, you have gotten constructive criticism, so after working that into your paper, you have a better paper, fitting for a higher impact journal. I agree, though, that is batty.

Re:Another possible mechanism for this (4, Informative)

WaywardGeek (1480513) | more than 2 years ago | (#41638061)

There are two kinds of papers: invited papers, and papers where professors do the peer-review thing. I've published some invited papers, but in my experience, there's always at least one a-hole on the review committee who will shoot down my work. The worst example is Professor Larry Pillage, whom QuickLogic paid $20K to review my work, which I did based on a paper called RICE. He never got it working properly like I did, but after reviewing my work, he claimed it as his own, published it at DAC the next year for a best-paper-in-show award, and then made a mint selling it to all our competitors. That guy is a serious a-hole. He was on most committees I ever tried to publish a paper with, and while I don't get names with the reviews, the psychotic analysis I sometimes received seemed 100% Prof. Larry Pillage. He stole ideas from great guys like Prof. Ronald Rohrer, who told me once, "We don't tell Larry anything!"

If publishing papers were more important to me, I'd do something about it, but the reality is people with ideas don't get to publish. People with the right connections and background do. This explains why Nature would do better with rejected papers.

Re:Another possible mechanism for this (0)

Anonymous Coward | more than 2 years ago | (#41640159)

First off, that sounds like a problem not with peer-reviewed journals, but with hiring someone without good enough documentation or an agreement. I can say from experience that pulling something like that from a journal review results in a huge mess and some serious implications, because there is a massive paper trail involved.

Additionally, with most non-crappy journals that don't have a huge shortage of reviewers, it is pretty difficult to get sunk by one bad reviewer. I've had my share of papers that had one horrible review. Frequently it seems to be lazy reviewers who take a long time to submit their review and simply didn't read or understand the paper. But typically there is some effectively automatic process to ask for an additional reviewer, assuming the paper isn't a complete mess in terms of language. With papers that have two reviewers, adding a third can quickly isolate when one is being an ass or lazy, though it works even in journals that normally default to one reviewer. Also, the editor usually has some say if you can politely point out that one person is obviously not responding to the content of the paper. Then there are appeal processes to other editors if the editor is causing the problem, or you can simply cut your losses and try a different journal.

Most of these processes involve just writing a simple letter with a couple of paragraphs, not much more than what you've written here; it just needs to be kept somewhat polite and professional. It has typically taken me more time to format the letter and look up the email to send it to than to actually write it.

And especially at smaller journals, (5, Informative)

aussersterne (212916) | more than 2 years ago | (#41638747)

the search for legitimacy of their own leads them to ultimately consider only papers that completely agree with conventional wisdom and support the already big names and big theories.

Not to mention that the reviewers that are willing to review for smaller journals are usually in the same boat—younger faculty trying to get a leg up—and subject to the same pressures and tendencies.

But even at the large and important journals, there is a tendency to dismiss really interesting papers unless they come from a large name / large name school. You'd better have a long track record and big names behind you or you won't get serious consideration, even if your work is sound and earth-shattering. It's just a matter of the probability of returns on the investment of labor.

I say all of this as someone that did sit as a managing editor on an academic journal and that has been a part of the review process for any number of articles.

There are serious inherent biases built into the system, both for good and for bad.

Much more important to my eye is the fact that this is all free labor but earns the publishers huge profits and costs the schools huge dollars. It's only a matter of time before the current system is overturned. Right now, schools pay money to faculty to write papers, pay money to faculty to review papers, then pay lots of money for the journals. Yet all of the authority of the paper comes from the faculty and from the institution, and circulation is limited to academics because articles run $30-$60 a pop for public access. It's only a matter of time until they cut out the middleman, save tons of costs, and grow their audience at the same time.

Re:Another possible mechanism for this (2)

Grieviant (1598761) | more than 2 years ago | (#41638753)

I haven't done a lot of publishing in the open literature, but many times the papers that fly through the vetting process with little effort are on topics that are somewhat straightforward or trivial, and would thus not be as likely to be useful as a citation. An interesting topic raises many more questions and is more likely to require multiple tries to get through review, but is ultimately more useful and more likely to get cited.

My experience is the exact opposite. Papers that address new topics or ones that are 'all the rage', even when there isn't much substance to them, get preferential treatment from reviewers. See MIMO and Cognitive Radio in the field of wireless communications. These areas are cash cows for grant money until the next flavor of the day comes along, and the fact that very little practical impact was made is quickly forgotten.

Same thing when novel but inferior algorithms are presented. For example, I once saw polynomial prediction applied to a specific problem where linear prediction had already been well studied - the results were worse in every possible way, but the novelty factor was enough to push it through.

In contrast, classical topics that are considered 'old' or 'well-studied' (but are by no means 'solved' and are still quite relevant) are poorly received. The reviewers tend to lazily dismiss the work without giving it due consideration.

Re:Another possible mechanism for this (1)

makomk (752139) | more than 2 years ago | (#41640513)

Same thing when novel but inferior algorithms are presented. For example, I once saw polynomial prediction applied to a specific problem where linear prediction had already been well studied - the results were worse in every possible way, but the novelty factor was enough to push it through.

Surely that's a useful thing to publish though - it tells everyone else in the field that approach has been tried already and doesn't work very well.

Re:Another possible mechanism for this (1)

Grieviant (1598761) | more than 2 years ago | (#41641875)

I'm not against publishing negative findings as long as the results aren't a foregone conclusion. There is such a huge number of slight variations and dead ends to every problem, though, that not all of them are worth writing a paper about.

maybe not editing? (2, Insightful)

Anonymous Coward | more than 2 years ago | (#41637273)

The summary seems to suggest that when a paper is rejected, the author edits it in hope of being less rejection-worthy the second time around.

I don't think the data provided is adequate to show that. An alternative hypothesis is that papers vary in risk, and "risky papers" are more likely both to be rejected and, once published, to be cited.
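To make the selection-effect point concrete, here's a toy simulation (all numbers invented, nothing from TFA): give each paper a latent "riskiness" that raises both its rejection probability and its citation count, and model no editing effect at all. The rejected-then-published group still ends up with more citations on average, purely through selection.

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Toy model: riskier papers are BOTH more likely to be rejected
    on first submission AND more highly cited once published.
    Rejection itself changes nothing about a paper."""
    accepted_first, rejected_first = [], []
    for _ in range(n):
        r = random.random()                        # latent riskiness in [0, 1]
        p_reject = 0.2 + 0.5 * r                   # riskier -> more rejections
        citations = random.gauss(10 + 20 * r, 5)   # riskier -> more citations
        if random.random() < p_reject:
            rejected_first.append(citations)
        else:
            accepted_first.append(citations)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(accepted_first), mean(rejected_first)

acc, rej = simulate()
print(f"mean citations, accepted on first try: {acc:.1f}")
print(f"mean citations, rejected on first try: {rej:.1f}")
# The rejected group comes out ahead even though, in this model,
# being rejected did nothing to improve the papers.
```

The specific coefficients are arbitrary; the point is only that any positive correlation between riskiness, rejection, and citations reproduces the observed pattern without any editing effect.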

Re:maybe not editing? (4, Insightful)

Chuckstar (799005) | more than 2 years ago | (#41639289)

This is one of those "I came on here to say this but you said it better" posts.

The only study they cite about how much editing is done between submissions seems to indicate "not much at all".

Also, this could explain the prevalence of Nature and Science in the study. Risky papers may be rejected by the orthodoxy of more specialized journals. Nature and Science may avoid that kind of orthodoxy simply by having a broader array of reviewers than a more specialized journal might.

Pick a completely zany example in a field I know nothing about: You've done cutting-edge work on the transport of lipids in palm fronds. There aren't that many people in the field, and most of them have been following a line of reasoning that lipids are transported by osmosis, as there has been some evidence of that. You have the hypothesis that lipids are transported by tiny aphids. You do some interesting lab work that seems to support the hypothesis. The results are supportable, publishable, but not entirely definitive -- you know, statistically significant, but there's really a lot more work to be done before anyone can say you've really proven your hypothesis correct.

You submit your paper to a botany journal. They have it reviewed by a bunch of palm frond experts. All of them have studied palm fronds very closely. They mostly adhere to the unproven pet theory of osmosis. They do not consciously reject your paper because it contradicts their pet theory, but between the kinda shaky results, and their unconscious bias, it gets rejected.

Now you submit to Nature. They only have a couple palm frond experts in their stable, so they have a fern expert and a deciduous leaf expert review it. Neither the fern expert nor the deciduous expert have any unconscious bias. They think the whole thing seems pretty interesting, and worthy of broader discussion. So it gets approved. Now that the whole palm frond field has seen your work, a couple guys start trying to replicate your work, adding their own twists. They come up with supporting evidence and publish, with a reference to your work. A couple more guys come up with evidence that supports a different hypothesis. Even if you end up having been wrong at this point, your work gets referenced because it is what stimulated them to do their work. Let's say they discovered a third mechanism, that seems to explain transport even better. This third mechanism is unrelated to osmosis, so few of the osmosis studies are being referenced. But because your work stimulated the whole line of inquiry, your wrong study still gets lots of references.

(I added the part at the end about the work ending up being wrong just to illustrate that risky scientific investigation doesn't have to end up being right in order to get referenced a lot. It just has to stimulate inquiry in a different direction such that people keep referencing the original study. A paper that just advances the field in an orthodox direction may still be great science, but may get lost in a sheaf of other similar studies advancing the field in that similar direction.)

Working as designed (1)

girlintraining (1395911) | more than 2 years ago | (#41637301)

That's how science is supposed to work. People show an interest in the work, check it over, tell you what's wrong with it. Then, because they contributed to it, of course they want to use it (i.e., cite it) when it's published. But all that said, I'm skeptical -- how do we know that it's not because of Unforeseen Variable X that papers that are initially (or repeatedly?) rejected are also cited more? It could simply be that being rejected more often is a sign of increased interest in the topic, and that higher interest level is what drives both metrics. And it also says nothing about the quality of other papers which are published without being rejected -- it could simply be that they are too specialized or that the research doesn't have any practical application. This could just be a case of someone assuming that a high correlation necessarily means a relationship exists between the two, instead of doing their homework and building a model.

Alternate viewpoint (1, Insightful)

merlinokos (892352) | more than 2 years ago | (#41637347)

"So there is apparently some reason to be patient with your paper's critics — they will do you good in the end." I have a different possible viewpoint. The papers that are most likely to be rejected are the ones that are controversial because they challenge the status quo. But once they're accepted, they're game changers. And since they're game changers, and the first publications with the new viewpoint, they're cited disproportionately frequently by follow up work.

Re:Alternate viewpoint (1)

chienandalou (2637845) | more than 2 years ago | (#41637925)

Game-changing papers may encounter more initial resistance, but I have to tell you as a reviewer that most rejected papers are rejected because they're poor and/or trivial. There's an awful lot of dross out there and, really, how many game-changing papers emerge in a given year? How often do games change? Moreover, if I think about the fields I know well ... it may not be fair, but most of the game-changing is done by folks who are already prominent and know how to get their stuff published. The extra-effort hypothesis has a lot more going for it. It's also possible that as you take more time and resubmit to a new journal, you add more citations, increasing the density of connections to the paper, which may in turn raise its chances of citation. Plus, more reviewers see it as it goes through more referee gauntlets, which may also raise its profile a tiny bit. The first hurdle in raising your profile in a field is just getting read in the first place. Publication in a journal does not guarantee that anyone else will ever read the article.

Re:Alternate viewpoint (2)

pnot (96038) | more than 2 years ago | (#41638253)

Game-changing papers may encounter more initial resistance, but I have to tell you as a reviewer that most rejected papers are rejected because they're poor and/or trivial.

True, but remember that here we're not considering the set of all rejected papers; we're considering the set of rejected papers which were subsequently accepted. That probably removes from consideration a large chunk of the just-plain-awful ones.

Re:Alternate viewpoint (1)

WaywardGeek (1480513) | more than 2 years ago | (#41638153)

Interesting view. If we say that the status quo means professors at well-known universities get to publish most "advances", then I agree. Back in the 1990s I attended every FPGA algorithms conference available, and here's what I found. At least 90% of the papers published were total crap, because the researchers had no clue where the state of the art in FPGAs was. They were still trying to adapt ASIC detailed routers, for example, when we'd already leaped ahead to integrated global/detail A* routers, or in the case of QuickLogic, separate global/detail because the detail router was optimally solvable, 100% of the time, with a simple linear algorithm after global routing.

I think the reason rejected papers get more citations is simply because there is a system in place designed to publish professors, regardless of whether they have a clue. The cheapest and easiest way to get a great idea published is the patent system.

Re:Alternate viewpoint (2)

mikael (484) | more than 2 years ago | (#41640663)

There has been a study on how research group leaders tend to cross-reference each other's work; that's the only way they can keep publishing. It's known as "citation analysis". Depending on the field of science or industry, these are known as "collaboration graphs", "citation graphs" or "Hollywood graphs" (for movies actors have starred in; some actors are natural pairings, like Laurel and Hardy, or The Three Stooges). Academics co-author papers together because they are experts in the same field.

Had one of my papers blocked for publication around 2004/2005 only to see the exact same paper published from California. That sucks.

Alternate Viewpoint (5, Insightful)

merlinokos (892352) | more than 2 years ago | (#41637373)

"So there is apparently some reason to be patient with your paper's critics — they will do you good in the end."

I have a different possible viewpoint. The papers that are most likely to be rejected are the ones that are controversial because they challenge the status quo. But once they're accepted, they're game changers. And since they're game changers, and the first publications with the new viewpoint, they're cited disproportionately frequently by follow up work.

(formatted correctly this time)

Re:Alternate Viewpoint (1)

mattwardfh (95047) | more than 2 years ago | (#41638245)

I think you've got the right idea. In my experience, a lot of rejections have little to do with the quality of the underlying research and rather reflect the biases of the reviewers. Bad reviewers reject a good paper, and someone else is smart enough to accept it.

Of course, the reality is that both reasons probably contribute to this phenomenon, but just saying that this is the peer review process working as expected is complacent and facile.

(My mod points seem to have expired, or I would give you some.)

Re:Alternate Viewpoint (0)

Anonymous Coward | more than 2 years ago | (#41638597)

As someone who has published many papers, most in top-quality journals and conferences, I tend to agree. Refereeing these days is done too quickly, because the time we spend as academics doing a thorough job is not part of the metrics our universities impose, and we are too busy trying to maximize those metrics. Several recent papers of mine received substantial referee comments that showed the reviewers had failed to read the paper: asking me to define something that is clearly defined, for instance, or, more importantly, claiming another paper has the same or better results when it very clearly does not. In each case, the reviews did little to affect the quality of the paper in question but did delay its publication.

Re:Alternate Viewpoint (1)

blueg3 (192743) | more than 2 years ago | (#41639337)

There aren't that many "game changers" to be statistically meaningful, and there really isn't much of a bias toward rejecting papers because they're controversial. I think the explanation posed above is much more likely. When you have a relatively low-impact paper, you fire it off to a low-impact journal that's likely to accept it and move on. When you have a really great paper, you start at the top and keep submitting to journals until it gets accepted. (Thing is, there's no shortage of pretty great papers, and there's a real shortage of space in very high-profile journals. So any paper that was great to start with is likely to see a lot of rejections unless it is absolutely one of the best.)

Re:Alternate Viewpoint (3, Interesting)

tgv (254536) | more than 2 years ago | (#41639963)

Perhaps 'game changers' is an exaggeration, but only papers that make minor extensions to the existing literature are accepted on first submission. In my (ex) field, papers that challenge a certain view get their share of flak from the reviewers. I've seen papers being shot down (see what I did there?) because the reviewers belonged to a different school. It's of course not always the case, but it does happen too often. One of the reasons is that such papers usually get reviewed by at least one of the opponents, or someone closely involved. Consequently, when such papers get accepted, they generate replies, and thus citations, in contrast to the papers that are in line with the main view.

I think the GP has a good point, and that the conclusion "peer review works" cannot be drawn from this.

Re:Alternate Viewpoint (2)

mikael (484) | more than 2 years ago | (#41640691)

It's a shame, but it's true. Like the invention of the combustion engine. The first working model would be considered noisy, fuel-inefficient, and extremely environmentally unfriendly, with soot and oil in the exhaust, but the paper published would have been considered seminal. Successive papers would document how to improve airflow and air-mixing, reduce turbulence, reduce noise, improve burn rate, and improve fuel efficiency, but they wouldn't be considered as ground-breaking.

If you had a monopoly on all the research on that field and could afford to wait 5 to 10 years, then you could put together all the improvements and that would appear to be a completely revolutionary engine.

Mac, Windows and I dare say the new Linux Desktop (0)

PsyMan (2702529) | more than 2 years ago | (#41637437)

Couldn't be bothered to read it all, but it sounds similar to my Ask Slashdot question (rejected and overlooked): http://slashdot.org/submission/2278195/ask-slashdot-easiest-linux-distro-to-join-mac-server-via-gui [slashdot.org] Surely some nerd can find a response, even if it is a Pirate Bay-style cursing response.

Re:Mac, Windows and I dare say the new Linux Deskt (0)

Anonymous Coward | more than 2 years ago | (#41637749)

Bitter much?

Duh (2, Interesting)

Anonymous Coward | more than 2 years ago | (#41637553)

Papers initially rejected are improved based upon the reviews of outside critics. It seems this means they end up being better papers overall. Who'da thunk it!

As a PhD student I was advised early on that you learn to love the rejections.

Surprising (2)

Joe Torres (939784) | more than 2 years ago | (#41637567)

Something that surprised me was that "75% of all published papers appear in the journal to which they are first submitted."

I would be very interested in seeing the difference in this rate between junior faculty and senior faculty. From my limited sample size (and the personal bias that comes with it), it seems this number would be much lower for junior faculty. Possibly, junior faculty are too eager to swing for the fences (Science and Nature) and miss (going down the ranks to PLOS ONE), while senior faculty already have favorite field-specific journals (where they may know the editors) in which their papers will likely be accepted with revisions.

Re:Surprising (0)

Anonymous Coward | more than 2 years ago | (#41637715)

This implies there are either a whole lot of junk journals out there or people frequently give up publishing a paper after the first shot.

Re:Surprising (0)

Anonymous Coward | more than 2 years ago | (#41638047)

I work with new faculty a lot, and I think your second comment is a far more likely explanation than anything else suggested above (and in the article). Getting rejected throws inexperienced people off far more than experienced ones; they don't realize that many rejections are bad luck, or due to a poor initial journal selection, rather than due to real issues with their work.

The other factor they don't appear to have controlled for is the effect of being in the 'right' social context (yes, it matters in science who your friends are); this can be good connections with the editorial board or simply successful prior publication in the journal in question.

Tldr: article makes too many assumptions to be reliable

Re:Surprising (0)

Anonymous Coward | more than 2 years ago | (#41640083)

Or a third option: with experience, researchers can better judge which journal is appropriate for their results and get better at picking the one they think will accept them. Groups I worked with still tried to push some papers to more general and prominent journals, but the majority of the papers went straight to a field-specific journal that was easier to get into.

Re:Surprising (1)

mikael (484) | more than 2 years ago | (#41640723)

Sometimes the authors of a paper aren't assertive in the abstract about what the paper's original contribution is; they just state what the paper is about. Is it a new algorithm, a comparison of existing algorithms, a modification of an existing method, or an optimization to reduce memory usage or processing time or to improve accuracy? Another type of paper is the STAR (State-of-The-Art Report), usually submitted by the head of the committee.

Not Surprising at All (1)

Anonymous Coward | more than 2 years ago | (#41637769)

It's not really that surprising if you are familiar with how publications works. When you submit a paper for review, it comes back with feedback for improvements. If a paper is rejected, the authors have incentive to fix any short-comings in their work (such as running more experiments, implementing missing features (if it's software-type research, etc.) and to strengthen it as much as they can before the second submission.

Since submission (in Computer Science, anyway) is usually free, the first submission is the barely done, maybe it'll get accepted version and if it doesn't we'll get free feedback from others submission. The second submission is the one that's gotta fly; after 2-3 submissions, reviewers start recognizing the paper and may not bother to read it again, no matter how much it's improved.

Retraction Watch -- for the details (5, Informative)

sillivalley (411349) | more than 2 years ago | (#41637785)

A very good site to monitor is Retraction Watch - https://retractionwatch.wordpress.com/ [wordpress.com]

They not only follow retractions in journals, but dig into them, and track them to other papers and publications by the same authors.

Those of us in industry forget that there are areas of academia that are dog-eat-dog, publish-or-perish.

Under such pressures, authors make up data, manipulate data and/or images, and more.

Take a look at Retraction Watch for the sordid details -- for us outsiders, it's like a soap opera for the geeky set!

Re:Retraction Watch -- for the details (1)

WaywardGeek (1480513) | more than 2 years ago | (#41638205)

It looks pretty scary. I've seen some of the crap profs will pull to get published, and it's not pretty. I get paid to patent rather than publish, being in industry.

Re:Retraction Watch -- for the details (-1)

Anonymous Coward | more than 2 years ago | (#41640087)

Do you get paid to post to Slashdot too?

not all citations are a sign of a quality paper (2)

ganv (881057) | more than 2 years ago | (#41638145)

Don't underestimate the number of citations you can get by being controversial or wrong.

Re:not all citations are a sign of a quality paper (1)

oneiros27 (46144) | more than 2 years ago | (#41639017)

I've heard that the trick is that you have to have it wrong in such a subtle way that it passes initial peer review, and takes about 3-6 months before someone figures out what's wrong ...

and then you can just roll in the citations.

bum fights (1)

kEnder242 (262421) | more than 2 years ago | (#41638305)

Someones sig I still remember from a long time ago:

Slashdot: A mix between a peer review journal and "bum fights".

poltics in science (0)

Anonymous Coward | more than 2 years ago | (#41638347)

The metric of "number of citations" to measure impact might be extremely misleading. Or rather, we might be measuring "impact" in the same way the number of comments on /. does -- what's "hot," trendy, dumb, gabby... It might say nothing about long-term impact, quality of insight, or the potential for some kind of paradigmatic leap.

Nature, Science publish papers rejected elsewhere? (2)

Convector (897502) | more than 2 years ago | (#41638641)

I have never heard of a paper being rejected by a journal and then sent to Nature or Science. It's the other way around.

Re:Nature, Science publish papers rejected elsewhe (1)

Anonymous Coward | more than 2 years ago | (#41639315)

People keep doing research even after they submit a paper. Sometimes that research gets added to the same "paper" and it becomes good enough to submit to a better journal after being rejected from an earlier one.

What about in mathematics? (4, Interesting)

oneiros27 (46144) | more than 2 years ago | (#41639031)

I want to know how Rejecta Mathematica [rejecta.org] stacks up to the others.

(for those unfamiliar with it ... they only take papers that have already been rejected somewhere else, or when the author doesn't want to make the changes that the peer-reviewer is insisting on)

Effect and Cause (1)

Anonymous Coward | more than 2 years ago | (#41639305)

Papers that are revolutionary enough to elicit a lot of citations are more likely to be rejected.

Discretisation/binning (1)

biodata (1981610) | more than 2 years ago | (#41639951)

Consider that the citation value of a paper varies on quite a fine scale, but that there are only a handful of possible places for it to be published. If the authors think the paper has a likely impact factor of 10, and they have a choice between journals with impact factors of 8 and 12, they will likely pick the 12 journal. If the 12 journal is doing its job, the paper will get bounced and end up in the 8 journal, but outperform the other papers in that journal because it's inherently more citable.
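The binning argument above can be sketched as a toy simulation (thresholds and impact factors are invented for illustration): citability varies continuously, authors aim one tier above their paper's level, and the top journal keeps only the true top papers. The bounced papers then mechanically outperform the lower journal's direct submissions.

```python
import random

random.seed(1)

def submit(citability):
    """Toy submission model with two hypothetical journals,
    impact factors 8 and 12. Returns (journal, was_bounced)."""
    if citability >= 10:          # author thinks it's a 12-journal paper
        if citability >= 12:      # the 12 journal keeps only true 12s
            return 12, False
        return 8, True            # bounced down to the 8 journal
    return 8, False               # sent straight to the 8 journal

papers = [random.uniform(6, 14) for _ in range(50_000)]
bounced = [c for c in papers if submit(c) == (8, True)]
direct  = [c for c in papers if submit(c) == (8, False)]
mean = lambda xs: sum(xs) / len(xs)
print(f"bounced papers in the 8-journal, mean citability: {mean(bounced):.1f}")
print(f"direct submissions,             mean citability: {mean(direct):.1f}")
# Rejected-then-published papers outperform their final journal's
# average purely because of the discrete journal tiers.
```

Nothing here depends on the exact numbers; any discretisation of a continuous quality scale, combined with aiming high, produces the same ordering.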

But why are they cited more? (0)

Anonymous Coward | more than 2 years ago | (#41641385)

The real question is why they are being cited. Are the people citing them using them to support their own research, or are they criticizing the work itself? It's not a good sign if boatloads of people are citing your work specifically in order to say it was flawed and came to incorrect conclusions.

obvious problems (0)

Anonymous Coward | more than 2 years ago | (#41641765)

survey data
uh, like, is the survey representative?

Reviewers are common to journals
at least in bioscience, if you are in, say, X-ray crystal structures of membrane proteins, there is a small number of reviewers, and they are *going to be the same in different journals*

Science and Nature off-scale
these two journals are like Rolls-Royce cars; they are just not representative.
I know profs at MIT and Harvard, and even for them a Nature or Science paper is a big deal; people will stop them in the hallway and congratulate them.

... only a tiny number of more citations (0)

Anonymous Coward | more than 2 years ago | (#41645271)

The effect size they find is so small, it's silly; check out the relevant figure:

    https://twitter.com/joe_pickrell/status/256756126140477442/photo/1

It's like 5% more citations, which could easily be explained by various other factors that some folks have pointed out. For instance, a scientist more excited about their research is more likely to submit to a big fancy journal that rejects most of its papers, but that excitement is also correlated with the impact of the paper. The trade-off of 5% more citations for all the extra work and time involved in resubmitting (and perhaps rewriting for another journal) is definitely not worth it for me.
