Science

Peer Review Highly Sensitive To Poor Refereeing

$RANDOMLUSER writes "A new study described at Physicsworld.com claims that a small percentage of shoddy or self-interested referees can have a drastic effect on published article quality. The research shows that article quality can drop by as much as one standard deviation when just 10% of referees do not behave 'correctly.' At high levels of self-serving or random behavior, 'the peer-review system will not perform much better than by accepting papers by throwing (an unbiased) coin.' The model also includes calculations for 'friendship networks' (nepotism) between authors and reviewers. The original paper, by a pair of complex systems researchers, is available at arXiv.org. No word on when we can expect it to be peer reviewed."
  • by Pojut ( 1027544 ) on Friday September 17, 2010 @10:58AM (#33611148) Homepage

    I can't quite remember what it was, but I seem to remember seeing it everywhere. It was exactly like TFA, though. [wikipedia.org] Damn, what was that place called again?

  • by Anonymous Coward on Friday September 17, 2010 @11:03AM (#33611210)

    This is precisely what the global warming skeptics say is happening with the global warming alarmist community, i.e. scientists review each others' papers in a 'co-operative' manner, as it were.

    I think I'll point some skeptics at this paper and then sit back with a bowl of popcorn and watch what happens.

    • by Pojut ( 1027544 )

      They probably won't believe it's real, and will accuse you of trying to make them look foolish.

    • Re: (Score:3, Insightful)

      by ByOhTek ( 1181381 )

      The funny thing is, the skeptics suffer from the same problem.

      I hope the moderates don't, otherwise we're borked.

    • Re: (Score:3, Insightful)

      by jfengel ( 409917 )

      I think I'll point some skeptics at this paper

      Let me know when you find some. I mostly meet deniers, with a deep ignorance of climatology or any other science and a deep conviction of a conspiracy.

      If you locate some actual skeptics, people capable of analyzing the evidence, who have come to the opposite conclusion of the vast majority of actual climatologists, I'd love to hear from them.

      • If you locate some actual skeptics, people capable of analyzing the evidence, who have come to the opposite conclusion of the vast majority of actual climatologists, I'd love to hear from them.

        What about people who aren't skeptics, but are damned tired of the whole thing being hijacked as a way to sell people on junk ideas?

        All the good intention in the world won't do you any good if the 'fix' isn't practical.

        • by oiron ( 697563 ) on Friday September 17, 2010 @01:02PM (#33612500) Homepage

          Well, peer-review the junk ideas out (aka, vote on the economic aspects). The science is pretty well settled, but the economics is not so clear (as if it ever really is).

          Come up with better solutions, implement them if you can, support good solutions if you can't. The problem isn't going away by denying it because you don't like the currently proposed solutions.

          That, we can discuss. "Global cooling in the 1970s" is just noise in the channel.

        • by sycodon ( 149926 ) on Friday September 17, 2010 @01:44PM (#33613064)

          The thing is that the "fix" is plain and simple, it's just rejected out of hand by the Envirowhackos because it doesn't involve government running our lives or a reduction in the standard of living, and it allows for more growth and prosperity: nuclear power.

          Now comes all the posts about peak uranium that ignore technology like breeders and thorium.

          Then comes everyone who thinks all reactors are built like Chernobyl.

          Next are high level waste folks who don't understand what reprocessing does.

          Last, but not least, are all the people who equate a nuclear reactor with a nuclear bomb.
           

          • by The Warlock ( 701535 ) on Friday September 17, 2010 @01:56PM (#33613212)

            Oh please. Let's face it, it's easy for the government to ignore environmental concerns; they've been doing that for years. The real barrier is the general public that's okay with nuclear power as long as the power plant isn't near their neighborhood, as long as trains carrying fuel or waste don't go anywhere near their house. They'd love them some cheap electricity, sure, but just build it near some other people.

          • by oiron ( 697563 ) on Friday September 17, 2010 @02:21PM (#33613458) Homepage

            Speaking as someone who's at least read up on this stuff (not an expert, but definitely an informed layman), such large-scale adoption of nuclear power comes with its own problems. For one, building such plants is going to be extremely costly, and probably can't be done in time to make a useful difference.

            You talk about reprocessing, but even after that, you eventually end up with some radioactive waste products, to say nothing about radiation leakage into the environment.

            Finally, I think it's just yet another "all our eggs in one (radioactive) basket" solution. I'd rather have a wide range of options, from renewables like wind, solar or geothermal, to, yes, nuclear power where that's appropriate.

            It's difficult to comprehend why a place with ample local generation capability (say, solar power in the Thar desert in India) should go with an expensive nuclear power plant, when the alternative is cheaper and makes more efficient use of readily available resources (as opposed to resources mined from the ground a few thousand kilometers away on another continent), yet that's exactly what you "nuclear only" types keep coming up with.

          • by techno-vampire ( 666512 ) on Friday September 17, 2010 @03:00PM (#33613876) Homepage
            You left one out: ignorant fools who conflate highly radioactive waste products with wastes with long half-lives. If you listen to them, you come away with the impression that the wastes from a reactor stay Highly Radioactive for thousands of years, ignoring the fact (if they're even aware of it) that unstable isotopes are either highly active or have long half-lives, never both, because the two qualities are mutually incompatible.
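
            That trade-off falls straight out of the decay law. A quick back-of-envelope with textbook half-lives (the two isotopes below are illustrative picks, not something from the parent post):

              A = λN, with λ = ln(2) / t_half
              => for a fixed number of atoms N, activity A ∝ 1 / t_half

              e.g. Cs-137 (t_half ≈ 30 y) vs. Pu-239 (t_half ≈ 24,000 y):
              24,000 / 30 ≈ 800, so Cs-137 is roughly 800 times more active, atom for atom.
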
      • by oiron ( 697563 )

        There were quite a few, like Friis-Christensen [wikipedia.org], who actually raised real scientific questions, but most of those have since come to the conclusion that global warming is indeed caused (primarily) by human influence.

        On the other hand, every Real Scientist is a skeptic; if someone claimed, whether in a scientific paper or otherwise, that global warming would cause the ice-caps to melt by Dec. 21 2012 or something like that, you can bet that the entire scientific community would pretty much laugh them out of the room.

    • by P0ltergeist333 ( 1473899 ) on Friday September 17, 2010 @03:59PM (#33614518)

      As usual, the strong caveat at the end of the article goes unnoticed:

      But Tim Smith, senior publisher for New Journal of Physics at IOP Publishing, which also publishes physicsworld.com, feels that the study overlooks the role of journal editors. "Peer-review is certainly not flawless and alternatives to the current process will continue to be proposed. In relation to this study however, one shouldn't ignore the role played by journal editors and Boards in accounting for potential conflicts of interest, and preserving the integrity of the referee selection and decision-making processes."

      IRL the reviewers are not chosen at random, which burns down the straw men built by the summary, most of the article, and the skeptics.

  • by AnonymousClown ( 1788472 ) on Friday September 17, 2010 @11:05AM (#33611228)

    "The system provides an opportunity for referees to try to avoid embarrassment for themselves, which is not the goal at all," he says.

    So, if a reviewer sees a paper that has actual data and a conclusion that goes against the consensus of the scientific community, the reviewer may reject it for fear of appearing foolish? Or rejecting someone just because of their publicized personal beliefs?

    Here's a hypothetical: a climate scientist who's an openly devout Christian finds data that sheds doubt on human-caused global warming, and his paper gets rejected because someone's afraid of looking foolish.

    That's the way I'm interpreting this study.

    • by MozeeToby ( 1163751 ) on Friday September 17, 2010 @11:09AM (#33611266)

      Can anyone give me a good reason why the reviewers get information about the author in the first place? Granted, there are disciplines that are close-knit to the point that the reviewers would recognize the author based on their past work, but in most cases I would think not knowing who the author is would address at least some of the issues that they highlighted here. It's hard to obscure the rest of the review process, but limiting nepotism should be relatively simple.

      • by prefect42 ( 141309 ) on Friday September 17, 2010 @11:20AM (#33611380)

        It is done as you've guessed, but it's still often obvious who the author is. Don't forget that sometimes a bad review has nothing to do with knowing who the author is. If you come across a paper that's done almost exactly the same work as you have done, or criticises your work, you could choose to give it a false bad review to try to prevent it from being published. I've seen papers that have received three reviews, two that say it's good, and one that says it's nowhere near worthy of being published. You often question the outliers.

        • The outliers might not be due to conscious suppression of competing research. People just have some ways of thinking that make their subjective opinions sometimes contrast with what an objective observer would think.
        • In Computer Science, top-tier conferences are higher quality than most journals. Admission is determined by a program committee, who are carefully selected by the program chair because of what they'll bring to the table.

          I've only served on one PC, but I can't imagine trying to serve on the program committee they describe in the paper, with 1/3 "rational" (i.e., self-serving) people and 1/3 "random" (i.e., can't tell a good paper from a bad). Of course you'd get essentially random results.

          • by Trepidity ( 597 ) <delirium-slashdot@@@hackish...org> on Friday September 17, 2010 @12:51PM (#33612382)

            As a computer scientist, my impression is that the program committees really are pretty random, or at least based on some sort of preference other than a widely agreed "quality" standard. Try it sometime: resubmit a paper rejected from a top CS conference verbatim to another top CS conference. The correlation between the reviews is usually quite low, both in terms of the numerical scores, and especially in terms of what they liked / complained about.

            • by drewhk ( 1744562 )

              Agreed. And don't forget that top conferences have a very low acceptance rate, so bad reviewers have a more damaging effect. If you accept 15-20% of papers, then even the smallest bias is dangerous.

              • by Trepidity ( 597 ) <delirium-slashdot@@@hackish...org> on Friday September 17, 2010 @01:48PM (#33613116)

                The poor review assignment at large conferences contributes to that effect as well, I think. I almost always have at least one of three reviewers, and sometimes even two of three, give a noncommittal review along the lines of, "well this isn't really my area, but it seems pretty good". Those reviews basically are non-reviews, so the acceptance decision is then entirely up to the remaining one or two reviewers. So it often comes down to: did the one person who actually provided an opinion on your paper like it or not like it?

                In my experience that's often pretty subjective, especially for conferences with tight length limits (standard in AI is six pages). If the reviewer personally found the paper to be on an interesting subject with an interesting approach that he/she felt should be investigated, almost any shortcomings can be excused, and the reviewer will conclude that "Overall, this paper provides a valuable contribution to an important ongoing discussion in this area." But if the reviewer doesn't like it, finds it boring, dislikes the approach, etc., it's easy to find something that had insufficient detail, didn't sufficiently distinguish from related work, didn't sufficiently motivate the problem or investigate/validate the applications, etc., etc., since you really can't fit that much in six pages.
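
                A back-of-envelope check of how quickly that happens, assuming each of three reviewers independently punts with a "not really my area" non-review with probability p (the 0.4 below is an assumption for illustration, not a measured figure):

                  from math import comb

                  # Probability that at most one of three reviewers gives a real opinion,
                  # when each punts independently with probability p.
                  p = 0.4
                  at_most_one_opinion = sum(comb(3, k) * (1 - p) ** k * p ** (3 - k)
                                            for k in (0, 1))
                  print(at_most_one_opinion)   # ~0.35: a third of papers ride on one opinion

                At the 15-20% acceptance rates mentioned above, that single opinion is effectively the whole decision.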

        • Re: (Score:3, Informative)

          by pz ( 113803 )

          It is done as you've guessed, but it's still often obvious who the author is. Don't forget that sometimes a bad review has nothing to do with knowing who the author is. If you come across a paper that's done almost exactly the same work as you have done, or criticises your work, you could choose to give it a false bad review to try to prevent it from being published. I've seen papers that have received three reviews, two that say it's good, and one that says it's nowhere near worthy of being published. You often question the outliers.

          Whether the authors are revealed to the reviewers or not varies from journal to journal. All of the large handful of reviews that I've done had the author information presented to all of the reviewers; I've not reviewed for really big name journals though (at least not yet). The reviewers' identities are not made known to the authors, though. It is often, however, rather easy to identify the reviewers, because my field is not that large, and personalities can shine right through unedited writing like reviews.

          • Re: (Score:3, Funny)

            by drewhk ( 1744562 )

            Wow, this gave me an idea! Researcher mimicry :) (a toy sketch of steps 3-4 follows the list)

            1. Find a successful researcher R in your field
            2. Find a journal/conference J in your field that anonymizes submitters
            3. Make a language profile L(R) of researcher R
            4. Make a paper P so that the profile L(P) is similar to L(R)
            5. Select a subset of citations from R and cite them in P
            6. Submit P to J
            7. ???
            8. Profit
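
            The toy sketch of steps 3-4: character-trigram profiles compared by cosine similarity (real stylometry uses richer features like function-word frequencies; everything here, including the sample strings, is made up for illustration):

              from collections import Counter
              import math

              def profile(text, n=3):
                  # Character n-gram frequency profile: a crude stand-in for L(R).
                  text = text.lower()
                  return Counter(text[i:i + n] for i in range(len(text) - n + 1))

              def cosine(p, q):
                  # Cosine similarity between two n-gram frequency profiles.
                  dot = sum(p[g] * q[g] for g in p.keys() & q.keys())
                  norm = math.sqrt(sum(v * v for v in p.values()))
                  norm *= math.sqrt(sum(v * v for v in q.values()))
                  return dot / norm if norm else 0.0

              L_R = profile("we show that the proposed method converges rapidly")
              L_P = profile("we show that the presented method converges quickly")
              print(cosine(L_R, L_P))   # closer to 1.0 = more similar "style"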

        • Re: (Score:3, Insightful)

          by DerekLyons ( 302214 )

          I've seen papers that have received three reviews, two that say it's good, and one that says it's nowhere near worthy of being published. You often question the outliers.

          With such a small sample size - there's no such thing as an outlier. There is still selection bias and confirmation bias though, as you so aptly demonstrate.

      • Re: (Score:3, Informative)

        by Shrike82 ( 1471633 )
        If you're a reasonably active researcher in a specific discipline (even more so if you work in a small sub-field) then you'll likely get to know your peers when you meet them at conferences and when you collaborate with other groups in projects. These same people will be the first to be asked to review a paper in their (and your) field, and will either recognise your work or simply see your name at the top. Now if they have no specific involvement in the work then ethically they're not in the wrong for reviewing it.
      • Malcolm Gladwell [wikisummaries.org] would agree with you.

        The ironic part of this is that you need not look further than human behavioral scientists to help solve this problem. It is also possible that the whole idea of anything human-based being "non-biased" is a fantasy made up to represent an ideal that will never happen. Humans are just biased by their physiology and environment. End of story.

      • Re: (Score:3, Informative)

        by TheRaven64 ( 641858 )
        It's often difficult to hide it. Someone qualified as a referee has to be familiar with the state of the art in a subject, and when it comes to journals the field is usually a very specialised subset of a broader field. The people qualified as referees will generally recognise the work of their colleagues, and also of their competitors. People can also communicate out-of-band. If you say to the top dozen people in your field 'I submitted a paper about X to this journal / conference this year' then the anonymity is gone anyway.
      • by DriedClexler ( 814907 ) on Friday September 17, 2010 @01:09PM (#33612584)

        Granted, there are disciplines that are close-knit to the point that the reviewers would recognize the author based on their past work

        Pardon my naivete, but I don't think such fields should exist. They present an extreme hazard of groupthink and inbred rubber-stamping.

        Any speciality should bend over backwards to maintain close ties with the surrounding fields of research so that others will understand how it relates and better be able to detect when bad practices are becoming standard. And it is vanishingly unlikely that this super sub-speciality will *never* stumble upon a problem isomorphic to a well-studied one in a distant field.

        It's this "oh this is a hard subfield, stay off my turf" mentality that causes things like ecologists *just now* starting to use adjacency-matrix eigenvectors (i.e. PageRank) to identify critical species, despite the method having been known to mathematicians for 40 years.
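
        For the curious, here's a minimal power-iteration sketch of the eigenvector-centrality idea underlying PageRank (PageRank proper also normalizes by out-degree and adds a damping factor). The 4-species "food web" matrix is entirely made up for illustration:

          import numpy as np

          # A[i, j] = 1 means species i supports species j (toy web, invented).
          A = np.array([[0, 1, 1, 0],
                        [0, 0, 1, 1],
                        [0, 0, 0, 1],
                        [1, 0, 0, 0]], dtype=float)

          def eigenvector_centrality(M, iters=200):
              v = np.ones(M.shape[0])
              for _ in range(iters):    # power iteration: v converges to the
                  v = M @ v             # dominant eigenvector of M
                  v /= np.linalg.norm(v)
              return v

          print(eigenvector_centrality(A))   # larger score = more "critical" species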

        Hey scientists: science is a group process. You're special, but you're not that special. Please build off of the existing work. Don't compartmentalize. Good science connects, and connects deeply. Yours should too.

        • Re: (Score:3, Funny)

          by smallfries ( 601545 )

          Exactly. Look how hard it has been for that TimeCube guy to get published just because his reviewers were educated stupid.

    • Re: (Score:3, Informative)

      A referee's rejection can be overruled by the editor. It's the editor's job to choose referees who will understand the research, and to make sure they are fair.
    • Re: (Score:3, Interesting)

      >>>a climate scientist who's an openly devout Christian finds data that sheds doubt on human-caused global warming, and his paper gets rejected because someone's afraid of looking foolish.

      Nothing that extreme. More like they would reject papers that claim "global warming caused by natural causes" and accept papers that say "global warming caused by man", in order to protect their own beliefs. A guy named Thomas Kuhn wrote about this very phenomenon (protecting the current paradigm, aka worldview) several decades ago.

      • Re: (Score:3, Insightful)

        by hkmwbz ( 531650 )
        On the other hand, skeptics like Richard Lindzen are actively publishing... Their research might not hold up to closer scrutiny, but somehow it gets through peer review. Odd, then, if peer review is so biased and dismissive.
    • Re: (Score:3, Interesting)

      by Bigjeff5 ( 1143585 )

      I remember hearing about a nutrition paper that was rejected from a medical journal for a reason along the lines of "That can't possibly be true." So the guy updated the paper with an explanation of the basic bodily functions involved and how they work, which shows exactly why it could happen, and it was still rejected. He submitted it to a different journal, where they basically said "This looks sound, we'll publish on the condition that you remove the explanation. Any doctor would know this already." The paper was published there.

  • by lymond01 ( 314120 ) on Friday September 17, 2010 @11:10AM (#33611282)

    When you're talking about scientific papers, a "bad apple" reviewer may be able to skew the record in terms of 1-10 scales, but reviewers also do a qualitative write-up of the material. That's really the only important part and if one or two people fall outside the line of general consensus, they'll just be ignored.

    • by Shrike82 ( 1471633 ) on Friday September 17, 2010 @11:27AM (#33611470)
      Well that's not always the case. Different journals have different review processes. Some ask for numerical choices on a scale, others want choices in terms of "strongly agree", "somewhat agree" etc. for specific questions, others want only written comments and a final choice. Even this final choice differs in many cases, sometimes restricted to Accept, Accept with minor corrections, Accept with major corrections, Invite for resubmission and simply Reject, while others take the final choice as an aggregate of multiple choice responses or numerical averages. Some systems are obviously easier to bias than others.

      Regardless of all this though, sometimes you'll find out that only two of three reviewers responded, and at least one of those probably got one of their postdocs or even a PhD student to do the review. Some reviews will have empty parts where a reviewer was supposed to write a paragraph but couldn't be bothered, or because they didn't want to reveal the fact that they were totally unfamiliar with the subject matter. Getting a journal paper published is more hit and miss than you'd think. I used to think that a good paper with good ideas was enough, but it's not always the case.
    • We're not talking about the reviewers, we're talking about the referees. The guys who let papers in the journal so that they can be reviewed.

      If the paper never gets published, how will reviewers ever see it?

      • I think you're not entirely clear on how peer review works. Once a paper is published the peer review process is over, bar someone seriously questioning the results or conclusions and forcing the paper to be withdrawn post-publication. This is pretty rare. The usual peer review process goes something like this: Someone submits a paper to a journal. The editor will sometimes take a cursory look to determine if it's totally crap, but often will simply assume that people wouldn't waste their time by submitting junk. If it's not rejected at that stage, it goes out to referees, whose reports decide whether it's accepted, revised, or rejected.
        • Yeah, I was actually talking about the editors rejecting it without sending it off for review.

          My bad.

  • Since the scientific community is so very obsessed with peer review, will this study actually modify the standard procedure?

    Of course, all I can think of is: gee, I wonder if this has had any impact on all the climate change studies that are constantly contradicting each other...

    • Re: (Score:3, Insightful)

      by arivanov ( 12034 )

      There is an aphorism: democracy as a form of government is riddled with problems; however, we have yet to invent anything better.

      Same with peer review. It has its problems. However, we have yet to invent anything better.

  • Almost always - no, that's not a scientific deduction, it's just coming from skewed subjective personal experience - the ones who most complain about problems with article peer review systems are those who have the most problems publishing decent articles at decent places. Also, nepotism? Ever heard of [single/double] blind reviews? I guess this must be one of those slow news days.
  • by Anonymous Coward on Friday September 17, 2010 @11:23AM (#33611404)

    I mean scientists who publish among themselves, i.e. inside their narrow specialty, in their own journals, without checking whether the problem at hand has been solved elsewhere. This is more and more common as people get more specialized, and it can lead to very basic errors being propagated through the whole community, like rheologists believing in the existence of pure elongational flow (a trivial misunderstanding of tensor algebra). Since the peers reviewing the papers are members of the same community, those errors usually go unnoticed.

    • Yup, I've seen exactly the same thing in several sub-fields of computer science. You read about solutions in textbooks published in the '80s, describing established knowledge, and then see brand new research papers in a different area where they ignore all of this prior research and come up with inferior solutions. About 90% of the papers I've read on Grid Computing fit in this description.
  • by Remus Shepherd ( 32833 ) <remus@panix.com> on Friday September 17, 2010 @11:24AM (#33611420) Homepage

    Just this week, I was asked to peer review a paper in which I was mentioned in the Acknowledgments. The request was sent out automatically -- the journal has records of all their authors, and the keywords for this paper matched the keywords in my profile, so I was picked to review it.

    I recused myself, but really I should never have been asked. If they're going to handle the peer review process automatically, the artificial intelligence that makes the decisions needs to be improved.

    • by starless ( 60879 ) on Friday September 17, 2010 @12:02PM (#33611820)

      Except that I've heard of people deliberately adding people to acknowledgements to try to make sure they don't get those people as referees (and it hasn't worked)!

    • Re: (Score:3, Insightful)

      by Shrike82 ( 1471633 )

      If they're going to handle the peer review process automatically, the artificial intelligence that makes the decisions needs to be improved.

      I don't think it's a massive problem, it just relies on people being ethical about declining to review something if they have an interest (in the legal sense) in the work. People have a lot to lose if they try and cheat the system and get caught.

  • by dazedNconfuzed ( 154242 ) on Friday September 17, 2010 @11:25AM (#33611432)

    The great anecdote demeaning peer-reviewed journals is The Social Text Affair [nyu.edu], where a prominent journal published with enthusiasm the article "Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity", only to be informed it was, in fact, deliberately written gibberish submitted as a hoax.

  • by nweaver ( 113078 ) on Friday September 17, 2010 @11:26AM (#33611456) Homepage

    There are a couple of significant and important limitations in the model:

    a) It assumes only two reviews per paper, that reviews are pure boolean, that reviewer types are pure, and that reviewers are randomly selected (when two of the classes of reviewers, 'misanthropes' (always reject) and 'altruists' (always accept), are specifically selected against by editors and PC chairs based on reputation).

    b) It does not consider the cases (such as conferences) where there is a program committee meeting and the papers are not just considered on their own, but gone through a relative ranking process.

  • by gnutrino ( 1720394 ) on Friday September 17, 2010 @11:40AM (#33611590)
    First off, in case anyone is in doubt: this study uses a model of peer review; no experiment or observation of an actual peer review process was done. That's not to say interesting and enlightening things can't come from modeling, but in this case the model they use seems very questionable and highly arbitrary. This part in particular is highly dubious:

    Each reviewer produces a binary recommendation within the same timestep: 'accept' or 'reject'. If a paper gets 2 'accept' it is accepted, if it gets 2 'reject', it is rejected, if there is a tie (1 'accept' and 1 'reject') it gets accepted with a probability of 0.5.

    If a single 'bad' reviewer (i.e. one that gives the 'wrong' answer as determined by the 'correct' method of reviewing mentioned as a control in the paper) can cause a paper to have a 50:50 chance of acceptance or rejection, it doesn't seem too surprising to me that a relatively small number of them could cause the process to become '[not] much better than by accepting papers by throwing (an unbiased) coin' - because in their model, in the case of a reviewer disagreement, that's exactly what is happening!
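
    To see how directly that conclusion follows, here is a minimal simulation of the quoted decision rule. The two-referee setup and the coin-flip tie-break come straight from the paper's description; the uniform quality scale, the 0.5 acceptance threshold, and the mix of only 'correct' and 'random' referees are my own simplifying assumptions:

      import random

      def vote(quality, kind):
          # 'correct' referees accept iff the paper clears the threshold;
          # 'random' referees flip a coin. The paper's other classes
          # (rational, misanthrope, altruist) could be added the same way.
          if kind == 'correct':
              return quality > 0.5
          return random.random() < 0.5

      def accepted(quality, bad_fraction):
          kinds = ['random' if random.random() < bad_fraction else 'correct'
                   for _ in range(2)]               # two referees per paper
          votes = sum(vote(quality, k) for k in kinds)
          if votes == 1:                            # tie: accept with probability 0.5
              return random.random() < 0.5
          return votes == 2

      def mean_accepted_quality(bad_fraction, trials=200000):
          kept = [q for q in (random.random() for _ in range(trials))
                  if accepted(q, bad_fraction)]
          return sum(kept) / len(kept)

      for f in (0.0, 0.1, 0.3, 1.0):
          print(f, round(mean_accepted_quality(f), 3))

    With no bad referees, the mean quality of accepted papers comes out around 0.75; at bad_fraction = 1.0 it collapses to about 0.5, the coin-toss baseline the authors describe.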

  • by Kurofuneparry ( 1360993 ) on Friday September 17, 2010 @11:45AM (#33611644)

    The peer review system is great for regulation, standardization and unification. However, all scientists that I've worked with/researched with/spoken with much about this topic admit that the system can be annoyingly flawed by group think and conformity. One bad apple ruins the bunch, right?

    The good news? While this part of the scientific community is not immune to problems, the slack is picked up elsewhere: As long as methods, data and results are transparent, reproducible and published we can actually have quality science.

    I often speak to people about scientific research and they're shocked that it's not full proof. This is kind of like buying software (perhaps even a Microsoft product) and finding that it's not perfect. Science is done by committee and progresses slowly. "If we know what we were doing, it wouldn't be called research" ~Albert Einstein

    Then again, I'm an idiot....

    • by mcgrew ( 92797 ) *

      I often speak to people about scientific research and they're shocked that it's not full proof.

      I don't quite understand what you mean by "full proof". You mean they don't receive the full paper, or that the paper isn't complete, it isn't fully proofread, or what?

      Then again, I'm an idiot....

      If you're a researcher it's highly unlikely that you're an idiot. Even if your sig is correct.

    • Re: (Score:3, Informative)

      Yeap, old news.

      http://www.genomeweb.com/peer-review-broken [genomeweb.com]
      http://www.slate.com/id/2116244/ [slate.com]

      All it takes is one bad reviewer that doesn't know what he's talking about, or only skimmed over the paper, to get a paper rejected.

  • by oiron ( 697563 ) on Friday September 17, 2010 @11:46AM (#33611654) Homepage

    I actually read the comments on TFA, and down at the bottom there's a particularly interesting one: [physicsworld.com]

    This study overlooks not only the role of the editor, but also the process in which the authors are able to answer the referees' objections. When the referees are competent, this leads to better papers through useful suggestions. On the other hand, when they aren't, their objections are easily brushed away (after some exasperation on the authors' part), and the paper eventually gets through. Also, when the case is particularly contentious, there's still the option of calling for an adjudicator. In summary, the peer-review process is far more complex than this simulation might suggest. On the dark side, I've also noticed that referees are sometimes reluctant to object to papers from certain renowned authors. The human factor is hard to remove. I guess many people will agree that there's a need to look for better approval systems, especially today, when there's an explosion of submissions. However, we must also acknowledge that the present system has served its purpose of maintaining a certain quality.

    There's actually a reasonably intelligent discussion going on in there...

    • There's actually a reasonably intelligent discussion going on in there...

      I'm not sure I'd be comfortable on that site then. I prefer the casual trolling, mis-modding and general idiocy right here on /.

  • by happy_place ( 632005 ) on Friday September 17, 2010 @11:47AM (#33611662) Homepage
    My dad (he has a PhD in a scientific field from Cornell) told me that when submitting a thesis to a review board of professors, it really doesn't matter how "tough" a professor on your committee is, as long as that professor has a rival: take advantage of their ego with an equally assertive ego by purposefully choosing the rival professor to join your committee as well. Then they'll spend all the review board discussions and presentations contradicting and arguing with one another, and in the end they'll both be so incensed that they cancel each other out, and it doesn't matter what you presented... I guess TFA is only pointing out that this occurs at the publishing level as well.
  • by 0123456 ( 636235 ) on Friday September 17, 2010 @12:03PM (#33611848)

    Peer review only works if the reviewers can be trusted and don't form a clique to get their work in and keep other people out. Surely anyone with even basic knowledge of human psychology would understand this?

    • Many scientists don't spend much time thinking about the peer review process itself. Maybe they question it when a good paper of theirs gets turned down, or when a bad paper they disagree with gets published, but they don't spend a lot of time thinking about what could replace it.

      Indeed, what COULD replace it? No review at all? A system where you get to strike one reviewer's comments?

      Anyway, for most scientists, it's just something that exists and you just deal with it. I certainly have no intention of trying to change it.

      • "a good paper of theirs gets turned down, or when a bad paper they disagree with gets published"

        This basically says it all. No one ever submits a bad paper, in their own opinion, and bad papers are limited to the ones that they disagree with.

        In case you missed it, peer review is dead. Get over it. Don't be sad, you can still have your journals and pomp and circumstances. You can drag out the rotted corpse and parade around with your scientist friends. It just carries no weight with the public. We've moved on.

  • Because this is an important question for serious people, but has no bearing on why various cranks (Intelligent Design people, climate change "skeptics", Time Cube, etc.) may have trouble getting their work in print. Papers by such people generally don't end up in the peer review phase - they aren't sent out for evaluation by the journal, so peer review doesn't matter.

      That said, peer review provides substantially the same benefit as those "shoplifters will be prosecuted" signs you see in department stores.

      Shoplifters are very seldom if ever actually prosecuted - but the threat, even the vaguest menace, of public scrutiny has an impact on behavior. I'm not talking about scientific fraud (which peer review will seldom catch), but about quality of reasoning, doing the needed controls, etc. We may have a system that rewards good research little better than an unbiased coin, but the *perception* that it works, or that it might work for you, motivates people to do the work needed to survive peer review.
  • by grandpa-geek ( 981017 ) on Friday September 17, 2010 @12:25PM (#33612072)

    The government uses peer review to evaluate proposals for science and engineering grants. The same issues probably apply to those evaluations.

    I have experienced a situation in which one reviewer recommended turning down a grant for reasons that could be considered as biased, although the bias was groupthink rather than individual. The other reviewers were enthusiastic about funding the grant and regarded it as a potential game-changer. It didn't get funded. A few years later the game-changing nature of the technology was recognized, but it was too late for the original applicant.

  • the problems of nepotism and tribalism are everywhere.
    from internet message boards and professional office environments to national government and international politics.

    here's a paradox for you,
    someone could do a study on how to eliminate nepotism and tribalism.
    then they can put it up for peer review.

  • by pesho ( 843750 ) on Friday September 17, 2010 @02:09PM (#33613342)
    Chances are that this paper is not going to pass peer review. A brief read of the paper shows that they don't even attempt to validate the model with real data (too lazy for real research, I guess). Their model is also overly simplistic, to the point of stacking the deck towards proving that peer review is bad. The reviewer role is simplified to an accept/reject decision, which has nothing to do with reality. They completely eliminate the revision step in the peer review process, where authors address the reviewers' comments, either through new experiments or through argument. If you look at the 'characters', what they call a 'rational' reviewer looks more like a 'bastard' reviewer. They completely ignore the possibility that a reviewer can make suggestions that improve the paper.

    I have several publications that were significantly improved through the peer review process. When I review papers my goal is not to shoot down the work; rather, I try to find ways to improve it. Of course there are 'bad' reviewers, who think that reviewing a paper means shredding it to pieces. These are actually easy to spot, because they rarely suggest anything useful, and they are often ignored by the journal editors. Speaking of which, journal editors are yet another part of the peer review process that is missing from their model.

"If it ain't broke, don't fix it." - Bert Lantz

Working...