New Stanford Institute To Target Bad Science

ananyo writes "John Ioannidis, the epidemiologist who published an infamous paper entitled 'Why Most Published Research Findings Are False', has co-founded an institute dedicated to combating sloppy medical studies. The new institute is to focus on irreproducibility, waste in science, and publication bias. The institute, called the Meta-Research Innovation Center, or METRICS, will, the Economist reports, 'create a "journal watch" to monitor scientific publishers' work and to shame laggards into better behaviour. And they will spread the message to policymakers, governments and other interested parties, in an effort to stop them making decisions on the basis of flaky studies. All this in the name of the centre's nerdishly valiant mission statement: "Identifying and minimising persistent threats to medical-research quality."'"
  • Does anyone else remember The Journal of Irreproducible Results?
    Anyway, it would be a great name.
  • by Russ1642 ( 1087959 )

    Sounds like a great idea, but in reality it'll end up being untrusted and reviled by scientists. Set yourself up as THE authority on judging anything and the people you're judging will hate you because of your biases, conflicts of interest, lack of oversight, lack of accountability, and poor dispute resolution.

    • by lgw ( 121541 ) on Tuesday March 18, 2014 @04:03PM (#46519059) Journal

      Depends how "meta" they are. If they're careful and question peer review practices and point out common methodology pitfalls, they might do OK. Better still would be to simply do science: science that refutes bogus published results through failure to reproduce the experiment as described. While that's absolutely key for science to work, no one funds it.

      • If they're careful and question peer review practices and point out common methodology pitfalls, they might do OK.

        And let's be honest, there's a lot of low-hanging fruit in this area.

    • Oversight already exists everywhere: peer review is the minimum requirement for publishing, and funding is awarded based on what your fellow scientists think of your research. If, for some reason, they decide to staff it with non-scientists, then sure, it will probably be reviled, and rightly so.

      It will also depend on how they conduct themselves. A heavy-handed approach ("We did what was suggested in the methods verbatim and it didn't work, so we demand their paper be retracted!!!") isn't likely to be well received.
    • Re: (Score:2, Insightful)

      by wbtittle ( 456702 )

      They are only acting as the authority to point out the problems. There are huge problems in epidemiology. The really useful data gathered by epidemiology is not the positive correlations; it is the non-correlations. This presents a rather ugly problem: the data that people find interesting are the positive correlations, and with the exception of one or two studies, these are pretty much worthless. The data showing that a link isn't there are what is really useful. This is the source of all the bad research.

      If you lo

    • by pnutjam ( 523990 )
    Worked for Snopes.
  • nerdish? wtf. (Score:3, Insightful)

    by rogoshen1 ( 2922505 ) on Tuesday March 18, 2014 @03:21PM (#46518669)
    Why exactly is "Identifying and minimising persistent threats to medical-research quality" even remotely considered "nerdishly valiant"??? That is a pretty important aspect of medicine that gets overlooked all too often by the pharma-funded medical-testing establishment. :(
  • by Anonymous Coward

    Was his paper... ...Eeeeeevil?

  • by Theovon ( 109752 ) on Tuesday March 18, 2014 @03:48PM (#46518937)

    It’s wrong to publish fabricated or falsified results, and people who do that should be slammed. There are other situations where people are being negligent or hoping you don’t catch their sleight of hand. For instance, there are the innumerable parallel computing papers that use O(N^2) algorithms to show a speedup on a GPU or supercomputer where there exists a serial O(log N) algorithm that runs faster on a PC. (No joke.) All of those sorts of things should be actively retracted.

    However, what we don’t want to do is discourage publication of preliminary results that MIGHT be wrong. Honest, legitimate work that gets superseded should not be subject to retraction, and a wrong theory published can often inspire others to do a better job. When a researcher can say, “That was our best hypothesis at the time, and this was the most accurately we could represent the data,” then it should stand as a legitimate publication. Relativity and quantum mechanics supersede Newtonian physics, but that doesn’t mean we should retract everything Newton said.

    Now, most people reading this will say “duh!” because that’s obvious. All I’m saying is that we need to be careful not to create an environment where publication of preliminary work is discouraged in any way or where honest mistakes can hurt the career of an honest researcher. That would put a damper on science in general. The bar for retraction should be very high and require solid evidence of intentional wrongdoing.

    • However, what we don’t want to do is discourage publication of preliminary results that MIGHT be wrong. Honest, legitimate work that gets superseded should not be subject to retraction, and a wrong theory published can often inspire others to do a better job. When a researcher can say, “That was our best hypothesis at the time, and this was the most accurately we could represent the data,” then it should stand as a legitimate publication.

      The trick is to make that statement when first publishing the research, as opposed to saying it after somebody calls bullshit on apparently dubious claims.

      DISCLAIMER: this paper contains preliminary research - results may not be fully vetted.

      Or something to that effect.

      • However, what we don’t want to do is discourage publication of preliminary results that MIGHT be wrong. Honest, legitimate work that gets superseded should not be subject to retraction, and a wrong theory published can often inspire others to do a better job. When a researcher can say, “That was our best hypothesis at the time, and this was the most accurately we could represent the data,” then it should stand as a legitimate publication.

        The trick is to make that statement when first publishing the research, as opposed to saying it after somebody calls bullshit on apparently dubious claims.

        DISCLAIMER: this paper contains preliminary research - results may not be fully vetted.

        Or something to that effect.

        One would hope that even if the research is preliminary, the results presented have been fully vetted.

        • Well, I understand that the first guy to do the research might not know everything there is to know; I doubt Einstein's first draft of the Theory of Relativity was his last draft, you know? But Einstein had the sense and tact to point out from the get-go that he very well may have been wrong.

          • Well, I understand that the first guy to do the research might not know everything there is to know; I doubt Einstein's first draft of the Theory of Relativity was his last draft, you know? But Einstein had the sense and tact to point out from the get-go that he very well may have been wrong.

            Vetting information presented simply means that the data is correctly presented. It doesn't mean that it is the whole picture. So yes, Einstein's research was vetted, even if it was further refined later (and that later research was also vetted). Publications need to take responsibility for the research they publish, at least to the extent they are verifying it.

            There is another story on Slashdot right now about bogus stem cell research. What is the point of having editors for your scientific journals if they aren't going to do any fact checking and just blindly publish whatever they get?

            • What is the point of having editors for your scientific journals if they aren't going to do any fact checking and just blindly publish whatever they get?

              Fair enough - not much point in adding a disclosure if the people publishing the work can't be bothered to verify anything.

      • In effect, all published scientific papers, especially those that break new ground, are preliminary research. Peer review is akin to spell and syntax checking. After the paper gets published, the broader field gets to weigh in and take their whacks at it. Only then, if it continues to stand up, does it become established science. Even a paper that doesn't hold up can still help point you in the right direction, since it shows where not to go.

    • by gstoddart ( 321705 ) on Tuesday March 18, 2014 @04:21PM (#46519195) Homepage

      For instance, there are the innumerable parallel computing papers that use O(N^2) algorithms to show a speedup on a GPU or supercomputer where there exists a serial O(log N) algorithm that runs faster on a PC. (No joke.)

      Except that while there might be some problems which have O(log N) solutions as well as O(N^2) solutions, there are still things which only have O(N^2) solutions, correct?

      So if you can learn how to solve a known O(N^2) problem better (even if there is a known O(log N) solution), what you learn is still applicable to other O(N^2) problems for which there isn't a known O(log N) solution.

      I'm not sure what you're describing is evidence of malfeasance; it may be that they're working on solving a class of problem, and not necessarily that specific problem.

      To me it sounds more like they're probably aware of the O(log N) solution, but that's irrelevant because they're looking at how to use parallelism to address things which are O(N^2), because there are many, many of those.

      So much of math comes down to solving an equivalent problem you already know how to solve.

      Maybe they're figuring out how to address a problem which is O(N^2) by one method, so that once they know how to solve it faster with parallelism, they can learn how to solve other problems which nobody has an O(log N) solution for.

      It may not be all about solving that particular problem, but that class of problem. Because mostly it seems like we've never figured out how to do real parallelism except for things which are classed as 'embarrassingly parallel' because they already lend themselves to being broken up -- like SETI@Home.

      • by Theovon ( 109752 )

        In fact, there are so many O(n^2) algorithms that they can parallelize that there’s really no excuse for continuing to use the ones that have O(n log n) versions. Yet they keep doing it! Why does everybody keep using O(n^2) n-body and shortest-path algorithms? That you can parallelize those teaches us nothing about parallelizing algorithms unless all you care to do is benchmark the supercomputer (in which case there should be an appropriate footnote). This is just laziness.
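
        To make the complexity gap concrete, here is a toy sketch (my own illustration, not from any of the papers being criticized): finding the smallest gap between any two numbers. The brute-force all-pairs version is O(n^2); sorting first makes it O(n log n), and no realistic degree of parallelism rescues the former once n grows.

        ```python
        import random
        import time

        def closest_gap_bruteforce(xs):
            """O(n^2): compare every pair of values."""
            best = float("inf")
            for i in range(len(xs)):
                for j in range(i + 1, len(xs)):
                    best = min(best, abs(xs[i] - xs[j]))
            return best

        def closest_gap_sorted(xs):
            """O(n log n): after sorting, only adjacent values can be closest."""
            ys = sorted(xs)
            return min(b - a for a, b in zip(ys, ys[1:]))

        xs = [random.random() for _ in range(5000)]

        t0 = time.perf_counter()
        r1 = closest_gap_bruteforce(xs)
        t1 = time.perf_counter()
        r2 = closest_gap_sorted(xs)
        t2 = time.perf_counter()

        assert r1 == r2
        print(f"brute force: {t1 - t0:.2f}s   sorted scan: {t2 - t1:.4f}s")
        # Even a perfect 1000-way parallel speedup of the O(n^2) loop is
        # overtaken by the serial O(n log n) version as n keeps growing.
        ```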

    • When a researcher can say, “That was our best hypothesis at the time, and this was the most accurately we could represent the data,” then it should stand as a legitimate publication.

      Unfortunately, in many cases when people say this, what they often mean is: "This was the best a posteriori hypothesis we could come up with after trying out dozens of random correlations in our data to find something that could appear to be significant, and this was the most accurately we could represent the data after trying a couple dozen statistical measures to find something to make our minor 'blip' look more interesting."

      In other words, it may well be a statistical fluke, but, hey, it is a "legitimate publication."
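
      A tiny simulation (my own sketch, nothing from the article) shows how cheap such "significance" is: generate purely random data, run enough comparisons, and a few will clear p < 0.05 by luck alone.

      ```python
      import random
      import statistics

      random.seed(42)  # reproducible noise

      def null_study(n=30):
          """Two unrelated random samples; any apparent effect is pure noise."""
          a = [random.gauss(0, 1) for _ in range(n)]
          b = [random.gauss(0, 1) for _ in range(n)]
          diff = statistics.mean(a) - statistics.mean(b)
          se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
          return abs(diff / se)  # crude z-like test statistic

      trials = 100
      hits = sum(null_study() > 1.96 for _ in range(trials))  # roughly p < 0.05
      print(f"{hits} of {trials} null comparisons look 'significant'")
      # Expect about 5 false positives per 100 tests; report only those and
      # a pile of noise reads like a string of discoveries.
      ```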

      • by Anonymous Coward

        Uhh, "honest mistakes" arguably should hurt the career of a researcher. If I'm an engineer designing a bridge, and I screw up my calculations, and the bridge falls down, my career should suffer. If I'm a researcher and I make a significant mistake collecting data or analyzing it properly or whatever, my career should similarly suffer.

        An engineer building a bridge is using well-established techniques and tools: bridges have been built before. They are often implementing small permutations of a previous design.

        Meanwhile, a researcher is often working with something that has never been done before. A mistake is more likely to happen and in most cases the potential for damage is much lower.

        Research is an iterative process while building a bridge is usually not.

      • And, frankly, even if the research isn't mistaken but is later superseded by further advances, we should start thinking about how to attach references to those sorts of things too -- lawyers do it when drafting a statute that replaces a previous one, to avoid confusion. Scientists should figure out a mechanism to do the same.

        If only there was a mechanism to refer to or cite previous work. I know... we can call them references, or citations! Awesome, I should publish a new paper telling everyone that they should use this system!!

  • ...and welcome once again to "Bad Science"

  • by Anonymous Coward
    Brought to you by the same people who gave us Puthoff, Targ, and Uri Geller.
  • It's one thing to point out flaws in studies and say why they are not reliable; it is a totally different thing to have the purpose of your organization be to "shame others into better behaviour." Isn't it enough to discredit a study for such and such reasons? Does Stanford need to start discrediting the people, too?

  • by Dcnjoe60 ( 682885 ) on Tuesday March 18, 2014 @04:51PM (#46519435)

    The real problem isn't with shoddy research and researchers; the world has always had those. The real problem is the integrity of the journals that publish research. If they don't practice due diligence and end up publishing faulty studies, then they, the journals, are at fault. The proper solution to faulty journals is to publish journals that have integrity and exercise due diligence. In a publish-or-perish world, not publishing shoddy research corrects the problem. What is needed is not the Stanford science police, but journals, symposiums, etc., with integrity, that only allow the publishing/presentation of research that has been reviewed and vetted.

  • Now where will we find work?

    Now if you'll excuse me, I must get back to cooking up my results... statistical significance my ass!

  • by HighOrbit ( 631451 ) on Tuesday March 18, 2014 @05:53PM (#46519823)
    This link is blatant right-wing propaganda, but funny as hell. Especially the one about fish.

    http://www.consumerfreedom.com... [consumerfreedom.com]

    But on a serious note, today's NY Times had an "according to the latest study" article about a study that claims that all that stuff we've been told for decades about dietary fat being unhealthy is untrue. http://well.blogs.nytimes.com/... [nytimes.com]. Now, since this contradicts several decades of observation, I tend to take "latest study" science with a grain of salt and give more credence to well-verified (i.e. long-term) science.

    The problem with bad science is that it gets reproduced in the popular press (and popular imagination) even if it is later proven false. Case in point: the notorious vaccination-autism fiasco. Another example is the "neutrino faster than light" results released a few years back in Italy. As Mark Twain said, "A lie can travel halfway around the world while the truth is still putting on its shoes."

    You can never fully discount the possibility that the guy releasing the results of the latest study is an attention whore looking to drum up sensationalism to have his 15 minutes of fame. Scientists are human and subject to the same vanities as everyone else.

    Bottom line, never trust preliminary results.
  • http://www.scientificamerican.com/article/the-case-against-copernicus/ [scientificamerican.com] has an interesting article on how the scientific evidence available at the time actually disproved Copernicus. It wasn't until much later that the heliocentric solar system was proven true.

    I wonder whether, if we start trying to police science too closely, the great theories of tomorrow that we don't yet have enough evidence to support might get tossed.
  • Heck, that was covered a long time ago. [jir.com]
