
Why Published Research Findings Are Often False

samzenpus posted more than 3 years ago | from the race-to-publish dept.

Science | 453 comments

Hugh Pickens writes "Jonah Lehrer has an interesting article in the New Yorker reporting that all sorts of well-established, multiply confirmed findings in science have started to look increasingly uncertain because they cannot be replicated. This phenomenon doesn't yet have an official name, but it's occurring across a wide range of fields, from psychology to ecology. In medicine the phenomenon seems extremely widespread, affecting not only anti-psychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants. 'One of my mentors told me that my real mistake was trying to replicate my work,' says researcher Jonathon Schooler. 'He told me doing that was just setting myself up for disappointment.' For many scientists, the effect is especially troubling because of what it exposes about the scientific process. 'If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved?' writes Lehrer. 'Which results should we believe?' Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential because they allowed us to 'put nature to the question,' but it now appears that nature often gives us different answers. According to John Ioannidis, author of 'Why Most Published Research Findings Are False,' the main problem is that too many researchers engage in what he calls 'significance chasing': finding ways to interpret the data so that it passes the statistical test of significance, the ninety-five-per-cent boundary invented by Ronald Fisher. 'The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy.'"


first (0, Redundant)

Anonymous Coward | more than 3 years ago | (#34737662)

fail

Hmmmmm (4, Interesting)

Deekin_Scalesinger (755062) | more than 3 years ago | (#34737664)

Is it possible that there has always been error, but it is just more noticeable now given that reporting is more accurate?

Re:Hmmmmm (4, Insightful)

Joce640k (829181) | more than 3 years ago | (#34737754)

Maybe it's just that the truths being presented in the article are the sort of 'truths' that are hard to measure 100% objectively. Whenever results have a human element there's always the possibility of experimental bias.

Triply so when the phrase "one of the fastest-growing and most profitable pharmaceutical classes" appears in the business plan.

Fortunately for science, the *real* truth usually rears its head in the end.

Re:Hmmmmm (4, Insightful)

gilleain (1310105) | more than 3 years ago | (#34737824)

Maybe it's just that the truths being presented in the article are the sort of 'truths' that are hard to measure 100% objectively.

Alternatively, this article is almost unbearably stupid. It starts off heavily implying that reality itself is somehow changeable - including a surfing professor who says something like "It's like the Universe doesn't like me...maaan".

This is just a journalistic tactic, though. Start with a ridiculous premise to get people reading, then break out what's really happening: poor use of statistics in science. What was really the point of implying that truth can change?

Re:Hmmmmm (2)

BrokenHalo (565198) | more than 3 years ago | (#34738044)

poor use of statistics in science. What was really the point of implying that truth can change?

There is also an implication that some "sciences" are in fact nothing more than pseudosciences, i.e. little removed from voodoo.

Re:Hmmmmm (5, Interesting)

digsbo (1292334) | more than 3 years ago | (#34738090)

Wow. I didn't pick up any of that at all, and I RTFA. It looked to me much more like an acknowledgement of widespread difficulties with randomness, scale, and human fallibility. Exactly the kinds of things that would make someone who's a staunch defender of "science as a means to truth" disregard valuable critical information about it.

Re:Hmmmmm (1)

Yetihehe (971185) | more than 3 years ago | (#34738104)

This is just a journalistic tactic, though. Start with a ridiculous premise to get people reading, then break out what's really happening: poor use of statistics in science. What was really the point of implying that truth can change?

What is the point of answering a question and then asking the question?

Re:Hmmmmm (1)

burnin1965 (535071) | more than 3 years ago | (#34738068)

Maybe it's just that the truths being presented in the article are the sort of 'truths' that are hard to measure 100% objectively.

For the experiments the author chose as the basis of his claim, this may be the case, but the pharmaceutical experiments that resulted in billions in profits should be setting off corporate-fraud alarms rather than leading to the conclusion that science doesn't work and pseudo-science is just as good. The first question I had was: how many schizophrenics are there in the population? Am I really surrounded by crazy people?

The author is begging to drag the United States into the second dark age.

Re:Hmmmmm (4, Insightful)

causality (777677) | more than 3 years ago | (#34738078)

Maybe it's just that the truths being presented in the article are the sort of 'truths' that are hard to measure 100% objectively. Whenever results have a human element there's always the possibility of experimental bias.

Triply so when the phrase "one of the fastest-growing and most profitable pharmaceutical classes" appears in the business plan.

The pharmaceutical industry is easily one of the most corrupt industries known to man. Perhaps some defense contractors are worse, but if so, then just barely. It's got just the right combination of billions of dollars at play, strong dependency on the part of many of its customers, a basis on intellectual property, financial leverage over most of the rest of the medical industry, and a strong disincentive against actually ever curing anything since it cannot make a profit from healthy people. Many of the tests and trials for new drugs are also funded by the very same companies trying to market those drugs.

Fortunately for science, the *real* truth usually rears its head in the end.

Sure, after the person suggesting that all is not as it appears has been laughed at, ridiculed, cursed, and given the old standby of "I doubt you know more than the other thousands of real scientists, mmmkay?" for daring to question the holy sacred authority of the Scientific Establishment: for daring to suggest that it could ever steer us wrong, that it too is unworthy of 100% blind faith, or that it may have the same problems that plague other large institutions. The rest of us, who have been willing to entertain less mainstream, more "fringe" theories that are easy to demagogue by people who have never investigated them, already knew that the whole endeavor is pretty good, but not nearly as good as it is made out to be by people who really want to believe in it.

Already debunked (5, Insightful)

mangu (126918) | more than 3 years ago | (#34737836)

Is it possible that there has always been error, but it is just more noticeable now given that reporting is more accurate?

Precisely. As mentioned in a Scientific American [scientificamerican.com] blog:

"The difficulties Lehrer describes do not signal a failing of the scientific method, but a triumph: our knowledge is so good that new discoveries are increasingly hard to make, indicating that scientists really are converging on some objective truth."

Re:Already debunked (5, Insightful)

toppavak (943659) | more than 3 years ago | (#34738014)

Scale is also an important factor. With better statistical methodology, more rigorous epidemiology and a growing usage of bio-statisticians in the interpretation of results, we're seeing that weak associations that were once considered significant cannot be replicated in larger experiments with more subjects and more quantitative, accurate measurements. Unlike in many, many other fields (particularly theology), when scientific theories are overturned it is a success of the methodology itself.

That's not to say that individual scientists don't sometimes dislike the outcome and ultimately attempt to ignore and/or discredit the counter-evidence, but in the long run this can never work since hard data cannot be hand-waved away forever.
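The shrinking-effect pattern described above has a well-known statistical mechanism, the "winner's curse": conditioning on significance selects exactly the runs where noise inflated the effect, so small "significant" studies overestimate and large replications deflate. A toy simulation (my own made-up numbers, not from the article or this thread) illustrates it:

```python
import random
import statistics
import math

random.seed(1)
TRUE_EFFECT = 0.1  # a small real effect, in standard-deviation units

def observed_effect(n):
    """Mean effect estimated from n noisy observations."""
    return statistics.fmean(random.gauss(TRUE_EFFECT, 1) for _ in range(n))

# Small studies (20 subjects): only the ones clearing a one-sided
# z > 1.96 threshold get "published".
published = [
    eff for eff in (observed_effect(20) for _ in range(2000))
    if eff / (1 / math.sqrt(20)) > 1.96
]

# One large replication with 20,000 subjects.
replication = observed_effect(20_000)

print(f"true effect:               {TRUE_EFFECT:.2f}")
print(f"mean published estimate:   {statistics.fmean(published):.2f}")
print(f"large-sample replication:  {replication:.2f}")
```

With these parameters the published small studies are forced to report effects several times the true one (the significance bar sits at about 0.44), while the big replication lands near 0.1: the "decline effect" without any fraud involved.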

Medical profession is the worst! (2, Funny)

Anonymous Coward | more than 3 years ago | (#34738144)

I remember some time in the '80s, a doctor published some "research" that claimed to show that abused children could be identified by how they reacted to a pencil shoved into their anus. Yes, really! Unfortunately, doctors think they are scientists and for the most part they are not, so they did not properly evaluate the methods used for this "research". The real shame of this was that some doctors actually used this "method" to identify supposedly abused children, with all the attendant hurt and distress that these false accusations caused.

Re:Hmmmmm (1)

poetmatt (793785) | more than 3 years ago | (#34738146)

my take is that some scientists back bad theories and then do their best to prevent them from being refuted. Basically bad scientific work, as gilleain has indicated.

Yes it does. (2, Insightful)

Dr_Ken (1163339) | more than 3 years ago | (#34737670)

The article says "this phenomenon doesn't yet have an official name," but it actually does. It's called "lying".

Re:Yes it does. (2)

oldhack (1037484) | more than 3 years ago | (#34737704)

Better yet, "statistical evidence". Unreproducible statistical evidence is an oxymoron.

Re:Yes it does. (3, Interesting)

Rockoon (1252108) | more than 3 years ago | (#34738026)

There is a lot of science where new data is not generated at a rate where true reproducibility is an option.

For example, anything to do with the general health of a person can only really be measured over long time scales (decades), as well as measurements of the climate and things like that.

In those cases, 'reproduction' means taking the same data, sifting it in possibly the same way (but maybe not), and getting the same or similar result.

Now take this fact in the context of data dredging.

Data dredging does not have to be intentional (i.e., done with an intent to defraud), although it certainly can be.

If you take 1000 scientists and give them all the same data, they will probably look at that data in several thousand ways. If you are dealing with 95% intervals, and the data is looked at in 2000 ways, then about 100 of those ways will present something 'significant' by simple random chance.

The same phenomenon exists in that whole bullshit "Equidistant Letter Spacing" Bible-Code crap, but is much easier to dismiss because you have to believe something extremely unlikely (God exists, and orchestrated the translation of the Bible into English so that it would have hidden codes).

When you really get into dismissing the Bible Code in a mathematical manner, you end up realizing that in any data set there exist many things that are statistically significant and yet complete bullshit.
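The arithmetic in the comment above (2,000 looks at null data with a 95% threshold yielding roughly 100 spurious "findings") is easy to check with a quick simulation. The dataset and the "ways of looking" below are hypothetical stand-ins, a minimal sketch rather than anyone's actual analysis:

```python
import random
import statistics
import math

random.seed(7)

# One shared dataset with NO real effects in it.
data = [random.gauss(0, 1) for _ in range(400)]

def one_way_of_looking(data):
    """A random subgroup split of the same data, tested for a
    difference in means with a z approximation (sd known to be 1)."""
    half = set(random.sample(range(len(data)), len(data) // 2))
    in_group = [x for i, x in enumerate(data) if i in half]
    out_group = [x for i, x in enumerate(data) if i not in half]
    diff = statistics.fmean(in_group) - statistics.fmean(out_group)
    se = math.sqrt(1 / len(in_group) + 1 / len(out_group))
    return abs(diff / se) > 1.96

findings = sum(one_way_of_looking(data) for _ in range(2000))
print(f"{findings} 'significant' findings from 2000 looks at null data")
```

The exact count wanders around 100 from seed to seed, which is the point: purely random data reliably hands back about 5% "significant" results when you look at it enough different ways.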

Re:Yes it does. (4, Insightful)

shadowofwind (1209890) | more than 3 years ago | (#34737772)

I agree. Though it's not lying under the Clintonesque definition of lying that most people use. It's more lying my omission, distorting the meaning of the results by not putting them in their complete context. At least that's how it is with the papers I've read and known enough about to have an educated opinion on. Although the misrepresentation is usually at least partially intentional, I don't think it's all intentional.

Re:Yes it does. (1)

shadowofwind (1209890) | more than 3 years ago | (#34737806)

my omission

^my^by

Freudian substitution there. I'll have to look at that :)

Re:Yes it does. (-1, Offtopic)

burnin1965 (535071) | more than 3 years ago | (#34738128)

Is it a Clintonesque 'I did not have sex with that woman' kind of lying, or is it more of a Bush-Junioresque 'There will be a mushroom cloud over U.S. cities if we don't spend trillions on military corporations to bomb the shit out of the people of Iraq first'?

The pharmaceutical example in the article noted some nice corporate profits that resulted from the "science".

Non reproducible truth. (0)

Anonymous Coward | more than 3 years ago | (#34737848)

The article says "this phenomenon doesn't yet have an official name," [yet] but it actually does. It's called "lying".

For today's PC Nazis, I prefer "non reproducible truth".

:-P

Read the Fucking Article, Douchebag (3, Informative)

bit trollent (824666) | more than 3 years ago | (#34737910)

If you had bothered to read the fucking article instead of jumping to some half assed conclusion you would see that the article has nothing to do with lying.

It's not "the oil companies have paid scientists to lie about science"

It's "I'm fascinated that trends I detected early in my research seem to fall apart as I continue to investigate"

Anyway.. thanks for lowering the level of discussion on /. even further, douche.

Not that simple. (5, Informative)

fyngyrz (762201) | more than 3 years ago | (#34737916)

It's called "lying".

That's not a given. Particularly in the soft sciences - psychology, for instance - it is extremely difficult to control for all factors (I'm more inclined to say nearly impossible) and so replication of results can be subsumed by other effects, or even simply not work at all. You know that whole generation gap thing? That's a good example of groups of people who are different enough that the reactions they will have to certain subject matter can be polar opposites. So something that was "definitively determined" in 1960 may be statistically irrelevant among the current generation.

That's just one example of how squishy this all is. Without having to bring lying into it at all. And then, there will be liars; and there will be people who draw conclusions without scientific rigor at all, simply because it's just too difficult, expensive or time-consuming to attempt to confirm the ideas at hand. And there is the outlier personality; the one who accounts for those other few percent -- all the declarations of "this is how it is" are false for them right out of the gate.

Hard sciences simply lend themselves a lot better to repeatability. Where I think we go wrong is assigning the same certainties to the claims of the soft scientists. I have personally seen psychiatrists, best intent not in doubt, completely err in characterizing a situation to the great detriment of the people involved, because the court took the psychiatrist's word as gospel truth.

All science is an exercise in metaphor, but soft science is an exercise of metaphor that is almost always far too flexible. One place you can see this happening is the trendy / cyclic adherence to Freud, Jung, Maslow, Rogers and so forth... the "correct" way to raise babies... Ferberizing, etc. This stuff isn't generally lies at all, but it also generally isn't "right." Good intentions do not automatically make good science.

Serious medicine is another good example. Something that might work very well for you might not work at all for me; get the wrong group of test subjects, and your results will skew or worse. This is an area that I think is fair to call a hard science, but where we just don't know enough about the systems involved. Generally speaking, I don't think our oncologist lies to us; further, I think he's pretty well aware of the limitations of his practice and the state of knowledge that informs it; but they just don't know enough. To which I hopefully add, "yet."

On a personal level - since that's all I can really affect - I treat soft science about the same way I do astrology. If you believe it, you'll probably attempt to modify your behavior because of the predictions, which in turn may, or may not, affect your actual outcome. If you don't, it's either irrelevant or too uncertain to trust anyway. So it's low confidence all the way.

I do, however, still place very high confidence in Boyle's law [wikipedia.org] for gases. Hard science works very well. :)

Re:Not that simple. (4, Insightful)

shar303 (944843) | more than 3 years ago | (#34738186)

It looks like a lot of the studies that suffer from this effect are concerned with people and their behavior. Personally I don't think it's a matter of whether the science is hard or soft, but just that the domain has some issues that are not so important in other fields, e.g. the structure of a galaxy or the behavior of a gas with respect to pressure.

The main problem is that when you're looking at anything that has something to do with humans, the tool with which you carry out the investigation is in part the very thing you are investigating (the mind). This increases the potential for bias no end, and in the opinion of some renders the whole exercise a completely futile and confounded endeavor. But I would tend to believe that this problem is the exact reason why one should study the mind, exactly because it is the lens through which we view the universe.

In many respects it's a flawed tool for research. Not only filters but active perceptual mechanisms are at work, and function in such a way as to ensure that people seem to create a large part of the reality that they live in. This shouldn't stop scientists from investigating imho, but means that in looking at an area such as the mind, humility is indeed appropriate.

Soft science, as you call it, should not be conflated with astrology. Like many other practices, astrology is closer to a very ancient and wonderful art, that of separating people from their money, than it is to scientific investigation. But then perhaps I would say that, being a Virgo.

Re:Yes it does. (5, Insightful)

IICV (652597) | more than 3 years ago | (#34737944)

It's only lying if you do it intentionally. If ten labs independently and without knowing of each other perform essentially the same experiment, and one of them has a statistically significant result, is that lying? The other nine won't get published because, unfortunately, people only rarely (and for large or controversial experiments [nytimes.com]) publish negative results, but the one anomalous study will.

The vast majority of science is performed with all the good will in the world, but it's simply impossible for scientists to not be human. That's why we do replicate experiments - hell, my wife just published a paper where she tried to replicate someone else's results and got entirely different ones, and analyzed why the first guy got it wrong.
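The ten-labs scenario above can be put in numbers: under a true null, the chance that at least one of ten independent labs clears p < 0.05 is 1 - 0.95^10, about 40%. A rough sketch (a hypothetical experiment with my own parameters, not anyone's real trial) confirms the arithmetic:

```python
import random
import statistics
import math

random.seed(3)

def lab_finds_effect(n=30):
    """One lab's two-sided z-test on a treatment that truly does nothing."""
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = statistics.fmean(sample) * math.sqrt(n)   # known sd = 1
    return abs(z) > 1.96

trials = 5000
fluke = sum(any(lab_finds_effect() for _ in range(10)) for _ in range(trials))
print(f"simulated P(at least one of 10 labs 'succeeds'): {fluke / trials:.2f}")
print(f"analytic  1 - 0.95**10:                          {1 - 0.95**10:.2f}")
```

If only the "successful" lab writes up its result, the literature records a fluke about two times in five, with nobody lying at any point.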

Re:Yes it does. (3, Funny)

Frosty Piss (770223) | more than 3 years ago | (#34738180)

It's only lying if you do it intentionally.

Or, as George Costanza says, "It's not a lie if YOU believe it".

It's not a phenomenon, it's more like a syndrome. (0)

Anonymous Coward | more than 3 years ago | (#34738056)

Actually, it's a whole conflated mish-mash of things, including as you say straight-out lying, but also cherry-picking, cognitive bias, statistical naivety and a whole bunch more things.

And really the headline should be "Checking for reproducibility really does identify flawed results: Science works".

Re:Yes it does. (1)

Anonymous Coward | more than 3 years ago | (#34738060)

Actually it's possible to select two different simple random samples from the same population, measure two different quantities, and have them be in statistical agreement because they are both within the expected spread of sample quantities around the population quantities.

What we're seeing isn't bad science, it's just a failure to apply statistics properly to demonstrate agreement with the earlier results.

For example, suppose you know the population mean height. You then choose a simple random sample, measure its height, and come up with a number 1.5 standard deviations away from the population mean. You wouldn't call this sample inaccurate or a falsehood. It was what you measured.

Doing a statistical significance test of your sample versus the population mean would reveal that, even though the sample mean is not the population mean, it still isn't a statistically significant difference. The New Yorker article is crap because it completely ignores the question of sample statistics and statistical significance tests. It's entirely possible that the researchers they interviewed are also ignoring these significance tests.

Just because a measured quantity is 30% lower than the previously measured quantity does not mean that there has been any change in the population or that your previous measurement was wrong. The only statistical technique with any illuminating power is the significance test, because it's the only acceptable way to compare data across two different samples. A good scientist actually EXPECTS a spread in statistical means from the same population. This is normal because even a simple random sample has clustering of selected attributes. The only way you ever get a consistent mean is when you measure the entire population.

To conclude: Journalists are idiots.

Re:Yes it does. (4, Insightful)

patjhal (1423249) | more than 3 years ago | (#34738066)

I agree. I was a science major and saw quite a willingness to fudge/manipulate data, and I believe it has worked its way into general research. During a brief PhD stint I redid some experiments and got the opposite of what other students had found. Mine showed some significance where theirs had not. Funny thing was, my data was ugly while theirs was pretty. This was from an experiment where organisms were growing in media and had to be counted via microscope and measured with a spectrograph at set time periods. My guess is their data was pretty because they fudged it by claiming they took the samples at exactly a particular time ratio, whereas I recorded the actual elapsed time (the procedure was complicated, and there was variability in how long it took me to complete the tasks, sometimes running past the next check point). I also guess that the student wanted pretty-looking data because he thought that would look better to his boss (the professor who ran the lab). Even if the scientists are not doing this from pressure to go higher, their underlings might be doing it to be "impressive". Part of the problem is that science is no longer something people do because they love it. It is too commoditized and has become just a job at the low end and a vicious battle for survival at the high end.

Huh? (5, Insightful)

SpinyNorman (33776) | more than 3 years ago | (#34738110)

Did you even read the article?

This is basically about poorly designed clinical drug trials without sufficient controls. Sloppy work, even if it seemed rigorous enough at the time.

The sensationalistic "scientific method in question" stuff is pure BS, but after all this is New Yorker magazine we're talking about, so one wouldn't expect too much scientific literacy. It was the scientific method of "predict and test" that caught these erroneous results, so the method itself is fine. The "scientist" who designed a sloppy experiment is to blame, not the method.

However, I'm not sure that psychiatric drug trials even deserve to be called science in the first place. The principle of GIGO (Garbage In - Garbage Out) applies. This is touchy-feely soft science at best. How do you feel today on a scale of 1-10? Do the green pills make you happy?

It's simple. (5, Interesting)

Lord Kano (13027) | more than 3 years ago | (#34737672)

Even in academia, there's an establishment and people who are powerful within that establishment are rarely challenged. A new upstart in the field will be summarily ignored and dismissed for having the arrogance to challenge someone who's widely respected. Even if that respected figure is incorrect, many people will just go along to keep their careers moving forward.

LK

Re:It's simple. (1)

instagib (879544) | more than 3 years ago | (#34737818)

I'd replace the first word "Even" with "Especially" in parent's very true post.

Re:It's simple. (1, Insightful)

shadowofwind (1209890) | more than 3 years ago | (#34737882)

Yes, but of course it's only a matter of time until global warming deniers, creationists, geocentrists, and/or hollow earth believers hijack this discussion and use it as grounds to dismiss every objectionable fact that's ever been established.

Re:It's simple. (2)

drsmack1 (698392) | more than 3 years ago | (#34737978)

Global catastrophe science seems to be the only kind that cannot be questioned. That is not how science works.

Questioning "established science" is the cornerstone of the leap forward.

Equating the current findings of the extremely immature science of global climate prediction with well established facts like gravity and a round Earth is just plain stupid.

Re:It's simple. (1)

Lord Kano (13027) | more than 3 years ago | (#34738118)

Yes, but of course its only a matter of time until global warming deniers, creationists, geocentrists, and/or hollow earth believers hijack this discussion and use it as grounds to dismiss every objectional fact that's ever been established.

If you cling to questionable science, what makes you any different? A secular faith is no less of a faith than any other and no more of a science than a field where debate is encouraged and expected.

LK

Re:It's simple. (2)

uassholes (1179143) | more than 3 years ago | (#34738176)

I'm game.

The article is about selective reporting of results, publication bias, and "collective illusion nurtured by strong a-priori beliefs".

Doesn't that fit the blind acceptance of the CO2 hypothesis despite evidence to the contrary, exactly?

Re:It's simple. (0)

Anonymous Coward | more than 3 years ago | (#34737870)

And what's more, you can't publish an experiment that has already been published, so there is no motivation to actually verify experiments that have already been done by well established researchers.

Re:It's simple. (1)

BrokenHalo (565198) | more than 3 years ago | (#34738142)

so there is no motivation to actually verify experiments that have already been done by well established researchers.

I have seen any number of cases (in molecular biology) where researchers have done just that, and have come up with equivocal or contradictory conclusions. It's no biggie unless you're on an ego trip or have nasty funding issues attached to your research program.

Re:It's simple. (4, Informative)

Anonymous Coward | more than 3 years ago | (#34737906)

Having worked in multiple academic establishments, I have never seen that. I have seen people argue their point, and respected figures get their way otherwise (offices, positions, work hours, vacation). But when it came to papers, no one was sitting around rejecting papers because they conflicted with a "respected figure." Oftentimes, staff would have disagreements that would sometimes end as an agreement to disagree because of lack of data. Is this your personal experience? Because while I don't disagree that this may occur in some places, I just haven't seen it. But I want to be sure you have, and are not just spreading an urban legend.

Re:It's simple. (2)

Lord Kano (13027) | more than 3 years ago | (#34737948)

Because while I don't disagree that this may occur in some places, I just haven't seen it. But I want to be sure you have, and are not just spreading an urban legend.

That's a fair question. I have not experienced it first hand, but I have seen it as an outside observer.

LK

Science? (-1)

Anonymous Coward | more than 3 years ago | (#34737674)

Perhaps psychology, ecology and medicine are not "science"? Medicine may be an art, psychology just pseudo-science...

Re:Science? (2)

easterberry (1826250) | more than 3 years ago | (#34737758)

I was actually about to feed the troll. I was 2 sentences in before going "oh... right."

Re:Science? (1, Interesting)

hedwards (940851) | more than 3 years ago | (#34737800)

I'm not sure about ecology, but psychology and medicine are definitely not science, nor have they ever been science.

Probably the best indictment of psychology as a pseudo-science I've ever seen is The Trauma Myth: The Truth About the Sexual Abuse of Children--and Its Aftermath by Susan Clancy [perseusbooks.com]

She herself is basically a scientist: she engages in testing hypotheses in order to determine their validity and has been willing to set aside ones that were demonstrated to be false in favor of better ones. But, unfortunately, most in her field are charlatans.

Re:Science? (1)

Beetle B. (516615) | more than 3 years ago | (#34738098)

Taking an example from a discipline and condemning the whole discipline for it is not intelligent. I mean, I could take some aspects of evolution and point at how biologists study them, and claim it is science - when compared to what most other disciplines do, the rigor is laughable.

Basically, there are two camps in psychology: Those who rigorously follow the scientific method, and those who loosely follow it. Declaring a whole discipline as not science would be like declaring biology not to be science.

The scientist's favorite song (1, Insightful)

Anonymous Coward | more than 3 years ago | (#34737678)

The scientist's favorite song:

The best things in life are free
But you can keep 'em for the birds and bees
Now give me money (that's what I want)
That's what I want (that's what I want)
That's what I want (that's what I want), yeah
That's what I want

Re: The scientist favorite song (1)

Sponge Bath (413667) | more than 3 years ago | (#34737722)

Now give me money (that's what I want)

Pocket protectors are not free.

Ruh roh. (0)

jra (5600) | more than 3 years ago | (#34737682)

Given the political environment of the last presidential administration, and what it did to science, this is much worse than it might initially seem.

Re:Ruh roh. (2)

Enderandrew (866215) | more than 3 years ago | (#34737770)

This is a bit of a fallacy. Bush increased stem cell research funding, fuel cell research funding, etc. He was in office for 8 years, and I believe 2001 was the first time he cut science spending. That was part of a larger goal to cut spending across the board.

How did he respond in 2002? He asked Congress to DOUBLE science spending.

http://www.scienceprogress.org/2008/01/bush-asks-congress-to-double-science-spending/ [scienceprogress.org]

My wife showed me a great graph during the last election that tracked science spending from administration to administration and showed that historically Republicans have spent more on science than Democrats.

http://www.youtube.com/watch?v=x7Q8UvJ1wvk [youtube.com]

Re:Ruh roh. (1)

HungryHobo (1314109) | more than 3 years ago | (#34737956)

Out of interest, what was the breakdown of that increase?

I was aware he made heavy cuts to environmental research, but what areas benefited the most?
Weapons research? Social sciences? Medical research? Etc., etc.

Re:Ruh roh. (0)

rubycodez (864176) | more than 3 years ago | (#34737782)

The Obama administration is adept at creating pseudo-science to justify progress-crippling agendas.

News Flash: Scientists Human Too, Study Finds (4, Insightful)

girlintraining (1395911) | more than 3 years ago | (#34737688)

After years of speculation, a study has revealed that scientists are, in fact, human. The poor wages, long hours, and relative obscurity that most scientists dwell in have apparently caused widespread errors, making them almost pathetically human and just like every other working schmuck out there. Every major news organization south of the Mason-Dixon line in the United States and many religious organizations took this to mean that faith is better, as it is better suited to slavery, long hours, and no recognition than science, a relatively new kind of faith that has only recently received any recognition. In other news, the TSA banned popcorn from flights on fears that the strong smell could cause rioting from hungry and naked passengers who cannot be fed, go to the bathroom, or leave their seats for the duration of the flight, for safety reasons....

Re:News Flash: Scientists Human Too, Study Finds (-1)

Anonymous Coward | more than 3 years ago | (#34737808)

Widespread errors? Puhleeze. That means that the issues happened by mistake.

In reality, the researchers fish for a conclusion that matches their own predisposition or, more often, presents an opportunity for additional grant money.

For example, look at any publication involving Mann, Hansen, Schmidt, etc.

When they statistically waterboard their data, it will say what they want it to say; a condition unrelated to reality.

Re:News Flash: Scientists Human Too, Study Finds (5, Interesting)

onionman (975962) | more than 3 years ago | (#34737816)

After years of speculation, a study has revealed that scientists are, in fact, human. The poor wages, long hours, and relative obscurity that most scientists dwell in have apparently caused widespread errors, making them almost pathetically human and just like every other working schmuck out there...

I'll add another cause to the list. The "publish or perish" mentality encourages researchers to rush work to print often before they are sure of it themselves. The annual review and tenure process at most mid-level research universities rewards a long list of marginal publications much more than a single good publication.

Personally, I feel that many researchers publish far too many papers, each one an epsilon improvement on the previous. I would rather they wait and produce one good, well-written paper than a string of ten sequential papers. In fact, I find that the sequential approach yields nearly unreadable papers after the second or third one, because they assume everything that is in the previous papers. Of course, I was guilty of that myself, because if you wait to produce a single good paper, then you'll lose your job or get denied tenure or promotion. So, I'm just complaining without being able to offer a good solution.

Re:News Flash: Scientists Human Too, Study Finds (1)

hedwards (940851) | more than 3 years ago | (#34737820)

It's not just that, if you're doing research which isn't convenient, you can very easily find yourself in a position where there's no funding to cover further research. If you manage to get a Nobel prize in your field, that helps a lot, but just look what the right did to NASA because of those experiments and observations related to climate change.

Re:News Flash: Scientists Human Too, Study Finds (0)

Anonymous Coward | more than 3 years ago | (#34737982)

The right? If you really want to see some fancy tap dancing, try bringing this up to the left. [wikipedia.org]

See how many grants you get pursuing that line of inquiry.

race to the bottom (4, Interesting)

toomanyhandles (809578) | more than 3 years ago | (#34737726)

I see this as one more planted article in the mainstream press: "Science is there to mislead you, listen to fake news instead". The rising tide against education and critical thinking in the USA is reminiscent of the Cultural Revolution in China. It is even more ironic that the argument "against" metrics that usefully determine validity is couched in a pseudo-analytical format itself. At this point in the USA, most folks reading (even) the New Yorker have no idea what a p-value is or why these things matter, and they will just recall the headline "science is wrong". And then they wonder in Detroit why they can't make $100k a year anymore pushing the button on a robot that was designed overseas by someone else; you know, overseas, where engineering, science, etc. are still held in high regard.

Re:race to the bottom (1)

hedwards (940851) | more than 3 years ago | (#34737832)

Sure they do, the p-value is what determines whether or not to slap him with a paternity suit, duh. Haven't you ever had sex ed?

Re:race to the bottom (2)

demonlapin (527802) | more than 3 years ago | (#34738178)

Don't confuse anti-intellectualism with opposition to learning; Americans still highly value practical knowledge. However, the US has always had a strong anti-intellectual streak. This is nothing new. More importantly, it's a valuable cultural trait. Resistance to intellectual ideals is not always bad.

In 250 years, the US has had two major wars on its territory. Both led to significant increases in liberty. By contrast, communism turned the 20th century into a worldwide bloodbath. The ideas pouring out of the academy in the 50s and 60s turned decolonization into a nightmare that dragged hundreds of millions of people down into the abyss, where many of them languish to this day.

Most people aren't smart enough to really understand statistics. As a default position for them, "statistics are usually crap" is a much better standard than "believe the latest academic fad".

Agenda-driven research & "peak-school" pressure (-1)

Anonymous Coward | more than 3 years ago | (#34737728)

So much more published research now is agenda-driven.
Drug companies, tobacco companies, and the climate-change industry are the most obvious culprits.
Beyond that though, we've reached the "peak-school" point in US academia.
Pressure on university researchers can only get worse as the academic bubble deflates.

Quantity, not quality, is often prioritised. (5, Insightful)

water-vole (1183257) | more than 3 years ago | (#34737730)

I'm a scientist myself. It's quite clear from where I'm standing that to get good jobs, research grants, etc one needs plenty of published articles. Whether the conclusions of those are true or false is not something that hiring committees will delve into too much. If you are young and have a family to support, it can be tempting to take shortcuts.

Re:Quantity, not quality, is often prioritised. (0)

Anonymous Coward | more than 3 years ago | (#34738034)

If you have a family to support, don't become a scientist.

Re:Quantity, not quality, is often prioritised. (3, Insightful)

dachshund (300733) | more than 3 years ago | (#34738036)

Whether the conclusions of those are true or false is not something that hiring committees will delve into too much. If you are young and have a family to support, it can be tempting to take shortcuts.

Yes, the incentive to publish, publish, publish leads to all kinds of problems. But more importantly, the incentives for detailed peer-reviewing and repeating others' work just aren't there. Peer-reviewing in most cases is just a drag, and while it's somewhat important for your career, nobody's going to give you Tenure on the basis of your excellent journal reviews.

The incentives for repeating experiments are even worse. How often do top conferences/journals publish a result like "Researchers repeat non-controversial experiment, find exactly the same results"?

Re:Quantity, not quality, is often prioritised. (1)

asnelt (1837090) | more than 3 years ago | (#34738092)

Yes, it is tempting to take shortcuts. But I think as a scientist it is your obligation to do good research and to be honest about your results. Always remember that you get to do the interesting stuff. I'm also a scientist and my track record so far is ok but not overwhelming. Therefore, my career is uncertain and I may be forced to leave academia in a couple of years. Of course, I could have had more publications if I had taken those shortcuts. Granted, I don't have a family to support but I have no sympathy for people who fake their results for their personal benefit.

Re:Quantity, not quality, is often prioritised. (5, Interesting)

Moof123 (1292134) | more than 3 years ago | (#34738114)

Agreed. Way too many papers from academia are ZERO value added. Most are a response to "publish or perish" realities.

Cases in point: One of my less favorite profs published approximately 20 papers on a single project, mostly written by his grad students. Most are redundant papers taking the most recent few months data and producing fresh statistical numbers. He became department head, then dean of engineering.

As a design engineer I find it maddening that 95% of the journals in the areas I specialize in are:

1. Impossible to read (academic-style writing and non-standard vocabulary).

2. Redundant. Substrate integrated waveguide papers for example are all rehashes of original waveguide work done in the 50's and 60's, but of generally lower value. Sadly the academics have botched a lot of it, and for example have "invented" "novel" waveguide to microstrip transitions that stink compared to well known techniques from 60's papers.

3. Useless. Most, once I decipher them, end up describing a widget that sucks at the intended purpose. New and "novel" filters should actually filter, and be in some way as good or better than the current state of the art, or should not be bothered to be published.

4. Incomplete. Many interesting papers report on results, but don't describe the techniques and methods used. So while I can see that University of Dillweed has something of interest, I can't actually utilize it.

So as a result, when I try to use the vast number of published papers and journals in my field, and in niches of my field in which I am darn near an expert, I cannot separate the wheat from the chaff. Searches yield time-wasting useless results, many of which require laborious deciphering before I can figure out that they are stupid or incomplete. Maybe only 10% of the time does a day-long literature search yield something of utility. Ugh.

Re:Quantity, not quality, is often prioritised. (2)

SoftwareArtist (1472499) | more than 3 years ago | (#34738174)

That doesn't match my experience. The currency by which scientists are measured is not publications, but citations of your publications. You can publish a hundred worthless articles in obscure journals that no one ever cites, and you'll get very little credit for them. A handful of good quality, widely cited articles will do more to advance your career.

Not so sure (1)

hardtofindanick (1105361) | more than 3 years ago | (#34737736)

The article falsely gives a sense of "increasing junk"

- Since there is tangible progress in the field of medicine (don't know about others), we must be doing something right.
- Clearly the total scientific output is increasing and the junk is bound to increase. What matters is percentage, not the absolute count.
- The New Yorker article cites a few hand-picked cases; is that all this 5-page article is based on?

Re:Not so sure (1)

hedwards (940851) | more than 3 years ago | (#34737856)

Don't forget that these days there's a lot more scrutiny than there was in the past. There are more labs and research institutions worldwide, and more people doing science than there used to be.

And don't forget that while in the past an academic scandal involving falsified results probably wouldn't get much beyond the academic community, these days with all the people in opposition to science, it ends up all over the media, justified or not.

Taken apart by a scientist (4, Informative)

IICV (652597) | more than 3 years ago | (#34737738)

This article has already been taken apart by P.Z. Myers in a blog post [scienceblogs.com] on Pharyngula. Here's his conclusion:

But those last few sentences, where Lehrer dribbles off into a delusion of subjectivity and essentially throws up his hands and surrenders himself to ignorance, is unjustifiable. Early in any scientific career, one should learn a couple of general rules: science is never about absolute certainty, and the absence of black & white binary results is not evidence against it; you don't get to choose what you want to believe, but instead only accept provisionally a result; and when you've got a positive result, the proper response is not to claim that you've proved something, but instead to focus more tightly, scrutinize more strictly, and test, test, test ever more deeply. It's unfortunate that Lehrer has tainted his story with all that unwarranted breast-beating, because as a summary of why science can be hard to do, and of the institutional flaws in doing science, it's quite good.

Basically, it's not like anyone's surprised at this.

Re:Taken apart by a scientist (2)

damburger (981828) | more than 3 years ago | (#34737828)

I had already read the article having found it through PZ Myers. A lot of people like to rip on the scientific method, but few of them consider how slight the chance is that they or anyone else can successfully second-guess it.

Re:Taken apart by a scientist (1)

Frosty Piss (770223) | more than 3 years ago | (#34737914)

Yes, elitists like P.Z. Myers don't like to be challenged. SURPRISE!

Re:Taken apart by a scientist (0)

Third Position (1725934) | more than 3 years ago | (#34738158)

P.Z. Myers might be more convincing if more scientists actually acknowledged their results are only ever provisional. Instead, too many of them demand the presumption of infallibility with the arrogance of a medieval pope. "Provisional" is certainly not a word given any emphasis in any IPCC report. IIRC, what we heard for the longest time was that "the science is settled!".

Now, when it turns out that the emperor is wearing no clothes, suddenly we're expected to overlook exorbitant claims because "science is never about absolute certainty".

If people have a lot less faith in science than they used to, it might be in part because too many scientists want to have their cake and eat it too.

Interesting reply to excellent article (4, Insightful)

Pecisk (688001) | more than 3 years ago | (#34737750)

The New Yorker article is well written and informative. It's clearly not assuming that there is something wrong with the scientific method, but just asks: could there be? There is an excellent reply by George Musser at "Scientific American": http://cot.ag/hWqKo2 [cot.ag]

This is what I call interesting and engaging public discussion and journalism.

Re:Interesting reply to excellent article (1)

0123456 (636235) | more than 3 years ago | (#34737988)

The New Yorker article is well written and informative. It's clearly not assuming that there is something wrong with the scientific method, but just asks: could there be?

There's nothing wrong with the scientific method. The problem is that most modern 'science' has nothing to do with the scientific method.

It's worth noting that while many people know Eisenhower warned of the perils of the growing military-industrial complex in his farewell address, they're not aware that he also warned of the perils of the government-scientific complex where almost all science research funding was coming from the government.

Re:Interesting reply to excelent article (1)

UnknowingFool (672806) | more than 3 years ago | (#34738124)

The summary leaves a little to be desired. The article highlights one aspect of drugs (intended effects on subjects are not always 100%, forever) but the Slashdot summary extends it to all of science to cast doubt on science as a whole. In medicine no drug is 100% effective, for a variety of reasons. Anyone who has dealt with terminal patients in pain realizes that sometimes the most advanced painkillers can do very little at that stage, or that the dosage required might cause more serious harm than the suffering it alleviates. Pharmacology is also a complicated science because it deals with so many factors. It's not like basic chemistry, where everything can be summarized in short, simple equations.

Well this is easy (0)

Uttles (324447) | more than 3 years ago | (#34737760)

Most science is funded by government, and you don't get more funding if your data shows that "everything is OK, no further research needed."

So of course the results aren't reproducible, they are fiction in the first place!

Re:Well this is easy (1)

sstamps (39313) | more than 3 years ago | (#34737984)

This is NOT how government-funded science operates, at all.

Many scientists whose research is funded by the government are in no danger of losing funding if any particular piece of research they are working on turns up null results. Yes, specific project funding may end, assuming that's the kind of funding they're working from (hint: most government funding is broad-based), but they're not in any danger of being out on the streets if they don't show positive results.

This effect is less apparent in the physical sciences (yes, there are unexplained anomalies in physical sciences, too, but they are much fewer and farther between than the philosophical sciences). Physical experiments are much easier to replicate and are replicated repeatedly, especially in the classroom.

The myth of scientific progress? (1)

DoofusOfDeath (636671) | more than 3 years ago | (#34737768)

Some of the things I've taken comfort in as I age are:

  • With all the apparent medical research findings cranked out each year, maybe some of the things that hit our parents (arthritis, heart disease, cognitive decline, lower energy, cancer, etc.) will be eased or cured for our generation, or at worst for our children's generation.
  • Our children have a good shot at being better off than we are.

But if the fundamental indicator of that progress, published scientific results, contains a potentially large and unknown degree of misinformation, then my hopes are called into question.

I mean, obviously some progress is being made. We see that in the life expectancy statistics, in cancer survival rates, etc. But how much potential are we missing due to bogus publications?

Special Pleading. (1)

TB (7206) | more than 3 years ago | (#34737776)

This isn't about issues with the scientific method. Intellectually honest scientists don't care how studies/experiments turn out. They have no vested interest, or sought outcome. However, this article is not about honest scientists at all; it's about well-known frauds who found that their results didn't match their beliefs, and so made up an excuse for why.

Re:Special Pleading. (0)

Anonymous Coward | more than 3 years ago | (#34737918)

The root cause is performance based funding then?

Re:Special Pleading. (0)

Anonymous Coward | more than 3 years ago | (#34737940)

The main issue the article seemed to show was this:

“Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”

I assumed scientists were already doing this. If you run an experiment you need to have a predetermined definition of a successful experiment. You can't try interpreting the numbers different ways; if you find something interesting that way you need to run a separate experiment with new success conditions. They mention that the results they're getting appear to be statistically significant until they are replicated. If you run one experiment and check the numbers 20 different ways you shouldn't be surprised that one of the ways you used gave results that should only occur 5% of the time due to chance.

So newsflash, scientists misusing statistics. Again.
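The arithmetic behind that complaint is easy to check. Here is a minimal sketch (the simulation, the fixed seed, and the simplifying assumption that the 20 "looks" at the data are independent are all mine, not the commenter's): under the null hypothesis each look is a 5% coin flip, so checking the numbers 20 different ways inflates the chance of at least one spurious "significant" result to roughly 1 - 0.95^20, about 64%.

```python
import math
import random

random.seed(0)

def null_p_value():
    """One two-sided test on pure noise: draw a standard-normal
    statistic and convert it to a p-value via the normal tail."""
    z = random.gauss(0.0, 1.0)
    return math.erfc(abs(z) / math.sqrt(2))

trials, looks = 10_000, 20   # 20 "different ways" of checking the numbers
hits = sum(
    any(null_p_value() < 0.05 for _ in range(looks))
    for _ in range(trials)
)

print(hits / trials)          # simulated family-wise error rate, near 0.64
print(1 - 0.95 ** looks)      # analytic value under independence
```

In practice the 20 ways of slicing one dataset are correlated, so the true inflation is somewhat lower, but the qualitative point stands: unplanned re-analysis manufactures significance.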

Re:Special Pleading. (1)

HungryHobo (1314109) | more than 3 years ago | (#34738042)

No, it's more subtle than that.
The problem is that boring research doesn't get published in really top-notch journals.
So if your trial finishes and you end up with results which don't hit the 95% significance threshold, you don't change the numbers, you just keep changing how you look at them until they look interesting.
Think of shooting at a barn wall with your eyes closed, then walking over and drawing your bullseye around the hole.

You don't care about where you hit, just that people were very impressed at how unlikely you were to hit the bullseye with your eyes closed.

There's a simple solution: the big-name journals just have to set out a framework where, if you want to get published in them, then before any results at all are in you have to provide exact details of your methods and how you're planning to analyze the data.
In effect, forcing people to draw the bullseye first.

Torturing the data (0)

Anonymous Coward | more than 3 years ago | (#34737780)

Too many researchers are eager to torture the data until they confess to something. Sometimes the data just don't have anything conclusive to say.

Which results should we believe? (4, Insightful)

rennerik (1256370) | more than 3 years ago | (#34737784)

> 'Which results should we believe?'

What a ridiculous question. How about the results that are replicated, accurately, time and time again, and not the ones that rest on failed attempts at scientific theory?

Bogus article (5, Interesting)

Anonymous Coward | more than 3 years ago | (#34737794)

That article is as flawed as the supposed errors it reports on. The author just "discovered" that biases exist in human cognition? The "effect" he describes is quite well understood, and is the very reason behind the controls in place in science. This is why we don't, in science, just accept the first study published, why scientific consensus is slow to emerge. Scientists understand that. It's journalists who jump on the first study describing a certain effect, and who lack the honesty to review it in the light of further evidence, not scientists.

Black and White (0)

Anonymous Coward | more than 3 years ago | (#34737842)

All science is either true or false; there is no in between!

Re:Black and White (1)

Somewhat Delirious (938752) | more than 3 years ago | (#34738080)

Hey! That's the Bush doctrine of scientific progress isn't it?

Grant writing cycle (1)

v1x (528604) | more than 3 years ago | (#34737844)

In my field, I have noticed that the grant writing cycle often drives researchers to propose doing things that are inherently difficult to do outside a particular setting (e.g. an academic medical center), but which are helpful in getting funding for research. One of the undesirable consequences of such research is that it is either difficult to reproduce the exact setting (and consequently the results) elsewhere, or it leads to findings that have limited external validity.

Publish or Perish (3, Insightful)

0101000001001010 (466440) | more than 3 years ago | (#34737874)

This is the natural outcome of 'publish or perish.' If keeping your job depends almost solely on getting 'results' published, you will find those results.

Discovery is more prestigious than replication. I don't see how to fix that.

Murphy (0)

Anonymous Coward | more than 3 years ago | (#34737880)

After a piece of research is published, there is plenty of time for someone to test it or to find an experiment that disproves it (that could still happen to relativity). And the same mechanism is at play here as with Murphy's laws, where we only notice when something goes wrong: we don't count the results that have not yet been disproved, only the disproved ones, so "often" could be misleading.

Not Science (2)

burnin1965 (535071) | more than 3 years ago | (#34737924)

According to John Ioannidis, author of Why Most Published Research Findings Are False, the main problem is that too many researchers engage in what he calls 'significance chasing,' or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. 'The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,'

Before you can question the scientific method through experimentation you first must understand and utilize the scientific process. That last quote is a massive clue that the issue is that they are stepping away from the scientific process and trying to force an answer.

I'll go read the article, but before I do I'll just note that, working in semiconductor manufacturing and development, both the scientific process and statistical significance are at the core of resolving problems, maintaining repeatable manufacturing, and developing new processes and products. And in my 20 years of experience the scientific process worked just fine; when results were not reproducible you had more work to do, but you didn't decide that science no longer worked and that the answer simply changed.

I can guarantee that if we throw away the scientific process and no longer rely on peer review and replication, then all those fun little gadgets everyone enjoys these days will become a thing of the past and we'll enter a second dark age.

THAT'S IT! (0)

Foobar of Borg (690622) | more than 3 years ago | (#34737954)

This just proves that "science" is a load of bullshit. The creationists are right.

Now, where did I put my leeches? I feel a cold coming on...

logical contortions in the article (4, Interesting)

bcrowell (177657) | more than 3 years ago | (#34737964)

The article can be viewed on a single page here: http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer?currentPage=all [newyorker.com]

Not surprisingly, most of the posts so far show no signs of having actually RTFA.

Lehrer goes through all kinds of logical contortions to try to explain something that is fundamentally pretty simple: it's publication bias plus regression to the mean. He dismisses publication bias and regression to the mean as being unable to explain cases where the level of statistical significance was extremely high. Let's take the example of a published experiment where the level of statistical significance is so high that the result only had one chance in a million of occurring due to chance. One in a million is 4.9 sigma. There are two problems that you will see in virtually all experiments: (1) people always underestimate their random errors, and (2) people always miss sources of systematic error.

It's *extremely* common for people to underestimate their random errors by a factor of 2. That means that the 4.9-sigma result is only a 2.45-sigma result. But 2.45-sigma results happen about 1.4% of the time. That means that if 71 people do experiments, typically one of them will result in a 2.45-sigma confidence level. That person then underestimates his random errors by a factor of 2, and publishes it as a result that could only have happened one time in a million by pure chance.

Missing a systematic error does pretty much the same thing.
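The parent's numbers are easy to reproduce with nothing but the normal tail function (a sketch; the helper name is mine):

```python
import math

def two_sided_p(sigma):
    """Probability that a standard normal lands more than
    `sigma` standard deviations from zero, in either tail."""
    return math.erfc(sigma / math.sqrt(2))

claimed = two_sided_p(4.9)       # about 1e-6: "one in a million"
honest = two_sided_p(4.9 / 2)    # errors underestimated by 2x -> 2.45 sigma
print(claimed)
print(honest)                    # about 0.014, i.e. 1.4%
print(1 / honest)                # about 70: one experimenter in ~71
```

So halving the claimed sigma turns a one-in-a-million fluke into something one lab in roughly seventy will see by chance alone.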

Lehrer cites an example of an ESP experiment by Rhine in which a certain subject did far better than chance at first, and later didn't do as well. Possibly this is just underestimation of errors, publication bias, and regression to the mean. There is also good evidence that a lot of Rhine's published work on ESP was tainted by his assistants' cheating: http://en.wikipedia.org/wiki/Joseph_Banks_Rhine#Criticism [wikipedia.org]

Re:logical contortions in the article (1)

vlm (69642) | more than 3 years ago | (#34738086)

Let's take the example of a published experiment where the level of statistical significance is so high that the result only had one chance in a million of occurring due to chance. One in a million is 4.9 sigma. There are two problems that you will see in virtually all experiments: (1) people always underestimate their random errors, and (2) people always miss sources of systematic error.

It's *extremely* common for people to underestimate their random errors by a factor of 2. That means the the 4.9-sigma result is only a 2.45-sigma result. But 2.45-sigma results happen about 1.4% of the time. That means that if 71 people do experiments, typically one of them will result in a 2.45-sigma confidence level.

In a publish or perish market, they could have spent more time and money to get a higher statistical result, except for:

1) That is money and lifetime out of their pocket for "nothing"

2) Scaling laws and limited expensive tool sampling time might make it impossible.

I'm sure most people would rather get two research stipends and two published papers in their CV for 2.45 level research than one of each for 4.9 level research.

I did read the article, and it also seemed to discuss "fads" while trying very hard not to describe them as basic human fad behavior. There's nothing wrong with a fraction of the population entertaining themselves by chasing fads, and based on the article's own examples the scientific method seems quite effective at getting rid of them. So what exactly is the problem, other than that the author wants to make money off telling everyone about it?

Heres a standard slashdot car analogy... At some point in my father's youth, tail fins on cars were the big thing, until they got tired and mostly went away. When I was a kid, back when it was an expensive hobby, spending a lot of money on after market car audio was cool, until that got tired and mostly went away.

Taking basic human nature and claiming insight at noticing that scientists behave like humans is pretty much the sociological equivalent of all those moronic business method patents where you take something pedestrian, suffix "... on the internet", file the patent, and wait for the money to roll in.

This is a good thing for all of us (1)

ALeader71 (687693) | more than 3 years ago | (#34737966)

If science has become about "good enough" statistical analysis, then many of our scientific truths are actually scientific "truths."
We have far too much politically motivated scientific "research" and paid-for "reports" and "studies" that amount to Photoshop Science. Shouldn't we demand more from scientists so we can discredit the "scientists"?

Maybe it is not science (4, Interesting)

fermion (181285) | more than 3 years ago | (#34737968)

The scientific method derives from Galileo. He constructed apparatus and made observations that any trained academician and craftsperson of his day could have made, but they did not, because it was not the custom. He built inclined planes and lenses, and recorded what he saw. From this he made models that included predictions. Over time those predictions were verified by others such as Newton, and the models became more mathematically complex. The math used is rigorous.

Now science uses different math, and the results are expressed differently, even probabilistically. But in real science those probabilities are not what most think of as probability. A scanning tunneling microscope, for instance, works by the probability that a particle can jump an air gap. Though this is probabilistic, it is well understood, which allows us to map atoms. There is minimal uncertainty in the outcome of the experiment.

The research talked about in the article may or may not be science. First, anything having to do with human systems is going to be based on statistics. We cannot isolate human systems in a lab. The statistics used are very hard. From discussions with people in the field, I believe they are every bit as hard as the math used for quantum mechanics. The difference is that much of the math is codified in computer applications, and researchers do not necessarily understand everything the computer is doing. In effect, everyone is using the same model to build results, but may not know if the model is valid. It is like using a constant-acceleration model in a case where there is jerk: the results will be not quite right. However, if everyone uses the faulty model, the results will be reproducible.

Second, the article talks about the drug dealers. The drug dealers are like the Catholic Church of Galileo's time: the purpose is not to do science, but to keep power and sell product. Science serves as a process to develop product and minimize legal liability, not to explore the nature of the universe. As such, calling what any pharmaceutical company does the 'scientific method' is at best misguided.

The scientific method works. The scientific method may not be completely applicable to fields of study that try to find things that often, but not always, work in a particular way. The scientific method is also not resistant to group illusion; that was the basis of 'The Structure of Scientific Revolutions'. The issue here, if there is one, is the lack of education about the scientific method, which tends to make people give individual results more credence than is rational, or to treat science as some sort of magic.

5% of all Hypothesis tests give the wrong answer (1)

Richard_J_N (631241) | more than 3 years ago | (#34738030)

Worth pointing out that if you do 20 hypothesis tests in a study, all at the 5% level, your expectation should be that approximately one of your significant results is a false positive.
Also, between subjective errors, regression to the mean, and publication bias, it's not surprising that at least some of these major results turn out to be wrong....
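
The arithmetic behind that expectation, as a minimal sketch (assuming all 20 nulls are true and the tests are independent):

```python
# With 20 independent tests at the 5% level, and every null hypothesis
# actually true, the expected number of false positives is 20 * 0.05 = 1,
# and the chance of at least one false positive is 1 - 0.95**20 (~64%).
alpha = 0.05
n_tests = 20

expected_false_positives = n_tests * alpha
p_at_least_one = 1 - (1 - alpha) ** n_tests

print(expected_false_positives)        # expected count of spurious "hits"
print(round(p_at_least_one, 2))        # probability of at least one
```

So a study reporting one significant result out of twenty comparisons, with no correction, has told you roughly nothing.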

Re:5% of all Hypothesis tests give the wrong answe (0)

Anonymous Coward | more than 3 years ago | (#34738106)

There are ways of controlling for that, such as tightening the per-test thresholds so the whole family of 20 tests has a 5% error rate. Sadly, I believe that many researchers without a solid statistical background do just as you suggest.
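
One standard way to do this is the Bonferroni correction: divide the significance threshold by the number of tests. A minimal sketch, with made-up p-values for illustration:

```python
# Bonferroni correction: to keep the familywise error rate at 5%
# across 20 tests, require each p-value to beat 0.05 / 20 = 0.0025.
alpha = 0.05
n_tests = 20
per_test_threshold = alpha / n_tests

# Illustrative p-values (invented for this example): only those below
# the corrected threshold survive as significant.
p_values = [0.001, 0.03, 0.0004, 0.2]
significant = [p for p in p_values if p < per_test_threshold]
print(significant)
```

Bonferroni is conservative; less blunt procedures (Holm, Benjamini-Hochberg) exist, but even this simple correction would kill many of the marginal results the article discusses.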

Theory and investigation (1)

sgt101 (120604) | more than 3 years ago | (#34738064)

There are two types of valid study: first, an experimental investigation that tries to test the prediction of a theory, to either confirm or disprove it; or second, a study that attempts to quantify an observed phenomenon.

Fishing expeditions (let's see if ESP is real, let's see if random compounds do something for condition X) are not valid, for all the reasons outlined in the article, unless they produce results that are stone cold solid. One example of an investigation of this type that has worked is the mapping of supernovae against redshift that revealed dark energy (or yet another reason to stop believing anything about cosmology, whatever you want to call it). They were mapping the sky and found that all the models of the universe were utter bollocks (note: any theory that fails to account for 90% of the known physical conditions it attempts to derive is utter rubbish, and no amount of bum-squeezing carping whilst pointing to nonsense sums will make up for it. When you can explain mass we will talk; when you can explain non-baryonic matter I will sit and listen).

Interestingly, though, that study, which no one can argue with (because you can look at the sky and see it for yourself with a few thousand dollars of kit), has been dealt with by the cosmology community with a name (dark energy) and a few sheepish looks.

A frank discussion (1)

debrain (29228) | more than 3 years ago | (#34738108)

Just last week I had a frank discussion with a former surgeon general about the predecessor article referenced on Slashdot, from the Atlantic: Lies, Damned Lies, and Medical Science [theatlantic.com].

He noted that there was a lot of truth to the article. We discussed a few bases for this phenomenon, most notably:

1. The money: Researchers need funding, but funding is often effectively conditional on reaching conclusions favourable to the funder (and funders are often either big pharmaceutical companies or big governments);

2. The stigma: A "failure" to "prove" a hypothesis reflects poorly on a researcher, so researchers often choose topics that are:
    (a) irrelevant and so unlikely to ever be tested in the future; or
    (b) trite and so unlikely to fail.

We have created a self-perpetuating system of "research" that produces few useful results in the form of valuable hypotheses being tested. Where potentially valuable hypotheses are being "tested", the methods used are often contrived to reach a specific conclusion, unconcerned with the truth. These facades of research, designed to reach predetermined conclusions, allow companies and governments to market products and policy decisions, respectively, which they consider favourable.

All to say: the finding of a useful truth, although supposedly the object of scientific research, and generally considered to be at least an incidental consequence of our economic system (through, e.g., the market's invisible hand), is in practice in the Western world at best irrelevant and at worst heavily counter-incentivized.

The absence of consequence – the curse of affluence – serves to perpetuate an increasing disconnect between reality and the publications that peddle the results of research.

Publication Bias (1)

dcollins (135727) | more than 3 years ago | (#34738136)

"This phenomenon doesn't yet have an official name..."

Sure it does: Publication Bias [wikipedia.org] . It's even mentioned in the article itself: "Jennions, similarly, argues that the decline effect is largely a product of publication bias..." (p. 3 of the linked online article).

Unfortunately, the New Yorker has gotten into the habit of publishing articles in the vein of "Enormous scientific existential mystery! ... Or actually, it's a standard topic that's been known for decades." Methinks someone got snookered by the first-page headline/hype.

The Scientific Method (1)

woboyle (1044168) | more than 3 years ago | (#34738184)

ANY "scientific" finding that cannot be replicated must be called into question and absolutely not be allowed to stand in the domain as "fact". That is the entire purpose of the scientific method. If you cannot replicate your findings, then either your hypothesis is wrong or your methods are flawed. In either case, you are back to square one, but with knowledge that may help in your next efforts.