
Major Scientific Journal Publisher Requires Public Access To Data

Soulskill posted about 2 months ago | from the open-all-day-every-day dept.


An anonymous reader writes "PLOS — the Public Library of Science — is one of the most prolific publishers of research papers in the world. 'Open access' is one of their mantras, and they've been working to push the academic publishing system into a state where research isn't locked behind paywalls and subscription services. To that end, they've announced a new policy for all of their journals: 'authors must make all data publicly available, without restriction, immediately upon publication of the article.' The data must be available within the article itself, in the supplementary information, or within a stable, public repository. This is good news for replicating experiments, building on past results, and science in general."


136 comments

Good policy (5, Interesting)

MtnDeusExMachina (3537979) | about 2 months ago | (#46339223)

It would be nice to see this result in pressure on other publishers to require similar access to data backing the papers in their journals.

Practicalities (5, Interesting)

Roger W Moore (538166) | about 2 months ago | (#46339525)

Open data is a great idea, but it is not always practical. Particle physics experiments generate petabytes of extremely complex, hard-to-understand data. Making this publicly accessible is extremely expensive and ultimately useless: unless you understand the innards of the detector and how it responds to particles, and spend the time to really understand the complex analysis and reconstruction code, there is nothing useful you can do with the data. In fact, one of the previous experiments I worked on went to great trouble to put its data online in a heavily processed and far easier to understand format, in the hope that theorists or interested members of the public would look at it. IIRC they got about 10 hits on the site per year and 1 access to the data.

So I agree with the principle that the public should be able to access all our data but for experiments with massive, complex datasets there needs to be a serious discussion about whether this is practical given the expense and complexity of the data involved. Do we best serve the public interest if we spend 25% of our research funding on making the data available to a handful of people outside the experiments with the time, skills and interest to access it given that this loss in funds would significantly hamper the rate of progress?

Personally I would regard data as something akin to a museum collection. Museums typically own far more than they can sensibly display to the public, and so they select the most interesting items and display these for all to see. Perhaps we should take the same approach with scientific data: treat it as a collection of which only the most interesting selections are displayed to/accessible by the public, even though the entire collection is under public ownership.

Re:Practicalities (0)

NatasRevol (731260) | about 2 months ago | (#46339585)

So, are you worried that everyone is going to download petabytes of data? To where, their desktops?

Shit, that's the monthly volume of third world countries these days.

Re:Practicalities (3, Insightful)

Anonymous Coward | about 2 months ago | (#46339983)

Uploading and hosting it in the first place to meet such a requirement would be an extremely difficult & costly endeavor.

Perhaps the compromise is to include a clause that requires the author to permit others to obtain a copy of and/or access the data, but only if the receiver of the data pays the cost to transfer/access it. This is similar to state open-records laws, where you must pay for things like the cost of making copies of documents. So in the above case, satisfying the "must permit access" clause might be as simple as permitting the researcher to come to the facility and access the data from a terminal, and browse or whatever it is they do to explore/analyze the data that results from these experiments; thus no costly copying of data is required.

If that isn't agreeable or feasible for the author/institution, then perhaps such research would simply be more appropriately published in a different journal that isn't as focused on openness and verifiability.

Re:Practicalities (0)

Anonymous Coward | about 2 months ago | (#46341027)

So, are you worried that everyone is going to download petabytes of data?

Yes. If they somehow put up a single file with a petabyte of data, without a doubt Timothy would manage to inadvertently link to it on the front page of slashdot.

Re:Practicalities (1)

Roger W Moore (538166) | about 2 months ago | (#46342525)

So, are you worried that everyone is going to download petabytes of data?

No, I am worried about the cost of setting up an incredibly expensive system which can serve petabytes of data to the world, and then having it sit there almost unused, while the hundreds of graduate students and postdocs the money could have funded move on into careers in banking instead of going on to make a major scientific breakthrough that might benefit all of society.

Re: Practicalities (1)

guruevi (827432) | about 2 months ago | (#46339651)

Unlike a museum, data doesn't require anyone to physically interact with it in order for it to be available. Whether or not you make the data publicly available, you have to store it and make it privately available; adding public access is a matter of creating a read-only user and opening a firewall port.

The sad thing is that most scientists don't actually store their data properly: it sits on removable hard drives, CDs, or some older variant of portable media (Zip drives, tape) until it's forgotten about, lost, thrown out, or irretrievably degraded. I would bet you that the majority of studies from even the last 3 years would not be able to present their data if asked; maybe you'll get lucky and find an old, undocumented algorithm for MATLAB on MacOS 9 or so which they used to interpret the data, but which is hopelessly useless these days.
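For what it's worth, the minimal version of the "read-only user and a firewall port" claim is genuinely small. Here is a sketch using only the Python standard library; the data directory path is hypothetical, and a real public data service would additionally need authentication, monitoring, bandwidth management, and long-term funding:

```python
# Minimal read-only HTTP access to a data directory, standard library only.
# SimpleHTTPRequestHandler answers GET/HEAD requests and nothing else, so
# the endpoint is read-only by construction.
import http.server
import socketserver
import threading
from functools import partial

DATA_DIR = "/srv/experiment-data"  # hypothetical path to the published dataset

def serve_readonly(directory, port=0):
    """Serve `directory` over HTTP on 127.0.0.1; returns the running server."""
    handler = partial(http.server.SimpleHTTPRequestHandler, directory=directory)
    server = socketserver.TCPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# server = serve_readonly(DATA_DIR)  # users then fetch http://host:port/<file>
```

This only demonstrates the minimal case; hosting petabytes at scale for decades, as the rest of the thread points out, is a very different matter.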

Re: Practicalities (4, Informative)

Obfuscant (592200) | about 2 months ago | (#46340137)

Whether or not you make the data publically available, you have to store and make it privately available,

I have boxes and boxes of mag tapes with data on it from past experiments. That's privately available. It will never be publicly available.

putting in public access is a matter of creating a read-only user and opening a firewall port.

It is clear that you have never done such a thing yourself. There is a bit more to it than what you claim. I've been doing it for more than twenty years, keeping much of the data we have publicly available (but not all: tapes are not easily made public that way), and there is a lot more to dealing with a public presence than just "a read-only user and a firewall port".

The sad thing is that most scientists don't actually store their data properly, it sits on removable hard drives, cd or an older variant of portable media

And now you point out the biggest issue with public access to data: the cost of keeping it online 24/7 so the "public" can maybe someday come look at it. Removable hard drives are perfectly good for storing old data, and they cost a lot less than an online RAID system. For that data, that is storing it "properly".

If you want properly managed, publicly open data for every experiment, be prepared to pay more for the research. And THEN be prepared to pay more for the archivist who has to keep those systems online for you after the grants run out. And by "you", I'm referring to you as the public.

Researchers get X amount of dollars to do an experiment. Once that grant runs out, there is no more money for maintenance of the online archive, if there was money for that in the first place. For twenty-two years our online access has been done using stolen time and equipment not yet retired. When the next grant runs out, the very good question will be who is going to maintain the existing systems that were paid for under those grants. Do they just stop?

Re: Practicalities (0)

Anonymous Coward | about 2 months ago | (#46342177)

As a molecular biologist I generate lots of sequence data, and no matter what the journals choose to do, nearly all sequence data in the world is publicly available through GenBank. In addition, DNA alignments and phylogenies are frequently required to be published in an online database (e.g., TreeBASE). So for many fields the journals already require this, and/or you simply have to make the sequences available anyway for reviewers to be able to check your data. The real issue is that many scientists never finalize database submissions: you get the submission ID number without completely finishing the submission info needed to release it beyond yourself and the reviewers of your paper. I am slightly ashamed to admit that I have two submissions in TreeBASE right now that the public cannot access, because it is so obnoxious to fill out all the necessary information for public release that I have not taken the time to do it.

Now, one thing I must point out is that the "public" we (scientists) are referring to is largely other scientists. Frankly, very few people in the world are interested in my sequences and would have sufficient knowledge/desire to do anything with them. The knowledge part isn't that big of a problem; anyone motivated can teach themselves whatever they want using free resources. The desire part is the problem. It doesn't matter that we are talking about a limited "public": other scientists need to be able to get your data to build up knowledge in the area, providing an important "whole public" service. E.g., what if the person who found that antibiotics could be purified from fungi to kill bacteria had reported only this fact, but not the species of fungus? Everyone would need to reinvent the wheel to increase our knowledge of antibiotics, with no starting point to begin testing.

~Molecular Fungal Systematist
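For readers outside the field: sequence data of this kind is exchanged in simple plain-text formats such as FASTA, which is part of what makes public deposit feasible at all. A minimal parser sketch; the records below are invented for illustration:

```python
# Parse FASTA, the plain-text format in which most public sequence data
# (e.g. GenBank downloads) is exchanged. Records here are made up.

fasta = """>seq1 hypothetical fungal ITS region
ACGTACGTGGCA
>seq2 another invented record
TTGACCA
"""

def parse_fasta(text):
    """Return {header: sequence} from FASTA-formatted text."""
    records, header, chunks = {}, None, []
    for line in text.splitlines():
        if line.startswith(">"):
            if header is not None:
                records[header] = "".join(chunks)
            header, chunks = line[1:].strip(), []
        elif line.strip():
            chunks.append(line.strip())
    if header is not None:
        records[header] = "".join(chunks)
    return records

recs = parse_fasta(fasta)
print({h.split()[0]: len(s) for h, s in recs.items()})  # → {'seq1': 12, 'seq2': 7}
```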

Re: Practicalities (1)

turkeyfish (950384) | about 2 months ago | (#46342959)

Asking that ALL data be saved is a very big requirement, especially for the molecular community. The sequences themselves often find their way into GenBank, but data assembled from pieces of other datasets seldom does. To make matters worse, the specimens from which the sequences are made are seldom saved and archived, so it is often next to impossible to verify that the sequences in GenBank are actually from the species thought to have been sequenced. I know this is a major problem, since much of my time is spent trying to track down the source of such tissues so that the specimens, should they still exist (which they seldom do), can have their identities confirmed. In principle, saving the original specimens will greatly benefit the scientific community, since they are the ultimate voucher that makes the data valuable in the first place, and the published sequences are useless without them. However, vouchering specimens is even more costly than archiving data, which is evidently why the molecular community has done such a poor job of it for many species. The problem is large because there is no easy way to define the limits of what is meant by "data".

Re: Practicalities (2)

guruevi (827432) | about 2 months ago | (#46342307)

I actually do this for a living. Having data available for projects does require it to be on large data systems which are properly backed up, etc. Heck, any halfway decent staged system (Sun used to make really good ones) will allow you to read tapes as if they were a regular network share. The inevitable problem is that your PI is going to ask for the data 3 years after they left the institute, and your tapes will be unreadable (either because they degrade, or because you can't find a reader and the associated buses and software).

The mag tapes in boxes problem we fixed years ago by simply putting everything on spinning rust with ZFS. As capacity increases (we're 3 generations in now - 750GB, 2TB and now 4TB drives), the old stuff simply takes up a diminishing percentage of any expansion we put in. Individual data sets from ~10 years ago were 100MB, now they're close to 2GB, those 100MB sets aren't even a noticeable portion today whereas back in the day they filled up the entire *gasp* 3TB array.

I do understand the grant issues; most of those grants will actually mandate a 20-year-or-so archival period but never have the money for it. I've figured out that future grants will simply pay for today's "large amount" of data storage as a small overhead, because 10 years from now, 2TB of storage for a study will be like today's 100MB for a study.
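The shrinking-overhead argument sketches out numerically. The figures below come loosely from this thread (100MB datasets that once filled a 3TB array), plus an assumed modern shelf of twenty-four 4TB drives:

```python
# Back-of-envelope: what fraction of an array does a legacy dataset occupy?
MB, GB, TB = 10**6, 10**9, 10**12

def fraction(dataset_bytes, array_bytes):
    """Fraction of an array one dataset occupies."""
    return dataset_bytes / array_bytes

then = fraction(100 * MB, 3 * TB)      # ~10 years ago: 100MB set on a 3TB array
now = fraction(100 * MB, 24 * 4 * TB)  # assumed today: same set on a 96TB shelf

print(f"then: {then:.4%}  now: {now:.6%}")
# the legacy data's share falls by the ratio of array capacities (32x here)
assert now < then / 30
```

On these assumptions the old data's share drops 32-fold with the capacity jump, which is the point above: fold archival into today's overhead and it becomes negligible tomorrow.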

Re: Practicalities (1)

Bongo (13261) | about 2 months ago | (#46343895)

Thanks, I've been wondering about this problem for a while. I'd seen ZFS as the technical part, but didn't know what to do about the "no money" part.

Re: Practicalities (0)

Anonymous Coward | about 2 months ago | (#46340471)

If it were free, yes; but since it's not, and funding does not last forever, it will go on like this for a very long time. We're not talking about GB or TB; it's PB, and that much is not cheap to store. If someone like Bill Gates wants to step up and sponsor it, then sure, but it's not something most people can even come close to affording. In fact, 1PB costs more than most homes for just 3 years of storage.

Re:Practicalities (3, Informative)

RDW (41497) | about 2 months ago | (#46339837)

There could be significant issues with biomedical data, too. For example, the policy gives the example of 'next-generation sequence reads' (raw genomic sequence data), but it's hard to make this truly anonymous (as legally and ethically it may have to be). Indeed, some researchers have identified named individuals from public sequence data with associated metadata: http://www.ncbi.nlm.nih.gov/pu... [nih.gov]

Re:Practicalities (1, Insightful)

Jane Q. Public (1010737) | about 2 months ago | (#46340103)

Well, but.

I think there's an arguable line to draw between "the entire body of data available", and the statistical sampling data that your typical paper is based on, or the specific data about a newly discovered phenomenon, for example.

Exactly where that line is, I don't claim to know. But it behooves us to be reasonable, and not draw UNreasonable fixed lines in the sand.

My personal opinion is: petabytes or not, if the research is publicly funded then the data belongs to the public, and must be made available in some fashion. That's a somewhat different subject than publishing a paper, but it's a related idea.

Re:Practicalities (1)

Obfuscant (592200) | about 2 months ago | (#46340181)

My personal opinion is: petabytes or not, if the research is publicly funded then the data belongs to the public, and must be made available in some fashion.

The public is currently not paying for this access. Do you want to massively increase the research funding system in the US (or whatever country) to pay for long-term management of all publicly-funded data? Or do you expect to get it for free?

Your desire to access any and all data that was created using public money means that every research grant would need to be extended from the current length (one to three years for many of them) into decades. Someone has to pay for the system administrator, the network access, the electricity, the replacement compute/server hardware, the maintenance contracts, etc. Are you willing? Are you willing to forgo your free access when the funding agencies don't pay? I can tell you, I MIGHT work for free to keep some of the systems I created running, but I wouldn't work for free to maintain public access to that data.

Re:Practicalities (1)

aurizon (122550) | about 2 months ago | (#46340385)

A lot of people ignore the collateral functions of the so-called 'peer review' system administered by the publisher.
The publication must first be read by someone who knows the subject passably and works for a living as a competent editor for that area of research. If his first pass finds it acceptable, he must then select a number of true experts in the matter (the peers, or equals, of the paper's author). The peers he chooses are sent a copy of the paper to review and criticize; if it is not acceptable, the comments are passed back to the author for him to respond to. After his responses fix the flaws, it goes back to the panel, and so on, until the paper is rejected or published. The review mechanism is needed to avoid total BS being published. The publishers have created this niche and profit by it (some say excessively, and I agree), so some way must be found to pay for it. Page fees are the initial solution: the author pays a fee, and this is spread among the experts involved.

As for an archive: in the USA, the Library of Congress can do this, as long as a proper indexing method is used so that a paper does not become a needle in a haystack. It should be Google-indexed. Perhaps Google will fund this via ads, because all the biological supply houses will place biological ads, and the same goes for all the other disciplines.
In fact, this could become a gold mine for Google and at the same time serve PLOS and the research community very well. Large databases of terabytes of particle data would not be stored; the publisher would grant access to those who wanted to download it (a precious few will want terabytes of particle data).

So why doesn't someone with a pipeline to Google give them a whistle? They might leap at the chance. It is a natural fit.

Re:Practicalities (1)

Jane Q. Public (1010737) | about 2 months ago | (#46340725)

"A lot of people ignore the collateral functions of the so-called 'peer review' system administered by the publisher."

I don't see this as a stumbling block, though. There are already public-access peer-reviewed journals [peerj.com]. They may have a way to go yet, but I expect them to get better and their number to expand in the near future.

Re:Practicalities (1)

aurizon (122550) | about 2 months ago | (#46340809)

Too many badly reviewed articles are published by them.

Re:Practicalities (1)

Jane Q. Public (1010737) | about 2 months ago | (#46341559)

"Too many badly reviewed articles are published by them."

Well, that's a pretty broad statement and I haven't seen any evidence. In any case, I repeat:

"They may have a way to go yet but I expect them to get better"

Re:Practicalities (1)

aurizon (122550) | about 2 months ago | (#46342113)

I had not seen PeerJ; it looks better than some of the others, and their $99 fee is encouraging, even if optimistic. What happens when the workload gets large, which can happen if they attract many authors? There are other journals of easy access and low editorial standards, which are the 'them' I referred to. By the use of a pool of reviewers, PeerJ has a shot at kicking the established journals to the curb = good. In so doing, PeerJ will improve the ecology, and hopefully the lower-grade journals will smarten up and improve, or go away.
I am sure the established journals will fight back, with deep pockets; they have literally billions, and may even fully match PeerJ and other competent free journals for five or ten years to starve them of good papers. Will they do that? When they see the buzzards circling overhead, they will find a motive.

I am very much in favor of journals like PeerJ, and I have seen the harm the expensive journals have done in the third and even the second world, depriving their scholars of the books and papers they need. I am happy to see that the modern use of the internet and scanners has spread expensive journals and books to all these less wealthy countries via scanning and e-mail. This is good.

And while I am on that: the MIT free online university and others like the Khan Academy need open-source texts for free, because the journal publishers also have another empire, usually in cahoots with profs, publishing course books for $200 or more and making last year's book obsolete and worthless so that a new book is needed.

Course books are needed for all college years and disciplines, fully open source, update online, also free.

Will it happen? Here in Toronto the university DEMANDS each freshman buy all his course books and provide a receipt, or they are not admitted to school. The prof gets a kickback and the college bookstore gets a kickback. Ever see how badly the students are victimized?

That is why I say the entire crooked system needs to change.

That means recognized degrees from MIT/Khan/Et al, which means an accreditation system needs to evolve, and be paid for. This will start to chip away at these monopolies.

This will be a war, without bullets, on economic grounds. Google can become the friend of all here.

Re:Practicalities (1)

Jane Q. Public (1010737) | about 2 months ago | (#46340653)

"The public is currently not paying for this access."

I know it isn't. That was an aside, slightly off-topic, I admit.

"Your desire to access any and all data that was created using public money means that every research grant would need to be extended from the current length (one to three years for many of them) into decades."

Not if such a program were to affect only future research. After all: ex post facto laws are forbidden in the United States.

"Someone has to pay for the system administrator, the network access, the electricity, the replacement compute/server hardware, the maintenance contracts, etc. Are you willing? "

I am aware that it would cost somewhat more. But it is arguable that the benefit lost to society is worth far more.

"Are you willing to forgo your free access when the funding agencies don't pay?"

If they don't pay, then it wasn't publicly funded, was it?

"I can tell you, I MIGHT work for free to keep some of the systems I created running, but I wouldn't work for free to maintain the access to the pubic for that data."

If you are profiting on my dime, then yeah. Cough it up, bud.

I didn't say the researchers should pay for it. The public (meaning of course government at some level) would be responsible for maintaining publicly-accessible archives of publicly-funded research.

Re:Practicalities (1)

Goldsmith (561202) | about 2 months ago | (#46341349)

We are paying for that access.

I've been a government employee overseeing research grants. Nearly every single one of them has a clause built in that the data is to be organized and shared with the government and the government has unlimited rights to that data, including all publications. Almost all of them have to have a data management plan and have to describe how the grantee will ensure access to the data.

Almost every single PI simply says "We will follow a standard data management plan." or some other nonsense. The government guys sign off on this, and that's that, there's no enforcement.

When you buy or build equipment on a government grant, you sometimes have a choice to hang on to it or return it to the government at the end of the grant. By agreeing to be the custodian of that equipment, you agree to maintain it, free of charge, for the government. By law, no one gets ownership of free equipment from the government. The government is absolutely terrible at enforcing this.

These legal documents researchers sign with the government have meaning. Read your contracts. That was the first thing I told my PIs. I don't think any of them did.

Re:Practicalities (0)

Sentrion (964745) | about 2 months ago | (#46340297)

But if I have to spend $100k on lobbying before I get public funding, I don't want to have to share the results with freeloaders who didn't pony up the lobbying cash and didn't put the manpower into the research. The rest of society benefits from the public funds after they have bought my product. Take Google, for instance.

Re:Practicalities (2)

Jane Q. Public (1010737) | about 2 months ago | (#46341159)

"But if I have to spend $100k on lobbying before I get public funding, I don't want to have to share the results with freeloaders who didn't pony up the lobbying cash and didn't put the manpower into the research."

You are describing exactly why the current system is broken.

First off, if the research is worthwhile you shouldn't have to spend $100,000 to lobby for it. And I would argue that is an unethical practice: what about the little guy who is doing promising research but doesn't have the funds to lobby?

Second: quite frankly I don't give a flying fuck how much you spent to get the grant. Public money is public money. If I'm paying for it, it belongs to me. Period. And I don't care even a little if you don't like that.

"The rest of society benefits from the public funds after they have bought my product."

Then go pay to get a patent on your own, and leave public funds out of it. Why should the public pay so that you can profit? Independent inventors do it all the time without public funding. What makes you so special?

"Take Google, for instance."

Is Google doing publicly-funded research? That's news to me. If so, I object very strongly.

I suspect you are being sarcastic here. If you're not, I simply disagree with you. Very much.

Re:Practicalities (3, Insightful)

Crispy Critters (226798) | about 2 months ago | (#46340119)

"petabytes of extremely complex, hard to understand data"

The point seems to be missed by a lot of people. RAW DATA IS USELESS. You can make available a thousand traces of voltage vs. time on your detector pins, but that is of no value whatsoever to anyone. The interpretation of these depends on the exact parameters describing the experimental equipment and procedure. How much information would someone require to replicate CERN from scratch?

Some (maybe most, but not all) published research results can be thought of as a layering of interpretations. Something like detector output is converted to light intensity, which is converted to frequency spectra; the integrated amplitudes of the peaks are calculated and fit to a model, whose fitted parameters give you a result such as "the amplitude of a certain emission scales with temperature squared." Which of these layers is of any value to anyone? Should the sequence of 2-byte values that comes out of the digitizer be made public?

It is not possible to make a general statement about which layer of interpretation is the right one to be made public. Higher levels, closer to the final results, are more likely to be reusable by other researchers. However, higher levels of interpretation provide the least information for someone attempting to confirm that the total analysis is valid.
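That layering can be made concrete with a toy sketch. Every name and constant below is invented; the point is that each stage depends on calibration metadata (gain, offset, detector response) that the raw numbers alone do not carry:

```python
# A hypothetical "layers of interpretation" pipeline. Without the calibration
# constants, the raw digitizer counts are meaningless to an outside reader.

raw_adc = [512, 530, 610, 890, 1320, 890, 610, 530, 512]  # 2-byte digitizer counts

def to_voltage(counts, adc_gain=0.0049, offset=512):
    """Layer 1: counts -> volts; needs the digitizer's gain and offset."""
    return [(c - offset) * adc_gain for c in counts]

def to_intensity(volts, detector_response=1.7e3):
    """Layer 2: volts -> light intensity; needs the detector response."""
    return [v * detector_response for v in volts]

def peak_amplitude(intensity):
    """Layer 3: integrated peak amplitude above baseline."""
    return sum(x for x in intensity if x > 0)

signal = peak_amplitude(to_intensity(to_voltage(raw_adc)))
print(f"integrated amplitude: {signal:.1f}")
```

Publishing only `raw_adc` helps no one; publishing only `signal` hides the analysis; genuinely reproducible publication would need every layer plus its calibration, which is exactly the cost being debated here.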

Re:Practicalities (0)

Anonymous Coward | about 2 months ago | (#46343717)

Ideally, the researchers should make public the rawest of the raw data, and all the scripts and software that convert the data through layers of interpretation to produce the final result. That way, anyone can check to see if there was a bug in the program that converts the raw voltage traces to light intensities, and rerun the entire pipeline to determine whether this has any effect on the final result.

Of course, doing the research in this totally reproducible way and making it all publicly available would be impractically difficult. But it's worth keeping in mind as an ideal toward which to aspire.

Re:Practicalities (0)

Anonymous Coward | about 2 months ago | (#46340293)

Your idea of practicality has nothing to do with open access; it's a justification for keeping a lid on it, merely a justification for the proprietary nature of business. You agree with the principle but prefer the prerogative, which also makes it easy to promote an illusion of success without offering the opportunity to investigate the reality of the assertions made by researchers. In the case of publicly funded research, all the advantage accrues to those who receive grants, and it precludes anyone else from scrutinizing 'results' or leveraging that which has been accomplished at public expense.

It's a furtherance of the Dole model for privatizing and leveraging public funding: institutionalizing the for-profit model at universities, which have become increasingly mercenary in their activity, and firewalling out the rest of the world so as to protect US corporate advantage from competition.

Oh the irony (1)

Roger W Moore (538166) | about 2 months ago | (#46342461)

In the case of publicly funded research, all the advantage accrues to those who receive grants

Really? That's a rather ironic argument given that you are posting it on the web which was something invented and developed at CERN using publicly funded research money.

Your idea of practicality has nothing to do with open access, it's a justification for keeping a lid on it.

So why are you also not complaining that museums with publicly owned collections are not displaying every single item they own? Do you want them to stop researching collections and making acquisitions in the public interest and instead spend money on building thousands of square metres of new display space so every item they own can be displayed?

The public may own the data, but there is a cost to making that data publicly available. My own experience has shown that even when that cost is met, the public actually has almost no interest in looking at the data. I have absolutely zero objections to making all the data publicly accessible, provided someone is going to pay for all the network bandwidth, servers, system administration, disk and tape storage, network connections, etc. needed to access it. However, as a member of the public, I would question whether that is a sensible way to spend all the money required to provide that access, and argue that the money would be better spent on research. After all, that additional money going to data access corresponds to fewer postdocs and graduate students working on the experiment, which, unless the data is wildly popular, probably means fewer people using it, not more.

Re:Oh the irony (1)

turkeyfish (950384) | about 2 months ago | (#46343027)

Excellent point. Given the modern GOP, which is reluctant to even spend money on people who are starving, it's hard to imagine they will be forthcoming with the hundreds of millions of dollars necessary to maintain "all relevant data" in archives and repositories available online in electronic form. Having lived through the era of Proxmire's "Golden Fleece Awards," it is totally predictable how politicians would howl at having to fund all sorts of projects that they could mischaracterize out of context as an excuse to cut science budgets further. In the current climate we would probably see legislation calling for the execution of scientists who somehow "mishandle" data. Look at the grief Michael Mann was put through for no good reason. I certainly wish it were true that politicians would see the value and benefit of funding the archival of data, but judging from the behavior of this GOP Congress toward scientific research and its funding, such thinking is pure fantasy.

Re:Practicalities (2)

Pseudonym (62607) | about 2 months ago | (#46340299)

There's precedent for this. In many biology experiments, the "raw data" is an actual organism, like a colony of bacteria. There are scientific protocols for accessing that "data", but you have to be able to prove that you are an institution that can handle it. Even if the public technically "owns" it, no reputable scientist is going to send an E. coli sample to just anyone.

So I think we all understand that, in practice, we mean different things by "public access". Sometimes that means that anyone should be able to download the data, and sometimes that means that anyone should be allowed to go there and examine it for themselves.

Re:Practicalities (1)

Immerman (2627577) | about 2 months ago | (#46342091)

What? The organism is not the data - the data is all the measurements you took of that organism and all the situations you subjected them to in order to reach the conclusions that you are publishing.

Re:Practicalities (1)

turkeyfish (950384) | about 2 months ago | (#46343085)

"What? The organism is not the data - the data is all the measurements you took of that organism and all the situations you subjected them to in order to reach the conclusions that you are publishing."

You simply don't understand and have a very naive view of biology and the complexity of life on planet Earth. If you don't have voucher material available to confirm the identity of the organisms under study, then there are no definite statements one can subsequently make about any of the measures, observations, etc. extracted from that species, since the study at hand may well be based on another species entirely, or on a mixture of closely related species that have not been properly identified.

For many species this is generally not regarded as a serious issue, and great pains and expense have gone into establishing particular strains or lineages for purposes of experimentation so the question can be set aside and assumed to be answered. For a great many more, however, it is always a very serious issue, since few organisms come with the correct scientific identification neatly printed on their backs for all to see. Just ask yourself: of the 30,000 species of fishes, for example, how many do you think the average scientist can readily identify? Now ask that of coleopterans or even larger or more obscure taxa. In reality, the voucher is the data, since it is the only way one can reproduce a biological study with any certainty. Once you have the identity of the species involved determined and confirmed, you can go about studying various measurable properties. Without that critical piece, the rest is conjecture. One needs to recognize that thousands of papers have been published in which the organisms in question were misidentified. The only way one can be sure is to have saved voucher material.

Re:Practicalities (0)

Anonymous Coward | about 2 months ago | (#46340597)

With respect to particle physics, you can already download the datasets that are processed into published papers from the LHC. CERN requires this of all LHC experiments and hosts the data for them. They would therefore be fine under PLOS's requirements.

Re:Practicalities (1)

wealthychef (584778) | about 2 months ago | (#46342047)

How hard would it be to grant exceptions to the policy? It's a good policy, no reason it can't be flexible too.

Re:Practicalities (1)

turkeyfish (950384) | about 2 months ago | (#46342871)

Good point. However, even for data that only comes to gigabytes, all such data, and the resources necessary to set up and maintain such repositories, are going to cost a lot of money. Journals can demand it, but it's not clear that authors will be able to pay to put the data in the form journals might like to see. There is also the question of archival costs: any organization that accumulates such data is going to require a revenue stream to pay for it. This could well be yet another cost that needs consideration, especially since funding just to conduct experiments and collect and analyze data is already extraordinarily difficult to come by. Adding to these costs may actually impede research, even though the motives are laudable. Still, to the extent that such data can be archived and made available electronically, all of science will benefit. PLOS doesn't really begin to address these issues. It's an old issue that museum curators are all too familiar with, and one that is, as always, still awaiting funds to address properly. Just having a good idea isn't going to make it feasible until someone starts addressing the financial aspects in a realistic way, especially since the problem only gets bigger with time as data accumulates.

PLOS should host it (1)

Antonovich (1354565) | about 2 months ago | (#46343657)

There would seem to be a relatively easy solution to this problem - make the raw data available from the article itself, or at least as an attachment. If that requires petabytes of storage, then presumably PLOS will provide the necessary infrastructure. That way they can ensure that as long as the article is being offered, all data used is also available. Does that sound unreasonable considering their requirement?

Re:Good policy (2, Interesting)

Pseudonym (62607) | about 2 months ago | (#46340227)

You know who needs to introduce this rule? The ACM.

I'm fed up with so-called scientific papers with results based on proprietary software. It doesn't even have to be open source, though that would clearly be good for peer review. If I can't (given appropriate hardware and other appropriate caveats) run your software, I can't replicate your results. If I can't replicate your results, it's not science.

Re:Good policy (1)

kimvette (919543) | about 2 months ago | (#46341817)

I'd say that if they want the data to be publicly accessible without restriction, they should make the published journals publicly accessible without restriction.

Fantastic. (2)

jpellino (202698) | about 2 months ago | (#46339225)

Will cut a lot of nonsense out of reading stuff into the results.

Global warming advocates won't like that. (0)

Anonymous Coward | about 2 months ago | (#46339361)

They have resisted showing data for years.
I hope this helps, though the warmistas have their own favorite journals...not PLOS.

Re:Global warming advocates won't like that. (0)

Anonymous Coward | about 2 months ago | (#46339611)

Maybe they'll just publish the shrinking volumes of just about every glacier in the world.

Re:Global warming advocates won't like that. (1)

PRMan (959735) | about 2 months ago | (#46339885)

You mean the ice that's so thick that they had to shut down the crab season 3 years in a row in Alaska and the fact that you can't even take boats through Northern Canada anymore?

Re:Global warming advocates won't like that. (0)

Anonymous Coward | about 2 months ago | (#46343421)

You do understand that global warming is a global phenomenon, right? And that shifts may occur, year by year, in where the cold and warm bits are, due to jet stream variation, so that, say, the Arctic (and most of Europe) is much warmer than it usually is while, at the same time, there is a huge cold front over the continental US?

http://xkcd.com/1321/ (1)

mitzampt (2002856) | about 2 months ago | (#46343639)

Yeah, it's getting colder outside on the global scale. Just look out the window every winter. It's all the proof I need. This winter the snow excess here was a football field-size snowflake. Those damn alarmists don't know what they're saying, let's just wait and see how wrong they are.

Re:Fantastic. (0)

Anonymous Coward | about 2 months ago | (#46339919)

Yes and no, depending on specific circumstances. It would be very nice to have access to the complete next-gen sequencing data "in the raw" to be able to independently validate the findings of a paper. However, just having access to the raw data does not mean that anybody can reconstruct the same information contained in 10-40 billion nucleotides split into 75-100 base pair reads. You need the proprietary software and a massively parallel compute cluster to process this "data". A sequencing run takes 18-24 hours to collect raw reads, then 2 weeks to 2 months to assemble and polish the results, depending on the availability of an existing scaffold to build on.

In my case, if I collect a 4-wavelength X-ray dataset on one protein crystal, that's close to 12 GB of gzipped images. How would PLoS like those "made freely available"?

The intention is good, the policy is idiotic.

Re:Fantastic. (0)

Anonymous Coward | about 2 months ago | (#46341507)

I think they'd first check to see if your own servers can be considered a stable public repository.

Re:Fantastic. (1)

turkeyfish (950384) | about 2 months ago | (#46343247)

And to make it worse, someone then discovers that many of the original specimens from such a study were not saved, and hence the identifications used in the study cannot be duplicated, making the study worthless, because no one can be sure which organism or lineage the sequences were actually collected from.

Not such good news for getting paid. (2)

Kaz Kylheku (1484) | about 2 months ago | (#46339333)

Public results? Anyone can take your work and use it for something profitable, while you scrape for grants to continue.

Re:Not such good news for getting paid. (1)

NotDrWho (3543773) | about 2 months ago | (#46339403)

Anyone can take your work and use it for something profitable

And patent it.

Re:Not such good news for getting paid. (0)

Anonymous Coward | about 2 months ago | (#46343299)

Anyone can take your work and use it for something profitable

And patent it.

Patenting it is going to be a bit harder, since to patent something it needs to be novel. And this means, among other things, that the patent submission has to be done (even by you) before the submission to the journal.

Re:Not such good news for getting paid. (0)

Anonymous Coward | about 2 months ago | (#46340617)

That was the mythical ethic: that science was supposedly based upon openness, according to the indoctrination we all received in public school. There's no reason that publicly funded research shouldn't be that open, both for scrutiny and as the basis for anyone's efforts. Additionally, if it can be shown that private research uses publicly developed IP, it and its derivations should all be free from restriction.

Capitalism is broken, starting with the patent system and its preference for large, overly powerful corporate interests that tend to concentrate power and seek monopolies. We need to require broader distribution of wealth and faster development of technology in order to foster healthy, sustainable human systems.

like moisture off the backs of us schmucks (-1)

Anonymous Coward | about 2 months ago | (#46339387)

we are carefree imaginary semichosens & always 'win' everything we never had by default https://www.youtube.com/results?search_query=censored+information including our fake history & 'heritage' http://www.youtube.com/results?search_query=unrepentant&sm=3 with history racing up to correct itself & us (so we can move on) will there ever be a better time to consider ourselves in relation to life instead of strife..... which cuts our spirits like a knife, & is completely MANufactured by the WMD on credit zionist nazi genociders

And in open formats? (1)

Anonymous Coward | about 2 months ago | (#46339435)

It would be nice also if journals got on the bandwagon and accepted open formats (OpenDocument) instead of proprietary file formats like .doc and not fully open formats like .docx.

good and bad (3, Interesting)

eli pabst (948845) | about 2 months ago | (#46339447)

Will be interesting to see how this is balanced with patient privacy, in particular with the increasing numbers of human genomes being sequenced. I know a large proportion of the samples I work with in the lab have restrictions on how the data can be used/shared due to the wording of the informed consent forms. Many would certainly not allow public release of their genome sequence, so publishing in PLoS (or any other journal with this policy) would be impossible. So while I think the underlying principle is good, an unintended consequence might be less privacy for patients wanting to participate in research (or fewer patients electing to participate at all).

You hit the nail on the head (0)

Anonymous Coward | about 2 months ago | (#46339477)

This may have severe repercussions for how patient samples are collected. Especially in this day and age with so many privacy concerns left and right.

Re:good and bad (3, Informative)

canowhoopass.com (197454) | about 2 months ago | (#46339505)

The linked blog specifically mentions patient privacy as an allowable exception. They also have exceptions for private third-party data and endangered-species data. I suspect they want to keep the GPS locations of white rhinos hidden.

Re:good and bad (1)

LourensV (856614) | about 2 months ago | (#46339561)

I work with data collected by others, and those others are typically rather protective of their data for commercial reasons. I can use the data for scientific purposes, but I'm not allowed to publish it in raw form, and for most of these data there are no alternatives. I'd much rather publish everything, of course, but that's impossible in this case, so does that mean I can't publish in PLOS any more?

Just to be clear, I applaud this move, we should be publishing the data, plus software and such, where possible. Anyone happen to have a spare couple of tens of millions of euro lying around? That would probably free the data I'm using...

Bad news for ecologists--new license needed (4, Insightful)

Bueller_007 (535588) | about 2 months ago | (#46339489)

This is bad news for ecologists and others with long-term data sets. Some of these data sets require decades of time and millions of dollars to produce, and the primary investigators want to use the data they've generated for multiple projects. Current data licensing for PLOS ONE (and, as far as I know, all others who insist on complete data archiving) means that when you publish your data set, it is out there for anyone to use, for free, for any purpose they wish, not just for verification of the paper in question. There are plenty of scientists out there who poach free online data sets and mine them for additional findings.

Requiring full accessibility of data makes many people reluctant to publish in such a journal, because it means giving away the data they were planning to use for future publications. A scientist's publication list is linked not only to their job opportunities and pay grade, but also to the funding they can get for future grants. And those grants are in turn linked to continuing the funding of the long-term project that produced the data in the first place.

What is needed is a new licensing model for published data that says "anyone is free to use these data to replicate the results of the current study, however it CANNOT be used as a basis for new analyses without written consent of the primary investigator of this paper or until [XX] years after publication." Journals would also need to agree that they would not accept any publications based on data that was used without consent.

It seems to me that this arrangement would satisfy the need to get data out into the public domain while respecting the scientists who produced it in the first place.

Re:Bad news for ecologists--new license needed (4, Insightful)

JanneM (7445) | about 2 months ago | (#46339669)

On the other hand, if I don't have your data I can't check your results. If you want to keep your data secret for a decade, you really should plan to not publish anything relying on it for that time either. Release all the papers when you release the data.

Also, who gets to decide when a study is a replication and when it is a new result? Few replication attempts are doing exactly the same thing as the original paper, for good reason. If you want to see if it holds up you want to use different analysis or similar anyway. And "use" data? What if another group produces their own data and compares with yours? Is that "using" the data? What if they compare your published results? Is that using it?

A partial solution, I think, is for a group such as yours to pre-plan the data use when collecting it. So you decide from the start to publish a subset of the data early and publish papers based on that, then publish another subset for further results, and so on.

But what we really need is for data to be fully citeable: a way to publish the data as a research result by itself, perhaps as the data together with a paper describing it (but not any analysis). Anyone is free to use the data for their own research, but will of course cite you when they do. A good, serious data set can probably rack up more citations than just about any paper out there. That would give the producers the scientific credit they deserve.

Re:Bad news for ecologists--new license needed (0)

Anonymous Coward | about 2 months ago | (#46340025)

"Release all the papers when you release all the data" is not realistic.

I'm not going to collect data for 40 years without publishing something along the way. I won't be able to get the funding if no papers are coming out of the project over that time period.

Re:Bad news for ecologists--new license needed (2)

Bueller_007 (535588) | about 2 months ago | (#46340269)

Release all the papers when you release the data.

Not going to happen. You need to publish during the data collection period in order to continue getting the funding you need for data collection.

Few replication attempts are doing exactly the same thing as the original paper, for good reason.

Right, but replication of the experiment is the EXACT reason that we're making the data available. If you want to use the data for something else, that's fine, but if it's data that the original author is still using, then you should contact them about it first.

A partial solution, I think, is for a group such as yours to pre-plan the data use when collecting it. So you decide from the start to publish a subset of the data early and publish papers based on that, then publish another subset for further results, and so on.

Again, this is not realistic in the overwhelming majority of cases. One of the benefits of long-term studies is the unexpected findings. Imagine that I've been collecting data on a population of lemmings over the last 20 years. It seems to me that the lemmings have been getting smaller since I first started capturing them, so one day I decide to regress body size on year, and I discover that the lemmings have indeed been shrinking, and I can show that it is probably linked to changes in vegetation driven by climate change. I shouldn't have to give away my entire 20-year data set (which I had been collecting for a different purpose), for anybody to use for any purpose, just to get this one study out in a timely fashion.

Besides, many researchers are already dealing with data sets that are >50 years old, and your "plan to release the data before you start collecting the data" suggestion is moot for those people with inherited data sets.

But what we really need is for data to be fully citeable.

Getting your data cited is not NEARLY the same as publishing. Not even close. To get academic positions, pay increases, grants, etc., you need authorship. No one really cares about how often your paper or your data has been cited. That info isn't even on your CV or your grant applications, so no one will even have a rough idea unless it's a particularly preeminent paper.

Re:Bad news for ecologists--new license needed (0)

Anonymous Coward | about 2 months ago | (#46339847)

...

What is needed is a new licensing model for published data that says "anyone is free to use these data to replicate the results of the current study, however it CANNOT be used as a basis for new analyses without written consent of the primary investigator of this paper or until [XX] years after publication." Journals would also need to agree that they would not accept any publications based on data that was used without consent.

It seems to me that this arrangement would satisfy the need to get data out into the public domain while respecting the scientists who produced it in the first place.

Oh yeah, that'll work.

Because scientists never plagiarize nor steal data. After all, they're scientists

:-/

Re:Bad news for ecologists--new license needed (0)

Anonymous Coward | about 2 months ago | (#46340053)

It wouldn't be difficult to force a publisher to issue a retraction for using what would amount to stolen data under that licensing agreement.

Re:Bad news for ecologists--new license needed (2)

Arker (91948) | about 2 months ago | (#46340101)

"What is needed is a new licensing model for published data that says "anyone is free to use these data to replicate the results of the current study, however it CANNOT be used as a basis for new analyses without written consent of the primary investigator of this paper or until [XX] years after publication." "

I could not disagree more.

What is needed here is to deal with the real problem - the issues that force working scientists into a position where doing good science (publishing your data) can harm your career.

Slapping a band-aid on a symptom without addressing the fundamental malfunction here is guaranteed to make things worse, not better.

Re:Bad news for ecologists--new license needed (1)

turkeyfish (950384) | about 2 months ago | (#46343147)

This is a tall order, since scientists are held to a much higher standard than capitalists and are consequently always at a disadvantage. Scientists are expected to give away the product of their labor for free, for all to use as they wish, while others are permitted to extract whatever profits they can from the scientist's work, without any of the funds flowing back to the scientist who generated the data in the first place. One might ask why government contractors aren't likewise expected to turn all their profits and records over to the public, since their profits are derived entirely from public money.

Perhaps scientists wouldn't be so squeamish about releasing ALL of their data just to publish a single paper if they were guaranteed a minimum of 50% of all profits that derive from their work. My guess is that GOP politicians would immediately object to this as limiting the religious freedom of capitalists to worship money as they see fit. I freely admit, however, that this is just a hunch, based entirely on past performance.

Re:Bad news for ecologists--new license needed (2)

Crispy Critters (226798) | about 2 months ago | (#46340233)

"There are plenty of scientists out there who poach free online data sets and mine them for additional findings."

Right. This leads to a two-class system where the scientists who collect the data (and understand the techniques and limitations) are treated as technicians, while those who perform high-level analysis of others' results get the publications. This can lead to unsound, unproductive science in many cases: those who understand the details are not motivated, and the superficial understanding of those who write the publications leads to errors.

Re:Bad news for ecologists--new license needed (2)

the gnat (153162) | about 2 months ago | (#46340303)

This leads to a two-class system where the scientists that collect the data (and understand the techniques and limitations) are treated as technicians while those that perform high-level analysis of others' results get the publications.

Maybe in some fields, but in genomics and molecular biology, the result tends to be exactly the opposite: the experimentalists (and their collaborators) get top-tier publications, while the unaffiliated bioinformaticists mostly publish in specialty journals.

Re:Bad news for ecologists--new license needed (1)

Crispy Critters (226798) | about 2 months ago | (#46340335)

Good to hear. Unfortunately, it does happen in other fields. (Should have said "can lead...")

Re:Bad news for ecologists--new license needed (3, Interesting)

the gnat (153162) | about 2 months ago | (#46340261)

Some of these data sets require decades of time and millions of dollars to produce, and the primary investigators want to use the data they've generated for multiple projects. . . There are plenty of scientists out there who poach free online data sets and mine them for additional findings.

I work in a field (structural biology) that had this debate back when I was still in grade school. The issue was whether journals should require deposition of molecular coordinates in a public database, and later, whether these data should be released immediately on publication or whether the authors could keep them private for a limited time. The responses at the time were very instructive: one of the foremost proponents of data sharing was accused of trying to "destroy crystallography as we know it", to which his response was yes, of course, but how was that a bad thing? Skipping to the punchline: nearly every journal now requires release of coordinates and underlying experimental data immediately upon publication. In the meantime the field has grown exponentially and there have been at least six Nobel prizes awarded for crystallography (at least one of which went to an early opponent of data sharing). The top-tier journals (Science, Nature) average about a paper per week reporting a new structure. Not only did the predicted dire consequences never happen, the availability of a large collection of protein structures has actually accelerated the field by making it easier to solve related structures (and easier to test new methods), and facilitated the emergence of protein structure prediction and design as a major field in its own right.

The question I'm worried about: what form do the data need to take? Curating and archiving derived data (coordinates and structure factors) is already handled by the Protein Data Bank, but the raw images are a few orders of magnitude larger, and there is no public database available. Most experimental labs simply do not have the resources to make these data easily available. (The exceptions are a few structural genomics initiatives with dedicated computing support, but those are going away soon.)

Re:Bad news for ecologists--new license needed (1)

oldhack (1037484) | about 2 months ago | (#46340395)

This is preposterous. Unless you self-funded your work, you don't own the data. The people who give out grants don't intend it for you to spend for your own benefit.

Re:Bad news for ecologists--new license needed (0)

Anonymous Coward | about 2 months ago | (#46340673)

How is it bad? Ecologists who hoard data facilitate the demise of the environment they study. Only by establishing the makeup of systems, in order to avoid the problem of shifting baselines, can ecologists, who are all grant funded, establish known baselines that can be used to foster protection and inform public debate.

Where's the down side?

Re:Bad news for ecologists--new license needed (1)

turkeyfish (950384) | about 2 months ago | (#46343185)

"Where's the down side?"

Well, one area of concern is how the data are used in litigation. Take any ecological or molecular study you might think of. Say the data are curated and made available via PLOS or some other archive or entity. Now a good lawyer notices that the data are incomplete, since they do not cite the repositories of any voucher materials that would permit re-identification of the species in the study. None were saved, because it was too costly. Without the vouchers, such studies are essentially useless, since a case can be made that the original identifications are suspect or that the original tissues were contaminated by the genes of other species not correctly identified. A good corporate lawyer will have an easy time showing such environmental studies are indefensible and incomplete, and in no time there will be no environmental studies or laws that can pass a rigorous voucher test. Why weren't vouchers saved? For many of the same reasons most data are not archived for posterity and made freely available: the time and money required are simply unavailable.

Often museums and scientists would love to save the material but can't afford to do so, since collections are no longer seen as vast bodies of well curated and intensively studied material, but as expensive headaches the public isn't really interested in, being more fascinated with youtube.com. Perhaps we will all just get to watch as humanity bends over and kisses its arse goodbye.

Re:Bad news for ecologists--new license needed (2)

Michael Woodhams (112247) | about 2 months ago | (#46340867)

There are plenty of scientists out there who poach free online data sets and mine them for additional findings.

And this is a good thing, despite your word "poach". Analyses which would not have occurred to the original experimenters get done, and we get more science for our money. For many big data projects (e.g. the human genome project, astronomical sky surveys), giving 'poaching' opportunities is the primary purpose of the project.

A former boss of mine once, when reviewing a paper, sent a response which was something like this:

"This paper should absolutely be published. The analysis is completely wrong, but it is a wonderful data set, and somebody will quickly publish a correct analysis once the data is available."

Now I need to stop wasting time on /. and return to my work in hand, which, as it happens, is 'poaching' data from:
Ingman, M., H. Kaessmann, S. Pääbo, and U. Gyllensten. 2000. Mitochondrial genome variation and the origin of modern humans. Nature 408:708-713.

Re:Bad news for ecologists--new license needed (1)

g01d4 (888748) | about 2 months ago | (#46341185)

There are plenty of scientists out there who poach free online data sets and mine them for additional findings.

I think the additional findings are part of what science is all about. How do scientists 'poach' something that's free? Did you think waiting many decades [wikipedia.org] for the Dead Sea Scroll results was acceptable?

If data is that expensive to collect, then its collection and publication should rank as an end in itself.

This is not new at all (2)

umafuckit (2980809) | about 2 months ago | (#46339535)

Standard policy. Nature have been doing this [nature.com] for some time. They state: authors are required to make materials, data and associated protocols promptly available to readers without undue qualifications. So have Cell Press [cell.com] and Science [sciencemag.org] . I stopped searching at this point, but I'm sure other major journals do the same thing.

Re:This is not new - but few comply (1)

Anonymous Coward | about 2 months ago | (#46340135)

And many scientists that get published in these high profile journals are scofflaws when it comes to sharing... It's been covered many times but compliance is near zero.

Re:This is not new - but few comply (1)

umafuckit (2980809) | about 2 months ago | (#46341887)

I'd agree with that. I once tried, very politely, to get data from the authors of an NPG paper. They stalled and it became awkward. In the end I gave up, because my interest was purely motivated by curiosity and I didn't want to make an enemy (even if the person in question was in a different field). Glad I backed off now, as I've ended up moving into that field...

Prolific publishing (2)

hubie (108345) | about 2 months ago | (#46339895)

one of the most prolific publishers of research papers in the world.

Their journals aren't in my field (they are all bio journals), so I had not heard of them, but is it true that they are that big? Their web site [plos.org] wasn't much help in terms of subscription or article numbers, or I simply missed it. Can anyone familiar with them provide any input?

Their data policy might work for the biosciences, but good luck requiring all the many TB of raw data from a particle physics experiment to be put up somewhere. And in some instances, like that one, the raw data will most likely be useless without knowing what it all means, what the detectors were, what the detector responses are, etc. etc. etc. For experiments where it takes man-months or man-years to collect and process the data, making it all available in raw format will largely be a waste of time.

In general, at least for experiments done in the lab that use specialized equipment, raw data will not be very useful if you don't understand what you're collecting or aren't familiar with the equipment. You can end up with situations like that guy who took the Mars rover images and kept zooming in until he saw a life form.

HIPAA (0)

Anonymous Coward | about 2 months ago | (#46339925)

I guess they don't want any more publications from medicine. There is no way to truly, fully anonymize patient data. This is why the data is rarely provided, or is locked behind a "prove you're a researcher" wall, or only a small subset is released decades later, when it would be much harder to trace.

Re:HIPAA (1)

sexconker (1179573) | about 2 months ago | (#46340063)

I guess they don't want any more publications from medicine. There is no way to truly, fully anonymize patient data. This is why the data is rarely provided, or locked behind a "prove you're a researcher" wall, or only a small subset given decade(s) later such that it would be much harder to trace.

WTF is this horseshit? Anonymize patients by removing all name, address, etc. info. Just keep the relevant metrics for the study.
HIPAA does not allow a "researchers can access your private, personal data, lol" exception, so there's no fucking change from how shit runs currently.
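In code terms, the "just strip the identifiers" approach the parent describes is trivial; here's a minimal Python sketch (the field names are hypothetical, not from any real study):

```python
# Minimal sketch of "remove identifiers, keep the study metrics".
# Field names are hypothetical examples, not from any real dataset.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "ssn", "email"}

def deidentify(record):
    """Drop direct identifiers; keep everything else for the study."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {"name": "Jane Doe", "ssn": "123-45-6789",
           "age": 54, "diagnosis": "T2DM", "hba1c": 7.2}
print(deidentify(patient))  # {'age': 54, 'diagnosis': 'T2DM', 'hba1c': 7.2}
```

The catch, as the replies below note, is that the fields you keep can themselves be identifying.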

Re:HIPAA (2)

Crispy Critters (226798) | about 2 months ago | (#46340317)

Unfortunately, it has been shown already that the few details relevant to medical studies can often be used to uniquely identify individuals even after name and address are removed. "Yaniv Erlich shows how research participants can be identified from 'anonymous' DNA" http://www.nature.com/news/pri... [nature.com]

The same will be true for various kinds of employment and census data.
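The re-identification risk is easy to demonstrate: even with direct identifiers gone, combinations of ordinary fields (quasi-identifiers) can single out individuals. A toy Python check, using made-up data:

```python
from collections import Counter

# Toy dataset: direct identifiers already removed, but ZIP code,
# birth year, and sex remain as "harmless" study variables.
records = [
    {"zip": "02139", "birth_year": 1960, "sex": "F", "diagnosis": "A"},
    {"zip": "02139", "birth_year": 1960, "sex": "F", "diagnosis": "B"},
    {"zip": "90210", "birth_year": 1975, "sex": "M", "diagnosis": "A"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def unique_combinations(rows):
    """Return rows whose quasi-identifier combination appears only once
    (i.e. k-anonymity with k=1: re-identifiable with outside knowledge)."""
    counts = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in rows)
    return [r for r in rows
            if counts[tuple(r[q] for q in QUASI_IDENTIFIERS)] == 1]

at_risk = unique_combinations(records)
print(len(at_risk))  # 1: the 90210 record is unique, hence re-identifiable
```

Anyone who knows the one 1975-born man in that ZIP code who joined the study can now read his diagnosis off the "anonymous" data.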

That Wraps It Up (0)

sexconker (1179573) | about 2 months ago | (#46340045)

Well, that really wraps it up for the global warming crowd.
If their source data has to be publicly accessible, it'll be laughed off the stage before their "studies" get any traction.

Good idea but... (0)

Anonymous Coward | about 2 months ago | (#46340203)

I'm worried about the wording around ALL DATA. In many experiments, ALL DATA could easily be interpreted as the entire data set, running into many terabytes or even petabytes. Making that much data publicly available could be prohibitively expensive for many papers.

Conflicts with privacy rights (1)

Antique Geekmeister (740220) | about 2 months ago | (#46341725)

There is a great deal of science, and public policy, that would benefit from public exposure. But medical and sociological research benefits from the privacy of the subjects, who then feel more free to be truthful. The same is true of political survey data, and "anonymizing" it can be a lengthy, expensive, and uncertain process, especially when coupled with the various metadata collected with the experiments or in parallel with them. Making data public can also be very expensive even without privacy issues, because transforming it from obsolete media and making it available for public download often takes real engineering time. Long-term science projects can span decades, and the first sets of data are often on obsolete media.

Overall, it seems an excellent policy, but exceptions will have to be made.

unless... (1)

l3v1 (787564) | about 2 months ago | (#46343175)

"This is good news for replicating experiments, building on past results, and science in general."

It is, unless the data can't be made "publicly available, without restriction" (very important emph. added), in which case you can't publish there. Yes, there are other journals, but demanding that all restrictions be dropped in all cases is simply an approach blind to reality. Also, if they demand this, they must provide free storage, which in some cases could run to many GB of data; you won't want to pay for indefinite storage of large datasets, for certain.

Also, I wish to repeat my hatred of the kind of open access publication model most (if not all) major scientific outlets use, namely charging the author many thousands of USD/EUR for publication, costs which most grants don't cover (e.g. my institute mandates open access publication, but of course doesn't provide the financial resources to do so). This in turn shifts the incentives: it's now in the publisher's best interest to accept as many papers as possible (keep the money flowing), instead of accepting the best ones and getting the money from interested readers (and yes, if it's good, they come). Of course politician-scientists like the publicity they get for trying to 'set science free'. I just wish they'd do a bit more thinking; they are scientists, after all (or so they claim to be).