
Scientific Data Disappears At Alarming Rate, 80% Lost In Two Decades

samzenpus posted about a year ago | from the here-today-gone-tomorrow dept.

Science 189

cold fjord writes "UPI reports, 'Eighty percent of scientific data are lost within two decades, disappearing into old email addresses and obsolete storage devices, a Canadian study (abstract, article paywalled) indicated. The finding comes from a study tracking the accessibility of scientific data over time, conducted at the University of British Columbia. Researchers attempted to collect original research data from a random set of 516 studies published between 1991 and 2011. While all data sets were available two years after publication, the odds of obtaining the underlying data dropped by 17 per cent per year after that, they reported. "Publicly funded science generates an extraordinary amount of data each year," UBC visiting scholar Tim Vines said. "Much of these data are unique to a time and place, and is thus irreplaceable, and many other data sets are expensive to regenerate."' — More at The Vancouver Sun and Smithsonian."


And in 20 years... (5, Insightful)

Anonymous Coward | about a year ago | (#45743845)

And in 20 years, these results too shall be lost.

Re:And in 20 years... (0)

Anonymous Coward | about a year ago | (#45743927)

I feel terrible that I laughed. This is terrible news...

Re:And in 20 years... (1)

Z00L00K (682162) | about a year ago | (#45744225)

Unless it's published in a newspaper or magazine that is widespread. But printed matter seems to be in decline.

Re:And in 20 years... (5, Insightful)

queazocotal (915608) | about a year ago | (#45744523)

That's not the point.
The actual published results - even if published in an obscure journal - tend to stick around _much_ longer.

Even old journals that go out of publication usually have their archives and distribution rights bought up - there is some small amount of value there - in addition to the copies held in the various reference libraries around the world.

The problem is that if you are wondering about that graph on page 14 of the paper that the whole paper rests on, you can't get the original data to recreate that graph.

This is a major problem because the only way to check that graph is now to redo the whole experiment.

Why must you have their data? (0)

Anonymous Coward | about a year ago | (#45744743)

Reproduction of results isn't "add the numbers that they produced to see if they sum to the value they said it did". That isn't replication of science.

Since the science is supposed to be repeatable, and the paper (if valid science, not pseudoscience bollocks) should contain enough information to do the assessment again (much as a patent is supposed to), you MUST consider it BETTER to re-do the experiment, collect your OWN data, and see if it fits the result of the previous paper.

What if, for example, there was a bias on the original potentiometer, making all voltages appear different from what they are? The result would be WRONG, but your method of "redoing the experiment" would NEVER show this. Doing the experiment again and producing your OWN data would.

Re:Why must you have their data? (5, Interesting)

n1ywb (555767) | about a year ago | (#45744847)

No, but it is amazing what NEW science you can do with OLD data. I've worked with the Transportable Array project, for example http://www.usarray.org/researchers/obs/transportable [usarray.org] - it's over a decade old and scientists are still discovering new ways to take advantage of the data, and will likely be doing so for decades to come. On the other hand, a lot of data is just junk due to poor quality metadata; when was that instrument calibrated? I dunno. Damn. At least in geophysics we have the National Geophysical Data Center to curate this stuff http://www.ngdc.noaa.gov/ [noaa.gov] - at least until Congress cuts its funding.

except... (0)

Anonymous Coward | about a year ago | (#45744855)

Except that as mentioned in TFA, many data sets are unique to a time and place, and thus can never be replicated. They may for example reflect the social temperaments, behaviors, or material/physical qualities of a particular population at a particular point in time.

Re:And in 20 years... (1, Funny)

ObsessiveMathsFreak (773371) | about a year ago | (#45745593)

Well, they're currently behind a paywall, so I don't see how most of us were even supposed to find them in the first place.

but when (0)

Anonymous Coward | about a year ago | (#45743851)

does it reappear?

lulz (3, Funny)

Anonymous Coward | about a year ago | (#45743853)

thats okay, the nsa has a backup

Re:lulz (0)

Anonymous Coward | about a year ago | (#45744051)

Unfortunately the NSA is like a write only memory, so it is there, but you can't get it out.

Concerning... (5, Insightful)

Adam Colley (3026155) | about a year ago | (#45743873)

Trying to ignore that a paper about the unavailability of scientific data is locked behind a paywall.

This is nothing new though. I do occasional conversions from ancient data formats; people need to pay better attention. Imagine trying to read an 8" CP/M floppy today.

As libraries move to digital storage rather than the dead tree that's been fine for thousands of years, they are inviting a catastrophe - possibly only one well-aimed solar mass ejection away from massive data loss.

Re:Concerning... (4, Insightful)

Dutch Gun (899105) | about a year ago | (#45743915)

Paper has its own issues. Talk to me about the durability of paper after you recover the books lost throughout time to natural decay, burning (intentional or otherwise), floods, wars, and social forces (politics, religion, etc). Digital data can be easily copied and archived (when not behind a paywall, of course). It seems to me that redundancy is the best form of insurance against data loss. A solar mass ejection is not going to wipe out every computer with a copy of important data on it, and all the relevant backups. And if it does, we're probably in a lot more trouble for reasons other than losing some scientific research.

Besides which, I sort of wonder if scientific data also follows the 80/20 rule. If so, how much are we really losing? I'm only half joking, of course, since it's difficult to ascertain the value of research immediately in some cases, but wouldn't it stand to reason that any important or groundbreaking research will naturally be widely disseminated, and thus protected against loss?

Re:Concerning... (5, Insightful)

Eunuchswear (210685) | about a year ago | (#45743993)

Digital data can be easily copied and archived

Can be. But mostly isn't.

Re:Concerning... (4, Interesting)

serviscope_minor (664417) | about a year ago | (#45744057)

Besides which, I sort of wonder if scientific data also follows the 80/20 rule. If so, how much are we really losing?

Probably not that much. I'm not claiming this is good, but I don't think it's as bad as it appears.

If a paper is unimportant and more or less sinks without a trace (perhaps a handful of citations), then the data is probably of no importance, since someone is unlikely to ever want it. Generally this is because papers tend to get more obscure over time and also get superseded.

For important papers, the data alone just isn't enough: if a paper is important then it will establish some technique or result. In 20 years people will generally have already reanalysed the data, and likely also independently verified the result if it is important enough. After 20 years I think the community will have moved on and the result will either be established or discredited.

I think the exception is for things that are "hard" to find or non-repeatable, such as fossils. Then again, the Natural History Museum has boxes and boxes and boxes of the things in the back room. They still haven't gotten round to sorting all the fossils from the Beagle yet (this is not a joke or rhetoric: I know someone who worked there).

So my conclusion is that it's not really great that the data is being lost, but it's not as bad as it initially sounds.

Re:Concerning... (5, Interesting)

Anonymous Coward | about a year ago | (#45744251)

I designed and built the equipment for scientific experiments that will never be repeated: cochlear implant stimulation of one ear, done in an MRI. This was safe because the older implant technology had a jack that stuck out of the subject's head, which we could connect to electronics outside the MRI itself. But the old "Ineraid" implants have been replaced, clinically, with implants using embedded electronics and usually magnets. Those are hideously unsafe to even bring into the same *room* as an MRI, much less actually scan the brain of a person wearing one.

So that experiment is unlikely to ever be repeated. Losing the data, and losing the extensive clinical records of those subjects, would be an immense loss to science. In particular, there is historical data from decades of testing on these subjects that shows the long-term effects of their implants, and of different types of redesigned external stimulators. That data is scientifically priceless. When I started that work, we used mag-tape for data, and scientific notebooks for recording measurements. I helped reformat and transfer that data to increasingly modern storage devices several times. We went through 3 different types of storage media in 10 years, and I remember having to write software to allow Exabyte drives to find the end of the tape and append data. (Exabytes had no End-Of-Tape marker.) Preserving that data... was a lot of work.

Re:Concerning... (1, Interesting)

jabuzz (182671) | about a year ago | (#45744355)

No it won't, because in 20-30 years we will be able to do gene therapy to "grow" or "regrow" the stereocilia, and hence cochlear implants will be considered as barbaric as medieval bloodletting. Consequently the data will only be of obscure historical interest.

Re:Concerning... (5, Insightful)

Lisias (447563) | about a year ago | (#45744497)

Wishful thinking.

Let's make a deal: *first*, the gene therapy works. *THEN* we assume we can afford to lose the data the grandparent talks about.

Re:Concerning... (1)

secretcurse (1266724) | about a year ago | (#45745503)

Why would a cochlear implant ever be considered as barbaric as medieval blood letting? The implants aren't perfect, but they provide a huge increase in the quality of life for a large number of patients. A potential better solution that's decades down the line doesn't make a currently effective treatment barbaric...

Re:Concerning... (1)

dj245 (732906) | about a year ago | (#45744907)

We went through 3 different types of storage media in 10 years, and I remember having to write software to allow Exabyte drives to find the end of the tape and add data. (Exabytes had no End-Of-Tape marker.) Preserving that data.... was a lot of work.

Do what everybody else does. Encrypt it using a strong password, then upload it to The Pirate Bay or the Semi-centralized Filesharing Platform Which Shall Not Be Named, and call it "insurance file xxxxx".

Re:Concerning... (4, Insightful)

Teun (17872) | about a year ago | (#45744491)

In the nineties I had a friend working for a company that bought a lot of old Soviet geophysical data.

It needed some very special transcription technology but once in the clear and fed to modern 3D seismic software it revealed a lot more than the original reports gave.

Retaining old reports is nice, retaining old raw data even nicer.

So you have that raw data, archived, yes? (0)

Anonymous Coward | about a year ago | (#45744755)

I mean, I would like to check that you do find more data, so you have it, right? The raw data?

Is it torrented?

And the programs for manipulating, are they available too? And the results from it?

That makes 2x as much data you have.

Of course, if I reanalyse it, if I have any data, I now must archive it. 3x.

If anyone else wants to recreate it... 4x

Alternatively, for the cost of 20years storage, it may be possible to redo all the measures with UAVs and nanobots in future. And for less cost than 30 years storage...

Re:Concerning... (2, Insightful)

Anonymous Coward | about a year ago | (#45743945)

The problem is not just an issue of digital storage, but also a problem of redundancy.

In the "old days", people understood and accepted the risk that a paper copy would be lost. In fact, it was a GIVEN that they would eventually be lost (or damaged or misplaced or stolen or checked out and simply never returned). So multiple copies were kept because centuries of experience dictated that some copies would be lost no matter how strong, carefully maintained and well preserved the originals were.

Nowadays, people simply think it's a matter of "copy and paste". But, as you point out, it's not. Different hardware formats on top of different software formats. The card catalog, with its rigid but well-defined categories, was switched for a nebulous and vague "tagging" system. And god help you if the files are corrupted.

Re:Concerning... (3)

thunderclap (972782) | about a year ago | (#45744005)

Well, dead tree has its own issues. Try finding a book written and published in 1910. Most likely you won't. The paper is so fragile that it has to be specially sealed to survive. Rag paper, on the other hand, still looks good for its age.

Re:Concerning... (3, Interesting)

clickclickdrone (964164) | about a year ago | (#45744031)

That's still 100 years which is a lot better than the data being talked about here.

There was a documentary on the radio this week about the loss of letter writing as a form, and how alarmed biographers are getting, because it's getting very hard to trace someone's life, thoughts, actions etc. without a paper trail, as stuff like emails and digital photos generally gets lost when someone dies.

Personally, I find the increasing rate of loss quite alarming - so much of our lives are digital and so little is properly curated with a view to future access. We know so much about the past from old documents, often hundreds if not thousands of years old but these days we're hard pushed to find something published ten years ago.

Re:Concerning... (4, Informative)

_Shad0w_ (127912) | about a year ago | (#45744045)

I'd go to one of the British deposit libraries and ask to see their copy; deposit libraries have existed since the Statute of Anne in 1710. The British Library has 28,765 books and 1,480 journals in its catalogue from 1910...

Re:Concerning... (4, Informative)

clickclickdrone (964164) | about a year ago | (#45744165)

As an extreme case, the BBC has reported on scrolls from Pompeii and Herculaneum that were 'destroyed' by Vesuvius are now starting to reveal their secrets using some pretty impressive techniques. http://www.bbc.co.uk/news/magazine-25106956 [bbc.co.uk]

Re:Concerning... (1)

Anonymous Coward | about a year ago | (#45744221)

In my experience books and journals from 1910 have survived very well. (These were for the most part printed in the UK and USA; I don't know how well publications from, say, Japan or China have fared.)

Re:Concerning... (1)

DrLang21 (900992) | about a year ago | (#45745097)

Considering that there is active research on original Chinese commentaries dating back to the Tang dynasty, I would say they are holding up pretty well.

Re:Concerning... (1)

Richard_at_work (517087) | about a year ago | (#45744971)

I have a collection of books on my shelf that date from the 1870s, all in top condition and never been stored in any special way.

Re:Concerning... (1)

dbIII (701233) | about a year ago | (#45744113)

imagine trying to read an 8" CP/M

If I didn't already know of two companies that could do that I'd look in the yellow pages. I get your point though and there are older or rarer formats than that which would require a bit of legwork or possibly even reverse engineering.

Re:Concerning... (4, Insightful)

martin-boundary (547041) | about a year ago | (#45744171)

This is nothing new though, I do occasional conversion from ancient data formats, people need to pay better attention, imagine trying to read an 8" CP/M floppy today.

It's not that it's a new problem as such, it's that for the first time in history we have a simple way to solve it, yet we have stupid greedy rich people who sponsor and enact laws to stop us from solving the problem.

The way to solve the problem is through massive duplication of all the data, over and over again through time. We have the technical means to do this on an unprecedented scale.

Even 1000 years ago, people had to painstakingly copy books, by hand, one at a time. And after a handful of copies were produced, there still weren't enough to guarantee that most would survive the ages, wars, fires, censorship, etc. So we generally have tiny collections from the past.

But now it's digital data. Anyone could copy it. We could have millions of copies of some obscure scientific work, all perfect duplicates. If even 0.1% of these copies survive, that's still thousands of copies.

And what do we do? We let a bunch of 1 percenters, who themselves barely know how or care to read, sponsor draconian copyright laws to stop everyone from copying all that stuff, just on the off chance that they might copy a bunch of songs or movies that are outmoded within two years. And the commercial scientific publishers are some of the worst.

It's pathetic.

Re:Concerning... (0)

Anonymous Coward | about a year ago | (#45744951)

So what have YOU done to rectify the situation, besides whine?

Re:Concerning... (1)

Yvanhoe (564877) | about a year ago | (#45744185)

In this fight, Aaron Swartz came very close to making the whole world totally different.

Re:Concerning... (3, Insightful)

bfandreas (603438) | about a year ago | (#45744203)

The combination of insane copyright claims and the over-reliance on comparatively volatile storage technology is steering us directly into another dark age.
That's one take on things.
On the other hand, we have already lost so much stuff over the centuries that perhaps what I just said is idiotic alarmism. After all, we rebuilt western civilisation after the fall of Rome (that just took the Dark Ages), and we didn't all die off after the Great Library of Alexandria burned down. The stuff that gets often replicated will probably not be lost. But let's hope it isn't a retweet of Miley Cyrus' knickers.

Re:Concerning... (1)

Alien1024 (1742918) | about a year ago | (#45744295)

the dead tree that's been fine for thousands of years

Not so fine... The Alexandria library fire was perhaps the most catastrophic loss of human knowledge ever. For example, it destroyed the details of a heliocentric theory postulated by the Greek astronomer Aristarchus of Samos, millennia before Copernicus brought it into the mainstream.

Odd coincidence... (2)

rnturn (11092) | about a year ago | (#45744959)

Some years ago I picked up a copy of "Dark Ages II -- When the Digital Data Die" by Bryan Bergeron (2002), but only now have I gotten around to finishing it (for some reason I never got past the first chapter at the time). When I bought it I had just had my own experience with the not-so-long life of digital data: some CDs I'd burned a few years earlier were already unreadable. The book's a bit dated (it says that there are many people out there with Zip drives connected to their PCs) as, obviously, technology marches on, leaving older media in the dust, but that's the point of the book, and the ideas are still relevant. Worth looking for at your public library if you're still of the mind that a digital format is superior to everything else for long-term storage. Personally, I think we're looking at trouble if everything's converted to bits on the assumption that it'll always be available. Continued access to one of those aforementioned 8" CP/M floppies is a good example. My failed CD-Rs are another.

In The Future (0)

Anonymous Coward | about a year ago | (#45743879)

By 2030, there won't be any left! We must act now!

Lifecycle management (4, Interesting)

FaxeTheCat (1394763) | about a year ago | (#45743887)

So the institutions do not have any data lifecycle management for research data. Are we supposed to be surprised? Ensuring that data are not lost is a huge undertaking and cannot be left to the individual researcher. It may also require a change in the research culture at many institutions. As long as research is measured by the publications, that is where the resources go and where the focus will be.

Will this change? Probably not.

Re:Lifecycle management (1, Troll)

TubeSteak (669689) | about a year ago | (#45743917)

Vines is calling on scientific journals to require authors to upload data onto public archives as a condition for publication.

If authors put their data into the public sphere, people might notice how much of it is fudged.

Re:Lifecycle management (1)

ColdWetDog (752185) | about a year ago | (#45743925)

The answer to both problems is to publish everything in the Journal of Irreproducible Results

Re:Lifecycle management (1)

N1AK (864906) | about a year ago | (#45743965)

Even more reason for us to want it put there. Publishing research based on falsified information should be a pretty major crime and shouldn't be tolerated. It misleads the public, wastes scientists' time trying to build on it, etc.

How much storage space is that? (0)

Anonymous Coward | about a year ago | (#45744873)

When you say "put the data into the public", how much storage space and how does it get there?

Will you pony up storage and taxes for this?

Will you ask the same of the "private" data of corporations that rely on government largesse to exist?

And when it's passed to the public, on a thousand servers, how do you know if the one you happened to get to first is genuine or been fudged by someone with an agenda against the science? Do you think AIG would mirror honestly the genetic proof of evolution?

And at what point is it no longer the science institution's responsibility to pass this data to the public? Because until then, you'll still need to pay for that access and storage. Then what's to stop every public copy being deleted because nobody cares any more? "The public" won't change storage media and verify contents forever, you know.

It's very easy to claim as you have done, but what do you mean by it?

lifecycle management isn't obvious to everyone (0)

Anonymous Coward | about a year ago | (#45743963)

any organization worth its salt has vast libraries of data that go back many decades. not all 'institutions' are so poorly run that data from 20+ years ago cannot be accessed. must be a 'canadian' thing.. not that we'll know.. since the story is behind a bloody paywall.

"Will this change? Probably not."

I have access to vast libraries of data that date back 30+ years; some datasets (this includes computer software too) date back to the early 70s. why? because these institutions/corporations were organized: they knew that retaining data is important, and they kept up with technology to ensure that no data is lost. there is no excuse to lose vast amounts of data. the only excuse for not retaining such data that I can think of is cost. the longer you leave datasets rotting away on old tapes, disks and hard drives, the harder it becomes to salvage them, and finding people who are experts at retrieving data from old media gets harder and more expensive.

Precisely (2, Insightful)

Anonymous Coward | about a year ago | (#45743891)

This is bang on. As a system administrator for a STEM department at a Canadian institution, my budget is 0 for data retention. Long term data retention is just not in the mindset of researchers.

Re:Precisely (0)

Anonymous Coward | about a year ago | (#45744619)

As a technologist for a Biology department at an American institution, I see the same thing, and was horrified by it when I was hired on.

Re:Precisely (2)

cold fjord (826450) | about a year ago | (#45745179)

One of the places that I've worked did various sorts of science / engineering type project work. Quarterly backups of filesystems were archived indefinitely. Even if the data was staying online, at the completion of every project an archive was made of the data on a minimum of two pieces of backup media, along with various bits of metadata regarding the media and data. The archival copies were tested by restore and diffed before actually going into the archive. Of course they kept examples of the different tape drives, and sometimes whole systems, around to use as needed for quite some time.

Having seen the ugliness of tape drives eating archive media I would be inclined to suggest at least 3 copies.
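That archive-and-verify loop can be sketched in a few lines. This is a minimal illustration, not the original site's tooling; the directory and file names are invented, and real archives would of course go to separate physical media rather than two tar files on one disk.

```python
# Sketch: write two archive copies of a project directory, then prove
# each one restores bit-for-bit before trusting it as an archive copy.
import shutil
import tarfile
from pathlib import Path

project = Path("project_data")            # invented directory name
project.mkdir(exist_ok=True)
(project / "run1.dat").write_bytes(b"sample measurement\n")

for name in ("copy1.tar", "copy2.tar"):   # minimum of two copies
    with tarfile.open(name, "w") as tar:
        tar.add(project)

    # Verify by restore-and-diff: extract to a scratch directory and
    # compare every file's bytes against the live data.
    restore = Path("restore_test")
    with tarfile.open(name) as tar:
        tar.extractall(restore)
    for original in project.rglob("*"):
        if original.is_file():
            assert (restore / original).read_bytes() == original.read_bytes()
    shutil.rmtree(restore)

print("both archive copies verified")
```

The byte-for-byte comparison is the important part: a backup that has never been restored is, in practice, an untested backup.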

Re:Precisely (1)

Z00L00K (682162) | about a year ago | (#45745365)

Just increase the disk array size and copy the data as it grows to larger and larger storage systems. Data that's offline is useless.

Re:Precisely (1)

cold fjord (826450) | about a year ago | (#45745509)

Some types of work generate enormous amounts of data in relatively short periods. The only way to keep things under control is to generate the data and ruthlessly pare back whatever isn't needed, preferably as you go. Big datasets cost real money to keep online, especially when there are many of them. Data that isn't needed for current work isn't helpful and doesn't need to be online, but you may have to bring it back online in the future. People say that disk is cheap, and it is, until it has to be high performance, highly reliable, accessible 24x7 to large user bases for simultaneous use, backed up, secured, and managed. Various forms of hierarchical storage can help but don't eliminate the issues unless your budget is very robust or your data creation is low paced.

If... (1)

Bartles (1198017) | about a year ago | (#45743893)

...100% is retained for 2 years, and 17% is lost every year after that, then after 20 years I get about 3.5% of the data still being accessible, not 20%. WTF. Or did someone lose the data for this study, and is the article really just a guess?
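The parent's arithmetic is right for a 17%-per-year drop in the *probability* of getting the data, but the summary says the *odds* (p/(1-p)) dropped 17% per year, which decays much more slowly in probability terms. A quick sketch of the two readings (the starting odds of 9.0, i.e. 90% availability, is an illustrative assumption, not a number from the paper):

```python
# Two readings of "dropped by 17 per cent per year" after a
# 2-year grace period in which all data is available.

def survival_naive(years, rate=0.17, grace=2):
    """Fraction left if 17% of the remaining data vanished each year."""
    return (1 - rate) ** max(0, years - grace)

def survival_odds(years, start_odds=9.0, rate=0.17, grace=2):
    """Fraction left if the odds p/(1-p) fall by 17% each year."""
    odds = start_odds * (1 - rate) ** max(0, years - grace)
    return odds / (1 + odds)

print(round(survival_naive(20), 3))  # ~0.035, the parent's 3.5%
print(round(survival_odds(20), 3))   # ~0.24, near the headline's 20%
```

So the 80%-lost headline and the 17%-per-year figure are roughly consistent once "17%" is read as an odds ratio rather than a straight percentage of remaining data.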

On the bright side... (4, Interesting)

ron_ivi (607351) | about a year ago | (#45743897)

... poorly collected unreliable data also vanishes at at least the same rate (hopefully faster). And assuming shoddy data disappears faster than good data, the quality of available data should continually increase.

Re:On the bright side... (1)

Anonymous Coward | about a year ago | (#45744075)

That's fine, until you want to know (for example) what some star was doing twenty years ago, so you can compare to what it's doing today. Then the archival data is your only chance - and shoddy data is better than no data. (Crudely-drawn ancient-Greek star-maps, for example, have been used to study the motion of stars over periods of thousands of years.)

Re:On the bright side... (0)

Anonymous Coward | about a year ago | (#45744507)

... poorly collected unreliable data also vanishes at at least the same rate (hopefully faster). And assuming shoddy data disappears faster than good data, the quality of available data should continually increase.

I disagree: it means that we will take the conclusions of the shoddy data at face value.

"In 1998, 80% of /. posters were female - unfortunately we don't have the original data anymore, but that is to be expected."

Re:On the bright side... (1)

140Mandak262Jamuna (970587) | about a year ago | (#45744813)

That is a very unreasonable assumption.

There are many entities with a vested interest in keeping alive the data that supports their point of view, their profit motive, or their meal ticket. For example, data collected meticulously by an underfunded biology professor about the allopatric speciation of the salamanders around the lake hole-in-the-mud would disappear in a jiffy. But flawed research supporting the efficacy of a patented clot-busting drug would be perpetuated. Epidemiological studies showing the adverse side effects of the same drug would be hunted down and eradicated.

History is written by the winners. At least part of data/research preservation is done by people with vested interests preserving it selectively.

Aren't there always backup copies... (0)

Anonymous Coward | about a year ago | (#45743901)

... at the NSA?

Burning it (-1)

Anonymous Coward | about a year ago | (#45743907)

If we stopped burning through data so much, we wouldn't lose it, and it wouldn't be causing global warming!! PROBLEM SOLVED!!

So...? (4, Insightful)

Anonymous Coward | about a year ago | (#45743909)

I'm a researcher and I don't have time or space to keep old data as I'm generating too much new data. We work hard to maximize the use of these data and analyses when we write and publish papers. If this was talking about the papers (or presentations), that were the product of the data, being lost at this rate it would be one thing, but the raw data isn't usually very useful to anyone without context or knowledge of subtle and poorly documented technicalities. This just seems like ammunition for the climate change deniers to bitch about. It's unreasonable to keep the old data indefinitely without a massive public repository that will be poorly indexed and organized.

Re:So...? (1)

N1AK (864906) | about a year ago | (#45743971)

You're an AC posting about something not remotely controversial, so you're either lazy or lying, and I'll take your claim with a pinch of salt on those grounds. I don't think anyone is claiming that keeping the data available is either simple or cheap; but those points don't make it any less important. If the data a paper is based on isn't available, then the paper itself loses value, because anyone can write a paper showing anything, and if they don't need to provide the data then it's much harder to investigate. You are absolutely right that simply having the data available isn't always enough to be able to use it; however, we've also seen examples of dubious or wrong mathematical methods being applied to data in academic research, so it's important that this information is available along with the results of the research.

Re:So...? (0, Insightful)

Anonymous Coward | about a year ago | (#45743973)

but the raw data isn't usually very useful to anyone without context or knowledge of subtle and poorly documented technicalities

Wow, what a load of patronising bollocks. "My data was so important that I published it in a peer reviewed journal, but nobody else is smart enough to review it."

It's unreasonable to keep the old data indefinitely without a massive public repository

The bound experiment notebook that any undergrad worth anything was taught to keep in the pre-computer era is just as reasonable to demand today. YOU make your living with data, YOU learn how to maintain backups, or use the democratic process in your academic institution to get someone else to do it.

I do absolutely acknowledge that the move away from paper has made this vastly harder. Paper kept in a dry environment takes at least a lifetime to rot, and nearly every adult human in the developed world knows how to read and copy a sheet of paper. Maintaining electronic backup media usually takes far more frequent intervention, and greater expertise - not just with the hardware, but to ensure the on-going readability of the data format. This is one of those things where the technologists who are entirely hep with every buzzword of the last 5 years forget that the world's history is just slightly longer, and what seems like the only important set of tools in the world today will be a footnote in history tomorrow.

derp (0)

Anonymous Coward | about a year ago | (#45743989)

"I'm a researcher and I don't have time or space to keep old data as I'm generating too much new data."

Well, if that isn't the silliest thing I've ever read. There's no excuse for not retaining data, no matter how large the sets may be. Storage in 2013 is incredibly cheap, and there are many different systems with incredible amounts of storage space you could use to back it all up on. I figure this is more of a financial issue than your excuse of 'I'm generating too much data' nonsense.

"but the raw data isn't usually very useful to anyone without context or knowledge of subtle and poorly documented technicalities"

I seriously doubt you are a researcher of any kind based on the quote above. It doesn't really matter about the 'context' or 'poorly documented technicalities', as you so elegantly put it. You cannot just assume that if someone were to pick up your data they won't understand the 'context'. That is ridiculous. It's all to do with unorganized researchers/institutions and money.

"This just seems like ammunition for the climate change deniers to bitch about."

"Climate change deniers". Very amusing. If you want your data and research to stand up to scrutiny, then keep all your datasets. What have you to hide? Are you hiding the fact that you became a climate researcher so you can stick your hand out for free research money while producing data that is laughable? I have a feeling that's the case :)

Given the responses you've got (0)

Anonymous Coward | about a year ago | (#45744799)

Given the responses you've got, looks like you nailed it.

"Oh, how arrogant!" from one poster. From another "You don't know anything". Isn't that second one arrogant?

Note too how they claim you don't know what you're talking about but even insist they have no method to know better.

So looks like you nailed it.

Re:So...? (1)

rnturn (11092) | about a year ago | (#45745185)

``the raw data isn't usually very useful to anyone without context or knowledge of subtle and poorly documented technicalities''

Wouldn't documenting your experimental method be part of your job? There's really no reason why raw data should be this mysterious entity that nobody can possibly understand unless they were there when it was collected. IMHO, your results -- whatever they are (I only hope it doesn't have anything to do with a drug that physicians might be prescribing to patients) -- are highly questionable if the experiment cannot be reproduced. On the positive side, at least you admit that your documentation efforts were inadequate.

we're in 'perfect' shape again here (0)

Anonymous Coward | about a year ago | (#45743923)

subject to change like the 'weather' & everything else in time space & circumstance. unperfectness abounds what a gig. free the innocent stem cells

what the hell? (2, Insightful)

Anonymous Coward | about a year ago | (#45743941)

I think it is ridiculous that Slashdot keeps posting articles that are behind paywalls. How the hell are we supposed to see them? Do you expect us to pay for subscriptions to services we'd only use once? You, OP, are out of your mind. Articles such as this should be rejected, as most users, if not all, can't even access the story. This site really has gone downhill in the last few years, overpopulated with clueless simpletons, frauds, so-called armchair IT experts and -obvious- subscription-pushing trolls.

Re:what the hell? (1)

Anonymous Coward | about a year ago | (#45744729)

Here you go, let's just say I'm saving this research paper from being "lost" in 20 years - http://imgur.com/a/ozwBa [imgur.com]

Re:what the hell? (0)

Anonymous Coward | about a year ago | (#45744897)

Most of the articles that have scientific papers also have an account in the general media, usually several with varying degrees of utility. Read those instead. They are generally enough to at least engage in discussion at some level on Slashdot.

If you actually need the academic paper, then do one of several things: keep searching the internet, including the authors' pages - it may be there or in some less obvious place - or put it on the list to read the next time you go to a research library that has the journal, or pay for it. The fact that you don't read it today doesn't make the information disappear... at least if you don't wait a couple of decades. It should still be there next week, month, or year.

Your complaint is difficult to separate from a troll.

Not the worst of it (1)

Karmashock (2415832) | about a year ago | (#45743959)

Many things are based on this data... and when the data is gone it cannot be audited, which makes it impossible to verify the findings that are later simply referenced... the data upon which they are based goes *poof*.

This practice also gives free rein to fraudsters, because if you don't catch them quickly they can claim the data was just in their other pair of trousers.

Re:Not the worst of it (1)

serviscope_minor (664417) | about a year ago | (#45744099)

This practice also gives free rein to fraudsters because if you don't catch them quickly they can claim the data was just in their other pair of trousers.

No, the timespan is 20 years. Within 20 years the results will either be sunk without a trace, disproven or replicated. A fourth option is very unlikely.

For example, I doubt the original measurements of superconductivity are still around. If they are, they'd be interesting from a historical perspective, but you could replicate the results yourself with some fairly standard cryolab equipment. No one's going to doubt superconductivity because the original results are gone.

Re:Not the worst of it (1)

Karmashock (2415832) | about a year ago | (#45744123)

In most cases I would agree. However there are some large tables of data that are at least that old that are referred to currently. And they have not been replicated since.

Further, the tables are themselves not raw data but modified data with the raw data and methodology no longer available.

superconductors interesting example (0)

Anonymous Coward | about a year ago | (#45745489)

I once read a story, I don't know if it's true, that the team that discovered those Yttrium-Barium-Copper oxide high-temperature superconductors [wikipedia.org] had made a silly mistake in sending their very important breakthrough paper for peer review: they had (by accident of course) changed every Y for Yttrium into Yb for Ytterbium.

The peer review process took *ages*. Eventually the paper was accepted. A quick erratum "change Yb to Y everywhere. oops. our secretary made a typo."
The Nobel prize the very next year!!

Meanwhile, several large competing labs in the world had been buying Ytterbium like there was no tomorrow and writing articles about experiments in superconductivity with Ytterbium (which doesn't work) ;-)

GNU it (1)

Faisal Rehman (2424374) | about a year ago | (#45743987)

Publish under GPL license and save it forever.

It's just entropy (2)

Chrisq (894406) | about a year ago | (#45743995)

... wait what was it again ... its gone!

and even worse. (-1)

Anonymous Coward | about a year ago | (#45743999)

and even worse is trying to find untainted data. Just look at 'global warming' as THE prime example.

I still have the raw data (1)

mocm (141920) | about a year ago | (#45744011)

that I used for my paper 15 years ago. It is on a tape, somewhere in a drawer, that I have no tape drive for. On the other hand, the LaTeX file and the C and FORTRAN programs I used to evaluate and create the data and write the paper are still on a hard drive that is running on a computer in my network, and I can access it right now. I probably can't compile the program without changes (it was written for Solaris and DEC machines) and maybe can't even run LaTeX on it without getting some of the included styles, but still, it is there.
Since my work in theoretical physics was numerical, the loss of the raw data is probably not as bad as long as you still have the software; but I guess for an experimental physicist the problems would be much greater: keeping the massive amounts of data they sometimes have, and reproducing the data if lost.

is/are (5, Interesting)

LMariachi (86077) | about a year ago | (#45744029)

Much of these data are unique to a time and place, and is thus irreplaceable, and many other data sets are expensive to regenerate.

Whichever side of the "data is" vs. "data are" argument one falls on, I hope we can all agree that mixing both forms within the same sentence is definitely wrong.

Re:is/are (0)

Anonymous Coward | about a year ago | (#45745417)

Dr. Egon Spengler: There's something very important I forgot to tell you.
Dr. Peter Venkman: What?
Dr. Egon Spengler: Don't cross the streams.
Dr. Peter Venkman: Why?
Dr. Egon Spengler: It would be bad.
Dr. Peter Venkman: I'm fuzzy on the whole good/bad thing. What do you mean, "bad"?
Dr. Egon Spengler: Try to imagine all life as you know it stopping instantaneously and every molecule in your body exploding at the speed of light.
Dr Ray Stantz: Total protonic reversal.
Dr. Peter Venkman: Right. That's bad. Okay. All right. Important safety tip. Thanks, Egon.

Misleading figure caption (1)

Bazman (4849) | about a year ago | (#45744033)

Some idiot sub-editor wrote a misleading figure caption here. The article (which I've read) says nothing about how data is lost with age. It only says something about how much data is lost for papers of a given age as of now.

In other words it does not mean that in 10 years time, 10 year old papers will have such drastic data loss. The world 20 years ago was a very different place in terms of communication, scientific practice, and data storage than it was 10 years ago or is now.

The Slashdot article repeats the fallacy by saying "scientific data disappears". No it doesn't. Some has disappeared, but the paper cannot say anything about whether it is still disappearing.

Come back in 10 years time for that conclusion.
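For what it's worth, the decay the summary describes can be read as a simple compounding model. The sketch below is a hypothetical back-of-envelope reading of the reported figures (all data sets available for two years, then odds dropping 17% per year) - the paper itself works with fitted odds ratios, which this toy function glosses over, so treat it as an illustration rather than the authors' method:

```python
def odds_available(years_since_publication, annual_decline=0.17, grace_years=2):
    """Toy reading of the study's summary: data sets stay available for
    a two-year grace period, after which the odds of obtaining them
    fall by a compounding 17% per year."""
    t = max(0, years_since_publication - grace_years)
    return (1 - annual_decline) ** t

# Under this crude reading, availability falls below one in five
# roughly 8-9 years after the grace period ends - which, notably,
# does not line up neatly with the "80% lost in two decades" headline.
```

That mismatch between the headline and a naive compounding reading is one more reason the parent's caution about extrapolating the figure is warranted.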

Re:Misleading figure caption (0)

Anonymous Coward | about a year ago | (#45744867)

I agree. I also noticed the subject they surveyed was somewhat limited. If I was cynical, I would suggest a funding tin was being waggled....

In all seriousness, the public needs to keep a few things in mind.
1) Staff move
2) Institutions of all types are usually only funded year to year
3) Not all data loss is a bad thing. Depends on subject, depends on findings.
4) Not all data is lost; sometimes it is superseded. This is particularly true in molecular biophysics/bioinformatics etc... Ironically, I think climate modelling keeps an archive of their "best guesses"
5) There is too much data - just about every HEP or sequencing or microarray...

I will repeat, this article (yes, behind a paywall) picked a somewhat narrow subject, and as the poster above said, "misleading" would capture it.

And this study will be lost as well (1)

Psychotria (953670) | about a year ago | (#45744035)

a) because it's behind a paywall; and b) how can the original data even hope to be located when a majority of the population can't even read the paper?

Re:And this study will be lost as well (0)

Anonymous Coward | about a year ago | (#45745381)

If it is an academic article published in a peer-reviewed journal, the majority of the population doesn't have the background to understand it anyway. That's why they have press releases or articles written by places (Scientific American, etc.) that specialize in translating research for a general audience.

Simple solution (0)

frrrp (720185) | about a year ago | (#45744081)

Defund the NSA, kick them out of the Utah data center - and do something useful with it. Like giving all the lost data a permanent home.

Lost forever (3, Interesting)

The Cornishman (592143) | about a year ago | (#45744127)

> many other data sets are expensive to regenerate...
Or maybe impossible to regenerate (for certain values of impossible). I remember reading a classified technical report (dating from the 1940s) related to military life-jacket development, wherein the question arose as to whether a particular design would reliably turn an unconscious person face-up in the water. The experimental design used was to dress some servicemen (sailors, possibly, but I don't recall) in the prototype design, anaesthetise them and drop them in a large body of water, checking for face-down floaters to disprove the null hypothesis. Somehow, I don't think that those data are going to be regenerated any time soon. I hope to God not, anyway.

NSA (0)

Anonymous Coward | about a year ago | (#45744137)

NSA has backups :p

Hmm, is Google working on this? (0)

Anonymous Coward | about a year ago | (#45744315)

This sounds like the sort of "big problem" Google would love to tangle with, considering their mission statement.

Re:Hmm, is Google working on this? (1)

emj (15659) | about a year ago | (#45744443)

add revenue on data.. hmph..

Dave...Dave... (1)

MrKaos (858439) | about a year ago | (#45744499)

I'm....losing...my..mind..Dave......Dave....Would you like me to sing a song?

And It Makes Me Wonder (1)

Zamphatta (1760346) | about a year ago | (#45744561)

The very fact that "Much of these data are unique to a time and place, and is thus irreplaceable, and many other data sets are expensive to regenerate.", makes me wonder if this could even be considered "scientific data" anymore. Since the data is unique to a time & place and irreplaceable, it would completely destroy the reproducibility aspect of the scientific process. Given that, should the lack of reproducibility mean that lost scientific data should be redefined as experimental data or hypothesis data? It also brings up the idea in my mind that scientific data has a half life since it can degrade back to hypothesis or experimental data if not properly stored.

Re:And It Makes Me Wonder (1)

dj245 (732906) | about a year ago | (#45744987)

The very fact that "Much of these data are unique to a time and place, and is thus irreplaceable, and many other data sets are expensive to regenerate.", makes me wonder if this could even be considered "scientific data" anymore. Since the data is unique to a time & place and irreplaceable, it would completely destroy the reproducibility aspect of the scientific process. Given that, should the lack of reproducibility mean that lost scientific data should be redefined as experimental data or hypothesis data? It also brings up the idea in my mind that scientific data has a half life since it can degrade back to hypothesis or experimental data if not properly stored.

Completely incorrect! How can you study "how X has changed over time" if you don't have data from other times? It is also impossible in many, if not most, cases to gather such historical data in the present time.

So will you host it? (0)

Anonymous Coward | about a year ago | (#45744727)

And keep proven logs to show it is not tampered with?

Will you pay for the researchers to keep the data forever? Will you insist that they stop researching anything new because the data storage grows exponentially, and the old stuff will need moving to new media and checking, until eventually more work goes into looking after the media than into archiving new stuff?

Will you accept higher taxes to pay for this, and taxes that increase year-on-year exponentially to cover it?


Then you're going to "lose" data.

The Rosetta Project: building a 10000 year library (3, Interesting)

QilessQi (2044624) | about a year ago | (#45744787)

The Long Now Foundation has devised an interesting mechanism for storing important information which, although not optimal for machine readability, is dense and has an obvious format: a metal disk etched with microprinting, whose exterior shows text getting progressively smaller as an obvious way of saying "look at me under a microscope to see more":

http://rosettaproject.org/ [rosettaproject.org]

I highly recommend reading The Clock of the Long Now if you're interested in the theory and practice of making things last.

There's a bigger problem (1)

onyxruby (118189) | about a year ago | (#45744823)

Before science gets hot and bothered about the loss of data, scientists need to do something about the quality of the data they produce to begin with. Frankly, given the complete lack of quality controls that a lot of scientists use, the loss of their data is probably for the best. Depending on the field, as much as 60% of all scientific research cannot even be reproduced. Work that cannot be reproduced by another team is far from isolated to one field either:

http://online.wsj.com/news/articles/SB10001424052970203764804577059841672541590 [wsj.com]
http://www.popsci.com/science/article/2013-05/half-cancer-scientists-have-been-unable-reproduce-studies-survey-finds [popsci.com]
http://www.slate.com/articles/health_and_science/science/2012/08/reproducing_scientific_studies_a_good_housekeeping_seal_of_approval_.html [slate.com]
https://www.xsede.org/gateways-for-open-science [xsede.org]
http://www.eusci.org.uk/articles/data-doesnt-lie-scientists-do [eusci.org.uk]

Depending on the study, that means that either the data has been fabricated by unethical scientists, or the data has been misrepresented for political purposes. Studies are often improperly interpreted by failing to take into account sound statistical modeling, and noise is reported as science. In some fields politics have effectively taken over (e.g. social sciences) and standards are used that would never be tolerated in other scientific fields.

The very culture of science that demands quantity over quality needs to change [columbia.edu], as does the rat race that inspires junk science to begin with. I can't think of any other field where those kinds of failure rates in the reproducibility of your work would do anything other than get you fired for fraud and destroy your career. I like science, I have since I was a young child, but the junk that's getting labeled as science doesn't deserve the label.

Best legitimate use of P2P (1)

naasking (94116) | about a year ago | (#45744895)

Universities should band together to distribute all data from published material on P2P networks so it's redundantly stored at multiple locations. This has the side-benefit of making a legitimate use of P2P obvious.
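One way such a network could keep redundant copies honest is content addressing: naming each archived data set by a cryptographic hash of its bytes, so any mirror can verify its copy independently. A minimal sketch, assuming only Python's standard hashlib - the function names and the sample dataset are illustrative, not any real archive's API:

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive a location-independent identifier from the data itself,
    so every mirror in the network can verify its copy bit-for-bit."""
    return hashlib.sha256(data).hexdigest()

def verify_copy(data: bytes, expected_address: str) -> bool:
    """A mirror (or a reader citing the data set) checks integrity
    by simply recomputing the address."""
    return content_address(data) == expected_address

# Illustrative usage with a made-up data set:
dataset = b"site,year,temp_c\nA,1991,11.2\nA,1992,11.5\n"
addr = content_address(dataset)
assert verify_copy(dataset, addr)
assert not verify_copy(dataset + b"tampered", addr)
```

The design point is that the identifier doubles as a tamper check: a paper could cite the hash, and any surviving copy anywhere on the network remains verifiable against it even after the original lab's servers are gone.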

The InterPARES Project (1)

cold fjord (826450) | about a year ago | (#45745309)

The InterPARES Project [interpares.org]

The International Research on Permanent Authentic Records in Electronic Systems (InterPARES) aims at developing the knowledge essential to the long-term preservation of authentic records created and/or maintained in digital form and providing the basis for standards, policies, strategies and plans of action capable of ensuring the longevity of such material and the ability of its users to trust its authenticity. The findings and products of the first three phases of the project can be found on this website.

Out of mind, out of sight,gone forever [interpares.org]

not just scientific data (1)

larry bagina (561269) | about a year ago | (#45745359)

slashdot used to purge -1 and 0 rated comments from old stories. "So what?", you say. "Why should they store goatse links and ascii art penises?" But before the misnamed lameness filter, there was a vibrant troll culture. These were works of art that spawned adequacy.org and had a lot of time, creativity, and effort put into them. Much more interesting than the "linux good, microsoft bad" groupthink that made it to +5 informative and wasn't purged.

Need extremely fine, fine print (1)

retroworks (652802) | about a year ago | (#45745385)

As a former paper industry professional (recycled pulp), I'd say paper is fine, except that people limit its use to readable font sizes. That is what led to microfiche (which is now being dumped by the truckload at recycling stations as "obsolete tech"). If you printed a hard copy of everything, either to microfiche or in extremely small 1-point font, you could store the data in a type of seed bank or gene bank.

A salt mine may not be appropriate, but I'd like to start a business where everyone could send their hard drives to a giant 100-year Time Capsule Vault in the Sonoran desert. We are shredding retired professors' hard drives which the professors probably would prefer to see preserved. The "half life" of privacy risk is different for different data... experiments, emails, credit card numbers, and porn browsing cookies do not pose the same posthumous risk/benefit. We are cremating too many of our future fossils.

IMHO the biggest threat to raw data is misplaced or randomized fear of privacy combined with copyright planned obsolescence (or mandated "e-waste" shredding for working tech, out of fear that poor people will misuse a display device). Certain data does need to be destroyed, and certain papers shredded. Treating all "data" as having the same expiration date has something to do with the loss of the data in the article.


TheRealHocusLocus (2319802) | about a year ago | (#45745431)

[OP] "disappearing into old email addresses and obsolete storage devices, a Canadian study (abstract, article paywalled) indicated

Well, so much for the study. Money changes everything. Eventually one hundred thousand copies of the abstract will exist on the Internet, but the authors' future descendants will find only one actual link that leads to content, which terminates at a page saying "this domain is for sale".

You'd think that even science data of extremely low bit rate, such as original weather station temperature data [blogspot.com], should be out there somewhere. A lot of other people did too... but all that is available now might be "value added" adjusted data. Not an evil conspiracy per se; it's human nature at its best and worst.

A handy chronology of the history of data retention:

[2500BC] "King Fuckemup boldly slew the enemy and I, Scribe Asskissus hath inscribed it in stone. He is an asshole who owes me back wages."
[1500] "With quivering quill [wikipedia.org] I will write mine own data."
[1866] "Data published at great expense into leather-bound volumes. Dust sold separately."
[1970] "This is really important. we should print it and store it in a binder."
[1971] They didn't.
[1983] "I'll write it to floppy disk with a notsosticky label"
[1985] "After a long and desperate search, the label has been found!"
[1987] "Unlabeled floppy disk keeps coffeemaker level."
[1995] "Roxio CD storage is forever, and Real Scientists don't close their data sessions."
[2003] "Microsoft Word has experienced a problem updating from an older document format and will now close. Save your work as soon as possible."
[2005] "I'll just email it to myself and shut the computer off immediately, then pick it up at work."
[2009] "Yes, three copies! In the safe. There was a fire. Yes, inside the safe. It was a fireproof safe, so no one noticed."
[2010] "This is really important. I should print it and store it in a binder. But my ink cartridge is dry."
[2013] "Our data has been uploaded to the Cloud where it will live forever."
[2500] "King Grapeape slew the primitive humans and buried their statue on the beach. I, Scribe Anthopoapologus hath inscribed it in stone."

Perhaps the most mystifying data retention escapade of Modern Times [youtube.com] is the missing Apollo 11 SSTV moon tapes [wikipedia.org], which contained a multiplexed stream of raw telemetry and the original slow-scan TV signal broadcast from the moon. Not 'missing' really; rather, we know they were re-used and recorded over, because everyone assumed it was someone else's job to ensure that at least one copy was in a safe place. While the earth station operators dutifully sent their tapes to NASA, where the sharpest signal of the moon landing was sure to be preserved for posterity (not), fortunately there were some librarians on duty, and you can acquire DVDs of the moonwalk [honeysucklecreek.net] with better quality than the recordings you've seen in countless movies -- an 8mm film camera pointed at an original SSTV monitor at Honeysuckle Creek, and the best quality scan-converted version.

In the Foundation series, Asimov envisioned Gaia [wikipedia.org], a world in which a telepathic network of sentient (and sensuous) beings kept a 'working set' of retrievable data in-memory -- but also, via access to progressively less sentient and non-sentient objects, such as plants and even rocks, a vast archive. Ask the mountain; it will answer in time, a long time.

Our own Earth has a Gaia storage mechanism: a record of its magnetic field over geologic time, stored as polarization in crystallized lava flows [youtube.com]. But it is not being used for data storage -- actually the magnetic record contains another signal. Scientists have 'played' back 2 billion years of field data and, sped up, it resolves to a brief, scratchy, desperate voice recording: "Help! I am being held prisoner in a planet factory!"

But what of modern unstructured swarm memory? Look at these stats of torrent retention [torrentfreak.com] . You will see that the largest number of seeders was 144,663 hosting Season 3 episode 1 of "Heroes". The longest hosted torrent is a ASCII-video version of The Matrix movie [chattanoogastate.edu] .

A valiant attempt to store long term data is the Rosetta Project [rosettaproject.org]. "The Rosetta Disk fits in the palm of your hand, yet it contains over 13,000 pages of information on over 1,500 human languages. The pages are microscopically etched and then electroformed in solid nickel, a process that raises the text very slightly - about 100 nanometers - off of the surface of the disk. Each page is only 400 microns across - about the width of 5 human hairs - and can be read through a microscope at 650X as clearly as you would from print in a book. Individual pages are visible at a much lower magnification of 100X."

The etched nickel Rosetta disks are sure to become a valuable resource for generations in the distant future to discover just what the hell was going on in this age. We sure don't have a clue. Someone will discover one keeping a coffeemaker level and realize that it has tiny writing on it. I zoomed in at random and saw text rendered as a black-and-white bitmap. It is like sending a fax to the future. It was there I saw inscribed, thus: "Dari, pag nda' Tuhan, kamahapan na ya ni inaa si' sikamimon. Pag bay pingka' sangom, pag taabut sayu, ilooy na nika-mpat ngallaw na. Pag-anu, mahpalman na baw Tuhan, yuk na, "Subay situ meka jinisan..." Which explains everything.

Raw science data is DOOMED.
Because it's no fun to keep dusting it off.
Only sexy conclusions are interesting.

The NIDDK was aware of this years ago. (1)

guru42101 (851700) | about a year ago | (#45745515)

The NIDDK was aware of this years ago and had commissioned a feasibility study on creating a storage mechanism that all grant-paid research would have to use. Unfortunately, after a successful feasibility study, the reviewers for the follow-up real grant responded with "I do not see the scientific value of this research" and the grant went away, with Vanderbilt as the only applicant. I've heard through the grapevine that someone picked up a new similar grant to work on it, but I haven't seen anything from it yet. The big problem is that researchers do not want to share their unpublished research. From what I've gleaned, they want to keep things in their back pocket for future grants/publications.

The site was http://dkcoin.org/ [dkcoin.org]

thank elitism (0)

Anonymous Coward | about a year ago | (#45745563)

And paywalls and the overall exclusivity-oriented nature of academia are to blame for this.

When you do stuff in the open and share it, it's (at least in our current information age) immortal.

When you're a prick about it. It's lost. And most of academia is composed of pricks.

how this happens (3, Informative)

Goldsmith (561202) | about a year ago | (#45745567)

Our scientific research system is built around the process of joining a lab, mastering the work there, and then leaving. There are very few long term research partnerships. The people who stay in place are the professors, who generally do not do the research work.

So you join a lab, produce a few terabytes of data a year, pull a few publishable nuggets out of that and then leave. I have a few backup hard drives that move around with me with what I consider my most important data, probably total 1/10 of the data I have taken. After a few years, this data is really unimportant to me as the labs I have left have done a good job of continuing the research and I have to spend my time and money on something else.

The original data is eventually overwritten by researchers a few "generations" removed from me and that's the end of it.

Yes and? (1)

Meeni (1815694) | about a year ago | (#45745581)

How is that different from the previous state of affairs?

Before the digital age, scientists would have work booklets that would get lost or destroyed when they changed jobs, or when the booklets became too numerous.

Drowning in an overflow of data is about as useful as having no data at all. It could be argued that forgetting is actually a good thing that puts forward important matter - the things we care to keep because they are valuable. Sure, some valuables get lost in the process, but anyway, who would go sort through all the data they ever generated, even if they had it available forever?
