IT At the LHC — Managing a Petabyte of Data Per Second

schliz writes "iTnews in Australia has published an interview with CERN's deputy head of IT, David Foster, who explains what last month's discovery of a 'particle consistent with the Higgs Boson' means for the organization's IT department, why it needs a second 'Tier Zero' data center, and how it is using grid computing and the cloud. Quoting: 'If you were to digitize all the information from a collision in a detector, it’s about a petabyte a second or a million gigabytes per second. There is a lot of filtering of the data that occurs within the 25 nanoseconds between each bunch crossing (of protons). Each experiment operates their own trigger farm – each consisting of several thousand machines – that conduct real-time electronics within the LHC. These trigger farms decide, for example, was this set of collisions interesting? Do I keep this data or not? The non-interesting event data is discarded, the interesting events go through a second filter or trigger farm of a few thousand more computers, also on-site at the experiment. [These computers] have a bit more time to do some initial reconstruction – looking at the data to decide if it’s interesting. Out of all of this comes a data stream of some few hundred megabytes to 1Gb per second that actually gets recorded in the CERN data center, the facility we call "Tier Zero."'"
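For a feel for the staged filtering described in that quote, here is a minimal toy sketch in Python. The event fields, thresholds and keep-fractions below are invented purely for illustration; they are not CERN's actual trigger algorithms, rates or data formats.

    # Toy two-stage "trigger" pipeline loosely modelled on the description above.
    # Every number here is made up for illustration only.
    import random

    def level1_trigger(event):
        # Fast first pass: keep roughly 1 in 400 crossings.
        return event["total_energy"] > 0.9975

    def high_level_trigger(event):
        # Slower second pass with partial reconstruction: keep ~1 in 100 survivors.
        return event["reconstruction_score"] > 0.99

    def run(n_crossings):
        recorded = 0
        for _ in range(n_crossings):
            event = {
                "total_energy": random.random(),
                "reconstruction_score": random.random(),
            }
            if level1_trigger(event) and high_level_trigger(event):
                recorded += 1   # only these would ever reach "Tier Zero" storage
        return recorded

    print(f"kept {run(1_000_000)} of 1,000,000 simulated bunch crossings")

In the real system the first pass is custom electronics at the detector and the second is the few-thousand-machine trigger farms the interview mentions; the sketch only shows the shape of the cascade, in which the vast majority of collision data is discarded before anything is written to the data center.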

Comments Filter:
  • by Sponge Bath ( 413667 ) on Friday August 03, 2012 @09:34AM (#40867203)
    We need backup on floppy disk.
  • They may also be using something called load balancing, but we're still waiting for sources to confirm.
    • Good point. Non-story. I can't see anything of interest to nerds here.

    • 'Score: 3, Funny' - This is hilarious, from TFA:

      'The Tier Zero facility is the central hub of the Worldwide LHC Computing Grid, which also connects to some dozen ‘Tier One’ data centres for near-real time storage and analysis of data and over 150 ‘Tier Two’ data centres for batch analysis of experiment data.'

  • Keeping us humble... (Score:3, Interesting)

    by Anonymous Coward on Friday August 03, 2012 @09:37AM (#40867235)

    My wife, a staff physicist at FermiLab in their computing division, manages to keep me humble when I talk about the "big data" work I'm doing in my commercial engineering position. I think having to deal with a billion or so data points per day is big... Not so much in her universe!

  • Scientists have discovered a way to get adequate performance out of Windows?

    • by Anonymous Coward

      Not yet.

      Large Hadron Collider - powered by Linux [internetnews.com]

      • Aah, I see they use VMware to manage the virtual machines... I guess Citrix is still lagging behind in the server virtualization field.

        I am from Citrix... if you know what I mean. :P
        • VMware is pretty widely recognized as the king of virtualization -- at least so long as you aren't concerned with money. Its overhead is far, far smaller than the others', especially when dealing with huge numbers of connections, and it simply has more features than its competitors.

          Of course, that assumes you're willing to pony up for vRAM entitlements and Enterprise Plus.

          • Re:You mean... (Score:5, Interesting)

            by cduffy ( 652 ) <charles+slashdot@dyfis.net> on Friday August 03, 2012 @11:58AM (#40868955)

            VMware is pretty widely recognized as the king of virtualization -- at least so long as you aren't concerned with money. Its overhead is far, far smaller than the others', especially when dealing with huge numbers of connections, and it simply has more features than its competitors.

            Which doesn't mean those features are implemented well.

            Not so long ago, I built an automated QA platform on top of Qumranet's KVM. Partway through the project, my employer was bought by Dell, a VMware licensee. As such, we ended up putting software through automated testing on VMware, manual testing on Xen (legacy environment, pre-acquisition), and deployment to a mix of real hardware and VMware.

            In terms of accurate hardware implementation, KVM kicked the crap out of what VMware (ESX) shipped with at the time. We had software break because VMware didn't implement some very common SCSI mode pages (which the real hardware and QEMU both did), we had software break because of funkiness in their PXE implementation, and we otherwise just plain had software *break*. I sometimes hit a bug in the QEMU layer KVM uses for hardware emulation, but when those happened, I could fix it myself half the time, and get good support from the dev team and mailing list otherwise. With VMware, I just had to wait and hope that they'd eventually get around to it in some future release.

            "King of virtualization"? Bah.

            • King of virtualization when it comes to things like "supports live migration of a VM's execution state and/or permanent storage", or "stability and speed of the networking layer".

              I can't speak to KVM, as my experience is limited to VMware and some Hyper-V and XenServer testing. But just doing a check against RHEV's own fact sheet [redhat.com], there are a number of quite useful things that are missing:
              *Storage live migration
              *Hot add RAM, CPU
              *Hot add NICs, disk (note that RHEV has it wrong-- this does not require an

              • VMware is nice, let's get that out of the way. We have a mix of ESXi and RHEV and are deciding which to use for everything (assuming moving up to paid vSphere is the VMware option). The fact that RHEV was cheaper, much, much better looking, quicker to set up, and easier to use than KVM under RHEL made the decision to migrate from RHEL-based KVM to RHEV fairly easy.

                RHEV is getting there, still lacking some features and still rough around the edges. For instance:
                • Right now you can't have a VM with one disk on
                • ESX's cost is a bit of a PITA -- there's Essentials Plus, but of course that lacks DRS; and there's the free version, which truly is nice for a single-server solution... but there are a lot of good contenders out there for less.

                  I'm not gonna say that the others are garbage; I took a peek at Xen and really like that they don't gouge you to death for basic things like "can manage several servers at once". I'm just saying that from my experience, as well as from listening to others in the recent ArsTechnica discussi

              • by cduffy ( 652 )

                Those are all pretty core features -- to my mind, ESPECIALLY the disk and NIC hot add. There are a lot of times when it is an absolute blessing to be able to roll out a new VLAN on the network and just hot-add a NIC to the firewall VM on that VLAN, and your network suffers no outage. With disk, it's awfully nice to be able to add a USB disk to the VM without having to reboot the entire thing (again, how do Hyper-V and RHEV not have this?).

                I can't speak to RHEV -- I ran on bare KVM. RHEV eliminated any fea

            • The King Joffrey of virtualization, perhaps.

  • GRID ack (Score:4, Interesting)

    by PiMuNu ( 865592 ) on Friday August 03, 2012 @09:49AM (#40867365)
    I tried using the GRID - it's deeply embedded in acronyms and crud, practically impossible to use without a PhD. For crying out loud, it's just a batch farm!
  • by peter303 ( 12292 ) on Friday August 03, 2012 @10:16AM (#40867673)
    I was looking up how complicated the detectors were, and they were. They have 75M directional sensors and 9K energy detectors (calorimeters), each of which is analyzed 40M times a second for "interesting" events. Maybe one event out of a billion is recorded for subsequent deep analysis. (A rough check of the raw data rate those numbers imply is sketched below.)
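    A quick back-of-the-envelope check of those figures in Python, assuming (purely for illustration) about one byte of readout per channel per bunch crossing -- real channel payloads differ:

        # Back-of-the-envelope check of the raw readout rate implied by the
        # parent's numbers. The ~1 byte per channel per crossing is a guess
        # made only for illustration.
        tracking_channels = 75_000_000        # directional sensors
        calorimeter_channels = 9_000          # energy detectors (calorimeters)
        crossings_per_second = 40_000_000     # "analyzed 40M times a second"
        bytes_per_channel = 1                 # illustrative assumption

        raw_rate = (tracking_channels + calorimeter_channels) * crossings_per_second * bytes_per_channel
        print(f"~{raw_rate / 1e15:.1f} PB/s of raw readout")   # ~3 PB/s, same order as the summary's figure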
  • And Still. (Score:5, Funny)

    by CimmerianX ( 2478270 ) on Friday August 03, 2012 @10:42AM (#40867995)

    The head researcher will STILL come to IT and ask them to please help him sync his Outlook contacts to his phone.

  • So they just used grep

  • by Travelsonic ( 870859 ) on Friday August 03, 2012 @11:01AM (#40868219) Journal
    Roughly, assuming you can round it off to 53 weeks/year: if you did 1 petabyte/sec and transferred that much constantly, that would be roughly 2887200000000000000000000000000000000000000000000000 BITS [individual 1s or 0s] per year
    • Is that as much as billions and billions?

      • Quite a bit larger actually
        1 billion = 1e9
        Travelsonic's number is ~3e51, which would be 3e6*(1e9)^5,
        or millions of billions of billions of billions of billions of billions
        Not quite sure how well Sagan could pull that line off though

    • Actually you're off by about 28 orders of magnitude

      1PB/s = 8e15 bits/s
      8e15 bits/s *(3600s/h) *(24h/day)*(~365.25 days/year) ~= 2.5e23 bits/year
      or 252,460,800,000,000,000,000,000 if you prefer counting zeros

      even in stereo it'd only be 5e23.

      • My keyboard is being weird, probably omitted a few 0s when working with the calculations. Either way, it is still a mind boggling number of 1s and 0s. Wonder how long, in continual transfer, one would theoretically have to go to hit transfer of over a googolplex bits of information.
        • Actually the other way around, you've got over twice as many zeros as you should have. You're right though, it's a mind-boggling number regardless. Nowhere near a googolplex (10^googol) though, nor even a googol (10^100) bits. Using my number (~2.5e23 bits/year) you'd need ~4e76 years to transfer just one googol bits, or about 40 thousand trillion trillion trillion trillion trillion trillion years, and the entire universe is currently only estimated to be ~14 billion years old. The universe will probably be so close to abs
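          For anyone redoing the arithmetic in this sub-thread, a few lines of Python reproduce the corrected figures (taking 1 PB/s as 1e15 bytes per second, as above):

            # Bits per year at 1 PB/s, and time to reach a googol of bits.
            SECONDS_PER_YEAR = 3600 * 24 * 365.25          # ~3.156e7 s

            bits_per_second = 1e15 * 8                     # 1 PB/s = 8e15 bits/s
            bits_per_year = bits_per_second * SECONDS_PER_YEAR
            print(f"{bits_per_year:.3e} bits per year")    # ~2.525e+23

            googol = 10**100
            print(f"{googol / bits_per_year:.1e} years to move a googol of bits")   # ~4.0e+76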

  • Power limitations (Score:4, Informative)

    by onyxruby ( 118189 ) <onyxruby&comcast,net> on Friday August 03, 2012 @11:09AM (#40868307)

    Did a bunch of work with some stock exchanges a few years back. It was an interesting environment, and I see that CERN had the same problems the stock exchanges had. They even had the same situation where the number one budgetary item wasn't monetary cost but electric load.

    You only had so much power physically available in the data centers next to the exchanges and the server rooms inside them. Monetary cost was never an issue, but electric load was everything. It seems funny considering theirs is strictly a science-based load and not a money-making one, but their requirements and distribution remind me greatly of the exchanges.

      They even had the same situation where the number one budgetary item wasn't monetary cost but electric load.

      Probably true wherever you go, but the NYSE is in the middle of a dense urban area stretching for a hundred miles in every direction. Electricity, along with everything else, is painfully expensive there. I believe that's why so many data centers are built in relatively remote areas. Obviously, the NYSE has a physical location requirement... :\

    • On the other hand, at CERN the power used by their computing farm is probably a small trickle compared to what is being pumped into the components of the ring and its detectors.

  • Maybe they could offload some of the processing to volunteer machines running screensavers, the way the SETI project once did and the way Bitcoin does now.
  • Those with further interest in the article may find this informative:

    http://www.geant2.net/upload/pdf/LHC_networking_v1-9_NC.pdf [geant2.net]

    Apparently, CERN uses BGP between T0 and T1, and uses only ACLs, no firewalls, for security.
  • Er, 1 Gb is only 125 megabytes. b is bit, B is byte. So which is meant, one gigabit or one gigabyte? I'm guessing the latter, from simple consistency: a range of "a few hundred megabytes to 125 megabytes per second" wouldn't make sense. If we're going to use abbreviations, we should at least get them right.

  • The summary says it's 100 MByte to 1 Gbit, which is confusing in itself. I think "a few hundred megabytes" is correct. It's impressive to run at that rate continuously with high reliability, but it's nothing compared to YouTube and probably Facebook. If you say a "tweet" takes up 200 bytes including overhead, that's 500,000 tweets per second at 100 MB/s, so maybe even Twitter has to deal with that rate. The requirement for redundancy is probably stricter for the LHC; they have at least triply redundant st

    • by fa2k ( 881632 )

      maybe even Twitter has to deal with that rate.

      Never mind, guys, still a few orders of magnitude lower (340 M messages/day according to WP)
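      Putting numbers on that in Python, using the 200 bytes per tweet assumed upthread and the 340M messages/day figure just cited (both assumptions from this thread, not measured values):

        # Compare Twitter's quoted volume with the Tier Zero recording rate.
        tweets_per_day = 340_000_000      # figure cited from Wikipedia above
        bytes_per_tweet = 200             # assumption from the earlier comment

        twitter_rate = tweets_per_day * bytes_per_tweet / 86_400     # bytes per second
        print(f"Twitter: ~{twitter_rate / 1e6:.2f} MB/s")            # ~0.79 MB/s

        tier_zero_rate = 300e6            # "a few hundred megabytes per second"
        print(f"Tier Zero is ~{tier_zero_rate / twitter_rate:.0f}x higher")   # a few hundred times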

  • I'd like to know what their infrastructure looks like for storing that 1GB/s.

    I was at OpenWorld in 2003 and they had some guy there from CERN giving a talk about how they were using Oracle9i (I read later that they upgraded to 10g, but no doubt they upgrade to later versions relatively quickly), and he did mention that petabyte/s buzzword. It would be very interesting to know how it was all implemented, and how they manage to write 1GB/s to disk. Must be some serious RAC clustering going on, and some seri
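    As a very rough sanity check on what sustaining ~1 GB/s of writes would take at the disk layer, in Python -- the per-spindle throughput and the headroom factor below are assumed round numbers, not anything CERN has published:

        # Rough sizing for sustaining ~1 GB/s of sequential writes to disk.
        target_mb_per_s = 1000        # ~1 GB/s recorded at Tier Zero
        per_disk_mb_per_s = 100       # assumed sustained sequential write per spindle
        headroom = 2                  # assumed margin for RAID/replication and hot spots

        spindles = target_mb_per_s / per_disk_mb_per_s * headroom
        print(f"~{spindles:.0f} spindles writing in parallel, before tape or extra redundancy")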
