
NASA Unplugs Its Last Mainframe

coondoggie writes "It's somewhat hard to imagine that NASA doesn't need the computing power of an IBM mainframe any more, but NASA's CIO posted on her blog today that at the end of the month, the Big Iron will be no more at the space agency. NASA CIO Linda Cureton wrote: 'This month marks the end of an era in NASA computing. Marshall Space Flight Center powered down NASA's last mainframe, the IBM Z9 Mainframe.'"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    Pardon my youth and naivety.

    I've seen mainframes used at insurance companies and banks, but the rest of the world seems to favour the cloud ways of Elastic Cloud and whatnot.

    I've heard mainframes have high I/O throughput, but how do the equivalent cloud solutions compare, especially on scalability?

    Thanks.

    • by tysonedwards ( 969693 ) on Sunday February 12, 2012 @03:44PM (#39012731)
      It's also that $730,000/year in ongoing maintenance for a Z9 is not really practical, especially considering that newer deployments based on GPGPUs have far lower operating costs and provide higher performance than 5-year-old big iron.
      • by Shinobi ( 19308 ) on Sunday February 12, 2012 @03:48PM (#39012763)

        Only in some aspects, and GPGPU clusters have a hard time matching the transaction rates and number of concurrent I/Os of a Z9. I wouldn't want to use a GPGPU cluster for financials/payroll, just as an example.

        • by Gerzel ( 240421 )

          Yeah, but NASA is generally running the kinds of applications that GPGPU clusters excel at, not the types that mainframes excel at. Really, they can rent one if they need it; I don't see a real need for NASA to have one at the current time.

      • by bws111 ( 1216812 ) on Sunday February 12, 2012 @03:54PM (#39012805)

        GPGPUs do not replace mainframes, unless the mainframe in question is being used for the wrong reasons.

        GPGPUs excel at very fast computation and being cheap.

        Mainframes excel at very high transaction rates (lots of I/O), incredible reliability (five 9s), and security.

        GPGPUs are used in scientific (number-crunching) work, mainframes are used for business.
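The "five 9s" figure above has a concrete meaning: 99.999% availability leaves only minutes of downtime per year. A quick back-of-the-envelope sketch in Python; the availability targets shown are illustrative, not any vendor's guaranteed figures:

```python
# Downtime budget implied by an availability target.
# Targets below are illustrative, not a specific vendor's SLA numbers.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of allowed downtime per year at a given availability."""
    return MINUTES_PER_YEAR * (1.0 - availability)

print(f"three 9s (99.9%):  {downtime_minutes_per_year(0.999):.1f} min/year")
print(f"five 9s (99.999%): {downtime_minutes_per_year(0.99999):.2f} min/year")
```

Three nines allows roughly 8.8 hours of downtime a year; five nines allows only about 5 minutes, which is the gap the mainframe premium is buying.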

      • by sjames ( 1099 ) on Sunday February 12, 2012 @04:06PM (#39012887) Homepage Journal

        Mainframes aren't about computational performance; they're about reliability and, to a lesser degree these days, I/O performance. If you want computational performance, you go with a cluster, or perhaps a cluster of GPUs, depending on the nature of the problem.

        Mainframes are about reliability. When your app absolutely positively must run 24/7, a mainframe is a reasonable consideration. We can get about 90% of that with multiple failover servers and other similar strategies. Where that's good enough, we go that way because of the vastly lower prices. However, if the 90% solution just isn't good enough, mainframe it is.
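The failover strategy described above can be put into rough numbers: assuming independent failures, n redundant servers are all down at once with probability (1-a)^n. A minimal sketch; the per-server availability figures are made up, and real correlated failures make the numbers worse:

```python
def combined_availability(per_server: float, n_servers: int) -> float:
    """Availability of n independent failover servers: the system is
    down only when all n are down at once. Independence is an
    idealization; correlated failures (power, network, software bugs)
    erode this in practice, which is part of what mainframes sell."""
    return 1.0 - (1.0 - per_server) ** n_servers

# Two hypothetical 99% servers already reach "four 9s" on paper
print(combined_availability(0.99, 2))  # ~0.9999
```

This is why failover gets you "about 90% of the way there" cheaply: the paper math is excellent, and the residual risk is in the failure modes the independence assumption hides.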

      • Comment removed based on user account deletion
      • by mcgrew ( 92797 ) *

        It's also that $730,000/year in ongoing maintenance for a Z9 is not really practical, especially considering that newer deployments based on GPGPUs have far lower operating costs and provide higher performance than 5-year-old big iron.

        I saw a feature of the Z9 on Wikipedia that intrigued me. I can't for the life of me figure out how they could accomplish it.

        Concurrent book replacement

        The System z9 supports nondisruptive processor replacement. That means a technician can replace an entire proce

    • by History's Coming To ( 1059484 ) on Sunday February 12, 2012 @03:48PM (#39012761) Journal
      The reason big old-fashioned mainframes are still in use in many places is simply the cost of moving away from them. They're generally running custom code, custom databases, custom hardware... the sheer cost of redoing everything is the big problem, not any inability of modern hardware to do the job.

      Clusters of PS3s make a perfectly serviceable supercomputer, but if your existing solution still works...
      • by bws111 ( 1216812 ) on Sunday February 12, 2012 @04:01PM (#39012859)

        Mainframes are not supercomputers, and are not marketed as such. Not sure what you mean by 'modern hardware' - you don't think mainframes are modern hardware?

        Mainframes are used for high-volume transaction systems, where uptime and data integrity are absolutely essential. Clusters of PS3s are not going to match that.

      • by Samantha Wright ( 1324923 ) on Sunday February 12, 2012 @04:16PM (#39013007) Homepage Journal
        You're probably aware of all this, but just for anyone who happens by and gets confused: these mainframes are not exactly dinosaurs; the z9 series was introduced seven years ago and uses totally custom 64-bit CISC silicon designed to give top-of-the-line performance for its day. The hardware is essentially optimized to run VM hypervisors, and one of the major guest OSes for it is Linux. Essentially what the price tag fetches you, very much unlike a pile of PS3s strung together, is ungodly amounts of vendor support. As documentation-fearing folk, we Slashdotters don't generally think about dependability on the scale that IBM does, but there's a very clear market for it, and that's really been the marketing point of Big Blue for at least the past twenty years or so, much more so than legacy software lock-in.
        • by Forever Wondering ( 2506940 ) on Sunday February 12, 2012 @05:41PM (#39013637)
          And backward compatibility and service. Write once, run forever. You can take a binary program compiled in 1975 and it will run unchanged on the latest mainframe.

          ---

          Even if it's on a punched card deck and you don't have card deck reader hardware anymore, IBM does. Its support group will transfer the card deck to whatever media your current hardware can handle.

          Also, if a mainframe ever does go down, IBM's service escalation policy is unbelievable (e.g. that's what you pay for). I remember when my datacenter's mainframe went down [circa 1975]. The following numbers aren't exact, but similar.

          The local rep must be onsite within a fixed period of time (e.g. 2 hours). He has [say] 4 hours to diagnose/fix the problem. If he is unable to do so, the regional hotshot is called in. If more time goes by, the national service rep and one or more of the system architects must arrive. After 24 hours, an executive vice president must be onsite and stay until the problem is resolved.

          When we had our problem, the onsite VP had the entire mainframe replaced, by diverting a system scheduled to go to a new customer and airfreighting it up. Total round trip time [for complete replacement/install]: 72 hours

          Also, the mainframes in those days were much bigger iron than the one pictured in the article. You could fit five z9s into the space of a single S/370.

          • Re: (Score:3, Informative)

            by webnut77 ( 1326189 )

            Also, the mainframes in those days were much bigger iron than the one pictured in the article. You could fit five z9s into the space of a single S/370.

            You could literally step inside an IBM 3090.

          • by RadioTV ( 173312 ) on Sunday February 12, 2012 @06:59PM (#39014213)

            I watched an IBM mainframe service tech remove the jacket of his three piece suit, roll up his shirt sleeves, strike up a propane torch and re-sweat the solder joints on the copper pipe for the water cooling system of an ES/9000. I don't know what they are like these days, but 15 years ago those guys were amazing. They had to know how to repair every piece of hardware that IBM made, and how to troubleshoot every operating system.

        • by Shinobi ( 19308 ) on Sunday February 12, 2012 @10:44PM (#39015453)

          Now now, not all of us slashdotters fear documentation, dependability etc :p

          I don't work on mainframes, but I've worked on projects where mainframes and mainframe people have been involved, and in many ways the dedication to logistics reminds me of my days in the military. Sure, to the run-of-the-mill "geek" it probably looks stifling, but for those of us who are used to teamwork, it's actually refreshing to have decent planning. Contrast that with working for academia... *shudders*

    • by deoxyribonucleose ( 993319 ) on Sunday February 12, 2012 @03:51PM (#39012781)

      I've seen mainframes used at insurance companies and banks, but the rest of the world seems to favour the cloud ways of Elastic Cloud and whatnot.

      I've heard mainframes have high I/O throughput, but how do the equivalent cloud solutions compare, especially on scalability?

      Thanks.

      Latency. Confidentiality. Reliability. But most of all: sunk costs and proprietary software embodying key business knowledge. Replacing mainframes requires a large enterprise to start not only major software procurement or development (or both, as in ERP), but also business process reengineering... none of which is particularly fun, cheap or in themselves something that helps capture greater market share.

    • by Junta ( 36770 ) on Sunday February 12, 2012 @03:58PM (#39012841)

      I've heard mainframes have high I/O throughput, but how do the equivalent cloud solutions compare, especially on scalability?

      Depends on the problem.

      For a relatively naively constructed algorithm, I/O will be measurably worse on any 'cloud' platform popular today, and severely worse than on a mainframe. However, if you understand how to make your application scale (assuming it theoretically can), you can *in aggregate* match mainframe I/O at a much lower acquisition cost (though depending on who you talk to, the more fudge-friendly 'TCO' metric may or may not follow). The trick is that for many applications, the perceived risk and cost of reaching that understanding is higher than just continuing to go with the flow of an IBM mainframe. Of course, some moderately broad problem areas are getting tooling to do that sort of scaling more easily, without too much extra thought. On the other hand, for some problem areas no one has constructed a 'proper' approach that would negate the need for mainframe-like architecture.

      With respect to the word 'cloud', the overwhelming majority of 'clouds' covered in tech news are EC2 and EC2 workalikes, where I/O is not particularly optimized. There are also various companies championing a departmental server or two with a few virtual machines on it as a 'cloud', further diluting the message and usually delivering terrible I/O characteristics even with overpriced storage architectures. On the other hand, there are some projects claiming 'cloud' that include arbitration of bare-metal execution and can reasonably compare with a 'boring' private x86 scale-out solution, but very few people care.
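One common way applications "scale in aggregate" as described above is to partition work by key across many cheap nodes, so each node sees only a fraction of the total I/O. A minimal hash-sharding sketch; the shard count and key names here are made up for illustration:

```python
import hashlib

def shard_for(key: str, n_shards: int) -> int:
    """Map a record key to one of n shards via a stable hash, so each
    node handles roughly 1/n of the total I/O. A cryptographic hash is
    overkill for sharding, but gives a predictably even spread."""
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_shards

# Spread 10,000 hypothetical record keys across 8 nodes
counts = [0] * 8
for i in range(10_000):
    counts[shard_for(f"record-{i}", 8)] += 1
print(counts)  # roughly 1,250 keys per shard
```

The catch the comment alludes to: this only matches a mainframe's aggregate I/O if the workload actually partitions cleanly, and restructuring an application to reach that point is where the risk and cost live.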

      • by symbolset ( 646467 ) * on Sunday February 12, 2012 @06:01PM (#39013823) Journal

        It's really not hard to configure a single rack server with 1M IOPS, 1-2 TB RAM, 40-160Gbit aggregate networking and 40-48 cores these days. They fit 4-8 per rack, storage and switching included. They don't cost as much as you might think, even with the hand-holding support contract. And they run the OpenStack "cloud" platform quite well.

    • Mainframes have legacy, locality, and privacy, which are particularly important qualities for banks and insurance companies.

      The biggest problem is porting old programs to cloud systems. Sure, it can be done, but it's a million-dollar proposal, and if something goes wrong, it's potentially hundreds of millions of dollars in losses for a big bank. New systems will often use cloud solutions, but that requires convincing managers that they'll work just as well.

      Whether a cloud solution will meet the throughput c

    • by story645 ( 1278106 ) <story645@gmail.com> on Sunday February 12, 2012 @04:06PM (#39012903) Journal

      My mom keeps telling me that UPS is one of the world's largest users of DB2, a statement backed up in this article [theregister.co.uk]. They're not switching away for the same reason financial institutions don't: after pouring lots of money into alternatives, they found that mainframes have better performance.

      • by arth1 ( 260657 ) on Sunday February 12, 2012 @06:33PM (#39014041) Homepage Journal

        And if not always better performance, usually more predictable performance, which can be far more important.
        For some apps, it is better to have a guaranteed transaction time of 10 ms than an average transaction time of 1 ms with no guarantees.
        Linux RT and GRIO are getting better, but not quite there yet.

        It's also easier to scale with big iron - you pay for more performance, Big Blue delivers it, and you won't have to go through painful migrations.
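The gap between average and guaranteed transaction time described above is easy to see with made-up numbers: a workload that runs in 1 ms almost always, but occasionally stalls, still has a fine-looking average while badly violating a 10 ms guarantee:

```python
import statistics

# Hypothetical transaction times (ms): fast almost always,
# with rare 200 ms stalls (GC pause, disk contention, etc.)
samples = [1.0] * 995 + [200.0] * 5

mean = statistics.mean(samples)          # ~2 ms: looks great on average
worst = max(samples)                     # 200 ms: 20x over a 10 ms guarantee
violations = sum(t > 10.0 for t in samples)

print(f"mean={mean:.3f} ms, worst={worst} ms, >10ms: {violations}/1000")
```

This is why a guaranteed 10 ms can be worth more than an unguaranteed 1 ms average: the mean hides exactly the tail events that real-time and transaction-deadline workloads care about.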

    • by jythie ( 914043 )
      Not sure I would say the 'rest of the world'. Some companies seem to be playing with cloud computing and it is a popular buzz word, but most companies still choose between mainframe or a rack full of servers. Cloud stuff is not known for reliability or flexibility of application.
    • by nurb432 ( 527695 )

      Yes, mainframes are still used. They still have their place in the world.

  • ...can I get a mainframe for $5 shipped on BuyItNow?

    (I wish!)

  • by Anonymous Coward on Sunday February 12, 2012 @03:40PM (#39012687)

    Daisy, Daisy, give me your answer, do,
    I'm half crazy for the love of you...
    [sound fades away]

  • by Neil_Brown ( 1568845 ) on Sunday February 12, 2012 @03:41PM (#39012701) Homepage
    all about space saving?
  • by Animats ( 122034 ) on Sunday February 12, 2012 @03:41PM (#39012703) Homepage

    NASA still has a big data center in Slidell, Louisiana. They're hiring. [jobamatic.com] With the mainframes gone, one would expect they'd close down Slidell, but no. Instead, they're building a big museum and PR center [infinitysc...center.org] there.

    NASA seems to spend money at a relatively constant rate, independent of whether they're flying anything.

    • NASA seems to spend money at a relatively constant rate, independent of whether they're flying anything.

      Which makes them no different from any other government agency.

      NASA should be disestablished, and its responsibilities farmed out to other agencies. Give space launch to the Air Force and Navy, and science functions to universities and other research agencies.

    • by sirwired ( 27582 ) on Sunday February 12, 2012 @04:04PM (#39012879)

      I don't follow why a data center would be kept open for one puny mainframe (or closed because it's gone.) I'm pretty sure there's other stuff there. A modern mainframe is about the size of three deep rack cabinets. Even with associated storage and support peripherals, I could fit a complete mainframe installation in my living room. I doubt the only thing in the data center was the mainframe.

      Also, NASA stands for National Aeronautics and Space Administration, NOT National Manned Space Flight Agency. They DO accomplish lots of other stuff other than manned space flight.

    • Did you bother to actually read the pages you link to?

      In the first place, the 'data center' in Slidell (if that's what it really is) seems to be part of the Stennis Space Center and to have a lot more going on than just housing servers (if you bother to read the job listings you linked to).

      Then, if you bother to read the other web page you linked to: NASA isn't building anything. Though NASA owns the land, they haven't paid a thin dime towards the science center; it's run by a nonprofit.

  • TFA (Score:5, Informative)

    by ldapboy ( 946366 ) on Sunday February 12, 2012 @03:46PM (#39012751)
    The cited page is a copy/paste of Linda Cureton's blog post. Lame and uncool to copy someone's article whole without a link, don't you think, even if they are paid with taxpayer $$? Here's the original article: http://blogs.nasa.gov/cm/blog/NASA-CIO-Blog/posts/post_1329017818806.html [nasa.gov]
  • Makes sense... (Score:5, Insightful)

    by sirwired ( 27582 ) on Sunday February 12, 2012 @03:59PM (#39012849)

    For the workloads a mainframe is designed to perform, I can't imagine NASA would have much use for one. They are database and transaction processing monsters. NASA does not handle large volumes of either. I imagine their scientific computing needs are pretty fair-sized, but mainframes are indeed rather cost-ineffective for scientific workloads.

    • For the workloads a mainframe is designed to perform, I can't imagine NASA would have much use for one. They are database and transaction processing monsters.

      That's true today. But it hasn't always been true, especially back when NASA first got into mainframes. Nor are they limited to doing database and transaction processing.

  • by wisebabo ( 638845 ) on Sunday February 12, 2012 @04:00PM (#39012855) Journal

    I mean it's possible to run your old Commodore 64 or TRS-80 (or even Apple II?) software in a software emulator of these machines. And it's (mostly?) legal to do so? (BTW, anyone know of an Apple II emulator which will run the game "Epoch"?)

    So are there software emulators for an IBM 360 or VAX out there? Can I run them on my iPad? There might be some interesting software that you could play with; despite the primitive hardware, these systems were used to send man to the Moon, as well as to defend the U.S. against nuclear attack and run the IRS. (Getting this code might be a bit of a problem!)

    Even if there isn't a software emulator DIRECTLY for a mainframe to run on my iPad, what about one that'll run on a Pentium-class PC? Then is it practical to run THAT in emulation mode on my iPad?

  • by Anonymous Coward

    The JSC mainframe systems used to build and support the Shuttle flight software were shut down on July 29, 2011: the DEVS, PRDS, PATS, SDFC, SDFA, and RTF1 systems.

    These systems had been used since May 6, 1981 (no, not the same computers) under a NASA contract. Photos of the servers were taken. Yes, they are just as boring as they sound.

    It was sad to see the tape silo nearly empty when it would normally hold hundreds or thousands of tapes.

    We have a support group on LinkedIn.

  • Remember the Tao... (Score:5, Interesting)

    by Anonymous Coward on Sunday February 12, 2012 @04:29PM (#39013099)

    There was once a programmer who worked upon microprocessors. "Look at how well off I am here," he said to a mainframe programmer who came to visit, "I have my own operating system and file storage device. I do not have to share my resources with anyone. The software is self-consistent and easy-to-use. Why do you not quit your present job and join me here?"

    The mainframe programmer then began to describe his system to his friend, saying "The mainframe sits like an ancient sage meditating in the midst of the data center. Its disk drives lie end-to-end like a great ocean of machinery. The software is as multifaceted as a diamond, and as convoluted as a primeval jungle. The programs, each unique, move through the system like a swift-flowing river. That is why I am happy where I am."

    The microcomputer programmer, upon hearing this, fell silent. But the two programmers remained friends until the end of their days.

  • Decommissioned computers for whom?
  • by billybob_jcv ( 967047 ) on Monday February 13, 2012 @12:40AM (#39016165)

    ...we had an IBM consultant who worked onsite doing the care & feeding of our IBM 390. He would spend most of his day running diagnostics and printing usage reports. I remember looking at some of his reports sitting next to the printer, and the vast majority of the time the only job running was his diagnostics program...
