Hosting Problems For distributed.net

Yoda2 writes "I've always found the distributed.net client to be a scientific, practical use for my spare CPU cycles. Unfortunately, it looks like they lost their hosting and need some help. The complete story is available on their main page but I've included a snippet with their needs below: 'Our typical bandwidth usage is 3Mb/s, and reliable uptime is of course essential. Please e-mail dbaker@distributed.net if you think you may be able to help us in this area.' As they are already having hosting problems, I hate to /. them, but their site is copyrighted so I didn't copy the entire story. Please help if you can." Before there was SETI@Home, Distributed.net was around - hopefully you can still join the team.

  • Suggestion (Score:2, Informative)

    by Jouster ( 144775 )
    Could they just move the project over to SourceForge?

    Jouster
    • Re:Suggestion (Score:5, Informative)

      by hkhanna ( 559514 ) on Tuesday March 26, 2002 @04:53AM (#3227090) Journal
      No, because the distributed.net client needs to communicate on its own port in whatever internal protocol it uses. That's what causes the bandwidth usage, not the downloading of the client, if that's what you think.

      You can't put your own server software on sourceforge's servers, at least not to my knowledge, so all sourceforge would be good for is hosting the client downloads...which it might actually already do. Hope that answers your question.
      Hargun
    • I don't think so, since SourceForge is for open source development, and last time I checked they had at least some portions of their code closed to prevent people from cheating.

      I could be wrong though and I'm sure someone will point that out if so. Perhaps you could have some parts closed even on sourceforge.

      Soccer manager: Hattrick [hattrick.org]

  • Distributed hosting? (Score:5, Interesting)

    by gnovos ( 447128 ) <gnovos@ c h i p p e d . net> on Tuesday March 26, 2002 @04:46AM (#3227072) Homepage Journal
    Maybe they should go in for distributed hosting, like say one machine that just houses the IP address and a few thousand mirrors that the requests can be directed to as they come in. Not only is it a project that is just ASKING to be performed by distributed.net, but if they make some catchy point and click (i.e. EASY to use) clients that anyone with a large following can use, we might see the end of such things as Slashdot subscriptions and a resurgence of the "community" feel of the web.
    • Distributed computing != distributed hosting... I don't really know what you mean exactly by distributed hosting. You always have to get all your data back to the 'central location' to finally compile the 'answer'.

      Pretty much the same concept as any clustered computing: the pipes are always important, and no, you can't 'distribute' the connections.
      • Why the hell not? Machine A grabs a huge chunk of keyspace off of the main server. Machine B takes a subportion of the keyspace from Machine A. Machine C takes a subportion of keyspace from Machine B, ad nauseam. When a machine completes the checking of its key blocks, it reports back to the machine it acquired them from for consolidation. When the main server hears back from Machine A, it is a tiny packet saying that every key in that entire range has been checked and returned negative. One small packet instead of hundreds from each of the individual machines that actually processed them.

        This is only one simple configuration, for example purposes.

        You're still gonna need a host, but the bandwidth required will be nothing.
        • This is effectively what we already do with our keymaster, fullserver, personal proxy tiering. Personal proxies can be several layers deep if needed (many of our teams set up their own team servers using personal proxies).
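
The tiering described here (clients report to personal proxies, proxies to fullservers, fullservers to the keymaster) can be pictured as a tree in which each layer merges the results of the layer below it before reporting upward. The following is only a rough sketch under that assumption; the names and block sizes are made up, and this is not the actual distributed.net protocol.

```python
# Illustrative sketch of tiered result consolidation (hypothetical names and
# sizes, not the real distributed.net protocol).

def merge_ranges(ranges):
    """Collapse completed (start, end) key ranges into as few spans as possible."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1] + 1:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

class Proxy:
    """One tier in the hierarchy: collects results from below, reports a summary up."""
    def __init__(self, name):
        self.name = name
        self.completed = []          # (start, end) ranges reported from below

    def report(self, start, end):
        self.completed.append((start, end))

    def flush_upstream(self, parent):
        # One compact message per merged span instead of one per client submission.
        for span in merge_ranges(self.completed):
            parent.report(*span)
        self.completed.clear()

keymaster = Proxy("keymaster")
fullserver = Proxy("fullserver-us")
team_proxy = Proxy("team-proxy")

# Many small client submissions arrive at the team proxy...
for block in range(100):
    team_proxy.report(block * 2**28, (block + 1) * 2**28 - 1)

team_proxy.flush_upstream(fullserver)   # 100 submissions become 1 contiguous span
fullserver.flush_upstream(keymaster)    # and only 1 message reaches the keymaster
print(keymaster.completed)              # [(0, 26843545599)]
```

The only point of the sketch is that each tier forwards a merged summary rather than relaying every individual submission, which is what keeps the load on the topmost host small.
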
    • The problem with that is, they need a way to make sure that nobody is interfering with the blocks that are being processed; they don't need people cheating and so on, and they need a way to validate the blocks. That's why they have their own caches and so on.
    • Distributed hosting sounds like it would be good news for Dnet, but it would mean some complications.
      Each mirror site's code would need to have its own verification scheme to validate someone's completed blocks (which I think they have now). And it would have to be tamper-proof and/or use trusted mirrors, since faking a "done with block, this one wasn't it" in the location of the right answer (stat whores) would be a true setback for the project.
      Since the full block information wouldn't ever be compiled together on a central server, we might have to give up some of the details from stats. Not having the block-by-block count of every person's activity centralized for stat computation would probably cut back their bandwidth considerably.
    • Multiple problems (Score:5, Insightful)

      by Loki_1929 ( 550940 ) on Tuesday March 26, 2002 @05:18AM (#3227151) Journal
      There are numerous things you just couldn't "distribute." The keys have to be served from somewhere, they must be tracked in real-time from somewhere, and they must be accepted/processed somewhere. Stats must be compiled and then put into a single database. To distribute this to multiple computers would cause the amount of bandwidth used to rise to an extreme level, far beyond what it is now. (i.e. send out the info, let each node process it, receive the data from each node, hope to Christ it's right)

      Next, the integrity of the project gets called into question the moment you begin allowing clients to check processed blocks. The number of false positives could easily shoot through the roof. Also, a computer with bad memory or simply running a faulty OS (such as Win9x/ME) could overlook a true positive, thereby virtually obliterating the project (i.e. "we're at 100% completion with no result, guess we start over?")

      As stated above, stats would be impossible to do in this manner, and the same applies to key distribution. One could argue that the total keys be distributed among thousands of nodes and handed out from there, but you create more problems than you solve. You still need a centralized management location to keep track of keys that have or have not been tested. Imagine a node going offline permanently or simply losing the keys it was handed. Suddenly, a large block of keys is missing. As it stands now, the keymaster simply re-issues the keys to someone else after a couple of weeks of no response from the client it sent the original blocks to. Under a distributed format, the keymaster would have to keep track of which keys went to which key distributor, which of those came back, which of those need to be redistributed, where they... (you get the message.)

      Next you run into another problem of integrity. What's to stop each distributed keymaster from claiming its own client is the one that completed all blocks submitted to it? Consider this example: the central keymaster sends out 200,000 blocks of keys to keymaster node 101. Keymaster node 101 distributes these keys to a bunch of clients, which process the blocks, then send them back to keymaster node 101. Keymaster node 101, which has been modded slightly, then modifies each data block, changing the user ID to that of the keymaster's owner, thereby making it appear that any block coming back from keymaster 101 was processed by keymaster 101. It might be easy to spot, but then how do you find out who to give credit to?

      The webpage doesn't attract the majority of the bandwidth; the projects do. Distributing the projects would be disastrous, as many have already tried taking advantage of the current system to increase their block yields through modded clients. Luckily, this is easy to spot for now. Under a distributed system, it would be next to impossible. All this, and I've yet to mention the fact that the code would have to be completely re-written to work alongside a custom P2P application, which would add months of development to a project that probably only has weeks or months left in it.

      In short, someone host the damn thing, k? :)

      • To distribute this to multiple computers would cause the amount of bandwidth used to rise to an extreme level, far beyond what it is now. (i.e. send out the info, let each node process it, receive the data from each node, hope to Christ it's right)

        Of course there are protocols for distributed computing by which you need not hope to Christ, but can be quite confident that your results are correct. Good ones can compute formulae and withstand up to one less than 1/3 of the participants being active traitors. But on the other hand, their bit complexity is not exactly lowering your total amount of communication either...
    • Well, there is the Freenet Project [freenetproject.org].
    • Our network already uses a somewhat distributed model to spread out bandwidth demand as best as we can. You can see a bit of it if you look at our Proxy Status page at http://n0cgi.distributed.net/rc5-proxyinfo.html

      Each of the servers listed is in a different DNS rotation, grouped roughly into geographically named groups (which try to take into account general network topology/connectivity). The servers listed there (known as "fullservers") handle all of the data communication needs of the clients, and the fullservers in turn keep in contact with the "keymaster". The keymaster is the server responsible for coordinating unique work among all of the fullservers and assigning out large regions of keyspace to the fullservers (which in turn split up the regions and redistribute them to clients).

      The hardware that we had hosted at Insync/TexasNet was actually 3 machines, which together served several roles: our keymaster, one of our DNS secondaries, our IRC network hub, one of our three web content mirrors, and our FTP software distribution mirror (for actual client downloads).

      It's unfortunate that the change in management at Insync/TexasNet caused them to want to re-evaluate all of the freeloading machines that were receiving donated services (there were apparently several others besides us) and cut off anyone who wasn't paying. Regardless, it's a tough economy, and companies that want to survive have to look at where their costs are going and do their best to cut spending.
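
The downstream direction described above (the keymaster hands large regions to fullservers, which split them into client-sized blocks) amounts to simple range partitioning. A minimal sketch follows; the region and unit sizes are invented for illustration and are not distributed.net's real block sizes.

```python
# Rough sketch of splitting a keyspace region into progressively smaller work
# units at each tier. Sizes and names are assumptions, not dnet's real values.

def split(start, end, unit_size):
    """Yield (lo, hi) sub-ranges of at most unit_size keys covering [start, end]."""
    lo = start
    while lo <= end:
        hi = min(lo + unit_size - 1, end)
        yield (lo, hi)
        lo = hi + 1

REGION = (0, 2**40 - 1)          # region the keymaster hands to one fullserver
FULLSERVER_UNIT = 2**32          # what the fullserver keeps buffered
CLIENT_UNIT = 2**28              # a single client work unit ("block")

fullserver_buffers = list(split(*REGION, FULLSERVER_UNIT))
client_blocks = list(split(*fullserver_buffers[0], CLIENT_UNIT))

print(len(fullserver_buffers))   # 256 buffers per region
print(len(client_blocks))        # 16 client blocks per buffer
```
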
  • by flipflapflopflup ( 311459 ) on Tuesday March 26, 2002 @04:46AM (#3227076) Homepage
    You've now got 10,000 readers hovering over the link, "Ooh, should I, shouldn't I?", then thinking f**k it and clicking anyway.

    A slow, painful, prolonged, /.'ing ;o)
  • by Soft ( 266615 ) on Tuesday March 26, 2002 @04:50AM (#3227084)
    The RC5-64 challenge is currently [distributed.net] at 73%, moving fast. Can you imagine the project shutting down just now?
    • by Anonymous Coward
      Can you imagine them hitting 100% and realizing that due to a software bug, the correct key was already found but no one realized it?
      • Better yet, they never found the correct key due to a software bug, therefore, they have to fix it and start all over.
    • Who cares? (Score:3, Interesting)

      by athmanb ( 100367 )
      Honestly.

      We all know that eventually, the key is going to be found, and some stupid message will be deciphered ("Congratulations on solving the 64 bit challenge. blablabla")

      Why waste trillions of CPU cycles and thousands of $ in bandwidth to find something out that we already know is true?
  • No, they can't shut down yet! I [distributed.net] have to break 10,000 in the rankings!

    Good Lord, what shall I do? :(

  • Maybe it's time for the distributed computing power to get hosted by distributed computers?

    Seriously, what is the current state of p2p networking when it comes to serving common HTML pages?

  • The fact that such a big world-wide project is bound to be hosted near Austin shows that computing technology still has a long way to go...
    • I'm not really sure what to make of your comment. First, there's plenty of good connectivity in Austin, Houston, San Antonio, and Dallas. More importantly, we have a large concentration of staff in Austin, which is very important whenever physically working on the hardware is required.
  • What has it accomplished besides searching a keyspace of known size and Golomb rulers? Seti@home, cancer research, or that distributed raytracing screen saver are far more useful.
    • seti@home will only be useful if it finds something.

      Dnet has already confirmed the optimal Golomb ruler with 24 marks and is working on finding the optimal ruler with 25 marks. This information is IMMEDIATELY useful to people in many fields of science. I'd point you to their OGR page, but for the fear of /.'ing them.
      • dnet cracks keys by brute force: here's 10 keys, try them. Oh, they don't work? Here, have 10 more. They don't work either? Damn, have some more.

        It does that with a ton of people until it finds the right key. It will eventually crack every crypto they throw at it, because it's only a matter of time.


        Seti@home is searching for something that they don't even know is out there, and can you imagine the impact if they do find PROOF that there's life somewhere else? That's far more important than stupid crypto keys and such.


        The UD cancer treatment, while iffy because it's probably set up to benefit a company, still has a HUGE impact on EVERYONE'S life. I don't know anybody who hasn't had cancer or had a family member with cancer, and to find a cure!

    • Seti@home searches a fairly insignificant portion of the sky for a completely insignificant number of signals with an unoptimized application which does little more than make pretty color pictures on the screen.

      Cancer research? I've yet to see a viable distributed project for cancer research. By that, I mean an organized effort with real data, a complete and concise goal, and a clean method for reaching that goal. Distributed raytracing? More pretty pictures on the computer screen.

      You want to draw pretty pictures; I want to brute force an encrypted message to prove that current laws regarding encryption are draconian and need to be changed immediately. Gee, I can't imagine why anyone would think dnet is more useful than raytracing....

      • Um, insignificant portion of the sky? Do you know how large the Milky Way is, let alone the local galactic cluster? An even smaller field of view is just fine for a search for ET.

        I don't run any of the cancer dist. projects so someone else can answer that better than me.

        What you are doing is hardly worth the effort. It would be like someone memorizing an entire DVD in binary, then repeating it to fight the absurdities of the DMCA. Everyone knows how large the keyspace is, and no one is surprised that it is taking them this long to find the key.

        Seti@home has potential, but I agree it needs to look for multi-band signals at the least, as it currently looks exclusively in the hydrogen emission band. It would be like people in France, not having boats, looking across the ocean at Britain expecting to see a fire that reaches a certain height above the ground; it is a very limited idea of what we should be looking for.

      • by Graspee_Leemoor ( 302316 ) on Tuesday March 26, 2002 @06:51AM (#3227313) Homepage Journal
        "Cancer research? I've yet to see a viable distributed project for cancer research. By that, I mean an organized effort with real data, a complete and concise goal, and a clean method for reaching that goal. "

        http://members.ud.com/home.htm

        This is real research, worked on by United Devices, helped by the University of Oxford, Intel and the National Foundation for Cancer Research.

        It meets all your criteria- this is from their site:

        "The research centers on proteins that have been determined to be a possible target for cancer therapy. Through a process called "virtual screening", special analysis software will identify molecules that interact with these proteins, and will determine which of the molecular candidates has a high likelihood of being developed into a drug. The process is similar to finding the right key to open a special lock--by looking at millions upon millions of molecular keys."

        graspee

        • by BovineOne ( 119507 ) on Tuesday March 26, 2002 @08:14AM (#3227416) Homepage Journal
          Because distributed.net is a purely volunteer project, many of its staff also have their paid daytime jobs working for United Devices (who are responsible for the THINK Cancer project). That includes myself [distributed.net], Nugget [distributed.net], Decibel [distributed.net], Moose [distributed.net], and Moonwick [distributed.net].
        • by Anonymous Coward
          > will determine which of the molecular
          > candidates has a high likelihood of being
          > developed into a drug
          >
          Which will then be sold back to you at prices where dying from cancer is probably the better choice. Profits, amazingly, do not get donated to the Free Software Foundation but to lawyers fighting the demand for affordable generic drugs. The drug-empire CEOs meanwhile sip martinis floating on their yachts just a couple miles off the coast of the Kaposi Belt...
  • Copy + Paste (Score:1, Informative)

    by Anonymous Coward
    we need your help!

    URGENT: We have recently learned that our long-standing arrangement with Texas.Net (formerly Insync) would end at noon, Friday, March 22. Through an agreement with Insync, we were hosted at no charge for many years. Though we have tried to make other arrangements with them or to continue our current service until we can make other arrangements, in the end we had no choice but to move.

    Several of the Austin cows made a road trip Friday morning to retrieve our equipment from their colocation facility.

    We have no reason to complain about Texas.Net or their current decision. As a business, they chose to donate to us for a long time, and have now decided that they must stop. In dbaker's words in a letter to Texas.Net: "Our experience with Insync has been excellent; I've never been happier with an Internet provider. I've recommended them (and indirectly, Texas.Net) to everyone and even this [situation] won't change that."

    Though United Devices has kindly offered to colocate our primary servers for a short time at no expense, we find ourselves in the market for a new ISP. If any of our participants work for a major ISP in Texas (preferably within a few hours of Austin, but we're not picky), and would be willing to donate colocation space and connectivity, we would eagerly like to speak with you. Our typical bandwidth usage is 3Mb/s, and reliable uptime is of course essential.

    Please e-mail dbaker@distributed.net if you think you may be able to help us in this area.
  • Distributed net?

    Correct me if I'm wrong, but isn't this the outfit which is concerned with breaking low-grade crypto? How's that going to improve my daily life? I'd much sooner donate my CPU cycles to the evil international pharmaceutical corps, which do at least benefit cancer research. If you get a rash from commercial ventures, there's folding@home. It's more like basic research, so it won't produce any miracle cures, but it might eventually lead to research that could.

    But breaking crypto? Why?
    • Correct me if I'm wrong, but isn't this the outfit which is concerned with breaking low-grade crypto?

      If you consider 56/64/128-bit RC5 low-grade, yes.

      How's that going to improve my daily life?

      I have no idea what your daily life is like, but if it involves encrypting things you'd prefer stayed private, it should eventually help you in that aspect. Not to mention your boost of confidence as you follow your daily stats and see yourself advancing past others every day.

      But breaking crypto? Why?

      Because if a few thousand unspecialized computers can brute force the best encryption allowed by law with minimal optimization and research, then we have some good reasons to push for the law to be changed. Personally, I don't like the idea of the best encryption available to me being useful for all of 3 seconds while it's being broken. I don't usually have anything worth the effort of decrypting, but I like to think that when I do, it'll be worth my time to encrypt it.

      • Because if a few thousand unspecialized computers can brute force the best encryption allowed by law with minimal optimization and research, then we have some good reasons to push for the law to be changed.

        There's absolutely no evidence to suggest that the government will ever change crypto laws based on what happens at distributed.net.

        It's not like those in government who are responsible for consulting on these matters (NSA, etc.) aren't aware of the issues at play here with current export-level encryption -- if you think that they are somehow unaware of these issues and dnet is required to bring them to light, please pass the crack pipe.

        • I agree with you. It is a dated paragraph that distributed.net should have removed. It was a valid argument back in the days when 42-bit keys were the maximum allowed for exported systems.

          And 42 bits is clearly too weak. Today, when 128 bits is common and allowed to be used by almost the entire world, it's not an issue anymore.
      • Because if a few thousand unspecialized computers can brute force the best encryption allowed by law with minimal optimization and research, then we have some good reasons to push for the law to be changed.

        That might have been true when d.net was working on DES, but things have changed.

        I think a more accurate wording would be

        Because if over ten thousand computers, working for three years, can't brute force 64-bit encryption, when 256-bit encryption is readily available [rijndael.com], then we have very little reason to push for the law to be changed.
    • Re:Practical? (Score:2, Insightful)

      by DaveSchool ( 154247 )
      If you find the key, you get $2000. I don't know about you, but that would sure improve MY daily life.
  • I have always found distributed.net to be a relatively structured organization. Their software with personal proxies made joining much easier than the Seti project, especially for people behind corporate firewalls. Small, unobtrusive clients (especially for the DES/RC5 projects) for a LOT of platforms.

    It would be a shame to see them disappear. They have a lot of cumulative computing power, and it ought to be put to real use.

    Ah, the days of installing the res/rc5-42 clients on lots of 386 and 486 machines and actually having them do some real computing....
    • You know, it's funny. I ran RC5-64 in 1997 on my pentium 200 for over a year. I recently, about 40 days ago started doing RC5-64 again, but this time with my cluster of seven Duron 900s. I surpassed my old account in about 15 days. Amazing how much faster the computers are now.
  • I find this an exercise in futility; if the protocols used to transmit the data are not available to /.ers, we cannot suggest a scheme that would be meaningful. If the blocks are indexed, and all that's returned is an "index <X> complete" message, then a system of proxies sending messages like "indexes 1217-1250 completed by my subnodes" to the main server once every hour makes sense. If, on the other hand, the bulk of the data is used to verify that processing actually occurred, and that it occurred with the official client (which I suspect is the case), we would need to know details of the data being passed back and forth in order to help.

    I know that I, for one, have boxen and bandwidth to pull off 3 Mb/s of CPU-intensive network traffic 24/7, but I'm not about to devote my precious resources to something that I don't understand, especially when I haven't even had the chance to ascertain that a solution that utilized my donated resources was, in fact, the best one.

    Jouster
    • After running a perproxy for over a year now, I think I can speak to this.

      Each 'message' to the keyserver is more like 'ipaddress,date,username,keyrange,size of key range,client version'.

      They do work in ranges, and dnet has been working to make those ranges larger but not too large (larger == lower bandwidth, but more time needs to be spent cracking each range). If it takes too long to crack a range, that range risks being recycled before the user submits it. It's a very dynamic system that they've been working on for many years now, and it seems to be doing well. Maybe they could tweak some more for bandwidth, but that would be a question better asked of the fine dnetc staff.
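
A minimal sketch of the recycling idea mentioned above: the server remembers when each range was issued and re-issues any range that has not come back within a deadline. The field names, the two-week deadline, and the block size are assumptions for illustration, not the actual dnet/proxy implementation.

```python
# Hedged sketch of key-range recycling (hypothetical details throughout).
import time

RECYCLE_AFTER = 14 * 24 * 3600           # assumed deadline before re-issuing a range

class KeyServer:
    def __init__(self):
        self.outstanding = {}            # (start, end) -> time the range was issued
        self.pending = [(i * 2**28, (i + 1) * 2**28 - 1) for i in range(1000)]

    def issue(self):
        now = time.time()
        # Prefer recycling stale ranges over handing out fresh ones.
        for rng, issued_at in self.outstanding.items():
            if now - issued_at > RECYCLE_AFTER:
                self.outstanding[rng] = now
                return rng
        rng = self.pending.pop()
        self.outstanding[rng] = now
        return rng

    def submit(self, rng):
        # A completed range no longer needs tracking; late duplicates are ignored.
        self.outstanding.pop(rng, None)
```
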
    • I, for one, have boxen and bandwidth to pull off 3 Mb/s of CPU-intensive network traffic 24/7

      Sweet! What sort of connection is that? The cable modem provider in my area offers very limited "business" symmetric connections up to 5 Mbps, but they charge dearly for it. A lot cheaper than a fractional T3, though.
    • The "keymaster" (the machine that utilizes the ~3Mbit/sec) already distributes larger regions of uncomputed work to all of the "fullservers", which are the ones that in turn distribute the actual work to clients after splitting the blocks into sizes that correspond to what is needed by clients. You can see the list of all of the fullservers at http://n0cgi.distributed.net/rc5-proxyinfo.html

      All of the chatty, multi-step network communications overhead of dealing directly with the clients is handled at the fullserver level, including doing a windowed-history based coalescing of result submissions.
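
"Windowed-history based coalescing" is not spelled out here, so the following is only one plausible reading of it: keep a bounded history of recently seen submissions and drop duplicates arriving within that window instead of forwarding them upstream. All names and the window size are hypothetical.

```python
# One possible (assumed) reading of windowed-history coalescing of submissions.
from collections import OrderedDict

class CoalescingWindow:
    def __init__(self, max_entries=10000):
        self.max_entries = max_entries
        self.seen = OrderedDict()            # submission id -> None, in arrival order

    def accept(self, submission_id):
        """Return True if this submission should be forwarded upstream."""
        if submission_id in self.seen:
            return False                     # duplicate within the window: coalesce away
        self.seen[submission_id] = None
        if len(self.seen) > self.max_entries:
            self.seen.popitem(last=False)    # forget the oldest entry
        return True

window = CoalescingWindow()
print(window.accept(("0x1a2b0000", "client-42")))   # True: first sighting, forward it
print(window.accept(("0x1a2b0000", "client-42")))   # False: duplicate, drop it
```
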
  • by crudeboy ( 563293 ) on Tuesday March 26, 2002 @05:22AM (#3227163)
    I think the use of spare cpu cycles is an excellent way to support science, but...
    For some time the only one around was seti@home, which analyzes noise from space, I think, in search of alien lifeforms; then there's distributed.net doing crypto and math stuff (correct me if I'm wrong). And then there are people like Intel running medical research in areas like cancer and Alzheimer's.

    I don't know about you, but to me medical research feels somewhat more beneficial to humanity than the search for aliens. Don't get me wrong, I'm not saying that the work done by seti and distributed isn't important or shouldn't be done, just that there's other research that might be more worthwhile to support.

    That's just my opinion, but if you feel the same way, check out this site [intel.com].

    • by Sircus ( 16869 ) on Tuesday March 26, 2002 @05:35AM (#3227186) Homepage
      You're wrong, so I'll correct you :-)

      d.net was around a long time before SETI@home - I've personally been running the client since 1997. SETI@home launched on May 13, 1999 (though they were fundraising and doing development for a couple of years before that).

      I'm personally strongly interested in cryptography for various reasons, so d.net gets my processor time. I seem to recall various people have concerns about how exactly the cancer project will use the eventual data it collects - i.e. whether the products produced as a result of the project will be commercially exploited - they don't want companies just using this large distributed network to make a fast buck.
      • Heh, I stand corrected :-)

        Thanks for pointing out the errors in fact, but still the cancer research appeals more to me personally even though I share the general concerns about the use of the results.

      • by mosch ( 204 ) on Tuesday March 26, 2002 @09:33AM (#3227649) Homepage
        If you hold an interest in cryptography, then you should realize that d.net is an incredibly boring application. It does the cryptographic equivalent of proving that it's possible to count to a million, by ones. It's absolutely useless.

        If d.net did something interesting, like attempt to find an improved factoring algorithm, or to find a way to perform interesting analysis on ciphertext, then it would be useful. Right now though, it's a 100% useless application.

        Think for a moment about what d.net truly does, and tell me with a straight face that it's interesting to either a cryptologist or a cryptanalyst.

        If you want to help somebody with your spare cycles, you can help cure diseases [intel.com] or if you're so inclined, you can perform FFTs on random noise. [berkeley.edu] Don't try to tell me that d.net helps anything though; you're kidding yourself if you think so.

        • by athmanb ( 100367 ) on Tuesday March 26, 2002 @10:30AM (#3227953)
          By proving that RC5-56 can be broken by simple home PCs (with an algorithm as simple as what you call "counting to a million by ones"), they IMHO did a lot to educate lawmakers that the age-old U.S. export restrictions had to be overturned.
          And they succeeded in this.

          What I however don't understand is why they kept doing their cryptography projects afterwards. Proving that RC5-64 is breakable while you can buy 256 bit encryption freely is indeed just a stupid waste of CPU cycles and bandwidth.

          I'd like to see them discontinue RC5-64, and concentrate their work on OGR and maybe on other, new projects.
        • Not completely useless... it gives you an idea of how long it takes to count to a million, by ones, on general-purpose, widely available hardware.

          For example, they showed that RC5-56 was not terribly secure since it "only" took 250 days; similarly for DES in 22 hours (and I think they did RC5-48 before RC5-56). However, I think the time they've been taking on RC5-64 (over four years now, and nearly another year to go to exhaust the keyspace) shows that that key length is still fairly secure against "casual" hackers.

          In conclusion, I think that d.net does help something -- it tells people "56 bits bad, 64 bits still decent". IMO.
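
The comparison can be made concrete with back-of-the-envelope arithmetic using only the 250-day RC5-56 figure quoted above; everything else follows from it.

```python
# Rough arithmetic: what RC5-64 would take at the average RC5-56 keyrate.

SECONDS_PER_DAY = 86400

rc5_56_keys = 2**56
rc5_56_days = 250                                              # figure from the comment
avg_rate = rc5_56_keys / (rc5_56_days * SECONDS_PER_DAY)       # keys per second
print(f"average RC5-56 rate: {avg_rate:.2e} keys/s")           # ~3.3e9 keys/s

rc5_64_keys = 2**64                                            # 256x the keyspace
years_at_same_rate = rc5_64_keys / avg_rate / SECONDS_PER_DAY / 365
print(f"RC5-64 at that rate: ~{years_at_same_rate:.0f} years") # ~175 years
```

That the real effort looks like finishing in a handful of years rather than centuries is a rough measure of how much the aggregate keyrate has grown since the RC5-56 days.
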

          • However, I think the time they've been taking on RC5-64 (over four years now, and nearly another year to go to exhaust the keyspace) shows that that key length is still fairly secure against "casual" hackers.

            Depends on your definition of casual, but in any case, determining how long a brute force attack might take is still useful. Many security experts use 20 years as the benchmark for how long something should be safe from an attack. In the extreme, this means that if we are able to complete the RSA challenge before 2016 or so (I don't remember exactly when they offered the challenges), then RC5-64 isn't secure.

            Admittedly, that's a very extreme view, but given the progress that a group of volunteers has been able to make against RC5-64 I hope it shows that nothing that needs long-term protection should be encrypted with RC5-64 (imagine how long it would take to brute force RC5-64 in 2010, for example).
        • Half of cryptography is (and pretty much always has been) politics. d.net is, in my eyes, a political project. Sure, its political point was more trenchant at the time of RC5-56, but the escalating keyrate still makes a good point about the folly of limiting export key length now.
        • Have you even LOOKED at d.net in the last two years or so? I'd have to guess not, or else you missed OGR [distributed.net], which can be used for exactly the kind of things you're asking for!
      • You know, I would help out with all this distributed computing stuff, but my spare CPU cycles are all taken up running multiple instances of Progress Quest [progressquest.com].
    • I always figured dnet was on the way to UD anyway. This [ud.com] seemed to imply that, but I guess it was just people they took, not the project. I guess there's no money in cracking crypto ;) The idea of distributed computing has been proven, and I think the original goal of allowing stronger crypto standards in the US has been achieved as well(?), so now it's on to more useful tasks.

      I still like seeing my clients from my first job 4 years ago still submitting packets; it gives me a nice feeling ;) Wish I'd never used my real email address though.
    • ...by VAPORIZING us!! YEEAAARRGGHH!!!
    • Don't forget another practical distributed project. Stanford's protein folding project: folding@home [stanford.edu]
    • I was interested in what you said about Intel & their distributed cancer research, so I checked it out. Unfortunately, their site is a little scarce on the details of who this research benefits.
      However, it does mention that finding drugs to combat various diseases is a first priority. So I assume that a particular pharmaceutical company would benefit from this, as would the small percentage of people with cancer who also have private health insurance.
      I would want my CPU time going into open-source medicine, and not into someone else's patent that will be abused to make the most money possible.
      I'm not saying that this is the case with Intel's distributed cancer-curing client, but it kinda looks like that, given the lack of details about beneficiaries.
      Anyone know for sure?
      I might email them...
    • I agree that the medical research might be more worthwhile to support, but AFAIK there are only Wintel clients available. (Case in point, United Devices [ud.com] and even your own link to Intel [intel.com].)

      That leaves an awful lot of non-Intel boxes, and even non-Windows Intel boxes, with spare cycles that can't participate. Until they have the option to do so, I anticipate a lot of cycles going to 'less worthy' causes...
  • by Trepidity ( 597 ) <delirium-slashdot@@@hackish...org> on Tuesday March 26, 2002 @05:45AM (#3227205)
    A continuous three Megabits per second works out to somewhere just under a Terabyte a month. Not going to be cheap.
    • Um, just under a terabyte, are you sure? When I calculated it, assuming they meant 3 Mbit/s (and not megabytes; who knows, they used an abbreviation and they're Americans...) and taking 30 as the average number of days in a month, I get 972,000,000,000 bytes per month. Divided by 1024^4 that's roughly 0.88 terabytes, which isn't exactly "just under a terabyte", I'd say. It's nevertheless a lot -- too much for almost any company in Europe, I think. Out of curiosity, how were they able to afford this until now? What's the price of traffic in the States? Or, asked differently, how hard is it to get a green card these days ;)?
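
For reference, the monthly-volume arithmetic for a sustained 3 Mbit/s link, assuming a 30-day month:

```python
# 3 Mbit/s sustained for a 30-day month, converted to bytes.

BITS_PER_SECOND = 3_000_000
SECONDS_PER_MONTH = 30 * 24 * 3600

bytes_per_month = BITS_PER_SECOND / 8 * SECONDS_PER_MONTH
print(bytes_per_month)            # 9.72e11 bytes (972 GB)
print(bytes_per_month / 1e12)     # ~0.97 decimal terabytes
print(bytes_per_month / 2**40)    # ~0.88 binary terabytes (TiB)
```

So whether this counts as "just under a terabyte" mostly depends on whether you mean decimal or binary terabytes.
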
    • A continuous three Megabits per second works out to somewhere just under a Terabyte a month. Not going to be cheap.

      It's going to run about $600/month plus server costs. Bandwidth prices have been dropping rapidly over the past year.

  • more than 160000 PII 266MHz computers

    This is a present? For me?

    Cool, but they should have presented me with 10,000 Athlon 2000+s plus 10,000 GeForce4s and 10,000 Game Tits XP instead. I mean, does anybody know why they use a 266 MHz PII as the reference machine?

    • Re:Don't laugh! (Score:2, Insightful)

      by pne ( 93383 )

      Because they started RC5-64 over four years ago and probably didn't change their frame of reference since then, only the multiplier.

      Sort of like how some PC magazines do benchmarks of things such as hard drives with old systems, to ensure that you can compare last week's results with some numbers published two years ago, in a semi-meaningful way since the only thing changed is the different hard drive.

    • it sure does sound more impressive to a non-tech than 11000 PIII Xeon 800s or whatever the equivalent would be.
      Big number in the front... ooohh look, shiny things.
  • by Anonymous Coward
    Just think of how many cpu cycles will be wasted if they are forced to shut down... boggles the mind!
  • by Skuto ( 171945 )
    Distributed.net has gotten to be a more or less pointless project by now.

    Originally, the point they wanted to make was that 64-bit RC5 was not strong enough to protect privacy.

    They started, what, 4-5 years ago? About 30 000 computers running for 4 years can't break 64-bit encryption. Geez, I'd say that, if anything, the conclusion would be that 64-bits is plenty for shopping etc. unless you've got some really _big_ secrets. Certainly plenty for day-to-day mail. More or less the opposite of what they wanted to prove.

    Nowadays they've added the OGR stuff to appear at least a bit more useful, but in reality, the applications of those results are very limited.

    Really, the right thing to do is not to waste power on such pointless projects.

    --
    GCP (Moderation suggestion: -1 Disagree)
    • I'm not sure that it is a pointless project. The dnet project is quite interesting mainly because of the scale of the thing and the fact that they've managed to balance it all.

      I would be very surprised if the future of the 'net didn't focus on sharing computational power, distributed computing and storage en masse. As such, the distributed.net effort is a great starting place to learn from.

      Of course, there are still hundreds of other issues not covered by dnet. The reality of the 'net is that it is STILL in its infancy. It has a lot of growing up to do, and a lot of issues still need to be solved that are currently buried deep in its complexities. Search engines, for example: as the web grows, these become increasingly flaky.
  • In most areas of the country, a single rack in a colo / exchange facility costs $ 1500 per month or less, and 3 Mbps would cost ~ $ 1200 per month. They didn't say how many racks they need, but at that bandwidth, my guess is no more than one or two.

    So, they have been getting $ 3000 per month or more of free bandwidth and rack space.

    IMHO, if their work is really important, they should be able to raise $ 36K per year from the crypto community.
  • It's just a suggestion, but wouldn't it make sense just to link to the Google mirror, rather than the site itself?

    Of course, don't bother trying if Google hasn't had time to cache the site yet...
  • Do people running their own keyservers for their teams help with the bandwidth at all? If they requested (required?) that each team over a certain size run their own keyserver, might it help?
    • Re:keyservers? (Score:3, Informative)

      by BovineOne ( 119507 )
      Running our personal proxy [distributed.net] for large teams (particularly if they are all at a single corporation or a single school) can indeed help, because it reduces some of the overhead of communications with each individual client. There is also some optimization done by the personal proxy to allow it to request larger blocks of work and partition it into smaller portions when it finally distributes to the actual clients.

      However, this doesn't reduce the bandwidth at the keymaster any further, since this sort of splitting is already also being done at a larger scale between the keymaster and fullservers (and the bandwidth issue is with the keymaster, not the fullservers).

  • I run the Folding @ Home client on Linux, and it runs quite well!

    I prefer to use my spare cycles for Medical research.

    http://folding.stanford.edu [stanford.edu]



  • I just saw this statement at the bottom of their front page:


    distributed.net and United Devices join forces: distributed.net and United Devices have announced a partnership which will combine the skills and experience of distributed.net with the commercial backing of United Devices. Several distributed.net volunteers are leaving their old day jobs and joining United Devices full time. United Devices will be providing distributed.net with new hardware and hosting services, as well as sponsoring a donation program that will help support distributed.net's charitable activities."


    I guess they are okay for the time being?
    • Re:Issues Resolved? (Score:5, Informative)

      by BovineOne ( 119507 ) on Tuesday March 26, 2002 @09:20AM (#3227590) Homepage Journal
      Although United Devices is currently graciously hosting some of the displaced distributed.net hardware temporarily, they've indicated that they are not willing to do this long term (which is quite a reasonable decision, since it is a lot of bandwidth).

      Note that several of the distributed.net volunteer staff (including myself) do indeed work for United Devices during the day, and that our employment there began awhile ago (more than 15 months ago), so that partnership announcement is not really related.
  • by karlm ( 158591 ) on Tuesday March 26, 2002 @09:50AM (#3227759) Homepage
    Finally I've got a good excuse for not carefully reading the article :-)

    Their site is popular enough that it would seem to be a good time to experiment with moving the HTTP stuff to Freenet, since it's only updated once per day. The people willing to download the dnet client would seem to be some of the people most willing to download the Freenet client. Freenet is designed so that the Slashdot effect actually increases reliability and speed of access for commonly requested data. Distributed.net would seem to have reached a critical mass of readership to have reasonable reliability for its Freenet page. You could have the client get your team and individual scores sent to it as part of the block submission confirmation.

    It would seem to me that they could arbitrarily reduce their bandwidth requirements by increasing the minimum size of the keyspace portions they're handing out. It would seem that their per-work-unit traffic is (or could be made) the same for each work unit, regardless of the size of the work units. Bigger work units are really only a problem for clients that are turned off and on regularly. The client still only needs to keep track of the current state (the current key in the case of RC5), the final state of the work unit (the last key to check for RC5) and the current checksum for the work unit. None of these change in memory requirements as you increase work unit sizes. 99% of the people don't know the work unit size anyway, so changing it won't cause many people to complain, particularly if it's necessary to keep dnet hosted.

    Unless I'm mistaken, the server really only needs to send the client a brief prefix identifying the message as a work unit, followed by "start" and "stop" points for the computation. For RC5, this would mean a 64-bit starting key and a 64-bit ending key. I haven't sat down and worked out the canonicalization scheme for Golomb rulers, but it seems that they are countable (in the combinatorics sense, not the kindergarten sense) and could be represented fairly compactly. The current minimum ruler length need not be sent, since you'd probably always want the client to send back the minimum ruler length in its work unit anyway. The client would need to send back a work unit identifier (this could be left out, but it's not strictly safe) and an MD5 sum of all of the computational results, or some other way to compare results when they duplicate work units. (A certain percentage of the work units are actually sent to multiple clients in order to check that everyone is playing fairly.)
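
As a rough illustration of how compact such messages could be, here is a sketch of a work-unit assignment and a result reply along the lines karlm describes (a message-type prefix, 64-bit start and end keys, and an MD5 digest of the results). The exact field layout is invented for the example and is not the real distributed.net wire format.

```python
# Hypothetical compact encoding of a work-unit assignment and its result.
import hashlib
import struct

ASSIGNMENT = struct.Struct(">B Q Q")     # 1-byte message type, 64-bit start key, 64-bit end key
RESULT = struct.Struct(">B Q Q 16s")     # type, start, end, 16-byte MD5 of the results

MSG_ASSIGN, MSG_RESULT = 1, 2

def make_assignment(start_key, end_key):
    return ASSIGNMENT.pack(MSG_ASSIGN, start_key, end_key)

def make_result(start_key, end_key, result_summary):
    digest = hashlib.md5(result_summary).digest()
    return RESULT.pack(MSG_RESULT, start_key, end_key, digest)

assignment = make_assignment(0x0, 0x0FFFFFFF)
print(len(assignment))                   # 17 bytes to hand out ~268 million keys
reply = make_result(0x0, 0x0FFFFFFF, b"summary of per-key outcomes")
print(len(reply))                        # 33 bytes coming back
```
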

  • The original idea of distributed.net dates back to when the government was conspiring to restrict the number of bits in encryption and students protested that 64 bits wasn't enough. Well, it may be technically breakable, but economics made it unbreakable in the end.
  • We have colocation bandwidth [tiernetworking.com] for $87 per 1 Mbps with 99.95% uptime SLA. We have a secondary connection for $262 per 1 Mbps with 99.99% uptime SLA.

    Please feel free to email sales@tiernetworking.com [mailto] for more information.

  • I sent an e-mail to my guys at pair.net and they said they would look into it. They also said thanks for pointing the site out. Maybe some of you guys can try some other hosting sites? Worth a shot!

    JOhn
