Message boards : Number crunching : Goodbye Rosetta@home
Author | Message |
---|---|
aginghippie Send message Joined: 20 Oct 06 Posts: 1 Credit: 27,099,143 RAC: 0 |
I just found out today--in a rather classless fashion--that Rosetta@home just can't figure out how to accommodate the legions of PPC machines that are still quite useful. While I've always found Rosetta@home to be one of those "Sekrit Boyz Klub" Winderz-centric shops, I could put it aside because of the greater good of the project. At some point, however, one gets fed up with the "We support everything--as long as it's Winderz" attitude. I will be pulling ALL my machines off this project beginning now; I expect to be done before the New Year begins. Goodbye Rosetta. Hello World Community Grid. aginghippie |
Send message Joined: 17 Sep 05 Posts: 815 Credit: 1,812,737 RAC: 0 |
You can also consider POEM, Superlink, and um, ... SIMAP ... All doing this Bio stuff ... as is WCG ... Docking? I can't recall if they have OS X or not ... Anyway, there are still a few projects out there ... including one that seems to run better on a G5 than it does on my "faster" Intel Mac Pro, and that is Wanless ... |
Send message Joined: 2 Jul 06 Posts: 2842 Credit: 2,020,043 RAC: 0 |
A great loss to the project. Thanks for your past contributions and for staying with bio research. I just found out today--in a rather classless fashion--that Rosetta@home just can't figure out how to accommodate the legions of PPC machines that are still quite useful. |
The_Bad_Penguin Send message Joined: 5 Jun 06 Posts: 2751 Credit: 4,271,025 RAC: 0 |
Sorry to lose you at Rosie, but glad you're staying with distributed computing / BOINC. FWIW: SIMAP is one of my favorite projects. Hope you'll take a look at it. |
LizzieBarry Send message Joined: 25 Feb 08 Posts: 76 Credit: 201,862 RAC: 0 |
I just found out today--in a rather classless fashion--that Rosetta@home just can't figure out how to accommodate the legions of PPC machines that are still quite useful. Far be it from a relative noob like me to second-guess someone whose credits demand an enormous amount of respect, nor would I knock WCG or the other projects that are doing great work (and which I'll give time to as well once I'm a bit more established), but it's a disappointing irony that you worry about the 'classless fashion' in which you heard about the inability (or unwillingness) to support PPC machines, then decide to pull your whole farm from the project in what amounts to little more than a tantrum. Of course, remove your PPC machines to another project immediately - that only makes sense. But all of them? The term 'classless' isn't one I'd normally use about anyone or anything, but it seems to work in all manner of ways... Best wishes for the new year and hoping for a bit more clarity in your decision-making. |
Send message Joined: 21 Sep 05 Posts: 55 Credit: 4,216,173 RAC: 0 |
Not quite good-bye from me. Have deleted my BOINC "install" (manual download and unzip / chmod / .profile change etc.) and used Synaptic to install the BOINC that comes with Intrepid Ibex (Ubuntu 8.10). Have also split my time between Rosetta and POEM in the ratio 2:1. Just want to see if the Computation Errors and SIGSEGVs go away using the Synaptic version of BOINC, as it did appear to download a couple of packages that I'd not seen before. Maybe it was just me all along... let's see, shall we... Cheers |
AMD_is_logical Send message Joined: 20 Dec 05 Posts: 299 Credit: 31,460,681 RAC: 0 |
It looks like aginghippie has crunched a very impressive 27,095,192 credits for Rosetta. Looking at the list of his computers active in the last 30 days, the PPC machines have a total of only 17,666 credits. That's less than 1/1000 of his total. I don't see what the fuss is about. |
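For readers who want to check that ratio, here is the arithmetic using just the two figures quoted in the post above:

```python
# Quick check of the claim above, using only the two numbers quoted in the post.
ppc_credits = 17666
total_credits = 27095192
fraction = ppc_credits / total_credits
print(fraction)             # ~0.00065
print(fraction < 1 / 1000)  # True -- indeed less than 1/1000 of the total
```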
Send message Joined: 30 May 06 Posts: 5691 Credit: 5,859,226 RAC: 0 |
Who knows, but he made up his mind to leave, so that is the end of him until he cools off and maybe comes back. It looks like aginghippie has crunched a very impressive 27,095,192 credits for Rosetta. Looking at the list of his computers active in the last 30 days, the PPC machines have a total of only 17,666 credits. That's less than 1/1000 of his total. |
Send message Joined: 25 Dec 08 Posts: 9 Credit: 137,501 RAC: 0 |
Who knows, but he made up his mind to leave, so that is the end of him until he cools off and maybe comes back. Isn't the fuss about not getting additional work on those PPC machines? Why leave the farm intact on a project that will not provide any additional work units? It seems pretty clear from my point of view... -- Pugs Helping Man by Sleeping on Couches |
Send message Joined: 30 May 06 Posts: 5691 Credit: 5,859,226 RAC: 0 |
Who knows, but he made up his mind to leave, so that is the end of him until he cools off and maybe comes back. We are saying: take your PPC machines off R@h and attach them somewhere else, but why take your good, hardworking Windows and Linux machines off? |
Send message Joined: 17 Sep 05 Posts: 815 Credit: 1,812,737 RAC: 0 |
Well, I do not know about Linux, but I am certainly taking all my Windows machines off ... I just killed two more tasks that were over the clock, one at 13 hours and the other at 6 something ... Not sure if the project is paying any attention, but they might want to look at the code differences between OS X, which has not had an overrun that I can tell (40 some tasks), and Windows, which can't seem to avoid it ... It is a little sad to me because I was going to drop a couple hundred thousand CS into this project ... Oh well, I will do my other projects and come back in 6 months or so and see if things are different ... |
Send message Joined: 31 Dec 08 Posts: 3 Credit: 118,213 RAC: 0 |
Hello. Traffic is bidirectional. I am gradually closing down some WCG projects and trying this one. I must admit that WCG has some good projects, too. One has an initial replication of 1! But yesterday I realized that WCG uses an initial replication of 19 (!) in many projects. That means the computing efficiency is a little bit over 5%. A few lost WUs now and then is very little waste compared to that. I used this as a positive example to calm down one frustrated cruncher who was angry about the buggy Clean Energy project. At that moment I didn't think it a bad thing to have 19 replicas, but my point of view changed when I noticed the whole thread had "disappeared". Now my only criteria are:
1. Benefit to mankind
2. Overall efficiency |
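For readers wondering where the "little bit over 5%" figure comes from: with an initial replication of 19 identical copies, only one copy contributes new science, so the useful fraction is roughly 1/19. A minimal sketch of that arithmetic, assuming the replicas really are identical (see Mod.Sense's reply below for why they often are not):

```python
# Rough efficiency estimate under the assumption that all N replicas of a
# workunit compute exactly the same thing, so only one of them adds new science.
def replication_efficiency(initial_replication):
    return 1.0 / initial_replication

print(f"{replication_efficiency(19):.1%}")  # ~5.3% -- "a little bit over 5%"
print(f"{replication_efficiency(1):.1%}")   # 100.0% for a project with replication 1
```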
Dagorath Send message Joined: 20 Apr 06 Posts: 32 Credit: 29,176 RAC: 0 |
Hello. Traffic is bidirectional. I am gradually closing down some WCG projects and trying this. If overall efficiency is one of your criteria then you will want to avoid LHC@home too. They also waste a lot of CPU cycles needlessly. And if you complain about it they just make the thread "disappear". It really is time for crunchers to band together and boycott the projects that abuse our valuable donation. BOINC FAQ Service Official BOINC wiki Installing BOINC on Linux |
Mod.Sense Volunteer moderator Send message Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0 |
Pentti, you are correct that Rosetta does not use a quorum system. So, each specific sequence of calculations is only performed once across all of the participants. The one exception to that would be if a task gets resent due to a missed deadline or a failure. But that is a very small percentage.

However, I wanted to point out that some projects with large numbers of apparently identical work have actually set things up so that the work is not identical. Many projects, including Rosetta, use Monte Carlo estimations as the basis of their calculations. So, even if 19 tasks are sent, if they each use a random number (often based on the time the task starts running, in microseconds, so very unique) in some fashion as the task starts, you will end up with 19 different results. All based on the same raw data, formulas and assumptions.

In Rosetta's case, this random number is established by the server as the task is created. This makes it clearer, for debugging etc., exactly what work was attempted for the task. Actually, specifically, the random number "seed" is generated. This seed is then used to come up with unique (yet predictably defined) starting points for each model studied in the task.

I won't purport to know the specifics at WCG, but wanted to point out the possibility (likelihood) of such an approach being used. Rosetta Moderator: Mod.Sense |
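A minimal sketch of the seeding idea described above. The function and the toy "model" are purely illustrative assumptions, not Rosetta's actual code; the point is only that the same input data plus a different server-assigned seed yields a different, non-redundant result:

```python
import random

def run_task(input_data, server_seed, models_per_task=5):
    """Toy illustration: one task runs several models, each starting from a
    point derived reproducibly from the seed the server assigned at creation."""
    rng = random.Random(server_seed)  # seed fixed by the server, not the client
    results = []
    for model_index in range(models_per_task):
        start_point = rng.uniform(-180.0, 180.0)  # unique but reproducible start
        # ... a real app would run its Monte Carlo search from start_point here ...
        results.append((model_index, round(start_point, 2)))
    return results

# Same raw data, different seeds -> 19 such tasks would give 19 different result sets.
print(run_task("same_input_for_everyone", server_seed=1))
print(run_task("same_input_for_everyone", server_seed=2))
```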
Send message Joined: 17 Sep 05 Posts: 815 Credit: 1,812,737 RAC: 0 |
I must admit that WCG has some good projects, too. One has an initial replication of 1! But yesterday I realized that WCG uses an initial replication of 19 (!) in many projects. The quorum is large also ... I forget what it is, but it is pretty high (15?). I do not know why they have selected a large quorum, but I have no problems with that. The question is more about what the project is doing. Having a large quorum means that tight agreement across a large number of runs of the work is a better indicator of the validity of the work. I would argue that many of the projects that have gone down in quorum have decreased the validity of the work and the quality of the research. In many cases the work only has to be processed one time and there is no need for double-checking the work ... But for work that may affect a life? Well, checking 15-19 times is well worth the extra effort ... |
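To illustrate what a large quorum buys, here is a hedged sketch of a BOINC-style validator; the numeric values, tolerance, and quorum used below are invented for the example, and real validators are project-specific:

```python
def pick_canonical(results, min_quorum=15, tolerance=1e-6):
    """Toy quorum check: accept a canonical value only when at least
    `min_quorum` of the returned results agree within `tolerance`."""
    for candidate in results:
        agreeing = [r for r in results if abs(r - candidate) <= tolerance]
        if len(agreeing) >= min_quorum:
            return sum(agreeing) / len(agreeing)  # consensus value
    return None  # no consensus yet: wait for more hosts or resend the workunit

# 15 good hosts and one faulty one: the bad value cannot win, but the
# workunit has cost roughly 16x the CPU time of a single run.
good = [3.14159265] * 15
bad = [2.71828183]
print(pick_canonical(good + bad))
```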
Send message Joined: 31 Dec 08 Posts: 3 Credit: 118,213 RAC: 0 |
I admit that replication MAY catch platform-related errors. And it is also true that it suits Monte Carlo. I was, however, born in an era when computer capacity was valuable. And to me it looks like totally free-of-charge computing capacity makes scientists arrogant and wasteful of this present. If we want to get the remaining 600,000,000 PCs into proper use, this is not a good attitude.

I am a computer freak, and will hopefully continue crunching for a long time. But during my short 4-month BOINC experiment I have noticed that most visits are much more than 10 times shorter. I have also met situations which might well have ended in a similar flare-up and quitting as aginghippie's. Of course not so impressive, because I only have my few little home computers and a short BOINCing career. And my son just yesterday laughed that I nearly quit BOINCing in September, and here I still sit, my nose on the PC screen ;-)))

I don't think it is a good attitude to say that "the war does not need one (wo)man". Most people are like rats: they look at how it is going with their mates, and make their own decisions according to that. So WE NEED everybody. For each cruncher who writes there are 1000 others who only read, and make their silent decisions.

Let's keep in mind that our target is 600,000,000+ computers crunching. Happy New Year |
Send message Joined: 17 Sep 05 Posts: 815 Credit: 1,812,737 RAC: 0 |
I admit that replication MAY catch platform-related errors. And it is also true that it suits Monte Carlo. I too am from an era when resources were scarce ... And I note that some of the sub-projects on WCG are using much smaller replications and consensus ... CEP for example seems to be 2 ... and this is why I would argue that whoever set up the requirements over there has a good solid reason for doing so ...

If you mouse about and look, I have long been arguing the case for the participant, and about the insanity that is the BOINC community. On and off the projects bemoan the lack of participants, and sometimes will recognize the speed at which we lose new "hires" ... but then will do nothing at all to change the practices which are the root causes of the attrition. I had to take a two-year break from my BOINC addiction, and when I came back about a year ago the total number of people doing BOINC had not changed significantly ... it still has not ... I don't believe it will until many things change ...

I am not sure, but try a Google search on "Participants Rights And Responsibilities" ... I cannot remember if I posted it here ... I know I did on SaH and CPDN ... |
The_Bad_Penguin Send message Joined: 5 Jun 06 Posts: 2751 Credit: 4,271,025 RAC: 0 |
"good solid reason" being relative, of course. A few months back, I attended a meeting at IBM, with those that actually created and are currently running WGC. One "good solid reason" is their default assumptions on the donor's available bandwidth. If the results file is too "large" (relative to their determination), rather than place the burden on a single donor's bandwidth to upload the results back to WCG, they will instead have 3 or 4 donors crunch the exact same wu, and have donor #1 send back the first 25% of the results, donor #2 send the next 25% of results... While I appreciate their concern, most people are no longer using dial-up, and to me personally, it is a waste of computing resources. Much as I like WCG's projects, I abhor their waste of cpu cycles (imho), and thus I only donate a small percentage of my resource allocation to them. And I note that some of the sub-projects on WCG are using much smaller replications and consensus ... CEP for example seems to be 2 ... and this is why I would argue that whomever set up the requirements over there has a good solid reason for doing so ... |
Send message Joined: 17 Sep 05 Posts: 815 Credit: 1,812,737 RAC: 0 |
"good solid reason" being relative, of course. Well, you have personal information that was not available to me and so, yes I can see your point. This is tangentally what some of us have been arguing in the Dev mailing list about the work fetch policy changes in that BOINC Manager does not do a good job of measuring resource usage for CPUs and does not measure at all other resources and their usage. Sadly, that seems likely to remain the case as Dr. Anderson is embarked on a journey to do the least change possible to sorta keep BOINC Manager working in the new era of GPU computing ... While I do not know the percentage of users that are still on dial up vs DSL or cable, sometimes we can be surprised just how high a percentage that could be... so I cannot argue the point intenligently one way or the other ... ON THE OTHER HAND ... ) You knew that there was a "but" coming ... I have to give WCG the nod for the fact that the stuff off of the project is pretty rock solid. I have had one or two bad tasks that I have seen, but, nothing like most other projects. So, it really is more fire it up and forget it ... I decided, as a counter example, to dump a bunch of time into Rosetta and to my chagrin had as many as half the tasks die, run well past their deadline, etc. meaning of course that I was wasting my time. Granted it is the holidays, but nary a peep out of the project. Everyone's milage varies, but, in my case, I cannot say for certain that WCG is wasting my time and effort; I can say that Rosetta is ... I am not leaving for good, but, I have only one computer doing RaH now because that is the only one where the work seems to run correctly ... and that is the OS-X machine. Should that change I will drop RaH with no regret at all ... Anyway, that is one of the nice things about today's world ... there are choices ... |
Send message Joined: 17 Sep 05 Posts: 815 Credit: 1,812,737 RAC: 0 |
Just did a mini-review, and of the projects for which I have results:

Minimum Quorum / Initial Replication
- CEP is 2 / 2
- Human Proteome Folding is 15 / 19
- Nutritious Rice is 10 / 19
- Cancer is 2 / 2
- Fight AIDS is 2 / 2 or 1 / 1

I don't have results for the Beta or Dengue projects in my list, so I can't recall or post the numbers ...

Not that you should change for me ... but if you do think that the replication and quorum are sub-optimal you can always opt out of those sub-projects ... which is one of the things I think WCG does well, perhaps better than other projects ...

Anyway, projects abound and we have choices now, so we can all elect what and where we will spend our time and resources. I just did not want people to get the wrong idea about WCG; many of the projects are valuable and the "wastage" is by no means universal. Which is one of the reasons why I take and hold the position that maybe their choices have a rational basis ...

Of course, I have long held the radical notion that we do not do enough checking and that more projects should be doing at least a replication of 3 / min quorum of 3, with the inclusion of test tasks that actually certify the machines that we are running on ... to make sure that we are, in fact, getting good results ... and not just results that agree with someone else's results ... agreement among wrong answers is worse than simply having wrong answers ... because now we are convinced that the wrong answer is somehow more right than it really is ... |
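Putting rough numbers on the per-sub-project redundancy listed above; the figures are copied from the post, and the "useful fraction" is the same crude 1/N estimate used earlier in the thread, which ignores that some projects deliberately make the replicas non-identical:

```python
# (min quorum, initial replication) as reported in the post above.
subprojects = {
    "CEP":                    (2, 2),
    "Human Proteome Folding": (15, 19),
    "Nutritious Rice":        (10, 19),
    "Cancer":                 (2, 2),
    "Fight AIDS":             (2, 2),  # reported as "2 / 2 or 1 / 1"
}

for name, (quorum, replication) in subprojects.items():
    useful_fraction = 1.0 / replication  # crude: one copy out of N adds new science
    print(f"{name:<24} quorum {quorum:>2}  replication {replication:>2}  "
          f"~{useful_fraction:.0%} of the CPU time is non-redundant")
```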