Has anybody considered starting a folding@home team for lesswrong? Seems like it would be a fairly cheap way of increasing our visibility.
After a brief 10-word discussion on #lesswrong, I’ve made a lesswrong team :p
Our team number is 186453; enter this into the folding@home client, and your completed work units will be credited.
Does anyone know the relative merits of folding@home and rosetta@home, which I currently run? I don’t understand enough of the science involved to compare them, yet I would like to contribute to the project which is likely to be more important. I found this page, which explains the differences between the projects (and has some information about other distributed computing projects), but I’m still not sure what to think about which project I should prefer to run.
Personally I run Rosetta@home because, based on my research, it could be more useful for designing new proteins and computationally predicting protein function. Folding@home seems to be more about understanding how proteins fold, which can help with some diseases, but that isn’t nearly as game-changing as in silico design and shape prediction would be.
I also think that the SENS Foundation (Aubrey de Grey & co.) has some ties to Rosetta, and might use it in the future to design some proteins.
I’m a member of the Lifeboat Foundation team: http://lifeboat.com/ex/rosetta.home
But we could also create a Less Wrong team if there’s enough interest.
So I think I have it working, but… there’s nothing to tell me if my CPU is actually doing any work. It says it’s running, but… is there supposed to be something else? I used to do SETI@home back in the day and they had some nice feedback that made you feel like you were actually doing something (of course, you weren’t, because your computer was looking for non-existent signals, but still).
The existence of ET signals is an open question. SETI is a fully legitimate organization run according to a well-thought-out plan for collecting data to help answer this question.
I think the probability they ever find what they’re looking for is extraordinarily low. But I don’t have anything against the organization.
Right on, but just so you know, other (highly informed) people think that we may find a signal by 2027, so there you go. For an excellent short article (explaining this prediction), see here.
I don’t think the author deals with the Fermi paradox very well, and the paradox is basically my reason for assigning a low probability to SETI finding something.
The Fermi paradox also struck me as a big issue when I first looked into these ideas, but now it doesn’t bother me so much. Maybe this should be the subject of another open thread.
I use the origami client manager thingie; it handles deploying the folding client, and gives a nice progress meter. The ‘normal’ clients should have similar information available (I’d expect that origami is just polling the clients themselves).
What is this?
I wrote a quick introduction to distributed computing a while ago:
http://michaelgr.com/distributed-computing/
My favorite project (the one which I think could benefit humanity the most) is Rosetta@home.
Donating money to scientific organizations (in the form of a larger power bill): you run your otherwise-idle CPU to crunch difficult, highly parallel problems like protein folding.
Granted that in many cases, it’s donating money that you were otherwise going to burn.
No, modern CPUs use considerably less power when they are idle. A computer running folding at home will be drawing more power than if it were not.
But you’ve already paid for the hardware, and you’ve already paid for the baseload power to run the CPU, the video card, the hard disk, and all the other components. If you turn the machine off overnight, you’re paying for the wear and tear of power-cycling the hardware every day, and for the time you spend booting up, reloading programs, and re-establishing your context before you can get back to work.
In other words, the small amount of money spent on the extra electricity enables the useful application of a much larger chunk of resources.
That means if you run Folding@home, your donation is effectively being matched not just one for one but severalfold, and not by another philanthropist, but by the universe.
I’ve seen numerous discussions about whether it’s better / more economical to turn off your machine or to leave it running all the time, and I have never seen a satisfactory conclusion based on solid evidence.
That’s because it depends on the design. On the lifetime point, for example: if the machine tends to fail based on time spent running (solder creep, perhaps), leaving it running more often will reduce the life, but if the machine tends to fail based on power cycling (low-cycle fatigue, perhaps), turning it on and off more often will reduce the life.
Given that I’ve dropped my MacBook from a height of four feet onto a concrete slab, I figure the difference is roundoff error as far as I am concerned.
A severalfold match isn’t very impressive if the underlying activity is at least several orders of magnitude less efficient than alternatives, which seems likely here.
It seems highly unlikely to me. Biomedical research in general, and protein folding in particular, are extremely high-leverage areas. I think you will be very hard put to find a way to spend resources even a single order of magnitude more efficiently (let alone to make a case that the budget of any of us here is already being spent more efficiently, either on average or at the margin).
Moore’s Law means that the cost of computation is falling exponentially. Even if one thought that providing computing power was the best way to spend money (on electricity), it would likely be better to save the money spent on electric power and buy more computing power later, unless the computation is much, much more useful now. (I sketch a rough version of this below.)
Biomedical research already gets an outsized portion of all R&D, with diminishing returns. The NIH budget is over $30 billion.
Slightly accelerating protein-folding research doesn’t benefit very much from astronomical-waste considerations, compared with improving the security of future progress through existential-risk reduction.
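To put a rough number on the first point, here is a minimal sketch. The doubling period is my own illustrative assumption (roughly a Moore’s-Law rate), not a figure from anywhere in this thread:

```python
# Minimal sketch of the "save the money and buy computation later" argument.
# The doubling period below is an illustrative assumption, not a measured figure.
DOUBLING_YEARS = 2.0  # assumed time for the cost of a fixed computation to halve

def relative_compute_per_dollar(years_from_now: float) -> float:
    """How much more computation a fixed sum buys after waiting, normalized so today = 1."""
    return 2 ** (years_from_now / DOUBLING_YEARS)

for years in (2, 4, 6):
    print(f"Wait {years} years: ~{relative_compute_per_dollar(years):.0f}x the computation per dollar")
```

The whole disagreement is then about the one term this leaves unquantified: how much more useful the computation is now than it would be later.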
In principle, this is true; in practice, saying things like this seems more likely to make the people in question simply cease donating electricity, rather than cease donating electricity and donate the saved money to something more useful. Installing a program and running it all the time doesn’t really feel like you’re spending money, but explicitly donating money requires you to cross the mental barrier between free and paid in a way that running the program doesn’t.
For those reasons, I’d be very hesitant about arguing against running programs like Folding@Home; it seems likely to cause more harm than good.
http://lesswrong.com/lw/1d9/doing_your_good_deed_for_the_day/
But on the other hand http://lesswrong.com/lw/4e/cached_selves/ ; it doesn’t seem clear to me which effect dominates, so we should be careful about drawing inferences based on that.
Furthermore, it seems to me that things like F@H are rather unlikely to cause a “good deed of the day” effect for very long: by their nature, they’re continuing processes that rather quickly fade into the background of your consciousness and that you partially forget about. If F@H automatically starts up whenever you boot your computer, then having it running wouldn’t count as a day’s good deed for most people. Constantly seeing the icon might boost a cached-self effect of “I should do useful things”, though.
In practice, it is worth doing the computation now—we can easily establish this by looking at the past, and noting that the people who performed large computations then, would not have been better off waiting until now.
$30 billion is a lot of money compared to what you and I have in our pockets. It’s dirt cheap compared to the trillions being spent on unsuccessful attempts to treat people who are dying for lack of better biotechnology.
By far the most important way to reduce real-life existential risks is speed.
Even if you could find a more cost-effective research area to finance, it is highly unlikely that you are actually spending every penny you can spare in that way. The value of spending resources on X needs to be compared to the other ways you are actually spending those resources, not to the other ways you hypothetically could be spending them.
Whether it makes sense in general to do a calculation now or just wait isn’t always so clear-cut. Also, at least historically, there hasn’t always been a choice. For example, in the 1940s and 1950s, mathematicians studying the Riemann zeta function really wanted to do hard computations to look at more of the non-trivial zeros, but this was given very low priority by the people who controlled computers and by the people who programmed them. The priority was so low that by the time it advanced up the queue, the computer in question would already be labeled as obsolete and thus would not be maintained. It wasn’t until the late 1950s that the first such calculation was actually performed.
They have high-performance GPU clients that are a lot faster than CPU-only ones.
Assuming whatever gets learned through folding@home has applications, they should offer users partial ownership of the intellectual property.
It’s scientific research; the results are freely published.
I’m not saying it isn’t a net gain, it may well be according to your own personal weighing of the factors. I’m just saying it is not free. Nothing is.
Many != all.
My desktop is old enough that it uses very little more power at full capacity than it does at idle.
Additionally, you can configure the client (it may even be the default, I’m not sure) not to increase the clock rate.
It is also not equal to ‘some’. The vast majority of computers today will use more power when running folding at home than they would if they were not running folding at home. There may be some specific cases where this is not true but it will generally be true.
You’ve measured that, have you? Here’s an example of some actual measurements of power draw at idle and under load for a range of current processors. It’s not a vast difference, but it is real, and it ranges from about 30 W (a 40% increase in total system power draw) to around 100 W (a 100% increase).
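To put that in money terms, here is a quick sketch of what that extra draw costs over a year if the machine folds around the clock. The electricity price is an assumption of mine, so plug in your own tariff:

```python
# Rough yearly cost of the extra power draw quoted above (30 W to 100 W),
# assuming the machine runs the client 24/7. The price per kWh is an assumption.
PRICE_PER_KWH = 0.12  # dollars; substitute your local rate

for extra_watts in (30, 100):
    kwh_per_year = extra_watts / 1000 * 24 * 365
    cost = kwh_per_year * PRICE_PER_KWH
    print(f"{extra_watts} W extra ≈ {kwh_per_year:.0f} kWh/year ≈ ${cost:.0f}/year")
```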
I couldn’t find mention of any such setting on their site. Do you have a link to an explanation of this setting?
On further consideration, my complaint wasn’t my real/best argument; consider this a redirect to rwallace’s response above :p
That said, I personally don’t take ‘many’ as meaning ‘most’, but more in the sense of “a significant fraction”, which may be as little as 1⁄5 and as much as 4⁄5. I’d be somewhat surprised if the fraction of old machines (5+ years old) in use wasn’t in that range.
re: scaling, the Ubuntu folding team’s wiki describes the approach.
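I haven’t tracked down that wiki page, but on Linux the usual knob is the cpufreq scaling governor exposed through sysfs. Here is a minimal sketch that only reports the current setting; the paths are the standard sysfs ones, though whether they exist depends on your kernel and hardware:

```python
# Minimal sketch: report the CPU frequency scaling governor on a Linux machine.
# Uses standard sysfs paths; availability depends on kernel and hardware.
from pathlib import Path

cpufreq = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read(name: str) -> str:
    p = cpufreq / name
    return p.read_text().strip() if p.exists() else "(not available)"

print("Current governor:   ", read("scaling_governor"))
print("Available governors:", read("scaling_available_governors"))
# A governor like "ondemand" raises the clock under load, while the legacy
# "powersave" governor keeps it at the minimum even with a background job running.
```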
Idle could also mean ‘off’, which would be a significant power saving even (especially?) for older CPUs.
One who refers to their powered-off computer as ‘idle’ might find themselves missing an arm.
Except I’m talking about opportunity cost rather than redefining the word. You can turn off a machine you aren’t using, a machine that’s idle.