What is that even supposed to mean? For one thing, I don’t think it’s necessarily irrefutable, but don’t most people (even most good people) act more or less like that anyway, regardless of whether they are “convinced” by it? Virtually no-one even tries to maximize the good they do if you measure “good” as “human QALYs saved anywhere on Earth at the present time with high certainty”. Even what the SIAI is doing seems far closer to Folding to me than it does to “giving money to starving Africans”.
It means what you said is true and irrelevant. I would not notice if they lived or not. That doesn’t matter to my ethics. What good are they to me? Probably nothing; my track record on investing in charitable donations is precisely −100% (none have ever even paid interest!). That doesn’t matter to my ethics.
If you are going to attack consequentialism or valuing people besides oneself, this is entirely the wrong place to do so, and makes about as much sense as discussing Nagarjuna’s arguments that nothing exists as a refutation of the assertion ‘Obama is a bad president’.
Even what the SIAI is doing seems far closer to Folding to me than it does to “giving money to starving Africans”.
They think that is not true. I guess you have a different opinion on the danger or likelihood of AI, or on SIAI’s effect on either one. You should take that up with them.
So why do you say that the Folding money would be better spent on starving Africans, then? Shouldn’t it be donated to the SIAI instead, if you believe that? If not, why not criticize them on the same basis? I am also not claiming that one shouldn’t value other people, just that you don’t have to weight all lives equally and shouldn’t expect others to. And I don’t really believe that anyone truly maximizes “lives of others”, or would want to if they knew what it meant.
Also, “Charity X doesn’t optimize under my personal ethics” is not the same as “Charity is not about helping”, not that I disagree that signaling is important.
So why do you say that the Folding money would be better spent on starving Africans, then? Shouldn’t it be donated to the SIAI instead, if you believe that? If not, why not criticize them on the same basis?
Because I am not writing for the tiny cluster of fellow zealots who agree about the high EV of donating to SIAI. I am writing for intelligent people in general, and one of the standard practices of philosophy—and heck, writing in general—is to not make highly controversial claims you do not need to make. I do not have to prove SIAI is the highest EV charity in existence in an essay about Folding@home; I only need to compare to a better charity, to establish a lower bound on how much harm choosing Folding@home does.
Also, “Charity X doesn’t optimize under my personal ethics” is not the same as “Charity is not about helping”, not that I disagree that signaling is important.
Fine, don’t look at my personal ethics. If you asked a random Folding@homer, ‘would you be willing to participate in murdering a few people just to make yourself look better’, what do you think they would say?
They would say no, but of course everyone “murders” some fraction of a person every day that they don’t maximize their life-saving effectiveness. If they were playing video games instead to make themselves feel better, isn’t that even “worse”?
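(To make that “fraction of a person” arithmetic explicit, here is a back-of-the-envelope sketch; both the daily-spending figure and the cost-per-life figure below are assumptions picked purely for illustration, not sourced estimates:)

```python
# How much of a statistical life does a day of non-optimal spending forgo?
# Both numbers are illustrative assumptions, not sourced figures.
daily_discretionary_usd = 20    # assumed: money spent per day on non-essentials
cost_to_save_life_usd = 2_000   # assumed: cost per life saved at a highly effective charity
fraction_per_day = daily_discretionary_usd / cost_to_save_life_usd
print(f"{fraction_per_day:.3f} of a life forgone per day")  # 0.010
```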
People do Folding because it’s easy and the costs are hidden. If you could magic it out of existence, they wouldn’t suddenly start donating equivalent money to maximally efficient charities.
Of course they do; this follows directly from consequentialism, and is, in fact, an argument that could be made against donating to any charity but the most effective one. If this were pointed out to most people, they wouldn’t care and would ignore it or, like the XKCD link, simply indulge themselves that much more.
That’s why Folding@home is so interesting as a charity: while most charities are simply committing sins of omission, Folding@home is committing sins of commission. (You did see the section headers, right? They aren’t meaningless.)
Sure, if you assume (as you do) that Folding won’t save many lives in the long run, it looks like a bad use of resources if you’re purely concerned about charity. But that assumption could be applied to any research that doesn’t pay off immediately or with certainty. The LHC, for example, uses 180 MW (http://lhc-machine-outreach.web.cern.ch/lhc-machine-outreach/faq/lhc-energy-consumption.htm), or 12x as much as your numbers for F@H, and is arguably even less practical, despite no doubt producing many more papers.
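(Checking that ratio, since the F@H figure is only implied: the 180 MW comes from the CERN FAQ linked above, while the ~15 MW for F@H is just back-derived from the “12x” claim rather than independently sourced:)

```python
# Power comparison: LHC vs. Folding@home, using the figures in the comment above.
lhc_mw = 180                  # LHC consumption, per the linked CERN FAQ
lhc_to_fah_ratio = 12         # the "12x" claim above
implied_fah_mw = lhc_mw / lhc_to_fah_ratio
print(f"Implied F@H power draw: {implied_fah_mw:.0f} MW")  # 15 MW
```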
So, you’re resorting to the same teapot argument I specifically addressed in the essay. No wonder this conversation is so frustrating for me—it looks to me like you didn’t read it, or skimmed it at best.
How is this situation comparable to any “teapot argument”? There are clear and obvious ways that something like Folding could produce valuable results. It might not, but it wouldn’t be shocking if it did, and it doesn’t need either a high probability or an especially large magnitude of success to make up for a few lives a year in EV. Comparing it to space teapots is getting things wrong by many orders of magnitude.
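(A minimal expected-value sketch of that claim; every number below is an assumption chosen for illustration, not an estimate from the essay:)

```python
# Break-even condition: p_success * payoff >= direct cost, all in lives per year.
direct_cost_lives_per_year = 5.0   # assumed: "a few lives a year" lost to F@H's electricity costs
p_success_per_year = 0.01          # assumed: small annual chance of a medically useful result
breakeven_payoff = direct_cost_lives_per_year / p_success_per_year
print(f"Needs an expected payoff of >= {breakeven_payoff:.0f} lives/year if it succeeds")  # 500
```

On these assumed numbers, even a 1% chance only has to be worth ~500 lives to break even.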
Anything looks bad if you incorrectly assume that the probability of it working is astronomically small.
edit: And the reason this conversation has been a bit confusing is that you made two different points (that F@H is a net negative even ignoring opportunity costs, and that giving charity to anything other than the #1 GiveWell charity or a close equivalent is bad), and while I don’t agree with either of them, they’re unrelated enough that it’s tricky to argue against both at once.
There are clear and obvious ways that something like Folding could produce valuable results.
Which it has not. It has run for 10 years, and the paper count seems to be dropping.
What is the prior probability of Folding@home justifying either its sins of commission or its sins of omission? And what’s the posterior probability, conditional on what we have (not) observed, that it will retroactively justify all its past expenses, and then its ongoing expenses?
It costs up to a billion dollars (http://en.wikipedia.org/wiki/Drug_development) and up to 14 years (http://www.addictiontreatmentmagazine.com/addiction-treatment/what-it-takes-to-bring-new-treatment-drugs-to-market/) to create a new drug once the basic idea is discovered, and seeing as the companies doing this stay in business, that level of investment must be (reasonably) worthwhile economically. Folding is a drop in the bucket compared to that, and even if it never achieves anything serious and is eventually shut down, it seems like it was worth trying; there’s still a chance that it could discover something important about proteins.

edit: Also, at best, you don’t seem to have any better justification for thinking that the probability of significant success is astronomically low than I do for thinking that it’s low but not astronomically so, so this aspect of the argument isn’t really going anywhere.
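(For scale, a rough comparison of the two cost figures in this thread: the up-to-$1B drug-development cost linked above, and the ~$100m total-expense estimate for Folding given in the reply below:)

```python
# "Drop in the bucket" check, using the two cost figures from this thread.
drug_dev_cost_usd = 1_000_000_000   # up to ~$1B per new drug, per the links above
fah_total_cost_usd = 100_000_000    # ~$100m total F@H expenses, per the reply below
ratio = fah_total_cost_usd / drug_dev_cost_usd
print(f"F@H total spend = {ratio:.0%} of one drug's development cost")  # 10%
```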
Folding has already incurred somewhere around $100m in total expenses. Do the Bayesian update on a 1/10 chance of success, given that no success has happened… It’s not epsilon or zero, I’ll tell you that!
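(A minimal sketch of that update, taking the 1/10 prior above and assuming that a project which will eventually pay off has an 80% chance of showing a drug-relevant result within its first 10 years; both conditional probabilities are illustrative assumptions:)

```python
# Posterior P(F@H eventually succeeds | no drug-relevant result after 10 years).
prior = 0.10                 # the 1/10 chance above
p_silent_if_success = 0.20   # assumed: an eventual success still shows nothing in 10 years
p_silent_if_failure = 1.00   # a project that never succeeds shows nothing, by definition

# Bayes' theorem: P(S | silent) = P(silent | S) * P(S) / P(silent)
p_silent = p_silent_if_success * prior + p_silent_if_failure * (1 - prior)
posterior = p_silent_if_success * prior / p_silent
print(f"posterior = {posterior:.3f}")  # ~0.022: well below the prior, but not epsilon or zero
```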
Let me ask you something. I know what evidence would convince me that Folding was a good idea: show me a drug based on Folding results or a therapy change or something like that. But is there any evidence that could convince you that Folding is not a good idea? Because everything you’ve said seems like it could apply to any project.