Why are you adding utility functions together? We’re discussing what an effective altruist who cares about animals should do as an individual. We are not trying to work out CEV or something. If we did, I’d hope animals get counted for more than just how much the humans care about them on average. If Alice is an effective altruist and Bob and Carol are not (in which case it can be assumed that Bob and Carol’s money would otherwise be wasted on themselves, when they don’t need it very much, or possibly on charity that doesn’t do very much good), then Alice shouldn’t care much how much Bob and Carol pay.
Perhaps it’s more obvious if we suppose that they somehow get hold of a list of people who are keen on vegetarianism, and find that each one of those 10,000 people values a person-month of vegetarianism at $10. Is it now a good deal if all of them spend $10 to make Alice a vegetarian for a month? Has her abstinence from meat for that month suddenly done 3000x more good than when it was just her, Bob and Carol who knew about it?
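A minimal arithmetic sketch of where that “3000x” figure comes from. One assumption here is not stated above and is purely illustrative: that in the original three-person case Alice, Bob and Carol each also valued a person-month of vegetarianism at about $10.

```python
# Illustrative arithmetic only; the $10-per-person valuation in the
# three-person case is an assumption, not something stated in the thread.
value_per_person = 10      # dollars each person assigns to a person-month of vegetarianism
small_group = 3            # Alice, Bob and Carol
large_group = 10_000       # the list of people keen on vegetarianism

total_small = small_group * value_per_person    # $30
total_large = large_group * value_per_person    # $100,000

print(total_large / total_small)  # ~3333, i.e. roughly the "3000x" in question
```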
I don’t think a situation that extreme can really come up. If the whole thing will stop because of one person not donating, there’s no way the other 10,000 people will all donate.
I’m not. I’m (well, actually Fluttershy is, and I’m agreeing) adding amounts of money together, and I’m suggesting that to make the Alice/Bob/Carol outcome seem like a good one, you’d have to add together utility functions that ought not to be added together (even if one were generally willing to add up utility functions).
If Alice is an effective altruist and Bob and Carol are not, [...]
Look again at the description of the situation: Alice, Bob and Carol are all making their ethical position on meat-eating a central part of their decision-making. For each of them, roughly half or more of their delta-utility-converted-to-dollars in this situation comes from the reduction in animal suffering that they anticipate. They are choosing their actions to optimize the outcome, taking this highly-weighted concern about animal suffering into account. This is the very definition of effective altruism. (Or at least of attempted effective altruism. Any of them might be being incompetent. But we don’t usually require competence before calling someone an EA.)
I don’t think a situation that extreme can really come up.
If some line of reasoning gives absurd results in such an extreme situation, then either there’s something wrong with the reasoning or there’s something about the extremeness of the situation that invalidates the reasoning even though it wouldn’t invalidate it in a less extreme situation. I don’t see that there’s any such thing in this case.
BUT
I do actually think there’s something wrong with Fluttershy’s example, or at least something that makes it more difficult to reason about than it needs to be, and that’s the way the participants’ values and/or knowledge change. Specifically, at the start of the experiment Alice is eating meat even though (on reflection + persuasion) she actually values avoiding a month’s animal suffering more than a month’s meat-eating pleasure. Are we to assess the outcome on the basis of Alice’s final values, or her initial values (whatever they may have been)? I think different answers to this question yield different conclusions about whether something paradoxical is going on.
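To make that last point concrete, here is a minimal sketch with entirely hypothetical dollar figures (none of these numbers appear in the thread), showing how the same outcome can look like a bad deal under Alice’s initial values and a good one under her final values:

```python
# Hypothetical numbers, chosen only to illustrate that the verdict depends on
# which of Alice's value sets we use when assessing the outcome.
initial_values = {"meat_pleasure": 15, "suffering_averted": 5}   # $-equivalents per month
final_values   = {"meat_pleasure": 15, "suffering_averted": 40}

def net_value_of_veg_month(values):
    """Net $-equivalent to Alice of giving up meat for one month."""
    return values["suffering_averted"] - values["meat_pleasure"]

print(net_value_of_veg_month(initial_values))  # -10: looks like a bad deal to initial-Alice
print(net_value_of_veg_month(final_values))    # +25: looks like a good deal to final-Alice
```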