I’m not. I’m adding amounts of money together (well, actually Fluttershy is, and I’m agreeing), and I’m suggesting that to make the Alice/Bob/Carol outcome seem like a good one you’d have to add together utility functions that ought not to be added together (even if one were generally willing to add up utility functions at all).
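To make the aggregation worry concrete, here’s a toy numerical sketch in Python. Every number in it is invented for illustration; these are not the figures from Fluttershy’s scenario. The point is just that summing dollar-equivalents across people silently assumes everyone converts utility into dollars at the same rate:

```python
# Toy sketch (all numbers invented, not from the original scenario).
# Each person's reported dollar value is their utility change divided by
# their personal utils-per-dollar rate, so dollars are not comparable
# across people.

# (name, utility change in that person's own utils, utils-per-dollar rate)
people = [
    ("Alice", -50.0, 5.0),   # strongly against the outcome; money matters little to her
    ("Bob",   +20.0, 0.5),   # mildly for it, and very money-sensitive
    ("Carol", +20.0, 0.5),
]

dollar_sum = sum(du / rate for _, du, rate in people)
util_sum   = sum(du for _, du, _ in people)

print(f"sum of dollar-equivalents: {dollar_sum:+.0f}")  # +70: looks like a good outcome
print(f"sum of raw utils:          {util_sum:+.0f}")    # -10: looks like a bad one
```

Neither total is privileged: the dollar sum favours whoever cares least about money, and the raw-util sum presupposes an interpersonal comparison that hasn’t been justified.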
If Alice is an effective altruist and Bob and Carol are not, [...]
Look again at the description of the situation: Alice, Bob and Carol are all making their ethical position on meat-eating a central part of their decision-making. For each of them, roughly half or more of their delta-utility-converted-to-dollars in this situation comes from the reduction in animal suffering that they anticipate. They are choosing their actions to optimize the outcome, including this heavily weighted concern about animal suffering. That is the very definition of effective altruism. (Or at least of attempted effective altruism; any of them might be going about it incompetently, but we don’t usually require competence before calling someone an EA.)
I don’t think a situation that extreme can really come up.
If some line of reasoning gives absurd results in such an extreme situation, then either there’s something wrong with the reasoning, or there’s something about the extremeness of the situation that invalidates reasoning which would be fine in a less extreme case. I don’t see that there’s any such thing in this case.
BUT
I do actually think there’s something wrong with Fluttershy’s example, or at least something that makes it harder to reason about than it needs to be, and that’s the way the participants’ values and/or knowledge change. Specifically, at the start of the experiment Alice is eating meat even though (after reflection and persuasion) she actually disvalues a month of animal suffering more than she values a month of meat-eating pleasure. Are we to assess the outcome on the basis of Alice’s final values, or her initial values (whatever they may have been)? I think different answers to this question yield different conclusions about whether anything paradoxical is going on.
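To see how the answer can flip, here’s another toy sketch (again, every number is invented for illustration): score the same outcome for Alice under her initial values and under her post-persuasion values.

```python
# Toy sketch (all numbers invented): the same outcome for Alice, scored
# under her initial values and under her final (post-persuasion) values.

meat_pleasure_lost = -30.0  # dollar-equivalent of forgoing a month of meat

# Dollar-equivalent Alice assigns to a month of animal suffering averted:
suffering_averted_final   = +50.0  # after reflection and persuasion
suffering_averted_initial = +10.0  # before the experiment

print("final values:  ", meat_pleasure_lost + suffering_averted_final)    # +20: good for Alice
print("initial values:", meat_pleasure_lost + suffering_averted_initial)  # -20: bad for Alice
```

With numbers like these, the experiment is a clear win by Alice’s final values and a clear loss by her initial ones, which is exactly why the question matters.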