I would feel like I deserved to receive the credit for the animal suffering prevented by my becoming vegetarian, rather than the person who sponsored the ad receiving the credit for the suffering prevented by my conversion to vegetarianism.
Why do you consider those mutually exclusive? I see no reason why responsibility cannot add to more than one. We’re not deontologists here (unless you are, in which case ignore this comment). If both of you saved the animals, then both of you get credit for it.
Let’s say that three individuals (Alice, Bob, and Carol) each hypothetically consider the benefit of one meat-eater becoming vegetarian for exactly one month (after which point the person again begins to eat meat just as frequently as they used to) as being worth $10, $15, and $20 per month, respectively.
Carol pays Bob, who is very persuasive, $15 to have a conversation with Alice to convince her to become a vegetarian for exactly one month. Bob values his time, and would normally never accept less than $25 for performing such a service, but is willing to charge only $15 to speak with Alice, because he values converting meat-eaters to vegetarians. Let’s assume that Bob is perfectly persuasive, and will reliably cause Alice to become a vegetarian for exactly one month if and only if he is paid to chat with her. Alice thinks meat is tasty, so she values the ability to eat meat at $5 per month, but if Bob is paid to talk to her, she will come to realize that she actually values the reduction in animal suffering from her being vegetarian for one month at $10.
In this scenario, the parties incur the following costs in order to get Alice to become a vegetarian for one month:
Alice: $5
Bob: $10
Carol: $15
This adds up to a total cost of $30 across all parties, even though no party actually values converting a person to vegetarianism for exactly one month as being worth more than $20.
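To make the arithmetic explicit, here is a minimal sketch of that accounting in Python (the dollar figures are just the ones above):

```python
# Toy model: what each party thinks one person-month of vegetarianism is worth,
# and the cost each actually incurs to bring it about.
valuations = {"Alice": 10, "Bob": 15, "Carol": 20}  # USD
costs = {"Alice": 5,        # forgone enjoyment of meat
         "Bob": 10,         # discount on his usual $25 fee
         "Carol": 15}       # cash paid to Bob

total_cost = sum(costs.values())                 # 5 + 10 + 15 = 30
max_single_valuation = max(valuations.values())  # 20

print(f"Total cost across all parties: ${total_cost}")
print(f"Highest individual valuation:  ${max_single_valuation}")
# The group collectively incurs $30 of cost for an outcome that no single
# party values above $20, which is the apparent puzzle under discussion.
```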
Please forgive any silly-feeling aspects of this model, as it is a hypothetical/toy model, after all.
How much do they disvalue each other losing money? If they don’t care at all, then the given scenario would be better than nothing for all involved. If they do care, then that should count as a cost, and should be considered accordingly. I generally value someone having a few dollars as worth vastly less than the same amount of money donated to the best charity, so I would be willing to pay essentially that much to get that person to donate.
Edit:
Consider this similar scenario. Alice, Bob, and Carol are roommates. They find a nice picture at a store for $30. Alice values having the picture in their living room at $10, Bob at $15, and Carol at $20. They agree to split the costs, with Alice paying $5, Bob paying $10, and Carol paying $15. This adds up to a total cost of $30 across all parties, even though no party actually values the picture as being worth more than $20. Is there any sort of paradox going on here?
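Running the same kind of accounting on the picture (a minimal sketch; the surplus figures follow directly from the numbers above):

```python
# Roommates splitting a $30 picture: private valuations vs. agreed shares.
valuations = {"Alice": 10, "Bob": 15, "Carol": 20}  # USD
shares = {"Alice": 5, "Bob": 10, "Carol": 15}       # USD

for name, value in valuations.items():
    surplus = value - shares[name]
    print(f"{name}: values the picture at ${value}, pays ${shares[name]}, surplus ${surplus}")

print(f"Total paid: ${sum(shares.values())}")  # $30, the price of the picture
# Each roommate pays less than the picture is privately worth to them,
# even though the total paid exceeds any single valuation.
```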
In the case of the picture, presumably what Alice values is “Alice being able to look at the picture”, what Bob values is “Bob being able to look at the picture”, and likewise for Carol. (If not—if each is making a serious attempt to include the others’ benefit from the picture—then indeed their decision is probably a mistake.)
But with Alice, Bob and Carol all interested in having someone become a vegetarian, what they’re valuing is (something like) “a person-month less of animal-eating”, and if you add up all their individual values for that you’re double-counting (er, triple-counting).
Perhaps it’s more obvious if we suppose that they somehow get hold of a list of people who are keen on vegetarianism, and find that each one of those 10,000 people values a person-month of vegetarianism at $10. Is it now a good deal if all of them spend $10 to make Alice a vegetarian for a month? Has her abstinence from meat for that month suddenly done 3000x more good than when it was just her, Bob and Carol who knew about it?
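In rough numbers (a minimal sketch; whether you compare head counts or summed dollar valuations, the multiplier comes out in the thousands):

```python
# Naively summing private valuations of the same person-month of vegetarianism.
small_group = [10, 15, 20]    # Alice, Bob, Carol (from the toy model)
large_group = [10] * 10_000   # the hypothetical mailing list, $10 each

naive_small = sum(small_group)   # $45
naive_large = sum(large_group)   # $100,000

print(f"Summed valuations, 3 supporters:      ${naive_small}")
print(f"Summed valuations, 10,000 supporters: ${naive_large}")
print(f"Ratio of summed valuations: {naive_large / naive_small:.0f}x")  # ~2222x
print(f"Ratio of head counts:       {10_000 / 3:.0f}x")                 # ~3333x
# The underlying event (one person-month without meat) is unchanged; only the
# number of people who know and care about it has grown. That is the
# double-counting being pointed out.
```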
Why are you adding utility functions together? We’re discussing what an effective altruist who cares about animals should do as an individual. We are not trying to work out CEV or something. If we did, I’d hope animals would get counted for more than just how much the humans care about them on average. If Alice is an effective altruist and Bob and Carol are not, so that their money would otherwise be wasted on themselves (when they don’t need it very much) or on charity that doesn’t do very much good, then Alice shouldn’t care much how much Bob and Carol pay.
Perhaps it’s more obvious if we suppose that they somehow get hold of a list of people who are keen on vegetarianism, and find that each one of those 10,000 people values a person-month of vegetarianism at $10. Is it now a good deal if all of them spend $10 to make Alice a vegetarian for a month? Has her abstinence from meat for that month suddenly done 3000x more good than when it was just her, Bob and Carol who knew about it?
I don’t think a situation that extreme can really come up. If the whole thing will stop because of one person not donating, there’s no way the other 10,000 people will all donate.
I’m not. I’m (well, actually Fluttershy is and I’m agreeing) adding amounts of money together, and I’m suggesting that to make the Alice/Bob/Carol outcome seem like a good one you’d have to add together utility functions that ought not to be added together (even if one were generally willing to add up utility functions).
If Alice is an effective altruist and Bob and Carol are not, [...]
Look again at the description of the situation: Alice, Bob and Carol are all making their ethical position on meat-eating a central part of their decision-making. For each of them, at least about half of their delta-utility-converted-to-dollars in this situation is coming from the reduction in animal suffering that they anticipate. They are choosing their actions to optimize the outcome including this highly-weighted concern about animal suffering. This is the very definition of effective altruism. (Or at least of attempted effective altruism. Any of them might be being incompetent. But we don’t usually require competence before calling someone an EA.)
I don’t think a situation that extreme can really come up.
If some line of reasoning gives absurd results in such an extreme situation, then either there’s something wrong with the reasoning or there’s something about the extremeness of the situation that invalidates the reasoning even though it wouldn’t invalidate it in a less extreme situation. I don’t see that there’s any such thing in this case.
BUT
I do actually think there’s something wrong with Fluttershy’s example, or at least something that makes it more difficult to reason about than it needs to be, and that’s the way that the participants’ values and/or knowledge change. Specifically, at the start of the experiment Alice is eating meat even though (on reflection + persuasion) she actually values preventing a month’s worth of animal suffering more than a month’s worth of meat-eating pleasure. Are we to assess the outcome on the basis of Alice’s final values, or her initial values (whatever they may have been)? I think different answers to this question yield different conclusions about whether something paradoxical is going on.
How much do they disvalue each other losing money? If they don’t care at all, then the given scenario would be better than nothing for all involved.
Perhaps there’s an intervention (buying a certain number of free-range eggs and selling them at the price of eggs from cage-raised hens, let’s say) which costs $20 and prevents as much suffering as Alice would prevent by becoming a vegetarian for exactly one month in the above example. In this case, Alice, Bob, and Carol would better achieve their goals by donating to an organization which buys free-range eggs and sells them to the public at the price of normal eggs than by investing in making Alice a vegetarian.
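To spell out the comparison (a minimal sketch; the $20 egg intervention is the hypothetical one described above, and the $30 is the total cost from the earlier toy model):

```python
# Two ways of preventing one "unit" of suffering (the amount Alice's
# vegetarian month would prevent).
cost_converting_alice = 5 + 10 + 15  # total incurred by Alice, Bob, and Carol
cost_egg_intervention = 20           # hypothetical free-range-egg substitution

print(f"Converting Alice: 1 unit prevented for ${cost_converting_alice}")
print(f"Egg intervention: 1 unit prevented for ${cost_egg_intervention}")
units_if_pooled = cost_converting_alice / cost_egg_intervention
print(f"The same ${cost_converting_alice}, pooled, would buy {units_if_pooled} units via the eggs")
# i.e. 1.5 units of equivalent suffering reduction instead of 1.
```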
In this way, having multiple people believe that they should be the sole individual to receive credit for causing a given intervention to be implemented can result in suboptimal outcomes.
If Alice knows Bob and Carol would otherwise donate to such an intervention, and she goes with that other plan you gave, then she’s responsible for the donations they failed to receive. I think it can generally be assumed that people will not donate their money wisely, so you don’t have to worry.