Let’s say for simplicity that there’s only one other guy and he splits his donations $500/$500. If you prefer to donate $500/$500 rather than say $0/$1000, that means you like world #1, where charity A and charity B each get $1000, more than you like world #2, where charity A gets $500 and charity B gets $1500. Now let’s say the other guy reallocates to $0/$1000. If you stay at $500/$500, the end result is world #2. If you reallocate to $1000/$0, the end result is world #1. Since you prefer world #1 to world #2, you should prefer reallocating to staying. Or am I missing something?
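The arithmetic here is easy to check with a short sketch (hypothetical Python, just illustrating the two-donor example above, not part of the original argument):

```python
# Sketch of the two-charity example: only the totals matter.
def totals(my_split, other_split):
    """Sum each charity's receipts across the two donors."""
    return tuple(m + o for m, o in zip(my_split, other_split))

# World #1: both donors split $500/$500, so each charity gets $1000.
assert totals((500, 500), (500, 500)) == (1000, 1000)

# The other guy reallocates to $0/$1000.
# Staying at $500/$500 yields world #2 ($500/$1500)...
assert totals((500, 500), (0, 1000)) == (500, 1500)

# ...while reallocating to $1000/$0 restores world #1 ($1000/$1000).
assert totals((1000, 0), (0, 1000)) == (1000, 1000)
```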
OK, so “preferences over the total amounts of money donated to each charity” mean that you ignore any information you can glean from knowing that “the other guy reallocates to $0/$1000”, right? Like betting against the market by periodically re-balancing your portfolio mix? Or donating to a less-successful political party when the balance of power shifts away from your liking? If so, how does it imply that “your participation in politics should be 100% extremist”?
you ignore any information you can glean from knowing that “the other guy reallocates to $0/$1000”
Good point, but if your utility function over possible worlds is allowed to depend on the total sums donated to each charity and additionally on some aggregate information about other people’s decisions (“the market” or “balance of power”), I think the argument still goes through, as long as the number of people is large enough that your aggregate information can’t be perceptibly influenced by a single person’s decision.
I think the argument still goes through, as long as the number of people is large enough
This sounds suspiciously like trying to defend your existing position in the face of a new argument, rather than an honest attempt at evaluating the new evidence from scratch. And we haven’t gotten to your conclusions about politics yet.
The original argument also relied on the number of people being large enough, I think.