Reposting a comment I made on Yvain’s livejournal:
There’s a standard argument about “efficient charity” that says you should concentrate all your donations on one charity, because presumably you have preferences over the total amounts of money donated to each charity (not just your own donations), so choosing something like a 50⁄50 split would be too sensitive to other people’s donations.
I just realized that the argument applies with equal force to politics. If you’re not using “beliefs as attire” but actually care about politics, your participation in politics should be 100% extremist. That’s troubling.
You might be an extreme centrist. Or an extreme pragmatist. Not all extremists are the “take some idea to its (il)logical conclusion and start blowing things up” type.
I believe the point is that while your personal beliefs may lie at any point in some high-dimensional space, if you’re getting involved in politics in some anonymous way you should throw all your support behind the single “best” group, even if, like in two-party politics, that means supporting a group you have significant differences with. Non-anonymity (nonymity) changes things, leading to behavior like lobbying multiple parties.
I don’t really find it that disturbing, but it does get a little weird when you remember how bad humans are at separating acts from mental states.
The impact of charitable donations is, at least at the scale at which most people can give, directly proportional to the size of the donation. It’s not at all clear, however, that extremist participation in politics produces a greater impact in the desired direction than casual participation.
I think that in some cases, it probably does, whereas in others it does not.
It probably depends on the decision process you’re trying to influence:
If you’re voting for a candidate, you don’t have any incentive to vote in a way more extreme than your preferences. With more than two candidates you can get strategic voting, which often pushes the opposite way, i.e. voting for a candidate you like less but who has a better chance of winning.
If a bureaucrat is trying to maximize utility by examining people’s stated preferences, then you can have an incentive to claim extreme preferences for the reasons Yvain gives.
Informal discussions of what social norms should be look more like the second case.
Elected politicians have to deal with both systems: on one hand they want to take a moderate position to attract the maximum number of voters (median voter theorem, etc.); on the other hand, once elected, they have an incentive to claim to be more extreme when negotiating in their constituents’ interest.
Could you spell out what you mean by extremist, and how the analogous argument goes?
If there are three candidates, then yes, you should give all your support to one candidate, even if you hate one and don’t distinguish between the other two.
But that hardly makes you an extremist. I don’t see any reason this kind of argument says you should support the same party in every election, or for every seat in a particular election, or that you should support that party’s position on every issue. Even if you are an extremist and, say, want to pull the country leftward on all issues, it’s not obvious whether equal amounts of support (say, money) to a small far-left party will be more effective than to a center-left party. Similarly, if your participation in politics is conversation with people, it’s not obvious that always arguing left-wing positions is the most effective way to draw people to the left. Demonstrating a willingness to compromise and consider details may make you more convincing. In fact, I do think the main power individuals have in arguing about politics is to shift the Overton window; but that is a completely different mechanism from the charity argument.
…and then I looked up your comment on LJ, and the comment it replies to, and I strongly disagree with it. This has nothing to do with the charity argument. Whether that argument is correct is a different matter. I think the Overton window is a different phenomenon. The argument for taking extreme positions to negotiate compromises applies better to politicians than to ordinary people. But politicians’ actions are not marginal, so this is clearly different from the charity argument.
I agree with everything in your comment. “Extremist” was a bad choice of word, maybe “single-minded” would be better. What I meant was, for example, if success at convincing people on any given political issue is linearly proportional to effort, you should spend all your effort arguing just one issue. More generally, if we look at all the causes in the world where the resulting utility to you depends on aggregated actions of many people and doesn’t include a term for your personal contribution, the argument says you should support only one such cause.
But this isn’t at all likely. For one thing you probably have a limited number of family and friends who highly trust your opinions, so your effectiveness (i.e., derivative of success) at convincing people on any given political issue will start out high and quickly take a dive as you spend more time on that issue.
I’m inclined to agree. A variant of the strategy would be to spend a lot of time arguing for other positions that are carefully selected to agree with and expand eloquently on the predicted opinions of the persuasion targets.
Yes, that is the charity argument. Yes, you should not give money both to a local candidate and to a national candidate simultaneously.
But the political environment changes so much from election to election, it is not clear you should give money to the same candidate or the same single-issue group every cycle.
Moreover, the personal environment changes much more rapidly, and I do not agree with the hypothesis that success at convincing people depends linearly on effort. In particular, changing the subject to the more important issue is rarely worth the opportunity cost and may well have the wrong effect on opinion. If effort toward the less important issue is going to wear out your ability to exert effort for the more important issue an hour from now, then effort may be somewhat fungible. But effort is nowhere near as fungible as money, the topic of the charity argument.
The value of information about which political side is more marginally valuable makes unbiased discussion a cause that’s potentially more valuable than advocacy for any side, and charities compete on the same scene. So the rule is not “focus on a single element of each class of activities”; the choice isn’t limited to any given class of activities. Applied to politics, the rule can only be stated as: “If advocacy of political positions is the most marginally valuable thing you can do, focus on a single side.”
Yeah, I agree. I wonder how many people would subscribe to the rule in full generality.
If this argument were universal, it would be rational to invest in a single stock, and the saying about putting all your eggs in one basket would not exist.
Sorry, can you explain why it also applies to investing? For reference, here’s an expanded version of the argument.
Say you have decided to donate $500 to charity A and $500 to charity B. Then you learn that someone else has decided to reallocate their $500 from charity A to charity B. If you’re a consequentialist and have preferences over the total donations to each charity, rather than the warm fuzzies you get from splitting your own donations 50⁄50, you will reallocate $500 to charity A. Note that the conclusion doesn’t depend on your risk aversion, only on the fact that you considered your original decision optimal before you learned the new info. That means your original decision for the 50⁄50 split relied on an implausible coincidence and was very sensitive to other people’s reallocations in both directions, so in most cases you should allocate all your money to one charity, as long as your donations aren’t too large compared to the donations of other people.
Sorry, I must be misunderstanding the argument. Why would you shift your donations from B to A if someone else donates to B?
Let’s say for simplicity that there’s only one other guy and he splits his donations $500/$500. If you prefer to donate $500/$500 rather than say $0/$1000, that means you like world #1, where charity A and charity B each get $1000, more than you like world #2, where charity A gets $500 and charity B gets $1500. Now let’s say the other guy reallocates to $0/$1000. If you stay at $500/$500, the end result is world #2. If you reallocate to $1000/$0, the end result is world #1. Since you prefer world #1 to world #2, you should prefer reallocating to staying. Or am I missing something?
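The two-world comparison above can be sketched numerically. A minimal model, assuming a concave utility over the charities’ totals (square root is my illustrative choice, not something the thread specifies) and a $1000 budget per donor:

```python
import math

def best_allocation(other_a, other_b, budget=1000, step=100):
    """Your utility-maximizing split, given the other donor's allocation.

    Utility depends only on the *total* donations to each charity;
    sqrt is an illustrative concave choice, not from the thread.
    """
    def utility(x):  # x = your dollars to charity A
        return math.sqrt(other_a + x) + math.sqrt(other_b + budget - x)
    return max(range(0, budget + 1, step), key=utility)

# When the other donor splits 500/500, a 50/50 split happens to be optimal:
best_allocation(500, 500)   # -> 500
# The moment he reallocates to 0/1000, your optimum jumps all the way to A:
best_allocation(0, 1000)    # -> 1000
```

The 50⁄50 optimum exists only at the knife-edge point where the marginal utilities coincide; any reallocation by others pushes your best response toward a corner, which is exactly the sensitivity the argument turns on.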
OK, so “preferences over the total amounts of money donated to each charity” mean that you ignore any information you can glean from knowing that “the other guy reallocates to $0/$1000”, right? Like betting against the market by periodically re-balancing your portfolio mix? Or donating to a less-successful political party when the balance of power shifts away from your liking? If so, how does it imply that “your participation in politics should be 100% extremist”?
Good point, but if your utility function over possible worlds is allowed to depend on the total sums donated to each charity and additionally on some aggregate information about other people’s decisions (“the market” or “balance of power”), I think the argument still goes through, as long as the number of people is large enough that your aggregate information can’t be perceptibly influenced by a single person’s decision.
This sounds suspiciously like trying to defend your existing position in the face of a new argument, rather than an honest attempt at evaluating the new evidence from scratch. And we haven’t gotten to your conclusions about politics yet.
The original argument also relied on the number of people being large enough, I think.
The key difference is risk aversion. People are (quite rightly in my opinion) very risk-averse with their own money, almost nobody would be happy to trade all their possessions for a 51% shot at twice as much, mostly because doubling your possessions doesn’t improve your life as much as losing them worsens it.
On the other hand, with altruistic causes, helping two people really does do exactly twice as much good as helping one, so there is no reason to be risk averse, and you should put all your resources on the bet with the highest expected pay-off, regardless of the possibility that it might all amount to nothing if you don’t get lucky.
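A minimal sketch of that claim, with made-up numbers: when utility is linear in lives saved, expected utility is maximized at a corner, i.e. the whole budget goes to the single bet with the highest expected payoff, however risky.

```python
# (success probability, lives saved per dollar on success) -- illustrative numbers
CAUSES = {"risky": (0.5, 0.012), "safe": (1.0, 0.005)}

def expected_lives(alloc):
    """Expected lives saved; linear in each donation, so no risk aversion."""
    return sum(p * rate * alloc.get(name, 0)
               for name, (p, rate) in CAUSES.items())

budget = 1000
splits = [{"risky": x, "safe": budget - x} for x in range(0, budget + 1, 100)]
best = max(splits, key=expected_lives)
# best puts everything on "risky" (expected 0.006 lives/$ beats 0.005),
# even though half the time the donation amounts to nothing.
```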
Right, and there is risk in everything. A charity might fold, or end up consisting of crooks, or its cause might actually be harmful, or the estimate of buying malaria nets being more useful than supporting SI might turn out to be wrong. Hence the diversification.
Politics is even worse: you can never be sure which policy is better, or whether, when carried to its extreme, it will turn out to be harmful.
This is where cousin_it’s naive argument for radicalization falls flat.
This doesn’t matter. Whether you should be risk-averse doesn’t depend on how much risk there is; it depends on whether your pay-offs suffer diminishing returns. This is a mathematical equivalence (if your pay-offs have accelerating returns, you should be risk-seeking).
I think you don’t understand risk aversion. Consider a simple toy problem: investment A has a 90% chance of doubling your money and a 10% chance of losing all of it; investment B has a 90% chance of multiplying your money by one and a half and a 10% chance of losing all of it. Suppose you have $100,000, enough for a comfortable lifestyle. If you invest it all in A, you have a 90% chance of a much more comfortable lifestyle, but a 10% chance of being out on the street, which is pretty bad. Investing equal amounts in both reduces your average wealth from $180,000 to $157,500, but increases your chance of having enough money to live on from 90% to 99%, which is more important.
If they are instead charities, and we substitute a $1,000 return for 1 life saved, then diversifying just reduces the number of people you save. It also increases your chance of saving someone, but that doesn’t really matter compared to saving more people in the average case.
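The arithmetic in the toy problem can be checked directly. This sketch reads the second investment as multiplying your money by one and a half (the reading that reproduces the $157,500 average) and assumes the two investments fail independently:

```python
def outcomes(stake_a, stake_b):
    """Yield (probability, final wealth) over the four joint outcomes.

    A doubles with probability 0.9, B multiplies by 1.5 with probability
    0.9, and each loses everything otherwise; independence is assumed.
    """
    for a_wins in (True, False):
        for b_wins in (True, False):
            p = (0.9 if a_wins else 0.1) * (0.9 if b_wins else 0.1)
            wealth = (2.0 * stake_a if a_wins else 0.0) \
                   + (1.5 * stake_b if b_wins else 0.0)
            yield p, wealth

def expected_wealth(sa, sb):
    return sum(p * w for p, w in outcomes(sa, sb))

def prob_not_broke(sa, sb):
    return sum(p for p, w in outcomes(sa, sb) if w > 0)

expected_wealth(100_000, 0)       # 180000.0
expected_wealth(50_000, 50_000)   # 157500.0
prob_not_broke(100_000, 0)        # 0.9
prob_not_broke(50_000, 50_000)    # 0.99
```

Diversifying trades $22,500 of average wealth for cutting the chance of ruin from 10% to 1%, which is exactly the trade a risk-averse agent wants and a linear (lives-saved) utility does not.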
Look at it this way: in personal wealth, the difference between some money and no money is huge, while the difference between some money and twice as much money is vastly less significant. In charity, the difference between some lives saved and twice as many lives saved is exactly as significant as the difference between some lives saved and no lives saved.
I’m not explaining this very well, because I’m a crap explainer, so here’s the relevant Wikipedia page.
Note that even wealthy people who are not in danger of living on the street diversify, lest they lose a large chunk of their investment. Similarly, if you assign a large disutility to a non-optimal charity, your utility losses from a failed one will not be in any way compensated by your other charities performing well. Again, in politics, which is the real question (charity is just an unfortunate analogy), the stakes are even higher, so picking an extreme position is even less justified.
I’m not talking about the politics case; there are other problems with cousin_it’s argument. I’m arguing with your ‘refutation’ of the non-diversifying principle.
They may not be in danger of homelessness, but there are still diminishing returns. The difference between $1m and $2m is more important than the difference between $2m and $3m. Notice the operative word ‘large’ in your sentence. If those guys were just betting amounts on the scale of $10, sufficiently small that the curve becomes basically linear, then they wouldn’t diversify (if they were smart).
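That local-linearity point can be made concrete with log utility (a standard risk-averse choice; my assumption, not the commenter’s): the cost of risk in a 50/50 bet is negligible when the stake is tiny relative to wealth, and enormous when it isn’t.

```python
import math

def risk_premium(wealth, stake):
    """Certainty equivalent minus current wealth, for a 50/50 bet that
    wins or loses `stake`, under log utility. Algebraically this equals
    sqrt(wealth**2 - stake**2) - wealth, which is always <= 0."""
    expected_u = 0.5 * math.log(wealth + stake) + 0.5 * math.log(wealth - stake)
    return math.exp(expected_u) - wealth

# A $10 bet against $1,000,000 of wealth: the utility curve is locally
# linear, so the risk cost is microscopic and diversifying buys ~nothing.
risk_premium(1_000_000, 10)        # about -5e-05
# A $500,000 bet: the curvature bites, and diversification is worth a lot.
risk_premium(1_000_000, 500_000)   # about -133975
```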
The situation with charity is somewhat similar: your donation is as small, on the scale of the whole problem being fixed and the whole amount being donated, as $10 is for a rich investment banker. The diminishing returns that do exist have no effect at the scale of individuals.
Politics, if you insist on talking about it, is the same: your personal influence is far too small to have any effect on the marginal utilities.
Yes, if you donate to make yourself feel good (as opposed to helping people) and having all your money go to waste makes you feel exceptionally bad, then you should diversify. If you donate to help people, then you shouldn’t assign an exceptionally large disutility to a non-optimal charity; you should assign utility precisely proportional to the number of lives you can save.