Approaching rationality via a slippery slope
I have been thinking recently about how to get other people to do the things I want them to do, because it seems like overwhelmingly the best way to get those things done.
I typically encounter two difficulties off the bat:
1. Convincing people they should care about the future of humanity.
2. Convincing people that they should spend time thinking about how to act strategically on that preference.
One interesting observation is that people typically reject one or the other of these, but not both. Some people agree that if they wanted to help other people they should do rather extreme and unusual things, but claim not to care about other people. Some people agree that helping other people is very important, but think that typical philanthropic activity is already reasonably effective (so effective that it’s not worth spending much time to optimize further). I think this is a consistent quirk of human nature: people encounter a conclusion they don’t like and a bunch of premises that seem reasonable, and they pick a single premise to reject more or less arbitrarily, which means they can be steered into accepting whichever of the others you care about. This is probably a useful thing to make a mental note of and try to exploit productively.
That observation aside, I think the easiest plan is to talk to smart people who already care deeply about other humans, but don’t think too hard about how to act strategically on that preference. One approach is to very concisely present an idea which strongly suggests that five minutes of thought about strategic behavior is warranted. With those five minutes, maybe an idea can be communicated which implies that an hour of such thought is warranted. With an hour, maybe more progress can be made. And so on.
I would like to brainstorm ideas with a good ratio of impact to cost: the amount of re-evaluation of priorities and strategies an idea inspires in someone who is unconsciously trying very hard to avoid changing their behavior, divided by the amount of time required to communicate it to such a listener (attentive, but still unconsciously resistant). Perhaps more important are thoughts about how to present these ideas efficiently. A lot of what I am about to say is somewhat redundant with other content on LW: try to consider it particularly in the context of strong time limitations, talking to someone who is not fundamentally interested in becoming rational.
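To state the target more compactly (this is just my own formalization of the ratio above, with made-up symbols): if an idea $i$ takes time $t(i)$ to communicate to such a listener and inspires an expected amount of re-evaluation $\mathbb{E}[R(i)]$, I want ideas that maximize

$$\mathrm{leverage}(i) = \frac{\mathbb{E}[R(i)]}{t(i)}.$$

The slippery-slope plan is then just: spend the re-evaluation bought by one idea as the time budget $t$ for the next.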
1. Rational philanthropy. Present two or more activities many people engage in (donating to different charitable organizations, donating vs. volunteering, solving the same social problem in different ways), together with a concise and maximally incontrovertible argument that one of those activities is significantly better than the other, and that anyone who does the other is wasting their energy. Suggest that the collective behavior of society is a horrible marker for what is effective. Suggest that to the extent that socially typical activities are optimally useful, it is coincidental. Suggest that if some people do very little good in the world because they don’t think enough, we may also do very little good (compared to our potential) because we don’t think enough.
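As a purely hypothetical illustration of the kind of comparison I have in mind (the numbers are invented for the example, not taken from any study): if intervention A averts a death for $c_A = \$500$ and intervention B for $c_B = \$50{,}000$, then a fixed budget $X$ spent on A does

$$\frac{X/c_A}{X/c_B} = \frac{c_B}{c_A} = 100$$

times as much good as the same budget spent on B, so the choice between activities can matter far more than the size of the donation.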
2. Value of technological progress. Present a plausible deductive argument that either (A) the value of increasing the speed of technological progress is immensely higher than the value of doing traditional philanthropic work, or (B) the value of controlling the direction of technological progress is immensely higher than the value of doing traditional philanthropic work. Suggest that careful consideration of the argument, however it turns out, could cause a complete change in priorities. Suggest further that, if the argument fails and the listener doesn’t have to change their behavior, it’s not because the listener has considered all the possibilities at length and chosen the best one. Indeed, the majority of people engaged in philanthropic work haven’t; to the extent that their behavior is maximally effective, it is coincidental. The potential difference between “maximally effective” and the default is incredibly large.
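One way to display the shape of such an argument without committing to particular numbers (this is my own schematic version, not a worked-out case): let $n$ be the number of people traditional philanthropy reaches and $\Delta$ the benefit per person, and let $N$ be the number of future people whose lives depend on the trajectory of technology and $\delta$ the per-person effect of nudging that trajectory. The comparison is then

$$N\,\delta \quad \text{vs.} \quad n\,\Delta,$$

and if $N/n$ is astronomically large, even a tiny $\delta$ dominates unless some premise fails; checking the premises is exactly the work the listener has not done.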
2.5. Any other extremely important consideration which you can plausibly argue might change someone’s priorities completely. Item 2 is the best one I could think of, but having more seems good. Even if the listener ultimately rejects the argument, if you can maintain even for a couple of minutes the possibility that this idea, which they haven’t even given a minute’s thought, could change their views, then you might be able to get some leverage out of it.
3. Scope insensitivity. Point out that scope insensitivity exists and can do ridiculous things to people’s judgments, using the most quickly convincing empirical evidence available. So far the best I have heard of (probably via EY, though I don’t remember) is a study about saving wildlife, whose details I should know by heart if there is no more effective study (and whose methodological soundness I should briefly confirm before using it in such an argument). Point out that a billion people is a lot, and it’s unlikely that human intuition ever comes up with the right answer on questions that concern a billion people (much less larger numbers).
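For reference, the study usually cited in this context (I believe it is the one EY uses in his post on scope insensitivity) is Desvousges et al.’s contingent-valuation experiment, in which groups of subjects reported willingness to pay about \$80, \$78, and \$88 to save 2,000, 20,000, and 200,000 migrating birds respectively. If those figures are right, the implied per-bird valuation collapses as the stated scope grows:

$$\frac{\$80}{2{,}000} \approx \$0.04 \qquad \text{vs.} \qquad \frac{\$88}{200{,}000} \approx \$0.00044,$$

a roughly hundredfold drop in valuation against a hundredfold increase in scope.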
4. Society is generally insane. Offer any evidence that suggests that, especially when a market isn’t filtering out the bad ideas, social consensus can fail catastrophically. Suggest that any beliefs derived from social consensus should be questioned, and that even very good ideas don’t actually propagate through society that quickly when they don’t help people make money.
These arguments are all towards the quick end of the spectrum, because that’s what I think about most right now. The hope is that if you can win a little more actual introspection from a listener, you can make more progress. The later stages are better covered by existing material on LW, but I think it is worth looking at exactly what you would do with, say, thirty minutes of introspection. I feel like I have less to contribute here, so I will just mention some things briefly.
5. Examples of common biases. This is a harder argument to make than LW collectively seems to think. It is not obvious to me that my hindsight bias is a big deal: you need to really convince me that the quality of my reasoning is important, and that a particular bias affects it in a significant way.
6. Evidence that people rarely change their minds. This seems like the most important one, but it is again a fairly hard argument to make. You need to actually convince the listener that they personally should be changing their mind more often than they do, and that they are suffering significantly (or are significantly less effective at helping others) as a result.
7. The importance of thinking about the value of future humans. This is also difficult, because discussions along these lines seem invariably to get pulled into the category of “philosophical garbage” (about which people are happy to talk at great length without thinking very much or updating any beliefs) rather than discussions about what to actually do with yourself tomorrow. But thinking explicitly about the value of the future is not too far-fetched for most people to stomach, and realizing how important this question is (whatever the answer turns out to be) may suggest how important other previously unconsidered questions could be.
These ideas are towards the middle of the range. I expect that, carefully constructed, they could get you some mileage in the course of a single conversation (starting from some easier topics). Here are what I imagine as the last and hardest things to sell, though I can’t say anything about how to sell them.
8. Let your thoughts about what you should do control what you actually do. Seriously, when you walk away from this, let it have some effect on your behavior.
9. You have to make decisions in the face of incredible uncertainty. You don’t get to abstain from having beliefs just because thinking about the future is hard. Learn to think precisely about uncertainty, and then do it. Don’t reject a belief just because it sounds ridiculous.