My response was that even if all of this were true, EA still provided a pool of people from which those who are strategic could draw and recruit.
This … doesn’t seem to be responding to your interlocutor’s argument?
The “anti-movement” argument is that solving alignment will require the development of a new ‘mental martial art’ of systematically correct reasoning, and that the social forces of growing a community impair our collective sanity and degrade the signal the core “rationalists” were originally trying to send.
Now, you might think that this story is false—that the growth of EA hasn’t made “rationality” worse, that we’re succeeding in raising the sanity waterline rather than selling out and being corrupted. But if so, you need to, like, argue that?
If I say, “Popularity is destroying our culture”, and you say, “No, it isn’t,” then that’s a crisp disagreement that we can potentially have a productive discussion about. If instead you say, “But being popular gives you a bigger pool of potential converts to your culture,” that would seem to be missing the point. What culture?
the development of a new ‘mental martial art’ of systematically correct reasoning
Unpopular opinion: Rationality is less about martial arts moves than about adopting an attitude of intellectual good faith and consistently valuing impartial truth-seeking above everything else that usually influences belief selection. Motivating people (including oneself) to adopt such an attitude can be tricky, but the attitude itself is simple. Inventing new techniques is good but not necessary.
I agree with this in some ways! I think the rationality community as it currently exists isn't what the world needs most: putting effort into being friendly and caring for each other, in ways that increase people's ability to discuss ideas without social risk, is IMO the core thing needed for humans to become more rational right now.
IMO, the techniques themselves are relatively easy to share once you have enough trust to talk about them; they merely require a lot of practice. But convincing large numbers of people that it's safe to think things through in public without weirding out their friends seems likely to require actually making it safe to think things through in public without weirding out their friends. I think that scaling a combined technical and crafted-culture solution for creating emotional safety to discuss what's true, one that results in many people putting regular effort into communicating friendliness toward strangers when disagreeing, would do a lot more for humanity's rationality than scaling discussion of specific techniques.
The problem as I see it right now is that this only works if it is massively scaled. I feel like I now see why CFAR got excited about circling: it seems you probably need emotional safety before you can discuss anything usefully. But I think circling was an interesting thing to learn from, not a general solution. I think we need to design an internet that creates emotional safety for most of its users.
Thoughts on this balance, other folks?
With finesse, it's possible to combine truth-seeking techniques with friendliness and empathy so that they work even when the person you're talking to doesn't know them. That's a good way to demonstrate the effectiveness of those techniques.
It's easiest to use such finesse at the individual level, but if you can identify general concepts that help you understand and create emotional safety for larger groups of people, you can scale it up. Values conversations require at least one of the parties involved to have an understanding of value-space, so they can recognize and show respect for how other people prioritize different values even as they introduce an alternative priority ordering. Building a vocabulary for understanding value-space, to enable productive values conversations at a global scale, is one of my latest projects.
To be honest, I'm not happy with my response here. There was also a second, simultaneous discussion about whether CEA was net positive, and even though I tried to simplify this into a single discussion, it seems that I accidentally mixed in part of that other discussion (the original draft title of this post was "EA vs. rationality").
Update: I've now edited this response out.
The “anti-movement” argument is that solving alignment will require the development of a new ‘mental martial art’ of systematically correct reasoning, and that the social forces of growing a community impair our collective sanity and degrade the signal the core “rationalists” were originally trying to send.
This may be an argument, but the one I've heard (and seen above) is something more like "It's better to focus our efforts on a small group of great people and give them large gains than on a larger group of people who each get small gains"… or something.
It's possible that both of these points are cruxes for many people who hold opposing views, but it does seem worth separating them.